From mboxrd@z Thu Jan  1 00:00:00 1970
Subject: Re: [RFC PATCH v2 12/20] x86: Add support for changing memory encryption attribute
From: Tom Lendacky
To: Borislav Petkov
Cc: linux-arch@vger.kernel.org, linux-efi@vger.kernel.org, kvm@vger.kernel.org,
	linux-doc@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org,
	kasan-dev@googlegroups.com, linux-mm@kvack.org,
	iommu@lists.linux-foundation.org,
	Radim Krčmář, Arnd Bergmann, Jonathan Corbet, Matt Fleming,
	Joerg Roedel, Konrad Rzeszutek Wilk, Andrey Ryabinin, Ingo Molnar,
	Andy Lutomirski, "H. Peter Anvin", Paolo Bonzini,
	Alexander Potapenko, Thomas Gleixner, Dmitry Vyukov
Date: Mon, 12 Sep 2016 10:41:29 -0500
Message-ID: <4e423d15-7fe2-450a-05dd-1674bd281124@amd.com>
References: <20160822223529.29880.50884.stgit@tlendack-t1.amdoffice.net>
 <20160822223749.29880.10183.stgit@tlendack-t1.amdoffice.net>
 <20160909172314.ifcteua7nr52mzgs@pd.tnic>
In-Reply-To: <20160909172314.ifcteua7nr52mzgs@pd.tnic>
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.2.0
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

On 09/09/2016 12:23 PM, Borislav Petkov wrote:
> On Mon, Aug 22, 2016 at 05:37:49PM -0500, Tom Lendacky wrote:
>> This patch adds support to change the memory encryption attribute for
>> one or more memory pages.
>>
>> Signed-off-by: Tom Lendacky
>> ---
>>  arch/x86/include/asm/cacheflush.h  |  3 +
>>  arch/x86/include/asm/mem_encrypt.h | 13 ++++++
>>  arch/x86/mm/mem_encrypt.c          | 43 +++++++++++++++++++++
>>  arch/x86/mm/pageattr.c             | 75 ++++++++++++++++++++++++++++++++++++
>>  4 files changed, 134 insertions(+)
>
> ...
>
>> diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
>> index 72c292d..0ba9382 100644
>> --- a/arch/x86/mm/pageattr.c
>> +++ b/arch/x86/mm/pageattr.c
>> @@ -1728,6 +1728,81 @@ int set_memory_4k(unsigned long addr, int numpages)
>>  					__pgprot(0), 1, 0, NULL);
>>  }
>>
>> +static int __set_memory_enc_dec(struct cpa_data *cpa)
>> +{
>> +	unsigned long addr;
>> +	int numpages;
>> +	int ret;
>> +
>> +	if (*cpa->vaddr & ~PAGE_MASK) {
>> +		*cpa->vaddr &= PAGE_MASK;
>> +
>> +		/* People should not be passing in unaligned addresses */
>> +		WARN_ON_ONCE(1);
>
> Let's make this more user-friendly:
>
> 	if (WARN_ONCE(*cpa->vaddr & ~PAGE_MASK, "Misaligned address: 0x%lx\n", *cpa->vaddr))
> 		*cpa->vaddr &= PAGE_MASK;

Will do.
>
>> +	}
>> +
>> +	addr = *cpa->vaddr;
>> +	numpages = cpa->numpages;
>> +
>> +	/* Must avoid aliasing mappings in the highmem code */
>> +	kmap_flush_unused();
>> +	vm_unmap_aliases();
>> +
>> +	ret = __change_page_attr_set_clr(cpa, 1);
>> +
>> +	/* Check whether we really changed something */
>> +	if (!(cpa->flags & CPA_FLUSHTLB))
>> +		goto out;
>> +
>> +	/*
>> +	 * On success we use CLFLUSH, when the CPU supports it to
>> +	 * avoid the WBINVD.
>> +	 */
>> +	if (!ret && static_cpu_has(X86_FEATURE_CLFLUSH))
>> +		cpa_flush_range(addr, numpages, 1);
>> +	else
>> +		cpa_flush_all(1);
>
> So if we fail (ret != 0) we do WBINVD unconditionally even if we don't
> have to?

Looking at __change_page_attr_set_clr(), isn't it possible for some of
the pages to be changed before an error is encountered, since it is
looping? If so, we may still need to flush. The CPA_FLUSHTLB flag should
take care of a failing case where no attributes have actually been
changed.

Thanks,
Tom

>
> Don't you want this instead:
>
> 	ret = __change_page_attr_set_clr(cpa, 1);
> 	if (ret)
> 		goto out;
>
> 	/* Check whether we really changed something */
> 	if (!(cpa->flags & CPA_FLUSHTLB))
> 		goto out;
>
> 	/*
> 	 * On success we use CLFLUSH, when the CPU supports it to
> 	 * avoid the WBINVD.
> 	 */
> 	if (static_cpu_has(X86_FEATURE_CLFLUSH))
> 		cpa_flush_range(addr, numpages, 1);
> 	else
> 		cpa_flush_all(1);
>
> out:
> 	return ret;
> }
>
> ?
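[For readers following along: the disagreement is purely about which flush runs when. Tom's point is that __change_page_attr_set_clr() loops, so an error return can still leave some pages changed, and CPA_FLUSHTLB (set only when something actually changed) is the right gate. A userspace model of that decision, with stand-in names for the flush outcomes:]

```c
#include <stdbool.h>

/* Models the flush choice in __set_memory_enc_dec() as posted.
 * CPA_FLUSHTLB and the three outcomes are stand-ins; illustrative only. */
#define CPA_FLUSHTLB 0x1

enum flush { FLUSH_NONE, FLUSH_RANGE, FLUSH_ALL };

static enum flush flush_choice(int ret, int cpa_flags, bool has_clflush)
{
	/* nothing was actually changed -> no flush needed at all */
	if (!(cpa_flags & CPA_FLUSHTLB))
		return FLUSH_NONE;

	/* full success with CLFLUSH available -> flush just the range */
	if (!ret && has_clflush)
		return FLUSH_RANGE;

	/* partial failure, or no CLFLUSH -> WBINVD everything */
	return FLUSH_ALL;
}
```

Note how a partial failure (ret != 0 but CPA_FLUSHTLB set) still flushes, which is exactly the case Boris's `if (ret) goto out;` variant would skip.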