From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753753AbcIIRXY (ORCPT ); Fri, 9 Sep 2016 13:23:24 -0400
Received: from mail.skyhub.de ([78.46.96.112]:48215 "EHLO mail.skyhub.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1750938AbcIIRXV (ORCPT ); Fri, 9 Sep 2016 13:23:21 -0400
Date: Fri, 9 Sep 2016 19:23:15 +0200
From: Borislav Petkov
To: Tom Lendacky
Cc: linux-arch@vger.kernel.org, linux-efi@vger.kernel.org, kvm@vger.kernel.org,
	linux-doc@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org,
	kasan-dev@googlegroups.com, linux-mm@kvack.org,
	iommu@lists.linux-foundation.org, Radim Krčmář, Arnd Bergmann,
	Jonathan Corbet, Matt Fleming, Joerg Roedel, Konrad Rzeszutek Wilk,
	Andrey Ryabinin, Ingo Molnar, Andy Lutomirski, "H. Peter Anvin",
	Paolo Bonzini, Alexander Potapenko, Thomas Gleixner, Dmitry Vyukov
Subject: Re: [RFC PATCH v2 12/20] x86: Add support for changing memory encryption attribute
Message-ID: <20160909172314.ifcteua7nr52mzgs@pd.tnic>
References: <20160822223529.29880.50884.stgit@tlendack-t1.amdoffice.net>
	<20160822223749.29880.10183.stgit@tlendack-t1.amdoffice.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20160822223749.29880.10183.stgit@tlendack-t1.amdoffice.net>
User-Agent: NeoMutt/ (1.7.0)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Aug 22, 2016 at 05:37:49PM -0500, Tom Lendacky wrote:
> This patch adds support to be change the memory encryption attribute for
> one or more memory pages.
>
> Signed-off-by: Tom Lendacky
> ---
>  arch/x86/include/asm/cacheflush.h  |    3 +
>  arch/x86/include/asm/mem_encrypt.h |   13 ++++++
>  arch/x86/mm/mem_encrypt.c          |   43 +++++++++++++++++++++
>  arch/x86/mm/pageattr.c             |   75 ++++++++++++++++++++++++++++++++++++
>  4 files changed, 134 insertions(+)

...

> diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
> index 72c292d..0ba9382 100644
> --- a/arch/x86/mm/pageattr.c
> +++ b/arch/x86/mm/pageattr.c
> @@ -1728,6 +1728,81 @@ int set_memory_4k(unsigned long addr, int numpages)
>  				__pgprot(0), 1, 0, NULL);
>  }
>
> +static int __set_memory_enc_dec(struct cpa_data *cpa)
> +{
> +	unsigned long addr;
> +	int numpages;
> +	int ret;
> +
> +	if (*cpa->vaddr & ~PAGE_MASK) {
> +		*cpa->vaddr &= PAGE_MASK;
> +
> +		/* People should not be passing in unaligned addresses */
> +		WARN_ON_ONCE(1);

Let's make this more user-friendly:

	if (WARN_ONCE(*cpa->vaddr & ~PAGE_MASK,
		      "Misaligned address: 0x%lx\n", *cpa->vaddr))
		*cpa->vaddr &= PAGE_MASK;

> +	}
> +
> +	addr = *cpa->vaddr;
> +	numpages = cpa->numpages;
> +
> +	/* Must avoid aliasing mappings in the highmem code */
> +	kmap_flush_unused();
> +	vm_unmap_aliases();
> +
> +	ret = __change_page_attr_set_clr(cpa, 1);
> +
> +	/* Check whether we really changed something */
> +	if (!(cpa->flags & CPA_FLUSHTLB))
> +		goto out;
> +
> +	/*
> +	 * On success we use CLFLUSH, when the CPU supports it to
> +	 * avoid the WBINVD.
> +	 */
> +	if (!ret && static_cpu_has(X86_FEATURE_CLFLUSH))
> +		cpa_flush_range(addr, numpages, 1);
> +	else
> +		cpa_flush_all(1);

So if we fail (ret != 0) we do WBINVD unconditionally even if we don't
have to?
Don't you want this instead:

	ret = __change_page_attr_set_clr(cpa, 1);
	if (ret)
		goto out;

	/* Check whether we really changed something */
	if (!(cpa->flags & CPA_FLUSHTLB))
		goto out;

	/*
	 * On success we use CLFLUSH, when the CPU supports it to
	 * avoid the WBINVD.
	 */
	if (static_cpu_has(X86_FEATURE_CLFLUSH))
		cpa_flush_range(addr, numpages, 1);
	else
		cpa_flush_all(1);

out:
	return ret;
}

?

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.
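For reference, here is the whole helper with both of the above suggestions
folded in. This is only a sketch against the RFC v2 hunk quoted above:
struct cpa_data, __change_page_attr_set_clr(), cpa_flush_range() and
cpa_flush_all() are the existing arch/x86/mm/pageattr.c internals, and the
caller (not shown in the quoted hunk) is assumed to have filled in ->vaddr,
->numpages and the set/clear encryption masks before calling it:

static int __set_memory_enc_dec(struct cpa_data *cpa)
{
	unsigned long addr;
	int numpages;
	int ret;

	/* People should not be passing in unaligned addresses */
	if (WARN_ONCE(*cpa->vaddr & ~PAGE_MASK,
		      "Misaligned address: 0x%lx\n", *cpa->vaddr))
		*cpa->vaddr &= PAGE_MASK;

	addr = *cpa->vaddr;
	numpages = cpa->numpages;

	/* Must avoid aliasing mappings in the highmem code */
	kmap_flush_unused();
	vm_unmap_aliases();

	/* If the attribute change failed, skip the cache flushing entirely */
	ret = __change_page_attr_set_clr(cpa, 1);
	if (ret)
		goto out;

	/* Check whether we really changed something */
	if (!(cpa->flags & CPA_FLUSHTLB))
		goto out;

	/*
	 * On success we use CLFLUSH, when the CPU supports it, to
	 * avoid the WBINVD.
	 */
	if (static_cpu_has(X86_FEATURE_CLFLUSH))
		cpa_flush_range(addr, numpages, 1);
	else
		cpa_flush_all(1);

out:
	return ret;
}

With the error check done before the flush, a failed attribute change skips
both the CLFLUSH loop and the much more expensive WBINVD fallback, which is
the point of the question above.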