From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
        id S1752606AbdIBDWR (ORCPT ); Fri, 1 Sep 2017 23:22:17 -0400
Received: from mail.kernel.org ([198.145.29.99]:51314 "EHLO mail.kernel.org"
        rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
        id S1750941AbdIBDWO (ORCPT ); Fri, 1 Sep 2017 23:22:14 -0400
DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org F0D0C21BBA
Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none)
        header.from=kernel.org
Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=luto@kernel.org
X-Google-Smtp-Source: ADKCNb5u5YIT40VEVRLvJUcfupTUd5gkbSrYXfopF2YAIgZWl0Tcuidx13yqTZaxN66SHpNOJWRYZCBtRCF2NjSmSfg=
MIME-Version: 1.0
In-Reply-To: <8155b5b2-b2b3-bc8f-33ae-b81b661a2e38@amd.com>
References: <20170724190757.11278-1-brijesh.singh@amd.com>
        <20170724190757.11278-17-brijesh.singh@amd.com>
        <20170829102258.gxk227js4yw47qi3@pd.tnic>
        <0810a732-9c77-a543-ffeb-7fd2d8f46266@amd.com>
        <20170830174655.ehrnmynotmp7laka@pd.tnic>
        <8155b5b2-b2b3-bc8f-33ae-b81b661a2e38@amd.com>
From: Andy Lutomirski
Date: Fri, 1 Sep 2017 20:21:52 -0700
X-Gmail-Original-Message-ID:
Message-ID:
Subject: Re: [RFC Part1 PATCH v3 16/17] X86/KVM: Provide support to create
        Guest and HV shared per-CPU variables
To: Brijesh Singh
Cc: Borislav Petkov, "linux-kernel@vger.kernel.org", X86 ML,
        "linux-efi@vger.kernel.org", linuxppc-dev, kvm list,
        Thomas Gleixner, Ingo Molnar, "H . Peter Anvin", Andy Lutomirski,
        Tony Luck, Piotr Luc, Tom Lendacky, Fenghua Yu, Lu Baolu,
        Reza Arbab, David Howells, Matt Fleming, "Kirill A . Shutemov",
        Laura Abbott, Ard Biesheuvel, Andrew Morton, Eric Biederman,
        Benjamin Herrenschmidt, Paul Mackerras, Konrad Rzeszutek Wilk,
        Jonathan Corbet, Dave Airlie, Kees Cook, Paolo Bonzini,
        =?UTF-8?B?UmFkaW0gS3LEjW3DocWZ?=, Arnd Bergmann, Tejun Heo,
        Christoph Lameter
Content-Type: text/plain; charset="UTF-8"
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Sep 1, 2017 at 3:52 PM, Brijesh Singh wrote:
> Hi Boris,
>
> On 08/30/2017 12:46 PM, Borislav Petkov wrote:
>>
>> On Wed, Aug 30, 2017 at 11:18:42AM -0500, Brijesh Singh wrote:
>>>
>>> I was trying to avoid mixing early and no-early set_memory_decrypted()
>>> but if
>>> feedback is: use early_set_memory_decrypted() only if its required
>>> otherwise
>>> use set_memory_decrypted() then I can improve the logic in next rev.
>>> thanks
>>
>>
>> Yes, I think you should use the early versions when you're, well,
>> *early* :-) But get rid of that for_each_possible_cpu() and do it only
>> on the current CPU, as this is a per-CPU path anyway. If you need to
>> do it on *every* CPU and very early, then you need a separate function
>> which is called in kvm_smp_prepare_boot_cpu() as there you're pre-SMP.
>>
>
> I am trying to implement your feedback and now remember why I choose to
> use early_set_memory_decrypted() and for_each_possible_cpu loop. These
> percpu variables are static. Hence before clearing the C-bit we must
> perform the in-place decryption so that original assignment is preserved
> after we change the C-bit. Tom's SME patch [1] added sme_early_decrypt()
> -- which can be used to perform the in-place decryption but we do not have
> similar routine for non-early cases. In order to address your feedback,
> we have to add similar functions. So far, we have not seen the need for
> having such functions except this cases.
> The approach we have right now
> works just fine and not sure if its worth adding new functions.
>
> Thoughts ?
>
> [1] Commit :7f8b7e7 x86/mm: Add support for early encryption/decryption of
> memory

Shouldn't this be called DEFINE_PER_CPU_UNENCRYPTED? ISTM the "HV shared"
bit is incidental.
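
For reference, a rough sketch of the construct under discussion, using the
name suggested above. This is only an illustration pieced together from the
thread, not the actual patch: the "..unencrypted" section string, the
sev_map_percpu_unencrypted() helper, the steal_time example variable, the
sev_active() check and the early_set_memory_decrypted() prototype are all
assumptions here.

#include <linux/init.h>
#include <linux/cpumask.h>
#include <linux/percpu-defs.h>
#include <linux/mem_encrypt.h>
#include <asm/kvm_para.h>

/*
 * Put the variable in a dedicated per-CPU section whose pages are later
 * mapped with the C-bit cleared, i.e. left unencrypted and therefore
 * readable by the hypervisor.
 */
#define DEFINE_PER_CPU_UNENCRYPTED(type, name)				\
	DEFINE_PER_CPU_SECTION(type, name, "..unencrypted")

/* Hypothetical example user: per-CPU data the guest shares with KVM. */
static DEFINE_PER_CPU_UNENCRYPTED(struct kvm_steal_time, steal_time);

/*
 * Pre-SMP fixup along the lines Boris describes: called once from
 * kvm_smp_prepare_boot_cpu(), it walks every possible CPU's copy while
 * the early path is still usable. Because the variables are statically
 * initialized, the early helper must decrypt the contents in place so
 * the initial values survive the C-bit change -- the point Brijesh makes
 * about sme_early_decrypt().
 */
static void __init sev_map_percpu_unencrypted(void)
{
	int cpu;

	if (!sev_active())
		return;

	for_each_possible_cpu(cpu) {
		struct kvm_steal_time *st = &per_cpu(steal_time, cpu);

		/* assumed prototype: (unsigned long vaddr, unsigned long size) */
		early_set_memory_decrypted((unsigned long)st, sizeof(*st));
	}
}

With a single pre-SMP call like this, the for_each_possible_cpu() walk stays
in one early boot path, and any later users could fall back to the non-early
set_memory_decrypted() if that ever becomes necessary.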