From: Borislav Petkov
Subject: Re: [PATCH v5 12/32] x86/mm: Insure that boot memory areas are mapped properly
Date: Thu, 4 May 2017 12:16:09 +0200
Message-ID: <20170504101609.vazu4tuc3gqapaqk@pd.tnic>
In-Reply-To: <20170418211822.10190.67435.stgit-qCXWGYdRb2BnqfbPTmsdiZQ+2ll4COg0XqFh9Ls21Oc@public.gmane.org>
References: <20170418211612.10190.82788.stgit@tlendack-t1.amdoffice.net>
 <20170418211822.10190.67435.stgit@tlendack-t1.amdoffice.net>
To: Tom Lendacky
Cc: linux-efi-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, Brijesh Singh,
 Toshimitsu Kani, Radim Krčmář, Matt Fleming,
 x86-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org,
 linux-mm-Bw31MaZKKs3YtjvyW6yDsg@public.gmane.org, Alexander Potapenko,
 "H. Peter Anvin", Larry Woodman,
 linux-arch-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
 kvm-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, Jonathan Corbet,
 linux-doc-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
 kasan-dev-/JYPxA39Uh5TLH3MbocFFw@public.gmane.org, Ingo Molnar,
 Andrey Ryabinin, Dave Young, Rik van Riel, Arnd Bergmann, Andy Lutomirski,
 Thomas Gleixner, Dmitry Vyukov,
 kexec-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r@public.gmane.org,
 linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
 iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org,
 "Michael S. Tsirkin"

On Tue, Apr 18, 2017 at 04:18:22PM -0500, Tom Lendacky wrote:
> The boot data and command line data are present in memory in a decrypted
> state and are copied early in the boot process. The early page fault
> support will map these areas as encrypted, so before attempting to copy
> them, add decrypted mappings so the data is accessed properly when copied.
>
> For the initrd, encrypt this data in place. Since the future mapping of the
> initrd area will be mapped as encrypted the data will be accessed properly.
>
> Signed-off-by: Tom Lendacky
> ---
>  arch/x86/include/asm/mem_encrypt.h |   11 +++++
>  arch/x86/include/asm/pgtable.h     |    3 +
>  arch/x86/kernel/head64.c           |   30 ++++++++++++--
>  arch/x86/kernel/setup.c            |   10 +++++
>  arch/x86/mm/mem_encrypt.c          |   77 ++++++++++++++++++++++++++++++++++++
>  5 files changed, 127 insertions(+), 4 deletions(-)

...

> diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> index 603a166..a95800b 100644
> --- a/arch/x86/kernel/setup.c
> +++ b/arch/x86/kernel/setup.c
> @@ -115,6 +115,7 @@
>  #include
>  #include
>  #include
> +#include <asm/mem_encrypt.h>
>
>  /*
>   * max_low_pfn_mapped: highest direct mapped pfn under 4GB
> @@ -374,6 +375,15 @@ static void __init reserve_initrd(void)
>  	    !ramdisk_image || !ramdisk_size)
>  		return;		/* No initrd provided by bootloader */
>
> +	/*
> +	 * If SME is active, this memory will be marked encrypted by the
> +	 * kernel when it is accessed (including relocation). However, the
> +	 * ramdisk image was loaded decrypted by the bootloader, so make
> +	 * sure that it is encrypted before accessing it.
> +	 */
> +	if (sme_active())

That test is not needed here because __sme_early_enc_dec() already tests
sme_me_mask. Change that test there to sme_active() instead.
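IOW, something like this (a completely untested sketch -- I'm guessing at
the exact __sme_early_enc_dec() prototype from your diffstat, so take the
details with a grain of salt):

	/* arch/x86/mm/mem_encrypt.c */
	static void __init __sme_early_enc_dec(resource_size_t paddr,
					       unsigned long size, bool enc)
	{
		/* Bail out early when SME is not active... */
		if (!sme_active())	/* was: if (!sme_me_mask) */
			return;

		/* ...otherwise map @paddr and en-/decrypt @size bytes in place. */
	}

and then reserve_initrd() can simply do

	sme_early_encrypt(ramdisk_image, ramdisk_end - ramdisk_image);

unconditionally, so the "is SME enabled at all?" check lives in exactly one
place.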
> +		sme_early_encrypt(ramdisk_image, ramdisk_end - ramdisk_image);
> +
>  	initrd_start = 0;
>
>  	mapped_size = memblock_mem_size(max_pfn_mapped);

--
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.