From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752445AbdFNQqB (ORCPT );
	Wed, 14 Jun 2017 12:46:01 -0400
Received: from mail.skyhub.de ([5.9.137.197]:59538 "EHLO mail.skyhub.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752247AbdFNQp6 (ORCPT );
	Wed, 14 Jun 2017 12:45:58 -0400
Date: Wed, 14 Jun 2017 18:45:53 +0200
From: Borislav Petkov
To: Tom Lendacky
Cc: linux-arch@vger.kernel.org, linux-efi@vger.kernel.org, kvm@vger.kernel.org,
	linux-doc@vger.kernel.org, x86@kernel.org, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com,
	linux-mm@kvack.org, iommu@lists.linux-foundation.org,
	Rik van Riel, Radim Krčmář, Toshimitsu Kani, Arnd Bergmann,
	Jonathan Corbet, Matt Fleming, "Michael S. Tsirkin", Joerg Roedel,
	Konrad Rzeszutek Wilk, Paolo Bonzini, Larry Woodman, Brijesh Singh,
	Ingo Molnar, Andy Lutomirski, "H. Peter Anvin", Andrey Ryabinin,
	Alexander Potapenko, Dave Young, Thomas Gleixner, Dmitry Vyukov
Subject: Re: [PATCH v6 24/34] x86, swiotlb: Add memory encryption support
Message-ID: <20170614164553.jwcfgugpizz5pc2e@pd.tnic>
References: <20170607191309.28645.15241.stgit@tlendack-t1.amdoffice.net>
	<20170607191721.28645.96519.stgit@tlendack-t1.amdoffice.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20170607191721.28645.96519.stgit@tlendack-t1.amdoffice.net>
User-Agent: NeoMutt/20170113 (1.7.2)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Jun 07, 2017 at 02:17:21PM -0500, Tom Lendacky wrote:
> Since DMA addresses will effectively look like 48-bit addresses when the
> memory encryption mask is set, SWIOTLB is needed if the DMA mask of the
> device performing the DMA does not support 48-bits. SWIOTLB will be
> initialized to create decrypted bounce buffers for use by these devices.
>
> Signed-off-by: Tom Lendacky
> ---

...

> diff --git a/init/main.c b/init/main.c
> index df58a41..7125b5f 100644
> --- a/init/main.c
> +++ b/init/main.c
> @@ -488,6 +488,10 @@ void __init __weak thread_stack_cache_init(void)
>  }
>  #endif
>
> +void __init __weak mem_encrypt_init(void)
> +{
> +}

	void __init __weak mem_encrypt_init(void) { }

saves some real estate. Please do that for the rest of the stubs you're
adding, for the next version.

> +
>  /*
>   * Set up kernel memory allocators
>   */
> @@ -640,6 +644,15 @@ asmlinkage __visible void __init start_kernel(void)
>  	 */
>  	locking_selftest();
>
> +	/*
> +	 * This needs to be called before any devices perform DMA
> +	 * operations that might use the SWIOTLB bounce buffers.
> +	 * This call will mark the bounce buffers as decrypted so
> +	 * that their usage will not cause "plain-text" data to be
> +	 * decrypted when accessed.

s/This call/It/

> +	 */
> +	mem_encrypt_init();
> +
>  #ifdef CONFIG_BLK_DEV_INITRD
>  	if (initrd_start && !initrd_below_start_ok &&
>  	    page_to_pfn(virt_to_page((void *)initrd_start)) < min_low_pfn) {
> diff --git a/lib/swiotlb.c b/lib/swiotlb.c
> index a8d74a7..74d6557 100644
> --- a/lib/swiotlb.c
> +++ b/lib/swiotlb.c
> @@ -30,6 +30,7 @@
>  #include
>  #include
>  #include
> +#include
>
>  #include
>  #include
> @@ -155,6 +156,17 @@ unsigned long swiotlb_size_or_default(void)
>  	return size ? size : (IO_TLB_DEFAULT_SIZE);
>  }
>
> +void __weak swiotlb_set_mem_attributes(void *vaddr, unsigned long size)
> +{
> +}

As above.

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.
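
As background on the __weak stubs discussed above: the empty definitions in
init/main.c and lib/swiotlb.c are only link-time defaults, and any non-weak
definition provided elsewhere (for example by arch code) overrides them. The
following is a minimal standalone userspace sketch of that mechanism, not
kernel code; the function and file names (demo_init, stub.c, override.c) are
made up for illustration and are not taken from the patch:

    /* stub.c - weak default, analogous to the empty __weak stubs above */
    #include <stdio.h>

    void __attribute__((weak)) demo_init(void)
    {
            puts("weak default stub");
    }

    int main(void)
    {
            demo_init();    /* a strong definition wins if one is linked in */
            return 0;
    }

    /*
     * override.c - optional strong definition; building with
     * "gcc stub.c override.c" makes this one replace the weak default,
     * the same way an arch-specific mem_encrypt_init() would replace
     * the empty init/main.c stub:
     *
     *      void demo_init(void)
     *      {
     *              puts("strong override");
     *      }
     */

With only stub.c linked, the weak default runs and prints "weak default
stub"; linking in an object that also defines demo_init() makes that
definition take precedence at link time, which is the mechanism the patch
relies on for both mem_encrypt_init() and swiotlb_set_mem_attributes().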