Date: Wed, 27 Jul 2022 19:01:34 +0200
From: Borislav Petkov
To: Ashish Kalra
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	linux-coco@lists.linux.dev, linux-mm@kvack.org, linux-crypto@vger.kernel.org,
	tglx@linutronix.de, mingo@redhat.com, jroedel@suse.de, thomas.lendacky@amd.com,
	hpa@zytor.com, ardb@kernel.org, pbonzini@redhat.com, seanjc@google.com,
	vkuznets@redhat.com, jmattson@google.com, luto@kernel.org,
	dave.hansen@linux.intel.com, slp@redhat.com, pgonda@google.com,
	peterz@infradead.org, srinivas.pandruvada@linux.intel.com, rientjes@google.com,
	dovmurik@linux.ibm.com, tobin@ibm.com, michael.roth@amd.com, vbabka@suse.cz,
	kirill@shutemov.name, ak@linux.intel.com, tony.luck@intel.com,
	marcorr@google.com, sathyanarayanan.kuppuswamy@linux.intel.com,
	alpergun@google.com, dgilbert@redhat.com, jarkko@kernel.org
Subject: Re: [PATCH Part2 v6 07/49] x86/sev: Invalid pages from direct map when adding it to RMP table
References: <243778c282cd55a554af9c11d2ecd3ff9ea6820f.1655761627.git.ashish.kalra@amd.com>
In-Reply-To: <243778c282cd55a554af9c11d2ecd3ff9ea6820f.1655761627.git.ashish.kalra@amd.com>

On Mon, Jun 20, 2022 at 11:03:07PM +0000, Ashish Kalra wrote:
> Subject: x86/sev: Invalid pages from direct map when adding it to RMP table

"...: Invalidate pages from the direct map when adding them to the RMP
table"

> +static int restore_direct_map(u64 pfn, int npages)
> +{
> +	int i, ret = 0;
> +
> +	for (i = 0; i < npages; i++) {
> +		ret = set_direct_map_default_noflush(pfn_to_page(pfn + i));

set_memory_p() ?

> +		if (ret)
> +			goto cleanup;
> +	}
> +
> +cleanup:
> +	WARN(ret > 0, "Failed to restore direct map for pfn 0x%llx\n", pfn + i);

Warn for each pfn?! That'll flood dmesg mightily.

> +	return ret;
> +}
> +
> +static int invalid_direct_map(unsigned long pfn, int npages)
> +{
> +	int i, ret = 0;
> +
> +	for (i = 0; i < npages; i++) {
> +		ret = set_direct_map_invalid_noflush(pfn_to_page(pfn + i));

As above, set_memory_np() doesn't work here instead of looping over each
page?
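IOW, something like this, perhaps - a completely untested sketch which
assumes set_memory_p() can be made usable outside of
arch/x86/mm/pat/set_memory.c (it is static there right now) with the
same calling convention as set_memory_np():

/*
 * Untested sketch: change the attributes of the whole range in one go
 * instead of looping over each page. set_memory_p() is assumed to be
 * exported with the same signature as set_memory_np().
 */
static int restore_direct_map(u64 pfn, int npages)
{
	int ret;

	ret = set_memory_p((unsigned long)pfn_to_kaddr(pfn), npages);

	WARN(ret, "Failed to restore direct map for %d pages at pfn 0x%llx\n",
	     npages, pfn);

	return ret;
}

static int invalid_direct_map(u64 pfn, int npages)
{
	return set_memory_np((unsigned long)pfn_to_kaddr(pfn), npages);
}

That would also take care of the dmesg flooding: one warning per range
instead of one per pfn.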
> @@ -2462,11 +2494,38 @@ static int rmpupdate(u64 pfn, struct rmpupdate *val)
>  	if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))
>  		return -ENXIO;
>
> +	level = RMP_TO_X86_PG_LEVEL(val->pagesize);
> +	npages = page_level_size(level) / PAGE_SIZE;
> +
> +	/*
> +	 * If page is getting assigned in the RMP table then unmap it from the
> +	 * direct map.
> +	 */
> +	if (val->assigned) {
> +		if (invalid_direct_map(pfn, npages)) {
> +			pr_err("Failed to unmap pfn 0x%llx pages %d from direct_map\n",

"Failed to unmap %d pages at pfn 0x... from the direct map\n"

> +			       pfn, npages);
> +			return -EFAULT;
> +		}
> +	}
> +
>  	/* Binutils version 2.36 supports the RMPUPDATE mnemonic. */
>  	asm volatile(".byte 0xF2, 0x0F, 0x01, 0xFE"
>  		     : "=a"(ret)
>  		     : "a"(paddr), "c"((unsigned long)val)
>  		     : "memory", "cc");
> +
> +	/*
> +	 * Restore the direct map after the page is removed from the RMP table.
> +	 */
> +	if (!ret && !val->assigned) {
> +		if (restore_direct_map(pfn, npages)) {
> +			pr_err("Failed to map pfn 0x%llx pages %d in direct_map\n",

"Failed to map %d pages at pfn 0x... into the direct map\n"

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette