From: Sathyanarayanan Kuppuswamy <sathyanarayanan.kuppuswamy@linux.intel.com>
To: Tom Lendacky <thomas.lendacky@amd.com>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
Peter Zijlstra <peterz@infradead.org>,
Andy Lutomirski <luto@kernel.org>,
Bjorn Helgaas <bhelgaas@google.com>,
Richard Henderson <rth@twiddle.net>,
Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
James E J Bottomley <James.Bottomley@HansenPartnership.com>,
Helge Deller <deller@gmx.de>,
"David S . Miller" <davem@davemloft.net>,
Arnd Bergmann <arnd@arndb.de>, Jonathan Corbet <corbet@lwn.net>,
"Michael S . Tsirkin" <mst@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>,
David Hildenbrand <david@redhat.com>,
Andrea Arcangeli <aarcange@redhat.com>,
Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Peter H Anvin <hpa@zytor.com>,
Dave Hansen <dave.hansen@intel.com>,
Tony Luck <tony.luck@intel.com>,
Dan Williams <dan.j.williams@intel.com>,
Andi Kleen <ak@linux.intel.com>,
Kirill Shutemov <kirill.shutemov@linux.intel.com>,
Sean Christopherson <seanjc@google.com>,
Kuppuswamy Sathyanarayanan <knsathya@kernel.org>,
x86@kernel.org, linux-kernel@vger.kernel.org,
linux-pci@vger.kernel.org, linux-alpha@vger.kernel.org,
linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org,
sparclinux@vger.kernel.org, linux-arch@vger.kernel.org,
linux-doc@vger.kernel.org,
virtualization@lists.linux-foundation.org
Subject: Re: [PATCH v5 06/16] x86/tdx: Make DMA pages shared
Date: Wed, 20 Oct 2021 09:45:50 -0700
Message-ID: <66acafb6-7659-7d76-0f52-d002cfae9cc8@linux.intel.com>
In-Reply-To: <654455db-a605-5069-d652-fe822ae066b0@amd.com>
On 10/20/21 9:33 AM, Tom Lendacky wrote:
> On 10/8/21 7:37 PM, Kuppuswamy Sathyanarayanan wrote:
>> From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
>>
>> Just like MKTME, TDX reassigns bits of the physical address for
>> metadata. MKTME used several bits for an encryption KeyID. TDX
>> uses a single bit in guests to communicate whether a physical page
>> should be protected by TDX as private memory (bit set to 0) or
>> unprotected and shared with the VMM (bit set to 1).
>>
>> __set_memory_enc_dec() is now aware of TDX and sets the Shared bit
>> accordingly, following up with the relevant TDX hypercall.
>>
>> Also, do TDX_ACCEPT_PAGE on every 4k page after mapping the GPA range
>> when converting memory to private. The 4k page size limit is due to a
>> restriction in the current TDX spec. If the GPA (range) was
>> already mapped as an active, private page, the host VMM may remove
>> the private page from the TD by following the "Removing TD Private
>> Pages" sequence in the Intel TDX-module specification [1] to safely
>> block the mapping(s), flush the TLB and cache, and remove the
>> mapping(s).
>>
>> BUG() if TDX_ACCEPT_PAGE fails (except in the "previously accepted
>> page" case), as the guest is completely hosed if it can't access memory.
>>
>> [1]
>> https://software.intel.com/content/dam/develop/external/us/en/documents/tdx-module-1eas-v0.85.039.pdf
>>
>>
>> Tested-by: Kai Huang <kai.huang@linux.intel.com>
>> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
>> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
>> Reviewed-by: Andi Kleen <ak@linux.intel.com>
>> Reviewed-by: Tony Luck <tony.luck@intel.com>
>> Signed-off-by: Kuppuswamy Sathyanarayanan
>> <sathyanarayanan.kuppuswamy@linux.intel.com>
>
> ...
>
>> diff --git a/arch/x86/mm/mem_encrypt_common.c
>> b/arch/x86/mm/mem_encrypt_common.c
>> index f063c885b0a5..119a9056efbb 100644
>> --- a/arch/x86/mm/mem_encrypt_common.c
>> +++ b/arch/x86/mm/mem_encrypt_common.c
>> @@ -9,9 +9,18 @@
>> #include <asm/mem_encrypt_common.h>
>> #include <linux/dma-mapping.h>
>> +#include <linux/cc_platform.h>
>> /* Override for DMA direct allocation check -
>> ARCH_HAS_FORCE_DMA_UNENCRYPTED */
>> bool force_dma_unencrypted(struct device *dev)
>> {
>> - return amd_force_dma_unencrypted(dev);
>> + if (cc_platform_has(CC_ATTR_GUEST_TDX) &&
>> + cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
>> + return true;
>> +
>> + if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) ||
>> + cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
>> + return amd_force_dma_unencrypted(dev);
>> +
>> + return false;
>
> Assuming the original force_dma_unencrypted() function was moved here or
> cc_platform.c, then you shouldn't need any changes. Both SEV and TDX
> require true be returned if CC_ATTR_GUEST_MEM_ENCRYPT returns true. And
> then TDX should never return true for CC_ATTR_HOST_MEM_ENCRYPT.
For the non-TDX case, when CC_ATTR_HOST_MEM_ENCRYPT is set, we should
still call amd_force_dma_unencrypted(), right?
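A minimal user-space sketch of the shape Tom is suggesting, with
cc_platform_has() and amd_force_dma_unencrypted() stubbed out as simple
globals (the real kernel functions take a cc_attr and a struct device):
since SEV and TDX guests both report CC_ATTR_GUEST_MEM_ENCRYPT, no
TDX-specific branch is needed, and the AMD helper only runs on the
SME-host path.

```c
#include <assert.h>
#include <stdbool.h>

enum cc_attr {
	CC_ATTR_GUEST_MEM_ENCRYPT,
	CC_ATTR_HOST_MEM_ENCRYPT,
};

/* Stubbed platform state: flip these to model a SEV/TDX guest or an
 * SME host. The real cc_platform_has() queries the platform. */
static bool guest_mem_encrypt;
static bool host_mem_encrypt;

static bool cc_platform_has(enum cc_attr attr)
{
	switch (attr) {
	case CC_ATTR_GUEST_MEM_ENCRYPT:
		return guest_mem_encrypt;
	case CC_ATTR_HOST_MEM_ENCRYPT:
		return host_mem_encrypt;
	}
	return false;
}

/* Stand-in for the AMD-specific device checks (hypothetical body). */
static bool amd_force_dma_unencrypted(void)
{
	return true; /* e.g. the device can't reach encrypted memory */
}

static bool force_dma_unencrypted(void)
{
	/* SEV and TDX guests both set GUEST_MEM_ENCRYPT, so one check
	 * covers both; no separate CC_ATTR_GUEST_TDX branch needed. */
	if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
		return true;

	/* SME host: defer to the AMD-specific per-device logic. */
	if (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
		return amd_force_dma_unencrypted();

	return false;
}
```

This also answers the question above: the amd_force_dma_unencrypted()
call survives, but only behind CC_ATTR_HOST_MEM_ENCRYPT, which TDX never
reports.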
>
>> }
>> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
>> index 527957586f3c..6c531d5cb5fd 100644
>> --- a/arch/x86/mm/pat/set_memory.c
>> +++ b/arch/x86/mm/pat/set_memory.c
>> @@ -30,6 +30,7 @@
>> #include <asm/proto.h>
>> #include <asm/memtype.h>
>> #include <asm/set_memory.h>
>> +#include <asm/tdx.h>
>> #include "../mm_internal.h"
>> @@ -1981,8 +1982,10 @@ int set_memory_global(unsigned long addr, int
>> numpages)
>> __pgprot(_PAGE_GLOBAL), 0);
>> }
>> -static int __set_memory_enc_dec(unsigned long addr, int numpages,
>> bool enc)
>> +static int __set_memory_protect(unsigned long addr, int numpages,
>> bool protect)
>> {
>> + pgprot_t mem_protected_bits, mem_plain_bits;
>> + enum tdx_map_type map_type;
>> struct cpa_data cpa;
>> int ret;
>> @@ -1997,8 +2000,25 @@ static int __set_memory_enc_dec(unsigned long
>> addr, int numpages, bool enc)
>> memset(&cpa, 0, sizeof(cpa));
>> cpa.vaddr = &addr;
>> cpa.numpages = numpages;
>> - cpa.mask_set = enc ? __pgprot(_PAGE_ENC) : __pgprot(0);
>> - cpa.mask_clr = enc ? __pgprot(0) : __pgprot(_PAGE_ENC);
>> +
>> + if (cc_platform_has(CC_ATTR_GUEST_SHARED_MAPPING_INIT)) {
>> + mem_protected_bits = __pgprot(0);
>> + mem_plain_bits = pgprot_cc_shared_mask();
>
> How about having generic versions for both shared and private that
> return the proper value for SEV or TDX. Then this remains looking
> similar to as it does now, just replacing the __pgprot() calls with the
> appropriate pgprot_cc_{shared,private}_mask().
Makes sense.
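A sketch of the generic helpers Tom describes, as a compilable
user-space fragment. The names follow his pgprot_cc_{shared,private}_mask()
suggestion; the bit positions and the is_tdx_guest stub are illustrative
only (the real Shared-bit and C-bit positions are discovered at runtime).
For TDX, "private" is the default mapping (no extra bits) and sharing
sets the Shared bit; for SEV it is the reverse: "private" sets the
encryption bit and "shared" clears it.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t pgprot_t;

#define _PAGE_ENC	(1ULL << 51)	/* illustrative SME/SEV C-bit */
#define TDX_SHARED_BIT	(1ULL << 47)	/* illustrative TDX Shared bit */

/* Stub for cc_platform_has(CC_ATTR_GUEST_TDX). */
static bool is_tdx_guest;

/* Bits to set when making a mapping shared with the VMM. */
static pgprot_t pgprot_cc_shared_mask(void)
{
	return is_tdx_guest ? TDX_SHARED_BIT : 0;
}

/* Bits to set when making a mapping guest-private. */
static pgprot_t pgprot_cc_private_mask(void)
{
	return is_tdx_guest ? 0 : _PAGE_ENC;
}
```

With these, __set_memory_enc_dec() could keep its original two-line
shape, e.g. cpa.mask_set = enc ? pgprot_cc_private_mask() :
pgprot_cc_shared_mask() (and the mirror for mask_clr), with no
TDX-specific branching at the call site.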
>
> Thanks,
> Tom
>
>> + } else {
>> + mem_protected_bits = __pgprot(_PAGE_ENC);
>> + mem_plain_bits = __pgprot(0);
>> + }
>> +
>> + if (protect) {
>> + cpa.mask_set = mem_protected_bits;
>> + cpa.mask_clr = mem_plain_bits;
>> + map_type = TDX_MAP_PRIVATE;
>> + } else {
>> + cpa.mask_set = mem_plain_bits;
>> + cpa.mask_clr = mem_protected_bits;
>> + map_type = TDX_MAP_SHARED;
>> + }
>> +
>> cpa.pgd = init_mm.pgd;
>> /* Must avoid aliasing mappings in the highmem code */
>> @@ -2006,9 +2026,17 @@ static int __set_memory_enc_dec(unsigned long
>> addr, int numpages, bool enc)
>> vm_unmap_aliases();
>> /*
>> - * Before changing the encryption attribute, we need to flush
>> caches.
>> + * Before changing the encryption attribute, flush caches.
>> + *
>> + * For TDX, guest is responsible for flushing caches on
>> private->shared
>> + * transition. VMM is responsible for flushing on shared->private.
>> */
>> - cpa_flush(&cpa, !this_cpu_has(X86_FEATURE_SME_COHERENT));
>> + if (cc_platform_has(CC_ATTR_GUEST_TDX)) {
>> + if (map_type == TDX_MAP_SHARED)
>> + cpa_flush(&cpa, 1);
>> + } else {
>> + cpa_flush(&cpa, !this_cpu_has(X86_FEATURE_SME_COHERENT));
>> + }
>> ret = __change_page_attr_set_clr(&cpa, 1);
>> @@ -2021,18 +2049,21 @@ static int __set_memory_enc_dec(unsigned long
>> addr, int numpages, bool enc)
>> */
>> cpa_flush(&cpa, 0);
>> + if (!ret && cc_platform_has(CC_ATTR_GUEST_SHARED_MAPPING_INIT))
>> + ret = tdx_hcall_gpa_intent(__pa(addr), numpages, map_type);
>> +
>> return ret;
>> }
>> int set_memory_encrypted(unsigned long addr, int numpages)
>> {
>> - return __set_memory_enc_dec(addr, numpages, true);
>> + return __set_memory_protect(addr, numpages, true);
>> }
>> EXPORT_SYMBOL_GPL(set_memory_encrypted);
>> int set_memory_decrypted(unsigned long addr, int numpages)
>> {
>> - return __set_memory_enc_dec(addr, numpages, false);
>> + return __set_memory_protect(addr, numpages, false);
>> }
>> EXPORT_SYMBOL_GPL(set_memory_decrypted);
>>
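For readers following the diff above, here is a stubbed sketch of the
post-conversion step the patch adds: after the page-table bits are
flipped, tdx_hcall_gpa_intent() notifies the VMM via MapGPA and, for
private conversions only, accepts each page in 4k units (the only size
the current TDX spec allows for TDX_ACCEPT_PAGE). The TDCALL itself is
stubbed; names other than those in the diff are hypothetical.

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096UL

enum tdx_map_type { TDX_MAP_PRIVATE, TDX_MAP_SHARED };

/* Counts accepts so the flow can be checked; the real TDCALL
 * validates one 4k page and can fail. */
static unsigned long accepted_pages;

static int tdx_accept_page(uint64_t gpa)
{
	(void)gpa;
	accepted_pages++;
	return 0;
}

static int tdx_hcall_gpa_intent(uint64_t gpa, int numpages,
				enum tdx_map_type map_type)
{
	/* ... real code issues the MapGPA hypercall to the VMM here ... */

	/* Only private conversions need TDX_ACCEPT_PAGE; shared pages
	 * belong to the VMM and are never accepted by the guest. */
	if (map_type == TDX_MAP_PRIVATE) {
		for (int i = 0; i < numpages; i++) {
			if (tdx_accept_page(gpa + i * PAGE_SIZE))
				return -1; /* real code BUG()s here */
		}
	}
	return 0;
}
```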
--
Sathyanarayanan Kuppuswamy
Linux Kernel Developer