From: Kai Huang <kai.huang@intel.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com, pbonzini@redhat.com, dave.hansen@intel.com,
	len.brown@intel.com, tony.luck@intel.com,
	rafael.j.wysocki@intel.com, reinette.chatre@intel.com,
	dan.j.williams@intel.com, peterz@infradead.org,
	ak@linux.intel.com, kirill.shutemov@linux.intel.com,
	sathyanarayanan.kuppuswamy@linux.intel.com,
	isaku.yamahata@intel.com, kai.huang@intel.com
Subject: [PATCH v4 14/22] x86/virt/tdx: Create TDMRs to cover all memblock memory regions
Date: Wed,  1 Jun 2022 07:39:37 +1200
Message-ID: <78fd4dd9f4659ae66b262864ebe58fbb0f4d5c73.1654025431.git.kai.huang@intel.com>
In-Reply-To: <cover.1654025430.git.kai.huang@intel.com>

The kernel configures TDX-usable memory regions by passing an array of
"TD Memory Regions" (TDMRs) to the TDX module.  Each TDMR contains the
information of the base/size of a memory region, the base/size of the
associated Physical Address Metadata Table (PAMT) and a list of reserved
areas in the region.
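
For reference, each TDMR_INFO entry conceptually carries something like
the sketch below.  This is illustrative only: field and type names are
placeholders, and the authoritative layout is defined by the TDX module
ABI and by the structure definitions used elsewhere in this series.

	struct tdmr_info {
		u64 base;	/* base of the TDMR, 1GB aligned */
		u64 size;	/* size of the TDMR, 1GB granularity */
		u64 pamt_base;	/* PAMT backing this TDMR */
		u64 pamt_size;
		/* Holes/non-usable ranges inside [base, base + size) */
		struct tdmr_reserved_area reserved_areas[];
	};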

Create a number of TDMRs according to the memblock memory regions.  To
keep it simple, always try to create one TDMR for each memory region.
As the first step, only set up the base/size for each TDMR.

Each TDMR must be 1GB aligned and its size must be a multiple of 1GB.
This implies that one TDMR could cover multiple memory regions.  If a
memory region spans a 1GB boundary and its beginning part is already
covered by the previous TDMR, just create a new TDMR for the remaining
part.
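
As a concrete example (addresses purely illustrative), consider two
memblock regions [0, 1.5GB) and [1.5GB, 2.5GB).  Applying the 1GB
alignment used by create_tdmrs() below:

	Region [0, 1.5GB):
		TDMR0 = [ALIGN_DOWN(0, 1GB), ALIGN_UP(1.5GB, 1GB))
		      = [0, 2GB)

	Region [1.5GB, 2.5GB):
		The 1GB-aligned range is [1GB, 3GB).  Its end (3GB) is
		beyond TDMR0's end (2GB), so the region is not fully
		covered.  Skip the part below 2GB that TDMR0 already
		covers and create a new TDMR for the remainder:
		TDMR1 = [2GB, ALIGN_UP(2.5GB, 1GB)) = [2GB, 3GB)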

TDX only supports a limited number of TDMRs.  Disable TDX if all TDMRs
are consumed but there are still memory regions left to cover.
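
For reference, a minimal sketch of the caller side (the error label
below is hypothetical and not part of this patch; the real error
handling lives in the TDX module initialization path added earlier in
the series):

	ret = construct_tdmrs_memeblock(tdmr_array, &tdmr_num);
	if (ret)	/* e.g. -E2BIG: ran out of TDMRs */
		goto out_free_tdmrs;	/* hypothetical label */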

Signed-off-by: Kai Huang <kai.huang@intel.com>
---

- v3 -> v4:
 - Removed allocating TDMRs individually.
 - Improved changelog by using Dave's words.
 - Made TDMR_START() and TDMR_END() static inline functions.

---
 arch/x86/virt/vmx/tdx/tdx.c | 104 +++++++++++++++++++++++++++++++++++-
 1 file changed, 103 insertions(+), 1 deletion(-)

diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 4aa02a992b59..129994c191f5 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -433,6 +433,24 @@ static int check_memblock_tdx_convertible(void)
 	return 0;
 }
 
+/* TDMR must be 1GB aligned */
+#define TDMR_ALIGNMENT		BIT_ULL(30)
+#define TDMR_PFN_ALIGNMENT	(TDMR_ALIGNMENT >> PAGE_SHIFT)
+
+/* Align up and down the address to TDMR boundary */
+#define TDMR_ALIGN_DOWN(_addr)	ALIGN_DOWN((_addr), TDMR_ALIGNMENT)
+#define TDMR_ALIGN_UP(_addr)	ALIGN((_addr), TDMR_ALIGNMENT)
+
+static inline u64 tdmr_start(struct tdmr_info *tdmr)
+{
+	return tdmr->base;
+}
+
+static inline u64 tdmr_end(struct tdmr_info *tdmr)
+{
+	return tdmr->base + tdmr->size;
+}
+
 /* Calculate the actual TDMR_INFO size */
 static inline int cal_tdmr_size(void)
 {
@@ -470,6 +488,82 @@ static struct tdmr_info *alloc_tdmr_array(int *array_sz)
 	return alloc_pages_exact(*array_sz, GFP_KERNEL | __GFP_ZERO);
 }
 
+static struct tdmr_info *tdmr_array_entry(struct tdmr_info *tdmr_array,
+					  int idx)
+{
+	return (struct tdmr_info *)((unsigned long)tdmr_array +
+			cal_tdmr_size() * idx);
+}
+
+/*
+ * Create TDMRs to cover all memory regions in memblock.  The actual
+ * number of TDMRs is returned in @tdmr_num.
+ */
+static int create_tdmrs(struct tdmr_info *tdmr_array, int *tdmr_num)
+{
+	unsigned long start_pfn, end_pfn;
+	int i, nid, tdmr_idx = 0;
+
+	/*
+	 * Loop over all memory regions in memblock and create TDMRs to
+	 * cover them.  To keep it simple, always try to use one TDMR to
+	 * cover one memory region.
+	 */
+	memblock_for_each_tdx_mem_pfn_range(i, &start_pfn, &end_pfn, &nid) {
+		struct tdmr_info *tdmr;
+		u64 start, end;
+
+		tdmr = tdmr_array_entry(tdmr_array, tdmr_idx);
+		start = TDMR_ALIGN_DOWN(start_pfn << PAGE_SHIFT);
+		end = TDMR_ALIGN_UP(end_pfn << PAGE_SHIFT);
+
+		/*
+		 * If the current TDMR's size hasn't been initialized,
+		 * it is a new TDMR to cover the new memory region.
+		 * Otherwise, the current TDMR has already covered the
+		 * previous memory region.  In the latter case, check
+		 * whether the current memory region has been fully or
+		 * partially covered by the current TDMR, since TDMR is
+		 * 1G aligned.
+		 */
+		if (tdmr->size) {
+			/*
+			 * Loop to the next memory region if the current
+			 * region has already been fully covered by the
+			 * current TDMR.
+			 */
+			if (end <= tdmr_end(tdmr))
+				continue;
+
+			/*
+			 * If part of the current memory region has
+			 * already been covered by the current TDMR,
+			 * skip the already covered part.
+			 */
+			if (start < tdmr_end(tdmr))
+				start = tdmr_end(tdmr);
+
+			/*
+			 * Create a new TDMR to cover the current memory
+			 * region, or the remaining part of it.
+			 */
+			tdmr_idx++;
+			if (tdmr_idx >= tdx_sysinfo.max_tdmrs)
+				return -E2BIG;
+
+			tdmr = tdmr_array_entry(tdmr_array, tdmr_idx);
+		}
+
+		tdmr->base = start;
+		tdmr->size = end - start;
+	}
+
+	/* @tdmr_idx is always the index of the last valid TDMR. */
+	*tdmr_num = tdmr_idx + 1;
+
+	return 0;
+}
+
 /*
  * Construct an array of TDMRs to cover all memory regions in memblock.
  * This makes sure all pages managed by the page allocator are TDX
@@ -478,8 +572,16 @@ static struct tdmr_info *alloc_tdmr_array(int *array_sz)
 static int construct_tdmrs_memeblock(struct tdmr_info *tdmr_array,
 				     int *tdmr_num)
 {
+	int ret;
+
+	ret = create_tdmrs(tdmr_array, tdmr_num);
+	if (ret)
+		goto err;
+
 	/* Return -EINVAL until constructing TDMRs is done */
-	return -EINVAL;
+	ret = -EINVAL;
+err:
+	return ret;
 }
 
 /*
-- 
2.35.3


Thread overview: 23+ messages
2022-05-31 19:39 [PATCH v4 00/22] TDX host kernel support Kai Huang
2022-05-31 19:39 ` [PATCH v4 01/22] x86/virt/tdx: Detect TDX during kernel boot Kai Huang
2022-05-31 19:39 ` [PATCH v4 02/22] cc_platform: Add new attribute to prevent ACPI CPU hotplug Kai Huang
2022-05-31 19:39 ` [PATCH v4 03/22] cc_platform: Add new attribute to prevent ACPI memory hotplug Kai Huang
2022-05-31 19:39 ` [PATCH v4 04/22] x86/virt/tdx: Prevent ACPI CPU hotplug and " Kai Huang
2022-05-31 19:39 ` [PATCH v4 05/22] x86/virt/tdx: Prevent hot-add driver managed memory Kai Huang
2022-05-31 19:39 ` [PATCH v4 06/22] x86/virt/tdx: Add skeleton to initialize TDX on demand Kai Huang
2022-05-31 19:39 ` [PATCH v4 07/22] x86/virt/tdx: Implement SEAMCALL function Kai Huang
2022-05-31 19:39 ` [PATCH v4 08/22] x86/virt/tdx: Shut down TDX module in case of error Kai Huang
2022-05-31 19:39 ` [PATCH v4 09/22] x86/virt/tdx: Detect TDX module by doing module global initialization Kai Huang
2022-05-31 19:39 ` [PATCH v4 10/22] x86/virt/tdx: Do logical-cpu scope TDX module initialization Kai Huang
2022-05-31 19:39 ` [PATCH v4 11/22] x86/virt/tdx: Get information about TDX module and TDX-capable memory Kai Huang
2022-05-31 19:39 ` [PATCH v4 12/22] x86/virt/tdx: Convert all memory regions in memblock to TDX memory Kai Huang
2022-05-31 19:39 ` [PATCH v4 13/22] x86/virt/tdx: Add placeholder to construct TDMRs based on memblock Kai Huang
2022-05-31 19:39 ` Kai Huang [this message]
2022-05-31 19:39 ` [PATCH v4 15/22] x86/virt/tdx: Allocate and set up PAMTs for TDMRs Kai Huang
2022-05-31 19:39 ` [PATCH v4 16/22] x86/virt/tdx: Set up reserved areas for all TDMRs Kai Huang
2022-05-31 19:39 ` [PATCH v4 17/22] x86/virt/tdx: Reserve TDX module global KeyID Kai Huang
2022-05-31 19:39 ` [PATCH v4 18/22] x86/virt/tdx: Configure TDX module with TDMRs and " Kai Huang
2022-05-31 19:39 ` [PATCH v4 19/22] x86/virt/tdx: Configure global KeyID on all packages Kai Huang
2022-05-31 19:39 ` [PATCH v4 20/22] x86/virt/tdx: Initialize all TDMRs Kai Huang
2022-05-31 19:39 ` [PATCH v4 21/22] x86/virt/tdx: Support kexec() Kai Huang
2022-05-31 19:39 ` [PATCH v4 22/22] Documentation/x86: Add documentation for TDX host support Kai Huang
