From: Dave Hansen <dave.hansen@intel.com>
To: Kai Huang <kai.huang@intel.com>,
linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com, pbonzini@redhat.com, len.brown@intel.com,
tony.luck@intel.com, rafael.j.wysocki@intel.com,
reinette.chatre@intel.com, dan.j.williams@intel.com,
peterz@infradead.org, ak@linux.intel.com,
kirill.shutemov@linux.intel.com,
sathyanarayanan.kuppuswamy@linux.intel.com,
isaku.yamahata@intel.com
Subject: Re: [PATCH v5 12/22] x86/virt/tdx: Convert all memory regions in memblock to TDX memory
Date: Fri, 24 Jun 2022 12:40:29 -0700 [thread overview]
Message-ID: <20d63398-928f-0c6f-47ec-8e225c049ad8@intel.com> (raw)
In-Reply-To: <8288396be7fedd10521a28531e138579594d757a.1655894131.git.kai.huang@intel.com>
On 6/22/22 04:17, Kai Huang wrote:
...
> Also, explicitly exclude memory regions below the first 1MB as TDX memory
> because those regions may not be reported as convertible memory. This
> is OK as the first 1MB is always reserved during kernel boot and won't
> end up in the page allocator.
Are you sure? I wasn't, for a few minutes, until I found reserve_real_mode().
Could we point to that in this changelog, please?
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index efa830853e98..4988a91d5283 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -1974,6 +1974,7 @@ config INTEL_TDX_HOST
> 	depends on X86_64
> 	depends on KVM_INTEL
> 	select ARCH_HAS_CC_PLATFORM
> +	select ARCH_KEEP_MEMBLOCK
> 	help
> 	  Intel Trust Domain Extensions (TDX) protects guest VMs from malicious
> 	  host and certain physical attacks. This option enables necessary TDX
> diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
> index 1bc97756bc0d..2b20d4a7a62b 100644
> --- a/arch/x86/virt/vmx/tdx/tdx.c
> +++ b/arch/x86/virt/vmx/tdx/tdx.c
> @@ -15,6 +15,8 @@
> #include <linux/cpumask.h>
> #include <linux/smp.h>
> #include <linux/atomic.h>
> +#include <linux/sizes.h>
> +#include <linux/memblock.h>
> #include <asm/cpufeatures.h>
> #include <asm/cpufeature.h>
> #include <asm/msr-index.h>
> @@ -338,6 +340,91 @@ static int tdx_get_sysinfo(struct tdsysinfo_struct *tdsysinfo,
> 	return check_cmrs(cmr_array, actual_cmr_num);
> }
>
> +/*
> + * Skip the memory region below 1MB. Return true if the entire
> + * region is skipped. Otherwise, the updated range is returned.
> + */
> +static bool pfn_range_skip_lowmem(unsigned long *p_start_pfn,
> +				  unsigned long *p_end_pfn)
> +{
> +	u64 start, end;
> +
> +	start = *p_start_pfn << PAGE_SHIFT;
> +	end = *p_end_pfn << PAGE_SHIFT;
> +
> +	if (start < SZ_1M)
> +		start = SZ_1M;
> +
> +	if (start >= end)
> +		return true;
> +
> +	*p_start_pfn = (start >> PAGE_SHIFT);
> +
> +	return false;
> +}
> +
> +/*
> + * Walks over all memblock memory regions that are intended to be
> + * converted to TDX memory. Essentially, it is all memblock memory
> + * regions excluding the low memory below 1MB.
> + *
> + * This is because on some TDX platforms the low memory below 1MB is
> + * not included in CMRs. Excluding the low 1MB can still guarantee
> + * that the pages managed by the page allocator are always TDX memory,
> + * as the low 1MB is reserved during kernel boot and won't end up in
> + * ZONE_DMA (see reserve_real_mode()).
> + */
> +#define memblock_for_each_tdx_mem_pfn_range(i, p_start, p_end, p_nid)	\
> +	for_each_mem_pfn_range(i, MAX_NUMNODES, p_start, p_end, p_nid)	\
> +		if (!pfn_range_skip_lowmem(p_start, p_end))
Let's summarize where we are at this point:
1. All RAM is described in memblocks
2. Some memblocks are reserved and some are free
3. The lower 1MB is marked reserved
4. for_each_mem_pfn_range() walks all reserved and free memblocks, so we
have to exclude the lower 1MB as a special case.
That seems superficially rather ridiculous. Shouldn't we just pick a
memblock iterator that skips the 1MB? Surely there is such a thing.
Or, should we be doing something different with the 1MB in the memblock
structure?
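To make the special case concrete: absent an iterator that already skips the reservation, the clamping in pfn_range_skip_lowmem() reduces to the following. This is a minimal user-space sketch, not the kernel code itself; PAGE_SHIFT is assumed to be 12 (4KB pages, as on x86) and SZ_1M is hard-coded:

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT 12                /* assumed 4KB pages, as on x86 */
#define SZ_1M      (1024ULL * 1024)

/*
 * Clamp a pfn range so that it starts at or above 1MB.  Returns true
 * if the whole range lies below 1MB (nothing left to convert),
 * otherwise updates *p_start_pfn in place and returns false.
 */
bool skip_lowmem(unsigned long *p_start_pfn, unsigned long *p_end_pfn)
{
	uint64_t start = (uint64_t)*p_start_pfn << PAGE_SHIFT;
	uint64_t end   = (uint64_t)*p_end_pfn << PAGE_SHIFT;

	if (start < SZ_1M)
		start = SZ_1M;

	if (start >= end)
		return true;

	*p_start_pfn = (unsigned long)(start >> PAGE_SHIFT);
	return false;
}
```

For a region [0, 2MB) this yields [1MB, 2MB); a region entirely below 1MB is dropped. An iterator that already understood the low-1MB reservation would make the clamp unnecessary.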
> +/* Check whether first range is the subrange of the second */
> +static bool is_subrange(u64 r1_start, u64 r1_end, u64 r2_start, u64 r2_end)
> +{
> +	return r1_start >= r2_start && r1_end <= r2_end;
> +}
> +
> +/* Check whether address range is covered by any CMR or not. */
> +static bool range_covered_by_cmr(struct cmr_info *cmr_array, int cmr_num,
> +				 u64 start, u64 end)
> +{
> +	int i;
> +
> +	for (i = 0; i < cmr_num; i++) {
> +		struct cmr_info *cmr = &cmr_array[i];
> +
> +		if (is_subrange(start, end, cmr->base, cmr->base + cmr->size))
> +			return true;
> +	}
> +
> +	return false;
> +}
> +
> +/*
> + * Check whether all memory regions in memblock are TDX convertible
> + * memory. Return 0 if all memory regions are convertible, or error.
> + */
> +static int check_memblock_tdx_convertible(void)
> +{
> +	unsigned long start_pfn, end_pfn;
> +	int i;
> +
> +	memblock_for_each_tdx_mem_pfn_range(i, &start_pfn, &end_pfn, NULL) {
> +		u64 start, end;
> +
> +		start = start_pfn << PAGE_SHIFT;
> +		end = end_pfn << PAGE_SHIFT;
> +		if (!range_covered_by_cmr(tdx_cmr_array, tdx_cmr_num, start,
> +					  end)) {
> +			pr_err("[0x%llx, 0x%llx) is not fully convertible memory\n",
> +			       start, end);
> +			return -EINVAL;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> /*
> * Detect and initialize the TDX module.
> *
> @@ -371,6 +458,19 @@ static int init_tdx_module(void)
> 	if (ret)
> 		goto out;
>
> +	/*
> +	 * To avoid having to modify the page allocator to distinguish
> +	 * TDX and non-TDX memory allocation, convert all memory regions
> +	 * in memblock to TDX memory to make sure all pages managed by
> +	 * the page allocator are TDX memory.
> +	 *
> +	 * Sanity check all memory regions are fully covered by CMRs to
> +	 * make sure they are truly convertible.
> +	 */
> +	ret = check_memblock_tdx_convertible();
> +	if (ret)
> +		goto out;
> +
> 	/*
> 	 * Return -EINVAL until all steps of TDX module initialization
> 	 * process are done.
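For reference, the CMR containment test quoted above is a plain interval check. A standalone user-space sketch follows; the struct here is a hypothetical stand-in for the kernel's cmr_info, which really carries a base and a size:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for the TDX cmr_info layout: base + size. */
struct cmr {
	uint64_t base;
	uint64_t size;
};

/* True if [r1_start, r1_end) lies entirely inside [r2_start, r2_end). */
bool is_subrange(uint64_t r1_start, uint64_t r1_end,
		 uint64_t r2_start, uint64_t r2_end)
{
	return r1_start >= r2_start && r1_end <= r2_end;
}

/* True if [start, end) is fully contained in any single CMR. */
bool range_covered_by_cmr(const struct cmr *cmrs, int nr,
			  uint64_t start, uint64_t end)
{
	for (int i = 0; i < nr; i++) {
		if (is_subrange(start, end,
				cmrs[i].base, cmrs[i].base + cmrs[i].size))
			return true;
	}
	return false;
}
```

Note one property of this check: a region must fit inside a *single* CMR. A memblock region spanning two adjacent CMRs would be rejected even though every byte of it is convertible; whether that can occur in practice depends on how the firmware lays out CMRs.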