From: "Huang, Kai" <kai.huang@intel.com>
To: "kvm@vger.kernel.org" <kvm@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Cc: "Hansen, Dave" <dave.hansen@intel.com>,
"Luck, Tony" <tony.luck@intel.com>,
"bagasdotme@gmail.com" <bagasdotme@gmail.com>,
"ak@linux.intel.com" <ak@linux.intel.com>,
"Wysocki, Rafael J" <rafael.j.wysocki@intel.com>,
"kirill.shutemov@linux.intel.com"
<kirill.shutemov@linux.intel.com>, "Christopherson,
Sean" <seanjc@google.com>,
"Chatre, Reinette" <reinette.chatre@intel.com>,
"pbonzini@redhat.com" <pbonzini@redhat.com>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"Yamahata, Isaku" <isaku.yamahata@intel.com>,
"peterz@infradead.org" <peterz@infradead.org>,
"Shahar, Sagi" <sagis@google.com>,
"imammedo@redhat.com" <imammedo@redhat.com>,
"Gao, Chao" <chao.gao@intel.com>,
"Brown, Len" <len.brown@intel.com>,
"sathyanarayanan.kuppuswamy@linux.intel.com"
<sathyanarayanan.kuppuswamy@linux.intel.com>,
"Huang, Ying" <ying.huang@intel.com>,
"Williams, Dan J" <dan.j.williams@intel.com>
Subject: Re: [PATCH v7 14/20] x86/virt/tdx: Set up reserved areas for all TDMRs
Date: Mon, 28 Nov 2022 09:14:24 +0000 [thread overview]
Message-ID: <0ebf26d1ee540b4e082b65c51cabe3327f6d51ac.camel@intel.com> (raw)
In-Reply-To: <5526fedd-fa54-0ad2-f356-94b167e6e290@intel.com>
On Wed, 2022-11-23 at 15:39 -0800, Dave Hansen wrote:
> > +static int tdmr_set_up_memory_hole_rsvd_areas(struct tdmr_info *tdmr,
> > + int *rsvd_idx)
> > +{
>
> This needs a comment.
>
> This is another case where it's confusing to be passing around 'struct
> tdmr_info *'. Is this *A* TDMR or an array?
It is for one TDMR, not an array. All functions named tdmr_xxx() operate on a
single TDMR.
But I agree the naming isn't clear.
>
> /*
> * Go through tdx_memlist to find holes between memory areas. If any of
> * those holes fall within @tdmr, set up a TDMR reserved area to cover
> * the hole.
> */
> static int tdmr_populate_rsvd_holes(struct list_head *tdx_memlist,
> struct tdmr_info *tdmr,
> int *rsvd_idx)
Thanks!
Should I also rename the function below, tdmr_set_up_pamt_rsvd_areas(), to
tdmr_populate_rsvd_pamts()?
There are actually two more functions in this patch: tdmr_set_up_rsvd_areas()
and tdmrs_set_up_rsvd_areas_all(). Should I rename them to
tdmr_populate_rsvd_areas() and tdmrs_populate_rsvd_areas_all() as well?
>
> > + struct tdx_memblock *tmb;
> > + u64 prev_end;
> > + int ret;
> > +
> > + /* Mark holes between memory regions as reserved */
> > + prev_end = tdmr_start(tdmr);
>
> I'm having a hard time following this, especially the mixing of
> semantics between 'prev_end' both pointing to tdmr and to tmb addresses.
>
> Here, 'prev_end' logically represents the last address which we know has
> been handled. All of the holes in the addresses below it have been
> dealt with. It is safe to set here to tdmr_start() because this
> function call is uniquely tasked with setting up reserved areas in
> 'tdmr'. So, it can safely consider any holes before tdmr_start(tdmr) as
> being handled.
Yes.
>
> But, dang, there's a lot of complexity there.
>
> First, the:
>
> /* Mark holes between memory regions as reserved */
>
> comment is misleading. It has *ZILCH* to do with the "prev_end =
> tdmr_start(tdmr);" assignment.
>
> This at least needs:
>
> /* Start looking for reserved blocks at the beginning of the TDMR: */
> prev_end = tdmr_start(tdmr);
Right. Sorry for the bad comment.
>
> but I also get the feeling that 'prev_end' is a crummy variable name. I
> don't have any better suggestions at the moment.
>
> > + list_for_each_entry(tmb, &tdx_memlist, list) {
> > + u64 start, end;
> > +
> > + start = tmb->start_pfn << PAGE_SHIFT;
> > + end = tmb->end_pfn << PAGE_SHIFT;
> > +
>
> More alignment opportunities:
>
> start = tmb->start_pfn << PAGE_SHIFT;
> end = tmb->end_pfn << PAGE_SHIFT;
Should I use PFN_PHYS()? Then it looks like we don't need this alignment.
>
>
> > + /* Break if this region is after the TDMR */
> > + if (start >= tdmr_end(tdmr))
> > + break;
> > +
> > + /* Exclude regions before this TDMR */
> > + if (end < tdmr_start(tdmr))
> > + continue;
> > +
> > + /*
> > + * Skip if no hole exists before this region. "<=" is
> > + * used because one memory region might span two TDMRs
> > + * (when the previous TDMR covers part of this region).
> > + * In this case the start address of this region is
> > + * smaller than the start address of the second TDMR.
> > + *
> > + * Update the prev_end to the end of this region where
> > + * the possible memory hole starts.
> > + */
>
> Can't this just be:
>
> /*
> * Skip over memory areas that
> * have already been dealt with.
> */
I think so. This also covers the "two contiguous memory regions" case, which
the original comment didn't mention.
>
> > + if (start <= prev_end) {
> > + prev_end = end;
> > + continue;
> > + }
> > +
> > + /* Add the hole before this region */
> > + ret = tdmr_add_rsvd_area(tdmr, rsvd_idx, prev_end,
> > + start - prev_end);
> > + if (ret)
> > + return ret;
> > +
> > + prev_end = end;
> > + }
> > +
> > + /* Add the hole after the last region if it exists. */
> > + if (prev_end < tdmr_end(tdmr)) {
> > + ret = tdmr_add_rsvd_area(tdmr, rsvd_idx, prev_end,
> > + tdmr_end(tdmr) - prev_end);
> > + if (ret)
> > + return ret;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +static int tdmr_set_up_pamt_rsvd_areas(struct tdmr_info *tdmr, int
> > *rsvd_idx,
> > + struct tdmr_info *tdmr_array,
> > + int tdmr_num)
> > +{
> > + int i, ret;
> > +
> > + /*
> > + * If any PAMT overlaps with this TDMR, the overlapping part
> > + * must also be put to the reserved area too. Walk over all
> > + * TDMRs to find out those overlapping PAMTs and put them to
> > + * reserved areas.
> > + */
> > + for (i = 0; i < tdmr_num; i++) {
> > + struct tdmr_info *tmp = tdmr_array_entry(tdmr_array, i);
> > + unsigned long pamt_start_pfn, pamt_npages;
> > + u64 pamt_start, pamt_end;
> > +
> > + tdmr_get_pamt(tmp, &pamt_start_pfn, &pamt_npages);
> > + /* Each TDMR must already have PAMT allocated */
> > + WARN_ON_ONCE(!pamt_npages || !pamt_start_pfn);
> > +
> > + pamt_start = pamt_start_pfn << PAGE_SHIFT;
> > + pamt_end = pamt_start + (pamt_npages << PAGE_SHIFT);
> > +
> > + /* Skip PAMTs outside of the given TDMR */
> > + if ((pamt_end <= tdmr_start(tdmr)) ||
> > + (pamt_start >= tdmr_end(tdmr)))
> > + continue;
> > +
> > + /* Only mark the part within the TDMR as reserved */
> > + if (pamt_start < tdmr_start(tdmr))
> > + pamt_start = tdmr_start(tdmr);
> > + if (pamt_end > tdmr_end(tdmr))
> > + pamt_end = tdmr_end(tdmr);
> > +
> > + ret = tdmr_add_rsvd_area(tdmr, rsvd_idx, pamt_start,
> > + pamt_end - pamt_start);
> > + if (ret)
> > + return ret;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +/* Compare function called by sort() for TDMR reserved areas */
> > +static int rsvd_area_cmp_func(const void *a, const void *b)
> > +{
> > + struct tdmr_reserved_area *r1 = (struct tdmr_reserved_area *)a;
> > + struct tdmr_reserved_area *r2 = (struct tdmr_reserved_area *)b;
> > +
> > + if (r1->offset + r1->size <= r2->offset)
> > + return -1;
> > + if (r1->offset >= r2->offset + r2->size)
> > + return 1;
> > +
> > + /* Reserved areas cannot overlap. The caller should guarantee. */
> > + WARN_ON_ONCE(1);
> > + return -1;
> > +}
> > +
> > +/* Set up reserved areas for a TDMR, including memory holes and PAMTs */
> > +static int tdmr_set_up_rsvd_areas(struct tdmr_info *tdmr,
> > + struct tdmr_info *tdmr_array,
> > + int tdmr_num)
> > +{
> > + int ret, rsvd_idx = 0;
> > +
> > + /* Put all memory holes within the TDMR into reserved areas */
> > + ret = tdmr_set_up_memory_hole_rsvd_areas(tdmr, &rsvd_idx);
> > + if (ret)
> > + return ret;
> > +
> > + /* Put all (overlapping) PAMTs within the TDMR into reserved areas
> > */
> > + ret = tdmr_set_up_pamt_rsvd_areas(tdmr, &rsvd_idx, tdmr_array,
> > tdmr_num);
> > + if (ret)
> > + return ret;
> > +
> > + /* TDX requires reserved areas listed in address ascending order */
> > + sort(tdmr->reserved_areas, rsvd_idx, sizeof(struct
> > tdmr_reserved_area),
> > + rsvd_area_cmp_func, NULL);
>
> Ugh, and I guess we don't know where the PAMTs will be ordered with
> respect to holes, so sorting is the easiest way to do this.
>
> <snip>
Right.