All of lore.kernel.org
 help / color / mirror / Atom feed
From: "Huang, Kai" <kai.huang@intel.com>
To: "mingo@redhat.com" <mingo@redhat.com>,
	"jarkko@kernel.org" <jarkko@kernel.org>,
	"x86@kernel.org" <x86@kernel.org>,
	"dave.hansen@linux.intel.com" <dave.hansen@linux.intel.com>,
	"cgroups@vger.kernel.org" <cgroups@vger.kernel.org>,
	"hpa@zytor.com" <hpa@zytor.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-sgx@vger.kernel.org" <linux-sgx@vger.kernel.org>,
	"mkoutny@suse.com" <mkoutny@suse.com>,
	"haitao.huang@linux.intel.com" <haitao.huang@linux.intel.com>,
	"tglx@linutronix.de" <tglx@linutronix.de>,
	"tj@kernel.org" <tj@kernel.org>,
	"Mehta, Sohil" <sohil.mehta@intel.com>,
	"bp@alien8.de" <bp@alien8.de>
Cc: "mikko.ylinen@linux.intel.com" <mikko.ylinen@linux.intel.com>,
	"seanjc@google.com" <seanjc@google.com>,
	"Zhang, Bo" <zhanb@microsoft.com>,
	"kristen@linux.intel.com" <kristen@linux.intel.com>,
	"anakrish@microsoft.com" <anakrish@microsoft.com>,
	"sean.j.christopherson@intel.com"
	<sean.j.christopherson@intel.com>,
	"Li, Zhiquan1" <zhiquan1.li@intel.com>,
	"yangjie@microsoft.com" <yangjie@microsoft.com>
Subject: Re: [PATCH v6 09/12] x86/sgx: Restructure top-level EPC reclaim function
Date: Mon, 27 Nov 2023 09:57:03 +0000	[thread overview]
Message-ID: <431c5d7f5aee7d11ec2e8aa2e526fde438fa53b4.camel@intel.com> (raw)
In-Reply-To: <op.2e0yod2lwjvjmi@hhuan26-mobl.amr.corp.intel.com>

On Mon, 2023-11-27 at 00:27 +0800, Haitao Huang wrote:
> On Mon, 20 Nov 2023 11:45:46 +0800, Huang, Kai <kai.huang@intel.com> wrote:
> 
> > On Mon, 2023-10-30 at 11:20 -0700, Haitao Huang wrote:
> > > From: Sean Christopherson <sean.j.christopherson@intel.com>
> > > 
> > > To prepare for per-cgroup reclamation, separate the top-level reclaim
> > > function, sgx_reclaim_epc_pages(), into two separate functions:
> > > 
> > > - sgx_isolate_epc_pages() scans and isolates reclaimable pages from a  
> > > given LRU list.
> > > - sgx_do_epc_reclamation() performs the real reclamation for the  
> > > already isolated pages.
> > > 
> > > Create a new function, sgx_reclaim_epc_pages_global(), calling those two
> > > in succession, to replace the original sgx_reclaim_epc_pages(). The
> > > above two functions will serve as building blocks for the reclamation
> > > flows in later EPC cgroup implementation.
> > > 
> > > sgx_do_epc_reclamation() returns the number of reclaimed pages. The EPC
> > > cgroup will use the result to track reclaiming progress.
> > > 
> > > sgx_isolate_epc_pages() returns the additional number of pages to scan
> > > for current epoch of reclamation. The EPC cgroup will use the result to
> > > determine if more scanning to be done in LRUs in its children groups.
> > 
> > This changelog says nothing about "why", but only mentions the  
> > "implementation".
> > 
> > For instance, assuming we need to reclaim @npages_to_reclaim from the
> > @epc_cgrp_to_reclaim and its descendants, why cannot we do:
> > 
> > 	for_each_cgroup_and_descendants(&epc_cgrp_to_reclaim, &epc_cgrp) {
> > 		if (npages_to_reclaim <= 0)
> > 			return;
> > 
> > 		npages_to_reclaim -= sgx_reclaim_pages_lru(&epc_cgrp->lru,
> > 					npages_to_reclaim);
> > 	}
> > 
> > Is there any difference to have "isolate" + "reclaim"?
> > 
> 
> This is to optimize "reclaim". See how etrack was done in sgx_encl_ewb.
> Here we just follow the same design as ksgxd for each reclamation cycle.

I don't see how did you "follow" ksgxd.  If I am guessing correctly, you are
afraid of there might be less than 16 pages in a given EPC cgroup, thus w/o
splitting into "isolate" + "reclaim" you might feed the "reclaim" less than 16
pages, which might cause some performance degrade?

But is this a common case?  Should we even worry about this?

I suppose for such new feature we should bring functionality first and then
optimization if you have real performance data to show.

> 
> > > 
> > > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > > Co-developed-by: Kristen Carlson Accardi <kristen@linux.intel.com>
> > > Signed-off-by: Kristen Carlson Accardi <kristen@linux.intel.com>
> > > Co-developed-by: Haitao Huang <haitao.huang@linux.intel.com>
> > > Signed-off-by: Haitao Huang <haitao.huang@linux.intel.com>
> > > Cc: Sean Christopherson <seanjc@google.com>
> > > ---
> > > 
> > 
> > [...]
> > 
> > > +/**
> > > + * sgx_do_epc_reclamation() - Perform reclamation for isolated EPC  
> > > pages.
> > > + * @iso:		List of isolated pages for reclamation
> > > + *
> > > + * Take a list of EPC pages and reclaim them to the enclave's private  
> > > shmem files.  Do not
> > > + * reclaim the pages that have been accessed since the last scan, and  
> > > move each of those pages
> > > + * to the tail of its tracking LRU list.
> > > + *
> > > + * Limit the number of pages to be processed up to SGX_NR_TO_SCAN_MAX  
> > > per call in order to
> > > + * degrade amount of IPI's and ETRACK's potentially required.  
> > > sgx_encl_ewb() does degrade a bit
> > > + * among the HW threads with three stage EWB pipeline (EWB, ETRACK +  
> > > EWB and IPI + EWB) but not
> > > + * sufficiently. Reclaiming one page at a time would also be  
> > > problematic as it would increase
> > > + * the lock contention too much, which would halt forward progress.
> > 
> > This is kinda optimization, correct?  Is there any real performance data  
> > to
> > justify this?
> 
> The above sentences were there originally. This optimization was justified.

I am talking about 16 -> 32.

> 
> > If this optimization is useful, shouldn't we bring this
> > optimization to the current sgx_reclaim_pages() instead, e.g., just  
> > increase
> > SGX_NR_TO_SCAN (16) to SGX_NR_TO_SCAN_MAX (32)?
> > 
> 
> SGX_NR_TO_SCAN_MAX might be designed earlier for other reasons I don't  
> know. Currently it is really the buffer size allocated in  
> sgx_reclaim_pages(). Both cgroup and ksgxd scan 16 pages a time.Maybe we  
> should just use SGX_NR_TO_SCAN. No _MAX needed. The point was to batch  
> reclamation to certain number to minimize impact of EWB pipeline. 16 was  
> the original design.
> 

Please don't leave why you are trying to do this to the reviewers.  If you don't
know, then just drop this.


  reply	other threads:[~2023-11-27  9:59 UTC|newest]

Thread overview: 62+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2023-10-30 18:20 [PATCH v6 00/12] Add Cgroup support for SGX EPC memory Haitao Huang
2023-10-30 18:20 ` [PATCH v6 01/12] cgroup/misc: Add per resource callbacks for CSS events Haitao Huang
2023-11-15 20:25   ` Jarkko Sakkinen
2024-01-09  3:37     ` Haitao Huang
2024-01-10 19:55       ` Jarkko Sakkinen
2024-01-05  9:45   ` Michal Koutný
2024-01-06  1:42     ` Haitao Huang
2023-10-30 18:20 ` [PATCH v6 02/12] cgroup/misc: Export APIs for SGX driver Haitao Huang
2023-10-30 18:20 ` [PATCH v6 03/12] cgroup/misc: Add SGX EPC resource type Haitao Huang
2023-10-30 18:20 ` [PATCH v6 04/12] x86/sgx: Implement basic EPC misc cgroup functionality Haitao Huang
2023-11-06 12:09   ` Huang, Kai
2023-11-06 18:59     ` Haitao Huang
2023-11-06 22:18       ` Huang, Kai
2023-11-07  1:16         ` Haitao Huang
2023-11-07  2:08           ` Haitao Huang
2023-11-07 19:07             ` Huang, Kai
2023-11-20  3:16             ` Huang, Kai
2023-11-26 16:01               ` Haitao Huang
2023-11-26 16:32                 ` Haitao Huang
2023-11-06 22:23   ` Huang, Kai
2023-11-15 20:48   ` Jarkko Sakkinen
2023-10-30 18:20 ` [PATCH v6 05/12] x86/sgx: Add sgx_epc_lru_list to encapsulate LRU list Haitao Huang
2023-10-30 18:20 ` [PATCH v6 06/12] x86/sgx: Use sgx_epc_lru_list for existing active page list Haitao Huang
2023-10-30 18:20 ` [PATCH v6 07/12] x86/sgx: Introduce EPC page states Haitao Huang
2023-11-15 20:53   ` Jarkko Sakkinen
2024-01-05 17:57   ` Dave Hansen
2024-01-06  1:45     ` Haitao Huang
2023-10-30 18:20 ` [PATCH v6 08/12] x86/sgx: Use a list to track to-be-reclaimed pages Haitao Huang
2023-11-15 20:59   ` Jarkko Sakkinen
2023-10-30 18:20 ` [PATCH v6 09/12] x86/sgx: Restructure top-level EPC reclaim function Haitao Huang
2023-11-20  3:45   ` Huang, Kai
2023-11-26 16:27     ` Haitao Huang
2023-11-27  9:57       ` Huang, Kai [this message]
2023-12-12  4:04         ` Haitao Huang
2023-12-13 11:17           ` Huang, Kai
2023-12-15 19:49             ` Haitao Huang
2023-12-18  1:44               ` Huang, Kai
2023-12-18 17:32                 ` Mikko Ylinen
2023-12-18 21:24                 ` Haitao Huang
2024-01-03 16:37                   ` Dave Hansen
2024-01-04 19:11                     ` Haitao Huang
2024-01-04 19:19                       ` Jarkko Sakkinen
2024-01-04 19:27                       ` Dave Hansen
2024-01-04 21:01                         ` Haitao Huang
2024-01-05 14:43                       ` Mikko Ylinen
2024-01-04 12:38                   ` Michal Koutný
2024-01-04 19:20                     ` Haitao Huang
2024-01-12 17:07                 ` Haitao Huang
2024-01-13 21:04                   ` Jarkko Sakkinen
2024-01-13 21:08                     ` Jarkko Sakkinen
2023-10-30 18:20 ` [PATCH v6 10/12] x86/sgx: Implement EPC reclamation for cgroup Haitao Huang
2023-11-06 15:58   ` [PATCH] x86/sgx: Charge proper mem_cgroup for usage due to EPC reclamation by cgroups Haitao Huang
2023-11-06 16:10   ` [PATCH v6 10/12] x86/sgx: Implement EPC reclamation for cgroup Haitao Huang
2023-10-30 18:20 ` [PATCH v6 11/12] Docs/x86/sgx: Add description for cgroup support Haitao Huang
2023-10-30 18:20 ` [PATCH v6 12/12] selftests/sgx: Add scripts for EPC cgroup testing Haitao Huang
2023-11-15 21:00   ` Jarkko Sakkinen
2023-11-15 21:22     ` Haitao Huang
2023-11-06  3:26 ` [PATCH v6 00/12] Add Cgroup support for SGX EPC memory Jarkko Sakkinen
2023-11-06 15:48   ` Haitao Huang
2023-11-08  1:00     ` Haitao Huang
2024-01-05 18:29 ` Dave Hansen
2024-01-05 20:13   ` Haitao Huang

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=431c5d7f5aee7d11ec2e8aa2e526fde438fa53b4.camel@intel.com \
    --to=kai.huang@intel.com \
    --cc=anakrish@microsoft.com \
    --cc=bp@alien8.de \
    --cc=cgroups@vger.kernel.org \
    --cc=dave.hansen@linux.intel.com \
    --cc=haitao.huang@linux.intel.com \
    --cc=hpa@zytor.com \
    --cc=jarkko@kernel.org \
    --cc=kristen@linux.intel.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-sgx@vger.kernel.org \
    --cc=mikko.ylinen@linux.intel.com \
    --cc=mingo@redhat.com \
    --cc=mkoutny@suse.com \
    --cc=sean.j.christopherson@intel.com \
    --cc=seanjc@google.com \
    --cc=sohil.mehta@intel.com \
    --cc=tglx@linutronix.de \
    --cc=tj@kernel.org \
    --cc=x86@kernel.org \
    --cc=yangjie@microsoft.com \
    --cc=zhanb@microsoft.com \
    --cc=zhiquan1.li@intel.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.