From: "Haitao Huang" <haitao.huang@linux.intel.com>
To: "Michal Koutný" <mkoutny@suse.com>
Cc: "Mehta, Sohil" <sohil.mehta@intel.com>,
	"jarkko@kernel.org" <jarkko@kernel.org>,
	"x86@kernel.org" <x86@kernel.org>,
	"dave.hansen@linux.intel.com" <dave.hansen@linux.intel.com>,
	"cgroups@vger.kernel.org" <cgroups@vger.kernel.org>,
	"hpa@zytor.com" <hpa@zytor.com>,
	"mingo@redhat.com" <mingo@redhat.com>,
	"tj@kernel.org" <tj@kernel.org>,
	"tglx@linutronix.de" <tglx@linutronix.de>,
	"linux-sgx@vger.kernel.org" <linux-sgx@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"bp@alien8.de" <bp@alien8.de>, "Huang, Kai" <kai.huang@intel.com>,
	"mikko.ylinen@linux.intel.com" <mikko.ylinen@linux.intel.com>,
	"seanjc@google.com" <seanjc@google.com>,
	"Zhang, Bo" <zhanb@microsoft.com>,
	"kristen@linux.intel.com" <kristen@linux.intel.com>,
	"anakrish@microsoft.com" <anakrish@microsoft.com>,
	"sean.j.christopherson@intel.com"
	<sean.j.christopherson@intel.com>,
	"Li, Zhiquan1" <zhiquan1.li@intel.com>,
	"yangjie@microsoft.com" <yangjie@microsoft.com>
Subject: Re: [PATCH v6 09/12] x86/sgx: Restructure top-level EPC reclaim function
Date: Thu, 04 Jan 2024 13:20:38 -0600
Message-ID: <op.2g1eoojxwjvjmi@hhuan26-mobl.amr.corp.intel.com>
In-Reply-To: <mmcdj5ulscoxyofetdgmygytd2dnjusnoj3ypix2plnwp7nbks@bqesmrpmsca7>

Hi Michal,

On Thu, 04 Jan 2024 06:38:41 -0600, Michal Koutný <mkoutny@suse.com> wrote:

> Hello.
>
> On Mon, Dec 18, 2023 at 03:24:40PM -0600, Haitao Huang
> <haitao.huang@linux.intel.com> wrote:
>> Thanks for raising this. Actually, my understanding was that the above
>> discussion was mainly about not reclaiming by killing enclaves, i.e., I
>> assumed "reclaiming" in that context referred only to that particular kind.
>>
>> As Mikko pointed out, without per-cgroup reclaiming, the max limit of each
>> cgroup needs to accommodate the peak usage of the enclaves within that
>> cgroup. That may be inconvenient in practice, and limits could be forced to
>> be set larger than necessary to run enclaves performantly. For example, we
>> can observe the following undesired consequences compared to a system with
>> the current kernel loaded with enclaves whose total peak usage is greater
>> than the EPC capacity.
>>
>> 1) If a user wants to load the exact same enclaves but in separate cgroups,
>> then the sum of the cgroup limits must be higher than the capacity, and the
>> system will end up doing the same old global reclaiming that it currently
>> does. The cgroup is not useful at all for isolating EPC consumption.
>
> That is the use of limits to prevent a runaway cgroup from smothering the
> system. Overcommitted values in such a config are fine because the more
> simultaneous runaways it would take, the less likely that is.
> The peak consumption comes at the fair expense of others (some efficiency
> cost) and the limit contains the runaway (hence the isolation).
>

This makes sense to me in theory. Mikko, Chris Y/Bo Z, what are your thoughts
on whether this is good enough for your intended use cases?
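
To make the scenario concrete, below is a rough sketch of such an
overcommitted setup through the cgroup v2 misc controller interface.
misc.capacity, misc.max, and misc.current are the existing misc files; the
"sgx_epc" resource name and the byte values are assumptions based on this
series, and the paths are made up for illustration:

  # Enable the misc controller for child cgroups (cgroup v2 at /sys/fs/cgroup).
  echo "+misc" > /sys/fs/cgroup/cgroup.subtree_control

  # The root misc.capacity would report the platform EPC as "sgx_epc <bytes>".
  cat /sys/fs/cgroup/misc.capacity

  # Two cgroups, each limited to 3 GiB of EPC: the sum (6 GiB) overcommits a
  # 4 GiB capacity, but any single runaway cgroup is still contained at 3 GiB.
  mkdir /sys/fs/cgroup/enclave-a /sys/fs/cgroup/enclave-b
  echo "sgx_epc $((3 << 30))" > /sys/fs/cgroup/enclave-a/misc.max
  echo "sgx_epc $((3 << 30))" > /sys/fs/cgroup/enclave-b/misc.max

  # Per-cgroup usage is then visible in misc.current.
  cat /sys/fs/cgroup/enclave-a/misc.current

With limits like these, both cgroups can reach their peaks as long as they do
not peak at the same time, which I take to be the efficiency/containment
trade-off you are describing.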

>> 2) To isolate the impact of each cgroup's usage on other cgroups and yet
>> still be able to load each enclave, the user basically has to plan carefully
>> to ensure that the sum of the cgroup max limits, i.e., the sum of the peak
>> usage of the enclaves, does not exceed the capacity. That means no
>> over-committing is allowed, and the same system may not be able to load as
>> many enclaves as with the current kernel.
>
> Another "config layout" of limits is to achieve partitioning (sum ==
> capacity). That is perfect isolation, but it naturally goes against
> efficient utilization. The way other controllers approach this trade-off
> is with weights (cpu, io) or protections (memory). I'm afraid the misc
> controller is not ready for this.
>
> My opinion is to start with the simple limits (first patches) and think
> of a prioritization/guarantee mechanism based on real cases.
>

We moved away from a custom memcg-like controller with (low, high, max) to the
misc controller. But if we need to add those knobs down the road, then the
interface will need to change. So my concern with this route is whether misc
would allow any of those extensions. On the other hand, it might turn out to be
less complex to just do the reclamation per cgroup.
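
For illustration, the interface gap I am worried about looks roughly like the
sketch below. The "sgx_epc" resource name comes from this series; misc.max and
misc.current are the only per-cgroup misc knobs today, and the misc.low /
misc.high lines at the end are purely hypothetical, shown only to indicate what
a memcg-style extension would require:

  # What the misc controller exposes today: a single hard limit per resource.
  echo "sgx_epc 268435456" > /sys/fs/cgroup/enclaves/misc.max   # 256 MiB cap
  cat /sys/fs/cgroup/enclaves/misc.current                      # current usage

  # A memcg-style protection/soft-limit scheme would need new per-resource
  # knobs that misc does not have today, e.g. (hypothetical):
  #   echo "sgx_epc 134217728" > /sys/fs/cgroup/enclaves/misc.low    # protect 128 MiB
  #   echo "sgx_epc 201326592" > /sys/fs/cgroup/enclaves/misc.high   # throttle above 192 MiB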

Thanks a lot for your comments; they are really helpful!

Haitao

