From: Haitao Huang <haitao.huang@linux.intel.com>
To: jarkko@kernel.org, dave.hansen@linux.intel.com, tj@kernel.org, linux-kernel@vger.kernel.org, linux-sgx@vger.kernel.org, x86@kernel.org, cgroups@vger.kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, hpa@zytor.com, sohil.mehta@intel.com
Cc: zhiquan1.li@intel.com, kristen@linux.intel.com, seanjc@google.com, zhanb@microsoft.com, anakrish@microsoft.com, mikko.ylinen@linux.intel.com, yangjie@microsoft.com
Subject: [PATCH v5 13/18] x86/sgx: Expose sgx_reclaim_pages() for use by EPC cgroup
Date: Fri, 22 Sep 2023 20:06:52 -0700	[thread overview]
Message-ID: <20230923030657.16148-14-haitao.huang@linux.intel.com> (raw)
In-Reply-To: <20230923030657.16148-1-haitao.huang@linux.intel.com>

From: Sean Christopherson <sean.j.christopherson@intel.com>

Adjust and expose the top-level reclaim function as
sgx_reclaim_epc_pages() for use by the upcoming EPC cgroup, which will
initiate reclaim to enforce the max limit.

Make the following adjustments to the function signature:

1) Take a parameter that specifies the number of pages to scan for
   reclaiming. Define a max value of 32, but scan only 16 pages (the
   existing SGX_NR_TO_SCAN) for the global reclaimer (ksgxd). The EPC
   cgroup will use this parameter to specify a desired number of pages
   to be reclaimed, up to the max value of 32.

2) Take a flag to force reclaiming a page regardless of its age. The
   EPC cgroup will use the flag to enforce its limits by draining the
   reclaimable lists before resorting to other measures, e.g.,
   forcefully killing enclaves.

3) Return the number of reclaimed pages. The EPC cgroup will use the
   result to track reclaiming progress and escalate to a more forceful
   reclaiming mode, e.g., calling this function with the flag to ignore
   the age of pages.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Co-developed-by: Kristen Carlson Accardi <kristen@linux.intel.com>
Signed-off-by: Kristen Carlson Accardi <kristen@linux.intel.com>
Co-developed-by: Haitao Huang <haitao.huang@linux.intel.com>
Signed-off-by: Haitao Huang <haitao.huang@linux.intel.com>
Cc: Sean Christopherson <seanjc@google.com>
---
V4:
- Combined the 3 patches that made the individual changes to the
  function signature.
- Removed 'high' limit in commit message.
---
 arch/x86/kernel/cpu/sgx/main.c | 31 +++++++++++++++++++++----------
 arch/x86/kernel/cpu/sgx/sgx.h  |  1 +
 2 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 3b875ab4dcd0..4e1a3e038db5 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -18,6 +18,11 @@
 #include "encl.h"
 #include "encls.h"
 
+/*
+ * Maximum number of pages to scan for reclaiming.
+ */
+#define SGX_NR_TO_SCAN_MAX	32
+
 struct sgx_epc_section sgx_epc_sections[SGX_MAX_EPC_SECTIONS];
 static int sgx_nr_epc_sections;
 static struct task_struct *ksgxd_tsk;
@@ -279,7 +284,11 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page,
 	mutex_unlock(&encl->lock);
 }
 
-/*
+/**
+ * sgx_reclaim_epc_pages() - Reclaim EPC pages from the consumers
+ * @nr_to_scan:	Number of EPC pages to scan for reclaim
+ * @ignore_age:	Reclaim a page even if it is young
+ *
  * Take a fixed number of pages from the head of the active page pool and
  * reclaim them to the enclave's private shmem files. Skip the pages, which have
  * been accessed since the last scan. Move those pages to the tail of active
@@ -292,15 +301,14 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page,
  * problematic as it would increase the lock contention too much, which would
  * halt forward progress.
  */
-static void sgx_reclaim_pages(void)
+size_t sgx_reclaim_epc_pages(size_t nr_to_scan, bool ignore_age)
 {
-	struct sgx_backing backing[SGX_NR_TO_SCAN];
+	struct sgx_backing backing[SGX_NR_TO_SCAN_MAX];
 	struct sgx_epc_page *epc_page, *tmp;
 	struct sgx_encl_page *encl_page;
 	pgoff_t page_index;
 	LIST_HEAD(iso);
-	int ret;
-	int i;
+	size_t ret, i;
 
 	spin_lock(&sgx_global_lru.lock);
 	for (i = 0; i < SGX_NR_TO_SCAN; i++) {
@@ -326,13 +334,14 @@ static void sgx_reclaim_pages(void)
 	spin_unlock(&sgx_global_lru.lock);
 
 	if (list_empty(&iso))
-		return;
+		return 0;
 
 	i = 0;
 	list_for_each_entry_safe(epc_page, tmp, &iso, list) {
 		encl_page = epc_page->encl_page;
 
-		if (!sgx_reclaimer_age(epc_page))
+		if (i == SGX_NR_TO_SCAN_MAX ||
+		    (!ignore_age && !sgx_reclaimer_age(epc_page)))
 			goto skip;
 
 		page_index = PFN_DOWN(encl_page->desc - encl_page->encl->base);
@@ -371,6 +380,8 @@ static void sgx_reclaim_pages(void)
 
 		sgx_free_epc_page(epc_page);
 	}
+
+	return i;
 }
 
 static bool sgx_should_reclaim(unsigned long watermark)
@@ -387,7 +398,7 @@ static bool sgx_should_reclaim(unsigned long watermark)
 void sgx_reclaim_direct(void)
 {
 	if (sgx_should_reclaim(SGX_NR_LOW_PAGES))
-		sgx_reclaim_pages();
+		sgx_reclaim_epc_pages(SGX_NR_TO_SCAN, false);
 }
 
 static int ksgxd(void *p)
@@ -410,7 +421,7 @@ static int ksgxd(void *p)
 			     sgx_should_reclaim(SGX_NR_HIGH_PAGES));
 
 		if (sgx_should_reclaim(SGX_NR_HIGH_PAGES))
-			sgx_reclaim_pages();
+			sgx_reclaim_epc_pages(SGX_NR_TO_SCAN, false);
 
 		cond_resched();
 	}
@@ -582,7 +593,7 @@ struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim)
 			break;
 		}
 
-		sgx_reclaim_pages();
+		sgx_reclaim_epc_pages(SGX_NR_TO_SCAN, false);
 
 		cond_resched();
 	}
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 6c0bfdc209c0..7e7f1f36d31e 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -179,6 +179,7 @@ void sgx_record_epc_page(struct sgx_epc_page *page, unsigned long flags);
 int sgx_drop_epc_page(struct sgx_epc_page *page);
 struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim);
 bool sgx_epc_oom(struct sgx_epc_lru_lists *lrus);
+size_t sgx_reclaim_epc_pages(size_t nr_to_scan, bool ignore_age);
 
 void sgx_ipi_cb(void *info);
-- 
2.25.1