From: Haitao Huang <haitao.huang@linux.intel.com>
To: jarkko@kernel.org, dave.hansen@linux.intel.com, tj@kernel.org,
	linux-kernel@vger.kernel.org, linux-sgx@vger.kernel.org,
	cgroups@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>
Cc: kai.huang@intel.com, reinette.chatre@intel.com,
	zhiquan1.li@intel.com, kristen@linux.intel.com,
	seanjc@google.com
Subject: [PATCH v3 01/28] x86/sgx: Store struct sgx_encl when allocating new VA pages
Date: Wed, 12 Jul 2023 16:01:35 -0700
Message-ID: <20230712230202.47929-2-haitao.huang@linux.intel.com>
In-Reply-To: <20230712230202.47929-1-haitao.huang@linux.intel.com>

In a later patch, when a cgroup has exceeded its maximum capacity of EPC pages
and no reclaimable enclave EPC pages associated with the cgroup remain, the
only pages still associated with an enclave will be the unreclaimable Version
Array (VA) pages and SECS pages. The entire enclave will then need to be
killed to free up those pages.

Currently, given an enclave pointer, it is easy to find the associated VA
pages and free them. However, OOM killing an enclave based on cgroup limits
will require walking a cgroup's unreclaimable page list and finding the
enclave that owns a given SECS or VA page. This requires a backpointer from an
EPC page to its enclave, including for VA pages.

When allocating new VA pages, pass the struct sgx_encl of the enclave that is
allocating the page. sgx_alloc_epc_page() will store this pointer in the owner
field of the struct sgx_epc_page. In a later patch, VA pages will be placed in
an unreclaimable queue; when the cgroup max limit is reached, no reclaimable
pages remain, and the enclave must be OOM killed, all the VA pages associated
with that enclave can then be uncharged and freed.
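
As a rough illustration of the OOM flow described above (a sketch only, not
code from this patch: the function name is hypothetical, and the cgroup
plumbing that finds the enclave through the new backpointer is added by later
patches in this series), releasing a dead enclave's VA pages could look much
like the existing cleanup in sgx_encl_release():

  /*
   * Hypothetical sketch: once an enclave has been selected for OOM
   * killing and torn down, walk its VA pages and free the backing EPC.
   * Locating the enclave from a cgroup's unreclaimable page list (via
   * the epc_page->encl backpointer added here) and uncharging the
   * pages are left to later patches.
   */
  static void sgx_oom_free_va_pages(struct sgx_encl *encl)
  {
  	struct sgx_va_page *va_page, *tmp;

  	list_for_each_entry_safe(va_page, tmp, &encl->va_pages, list) {
  		list_del(&va_page->list);
  		sgx_encl_free_epc_page(va_page->epc_page);
  		kfree(va_page);
  	}
  }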

To avoid the casting otherwise needed to access the two types of owners
(struct sgx_encl for VA pages, struct sgx_encl_page for other pages), replace
the 'owner' field in struct sgx_epc_page with a union of the two pointer
types.
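
For illustration only (a sketch, not part of this patch): with the union in
place, call sites can pick the correct member without casting. The helper
below is hypothetical; how the owner type is identified at runtime (e.g. the
owner-type page flags added later in this series) is not shown and is stood in
for by the boolean parameter:

  /*
   * Hypothetical helper: return the owning enclave for either owner
   * type. After this patch, VA pages store the enclave pointer
   * directly; other enclave pages (including the SECS page) still
   * store the sgx_encl_page, which points back to its enclave.
   */
  static struct sgx_encl *sgx_epc_page_to_encl(struct sgx_epc_page *epc_page,
  					       bool owned_by_encl)
  {
  	if (owned_by_encl)
  		return epc_page->encl;

  	return epc_page->encl_page->encl;
  }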

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Kristen Carlson Accardi <kristen@linux.intel.com>
Signed-off-by: Haitao Huang <haitao.huang@linux.intel.com>
Cc: Sean Christopherson <seanjc@google.com>

V3:
- rename encl_owner to encl_page.
- revise commit messages
---
 arch/x86/kernel/cpu/sgx/encl.c  |  5 +++--
 arch/x86/kernel/cpu/sgx/encl.h  |  2 +-
 arch/x86/kernel/cpu/sgx/ioctl.c |  2 +-
 arch/x86/kernel/cpu/sgx/main.c  | 20 ++++++++++----------
 arch/x86/kernel/cpu/sgx/sgx.h   |  5 ++++-
 5 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 2a0e90fe2abc..98e1086eab07 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -1210,6 +1210,7 @@ void sgx_zap_enclave_ptes(struct sgx_encl *encl, unsigned long addr)
 
 /**
  * sgx_alloc_va_page() - Allocate a Version Array (VA) page
+ * @encl:    The enclave that this page is allocated to.
  * @reclaim: Reclaim EPC pages directly if none available. Enclave
  *           mutex should not be held if this is set.
  *
@@ -1219,12 +1220,12 @@ void sgx_zap_enclave_ptes(struct sgx_encl *encl, unsigned long addr)
  *   a VA page,
  *   -errno otherwise
  */
-struct sgx_epc_page *sgx_alloc_va_page(bool reclaim)
+struct sgx_epc_page *sgx_alloc_va_page(struct sgx_encl *encl, bool reclaim)
 {
 	struct sgx_epc_page *epc_page;
 	int ret;
 
-	epc_page = sgx_alloc_epc_page(NULL, reclaim);
+	epc_page = sgx_alloc_epc_page(encl, reclaim);
 	if (IS_ERR(epc_page))
 		return ERR_CAST(epc_page);
 
diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index f94ff14c9486..831d63f80f5a 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -116,7 +116,7 @@ struct sgx_encl_page *sgx_encl_page_alloc(struct sgx_encl *encl,
 					  unsigned long offset,
 					  u64 secinfo_flags);
 void sgx_zap_enclave_ptes(struct sgx_encl *encl, unsigned long addr);
-struct sgx_epc_page *sgx_alloc_va_page(bool reclaim);
+struct sgx_epc_page *sgx_alloc_va_page(struct sgx_encl *encl, bool reclaim);
 unsigned int sgx_alloc_va_slot(struct sgx_va_page *va_page);
 void sgx_free_va_slot(struct sgx_va_page *va_page, unsigned int offset);
 bool sgx_va_page_full(struct sgx_va_page *va_page);
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 21ca0a831b70..fa8c3f32ccf6 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -30,7 +30,7 @@ struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl, bool reclaim)
 		if (!va_page)
 			return ERR_PTR(-ENOMEM);
 
-		va_page->epc_page = sgx_alloc_va_page(reclaim);
+		va_page->epc_page = sgx_alloc_va_page(encl, reclaim);
 		if (IS_ERR(va_page->epc_page)) {
 			err = ERR_CAST(va_page->epc_page);
 			kfree(va_page);
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 166692f2d501..39939b7496b0 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -108,7 +108,7 @@ static unsigned long __sgx_sanitize_pages(struct list_head *dirty_page_list)
 
 static bool sgx_reclaimer_age(struct sgx_epc_page *epc_page)
 {
-	struct sgx_encl_page *page = epc_page->owner;
+	struct sgx_encl_page *page = epc_page->encl_page;
 	struct sgx_encl *encl = page->encl;
 	struct sgx_encl_mm *encl_mm;
 	bool ret = true;
@@ -140,7 +140,7 @@ static bool sgx_reclaimer_age(struct sgx_epc_page *epc_page)
 
 static void sgx_reclaimer_block(struct sgx_epc_page *epc_page)
 {
-	struct sgx_encl_page *page = epc_page->owner;
+	struct sgx_encl_page *page = epc_page->encl_page;
 	unsigned long addr = page->desc & PAGE_MASK;
 	struct sgx_encl *encl = page->encl;
 	int ret;
@@ -197,7 +197,7 @@ void sgx_ipi_cb(void *info)
 static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
 			 struct sgx_backing *backing)
 {
-	struct sgx_encl_page *encl_page = epc_page->owner;
+	struct sgx_encl_page *encl_page = epc_page->encl_page;
 	struct sgx_encl *encl = encl_page->encl;
 	struct sgx_va_page *va_page;
 	unsigned int va_offset;
@@ -250,7 +250,7 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
 static void sgx_reclaimer_write(struct sgx_epc_page *epc_page,
 				struct sgx_backing *backing)
 {
-	struct sgx_encl_page *encl_page = epc_page->owner;
+	struct sgx_encl_page *encl_page = epc_page->encl_page;
 	struct sgx_encl *encl = encl_page->encl;
 	struct sgx_backing secs_backing;
 	int ret;
@@ -312,7 +312,7 @@ static void sgx_reclaim_pages(void)
 		epc_page = list_first_entry(&sgx_active_page_list,
 					    struct sgx_epc_page, list);
 		list_del_init(&epc_page->list);
-		encl_page = epc_page->owner;
+		encl_page = epc_page->encl_page;
 
 		if (kref_get_unless_zero(&encl_page->encl->refcount) != 0)
 			chunk[cnt++] = epc_page;
@@ -326,7 +326,7 @@ static void sgx_reclaim_pages(void)
 
 	for (i = 0; i < cnt; i++) {
 		epc_page = chunk[i];
-		encl_page = epc_page->owner;
+		encl_page = epc_page->encl_page;
 
 		if (!sgx_reclaimer_age(epc_page))
 			goto skip;
@@ -365,7 +365,7 @@ static void sgx_reclaim_pages(void)
 		if (!epc_page)
 			continue;
 
-		encl_page = epc_page->owner;
+		encl_page = epc_page->encl_page;
 		sgx_reclaimer_write(epc_page, &backing[i]);
 
 		kref_put(&encl_page->encl->refcount, sgx_encl_release);
@@ -563,7 +563,7 @@ struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim)
 	for ( ; ; ) {
 		page = __sgx_alloc_epc_page();
 		if (!IS_ERR(page)) {
-			page->owner = owner;
+			page->encl_page = owner;
 			break;
 		}
 
@@ -606,7 +606,7 @@ void sgx_free_epc_page(struct sgx_epc_page *page)
 
 	spin_lock(&node->lock);
 
-	page->owner = NULL;
+	page->encl_page = NULL;
 	if (page->poison)
 		list_add(&page->list, &node->sgx_poison_page_list);
 	else
@@ -641,7 +641,7 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
 	for (i = 0; i < nr_pages; i++) {
 		section->pages[i].section = index;
 		section->pages[i].flags = 0;
-		section->pages[i].owner = NULL;
+		section->pages[i].encl_page = NULL;
 		section->pages[i].poison = 0;
 		list_add_tail(&section->pages[i].list, &sgx_dirty_page_list);
 	}
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index d2dad21259a8..dc1cbcfcf2d4 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -33,7 +33,10 @@ struct sgx_epc_page {
 	unsigned int section;
 	u16 flags;
 	u16 poison;
-	struct sgx_encl_page *owner;
+	union {
+		struct sgx_encl_page *encl_page;
+		struct sgx_encl *encl;
+	};
 	struct list_head list;
 };
 
-- 
2.25.1


Thread overview: 102+ messages
2023-07-12 23:01 [PATCH v3 00/28] Add Cgroup support for SGX EPC memory Haitao Huang
2023-07-12 23:01 ` [PATCH v3 01/28] x86/sgx: Store struct sgx_encl when allocating new VA pages Haitao Huang [this message]
2023-07-17 11:14   ` Jarkko Sakkinen
2023-07-12 23:01 ` [PATCH v3 02/28] x86/sgx: Add EPC page flags to identify owner type Haitao Huang
2023-07-17 12:41   ` Jarkko Sakkinen
2023-07-17 12:43     ` Jarkko Sakkinen
2023-07-12 23:01 ` [PATCH v3 03/28] x86/sgx: Add 'struct sgx_epc_lru_lists' to encapsulate lru list(s) Haitao Huang
2023-07-17 12:45   ` Jarkko Sakkinen
2023-07-17 13:23     ` Haitao Huang
2023-07-17 14:39       ` Jarkko Sakkinen
2023-07-24 10:04       ` Huang, Kai
2023-07-24 14:55         ` Haitao Huang
2023-07-24 23:31           ` Huang, Kai
2023-07-31 20:35             ` Haitao Huang
2023-07-12 23:01 ` [PATCH v3 04/28] x86/sgx: Use sgx_epc_lru_lists for existing active page list Haitao Huang
2023-07-17 12:47   ` Jarkko Sakkinen
2023-07-31 20:43     ` Haitao Huang
2023-07-12 23:01 ` [PATCH v3 05/28] x86/sgx: Store reclaimable epc pages in sgx_epc_lru_lists Haitao Huang
2023-07-12 23:01 ` [PATCH v3 06/28] x86/sgx: store unreclaimable EPC " Haitao Huang
2023-07-12 23:01 ` [PATCH v3 07/28] x86/sgx: Introduce EPC page states Haitao Huang
2023-07-12 23:01 ` [PATCH v3 08/28] x86/sgx: Introduce RECLAIM_IN_PROGRESS state Haitao Huang
2023-07-12 23:01 ` [PATCH v3 09/28] x86/sgx: Use a list to track to-be-reclaimed pages Haitao Huang
2023-07-12 23:01 ` [PATCH v3 10/28] x86/sgx: Allow reclaiming up to 32 pages, but scan 16 by default Haitao Huang
2023-07-12 23:01 ` [PATCH v3 11/28] x85/sgx: Return the number of EPC pages that were successfully reclaimed Haitao Huang
2023-07-29 12:47   ` Pavel Machek
2023-07-31 11:10     ` Jarkko Sakkinen
2023-07-12 23:01 ` [PATCH v3 12/28] x86/sgx: Add option to ignore age of page during EPC reclaim Haitao Huang
2023-07-12 23:01 ` [PATCH v3 13/28] x86/sgx: Prepare for multiple LRUs Haitao Huang
2023-07-12 23:01 ` [PATCH v3 14/28] x86/sgx: Expose sgx_reclaim_pages() for use by EPC cgroup Haitao Huang
2023-07-12 23:01 ` [PATCH v3 15/28] x86/sgx: Add helper to grab pages from an arbitrary EPC LRU Haitao Huang
2023-07-12 23:01 ` [PATCH v3 16/28] x86/sgx: Add EPC OOM path to forcefully reclaim EPC Haitao Huang
2023-07-12 23:01 ` [PATCH v3 17/28] x86/sgx: fix a NULL pointer Haitao Huang
2023-07-17 12:48   ` Jarkko Sakkinen
2023-07-17 12:49     ` Jarkko Sakkinen
2023-07-17 13:14       ` Haitao Huang
2023-07-17 14:33         ` Jarkko Sakkinen
2023-07-17 15:49     ` Dave Hansen
2023-07-17 18:49       ` Haitao Huang
2023-07-17 18:52       ` Jarkko Sakkinen
2023-07-12 23:01 ` [PATCH v3 18/28] cgroup/misc: Fix an overflow Haitao Huang
2023-07-17 13:15   ` Jarkko Sakkinen
2023-07-12 23:01 ` [PATCH v3 19/28] cgroup/misc: Add per resource callbacks for CSS events Haitao Huang
2023-07-17 13:16   ` Jarkko Sakkinen
2023-07-12 23:01 ` [PATCH v3 20/28] cgroup/misc: Add SGX EPC resource type and export APIs for SGX driver Haitao Huang
2023-07-12 23:01 ` [PATCH v3 21/28] x86/sgx: Limit process EPC usage with misc cgroup controller Haitao Huang
2023-07-13  0:03   ` Randy Dunlap
2023-08-17 15:12   ` Mikko Ylinen
2023-07-12 23:01 ` [PATCH v3 22/28] Docs/x86/sgx: Add description for cgroup support Haitao Huang
2023-07-13  0:10   ` Randy Dunlap
2023-07-14 20:01     ` Haitao Huang
2023-07-14 20:26   ` Haitao Huang
2023-08-17 15:18   ` Mikko Ylinen
2023-07-12 23:01 ` [PATCH v3 23/28] selftests/sgx: Retry the ioctl()'s returned with EAGAIN Haitao Huang
2023-07-12 23:01 ` [PATCH v3 24/28] selftests/sgx: Move ENCL_HEAP_SIZE_DEFAULT to main.c Haitao Huang
2023-07-12 23:01 ` [PATCH v3 25/28] selftests/sgx: Use encl->encl_size in sigstruct.c Haitao Huang
2023-07-12 23:02 ` [PATCH v3 26/28] selftests/sgx: Include the dynamic heap size to the ELRANGE calculation Haitao Huang
2023-07-12 23:02 ` [PATCH v3 27/28] selftests/sgx: Add SGX selftest augment_via_eaccept_long Haitao Huang
2023-07-12 23:02 ` [PATCH v3 28/28] selftests/sgx: Add scripts for epc cgroup testing Haitao Huang
2023-07-17 11:02 ` [PATCH v3 00/28] Add Cgroup support for SGX EPC memory Jarkko Sakkinen
2023-07-24 19:09 ` Sohil Mehta
2023-07-25 17:16   ` Haitao Huang
2023-08-17 15:04 ` Mikko Ylinen
