From: Sean Christopherson <sean.j.christopherson@intel.com>
To: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Cc: linux-sgx@vger.kernel.org
Subject: [PATCH for_v23 3/3] x86/sgx: Move reclaim logic out of sgx_free_page()
Date: Tue, 22 Oct 2019 15:49:22 -0700
Message-ID: <20191022224922.28144-4-sean.j.christopherson@intel.com>
In-Reply-To: <20191022224922.28144-1-sean.j.christopherson@intel.com>

Move the reclaim logic out of sgx_free_page() and into a standalone
helper to avoid taking sgx_active_page_list_lock when the page is known
to be unreclaimable, which is the vast majority of flows that free EPC
pages.

Moving the reclaim logic to a separate function also eliminates any
possibility of silently leaking a page because it is unexpectedly
reclaimable (and being actively reclaimed).

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---

I really don't like the sgx_unmark_...() name, but couldn't come up with
anything better.  Suggestions welcome...

 arch/x86/kernel/cpu/sgx/encl.c    |  3 ++-
 arch/x86/kernel/cpu/sgx/main.c    | 32 ++++++++-----------------------
 arch/x86/kernel/cpu/sgx/reclaim.c | 32 +++++++++++++++++++++++++++++++
 arch/x86/kernel/cpu/sgx/sgx.h     |  3 ++-
 4 files changed, 44 insertions(+), 26 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 8045f1ddfd62..22186d89042a 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -474,9 +474,10 @@ void sgx_encl_destroy(struct sgx_encl *encl)
 			 * The page and its radix tree entry cannot be freed
 			 * if the page is being held by the reclaimer.
 			 */
-			if (sgx_free_page(entry->epc_page))
+			if (sgx_unmark_page_reclaimable(entry->epc_page))
 				continue;
 
+			sgx_free_page(entry->epc_page);
 			encl->secs_child_cnt--;
 			entry->epc_page = NULL;
 		}
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 8e7557d3ff03..cfd8480ef563 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -108,45 +108,29 @@ struct sgx_epc_page *sgx_alloc_page(void *owner, bool reclaim)
  * sgx_free_page() - Free an EPC page
  * @page:	pointer a previously allocated EPC page
  *
- * EREMOVE an EPC page and insert it back to the list of free pages. If the
- * page is reclaimable, delete it from the active page list.
- *
- * Return:
- *   0 on success,
- *   -EBUSY if a reclaim is in progress
+ * EREMOVE an EPC page and insert it back to the list of free pages. The page
+ * must not be reclaimable.
  */
-int sgx_free_page(struct sgx_epc_page *page)
+void sgx_free_page(struct sgx_epc_page *page)
 {
 	struct sgx_epc_section *section = sgx_epc_section(page);
 	int ret;
 
 	/*
-	 * Remove the page from the active list if necessary.  If the page
-	 * is actively being reclaimed, i.e. RECLAIMABLE is set but the
-	 * page isn't on the active list, return -EBUSY as we can't free
-	 * the page at this time since it is "owned" by the reclaimer.
+	 * Don't take sgx_active_page_list_lock when asserting the page isn't
+	 * reclaimable, missing a WARN in the very rare case is preferable to
+	 * unnecessarily taking a global lock in the common case.
 	 */
-	spin_lock(&sgx_active_page_list_lock);
-	if (page->desc & SGX_EPC_PAGE_RECLAIMABLE) {
-		if (list_empty(&page->list)) {
-			spin_unlock(&sgx_active_page_list_lock);
-			return -EBUSY;
-		}
-		list_del(&page->list);
-		page->desc &= ~SGX_EPC_PAGE_RECLAIMABLE;
-	}
-	spin_unlock(&sgx_active_page_list_lock);
+	WARN_ON_ONCE(page->desc & SGX_EPC_PAGE_RECLAIMABLE);
 
 	ret = __eremove(sgx_epc_addr(page));
 	if (WARN_ONCE(ret, "EREMOVE returned %d (0x%x)", ret, ret))
-		return -EIO;
+		return;
 
 	spin_lock(&section->lock);
 	list_add_tail(&page->list, &section->page_list);
 	atomic_inc(&sgx_nr_free_pages);
 	spin_unlock(&section->lock);
-
-	return 0;
 }
 
 static void __init sgx_free_epc_section(struct sgx_epc_section *section)
diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
index 8067ce1915a4..e64c810883ec 100644
--- a/arch/x86/kernel/cpu/sgx/reclaim.c
+++ b/arch/x86/kernel/cpu/sgx/reclaim.c
@@ -125,6 +125,38 @@ void sgx_mark_page_reclaimable(struct sgx_epc_page *page)
 	spin_unlock(&sgx_active_page_list_lock);
 }
 
+/**
+ * sgx_unmark_page_reclaimable() - Remove a page from the reclaim list
+ * @page:	EPC page
+ *
+ * Clear the reclaimable flag and remove the page from the active page list.
+ *
+ * Return:
+ *   0 on success,
+ *   -EBUSY if the page is in the process of being reclaimed
+ */
+int sgx_unmark_page_reclaimable(struct sgx_epc_page *page)
+{
+	/*
+	 * Remove the page from the active list if necessary.  If the page
+	 * is actively being reclaimed, i.e. RECLAIMABLE is set but the
+	 * page isn't on the active list, return -EBUSY as we can't free
+	 * the page at this time since it is "owned" by the reclaimer.
+	 */
+	spin_lock(&sgx_active_page_list_lock);
+	if (page->desc & SGX_EPC_PAGE_RECLAIMABLE) {
+		if (list_empty(&page->list)) {
+			spin_unlock(&sgx_active_page_list_lock);
+			return -EBUSY;
+		}
+		list_del(&page->list);
+		page->desc &= ~SGX_EPC_PAGE_RECLAIMABLE;
+	}
+	spin_unlock(&sgx_active_page_list_lock);
+
+	return 0;
+}
+
 static bool sgx_reclaimer_age(struct sgx_epc_page *epc_page)
 {
 	struct sgx_encl_page *page = epc_page->owner;
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 45753f236a83..f6d23ef7c74a 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -81,10 +81,11 @@ extern spinlock_t sgx_active_page_list_lock;
 
 bool __init sgx_page_reclaimer_init(void);
 void sgx_mark_page_reclaimable(struct sgx_epc_page *page);
+int sgx_unmark_page_reclaimable(struct sgx_epc_page *page);
 void sgx_reclaim_pages(void);
 
 struct sgx_epc_page *sgx_try_alloc_page(void);
 struct sgx_epc_page *sgx_alloc_page(void *owner, bool reclaim);
-int sgx_free_page(struct sgx_epc_page *page);
+void sgx_free_page(struct sgx_epc_page *page);
 
 #endif /* _X86_SGX_H */
-- 
2.22.0

