linux-sgx.vger.kernel.org archive mirror
* [PATCH v2 0/5] x86/sgx: Fix lock ordering bug w/ EADD
@ 2019-08-27 19:27 Sean Christopherson
  2019-08-27 19:27 ` [PATCH v2 1/5] x86/sgx: Convert encl->flags from an unsigned int to an atomic Sean Christopherson
                   ` (4 more replies)
  0 siblings, 5 replies; 15+ messages in thread
From: Sean Christopherson @ 2019-08-27 19:27 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx

As discovered by Jarkko, taking mm->mmap_sem around EADD can lead to
deadlock as attempting to acquire mmap_sem while holding encl->lock
violates SGX's lock ordering.

Resolving the issue simply by reversing the lock ordering gets ugly due
to the behavior of sgx_encl_grow(), which has a path that drops encl->lock
in order to do EPC page reclaim, i.e. moving mm->mmap_sem up would require
it to be dropped and reacquired as well.

Luckily, the entire flow can be simplified by preventing userspace from
invoking concurrent ioctls on a single enclave.  Requiring ioctls to be
serialized means encl->lock doesn't need to be held to grow/shrink the
enclave, since only ioctls can grow/shrink the enclave.  This also
eliminates theoretical cases that sgx_encl_grow() has to handle, e.g. the
enclave being initialized while it's waiting on reclaim, since the
protection provided by serializing ioctls isn't dropped to do reclaim.
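The "in ioctl" guard can be sketched in userspace with C11 atomics (not the kernel's atomic_t API); the flag value and the -EBUSY constant below are illustrative stand-ins for the real definitions:

```c
#include <assert.h>
#include <stdatomic.h>

/* Illustrative flag bit; mirrors the idea of an SGX_ENCL_IOCTL bit. */
#define ENCL_IOCTL (1u << 4)

static atomic_uint encl_flags;

/*
 * The first caller's fetch-or sees the bit clear and proceeds; any
 * concurrent caller sees it already set and bails out with -EBUSY.
 */
static int ioctl_enter(void)
{
	if (atomic_fetch_or(&encl_flags, ENCL_IOCTL) & ENCL_IOCTL)
		return -16;	/* -EBUSY */
	return 0;
}

/* Clear the bit on the way out so the next ioctl can enter. */
static void ioctl_exit(void)
{
	atomic_fetch_and(&encl_flags, ~ENCL_IOCTL);
}
```

A second ioctl attempted while one is in flight is rejected; once the first completes, ioctls proceed again, one at a time.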

v2:
  - Make encl->flags an atomic, reuse for "in ioctl" detection. [Jarkko]
  - Drop smp_{r,w}mb() patch as it is superseded by atomic flags. [Jarkko]
  - Tack on two interdependent bug fixes found when auditing encl->flags.
  - Rebase to Jarkko's latest master.

Sean Christopherson (5):
  x86/sgx: Convert encl->flags from an unsigned int to an atomic
  x86/sgx: Reject concurrent ioctls on single enclave
  x86/sgx: Take encl->lock inside of mm->mmap_sem for EADD
  x86/sgx: Reject all ioctls on dead enclaves
  x86/sgx: Destroy the enclave if EEXTEND fails

 arch/x86/kernel/cpu/sgx/driver.c  |   3 +-
 arch/x86/kernel/cpu/sgx/encl.c    |  18 ++--
 arch/x86/kernel/cpu/sgx/encl.h    |   3 +-
 arch/x86/kernel/cpu/sgx/ioctl.c   | 163 ++++++++++++++++--------------
 arch/x86/kernel/cpu/sgx/reclaim.c |  10 +-
 5 files changed, 105 insertions(+), 92 deletions(-)

-- 
2.22.0


^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH v2 1/5] x86/sgx: Convert encl->flags from an unsigned int to an atomic
  2019-08-27 19:27 [PATCH v2 0/5] x86/sgx: Fix lock ordering bug w/ EADD Sean Christopherson
@ 2019-08-27 19:27 ` Sean Christopherson
  2019-08-29 13:09   ` Jarkko Sakkinen
  2019-08-27 19:27 ` [PATCH v2 2/5] x86/sgx: Reject concurrent ioctls on single enclave Sean Christopherson
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 15+ messages in thread
From: Sean Christopherson @ 2019-08-27 19:27 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx

Make encl->flags an atomic to ensure that all enclave state is visible
in memory prior to SGX_ENCL_CREATED being set.  Without a compiler
memory barrier, adding pages and/or initializing the enclave could
theoretically consume stale data.

Alternatively, a smp_wmb() + smp_rmb() could be added specifically for
SGX_ENCL_CREATED, but making flags an atomic allows future patches to
introduce similar checks, e.g. to force serialize ioctls, and the
downsides to using an atomic are negligible, e.g. locked transactions
are only introduced in slow paths since most flows can use atomic_read()
to query flags.
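The publish/consume pattern being described can be sketched in userspace with C11 atomics rather than the kernel's atomic_t; the struct layout and flag name below are illustrative:

```c
#include <assert.h>
#include <stdatomic.h>

#define ENCL_CREATED (1u << 0)

/* Hypothetical enclave with one piece of non-atomic state. */
struct encl {
	atomic_uint flags;
	unsigned long size;	/* must be visible before CREATED is set */
};

/* Write all enclave state first, then publish the flag with an atomic
 * RMW (the analogue of atomic_or() ordering the store after the prep). */
static void encl_publish(struct encl *e, unsigned long size)
{
	e->size = size;
	atomic_fetch_or(&e->flags, ENCL_CREATED);
}

/* The analogue of atomic_read(): a plain load suffices to query flags. */
static int encl_is_created(struct encl *e)
{
	return atomic_load(&e->flags) & ENCL_CREATED;
}
```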

Suggested-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kernel/cpu/sgx/driver.c  |  3 ++-
 arch/x86/kernel/cpu/sgx/encl.c    | 18 +++++++++++-------
 arch/x86/kernel/cpu/sgx/encl.h    |  2 +-
 arch/x86/kernel/cpu/sgx/ioctl.c   | 16 ++++++++--------
 arch/x86/kernel/cpu/sgx/reclaim.c | 10 +++++-----
 5 files changed, 27 insertions(+), 22 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/driver.c b/arch/x86/kernel/cpu/sgx/driver.c
index e740d71e2311..a1da479ffd3a 100644
--- a/arch/x86/kernel/cpu/sgx/driver.c
+++ b/arch/x86/kernel/cpu/sgx/driver.c
@@ -31,6 +31,7 @@ static int sgx_open(struct inode *inode, struct file *file)
 	if (!encl)
 		return -ENOMEM;
 
+	atomic_set(&encl->flags, 0);
 	kref_init(&encl->refcount);
 	INIT_LIST_HEAD(&encl->va_pages);
 	INIT_RADIX_TREE(&encl->page_tree, GFP_KERNEL);
@@ -77,7 +78,7 @@ static int sgx_release(struct inode *inode, struct file *file)
 	};
 
 	mutex_lock(&encl->lock);
-	encl->flags |= SGX_ENCL_DEAD;
+	atomic_or(SGX_ENCL_DEAD, &encl->flags);
 	mutex_unlock(&encl->lock);
 
 	kref_put(&encl->refcount, sgx_encl_release);
diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 08ca0cfb1742..b6cd2a800eb2 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -107,6 +107,7 @@ static struct sgx_encl_page *sgx_encl_load_page(struct sgx_encl *encl,
 {
 	struct sgx_epc_page *epc_page;
 	struct sgx_encl_page *entry;
+	unsigned int encl_flags;
 
 	/* If process was forked, VMA is still there but vm_private_data is set
 	 * to NULL.
@@ -114,8 +115,9 @@ static struct sgx_encl_page *sgx_encl_load_page(struct sgx_encl *encl,
 	if (!encl)
 		return ERR_PTR(-EFAULT);
 
-	if ((encl->flags & SGX_ENCL_DEAD) ||
-	    !(encl->flags & SGX_ENCL_INITIALIZED))
+	encl_flags = atomic_read(&encl->flags);
+	if ((encl_flags & SGX_ENCL_DEAD) ||
+	    !(encl_flags & SGX_ENCL_INITIALIZED))
 		return ERR_PTR(-EFAULT);
 
 	entry = radix_tree_lookup(&encl->page_tree, addr >> PAGE_SHIFT);
@@ -217,7 +219,7 @@ int sgx_encl_mm_add(struct sgx_encl *encl, struct mm_struct *mm)
 	struct sgx_encl_mm *encl_mm;
 	int ret;
 
-	if (encl->flags & SGX_ENCL_DEAD)
+	if (atomic_read(&encl->flags) & SGX_ENCL_DEAD)
 		return -EINVAL;
 
 	/*
@@ -395,6 +397,7 @@ static int sgx_vma_access(struct vm_area_struct *vma, unsigned long addr,
 	struct sgx_encl_page *entry = NULL;
 	unsigned long align;
 	char data[sizeof(unsigned long)];
+	unsigned int encl_flags;
 	int offset;
 	int cnt;
 	int ret = 0;
@@ -406,9 +409,10 @@ static int sgx_vma_access(struct vm_area_struct *vma, unsigned long addr,
 	if (!encl)
 		return -EFAULT;
 
-	if (!(encl->flags & SGX_ENCL_DEBUG) ||
-	    !(encl->flags & SGX_ENCL_INITIALIZED) ||
-	    (encl->flags & SGX_ENCL_DEAD))
+	encl_flags = atomic_read(&encl->flags);
+	if (!(encl_flags & SGX_ENCL_DEBUG) ||
+	    !(encl_flags & SGX_ENCL_INITIALIZED) ||
+	    (encl_flags & SGX_ENCL_DEAD))
 		return -EFAULT;
 
 	for (i = 0; i < len; i += cnt) {
@@ -495,7 +499,7 @@ void sgx_encl_destroy(struct sgx_encl *encl)
 	struct radix_tree_iter iter;
 	void **slot;
 
-	encl->flags |= SGX_ENCL_DEAD;
+	atomic_or(SGX_ENCL_DEAD, &encl->flags);
 
 	radix_tree_for_each_slot(slot, &encl->page_tree, &iter, 0) {
 		entry = *slot;
diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index 37b5c4bcda7a..ffc09e1a0fc6 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -67,7 +67,7 @@ struct sgx_encl_mm {
 };
 
 struct sgx_encl {
-	unsigned int flags;
+	atomic_t flags;
 	u64 secs_attributes;
 	u64 allowed_attributes;
 	unsigned int page_cnt;
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 1740eca4f163..d05cf051ca45 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -23,7 +23,7 @@ static struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl,
 	BUILD_BUG_ON(SGX_VA_SLOT_COUNT !=
 		(SGX_ENCL_PAGE_VA_OFFSET_MASK >> 3) + 1);
 
-	if (encl->flags & disallowed_flags)
+	if (atomic_read(&encl->flags) & disallowed_flags)
 		return ERR_PTR(-EFAULT);
 
 	if (!(encl->page_cnt % SGX_VA_SLOT_COUNT)) {
@@ -42,7 +42,7 @@ static struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl,
 			err = ERR_CAST(va_page->epc_page);
 			kfree(va_page);
 			return err;
-		} else if (encl->flags & disallowed_flags) {
+		} else if (atomic_read(&encl->flags) & disallowed_flags) {
 			sgx_free_page(va_page->epc_page);
 			kfree(va_page);
 			return ERR_PTR(-EFAULT);
@@ -219,7 +219,7 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs)
 	}
 
 	if (secs->attributes & SGX_ATTR_DEBUG)
-		encl->flags |= SGX_ENCL_DEBUG;
+		atomic_or(SGX_ENCL_DEBUG, &encl->flags);
 
 	encl->secs.encl = encl;
 	encl->secs_attributes = secs->attributes;
@@ -233,7 +233,7 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs)
 	 * allows other flows to check if the enclave has been created without
 	 * taking encl->lock.
 	 */
-	encl->flags |= SGX_ENCL_CREATED;
+	atomic_or(SGX_ENCL_CREATED, &encl->flags);
 
 	mutex_unlock(&encl->lock);
 	return 0;
@@ -478,7 +478,7 @@ static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg)
 	struct sgx_enclave_add_page addp;
 	struct sgx_secinfo secinfo;
 
-	if (!(encl->flags & SGX_ENCL_CREATED))
+	if (!(atomic_read(&encl->flags) & SGX_ENCL_CREATED))
 		return -EINVAL;
 
 	if (copy_from_user(&addp, arg, sizeof(addp)))
@@ -544,7 +544,7 @@ static int sgx_encl_init(struct sgx_encl *encl, struct sgx_sigstruct *sigstruct,
 
 	mutex_lock(&encl->lock);
 
-	if (encl->flags & (SGX_ENCL_INITIALIZED | SGX_ENCL_DEAD)) {
+	if (atomic_read(&encl->flags) & (SGX_ENCL_INITIALIZED | SGX_ENCL_DEAD)) {
 		ret = -EFAULT;
 		goto err_out;
 	}
@@ -579,7 +579,7 @@ static int sgx_encl_init(struct sgx_encl *encl, struct sgx_sigstruct *sigstruct,
 	} else if (encls_returned_code(ret)) {
 		pr_debug("EINIT returned %d\n", ret);
 	} else {
-		encl->flags |= SGX_ENCL_INITIALIZED;
+		atomic_or(SGX_ENCL_INITIALIZED, &encl->flags);
 	}
 
 err_out:
@@ -611,7 +611,7 @@ static long sgx_ioc_enclave_init(struct file *filep, void __user *arg)
 	struct page *initp_page;
 	int ret;
 
-	if (!(encl->flags & SGX_ENCL_CREATED))
+	if (!(atomic_read(&encl->flags) & SGX_ENCL_CREATED))
 		return -EINVAL;
 
 	if (copy_from_user(&einit, arg, sizeof(einit)))
diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
index 3f10a8ff00b7..e7a559c41f86 100644
--- a/arch/x86/kernel/cpu/sgx/reclaim.c
+++ b/arch/x86/kernel/cpu/sgx/reclaim.c
@@ -156,7 +156,7 @@ static bool sgx_reclaimer_evict(struct sgx_epc_page *epc_page)
 
 		mmput_async(encl_mm->mm);
 
-		if (!ret || (encl->flags & SGX_ENCL_DEAD))
+		if (!ret || (atomic_read(&encl->flags) & SGX_ENCL_DEAD))
 			break;
 	}
 
@@ -170,7 +170,7 @@ static bool sgx_reclaimer_evict(struct sgx_epc_page *epc_page)
 	 * accessed page is more common and avoiding lock contention in that
 	 * case is a boon to performance.
 	 */
-	if (!ret && !(encl->flags & SGX_ENCL_DEAD))
+	if (!ret && !(atomic_read(&encl->flags) & SGX_ENCL_DEAD))
 		return false;
 
 	mutex_lock(&encl->lock);
@@ -210,7 +210,7 @@ static void sgx_reclaimer_block(struct sgx_epc_page *epc_page)
 
 	mutex_lock(&encl->lock);
 
-	if (!(encl->flags & SGX_ENCL_DEAD)) {
+	if (!(atomic_read(&encl->flags) & SGX_ENCL_DEAD)) {
 		ret = __eblock(sgx_epc_addr(epc_page));
 		if (encls_failed(ret))
 			ENCLS_WARN(ret, "EBLOCK");
@@ -314,7 +314,7 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page, unsigned int pt)
 
 	encl_page->desc &= ~SGX_ENCL_PAGE_RECLAIMED;
 
-	if (!(encl->flags & SGX_ENCL_DEAD)) {
+	if (!(atomic_read(&encl->flags) & SGX_ENCL_DEAD)) {
 		va_page = list_first_entry(&encl->va_pages, struct sgx_va_page,
 					   list);
 		va_offset = sgx_alloc_va_slot(va_page);
@@ -383,7 +383,7 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
 	encl->secs_child_cnt--;
 
 	if (!encl->secs_child_cnt &&
-	    (encl->flags & (SGX_ENCL_DEAD | SGX_ENCL_INITIALIZED)))
+	    (atomic_read(&encl->flags) & (SGX_ENCL_DEAD | SGX_ENCL_INITIALIZED)))
 		sgx_encl_ewb(encl->secs.epc_page, SGX_SECINFO_SECS);
 
 	mutex_unlock(&encl->lock);
-- 
2.22.0



* [PATCH v2 2/5] x86/sgx: Reject concurrent ioctls on single enclave
  2019-08-27 19:27 [PATCH v2 0/5] x86/sgx: Fix lock ordering bug w/ EADD Sean Christopherson
  2019-08-27 19:27 ` [PATCH v2 1/5] x86/sgx: Convert encl->flags from an unsigned int to an atomic Sean Christopherson
@ 2019-08-27 19:27 ` Sean Christopherson
  2019-08-29 13:15   ` Jarkko Sakkinen
  2019-08-27 19:27 ` [PATCH v2 3/5] x86/sgx: Take encl->lock inside of mm->mmap_sem for EADD Sean Christopherson
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 15+ messages in thread
From: Sean Christopherson @ 2019-08-27 19:27 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx

Except for ENCLAVE_SET_ATTRIBUTE, all SGX ioctls() must be executed
serially to successfully initialize an enclave, e.g. the kernel already
strictly requires ECREATE->EADD->EINIT, and concurrent EADDs will result
in an unstable MRENCLAVE.  Explicitly enforce serialization by returning
-EBUSY if userspace attempts an ioctl while a different ioctl for the
same enclave is in progress.

The primary beneficiary of explicit serialization is sgx_encl_grow() as
it no longer has to deal with dropping and reacquiring encl->lock when
a new VA page needs to be allocated.  Eliminating the lock juggling in
sgx_encl_grow() paves the way for fixing a lock ordering bug in
ENCLAVE_ADD_PAGE without having to also juggle mm->mmap_sem.

Serializing ioctls also fixes a race between ENCLAVE_CREATE and
ENCLAVE_SET_ATTRIBUTE, as the latter does not take encl->lock, e.g.
concurrent updates to allowed_attributes could result in a stale
value.  The race could also be fixed by taking encl->lock or making
allowed_attributes atomic, but both add unnecessary overhead with this
change applied.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kernel/cpu/sgx/encl.h  |  1 +
 arch/x86/kernel/cpu/sgx/ioctl.c | 91 +++++++++++++++------------------
 2 files changed, 42 insertions(+), 50 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index ffc09e1a0fc6..c7608964d69e 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -56,6 +56,7 @@ enum sgx_encl_flags {
 	SGX_ENCL_INITIALIZED	= BIT(1),
 	SGX_ENCL_DEBUG		= BIT(2),
 	SGX_ENCL_DEAD		= BIT(3),
+	SGX_ENCL_IOCTL		= BIT(4),
 };
 
 struct sgx_encl_mm {
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index d05cf051ca45..1ce8e3021a5c 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -14,8 +14,7 @@
 #include <linux/suspend.h>
 #include "driver.h"
 
-static struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl,
-					 unsigned int disallowed_flags)
+static struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl)
 {
 	struct sgx_va_page *va_page = NULL;
 	void *err;
@@ -23,36 +22,20 @@ static struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl,
 	BUILD_BUG_ON(SGX_VA_SLOT_COUNT !=
 		(SGX_ENCL_PAGE_VA_OFFSET_MASK >> 3) + 1);
 
-	if (atomic_read(&encl->flags) & disallowed_flags)
-		return ERR_PTR(-EFAULT);
-
 	if (!(encl->page_cnt % SGX_VA_SLOT_COUNT)) {
-		mutex_unlock(&encl->lock);
-
 		va_page = kzalloc(sizeof(*va_page), GFP_KERNEL);
-		if (!va_page) {
-			mutex_lock(&encl->lock);
+		if (!va_page)
 			return ERR_PTR(-ENOMEM);
-		}
 
 		va_page->epc_page = sgx_alloc_va_page();
-		mutex_lock(&encl->lock);
-
 		if (IS_ERR(va_page->epc_page)) {
 			err = ERR_CAST(va_page->epc_page);
 			kfree(va_page);
 			return err;
-		} else if (atomic_read(&encl->flags) & disallowed_flags) {
-			sgx_free_page(va_page->epc_page);
-			kfree(va_page);
-			return ERR_PTR(-EFAULT);
-		} else if (encl->page_cnt % SGX_VA_SLOT_COUNT) {
-			sgx_free_page(va_page->epc_page);
-			kfree(va_page);
-			va_page = NULL;
-		} else {
-			list_add(&va_page->list, &encl->va_pages);
 		}
+
+		WARN_ON_ONCE(encl->page_cnt % SGX_VA_SLOT_COUNT);
+		list_add(&va_page->list, &encl->va_pages);
 	}
 	encl->page_cnt++;
 	return va_page;
@@ -174,13 +157,12 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs)
 	struct file *backing;
 	long ret;
 
-	mutex_lock(&encl->lock);
+	if (atomic_read(&encl->flags) & SGX_ENCL_CREATED)
+		return -EINVAL;
 
-	va_page = sgx_encl_grow(encl, SGX_ENCL_CREATED | SGX_ENCL_DEAD);
-	if (IS_ERR(va_page)) {
-		ret = PTR_ERR(va_page);
-		goto err_out_unlock;
-	}
+	va_page = sgx_encl_grow(encl);
+	if (IS_ERR(va_page))
+		return PTR_ERR(va_page);
 
 	ssaframesize = sgx_calc_ssaframesize(secs->miscselect, secs->xfrm);
 	if (sgx_validate_secs(secs, ssaframesize)) {
@@ -230,12 +212,11 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs)
 
 	/*
 	 * Set SGX_ENCL_CREATED only after the enclave is fully prepped.  This
-	 * allows other flows to check if the enclave has been created without
-	 * taking encl->lock.
+	 * allows setting and checking enclave creation without having to take
+	 * encl->lock.
 	 */
 	atomic_or(SGX_ENCL_CREATED, &encl->flags);
 
-	mutex_unlock(&encl->lock);
 	return 0;
 
 err_out:
@@ -249,8 +230,6 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs)
 err_out_shrink:
 	sgx_encl_shrink(encl, va_page);
 
-err_out_unlock:
-	mutex_unlock(&encl->lock);
 	return ret;
 }
 
@@ -270,9 +249,8 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs)
  *   0 on success,
  *   -errno otherwise
  */
-static long sgx_ioc_enclave_create(struct file *filep, void __user *arg)
+static long sgx_ioc_enclave_create(struct sgx_encl *encl, void __user *arg)
 {
-	struct sgx_encl *encl = filep->private_data;
 	struct sgx_enclave_create ecreate;
 	struct page *secs_page;
 	struct sgx_secs *secs;
@@ -404,14 +382,14 @@ static int sgx_encl_add_page(struct sgx_encl *encl,
 		return PTR_ERR(epc_page);
 	}
 
-	mutex_lock(&encl->lock);
-
-	va_page = sgx_encl_grow(encl, SGX_ENCL_INITIALIZED | SGX_ENCL_DEAD);
+	va_page = sgx_encl_grow(encl);
 	if (IS_ERR(va_page)) {
 		ret = PTR_ERR(va_page);
 		goto err_out_free;
 	}
 
+	mutex_lock(&encl->lock);
+
 	ret = radix_tree_insert(&encl->page_tree, PFN_DOWN(encl_page->desc),
 				encl_page);
 	if (ret)
@@ -430,13 +408,13 @@ static int sgx_encl_add_page(struct sgx_encl *encl,
 			  PFN_DOWN(encl_page->desc));
 
 err_out_shrink:
+	mutex_unlock(&encl->lock);
 	sgx_encl_shrink(encl, va_page);
 
 err_out_free:
 	sgx_free_page(epc_page);
 	kfree(encl_page);
 
-	mutex_unlock(&encl->lock);
 	return ret;
 }
 
@@ -472,9 +450,8 @@ static int sgx_encl_add_page(struct sgx_encl *encl,
  *   -ENOMEM if any memory allocation, including EPC, fails,
  *   -errno otherwise
  */
-static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg)
+static long sgx_ioc_enclave_add_page(struct sgx_encl *encl, void __user *arg)
 {
-	struct sgx_encl *encl = filep->private_data;
 	struct sgx_enclave_add_page addp;
 	struct sgx_secinfo secinfo;
 
@@ -602,9 +579,8 @@ static int sgx_encl_init(struct sgx_encl *encl, struct sgx_sigstruct *sigstruct,
  *   SGX error code on EINIT failure,
  *   -errno otherwise
  */
-static long sgx_ioc_enclave_init(struct file *filep, void __user *arg)
+static long sgx_ioc_enclave_init(struct sgx_encl *encl, void __user *arg)
 {
-	struct sgx_encl *encl = filep->private_data;
 	struct sgx_einittoken *einittoken;
 	struct sgx_sigstruct *sigstruct;
 	struct sgx_enclave_init einit;
@@ -658,9 +634,9 @@ static long sgx_ioc_enclave_init(struct file *filep, void __user *arg)
  *
  * Return: 0 on success, -errno otherwise
  */
-static long sgx_ioc_enclave_set_attribute(struct file *filep, void __user *arg)
+static long sgx_ioc_enclave_set_attribute(struct sgx_encl *encl,
+					  void __user *arg)
 {
-	struct sgx_encl *encl = filep->private_data;
 	struct sgx_enclave_set_attribute params;
 	struct file *attribute_file;
 	int ret;
@@ -687,16 +663,31 @@ static long sgx_ioc_enclave_set_attribute(struct file *filep, void __user *arg)
 
 long sgx_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
 {
+	struct sgx_encl *encl = filep->private_data;
+	int ret;
+
+	if (atomic_fetch_or(SGX_ENCL_IOCTL, &encl->flags) & SGX_ENCL_IOCTL)
+		return -EBUSY;
+
 	switch (cmd) {
 	case SGX_IOC_ENCLAVE_CREATE:
-		return sgx_ioc_enclave_create(filep, (void __user *)arg);
+		ret = sgx_ioc_enclave_create(encl, (void __user *)arg);
+		break;
 	case SGX_IOC_ENCLAVE_ADD_PAGE:
-		return sgx_ioc_enclave_add_page(filep, (void __user *)arg);
+		ret = sgx_ioc_enclave_add_page(encl, (void __user *)arg);
+		break;
 	case SGX_IOC_ENCLAVE_INIT:
-		return sgx_ioc_enclave_init(filep, (void __user *)arg);
+		ret = sgx_ioc_enclave_init(encl, (void __user *)arg);
+		break;
 	case SGX_IOC_ENCLAVE_SET_ATTRIBUTE:
-		return sgx_ioc_enclave_set_attribute(filep, (void __user *)arg);
+		ret = sgx_ioc_enclave_set_attribute(encl, (void __user *)arg);
+		break;
 	default:
-		return -ENOIOCTLCMD;
+		ret = -ENOIOCTLCMD;
+		break;
 	}
+
+	atomic_andnot(SGX_ENCL_IOCTL, &encl->flags);
+
+	return ret;
 }
-- 
2.22.0



* [PATCH v2 3/5] x86/sgx: Take encl->lock inside of mm->mmap_sem for EADD
  2019-08-27 19:27 [PATCH v2 0/5] x86/sgx: Fix lock ordering bug w/ EADD Sean Christopherson
  2019-08-27 19:27 ` [PATCH v2 1/5] x86/sgx: Convert encl->flags from an unsigned int to an atomic Sean Christopherson
  2019-08-27 19:27 ` [PATCH v2 2/5] x86/sgx: Reject concurrent ioctls on single enclave Sean Christopherson
@ 2019-08-27 19:27 ` Sean Christopherson
  2019-08-27 19:27 ` [PATCH v2 4/5] x86/sgx: Reject all ioctls on dead enclaves Sean Christopherson
  2019-08-27 19:27 ` [PATCH v2 5/5] x86/sgx: Destroy the enclave if EEXTEND fails Sean Christopherson
  4 siblings, 0 replies; 15+ messages in thread
From: Sean Christopherson @ 2019-08-27 19:27 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx

Reverse the order in which encl->lock and mm->mmap_sem are taken during
ENCLAVE_ADD_PAGE so as to adhere to SGX's lock ordering requirements.

Refactor EEXTEND and the final bookkeeping out of __sgx_encl_add_page()
so that mm->mmap_sem can be dropped after EADD without spreading the
lock/unlock across multiple functions.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kernel/cpu/sgx/ioctl.c | 55 ++++++++++++++++++++-------------
 1 file changed, 33 insertions(+), 22 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 1ce8e3021a5c..a0ffbbb0dad1 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -307,43 +307,40 @@ static int sgx_validate_secinfo(struct sgx_secinfo *secinfo)
 static int __sgx_encl_add_page(struct sgx_encl *encl,
 			       struct sgx_encl_page *encl_page,
 			       struct sgx_epc_page *epc_page,
-			       struct sgx_secinfo *secinfo, unsigned long src,
-			       unsigned long mrmask)
+			       struct sgx_secinfo *secinfo, unsigned long src)
 {
 	struct sgx_pageinfo pginfo;
 	struct vm_area_struct *vma;
 	int ret;
-	int i;
 
 	pginfo.secs = (unsigned long)sgx_epc_addr(encl->secs.epc_page);
 	pginfo.addr = SGX_ENCL_PAGE_ADDR(encl_page);
 	pginfo.metadata = (unsigned long)secinfo;
 	pginfo.contents = src;
 
-	down_read(&current->mm->mmap_sem);
-
 	/* Query vma's VM_MAYEXEC as an indirect path_noexec() check. */
 	if (encl_page->vm_max_prot_bits & VM_EXEC) {
 		vma = find_vma(current->mm, src);
-		if (!vma) {
-			up_read(&current->mm->mmap_sem);
+		if (!vma)
 			return -EFAULT;
-		}
 
-		if (!(vma->vm_flags & VM_MAYEXEC)) {
-			up_read(&current->mm->mmap_sem);
+		if (!(vma->vm_flags & VM_MAYEXEC))
 			return -EACCES;
-		}
 	}
 
 	__uaccess_begin();
 	ret = __eadd(&pginfo, sgx_epc_addr(epc_page));
 	__uaccess_end();
 
-	up_read(&current->mm->mmap_sem);
+	return ret ? -EFAULT : 0;
+}
 
-	if (ret)
-		return -EFAULT;
+static int __sgx_encl_extend(struct sgx_encl *encl,
+			     struct sgx_epc_page *epc_page,
+			     unsigned long mrmask)
+{
+	int ret;
+	int i;
 
 	for_each_set_bit(i, &mrmask, 16) {
 		ret = __eextend(sgx_epc_addr(encl->secs.epc_page),
@@ -354,12 +351,6 @@ static int __sgx_encl_add_page(struct sgx_encl *encl,
 			return -EFAULT;
 		}
 	}
-
-	encl_page->encl = encl;
-	encl_page->epc_page = epc_page;
-	encl->secs_child_cnt++;
-	sgx_mark_page_reclaimable(encl_page->epc_page);
-
 	return 0;
 }
 
@@ -388,19 +379,39 @@ static int sgx_encl_add_page(struct sgx_encl *encl,
 		goto err_out_free;
 	}
 
+	down_read(&current->mm->mmap_sem);
+
 	mutex_lock(&encl->lock);
 
+	/*
+	 * Insert prior to EADD in case of OOM.  EADD modifies MRENCLAVE, i.e.
+	 * can't be gracefully unwound, while failure on EADD/EXTEND is limited
+	 * to userspace errors (or kernel/hardware bugs).
+	 */
 	ret = radix_tree_insert(&encl->page_tree, PFN_DOWN(encl_page->desc),
 				encl_page);
-	if (ret)
+	if (ret) {
+		up_read(&current->mm->mmap_sem);
 		goto err_out_shrink;
+	}
 
 	ret = __sgx_encl_add_page(encl, encl_page, epc_page, secinfo,
-				  addp->src, addp->mrmask);
+				  addp->src);
+	up_read(&current->mm->mmap_sem);
+
+	if (ret)
+		goto err_out;
+
+	ret = __sgx_encl_extend(encl, epc_page, addp->mrmask);
 	if (ret)
 		goto err_out;
 
+	encl_page->encl = encl;
+	encl_page->epc_page = epc_page;
+	encl->secs_child_cnt++;
+	sgx_mark_page_reclaimable(encl_page->epc_page);
 	mutex_unlock(&encl->lock);
+
 	return 0;
 
 err_out:
-- 
2.22.0



* [PATCH v2 4/5] x86/sgx: Reject all ioctls on dead enclaves
  2019-08-27 19:27 [PATCH v2 0/5] x86/sgx: Fix lock ordering bug w/ EADD Sean Christopherson
                   ` (2 preceding siblings ...)
  2019-08-27 19:27 ` [PATCH v2 3/5] x86/sgx: Take encl->lock inside of mm->mmap_sem for EADD Sean Christopherson
@ 2019-08-27 19:27 ` Sean Christopherson
  2019-08-27 19:27 ` [PATCH v2 5/5] x86/sgx: Destroy the enclave if EEXTEND fails Sean Christopherson
  4 siblings, 0 replies; 15+ messages in thread
From: Sean Christopherson @ 2019-08-27 19:27 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx

Do not allow userspace to operate on a dead enclave.

Note, moving the SGX_ENCL_DEAD check for EINIT outside of encl->lock is safe
now that sgx_ioctl() prevents concurrent calls.  SGX_ENCL_DEAD is only
set when the fd is released, i.e. EINIT can no longer be reached, or
within an ioctl call.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kernel/cpu/sgx/ioctl.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index a0ffbbb0dad1..7f605fb7e0f4 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -532,7 +532,7 @@ static int sgx_encl_init(struct sgx_encl *encl, struct sgx_sigstruct *sigstruct,
 
 	mutex_lock(&encl->lock);
 
-	if (atomic_read(&encl->flags) & (SGX_ENCL_INITIALIZED | SGX_ENCL_DEAD)) {
+	if (atomic_read(&encl->flags) & SGX_ENCL_INITIALIZED) {
 		ret = -EFAULT;
 		goto err_out;
 	}
@@ -675,11 +675,15 @@ static long sgx_ioc_enclave_set_attribute(struct sgx_encl *encl,
 long sgx_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
 {
 	struct sgx_encl *encl = filep->private_data;
-	int ret;
+	int ret, encl_flags;
 
-	if (atomic_fetch_or(SGX_ENCL_IOCTL, &encl->flags) & SGX_ENCL_IOCTL)
+	encl_flags = atomic_fetch_or(SGX_ENCL_IOCTL, &encl->flags);
+	if (encl_flags & SGX_ENCL_IOCTL)
 		return -EBUSY;
 
+	if (encl_flags & SGX_ENCL_DEAD)
+		return -EFAULT;
+
 	switch (cmd) {
 	case SGX_IOC_ENCLAVE_CREATE:
 		ret = sgx_ioc_enclave_create(encl, (void __user *)arg);
-- 
2.22.0



* [PATCH v2 5/5] x86/sgx: Destroy the enclave if EEXTEND fails
  2019-08-27 19:27 [PATCH v2 0/5] x86/sgx: Fix lock ordering bug w/ EADD Sean Christopherson
                   ` (3 preceding siblings ...)
  2019-08-27 19:27 ` [PATCH v2 4/5] x86/sgx: Reject all ioctls on dead enclaves Sean Christopherson
@ 2019-08-27 19:27 ` Sean Christopherson
  4 siblings, 0 replies; 15+ messages in thread
From: Sean Christopherson @ 2019-08-27 19:27 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx

Mark an enclave as dead and release its resources if EEXTEND fails, as
the driver cannot gracefully unwind from EEXTEND failure and does not
provide userspace enough information to restart the ioctl.  Allowing
EEXTEND to be restarted is not a requirement as EEXTEND can only fail in
the event of a kernel or hardware bug.
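The fail-hard policy can be sketched as follows; the flag bit, return values, and function names are illustrative, and hw_ret stands in for the ENCLS[EEXTEND] return code:

```c
#include <assert.h>

#define ENCL_DEAD (1u << 3)

static unsigned int encl_flags;

/* Mark the enclave dead so later ioctls and faults fail fast. */
static void encl_destroy(void)
{
	encl_flags |= ENCL_DEAD;
	/* ... EPC pages and VA slots would be freed here ... */
}

/*
 * EEXTEND failure perturbs MRENCLAVE and cannot be unwound, so the only
 * sane response is to destroy the enclave and report -EFAULT.
 */
static int encl_extend(int hw_ret)
{
	if (hw_ret) {
		encl_destroy();
		return -14;	/* -EFAULT */
	}
	return 0;
}
```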

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kernel/cpu/sgx/ioctl.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 7f605fb7e0f4..03428a404878 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -348,6 +348,7 @@ static int __sgx_encl_extend(struct sgx_encl *encl,
 		if (ret) {
 			if (encls_failed(ret))
 				ENCLS_WARN(ret, "EEXTEND");
+			sgx_encl_destroy(encl);
 			return -EFAULT;
 		}
 	}
-- 
2.22.0



* Re: [PATCH v2 1/5] x86/sgx: Convert encl->flags from an unsigned int to an atomic
  2019-08-27 19:27 ` [PATCH v2 1/5] x86/sgx: Convert encl->flags from an unsigned int to an atomic Sean Christopherson
@ 2019-08-29 13:09   ` Jarkko Sakkinen
  2019-08-29 15:59     ` Sean Christopherson
  2019-08-30  0:02     ` Sean Christopherson
  0 siblings, 2 replies; 15+ messages in thread
From: Jarkko Sakkinen @ 2019-08-29 13:09 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: linux-sgx

On Tue, Aug 27, 2019 at 12:27:13PM -0700, Sean Christopherson wrote:
> @@ -77,7 +78,7 @@ static int sgx_release(struct inode *inode, struct file *file)
>  	};
>  
>  	mutex_lock(&encl->lock);
> -	encl->flags |= SGX_ENCL_DEAD;
> +	atomic_or(SGX_ENCL_DEAD, &encl->flags);
>  	mutex_unlock(&encl->lock);

Had a couple of checkpatch.pl errors (not a biggie, just a remark).

Is there a reason to keep the lock there?

/Jarkko


* Re: [PATCH v2 2/5] x86/sgx: Reject concurrent ioctls on single enclave
  2019-08-27 19:27 ` [PATCH v2 2/5] x86/sgx: Reject concurrent ioctls on single enclave Sean Christopherson
@ 2019-08-29 13:15   ` Jarkko Sakkinen
  2019-08-29 16:00     ` Sean Christopherson
  0 siblings, 1 reply; 15+ messages in thread
From: Jarkko Sakkinen @ 2019-08-29 13:15 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: linux-sgx

On Tue, Aug 27, 2019 at 12:27:14PM -0700, Sean Christopherson wrote:
> Except for ENCLAVE_SET_ATTRIBUTE, all SGX ioctls() must be executed
> serially to successfully initialize an enclave, e.g. the kernel already
> strictly requires ECREATE->EADD->EINIT, and concurrent EADDs will result
> in an unstable MRENCLAVE.  Explicitly enforce serialization by returning
> -EBUSY if userspace attempts an ioctl while a different ioctl for the
> same enclave is in progress.
> 
> The primary beneficiary of explicit serialization is sgx_encl_grow() as
> it no longer has to deal with dropping and reacquiring encl->lock when
> a new VA page needs to be allocated.  Eliminating the lock juggling in
> sgx_encl_grow() paves the way for fixing a lock ordering bug in
> ENCLAVE_ADD_PAGE without having to also juggle mm->mmap_sem.
> 
> Serializing ioctls also fixes a race between ENCLAVE_CREATE and
> ENCLAVE_SET_ATTRIBUTE, as the latter does not take encl->lock, e.g.
> concurrent updates to allowed_attributes could result in a stale
> value.  The race could also be fixed by taking encl->lock or making
> allowed_attributes atomic, but both add unnecessary overhead with this
> change applied.
> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>

#PF handler should be good as it has this conditional:

flags = atomic_read(&encl->flags);

if ((flags & SGX_ENCL_DEAD) || !(flags & SGX_ENCL_INITIALIZED))
	return ERR_PTR(-EFAULT);

What about the reclaimer?

/Jarkko


* Re: [PATCH v2 1/5] x86/sgx: Convert encl->flags from an unsigned int to an atomic
  2019-08-29 13:09   ` Jarkko Sakkinen
@ 2019-08-29 15:59     ` Sean Christopherson
  2019-08-29 18:01       ` Jarkko Sakkinen
  2019-08-30  0:02     ` Sean Christopherson
  1 sibling, 1 reply; 15+ messages in thread
From: Sean Christopherson @ 2019-08-29 15:59 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx

On Thu, Aug 29, 2019 at 04:09:29PM +0300, Jarkko Sakkinen wrote:
> On Tue, Aug 27, 2019 at 12:27:13PM -0700, Sean Christopherson wrote:
> > @@ -77,7 +78,7 @@ static int sgx_release(struct inode *inode, struct file *file)
> >  	};
> >  
> >  	mutex_lock(&encl->lock);
> > -	encl->flags |= SGX_ENCL_DEAD;
> > +	atomic_or(SGX_ENCL_DEAD, &encl->flags);
> >  	mutex_unlock(&encl->lock);
> 
> Had a couple of checkpatch.pl errors (not a biggie, just a remark).
> 
> Is there reason to keep lock there?

Yes, I couldn't convince myself the reclaim flows would work correctly
if SGX_ENCL_DEAD could be set while the reclaimer held encl->lock.  But
I'm also not 100% certain holding encl->lock is strictly necessary in
this case, so I didn't add a comment either.


* Re: [PATCH v2 2/5] x86/sgx: Reject concurrent ioctls on single enclave
  2019-08-29 13:15   ` Jarkko Sakkinen
@ 2019-08-29 16:00     ` Sean Christopherson
  2019-08-29 18:14       ` Jarkko Sakkinen
  0 siblings, 1 reply; 15+ messages in thread
From: Sean Christopherson @ 2019-08-29 16:00 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx

On Thu, Aug 29, 2019 at 04:15:43PM +0300, Jarkko Sakkinen wrote:
> On Tue, Aug 27, 2019 at 12:27:14PM -0700, Sean Christopherson wrote:
> > Except for ENCLAVE_SET_ATTRIBUTE, all SGX ioctls() must be executed
> > serially to successfully initialize an enclave, e.g. the kernel already
> > strictly requires ECREATE->EADD->EINIT, and concurrent EADDs will result
> > in an unstable MRENCLAVE.  Explicitly enforce serialization by returning
> > EINVAL if userspace attempts an ioctl while a different ioctl for the
> > same enclave is in progress.
> > 
> > The primary beneficiary of explicit serialization is sgx_encl_grow() as
> > it no longer has to deal with dropping and reacquiring encl->lock when
> > a new VA page needs to be allocated.  Eliminating the lock juggling in
> > sgx_encl_grow() paves the way for fixing a lock ordering bug in
> > ENCLAVE_ADD_PAGE without having to also juggle mm->mmap_sem.
> > 
> > Serializing ioctls also fixes a race between ENCLAVE_CREATE and
> > ENCLAVE_SET_ATTRIBUTE, as the latter does not take encl->lock, e.g.
> > concurrent updates to allowed_attributes could result in a stale
> > value.  The race could also be fixed by taking encl->lock or making
> > allowed_attributes atomic, but both add unnecessary overhead with this
> > change applied.
> > 
> > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> 
> #PF handler should be good as it has this conditional:
> 
> flags = atomic_read(&encl->flags);
> 
> if ((flags & SGX_ENCL_DEAD) || !(flags & SGX_ENCL_INITIALIZED))
> 	return ERR_PTR(-EFAULT);
> 
> What about the reclaimer?

Can you elaborate?  I'm not sure what you're asking.


* Re: [PATCH v2 1/5] x86/sgx: Convert encl->flags from an unsigned int to an atomic
  2019-08-29 15:59     ` Sean Christopherson
@ 2019-08-29 18:01       ` Jarkko Sakkinen
  0 siblings, 0 replies; 15+ messages in thread
From: Jarkko Sakkinen @ 2019-08-29 18:01 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: linux-sgx

On Thu, Aug 29, 2019 at 08:59:14AM -0700, Sean Christopherson wrote:
> On Thu, Aug 29, 2019 at 04:09:29PM +0300, Jarkko Sakkinen wrote:
> > On Tue, Aug 27, 2019 at 12:27:13PM -0700, Sean Christopherson wrote:
> > > @@ -77,7 +78,7 @@ static int sgx_release(struct inode *inode, struct file *file)
> > >  	};
> > >  
> > >  	mutex_lock(&encl->lock);
> > > -	encl->flags |= SGX_ENCL_DEAD;
> > > +	atomic_or(SGX_ENCL_DEAD, &encl->flags);
> > >  	mutex_unlock(&encl->lock);
> > 
> > Had a couple of checkpatch.pl errors (not a biggie, just a remark).
> > 
> > Is there reason to keep lock there?
> 
> Yes, I couldn't convince myself the reclaim flows would work correctly
> if SGX_ENCL_DEAD could be set while the reclaimer held encl->lock.  But
> I'm also not 100% certain holding encl->lock is strictly necessary in
> this case, so I didn't add a comment either.

Right. Let's keep it as it is for the moment.

/Jarkko


* Re: [PATCH v2 2/5] x86/sgx: Reject concurrent ioctls on single enclave
  2019-08-29 16:00     ` Sean Christopherson
@ 2019-08-29 18:14       ` Jarkko Sakkinen
  2019-08-29 18:43         ` Sean Christopherson
  0 siblings, 1 reply; 15+ messages in thread
From: Jarkko Sakkinen @ 2019-08-29 18:14 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: linux-sgx

On Thu, Aug 29, 2019 at 09:00:01AM -0700, Sean Christopherson wrote:
> > #PF handler should be good as it has this conditional:
> > 
> > flags = atomic_read(&encl->flags);
> > 
> > if ((flags & SGX_ENCL_DEAD) || !(flags & SGX_ENCL_INITIALIZED))
> > 	return ERR_PTR(-EFAULT);
> > 
> > What about the reclaimer?
> 
> Can you elaborate?  I'm not sure what you're asking.

I'm thinking of a race between list_add() in the ioctl and
list_move_tail() in the reclaimer.

A quick way to fix this would be move sgx_alloc_va_page() from
sgx_encl_grow() and return NULL if a new allocation is required.

In the ioctl you can then allocate the page before taking locks and
do "list_add(&va_page->list, &encl->va_pages);" behind the locks.

/Jarkko


* Re: [PATCH v2 2/5] x86/sgx: Reject concurrent ioctls on single enclave
  2019-08-29 18:14       ` Jarkko Sakkinen
@ 2019-08-29 18:43         ` Sean Christopherson
  2019-08-29 22:22           ` Jarkko Sakkinen
  0 siblings, 1 reply; 15+ messages in thread
From: Sean Christopherson @ 2019-08-29 18:43 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx

On Thu, Aug 29, 2019 at 09:14:38PM +0300, Jarkko Sakkinen wrote:
> On Thu, Aug 29, 2019 at 09:00:01AM -0700, Sean Christopherson wrote:
> > > #PF handler should be good as it has this conditional:
> > > 
> > > flags = atomic_read(&encl->flags);
> > > 
> > > if ((flags & SGX_ENCL_DEAD) || !(flags & SGX_ENCL_INITIALIZED))
> > > 	return ERR_PTR(-EFAULT);
> > > 
> > > What about the reclaimer?
> > 
> > Can you elaborate?  I'm not sure what you're asking.
> 
> I'm thinking of a race between list_add() in the ioctl and
> list_move_tail() in the reclaimer.

Ah crud, I forgot that the reclaimer can manipulate the list of VA pages;
I was thinking they were invisible to the reclaimer.

> A quick way to fix this would be move sgx_alloc_va_page() from
> sgx_encl_grow() and return NULL if a new allocation is required.

We don't even need to go that far; moving the list_add() from
sgx_encl_grow() to its caller would be sufficient.  Same concept, but the
allocation would still be handled by sgx_encl_grow() instead of having to
duplicate that code in sgx_encl_add_page() and sgx_encl_create().

> In the ioctl you can then allocate the page before taking locks and
> do "list_add(&va_page->list, &encl->va_pages);" behind the locks.


* Re: [PATCH v2 2/5] x86/sgx: Reject concurrent ioctls on single enclave
  2019-08-29 18:43         ` Sean Christopherson
@ 2019-08-29 22:22           ` Jarkko Sakkinen
  0 siblings, 0 replies; 15+ messages in thread
From: Jarkko Sakkinen @ 2019-08-29 22:22 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: linux-sgx

On Thu, Aug 29, 2019 at 11:43:33AM -0700, Sean Christopherson wrote:
> On Thu, Aug 29, 2019 at 09:14:38PM +0300, Jarkko Sakkinen wrote:
> > On Thu, Aug 29, 2019 at 09:00:01AM -0700, Sean Christopherson wrote:
> > > > #PF handler should be good as it has this conditional:
> > > > 
> > > > flags = atomic_read(&encl->flags);
> > > > 
> > > > if ((flags & SGX_ENCL_DEAD) || !(flags & SGX_ENCL_INITIALIZED))
> > > > 	return ERR_PTR(-EFAULT);
> > > > 
> > > > What about the reclaimer?
> > > 
> > > Can you elaborate?  I'm not sure what you're asking.
> > 
> > I'm thinking of a race between list_add() in the ioctl and
> > list_move_tail() in the reclaimer.
> 
> Ah crud, I forgot that the reclaimer can manipulate the list of VA pages,
> I was thinking they were invisible to the reclaimer.
> 
> > A quick way to fix this would be move sgx_alloc_va_page() from
> > sgx_encl_grow() and return NULL if a new allocation is required.
> 
> We don't even need to do that, moving the list_add() from sgx_encl_grow()
> to its caller would be sufficient.  Same concept, but the allocation would
> be handled by sgx_encl_grow() instead of having to duplicate that code in
> sgx_encl_add_page() and sgx_encl_create().

Yeah, that was the whole point: list_add() cannot be done without
encl->lock. Anything that works goes...

/Jarkko


* Re: [PATCH v2 1/5] x86/sgx: Convert encl->flags from an unsigned int to an atomic
  2019-08-29 13:09   ` Jarkko Sakkinen
  2019-08-29 15:59     ` Sean Christopherson
@ 2019-08-30  0:02     ` Sean Christopherson
  1 sibling, 0 replies; 15+ messages in thread
From: Sean Christopherson @ 2019-08-30  0:02 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx

On Thu, Aug 29, 2019 at 04:09:29PM +0300, Jarkko Sakkinen wrote:
> On Tue, Aug 27, 2019 at 12:27:13PM -0700, Sean Christopherson wrote:
> > @@ -77,7 +78,7 @@ static int sgx_release(struct inode *inode, struct file *file)
> >  	};
> >  
> >  	mutex_lock(&encl->lock);
> > -	encl->flags |= SGX_ENCL_DEAD;
> > +	atomic_or(SGX_ENCL_DEAD, &encl->flags);
> >  	mutex_unlock(&encl->lock);
> 
> Had a couple of checkpatch.pl errors (not a biggie, just a remark).

Regarding the checkpatch warnings, I intentionally ignored these:

WARNING: line over 80 characters
#181: FILE: arch/x86/kernel/cpu/sgx/ioctl.c:547:
+       if (atomic_read(&encl->flags) & (SGX_ENCL_INITIALIZED | SGX_ENCL_DEAD)) {

WARNING: line over 80 characters
#248: FILE: arch/x86/kernel/cpu/sgx/reclaim.c:386:
+           (atomic_read(&encl->flags) & (SGX_ENCL_DEAD | SGX_ENCL_INITIALIZED)))

total: 0 errors, 2 warnings, 181 lines checked


The one in ioctl.c is ephemeral, i.e. won't exist once the whole series is
squashed, and I felt the code in reclaim.c was more readable by letting
the line poke out a few chars.

That being said, I definitely don't care if you want to tweak the code to
eliminate the warning.


end of thread, other threads:[~2019-08-30  0:02 UTC | newest]

Thread overview: 15+ messages
2019-08-27 19:27 [PATCH v2 0/5] x86/sgx: Fix lock ordering bug w/ EADD Sean Christopherson
2019-08-27 19:27 ` [PATCH v2 1/5] x86/sgx: Convert encl->flags from an unsigned int to an atomic Sean Christopherson
2019-08-29 13:09   ` Jarkko Sakkinen
2019-08-29 15:59     ` Sean Christopherson
2019-08-29 18:01       ` Jarkko Sakkinen
2019-08-30  0:02     ` Sean Christopherson
2019-08-27 19:27 ` [PATCH v2 2/5] x86/sgx: Reject concurrent ioctls on single enclave Sean Christopherson
2019-08-29 13:15   ` Jarkko Sakkinen
2019-08-29 16:00     ` Sean Christopherson
2019-08-29 18:14       ` Jarkko Sakkinen
2019-08-29 18:43         ` Sean Christopherson
2019-08-29 22:22           ` Jarkko Sakkinen
2019-08-27 19:27 ` [PATCH v2 3/5] x86/sgx: Take encl->lock inside of mm->mmap_sem for EADD Sean Christopherson
2019-08-27 19:27 ` [PATCH v2 4/5] x86/sgx: Reject all ioctls on dead enclaves Sean Christopherson
2019-08-27 19:27 ` [PATCH v2 5/5] x86/sgx: Destroy the enclave if EEXTEND fails Sean Christopherson
