Linux-Sgx Archive on lore.kernel.org
* [PATCH for_v22 0/6] x86/sgx: Remove EADD worker and page copy
@ 2019-08-08 22:13 Sean Christopherson
  2019-08-08 22:13 ` [PATCH for_v22 1/6] x86/sgx: Validate generic SECINFO immediately after copying from user Sean Christopherson
                   ` (6 more replies)
  0 siblings, 7 replies; 9+ messages in thread
From: Sean Christopherson @ 2019-08-08 22:13 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Andy Lutomirski

As suggested by Andy, remove the work queue in favor of handling the
entire EADD flow in the context of the ioctl().  After the worker is
gone, pass the source page/address directly to EADD instead of first
copying the data into kernel memory.

This applies on top of the bug fix series I sent yesterday,
https://patchwork.kernel.org/cover/11082995/.

Sean Christopherson (6):
  x86/sgx: Validate generic SECINFO immediately after copying from user
  x86/sgx: Set SGX_ENCL_PAGE_TCS when allocating encl_page
  x86/sgx: Move encl_page insertion into tree out of alloc flow
  x86/sgx: Allocate encl_page prior to taking encl->lock
  x86/sgx: Remove the EADD page worker
  x86/sgx: Pass userspace source address directly to EADD

 arch/x86/kernel/cpu/sgx/driver/ioctl.c | 402 +++++++------------------
 arch/x86/kernel/cpu/sgx/driver/main.c  |   4 -
 arch/x86/kernel/cpu/sgx/encl.h         |   2 -
 3 files changed, 103 insertions(+), 305 deletions(-)

-- 
2.22.0


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH for_v22 1/6] x86/sgx: Validate generic SECINFO immediately after copying from user
  2019-08-08 22:13 [PATCH for_v22 0/6] x86/sgx: Remove EADD worker and page copy Sean Christopherson
@ 2019-08-08 22:13 ` Sean Christopherson
  2019-08-08 22:13 ` [PATCH for_v22 2/6] x86/sgx: Set SGX_ENCL_PAGE_TCS when allocating encl_page Sean Christopherson
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Sean Christopherson @ 2019-08-08 22:13 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Andy Lutomirski

When adding pages to the enclave, verify that the SECINFO flags provided
by userspace are valid before consuming the protection bits; validating
up front also avoids allocating a page when SECINFO is invalid.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kernel/cpu/sgx/driver/ioctl.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
index f2b9f0ece9ca..f14d9abd9b0d 100644
--- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
@@ -518,8 +518,6 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 	struct sgx_va_page *va_page;
 	int ret;
 
-	if (sgx_validate_secinfo(secinfo))
-		return -EINVAL;
 	if (page_type == SGX_SECINFO_TCS) {
 		ret = sgx_validate_tcs(encl, data);
 		if (ret)
@@ -634,6 +632,9 @@ static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg)
 			   sizeof(secinfo)))
 		return -EFAULT;
 
+	if (sgx_validate_secinfo(&secinfo))
+		return -EINVAL;
+
 	data_page = alloc_page(GFP_HIGHUSER);
 	if (!data_page)
 		return -ENOMEM;
-- 
2.22.0



* [PATCH for_v22 2/6] x86/sgx: Set SGX_ENCL_PAGE_TCS when allocating encl_page
  2019-08-08 22:13 [PATCH for_v22 0/6] x86/sgx: Remove EADD worker and page copy Sean Christopherson
  2019-08-08 22:13 ` [PATCH for_v22 1/6] x86/sgx: Validate generic SECINFO immediately after copying from user Sean Christopherson
@ 2019-08-08 22:13 ` Sean Christopherson
  2019-08-08 22:13 ` [PATCH for_v22 3/6] x86/sgx: Move encl_page insertion into tree out of alloc flow Sean Christopherson
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Sean Christopherson @ 2019-08-08 22:13 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Andy Lutomirski

Set SGX_ENCL_PAGE_TCS when encl_page->desc is initialized in
sgx_encl_page_alloc() to improve readability, and so that the code
isn't affected when the bulk of __sgx_encl_add_page() is rewritten
to remove the EADD worker in a future patch.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kernel/cpu/sgx/driver/ioctl.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
index f14d9abd9b0d..64deab36ed05 100644
--- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
@@ -247,7 +247,8 @@ static int sgx_validate_secs(const struct sgx_secs *secs,
 
 static struct sgx_encl_page *sgx_encl_page_alloc(struct sgx_encl *encl,
 						 unsigned long addr,
-						 unsigned long prot)
+						 unsigned long prot,
+						 u64 page_type)
 {
 	struct sgx_encl_page *encl_page;
 	int ret;
@@ -258,6 +259,8 @@ static struct sgx_encl_page *sgx_encl_page_alloc(struct sgx_encl *encl,
 	if (!encl_page)
 		return ERR_PTR(-ENOMEM);
 	encl_page->desc = addr;
+	if (page_type == SGX_SECINFO_TCS)
+		encl_page->desc |= SGX_ENCL_PAGE_TCS;
 	encl_page->encl = encl;
 	encl_page->vm_prot_bits = calc_vm_prot_bits(prot, 0);
 	ret = radix_tree_insert(&encl->page_tree, PFN_DOWN(encl_page->desc),
@@ -475,7 +478,6 @@ static int __sgx_encl_add_page(struct sgx_encl *encl,
 			       unsigned int mrmask)
 {
 	unsigned long page_index = sgx_encl_get_index(encl, encl_page);
-	u64 page_type = secinfo->flags & SGX_SECINFO_PAGE_TYPE_MASK;
 	struct sgx_add_page_req *req = NULL;
 	struct page *backing;
 	void *backing_ptr;
@@ -494,8 +496,7 @@ static int __sgx_encl_add_page(struct sgx_encl *encl,
 	backing_ptr = kmap(backing);
 	memcpy(backing_ptr, data, PAGE_SIZE);
 	kunmap(backing);
-	if (page_type == SGX_SECINFO_TCS)
-		encl_page->desc |= SGX_ENCL_PAGE_TCS;
+
 	memcpy(&req->secinfo, secinfo, sizeof(*secinfo));
 	req->encl = encl;
 	req->encl_page = encl_page;
@@ -532,7 +533,7 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 		goto err_out_unlock;
 	}
 
-	encl_page = sgx_encl_page_alloc(encl, addr, prot);
+	encl_page = sgx_encl_page_alloc(encl, addr, prot, page_type);
 	if (IS_ERR(encl_page)) {
 		ret = PTR_ERR(encl_page);
 		goto err_out_shrink;
-- 
2.22.0



* [PATCH for_v22 3/6] x86/sgx: Move encl_page insertion into tree out of alloc flow
  2019-08-08 22:13 [PATCH for_v22 0/6] x86/sgx: Remove EADD worker and page copy Sean Christopherson
  2019-08-08 22:13 ` [PATCH for_v22 1/6] x86/sgx: Validate generic SECINFO immediately after copying from user Sean Christopherson
  2019-08-08 22:13 ` [PATCH for_v22 2/6] x86/sgx: Set SGX_ENCL_PAGE_TCS when allocating encl_page Sean Christopherson
@ 2019-08-08 22:13 ` Sean Christopherson
  2019-08-08 22:13 ` [PATCH for_v22 4/6] x86/sgx: Allocate encl_page prior to taking encl->lock Sean Christopherson
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Sean Christopherson @ 2019-08-08 22:13 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Andy Lutomirski

Move insertion into the page tree out of sgx_encl_page_alloc() so that
the function can be moved out from under encl->lock.  This is a
preparatory step for removing the add page worker, as the encl_page is
needed to allocate its EPC page, and EPC page allocation must be done
without holding encl->lock so that it can trigger reclaim if necessary.

Note, radix_tree_insert() returns -EEXIST if the slot is already in
use, i.e. there's no need to pre-check via radix_tree_lookup().

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kernel/cpu/sgx/driver/ioctl.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
index 64deab36ed05..126f95afa9a1 100644
--- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
@@ -251,10 +251,7 @@ static struct sgx_encl_page *sgx_encl_page_alloc(struct sgx_encl *encl,
 						 u64 page_type)
 {
 	struct sgx_encl_page *encl_page;
-	int ret;
 
-	if (radix_tree_lookup(&encl->page_tree, PFN_DOWN(addr)))
-		return ERR_PTR(-EEXIST);
 	encl_page = kzalloc(sizeof(*encl_page), GFP_KERNEL);
 	if (!encl_page)
 		return ERR_PTR(-ENOMEM);
@@ -263,12 +260,7 @@ static struct sgx_encl_page *sgx_encl_page_alloc(struct sgx_encl *encl,
 		encl_page->desc |= SGX_ENCL_PAGE_TCS;
 	encl_page->encl = encl;
 	encl_page->vm_prot_bits = calc_vm_prot_bits(prot, 0);
-	ret = radix_tree_insert(&encl->page_tree, PFN_DOWN(encl_page->desc),
-				encl_page);
-	if (ret) {
-		kfree(encl_page);
-		return ERR_PTR(ret);
-	}
+
 	return encl_page;
 }
 
@@ -539,6 +531,11 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 		goto err_out_shrink;
 	}
 
+	ret = radix_tree_insert(&encl->page_tree, PFN_DOWN(encl_page->desc),
+				encl_page);
+	if (ret)
+		goto err_out_free;
+
 	ret = __sgx_encl_add_page(encl, encl_page, data, secinfo, mrmask);
 	if (ret)
 		goto err_out;
@@ -549,6 +546,7 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 err_out:
 	radix_tree_delete(&encl_page->encl->page_tree,
 			  PFN_DOWN(encl_page->desc));
+err_out_free:
 	kfree(encl_page);
 
 err_out_shrink:
-- 
2.22.0



* [PATCH for_v22 4/6] x86/sgx: Allocate encl_page prior to taking encl->lock
  2019-08-08 22:13 [PATCH for_v22 0/6] x86/sgx: Remove EADD worker and page copy Sean Christopherson
                   ` (2 preceding siblings ...)
  2019-08-08 22:13 ` [PATCH for_v22 3/6] x86/sgx: Move encl_page insertion into tree out of alloc flow Sean Christopherson
@ 2019-08-08 22:13 ` Sean Christopherson
  2019-08-08 22:13 ` [PATCH for_v22 5/6] x86/sgx: Remove the EADD page worker Sean Christopherson
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Sean Christopherson @ 2019-08-08 22:13 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Andy Lutomirski

Refactor sgx_encl_add_page() to allocate the encl_page prior to taking
encl->lock so that the encl_page can be used to allocate its associated
EPC page without having to drop and retake encl->lock.  Removal of the
add page worker will move EPC page allocation to sgx_encl_add_page().

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kernel/cpu/sgx/driver/ioctl.c | 29 ++++++++++++--------------
 1 file changed, 13 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
index 126f95afa9a1..2d876429ba9a 100644
--- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
@@ -517,24 +517,22 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 			return ret;
 	}
 
-	mutex_lock(&encl->lock);
-
-	va_page = sgx_encl_grow(encl, SGX_ENCL_INITIALIZED | SGX_ENCL_DEAD);
-	if (IS_ERR(va_page)) {
-		ret = PTR_ERR(va_page);
-		goto err_out_unlock;
-	}
-
 	encl_page = sgx_encl_page_alloc(encl, addr, prot, page_type);
-	if (IS_ERR(encl_page)) {
-		ret = PTR_ERR(encl_page);
-		goto err_out_shrink;
+	if (IS_ERR(encl_page))
+		return PTR_ERR(encl_page);
+
+	mutex_lock(&encl->lock);
+
+	va_page = sgx_encl_grow(encl, SGX_ENCL_INITIALIZED | SGX_ENCL_DEAD);
+	if (IS_ERR(va_page)) {
+		ret = PTR_ERR(va_page);
+		goto err_out_free;
 	}
 
 	ret = radix_tree_insert(&encl->page_tree, PFN_DOWN(encl_page->desc),
 				encl_page);
 	if (ret)
-		goto err_out_free;
+		goto err_out_shrink;
 
 	ret = __sgx_encl_add_page(encl, encl_page, data, secinfo, mrmask);
 	if (ret)
@@ -546,13 +544,12 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 err_out:
 	radix_tree_delete(&encl_page->encl->page_tree,
 			  PFN_DOWN(encl_page->desc));
+err_out_shrink:
+	sgx_encl_shrink(encl, va_page);
+
 err_out_free:
 	kfree(encl_page);
 
-err_out_shrink:
-	sgx_encl_shrink(encl, va_page);
-
-err_out_unlock:
 	mutex_unlock(&encl->lock);
 	return ret;
 }
-- 
2.22.0



* [PATCH for_v22 5/6] x86/sgx: Remove the EADD page worker
  2019-08-08 22:13 [PATCH for_v22 0/6] x86/sgx: Remove EADD worker and page copy Sean Christopherson
                   ` (3 preceding siblings ...)
  2019-08-08 22:13 ` [PATCH for_v22 4/6] x86/sgx: Allocate encl_page prior to taking encl->lock Sean Christopherson
@ 2019-08-08 22:13 ` Sean Christopherson
  2019-08-08 22:13 ` [PATCH for_v22 6/6] x86/sgx: Pass userspace source address directly to EADD Sean Christopherson
  2019-08-09 16:21 ` [PATCH for_v22 0/6] x86/sgx: Remove EADD worker and page copy Jarkko Sakkinen
  6 siblings, 0 replies; 9+ messages in thread
From: Sean Christopherson @ 2019-08-08 22:13 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Andy Lutomirski

Remove the work queue approach to adding pages to an enclave.  There
are several benefits to fully adding pages within the context of the
ioctl() call:

  - Simplifies the code base
  - Provides userspace with accurate error information, e.g. the ioctl()
    now fails if EPC allocation fails
  - Paves the way for passing the user's source page directly to EADD
    to eliminate the overhead of allocating a kernel page and copying
    the user data into said kernel page

The downside to removing the worker is that applications with their own
scheduler, e.g. Go's M:N scheduler, can see a significant reduction in
throughput (10x or more) when building multiple enclaves in parallel,
e.g. spinning up several goroutines in Go, each building a different
enclave.

Suggested-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kernel/cpu/sgx/driver/ioctl.c | 191 ++++++-------------------
 arch/x86/kernel/cpu/sgx/driver/main.c  |   4 -
 arch/x86/kernel/cpu/sgx/encl.h         |   2 -
 3 files changed, 40 insertions(+), 157 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
index 2d876429ba9a..e083625dcd15 100644
--- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
@@ -14,14 +14,6 @@
 #include <linux/suspend.h>
 #include "driver.h"
 
-struct sgx_add_page_req {
-	struct sgx_encl *encl;
-	struct sgx_encl_page *encl_page;
-	struct sgx_secinfo secinfo;
-	unsigned long mrmask;
-	struct list_head list;
-};
-
 static struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl,
 					 unsigned int disallowed_flags)
 {
@@ -77,115 +69,6 @@ static void sgx_encl_shrink(struct sgx_encl *encl, struct sgx_va_page *va_page)
 	}
 }
 
-static bool sgx_process_add_page_req(struct sgx_add_page_req *req,
-				     struct sgx_epc_page *epc_page)
-{
-	struct sgx_encl_page *encl_page = req->encl_page;
-	struct sgx_encl *encl = req->encl;
-	unsigned long page_index = sgx_encl_get_index(encl, encl_page);
-	struct sgx_secinfo secinfo;
-	struct sgx_pageinfo pginfo;
-	struct page *backing;
-	unsigned long addr;
-	int ret;
-	int i;
-
-	if (encl->flags &  SGX_ENCL_DEAD)
-		return false;
-
-	addr = SGX_ENCL_PAGE_ADDR(encl_page);
-
-	backing = sgx_encl_get_backing_page(encl, page_index);
-	if (IS_ERR(backing))
-		return false;
-
-	/*
-	 * The SECINFO field must be 64-byte aligned, copy it to a local
-	 * variable that is guaranteed to be aligned as req->secinfo may
-	 * or may not be 64-byte aligned, e.g. req may have been allocated
-	 * via kzalloc which is not aware of __aligned attributes.
-	 */
-	memcpy(&secinfo, &req->secinfo, sizeof(secinfo));
-
-	pginfo.secs = (unsigned long)sgx_epc_addr(encl->secs.epc_page);
-	pginfo.addr = addr;
-	pginfo.metadata = (unsigned long)&secinfo;
-	pginfo.contents = (unsigned long)kmap_atomic(backing);
-	ret = __eadd(&pginfo, sgx_epc_addr(epc_page));
-	kunmap_atomic((void *)(unsigned long)pginfo.contents);
-
-	put_page(backing);
-
-	if (ret) {
-		if (encls_failed(ret))
-			ENCLS_WARN(ret, "EADD");
-		return false;
-	}
-
-	for_each_set_bit(i, &req->mrmask, 16) {
-		ret = __eextend(sgx_epc_addr(encl->secs.epc_page),
-				sgx_epc_addr(epc_page) + (i * 0x100));
-		if (ret) {
-			if (encls_failed(ret))
-				ENCLS_WARN(ret, "EEXTEND");
-			return false;
-		}
-	}
-
-	encl_page->encl = encl;
-	encl_page->epc_page = epc_page;
-	encl->secs_child_cnt++;
-	sgx_mark_page_reclaimable(encl_page->epc_page);
-
-	return true;
-}
-
-static void sgx_add_page_worker(struct work_struct *work)
-{
-	struct sgx_add_page_req *req;
-	bool skip_rest = false;
-	bool is_empty = false;
-	struct sgx_encl *encl;
-	struct sgx_epc_page *epc_page;
-
-	encl = container_of(work, struct sgx_encl, work);
-
-	do {
-		schedule();
-
-		mutex_lock(&encl->lock);
-		if (encl->flags & SGX_ENCL_DEAD)
-			skip_rest = true;
-
-		req = list_first_entry(&encl->add_page_reqs,
-				       struct sgx_add_page_req, list);
-		list_del(&req->list);
-		is_empty = list_empty(&encl->add_page_reqs);
-		mutex_unlock(&encl->lock);
-
-		if (skip_rest)
-			goto next;
-
-		epc_page = sgx_alloc_page(req->encl_page, true);
-
-		mutex_lock(&encl->lock);
-
-		if (IS_ERR(epc_page)) {
-			sgx_encl_destroy(encl);
-			skip_rest = true;
-		} else if (!sgx_process_add_page_req(req, epc_page)) {
-			sgx_free_page(epc_page);
-			sgx_encl_destroy(encl);
-			skip_rest = true;
-		}
-
-		mutex_unlock(&encl->lock);
-
-next:
-		kfree(req);
-	} while (!is_empty);
-}
-
 static u32 sgx_calc_ssaframesize(u32 miscselect, u64 xfrm)
 {
 	u32 size_max = PAGE_SIZE;
@@ -298,8 +181,6 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs)
 
 	encl->backing = backing;
 
-	INIT_WORK(&encl->work, sgx_add_page_worker);
-
 	secs_epc = sgx_alloc_page(&encl->secs, true);
 	if (IS_ERR(secs_epc)) {
 		ret = PTR_ERR(secs_epc);
@@ -465,40 +346,42 @@ static int sgx_validate_tcs(struct sgx_encl *encl, struct sgx_tcs *tcs)
 
 static int __sgx_encl_add_page(struct sgx_encl *encl,
 			       struct sgx_encl_page *encl_page,
+			       struct sgx_epc_page *epc_page,
 			       void *data,
 			       struct sgx_secinfo *secinfo,
-			       unsigned int mrmask)
+			       unsigned long mrmask)
 {
-	unsigned long page_index = sgx_encl_get_index(encl, encl_page);
-	struct sgx_add_page_req *req = NULL;
-	struct page *backing;
-	void *backing_ptr;
-	int empty;
+	struct sgx_pageinfo pginfo;
+	int ret;
+	int i;
 
-	req = kzalloc(sizeof(*req), GFP_KERNEL);
-	if (!req)
-		return -ENOMEM;
+	pginfo.secs = (unsigned long)sgx_epc_addr(encl->secs.epc_page);
+	pginfo.addr = SGX_ENCL_PAGE_ADDR(encl_page);
+	pginfo.metadata = (unsigned long)secinfo;
+	pginfo.contents = (unsigned long)data;
 
-	backing = sgx_encl_get_backing_page(encl, page_index);
-	if (IS_ERR(backing)) {
-		kfree(req);
-		return PTR_ERR(backing);
+	ret = __eadd(&pginfo, sgx_epc_addr(epc_page));
+	if (ret) {
+		if (encls_failed(ret))
+			ENCLS_WARN(ret, "EADD");
+		return -EFAULT;
 	}
 
-	backing_ptr = kmap(backing);
-	memcpy(backing_ptr, data, PAGE_SIZE);
-	kunmap(backing);
+	for_each_set_bit(i, &mrmask, 16) {
+		ret = __eextend(sgx_epc_addr(encl->secs.epc_page),
+				sgx_epc_addr(epc_page) + (i * 0x100));
+		if (ret) {
+			if (encls_failed(ret))
+				ENCLS_WARN(ret, "EEXTEND");
+			return -EFAULT;
+		}
+	}
+
+	encl_page->encl = encl;
+	encl_page->epc_page = epc_page;
+	encl->secs_child_cnt++;
+	sgx_mark_page_reclaimable(encl_page->epc_page);
 
-	memcpy(&req->secinfo, secinfo, sizeof(*secinfo));
-	req->encl = encl;
-	req->encl_page = encl_page;
-	req->mrmask = mrmask;
-	empty = list_empty(&encl->add_page_reqs);
-	list_add_tail(&req->list, &encl->add_page_reqs);
-	if (empty)
-		queue_work(sgx_encl_wq, &encl->work);
-	set_page_dirty(backing);
-	put_page(backing);
 	return 0;
 }
 
@@ -508,6 +391,7 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 {
 	u64 page_type = secinfo->flags & SGX_SECINFO_PAGE_TYPE_MASK;
 	struct sgx_encl_page *encl_page;
+	struct sgx_epc_page *epc_page;
 	struct sgx_va_page *va_page;
 	int ret;
 
@@ -521,6 +405,12 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 	if (IS_ERR(encl_page))
 		return PTR_ERR(encl_page);
 
+	epc_page = sgx_alloc_page(encl_page, true);
+	if (IS_ERR(epc_page)) {
+		kfree(encl_page);
+		return PTR_ERR(epc_page);
+	}
+
 	mutex_lock(&encl->lock);
 
 	va_page = sgx_encl_grow(encl, SGX_ENCL_INITIALIZED | SGX_ENCL_DEAD);
@@ -534,7 +424,8 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 	if (ret)
 		goto err_out_shrink;
 
-	ret = __sgx_encl_add_page(encl, encl_page, data, secinfo, mrmask);
+	ret = __sgx_encl_add_page(encl, encl_page, epc_page, data, secinfo,
+				  mrmask);
 	if (ret)
 		goto err_out;
 
@@ -548,6 +439,7 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 	sgx_encl_shrink(encl, va_page);
 
 err_out_free:
+	sgx_free_page(epc_page);
 	kfree(encl_page);
 
 	mutex_unlock(&encl->lock);
@@ -591,14 +483,13 @@ static int sgx_encl_page_import_user(void *dst, unsigned long src,
  * @arg:	a user pointer to a struct sgx_enclave_add_page instance
  *
  * Add a page to an uninitialized enclave (EADD), and optionally extend the
- * enclave's measurement with the contents of the page (EEXTEND). Adding is done
- * asynchronously. A success only indicates that the page has been added to a
- * work queue.
+ * enclave's measurement with the contents of the page (EEXTEND).
  *
  * Return:
  *   0 on success,
  *   -EINVAL if other than RWX protection bits have been set
  *   -EACCES if the source page is located in a noexec partition
+ *   -ENOMEM if any memory allocation, including EPC, fails
  *   -errno otherwise
  */
 static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg)
@@ -701,8 +592,6 @@ static int sgx_encl_init(struct sgx_encl *encl, struct sgx_sigstruct *sigstruct,
 	if (ret)
 		return ret;
 
-	flush_work(&encl->work);
-
 	mutex_lock(&encl->lock);
 
 	if (encl->flags & (SGX_ENCL_INITIALIZED | SGX_ENCL_DEAD)) {
diff --git a/arch/x86/kernel/cpu/sgx/driver/main.c b/arch/x86/kernel/cpu/sgx/driver/main.c
index dfa107247f2d..e740d71e2311 100644
--- a/arch/x86/kernel/cpu/sgx/driver/main.c
+++ b/arch/x86/kernel/cpu/sgx/driver/main.c
@@ -32,7 +32,6 @@ static int sgx_open(struct inode *inode, struct file *file)
 		return -ENOMEM;
 
 	kref_init(&encl->refcount);
-	INIT_LIST_HEAD(&encl->add_page_reqs);
 	INIT_LIST_HEAD(&encl->va_pages);
 	INIT_RADIX_TREE(&encl->page_tree, GFP_KERNEL);
 	mutex_init(&encl->lock);
@@ -81,9 +80,6 @@ static int sgx_release(struct inode *inode, struct file *file)
 	encl->flags |= SGX_ENCL_DEAD;
 	mutex_unlock(&encl->lock);
 
-	if (encl->work.func)
-		flush_work(&encl->work);
-
 	kref_put(&encl->refcount, sgx_encl_release);
 	return 0;
 }
diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index 379fe4c22b5d..79885aae8869 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -81,8 +81,6 @@ struct sgx_encl {
 	unsigned long ssaframesize;
 	struct list_head va_pages;
 	struct radix_tree_root page_tree;
-	struct list_head add_page_reqs;
-	struct work_struct work;
 	struct sgx_encl_page secs;
 	cpumask_t cpumask;
 };
-- 
2.22.0



* [PATCH for_v22 6/6] x86/sgx: Pass userspace source address directly to EADD
  2019-08-08 22:13 [PATCH for_v22 0/6] x86/sgx: Remove EADD worker and page copy Sean Christopherson
                   ` (4 preceding siblings ...)
  2019-08-08 22:13 ` [PATCH for_v22 5/6] x86/sgx: Remove the EADD page worker Sean Christopherson
@ 2019-08-08 22:13 ` Sean Christopherson
  2019-08-09 16:21 ` [PATCH for_v22 0/6] x86/sgx: Remove EADD worker and page copy Jarkko Sakkinen
  6 siblings, 0 replies; 9+ messages in thread
From: Sean Christopherson @ 2019-08-08 22:13 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Andy Lutomirski

Invoke EADD with the userspace source address instead of first copying
the data to a kernel page to avoid the overhead of alloc_page() and
copy_from_user().

Since EADD can now fault due to consuming a userspace address, drop the
TCS page validation and let hardware generate a fault if the TCS is bad.
Because the vast majority of the TCS is reserved bytes, verifying the
TCS essentially requires reading the entire page, which runs counter to
the goal of invoking EADD with the userspace address.

Suggested-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kernel/cpu/sgx/driver/ioctl.c | 148 ++++++-------------------
 1 file changed, 33 insertions(+), 115 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
index e083625dcd15..3b4297441e27 100644
--- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
@@ -301,71 +301,46 @@ static int sgx_validate_secinfo(struct sgx_secinfo *secinfo)
 	return 0;
 }
 
-static bool sgx_validate_offset(struct sgx_encl *encl, unsigned long offset)
-{
-	if (offset & (PAGE_SIZE - 1))
-		return false;
-
-	if (offset >= encl->size)
-		return false;
-
-	return true;
-}
-
-static int sgx_validate_tcs(struct sgx_encl *encl, struct sgx_tcs *tcs)
-{
-	int i;
-
-	if (tcs->flags & SGX_TCS_RESERVED_MASK)
-		return -EINVAL;
-
-	if (tcs->flags & SGX_TCS_DBGOPTIN)
-		return -EINVAL;
-
-	if (!sgx_validate_offset(encl, tcs->ssa_offset))
-		return -EINVAL;
-
-	if (!sgx_validate_offset(encl, tcs->fs_offset))
-		return -EINVAL;
-
-	if (!sgx_validate_offset(encl, tcs->gs_offset))
-		return -EINVAL;
-
-	if ((tcs->fs_limit & 0xFFF) != 0xFFF)
-		return -EINVAL;
-
-	if ((tcs->gs_limit & 0xFFF) != 0xFFF)
-		return -EINVAL;
-
-	for (i = 0; i < SGX_TCS_RESERVED_SIZE; i++)
-		if (tcs->reserved[i])
-			return -EINVAL;
-
-	return 0;
-}
-
 static int __sgx_encl_add_page(struct sgx_encl *encl,
 			       struct sgx_encl_page *encl_page,
 			       struct sgx_epc_page *epc_page,
-			       void *data,
-			       struct sgx_secinfo *secinfo,
-			       unsigned long mrmask)
+			       struct sgx_secinfo *secinfo, unsigned long src,
+			       unsigned long prot, unsigned long mrmask)
 {
 	struct sgx_pageinfo pginfo;
+	struct vm_area_struct *vma;
 	int ret;
 	int i;
 
 	pginfo.secs = (unsigned long)sgx_epc_addr(encl->secs.epc_page);
 	pginfo.addr = SGX_ENCL_PAGE_ADDR(encl_page);
 	pginfo.metadata = (unsigned long)secinfo;
-	pginfo.contents = (unsigned long)data;
+	pginfo.contents = src;
 
+	down_read(&current->mm->mmap_sem);
+
+	/* Query vma's VM_MAYEXEC as an indirect path_noexec() check. */
+	if (encl_page->vm_prot_bits & VM_EXEC) {
+		vma = find_vma(current->mm, src);
+		if (!vma) {
+			up_read(&current->mm->mmap_sem);
+			return -EFAULT;
+		}
+
+		if (!(vma->vm_flags & VM_MAYEXEC)) {
+			up_read(&current->mm->mmap_sem);
+			return -EACCES;
+		}
+	}
+
+	__uaccess_begin();
 	ret = __eadd(&pginfo, sgx_epc_addr(epc_page));
-	if (ret) {
-		if (encls_failed(ret))
-			ENCLS_WARN(ret, "EADD");
+	__uaccess_end();
+
+	up_read(&current->mm->mmap_sem);
+
+	if (ret)
 		return -EFAULT;
-	}
 
 	for_each_set_bit(i, &mrmask, 16) {
 		ret = __eextend(sgx_epc_addr(encl->secs.epc_page),
@@ -385,9 +360,9 @@ static int __sgx_encl_add_page(struct sgx_encl *encl,
 	return 0;
 }
 
-static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
-			     void *data, struct sgx_secinfo *secinfo,
-			     unsigned int mrmask, unsigned long prot)
+static int sgx_encl_add_page(struct sgx_encl *encl,
+			     struct sgx_enclave_add_page *addp,
+			     struct sgx_secinfo *secinfo, unsigned long prot)
 {
 	u64 page_type = secinfo->flags & SGX_SECINFO_PAGE_TYPE_MASK;
 	struct sgx_encl_page *encl_page;
@@ -395,13 +370,7 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 	struct sgx_va_page *va_page;
 	int ret;
 
-	if (page_type == SGX_SECINFO_TCS) {
-		ret = sgx_validate_tcs(encl, data);
-		if (ret)
-			return ret;
-	}
-
-	encl_page = sgx_encl_page_alloc(encl, addr, prot, page_type);
+	encl_page = sgx_encl_page_alloc(encl, addp->addr, prot, page_type);
 	if (IS_ERR(encl_page))
 		return PTR_ERR(encl_page);
 
@@ -424,8 +393,8 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 	if (ret)
 		goto err_out_shrink;
 
-	ret = __sgx_encl_add_page(encl, encl_page, epc_page, data, secinfo,
-				  mrmask);
+	ret = __sgx_encl_add_page(encl, encl_page, epc_page, secinfo,
+				  addp->src, prot, addp->mrmask);
 	if (ret)
 		goto err_out;
 
@@ -446,36 +415,6 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 	return ret;
 }
 
-static int sgx_encl_page_import_user(void *dst, unsigned long src,
-				     unsigned long prot)
-{
-	struct vm_area_struct *vma;
-	int ret = 0;
-
-	down_read(&current->mm->mmap_sem);
-
-	/* Query vma's VM_MAYEXEC as an indirect path_noexec() check. */
-	if (prot & PROT_EXEC) {
-		vma = find_vma(current->mm, src);
-		if (!vma) {
-			ret = -EFAULT;
-			goto out;
-		}
-
-		if (!(vma->vm_flags & VM_MAYEXEC)) {
-			ret = -EACCES;
-			goto out;
-		}
-	}
-
-	if (copy_from_user(dst, (void __user *)src, PAGE_SIZE))
-		ret = -EFAULT;
-
-out:
-	up_read(&current->mm->mmap_sem);
-	return ret;
-}
-
 /**
  * sgx_ioc_enclave_add_page - handler for %SGX_IOC_ENCLAVE_ADD_PAGE
  *
@@ -497,10 +436,7 @@ static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg)
 	struct sgx_encl *encl = filep->private_data;
 	struct sgx_enclave_add_page addp;
 	struct sgx_secinfo secinfo;
-	struct page *data_page;
 	unsigned long prot;
-	void *data;
-	int ret;
 
 	if (!(encl->flags & SGX_ENCL_CREATED))
 		return -EINVAL;
@@ -522,12 +458,6 @@ static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg)
 	if (sgx_validate_secinfo(&secinfo))
 		return -EINVAL;
 
-	data_page = alloc_page(GFP_HIGHUSER);
-	if (!data_page)
-		return -ENOMEM;
-
-	data = kmap(data_page);
-
 	prot = _calc_vm_trans(secinfo.flags, SGX_SECINFO_R, PROT_READ)  |
 	       _calc_vm_trans(secinfo.flags, SGX_SECINFO_W, PROT_WRITE) |
 	       _calc_vm_trans(secinfo.flags, SGX_SECINFO_X, PROT_EXEC);
@@ -536,19 +466,7 @@ static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg)
 	if ((secinfo.flags & SGX_SECINFO_PAGE_TYPE_MASK) == SGX_SECINFO_TCS)
 		prot |= PROT_READ | PROT_WRITE;
 
-	ret = sgx_encl_page_import_user(data, addp.src, prot);
-	if (ret)
-		goto out;
-
-	ret = sgx_encl_add_page(encl, addp.addr, data, &secinfo, addp.mrmask,
-				prot);
-	if (ret)
-		goto out;
-
-out:
-	kunmap(data_page);
-	__free_page(data_page);
-	return ret;
+	return sgx_encl_add_page(encl, &addp, &secinfo, prot);
 }
 
 static int __sgx_get_key_hash(struct crypto_shash *tfm, const void *modulus,
-- 
2.22.0



* Re: [PATCH for_v22 0/6] x86/sgx: Remove EADD worker and page copy
  2019-08-08 22:13 [PATCH for_v22 0/6] x86/sgx: Remove EADD worker and page copy Sean Christopherson
                   ` (5 preceding siblings ...)
  2019-08-08 22:13 ` [PATCH for_v22 6/6] x86/sgx: Pass userspace source address directly to EADD Sean Christopherson
@ 2019-08-09 16:21 ` Jarkko Sakkinen
  2019-08-09 16:22   ` Sean Christopherson
  6 siblings, 1 reply; 9+ messages in thread
From: Jarkko Sakkinen @ 2019-08-09 16:21 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: linux-sgx, Andy Lutomirski

On Thu, 2019-08-08 at 15:13 -0700, Sean Christopherson wrote:
> As suggested by Andy, remove the work queue in favor of handling the
> entire EADD flow in the context of the ioctl().  After the worker is
> gone, pass the source page/address directly to EADD instead of first
> copying the data into kernel memory.
> 
> This applies on top of the bug fix series I sent yesterday,
> https://patchwork.kernel.org/cover/11082995/.
> 
> Sean Christopherson (6):
>   x86/sgx: Validate generic SECINFO immediately after copying from user
>   x86/sgx: Set SGX_ENCL_PAGE_TCS when allocating encl_page
>   x86/sgx: Move encl_page insertion into tree out of alloc flow
>   x86/sgx: Allocate encl_page prior to taking encl->lock
>   x86/sgx: Remove the EADD page worker
>   x86/sgx: Pass userspace source address directly to EADD
> 
>  arch/x86/kernel/cpu/sgx/driver/ioctl.c | 402 +++++++------------------
>  arch/x86/kernel/cpu/sgx/driver/main.c  |   4 -
>  arch/x86/kernel/cpu/sgx/encl.h         |   2 -
>  3 files changed, 103 insertions(+), 305 deletions(-)

Can you resend a rebased one, once I've squashed the fixes?

/Jarkko



* Re: [PATCH for_v22 0/6] x86/sgx: Remove EADD worker and page copy
  2019-08-09 16:21 ` [PATCH for_v22 0/6] x86/sgx: Remove EADD worker and page copy Jarkko Sakkinen
@ 2019-08-09 16:22   ` Sean Christopherson
  0 siblings, 0 replies; 9+ messages in thread
From: Sean Christopherson @ 2019-08-09 16:22 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Andy Lutomirski

On Fri, Aug 09, 2019 at 07:21:18PM +0300, Jarkko Sakkinen wrote:
> On Thu, 2019-08-08 at 15:13 -0700, Sean Christopherson wrote:
> > As suggested by Andy, remove the work queue in favor of handling the
> > entire EADD flow in the context of the ioctl().  After the worker is
> > gone, pass the source page/address directly to EADD instead of first
> > copying the data into kernel memory.
> > 
> > This applies on top of the bug fix series I sent yesterday,
> > https://patchwork.kernel.org/cover/11082995/.
> > 
> > Sean Christopherson (6):
> >   x86/sgx: Validate generic SECINFO immediately after copying from user
> >   x86/sgx: Set SGX_ENCL_PAGE_TCS when allocating encl_page
> >   x86/sgx: Move encl_page insertion into tree out of alloc flow
> >   x86/sgx: Allocate encl_page prior to taking encl->lock
> >   x86/sgx: Remove the EADD page worker
> >   x86/sgx: Pass userspace source address directly to EADD
> > 
> >  arch/x86/kernel/cpu/sgx/driver/ioctl.c | 402 +++++++------------------
> >  arch/x86/kernel/cpu/sgx/driver/main.c  |   4 -
> >  arch/x86/kernel/cpu/sgx/encl.h         |   2 -
> >  3 files changed, 103 insertions(+), 305 deletions(-)
> 
> Can you resend a rebased one, once I've squashed the fixes?

Yep, not a problem.

