Linux-Sgx Archive on lore.kernel.org
* [PATCH for_v22 v2 0/8] x86/sgx: Remove EADD worker and page copy
@ 2019-08-13  1:12 Sean Christopherson
  2019-08-13  1:12 ` [PATCH for_v22 v2 1/8] selftests/x86/sgx: Align enclave binary on 4k boundary Sean Christopherson
                   ` (8 more replies)
  0 siblings, 9 replies; 22+ messages in thread
From: Sean Christopherson @ 2019-08-13  1:12 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Andy Lutomirski

As suggested by Andy, remove the work queue in favor of handling the
entire EADD flow in the context of the ioctl().  After the worker is
gone, pass the source page/address directly to EADD instead of first
copying the data into kernel memory.

v2:
  - Rebase to master, commit 36e0186296a7.
  - Incorporate patch to require EADD be aligned, and add a selftest
    update to obey the new alignment requirement.
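
Purely as a userspace-facing illustration of what this series implies for callers (not part of the series itself), here is a sketch of allocating a page-aligned source buffer for EADD once the kernel enforces alignment on addp.src. The helper names are hypothetical; only the alignment check mirrors the kernel's IS_ALIGNED() test.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SIZE 4096UL

/* Mirrors the kernel's IS_ALIGNED(addp.src, PAGE_SIZE) check from patch 2/8. */
int is_page_aligned(uint64_t val)
{
	return (val & (PAGE_SIZE - 1)) == 0;
}

/* Hypothetical helper: with the EADD worker gone, the kernel consumes the
 * user pointer directly, so the source buffer itself must be page aligned. */
void *alloc_eadd_source(size_t size)
{
	void *src = NULL;

	if (posix_memalign(&src, PAGE_SIZE, size))
		return NULL;
	return src;
}
```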

Sean Christopherson (8):
  selftests/x86/sgx: Align enclave binary on 4k boundary
  x86/sgx: Require EADD source to be page aligned
  x86/sgx: Validate generic SECINFO immediately after copying from user
  x86/sgx: Set SGX_ENCL_PAGE_TCS when allocating encl_page
  x86/sgx: Move encl_page insertion into tree out of alloc flow
  x86/sgx: Allocate encl_page prior to taking encl->lock
  x86/sgx: Remove the EADD page worker
  x86/sgx: Pass userspace source address directly to EADD

 arch/x86/kernel/cpu/sgx/driver/ioctl.c       | 405 +++++--------------
 arch/x86/kernel/cpu/sgx/driver/main.c        |   4 -
 arch/x86/kernel/cpu/sgx/encl.h               |   2 -
 tools/testing/selftests/x86/sgx/encl_piggy.S |   1 +
 4 files changed, 106 insertions(+), 306 deletions(-)

-- 
2.22.0


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH for_v22 v2 1/8] selftests/x86/sgx: Align enclave binary on 4k boundary
  2019-08-13  1:12 [PATCH for_v22 v2 0/8] x86/sgx: Remove EADD worker and page copy Sean Christopherson
@ 2019-08-13  1:12 ` Sean Christopherson
  2019-08-16 13:55   ` Jarkko Sakkinen
  2019-08-13  1:12 ` [PATCH for_v22 v2 2/8] x86/sgx: Require EADD source to be page aligned Sean Christopherson
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 22+ messages in thread
From: Sean Christopherson @ 2019-08-13  1:12 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Andy Lutomirski

Align the enclave's binary blob to 4096 bytes so that a pointer to the
blob satisfies hardware's requirement that the source data for EADD be
page aligned.  An upcoming kernel change will extend the alignment
requirement to userspace so that the kernel can avoid copying the source
into an internal buffer, i.e. pass the userspace pointer directly to
EADD.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 tools/testing/selftests/x86/sgx/encl_piggy.S | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tools/testing/selftests/x86/sgx/encl_piggy.S b/tools/testing/selftests/x86/sgx/encl_piggy.S
index 542001658afb..a7f6447abbba 100644
--- a/tools/testing/selftests/x86/sgx/encl_piggy.S
+++ b/tools/testing/selftests/x86/sgx/encl_piggy.S
@@ -4,6 +4,7 @@
  */
 
 	.section ".rodata", "a"
+	.balign 4096
 
 encl_bin:
 	.globl encl_bin
-- 
2.22.0



* [PATCH for_v22 v2 2/8] x86/sgx: Require EADD source to be page aligned
  2019-08-13  1:12 [PATCH for_v22 v2 0/8] x86/sgx: Remove EADD worker and page copy Sean Christopherson
  2019-08-13  1:12 ` [PATCH for_v22 v2 1/8] selftests/x86/sgx: Align enclave binary on 4k boundary Sean Christopherson
@ 2019-08-13  1:12 ` Sean Christopherson
  2019-08-16 14:15   ` Jarkko Sakkinen
  2019-08-13  1:12 ` [PATCH for_v22 v2 3/8] x86/sgx: Validate generic SECINFO immediately after copying from user Sean Christopherson
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 22+ messages in thread
From: Sean Christopherson @ 2019-08-13  1:12 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Andy Lutomirski

Reject the EADD ioctl() if the source address provided by userspace is
not page aligned.  Page alignment is required by hardware, but this is
not enforced on userspace as the kernel first copies the source page to
an internal (page aligned) buffer.  Require the userspace address to be
page aligned in preparation for reworking EADD to directly consume the
userspace address.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kernel/cpu/sgx/driver/ioctl.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
index 9b784a061a47..bc65249ed5df 100644
--- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
@@ -624,7 +624,8 @@ static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg)
 	if (copy_from_user(&addp, arg, sizeof(addp)))
 		return -EFAULT;
 
-	if (!IS_ALIGNED(addp.addr, PAGE_SIZE))
+	if (!IS_ALIGNED(addp.addr, PAGE_SIZE) ||
+	    !IS_ALIGNED(addp.src, PAGE_SIZE))
 		return -EINVAL;
 
 	if (addp.addr < encl->base || addp.addr - encl->base >= encl->size)
-- 
2.22.0



* [PATCH for_v22 v2 3/8] x86/sgx: Validate generic SECINFO immediately after copying from user
  2019-08-13  1:12 [PATCH for_v22 v2 0/8] x86/sgx: Remove EADD worker and page copy Sean Christopherson
  2019-08-13  1:12 ` [PATCH for_v22 v2 1/8] selftests/x86/sgx: Align enclave binary on 4k boundary Sean Christopherson
  2019-08-13  1:12 ` [PATCH for_v22 v2 2/8] x86/sgx: Require EADD source to be page aligned Sean Christopherson
@ 2019-08-13  1:12 ` Sean Christopherson
  2019-08-13  1:12 ` [PATCH for_v22 v2 4/8] x86/sgx: Set SGX_ENCL_PAGE_TCS when allocating encl_page Sean Christopherson
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 22+ messages in thread
From: Sean Christopherson @ 2019-08-13  1:12 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Andy Lutomirski

When adding pages to the enclave, verify that the SECINFO flags provided by
userspace are valid before consuming the protection bits; validating first
also avoids allocating a page when SECINFO is invalid.
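
As a toy illustration of the validate-before-allocate ordering this patch establishes (flag values and the reduced rule set below are illustrative stand-ins, not the real SGX uapi or the full sgx_validate_secinfo() logic):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for the SGX_SECINFO_* permission bits. */
#define SECINFO_R		0x01ULL
#define SECINFO_W		0x02ULL
#define SECINFO_X		0x04ULL
#define SECINFO_RESERVED_MASK	(~0x07ULL)

/* Shape (not full rule set) of a secinfo validator: called immediately
 * after copy_from_user(), before any page allocation, so bad input is
 * rejected with no allocation to unwind. Returns 0 if valid, -1 if not. */
int validate_secinfo_flags(uint64_t flags)
{
	if (flags & SECINFO_RESERVED_MASK)
		return -1;
	/* A writable-but-unreadable mapping makes no sense; reject it. */
	if ((flags & SECINFO_W) && !(flags & SECINFO_R))
		return -1;
	return 0;
}
```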

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kernel/cpu/sgx/driver/ioctl.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
index bc65249ed5df..5831f51d64cd 100644
--- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
@@ -519,8 +519,6 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 	struct sgx_va_page *va_page;
 	int ret;
 
-	if (sgx_validate_secinfo(secinfo))
-		return -EINVAL;
 	if (page_type == SGX_SECINFO_TCS) {
 		ret = sgx_validate_tcs(encl, data);
 		if (ret)
@@ -635,6 +633,9 @@ static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg)
 			   sizeof(secinfo)))
 		return -EFAULT;
 
+	if (sgx_validate_secinfo(&secinfo))
+		return -EINVAL;
+
 	data_page = alloc_page(GFP_HIGHUSER);
 	if (!data_page)
 		return -ENOMEM;
-- 
2.22.0



* [PATCH for_v22 v2 4/8] x86/sgx: Set SGX_ENCL_PAGE_TCS when allocating encl_page
  2019-08-13  1:12 [PATCH for_v22 v2 0/8] x86/sgx: Remove EADD worker and page copy Sean Christopherson
                   ` (2 preceding siblings ...)
  2019-08-13  1:12 ` [PATCH for_v22 v2 3/8] x86/sgx: Validate generic SECINFO immediately after copying from user Sean Christopherson
@ 2019-08-13  1:12 ` Sean Christopherson
  2019-08-22 12:56   ` Jarkko Sakkinen
  2019-08-13  1:12 ` [PATCH for_v22 v2 5/8] x86/sgx: Move encl_page insertion into tree out of alloc flow Sean Christopherson
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 22+ messages in thread
From: Sean Christopherson @ 2019-08-13  1:12 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Andy Lutomirski

Set SGX_ENCL_PAGE_TCS when encl_page->desc is initialized in
sgx_encl_page_alloc() to improve readability, and so that the code
isn't affected when the bulk of __sgx_encl_add_page() is rewritten
to remove the EADD worker in a future patch.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kernel/cpu/sgx/driver/ioctl.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
index 5831f51d64cd..2b3b86412131 100644
--- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
@@ -247,7 +247,8 @@ static int sgx_validate_secs(const struct sgx_secs *secs,
 
 static struct sgx_encl_page *sgx_encl_page_alloc(struct sgx_encl *encl,
 						 unsigned long addr,
-						 unsigned long prot)
+						 unsigned long prot,
+						 u64 page_type)
 {
 	struct sgx_encl_page *encl_page;
 	int ret;
@@ -258,6 +259,8 @@ static struct sgx_encl_page *sgx_encl_page_alloc(struct sgx_encl *encl,
 	if (!encl_page)
 		return ERR_PTR(-ENOMEM);
 	encl_page->desc = addr;
+	if (page_type == SGX_SECINFO_TCS)
+		encl_page->desc |= SGX_ENCL_PAGE_TCS;
 	encl_page->encl = encl;
 	encl_page->vm_prot_bits = calc_vm_prot_bits(prot, 0);
 	ret = radix_tree_insert(&encl->page_tree, PFN_DOWN(encl_page->desc),
@@ -476,7 +479,6 @@ static int __sgx_encl_add_page(struct sgx_encl *encl,
 			       unsigned int mrmask)
 {
 	unsigned long page_index = sgx_encl_get_index(encl_page);
-	u64 page_type = secinfo->flags & SGX_SECINFO_PAGE_TYPE_MASK;
 	struct sgx_add_page_req *req = NULL;
 	struct page *backing;
 	void *backing_ptr;
@@ -495,8 +497,7 @@ static int __sgx_encl_add_page(struct sgx_encl *encl,
 	backing_ptr = kmap(backing);
 	memcpy(backing_ptr, data, PAGE_SIZE);
 	kunmap(backing);
-	if (page_type == SGX_SECINFO_TCS)
-		encl_page->desc |= SGX_ENCL_PAGE_TCS;
+
 	memcpy(&req->secinfo, secinfo, sizeof(*secinfo));
 	req->encl = encl;
 	req->encl_page = encl_page;
@@ -533,7 +534,7 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 		goto err_out_unlock;
 	}
 
-	encl_page = sgx_encl_page_alloc(encl, addr, prot);
+	encl_page = sgx_encl_page_alloc(encl, addr, prot, page_type);
 	if (IS_ERR(encl_page)) {
 		ret = PTR_ERR(encl_page);
 		goto err_out_shrink;
-- 
2.22.0



* [PATCH for_v22 v2 5/8] x86/sgx: Move encl_page insertion into tree out of alloc flow
  2019-08-13  1:12 [PATCH for_v22 v2 0/8] x86/sgx: Remove EADD worker and page copy Sean Christopherson
                   ` (3 preceding siblings ...)
  2019-08-13  1:12 ` [PATCH for_v22 v2 4/8] x86/sgx: Set SGX_ENCL_PAGE_TCS when allocating encl_page Sean Christopherson
@ 2019-08-13  1:12 ` Sean Christopherson
  2019-08-13  1:12 ` [PATCH for_v22 v2 6/8] x86/sgx: Allocate encl_page prior to taking encl->lock Sean Christopherson
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 22+ messages in thread
From: Sean Christopherson @ 2019-08-13  1:12 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Andy Lutomirski

Move insertion into the page tree out of sgx_encl_page_alloc() so that
the function can be moved out from under encl->lock.  This is a
preparatory step for removing the add page worker, as the encl_page is
needed to allocate its EPC page, and EPC page allocation must be done
without holding encl->lock so that it can trigger reclaim if necessary.

Note, radix_tree_insert() returns -EEXIST if the slot is already in use,
i.e. there's no need to pre-check via radix_tree_lookup().
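
A toy stand-in for that radix tree contract (not kernel code, array in place of a tree): the insert itself reports an occupied slot, so the caller needs no separate lookup.

```c
#include <assert.h>
#include <stddef.h>

#define EEXIST 17	/* same errno value the kernel returns */
#define EINVAL 22
#define NSLOTS 8

void *slots[NSLOTS];

/* Like radix_tree_insert(): -EEXIST for an occupied slot means no
 * lookup pre-check is needed before inserting. */
int slot_insert(unsigned long index, void *item)
{
	if (index >= NSLOTS || !item)
		return -EINVAL;
	if (slots[index])
		return -EEXIST;
	slots[index] = item;
	return 0;
}
```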

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kernel/cpu/sgx/driver/ioctl.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
index 2b3b86412131..55e0fe261b8c 100644
--- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
@@ -251,10 +251,7 @@ static struct sgx_encl_page *sgx_encl_page_alloc(struct sgx_encl *encl,
 						 u64 page_type)
 {
 	struct sgx_encl_page *encl_page;
-	int ret;
 
-	if (radix_tree_lookup(&encl->page_tree, PFN_DOWN(addr)))
-		return ERR_PTR(-EEXIST);
 	encl_page = kzalloc(sizeof(*encl_page), GFP_KERNEL);
 	if (!encl_page)
 		return ERR_PTR(-ENOMEM);
@@ -263,12 +260,7 @@ static struct sgx_encl_page *sgx_encl_page_alloc(struct sgx_encl *encl,
 		encl_page->desc |= SGX_ENCL_PAGE_TCS;
 	encl_page->encl = encl;
 	encl_page->vm_prot_bits = calc_vm_prot_bits(prot, 0);
-	ret = radix_tree_insert(&encl->page_tree, PFN_DOWN(encl_page->desc),
-				encl_page);
-	if (ret) {
-		kfree(encl_page);
-		return ERR_PTR(ret);
-	}
+
 	return encl_page;
 }
 
@@ -540,6 +532,11 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 		goto err_out_shrink;
 	}
 
+	ret = radix_tree_insert(&encl->page_tree, PFN_DOWN(encl_page->desc),
+				encl_page);
+	if (ret)
+		goto err_out_free;
+
 	ret = __sgx_encl_add_page(encl, encl_page, data, secinfo, mrmask);
 	if (ret)
 		goto err_out;
@@ -550,6 +547,7 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 err_out:
 	radix_tree_delete(&encl_page->encl->page_tree,
 			  PFN_DOWN(encl_page->desc));
+err_out_free:
 	kfree(encl_page);
 
 err_out_shrink:
-- 
2.22.0



* [PATCH for_v22 v2 6/8] x86/sgx: Allocate encl_page prior to taking encl->lock
  2019-08-13  1:12 [PATCH for_v22 v2 0/8] x86/sgx: Remove EADD worker and page copy Sean Christopherson
                   ` (4 preceding siblings ...)
  2019-08-13  1:12 ` [PATCH for_v22 v2 5/8] x86/sgx: Move encl_page insertion into tree out of alloc flow Sean Christopherson
@ 2019-08-13  1:12 ` Sean Christopherson
  2019-08-13  1:12 ` [PATCH for_v22 v2 7/8] x86/sgx: Remove the EADD page worker Sean Christopherson
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 22+ messages in thread
From: Sean Christopherson @ 2019-08-13  1:12 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Andy Lutomirski

Refactor sgx_encl_add_page() to allocate the encl_page prior to taking
encl->lock so that the encl_page can be used to allocate its associated
EPC page without having to drop and retake encl->lock.  Removal of the
add page worker will move EPC page allocation to sgx_encl_add_page().
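
The allocate-outside-the-lock pattern this refactor applies can be sketched generically (illustration only, not kernel code; the stub lock stands in for encl->lock): the potentially blocking allocation runs first, and the lock covers only the insertion.

```c
#include <assert.h>
#include <stdlib.h>

/* Stub lock standing in for encl->lock; asserts it is never nested. */
int locked;
void take_lock(void)    { assert(!locked); locked = 1; }
void release_lock(void) { assert(locked);  locked = 0; }

struct node { int val; struct node *next; };
struct node *list_head;

/* Allocation happens before take_lock(), so an allocator that blocks
 * (or, in the kernel case, triggers reclaim) never runs with the lock
 * held; only the list insertion is serialized. */
int add_node(int val)
{
	struct node *n = malloc(sizeof(*n));

	if (!n)
		return -1;
	n->val = val;

	take_lock();
	n->next = list_head;
	list_head = n;
	release_lock();
	return 0;
}
```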

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kernel/cpu/sgx/driver/ioctl.c | 29 ++++++++++++--------------
 1 file changed, 13 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
index 55e0fe261b8c..49407ccb26c8 100644
--- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
@@ -518,24 +518,22 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 			return ret;
 	}
 
-	mutex_lock(&encl->lock);
-
-	va_page = sgx_encl_grow(encl, SGX_ENCL_INITIALIZED | SGX_ENCL_DEAD);
-	if (IS_ERR(va_page)) {
-		ret = PTR_ERR(va_page);
-		goto err_out_unlock;
-	}
-
 	encl_page = sgx_encl_page_alloc(encl, addr, prot, page_type);
-	if (IS_ERR(encl_page)) {
-		ret = PTR_ERR(encl_page);
-		goto err_out_shrink;
+	if (IS_ERR(encl_page))
+		return PTR_ERR(encl_page);
+
+	mutex_lock(&encl->lock);
+
+	va_page = sgx_encl_grow(encl, SGX_ENCL_INITIALIZED | SGX_ENCL_DEAD);
+	if (IS_ERR(va_page)) {
+		ret = PTR_ERR(va_page);
+		goto err_out_free;
 	}
 
 	ret = radix_tree_insert(&encl->page_tree, PFN_DOWN(encl_page->desc),
 				encl_page);
 	if (ret)
-		goto err_out_free;
+		goto err_out_shrink;
 
 	ret = __sgx_encl_add_page(encl, encl_page, data, secinfo, mrmask);
 	if (ret)
@@ -547,13 +545,12 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 err_out:
 	radix_tree_delete(&encl_page->encl->page_tree,
 			  PFN_DOWN(encl_page->desc));
+err_out_shrink:
+	sgx_encl_shrink(encl, va_page);
+
 err_out_free:
 	kfree(encl_page);
 
-err_out_shrink:
-	sgx_encl_shrink(encl, va_page);
-
-err_out_unlock:
 	mutex_unlock(&encl->lock);
 	return ret;
 }
-- 
2.22.0



* [PATCH for_v22 v2 7/8] x86/sgx: Remove the EADD page worker
  2019-08-13  1:12 [PATCH for_v22 v2 0/8] x86/sgx: Remove EADD worker and page copy Sean Christopherson
                   ` (5 preceding siblings ...)
  2019-08-13  1:12 ` [PATCH for_v22 v2 6/8] x86/sgx: Allocate encl_page prior to taking encl->lock Sean Christopherson
@ 2019-08-13  1:12 ` Sean Christopherson
  2019-08-13  1:12 ` [PATCH for_v22 v2 8/8] x86/sgx: Pass userspace source address directly to EADD Sean Christopherson
  2019-08-15 22:00 ` [PATCH for_v22 v2 0/8] x86/sgx: Remove EADD worker and page copy Jarkko Sakkinen
  8 siblings, 0 replies; 22+ messages in thread
From: Sean Christopherson @ 2019-08-13  1:12 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Andy Lutomirski

Remove the work queue approach to adding pages to an enclave.  There
are several benefits to fully adding pages within the context of the
ioctl() call:

  - Simplifies the code base
  - Provides userspace with accurate error information, e.g. the ioctl()
    now fails if EPC allocation fails
  - Paves the way for passing the user's source page directly to EADD
    to eliminate the overhead of allocating a kernel page and copying
    the user data into said kernel page

The downside to removing the worker is that applications with their own
scheduler, e.g. Go's M:N scheduler, can see a significant reduction in
throughput (10x or more) when building multiple enclaves in parallel,
e.g. in the Go case, spinning up several goroutines with each goroutine
building a different enclave.

Suggested-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kernel/cpu/sgx/driver/ioctl.c | 191 ++++++-------------------
 arch/x86/kernel/cpu/sgx/driver/main.c  |   4 -
 arch/x86/kernel/cpu/sgx/encl.h         |   2 -
 3 files changed, 40 insertions(+), 157 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
index 49407ccb26c8..840376cf352f 100644
--- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
@@ -14,14 +14,6 @@
 #include <linux/suspend.h>
 #include "driver.h"
 
-struct sgx_add_page_req {
-	struct sgx_encl *encl;
-	struct sgx_encl_page *encl_page;
-	struct sgx_secinfo secinfo;
-	unsigned long mrmask;
-	struct list_head list;
-};
-
 static struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl,
 					 unsigned int disallowed_flags)
 {
@@ -77,115 +69,6 @@ static void sgx_encl_shrink(struct sgx_encl *encl, struct sgx_va_page *va_page)
 	}
 }
 
-static bool sgx_process_add_page_req(struct sgx_add_page_req *req,
-				     struct sgx_epc_page *epc_page)
-{
-	struct sgx_encl_page *encl_page = req->encl_page;
-	struct sgx_encl *encl = req->encl;
-	unsigned long page_index = sgx_encl_get_index(encl_page);
-	struct sgx_secinfo secinfo;
-	struct sgx_pageinfo pginfo;
-	struct page *backing;
-	unsigned long addr;
-	int ret;
-	int i;
-
-	if (encl->flags &  SGX_ENCL_DEAD)
-		return false;
-
-	addr = SGX_ENCL_PAGE_ADDR(encl_page);
-
-	backing = sgx_encl_get_backing_page(encl, page_index);
-	if (IS_ERR(backing))
-		return false;
-
-	/*
-	 * The SECINFO field must be 64-byte aligned, copy it to a local
-	 * variable that is guaranteed to be aligned as req->secinfo may
-	 * or may not be 64-byte aligned, e.g. req may have been allocated
-	 * via kzalloc which is not aware of __aligned attributes.
-	 */
-	memcpy(&secinfo, &req->secinfo, sizeof(secinfo));
-
-	pginfo.secs = (unsigned long)sgx_epc_addr(encl->secs.epc_page);
-	pginfo.addr = addr;
-	pginfo.metadata = (unsigned long)&secinfo;
-	pginfo.contents = (unsigned long)kmap_atomic(backing);
-	ret = __eadd(&pginfo, sgx_epc_addr(epc_page));
-	kunmap_atomic((void *)(unsigned long)pginfo.contents);
-
-	put_page(backing);
-
-	if (ret) {
-		if (encls_failed(ret))
-			ENCLS_WARN(ret, "EADD");
-		return false;
-	}
-
-	for_each_set_bit(i, &req->mrmask, 16) {
-		ret = __eextend(sgx_epc_addr(encl->secs.epc_page),
-				sgx_epc_addr(epc_page) + (i * 0x100));
-		if (ret) {
-			if (encls_failed(ret))
-				ENCLS_WARN(ret, "EEXTEND");
-			return false;
-		}
-	}
-
-	encl_page->encl = encl;
-	encl_page->epc_page = epc_page;
-	encl->secs_child_cnt++;
-	sgx_mark_page_reclaimable(encl_page->epc_page);
-
-	return true;
-}
-
-static void sgx_add_page_worker(struct work_struct *work)
-{
-	struct sgx_add_page_req *req;
-	bool skip_rest = false;
-	bool is_empty = false;
-	struct sgx_encl *encl;
-	struct sgx_epc_page *epc_page;
-
-	encl = container_of(work, struct sgx_encl, work);
-
-	do {
-		schedule();
-
-		mutex_lock(&encl->lock);
-		if (encl->flags & SGX_ENCL_DEAD)
-			skip_rest = true;
-
-		req = list_first_entry(&encl->add_page_reqs,
-				       struct sgx_add_page_req, list);
-		list_del(&req->list);
-		is_empty = list_empty(&encl->add_page_reqs);
-		mutex_unlock(&encl->lock);
-
-		if (skip_rest)
-			goto next;
-
-		epc_page = sgx_alloc_page(req->encl_page, true);
-
-		mutex_lock(&encl->lock);
-
-		if (IS_ERR(epc_page)) {
-			sgx_encl_destroy(encl);
-			skip_rest = true;
-		} else if (!sgx_process_add_page_req(req, epc_page)) {
-			sgx_free_page(epc_page);
-			sgx_encl_destroy(encl);
-			skip_rest = true;
-		}
-
-		mutex_unlock(&encl->lock);
-
-next:
-		kfree(req);
-	} while (!is_empty);
-}
-
 static u32 sgx_calc_ssaframesize(u32 miscselect, u64 xfrm)
 {
 	u32 size_max = PAGE_SIZE;
@@ -299,8 +182,6 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs)
 
 	encl->backing = backing;
 
-	INIT_WORK(&encl->work, sgx_add_page_worker);
-
 	secs_epc = sgx_alloc_page(&encl->secs, true);
 	if (IS_ERR(secs_epc)) {
 		ret = PTR_ERR(secs_epc);
@@ -466,40 +347,42 @@ static int sgx_validate_tcs(struct sgx_encl *encl, struct sgx_tcs *tcs)
 
 static int __sgx_encl_add_page(struct sgx_encl *encl,
 			       struct sgx_encl_page *encl_page,
+			       struct sgx_epc_page *epc_page,
 			       void *data,
 			       struct sgx_secinfo *secinfo,
-			       unsigned int mrmask)
+			       unsigned long mrmask)
 {
-	unsigned long page_index = sgx_encl_get_index(encl_page);
-	struct sgx_add_page_req *req = NULL;
-	struct page *backing;
-	void *backing_ptr;
-	int empty;
+	struct sgx_pageinfo pginfo;
+	int ret;
+	int i;
 
-	req = kzalloc(sizeof(*req), GFP_KERNEL);
-	if (!req)
-		return -ENOMEM;
+	pginfo.secs = (unsigned long)sgx_epc_addr(encl->secs.epc_page);
+	pginfo.addr = SGX_ENCL_PAGE_ADDR(encl_page);
+	pginfo.metadata = (unsigned long)secinfo;
+	pginfo.contents = (unsigned long)data;
 
-	backing = sgx_encl_get_backing_page(encl, page_index);
-	if (IS_ERR(backing)) {
-		kfree(req);
-		return PTR_ERR(backing);
+	ret = __eadd(&pginfo, sgx_epc_addr(epc_page));
+	if (ret) {
+		if (encls_failed(ret))
+			ENCLS_WARN(ret, "EADD");
+		return -EFAULT;
 	}
 
-	backing_ptr = kmap(backing);
-	memcpy(backing_ptr, data, PAGE_SIZE);
-	kunmap(backing);
+	for_each_set_bit(i, &mrmask, 16) {
+		ret = __eextend(sgx_epc_addr(encl->secs.epc_page),
+				sgx_epc_addr(epc_page) + (i * 0x100));
+		if (ret) {
+			if (encls_failed(ret))
+				ENCLS_WARN(ret, "EEXTEND");
+			return -EFAULT;
+		}
+	}
+
+	encl_page->encl = encl;
+	encl_page->epc_page = epc_page;
+	encl->secs_child_cnt++;
+	sgx_mark_page_reclaimable(encl_page->epc_page);
 
-	memcpy(&req->secinfo, secinfo, sizeof(*secinfo));
-	req->encl = encl;
-	req->encl_page = encl_page;
-	req->mrmask = mrmask;
-	empty = list_empty(&encl->add_page_reqs);
-	list_add_tail(&req->list, &encl->add_page_reqs);
-	if (empty)
-		queue_work(sgx_encl_wq, &encl->work);
-	set_page_dirty(backing);
-	put_page(backing);
 	return 0;
 }
 
@@ -509,6 +392,7 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 {
 	u64 page_type = secinfo->flags & SGX_SECINFO_PAGE_TYPE_MASK;
 	struct sgx_encl_page *encl_page;
+	struct sgx_epc_page *epc_page;
 	struct sgx_va_page *va_page;
 	int ret;
 
@@ -522,6 +406,12 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 	if (IS_ERR(encl_page))
 		return PTR_ERR(encl_page);
 
+	epc_page = sgx_alloc_page(encl_page, true);
+	if (IS_ERR(epc_page)) {
+		kfree(encl_page);
+		return PTR_ERR(epc_page);
+	}
+
 	mutex_lock(&encl->lock);
 
 	va_page = sgx_encl_grow(encl, SGX_ENCL_INITIALIZED | SGX_ENCL_DEAD);
@@ -535,7 +425,8 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 	if (ret)
 		goto err_out_shrink;
 
-	ret = __sgx_encl_add_page(encl, encl_page, data, secinfo, mrmask);
+	ret = __sgx_encl_add_page(encl, encl_page, epc_page, data, secinfo,
+				  mrmask);
 	if (ret)
 		goto err_out;
 
@@ -549,6 +440,7 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 	sgx_encl_shrink(encl, va_page);
 
 err_out_free:
+	sgx_free_page(epc_page);
 	kfree(encl_page);
 
 	mutex_unlock(&encl->lock);
@@ -592,14 +484,13 @@ static int sgx_encl_page_import_user(void *dst, unsigned long src,
  * @arg:	a user pointer to a struct sgx_enclave_add_page instance
  *
  * Add a page to an uninitialized enclave (EADD), and optionally extend the
- * enclave's measurement with the contents of the page (EEXTEND). Adding is done
- * asynchronously. A success only indicates that the page has been added to a
- * work queue.
+ * enclave's measurement with the contents of the page (EEXTEND).
  *
  * Return:
  *   0 on success,
  *   -EINVAL if other than RWX protection bits have been set
  *   -EACCES if the source page is located in a noexec partition
+ *   -ENOMEM if any memory allocation, including EPC, fails
  *   -errno otherwise
  */
 static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg)
@@ -702,8 +593,6 @@ static int sgx_encl_init(struct sgx_encl *encl, struct sgx_sigstruct *sigstruct,
 	if (ret)
 		return ret;
 
-	flush_work(&encl->work);
-
 	mutex_lock(&encl->lock);
 
 	if (encl->flags & (SGX_ENCL_INITIALIZED | SGX_ENCL_DEAD)) {
diff --git a/arch/x86/kernel/cpu/sgx/driver/main.c b/arch/x86/kernel/cpu/sgx/driver/main.c
index dfa107247f2d..e740d71e2311 100644
--- a/arch/x86/kernel/cpu/sgx/driver/main.c
+++ b/arch/x86/kernel/cpu/sgx/driver/main.c
@@ -32,7 +32,6 @@ static int sgx_open(struct inode *inode, struct file *file)
 		return -ENOMEM;
 
 	kref_init(&encl->refcount);
-	INIT_LIST_HEAD(&encl->add_page_reqs);
 	INIT_LIST_HEAD(&encl->va_pages);
 	INIT_RADIX_TREE(&encl->page_tree, GFP_KERNEL);
 	mutex_init(&encl->lock);
@@ -81,9 +80,6 @@ static int sgx_release(struct inode *inode, struct file *file)
 	encl->flags |= SGX_ENCL_DEAD;
 	mutex_unlock(&encl->lock);
 
-	if (encl->work.func)
-		flush_work(&encl->work);
-
 	kref_put(&encl->refcount, sgx_encl_release);
 	return 0;
 }
diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index d3a1687ed84c..b1f4e4f0fa65 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -82,8 +82,6 @@ struct sgx_encl {
 	unsigned long ssaframesize;
 	struct list_head va_pages;
 	struct radix_tree_root page_tree;
-	struct list_head add_page_reqs;
-	struct work_struct work;
 	struct sgx_encl_page secs;
 	cpumask_t cpumask;
 };
-- 
2.22.0



* [PATCH for_v22 v2 8/8] x86/sgx: Pass userspace source address directly to EADD
  2019-08-13  1:12 [PATCH for_v22 v2 0/8] x86/sgx: Remove EADD worker and page copy Sean Christopherson
                   ` (6 preceding siblings ...)
  2019-08-13  1:12 ` [PATCH for_v22 v2 7/8] x86/sgx: Remove the EADD page worker Sean Christopherson
@ 2019-08-13  1:12 ` Sean Christopherson
  2019-08-22 14:37   ` Jarkko Sakkinen
  2019-08-15 22:00 ` [PATCH for_v22 v2 0/8] x86/sgx: Remove EADD worker and page copy Jarkko Sakkinen
  8 siblings, 1 reply; 22+ messages in thread
From: Sean Christopherson @ 2019-08-13  1:12 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Andy Lutomirski

Invoke EADD with the userspace source address instead of first copying
the data to a kernel page to avoid the overhead of alloc_page() and
copy_from_user().

Suggested-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kernel/cpu/sgx/driver/ioctl.c | 148 ++++++-------------------
 1 file changed, 33 insertions(+), 115 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
index 840376cf352f..a55a138826d5 100644
--- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
@@ -302,71 +302,46 @@ static int sgx_validate_secinfo(struct sgx_secinfo *secinfo)
 	return 0;
 }
 
-static bool sgx_validate_offset(struct sgx_encl *encl, unsigned long offset)
-{
-	if (offset & (PAGE_SIZE - 1))
-		return false;
-
-	if (offset >= encl->size)
-		return false;
-
-	return true;
-}
-
-static int sgx_validate_tcs(struct sgx_encl *encl, struct sgx_tcs *tcs)
-{
-	int i;
-
-	if (tcs->flags & SGX_TCS_RESERVED_MASK)
-		return -EINVAL;
-
-	if (tcs->flags & SGX_TCS_DBGOPTIN)
-		return -EINVAL;
-
-	if (!sgx_validate_offset(encl, tcs->ssa_offset))
-		return -EINVAL;
-
-	if (!sgx_validate_offset(encl, tcs->fs_offset))
-		return -EINVAL;
-
-	if (!sgx_validate_offset(encl, tcs->gs_offset))
-		return -EINVAL;
-
-	if ((tcs->fs_limit & 0xFFF) != 0xFFF)
-		return -EINVAL;
-
-	if ((tcs->gs_limit & 0xFFF) != 0xFFF)
-		return -EINVAL;
-
-	for (i = 0; i < SGX_TCS_RESERVED_SIZE; i++)
-		if (tcs->reserved[i])
-			return -EINVAL;
-
-	return 0;
-}
-
 static int __sgx_encl_add_page(struct sgx_encl *encl,
 			       struct sgx_encl_page *encl_page,
 			       struct sgx_epc_page *epc_page,
-			       void *data,
-			       struct sgx_secinfo *secinfo,
-			       unsigned long mrmask)
+			       struct sgx_secinfo *secinfo, unsigned long src,
+			       unsigned long prot, unsigned long mrmask)
 {
 	struct sgx_pageinfo pginfo;
+	struct vm_area_struct *vma;
 	int ret;
 	int i;
 
 	pginfo.secs = (unsigned long)sgx_epc_addr(encl->secs.epc_page);
 	pginfo.addr = SGX_ENCL_PAGE_ADDR(encl_page);
 	pginfo.metadata = (unsigned long)secinfo;
-	pginfo.contents = (unsigned long)data;
+	pginfo.contents = src;
 
+	down_read(&current->mm->mmap_sem);
+
+	/* Query vma's VM_MAYEXEC as an indirect path_noexec() check. */
+	if (encl_page->vm_prot_bits & VM_EXEC) {
+		vma = find_vma(current->mm, src);
+		if (!vma) {
+			up_read(&current->mm->mmap_sem);
+			return -EFAULT;
+		}
+
+		if (!(vma->vm_flags & VM_MAYEXEC)) {
+			up_read(&current->mm->mmap_sem);
+			return -EACCES;
+		}
+	}
+
+	__uaccess_begin();
 	ret = __eadd(&pginfo, sgx_epc_addr(epc_page));
-	if (ret) {
-		if (encls_failed(ret))
-			ENCLS_WARN(ret, "EADD");
+	__uaccess_end();
+
+	up_read(&current->mm->mmap_sem);
+
+	if (ret)
 		return -EFAULT;
-	}
 
 	for_each_set_bit(i, &mrmask, 16) {
 		ret = __eextend(sgx_epc_addr(encl->secs.epc_page),
@@ -386,9 +361,9 @@ static int __sgx_encl_add_page(struct sgx_encl *encl,
 	return 0;
 }
 
-static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
-			     void *data, struct sgx_secinfo *secinfo,
-			     unsigned int mrmask, unsigned long prot)
+static int sgx_encl_add_page(struct sgx_encl *encl,
+			     struct sgx_enclave_add_page *addp,
+			     struct sgx_secinfo *secinfo, unsigned long prot)
 {
 	u64 page_type = secinfo->flags & SGX_SECINFO_PAGE_TYPE_MASK;
 	struct sgx_encl_page *encl_page;
@@ -396,13 +371,7 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 	struct sgx_va_page *va_page;
 	int ret;
 
-	if (page_type == SGX_SECINFO_TCS) {
-		ret = sgx_validate_tcs(encl, data);
-		if (ret)
-			return ret;
-	}
-
-	encl_page = sgx_encl_page_alloc(encl, addr, prot, page_type);
+	encl_page = sgx_encl_page_alloc(encl, addp->addr, prot, page_type);
 	if (IS_ERR(encl_page))
 		return PTR_ERR(encl_page);
 
@@ -425,8 +394,8 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 	if (ret)
 		goto err_out_shrink;
 
-	ret = __sgx_encl_add_page(encl, encl_page, epc_page, data, secinfo,
-				  mrmask);
+	ret = __sgx_encl_add_page(encl, encl_page, epc_page, secinfo,
+				  addp->src, prot, addp->mrmask);
 	if (ret)
 		goto err_out;
 
@@ -447,36 +416,6 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 	return ret;
 }
 
-static int sgx_encl_page_import_user(void *dst, unsigned long src,
-				     unsigned long prot)
-{
-	struct vm_area_struct *vma;
-	int ret = 0;
-
-	down_read(&current->mm->mmap_sem);
-
-	/* Query vma's VM_MAYEXEC as an indirect path_noexec() check. */
-	if (prot & PROT_EXEC) {
-		vma = find_vma(current->mm, src);
-		if (!vma) {
-			ret = -EFAULT;
-			goto out;
-		}
-
-		if (!(vma->vm_flags & VM_MAYEXEC)) {
-			ret = -EACCES;
-			goto out;
-		}
-	}
-
-	if (copy_from_user(dst, (void __user *)src, PAGE_SIZE))
-		ret = -EFAULT;
-
-out:
-	up_read(&current->mm->mmap_sem);
-	return ret;
-}
-
 /**
  * sgx_ioc_enclave_add_page - handler for %SGX_IOC_ENCLAVE_ADD_PAGE
  *
@@ -498,10 +437,7 @@ static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg)
 	struct sgx_encl *encl = filep->private_data;
 	struct sgx_enclave_add_page addp;
 	struct sgx_secinfo secinfo;
-	struct page *data_page;
 	unsigned long prot;
-	void *data;
-	int ret;
 
 	if (!(encl->flags & SGX_ENCL_CREATED))
 		return -EINVAL;
@@ -523,12 +459,6 @@ static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg)
 	if (sgx_validate_secinfo(&secinfo))
 		return -EINVAL;
 
-	data_page = alloc_page(GFP_HIGHUSER);
-	if (!data_page)
-		return -ENOMEM;
-
-	data = kmap(data_page);
-
 	prot = _calc_vm_trans(secinfo.flags, SGX_SECINFO_R, PROT_READ)  |
 	       _calc_vm_trans(secinfo.flags, SGX_SECINFO_W, PROT_WRITE) |
 	       _calc_vm_trans(secinfo.flags, SGX_SECINFO_X, PROT_EXEC);
@@ -537,19 +467,7 @@ static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg)
 	if ((secinfo.flags & SGX_SECINFO_PAGE_TYPE_MASK) == SGX_SECINFO_TCS)
 		prot |= PROT_READ | PROT_WRITE;
 
-	ret = sgx_encl_page_import_user(data, addp.src, prot);
-	if (ret)
-		goto out;
-
-	ret = sgx_encl_add_page(encl, addp.addr, data, &secinfo, addp.mrmask,
-				prot);
-	if (ret)
-		goto out;
-
-out:
-	kunmap(data_page);
-	__free_page(data_page);
-	return ret;
+	return sgx_encl_add_page(encl, &addp, &secinfo, prot);
 }
 
 static int __sgx_get_key_hash(struct crypto_shash *tfm, const void *modulus,
-- 
2.22.0


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH for_v22 v2 0/8] x86/sgx: Remove EADD worker and page copy
  2019-08-13  1:12 [PATCH for_v22 v2 0/8] x86/sgx: Remove EADD worker and page copy Sean Christopherson
                   ` (7 preceding siblings ...)
  2019-08-13  1:12 ` [PATCH for_v22 v2 8/8] x86/sgx: Pass userspace source address directly to EADD Sean Christopherson
@ 2019-08-15 22:00 ` Jarkko Sakkinen
  8 siblings, 0 replies; 22+ messages in thread
From: Jarkko Sakkinen @ 2019-08-15 22:00 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: linux-sgx, Andy Lutomirski

On Mon, Aug 12, 2019 at 06:12:44PM -0700, Sean Christopherson wrote:
> As suggested by Andy, remove the work queue in favor of handling the
> entire EADD flow in the context of the ioctl().  After the worker is
> gone, pass the source page/address directly to EADD instead of first
> copying the data into kernel memory.
> 
> v2:
>   - Rebase to master, commit 36e0186296a7.
>   - Incorporate patch to require EADD be aligned, and add a selftest
>     update to obey the new alignment requirement.

Looks great!

I'll squash these and some selftest changes and send the next version
of the patch set. Thank you.

/Jarkko


* Re: [PATCH for_v22 v2 1/8] selftests/x86/sgx: Align enclave binary on 4k boundary
  2019-08-13  1:12 ` [PATCH for_v22 v2 1/8] selftests/x86/sgx: Align enclave binary on 4k boundary Sean Christopherson
@ 2019-08-16 13:55   ` Jarkko Sakkinen
  2019-08-16 15:36     ` Jethro Beekman
  0 siblings, 1 reply; 22+ messages in thread
From: Jarkko Sakkinen @ 2019-08-16 13:55 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: linux-sgx, Andy Lutomirski

On Mon, Aug 12, 2019 at 06:12:45PM -0700, Sean Christopherson wrote:
> Align the enclave's binary blob to 4096 bytes so that a pointer to the
> blob satisfies hardware's requirements that the source data for EADD be
> page aligned.  An upcoming kernel change will extend the alignment
> requirement to userspace so that the kernel can avoid copying the source
> into an internal buffer, i.e. pass the userspace pointer directly to
> EADD.
> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>

"Requirement" is way too strong vocabulary here. Whether it should be
required, or whether there should be fast and slow paths, is debatable.

/Jarkko


* Re: [PATCH for_v22 v2 2/8] x86/sgx: Require EADD source to be page aligned
  2019-08-13  1:12 ` [PATCH for_v22 v2 2/8] x86/sgx: Require EADD source to be page aligned Sean Christopherson
@ 2019-08-16 14:15   ` Jarkko Sakkinen
  0 siblings, 0 replies; 22+ messages in thread
From: Jarkko Sakkinen @ 2019-08-16 14:15 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: linux-sgx, Andy Lutomirski

On Mon, Aug 12, 2019 at 06:12:46PM -0700, Sean Christopherson wrote:
> Reject the EADD ioctl() if the source address provided by userspace is
> not page aligned.  Page alignment is required by hardware, but this is
> not enforced on userspace as the kernel first copies the source page to
> an internal (page aligned) buffer.  Require the userspace address to be
> page aligned in preparation for reworking EADD to directly consume the
> userspace address.
> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>

I missed that there was this change.

For the most part, supporting a fast path does make sense. Removing the
slow path needs to be brought up separately in the patch set review.

Even though this is still a patch set under review, bundling major
semantic shifts like this is against my maintainer ethics.

I hope you get my point of view here.

/Jarkko


* Re: [PATCH for_v22 v2 1/8] selftests/x86/sgx: Align enclave binary on 4k boundary
  2019-08-16 13:55   ` Jarkko Sakkinen
@ 2019-08-16 15:36     ` Jethro Beekman
  2019-08-16 23:41       ` Jarkko Sakkinen
  0 siblings, 1 reply; 22+ messages in thread
From: Jethro Beekman @ 2019-08-16 15:36 UTC (permalink / raw)
  To: Jarkko Sakkinen, Sean Christopherson; +Cc: linux-sgx, Andy Lutomirski


On 2019-08-16 06:55, Jarkko Sakkinen wrote:
> "Requirement" is way too strong vocabulary here. Whether it should be
> required, or whether there should be fast and slow paths, is debatable.

After userspace has gone through all the trouble to map an enclave using 
the new API, aligning the input to this ioctl seems like a minor 
inconvenience.

--
Jethro Beekman | Fortanix




* Re: [PATCH for_v22 v2 1/8] selftests/x86/sgx: Align enclave binary on 4k boundary
  2019-08-16 15:36     ` Jethro Beekman
@ 2019-08-16 23:41       ` Jarkko Sakkinen
  0 siblings, 0 replies; 22+ messages in thread
From: Jarkko Sakkinen @ 2019-08-16 23:41 UTC (permalink / raw)
  To: Jethro Beekman; +Cc: Sean Christopherson, linux-sgx, Andy Lutomirski

On Fri, Aug 16, 2019 at 03:36:11PM +0000, Jethro Beekman wrote:
> On 2019-08-16 06:55, Jarkko Sakkinen wrote:
> > "Requirement" is way too strong vocabulary here. Whether it should be
> > required, or whether there should be fast and slow paths, is debatable.
> 
> After userspace has gone through all the trouble to map an enclave using the
> new API, aligning the input to this ioctl seems like a minor inconvenience.

In some cases you might mmap() less than the ELRANGE for legitimate
reasons. It is not really a question of convenience, but rather of
taking advantage of the robustness brought by the loose binding to the VMA.

/Jarkko


* Re: [PATCH for_v22 v2 4/8] x86/sgx: Set SGX_ENCL_PAGE_TCS when allocating encl_page
  2019-08-13  1:12 ` [PATCH for_v22 v2 4/8] x86/sgx: Set SGX_ENCL_PAGE_TCS when allocating encl_page Sean Christopherson
@ 2019-08-22 12:56   ` Jarkko Sakkinen
  2019-08-22 14:24     ` Sean Christopherson
  0 siblings, 1 reply; 22+ messages in thread
From: Jarkko Sakkinen @ 2019-08-22 12:56 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: linux-sgx, Andy Lutomirski

On Mon, Aug 12, 2019 at 06:12:48PM -0700, Sean Christopherson wrote:
> Set SGX_ENCL_PAGE_TCS when encl_page->desc is initialized in
> sgx_encl_page_alloc() to improve readability, and so that the code
> isn't affected when the bulk of __sgx_encl_add_page() is rewritten
> to remove the EADD worker in a future patch.
> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>

I don't mean to be impolite, but this change only decreases readability,
and in no way improves it. Clear semantics and the like are what
improve readability.

The semantics are terrible if you have a parameter that can take
multiple values but only a single value triggers anything. We don't
want that kind of weirdness in the codebase. We want to clean up
such glitches.

If you have a boolean behaviour, please use a boolean value then.

I changed it to always assign the page type and added spacing to make
the code more readable. SECINFO gets validated early in the ioctl, so
there should not be a problem.

/Jarkko


* Re: [PATCH for_v22 v2 4/8] x86/sgx: Set SGX_ENCL_PAGE_TCS when allocating encl_page
  2019-08-22 12:56   ` Jarkko Sakkinen
@ 2019-08-22 14:24     ` Sean Christopherson
  0 siblings, 0 replies; 22+ messages in thread
From: Sean Christopherson @ 2019-08-22 14:24 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Andy Lutomirski

On Thu, Aug 22, 2019 at 03:56:43PM +0300, Jarkko Sakkinen wrote:
> On Mon, Aug 12, 2019 at 06:12:48PM -0700, Sean Christopherson wrote:
> > Set SGX_ENCL_PAGE_TCS when encl_page->desc is initialized in
> > sgx_encl_page_alloc() to improve readability, and so that the code
> > isn't affected when the bulk of __sgx_encl_add_page() is rewritten
> > to remove the EADD worker in a future patch.
> > 
> > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> 
> I don't mean to be impolite but this change only decreases readability,
> and in no possible way improves it. Clear semantics and such things
> improve readability

No worries, it's not the first time my interpretation of what's readable
has deviated from the norm :-)


* Re: [PATCH for_v22 v2 8/8] x86/sgx: Pass userspace source address directly to EADD
  2019-08-13  1:12 ` [PATCH for_v22 v2 8/8] x86/sgx: Pass userspace source address directly to EADD Sean Christopherson
@ 2019-08-22 14:37   ` Jarkko Sakkinen
  2019-08-22 14:50     ` Sean Christopherson
  0 siblings, 1 reply; 22+ messages in thread
From: Jarkko Sakkinen @ 2019-08-22 14:37 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: linux-sgx, Andy Lutomirski

On Mon, 2019-08-12 at 18:12 -0700, Sean Christopherson wrote:
> Invoke EADD with the userspace source address instead of first copying
> the data to a kernel page to avoid the overhead of alloc_page() and
> copy_from_user().
> 
> Suggested-by: Andy Lutomirski <luto@kernel.org>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>

NAK, because this takes away the TCS validation and the commit message
does not give any reasoning for doing that.

Other patches have been squashed.

/Jarkko



* Re: [PATCH for_v22 v2 8/8] x86/sgx: Pass userspace source address directly to EADD
  2019-08-22 14:37   ` Jarkko Sakkinen
@ 2019-08-22 14:50     ` Sean Christopherson
  2019-08-22 17:00       ` Jarkko Sakkinen
  0 siblings, 1 reply; 22+ messages in thread
From: Sean Christopherson @ 2019-08-22 14:50 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Andy Lutomirski

On Thu, Aug 22, 2019 at 05:37:18PM +0300, Jarkko Sakkinen wrote:
> On Mon, 2019-08-12 at 18:12 -0700, Sean Christopherson wrote:
> > Invoke EADD with the userspace source address instead of first copying
> > the data to a kernel page to avoid the overhead of alloc_page() and
> > copy_from_user().
> > 
> > Suggested-by: Andy Lutomirski <luto@kernel.org>
> > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> 
> NAK because takes away TCS validation and the commit message
> does not give any reasoning for doing that.

Doh, I have a thorough explanation, but apparently it never made it from
my head to the changelog.  I'll send v2 as a standalone patch.


* Re: [PATCH for_v22 v2 8/8] x86/sgx: Pass userspace source address directly to EADD
  2019-08-22 14:50     ` Sean Christopherson
@ 2019-08-22 17:00       ` Jarkko Sakkinen
  2019-08-23  1:25         ` Jarkko Sakkinen
  0 siblings, 1 reply; 22+ messages in thread
From: Jarkko Sakkinen @ 2019-08-22 17:00 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: linux-sgx, Andy Lutomirski

On Thu, 2019-08-22 at 07:50 -0700, Sean Christopherson wrote:
> On Thu, Aug 22, 2019 at 05:37:18PM +0300, Jarkko Sakkinen wrote:
> > On Mon, 2019-08-12 at 18:12 -0700, Sean Christopherson wrote:
> > > Invoke EADD with the userspace source address instead of first copying
> > > the data to a kernel page to avoid the overhead of alloc_page() and
> > > copy_from_user().
> > > 
> > > Suggested-by: Andy Lutomirski <luto@kernel.org>
> > > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > 
> > NAK, because this takes away the TCS validation and the commit message
> > does not give any reasoning for doing that.
> 
> Doh, I have a thorough explanation, but apparently it never made it from
> my head to the changelog.  I'll send v2 as a standalone patch.

Yeah, w/o explanation I won't just take away functionality :-)

/Jarkko



* Re: [PATCH for_v22 v2 8/8] x86/sgx: Pass userspace source address directly to EADD
  2019-08-22 17:00       ` Jarkko Sakkinen
@ 2019-08-23  1:25         ` Jarkko Sakkinen
  2019-08-23  1:28           ` Sean Christopherson
  0 siblings, 1 reply; 22+ messages in thread
From: Jarkko Sakkinen @ 2019-08-23  1:25 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: linux-sgx, Andy Lutomirski

On Thu, Aug 22, 2019 at 08:00:15PM +0300, Jarkko Sakkinen wrote:
> On Thu, 2019-08-22 at 07:50 -0700, Sean Christopherson wrote:
> > On Thu, Aug 22, 2019 at 05:37:18PM +0300, Jarkko Sakkinen wrote:
> > > On Mon, 2019-08-12 at 18:12 -0700, Sean Christopherson wrote:
> > > > Invoke EADD with the userspace source address instead of first copying
> > > > the data to a kernel page to avoid the overhead of alloc_page() and
> > > > copy_from_user().
> > > > 
> > > > Suggested-by: Andy Lutomirski <luto@kernel.org>
> > > > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > > 
> > > NAK, because this takes away the TCS validation and the commit message
> > > does not give any reasoning for doing that.
> > 
> > Doh, I have a thorough explanation, but apparently it never made it from
> > my head to the changelog.  I'll send v2 as a standalone patch.
> 
> Yeah, w/o explanation I won't just take away functionality :-)

I came to realize that also from a security perspective it might be
helpful to EADD not from a copy of the source, but from the actual
source.

So yes, I'm for not supporting the copy approach at all. I think this
viewpoint is important to note in addition to the performance
perspective.

/Jarkko


* Re: [PATCH for_v22 v2 8/8] x86/sgx: Pass userspace source address directly to EADD
  2019-08-23  1:25         ` Jarkko Sakkinen
@ 2019-08-23  1:28           ` Sean Christopherson
  2019-08-23  1:34             ` Sean Christopherson
  0 siblings, 1 reply; 22+ messages in thread
From: Sean Christopherson @ 2019-08-23  1:28 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Andy Lutomirski

On Fri, Aug 23, 2019 at 04:25:16AM +0300, Jarkko Sakkinen wrote:
> On Thu, Aug 22, 2019 at 08:00:15PM +0300, Jarkko Sakkinen wrote:
> > On Thu, 2019-08-22 at 07:50 -0700, Sean Christopherson wrote:
> > > On Thu, Aug 22, 2019 at 05:37:18PM +0300, Jarkko Sakkinen wrote:
> > > > On Mon, 2019-08-12 at 18:12 -0700, Sean Christopherson wrote:
> > > > > Invoke EADD with the userspace source address instead of first copying
> > > > > the data to a kernel page to avoid the overhead of alloc_page() and
> > > > > copy_from_user().
> > > > > 
> > > > > Suggested-by: Andy Lutomirski <luto@kernel.org>
> > > > > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > > > 
> > > > NAK, because this takes away the TCS validation and the commit message
> > > > does not give any reasoning for doing that.
> > > 
> > > Doh, I have a thorough explanation, but apparently it never made it from
> > > my head to the changelog.  I'll send v2 as a standalone patch.
> > 
> > Yeah, w/o explanation I won't just take away functionality :-)
> 
> I came to realize that also from security perspective it might be
> helpful to EADD, not from a copy of the source, but from the
> actual source.
> 
> So yes, I'm for not supporting copy approach at all. I think this
> viewpoint is important to note in addition to the performance
> perspective.

Side topic, I'm getting ELDU MAC failures on master.  Are there any known
regressions, or should I start debugging?

[   18.005880] ------------[ cut here ]------------
[   18.005884] sgx: ELDU returned 9 (0x9)
[   18.005906] WARNING: CPU: 4 PID: 1142 at /home/sean/go/src/kernel.org/linux/arch/x86/kernel/cpu/sgx/encl.c:50 sgx_encl_eldu+0x363/0x390
[   18.005907] Modules linked in:
[   18.005913] CPU: 4 PID: 1142 Comm: lsdt Tainted: G        W         5.3.0-rc3+ #725
[   18.005914] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
[   18.005916] RIP: 0010:sgx_encl_eldu+0x363/0x390
[   18.005917] Code: 75 11 f7 c3 00 00 00 40 ba f2 ff ff ff 0f 85 d4 fe ff ff 89 d9 89 da 48 c7 c6 4e af db 81 48 c7 c7 53 af db 81 e8 8d 1f 03 00 <0f> 0b ba f2 ff ff ff e9 b1 fe ff ff 31 c0 48 c7 c7 40 10 53 82 e9
[   18.005917] RSP: 0000:ffffc90000703d00 EFLAGS: 00010286
[   18.005918] RAX: 0000000000000000 RBX: 0000000000000009 RCX: 0000000000000006
[   18.005918] RDX: 0000000000000007 RSI: 0000000000000082 RDI: ffff88846f816560
[   18.005919] RBP: ffffc90000703d90 R08: 00000000000002da R09: 0000000000000004
[   18.005919] R10: 0000000000000000 R11: 0000000000000001 R12: ffff88846d08a100
[   18.005920] R13: ffffea00118daa40 R14: ffff888467d3e740 R15: ffffea001189c680
[   18.005920] FS:  00007fa90afff700(0000) GS:ffff88846f800000(0000) knlGS:0000000000000000
[   18.005922] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   18.005923] CR2: 00007fa8e3001000 CR3: 0000000466389002 CR4: 0000000000360ee0
[   18.005923] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   18.005923] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[   18.005924] Call Trace:
[   18.005934]  ? sgx_encl_load_page.part.15+0x56/0x90
[   18.005935]  sgx_encl_load_page.part.15+0x56/0x90
[   18.005936]  sgx_vma_fault+0x84/0xf0
[   18.005941]  __do_fault+0x4f/0x87
[   18.005943]  __handle_mm_fault+0xa65/0x1020
[   18.005945]  handle_mm_fault+0xb0/0x1f0
[   18.005946]  __do_page_fault+0x231/0x4d0
[   18.005951]  async_page_fault+0x2f/0x40
[   18.005955] RIP: 0033:0x7ffe7bd62a39
[   18.005955] Code: 74 05 c1 e8 0c 89 06 31 c0 5d c3 90 90 90 90 90 90 55 48 89 e5 83 f8 02 72 67 83 f8 03 77 62 48 8b 5d 10 48 8d 0d 00 00 00 00 <0f> 01 d7 31 db 48 8b 4d 18 e3 10 89 01 73 0c 66 89 79 04 66 89 71
[   18.005956] RSP: 002b:00007fa90affd8c0 EFLAGS: 00000202
[   18.005956] RAX: 0000000000000003 RBX: 00007fa8e3a10000 RCX: 00007ffe7bd62a39
[   18.005957] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[   18.005957] RBP: 00007fa90affd8c0 R08: 0000000000000000 R09: 0000000000000000
[   18.005957] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[   18.005958] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[   18.005959] ---[ end trace c120e55e5ad35ff0 ]---


* Re: [PATCH for_v22 v2 8/8] x86/sgx: Pass userspace source address directly to EADD
  2019-08-23  1:28           ` Sean Christopherson
@ 2019-08-23  1:34             ` Sean Christopherson
  0 siblings, 0 replies; 22+ messages in thread
From: Sean Christopherson @ 2019-08-23  1:34 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Andy Lutomirski

On Thu, Aug 22, 2019 at 06:28:39PM -0700, Sean Christopherson wrote:
> On Fri, Aug 23, 2019 at 04:25:16AM +0300, Jarkko Sakkinen wrote:
> > On Thu, Aug 22, 2019 at 08:00:15PM +0300, Jarkko Sakkinen wrote:
> > > On Thu, 2019-08-22 at 07:50 -0700, Sean Christopherson wrote:
> > > > On Thu, Aug 22, 2019 at 05:37:18PM +0300, Jarkko Sakkinen wrote:
> > > > > On Mon, 2019-08-12 at 18:12 -0700, Sean Christopherson wrote:
> > > > > > Invoke EADD with the userspace source address instead of first copying
> > > > > > the data to a kernel page to avoid the overhead of alloc_page() and
> > > > > > copy_from_user().
> > > > > > 
> > > > > > Suggested-by: Andy Lutomirski <luto@kernel.org>
> > > > > > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > > > > 
> > > > > NAK, because this takes away the TCS validation and the commit message
> > > > > does not give any reasoning for doing that.
> > > > 
> > > > Doh, I have a thorough explanation, but apparently it never made it from
> > > > my head to the changelog.  I'll send v2 as a standalone patch.
> > > 
> > > Yeah, w/o explanation I won't just take away functionality :-)
> > 
> > I came to realize that also from security perspective it might be
> > helpful to EADD, not from a copy of the source, but from the
> > actual source.
> > 
> > So yes, I'm for not supporting copy approach at all. I think this
> > viewpoint is important to note in addition to the performance
> > perspective.
> 
> Side topic, I'm getting ELDU MAC failures on master.  Are there any known
> regressions, or should I start debugging?

Never mind, think I spotted the issue.  Hooray for git diff on branches.


end of thread, back to index

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-08-13  1:12 [PATCH for_v22 v2 0/8] x86/sgx: Remove EADD worker and page copy Sean Christopherson
2019-08-13  1:12 ` [PATCH for_v22 v2 1/8] selftests/x86/sgx: Align enclave binary on 4k boundary Sean Christopherson
2019-08-16 13:55   ` Jarkko Sakkinen
2019-08-16 15:36     ` Jethro Beekman
2019-08-16 23:41       ` Jarkko Sakkinen
2019-08-13  1:12 ` [PATCH for_v22 v2 2/8] x86/sgx: Require EADD source to be page aligned Sean Christopherson
2019-08-16 14:15   ` Jarkko Sakkinen
2019-08-13  1:12 ` [PATCH for_v22 v2 3/8] x86/sgx: Validate generic SECINFO immediately after copying from user Sean Christopherson
2019-08-13  1:12 ` [PATCH for_v22 v2 4/8] x86/sgx: Set SGX_ENCL_PAGE_TCS when allocating encl_page Sean Christopherson
2019-08-22 12:56   ` Jarkko Sakkinen
2019-08-22 14:24     ` Sean Christopherson
2019-08-13  1:12 ` [PATCH for_v22 v2 5/8] x86/sgx: Move encl_page insertion into tree out of alloc flow Sean Christopherson
2019-08-13  1:12 ` [PATCH for_v22 v2 6/8] x86/sgx: Allocate encl_page prior to taking encl->lock Sean Christopherson
2019-08-13  1:12 ` [PATCH for_v22 v2 7/8] x86/sgx: Remove the EADD page worker Sean Christopherson
2019-08-13  1:12 ` [PATCH for_v22 v2 8/8] x86/sgx: Pass userspace source address directly to EADD Sean Christopherson
2019-08-22 14:37   ` Jarkko Sakkinen
2019-08-22 14:50     ` Sean Christopherson
2019-08-22 17:00       ` Jarkko Sakkinen
2019-08-23  1:25         ` Jarkko Sakkinen
2019-08-23  1:28           ` Sean Christopherson
2019-08-23  1:34             ` Sean Christopherson
2019-08-15 22:00 ` [PATCH for_v22 v2 0/8] x86/sgx: Remove EADD worker and page copy Jarkko Sakkinen
