linux-sgx.vger.kernel.org archive mirror
* [PATCH v3 00/17] Fixes and updates for v23
@ 2019-09-16 10:17 Jarkko Sakkinen
  2019-09-16 10:17 ` [PATCH v3 01/17] selftest/x86/sgx: Remove encl_piggy.h Jarkko Sakkinen
                   ` (16 more replies)
  0 siblings, 17 replies; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-16 10:17 UTC (permalink / raw)
  To: linux-sgx
  Cc: Jarkko Sakkinen, Sean Christopherson, Shay Katz-zamir, Serge Ayoun

This patch set contains my fixes and updates for v23, except for the
sgx_alloc_page() changes, which I will submit a bit later.

v3:
* Rebasing did not go right in the previous version.
v2:
* Added the bug fix that prevents legitimately BLOCKED pages from
  ending up in the free page pool.

Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
Cc: Serge Ayoun <serge.ayoun@intel.com>

Jarkko Sakkinen (17):
  selftest/x86/sgx: Remove encl_piggy.h
  x86/sgx: Clean up internal includes
  x86/sgx: Write backing storage only if EWB is successful
  x86/sgx: Rename 'j' as 'cnt' in sgx_reclaim_pages()
  x86/sgx: Turn encls_failed() as inline function
  x86/sgx: Move sgx_einit() to encls.c
  x86/sgx: Remove pages in sgx_reclaimer_write()
  x86/sgx: Calculate page index in sgx_reclaimer_write()
  x86/sgx: Move SGX_ENCL_DEAD check to sgx_reclaimer_write()
  x86/sgx: Free VA slot when the EWB flow fails
  x86/sgx: Call sgx_encl_destroy() when the EWB flow fails
  x86/sgx: Open code sgx_reclaimer_get() and sgx_reclaimer_put()
  x86/sgx: Introduce sgx_can_reclaim()
  x86/sgx: Replace section->free_cnt with a global sgx_nr_free_pages
  x86/sgx: sgx_vma_access(): Do not return -ECANCELED on invalid TCS
    pages
  x86/sgx: Introduce sgx_encl_get_backing()
  x86/sgx: Fix pages in the BLOCKED state ending up to the free pool

 arch/x86/kernel/cpu/sgx/driver.c             |   2 +
 arch/x86/kernel/cpu/sgx/driver.h             |   3 -
 arch/x86/kernel/cpu/sgx/encl.c               | 104 +++----
 arch/x86/kernel/cpu/sgx/encl.h               |  23 +-
 arch/x86/kernel/cpu/sgx/encls.c              |  54 +++-
 arch/x86/kernel/cpu/sgx/encls.h              |  23 +-
 arch/x86/kernel/cpu/sgx/ioctl.c              |   5 +-
 arch/x86/kernel/cpu/sgx/main.c               |  64 +---
 arch/x86/kernel/cpu/sgx/reclaim.c            | 299 +++++++++----------
 arch/x86/kernel/cpu/sgx/sgx.h                |   6 +-
 tools/testing/selftests/x86/sgx/encl_piggy.h |  14 -
 tools/testing/selftests/x86/sgx/main.c       |   1 -
 12 files changed, 276 insertions(+), 322 deletions(-)
 delete mode 100644 tools/testing/selftests/x86/sgx/encl_piggy.h

-- 
2.20.1


^ permalink raw reply	[flat|nested] 37+ messages in thread

* [PATCH v3 01/17] selftest/x86/sgx: Remove encl_piggy.h
  2019-09-16 10:17 [PATCH v3 00/17] Fixes and updates for v23 Jarkko Sakkinen
@ 2019-09-16 10:17 ` Jarkko Sakkinen
  2019-09-16 10:17 ` [PATCH v3 02/17] x86/sgx: Clean up internal includes Jarkko Sakkinen
                   ` (15 subsequent siblings)
  16 siblings, 0 replies; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-16 10:17 UTC (permalink / raw)
  To: linux-sgx
  Cc: Jarkko Sakkinen, Sean Christopherson, Shay Katz-zamir, Serge Ayoun

When encl.bin and encl.ss were detached from the test_sgx ELF binary,
this file should have been removed. Take care of that now.

Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
Cc: Serge Ayoun <serge.ayoun@intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
---
 tools/testing/selftests/x86/sgx/encl_piggy.h | 14 --------------
 tools/testing/selftests/x86/sgx/main.c       |  1 -
 2 files changed, 15 deletions(-)
 delete mode 100644 tools/testing/selftests/x86/sgx/encl_piggy.h

diff --git a/tools/testing/selftests/x86/sgx/encl_piggy.h b/tools/testing/selftests/x86/sgx/encl_piggy.h
deleted file mode 100644
index ee8224f8cc8d..000000000000
--- a/tools/testing/selftests/x86/sgx/encl_piggy.h
+++ /dev/null
@@ -1,14 +0,0 @@
-/* SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) */
-/*
- * Copyright(c) 2016-18 Intel Corporation.
- */
-
-#ifndef ENCL_PIGGY_H
-#define ENCL_PIGGY_H
-
-extern unsigned char encl_bin[];
-extern unsigned char encl_bin_end[];
-extern unsigned char encl_ss[];
-extern unsigned char encl_ss_end[];
-
-#endif /* ENCL_PIGGY_H */
diff --git a/tools/testing/selftests/x86/sgx/main.c b/tools/testing/selftests/x86/sgx/main.c
index 2160bcd0ccd9..f78ff458b0dd 100644
--- a/tools/testing/selftests/x86/sgx/main.c
+++ b/tools/testing/selftests/x86/sgx/main.c
@@ -15,7 +15,6 @@
 #include <sys/stat.h>
 #include <sys/time.h>
 #include <sys/types.h>
-#include "encl_piggy.h"
 #include "defines.h"
 #include "../../../../../arch/x86/kernel/cpu/sgx/arch.h"
 #include "../../../../../arch/x86/include/uapi/asm/sgx.h"
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 02/17] x86/sgx: Clean up internal includes
  2019-09-16 10:17 [PATCH v3 00/17] Fixes and updates for v23 Jarkko Sakkinen
  2019-09-16 10:17 ` [PATCH v3 01/17] selftest/x86/sgx: Remove encl_piggy.h Jarkko Sakkinen
@ 2019-09-16 10:17 ` Jarkko Sakkinen
  2019-09-16 10:17 ` [PATCH v3 03/17] x86/sgx: Write backing storage only if EWB is successful Jarkko Sakkinen
                   ` (14 subsequent siblings)
  16 siblings, 0 replies; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-16 10:17 UTC (permalink / raw)
  To: linux-sgx
  Cc: Jarkko Sakkinen, Sean Christopherson, Shay Katz-zamir, Serge Ayoun

Include arch.h from sgx.h and sgx.h from the other headers. This
establishes a sane include order: any new header should only need to
include sgx.h, and the .c files need to include only the modules that
they use.
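
An illustrative sketch of the resulting include graph, derived from the
diff below (not part of the patch itself):

	/* sgx.h */
	#include "arch.h"

	/* encl.h, encls.h, driver.h */
	#include "sgx.h"

	/* a .c file includes only the module headers it uses, e.g. driver.c */
	#include "driver.h"
	#include "encl.h"
	#include "encls.h"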

Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
Cc: Serge Ayoun <serge.ayoun@intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
---
 arch/x86/kernel/cpu/sgx/driver.c  | 2 ++
 arch/x86/kernel/cpu/sgx/driver.h  | 3 ---
 arch/x86/kernel/cpu/sgx/encl.c    | 2 --
 arch/x86/kernel/cpu/sgx/encl.h    | 1 +
 arch/x86/kernel/cpu/sgx/encls.h   | 2 +-
 arch/x86/kernel/cpu/sgx/ioctl.c   | 2 ++
 arch/x86/kernel/cpu/sgx/main.c    | 3 +--
 arch/x86/kernel/cpu/sgx/reclaim.c | 3 ++-
 arch/x86/kernel/cpu/sgx/sgx.h     | 1 +
 9 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/driver.c b/arch/x86/kernel/cpu/sgx/driver.c
index a1da479ffd3a..1ceeb742c1da 100644
--- a/arch/x86/kernel/cpu/sgx/driver.c
+++ b/arch/x86/kernel/cpu/sgx/driver.c
@@ -9,6 +9,8 @@
 #include <linux/suspend.h>
 #include <asm/traps.h>
 #include "driver.h"
+#include "encl.h"
+#include "encls.h"
 
 MODULE_DESCRIPTION("Intel SGX Enclave Driver");
 MODULE_AUTHOR("Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>");
diff --git a/arch/x86/kernel/cpu/sgx/driver.h b/arch/x86/kernel/cpu/sgx/driver.h
index 1e35933cf8a4..2f13886522a8 100644
--- a/arch/x86/kernel/cpu/sgx/driver.h
+++ b/arch/x86/kernel/cpu/sgx/driver.h
@@ -10,9 +10,6 @@
 #include <linux/sched.h>
 #include <linux/workqueue.h>
 #include <uapi/asm/sgx.h>
-#include "arch.h"
-#include "encl.h"
-#include "encls.h"
 #include "sgx.h"
 
 #define SGX_DRV_NR_DEVICES	2
diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 9bf49e30a568..1c1fbc95be33 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -7,10 +7,8 @@
 #include <linux/shmem_fs.h>
 #include <linux/suspend.h>
 #include <linux/sched/mm.h>
-#include "arch.h"
 #include "encl.h"
 #include "encls.h"
-#include "sgx.h"
 
 static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
 			   struct sgx_epc_page *epc_page,
diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index c7608964d69e..95e5713a50ad 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -15,6 +15,7 @@
 #include <linux/radix-tree.h>
 #include <linux/srcu.h>
 #include <linux/workqueue.h>
+#include "sgx.h"
 
 /**
  * enum sgx_encl_page_desc - defines bits for an enclave page's descriptor
diff --git a/arch/x86/kernel/cpu/sgx/encls.h b/arch/x86/kernel/cpu/sgx/encls.h
index aea3b9d09936..6631afbf2f64 100644
--- a/arch/x86/kernel/cpu/sgx/encls.h
+++ b/arch/x86/kernel/cpu/sgx/encls.h
@@ -8,7 +8,7 @@
 #include <linux/rwsem.h>
 #include <linux/types.h>
 #include <asm/asm.h>
-#include "arch.h"
+#include "sgx.h"
 
 /**
  * ENCLS_FAULT_FLAG - flag signifying an ENCLS return code is a trapnr
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index e2205a433b87..e57dda38513b 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -13,6 +13,8 @@
 #include <linux/slab.h>
 #include <linux/suspend.h>
 #include "driver.h"
+#include "encl.h"
+#include "encls.h"
 
 static struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl)
 {
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index e4f751651a65..c58ab5f28669 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -9,9 +9,8 @@
 #include <linux/sched/signal.h>
 #include <linux/slab.h>
 #include "driver.h"
-#include "arch.h"
+#include "encl.h"
 #include "encls.h"
-#include "sgx.h"
 
 struct sgx_epc_section sgx_epc_sections[SGX_MAX_EPC_SECTIONS];
 int sgx_nr_epc_sections;
diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
index bd8ac11a4278..d82ce1eaa60e 100644
--- a/arch/x86/kernel/cpu/sgx/reclaim.c
+++ b/arch/x86/kernel/cpu/sgx/reclaim.c
@@ -10,7 +10,8 @@
 #include <linux/sched/mm.h>
 #include <linux/sched/signal.h>
 #include "driver.h"
-#include "sgx.h"
+#include "encl.h"
+#include "encls.h"
 
 struct task_struct *ksgxswapd_tsk;
 DECLARE_WAIT_QUEUE_HEAD(ksgxswapd_waitq);
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index f0ff7bd3d18e..bc6a644af2b5 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -9,6 +9,7 @@
 #include <linux/types.h>
 #include <asm/asm.h>
 #include <uapi/asm/sgx_errno.h>
+#include "arch.h"
 
 struct sgx_epc_page {
 	unsigned long desc;
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 03/17] x86/sgx: Write backing storage only if EWB is successful
  2019-09-16 10:17 [PATCH v3 00/17] Fixes and updates for v23 Jarkko Sakkinen
  2019-09-16 10:17 ` [PATCH v3 01/17] selftest/x86/sgx: Remove encl_piggy.h Jarkko Sakkinen
  2019-09-16 10:17 ` [PATCH v3 02/17] x86/sgx: Clean up internal includes Jarkko Sakkinen
@ 2019-09-16 10:17 ` Jarkko Sakkinen
  2019-09-16 10:17 ` [PATCH v3 04/17] x86/sgx: Rename 'j' as 'cnt' in sgx_reclaim_pages() Jarkko Sakkinen
                   ` (13 subsequent siblings)
  16 siblings, 0 replies; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-16 10:17 UTC (permalink / raw)
  To: linux-sgx
  Cc: Jarkko Sakkinen, Sean Christopherson, Shay Katz-zamir, Serge Ayoun

Do not call set_page_dirty() unless the EWB operation is successful.
Marking the backing pages dirty after a failed EWB probably does no
harm, but it is still an unnecessary action.

Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
Cc: Serge Ayoun <serge.ayoun@intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
---
 arch/x86/kernel/cpu/sgx/reclaim.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
index d82ce1eaa60e..213406756443 100644
--- a/arch/x86/kernel/cpu/sgx/reclaim.c
+++ b/arch/x86/kernel/cpu/sgx/reclaim.c
@@ -266,9 +266,12 @@ static int __sgx_encl_ewb(struct sgx_encl *encl, struct sgx_epc_page *epc_page,
 	kunmap_atomic((void *)(unsigned long)(pginfo.metadata - pcmd_offset));
 	kunmap_atomic((void *)(unsigned long)pginfo.contents);
 
-	set_page_dirty(pcmd);
+	if (!ret) {
+		set_page_dirty(pcmd);
+		set_page_dirty(backing);
+	}
+
 	put_page(pcmd);
-	set_page_dirty(backing);
 
 err_pcmd:
 	put_page(backing);
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 04/17] x86/sgx: Rename 'j' as 'cnt' in sgx_reclaim_pages()
  2019-09-16 10:17 [PATCH v3 00/17] Fixes and updates for v23 Jarkko Sakkinen
                   ` (2 preceding siblings ...)
  2019-09-16 10:17 ` [PATCH v3 03/17] x86/sgx: Write backing storage only if EWB is successful Jarkko Sakkinen
@ 2019-09-16 10:17 ` Jarkko Sakkinen
  2019-09-16 10:17 ` [PATCH v3 05/17] x86/sgx: Turn encls_failed() as inline function Jarkko Sakkinen
                   ` (12 subsequent siblings)
  16 siblings, 0 replies; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-16 10:17 UTC (permalink / raw)
  To: linux-sgx
  Cc: Jarkko Sakkinen, Sean Christopherson, Shay Katz-zamir, Serge Ayoun

It is better to have a more descriptive name than 'j' for the
associated local variable in sgx_reclaim_pages(), as it does not
represent just a trivial loop index like 'i' does. Thus, rename it to
'cnt'.

Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
Cc: Serge Ayoun <serge.ayoun@intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
---
 arch/x86/kernel/cpu/sgx/reclaim.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
index 213406756443..cfda38a9b05c 100644
--- a/arch/x86/kernel/cpu/sgx/reclaim.c
+++ b/arch/x86/kernel/cpu/sgx/reclaim.c
@@ -403,12 +403,13 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
 void sgx_reclaim_pages(void)
 {
 	struct sgx_epc_page *chunk[SGX_NR_TO_SCAN + 1];
-	struct sgx_epc_page *epc_page;
 	struct sgx_epc_section *section;
-	int i, j;
+	struct sgx_epc_page *epc_page;
+	int cnt = 0;
+	int i;
 
 	spin_lock(&sgx_active_page_list_lock);
-	for (i = 0, j = 0; i < SGX_NR_TO_SCAN; i++) {
+	for (i = 0; i < SGX_NR_TO_SCAN; i++) {
 		if (list_empty(&sgx_active_page_list))
 			break;
 
@@ -417,7 +418,7 @@ void sgx_reclaim_pages(void)
 		list_del_init(&epc_page->list);
 
 		if (sgx_reclaimer_get(epc_page))
-			chunk[j++] = epc_page;
+			chunk[cnt++] = epc_page;
 		else
 			/* The owner is freeing the page. No need to add the
 			 * page back to the list of reclaimable pages.
@@ -426,7 +427,7 @@ void sgx_reclaim_pages(void)
 	}
 	spin_unlock(&sgx_active_page_list_lock);
 
-	for (i = 0; i < j; i++) {
+	for (i = 0; i < cnt; i++) {
 		epc_page = chunk[i];
 		if (sgx_reclaimer_evict(epc_page))
 			continue;
@@ -440,13 +441,13 @@ void sgx_reclaim_pages(void)
 		chunk[i] = NULL;
 	}
 
-	for (i = 0; i < j; i++) {
+	for (i = 0; i < cnt; i++) {
 		epc_page = chunk[i];
 		if (epc_page)
 			sgx_reclaimer_block(epc_page);
 	}
 
-	for (i = 0; i < j; i++) {
+	for (i = 0; i < cnt; i++) {
 		epc_page = chunk[i];
 		if (epc_page) {
 			sgx_reclaimer_write(epc_page);
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 05/17] x86/sgx: Turn encls_failed() as inline function
  2019-09-16 10:17 [PATCH v3 00/17] Fixes and updates for v23 Jarkko Sakkinen
                   ` (3 preceding siblings ...)
  2019-09-16 10:17 ` [PATCH v3 04/17] x86/sgx: Rename 'j' as 'cnt' in sgx_reclaim_pages() Jarkko Sakkinen
@ 2019-09-16 10:17 ` Jarkko Sakkinen
  2019-09-16 10:17 ` [PATCH v3 06/17] x86/sgx: Move sgx_einit() to encls.c Jarkko Sakkinen
                   ` (11 subsequent siblings)
  16 siblings, 0 replies; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-16 10:17 UTC (permalink / raw)
  To: linux-sgx
  Cc: Jarkko Sakkinen, Sean Christopherson, Shay Katz-zamir, Serge Ayoun

Turn encls_failed() into an inline function. As far as tracing goes,
we issue warnings consistently at the call sites, which is enough to
catch potential issues.

Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
Cc: Serge Ayoun <serge.ayoun@intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
---
 arch/x86/kernel/cpu/sgx/Makefile |  2 +-
 arch/x86/kernel/cpu/sgx/encls.c  | 24 ------------------------
 arch/x86/kernel/cpu/sgx/encls.h  | 18 +++++++++++++++++-
 3 files changed, 18 insertions(+), 26 deletions(-)
 delete mode 100644 arch/x86/kernel/cpu/sgx/encls.c

diff --git a/arch/x86/kernel/cpu/sgx/Makefile b/arch/x86/kernel/cpu/sgx/Makefile
index 379e9c52848e..cfd29c42264b 100644
--- a/arch/x86/kernel/cpu/sgx/Makefile
+++ b/arch/x86/kernel/cpu/sgx/Makefile
@@ -1,5 +1,5 @@
 # core
-obj-y += encl.o encls.o main.o reclaim.o
+obj-y += encl.o main.o reclaim.o
 
 # driver
 obj-y += driver.o ioctl.o
diff --git a/arch/x86/kernel/cpu/sgx/encls.c b/arch/x86/kernel/cpu/sgx/encls.c
deleted file mode 100644
index 1b492c15a2b8..000000000000
--- a/arch/x86/kernel/cpu/sgx/encls.c
+++ /dev/null
@@ -1,24 +0,0 @@
-// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause)
-// Copyright(c) 2016-19 Intel Corporation.
-
-#include <asm/cpufeature.h>
-#include <asm/traps.h>
-#include "encls.h"
-#include "sgx.h"
-
-/**
- * encls_failed() - Check if an ENCLS leaf function failed
- * @ret:	the return value of an ENCLS leaf function call
- *
- * Check if an ENCLS leaf function failed. This is a condition where the leaf
- * function causes a fault that is not caused by an EPCM conflict.
- *
- * Return: true if there was a fault other than an EPCM conflict
- */
-bool encls_failed(int ret)
-{
-	int epcm_trapnr = boot_cpu_has(X86_FEATURE_SGX2) ?
-			  X86_TRAP_PF : X86_TRAP_GP;
-
-	return encls_faulted(ret) && ENCLS_TRAPNR(ret) != epcm_trapnr;
-}
diff --git a/arch/x86/kernel/cpu/sgx/encls.h b/arch/x86/kernel/cpu/sgx/encls.h
index 6631afbf2f64..b7e6462e58b8 100644
--- a/arch/x86/kernel/cpu/sgx/encls.h
+++ b/arch/x86/kernel/cpu/sgx/encls.h
@@ -8,6 +8,7 @@
 #include <linux/rwsem.h>
 #include <linux/types.h>
 #include <asm/asm.h>
+#include <asm/traps.h>
 #include "sgx.h"
 
 /**
@@ -66,7 +67,22 @@ static inline bool encls_returned_code(int ret)
 	return !encls_faulted(ret) && ret;
 }
 
-bool encls_failed(int ret);
+/**
+ * encls_failed() - Check if an ENCLS leaf function failed
+ * @ret:	the return value of an ENCLS leaf function call
+ *
+ * Check if an ENCLS leaf function failed. This is a condition where the leaf
+ * function causes a fault that is not caused by an EPCM conflict.
+ *
+ * Return: true if there was a fault other than an EPCM conflict
+ */
+static inline bool encls_failed(int ret)
+{
+	int epcm_trapnr = boot_cpu_has(X86_FEATURE_SGX2) ?
+			  X86_TRAP_PF : X86_TRAP_GP;
+
+	return encls_faulted(ret) && ENCLS_TRAPNR(ret) != epcm_trapnr;
+}
 
 /**
  * __encls_ret_N - encode an ENCLS leaf that returns an error code in EAX
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 06/17] x86/sgx: Move sgx_einit() to encls.c
  2019-09-16 10:17 [PATCH v3 00/17] Fixes and updates for v23 Jarkko Sakkinen
                   ` (4 preceding siblings ...)
  2019-09-16 10:17 ` [PATCH v3 05/17] x86/sgx: Turn encls_failed() as inline function Jarkko Sakkinen
@ 2019-09-16 10:17 ` Jarkko Sakkinen
  2019-09-16 10:17 ` [PATCH v3 07/17] x86/sgx: Remove pages in sgx_reclaimer_write() Jarkko Sakkinen
                   ` (10 subsequent siblings)
  16 siblings, 0 replies; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-16 10:17 UTC (permalink / raw)
  To: linux-sgx
  Cc: Jarkko Sakkinen, Sean Christopherson, Shay Katz-zamir, Serge Ayoun

Move sgx_einit() to encls.c, as it is essentially a global wrapper for
EINIT that is largely independent of the code using it. It has no
binding to the code in main.c.

Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
Cc: Serge Ayoun <serge.ayoun@intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
---
 arch/x86/kernel/cpu/sgx/Makefile |  2 +-
 arch/x86/kernel/cpu/sgx/encls.c  | 56 ++++++++++++++++++++++++++++++++
 arch/x86/kernel/cpu/sgx/encls.h  |  3 ++
 arch/x86/kernel/cpu/sgx/main.c   | 50 ----------------------------
 arch/x86/kernel/cpu/sgx/sgx.h    |  2 --
 5 files changed, 60 insertions(+), 53 deletions(-)
 create mode 100644 arch/x86/kernel/cpu/sgx/encls.c

diff --git a/arch/x86/kernel/cpu/sgx/Makefile b/arch/x86/kernel/cpu/sgx/Makefile
index cfd29c42264b..379e9c52848e 100644
--- a/arch/x86/kernel/cpu/sgx/Makefile
+++ b/arch/x86/kernel/cpu/sgx/Makefile
@@ -1,5 +1,5 @@
 # core
-obj-y += encl.o main.o reclaim.o
+obj-y += encl.o encls.o main.o reclaim.o
 
 # driver
 obj-y += driver.o ioctl.o
diff --git a/arch/x86/kernel/cpu/sgx/encls.c b/arch/x86/kernel/cpu/sgx/encls.c
new file mode 100644
index 000000000000..cda09cf8b927
--- /dev/null
+++ b/arch/x86/kernel/cpu/sgx/encls.c
@@ -0,0 +1,56 @@
+// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause)
+// Copyright(c) 2016-19 Intel Corporation.
+
+#include <linux/highmem.h>
+#include <asm/cpufeature.h>
+#include "encls.h"
+
+/* A per-cpu cache for the last known values of IA32_SGXLEPUBKEYHASHx MSRs. */
+static DEFINE_PER_CPU(u64 [4], sgx_lepubkeyhash_cache);
+
+static void sgx_update_lepubkeyhash_msrs(u64 *lepubkeyhash, bool enforce)
+{
+	u64 *cache;
+	int i;
+
+	cache = per_cpu(sgx_lepubkeyhash_cache, smp_processor_id());
+	for (i = 0; i < 4; i++) {
+		if (enforce || (lepubkeyhash[i] != cache[i])) {
+			wrmsrl(MSR_IA32_SGXLEPUBKEYHASH0 + i, lepubkeyhash[i]);
+			cache[i] = lepubkeyhash[i];
+		}
+	}
+}
+
+/**
+ * sgx_einit() - Initialize an enclave
+ * @sigstruct:		a pointer a SIGSTRUCT
+ * @token:		a pointer an EINITTOKEN (optional)
+ * @secs:		a pointer a SECS
+ * @lepubkeyhash:	the desired value for IA32_SGXLEPUBKEYHASHx MSRs
+ *
+ * Execute ENCLS[EINIT], writing the IA32_SGXLEPUBKEYHASHx MSRs according
+ * to @lepubkeyhash (if possible and necessary).
+ *
+ * Return:
+ *   0 on success,
+ *   -errno or SGX error on failure
+ */
+int sgx_einit(struct sgx_sigstruct *sigstruct, struct sgx_einittoken *token,
+	      struct sgx_epc_page *secs, u64 *lepubkeyhash)
+{
+	int ret;
+
+	if (!boot_cpu_has(X86_FEATURE_SGX_LC))
+		return __einit(sigstruct, token, sgx_epc_addr(secs));
+
+	preempt_disable();
+	sgx_update_lepubkeyhash_msrs(lepubkeyhash, false);
+	ret = __einit(sigstruct, token, sgx_epc_addr(secs));
+	if (ret == SGX_INVALID_EINITTOKEN) {
+		sgx_update_lepubkeyhash_msrs(lepubkeyhash, true);
+		ret = __einit(sigstruct, token, sgx_epc_addr(secs));
+	}
+	preempt_enable();
+	return ret;
+}
diff --git a/arch/x86/kernel/cpu/sgx/encls.h b/arch/x86/kernel/cpu/sgx/encls.h
index b7e6462e58b8..e3713337c187 100644
--- a/arch/x86/kernel/cpu/sgx/encls.h
+++ b/arch/x86/kernel/cpu/sgx/encls.h
@@ -257,4 +257,7 @@ static inline int __emodt(struct sgx_secinfo *secinfo, void *addr)
 	return __encls_ret_2(SGX_EMODT, secinfo, addr);
 }
 
+int sgx_einit(struct sgx_sigstruct *sigstruct, struct sgx_einittoken *token,
+	      struct sgx_epc_page *secs, u64 *lepubkeyhash);
+
 #endif /* _X86_ENCLS_H */
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index c58ab5f28669..4c03e5f33414 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -15,9 +15,6 @@
 struct sgx_epc_section sgx_epc_sections[SGX_MAX_EPC_SECTIONS];
 int sgx_nr_epc_sections;
 
-/* A per-cpu cache for the last known values of IA32_SGXLEPUBKEYHASHx MSRs. */
-static DEFINE_PER_CPU(u64 [4], sgx_lepubkeyhash_cache);
-
 static struct sgx_epc_page *sgx_section_try_take_page(
 	struct sgx_epc_section *section)
 {
@@ -162,53 +159,6 @@ void sgx_free_page(struct sgx_epc_page *page)
 	WARN(ret > 0, "sgx: EREMOVE returned %d (0x%x)", ret, ret);
 }
 
-static void sgx_update_lepubkeyhash_msrs(u64 *lepubkeyhash, bool enforce)
-{
-	u64 *cache;
-	int i;
-
-	cache = per_cpu(sgx_lepubkeyhash_cache, smp_processor_id());
-	for (i = 0; i < 4; i++) {
-		if (enforce || (lepubkeyhash[i] != cache[i])) {
-			wrmsrl(MSR_IA32_SGXLEPUBKEYHASH0 + i, lepubkeyhash[i]);
-			cache[i] = lepubkeyhash[i];
-		}
-	}
-}
-
-/**
- * sgx_einit - initialize an enclave
- * @sigstruct:		a pointer a SIGSTRUCT
- * @token:		a pointer an EINITTOKEN (optional)
- * @secs:		a pointer a SECS
- * @lepubkeyhash:	the desired value for IA32_SGXLEPUBKEYHASHx MSRs
- *
- * Execute ENCLS[EINIT], writing the IA32_SGXLEPUBKEYHASHx MSRs according
- * to @lepubkeyhash (if possible and necessary).
- *
- * Return:
- *   0 on success,
- *   -errno or SGX error on failure
- */
-int sgx_einit(struct sgx_sigstruct *sigstruct, struct sgx_einittoken *token,
-	      struct sgx_epc_page *secs, u64 *lepubkeyhash)
-{
-	int ret;
-
-	if (!boot_cpu_has(X86_FEATURE_SGX_LC))
-		return __einit(sigstruct, token, sgx_epc_addr(secs));
-
-	preempt_disable();
-	sgx_update_lepubkeyhash_msrs(lepubkeyhash, false);
-	ret = __einit(sigstruct, token, sgx_epc_addr(secs));
-	if (ret == SGX_INVALID_EINITTOKEN) {
-		sgx_update_lepubkeyhash_msrs(lepubkeyhash, true);
-		ret = __einit(sigstruct, token, sgx_epc_addr(secs));
-	}
-	preempt_enable();
-	return ret;
-}
-
 static __init void sgx_free_epc_section(struct sgx_epc_section *section)
 {
 	struct sgx_epc_page *page;
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index bc6a644af2b5..9b08690262b5 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -85,7 +85,5 @@ void sgx_reclaim_pages(void);
 struct sgx_epc_page *sgx_alloc_page(void *owner, bool reclaim);
 int __sgx_free_page(struct sgx_epc_page *page);
 void sgx_free_page(struct sgx_epc_page *page);
-int sgx_einit(struct sgx_sigstruct *sigstruct, struct sgx_einittoken *token,
-	      struct sgx_epc_page *secs, u64 *lepubkeyhash);
 
 #endif /* _X86_SGX_H */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 07/17] x86/sgx: Remove pages in sgx_reclaimer_write()
  2019-09-16 10:17 [PATCH v3 00/17] Fixes and updates for v23 Jarkko Sakkinen
                   ` (5 preceding siblings ...)
  2019-09-16 10:17 ` [PATCH v3 06/17] x86/sgx: Move sgx_einit() to encls.c Jarkko Sakkinen
@ 2019-09-16 10:17 ` Jarkko Sakkinen
  2019-09-16 10:17 ` [PATCH v3 08/17] x86/sgx: Calculate page index " Jarkko Sakkinen
                   ` (9 subsequent siblings)
  16 siblings, 0 replies; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-16 10:17 UTC (permalink / raw)
  To: linux-sgx
  Cc: Jarkko Sakkinen, Sean Christopherson, Shay Katz-zamir, Serge Ayoun

The overall flow is somewhat simpler if sgx_reclaimer_write() takes care
of freeing and removing pages and sgx_encl_ewb() focuses only on doing
ENCLS[EWB]. Move sgx_free_page() and __eremove() from sgx_encl_ewb() to
sgx_reclaimer_write().
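
For readability, a condensed sketch of the resulting
sgx_reclaimer_write() flow, paraphrased from the diff below (not an
exact copy):

	mutex_lock(&encl->lock);

	sgx_encl_ewb(epc_page, pt);
	if (atomic_read(&encl->flags) & SGX_ENCL_DEAD) {
		ret = __eremove(sgx_epc_addr(epc_page));
		WARN(ret, "EREMOVE returned %d\n", ret);
	}

	encl_page->epc_page = NULL;
	encl->secs_child_cnt--;

	if (!encl->secs_child_cnt &&
	    (atomic_read(&encl->flags) &
	     (SGX_ENCL_DEAD | SGX_ENCL_INITIALIZED))) {
		sgx_encl_ewb(encl->secs.epc_page, SGX_SECINFO_SECS);
		sgx_free_page(encl->secs.epc_page);
		encl->secs.epc_page = NULL;
	}

	mutex_unlock(&encl->lock);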

Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
Cc: Serge Ayoun <serge.ayoun@intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
---
 arch/x86/kernel/cpu/sgx/reclaim.c | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
index cfda38a9b05c..353888256b2b 100644
--- a/arch/x86/kernel/cpu/sgx/reclaim.c
+++ b/arch/x86/kernel/cpu/sgx/reclaim.c
@@ -357,16 +357,7 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page, unsigned int pt)
 
 		encl_page->desc |= va_offset;
 		encl_page->va_page = va_page;
-	} else if (pt != SGX_SECINFO_SECS) {
-		ret = __eremove(sgx_epc_addr(epc_page));
-		WARN(ret, "EREMOVE returned %d\n", ret);
 	}
-
-	/* The reclaimer is not aware of SECS pages. */
-	if (pt == SGX_SECINFO_SECS)
-		sgx_free_page(epc_page);
-
-	encl_page->epc_page = NULL;
 }
 
 static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
@@ -374,6 +365,7 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
 	struct sgx_encl_page *encl_page = epc_page->owner;
 	struct sgx_encl *encl = encl_page->encl;
 	unsigned int pt;
+	int ret;
 
 	if (encl_page->desc & SGX_ENCL_PAGE_TCS)
 		pt = SGX_SECINFO_TCS;
@@ -383,13 +375,22 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
 	mutex_lock(&encl->lock);
 
 	sgx_encl_ewb(epc_page, pt);
+	if (atomic_read(&encl->flags) & SGX_ENCL_DEAD) {
+		ret = __eremove(sgx_epc_addr(epc_page));
+		WARN(ret, "EREMOVE returned %d\n", ret);
+	}
 
+	encl_page->epc_page = NULL;
 	encl->secs_child_cnt--;
 
 	if (!encl->secs_child_cnt &&
 	    (atomic_read(&encl->flags) &
-	     (SGX_ENCL_DEAD | SGX_ENCL_INITIALIZED)))
+	     (SGX_ENCL_DEAD | SGX_ENCL_INITIALIZED))) {
 		sgx_encl_ewb(encl->secs.epc_page, SGX_SECINFO_SECS);
+		sgx_free_page(encl->secs.epc_page);
+
+		encl->secs.epc_page = NULL;
+	}
 
 	mutex_unlock(&encl->lock);
 }
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 08/17] x86/sgx: Calculate page index in sgx_reclaimer_write()
  2019-09-16 10:17 [PATCH v3 00/17] Fixes and updates for v23 Jarkko Sakkinen
                   ` (6 preceding siblings ...)
  2019-09-16 10:17 ` [PATCH v3 07/17] x86/sgx: Remove pages in sgx_reclaimer_write() Jarkko Sakkinen
@ 2019-09-16 10:17 ` Jarkko Sakkinen
  2019-09-16 10:17 ` [PATCH v3 09/17] x86/sgx: Move SGX_ENCL_DEAD check to sgx_reclaimer_write() Jarkko Sakkinen
                   ` (8 subsequent siblings)
  16 siblings, 0 replies; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-16 10:17 UTC (permalink / raw)
  To: linux-sgx
  Cc: Jarkko Sakkinen, Sean Christopherson, Shay Katz-zamir, Serge Ayoun

Move the backing storage page index calculation to
sgx_reclaimer_write(). This somewhat simplifies the flow and makes it
easier to control, as the high-level write logic is consolidated into
a single function.
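
In short, the callers now pass the backing page index directly (as in
the diff below):

	/* Regular and TCS pages: */
	sgx_encl_ewb(epc_page, SGX_ENCL_PAGE_INDEX(encl_page));

	/* The SECS page is stored at the end of the backing storage: */
	sgx_encl_ewb(encl->secs.epc_page, PFN_DOWN(encl->size));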

Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
Cc: Serge Ayoun <serge.ayoun@intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
---
 arch/x86/kernel/cpu/sgx/reclaim.c | 33 +++++++++----------------------
 1 file changed, 9 insertions(+), 24 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
index 353888256b2b..e758a06919e4 100644
--- a/arch/x86/kernel/cpu/sgx/reclaim.c
+++ b/arch/x86/kernel/cpu/sgx/reclaim.c
@@ -222,26 +222,15 @@ static void sgx_reclaimer_block(struct sgx_epc_page *epc_page)
 
 static int __sgx_encl_ewb(struct sgx_encl *encl, struct sgx_epc_page *epc_page,
 			  struct sgx_va_page *va_page, unsigned int va_offset,
-			  unsigned int pt)
+			  unsigned int page_index)
 {
-	struct sgx_encl_page *encl_page = epc_page->owner;
 	struct sgx_pageinfo pginfo;
 	unsigned long pcmd_offset;
 	struct page *backing;
-	pgoff_t page_index;
 	pgoff_t pcmd_index;
 	struct page *pcmd;
 	int ret;
 
-	if (pt != SGX_SECINFO_SECS && pt != SGX_SECINFO_TCS &&
-	    pt != SGX_SECINFO_REG)
-		return -EINVAL;
-
-	if (pt == SGX_SECINFO_SECS)
-		page_index = PFN_DOWN(encl->size);
-	else
-		page_index = SGX_ENCL_PAGE_INDEX(encl_page);
-
 	pcmd_index = sgx_pcmd_index(encl, page_index);
 	pcmd_offset = sgx_pcmd_offset(page_index);
 
@@ -308,7 +297,8 @@ static const cpumask_t *sgx_encl_ewb_cpumask(struct sgx_encl *encl)
 	return cpumask;
 }
 
-static void sgx_encl_ewb(struct sgx_epc_page *epc_page, unsigned int pt)
+static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
+			 unsigned int page_index)
 {
 	struct sgx_encl_page *encl_page = epc_page->owner;
 	struct sgx_encl *encl = encl_page->encl;
@@ -325,7 +315,8 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page, unsigned int pt)
 		if (sgx_va_page_full(va_page))
 			list_move_tail(&va_page->list, &encl->va_pages);
 
-		ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset, pt);
+		ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset,
+				     page_index);
 		if (ret == SGX_NOT_TRACKED) {
 			ret = __etrack(sgx_epc_addr(encl->secs.epc_page));
 			if (ret) {
@@ -335,7 +326,7 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page, unsigned int pt)
 			}
 
 			ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset,
-					     pt);
+					     page_index);
 			if (ret == SGX_NOT_TRACKED) {
 				/*
 				 * Slow path, send IPIs to kick cpus out of the
@@ -347,7 +338,7 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page, unsigned int pt)
 				on_each_cpu_mask(sgx_encl_ewb_cpumask(encl),
 						 sgx_ipi_cb, NULL, 1);
 				ret = __sgx_encl_ewb(encl, epc_page, va_page,
-						     va_offset, pt);
+						     va_offset, page_index);
 			}
 		}
 
@@ -364,17 +355,11 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
 {
 	struct sgx_encl_page *encl_page = epc_page->owner;
 	struct sgx_encl *encl = encl_page->encl;
-	unsigned int pt;
 	int ret;
 
-	if (encl_page->desc & SGX_ENCL_PAGE_TCS)
-		pt = SGX_SECINFO_TCS;
-	else
-		pt = SGX_SECINFO_REG;
-
 	mutex_lock(&encl->lock);
 
-	sgx_encl_ewb(epc_page, pt);
+	sgx_encl_ewb(epc_page, SGX_ENCL_PAGE_INDEX(encl_page));
 	if (atomic_read(&encl->flags) & SGX_ENCL_DEAD) {
 		ret = __eremove(sgx_epc_addr(epc_page));
 		WARN(ret, "EREMOVE returned %d\n", ret);
@@ -386,7 +371,7 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
 	if (!encl->secs_child_cnt &&
 	    (atomic_read(&encl->flags) &
 	     (SGX_ENCL_DEAD | SGX_ENCL_INITIALIZED))) {
-		sgx_encl_ewb(encl->secs.epc_page, SGX_SECINFO_SECS);
+		sgx_encl_ewb(encl->secs.epc_page, PFN_DOWN(encl->size));
 		sgx_free_page(encl->secs.epc_page);
 
 		encl->secs.epc_page = NULL;
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 09/17] x86/sgx: Move SGX_ENCL_DEAD check to sgx_reclaimer_write()
  2019-09-16 10:17 [PATCH v3 00/17] Fixes and updates for v23 Jarkko Sakkinen
                   ` (7 preceding siblings ...)
  2019-09-16 10:17 ` [PATCH v3 08/17] x86/sgx: Calculate page index " Jarkko Sakkinen
@ 2019-09-16 10:17 ` Jarkko Sakkinen
  2019-09-17 23:13   ` Sean Christopherson
  2019-09-17 23:21   ` Sean Christopherson
  2019-09-16 10:17 ` [PATCH v3 10/17] x86/sgx: Free VA slot when the EWB flow fails Jarkko Sakkinen
                   ` (7 subsequent siblings)
  16 siblings, 2 replies; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-16 10:17 UTC (permalink / raw)
  To: linux-sgx
  Cc: Jarkko Sakkinen, Sean Christopherson, Shay Katz-zamir, Serge Ayoun

Do the enclave state checks only in sgx_reclaimer_write(). Checking
the enclave state is not part of the sgx_encl_ewb() flow, and the check
is done differently for the SECS page and for addressable pages.
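
The net result in sgx_reclaimer_write() is the small dispatch below
(condensed from the diff that follows):

	if (atomic_read(&encl->flags) & SGX_ENCL_DEAD) {
		ret = __eremove(sgx_epc_addr(epc_page));
		WARN(ret, "EREMOVE returned %d\n", ret);
	} else {
		sgx_encl_ewb(epc_page, SGX_ENCL_PAGE_INDEX(encl_page));
	}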

Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
Cc: Serge Ayoun <serge.ayoun@intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
---
 arch/x86/kernel/cpu/sgx/reclaim.c | 69 +++++++++++++++----------------
 1 file changed, 34 insertions(+), 35 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
index e758a06919e4..a3e36f959c74 100644
--- a/arch/x86/kernel/cpu/sgx/reclaim.c
+++ b/arch/x86/kernel/cpu/sgx/reclaim.c
@@ -308,47 +308,45 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
 
 	encl_page->desc &= ~SGX_ENCL_PAGE_RECLAIMED;
 
-	if (!(atomic_read(&encl->flags) & SGX_ENCL_DEAD)) {
-		va_page = list_first_entry(&encl->va_pages, struct sgx_va_page,
-					   list);
-		va_offset = sgx_alloc_va_slot(va_page);
-		if (sgx_va_page_full(va_page))
-			list_move_tail(&va_page->list, &encl->va_pages);
+	va_page = list_first_entry(&encl->va_pages, struct sgx_va_page,
+				   list);
+	va_offset = sgx_alloc_va_slot(va_page);
+	if (sgx_va_page_full(va_page))
+		list_move_tail(&va_page->list, &encl->va_pages);
+
+	ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset,
+			     page_index);
+	if (ret == SGX_NOT_TRACKED) {
+		ret = __etrack(sgx_epc_addr(encl->secs.epc_page));
+		if (ret) {
+			if (encls_failed(ret) ||
+			    encls_returned_code(ret))
+				ENCLS_WARN(ret, "ETRACK");
+		}
 
 		ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset,
 				     page_index);
 		if (ret == SGX_NOT_TRACKED) {
-			ret = __etrack(sgx_epc_addr(encl->secs.epc_page));
-			if (ret) {
-				if (encls_failed(ret) ||
-				    encls_returned_code(ret))
-					ENCLS_WARN(ret, "ETRACK");
-			}
-
-			ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset,
-					     page_index);
-			if (ret == SGX_NOT_TRACKED) {
-				/*
-				 * Slow path, send IPIs to kick cpus out of the
-				 * enclave.  Note, it's imperative that the cpu
-				 * mask is generated *after* ETRACK, else we'll
-				 * miss cpus that entered the enclave between
-				 * generating the mask and incrementing epoch.
-				 */
-				on_each_cpu_mask(sgx_encl_ewb_cpumask(encl),
-						 sgx_ipi_cb, NULL, 1);
-				ret = __sgx_encl_ewb(encl, epc_page, va_page,
-						     va_offset, page_index);
-			}
+			/*
+			 * Slow path, send IPIs to kick cpus out of the
+			 * enclave.  Note, it's imperative that the cpu
+			 * mask is generated *after* ETRACK, else we'll
+			 * miss cpus that entered the enclave between
+			 * generating the mask and incrementing epoch.
+			 */
+			on_each_cpu_mask(sgx_encl_ewb_cpumask(encl),
+					 sgx_ipi_cb, NULL, 1);
+			ret = __sgx_encl_ewb(encl, epc_page, va_page,
+					     va_offset, page_index);
 		}
+	}
 
-		if (ret)
-			if (encls_failed(ret) || encls_returned_code(ret))
-				ENCLS_WARN(ret, "EWB");
+	if (ret)
+		if (encls_failed(ret) || encls_returned_code(ret))
+			ENCLS_WARN(ret, "EWB");
 
-		encl_page->desc |= va_offset;
-		encl_page->va_page = va_page;
-	}
+	encl_page->desc |= va_offset;
+	encl_page->va_page = va_page;
 }
 
 static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
@@ -359,10 +357,11 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
 
 	mutex_lock(&encl->lock);
 
-	sgx_encl_ewb(epc_page, SGX_ENCL_PAGE_INDEX(encl_page));
 	if (atomic_read(&encl->flags) & SGX_ENCL_DEAD) {
 		ret = __eremove(sgx_epc_addr(epc_page));
 		WARN(ret, "EREMOVE returned %d\n", ret);
+	} else {
+		sgx_encl_ewb(epc_page, SGX_ENCL_PAGE_INDEX(encl_page));
 	}
 
 	encl_page->epc_page = NULL;
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 10/17] x86/sgx: Free VA slot when the EWB flow fails
  2019-09-16 10:17 [PATCH v3 00/17] Fixes and updates for v23 Jarkko Sakkinen
                   ` (8 preceding siblings ...)
  2019-09-16 10:17 ` [PATCH v3 09/17] x86/sgx: Move SGX_ENCL_DEAD check to sgx_reclaimer_write() Jarkko Sakkinen
@ 2019-09-16 10:17 ` Jarkko Sakkinen
  2019-09-16 10:17 ` [PATCH v3 11/17] x86/sgx: Call sgx_encl_destroy() " Jarkko Sakkinen
                   ` (6 subsequent siblings)
  16 siblings, 0 replies; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-16 10:17 UTC (permalink / raw)
  To: linux-sgx
  Cc: Jarkko Sakkinen, Sean Christopherson, Shay Katz-zamir, Serge Ayoun

Call sgx_free_va_slot() when the EWB flow fails. Otherwise, VA slots
will leak in the failure case.

Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
Cc: Serge Ayoun <serge.ayoun@intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
---
 arch/x86/kernel/cpu/sgx/reclaim.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
index a3e36f959c74..4ae6122c18e5 100644
--- a/arch/x86/kernel/cpu/sgx/reclaim.c
+++ b/arch/x86/kernel/cpu/sgx/reclaim.c
@@ -341,12 +341,15 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
 		}
 	}
 
-	if (ret)
+	if (ret) {
 		if (encls_failed(ret) || encls_returned_code(ret))
 			ENCLS_WARN(ret, "EWB");
 
-	encl_page->desc |= va_offset;
-	encl_page->va_page = va_page;
+		sgx_free_va_slot(va_page, va_offset);
+	} else {
+		encl_page->desc |= va_offset;
+		encl_page->va_page = va_page;
+	}
 }
 
 static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 11/17] x86/sgx: Call sgx_encl_destroy() when the EWB flow fails
  2019-09-16 10:17 [PATCH v3 00/17] Fixes and updates for v23 Jarkko Sakkinen
                   ` (9 preceding siblings ...)
  2019-09-16 10:17 ` [PATCH v3 10/17] x86/sgx: Free VA slot when the EWB flow fails Jarkko Sakkinen
@ 2019-09-16 10:17 ` Jarkko Sakkinen
  2019-09-16 10:17 ` [PATCH v3 12/17] x86/sgx: Open code sgx_reclaimer_get() and sgx_reclaimer_put() Jarkko Sakkinen
                   ` (5 subsequent siblings)
  16 siblings, 0 replies; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-16 10:17 UTC (permalink / raw)
  To: linux-sgx
  Cc: Jarkko Sakkinen, Sean Christopherson, Shay Katz-zamir, Serge Ayoun

There is not much we can do if the EWB flow fails: it can fail if
binding the backing storage fails, or if the enclave is running inside
a VM and the host has been suspended. In that case, tear down the
enclave with sgx_encl_destroy() instead of only freeing the VA slot.

Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
Cc: Serge Ayoun <serge.ayoun@intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
---
 arch/x86/kernel/cpu/sgx/reclaim.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
index 4ae6122c18e5..c2a85db68307 100644
--- a/arch/x86/kernel/cpu/sgx/reclaim.c
+++ b/arch/x86/kernel/cpu/sgx/reclaim.c
@@ -345,7 +345,7 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
 		if (encls_failed(ret) || encls_returned_code(ret))
 			ENCLS_WARN(ret, "EWB");
 
-		sgx_free_va_slot(va_page, va_offset);
+		sgx_encl_destroy(encl);
 	} else {
 		encl_page->desc |= va_offset;
 		encl_page->va_page = va_page;
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 12/17] x86/sgx: Open code sgx_reclaimer_get() and sgx_reclaimer_put()
  2019-09-16 10:17 [PATCH v3 00/17] Fixes and updates for v23 Jarkko Sakkinen
                   ` (10 preceding siblings ...)
  2019-09-16 10:17 ` [PATCH v3 11/17] x86/sgx: Call sgx_encl_destroy() " Jarkko Sakkinen
@ 2019-09-16 10:17 ` Jarkko Sakkinen
  2019-09-17 23:07   ` Sean Christopherson
  2019-09-16 10:17 ` [PATCH v3 13/17] x86/sgx: Introduce sgx_can_reclaim() Jarkko Sakkinen
                   ` (4 subsequent siblings)
  16 siblings, 1 reply; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-16 10:17 UTC (permalink / raw)
  To: linux-sgx
  Cc: Jarkko Sakkinen, Sean Christopherson, Shay Katz-zamir, Serge Ayoun

Open code sgx_reclaimer_get() and sgx_reclaimer_put(). They are legacy
from the callback interface, and wrapping a single function call inside
another function does not make sense.

Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
Cc: Serge Ayoun <serge.ayoun@intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
---
 arch/x86/kernel/cpu/sgx/reclaim.c | 47 ++++++++++++-------------------
 1 file changed, 18 insertions(+), 29 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
index c2a85db68307..5a4d44dd02d7 100644
--- a/arch/x86/kernel/cpu/sgx/reclaim.c
+++ b/arch/x86/kernel/cpu/sgx/reclaim.c
@@ -121,22 +121,6 @@ void sgx_mark_page_reclaimable(struct sgx_epc_page *page)
 	spin_unlock(&sgx_active_page_list_lock);
 }
 
-bool sgx_reclaimer_get(struct sgx_epc_page *epc_page)
-{
-	struct sgx_encl_page *encl_page = epc_page->owner;
-	struct sgx_encl *encl = encl_page->encl;
-
-	return kref_get_unless_zero(&encl->refcount) != 0;
-}
-
-void sgx_reclaimer_put(struct sgx_epc_page *epc_page)
-{
-	struct sgx_encl_page *encl_page = epc_page->owner;
-	struct sgx_encl *encl = encl_page->encl;
-
-	kref_put(&encl->refcount, sgx_encl_release);
-}
-
 static bool sgx_reclaimer_evict(struct sgx_epc_page *epc_page)
 {
 	struct sgx_encl_page *page = epc_page->owner;
@@ -392,6 +376,7 @@ void sgx_reclaim_pages(void)
 {
 	struct sgx_epc_page *chunk[SGX_NR_TO_SCAN + 1];
 	struct sgx_epc_section *section;
+	struct sgx_encl_page *encl_page;
 	struct sgx_epc_page *epc_page;
 	int cnt = 0;
 	int i;
@@ -404,8 +389,9 @@ void sgx_reclaim_pages(void)
 		epc_page = list_first_entry(&sgx_active_page_list,
 					    struct sgx_epc_page, list);
 		list_del_init(&epc_page->list);
+		encl_page = epc_page->owner;
 
-		if (sgx_reclaimer_get(epc_page))
+		if (kref_get_unless_zero(&encl_page->encl->refcount) != 0)
 			chunk[cnt++] = epc_page;
 		else
 			/* The owner is freeing the page. No need to add the
@@ -417,10 +403,12 @@ void sgx_reclaim_pages(void)
 
 	for (i = 0; i < cnt; i++) {
 		epc_page = chunk[i];
+		encl_page = epc_page->owner;
+
 		if (sgx_reclaimer_evict(epc_page))
 			continue;
 
-		sgx_reclaimer_put(epc_page);
+		kref_put(&encl_page->encl->refcount, sgx_encl_release);
 
 		spin_lock(&sgx_active_page_list_lock);
 		list_add_tail(&epc_page->list, &sgx_active_page_list);
@@ -437,19 +425,20 @@ void sgx_reclaim_pages(void)
 
 	for (i = 0; i < cnt; i++) {
 		epc_page = chunk[i];
-		if (epc_page) {
-			sgx_reclaimer_write(epc_page);
-			sgx_reclaimer_put(epc_page);
-			epc_page->desc &= ~SGX_EPC_PAGE_RECLAIMABLE;
+		if (!epc_page)
+			continue;
 
-			section = sgx_epc_section(epc_page);
 
-			spin_lock(&section->lock);
-			list_add_tail(&epc_page->list,
-				      &section->page_list);
-			section->free_cnt++;
-			spin_unlock(&section->lock);
-		}
+		encl_page = epc_page->owner;
+		sgx_reclaimer_write(epc_page);
+		kref_put(&encl_page->encl->refcount, sgx_encl_release);
+		epc_page->desc &= ~SGX_EPC_PAGE_RECLAIMABLE;
+
+		section = sgx_epc_section(epc_page);
+		spin_lock(&section->lock);
+		list_add_tail(&epc_page->list, &section->page_list);
+		section->free_cnt++;
+		spin_unlock(&section->lock);
 	}
 }
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 13/17] x86/sgx: Introduce sgx_can_reclaim()
  2019-09-16 10:17 [PATCH v3 00/17] Fixes and updates for v23 Jarkko Sakkinen
                   ` (11 preceding siblings ...)
  2019-09-16 10:17 ` [PATCH v3 12/17] x86/sgx: Open code sgx_reclaimer_get() and sgx_reclaimer_put() Jarkko Sakkinen
@ 2019-09-16 10:17 ` Jarkko Sakkinen
  2019-09-17 23:25   ` Sean Christopherson
  2019-09-25 18:28   ` Sean Christopherson
  2019-09-16 10:18 ` [PATCH v3 14/17] x86/sgx: Replace section->free_cnt with a global sgx_nr_free_pages Jarkko Sakkinen
                   ` (3 subsequent siblings)
  16 siblings, 2 replies; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-16 10:17 UTC (permalink / raw)
  To: linux-sgx
  Cc: Jarkko Sakkinen, Sean Christopherson, Shay Katz-zamir, Serge Ayoun

Make sgx_reclaimer_evict() idempotent, rename it to sgx_can_reclaim(),
and set SGX_ENCL_PAGE_RECLAIMED at the call site.

Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
Cc: Serge Ayoun <serge.ayoun@intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
---
 arch/x86/kernel/cpu/sgx/reclaim.c | 32 +++++++++++++++++--------------
 1 file changed, 18 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
index 5a4d44dd02d7..cc3155b61513 100644
--- a/arch/x86/kernel/cpu/sgx/reclaim.c
+++ b/arch/x86/kernel/cpu/sgx/reclaim.c
@@ -121,7 +121,19 @@ void sgx_mark_page_reclaimable(struct sgx_epc_page *page)
 	spin_unlock(&sgx_active_page_list_lock);
 }
 
-static bool sgx_reclaimer_evict(struct sgx_epc_page *epc_page)
+/**
+ * sgx_can_reclaim() - Filter out the pages that should not be reclaimed
+ * @epc_page:	a candidate EPC page
+ *
+ * Do not reclaim this page if it has been recently accessed by any mm_struct
+ * *and* if the enclave is still alive.  No need to take the enclave's lock,
+ * worst case scenario reclaiming pages from a dead enclave is delayed slightly.
+ * A live enclave with a recently accessed page is more common and avoiding lock
+ * contention in that case is a boon to performance.
+ *
+ * Return: true if the page should be reclaimed
+ */
+static bool sgx_can_reclaim(struct sgx_epc_page *epc_page)
 {
 	struct sgx_encl_page *page = epc_page->owner;
 	struct sgx_encl *encl = page->encl;
@@ -147,21 +159,9 @@ static bool sgx_reclaimer_evict(struct sgx_epc_page *epc_page)
 
 	srcu_read_unlock(&encl->srcu, idx);
 
-	/*
-	 * Do not reclaim this page if it has been recently accessed by any
-	 * mm_struct *and* if the enclave is still alive.  No need to take
-	 * the enclave's lock, worst case scenario reclaiming pages from a
-	 * dead enclave is delayed slightly.  A live enclave with a recently
-	 * accessed page is more common and avoiding lock contention in that
-	 * case is a boon to performance.
-	 */
 	if (!ret && !(atomic_read(&encl->flags) & SGX_ENCL_DEAD))
 		return false;
 
-	mutex_lock(&encl->lock);
-	page->desc |= SGX_ENCL_PAGE_RECLAIMED;
-	mutex_unlock(&encl->lock);
-
 	return true;
 }
 
@@ -405,8 +405,12 @@ void sgx_reclaim_pages(void)
 		epc_page = chunk[i];
 		encl_page = epc_page->owner;
 
-		if (sgx_reclaimer_evict(epc_page))
+		if (sgx_can_reclaim(epc_page)) {
+			mutex_lock(&encl_page->encl->lock);
+			encl_page->desc |= SGX_ENCL_PAGE_RECLAIMED;
+			mutex_unlock(&encl_page->encl->lock);
 			continue;
+		}
 
 		kref_put(&encl_page->encl->refcount, sgx_encl_release);
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 14/17] x86/sgx: Replace section->free_cnt with a global sgx_nr_free_pages
  2019-09-16 10:17 [PATCH v3 00/17] Fixes and updates for v23 Jarkko Sakkinen
                   ` (12 preceding siblings ...)
  2019-09-16 10:17 ` [PATCH v3 13/17] x86/sgx: Introduce sgx_can_reclaim() Jarkko Sakkinen
@ 2019-09-16 10:18 ` Jarkko Sakkinen
  2019-09-17 22:50   ` Sean Christopherson
  2019-09-16 10:18 ` [PATCH v3 15/17] x86/sgx: sgx_vma_access(): Do not return -ECANCELED on invalid TCS pages Jarkko Sakkinen
                   ` (2 subsequent siblings)
  16 siblings, 1 reply; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-16 10:18 UTC (permalink / raw)
  To: linux-sgx
  Cc: Jarkko Sakkinen, Sean Christopherson, Shay Katz-zamir, Serge Ayoun

Replace the section-specific counters with a single global counter for
free pages. In effect, remove sgx_calc_free_cnt().

Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
Cc: Serge Ayoun <serge.ayoun@intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
---
 arch/x86/kernel/cpu/sgx/main.c    | 11 ++++++-----
 arch/x86/kernel/cpu/sgx/reclaim.c | 19 ++-----------------
 arch/x86/kernel/cpu/sgx/sgx.h     |  3 +--
 3 files changed, 9 insertions(+), 24 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 4c03e5f33414..f37d28023b97 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -14,19 +14,20 @@
 
 struct sgx_epc_section sgx_epc_sections[SGX_MAX_EPC_SECTIONS];
 int sgx_nr_epc_sections;
+unsigned long sgx_nr_free_pages;
 
 static struct sgx_epc_page *sgx_section_try_take_page(
 	struct sgx_epc_section *section)
 {
 	struct sgx_epc_page *page;
 
-	if (!section->free_cnt)
+	if (list_empty(&section->page_list))
 		return NULL;
 
 	page = list_first_entry(&section->page_list, struct sgx_epc_page,
 				list);
 	list_del_init(&page->list);
-	section->free_cnt--;
+	sgx_nr_free_pages--;
 	return page;
 }
 
@@ -90,7 +91,7 @@ struct sgx_epc_page *sgx_alloc_page(void *owner, bool reclaim)
 		schedule();
 	}
 
-	if (sgx_calc_free_cnt() < SGX_NR_LOW_PAGES)
+	if (sgx_nr_free_pages < SGX_NR_LOW_PAGES)
 		wake_up(&ksgxswapd_waitq);
 
 	return entry;
@@ -136,7 +137,7 @@ int __sgx_free_page(struct sgx_epc_page *page)
 
 	spin_lock(&section->lock);
 	list_add_tail(&page->list, &section->page_list);
-	section->free_cnt++;
+	sgx_nr_free_pages++;
 	spin_unlock(&section->lock);
 
 	return 0;
@@ -202,7 +203,7 @@ static __init int sgx_init_epc_section(u64 addr, u64 size, unsigned long index,
 			goto out;
 		page->desc = (addr + (i << PAGE_SHIFT)) | index;
 		list_add_tail(&page->list, &section->unsanitized_page_list);
-		section->free_cnt++;
+		sgx_nr_free_pages++;
 	}
 
 	return 0;
diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
index cc3155b61513..2e04a923d8dc 100644
--- a/arch/x86/kernel/cpu/sgx/reclaim.c
+++ b/arch/x86/kernel/cpu/sgx/reclaim.c
@@ -64,7 +64,7 @@ static void sgx_sanitize_section(struct sgx_epc_section *section)
 
 static inline bool sgx_should_reclaim(void)
 {
-	return sgx_calc_free_cnt() < SGX_NR_HIGH_PAGES &&
+	return sgx_nr_free_pages < SGX_NR_HIGH_PAGES &&
 	       !list_empty(&sgx_active_page_list);
 }
 
@@ -432,7 +432,6 @@ void sgx_reclaim_pages(void)
 		if (!epc_page)
 			continue;
 
-
 		encl_page = epc_page->owner;
 		sgx_reclaimer_write(epc_page);
 		kref_put(&encl_page->encl->refcount, sgx_encl_release);
@@ -441,21 +440,7 @@ void sgx_reclaim_pages(void)
 		section = sgx_epc_section(epc_page);
 		spin_lock(&section->lock);
 		list_add_tail(&epc_page->list, &section->page_list);
-		section->free_cnt++;
+		sgx_nr_free_pages++;
 		spin_unlock(&section->lock);
 	}
 }
-
-unsigned long sgx_calc_free_cnt(void)
-{
-	struct sgx_epc_section *section;
-	unsigned long free_cnt = 0;
-	int i;
-
-	for (i = 0; i < sgx_nr_epc_sections; i++) {
-		section = &sgx_epc_sections[i];
-		free_cnt += section->free_cnt;
-	}
-
-	return free_cnt;
-}
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 9b08690262b5..56d0bde3f4d8 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -30,7 +30,6 @@ struct sgx_epc_section {
 	void *va;
 	struct list_head page_list;
 	struct list_head unsanitized_page_list;
-	unsigned long free_cnt;
 	spinlock_t lock;
 };
 
@@ -72,6 +71,7 @@ static inline void *sgx_epc_addr(struct sgx_epc_page *page)
 #define SGX_NR_HIGH_PAGES	64
 
 extern int sgx_nr_epc_sections;
+extern unsigned long sgx_nr_free_pages;
 extern struct task_struct *ksgxswapd_tsk;
 extern struct wait_queue_head(ksgxswapd_waitq);
 extern struct list_head sgx_active_page_list;
@@ -79,7 +79,6 @@ extern spinlock_t sgx_active_page_list_lock;
 
 int sgx_page_reclaimer_init(void);
 void sgx_mark_page_reclaimable(struct sgx_epc_page *page);
-unsigned long sgx_calc_free_cnt(void);
 void sgx_reclaim_pages(void);
 
 struct sgx_epc_page *sgx_alloc_page(void *owner, bool reclaim);
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 15/17] x86/sgx: sgx_vma_access(): Do not return -ECANCELED on invalid TCS pages
  2019-09-16 10:17 [PATCH v3 00/17] Fixes and updates for v23 Jarkko Sakkinen
                   ` (13 preceding siblings ...)
  2019-09-16 10:18 ` [PATCH v3 14/17] x86/sgx: Replace section->free_cnt with a global sgx_nr_free_pages Jarkko Sakkinen
@ 2019-09-16 10:18 ` Jarkko Sakkinen
  2019-09-16 10:18 ` [PATCH v3 16/17] x86/sgx: Introduce sgx_encl_get_backing() Jarkko Sakkinen
  2019-09-16 10:18 ` [PATCH v3 17/17] x86/sgx: Fix pages in the BLOCKED state ending up to the free pool Jarkko Sakkinen
  16 siblings, 0 replies; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-16 10:18 UTC (permalink / raw)
  To: linux-sgx
  Cc: Jarkko Sakkinen, Sean Christopherson, Shay Katz-zamir, Serge Ayoun

When validating a TCS page, one should consider the ptrace man page
(man 2 ptrace), which documents the EIO error as follows:

"request is invalid, or an attempt was made to read from or write to an
invalid area in the tracer's or the tracee's memory, or there was a
word-alignment violation, or an invalid signal was specified during a
restart request."

Thus, returning -ECANCELED is not the right thing to do.

Instead, return -EIO when the access fails. In effect, this renders
the explicit TCS validation code unnecessary, so remove it, and remove
SGX_ENCL_PAGE_TCS as it is no longer used for anything.

Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
Cc: Serge Ayoun <serge.ayoun@intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
---
 arch/x86/kernel/cpu/sgx/encl.c  | 16 ++--------------
 arch/x86/kernel/cpu/sgx/encl.h  |  2 --
 arch/x86/kernel/cpu/sgx/ioctl.c |  3 ---
 3 files changed, 2 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 1c1fbc95be33..66762b9c1517 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -352,14 +352,9 @@ static int sgx_vma_mprotect(struct vm_area_struct *vma, unsigned long start,
 static int sgx_edbgrd(struct sgx_encl *encl, struct sgx_encl_page *page,
 		      unsigned long addr, void *data)
 {
-	unsigned long offset;
+	unsigned long offset = addr & ~PAGE_MASK;
 	int ret;
 
-	offset = addr & ~PAGE_MASK;
-
-	if ((page->desc & SGX_ENCL_PAGE_TCS) &&
-	    offset > offsetof(struct sgx_tcs, gs_limit))
-		return -ECANCELED;
 
 	ret = __edbgrd(sgx_epc_addr(page->epc_page) + offset, data);
 	if (ret)
@@ -371,16 +366,9 @@ static int sgx_edbgrd(struct sgx_encl *encl, struct sgx_encl_page *page,
 static int sgx_edbgwr(struct sgx_encl *encl, struct sgx_encl_page *page,
 		      unsigned long addr, void *data)
 {
-	unsigned long offset;
+	unsigned long offset = addr & ~PAGE_MASK;
 	int ret;
 
-	offset = addr & ~PAGE_MASK;
-
-	/* Writing anything else than flags will cause #GP */
-	if ((page->desc & SGX_ENCL_PAGE_TCS) &&
-	    offset != offsetof(struct sgx_tcs, flags))
-		return -ECANCELED;
-
 	ret = __edbgwr(sgx_epc_addr(page->epc_page) + offset, data);
 	if (ret)
 		return -EIO;
diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index 95e5713a50ad..c7abca1fcb9d 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -19,7 +19,6 @@
 
 /**
  * enum sgx_encl_page_desc - defines bits for an enclave page's descriptor
- * %SGX_ENCL_PAGE_TCS:			The page is a TCS page.
  * %SGX_ENCL_PAGE_RECLAIMED:		The page is in the process of being
  *					reclaimed.
  * %SGX_ENCL_PAGE_VA_OFFSET_MASK:	Holds the offset in the Version Array
@@ -30,7 +29,6 @@
  * the SECS page.
  */
 enum sgx_encl_page_desc {
-	SGX_ENCL_PAGE_TCS		= BIT(0),
 	/* Bits 11:3 are available when the page is not swapped. */
 	SGX_ENCL_PAGE_RECLAIMED		= BIT(3),
 	SGX_ENCL_PAGE_VA_OFFSET_MASK	= GENMASK_ULL(11, 3),
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index e57dda38513b..cc77728af7da 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -126,9 +126,6 @@ static struct sgx_encl_page *sgx_encl_page_alloc(struct sgx_encl *encl,
 	encl_page->desc = addr;
 	encl_page->encl = encl;
 
-	if (secinfo_flags & SGX_SECINFO_TCS)
-		encl_page->desc |= SGX_ENCL_PAGE_TCS;
-
 	prot = _calc_vm_trans(secinfo_flags, SGX_SECINFO_R, PROT_READ)  |
 	       _calc_vm_trans(secinfo_flags, SGX_SECINFO_W, PROT_WRITE) |
 	       _calc_vm_trans(secinfo_flags, SGX_SECINFO_X, PROT_EXEC);
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 16/17] x86/sgx: Introduce sgx_encl_get_backing()
  2019-09-16 10:17 [PATCH v3 00/17] Fixes and updates for v23 Jarkko Sakkinen
                   ` (14 preceding siblings ...)
  2019-09-16 10:18 ` [PATCH v3 15/17] x86/sgx: sgx_vma_access(): Do not return -ECANCELED on invalid TCS pages Jarkko Sakkinen
@ 2019-09-16 10:18 ` Jarkko Sakkinen
  2019-09-17 23:05   ` Sean Christopherson
  2019-09-16 10:18 ` [PATCH v3 17/17] x86/sgx: Fix pages in the BLOCKED state ending up to the free pool Jarkko Sakkinen
  16 siblings, 1 reply; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-16 10:18 UTC (permalink / raw)
  To: linux-sgx
  Cc: Jarkko Sakkinen, Sean Christopherson, Shay Katz-zamir, Serge Ayoun

Since __sgx_encl_eldu() and __sgx_encl_ewb() pin the backing storage in the
same way, consolidate this to sgx_encl_get_backing() replacing
sgx_encl_get_backing_page().
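
For illustration, callers now pin and release the backing storage roughly
like this (condensed sketch of the flow in the diff below):

	struct sgx_backing b;
	int ret;

	ret = sgx_encl_get_backing(encl, page_index, &b);
	if (ret)
		return ret;

	/* ... kmap b.contents and b.pcmd + b.pcmd_offset, run ENCLS ... */

	put_page(b.pcmd);
	put_page(b.contents);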

Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
Cc: Serge Ayoun <serge.ayoun@intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
---
 arch/x86/kernel/cpu/sgx/encl.c    | 86 +++++++++++++++++++------------
 arch/x86/kernel/cpu/sgx/encl.h    | 20 +++----
 arch/x86/kernel/cpu/sgx/reclaim.c | 39 ++++----------
 3 files changed, 71 insertions(+), 74 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 66762b9c1517..a71414ce05b1 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -17,11 +17,8 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
 	unsigned long va_offset = SGX_ENCL_PAGE_VA_OFFSET(encl_page);
 	struct sgx_encl *encl = encl_page->encl;
 	struct sgx_pageinfo pginfo;
-	unsigned long pcmd_offset;
-	struct page *backing;
+	struct sgx_backing b;
 	pgoff_t page_index;
-	pgoff_t pcmd_index;
-	struct page *pcmd;
 	int ret;
 
 	if (secs_page)
@@ -29,24 +26,14 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
 	else
 		page_index = PFN_DOWN(encl->size);
 
-	pcmd_index = sgx_pcmd_index(encl, page_index);
-	pcmd_offset = sgx_pcmd_offset(page_index);
-
-	backing = sgx_encl_get_backing_page(encl, page_index);
-	if (IS_ERR(backing)) {
-		ret = PTR_ERR(backing);
-		goto err_backing;
-	}
-
-	pcmd = sgx_encl_get_backing_page(encl, pcmd_index);
-	if (IS_ERR(pcmd)) {
-		ret = PTR_ERR(pcmd);
-		goto err_pcmd;
-	}
+	ret = sgx_encl_get_backing(encl, page_index, &b);
+	if (ret)
+		return ret;
 
 	pginfo.addr = SGX_ENCL_PAGE_ADDR(encl_page);
-	pginfo.contents = (unsigned long)kmap_atomic(backing);
-	pginfo.metadata = (unsigned long)kmap_atomic(pcmd) + pcmd_offset;
+	pginfo.contents = (unsigned long)kmap_atomic(b.contents);
+	pginfo.metadata = (unsigned long)kmap_atomic(b.pcmd) +
+			  b.pcmd_offset;
 
 	if (secs_page)
 		pginfo.secs = (u64)sgx_epc_addr(secs_page);
@@ -62,15 +49,12 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
 		ret = -EFAULT;
 	}
 
-	kunmap_atomic((void *)(unsigned long)(pginfo.metadata - pcmd_offset));
+	kunmap_atomic((void *)(unsigned long)(pginfo.metadata - b.pcmd_offset));
 	kunmap_atomic((void *)(unsigned long)pginfo.contents);
 
-	put_page(pcmd);
+	put_page(b.pcmd);
+	put_page(b.contents);
 
-err_pcmd:
-	put_page(backing);
-
-err_backing:
 	return ret;
 }
 
@@ -537,14 +521,8 @@ void sgx_encl_release(struct kref *ref)
 	kfree(encl);
 }
 
-/**
- * sgx_encl_encl_get_backing_page() - Pin the backing page
- * @encl:	an enclave
- * @index:	page index
- *
- * Return: the pinned backing page
- */
-struct page *sgx_encl_get_backing_page(struct sgx_encl *encl, pgoff_t index)
+static struct page *sgx_encl_get_backing_page(struct sgx_encl *encl,
+					      pgoff_t index)
 {
 	struct inode *inode = encl->backing->f_path.dentry->d_inode;
 	struct address_space *mapping = inode->i_mapping;
@@ -553,6 +531,46 @@ struct page *sgx_encl_get_backing_page(struct sgx_encl *encl, pgoff_t index)
 	return shmem_read_mapping_page_gfp(mapping, index, gfpmask);
 }
 
+/**
+ * sgx_encl_get_backing() - Access the backing storage
+ * @encl:	an enclave
+ * @page_index:	enclave page index
+ * @backing:	data for accessing backing storage for the page
+ *
+ * Pin the backing storage pages for storing the encrypted contents and Paging
+ * Crypto MetaData (PCMD) of an enclave page.
+ *
+ * Return:
+ *   0 on success,
+ *   -errno otherwise.
+ */
+int sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index,
+			 struct sgx_backing *backing)
+{
+	pgoff_t pcmd_index = PFN_DOWN(encl->size) + 1 + (page_index >> 5);
+	struct page *contents;
+	struct page *pcmd;
+
+	contents = sgx_encl_get_backing_page(encl, page_index);
+	if (IS_ERR(contents))
+		return PTR_ERR(contents);
+
+	pcmd = sgx_encl_get_backing_page(encl, pcmd_index);
+	if (IS_ERR(pcmd)) {
+		put_page(contents);
+		return PTR_ERR(pcmd);
+	}
+
+	backing->page_index = page_index;
+	backing->contents = contents;
+	backing->pcmd = pcmd;
+	backing->pcmd_offset =
+		(page_index & (PAGE_SIZE / sizeof(struct sgx_pcmd) - 1)) *
+		sizeof(struct sgx_pcmd);
+
+	return 0;
+}
+
 static int sgx_encl_test_and_clear_young_cb(pte_t *ptep, unsigned long addr,
 					    void *data)
 {
diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index c7abca1fcb9d..59bc4a5bf7e6 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -97,23 +97,19 @@ struct sgx_va_page {
 
 extern const struct vm_operations_struct sgx_vm_ops;
 
-static inline pgoff_t sgx_pcmd_index(struct sgx_encl *encl,
-				     pgoff_t page_index)
-{
-	return PFN_DOWN(encl->size) + 1 + (page_index >> 5);
-}
-
-static inline unsigned long sgx_pcmd_offset(pgoff_t page_index)
-{
-	return (page_index & (PAGE_SIZE / sizeof(struct sgx_pcmd) - 1)) *
-	       sizeof(struct sgx_pcmd);
-}
+struct sgx_backing {
+	pgoff_t page_index;
+	struct page *contents;
+	struct page *pcmd;
+	unsigned long pcmd_offset;
+};
 
 int sgx_encl_find(struct mm_struct *mm, unsigned long addr,
 		  struct vm_area_struct **vma);
 void sgx_encl_destroy(struct sgx_encl *encl);
 void sgx_encl_release(struct kref *ref);
-struct page *sgx_encl_get_backing_page(struct sgx_encl *encl, pgoff_t index);
+int sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index,
+			 struct sgx_backing *backing);
 int sgx_encl_mm_add(struct sgx_encl *encl, struct mm_struct *mm);
 int sgx_encl_test_and_clear_young(struct mm_struct *mm,
 				  struct sgx_encl_page *page);
diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
index 2e04a923d8dc..7d628a1388e2 100644
--- a/arch/x86/kernel/cpu/sgx/reclaim.c
+++ b/arch/x86/kernel/cpu/sgx/reclaim.c
@@ -209,47 +209,30 @@ static int __sgx_encl_ewb(struct sgx_encl *encl, struct sgx_epc_page *epc_page,
 			  unsigned int page_index)
 {
 	struct sgx_pageinfo pginfo;
-	unsigned long pcmd_offset;
-	struct page *backing;
-	pgoff_t pcmd_index;
-	struct page *pcmd;
+	struct sgx_backing b;
 	int ret;
 
-	pcmd_index = sgx_pcmd_index(encl, page_index);
-	pcmd_offset = sgx_pcmd_offset(page_index);
-
-	backing = sgx_encl_get_backing_page(encl, page_index);
-	if (IS_ERR(backing)) {
-		ret = PTR_ERR(backing);
-		goto err_backing;
-	}
-
-	pcmd = sgx_encl_get_backing_page(encl, pcmd_index);
-	if (IS_ERR(pcmd)) {
-		ret = PTR_ERR(pcmd);
-		goto err_pcmd;
-	}
+	ret = sgx_encl_get_backing(encl, page_index, &b);
+	if (ret)
+		return ret;
 
 	pginfo.addr = 0;
-	pginfo.contents = (unsigned long)kmap_atomic(backing);
-	pginfo.metadata = (unsigned long)kmap_atomic(pcmd) + pcmd_offset;
+	pginfo.contents = (unsigned long)kmap_atomic(b.contents);
+	pginfo.metadata = (unsigned long)kmap_atomic(b.pcmd) + b.pcmd_offset;
 	pginfo.secs = 0;
 	ret = __ewb(&pginfo, sgx_epc_addr(epc_page),
 		    sgx_epc_addr(va_page->epc_page) + va_offset);
-	kunmap_atomic((void *)(unsigned long)(pginfo.metadata - pcmd_offset));
+	kunmap_atomic((void *)(unsigned long)(pginfo.metadata - b.pcmd_offset));
 	kunmap_atomic((void *)(unsigned long)pginfo.contents);
 
 	if (!ret) {
-		set_page_dirty(pcmd);
-		set_page_dirty(backing);
+		set_page_dirty(b.pcmd);
+		set_page_dirty(b.contents);
 	}
 
-	put_page(pcmd);
-
-err_pcmd:
-	put_page(backing);
+	put_page(b.pcmd);
+	put_page(b.contents);
 
-err_backing:
 	return ret;
 }
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v3 17/17] x86/sgx: Fix pages in the BLOCKED state ending up to the free pool
  2019-09-16 10:17 [PATCH v3 00/17] Fixes and updates for v23 Jarkko Sakkinen
                   ` (15 preceding siblings ...)
  2019-09-16 10:18 ` [PATCH v3 16/17] x86/sgx: Introduce sgx_encl_get_backing() Jarkko Sakkinen
@ 2019-09-16 10:18 ` Jarkko Sakkinen
  2019-09-17 23:34   ` Sean Christopherson
  16 siblings, 1 reply; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-16 10:18 UTC (permalink / raw)
  To: linux-sgx
  Cc: Jarkko Sakkinen, Sean Christopherson, Shay Katz-zamir, Serge Ayoun

A legitimately BLOCKED page can end up in the free pool: if pinning the
backing storage fails, we interpret that as an EWB failure and simply put
the page into the free pool. This corrupts the EPC page allocator.

Fix the bug by pinning the backing storage when picking the victim pages.
A clean rollback can still be done when the memory allocation fails, as
the pages can still be returned to the enclave.

In effect, this removes all failure cases from sgx_encl_ewb() other than
an EPCM conflict when the host has gone through a sleep cycle. In that
case putting a page back into the free pool is perfectly fine because it
is uninitialized.
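
Condensed, the victim selection now pins the backing storage up front
(sketch of the flow in the diff below):

	ret = sgx_encl_get_backing(encl_page->encl,
				   SGX_ENCL_PAGE_INDEX(encl_page),
				   &backing[i]);
	if (ret)
		goto skip;	/* return the page to the active list */

	mutex_lock(&encl_page->encl->lock);
	encl_page->desc |= SGX_ENCL_PAGE_RECLAIMED;
	mutex_unlock(&encl_page->encl->lock);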

Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
Cc: Serge Ayoun <serge.ayoun@intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
---
 arch/x86/kernel/cpu/sgx/reclaim.c | 95 ++++++++++++++++++-------------
 1 file changed, 57 insertions(+), 38 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
index 7d628a1388e2..d6e580e55456 100644
--- a/arch/x86/kernel/cpu/sgx/reclaim.c
+++ b/arch/x86/kernel/cpu/sgx/reclaim.c
@@ -206,32 +206,24 @@ static void sgx_reclaimer_block(struct sgx_epc_page *epc_page)
 
 static int __sgx_encl_ewb(struct sgx_encl *encl, struct sgx_epc_page *epc_page,
 			  struct sgx_va_page *va_page, unsigned int va_offset,
-			  unsigned int page_index)
+			  struct sgx_backing *backing)
 {
 	struct sgx_pageinfo pginfo;
-	struct sgx_backing b;
 	int ret;
 
-	ret = sgx_encl_get_backing(encl, page_index, &b);
-	if (ret)
-		return ret;
-
 	pginfo.addr = 0;
-	pginfo.contents = (unsigned long)kmap_atomic(b.contents);
-	pginfo.metadata = (unsigned long)kmap_atomic(b.pcmd) + b.pcmd_offset;
 	pginfo.secs = 0;
+
+	pginfo.contents = (unsigned long)kmap_atomic(backing->contents);
+	pginfo.metadata = (unsigned long)kmap_atomic(backing->pcmd) +
+			  backing->pcmd_offset;
+
 	ret = __ewb(&pginfo, sgx_epc_addr(epc_page),
 		    sgx_epc_addr(va_page->epc_page) + va_offset);
-	kunmap_atomic((void *)(unsigned long)(pginfo.metadata - b.pcmd_offset));
-	kunmap_atomic((void *)(unsigned long)pginfo.contents);
 
-	if (!ret) {
-		set_page_dirty(b.pcmd);
-		set_page_dirty(b.contents);
-	}
-
-	put_page(b.pcmd);
-	put_page(b.contents);
+	kunmap_atomic((void *)(unsigned long)(pginfo.metadata -
+					      backing->pcmd_offset));
+	kunmap_atomic((void *)(unsigned long)pginfo.contents);
 
 	return ret;
 }
@@ -265,7 +257,7 @@ static const cpumask_t *sgx_encl_ewb_cpumask(struct sgx_encl *encl)
 }
 
 static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
-			 unsigned int page_index)
+			 struct sgx_backing *backing)
 {
 	struct sgx_encl_page *encl_page = epc_page->owner;
 	struct sgx_encl *encl = encl_page->encl;
@@ -281,8 +273,7 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
 	if (sgx_va_page_full(va_page))
 		list_move_tail(&va_page->list, &encl->va_pages);
 
-	ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset,
-			     page_index);
+	ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset, backing);
 	if (ret == SGX_NOT_TRACKED) {
 		ret = __etrack(sgx_epc_addr(encl->secs.epc_page));
 		if (ret) {
@@ -292,7 +283,7 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
 		}
 
 		ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset,
-				     page_index);
+				     backing);
 		if (ret == SGX_NOT_TRACKED) {
 			/*
 			 * Slow path, send IPIs to kick cpus out of the
@@ -304,7 +295,7 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
 			on_each_cpu_mask(sgx_encl_ewb_cpumask(encl),
 					 sgx_ipi_cb, NULL, 1);
 			ret = __sgx_encl_ewb(encl, epc_page, va_page,
-					     va_offset, page_index);
+					     va_offset, backing);
 		}
 	}
 
@@ -314,15 +305,20 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
 
 		sgx_encl_destroy(encl);
 	} else {
+		set_page_dirty(backing->pcmd);
+		set_page_dirty(backing->contents);
+
 		encl_page->desc |= va_offset;
 		encl_page->va_page = va_page;
 	}
 }
 
-static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
+static void sgx_reclaimer_write(struct sgx_epc_page *epc_page,
+				struct sgx_backing *backing)
 {
 	struct sgx_encl_page *encl_page = epc_page->owner;
 	struct sgx_encl *encl = encl_page->encl;
+	struct sgx_backing secs_backing;
 	int ret;
 
 	mutex_lock(&encl->lock);
@@ -331,7 +327,7 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
 		ret = __eremove(sgx_epc_addr(epc_page));
 		WARN(ret, "EREMOVE returned %d\n", ret);
 	} else {
-		sgx_encl_ewb(epc_page, SGX_ENCL_PAGE_INDEX(encl_page));
+		sgx_encl_ewb(epc_page, backing);
 	}
 
 	encl_page->epc_page = NULL;
@@ -340,10 +336,17 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
 	if (!encl->secs_child_cnt &&
 	    (atomic_read(&encl->flags) &
 	     (SGX_ENCL_DEAD | SGX_ENCL_INITIALIZED))) {
-		sgx_encl_ewb(encl->secs.epc_page, PFN_DOWN(encl->size));
-		sgx_free_page(encl->secs.epc_page);
+		ret = sgx_encl_get_backing(encl, PFN_DOWN(encl->size),
+					   &secs_backing);
+		if (!ret) {
+			sgx_encl_ewb(encl->secs.epc_page, &secs_backing);
+			sgx_free_page(encl->secs.epc_page);
+
+			encl->secs.epc_page = NULL;
 
-		encl->secs.epc_page = NULL;
+			put_page(secs_backing.pcmd);
+			put_page(secs_backing.contents);
+		}
 	}
 
 	mutex_unlock(&encl->lock);
@@ -351,17 +354,21 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
 
 /**
  * sgx_reclaim_pages() - Reclaim EPC pages from the consumers
- * Takes a fixed chunk of pages from the global list of consumed EPC pages and
- * tries to swap them. Only the pages that are either being freed by the
- * consumer or actively used are skipped.
+ *
+ * Take a fixed number of pages from the head of the active page pool and
+ * reclaim them to the enclave's private shmem files. Skip the pages, which
+ * have been accessed since the last scan. Move those pages to the tail of
+ * active page pool so that the pages get scanned in LRU like fashion.
  */
 void sgx_reclaim_pages(void)
 {
-	struct sgx_epc_page *chunk[SGX_NR_TO_SCAN + 1];
+	struct sgx_epc_page *chunk[SGX_NR_TO_SCAN];
+	struct sgx_backing backing[SGX_NR_TO_SCAN];
 	struct sgx_epc_section *section;
 	struct sgx_encl_page *encl_page;
 	struct sgx_epc_page *epc_page;
 	int cnt = 0;
+	int ret;
 	int i;
 
 	spin_lock(&sgx_active_page_list_lock);
@@ -388,13 +395,21 @@ void sgx_reclaim_pages(void)
 		epc_page = chunk[i];
 		encl_page = epc_page->owner;
 
-		if (sgx_can_reclaim(epc_page)) {
-			mutex_lock(&encl_page->encl->lock);
-			encl_page->desc |= SGX_ENCL_PAGE_RECLAIMED;
-			mutex_unlock(&encl_page->encl->lock);
-			continue;
-		}
+		if (!sgx_can_reclaim(epc_page))
+			goto skip;
 
+		ret = sgx_encl_get_backing(encl_page->encl,
+					   SGX_ENCL_PAGE_INDEX(encl_page),
+					   &backing[i]);
+		if (ret)
+			goto skip;
+
+		mutex_lock(&encl_page->encl->lock);
+		encl_page->desc |= SGX_ENCL_PAGE_RECLAIMED;
+		mutex_unlock(&encl_page->encl->lock);
+		continue;
+
+skip:
 		kref_put(&encl_page->encl->refcount, sgx_encl_release);
 
 		spin_lock(&sgx_active_page_list_lock);
@@ -416,7 +431,11 @@ void sgx_reclaim_pages(void)
 			continue;
 
 		encl_page = epc_page->owner;
-		sgx_reclaimer_write(epc_page);
+		sgx_reclaimer_write(epc_page, &backing[i]);
+
+		put_page(backing->pcmd);
+		put_page(backing->contents);
+
 		kref_put(&encl_page->encl->refcount, sgx_encl_release);
 		epc_page->desc &= ~SGX_EPC_PAGE_RECLAIMABLE;
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 14/17] x86/sgx: Replace section->free_cnt with a global sgx_nr_free_pages
  2019-09-16 10:18 ` [PATCH v3 14/17] x86/sgx: Replace section->free_cnt with a global sgx_nr_free_pages Jarkko Sakkinen
@ 2019-09-17 22:50   ` Sean Christopherson
  2019-09-18  4:07     ` Jarkko Sakkinen
  0 siblings, 1 reply; 37+ messages in thread
From: Sean Christopherson @ 2019-09-17 22:50 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Shay Katz-zamir, Serge Ayoun

On Mon, Sep 16, 2019 at 01:18:00PM +0300, Jarkko Sakkinen wrote:
> Replace section-specific counters with a single global counter for free
> pages. In effect, remove sgx_calc_free_cnt().
> 
> Cc: Sean Christopherson <sean.j.christopherson@intel.com>
> Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
> Cc: Serge Ayoun <serge.ayoun@intel.com>
> Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
> ---
>  arch/x86/kernel/cpu/sgx/main.c    | 11 ++++++-----
>  arch/x86/kernel/cpu/sgx/reclaim.c | 19 ++-----------------
>  arch/x86/kernel/cpu/sgx/sgx.h     |  3 +--
>  3 files changed, 9 insertions(+), 24 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
> index 4c03e5f33414..f37d28023b97 100644
> --- a/arch/x86/kernel/cpu/sgx/main.c
> +++ b/arch/x86/kernel/cpu/sgx/main.c
> @@ -14,19 +14,20 @@
>  
>  struct sgx_epc_section sgx_epc_sections[SGX_MAX_EPC_SECTIONS];
>  int sgx_nr_epc_sections;
> +unsigned long sgx_nr_free_pages;
>  
>  static struct sgx_epc_page *sgx_section_try_take_page(
>  	struct sgx_epc_section *section)
>  {
>  	struct sgx_epc_page *page;
>  
> -	if (!section->free_cnt)
> +	if (list_empty(&section->page_list))
>  		return NULL;
>  
>  	page = list_first_entry(&section->page_list, struct sgx_epc_page,
>  				list);
>  	list_del_init(&page->list);
> -	section->free_cnt--;
> +	sgx_nr_free_pages--;
>  	return page;
>  }
>  
> @@ -90,7 +91,7 @@ struct sgx_epc_page *sgx_alloc_page(void *owner, bool reclaim)
>  		schedule();
>  	}
>  
> -	if (sgx_calc_free_cnt() < SGX_NR_LOW_PAGES)
> +	if (sgx_nr_free_pages < SGX_NR_LOW_PAGES)
>  		wake_up(&ksgxswapd_waitq);
>  
>  	return entry;
> @@ -136,7 +137,7 @@ int __sgx_free_page(struct sgx_epc_page *page)
>  
>  	spin_lock(&section->lock);
>  	list_add_tail(&page->list, &section->page_list);
> -	section->free_cnt++;
> +	sgx_nr_free_pages++;

This isn't safe when there are multiple EPC sections as each section has
its own spinlock.
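
If the count ever needs to be exact, one option would be an atomic counter
so that it does not rely on any single section's lock, e.g. (untested
sketch, not part of this series):

	static atomic_long_t sgx_nr_free_pages = ATOMIC_LONG_INIT(0);

	/* when adding a page to a section's free list */
	atomic_long_inc(&sgx_nr_free_pages);

	/* when taking a page from a section's free list */
	atomic_long_dec(&sgx_nr_free_pages);

	/* in sgx_should_reclaim() */
	return atomic_long_read(&sgx_nr_free_pages) < SGX_NR_HIGH_PAGES &&
	       !list_empty(&sgx_active_page_list);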

>  	spin_unlock(&section->lock);
>  
>  	return 0;
> @@ -202,7 +203,7 @@ static __init int sgx_init_epc_section(u64 addr, u64 size, unsigned long index,
>  			goto out;
>  		page->desc = (addr + (i << PAGE_SHIFT)) | index;
>  		list_add_tail(&page->list, &section->unsanitized_page_list);
> -		section->free_cnt++;
> +		sgx_nr_free_pages++;
>  	}
>  
>  	return 0;
> diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
> index cc3155b61513..2e04a923d8dc 100644
> --- a/arch/x86/kernel/cpu/sgx/reclaim.c
> +++ b/arch/x86/kernel/cpu/sgx/reclaim.c
> @@ -64,7 +64,7 @@ static void sgx_sanitize_section(struct sgx_epc_section *section)
>  
>  static inline bool sgx_should_reclaim(void)
>  {
> -	return sgx_calc_free_cnt() < SGX_NR_HIGH_PAGES &&
> +	return sgx_nr_free_pages < SGX_NR_HIGH_PAGES &&
>  	       !list_empty(&sgx_active_page_list);
>  }
>  
> @@ -432,7 +432,6 @@ void sgx_reclaim_pages(void)
>  		if (!epc_page)
>  			continue;
>  
> -
>  		encl_page = epc_page->owner;
>  		sgx_reclaimer_write(epc_page);
>  		kref_put(&encl_page->encl->refcount, sgx_encl_release);
> @@ -441,21 +440,7 @@ void sgx_reclaim_pages(void)
>  		section = sgx_epc_section(epc_page);
>  		spin_lock(&section->lock);
>  		list_add_tail(&epc_page->list, &section->page_list);
> -		section->free_cnt++;
> +		sgx_nr_free_pages++;
>  		spin_unlock(&section->lock);
>  	}
>  }
> -
> -unsigned long sgx_calc_free_cnt(void)
> -{
> -	struct sgx_epc_section *section;
> -	unsigned long free_cnt = 0;
> -	int i;
> -
> -	for (i = 0; i < sgx_nr_epc_sections; i++) {
> -		section = &sgx_epc_sections[i];
> -		free_cnt += section->free_cnt;
> -	}
> -
> -	return free_cnt;
> -}
> diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
> index 9b08690262b5..56d0bde3f4d8 100644
> --- a/arch/x86/kernel/cpu/sgx/sgx.h
> +++ b/arch/x86/kernel/cpu/sgx/sgx.h
> @@ -30,7 +30,6 @@ struct sgx_epc_section {
>  	void *va;
>  	struct list_head page_list;
>  	struct list_head unsanitized_page_list;
> -	unsigned long free_cnt;
>  	spinlock_t lock;
>  };
>  
> @@ -72,6 +71,7 @@ static inline void *sgx_epc_addr(struct sgx_epc_page *page)
>  #define SGX_NR_HIGH_PAGES	64
>  
>  extern int sgx_nr_epc_sections;
> +extern unsigned long sgx_nr_free_pages;
>  extern struct task_struct *ksgxswapd_tsk;
>  extern struct wait_queue_head(ksgxswapd_waitq);
>  extern struct list_head sgx_active_page_list;
> @@ -79,7 +79,6 @@ extern spinlock_t sgx_active_page_list_lock;
>  
>  int sgx_page_reclaimer_init(void);
>  void sgx_mark_page_reclaimable(struct sgx_epc_page *page);
> -unsigned long sgx_calc_free_cnt(void);
>  void sgx_reclaim_pages(void);
>  
>  struct sgx_epc_page *sgx_alloc_page(void *owner, bool reclaim);
> -- 
> 2.20.1
> 

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 16/17] x86/sgx: Introduce sgx_encl_get_backing()
  2019-09-16 10:18 ` [PATCH v3 16/17] x86/sgx: Introduce sgx_encl_get_backing() Jarkko Sakkinen
@ 2019-09-17 23:05   ` Sean Christopherson
  2019-09-18  4:10     ` Jarkko Sakkinen
  0 siblings, 1 reply; 37+ messages in thread
From: Sean Christopherson @ 2019-09-17 23:05 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Shay Katz-zamir, Serge Ayoun

On Mon, Sep 16, 2019 at 01:18:02PM +0300, Jarkko Sakkinen wrote:
> Since __sgx_encl_eldu() and __sgx_encl_ewb() pin the backing storage in the
> same way, consolidate this to sgx_encl_get_backing() replacing
> sgx_encl_get_backing_page().
> 
> Cc: Sean Christopherson <sean.j.christopherson@intel.com>
> Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
> Cc: Serge Ayoun <serge.ayoun@intel.com>
> Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
> ---
>  arch/x86/kernel/cpu/sgx/encl.c    | 86 +++++++++++++++++++------------
>  arch/x86/kernel/cpu/sgx/encl.h    | 20 +++----
>  arch/x86/kernel/cpu/sgx/reclaim.c | 39 ++++----------
>  3 files changed, 71 insertions(+), 74 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
> index 66762b9c1517..a71414ce05b1 100644
> --- a/arch/x86/kernel/cpu/sgx/encl.c
> +++ b/arch/x86/kernel/cpu/sgx/encl.c
> @@ -17,11 +17,8 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
>  	unsigned long va_offset = SGX_ENCL_PAGE_VA_OFFSET(encl_page);
>  	struct sgx_encl *encl = encl_page->encl;
>  	struct sgx_pageinfo pginfo;
> -	unsigned long pcmd_offset;
> -	struct page *backing;
> +	struct sgx_backing b;
>  	pgoff_t page_index;
> -	pgoff_t pcmd_index;
> -	struct page *pcmd;
>  	int ret;
>  
>  	if (secs_page)
> @@ -29,24 +26,14 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
>  	else
>  		page_index = PFN_DOWN(encl->size);
>  
> -	pcmd_index = sgx_pcmd_index(encl, page_index);
> -	pcmd_offset = sgx_pcmd_offset(page_index);
> -
> -	backing = sgx_encl_get_backing_page(encl, page_index);
> -	if (IS_ERR(backing)) {
> -		ret = PTR_ERR(backing);
> -		goto err_backing;
> -	}
> -
> -	pcmd = sgx_encl_get_backing_page(encl, pcmd_index);
> -	if (IS_ERR(pcmd)) {
> -		ret = PTR_ERR(pcmd);
> -		goto err_pcmd;
> -	}
> +	ret = sgx_encl_get_backing(encl, page_index, &b);
> +	if (ret)
> +		return ret;
>  
>  	pginfo.addr = SGX_ENCL_PAGE_ADDR(encl_page);
> -	pginfo.contents = (unsigned long)kmap_atomic(backing);
> -	pginfo.metadata = (unsigned long)kmap_atomic(pcmd) + pcmd_offset;
> +	pginfo.contents = (unsigned long)kmap_atomic(b.contents);
> +	pginfo.metadata = (unsigned long)kmap_atomic(b.pcmd) +
> +			  b.pcmd_offset;
>  
>  	if (secs_page)
>  		pginfo.secs = (u64)sgx_epc_addr(secs_page);
> @@ -62,15 +49,12 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
>  		ret = -EFAULT;
>  	}
>  
> -	kunmap_atomic((void *)(unsigned long)(pginfo.metadata - pcmd_offset));
> +	kunmap_atomic((void *)(unsigned long)(pginfo.metadata - b.pcmd_offset));
>  	kunmap_atomic((void *)(unsigned long)pginfo.contents);
>  
> -	put_page(pcmd);
> +	put_page(b.pcmd);
> +	put_page(b.contents);

If we're going to have sgx_encl_get_backing(), it should be paired with
sgx_encl_put_backing(), e.g. it's not immediately obvious who's doing the
"get page" corresponding to the put_page() calls.

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 12/17] x86/sgx: Open code sgx_reclaimer_get() and sgx_reclaimer_put()
  2019-09-16 10:17 ` [PATCH v3 12/17] x86/sgx: Open code sgx_reclaimer_get() and sgx_reclaimer_put() Jarkko Sakkinen
@ 2019-09-17 23:07   ` Sean Christopherson
  2019-09-18  4:12     ` Jarkko Sakkinen
  0 siblings, 1 reply; 37+ messages in thread
From: Sean Christopherson @ 2019-09-17 23:07 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Shay Katz-zamir, Serge Ayoun

On Mon, Sep 16, 2019 at 01:17:58PM +0300, Jarkko Sakkinen wrote:
> Open code sgx_reclaimer_get() and sgx_reclaimer_put(). They are legacy
> from the callback interface. Wrapping a function call inside a function
> does not make sense.

I actually like the functions, IMO they make the loops more readable.
They should be static, and probably static inline, but I doubt either
annotation affects the compiler output.
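
I.e. keep them as something like (sketch):

	static inline bool sgx_reclaimer_get(struct sgx_epc_page *epc_page)
	{
		struct sgx_encl_page *encl_page = epc_page->owner;

		return kref_get_unless_zero(&encl_page->encl->refcount) != 0;
	}

	static inline void sgx_reclaimer_put(struct sgx_epc_page *epc_page)
	{
		struct sgx_encl_page *encl_page = epc_page->owner;

		kref_put(&encl_page->encl->refcount, sgx_encl_release);
	}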

> Cc: Sean Christopherson <sean.j.christopherson@intel.com>
> Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
> Cc: Serge Ayoun <serge.ayoun@intel.com>
> Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
> ---
>  arch/x86/kernel/cpu/sgx/reclaim.c | 47 ++++++++++++-------------------
>  1 file changed, 18 insertions(+), 29 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
> index c2a85db68307..5a4d44dd02d7 100644
> --- a/arch/x86/kernel/cpu/sgx/reclaim.c
> +++ b/arch/x86/kernel/cpu/sgx/reclaim.c
> @@ -121,22 +121,6 @@ void sgx_mark_page_reclaimable(struct sgx_epc_page *page)
>  	spin_unlock(&sgx_active_page_list_lock);
>  }
>  
> -bool sgx_reclaimer_get(struct sgx_epc_page *epc_page)
> -{
> -	struct sgx_encl_page *encl_page = epc_page->owner;
> -	struct sgx_encl *encl = encl_page->encl;
> -
> -	return kref_get_unless_zero(&encl->refcount) != 0;
> -}
> -
> -void sgx_reclaimer_put(struct sgx_epc_page *epc_page)
> -{
> -	struct sgx_encl_page *encl_page = epc_page->owner;
> -	struct sgx_encl *encl = encl_page->encl;
> -
> -	kref_put(&encl->refcount, sgx_encl_release);
> -}
> -
>  static bool sgx_reclaimer_evict(struct sgx_epc_page *epc_page)
>  {
>  	struct sgx_encl_page *page = epc_page->owner;
> @@ -392,6 +376,7 @@ void sgx_reclaim_pages(void)
>  {
>  	struct sgx_epc_page *chunk[SGX_NR_TO_SCAN + 1];
>  	struct sgx_epc_section *section;
> +	struct sgx_encl_page *encl_page;
>  	struct sgx_epc_page *epc_page;
>  	int cnt = 0;
>  	int i;
> @@ -404,8 +389,9 @@ void sgx_reclaim_pages(void)
>  		epc_page = list_first_entry(&sgx_active_page_list,
>  					    struct sgx_epc_page, list);
>  		list_del_init(&epc_page->list);
> +		encl_page = epc_page->owner;
>  
> -		if (sgx_reclaimer_get(epc_page))
> +		if (kref_get_unless_zero(&encl_page->encl->refcount) != 0)
>  			chunk[cnt++] = epc_page;
>  		else
>  			/* The owner is freeing the page. No need to add the
> @@ -417,10 +403,12 @@ void sgx_reclaim_pages(void)
>  
>  	for (i = 0; i < cnt; i++) {
>  		epc_page = chunk[i];
> +		encl_page = epc_page->owner;
> +
>  		if (sgx_reclaimer_evict(epc_page))
>  			continue;
>  
> -		sgx_reclaimer_put(epc_page);
> +		kref_put(&encl_page->encl->refcount, sgx_encl_release);
>  
>  		spin_lock(&sgx_active_page_list_lock);
>  		list_add_tail(&epc_page->list, &sgx_active_page_list);
> @@ -437,19 +425,20 @@ void sgx_reclaim_pages(void)
>  
>  	for (i = 0; i < cnt; i++) {
>  		epc_page = chunk[i];
> -		if (epc_page) {
> -			sgx_reclaimer_write(epc_page);
> -			sgx_reclaimer_put(epc_page);
> -			epc_page->desc &= ~SGX_EPC_PAGE_RECLAIMABLE;
> +		if (!epc_page)
> +			continue;
>  
> -			section = sgx_epc_section(epc_page);
>  
> -			spin_lock(&section->lock);
> -			list_add_tail(&epc_page->list,
> -				      &section->page_list);
> -			section->free_cnt++;
> -			spin_unlock(&section->lock);
> -		}
> +		encl_page = epc_page->owner;
> +		sgx_reclaimer_write(epc_page);
> +		kref_put(&encl_page->encl->refcount, sgx_encl_release);
> +		epc_page->desc &= ~SGX_EPC_PAGE_RECLAIMABLE;
> +
> +		section = sgx_epc_section(epc_page);
> +		spin_lock(&section->lock);
> +		list_add_tail(&epc_page->list, &section->page_list);
> +		section->free_cnt++;
> +		spin_unlock(&section->lock);
>  	}
>  }
>  
> -- 
> 2.20.1
> 

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 09/17] x86/sgx: Move SGX_ENCL_DEAD check to sgx_reclaimer_write()
  2019-09-16 10:17 ` [PATCH v3 09/17] x86/sgx: Move SGX_ENCL_DEAD check to sgx_reclaimer_write() Jarkko Sakkinen
@ 2019-09-17 23:13   ` Sean Christopherson
  2019-09-18  4:15     ` Jarkko Sakkinen
  2019-09-17 23:21   ` Sean Christopherson
  1 sibling, 1 reply; 37+ messages in thread
From: Sean Christopherson @ 2019-09-17 23:13 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Shay Katz-zamir, Serge Ayoun

On Mon, Sep 16, 2019 at 01:17:55PM +0300, Jarkko Sakkinen wrote:
> Do enclave state checks only in sgx_reclaimer_write(). Checking the
> enclave state is not part of the sgx_encl_ewb() flow. The check is done
> differently for SECS and for addressable pages.
> 
> Cc: Sean Christopherson <sean.j.christopherson@intel.com>
> Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
> Cc: Serge Ayoun <serge.ayoun@intel.com>
> Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
> ---
>  arch/x86/kernel/cpu/sgx/reclaim.c | 69 +++++++++++++++----------------
>  1 file changed, 34 insertions(+), 35 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
> index e758a06919e4..a3e36f959c74 100644
> --- a/arch/x86/kernel/cpu/sgx/reclaim.c
> +++ b/arch/x86/kernel/cpu/sgx/reclaim.c
> @@ -308,47 +308,45 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
>  
>  	encl_page->desc &= ~SGX_ENCL_PAGE_RECLAIMED;
>  
> -	if (!(atomic_read(&encl->flags) & SGX_ENCL_DEAD)) {
> -		va_page = list_first_entry(&encl->va_pages, struct sgx_va_page,
> -					   list);
> -		va_offset = sgx_alloc_va_slot(va_page);
> -		if (sgx_va_page_full(va_page))
> -			list_move_tail(&va_page->list, &encl->va_pages);
> +	va_page = list_first_entry(&encl->va_pages, struct sgx_va_page,
> +				   list);
> +	va_offset = sgx_alloc_va_slot(va_page);
> +	if (sgx_va_page_full(va_page))
> +		list_move_tail(&va_page->list, &encl->va_pages);
> +
> +	ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset,
> +			     page_index);
> +	if (ret == SGX_NOT_TRACKED) {
> +		ret = __etrack(sgx_epc_addr(encl->secs.epc_page));
> +		if (ret) {
> +			if (encls_failed(ret) ||
> +			    encls_returned_code(ret))
> +				ENCLS_WARN(ret, "ETRACK");
> +		}
>  
>  		ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset,
>  				     page_index);
>  		if (ret == SGX_NOT_TRACKED) {
> -			ret = __etrack(sgx_epc_addr(encl->secs.epc_page));
> -			if (ret) {
> -				if (encls_failed(ret) ||
> -				    encls_returned_code(ret))
> -					ENCLS_WARN(ret, "ETRACK");
> -			}
> -
> -			ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset,
> -					     page_index);
> -			if (ret == SGX_NOT_TRACKED) {
> -				/*
> -				 * Slow path, send IPIs to kick cpus out of the
> -				 * enclave.  Note, it's imperative that the cpu
> -				 * mask is generated *after* ETRACK, else we'll
> -				 * miss cpus that entered the enclave between
> -				 * generating the mask and incrementing epoch.
> -				 */
> -				on_each_cpu_mask(sgx_encl_ewb_cpumask(encl),
> -						 sgx_ipi_cb, NULL, 1);
> -				ret = __sgx_encl_ewb(encl, epc_page, va_page,
> -						     va_offset, page_index);
> -			}
> +			/*
> +			 * Slow path, send IPIs to kick cpus out of the
> +			 * enclave.  Note, it's imperative that the cpu
> +			 * mask is generated *after* ETRACK, else we'll
> +			 * miss cpus that entered the enclave between
> +			 * generating the mask and incrementing epoch.
> +			 */
> +			on_each_cpu_mask(sgx_encl_ewb_cpumask(encl),
> +					 sgx_ipi_cb, NULL, 1);
> +			ret = __sgx_encl_ewb(encl, epc_page, va_page,
> +					     va_offset, page_index);
>  		}
> +	}
>  
> -		if (ret)
> -			if (encls_failed(ret) || encls_returned_code(ret))
> -				ENCLS_WARN(ret, "EWB");
> +	if (ret)
> +		if (encls_failed(ret) || encls_returned_code(ret))
> +			ENCLS_WARN(ret, "EWB");
>  
> -		encl_page->desc |= va_offset;
> -		encl_page->va_page = va_page;
> -	}
> +	encl_page->desc |= va_offset;
> +	encl_page->va_page = va_page;
>  }
>  
>  static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
> @@ -359,10 +357,11 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
>  
>  	mutex_lock(&encl->lock);
>  
> -	sgx_encl_ewb(epc_page, SGX_ENCL_PAGE_INDEX(encl_page));
>  	if (atomic_read(&encl->flags) & SGX_ENCL_DEAD) {
>  		ret = __eremove(sgx_epc_addr(epc_page));
>  		WARN(ret, "EREMOVE returned %d\n", ret);

SGX_ENCL_PAGE_RECLAIMED is left dangling on encl_page in the SGX_ENCL_DEAD
case.  It doesn't affect functionality, but it could complicate debug if
anything goes awry.
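
E.g. the dead-enclave branch could clear the flag explicitly (untested
sketch):

	if (atomic_read(&encl->flags) & SGX_ENCL_DEAD) {
		/* Keep the descriptor consistent even for dead enclaves. */
		encl_page->desc &= ~SGX_ENCL_PAGE_RECLAIMED;
		ret = __eremove(sgx_epc_addr(epc_page));
		WARN(ret, "EREMOVE returned %d\n", ret);
	} else {
		sgx_encl_ewb(epc_page, SGX_ENCL_PAGE_INDEX(encl_page));
	}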

> +	} else {
> +		sgx_encl_ewb(epc_page, SGX_ENCL_PAGE_INDEX(encl_page));
>  	}
>  
>  	encl_page->epc_page = NULL;
> -- 
> 2.20.1
> 

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 09/17] x86/sgx: Move SGX_ENCL_DEAD check to sgx_reclaimer_write()
  2019-09-16 10:17 ` [PATCH v3 09/17] x86/sgx: Move SGX_ENCL_DEAD check to sgx_reclaimer_write() Jarkko Sakkinen
  2019-09-17 23:13   ` Sean Christopherson
@ 2019-09-17 23:21   ` Sean Christopherson
  2019-09-18  4:16     ` Jarkko Sakkinen
  1 sibling, 1 reply; 37+ messages in thread
From: Sean Christopherson @ 2019-09-17 23:21 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Shay Katz-zamir, Serge Ayoun

On Mon, Sep 16, 2019 at 01:17:55PM +0300, Jarkko Sakkinen wrote:
> Do enclave state checks only in sgx_reclaimer_write(). Checking the
> enclave state is not part of the sgx_encl_ewb() flow. The check is done
> differently for SECS and for addressable pages.
> 
> Cc: Sean Christopherson <sean.j.christopherson@intel.com>
> Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
> Cc: Serge Ayoun <serge.ayoun@intel.com>
> Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
> ---
>  arch/x86/kernel/cpu/sgx/reclaim.c | 69 +++++++++++++++----------------
>  1 file changed, 34 insertions(+), 35 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
> index e758a06919e4..a3e36f959c74 100644
> --- a/arch/x86/kernel/cpu/sgx/reclaim.c
> +++ b/arch/x86/kernel/cpu/sgx/reclaim.c
> @@ -308,47 +308,45 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
>  
>  	encl_page->desc &= ~SGX_ENCL_PAGE_RECLAIMED;
>  
> -	if (!(atomic_read(&encl->flags) & SGX_ENCL_DEAD)) {
> -		va_page = list_first_entry(&encl->va_pages, struct sgx_va_page,
> -					   list);
> -		va_offset = sgx_alloc_va_slot(va_page);
> -		if (sgx_va_page_full(va_page))
> -			list_move_tail(&va_page->list, &encl->va_pages);
> +	va_page = list_first_entry(&encl->va_pages, struct sgx_va_page,
> +				   list);
> +	va_offset = sgx_alloc_va_slot(va_page);
> +	if (sgx_va_page_full(va_page))
> +		list_move_tail(&va_page->list, &encl->va_pages);
> +
> +	ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset,
> +			     page_index);
> +	if (ret == SGX_NOT_TRACKED) {
> +		ret = __etrack(sgx_epc_addr(encl->secs.epc_page));
> +		if (ret) {
> +			if (encls_failed(ret) ||
> +			    encls_returned_code(ret))
> +				ENCLS_WARN(ret, "ETRACK");
> +		}
>  
>  		ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset,
>  				     page_index);
>  		if (ret == SGX_NOT_TRACKED) {
> -			ret = __etrack(sgx_epc_addr(encl->secs.epc_page));
> -			if (ret) {
> -				if (encls_failed(ret) ||
> -				    encls_returned_code(ret))
> -					ENCLS_WARN(ret, "ETRACK");
> -			}
> -
> -			ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset,
> -					     page_index);
> -			if (ret == SGX_NOT_TRACKED) {
> -				/*
> -				 * Slow path, send IPIs to kick cpus out of the
> -				 * enclave.  Note, it's imperative that the cpu
> -				 * mask is generated *after* ETRACK, else we'll
> -				 * miss cpus that entered the enclave between
> -				 * generating the mask and incrementing epoch.
> -				 */
> -				on_each_cpu_mask(sgx_encl_ewb_cpumask(encl),
> -						 sgx_ipi_cb, NULL, 1);
> -				ret = __sgx_encl_ewb(encl, epc_page, va_page,
> -						     va_offset, page_index);
> -			}
> +			/*
> +			 * Slow path, send IPIs to kick cpus out of the
> +			 * enclave.  Note, it's imperative that the cpu
> +			 * mask is generated *after* ETRACK, else we'll
> +			 * miss cpus that entered the enclave between
> +			 * generating the mask and incrementing epoch.
> +			 */
> +			on_each_cpu_mask(sgx_encl_ewb_cpumask(encl),
> +					 sgx_ipi_cb, NULL, 1);
> +			ret = __sgx_encl_ewb(encl, epc_page, va_page,
> +					     va_offset, page_index);
>  		}
> +	}
>  
> -		if (ret)
> -			if (encls_failed(ret) || encls_returned_code(ret))
> -				ENCLS_WARN(ret, "EWB");
> +	if (ret)
> +		if (encls_failed(ret) || encls_returned_code(ret))
> +			ENCLS_WARN(ret, "EWB");
>  
> -		encl_page->desc |= va_offset;
> -		encl_page->va_page = va_page;
> -	}
> +	encl_page->desc |= va_offset;
> +	encl_page->va_page = va_page;
>  }
>  
>  static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
> @@ -359,10 +357,11 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
>  
>  	mutex_lock(&encl->lock);
>  
> -	sgx_encl_ewb(epc_page, SGX_ENCL_PAGE_INDEX(encl_page));
>  	if (atomic_read(&encl->flags) & SGX_ENCL_DEAD) {
>  		ret = __eremove(sgx_epc_addr(epc_page));
>  		WARN(ret, "EREMOVE returned %d\n", ret);
> +	} else {
> +		sgx_encl_ewb(epc_page, SGX_ENCL_PAGE_INDEX(encl_page));

The sgx_encl_ewb() for SECS also needs to be skipped, otherwise we'll
attempt EWB on a dead enclave.  If the enclave is dead we can simply
free the SECS.
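
Roughly (untested sketch):

	if (!encl->secs_child_cnt) {
		if (atomic_read(&encl->flags) & SGX_ENCL_DEAD) {
			/* Dead enclave: the SECS contents no longer matter. */
			sgx_free_page(encl->secs.epc_page);
			encl->secs.epc_page = NULL;
		} else if (atomic_read(&encl->flags) & SGX_ENCL_INITIALIZED) {
			sgx_encl_ewb(encl->secs.epc_page, PFN_DOWN(encl->size));
			sgx_free_page(encl->secs.epc_page);
			encl->secs.epc_page = NULL;
		}
	}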

>  	}
>  
>  	encl_page->epc_page = NULL;
> -- 
> 2.20.1
> 

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 13/17] x86/sgx: Introduce sgx_can_reclaim()
  2019-09-16 10:17 ` [PATCH v3 13/17] x86/sgx: Introduce sgx_can_reclaim() Jarkko Sakkinen
@ 2019-09-17 23:25   ` Sean Christopherson
  2019-09-25 18:28   ` Sean Christopherson
  1 sibling, 0 replies; 37+ messages in thread
From: Sean Christopherson @ 2019-09-17 23:25 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Shay Katz-zamir, Serge Ayoun

On Mon, Sep 16, 2019 at 01:17:59PM +0300, Jarkko Sakkinen wrote:
> Make sgx_reclaimer_evict() idempotent, rename it to sgx_can_reclaim(), and
> set SGX_ENCL_PAGE_RECLAIMED at the call site.
> 
> Cc: Sean Christopherson <sean.j.christopherson@intel.com>
> Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
> Cc: Serge Ayoun <serge.ayoun@intel.com>
> Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
> ---
>  arch/x86/kernel/cpu/sgx/reclaim.c | 32 +++++++++++++++++--------------
>  1 file changed, 18 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
> index 5a4d44dd02d7..cc3155b61513 100644
> --- a/arch/x86/kernel/cpu/sgx/reclaim.c
> +++ b/arch/x86/kernel/cpu/sgx/reclaim.c
> @@ -121,7 +121,19 @@ void sgx_mark_page_reclaimable(struct sgx_epc_page *page)
>  	spin_unlock(&sgx_active_page_list_lock);
>  }
>  
> -static bool sgx_reclaimer_evict(struct sgx_epc_page *epc_page)
> +/**
> + * sgx_can_reclaim() - Filter out the pages that should not be reclaimed
> + * @epc_page:	a candidate EPC page
> + *
> + * Do not reclaim this page if it has been recently accessed by any mm_struct
> + * *and* if the enclave is still alive.  No need to take the enclave's lock,
> + * worst case scenario reclaiming pages from a dead enclave is delayed slightly.
> + * A live enclave with a recently accessed page is more common and avoiding lock
> + * contention in that case is a boon to performance.
> + *
> + * Return: true if the page should be reclaimed
> + */
> +static bool sgx_can_reclaim(struct sgx_epc_page *epc_page)
>  {
>  	struct sgx_encl_page *page = epc_page->owner;
>  	struct sgx_encl *encl = page->encl;
> @@ -147,21 +159,9 @@ static bool sgx_reclaimer_evict(struct sgx_epc_page *epc_page)
>  
>  	srcu_read_unlock(&encl->srcu, idx);
>  
> -	/*
> -	 * Do not reclaim this page if it has been recently accessed by any
> -	 * mm_struct *and* if the enclave is still alive.  No need to take
> -	 * the enclave's lock, worst case scenario reclaiming pages from a
> -	 * dead enclave is delayed slightly.  A live enclave with a recently
> -	 * accessed page is more common and avoiding lock contention in that
> -	 * case is a boon to performance.
> -	 */
>  	if (!ret && !(atomic_read(&encl->flags) & SGX_ENCL_DEAD))
>  		return false;
>  
> -	mutex_lock(&encl->lock);
> -	page->desc |= SGX_ENCL_PAGE_RECLAIMED;
> -	mutex_unlock(&encl->lock);
> -
>  	return true;
>  }
>  
> @@ -405,8 +405,12 @@ void sgx_reclaim_pages(void)
>  		epc_page = chunk[i];
>  		encl_page = epc_page->owner;
>  
> -		if (sgx_reclaimer_evict(epc_page))
> +		if (sgx_can_reclaim(epc_page)) {
> +			mutex_lock(&encl_page->encl->lock);
> +			encl_page->desc |= SGX_ENCL_PAGE_RECLAIMED;

If we move setting SGX_ENCL_PAGE_RECLAIMED here, then we should also
move the clearing (currently in sgx_encl_ewb()) into the loop that
calls sgx_reclaimer_write().  That would also fix the issue of
SGX_ENCL_PAGE_RECLAIMED being left set when the enclave is dead.
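
I.e. roughly (untested sketch):

	for (i = 0; i < cnt; i++) {
		epc_page = chunk[i];
		if (!epc_page)
			continue;

		encl_page = epc_page->owner;
		sgx_reclaimer_write(epc_page);

		/* Clear the flag here rather than in sgx_encl_ewb() so that
		 * it is also cleared for dead enclaves.
		 */
		mutex_lock(&encl_page->encl->lock);
		encl_page->desc &= ~SGX_ENCL_PAGE_RECLAIMED;
		mutex_unlock(&encl_page->encl->lock);

		kref_put(&encl_page->encl->refcount, sgx_encl_release);
	}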

> +			mutex_unlock(&encl_page->encl->lock);
>  			continue;
> +		}
>  
>  		kref_put(&encl_page->encl->refcount, sgx_encl_release);
>  
> -- 
> 2.20.1
> 

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 17/17] x86/sgx: Fix pages in the BLOCKED state ending up to the free pool
  2019-09-16 10:18 ` [PATCH v3 17/17] x86/sgx: Fix pages in the BLOCKED state ending up to the free pool Jarkko Sakkinen
@ 2019-09-17 23:34   ` Sean Christopherson
  2019-09-18  4:21     ` Jarkko Sakkinen
  0 siblings, 1 reply; 37+ messages in thread
From: Sean Christopherson @ 2019-09-17 23:34 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Shay Katz-zamir, Serge Ayoun

On Mon, Sep 16, 2019 at 01:18:03PM +0300, Jarkko Sakkinen wrote:
> A legitimately BLOCKED page can end up in the free pool: if pinning the
> backing storage fails, we interpret that as an EWB failure and simply put
> the page into the free pool. This corrupts the EPC page allocator.
> 
> Fix the bug by pinning the backing storage when picking the victim pages.
> A clean rollback can still be done when the memory allocation fails, as
> the pages can still be returned to the enclave.
> 
> In effect, this removes all failure cases from sgx_encl_ewb() other than
> an EPCM conflict when the host has gone through a sleep cycle. In that
> case putting a page back into the free pool is perfectly fine because it
> is uninitialized.
> 
> Cc: Sean Christopherson <sean.j.christopherson@intel.com>
> Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
> Cc: Serge Ayoun <serge.ayoun@intel.com>
> Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
> ---
>  arch/x86/kernel/cpu/sgx/reclaim.c | 95 ++++++++++++++++++-------------
>  1 file changed, 57 insertions(+), 38 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
> index 7d628a1388e2..d6e580e55456 100644
> --- a/arch/x86/kernel/cpu/sgx/reclaim.c
> +++ b/arch/x86/kernel/cpu/sgx/reclaim.c
> @@ -206,32 +206,24 @@ static void sgx_reclaimer_block(struct sgx_epc_page *epc_page)
>  
>  static int __sgx_encl_ewb(struct sgx_encl *encl, struct sgx_epc_page *epc_page,
>  			  struct sgx_va_page *va_page, unsigned int va_offset,
> -			  unsigned int page_index)
> +			  struct sgx_backing *backing)
>  {
>  	struct sgx_pageinfo pginfo;
> -	struct sgx_backing b;
>  	int ret;
>  
> -	ret = sgx_encl_get_backing(encl, page_index, &b);
> -	if (ret)
> -		return ret;
> -
>  	pginfo.addr = 0;
> -	pginfo.contents = (unsigned long)kmap_atomic(b.contents);
> -	pginfo.metadata = (unsigned long)kmap_atomic(b.pcmd) + b.pcmd_offset;
>  	pginfo.secs = 0;
> +
> +	pginfo.contents = (unsigned long)kmap_atomic(backing->contents);
> +	pginfo.metadata = (unsigned long)kmap_atomic(backing->pcmd) +
> +			  backing->pcmd_offset;
> +
>  	ret = __ewb(&pginfo, sgx_epc_addr(epc_page),
>  		    sgx_epc_addr(va_page->epc_page) + va_offset);
> -	kunmap_atomic((void *)(unsigned long)(pginfo.metadata - b.pcmd_offset));
> -	kunmap_atomic((void *)(unsigned long)pginfo.contents);
>  
> -	if (!ret) {
> -		set_page_dirty(b.pcmd);
> -		set_page_dirty(b.contents);
> -	}
> -
> -	put_page(b.pcmd);
> -	put_page(b.contents);
> +	kunmap_atomic((void *)(unsigned long)(pginfo.metadata -
> +					      backing->pcmd_offset));
> +	kunmap_atomic((void *)(unsigned long)pginfo.contents);
>  
>  	return ret;
>  }
> @@ -265,7 +257,7 @@ static const cpumask_t *sgx_encl_ewb_cpumask(struct sgx_encl *encl)
>  }
>  
>  static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
> -			 unsigned int page_index)
> +			 struct sgx_backing *backing)
>  {
>  	struct sgx_encl_page *encl_page = epc_page->owner;
>  	struct sgx_encl *encl = encl_page->encl;
> @@ -281,8 +273,7 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
>  	if (sgx_va_page_full(va_page))
>  		list_move_tail(&va_page->list, &encl->va_pages);
>  
> -	ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset,
> -			     page_index);
> +	ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset, backing);
>  	if (ret == SGX_NOT_TRACKED) {
>  		ret = __etrack(sgx_epc_addr(encl->secs.epc_page));
>  		if (ret) {
> @@ -292,7 +283,7 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
>  		}
>  
>  		ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset,
> -				     page_index);
> +				     backing);
>  		if (ret == SGX_NOT_TRACKED) {
>  			/*
>  			 * Slow path, send IPIs to kick cpus out of the
> @@ -304,7 +295,7 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
>  			on_each_cpu_mask(sgx_encl_ewb_cpumask(encl),
>  					 sgx_ipi_cb, NULL, 1);
>  			ret = __sgx_encl_ewb(encl, epc_page, va_page,
> -					     va_offset, page_index);
> +					     va_offset, backing);
>  		}
>  	}
>  
> @@ -314,15 +305,20 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
>  
>  		sgx_encl_destroy(encl);
>  	} else {
> +		set_page_dirty(backing->pcmd);
> +		set_page_dirty(backing->contents);
> +
>  		encl_page->desc |= va_offset;
>  		encl_page->va_page = va_page;
>  	}
>  }
>  
> -static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
> +static void sgx_reclaimer_write(struct sgx_epc_page *epc_page,
> +				struct sgx_backing *backing)
>  {
>  	struct sgx_encl_page *encl_page = epc_page->owner;
>  	struct sgx_encl *encl = encl_page->encl;
> +	struct sgx_backing secs_backing;
>  	int ret;
>  
>  	mutex_lock(&encl->lock);
> @@ -331,7 +327,7 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
>  		ret = __eremove(sgx_epc_addr(epc_page));
>  		WARN(ret, "EREMOVE returned %d\n", ret);
>  	} else {
> -		sgx_encl_ewb(epc_page, SGX_ENCL_PAGE_INDEX(encl_page));
> +		sgx_encl_ewb(epc_page, backing);
>  	}
>  
>  	encl_page->epc_page = NULL;
> @@ -340,10 +336,17 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
>  	if (!encl->secs_child_cnt &&
>  	    (atomic_read(&encl->flags) &
>  	     (SGX_ENCL_DEAD | SGX_ENCL_INITIALIZED))) {
> -		sgx_encl_ewb(encl->secs.epc_page, PFN_DOWN(encl->size));
> -		sgx_free_page(encl->secs.epc_page);
> +		ret = sgx_encl_get_backing(encl, PFN_DOWN(encl->size),
> +					   &secs_backing);
> +		if (!ret) {
> +			sgx_encl_ewb(encl->secs.epc_page, &secs_backing);
> +			sgx_free_page(encl->secs.epc_page);
> +
> +			encl->secs.epc_page = NULL;
>  
> -		encl->secs.epc_page = NULL;
> +			put_page(secs_backing.pcmd);
> +			put_page(secs_backing.contents);
> +		}
>  	}
>  
>  	mutex_unlock(&encl->lock);
> @@ -351,17 +354,21 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
>  
>  /**
>   * sgx_reclaim_pages() - Reclaim EPC pages from the consumers
> - * Takes a fixed chunk of pages from the global list of consumed EPC pages and
> - * tries to swap them. Only the pages that are either being freed by the
> - * consumer or actively used are skipped.
> + *
> + * Take a fixed number of pages from the head of the active page pool and
> + * reclaim them to the enclave's private shmem files. Skip the pages, which
> + * have been accessed since the last scan. Move those pages to the tail of
> + * active page pool so that the pages get scanned in LRU like fashion.
>   */
>  void sgx_reclaim_pages(void)
>  {
> -	struct sgx_epc_page *chunk[SGX_NR_TO_SCAN + 1];
> +	struct sgx_epc_page *chunk[SGX_NR_TO_SCAN];
> +	struct sgx_backing backing[SGX_NR_TO_SCAN];
>  	struct sgx_epc_section *section;
>  	struct sgx_encl_page *encl_page;
>  	struct sgx_epc_page *epc_page;
>  	int cnt = 0;
> +	int ret;
>  	int i;
>  
>  	spin_lock(&sgx_active_page_list_lock);
> @@ -388,13 +395,21 @@ void sgx_reclaim_pages(void)
>  		epc_page = chunk[i];
>  		encl_page = epc_page->owner;
>  
> -		if (sgx_can_reclaim(epc_page)) {
> -			mutex_lock(&encl_page->encl->lock);
> -			encl_page->desc |= SGX_ENCL_PAGE_RECLAIMED;
> -			mutex_unlock(&encl_page->encl->lock);
> -			continue;
> -		}
> +		if (!sgx_can_reclaim(epc_page))

Would it make sense to use a more explicit name for sgx_can_reclaim(),
e.g. sgx_age_epc_page() or something?  "can reclaim" makes it sound like
there are scenarios where reclaim is impossible, but really it's just that
we don't want to reclaim a recently accessed page.

> +			goto skip;
>  
> +		ret = sgx_encl_get_backing(encl_page->encl,
> +					   SGX_ENCL_PAGE_INDEX(encl_page),
> +					   &backing[i]);
> +		if (ret)
> +			goto skip;
> +
> +		mutex_lock(&encl_page->encl->lock);
> +		encl_page->desc |= SGX_ENCL_PAGE_RECLAIMED;
> +		mutex_unlock(&encl_page->encl->lock);
> +		continue;
> +
> +skip:

Eww.  The call to sgx_encl_get_backing() makes it rather ugly no matter
what, but this seems slightly less ugly:

	for (i = 0; i < cnt; i++) {
		epc_page = chunk[i];
		encl_page = epc_page->owner;

		if (!sgx_can_reclaim(chunk[i]) ||
		    sgx_encl_get_backing(encl_page->encl,
					 SGX_ENCL_PAGE_INDEX(encl_page),
					 &backing[i])) {
			kref_put(&encl_page->encl->refcount, sgx_encl_release);

			spin_lock(&sgx_active_page_list_lock);
			list_add_tail(&epc_page->list, &sgx_active_page_list);
			spin_unlock(&sgx_active_page_list_lock);

			chunk[i] = NULL;
			continue;
		}

		mutex_lock(&encl_page->encl->lock);
		encl_page->desc |= SGX_ENCL_PAGE_RECLAIMED;
		mutex_unlock(&encl_page->encl->lock);
	}

>  		kref_put(&encl_page->encl->refcount, sgx_encl_release);
>  
>  		spin_lock(&sgx_active_page_list_lock);
> @@ -416,7 +431,11 @@ void sgx_reclaim_pages(void)
>  			continue;
>  
>  		encl_page = epc_page->owner;
> -		sgx_reclaimer_write(epc_page);
> +		sgx_reclaimer_write(epc_page, &backing[i]);
> +
> +		put_page(backing->pcmd);
> +		put_page(backing->contents);

These should be backing[i]->
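
I.e. something like (sketch):

		put_page(backing[i].pcmd);
		put_page(backing[i].contents);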

> +
>  		kref_put(&encl_page->encl->refcount, sgx_encl_release);
>  		epc_page->desc &= ~SGX_EPC_PAGE_RECLAIMABLE;
>  
> -- 
> 2.20.1
> 

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 14/17] x86/sgx: Replace section->free_cnt with a global sgx_nr_free_pages
  2019-09-17 22:50   ` Sean Christopherson
@ 2019-09-18  4:07     ` Jarkko Sakkinen
  0 siblings, 0 replies; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-18  4:07 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: linux-sgx, Shay Katz-zamir, Serge Ayoun

On Tue, Sep 17, 2019 at 03:50:28PM -0700, Sean Christopherson wrote:
> This isn't safe when there are multiple EPC sections as each section has
> its own spinlock.

The variable does not need to be exact as long as it eventually settles to
the correct value. It is only used to retrigger the reclaimer.

/Jarkko

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 16/17] x86/sgx: Introduce sgx_encl_get_backing()
  2019-09-17 23:05   ` Sean Christopherson
@ 2019-09-18  4:10     ` Jarkko Sakkinen
  0 siblings, 0 replies; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-18  4:10 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: linux-sgx, Shay Katz-zamir, Serge Ayoun

On Tue, Sep 17, 2019 at 04:05:18PM -0700, Sean Christopherson wrote:
> On Mon, Sep 16, 2019 at 01:18:02PM +0300, Jarkko Sakkinen wrote:
> > Since __sgx_encl_eldu() and __sgx_encl_ewb() pin the backing storage in the
> > same way, consolidate this to sgx_encl_get_backing() replacing
> > sgx_encl_get_backing_page().
> > 
> > Cc: Sean Christopherson <sean.j.christopherson@intel.com>
> > Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
> > Cc: Serge Ayoun <serge.ayoun@intel.com>
> > Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
> > ---
> >  arch/x86/kernel/cpu/sgx/encl.c    | 86 +++++++++++++++++++------------
> >  arch/x86/kernel/cpu/sgx/encl.h    | 20 +++----
> >  arch/x86/kernel/cpu/sgx/reclaim.c | 39 ++++----------
> >  3 files changed, 71 insertions(+), 74 deletions(-)
> > 
> > diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
> > index 66762b9c1517..a71414ce05b1 100644
> > --- a/arch/x86/kernel/cpu/sgx/encl.c
> > +++ b/arch/x86/kernel/cpu/sgx/encl.c
> > @@ -17,11 +17,8 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
> >  	unsigned long va_offset = SGX_ENCL_PAGE_VA_OFFSET(encl_page);
> >  	struct sgx_encl *encl = encl_page->encl;
> >  	struct sgx_pageinfo pginfo;
> > -	unsigned long pcmd_offset;
> > -	struct page *backing;
> > +	struct sgx_backing b;
> >  	pgoff_t page_index;
> > -	pgoff_t pcmd_index;
> > -	struct page *pcmd;
> >  	int ret;
> >  
> >  	if (secs_page)
> > @@ -29,24 +26,14 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
> >  	else
> >  		page_index = PFN_DOWN(encl->size);
> >  
> > -	pcmd_index = sgx_pcmd_index(encl, page_index);
> > -	pcmd_offset = sgx_pcmd_offset(page_index);
> > -
> > -	backing = sgx_encl_get_backing_page(encl, page_index);
> > -	if (IS_ERR(backing)) {
> > -		ret = PTR_ERR(backing);
> > -		goto err_backing;
> > -	}
> > -
> > -	pcmd = sgx_encl_get_backing_page(encl, pcmd_index);
> > -	if (IS_ERR(pcmd)) {
> > -		ret = PTR_ERR(pcmd);
> > -		goto err_pcmd;
> > -	}
> > +	ret = sgx_encl_get_backing(encl, page_index, &b);
> > +	if (ret)
> > +		return ret;
> >  
> >  	pginfo.addr = SGX_ENCL_PAGE_ADDR(encl_page);
> > -	pginfo.contents = (unsigned long)kmap_atomic(backing);
> > -	pginfo.metadata = (unsigned long)kmap_atomic(pcmd) + pcmd_offset;
> > +	pginfo.contents = (unsigned long)kmap_atomic(b.contents);
> > +	pginfo.metadata = (unsigned long)kmap_atomic(b.pcmd) +
> > +			  b.pcmd_offset;
> >  
> >  	if (secs_page)
> >  		pginfo.secs = (u64)sgx_epc_addr(secs_page);
> > @@ -62,15 +49,12 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
> >  		ret = -EFAULT;
> >  	}
> >  
> > -	kunmap_atomic((void *)(unsigned long)(pginfo.metadata - pcmd_offset));
> > +	kunmap_atomic((void *)(unsigned long)(pginfo.metadata - b.pcmd_offset));
> >  	kunmap_atomic((void *)(unsigned long)pginfo.contents);
> >  
> > -	put_page(pcmd);
> > +	put_page(b.pcmd);
> > +	put_page(b.contents);
> 
> If we're going to have sgx_encl_get_backing(), it should be paired with
> sgx_encl_put_backing(), e.g. it's not immediately obvious who's doing the
> "get page" corresponding to the put_page() calls.

OK, I can add that.
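
Roughly something like this, as a sketch of the pairing only (whether the
set_page_dirty() calls live here or stay in the EWB path is still open, and
the do_write parameter is an assumption on my part):

/*
 * Sketch of a put helper to pair with sgx_encl_get_backing().  The
 * write-back path would pass do_write=true so that the shmem pages get
 * marked dirty before the references are dropped.
 */
static void sgx_encl_put_backing(struct sgx_backing *backing, bool do_write)
{
	if (do_write) {
		set_page_dirty(backing->pcmd);
		set_page_dirty(backing->contents);
	}

	put_page(backing->pcmd);
	put_page(backing->contents);
}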

/Jarkko

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 12/17] x86/sgx: Open code sgx_reclaimer_get() and sgx_reclaimer_put()
  2019-09-17 23:07   ` Sean Christopherson
@ 2019-09-18  4:12     ` Jarkko Sakkinen
  2019-09-20 13:38       ` Jarkko Sakkinen
  0 siblings, 1 reply; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-18  4:12 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: linux-sgx, Shay Katz-zamir, Serge Ayoun

On Tue, Sep 17, 2019 at 04:07:40PM -0700, Sean Christopherson wrote:
> On Mon, Sep 16, 2019 at 01:17:58PM +0300, Jarkko Sakkinen wrote:
> > Open code sgx_reclaimer_get() and sgx_reclaimer_put(). They are legacy
> > from the callback interface. Wrapping a function call inside a function
> > does not make sense.
> 
> I actually like the functions, IMO they make the loops more readable.
> They should be static, and probably static inline, but I doubt either
> annotation affects the compiler output.

The way I see it, they only add to the need to cross-reference when you
read the code. Wrapping a single function call only makes sense when you
actually have a semantic need for it.

/Jarkko

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 09/17] x86/sgx: Move SGX_ENCL_DEAD check to sgx_reclaimer_write()
  2019-09-17 23:13   ` Sean Christopherson
@ 2019-09-18  4:15     ` Jarkko Sakkinen
  0 siblings, 0 replies; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-18  4:15 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: linux-sgx, Shay Katz-zamir, Serge Ayoun

On Tue, Sep 17, 2019 at 04:13:26PM -0700, Sean Christopherson wrote:
> SGX_ENCL_PAGE_RECLAIMED is left dangling on encl_page in the SGX_ENCL_DEAD
> case.  It doesn't affect functionality, but it could complicate debug if
> anything goes awry.

Agreed. It is actually a regression in this patch, because my intent was
not to change the semantics.
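
I.e. something along these lines in sgx_reclaimer_write(), as a sketch only
(clearing the flag in the dead branch as well, since sgx_encl_ewb() is no
longer reached there):

	mutex_lock(&encl->lock);

	if (atomic_read(&encl->flags) & SGX_ENCL_DEAD) {
		/* Sketch: do not leave the reclaimed marker dangling. */
		encl_page->desc &= ~SGX_ENCL_PAGE_RECLAIMED;

		ret = __eremove(sgx_epc_addr(epc_page));
		WARN(ret, "EREMOVE returned %d\n", ret);
	} else {
		sgx_encl_ewb(epc_page, SGX_ENCL_PAGE_INDEX(encl_page));
	}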

/Jarkko

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 09/17] x86/sgx: Move SGX_ENCL_DEAD check to sgx_reclaimer_write()
  2019-09-17 23:21   ` Sean Christopherson
@ 2019-09-18  4:16     ` Jarkko Sakkinen
  0 siblings, 0 replies; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-18  4:16 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: linux-sgx, Shay Katz-zamir, Serge Ayoun

On Tue, Sep 17, 2019 at 04:21:19PM -0700, Sean Christopherson wrote:
> On Mon, Sep 16, 2019 at 01:17:55PM +0300, Jarkko Sakkinen wrote:
> > Do enclave state checks only in sgx_reclaimer_write(). Checking the
> > enclave state is not part of the sgx_encl_ewb() flow. The check is done
> > differently for SECS and for addressable pages.
> > 
> > Cc: Sean Christopherson <sean.j.christopherson@intel.com>
> > Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
> > Cc: Serge Ayoun <serge.ayoun@intel.com>
> > Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
> > ---
> >  arch/x86/kernel/cpu/sgx/reclaim.c | 69 +++++++++++++++----------------
> >  1 file changed, 34 insertions(+), 35 deletions(-)
> > 
> > diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
> > index e758a06919e4..a3e36f959c74 100644
> > --- a/arch/x86/kernel/cpu/sgx/reclaim.c
> > +++ b/arch/x86/kernel/cpu/sgx/reclaim.c
> > @@ -308,47 +308,45 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
> >  
> >  	encl_page->desc &= ~SGX_ENCL_PAGE_RECLAIMED;
> >  
> > -	if (!(atomic_read(&encl->flags) & SGX_ENCL_DEAD)) {
> > -		va_page = list_first_entry(&encl->va_pages, struct sgx_va_page,
> > -					   list);
> > -		va_offset = sgx_alloc_va_slot(va_page);
> > -		if (sgx_va_page_full(va_page))
> > -			list_move_tail(&va_page->list, &encl->va_pages);
> > +	va_page = list_first_entry(&encl->va_pages, struct sgx_va_page,
> > +				   list);
> > +	va_offset = sgx_alloc_va_slot(va_page);
> > +	if (sgx_va_page_full(va_page))
> > +		list_move_tail(&va_page->list, &encl->va_pages);
> > +
> > +	ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset,
> > +			     page_index);
> > +	if (ret == SGX_NOT_TRACKED) {
> > +		ret = __etrack(sgx_epc_addr(encl->secs.epc_page));
> > +		if (ret) {
> > +			if (encls_failed(ret) ||
> > +			    encls_returned_code(ret))
> > +				ENCLS_WARN(ret, "ETRACK");
> > +		}
> >  
> >  		ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset,
> >  				     page_index);
> >  		if (ret == SGX_NOT_TRACKED) {
> > -			ret = __etrack(sgx_epc_addr(encl->secs.epc_page));
> > -			if (ret) {
> > -				if (encls_failed(ret) ||
> > -				    encls_returned_code(ret))
> > -					ENCLS_WARN(ret, "ETRACK");
> > -			}
> > -
> > -			ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset,
> > -					     page_index);
> > -			if (ret == SGX_NOT_TRACKED) {
> > -				/*
> > -				 * Slow path, send IPIs to kick cpus out of the
> > -				 * enclave.  Note, it's imperative that the cpu
> > -				 * mask is generated *after* ETRACK, else we'll
> > -				 * miss cpus that entered the enclave between
> > -				 * generating the mask and incrementing epoch.
> > -				 */
> > -				on_each_cpu_mask(sgx_encl_ewb_cpumask(encl),
> > -						 sgx_ipi_cb, NULL, 1);
> > -				ret = __sgx_encl_ewb(encl, epc_page, va_page,
> > -						     va_offset, page_index);
> > -			}
> > +			/*
> > +			 * Slow path, send IPIs to kick cpus out of the
> > +			 * enclave.  Note, it's imperative that the cpu
> > +			 * mask is generated *after* ETRACK, else we'll
> > +			 * miss cpus that entered the enclave between
> > +			 * generating the mask and incrementing epoch.
> > +			 */
> > +			on_each_cpu_mask(sgx_encl_ewb_cpumask(encl),
> > +					 sgx_ipi_cb, NULL, 1);
> > +			ret = __sgx_encl_ewb(encl, epc_page, va_page,
> > +					     va_offset, page_index);
> >  		}
> > +	}
> >  
> > -		if (ret)
> > -			if (encls_failed(ret) || encls_returned_code(ret))
> > -				ENCLS_WARN(ret, "EWB");
> > +	if (ret)
> > +		if (encls_failed(ret) || encls_returned_code(ret))
> > +			ENCLS_WARN(ret, "EWB");
> >  
> > -		encl_page->desc |= va_offset;
> > -		encl_page->va_page = va_page;
> > -	}
> > +	encl_page->desc |= va_offset;
> > +	encl_page->va_page = va_page;
> >  }
> >  
> >  static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
> > @@ -359,10 +357,11 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
> >  
> >  	mutex_lock(&encl->lock);
> >  
> > -	sgx_encl_ewb(epc_page, SGX_ENCL_PAGE_INDEX(encl_page));
> >  	if (atomic_read(&encl->flags) & SGX_ENCL_DEAD) {
> >  		ret = __eremove(sgx_epc_addr(epc_page));
> >  		WARN(ret, "EREMOVE returned %d\n", ret);
> > +	} else {
> > +		sgx_encl_ewb(epc_page, SGX_ENCL_PAGE_INDEX(encl_page));
> 
> The sgx_encl_ewb() for SECS also needs to be skipped, otherwise we'll
> attempt EWB on a dead enclave.  If the enclave is dead we can simply
> free the SECS.

Thanks! I'll refine.
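
For the SECS side, probably something along these lines (a sketch based on
the hunk above, not final code; the dead case just frees the SECS without
writing it back):

	if (!encl->secs_child_cnt) {
		if (atomic_read(&encl->flags) & SGX_ENCL_DEAD) {
			/* Sketch: a dead enclave's SECS can be freed as-is. */
			sgx_free_page(encl->secs.epc_page);
			encl->secs.epc_page = NULL;
		} else if (atomic_read(&encl->flags) & SGX_ENCL_INITIALIZED) {
			sgx_encl_ewb(encl->secs.epc_page, PFN_DOWN(encl->size));
			sgx_free_page(encl->secs.epc_page);
			encl->secs.epc_page = NULL;
		}
	}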

/Jarkko

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 17/17] x86/sgx: Fix pages in the BLOCKED state ending up to the free pool
  2019-09-17 23:34   ` Sean Christopherson
@ 2019-09-18  4:21     ` Jarkko Sakkinen
  2019-09-25  0:27       ` Jarkko Sakkinen
  0 siblings, 1 reply; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-18  4:21 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: linux-sgx, Shay Katz-zamir, Serge Ayoun

On Tue, Sep 17, 2019 at 04:34:35PM -0700, Sean Christopherson wrote:
> On Mon, Sep 16, 2019 at 01:18:03PM +0300, Jarkko Sakkinen wrote:
> > A blocked page can legitimately end up in the free pool if pinning fails,
> > because we interpret that as an EWB failure and simply put the page into
> > the free pool. This corrupts the EPC page allocator.
> > 
> > Fix the bug by pinning the backing storage when picking the victim pages.
> > A clean rollback can still be done when the memory allocation fails, as
> > the pages can still be returned to the enclave.
> > 
> > This in effect removes all failure cases from sgx_encl_ewb() other than an
> > EPCM conflict when the host has gone through a sleep cycle. In that case
> > putting a page back into the free pool is perfectly fine, because it is
> > uninitialized.
> > 
> > Cc: Sean Christopherson <sean.j.christopherson@intel.com>
> > Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
> > Cc: Serge Ayoun <serge.ayoun@intel.com>
> > Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
> > ---
> >  arch/x86/kernel/cpu/sgx/reclaim.c | 95 ++++++++++++++++++-------------
> >  1 file changed, 57 insertions(+), 38 deletions(-)
> > 
> > diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
> > index 7d628a1388e2..d6e580e55456 100644
> > --- a/arch/x86/kernel/cpu/sgx/reclaim.c
> > +++ b/arch/x86/kernel/cpu/sgx/reclaim.c
> > @@ -206,32 +206,24 @@ static void sgx_reclaimer_block(struct sgx_epc_page *epc_page)
> >  
> >  static int __sgx_encl_ewb(struct sgx_encl *encl, struct sgx_epc_page *epc_page,
> >  			  struct sgx_va_page *va_page, unsigned int va_offset,
> > -			  unsigned int page_index)
> > +			  struct sgx_backing *backing)
> >  {
> >  	struct sgx_pageinfo pginfo;
> > -	struct sgx_backing b;
> >  	int ret;
> >  
> > -	ret = sgx_encl_get_backing(encl, page_index, &b);
> > -	if (ret)
> > -		return ret;
> > -
> >  	pginfo.addr = 0;
> > -	pginfo.contents = (unsigned long)kmap_atomic(b.contents);
> > -	pginfo.metadata = (unsigned long)kmap_atomic(b.pcmd) + b.pcmd_offset;
> >  	pginfo.secs = 0;
> > +
> > +	pginfo.contents = (unsigned long)kmap_atomic(backing->contents);
> > +	pginfo.metadata = (unsigned long)kmap_atomic(backing->pcmd) +
> > +			  backing->pcmd_offset;
> > +
> >  	ret = __ewb(&pginfo, sgx_epc_addr(epc_page),
> >  		    sgx_epc_addr(va_page->epc_page) + va_offset);
> > -	kunmap_atomic((void *)(unsigned long)(pginfo.metadata - b.pcmd_offset));
> > -	kunmap_atomic((void *)(unsigned long)pginfo.contents);
> >  
> > -	if (!ret) {
> > -		set_page_dirty(b.pcmd);
> > -		set_page_dirty(b.contents);
> > -	}
> > -
> > -	put_page(b.pcmd);
> > -	put_page(b.contents);
> > +	kunmap_atomic((void *)(unsigned long)(pginfo.metadata -
> > +					      backing->pcmd_offset));
> > +	kunmap_atomic((void *)(unsigned long)pginfo.contents);
> >  
> >  	return ret;
> >  }
> > @@ -265,7 +257,7 @@ static const cpumask_t *sgx_encl_ewb_cpumask(struct sgx_encl *encl)
> >  }
> >  
> >  static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
> > -			 unsigned int page_index)
> > +			 struct sgx_backing *backing)
> >  {
> >  	struct sgx_encl_page *encl_page = epc_page->owner;
> >  	struct sgx_encl *encl = encl_page->encl;
> > @@ -281,8 +273,7 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
> >  	if (sgx_va_page_full(va_page))
> >  		list_move_tail(&va_page->list, &encl->va_pages);
> >  
> > -	ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset,
> > -			     page_index);
> > +	ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset, backing);
> >  	if (ret == SGX_NOT_TRACKED) {
> >  		ret = __etrack(sgx_epc_addr(encl->secs.epc_page));
> >  		if (ret) {
> > @@ -292,7 +283,7 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
> >  		}
> >  
> >  		ret = __sgx_encl_ewb(encl, epc_page, va_page, va_offset,
> > -				     page_index);
> > +				     backing);
> >  		if (ret == SGX_NOT_TRACKED) {
> >  			/*
> >  			 * Slow path, send IPIs to kick cpus out of the
> > @@ -304,7 +295,7 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
> >  			on_each_cpu_mask(sgx_encl_ewb_cpumask(encl),
> >  					 sgx_ipi_cb, NULL, 1);
> >  			ret = __sgx_encl_ewb(encl, epc_page, va_page,
> > -					     va_offset, page_index);
> > +					     va_offset, backing);
> >  		}
> >  	}
> >  
> > @@ -314,15 +305,20 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
> >  
> >  		sgx_encl_destroy(encl);
> >  	} else {
> > +		set_page_dirty(backing->pcmd);
> > +		set_page_dirty(backing->contents);
> > +
> >  		encl_page->desc |= va_offset;
> >  		encl_page->va_page = va_page;
> >  	}
> >  }
> >  
> > -static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
> > +static void sgx_reclaimer_write(struct sgx_epc_page *epc_page,
> > +				struct sgx_backing *backing)
> >  {
> >  	struct sgx_encl_page *encl_page = epc_page->owner;
> >  	struct sgx_encl *encl = encl_page->encl;
> > +	struct sgx_backing secs_backing;
> >  	int ret;
> >  
> >  	mutex_lock(&encl->lock);
> > @@ -331,7 +327,7 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
> >  		ret = __eremove(sgx_epc_addr(epc_page));
> >  		WARN(ret, "EREMOVE returned %d\n", ret);
> >  	} else {
> > -		sgx_encl_ewb(epc_page, SGX_ENCL_PAGE_INDEX(encl_page));
> > +		sgx_encl_ewb(epc_page, backing);
> >  	}
> >  
> >  	encl_page->epc_page = NULL;
> > @@ -340,10 +336,17 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
> >  	if (!encl->secs_child_cnt &&
> >  	    (atomic_read(&encl->flags) &
> >  	     (SGX_ENCL_DEAD | SGX_ENCL_INITIALIZED))) {
> > -		sgx_encl_ewb(encl->secs.epc_page, PFN_DOWN(encl->size));
> > -		sgx_free_page(encl->secs.epc_page);
> > +		ret = sgx_encl_get_backing(encl, PFN_DOWN(encl->size),
> > +					   &secs_backing);
> > +		if (!ret) {
> > +			sgx_encl_ewb(encl->secs.epc_page, &secs_backing);
> > +			sgx_free_page(encl->secs.epc_page);
> > +
> > +			encl->secs.epc_page = NULL;
> >  
> > -		encl->secs.epc_page = NULL;
> > +			put_page(secs_backing.pcmd);
> > +			put_page(secs_backing.contents);
> > +		}
> >  	}
> >  
> >  	mutex_unlock(&encl->lock);
> > @@ -351,17 +354,21 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page)
> >  
> >  /**
> >   * sgx_reclaim_pages() - Reclaim EPC pages from the consumers
> > - * Takes a fixed chunk of pages from the global list of consumed EPC pages and
> > - * tries to swap them. Only the pages that are either being freed by the
> > - * consumer or actively used are skipped.
> > + *
> > + * Take a fixed number of pages from the head of the active page pool and
> > + * reclaim them to the enclave's private shmem files. Skip pages that have
> > + * been accessed since the last scan. Move those pages to the tail of the
> > + * active page pool so that they get scanned in an LRU-like fashion.
> >   */
> >  void sgx_reclaim_pages(void)
> >  {
> > -	struct sgx_epc_page *chunk[SGX_NR_TO_SCAN + 1];
> > +	struct sgx_epc_page *chunk[SGX_NR_TO_SCAN];
> > +	struct sgx_backing backing[SGX_NR_TO_SCAN];
> >  	struct sgx_epc_section *section;
> >  	struct sgx_encl_page *encl_page;
> >  	struct sgx_epc_page *epc_page;
> >  	int cnt = 0;
> > +	int ret;
> >  	int i;
> >  
> >  	spin_lock(&sgx_active_page_list_lock);
> > @@ -388,13 +395,21 @@ void sgx_reclaim_pages(void)
> >  		epc_page = chunk[i];
> >  		encl_page = epc_page->owner;
> >  
> > -		if (sgx_can_reclaim(epc_page)) {
> > -			mutex_lock(&encl_page->encl->lock);
> > -			encl_page->desc |= SGX_ENCL_PAGE_RECLAIMED;
> > -			mutex_unlock(&encl_page->encl->lock);
> > -			continue;
> > -		}
> > +		if (!sgx_can_reclaim(epc_page))
> 
> Would it make sense to use a more explicit name for sgx_can_reclaim(),
> e.g. sgx_age_epc_page() or something?  "can reclaim" makes it sound like
> there are scenarios where reclaim is impossible, but really it's just that
> we don't want to reclaim a recently accessed page.
> 
> > +			goto skip;
> >  
> > +		ret = sgx_encl_get_backing(encl_page->encl,
> > +					   SGX_ENCL_PAGE_INDEX(encl_page),
> > +					   &backing[i]);
> > +		if (ret)
> > +			goto skip;
> > +
> > +		mutex_lock(&encl_page->encl->lock);
> > +		encl_page->desc |= SGX_ENCL_PAGE_RECLAIMED;
> > +		mutex_unlock(&encl_page->encl->lock);
> > +		continue;
> > +
> > +skip:
> 
> Eww.  The call to sgx_encl_get_backing() makes it rather ugly no matter
> what, but this seems slightly less ugly:
> 
> 	for (i = 0; i < cnt; i++) {
> 		epc_page = chunk[i];
> 		encl_page = epc_page->owner;
> 
> 		if (!sgx_can_reclaim(chunk[i]) ||
> 		    sgx_encl_get_backing(encl_page->encl,
> 					 SGX_ENCL_PAGE_INDEX(encl_page),
> 					 &backing[i])) {
> 			kref_put(&encl_page->encl->refcount, sgx_encl_release);
> 
> 			spin_lock(&sgx_active_page_list_lock);
> 			list_add_tail(&epc_page->list, &sgx_active_page_list);
> 			spin_unlock(&sgx_active_page_list_lock);
> 
> 			chunk[i] = NULL;
> 			continue;
> 		}
> 
> 		mutex_lock(&encl_page->encl->lock);
> 		encl_page->desc |= SGX_ENCL_PAGE_RECLAIMED;
> 		mutex_unlock(&encl_page->encl->lock);
> 	}
> 
> >  		kref_put(&encl_page->encl->refcount, sgx_encl_release);
> >  
> >  		spin_lock(&sgx_active_page_list_lock);
> > @@ -416,7 +431,11 @@ void sgx_reclaim_pages(void)
> >  			continue;
> >  
> >  		encl_page = epc_page->owner;
> > -		sgx_reclaimer_write(epc_page);
> > +		sgx_reclaimer_write(epc_page, &backing[i]);
> > +
> > +		put_page(backing->pcmd);
> > +		put_page(backing->contents);
> 
> These should use backing[i], e.g. put_page(backing[i].pcmd).
> 
> > +
> >  		kref_put(&encl_page->encl->refcount, sgx_encl_release);
> >  		epc_page->desc &= ~SGX_EPC_PAGE_RECLAIMABLE;
> >  
> > -- 
> > 2.20.1
> > 

Good remarks. I still have to check why the last one did not give me crashes.

/Jarkko

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 12/17] x86/sgx: Open code sgx_reclaimer_get() and sgx_reclaimer_put()
  2019-09-18  4:12     ` Jarkko Sakkinen
@ 2019-09-20 13:38       ` Jarkko Sakkinen
  0 siblings, 0 replies; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-20 13:38 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: linux-sgx, Shay Katz-zamir, Serge Ayoun

On Wed, Sep 18, 2019 at 07:12:25AM +0300, Jarkko Sakkinen wrote:
> On Tue, Sep 17, 2019 at 04:07:40PM -0700, Sean Christopherson wrote:
> > On Mon, Sep 16, 2019 at 01:17:58PM +0300, Jarkko Sakkinen wrote:
> > > Open code sgx_reclaimer_get() and sgx_reclaimer_put(). They are legacy
> > > from the callback interface. Wrapping a function call inside a function
> > > does not make sense.
> > 
> > I actually like the functions, IMO they make the loops more readable.
> > They should be static, and probably static inline, but I doubt either
> > annotation affects the compiler output.
> 
> The way I see it, they only add to the need to cross-reference when you
> read the code. Wrapping a single function call only makes sense when you
> actually have a semantic need for it.

It is also inconsistent to have the wrappers, given that kref_put() is
used "raw" at the other call sites.

If we have a wrapper, it should be used consistently in every possible
call site.

/Jarkko

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 17/17] x86/sgx: Fix pages in the BLOCKED state ending up to the free pool
  2019-09-18  4:21     ` Jarkko Sakkinen
@ 2019-09-25  0:27       ` Jarkko Sakkinen
  2019-09-25 18:33         ` Sean Christopherson
  0 siblings, 1 reply; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-25  0:27 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: linux-sgx, Shay Katz-zamir, Serge Ayoun

On Wed, Sep 18, 2019 at 07:21:20AM +0300, Jarkko Sakkinen wrote:
> > > +			goto skip;
> > >  
> > > +		ret = sgx_encl_get_backing(encl_page->encl,
> > > +					   SGX_ENCL_PAGE_INDEX(encl_page),
> > > +					   &backing[i]);
> > > +		if (ret)
> > > +			goto skip;
> > > +
> > > +		mutex_lock(&encl_page->encl->lock);
> > > +		encl_page->desc |= SGX_ENCL_PAGE_RECLAIMED;
> > > +		mutex_unlock(&encl_page->encl->lock);
> > > +		continue;
> > > +
> > > +skip:
> > 
> > Eww.  The call to sgx_encl_get_backing() makes it rather ugly no matter
> > what, but this seems slightly less ugly:
> > 
> > 	for (i = 0; i < cnt; i++) {
> > 		epc_page = chunk[i];
> > 		encl_page = epc_page->owner;
> > 
> > 		if (!sgx_can_reclaim(chunk[i]) ||
> > 		    sgx_encl_get_backing(encl_page->encl,
> > 					 SGX_ENCL_PAGE_INDEX(encl_page),
> > 					 &backing[i])) {
> > 			kref_put(&encl_page->encl->refcount, sgx_encl_release);
> > 
> > 			spin_lock(&sgx_active_page_list_lock);
> > 			list_add_tail(&epc_page->list, &sgx_active_page_list);
> > 			spin_unlock(&sgx_active_page_list_lock);
> > 
> > 			chunk[i] = NULL;
> > 			continue;
> > 		}
> > 
> > 		mutex_lock(&encl_page->encl->lock);
> > 		encl_page->desc |= SGX_ENCL_PAGE_RECLAIMED;
> > 		mutex_unlock(&encl_page->encl->lock);
> > 	}
> > 

Well, that is one big nested mess, whereas the version I did is a
legitimate use of gotos: two conditions that can cause the action to be
skipped. It is also a fairly normal use of gotos, with the same idea as
the out/err labels, except that here they are used inside a loop.

/Jarkko

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 13/17] x86/sgx: Introduce sgx_can_reclaim()
  2019-09-16 10:17 ` [PATCH v3 13/17] x86/sgx: Introduce sgx_can_reclaim() Jarkko Sakkinen
  2019-09-17 23:25   ` Sean Christopherson
@ 2019-09-25 18:28   ` Sean Christopherson
  2019-09-27 15:33     ` Jarkko Sakkinen
  1 sibling, 1 reply; 37+ messages in thread
From: Sean Christopherson @ 2019-09-25 18:28 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Shay Katz-zamir, Serge Ayoun

On Mon, Sep 16, 2019 at 01:17:59PM +0300, Jarkko Sakkinen wrote:
> Make sgx_reclaimer_evict() idempotent, rename it to sgx_can_reclaim(), and
> set SGX_ENCL_PAGE_RECLAIMED in the call site.
> 
> Cc: Sean Christopherson <sean.j.christopherson@intel.com>
> Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
> Cc: Serge Ayoun <serge.ayoun@intel.com>
> Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
> ---
>  arch/x86/kernel/cpu/sgx/reclaim.c | 32 +++++++++++++++++--------------
>  1 file changed, 18 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
> index 5a4d44dd02d7..cc3155b61513 100644
> --- a/arch/x86/kernel/cpu/sgx/reclaim.c
> +++ b/arch/x86/kernel/cpu/sgx/reclaim.c
> @@ -121,7 +121,19 @@ void sgx_mark_page_reclaimable(struct sgx_epc_page *page)
>  	spin_unlock(&sgx_active_page_list_lock);
>  }
>  
> -static bool sgx_reclaimer_evict(struct sgx_epc_page *epc_page)
> +/**
> + * sgx_can_reclaim() - Filter out the pages that should not be reclaimed
> + * @epc_page:	a candidate EPC page
> + *
> + * Do not reclaim this page if it has been recently accessed by any mm_struct
> + * *and* if the enclave is still alive.  No need to take the enclave's lock,
> + * worst case scenario reclaiming pages from a dead enclave is delayed slightly.
> + * A live enclave with a recently accessed page is more common and avoiding lock
> + * contention in that case is a boon to performance.
> + *
> + * Return: true if the page should be reclaimed
> + */
> +static bool sgx_can_reclaim(struct sgx_epc_page *epc_page)

Belated thought.  The param can be 'struct sgx_encl_page *encl_page'.  The
caller already has the encl_page, and the function only uses the epc_page
to retrieve the encl_page.

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 17/17] x86/sgx: Fix pages in the BLOCKED state ending up to the free pool
  2019-09-25  0:27       ` Jarkko Sakkinen
@ 2019-09-25 18:33         ` Sean Christopherson
  2019-09-27 15:39           ` Jarkko Sakkinen
  0 siblings, 1 reply; 37+ messages in thread
From: Sean Christopherson @ 2019-09-25 18:33 UTC (permalink / raw)
  To: Jarkko Sakkinen; +Cc: linux-sgx, Shay Katz-zamir, Serge Ayoun

On Wed, Sep 25, 2019 at 03:27:49AM +0300, Jarkko Sakkinen wrote:
> On Wed, Sep 18, 2019 at 07:21:20AM +0300, Jarkko Sakkinen wrote:
> > > > +			goto skip;
> > > >  
> > > > +		ret = sgx_encl_get_backing(encl_page->encl,
> > > > +					   SGX_ENCL_PAGE_INDEX(encl_page),
> > > > +					   &backing[i]);
> > > > +		if (ret)
> > > > +			goto skip;
> > > > +
> > > > +		mutex_lock(&encl_page->encl->lock);
> > > > +		encl_page->desc |= SGX_ENCL_PAGE_RECLAIMED;
> > > > +		mutex_unlock(&encl_page->encl->lock);
> > > > +		continue;
> > > > +
> > > > +skip:
> > > 
> > > Eww.  The call to sgx_encl_get_backing() makes it rather ugly no matter
> > > what, but this seems slightly less ugly:
> > > 
> > > 	for (i = 0; i < cnt; i++) {
> > > 		epc_page = chunk[i];
> > > 		encl_page = epc_page->owner;
> > > 
> > > 		if (!sgx_can_reclaim(chunk[i]) ||
> > > 		    sgx_encl_get_backing(encl_page->encl,
> > > 					 SGX_ENCL_PAGE_INDEX(encl_page),
> > > 					 &backing[i])) {
> > > 			kref_put(&encl_page->encl->refcount, sgx_encl_release);
> > > 
> > > 			spin_lock(&sgx_active_page_list_lock);
> > > 			list_add_tail(&epc_page->list, &sgx_active_page_list);
> > > 			spin_unlock(&sgx_active_page_list_lock);
> > > 
> > > 			chunk[i] = NULL;
> > > 			continue;
> > > 		}
> > > 
> > > 		mutex_lock(&encl_page->encl->lock);
> > > 		encl_page->desc |= SGX_ENCL_PAGE_RECLAIMED;
> > > 		mutex_unlock(&encl_page->encl->lock);
> > > 	}
> > > 
> 
> Well, that is one big nested mess, whereas the version I did is a
> legitimate use of gotos: two conditions that can cause the action to be
> skipped. It is also a fairly normal use of gotos, with the same idea as
> the out/err labels, except that here they are used inside a loop.

Yeah, it's the "inside a loop" part that's ugly.  I agree the nested code
is also heinous.

What if we add a helper to split out the verbose check?  Maybe as below,
or move the entire guts to a separate helper?

static int sgx_prepare_to_reclaim(struct sgx_encl_page *encl_page,
				  struct sgx_backing *backing)
{
	if (!sgx_can_reclaim(encl_page))
		return -EBUSY;

	return sgx_encl_get_backing(encl_page->encl,
				    SGX_ENCL_PAGE_INDEX(encl_page), backing);
}

void sgx_reclaim_pages(void)
{
	for (i = 0; i < cnt; i++) {
		epc_page = chunk[i];
		encl_page = epc_page->owner;

		if (!sgx_prepare_to_reclaim(encl_page, &backing[i])) {
			mutex_lock(&encl_page->encl->lock);
			encl_page->desc |= SGX_ENCL_PAGE_RECLAIMED;
			mutex_unlock(&encl_page->encl->lock);
		} else {
			kref_put(&encl_page->encl->refcount, sgx_encl_release);

			spin_lock(&sgx_active_page_list_lock);
			list_add_tail(&epc_page->list, &sgx_active_page_list);
			spin_unlock(&sgx_active_page_list_lock);

			chunk[i] = NULL;
		}
	}

	/* ... */
}

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 13/17] x86/sgx: Introduce sgx_can_reclaim()
  2019-09-25 18:28   ` Sean Christopherson
@ 2019-09-27 15:33     ` Jarkko Sakkinen
  0 siblings, 0 replies; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-27 15:33 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: linux-sgx, Shay Katz-zamir, Serge Ayoun

On Wed, Sep 25, 2019 at 11:28:27AM -0700, Sean Christopherson wrote:
> On Mon, Sep 16, 2019 at 01:17:59PM +0300, Jarkko Sakkinen wrote:
> > Make sgx_reclaimer_evict() idempotent, rename it to sgx_can_reclaim(), and
> > set SGX_ENCL_PAGE_RECLAIMED in the call site.
> > 
> > Cc: Sean Christopherson <sean.j.christopherson@intel.com>
> > Cc: Shay Katz-zamir <shay.katz-zamir@intel.com>
> > Cc: Serge Ayoun <serge.ayoun@intel.com>
> > Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
> > ---
> >  arch/x86/kernel/cpu/sgx/reclaim.c | 32 +++++++++++++++++--------------
> >  1 file changed, 18 insertions(+), 14 deletions(-)
> > 
> > diff --git a/arch/x86/kernel/cpu/sgx/reclaim.c b/arch/x86/kernel/cpu/sgx/reclaim.c
> > index 5a4d44dd02d7..cc3155b61513 100644
> > --- a/arch/x86/kernel/cpu/sgx/reclaim.c
> > +++ b/arch/x86/kernel/cpu/sgx/reclaim.c
> > @@ -121,7 +121,19 @@ void sgx_mark_page_reclaimable(struct sgx_epc_page *page)
> >  	spin_unlock(&sgx_active_page_list_lock);
> >  }
> >  
> > -static bool sgx_reclaimer_evict(struct sgx_epc_page *epc_page)
> > +/**
> > + * sgx_can_reclaim() - Filter out the pages that should not be reclaimed
> > + * @epc_page:	a candidate EPC page
> > + *
> > + * Do not reclaim this page if it has been recently accessed by any mm_struct
> > + * *and* if the enclave is still alive.  No need to take the enclave's lock,
> > + * worst case scenario reclaiming pages from a dead enclave is delayed slightly.
> > + * A live enclave with a recently accessed page is more common and avoiding lock
> > + * contention in that case is a boon to performance.
> > + *
> > + * Return: true if the page should be reclaimed
> > + */
> > +static bool sgx_can_reclaim(struct sgx_epc_page *epc_page)
> 
> Belated thought.  The param can be 'struct sgx_encl_page *encl_page'.  The
> caller already has the encl_page, and the function only uses the epc_page
> to retrieve the encl_page.

I used sgx_reclaimer_age() as the function name when I merged this. I agree
that the old name was not right, because the function is not a pure query
but also changes state.
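
With your parameter suggestion folded in, the shape is roughly this (sketch
only; the mm walk that tests and clears the accessed bits is elided):

static bool sgx_reclaimer_age(struct sgx_encl_page *encl_page)
{
	struct sgx_encl *encl = encl_page->encl;
	bool young = false;

	/*
	 * ... walk the enclave's mms, test and clear the accessed bits,
	 * and set 'young' if any mapping was recently used ...
	 */

	/* Reclaim unless the page is young and the enclave is still alive. */
	return !young || (atomic_read(&encl->flags) & SGX_ENCL_DEAD);
}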

/Jarkko

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v3 17/17] x86/sgx: Fix pages in the BLOCKED state ending up to the free pool
  2019-09-25 18:33         ` Sean Christopherson
@ 2019-09-27 15:39           ` Jarkko Sakkinen
  0 siblings, 0 replies; 37+ messages in thread
From: Jarkko Sakkinen @ 2019-09-27 15:39 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: linux-sgx, Shay Katz-zamir, Serge Ayoun

On Wed, Sep 25, 2019 at 11:33:03AM -0700, Sean Christopherson wrote:
> On Wed, Sep 25, 2019 at 03:27:49AM +0300, Jarkko Sakkinen wrote:
> > On Wed, Sep 18, 2019 at 07:21:20AM +0300, Jarkko Sakkinen wrote:
> > > > > +			goto skip;
> > > > >  
> > > > > +		ret = sgx_encl_get_backing(encl_page->encl,
> > > > > +					   SGX_ENCL_PAGE_INDEX(encl_page),
> > > > > +					   &backing[i]);
> > > > > +		if (ret)
> > > > > +			goto skip;
> > > > > +
> > > > > +		mutex_lock(&encl_page->encl->lock);
> > > > > +		encl_page->desc |= SGX_ENCL_PAGE_RECLAIMED;
> > > > > +		mutex_unlock(&encl_page->encl->lock);
> > > > > +		continue;
> > > > > +
> > > > > +skip:
> > > > 
> > > > Eww.  The call to sgx_encl_get_backing() makes it rather ugly no matter
> > > > what, but this seems slightly less ugly:
> > > > 
> > > > 	for (i = 0; i < cnt; i++) {
> > > > 		epc_page = chunk[i];
> > > > 		encl_page = epc_page->owner;
> > > > 
> > > > 		if (!sgx_can_reclaim(chunk[i]) ||
> > > > 		    sgx_encl_get_backing(encl_page->encl,
> > > > 					 SGX_ENCL_PAGE_INDEX(encl_page),
> > > > 					 &backing[i])) {
> > > > 			kref_put(&encl_page->encl->refcount, sgx_encl_release);
> > > > 
> > > > 			spin_lock(&sgx_active_page_list_lock);
> > > > 			list_add_tail(&epc_page->list, &sgx_active_page_list);
> > > > 			spin_unlock(&sgx_active_page_list_lock);
> > > > 
> > > > 			chunk[i] = NULL;
> > > > 			continue;
> > > > 		}
> > > > 
> > > > 		mutex_lock(&encl_page->encl->lock);
> > > > 		encl_page->desc |= SGX_ENCL_PAGE_RECLAIMED;
> > > > 		mutex_unlock(&encl_page->encl->lock);
> > > > 	}
> > > > 
> > 
> > Well, that is one big nested mess, whereas the version I did is a
> > legitimate use of gotos: two conditions that can cause the action to be
> > skipped. It is also a fairly normal use of gotos, with the same idea as
> > the out/err labels, except that here they are used inside a loop.
> 
> Yeah, it's the "inside a loop" part that's ugly.  I agree the nested code
> is also heinous.
> 
> What if we add a helper to split out the verbose check?  Maybe as below,
> or move the entire guts to a separate helper?
> 
> static int sgx_prepare_to_reclaim(struct sgx_encl_page *encl_page,
> 				  struct sgx_backing *backing)
> {
> 	if (!sgx_can_reclaim(encl_page))
> 		return -EBUSY;
> 
> 	return sgx_encl_get_backing(encl_page->encl,
> 				    SGX_ENCL_PAGE_INDEX(encl_page), backing);
> }

If you want to make it cleaner, it would be better to keep the
sgx_reclaimer_age() call outside the function in order to keep the call
hierarchy reasonably "flat". Aging is a core part of how the flow works,
and I don't want it to be buried too deep.

E.g. inside the sgx_reclaim_pages():

if (sgx_reclaimer_age(epc_page))
	sgx_reclaimer_prepare(epc_page, &backing[i]);

sgx_reclaimer_prepare() would then do all the preparation work (including
setting SGX_ENCL_PAGE_RECLAIMED).
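
I.e. roughly the following (a sketch only; in practice the caller would also
check the return value and put the page back to the active list on failure,
exactly like the skip path does now):

static bool sgx_reclaimer_prepare(struct sgx_epc_page *epc_page,
				  struct sgx_backing *backing)
{
	struct sgx_encl_page *encl_page = epc_page->owner;

	/* Pin the backing storage up front so that EWB cannot fail later. */
	if (sgx_encl_get_backing(encl_page->encl,
				 SGX_ENCL_PAGE_INDEX(encl_page), backing))
		return false;

	mutex_lock(&encl_page->encl->lock);
	encl_page->desc |= SGX_ENCL_PAGE_RECLAIMED;
	mutex_unlock(&encl_page->encl->lock);

	return true;
}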

/Jarkko



^ permalink raw reply	[flat|nested] 37+ messages in thread

end of thread, other threads:[~2019-09-27 15:40 UTC | newest]

Thread overview: 37+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-09-16 10:17 [PATCH v3 00/17] Fixes and updates for v23 Jarkko Sakkinen
2019-09-16 10:17 ` [PATCH v3 01/17] selftest/x86/sgx: Remove encl_piggy.h Jarkko Sakkinen
2019-09-16 10:17 ` [PATCH v3 02/17] x86/sgx: Clean up internal includes Jarkko Sakkinen
2019-09-16 10:17 ` [PATCH v3 03/17] x86/sgx: Write backing storage only if EWB is successful Jarkko Sakkinen
2019-09-16 10:17 ` [PATCH v3 04/17] x86/sgx: Rename 'j' as 'cnt' in sgx_reclaim_pages() Jarkko Sakkinen
2019-09-16 10:17 ` [PATCH v3 05/17] x86/sgx: Turn encls_failed() as inline function Jarkko Sakkinen
2019-09-16 10:17 ` [PATCH v3 06/17] x86/sgx: Move sgx_einit() to encls.c Jarkko Sakkinen
2019-09-16 10:17 ` [PATCH v3 07/17] x86/sgx: Remove pages in sgx_reclaimer_write() Jarkko Sakkinen
2019-09-16 10:17 ` [PATCH v3 08/17] x86/sgx: Calculate page index " Jarkko Sakkinen
2019-09-16 10:17 ` [PATCH v3 09/17] x86/sgx: Move SGX_ENCL_DEAD check to sgx_reclaimer_write() Jarkko Sakkinen
2019-09-17 23:13   ` Sean Christopherson
2019-09-18  4:15     ` Jarkko Sakkinen
2019-09-17 23:21   ` Sean Christopherson
2019-09-18  4:16     ` Jarkko Sakkinen
2019-09-16 10:17 ` [PATCH v3 10/17] x86/sgx: Free VA slot when the EWB flow fails Jarkko Sakkinen
2019-09-16 10:17 ` [PATCH v3 11/17] x86/sgx: Call sgx_encl_destroy() " Jarkko Sakkinen
2019-09-16 10:17 ` [PATCH v3 12/17] x86/sgx: Open code sgx_reclaimer_get() and sgx_reclaimer_put() Jarkko Sakkinen
2019-09-17 23:07   ` Sean Christopherson
2019-09-18  4:12     ` Jarkko Sakkinen
2019-09-20 13:38       ` Jarkko Sakkinen
2019-09-16 10:17 ` [PATCH v3 13/17] x86/sgx: Introduce sgx_can_reclaim() Jarkko Sakkinen
2019-09-17 23:25   ` Sean Christopherson
2019-09-25 18:28   ` Sean Christopherson
2019-09-27 15:33     ` Jarkko Sakkinen
2019-09-16 10:18 ` [PATCH v3 14/17] x86/sgx: Replace section->free_cnt with a global sgx_nr_free_pages Jarkko Sakkinen
2019-09-17 22:50   ` Sean Christopherson
2019-09-18  4:07     ` Jarkko Sakkinen
2019-09-16 10:18 ` [PATCH v3 15/17] x86/sgx: sgx_vma_access(): Do not return -ECANCELED on invalid TCS pages Jarkko Sakkinen
2019-09-16 10:18 ` [PATCH v3 16/17] x86/sgx: Introduce sgx_encl_get_backing() Jarkko Sakkinen
2019-09-17 23:05   ` Sean Christopherson
2019-09-18  4:10     ` Jarkko Sakkinen
2019-09-16 10:18 ` [PATCH v3 17/17] x86/sgx: Fix pages in the BLOCKED state ending up to the free pool Jarkko Sakkinen
2019-09-17 23:34   ` Sean Christopherson
2019-09-18  4:21     ` Jarkko Sakkinen
2019-09-25  0:27       ` Jarkko Sakkinen
2019-09-25 18:33         ` Sean Christopherson
2019-09-27 15:39           ` Jarkko Sakkinen
