linux-kernel.vger.kernel.org archive mirror
* [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers
@ 2022-03-04  9:34 Jarkko Sakkinen
  2022-03-04  9:34 ` [RFC PATCH v2.1 02/30] x86/sgx: Add wrapper for SGX2 EMODPR function Jarkko Sakkinen
                   ` (30 more replies)
  0 siblings, 31 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:34 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Dave Hansen,
	Jarkko Sakkinen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

From: Reinette Chatre <reinette.chatre@intel.com>

The SGX ENCLS instruction uses EAX to specify an SGX function and
may require additional registers, depending on the SGX function.
ENCLS invokes the specified privileged SGX function for managing
and debugging enclaves. Macros are used to wrap the ENCLS
functionality and several wrappers in turn use these macros to
make the different SGX functions accessible in the code.

The wrappers of the supported SGX functions are cryptic. Add short
descriptions of each as a comment.
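
Purely as an illustration (simplified, not the kernel's exact macro:
exception fixup and error reporting are omitted, and the macro name
__example_encls_2() is made up), the wrapper pattern amounts to placing
the leaf function number in EAX and the operands in the other registers
before executing ENCLS:

  #define __example_encls_2(rax, rbx, rcx)                      \
          ({                                                    \
          int ret;                                              \
          asm volatile(".byte 0x0f, 0x01, 0xcf" /* ENCLS */     \
                       : "=a"(ret)                              \
                       : "a"(rax), "b"(rbx), "c"(rcx)           \
                       : "memory", "cc");                       \
          ret;                                                  \
          })

Each short wrapper below then simply passes a leaf such as ECREATE or
EADD, plus its operands, to one of the real macros in encls.h.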

Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 arch/x86/kernel/cpu/sgx/encls.h | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/arch/x86/kernel/cpu/sgx/encls.h b/arch/x86/kernel/cpu/sgx/encls.h
index fa04a73daf9c..0e22fa8f77c5 100644
--- a/arch/x86/kernel/cpu/sgx/encls.h
+++ b/arch/x86/kernel/cpu/sgx/encls.h
@@ -136,57 +136,71 @@ static inline bool encls_failed(int ret)
 	ret;						\
 	})
 
+/* Initialize an EPC page into an SGX Enclave Control Structure (SECS) page. */
 static inline int __ecreate(struct sgx_pageinfo *pginfo, void *secs)
 {
 	return __encls_2(ECREATE, pginfo, secs);
 }
 
+/* Hash a 256 byte region of an enclave page to SECS:MRENCLAVE. */
 static inline int __eextend(void *secs, void *addr)
 {
 	return __encls_2(EEXTEND, secs, addr);
 }
 
+/*
+ * Associate an EPC page to an enclave either as a REG or TCS page
+ * populated with the provided data.
+ */
 static inline int __eadd(struct sgx_pageinfo *pginfo, void *addr)
 {
 	return __encls_2(EADD, pginfo, addr);
 }
 
+/* Finalize enclave build, initialize enclave for user code execution. */
 static inline int __einit(void *sigstruct, void *token, void *secs)
 {
 	return __encls_ret_3(EINIT, sigstruct, secs, token);
 }
 
+/* Disassociate EPC page from its enclave and mark it as unused. */
 static inline int __eremove(void *addr)
 {
 	return __encls_ret_1(EREMOVE, addr);
 }
 
+/* Copy data to an EPC page belonging to a debug enclave. */
 static inline int __edbgwr(void *addr, unsigned long *data)
 {
 	return __encls_2(EDGBWR, *data, addr);
 }
 
+/* Copy data from an EPC page belonging to a debug enclave. */
 static inline int __edbgrd(void *addr, unsigned long *data)
 {
 	return __encls_1_1(EDGBRD, *data, addr);
 }
 
+/* Track that software has completed the required TLB address clears. */
 static inline int __etrack(void *addr)
 {
 	return __encls_ret_1(ETRACK, addr);
 }
 
+/* Load, verify, and unblock an EPC page. */
 static inline int __eldu(struct sgx_pageinfo *pginfo, void *addr,
 			 void *va)
 {
 	return __encls_ret_3(ELDU, pginfo, addr, va);
 }
 
+/* Make EPC page inaccessible to enclave, ready to be written to memory. */
 static inline int __eblock(void *addr)
 {
 	return __encls_ret_1(EBLOCK, addr);
 }
 
+/* Initialize an EPC page into a Version Array (VA) page. */
 static inline int __epa(void *addr)
 {
 	unsigned long rbx = SGX_PAGE_TYPE_VA;
@@ -194,6 +208,7 @@ static inline int __epa(void *addr)
 	return __encls_2(EPA, rbx, addr);
 }
 
+/* Invalidate an EPC page and write it out to main memory. */
 static inline int __ewb(struct sgx_pageinfo *pginfo, void *addr,
 			void *va)
 {
-- 
2.35.1



* [RFC PATCH v2.1 02/30] x86/sgx: Add wrapper for SGX2 EMODPR function
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
@ 2022-03-04  9:34 ` Jarkko Sakkinen
  2022-03-04  9:34 ` [RFC PATCH v2.1 03/30] x86/sgx: Add wrapper for SGX2 EMODT function Jarkko Sakkinen
                   ` (29 subsequent siblings)
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:34 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Jarkko Sakkinen,
	Dave Hansen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

From: Reinette Chatre <reinette.chatre@intel.com>

Add a wrapper for the EMODPR ENCLS leaf function used to
restrict enclave page permissions as maintained in the
SGX hardware's Enclave Page Cache Map (EPCM).

EMODPR:
1) Updates the EPCM permissions of an enclave page by treating
   the new permissions as a mask - supplying a value that relaxes
   EPCM permissions has no effect.
2) Sets the PR bit in the EPCM entry of the enclave page to
   indicate that permission restriction is in progress. The bit
   is reset by the enclave by invoking ENCLU leaf function
   EACCEPT or EACCEPTCOPY.

The enclave may access the page throughout this entire process as
long as the access conforms to the EPCM permissions of the enclave page.

After performing the permission restriction by issuing EMODPR,
the kernel needs to collaborate with the hardware to ensure that
all logical processors see the new restricted permissions. This
is required for the enclave's EACCEPT/EACCEPTCOPY to succeed and
is accomplished with the ETRACK flow.

Expand enum sgx_return_code with the possible EMODPR return
values.
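
As a hypothetical sketch (not part of this patch; error handling and
locking are simplified, and the helper name example_restrict_to_read_only()
is made up), restricting an enclave page's EPCM permissions to read-only
could look as follows. The SECINFO flags act as a mask, so permissions
absent from it are removed; the ETRACK flow and the enclave's
EACCEPT/EACCEPTCOPY must still follow before relying on the restriction:

  static int example_restrict_to_read_only(struct sgx_encl_page *page)
  {
          /* SECINFO acts as a permission mask: keep only R. */
          struct sgx_secinfo secinfo = { .flags = SGX_SECINFO_R };
          int ret;

          ret = __emodpr(&secinfo, sgx_get_epc_virt_addr(page->epc_page));
          if (encls_failed(ret))
                  return -EFAULT; /* e.g. SGX_PAGE_NOT_MODIFIABLE */

          return 0;
  }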

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 arch/x86/include/asm/sgx.h      | 5 +++++
 arch/x86/kernel/cpu/sgx/encls.h | 6 ++++++
 2 files changed, 11 insertions(+)

diff --git a/arch/x86/include/asm/sgx.h b/arch/x86/include/asm/sgx.h
index 3f9334ef67cd..d67810b50a81 100644
--- a/arch/x86/include/asm/sgx.h
+++ b/arch/x86/include/asm/sgx.h
@@ -65,17 +65,22 @@ enum sgx_encls_function {
 
 /**
  * enum sgx_return_code - The return code type for ENCLS, ENCLU and ENCLV
+ * %SGX_EPC_PAGE_CONFLICT:	Page is being written by other ENCLS function.
  * %SGX_NOT_TRACKED:		Previous ETRACK's shootdown sequence has not
  *				been completed yet.
  * %SGX_CHILD_PRESENT		SECS has child pages present in the EPC.
  * %SGX_INVALID_EINITTOKEN:	EINITTOKEN is invalid and enclave signer's
  *				public key does not match IA32_SGXLEPUBKEYHASH.
+ * %SGX_PAGE_NOT_MODIFIABLE:	The EPC page cannot be modified because it
+ *				is in the PENDING or MODIFIED state.
  * %SGX_UNMASKED_EVENT:		An unmasked event, e.g. INTR, was received
  */
 enum sgx_return_code {
+	SGX_EPC_PAGE_CONFLICT		= 7,
 	SGX_NOT_TRACKED			= 11,
 	SGX_CHILD_PRESENT		= 13,
 	SGX_INVALID_EINITTOKEN		= 16,
+	SGX_PAGE_NOT_MODIFIABLE		= 20,
 	SGX_UNMASKED_EVENT		= 128,
 };
 
diff --git a/arch/x86/kernel/cpu/sgx/encls.h b/arch/x86/kernel/cpu/sgx/encls.h
index 0e22fa8f77c5..2b091912f038 100644
--- a/arch/x86/kernel/cpu/sgx/encls.h
+++ b/arch/x86/kernel/cpu/sgx/encls.h
@@ -215,4 +215,10 @@ static inline int __ewb(struct sgx_pageinfo *pginfo, void *addr,
 	return __encls_ret_3(EWB, pginfo, addr, va);
 }
 
+/* Restrict the EPCM permissions of an EPC page. */
+static inline int __emodpr(struct sgx_secinfo *secinfo, void *addr)
+{
+	return __encls_ret_2(EMODPR, secinfo, addr);
+}
+
 #endif /* _X86_ENCLS_H */
-- 
2.35.1



* [RFC PATCH v2.1 03/30] x86/sgx: Add wrapper for SGX2 EMODT function
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
  2022-03-04  9:34 ` [RFC PATCH v2.1 02/30] x86/sgx: Add wrapper for SGX2 EMODPR function Jarkko Sakkinen
@ 2022-03-04  9:34 ` Jarkko Sakkinen
  2022-03-04  9:34 ` [RFC PATCH v2.1 04/30] x86/sgx: Add wrapper for SGX2 EAUG function Jarkko Sakkinen
                   ` (28 subsequent siblings)
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:34 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Jarkko Sakkinen,
	Dave Hansen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

From: Reinette Chatre <reinette.chatre@intel.com>

Add a wrapper for the EMODT ENCLS leaf function used to
change the type of an enclave page as maintained in the
SGX hardware's Enclave Page Cache Map (EPCM).

EMODT:
1) Updates the EPCM page type of the enclave page.
2) Sets the MODIFIED bit in the EPCM entry of the enclave page.
   This bit is reset by the enclave by invoking ENCLU leaf
   function EACCEPT or EACCEPTCOPY.

Access from within the enclave to the enclave page is not possible
while the MODIFIED bit is set.

After changing the enclave page type by issuing EMODT the kernel
needs to collaborate with the hardware to ensure that no logical
processor continues to hold a reference to the changed page. This
is required both to ensure that no security checks are circumvented
and for the enclave's EACCEPT/EACCEPTCOPY to succeed.
Ensuring that no references to the changed page remain is
accomplished with the ETRACK flow.
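
A hypothetical sketch (not part of this patch; the helper name
example_trim_page() is made up) of changing a regular enclave page to
the trimmed type, with SGX_SECINFO_TRIM from asm/sgx.h encoding
SGX_PAGE_TYPE_TRIM in the SECINFO flags; the ETRACK flow and the
enclave's EACCEPT must still follow:

  static int example_trim_page(struct sgx_encl_page *page)
  {
          struct sgx_secinfo secinfo = { .flags = SGX_SECINFO_TRIM };
          int ret;

          ret = __emodt(&secinfo, sgx_get_epc_virt_addr(page->epc_page));

          return encls_failed(ret) ? -EFAULT : 0;
  }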

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 arch/x86/kernel/cpu/sgx/encls.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/kernel/cpu/sgx/encls.h b/arch/x86/kernel/cpu/sgx/encls.h
index 2b091912f038..7a1ecf704ec1 100644
--- a/arch/x86/kernel/cpu/sgx/encls.h
+++ b/arch/x86/kernel/cpu/sgx/encls.h
@@ -221,4 +221,10 @@ static inline int __emodpr(struct sgx_secinfo *secinfo, void *addr)
 	return __encls_ret_2(EMODPR, secinfo, addr);
 }
 
+/* Change the type of an EPC page. */
+static inline int __emodt(struct sgx_secinfo *secinfo, void *addr)
+{
+	return __encls_ret_2(EMODT, secinfo, addr);
+}
+
 #endif /* _X86_ENCLS_H */
-- 
2.35.1



* [RFC PATCH v2.1 04/30] x86/sgx: Add wrapper for SGX2 EAUG function
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
  2022-03-04  9:34 ` [RFC PATCH v2.1 02/30] x86/sgx: Add wrapper for SGX2 EMODPR function Jarkko Sakkinen
  2022-03-04  9:34 ` [RFC PATCH v2.1 03/30] x86/sgx: Add wrapper for SGX2 EMODT function Jarkko Sakkinen
@ 2022-03-04  9:34 ` Jarkko Sakkinen
  2022-03-04  9:34 ` [RFC PATCH v2.1 05/30] Documentation/x86: Document SGX permission details Jarkko Sakkinen
                   ` (27 subsequent siblings)
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:34 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Jarkko Sakkinen,
	Dave Hansen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

From: Reinette Chatre <reinette.chatre@intel.com>

Add a wrapper for the EAUG ENCLS leaf function used to
add a page to an initialized enclave.

EAUG:
1) Stores all properties of the new enclave page in the SGX
   hardware's Enclave Page Cache Map (EPCM).
2) Sets the PENDING bit in the EPCM entry of the enclave page.
   This bit is cleared by the enclave by invoking ENCLU leaf
   function EACCEPT or EACCEPTCOPY.

Access from within the enclave to the new enclave page is not
possible until the PENDING bit is cleared.
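
A hypothetical sketch (not part of this patch; allocation, locking and
error handling are omitted, and example_eaug_page() is a made-up name)
of adding a dynamically allocated EPC page to an initialized enclave at
the page-aligned enclave address encl_addr:

  static int example_eaug_page(struct sgx_encl *encl,
                               struct sgx_epc_page *epc_page,
                               unsigned long encl_addr)
  {
          struct sgx_pageinfo pginfo = {
                  .addr     = encl_addr,
                  .contents = 0,  /* EAUG takes no source page ... */
                  .metadata = 0,  /* ... and no SECINFO/PCMD */
                  .secs     = (unsigned long)sgx_get_epc_virt_addr(
                                          encl->secs.epc_page),
          };

          return __eaug(&pginfo, sgx_get_epc_virt_addr(epc_page)) ? -EFAULT : 0;
  }

The page then stays in the PENDING state until the enclave runs
EACCEPT on it.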

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 arch/x86/kernel/cpu/sgx/encls.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/kernel/cpu/sgx/encls.h b/arch/x86/kernel/cpu/sgx/encls.h
index 7a1ecf704ec1..99004b02e2ed 100644
--- a/arch/x86/kernel/cpu/sgx/encls.h
+++ b/arch/x86/kernel/cpu/sgx/encls.h
@@ -227,4 +227,10 @@ static inline int __emodt(struct sgx_secinfo *secinfo, void *addr)
 	return __encls_ret_2(EMODT, secinfo, addr);
 }
 
+/* Zero a page of EPC memory and add it to an initialized enclave. */
+static inline int __eaug(struct sgx_pageinfo *pginfo, void *addr)
+{
+	return __encls_2(EAUG, pginfo, addr);
+}
+
 #endif /* _X86_ENCLS_H */
-- 
2.35.1



* [RFC PATCH v2.1 05/30] Documentation/x86: Document SGX permission details
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (2 preceding siblings ...)
  2022-03-04  9:34 ` [RFC PATCH v2.1 04/30] x86/sgx: Add wrapper for SGX2 EAUG function Jarkko Sakkinen
@ 2022-03-04  9:34 ` Jarkko Sakkinen
  2022-03-04  9:35 ` [RFC PATCH v2.1 06/30] x86/sgx: Support VMA permissions more relaxed than enclave permissions Jarkko Sakkinen
                   ` (26 subsequent siblings)
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:34 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Andy Lutomirski,
	Jarkko Sakkinen, Dave Hansen, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, Jonathan Corbet,
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	open list:DOCUMENTATION

From: Reinette Chatre <reinette.chatre@intel.com>

Provide a summary of the various permissions involved in
managing access to enclave pages. This summary documents
the foundation for additions related to the runtime management
of enclave page permissions made possible with SGX2.

Suggested-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 Documentation/x86/sgx.rst | 28 +++++++++++++++++++++++-----
 1 file changed, 23 insertions(+), 5 deletions(-)

diff --git a/Documentation/x86/sgx.rst b/Documentation/x86/sgx.rst
index 265568a9292c..89ff924b1480 100644
--- a/Documentation/x86/sgx.rst
+++ b/Documentation/x86/sgx.rst
@@ -71,16 +71,34 @@ The processor tracks EPC pages in a hardware metadata structure called the
 which describes the owning enclave, access rights and page type among the other
 things.
 
-EPCM permissions are separate from the normal page tables.  This prevents the
-kernel from, for instance, allowing writes to data which an enclave wishes to
-remain read-only.  EPCM permissions may only impose additional restrictions on
-top of normal x86 page permissions.
-
 For all intents and purposes, the SGX architecture allows the processor to
 invalidate all EPCM entries at will.  This requires that software be prepared to
 handle an EPCM fault at any time.  In practice, this can happen on events like
 power transitions when the ephemeral key that encrypts enclave memory is lost.
 
+Details about enclave page permissions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+EPCM permissions are separate from the normal page tables.  This prevents the
+kernel from, for instance, allowing writes to data which an enclave wishes
+to remain read-only.
+
+Three permission masks are relevant to SGX:
+
+* EPCM permissions.
+* Page Table Entry (PTE) permissions.
+* Virtual Memory Area (VMA) permissions.
+
+An enclave is only able to access an enclave page if all three permission
+masks enable it to do so.
+
+The relationships between the different permission masks are:
+
+* An SGX VMA can only be created if its permissions are the same or weaker
+  than the EPCM permissions.
+* PTEs are installed to match the EPCM permissions, but not be more
+  relaxed than the VMA permissions.
+
 Application interface
 =====================
 
-- 
2.35.1



* [RFC PATCH v2.1 06/30] x86/sgx: Support VMA permissions more relaxed than enclave permissions
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (3 preceding siblings ...)
  2022-03-04  9:34 ` [RFC PATCH v2.1 05/30] Documentation/x86: Document SGX permission details Jarkko Sakkinen
@ 2022-03-04  9:35 ` Jarkko Sakkinen
  2022-03-04  9:35 ` [RFC PATCH v2.1 07/30] x86/sgx: Add pfn_mkwrite() handler for present PTEs Jarkko Sakkinen
                   ` (25 subsequent siblings)
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:35 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Jarkko Sakkinen,
	Dave Hansen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, Jonathan Corbet,
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	open list:DOCUMENTATION

From: Reinette Chatre <reinette.chatre@intel.com>

=== Summary ===

An SGX VMA can only be created if its permissions are the same or
weaker than the Enclave Page Cache Map (EPCM) permissions. After VMA
creation this same rule is again enforced by the page fault handler:
faulted enclave pages are required to have equal or more relaxed
EPCM permissions than the VMA permissions.

On SGX1 systems the additional enforcement in the page fault handler
is redundant and on SGX2 systems it incorrectly prevents access.
On SGX1 systems it is unnecessary to repeat the enforcement of the
permission rule. The rule used during original VMA creation will
ensure that any access attempt will use correct permissions.
With SGX2 the EPCM permissions of a page can change after VMA
creation resulting in the VMA permissions potentially being more
relaxed than the EPCM permissions and the page fault handler
incorrectly blocking valid access attempts.

Enable the VMA's pages to remain accessible while ensuring that
the PTEs are installed to match the EPCM permissions but not be
more relaxed than the VMA permissions.

=== Full Changelog ===

An SGX enclave is an area of memory where parts of an application
can reside. First an enclave is created and loaded (from
non-enclave memory) with the code and data of an application,
then user space can map (mmap()) the enclave memory to
be able to enter the enclave at its defined entry points for
execution within it.

The hardware maintains a secure structure, the Enclave Page Cache Map
(EPCM), that tracks the contents of the enclave. Of interest here is
its tracking of the enclave page permissions. When a page is loaded
into the enclave its permissions are specified and recorded in the
EPCM. In parallel the kernel maintains permissions within the
page table entries (PTEs) and the rule is that PTE permissions
are not allowed to be more relaxed than the EPCM permissions.

A new mapping (mmap()) of enclave memory can only succeed if the
mapping has the same or weaker permissions than the permissions that
were vetted during enclave creation. This is enforced by
sgx_encl_may_map() that is called on the mmap() as well as mprotect()
paths. This rule remains.

One feature of SGX2 is to support the modification of EPCM permissions
after enclave initialization. Enclave pages may thus already be part
of a VMA at the time their EPCM permissions are changed resulting
in the VMA's permissions potentially being more relaxed than the EPCM
permissions.

Allow permissions of existing VMAs to be more relaxed than EPCM
permissions in preparation for dynamic EPCM permission changes
made possible in SGX2.  New VMAs that attempt to have more relaxed
permissions than EPCM permissions continue to be unsupported.

Reasons why permissions of existing VMAs are allowed to be more relaxed
than EPCM permissions instead of dynamically changing VMA permissions
when EPCM permissions change are:
1) Changing VMA permissions involves splitting VMAs, which is an
   operation that can fail. Additionally changing EPCM permissions of
   a range of pages could also fail on any of the pages involved.
   Handling these error cases causes problems. For example, if an
   EPCM permission change fails and the VMA has already been split
   then it is not possible to undo the VMA split nor possible to
   undo the EPCM permission changes that did succeed before the
   failure.
2) The kernel has little insight into the user space from which EPCM
   permissions are controlled. For example, a RW page may be made RO
   just before it is made RX; splitting the VMAs for a state that may
   change again soon is unnecessary.

Remove the extra permission check called on a page fault
(vm_operations_struct->fault) or during debugging
(vm_operations_struct->access) when loading the enclave page from swap
that ensures that the VMA permissions are not more relaxed than the
EPCM permissions. Since a VMA could only exist if it passed the
original permission checks during mmap() and a VMA may indeed
have more relaxed permissions than the EPCM permissions this extra
permission check is no longer appropriate.

With the permission check removed, ensure that PTEs do not blindly
inherit the VMA permissions but instead reflect the permissions that
the VMA and EPCM agree on. PTEs for pages that are writable from both
the VMA and enclave perspective are installed with the writable bit
set, limiting the additional fault handling introduced next to the
permission mismatch cases.
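
For illustration, the intersection performed by the fault handler in
this patch, with hypothetical example values (the EPCM permissions of
the page were restricted to read-only after a RW VMA was created):

  /* from the VMA:  vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC) */
  unsigned long vm_prot_bits   = VM_READ | VM_WRITE;
  /* from the EPCM: entry->vm_max_prot_bits */
  unsigned long vm_max_prot    = VM_READ;
  /* PTE permissions: what the VMA and EPCM agree on == VM_READ */
  unsigned long page_prot_bits = vm_max_prot & vm_prot_bits;

The PTE is thus installed read-only even though the VMA is writable;
a write attempt then faults again instead of silently exceeding the
EPCM permissions.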

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 Documentation/x86/sgx.rst      | 10 +++++++++
 arch/x86/kernel/cpu/sgx/encl.c | 38 ++++++++++++++++++----------------
 2 files changed, 30 insertions(+), 18 deletions(-)

diff --git a/Documentation/x86/sgx.rst b/Documentation/x86/sgx.rst
index 89ff924b1480..5659932728a5 100644
--- a/Documentation/x86/sgx.rst
+++ b/Documentation/x86/sgx.rst
@@ -99,6 +99,16 @@ The relationships between the different permission masks are:
 * PTEs are installed to match the EPCM permissions, but not be more
   relaxed than the VMA permissions.
 
+On systems supporting SGX2 EPCM permissions may change while the
+enclave page belongs to a VMA without impacting the VMA permissions.
+This means that a running VMA may appear to allow access to an enclave
+page that is not allowed by its EPCM permissions. For example, when an
+enclave page with RW EPCM permissions is mapped by a RW VMA but is
+subsequently changed to have read-only EPCM permissions. The kernel
+continues to maintain correct access to the enclave page through the
+PTE that will ensure that only access allowed by both the VMA
+and EPCM permissions are permitted.
+
 Application interface
 =====================
 
diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 001808e3901c..20e97d3abdce 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -91,10 +91,8 @@ static struct sgx_epc_page *sgx_encl_eldu(struct sgx_encl_page *encl_page,
 }
 
 static struct sgx_encl_page *sgx_encl_load_page(struct sgx_encl *encl,
-						unsigned long addr,
-						unsigned long vm_flags)
+						unsigned long addr)
 {
-	unsigned long vm_prot_bits = vm_flags & (VM_READ | VM_WRITE | VM_EXEC);
 	struct sgx_epc_page *epc_page;
 	struct sgx_encl_page *entry;
 
@@ -102,14 +100,6 @@ static struct sgx_encl_page *sgx_encl_load_page(struct sgx_encl *encl,
 	if (!entry)
 		return ERR_PTR(-EFAULT);
 
-	/*
-	 * Verify that the faulted page has equal or higher build time
-	 * permissions than the VMA permissions (i.e. the subset of {VM_READ,
-	 * VM_WRITE, VM_EXECUTE} in vma->vm_flags).
-	 */
-	if ((entry->vm_max_prot_bits & vm_prot_bits) != vm_prot_bits)
-		return ERR_PTR(-EFAULT);
-
 	/* Entry successfully located. */
 	if (entry->epc_page) {
 		if (entry->desc & SGX_ENCL_PAGE_BEING_RECLAIMED)
@@ -138,7 +128,9 @@ static vm_fault_t sgx_vma_fault(struct vm_fault *vmf)
 {
 	unsigned long addr = (unsigned long)vmf->address;
 	struct vm_area_struct *vma = vmf->vma;
+	unsigned long page_prot_bits;
 	struct sgx_encl_page *entry;
+	unsigned long vm_prot_bits;
 	unsigned long phys_addr;
 	struct sgx_encl *encl;
 	vm_fault_t ret;
@@ -155,7 +147,7 @@ static vm_fault_t sgx_vma_fault(struct vm_fault *vmf)
 
 	mutex_lock(&encl->lock);
 
-	entry = sgx_encl_load_page(encl, addr, vma->vm_flags);
+	entry = sgx_encl_load_page(encl, addr);
 	if (IS_ERR(entry)) {
 		mutex_unlock(&encl->lock);
 
@@ -167,7 +159,19 @@ static vm_fault_t sgx_vma_fault(struct vm_fault *vmf)
 
 	phys_addr = sgx_get_epc_phys_addr(entry->epc_page);
 
-	ret = vmf_insert_pfn(vma, addr, PFN_DOWN(phys_addr));
+	/*
+	 * Insert PTE to match the EPCM page permissions ensured to not
+	 * exceed the VMA permissions.
+	 */
+	vm_prot_bits = vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC);
+	page_prot_bits = entry->vm_max_prot_bits & vm_prot_bits;
+	/*
+	 * Add VM_SHARED so that PTE is made writable right away if VMA
+	 * and EPCM are writable (no COW in SGX).
+	 */
+	page_prot_bits |= (vma->vm_flags & VM_SHARED);
+	ret = vmf_insert_pfn_prot(vma, addr, PFN_DOWN(phys_addr),
+				  vm_get_page_prot(page_prot_bits));
 	if (ret != VM_FAULT_NOPAGE) {
 		mutex_unlock(&encl->lock);
 
@@ -295,15 +299,14 @@ static int sgx_encl_debug_write(struct sgx_encl *encl, struct sgx_encl_page *pag
  * Load an enclave page to EPC if required, and take encl->lock.
  */
 static struct sgx_encl_page *sgx_encl_reserve_page(struct sgx_encl *encl,
-						   unsigned long addr,
-						   unsigned long vm_flags)
+						   unsigned long addr)
 {
 	struct sgx_encl_page *entry;
 
 	for ( ; ; ) {
 		mutex_lock(&encl->lock);
 
-		entry = sgx_encl_load_page(encl, addr, vm_flags);
+		entry = sgx_encl_load_page(encl, addr);
 		if (PTR_ERR(entry) != -EBUSY)
 			break;
 
@@ -339,8 +342,7 @@ static int sgx_vma_access(struct vm_area_struct *vma, unsigned long addr,
 		return -EFAULT;
 
 	for (i = 0; i < len; i += cnt) {
-		entry = sgx_encl_reserve_page(encl, (addr + i) & PAGE_MASK,
-					      vma->vm_flags);
+		entry = sgx_encl_reserve_page(encl, (addr + i) & PAGE_MASK);
 		if (IS_ERR(entry)) {
 			ret = PTR_ERR(entry);
 			break;
-- 
2.35.1



* [RFC PATCH v2.1 07/30] x86/sgx: Add pfn_mkwrite() handler for present PTEs
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (4 preceding siblings ...)
  2022-03-04  9:35 ` [RFC PATCH v2.1 06/30] x86/sgx: Support VMA permissions more relaxed than enclave permissions Jarkko Sakkinen
@ 2022-03-04  9:35 ` Jarkko Sakkinen
  2022-03-04  9:35 ` [RFC PATCH v2.1 08/30] x86/sgx: Export sgx_encl_ewb_cpumask() Jarkko Sakkinen
                   ` (24 subsequent siblings)
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:35 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Jarkko Sakkinen,
	Dave Hansen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

From: Reinette Chatre <reinette.chatre@intel.com>

By default a write page fault on a present PTE inherits the
permissions of the VMA.

When using SGX2, enclave page permissions maintained in the
hardware's Enclave Page Cache Map (EPCM) may change after a VMA
accessing the page is created. A VMA's permissions may thus become
more relaxed than the EPCM permissions even though the VMA could not
have been created that way. Following the default behavior during a
page fault on a present PTE while the VMA permissions are more relaxed
than the EPCM permissions would result in the PTE for an enclave page
being made writable even though the page is not writable according to
the EPCM permissions.

The kernel should not allow writing to a page if that page is not
writable: the PTE should accurately reflect the EPCM permissions
while not being more relaxed than the VMA permissions.

Do not blindly accept VMA permissions on a page fault due to a
write attempt to a present PTE. Install a pfn_mkwrite() handler
that ensures that the VMA permissions agree with the EPCM
permissions in this regard.

Before and after page fault flow scenarios
==========================================

Consider the following scenario that will be possible when using SGX2:
* An enclave page exists with RW EPCM permissions.
* A RW VMA maps the range spanning the enclave page.
* The enclave page's EPCM permissions are changed to read-only.
* There is no PTE for the enclave page.

Considering that the PTE is not present in the scenario,
user space will observe the following when attempting to write to the
enclave page from within the enclave:
 1) Instruction writing to enclave page is run from within the enclave.
 2) A page fault with the second and third bits set (0x6) is
    encountered and handled by the SGX fault handler sgx_vma_fault(),
    which, following the previous patch, installs a PTE with the
    permissions that the VMA and EPCM agree on (read-only in this
    case).
 3) Instruction writing to enclave page is re-attempted.
 4) A page fault with the first three bits set (0x7) is encountered
    and transparently (from the SGX driver and user space perspective)
    handled by the kernel, with the PTE made writable because the VMA
    is writable.
 5) Instruction writing to enclave page is re-attempted.
 6) Since the EPCM permissions prevent writing to the page, a new page
    fault is encountered, this time with the SGX flag set in the error
    code (0x8007). The kernel takes no action for this page fault and
    execution returns to user space.
 7) Typically such a fault would be passed on to the application with
    a signal. If the enclave is entered with the vDSO function provided
    by the kernel, however, user space does not receive a signal;
    instead the vDSO function returns successfully with the exception
    information (vector=14, error code=0x8007, and address) in the
    exception fields of the vDSO function's struct sgx_enclave_run.

As can be observed, it is not possible for user space to write to an
enclave page if that page's EPCM permissions do not allow it,
no matter what the VMA or PTE allows.

Even so, the kernel should not allow writing to a page if that page is
not writable. The PTE should accurately reflect the EPCM permissions.

With a pfn_mkwrite() handler that ensures that the VMA permissions
agree with the EPCM permissions user space observes the following
when attempting to write to the enclave page from within the enclave:
 1) Instruction writing to enclave page is run from within the enclave.
 2) A page fault with the second and third bits set (0x6) is
    encountered and handled by the SGX fault handler sgx_vma_fault(),
    which, following the previous patch, installs a PTE with the
    permissions that the VMA and EPCM agree on (read-only in this
    case).
 3) Instruction writing to enclave page is re-attempted.
 4) A page fault with the first three bits set (0x7) is encountered
    and passed to the pfn_mkwrite() handler for consideration. The
    handler determines that the page should not be writable and
    returns SIGBUS.
 5) Typically such a fault would be passed on to the application with
    a signal. If the enclave is entered with the vDSO function provided
    by the kernel, however, user space does not receive a signal;
    instead the vDSO function returns successfully with the exception
    information (vector=14, error code=0x7, and address) in the
    exception fields of the vDSO function's struct sgx_enclave_run.

The accurate exception information supports the SGX runtime, which is
virtually always implemented inside a shared library, in its
management of the SGX enclave.

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 arch/x86/kernel/cpu/sgx/encl.c | 42 ++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 20e97d3abdce..6d25f7ed1294 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -184,6 +184,47 @@ static vm_fault_t sgx_vma_fault(struct vm_fault *vmf)
 	return VM_FAULT_NOPAGE;
 }
 
+/*
+ * A fault occurred while writing to a present enclave PTE. Since PTE is
+ * present this will not be handled by sgx_vma_fault(). VMA may allow
+ * writing to the page while enclave (as based on EPCM permissions) does
+ * not. Do not follow the default of inheriting VMA permissions in this
+ * regard, ensure enclave also allows writing to the page.
+ */
+static vm_fault_t sgx_vma_pfn_mkwrite(struct vm_fault *vmf)
+{
+	unsigned long addr = (unsigned long)vmf->address;
+	struct vm_area_struct *vma = vmf->vma;
+	struct sgx_encl_page *entry;
+	struct sgx_encl *encl;
+	vm_fault_t ret = 0;
+
+	encl = vma->vm_private_data;
+
+	/*
+	 * It's very unlikely but possible that allocating memory for the
+	 * mm_list entry of a forked process failed in sgx_vma_open(). When
+	 * this happens, vm_private_data is set to NULL.
+	 */
+	if (unlikely(!encl))
+		return VM_FAULT_SIGBUS;
+
+	mutex_lock(&encl->lock);
+
+	entry = xa_load(&encl->page_array, PFN_DOWN(addr));
+	if (!entry) {
+		ret = VM_FAULT_SIGBUS;
+		goto out;
+	}
+
+	if (!(entry->vm_max_prot_bits & VM_WRITE))
+		ret = VM_FAULT_SIGBUS;
+
+out:
+	mutex_unlock(&encl->lock);
+	return ret;
+}
+
 static void sgx_vma_open(struct vm_area_struct *vma)
 {
 	struct sgx_encl *encl = vma->vm_private_data;
@@ -381,6 +422,7 @@ const struct vm_operations_struct sgx_vm_ops = {
 	.mprotect = sgx_vma_mprotect,
 	.open = sgx_vma_open,
 	.access = sgx_vma_access,
+	.pfn_mkwrite = sgx_vma_pfn_mkwrite,
 };
 
 /**
-- 
2.35.1



* [RFC PATCH v2.1 08/30] x86/sgx: Export sgx_encl_ewb_cpumask()
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (5 preceding siblings ...)
  2022-03-04  9:35 ` [RFC PATCH v2.1 07/30] x86/sgx: Add pfn_mkwrite() handler for present PTEs Jarkko Sakkinen
@ 2022-03-04  9:35 ` Jarkko Sakkinen
  2022-03-04  9:35 ` [RFC PATCH v2.1 09/30] x86/sgx: Rename sgx_encl_ewb_cpumask() as sgx_encl_cpumask() Jarkko Sakkinen
                   ` (23 subsequent siblings)
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:35 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Jarkko Sakkinen,
	Dave Hansen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

From: Reinette Chatre <reinette.chatre@intel.com>

Using sgx_encl_ewb_cpumask() to learn which CPUs might have executed
an enclave is useful to ensure that TLBs are cleared when changes are
made to enclave pages.

sgx_encl_ewb_cpumask() is currently used within the reclaimer when an
enclave page is evicted. The upcoming SGX2 support enables changes to
be made to enclave pages and requires that TLBs no longer refer to the
changed pages, and will thus also need sgx_encl_ewb_cpumask().

Relocate sgx_encl_ewb_cpumask() to be with the rest of the enclave
code in encl.c now that it is no longer unique to the reclaimer.

Take care to ensure that any future usage maintains the
current context requirement that ETRACK has been called first.
Expand the existing comments to highlight this while moving them
to a more prominent location before the function.

No functional change.
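
As an illustrative sketch (simplified; error handling is omitted,
sgx_ipi_cb() is assumed to be reachable from the calling code, and the
caller is assumed to hold encl->lock), the flow this function supports
looks roughly like this:

  /* 1) Start hardware tracking of threads inside the enclave. */
  __etrack(sgx_get_epc_virt_addr(encl->secs.epc_page));

  /*
   * 2) + 3) Kick CPUs that might be executing the enclave, flushing
   * their cached linear-to-physical address mappings.
   */
  on_each_cpu_mask(sgx_encl_ewb_cpumask(encl), sgx_ipi_cb, NULL, 1);

  /* 4) Execute the SGX function, e.g. ENCLS[EWB]. */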

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 arch/x86/kernel/cpu/sgx/encl.c | 67 ++++++++++++++++++++++++++++++++++
 arch/x86/kernel/cpu/sgx/encl.h |  1 +
 arch/x86/kernel/cpu/sgx/main.c | 29 ---------------
 3 files changed, 68 insertions(+), 29 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 6d25f7ed1294..353866e1b16f 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -595,6 +595,73 @@ int sgx_encl_mm_add(struct sgx_encl *encl, struct mm_struct *mm)
 	return 0;
 }
 
+/**
+ * sgx_encl_ewb_cpumask() - Query which CPUs might be accessing the enclave
+ * @encl: the enclave
+ *
+ * Some SGX functions require that no cached linear-to-physical address
+ * mappings are present before they can succeed. For example, ENCLS[EWB]
+ * copies a page from the enclave page cache to regular main memory but
+ * it fails if it cannot ensure that there are no cached
+ * linear-to-physical address mappings referring to the page.
+ *
+ * SGX hardware flushes all cached linear-to-physical mappings on a CPU
+ * when an enclave is exited via ENCLU[EEXIT] or an Asynchronous Enclave
+ * Exit (AEX). Exiting an enclave will thus ensure cached linear-to-physical
+ * address mappings are cleared but coordination with the tracking done within
+ * the SGX hardware is needed to support the SGX functions that depend on this
+ * cache clearing.
+ *
+ * When the ENCLS[ETRACK] function is issued on an enclave the hardware
+ * tracks threads operating inside the enclave at that time. The SGX
+ * hardware tracking require that all the identified threads must have
+ * exited the enclave in order to flush the mappings before a function such
+ * as ENCLS[EWB] will be permitted
+ *
+ * The following flow is used to support SGX functions that require that
+ * no cached linear-to-physical address mappings are present:
+ * 1) Execute ENCLS[ETRACK] to initiate hardware tracking.
+ * 2) Use this function (sgx_encl_ewb_cpumask()) to query which CPUs might be
+ *    accessing the enclave.
+ * 3) Send IPI to identified CPUs, kicking them out of the enclave and
+ *    thus flushing all locally cached linear-to-physical address mappings.
+ * 4) Execute SGX function.
+ *
+ * Context: It is required to call this function after ENCLS[ETRACK].
+ *          This will ensure that if any new mm appears (racing with
+ *          sgx_encl_mm_add()) then the new mm will enter into the
+ *          enclave with fresh linear-to-physical address mappings.
+ *
+ *          It is required that all IPIs are completed before a new
+ *          ENCLS[ETRACK] is issued so be sure to protect steps 1 to 3
+ *          of the above flow with the enclave's mutex.
+ *
+ * Return: cpumask of CPUs that might be accessing @encl
+ */
+const cpumask_t *sgx_encl_ewb_cpumask(struct sgx_encl *encl)
+{
+	cpumask_t *cpumask = &encl->cpumask;
+	struct sgx_encl_mm *encl_mm;
+	int idx;
+
+	cpumask_clear(cpumask);
+
+	idx = srcu_read_lock(&encl->srcu);
+
+	list_for_each_entry_rcu(encl_mm, &encl->mm_list, list) {
+		if (!mmget_not_zero(encl_mm->mm))
+			continue;
+
+		cpumask_or(cpumask, cpumask, mm_cpumask(encl_mm->mm));
+
+		mmput_async(encl_mm->mm);
+	}
+
+	srcu_read_unlock(&encl->srcu, idx);
+
+	return cpumask;
+}
+
 static struct page *sgx_encl_get_backing_page(struct sgx_encl *encl,
 					      pgoff_t index)
 {
diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index fec43ca65065..2b15615be8c6 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -105,6 +105,7 @@ int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
 
 void sgx_encl_release(struct kref *ref);
 int sgx_encl_mm_add(struct sgx_encl *encl, struct mm_struct *mm);
+const cpumask_t *sgx_encl_ewb_cpumask(struct sgx_encl *encl);
 int sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index,
 			 struct sgx_backing *backing);
 void sgx_encl_put_backing(struct sgx_backing *backing, bool do_write);
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 4b41efc9e367..d481e8b0e7bc 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -203,35 +203,6 @@ static void sgx_ipi_cb(void *info)
 {
 }
 
-static const cpumask_t *sgx_encl_ewb_cpumask(struct sgx_encl *encl)
-{
-	cpumask_t *cpumask = &encl->cpumask;
-	struct sgx_encl_mm *encl_mm;
-	int idx;
-
-	/*
-	 * Can race with sgx_encl_mm_add(), but ETRACK has already been
-	 * executed, which means that the CPUs running in the new mm will enter
-	 * into the enclave with a fresh epoch.
-	 */
-	cpumask_clear(cpumask);
-
-	idx = srcu_read_lock(&encl->srcu);
-
-	list_for_each_entry_rcu(encl_mm, &encl->mm_list, list) {
-		if (!mmget_not_zero(encl_mm->mm))
-			continue;
-
-		cpumask_or(cpumask, cpumask, mm_cpumask(encl_mm->mm));
-
-		mmput_async(encl_mm->mm);
-	}
-
-	srcu_read_unlock(&encl->srcu, idx);
-
-	return cpumask;
-}
-
 /*
  * Swap page to the regular memory transformed to the blocked state by using
  * EBLOCK, which means that it can no longer be referenced (no new TLB entries).
-- 
2.35.1



* [RFC PATCH v2.1 09/30] x86/sgx: Rename sgx_encl_ewb_cpumask() as sgx_encl_cpumask()
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (6 preceding siblings ...)
  2022-03-04  9:35 ` [RFC PATCH v2.1 08/30] x86/sgx: Export sgx_encl_ewb_cpumask() Jarkko Sakkinen
@ 2022-03-04  9:35 ` Jarkko Sakkinen
  2022-03-04  9:35 ` [RFC PATCH v2.1 10/30] x86/sgx: Move PTE zap code to new sgx_zap_enclave_ptes() Jarkko Sakkinen
                   ` (22 subsequent siblings)
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:35 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Jarkko Sakkinen,
	Dave Hansen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

From: Reinette Chatre <reinette.chatre@intel.com>

sgx_encl_ewb_cpumask() is no longer unique to the reclaimer, where it
is used during the EWB ENCLS leaf function, when EPC pages are written
out to main memory, to learn which CPUs might have executed the
enclave and so ensure that TLBs are cleared.

Upcoming SGX2 enabling will use sgx_encl_ewb_cpumask() during the
EMODPR and EMODT ENCLS leaf functions that make changes to enclave
pages. The function is needed for the same reason it is used now: to
learn which CPUs might have executed the enclave to ensure that TLBs
no longer point to the changed pages.

Rename sgx_encl_ewb_cpumask() to sgx_encl_cpumask() to reflect the
broader usage.

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 arch/x86/kernel/cpu/sgx/encl.c | 6 +++---
 arch/x86/kernel/cpu/sgx/encl.h | 2 +-
 arch/x86/kernel/cpu/sgx/main.c | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 353866e1b16f..54716886e804 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -596,7 +596,7 @@ int sgx_encl_mm_add(struct sgx_encl *encl, struct mm_struct *mm)
 }
 
 /**
- * sgx_encl_ewb_cpumask() - Query which CPUs might be accessing the enclave
+ * sgx_encl_cpumask() - Query which CPUs might be accessing the enclave
  * @encl: the enclave
  *
  * Some SGX functions require that no cached linear-to-physical address
@@ -621,7 +621,7 @@ int sgx_encl_mm_add(struct sgx_encl *encl, struct mm_struct *mm)
  * The following flow is used to support SGX functions that require that
  * no cached linear-to-physical address mappings are present:
  * 1) Execute ENCLS[ETRACK] to initiate hardware tracking.
- * 2) Use this function (sgx_encl_ewb_cpumask()) to query which CPUs might be
+ * 2) Use this function (sgx_encl_cpumask()) to query which CPUs might be
  *    accessing the enclave.
  * 3) Send IPI to identified CPUs, kicking them out of the enclave and
  *    thus flushing all locally cached linear-to-physical address mappings.
@@ -638,7 +638,7 @@ int sgx_encl_mm_add(struct sgx_encl *encl, struct mm_struct *mm)
  *
  * Return: cpumask of CPUs that might be accessing @encl
  */
-const cpumask_t *sgx_encl_ewb_cpumask(struct sgx_encl *encl)
+const cpumask_t *sgx_encl_cpumask(struct sgx_encl *encl)
 {
 	cpumask_t *cpumask = &encl->cpumask;
 	struct sgx_encl_mm *encl_mm;
diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index 2b15615be8c6..817edefa884a 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -105,7 +105,7 @@ int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
 
 void sgx_encl_release(struct kref *ref);
 int sgx_encl_mm_add(struct sgx_encl *encl, struct mm_struct *mm);
-const cpumask_t *sgx_encl_ewb_cpumask(struct sgx_encl *encl);
+const cpumask_t *sgx_encl_cpumask(struct sgx_encl *encl);
 int sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index,
 			 struct sgx_backing *backing);
 void sgx_encl_put_backing(struct sgx_backing *backing, bool do_write);
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index d481e8b0e7bc..60b166bff7b4 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -249,7 +249,7 @@ static void sgx_encl_ewb(struct sgx_epc_page *epc_page,
 			 * miss cpus that entered the enclave between
 			 * generating the mask and incrementing epoch.
 			 */
-			on_each_cpu_mask(sgx_encl_ewb_cpumask(encl),
+			on_each_cpu_mask(sgx_encl_cpumask(encl),
 					 sgx_ipi_cb, NULL, 1);
 			ret = __sgx_encl_ewb(epc_page, va_slot, backing);
 		}
-- 
2.35.1



* [RFC PATCH v2.1 10/30] x86/sgx: Move PTE zap code to new sgx_zap_enclave_ptes()
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (7 preceding siblings ...)
  2022-03-04  9:35 ` [RFC PATCH v2.1 09/30] x86/sgx: Rename sgx_encl_ewb_cpumask() as sgx_encl_cpumask() Jarkko Sakkinen
@ 2022-03-04  9:35 ` Jarkko Sakkinen
  2022-03-04  9:35 ` [RFC PATCH v2.1 11/30] x86/sgx: Make sgx_ipi_cb() available internally Jarkko Sakkinen
                   ` (21 subsequent siblings)
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:35 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Jarkko Sakkinen,
	Dave Hansen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

From: Reinette Chatre <reinette.chatre@intel.com>

The SGX reclaimer removes page table entries pointing to pages that are
moved to swap.

SGX2 enables changes to pages belonging to an initialized enclave, thus
enclave pages may have their permission or type changed while the page
is being accessed by an enclave. Supporting SGX2 requires page table
entries to be removed so that any cached mappings to changed pages
are removed. For example, with the ability to change enclave page types
a regular enclave page may be changed to a Thread Control Structure
(TCS) page that may not be accessed by an enclave.

Factor out the code removing page table entries to a separate function
sgx_zap_enclave_ptes(), fixing accuracy of comments in the process,
and make it available to the upcoming SGX2 code.

Place sgx_zap_enclave_ptes() with the rest of the enclave code in
encl.c interacting with the page table since this code is no longer
unique to the reclaimer.
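
A hypothetical usage sketch (not part of this patch; the helper name
example_after_page_change() is made up): after an enclave page has been
changed, remove any PTEs mapping it so that subsequent accesses go
through the page fault handler again and observe the new state:

  static void example_after_page_change(struct sgx_encl *encl,
                                        struct sgx_encl_page *page)
  {
          /* Page-aligned enclave linear address, as in the reclaimer. */
          unsigned long addr = page->desc & PAGE_MASK;

          sgx_zap_enclave_ptes(encl, addr);
  }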

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 arch/x86/kernel/cpu/sgx/encl.c | 45 +++++++++++++++++++++++++++++++++-
 arch/x86/kernel/cpu/sgx/encl.h |  2 +-
 arch/x86/kernel/cpu/sgx/main.c | 31 ++---------------------
 3 files changed, 47 insertions(+), 31 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 54716886e804..6cc732b88c37 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -587,7 +587,7 @@ int sgx_encl_mm_add(struct sgx_encl *encl, struct mm_struct *mm)
 
 	spin_lock(&encl->mm_lock);
 	list_add_rcu(&encl_mm->list, &encl->mm_list);
-	/* Pairs with smp_rmb() in sgx_reclaimer_block(). */
+	/* Pairs with smp_rmb() in sgx_zap_enclave_ptes(). */
 	smp_wmb();
 	encl->mm_list_version++;
 	spin_unlock(&encl->mm_lock);
@@ -776,6 +776,49 @@ int sgx_encl_test_and_clear_young(struct mm_struct *mm,
 	return ret;
 }
 
+/**
+ * sgx_zap_enclave_ptes() - remove PTEs mapping the address from enclave
+ * @encl: the enclave
+ * @addr: page aligned pointer to single page for which PTEs will be removed
+ *
+ * Multiple VMAs may have an enclave page mapped. Remove the PTE mapping
+ * @addr from each VMA. Ensure that page fault handler is ready to handle
+ * new mappings of @addr before calling this function.
+ */
+void sgx_zap_enclave_ptes(struct sgx_encl *encl, unsigned long addr)
+{
+	unsigned long mm_list_version;
+	struct sgx_encl_mm *encl_mm;
+	struct vm_area_struct *vma;
+	int idx, ret;
+
+	do {
+		mm_list_version = encl->mm_list_version;
+
+		/* Pairs with smp_wmb() in sgx_encl_mm_add(). */
+		smp_rmb();
+
+		idx = srcu_read_lock(&encl->srcu);
+
+		list_for_each_entry_rcu(encl_mm, &encl->mm_list, list) {
+			if (!mmget_not_zero(encl_mm->mm))
+				continue;
+
+			mmap_read_lock(encl_mm->mm);
+
+			ret = sgx_encl_find(encl_mm->mm, addr, &vma);
+			if (!ret && encl == vma->vm_private_data)
+				zap_vma_ptes(vma, addr, PAGE_SIZE);
+
+			mmap_read_unlock(encl_mm->mm);
+
+			mmput_async(encl_mm->mm);
+		}
+
+		srcu_read_unlock(&encl->srcu, idx);
+	} while (unlikely(encl->mm_list_version != mm_list_version));
+}
+
 /**
  * sgx_alloc_va_page() - Allocate a Version Array (VA) page
  *
diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index 817edefa884a..e760991762e4 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -111,7 +111,7 @@ int sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index,
 void sgx_encl_put_backing(struct sgx_backing *backing, bool do_write);
 int sgx_encl_test_and_clear_young(struct mm_struct *mm,
 				  struct sgx_encl_page *page);
-
+void sgx_zap_enclave_ptes(struct sgx_encl *encl, unsigned long addr);
 struct sgx_epc_page *sgx_alloc_va_page(void);
 unsigned int sgx_alloc_va_slot(struct sgx_va_page *va_page);
 void sgx_free_va_slot(struct sgx_va_page *va_page, unsigned int offset);
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 60b166bff7b4..06492dcffcf1 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -137,36 +137,9 @@ static void sgx_reclaimer_block(struct sgx_epc_page *epc_page)
 	struct sgx_encl_page *page = epc_page->owner;
 	unsigned long addr = page->desc & PAGE_MASK;
 	struct sgx_encl *encl = page->encl;
-	unsigned long mm_list_version;
-	struct sgx_encl_mm *encl_mm;
-	struct vm_area_struct *vma;
-	int idx, ret;
-
-	do {
-		mm_list_version = encl->mm_list_version;
-
-		/* Pairs with smp_rmb() in sgx_encl_mm_add(). */
-		smp_rmb();
-
-		idx = srcu_read_lock(&encl->srcu);
-
-		list_for_each_entry_rcu(encl_mm, &encl->mm_list, list) {
-			if (!mmget_not_zero(encl_mm->mm))
-				continue;
-
-			mmap_read_lock(encl_mm->mm);
-
-			ret = sgx_encl_find(encl_mm->mm, addr, &vma);
-			if (!ret && encl == vma->vm_private_data)
-				zap_vma_ptes(vma, addr, PAGE_SIZE);
-
-			mmap_read_unlock(encl_mm->mm);
-
-			mmput_async(encl_mm->mm);
-		}
+	int ret;
 
-		srcu_read_unlock(&encl->srcu, idx);
-	} while (unlikely(encl->mm_list_version != mm_list_version));
+	sgx_zap_enclave_ptes(encl, addr);
 
 	mutex_lock(&encl->lock);
 
-- 
2.35.1



* [RFC PATCH v2.1 11/30] x86/sgx: Make sgx_ipi_cb() available internally
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (8 preceding siblings ...)
  2022-03-04  9:35 ` [RFC PATCH v2.1 10/30] x86/sgx: Move PTE zap code to new sgx_zap_enclave_ptes() Jarkko Sakkinen
@ 2022-03-04  9:35 ` Jarkko Sakkinen
  2022-03-04  9:35 ` [RFC PATCH v2.1 12/30] x86/sgx: Create utility to validate user provided offset and length Jarkko Sakkinen
                   ` (20 subsequent siblings)
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:35 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Jarkko Sakkinen,
	Dave Hansen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

From: Reinette Chatre <reinette.chatre@intel.com>

The ETRACK function followed by an IPI to all CPUs within an enclave
is a common pattern with more frequent use in support of SGX2.

Make the (empty) IPI callback function available internally in
preparation for usage by SGX2.

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 arch/x86/kernel/cpu/sgx/main.c | 2 +-
 arch/x86/kernel/cpu/sgx/sgx.h  | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 06492dcffcf1..1a3014aec490 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -172,7 +172,7 @@ static int __sgx_encl_ewb(struct sgx_epc_page *epc_page, void *va_slot,
 	return ret;
 }
 
-static void sgx_ipi_cb(void *info)
+void sgx_ipi_cb(void *info)
 {
 }
 
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 0f17def9fe6f..b30cee4de903 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -90,6 +90,8 @@ void sgx_mark_page_reclaimable(struct sgx_epc_page *page);
 int sgx_unmark_page_reclaimable(struct sgx_epc_page *page);
 struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim);
 
+void sgx_ipi_cb(void *info);
+
 #ifdef CONFIG_X86_SGX_KVM
 int __init sgx_vepc_init(void);
 #else
-- 
2.35.1



* [RFC PATCH v2.1 12/30] x86/sgx: Create utility to validate user provided offset and length
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (9 preceding siblings ...)
  2022-03-04  9:35 ` [RFC PATCH v2.1 11/30] x86/sgx: Make sgx_ipi_cb() available internally Jarkko Sakkinen
@ 2022-03-04  9:35 ` Jarkko Sakkinen
  2022-03-04  9:35 ` [RFC PATCH v2.1 13/30] x86/sgx: Keep record of SGX page type Jarkko Sakkinen
                   ` (19 subsequent siblings)
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:35 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Jarkko Sakkinen,
	Dave Hansen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

From: Reinette Chatre <reinette.chatre@intel.com>

User-provided offset and length are validated when parsing the parameters
of the SGX_IOC_ENCLAVE_ADD_PAGES ioctl(). Extract this validation
into a utility that can be used by the SGX2 ioctl()s that will
also provide these values.
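
A hypothetical sketch (not part of this patch) of how a later SGX2
ioctl() handler might reuse the helper; struct sgx_example_params, its
fields and sgx_ioc_example() are made up purely for illustration:

  struct sgx_example_params {
          __u64 offset;
          __u64 length;
  };

  static long sgx_ioc_example(struct sgx_encl *encl, void __user *arg)
  {
          struct sgx_example_params params;

          if (copy_from_user(&params, arg, sizeof(params)))
                  return -EFAULT;

          if (sgx_validate_offset_length(encl, params.offset, params.length))
                  return -EINVAL;

          /* ... operate on the validated, page-aligned range ... */
          return 0;
  }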

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 arch/x86/kernel/cpu/sgx/ioctl.c | 28 ++++++++++++++++++++++------
 1 file changed, 22 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 83df20e3e633..f487549bccba 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -372,6 +372,26 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long src,
 	return ret;
 }
 
+/*
+ * Ensure user provided offset and length values are valid for
+ * an enclave.
+ */
+static int sgx_validate_offset_length(struct sgx_encl *encl,
+				      unsigned long offset,
+				      unsigned long length)
+{
+	if (!IS_ALIGNED(offset, PAGE_SIZE))
+		return -EINVAL;
+
+	if (!length || length & (PAGE_SIZE - 1))
+		return -EINVAL;
+
+	if (offset + length - PAGE_SIZE >= encl->size)
+		return -EINVAL;
+
+	return 0;
+}
+
 /**
  * sgx_ioc_enclave_add_pages() - The handler for %SGX_IOC_ENCLAVE_ADD_PAGES
  * @encl:       an enclave pointer
@@ -425,14 +445,10 @@ static long sgx_ioc_enclave_add_pages(struct sgx_encl *encl, void __user *arg)
 	if (copy_from_user(&add_arg, arg, sizeof(add_arg)))
 		return -EFAULT;
 
-	if (!IS_ALIGNED(add_arg.offset, PAGE_SIZE) ||
-	    !IS_ALIGNED(add_arg.src, PAGE_SIZE))
-		return -EINVAL;
-
-	if (!add_arg.length || add_arg.length & (PAGE_SIZE - 1))
+	if (!IS_ALIGNED(add_arg.src, PAGE_SIZE))
 		return -EINVAL;
 
-	if (add_arg.offset + add_arg.length - PAGE_SIZE >= encl->size)
+	if (sgx_validate_offset_length(encl, add_arg.offset, add_arg.length))
 		return -EINVAL;
 
 	if (copy_from_user(&secinfo, (void __user *)add_arg.secinfo,
-- 
2.35.1



* [RFC PATCH v2.1 13/30] x86/sgx: Keep record of SGX page type
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (10 preceding siblings ...)
  2022-03-04  9:35 ` [RFC PATCH v2.1 12/30] x86/sgx: Create utility to validate user provided offset and length Jarkko Sakkinen
@ 2022-03-04  9:35 ` Jarkko Sakkinen
  2022-03-04  9:35 ` [RFC PATCH v2.1 14/30] x86/sgx: Support restricting of enclave page permissions Jarkko Sakkinen
                   ` (18 subsequent siblings)
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:35 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Jarkko Sakkinen,
	Dave Hansen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

From: Reinette Chatre <reinette.chatre@intel.com>

SGX2 functions are not allowed on all page types. For example,
ENCLS[EMODPR] is only allowed on regular SGX enclave pages and
ENCLS[EMODPT] is only allowed on TCS and regular pages. If these
functions are attempted on any other type of page, the hardware
triggers a fault.

Keep a record of the SGX page type so that it is known in advance
whether an SGX2 instruction can succeed, allowing faults to be treated
as real failures.

The page type is a property of struct sgx_encl_page and thus does not
cover VA pages. VA pages are maintained in separate structures and
their type can be determined in a different way. The SGX2 instructions
needing the page type do not operate on VA pages, so this is not a
scenario that needs to be covered at this time.

With the protection bits consuming only 16 bits of the unsigned long,
there is room available in the bitfield to include the page type
information without increasing the space consumed by the struct.

Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 arch/x86/include/asm/sgx.h      | 3 +++
 arch/x86/kernel/cpu/sgx/encl.h  | 3 ++-
 arch/x86/kernel/cpu/sgx/ioctl.c | 2 ++
 3 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/sgx.h b/arch/x86/include/asm/sgx.h
index d67810b50a81..eae20fa52b93 100644
--- a/arch/x86/include/asm/sgx.h
+++ b/arch/x86/include/asm/sgx.h
@@ -239,6 +239,9 @@ struct sgx_pageinfo {
  * %SGX_PAGE_TYPE_REG:	a regular page
  * %SGX_PAGE_TYPE_VA:	a VA page
  * %SGX_PAGE_TYPE_TRIM:	a page in trimmed state
+ *
+ * Make sure when making changes to this enum that its values can still fit
+ * in the bitfield within &struct sgx_encl_page
  */
 enum sgx_page_type {
 	SGX_PAGE_TYPE_SECS,
diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index e760991762e4..94ccea8fbbf2 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -27,7 +27,8 @@
 
 struct sgx_encl_page {
 	unsigned long desc;
-	unsigned long vm_max_prot_bits;
+	u32 vm_max_prot_bits;
+	enum sgx_page_type type:32;
 	struct sgx_epc_page *epc_page;
 	struct sgx_encl *encl;
 	struct sgx_va_page *va_page;
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index f487549bccba..0c211af8e948 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -107,6 +107,7 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs)
 		set_bit(SGX_ENCL_DEBUG, &encl->flags);
 
 	encl->secs.encl = encl;
+	encl->secs.type = SGX_PAGE_TYPE_SECS;
 	encl->base = secs->base;
 	encl->size = secs->size;
 	encl->attributes = secs->attributes;
@@ -344,6 +345,7 @@ static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long src,
 	 */
 	encl_page->encl = encl;
 	encl_page->epc_page = epc_page;
+	encl_page->type = (secinfo->flags & SGX_SECINFO_PAGE_TYPE_MASK) >> 8;
 	encl->secs_child_cnt++;
 
 	if (flags & SGX_PAGE_MEASURE) {
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2.1 14/30] x86/sgx: Support restricting of enclave page permissions
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (11 preceding siblings ...)
  2022-03-04  9:35 ` [RFC PATCH v2.1 13/30] x86/sgx: Keep record of SGX page type Jarkko Sakkinen
@ 2022-03-04  9:35 ` Jarkko Sakkinen
  2022-03-09  8:52   ` Jarkko Sakkinen
  2022-03-04  9:35 ` [RFC PATCH v2.1 15/30] selftests/sgx: Add test for EPCM permission changes Jarkko Sakkinen
                   ` (17 subsequent siblings)
  30 siblings, 1 reply; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:35 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Jarkko Sakkinen,
	Dave Hansen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

From: Reinette Chatre <reinette.chatre@intel.com>

In the initial (SGX1) version of SGX, pages in an enclave need to be
created with permissions that support all usages of the pages, from the
time the enclave is initialized until it is unloaded. For example,
pages used by a JIT compiler, or pages to which code needs to be
relocated, always need RWX permissions.

SGX2 includes a new function ENCLS[EMODPR] that is run from the kernel
and can be used to restrict the EPCM permissions of regular enclave
pages within an initialized enclave.

Introduce ioctl() SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS to support
restricting EPCM permissions. With this ioctl() the user specifies
a page range and the permissions to be applied to all pages in
the provided range. After checking the new permissions (more detail
below) the page table entries are reset and any new page
table entries will contain the new, restricted, permissions.
ENCLS[EMODPR] is run to restrict the EPCM permissions followed by
the ENCLS[ETRACK] flow that will ensure no cached
linear-to-physical address mappings to the changed pages remain.

It is possible for the permission change request to fail on any
page within the provided range, either with an error encountered
by the kernel or by the SGX hardware while running
ENCLS[EMODPR]. To support partial success the ioctl() returns an
error code based on failures encountered by the kernel as well
as two result output parameters: one for the number of pages
that were successfully changed and one for the SGX return code.

Checking user provided new permissions
======================================

Enclave page permission changes need to be approached with care and
for this reason permission changes are only allowed if the new
permissions are the same as or more restrictive than the vetted
permissions. No additional checking is done to ensure that the
permissions are actually being restricted. This is because the
enclave may have relaxed the EPCM permissions from within
the enclave without letting the kernel know. An attempt to relax
permissions using this call will be ignored by the hardware.

For example, together with the enclave's ability to relax EPCM
permissions, enclave pages added with the vetted permissions shown in
brackets below are allowed to have permissions as follows:
* (RWX) => RW => R => RX => RWX
* (RW) => R => RW
* (RX) => R => RX
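
A minimal user space sketch of the intended usage (illustrative only:
error handling is elided, and struct sgx_secinfo plus the SGX_SECINFO_R
flag are assumed to be available from the SGX headers, as the selftests
in this series do):

  #include <sys/ioctl.h>
  #include <asm/sgx.h>	/* SGX_IOC_*, struct sgx_enclave_restrict_perm */

  static int restrict_to_read_only(int encl_fd, __u64 offset, __u64 length)
  {
  	struct sgx_secinfo secinfo = { .flags = SGX_SECINFO_R };
  	struct sgx_enclave_restrict_perm ioc = {
  		.offset  = offset,	/* page aligned, relative to enclave base */
  		.length  = length,	/* multiple of the page size */
  		.secinfo = (unsigned long)&secinfo,
  	};
  	int ret;

  	ret = ioctl(encl_fd, SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS, &ioc);
  	/*
  	 * On success ioc.count holds the number of bytes changed and
  	 * ioc.result the ENCLS[EMODPR] return code; on partial failure
  	 * ioc.count shows how far the operation got.
  	 */
  	return ret;
  }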

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 arch/x86/include/uapi/asm/sgx.h |  21 +++
 arch/x86/kernel/cpu/sgx/encl.c  |   4 +-
 arch/x86/kernel/cpu/sgx/encl.h  |   3 +
 arch/x86/kernel/cpu/sgx/ioctl.c | 277 ++++++++++++++++++++++++++++++++
 4 files changed, 303 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/uapi/asm/sgx.h b/arch/x86/include/uapi/asm/sgx.h
index f4b81587e90b..107ffb0a0b48 100644
--- a/arch/x86/include/uapi/asm/sgx.h
+++ b/arch/x86/include/uapi/asm/sgx.h
@@ -29,6 +29,8 @@ enum sgx_page_flags {
 	_IOW(SGX_MAGIC, 0x03, struct sgx_enclave_provision)
 #define SGX_IOC_VEPC_REMOVE_ALL \
 	_IO(SGX_MAGIC, 0x04)
+#define SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS \
+	_IOWR(SGX_MAGIC, 0x05, struct sgx_enclave_restrict_perm)
 
 /**
  * struct sgx_enclave_create - parameter structure for the
@@ -76,6 +78,25 @@ struct sgx_enclave_provision {
 	__u64 fd;
 };
 
+/**
+ * struct sgx_enclave_restrict_perm - parameters for ioctl
+ *                                    %SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS
+ * @offset:	starting page offset (page aligned relative to enclave base
+ *		address defined in SECS)
+ * @length:	length of memory (multiple of the page size)
+ * @secinfo:	address for the SECINFO data containing the new permission bits
+ *		for pages in range described by @offset and @length
+ * @result:	(output) SGX result code of ENCLS[EMODPR] function
+ * @count:	(output) bytes successfully changed (multiple of page size)
+ */
+struct sgx_enclave_restrict_perm {
+	__u64 offset;
+	__u64 length;
+	__u64 secinfo;
+	__u64 result;
+	__u64 count;
+};
+
 struct sgx_enclave_run;
 
 /**
diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 6cc732b88c37..baf798a793a2 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -90,8 +90,8 @@ static struct sgx_epc_page *sgx_encl_eldu(struct sgx_encl_page *encl_page,
 	return epc_page;
 }
 
-static struct sgx_encl_page *sgx_encl_load_page(struct sgx_encl *encl,
-						unsigned long addr)
+struct sgx_encl_page *sgx_encl_load_page(struct sgx_encl *encl,
+					 unsigned long addr)
 {
 	struct sgx_epc_page *epc_page;
 	struct sgx_encl_page *entry;
diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index 94ccea8fbbf2..965cfc7b2b93 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -119,4 +119,7 @@ void sgx_free_va_slot(struct sgx_va_page *va_page, unsigned int offset);
 bool sgx_va_page_full(struct sgx_va_page *va_page);
 void sgx_encl_free_epc_page(struct sgx_epc_page *page);
 
+struct sgx_encl_page *sgx_encl_load_page(struct sgx_encl *encl,
+					 unsigned long addr);
+
 #endif /* _X86_ENCL_H */
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 0c211af8e948..a848f6c38781 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -692,6 +692,280 @@ static long sgx_ioc_enclave_provision(struct sgx_encl *encl, void __user *arg)
 	return sgx_set_attribute(&encl->attributes_mask, params.fd);
 }
 
+static unsigned long vm_prot_from_secinfo(u64 secinfo_perm)
+{
+	unsigned long vm_prot;
+
+	vm_prot = _calc_vm_trans(secinfo_perm, SGX_SECINFO_R, PROT_READ)  |
+		  _calc_vm_trans(secinfo_perm, SGX_SECINFO_W, PROT_WRITE) |
+		  _calc_vm_trans(secinfo_perm, SGX_SECINFO_X, PROT_EXEC);
+	vm_prot = calc_vm_prot_bits(vm_prot, 0);
+
+	return vm_prot;
+}
+
+/*
+ * Ensure enclave is ready for SGX2 functions. Readiness is checked
+ * by ensuring the hardware supports SGX2 and the enclave is initialized
+ * and thus able to handle requests to modify pages within it.
+ */
+static int sgx_ioc_sgx2_ready(struct sgx_encl *encl)
+{
+	if (!(cpu_feature_enabled(X86_FEATURE_SGX2)))
+		return -ENODEV;
+
+	if (!test_bit(SGX_ENCL_INITIALIZED, &encl->flags))
+		return -EINVAL;
+
+	return 0;
+}
+
+/*
+ * Return valid permission fields from a secinfo structure provided by
+ * user space. The secinfo structure is required to only have bits in
+ * the permission fields set.
+ */
+static int sgx_perm_from_user_secinfo(void __user *_secinfo, u64 *secinfo_perm)
+{
+	struct sgx_secinfo secinfo;
+	u64 perm;
+
+	if (copy_from_user(&secinfo, (void __user *)_secinfo,
+			   sizeof(secinfo)))
+		return -EFAULT;
+
+	if (secinfo.flags & ~SGX_SECINFO_PERMISSION_MASK)
+		return -EINVAL;
+
+	if (memchr_inv(secinfo.reserved, 0, sizeof(secinfo.reserved)))
+		return -EINVAL;
+
+	perm = secinfo.flags & SGX_SECINFO_PERMISSION_MASK;
+
+	if ((perm & SGX_SECINFO_W) && !(perm & SGX_SECINFO_R))
+		return -EINVAL;
+
+	*secinfo_perm = perm;
+
+	return 0;
+}
+
+/*
+ * Some SGX functions require that no cached linear-to-physical address
+ * mappings are present before they can succeed. Collaborate with
+ * hardware via ENCLS[ETRACK] to ensure that all cached
+ * linear-to-physical address mappings belonging to all threads of
+ * the enclave are cleared. See sgx_encl_cpumask() for details.
+ */
+static int sgx_enclave_etrack(struct sgx_encl *encl)
+{
+	void *epc_virt;
+	int ret;
+
+	epc_virt = sgx_get_epc_virt_addr(encl->secs.epc_page);
+	ret = __etrack(epc_virt);
+	if (ret) {
+		/*
+		 * ETRACK only fails when there is an OS issue. For
+		 * example, two consecutive ETRACKs were sent without
+		 * a completing IPI in between.
+		 */
+		pr_err_once("ETRACK returned %d (0x%x)", ret, ret);
+		/*
+		 * Send IPIs to kick CPUs out of the enclave and
+		 * try ETRACK again.
+		 */
+		on_each_cpu_mask(sgx_encl_cpumask(encl), sgx_ipi_cb, NULL, 1);
+		ret = __etrack(epc_virt);
+		if (ret) {
+			pr_err_once("ETRACK repeat returned %d (0x%x)",
+				    ret, ret);
+			return -EFAULT;
+		}
+	}
+	on_each_cpu_mask(sgx_encl_cpumask(encl), sgx_ipi_cb, NULL, 1);
+
+	return 0;
+}
+
+/**
+ * sgx_enclave_restrict_perm() - Restrict EPCM permissions and align OS view
+ * @encl:	Enclave to which the pages belong.
+ * @modp:	Checked parameters from user on which pages need modifying.
+ * @secinfo_perm: New (validated) permission bits.
+ *
+ * Return:
+ * - 0:		Success.
+ * - -errno:	Otherwise.
+ */
+static long sgx_enclave_restrict_perm(struct sgx_encl *encl,
+				      struct sgx_enclave_restrict_perm *modp,
+				      u64 secinfo_perm)
+{
+	struct sgx_encl_page *entry;
+	struct sgx_secinfo secinfo;
+	unsigned long vm_prot;
+	unsigned long addr;
+	unsigned long c;
+	void *epc_virt;
+	int ret;
+
+	memset(&secinfo, 0, sizeof(secinfo));
+	secinfo.flags = secinfo_perm;
+
+	vm_prot = vm_prot_from_secinfo(secinfo_perm);
+
+	for (c = 0 ; c < modp->length; c += PAGE_SIZE) {
+		addr = encl->base + modp->offset + c;
+
+		mutex_lock(&encl->lock);
+
+		entry = sgx_encl_load_page(encl, addr);
+		if (IS_ERR(entry)) {
+			ret = PTR_ERR(entry) == -EBUSY ? -EAGAIN : -EFAULT;
+			goto out_unlock;
+		}
+
+		/*
+		 * Changing EPCM permissions is only supported on regular
+		 * SGX pages. Attempting this change on other pages will
+		 * result in #PF.
+		 */
+		if (entry->type != SGX_PAGE_TYPE_REG) {
+			ret = -EINVAL;
+			goto out_unlock;
+		}
+
+		/*
+		 * Do not verify if current runtime protection bits are what
+		 * is being requested. The enclave may have relaxed EPCM
+		 * permissions without letting the kernel know and
+		 * thus permission restriction may still be needed even if
+		 * from the kernel's perspective the permissions are unchanged.
+		 */
+
+		/* New permissions should never exceed vetted permissions. */
+		if ((entry->vm_max_prot_bits & vm_prot) != vm_prot) {
+			ret = -EPERM;
+			goto out_unlock;
+		}
+
+		/* Make sure page stays around while releasing mutex. */
+		if (sgx_unmark_page_reclaimable(entry->epc_page)) {
+			ret = -EAGAIN;
+			goto out_unlock;
+		}
+
+		mutex_unlock(&encl->lock);
+		/*
+		 * Do not keep encl->lock because of dependency on
+		 * mmap_lock acquired in sgx_zap_enclave_ptes().
+		 */
+		sgx_zap_enclave_ptes(encl, addr);
+
+		mutex_lock(&encl->lock);
+
+		/* Change EPCM permissions. */
+		epc_virt = sgx_get_epc_virt_addr(entry->epc_page);
+		ret = __emodpr(&secinfo, epc_virt);
+		if (encls_faulted(ret)) {
+			/*
+			 * All possible faults should be avoidable:
+			 * the parameters have been checked, only the
+			 * permissions of a regular page are changed, and
+			 * concurrent SGX1/SGX2 ENCLS instructions are
+			 * excluded by the mutex.
+			 */
+			pr_err_once("EMODPR encountered exception %d\n",
+				    ENCLS_TRAPNR(ret));
+			ret = -EFAULT;
+			goto out_reclaim;
+		}
+		if (encls_failed(ret)) {
+			modp->result = ret;
+			ret = -EFAULT;
+			goto out_reclaim;
+		}
+
+		ret = sgx_enclave_etrack(encl);
+		if (ret) {
+			ret = -EFAULT;
+			goto out_reclaim;
+		}
+
+		sgx_mark_page_reclaimable(entry->epc_page);
+		mutex_unlock(&encl->lock);
+	}
+
+	ret = 0;
+	goto out;
+
+out_reclaim:
+	sgx_mark_page_reclaimable(entry->epc_page);
+out_unlock:
+	mutex_unlock(&encl->lock);
+out:
+	modp->count = c;
+
+	return ret;
+}
+
+/**
+ * sgx_ioc_enclave_restrict_perm() - handler for
+ *                                   %SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS
+ * @encl:	an enclave pointer
+ * @arg:	userspace pointer to a &struct sgx_enclave_restrict_perm
+ *		instance
+ *
+ * SGX2 distinguishes between relaxing and restricting the enclave page
+ * permissions maintained by the hardware (EPCM permissions) of pages
+ * belonging to an initialized enclave (after SGX_IOC_ENCLAVE_INIT).
+ *
+ * EPCM permissions cannot be restricted from within the enclave; the enclave
+ * requires the kernel to run the privileged level 0 instructions ENCLS[EMODPR]
+ * and ENCLS[ETRACK]. An attempt to relax EPCM permissions with this call
+ * will be ignored by the hardware.
+ *
+ * Enclave page permissions are not allowed to exceed the maximum vetted
+ * permissions maintained in &struct sgx_encl_page->vm_max_prot_bits.
+ *
+ * Return:
+ * - 0:		Success
+ * - -errno:	Otherwise
+ */
+static long sgx_ioc_enclave_restrict_perm(struct sgx_encl *encl,
+					  void __user *arg)
+{
+	struct sgx_enclave_restrict_perm params;
+	u64 secinfo_perm;
+	long ret;
+
+	ret = sgx_ioc_sgx2_ready(encl);
+	if (ret)
+		return ret;
+
+	if (copy_from_user(&params, arg, sizeof(params)))
+		return -EFAULT;
+
+	if (sgx_validate_offset_length(encl, params.offset, params.length))
+		return -EINVAL;
+
+	ret = sgx_perm_from_user_secinfo((void __user *)params.secinfo,
+					 &secinfo_perm);
+	if (ret)
+		return ret;
+
+	if (params.result || params.count)
+		return -EINVAL;
+
+	ret = sgx_enclave_restrict_perm(encl, &params, secinfo_perm);
+
+	if (copy_to_user(arg, &params, sizeof(params)))
+		return -EFAULT;
+
+	return ret;
+}
+
 long sgx_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
 {
 	struct sgx_encl *encl = filep->private_data;
@@ -713,6 +987,9 @@ long sgx_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
 	case SGX_IOC_ENCLAVE_PROVISION:
 		ret = sgx_ioc_enclave_provision(encl, (void __user *)arg);
 		break;
+	case SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS:
+		ret = sgx_ioc_enclave_restrict_perm(encl, (void __user *)arg);
+		break;
 	default:
 		ret = -ENOIOCTLCMD;
 		break;
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2.1 15/30] selftests/sgx: Add test for EPCM permission changes
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (12 preceding siblings ...)
  2022-03-04  9:35 ` [RFC PATCH v2.1 14/30] x86/sgx: Support restricting of enclave page permissions Jarkko Sakkinen
@ 2022-03-04  9:35 ` Jarkko Sakkinen
  2022-03-04  9:35 ` [RFC PATCH v2.1 16/30] selftests/sgx: Add test for TCS page " Jarkko Sakkinen
                   ` (16 subsequent siblings)
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:35 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Jarkko Sakkinen,
	Dave Hansen, Shuah Khan, open list:KERNEL SELFTEST FRAMEWORK,
	open list

From: Reinette Chatre <reinette.chatre@intel.com>

EPCM permission changes can be made from within the enclave (to relax
permissions) or from outside it (to restrict permissions). Kernel
support is needed when permissions are restricted: the kernel runs the
privileged ENCLS[EMODPR] instruction and ensures that stale PTEs
reflecting the old permissions are flushed. EPCM permissions can be
relaxed via ENCLU[EMODPE] from within the enclave, but the enclave
still depends on the kernel to install PTEs with the new permissions.

Add a test that exercises a few of the enclave page permission flows:
1) Test starts with a RW (from enclave and kernel perspective)
   enclave page that is mapped via a RW VMA.
2) Use the SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS ioctl() to restrict
   the enclave (EPCM) page permissions to read-only (kernel removes
   PTE in the process).
3) Run ENCLU[EACCEPT] from within the enclave to accept the new page
   permissions.
4) Attempt to write to the enclave page from within the enclave - this
   should fail with a page fault on the PTE since the page
   table entry accurately reflects the (read-only) EPCM permissions.
5) Restore EPCM permissions to RW by running ENCLU[EMODPE] from within
   the enclave.
6) Attempt to write to the enclave page from within the enclave - this
   should fail again with a page fault because even though the EPCM
   permissions are RW the PTE does not yet reflect that.
7) Use the SGX_IOC_ENCLAVE_RELAX_PERMISSIONS ioctl() to inform the
   kernel of new page permissions and PTEs will accurately reflect
   RW EPCM permissions.
8) Writing to enclave page from within enclave succeeds.
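
For reference, a sketch of the operation payloads the test passes to the
enclave (via ENCL_CALL()) at steps 3 and 5; the values mirror the test
below and data_start is the address of the enclave data page under test:

  /* Step 3: accept the EPCM permission restriction done by the kernel. */
  struct encl_op_eaccept eaccept_op = {
  	.header.type = ENCL_OP_EACCEPT,
  	.epc_addr    = data_start,
  	.flags       = PROT_READ | SGX_SECINFO_REG | SGX_SECINFO_PR,
  };

  /* Step 5: relax the EPCM permissions back to RW from within the enclave. */
  struct encl_op_emodpe emodpe_op = {
  	.header.type = ENCL_OP_EMODPE,
  	.epc_addr    = data_start,
  	.flags       = PROT_READ | PROT_WRITE,
  };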

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 tools/testing/selftests/sgx/defines.h   |  15 ++
 tools/testing/selftests/sgx/main.c      | 234 ++++++++++++++++++++++++
 tools/testing/selftests/sgx/test_encl.c |  38 ++++
 3 files changed, 287 insertions(+)

diff --git a/tools/testing/selftests/sgx/defines.h b/tools/testing/selftests/sgx/defines.h
index 02d775789ea7..b638eb98c80c 100644
--- a/tools/testing/selftests/sgx/defines.h
+++ b/tools/testing/selftests/sgx/defines.h
@@ -24,6 +24,8 @@ enum encl_op_type {
 	ENCL_OP_PUT_TO_ADDRESS,
 	ENCL_OP_GET_FROM_ADDRESS,
 	ENCL_OP_NOP,
+	ENCL_OP_EACCEPT,
+	ENCL_OP_EMODPE,
 	ENCL_OP_MAX,
 };
 
@@ -53,4 +55,17 @@ struct encl_op_get_from_addr {
 	uint64_t addr;
 };
 
+struct encl_op_eaccept {
+	struct encl_op_header header;
+	uint64_t epc_addr;
+	uint64_t flags;
+	uint64_t ret;
+};
+
+struct encl_op_emodpe {
+	struct encl_op_header header;
+	uint64_t epc_addr;
+	uint64_t flags;
+};
+
 #endif /* DEFINES_H */
diff --git a/tools/testing/selftests/sgx/main.c b/tools/testing/selftests/sgx/main.c
index dd74fa42302e..aec3355d2ace 100644
--- a/tools/testing/selftests/sgx/main.c
+++ b/tools/testing/selftests/sgx/main.c
@@ -25,6 +25,18 @@ static const uint64_t MAGIC = 0x1122334455667788ULL;
 static const uint64_t MAGIC2 = 0x8877665544332211ULL;
 vdso_sgx_enter_enclave_t vdso_sgx_enter_enclave;
 
+/*
+ * Security Information (SECINFO) data structure needed by a few SGX
+ * instructions (e.g. ENCLU[EACCEPT] and ENCLU[EMODPE]) holds meta-data
+ * about an enclave page. &enum sgx_secinfo_page_state specifies the
+ * secinfo flags used for page state.
+ */
+enum sgx_secinfo_page_state {
+	SGX_SECINFO_PENDING = (1 << 3),
+	SGX_SECINFO_MODIFIED = (1 << 4),
+	SGX_SECINFO_PR = (1 << 5),
+};
+
 struct vdso_symtab {
 	Elf64_Sym *elf_symtab;
 	const char *elf_symstrtab;
@@ -555,4 +567,226 @@ TEST_F(enclave, pte_permissions)
 	EXPECT_EQ(self->run.exception_addr, 0);
 }
 
+/*
+ * Enclave page permission test.
+ *
+ * Modify and restore enclave page's EPCM (enclave) permissions from
+ * outside enclave (ENCLS[EMODPR] via kernel) as well as from within
+ * enclave (via ENCLU[EMODPE]). Kernel should ensure PTE permissions
+ * are the same as the EPCM permissions, so check for a page fault if
+ * the VMA allows access but the EPCM and PTE do not.
+ */
+TEST_F(enclave, epcm_permissions)
+{
+	struct sgx_enclave_restrict_perm restrict_ioc;
+	struct encl_op_get_from_addr get_addr_op;
+	struct encl_op_put_to_addr put_addr_op;
+	struct encl_op_eaccept eaccept_op;
+	struct encl_op_emodpe emodpe_op;
+	struct sgx_secinfo secinfo;
+	unsigned long data_start;
+	int ret, errno_save;
+
+	ASSERT_TRUE(setup_test_encl(ENCL_HEAP_SIZE_DEFAULT, &self->encl, _metadata));
+
+	memset(&self->run, 0, sizeof(self->run));
+	self->run.tcs = self->encl.encl_base;
+
+	/*
+	 * Ensure kernel supports needed ioctl() and system supports needed
+	 * commands.
+	 */
+	memset(&restrict_ioc, 0, sizeof(restrict_ioc));
+	memset(&secinfo, 0, sizeof(secinfo));
+
+	ret = ioctl(self->encl.fd, SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS,
+		    &restrict_ioc);
+	errno_save = ret == -1 ? errno : 0;
+
+	/*
+	 * Invalid parameters were provided during sanity check,
+	 * expect command to fail.
+	 */
+	ASSERT_EQ(ret, -1);
+
+	/* ret == -1 */
+	if (errno_save == ENOTTY)
+		SKIP(return,
+		     "Kernel does not support SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS ioctl()");
+	else if (errno_save == ENODEV)
+		SKIP(return, "System does not support SGX2");
+
+	/*
+	 * Page that will have its permissions changed is the second data
+	 * page in the .data segment. This forms part of the local encl_buffer
+	 * within the enclave.
+	 *
+	 * At start of test @data_start should have EPCM as well as PTE
+	 * permissions of RW.
+	 */
+
+	data_start = self->encl.encl_base +
+		     encl_get_data_offset(&self->encl) + PAGE_SIZE;
+
+	/*
+	 * Sanity check that page at @data_start is writable before making
+	 * any changes to page permissions.
+	 *
+	 * Start by writing MAGIC to test page.
+	 */
+	put_addr_op.value = MAGIC;
+	put_addr_op.addr = data_start;
+	put_addr_op.header.type = ENCL_OP_PUT_TO_ADDRESS;
+
+	EXPECT_EQ(ENCL_CALL(&put_addr_op, &self->run, true), 0);
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+
+	/*
+	 * Read memory that was just written to, confirming that
+	 * page is writable.
+	 */
+	get_addr_op.value = 0;
+	get_addr_op.addr = data_start;
+	get_addr_op.header.type = ENCL_OP_GET_FROM_ADDRESS;
+
+	EXPECT_EQ(ENCL_CALL(&get_addr_op, &self->run, true), 0);
+
+	EXPECT_EQ(get_addr_op.value, MAGIC);
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+
+	/*
+	 * Change EPCM permissions to read-only, PTE entry flushed by
+	 * kernel in the process.
+	 */
+	memset(&restrict_ioc, 0, sizeof(restrict_ioc));
+	memset(&secinfo, 0, sizeof(secinfo));
+
+	secinfo.flags = PROT_READ;
+	restrict_ioc.offset = encl_get_data_offset(&self->encl) + PAGE_SIZE;
+	restrict_ioc.length = PAGE_SIZE;
+	restrict_ioc.secinfo = (unsigned long)&secinfo;
+
+	ret = ioctl(self->encl.fd, SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS,
+		    &restrict_ioc);
+	errno_save = ret == -1 ? errno : 0;
+
+	EXPECT_EQ(ret, 0);
+	EXPECT_EQ(errno_save, 0);
+	EXPECT_EQ(restrict_ioc.result, 0);
+	EXPECT_EQ(restrict_ioc.count, 4096);
+
+	/*
+	 * EPCM permissions changed from kernel, need to EACCEPT from enclave.
+	 */
+	eaccept_op.epc_addr = data_start;
+	eaccept_op.flags = PROT_READ | SGX_SECINFO_REG | SGX_SECINFO_PR;
+	eaccept_op.ret = 0;
+	eaccept_op.header.type = ENCL_OP_EACCEPT;
+
+	EXPECT_EQ(ENCL_CALL(&eaccept_op, &self->run, true), 0);
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+	EXPECT_EQ(eaccept_op.ret, 0);
+
+	/*
+	 * EPCM permissions of page is now read-only, expect #PF
+	 * on PTE (not EPCM) when attempting to write to page from
+	 * within enclave.
+	 */
+	put_addr_op.value = MAGIC2;
+
+	EXPECT_EQ(ENCL_CALL(&put_addr_op, &self->run, true), 0);
+
+	EXPECT_EQ(self->run.function, ERESUME);
+	EXPECT_EQ(self->run.exception_vector, 14);
+	EXPECT_EQ(self->run.exception_error_code, 0x7);
+	EXPECT_EQ(self->run.exception_addr, data_start);
+
+	self->run.exception_vector = 0;
+	self->run.exception_error_code = 0;
+	self->run.exception_addr = 0;
+
+	/*
+	 * Received AEX but cannot return to the enclave at the same entry
+	 * point; a different TCS is needed from which the EPCM permissions
+	 * can be made writable again.
+	 */
+	self->run.tcs = self->encl.encl_base + PAGE_SIZE;
+
+	/*
+	 * Enter enclave at new TCS to change EPCM permissions to be
+	 * writable again and thus fix the page fault that triggered the
+	 * AEX.
+	 */
+
+	emodpe_op.epc_addr = data_start;
+	emodpe_op.flags = PROT_READ | PROT_WRITE;
+	emodpe_op.header.type = ENCL_OP_EMODPE;
+
+	EXPECT_EQ(ENCL_CALL(&emodpe_op, &self->run, true), 0);
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+
+	/*
+	 * Attempt to return to main TCS to resume execution at faulting
+	 * instruction, but PTE should still prevent writing to the page.
+	 */
+	self->run.tcs = self->encl.encl_base;
+
+	EXPECT_EQ(vdso_sgx_enter_enclave((unsigned long)&put_addr_op, 0, 0,
+					 ERESUME, 0, 0,
+					 &self->run),
+		  0);
+
+	EXPECT_EQ(self->run.function, ERESUME);
+	EXPECT_EQ(self->run.exception_vector, 14);
+	EXPECT_EQ(self->run.exception_error_code, 0x7);
+	EXPECT_EQ(self->run.exception_addr, data_start);
+
+	self->run.exception_vector = 0;
+	self->run.exception_error_code = 0;
+	self->run.exception_addr = 0;
+
+	/*
+	 * The wrong page permissions that caused the original fault have
+	 * now been fixed via the EPCM permissions as well as the PTE.
+	 * Resume execution in main TCS to re-attempt the memory access.
+	 */
+	self->run.tcs = self->encl.encl_base;
+
+	EXPECT_EQ(vdso_sgx_enter_enclave((unsigned long)&put_addr_op, 0, 0,
+					 ERESUME, 0, 0,
+					 &self->run),
+		  0);
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+
+	get_addr_op.value = 0;
+
+	EXPECT_EQ(ENCL_CALL(&get_addr_op, &self->run, true), 0);
+
+	EXPECT_EQ(get_addr_op.value, MAGIC2);
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.user_data, 0);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+}
+
 TEST_HARNESS_MAIN
diff --git a/tools/testing/selftests/sgx/test_encl.c b/tools/testing/selftests/sgx/test_encl.c
index 4fca01cfd898..5b6c65331527 100644
--- a/tools/testing/selftests/sgx/test_encl.c
+++ b/tools/testing/selftests/sgx/test_encl.c
@@ -11,6 +11,42 @@
  */
 static uint8_t encl_buffer[8192] = { 1 };
 
+enum sgx_enclu_function {
+	EACCEPT = 0x5,
+	EMODPE = 0x6,
+};
+
+static void do_encl_emodpe(void *_op)
+{
+	struct sgx_secinfo secinfo __aligned(sizeof(struct sgx_secinfo)) = {0};
+	struct encl_op_emodpe *op = _op;
+
+	secinfo.flags = op->flags;
+
+	asm volatile(".byte 0x0f, 0x01, 0xd7"
+				:
+				: "a" (EMODPE),
+				  "b" (&secinfo),
+				  "c" (op->epc_addr));
+}
+
+static void do_encl_eaccept(void *_op)
+{
+	struct sgx_secinfo secinfo __aligned(sizeof(struct sgx_secinfo)) = {0};
+	struct encl_op_eaccept *op = _op;
+	int rax;
+
+	secinfo.flags = op->flags;
+
+	asm volatile(".byte 0x0f, 0x01, 0xd7"
+				: "=a" (rax)
+				: "a" (EACCEPT),
+				  "b" (&secinfo),
+				  "c" (op->epc_addr));
+
+	op->ret = rax;
+}
+
 static void *memcpy(void *dest, const void *src, size_t n)
 {
 	size_t i;
@@ -62,6 +98,8 @@ void encl_body(void *rdi,  void *rsi)
 		do_encl_op_put_to_addr,
 		do_encl_op_get_from_addr,
 		do_encl_op_nop,
+		do_encl_eaccept,
+		do_encl_emodpe,
 	};
 
 	struct encl_op_header *op = (struct encl_op_header *)rdi;
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2.1 16/30] selftests/sgx: Add test for TCS page permission changes
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (13 preceding siblings ...)
  2022-03-04  9:35 ` [RFC PATCH v2.1 15/30] selftests/sgx: Add test for EPCM permission changes Jarkko Sakkinen
@ 2022-03-04  9:35 ` Jarkko Sakkinen
  2022-03-04  9:35 ` [RFC PATCH v2.1 17/30] x86/sgx: Support adding of pages to an initialized enclave Jarkko Sakkinen
                   ` (15 subsequent siblings)
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:35 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Jarkko Sakkinen,
	Dave Hansen, Shuah Khan, open list:KERNEL SELFTEST FRAMEWORK,
	open list

From: Reinette Chatre <reinette.chatre@intel.com>

The kernel should not allow permission changes on TCS pages. Add a
test to confirm this behavior.

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 tools/testing/selftests/sgx/main.c | 74 ++++++++++++++++++++++++++++++
 1 file changed, 74 insertions(+)

diff --git a/tools/testing/selftests/sgx/main.c b/tools/testing/selftests/sgx/main.c
index aec3355d2ace..ea5f2e064687 100644
--- a/tools/testing/selftests/sgx/main.c
+++ b/tools/testing/selftests/sgx/main.c
@@ -121,6 +121,24 @@ static Elf64_Sym *vdso_symtab_get(struct vdso_symtab *symtab, const char *name)
 	return NULL;
 }
 
+/*
+ * Return the offset in the enclave where the TCS segment can be found.
+ * The first RW segment loaded is the TCS.
+ */
+static off_t encl_get_tcs_offset(struct encl *encl)
+{
+	int i;
+
+	for (i = 0; i < encl->nr_segments; i++) {
+		struct encl_segment *seg = &encl->segment_tbl[i];
+
+		if (i == 0 && seg->prot == (PROT_READ | PROT_WRITE))
+			return seg->offset;
+	}
+
+	return -1;
+}
+
 /*
  * Return the offset in the enclave where the data segment can be found.
  * The first RW segment loaded is the TCS, skip that to get info on the
@@ -567,6 +585,62 @@ TEST_F(enclave, pte_permissions)
 	EXPECT_EQ(self->run.exception_addr, 0);
 }
 
+/*
+ * Modifying permissions of TCS page should not be possible.
+ */
+TEST_F(enclave, tcs_permissions)
+{
+	struct sgx_enclave_restrict_perm ioc;
+	struct sgx_secinfo secinfo;
+	int ret, errno_save;
+
+	ASSERT_TRUE(setup_test_encl(ENCL_HEAP_SIZE_DEFAULT, &self->encl, _metadata));
+
+	memset(&self->run, 0, sizeof(self->run));
+	self->run.tcs = self->encl.encl_base;
+
+	memset(&ioc, 0, sizeof(ioc));
+	memset(&secinfo, 0, sizeof(secinfo));
+
+	/*
+	 * Ensure kernel supports needed ioctl() and system supports needed
+	 * commands.
+	 */
+
+	ret = ioctl(self->encl.fd, SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS, &ioc);
+	errno_save = ret == -1 ? errno : 0;
+
+	/*
+	 * Invalid parameters were provided during sanity check,
+	 * expect command to fail.
+	 */
+	ASSERT_EQ(ret, -1);
+
+	/* ret == -1 */
+	if (errno_save == ENOTTY)
+		SKIP(return,
+		     "Kernel does not support SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS ioctl()");
+	else if (errno_save == ENODEV)
+		SKIP(return, "System does not support SGX2");
+
+	/*
+	 * Attempt to make TCS page read-only. This is not allowed and
+	 * should be prevented by the kernel.
+	 */
+	secinfo.flags = PROT_READ;
+	ioc.offset = encl_get_tcs_offset(&self->encl);
+	ioc.length = PAGE_SIZE;
+	ioc.secinfo = (unsigned long)&secinfo;
+
+	ret = ioctl(self->encl.fd, SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS, &ioc);
+	errno_save = ret == -1 ? errno : 0;
+
+	EXPECT_EQ(ret, -1);
+	EXPECT_EQ(errno_save, EINVAL);
+	EXPECT_EQ(ioc.result, 0);
+	EXPECT_EQ(ioc.count, 0);
+}
+
 /*
  * Enclave page permission test.
  *
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2.1 17/30] x86/sgx: Support adding of pages to an initialized enclave
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (14 preceding siblings ...)
  2022-03-04  9:35 ` [RFC PATCH v2.1 16/30] selftests/sgx: Add test for TCS page " Jarkko Sakkinen
@ 2022-03-04  9:35 ` Jarkko Sakkinen
  2022-03-04  9:35 ` [RFC PATCH v2.1 18/30] x86/sgx: Tighten accessible memory range after enclave initialization Jarkko Sakkinen
                   ` (14 subsequent siblings)
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:35 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Jarkko Sakkinen,
	Dave Hansen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

From: Reinette Chatre <reinette.chatre@intel.com>

With SGX1 an enclave needs to be created with its maximum memory demands
allocated. Pages cannot be added to an enclave after it is initialized.
SGX2 introduces a new function, ENCLS[EAUG], that can be used to add
pages to an initialized enclave. With SGX2 the enclave still needs to
set aside address space for its maximum memory demands during enclave
creation, but all pages need not be added before enclave initialization.
Pages can be added during enclave runtime.

Add support for dynamically adding pages to an initialized enclave,
architecturally limited to RW permission. Add pages via the page fault
handler at the time an enclave address without a backing enclave page
is accessed, potentially directly reclaiming pages if no free pages
are available.

The enclave is still required to run ENCLU[EACCEPT] on the page before
it can be used. A useful flow is for the enclave to run ENCLU[EACCEPT]
on an uninitialized address. This will trigger the page fault handler
that will add the enclave page and return execution to the enclave to
repeat the ENCLU[EACCEPT] instruction, this time successful.
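
From within the enclave this flow is thus a single ENCLU[EACCEPT] on the
not-yet-backed address. A sketch of what the enclave would run (the flag
values and the inline ENCLU encoding match the selftests elsewhere in
this series; secinfo, page_addr and ret are local to the sketch):

  /* SECINFO for accepting a dynamically added (EAUG) regular RW page. */
  struct sgx_secinfo secinfo __attribute__((aligned(64))) = {
  	.flags = SGX_SECINFO_R | SGX_SECINFO_W |	/* runtime permissions */
  		 SGX_SECINFO_REG |			/* regular page type */
  		 SGX_SECINFO_PENDING,			/* page pending EACCEPT */
  };
  int ret;

  /* ENCLU[EACCEPT]: EAX = 0x5, RBX = &secinfo, RCX = page address. */
  asm volatile(".byte 0x0f, 0x01, 0xd7"
  	     : "=a" (ret)
  	     : "a" (0x5), "b" (&secinfo), "c" (page_addr));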

If the enclave accesses an uninitialized address in another way, for
example by expanding the enclave stack to a page that has not yet been
added, then the page fault handler adds the page on the first write.
Upon returning to the enclave the instruction that triggered the page
fault is repeated and, since ENCLU[EACCEPT] has not yet been run, it
triggers a second page fault, this time with the SGX flag set in the
page fault error code. This can only be recovered from by entering the
enclave again and directly running the ENCLU[EACCEPT] instruction on
the now initialized address.

Accessing an uninitialized address from outside the enclave also
triggers this flow but the page will remain inaccessible (access will
result in #PF) until accepted from within the enclave via
ENCLU[EACCEPT].

The page is added with the architecturally constrained RW permissions
as both its runtime and maximum allowed permissions. It is understood
that there are some use cases, for example code relocation, that
require RWX maximum permissions. Supporting these use cases requires
guidance from user space policy before such maximum permissions can be
allowed. Integration with user policy is deferred.

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 arch/x86/kernel/cpu/sgx/encl.c  | 132 ++++++++++++++++++++++++++++++++
 arch/x86/kernel/cpu/sgx/encl.h  |   2 +
 arch/x86/kernel/cpu/sgx/ioctl.c |   4 +-
 3 files changed, 136 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index baf798a793a2..9602b6db831b 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -124,6 +124,127 @@ struct sgx_encl_page *sgx_encl_load_page(struct sgx_encl *encl,
 	return entry;
 }
 
+/**
+ * sgx_encl_eaug_page() - Dynamically add page to initialized enclave
+ * @vma:	VMA obtained from fault info from where page is accessed
+ * @encl:	enclave accessing the page
+ * @addr:	address that triggered the page fault
+ *
+ * When an initialized enclave accesses a page with no backing EPC page
+ * on an SGX2 system, the EPC page can be added dynamically via the SGX2
+ * ENCLS[EAUG] instruction.
+ *
+ * Returns: Appropriate vm_fault_t: VM_FAULT_NOPAGE when PTE was installed
+ * successfully, VM_FAULT_SIGBUS or VM_FAULT_OOM as error otherwise.
+ */
+static vm_fault_t sgx_encl_eaug_page(struct vm_area_struct *vma,
+				     struct sgx_encl *encl, unsigned long addr)
+{
+	struct sgx_pageinfo pginfo = {0};
+	struct sgx_encl_page *encl_page;
+	struct sgx_epc_page *epc_page;
+	struct sgx_va_page *va_page;
+	unsigned long phys_addr;
+	unsigned long prot;
+	vm_fault_t vmret;
+	int ret;
+
+	if (!test_bit(SGX_ENCL_INITIALIZED, &encl->flags))
+		return VM_FAULT_SIGBUS;
+
+	encl_page = kzalloc(sizeof(*encl_page), GFP_KERNEL);
+	if (!encl_page)
+		return VM_FAULT_OOM;
+
+	encl_page->desc = addr;
+	encl_page->encl = encl;
+
+	/*
+	 * Adding a regular page that is architecturally allowed to only
+	 * be created with RW permissions.
+	 * TBD: Interface with user space policy to support max permissions
+	 * of RWX.
+	 */
+	prot = PROT_READ | PROT_WRITE | PROT_EXEC;
+	encl_page->vm_max_prot_bits = calc_vm_prot_bits(prot, 0);
+
+	epc_page = sgx_alloc_epc_page(encl_page, true);
+	if (IS_ERR(epc_page)) {
+		kfree(encl_page);
+		return VM_FAULT_SIGBUS;
+	}
+
+	va_page = sgx_encl_grow(encl);
+	if (IS_ERR(va_page)) {
+		ret = PTR_ERR(va_page);
+		goto err_out_free;
+	}
+
+	mutex_lock(&encl->lock);
+
+	/*
+	 * Copy comment from sgx_encl_add_page() to maintain guidance in
+	 * this similar flow:
+	 * Adding to encl->va_pages must be done under encl->lock.  Ditto for
+	 * deleting (via sgx_encl_shrink()) in the error path.
+	 */
+	if (va_page)
+		list_add(&va_page->list, &encl->va_pages);
+
+	ret = xa_insert(&encl->page_array, PFN_DOWN(encl_page->desc),
+			encl_page, GFP_KERNEL);
+	/*
+	 * If ret == -EBUSY then page was created in another flow while
+	 * running without encl->lock
+	 */
+	if (ret)
+		goto err_out_unlock;
+
+	pginfo.secs = (unsigned long)sgx_get_epc_virt_addr(encl->secs.epc_page);
+	pginfo.addr = encl_page->desc & PAGE_MASK;
+	pginfo.metadata = 0;
+
+	ret = __eaug(&pginfo, sgx_get_epc_virt_addr(epc_page));
+	if (ret)
+		goto err_out;
+
+	encl_page->encl = encl;
+	encl_page->epc_page = epc_page;
+	encl_page->type = SGX_PAGE_TYPE_REG;
+	encl->secs_child_cnt++;
+
+	sgx_mark_page_reclaimable(encl_page->epc_page);
+
+	phys_addr = sgx_get_epc_phys_addr(epc_page);
+	/*
+	 * Do not undo everything when creating PTE entry fails - next #PF
+	 * would find page ready for a PTE.
+	 * PAGE_SHARED because protection is forced to be RW above and COW
+	 * is not supported.
+	 */
+	vmret = vmf_insert_pfn_prot(vma, addr, PFN_DOWN(phys_addr),
+				    PAGE_SHARED);
+	if (vmret != VM_FAULT_NOPAGE) {
+		mutex_unlock(&encl->lock);
+		return VM_FAULT_SIGBUS;
+	}
+	mutex_unlock(&encl->lock);
+	return VM_FAULT_NOPAGE;
+
+err_out:
+	xa_erase(&encl->page_array, PFN_DOWN(encl_page->desc));
+
+err_out_unlock:
+	sgx_encl_shrink(encl, va_page);
+	mutex_unlock(&encl->lock);
+
+err_out_free:
+	sgx_encl_free_epc_page(epc_page);
+	kfree(encl_page);
+
+	return VM_FAULT_SIGBUS;
+}
+
 static vm_fault_t sgx_vma_fault(struct vm_fault *vmf)
 {
 	unsigned long addr = (unsigned long)vmf->address;
@@ -145,6 +266,17 @@ static vm_fault_t sgx_vma_fault(struct vm_fault *vmf)
 	if (unlikely(!encl))
 		return VM_FAULT_SIGBUS;
 
+	/*
+	 * The page_array keeps track of all enclave pages, whether they
+	 * are swapped out or not. If there is no entry for this page and
+	 * the system supports SGX2 then it is possible to dynamically add
+	 * a new enclave page. This is only possible for an initialized
+	 * enclave, which is checked for right away.
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_SGX2) &&
+	    (!xa_load(&encl->page_array, PFN_DOWN(addr))))
+		return sgx_encl_eaug_page(vma, encl, addr);
+
 	mutex_lock(&encl->lock);
 
 	entry = sgx_encl_load_page(encl, addr);
diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index 965cfc7b2b93..049b3bb08e63 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -122,4 +122,6 @@ void sgx_encl_free_epc_page(struct sgx_epc_page *page);
 struct sgx_encl_page *sgx_encl_load_page(struct sgx_encl *encl,
 					 unsigned long addr);
 
+struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl);
+void sgx_encl_shrink(struct sgx_encl *encl, struct sgx_va_page *va_page);
 #endif /* _X86_ENCL_H */
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index a848f6c38781..166ec7d79634 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -17,7 +17,7 @@
 #include "encl.h"
 #include "encls.h"
 
-static struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl)
+struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl)
 {
 	struct sgx_va_page *va_page = NULL;
 	void *err;
@@ -43,7 +43,7 @@ static struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl)
 	return va_page;
 }
 
-static void sgx_encl_shrink(struct sgx_encl *encl, struct sgx_va_page *va_page)
+void sgx_encl_shrink(struct sgx_encl *encl, struct sgx_va_page *va_page)
 {
 	encl->page_cnt--;
 
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2.1 18/30] x86/sgx: Tighten accessible memory range after enclave initialization
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (15 preceding siblings ...)
  2022-03-04  9:35 ` [RFC PATCH v2.1 17/30] x86/sgx: Support adding of pages to an initialized enclave Jarkko Sakkinen
@ 2022-03-04  9:35 ` Jarkko Sakkinen
  2022-03-04  9:35 ` [RFC PATCH v2.1 19/30] selftests/sgx: Test two different SGX2 EAUG flows Jarkko Sakkinen
                   ` (13 subsequent siblings)
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:35 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Jarkko Sakkinen,
	Dave Hansen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

From: Reinette Chatre <reinette.chatre@intel.com>

Before an enclave is initialized the enclave's memory range is unknown.
The enclave's memory range is learned at the time it is created via the
SGX_IOC_ENCLAVE_CREATE ioctl() where the provided memory range is
obtained from an earlier mmap() of /dev/sgx_enclave. After an enclave
is initialized its memory can be mapped into user space (mmap()) from
where it can be entered at its defined entry points.

With the enclave's memory range known after it is initialized there is
no reason why it should be possible to map memory outside this range.

Lock down access to the initialized enclave's memory range by denying
any attempt to map memory outside its memory range.

Locking down the memory range also makes adding pages to an initialized
enclave more efficient. Pages are added to an initialized enclave by
accessing memory that belongs to the enclave's memory range but not yet
backed by an enclave page. If it is possible for user space to map
memory that does not form part of the enclave then an access to this
memory would eventually fail. Failures range from a prompt general
protection fault if the access was an ENCLU[EACCEPT] from within the
enclave, to a page fault via the vDSO if it was another access from
within the enclave, to a SIGBUS (also resulting from a page fault) if
the access was from outside the enclave.

Disallowing invalid memory to be mapped in the first place avoids
preventable failures.
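
A sketch of the resulting user-visible behaviour (illustrative only;
encl_base, encl_size and encl_fd are assumed to describe an already
initialized enclave):

  /*
   * One page beyond the enclave's range: sgx_encl_may_map() now refuses
   * the mapping, so the mmap() fails instead of creating a mapping that
   * could never be backed by an enclave page.
   */
  void *addr = mmap((void *)(encl_base + encl_size), PAGE_SIZE,
  		    PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED,
  		    encl_fd, 0);
  /* Expected: addr == MAP_FAILED with errno == EACCES. */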

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 arch/x86/kernel/cpu/sgx/encl.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 9602b6db831b..f775995a75e2 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -402,6 +402,11 @@ int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
 
 	XA_STATE(xas, &encl->page_array, PFN_DOWN(start));
 
+	/* Disallow mapping outside enclave's address range. */
+	if (test_bit(SGX_ENCL_INITIALIZED, &encl->flags) &&
+	    (start < encl->base || end > encl->base + encl->size))
+		return -EACCES;
+
 	/*
 	 * Disallow READ_IMPLIES_EXEC tasks as their VMA permissions might
 	 * conflict with the enclave page permissions.
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2.1 19/30] selftests/sgx: Test two different SGX2 EAUG flows
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (16 preceding siblings ...)
  2022-03-04  9:35 ` [RFC PATCH v2.1 18/30] x86/sgx: Tighten accessible memory range after enclave initialization Jarkko Sakkinen
@ 2022-03-04  9:35 ` Jarkko Sakkinen
  2022-03-04  9:35 ` [RFC PATCH v2.1 20/30] x86/sgx: Support modifying SGX page type Jarkko Sakkinen
                   ` (12 subsequent siblings)
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:35 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Jarkko Sakkinen,
	Dave Hansen, Shuah Khan, open list:KERNEL SELFTEST FRAMEWORK,
	open list

From: Reinette Chatre <reinette.chatre@intel.com>

Enclave pages can be added to an initialized enclave when an address
belonging to the enclave but without a backing page is accessed from
within the enclave.

Memory without a backing enclave page can be accessed from within an
enclave in different ways:
1) Pre-emptively run ENCLU[EACCEPT]. Since the addition of a page
   always needs to be accepted by the enclave via ENCLU[EACCEPT], this
   flow is efficient: the first execution of ENCLU[EACCEPT] triggers
   the addition of the page and, when execution returns to the same
   instruction, the second execution succeeds as the acceptance of the
   page.

2) A direct read or write. When a direct read or write triggers the
   page addition, execution cannot resume from the instruction
   (read/write) that triggered the fault. Instead the enclave needs to
   be entered at a different entry point to run the needed
   ENCLU[EACCEPT] before execution can return to the original entry
   point and the read/write instruction that faulted.

Add tests for both flows.
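
A condensed sketch of the recovery in flow 2), mirroring the test added
below (self, addr, eaccept_op and put_addr_op are the test's local state):

  /* The faulting write left an AEX: vector 14 with the SGX bit set. */
  self->run.tcs = self->encl.encl_base + PAGE_SIZE;	/* enter at second TCS */

  eaccept_op.header.type = ENCL_OP_EACCEPT;
  eaccept_op.epc_addr = (unsigned long)addr;
  eaccept_op.flags = SGX_SECINFO_R | SGX_SECINFO_W | SGX_SECINFO_REG |
  		     SGX_SECINFO_PENDING;
  ENCL_CALL(&eaccept_op, &self->run, true);

  /* Back on the original TCS, resume the faulting write with ERESUME. */
  self->run.tcs = self->encl.encl_base;
  vdso_sgx_enter_enclave((unsigned long)&put_addr_op, 0, 0, ERESUME, 0, 0,
  			 &self->run);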

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 tools/testing/selftests/sgx/main.c | 243 +++++++++++++++++++++++++++++
 1 file changed, 243 insertions(+)

diff --git a/tools/testing/selftests/sgx/main.c b/tools/testing/selftests/sgx/main.c
index ea5f2e064687..13542c5de66f 100644
--- a/tools/testing/selftests/sgx/main.c
+++ b/tools/testing/selftests/sgx/main.c
@@ -86,6 +86,15 @@ static bool vdso_get_symtab(void *addr, struct vdso_symtab *symtab)
 	return true;
 }
 
+static inline int sgx2_supported(void)
+{
+	unsigned int eax, ebx, ecx, edx;
+
+	__cpuid_count(SGX_CPUID, 0x0, eax, ebx, ecx, edx);
+
+	return eax & 0x2;
+}
+
 static unsigned long elf_sym_hash(const char *name)
 {
 	unsigned long h = 0, high;
@@ -863,4 +872,238 @@ TEST_F(enclave, epcm_permissions)
 	EXPECT_EQ(self->run.exception_addr, 0);
 }
 
+/*
+ * Test the addition of pages to an initialized enclave via writing to
+ * a page belonging to the enclave's address space but was not added
+ * during enclave creation.
+ */
+TEST_F(enclave, augment)
+{
+	struct encl_op_get_from_addr get_addr_op;
+	struct encl_op_put_to_addr put_addr_op;
+	struct encl_op_eaccept eaccept_op;
+	size_t total_size = 0;
+	void *addr;
+	int i;
+
+	if (!sgx2_supported())
+		SKIP(return, "SGX2 not supported");
+
+	ASSERT_TRUE(setup_test_encl(ENCL_HEAP_SIZE_DEFAULT, &self->encl, _metadata));
+
+	memset(&self->run, 0, sizeof(self->run));
+	self->run.tcs = self->encl.encl_base;
+
+	for (i = 0; i < self->encl.nr_segments; i++) {
+		struct encl_segment *seg = &self->encl.segment_tbl[i];
+
+		total_size += seg->size;
+	}
+
+	/*
+	 * Actual enclave size is expected to be larger than the loaded
+	 * test enclave since enclave size must be a power of 2 in bytes
+	 * and test_encl does not consume it all.
+	 */
+	EXPECT_LT(total_size + PAGE_SIZE, self->encl.encl_size);
+
+	/*
+	 * Create memory mapping for the page that will be added. New
+	 * memory mapping is for one page right after all existing
+	 * mappings.
+	 */
+	addr = mmap((void *)self->encl.encl_base + total_size, PAGE_SIZE,
+		    PROT_READ | PROT_WRITE | PROT_EXEC,
+		    MAP_SHARED | MAP_FIXED, self->encl.fd, 0);
+	EXPECT_NE(addr, MAP_FAILED);
+
+	self->run.exception_vector = 0;
+	self->run.exception_error_code = 0;
+	self->run.exception_addr = 0;
+
+	/*
+	 * Attempt to write to the new page from within enclave.
+	 * Expected to fail since page is not (yet) part of the enclave.
+	 * The first #PF will trigger the addition of the page to the
+	 * enclave, but since the new page needs an EACCEPT from within the
+	 * enclave before it can be used it would not be possible
+	 * to successfully return to the failing instruction. This is the
+	 * cause of the second #PF captured here having the SGX bit set,
+	 * it is from hardware preventing the page from being used.
+	 */
+	put_addr_op.value = MAGIC;
+	put_addr_op.addr = (unsigned long)addr;
+	put_addr_op.header.type = ENCL_OP_PUT_TO_ADDRESS;
+
+	EXPECT_EQ(ENCL_CALL(&put_addr_op, &self->run, true), 0);
+
+	EXPECT_EQ(self->run.function, ERESUME);
+	EXPECT_EQ(self->run.exception_vector, 14);
+	EXPECT_EQ(self->run.exception_addr, (unsigned long)addr);
+
+	if (self->run.exception_error_code == 0x6) {
+		munmap(addr, PAGE_SIZE);
+		SKIP(return, "Kernel does not support adding pages to initialized enclave");
+	}
+
+	EXPECT_EQ(self->run.exception_error_code, 0x8007);
+
+	self->run.exception_vector = 0;
+	self->run.exception_error_code = 0;
+	self->run.exception_addr = 0;
+
+	/* Handle AEX by running EACCEPT from new entry point. */
+	self->run.tcs = self->encl.encl_base + PAGE_SIZE;
+
+	eaccept_op.epc_addr = self->encl.encl_base + total_size;
+	eaccept_op.flags = SGX_SECINFO_R | SGX_SECINFO_W | SGX_SECINFO_REG | SGX_SECINFO_PENDING;
+	eaccept_op.ret = 0;
+	eaccept_op.header.type = ENCL_OP_EACCEPT;
+
+	EXPECT_EQ(ENCL_CALL(&eaccept_op, &self->run, true), 0);
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+	EXPECT_EQ(eaccept_op.ret, 0);
+
+	/* Can now return to main TCS to resume execution. */
+	self->run.tcs = self->encl.encl_base;
+
+	EXPECT_EQ(vdso_sgx_enter_enclave((unsigned long)&put_addr_op, 0, 0,
+					 ERESUME, 0, 0,
+					 &self->run),
+		  0);
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+
+	/*
+	 * Read memory from newly added page that was just written to,
+	 * confirming that data previously written (MAGIC) is present.
+	 */
+	get_addr_op.value = 0;
+	get_addr_op.addr = (unsigned long)addr;
+	get_addr_op.header.type = ENCL_OP_GET_FROM_ADDRESS;
+
+	EXPECT_EQ(ENCL_CALL(&get_addr_op, &self->run, true), 0);
+
+	EXPECT_EQ(get_addr_op.value, MAGIC);
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+
+	munmap(addr, PAGE_SIZE);
+}
+
+/*
+ * Test for the addition of pages to an initialized enclave via a
+ * pre-emptive run of EACCEPT on page to be added.
+ */
+TEST_F(enclave, augment_via_eaccept)
+{
+	struct encl_op_get_from_addr get_addr_op;
+	struct encl_op_put_to_addr put_addr_op;
+	struct encl_op_eaccept eaccept_op;
+	size_t total_size = 0;
+	void *addr;
+	int i;
+
+	if (!sgx2_supported())
+		SKIP(return, "SGX2 not supported");
+
+	ASSERT_TRUE(setup_test_encl(ENCL_HEAP_SIZE_DEFAULT, &self->encl, _metadata));
+
+	memset(&self->run, 0, sizeof(self->run));
+	self->run.tcs = self->encl.encl_base;
+
+	for (i = 0; i < self->encl.nr_segments; i++) {
+		struct encl_segment *seg = &self->encl.segment_tbl[i];
+
+		total_size += seg->size;
+	}
+
+	/*
+	 * Actual enclave size is expected to be larger than the loaded
+	 * test enclave since enclave size must be a power of 2 in bytes while
+	 * test_encl does not consume it all.
+	 */
+	EXPECT_LT(total_size + PAGE_SIZE, self->encl.encl_size);
+
+	/*
+	 * mmap() a page at end of existing enclave to be used for dynamic
+	 * EPC page.
+	 */
+
+	addr = mmap((void *)self->encl.encl_base + total_size, PAGE_SIZE,
+		    PROT_READ | PROT_WRITE | PROT_EXEC, MAP_SHARED | MAP_FIXED,
+		    self->encl.fd, 0);
+	EXPECT_NE(addr, MAP_FAILED);
+
+	self->run.exception_vector = 0;
+	self->run.exception_error_code = 0;
+	self->run.exception_addr = 0;
+
+	/*
+	 * Run EACCEPT on new page to trigger the #PF->EAUG->EACCEPT(again
+	 * without a #PF). All should be transparent to userspace.
+	 */
+	eaccept_op.epc_addr = self->encl.encl_base + total_size;
+	eaccept_op.flags = SGX_SECINFO_R | SGX_SECINFO_W | SGX_SECINFO_REG | SGX_SECINFO_PENDING;
+	eaccept_op.ret = 0;
+	eaccept_op.header.type = ENCL_OP_EACCEPT;
+
+	EXPECT_EQ(ENCL_CALL(&eaccept_op, &self->run, true), 0);
+
+	if (self->run.exception_vector == 14 &&
+	    self->run.exception_error_code == 4 &&
+	    self->run.exception_addr == self->encl.encl_base + total_size) {
+		munmap(addr, PAGE_SIZE);
+		SKIP(return, "Kernel does not support adding pages to initialized enclave");
+	}
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+	EXPECT_EQ(eaccept_op.ret, 0);
+
+	/*
+	 * New page should be accessible from within enclave - attempt to
+	 * write to it.
+	 */
+	put_addr_op.value = MAGIC;
+	put_addr_op.addr = (unsigned long)addr;
+	put_addr_op.header.type = ENCL_OP_PUT_TO_ADDRESS;
+
+	EXPECT_EQ(ENCL_CALL(&put_addr_op, &self->run, true), 0);
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+
+	/*
+	 * Read memory from newly added page that was just written to,
+	 * confirming that data previously written (MAGIC) is present.
+	 */
+	get_addr_op.value = 0;
+	get_addr_op.addr = (unsigned long)addr;
+	get_addr_op.header.type = ENCL_OP_GET_FROM_ADDRESS;
+
+	EXPECT_EQ(ENCL_CALL(&get_addr_op, &self->run, true), 0);
+
+	EXPECT_EQ(get_addr_op.value, MAGIC);
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+
+	munmap(addr, PAGE_SIZE);
+}
+
 TEST_HARNESS_MAIN
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2.1 20/30] x86/sgx: Support modifying SGX page type
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (17 preceding siblings ...)
  2022-03-04  9:35 ` [RFC PATCH v2.1 19/30] selftests/sgx: Test two different SGX2 EAUG flows Jarkko Sakkinen
@ 2022-03-04  9:35 ` Jarkko Sakkinen
  2022-03-04  9:35 ` [RFC PATCH v2.1 21/30] x86/sgx: Support complete page removal Jarkko Sakkinen
                   ` (11 subsequent siblings)
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:35 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Jarkko Sakkinen,
	Dave Hansen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

From: Reinette Chatre <reinette.chatre@intel.com>

Every enclave contains one or more Thread Control Structures (TCS). The
TCS contains meta-data used by the hardware to save and restore thread
specific information when entering/exiting the enclave. With SGX1 an
enclave needs to be created with enough TCSs to support the largest
number of threads expecting to use the enclave and enough enclave pages
to meet all its anticipated memory demands. In SGX1 all pages remain in
the enclave until the enclave is unloaded.

SGX2 introduces a new function, ENCLS[EMODT], that is used to change
the type of an enclave page from a regular (SGX_PAGE_TYPE_REG) enclave
page to a TCS (SGX_PAGE_TYPE_TCS) page or change the type from a
regular (SGX_PAGE_TYPE_REG) or TCS (SGX_PAGE_TYPE_TCS)
page to a trimmed (SGX_PAGE_TYPE_TRIM) page (setting it up for later
removal).

With the existing support of dynamically adding regular enclave pages
to an initialized enclave and changing the page type to TCS it is
possible to dynamically increase the number of threads supported by an
enclave.

Changing the enclave page type to SGX_PAGE_TYPE_TRIM is the first step
of dynamically removing pages from an initialized enclave. The complete
page removal flow is:
1) Change the type of the pages to be removed to SGX_PAGE_TYPE_TRIM
   using the SGX_IOC_ENCLAVE_MODIFY_TYPE ioctl() introduced here.
2) Approve the page removal by running ENCLU[EACCEPT] from within
   the enclave.
3) Initiate actual page removal using the ioctl() introduced in the
   following patch.

Add ioctl() SGX_IOC_ENCLAVE_MODIFY_TYPE to support changing SGX
enclave page types within an initialized enclave. With
SGX_IOC_ENCLAVE_MODIFY_TYPE the user specifies a page range and the
enclave page type to be applied to all pages in the provided range.
The ioctl() itself can return an error code based on failures
encountered by the kernel. It is also possible for SGX-specific
failures to be encountered. Add a result output parameter to
communicate the SGX return code. It is possible for the enclave page
type change request to fail on any page within the provided range.
Support partial success by returning the number of pages that were
successfully changed.

After the page type is changed the page continues to be accessible
from the kernel perspective, with page table entries and internal
state intact. The page may be moved to swap. Any access until
ENCLU[EACCEPT] is run will encounter a page fault with the SGX flag
set in the error code.
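
As an illustration, a minimal userspace sketch of driving the new
ioctl() to trim a single page could look as follows. Here encl_fd and
page_offset are placeholders for an initialized enclave file
descriptor and a page-aligned offset relative to the enclave base;
error handling is omitted:

	struct sgx_secinfo secinfo = {
		.flags = SGX_PAGE_TYPE_TRIM << 8,
	};
	struct sgx_enclave_modt modt = {
		.offset  = page_offset,
		.length  = PAGE_SIZE,
		.secinfo = (unsigned long)&secinfo,
	};

	/*
	 * On success modt.count holds the bytes successfully changed; on
	 * an EMODT failure modt.result carries the SGX return code.
	 */
	if (ioctl(encl_fd, SGX_IOC_ENCLAVE_MODIFY_TYPE, &modt))
		perror("SGX_IOC_ENCLAVE_MODIFY_TYPE");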

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 arch/x86/include/uapi/asm/sgx.h |  20 +++
 arch/x86/kernel/cpu/sgx/ioctl.c | 212 ++++++++++++++++++++++++++++++++
 2 files changed, 232 insertions(+)

diff --git a/arch/x86/include/uapi/asm/sgx.h b/arch/x86/include/uapi/asm/sgx.h
index 107ffb0a0b48..003599120edf 100644
--- a/arch/x86/include/uapi/asm/sgx.h
+++ b/arch/x86/include/uapi/asm/sgx.h
@@ -31,6 +31,8 @@ enum sgx_page_flags {
 	_IO(SGX_MAGIC, 0x04)
 #define SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS \
 	_IOWR(SGX_MAGIC, 0x05, struct sgx_enclave_restrict_perm)
+#define SGX_IOC_ENCLAVE_MODIFY_TYPE \
+	_IOWR(SGX_MAGIC, 0x06, struct sgx_enclave_modt)
 
 /**
  * struct sgx_enclave_create - parameter structure for the
@@ -97,6 +99,24 @@ struct sgx_enclave_restrict_perm {
 	__u64 count;
 };
 
+/**
+ * struct sgx_enclave_modt - parameters for %SGX_IOC_ENCLAVE_MODIFY_TYPE
+ * @offset:	starting page offset (page aligned relative to enclave base
+ *		address defined in SECS)
+ * @length:	length of memory (multiple of the page size)
+ * @secinfo:	address for the SECINFO data containing the new type
+ *		for pages in range described by @offset and @length
+ * @result:	(output) SGX result code of ENCLS[EMODT] function
+ * @count:	(output) bytes successfully changed (multiple of page size)
+ */
+struct sgx_enclave_modt {
+	__u64 offset;
+	__u64 length;
+	__u64 secinfo;
+	__u64 result;
+	__u64 count;
+};
+
 struct sgx_enclave_run;
 
 /**
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 166ec7d79634..2c8cca51c7df 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -966,6 +966,215 @@ static long sgx_ioc_enclave_restrict_perm(struct sgx_encl *encl,
 	return ret;
 }
 
+/**
+ * sgx_enclave_modt() - Modify type of SGX enclave pages
+ * @encl:	Enclave to which the pages belong.
+ * @modt:	Checked parameters from user about which pages need modifying.
+ * @page_type:	New page type.
+ *
+ * Return:
+ * - 0:		Success
+ * - -errno:	Otherwise
+ */
+static long sgx_enclave_modt(struct sgx_encl *encl,
+			     struct sgx_enclave_modt *modt,
+			     enum sgx_page_type page_type)
+{
+	unsigned long max_prot_restore, run_prot_restore;
+	struct sgx_encl_page *entry;
+	struct sgx_secinfo secinfo;
+	unsigned long prot;
+	unsigned long addr;
+	unsigned long c;
+	void *epc_virt;
+	int ret;
+
+	/*
+	 * The only new page types allowed by hardware are PT_TCS and PT_TRIM.
+	 */
+	if (page_type != SGX_PAGE_TYPE_TCS && page_type != SGX_PAGE_TYPE_TRIM)
+		return -EINVAL;
+
+	memset(&secinfo, 0, sizeof(secinfo));
+
+	secinfo.flags = page_type << 8;
+
+	for (c = 0 ; c < modt->length; c += PAGE_SIZE) {
+		addr = encl->base + modt->offset + c;
+
+		mutex_lock(&encl->lock);
+
+		entry = sgx_encl_load_page(encl, addr);
+		if (IS_ERR(entry)) {
+			ret = PTR_ERR(entry) == -EBUSY ? -EAGAIN : -EFAULT;
+			goto out_unlock;
+		}
+
+		/*
+		 * Borrow the logic from the Intel SDM. Regular pages
+		 * (SGX_PAGE_TYPE_REG) can change type to SGX_PAGE_TYPE_TCS
+		 * or SGX_PAGE_TYPE_TRIM but TCS pages can only be trimmed.
+		 * CET pages not supported yet.
+		 */
+		if (!(entry->type == SGX_PAGE_TYPE_REG ||
+		      (entry->type == SGX_PAGE_TYPE_TCS &&
+		       page_type == SGX_PAGE_TYPE_TRIM))) {
+			ret = -EINVAL;
+			goto out_unlock;
+		}
+
+		max_prot_restore = entry->vm_max_prot_bits;
+		run_prot_restore = entry->vm_run_prot_bits;
+
+		/*
+		 * Once a regular page becomes a TCS page it cannot be
+		 * changed back. So the maximum allowed protection reflects
+		 * the TCS page that is always RW from kernel perspective but
+		 * will be inaccessible from within enclave. Before doing
+		 * so, do make sure that the new page type continues to
+		 * respect the originally vetted page permissions.
+		 */
+		if (entry->type == SGX_PAGE_TYPE_REG &&
+		    page_type == SGX_PAGE_TYPE_TCS) {
+			if (~entry->vm_max_prot_bits & (VM_READ | VM_WRITE)) {
+				ret = -EPERM;
+				goto out_unlock;
+			}
+			prot = PROT_READ | PROT_WRITE;
+			entry->vm_max_prot_bits = calc_vm_prot_bits(prot, 0);
+			entry->vm_run_prot_bits = entry->vm_max_prot_bits;
+
+			/*
+			 * Prevent page from being reclaimed while mutex
+			 * is released.
+			 */
+			if (sgx_unmark_page_reclaimable(entry->epc_page)) {
+				ret = -EAGAIN;
+				goto out_entry_changed;
+			}
+
+			/*
+			 * Do not keep encl->lock because of dependency on
+			 * mmap_lock acquired in sgx_zap_enclave_ptes().
+			 */
+			mutex_unlock(&encl->lock);
+
+			sgx_zap_enclave_ptes(encl, addr);
+
+			mutex_lock(&encl->lock);
+
+			sgx_mark_page_reclaimable(entry->epc_page);
+		}
+
+		/* Change EPC type */
+		epc_virt = sgx_get_epc_virt_addr(entry->epc_page);
+		ret = __emodt(&secinfo, epc_virt);
+		if (encls_faulted(ret)) {
+			/*
+			 * All possible faults should be avoidable:
+			 * parameters have been checked, will only change
+			 * valid page types, and no concurrent
+			 * SGX1/SGX2 ENCLS instructions since these are
+			 * protected with mutex.
+			 */
+			pr_err_once("EMODT encountered exception %d\n",
+				    ENCLS_TRAPNR(ret));
+			ret = -EFAULT;
+			goto out_entry_changed;
+		}
+		if (encls_failed(ret)) {
+			modt->result = ret;
+			ret = -EFAULT;
+			goto out_entry_changed;
+		}
+
+		ret = sgx_enclave_etrack(encl);
+		if (ret) {
+			ret = -EFAULT;
+			goto out_unlock;
+		}
+
+		entry->type = page_type;
+
+		mutex_unlock(&encl->lock);
+	}
+
+	ret = 0;
+	goto out;
+
+out_entry_changed:
+	entry->vm_max_prot_bits = max_prot_restore;
+	entry->vm_run_prot_bits = run_prot_restore;
+out_unlock:
+	mutex_unlock(&encl->lock);
+out:
+	modt->count = c;
+
+	return ret;
+}
+
+/**
+ * sgx_ioc_enclave_modt() - handler for %SGX_IOC_ENCLAVE_MODIFY_TYPE
+ * @encl:	an enclave pointer
+ * @arg:	userspace pointer to a &struct sgx_enclave_modt instance
+ *
+ * Ability to change the enclave page type supports the following use cases:
+ *
+ * * It is possible to add TCS pages to an enclave by changing the type of
+ *   regular pages (%SGX_PAGE_TYPE_REG) to TCS (%SGX_PAGE_TYPE_TCS) pages.
+ *   With this support the number of threads supported by an initialized
+ *   enclave can be increased dynamically.
+ *
+ * * Regular or TCS pages can dynamically be removed from an initialized
+ *   enclave by changing the page type to %SGX_PAGE_TYPE_TRIM. Changing the
+ *   page type to %SGX_PAGE_TYPE_TRIM marks the page for removal with actual
+ *   removal done by handler of %SGX_IOC_ENCLAVE_REMOVE_PAGES ioctl() called
+ *   after ENCLU[EACCEPT] is run on %SGX_PAGE_TYPE_TRIM page from within the
+ *   enclave.
+ *
+ * Return:
+ * - 0:		Success
+ * - -errno:	Otherwise
+ */
+static long sgx_ioc_enclave_modt(struct sgx_encl *encl, void __user *arg)
+{
+	struct sgx_enclave_modt params;
+	enum sgx_page_type page_type;
+	struct sgx_secinfo secinfo;
+	long ret;
+
+	ret = sgx_ioc_sgx2_ready(encl);
+	if (ret)
+		return ret;
+
+	if (copy_from_user(&params, arg, sizeof(params)))
+		return -EFAULT;
+
+	if (sgx_validate_offset_length(encl, params.offset, params.length))
+		return -EINVAL;
+
+	if (copy_from_user(&secinfo, (void __user *)params.secinfo,
+			   sizeof(secinfo)))
+		return -EFAULT;
+
+	if (secinfo.flags & ~SGX_SECINFO_PAGE_TYPE_MASK)
+		return -EINVAL;
+
+	if (memchr_inv(secinfo.reserved, 0, sizeof(secinfo.reserved)))
+		return -EINVAL;
+
+	if (params.result || params.count)
+		return -EINVAL;
+
+	page_type = (secinfo.flags & SGX_SECINFO_PAGE_TYPE_MASK) >> 8;
+	ret = sgx_enclave_modt(encl, &params, page_type);
+
+	if (copy_to_user(arg, &params, sizeof(params)))
+		return -EFAULT;
+
+	return ret;
+}
+
 long sgx_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
 {
 	struct sgx_encl *encl = filep->private_data;
@@ -990,6 +1199,9 @@ long sgx_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
 	case SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS:
 		ret = sgx_ioc_enclave_restrict_perm(encl, (void __user *)arg);
 		break;
+	case SGX_IOC_ENCLAVE_MODIFY_TYPE:
+		ret = sgx_ioc_enclave_modt(encl, (void __user *)arg);
+		break;
 	default:
 		ret = -ENOIOCTLCMD;
 		break;
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2.1 21/30] x86/sgx: Support complete page removal
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (18 preceding siblings ...)
  2022-03-04  9:35 ` [RFC PATCH v2.1 20/30] x86/sgx: Support modifying SGX page type Jarkko Sakkinen
@ 2022-03-04  9:35 ` Jarkko Sakkinen
  2022-03-04  9:35 ` [RFC PATCH v2.1 22/30] Documentation/x86: Introduce enclave runtime management section Jarkko Sakkinen
                   ` (10 subsequent siblings)
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:35 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Jarkko Sakkinen,
	Dave Hansen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

From: Reinette Chatre <reinette.chatre@intel.com>

The SGX2 page removal flow was introduced in the previous patch and is
as follows:
1) Change the type of the pages to be removed to SGX_PAGE_TYPE_TRIM
   using the SGX_IOC_ENCLAVE_MODIFY_TYPE ioctl() introduced in the
   previous patch.
2) Approve the page removal by running ENCLU[EACCEPT] from within
   the enclave.
3) Initiate actual page removal using the ioctl()
   SGX_IOC_ENCLAVE_REMOVE_PAGES introduced here.

Support the final step of the SGX2 page removal flow with ioctl()
SGX_IOC_ENCLAVE_REMOVE_PAGES. With this ioctl() the user specifies
a page range that should be removed. All pages in the provided
range should have the SGX_PAGE_TYPE_TRIM page type and the request
will fail with EPERM (Operation not permitted) if a page that does
not have the correct type is encountered. Page removal can fail
on any page within the provided range. Support partial success by
returning the number of pages that were successfully removed.

Since actual page removal will succeed even if ENCLU[EACCEPT] was not
run from within the enclave, the ENCLS[EMODPR] instruction with RWX
permissions is used as a no-op mechanism to ensure ENCLU[EACCEPT] was
successfully run from within the enclave before the enclave page is
removed.

If the user omits running SGX_IOC_ENCLAVE_REMOVE_PAGES the pages will
still be removed when the enclave is unloaded.
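
As an illustration, once the page type change and the in-enclave
ENCLU[EACCEPT] have completed, a minimal userspace sketch of the final
removal step could be the following. As before, encl_fd and
page_offset are placeholders and error handling is omitted:

	struct sgx_enclave_remove_pages remove = {
		.offset = page_offset,
		.length = PAGE_SIZE,
	};

	/* Fails with EPERM if the trimmed page was not EACCEPTed. */
	if (ioctl(encl_fd, SGX_IOC_ENCLAVE_REMOVE_PAGES, &remove))
		perror("SGX_IOC_ENCLAVE_REMOVE_PAGES");
	/* remove.count reports the bytes successfully removed. */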

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 arch/x86/include/uapi/asm/sgx.h |  21 +++++
 arch/x86/kernel/cpu/sgx/ioctl.c | 145 ++++++++++++++++++++++++++++++++
 2 files changed, 166 insertions(+)

diff --git a/arch/x86/include/uapi/asm/sgx.h b/arch/x86/include/uapi/asm/sgx.h
index 003599120edf..c4e0326d281d 100644
--- a/arch/x86/include/uapi/asm/sgx.h
+++ b/arch/x86/include/uapi/asm/sgx.h
@@ -33,6 +33,8 @@ enum sgx_page_flags {
 	_IOWR(SGX_MAGIC, 0x05, struct sgx_enclave_restrict_perm)
 #define SGX_IOC_ENCLAVE_MODIFY_TYPE \
 	_IOWR(SGX_MAGIC, 0x06, struct sgx_enclave_modt)
+#define SGX_IOC_ENCLAVE_REMOVE_PAGES \
+	_IOWR(SGX_MAGIC, 0x08, struct sgx_enclave_remove_pages)
 
 /**
  * struct sgx_enclave_create - parameter structure for the
@@ -117,6 +119,25 @@ struct sgx_enclave_modt {
 	__u64 count;
 };
 
+/**
+ * struct sgx_enclave_remove_pages - %SGX_IOC_ENCLAVE_REMOVE_PAGES parameters
+ * @offset:	starting page offset (page aligned relative to enclave base
+ *		address defined in SECS)
+ * @length:	length of memory (multiple of the page size)
+ * @count:	(output) bytes successfully changed (multiple of page size)
+ *
+ * Regular (PT_REG) or TCS (PT_TCS) pages can be removed from an initialized
+ * enclave if the system supports SGX2. First, the %SGX_IOC_ENCLAVE_MODIFY_TYPE
+ * ioctl() should be used to change the page type to PT_TRIM. After that
+ * succeeds, ENCLU[EACCEPT] should be run from within the enclave and then
+ * %SGX_IOC_ENCLAVE_REMOVE_PAGES can be used to complete the page removal.
+ */
+struct sgx_enclave_remove_pages {
+	__u64 offset;
+	__u64 length;
+	__u64 count;
+};
+
 struct sgx_enclave_run;
 
 /**
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 2c8cca51c7df..99cba5837acc 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -1175,6 +1175,148 @@ static long sgx_ioc_enclave_modt(struct sgx_encl *encl, void __user *arg)
 	return ret;
 }
 
+/**
+ * sgx_encl_remove_pages() - Remove trimmed pages from SGX enclave
+ * @encl:	Enclave to which the pages belong
+ * @params:	Checked parameters from user on which pages need to be removed
+ *
+ * Return:
+ * - 0:		Success.
+ * - -errno:	Otherwise.
+ */
+static long sgx_encl_remove_pages(struct sgx_encl *encl,
+				  struct sgx_enclave_remove_pages *params)
+{
+	struct sgx_encl_page *entry;
+	struct sgx_secinfo secinfo;
+	unsigned long addr;
+	unsigned long c;
+	void *epc_virt;
+	int ret;
+
+	memset(&secinfo, 0, sizeof(secinfo));
+	secinfo.flags = SGX_SECINFO_R | SGX_SECINFO_W | SGX_SECINFO_X;
+
+	for (c = 0 ; c < params->length; c += PAGE_SIZE) {
+		addr = encl->base + params->offset + c;
+
+		mutex_lock(&encl->lock);
+
+		entry = sgx_encl_load_page(encl, addr);
+		if (IS_ERR(entry)) {
+			ret = PTR_ERR(entry) == -EBUSY ? -EAGAIN : -EFAULT;
+			goto out_unlock;
+		}
+
+		if (entry->type != SGX_PAGE_TYPE_TRIM) {
+			ret = -EPERM;
+			goto out_unlock;
+		}
+
+		/*
+		 * ENCLS[EMODPR] is used here as a no-op to determine whether
+		 * ENCLU[EACCEPT] was run from within the enclave. If
+		 * ENCLS[EMODPR] is run with RWX on a trimmed page that is
+		 * not yet accepted then it will return
+		 * %SGX_PAGE_NOT_MODIFIABLE; after the trimmed page is
+		 * accepted the instruction will encounter a page fault.
+		 */
+		epc_virt = sgx_get_epc_virt_addr(entry->epc_page);
+		ret = __emodpr(&secinfo, epc_virt);
+		if (!encls_faulted(ret) || ENCLS_TRAPNR(ret) != X86_TRAP_PF) {
+			ret = -EPERM;
+			goto out_unlock;
+		}
+
+		if (sgx_unmark_page_reclaimable(entry->epc_page)) {
+			ret = -EBUSY;
+			goto out_unlock;
+		}
+
+		/*
+		 * Do not keep encl->lock because of dependency on
+		 * mmap_lock acquired in sgx_zap_enclave_ptes().
+		 */
+		mutex_unlock(&encl->lock);
+
+		sgx_zap_enclave_ptes(encl, addr);
+
+		mutex_lock(&encl->lock);
+
+		sgx_encl_free_epc_page(entry->epc_page);
+		encl->secs_child_cnt--;
+		entry->epc_page = NULL;
+		xa_erase(&encl->page_array, PFN_DOWN(entry->desc));
+		sgx_encl_shrink(encl, NULL);
+		kfree(entry);
+
+		mutex_unlock(&encl->lock);
+	}
+
+	ret = 0;
+	goto out;
+
+out_unlock:
+	mutex_unlock(&encl->lock);
+out:
+	params->count = c;
+
+	return ret;
+}
+
+/**
+ * sgx_ioc_enclave_remove_pages() - handler for %SGX_IOC_ENCLAVE_REMOVE_PAGES
+ * @encl:	an enclave pointer
+ * @arg:	userspace pointer to &struct sgx_enclave_remove_pages instance
+ *
+ * Final step of the flow removing pages from an initialized enclave. The
+ * complete flow is:
+ *
+ * 1) User changes the type of the pages to be removed to %SGX_PAGE_TYPE_TRIM
+ *    using the %SGX_IOC_ENCLAVE_MODIFY_TYPE ioctl().
+ * 2) User approves the page removal by running ENCLU[EACCEPT] from within
+ *    the enclave.
+ * 3) User initiates actual page removal using the
+ *    %SGX_IOC_ENCLAVE_REMOVE_PAGES ioctl() that is handled here.
+ *
+ * First remove any page table entries pointing to the page and then proceed
+ * with the actual removal of the enclave page and data in support of it.
+ *
+ * VA pages are not affected by this removal. It is thus possible that the
+ * enclave may end up with more VA pages than needed to support all its
+ * pages.
+ *
+ * Return:
+ * - 0:		Success
+ * - -errno:	Otherwise
+ */
+static long sgx_ioc_enclave_remove_pages(struct sgx_encl *encl,
+					 void __user *arg)
+{
+	struct sgx_enclave_remove_pages params;
+	long ret;
+
+	ret = sgx_ioc_sgx2_ready(encl);
+	if (ret)
+		return ret;
+
+	if (copy_from_user(&params, arg, sizeof(params)))
+		return -EFAULT;
+
+	if (sgx_validate_offset_length(encl, params.offset, params.length))
+		return -EINVAL;
+
+	if (params.count)
+		return -EINVAL;
+
+	ret = sgx_encl_remove_pages(encl, &params);
+
+	if (copy_to_user(arg, &params, sizeof(params)))
+		return -EFAULT;
+
+	return ret;
+}
+
 long sgx_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
 {
 	struct sgx_encl *encl = filep->private_data;
@@ -1202,6 +1344,9 @@ long sgx_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
 	case SGX_IOC_ENCLAVE_MODIFY_TYPE:
 		ret = sgx_ioc_enclave_modt(encl, (void __user *)arg);
 		break;
+	case SGX_IOC_ENCLAVE_REMOVE_PAGES:
+		ret = sgx_ioc_enclave_remove_pages(encl, (void __user *)arg);
+		break;
 	default:
 		ret = -ENOIOCTLCMD;
 		break;
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2.1 22/30] Documentation/x86: Introduce enclave runtime management section
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (19 preceding siblings ...)
  2022-03-04  9:35 ` [RFC PATCH v2.1 21/30] x86/sgx: Support complete page removal Jarkko Sakkinen
@ 2022-03-04  9:35 ` Jarkko Sakkinen
  2022-03-04  9:35 ` [RFC PATCH v2.1 23/30] selftests/sgx: Introduce dynamic entry point Jarkko Sakkinen
                   ` (9 subsequent siblings)
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:35 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Jarkko Sakkinen,
	Dave Hansen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, Jonathan Corbet,
	open list:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	open list:DOCUMENTATION

From: Reinette Chatre <reinette.chatre@intel.com>

Introduce an enclave runtime management section following the pattern
of the section describing enclave building. Provide a brief
summary of enclave runtime management, pointing to the functions
implementing the ioctl()s, whose kernel-doc contains the details.

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 Documentation/x86/sgx.rst | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/Documentation/x86/sgx.rst b/Documentation/x86/sgx.rst
index 5659932728a5..6c66ce0ec69c 100644
--- a/Documentation/x86/sgx.rst
+++ b/Documentation/x86/sgx.rst
@@ -128,6 +128,22 @@ pages and establish enclave page permissions.
                sgx_ioc_enclave_init
                sgx_ioc_enclave_provision
 
+Enclave runtime management
+--------------------------
+
+Systems supporting SGX2 additionally support changes to initialized
+enclaves: modifying enclave page permissions and type, and dynamically
+adding and removing enclave pages. When an enclave accesses an address
+within its address range that does not have a backing page, a new
+regular page will be dynamically added to the enclave. The enclave is
+still required to run EACCEPT on the new page before it can be used.
+
+.. kernel-doc:: arch/x86/kernel/cpu/sgx/ioctl.c
+   :functions: sgx_ioc_enclave_relax_perm
+               sgx_ioc_enclave_restrict_perm
+               sgx_ioc_enclave_modt
+               sgx_ioc_enclave_remove_pages
+
 Enclave vDSO
 ------------
 
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2.1 23/30] selftests/sgx: Introduce dynamic entry point
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (20 preceding siblings ...)
  2022-03-04  9:35 ` [RFC PATCH v2.1 22/30] Documentation/x86: Introduce enclave runtime management section Jarkko Sakkinen
@ 2022-03-04  9:35 ` Jarkko Sakkinen
  2022-03-04  9:35 ` [RFC PATCH v2.1 24/30] selftests/sgx: Introduce TCS initialization enclave operation Jarkko Sakkinen
                   ` (8 subsequent siblings)
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:35 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Jarkko Sakkinen,
	Dave Hansen, Shuah Khan, open list:KERNEL SELFTEST FRAMEWORK,
	open list

From: Reinette Chatre <reinette.chatre@intel.com>

The test enclave (test_encl.elf) is built with two initialized
Thread Control Structures (TCS) included in the binary. Both TCS are
initialized with the same entry point, encl_entry, that correctly
computes the absolute address of the stack based on the stack of each
TCS that is also built into the binary.

A new TCS can be added dynamically to the enclave and needs to be
initialized with an entry point used to enter the enclave. Since the
existing entry point, encl_entry, assumes that the TCS and its stack
exist at particular offsets within the binary, it is not able to handle
a dynamically added TCS and its stack.

Introduce a new entry point, encl_dyn_entry, that initializes the
absolute address of that thread's stack to the address immediately
preceding the TCS itself. It is now possible to dynamically add a
contiguous memory region to the enclave with the new stack preceding
the new TCS. With the new TCS initialized with encl_dyn_entry as entry
point the absolute address of the stack is computed correctly on entry.
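
For illustration, the layout assumed by encl_dyn_entry (the stack page
directly preceding the new TCS page) could be set up as in the
following sketch, where encl_base and total_size are placeholders for
the enclave base address and the size of the segments already loaded:

	void *stack_end = encl_base + total_size;		/* regular page   */
	void *tcs       = encl_base + total_size + PAGE_SIZE;	/* becomes PT_TCS */

	/*
	 * On entry through the new TCS, %rbx holds the TCS linear address,
	 * so "lea -1(%rbx), %rax" resolves to the last byte of the stack
	 * page that immediately precedes the TCS.
	 */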

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 tools/testing/selftests/sgx/test_encl_bootstrap.S | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/tools/testing/selftests/sgx/test_encl_bootstrap.S b/tools/testing/selftests/sgx/test_encl_bootstrap.S
index 82fb0dfcbd23..03ae0f57e29d 100644
--- a/tools/testing/selftests/sgx/test_encl_bootstrap.S
+++ b/tools/testing/selftests/sgx/test_encl_bootstrap.S
@@ -45,6 +45,12 @@ encl_entry:
 	# TCS #2. By adding the value of encl_stack to it, we get
 	# the absolute address for the stack.
 	lea	(encl_stack)(%rbx), %rax
+	jmp encl_entry_core
+encl_dyn_entry:
+	# Entry point for dynamically created TCS page expected to follow
+	# its stack directly.
+	lea -1(%rbx), %rax
+encl_entry_core:
 	xchg	%rsp, %rax
 	push	%rax
 
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2.1 24/30] selftests/sgx: Introduce TCS initialization enclave operation
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (21 preceding siblings ...)
  2022-03-04  9:35 ` [RFC PATCH v2.1 23/30] selftests/sgx: Introduce dynamic entry point Jarkko Sakkinen
@ 2022-03-04  9:35 ` Jarkko Sakkinen
  2022-03-04  9:35 ` [RFC PATCH v2.1 25/30] selftests/sgx: Test complete changing of page type flow Jarkko Sakkinen
                   ` (7 subsequent siblings)
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:35 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Jarkko Sakkinen,
	Dave Hansen, Shuah Khan, open list:KERNEL SELFTEST FRAMEWORK,
	open list

From: Reinette Chatre <reinette.chatre@intel.com>

The Thread Control Structure (TCS) contains meta-data used by the
hardware to save and restore thread specific information when
entering/exiting the enclave. A TCS can be added to an initialized
enclave by first adding a new regular enclave page, initializing the
content of the new page from within the enclave, and then changing that
page's type to a TCS.

Support the initialization of a TCS from within the enclave.
The variable information that needs to be provided from outside
the enclave is the address of the TCS, the address of the State Save
Area (SSA), and the entry point that the thread should use to enter the
enclave. With this information provided, all needed fields of a TCS
can be initialized.
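
For illustration, a host-side sketch of requesting the in-enclave TCS
initialization could look as follows. Here tcs, ssa_offset,
entry_offset and run are placeholders for the linear address of the
new TCS page, the SSA offset from the enclave base, the entry point
offset (e.g. of encl_dyn_entry), and the sgx_enclave_run state:

	struct encl_op_init_tcs_page op = {
		.header.type = ENCL_OP_INIT_TCS_PAGE,
		.tcs_page    = (uint64_t)tcs,	/* linear address of the new TCS */
		.ssa         = ssa_offset,	/* SSA offset from enclave base  */
		.entry       = entry_offset,	/* entry point offset from base  */
	};

	/* Run the operation through an existing, already initialized TCS. */
	ENCL_CALL(&op, &run, true);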

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 tools/testing/selftests/sgx/defines.h   |  8 +++++++
 tools/testing/selftests/sgx/test_encl.c | 30 +++++++++++++++++++++++++
 2 files changed, 38 insertions(+)

diff --git a/tools/testing/selftests/sgx/defines.h b/tools/testing/selftests/sgx/defines.h
index b638eb98c80c..d8587c971941 100644
--- a/tools/testing/selftests/sgx/defines.h
+++ b/tools/testing/selftests/sgx/defines.h
@@ -26,6 +26,7 @@ enum encl_op_type {
 	ENCL_OP_NOP,
 	ENCL_OP_EACCEPT,
 	ENCL_OP_EMODPE,
+	ENCL_OP_INIT_TCS_PAGE,
 	ENCL_OP_MAX,
 };
 
@@ -68,4 +69,11 @@ struct encl_op_emodpe {
 	uint64_t flags;
 };
 
+struct encl_op_init_tcs_page {
+	struct encl_op_header header;
+	uint64_t tcs_page;
+	uint64_t ssa;
+	uint64_t entry;
+};
+
 #endif /* DEFINES_H */
diff --git a/tools/testing/selftests/sgx/test_encl.c b/tools/testing/selftests/sgx/test_encl.c
index 5b6c65331527..c0d6397295e3 100644
--- a/tools/testing/selftests/sgx/test_encl.c
+++ b/tools/testing/selftests/sgx/test_encl.c
@@ -57,6 +57,35 @@ static void *memcpy(void *dest, const void *src, size_t n)
 	return dest;
 }
 
+static void *memset(void *dest, int c, size_t n)
+{
+	size_t i;
+
+	for (i = 0; i < n; i++)
+		((char *)dest)[i] = c;
+
+	return dest;
+}
+
+static void do_encl_init_tcs_page(void *_op)
+{
+	struct encl_op_init_tcs_page *op = _op;
+	void *tcs = (void *)op->tcs_page;
+	uint32_t val_32;
+
+	memset(tcs, 0, 16);			/* STATE and FLAGS */
+	memcpy(tcs + 16, &op->ssa, 8);		/* OSSA */
+	memset(tcs + 24, 0, 4);			/* CSSA */
+	val_32 = 1;
+	memcpy(tcs + 28, &val_32, 4);		/* NSSA */
+	memcpy(tcs + 32, &op->entry, 8);	/* OENTRY */
+	memset(tcs + 40, 0, 24);		/* AEP, OFSBASE, OGSBASE */
+	val_32 = 0xFFFFFFFF;
+	memcpy(tcs + 64, &val_32, 4);		/* FSLIMIT */
+	memcpy(tcs + 68, &val_32, 4);		/* GSLIMIT */
+	memset(tcs + 72, 0, 4024);		/* Reserved */
+}
+
 static void do_encl_op_put_to_buf(void *op)
 {
 	struct encl_op_put_to_buf *op2 = op;
@@ -100,6 +129,7 @@ void encl_body(void *rdi,  void *rsi)
 		do_encl_op_nop,
 		do_encl_eaccept,
 		do_encl_emodpe,
+		do_encl_init_tcs_page,
 	};
 
 	struct encl_op_header *op = (struct encl_op_header *)rdi;
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2.1 25/30] selftests/sgx: Test complete changing of page type flow
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (22 preceding siblings ...)
  2022-03-04  9:35 ` [RFC PATCH v2.1 24/30] selftests/sgx: Introduce TCS initialization enclave operation Jarkko Sakkinen
@ 2022-03-04  9:35 ` Jarkko Sakkinen
  2022-03-04  9:35 ` [RFC PATCH v2.1 26/30] selftests/sgx: Test faulty enclave behavior Jarkko Sakkinen
                   ` (6 subsequent siblings)
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:35 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Jarkko Sakkinen,
	Dave Hansen, Shuah Khan, open list:KERNEL SELFTEST FRAMEWORK,
	open list

From: Reinette Chatre <reinette.chatre@intel.com>

Support for changing an enclave page's type enables an initialized
enclave to be expanded with support for more threads by changing the
type of a regular enclave page to that of a Thread Control Structure
(TCS).  Additionally, being able to change a TCS or regular enclave
page's type to be trimmed (SGX_PAGE_TYPE_TRIM) initiates the removal
of the page from the enclave.

Test changing page type to TCS as well as page removal flows
in two phases: In the first phase support for a new thread is
dynamically added to an initialized enclave and in the second phase
the pages associated with the new thread are removed from the enclave.
As an additional sanity check after the second phase, the page used as
a TCS page during the first phase is added back as a regular page and
it is verified that it can be written to (which would not be possible
if it were still a TCS page).

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 tools/testing/selftests/sgx/load.c |  41 ++++
 tools/testing/selftests/sgx/main.c | 347 +++++++++++++++++++++++++++++
 tools/testing/selftests/sgx/main.h |   1 +
 3 files changed, 389 insertions(+)

diff --git a/tools/testing/selftests/sgx/load.c b/tools/testing/selftests/sgx/load.c
index 006b464c8fc9..94bdeac1cf04 100644
--- a/tools/testing/selftests/sgx/load.c
+++ b/tools/testing/selftests/sgx/load.c
@@ -130,6 +130,47 @@ static bool encl_ioc_add_pages(struct encl *encl, struct encl_segment *seg)
 	return true;
 }
 
+/*
+ * Parse the enclave code's symbol table to locate and return the address
+ * of the provided symbol.
+ */
+uint64_t encl_get_entry(struct encl *encl, const char *symbol)
+{
+	Elf64_Shdr *sections;
+	Elf64_Sym *symtab;
+	Elf64_Ehdr *ehdr;
+	char *sym_names;
+	int num_sym;
+	int i;
+
+	ehdr = encl->bin;
+	sections = encl->bin + ehdr->e_shoff;
+
+	for (i = 0; i < ehdr->e_shnum; i++) {
+		if (sections[i].sh_type == SHT_SYMTAB) {
+			symtab = (Elf64_Sym *)((char *)encl->bin + sections[i].sh_offset);
+			num_sym = sections[i].sh_size / sections[i].sh_entsize;
+			break;
+		}
+	}
+
+	for (i = 0; i < ehdr->e_shnum; i++) {
+		if (sections[i].sh_type == SHT_STRTAB) {
+			sym_names = (char *)encl->bin + sections[i].sh_offset;
+			break;
+		}
+	}
+
+	for (i = 0; i < num_sym; i++) {
+		Elf64_Sym *sym = &symtab[i];
+
+		if (!strcmp(symbol, sym_names + sym->st_name))
+			return (uint64_t)sym->st_value;
+	}
+
+	return 0;
+}
+
 bool encl_load(const char *path, struct encl *encl, unsigned long heap_size)
 {
 	const char device_path[] = "/dev/sgx_enclave";
diff --git a/tools/testing/selftests/sgx/main.c b/tools/testing/selftests/sgx/main.c
index 13542c5de66f..f9872c6746a3 100644
--- a/tools/testing/selftests/sgx/main.c
+++ b/tools/testing/selftests/sgx/main.c
@@ -1106,4 +1106,351 @@ TEST_F(enclave, augment_via_eaccept)
 	munmap(addr, PAGE_SIZE);
 }
 
+/*
+ * SGX2 page type modification test in two phases:
+ * Phase 1:
+ * Create a new TCS, consisting of three new pages (stack page with regular
+ * page type, SSA page with regular page type, and TCS page with TCS page
+ * type) in an initialized enclave and run a simple workload within it.
+ * Phase 2:
+ * Remove the three pages added in phase 1, add a new regular page at the
+ * same address that previously hosted the TCS page and verify that it can
+ * be modified.
+ */
+TEST_F(enclave, tcs_create)
+{
+	struct encl_op_init_tcs_page init_tcs_page_op;
+	struct sgx_enclave_remove_pages remove_ioc;
+	struct encl_op_get_from_addr get_addr_op;
+	struct encl_op_put_to_addr put_addr_op;
+	struct encl_op_get_from_buf get_buf_op;
+	struct encl_op_put_to_buf put_buf_op;
+	void *addr, *tcs, *stack_end, *ssa;
+	struct encl_op_eaccept eaccept_op;
+	struct sgx_enclave_modt modt_ioc;
+	struct sgx_secinfo secinfo;
+	size_t total_size = 0;
+	uint64_t val_64;
+	int errno_save;
+	int ret, i;
+
+	ASSERT_TRUE(setup_test_encl(ENCL_HEAP_SIZE_DEFAULT, &self->encl,
+				    _metadata));
+
+	memset(&self->run, 0, sizeof(self->run));
+	self->run.tcs = self->encl.encl_base;
+
+	/*
+	 * Hardware (SGX2) and kernel support is needed for this test. Start
+	 * with check that test has a chance of succeeding.
+	 */
+	memset(&modt_ioc, 0, sizeof(modt_ioc));
+	ret = ioctl(self->encl.fd, SGX_IOC_ENCLAVE_MODIFY_TYPE, &modt_ioc);
+
+	if (ret == -1) {
+		if (errno == ENOTTY)
+			SKIP(return, "Kernel does not support SGX_IOC_ENCLAVE_MODIFY_TYPE ioctl()");
+		else if (errno == ENODEV)
+			SKIP(return, "System does not support SGX2");
+	}
+
+	/*
+	 * Invalid parameters were provided during sanity check,
+	 * expect command to fail.
+	 */
+	EXPECT_EQ(ret, -1);
+
+	/*
+	 * Add three regular pages via EAUG: one will be the TCS stack, one
+	 * will be the TCS SSA, and one will be the new TCS. The stack and
+	 * SSA will remain regular pages, while the TCS page will need its
+	 * type changed after being populated with the needed data.
+	 */
+	for (i = 0; i < self->encl.nr_segments; i++) {
+		struct encl_segment *seg = &self->encl.segment_tbl[i];
+
+		total_size += seg->size;
+	}
+
+	/*
+	 * Actual enclave size is expected to be larger than the loaded
+	 * test enclave since enclave size must be a power of 2 in bytes while
+	 * test_encl does not consume it all.
+	 */
+	EXPECT_LT(total_size + 3 * PAGE_SIZE, self->encl.encl_size);
+
+	/*
+	 * mmap() three pages at end of existing enclave to be used for the
+	 * three new pages.
+	 */
+	addr = mmap((void *)self->encl.encl_base + total_size, 3 * PAGE_SIZE,
+		    PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED,
+		    self->encl.fd, 0);
+	EXPECT_NE(addr, MAP_FAILED);
+
+	self->run.exception_vector = 0;
+	self->run.exception_error_code = 0;
+	self->run.exception_addr = 0;
+
+	stack_end = (void *)self->encl.encl_base + total_size;
+	tcs = (void *)self->encl.encl_base + total_size + PAGE_SIZE;
+	ssa = (void *)self->encl.encl_base + total_size + 2 * PAGE_SIZE;
+
+	/*
+	 * Run EACCEPT on each new page to trigger the
+	 * EACCEPT->(#PF)->EAUG->EACCEPT(again without a #PF) flow.
+	 */
+
+	eaccept_op.epc_addr = (unsigned long)stack_end;
+	eaccept_op.flags = SGX_SECINFO_R | SGX_SECINFO_W | SGX_SECINFO_REG | SGX_SECINFO_PENDING;
+	eaccept_op.ret = 0;
+	eaccept_op.header.type = ENCL_OP_EACCEPT;
+
+	EXPECT_EQ(ENCL_CALL(&eaccept_op, &self->run, true), 0);
+
+	if (self->run.exception_vector == 14 &&
+	    self->run.exception_error_code == 4 &&
+	    self->run.exception_addr == (unsigned long)stack_end) {
+		munmap(addr, 3 * PAGE_SIZE);
+		SKIP(return, "Kernel does not support adding pages to initialized enclave");
+	}
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+	EXPECT_EQ(eaccept_op.ret, 0);
+
+	eaccept_op.epc_addr = (unsigned long)ssa;
+
+	EXPECT_EQ(ENCL_CALL(&eaccept_op, &self->run, true), 0);
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+	EXPECT_EQ(eaccept_op.ret, 0);
+
+	eaccept_op.epc_addr = (unsigned long)tcs;
+
+	EXPECT_EQ(ENCL_CALL(&eaccept_op, &self->run, true), 0);
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+	EXPECT_EQ(eaccept_op.ret, 0);
+
+	/*
+	 * Three new pages have been added to the enclave. Now populate the
+	 * TCS page with the needed data. This should be done from within the
+	 * enclave: provide the enclave operation that will do the actual
+	 * data population.
+	 */
+
+	/*
+	 * New TCS will use the "encl_dyn_entry" entrypoint that expects
+	 * stack to begin in page before TCS page.
+	 */
+	val_64 = encl_get_entry(&self->encl, "encl_dyn_entry");
+	EXPECT_NE(val_64, 0);
+
+	init_tcs_page_op.tcs_page = (unsigned long)tcs;
+	init_tcs_page_op.ssa = (unsigned long)total_size + 2 * PAGE_SIZE;
+	init_tcs_page_op.entry = val_64;
+	init_tcs_page_op.header.type = ENCL_OP_INIT_TCS_PAGE;
+
+	EXPECT_EQ(ENCL_CALL(&init_tcs_page_op, &self->run, true), 0);
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+
+	/* Change TCS page type to TCS. */
+	memset(&modt_ioc, 0, sizeof(modt_ioc));
+	memset(&secinfo, 0, sizeof(secinfo));
+
+	secinfo.flags = SGX_PAGE_TYPE_TCS << 8;
+	modt_ioc.offset = total_size + PAGE_SIZE;
+	modt_ioc.length = PAGE_SIZE;
+	modt_ioc.secinfo = (unsigned long)&secinfo;
+
+	ret = ioctl(self->encl.fd, SGX_IOC_ENCLAVE_MODIFY_TYPE, &modt_ioc);
+	errno_save = ret == -1 ? errno : 0;
+
+	EXPECT_EQ(ret, 0);
+	EXPECT_EQ(errno_save, 0);
+	EXPECT_EQ(modt_ioc.result, 0);
+	EXPECT_EQ(modt_ioc.count, 4096);
+
+	/* EACCEPT new TCS page from enclave. */
+	eaccept_op.epc_addr = (unsigned long)tcs;
+	eaccept_op.flags = SGX_SECINFO_TCS | SGX_SECINFO_MODIFIED;
+	eaccept_op.ret = 0;
+	eaccept_op.header.type = ENCL_OP_EACCEPT;
+
+	EXPECT_EQ(ENCL_CALL(&eaccept_op, &self->run, true), 0);
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+	EXPECT_EQ(eaccept_op.ret, 0);
+
+	/* Run workload from new TCS. */
+	self->run.tcs = (unsigned long)tcs;
+
+	/*
+	 * Simple workload to write to data buffer and read value back.
+	 */
+	put_buf_op.header.type = ENCL_OP_PUT_TO_BUFFER;
+	put_buf_op.value = MAGIC;
+
+	EXPECT_EQ(ENCL_CALL(&put_buf_op, &self->run, true), 0);
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+
+	get_buf_op.header.type = ENCL_OP_GET_FROM_BUFFER;
+	get_buf_op.value = 0;
+
+	EXPECT_EQ(ENCL_CALL(&get_buf_op, &self->run, true), 0);
+
+	EXPECT_EQ(get_buf_op.value, MAGIC);
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+
+	/*
+	 * Phase 2 of test:
+	 * Remove pages associated with new TCS, create a regular page
+	 * where TCS page used to be and verify it can be used as a regular
+	 * page.
+	 */
+
+	/* Start page removal by requesting change of page type to PT_TRIM. */
+	memset(&modt_ioc, 0, sizeof(modt_ioc));
+	memset(&secinfo, 0, sizeof(secinfo));
+
+	secinfo.flags = SGX_PAGE_TYPE_TRIM << 8;
+	modt_ioc.offset = total_size;
+	modt_ioc.length = 3 * PAGE_SIZE;
+	modt_ioc.secinfo = (unsigned long)&secinfo;
+
+	ret = ioctl(self->encl.fd, SGX_IOC_ENCLAVE_MODIFY_TYPE, &modt_ioc);
+	errno_save = ret == -1 ? errno : 0;
+
+	EXPECT_EQ(ret, 0);
+	EXPECT_EQ(errno_save, 0);
+	EXPECT_EQ(modt_ioc.result, 0);
+	EXPECT_EQ(modt_ioc.count, 3 * PAGE_SIZE);
+
+	/*
+	 * Enter enclave via TCS #1 and approve page removal by sending
+	 * EACCEPT for each of three removed pages.
+	 */
+	self->run.tcs = self->encl.encl_base;
+
+	eaccept_op.epc_addr = (unsigned long)stack_end;
+	eaccept_op.flags = SGX_SECINFO_TRIM | SGX_SECINFO_MODIFIED;
+	eaccept_op.ret = 0;
+	eaccept_op.header.type = ENCL_OP_EACCEPT;
+
+	EXPECT_EQ(ENCL_CALL(&eaccept_op, &self->run, true), 0);
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+	EXPECT_EQ(eaccept_op.ret, 0);
+
+	eaccept_op.epc_addr = (unsigned long)tcs;
+	eaccept_op.ret = 0;
+
+	EXPECT_EQ(ENCL_CALL(&eaccept_op, &self->run, true), 0);
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+	EXPECT_EQ(eaccept_op.ret, 0);
+
+	eaccept_op.epc_addr = (unsigned long)ssa;
+	eaccept_op.ret = 0;
+
+	EXPECT_EQ(ENCL_CALL(&eaccept_op, &self->run, true), 0);
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+	EXPECT_EQ(eaccept_op.ret, 0);
+
+	/* Send final ioctl() to complete page removal. */
+	memset(&remove_ioc, 0, sizeof(remove_ioc));
+
+	remove_ioc.offset = total_size;
+	remove_ioc.length = 3 * PAGE_SIZE;
+
+	ret = ioctl(self->encl.fd, SGX_IOC_ENCLAVE_REMOVE_PAGES, &remove_ioc);
+	errno_save = ret == -1 ? errno : 0;
+
+	EXPECT_EQ(ret, 0);
+	EXPECT_EQ(errno_save, 0);
+	EXPECT_EQ(remove_ioc.count, 3 * PAGE_SIZE);
+
+	/*
+	 * Enter enclave via TCS #1 and access location where TCS #3 was to
+	 * trigger dynamic add of regular page at that location.
+	 */
+	eaccept_op.epc_addr = (unsigned long)tcs;
+	eaccept_op.flags = SGX_SECINFO_R | SGX_SECINFO_W | SGX_SECINFO_REG | SGX_SECINFO_PENDING;
+	eaccept_op.ret = 0;
+	eaccept_op.header.type = ENCL_OP_EACCEPT;
+
+	EXPECT_EQ(ENCL_CALL(&eaccept_op, &self->run, true), 0);
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+	EXPECT_EQ(eaccept_op.ret, 0);
+
+	/*
+	 * New page should be accessible from within enclave - write to it.
+	 */
+	put_addr_op.value = MAGIC;
+	put_addr_op.addr = (unsigned long)tcs;
+	put_addr_op.header.type = ENCL_OP_PUT_TO_ADDRESS;
+
+	EXPECT_EQ(ENCL_CALL(&put_addr_op, &self->run, true), 0);
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+
+	/*
+	 * Read memory from newly added page that was just written to,
+	 * confirming that data previously written (MAGIC) is present.
+	 */
+	get_addr_op.value = 0;
+	get_addr_op.addr = (unsigned long)tcs;
+	get_addr_op.header.type = ENCL_OP_GET_FROM_ADDRESS;
+
+	EXPECT_EQ(ENCL_CALL(&get_addr_op, &self->run, true), 0);
+
+	EXPECT_EQ(get_addr_op.value, MAGIC);
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+
+	munmap(addr, 3 * PAGE_SIZE);
+}
+
 TEST_HARNESS_MAIN
diff --git a/tools/testing/selftests/sgx/main.h b/tools/testing/selftests/sgx/main.h
index b45c52ec7ab3..fc585be97e2f 100644
--- a/tools/testing/selftests/sgx/main.h
+++ b/tools/testing/selftests/sgx/main.h
@@ -38,6 +38,7 @@ void encl_delete(struct encl *ctx);
 bool encl_load(const char *path, struct encl *encl, unsigned long heap_size);
 bool encl_measure(struct encl *encl);
 bool encl_build(struct encl *encl);
+uint64_t encl_get_entry(struct encl *encl, const char *symbol);
 
 int sgx_enter_enclave(void *rdi, void *rsi, long rdx, u32 function, void *r8, void *r9,
 		      struct sgx_enclave_run *run);
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2.1 26/30] selftests/sgx: Test faulty enclave behavior
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (23 preceding siblings ...)
  2022-03-04  9:35 ` [RFC PATCH v2.1 25/30] selftests/sgx: Test complete changing of page type flow Jarkko Sakkinen
@ 2022-03-04  9:35 ` Jarkko Sakkinen
  2022-03-04  9:35 ` [RFC PATCH v2.1 27/30] selftests/sgx: Test invalid access to removed enclave page Jarkko Sakkinen
                   ` (5 subsequent siblings)
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:35 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Jarkko Sakkinen,
	Dave Hansen, Shuah Khan, open list:KERNEL SELFTEST FRAMEWORK,
	open list

From: Reinette Chatre <reinette.chatre@intel.com>

Removing a page from an initialized enclave involves three steps:
first the user requests changing the page type to SGX_PAGE_TYPE_TRIM
via an ioctl(), on success the ENCLU[EACCEPT] instruction needs to be
run from within the enclave to accept the page removal, finally the
user requests page removal to be completed via an ioctl(). Only after
acceptance (ENCLU[EACCEPT]) from within the enclave can the kernel
remove the page from a running enclave.

Test the behavior when the user's request to change the page type
succeeds, but the ENCLU[EACCEPT] instruction is not run before the
ioctl() requesting page removal is run. This should not be permitted.

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 tools/testing/selftests/sgx/main.c | 116 +++++++++++++++++++++++++++++
 1 file changed, 116 insertions(+)

diff --git a/tools/testing/selftests/sgx/main.c b/tools/testing/selftests/sgx/main.c
index f9872c6746a3..82902dab96bc 100644
--- a/tools/testing/selftests/sgx/main.c
+++ b/tools/testing/selftests/sgx/main.c
@@ -1453,4 +1453,120 @@ TEST_F(enclave, tcs_create)
 	munmap(addr, 3 * PAGE_SIZE);
 }
 
+/*
+ * Ensure sane behavior if user requests page removal, does not run
+ * EACCEPT from within enclave but still attempts to finalize page removal
+ * with the SGX_IOC_ENCLAVE_REMOVE_PAGES ioctl(). The latter should fail
+ * because the removal was not EACCEPTed from within the enclave.
+ */
+TEST_F(enclave, remove_added_page_no_eaccept)
+{
+	struct sgx_enclave_remove_pages remove_ioc;
+	struct encl_op_get_from_addr get_addr_op;
+	struct encl_op_put_to_addr put_addr_op;
+	struct sgx_enclave_modt modt_ioc;
+	struct sgx_secinfo secinfo;
+	unsigned long data_start;
+	int ret, errno_save;
+
+	ASSERT_TRUE(setup_test_encl(ENCL_HEAP_SIZE_DEFAULT, &self->encl, _metadata));
+
+	memset(&self->run, 0, sizeof(self->run));
+	self->run.tcs = self->encl.encl_base;
+
+	/*
+	 * Hardware (SGX2) and kernel support is needed for this test. Start
+	 * with check that test has a chance of succeeding.
+	 */
+	memset(&modt_ioc, 0, sizeof(modt_ioc));
+	ret = ioctl(self->encl.fd, SGX_IOC_ENCLAVE_MODIFY_TYPE, &modt_ioc);
+
+	if (ret == -1) {
+		if (errno == ENOTTY)
+			SKIP(return, "Kernel does not support SGX_IOC_ENCLAVE_MODIFY_TYPE ioctl()");
+		else if (errno == ENODEV)
+			SKIP(return, "System does not support SGX2");
+	}
+
+	/*
+	 * Invalid parameters were provided during sanity check,
+	 * expect command to fail.
+	 */
+	EXPECT_EQ(ret, -1);
+
+	/*
+	 * Page that will be removed is the second data page in the .data
+	 * segment. This forms part of the local encl_buffer within the
+	 * enclave.
+	 */
+	data_start = self->encl.encl_base +
+		     encl_get_data_offset(&self->encl) + PAGE_SIZE;
+
+	/*
+	 * Sanity check that page at @data_start is writable before
+	 * removing it.
+	 *
+	 * Start by writing MAGIC to test page.
+	 */
+	put_addr_op.value = MAGIC;
+	put_addr_op.addr = data_start;
+	put_addr_op.header.type = ENCL_OP_PUT_TO_ADDRESS;
+
+	EXPECT_EQ(ENCL_CALL(&put_addr_op, &self->run, true), 0);
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+
+	/*
+	 * Read memory that was just written to, confirming that data
+	 * previously written (MAGIC) is present.
+	 */
+	get_addr_op.value = 0;
+	get_addr_op.addr = data_start;
+	get_addr_op.header.type = ENCL_OP_GET_FROM_ADDRESS;
+
+	EXPECT_EQ(ENCL_CALL(&get_addr_op, &self->run, true), 0);
+
+	EXPECT_EQ(get_addr_op.value, MAGIC);
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+
+	/* Start page removal by requesting change of page type to PT_TRIM */
+	memset(&modt_ioc, 0, sizeof(modt_ioc));
+	memset(&secinfo, 0, sizeof(secinfo));
+
+	secinfo.flags = SGX_PAGE_TYPE_TRIM << 8;
+	modt_ioc.offset = encl_get_data_offset(&self->encl) + PAGE_SIZE;
+	modt_ioc.length = PAGE_SIZE;
+	modt_ioc.secinfo = (unsigned long)&secinfo;
+
+	ret = ioctl(self->encl.fd, SGX_IOC_ENCLAVE_MODIFY_TYPE, &modt_ioc);
+	errno_save = ret == -1 ? errno : 0;
+
+	EXPECT_EQ(ret, 0);
+	EXPECT_EQ(errno_save, 0);
+	EXPECT_EQ(modt_ioc.result, 0);
+	EXPECT_EQ(modt_ioc.count, 4096);
+
+	/* Skip EACCEPT */
+
+	/* Send final ioctl() to complete page removal */
+	memset(&remove_ioc, 0, sizeof(remove_ioc));
+
+	remove_ioc.offset = encl_get_data_offset(&self->encl) + PAGE_SIZE;
+	remove_ioc.length = PAGE_SIZE;
+
+	ret = ioctl(self->encl.fd, SGX_IOC_ENCLAVE_REMOVE_PAGES, &remove_ioc);
+	errno_save = ret == -1 ? errno : 0;
+
+	/* Operation not permitted since EACCEPT was omitted. */
+	EXPECT_EQ(ret, -1);
+	EXPECT_EQ(errno_save, EPERM);
+	EXPECT_EQ(remove_ioc.count, 0);
+}
+
 TEST_HARNESS_MAIN
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2.1 27/30] selftests/sgx: Test invalid access to removed enclave page
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (24 preceding siblings ...)
  2022-03-04  9:35 ` [RFC PATCH v2.1 26/30] selftests/sgx: Test faulty enclave behavior Jarkko Sakkinen
@ 2022-03-04  9:35 ` Jarkko Sakkinen
  2022-03-04  9:35 ` [RFC PATCH v2.1 28/30] selftests/sgx: Test reclaiming of untouched page Jarkko Sakkinen
                   ` (4 subsequent siblings)
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:35 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Jarkko Sakkinen,
	Dave Hansen, Shuah Khan, open list:KERNEL SELFTEST FRAMEWORK,
	open list

From: Reinette Chatre <reinette.chatre@intel.com>

Removing a page from an initialized enclave involves three steps:
(1) the user requests changing the page type to SGX_PAGE_TYPE_TRIM
via the SGX_IOC_ENCLAVE_MODIFY_TYPE ioctl(), (2) on success the
ENCLU[EACCEPT] instruction is run from within the enclave to accept
the page removal, (3) the user initiates the actual removal of the
page via the SGX_IOC_ENCLAVE_REMOVE_PAGES ioctl().

Test two possible invalid accesses during the page removal flow:
* Test the behavior when a request to remove the page by changing its
  type to SGX_PAGE_TYPE_TRIM completes successfully but instead of
  executing ENCLU[EACCEPT] from within the enclave the enclave attempts
  to read from the page. Even though the page is accessible from the
  page table entries its type is SGX_PAGE_TYPE_TRIM and thus not
  accessible according to SGX. The expected behavior is a page fault
  with the SGX flag set in the error code.
* Test the behavior when the page type is changed successfully and
  ENCLU[EACCEPT] was run from within the enclave. The final ioctl(),
  SGX_IOC_ENCLAVE_REMOVE_PAGES, is omitted and replaced with an
  attempt to access the page. Even though the page is accessible
  from the page table entries its type is SGX_PAGE_TYPE_TRIM and
  thus not accessible according to SGX.  The expected behavior is
  a page fault with the SGX flag set in the error code.

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 tools/testing/selftests/sgx/main.c | 247 +++++++++++++++++++++++++++++
 1 file changed, 247 insertions(+)

diff --git a/tools/testing/selftests/sgx/main.c b/tools/testing/selftests/sgx/main.c
index 82902dab96bc..d132e7d32454 100644
--- a/tools/testing/selftests/sgx/main.c
+++ b/tools/testing/selftests/sgx/main.c
@@ -1569,4 +1569,251 @@ TEST_F(enclave, remove_added_page_no_eaccept)
 	EXPECT_EQ(remove_ioc.count, 0);
 }
 
+/*
+ * Request enclave page removal but, instead of correctly following with
+ * EACCEPT, a read attempt to the page is made from within the enclave.
+ */
+TEST_F(enclave, remove_added_page_invalid_access)
+{
+	struct encl_op_get_from_addr get_addr_op;
+	struct encl_op_put_to_addr put_addr_op;
+	struct sgx_enclave_modt ioc;
+	struct sgx_secinfo secinfo;
+	unsigned long data_start;
+	int ret, errno_save;
+
+	ASSERT_TRUE(setup_test_encl(ENCL_HEAP_SIZE_DEFAULT, &self->encl, _metadata));
+
+	memset(&self->run, 0, sizeof(self->run));
+	self->run.tcs = self->encl.encl_base;
+
+	/*
+	 * Hardware (SGX2) and kernel support is needed for this test. Start
+	 * with check that test has a chance of succeeding.
+	 */
+	memset(&ioc, 0, sizeof(ioc));
+	ret = ioctl(self->encl.fd, SGX_IOC_ENCLAVE_MODIFY_TYPE, &ioc);
+
+	if (ret == -1) {
+		if (errno == ENOTTY)
+			SKIP(return, "Kernel does not support SGX_IOC_ENCLAVE_MODIFY_TYPE ioctl()");
+		else if (errno == ENODEV)
+			SKIP(return, "System does not support SGX2");
+	}
+
+	/*
+	 * Invalid parameters were provided during sanity check,
+	 * expect command to fail.
+	 */
+	EXPECT_EQ(ret, -1);
+
+	/*
+	 * Page that will be removed is the second data page in the .data
+	 * segment. This forms part of the local encl_buffer within the
+	 * enclave.
+	 */
+	data_start = self->encl.encl_base +
+		     encl_get_data_offset(&self->encl) + PAGE_SIZE;
+
+	/*
+	 * Sanity check that page at @data_start is writable before
+	 * removing it.
+	 *
+	 * Start by writing MAGIC to test page.
+	 */
+	put_addr_op.value = MAGIC;
+	put_addr_op.addr = data_start;
+	put_addr_op.header.type = ENCL_OP_PUT_TO_ADDRESS;
+
+	EXPECT_EQ(ENCL_CALL(&put_addr_op, &self->run, true), 0);
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+
+	/*
+	 * Read memory that was just written to, confirming that data
+	 * previously written (MAGIC) is present.
+	 */
+	get_addr_op.value = 0;
+	get_addr_op.addr = data_start;
+	get_addr_op.header.type = ENCL_OP_GET_FROM_ADDRESS;
+
+	EXPECT_EQ(ENCL_CALL(&get_addr_op, &self->run, true), 0);
+
+	EXPECT_EQ(get_addr_op.value, MAGIC);
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+
+	/* Start page removal by requesting change of page type to PT_TRIM. */
+	memset(&ioc, 0, sizeof(ioc));
+	memset(&secinfo, 0, sizeof(secinfo));
+
+	secinfo.flags = SGX_PAGE_TYPE_TRIM << 8;
+	ioc.offset = encl_get_data_offset(&self->encl) + PAGE_SIZE;
+	ioc.length = PAGE_SIZE;
+	ioc.secinfo = (unsigned long)&secinfo;
+
+	ret = ioctl(self->encl.fd, SGX_IOC_ENCLAVE_MODIFY_TYPE, &ioc);
+	errno_save = ret == -1 ? errno : 0;
+
+	EXPECT_EQ(ret, 0);
+	EXPECT_EQ(errno_save, 0);
+	EXPECT_EQ(ioc.result, 0);
+	EXPECT_EQ(ioc.count, 4096);
+
+	/*
+	 * Read from page that was just removed.
+	 */
+	get_addr_op.value = 0;
+
+	EXPECT_EQ(ENCL_CALL(&get_addr_op, &self->run, true), 0);
+
+	/*
+	 * From the kernel's perspective the page is present, but according
+	 * to SGX the page should not be accessible, so a #PF with the SGX
+	 * bit set is expected.
+	 */
+
+	EXPECT_EQ(self->run.function, ERESUME);
+	EXPECT_EQ(self->run.exception_vector, 14);
+	EXPECT_EQ(self->run.exception_error_code, 0x8005);
+	EXPECT_EQ(self->run.exception_addr, data_start);
+}
+
+/*
+ * Request enclave page removal and correctly follow up with EACCEPT, but
+ * skip the removal ioctl(); instead, attempt to read the removed page from
+ * within the enclave.
+ */
+TEST_F(enclave, remove_added_page_invalid_access_after_eaccept)
+{
+	struct encl_op_get_from_addr get_addr_op;
+	struct encl_op_put_to_addr put_addr_op;
+	struct encl_op_eaccept eaccept_op;
+	struct sgx_enclave_modt ioc;
+	struct sgx_secinfo secinfo;
+	unsigned long data_start;
+	int ret, errno_save;
+
+	ASSERT_TRUE(setup_test_encl(ENCL_HEAP_SIZE_DEFAULT, &self->encl, _metadata));
+
+	memset(&self->run, 0, sizeof(self->run));
+	self->run.tcs = self->encl.encl_base;
+
+	/*
+	 * Hardware (SGX2) and kernel support is needed for this test. Start
+	 * by checking that the test has a chance of succeeding.
+	 */
+	memset(&ioc, 0, sizeof(ioc));
+	ret = ioctl(self->encl.fd, SGX_IOC_ENCLAVE_MODIFY_TYPE, &ioc);
+
+	if (ret == -1) {
+		if (errno == ENOTTY)
+			SKIP(return, "Kernel does not support SGX_IOC_ENCLAVE_MODIFY_TYPE ioctl()");
+		else if (errno == ENODEV)
+			SKIP(return, "System does not support SGX2");
+	}
+
+	/*
+	 * Invalid parameters were provided during sanity check,
+	 * expect command to fail.
+	 */
+	EXPECT_EQ(ret, -1);
+
+	/*
+	 * Page that will be removed is the second data page in the .data
+	 * segment. This forms part of the local encl_buffer within the
+	 * enclave.
+	 */
+	data_start = self->encl.encl_base +
+		     encl_get_data_offset(&self->encl) + PAGE_SIZE;
+
+	/*
+	 * Sanity check that page at @data_start is writable before
+	 * removing it.
+	 *
+	 * Start by writing MAGIC to test page.
+	 */
+	put_addr_op.value = MAGIC;
+	put_addr_op.addr = data_start;
+	put_addr_op.header.type = ENCL_OP_PUT_TO_ADDRESS;
+
+	EXPECT_EQ(ENCL_CALL(&put_addr_op, &self->run, true), 0);
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+
+	/*
+	 * Read memory that was just written to, confirming that data
+	 * previously written (MAGIC) is present.
+	 */
+	get_addr_op.value = 0;
+	get_addr_op.addr = data_start;
+	get_addr_op.header.type = ENCL_OP_GET_FROM_ADDRESS;
+
+	EXPECT_EQ(ENCL_CALL(&get_addr_op, &self->run, true), 0);
+
+	EXPECT_EQ(get_addr_op.value, MAGIC);
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+
+	/* Start page removal by requesting change of page type to PT_TRIM. */
+	memset(&ioc, 0, sizeof(ioc));
+	memset(&secinfo, 0, sizeof(secinfo));
+
+	secinfo.flags = SGX_PAGE_TYPE_TRIM << 8;
+	ioc.offset = encl_get_data_offset(&self->encl) + PAGE_SIZE;
+	ioc.length = PAGE_SIZE;
+	ioc.secinfo = (unsigned long)&secinfo;
+
+	ret = ioctl(self->encl.fd, SGX_IOC_ENCLAVE_MODIFY_TYPE, &ioc);
+	errno_save = ret == -1 ? errno : 0;
+
+	EXPECT_EQ(ret, 0);
+	EXPECT_EQ(errno_save, 0);
+	EXPECT_EQ(ioc.result, 0);
+	EXPECT_EQ(ioc.count, 4096);
+
+	eaccept_op.epc_addr = (unsigned long)data_start;
+	eaccept_op.ret = 0;
+	eaccept_op.flags = SGX_SECINFO_TRIM | SGX_SECINFO_MODIFIED;
+	eaccept_op.header.type = ENCL_OP_EACCEPT;
+
+	EXPECT_EQ(ENCL_CALL(&eaccept_op, &self->run, true), 0);
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+	EXPECT_EQ(eaccept_op.ret, 0);
+
+	/* Skip ioctl() to remove page. */
+
+	/*
+	 * Read from the page that is in the process of being removed.
+	 */
+	get_addr_op.value = 0;
+
+	EXPECT_EQ(ENCL_CALL(&get_addr_op, &self->run, true), 0);
+
+	/*
+	 * From the kernel's perspective the page is present, but according to
+	 * SGX the page should not be accessible, so a #PF with the SGX bit set
+	 * is expected.
+	 */
+
+	EXPECT_EQ(self->run.function, ERESUME);
+	EXPECT_EQ(self->run.exception_vector, 14);
+	EXPECT_EQ(self->run.exception_error_code, 0x8005);
+	EXPECT_EQ(self->run.exception_addr, data_start);
+}
+
 TEST_HARNESS_MAIN
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2.1 28/30] selftests/sgx: Test reclaiming of untouched page
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (25 preceding siblings ...)
  2022-03-04  9:35 ` [RFC PATCH v2.1 27/30] selftests/sgx: Test invalid access to removed enclave page Jarkko Sakkinen
@ 2022-03-04  9:35 ` Jarkko Sakkinen
  2022-03-04  9:35 ` [RFC PATCH v2.1 29/30] x86/sgx: Free up EPC pages directly to support large page ranges Jarkko Sakkinen
                   ` (3 subsequent siblings)
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:35 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Haitao Huang,
	Jarkko Sakkinen, Dave Hansen, Shuah Khan,
	open list:KERNEL SELFTEST FRAMEWORK, open list

From: Reinette Chatre <reinette.chatre@intel.com>

Removing a page from an initialized enclave involves three steps:
(1) the user requests changing the page type to PT_TRIM via the
    SGX_IOC_ENCLAVE_MODIFY_TYPE ioctl()
(2) on success the ENCLU[EACCEPT] instruction is run from within
    the enclave to accept the page removal
(3) the user initiates the actual removal of the page via the
    SGX_IOC_ENCLAVE_REMOVE_PAGES ioctl().

Remove a page that has never been accessed. This means that when the
first ioctl() requesting page removal arrives, there will be no page
table entry, yet a valid page table entry needs to exist for the
ENCLU[EACCEPT] function to succeed. In this test it is verified that
a page table entry can still be installed for a page that is in the
process of being removed.
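
As a reference, a minimal userspace sketch of these three steps, written
against the ioctl names and structures used in this RFC series (the uAPI may
still change in later revisions; encl_fd and page_offset are placeholders for
the enclave file descriptor and the page's offset within the enclave, and
error handling is omitted), could look roughly like:

	struct sgx_enclave_remove_pages remove_ioc = { 0 };
	struct sgx_enclave_modt modt_ioc = { 0 };
	struct sgx_secinfo secinfo = { 0 };

	/* (1) Request the page type change to PT_TRIM. */
	secinfo.flags = SGX_PAGE_TYPE_TRIM << 8;
	modt_ioc.offset = page_offset;
	modt_ioc.length = PAGE_SIZE;
	modt_ioc.secinfo = (unsigned long)&secinfo;
	ioctl(encl_fd, SGX_IOC_ENCLAVE_MODIFY_TYPE, &modt_ioc);

	/*
	 * (2) From within the enclave, run ENCLU[EACCEPT] on the page with
	 *     SGX_SECINFO_TRIM | SGX_SECINFO_MODIFIED.
	 */

	/* (3) Complete the removal. */
	remove_ioc.offset = page_offset;
	remove_ioc.length = PAGE_SIZE;
	ioctl(encl_fd, SGX_IOC_ENCLAVE_REMOVE_PAGES, &remove_ioc);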

Suggested-by: Haitao Huang <haitao.huang@intel.com>
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 tools/testing/selftests/sgx/main.c | 82 ++++++++++++++++++++++++++++++
 1 file changed, 82 insertions(+)

diff --git a/tools/testing/selftests/sgx/main.c b/tools/testing/selftests/sgx/main.c
index d132e7d32454..c691a4864db8 100644
--- a/tools/testing/selftests/sgx/main.c
+++ b/tools/testing/selftests/sgx/main.c
@@ -1816,4 +1816,86 @@ TEST_F(enclave, remove_added_page_invalid_access_after_eaccept)
 	EXPECT_EQ(self->run.exception_addr, data_start);
 }
 
+TEST_F(enclave, remove_untouched_page)
+{
+	struct sgx_enclave_remove_pages remove_ioc;
+	struct encl_op_eaccept eaccept_op;
+	struct sgx_enclave_modt modt_ioc;
+	struct sgx_secinfo secinfo;
+	unsigned long data_start;
+	int ret, errno_save;
+
+	ASSERT_TRUE(setup_test_encl(ENCL_HEAP_SIZE_DEFAULT, &self->encl, _metadata));
+
+	/*
+	 * Hardware (SGX2) and kernel support is needed for this test. Start
+	 * by checking that the test has a chance of succeeding.
+	 */
+	memset(&modt_ioc, 0, sizeof(modt_ioc));
+	ret = ioctl(self->encl.fd, SGX_IOC_ENCLAVE_MODIFY_TYPE, &modt_ioc);
+
+	if (ret == -1) {
+		if (errno == ENOTTY)
+			SKIP(return, "Kernel does not support SGX_IOC_ENCLAVE_MODIFY_TYPE ioctl()");
+		else if (errno == ENODEV)
+			SKIP(return, "System does not support SGX2");
+	}
+
+	/*
+	 * Invalid parameters were provided during sanity check,
+	 * expect command to fail.
+	 */
+	EXPECT_EQ(ret, -1);
+
+	/* SGX2 is supported by kernel and hardware, test can proceed. */
+	memset(&self->run, 0, sizeof(self->run));
+	self->run.tcs = self->encl.encl_base;
+
+	data_start = self->encl.encl_base +
+			 encl_get_data_offset(&self->encl) + PAGE_SIZE;
+
+	memset(&modt_ioc, 0, sizeof(modt_ioc));
+	memset(&secinfo, 0, sizeof(secinfo));
+
+	secinfo.flags = SGX_PAGE_TYPE_TRIM << 8;
+	modt_ioc.offset = encl_get_data_offset(&self->encl) + PAGE_SIZE;
+	modt_ioc.length = PAGE_SIZE;
+	modt_ioc.secinfo = (unsigned long)&secinfo;
+	ret = ioctl(self->encl.fd, SGX_IOC_ENCLAVE_MODIFY_TYPE, &modt_ioc);
+	errno_save = ret == -1 ? errno : 0;
+
+	EXPECT_EQ(ret, 0);
+	EXPECT_EQ(errno_save, 0);
+	EXPECT_EQ(modt_ioc.result, 0);
+	EXPECT_EQ(modt_ioc.count, 4096);
+
+	/*
+	 * Enter enclave via TCS #1 and approve page removal by sending
+	 * EACCEPT for removed page.
+	 */
+
+	eaccept_op.epc_addr = data_start;
+	eaccept_op.flags = SGX_SECINFO_TRIM | SGX_SECINFO_MODIFIED;
+	eaccept_op.ret = 0;
+	eaccept_op.header.type = ENCL_OP_EACCEPT;
+
+	EXPECT_EQ(ENCL_CALL(&eaccept_op, &self->run, true), 0);
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+	EXPECT_EQ(eaccept_op.ret, 0);
+
+	memset(&remove_ioc, 0, sizeof(remove_ioc));
+
+	remove_ioc.offset = encl_get_data_offset(&self->encl) + PAGE_SIZE;
+	remove_ioc.length = PAGE_SIZE;
+	ret = ioctl(self->encl.fd, SGX_IOC_ENCLAVE_REMOVE_PAGES, &remove_ioc);
+	errno_save = ret == -1 ? errno : 0;
+
+	EXPECT_EQ(ret, 0);
+	EXPECT_EQ(errno_save, 0);
+	EXPECT_EQ(remove_ioc.count, 4096);
+}
+
 TEST_HARNESS_MAIN
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2.1 29/30] x86/sgx: Free up EPC pages directly to support large page ranges
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (26 preceding siblings ...)
  2022-03-04  9:35 ` [RFC PATCH v2.1 28/30] selftests/sgx: Test reclaiming of untouched page Jarkko Sakkinen
@ 2022-03-04  9:35 ` Jarkko Sakkinen
  2022-03-04  9:35 ` [RFC PATCH v2.1 30/30] selftests/sgx: Page removal stress test Jarkko Sakkinen
                   ` (2 subsequent siblings)
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:35 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Jarkko Sakkinen,
	Dave Hansen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

From: Reinette Chatre <reinette.chatre@intel.com>

The page reclaimer ensures availability of EPC pages across all
enclaves. In support of this it runs independently from the
individual enclaves in order to take locks from the different
enclaves as it writes pages to swap.

When a page needs to be loaded from swap, a free EPC page must be
available for its contents to be loaded into. Loading an existing
enclave page from swap does not reclaim EPC pages directly if
none are available; instead, the reclaimer is woken when the
number of available EPC pages falls below a watermark.

When iterating over a large number of pages in an oversubscribed
environment there is a race between the reclaimer being woken up and
EPC pages being reclaimed fast enough for the page operations to proceed.

Ensure there are EPC pages available before attempting to load
a page that may potentially be pulled from swap into an available
EPC page.

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 arch/x86/kernel/cpu/sgx/ioctl.c | 6 ++++++
 arch/x86/kernel/cpu/sgx/main.c  | 6 ++++++
 arch/x86/kernel/cpu/sgx/sgx.h   | 1 +
 3 files changed, 13 insertions(+)

diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 99cba5837acc..d9a5eafda759 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -818,6 +818,8 @@ static long sgx_enclave_restrict_perm(struct sgx_encl *encl,
 	for (c = 0 ; c < modp->length; c += PAGE_SIZE) {
 		addr = encl->base + modp->offset + c;
 
+		sgx_direct_reclaim();
+
 		mutex_lock(&encl->lock);
 
 		entry = sgx_encl_load_page(encl, addr);
@@ -1002,6 +1004,8 @@ static long sgx_enclave_modt(struct sgx_encl *encl,
 	for (c = 0 ; c < modt->length; c += PAGE_SIZE) {
 		addr = encl->base + modt->offset + c;
 
+		sgx_direct_reclaim();
+
 		mutex_lock(&encl->lock);
 
 		entry = sgx_encl_load_page(encl, addr);
@@ -1200,6 +1204,8 @@ static long sgx_encl_remove_pages(struct sgx_encl *encl,
 	for (c = 0 ; c < params->length; c += PAGE_SIZE) {
 		addr = encl->base + params->offset + c;
 
+		sgx_direct_reclaim();
+
 		mutex_lock(&encl->lock);
 
 		entry = sgx_encl_load_page(encl, addr);
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 1a3014aec490..a1cb7435932a 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -378,6 +378,12 @@ static bool sgx_should_reclaim(unsigned long watermark)
 	       !list_empty(&sgx_active_page_list);
 }
 
+void sgx_direct_reclaim(void)
+{
+	if (sgx_should_reclaim(SGX_NR_LOW_PAGES))
+		sgx_reclaim_pages();
+}
+
 static int ksgxd(void *p)
 {
 	set_freezable();
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index b30cee4de903..85cbf103b0dd 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -86,6 +86,7 @@ static inline void *sgx_get_epc_virt_addr(struct sgx_epc_page *page)
 struct sgx_epc_page *__sgx_alloc_epc_page(void);
 void sgx_free_epc_page(struct sgx_epc_page *page);
 
+void sgx_direct_reclaim(void);
 void sgx_mark_page_reclaimable(struct sgx_epc_page *page);
 int sgx_unmark_page_reclaimable(struct sgx_epc_page *page);
 struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim);
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2.1 30/30] selftests/sgx: Page removal stress test
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (27 preceding siblings ...)
  2022-03-04  9:35 ` [RFC PATCH v2.1 29/30] x86/sgx: Free up EPC pages directly to support large page ranges Jarkko Sakkinen
@ 2022-03-04  9:35 ` Jarkko Sakkinen
  2022-03-04  9:40 ` [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
  2022-03-14 19:04 ` Dave Hansen
  30 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:35 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Jarkko Sakkinen,
	Dave Hansen, Shuah Khan, open list:KERNEL SELFTEST FRAMEWORK,
	open list

From: Reinette Chatre <reinette.chatre@intel.com>

Create an enclave with an additional heap that consumes all physical SGX
memory and then remove all of its pages.

Depending on the available SGX memory this test could take a
significant time to run (several minutes) as it (1) creates the
enclave, (2) changes the type of every page to be trimmed,
(3) enters the enclave once per page to run EACCEPT, before
(4) the pages are finally removed.

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
 tools/testing/selftests/sgx/main.c | 122 +++++++++++++++++++++++++++++
 1 file changed, 122 insertions(+)

diff --git a/tools/testing/selftests/sgx/main.c b/tools/testing/selftests/sgx/main.c
index c691a4864db8..5c4faf6d5640 100644
--- a/tools/testing/selftests/sgx/main.c
+++ b/tools/testing/selftests/sgx/main.c
@@ -378,7 +378,129 @@ TEST_F(enclave, unclobbered_vdso_oversubscribed)
 	EXPECT_EQ(get_op.value, MAGIC);
 	EXPECT_EEXIT(&self->run);
 	EXPECT_EQ(self->run.user_data, 0);
+}
+
+TEST_F_TIMEOUT(enclave, unclobbered_vdso_oversubscribed_remove, 900)
+{
+	struct sgx_enclave_remove_pages remove_ioc;
+	struct encl_op_get_from_buf get_op;
+	struct encl_op_eaccept eaccept_op;
+	struct encl_op_put_to_buf put_op;
+	struct sgx_enclave_modt modt_ioc;
+	struct sgx_secinfo secinfo;
+	struct encl_segment *heap;
+	unsigned long total_mem;
+	int ret, errno_save;
+	unsigned long addr;
+	unsigned long i;
+
+	/*
+	 * Create enclave with additional heap that is as big as all
+	 * available physical SGX memory.
+	 */
+	total_mem = get_total_epc_mem();
+	ASSERT_NE(total_mem, 0);
+	TH_LOG("Creating an enclave with %lu bytes heap may take a while ...",
+	       total_mem);
+	ASSERT_TRUE(setup_test_encl(total_mem, &self->encl, _metadata));
+
+	/*
+	 * Hardware (SGX2) and kernel support is needed for this test. Start
+	 * by checking that the test has a chance of succeeding.
+	 */
+	memset(&modt_ioc, 0, sizeof(modt_ioc));
+	ret = ioctl(self->encl.fd, SGX_IOC_ENCLAVE_MODIFY_TYPE, &modt_ioc);
+
+	if (ret == -1) {
+		if (errno == ENOTTY)
+			SKIP(return, "Kernel does not support SGX_IOC_ENCLAVE_MODIFY_TYPE ioctl()");
+		else if (errno == ENODEV)
+			SKIP(return, "System does not support SGX2");
+	}
+
+	/*
+	 * Invalid parameters were provided during sanity check,
+	 * expect command to fail.
+	 */
+	EXPECT_EQ(ret, -1);
+
+	/* SGX2 is supported by kernel and hardware, test can proceed. */
+	memset(&self->run, 0, sizeof(self->run));
+	self->run.tcs = self->encl.encl_base;
+
+	heap = &self->encl.segment_tbl[self->encl.nr_segments - 1];
+
+	put_op.header.type = ENCL_OP_PUT_TO_BUFFER;
+	put_op.value = MAGIC;
+
+	EXPECT_EQ(ENCL_CALL(&put_op, &self->run, false), 0);
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.user_data, 0);
+
+	get_op.header.type = ENCL_OP_GET_FROM_BUFFER;
+	get_op.value = 0;
+
+	EXPECT_EQ(ENCL_CALL(&get_op, &self->run, false), 0);
+
+	EXPECT_EQ(get_op.value, MAGIC);
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.user_data, 0);
+
+	/* Trim entire heap. */
+	memset(&modt_ioc, 0, sizeof(modt_ioc));
+	memset(&secinfo, 0, sizeof(secinfo));
+
+	secinfo.flags = SGX_PAGE_TYPE_TRIM << 8;
+	modt_ioc.offset = heap->offset;
+	modt_ioc.length = heap->size;
+	modt_ioc.secinfo = (unsigned long)&secinfo;
+
+	TH_LOG("Changing type of %zd bytes to trimmed may take a while ...",
+	       heap->size);
+	ret = ioctl(self->encl.fd, SGX_IOC_ENCLAVE_MODIFY_TYPE, &modt_ioc);
+	errno_save = ret == -1 ? errno : 0;
+
+	EXPECT_EQ(ret, 0);
+	EXPECT_EQ(errno_save, 0);
+	EXPECT_EQ(modt_ioc.result, 0);
+	EXPECT_EQ(modt_ioc.count, heap->size);
+
+	/* EACCEPT all removed pages. */
+	addr = self->encl.encl_base + heap->offset;
+
+	eaccept_op.flags = SGX_SECINFO_TRIM | SGX_SECINFO_MODIFIED;
+	eaccept_op.header.type = ENCL_OP_EACCEPT;
+
+	TH_LOG("Entering enclave to run EACCEPT for each page of %zd bytes may take a while ...",
+	       heap->size);
+	for (i = 0; i < heap->size; i += 4096) {
+		eaccept_op.epc_addr = addr + i;
+		eaccept_op.ret = 0;
 
+		EXPECT_EQ(ENCL_CALL(&eaccept_op, &self->run, true), 0);
+
+		EXPECT_EQ(self->run.exception_vector, 0);
+		EXPECT_EQ(self->run.exception_error_code, 0);
+		EXPECT_EQ(self->run.exception_addr, 0);
+		ASSERT_EQ(eaccept_op.ret, 0);
+		ASSERT_EQ(self->run.function, EEXIT);
+	}
+
+	/* Complete page removal. */
+	memset(&remove_ioc, 0, sizeof(remove_ioc));
+
+	remove_ioc.offset = heap->offset;
+	remove_ioc.length = heap->size;
+
+	TH_LOG("Removing %zd bytes from enclave may take a while ...",
+	       heap->size);
+	ret = ioctl(self->encl.fd, SGX_IOC_ENCLAVE_REMOVE_PAGES, &remove_ioc);
+	errno_save = ret == -1 ? errno : 0;
+
+	EXPECT_EQ(ret, 0);
+	EXPECT_EQ(errno_save, 0);
+	EXPECT_EQ(remove_ioc.count, heap->size);
 }
 
 TEST_F(enclave, clobbered_vdso)
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* Re: [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (28 preceding siblings ...)
  2022-03-04  9:35 ` [RFC PATCH v2.1 30/30] selftests/sgx: Page removal stress test Jarkko Sakkinen
@ 2022-03-04  9:40 ` Jarkko Sakkinen
  2022-03-04  9:41   ` Jarkko Sakkinen
  2022-03-14 19:04 ` Dave Hansen
  30 siblings, 1 reply; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:40 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Dave Hansen,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

This is just a bug fixed version of v2, and also the requirement
to do obsolete round trip with EMODPE has been optimized away.
I hope that v3 is rolled out quickly because now there is no
comparison point. We can have speculative discussions whether
ioctl or #PF is better but no means to benchmark. Thus, a quick
patch set revision roll out would be such a great thing.

Based on https://lore.kernel.org/linux-sgx/20220304033918.361495-1-jarkko@kernel.org/T/#u

BR, Jarkko

On Fri, Mar 04, 2022 at 11:34:55AM +0200, Jarkko Sakkinen wrote:
> From: Reinette Chatre <reinette.chatre@intel.com>
> 
> The SGX ENCLS instruction uses EAX to specify an SGX function and
> may require additional registers, depending on the SGX function.
> ENCLS invokes the specified privileged SGX function for managing
> and debugging enclaves. Macros are used to wrap the ENCLS
> functionality and several wrappers are used to wrap the macros to
> make the different SGX functions accessible in the code.
> 
> The wrappers of the supported SGX functions are cryptic. Add short
> descriptions of each as a comment.
> 
> Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
> Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
> ---
>  arch/x86/kernel/cpu/sgx/encls.h | 15 +++++++++++++++
>  1 file changed, 15 insertions(+)
> 
> diff --git a/arch/x86/kernel/cpu/sgx/encls.h b/arch/x86/kernel/cpu/sgx/encls.h
> index fa04a73daf9c..0e22fa8f77c5 100644
> --- a/arch/x86/kernel/cpu/sgx/encls.h
> +++ b/arch/x86/kernel/cpu/sgx/encls.h
> @@ -136,57 +136,71 @@ static inline bool encls_failed(int ret)
>  	ret;						\
>  	})
>  
> +/* Initialize an EPC page into an SGX Enclave Control Structure (SECS) page. */
>  static inline int __ecreate(struct sgx_pageinfo *pginfo, void *secs)
>  {
>  	return __encls_2(ECREATE, pginfo, secs);
>  }
>  
> +/* Hash a 256 byte region of an enclave page to SECS:MRENCLAVE. */
>  static inline int __eextend(void *secs, void *addr)
>  {
>  	return __encls_2(EEXTEND, secs, addr);
>  }
>  
> +/*
> + * Associate an EPC page to an enclave either as a REG or TCS page
> + * populated with the provided data.
> + */
>  static inline int __eadd(struct sgx_pageinfo *pginfo, void *addr)
>  {
>  	return __encls_2(EADD, pginfo, addr);
>  }
>  
> +/* Finalize enclave build, initialize enclave for user code execution. */
>  static inline int __einit(void *sigstruct, void *token, void *secs)
>  {
>  	return __encls_ret_3(EINIT, sigstruct, secs, token);
>  }
>  
> +/* Disassociate EPC page from its enclave and mark it as unused. */
>  static inline int __eremove(void *addr)
>  {
>  	return __encls_ret_1(EREMOVE, addr);
>  }
>  
> +/* Copy data to an EPC page belonging to a debug enclave. */
>  static inline int __edbgwr(void *addr, unsigned long *data)
>  {
>  	return __encls_2(EDGBWR, *data, addr);
>  }
>  
> +/* Copy data from an EPC page belonging to a debug enclave. */
>  static inline int __edbgrd(void *addr, unsigned long *data)
>  {
>  	return __encls_1_1(EDGBRD, *data, addr);
>  }
>  
> +/* Track that software has completed the required TLB address clears. */
>  static inline int __etrack(void *addr)
>  {
>  	return __encls_ret_1(ETRACK, addr);
>  }
>  
> +/* Load, verify, and unblock an EPC page. */
>  static inline int __eldu(struct sgx_pageinfo *pginfo, void *addr,
>  			 void *va)
>  {
>  	return __encls_ret_3(ELDU, pginfo, addr, va);
>  }
>  
> +/* Make EPC page inaccessible to enclave, ready to be written to memory. */
>  static inline int __eblock(void *addr)
>  {
>  	return __encls_ret_1(EBLOCK, addr);
>  }
>  
> +/* Initialize an EPC page into a Version Array (VA) page. */
>  static inline int __epa(void *addr)
>  {
>  	unsigned long rbx = SGX_PAGE_TYPE_VA;
> @@ -194,6 +208,7 @@ static inline int __epa(void *addr)
>  	return __encls_2(EPA, rbx, addr);
>  }
>  
> +/* Invalidate an EPC page and write it out to main memory. */
>  static inline int __ewb(struct sgx_pageinfo *pginfo, void *addr,
>  			void *va)
>  {
> -- 
> 2.35.1
> 

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers
  2022-03-04  9:40 ` [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
@ 2022-03-04  9:41   ` Jarkko Sakkinen
  0 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-04  9:41 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Dave Hansen,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

On Fri, Mar 04, 2022 at 11:40:34AM +0200, Jarkko Sakkinen wrote:
> This is just a bug fixed version of v2, and also the requirement
> to do obsolete round trip with EMODPE has been optimized away.
> I hope that v3 is rolled out quickly because now there is no
> comparison point. We can have speculative discussions whether
> ioctl or #PF is better but no means to benchmark. Thus, a quick
> patch set revision roll out would be such a great thing.
> 
> Based on https://lore.kernel.org/linux-sgx/20220304033918.361495-1-jarkko@kernel.org/T/#u
> 
> BR, Jarkko

The patches are available also in this git branch:

https://git.kernel.org/pub/scm/linux/kernel/git/jarkko/linux-sgx.git/log/?h=sgx2-v2.1

BR, Jarkko

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [RFC PATCH v2.1 14/30] x86/sgx: Support restricting of enclave page permissions
  2022-03-04  9:35 ` [RFC PATCH v2.1 14/30] x86/sgx: Support restricting of enclave page permissions Jarkko Sakkinen
@ 2022-03-09  8:52   ` Jarkko Sakkinen
  2022-03-09  9:35     ` Jarkko Sakkinen
  0 siblings, 1 reply; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-09  8:52 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Dave Hansen,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

On Fri, Mar 04, 2022 at 11:35:08AM +0200, Jarkko Sakkinen wrote:
> +#define SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS \
> +	_IOWR(SGX_MAGIC, 0x05, struct sgx_enclave_restrict_perm)

What if this was replaced with just SGX_IOC_ENCLAVE_RESET_PAGES, which
would simply do EMODPR with PROT_NONE? The main ingredient of EMODPR is to
flush out the TLBs and move a page to the pending state, which cannot be done
from inside the enclave.

It's there because of microarchitecture constraints, and less so to work as
a reasonable permission control mechanism (actually it does a terrible job on
that side and only confuses).

Once you have this magic TLB reset button in place you can just do one
EACCEPT and EMODPE inside the enclave and you're done.

This is also kind of atomic in the sense that EACCEPT frees a page with no
rights, so no misuse can happen before EMODPE has tuned the EPCM.

BR, Jarkko

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [RFC PATCH v2.1 14/30] x86/sgx: Support restricting of enclave page permissions
  2022-03-09  8:52   ` Jarkko Sakkinen
@ 2022-03-09  9:35     ` Jarkko Sakkinen
  2022-03-09 16:59       ` Reinette Chatre
  0 siblings, 1 reply; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-09  9:35 UTC (permalink / raw)
  To: linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Dave Hansen,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

On Wed, Mar 09, 2022 at 10:52:22AM +0200, Jarkko Sakkinen wrote:
> On Fri, Mar 04, 2022 at 11:35:08AM +0200, Jarkko Sakkinen wrote:
> > +#define SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS \
> > +	_IOWR(SGX_MAGIC, 0x05, struct sgx_enclave_restrict_perm)
> 
> What if this was replaced with just SGX_IOC_ENCLAVE_RESET_PAGES, which
> would simply do EMODPR with PROT_NONE? The main ingredient of EMODPR is to
> flush out the TLB's, and move a page to pending state, which cannot be done
> from inside the enclave.
> 
> It's there because of microarchitecture constraints, and less so to work as
> a reasonable permission control mechanism (actually it does terrible job on
> that side and only confuses).
> 
> Once you have this magic TLB reset button in place you can just do one
> EACCEPT and EMODPE inside the enclave and you're done.
> 
> This is also kind of atomic in the sense that EACCEPT free's a page with no
> rights so no misuse can happend before EMODPE has tuned EPCM.

I wonder if this type of pattern could be made work out for Graphene:

1. SGX_IOC_ENCLAVE_RESET_PAGES
2. EACCEPT + EMODPE

This kind of delivers EMODP that everyone has been looking for.
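
Purely as a sketch of the idea (SGX_IOC_ENCLAVE_RESET_PAGES does not exist
anywhere yet, and the struct name, field layout, and enclave-side helpers
below are made-up placeholders), the flow could look something like:

	/* Host side: reset EPCM permissions to PROT_NONE and flush the TLBs. */
	struct sgx_enclave_reset_pages reset_ioc = {	/* hypothetical */
		.offset = page_offset,
		.length = PAGE_SIZE,
	};
	ioctl(encl_fd, SGX_IOC_ENCLAVE_RESET_PAGES, &reset_ioc);

	/*
	 * Enclave side: accept the pending permission change, then extend
	 * the permissions to whatever is actually wanted.
	 */
	encl_eaccept(page_addr);		/* ENCLU[EACCEPT], placeholder */
	encl_emodpe(page_addr, new_perms);	/* ENCLU[EMODPE], placeholder */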

BR, Jarkko

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [RFC PATCH v2.1 14/30] x86/sgx: Support restricting of enclave page permissions
  2022-03-09  9:35     ` Jarkko Sakkinen
@ 2022-03-09 16:59       ` Reinette Chatre
  2022-03-09 19:10         ` Reinette Chatre
  2022-03-09 23:35         ` Jarkko Sakkinen
  0 siblings, 2 replies; 42+ messages in thread
From: Reinette Chatre @ 2022-03-09 16:59 UTC (permalink / raw)
  To: Jarkko Sakkinen, linux-sgx
  Cc: Nathaniel McCallum, Dave Hansen, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

Hi Jarkko,

On 3/9/2022 1:35 AM, Jarkko Sakkinen wrote:
> On Wed, Mar 09, 2022 at 10:52:22AM +0200, Jarkko Sakkinen wrote:
>> On Fri, Mar 04, 2022 at 11:35:08AM +0200, Jarkko Sakkinen wrote:
>>> +#define SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS \
>>> +	_IOWR(SGX_MAGIC, 0x05, struct sgx_enclave_restrict_perm)
>>
>> What if this was replaced with just SGX_IOC_ENCLAVE_RESET_PAGES, which
>> would simply do EMODPR with PROT_NONE? The main ingredient of EMODPR is to
>> flush out the TLB's, and move a page to pending state, which cannot be done
>> from inside the enclave.

I see the main ingredient as running EMODPR to restrict the EPCM permissions. If
the user wants to use SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS just to flush the TLB it is
already possible, since attempting to use EMODPR to relax permissions does not
change any permissions (although it still sets EPCM.PR) yet will still
flush the TLB.

Even so, you have a very good point that removing SGX_IOC_ENCLAVE_RELAX_PERMISSIONS
removes the ability for users to flush the TLB after an EMODPE. If PTEs are
present at the time the user runs EMODPE, the pages would thus not be
accessible with the new permissions.

Repurposing SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS with PROT_NONE to accomplish
this is not efficient because:
- For the OS to flush the TLB the enclave pages need not be in the EPC but
  in order to run EMODPR the enclave page needs to be in the EPC. In an 
  oversubscribed environment running EMODPR unnecessarily can thus introduce
  a significant delay. Please see the performance comparison I did in
  https://lore.kernel.org/linux-sgx/77e81306-6b03-4b09-2df2-48e09e2e79d5@intel.com/
  The test shows that running EMODPR unnecessarily can be orders of magnitude slower.
- Running EMODPR on an enclave page sets the EPCM.PR bit of the enclave page,
  which needs to be cleared with an EACCEPT from within the enclave.
  If the user just wants to reset the TLB after running EMODPE then it should
  not be necessary to run EACCEPT again to reset EPCM.PR.

Resetting the TLB is exactly what SGX_IOC_ENCLAVE_RELAX_PERMISSIONS did in an 
efficient way - it is quick (no need to load pages into EPC) and it does not
require EACCEPT to clear EPCM.PR. 

It looks like we need SGX_IOC_ENCLAVE_RELAX_PERMISSIONS back. We could
rename it to SGX_IOC_ENCLAVE_RESET_PAGES if you prefer.

>> It's there because of microarchitecture constraints, and less so to work as
>> a reasonable permission control mechanism (actually it does terrible job on
>> that side and only confuses).
>>
>> Once you have this magic TLB reset button in place you can just do one
>> EACCEPT and EMODPE inside the enclave and you're done.
>>
>> This is also kind of atomic in the sense that EACCEPT free's a page with no
>> rights so no misuse can happend before EMODPE has tuned EPCM.
> 
> I wonder if this type of pattern could be made work out for Graphene:
> 
> 1. SGX_IOC_ENCLAVE_RESET_PAGES
> 2. EACCEPT + EMODPE
> 
> This kind of delivers EMODP that everyone has been looking for.

EACCEPT will result in page table entries being created for the enclave page.
EMODPE will be able to relax the permissions, but a TLB flush would be required
to access the page with the new permissions. An SGX_IOC_ENCLAVE_RELAX_PERMISSIONS
(renamed to SGX_IOC_ENCLAVE_RESET_PAGES?) that does just a TLB flush is
required after EMODPE.

Reinette

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [RFC PATCH v2.1 14/30] x86/sgx: Support restricting of enclave page permissions
  2022-03-09 16:59       ` Reinette Chatre
@ 2022-03-09 19:10         ` Reinette Chatre
  2022-03-09 23:35         ` Jarkko Sakkinen
  1 sibling, 0 replies; 42+ messages in thread
From: Reinette Chatre @ 2022-03-09 19:10 UTC (permalink / raw)
  To: Jarkko Sakkinen, linux-sgx
  Cc: Nathaniel McCallum, Dave Hansen, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

Hi Jarkko,

Apologies, after thinking about this more I believe that I misunderstood
your proposal in my original answer.

On 3/9/2022 8:59 AM, Reinette Chatre wrote:
> Hi Jarkko,
> 
> On 3/9/2022 1:35 AM, Jarkko Sakkinen wrote:
>> On Wed, Mar 09, 2022 at 10:52:22AM +0200, Jarkko Sakkinen wrote:
>>> On Fri, Mar 04, 2022 at 11:35:08AM +0200, Jarkko Sakkinen wrote:
>>>> +#define SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS \
>>>> +	_IOWR(SGX_MAGIC, 0x05, struct sgx_enclave_restrict_perm)
>>>
>>> What if this was replaced with just SGX_IOC_ENCLAVE_RESET_PAGES, which
>>> would simply do EMODPR with PROT_NONE? The main ingredient of EMODPR is to
>>> flush out the TLB's, and move a page to pending state, which cannot be done
>>> from inside the enclave.

Why not keep the flexibility of supporting all permission restrictions?

It is already possible to call SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS with PROT_NONE, no?

> I see the main ingredient as running EMODPR to restrict the EPCM permissions. If
> the user wants to use SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS just to flush TLB it is
> already possible since attempting to use EMODPR to relax permissions does not
> change any permissions (although it still sets EPCM.PR) but yet will still
> flush the TLB.
> 
> Even so, you have a very good point that removing SGX_IOC_ENCLAVE_RELAX_PERMISSIONS
> removes the ability for users to flush the TLB after an EMODPE. If there are
> thus PTEs present at the time the user runs EMODPE the pages would not be
> accessible with the new permissions.
> 
> Repurposing SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS with PROT_NONE to accomplish
> this is not efficient because:
> - For the OS to flush the TLB the enclave pages need not be in the EPC but
>   in order to run EMODPR the enclave page needs to be in the EPC. In an 
>   oversubscribed environment running EMODPR unnecessarily can thus introduce
>   a significant delay. Please see the performance comparison I did in
>   https://lore.kernel.org/linux-sgx/77e81306-6b03-4b09-2df2-48e09e2e79d5@intel.com/
>   The test shows that running EMODPR unnecessarily can be orders of magnitude slower.
> - Running EMODPR on an enclave page sets the EPCM.PR bin in the enclave page
>   that needs to be cleared with an EACCEPT from within the enclave.
>   If the user just wants to reset the TLB after running EMODPE then it should
>   not be necessary to run EACCEPT again to reset EPCM.PR.
> 
> Resetting the TLB is exactly what SGX_IOC_ENCLAVE_RELAX_PERMISSIONS did in an 
> efficient way - it is quick (no need to load pages into EPC) and it does not
> require EACCEPT to clear EPCM.PR. 
> 
> It looks like we need SGX_IOC_ENCLAVE_RELAX_PERMISSIONS back. We could
> rename it to SGX_IOC_ENCLAVE_RESET_PAGES if you prefer.
> 
>>> It's there because of microarchitecture constraints, and less so to work as
>>> a reasonable permission control mechanism (actually it does terrible job on
>>> that side and only confuses).
>>>
>>> Once you have this magic TLB reset button in place you can just do one
>>> EACCEPT and EMODPE inside the enclave and you're done.
>>>
>>> This is also kind of atomic in the sense that EACCEPT free's a page with no
>>> rights so no misuse can happend before EMODPE has tuned EPCM.
>>
>> I wonder if this type of pattern could be made work out for Graphene:
>>
>> 1. SGX_IOC_ENCLAVE_RESET_PAGES
>> 2. EACCEPT + EMODPE
>>
>> This kind of delivers EMODP that everyone has been looking for.
> 

As I understand it this is currently possible with
SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS.

Reinette


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [RFC PATCH v2.1 14/30] x86/sgx: Support restricting of enclave page permissions
  2022-03-09 16:59       ` Reinette Chatre
  2022-03-09 19:10         ` Reinette Chatre
@ 2022-03-09 23:35         ` Jarkko Sakkinen
  2022-03-09 23:42           ` Jarkko Sakkinen
  2022-03-10  0:10           ` Reinette Chatre
  1 sibling, 2 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-09 23:35 UTC (permalink / raw)
  To: Reinette Chatre
  Cc: linux-sgx, Nathaniel McCallum, Dave Hansen, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

On Wed, Mar 09, 2022 at 08:59:42AM -0800, Reinette Chatre wrote:
> Hi Jarkko,
> 
> On 3/9/2022 1:35 AM, Jarkko Sakkinen wrote:
> > On Wed, Mar 09, 2022 at 10:52:22AM +0200, Jarkko Sakkinen wrote:
> >> On Fri, Mar 04, 2022 at 11:35:08AM +0200, Jarkko Sakkinen wrote:
> >>> +#define SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS \
> >>> +	_IOWR(SGX_MAGIC, 0x05, struct sgx_enclave_restrict_perm)
> >>
> >> What if this was replaced with just SGX_IOC_ENCLAVE_RESET_PAGES, which
> >> would simply do EMODPR with PROT_NONE? The main ingredient of EMODPR is to
> >> flush out the TLB's, and move a page to pending state, which cannot be done
> >> from inside the enclave.
> 
> I see the main ingredient as running EMODPR to restrict the EPCM permissions. If
> the user wants to use SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS just to flush TLB it is
> already possible since attempting to use EMODPR to relax permissions does not
> change any permissions (although it still sets EPCM.PR) but yet will still
> flush the TLB.

It's not just to flush the TLB. It also resets permissions to zero from
which it is easy to set the exact permissions with EMODPE.

> Even so, you have a very good point that removing SGX_IOC_ENCLAVE_RELAX_PERMISSIONS
> removes the ability for users to flush the TLB after an EMODPE. If there are
> thus PTEs present at the time the user runs EMODPE the pages would not be
> accessible with the new permissions.
> 
> Repurposing SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS with PROT_NONE to accomplish
> this is not efficient because:
> - For the OS to flush the TLB the enclave pages need not be in the EPC but
>   in order to run EMODPR the enclave page needs to be in the EPC. In an 
>   oversubscribed environment running EMODPR unnecessarily can thus introduce
>   a significant delay. Please see the performance comparison I did in
>   https://lore.kernel.org/linux-sgx/77e81306-6b03-4b09-2df2-48e09e2e79d5@intel.com/
>   The test shows that running EMODPR unnecessarily can be orders of magnitude slower.
> - Running EMODPR on an enclave page sets the EPCM.PR bin in the enclave page
>   that needs to be cleared with an EACCEPT from within the enclave.
>   If the user just wants to reset the TLB after running EMODPE then it should
>   not be necessary to run EACCEPT again to reset EPCM.PR.
> 
> Resetting the TLB is exactly what SGX_IOC_ENCLAVE_RELAX_PERMISSIONS did in an 
> efficient way - it is quick (no need to load pages into EPC) and it does not
> require EACCEPT to clear EPCM.PR. 
> 
> It looks like we need SGX_IOC_ENCLAVE_RELAX_PERMISSIONS back. We could
> rename it to SGX_IOC_ENCLAVE_RESET_PAGES if you prefer.

Please do not add it. We do not have any use for it. It's not only used
to flush TLBs, so it would not do any good. I just use it with fixed
PROT_NONE permissions.

> >> It's there because of microarchitecture constraints, and less so to work as
> >> a reasonable permission control mechanism (actually it does terrible job on
> >> that side and only confuses).
> >>
> >> Once you have this magic TLB reset button in place you can just do one
> >> EACCEPT and EMODPE inside the enclave and you're done.
> >>
> >> This is also kind of atomic in the sense that EACCEPT free's a page with no
> >> rights so no misuse can happend before EMODPE has tuned EPCM.
> > 
> > I wonder if this type of pattern could be made work out for Graphene:
> > 
> > 1. SGX_IOC_ENCLAVE_RESET_PAGES
> > 2. EACCEPT + EMODPE
> > 
> > This kind of delivers EMODP that everyone has been looking for.
> 
> EACCEPT will result in page table entries created for the enclave page. EMODPE
> will be able to relax the permissions but TLB flush would be required to
> access the page with the new permissions. SGX_IOC_ENCLAVE_RELAX_PERMISSIONS
> (renamed to SGX_IOC_ENCLAVE_RESET_PAGES?) that does just a TLB flush is
> required to be after EMODPE.

For EMODPE a TLB flush is not required. I even verified this with Mark
Shanahan. And since the access rights are zero, the page cannot be
dereferenced by threads before EMODPE.


BR, Jarkko

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [RFC PATCH v2.1 14/30] x86/sgx: Support restricting of enclave page permissions
  2022-03-09 23:35         ` Jarkko Sakkinen
@ 2022-03-09 23:42           ` Jarkko Sakkinen
  2022-03-10  0:11             ` Reinette Chatre
  2022-03-10  0:10           ` Reinette Chatre
  1 sibling, 1 reply; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-09 23:42 UTC (permalink / raw)
  To: Reinette Chatre
  Cc: linux-sgx, Nathaniel McCallum, Dave Hansen, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

On Thu, Mar 10, 2022 at 01:35:53AM +0200, Jarkko Sakkinen wrote:
> On Wed, Mar 09, 2022 at 08:59:42AM -0800, Reinette Chatre wrote:
> > Hi Jarkko,
> > 
> > On 3/9/2022 1:35 AM, Jarkko Sakkinen wrote:
> > > On Wed, Mar 09, 2022 at 10:52:22AM +0200, Jarkko Sakkinen wrote:
> > >> On Fri, Mar 04, 2022 at 11:35:08AM +0200, Jarkko Sakkinen wrote:
> > >>> +#define SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS \
> > >>> +	_IOWR(SGX_MAGIC, 0x05, struct sgx_enclave_restrict_perm)
> > >>
> > >> What if this was replaced with just SGX_IOC_ENCLAVE_RESET_PAGES, which
> > >> would simply do EMODPR with PROT_NONE? The main ingredient of EMODPR is to
> > >> flush out the TLB's, and move a page to pending state, which cannot be done
> > >> from inside the enclave.
> > 
> > I see the main ingredient as running EMODPR to restrict the EPCM permissions. If
> > the user wants to use SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS just to flush TLB it is
> > already possible since attempting to use EMODPR to relax permissions does not
> > change any permissions (although it still sets EPCM.PR) but yet will still
> > flush the TLB.
> 
> It's not just to flush the TLB. It also resets permissions to zero from
> which it is easy to set the exact permissions with EMODPE.
> 
> > Even so, you have a very good point that removing SGX_IOC_ENCLAVE_RELAX_PERMISSIONS
> > removes the ability for users to flush the TLB after an EMODPE. If there are
> > thus PTEs present at the time the user runs EMODPE the pages would not be
> > accessible with the new permissions.
> > 
> > Repurposing SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS with PROT_NONE to accomplish
> > this is not efficient because:
> > - For the OS to flush the TLB the enclave pages need not be in the EPC but
> >   in order to run EMODPR the enclave page needs to be in the EPC. In an 
> >   oversubscribed environment running EMODPR unnecessarily can thus introduce
> >   a significant delay. Please see the performance comparison I did in
> >   https://lore.kernel.org/linux-sgx/77e81306-6b03-4b09-2df2-48e09e2e79d5@intel.com/
> >   The test shows that running EMODPR unnecessarily can be orders of magnitude slower.
> > - Running EMODPR on an enclave page sets the EPCM.PR bin in the enclave page
> >   that needs to be cleared with an EACCEPT from within the enclave.
> >   If the user just wants to reset the TLB after running EMODPE then it should
> >   not be necessary to run EACCEPT again to reset EPCM.PR.
> > 
> > Resetting the TLB is exactly what SGX_IOC_ENCLAVE_RELAX_PERMISSIONS did in an 
> > efficient way - it is quick (no need to load pages into EPC) and it does not
> > require EACCEPT to clear EPCM.PR. 
> > 
> > It looks like we need SGX_IOC_ENCLAVE_RELAX_PERMISSIONS back. We could
> > rename it to SGX_IOC_ENCLAVE_RESET_PAGES if you prefer.
> 
> Please do not add it. We do not have any use for it. It's not only used
> to flush TLB's so it would not do any good. I just use it with fixed
> PROT_NONE permissions.
> 
> > >> It's there because of microarchitecture constraints, and less so to work as
> > >> a reasonable permission control mechanism (actually it does terrible job on
> > >> that side and only confuses).
> > >>
> > >> Once you have this magic TLB reset button in place you can just do one
> > >> EACCEPT and EMODPE inside the enclave and you're done.
> > >>
> > >> This is also kind of atomic in the sense that EACCEPT free's a page with no
> > >> rights so no misuse can happend before EMODPE has tuned EPCM.
> > > 
> > > I wonder if this type of pattern could be made work out for Graphene:
> > > 
> > > 1. SGX_IOC_ENCLAVE_RESET_PAGES
> > > 2. EACCEPT + EMODPE
> > > 
> > > This kind of delivers EMODP that everyone has been looking for.
> > 
> > EACCEPT will result in page table entries created for the enclave page. EMODPE
> > will be able to relax the permissions but TLB flush would be required to
> > access the page with the new permissions. SGX_IOC_ENCLAVE_RELAX_PERMISSIONS
> > (renamed to SGX_IOC_ENCLAVE_RESET_PAGES?) that does just a TLB flush is
> > required to be after EMODPE.
> 
> For EMODPE TLB flush is not required. I even verified this from Mark
> Shanahan. And since access rights are zero, the page cannot be
> deferenced by threads before EMODPE.

I'm of course fine with keeping SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS as it
is, too. It's at least future-proof that way.

BR, Jarkko

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [RFC PATCH v2.1 14/30] x86/sgx: Support restricting of enclave page permissions
  2022-03-09 23:35         ` Jarkko Sakkinen
  2022-03-09 23:42           ` Jarkko Sakkinen
@ 2022-03-10  0:10           ` Reinette Chatre
  2022-03-10  2:02             ` Jarkko Sakkinen
  1 sibling, 1 reply; 42+ messages in thread
From: Reinette Chatre @ 2022-03-10  0:10 UTC (permalink / raw)
  To: Jarkko Sakkinen
  Cc: linux-sgx, Nathaniel McCallum, Dave Hansen, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

Hi Jarkko,

On 3/9/2022 3:35 PM, Jarkko Sakkinen wrote:
> On Wed, Mar 09, 2022 at 08:59:42AM -0800, Reinette Chatre wrote:
>> Hi Jarkko,
>>
>> On 3/9/2022 1:35 AM, Jarkko Sakkinen wrote:
>>> On Wed, Mar 09, 2022 at 10:52:22AM +0200, Jarkko Sakkinen wrote:
>>>> On Fri, Mar 04, 2022 at 11:35:08AM +0200, Jarkko Sakkinen wrote:
>>>>> +#define SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS \
>>>>> +	_IOWR(SGX_MAGIC, 0x05, struct sgx_enclave_restrict_perm)
>>>>
>>>> What if this was replaced with just SGX_IOC_ENCLAVE_RESET_PAGES, which
>>>> would simply do EMODPR with PROT_NONE? The main ingredient of EMODPR is to
>>>> flush out the TLB's, and move a page to pending state, which cannot be done
>>>> from inside the enclave.
>>
>> I see the main ingredient as running EMODPR to restrict the EPCM permissions. If
>> the user wants to use SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS just to flush TLB it is
>> already possible since attempting to use EMODPR to relax permissions does not
>> change any permissions (although it still sets EPCM.PR) but yet will still
>> flush the TLB.
> 
> It's not just to flush the TLB. It also resets permissions to zero from
> which it is easy to set the exact permissions with EMODPE.
> 
>> Even so, you have a very good point that removing SGX_IOC_ENCLAVE_RELAX_PERMISSIONS
>> removes the ability for users to flush the TLB after an EMODPE. If there are
>> thus PTEs present at the time the user runs EMODPE the pages would not be
>> accessible with the new permissions.
>>
>> Repurposing SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS with PROT_NONE to accomplish
>> this is not efficient because:
>> - For the OS to flush the TLB the enclave pages need not be in the EPC but
>>   in order to run EMODPR the enclave page needs to be in the EPC. In an 
>>   oversubscribed environment running EMODPR unnecessarily can thus introduce
>>   a significant delay. Please see the performance comparison I did in
>>   https://lore.kernel.org/linux-sgx/77e81306-6b03-4b09-2df2-48e09e2e79d5@intel.com/
>>   The test shows that running EMODPR unnecessarily can be orders of magnitude slower.
>> - Running EMODPR on an enclave page sets the EPCM.PR bin in the enclave page
>>   that needs to be cleared with an EACCEPT from within the enclave.
>>   If the user just wants to reset the TLB after running EMODPE then it should
>>   not be necessary to run EACCEPT again to reset EPCM.PR.
>>
>> Resetting the TLB is exactly what SGX_IOC_ENCLAVE_RELAX_PERMISSIONS did in an 
>> efficient way - it is quick (no need to load pages into EPC) and it does not
>> require EACCEPT to clear EPCM.PR. 
>>
>> It looks like we need SGX_IOC_ENCLAVE_RELAX_PERMISSIONS back. We could
>> rename it to SGX_IOC_ENCLAVE_RESET_PAGES if you prefer.
> 
> Please do not add it. We do not have any use for it. It's not only used
> to flush TLB's so it would not do any good. I just use it with fixed
> PROT_NONE permissions.
> 
>>>> It's there because of microarchitecture constraints, and less so to work as
>>>> a reasonable permission control mechanism (actually it does terrible job on
>>>> that side and only confuses).
>>>>
>>>> Once you have this magic TLB reset button in place you can just do one
>>>> EACCEPT and EMODPE inside the enclave and you're done.
>>>>
>>>> This is also kind of atomic in the sense that EACCEPT free's a page with no
>>>> rights so no misuse can happend before EMODPE has tuned EPCM.
>>>
>>> I wonder if this type of pattern could be made work out for Graphene:
>>>
>>> 1. SGX_IOC_ENCLAVE_RESET_PAGES
>>> 2. EACCEPT + EMODPE
>>>
>>> This kind of delivers EMODP that everyone has been looking for.
>>
>> EACCEPT will result in page table entries created for the enclave page. EMODPE
>> will be able to relax the permissions but TLB flush would be required to
>> access the page with the new permissions. SGX_IOC_ENCLAVE_RELAX_PERMISSIONS
>> (renamed to SGX_IOC_ENCLAVE_RESET_PAGES?) that does just a TLB flush is
>> required to be after EMODPE.
> 
> For EMODPE TLB flush is not required. I even verified this from Mark
> Shanahan. And since access rights are zero, the page cannot be
> deferenced by threads before EMODPE.
> 

Understood. I realized my mistake only after sending the email and attempted
to correct it in the following. Sorry for the noise.

Reinette

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [RFC PATCH v2.1 14/30] x86/sgx: Support restricting of enclave page permissions
  2022-03-09 23:42           ` Jarkko Sakkinen
@ 2022-03-10  0:11             ` Reinette Chatre
  0 siblings, 0 replies; 42+ messages in thread
From: Reinette Chatre @ 2022-03-10  0:11 UTC (permalink / raw)
  To: Jarkko Sakkinen
  Cc: linux-sgx, Nathaniel McCallum, Dave Hansen, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

Hi Jarkko,

On 3/9/2022 3:42 PM, Jarkko Sakkinen wrote:
> On Thu, Mar 10, 2022 at 01:35:53AM +0200, Jarkko Sakkinen wrote:
>> On Wed, Mar 09, 2022 at 08:59:42AM -0800, Reinette Chatre wrote:
>>> Hi Jarkko,
>>>
>>> On 3/9/2022 1:35 AM, Jarkko Sakkinen wrote:
>>>> On Wed, Mar 09, 2022 at 10:52:22AM +0200, Jarkko Sakkinen wrote:
>>>>> On Fri, Mar 04, 2022 at 11:35:08AM +0200, Jarkko Sakkinen wrote:
>>>>>> +#define SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS \
>>>>>> +	_IOWR(SGX_MAGIC, 0x05, struct sgx_enclave_restrict_perm)
>>>>>
>>>>> What if this was replaced with just SGX_IOC_ENCLAVE_RESET_PAGES, which
>>>>> would simply do EMODPR with PROT_NONE? The main ingredient of EMODPR is to
>>>>> flush out the TLB's, and move a page to pending state, which cannot be done
>>>>> from inside the enclave.
>>>
>>> I see the main ingredient as running EMODPR to restrict the EPCM permissions. If
>>> the user wants to use SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS just to flush TLB it is
>>> already possible since attempting to use EMODPR to relax permissions does not
>>> change any permissions (although it still sets EPCM.PR) but yet will still
>>> flush the TLB.
>>
>> It's not just to flush the TLB. It also resets permissions to zero from
>> which it is easy to set the exact permissions with EMODPE.
>>
>>> Even so, you have a very good point that removing SGX_IOC_ENCLAVE_RELAX_PERMISSIONS
>>> removes the ability for users to flush the TLB after an EMODPE. If there are
>>> thus PTEs present at the time the user runs EMODPE the pages would not be
>>> accessible with the new permissions.
>>>
>>> Repurposing SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS with PROT_NONE to accomplish
>>> this is not efficient because:
>>> - For the OS to flush the TLB the enclave pages need not be in the EPC but
>>>   in order to run EMODPR the enclave page needs to be in the EPC. In an 
>>>   oversubscribed environment running EMODPR unnecessarily can thus introduce
>>>   a significant delay. Please see the performance comparison I did in
>>>   https://lore.kernel.org/linux-sgx/77e81306-6b03-4b09-2df2-48e09e2e79d5@intel.com/
>>>   The test shows that running EMODPR unnecessarily can be orders of magnitude slower.
>>> - Running EMODPR on an enclave page sets the EPCM.PR bin in the enclave page
>>>   that needs to be cleared with an EACCEPT from within the enclave.
>>>   If the user just wants to reset the TLB after running EMODPE then it should
>>>   not be necessary to run EACCEPT again to reset EPCM.PR.
>>>
>>> Resetting the TLB is exactly what SGX_IOC_ENCLAVE_RELAX_PERMISSIONS did in an 
>>> efficient way - it is quick (no need to load pages into EPC) and it does not
>>> require EACCEPT to clear EPCM.PR. 
>>>
>>> It looks like we need SGX_IOC_ENCLAVE_RELAX_PERMISSIONS back. We could
>>> rename it to SGX_IOC_ENCLAVE_RESET_PAGES if you prefer.
>>
>> Please do not add it. We do not have any use for it. It's not only used
>> to flush TLB's so it would not do any good. I just use it with fixed
>> PROT_NONE permissions.
>>
>>>>> It's there because of microarchitecture constraints, and less so to work as
>>>>> a reasonable permission control mechanism (actually it does terrible job on
>>>>> that side and only confuses).
>>>>>
>>>>> Once you have this magic TLB reset button in place you can just do one
>>>>> EACCEPT and EMODPE inside the enclave and you're done.
>>>>>
>>>>> This is also kind of atomic in the sense that EACCEPT free's a page with no
>>>>> rights so no misuse can happend before EMODPE has tuned EPCM.
>>>>
>>>> I wonder if this type of pattern could be made work out for Graphene:
>>>>
>>>> 1. SGX_IOC_ENCLAVE_RESET_PAGES
>>>> 2. EACCEPT + EMODPE
>>>>
>>>> This kind of delivers EMODP that everyone has been looking for.
>>>
>>> EACCEPT will result in page table entries created for the enclave page. EMODPE
>>> will be able to relax the permissions but TLB flush would be required to
>>> access the page with the new permissions. SGX_IOC_ENCLAVE_RELAX_PERMISSIONS
>>> (renamed to SGX_IOC_ENCLAVE_RESET_PAGES?) that does just a TLB flush is
>>> required to be after EMODPE.
>>
>> For EMODPE TLB flush is not required. I even verified this from Mark
>> Shanahan. And since access rights are zero, the page cannot be
>> deferenced by threads before EMODPE.
> 
> I'm fine of course keeping SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS as it
> is too. It's at least future-proof that way.

Will do.

Reinette

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [RFC PATCH v2.1 14/30] x86/sgx: Support restricting of enclave page permissions
  2022-03-10  0:10           ` Reinette Chatre
@ 2022-03-10  2:02             ` Jarkko Sakkinen
  0 siblings, 0 replies; 42+ messages in thread
From: Jarkko Sakkinen @ 2022-03-10  2:02 UTC (permalink / raw)
  To: Reinette Chatre
  Cc: linux-sgx, Nathaniel McCallum, Dave Hansen, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

On Wed, Mar 09, 2022 at 04:10:27PM -0800, Reinette Chatre wrote:
> Hi Jarkko,
> 
> On 3/9/2022 3:35 PM, Jarkko Sakkinen wrote:
> > On Wed, Mar 09, 2022 at 08:59:42AM -0800, Reinette Chatre wrote:
> >> Hi Jarkko,
> >>
> >> On 3/9/2022 1:35 AM, Jarkko Sakkinen wrote:
> >>> On Wed, Mar 09, 2022 at 10:52:22AM +0200, Jarkko Sakkinen wrote:
> >>>> On Fri, Mar 04, 2022 at 11:35:08AM +0200, Jarkko Sakkinen wrote:
> >>>>> +#define SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS \
> >>>>> +	_IOWR(SGX_MAGIC, 0x05, struct sgx_enclave_restrict_perm)
> >>>>
> >>>> What if this was replaced with just SGX_IOC_ENCLAVE_RESET_PAGES, which
> >>>> would simply do EMODPR with PROT_NONE? The main ingredient of EMODPR is to
> >>>> flush out the TLBs and move a page to the pending state, which cannot be done
> >>>> from inside the enclave.
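For reference, roughly what that looks like on the kernel side. This is only a
sketch of the EMODPR + ETRACK + IPI sequence, written against the helpers the
series introduces (sgx_encl_cpumask(), sgx_ipi_cb(), __emodpr(), __etrack());
their exact signatures are assumed here, error handling and locking are
omitted, and this is not the ioctl code from any patch:

/* Sketch: would live next to the other helpers in the SGX driver. */
static int sgx_reset_page(struct sgx_encl *encl, struct sgx_encl_page *entry)
{
	struct sgx_secinfo secinfo = { 0 };	/* R = W = X = 0, i.e. PROT_NONE */
	int ret;

	/* EMODPR: EPCM rights drop to zero and EPCM.PR is set. */
	ret = __emodpr(&secinfo, sgx_get_epc_virt_addr(entry->epc_page));
	if (encls_faulted(ret))
		return -EFAULT;

	/* ETRACK on the SECS starts tracking the TLB shootdown. */
	ret = __etrack(sgx_get_epc_virt_addr(encl->secs.epc_page));
	if (ret)
		return -EFAULT;

	/* Kick every CPU that may still have stale translations cached. */
	on_each_cpu_mask(sgx_encl_cpumask(encl), sgx_ipi_cb, NULL, 1);

	return 0;
}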
> >>
> >> I see the main ingredient as running EMODPR to restrict the EPCM permissions. If
> >> the user wants to use SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS just to flush the TLB,
> >> that is already possible, since attempting to use EMODPR to relax permissions
> >> does not change any permissions (although it still sets EPCM.PR) yet will still
> >> flush the TLB.
> > 
> > It's not just to flush the TLB. It also resets permissions to zero from
> > which it is easy to set the exact permissions with EMODPE.
> > 
> >> Even so, you have a very good point that removing SGX_IOC_ENCLAVE_RELAX_PERMISSIONS
> >> removes the ability for users to flush the TLB after an EMODPE. If there are
> >> thus PTEs present at the time the user runs EMODPE, the pages would not be
> >> accessible with the new permissions.
> >>
> >> Repurposing SGX_IOC_ENCLAVE_RESTRICT_PERMISSIONS with PROT_NONE to accomplish
> >> this is not efficient because:
> >> - For the OS to flush the TLB the enclave pages need not be in the EPC but
> >>   in order to run EMODPR the enclave page needs to be in the EPC. In an 
> >>   oversubscribed environment running EMODPR unnecessarily can thus introduce
> >>   a significant delay. Please see the performance comparison I did in
> >>   https://lore.kernel.org/linux-sgx/77e81306-6b03-4b09-2df2-48e09e2e79d5@intel.com/
> >>   The test shows that running EMODPR unnecessarily can be orders of magnitude slower.
> >> - Running EMODPR on an enclave page sets the EPCM.PR bit in the enclave page,
> >>   and that bit needs to be cleared with an EACCEPT from within the enclave.
> >>   If the user just wants to reset the TLB after running EMODPE then it should
> >>   not be necessary to run EACCEPT again to reset EPCM.PR.
> >>
> >> Resetting the TLB is exactly what SGX_IOC_ENCLAVE_RELAX_PERMISSIONS did in an 
> >> efficient way - it is quick (no need to load pages into EPC) and it does not
> >> require EACCEPT to clear EPCM.PR. 
> >>
> >> It looks like we need SGX_IOC_ENCLAVE_RELAX_PERMISSIONS back. We could
> >> rename it to SGX_IOC_ENCLAVE_RESET_PAGES if you prefer.
> > 
> > Please do not add it. We do not have any use for it. It's not only used
> > to flush TLBs, so it would not do any good. I just use it with fixed
> > PROT_NONE permissions.
> > 
> >>>> It's there because of microarchitecture constraints, and less so to work as
> >>>> a reasonable permission control mechanism (actually it does a terrible job on
> >>>> that side and only confuses).
> >>>>
> >>>> Once you have this magic TLB reset button in place you can just do one
> >>>> EACCEPT and EMODPE inside the enclave and you're done.
> >>>>
> >>>> This is also kind of atomic in the sense that EACCEPT frees a page with no
> >>>> rights, so no misuse can happen before EMODPE has tuned the EPCM.
> >>>
> >>> I wonder if this type of pattern could be made to work for Graphene:
> >>>
> >>> 1. SGX_IOC_ENCLAVE_RESET_PAGES
> >>> 2. EACCEPT + EMODPE
> >>>
> >>> This kind of delivers EMODP that everyone has been looking for.
> >>
> >> EACCEPT will result in page table entries being created for the enclave page.
> >> EMODPE will be able to relax the permissions, but a TLB flush would be required
> >> to access the page with the new permissions. SGX_IOC_ENCLAVE_RELAX_PERMISSIONS
> >> (renamed to SGX_IOC_ENCLAVE_RESET_PAGES?), which does just a TLB flush, would
> >> be required after EMODPE.
> > 
> > For EMODPE a TLB flush is not required. I even verified this with Mark
> > Shanahan. And since the access rights are zero, the page cannot be
> > dereferenced by threads before EMODPE.
> > 
> 
> Understood. I realized my mistake only after sending the email and attempted
> to correct it in the follow-up. Sorry for the noise.

Please do not! It's really important this is looked at from every angle
before it hits the mainline :-)

BR, Jarkko

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers
  2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
                   ` (29 preceding siblings ...)
  2022-03-04  9:40 ` [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
@ 2022-03-14 19:04 ` Dave Hansen
  30 siblings, 0 replies; 42+ messages in thread
From: Dave Hansen @ 2022-03-14 19:04 UTC (permalink / raw)
  To: Jarkko Sakkinen, linux-sgx
  Cc: Nathaniel McCallum, Reinette Chatre, Dave Hansen,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),
	H. Peter Anvin, open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)

On 3/4/22 01:34, Jarkko Sakkinen wrote:
> From: Reinette Chatre <reinette.chatre@intel.com>
> 
> The SGX ENCLS instruction uses EAX to specify an SGX function and
> may require additional registers, depending on the SGX function.
> ENCLS invokes the specified privileged SGX function for managing
> and debugging enclaves. Macros are used to wrap the ENCLS
> functionality and several wrappers are used to wrap the macros to
> make the different SGX functions accessible in the code.
> 
> The wrappers of the supported SGX functions are cryptic. Add short
> descriptions of each as a comment.
> 
> Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
> Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>

Jarkko, what *is* this series?  There's no cover letter, all the patches
look like they are from Reinette, and the series is free of any
sign-offs from you.

Was this something you expected to be merged?

^ permalink raw reply	[flat|nested] 42+ messages in thread

end of thread, other threads:[~2022-03-14 19:04 UTC | newest]

Thread overview: 42+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-03-04  9:34 [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
2022-03-04  9:34 ` [RFC PATCH v2.1 02/30] x86/sgx: Add wrapper for SGX2 EMODPR function Jarkko Sakkinen
2022-03-04  9:34 ` [RFC PATCH v2.1 03/30] x86/sgx: Add wrapper for SGX2 EMODT function Jarkko Sakkinen
2022-03-04  9:34 ` [RFC PATCH v2.1 04/30] x86/sgx: Add wrapper for SGX2 EAUG function Jarkko Sakkinen
2022-03-04  9:34 ` [RFC PATCH v2.1 05/30] Documentation/x86: Document SGX permission details Jarkko Sakkinen
2022-03-04  9:35 ` [RFC PATCH v2.1 06/30] x86/sgx: Support VMA permissions more relaxed than enclave permissions Jarkko Sakkinen
2022-03-04  9:35 ` [RFC PATCH v2.1 07/30] x86/sgx: Add pfn_mkwrite() handler for present PTEs Jarkko Sakkinen
2022-03-04  9:35 ` [RFC PATCH v2.1 08/30] x86/sgx: Export sgx_encl_ewb_cpumask() Jarkko Sakkinen
2022-03-04  9:35 ` [RFC PATCH v2.1 09/30] x86/sgx: Rename sgx_encl_ewb_cpumask() as sgx_encl_cpumask() Jarkko Sakkinen
2022-03-04  9:35 ` [RFC PATCH v2.1 10/30] x86/sgx: Move PTE zap code to new sgx_zap_enclave_ptes() Jarkko Sakkinen
2022-03-04  9:35 ` [RFC PATCH v2.1 11/30] x86/sgx: Make sgx_ipi_cb() available internally Jarkko Sakkinen
2022-03-04  9:35 ` [RFC PATCH v2.1 12/30] x86/sgx: Create utility to validate user provided offset and length Jarkko Sakkinen
2022-03-04  9:35 ` [RFC PATCH v2.1 13/30] x86/sgx: Keep record of SGX page type Jarkko Sakkinen
2022-03-04  9:35 ` [RFC PATCH v2.1 14/30] x86/sgx: Support restricting of enclave page permissions Jarkko Sakkinen
2022-03-09  8:52   ` Jarkko Sakkinen
2022-03-09  9:35     ` Jarkko Sakkinen
2022-03-09 16:59       ` Reinette Chatre
2022-03-09 19:10         ` Reinette Chatre
2022-03-09 23:35         ` Jarkko Sakkinen
2022-03-09 23:42           ` Jarkko Sakkinen
2022-03-10  0:11             ` Reinette Chatre
2022-03-10  0:10           ` Reinette Chatre
2022-03-10  2:02             ` Jarkko Sakkinen
2022-03-04  9:35 ` [RFC PATCH v2.1 15/30] selftests/sgx: Add test for EPCM permission changes Jarkko Sakkinen
2022-03-04  9:35 ` [RFC PATCH v2.1 16/30] selftests/sgx: Add test for TCS page " Jarkko Sakkinen
2022-03-04  9:35 ` [RFC PATCH v2.1 17/30] x86/sgx: Support adding of pages to an initialized enclave Jarkko Sakkinen
2022-03-04  9:35 ` [RFC PATCH v2.1 18/30] x86/sgx: Tighten accessible memory range after enclave initialization Jarkko Sakkinen
2022-03-04  9:35 ` [RFC PATCH v2.1 19/30] selftests/sgx: Test two different SGX2 EAUG flows Jarkko Sakkinen
2022-03-04  9:35 ` [RFC PATCH v2.1 20/30] x86/sgx: Support modifying SGX page type Jarkko Sakkinen
2022-03-04  9:35 ` [RFC PATCH v2.1 21/30] x86/sgx: Support complete page removal Jarkko Sakkinen
2022-03-04  9:35 ` [RFC PATCH v2.1 22/30] Documentation/x86: Introduce enclave runtime management section Jarkko Sakkinen
2022-03-04  9:35 ` [RFC PATCH v2.1 23/30] selftests/sgx: Introduce dynamic entry point Jarkko Sakkinen
2022-03-04  9:35 ` [RFC PATCH v2.1 24/30] selftests/sgx: Introduce TCS initialization enclave operation Jarkko Sakkinen
2022-03-04  9:35 ` [RFC PATCH v2.1 25/30] selftests/sgx: Test complete changing of page type flow Jarkko Sakkinen
2022-03-04  9:35 ` [RFC PATCH v2.1 26/30] selftests/sgx: Test faulty enclave behavior Jarkko Sakkinen
2022-03-04  9:35 ` [RFC PATCH v2.1 27/30] selftests/sgx: Test invalid access to removed enclave page Jarkko Sakkinen
2022-03-04  9:35 ` [RFC PATCH v2.1 28/30] selftests/sgx: Test reclaiming of untouched page Jarkko Sakkinen
2022-03-04  9:35 ` [RFC PATCH v2.1 29/30] x86/sgx: Free up EPC pages directly to support large page ranges Jarkko Sakkinen
2022-03-04  9:35 ` [RFC PATCH v2.1 30/30] selftests/sgx: Page removal stress test Jarkko Sakkinen
2022-03-04  9:40 ` [RFC PATCH v2.1 01/30] x86/sgx: Add short descriptions to ENCLS wrappers Jarkko Sakkinen
2022-03-04  9:41   ` Jarkko Sakkinen
2022-03-14 19:04 ` Dave Hansen
