* [PATCH v2] Fix the issue further discussed in:
From: Jarkko Sakkinen @ 2020-10-05  3:17 UTC
  To: linux-sgx
  Cc: Jarkko Sakkinen, Haitao Huang, Matthew Wilcox,
	Sean Christopherson, Jethro Beekman, Dave Hansen

1. https://lore.kernel.org/linux-sgx/op.0rwbv916wjvjmi@mqcpg7oapc828.gar.corp.intel.com/
2. https://lore.kernel.org/linux-sgx/20201003195440.GD20115@casper.infradead.org/

Use the approach suggested by Matthew, i.e. iterate the xarray with the
xa_lock held and drop the lock every XA_CHECK_SCHED entries so that
cond_resched() can be called. This is supported by the analysis that I
wrote:

https://lore.kernel.org/linux-sgx/20201005030619.GA126283@linux.intel.com/

Reported-by: Haitao Huang <haitao.huang@linux.intel.com>
Suggested-by: Matthew Wilcox <willy@infradead.org>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Jethro Beekman <jethro@fortanix.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
---
 arch/x86/kernel/cpu/sgx/encl.c | 24 +++++++++++++++++++++++-
 1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 4c6407cd857a..2bb3ec6996e9 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -307,6 +307,7 @@ int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
 	unsigned long idx_start = PFN_DOWN(start);
 	unsigned long idx_end = PFN_DOWN(end - 1);
 	struct sgx_encl_page *page;
+	unsigned long count = 0;
 
 	XA_STATE(xas, &encl->page_array, idx_start);
 
@@ -317,10 +318,31 @@ int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
 	if (current->personality & READ_IMPLIES_EXEC)
 		return -EACCES;
 
-	xas_for_each(&xas, page, idx_end)
+	/*
+	 * No need to hold encl->lock:
+	 * 1. None of the page->* get written.
+	 * 2. page->vm_max_prot_bits is set in sgx_encl_page_alloc(). This
+	 *    is before calling xa_insert(). After that it is never modified.
+	 */
+	xas_lock(&xas);
+	xas_for_each(&xas, page, idx_end) {
+		if (++count % XA_CHECK_SCHED)
+			continue;
+
+		xas_pause(&xas);
+		xas_unlock(&xas);
+
+		/*
+		 * Attributes are not protected by the xa_lock, so I assume
+		 * this is the legitimate place for the check.
+		 */
 		if (!page || (~page->vm_max_prot_bits & vm_prot_bits))
 			return -EACCES;
 
+		cond_resched();
+		xas_lock(&xas);
+	}
+
 	return 0;
 }
 
-- 
2.25.1


* Re: [PATCH v2] Fix the issue further discussed in:
From: Matthew Wilcox @ 2020-10-05 11:11 UTC
  To: Jarkko Sakkinen
  Cc: linux-sgx, Haitao Huang, Sean Christopherson, Jethro Beekman,
	Dave Hansen

On Mon, Oct 05, 2020 at 06:17:59AM +0300, Jarkko Sakkinen wrote:
> @@ -317,10 +318,31 @@ int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
>  	if (current->personality & READ_IMPLIES_EXEC)
>  		return -EACCES;
>  
> -	xas_for_each(&xas, page, idx_end)
> +	/*
> +	 * No need to hold encl->lock:
> +	 * 1. None of the page->* get written.
> +	 * 2. page->vm_max_prot_bits is set in sgx_encl_page_alloc(). This
> +	 *    is before calling xa_insert(). After that it is never modified.
> +	 */
> +	xas_lock(&xas);
> +	xas_for_each(&xas, page, idx_end) {
> +		if (++count % XA_CHECK_SCHED)
> +			continue;

This really doesn't do what you think it does: ++count % XA_CHECK_SCHED is
non-zero on all but every XA_CHECK_SCHED'th iteration, so the continue skips
the permission check for almost every page.

	int ret = 0;
	int count = 0;

	xas_lock(&xas);
	while (xas.index < idx_end) {
		struct sgx_encl_page *page = xas_next(&xas);

		if (!page || (~page->vm_max_prot_bits & vm_prot_bits)) {
			ret = -EACCES;
			break;
		}

		if (++count % XA_CHECK_SCHED)
			continue;
		xas_pause(&xas);
		xas_unlock(&xas);
		cond_resched();
		xas_lock(&xas);
	}
	xas_unlock(&xas);

	return ret;

* Re: [PATCH v2] Fix the issue further discussed in:
From: Jarkko Sakkinen @ 2020-10-05 11:48 UTC
  To: Matthew Wilcox
  Cc: linux-sgx, Haitao Huang, Sean Christopherson, Jethro Beekman,
	Dave Hansen

On Mon, Oct 05, 2020 at 12:11:39PM +0100, Matthew Wilcox wrote:
> On Mon, Oct 05, 2020 at 06:17:59AM +0300, Jarkko Sakkinen wrote:
> > @@ -317,10 +318,31 @@ int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
> >  	if (current->personality & READ_IMPLIES_EXEC)
> >  		return -EACCES;
> >  
> > -	xas_for_each(&xas, page, idx_end)
> > +	/*
> > +	 * No need to hold encl->lock:
> > +	 * 1. None of the page->* get written.
> > +	 * 2. page->vm_max_prot_bits is set in sgx_encl_page_alloc(). This
> > +	 *    is before calling xa_insert(). After that it is never modified.
> > +	 */
> > +	xas_lock(&xas);
> > +	xas_for_each(&xas, page, idx_end) {
> > +		if (++count % XA_CHECK_SCHED)
> > +			continue;
> 
> This really doesn't do what you think it does: ++count % XA_CHECK_SCHED is
> non-zero on all but every XA_CHECK_SCHED'th iteration, so the continue skips
> the permission check for almost every page.
> 
> 	int ret = 0;
> 	int count = 0;
> 
> 	xas_lock(&xas);
> 	while (xas.index < idx_end) {
> 		struct sgx_encl_page *page = xas_next(&xas);
> 
> 		if (!page || (~page->vm_max_prot_bits & vm_prot_bits)) {
> 			ret = -EACCES;
> 			break;
> 		}
> 
> 		if (++count % XA_CHECK_SCHED)
> 			continue;
> 		xas_pause(&xas);
> 		xas_unlock(&xas);
> 		cond_resched();
> 		xas_lock(&xas);
> 	}
> 	xas_unlock(&xas);
> 
> 	return ret;

No, mine certainly does not: it locks up the system if the loop succeeds
(i.e. does not return -EACCES) :-) Unfortunately, by mistake I had the v1
patch (xa_load()) in the kernel that I used for testing.

/Jarkko

* Re: [PATCH v2] Fix the issue further discussed in:
From: Jarkko Sakkinen @ 2020-10-05 11:48 UTC
  To: Matthew Wilcox
  Cc: linux-sgx, Haitao Huang, Sean Christopherson, Jethro Beekman,
	Dave Hansen

On Mon, Oct 05, 2020 at 02:48:07PM +0300, Jarkko Sakkinen wrote:
> On Mon, Oct 05, 2020 at 12:11:39PM +0100, Matthew Wilcox wrote:
> > On Mon, Oct 05, 2020 at 06:17:59AM +0300, Jarkko Sakkinen wrote:
> > > @@ -317,10 +318,31 @@ int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
> > >  	if (current->personality & READ_IMPLIES_EXEC)
> > >  		return -EACCES;
> > >  
> > > -	xas_for_each(&xas, page, idx_end)
> > > +	/*
> > > +	 * No need to hold encl->lock:
> > > +	 * 1. None of the page->* get written.
> > > +	 * 2. page->vm_max_prot_bits is set in sgx_encl_page_alloc(). This
> > > +	 *    is before calling xa_insert(). After that it is never modified.
> > > +	 */
> > > +	xas_lock(&xas);
> > > +	xas_for_each(&xas, page, idx_end) {
> > > +		if (++count % XA_CHECK_SCHED)
> > > +			continue;
> > 
> > This really doesn't do what you think it does: ++count % XA_CHECK_SCHED is
> > non-zero on all but every XA_CHECK_SCHED'th iteration, so the continue skips
> > the permission check for almost every page.
> > 
> > 	int ret = 0;
> > 	int count = 0;
> > 
> > 	xas_lock(&xas);
> > 	while (xas.index < idx_end) {
> > 		struct sgx_encl_page *page = xas_next(&xas);
> > 
> > 		if (!page || (~page->vm_max_prot_bits & vm_prot_bits)) {
> > 			ret = -EACCES;
> > 			break;
> > 		}
> > 
> > 		if (++count % XA_CHECK_SCHED)
> > 			continue;
> > 		xas_pause(&xas);
> > 		xas_unlock(&xas);
> > 		cond_resched();
> > 		xas_lock(&xas);
> > 	}
> > 	xas_unlock(&xas);
> > 
> > 	return ret;
> 
> No, mine certainly does not: it locks up the system if the loop succeeds
> (i.e. does not return -EACCES) :-) Unfortunately, by mistake I had the v1
> patch (xa_load()) in the kernel that I used for testing.

... and not having xas_unlock() at the end was not intentional.

/Jarkko
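
For reference, the pieces discussed above can be assembled into one loop:
Matthew's pause/reschedule pattern, a permission check that runs for every
page, and the final xas_unlock() that the v2 patch was missing. This is an
untested sketch against the context of the v2 patch (idx_start, idx_end,
page, count and vm_prot_bits as declared there); nr_pages and ret are new
here, and the closing count comparison is one way to preserve the intent of
the !page test, since xas_for_each() skips absent entries rather than
returning NULL:

	unsigned long nr_pages = idx_end - idx_start + 1;
	int ret = 0;

	xas_lock(&xas);
	xas_for_each(&xas, page, idx_end) {
		if (~page->vm_max_prot_bits & vm_prot_bits) {
			ret = -EACCES;
			break;
		}

		/* Drop the lock and reschedule every XA_CHECK_SCHED pages. */
		if (!(++count % XA_CHECK_SCHED)) {
			xas_pause(&xas);
			xas_unlock(&xas);

			cond_resched();

			xas_lock(&xas);
		}
	}
	xas_unlock(&xas);

	/*
	 * xas_for_each() visits only present entries, so a hole in the
	 * range never shows up as a NULL page. Comparing the number of
	 * visited entries against the range size catches holes instead.
	 */
	if (!ret && count != nr_pages)
		ret = -EACCES;

	return ret;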
