From: Sean Christopherson <sean.j.christopherson@intel.com>
To: "Xing, Cedric" <cedric.xing@intel.com>
Cc: Andy Lutomirski <luto@amacapital.net>,
	Andy Lutomirski <luto@kernel.org>,
	Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>,
	"linux-sgx@vger.kernel.org" <linux-sgx@vger.kernel.org>,
	"Hansen, Dave" <dave.hansen@intel.com>,
	Jethro Beekman <jethro@fortanix.com>,
	"Dr . Greg Wettstein" <greg@enjellic.com>
Subject: Re: [PATCH 5/7] x86/sgx: Add flag to zero added region instead of copying from source
Date: Mon, 10 Jun 2019 11:41:13 -0700
Message-ID: <20190610184112.GI15995@linux.intel.com>
In-Reply-To: <960B34DE67B9E140824F1DCDEC400C0F654FFDD3@ORSMSX116.amr.corp.intel.com>

On Mon, Jun 10, 2019 at 11:09:50AM -0700, Xing, Cedric wrote:
> > From: Andy Lutomirski [mailto:luto@amacapital.net]
> > Sent: Friday, June 07, 2019 12:32 PM
> > 
> > > On Jun 6, 2019, at 10:32 AM, Sean Christopherson
> > > <sean.j.christopherson@intel.com> wrote:
> > >
> > >> On Thu, Jun 06, 2019 at 10:20:38AM -0700, Andy Lutomirski wrote:
> > >> On Wed, Jun 5, 2019 at 12:49 PM Sean Christopherson
> > >> <sean.j.christopherson@intel.com> wrote:
> > >>>
> > >>> For some enclaves, e.g. an enclave with a small code footprint and a
> > >>> large working set, the vast majority of pages added to the enclave
> > >>> are zero pages.  Introduce a flag to denote such zero pages.  The
> > >>> major benefit of the flag will be realized in a future patch to use
> > >>> Linux's actual zero page as the source, as opposed to explicitly
> > >>> zeroing the enclave's backing memory.
> > >>>
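For illustration, here is a minimal userspace sketch of how such a flag
could be used.  The struct layout, the flag name and the ioctl number below
are assumptions made for the example, not the actual uapi from this series.

  #include <stdint.h>
  #include <sys/ioctl.h>

  /* Hypothetical argument layout for an "add region" ioctl. */
  struct sgx_add_region_sketch {
          uint64_t addr;     /* enclave linear address of the region */
          uint64_t src;      /* source buffer; ignored when zeroing */
          uint64_t size;     /* region size in bytes, page aligned */
          uint64_t secinfo;  /* pointer to the SECINFO for the pages */
          uint64_t flags;
  };

  /* Hypothetical flag: add zero pages, do not copy from a source. */
  #define SGX_ADD_REGION_ZERO_SKETCH (1ULL << 0)

  /* Add @size bytes of zeroed pages at @addr without a source buffer. */
  static int add_zero_region(int encl_fd, unsigned long add_region_ioctl,
                             uint64_t addr, uint64_t size, uint64_t secinfo)
  {
          struct sgx_add_region_sketch region = {
                  .addr    = addr,
                  .size    = size,
                  .secinfo = secinfo,
                  .flags   = SGX_ADD_REGION_ZERO_SKETCH,
          };

          return ioctl(encl_fd, add_region_ioctl, &region);
  }

With the page contents known to be zero, a later patch in the series can
back the EADD with the kernel's zero page instead of zeroing the enclave's
backing memory.
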
> > >>
> > >> I feel like I probably asked this at some point, but why is there a
> > >> workqueue here at all?
> > >
> > > Performance.  A while back I wrote a patch set to remove the worker
> > > queue and discovered that it tanks enclave build time when the enclave
> > > is being hosted by a Golang application.  Here's a snippet from a mail
> > > discussing the code.
> > >
> > >    The bad news is that I don't think we can remove the add page
> > >    worker, as applications with userspace schedulers, e.g. Go's
> > >    M:N scheduler, can see a 10x or more throughput improvement
> > >    when using the worker queue.  I did a bit of digging for the
> > >    Golang case to make sure I wasn't doing something horribly
> > >    stupid/naive and found that it's a generic issue in Golang with
> > >    blocking (or just long-running) system calls.  Because Golang
> > >    multiplexes Goroutines on top of OS threads, blocking syscalls
> > >    introduce latency and context switching overhead, e.g. Go's
> > >    scheduler will spin up a new OS thread to service other
> > >    Goroutines after it realizes the syscall has blocked, and will
> > >    later destroy one of the OS threads so that it doesn't build up
> > >    too many unused threads.
> > >
> > > IIRC, the scenario is spinning up several goroutines, each building an
> > > enclave.  I played around with adding a flag to do a synchronous EADD
> > > but didn't see a meaningful change in performance for the simple case.
> > > Supporting both the worker queue and direct paths was complex enough
> > > that I decided it wasn't worth the trouble for initial upstreaming.
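For reference, a rough sketch of what supporting both paths could look
like, just to show where the extra plumbing comes from.  Every name below
(the flag, the helpers, the workqueue) is a placeholder, not code from this
series.

  #include <linux/workqueue.h>
  #include <linux/list.h>

  /* Hypothetical flag selecting a synchronous EADD in ioctl context. */
  #define SGX_ADD_SYNC_SKETCH (1UL << 1)

  struct sgx_encl_sketch {
          struct list_head add_page_reqs;
          struct work_struct add_page_work;
  };

  struct sgx_add_req_sketch {
          struct list_head list;
          unsigned long flags;
          /* target address, source data, SECINFO, ... */
  };

  /* Placeholder helpers for the two paths. */
  long sgx_encl_add_page_now(struct sgx_encl_sketch *encl,
                             struct sgx_add_req_sketch *req);
  void sgx_encl_queue_add_req(struct sgx_encl_sketch *encl,
                              struct sgx_add_req_sketch *req);

  extern struct workqueue_struct *sgx_add_page_wq_sketch;

  static long sgx_ioc_add_page_sketch(struct sgx_encl_sketch *encl,
                                      struct sgx_add_req_sketch *req)
  {
          /* Synchronous path: EADD/EEXTEND directly in ioctl context. */
          if (req->flags & SGX_ADD_SYNC_SKETCH)
                  return sgx_encl_add_page_now(encl, req);

          /* Default path: queue the request for the add page worker. */
          sgx_encl_queue_add_req(encl, req);
          queue_work(sgx_add_page_wq_sketch, &encl->add_page_work);
          return 0;
  }
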
> > 
> > Sigh.
> > 
> > It seems silly to add a workaround for a language that has trouble
> > calling somewhat-but-not-too-slow syscalls or ioctls.
> > 
> > How about fixing this in Go directly?  Either convince the golang people
> > to add a way to allocate a real thread for a particular region of code
> > or have the Go SGX folks write a bit of C code to do a whole bunch of
> > ioctls and have Go call *that*.  Then the mess stays in Go where it
> > belongs.
> 
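To make that suggestion concrete, here is a sketch of a C helper a Go
program could call once via cgo instead of issuing one ioctl per region
from Go.  The request/argument layout is deliberately left opaque and the
names are placeholders.

  #include <stddef.h>
  #include <sys/ioctl.h>

  /* One queued add-region request; the argument struct is whatever the
   * final uapi defines, so it is treated as an opaque pointer here. */
  struct sgx_region_req_sketch {
          unsigned long request;  /* ioctl request number */
          void *arg;              /* pointer to the uapi argument struct */
  };

  /*
   * Issue all add-region ioctls from a single cgo call so the Go
   * scheduler sees one long-running C call instead of many blocking
   * syscalls.  Returns 0 on success, or -(i + 1) if the i-th ioctl
   * failed.
   */
  int sgx_add_regions_batched(int encl_fd,
                              const struct sgx_region_req_sketch *reqs,
                              size_t nr)
  {
          for (size_t i = 0; i < nr; i++) {
                  if (ioctl(encl_fd, reqs[i].request, reqs[i].arg))
                          return -(int)(i + 1);
          }
          return 0;
  }

The idea being that a single long-running call incurs the Go scheduler's
thread handoff once, rather than once per EADD.
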
> The add page worker is less about performance and more about concurrency
> restrictions in the SGX ISA. EADD/EEXTEND are slow instructions and both
> take a lock on the SECS, so if another EADD/EEXTEND is already in progress
> the new one will #GP.
> 
> In practice, no sane application will EADD from multiple threads, because
> that would make MRENCLAVE unpredictable. But an adversary may use that
> just to trigger a #GP in the kernel. Therefore, the SGX module would have
> to lock around EADD/EEXTEND anyway. And then we figured it would be better
> to have the add page worker so that no lock would be necessary, plus a
> (small) improvement in performance.

That may have been true years ago, but it doesn't reflect the current
reality.  The ADD_PAGE ioctl *and* the worker function both take the
enclave's lock.  The worker function actually takes it twice, once to pull
the request off the queue, and once to do EADD/EEXTEND.  The lock is
temporarily released by the worker function to avoid deadlock in case EPC
page allocation triggers reclaim.
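In code, the pattern being described is roughly the following; the _sketch
names are placeholders rather than the driver's real symbols.

  #include <linux/mutex.h>
  #include <linux/list.h>
  #include <linux/slab.h>

  struct sgx_encl_sketch {
          struct mutex lock;              /* the per-enclave lock */
          struct list_head add_page_reqs; /* queued EADD requests */
  };

  struct sgx_add_req_sketch {
          struct list_head list;
          /* target address, source data, SECINFO, ... */
  };

  /* Placeholder for EPC allocation; may trigger reclaim, and reclaim
   * needs the same enclave lock. */
  void *sgx_alloc_epc_page_sketch(void);
  /* Placeholder for the ENCLS[EADD]/ENCLS[EEXTEND] sequence. */
  int sgx_eadd_eextend_sketch(struct sgx_encl_sketch *encl,
                              struct sgx_add_req_sketch *req, void *epc_page);

  static void sgx_add_page_worker_sketch(struct sgx_encl_sketch *encl)
  {
          struct sgx_add_req_sketch *req;
          void *epc_page;

          /* First acquisition: pull the next request off the queue. */
          mutex_lock(&encl->lock);
          req = list_first_entry_or_null(&encl->add_page_reqs,
                                         struct sgx_add_req_sketch, list);
          if (req)
                  list_del(&req->list);
          mutex_unlock(&encl->lock);
          if (!req)
                  return;

          /*
           * Allocate with the lock dropped: if allocation has to reclaim
           * EPC pages it will want the enclave lock itself, so holding it
           * here could deadlock.
           */
          epc_page = sgx_alloc_epc_page_sketch();
          if (!epc_page) {
                  kfree(req);
                  return;
          }

          /* Second acquisition: EADD/EEXTEND under the enclave lock. */
          mutex_lock(&encl->lock);
          sgx_eadd_eextend_sketch(encl, req, epc_page);
          mutex_unlock(&encl->lock);

          kfree(req);
  }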
