From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752089AbdLSSwH convert rfc822-to-8bit (ORCPT );
	Tue, 19 Dec 2017 13:52:07 -0500
Received: from mga06.intel.com ([134.134.136.31]:1136 "EHLO mga06.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751009AbdLSSwF (ORCPT );
	Tue, 19 Dec 2017 13:52:05 -0500
X-IronPort-AV: E=Sophos;i="5.45,427,1508828400"; d="scan'208";a="188164781"
From: "Christopherson, Sean J"
To: Jarkko Sakkinen
Cc: "linux-kernel@vger.kernel.org",
	"intel-sgx-kernel-dev@lists.01.org",
	"platform-driver-x86@vger.kernel.org"
Subject: RE: [intel-sgx-kernel-dev] [PATCH v5 06/11] intel_sgx: driver for Intel Software Guard Extensions
Date: Tue, 19 Dec 2017 18:52:02 +0000
Message-ID: <37306EFA9975BE469F115FDE982C075BCDEE4562@ORSMSX114.amr.corp.intel.com>
References: <20171113194528.28557-1-jarkko.sakkinen@linux.intel.com>
	<20171113194528.28557-7-jarkko.sakkinen@linux.intel.com>
	<1510682106.3313.24.camel@intel.com>
	<20171114202835.64rl35asldh3jgui@linux.intel.com>
	<1510770027.11044.37.camel@intel.com>
	<37306EFA9975BE469F115FDE982C075BC6B3B5E6@ORSMSX108.amr.corp.intel.com>
	<20171215150020.e3vq5fh2rtydzhkt@linux.intel.com>
In-Reply-To: <20171215150020.e3vq5fh2rtydzhkt@linux.intel.com>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 8BIT
MIME-Version: 1.0
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Friday, 2017-12-15, Jarkko Sakkinen wrote:
> > Resurrecting this thread now that I have a system with launch control
> > and have been able to measure the performance impact...
> >
> > Regenerating the EINIT token every time adds somewhere in the vicinity
> > of ~5% overhead to creating an enclave, versus generating a token once
> > and reusing it in each EINIT call.  This isn't a huge issue since real
> > world usage models likely won't be re-launching enclaves at a high
> > rate, but it is measurable.
>
> We can cache tokens in future in the kernel space, can't we?

Yes, but why?  Deferring to userspace is less complex and likely more
performant.

Tokens are large enough that there would need to be some form of limit
on the number of tokens, which brings up questions about how to account
tokens, the cache eviction scheme, whether or not the size of the cache
should be controllable from userspace, etc...

Userspace caching can likely provide better performance because the
user/application knows the usage model and life expectancy of its
tokens, i.e. userspace can make informed decisions about when to
discard a token, how much memory to dedicate to caching tokens, etc...
And in the case of VMs, userspace can reuse tokens across reboots
(of the VM), e.g.
by saving tokens to disk.