From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Christopherson, Sean J"
To: Jarkko Sakkinen
CC: "linux-kernel@vger.kernel.org",
	"intel-sgx-kernel-dev@lists.01.org",
	"platform-driver-x86@vger.kernel.org"
Subject: RE: [intel-sgx-kernel-dev] [PATCH v5 06/11] intel_sgx: driver for
	Intel Software Guard Extensions
Date: Tue, 19 Dec 2017 23:24:55 +0000
Message-ID: <37306EFA9975BE469F115FDE982C075BCDEE4742@ORSMSX114.amr.corp.intel.com>
References: <20171113194528.28557-1-jarkko.sakkinen@linux.intel.com>
	<20171113194528.28557-7-jarkko.sakkinen@linux.intel.com>
	<1510682106.3313.24.camel@intel.com>
	<20171114202835.64rl35asldh3jgui@linux.intel.com>
	<1510770027.11044.37.camel@intel.com>
	<37306EFA9975BE469F115FDE982C075BC6B3B5E6@ORSMSX108.amr.corp.intel.com>
	<20171215150020.e3vq5fh2rtydzhkt@linux.intel.com>
	<37306EFA9975BE469F115FDE982C075BCDEE4562@ORSMSX114.amr.corp.intel.com>
	<1513725073.2206.13.camel@linux.intel.com>
In-Reply-To: <1513725073.2206.13.camel@linux.intel.com>
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
X-Mailing-List: linux-kernel@vger.kernel.org

On Tuesday, December 19, 2017 Jarkko Sakkinen wrote:
> On Tue, 2017-12-19 at 18:52 +0000, Christopherson, Sean J wrote:
> > > We can cache tokens in future in the kernel space, can't we?
> >
> > Yes, but why?  Deferring to userspace is less complex and likely
> > more performant.
>
> That's quite a strong argument, especially if you are making it for
> systems running multiple independent workloads and not just a single
> application.
>
> > Tokens are large enough that there would need to be some form of
> > limit on the number of tokens, which brings up questions about
> > how to account tokens, the cache eviction scheme, whether or not
> > the size of the cache should be controllable from userspace, etc...
>
> Leaving caching decisions to the kernel also gives more freedom to
> make global decisions.
>
> > Userspace caching can likely provide better performance because
> > the user/application knows the usage model and life expectancy of
> > its tokens, i.e. userspace can make informed decisions about when
> > to discard a token, how much memory to dedicate to caching tokens,
> > etc...  And in the case of VMs, userspace can reuse tokens across
> > reboots (of the VM), e.g.
> > by saving tokens to disk.
>
> I'm not really convinced that your argument is sound if you consider
> the whole range of x86 systems that can run enclaves, especially if
> the system is running multiple unrelated applications.
>
> And you are ignoring everything else but the performance, which does
> not make any sense.  The current design gives the Linux kernel the
> ultimate power over which enclaves to run, with minimized proprietary
> risk.  I think that is something worth emphasizing too.

Exposing the token generated by the in-kernel LE doesn't affect the
kernel's power in the slightest: the kernel doesn't need an LE to
refuse to run an enclave, and a privileged user can always load an
out-of-tree driver if they really want to circumvent the kernel's
policies, which is probably easier than stealing the LE's private key.

> Whether token caching is left to the kernel or user space, it will
> most definitely introduce some non-trivial performance problems to
> solve with unexpected workloads that we cannot imagine right now.
> That's why governance, not performance, should be the driver.  Those
> issues can and must be sorted out in any case.
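For illustration, the userspace persistence scheme argued for above (reusing
launch tokens across application or VM restarts by saving them to disk) could
be sketched roughly as below.  The `TokenCache` class, the one-file-per-
measurement layout, and the `generate` callback are all illustrative
assumptions, not part of the proposed driver or any real SGX SDK API:

```python
import hashlib
import os
import tempfile


class TokenCache:
    """Hypothetical userspace cache for launch tokens, keyed by the
    enclave measurement (MRENCLAVE).  Tokens are kept in memory and
    mirrored to disk so they survive process (or VM) restarts."""

    def __init__(self, cache_dir):
        self.cache_dir = cache_dir
        self.mem = {}
        os.makedirs(cache_dir, exist_ok=True)

    def _path(self, mrenclave):
        # One file per enclave measurement, named by its hex encoding.
        return os.path.join(self.cache_dir, mrenclave.hex())

    def get(self, mrenclave, generate):
        """Return a cached token: memory first, then disk, then fall
        back to the caller-supplied generator (in a real application,
        something that requests a fresh EINIT token)."""
        token = self.mem.get(mrenclave)
        if token is not None:
            return token
        path = self._path(mrenclave)
        if os.path.exists(path):
            with open(path, "rb") as f:
                token = f.read()
        else:
            token = generate(mrenclave)
            with open(path, "wb") as f:
                f.write(token)
        self.mem[mrenclave] = token
        return token


if __name__ == "__main__":
    # Stand-in "token generator"; a real one would invoke a launch
    # enclave.  Here we just derive bytes from the measurement.
    cache = TokenCache(tempfile.mkdtemp())
    mr = hashlib.sha256(b"example-enclave").digest()
    calls = []

    def gen(m):
        calls.append(m)
        return b"token-for-" + m[:4]

    t1 = cache.get(mr, gen)
    t2 = cache.get(mr, gen)
    assert t1 == t2 and len(calls) == 1  # second lookup hit the cache
```

The point of the disk mirror is exactly the VM-reuse case mentioned above: a
new `TokenCache` instantiated over the same directory after a restart finds
the token on disk and never calls the generator.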