From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Christopherson, Sean J"
Subject: RE: [intel-sgx-kernel-dev] [PATCH 08/10] kvm: vmx: add guest's IA32_SGXLEPUBKEYHASHn runtime switch support
Date: Fri, 16 Jun 2017 16:31:28 +0000
Message-ID: <37306EFA9975BE469F115FDE982C075BC61162A2@ORSMSX108.amr.corp.intel.com>
References: <6ab7ec4e-e0fa-af47-11b2-f26edcb088fb@linux.intel.com>
 <596dc1ad-eac7-798d-72e5-665eb7f3f2e4@linux.intel.com>
 <0b4697b9-0976-c8ad-e26f-4ff683318486@linux.intel.com>
 <20170608123101.47pgsaovkgtdxaw4@intel.com>
 <46bdaa22-8e7d-738f-9dd0-840fe3327506@linux.intel.com>
 <20170610122306.lfjshzepqxxyqj72@intel.com>
 <001ecd91-15e7-ef5a-097b-d57bc7784f47@linux.intel.com>
 <20170612083658.vrrcr6dq6axiovse@intel.com>
 <3bbe95fe-bb97-d430-e9d3-d4edcb381f46@linux.intel.com>
 <92f3b0cc-1f04-d8a5-9e86-0417f75f8ed9@linux.intel.com>
 <91b9e29f-fc15-7524-3740-4417d3a1dd8f@linux.intel.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 8BIT
Cc: "intel-sgx-kernel-dev@lists.01.org", kvm list, Radim Krcmar, "Paolo Bonzini"
To: 'Andy Lutomirski', "Huang, Kai"
Return-path:
Received: from mga04.intel.com ([192.55.52.120]:62806 "EHLO mga04.intel.com"
 rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1750865AbdFPQba
 (ORCPT ); Fri, 16 Jun 2017 12:31:30 -0400
In-Reply-To:
Content-Language: en-US
Sender: kvm-owner@vger.kernel.org
List-ID:

Andy Lutomirski wrote:
> On Thu, Jun 15, 2017 at 9:33 PM, Huang, Kai wrote:
> >
> > On 6/16/2017 4:11 PM, Andy Lutomirski wrote:
> >>
> >> On Thu, Jun 15, 2017 at 8:46 PM, Huang, Kai wrote:
> >>>
> >>> On 6/13/2017 11:00 AM, Andy Lutomirski wrote:
> >>>>
> >>>> On Mon, Jun 12, 2017 at 3:08 PM, Huang, Kai wrote:
> >>>>>
> >>>>> I don't know whether the SGX driver will have restrictions on
> >>>>> running the provisioning enclave. In my understanding the
> >>>>> provisioning enclave always comes from Intel. However, I am not
> >>>>> an expert here and may well be wrong. Can you point out *exactly*
> >>>>> which restrictions in the host must/should be applied to the
> >>>>> guest, so that Jarkko can know whether he will support them or
> >>>>> not? Otherwise I don't think we even need to talk about this
> >>>>> topic at the current stage.
> >>>>
> >>>> The whole point is that I don't know. But here are two types of
> >>>> restriction I can imagine demand for:
> >>>>
> >>>> 1. Only a particular approved provisioning enclave may run (be it
> >>>> Intel's or otherwise -- with a non-Intel LE, I think you can
> >>>> launch a non-Intel provisioning enclave). This would be done to
> >>>> restrict what types of remote attestation can be done. (Intel
> >>>> supplies a remote attestation service that uses some contractual
> >>>> policy that I don't know. Maybe a system owner wants a different
> >>>> policy applied to ISVs.) Imposing this policy on guests more or
> >>>> less requires filtering EINIT.
> >>>
> >>> Hi Andy,
> >>>
> >>> Sorry for the late reply.
> >>>
> >>> What is the issue if the host and guest run provisioning enclaves
> >>> from different vendors, for example, the host runs Intel's
> >>> provisioning enclave and the guest runs another vendor's
> >>> provisioning enclave? Or different guests run provisioning
> >>> enclaves from different vendors?
> >>
> >> There's no issue unless someone has tried to impose a policy. There
> >> is clearly at least some interest in having policies that affect
> >> what enclaves can run -- otherwise there wouldn't be LEs in the
> >> first place.
> >>
> >>> One reason I am asking is that, on Xen (where we don't have the
> >>> concept of a *host*), it's likely that we won't apply any policy in
> >>> the Xen hypervisor at all, and guests will be able to run any
> >>> enclave from any signer as they wish.
> >>
> >> That seems entirely reasonable. Someone may eventually ask Xen to
> >> add support for SGX enclave restrictions, in which case you'll
> >> either have to tell them that it won't happen or implement it.
> >>
> >>> Sorry that I don't understand (or kind of forgot) the issues here.
> >>>
> >>>> 2. For kiosk-ish or single-purpose applications, I can imagine
> >>>> that you would want to allow a specific list of enclave signers or
> >>>> even enclave hashes. Maybe you would allow exactly one enclave
> >>>> hash. You could kludge this up with a restrictive LE policy, but
> >>>> you could also do it for real by implementing the specific
> >>>> restriction in the kernel. Then you'd want to impose it on the
> >>>> guest, and you'd do it by filtering EINIT.
> >>>
> >>> Assuming the enclave hash means the measurement of the enclave, and
> >>> assuming we have a policy that only allows enclaves from one signer
> >>> to run, would you also elaborate on the issue if the host and guest
> >>> run enclaves from different signers? If the host has such a policy,
> >>> and we allow creating guests on such a host, I think that typically
> >>> we will have the same policy in the guest
> >>
> >> Yes, I presume this too, but.
> >>
> >>> (vetted by the guest's kernel). The owner of that host should be
> >>> aware of the risk (if there is any) of creating a guest and running
> >>> an enclave inside it.
> >>
> >> No. The host does not trust the guest in general. If the host has a
> >
> > I agree.
> >
> >> policy that the only enclave that shall run is X, that doesn't mean
> >> that the host shall reject all enclaves requested by the normal
> >> userspace API except X but that, if /dev/kvm is used, then the user
> >> is magically trusted to not load a guest that fails to respect the
> >> host policy. It means that the only enclave that shall run is X
> >> regardless of what interface is used. The host must only allow X to
> >> be loaded by its userspace and the host must only allow X to be
> >> loaded by a guest.
> >
> > This is a theoretical thing. I think your statement only makes sense
> > if we have a specific example that proves there is an actual risk in
> > allowing a guest to exceed the X approved by the host.
> >
> > I will dig through your previous emails to see whether you have
> > listed such real cases (I have somewhat forgotten, sorry), but if you
> > don't mind, you could list such cases here.
>
> I'm operating under the assumption that some kind of policy exists in
> the first place. I can imagine everything working fairly well without
> any real policy, but apparently there are vendors who want restrictive
> policies. What I can't imagine is anyone who wants a restrictive
> policy but is then okay with the host only partially enforcing it.

I think there is a certain amount of inception going on here, i.e. the
only reason we're discussing LE-enforced policies in the kernel is
because the LE architecture exists and can't be disabled. The LE, as
originally designed, is intended to be a way for *userspace* to control
what code can run on the system, e.g. to provide a hook for
anti-virus/anti-malware software to inspect an enclave, since it's
impossible to inspect an enclave once it is running.
The kernel doesn't need an LE to restrict what enclaves can run, e.g. it
can perform inspection at any point during the initialization process.
This is true for guest enclaves as well, since the kernel can trap EINIT
(see the rough sketch at the end of this mail). By making the LE
kernel-only we've bastardized the concept of the LE and have negated the
primary value provided by an LE[1][2]. In my opinion, the discussion of
the kernel's launch policies is much ado about nothing, e.g. if
supported by hardware, I think we'd opt to disable launch control
completely.

[1] On a system with unlocked IA32_SGXLEPUBKEYHASH MSRs, the only value
added by using an LE to enforce the kernel's policies is
defense-in-depth, e.g. an attacker can't hide malicious code in an
enclave even if it gains control of the kernel. I think this is a very
minor benefit, since running in an enclave doesn't grant any new
privileges and doesn't persist across system reset.

[2] I think it's safe to assume that any use case that requires locked
hash MSRs is out of scope for this discussion, given that the upstream
kernel will require unlocked MSRs.
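For reference, a rough sketch of what trapping ENCLS[EINIT] in KVM and
switching to the guest's hash MSRs could look like is below. This is
illustrative only, not the actual patch; the field and helper names
(vcpu->arch.sgx_lepubkeyhash, sgx_approved_sigstruct(), sgx_einit() and
the sgx_* struct types) are made up for the example, and it assumes the
MSRs are unlocked so the kernel can rewrite them at runtime.

/*
 * Illustrative sketch, not the real implementation.  Assumes EINIT is
 * intercepted via the ENCLS-exiting bitmap and that the guest's
 * IA32_SGXLEPUBKEYHASH values are cached in vcpu->arch (hypothetical
 * field).
 */
#include <linux/kvm_host.h>
#include <linux/preempt.h>
#include <asm/msr.h>

#define MSR_IA32_SGXLEPUBKEYHASH0	0x0000008c

static int handle_encls_einit(struct kvm_vcpu *vcpu,
			      struct sgx_sigstruct *sigstruct,
			      struct sgx_einittoken *token,
			      struct sgx_secs *secs)
{
	int i, ret;

	/*
	 * Host policy hook: inspect the SIGSTRUCT before running EINIT
	 * on the guest's behalf, e.g. reject signers the host doesn't
	 * allow.  No LE involved.
	 */
	if (!sgx_approved_sigstruct(sigstruct))
		return -EACCES;

	/*
	 * Load the guest's pubkey hash into the hardware MSRs so that
	 * EINIT validates the token/signature against the hash the
	 * guest expects.  Disable preemption so the MSR writes and
	 * EINIT execute on the same CPU.
	 */
	preempt_disable();
	for (i = 0; i < 4; i++)
		wrmsrl(MSR_IA32_SGXLEPUBKEYHASH0 + i,
		       vcpu->arch.sgx_lepubkeyhash[i]);

	/* Execute EINIT and reflect the result back to the guest. */
	ret = sgx_einit(sigstruct, token, secs);
	preempt_enable();

	return ret;
}

A per-CPU cache of the last written hash would avoid redundant WRMSRs
when one guest dominates a CPU, but the basic idea is just "restore the
guest's view of the hash MSRs around EINIT".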