From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 16 Jun 2016 16:22:22 +0100
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Message-ID: <20160616152221.GD2249@work-vm>
References: <569FADC7.7060301@linux.vnet.ibm.com>
 <20160120162220.GH13215@redhat.com>
 <20160121113632.GC2446@work-vm>
 <57FA3A002D66E049AA7792D931B894C7060F5494@MOKSCY3MSGUSRGB.ITServices.sbc.com>
 <945CA011AD5F084CBEA3E851C0AB28894B8C3A14@SHSMSX101.ccr.corp.intel.com>
 <575E92DB.3080904@linux.vnet.ibm.com>
 <20160615193019.GB7300@work-vm>
 <5761C092.5070702@linux.vnet.ibm.com>
 <20160616080520.GA2249@work-vm>
 <5762BFE5.9070906@linux.vnet.ibm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <5762BFE5.9070906@linux.vnet.ibm.com>
Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
To: Stefan Berger <stefanb@linux.vnet.ibm.com>
Cc: Stefan Berger, "mst@redhat.com", "qemu-devel@nongnu.org",
 "hagen.lauer@huawei.com", "Xu, Quan", "silviu.vlasceanu@gmail.com",
 "SERBAN, CRISTINA", "SHIH, CHING C"

* Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
> On 06/16/2016 04:05 AM, Dr. David Alan Gilbert wrote:
> > * Stefan Berger (stefanb@linux.vnet.ibm.com) wrote:
> > > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> > > >
> > > > So what was the multi-instance vTPM proxy driver patch set about?
> > > That's for containers.
> > Why have the two mechanisms? Can you explain how the multi-instance
> > proxy works; my brief reading when I saw your patch series seemed
> > to suggest it could be used instead of CUSE for the non-container case.
>
> The multi-instance vtpm proxy driver works through an ioctl() on
> /dev/vtpmx that spawns a new front-end/back-end pair. The front-end is
> a new /dev/tpm%d device that can then be moved into the container
> (mknod + device cgroup setup). The back-end is an anonymous file
> descriptor that is passed to a TPM emulator, which reads the TPM
> requests coming in from that /dev/tpm%d and returns the responses.
> Since it is implemented as a kernel driver, we can hook it into the
> Linux Integrity Measurement Architecture (IMA) and have IMA use it in
> place of a hardware TPM driver. There is ongoing work on namespacing
> support for IMA, so that each container gets an independent IMA
> instance and can make use of this.
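
(As an aside, to check my understanding of that spawn step: from my
reading of the patch set, the userspace side would look roughly like the
sketch below. Untested, and the ioctl/struct names are what I took from
the series, so treat them as assumptions rather than the final ABI.)

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/vtpm_proxy.h>   /* header from the vtpm proxy series */

    int main(void)
    {
        /* Ask the driver to spawn a new front-end/back-end pair. */
        struct vtpm_proxy_new_dev new_dev = { .flags = 0 };
        int ctrl = open("/dev/vtpmx", O_RDWR);

        if (ctrl < 0 || ioctl(ctrl, VTPM_PROXY_IOC_NEW_DEV, &new_dev) < 0) {
            perror("vtpmx");
            return 1;
        }
        /* The front-end appears as /dev/tpm<tpm_num> (mknod that into
         * the container); new_dev.fd is the anonymous back-end fd to
         * hand to the TPM emulator. */
        printf("front-end /dev/tpm%u, back-end fd %u\n",
               new_dev.tpm_num, new_dev.fd);
        return 0;
    }
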
> A TPM has not only a data channel (/dev/tpm%d) but also a control
> channel, which is primarily implemented in its hardware interface and
> is typically not fully accessible to user space. The vtpm proxy driver
> supports _only_ the data channel, through which it basically relays
> TPM commands and responses from user space to the TPM emulator. The
> control channel is provided by the software emulator through an
> additional TCP or UnixIO socket, or, in the case of CUSE, through
> ioctls. The control channel allows, among several other things,
> resetting the TPM when the container/VM is reset, setting the locality
> of a command, and retrieving the state of the vTPM (for suspend) and
> setting it again (for resume). The commands for the control channel
> are defined here:
>
> https://github.com/stefanberger/swtpm/blob/master/include/swtpm/tpm_ioctl.h
>
> For a container we would require that its management stack initializes
> and resets the vTPM when the container is rebooted. (On physical
> hardware these operations are typically done through pulses on the
> motherboard.)
>
> In the case of QEMU we would need fuller access to the control
> channel, including initialization and reset of the vTPM, getting and
> setting its state for suspend/resume/migration, setting the locality
> of commands, etc., so that all low-level functionality is accessible
> to the emulator (QEMU). The proxy driver does not help with this;
> instead we should use the swtpm implementation, which provides the
> control channel either through the CUSE interface (via ioctls) or
> through UnixIO and TCP sockets.

OK, that makes sense; does the control interface need to be handled by
QEMU, by libvirt, or by both?

Either way, I think you're saying that with your kernel interface + a
UnixIO socket you can avoid the CUSE stuff?

Dave

> Stefan
>
> > Dave
> > P.S. I've removed Jeff from the cc because I got a bounce from
> > his AT&T address saying 'restricted/not authorized'
> >
> > > Stefan
> >
> > --
> > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
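
P.S. For my own notes: my mental model of the UnixIO control channel is
the sketch below, i.e. a 4-byte big-endian command code followed by the
command's payload, with a big-endian result code coming back. Untested,
and the framing, the CMD_INIT value, and the socket path are from my
reading of tpm_ioctl.h above plus guesswork, so treat them as
assumptions.

    #include <arpa/inet.h>          /* htonl()/ntohl() */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    #define CMD_INIT 2              /* PTM_INIT, per swtpm's tpm_ioctl.h */

    int main(void)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        uint32_t cmd = htonl(CMD_INIT);
        uint32_t init_flags = htonl(0); /* ptm_init request payload */
        uint32_t res;
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        /* Path is whatever swtpm's control socket was configured as;
         * "/tmp/swtpm-ctrl" is made up for this example. */
        strcpy(addr.sun_path, "/tmp/swtpm-ctrl");
        if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            return 1;
        }
        /* Reset/initialize the vTPM: command code, then the payload. */
        write(fd, &cmd, sizeof(cmd));
        write(fd, &init_flags, sizeof(init_flags));
        read(fd, &res, sizeof(res));
        printf("PTM_INIT result: %u\n", ntohl(res)); /* 0 == success */
        close(fd);
        return 0;
    }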