Date: Mon, 15 May 2017 11:44:09 -0600
From: Alex Williamson <alex.williamson@redhat.com>
To: "Chen, Xiaoguang"
Cc: Gerd Hoffmann, "Tian, Kevin", "intel-gfx@lists.freedesktop.org",
 "linux-kernel@vger.kernel.org", "zhenyuw@linux.intel.com",
 "Lv, Zhiyuan", "intel-gvt-dev@lists.freedesktop.org", "Wang, Zhi A"
Subject: Re: [RFC PATCH 6/6] drm/i915/gvt: support QEMU getting the dmabuf
Message-ID: <20170515114409.414d1fdb@w520.home>
References: <1493372130-27727-1-git-send-email-xiaoguang.chen@intel.com>
 <1493372130-27727-7-git-send-email-xiaoguang.chen@intel.com>
 <1493718658.8581.82.camel@redhat.com>
 <20170504100833.199bc8ba@t450s.home>
 <1493967331.371.53.camel@redhat.com>
 <20170505091115.7a680636@t450s.home>
 <1494509273.17970.12.camel@redhat.com>
 <20170511094526.164985ee@w520.home>
 <20170511205829.672854c3@t450s.home>
 <1494580325.14352.57.camel@redhat.com>
 <20170512103828.4a1378a1@t450s.home>

On Mon, 15 May 2017 03:36:50 +0000
"Chen, Xiaoguang" wrote:

> Hi Alex and Gerd,
>
> >-----Original Message-----
> >From: Alex Williamson [mailto:alex.williamson@redhat.com]
> >Sent: Saturday, May 13, 2017 12:38 AM
> >To: Gerd Hoffmann
> >Cc: Chen, Xiaoguang; Tian, Kevin; intel-gfx@lists.freedesktop.org;
> >linux-kernel@vger.kernel.org; zhenyuw@linux.intel.com; Lv, Zhiyuan;
> >intel-gvt-dev@lists.freedesktop.org; Wang, Zhi A
> >Subject: Re: [RFC PATCH 6/6] drm/i915/gvt: support QEMU getting the dmabuf
> >
> >On Fri, 12 May 2017 11:12:05 +0200
> >Gerd Hoffmann wrote:
> >
> >> Hi,
> >>
> >> > If the contents of the framebuffer change or if the parameters of
> >> > the framebuffer change?  I can't imagine that creating a new dmabuf
> >> > fd for every visual change within the framebuffer would be
> >> > efficient, but I don't have any concept of what a dmabuf actually
> >> > does.
> >>
> >> Ok, some background:
> >>
> >> The drm subsystem has the concept of planes.  The most important
> >> plane is the primary framebuffer (i.e. what gets scanned out to the
> >> physical display).  The cursor is a plane too, and there can be
> >> additional overlay planes for stuff like video playback.
> >>
> >> Typically there are multiple planes in a system and only one of them
> >> gets scanned out to the crtc, i.e. the fbdev emulation creates one
> >> plane for the framebuffer console.  The X-Server creates a plane
> >> too, and when you switch between X-Server and framebuffer console
> >> via ctrl-alt-fn the intel driver just reprograms the encoder to scan
> >> out the one or the other plane to the crtc.
> >>
> >> The dma-buf handed out by gvt is a reference to a plane.  I think on
> >> the host side gvt can see only the active plane (from encoder/crtc
> >> register programming), not the inactive ones.
> >>
> >> The dma-buf can be imported as an opengl texture and then be used to
> >> render the guest display to a host window.  I think it is even
> >> possible to use the dma-buf as a plane in the host drm driver and
> >> scan it out directly to a physical display.  The actual framebuffer
> >> content stays in gpu memory all the time, the cpu never has to touch
> >> it.
> >>
> >> It is possible to cache the dma-buf handles, i.e. when the guest
> >> boots you'll get the first for the fbcon plane, when the x-server
> >> starts the second for the x-server framebuffer, and when the user
> >> switches to the text console via ctrl-alt-fn you can re-use the
> >> fbcon dma-buf you already have.
> >>
> >> The caching becomes more important for good performance when the
> >> guest uses pageflipping (wayland does): define two planes, render
> >> into one while displaying the other, then flip the two for an atomic
> >> display update.
> >>
> >> The caching also makes it a bit difficult to create a good
> >> interface.  So, the current patch set creates:
> >>
> >>  (a) A way to query the active planes (ioctl INTEL_VGPU_QUERY_DMABUF
> >>      added by patch 5/6 of this series).
> >>  (b) A way to create a dma-buf for the active plane (ioctl
> >>      INTEL_VGPU_GENERATE_DMABUF).
> >>
> >> Typical userspace workflow is to first query the plane, then check
> >> if it already has a dma-buf for it, and if not create one.
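
[Aside: a minimal userspace sketch of the query/cache/create flow
described above, assuming the RFC's INTEL_VGPU_QUERY_DMABUF /
INTEL_VGPU_GENERATE_DMABUF ioctls on the vfio device fd.  The
plane-info structure, the ioctl numbers and the cache policy are
invented placeholders, not the uAPI actually proposed in patches 5/6
and 6/6.]

/* Sketch only: placeholder definitions standing in for whatever the
 * RFC's header really defines; only the query -> cache -> generate
 * flow matters here. */
#include <stdint.h>
#include <sys/ioctl.h>

struct vgpu_plane_info {
	uint32_t plane_id;		/* which plane is scanned out now */
	uint32_t width, height;
	uint32_t drm_format;
};

#define INTEL_VGPU_QUERY_DMABUF    _IOR('i', 0x20, struct vgpu_plane_info)
#define INTEL_VGPU_GENERATE_DMABUF _IOW('i', 0x21, struct vgpu_plane_info)

#define MAX_PLANES 8
static int plane_dmabuf_cache[MAX_PLANES] = {
	-1, -1, -1, -1, -1, -1, -1, -1
};

/* Return a dmabuf fd for whatever plane the guest is scanning out. */
static int get_guest_plane_dmabuf(int vfio_device_fd)
{
	struct vgpu_plane_info info;
	int fd;

	/* ask the vendor driver which plane is active right now */
	if (ioctl(vfio_device_fd, INTEL_VGPU_QUERY_DMABUF, &info) < 0)
		return -1;

	/* re-use a dmabuf already created for this plane (a real cache
	 * would also compare size/format, not just the plane id) ... */
	fd = plane_dmabuf_cache[info.plane_id % MAX_PLANES];
	if (fd >= 0)
		return fd;

	/* ... otherwise create a new one and remember it */
	fd = ioctl(vfio_device_fd, INTEL_VGPU_GENERATE_DMABUF, &info);
	if (fd >= 0)
		plane_dmabuf_cache[info.plane_id % MAX_PLANES] = fd;
	return fd;
}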
> >
> >Thank you!  This is immensely helpful!
> >
> >> > What changes to the framebuffer require a new dmabuf fd?  Shouldn't
> >> > the user query the parameters of the framebuffer through a dmabuf
> >> > fd and shouldn't the dmabuf fd have some signaling mechanism to the
> >> > user (eventfd perhaps) to notify the user to re-evaluate the
> >> > parameters?
> >>
> >> dma-bufs don't support that, they are really just a handle to a
> >> piece of memory, all metadata (format, size) must be communicated by
> >> other means.
> >>
> >> > Otherwise are you imagining that the user polls the vfio region?
> >>
> >> Hmm, notification support would probably be a good reason to have a
> >> separate file handle to manage the dma-bufs (instead of using
> >> driver-specific ioctls on the vfio fd), because the driver could
> >> also use the management fd for notifications then.
> >
> >I like this idea of a separate control fd for dmabufs, it provides not
> >only a central management point, but also a nice abstraction for the
> >vfio device specific interface.  We potentially only need a single
> >VFIO_DEVICE_GET_DMABUF_MGR_FD() ioctl to get a dmabuf management fd
> >(perhaps with a type parameter, ex. GFX) where maybe we could have
> >vfio-core incorporate this reference into the group lifecycle, so the
> >vendor driver only needs to fdget/put this manager fd for the various
> >plane dmabuf fds spawned in order to get core-level reference
> >counting.
>
> Following is my understanding of the management fd idea:
> 1) QEMU will call the VFIO_DEVICE_GET_DMABUF_MGR_FD() ioctl to create
>    an fd and save it in the vfio group while initializing vfio.

Ideally there'd be kernel work here too if we want vfio-core to
incorporate the lifecycle of this fd into the device/group/container
lifecycle.  Maybe we even want to generalize it further to something
like VFIO_DEVICE_GET_FD which takes a parameter of what type of FD to
get, GFX_DMABUF_MGR_FD in this case.  vfio-core would probably allocate
the fd, tap into the release hook for reference counting, and pass it
to the vfio_device_ops (the mdev vendor driver in this case) to attach
further.

> 2) vendor driver uses fdget to add a reference to the fd.
> 3) vendor driver uses ioctls on the fd to query plane information or
>    create a dma-buf fd.
> 4) vendor driver uses fdput when finished using this fd.
>
> Is my understanding right?

With the above addition, which maybe you were already considering,
seems right.
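
[Aside: a rough sketch of the vendor-driver end of the "dmabuf manager
fd" idea discussed above, built on anon_inode_getfd().  Every name
below is invented for illustration, and the vfio-core plumbing Alex
describes (allocating the fd, tapping the release hook, passing it
through vfio_device_ops) is only hinted at in the comments.]

/* Sketch only: the mdev vendor driver hands QEMU an anonymous-inode fd
 * whose ioctls cover plane query / dmabuf creation and whose release
 * hook drops the references taken when the fd was created. */
#include <linux/anon_inodes.h>
#include <linux/fcntl.h>
#include <linux/fs.h>
#include <linux/module.h>

struct intel_vgpu;			/* per-vGPU state owning the planes */

/* assumed helpers provided elsewhere by the vendor driver */
long intel_vgpu_dmabuf_mgr_do_ioctl(struct intel_vgpu *vgpu,
				    unsigned int cmd, unsigned long arg);
void intel_vgpu_dmabuf_mgr_put(struct intel_vgpu *vgpu);

static long vgpu_dmabuf_mgr_ioctl(struct file *filp, unsigned int cmd,
				  unsigned long arg)
{
	/* plane query / dmabuf creation requests are dispatched here */
	return intel_vgpu_dmabuf_mgr_do_ioctl(filp->private_data, cmd, arg);
}

static int vgpu_dmabuf_mgr_release(struct inode *inode, struct file *filp)
{
	/* release hook: drop the device/group references taken at
	 * creation, giving the core-level reference counting discussed */
	intel_vgpu_dmabuf_mgr_put(filp->private_data);
	return 0;
}

static const struct file_operations vgpu_dmabuf_mgr_fops = {
	.owner		= THIS_MODULE,
	.unlocked_ioctl	= vgpu_dmabuf_mgr_ioctl,
	.release	= vgpu_dmabuf_mgr_release,
};

/* Called from the vfio device ioctl path (a GET_DMABUF_MGR_FD or
 * generalized GET_FD request) to create the manager fd for this vGPU. */
int intel_vgpu_create_dmabuf_mgr_fd(struct intel_vgpu *vgpu)
{
	return anon_inode_getfd("intel-vgpu-dmabuf-mgr",
				&vgpu_dmabuf_mgr_fops, vgpu, O_CLOEXEC);
}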
> Both QEMU and kernel vfio-core will have changes based on this
> proposal, in addition to the vendor part changes.
> Who will make these changes?

/me points to the folks trying to enable this functionality...  Thanks,

Alex
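
[Aside: pulling steps 1)-4) together, the QEMU side of the proposed
management-fd flow might look roughly like the sequence below.
VFIO_DEVICE_GET_DMABUF_MGR_FD (or a generalized VFIO_DEVICE_GET_FD with
a GFX_DMABUF_MGR_FD type) does not exist; the ioctl number and the
helper here are placeholders for this discussion only.]

#include <poll.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define VFIO_DEVICE_GET_DMABUF_MGR_FD  _IO('V', 0x40)	/* placeholder */

/* assumed helper, see the earlier query/cache sketch */
int get_plane_dmabuf_via_mgr(int mgr_fd);

int display_one_update(int vfio_device_fd)
{
	struct pollfd pfd;
	int mgr_fd, dmabuf_fd;

	/* 1) obtain the manager fd once, at vfio init time */
	mgr_fd = ioctl(vfio_device_fd, VFIO_DEVICE_GET_DMABUF_MGR_FD);
	if (mgr_fd < 0)
		return -1;

	/* wait for the vendor driver to signal a display change; how
	 * that notification is delivered was still an open question */
	pfd.fd = mgr_fd;
	pfd.events = POLLIN;
	poll(&pfd, 1, -1);

	/* 2)/3) plane query and dmabuf creation now go through the
	 * manager fd instead of the vfio device fd */
	dmabuf_fd = get_plane_dmabuf_via_mgr(mgr_fd);

	/* ... import dmabuf_fd as an opengl texture or host drm plane ... */

	/* 4) closing the fds lets the release hook undo the references
	 * taken when the manager fd was created */
	close(dmabuf_fd);
	close(mgr_fd);
	return 0;
}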