* [PATCH v15 0/7] drm/i915/gvt: Dma-buf support for GVT-g
@ 2017-10-10  9:50 Tina Zhang
  2017-10-10  9:50 ` [PATCH v15 1/7] drm/i915/gvt: Add framebuffer decoder support Tina Zhang
                   ` (8 more replies)
  0 siblings, 9 replies; 31+ messages in thread
From: Tina Zhang @ 2017-10-10  9:50 UTC (permalink / raw)
  To: alex.williamson, kraxel, chris, zhenyuw, zhiyuan.lv, zhi.a.wang,
	kevin.tian, daniel, kwankhede
  Cc: Tina Zhang, intel-gfx, intel-gvt-dev, linux-kernel

v14->v15:
1) Add VFIO_DEVICE_GET_GFX_DMABUF ABI. (Gerd)
2) Add intel_vgpu_dmabuf_cleanup() to clean up the vGPU's dmabuf. (Gerd)

v13->v14:
1) add PROBE, DMABUF and REGION flags. (Alex)
2) return -ENXIO when gem proxy object is banned by ioctl.
   (Chris) (Daniel)
3) add some details about the float pixel format. (Daniel)
4) add F suffix to the defined name. (Daniel)
5) refine pixel format table. (Zhenyu)

v12->v13:
1) add comments to GEM proxy. (Chris)
2) don't ban GEM proxy in i915_gem_sw_finish_ioctl. (Chris)
3) check GEM proxy bar after finishing i915_gem_object_wait. (Chris)
4) remove GEM proxy bar in i915_gem_madvise_ioctl.

v11->v12:
1) add drm_format_mod back. (Gerd and Zhenyu)
2) add region_index. (Gerd)
3) refine the lifecycle of dmabuf.
4) send to dri-devel@lists.freedesktop.org. (Ville)

v10->v11:
1) rename plane_type to drm_plane_type. (Gerd)
2) move fields of vfio_device_query_gfx_plane to
   vfio_device_gfx_plane_info. (Gerd)
3) remove drm_format_mod, start fields. (Daniel)
4) remove plane_id.

v9->v10:
1) remove dma-buf management
2) refine the VFIO_DEVICE_QUERY_GFX_PLANE ABI
3) track dma-buf creation and release in kernel mode

v8->v9:
1) refine the dma-buf ioctl definition
2) add a lock to protect the dmabuf list
3) move drm format change to a separate patch
4) code cleanup

v7->v8:
1) refine framebuffer decoder code
2) fix a bug in decoding primary plane

v6->v7:
1) release dma-buf related allocations in dma-buf's associated release
   function.
2) refine ioctl interface for querying plane info or create dma-buf
3) refine framebuffer decoder code
4) the patch series is based on 4.12.0-rc1

v5->v6:
1) align the dma-buf life cycle with the vfio device.
2) add the dma-buf related operations in a separate patch.
3) i915 related changes.

v4->v5:
1) fix a bug in checking whether the gem obj is gvt's dma-buf when the user
   changes caching mode or domains. Add a helper function for this.
2) add definition for the query plane and create dma-buf.

v3->v4:
1) fix a bug in checking whether the gem obj is gvt's dma-buf when setting
   caching mode or domains.

v2->v3:
1) add a gvt_plane_info field to the drm_i915_gem_obj structure to save
   the decoded plane information and avoid a lookup whenever the plane info
   is needed.
2) declare a new flag I915_GEM_OBJECT_IS_GVT_DMABUF in drm_i915_gem_object
   to mark gem objects backing gvt's dma-buf. The tiling mode, caching mode
   and domains cannot be changed for this kind of gem object.
3) make the dma-buf related information more generic, so other vendors can
   use the same interface.

v1->v2:
1) create a management fd for dma-buf operations.
2) allocate the gem object's backing storage in its get_pages() callback.

This patch set adds dma-buf support to Intel GVT-g.

dma-buf is a uniform mechanism for sharing DMA buffers across different
devices and subsystems. For Intel GVT-g it is mainly used to share the
vGPU's framebuffer with userspace, so that userspace graphics stacks can
render it to the display.

The main idea is to create a gem object whose backing storage is the
vGPU's framebuffer, and then export a dma-buf associated with that gem
object. With the dma-buf's fd, userspace can directly handle this buffer.
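
For illustration, a minimal userspace sketch of that flow, using the
VFIO_DEVICE_QUERY_GFX_PLANE and VFIO_DEVICE_GET_GFX_DMABUF ioctls
proposed in patch 5/7 (the struct layout follows that patch and may
still change; it needs the uapi headers from this series installed):

	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/vfio.h>

	/* Sketch only: exchange the decoded primary plane for a dma-buf fd. */
	static int export_primary_plane(int vfio_device_fd)
	{
		struct vfio_device_gfx_plane_info plane = {
			.argsz = sizeof(plane),
			.flags = VFIO_GFX_PLANE_TYPE_DMABUF,
			.drm_plane_type = 1,	/* DRM_PLANE_TYPE_PRIMARY */
		};
		int dmabuf_fd;

		if (ioctl(vfio_device_fd, VFIO_DEVICE_QUERY_GFX_PLANE, &plane))
			return -1;

		/* dmabuf_id identifies the framebuffer decoded above. */
		dmabuf_fd = ioctl(vfio_device_fd, VFIO_DEVICE_GET_GFX_DMABUF,
				  &plane.dmabuf_id);
		if (dmabuf_fd < 0)
			return -1;

		printf("plane %ux%u fmt 0x%x stride %u -> dmabuf fd %d\n",
		       plane.width, plane.height, plane.drm_format,
		       plane.stride, dmabuf_fd);
		return dmabuf_fd;
	}

The returned fd can then be imported into a GL/DRM stack (e.g. via
EGL_EXT_image_dma_buf_import) for rendering.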

This patch set can be tried with the following QEMU tree:
	git://git.kraxel.org/qemu  branch: work/intel-vgpu

Tina Zhang (7):
  drm/i915/gvt: Add framebuffer decoder support
  drm: Introduce RGB 64-bit 16:16:16:16 float format
  drm/i915/gvt: Add RGB 64-bit 16:16:16:16 float format
  drm/i915/gvt: Add opregion support
  vfio: ABI for mdev display dma-buf operation
  drm/i915: Introduce GEM proxy
  drm/i915/gvt: Dmabuf support for GVT-g

 drivers/gpu/drm/i915/gvt/Makefile      |   3 +-
 drivers/gpu/drm/i915/gvt/display.c     |   2 +-
 drivers/gpu/drm/i915/gvt/display.h     |   2 +
 drivers/gpu/drm/i915/gvt/dmabuf.c      | 513 ++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/gvt/dmabuf.h      |  65 ++++
 drivers/gpu/drm/i915/gvt/fb_decoder.c  | 521 +++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/gvt/fb_decoder.h  | 169 +++++++++++
 drivers/gpu/drm/i915/gvt/gvt.c         |   2 +
 drivers/gpu/drm/i915/gvt/gvt.h         |  12 +
 drivers/gpu/drm/i915/gvt/hypercall.h   |   3 +
 drivers/gpu/drm/i915/gvt/kvmgt.c       | 169 ++++++++++-
 drivers/gpu/drm/i915/gvt/mpt.h         |  45 +++
 drivers/gpu/drm/i915/gvt/opregion.c    |  26 +-
 drivers/gpu/drm/i915/gvt/vgpu.c        |   9 +-
 drivers/gpu/drm/i915/i915_gem.c        |  24 +-
 drivers/gpu/drm/i915/i915_gem_object.h |   9 +
 drivers/gpu/drm/i915/i915_gem_tiling.c |   8 +
 include/uapi/drm/drm_fourcc.h          |   4 +
 include/uapi/linux/vfio.h              |  62 ++++
 19 files changed, 1635 insertions(+), 13 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/gvt/dmabuf.c
 create mode 100644 drivers/gpu/drm/i915/gvt/dmabuf.h
 create mode 100644 drivers/gpu/drm/i915/gvt/fb_decoder.c
 create mode 100644 drivers/gpu/drm/i915/gvt/fb_decoder.h

-- 
2.7.4

* [PATCH v15 1/7] drm/i915/gvt: Add framebuffer decoder support
  2017-10-10  9:50 [PATCH v15 0/7] drm/i915/gvt: Dma-buf support for GVT-g Tina Zhang
@ 2017-10-10  9:50 ` Tina Zhang
  2017-10-10  9:50   ` Tina Zhang
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 31+ messages in thread
From: Tina Zhang @ 2017-10-10  9:50 UTC (permalink / raw)
  To: alex.williamson, kraxel, chris, zhenyuw, zhiyuan.lv, zhi.a.wang,
	kevin.tian, daniel, kwankhede
  Cc: Tina Zhang, intel-gfx, intel-gvt-dev, linux-kernel

This patch introduces the framebuffer decoder, which decodes the guest
OS's framebuffer information, including the primary, cursor and sprite
planes.
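
A minimal sketch of how a caller might use the decoder (illustrative
only; the actual consumer is the dma-buf code added later in this
series):

	/* Sketch: decode and dump the guest's primary plane. */
	static int dump_guest_primary_plane(struct intel_vgpu *vgpu)
	{
		struct intel_vgpu_primary_plane_format p;
		int ret;

		ret = intel_vgpu_decode_primary_plane(vgpu, &p);
		if (ret)
			return ret;	/* plane disabled or unsupported format */

		gvt_dbg_core("primary: %ux%u, stride %u, drm_format 0x%x, base 0x%x\n",
			     p.width, p.height, p.stride, p.drm_format, p.base);
		return 0;
	}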

v14:
- refine pixel format table. (Zhenyu)

v9:
- move drm format change to a separate patch. (Xiaoguang)

v8:
- fix a bug in decoding primary plane. (Tina)

v7:
- refine framebuffer decoder code. (Zhenyu)

Signed-off-by: Tina Zhang <tina.zhang@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
---
 drivers/gpu/drm/i915/gvt/Makefile     |   3 +-
 drivers/gpu/drm/i915/gvt/display.c    |   2 +-
 drivers/gpu/drm/i915/gvt/display.h    |   2 +
 drivers/gpu/drm/i915/gvt/fb_decoder.c | 507 ++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/gvt/fb_decoder.h | 169 ++++++++++++
 drivers/gpu/drm/i915/gvt/gvt.h        |   1 +
 6 files changed, 682 insertions(+), 2 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/gvt/fb_decoder.c
 create mode 100644 drivers/gpu/drm/i915/gvt/fb_decoder.h

diff --git a/drivers/gpu/drm/i915/gvt/Makefile b/drivers/gpu/drm/i915/gvt/Makefile
index f5486cb9..019d596 100644
--- a/drivers/gpu/drm/i915/gvt/Makefile
+++ b/drivers/gpu/drm/i915/gvt/Makefile
@@ -1,7 +1,8 @@
 GVT_DIR := gvt
 GVT_SOURCE := gvt.o aperture_gm.o handlers.o vgpu.o trace_points.o firmware.o \
 	interrupt.o gtt.o cfg_space.o opregion.o mmio.o display.o edid.o \
-	execlist.o scheduler.o sched_policy.o render.o cmd_parser.o
+	execlist.o scheduler.o sched_policy.o render.o cmd_parser.o \
+	fb_decoder.o
 
 ccflags-y				+= -I$(src) -I$(src)/$(GVT_DIR)
 i915-y					+= $(addprefix $(GVT_DIR)/, $(GVT_SOURCE))
diff --git a/drivers/gpu/drm/i915/gvt/display.c b/drivers/gpu/drm/i915/gvt/display.c
index 3c31843..fb7fdba 100644
--- a/drivers/gpu/drm/i915/gvt/display.c
+++ b/drivers/gpu/drm/i915/gvt/display.c
@@ -67,7 +67,7 @@ static int edp_pipe_is_enabled(struct intel_vgpu *vgpu)
 	return 1;
 }
 
-static int pipe_is_enabled(struct intel_vgpu *vgpu, int pipe)
+int pipe_is_enabled(struct intel_vgpu *vgpu, int pipe)
 {
 	struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
 
diff --git a/drivers/gpu/drm/i915/gvt/display.h b/drivers/gpu/drm/i915/gvt/display.h
index d73de22..b46b868 100644
--- a/drivers/gpu/drm/i915/gvt/display.h
+++ b/drivers/gpu/drm/i915/gvt/display.h
@@ -179,4 +179,6 @@ int intel_vgpu_init_display(struct intel_vgpu *vgpu, u64 resolution);
 void intel_vgpu_reset_display(struct intel_vgpu *vgpu);
 void intel_vgpu_clean_display(struct intel_vgpu *vgpu);
 
+int pipe_is_enabled(struct intel_vgpu *vgpu, int pipe);
+
 #endif
diff --git a/drivers/gpu/drm/i915/gvt/fb_decoder.c b/drivers/gpu/drm/i915/gvt/fb_decoder.c
new file mode 100644
index 0000000..d380ab8
--- /dev/null
+++ b/drivers/gpu/drm/i915/gvt/fb_decoder.c
@@ -0,0 +1,507 @@
+/*
+ * Copyright(c) 2011-2016 Intel Corporation. All rights reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * Authors:
+ *    Kevin Tian <kevin.tian@intel.com>
+ *
+ * Contributors:
+ *    Bing Niu <bing.niu@intel.com>
+ *    Xu Han <xu.han@intel.com>
+ *    Ping Gao <ping.a.gao@intel.com>
+ *    Xiaoguang Chen <xiaoguang.chen@intel.com>
+ *    Yang Liu <yang2.liu@intel.com>
+ *    Tina Zhang <tina.zhang@intel.com>
+ *
+ */
+
+#include <uapi/drm/drm_fourcc.h>
+#include "i915_drv.h"
+#include "gvt.h"
+
+#define PRIMARY_FORMAT_NUM	16
+struct pixel_format {
+	int	drm_format;	/* Pixel format in DRM definition */
+	int	bpp;		/* Bits per pixel, 0 indicates invalid */
+	char	*desc;		/* The description */
+};
+
+static struct pixel_format bdw_pixel_formats[] = {
+	{DRM_FORMAT_C8, 8, "8-bit Indexed"},
+	{DRM_FORMAT_RGB565, 16, "16-bit BGRX (5:6:5 MSB-R:G:B)"},
+	{DRM_FORMAT_XRGB8888, 32, "32-bit BGRX (8:8:8:8 MSB-X:R:G:B)"},
+	{DRM_FORMAT_XBGR2101010, 32, "32-bit RGBX (2:10:10:10 MSB-X:B:G:R)"},
+
+	{DRM_FORMAT_XRGB2101010, 32, "32-bit BGRX (2:10:10:10 MSB-X:R:G:B)"},
+	{DRM_FORMAT_XBGR8888, 32, "32-bit RGBX (8:8:8:8 MSB-X:B:G:R)"},
+
+	/* non-supported format has bpp default to 0 */
+	{0, 0, NULL},
+};
+
+static struct pixel_format skl_pixel_formats[] = {
+	{DRM_FORMAT_YUYV, 16, "16-bit packed YUYV (8:8:8:8 MSB-V:Y2:U:Y1)"},
+	{DRM_FORMAT_UYVY, 16, "16-bit packed UYVY (8:8:8:8 MSB-Y2:V:Y1:U)"},
+	{DRM_FORMAT_YVYU, 16, "16-bit packed YVYU (8:8:8:8 MSB-U:Y2:V:Y1)"},
+	{DRM_FORMAT_VYUY, 16, "16-bit packed VYUY (8:8:8:8 MSB-Y2:U:Y1:V)"},
+
+	{DRM_FORMAT_C8, 8, "8-bit Indexed"},
+	{DRM_FORMAT_RGB565, 16, "16-bit BGRX (5:6:5 MSB-R:G:B)"},
+	{DRM_FORMAT_ABGR8888, 32, "32-bit RGBA (8:8:8:8 MSB-A:B:G:R)"},
+	{DRM_FORMAT_XBGR8888, 32, "32-bit RGBX (8:8:8:8 MSB-X:B:G:R)"},
+
+	{DRM_FORMAT_ARGB8888, 32, "32-bit BGRA (8:8:8:8 MSB-A:R:G:B)"},
+	{DRM_FORMAT_XRGB8888, 32, "32-bit BGRX (8:8:8:8 MSB-X:R:G:B)"},
+	{DRM_FORMAT_XBGR2101010, 32, "32-bit RGBX (2:10:10:10 MSB-X:B:G:R)"},
+	{DRM_FORMAT_XRGB2101010, 32, "32-bit BGRX (2:10:10:10 MSB-X:R:G:B)"},
+
+	/* non-supported format has bpp default to 0 */
+	{0, 0, NULL},
+};
+
+static int bdw_format_to_drm(int format)
+{
+	int bdw_pixel_formats_index = 6;
+
+	switch (format) {
+	case DISPPLANE_8BPP:
+		bdw_pixel_formats_index = 0;
+		break;
+	case DISPPLANE_BGRX565:
+		bdw_pixel_formats_index = 1;
+		break;
+	case DISPPLANE_BGRX888:
+		bdw_pixel_formats_index = 2;
+		break;
+	case DISPPLANE_RGBX101010:
+		bdw_pixel_formats_index = 3;
+		break;
+	case DISPPLANE_BGRX101010:
+		bdw_pixel_formats_index = 4;
+		break;
+	case DISPPLANE_RGBX888:
+		bdw_pixel_formats_index = 5;
+		break;
+
+	default:
+		break;
+	}
+
+	return bdw_pixel_formats_index;
+}
+
+static int skl_format_to_drm(int format, bool rgb_order, bool alpha,
+	int yuv_order)
+{
+	int skl_pixel_formats_index = 12;
+
+	switch (format) {
+	case PLANE_CTL_FORMAT_INDEXED:
+		skl_pixel_formats_index = 4;
+		break;
+	case PLANE_CTL_FORMAT_RGB_565:
+		skl_pixel_formats_index = 5;
+		break;
+	case PLANE_CTL_FORMAT_XRGB_8888:
+		if (rgb_order)
+			skl_pixel_formats_index = alpha ? 6 : 7;
+		else
+			skl_pixel_formats_index = alpha ? 8 : 9;
+		break;
+	case PLANE_CTL_FORMAT_XRGB_2101010:
+		skl_pixel_formats_index = rgb_order ? 10 : 11;
+		break;
+	case PLANE_CTL_FORMAT_YUV422:
+		skl_pixel_formats_index = yuv_order >> 16;
+		if (skl_pixel_formats_index > 3)
+			return -EINVAL;
+		break;
+
+	default:
+		break;
+	}
+
+	return skl_pixel_formats_index;
+}
+
+static u32 intel_vgpu_get_stride(struct intel_vgpu *vgpu, int pipe,
+	u32 tiled, int stride_mask, int bpp)
+{
+	struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
+
+	u32 stride_reg = vgpu_vreg(vgpu, DSPSTRIDE(pipe)) & stride_mask;
+	u32 stride = stride_reg;
+
+	if (IS_SKYLAKE(dev_priv)) {
+		switch (tiled) {
+		case PLANE_CTL_TILED_LINEAR:
+			stride = stride_reg * 64;
+			break;
+		case PLANE_CTL_TILED_X:
+			stride = stride_reg * 512;
+			break;
+		case PLANE_CTL_TILED_Y:
+			stride = stride_reg * 128;
+			break;
+		case PLANE_CTL_TILED_YF:
+			if (bpp == 8)
+				stride = stride_reg * 64;
+			else if (bpp == 16 || bpp == 32 || bpp == 64)
+				stride = stride_reg * 128;
+			else
+				gvt_dbg_core("skl: unsupported bpp:%d\n", bpp);
+			break;
+		default:
+			gvt_dbg_core("skl: unsupported tile format:%x\n",
+				tiled);
+		}
+	}
+
+	return stride;
+}
+
+static int get_active_pipe(struct intel_vgpu *vgpu)
+{
+	int i;
+
+	for (i = 0; i < I915_MAX_PIPES; i++)
+		if (pipe_is_enabled(vgpu, i))
+			break;
+
+	return i;
+}
+
+/**
+ * intel_vgpu_decode_primary_plane - Decode primary plane
+ * @vgpu: input vgpu
+ * @plane: primary plane to save decoded info
+ * This function is called for decoding plane
+ *
+ * Returns:
+ * 0 on success, non-zero if failed.
+ */
+int intel_vgpu_decode_primary_plane(struct intel_vgpu *vgpu,
+	struct intel_vgpu_primary_plane_format *plane)
+{
+	u32 val, fmt;
+	struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
+	int pipe;
+
+	pipe = get_active_pipe(vgpu);
+	if (pipe >= I915_MAX_PIPES)
+		return -ENODEV;
+
+	val = vgpu_vreg(vgpu, DSPCNTR(pipe));
+	plane->enabled = !!(val & DISPLAY_PLANE_ENABLE);
+	if (!plane->enabled)
+		return -ENODEV;
+
+	if (IS_SKYLAKE(dev_priv)) {
+		plane->tiled = (val & PLANE_CTL_TILED_MASK) >>
+		_PLANE_CTL_TILED_SHIFT;
+		fmt = skl_format_to_drm(
+			val & PLANE_CTL_FORMAT_MASK,
+			val & PLANE_CTL_ORDER_RGBX,
+			val & PLANE_CTL_ALPHA_MASK,
+			val & PLANE_CTL_YUV422_ORDER_MASK);
+		plane->bpp = skl_pixel_formats[fmt].bpp;
+		plane->drm_format = skl_pixel_formats[fmt].drm_format;
+	} else {
+		plane->tiled = !!(val & DISPPLANE_TILED);
+		fmt = bdw_format_to_drm(val & DISPPLANE_PIXFORMAT_MASK);
+		plane->bpp = bdw_pixel_formats[fmt].bpp;
+		plane->drm_format = bdw_pixel_formats[fmt].drm_format;
+	}
+
+	if (!plane->bpp) {
+		gvt_vgpu_err("Non-supported pixel format (0x%x)\n", fmt);
+		return -EINVAL;
+	}
+
+	plane->hw_format = fmt;
+
+	plane->base = vgpu_vreg(vgpu, DSPSURF(pipe)) & GTT_PAGE_MASK;
+	if (!intel_gvt_ggtt_validate_range(vgpu, plane->base, 0)) {
+		gvt_vgpu_err("invalid gma address: %lx\n",
+			     (unsigned long)plane->base);
+		return  -EINVAL;
+	}
+
+	plane->base_gpa = intel_vgpu_gma_to_gpa(vgpu->gtt.ggtt_mm, plane->base);
+	if (plane->base_gpa == INTEL_GVT_INVALID_ADDR) {
+		gvt_vgpu_err("invalid gma address: %lx\n",
+				(unsigned long)plane->base);
+		return  -EINVAL;
+	}
+
+	plane->stride = intel_vgpu_get_stride(vgpu, pipe, (plane->tiled << 10),
+		(IS_SKYLAKE(dev_priv)) ? (_PRI_PLANE_STRIDE_MASK >> 6) :
+		_PRI_PLANE_STRIDE_MASK, plane->bpp);
+
+	plane->width = (vgpu_vreg(vgpu, PIPESRC(pipe)) & _PIPE_H_SRCSZ_MASK) >>
+		_PIPE_H_SRCSZ_SHIFT;
+	plane->width += 1;
+	plane->height = (vgpu_vreg(vgpu, PIPESRC(pipe)) &
+			_PIPE_V_SRCSZ_MASK) >> _PIPE_V_SRCSZ_SHIFT;
+	plane->height += 1;	/* raw height is one minus the real value */
+
+	val = vgpu_vreg(vgpu, DSPTILEOFF(pipe));
+	plane->x_offset = (val & _PRI_PLANE_X_OFF_MASK) >>
+		_PRI_PLANE_X_OFF_SHIFT;
+	plane->y_offset = (val & _PRI_PLANE_Y_OFF_MASK) >>
+		_PRI_PLANE_Y_OFF_SHIFT;
+
+	return 0;
+}
+
+#define CURSOR_FORMAT_NUM	(1 << 6)
+struct cursor_mode_format {
+	int	drm_format;	/* Pixel format in DRM definition */
+	u8	bpp;		/* Bits per pixel; 0 indicates invalid */
+	u32	width;		/* In pixel */
+	u32	height;		/* In lines */
+	char	*desc;		/* The description */
+};
+
+static struct cursor_mode_format cursor_pixel_formats[] = {
+	{DRM_FORMAT_ARGB8888, 32, 128, 128, "128x128 32bpp ARGB"},
+	{DRM_FORMAT_ARGB8888, 32, 256, 256, "256x256 32bpp ARGB"},
+	{DRM_FORMAT_ARGB8888, 32, 64, 64, "64x64 32bpp ARGB"},
+	{DRM_FORMAT_ARGB8888, 32, 64, 64, "64x64 32bpp ARGB"},
+
+	/* non-supported format has bpp default to 0 */
+	{0, 0, 0, 0, NULL},
+};
+
+static int cursor_mode_to_drm(int mode)
+{
+	int cursor_pixel_formats_index = 4;
+
+	switch (mode) {
+	case CURSOR_MODE_128_ARGB_AX:
+		cursor_pixel_formats_index = 0;
+		break;
+	case CURSOR_MODE_256_ARGB_AX:
+		cursor_pixel_formats_index = 1;
+		break;
+	case CURSOR_MODE_64_ARGB_AX:
+		cursor_pixel_formats_index = 2;
+		break;
+	case CURSOR_MODE_64_32B_AX:
+		cursor_pixel_formats_index = 3;
+		break;
+
+	default:
+		break;
+	}
+
+	return cursor_pixel_formats_index;
+}
+
+/**
+ * intel_vgpu_decode_cursor_plane - Decode cursor plane
+ * @vgpu: input vgpu
+ * @plane: cursor plane to save decoded info
+ * This function is called for decoding plane
+ *
+ * Returns:
+ * 0 on success, non-zero if failed.
+ */
+int intel_vgpu_decode_cursor_plane(struct intel_vgpu *vgpu,
+	struct intel_vgpu_cursor_plane_format *plane)
+{
+	u32 val, mode, index;
+	u32 alpha_plane, alpha_force;
+	struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
+	int pipe;
+
+	pipe = get_active_pipe(vgpu);
+	if (pipe >= I915_MAX_PIPES)
+		return -ENODEV;
+
+	val = vgpu_vreg(vgpu, CURCNTR(pipe));
+	mode = val & CURSOR_MODE;
+	plane->enabled = (mode != CURSOR_MODE_DISABLE);
+	if (!plane->enabled)
+		return -ENODEV;
+
+	index = cursor_mode_to_drm(mode);
+
+	if (!cursor_pixel_formats[index].bpp) {
+		gvt_vgpu_err("Non-supported cursor mode (0x%x)\n", mode);
+		return -EINVAL;
+	}
+	plane->mode = mode;
+	plane->bpp = cursor_pixel_formats[index].bpp;
+	plane->drm_format = cursor_pixel_formats[index].drm_format;
+	plane->width = cursor_pixel_formats[index].width;
+	plane->height = cursor_pixel_formats[index].height;
+
+	alpha_plane = (val & _CURSOR_ALPHA_PLANE_MASK) >>
+				_CURSOR_ALPHA_PLANE_SHIFT;
+	alpha_force = (val & _CURSOR_ALPHA_FORCE_MASK) >>
+				_CURSOR_ALPHA_FORCE_SHIFT;
+	if (alpha_plane || alpha_force)
+		gvt_dbg_core("alpha_plane=0x%x, alpha_force=0x%x\n",
+			alpha_plane, alpha_force);
+
+	plane->base = vgpu_vreg(vgpu, CURBASE(pipe)) & GTT_PAGE_MASK;
+	if (!intel_gvt_ggtt_validate_range(vgpu, plane->base, 0)) {
+		gvt_vgpu_err("invalid gma address: %lx\n",
+			     (unsigned long)plane->base);
+		return  -EINVAL;
+	}
+
+	plane->base_gpa = intel_vgpu_gma_to_gpa(vgpu->gtt.ggtt_mm, plane->base);
+	if (plane->base_gpa == INTEL_GVT_INVALID_ADDR) {
+		gvt_vgpu_err("invalid gma address: %lx\n",
+				(unsigned long)plane->base);
+		return  -EINVAL;
+	}
+
+	val = vgpu_vreg(vgpu, CURPOS(pipe));
+	plane->x_pos = (val & _CURSOR_POS_X_MASK) >> _CURSOR_POS_X_SHIFT;
+	plane->x_sign = (val & _CURSOR_SIGN_X_MASK) >> _CURSOR_SIGN_X_SHIFT;
+	plane->y_pos = (val & _CURSOR_POS_Y_MASK) >> _CURSOR_POS_Y_SHIFT;
+	plane->y_sign = (val & _CURSOR_SIGN_Y_MASK) >> _CURSOR_SIGN_Y_SHIFT;
+
+	return 0;
+}
+
+#define SPRITE_FORMAT_NUM	(1 << 3)
+
+static struct pixel_format sprite_pixel_formats[SPRITE_FORMAT_NUM] = {
+	[0x0] = {DRM_FORMAT_YUV422, 16, "YUV 16-bit 4:2:2 packed"},
+	[0x1] = {DRM_FORMAT_XRGB2101010, 32, "RGB 32-bit 2:10:10:10"},
+	[0x2] = {DRM_FORMAT_XRGB8888, 32, "RGB 32-bit 8:8:8:8"},
+	[0x4] = {DRM_FORMAT_AYUV, 32,
+		"YUV 32-bit 4:4:4 packed (8:8:8:8 MSB-X:Y:U:V)"},
+};
+
+/**
+ * intel_vgpu_decode_sprite_plane - Decode sprite plane
+ * @vgpu: input vgpu
+ * @plane: sprite plane to save decoded info
+ * This function is called for decoding plane
+ *
+ * Returns:
+ * 0 on success, non-zero if failed.
+ */
+int intel_vgpu_decode_sprite_plane(struct intel_vgpu *vgpu,
+	struct intel_vgpu_sprite_plane_format *plane)
+{
+	u32 val, fmt;
+	u32 color_order, yuv_order;
+	int drm_format;
+	int pipe;
+
+	pipe = get_active_pipe(vgpu);
+	if (pipe >= I915_MAX_PIPES)
+		return -ENODEV;
+
+	val = vgpu_vreg(vgpu, SPRCTL(pipe));
+	plane->enabled = !!(val & SPRITE_ENABLE);
+	if (!plane->enabled)
+		return -ENODEV;
+
+	plane->tiled = !!(val & SPRITE_TILED);
+	color_order = !!(val & SPRITE_RGB_ORDER_RGBX);
+	yuv_order = (val & SPRITE_YUV_BYTE_ORDER_MASK) >>
+				_SPRITE_YUV_ORDER_SHIFT;
+
+	fmt = (val & SPRITE_PIXFORMAT_MASK) >> _SPRITE_FMT_SHIFT;
+	if (!sprite_pixel_formats[fmt].bpp) {
+		gvt_vgpu_err("Non-supported pixel format (0x%x)\n", fmt);
+		return -EINVAL;
+	}
+	plane->hw_format = fmt;
+	plane->bpp = sprite_pixel_formats[fmt].bpp;
+	drm_format = sprite_pixel_formats[fmt].drm_format;
+
+	/* Order of RGB values in an RGBxxx buffer may be ordered RGB or
+	 * BGR depending on the state of the color_order field
+	 */
+	if (!color_order) {
+		if (drm_format == DRM_FORMAT_XRGB2101010)
+			drm_format = DRM_FORMAT_XBGR2101010;
+		else if (drm_format == DRM_FORMAT_XRGB8888)
+			drm_format = DRM_FORMAT_XBGR8888;
+	}
+
+	if (drm_format == DRM_FORMAT_YUV422) {
+		switch (yuv_order) {
+		case 0:
+			drm_format = DRM_FORMAT_YUYV;
+			break;
+		case 1:
+			drm_format = DRM_FORMAT_UYVY;
+			break;
+		case 2:
+			drm_format = DRM_FORMAT_YVYU;
+			break;
+		case 3:
+			drm_format = DRM_FORMAT_VYUY;
+			break;
+		default:
+			/* yuv_order has only 2 bits */
+			break;
+		}
+	}
+
+	plane->drm_format = drm_format;
+
+	plane->base = vgpu_vreg(vgpu, SPRSURF(pipe)) & GTT_PAGE_MASK;
+	if (!intel_gvt_ggtt_validate_range(vgpu, plane->base, 0)) {
+		gvt_vgpu_err("invalid gma address: %lx\n",
+			     (unsigned long)plane->base);
+		return  -EINVAL;
+	}
+
+	plane->base_gpa = intel_vgpu_gma_to_gpa(vgpu->gtt.ggtt_mm, plane->base);
+	if (plane->base_gpa == INTEL_GVT_INVALID_ADDR) {
+		gvt_vgpu_err("invalid gma address: %lx\n",
+				(unsigned long)plane->base);
+		return  -EINVAL;
+	}
+
+	plane->stride = vgpu_vreg(vgpu, SPRSTRIDE(pipe)) &
+				_SPRITE_STRIDE_MASK;
+
+	val = vgpu_vreg(vgpu, SPRSIZE(pipe));
+	plane->height = (val & _SPRITE_SIZE_HEIGHT_MASK) >>
+		_SPRITE_SIZE_HEIGHT_SHIFT;
+	plane->width = (val & _SPRITE_SIZE_WIDTH_MASK) >>
+		_SPRITE_SIZE_WIDTH_SHIFT;
+	plane->height += 1;	/* raw height is one minus the real value */
+	plane->width += 1;	/* raw width is one minus the real value */
+
+	val = vgpu_vreg(vgpu, SPRPOS(pipe));
+	plane->x_pos = (val & _SPRITE_POS_X_MASK) >> _SPRITE_POS_X_SHIFT;
+	plane->y_pos = (val & _SPRITE_POS_Y_MASK) >> _SPRITE_POS_Y_SHIFT;
+
+	val = vgpu_vreg(vgpu, SPROFFSET(pipe));
+	plane->x_offset = (val & _SPRITE_OFFSET_START_X_MASK) >>
+			   _SPRITE_OFFSET_START_X_SHIFT;
+	plane->y_offset = (val & _SPRITE_OFFSET_START_Y_MASK) >>
+			   _SPRITE_OFFSET_START_Y_SHIFT;
+
+	return 0;
+}
diff --git a/drivers/gpu/drm/i915/gvt/fb_decoder.h b/drivers/gpu/drm/i915/gvt/fb_decoder.h
new file mode 100644
index 0000000..cb055f3
--- /dev/null
+++ b/drivers/gpu/drm/i915/gvt/fb_decoder.h
@@ -0,0 +1,169 @@
+/*
+ * Copyright(c) 2011-2016 Intel Corporation. All rights reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * Authors:
+ *    Kevin Tian <kevin.tian@intel.com>
+ *
+ * Contributors:
+ *    Bing Niu <bing.niu@intel.com>
+ *    Xu Han <xu.han@intel.com>
+ *    Ping Gao <ping.a.gao@intel.com>
+ *    Xiaoguang Chen <xiaoguang.chen@intel.com>
+ *    Yang Liu <yang2.liu@intel.com>
+ *    Tina Zhang <tina.zhang@intel.com>
+ *
+ */
+
+#ifndef _GVT_FB_DECODER_H_
+#define _GVT_FB_DECODER_H_
+
+#define _PLANE_CTL_FORMAT_SHIFT		24
+#define _PLANE_CTL_TILED_SHIFT		10
+#define _PIPE_V_SRCSZ_SHIFT		0
+#define _PIPE_V_SRCSZ_MASK		(0xfff << _PIPE_V_SRCSZ_SHIFT)
+#define _PIPE_H_SRCSZ_SHIFT		16
+#define _PIPE_H_SRCSZ_MASK		(0x1fff << _PIPE_H_SRCSZ_SHIFT)
+
+#define _PRI_PLANE_FMT_SHIFT		26
+#define _PRI_PLANE_STRIDE_MASK		(0x3ff << 6)
+#define _PRI_PLANE_X_OFF_SHIFT		0
+#define _PRI_PLANE_X_OFF_MASK		(0x1fff << _PRI_PLANE_X_OFF_SHIFT)
+#define _PRI_PLANE_Y_OFF_SHIFT		16
+#define _PRI_PLANE_Y_OFF_MASK		(0xfff << _PRI_PLANE_Y_OFF_SHIFT)
+
+#define _CURSOR_MODE			0x3f
+#define _CURSOR_ALPHA_FORCE_SHIFT	8
+#define _CURSOR_ALPHA_FORCE_MASK	(0x3 << _CURSOR_ALPHA_FORCE_SHIFT)
+#define _CURSOR_ALPHA_PLANE_SHIFT	10
+#define _CURSOR_ALPHA_PLANE_MASK	(0x3 << _CURSOR_ALPHA_PLANE_SHIFT)
+#define _CURSOR_POS_X_SHIFT		0
+#define _CURSOR_POS_X_MASK		(0x1fff << _CURSOR_POS_X_SHIFT)
+#define _CURSOR_SIGN_X_SHIFT		15
+#define _CURSOR_SIGN_X_MASK		(1 << _CURSOR_SIGN_X_SHIFT)
+#define _CURSOR_POS_Y_SHIFT		16
+#define _CURSOR_POS_Y_MASK		(0xfff << _CURSOR_POS_Y_SHIFT)
+#define _CURSOR_SIGN_Y_SHIFT		31
+#define _CURSOR_SIGN_Y_MASK		(1 << _CURSOR_SIGN_Y_SHIFT)
+
+#define _SPRITE_FMT_SHIFT		25
+#define _SPRITE_COLOR_ORDER_SHIFT	20
+#define _SPRITE_YUV_ORDER_SHIFT		16
+#define _SPRITE_STRIDE_SHIFT		6
+#define _SPRITE_STRIDE_MASK		(0x1ff << _SPRITE_STRIDE_SHIFT)
+#define _SPRITE_SIZE_WIDTH_SHIFT	0
+#define _SPRITE_SIZE_HEIGHT_SHIFT	16
+#define _SPRITE_SIZE_WIDTH_MASK		(0x1fff << _SPRITE_SIZE_WIDTH_SHIFT)
+#define _SPRITE_SIZE_HEIGHT_MASK	(0xfff << _SPRITE_SIZE_HEIGHT_SHIFT)
+#define _SPRITE_POS_X_SHIFT		0
+#define _SPRITE_POS_Y_SHIFT		16
+#define _SPRITE_POS_X_MASK		(0x1fff << _SPRITE_POS_X_SHIFT)
+#define _SPRITE_POS_Y_MASK		(0xfff << _SPRITE_POS_Y_SHIFT)
+#define _SPRITE_OFFSET_START_X_SHIFT	0
+#define _SPRITE_OFFSET_START_Y_SHIFT	16
+#define _SPRITE_OFFSET_START_X_MASK	(0x1fff << _SPRITE_OFFSET_START_X_SHIFT)
+#define _SPRITE_OFFSET_START_Y_MASK	(0xfff << _SPRITE_OFFSET_START_Y_SHIFT)
+
+enum GVT_FB_EVENT {
+	FB_MODE_SET_START = 1,
+	FB_MODE_SET_END,
+	FB_DISPLAY_FLIP,
+};
+
+enum DDI_PORT {
+	DDI_PORT_NONE	= 0,
+	DDI_PORT_B	= 1,
+	DDI_PORT_C	= 2,
+	DDI_PORT_D	= 3,
+	DDI_PORT_E	= 4
+};
+
+struct intel_gvt;
+
+/* color space conversion and gamma correction are not included */
+struct intel_vgpu_primary_plane_format {
+	u8	enabled;	/* plane is enabled */
+	u8	tiled;		/* X-tiled */
+	u8	bpp;		/* bits per pixel */
+	u32	hw_format;	/* format field in the PRI_CTL register */
+	u32	drm_format;	/* format in DRM definition */
+	u32	base;		/* framebuffer base in graphics memory */
+	u64     base_gpa;
+	u32	x_offset;	/* in pixels */
+	u32	y_offset;	/* in lines */
+	u32	width;		/* in pixels */
+	u32	height;		/* in lines */
+	u32	stride;		/* in bytes */
+};
+
+struct intel_vgpu_sprite_plane_format {
+	u8	enabled;	/* plane is enabled */
+	u8	tiled;		/* X-tiled */
+	u8	bpp;		/* bits per pixel */
+	u32	hw_format;	/* format field in the SPR_CTL register */
+	u32	drm_format;	/* format in DRM definition */
+	u32	base;		/* sprite base in graphics memory */
+	u64     base_gpa;
+	u32	x_pos;		/* in pixels */
+	u32	y_pos;		/* in lines */
+	u32	x_offset;	/* in pixels */
+	u32	y_offset;	/* in lines */
+	u32	width;		/* in pixels */
+	u32	height;		/* in lines */
+	u32	stride;		/* in bytes */
+};
+
+struct intel_vgpu_cursor_plane_format {
+	u8	enabled;
+	u8	mode;		/* cursor mode select */
+	u8	bpp;		/* bits per pixel */
+	u32	drm_format;	/* format in DRM definition */
+	u32	base;		/* cursor base in graphics memory */
+	u64     base_gpa;
+	u32	x_pos;		/* in pixels */
+	u32	y_pos;		/* in lines */
+	u8	x_sign;		/* X Position Sign */
+	u8	y_sign;		/* Y Position Sign */
+	u32	width;		/* in pixels */
+	u32	height;		/* in lines */
+	u32	x_hot;		/* in pixels */
+	u32	y_hot;		/* in pixels */
+};
+
+struct intel_vgpu_pipe_format {
+	struct intel_vgpu_primary_plane_format	primary;
+	struct intel_vgpu_sprite_plane_format	sprite;
+	struct intel_vgpu_cursor_plane_format	cursor;
+	enum DDI_PORT ddi_port;  /* the DDI port that pipe is connected to */
+};
+
+struct intel_vgpu_fb_format {
+	struct intel_vgpu_pipe_format	pipes[I915_MAX_PIPES];
+};
+
+int intel_vgpu_decode_primary_plane(struct intel_vgpu *vgpu,
+	struct intel_vgpu_primary_plane_format *plane);
+int intel_vgpu_decode_cursor_plane(struct intel_vgpu *vgpu,
+	struct intel_vgpu_cursor_plane_format *plane);
+int intel_vgpu_decode_sprite_plane(struct intel_vgpu *vgpu,
+	struct intel_vgpu_sprite_plane_format *plane);
+
+#endif
diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
index 4c1a442..0f562ee 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.h
+++ b/drivers/gpu/drm/i915/gvt/gvt.h
@@ -46,6 +46,7 @@
 #include "sched_policy.h"
 #include "render.h"
 #include "cmd_parser.h"
+#include "fb_decoder.h"
 
 #define GVT_MAX_VGPU 8
 
-- 
2.7.4

* [PATCH v15 2/7] drm: Introduce RGB 64-bit 16:16:16:16 float format
  2017-10-10  9:50 [PATCH v15 0/7] drm/i915/gvt: Dma-buf support for GVT-g Tina Zhang
@ 2017-10-10  9:50   ` Tina Zhang
  2017-10-10  9:50   ` Tina Zhang
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 31+ messages in thread
From: Tina Zhang @ 2017-10-10  9:50 UTC (permalink / raw)
  To: alex.williamson, kraxel, chris, zhenyuw, zhiyuan.lv, zhi.a.wang,
	kevin.tian, daniel, kwankhede
  Cc: Tina Zhang, intel-gfx, intel-gvt-dev, linux-kernel,
	Ville Syrjälä,
	Dave Airlie, Daniel Vetter

The RGB 64-bit 16:16:16:16 float pixel format is needed by some
applications on Windows. Each component is a 16-bit float laid out as
1:5:10 MSB sign:exponent:fraction.

This patch introduces the format to DRM, so that a Windows guest
framebuffer in this format can be recognized and used by the Linux
host.
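
As a reference, the per-component decoding of that 1:5:10 layout (the
standard IEEE 754 half-precision encoding) can be sketched in userspace
as follows; this is illustrative only and not part of the patch:

	#include <math.h>

	static float fp16_component_to_float(unsigned short h)
	{
		unsigned int sign = (h >> 15) & 0x1;
		unsigned int exp  = (h >> 10) & 0x1f;
		unsigned int frac = h & 0x3ff;
		float v;

		if (exp == 0)		/* zero or subnormal */
			v = ldexpf((float)frac / 1024.0f, -14);
		else if (exp == 0x1f)	/* infinity or NaN */
			v = frac ? NAN : INFINITY;
		else			/* normal: 1.frac * 2^(exp - 15) */
			v = ldexpf(1.0f + (float)frac / 1024.0f, (int)exp - 15);

		return sign ? -v : v;
	}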

v14:
- add some details about the float pixel format. (Daniel)
- add F suffix to the defined name. (Daniel)

v12:
- send to dri-devel@lists.freedesktop.org. (Ville)

v9:
- separated from framebuffer decoder patch. (Zhenyu) (Xiaoguang)

Signed-off-by: Tina Zhang <tina.zhang@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 include/uapi/drm/drm_fourcc.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/uapi/drm/drm_fourcc.h b/include/uapi/drm/drm_fourcc.h
index 76c9101..575014f 100644
--- a/include/uapi/drm/drm_fourcc.h
+++ b/include/uapi/drm/drm_fourcc.h
@@ -113,6 +113,10 @@ extern "C" {
 
 #define DRM_FORMAT_AYUV		fourcc_code('A', 'Y', 'U', 'V') /* [31:0] A:Y:Cb:Cr 8:8:8:8 little endian */
 
+/* 64 bpp RGB 16:16:16:16 Floating Point */
+#define DRM_FORMAT_XRGB161616F  fourcc_code('X', 'R', '3', 'F') /* [63:0] x:R:G:B 16:16:16:16 little endian */
+#define DRM_FORMAT_XBGR161616F  fourcc_code('X', 'B', '3', 'F') /* [63:0] x:B:G:R 16:16:16:16 little endian */
+
 /*
  * 2 plane RGB + A
  * index 0 = RGB plane, same format as the corresponding non _A8 format has
-- 
2.7.4

* [PATCH v15 3/7] drm/i915/gvt: Add RGB 64-bit 16:16:16:16 float format
  2017-10-10  9:50 [PATCH v15 0/7] drm/i915/gvt: Dma-buf support for GVT-g Tina Zhang
@ 2017-10-10  9:50   ` Tina Zhang
  2017-10-10  9:50   ` Tina Zhang
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 31+ messages in thread
From: Tina Zhang @ 2017-10-10  9:50 UTC (permalink / raw)
  To: alex.williamson, kraxel, chris, zhenyuw, zhiyuan.lv, zhi.a.wang,
	kevin.tian, daniel, kwankhede
  Cc: Tina Zhang, intel-gfx, intel-gvt-dev, linux-kernel

The RGB 64-bit 16:16:16:16 float pixel format is needed by Windows 10
guest VMs. This patch adds support for this pixel format to the GVT
device model. Without it, some applications, e.g. "DXGIGammaVM.exe",
crash and leave the guest screen black.

Signed-off-by: Tina Zhang <tina.zhang@intel.com>
---
 drivers/gpu/drm/i915/gvt/fb_decoder.c | 28 +++++++++++++++++++++-------
 1 file changed, 21 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/fb_decoder.c b/drivers/gpu/drm/i915/gvt/fb_decoder.c
index d380ab8..49053f4 100644
--- a/drivers/gpu/drm/i915/gvt/fb_decoder.c
+++ b/drivers/gpu/drm/i915/gvt/fb_decoder.c
@@ -52,6 +52,8 @@ static struct pixel_format bdw_pixel_formats[] = {
 
 	{DRM_FORMAT_XRGB2101010, 32, "32-bit BGRX (2:10:10:10 MSB-X:R:G:B)"},
 	{DRM_FORMAT_XBGR8888, 32, "32-bit RGBX (8:8:8:8 MSB-X:B:G:R)"},
+	{DRM_FORMAT_XRGB161616F, 64,
+		"64-bit XRGB Floating Point (16:16:16:16 MSB-X:R:G:B)"},
 
 	/* non-supported format has bpp default to 0 */
 	{0, 0, NULL},
@@ -73,13 +75,18 @@ static struct pixel_format skl_pixel_formats[] = {
 	{DRM_FORMAT_XBGR2101010, 32, "32-bit RGBX (2:10:10:10 MSB-X:B:G:R)"},
 	{DRM_FORMAT_XRGB2101010, 32, "32-bit BGRX (2:10:10:10 MSB-X:R:G:B)"},
 
+	{DRM_FORMAT_XRGB161616F, 64,
+		"64-bit XRGB Floating Point (16:16:16:16 MSB-X:R:G:B)"},
+	{DRM_FORMAT_XBGR161616F, 64,
+		"64-bit XBGR Floating Point (16:16:16:16 MSB-X:B:G:R)"},
+
 	/* non-supported format has bpp default to 0 */
 	{0, 0, NULL},
 };
 
 static int bdw_format_to_drm(int format)
 {
-	int bdw_pixel_formats_index = 6;
+	int bdw_pixel_formats_index = 7;
 
 	switch (format) {
 	case DISPPLANE_8BPP:
@@ -100,7 +107,9 @@ static int bdw_format_to_drm(int format)
 	case DISPPLANE_RGBX888:
 		bdw_pixel_formats_index = 5;
 		break;
-
+	case DISPPLANE_RGBX161616:
+		bdw_pixel_formats_index = 6;
+		break;
 	default:
 		break;
 	}
@@ -111,7 +120,7 @@ static int bdw_format_to_drm(int format)
 static int skl_format_to_drm(int format, bool rgb_order, bool alpha,
 	int yuv_order)
 {
-	int skl_pixel_formats_index = 12;
+	int skl_pixel_formats_index = 14;
 
 	switch (format) {
 	case PLANE_CTL_FORMAT_INDEXED:
@@ -129,6 +138,9 @@ static int skl_format_to_drm(int format, bool rgb_order, bool alpha,
 	case PLANE_CTL_FORMAT_XRGB_2101010:
 		skl_pixel_formats_index = rgb_order ? 10 : 11;
 		break;
+	case PLANE_CTL_FORMAT_XRGB_16161616F:
+		skl_pixel_formats_index = rgb_order ? 12 : 13;
+		break;
 	case PLANE_CTL_FORMAT_YUV422:
 		skl_pixel_formats_index = yuv_order >> 16;
 		if (skl_pixel_formats_index > 3)
@@ -389,11 +401,13 @@ int intel_vgpu_decode_cursor_plane(struct intel_vgpu *vgpu,
 #define SPRITE_FORMAT_NUM	(1 << 3)
 
 static struct pixel_format sprite_pixel_formats[SPRITE_FORMAT_NUM] = {
-	[0x0] = {DRM_FORMAT_YUV422, 16, "YUV 16-bit 4:2:2 packed"},
-	[0x1] = {DRM_FORMAT_XRGB2101010, 32, "RGB 32-bit 2:10:10:10"},
-	[0x2] = {DRM_FORMAT_XRGB8888, 32, "RGB 32-bit 8:8:8:8"},
+	[0x0] = {DRM_FORMAT_YUV422, 16, "16-bit YUV 4:2:2 packed"},
+	[0x1] = {DRM_FORMAT_XRGB2101010, 32, "32-bit BGRX (2:10:10:10 MSB-X:R:G:B)"},
+	[0x2] = {DRM_FORMAT_XRGB8888, 32, "32-bit RGBX (8:8:8:8 MSB-X:B:G:R)"},
+	[0x3]  = {DRM_FORMAT_XRGB161616F, 64,
+		    "64-bit XRGB Floating Point (16:16:16:16 MSB-X:R:G:B)"},
 	[0x4] = {DRM_FORMAT_AYUV, 32,
-		"YUV 32-bit 4:4:4 packed (8:8:8:8 MSB-X:Y:U:V)"},
+		"32-bit YUV 4:4:4 packed (8:8:8:8 MSB-X:Y:U:V)"},
 };
 
 /**
-- 
2.7.4

* [PATCH v15 4/7] drm/i915/gvt: Add opregion support
  2017-10-10  9:50 [PATCH v15 0/7] drm/i915/gvt: Dma-buf support for GVT-g Tina Zhang
@ 2017-10-10  9:50   ` Tina Zhang
  2017-10-10  9:50   ` Tina Zhang
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 31+ messages in thread
From: Tina Zhang @ 2017-10-10  9:50 UTC (permalink / raw)
  To: alex.williamson, kraxel, chris, zhenyuw, zhiyuan.lv, zhi.a.wang,
	kevin.tian, daniel, kwankhede
  Cc: Tina Zhang, intel-gfx, intel-gvt-dev, linux-kernel, Bing Niu

The Windows guest driver needs the VBT in the OpRegion to configure the
display settings. Without OpRegion support, the display registers are
not programmed, which prevents the display model from getting the
correct information about the guest display plane.

This patch provides a virtual OpRegion for the guest. The current
implementation fills the virtual OpRegion with the content of the
host's OpRegion. The original author of this patch is Xiaoguang Chen.
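
For context, a minimal userspace sketch of reading the exposed region
(illustrative only; it assumes the OpRegion is the first device-specific
region registered by kvmgt, whereas a real consumer should walk the
VFIO_REGION_INFO_FLAG_CAPS capability chain and match the
VFIO_REGION_TYPE_PCI_VENDOR_TYPE / VFIO_REGION_SUBTYPE_INTEL_IGD_OPREGION
type/subtype):

	#include <stdlib.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/vfio.h>

	static void *read_igd_opregion(int device_fd, size_t *len)
	{
		struct vfio_region_info info = {
			.argsz = sizeof(info),
			.index = VFIO_PCI_NUM_REGIONS,	/* first extra region */
		};
		void *buf;

		if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &info))
			return NULL;

		buf = malloc(info.size);
		if (!buf)
			return NULL;

		/* The data starts with the "IntelGraphicsMem" signature. */
		if (pread(device_fd, buf, info.size, info.offset) !=
				(ssize_t)info.size) {
			free(buf);
			return NULL;
		}

		*len = info.size;
		return buf;
	}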

Signed-off-by: Bing Niu <bing.niu@intel.com>
Signed-off-by: Tina Zhang <tina.zhang@intel.com>
---
 drivers/gpu/drm/i915/gvt/hypercall.h |   1 +
 drivers/gpu/drm/i915/gvt/kvmgt.c     | 109 ++++++++++++++++++++++++++++++++++-
 drivers/gpu/drm/i915/gvt/mpt.h       |  15 +++++
 drivers/gpu/drm/i915/gvt/opregion.c  |  26 +++++++--
 drivers/gpu/drm/i915/gvt/vgpu.c      |   4 ++
 5 files changed, 146 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/hypercall.h b/drivers/gpu/drm/i915/gvt/hypercall.h
index df7f33a..32c345c 100644
--- a/drivers/gpu/drm/i915/gvt/hypercall.h
+++ b/drivers/gpu/drm/i915/gvt/hypercall.h
@@ -55,6 +55,7 @@ struct intel_gvt_mpt {
 			      unsigned long mfn, unsigned int nr, bool map);
 	int (*set_trap_area)(unsigned long handle, u64 start, u64 end,
 			     bool map);
+	int (*set_opregion)(void *vgpu);
 };
 
 extern struct intel_gvt_mpt xengt_mpt;
diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
index fd0c85f..6b0a330 100644
--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
+++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
@@ -53,11 +53,23 @@ static const struct intel_gvt_ops *intel_gvt_ops;
 #define VFIO_PCI_INDEX_TO_OFFSET(index) ((u64)(index) << VFIO_PCI_OFFSET_SHIFT)
 #define VFIO_PCI_OFFSET_MASK    (((u64)(1) << VFIO_PCI_OFFSET_SHIFT) - 1)
 
+#define OPREGION_SIGNATURE "IntelGraphicsMem"
+
+struct vfio_region;
+struct intel_vgpu_regops {
+	size_t (*rw)(struct intel_vgpu *vgpu, char *buf,
+			size_t count, loff_t *ppos, bool iswrite);
+	void (*release)(struct intel_vgpu *vgpu,
+			struct vfio_region *region);
+};
+
 struct vfio_region {
 	u32				type;
 	u32				subtype;
 	size_t				size;
 	u32				flags;
+	const struct intel_vgpu_regops	*ops;
+	void				*data;
 };
 
 struct kvmgt_pgfn {
@@ -430,6 +442,91 @@ static void kvmgt_protect_table_del(struct kvmgt_guest_info *info,
 	}
 }
 
+static size_t intel_vgpu_reg_rw_opregion(struct intel_vgpu *vgpu, char *buf,
+		size_t count, loff_t *ppos, bool iswrite)
+{
+	unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) -
+			VFIO_PCI_NUM_REGIONS;
+	void *base = vgpu->vdev.region[i].data;
+	loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK;
+
+	if (pos >= vgpu->vdev.region[i].size || iswrite) {
+		gvt_vgpu_err("invalid op or offset for Intel vgpu OpRegion\n");
+		return -EINVAL;
+	}
+	count = min(count, (size_t)(vgpu->vdev.region[i].size - pos));
+	memcpy(buf, base + pos, count);
+
+	return count;
+}
+
+static void intel_vgpu_reg_release_opregion(struct intel_vgpu *vgpu,
+		struct vfio_region *region)
+{
+	memunmap(region->data);
+}
+
+static const struct intel_vgpu_regops intel_vgpu_regops_opregion = {
+	.rw = intel_vgpu_reg_rw_opregion,
+	.release = intel_vgpu_reg_release_opregion,
+};
+
+static int intel_vgpu_register_reg(struct intel_vgpu *vgpu,
+		unsigned int type, unsigned int subtype,
+		const struct intel_vgpu_regops *ops,
+		size_t size, u32 flags, void *data)
+{
+	struct vfio_region *region;
+
+	region = krealloc(vgpu->vdev.region,
+			(vgpu->vdev.num_regions + 1) * sizeof(*region),
+			GFP_KERNEL);
+	if (!region)
+		return -ENOMEM;
+
+	vgpu->vdev.region = region;
+	vgpu->vdev.region[vgpu->vdev.num_regions].type = type;
+	vgpu->vdev.region[vgpu->vdev.num_regions].subtype = subtype;
+	vgpu->vdev.region[vgpu->vdev.num_regions].ops = ops;
+	vgpu->vdev.region[vgpu->vdev.num_regions].size = size;
+	vgpu->vdev.region[vgpu->vdev.num_regions].flags = flags;
+	vgpu->vdev.region[vgpu->vdev.num_regions].data = data;
+	vgpu->vdev.num_regions++;
+
+	return 0;
+}
+
+static int kvmgt_set_opregion(void *p_vgpu)
+{
+	struct intel_vgpu *vgpu = (struct intel_vgpu *)p_vgpu;
+	unsigned int addr;
+	void *base;
+	int ret;
+
+	addr = vgpu->gvt->opregion.opregion_pa;
+	if (!addr || !(~addr))
+		return -ENODEV;
+
+	base = memremap(addr, OPREGION_SIZE, MEMREMAP_WB);
+	if (!base)
+		return -ENOMEM;
+
+	if (memcmp(base, OPREGION_SIGNATURE, 16)) {
+		memunmap(base);
+		return -EINVAL;
+	}
+
+	ret = intel_vgpu_register_reg(vgpu,
+			PCI_VENDOR_ID_INTEL | VFIO_REGION_TYPE_PCI_VENDOR_TYPE,
+			VFIO_REGION_SUBTYPE_INTEL_IGD_OPREGION,
+			&intel_vgpu_regops_opregion, OPREGION_SIZE,
+			VFIO_REGION_INFO_FLAG_READ, base);
+	if (ret)
+		memunmap(base);
+
+	return ret;
+}
+
 static int intel_vgpu_create(struct kobject *kobj, struct mdev_device *mdev)
 {
 	struct intel_vgpu *vgpu = NULL;
@@ -646,7 +743,7 @@ static ssize_t intel_vgpu_rw(struct mdev_device *mdev, char *buf,
 	int ret = -EINVAL;
 
 
-	if (index >= VFIO_PCI_NUM_REGIONS) {
+	if (index >= VFIO_PCI_NUM_REGIONS + vgpu->vdev.num_regions) {
 		gvt_vgpu_err("invalid index: %u\n", index);
 		return -EINVAL;
 	}
@@ -680,8 +777,11 @@ static ssize_t intel_vgpu_rw(struct mdev_device *mdev, char *buf,
 	case VFIO_PCI_BAR5_REGION_INDEX:
 	case VFIO_PCI_VGA_REGION_INDEX:
 	case VFIO_PCI_ROM_REGION_INDEX:
+		break;
 	default:
-		gvt_vgpu_err("unsupported region: %u\n", index);
+		index -= VFIO_PCI_NUM_REGIONS;
+		return vgpu->vdev.region[index].ops->rw(vgpu, buf, count,
+				ppos, is_write);
 	}
 
 	return ret == 0 ? count : ret;
@@ -944,7 +1044,8 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
 
 		info.flags = VFIO_DEVICE_FLAGS_PCI;
 		info.flags |= VFIO_DEVICE_FLAGS_RESET;
-		info.num_regions = VFIO_PCI_NUM_REGIONS;
+		info.num_regions = VFIO_PCI_NUM_REGIONS +
+				vgpu->vdev.num_regions;
 		info.num_irqs = VFIO_PCI_NUM_IRQS;
 
 		return copy_to_user((void __user *)arg, &info, minsz) ?
@@ -1065,6 +1166,7 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
 		}
 
 		if (caps.size) {
+			info.flags |= VFIO_REGION_INFO_FLAG_CAPS;
 			if (info.argsz < sizeof(info) + caps.size) {
 				info.argsz = sizeof(info) + caps.size;
 				info.cap_offset = 0;
@@ -1513,6 +1615,7 @@ struct intel_gvt_mpt kvmgt_mpt = {
 	.read_gpa = kvmgt_read_gpa,
 	.write_gpa = kvmgt_write_gpa,
 	.gfn_to_mfn = kvmgt_gfn_to_pfn,
+	.set_opregion = kvmgt_set_opregion,
 };
 EXPORT_SYMBOL_GPL(kvmgt_mpt);
 
diff --git a/drivers/gpu/drm/i915/gvt/mpt.h b/drivers/gpu/drm/i915/gvt/mpt.h
index f0e5487..9e73f2e 100644
--- a/drivers/gpu/drm/i915/gvt/mpt.h
+++ b/drivers/gpu/drm/i915/gvt/mpt.h
@@ -292,4 +292,19 @@ static inline int intel_gvt_hypervisor_set_trap_area(
 	return intel_gvt_host.mpt->set_trap_area(vgpu->handle, start, end, map);
 }
 
+/**
+ * intel_gvt_hypervisor_set_opregion - Set opregion for guest
+ * @vgpu: a vGPU
+ *
+ * Returns:
+ * Zero on success, negative error code if failed.
+ */
+static inline int intel_gvt_hypervisor_set_opregion(struct intel_vgpu *vgpu)
+{
+	if (!intel_gvt_host.mpt->set_opregion)
+		return 0;
+
+	return intel_gvt_host.mpt->set_opregion(vgpu);
+}
+
 #endif /* _GVT_MPT_H_ */
diff --git a/drivers/gpu/drm/i915/gvt/opregion.c b/drivers/gpu/drm/i915/gvt/opregion.c
index 3117991..04c0452 100644
--- a/drivers/gpu/drm/i915/gvt/opregion.c
+++ b/drivers/gpu/drm/i915/gvt/opregion.c
@@ -113,23 +113,37 @@ void intel_vgpu_clean_opregion(struct intel_vgpu *vgpu)
  */
 int intel_vgpu_init_opregion(struct intel_vgpu *vgpu, u32 gpa)
 {
-	int ret;
+	int ret = 0;
+	unsigned long pfn;
 
 	gvt_dbg_core("vgpu%d: init vgpu opregion\n", vgpu->id);
 
-	if (intel_gvt_host.hypervisor_type == INTEL_GVT_HYPERVISOR_XEN) {
+	switch (intel_gvt_host.hypervisor_type) {
+	case INTEL_GVT_HYPERVISOR_KVM:
+		pfn = intel_gvt_hypervisor_gfn_to_mfn(vgpu, gpa >> PAGE_SHIFT);
+		vgpu_opregion(vgpu)->va = memremap(pfn << PAGE_SHIFT,
+						INTEL_GVT_OPREGION_SIZE,
+						MEMREMAP_WB);
+		if (!vgpu_opregion(vgpu)->va) {
+			gvt_vgpu_err("failed to map guest opregion\n");
+			ret = -EFAULT;
+		}
+		break;
+	case INTEL_GVT_HYPERVISOR_XEN:
 		gvt_dbg_core("emulate opregion from kernel\n");
 
 		ret = init_vgpu_opregion(vgpu, gpa);
 		if (ret)
-			return ret;
+			break;
 
 		ret = map_vgpu_opregion(vgpu, true);
-		if (ret)
-			return ret;
+		break;
+	default:
+		ret = -EINVAL;
+		gvt_vgpu_err("not supported hypervisor\n");
 	}
 
-	return 0;
+	return ret;
 }
 
 /**
diff --git a/drivers/gpu/drm/i915/gvt/vgpu.c b/drivers/gpu/drm/i915/gvt/vgpu.c
index 3deadcb..bcbf535 100644
--- a/drivers/gpu/drm/i915/gvt/vgpu.c
+++ b/drivers/gpu/drm/i915/gvt/vgpu.c
@@ -383,6 +383,10 @@ static struct intel_vgpu *__intel_gvt_create_vgpu(struct intel_gvt *gvt,
 	if (ret)
 		goto out_clean_shadow_ctx;
 
+	ret = intel_gvt_hypervisor_set_opregion(vgpu);
+	if (ret)
+		goto out_clean_shadow_ctx;
+
 	mutex_unlock(&gvt->lock);
 
 	return vgpu;
-- 
2.7.4

+						INTEL_GVT_OPREGION_SIZE,
+						MEMREMAP_WB);
+		if (!vgpu_opregion(vgpu)->va) {
+			gvt_vgpu_err("failed to map guest opregion\n");
+			ret = -EFAULT;
+		}
+		break;
+	case INTEL_GVT_HYPERVISOR_XEN:
 		gvt_dbg_core("emulate opregion from kernel\n");
 
 		ret = init_vgpu_opregion(vgpu, gpa);
 		if (ret)
-			return ret;
+			break;
 
 		ret = map_vgpu_opregion(vgpu, true);
-		if (ret)
-			return ret;
+		break;
+	default:
+		ret = -EINVAL;
+		gvt_vgpu_err("not supported hypervisor\n");
 	}
 
-	return 0;
+	return ret;
 }
 
 /**
diff --git a/drivers/gpu/drm/i915/gvt/vgpu.c b/drivers/gpu/drm/i915/gvt/vgpu.c
index 3deadcb..bcbf535 100644
--- a/drivers/gpu/drm/i915/gvt/vgpu.c
+++ b/drivers/gpu/drm/i915/gvt/vgpu.c
@@ -383,6 +383,10 @@ static struct intel_vgpu *__intel_gvt_create_vgpu(struct intel_gvt *gvt,
 	if (ret)
 		goto out_clean_shadow_ctx;
 
+	ret = intel_gvt_hypervisor_set_opregion(vgpu);
+	if (ret)
+		goto out_clean_shadow_ctx;
+
 	mutex_unlock(&gvt->lock);
 
 	return vgpu;
-- 
2.7.4

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v15 5/7] vfio: ABI for mdev display dma-buf operation
  2017-10-10  9:50 [PATCH v15 0/7] drm/i915/gvt: Dma-buf support for GVT-g Tina Zhang
@ 2017-10-10  9:50   ` Tina Zhang
  2017-10-10  9:50   ` Tina Zhang
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 31+ messages in thread
From: Tina Zhang @ 2017-10-10  9:50 UTC (permalink / raw)
  To: alex.williamson, kraxel, chris, zhenyuw, zhiyuan.lv, zhi.a.wang,
	kevin.tian, daniel, kwankhede
  Cc: Tina Zhang, intel-gfx, intel-gvt-dev, linux-kernel, Daniel Vetter

Add VFIO_DEVICE_QUERY_GFX_PLANE ioctl command to let user mode query and
get the plane and its related information. This ioctl can be invoked with:
1) either flag DMABUF or REGION is set. Vendor driver returns success and
the plane_info only when the specific kind of buffer is supported.
2) flag PROBE is set with either DMABUF or REGION. Vendor driver returns
success only when the specific kind of buffer is supported.

The dma-buf's life cycle is handled by user mode and tracked by the kernel.
The returned fd in struct vfio_device_query_gfx_plane can be a new fd or an
old fd of a re-exported dma-buf. Host user mode can check the value of the
fd to see whether it needs to create a new resource for a new fd or simply
reuse the existing resource associated with an old fd.
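
For illustration only, a minimal userspace sketch of the intended flow,
assuming a linux/vfio.h that already carries the definitions added by this
patch and an already-opened VFIO device fd (device_fd and the plane-type
value are placeholders; error handling is trimmed):

#include <sys/ioctl.h>
#include <linux/vfio.h>

#ifndef DRM_PLANE_TYPE_PRIMARY
#define DRM_PLANE_TYPE_PRIMARY 1	/* mirrors the kernel's enum drm_plane_type */
#endif

/* device_fd: VFIO device fd obtained via VFIO_GROUP_GET_DEVICE_FD. */
static int get_primary_plane_dmabuf(int device_fd)
{
	struct vfio_device_gfx_plane_info plane = {
		.argsz = sizeof(plane),
		.flags = VFIO_GFX_PLANE_TYPE_DMABUF,
		.drm_plane_type = DRM_PLANE_TYPE_PRIMARY,
	};
	struct vfio_device_gfx_dmabuf_fd req = { .argsz = sizeof(req) };

	/* Query the currently decoded primary plane; a dmabuf_id comes back. */
	if (ioctl(device_fd, VFIO_DEVICE_QUERY_GFX_PLANE, &plane) < 0)
		return -1;

	/* Exchange the dmabuf_id for a dma-buf file descriptor. */
	req.dmabuf_id = plane.dmabuf_id;
	if (ioctl(device_fd, VFIO_DEVICE_GET_GFX_DMABUF, &req) < 0)
		return -1;

	return req.dmabuf_fd;	/* importable by the host graphics stack */
}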

v15:
- add an ioctl to get a dmabuf for a given dmabuf id. (Gerd)

v14:
- add PROBE, DMABUF and REGION flags. (Alex)

v12:
- add drm_format_mod back. (Gerd and Zhenyu)
- add region_index. (Gerd)

v11:
- rename plane_type to drm_plane_type. (Gerd)
- move fields of vfio_device_query_gfx_plane to vfio_device_gfx_plane_info.
  (Gerd)
- remove drm_format_mod, start fields. (Daniel)
- remove plane_id.

v10:
- refine the ABI API VFIO_DEVICE_QUERY_GFX_PLANE. (Alex) (Gerd)

v3:
- add a field gvt_plane_info in the drm_i915_gem_obj structure to save
  the decoded plane information and avoid a lookup whenever the plane
  info is needed. (Gerd)

Signed-off-by: Tina Zhang <tina.zhang@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 include/uapi/linux/vfio.h | 62 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 62 insertions(+)

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index ae46105..fdf9a9c 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -502,6 +502,68 @@ struct vfio_pci_hot_reset {
 
 #define VFIO_DEVICE_PCI_HOT_RESET	_IO(VFIO_TYPE, VFIO_BASE + 13)
 
+/**
+ * VFIO_DEVICE_QUERY_GFX_PLANE - _IOW(VFIO_TYPE, VFIO_BASE + 14,
+ *                                    struct vfio_device_query_gfx_plane)
+ *
+ * Set the drm_plane_type and flags, then retrieve the gfx plane info.
+ *
+ * flags supported:
+ * - VFIO_GFX_PLANE_TYPE_PROBE and VFIO_GFX_PLANE_TYPE_DMABUF are set
+ *   to ask if the mdev supports dma-buf. 0 on support, -EINVAL on no
+ *   support for dma-buf.
+ * - VFIO_GFX_PLANE_TYPE_PROBE and VFIO_GFX_PLANE_TYPE_REGION are set
+ *   to ask if the mdev supports region. 0 on support, -EINVAL on no
+ *   support for region.
+ * - VFIO_GFX_PLANE_TYPE_DMABUF or VFIO_GFX_PLANE_TYPE_REGION is set
+ *   with each call to query the plane info.
+ * - Others are invalid and return -EINVAL.
+ *
+ * Return: 0 on success, -ENODEV with all out fields zero on mdev
+ * device initialization, -errno on other failure.
+ */
+struct vfio_device_gfx_plane_info {
+	__u32 argsz;
+	__u32 flags;
+#define VFIO_GFX_PLANE_TYPE_PROBE (1 << 0)
+#define VFIO_GFX_PLANE_TYPE_DMABUF (1 << 1)
+#define VFIO_GFX_PLANE_TYPE_REGION (1 << 2)
+	/* in */
+	__u32 drm_plane_type;	/* type of plane: DRM_PLANE_TYPE_* */
+	/* out */
+	__u32 drm_format;	/* drm format of plane */
+	__u64 drm_format_mod;   /* tiled mode */
+	__u32 width;	/* width of plane */
+	__u32 height;	/* height of plane */
+	__u32 stride;	/* stride of plane */
+	__u32 size;	/* size of plane in bytes, align on page*/
+	__u32 x_pos;	/* horizontal position of cursor plane */
+	__u32 y_pos;	/* vertical position of cursor plane*/
+	union {
+		__u32 region_index;	/* region index */
+		__s32 dmabuf_id;	/* dma-buf fd */
+	};
+};
+
+#define VFIO_DEVICE_QUERY_GFX_PLANE _IO(VFIO_TYPE, VFIO_BASE + 14)
+
+/**
+ * VFIO_DEVICE_GET_GFX_DMABUF - _IOW(VFIO_TYPE, VFIO_BASE + 15,
+ *				    struct vfio_device_gfx_dmabuf_fd)
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+struct vfio_device_gfx_dmabuf_fd {
+	__u32 argsz;
+	__u32 flags;
+	/* in */
+	__u32 dmabuf_id;
+	/* out */
+	__s32 dmabuf_fd;
+};
+
+#define VFIO_DEVICE_GET_GFX_DMABUF _IO(VFIO_TYPE, VFIO_BASE + 15)
+
 /* -------- API for Type1 VFIO IOMMU -------- */
 
 /**
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v15 6/7] drm/i915: Introduce GEM proxy
  2017-10-10  9:50 [PATCH v15 0/7] drm/i915/gvt: Dma-buf support for GVT-g Tina Zhang
@ 2017-10-10  9:50   ` Tina Zhang
  2017-10-10  9:50   ` Tina Zhang
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 31+ messages in thread
From: Tina Zhang @ 2017-10-10  9:50 UTC (permalink / raw)
  To: alex.williamson, kraxel, chris, zhenyuw, zhiyuan.lv, zhi.a.wang,
	kevin.tian, daniel, kwankhede
  Cc: Tina Zhang, intel-gfx, intel-gvt-dev, linux-kernel,
	Daniel Vetter, Joonas Lahtinen

A GEM proxy is a kind of GEM object whose backing physical memory is
pinned and produced by the guest VM and used by the host as read only.
With a GEM proxy, the host is able to access guest physical memory
through the GEM object interface. As a GEM proxy is such a special kind
of GEM object, a new flag I915_GEM_OBJECT_IS_PROXY is introduced to keep
the host from changing the backing storage of a GEM proxy.
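
Purely to illustrate the userspace-visible effect (not part of this patch):
an ioctl that would change a proxy object's backing-storage properties is
expected to fail with -ENXIO. A hedged sketch, assuming libdrm's drmIoctl()
helper and a hypothetical proxy_handle, e.g. a GEM handle obtained by
importing the vGPU framebuffer dma-buf:

#include <errno.h>
#include <stdio.h>
#include <stdint.h>
#include <xf86drm.h>
#include <i915_drm.h>

/* proxy_handle is hypothetical: a GEM handle backed by a GVT-g proxy object. */
static void try_set_tiling(int drm_fd, uint32_t proxy_handle)
{
	struct drm_i915_gem_set_tiling arg = {
		.handle = proxy_handle,
		.tiling_mode = I915_TILING_X,
		.stride = 4096,
	};

	/* The producer owns the tiling of a proxy object, so this should
	 * come back with -ENXIO rather than silently changing anything. */
	if (drmIoctl(drm_fd, DRM_IOCTL_I915_GEM_SET_TILING, &arg) < 0 &&
	    errno == ENXIO)
		fprintf(stderr, "tiling is owned by the proxy's producer\n");
}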

v14:
- return -ENXIO when gem proxy object is banned by ioctl.
  (Chris) (Daniel)

v13:
- add comments to GEM proxy. (Chris)
- don't ban GEM proxy in i915_gem_sw_finish_ioctl. (Chris)
- check GEM proxy bar after finishing i915_gem_object_wait. (Chris)
- remove GEM proxy bar in i915_gem_madvise_ioctl.

v6:
- add gem proxy barrier in the following ioctls. (Chris)
  i915_gem_set_caching_ioctl
  i915_gem_set_domain_ioctl
  i915_gem_sw_finish_ioctl
  i915_gem_set_tiling_ioctl
  i915_gem_madvise_ioctl

Signed-off-by: Tina Zhang <tina.zhang@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
 drivers/gpu/drm/i915/i915_gem.c        | 24 +++++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_gem_object.h |  7 +++++++
 drivers/gpu/drm/i915/i915_gem_tiling.c |  8 ++++++++
 3 files changed, 38 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 000a764..7f1e7ab 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1584,6 +1584,16 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
 	if (err)
 		goto out;
 
+	/* Proxy objects do not control access to the backing storage, ergo
+	 * they cannot be used as a means to manipulate the cache domain
+	 * tracking for that backing storage. The proxy object is always
+	 * considered to be outside of any cache domain.
+	 */
+	if (i915_gem_object_is_proxy(obj)) {
+		err = -ENXIO;
+		goto out;
+	}
+
 	/* Flush and acquire obj->pages so that we are coherent through
 	 * direct access in memory with previous cached writes through
 	 * shmemfs and that our cache domain tracking remains valid.
@@ -1640,6 +1650,10 @@ i915_gem_sw_finish_ioctl(struct drm_device *dev, void *data,
 	if (!obj)
 		return -ENOENT;
 
+	/* Proxy objects are barred from CPU access, so there is no
+	 * need to ban sw_finish as it is a nop.
+	 */
+
 	/* Pinned buffers may be scanout, so flush the cache */
 	i915_gem_object_flush_if_display(obj);
 	i915_gem_object_put(obj);
@@ -1690,7 +1704,7 @@ i915_gem_mmap_ioctl(struct drm_device *dev, void *data,
 	 */
 	if (!obj->base.filp) {
 		i915_gem_object_put(obj);
-		return -EINVAL;
+		return -ENXIO;
 	}
 
 	addr = vm_mmap(obj->base.filp, 0, args->size,
@@ -3749,6 +3763,14 @@ int i915_gem_set_caching_ioctl(struct drm_device *dev, void *data,
 	if (!obj)
 		return -ENOENT;
 
+	/* The caching mode of proxy object is handled by its generator, and not
+	 * expected to be changed by user mode.
+	 */
+	if (i915_gem_object_is_proxy(obj)) {
+		ret = -ENXIO;
+		goto out;
+	}
+
 	if (obj->cache_level == level)
 		goto out;
 
diff --git a/drivers/gpu/drm/i915/i915_gem_object.h b/drivers/gpu/drm/i915/i915_gem_object.h
index 5b19a49..f3b382a 100644
--- a/drivers/gpu/drm/i915/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/i915_gem_object.h
@@ -39,6 +39,7 @@ struct drm_i915_gem_object_ops {
 	unsigned int flags;
 #define I915_GEM_OBJECT_HAS_STRUCT_PAGE BIT(0)
 #define I915_GEM_OBJECT_IS_SHRINKABLE   BIT(1)
+#define I915_GEM_OBJECT_IS_PROXY	BIT(2)
 
 	/* Interface between the GEM object and its backing storage.
 	 * get_pages() is called once prior to the use of the associated set
@@ -300,6 +301,12 @@ i915_gem_object_is_shrinkable(const struct drm_i915_gem_object *obj)
 }
 
 static inline bool
+i915_gem_object_is_proxy(const struct drm_i915_gem_object *obj)
+{
+	return obj->ops->flags & I915_GEM_OBJECT_IS_PROXY;
+}
+
+static inline bool
 i915_gem_object_is_active(const struct drm_i915_gem_object *obj)
 {
 	return obj->active_count;
diff --git a/drivers/gpu/drm/i915/i915_gem_tiling.c b/drivers/gpu/drm/i915/i915_gem_tiling.c
index fb5231f..f617012 100644
--- a/drivers/gpu/drm/i915/i915_gem_tiling.c
+++ b/drivers/gpu/drm/i915/i915_gem_tiling.c
@@ -345,6 +345,14 @@ i915_gem_set_tiling_ioctl(struct drm_device *dev, void *data,
 	if (!obj)
 		return -ENOENT;
 
+	/* The tiling mode of proxy objects is handled by its generator, and not
+	 * expected to be changed by user mode.
+	 */
+	if (i915_gem_object_is_proxy(obj)) {
+		err = -ENXIO;
+		goto err;
+	}
+
 	if (!i915_tiling_ok(obj, args->tiling_mode, args->stride)) {
 		err = -EINVAL;
 		goto err;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v15 7/7] drm/i915/gvt: Dmabuf support for GVT-g
  2017-10-10  9:50 [PATCH v15 0/7] drm/i915/gvt: Dma-buf support for GVT-g Tina Zhang
@ 2017-10-10  9:50   ` Tina Zhang
  2017-10-10  9:50   ` Tina Zhang
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 31+ messages in thread
From: Tina Zhang @ 2017-10-10  9:50 UTC (permalink / raw)
  To: alex.williamson, kraxel, chris, zhenyuw, zhiyuan.lv, zhi.a.wang,
	kevin.tian, daniel, kwankhede
  Cc: Tina Zhang, intel-gfx, intel-gvt-dev, linux-kernel, Daniel Vetter

This patch introduces a guest framebuffer sharing mechanism based on the
dma-buf subsystem. With this mechanism, the guest's framebuffer can be
shared between the guest VM and the host.
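
As a rough, illustrative sketch of how a host consumer might use the shared
framebuffer (not part of this series): the dma-buf fd handed out through the
VFIO ABI of patch 5/7 can be imported like any other dma-buf, for instance
into a host DRM device with libdrm's PRIME helper, or into EGL via
EGL_EXT_image_dma_buf_import:

#include <stdint.h>
#include <xf86drm.h>

/* dmabuf_fd: fd returned by VFIO_DEVICE_GET_GFX_DMABUF (see patch 5/7).
 * drm_fd:    an open DRM node on the host. Illustrative only. */
static int import_guest_fb(int drm_fd, int dmabuf_fd, uint32_t *handle)
{
	/* Standard PRIME import; the backing pages are the guest's
	 * framebuffer and must be treated as read only by the host. */
	return drmPrimeFDToHandle(drm_fd, dmabuf_fd, handle);
}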

v15:
- Add VFIO_DEVICE_GET_GFX_DMABUF ABI. (Gerd)
- Add intel_vgpu_dmabuf_cleanup() to clean up the vGPU's dmabuf. (Gerd)

v14:
- add PROBE, DMABUF and REGION flags. (Alex)

v12:
- refine the lifecycle of dmabuf.

v9:
- remove dma-buf management. (Alex)
- track the dma-buf create and release in kernel mode. (Gerd) (Daniel)

v8:
- refine the dma-buf ioctl definition.(Alex)
- add a lock to protect the dmabuf list. (Alex)

v7:
- release dma-buf related allocations in dma-buf's associated release
  function. (Alex)
- refine ioctl interface for querying plane info or create dma-buf.
  (Alex)

v6:
- align the dma-buf life cycle with the vfio device. (Alex)
- add the dma-buf related operations in a separate patch. (Gerd)
- i915 related changes. (Chris)

v5:
- fix a bug in checking whether the gem obj is gvt's dma-buf when the
  user changes caching mode or domains. Add a helper function to do it.
  (Xiaoguang)
- add definition for the query plane and create dma-buf. (Xiaoguang)

v4:
- fix a bug in checking whether the gem obj is gvt's dma-buf when setting
  caching mode or domains. (Xiaoguang)

v3:
- declare a new flag I915_GEM_OBJECT_IS_GVT_DMABUF in drm_i915_gem_object
  to represent the gem obj for gvt's dma-buf. The tiling mode, caching
  mode and domains can not be changed for this kind of gem object. (Alex)
- change dma-buf related information to be more generic. So other vendor
  can use the same interface. (Alex)

v2:
- create a management fd for dma-buf operations. (Alex)
- alloc gem object's backing storage in gem obj's get_pages() callback.
  (Chris)

Signed-off-by: Tina Zhang <tina.zhang@intel.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Gerd Hoffmann <kraxel@redhat.com>
---
 drivers/gpu/drm/i915/gvt/Makefile      |   2 +-
 drivers/gpu/drm/i915/gvt/dmabuf.c      | 513 +++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/gvt/dmabuf.h      |  65 +++++
 drivers/gpu/drm/i915/gvt/gvt.c         |   2 +
 drivers/gpu/drm/i915/gvt/gvt.h         |  11 +
 drivers/gpu/drm/i915/gvt/hypercall.h   |   2 +
 drivers/gpu/drm/i915/gvt/kvmgt.c       |  60 ++++
 drivers/gpu/drm/i915/gvt/mpt.h         |  30 ++
 drivers/gpu/drm/i915/gvt/vgpu.c        |   5 +-
 drivers/gpu/drm/i915/i915_gem_object.h |   2 +
 10 files changed, 690 insertions(+), 2 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/gvt/dmabuf.c
 create mode 100644 drivers/gpu/drm/i915/gvt/dmabuf.h

diff --git a/drivers/gpu/drm/i915/gvt/Makefile b/drivers/gpu/drm/i915/gvt/Makefile
index 019d596..18f43cb 100644
--- a/drivers/gpu/drm/i915/gvt/Makefile
+++ b/drivers/gpu/drm/i915/gvt/Makefile
@@ -2,7 +2,7 @@ GVT_DIR := gvt
 GVT_SOURCE := gvt.o aperture_gm.o handlers.o vgpu.o trace_points.o firmware.o \
 	interrupt.o gtt.o cfg_space.o opregion.o mmio.o display.o edid.o \
 	execlist.o scheduler.o sched_policy.o render.o cmd_parser.o \
-	fb_decoder.o
+	fb_decoder.o dmabuf.o
 
 ccflags-y				+= -I$(src) -I$(src)/$(GVT_DIR)
 i915-y					+= $(addprefix $(GVT_DIR)/, $(GVT_SOURCE))
diff --git a/drivers/gpu/drm/i915/gvt/dmabuf.c b/drivers/gpu/drm/i915/gvt/dmabuf.c
new file mode 100644
index 0000000..e97b3c7
--- /dev/null
+++ b/drivers/gpu/drm/i915/gvt/dmabuf.c
@@ -0,0 +1,513 @@
+/*
+ * Copyright 2017 Intel Corporation. All rights reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Zhiyuan Lv <zhiyuan.lv@intel.com>
+ *
+ * Contributors:
+ *    Xiaoguang Chen
+ *    Tina Zhang <tina.zhang@intel.com>
+ */
+
+#include <linux/dma-buf.h>
+#include <drm/drmP.h>
+#include <linux/vfio.h>
+
+#include "i915_drv.h"
+#include "gvt.h"
+
+#define GEN8_DECODE_PTE(pte) (pte & GENMASK_ULL(63, 12))
+
+static struct sg_table *vgpu_gem_get_pages(
+		struct drm_i915_gem_object *obj)
+{
+	struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
+	struct sg_table *st;
+	struct scatterlist *sg;
+	int i, ret;
+	gen8_pte_t __iomem *gtt_entries;
+	struct intel_vgpu_fb_info *fb_info;
+
+	fb_info = (struct intel_vgpu_fb_info *)obj->gvt_info;
+	if (WARN_ON(!fb_info))
+		return ERR_PTR(-ENODEV);
+
+	st = kmalloc(sizeof(*st), GFP_KERNEL);
+	if (unlikely(!st))
+		return ERR_PTR(-ENOMEM);
+
+	ret = sg_alloc_table(st, fb_info->size, GFP_KERNEL);
+	if (ret) {
+		kfree(st);
+		return ERR_PTR(ret);
+	}
+	gtt_entries = (gen8_pte_t __iomem *)dev_priv->ggtt.gsm +
+		(fb_info->start >> PAGE_SHIFT);
+	for_each_sg(st->sgl, sg, fb_info->size, i) {
+		sg->offset = 0;
+		sg->length = PAGE_SIZE;
+		sg_dma_address(sg) =
+			GEN8_DECODE_PTE(readq(&gtt_entries[i]));
+		sg_dma_len(sg) = PAGE_SIZE;
+	}
+
+	return st;
+}
+
+static void vgpu_gem_put_pages(struct drm_i915_gem_object *obj,
+		struct sg_table *pages)
+{
+	sg_free_table(pages);
+	kfree(pages);
+}
+
+static void dmabuf_gem_object_free(struct kref *kref)
+{
+	struct intel_vgpu_dmabuf_obj *obj =
+		container_of(kref, struct intel_vgpu_dmabuf_obj, kref);
+	struct intel_vgpu *vgpu = obj->vgpu;
+	struct list_head *pos;
+
+	struct intel_vgpu_dmabuf_obj *dmabuf_obj;
+
+	list_for_each(pos, &vgpu->dmabuf_obj_list_head) {
+		dmabuf_obj = container_of(pos, struct intel_vgpu_dmabuf_obj,
+						list);
+		if (dmabuf_obj == obj) {
+			idr_remove(&vgpu->object_idr, dmabuf_obj->dmabuf_id);
+			kfree(dmabuf_obj->info);
+			kfree(dmabuf_obj);
+			list_del(pos);
+			break;
+		}
+	}
+}
+
+
+static inline void dmabuf_obj_get(struct intel_vgpu_dmabuf_obj *obj)
+{
+	kref_get(&obj->kref);
+}
+
+static inline void dmabuf_obj_put(struct intel_vgpu_dmabuf_obj *obj)
+{
+	kref_put(&obj->kref, dmabuf_gem_object_free);
+}
+
+static void vgpu_gem_release(struct drm_i915_gem_object *gem_obj)
+{
+
+	struct intel_vgpu_fb_info *fb_info = gem_obj->gvt_info;
+	struct intel_vgpu_dmabuf_obj *obj = fb_info->obj;
+	struct intel_vgpu *vgpu = obj->vgpu;
+
+	mutex_lock(&vgpu->dmabuf_lock);
+	gem_obj->base.dma_buf = NULL;
+	dmabuf_obj_put(obj);
+	intel_gvt_hypervisor_put_vfio_device(vgpu);
+	mutex_unlock(&vgpu->dmabuf_lock);
+}
+
+static const struct drm_i915_gem_object_ops intel_vgpu_gem_ops = {
+	.flags = I915_GEM_OBJECT_IS_PROXY,
+	.get_pages = vgpu_gem_get_pages,
+	.put_pages = vgpu_gem_put_pages,
+	.release = vgpu_gem_release,
+};
+
+static struct drm_i915_gem_object *vgpu_create_gem(struct drm_device *dev,
+		struct intel_vgpu_fb_info *info)
+{
+	struct drm_i915_private *dev_priv = to_i915(dev);
+	struct drm_i915_gem_object *obj;
+
+	obj = i915_gem_object_alloc(dev_priv);
+	if (obj == NULL)
+		return NULL;
+
+	drm_gem_private_object_init(dev, &obj->base,
+		info->size << PAGE_SHIFT);
+	i915_gem_object_init(obj, &intel_vgpu_gem_ops);
+
+	obj->base.read_domains = I915_GEM_DOMAIN_GTT;
+	obj->base.write_domain = 0;
+	if (IS_SKYLAKE(dev_priv)) {
+		unsigned int tiling_mode = 0;
+		unsigned int stride = 0;
+
+		switch (info->drm_format_mod << 10) {
+		case PLANE_CTL_TILED_LINEAR:
+			tiling_mode = I915_TILING_NONE;
+			break;
+		case PLANE_CTL_TILED_X:
+			tiling_mode = I915_TILING_X;
+			stride = info->stride;
+			break;
+		case PLANE_CTL_TILED_Y:
+			tiling_mode = I915_TILING_Y;
+			stride = info->stride;
+			break;
+		default:
+			gvt_dbg_core("not supported tiling mode\n");
+		}
+		obj->tiling_and_stride = tiling_mode | stride;
+	} else {
+		obj->tiling_and_stride = info->drm_format_mod ?
+					I915_TILING_X : 0;
+	}
+
+	return obj;
+}
+
+static int vgpu_get_plane_info(struct drm_device *dev,
+		struct intel_vgpu *vgpu,
+		struct intel_vgpu_fb_info *info,
+		int plane_id)
+{
+	struct drm_i915_private *dev_priv = to_i915(dev);
+	struct intel_vgpu_primary_plane_format p;
+	struct intel_vgpu_cursor_plane_format c;
+	int ret;
+
+	if (plane_id == DRM_PLANE_TYPE_PRIMARY) {
+		ret = intel_vgpu_decode_primary_plane(vgpu, &p);
+		if (ret)
+			return ret;
+		info->start = p.base;
+		info->start_gpa = p.base_gpa;
+		info->width = p.width;
+		info->height = p.height;
+		info->stride = p.stride;
+		info->drm_format = p.drm_format;
+		info->drm_format_mod = p.tiled;
+		info->size = (((p.stride * p.height * p.bpp) / 8) +
+				(PAGE_SIZE - 1)) >> PAGE_SHIFT;
+	} else if (plane_id == DRM_PLANE_TYPE_CURSOR) {
+		ret = intel_vgpu_decode_cursor_plane(vgpu, &c);
+		if (ret)
+			return ret;
+		info->start = c.base;
+		info->start_gpa = c.base_gpa;
+		info->width = c.width;
+		info->height = c.height;
+		info->stride = c.width * (c.bpp / 8);
+		info->drm_format = c.drm_format;
+		info->drm_format_mod = 0;
+		info->x_pos = c.x_pos;
+		info->y_pos = c.y_pos;
+		info->size = (((info->stride * c.height * c.bpp) / 8)
+				+ (PAGE_SIZE - 1)) >> PAGE_SHIFT;
+	} else {
+		gvt_vgpu_err("invalid plane id:%d\n", plane_id);
+		return -EINVAL;
+	}
+
+	if (info->size == 0) {
+		gvt_vgpu_err("fb size is zero\n");
+		return -EINVAL;
+	}
+
+	if (info->start & (PAGE_SIZE - 1)) {
+		gvt_vgpu_err("Not aligned fb address:0x%llx\n", info->start);
+		return -EFAULT;
+	}
+	if (((info->start >> PAGE_SHIFT) + info->size) >
+		ggtt_total_entries(&dev_priv->ggtt)) {
+		gvt_vgpu_err("Invalid GTT offset or size\n");
+		return -EFAULT;
+	}
+
+	if (!intel_gvt_ggtt_validate_range(vgpu, info->start, info->size)) {
+		gvt_vgpu_err("invalid gma addr\n");
+		return -EFAULT;
+	}
+
+	return 0;
+}
+
+static struct intel_vgpu_dmabuf_obj *
+pick_dmabuf_by_info(struct intel_vgpu *vgpu,
+		    struct intel_vgpu_fb_info *latest_info)
+{
+	struct list_head *pos;
+	struct intel_vgpu_fb_info *fb_info;
+	struct intel_vgpu_dmabuf_obj *dmabuf_obj = NULL;
+	struct intel_vgpu_dmabuf_obj *ret = NULL;
+
+	list_for_each(pos, &vgpu->dmabuf_obj_list_head) {
+		dmabuf_obj = container_of(pos, struct intel_vgpu_dmabuf_obj,
+						list);
+		if ((dmabuf_obj == NULL) ||
+		    (dmabuf_obj->info == NULL))
+			continue;
+
+		fb_info = (struct intel_vgpu_fb_info *)dmabuf_obj->info;
+		if ((fb_info->start == latest_info->start) &&
+		    (fb_info->start_gpa == latest_info->start_gpa) &&
+		    (fb_info->size == latest_info->size) &&
+		    (fb_info->drm_format_mod == latest_info->drm_format_mod) &&
+		    (fb_info->drm_format == latest_info->drm_format) &&
+		    (fb_info->width == latest_info->width) &&
+		    (fb_info->height == latest_info->height)) {
+			ret = dmabuf_obj;
+			break;
+		}
+	}
+
+	return ret;
+}
+
+static struct intel_vgpu_dmabuf_obj *
+pick_dmabuf_by_num(struct intel_vgpu *vgpu, u32 id)
+{
+	struct list_head *pos;
+	struct intel_vgpu_dmabuf_obj *dmabuf_obj = NULL;
+	struct intel_vgpu_dmabuf_obj *ret = NULL;
+
+	list_for_each(pos, &vgpu->dmabuf_obj_list_head) {
+		dmabuf_obj = container_of(pos, struct intel_vgpu_dmabuf_obj,
+						list);
+		if (!dmabuf_obj)
+			continue;
+
+		if (dmabuf_obj->dmabuf_id == id) {
+			ret = dmabuf_obj;
+			break;
+		}
+	}
+
+	return ret;
+}
+
+static void update_fb_info(struct vfio_device_gfx_plane_info *gvt_dmabuf,
+		      struct intel_vgpu_fb_info *fb_info)
+{
+	gvt_dmabuf->drm_format = fb_info->drm_format;
+	gvt_dmabuf->width = fb_info->width;
+	gvt_dmabuf->height = fb_info->height;
+	gvt_dmabuf->stride = fb_info->stride;
+	gvt_dmabuf->size = fb_info->size;
+	gvt_dmabuf->x_pos = fb_info->x_pos;
+	gvt_dmabuf->y_pos = fb_info->y_pos;
+}
+
+int intel_vgpu_query_plane(struct intel_vgpu *vgpu, void *args)
+{
+	struct drm_device *dev = &vgpu->gvt->dev_priv->drm;
+	struct vfio_device_gfx_plane_info *gfx_plane_info = args;
+	struct intel_vgpu_dmabuf_obj *dmabuf_obj;
+	struct intel_vgpu_fb_info fb_info;
+	int ret = 0;
+
+	if (gfx_plane_info->flags == (VFIO_GFX_PLANE_TYPE_DMABUF |
+				       VFIO_GFX_PLANE_TYPE_PROBE))
+		return ret;
+	else if ((gfx_plane_info->flags & ~VFIO_GFX_PLANE_TYPE_DMABUF) ||
+			(!gfx_plane_info->flags))
+		return -EINVAL;
+
+	ret = vgpu_get_plane_info(dev, vgpu, &fb_info,
+					gfx_plane_info->drm_plane_type);
+	if (ret != 0)
+		goto out;
+
+	mutex_lock(&vgpu->dmabuf_lock);
+	/* If exists, pick up the exposed dmabuf_obj */
+	dmabuf_obj = pick_dmabuf_by_info(vgpu, &fb_info);
+	if (dmabuf_obj) {
+		update_fb_info(gfx_plane_info, &fb_info);
+		gfx_plane_info->dmabuf_id = dmabuf_obj->dmabuf_id;
+
+		/* This buffer may be released between query_plane ioctl and
+		 * get_dmabuf ioctl. Add the refcount to make sure it won't
+		 * be released between the two ioctls.
+		 */
+		if (!dmabuf_obj->initref) {
+			dmabuf_obj->initref = true;
+			dmabuf_obj_get(dmabuf_obj);
+		}
+		ret = 0;
+		gvt_dbg_dpy("vgpu%d: re-use dmabuf_obj ref %d, id %d\n",
+			    vgpu->id, kref_read(&dmabuf_obj->kref),
+			    gfx_plane_info->dmabuf_id);
+		mutex_unlock(&vgpu->dmabuf_lock);
+		goto out;
+	}
+
+	mutex_unlock(&vgpu->dmabuf_lock);
+
+	/* Need to allocate a new one*/
+	dmabuf_obj = kmalloc(sizeof(struct intel_vgpu_dmabuf_obj), GFP_KERNEL);
+	if (unlikely(!dmabuf_obj)) {
+		gvt_vgpu_err("alloc dmabuf_obj failed\n");
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	dmabuf_obj->info = kmalloc(sizeof(struct intel_vgpu_fb_info),
+				   GFP_KERNEL);
+	if (unlikely(!dmabuf_obj->info)) {
+		gvt_vgpu_err("allocate intel vgpu fb info failed\n");
+		ret = -ENOMEM;
+		goto out_free_dmabuf;
+	}
+	memcpy(dmabuf_obj->info, &fb_info, sizeof(struct intel_vgpu_fb_info));
+
+	((struct intel_vgpu_fb_info *)dmabuf_obj->info)->obj = dmabuf_obj;
+
+	dmabuf_obj->vgpu = vgpu;
+
+	ret = idr_alloc(&vgpu->object_idr, dmabuf_obj, 1, 0, GFP_NOWAIT);
+	if (ret < 0)
+		goto out_free_info;
+	gfx_plane_info->dmabuf_id = ret;
+	dmabuf_obj->dmabuf_id = ret;
+
+	dmabuf_obj->initref = true;
+
+	kref_init(&dmabuf_obj->kref);
+
+	mutex_lock(&vgpu->dmabuf_lock);
+	if (intel_gvt_hypervisor_get_vfio_device(vgpu)) {
+		gvt_vgpu_err("get vfio device failed\n");
+		mutex_unlock(&vgpu->dmabuf_lock);
+		goto out_free_info;
+	}
+	mutex_unlock(&vgpu->dmabuf_lock);
+
+	update_fb_info(gfx_plane_info, &fb_info);
+
+	INIT_LIST_HEAD(&dmabuf_obj->list);
+	mutex_lock(&vgpu->dmabuf_lock);
+	list_add_tail(&dmabuf_obj->list, &vgpu->dmabuf_obj_list_head);
+	mutex_unlock(&vgpu->dmabuf_lock);
+
+	gvt_dbg_dpy("vgpu%d: %s new dmabuf_obj ref %d, id %d\n", vgpu->id,
+		    __func__, kref_read(&dmabuf_obj->kref), ret);
+
+	return 0;
+
+out_free_info:
+	kfree(dmabuf_obj->info);
+out_free_dmabuf:
+	kfree(dmabuf_obj);
+out:
+	/* ENODEV means plane isn't ready, which might be a normal case. */
+	return (ret == -ENODEV) ? 0 : ret;
+}
+
+/* To associate an exposed dmabuf with the dmabuf_obj */
+int intel_vgpu_get_dmabuf(struct intel_vgpu *vgpu, void *args)
+{
+	struct drm_device *dev = &vgpu->gvt->dev_priv->drm;
+	struct vfio_device_gfx_dmabuf_fd *gvt_dmabuf_fd = args;
+	struct intel_vgpu_dmabuf_obj *dmabuf_obj;
+	struct drm_i915_gem_object *obj;
+	struct dma_buf *dmabuf;
+	int ret = 0;
+
+	mutex_lock(&vgpu->dmabuf_lock);
+
+	dmabuf_obj = pick_dmabuf_by_num(vgpu, gvt_dmabuf_fd->dmabuf_id);
+	if (dmabuf_obj == NULL) {
+		gvt_vgpu_err("invalid dmabuf id:%d\n",
+			     gvt_dmabuf_fd->dmabuf_id);
+		ret = -EINVAL;
+		goto out;
+	}
+
+	obj = vgpu_create_gem(dev, dmabuf_obj->info);
+	if (obj == NULL) {
+		gvt_vgpu_err("create gvt gem obj failed:%d\n", vgpu->id);
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	obj->gvt_info = dmabuf_obj->info;
+
+	dmabuf = i915_gem_prime_export(dev, &obj->base, DRM_CLOEXEC | DRM_RDWR);
+	if (IS_ERR(dmabuf)) {
+		gvt_vgpu_err("export dma-buf failed\n");
+		ret = PTR_ERR(dmabuf);
+		goto out_free_gem;
+	}
+	obj->base.dma_buf = dmabuf;
+
+	i915_gem_object_put(obj);
+
+	ret = dma_buf_fd(dmabuf, DRM_CLOEXEC | DRM_RDWR);
+	if (ret < 0) {
+		gvt_vgpu_err("create dma-buf fd failed ret:%d\n", ret);
+		goto out_free_dmabuf;
+	}
+	gvt_dmabuf_fd->dmabuf_fd = ret;
+
+	if (intel_gvt_hypervisor_get_vfio_device(vgpu)) {
+		gvt_vgpu_err("get vfio device failed\n");
+		put_unused_fd(ret);
+		goto out_free_dmabuf;
+	}
+
+	dmabuf_obj_get(dmabuf_obj);
+
+	if (dmabuf_obj->initref) {
+		dmabuf_obj->initref = false;
+		dmabuf_obj_put(dmabuf_obj);
+	}
+
+	mutex_unlock(&vgpu->dmabuf_lock);
+
+	gvt_dbg_dpy("vgpu%d: dmabuf:%d, dmabuf ref %d, fd:%d\n"
+		    "        file count: %ld, GEM ref: %d\n",
+		    vgpu->id, dmabuf_obj->dmabuf_id,
+		    kref_read(&dmabuf_obj->kref),
+		    gvt_dmabuf_fd->dmabuf_fd,
+		    file_count(dmabuf->file),
+		    kref_read(&obj->base.refcount));
+
+	return 0;
+
+out_free_dmabuf:
+	dma_buf_put(dmabuf);
+out_free_gem:
+	i915_gem_object_put(obj);
+out:
+	mutex_unlock(&vgpu->dmabuf_lock);
+	return ret;
+}
+
+void intel_vgpu_dmabuf_cleanup(struct intel_vgpu *vgpu)
+{
+	struct list_head *pos, *n;
+	struct intel_vgpu_dmabuf_obj *dmabuf_obj;
+
+	mutex_lock(&vgpu->dmabuf_lock);
+	list_for_each_safe(pos, n, &vgpu->dmabuf_obj_list_head) {
+		dmabuf_obj = container_of(pos, struct intel_vgpu_dmabuf_obj,
+						list);
+		if (dmabuf_obj->initref) {
+			dmabuf_obj->initref = false;
+			dmabuf_obj_put(dmabuf_obj);
+		}
+	}
+	mutex_unlock(&vgpu->dmabuf_lock);
+}
diff --git a/drivers/gpu/drm/i915/gvt/dmabuf.h b/drivers/gpu/drm/i915/gvt/dmabuf.h
new file mode 100644
index 0000000..d02c76c
--- /dev/null
+++ b/drivers/gpu/drm/i915/gvt/dmabuf.h
@@ -0,0 +1,65 @@
+/*
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * Authors:
+ *    Zhiyuan Lv <zhiyuan.lv@intel.com>
+ *
+ * Contributors:
+ *    Xiaoguang Chen
+ *    Tina Zhang <tina.zhang@intel.com>
+ */
+
+#ifndef _GVT_DMABUF_H_
+#define _GVT_DMABUF_H_
+#include <linux/vfio.h>
+
+struct intel_vgpu_fb_info {
+	__u64 start;
+	__u64 start_gpa;
+	__u64 drm_format_mod;
+	__u32 drm_format;	/* drm format of plane */
+	__u32 width;	/* width of plane */
+	__u32 height;	/* height of plane */
+	__u32 stride;	/* stride of plane */
+	__u32 size;	/* size of plane in bytes, align on page */
+	__u32 x_pos;	/* horizontal position of cursor plane */
+	__u32 y_pos;	/* vertical position of cursor plane */
+	struct intel_vgpu_dmabuf_obj *obj;
+};
+
+/**
+ * struct intel_vgpu_dmabuf_obj- Intel vGPU device buffer object
+ */
+struct intel_vgpu_dmabuf_obj {
+	struct intel_vgpu *vgpu;
+	struct intel_vgpu_fb_info *info;
+	__u32 dmabuf_id;
+	struct kref kref;
+	bool initref;
+	struct list_head list;
+};
+
+int intel_vgpu_query_plane(struct intel_vgpu *vgpu, void *args);
+int intel_vgpu_get_dmabuf(struct intel_vgpu *vgpu, void *args);
+void intel_vgpu_dmabuf_cleanup(struct intel_vgpu *vgpu);
+
+#endif
diff --git a/drivers/gpu/drm/i915/gvt/gvt.c b/drivers/gpu/drm/i915/gvt/gvt.c
index c27c683..bc7264f 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.c
+++ b/drivers/gpu/drm/i915/gvt/gvt.c
@@ -54,6 +54,8 @@ static const struct intel_gvt_ops intel_gvt_ops = {
 	.vgpu_reset = intel_gvt_reset_vgpu,
 	.vgpu_activate = intel_gvt_activate_vgpu,
 	.vgpu_deactivate = intel_gvt_deactivate_vgpu,
+	.vgpu_query_plane = intel_vgpu_query_plane,
+	.vgpu_get_dmabuf = intel_vgpu_get_dmabuf,
 };
 
 /**
diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
index 0f562ee..e67367e 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.h
+++ b/drivers/gpu/drm/i915/gvt/gvt.h
@@ -47,6 +47,7 @@
 #include "render.h"
 #include "cmd_parser.h"
 #include "fb_decoder.h"
+#include "dmabuf.h"
 
 #define GVT_MAX_VGPU 8
 
@@ -183,8 +184,16 @@ struct intel_vgpu {
 		struct kvm *kvm;
 		struct work_struct release_work;
 		atomic_t released;
+		struct vfio_device *vfio_device;
 	} vdev;
 #endif
+
+	struct list_head dmabuf_obj_list_head;
+	struct mutex dmabuf_lock;
+	struct idr object_idr;
+
+	struct completion vblank_done;
+
 };
 
 struct intel_gvt_gm {
@@ -500,6 +509,8 @@ struct intel_gvt_ops {
 	void (*vgpu_reset)(struct intel_vgpu *);
 	void (*vgpu_activate)(struct intel_vgpu *);
 	void (*vgpu_deactivate)(struct intel_vgpu *);
+	int (*vgpu_query_plane)(struct intel_vgpu *vgpu, void *);
+	int (*vgpu_get_dmabuf)(struct intel_vgpu *vgpu, void *);
 };
 
 
diff --git a/drivers/gpu/drm/i915/gvt/hypercall.h b/drivers/gpu/drm/i915/gvt/hypercall.h
index 32c345c..a1bd82f 100644
--- a/drivers/gpu/drm/i915/gvt/hypercall.h
+++ b/drivers/gpu/drm/i915/gvt/hypercall.h
@@ -56,6 +56,8 @@ struct intel_gvt_mpt {
 	int (*set_trap_area)(unsigned long handle, u64 start, u64 end,
 			     bool map);
 	int (*set_opregion)(void *vgpu);
+	int (*get_vfio_device)(void *vgpu);
+	void (*put_vfio_device)(void *vgpu);
 };
 
 extern struct intel_gvt_mpt xengt_mpt;
diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
index 6b0a330..d249054 100644
--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
+++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
@@ -492,10 +492,23 @@ static int intel_vgpu_register_reg(struct intel_vgpu *vgpu,
 	vgpu->vdev.region[vgpu->vdev.num_regions].flags = flags;
 	vgpu->vdev.region[vgpu->vdev.num_regions].data = data;
 	vgpu->vdev.num_regions++;
+	return 0;
+}
 
+static int kvmgt_get_vfio_device(void *p_vgpu)
+{
+	struct intel_vgpu *vgpu = (struct intel_vgpu *)p_vgpu;
+
+	vgpu->vdev.vfio_device = vfio_device_get_from_dev(
+		mdev_dev(vgpu->vdev.mdev));
+	if (!vgpu->vdev.vfio_device) {
+		gvt_vgpu_err("failed to get vfio device\n");
+		return -ENODEV;
+	}
 	return 0;
 }
 
+
 static int kvmgt_set_opregion(void *p_vgpu)
 {
 	struct intel_vgpu *vgpu = (struct intel_vgpu *)p_vgpu;
@@ -527,6 +540,14 @@ static int kvmgt_set_opregion(void *p_vgpu)
 	return ret;
 }
 
+static void kvmgt_put_vfio_device(void *vgpu)
+{
+	if (WARN_ON(!((struct intel_vgpu *)vgpu)->vdev.vfio_device))
+		return;
+
+	vfio_device_put(((struct intel_vgpu *)vgpu)->vdev.vfio_device);
+}
+
 static int intel_vgpu_create(struct kobject *kobj, struct mdev_device *mdev)
 {
 	struct intel_vgpu *vgpu = NULL;
@@ -1253,6 +1274,40 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
 	} else if (cmd == VFIO_DEVICE_RESET) {
 		intel_gvt_ops->vgpu_reset(vgpu);
 		return 0;
+	} else if (cmd == VFIO_DEVICE_QUERY_GFX_PLANE) {
+		struct vfio_device_gfx_plane_info dmabuf;
+		int ret = 0;
+
+		minsz = offsetofend(struct vfio_device_gfx_plane_info,
+				    dmabuf_id);
+		if (copy_from_user(&dmabuf, (void __user *)arg, minsz))
+			return -EFAULT;
+		if (dmabuf.argsz < minsz)
+			return -EINVAL;
+
+		ret = intel_gvt_ops->vgpu_query_plane(vgpu, &dmabuf);
+		if (ret != 0)
+			return ret;
+
+		return copy_to_user((void __user *)arg, &dmabuf, minsz) ?
+								-EFAULT : 0;
+	} else if (cmd == VFIO_DEVICE_GET_GFX_DMABUF) {
+		struct vfio_device_gfx_dmabuf_fd dmabuf_fd;
+		int ret = 0;
+
+		minsz = offsetofend(struct vfio_device_gfx_dmabuf_fd,
+				    dmabuf_fd);
+		if (copy_from_user(&dmabuf_fd, (void __user *)arg, minsz))
+			return -EFAULT;
+		if (dmabuf_fd.argsz < minsz)
+			return -EINVAL;
+
+		ret = intel_gvt_ops->vgpu_get_dmabuf(vgpu, &dmabuf_fd);
+		if (ret != 0)
+			return ret;
+
+		return copy_to_user((void __user *)arg, &dmabuf_fd, minsz) ?
+								-EFAULT : 0;
 	}
 
 	return 0;
@@ -1475,6 +1530,9 @@ static int kvmgt_guest_init(struct mdev_device *mdev)
 	kvmgt_protect_table_init(info);
 	gvt_cache_init(vgpu);
 
+	mutex_init(&vgpu->dmabuf_lock);
+	init_completion(&vgpu->vblank_done);
+
 	info->track_node.track_write = kvmgt_page_track_write;
 	info->track_node.track_flush_slot = kvmgt_page_track_flush_slot;
 	kvm_page_track_register_notifier(kvm, &info->track_node);
@@ -1616,6 +1674,8 @@ struct intel_gvt_mpt kvmgt_mpt = {
 	.write_gpa = kvmgt_write_gpa,
 	.gfn_to_mfn = kvmgt_gfn_to_pfn,
 	.set_opregion = kvmgt_set_opregion,
+	.get_vfio_device = kvmgt_get_vfio_device,
+	.put_vfio_device = kvmgt_put_vfio_device,
 };
 EXPORT_SYMBOL_GPL(kvmgt_mpt);
 
diff --git a/drivers/gpu/drm/i915/gvt/mpt.h b/drivers/gpu/drm/i915/gvt/mpt.h
index 9e73f2e..86443ff 100644
--- a/drivers/gpu/drm/i915/gvt/mpt.h
+++ b/drivers/gpu/drm/i915/gvt/mpt.h
@@ -307,4 +307,34 @@ static inline int intel_gvt_hypervisor_set_opregion(struct intel_vgpu *vgpu)
 	return intel_gvt_host.mpt->set_opregion(vgpu);
 }
 
+/**
+ * intel_gvt_hypervisor_get_vfio_device - increase vfio device ref count
+ * @vgpu: a vGPU
+ *
+ * Returns:
+ * Zero on success, negative error code if failed.
+ */
+static inline int intel_gvt_hypervisor_get_vfio_device(struct intel_vgpu *vgpu)
+{
+	if (!intel_gvt_host.mpt->get_vfio_device)
+		return 0;
+
+	return intel_gvt_host.mpt->get_vfio_device(vgpu);
+}
+
+/**
+ * intel_gvt_hypervisor_put_vfio_device - decrease vfio device ref count
+ * @vgpu: a vGPU
+ *
+ * Returns:
+ * Zero on success, negative error code if failed.
+ */
+static inline void intel_gvt_hypervisor_put_vfio_device(struct intel_vgpu *vgpu)
+{
+	if (!intel_gvt_host.mpt->put_vfio_device)
+		return;
+
+	intel_gvt_host.mpt->put_vfio_device(vgpu);
+}
+
 #endif /* _GVT_MPT_H_ */
diff --git a/drivers/gpu/drm/i915/gvt/vgpu.c b/drivers/gpu/drm/i915/gvt/vgpu.c
index bcbf535..326c8c7 100644
--- a/drivers/gpu/drm/i915/gvt/vgpu.c
+++ b/drivers/gpu/drm/i915/gvt/vgpu.c
@@ -232,6 +232,7 @@ void intel_gvt_deactivate_vgpu(struct intel_vgpu *vgpu)
 	}
 
 	intel_vgpu_stop_schedule(vgpu);
+	intel_vgpu_dmabuf_cleanup(vgpu);
 
 	mutex_unlock(&gvt->lock);
 }
@@ -261,6 +262,7 @@ void intel_gvt_destroy_vgpu(struct intel_vgpu *vgpu)
 	intel_gvt_hypervisor_detach_vgpu(vgpu);
 	intel_vgpu_free_resource(vgpu);
 	intel_vgpu_clean_mmio(vgpu);
+	intel_vgpu_dmabuf_cleanup(vgpu);
 	vfree(vgpu);
 
 	intel_gvt_update_vgpu_types(gvt);
@@ -346,7 +348,8 @@ static struct intel_vgpu *__intel_gvt_create_vgpu(struct intel_gvt *gvt,
 	vgpu->gvt = gvt;
 	vgpu->sched_ctl.weight = param->weight;
 	bitmap_zero(vgpu->tlb_handle_pending, I915_NUM_ENGINES);
-
+	INIT_LIST_HEAD(&vgpu->dmabuf_obj_list_head);
+	idr_init(&vgpu->object_idr);
 	intel_vgpu_init_cfg_space(vgpu, param->primary);
 
 	ret = intel_vgpu_init_mmio(vgpu);
diff --git a/drivers/gpu/drm/i915/i915_gem_object.h b/drivers/gpu/drm/i915/i915_gem_object.h
index f3b382a..c91e336 100644
--- a/drivers/gpu/drm/i915/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/i915_gem_object.h
@@ -199,6 +199,8 @@ struct drm_i915_gem_object {
 		} userptr;
 
 		unsigned long scratch;
+
+		void *gvt_info;
 	};
 
 	/** for phys allocated objects */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 31+ messages in thread

+{
+
+	struct intel_vgpu_fb_info *fb_info = gem_obj->gvt_info;
+	struct intel_vgpu_dmabuf_obj *obj = fb_info->obj;
+	struct intel_vgpu *vgpu = obj->vgpu;
+
+	mutex_lock(&vgpu->dmabuf_lock);
+	gem_obj->base.dma_buf = NULL;
+	dmabuf_obj_put(obj);
+	intel_gvt_hypervisor_put_vfio_device(vgpu);
+	mutex_unlock(&vgpu->dmabuf_lock);
+}
+
+static const struct drm_i915_gem_object_ops intel_vgpu_gem_ops = {
+	.flags = I915_GEM_OBJECT_IS_PROXY,
+	.get_pages = vgpu_gem_get_pages,
+	.put_pages = vgpu_gem_put_pages,
+	.release = vgpu_gem_release,
+};
+
+static struct drm_i915_gem_object *vgpu_create_gem(struct drm_device *dev,
+		struct intel_vgpu_fb_info *info)
+{
+	struct drm_i915_private *dev_priv = to_i915(dev);
+	struct drm_i915_gem_object *obj;
+
+	obj = i915_gem_object_alloc(dev_priv);
+	if (obj == NULL)
+		return NULL;
+
+	drm_gem_private_object_init(dev, &obj->base,
+		info->size << PAGE_SHIFT);
+	i915_gem_object_init(obj, &intel_vgpu_gem_ops);
+
+	obj->base.read_domains = I915_GEM_DOMAIN_GTT;
+	obj->base.write_domain = 0;
+	if (IS_SKYLAKE(dev_priv)) {
+		unsigned int tiling_mode = 0;
+		unsigned int stride = 0;
+
+		switch (info->drm_format_mod << 10) {
+		case PLANE_CTL_TILED_LINEAR:
+			tiling_mode = I915_TILING_NONE;
+			break;
+		case PLANE_CTL_TILED_X:
+			tiling_mode = I915_TILING_X;
+			stride = info->stride;
+			break;
+		case PLANE_CTL_TILED_Y:
+			tiling_mode = I915_TILING_Y;
+			stride = info->stride;
+			break;
+		default:
+			gvt_dbg_core("not supported tiling mode\n");
+		}
+		obj->tiling_and_stride = tiling_mode | stride;
+	} else {
+		obj->tiling_and_stride = info->drm_format_mod ?
+					I915_TILING_X : 0;
+	}
+
+	return obj;
+}
+
+static int vgpu_get_plane_info(struct drm_device *dev,
+		struct intel_vgpu *vgpu,
+		struct intel_vgpu_fb_info *info,
+		int plane_id)
+{
+	struct drm_i915_private *dev_priv = to_i915(dev);
+	struct intel_vgpu_primary_plane_format p;
+	struct intel_vgpu_cursor_plane_format c;
+	int ret;
+
+	if (plane_id == DRM_PLANE_TYPE_PRIMARY) {
+		ret = intel_vgpu_decode_primary_plane(vgpu, &p);
+		if (ret)
+			return ret;
+		info->start = p.base;
+		info->start_gpa = p.base_gpa;
+		info->width = p.width;
+		info->height = p.height;
+		info->stride = p.stride;
+		info->drm_format = p.drm_format;
+		info->drm_format_mod = p.tiled;
+		info->size = (((p.stride * p.height * p.bpp) / 8) +
+				(PAGE_SIZE - 1)) >> PAGE_SHIFT;
+	} else if (plane_id == DRM_PLANE_TYPE_CURSOR) {
+		ret = intel_vgpu_decode_cursor_plane(vgpu, &c);
+		if (ret)
+			return ret;
+		info->start = c.base;
+		info->start_gpa = c.base_gpa;
+		info->width = c.width;
+		info->height = c.height;
+		info->stride = c.width * (c.bpp / 8);
+		info->drm_format = c.drm_format;
+		info->drm_format_mod = 0;
+		info->x_pos = c.x_pos;
+		info->y_pos = c.y_pos;
+		info->size = (((info->stride * c.height * c.bpp) / 8)
+				+ (PAGE_SIZE - 1)) >> PAGE_SHIFT;
+	} else {
+		gvt_vgpu_err("invalid plane id:%d\n", plane_id);
+		return -EINVAL;
+	}
+
+	if (info->size == 0) {
+		gvt_vgpu_err("fb size is zero\n");
+		return -EINVAL;
+	}
+
+	if (info->start & (PAGE_SIZE - 1)) {
+		gvt_vgpu_err("Not aligned fb address:0x%llx\n", info->start);
+		return -EFAULT;
+	}
+	if (((info->start >> PAGE_SHIFT) + info->size) >
+		ggtt_total_entries(&dev_priv->ggtt)) {
+		gvt_vgpu_err("Invalid GTT offset or size\n");
+		return -EFAULT;
+	}
+
+	if (!intel_gvt_ggtt_validate_range(vgpu, info->start, info->size)) {
+		gvt_vgpu_err("invalid gma addr\n");
+		return -EFAULT;
+	}
+
+	return 0;
+}
+
+static struct intel_vgpu_dmabuf_obj *
+pick_dmabuf_by_info(struct intel_vgpu *vgpu,
+		    struct intel_vgpu_fb_info *latest_info)
+{
+	struct list_head *pos;
+	struct intel_vgpu_fb_info *fb_info;
+	struct intel_vgpu_dmabuf_obj *dmabuf_obj = NULL;
+	struct intel_vgpu_dmabuf_obj *ret = NULL;
+
+	list_for_each(pos, &vgpu->dmabuf_obj_list_head) {
+		dmabuf_obj = container_of(pos, struct intel_vgpu_dmabuf_obj,
+						list);
+		if ((dmabuf_obj == NULL) ||
+		    (dmabuf_obj->info == NULL))
+			continue;
+
+		fb_info = (struct intel_vgpu_fb_info *)dmabuf_obj->info;
+		if ((fb_info->start == latest_info->start) &&
+		    (fb_info->start_gpa == latest_info->start_gpa) &&
+		    (fb_info->size == latest_info->size) &&
+		    (fb_info->drm_format_mod == latest_info->drm_format_mod) &&
+		    (fb_info->drm_format == latest_info->drm_format) &&
+		    (fb_info->width == latest_info->width) &&
+		    (fb_info->height == latest_info->height)) {
+			ret = dmabuf_obj;
+			break;
+		}
+	}
+
+	return ret;
+}
+
+static struct intel_vgpu_dmabuf_obj *
+pick_dmabuf_by_num(struct intel_vgpu *vgpu, u32 id)
+{
+	struct list_head *pos;
+	struct intel_vgpu_dmabuf_obj *dmabuf_obj = NULL;
+	struct intel_vgpu_dmabuf_obj *ret = NULL;
+
+	list_for_each(pos, &vgpu->dmabuf_obj_list_head) {
+		dmabuf_obj = container_of(pos, struct intel_vgpu_dmabuf_obj,
+						list);
+		if (!dmabuf_obj)
+			continue;
+
+		if (dmabuf_obj->dmabuf_id == id) {
+			ret = dmabuf_obj;
+			break;
+		}
+	}
+
+	return ret;
+}
+
+static void update_fb_info(struct vfio_device_gfx_plane_info *gvt_dmabuf,
+		      struct intel_vgpu_fb_info *fb_info)
+{
+	gvt_dmabuf->drm_format = fb_info->drm_format;
+	gvt_dmabuf->width = fb_info->width;
+	gvt_dmabuf->height = fb_info->height;
+	gvt_dmabuf->stride = fb_info->stride;
+	gvt_dmabuf->size = fb_info->size;
+	gvt_dmabuf->x_pos = fb_info->x_pos;
+	gvt_dmabuf->y_pos = fb_info->y_pos;
+}
+
+int intel_vgpu_query_plane(struct intel_vgpu *vgpu, void *args)
+{
+	struct drm_device *dev = &vgpu->gvt->dev_priv->drm;
+	struct vfio_device_gfx_plane_info *gfx_plane_info = args;
+	struct intel_vgpu_dmabuf_obj *dmabuf_obj;
+	struct intel_vgpu_fb_info fb_info;
+	int ret = 0;
+
+	if (gfx_plane_info->flags == (VFIO_GFX_PLANE_TYPE_DMABUF |
+				       VFIO_GFX_PLANE_TYPE_PROBE))
+		return ret;
+	else if ((gfx_plane_info->flags & ~VFIO_GFX_PLANE_TYPE_DMABUF) ||
+			(!gfx_plane_info->flags))
+		return -EINVAL;
+
+	ret = vgpu_get_plane_info(dev, vgpu, &fb_info,
+					gfx_plane_info->drm_plane_type);
+	if (ret != 0)
+		goto out;
+
+	mutex_lock(&vgpu->dmabuf_lock);
+	/* If exists, pick up the exposed dmabuf_obj */
+	dmabuf_obj = pick_dmabuf_by_info(vgpu, &fb_info);
+	if (dmabuf_obj) {
+		update_fb_info(gfx_plane_info, &fb_info);
+		gfx_plane_info->dmabuf_id = dmabuf_obj->dmabuf_id;
+
+		/* This buffer may be released between query_plane ioctl and
+		 * get_dmabuf ioctl. Add the refcount to make sure it won't
+		 * be released between the two ioctls.
+		 */
+		if (!dmabuf_obj->initref) {
+			dmabuf_obj->initref = true;
+			dmabuf_obj_get(dmabuf_obj);
+		}
+		ret = 0;
+		gvt_dbg_dpy("vgpu%d: re-use dmabuf_obj ref %d, id %d\n",
+			    vgpu->id, kref_read(&dmabuf_obj->kref),
+			    gfx_plane_info->dmabuf_id);
+		mutex_unlock(&vgpu->dmabuf_lock);
+		goto out;
+	}
+
+	mutex_unlock(&vgpu->dmabuf_lock);
+
+	/* Need to allocate a new one */
+	dmabuf_obj = kmalloc(sizeof(struct intel_vgpu_dmabuf_obj), GFP_KERNEL);
+	if (unlikely(!dmabuf_obj)) {
+		gvt_vgpu_err("alloc dmabuf_obj failed\n");
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	dmabuf_obj->info = kmalloc(sizeof(struct intel_vgpu_fb_info),
+				   GFP_KERNEL);
+	if (unlikely(!dmabuf_obj->info)) {
+		gvt_vgpu_err("allocate intel vgpu fb info failed\n");
+		ret = -ENOMEM;
+		goto out_free_dmabuf;
+	}
+	memcpy(dmabuf_obj->info, &fb_info, sizeof(struct intel_vgpu_fb_info));
+
+	((struct intel_vgpu_fb_info *)dmabuf_obj->info)->obj = dmabuf_obj;
+
+	dmabuf_obj->vgpu = vgpu;
+
+	ret = idr_alloc(&vgpu->object_idr, dmabuf_obj, 1, 0, GFP_NOWAIT);
+	if (ret < 0)
+		goto out_free_info;
+	gfx_plane_info->dmabuf_id = ret;
+	dmabuf_obj->dmabuf_id = ret;
+
+	dmabuf_obj->initref = true;
+
+	kref_init(&dmabuf_obj->kref);
+
+	mutex_lock(&vgpu->dmabuf_lock);
+	if (intel_gvt_hypervisor_get_vfio_device(vgpu)) {
+		gvt_vgpu_err("get vfio device failed\n");
+		mutex_unlock(&vgpu->dmabuf_lock);
+		goto out_free_info;
+	}
+	mutex_unlock(&vgpu->dmabuf_lock);
+
+	update_fb_info(gfx_plane_info, &fb_info);
+
+	INIT_LIST_HEAD(&dmabuf_obj->list);
+	mutex_lock(&vgpu->dmabuf_lock);
+	list_add_tail(&dmabuf_obj->list, &vgpu->dmabuf_obj_list_head);
+	mutex_unlock(&vgpu->dmabuf_lock);
+
+	gvt_dbg_dpy("vgpu%d: %s new dmabuf_obj ref %d, id %d\n", vgpu->id,
+		    __func__, kref_read(&dmabuf_obj->kref), ret);
+
+	return 0;
+
+out_free_info:
+	kfree(dmabuf_obj->info);
+out_free_dmabuf:
+	kfree(dmabuf_obj);
+out:
+	/* ENODEV means plane isn't ready, which might be a normal case. */
+	return (ret == -ENODEV) ? 0 : ret;
+}
+
+/* To associate an exposed dmabuf with the dmabuf_obj */
+int intel_vgpu_get_dmabuf(struct intel_vgpu *vgpu, void *args)
+{
+	struct drm_device *dev = &vgpu->gvt->dev_priv->drm;
+	struct vfio_device_gfx_dmabuf_fd *gvt_dmabuf_fd = args;
+	struct intel_vgpu_dmabuf_obj *dmabuf_obj;
+	struct drm_i915_gem_object *obj;
+	struct dma_buf *dmabuf;
+	int ret = 0;
+
+	mutex_lock(&vgpu->dmabuf_lock);
+
+	dmabuf_obj = pick_dmabuf_by_num(vgpu, gvt_dmabuf_fd->dmabuf_id);
+	if (dmabuf_obj == NULL) {
+		gvt_vgpu_err("invalid dmabuf id:%d\n",
+			     gvt_dmabuf_fd->dmabuf_id);
+		ret = -EINVAL;
+		goto out;
+	}
+
+	obj = vgpu_create_gem(dev, dmabuf_obj->info);
+	if (obj == NULL) {
+		gvt_vgpu_err("create gvt gem obj failed:%d\n", vgpu->id);
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	obj->gvt_info = dmabuf_obj->info;
+
+	dmabuf = i915_gem_prime_export(dev, &obj->base, DRM_CLOEXEC | DRM_RDWR);
+	if (IS_ERR(dmabuf)) {
+		gvt_vgpu_err("export dma-buf failed\n");
+		ret = PTR_ERR(dmabuf);
+		goto out_free_gem;
+	}
+	obj->base.dma_buf = dmabuf;
+
+	i915_gem_object_put(obj);
+
+	ret = dma_buf_fd(dmabuf, DRM_CLOEXEC | DRM_RDWR);
+	if (ret < 0) {
+		gvt_vgpu_err("create dma-buf fd failed ret:%d\n", ret);
+		goto out_free_dmabuf;
+	}
+	gvt_dmabuf_fd->dmabuf_fd = ret;
+
+	if (intel_gvt_hypervisor_get_vfio_device(vgpu)) {
+		gvt_vgpu_err("get vfio device failed\n");
+		put_unused_fd(ret);
+		goto out_free_dmabuf;
+	}
+
+	dmabuf_obj_get(dmabuf_obj);
+
+	if (dmabuf_obj->initref) {
+		dmabuf_obj->initref = false;
+		dmabuf_obj_put(dmabuf_obj);
+	}
+
+	mutex_unlock(&vgpu->dmabuf_lock);
+
+	gvt_dbg_dpy("vgpu%d: dmabuf:%d, dmabuf ref %d, fd:%d\n"
+		    "        file count: %ld, GEM ref: %d\n",
+		    vgpu->id, dmabuf_obj->dmabuf_id,
+		    kref_read(&dmabuf_obj->kref),
+		    gvt_dmabuf_fd->dmabuf_fd,
+		    file_count(dmabuf->file),
+		    kref_read(&obj->base.refcount));
+
+	return 0;
+
+out_free_dmabuf:
+	dma_buf_put(dmabuf);
+out_free_gem:
+	i915_gem_object_put(obj);
+out:
+	mutex_unlock(&vgpu->dmabuf_lock);
+	return ret;
+}
+
+void intel_vgpu_dmabuf_cleanup(struct intel_vgpu *vgpu)
+{
+	struct list_head *pos, *n;
+	struct intel_vgpu_dmabuf_obj *dmabuf_obj;
+
+	mutex_lock(&vgpu->dmabuf_lock);
+	list_for_each_safe(pos, n, &vgpu->dmabuf_obj_list_head) {
+		dmabuf_obj = container_of(pos, struct intel_vgpu_dmabuf_obj,
+						list);
+		if (dmabuf_obj->initref) {
+			dmabuf_obj->initref = false;
+			dmabuf_obj_put(dmabuf_obj);
+		}
+	}
+	mutex_unlock(&vgpu->dmabuf_lock);
+}
diff --git a/drivers/gpu/drm/i915/gvt/dmabuf.h b/drivers/gpu/drm/i915/gvt/dmabuf.h
new file mode 100644
index 0000000..d02c76c
--- /dev/null
+++ b/drivers/gpu/drm/i915/gvt/dmabuf.h
@@ -0,0 +1,65 @@
+/*
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * Authors:
+ *    Zhiyuan Lv <zhiyuan.lv@intel.com>
+ *
+ * Contributors:
+ *    Xiaoguang Chen
+ *    Tina Zhang <tina.zhang@intel.com>
+ */
+
+#ifndef _GVT_DMABUF_H_
+#define _GVT_DMABUF_H_
+#include <linux/vfio.h>
+
+struct intel_vgpu_fb_info {
+	__u64 start;
+	__u64 start_gpa;
+	__u64 drm_format_mod;
+	__u32 drm_format;	/* drm format of plane */
+	__u32 width;	/* width of plane */
+	__u32 height;	/* height of plane */
+	__u32 stride;	/* stride of plane */
+	__u32 size;	/* size of plane in bytes, align on page */
+	__u32 x_pos;	/* horizontal position of cursor plane */
+	__u32 y_pos;	/* vertical position of cursor plane */
+	struct intel_vgpu_dmabuf_obj *obj;
+};
+
+/**
+ * struct intel_vgpu_dmabuf_obj - Intel vGPU device buffer object
+ */
+struct intel_vgpu_dmabuf_obj {
+	struct intel_vgpu *vgpu;
+	struct intel_vgpu_fb_info *info;
+	__u32 dmabuf_id;
+	struct kref kref;
+	bool initref;
+	struct list_head list;
+};
+
+int intel_vgpu_query_plane(struct intel_vgpu *vgpu, void *args);
+int intel_vgpu_get_dmabuf(struct intel_vgpu *vgpu, void *args);
+void intel_vgpu_dmabuf_cleanup(struct intel_vgpu *vgpu);
+
+#endif
diff --git a/drivers/gpu/drm/i915/gvt/gvt.c b/drivers/gpu/drm/i915/gvt/gvt.c
index c27c683..bc7264f 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.c
+++ b/drivers/gpu/drm/i915/gvt/gvt.c
@@ -54,6 +54,8 @@ static const struct intel_gvt_ops intel_gvt_ops = {
 	.vgpu_reset = intel_gvt_reset_vgpu,
 	.vgpu_activate = intel_gvt_activate_vgpu,
 	.vgpu_deactivate = intel_gvt_deactivate_vgpu,
+	.vgpu_query_plane = intel_vgpu_query_plane,
+	.vgpu_get_dmabuf = intel_vgpu_get_dmabuf,
 };
 
 /**
diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
index 0f562ee..e67367e 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.h
+++ b/drivers/gpu/drm/i915/gvt/gvt.h
@@ -47,6 +47,7 @@
 #include "render.h"
 #include "cmd_parser.h"
 #include "fb_decoder.h"
+#include "dmabuf.h"
 
 #define GVT_MAX_VGPU 8
 
@@ -183,8 +184,16 @@ struct intel_vgpu {
 		struct kvm *kvm;
 		struct work_struct release_work;
 		atomic_t released;
+		struct vfio_device *vfio_device;
 	} vdev;
 #endif
+
+	struct list_head dmabuf_obj_list_head;
+	struct mutex dmabuf_lock;
+	struct idr object_idr;
+
+	struct completion vblank_done;
+
 };
 
 struct intel_gvt_gm {
@@ -500,6 +509,8 @@ struct intel_gvt_ops {
 	void (*vgpu_reset)(struct intel_vgpu *);
 	void (*vgpu_activate)(struct intel_vgpu *);
 	void (*vgpu_deactivate)(struct intel_vgpu *);
+	int (*vgpu_query_plane)(struct intel_vgpu *vgpu, void *);
+	int (*vgpu_get_dmabuf)(struct intel_vgpu *vgpu, void *);
 };
 
 
diff --git a/drivers/gpu/drm/i915/gvt/hypercall.h b/drivers/gpu/drm/i915/gvt/hypercall.h
index 32c345c..a1bd82f 100644
--- a/drivers/gpu/drm/i915/gvt/hypercall.h
+++ b/drivers/gpu/drm/i915/gvt/hypercall.h
@@ -56,6 +56,8 @@ struct intel_gvt_mpt {
 	int (*set_trap_area)(unsigned long handle, u64 start, u64 end,
 			     bool map);
 	int (*set_opregion)(void *vgpu);
+	int (*get_vfio_device)(void *vgpu);
+	void (*put_vfio_device)(void *vgpu);
 };
 
 extern struct intel_gvt_mpt xengt_mpt;
diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
index 6b0a330..d249054 100644
--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
+++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
@@ -492,10 +492,23 @@ static int intel_vgpu_register_reg(struct intel_vgpu *vgpu,
 	vgpu->vdev.region[vgpu->vdev.num_regions].flags = flags;
 	vgpu->vdev.region[vgpu->vdev.num_regions].data = data;
 	vgpu->vdev.num_regions++;
+	return 0;
+}
 
+static int kvmgt_get_vfio_device(void *p_vgpu)
+{
+	struct intel_vgpu *vgpu = (struct intel_vgpu *)p_vgpu;
+
+	vgpu->vdev.vfio_device = vfio_device_get_from_dev(
+		mdev_dev(vgpu->vdev.mdev));
+	if (!vgpu->vdev.vfio_device) {
+		gvt_vgpu_err("failed to get vfio device\n");
+		return -ENODEV;
+	}
 	return 0;
 }
 
+
 static int kvmgt_set_opregion(void *p_vgpu)
 {
 	struct intel_vgpu *vgpu = (struct intel_vgpu *)p_vgpu;
@@ -527,6 +540,14 @@ static int kvmgt_set_opregion(void *p_vgpu)
 	return ret;
 }
 
+static void kvmgt_put_vfio_device(void *vgpu)
+{
+	if (WARN_ON(!((struct intel_vgpu *)vgpu)->vdev.vfio_device))
+		return;
+
+	vfio_device_put(((struct intel_vgpu *)vgpu)->vdev.vfio_device);
+}
+
 static int intel_vgpu_create(struct kobject *kobj, struct mdev_device *mdev)
 {
 	struct intel_vgpu *vgpu = NULL;
@@ -1253,6 +1274,40 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
 	} else if (cmd == VFIO_DEVICE_RESET) {
 		intel_gvt_ops->vgpu_reset(vgpu);
 		return 0;
+	} else if (cmd == VFIO_DEVICE_QUERY_GFX_PLANE) {
+		struct vfio_device_gfx_plane_info dmabuf;
+		int ret = 0;
+
+		minsz = offsetofend(struct vfio_device_gfx_plane_info,
+				    dmabuf_id);
+		if (copy_from_user(&dmabuf, (void __user *)arg, minsz))
+			return -EFAULT;
+		if (dmabuf.argsz < minsz)
+			return -EINVAL;
+
+		ret = intel_gvt_ops->vgpu_query_plane(vgpu, &dmabuf);
+		if (ret != 0)
+			return ret;
+
+		return copy_to_user((void __user *)arg, &dmabuf, minsz) ?
+								-EFAULT : 0;
+	} else if (cmd == VFIO_DEVICE_GET_GFX_DMABUF) {
+		struct vfio_device_gfx_dmabuf_fd dmabuf_fd;
+		int ret = 0;
+
+		minsz = offsetofend(struct vfio_device_gfx_dmabuf_fd,
+				    dmabuf_fd);
+		if (copy_from_user(&dmabuf_fd, (void __user *)arg, minsz))
+			return -EFAULT;
+		if (dmabuf_fd.argsz < minsz)
+			return -EINVAL;
+
+		ret = intel_gvt_ops->vgpu_get_dmabuf(vgpu, &dmabuf_fd);
+		if (ret != 0)
+			return ret;
+
+		return copy_to_user((void __user *)arg, &dmabuf_fd, minsz) ?
+								-EFAULT : 0;
 	}
 
 	return 0;
@@ -1475,6 +1530,9 @@ static int kvmgt_guest_init(struct mdev_device *mdev)
 	kvmgt_protect_table_init(info);
 	gvt_cache_init(vgpu);
 
+	mutex_init(&vgpu->dmabuf_lock);
+	init_completion(&vgpu->vblank_done);
+
 	info->track_node.track_write = kvmgt_page_track_write;
 	info->track_node.track_flush_slot = kvmgt_page_track_flush_slot;
 	kvm_page_track_register_notifier(kvm, &info->track_node);
@@ -1616,6 +1674,8 @@ struct intel_gvt_mpt kvmgt_mpt = {
 	.write_gpa = kvmgt_write_gpa,
 	.gfn_to_mfn = kvmgt_gfn_to_pfn,
 	.set_opregion = kvmgt_set_opregion,
+	.get_vfio_device = kvmgt_get_vfio_device,
+	.put_vfio_device = kvmgt_put_vfio_device,
 };
 EXPORT_SYMBOL_GPL(kvmgt_mpt);
 
diff --git a/drivers/gpu/drm/i915/gvt/mpt.h b/drivers/gpu/drm/i915/gvt/mpt.h
index 9e73f2e..86443ff 100644
--- a/drivers/gpu/drm/i915/gvt/mpt.h
+++ b/drivers/gpu/drm/i915/gvt/mpt.h
@@ -307,4 +307,34 @@ static inline int intel_gvt_hypervisor_set_opregion(struct intel_vgpu *vgpu)
 	return intel_gvt_host.mpt->set_opregion(vgpu);
 }
 
+/**
+ * intel_gvt_hypervisor_get_vfio_device - increase vfio device ref count
+ * @vgpu: a vGPU
+ *
+ * Returns:
+ * Zero on success, negative error code if failed.
+ */
+static inline int intel_gvt_hypervisor_get_vfio_device(struct intel_vgpu *vgpu)
+{
+	if (!intel_gvt_host.mpt->get_vfio_device)
+		return 0;
+
+	return intel_gvt_host.mpt->get_vfio_device(vgpu);
+}
+
+/**
+ * intel_gvt_hypervisor_put_vfio_device - decrease vfio device ref count
+ * @vgpu: a vGPU
+ *
+ * This function returns no value; it is a no-op if the hypervisor backend
+ * does not implement put_vfio_device.
+ */
+static inline void intel_gvt_hypervisor_put_vfio_device(struct intel_vgpu *vgpu)
+{
+	if (!intel_gvt_host.mpt->put_vfio_device)
+		return;
+
+	intel_gvt_host.mpt->put_vfio_device(vgpu);
+}
+
 #endif /* _GVT_MPT_H_ */
diff --git a/drivers/gpu/drm/i915/gvt/vgpu.c b/drivers/gpu/drm/i915/gvt/vgpu.c
index bcbf535..326c8c7 100644
--- a/drivers/gpu/drm/i915/gvt/vgpu.c
+++ b/drivers/gpu/drm/i915/gvt/vgpu.c
@@ -232,6 +232,7 @@ void intel_gvt_deactivate_vgpu(struct intel_vgpu *vgpu)
 	}
 
 	intel_vgpu_stop_schedule(vgpu);
+	intel_vgpu_dmabuf_cleanup(vgpu);
 
 	mutex_unlock(&gvt->lock);
 }
@@ -261,6 +262,7 @@ void intel_gvt_destroy_vgpu(struct intel_vgpu *vgpu)
 	intel_gvt_hypervisor_detach_vgpu(vgpu);
 	intel_vgpu_free_resource(vgpu);
 	intel_vgpu_clean_mmio(vgpu);
+	intel_vgpu_dmabuf_cleanup(vgpu);
 	vfree(vgpu);
 
 	intel_gvt_update_vgpu_types(gvt);
@@ -346,7 +348,8 @@ static struct intel_vgpu *__intel_gvt_create_vgpu(struct intel_gvt *gvt,
 	vgpu->gvt = gvt;
 	vgpu->sched_ctl.weight = param->weight;
 	bitmap_zero(vgpu->tlb_handle_pending, I915_NUM_ENGINES);
-
+	INIT_LIST_HEAD(&vgpu->dmabuf_obj_list_head);
+	idr_init(&vgpu->object_idr);
 	intel_vgpu_init_cfg_space(vgpu, param->primary);
 
 	ret = intel_vgpu_init_mmio(vgpu);
diff --git a/drivers/gpu/drm/i915/i915_gem_object.h b/drivers/gpu/drm/i915/i915_gem_object.h
index f3b382a..c91e336 100644
--- a/drivers/gpu/drm/i915/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/i915_gem_object.h
@@ -199,6 +199,8 @@ struct drm_i915_gem_object {
 		} userptr;
 
 		unsigned long scratch;
+
+		void *gvt_info;
 	};
 
 	/** for phys allocated objects */
-- 
2.7.4

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* ✗ Fi.CI.BAT: failure for drm/i915/gvt: Dma-buf support for GVT-g
  2017-10-10  9:50 [PATCH v15 0/7] drm/i915/gvt: Dma-buf support for GVT-g Tina Zhang
                   ` (6 preceding siblings ...)
  2017-10-10  9:50   ` Tina Zhang
@ 2017-10-10 10:05 ` Patchwork
  2017-10-10 14:06   ` Gerd Hoffmann
  8 siblings, 0 replies; 31+ messages in thread
From: Patchwork @ 2017-10-10 10:05 UTC (permalink / raw)
  To: Tina Zhang; +Cc: intel-gfx

== Series Details ==

Series: drm/i915/gvt: Dma-buf support for GVT-g
URL   : https://patchwork.freedesktop.org/series/31638/
State : failure

== Summary ==

  CHK     include/config/kernel.release
  CHK     include/generated/uapi/linux/version.h
  CHK     include/generated/utsrelease.h
  CHK     include/generated/bounds.h
  CHK     include/generated/timeconst.h
  CHK     include/generated/asm-offsets.h
  CALL    scripts/checksyscalls.sh
  CHK     scripts/mod/devicetable-offsets.h
  CHK     include/generated/compile.h
  CHK     kernel/config_data.h
  CC [M]  drivers/gpu/drm/i915/gvt/dmabuf.o
drivers/gpu/drm/i915/gvt/dmabuf.c:132:15: error: initialization from incompatible pointer type [-Werror=incompatible-pointer-types]
  .get_pages = vgpu_gem_get_pages,
               ^~~~~~~~~~~~~~~~~~
drivers/gpu/drm/i915/gvt/dmabuf.c:132:15: note: (near initialization for ‘intel_vgpu_gem_ops.get_pages’)
cc1: all warnings being treated as errors
scripts/Makefile.build:313: recipe for target 'drivers/gpu/drm/i915/gvt/dmabuf.o' failed
make[4]: *** [drivers/gpu/drm/i915/gvt/dmabuf.o] Error 1
scripts/Makefile.build:572: recipe for target 'drivers/gpu/drm/i915' failed
make[3]: *** [drivers/gpu/drm/i915] Error 2
scripts/Makefile.build:572: recipe for target 'drivers/gpu/drm' failed
make[2]: *** [drivers/gpu/drm] Error 2
scripts/Makefile.build:572: recipe for target 'drivers/gpu' failed
make[1]: *** [drivers/gpu] Error 2
Makefile:1019: recipe for target 'drivers' failed
make: *** [drivers] Error 2

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v15 6/7] drm/i915: Introduce GEM proxy
  2017-10-10  9:50   ` Tina Zhang
  (?)
@ 2017-10-10 10:58   ` Joonas Lahtinen
  2017-10-10 13:02     ` [Intel-gfx] " Joonas Lahtinen
  -1 siblings, 1 reply; 31+ messages in thread
From: Joonas Lahtinen @ 2017-10-10 10:58 UTC (permalink / raw)
  To: Tina Zhang, alex.williamson, kraxel, chris, zhenyuw, zhiyuan.lv,
	zhi.a.wang, kevin.tian, daniel, kwankhede
  Cc: intel-gfx, intel-gvt-dev, linux-kernel, Daniel Vetter

On Tue, 2017-10-10 at 17:50 +0800, Tina Zhang wrote:
> GEM proxy is a kind of GEM, whose backing physical memory is pinned
> and produced by guest VM and is used by host as read only. With GEM
> proxy, host is able to access guest physical memory through GEM object
> interface. As GEM proxy is such a special kind of GEM, a new flag
> I915_GEM_OBJECT_IS_PROXY is introduced to ban host from changing the
> backing storage of GEM proxy.
> 
> v14:
> - return -ENXIO when gem proxy object is banned by ioctl.
>   (Chris) (Daniel)
> 
> v13:
> - add comments to GEM proxy. (Chris)
> - don't ban GEM proxy in i915_gem_sw_finish_ioctl. (Chris)
> - check GEM proxy bar after finishing i915_gem_object_wait. (Chris)
> - remove GEM proxy bar in i915_gem_madvise_ioctl.
> 
> v6:
> - add gem proxy barrier in the following ioctls. (Chris)
>   i915_gem_set_caching_ioctl
>   i915_gem_set_domain_ioctl
>   i915_gem_sw_finish_ioctl
>   i915_gem_set_tiling_ioctl
>   i915_gem_madvise_ioctl
> 
> Signed-off-by: Tina Zhang <tina.zhang@intel.com>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

<SNIP>

> +++ b/drivers/gpu/drm/i915/i915_gem_object.h
> @@ -39,6 +39,7 @@ struct drm_i915_gem_object_ops {
>         unsigned int flags;
>  #define I915_GEM_OBJECT_HAS_STRUCT_PAGE BIT(0)
>  #define I915_GEM_OBJECT_IS_SHRINKABLE   BIT(1)
> +#define I915_GEM_OBJECT_IS_PROXY       BIT(2)

Please fix the indent to match. Do convert the above two lines to use
TAB character too.
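
Something like this is what I mean (illustrative only, whitespace as tabs):

#define I915_GEM_OBJECT_HAS_STRUCT_PAGE	BIT(0)
#define I915_GEM_OBJECT_IS_SHRINKABLE	BIT(1)
#define I915_GEM_OBJECT_IS_PROXY	BIT(2)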

<SNIP>

> @@ -1690,7 +1704,7 @@ i915_gem_mmap_ioctl(struct drm_device *dev, void *data,
>  	 */
>  	if (!obj->base.filp) {
>  		i915_gem_object_put(obj);
> -		return -EINVAL;
> +		return -ENXIO;
>  	}

This still needs to be a separate patch.

With those fixes, this is;

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Intel-gfx] [PATCH v15 6/7] drm/i915: Introduce GEM proxy
  2017-10-10 10:58   ` Joonas Lahtinen
@ 2017-10-10 13:02     ` Joonas Lahtinen
  2017-10-11  1:54         ` Zhang, Tina
  0 siblings, 1 reply; 31+ messages in thread
From: Joonas Lahtinen @ 2017-10-10 13:02 UTC (permalink / raw)
  To: Tina Zhang, alex.williamson, kraxel, chris, zhenyuw, zhiyuan.lv,
	zhi.a.wang, kevin.tian, daniel, kwankhede
  Cc: Daniel Vetter, intel-gfx, intel-gvt-dev, linux-kernel

On Tue, 2017-10-10 at 13:58 +0300, Joonas Lahtinen wrote:
> On Tue, 2017-10-10 at 17:50 +0800, Tina Zhang wrote:
> > GEM proxy is a kind of GEM, whose backing physical memory is pinned
> > and produced by guest VM and is used by host as read only. With GEM
> > proxy, host is able to access guest physical memory through GEM object
> > interface. As GEM proxy is such a special kind of GEM, a new flag
> > I915_GEM_OBJECT_IS_PROXY is introduced to ban host from changing the
> > backing storage of GEM proxy.
> > 
> > v14:
> > - return -ENXIO when gem proxy object is banned by ioctl.
> >   (Chris) (Daniel)
> > 
> > v13:
> > - add comments to GEM proxy. (Chris)
> > - don't ban GEM proxy in i915_gem_sw_finish_ioctl. (Chris)
> > - check GEM proxy bar after finishing i915_gem_object_wait. (Chris)
> > - remove GEM proxy bar in i915_gem_madvise_ioctl.
> > 
> > v6:
> > - add gem proxy barrier in the following ioctls. (Chris)
> >   i915_gem_set_caching_ioctl
> >   i915_gem_set_domain_ioctl
> >   i915_gem_sw_finish_ioctl
> >   i915_gem_set_tiling_ioctl
> >   i915_gem_madvise_ioctl
> > 
> > Signed-off-by: Tina Zhang <tina.zhang@intel.com>
> > Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> > Cc: Chris Wilson <chris@chris-wilson.co.uk>
> > Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> 
> <SNIP>
> 
> > +++ b/drivers/gpu/drm/i915/i915_gem_object.h
> > @@ -39,6 +39,7 @@ struct drm_i915_gem_object_ops {
> >         unsigned int flags;
> >  #define I915_GEM_OBJECT_HAS_STRUCT_PAGE BIT(0)
> >  #define I915_GEM_OBJECT_IS_SHRINKABLE   BIT(1)
> > +#define I915_GEM_OBJECT_IS_PROXY       BIT(2)
> 
> Please fix the indent to match. Do convert the above two lines to use
> TAB character too.
> 
> <SNIP>
> 
> > @@ -1690,7 +1704,7 @@ i915_gem_mmap_ioctl(struct drm_device *dev, void *data,
> >  	 */
> >  	if (!obj->base.filp) {
> >  		i915_gem_object_put(obj);
> > -		return -EINVAL;
> > +		return -ENXIO;
> >  	}
> 
> This still needs to be a separate patch.
> 
> With those fixes, this is;
> 
> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Also, send the resulting two patches (this one and the split-out patch) as a
standalone series for easier testing and merging.

CI seems to have hard time applying the whole series.

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v15 0/7] drm/i915/gvt: Dma-buf support for GVT-g
  2017-10-10  9:50 [PATCH v15 0/7] drm/i915/gvt: Dma-buf support for GVT-g Tina Zhang
@ 2017-10-10 14:06   ` Gerd Hoffmann
  2017-10-10  9:50   ` Tina Zhang
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 31+ messages in thread
From: Gerd Hoffmann @ 2017-10-10 14:06 UTC (permalink / raw)
  To: Tina Zhang, alex.williamson, chris, zhenyuw, zhiyuan.lv,
	zhi.a.wang, kevin.tian, daniel, kwankhede
  Cc: intel-gfx, intel-gvt-dev, linux-kernel

On Tue, 2017-10-10 at 17:50 +0800, Tina Zhang wrote:
> This patch set adds the dma-buf support for intel GVT-g.
> 
> dma-buf is an uniform mechanism to share DMA buffers across different
> devices and subsystems. dma-buf for intel GVT-g is mainly used to
> share
> the vgpu's framebuffer to userspace to leverage userspace graphics
> stacks
> to render the framebuffer to the display monitor.
> 
> The main idea is that we create a gem object and set vgpu's
> framebuffer as
> its backing storage. Then, export a dma-buf associated with this gem
> object.
> With the fd of this dma-buf, userspace can directly handle this
> buffer.

Tested-by: Gerd Hoffmann <kraxel@redhat.com>

cheers,
  Gerd

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v15 5/7] vfio: ABI for mdev display dma-buf operation
  2017-10-10  9:50   ` Tina Zhang
  (?)
@ 2017-10-10 14:07   ` Gerd Hoffmann
  -1 siblings, 0 replies; 31+ messages in thread
From: Gerd Hoffmann @ 2017-10-10 14:07 UTC (permalink / raw)
  To: Tina Zhang, alex.williamson, chris, zhenyuw, zhiyuan.lv,
	zhi.a.wang, kevin.tian, daniel, kwankhede
  Cc: intel-gfx, intel-gvt-dev, linux-kernel, Daniel Vetter

On Tue, 2017-10-10 at 17:50 +0800, Tina Zhang wrote:
> Add VFIO_DEVICE_QUERY_GFX_PLANE ioctl command to let user mode query
> and
> get the plan and its related information. This ioctl can be invoked
> with:
> 1) either flag DMABUF or REGION is set. Vendor driver returns success
> and
> the plane_info only when the specific kind of buffer is supported.
> 2) flag PROBE is set with either DMABUF or REGION. Vendor driver
> returns
> success only when the specific kind of buffer is supported.
> 
> The dma-buf's life cycle is handled by user mode and tracked by
> kernel.
> The returned fd in struct vfio_device_query_gfx_plane can be a new
> fd or an old fd of a re-exported dma-buf. Host user mode can check
> the
> value of fd and to see if it needs to create new resource according
> to
> the new fd or just use the existed resource related to the old fd.

Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>

cheers,
  Gerd

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v15 5/7] vfio: ABI for mdev display dma-buf operation
  2017-10-10  9:50   ` Tina Zhang
  (?)
  (?)
@ 2017-10-10 19:16   ` Alex Williamson
  2017-10-11  1:46       ` Zhang, Tina
  -1 siblings, 1 reply; 31+ messages in thread
From: Alex Williamson @ 2017-10-10 19:16 UTC (permalink / raw)
  To: Tina Zhang
  Cc: kraxel, chris, zhenyuw, zhiyuan.lv, zhi.a.wang, kevin.tian,
	daniel, kwankhede, intel-gfx, intel-gvt-dev, linux-kernel,
	Daniel Vetter

On Tue, 10 Oct 2017 17:50:05 +0800
Tina Zhang <tina.zhang@intel.com> wrote:

> Add VFIO_DEVICE_QUERY_GFX_PLANE ioctl command to let user mode query and
> get the plan and its related information. This ioctl can be invoked with:

s/plan/plane/

> 1) either flag DMABUF or REGION is set. Vendor driver returns success and
> the plane_info only when the specific kind of buffer is supported.
> 2) flag PROBE is set with either DMABUF or REGION. Vendor driver returns
> success only when the specific kind of buffer is supported.
> 
> The dma-buf's life cycle is handled by user mode and tracked by kernel.
> The returned fd in struct vfio_device_query_gfx_plane can be a new
> fd or an old fd of a re-exported dma-buf. Host user mode can check the
> value of fd and to see if it needs to create new resource according to
> the new fd or just use the existed resource related to the old fd.
> 
> v15:
> - add a ioctl to get a dmabuf for a given dmabuf id. (Gerd)
> 
> v14:
> - add PROBE, DMABUF and REGION flags. (Alex)
> 
> v12:
> - add drm_format_mod back. (Gerd and Zhenyu)
> - add region_index. (Gerd)
> 
> v11:
> - rename plane_type to drm_plane_type. (Gerd)
> - move fields of vfio_device_query_gfx_plane to vfio_device_gfx_plane_info.
>   (Gerd)
> - remove drm_format_mod, start fields. (Daniel)
> - remove plane_id.
> 
> v10:
> - refine the ABI API VFIO_DEVICE_QUERY_GFX_PLANE. (Alex) (Gerd)
> 
> v3:
> - add a field gvt_plane_info in the drm_i915_gem_obj structure to save
>   the decoded plane information to avoid look up while need the plane
>   info. (Gerd)
> 
> Signed-off-by: Tina Zhang <tina.zhang@intel.com>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Cc: Alex Williamson <alex.williamson@redhat.com>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> ---
>  include/uapi/linux/vfio.h | 62 +++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 62 insertions(+)
> 
> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> index ae46105..fdf9a9c 100644
> --- a/include/uapi/linux/vfio.h
> +++ b/include/uapi/linux/vfio.h
> @@ -502,6 +502,68 @@ struct vfio_pci_hot_reset {
>  
>  #define VFIO_DEVICE_PCI_HOT_RESET	_IO(VFIO_TYPE, VFIO_BASE + 13)
>  
> +/**
> + * VFIO_DEVICE_QUERY_GFX_PLANE - _IOW(VFIO_TYPE, VFIO_BASE + 14,
> + *                                    struct vfio_device_query_gfx_plane)
> + *
> + * Set the drm_plane_type and flags, then retrieve the gfx plane info.
> + *
> + * flags supported:
> + * - VFIO_GFX_PLANE_TYPE_PROBE and VFIO_GFX_PLANE_TYPE_DMABUF are set
> + *   to ask if the mdev supports dma-buf. 0 on support, -EINVAL on no
> + *   support for dma-buf.
> + * - VFIO_GFX_PLANE_TYPE_PROBE and VFIO_GFX_PLANE_TYPE_REGION are set
> + *   to ask if the mdev supports region. 0 on support, -EINVAL on no
> + *   support for region.
> + * - VFIO_GFX_PLANE_TYPE_DMABUF or VFIO_GFX_PLANE_TYPE_REGION is set
> + *   with each call to query the plane info.

So dmabuf_id is effectively just a token that can be fed into
GET_GFX_DMABUF to get the fd.  The implementation of the token is
vendor specific, but can be thought of as some sort of sequence ID or
generation ID (but not necessarily monotonically increasing), so
GET_GFX_DMABUF may fail if the previously provided dmabuf_id is no
longer valid.  Do I have this correct?

> + * - Others are invalid and return -EINVAL.

And I see that in patch 7/7 that i915 is checking explicitly for only
these flag combinations, great!

> + *
> + * Return: 0 on success, -ENODEV with all out fields zero on mdev
> + * device initialization, -errno on other failure.
> + */
> +struct vfio_device_gfx_plane_info {
> +	__u32 argsz;
> +	__u32 flags;
> +#define VFIO_GFX_PLANE_TYPE_PROBE (1 << 0)
> +#define VFIO_GFX_PLANE_TYPE_DMABUF (1 << 1)
> +#define VFIO_GFX_PLANE_TYPE_REGION (1 << 2)
> +	/* in */
> +	__u32 drm_plane_type;	/* type of plane: DRM_PLANE_TYPE_* */
> +	/* out */
> +	__u32 drm_format;	/* drm format of plane */
> +	__u64 drm_format_mod;   /* tiled mode */
> +	__u32 width;	/* width of plane */
> +	__u32 height;	/* height of plane */
> +	__u32 stride;	/* stride of plane */
> +	__u32 size;	/* size of plane in bytes, align on page*/
> +	__u32 x_pos;	/* horizontal position of cursor plane */
> +	__u32 y_pos;	/* vertical position of cursor plane*/
> +	union {
> +		__u32 region_index;	/* region index */
> +		__s32 dmabuf_id;	/* dma-buf fd */

"dma-buf fd", but it's not an fd.  Why is this signed since it's no
longer an fd?

> +	};
> +};
> +
> +#define VFIO_DEVICE_QUERY_GFX_PLANE _IO(VFIO_TYPE, VFIO_BASE + 14)
> +
> +/**
> + * VFIO_DEVICE_GET_GFX_DMABUF - _IOW(VFIO_TYPE, VFIO_BASE + 15,
> + *				    struct vfio_device_gfx_dmabuf_fd)
> + *
> + * Return: 0 on success, -errno on failure.
> + */

So given a dmabuf_id, return a dmabuf_fd, which may be the same as a
fd previously returned to the user.  In the latter case, can we assume
that there's no new reference added to the existing fd?  Is the user
permitted to cache the dmabuf_id to dmabuf_fd mapping?  Similarly, if a
user already has a dmabuf_fd, can calling GET_GFX_DMABUF ever fail or
return a different result for the same dmabuf_id which provided that
fd?  Maybe we could define some of these behaviors in the header file
to keep vendor implementations consistent.

> +struct vfio_device_gfx_dmabuf_fd {
> +	__u32 argsz;
> +	__u32 flags;

Patch 7/7 however fails to validate this flags field.  It needs to
return -EINVAL if any unsupported flags are set, which at this point is
all of them.

> +	/* in */
> +	__u32 dmabuf_id;

This unsigned dmabuf_id is probably correct, but it doesn't match the
signed one provided above.

> +	/* out */
> +	__s32 dmabuf_fd;
> +};

VFIO_GROUP_GET_DEVICE_FD returns the fd through the return code, is
there a reason to do it differently here?  GET_DEVICE_FD also only
takes a char parameter without the extra future proofing, I wonder if
this ioctl is simple enough to do the same.

> +
> +#define VFIO_DEVICE_GET_GFX_DMABUF _IO(VFIO_TYPE, VFIO_BASE + 15)
> +
>  /* -------- API for Type1 VFIO IOMMU -------- */
>  
>  /**

Thanks,
Alex

^ permalink raw reply	[flat|nested] 31+ messages in thread

* RE: [PATCH v15 5/7] vfio: ABI for mdev display dma-buf operation
  2017-10-10 19:16   ` Alex Williamson
@ 2017-10-11  1:46       ` Zhang, Tina
  0 siblings, 0 replies; 31+ messages in thread
From: Zhang, Tina @ 2017-10-11  1:46 UTC (permalink / raw)
  To: Alex Williamson
  Cc: kraxel, chris, zhenyuw, Lv, Zhiyuan, Wang, Zhi A, Tian, Kevin,
	daniel, kwankhede, intel-gfx, intel-gvt-dev, linux-kernel,
	Daniel Vetter



> -----Original Message-----
> From: Alex Williamson [mailto:alex.williamson@redhat.com]
> Sent: Wednesday, October 11, 2017 3:17 AM
> To: Zhang, Tina <tina.zhang@intel.com>
> Cc: kraxel@redhat.com; chris@chris-wilson.co.uk; zhenyuw@linux.intel.com;
> Lv, Zhiyuan <zhiyuan.lv@intel.com>; Wang, Zhi A <zhi.a.wang@intel.com>;
> Tian, Kevin <kevin.tian@intel.com>; daniel@ffwll.ch; kwankhede@nvidia.com;
> intel-gfx@lists.freedesktop.org; intel-gvt-dev@lists.freedesktop.org; linux-
> kernel@vger.kernel.org; Daniel Vetter <daniel.vetter@ffwll.ch>
> Subject: Re: [PATCH v15 5/7] vfio: ABI for mdev display dma-buf operation
> 
> On Tue, 10 Oct 2017 17:50:05 +0800
> Tina Zhang <tina.zhang@intel.com> wrote:
> 
> > Add VFIO_DEVICE_QUERY_GFX_PLANE ioctl command to let user mode query
> > and get the plan and its related information. This ioctl can be invoked with:
> 
> s/plan/plane/
Sorry about this typo :).

> 
> > 1) either flag DMABUF or REGION is set. Vendor driver returns success
> > and the plane_info only when the specific kind of buffer is supported.
> > 2) flag PROBE is set with either DMABUF or REGION. Vendor driver
> > returns success only when the specific kind of buffer is supported.
> >
> > The dma-buf's life cycle is handled by user mode and tracked by kernel.
> > The returned fd in struct vfio_device_query_gfx_plane can be a new fd
> > or an old fd of a re-exported dma-buf. Host user mode can check the
> > value of fd and to see if it needs to create new resource according to
> > the new fd or just use the existed resource related to the old fd.
> >
> > v15:
> > - add a ioctl to get a dmabuf for a given dmabuf id. (Gerd)
> >
> > v14:
> > - add PROBE, DMABUF and REGION flags. (Alex)
> >
> > v12:
> > - add drm_format_mod back. (Gerd and Zhenyu)
> > - add region_index. (Gerd)
> >
> > v11:
> > - rename plane_type to drm_plane_type. (Gerd)
> > - move fields of vfio_device_query_gfx_plane to vfio_device_gfx_plane_info.
> >   (Gerd)
> > - remove drm_format_mod, start fields. (Daniel)
> > - remove plane_id.
> >
> > v10:
> > - refine the ABI API VFIO_DEVICE_QUERY_GFX_PLANE. (Alex) (Gerd)
> >
> > v3:
> > - add a field gvt_plane_info in the drm_i915_gem_obj structure to save
> >   the decoded plane information to avoid look up while need the plane
> >   info. (Gerd)
> >
> > Signed-off-by: Tina Zhang <tina.zhang@intel.com>
> > Cc: Gerd Hoffmann <kraxel@redhat.com>
> > Cc: Alex Williamson <alex.williamson@redhat.com>
> > Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> > ---
> >  include/uapi/linux/vfio.h | 62
> > +++++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 62 insertions(+)
> >
> > diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> > index ae46105..fdf9a9c 100644
> > --- a/include/uapi/linux/vfio.h
> > +++ b/include/uapi/linux/vfio.h
> > @@ -502,6 +502,68 @@ struct vfio_pci_hot_reset {
> >
> >  #define VFIO_DEVICE_PCI_HOT_RESET	_IO(VFIO_TYPE, VFIO_BASE +
> 13)
> >
> > +/**
> > + * VFIO_DEVICE_QUERY_GFX_PLANE - _IOW(VFIO_TYPE, VFIO_BASE + 14,
> > + *                                    struct vfio_device_query_gfx_plane)
> > + *
> > + * Set the drm_plane_type and flags, then retrieve the gfx plane info.
> > + *
> > + * flags supported:
> > + * - VFIO_GFX_PLANE_TYPE_PROBE and VFIO_GFX_PLANE_TYPE_DMABUF
> are set
> > + *   to ask if the mdev supports dma-buf. 0 on support, -EINVAL on no
> > + *   support for dma-buf.
> > + * - VFIO_GFX_PLANE_TYPE_PROBE and VFIO_GFX_PLANE_TYPE_REGION
> are set
> > + *   to ask if the mdev supports region. 0 on support, -EINVAL on no
> > + *   support for region.
> > + * - VFIO_GFX_PLANE_TYPE_DMABUF or VFIO_GFX_PLANE_TYPE_REGION
> is set
> > + *   with each call to query the plane info.
> 
> So dmabuf_id is effectively just a token that can be fed into GET_GFX_DMABUF
> to get the fd.  The implementation of the token is vendor specific, but can be
> thought of as some sort of sequence ID or generation ID (but not necessarily
> monotonically increasing), so GET_GFX_DMABUF may fail if the previously
> provided dmabuf_id is no longer valid.  Do I have this correct?
Exactly. GET_GFX_DMABUF may fail if the dmabuf_id is no longer valid;
-EINVAL is returned in that case.
A dmabuf_id is invalid when:
1) user space has closed all of the dma-buf fds exported for that dmabuf_id, or
2) the dmabuf_id was never returned by QUERY_GFX_PLANE.
I will add comments for this. Thanks.

> 
> > + * - Others are invalid and return -EINVAL.
> 
> And I see that in patch 7/7 that i915 is checking explicitly for only these flag
> combinations, great!
> 
> > + *
> > + * Return: 0 on success, -ENODEV with all out fields zero on mdev
> > + * device initialization, -errno on other failure.
> > + */
> > +struct vfio_device_gfx_plane_info {
> > +	__u32 argsz;
> > +	__u32 flags;
> > +#define VFIO_GFX_PLANE_TYPE_PROBE (1 << 0) #define
> > +VFIO_GFX_PLANE_TYPE_DMABUF (1 << 1) #define
> > +VFIO_GFX_PLANE_TYPE_REGION (1 << 2)
> > +	/* in */
> > +	__u32 drm_plane_type;	/* type of plane: DRM_PLANE_TYPE_* */
> > +	/* out */
> > +	__u32 drm_format;	/* drm format of plane */
> > +	__u64 drm_format_mod;   /* tiled mode */
> > +	__u32 width;	/* width of plane */
> > +	__u32 height;	/* height of plane */
> > +	__u32 stride;	/* stride of plane */
> > +	__u32 size;	/* size of plane in bytes, align on page*/
> > +	__u32 x_pos;	/* horizontal position of cursor plane */
> > +	__u32 y_pos;	/* vertical position of cursor plane*/
> > +	union {
> > +		__u32 region_index;	/* region index */
> > +		__s32 dmabuf_id;	/* dma-buf fd */
> 
> "dma-buf fd", but it's not an fd.  Why is this signed since it's no longer an fd?
Again, sorry about the typo :)

> 
> > +	};
> > +};
> > +
> > +#define VFIO_DEVICE_QUERY_GFX_PLANE _IO(VFIO_TYPE, VFIO_BASE + 14)
> > +
> > +/**
> > + * VFIO_DEVICE_GET_GFX_DMABUF - _IOW(VFIO_TYPE, VFIO_BASE + 15,
> > + *				    struct vfio_device_gfx_dmabuf_fd)
> > + *
> > + * Return: 0 on success, -errno on failure.
> > + */
> 
> So given a dmabuf_id, return a dmabuf_fd, which may be the same as a fd
> previously returned to the user.  In the latter case, can we assume that there's
> no new reference added to the existing fd?  Is the user permitted to cache the
> dmabuf_id to dmabuf_fd mapping?  Similarly, if a user already has a dmabuf_fd,
> can calling GET_GFX_DMABUF ever fail or return a different result for the same
> dmabuf_id which provided that fd?  Maybe we could define some of these
> behaviors in the header file to keep vendor implementations consistent.

Actually, a new fd is returned each time GET_GFX_DMABUF is invoked, even with
the same dmabuf_id. The benefit is that dma-buf life cycle management is left
to userspace: the dma-buf is released once all of its fds in userspace are closed.

Then, I think I need to add a comment like this:
"dmabuf_fd returns a new fd each time GET_GFX_DMABUF is invoked"


> 
> > +struct vfio_device_gfx_dmabuf_fd {
> > +	__u32 argsz;
> > +	__u32 flags;
> 
> Patch 7/7 however fails to validate this flags field.  It needs to return -EINVAL if
> any unsupported flags are set, which at this point is all of them.
Indeed.
 
> 
> > +	/* in */
> > +	__u32 dmabuf_id;
> 
> This unsigned dmabuf_id is probably correct, but it doesn't match the signed
> one provided above.
Will make them both unsigned :)

> 
> > +	/* out */
> > +	__s32 dmabuf_fd;
> > +};
> 
> VFIO_GROUP_GET_DEVICE_FD returns the fd through the return code, is there
> a reason to do it differently here?  GET_DEVICE_FD also only takes a char
> parameter without the extra future proofing, I wonder if this ioctl is simple
> enough to do the same.
So that means we don't need the "flags" field and would change dmabuf_id to a
character string. Maybe that's fine for the current usage, but I'm not sure it's
suitable for the future.
Gerd, Zhenyu and Zhi, what do you think about it?

Thanks.
Tina
> 
> > +
> > +#define VFIO_DEVICE_GET_GFX_DMABUF _IO(VFIO_TYPE, VFIO_BASE + 15)
> > +
> >  /* -------- API for Type1 VFIO IOMMU -------- */
> >
> >  /**
> 
> Thanks,
> Alex

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v15 5/7] vfio: ABI for mdev display dma-buf operation
@ 2017-10-11  1:46       ` Zhang, Tina
  0 siblings, 0 replies; 31+ messages in thread
From: Zhang, Tina @ 2017-10-11  1:46 UTC (permalink / raw)
  To: Alex Williamson
  Cc: Daniel Vetter, intel-gfx, linux-kernel, kwankhede, Lv, Zhiyuan,
	intel-gvt-dev, kraxel



> -----Original Message-----
> From: Alex Williamson [mailto:alex.williamson@redhat.com]
> Sent: Wednesday, October 11, 2017 3:17 AM
> To: Zhang, Tina <tina.zhang@intel.com>
> Cc: kraxel@redhat.com; chris@chris-wilson.co.uk; zhenyuw@linux.intel.com;
> Lv, Zhiyuan <zhiyuan.lv@intel.com>; Wang, Zhi A <zhi.a.wang@intel.com>;
> Tian, Kevin <kevin.tian@intel.com>; daniel@ffwll.ch; kwankhede@nvidia.com;
> intel-gfx@lists.freedesktop.org; intel-gvt-dev@lists.freedesktop.org; linux-
> kernel@vger.kernel.org; Daniel Vetter <daniel.vetter@ffwll.ch>
> Subject: Re: [PATCH v15 5/7] vfio: ABI for mdev display dma-buf operation
> 
> On Tue, 10 Oct 2017 17:50:05 +0800
> Tina Zhang <tina.zhang@intel.com> wrote:
> 
> > Add VFIO_DEVICE_QUERY_GFX_PLANE ioctl command to let user mode query
> > and get the plan and its related information. This ioctl can be invoked with:
> 
> s/plan/plane/
Sorry about this typo :).

> 
> > 1) either flag DMABUF or REGION is set. Vendor driver returns success
> > and the plane_info only when the specific kind of buffer is supported.
> > 2) flag PROBE is set with either DMABUF or REGION. Vendor driver
> > returns success only when the specific kind of buffer is supported.
> >
> > The dma-buf's life cycle is handled by user mode and tracked by kernel.
> > The returned fd in struct vfio_device_query_gfx_plane can be a new fd
> > or an old fd of a re-exported dma-buf. Host user mode can check the
> > value of fd and to see if it needs to create new resource according to
> > the new fd or just use the existed resource related to the old fd.
> >
> > v15:
> > - add a ioctl to get a dmabuf for a given dmabuf id. (Gerd)
> >
> > v14:
> > - add PROBE, DMABUF and REGION flags. (Alex)
> >
> > v12:
> > - add drm_format_mod back. (Gerd and Zhenyu)
> > - add region_index. (Gerd)
> >
> > v11:
> > - rename plane_type to drm_plane_type. (Gerd)
> > - move fields of vfio_device_query_gfx_plane to vfio_device_gfx_plane_info.
> >   (Gerd)
> > - remove drm_format_mod, start fields. (Daniel)
> > - remove plane_id.
> >
> > v10:
> > - refine the ABI API VFIO_DEVICE_QUERY_GFX_PLANE. (Alex) (Gerd)
> >
> > v3:
> > - add a field gvt_plane_info in the drm_i915_gem_obj structure to save
> >   the decoded plane information to avoid look up while need the plane
> >   info. (Gerd)
> >
> > Signed-off-by: Tina Zhang <tina.zhang@intel.com>
> > Cc: Gerd Hoffmann <kraxel@redhat.com>
> > Cc: Alex Williamson <alex.williamson@redhat.com>
> > Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> > ---
> >  include/uapi/linux/vfio.h | 62
> > +++++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 62 insertions(+)
> >
> > diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> > index ae46105..fdf9a9c 100644
> > --- a/include/uapi/linux/vfio.h
> > +++ b/include/uapi/linux/vfio.h
> > @@ -502,6 +502,68 @@ struct vfio_pci_hot_reset {
> >
> >  #define VFIO_DEVICE_PCI_HOT_RESET	_IO(VFIO_TYPE, VFIO_BASE +
> 13)
> >
> > +/**
> > + * VFIO_DEVICE_QUERY_GFX_PLANE - _IOW(VFIO_TYPE, VFIO_BASE + 14,
> > + *                                    struct vfio_device_query_gfx_plane)
> > + *
> > + * Set the drm_plane_type and flags, then retrieve the gfx plane info.
> > + *
> > + * flags supported:
> > + * - VFIO_GFX_PLANE_TYPE_PROBE and VFIO_GFX_PLANE_TYPE_DMABUF
> are set
> > + *   to ask if the mdev supports dma-buf. 0 on support, -EINVAL on no
> > + *   support for dma-buf.
> > + * - VFIO_GFX_PLANE_TYPE_PROBE and VFIO_GFX_PLANE_TYPE_REGION
> are set
> > + *   to ask if the mdev supports region. 0 on support, -EINVAL on no
> > + *   support for region.
> > + * - VFIO_GFX_PLANE_TYPE_DMABUF or VFIO_GFX_PLANE_TYPE_REGION
> is set
> > + *   with each call to query the plane info.
> 
> So dmabuf_id is effectively just a token that can be fed into GET_GFX_DMABUF
> to get the fd.  The implementation of the token is vendor specific, but can be
> thought of as some sort of sequence ID or generation ID (but not necessarily
> monotonically increasing), so GET_GFX_DMABUF may fail if the previously
> provided dmabuf_id is no longer valid.  Do I have this correct?
Exactly. GET_GFX_DMABUF may fail if the dmabuf_id is no longer valid. -EINVAL will be
returned in that case. 
And dmabuf_id is invalid when:
1) user space has closed all dma-buf fds which are exposed with the dmabuf_id,
2) or the dmabuf_id isn't returned from QUERY_GFX_PLANE.
I will add the comments. Thanks.

> 
> > + * - Others are invalid and return -EINVAL.
> 
> And I see that in patch 7/7 that i915 is checking explicitly for only these flag
> combinations, great!
> 
> > + *
> > + * Return: 0 on success, -ENODEV with all out fields zero on mdev
> > + * device initialization, -errno on other failure.
> > + */
> > +struct vfio_device_gfx_plane_info {
> > +	__u32 argsz;
> > +	__u32 flags;
> > +#define VFIO_GFX_PLANE_TYPE_PROBE (1 << 0) #define
> > +VFIO_GFX_PLANE_TYPE_DMABUF (1 << 1) #define
> > +VFIO_GFX_PLANE_TYPE_REGION (1 << 2)
> > +	/* in */
> > +	__u32 drm_plane_type;	/* type of plane: DRM_PLANE_TYPE_* */
> > +	/* out */
> > +	__u32 drm_format;	/* drm format of plane */
> > +	__u64 drm_format_mod;   /* tiled mode */
> > +	__u32 width;	/* width of plane */
> > +	__u32 height;	/* height of plane */
> > +	__u32 stride;	/* stride of plane */
> > +	__u32 size;	/* size of plane in bytes, align on page*/
> > +	__u32 x_pos;	/* horizontal position of cursor plane */
> > +	__u32 y_pos;	/* vertical position of cursor plane*/
> > +	union {
> > +		__u32 region_index;	/* region index */
> > +		__s32 dmabuf_id;	/* dma-buf fd */
> 
> "dma-buf fd", but it's not an fd.  Why is this signed since it's no longer an fd?
Again, sorry about the typo :)

> 
> > +	};
> > +};
> > +
> > +#define VFIO_DEVICE_QUERY_GFX_PLANE _IO(VFIO_TYPE, VFIO_BASE + 14)
> > +
> > +/**
> > + * VFIO_DEVICE_GET_GFX_DMABUF - _IOW(VFIO_TYPE, VFIO_BASE + 15,
> > + *				    struct vfio_device_gfx_dmabuf_fd)
> > + *
> > + * Return: 0 on success, -errno on failure.
> > + */
> 
> So given a dmabuf_id, return a dmabuf_fd, which may be the same as a fd
> previously returned to the user.  In the latter case, can we assume that there's
> no new reference added to the existing fd?  Is the user permitted to cache the
> dmabuf_id to dmabuf_fd mapping?  Similarly, if a user already has a dmabuf_fd,
> can calling GET_GFX_DMABUF ever fail or return a different result for the same
> dmabuf_id which provided that fd?  Maybe we could define some of these
> behaviors in the header file to keep vendor implementations consistent.

Actually, a new fd is returned each time GET_GFX_DMABUF is invoked, even for
the same dmabuf_id. The benefit is that the dma-buf's life cycle management is
left to userspace: the dma-buf is released once all of its fds in userspace
are closed.

Then, I think I need to add a comment like this:
"dmabuf_fd returns a new fd each time GET_GFX_DMABUF is invoked"

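From userspace the flow then looks roughly like this (a minimal sketch against
the v15 definitions quoted above; error handling and the rendering side are
omitted, device_fd is the already-open vfio device fd, and the availability of
DRM_PLANE_TYPE_PRIMARY to userspace is assumed, as referenced by the header
comment):

    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/vfio.h>

    static void show_primary_plane(int device_fd)
    {
            struct vfio_device_gfx_plane_info plane = {
                    .argsz = sizeof(plane),
                    .flags = VFIO_GFX_PLANE_TYPE_DMABUF,
                    .drm_plane_type = DRM_PLANE_TYPE_PRIMARY,
            };
            struct vfio_device_gfx_dmabuf_fd req = {
                    .argsz = sizeof(req),
                    .flags = 0,     /* no flags defined yet */
            };

            if (ioctl(device_fd, VFIO_DEVICE_QUERY_GFX_PLANE, &plane) < 0)
                    return;         /* no plane yet, or dma-buf unsupported */

            req.dmabuf_id = plane.dmabuf_id;
            if (ioctl(device_fd, VFIO_DEVICE_GET_GFX_DMABUF, &req) < 0)
                    return;         /* dmabuf_id went stale, query again */

            /*
             * req.dmabuf_fd is a brand-new fd on every call; import it
             * (e.g. via PRIME/EGL) and close it when done.  The dma-buf
             * itself is released once all such fds are closed.
             */
            close(req.dmabuf_fd);
    }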

> 
> > +struct vfio_device_gfx_dmabuf_fd {
> > +	__u32 argsz;
> > +	__u32 flags;
> 
> Patch 7/7 however fails to validate this flags field.  It needs to return -EINVAL if
> any unsupported flags are set, which at this point is all of them.
Indeed.
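
Something along these lines at the top of the GET_GFX_DMABUF handler should
cover it (just a sketch, following the usual vfio ioctl pattern):

    struct vfio_device_gfx_dmabuf_fd dmabuf;
    unsigned long minsz = offsetofend(struct vfio_device_gfx_dmabuf_fd,
                                      dmabuf_id);

    if (copy_from_user(&dmabuf, (void __user *)arg, minsz))
            return -EFAULT;

    if (dmabuf.argsz < minsz || dmabuf.flags)
            return -EINVAL;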
 
> 
> > +	/* in */
> > +	__u32 dmabuf_id;
> 
> This unsigned dmabuf_id is probably correct, but it doesn't match the signed
> one provided above.
Will make them both unsigned :)

> 
> > +	/* out */
> > +	__s32 dmabuf_fd;
> > +};
> 
> VFIO_GROUP_GET_DEVICE_FD returns the fd through the return code, is there
> a reason to do it differently here?  GET_DEVICE_FD also only takes a char
> parameter without the extra future proofing, I wonder if this ioctl is simple
> enough to do the same.
So that means we wouldn't need the "flags" field and would change dmabuf_id to a
character string. Maybe that's fine for the current usage, but I'm not sure it's
suitable for the future.
Gerd, Zhenyu and Zhi, what do you think about it?

Thanks.
Tina
> 
> > +
> > +#define VFIO_DEVICE_GET_GFX_DMABUF _IO(VFIO_TYPE, VFIO_BASE + 15)
> > +
> >  /* -------- API for Type1 VFIO IOMMU -------- */
> >
> >  /**
> 
> Thanks,
> Alex

^ permalink raw reply	[flat|nested] 31+ messages in thread

* RE: [Intel-gfx] [PATCH v15 6/7] drm/i915: Introduce GEM proxy
  2017-10-10 13:02     ` [Intel-gfx] " Joonas Lahtinen
@ 2017-10-11  1:54         ` Zhang, Tina
  0 siblings, 0 replies; 31+ messages in thread
From: Zhang, Tina @ 2017-10-11  1:54 UTC (permalink / raw)
  To: Joonas Lahtinen, alex.williamson, kraxel, chris, zhenyuw, Lv,
	Zhiyuan, Wang, Zhi A, Tian, Kevin, daniel, kwankhede
  Cc: Daniel Vetter, intel-gfx, intel-gvt-dev, linux-kernel



> -----Original Message-----
> From: Joonas Lahtinen [mailto:joonas.lahtinen@linux.intel.com]
> Sent: Tuesday, October 10, 2017 9:02 PM
> To: Zhang, Tina <tina.zhang@intel.com>; alex.williamson@redhat.com;
> kraxel@redhat.com; chris@chris-wilson.co.uk; zhenyuw@linux.intel.com; Lv,
> Zhiyuan <zhiyuan.lv@intel.com>; Wang, Zhi A <zhi.a.wang@intel.com>; Tian,
> Kevin <kevin.tian@intel.com>; daniel@ffwll.ch; kwankhede@nvidia.com
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>; intel-gfx@lists.freedesktop.org;
> intel-gvt-dev@lists.freedesktop.org; linux-kernel@vger.kernel.org
> Subject: Re: [Intel-gfx] [PATCH v15 6/7] drm/i915: Introduce GEM proxy
> 
> On Tue, 2017-10-10 at 13:58 +0300, Joonas Lahtinen wrote:
> > On Tue, 2017-10-10 at 17:50 +0800, Tina Zhang wrote:
> > > GEM proxy is a kind of GEM, whose backing physical memory is pinned
> > > and produced by guest VM and is used by host as read only. With GEM
> > > proxy, host is able to access guest physical memory through GEM
> > > object interface. As GEM proxy is such a special kind of GEM, a new
> > > flag I915_GEM_OBJECT_IS_PROXY is introduced to ban host from
> > > changing the backing storage of GEM proxy.
> > >
> > > v14:
> > > - return -ENXIO when gem proxy object is banned by ioctl.
> > >   (Chris) (Daniel)
> > >
> > > v13:
> > > - add comments to GEM proxy. (Chris)
> > > - don't ban GEM proxy in i915_gem_sw_finish_ioctl. (Chris)
> > > - check GEM proxy bar after finishing i915_gem_object_wait. (Chris)
> > > - remove GEM proxy bar in i915_gem_madvise_ioctl.
> > >
> > > v6:
> > > - add gem proxy barrier in the following ioctls. (Chris)
> > >   i915_gem_set_caching_ioctl
> > >   i915_gem_set_domain_ioctl
> > >   i915_gem_sw_finish_ioctl
> > >   i915_gem_set_tiling_ioctl
> > >   i915_gem_madvise_ioctl
> > >
> > > Signed-off-by: Tina Zhang <tina.zhang@intel.com>
> > > Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> > > Cc: Chris Wilson <chris@chris-wilson.co.uk>
> > > Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> >
> > <SNIP>
> >
> > > +++ b/drivers/gpu/drm/i915/i915_gem_object.h
> > > @@ -39,6 +39,7 @@ struct drm_i915_gem_object_ops {
> > >         unsigned int flags;
> > >  #define I915_GEM_OBJECT_HAS_STRUCT_PAGE BIT(0)
> > >  #define I915_GEM_OBJECT_IS_SHRINKABLE   BIT(1)
> > > +#define I915_GEM_OBJECT_IS_PROXY       BIT(2)
> >
> > Please fix the indent to match. Do convert the above two lines to use
> > TAB character too.
> >
> > <SNIP>
> >
> > > @@ -1690,7 +1704,7 @@ i915_gem_mmap_ioctl(struct drm_device *dev,
> void *data,
> > >  	 */
> > >  	if (!obj->base.filp) {
> > >  		i915_gem_object_put(obj);
> > > -		return -EINVAL;
> > > +		return -ENXIO;
> > >  	}
> >
> > This still needs to be a separate patch.
> >
> > With those fixes, this is;
> >
> > Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> 
> Also, send the produced two patches (this and the split patch) as a standalone
> series for easier testing and merging.
Got it. Thanks.

BR,
Tina
> 
> CI seems to have hard time applying the whole series.
> 
> Regards, Joonas
> --
> Joonas Lahtinen
> Open Source Technology Center
> Intel Corporation

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v15 5/7] vfio: ABI for mdev display dma-buf operation
  2017-10-11  1:46       ` Zhang, Tina
@ 2017-10-11  2:18         ` Alex Williamson
  -1 siblings, 0 replies; 31+ messages in thread
From: Alex Williamson @ 2017-10-11  2:18 UTC (permalink / raw)
  To: Zhang, Tina
  Cc: kraxel, chris, zhenyuw, Lv, Zhiyuan, Wang, Zhi A, Tian, Kevin,
	daniel, kwankhede, intel-gfx, intel-gvt-dev, linux-kernel,
	Daniel Vetter

On Wed, 11 Oct 2017 01:46:37 +0000
"Zhang, Tina" <tina.zhang@intel.com> wrote:

> > -----Original Message-----
> > From: Alex Williamson [mailto:alex.williamson@redhat.com]
> > Sent: Wednesday, October 11, 2017 3:17 AM
> > To: Zhang, Tina <tina.zhang@intel.com>
> > Cc: kraxel@redhat.com; chris@chris-wilson.co.uk; zhenyuw@linux.intel.com;
> > Lv, Zhiyuan <zhiyuan.lv@intel.com>; Wang, Zhi A <zhi.a.wang@intel.com>;
> > Tian, Kevin <kevin.tian@intel.com>; daniel@ffwll.ch; kwankhede@nvidia.com;
> > intel-gfx@lists.freedesktop.org; intel-gvt-dev@lists.freedesktop.org; linux-
> > kernel@vger.kernel.org; Daniel Vetter <daniel.vetter@ffwll.ch>
> > Subject: Re: [PATCH v15 5/7] vfio: ABI for mdev display dma-buf operation
> > 
> > On Tue, 10 Oct 2017 17:50:05 +0800
> > Tina Zhang <tina.zhang@intel.com> wrote:
> >   
> > > Add VFIO_DEVICE_QUERY_GFX_PLANE ioctl command to let user mode query
> > > and get the plan and its related information. This ioctl can be invoked with:  
> > 
> > s/plan/plane/  
> Sorry about this typo :).
> 
> >   
> > > 1) either flag DMABUF or REGION is set. Vendor driver returns success
> > > and the plane_info only when the specific kind of buffer is supported.
> > > 2) flag PROBE is set with either DMABUF or REGION. Vendor driver
> > > returns success only when the specific kind of buffer is supported.
> > >
> > > The dma-buf's life cycle is handled by user mode and tracked by kernel.
> > > The returned fd in struct vfio_device_query_gfx_plane can be a new fd
> > > or an old fd of a re-exported dma-buf. Host user mode can check the
> > > value of fd and to see if it needs to create new resource according to
> > > the new fd or just use the existed resource related to the old fd.
> > >
> > > v15:
> > > - add a ioctl to get a dmabuf for a given dmabuf id. (Gerd)
> > >
> > > v14:
> > > - add PROBE, DMABUF and REGION flags. (Alex)
> > >
> > > v12:
> > > - add drm_format_mod back. (Gerd and Zhenyu)
> > > - add region_index. (Gerd)
> > >
> > > v11:
> > > - rename plane_type to drm_plane_type. (Gerd)
> > > - move fields of vfio_device_query_gfx_plane to vfio_device_gfx_plane_info.
> > >   (Gerd)
> > > - remove drm_format_mod, start fields. (Daniel)
> > > - remove plane_id.
> > >
> > > v10:
> > > - refine the ABI API VFIO_DEVICE_QUERY_GFX_PLANE. (Alex) (Gerd)
> > >
> > > v3:
> > > - add a field gvt_plane_info in the drm_i915_gem_obj structure to save
> > >   the decoded plane information to avoid look up while need the plane
> > >   info. (Gerd)
> > >
> > > Signed-off-by: Tina Zhang <tina.zhang@intel.com>
> > > Cc: Gerd Hoffmann <kraxel@redhat.com>
> > > Cc: Alex Williamson <alex.williamson@redhat.com>
> > > Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> > > ---
> > >  include/uapi/linux/vfio.h | 62
> > > +++++++++++++++++++++++++++++++++++++++++++++++
> > >  1 file changed, 62 insertions(+)
> > >
> > > diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> > > index ae46105..fdf9a9c 100644
> > > --- a/include/uapi/linux/vfio.h
> > > +++ b/include/uapi/linux/vfio.h
> > > @@ -502,6 +502,68 @@ struct vfio_pci_hot_reset {
> > >
> > >  #define VFIO_DEVICE_PCI_HOT_RESET	_IO(VFIO_TYPE, VFIO_BASE +  
> > 13)  
> > >
> > > +/**
> > > + * VFIO_DEVICE_QUERY_GFX_PLANE - _IOW(VFIO_TYPE, VFIO_BASE + 14,
> > > + *                                    struct vfio_device_query_gfx_plane)
> > > + *
> > > + * Set the drm_plane_type and flags, then retrieve the gfx plane info.
> > > + *
> > > + * flags supported:
> > > + * - VFIO_GFX_PLANE_TYPE_PROBE and VFIO_GFX_PLANE_TYPE_DMABUF  
> > are set  
> > > + *   to ask if the mdev supports dma-buf. 0 on support, -EINVAL on no
> > > + *   support for dma-buf.
> > > + * - VFIO_GFX_PLANE_TYPE_PROBE and VFIO_GFX_PLANE_TYPE_REGION  
> > are set  
> > > + *   to ask if the mdev supports region. 0 on support, -EINVAL on no
> > > + *   support for region.
> > > + * - VFIO_GFX_PLANE_TYPE_DMABUF or VFIO_GFX_PLANE_TYPE_REGION  
> > is set  
> > > + *   with each call to query the plane info.  
> > 
> > So dmabuf_id is effectively just a token that can be fed into GET_GFX_DMABUF
> > to get the fd.  The implementation of the token is vendor specific, but can be
> > thought of as some sort of sequence ID or generation ID (but not necessarily
> > monotonically increasing), so GET_GFX_DMABUF may fail if the previously
> > provided dmabuf_id is no longer valid.  Do I have this correct?  
> Exactly. GET_GFX_DMABUF may fail if the dmabuf_id is no longer valid. -EINVAL will be
> returned in that case. 
> And dmabuf_id is invalid when:
> 1) user space has closed all dma-buf fds which are exposed with the dmabuf_id,
> 2) or the dmabuf_id isn't returned from QUERY_GFX_PLANE.
> I will add the comments. Thanks.
> 
> >   
> > > + * - Others are invalid and return -EINVAL.  
> > 
> > And I see that in patch 7/7 that i915 is checking explicitly for only these flag
> > combinations, great!
> >   
> > > + *
> > > + * Return: 0 on success, -ENODEV with all out fields zero on mdev
> > > + * device initialization, -errno on other failure.
> > > + */
> > > +struct vfio_device_gfx_plane_info {
> > > +	__u32 argsz;
> > > +	__u32 flags;
> > > +#define VFIO_GFX_PLANE_TYPE_PROBE (1 << 0) #define
> > > +VFIO_GFX_PLANE_TYPE_DMABUF (1 << 1) #define
> > > +VFIO_GFX_PLANE_TYPE_REGION (1 << 2)
> > > +	/* in */
> > > +	__u32 drm_plane_type;	/* type of plane: DRM_PLANE_TYPE_* */
> > > +	/* out */
> > > +	__u32 drm_format;	/* drm format of plane */
> > > +	__u64 drm_format_mod;   /* tiled mode */
> > > +	__u32 width;	/* width of plane */
> > > +	__u32 height;	/* height of plane */
> > > +	__u32 stride;	/* stride of plane */
> > > +	__u32 size;	/* size of plane in bytes, align on page*/
> > > +	__u32 x_pos;	/* horizontal position of cursor plane */
> > > +	__u32 y_pos;	/* vertical position of cursor plane*/
> > > +	union {
> > > +		__u32 region_index;	/* region index */
> > > +		__s32 dmabuf_id;	/* dma-buf fd */  
> > 
> > "dma-buf fd", but it's not an fd.  Why is this signed since it's no longer an fd?  
> Again, sorry about the typo :)
> 
> >   
> > > +	};
> > > +};
> > > +
> > > +#define VFIO_DEVICE_QUERY_GFX_PLANE _IO(VFIO_TYPE, VFIO_BASE + 14)
> > > +
> > > +/**
> > > + * VFIO_DEVICE_GET_GFX_DMABUF - _IOW(VFIO_TYPE, VFIO_BASE + 15,
> > > + *				    struct vfio_device_gfx_dmabuf_fd)
> > > + *
> > > + * Return: 0 on success, -errno on failure.
> > > + */  
> > 
> > So given a dmabuf_id, return a dmabuf_fd, which may be the same as a fd
> > previously returned to the user.  In the latter case, can we assume that there's
> > no new reference added to the existing fd?  Is the user permitted to cache the
> > dmabuf_id to dmabuf_fd mapping?  Similarly, if a user already has a dmabuf_fd,
> > can calling GET_GFX_DMABUF ever fail or return a different result for the same
> > dmabuf_id which provided that fd?  Maybe we could define some of these
> > behaviors in the header file to keep vendor implementations consistent.  
> 
> Actually, a new fd is returned with each time GET_GFX_DMABUF invoked, even
> with the same dmabuf_id. The benefit is to leave the dma-buf's life cycle management
> to userspace. The dma-buf will be released when all its fds in userspace are closed.

Ok, so userspace could actually have multiple dmabuf_fds open at the
same time for the same dmabuf_id if they were sloppy with management of
those fds.

> Then, I think I need to add some comments like this:
> "dmabuf_fd will return a new fd with each time GET_GFX_DMABUF being invoked"

Yes, that would be great.

> >   
> > > +struct vfio_device_gfx_dmabuf_fd {
> > > +	__u32 argsz;
> > > +	__u32 flags;  
> > 
> > Patch 7/7 however fails to validate this flags field.  It needs to return -EINVAL if
> > any unsupported flags are set, which at this point is all of them.  
> Indeed.
>  
> >   
> > > +	/* in */
> > > +	__u32 dmabuf_id;  
> > 
> > This unsigned dmabuf_id is probably correct, but it doesn't match the signed
> > one provided above.  
> Will make them both unsigned :)
> 
> >   
> > > +	/* out */
> > > +	__s32 dmabuf_fd;
> > > +};  
> > 
> > VFIO_GROUP_GET_DEVICE_FD returns the fd through the return code, is there
> > a reason to do it differently here?  GET_DEVICE_FD also only takes a char
> > parameter without the extra future proofing, I wonder if this ioctl is simple
> > enough to do the same.  
> So that means we don't need the "flages" field and change dmabuf_id to a character string.
> Maybe it's fine for the current usage. But, I'm not sure if it's suitable for future.
> Gerd, Zhenyu and Zhi, how do you think about it?

No, the parameter wouldn't be a char; you'd use an __u32 for the
dmabuf_id.  I'm just referencing the structure of GET_DEVICE_FD as an
ioctl which returns an fd through the return value and takes a single
parameter; I don't mean to imply matching the type of that parameter.
Gerd, what are your thoughts on that?  It would make GET_GFX_DMABUF
slightly easier to call, but it limits us to a file descriptor for the
dmabuf, whereas with a flags field and an expandable structure we could
use some future mechanism to return the dmabuf to userspace.
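
To make the comparison concrete, the two calling conventions would look
roughly like this from userspace (sketch only; "id" is a dmabuf_id obtained
from QUERY_GFX_PLANE):

    /* As posted: expandable struct, fd returned through a field. */
    struct vfio_device_gfx_dmabuf_fd req = {
            .argsz = sizeof(req),
            .dmabuf_id = id,
    };
    ret = ioctl(device_fd, VFIO_DEVICE_GET_GFX_DMABUF, &req);
    /* on success, use req.dmabuf_fd */

    /* GET_DEVICE_FD style: the fd is the ioctl return value. */
    __u32 dmabuf_id = id;
    dmabuf_fd = ioctl(device_fd, VFIO_DEVICE_GET_GFX_DMABUF, &dmabuf_id);

Thanks,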

Alex

^ permalink raw reply	[flat|nested] 31+ messages in thread


* Re: [PATCH v15 4/7] drm/i915/gvt: Add opregion support
  2017-10-10  9:50   ` Tina Zhang
@ 2017-10-11  4:29     ` Du, Changbin
  -1 siblings, 0 replies; 31+ messages in thread
From: Du, Changbin @ 2017-10-11  4:29 UTC (permalink / raw)
  To: Tina Zhang
  Cc: alex.williamson, kraxel, chris, zhenyuw, zhiyuan.lv, zhi.a.wang,
	kevin.tian, daniel, kwankhede, intel-gfx, Bing Niu,
	intel-gvt-dev, linux-kernel, changbin.du


On Tue, Oct 10, 2017 at 05:50:04PM +0800, Tina Zhang wrote:
> Windows guest driver needs vbt in opregion, to configure the setting
> for display. Without opregion support, the display registers won't
> be set and this blocks display model to get the correct information
> of the guest display plane.
> 
> This patch is to provide a virtual opregion for guest. Current
> implementation is to fill the virtual opregion with the content in
> host's opregion. The original author of this patch is Xiaoguang Chen.
> 
> Signed-off-by: Bing Niu <bing.niu@intel.com>
> Signed-off-by: Tina Zhang <tina.zhang@intel.com>
> ---
>  drivers/gpu/drm/i915/gvt/hypercall.h |   1 +
>  drivers/gpu/drm/i915/gvt/kvmgt.c     | 109 ++++++++++++++++++++++++++++++++++-
>  drivers/gpu/drm/i915/gvt/mpt.h       |  15 +++++
>  drivers/gpu/drm/i915/gvt/opregion.c  |  26 +++++++--
>  drivers/gpu/drm/i915/gvt/vgpu.c      |   4 ++
>  5 files changed, 146 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gvt/hypercall.h b/drivers/gpu/drm/i915/gvt/hypercall.h
> index df7f33a..32c345c 100644
> --- a/drivers/gpu/drm/i915/gvt/hypercall.h
> +++ b/drivers/gpu/drm/i915/gvt/hypercall.h
> @@ -55,6 +55,7 @@ struct intel_gvt_mpt {
>  			      unsigned long mfn, unsigned int nr, bool map);
>  	int (*set_trap_area)(unsigned long handle, u64 start, u64 end,
>  			     bool map);
> +	int (*set_opregion)(void *vgpu);
It seems we are trying to hide struct intel_vgpu from kvmgt, but kvmgt actually
already uses it. So why not make the parameter type struct intel_vgpu directly?
I am not sure about xengt, but the code suggests 'handle' is the right thing there?
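
That is, something like this (sketch; it would need a forward declaration of
struct intel_vgpu in hypercall.h):

    int (*set_opregion)(struct intel_vgpu *vgpu);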

>  };
>  
>  extern struct intel_gvt_mpt xengt_mpt;
> diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
> index fd0c85f..6b0a330 100644
> --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> @@ -53,11 +53,23 @@ static const struct intel_gvt_ops *intel_gvt_ops;
>  #define VFIO_PCI_INDEX_TO_OFFSET(index) ((u64)(index) << VFIO_PCI_OFFSET_SHIFT)
>  #define VFIO_PCI_OFFSET_MASK    (((u64)(1) << VFIO_PCI_OFFSET_SHIFT) - 1)
>  
> +#define OPREGION_SIGNATURE "IntelGraphicsMem"
> +
> +struct vfio_region;
> +struct intel_vgpu_regops {
> +	size_t (*rw)(struct intel_vgpu *vgpu, char *buf,
> +			size_t count, loff_t *ppos, bool iswrite);
> +	void (*release)(struct intel_vgpu *vgpu,
> +			struct vfio_region *region);
> +};
> +
>  struct vfio_region {
>  	u32				type;
>  	u32				subtype;
>  	size_t				size;
>  	u32				flags;
> +	const struct intel_vgpu_regops	*ops;
> +	void				*data;
>  };
>  
>  struct kvmgt_pgfn {
> @@ -430,6 +442,91 @@ static void kvmgt_protect_table_del(struct kvmgt_guest_info *info,
>  	}
>  }
>  
> +static size_t intel_vgpu_reg_rw_opregion(struct intel_vgpu *vgpu, char *buf,
> +		size_t count, loff_t *ppos, bool iswrite)
Personally I think intel_vgpu_rw_opregion() is better. :)

> +{
> +	unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) -
> +			VFIO_PCI_NUM_REGIONS;
> +	void *base = vgpu->vdev.region[i].data;
> +	loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK;
> +
> +	if (pos >= vgpu->vdev.region[i].size || iswrite) {
> +		gvt_vgpu_err("invalid op or offset for Intel vgpu OpRegion\n");
> +		return -EINVAL;
> +	}
> +	count = min(count, (size_t)(vgpu->vdev.region[i].size - pos));
> +	memcpy(buf, base + pos, count);
> +
> +	return count;
> +}
> +
> +static void intel_vgpu_reg_release_opregion(struct intel_vgpu *vgpu,
> +		struct vfio_region *region)
> +{
> +	memunmap(region->data);
> +}
> +
> +static const struct intel_vgpu_regops intel_vgpu_regops_opregion = {
> +	.rw = intel_vgpu_reg_rw_opregion,
> +	.release = intel_vgpu_reg_release_opregion,
> +};
> +
> +static int intel_vgpu_register_reg(struct intel_vgpu *vgpu,
Maybe use the full name xx_register_region? Otherwise it is confusing whether this is a register or a region.

> +		unsigned int type, unsigned int subtype,
> +		const struct intel_vgpu_regops *ops,
> +		size_t size, u32 flags, void *data)
> +{
> +	struct vfio_region *region;
> +
> +	region = krealloc(vgpu->vdev.region,
> +			(vgpu->vdev.num_regions + 1) * sizeof(*region),
> +			GFP_KERNEL);
> +	if (!region)
> +		return -ENOMEM;
> +
> +	vgpu->vdev.region = region;
> +	vgpu->vdev.region[vgpu->vdev.num_regions].type = type;
> +	vgpu->vdev.region[vgpu->vdev.num_regions].subtype = subtype;
> +	vgpu->vdev.region[vgpu->vdev.num_regions].ops = ops;
> +	vgpu->vdev.region[vgpu->vdev.num_regions].size = size;
> +	vgpu->vdev.region[vgpu->vdev.num_regions].flags = flags;
> +	vgpu->vdev.region[vgpu->vdev.num_regions].data = data;
> +	vgpu->vdev.num_regions++;
> +
> +	return 0;
> +}
> +
> +static int kvmgt_set_opregion(void *p_vgpu)
> +{
> +	struct intel_vgpu *vgpu = (struct intel_vgpu *)p_vgpu;
> +	unsigned int addr;
> +	void *base;
> +	int ret;
> +
> +	addr = vgpu->gvt->opregion.opregion_pa;
> +	if (!addr || !(~addr))
> +		return -ENODEV;
> +
> +	base = memremap(addr, OPREGION_SIZE, MEMREMAP_WB);
> +	if (!base)
> +		return -ENOMEM;
No need to map it again; i915 has already mapped it (dev_priv->opregion->header).
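
I.e. reuse the mapping i915 already holds, roughly (sketch; assumes gvt keeps
its dev_priv pointer around):

    base = vgpu->gvt->dev_priv->opregion.header;
    if (!base)
            return -ENODEV;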

> +
> +	if (memcmp(base, OPREGION_SIGNATURE, 16)) {
> +		memunmap(base);
> +		return -EINVAL;
> +	}
> +
Just check "opregion->header != NULL", since i915 already validates it.

> +	ret = intel_vgpu_register_reg(vgpu,
> +			PCI_VENDOR_ID_INTEL | VFIO_REGION_TYPE_PCI_VENDOR_TYPE,
> +			VFIO_REGION_SUBTYPE_INTEL_IGD_OPREGION,
> +			&intel_vgpu_regops_opregion, OPREGION_SIZE,
> +			VFIO_REGION_INFO_FLAG_READ, base);
> +	if (ret)
> +		memunmap(base);
> +
> +	return ret;
> +}
> +
>  static int intel_vgpu_create(struct kobject *kobj, struct mdev_device *mdev)
>  {
>  	struct intel_vgpu *vgpu = NULL;
> @@ -646,7 +743,7 @@ static ssize_t intel_vgpu_rw(struct mdev_device *mdev, char *buf,
>  	int ret = -EINVAL;
>  
>  
> -	if (index >= VFIO_PCI_NUM_REGIONS) {
> +	if (index >= VFIO_PCI_NUM_REGIONS + vgpu->vdev.num_regions) {
>  		gvt_vgpu_err("invalid index: %u\n", index);
>  		return -EINVAL;
>  	}
> @@ -680,8 +777,11 @@ static ssize_t intel_vgpu_rw(struct mdev_device *mdev, char *buf,
>  	case VFIO_PCI_BAR5_REGION_INDEX:
>  	case VFIO_PCI_VGA_REGION_INDEX:
>  	case VFIO_PCI_ROM_REGION_INDEX:
> +		break;
>  	default:
> -		gvt_vgpu_err("unsupported region: %u\n", index);
> +		index -= VFIO_PCI_NUM_REGIONS;
> +		return vgpu->vdev.region[index].ops->rw(vgpu, buf, count,
> +				ppos, is_write);
Making the region ops generic is great. You could factor out the MMIO and CFG
region ops as well; it is better not to mix generic and special-case handling.

>  	}
>  
>  	return ret == 0 ? count : ret;
> @@ -944,7 +1044,8 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
>  
>  		info.flags = VFIO_DEVICE_FLAGS_PCI;
>  		info.flags |= VFIO_DEVICE_FLAGS_RESET;
> -		info.num_regions = VFIO_PCI_NUM_REGIONS;
> +		info.num_regions = VFIO_PCI_NUM_REGIONS +
> +				vgpu->vdev.num_regions;
>  		info.num_irqs = VFIO_PCI_NUM_IRQS;
>  
>  		return copy_to_user((void __user *)arg, &info, minsz) ?
> @@ -1065,6 +1166,7 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
>  		}
>  
>  		if (caps.size) {
> +			info.flags |= VFIO_REGION_INFO_FLAG_CAPS;
>  			if (info.argsz < sizeof(info) + caps.size) {
>  				info.argsz = sizeof(info) + caps.size;
>  				info.cap_offset = 0;
> @@ -1513,6 +1615,7 @@ struct intel_gvt_mpt kvmgt_mpt = {
>  	.read_gpa = kvmgt_read_gpa,
>  	.write_gpa = kvmgt_write_gpa,
>  	.gfn_to_mfn = kvmgt_gfn_to_pfn,
> +	.set_opregion = kvmgt_set_opregion,
>  };
>  EXPORT_SYMBOL_GPL(kvmgt_mpt);
>  
> diff --git a/drivers/gpu/drm/i915/gvt/mpt.h b/drivers/gpu/drm/i915/gvt/mpt.h
> index f0e5487..9e73f2e 100644
> --- a/drivers/gpu/drm/i915/gvt/mpt.h
> +++ b/drivers/gpu/drm/i915/gvt/mpt.h
> @@ -292,4 +292,19 @@ static inline int intel_gvt_hypervisor_set_trap_area(
>  	return intel_gvt_host.mpt->set_trap_area(vgpu->handle, start, end, map);
>  }
>  
> +/**
> + * intel_gvt_hypervisor_set_opregion - Set opregion for guest
> + * @vgpu: a vGPU
> + *
> + * Returns:
> + * Zero on success, negative error code if failed.
> + */
> +static inline int intel_gvt_hypervisor_set_opregion(struct intel_vgpu *vgpu)
> +{
> +	if (!intel_gvt_host.mpt->set_opregion)
> +		return 0;
> +
> +	return intel_gvt_host.mpt->set_opregion(vgpu);
> +}
> +
>  #endif /* _GVT_MPT_H_ */
> diff --git a/drivers/gpu/drm/i915/gvt/opregion.c b/drivers/gpu/drm/i915/gvt/opregion.c
> index 3117991..04c0452 100644
> --- a/drivers/gpu/drm/i915/gvt/opregion.c
> +++ b/drivers/gpu/drm/i915/gvt/opregion.c
> @@ -113,23 +113,37 @@ void intel_vgpu_clean_opregion(struct intel_vgpu *vgpu)
>   */
>  int intel_vgpu_init_opregion(struct intel_vgpu *vgpu, u32 gpa)
>  {
> -	int ret;
> +	int ret = 0;
> +	unsigned long pfn;
>  
>  	gvt_dbg_core("vgpu%d: init vgpu opregion\n", vgpu->id);
>  
> -	if (intel_gvt_host.hypervisor_type == INTEL_GVT_HYPERVISOR_XEN) {
> +	switch (intel_gvt_host.hypervisor_type) {
> +	case INTEL_GVT_HYPERVISOR_KVM:
> +		pfn = intel_gvt_hypervisor_gfn_to_mfn(vgpu, gpa >> PAGE_SHIFT);
> +		vgpu_opregion(vgpu)->va = memremap(pfn << PAGE_SHIFT,
> +						INTEL_GVT_OPREGION_SIZE,
> +						MEMREMAP_WB);
> +		if (!vgpu_opregion(vgpu)->va) {
> +			gvt_vgpu_err("failed to map guest opregion\n");
> +			ret = -EFAULT;
> +		}
ditto.

> +		break;
> +	case INTEL_GVT_HYPERVISOR_XEN:
>  		gvt_dbg_core("emulate opregion from kernel\n");
>  
>  		ret = init_vgpu_opregion(vgpu, gpa);
>  		if (ret)
> -			return ret;
> +			break;
>  
>  		ret = map_vgpu_opregion(vgpu, true);
> -		if (ret)
> -			return ret;
> +		break;
> +	default:
> +		ret = -EINVAL;
> +		gvt_vgpu_err("not supported hypervisor\n");
>  	}
>  
> -	return 0;
> +	return ret;
>  }
>  
>  /**
> diff --git a/drivers/gpu/drm/i915/gvt/vgpu.c b/drivers/gpu/drm/i915/gvt/vgpu.c
> index 3deadcb..bcbf535 100644
> --- a/drivers/gpu/drm/i915/gvt/vgpu.c
> +++ b/drivers/gpu/drm/i915/gvt/vgpu.c
> @@ -383,6 +383,10 @@ static struct intel_vgpu *__intel_gvt_create_vgpu(struct intel_gvt *gvt,
>  	if (ret)
>  		goto out_clean_shadow_ctx;
>  
> +	ret = intel_gvt_hypervisor_set_opregion(vgpu);
> +	if (ret)
> +		goto out_clean_shadow_ctx;
> +
>  	mutex_unlock(&gvt->lock);
>  
>  	return vgpu;
> -- 
> 2.7.4
> 
> _______________________________________________
> intel-gvt-dev mailing list
> intel-gvt-dev@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gvt-dev

-- 
Thanks,
Changbin Du


^ permalink raw reply	[flat|nested] 31+ messages in thread


* Re: [PATCH v15 5/7] vfio: ABI for mdev display dma-buf operation
  2017-10-11  2:18         ` Alex Williamson
@ 2017-10-11  6:27           ` Gerd Hoffmann
  -1 siblings, 0 replies; 31+ messages in thread
From: Gerd Hoffmann @ 2017-10-11  6:27 UTC (permalink / raw)
  To: Alex Williamson, Zhang, Tina
  Cc: chris, zhenyuw, Lv, Zhiyuan, Wang, Zhi A, Tian, Kevin, daniel,
	kwankhede, intel-gfx, intel-gvt-dev, linux-kernel, Daniel Vetter

  Hi,

> No, the parameter wouldn't be a char, you'd use an __u32 for the
> dmabuf_id.  I'm just referencing the structure of the GET_DEVICE_FD
> as
> an ioctl which returns an fd through the return value and takes a
> single parameter.  I don't mean to imply matching the type of that
> parameter.

>   Gerd, what are your thoughts on that, it'd make it slightly
> easier to call GET_GFX_DMABUF, but it limits us to file descriptor
> for
> the dmabuf whereas with a flag and expandable structure we could use
> some future mechanism to return the dmabuf to userspace.

It's fine with me.

The point of using fds to refer to dmabufs is that they can be passed
around easily, so I doubt this is going to change.
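
For reference, "passed around easily" here is just the standard SCM_RIGHTS
dance; a minimal sketch:

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* hand a dma-buf fd to another process over a unix socket */
    static int send_dmabuf_fd(int sock, int dmabuf_fd)
    {
            char dummy = 0;
            struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
            union {
                    struct cmsghdr align;
                    char buf[CMSG_SPACE(sizeof(int))];
            } u;
            struct msghdr msg = {
                    .msg_iov = &iov,
                    .msg_iovlen = 1,
                    .msg_control = u.buf,
                    .msg_controllen = sizeof(u.buf),
            };
            struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

            cmsg->cmsg_level = SOL_SOCKET;
            cmsg->cmsg_type = SCM_RIGHTS;
            cmsg->cmsg_len = CMSG_LEN(sizeof(int));
            memcpy(CMSG_DATA(cmsg), &dmabuf_fd, sizeof(int));

            return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
    }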

cheers,
  Gerd

^ permalink raw reply	[flat|nested] 31+ messages in thread


end of thread, other threads:[~2017-10-11  6:27 UTC | newest]

Thread overview: 31+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-10-10  9:50 [PATCH v15 0/7] drm/i915/gvt: Dma-buf support for GVT-g Tina Zhang
2017-10-10  9:50 ` [PATCH v15 1/7] drm/i915/gvt: Add framebuffer decoder support Tina Zhang
2017-10-10  9:50 ` [PATCH v15 2/7] drm: Introduce RGB 64-bit 16:16:16:16 float format Tina Zhang
2017-10-10  9:50   ` Tina Zhang
2017-10-10  9:50 ` [PATCH v15 3/7] drm/i915/gvt: Add " Tina Zhang
2017-10-10  9:50   ` Tina Zhang
2017-10-10  9:50 ` [PATCH v15 4/7] drm/i915/gvt: Add opregion support Tina Zhang
2017-10-10  9:50   ` Tina Zhang
2017-10-11  4:29   ` Du, Changbin
2017-10-11  4:29     ` Du, Changbin
2017-10-10  9:50 ` [PATCH v15 5/7] vfio: ABI for mdev display dma-buf operation Tina Zhang
2017-10-10  9:50   ` Tina Zhang
2017-10-10 14:07   ` Gerd Hoffmann
2017-10-10 19:16   ` Alex Williamson
2017-10-11  1:46     ` Zhang, Tina
2017-10-11  1:46       ` Zhang, Tina
2017-10-11  2:18       ` Alex Williamson
2017-10-11  2:18         ` Alex Williamson
2017-10-11  6:27         ` Gerd Hoffmann
2017-10-11  6:27           ` Gerd Hoffmann
2017-10-10  9:50 ` [PATCH v15 6/7] drm/i915: Introduce GEM proxy Tina Zhang
2017-10-10  9:50   ` Tina Zhang
2017-10-10 10:58   ` Joonas Lahtinen
2017-10-10 13:02     ` [Intel-gfx] " Joonas Lahtinen
2017-10-11  1:54       ` Zhang, Tina
2017-10-11  1:54         ` Zhang, Tina
2017-10-10  9:50 ` [PATCH v15 7/7] drm/i915/gvt: Dmabuf support for GVT-g Tina Zhang
2017-10-10  9:50   ` Tina Zhang
2017-10-10 10:05 ` ✗ Fi.CI.BAT: failure for drm/i915/gvt: Dma-buf " Patchwork
2017-10-10 14:06 ` [PATCH v15 0/7] " Gerd Hoffmann
2017-10-10 14:06   ` Gerd Hoffmann
