kvm.vger.kernel.org archive mirror
* [PATCH v5 0/6] Deliver vGPU display refresh event to userspace
@ 2019-08-16  2:35 Tina Zhang
  2019-08-16  2:35 ` [PATCH v5 1/6] vfio: Define device specific irq type capability Tina Zhang
                   ` (6 more replies)
  0 siblings, 7 replies; 23+ messages in thread
From: Tina Zhang @ 2019-08-16  2:35 UTC (permalink / raw)
  To: intel-gvt-dev
  Cc: Tina Zhang, kraxel, alex.williamson, kvm, linux-kernel,
	hang.yuan, zhiyuan.lv

This series delivers the vGPU display refresh event to userspace.

Instead of delivering only page flip events or only vblank events, we
combine the two: post a display refresh event at vblank and skip the
vblanks at which no page flip happened. Using vblanks as the upper bound
is safe, and skipping no-page-flip vblanks keeps the performance impact
trivial while still avoiding screen tearing. In addition, the mask/unmask
mechanism gives userspace the flexibility to switch between event-driven
refresh and classic timer-based refresh.

Some guest applications use a single framebuffer for both drawing and
display. In that case the guest OS never issues a plane page flip when the
framebuffer is updated, so userspace would never be notified of the
update. We therefore apply a heuristic to detect the single-framebuffer
case and, when it is detected, notify userspace on every vblank.

v5:
- Introduce a vGPU display irq cap which can notify user space of
  both primary and cursor plane update events through one eventfd. (Gerd & Alex)
v4:
- Deliver page flip event and single framebuffer refresh event bounded 
by display vblanks. (Kechen)
v3:
- Deliver display vblank event instead of page flip event. (Zhenyu)
v2:
- Use VFIO irq chain to get eventfds from userspace instead of adding
a new ABI. (Alex)
v1:
- https://patchwork.kernel.org/cover/10962341/

Kechen Lu (2):
  drm/i915/gvt: Deliver async primary plane page flip events at vblank
  drm/i915/gvt: Add cursor plane reg update trap emulation handler

Tina Zhang (4):
  vfio: Define device specific irq type capability
  vfio: Introduce vGPU display irq type
  drm/i915/gvt: Register vGPU display event irq
  drm/i915/gvt: Deliver vGPU refresh event to userspace

 drivers/gpu/drm/i915/gvt/cmd_parser.c |   6 +-
 drivers/gpu/drm/i915/gvt/display.c    |  49 +++++-
 drivers/gpu/drm/i915/gvt/display.h    |   3 +
 drivers/gpu/drm/i915/gvt/gvt.h        |   6 +
 drivers/gpu/drm/i915/gvt/handlers.c   |  32 +++-
 drivers/gpu/drm/i915/gvt/hypercall.h  |   1 +
 drivers/gpu/drm/i915/gvt/interrupt.c  |   7 +
 drivers/gpu/drm/i915/gvt/interrupt.h  |   3 +
 drivers/gpu/drm/i915/gvt/kvmgt.c      | 230 +++++++++++++++++++++++++-
 drivers/gpu/drm/i915/gvt/mpt.h        |  17 ++
 include/uapi/linux/vfio.h             |  40 ++++-
 11 files changed, 375 insertions(+), 19 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 23+ messages in thread
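The skip-no-page-flip policy described in the cover letter can be modeled in plain C. This is a hypothetical stand-alone sketch of the heuristic, not the actual GVT-g code; the threshold value matches PAGEFLIP_DELAY_THR from patch 4 of this series:

```c
#include <stdbool.h>

/* Hypothetical model of the refresh-event heuristic: signal userspace at
 * a vblank only when a page flip was seen, unless no flip has happened
 * for PAGEFLIP_DELAY_THR vblanks -- then assume a single-framebuffer
 * guest and signal on every vblank. */
#define PAGEFLIP_DELAY_THR 10

static int no_pageflip_count;

/* Returns true when a refresh event should be sent for this vblank. */
bool should_signal_refresh(bool page_flip_seen)
{
	if (page_flip_seen) {
		no_pageflip_count = 0;	/* flips resumed: back to flip-bounded mode */
		return true;
	}
	if (no_pageflip_count > PAGEFLIP_DELAY_THR)
		return true;		/* single-framebuffer fallback: every vblank */
	no_pageflip_count++;
	return false;
}
```

As in the kernel-side logic, once the counter exceeds the threshold the code stays in the every-vblank mode until a real page flip resets the counter.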

* [PATCH v5 1/6] vfio: Define device specific irq type capability
  2019-08-16  2:35 [PATCH v5 0/6] Deliver vGPU display refresh event to userspace Tina Zhang
@ 2019-08-16  2:35 ` Tina Zhang
  2019-08-16 20:51   ` Alex Williamson
  2019-08-16  2:35 ` [PATCH v5 2/6] vfio: Introduce vGPU display irq type Tina Zhang
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 23+ messages in thread
From: Tina Zhang @ 2019-08-16  2:35 UTC (permalink / raw)
  To: intel-gvt-dev
  Cc: Tina Zhang, kraxel, alex.williamson, kvm, linux-kernel,
	hang.yuan, zhiyuan.lv, Eric Auger

Cap the number of irqs with fixed indexes and use capability chains
to chain device specific irqs.

Signed-off-by: Tina Zhang <tina.zhang@intel.com>
Signed-off-by: Eric Auger <eric.auger@redhat.com>
---
 include/uapi/linux/vfio.h | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 02bb7ad6e986..d83c9f136a5b 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -444,11 +444,27 @@ struct vfio_irq_info {
 #define VFIO_IRQ_INFO_MASKABLE		(1 << 1)
 #define VFIO_IRQ_INFO_AUTOMASKED	(1 << 2)
 #define VFIO_IRQ_INFO_NORESIZE		(1 << 3)
+#define VFIO_IRQ_INFO_FLAG_CAPS		(1 << 4) /* Info supports caps */
 	__u32	index;		/* IRQ index */
 	__u32	count;		/* Number of IRQs within this index */
+	__u32	cap_offset;	/* Offset within info struct of first cap */
 };
 #define VFIO_DEVICE_GET_IRQ_INFO	_IO(VFIO_TYPE, VFIO_BASE + 9)
 
+/*
+ * The irq type capability allows irqs unique to a specific device or
+ * class of devices to be exposed.
+ *
+ * The structures below define version 1 of this capability.
+ */
+#define VFIO_IRQ_INFO_CAP_TYPE      3
+
+struct vfio_irq_info_cap_type {
+	struct vfio_info_cap_header header;
+	__u32 type;     /* global per bus driver */
+	__u32 subtype;  /* type specific */
+};
+
 /**
  * VFIO_DEVICE_SET_IRQS - _IOW(VFIO_TYPE, VFIO_BASE + 10, struct vfio_irq_set)
  *
@@ -550,7 +566,8 @@ enum {
 	VFIO_PCI_MSIX_IRQ_INDEX,
 	VFIO_PCI_ERR_IRQ_INDEX,
 	VFIO_PCI_REQ_IRQ_INDEX,
-	VFIO_PCI_NUM_IRQS
+	VFIO_PCI_NUM_IRQS = 5	/* Fixed user ABI, IRQ indexes >=5 use   */
+				/* device specific cap to define content */
 };
 
 /*
-- 
2.17.1


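The capability chain added here follows the existing VFIO info-capability convention: `cap_offset` points at the first `vfio_info_cap_header` in the returned buffer, and each header's `next` field gives the offset of the following capability (0 terminates the chain). A sketch of how userspace might walk that chain — the struct mirrors the UAPI above, and the buffer in the usage below is synthetic:

```c
#include <stdint.h>
#include <stddef.h>

/* Mirrors struct vfio_info_cap_header from include/uapi/linux/vfio.h. */
struct vfio_info_cap_header {
	uint16_t id;
	uint16_t version;
	uint32_t next;		/* offset of next cap header, 0 if none */
};

/* Walk a capability chain inside an info buffer and return the header
 * with the requested id, or NULL if it is absent.  'offset' is the
 * cap_offset reported by the kernel, relative to the buffer start. */
static struct vfio_info_cap_header *
vfio_find_cap(void *buf, size_t buf_size, uint32_t offset, uint16_t id)
{
	while (offset &&
	       offset + sizeof(struct vfio_info_cap_header) <= buf_size) {
		struct vfio_info_cap_header *hdr =
			(struct vfio_info_cap_header *)((char *)buf + offset);

		if (hdr->id == id)
			return hdr;
		offset = hdr->next;
	}
	return NULL;
}
```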

* [PATCH v5 2/6] vfio: Introduce vGPU display irq type
  2019-08-16  2:35 [PATCH v5 0/6] Deliver vGPU display refresh event to userspace Tina Zhang
  2019-08-16  2:35 ` [PATCH v5 1/6] vfio: Define device specific irq type capability Tina Zhang
@ 2019-08-16  2:35 ` Tina Zhang
  2019-08-16 20:51   ` Alex Williamson
  2019-08-16  2:35 ` [PATCH v5 3/6] drm/i915/gvt: Register vGPU display event irq Tina Zhang
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 23+ messages in thread
From: Tina Zhang @ 2019-08-16  2:35 UTC (permalink / raw)
  To: intel-gvt-dev
  Cc: Tina Zhang, kraxel, alex.williamson, kvm, linux-kernel,
	hang.yuan, zhiyuan.lv

Introduce the vGPU specific irq type VFIO_IRQ_TYPE_GFX, with
VFIO_IRQ_SUBTYPE_GFX_DISPLAY_IRQ as the subtype for vGPU display.

Also introduce the vfio_irq_info_cap_display_plane_events capability to
notify userspace of the vGPU's plane update events.

v2:
- Add VFIO_IRQ_SUBTYPE_GFX_DISPLAY_IRQ description. (Alex & Kechen)
- Introduce vfio_irq_info_cap_display_plane_events. (Gerd & Alex)

Signed-off-by: Tina Zhang <tina.zhang@intel.com>
---
 include/uapi/linux/vfio.h | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index d83c9f136a5b..21ac69f0e1a9 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -465,6 +465,27 @@ struct vfio_irq_info_cap_type {
 	__u32 subtype;  /* type specific */
 };
 
+#define VFIO_IRQ_TYPE_GFX				(1)
+/*
+ * vGPU vendor sub-type
+ * vGPU device display related interrupts e.g. vblank/pageflip
+ */
+#define VFIO_IRQ_SUBTYPE_GFX_DISPLAY_IRQ		(1)
+
+/*
+ * Display capability of using one eventfd to notify user space with the
+ * vGPU's plane update events.
+ * cur_event_val: eventfd value stands for cursor plane change event.
+ * pri_event_val: eventfd value stands for primary plane change event.
+ */
+#define VFIO_IRQ_INFO_CAP_DISPLAY	4
+
+struct vfio_irq_info_cap_display_plane_events {
+	struct vfio_info_cap_header header;
+	__u64 cur_event_val;
+	__u64 pri_event_val;
+};
+
 /**
  * VFIO_DEVICE_SET_IRQS - _IOW(VFIO_TYPE, VFIO_BASE + 10, struct vfio_irq_set)
  *
-- 
2.17.1


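With the single-eventfd design above, the 8-byte eventfd counter is partitioned: each primary plane event adds `pri_event_val` and each cursor event adds `cur_event_val`, so one read() tells userspace which planes changed. A hedged decoding sketch — the constants here match patch 3 of this series (DISPLAY_PRI/CUR_REFRESH_EVENT_VAL), but userspace should always take the values from the capability rather than hard-coding them:

```c
#include <stdint.h>
#include <stdbool.h>

/* Event values as advertised in vfio_irq_info_cap_display_plane_events;
 * these match patch 3 of the series but are device-provided in general. */
#define PRI_EVENT_VAL	(1ULL << 56)
#define CUR_EVENT_VAL	(1ULL << 48)

struct refresh_events {
	bool primary;
	bool cursor;
};

/* Decode an 8-byte eventfd counter value read from the vblank eventfd.
 * The kernel adds the event value on each signal, so after read() resets
 * the counter, the quotient/remainder against the partition boundaries
 * recovers how many events of each kind accumulated. */
struct refresh_events decode_counter(uint64_t counter,
				     uint64_t pri_val, uint64_t cur_val)
{
	struct refresh_events ev = {
		.primary = (counter / pri_val) > 0,
		.cursor  = ((counter % pri_val) / cur_val) > 0,
	};
	return ev;
}
```

Note the partition assumes fewer than 256 cursor events accumulate between reads; a consumer draining the eventfd on every wakeup stays well inside that bound.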

* [PATCH v5 3/6] drm/i915/gvt: Register vGPU display event irq
  2019-08-16  2:35 [PATCH v5 0/6] Deliver vGPU display refresh event to userspace Tina Zhang
  2019-08-16  2:35 ` [PATCH v5 1/6] vfio: Define device specific irq type capability Tina Zhang
  2019-08-16  2:35 ` [PATCH v5 2/6] vfio: Introduce vGPU display irq type Tina Zhang
@ 2019-08-16  2:35 ` Tina Zhang
  2019-08-16  2:35 ` [PATCH v5 4/6] drm/i915/gvt: Deliver vGPU refresh event to userspace Tina Zhang
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 23+ messages in thread
From: Tina Zhang @ 2019-08-16  2:35 UTC (permalink / raw)
  To: intel-gvt-dev
  Cc: Tina Zhang, kraxel, alex.williamson, kvm, linux-kernel,
	hang.yuan, zhiyuan.lv

GVT-g emulates and injects the vGPU's display interrupts in kernel
space. However, the dma-buf based framebuffer consumer in userspace
(e.g. QEMU vfio/display) may also need to be notified of this event.

Register the display irq with subtype VFIO_IRQ_SUBTYPE_GFX_DISPLAY_IRQ
on each vGPU, so that the display interrupt event can be delivered to
userspace through an eventfd.

Signed-off-by: Tina Zhang <tina.zhang@intel.com>
---
 drivers/gpu/drm/i915/gvt/display.c   | 10 +++-
 drivers/gpu/drm/i915/gvt/display.h   |  3 ++
 drivers/gpu/drm/i915/gvt/gvt.h       |  2 +
 drivers/gpu/drm/i915/gvt/hypercall.h |  1 +
 drivers/gpu/drm/i915/gvt/kvmgt.c     | 71 ++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/gvt/mpt.h       | 17 +++++++
 6 files changed, 102 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/display.c b/drivers/gpu/drm/i915/gvt/display.c
index e1c313da6c00..1a0a4ae4826e 100644
--- a/drivers/gpu/drm/i915/gvt/display.c
+++ b/drivers/gpu/drm/i915/gvt/display.c
@@ -506,16 +506,22 @@ void intel_vgpu_clean_display(struct intel_vgpu *vgpu)
 int intel_vgpu_init_display(struct intel_vgpu *vgpu, u64 resolution)
 {
 	struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
+	int ret;
 
 	intel_vgpu_init_i2c_edid(vgpu);
 
 	if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv) ||
 	    IS_COFFEELAKE(dev_priv))
-		return setup_virtual_dp_monitor(vgpu, PORT_D, GVT_DP_D,
+		ret = setup_virtual_dp_monitor(vgpu, PORT_D, GVT_DP_D,
 						resolution);
 	else
-		return setup_virtual_dp_monitor(vgpu, PORT_B, GVT_DP_B,
+		ret = setup_virtual_dp_monitor(vgpu, PORT_B, GVT_DP_B,
 						resolution);
+
+	if (ret == 0)
+		intel_gvt_hypervisor_register_display_irq(vgpu);
+
+	return ret;
 }
 
 /**
diff --git a/drivers/gpu/drm/i915/gvt/display.h b/drivers/gpu/drm/i915/gvt/display.h
index a87f33e6a23c..ba07fbef9194 100644
--- a/drivers/gpu/drm/i915/gvt/display.h
+++ b/drivers/gpu/drm/i915/gvt/display.h
@@ -112,6 +112,9 @@
 #define SBI_ADDR_OFFSET_SHIFT           16
 #define SBI_ADDR_OFFSET_MASK            (0xffff << SBI_ADDR_OFFSET_SHIFT)
 
+#define DISPLAY_PRI_REFRESH_EVENT_VAL		(1UL << 56)
+#define DISPLAY_CUR_REFRESH_EVENT_VAL		(1UL << 48)
+
 struct intel_vgpu_sbi_register {
 	unsigned int offset;
 	u32 value;
diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
index f5a328b5290a..cd29ea28d7ed 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.h
+++ b/drivers/gpu/drm/i915/gvt/gvt.h
@@ -201,6 +201,8 @@ struct intel_vgpu {
 		struct mdev_device *mdev;
 		struct vfio_region *region;
 		int num_regions;
+		struct vfio_irq *irq;
+		int num_irqs;
 		struct eventfd_ctx *intx_trigger;
 		struct eventfd_ctx *msi_trigger;
 
diff --git a/drivers/gpu/drm/i915/gvt/hypercall.h b/drivers/gpu/drm/i915/gvt/hypercall.h
index 4862fb12778e..be33f20f3bc1 100644
--- a/drivers/gpu/drm/i915/gvt/hypercall.h
+++ b/drivers/gpu/drm/i915/gvt/hypercall.h
@@ -68,6 +68,7 @@ struct intel_gvt_mpt {
 			     bool map);
 	int (*set_opregion)(void *vgpu);
 	int (*set_edid)(void *vgpu, int port_num);
+	int (*register_display_irq)(void *vgpu);
 	int (*get_vfio_device)(void *vgpu);
 	void (*put_vfio_device)(void *vgpu);
 	bool (*is_valid_gfn)(unsigned long handle, unsigned long gfn);
diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
index a68addf95c23..fd1633342e53 100644
--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
+++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
@@ -78,6 +78,19 @@ struct vfio_region {
 	void				*data;
 };
 
+struct intel_vgpu_irqops {
+	int (*add_capability)(struct intel_vgpu *vgpu,
+			   struct vfio_info_cap *caps);
+};
+
+struct vfio_irq {
+	u32	type;
+	u32	subtype;
+	u32	flags;
+	u32	count;
+	const struct intel_vgpu_irqops *ops;
+};
+
 struct vfio_edid_region {
 	struct vfio_region_gfx_edid vfio_edid_regs;
 	void *edid_blob;
@@ -635,6 +648,59 @@ static int kvmgt_set_edid(void *p_vgpu, int port_num)
 	return ret;
 }
 
+static int add_display_irq_capability(struct intel_vgpu *vgpu,
+			   struct vfio_info_cap *caps)
+{
+	struct vfio_irq_info_cap_display_plane_events cap = {
+		.header.id = VFIO_IRQ_INFO_CAP_DISPLAY,
+		.header.version = 1,
+		.cur_event_val = DISPLAY_CUR_REFRESH_EVENT_VAL,
+		.pri_event_val = DISPLAY_PRI_REFRESH_EVENT_VAL,
+	};
+
+	return vfio_info_add_capability(caps, &cap.header, sizeof(cap));
+}
+
+
+static const struct intel_vgpu_irqops intel_vgpu_irqops_display = {
+	.add_capability = add_display_irq_capability,
+};
+
+static int intel_vgpu_register_irq(struct intel_vgpu *vgpu,
+				   unsigned int type, unsigned int subtype,
+				   u32 count, u32 flags,
+				   const struct intel_vgpu_irqops *ops)
+{
+	struct vfio_irq *irq;
+
+	irq = krealloc(vgpu->vdev.irq,
+			(vgpu->vdev.num_irqs + 1) * sizeof(*irq),
+			GFP_KERNEL);
+	if (!irq)
+		return -ENOMEM;
+
+	vgpu->vdev.irq = irq;
+	vgpu->vdev.irq[vgpu->vdev.num_irqs].type = type;
+	vgpu->vdev.irq[vgpu->vdev.num_irqs].subtype = subtype;
+	vgpu->vdev.irq[vgpu->vdev.num_irqs].count = count;
+	vgpu->vdev.irq[vgpu->vdev.num_irqs].flags = flags;
+	vgpu->vdev.irq[vgpu->vdev.num_irqs].ops = ops;
+	vgpu->vdev.num_irqs++;
+	return 0;
+}
+
+static int kvmgt_register_display_irq(void *p_vgpu)
+{
+	struct intel_vgpu *vgpu = (struct intel_vgpu *)p_vgpu;
+
+	intel_vgpu_register_irq(vgpu, VFIO_IRQ_TYPE_GFX,
+				VFIO_IRQ_SUBTYPE_GFX_DISPLAY_IRQ,
+				1,
+				VFIO_IRQ_INFO_MASKABLE | VFIO_IRQ_INFO_EVENTFD,
+				&intel_vgpu_irqops_display);
+	return 0;
+}
+
 static void kvmgt_put_vfio_device(void *vgpu)
 {
 	if (WARN_ON(!((struct intel_vgpu *)vgpu)->vdev.vfio_device))
@@ -1838,6 +1904,10 @@ static void kvmgt_detach_vgpu(void *p_vgpu)
 	vgpu->vdev.num_regions = 0;
 	kfree(vgpu->vdev.region);
 	vgpu->vdev.region = NULL;
+
+	vgpu->vdev.num_irqs = 0;
+	kfree(vgpu->vdev.irq);
+	vgpu->vdev.irq = NULL;
 }
 
 static int kvmgt_inject_msi(unsigned long handle, u32 addr, u16 data)
@@ -2039,6 +2109,7 @@ static struct intel_gvt_mpt kvmgt_mpt = {
 	.dma_unmap_guest_page = kvmgt_dma_unmap_guest_page,
 	.set_opregion = kvmgt_set_opregion,
 	.set_edid = kvmgt_set_edid,
+	.register_display_irq = kvmgt_register_display_irq,
 	.get_vfio_device = kvmgt_get_vfio_device,
 	.put_vfio_device = kvmgt_put_vfio_device,
 	.is_valid_gfn = kvmgt_is_valid_gfn,
diff --git a/drivers/gpu/drm/i915/gvt/mpt.h b/drivers/gpu/drm/i915/gvt/mpt.h
index 0f9440128123..abf4a69920d3 100644
--- a/drivers/gpu/drm/i915/gvt/mpt.h
+++ b/drivers/gpu/drm/i915/gvt/mpt.h
@@ -330,6 +330,23 @@ static inline int intel_gvt_hypervisor_set_edid(struct intel_vgpu *vgpu,
 	return intel_gvt_host.mpt->set_edid(vgpu, port_num);
 }
 
+/**
+ * intel_gvt_hypervisor_register_display_irq - register vGPU display irq
+ * @vgpu: a vGPU
+ *
+ *
+ * Returns:
+ * Zero on success, negative error code if failed.
+ */
+static inline int intel_gvt_hypervisor_register_display_irq(
+						struct intel_vgpu *vgpu)
+{
+	if (!intel_gvt_host.mpt->register_display_irq)
+		return 0;
+
+	return intel_gvt_host.mpt->register_display_irq(vgpu);
+}
+
 /**
  * intel_gvt_hypervisor_get_vfio_device - increase vfio device ref count
  * @vgpu: a vGPU
-- 
2.17.1


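Once the device-specific irq is registered, a userspace consumer would attach its eventfd with VFIO_DEVICE_SET_IRQS on the index reported past VFIO_PCI_NUM_IRQS. A hypothetical helper building that payload — the struct and flag values mirror include/uapi/linux/vfio.h, and `build_trigger_set` is an illustrative name, not an existing API:

```c
#include <stdint.h>
#include <string.h>
#include <stdlib.h>

/* Mirrors struct vfio_irq_set from include/uapi/linux/vfio.h. */
struct vfio_irq_set {
	uint32_t argsz;
	uint32_t flags;
	uint32_t index;
	uint32_t start;
	uint32_t count;
	uint8_t  data[];
};

#define VFIO_IRQ_SET_DATA_EVENTFD	(1 << 2)
#define VFIO_IRQ_SET_ACTION_TRIGGER	(1 << 5)

/* Build the VFIO_DEVICE_SET_IRQS payload that attaches 'eventfd' as the
 * trigger for the device specific irq at 'index'; the caller would pass
 * the result to ioctl(device_fd, VFIO_DEVICE_SET_IRQS, set). */
struct vfio_irq_set *build_trigger_set(uint32_t index, int eventfd)
{
	size_t sz = sizeof(struct vfio_irq_set) + sizeof(int32_t);
	struct vfio_irq_set *set = calloc(1, sz);

	if (!set)
		return NULL;
	set->argsz = sz;
	set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
	set->index = index;	/* e.g. VFIO_PCI_NUM_IRQS + i for this irq */
	set->start = 0;
	set->count = 1;
	memcpy(set->data, &eventfd, sizeof(int32_t));
	return set;
}
```

Masking and unmasking the display event would use the same payload with ACTION_MASK or ACTION_UNMASK plus DATA_NONE, matching the handlers added in patch 4.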

* [PATCH v5 4/6] drm/i915/gvt: Deliver vGPU refresh event to userspace
  2019-08-16  2:35 [PATCH v5 0/6] Deliver vGPU display refresh event to userspace Tina Zhang
                   ` (2 preceding siblings ...)
  2019-08-16  2:35 ` [PATCH v5 3/6] drm/i915/gvt: Register vGPU display event irq Tina Zhang
@ 2019-08-16  2:35 ` Tina Zhang
  2019-08-26  7:55   ` Zhenyu Wang
  2019-08-16  2:35 ` [PATCH v5 5/6] drm/i915/gvt: Deliver async primary plane page flip events at vblank Tina Zhang
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 23+ messages in thread
From: Tina Zhang @ 2019-08-16  2:35 UTC (permalink / raw)
  To: intel-gvt-dev
  Cc: Tina Zhang, kraxel, alex.williamson, kvm, linux-kernel,
	hang.yuan, zhiyuan.lv, Kechen Lu

Deliver the display refresh events to userspace. Userspace can use the
irq mask/unmask mechanism to disable or enable the event delivery.

Delivering a refresh event at each vblank safely avoids tearing and
event flooding, but there is still room for optimization.

For the normal case, deliver the page flip refresh event at each vblank;
in other words, events are bounded by vblanks. Skipping some events
brings a performance improvement without hurting the user experience.

For the single framebuffer case, deliver the refresh events to userspace
at every vblank. The heuristic runs at each vblank and uses pageflip_count
increments to determine whether no page flip has happened for a certain
period, in which case the guest is treated as using a single framebuffer.
Although this heuristic can decide incorrectly depending on guest
behavior, e.g. when no cursor movement happens, the user experience is
not harmed and the front buffer is still correctly acquired. Meanwhile,
in a real single framebuffer case, the user experience is better than
with page flip events only.

Additionally, to reduce the event delivery footprint, a single eventfd
with an 8-byte counter partition is used.

v2:
- Support vfio_irq_info_cap_display_plane_events. (Tina)

Signed-off-by: Tina Zhang <tina.zhang@intel.com>
Signed-off-by: Kechen Lu <kechen.lu@intel.com>
---
 drivers/gpu/drm/i915/gvt/display.c |  22 ++++
 drivers/gpu/drm/i915/gvt/gvt.h     |   2 +
 drivers/gpu/drm/i915/gvt/kvmgt.c   | 159 +++++++++++++++++++++++++++--
 3 files changed, 174 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/display.c b/drivers/gpu/drm/i915/gvt/display.c
index 1a0a4ae4826e..616285e4a014 100644
--- a/drivers/gpu/drm/i915/gvt/display.c
+++ b/drivers/gpu/drm/i915/gvt/display.c
@@ -34,6 +34,8 @@
 
 #include "i915_drv.h"
 #include "gvt.h"
+#include <uapi/linux/vfio.h>
+#include <drm/drm_plane.h>
 
 static int get_edp_pipe(struct intel_vgpu *vgpu)
 {
@@ -387,6 +389,8 @@ void intel_gvt_check_vblank_emulation(struct intel_gvt *gvt)
 	mutex_unlock(&gvt->lock);
 }
 
+#define PAGEFLIP_DELAY_THR 10
+
 static void emulate_vblank_on_pipe(struct intel_vgpu *vgpu, int pipe)
 {
 	struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
@@ -396,7 +400,10 @@ static void emulate_vblank_on_pipe(struct intel_vgpu *vgpu, int pipe)
 		[PIPE_B] = PIPE_B_VBLANK,
 		[PIPE_C] = PIPE_C_VBLANK,
 	};
+	int pri_flip_event = SKL_FLIP_EVENT(pipe, PLANE_PRIMARY);
 	int event;
+	u64 eventfd_signal_val = 0;
+	static int no_pageflip_count;
 
 	if (pipe < PIPE_A || pipe > PIPE_C)
 		return;
@@ -407,11 +414,26 @@ static void emulate_vblank_on_pipe(struct intel_vgpu *vgpu, int pipe)
 		if (!pipe_is_enabled(vgpu, pipe))
 			continue;
 
+		if (event == pri_flip_event)
+			eventfd_signal_val |= DISPLAY_PRI_REFRESH_EVENT_VAL;
+
 		intel_vgpu_trigger_virtual_event(vgpu, event);
 	}
 
+	if (eventfd_signal_val)
+		no_pageflip_count = 0;
+	else if (!eventfd_signal_val && no_pageflip_count > PAGEFLIP_DELAY_THR)
+		eventfd_signal_val |= DISPLAY_PRI_REFRESH_EVENT_VAL;
+	else
+		no_pageflip_count++;
+
+	if (vgpu->vdev.vblank_trigger && !vgpu->vdev.display_event_mask &&
+		eventfd_signal_val)
+		eventfd_signal(vgpu->vdev.vblank_trigger, eventfd_signal_val);
+
 	if (pipe_is_enabled(vgpu, pipe)) {
 		vgpu_vreg_t(vgpu, PIPE_FRMCOUNT_G4X(pipe))++;
+
 		intel_vgpu_trigger_virtual_event(vgpu, vblank_event[pipe]);
 	}
 }
diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
index cd29ea28d7ed..6c8ed030c30b 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.h
+++ b/drivers/gpu/drm/i915/gvt/gvt.h
@@ -205,6 +205,8 @@ struct intel_vgpu {
 		int num_irqs;
 		struct eventfd_ctx *intx_trigger;
 		struct eventfd_ctx *msi_trigger;
+		struct eventfd_ctx *vblank_trigger;
+		u32 display_event_mask;
 
 		/*
 		 * Two caches are used to avoid mapping duplicated pages (eg.
diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
index fd1633342e53..9ace1f4ff9eb 100644
--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
+++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
@@ -1250,6 +1250,8 @@ static int intel_vgpu_get_irq_count(struct intel_vgpu *vgpu, int type)
 {
 	if (type == VFIO_PCI_INTX_IRQ_INDEX || type == VFIO_PCI_MSI_IRQ_INDEX)
 		return 1;
+	else if (type < VFIO_PCI_NUM_IRQS + vgpu->vdev.num_irqs)
+		return vgpu->vdev.irq[type - VFIO_PCI_NUM_IRQS].count;
 
 	return 0;
 }
@@ -1297,7 +1299,60 @@ static int intel_vgpu_set_msi_trigger(struct intel_vgpu *vgpu,
 	return 0;
 }
 
-static int intel_vgpu_set_irqs(struct intel_vgpu *vgpu, u32 flags,
+static int intel_vgu_set_display_irq_mask(struct intel_vgpu *vgpu,
+		unsigned int index, unsigned int start, unsigned int count,
+		u32 flags, void *data)
+{
+	if (start != 0 || count > 2)
+		return -EINVAL;
+
+	if (flags & VFIO_IRQ_SET_DATA_NONE)
+		vgpu->vdev.display_event_mask |= 1;
+
+	return 0;
+}
+
+static int intel_vgu_set_display_irq_unmask(struct intel_vgpu *vgpu,
+		unsigned int index, unsigned int start, unsigned int count,
+		u32 flags, void *data)
+{
+	if (start != 0 || count > 2)
+		return -EINVAL;
+
+	if (flags & VFIO_IRQ_SET_DATA_NONE)
+		vgpu->vdev.display_event_mask &= 0;
+
+	return 0;
+}
+
+static int intel_vgpu_set_display_event_trigger(struct intel_vgpu *vgpu,
+		unsigned int index, unsigned int start, unsigned int count,
+		u32 flags, void *data)
+{
+	struct eventfd_ctx *trigger;
+
+	if (flags & VFIO_IRQ_SET_DATA_EVENTFD) {
+		int fd = *(int *)data;
+
+		trigger = eventfd_ctx_fdget(fd);
+		if (IS_ERR(trigger)) {
+			gvt_vgpu_err("eventfd_ctx_fdget failed\n");
+			return PTR_ERR(trigger);
+		}
+		vgpu->vdev.vblank_trigger = trigger;
+		vgpu->vdev.display_event_mask = 0;
+	} else if ((flags & VFIO_IRQ_SET_DATA_NONE) && !count) {
+		trigger = vgpu->vdev.vblank_trigger;
+		if (trigger) {
+			eventfd_ctx_put(trigger);
+			vgpu->vdev.vblank_trigger = NULL;
+		}
+	}
+
+	return 0;
+}
+
+int intel_vgpu_set_irqs(struct intel_vgpu *vgpu, u32 flags,
 		unsigned int index, unsigned int start, unsigned int count,
 		void *data)
 {
@@ -1330,6 +1385,35 @@ static int intel_vgpu_set_irqs(struct intel_vgpu *vgpu, u32 flags,
 			break;
 		}
 		break;
+	default:
+	{
+		int i;
+
+		if (index >= VFIO_PCI_NUM_IRQS +
+					vgpu->vdev.num_irqs)
+			return -EINVAL;
+		index =
+			array_index_nospec(index,
+						VFIO_PCI_NUM_IRQS +
+						vgpu->vdev.num_irqs);
+
+		i = index - VFIO_PCI_NUM_IRQS;
+		if (vgpu->vdev.irq[i].type == VFIO_IRQ_TYPE_GFX &&
+		    vgpu->vdev.irq[i].subtype ==
+		    VFIO_IRQ_SUBTYPE_GFX_DISPLAY_IRQ) {
+			switch (flags & VFIO_IRQ_SET_ACTION_TYPE_MASK) {
+			case VFIO_IRQ_SET_ACTION_MASK:
+				func = intel_vgu_set_display_irq_mask;
+				break;
+			case VFIO_IRQ_SET_ACTION_UNMASK:
+				func = intel_vgu_set_display_irq_unmask;
+				break;
+			case VFIO_IRQ_SET_ACTION_TRIGGER:
+				func = intel_vgpu_set_display_event_trigger;
+				break;
+			}
+		}
+	}
 	}
 
 	if (!func)
@@ -1361,7 +1445,7 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
 		info.flags |= VFIO_DEVICE_FLAGS_RESET;
 		info.num_regions = VFIO_PCI_NUM_REGIONS +
 				vgpu->vdev.num_regions;
-		info.num_irqs = VFIO_PCI_NUM_IRQS;
+		info.num_irqs = VFIO_PCI_NUM_IRQS + vgpu->vdev.num_irqs;
 
 		return copy_to_user((void __user *)arg, &info, minsz) ?
 			-EFAULT : 0;
@@ -1521,32 +1605,88 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
 			-EFAULT : 0;
 	} else if (cmd == VFIO_DEVICE_GET_IRQ_INFO) {
 		struct vfio_irq_info info;
+		struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
+		unsigned int i;
+		int ret;
 
 		minsz = offsetofend(struct vfio_irq_info, count);
 
 		if (copy_from_user(&info, (void __user *)arg, minsz))
 			return -EFAULT;
 
-		if (info.argsz < minsz || info.index >= VFIO_PCI_NUM_IRQS)
+		if (info.argsz < minsz)
 			return -EINVAL;
 
 		switch (info.index) {
 		case VFIO_PCI_INTX_IRQ_INDEX:
 		case VFIO_PCI_MSI_IRQ_INDEX:
+			info.flags = VFIO_IRQ_INFO_EVENTFD;
 			break;
-		default:
+		case VFIO_PCI_MSIX_IRQ_INDEX:
+		case VFIO_PCI_ERR_IRQ_INDEX:
+		case VFIO_PCI_REQ_IRQ_INDEX:
 			return -EINVAL;
-		}
+		default:
+		{
+			struct vfio_irq_info_cap_type cap_type = {
+				.header.id = VFIO_IRQ_INFO_CAP_TYPE,
+				.header.version = 1 };
 
-		info.flags = VFIO_IRQ_INFO_EVENTFD;
+			if (info.index >= VFIO_PCI_NUM_IRQS +
+					vgpu->vdev.num_irqs)
+				return -EINVAL;
+			info.index =
+				array_index_nospec(info.index,
+						VFIO_PCI_NUM_IRQS +
+						vgpu->vdev.num_irqs);
+
+			i = info.index - VFIO_PCI_NUM_IRQS;
+
+			info.flags = vgpu->vdev.irq[i].flags;
+			cap_type.type = vgpu->vdev.irq[i].type;
+			cap_type.subtype = vgpu->vdev.irq[i].subtype;
+
+			ret = vfio_info_add_capability(&caps,
+						&cap_type.header,
+						sizeof(cap_type));
+			if (ret)
+				return ret;
+
+			if (vgpu->vdev.irq[i].ops->add_capability) {
+				ret = vgpu->vdev.irq[i].ops->add_capability(vgpu,
+									    &caps);
+				if (ret)
+					return ret;
+			}
+		}
+		}
 
 		info.count = intel_vgpu_get_irq_count(vgpu, info.index);
 
 		if (info.index == VFIO_PCI_INTX_IRQ_INDEX)
 			info.flags |= (VFIO_IRQ_INFO_MASKABLE |
 				       VFIO_IRQ_INFO_AUTOMASKED);
-		else
-			info.flags |= VFIO_IRQ_INFO_NORESIZE;
+
+		if (caps.size) {
+			info.flags |= VFIO_IRQ_INFO_FLAG_CAPS;
+			if (info.argsz < sizeof(info) + caps.size) {
+				info.argsz = sizeof(info) + caps.size;
+				info.cap_offset = 0;
+			} else {
+				vfio_info_cap_shift(&caps, sizeof(info));
+				if (copy_to_user((void __user *)arg +
+						  sizeof(info), caps.buf,
+						  caps.size)) {
+					kfree(caps.buf);
+					return -EFAULT;
+				}
+				info.cap_offset = sizeof(info);
+				if (offsetofend(struct vfio_irq_info, cap_offset) > minsz)
+					minsz = offsetofend(struct vfio_irq_info, cap_offset);
+			}
+
+			kfree(caps.buf);
+		}
 
 		return copy_to_user((void __user *)arg, &info, minsz) ?
 			-EFAULT : 0;
@@ -1565,7 +1705,8 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
 			int max = intel_vgpu_get_irq_count(vgpu, hdr.index);
 
 			ret = vfio_set_irqs_validate_and_prepare(&hdr, max,
-						VFIO_PCI_NUM_IRQS, &data_size);
+					VFIO_PCI_NUM_IRQS + vgpu->vdev.num_irqs,
+								 &data_size);
 			if (ret) {
 				gvt_vgpu_err("intel:vfio_set_irqs_validate_and_prepare failed\n");
 				return -EINVAL;
-- 
2.17.1



* [PATCH v5 5/6] drm/i915/gvt: Deliver async primary plane page flip events at vblank
  2019-08-16  2:35 [PATCH v5 0/6] Deliver vGPU display refresh event to userspace Tina Zhang
                   ` (3 preceding siblings ...)
  2019-08-16  2:35 ` [PATCH v5 4/6] drm/i915/gvt: Deliver vGPU refresh event to userspace Tina Zhang
@ 2019-08-16  2:35 ` Tina Zhang
  2019-08-16  2:35 ` [PATCH v5 6/6] drm/i915/gvt: Add cursor plane reg update trap emulation handler Tina Zhang
  2019-09-03  1:26 ` [PATCH v5 0/6] Deliver vGPU display refresh event to userspace Zhang, Tina
  6 siblings, 0 replies; 23+ messages in thread
From: Tina Zhang @ 2019-08-16  2:35 UTC (permalink / raw)
  To: intel-gvt-dev
  Cc: Kechen Lu, kraxel, alex.williamson, kvm, linux-kernel, hang.yuan,
	zhiyuan.lv, Tina Zhang

From: Kechen Lu <kechen.lu@intel.com>

Previously only synchronous primary plane page flip events were checked
and delivered as display refresh events. This patch delivers asynchronous
primary plane page flip events as well, bounded by vblanks.

To deliver async page flips correctly, a new async flip bitmap is
introduced; the vblank emulation handler checks the bits set in it.

Signed-off-by: Kechen Lu <kechen.lu@intel.com>
Signed-off-by: Tina Zhang <tina.zhang@intel.com>
---
 drivers/gpu/drm/i915/gvt/cmd_parser.c |  6 ++++--
 drivers/gpu/drm/i915/gvt/display.c    | 10 ++++++++++
 drivers/gpu/drm/i915/gvt/gvt.h        |  2 ++
 drivers/gpu/drm/i915/gvt/handlers.c   |  5 +++--
 4 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/cmd_parser.c b/drivers/gpu/drm/i915/gvt/cmd_parser.c
index ab002cfd3cab..51c3a1b10a54 100644
--- a/drivers/gpu/drm/i915/gvt/cmd_parser.c
+++ b/drivers/gpu/drm/i915/gvt/cmd_parser.c
@@ -1330,9 +1330,11 @@ static int gen8_update_plane_mmio_from_mi_display_flip(
 	if (info->plane == PLANE_PRIMARY)
 		vgpu_vreg_t(vgpu, PIPE_FLIPCOUNT_G4X(info->pipe))++;
 
-	if (info->async_flip)
+	if (info->async_flip) {
 		intel_vgpu_trigger_virtual_event(vgpu, info->event);
-	else
+		set_bit(info->plane,
+			vgpu->display.async_flip_event[info->pipe]);
+	} else
 		set_bit(info->event, vgpu->irq.flip_done_event[info->pipe]);
 
 	return 0;
diff --git a/drivers/gpu/drm/i915/gvt/display.c b/drivers/gpu/drm/i915/gvt/display.c
index 616285e4a014..35436e845227 100644
--- a/drivers/gpu/drm/i915/gvt/display.c
+++ b/drivers/gpu/drm/i915/gvt/display.c
@@ -420,6 +420,16 @@ static void emulate_vblank_on_pipe(struct intel_vgpu *vgpu, int pipe)
 		intel_vgpu_trigger_virtual_event(vgpu, event);
 	}
 
+	for_each_set_bit(event, vgpu->display.async_flip_event[pipe],
+			I915_MAX_PLANES) {
+		clear_bit(event, vgpu->display.async_flip_event[pipe]);
+		if (!pipe_is_enabled(vgpu, pipe))
+			continue;
+
+		if (event == PLANE_PRIMARY)
+			eventfd_signal_val |= DISPLAY_PRI_REFRESH_EVENT_VAL;
+	}
+
 	if (eventfd_signal_val)
 		no_pageflip_count = 0;
 	else if (!eventfd_signal_val && no_pageflip_count > PAGEFLIP_DELAY_THR)
diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
index 6c8ed030c30b..90e35e436666 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.h
+++ b/drivers/gpu/drm/i915/gvt/gvt.h
@@ -128,6 +128,8 @@ struct intel_vgpu_display {
 	struct intel_vgpu_i2c_edid i2c_edid;
 	struct intel_vgpu_port ports[I915_MAX_PORTS];
 	struct intel_vgpu_sbi sbi;
+	DECLARE_BITMAP(async_flip_event[I915_MAX_PIPES],
+		       I915_MAX_PLANES);
 };
 
 struct vgpu_sched_ctl {
diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
index 18f01eeb2510..92ff037996a2 100644
--- a/drivers/gpu/drm/i915/gvt/handlers.c
+++ b/drivers/gpu/drm/i915/gvt/handlers.c
@@ -758,9 +758,10 @@ static int pri_surf_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
 
 	vgpu_vreg_t(vgpu, PIPE_FLIPCOUNT_G4X(pipe))++;
 
-	if (vgpu_vreg_t(vgpu, DSPCNTR(pipe)) & PLANE_CTL_ASYNC_FLIP)
+	if (vgpu_vreg_t(vgpu, DSPCNTR(pipe)) & PLANE_CTL_ASYNC_FLIP) {
 		intel_vgpu_trigger_virtual_event(vgpu, event);
-	else
+		set_bit(PLANE_PRIMARY, vgpu->display.async_flip_event[pipe]);
+	} else
 		set_bit(event, vgpu->irq.flip_done_event[pipe]);
 
 	return 0;
-- 
2.17.1


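The defer-to-vblank pattern in this patch — record an async flip in a per-pipe bitmap at flip time, then drain the bitmap in the vblank handler — can be modeled outside the kernel with plain C stand-ins for set_bit/for_each_set_bit. A single-pipe sketch with illustrative plane numbering:

```c
#include <stdint.h>
#include <stdbool.h>

enum { PLANE_PRIMARY = 0, PLANE_SPRITE = 1, PLANE_CURSOR = 2 };

static uint32_t async_flip_event;	/* one bit per plane, single pipe */

/* Called from the flip handler for an async flip: remember the plane. */
void note_async_flip(int plane)
{
	async_flip_event |= 1u << plane;
}

/* Called from the vblank handler: drain pending async flips and report
 * whether the primary plane flipped since the last vblank. */
bool drain_async_flips_primary(void)
{
	bool primary = async_flip_event & (1u << PLANE_PRIMARY);

	async_flip_event = 0;	/* clear all bits, as the vblank loop does */
	return primary;
}
```

The kernel version does the same per pipe with DECLARE_BITMAP storage, and additionally skips disabled pipes while draining.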

* [PATCH v5 6/6] drm/i915/gvt: Add cursor plane reg update trap emulation handler
  2019-08-16  2:35 [PATCH v5 0/6] Deliver vGPU display refresh event to userspace Tina Zhang
                   ` (4 preceding siblings ...)
  2019-08-16  2:35 ` [PATCH v5 5/6] drm/i915/gvt: Deliver async primary plane page flip events at vblank Tina Zhang
@ 2019-08-16  2:35 ` Tina Zhang
  2019-09-03  1:26 ` [PATCH v5 0/6] Deliver vGPU display refresh event to userspace Zhang, Tina
  6 siblings, 0 replies; 23+ messages in thread
From: Tina Zhang @ 2019-08-16  2:35 UTC (permalink / raw)
  To: intel-gvt-dev
  Cc: Kechen Lu, kraxel, alex.williamson, kvm, linux-kernel, hang.yuan,
	zhiyuan.lv, Tina Zhang

From: Kechen Lu <kechen.lu@intel.com>

This patch adds a trap handler for cursor plane CURBASE register
updates in order to:

- Deliver the cursor refresh event at each vblank emulation; the
flip_done_event bit check happens here. If a cursor plane update has
occurred, deliver the cursor refresh event.

- Support both sync and async cursor plane updates and report the
corresponding cursor plane flip interrupts.

v2:
- As Zhenyu suggested, experiments show that the Windows driver
programs CURBASE and CURPOS together, as does the Linux i915 driver,
so tracking only CURBASE is enough.

Signed-off-by: Kechen Lu <kechen.lu@intel.com>
Signed-off-by: Tina Zhang <tina.zhang@intel.com>
---
 drivers/gpu/drm/i915/gvt/display.c   |  7 +++++++
 drivers/gpu/drm/i915/gvt/handlers.c  | 27 ++++++++++++++++++++++++---
 drivers/gpu/drm/i915/gvt/interrupt.c |  7 +++++++
 drivers/gpu/drm/i915/gvt/interrupt.h |  3 +++
 4 files changed, 41 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/display.c b/drivers/gpu/drm/i915/gvt/display.c
index 35436e845227..78d43a81bbeb 100644
--- a/drivers/gpu/drm/i915/gvt/display.c
+++ b/drivers/gpu/drm/i915/gvt/display.c
@@ -401,6 +401,7 @@ static void emulate_vblank_on_pipe(struct intel_vgpu *vgpu, int pipe)
 		[PIPE_C] = PIPE_C_VBLANK,
 	};
 	int pri_flip_event = SKL_FLIP_EVENT(pipe, PLANE_PRIMARY);
+	int cur_flip_event = CURSOR_A_FLIP_DONE + pipe;
 	int event;
 	u64 eventfd_signal_val = 0;
 	static int no_pageflip_count;
@@ -417,6 +418,9 @@ static void emulate_vblank_on_pipe(struct intel_vgpu *vgpu, int pipe)
 		if (event == pri_flip_event)
 			eventfd_signal_val |= DISPLAY_PRI_REFRESH_EVENT_VAL;
 
+		if (event == cur_flip_event)
+			eventfd_signal_val |= DISPLAY_CUR_REFRESH_EVENT_VAL;
+
 		intel_vgpu_trigger_virtual_event(vgpu, event);
 	}
 
@@ -428,6 +432,9 @@ static void emulate_vblank_on_pipe(struct intel_vgpu *vgpu, int pipe)
 
 		if (event == PLANE_PRIMARY)
 			eventfd_signal_val |= DISPLAY_PRI_REFRESH_EVENT_VAL;
+
+		if (event == PLANE_CURSOR)
+			eventfd_signal_val |= DISPLAY_CUR_REFRESH_EVENT_VAL;
 	}
 
 	if (eventfd_signal_val)
diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
index 92ff037996a2..53ef96562d48 100644
--- a/drivers/gpu/drm/i915/gvt/handlers.c
+++ b/drivers/gpu/drm/i915/gvt/handlers.c
@@ -767,6 +767,27 @@ static int pri_surf_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
 	return 0;
 }
 
+#define CURBASE_TO_PIPE(offset) \
+	calc_index(offset, _CURABASE, _CURBBASE, 0, CURBASE(PIPE_C))
+
+static int cur_surf_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
+		void *p_data, unsigned int bytes)
+{
+	struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
+	u32 pipe = CURBASE_TO_PIPE(offset);
+	int event = CURSOR_A_FLIP_DONE + pipe;
+
+	write_vreg(vgpu, offset, p_data, bytes);
+
+	if (vgpu_vreg_t(vgpu, CURCNTR(pipe)) & PLANE_CTL_ASYNC_FLIP) {
+		intel_vgpu_trigger_virtual_event(vgpu, event);
+		set_bit(PLANE_CURSOR, vgpu->display.async_flip_event[pipe]);
+	} else
+		set_bit(event, vgpu->irq.flip_done_event[pipe]);
+
+	return 0;
+}
+
 #define SPRSURF_TO_PIPE(offset) \
 	calc_index(offset, _SPRA_SURF, _SPRB_SURF, 0, SPRSURF(PIPE_C))
 
@@ -1970,9 +1991,9 @@ static int init_generic_mmio_info(struct intel_gvt *gvt)
 	MMIO_D(CURPOS(PIPE_B), D_ALL);
 	MMIO_D(CURPOS(PIPE_C), D_ALL);
 
-	MMIO_D(CURBASE(PIPE_A), D_ALL);
-	MMIO_D(CURBASE(PIPE_B), D_ALL);
-	MMIO_D(CURBASE(PIPE_C), D_ALL);
+	MMIO_DH(CURBASE(PIPE_A), D_ALL, NULL, cur_surf_mmio_write);
+	MMIO_DH(CURBASE(PIPE_B), D_ALL, NULL, cur_surf_mmio_write);
+	MMIO_DH(CURBASE(PIPE_C), D_ALL, NULL, cur_surf_mmio_write);
 
 	MMIO_D(CUR_FBC_CTL(PIPE_A), D_ALL);
 	MMIO_D(CUR_FBC_CTL(PIPE_B), D_ALL);
diff --git a/drivers/gpu/drm/i915/gvt/interrupt.c b/drivers/gpu/drm/i915/gvt/interrupt.c
index 951681813230..9c2b9d2e1529 100644
--- a/drivers/gpu/drm/i915/gvt/interrupt.c
+++ b/drivers/gpu/drm/i915/gvt/interrupt.c
@@ -113,6 +113,9 @@ static const char * const irq_name[INTEL_GVT_EVENT_MAX] = {
 	[SPRITE_A_FLIP_DONE] = "Sprite Plane A flip done",
 	[SPRITE_B_FLIP_DONE] = "Sprite Plane B flip done",
 	[SPRITE_C_FLIP_DONE] = "Sprite Plane C flip done",
+	[CURSOR_A_FLIP_DONE] = "Cursor Plane A flip done",
+	[CURSOR_B_FLIP_DONE] = "Cursor Plane B flip done",
+	[CURSOR_C_FLIP_DONE] = "Cursor Plane C flip done",
 
 	[PCU_THERMAL] = "PCU Thermal Event",
 	[PCU_PCODE2DRIVER_MAILBOX] = "PCU pcode2driver mailbox event",
@@ -593,6 +596,10 @@ static void gen8_init_irq(
 		SET_BIT_INFO(irq, 4, SPRITE_A_FLIP_DONE, INTEL_GVT_IRQ_INFO_DE_PIPE_A);
 		SET_BIT_INFO(irq, 4, SPRITE_B_FLIP_DONE, INTEL_GVT_IRQ_INFO_DE_PIPE_B);
 		SET_BIT_INFO(irq, 4, SPRITE_C_FLIP_DONE, INTEL_GVT_IRQ_INFO_DE_PIPE_C);
+
+		SET_BIT_INFO(irq, 6, CURSOR_A_FLIP_DONE, INTEL_GVT_IRQ_INFO_DE_PIPE_A);
+		SET_BIT_INFO(irq, 6, CURSOR_B_FLIP_DONE, INTEL_GVT_IRQ_INFO_DE_PIPE_B);
+		SET_BIT_INFO(irq, 6, CURSOR_C_FLIP_DONE, INTEL_GVT_IRQ_INFO_DE_PIPE_C);
 	}
 
 	/* GEN8 interrupt PCU events */
diff --git a/drivers/gpu/drm/i915/gvt/interrupt.h b/drivers/gpu/drm/i915/gvt/interrupt.h
index 5313fb1b33e1..158f1c7a23f2 100644
--- a/drivers/gpu/drm/i915/gvt/interrupt.h
+++ b/drivers/gpu/drm/i915/gvt/interrupt.h
@@ -92,6 +92,9 @@ enum intel_gvt_event_type {
 	SPRITE_A_FLIP_DONE,
 	SPRITE_B_FLIP_DONE,
 	SPRITE_C_FLIP_DONE,
+	CURSOR_A_FLIP_DONE,
+	CURSOR_B_FLIP_DONE,
+	CURSOR_C_FLIP_DONE,
 
 	PCU_THERMAL,
 	PCU_PCODE2DRIVER_MAILBOX,
-- 
2.17.1



* Re: [PATCH v5 1/6] vfio: Define device specific irq type capability
  2019-08-16  2:35 ` [PATCH v5 1/6] vfio: Define device specific irq type capability Tina Zhang
@ 2019-08-16 20:51   ` Alex Williamson
  2019-08-20  0:56     ` Zhang, Tina
  0 siblings, 1 reply; 23+ messages in thread
From: Alex Williamson @ 2019-08-16 20:51 UTC (permalink / raw)
  To: Tina Zhang
  Cc: intel-gvt-dev, kraxel, kvm, linux-kernel, hang.yuan, zhiyuan.lv,
	Eric Auger

On Fri, 16 Aug 2019 10:35:23 +0800
Tina Zhang <tina.zhang@intel.com> wrote:

> Cap the number of irqs with fixed indexes and use capability chains
> to chain device specific irqs.
> 
> Signed-off-by: Tina Zhang <tina.zhang@intel.com>
> Signed-off-by: Eric Auger <eric.auger@redhat.com>
> ---
>  include/uapi/linux/vfio.h | 19 ++++++++++++++++++-
>  1 file changed, 18 insertions(+), 1 deletion(-)
> 
> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> index 02bb7ad6e986..d83c9f136a5b 100644
> --- a/include/uapi/linux/vfio.h
> +++ b/include/uapi/linux/vfio.h
> @@ -444,11 +444,27 @@ struct vfio_irq_info {
>  #define VFIO_IRQ_INFO_MASKABLE		(1 << 1)
>  #define VFIO_IRQ_INFO_AUTOMASKED	(1 << 2)
>  #define VFIO_IRQ_INFO_NORESIZE		(1 << 3)
> +#define VFIO_IRQ_INFO_FLAG_CAPS		(1 << 4) /* Info supports caps */
>  	__u32	index;		/* IRQ index */
>  	__u32	count;		/* Number of IRQs within this index */
> +	__u32	cap_offset;	/* Offset within info struct of first cap */
>  };
>  #define VFIO_DEVICE_GET_IRQ_INFO	_IO(VFIO_TYPE, VFIO_BASE + 9)
>  
> +/*
> + * The irq type capability allows irqs unique to a specific device or
> + * class of devices to be exposed.
> + *
> + * The structures below define version 1 of this capability.
> + */
> +#define VFIO_IRQ_INFO_CAP_TYPE      3

Why 3?  What's using 1 and 2 of this newly defined info cap ID?  Thanks,

Alex

> +
> +struct vfio_irq_info_cap_type {
> +	struct vfio_info_cap_header header;
> +	__u32 type;     /* global per bus driver */
> +	__u32 subtype;  /* type specific */
> +};
> +
>  /**
>   * VFIO_DEVICE_SET_IRQS - _IOW(VFIO_TYPE, VFIO_BASE + 10, struct vfio_irq_set)
>   *
> @@ -550,7 +566,8 @@ enum {
>  	VFIO_PCI_MSIX_IRQ_INDEX,
>  	VFIO_PCI_ERR_IRQ_INDEX,
>  	VFIO_PCI_REQ_IRQ_INDEX,
> -	VFIO_PCI_NUM_IRQS
> +	VFIO_PCI_NUM_IRQS = 5	/* Fixed user ABI, IRQ indexes >=5 use   */
> +				/* device specific cap to define content */
>  };
>  
>  /*



* Re: [PATCH v5 2/6] vfio: Introduce vGPU display irq type
  2019-08-16  2:35 ` [PATCH v5 2/6] vfio: Introduce vGPU display irq type Tina Zhang
@ 2019-08-16 20:51   ` Alex Williamson
  2019-08-20  2:12     ` Zhang, Tina
  0 siblings, 1 reply; 23+ messages in thread
From: Alex Williamson @ 2019-08-16 20:51 UTC (permalink / raw)
  To: Tina Zhang
  Cc: intel-gvt-dev, kraxel, kvm, linux-kernel, hang.yuan, zhiyuan.lv

On Fri, 16 Aug 2019 10:35:24 +0800
Tina Zhang <tina.zhang@intel.com> wrote:

> Introduce vGPU specific irq type VFIO_IRQ_TYPE_GFX, and
> VFIO_IRQ_SUBTYPE_GFX_DISPLAY_IRQ as the subtype for vGPU display.
> 
> Introduce vfio_irq_info_cap_display_plane_events capability to notify
> user space with the vGPU's plane update events
> 
> v2:
> - Add VFIO_IRQ_SUBTYPE_GFX_DISPLAY_IRQ description. (Alex & Kechen)
> - Introduce vfio_irq_info_cap_display_plane_events. (Gerd & Alex)
> 
> Signed-off-by: Tina Zhang <tina.zhang@intel.com>
> ---
>  include/uapi/linux/vfio.h | 21 +++++++++++++++++++++
>  1 file changed, 21 insertions(+)
> 
> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> index d83c9f136a5b..21ac69f0e1a9 100644
> --- a/include/uapi/linux/vfio.h
> +++ b/include/uapi/linux/vfio.h
> @@ -465,6 +465,27 @@ struct vfio_irq_info_cap_type {
>  	__u32 subtype;  /* type specific */
>  };
>  
> +#define VFIO_IRQ_TYPE_GFX				(1)
> +/*
> + * vGPU vendor sub-type
> + * vGPU device display related interrupts e.g. vblank/pageflip
> + */
> +#define VFIO_IRQ_SUBTYPE_GFX_DISPLAY_IRQ		(1)

If this is a GFX/DISPLAY IRQ, why are we talking about a "vGPU" in the
description?  It's not specific to a vGPU implementation, right?  Is
this related to a physical display or a virtual display?  If it's
related to the GFX PLANE ioctls, it should state that.  It's not well
specified what this interrupt signals.  Is it vblank?  Is it pageflip?
Is it both?  Neither?  Something else?

> +
> +/*
> + * Display capability of using one eventfd to notify user space with the
> + * vGPU's plane update events.
> + * cur_event_val: eventfd value stands for cursor plane change event.
> + * pri_event_val: eventfd value stands for primary plane change event.
> + */
> +#define VFIO_IRQ_INFO_CAP_DISPLAY	4
> +
> +struct vfio_irq_info_cap_display_plane_events {
> +	struct vfio_info_cap_header header;
> +	__u64 cur_event_val;
> +	__u64 pri_event_val;
> +};

Again, what display?  Does this reference a GFX plane?  The event_val
data is not well specified, examples might be necessary.  They seem to
be used as a flag bit, so should we simply define a bit index for the
flag rather than a u64 value?  Where are the actual events per plane
defined?

I'm not sure this patch shouldn't be rolled back into 1, I couldn't
find the previous discussion that triggered it to be separate.  Perhaps
simply for sharing with the work Eric is doing?  If so, that's fine,
but maybe make note of it in the cover letter.  Thanks,

Alex


* RE: [PATCH v5 1/6] vfio: Define device specific irq type capability
  2019-08-16 20:51   ` Alex Williamson
@ 2019-08-20  0:56     ` Zhang, Tina
  0 siblings, 0 replies; 23+ messages in thread
From: Zhang, Tina @ 2019-08-20  0:56 UTC (permalink / raw)
  To: Alex Williamson
  Cc: intel-gvt-dev, kraxel, kvm, linux-kernel, Yuan, Hang, Lv,
	Zhiyuan, Eric Auger



> -----Original Message-----
> From: Alex Williamson [mailto:alex.williamson@redhat.com]
> Sent: Saturday, August 17, 2019 4:52 AM
> To: Zhang, Tina <tina.zhang@intel.com>
> Cc: intel-gvt-dev@lists.freedesktop.org; kraxel@redhat.com;
> kvm@vger.kernel.org; linux-kernel@vger.kernel.org; Yuan, Hang
> <hang.yuan@intel.com>; Lv, Zhiyuan <zhiyuan.lv@intel.com>; Eric Auger
> <eric.auger@redhat.com>
> Subject: Re: [PATCH v5 1/6] vfio: Define device specific irq type capability
> 
> On Fri, 16 Aug 2019 10:35:23 +0800
> Tina Zhang <tina.zhang@intel.com> wrote:
> 
> > Cap the number of irqs with fixed indexes and use capability chains to
> > chain device specific irqs.
> >
> > Signed-off-by: Tina Zhang <tina.zhang@intel.com>
> > Signed-off-by: Eric Auger <eric.auger@redhat.com>
> > ---
> >  include/uapi/linux/vfio.h | 19 ++++++++++++++++++-
> >  1 file changed, 18 insertions(+), 1 deletion(-)
> >
> > diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> > index 02bb7ad6e986..d83c9f136a5b 100644
> > --- a/include/uapi/linux/vfio.h
> > +++ b/include/uapi/linux/vfio.h
> > @@ -444,11 +444,27 @@ struct vfio_irq_info {
> >  #define VFIO_IRQ_INFO_MASKABLE		(1 << 1)
> >  #define VFIO_IRQ_INFO_AUTOMASKED	(1 << 2)
> >  #define VFIO_IRQ_INFO_NORESIZE		(1 << 3)
> > +#define VFIO_IRQ_INFO_FLAG_CAPS		(1 << 4) /* Info
> supports caps */
> >  	__u32	index;		/* IRQ index */
> >  	__u32	count;		/* Number of IRQs within this index */
> > +	__u32	cap_offset;	/* Offset within info struct of first cap */
> >  };
> >  #define VFIO_DEVICE_GET_IRQ_INFO	_IO(VFIO_TYPE, VFIO_BASE +
> 9)
> >
> > +/*
> > + * The irq type capability allows irqs unique to a specific device or
> > + * class of devices to be exposed.
> > + *
> > + * The structures below define version 1 of this capability.
> > + */
> > +#define VFIO_IRQ_INFO_CAP_TYPE      3
> 
> Why 3?  What's using 1 and 2 of this newly defined info cap ID?  Thanks,
There was an assumption that the two kinds of cap types, VFIO_REGION_INFO_CAP_TYPE and VFIO_IRQ_INFO_CAP_TYPE, shared one ID space. Since VFIO_REGION_INFO_CAP_TYPE was defined as 1, VFIO_IRQ_INFO_CAP_TYPE was defined after it.
OK, I see this isn't a good idea. Let's give VFIO_IRQ_INFO_CAP_TYPE its own ID space. Thanks.
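
For reference, a rough user-space sketch of walking such an info capability chain (illustration only: find_irq_cap() is a made-up helper name; the header layout follows struct vfio_info_cap_header from the patch):

```c
#include <stddef.h>
#include <stdint.h>

/* Mirrors struct vfio_info_cap_header in include/uapi/linux/vfio.h */
struct vfio_info_cap_header {
	uint16_t id;      /* capability ID, e.g. VFIO_IRQ_INFO_CAP_TYPE */
	uint16_t version; /* capability version */
	uint32_t next;    /* offset of next cap from the start of the info
			   * struct; 0 terminates the chain */
};

/* Walk the chain hanging off cap_offset and return the cap with the
 * requested ID, or NULL if the device does not expose it. */
static struct vfio_info_cap_header *
find_irq_cap(void *info, uint32_t cap_offset, uint16_t id)
{
	while (cap_offset) {
		struct vfio_info_cap_header *hdr =
			(struct vfio_info_cap_header *)((char *)info + cap_offset);

		if (hdr->id == id)
			return hdr;
		cap_offset = hdr->next;
	}
	return NULL;
}
```

User space would pass the buffer returned by VFIO_DEVICE_GET_IRQ_INFO (with argsz grown to cover the caps) plus the cap_offset reported in it.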

BR,
Tina
> 
> Alex
> 
> > +
> > +struct vfio_irq_info_cap_type {
> > +	struct vfio_info_cap_header header;
> > +	__u32 type;     /* global per bus driver */
> > +	__u32 subtype;  /* type specific */
> > +};
> > +
> >  /**
> >   * VFIO_DEVICE_SET_IRQS - _IOW(VFIO_TYPE, VFIO_BASE + 10, struct
> vfio_irq_set)
> >   *
> > @@ -550,7 +566,8 @@ enum {
> >  	VFIO_PCI_MSIX_IRQ_INDEX,
> >  	VFIO_PCI_ERR_IRQ_INDEX,
> >  	VFIO_PCI_REQ_IRQ_INDEX,
> > -	VFIO_PCI_NUM_IRQS
> > +	VFIO_PCI_NUM_IRQS = 5	/* Fixed user ABI, IRQ indexes >=5
> use   */
> > +				/* device specific cap to define content */
> >  };
> >
> >  /*



* RE: [PATCH v5 2/6] vfio: Introduce vGPU display irq type
  2019-08-16 20:51   ` Alex Williamson
@ 2019-08-20  2:12     ` Zhang, Tina
  2019-08-20  7:20       ` kraxel
  2019-08-20 15:32       ` Alex Williamson
  0 siblings, 2 replies; 23+ messages in thread
From: Zhang, Tina @ 2019-08-20  2:12 UTC (permalink / raw)
  To: Alex Williamson
  Cc: intel-gvt-dev, kraxel, kvm, linux-kernel, Yuan, Hang, Lv, Zhiyuan



> -----Original Message-----
> From: Alex Williamson [mailto:alex.williamson@redhat.com]
> Sent: Saturday, August 17, 2019 4:52 AM
> To: Zhang, Tina <tina.zhang@intel.com>
> Cc: intel-gvt-dev@lists.freedesktop.org; kraxel@redhat.com;
> kvm@vger.kernel.org; linux-kernel@vger.kernel.org; Yuan, Hang
> <hang.yuan@intel.com>; Lv, Zhiyuan <zhiyuan.lv@intel.com>
> Subject: Re: [PATCH v5 2/6] vfio: Introduce vGPU display irq type
> 
> On Fri, 16 Aug 2019 10:35:24 +0800
> Tina Zhang <tina.zhang@intel.com> wrote:
> 
> > Introduce vGPU specific irq type VFIO_IRQ_TYPE_GFX, and
> > VFIO_IRQ_SUBTYPE_GFX_DISPLAY_IRQ as the subtype for vGPU display.
> >
> > Introduce vfio_irq_info_cap_display_plane_events capability to notify
> > user space with the vGPU's plane update events
> >
> > v2:
> > - Add VFIO_IRQ_SUBTYPE_GFX_DISPLAY_IRQ description. (Alex & Kechen)
> > - Introduce vfio_irq_info_cap_display_plane_events. (Gerd & Alex)
> >
> > Signed-off-by: Tina Zhang <tina.zhang@intel.com>
> > ---
> >  include/uapi/linux/vfio.h | 21 +++++++++++++++++++++
> >  1 file changed, 21 insertions(+)
> >
> > diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> > index d83c9f136a5b..21ac69f0e1a9 100644
> > --- a/include/uapi/linux/vfio.h
> > +++ b/include/uapi/linux/vfio.h
> > @@ -465,6 +465,27 @@ struct vfio_irq_info_cap_type {
> >  	__u32 subtype;  /* type specific */
> >  };
> >
> > +#define VFIO_IRQ_TYPE_GFX				(1)
> > +/*
> > + * vGPU vendor sub-type
> > + * vGPU device display related interrupts e.g. vblank/pageflip  */
> > +#define VFIO_IRQ_SUBTYPE_GFX_DISPLAY_IRQ		(1)
> 
> If this is a GFX/DISPLAY IRQ, why are we talking about a "vGPU" in the
> description?  It's not specific to a vGPU implementation, right?  Is this
> related to a physical display or a virtual display?  If it's related to the GFX
> PLANE ioctls, it should state that.  It's not well specified what this interrupt
> signals.  Is it vblank?  Is it pageflip?
> Is it both?  Neither?  Something else?

Sorry for the confusion caused here. 

The original idea here was to use VFIO_IRQ_SUBTYPE_GFX_DISPLAY_IRQ to notify user space of the display refresh event. The display refresh event is generic: when notified, user space can use VFIO_DEVICE_QUERY_GFX_PLANE and VFIO_DEVICE_GET_GFX_DMABUF to get the updated framebuffer, instead of polling all the time.

In order to give user space more room for optimization, vfio_irq_info_cap_display_plane_events is proposed to tell user space the per-plane refresh event values. So when notified by VFIO_IRQ_SUBTYPE_GFX_DISPLAY_IRQ, user space can read the eventfd counter, understand which plane the refresh event comes from, and choose to get the framebuffer of that plane instead of all the planes.

So, from the VFIO user's point of view, there is only the display refresh event (i.e. no other events like vblank, pageflip, ...). For GVT-g, this display refresh event is implemented by both vblank and pageflip, which is only an implementation detail and can be transparent to the user space. Again, sorry about the confusion caused here; I'll correct the comments in the next version.
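
A rough sketch of the intended consumer side (names and bit values below are placeholders for illustration; a real client reads the per-plane values from cur_event_val/pri_event_val in the capability instead of hard-coding them):

```c
#include <stdint.h>
#include <unistd.h>

/* Placeholder per-plane counter values; the real ones are advertised via
 * vfio_irq_info_cap_display_plane_events. */
#define PRI_REFRESH_VAL (1ULL << 0)
#define CUR_REFRESH_VAL (1ULL << 1)

/* Read the accumulated eventfd counter once and return a mask telling
 * which planes need to be re-queried with VFIO_DEVICE_QUERY_GFX_PLANE /
 * VFIO_DEVICE_GET_GFX_DMABUF. */
static uint64_t poll_refresh_events(int efd)
{
	uint64_t val = 0;

	if (read(efd, &val, sizeof(val)) != sizeof(val))
		return 0;

	return val & (PRI_REFRESH_VAL | CUR_REFRESH_VAL);
}
```

Note that an eventfd read returns the accumulated counter and resets it, so one read picks up both plane events even if they were signaled separately.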

BTW, IIRC, we might also have one question waiting to be answered:
- Can we just use VFIO_IRQ_TYPE_GFX without proposing a new sub-type (i.e. VFIO_IRQ_SUBTYPE_GFX_DISPLAY_IRQ)?
    Only if we can agree that we won't have any other GFX IRQ requirements in the future. Otherwise, we need a sub-type to differentiate them.

Thanks.

BR,
Tina
> 
> > +
> > +/*
> > + * Display capability of using one eventfd to notify user space with
> > +the
> > + * vGPU's plane update events.
> > + * cur_event_val: eventfd value stands for cursor plane change event.
> > + * pri_event_val: eventfd value stands for primary plane change event.
> > + */
> > +#define VFIO_IRQ_INFO_CAP_DISPLAY	4
> > +
> > +struct vfio_irq_info_cap_display_plane_events {
> > +	struct vfio_info_cap_header header;
> > +	__u64 cur_event_val;
> > +	__u64 pri_event_val;
> > +};
> 
> Again, what display?  Does this reference a GFX plane?  The event_val data is
> not well specified, examples might be necessary.  They seem to be used as a
> flag bit, so should we simply define a bit index for the flag rather than a u64
> value?  Where are the actual events per plane defined?
> 
> I'm not sure this patch shouldn't be rolled back into 1, I couldn't find the
> previous discussion that triggered it to be separate.  Perhaps simply for
> sharing with the work Eric is doing?  If so, that's fine, but maybe make note
> of it in the cover letter.  Thanks,
> 
> Alex


* Re: [PATCH v5 2/6] vfio: Introduce vGPU display irq type
  2019-08-20  2:12     ` Zhang, Tina
@ 2019-08-20  7:20       ` kraxel
  2019-08-20 15:03         ` Alex Williamson
  2019-08-20 15:32       ` Alex Williamson
  1 sibling, 1 reply; 23+ messages in thread
From: kraxel @ 2019-08-20  7:20 UTC (permalink / raw)
  To: Zhang, Tina
  Cc: Alex Williamson, intel-gvt-dev, kvm, linux-kernel, Yuan, Hang,
	Lv, Zhiyuan

> > > +#define VFIO_IRQ_TYPE_GFX				(1)
> > > +/*
> > > + * vGPU vendor sub-type
> > > + * vGPU device display related interrupts e.g. vblank/pageflip  */
> > > +#define VFIO_IRQ_SUBTYPE_GFX_DISPLAY_IRQ		(1)
> > 
> > If this is a GFX/DISPLAY IRQ, why are we talking about a "vGPU" in the
> > description?  It's not specific to a vGPU implementation, right?  Is this
> > related to a physical display or a virtual display?  If it's related to the GFX
> > PLANE ioctls, it should state that.  It's not well specified what this interrupt
> > signals.  Is it vblank?  Is it pageflip?
> > Is it both?  Neither?  Something else?
> 
> Sorry for the confusion caused here. 
> 
> The original idea here was to use VFIO_IRQ_SUBTYPE_GFX_DISPLAY_IRQ to
> notify user space with the display refresh event. The display refresh
> event is general. When notified, user space can use
> VFIO_DEVICE_QUERY_GFX_PLANE and VFIO_DEVICE_GET_GFX_DMABUF to get the
> updated framebuffer, instead of polling them all the time.
> 
> In order to give user space more choice to do the optimization,
> vfio_irq_info_cap_display_plane_events is proposed to tell user space
> the different plane refresh event values. So when notified by
> VFIO_IRQ_SUBTYPE_GFX_DISPLAY_IRQ, user space can get the value of the
> eventfd counter and understand which plane the refresh event
> comes from and choose to get the framebuffer on that plane instead of
> all the planes.
> 
> So, from the VFIO user point of view, there is only the display
> refresh event (i.e. no other events like vblank, pageflip ...). For
> GVT-g, this display refresh event is implemented by both vblank and
> pageflip, which is only an implementation detail and can be
> transparent to the user space. Again, sorry about the confusion caused
> here; I'll correct the comments in the next version.

All this should be explained in a comment for the IRQ in the header file.

Key point for the API is that (a) this is a "the display should be
updated" event and (b) this covers all display updates, i.e. user space
can stop the display update timer and fully depend on getting
notifications if an update is needed.
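
I.e. something like this on the user-space side (just a sketch; it assumes the eventfd was registered via VFIO_DEVICE_SET_IRQS, and poll() replaces the periodic refresh timer):

```c
#include <poll.h>
#include <stdbool.h>
#include <stdint.h>
#include <unistd.h>

/* Block on the VFIO-provided eventfd instead of waking on a timer.
 * Returns true when the display needs updating. */
static bool wait_for_display_update(int efd, int timeout_ms)
{
	struct pollfd pfd = { .fd = efd, .events = POLLIN };

	if (poll(&pfd, 1, timeout_ms) <= 0)
		return false; /* timeout or error: nothing to redraw */

	uint64_t counter;
	/* drain the counter; its value tells which planes changed */
	return read(efd, &counter, sizeof(counter)) == sizeof(counter);
}
```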

> That GVT-g watches guest pageflips is an implementation detail.  Should
> nvidia support this they will probably do something completely
> different.  As far as I know they render the guest display to some
> framebuffer at something like 10 fps, so it would make sense for them to
> send an event each time they refresh the framebuffer.

Also note the relationships (cur_event_val is for DRM_PLANE_TYPE_CURSOR
updates and pri_event_val for DRM_PLANE_TYPE_PRIMARY).

cheers,
  Gerd



* Re: [PATCH v5 2/6] vfio: Introduce vGPU display irq type
  2019-08-20  7:20       ` kraxel
@ 2019-08-20 15:03         ` Alex Williamson
  0 siblings, 0 replies; 23+ messages in thread
From: Alex Williamson @ 2019-08-20 15:03 UTC (permalink / raw)
  To: kraxel
  Cc: Zhang, Tina, intel-gvt-dev, kvm, linux-kernel, Yuan, Hang, Lv, Zhiyuan

On Tue, 20 Aug 2019 09:20:30 +0200
"kraxel@redhat.com" <kraxel@redhat.com> wrote:

> > > > +#define VFIO_IRQ_TYPE_GFX				(1)
> > > > +/*
> > > > + * vGPU vendor sub-type
> > > > + * vGPU device display related interrupts e.g. vblank/pageflip  */
> > > > +#define VFIO_IRQ_SUBTYPE_GFX_DISPLAY_IRQ		(1)  
> > > 
> > > If this is a GFX/DISPLAY IRQ, why are we talking about a "vGPU" in the
> > > description?  It's not specific to a vGPU implementation, right?  Is this
> > > related to a physical display or a virtual display?  If it's related to the GFX
> > > PLANE ioctls, it should state that.  It's not well specified what this interrupt
> > > signals.  Is it vblank?  Is it pageflip?
> > > Is it both?  Neither?  Something else?  
> > 
> > Sorry for the confusion caused here. 
> > 
> > The original idea here was to use VFIO_IRQ_SUBTYPE_GFX_DISPLAY_IRQ to
> > notify user space with the display refresh event. The display refresh
> > event is general. When notified, user space can use
> > VFIO_DEVICE_QUERY_GFX_PLANE and VFIO_DEVICE_GET_GFX_DMABUF to get the
> > updated framebuffer, instead of polling them all the time.
> > 
> > In order to give user space more choice to do the optimization,
> > vfio_irq_info_cap_display_plane_events is proposed to tell user space
> > the different plane refresh event values. So when notified by
> > VFIO_IRQ_SUBTYPE_GFX_DISPLAY_IRQ, user space can get the value of the
> > eventfd counter and understand which plane the refresh event
> > comes from and choose to get the framebuffer on that plane instead of
> > all the planes.
> > 
> > So, from the VFIO user point of view, there is only the display
> > refresh event (i.e. no other events like vblank, pageflip ...). For
> > GVT-g, this display refresh event is implemented by both vblank and
> > pageflip, which is only an implementation detail and can be
> > transparent to the user space. Again, sorry about the confusion caused
> > here; I'll correct the comments in the next version.  
> 
> All this should be explained in a comment for the IRQ in the header file.

Yes, Tina's update and your clarification all make sense to me, but it
needs to be specified in the header how this is supposed to work, what
events get signaled and what the user is intended to do in response to
that signal.  The information is all here, it just needs to be included
in the uapi definition.  Thanks,

Alex

> Key point for the API is that (a) this is a "the display should be
> updated" event and (b) this covers all display updates, i.e. user space
> can stop the display update timer and fully depend on getting
> notifications if an update is needed.
> 
> That GTV-g watches guest pageflips is an implementation detail.  Should
> nvidia support this they will probably do something completely
> different.  As far I know they render the guest display to some
> framebuffer at something like 10fps, so it would make sense for them to
> send an event each time they refreshed the framebuffer.
> 
> Also note the relationships (cur_event_val is for DRM_PLANE_TYPE_CURSOR
> updates and pri_event_val for DRM_PLANE_TYPE_PRIMARY).



* Re: [PATCH v5 2/6] vfio: Introduce vGPU display irq type
  2019-08-20  2:12     ` Zhang, Tina
  2019-08-20  7:20       ` kraxel
@ 2019-08-20 15:32       ` Alex Williamson
  1 sibling, 0 replies; 23+ messages in thread
From: Alex Williamson @ 2019-08-20 15:32 UTC (permalink / raw)
  To: Zhang, Tina
  Cc: intel-gvt-dev, kraxel, kvm, linux-kernel, Yuan, Hang, Lv, Zhiyuan

On Tue, 20 Aug 2019 02:12:10 +0000
"Zhang, Tina" <tina.zhang@intel.com> wrote:

> BTW, IIRC, we might also have one question waiting to be replied:
> - Can we just use VFIO_IRQ_TYPE_GFX w/o proposing a new sub type
> (i.e. VFIO_IRQ_SUBTYPE_GFX_DISPLAY_IRQ)? Well, only if we can agree
> on that we don't have any other GFX IRQ requirements in future.
> Otherwise, we might need a sub type to differentiate them.

I think you've answered your own question ;)  We already have the
infrastructure for defining type/sub-type and it allows us to
categorize and group interrupt types together consistent with how we do
for regions, so what's the overhead in this approach?  Otherwise we
tend to have an ad-hoc list.  We can't say with absolute certainty that
we won't have additional GFX related IRQs.  Thanks,

Alex


* Re: [PATCH v5 4/6] drm/i915/gvt: Deliver vGPU refresh event to userspace
  2019-08-16  2:35 ` [PATCH v5 4/6] drm/i915/gvt: Deliver vGPU refresh event to userspace Tina Zhang
@ 2019-08-26  7:55   ` Zhenyu Wang
  2019-08-28  1:49     ` Tian, Kevin
  0 siblings, 1 reply; 23+ messages in thread
From: Zhenyu Wang @ 2019-08-26  7:55 UTC (permalink / raw)
  To: Tina Zhang
  Cc: intel-gvt-dev, kvm, linux-kernel, hang.yuan, alex.williamson,
	kraxel, Kechen Lu, zhiyuan.lv

[-- Attachment #1: Type: text/plain, Size: 11876 bytes --]

On 2019.08.16 10:35:26 +0800, Tina Zhang wrote:
> Deliver the display refresh events to the user land. Userspace can use
> the irq mask/unmask mechanism to disable or enable the event delivery.
> 
> As we know, delivering a refresh event at each vblank safely avoids
> tearing and overwhelming userspace with events, but there is still room
> to optimize.
> 
> For handling the normal case, deliver the page flip refresh
> event at each vblank, in other words, bounded by vblanks. Skipping some
> events brings a performance enhancement while not hurting the user
> experience.
> 
> For the single framebuffer case, deliver the refresh events to userspace
> at all vblanks. The heuristic checks pageflip_count increments at each
> vblank to determine whether no page flip has happened for a certain
> period, in which case the guest is regarded as using a single
> framebuffer. Although this heuristic sometimes makes an incorrect
> decision and depends on guest behavior (for example, when no cursor
> movement happens), the user experience is not harmed and the front
> buffer is still correctly acquired. Meanwhile, in the actual single
> framebuffer case, the user experience is enhanced compared with page
> flip events only.
> 
> Additionally, to reduce the event delivery footprint, a single eventfd
> is used, with its 8-byte counter partitioned among the event values.
> 
> v2:
> - Support vfio_irq_info_cap_display_plane_events. (Tina)
> 
> Signed-off-by: Tina Zhang <tina.zhang@intel.com>
> Signed-off-by: Kechen Lu <kechen.lu@intel.com>
> ---
>  drivers/gpu/drm/i915/gvt/display.c |  22 ++++
>  drivers/gpu/drm/i915/gvt/gvt.h     |   2 +
>  drivers/gpu/drm/i915/gvt/kvmgt.c   | 159 +++++++++++++++++++++++++++--
>  3 files changed, 174 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gvt/display.c b/drivers/gpu/drm/i915/gvt/display.c
> index 1a0a4ae4826e..616285e4a014 100644
> --- a/drivers/gpu/drm/i915/gvt/display.c
> +++ b/drivers/gpu/drm/i915/gvt/display.c
> @@ -34,6 +34,8 @@
>  
>  #include "i915_drv.h"
>  #include "gvt.h"
> +#include <uapi/linux/vfio.h>
> +#include <drm/drm_plane.h>
>  
>  static int get_edp_pipe(struct intel_vgpu *vgpu)
>  {
> @@ -387,6 +389,8 @@ void intel_gvt_check_vblank_emulation(struct intel_gvt *gvt)
>  	mutex_unlock(&gvt->lock);
>  }
>  
> +#define PAGEFLIP_DELAY_THR 10
> +
>  static void emulate_vblank_on_pipe(struct intel_vgpu *vgpu, int pipe)
>  {
>  	struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
> @@ -396,7 +400,10 @@ static void emulate_vblank_on_pipe(struct intel_vgpu *vgpu, int pipe)
>  		[PIPE_B] = PIPE_B_VBLANK,
>  		[PIPE_C] = PIPE_C_VBLANK,
>  	};
> +	int pri_flip_event = SKL_FLIP_EVENT(pipe, PLANE_PRIMARY);
>  	int event;
> +	u64 eventfd_signal_val = 0;
> +	static int no_pageflip_count;
>  
>  	if (pipe < PIPE_A || pipe > PIPE_C)
>  		return;
> @@ -407,11 +414,26 @@ static void emulate_vblank_on_pipe(struct intel_vgpu *vgpu, int pipe)
>  		if (!pipe_is_enabled(vgpu, pipe))
>  			continue;
>  
> +		if (event == pri_flip_event)
> +			eventfd_signal_val |= DISPLAY_PRI_REFRESH_EVENT_VAL;
> +
>  		intel_vgpu_trigger_virtual_event(vgpu, event);
>  	}
>  
> +	if (eventfd_signal_val)
> +		no_pageflip_count = 0;
> +	else if (!eventfd_signal_val && no_pageflip_count > PAGEFLIP_DELAY_THR)

extra !eventfd_signal_val

> +		eventfd_signal_val |= DISPLAY_PRI_REFRESH_EVENT_VAL;
> +	else
> +		no_pageflip_count++;

no_pageflip_count should be per-vgpu instead of static.

> +
> +	if (vgpu->vdev.vblank_trigger && !vgpu->vdev.display_event_mask &&
> +		eventfd_signal_val)
> +		eventfd_signal(vgpu->vdev.vblank_trigger, eventfd_signal_val);
> +
>  	if (pipe_is_enabled(vgpu, pipe)) {
>  		vgpu_vreg_t(vgpu, PIPE_FRMCOUNT_G4X(pipe))++;
> +

extra line

>  		intel_vgpu_trigger_virtual_event(vgpu, vblank_event[pipe]);
>  	}
>  }
> diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
> index cd29ea28d7ed..6c8ed030c30b 100644
> --- a/drivers/gpu/drm/i915/gvt/gvt.h
> +++ b/drivers/gpu/drm/i915/gvt/gvt.h
> @@ -205,6 +205,8 @@ struct intel_vgpu {
>  		int num_irqs;
>  		struct eventfd_ctx *intx_trigger;
>  		struct eventfd_ctx *msi_trigger;
> +		struct eventfd_ctx *vblank_trigger;
> +		u32 display_event_mask;
>  
>  		/*
>  		 * Two caches are used to avoid mapping duplicated pages (eg.
> diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
> index fd1633342e53..9ace1f4ff9eb 100644
> --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> @@ -1250,6 +1250,8 @@ static int intel_vgpu_get_irq_count(struct intel_vgpu *vgpu, int type)
>  {
>  	if (type == VFIO_PCI_INTX_IRQ_INDEX || type == VFIO_PCI_MSI_IRQ_INDEX)
>  		return 1;
> +	else if (type < VFIO_PCI_NUM_IRQS + vgpu->vdev.num_irqs)
> +		return vgpu->vdev.irq[type - VFIO_PCI_NUM_IRQS].count;
>  
>  	return 0;
>  }
> @@ -1297,7 +1299,60 @@ static int intel_vgpu_set_msi_trigger(struct intel_vgpu *vgpu,
>  	return 0;
>  }
>  
> -static int intel_vgpu_set_irqs(struct intel_vgpu *vgpu, u32 flags,
> +static int intel_vgu_set_display_irq_mask(struct intel_vgpu *vgpu,
> +		unsigned int index, unsigned int start, unsigned int count,
> +		u32 flags, void *data)
> +{
> +	if (start != 0 || count > 2)
> +		return -EINVAL;
> +
> +	if (flags & VFIO_IRQ_SET_DATA_NONE)
> +		vgpu->vdev.display_event_mask |= 1;

see below..

> +
> +	return 0;
> +}
> +
> +static int intel_vgu_set_display_irq_unmask(struct intel_vgpu *vgpu,
> +		unsigned int index, unsigned int start, unsigned int count,
> +		u32 flags, void *data)
> +{
> +	if (start != 0 || count > 2)
> +		return -EINVAL;
> +
> +	if (flags & VFIO_IRQ_SET_DATA_NONE)
> +		vgpu->vdev.display_event_mask &= 0;

looks like display_event_mask is used as a flag for enable/disable; just write 1 or 0?


> +
> +	return 0;
> +}
> +
> +static int intel_vgpu_set_display_event_trigger(struct intel_vgpu *vgpu,
> +		unsigned int index, unsigned int start, unsigned int count,
> +		u32 flags, void *data)
> +{
> +	struct eventfd_ctx *trigger;
> +
> +	if (flags & VFIO_IRQ_SET_DATA_EVENTFD) {
> +		int fd = *(int *)data;
> +
> +		trigger = eventfd_ctx_fdget(fd);
> +		if (IS_ERR(trigger)) {
> +			gvt_vgpu_err("eventfd_ctx_fdget failed\n");
> +			return PTR_ERR(trigger);
> +		}
> +		vgpu->vdev.vblank_trigger = trigger;
> +		vgpu->vdev.display_event_mask = 0;
> +	} else if ((flags & VFIO_IRQ_SET_DATA_NONE) && !count) {
> +		trigger = vgpu->vdev.vblank_trigger;
> +		if (trigger) {
> +			eventfd_ctx_put(trigger);
> +			vgpu->vdev.vblank_trigger = NULL;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +int intel_vgpu_set_irqs(struct intel_vgpu *vgpu, u32 flags,
>  		unsigned int index, unsigned int start, unsigned int count,
>  		void *data)
>  {
> @@ -1330,6 +1385,35 @@ static int intel_vgpu_set_irqs(struct intel_vgpu *vgpu, u32 flags,
>  			break;
>  		}
>  		break;
> +	default:
> +	{
> +		int i;
> +
> +		if (index >= VFIO_PCI_NUM_IRQS +
> +					vgpu->vdev.num_irqs)
> +			return -EINVAL;
> +		index =
> +			array_index_nospec(index,
> +						VFIO_PCI_NUM_IRQS +
> +						vgpu->vdev.num_irqs);
> +
> +		i = index - VFIO_PCI_NUM_IRQS;
> +		if (vgpu->vdev.irq[i].type == VFIO_IRQ_TYPE_GFX &&
> +		    vgpu->vdev.irq[i].subtype ==
> +		    VFIO_IRQ_SUBTYPE_GFX_DISPLAY_IRQ) {
> +			switch (flags & VFIO_IRQ_SET_ACTION_TYPE_MASK) {
> +			case VFIO_IRQ_SET_ACTION_MASK:
> +				func = intel_vgu_set_display_irq_mask;
> +				break;
> +			case VFIO_IRQ_SET_ACTION_UNMASK:
> +				func = intel_vgu_set_display_irq_unmask;
> +				break;
> +			case VFIO_IRQ_SET_ACTION_TRIGGER:
> +				func = intel_vgpu_set_display_event_trigger;
> +				break;
> +			}
> +		}
> +	}
>  	}
>  
>  	if (!func)
> @@ -1361,7 +1445,7 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
>  		info.flags |= VFIO_DEVICE_FLAGS_RESET;
>  		info.num_regions = VFIO_PCI_NUM_REGIONS +
>  				vgpu->vdev.num_regions;
> -		info.num_irqs = VFIO_PCI_NUM_IRQS;
> +		info.num_irqs = VFIO_PCI_NUM_IRQS + vgpu->vdev.num_irqs;
>  
>  		return copy_to_user((void __user *)arg, &info, minsz) ?
>  			-EFAULT : 0;
> @@ -1521,32 +1605,88 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
>  			-EFAULT : 0;
>  	} else if (cmd == VFIO_DEVICE_GET_IRQ_INFO) {
>  		struct vfio_irq_info info;
> +		struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
> +		unsigned int i;
> +		int ret;
>  
>  		minsz = offsetofend(struct vfio_irq_info, count);
>  
>  		if (copy_from_user(&info, (void __user *)arg, minsz))
>  			return -EFAULT;
>  
> -		if (info.argsz < minsz || info.index >= VFIO_PCI_NUM_IRQS)
> +		if (info.argsz < minsz)
>  			return -EINVAL;
>  
>  		switch (info.index) {
>  		case VFIO_PCI_INTX_IRQ_INDEX:
>  		case VFIO_PCI_MSI_IRQ_INDEX:
> +			info.flags = VFIO_IRQ_INFO_EVENTFD;
>  			break;
> -		default:
> +		case VFIO_PCI_MSIX_IRQ_INDEX:
> +		case VFIO_PCI_ERR_IRQ_INDEX:
> +		case VFIO_PCI_REQ_IRQ_INDEX:
>  			return -EINVAL;
> -		}
> +		default:
> +		{
> +			struct vfio_irq_info_cap_type cap_type = {
> +				.header.id = VFIO_IRQ_INFO_CAP_TYPE,
> +				.header.version = 1 };
>  
> -		info.flags = VFIO_IRQ_INFO_EVENTFD;
> +			if (info.index >= VFIO_PCI_NUM_IRQS +
> +					vgpu->vdev.num_irqs)
> +				return -EINVAL;
> +			info.index =
> +				array_index_nospec(info.index,
> +						VFIO_PCI_NUM_IRQS +
> +						vgpu->vdev.num_irqs);
> +
> +			i = info.index - VFIO_PCI_NUM_IRQS;
> +
> +			info.flags = vgpu->vdev.irq[i].flags;
> +			cap_type.type = vgpu->vdev.irq[i].type;
> +			cap_type.subtype = vgpu->vdev.irq[i].subtype;
> +
> +			ret = vfio_info_add_capability(&caps,
> +						&cap_type.header,
> +						sizeof(cap_type));
> +			if (ret)
> +				return ret;
> +
> +			if (vgpu->vdev.irq[i].ops->add_capability) {
> +				ret = vgpu->vdev.irq[i].ops->add_capability(vgpu,
> +									    &caps);
> +				if (ret)
> +					return ret;
> +			}
> +		}
> +		}
>  
>  		info.count = intel_vgpu_get_irq_count(vgpu, info.index);
>  
>  		if (info.index == VFIO_PCI_INTX_IRQ_INDEX)
>  			info.flags |= (VFIO_IRQ_INFO_MASKABLE |
>  				       VFIO_IRQ_INFO_AUTOMASKED);
> -		else
> -			info.flags |= VFIO_IRQ_INFO_NORESIZE;
> +
> +		if (caps.size) {
> +			info.flags |= VFIO_IRQ_INFO_FLAG_CAPS;
> +			if (info.argsz < sizeof(info) + caps.size) {
> +				info.argsz = sizeof(info) + caps.size;
> +				info.cap_offset = 0;
> +			} else {
> +				vfio_info_cap_shift(&caps, sizeof(info));
> +				if (copy_to_user((void __user *)arg +
> +						  sizeof(info), caps.buf,
> +						  caps.size)) {
> +					kfree(caps.buf);
> +					return -EFAULT;
> +				}
> +				info.cap_offset = sizeof(info);
> +				if (offsetofend(struct vfio_irq_info, cap_offset) > minsz)
> +					minsz = offsetofend(struct vfio_irq_info, cap_offset);
> +			}
> +
> +			kfree(caps.buf);
> +		}
>  
>  		return copy_to_user((void __user *)arg, &info, minsz) ?
>  			-EFAULT : 0;
> @@ -1565,7 +1705,8 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
>  			int max = intel_vgpu_get_irq_count(vgpu, hdr.index);
>  
>  			ret = vfio_set_irqs_validate_and_prepare(&hdr, max,
> -						VFIO_PCI_NUM_IRQS, &data_size);
> +					VFIO_PCI_NUM_IRQS + vgpu->vdev.num_irqs,
> +								 &data_size);
>  			if (ret) {
>  				gvt_vgpu_err("intel:vfio_set_irqs_validate_and_prepare failed\n");
>  				return -EINVAL;
> -- 
> 2.17.1
> 
> _______________________________________________
> intel-gvt-dev mailing list
> intel-gvt-dev@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gvt-dev

-- 
Open Source Technology Center, Intel ltd.

$gpg --keyserver wwwkeys.pgp.net --recv-keys 4D781827



* RE: [PATCH v5 4/6] drm/i915/gvt: Deliver vGPU refresh event to userspace
  2019-08-26  7:55   ` Zhenyu Wang
@ 2019-08-28  1:49     ` Tian, Kevin
  2019-08-28  6:59       ` Zhang, Tina
  0 siblings, 1 reply; 23+ messages in thread
From: Tian, Kevin @ 2019-08-28  1:49 UTC (permalink / raw)
  To: Zhenyu Wang, Zhang, Tina
  Cc: kvm, linux-kernel, Yuan, Hang, alex.williamson, kraxel, Lu,
	Kechen, intel-gvt-dev, Lv, Zhiyuan

> From: Zhenyu Wang
> Sent: Monday, August 26, 2019 3:56 PM
> 
> On 2019.08.16 10:35:26 +0800, Tina Zhang wrote:
> > Deliver the display refresh events to the user land. Userspace can use
> > the irq mask/unmask mechanism to disable or enable the event delivery.
> >
> > As we know, delivering refresh event at each vblank safely avoids
> > tearing and unexpected event overwhelming, but there are still spaces
> > to optimize.

can you move optimization to another patch?

> >
> > For handling the normal case, deliver the page flip refresh
> > event at each vblank, in other words, bounded by vblanks. Skipping some
> > events bring performance enhancement while not hurting user experience.

what is the normal case? double buffer? which events are skipped in
such a scenario?

> >
> > For single framebuffer case, deliver the refresh events to userspace at
> > all vblanks. This heuristic at each vblank leverages pageflip_count

at all vblanks? the later text says otherwise, i.e. that events are delivered
only after the threshold is exceeded. Please be consistent in what you try
to explain here.

> > increments to determine whether no page flip happens after a certain
> > period and so that the case is regarded as single framebuffer one.
> > Although this heuristic makes incorrect decision sometimes and it depends

why may the heuristic make incorrect decision? under what condition?

> > on guest behavior, for example, when no cursor movements happen, the
> > user experience does not harm and front buffer is still correctly acquired.
> > Meanwhile, in actual single framebuffer case, the user experience is
> > enhanced compared with page flip events only.

'actual' vs. what? a 'faked' single framebuffer case? 

> >
> > Additionally, to mitigate the events delivering footprints, one eventfd and
> > 8 byte eventfd counter partition are leveraged.

footprint? I guess you meant reducing the frequency of delivered events...

> >
> > v2:
> > - Support vfio_irq_info_cap_display_plane_events. (Tina)
> >
> > Signed-off-by: Tina Zhang <tina.zhang@intel.com>
> > Signed-off-by: Kechen Lu <kechen.lu@intel.com>
> > ---
> >  drivers/gpu/drm/i915/gvt/display.c |  22 ++++
> >  drivers/gpu/drm/i915/gvt/gvt.h     |   2 +
> >  drivers/gpu/drm/i915/gvt/kvmgt.c   | 159 +++++++++++++++++++++++++++--
> >  3 files changed, 174 insertions(+), 9 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/gvt/display.c
> b/drivers/gpu/drm/i915/gvt/display.c
> > index 1a0a4ae4826e..616285e4a014 100644
> > --- a/drivers/gpu/drm/i915/gvt/display.c
> > +++ b/drivers/gpu/drm/i915/gvt/display.c
> > @@ -34,6 +34,8 @@
> >
> >  #include "i915_drv.h"
> >  #include "gvt.h"
> > +#include <uapi/linux/vfio.h>
> > +#include <drm/drm_plane.h>
> >
> >  static int get_edp_pipe(struct intel_vgpu *vgpu)
> >  {
> > @@ -387,6 +389,8 @@ void intel_gvt_check_vblank_emulation(struct
> intel_gvt *gvt)
> >  	mutex_unlock(&gvt->lock);
> >  }
> >
> > +#define PAGEFLIP_DELAY_THR 10
> > +
> >  static void emulate_vblank_on_pipe(struct intel_vgpu *vgpu, int pipe)
> >  {
> >  	struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
> > @@ -396,7 +400,10 @@ static void emulate_vblank_on_pipe(struct
> intel_vgpu *vgpu, int pipe)
> >  		[PIPE_B] = PIPE_B_VBLANK,
> >  		[PIPE_C] = PIPE_C_VBLANK,
> >  	};
> > +	int pri_flip_event = SKL_FLIP_EVENT(pipe, PLANE_PRIMARY);
> >  	int event;
> > +	u64 eventfd_signal_val = 0;
> > +	static int no_pageflip_count;
> >
> >  	if (pipe < PIPE_A || pipe > PIPE_C)
> >  		return;
> > @@ -407,11 +414,26 @@ static void emulate_vblank_on_pipe(struct
> intel_vgpu *vgpu, int pipe)
> >  		if (!pipe_is_enabled(vgpu, pipe))
> >  			continue;
> >
> > +		if (event == pri_flip_event)
> > +			eventfd_signal_val |= DISPLAY_PRI_REFRESH_EVENT_VAL;
> > +
> >  		intel_vgpu_trigger_virtual_event(vgpu, event);
> >  	}
> >
> > +	if (eventfd_signal_val)
> > +		no_pageflip_count = 0;
> > +	else if (!eventfd_signal_val && no_pageflip_count > PAGEFLIP_DELAY_THR)
> 
> extra !eventfd_signal_val
> 
> > +		eventfd_signal_val |= DISPLAY_PRI_REFRESH_EVENT_VAL;

do you need reset the count to zero here?

> > +	else
> > +		no_pageflip_count++;
> 
> no_pageflip_count should be per-vgpu instead of static.
> 
> > +
> > +	if (vgpu->vdev.vblank_trigger && !vgpu->vdev.display_event_mask &&
> > +		eventfd_signal_val)

is this mask per vGPU or per plane? If the latter, you need to check the specific bit here.

> > +		eventfd_signal(vgpu->vdev.vblank_trigger, eventfd_signal_val);
> > +
> >  	if (pipe_is_enabled(vgpu, pipe)) {
> >  		vgpu_vreg_t(vgpu, PIPE_FRMCOUNT_G4X(pipe))++;
> > +
> 
> extra line
> 
> >  		intel_vgpu_trigger_virtual_event(vgpu, vblank_event[pipe]);
> >  	}
> >  }
> > diff --git a/drivers/gpu/drm/i915/gvt/gvt.h
> b/drivers/gpu/drm/i915/gvt/gvt.h
> > index cd29ea28d7ed..6c8ed030c30b 100644
> > --- a/drivers/gpu/drm/i915/gvt/gvt.h
> > +++ b/drivers/gpu/drm/i915/gvt/gvt.h
> > @@ -205,6 +205,8 @@ struct intel_vgpu {
> >  		int num_irqs;
> >  		struct eventfd_ctx *intx_trigger;
> >  		struct eventfd_ctx *msi_trigger;
> > +		struct eventfd_ctx *vblank_trigger;
> > +		u32 display_event_mask;
> >
> >  		/*
> >  		 * Two caches are used to avoid mapping duplicated pages (eg.
> > diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c
> b/drivers/gpu/drm/i915/gvt/kvmgt.c
> > index fd1633342e53..9ace1f4ff9eb 100644
> > --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> > +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> > @@ -1250,6 +1250,8 @@ static int intel_vgpu_get_irq_count(struct
> intel_vgpu *vgpu, int type)
> >  {
> >  	if (type == VFIO_PCI_INTX_IRQ_INDEX || type == VFIO_PCI_MSI_IRQ_INDEX)
> >  		return 1;
> > +	else if (type < VFIO_PCI_NUM_IRQS + vgpu->vdev.num_irqs)
> > +		return vgpu->vdev.irq[type - VFIO_PCI_NUM_IRQS].count;
> >
> >  	return 0;
> >  }
> > @@ -1297,7 +1299,60 @@ static int intel_vgpu_set_msi_trigger(struct
> intel_vgpu *vgpu,
> >  	return 0;
> >  }
> >
> > -static int intel_vgpu_set_irqs(struct intel_vgpu *vgpu, u32 flags,
> > +static int intel_vgu_set_display_irq_mask(struct intel_vgpu *vgpu,

vgu -> vgpu

> > +		unsigned int index, unsigned int start, unsigned int count,
> > +		u32 flags, void *data)
> > +{
> > +	if (start != 0 || count > 2)
> > +		return -EINVAL;
> > +
> > +	if (flags & VFIO_IRQ_SET_DATA_NONE)
> > +		vgpu->vdev.display_event_mask |= 1;
> 
> see below..
> 
> > +
> > +	return 0;
> > +}
> > +
> > +static int intel_vgu_set_display_irq_unmask(struct intel_vgpu *vgpu,
> > +		unsigned int index, unsigned int start, unsigned int count,
> > +		u32 flags, void *data)
> > +{
> > +	if (start != 0 || count > 2)
> > +		return -EINVAL;
> > +
> > +	if (flags & VFIO_IRQ_SET_DATA_NONE)
> > +		vgpu->vdev.display_event_mask &= 0;
> 
> looks display_event_mask is used as flag for enable/disable, just write 1 or 0?

Do we plan to support a per-plane mask in the future? If yes, then use bit
operations, but let's define the bit meanings explicitly now.
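For instance, explicit bit definitions would make the ABI unambiguous; a sketch (names and values are illustrative, not from the patch, which only uses the value as a single on/off flag):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative per-plane mask bits for display_event_mask. */
#define DISPLAY_EVENT_MASK_PRI  (1U << 0)	/* primary plane refresh */
#define DISPLAY_EVENT_MASK_CUR  (1U << 1)	/* cursor plane refresh */

/* An event is delivered only when its plane bit is not masked. */
static int event_allowed(uint32_t mask, uint32_t event_bit)
{
	return (mask & event_bit) == 0;
}
```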

> 
> 
> > +
> > +	return 0;
> > +}
> > +
> > +static int intel_vgpu_set_display_event_trigger(struct intel_vgpu *vgpu,
> > +		unsigned int index, unsigned int start, unsigned int count,
> > +		u32 flags, void *data)
> > +{
> > +	struct eventfd_ctx *trigger;
> > +
> > +	if (flags & VFIO_IRQ_SET_DATA_EVENTFD) {
> > +		int fd = *(int *)data;
> > +
> > +		trigger = eventfd_ctx_fdget(fd);
> > +		if (IS_ERR(trigger)) {
> > +			gvt_vgpu_err("eventfd_ctx_fdget failed\n");
> > +			return PTR_ERR(trigger);
> > +		}
> > +		vgpu->vdev.vblank_trigger = trigger;
> > +		vgpu->vdev.display_event_mask = 0;
> > +	} else if ((flags & VFIO_IRQ_SET_DATA_NONE) && !count) {
> > +		trigger = vgpu->vdev.vblank_trigger;
> > +		if (trigger) {
> > +			eventfd_ctx_put(trigger);
> > +			vgpu->vdev.vblank_trigger = NULL;
> > +		}
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +int intel_vgpu_set_irqs(struct intel_vgpu *vgpu, u32 flags,
> >  		unsigned int index, unsigned int start, unsigned int count,
> >  		void *data)
> >  {
> > @@ -1330,6 +1385,35 @@ static int intel_vgpu_set_irqs(struct intel_vgpu
> *vgpu, u32 flags,
> >  			break;
> >  		}
> >  		break;
> > +	default:
> > +	{
> > +		int i;
> > +
> > +		if (index >= VFIO_PCI_NUM_IRQS +
> > +					vgpu->vdev.num_irqs)
> > +			return -EINVAL;
> > +		index =
> > +			array_index_nospec(index,
> > +						VFIO_PCI_NUM_IRQS +
> > +						vgpu->vdev.num_irqs);
> > +
> > +		i = index - VFIO_PCI_NUM_IRQS;
> > +		if (vgpu->vdev.irq[i].type == VFIO_IRQ_TYPE_GFX &&
> > +		    vgpu->vdev.irq[i].subtype ==
> > +		    VFIO_IRQ_SUBTYPE_GFX_DISPLAY_IRQ) {
> > +			switch (flags & VFIO_IRQ_SET_ACTION_TYPE_MASK) {
> > +			case VFIO_IRQ_SET_ACTION_MASK:
> > +				func = intel_vgu_set_display_irq_mask;
> > +				break;
> > +			case VFIO_IRQ_SET_ACTION_UNMASK:
> > +				func = intel_vgu_set_display_irq_unmask;
> > +				break;
> > +			case VFIO_IRQ_SET_ACTION_TRIGGER:
> > +				func = intel_vgpu_set_display_event_trigger;
> > +				break;
> > +			}
> > +		}
> > +	}
> >  	}
> >
> >  	if (!func)
> > @@ -1361,7 +1445,7 @@ static long intel_vgpu_ioctl(struct mdev_device
> *mdev, unsigned int cmd,
> >  		info.flags |= VFIO_DEVICE_FLAGS_RESET;
> >  		info.num_regions = VFIO_PCI_NUM_REGIONS +
> >  				vgpu->vdev.num_regions;
> > -		info.num_irqs = VFIO_PCI_NUM_IRQS;
> > +		info.num_irqs = VFIO_PCI_NUM_IRQS + vgpu->vdev.num_irqs;
> >
> >  		return copy_to_user((void __user *)arg, &info, minsz) ?
> >  			-EFAULT : 0;
> > @@ -1521,32 +1605,88 @@ static long intel_vgpu_ioctl(struct mdev_device
> *mdev, unsigned int cmd,
> >  			-EFAULT : 0;
> >  	} else if (cmd == VFIO_DEVICE_GET_IRQ_INFO) {
> >  		struct vfio_irq_info info;
> > +		struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
> > +		unsigned int i;
> > +		int ret;
> >
> >  		minsz = offsetofend(struct vfio_irq_info, count);
> >
> >  		if (copy_from_user(&info, (void __user *)arg, minsz))
> >  			return -EFAULT;
> >
> > -		if (info.argsz < minsz || info.index >= VFIO_PCI_NUM_IRQS)
> > +		if (info.argsz < minsz)
> >  			return -EINVAL;
> >
> >  		switch (info.index) {
> >  		case VFIO_PCI_INTX_IRQ_INDEX:
> >  		case VFIO_PCI_MSI_IRQ_INDEX:
> > +			info.flags = VFIO_IRQ_INFO_EVENTFD;
> >  			break;
> > -		default:
> > +		case VFIO_PCI_MSIX_IRQ_INDEX:
> > +		case VFIO_PCI_ERR_IRQ_INDEX:
> > +		case VFIO_PCI_REQ_IRQ_INDEX:
> >  			return -EINVAL;
> > -		}
> > +		default:
> > +		{
> > +			struct vfio_irq_info_cap_type cap_type = {
> > +				.header.id = VFIO_IRQ_INFO_CAP_TYPE,
> > +				.header.version = 1 };
> >
> > -		info.flags = VFIO_IRQ_INFO_EVENTFD;
> > +			if (info.index >= VFIO_PCI_NUM_IRQS +
> > +					vgpu->vdev.num_irqs)
> > +				return -EINVAL;
> > +			info.index =
> > +				array_index_nospec(info.index,
> > +						VFIO_PCI_NUM_IRQS +
> > +						vgpu->vdev.num_irqs);
> > +
> > +			i = info.index - VFIO_PCI_NUM_IRQS;
> > +
> > +			info.flags = vgpu->vdev.irq[i].flags;
> > +			cap_type.type = vgpu->vdev.irq[i].type;
> > +			cap_type.subtype = vgpu->vdev.irq[i].subtype;
> > +
> > +			ret = vfio_info_add_capability(&caps,
> > +						&cap_type.header,
> > +						sizeof(cap_type));
> > +			if (ret)
> > +				return ret;
> > +
> > +			if (vgpu->vdev.irq[i].ops->add_capability) {
> > +				ret = vgpu->vdev.irq[i].ops->add_capability(vgpu,
> > +									    &caps);
> > +				if (ret)
> > +					return ret;
> > +			}
> > +		}
> > +		}
> >
> >  		info.count = intel_vgpu_get_irq_count(vgpu, info.index);
> >
> >  		if (info.index == VFIO_PCI_INTX_IRQ_INDEX)
> >  			info.flags |= (VFIO_IRQ_INFO_MASKABLE |
> >  				       VFIO_IRQ_INFO_AUTOMASKED);
> > -		else
> > -			info.flags |= VFIO_IRQ_INFO_NORESIZE;
> > +
> > +		if (caps.size) {
> > +			info.flags |= VFIO_IRQ_INFO_FLAG_CAPS;
> > +			if (info.argsz < sizeof(info) + caps.size) {
> > +				info.argsz = sizeof(info) + caps.size;
> > +				info.cap_offset = 0;
> > +			} else {
> > +				vfio_info_cap_shift(&caps, sizeof(info));
> > +				if (copy_to_user((void __user *)arg +
> > +						  sizeof(info), caps.buf,
> > +						  caps.size)) {
> > +					kfree(caps.buf);
> > +					return -EFAULT;
> > +				}
> > +				info.cap_offset = sizeof(info);
> > +				if (offsetofend(struct vfio_irq_info, cap_offset) > minsz)
> > +					minsz = offsetofend(struct vfio_irq_info, cap_offset);
> > +			}
> > +
> > +			kfree(caps.buf);
> > +		}
> >
> >  		return copy_to_user((void __user *)arg, &info, minsz) ?
> >  			-EFAULT : 0;
> > @@ -1565,7 +1705,8 @@ static long intel_vgpu_ioctl(struct mdev_device
> *mdev, unsigned int cmd,
> >  			int max = intel_vgpu_get_irq_count(vgpu, hdr.index);
> >
> >  			ret = vfio_set_irqs_validate_and_prepare(&hdr, max,
> > -						VFIO_PCI_NUM_IRQS, &data_size);
> > +					VFIO_PCI_NUM_IRQS + vgpu->vdev.num_irqs,
> > +								 &data_size);
> >  			if (ret) {
> >  			if (ret) {
> >  				gvt_vgpu_err("intel:vfio_set_irqs_validate_and_prepare failed\n");
> >  				return -EINVAL;
> > --
> > 2.17.1
> >
> > _______________________________________________
> > intel-gvt-dev mailing list
> > intel-gvt-dev@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/intel-gvt-dev
> 
> --
> Open Source Technology Center, Intel ltd.
> 
> $gpg --keyserver wwwkeys.pgp.net --recv-keys 4D781827


* RE: [PATCH v5 4/6] drm/i915/gvt: Deliver vGPU refresh event to userspace
  2019-08-28  1:49     ` Tian, Kevin
@ 2019-08-28  6:59       ` Zhang, Tina
  2019-08-28  7:10         ` Tian, Kevin
  0 siblings, 1 reply; 23+ messages in thread
From: Zhang, Tina @ 2019-08-28  6:59 UTC (permalink / raw)
  To: Tian, Kevin, Zhenyu Wang
  Cc: kvm, linux-kernel, Yuan, Hang, alex.williamson, kraxel, Lu,
	Kechen, intel-gvt-dev, Lv, Zhiyuan



> -----Original Message-----
> From: Tian, Kevin
> Sent: Wednesday, August 28, 2019 9:50 AM
> To: Zhenyu Wang <zhenyuw@linux.intel.com>; Zhang, Tina
> <tina.zhang@intel.com>
> Cc: kvm@vger.kernel.org; linux-kernel@vger.kernel.org; Yuan, Hang
> <hang.yuan@intel.com>; alex.williamson@redhat.com; kraxel@redhat.com;
> Lu, Kechen <kechen.lu@intel.com>; intel-gvt-dev@lists.freedesktop.org; Lv,
> Zhiyuan <zhiyuan.lv@intel.com>
> Subject: RE: [PATCH v5 4/6] drm/i915/gvt: Deliver vGPU refresh event to
> userspace
> 
> > From: Zhenyu Wang
> > Sent: Monday, August 26, 2019 3:56 PM
> >
> > On 2019.08.16 10:35:26 +0800, Tina Zhang wrote:
> > > Deliver the display refresh events to the user land. Userspace can
> > > use the irq mask/unmask mechanism to disable or enable the event
> delivery.
> > >
> > > As we know, delivering refresh event at each vblank safely avoids
> > > tearing and unexpected event overwhelming, but there are still
> > > spaces to optimize.
> 
> can you move optimization to another patch?
OK. I'll try.
> 
> > >
> > > For handling the normal case, deliver the page flip refresh event at
> > > each vblank, in other words, bounded by vblanks. Skipping some
> > > events bring performance enhancement while not hurting user
> experience.
> 
> what is the normal case? double buffer? which events are skipped in such
> scenario?
Here the normal case means >= 2 buffers. In this situation, we have to skip the redundant page flip events between two vblanks and notify user space with one display refresh event (i.e. turn the page flip operations that happen between two vblanks into a single display refresh event).
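In other words, however many flips land between two vblanks, the eventfd is signalled at most once per vblank. A simplified model of the coalescing (not the driver code; the event value is an assumption):

```c
#include <assert.h>
#include <stdint.h>

#define PRI_REFRESH_EVENT 0x1ULL	/* assumed primary-plane event value */

/* At each vblank, any number of page flips observed since the last
 * vblank collapses into a single refresh-event value to signal. */
static uint64_t coalesce_flips(unsigned int flips_since_last_vblank)
{
	return flips_since_last_vblank ? PRI_REFRESH_EVENT : 0;
}
```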

> 
> > >
> > > For single framebuffer case, deliver the refresh events to userspace
> > > at all vblanks. This heuristic at each vblank leverages
> > > pageflip_count
> 
> at all vblanks? later words said the other way i.e. delivering events only after
> the threshold is exceeded. Please be consistent in what you try to explain
> here.

Actually, there are two steps:
1) Single framebuffer case recognition.
The heuristic needs several vblanks to see whether the vGPU is working in single front buffer mode.

2) Delivering the display refresh event at all vblanks.
If the vGPU is working in single front buffer mode, the display refresh event is sent at all vblanks.
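The two steps together can be sketched as follows (PAGEFLIP_DELAY_THR as in the patch; the state struct is an illustrative stand-in for the per-vGPU counter):

```c
#include <assert.h>

#define PAGEFLIP_DELAY_THR 10	/* value used in the patch */

struct heuristic {
	int no_pageflip_count;
};

/* Returns 1 when a refresh event should be signalled at this vblank:
 * either a flip happened, or no flip has been seen for more than
 * PAGEFLIP_DELAY_THR vblanks (assumed single front buffer mode). */
static int vblank_should_signal(struct heuristic *h, int flip_happened)
{
	if (flip_happened) {
		h->no_pageflip_count = 0;
		return 1;
	}
	if (h->no_pageflip_count > PAGEFLIP_DELAY_THR)
		return 1;	/* step 2: signal at every vblank from now on */
	h->no_pageflip_count++;	/* step 1: still deciding */
	return 0;
}
```

Note that, as in the posted patch, the counter is not reset once the threshold is exceeded, so events keep firing at every vblank until a real flip arrives.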

> 
> > > increments to determine whether no page flip happens after a
> > > certain period and so that the case is regarded as single framebuffer one.
> > > Although this heuristic makes incorrect decision sometimes and it
> > > depends
> 
> why may the heuristic make incorrect decision? under what condition?

E.g. when the guest window manager is waiting for user input and there are no window update requests from the apps in the guest. In this situation, although the guest doesn't work in single front buffer mode, the heuristic will consider that it does and send the display refresh event at all vblanks. Ideally, in this situation, as the guest window manager is working in multi-buffer mode, gvt-g should only send the refresh event when a page flip happens. However, it seems there's no simple way for gvt-g to tell this case and the real single front buffer case apart.

> 
> > > on guest behavior, for example, when no cursor movements happen, the
> > > user experience does not harm and front buffer is still correctly acquired.
> > > Meanwhile, in actual single framebuffer case, the user experience is
> > > enhanced compared with page flip events only.
> 
> 'actual' vs. what? a 'faked' single framebuffer case?

I think the 'actual' here means vgpu does work in the single front buffer mode. 

> 
> > >
> > > Additionally, to mitigate the events delivering footprints, one
> > > eventfd and
> > > 8 byte eventfd counter partition are leveraged.
> 
> footprint? I guess you meant reducing the frequency of delivered events...

Exactly. Thanks.
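From the consumer's side, the counter-partition idea might look like this (the split into low/high halves is an assumption for illustration; the actual partition is defined by the series):

```c
#include <assert.h>
#include <stdint.h>

/* Userspace read()s one 8-byte counter from the eventfd.  Because
 * eventfd counters accumulate by addition, giving each plane its own
 * partition lets several coalesced signals still be told apart. */
static uint32_t pri_event_count(uint64_t counter)
{
	return (uint32_t)(counter & 0xffffffffULL);	/* assumed low half */
}

static uint32_t cur_event_count(uint64_t counter)
{
	return (uint32_t)(counter >> 32);		/* assumed high half */
}
```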

BR,
Tina
> 
> > >
> > > v2:
> > > - Support vfio_irq_info_cap_display_plane_events. (Tina)
> > >
> > > Signed-off-by: Tina Zhang <tina.zhang@intel.com>
> > > Signed-off-by: Kechen Lu <kechen.lu@intel.com>
> > > ---
> > >  drivers/gpu/drm/i915/gvt/display.c |  22 ++++
> > >  drivers/gpu/drm/i915/gvt/gvt.h     |   2 +
> > >  drivers/gpu/drm/i915/gvt/kvmgt.c   | 159 +++++++++++++++--
> > >  3 files changed, 174 insertions(+), 9 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/i915/gvt/display.c
> > b/drivers/gpu/drm/i915/gvt/display.c
> > > index 1a0a4ae4826e..616285e4a014 100644
> > > --- a/drivers/gpu/drm/i915/gvt/display.c
> > > +++ b/drivers/gpu/drm/i915/gvt/display.c
> > > @@ -34,6 +34,8 @@
> > >
> > >  #include "i915_drv.h"
> > >  #include "gvt.h"
> > > +#include <uapi/linux/vfio.h>
> > > +#include <drm/drm_plane.h>
> > >
> > >  static int get_edp_pipe(struct intel_vgpu *vgpu)
> > >  {
> > > @@ -387,6 +389,8 @@ void intel_gvt_check_vblank_emulation(struct intel_gvt *gvt)
> > >  	mutex_unlock(&gvt->lock);
> > >  }
> > >
> > > +#define PAGEFLIP_DELAY_THR 10
> > > +
> > >  static void emulate_vblank_on_pipe(struct intel_vgpu *vgpu, int pipe)
> > >  {
> > >  	struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
> > > @@ -396,7 +400,10 @@ static void emulate_vblank_on_pipe(struct intel_vgpu *vgpu, int pipe)
> > >  		[PIPE_B] = PIPE_B_VBLANK,
> > >  		[PIPE_C] = PIPE_C_VBLANK,
> > >  	};
> > > +	int pri_flip_event = SKL_FLIP_EVENT(pipe, PLANE_PRIMARY);
> > >  	int event;
> > > +	u64 eventfd_signal_val = 0;
> > > +	static int no_pageflip_count;
> > >
> > >  	if (pipe < PIPE_A || pipe > PIPE_C)
> > >  		return;
> > > @@ -407,11 +414,26 @@ static void emulate_vblank_on_pipe(struct
> > intel_vgpu *vgpu, int pipe)
> > >  		if (!pipe_is_enabled(vgpu, pipe))
> > >  			continue;
> > >
> > > +		if (event == pri_flip_event)
> > > +			eventfd_signal_val |= DISPLAY_PRI_REFRESH_EVENT_VAL;
> > > +
> > >  		intel_vgpu_trigger_virtual_event(vgpu, event);
> > >  	}
> > >
> > > +	if (eventfd_signal_val)
> > > +		no_pageflip_count = 0;
> > > +	else if (!eventfd_signal_val && no_pageflip_count > PAGEFLIP_DELAY_THR)
> >
> > extra !eventfd_signal_val
> >
> > > +		eventfd_signal_val |= DISPLAY_PRI_REFRESH_EVENT_VAL;
> 
> do you need reset the count to zero here?
> 
> > > +	else
> > > +		no_pageflip_count++;
> >
> > no_pageflip_count should be per-vgpu instead of static.
> >
> > > +
> > > +	if (vgpu->vdev.vblank_trigger && !vgpu->vdev.display_event_mask &&
> > > +		eventfd_signal_val)
> 
> is this mask per vGPU or per plane? If the latter, you need check specific bit
> here.
> 
> > > +		eventfd_signal(vgpu->vdev.vblank_trigger, eventfd_signal_val);
> > > +
> > >  	if (pipe_is_enabled(vgpu, pipe)) {
> > >  		vgpu_vreg_t(vgpu, PIPE_FRMCOUNT_G4X(pipe))++;
> > > +
> >
> > extra line
> >
> > >  		intel_vgpu_trigger_virtual_event(vgpu, vblank_event[pipe]);
> > >  	}
> > >  }
> > > diff --git a/drivers/gpu/drm/i915/gvt/gvt.h
> > b/drivers/gpu/drm/i915/gvt/gvt.h
> > > index cd29ea28d7ed..6c8ed030c30b 100644
> > > --- a/drivers/gpu/drm/i915/gvt/gvt.h
> > > +++ b/drivers/gpu/drm/i915/gvt/gvt.h
> > > @@ -205,6 +205,8 @@ struct intel_vgpu {
> > >  		int num_irqs;
> > >  		struct eventfd_ctx *intx_trigger;
> > >  		struct eventfd_ctx *msi_trigger;
> > > +		struct eventfd_ctx *vblank_trigger;
> > > +		u32 display_event_mask;
> > >
> > >  		/*
> > >  		 * Two caches are used to avoid mapping duplicated pages (eg.
> > > diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c
> > b/drivers/gpu/drm/i915/gvt/kvmgt.c
> > > index fd1633342e53..9ace1f4ff9eb 100644
> > > --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> > > +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> > > @@ -1250,6 +1250,8 @@ static int intel_vgpu_get_irq_count(struct
> > intel_vgpu *vgpu, int type)
> > >  {
> > >  	if (type == VFIO_PCI_INTX_IRQ_INDEX || type == VFIO_PCI_MSI_IRQ_INDEX)
> > >  		return 1;
> > > +	else if (type < VFIO_PCI_NUM_IRQS + vgpu->vdev.num_irqs)
> > > +		return vgpu->vdev.irq[type - VFIO_PCI_NUM_IRQS].count;
> > >
> > >  	return 0;
> > >  }
> > > @@ -1297,7 +1299,60 @@ static int intel_vgpu_set_msi_trigger(struct
> > intel_vgpu *vgpu,
> > >  	return 0;
> > >  }
> > >
> > > -static int intel_vgpu_set_irqs(struct intel_vgpu *vgpu, u32 flags,
> > > +static int intel_vgu_set_display_irq_mask(struct intel_vgpu *vgpu,
> 
> vgu -> vgpu
> 
> > > +		unsigned int index, unsigned int start, unsigned int count,
> > > +		u32 flags, void *data)
> > > +{
> > > +	if (start != 0 || count > 2)
> > > +		return -EINVAL;
> > > +
> > > +	if (flags & VFIO_IRQ_SET_DATA_NONE)
> > > +		vgpu->vdev.display_event_mask |= 1;
> >
> > see below..
> >
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +static int intel_vgu_set_display_irq_unmask(struct intel_vgpu *vgpu,
> > > +		unsigned int index, unsigned int start, unsigned int count,
> > > +		u32 flags, void *data)
> > > +{
> > > +	if (start != 0 || count > 2)
> > > +		return -EINVAL;
> > > +
> > > +	if (flags & VFIO_IRQ_SET_DATA_NONE)
> > > +		vgpu->vdev.display_event_mask &= 0;
> >
> > looks display_event_mask is used as flag for enable/disable, just write 1 or
> 0?
> 
> Do we plan to support per-plane mask in the future? If yes, then use bit
> operation but let's define the bit meaning explicitly now.,
> 
> >
> >
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +static int intel_vgpu_set_display_event_trigger(struct intel_vgpu *vgpu,
> > > +		unsigned int index, unsigned int start, unsigned int count,
> > > +		u32 flags, void *data)
> > > +{
> > > +	struct eventfd_ctx *trigger;
> > > +
> > > +	if (flags & VFIO_IRQ_SET_DATA_EVENTFD) {
> > > +		int fd = *(int *)data;
> > > +
> > > +		trigger = eventfd_ctx_fdget(fd);
> > > +		if (IS_ERR(trigger)) {
> > > +			gvt_vgpu_err("eventfd_ctx_fdget failed\n");
> > > +			return PTR_ERR(trigger);
> > > +		}
> > > +		vgpu->vdev.vblank_trigger = trigger;
> > > +		vgpu->vdev.display_event_mask = 0;
> > > +	} else if ((flags & VFIO_IRQ_SET_DATA_NONE) && !count) {
> > > +		trigger = vgpu->vdev.vblank_trigger;
> > > +		if (trigger) {
> > > +			eventfd_ctx_put(trigger);
> > > +			vgpu->vdev.vblank_trigger = NULL;
> > > +		}
> > > +	}
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +int intel_vgpu_set_irqs(struct intel_vgpu *vgpu, u32 flags,
> > >  		unsigned int index, unsigned int start, unsigned int count,
> > >  		void *data)
> > >  {
> > > @@ -1330,6 +1385,35 @@ static int intel_vgpu_set_irqs(struct intel_vgpu *vgpu, u32 flags,
> > >  			break;
> > >  		}
> > >  		break;
> > > +	default:
> > > +	{
> > > +		int i;
> > > +
> > > +		if (index >= VFIO_PCI_NUM_IRQS +
> > > +					vgpu->vdev.num_irqs)
> > > +			return -EINVAL;
> > > +		index =
> > > +			array_index_nospec(index,
> > > +						VFIO_PCI_NUM_IRQS +
> > > +						vgpu->vdev.num_irqs);
> > > +
> > > +		i = index - VFIO_PCI_NUM_IRQS;
> > > +		if (vgpu->vdev.irq[i].type == VFIO_IRQ_TYPE_GFX &&
> > > +		    vgpu->vdev.irq[i].subtype ==
> > > +		    VFIO_IRQ_SUBTYPE_GFX_DISPLAY_IRQ) {
> > > +			switch (flags & VFIO_IRQ_SET_ACTION_TYPE_MASK) {
> > > +			case VFIO_IRQ_SET_ACTION_MASK:
> > > +				func = intel_vgu_set_display_irq_mask;
> > > +				break;
> > > +			case VFIO_IRQ_SET_ACTION_UNMASK:
> > > +				func = intel_vgu_set_display_irq_unmask;
> > > +				break;
> > > +			case VFIO_IRQ_SET_ACTION_TRIGGER:
> > > +				func = intel_vgpu_set_display_event_trigger;
> > > +				break;
> > > +			}
> > > +		}
> > > +	}
> > >  	}
> > >
> > >  	if (!func)
> > > @@ -1361,7 +1445,7 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
> > >  		info.flags |= VFIO_DEVICE_FLAGS_RESET;
> > >  		info.num_regions = VFIO_PCI_NUM_REGIONS +
> > >  				vgpu->vdev.num_regions;
> > > -		info.num_irqs = VFIO_PCI_NUM_IRQS;
> > > +		info.num_irqs = VFIO_PCI_NUM_IRQS + vgpu-
> >vdev.num_irqs;
> > >
> > >  		return copy_to_user((void __user *)arg, &info, minsz) ?
> > >  			-EFAULT : 0;
> > > @@ -1521,32 +1605,88 @@ static long intel_vgpu_ioctl(struct
> > > mdev_device
> > *mdev, unsigned int cmd,
> > >  			-EFAULT : 0;
> > >  	} else if (cmd == VFIO_DEVICE_GET_IRQ_INFO) {
> > >  		struct vfio_irq_info info;
> > > +		struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
> > > +		unsigned int i;
> > > +		int ret;
> > >
> > >  		minsz = offsetofend(struct vfio_irq_info, count);
> > >
> > >  		if (copy_from_user(&info, (void __user *)arg, minsz))
> > >  			return -EFAULT;
> > >
> > > -		if (info.argsz < minsz || info.index >= VFIO_PCI_NUM_IRQS)
> > > +		if (info.argsz < minsz)
> > >  			return -EINVAL;
> > >
> > >  		switch (info.index) {
> > >  		case VFIO_PCI_INTX_IRQ_INDEX:
> > >  		case VFIO_PCI_MSI_IRQ_INDEX:
> > > +			info.flags = VFIO_IRQ_INFO_EVENTFD;
> > >  			break;
> > > -		default:
> > > +		case VFIO_PCI_MSIX_IRQ_INDEX:
> > > +		case VFIO_PCI_ERR_IRQ_INDEX:
> > > +		case VFIO_PCI_REQ_IRQ_INDEX:
> > >  			return -EINVAL;
> > > -		}
> > > +		default:
> > > +		{
> > > +			struct vfio_irq_info_cap_type cap_type = {
> > > +				.header.id = VFIO_IRQ_INFO_CAP_TYPE,
> > > +				.header.version = 1 };
> > >
> > > -		info.flags = VFIO_IRQ_INFO_EVENTFD;
> > > +			if (info.index >= VFIO_PCI_NUM_IRQS +
> > > +					vgpu->vdev.num_irqs)
> > > +				return -EINVAL;
> > > +			info.index =
> > > +				array_index_nospec(info.index,
> > > +						VFIO_PCI_NUM_IRQS +
> > > +						vgpu->vdev.num_irqs);
> > > +
> > > +			i = info.index - VFIO_PCI_NUM_IRQS;
> > > +
> > > +			info.flags = vgpu->vdev.irq[i].flags;
> > > +			cap_type.type = vgpu->vdev.irq[i].type;
> > > +			cap_type.subtype = vgpu->vdev.irq[i].subtype;
> > > +
> > > +			ret = vfio_info_add_capability(&caps,
> > > +						&cap_type.header,
> > > +						sizeof(cap_type));
> > > +			if (ret)
> > > +				return ret;
> > > +
> > > +			if (vgpu->vdev.irq[i].ops->add_capability) {
> > > +				ret = vgpu->vdev.irq[i].ops-
> > >add_capability(vgpu,
> > > +
> > &caps);
> > > +				if (ret)
> > > +					return ret;
> > > +			}
> > > +		}
> > > +		}
> > >
> > >  		info.count = intel_vgpu_get_irq_count(vgpu, info.index);
> > >
> > >  		if (info.index == VFIO_PCI_INTX_IRQ_INDEX)
> > >  			info.flags |= (VFIO_IRQ_INFO_MASKABLE |
> > >  				       VFIO_IRQ_INFO_AUTOMASKED);
> > > -		else
> > > -			info.flags |= VFIO_IRQ_INFO_NORESIZE;
> > > +
> > > +		if (caps.size) {
> > > +			info.flags |= VFIO_IRQ_INFO_FLAG_CAPS;
> > > +			if (info.argsz < sizeof(info) + caps.size) {
> > > +				info.argsz = sizeof(info) + caps.size;
> > > +				info.cap_offset = 0;
> > > +			} else {
> > > +				vfio_info_cap_shift(&caps, sizeof(info));
> > > +				if (copy_to_user((void __user *)arg +
> > > +						  sizeof(info), caps.buf,
> > > +						  caps.size)) {
> > > +					kfree(caps.buf);
> > > +					return -EFAULT;
> > > +				}
> > > +				info.cap_offset = sizeof(info);
> > > +				if (offsetofend(struct vfio_irq_info,
> > cap_offset) > minsz)
> > > +					minsz = offsetofend(struct
> > vfio_irq_info, cap_offset);
> > > +			}
> > > +
> > > +			kfree(caps.buf);
> > > +		}
> > >
> > >  		return copy_to_user((void __user *)arg, &info, minsz) ?
> > >  			-EFAULT : 0;
> > > @@ -1565,7 +1705,8 @@ static long intel_vgpu_ioctl(struct
> > > mdev_device
> > *mdev, unsigned int cmd,
> > >  			int max = intel_vgpu_get_irq_count(vgpu, hdr.index);
> > >
> > >  			ret = vfio_set_irqs_validate_and_prepare(&hdr, max,
> > > -						VFIO_PCI_NUM_IRQS,
> > &data_size);
> > > +					VFIO_PCI_NUM_IRQS + vgpu-
> > >vdev.num_irqs,
> > > +								 &data_size);
> > >  			if (ret) {
> > >
> > 	gvt_vgpu_err("intel:vfio_set_irqs_validate_and_prepare failed\n");
> > >  				return -EINVAL;
> > > --
> > > 2.17.1
> > >
> > > _______________________________________________
> > > intel-gvt-dev mailing list
> > > intel-gvt-dev@lists.freedesktop.org
> > > https://lists.freedesktop.org/mailman/listinfo/intel-gvt-dev
> >
> > --
> > Open Source Technology Center, Intel ltd.
> >
> > $gpg --keyserver wwwkeys.pgp.net --recv-keys 4D781827

* RE: [PATCH v5 4/6] drm/i915/gvt: Deliver vGPU refresh event to userspace
  2019-08-28  6:59       ` Zhang, Tina
@ 2019-08-28  7:10         ` Tian, Kevin
  2019-08-28  7:39           ` Zhang, Tina
  0 siblings, 1 reply; 23+ messages in thread
From: Tian, Kevin @ 2019-08-28  7:10 UTC (permalink / raw)
  To: Zhang, Tina, Zhenyu Wang
  Cc: kvm, linux-kernel, Yuan, Hang, alex.williamson, kraxel, Lu,
	Kechen, intel-gvt-dev, Lv, Zhiyuan

> From: Zhang, Tina
> Sent: Wednesday, August 28, 2019 2:59 PM
> 
> > -----Original Message-----
> > From: Tian, Kevin
> > Sent: Wednesday, August 28, 2019 9:50 AM
> > To: Zhenyu Wang <zhenyuw@linux.intel.com>; Zhang, Tina
> > <tina.zhang@intel.com>
> > Cc: kvm@vger.kernel.org; linux-kernel@vger.kernel.org; Yuan, Hang
> > <hang.yuan@intel.com>; alex.williamson@redhat.com; kraxel@redhat.com;
> > Lu, Kechen <kechen.lu@intel.com>; intel-gvt-dev@lists.freedesktop.org; Lv,
> > Zhiyuan <zhiyuan.lv@intel.com>
> > Subject: RE: [PATCH v5 4/6] drm/i915/gvt: Deliver vGPU refresh event to
> > userspace
> >
> > > From: Zhenyu Wang
> > > Sent: Monday, August 26, 2019 3:56 PM
> > >
> > > On 2019.08.16 10:35:26 +0800, Tina Zhang wrote:
> > > > Deliver the display refresh events to the user land. Userspace can
> > > > use the irq mask/unmask mechanism to disable or enable the event
> > delivery.
> > > >
> > > > As we know, delivering refresh event at each vblank safely avoids
> > > > tearing and unexpected event overwhelming, but there is still
> > > > room to optimize.
> >
> > can you move optimization to another patch?
> OK. I'll try.
> >
> > > >
> > > > For handling the normal case, deliver the page flip refresh event at
> > > > each vblank, in other words, bounded by vblanks. Skipping some
> > > > events brings performance enhancement while not hurting user
> > experience.
> >
> > what is the normal case? double buffer? which events are skipped in such
> > scenario?
> Here normal case means >= 2 buffers. In this situation, we have to skip the
> redundant page flip events between two vblanks and notify user space with
> one display refresh event (i.e. turn those page flip operations between two
> vblanks into one display refresh event).
> 

"have to" refers to something mandatory, not to a performance enhancement.

> >
> > > >
> > > > For single framebuffer case, deliver the refresh events to userspace
> > > > at all vblanks. This heuristic at each vblank leverages
> > > > pageflip_count
> >
> > at all vblanks? later words said the other way i.e. delivering events only
> after
> > the threshold is exceeded. Please be consistent in what you try to explain
> > here.
> 
> Actually, there're two steps:
> 1) single framebuffer case recognition
> The heuristic needs several vblanks to see if vgpu is working in the single front
> buffer mode.
> 


what is the difference between "single framebuffer" and "single front buffer"?

> 2) deliver the display refresh event at all vblanks.
> If vgpu is working in single front buffer mode, the display refresh event will be
> sent at all vblanks.

I didn't get it. why is it OK to not deliver events before you identify the
single framebuffer mode, while not OK to do so after the identification?
If the definition of such mode can tolerate misses of vblank events, then
you don't need deliver at every vblank. Otherwise, the detection itself
already breaks the guest assumption.

> 
> >
> > > > increments to determine whether no page flip has happened after a
> > > > certain period, so that the case is regarded as a single framebuffer one.
> > > > Although this heuristic sometimes makes an incorrect decision and it
> > > > depends
> >
> > why may the heuristic make incorrect decision? under what condition?
> 
> E.g. when guest window manager is waiting for user input and there're no
> window update requests from the apps in guest. In that situation, although guest
> doesn't work in single front buffer mode, the heuristic will consider it does
> and send the display refresh event at all vblanks. Ideally, in this situation, as
> guest window manager is working in multi-buffer mode, gvt-g should only
> send the refresh event when page flip happens. However, it seems there's no
> simple way for gvt-g to tell this case and the real single front buffer case apart.

I think you need some background to explain the vblank requirement in
single vs. multi-buffer mode, before talking about the actual optimization.
It's not clear to me so far.

> 
> >
> > > > on guest behavior, for example, when no cursor movements happen, the
> > > > user experience is not harmed and the front buffer is still correctly acquired.
> > > > Meanwhile, in actual single framebuffer case, the user experience is
> > > > enhanced compared with page flip events only.
> >
> > 'actual' vs. what? a 'faked' single framebuffer case?
> 
> I think the 'actual' here means vgpu does work in the single front buffer mode.

then just remove 'actual'

> 
> >
> > > >
> > > > Additionally, to mitigate the events delivering footprints, one
> > > > eventfd and
> > > > 8 byte eventfd counter partition are leveraged.
> >
> > footprint? I guess you meant reducing the frequency of delivered events...
> 
> Exactly. Thanks.
> 

there are other comments embedded in the code. You may have overlooked them.

> BR,
> Tina
> >
> > > >
> > > > v2:
> > > > - Support vfio_irq_info_cap_display_plane_events. (Tina)
> > > >
> > > > Signed-off-by: Tina Zhang <tina.zhang@intel.com>
> > > > Signed-off-by: Kechen Lu <kechen.lu@intel.com>
> > > > ---
> > > >  drivers/gpu/drm/i915/gvt/display.c |  22 ++++
> > > >  drivers/gpu/drm/i915/gvt/gvt.h     |   2 +
> > > >  drivers/gpu/drm/i915/gvt/kvmgt.c   | 159
> > +++++++++++++++++++++++++++-
> > > -
> > > >  3 files changed, 174 insertions(+), 9 deletions(-)
> > > >
> > > > diff --git a/drivers/gpu/drm/i915/gvt/display.c
> > > b/drivers/gpu/drm/i915/gvt/display.c
> > > > index 1a0a4ae4826e..616285e4a014 100644
> > > > --- a/drivers/gpu/drm/i915/gvt/display.c
> > > > +++ b/drivers/gpu/drm/i915/gvt/display.c
> > > > @@ -34,6 +34,8 @@
> > > >
> > > >  #include "i915_drv.h"
> > > >  #include "gvt.h"
> > > > +#include <uapi/linux/vfio.h>
> > > > +#include <drm/drm_plane.h>
> > > >
> > > >  static int get_edp_pipe(struct intel_vgpu *vgpu)  { @@ -387,6
> > > > +389,8 @@ void intel_gvt_check_vblank_emulation(struct
> > > intel_gvt *gvt)
> > > >  	mutex_unlock(&gvt->lock);
> > > >  }
> > > >
> > > > +#define PAGEFLIP_DELAY_THR 10
> > > > +
> > > >  static void emulate_vblank_on_pipe(struct intel_vgpu *vgpu, int
> > > > pipe)  {
> > > >  	struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv; @@ -
> > 396,7
> > > > +400,10 @@ static void emulate_vblank_on_pipe(struct
> > > intel_vgpu *vgpu, int pipe)
> > > >  		[PIPE_B] = PIPE_B_VBLANK,
> > > >  		[PIPE_C] = PIPE_C_VBLANK,
> > > >  	};
> > > > +	int pri_flip_event = SKL_FLIP_EVENT(pipe, PLANE_PRIMARY);
> > > >  	int event;
> > > > +	u64 eventfd_signal_val = 0;
> > > > +	static int no_pageflip_count;
> > > >
> > > >  	if (pipe < PIPE_A || pipe > PIPE_C)
> > > >  		return;
> > > > @@ -407,11 +414,26 @@ static void emulate_vblank_on_pipe(struct
> > > intel_vgpu *vgpu, int pipe)
> > > >  		if (!pipe_is_enabled(vgpu, pipe))
> > > >  			continue;
> > > >
> > > > +		if (event == pri_flip_event)
> > > > +			eventfd_signal_val |=
> > > DISPLAY_PRI_REFRESH_EVENT_VAL;
> > > > +
> > > >  		intel_vgpu_trigger_virtual_event(vgpu, event);
> > > >  	}
> > > >
> > > > +	if (eventfd_signal_val)
> > > > +		no_pageflip_count = 0;
> > > > +	else if (!eventfd_signal_val && no_pageflip_count >
> > > PAGEFLIP_DELAY_THR)
> > >
> > > extra !eventfd_signal_val
> > >
> > > > +		eventfd_signal_val |= DISPLAY_PRI_REFRESH_EVENT_VAL;
> >
> > do you need reset the count to zero here?
> >
> > > > +	else
> > > > +		no_pageflip_count++;
> > >
> > > no_pageflip_count should be per-vgpu instead of static.
> > >
> > > > +
> > > > +	if (vgpu->vdev.vblank_trigger && !vgpu->vdev.display_event_mask
> > > &&
> > > > +		eventfd_signal_val)
> >
> > is this mask per vGPU or per plane? If the latter, you need check specific bit
> > here.
> >
> > > > +		eventfd_signal(vgpu->vdev.vblank_trigger,
> > > eventfd_signal_val);
> > > > +
> > > >  	if (pipe_is_enabled(vgpu, pipe)) {
> > > >  		vgpu_vreg_t(vgpu, PIPE_FRMCOUNT_G4X(pipe))++;
> > > > +
> > >
> > > extra line
> > >
> > > >  		intel_vgpu_trigger_virtual_event(vgpu, vblank_event[pipe]);
> > > >  	}
> > > >  }
> > > > diff --git a/drivers/gpu/drm/i915/gvt/gvt.h
> > > b/drivers/gpu/drm/i915/gvt/gvt.h
> > > > index cd29ea28d7ed..6c8ed030c30b 100644
> > > > --- a/drivers/gpu/drm/i915/gvt/gvt.h
> > > > +++ b/drivers/gpu/drm/i915/gvt/gvt.h
> > > > @@ -205,6 +205,8 @@ struct intel_vgpu {
> > > >  		int num_irqs;
> > > >  		struct eventfd_ctx *intx_trigger;
> > > >  		struct eventfd_ctx *msi_trigger;
> > > > +		struct eventfd_ctx *vblank_trigger;
> > > > +		u32 display_event_mask;
> > > >
> > > >  		/*
> > > >  		 * Two caches are used to avoid mapping duplicated pages
> > > (eg.
> > > > diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c
> > > b/drivers/gpu/drm/i915/gvt/kvmgt.c
> > > > index fd1633342e53..9ace1f4ff9eb 100644
> > > > --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> > > > +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> > > > @@ -1250,6 +1250,8 @@ static int intel_vgpu_get_irq_count(struct
> > > intel_vgpu *vgpu, int type)
> > > >  {
> > > >  	if (type == VFIO_PCI_INTX_IRQ_INDEX || type ==
> > > VFIO_PCI_MSI_IRQ_INDEX)
> > > >  		return 1;
> > > > +	else if (type < VFIO_PCI_NUM_IRQS + vgpu->vdev.num_irqs)
> > > > +		return vgpu->vdev.irq[type - VFIO_PCI_NUM_IRQS].count;
> > > >
> > > >  	return 0;
> > > >  }
> > > > @@ -1297,7 +1299,60 @@ static int intel_vgpu_set_msi_trigger(struct
> > > intel_vgpu *vgpu,
> > > >  	return 0;
> > > >  }
> > > >
> > > > -static int intel_vgpu_set_irqs(struct intel_vgpu *vgpu, u32 flags,
> > > > +static int intel_vgu_set_display_irq_mask(struct intel_vgpu *vgpu,
> >
> > vgu -> vgpu
> >
> > > > +		unsigned int index, unsigned int start, unsigned int count,
> > > > +		u32 flags, void *data)
> > > > +{
> > > > +	if (start != 0 || count > 2)
> > > > +		return -EINVAL;
> > > > +
> > > > +	if (flags & VFIO_IRQ_SET_DATA_NONE)
> > > > +		vgpu->vdev.display_event_mask |= 1;
> > >
> > > see below..
> > >
> > > > +
> > > > +	return 0;
> > > > +}
> > > > +
> > > > +static int intel_vgu_set_display_irq_unmask(struct intel_vgpu *vgpu,
> > > > +		unsigned int index, unsigned int start, unsigned int count,
> > > > +		u32 flags, void *data)
> > > > +{
> > > > +	if (start != 0 || count > 2)
> > > > +		return -EINVAL;
> > > > +
> > > > +	if (flags & VFIO_IRQ_SET_DATA_NONE)
> > > > +		vgpu->vdev.display_event_mask &= 0;
> > >
> > > looks display_event_mask is used as flag for enable/disable, just write 1 or
> > 0?
> >
> > Do we plan to support per-plane mask in the future? If yes, then use bit
> > operation but let's define the bit meaning explicitly now.,
> >
> > >
> > >
> > > > +
> > > > +	return 0;
> > > > +}
> > > > +
> > > > +static int intel_vgpu_set_display_event_trigger(struct intel_vgpu *vgpu,
> > > > +		unsigned int index, unsigned int start, unsigned int count,
> > > > +		u32 flags, void *data)
> > > > +{
> > > > +	struct eventfd_ctx *trigger;
> > > > +
> > > > +	if (flags & VFIO_IRQ_SET_DATA_EVENTFD) {
> > > > +		int fd = *(int *)data;
> > > > +
> > > > +		trigger = eventfd_ctx_fdget(fd);
> > > > +		if (IS_ERR(trigger)) {
> > > > +			gvt_vgpu_err("eventfd_ctx_fdget failed\n");
> > > > +			return PTR_ERR(trigger);
> > > > +		}
> > > > +		vgpu->vdev.vblank_trigger = trigger;
> > > > +		vgpu->vdev.display_event_mask = 0;
> > > > +	} else if ((flags & VFIO_IRQ_SET_DATA_NONE) && !count) {
> > > > +		trigger = vgpu->vdev.vblank_trigger;
> > > > +		if (trigger) {
> > > > +			eventfd_ctx_put(trigger);
> > > > +			vgpu->vdev.vblank_trigger = NULL;
> > > > +		}
> > > > +	}
> > > > +
> > > > +	return 0;
> > > > +}
> > > > +
> > > > +int intel_vgpu_set_irqs(struct intel_vgpu *vgpu, u32 flags,
> > > >  		unsigned int index, unsigned int start, unsigned int count,
> > > >  		void *data)
> > > >  {
> > > > @@ -1330,6 +1385,35 @@ static int intel_vgpu_set_irqs(struct
> > > > intel_vgpu
> > > *vgpu, u32 flags,
> > > >  			break;
> > > >  		}
> > > >  		break;
> > > > +	default:
> > > > +	{
> > > > +		int i;
> > > > +
> > > > +		if (index >= VFIO_PCI_NUM_IRQS +
> > > > +					vgpu->vdev.num_irqs)
> > > > +			return -EINVAL;
> > > > +		index =
> > > > +			array_index_nospec(index,
> > > > +						VFIO_PCI_NUM_IRQS +
> > > > +						vgpu->vdev.num_irqs);
> > > > +
> > > > +		i = index - VFIO_PCI_NUM_IRQS;
> > > > +		if (vgpu->vdev.irq[i].type == VFIO_IRQ_TYPE_GFX &&
> > > > +		    vgpu->vdev.irq[i].subtype ==
> > > > +		    VFIO_IRQ_SUBTYPE_GFX_DISPLAY_IRQ) {
> > > > +			switch (flags & VFIO_IRQ_SET_ACTION_TYPE_MASK)
> > {
> > > > +			case VFIO_IRQ_SET_ACTION_MASK:
> > > > +				func = intel_vgu_set_display_irq_mask;
> > > > +				break;
> > > > +			case VFIO_IRQ_SET_ACTION_UNMASK:
> > > > +				func = intel_vgu_set_display_irq_unmask;
> > > > +				break;
> > > > +			case VFIO_IRQ_SET_ACTION_TRIGGER:
> > > > +				func = intel_vgpu_set_display_event_trigger;
> > > > +				break;
> > > > +			}
> > > > +		}
> > > > +	}
> > > >  	}
> > > >
> > > >  	if (!func)
> > > > @@ -1361,7 +1445,7 @@ static long intel_vgpu_ioctl(struct
> > > > mdev_device
> > > *mdev, unsigned int cmd,
> > > >  		info.flags |= VFIO_DEVICE_FLAGS_RESET;
> > > >  		info.num_regions = VFIO_PCI_NUM_REGIONS +
> > > >  				vgpu->vdev.num_regions;
> > > > -		info.num_irqs = VFIO_PCI_NUM_IRQS;
> > > > +		info.num_irqs = VFIO_PCI_NUM_IRQS + vgpu-
> > >vdev.num_irqs;
> > > >
> > > >  		return copy_to_user((void __user *)arg, &info, minsz) ?
> > > >  			-EFAULT : 0;
> > > > @@ -1521,32 +1605,88 @@ static long intel_vgpu_ioctl(struct
> > > > mdev_device
> > > *mdev, unsigned int cmd,
> > > >  			-EFAULT : 0;
> > > >  	} else if (cmd == VFIO_DEVICE_GET_IRQ_INFO) {
> > > >  		struct vfio_irq_info info;
> > > > +		struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
> > > > +		unsigned int i;
> > > > +		int ret;
> > > >
> > > >  		minsz = offsetofend(struct vfio_irq_info, count);
> > > >
> > > >  		if (copy_from_user(&info, (void __user *)arg, minsz))
> > > >  			return -EFAULT;
> > > >
> > > > -		if (info.argsz < minsz || info.index >= VFIO_PCI_NUM_IRQS)
> > > > +		if (info.argsz < minsz)
> > > >  			return -EINVAL;
> > > >
> > > >  		switch (info.index) {
> > > >  		case VFIO_PCI_INTX_IRQ_INDEX:
> > > >  		case VFIO_PCI_MSI_IRQ_INDEX:
> > > > +			info.flags = VFIO_IRQ_INFO_EVENTFD;
> > > >  			break;
> > > > -		default:
> > > > +		case VFIO_PCI_MSIX_IRQ_INDEX:
> > > > +		case VFIO_PCI_ERR_IRQ_INDEX:
> > > > +		case VFIO_PCI_REQ_IRQ_INDEX:
> > > >  			return -EINVAL;
> > > > -		}
> > > > +		default:
> > > > +		{
> > > > +			struct vfio_irq_info_cap_type cap_type = {
> > > > +				.header.id = VFIO_IRQ_INFO_CAP_TYPE,
> > > > +				.header.version = 1 };
> > > >
> > > > -		info.flags = VFIO_IRQ_INFO_EVENTFD;
> > > > +			if (info.index >= VFIO_PCI_NUM_IRQS +
> > > > +					vgpu->vdev.num_irqs)
> > > > +				return -EINVAL;
> > > > +			info.index =
> > > > +				array_index_nospec(info.index,
> > > > +						VFIO_PCI_NUM_IRQS +
> > > > +						vgpu->vdev.num_irqs);
> > > > +
> > > > +			i = info.index - VFIO_PCI_NUM_IRQS;
> > > > +
> > > > +			info.flags = vgpu->vdev.irq[i].flags;
> > > > +			cap_type.type = vgpu->vdev.irq[i].type;
> > > > +			cap_type.subtype = vgpu->vdev.irq[i].subtype;
> > > > +
> > > > +			ret = vfio_info_add_capability(&caps,
> > > > +						&cap_type.header,
> > > > +						sizeof(cap_type));
> > > > +			if (ret)
> > > > +				return ret;
> > > > +
> > > > +			if (vgpu->vdev.irq[i].ops->add_capability) {
> > > > +				ret = vgpu->vdev.irq[i].ops-
> > > >add_capability(vgpu,
> > > > +
> > > &caps);
> > > > +				if (ret)
> > > > +					return ret;
> > > > +			}
> > > > +		}
> > > > +		}
> > > >
> > > >  		info.count = intel_vgpu_get_irq_count(vgpu, info.index);
> > > >
> > > >  		if (info.index == VFIO_PCI_INTX_IRQ_INDEX)
> > > >  			info.flags |= (VFIO_IRQ_INFO_MASKABLE |
> > > >  				       VFIO_IRQ_INFO_AUTOMASKED);
> > > > -		else
> > > > -			info.flags |= VFIO_IRQ_INFO_NORESIZE;
> > > > +
> > > > +		if (caps.size) {
> > > > +			info.flags |= VFIO_IRQ_INFO_FLAG_CAPS;
> > > > +			if (info.argsz < sizeof(info) + caps.size) {
> > > > +				info.argsz = sizeof(info) + caps.size;
> > > > +				info.cap_offset = 0;
> > > > +			} else {
> > > > +				vfio_info_cap_shift(&caps, sizeof(info));
> > > > +				if (copy_to_user((void __user *)arg +
> > > > +						  sizeof(info), caps.buf,
> > > > +						  caps.size)) {
> > > > +					kfree(caps.buf);
> > > > +					return -EFAULT;
> > > > +				}
> > > > +				info.cap_offset = sizeof(info);
> > > > +				if (offsetofend(struct vfio_irq_info,
> > > cap_offset) > minsz)
> > > > +					minsz = offsetofend(struct
> > > vfio_irq_info, cap_offset);
> > > > +			}
> > > > +
> > > > +			kfree(caps.buf);
> > > > +		}
> > > >
> > > >  		return copy_to_user((void __user *)arg, &info, minsz) ?
> > > >  			-EFAULT : 0;
> > > > @@ -1565,7 +1705,8 @@ static long intel_vgpu_ioctl(struct
> > > > mdev_device
> > > *mdev, unsigned int cmd,
> > > >  			int max = intel_vgpu_get_irq_count(vgpu, hdr.index);
> > > >
> > > >  			ret = vfio_set_irqs_validate_and_prepare(&hdr, max,
> > > > -						VFIO_PCI_NUM_IRQS,
> > > &data_size);
> > > > +					VFIO_PCI_NUM_IRQS + vgpu-
> > > >vdev.num_irqs,
> > > > +								 &data_size);
> > > >  			if (ret) {
> > > >
> > > 	gvt_vgpu_err("intel:vfio_set_irqs_validate_and_prepare failed\n");
> > > >  				return -EINVAL;
> > > > --
> > > > 2.17.1
> > > >
> > > > _______________________________________________
> > > > intel-gvt-dev mailing list
> > > > intel-gvt-dev@lists.freedesktop.org
> > > > https://lists.freedesktop.org/mailman/listinfo/intel-gvt-dev
> > >
> > > --
> > > Open Source Technology Center, Intel ltd.
> > >
> > > $gpg --keyserver wwwkeys.pgp.net --recv-keys 4D781827

* RE: [PATCH v5 4/6] drm/i915/gvt: Deliver vGPU refresh event to userspace
  2019-08-28  7:10         ` Tian, Kevin
@ 2019-08-28  7:39           ` Zhang, Tina
  0 siblings, 0 replies; 23+ messages in thread
From: Zhang, Tina @ 2019-08-28  7:39 UTC (permalink / raw)
  To: Tian, Kevin, Zhenyu Wang
  Cc: kvm, linux-kernel, Yuan, Hang, alex.williamson, kraxel, Lu,
	Kechen, intel-gvt-dev, Lv, Zhiyuan



> -----Original Message-----
> From: Tian, Kevin
> Sent: Wednesday, August 28, 2019 3:10 PM
> To: Zhang, Tina <tina.zhang@intel.com>; Zhenyu Wang
> <zhenyuw@linux.intel.com>
> Cc: kvm@vger.kernel.org; linux-kernel@vger.kernel.org; Yuan, Hang
> <hang.yuan@intel.com>; alex.williamson@redhat.com; kraxel@redhat.com;
> Lu, Kechen <kechen.lu@intel.com>; intel-gvt-dev@lists.freedesktop.org; Lv,
> Zhiyuan <zhiyuan.lv@intel.com>
> Subject: RE: [PATCH v5 4/6] drm/i915/gvt: Deliver vGPU refresh event to
> userspace
> 
> > From: Zhang, Tina
> > Sent: Wednesday, August 28, 2019 2:59 PM
> >
> > > -----Original Message-----
> > > From: Tian, Kevin
> > > Sent: Wednesday, August 28, 2019 9:50 AM
> > > To: Zhenyu Wang <zhenyuw@linux.intel.com>; Zhang, Tina
> > > <tina.zhang@intel.com>
> > > Cc: kvm@vger.kernel.org; linux-kernel@vger.kernel.org; Yuan, Hang
> > > <hang.yuan@intel.com>; alex.williamson@redhat.com;
> > > kraxel@redhat.com; Lu, Kechen <kechen.lu@intel.com>;
> > > intel-gvt-dev@lists.freedesktop.org; Lv, Zhiyuan
> > > <zhiyuan.lv@intel.com>
> > > Subject: RE: [PATCH v5 4/6] drm/i915/gvt: Deliver vGPU refresh event
> > > to userspace
> > >
> > > > From: Zhenyu Wang
> > > > Sent: Monday, August 26, 2019 3:56 PM
> > > >
> > > > On 2019.08.16 10:35:26 +0800, Tina Zhang wrote:
> > > > > Deliver the display refresh events to the user land. Userspace
> > > > > can use the irq mask/unmask mechanism to disable or enable the
> > > > > event
> > > delivery.
> > > > >
> > > > > As we know, delivering refresh event at each vblank safely
> > > > > avoids tearing and unexpected event overwhelming, but there is
> > > > > still room to optimize.
> > >
> > > can you move optimization to another patch?
> > OK. I'll try.
> > >
> > > > >
> > > > > For handling the normal case, deliver the page flip refresh
> > > > > event at each vblank, in other words, bounded by vblanks.
> > > > > Skipping some events brings performance enhancement while not
> > > > > hurting user
> > > experience.
> > >
> > > what is the normal case? double buffer? which events are skipped in
> > > such scenario?
> > Here normal case means >= 2 buffers. In this situation, we have to
> > skip the redundant page flip events between two vblanks and notify
> > user space with one display refresh event (i.e. turn those page flip
> > operations between two vblanks into one display refresh event).
> >
> 
> "have to" refers to something mandatory, instead of performance
> enhancement.

I think the "performance enhancement" here means that in multi-buffer cases there is no need to deliver a display refresh event at every vblank; an event is only needed when a page flip actually happened between two vblanks.
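
The coalescing described here can be sketched in a few lines. This is only an illustrative model of the idea, not code from the series; the struct and function names are made up:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical sketch of per-vblank event coalescing: any number of
 * page flips between two vblanks collapses into at most one refresh
 * event, delivered at the vblank. Names are illustrative, not from
 * the actual GVT-g code.
 */
struct vblank_coalescer {
	unsigned int pending_flips;	/* flips seen since the last vblank */
	unsigned int events_sent;	/* refresh events delivered so far */
};

static void on_page_flip(struct vblank_coalescer *c)
{
	c->pending_flips++;		/* just record it; don't notify yet */
}

/* Returns true if a refresh event is delivered at this vblank. */
static bool on_vblank(struct vblank_coalescer *c)
{
	bool notify = c->pending_flips > 0;

	c->pending_flips = 0;		/* N flips -> at most 1 event */
	if (notify)
		c->events_sent++;
	return notify;
}
```

So a guest flipping faster than the vblank rate still produces at most one userspace event per vblank period.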

> 
> > >
> > > > >
> > > > > For single framebuffer case, deliver the refresh events to
> > > > > userspace at all vblanks. This heuristic at each vblank
> > > > > leverages pageflip_count
> > >
> > > at all vblanks? later words said the other way i.e. delivering
> > > events only
> > after
> > > the threshold is exceeded. Please be consistent in what you try to
> > > explain here.
> >
> > Actually, there're two steps:
> > 1) single framebuffer case recognition
> > The heuristic needs several vblanks to see if vgpu is working in the
> > single front buffer mode.
> >
> 
> 
> what is the difference between "single framebuffer" and "single front buffer"?
Same meaning.

> 
> > 2) deliver the display refresh event at all vblanks.
> > If vgpu is working in single front buffer mode, the display refresh event will
> be
> > sent at all vblanks.
> 
> I didn't get it. why is it OK to not deliver events before you identify the
> single framebuffer mode, while not OK to do so after the identification?
> If the definition of such mode can tolerate misses of vblank events, then
> you don't need deliver at every vblank. Otherwise, the detection itself
> already breaks the guest assumption.

There is a small window (fewer than about 10 vblank periods) during which gvt-g won't send any display refresh event unless a page flip actually happens. This window is what provides the performance enhancement: when the guest has multiple framebuffers and its page flip rate is lower than the vblank rate, gvt-g only sends events when page flips really happen. On the other hand, the same window is what the heuristic uses to guess whether the guest is working in single front buffer mode. For the performance enhancement alone, the window could be arbitrarily large; for single front buffer recognition, however, it cannot be too large, as that would hurt the user experience. So we settled on 10 vblank periods.
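
A minimal standalone model of that detection window, using the PAGEFLIP_DELAY_THR value from the patch under review; the state is kept per vGPU (addressing Zhenyu's comment that the counter must not be static). Everything except the threshold name is illustrative, not the actual GVT-g code:

```c
#include <assert.h>
#include <stdbool.h>

#define PAGEFLIP_DELAY_THR 10	/* threshold from the patch under review */

/* Per-vGPU heuristic state (illustrative name). */
struct refresh_state {
	int no_pageflip_count;	/* consecutive vblanks with no page flip */
};

/*
 * Called at each emulated vblank. @flipped tells whether a primary
 * plane flip was observed in this vblank period. Returns true when a
 * refresh event should be signaled to userspace.
 */
static bool vblank_should_notify(struct refresh_state *s, bool flipped)
{
	if (flipped) {
		/* Multi-buffer case: notify, reset the quiet counter. */
		s->no_pageflip_count = 0;
		return true;
	}
	if (s->no_pageflip_count > PAGEFLIP_DELAY_THR) {
		/*
		 * Quiet for more than THR vblanks: assume single front
		 * buffer mode and notify at every vblank from now on.
		 */
		return true;
	}
	s->no_pageflip_count++;	/* still inside the detection window */
	return false;
}
```

During the first ~10 quiet vblanks nothing is sent (the window), after which every vblank notifies until a page flip resets the counter.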

> 
> >
> > >
> > > > > increments to determine whether no page flip has happened after a
> > > > > certain period, so that the case is regarded as a single framebuffer
> > > > > one.
> > > > > Although this heuristic makes incorrect decision sometimes and it
> > > > > depends
> > >
> > > why may the heuristic make incorrect decision? under what condition?
> >
> > E.g. when guest window manager is waiting for user input and there're no
> > window update requests from the apps in guest. In that situation, although
> guest
> > doesn't work in single front buffer mode, the heuristic will consider it does
> > and send the display refresh event at all vblanks. Ideally, in this situation,
> as
> > guest window manager is working in multi-buffer mode, gvt-g should only
> > send the refresh event when page flip happens. However, it seems there's
> no
> > simple way for gvt-g to tell this case and the real single front buffer case
> apart.
> 
> I think you need some background to explain the vblank requirement in
> single vs. multi-buffer mode, before talking about the actual optimization.
> It's not clear to me so far.
> 
> >
> > >
> > > > > on guest behavior, for example, when no cursor movements happen, the
> > > > > user experience is not harmed and the front buffer is still correctly
> > > > > acquired.
> > > > > Meanwhile, in actual single framebuffer case, the user experience is
> > > > > enhanced compared with page flip events only.
> > >
> > > 'actual' vs. what? a 'faked' single framebuffer case?
> >
> > I think the 'actual' here means vgpu does work in the single front buffer
> mode.
> 
> then just remove 'actual'
> 
> >
> > >
> > > > >
> > > > > Additionally, to mitigate the events delivering footprints, one
> > > > > eventfd and
> > > > > 8 byte eventfd counter partition are leveraged.
> > >
> > > footprint? I guess you meant reducing the frequency of delivered
> events...
> >
> > Exactly. Thanks.
> >
> 
there are other comments embedded in the code. You may have overlooked them.
Thanks.

BR,
Tina
> 
> > BR,
> > Tina
> > >
> > > > >
> > > > > v2:
> > > > > - Support vfio_irq_info_cap_display_plane_events. (Tina)
> > > > >
> > > > > Signed-off-by: Tina Zhang <tina.zhang@intel.com>
> > > > > Signed-off-by: Kechen Lu <kechen.lu@intel.com>
> > > > > ---
> > > > >  drivers/gpu/drm/i915/gvt/display.c |  22 ++++
> > > > >  drivers/gpu/drm/i915/gvt/gvt.h     |   2 +
> > > > >  drivers/gpu/drm/i915/gvt/kvmgt.c   | 159
> > > +++++++++++++++++++++++++++-
> > > > -
> > > > >  3 files changed, 174 insertions(+), 9 deletions(-)
> > > > >
> > > > > diff --git a/drivers/gpu/drm/i915/gvt/display.c
> > > > b/drivers/gpu/drm/i915/gvt/display.c
> > > > > index 1a0a4ae4826e..616285e4a014 100644
> > > > > --- a/drivers/gpu/drm/i915/gvt/display.c
> > > > > +++ b/drivers/gpu/drm/i915/gvt/display.c
> > > > > @@ -34,6 +34,8 @@
> > > > >
> > > > >  #include "i915_drv.h"
> > > > >  #include "gvt.h"
> > > > > +#include <uapi/linux/vfio.h>
> > > > > +#include <drm/drm_plane.h>
> > > > >
> > > > >  static int get_edp_pipe(struct intel_vgpu *vgpu)  { @@ -387,6
> > > > > +389,8 @@ void intel_gvt_check_vblank_emulation(struct
> > > > intel_gvt *gvt)
> > > > >  	mutex_unlock(&gvt->lock);
> > > > >  }
> > > > >
> > > > > +#define PAGEFLIP_DELAY_THR 10
> > > > > +
> > > > >  static void emulate_vblank_on_pipe(struct intel_vgpu *vgpu, int
> > > > > pipe)  {
> > > > >  	struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv; @@ -
> > > 396,7
> > > > > +400,10 @@ static void emulate_vblank_on_pipe(struct
> > > > intel_vgpu *vgpu, int pipe)
> > > > >  		[PIPE_B] = PIPE_B_VBLANK,
> > > > >  		[PIPE_C] = PIPE_C_VBLANK,
> > > > >  	};
> > > > > +	int pri_flip_event = SKL_FLIP_EVENT(pipe, PLANE_PRIMARY);
> > > > >  	int event;
> > > > > +	u64 eventfd_signal_val = 0;
> > > > > +	static int no_pageflip_count;
> > > > >
> > > > >  	if (pipe < PIPE_A || pipe > PIPE_C)
> > > > >  		return;
> > > > > @@ -407,11 +414,26 @@ static void emulate_vblank_on_pipe(struct
> > > > intel_vgpu *vgpu, int pipe)
> > > > >  		if (!pipe_is_enabled(vgpu, pipe))
> > > > >  			continue;
> > > > >
> > > > > +		if (event == pri_flip_event)
> > > > > +			eventfd_signal_val |=
> > > > DISPLAY_PRI_REFRESH_EVENT_VAL;
> > > > > +
> > > > >  		intel_vgpu_trigger_virtual_event(vgpu, event);
> > > > >  	}
> > > > >
> > > > > +	if (eventfd_signal_val)
> > > > > +		no_pageflip_count = 0;
> > > > > +	else if (!eventfd_signal_val && no_pageflip_count >
> > > > PAGEFLIP_DELAY_THR)
> > > >
> > > > extra !eventfd_signal_val
> > > >
> > > > > +		eventfd_signal_val |= DISPLAY_PRI_REFRESH_EVENT_VAL;
> > >
> > > do you need reset the count to zero here?
> > >
> > > > > +	else
> > > > > +		no_pageflip_count++;
> > > >
> > > > no_pageflip_count should be per-vgpu instead of static.
> > > >
> > > > > +
> > > > > +	if (vgpu->vdev.vblank_trigger && !vgpu->vdev.display_event_mask &&
> > > > > +		eventfd_signal_val)
> > >
> > > is this mask per vGPU or per plane? If the latter, you need check specific
> bit
> > > here.
> > >
> > > > > +		eventfd_signal(vgpu->vdev.vblank_trigger, eventfd_signal_val);
> > > > > +
> > > > >  	if (pipe_is_enabled(vgpu, pipe)) {
> > > > >  		vgpu_vreg_t(vgpu, PIPE_FRMCOUNT_G4X(pipe))++;
> > > > > +
> > > >
> > > > extra line
> > > >
> > > > >  		intel_vgpu_trigger_virtual_event(vgpu, vblank_event[pipe]);
> > > > >  	}
> > > > >  }
> > > > > diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
> > > > > index cd29ea28d7ed..6c8ed030c30b 100644
> > > > > --- a/drivers/gpu/drm/i915/gvt/gvt.h
> > > > > +++ b/drivers/gpu/drm/i915/gvt/gvt.h
> > > > > @@ -205,6 +205,8 @@ struct intel_vgpu {
> > > > >  		int num_irqs;
> > > > >  		struct eventfd_ctx *intx_trigger;
> > > > >  		struct eventfd_ctx *msi_trigger;
> > > > > +		struct eventfd_ctx *vblank_trigger;
> > > > > +		u32 display_event_mask;
> > > > >
> > > > >  		/*
> > > > >  		 * Two caches are used to avoid mapping duplicated pages (eg.
> > > > > diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
> > > > > index fd1633342e53..9ace1f4ff9eb 100644
> > > > > --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> > > > > +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> > > > > @@ -1250,6 +1250,8 @@ static int intel_vgpu_get_irq_count(struct
> > > > intel_vgpu *vgpu, int type)
> > > > >  {
> > > > >  	if (type == VFIO_PCI_INTX_IRQ_INDEX || type ==
> > > > VFIO_PCI_MSI_IRQ_INDEX)
> > > > >  		return 1;
> > > > > +	else if (type < VFIO_PCI_NUM_IRQS + vgpu->vdev.num_irqs)
> > > > > +		return vgpu->vdev.irq[type - VFIO_PCI_NUM_IRQS].count;
> > > > >
> > > > >  	return 0;
> > > > >  }
> > > > > @@ -1297,7 +1299,60 @@ static int intel_vgpu_set_msi_trigger(struct intel_vgpu *vgpu,
> > > > >  	return 0;
> > > > >  }
> > > > >
> > > > > -static int intel_vgpu_set_irqs(struct intel_vgpu *vgpu, u32 flags,
> > > > > +static int intel_vgu_set_display_irq_mask(struct intel_vgpu *vgpu,
> > >
> > > vgu -> vgpu
> > >
> > > > > +		unsigned int index, unsigned int start, unsigned int count,
> > > > > +		u32 flags, void *data)
> > > > > +{
> > > > > +	if (start != 0 || count > 2)
> > > > > +		return -EINVAL;
> > > > > +
> > > > > +	if (flags & VFIO_IRQ_SET_DATA_NONE)
> > > > > +		vgpu->vdev.display_event_mask |= 1;
> > > >
> > > > see below..
> > > >
> > > > > +
> > > > > +	return 0;
> > > > > +}
> > > > > +
> > > > > +static int intel_vgu_set_display_irq_unmask(struct intel_vgpu *vgpu,
> > > > > +		unsigned int index, unsigned int start, unsigned int count,
> > > > > +		u32 flags, void *data)
> > > > > +{
> > > > > +	if (start != 0 || count > 2)
> > > > > +		return -EINVAL;
> > > > > +
> > > > > +	if (flags & VFIO_IRQ_SET_DATA_NONE)
> > > > > +		vgpu->vdev.display_event_mask &= 0;
> > > >
> > > > looks display_event_mask is used as flag for enable/disable, just write 1
> or
> > > 0?
> > >
> > > Do we plan to support per-plane mask in the future? If yes, then use bit
> > > operation but let's define the bit meaning explicitly now.,
> > >
> > > >
> > > >
> > > > > +
> > > > > +	return 0;
> > > > > +}
> > > > > +
> > > > > +static int intel_vgpu_set_display_event_trigger(struct intel_vgpu
> *vgpu,
> > > > > +		unsigned int index, unsigned int start, unsigned int count,
> > > > > +		u32 flags, void *data)
> > > > > +{
> > > > > +	struct eventfd_ctx *trigger;
> > > > > +
> > > > > +	if (flags & VFIO_IRQ_SET_DATA_EVENTFD) {
> > > > > +		int fd = *(int *)data;
> > > > > +
> > > > > +		trigger = eventfd_ctx_fdget(fd);
> > > > > +		if (IS_ERR(trigger)) {
> > > > > +			gvt_vgpu_err("eventfd_ctx_fdget failed\n");
> > > > > +			return PTR_ERR(trigger);
> > > > > +		}
> > > > > +		vgpu->vdev.vblank_trigger = trigger;
> > > > > +		vgpu->vdev.display_event_mask = 0;
> > > > > +	} else if ((flags & VFIO_IRQ_SET_DATA_NONE) && !count) {
> > > > > +		trigger = vgpu->vdev.vblank_trigger;
> > > > > +		if (trigger) {
> > > > > +			eventfd_ctx_put(trigger);
> > > > > +			vgpu->vdev.vblank_trigger = NULL;
> > > > > +		}
> > > > > +	}
> > > > > +
> > > > > +	return 0;
> > > > > +}
> > > > > +
> > > > > +int intel_vgpu_set_irqs(struct intel_vgpu *vgpu, u32 flags,
> > > > >  		unsigned int index, unsigned int start, unsigned int count,
> > > > >  		void *data)
> > > > >  {
> > > > > @@ -1330,6 +1385,35 @@ static int intel_vgpu_set_irqs(struct intel_vgpu *vgpu, u32 flags,
> > > > >  			break;
> > > > >  		}
> > > > >  		break;
> > > > > +	default:
> > > > > +	{
> > > > > +		int i;
> > > > > +
> > > > > +		if (index >= VFIO_PCI_NUM_IRQS +
> > > > > +					vgpu->vdev.num_irqs)
> > > > > +			return -EINVAL;
> > > > > +		index =
> > > > > +			array_index_nospec(index,
> > > > > +						VFIO_PCI_NUM_IRQS +
> > > > > +						vgpu->vdev.num_irqs);
> > > > > +
> > > > > +		i = index - VFIO_PCI_NUM_IRQS;
> > > > > +		if (vgpu->vdev.irq[i].type == VFIO_IRQ_TYPE_GFX &&
> > > > > +		    vgpu->vdev.irq[i].subtype ==
> > > > > +		    VFIO_IRQ_SUBTYPE_GFX_DISPLAY_IRQ) {
> > > > > +			switch (flags & VFIO_IRQ_SET_ACTION_TYPE_MASK) {
> > > > > +			case VFIO_IRQ_SET_ACTION_MASK:
> > > > > +				func = intel_vgu_set_display_irq_mask;
> > > > > +				break;
> > > > > +			case VFIO_IRQ_SET_ACTION_UNMASK:
> > > > > +				func = intel_vgu_set_display_irq_unmask;
> > > > > +				break;
> > > > > +			case VFIO_IRQ_SET_ACTION_TRIGGER:
> > > > > +				func = intel_vgpu_set_display_event_trigger;
> > > > > +				break;
> > > > > +			}
> > > > > +		}
> > > > > +	}
> > > > >  	}
> > > > >
> > > > >  	if (!func)
> > > > > @@ -1361,7 +1445,7 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
> > > > >  		info.flags |= VFIO_DEVICE_FLAGS_RESET;
> > > > >  		info.num_regions = VFIO_PCI_NUM_REGIONS +
> > > > >  				vgpu->vdev.num_regions;
> > > > > -		info.num_irqs = VFIO_PCI_NUM_IRQS;
> > > > > +		info.num_irqs = VFIO_PCI_NUM_IRQS + vgpu->vdev.num_irqs;
> > > > >
> > > > >  		return copy_to_user((void __user *)arg, &info, minsz) ?
> > > > >  			-EFAULT : 0;
> > > > > @@ -1521,32 +1605,88 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
> > > > >  			-EFAULT : 0;
> > > > >  	} else if (cmd == VFIO_DEVICE_GET_IRQ_INFO) {
> > > > >  		struct vfio_irq_info info;
> > > > > +		struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
> > > > > +		unsigned int i;
> > > > > +		int ret;
> > > > >
> > > > >  		minsz = offsetofend(struct vfio_irq_info, count);
> > > > >
> > > > >  		if (copy_from_user(&info, (void __user *)arg, minsz))
> > > > >  			return -EFAULT;
> > > > >
> > > > > -		if (info.argsz < minsz || info.index >= VFIO_PCI_NUM_IRQS)
> > > > > +		if (info.argsz < minsz)
> > > > >  			return -EINVAL;
> > > > >
> > > > >  		switch (info.index) {
> > > > >  		case VFIO_PCI_INTX_IRQ_INDEX:
> > > > >  		case VFIO_PCI_MSI_IRQ_INDEX:
> > > > > +			info.flags = VFIO_IRQ_INFO_EVENTFD;
> > > > >  			break;
> > > > > -		default:
> > > > > +		case VFIO_PCI_MSIX_IRQ_INDEX:
> > > > > +		case VFIO_PCI_ERR_IRQ_INDEX:
> > > > > +		case VFIO_PCI_REQ_IRQ_INDEX:
> > > > >  			return -EINVAL;
> > > > > -		}
> > > > > +		default:
> > > > > +		{
> > > > > +			struct vfio_irq_info_cap_type cap_type = {
> > > > > +				.header.id = VFIO_IRQ_INFO_CAP_TYPE,
> > > > > +				.header.version = 1 };
> > > > >
> > > > > -		info.flags = VFIO_IRQ_INFO_EVENTFD;
> > > > > +			if (info.index >= VFIO_PCI_NUM_IRQS +
> > > > > +					vgpu->vdev.num_irqs)
> > > > > +				return -EINVAL;
> > > > > +			info.index =
> > > > > +				array_index_nospec(info.index,
> > > > > +						VFIO_PCI_NUM_IRQS +
> > > > > +						vgpu->vdev.num_irqs);
> > > > > +
> > > > > +			i = info.index - VFIO_PCI_NUM_IRQS;
> > > > > +
> > > > > +			info.flags = vgpu->vdev.irq[i].flags;
> > > > > +			cap_type.type = vgpu->vdev.irq[i].type;
> > > > > +			cap_type.subtype = vgpu->vdev.irq[i].subtype;
> > > > > +
> > > > > +			ret = vfio_info_add_capability(&caps,
> > > > > +						&cap_type.header,
> > > > > +						sizeof(cap_type));
> > > > > +			if (ret)
> > > > > +				return ret;
> > > > > +
> > > > > +			if (vgpu->vdev.irq[i].ops->add_capability) {
> > > > > +				ret = vgpu->vdev.irq[i].ops->add_capability(vgpu, &caps);
> > > > > +				if (ret)
> > > > > +					return ret;
> > > > > +			}
> > > > > +		}
> > > > > +		}
> > > > >
> > > > >  		info.count = intel_vgpu_get_irq_count(vgpu, info.index);
> > > > >
> > > > >  		if (info.index == VFIO_PCI_INTX_IRQ_INDEX)
> > > > >  			info.flags |= (VFIO_IRQ_INFO_MASKABLE |
> > > > >  				       VFIO_IRQ_INFO_AUTOMASKED);
> > > > > -		else
> > > > > -			info.flags |= VFIO_IRQ_INFO_NORESIZE;
> > > > > +
> > > > > +		if (caps.size) {
> > > > > +			info.flags |= VFIO_IRQ_INFO_FLAG_CAPS;
> > > > > +			if (info.argsz < sizeof(info) + caps.size) {
> > > > > +				info.argsz = sizeof(info) + caps.size;
> > > > > +				info.cap_offset = 0;
> > > > > +			} else {
> > > > > +				vfio_info_cap_shift(&caps, sizeof(info));
> > > > > +				if (copy_to_user((void __user *)arg +
> > > > > +						  sizeof(info), caps.buf,
> > > > > +						  caps.size)) {
> > > > > +					kfree(caps.buf);
> > > > > +					return -EFAULT;
> > > > > +				}
> > > > > +				info.cap_offset = sizeof(info);
> > > > > +				if (offsetofend(struct vfio_irq_info, cap_offset) > minsz)
> > > > > +					minsz = offsetofend(struct vfio_irq_info, cap_offset);
> > > > > +			}
> > > > > +
> > > > > +			kfree(caps.buf);
> > > > > +		}
> > > > >
> > > > >  		return copy_to_user((void __user *)arg, &info, minsz) ?
> > > > >  			-EFAULT : 0;
> > > > > @@ -1565,7 +1705,8 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
> > > > >  			int max = intel_vgpu_get_irq_count(vgpu, hdr.index);
> > > > >
> > > > >  			ret = vfio_set_irqs_validate_and_prepare(&hdr, max,
> > > > > -						VFIO_PCI_NUM_IRQS, &data_size);
> > > > > +					VFIO_PCI_NUM_IRQS + vgpu->vdev.num_irqs,
> > > > > +					&data_size);
> > > > >  			if (ret) {
> > > > >  				gvt_vgpu_err("intel:vfio_set_irqs_validate_and_prepare failed\n");
> > > > >  				return -EINVAL;
> > > > > --
> > > > > 2.17.1
> > > > >
> > > > > _______________________________________________
> > > > > intel-gvt-dev mailing list
> > > > > intel-gvt-dev@lists.freedesktop.org
> > > > > https://lists.freedesktop.org/mailman/listinfo/intel-gvt-dev
> > > >
> > > > --
> > > > Open Source Technology Center, Intel ltd.
> > > >
> > > > $gpg --keyserver wwwkeys.pgp.net --recv-keys 4D781827

^ permalink raw reply	[flat|nested] 23+ messages in thread

* RE: [PATCH v5 0/6] Deliver vGPU display refresh event to userspace
  2019-08-16  2:35 [PATCH v5 0/6] Deliver vGPU display refresh event to userspace Tina Zhang
                   ` (5 preceding siblings ...)
  2019-08-16  2:35 ` [PATCH v5 6/6] drm/i915/gvt: Add cursor plane reg update trap emulation handler Tina Zhang
@ 2019-09-03  1:26 ` Zhang, Tina
  2019-09-03  7:43   ` Tian, Kevin
  2019-09-05  7:48   ` kraxel
  6 siblings, 2 replies; 23+ messages in thread
From: Zhang, Tina @ 2019-09-03  1:26 UTC (permalink / raw)
  To: intel-gvt-dev, kraxel, alex.williamson
  Cc: kvm, linux-kernel, Yuan, Hang, Lv, Zhiyuan

Hi,

Some people are asking whether the display refresh irq could instead be provided by the QEMU vfio display.

Some background: currently, we have two display timers. One is provided by the QEMU UI and the other by the vgpu. The vgpu display framebuffer consumers (i.e. QEMU UIs) rely on the UI timer to consume the contents of the vgpu display framebuffer, while the framebuffer producer (e.g. the gvt-g display model) updates the framebuffer content according to the vblank timer provided by the gpu vendor driver. Since these two timers are not synced, tearing can be noticed.

So, theoretically to solve this tearing problem, we could have two options:

Option 1: as in this patch set, stop the QEMU UI timer and let both the framebuffer consumers and the framebuffer producer sync to the display refresh event provided by the vendor driver. QEMU UI then refreshes on this display refresh event, which guarantees a tear-free framebuffer.

Option 2: QEMU provides an emulated display refresh event to the vgpus created by the vendor driver. For the vgpus, this display refresh event can be treated as the vblank event, which the guest window manager uses for plane updates and mode-setting.

People are asking if option 2 could be a better choice.

Thanks.

BR,
Tina

> -----Original Message-----
> From: Zhang, Tina
> Sent: Friday, August 16, 2019 10:35 AM
> To: intel-gvt-dev@lists.freedesktop.org
> Cc: Zhang, Tina <tina.zhang@intel.com>; kraxel@redhat.com;
> alex.williamson@redhat.com; kvm@vger.kernel.org; linux-
> kernel@vger.kernel.org; Yuan, Hang <hang.yuan@intel.com>; Lv, Zhiyuan
> <zhiyuan.lv@intel.com>
> Subject: [PATCH v5 0/6] Deliver vGPU display refresh event to userspace
> 
> This series tries to send the vGPU display refresh event to user land.
> 
> Instead of delivering page flip events only or vblank events only, we choose
> to combine two of them, i.e. post display refresh event at vblanks and skip
> some of them when no page flip happens. Vblanks as upper bound are safe
> and skipping no-page-flip vblanks guarantees both trivial performance
> impacts and good user experience without screen tearing. Plus, we have the
> mask/unmask mechanism providing user space flexibility to switch between
> event-notified refresh and classic timer-based refresh.
> 
> In addition, there are some cases that guest app only uses one framebuffer
> for both drawing and display. In such case, guest OS won't do the plane page
> flip when the framebuffer is updated, thus the user land won't be notified
> about the updated framebuffer. Hence, in single framebuffer case, we apply
> a heuristic to determine whether it is the case or not. If it is, notify user land
> when each vblank event triggers.
> 
> v5:
> - Introduce a vGPU display irq cap which can notify user space with
>   both primary and cursor plane update events through one eventfd. (Gerd &
> Alex)
> v4:
> - Deliver page flip event and single framebuffer refresh event bounded by
> display vblanks. (Kechen)
> v3:
> - Deliver display vblank event instead of page flip event. (Zhenyu)
> v2:
> - Use VFIO irq chain to get eventfds from userspace instead of adding a new
> ABI. (Alex)
> v1:
> - https://patchwork.kernel.org/cover/10962341/
> 
> Kechen Lu (2):
>   drm/i915/gvt: Deliver async primary plane page flip events at vblank
>   drm/i915/gvt: Add cursor plane reg update trap emulation handler
> 
> Tina Zhang (4):
>   vfio: Define device specific irq type capability
>   vfio: Introduce vGPU display irq type
>   drm/i915/gvt: Register vGPU display event irq
>   drm/i915/gvt: Deliver vGPU refresh event to userspace
> 
>  drivers/gpu/drm/i915/gvt/cmd_parser.c |   6 +-
>  drivers/gpu/drm/i915/gvt/display.c    |  49 +++++-
>  drivers/gpu/drm/i915/gvt/display.h    |   3 +
>  drivers/gpu/drm/i915/gvt/gvt.h        |   6 +
>  drivers/gpu/drm/i915/gvt/handlers.c   |  32 +++-
>  drivers/gpu/drm/i915/gvt/hypercall.h  |   1 +
>  drivers/gpu/drm/i915/gvt/interrupt.c  |   7 +
>  drivers/gpu/drm/i915/gvt/interrupt.h  |   3 +
>  drivers/gpu/drm/i915/gvt/kvmgt.c      | 230 +++++++++++++++++++++++++-
>  drivers/gpu/drm/i915/gvt/mpt.h        |  17 ++
>  include/uapi/linux/vfio.h             |  40 ++++-
>  11 files changed, 375 insertions(+), 19 deletions(-)
> 
> --
> 2.17.1


^ permalink raw reply	[flat|nested] 23+ messages in thread

* RE: [PATCH v5 0/6] Deliver vGPU display refresh event to userspace
  2019-09-03  1:26 ` [PATCH v5 0/6] Deliver vGPU display refresh event to userspace Zhang, Tina
@ 2019-09-03  7:43   ` Tian, Kevin
  2019-09-05  7:48   ` kraxel
  1 sibling, 0 replies; 23+ messages in thread
From: Tian, Kevin @ 2019-09-03  7:43 UTC (permalink / raw)
  To: Zhang, Tina, intel-gvt-dev, kraxel, alex.williamson
  Cc: Lv, Zhiyuan, linux-kernel, kvm, Yuan, Hang

> From: Zhang, Tina
> Sent: Tuesday, September 3, 2019 9:27 AM
> 
> Hi,
> 
> Some people are asking whether the display refresh irq could be provided by
> qemu vfio display?
> 
> Some background: currently, we have two display timers. One is provided by
> QEMU UI and the other one is provided by the vgpu. The vgpu display
> framebuffer consumers (i.e. QEMU UIs) depend on the UI timer to consume
> the contents in the vgpu display framebuffer, meanwhile the display
> framebuffer producer (e.g. gvt-g display model) updates the framebuffer
> content according to the vblank timer provided by gpu vendor driver. Since
> these two timers are not synced, tearing could be noticed.
> 
> So, theoretically to solve this tearing problem, we could have two options:
> 
> Option 1: Like what we have in this patch set: stop the QEMU UI timer and let
> both the framebuffer consumers and the framebuffer producer sync to the
> display refresh event provided by vendor driver. So, QEMU UI can do the
> refresh depending on this display refresh event to make sure to have a tear-
> free framebuffer.
> 
> Option 2: QEMU provides the emulated display refresh event to the vgpus
> provided by vendor driver. For vgpus, the display refresh event can be
> considered as the vblank event which is leveraged by guest window manager
> to do the plane update or mode-setting.
> 
> People are asking if option 2 could be a better choice.
> 

I think the answer depends on the usage; none is perfect in all
scenarios. The key is to find out which is the primary display showing
the guest framebuffer: that consumer cares about tearing and thus should
be the source of the vblank event:

1) Guest framebuffer is directly programmed to, and shown in the local
display. In such case, the physical vblank event should be the source of
virtual vblank event, i.e. kernel GVT-g driver should emulate the event
in host vblank event handler.

2) Guest framebuffer is shown in the Qemu UI. Then, naturally Qemu UI
should be the source of virtual vblank event, based on its own tearing
requirement:

	a) Guest framebuffer is shown in the local application window,
which is updated by the Qemu UI timer. Then virtual vblank should be
sent at the end of the UI timer handler, to make sure no change happens
while the guest framebuffer is composited as the source texture.

	b) Qemu UI may program guest framebuffer directly to the
physical plane, through whatever interface that kernel gfx driver provides.
In such case, Qemu UI should wait for vblank notification from kernel, 
and then trigger the virtual vblank event.

	c) Qemu UI may further route the guest framebuffer to remote
client, e.g. through vnc. Then virtual vblank event should be emulated
according to tearing requirement in vnc protocol.

3) combination of 1) and 2), where either local display or Qemu UI is
considered as primary display with the other for diagnostic purpose.
Then the tearing of the primary display should drive the emulation of
virtual vblank, while the other one may suffer from tearing issue.

Accordingly, we may want a kernel interface allowing user space
to specify the source of vblank emulation: in kernel or from user
space. If the former is specified, then virtual vblank is driven by 
physical vblank event. Otherwise, user space should trigger the
virtual vblank injection. Just leave all the decisions to user space. :-)

Thanks
Kevin
 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 0/6] Deliver vGPU display refresh event to userspace
  2019-09-03  1:26 ` [PATCH v5 0/6] Deliver vGPU display refresh event to userspace Zhang, Tina
  2019-09-03  7:43   ` Tian, Kevin
@ 2019-09-05  7:48   ` kraxel
  2019-09-17  9:27     ` Zhang, Tina
  1 sibling, 1 reply; 23+ messages in thread
From: kraxel @ 2019-09-05  7:48 UTC (permalink / raw)
  To: Zhang, Tina
  Cc: intel-gvt-dev, alex.williamson, kvm, linux-kernel, Yuan, Hang,
	Lv, Zhiyuan

  Hi,

> Option 2: QEMU provides the emulated display refresh event to the
> vgpus provided by vendor driver. For vgpus, the display refresh event
> can be considered as the vblank event which is leveraged by guest
> window manager to do the plane update or mode-setting.

> People are asking if option 2 could be a better choice.

Certainly worth trying, maybe it even makes sense to implement both and
let qemu pick one, possibly even switch them at runtime.

qemu can change the refresh rate.  vnc and sdl use that to reduce the
refresh rate in case nobody is looking (no vnc client connected, sdl
window minimized).  It surely makes sense to make that visible to the
guest so it can throttle display updates too.  I'm not sure vblank is
the way to go though, guests might run into vblank irq timeouts in case
the refresh rate is very low ...

cheers,
  Gerd


^ permalink raw reply	[flat|nested] 23+ messages in thread

* RE: [PATCH v5 0/6] Deliver vGPU display refresh event to userspace
  2019-09-05  7:48   ` kraxel
@ 2019-09-17  9:27     ` Zhang, Tina
  0 siblings, 0 replies; 23+ messages in thread
From: Zhang, Tina @ 2019-09-17  9:27 UTC (permalink / raw)
  To: kraxel, Tian, Kevin
  Cc: kvm, linux-kernel, Yuan, Hang, alex.williamson, Lv, Zhiyuan,
	intel-gvt-dev



> -----Original Message-----
> From: intel-gvt-dev [mailto:intel-gvt-dev-bounces@lists.freedesktop.org] On
> Behalf Of kraxel@redhat.com
> Sent: Thursday, September 5, 2019 3:49 PM
> To: Zhang, Tina <tina.zhang@intel.com>
> Cc: kvm@vger.kernel.org; linux-kernel@vger.kernel.org; Yuan, Hang
> <hang.yuan@intel.com>; alex.williamson@redhat.com; Lv, Zhiyuan
> <zhiyuan.lv@intel.com>; intel-gvt-dev@lists.freedesktop.org
> Subject: Re: [PATCH v5 0/6] Deliver vGPU display refresh event to userspace
> 
>   Hi,
> 
> > Option 2: QEMU provides the emulated display refresh event to the
> > vgpus provided by vendor driver. For vgpus, the display refresh event
> > can be considered as the vblank event which is leveraged by guest
> > window manager to do the plane update or mode-setting.
> 
> > People are asking if option 2 could be a better choice.
> 
> Certainly worth trying, maybe it even makes sense to implement both and
> let qemu pick one, possibly even switch them at runtime.
> 
> qemu can change the refresh rate.  vnc and sdl use that to reduce the
> refresh rate in case nobody is looking (no vnc client connected, sdl window
> minimized).  It surely makes sense to make that visible to the guest so it can
> throttle display updates too.  I'm not sure vblank is the way to go though,
> guests might run into vblank irq timeouts in case the refresh rate is very
> low ...

Indeed, a low vblank rate isn't expected by the guest gfx driver; it complains about timeout errors constantly when the vblank rate is low.

Currently, gvt-g provides a fully virtualized display model (i.e. not paravirtualized), and option 2 is more like a PV solution for performance optimization, which is of course a very good proposal. Since the two options have no dependency on each other, this patch set limits its scope to option 1 only. Thanks.


BR,
Tina

> 
> cheers,
>   Gerd
> 
> _______________________________________________
> intel-gvt-dev mailing list
> intel-gvt-dev@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gvt-dev

^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2019-09-17  9:27 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-08-16  2:35 [PATCH v5 0/6] Deliver vGPU display refresh event to userspace Tina Zhang
2019-08-16  2:35 ` [PATCH v5 1/6] vfio: Define device specific irq type capability Tina Zhang
2019-08-16 20:51   ` Alex Williamson
2019-08-20  0:56     ` Zhang, Tina
2019-08-16  2:35 ` [PATCH v5 2/6] vfio: Introduce vGPU display irq type Tina Zhang
2019-08-16 20:51   ` Alex Williamson
2019-08-20  2:12     ` Zhang, Tina
2019-08-20  7:20       ` kraxel
2019-08-20 15:03         ` Alex Williamson
2019-08-20 15:32       ` Alex Williamson
2019-08-16  2:35 ` [PATCH v5 3/6] drm/i915/gvt: Register vGPU display event irq Tina Zhang
2019-08-16  2:35 ` [PATCH v5 4/6] drm/i915/gvt: Deliver vGPU refresh event to userspace Tina Zhang
2019-08-26  7:55   ` Zhenyu Wang
2019-08-28  1:49     ` Tian, Kevin
2019-08-28  6:59       ` Zhang, Tina
2019-08-28  7:10         ` Tian, Kevin
2019-08-28  7:39           ` Zhang, Tina
2019-08-16  2:35 ` [PATCH v5 5/6] drm/i915/gvt: Deliver async primary plane page flip events at vblank Tina Zhang
2019-08-16  2:35 ` [PATCH v5 6/6] drm/i915/gvt: Add cursor plane reg update trap emulation handler Tina Zhang
2019-09-03  1:26 ` [PATCH v5 0/6] Deliver vGPU display refresh event to userspace Zhang, Tina
2019-09-03  7:43   ` Tian, Kevin
2019-09-05  7:48   ` kraxel
2019-09-17  9:27     ` Zhang, Tina
