* [PATCH v7 0/8] i915 vgpu PV to improve vgpu performance
@ 2019-07-08  1:35 Xiaolin Zhang
  2019-07-08  1:35 ` [PATCH v7 1/9] drm/i915: introduced vgpu pv capability Xiaolin Zhang
                   ` (12 more replies)
  0 siblings, 13 replies; 16+ messages in thread
From: Xiaolin Zhang @ 2019-07-08  1:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: intel-gvt-dev

To improve vGPU performance, the guest driver can implement PV optimizations
that reduce the number of MMIO access traps or eliminate certain pieces of HW
emulation, cutting down the VM exit/enter cost.

This patch set implements two PV optimizations based on a shared memory
region between the guest and GVTg used for data communication. The shared
memory region is allocated by the guest driver, and its guest physical
address is passed to GVTg through a PVINFO register; GVTg can then access
the region directly, without any trap cost, to exchange data with the guest.
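
As a rough sketch, the guest side of this setup flow (implemented in patch 2
of this series) looks like the following; error handling is dropped,
shared_area is the mapped PVINFO page and the other identifiers are the ones
introduced by the series:

        /* allocate the page shared between guest and GVTg */
        struct gvt_shared_page *base =
                (struct gvt_shared_page *)get_zeroed_page(GFP_KERNEL);

        /* tell GVTg the guest physical address of the page ... */
        writeq(__pa(base), shared_area + vgtif_offset(shared_page_gpa));
        /* ... and notify it so it can start reading the page directly */
        writeq(VGT_G2V_SHARED_PAGE_SETUP,
               shared_area + vgtif_offset(g2v_notify));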

This patch set implements two kinds of PV optimization, each controlled by
its own bit in the pv_caps PVINFO register:
1. workload PV submission (context submission): reduces 4 traps to 1 trap
and eliminates the execlists HW behaviour emulation.
2. PPGTT PV update: eliminates the cost of PPGTT write protection.

Based on experiments, for small workloads (specifically glxgears with
vblank_mode off) the average performance gain on a single vGPU is 30~50%.
For large workloads such as media and 3D, the average performance gain
is about 4%.

The same PV mechanism could enable further vGPU optimizations such as
global GTT update, display plane and watermark update.

v0: RFC patch set
v1: addressed RFC review comments
v2: addressed v1 review comments, added pv callbacks for pv operations
v3:
1. addressed v2 review comments, removed the pv callback code duplication
from v2 and unified pv calls under the g2v notification register; different
g2v pv notifications are defined.
2. dropped the pv master irq feature due to a hard conflict with recent i915
changes; it needs time to rework.
v4:
1. addressed v3 review comments.
2. extended workload PV submission by skipping execlists HW behaviour
emulation and context switch interrupt injection.
v5:
1. addressed v4 review comments from Chris for pv submission.
2. per-engine communication between PV guest and host.
v6:
1. addressed v5 review comment from Chris for pv submission.
2. addressed v5 review comment from Zhenyu for PV version support.
v7:
1. addressed v5 review comment from Chris: started to use pv cmd buffer
communication between the pv guest and GVT.

Xiaolin Zhang (9):
  drm/i915: introduced vgpu pv capability
  drm/i915: vgpu shared memory setup for pv optimization
  drm/i915: vgpu pv command buffer support
  drm/i915: vgpu ppgtt update pv optimization
  drm/i915: vgpu context submission pv optimization
  drm/i915/gvt: GVTg handle pv_caps PVINFO register
  drm/i915/gvt: GVTg handle shared_page setup
  drm/i915/gvt: GVTg support ppgtt pv optimization
  drm/i915/gvt: GVTg support context submission pv optimization

 drivers/gpu/drm/i915/Makefile              |   2 +-
 drivers/gpu/drm/i915/gt/intel_lrc.c        |   8 +-
 drivers/gpu/drm/i915/gvt/execlist.c        |   6 +
 drivers/gpu/drm/i915/gvt/gtt.c             | 295 ++++++++++++++++++++
 drivers/gpu/drm/i915/gvt/gtt.h             |  11 +
 drivers/gpu/drm/i915/gvt/gvt.h             |  12 +-
 drivers/gpu/drm/i915/gvt/handlers.c        | 181 ++++++++++++-
 drivers/gpu/drm/i915/gvt/vgpu.c            |  50 ++++
 drivers/gpu/drm/i915/i915_drv.h            |   6 +-
 drivers/gpu/drm/i915/i915_gem.c            |   3 +-
 drivers/gpu/drm/i915/i915_gem_gtt.c        |   9 +-
 drivers/gpu/drm/i915/i915_gem_gtt.h        |   8 +
 drivers/gpu/drm/i915/i915_pvinfo.h         |   9 +-
 drivers/gpu/drm/i915/i915_vgpu.c           | 414 ++++++++++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_vgpu.h           | 132 +++++++++
 drivers/gpu/drm/i915/intel_pv_submission.c | 189 +++++++++++++
 16 files changed, 1322 insertions(+), 13 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/intel_pv_submission.c

-- 
2.7.4


* [PATCH v7 1/9] drm/i915: introduced vgpu pv capability
  2019-07-08  1:35 [PATCH v7 0/8] i915 vgpu PV to improve vgpu performance Xiaolin Zhang
@ 2019-07-08  1:35 ` Xiaolin Zhang
  2019-07-08  1:35 ` [PATCH v7 2/9] drm/i915: vgpu shared memory setup for pv optimization Xiaolin Zhang
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Xiaolin Zhang @ 2019-07-08  1:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: intel-gvt-dev

PV capability for the vGPU is introduced via a new pv_caps field in struct
i915_virtual_gpu, and a matching pv_caps register is defined in struct
vgt_if on the host GVT side for vGPU PV optimization.

Both are used to control which individual PV optimizations are supported
and implemented by the guest and the host.

These fields default to zero, i.e. no PV feature is enabled.

This patch also adds the VGT_CAPS_PV capability bit so that the guest can
check whether GVTg supports the PV feature at all.

v0: RFC, introduced the enable_pvmmio module parameter.
v1: addressed RFC comment to remove the enable_pvmmio module parameter
in favour of a pv capability check.
v2: rebase.
v3: distinguished guest pv caps from host pv caps; renamed enable_pvmmio to
pvmmio_caps, which is used for the host pv caps.
v4: consolidated all pv related functions into a single file i915_vgpu.c
and renamed pvmmio to pv_caps.
v5: rebase.
v6: rebase.
v7: rebase.

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h    |  2 ++
 drivers/gpu/drm/i915/i915_pvinfo.h |  5 ++++-
 drivers/gpu/drm/i915/i915_vgpu.c   | 44 +++++++++++++++++++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_vgpu.h   |  9 ++++++++
 4 files changed, 58 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index e05bc3e..6d79867 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -93,6 +93,7 @@
 #include "i915_vma.h"
 
 #include "intel_gvt.h"
+#include "i915_pvinfo.h"
 
 /* General customization:
  */
@@ -1076,6 +1077,7 @@ struct i915_frontbuffer_tracking {
 struct i915_virtual_gpu {
 	bool active;
 	u32 caps;
+	u32 pv_caps;
 };
 
 /* used in computing the new watermarks state */
diff --git a/drivers/gpu/drm/i915/i915_pvinfo.h b/drivers/gpu/drm/i915/i915_pvinfo.h
index 683e97a..ad398b4 100644
--- a/drivers/gpu/drm/i915/i915_pvinfo.h
+++ b/drivers/gpu/drm/i915/i915_pvinfo.h
@@ -57,6 +57,7 @@ enum vgt_g2v_type {
 #define VGT_CAPS_FULL_PPGTT		BIT(2)
 #define VGT_CAPS_HWSP_EMULATION		BIT(3)
 #define VGT_CAPS_HUGE_GTT		BIT(4)
+#define VGT_CAPS_PV		BIT(5)
 
 struct vgt_if {
 	u64 magic;		/* VGT_MAGIC */
@@ -109,7 +110,9 @@ struct vgt_if {
 	u32 execlist_context_descriptor_lo;
 	u32 execlist_context_descriptor_hi;
 
-	u32  rsv7[0x200 - 24];    /* pad to one page */
+	u32 pv_caps;
+
+	u32  rsv7[0x200 - 25];    /* pad to one page */
 } __packed;
 
 #define vgtif_offset(x) (offsetof(struct vgt_if, x))
diff --git a/drivers/gpu/drm/i915/i915_vgpu.c b/drivers/gpu/drm/i915/i915_vgpu.c
index dbd1fa3..9b37dd1 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.c
+++ b/drivers/gpu/drm/i915/i915_vgpu.c
@@ -95,7 +95,14 @@ void i915_detect_vgpu(struct drm_i915_private *dev_priv)
 	dev_priv->vgpu.caps = readl(shared_area + vgtif_offset(vgt_caps));
 
 	dev_priv->vgpu.active = true;
-	DRM_INFO("Virtual GPU for Intel GVT-g detected.\n");
+
+	if (!intel_vgpu_check_pv_caps(dev_priv, shared_area)) {
+		DRM_INFO("Virtual GPU for Intel GVT-g detected.\n");
+		return;
+	}
+
+	DRM_INFO("Virtual GPU for Intel GVT-g detected with pv_caps 0x%x.\n",
+			dev_priv->vgpu.pv_caps);
 
 out:
 	pci_iounmap(pdev, shared_area);
@@ -297,3 +304,38 @@ int intel_vgt_balloon(struct i915_ggtt *ggtt)
 	DRM_ERROR("VGT balloon fail\n");
 	return ret;
 }
+
+/*
+ * i915 vgpu PV support for Linux
+ */
+
+/**
+ * intel_vgpu_check_pv_caps - detect virtual GPU PV capabilities
+ * @dev_priv: i915 device private
+ *
+ * This function is called at the initialization stage, to detect VGPU
+ * PV capabilities
+ *
+ * If guest wants to enable pv_caps, it needs to config it explicitly
+ * through vgt_if interface from gvt layer.
+ */
+bool intel_vgpu_check_pv_caps(struct drm_i915_private *dev_priv,
+		void __iomem *shared_area)
+{
+	u32 gvt_pvcaps;
+	u32 pvcaps = 0;
+
+	if (!intel_vgpu_has_pv_caps(dev_priv))
+		return false;
+
+	/* PV capability negotiation between PV guest and GVT */
+	gvt_pvcaps = readl(shared_area + vgtif_offset(pv_caps));
+	pvcaps = dev_priv->vgpu.pv_caps & gvt_pvcaps;
+	dev_priv->vgpu.pv_caps = pvcaps;
+	writel(pvcaps, shared_area + vgtif_offset(pv_caps));
+
+	if (!pvcaps)
+		return false;
+
+	return true;
+}
diff --git a/drivers/gpu/drm/i915/i915_vgpu.h b/drivers/gpu/drm/i915/i915_vgpu.h
index 8b3663d..bbe56b5 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.h
+++ b/drivers/gpu/drm/i915/i915_vgpu.h
@@ -43,7 +43,16 @@ intel_vgpu_has_huge_gtt(struct drm_i915_private *dev_priv)
 	return dev_priv->vgpu.caps & VGT_CAPS_HUGE_GTT;
 }
 
+static inline bool
+intel_vgpu_has_pv_caps(struct drm_i915_private *dev_priv)
+{
+	return dev_priv->vgpu.caps & VGT_CAPS_PV;
+}
+
 int intel_vgt_balloon(struct i915_ggtt *ggtt);
 void intel_vgt_deballoon(struct i915_ggtt *ggtt);
 
+/* i915 vgpu pv related functions */
+bool intel_vgpu_check_pv_caps(struct drm_i915_private *dev_priv,
+		void __iomem *shared_area);
 #endif /* _I915_VGPU_H_ */
-- 
2.7.4


* [PATCH v7 2/9] drm/i915: vgpu shared memory setup for pv optimization
  2019-07-08  1:35 [PATCH v7 0/8] i915 vgpu PV to improve vgpu performance Xiaolin Zhang
  2019-07-08  1:35 ` [PATCH v7 1/9] drm/i915: introduced vgpu pv capability Xiaolin Zhang
@ 2019-07-08  1:35 ` Xiaolin Zhang
  2019-07-08  1:35 ` [PATCH v7 3/9] drm/i915: vgpu pv command buffer support Xiaolin Zhang
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Xiaolin Zhang @ 2019-07-08  1:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: intel-gvt-dev

To enable vGPU PV features, we need to set up a shared memory page that is
directly accessed by both the guest and the backend i915 driver for data
exchange, avoiding emulation trap cost.

The guest i915 allocates this page and passes its guest physical address to
the backend i915 driver through a PVINFO register, so that the backend can
access the shared page without any trap cost, with the help of the
hypervisor's read-guest-gpa functionality.

The guest i915 sends a VGT_G2V_SHARED_PAGE_SETUP notification to host GVT
once the shared memory setup is finished.

The layout of the shared_page, which is used by the PV feature
implementations, is also defined in this patch.

v0: RFC.
v1: addressed RFC comment to move both shared_page_lock and shared_page
to i915_virtual_gpu structure.
v2: packed i915_virtual_gpu structure.
v3: added SHARED_PAGE_SETUP g2v notification for pv shared_page setup
v4: added intel_vgpu_setup_shared_page() in i915_vgpu_pv.c.
v5: per engine desc data in shared memory.
v6: added version support in shared memory (Zhenyu).
v7: rebased.

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h    |  4 +-
 drivers/gpu/drm/i915/i915_pvinfo.h |  5 ++-
 drivers/gpu/drm/i915/i915_vgpu.c   | 85 ++++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_vgpu.h   | 17 ++++++++
 4 files changed, 109 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 6d79867..27954b5 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1078,7 +1078,9 @@ struct i915_virtual_gpu {
 	bool active;
 	u32 caps;
 	u32 pv_caps;
-};
+
+	struct i915_virtual_gpu_pv *pv;
+} __packed;
 
 /* used in computing the new watermarks state */
 struct intel_wm_config {
diff --git a/drivers/gpu/drm/i915/i915_pvinfo.h b/drivers/gpu/drm/i915/i915_pvinfo.h
index ad398b4..3c63603 100644
--- a/drivers/gpu/drm/i915/i915_pvinfo.h
+++ b/drivers/gpu/drm/i915/i915_pvinfo.h
@@ -48,6 +48,7 @@ enum vgt_g2v_type {
 	VGT_G2V_PPGTT_L4_PAGE_TABLE_DESTROY,
 	VGT_G2V_EXECLIST_CONTEXT_CREATE,
 	VGT_G2V_EXECLIST_CONTEXT_DESTROY,
+	VGT_G2V_SHARED_PAGE_SETUP,
 	VGT_G2V_MAX,
 };
 
@@ -112,7 +113,9 @@ struct vgt_if {
 
 	u32 pv_caps;
 
-	u32  rsv7[0x200 - 25];    /* pad to one page */
+	u64 shared_page_gpa;
+
+	u32  rsv7[0x200 - 27];    /* pad to one page */
 } __packed;
 
 #define vgtif_offset(x) (offsetof(struct vgt_if, x))
diff --git a/drivers/gpu/drm/i915/i915_vgpu.c b/drivers/gpu/drm/i915/i915_vgpu.c
index 9b37dd1..940171a 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.c
+++ b/drivers/gpu/drm/i915/i915_vgpu.c
@@ -145,6 +145,7 @@ static void vgt_deballoon_space(struct i915_ggtt *ggtt,
  */
 void intel_vgt_deballoon(struct i915_ggtt *ggtt)
 {
+	struct drm_i915_private *i915 = ggtt->vm.i915;
 	int i;
 
 	if (!intel_vgpu_active(ggtt->vm.i915))
@@ -154,6 +155,9 @@ void intel_vgt_deballoon(struct i915_ggtt *ggtt)
 
 	for (i = 0; i < 4; i++)
 		vgt_deballoon_space(ggtt, &bl_info.space[i]);
+
+	if (i915->vgpu.pv)
+		free_page((unsigned long)i915->vgpu.pv->shared_page);
 }
 
 static int vgt_balloon_space(struct i915_ggtt *ggtt,
@@ -309,6 +313,81 @@ int intel_vgt_balloon(struct i915_ggtt *ggtt)
  * i915 vgpu PV support for Linux
  */
 
+/*
+ * shared_page setup for VGPU PV features
+ */
+static int intel_vgpu_setup_shared_page(struct drm_i915_private *dev_priv,
+		void __iomem *shared_area)
+{
+	void __iomem *addr;
+	struct i915_virtual_gpu_pv *pv;
+	struct gvt_shared_page *base;
+	u64 gpa;
+	u16 ver_maj, ver_min;
+
+	/* We allocate 1 page shared between guest and GVT for data exchange.
+	 *       ___________.....................
+	 *      |head       |                   |
+	 *      |___________|.................. PAGE/8
+	 *      |PV ELSP                        |
+	 *      :___________....................PAGE/4
+	 *      |desc (SEND)                    |
+	 *      |				|
+	 *      :_______________________________PAGE/2
+	 *      |cmds (SEND)                    |
+	 *      |                               |
+	 *      |                               |
+	 *      |                               |
+	 *      |                               |
+	 *      |_______________________________|
+	 *
+	 * 0 offset: PV version area
+	 * PAGE/8 offset: per engine workload submission data area
+	 * PAGE/4 offset: PV command buffer command descriptor area
+	 * PAGE/2 offset: PV command buffer command data area
+	 */
+
+	base =  (struct gvt_shared_page *)get_zeroed_page(GFP_KERNEL);
+	if (!base) {
+		DRM_INFO("out of memory for shared page memory\n");
+		return -ENOMEM;
+	}
+
+	/* pass guest memory pa address to GVT and then read back to verify */
+	gpa = __pa(base);
+	addr = shared_area + vgtif_offset(shared_page_gpa);
+	writeq(gpa, addr);
+	if (gpa != readq(addr)) {
+		DRM_INFO("vgpu: passed shared_page_gpa failed\n");
+		free_page((unsigned long)base);
+		return -EIO;
+	}
+
+	addr = shared_area + vgtif_offset(g2v_notify);
+	writeq(VGT_G2V_SHARED_PAGE_SETUP, addr);
+
+	ver_maj = base->ver_major;
+	ver_min = base->ver_minor;
+	DRM_INFO("vgpu PV ver major %d and minor %d\n", ver_maj, ver_min);
+	if (ver_maj != PV_MAJOR || ver_min != PV_MINOR) {
+		DRM_INFO("vgpu: PV version incompatible\n");
+		free_page((unsigned long)base);
+		return -EIO;
+	}
+
+	pv = kzalloc(sizeof(struct i915_virtual_gpu_pv), GFP_KERNEL);
+	if (!pv) {
+		free_page((unsigned long)base);
+		return -ENOMEM;
+	}
+
+	dev_priv->vgpu.pv = pv;
+	pv->shared_page = base;
+	pv->enabled = true;
+
+	return 0;
+}
+
 /**
  * intel_vgpu_check_pv_caps - detect virtual GPU PV capabilities
  * @dev_priv: i915 device private
@@ -337,5 +416,11 @@ bool intel_vgpu_check_pv_caps(struct drm_i915_private *dev_priv,
 	if (!pvcaps)
 		return false;
 
+	if (intel_vgpu_setup_shared_page(dev_priv, shared_area)) {
+		dev_priv->vgpu.pv_caps = 0;
+		writel(0, shared_area + vgtif_offset(pv_caps));
+		return false;
+	}
+
 	return true;
 }
diff --git a/drivers/gpu/drm/i915/i915_vgpu.h b/drivers/gpu/drm/i915/i915_vgpu.h
index bbe56b5..9b6a8d1 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.h
+++ b/drivers/gpu/drm/i915/i915_vgpu.h
@@ -27,6 +27,23 @@
 #include "i915_drv.h"
 #include "i915_pvinfo.h"
 
+#define PV_MAJOR		1
+#define PV_MINOR		0
+
+/*
+ * A shared page(4KB) between gvt and VM, could be allocated by guest driver
+ * or a fixed location in PCI bar 0 region
+ */
+struct gvt_shared_page {
+	u16 ver_major;
+	u16 ver_minor;
+};
+
+struct i915_virtual_gpu_pv {
+	struct gvt_shared_page *shared_page;
+	bool enabled;
+};
+
 void i915_detect_vgpu(struct drm_i915_private *dev_priv);
 
 bool intel_vgpu_has_full_ppgtt(struct drm_i915_private *dev_priv);
-- 
2.7.4


* [PATCH v7 3/9] drm/i915: vgpu pv command buffer support
  2019-07-08  1:35 [PATCH v7 0/8] i915 vgpu PV to improve vgpu performance Xiaolin Zhang
  2019-07-08  1:35 ` [PATCH v7 1/9] drm/i915: introduced vgpu pv capability Xiaolin Zhang
  2019-07-08  1:35 ` [PATCH v7 2/9] drm/i915: vgpu shared memory setup for pv optimization Xiaolin Zhang
@ 2019-07-08  1:35 ` Xiaolin Zhang
  2019-07-08  1:35 ` [PATCH v7 4/9] drm/i915: vgpu ppgtt update pv optimization Xiaolin Zhang
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Xiaolin Zhang @ 2019-07-08  1:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: intel-gvt-dev

Based on the shared memory setup between guest and GVT, this patch
introduces a simple PV command buffer ring used to perform single-direction
guest-to-GVT communication.
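
To illustrate how a caller uses the ring: an action code plus its payload is
packed into a dword array and handed to the send hook. This sketch mirrors
vgpu_ppgtt_pv_update() added later in patch 4 and is not extra code in this
patch:

        static int vgpu_ppgtt_pv_update(struct drm_i915_private *dev_priv,
                        u32 action, u64 pdp, u64 start, u64 length,
                        u32 cache_level)
        {
                u32 data[8];

                data[0] = action;              /* e.g. PV_ACTION_PPGTT_L4_INSERT */
                data[1] = lower_32_bits(pdp);  /* payload dwords follow */
                data[2] = upper_32_bits(pdp);
                data[3] = lower_32_bits(start);
                data[4] = upper_32_bits(start);
                data[5] = lower_32_bits(length);
                data[6] = upper_32_bits(length);
                data[7] = cache_level;

                /* writes header + fence + payload into the ring, notifies
                 * GVT via g2v_notify and waits for the descriptor fence
                 */
                return intel_vgpu_pv_send(dev_priv, data, ARRAY_SIZE(data));
        }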

v1: initial version, added to address a comment on the i915 PV v6 patch set.

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/i915_pvinfo.h |   1 +
 drivers/gpu/drm/i915/i915_vgpu.c   | 193 +++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_vgpu.h   |  66 +++++++++++++
 3 files changed, 260 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_pvinfo.h b/drivers/gpu/drm/i915/i915_pvinfo.h
index 3c63603..db9eebb 100644
--- a/drivers/gpu/drm/i915/i915_pvinfo.h
+++ b/drivers/gpu/drm/i915/i915_pvinfo.h
@@ -49,6 +49,7 @@ enum vgt_g2v_type {
 	VGT_G2V_EXECLIST_CONTEXT_CREATE,
 	VGT_G2V_EXECLIST_CONTEXT_DESTROY,
 	VGT_G2V_SHARED_PAGE_SETUP,
+	VGT_G2V_PV_SEND_TRIGGER,
 	VGT_G2V_MAX,
 };
 
diff --git a/drivers/gpu/drm/i915/i915_vgpu.c b/drivers/gpu/drm/i915/i915_vgpu.c
index 940171a..acbe3a0 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.c
+++ b/drivers/gpu/drm/i915/i915_vgpu.c
@@ -313,6 +313,187 @@ int intel_vgt_balloon(struct i915_ggtt *ggtt)
  * i915 vgpu PV support for Linux
  */
 
+/**
+ * wait_for_desc_update - Wait for the command buffer descriptor update.
+ * @desc:	buffer descriptor
+ * @fence:	response fence
+ * @status:	placeholder for status
+ *
+ * GVT will update command buffer descriptor with new fence and status
+ * after processing the command identified by the fence. Wait for
+ * specified fence and then read from the descriptor status of the
+ * command.
+ *
+ * Return:
+ * *	0 response received (status is valid)
+ * *	-ETIMEDOUT no response within hardcoded timeout
+ * *	-EPROTO no response, CT buffer is in error
+ */
+static int wait_for_desc_update(struct vgpu_pv_ct_buffer_desc *desc,
+		u32 fence, u32 *status)
+{
+	int err;
+
+#define done (READ_ONCE(desc->fence) == fence)
+	err = wait_for_us(done, 5);
+	if (err)
+		err = wait_for(done, 10);
+#undef done
+
+	if (unlikely(err)) {
+		DRM_ERROR("CT: fence %u failed; reported fence=%u\n",
+				fence, desc->fence);
+	}
+
+	*status = desc->status;
+	return err;
+}
+
+/**
+ * DOC: CTB Guest to GVT request
+ *
+ * Format of the CTB Guest to GVT request message is as follows::
+ *
+ *      +------------+---------+---------+---------+---------+
+ *      |   msg[0]   |   [1]   |   [2]   |   ...   |  [n-1]  |
+ *      +------------+---------+---------+---------+---------+
+ *      |   MESSAGE  |       MESSAGE PAYLOAD                 |
+ *      +   HEADER   +---------+---------+---------+---------+
+ *      |            |    0    |    1    |   ...   |    n    |
+ *      +============+=========+=========+=========+=========+
+ *      |  len >= 1  |  FENCE  |     request specific data   |
+ *      +------+-----+---------+---------+---------+---------+
+ *
+ *                   ^-----------------len-------------------^
+ */
+static int pv_command_buffer_write(struct i915_virtual_gpu_pv *pv,
+		const u32 *action, u32 len /* in dwords */, u32 fence)
+{
+	struct vgpu_pv_ct_buffer_desc *desc = pv->ctb.desc;
+	u32 head = desc->head / 4;	/* in dwords */
+	u32 tail = desc->tail / 4;	/* in dwords */
+	u32 size = desc->size / 4;	/* in dwords */
+	u32 used;			/* in dwords */
+	u32 header;
+	u32 *cmds = pv->ctb.cmds;
+	unsigned int i;
+
+	GEM_BUG_ON(desc->size % 4);
+	GEM_BUG_ON(desc->head % 4);
+	GEM_BUG_ON(desc->tail % 4);
+	GEM_BUG_ON(tail >= size);
+
+	/*
+	 * tail == head condition indicates empty.
+	 */
+	if (tail < head)
+		used = (size - head) + tail;
+	else
+		used = tail - head;
+
+	/* make sure there is a space including extra dw for the fence */
+	if (unlikely(used + len + 1 >= size))
+		return -ENOSPC;
+
+	/*
+	 * Write the message. The format is the following:
+	 * DW0: header (including action code)
+	 * DW1: fence
+	 * DW2+: action data
+	 */
+	header = (len << PV_CT_MSG_LEN_SHIFT) |
+		 (PV_CT_MSG_WRITE_FENCE_TO_DESC) |
+		 (action[0] << PV_CT_MSG_ACTION_SHIFT);
+
+	cmds[tail] = header;
+	tail = (tail + 1) % size;
+
+	cmds[tail] = fence;
+	tail = (tail + 1) % size;
+
+	for (i = 1; i < len; i++) {
+		cmds[tail] = action[i];
+		tail = (tail + 1) % size;
+	}
+
+	/* now update desc tail (back in bytes) */
+	desc->tail = tail * 4;
+	GEM_BUG_ON(desc->tail > desc->size);
+
+	return 0;
+}
+
+static u32 pv_get_next_fence(struct i915_virtual_gpu_pv *pv)
+{
+	/* For now it's trivial */
+	return ++pv->next_fence;
+}
+
+static int pv_send(struct drm_i915_private *dev_priv,
+		const u32 *action, u32 len, u32 *status)
+{
+	struct i915_virtual_gpu *vgpu = &dev_priv->vgpu;
+	struct i915_virtual_gpu_pv *pv = vgpu->pv;
+
+	struct vgpu_pv_ct_buffer_desc *desc = pv->ctb.desc;
+
+	u32 fence;
+	int err;
+
+	GEM_BUG_ON(!pv->enabled);
+	GEM_BUG_ON(!len);
+	GEM_BUG_ON(len & ~PV_CT_MSG_LEN_MASK);
+
+	fence = pv_get_next_fence(pv);
+	err = pv_command_buffer_write(pv, action, len, fence);
+	if (unlikely(err))
+		goto unlink;
+
+	intel_vgpu_pv_notify(dev_priv);
+
+	err = wait_for_desc_update(desc, fence, status);
+	if (unlikely(err))
+		goto unlink;
+
+	if ((*status)) {
+		err = -EIO;
+		goto unlink;
+	}
+
+	err = (*status);
+unlink:
+	return err;
+}
+
+static int intel_vgpu_pv_send_command_buffer(
+		struct drm_i915_private *dev_priv,
+		u32 *action, u32 len)
+{
+	struct i915_virtual_gpu *vgpu = &dev_priv->vgpu;
+
+	u32 status = ~0; /* undefined */
+	int ret;
+
+	mutex_lock(&vgpu->pv->send_mutex);
+
+	ret = pv_send(dev_priv, action, len, &status);
+	if (unlikely(ret < 0)) {
+		DRM_ERROR("PV: send action %#X failed; err=%d status=%#X\n",
+			  action[0], ret, status);
+	} else if (unlikely(ret)) {
+		DRM_ERROR("PV: send action %#x returned %d (%#x)\n",
+				action[0], ret, ret);
+	}
+
+	mutex_unlock(&vgpu->pv->send_mutex);
+	return ret;
+}
+
+static void intel_vgpu_pv_notify_mmio(struct drm_i915_private *dev_priv)
+{
+	I915_WRITE(vgtif_reg(g2v_notify), VGT_G2V_PV_SEND_TRIGGER);
+}
+
 /*
  * shared_page setup for VGPU PV features
  */
@@ -385,6 +566,18 @@ static int intel_vgpu_setup_shared_page(struct drm_i915_private *dev_priv,
 	pv->shared_page = base;
 	pv->enabled = true;
 
+	/* setup PV command buffer ptr */
+	pv->ctb.desc = (void *)base + PV_DESC_OFF;
+	pv->ctb.cmds = (void *)base + PV_CMD_OFF;
+
+	pv->ctb.desc->size = PAGE_SIZE/2;
+	pv->ctb.desc->addr = PV_CMD_OFF;
+
+	/* setup PV command buffer callback */
+	pv->send = intel_vgpu_pv_send_command_buffer;
+	pv->notify = intel_vgpu_pv_notify_mmio;
+	mutex_init(&pv->send_mutex);
+
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/i915/i915_vgpu.h b/drivers/gpu/drm/i915/i915_vgpu.h
index 9b6a8d1..24c49b7 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.h
+++ b/drivers/gpu/drm/i915/i915_vgpu.h
@@ -29,6 +29,8 @@
 
 #define PV_MAJOR		1
 #define PV_MINOR		0
+#define PV_DESC_OFF		(PAGE_SIZE/4)
+#define PV_CMD_OFF		(PAGE_SIZE/2)
 
 /*
  * A shared page(4KB) between gvt and VM, could be allocated by guest driver
@@ -39,9 +41,60 @@ struct gvt_shared_page {
 	u16 ver_minor;
 };
 
+/*
+ * Definition of the command transport message header (DW0)
+ *
+ * bit[4..0]	message len (in dwords)
+ * bit[7..5]	reserved
+ * bit[8]		write fence to desc
+ * bit[9..11]	reserved
+ * bit[31..16]	action code
+ */
+#define PV_CT_MSG_LEN_SHIFT				0
+#define PV_CT_MSG_LEN_MASK				0x1F
+#define PV_CT_MSG_WRITE_FENCE_TO_DESC	(1 << 8)
+#define PV_CT_MSG_ACTION_SHIFT			16
+#define PV_CT_MSG_ACTION_MASK			0xFFFF
+
+/* PV command transport buffer descriptor */
+struct vgpu_pv_ct_buffer_desc {
+	u32 addr;		/* gfx address */
+	u32 size;		/* size in bytes */
+	u32 head;		/* offset updated by GVT */
+	u32 tail;		/* offset updated by owner */
+
+	u32 fence;		/* fence updated by GVT */
+	u32 status;		/* status updated by GVT */
+} __packed;
+
+/** PV single command transport buffer.
+ *
+ * A single command transport buffer consists of two parts, the header
+ * record (command transport buffer descriptor) and the actual buffer which
+ * holds the commands.
+ *
+ * @desc: pointer to the buffer descriptor
+ * @cmds: pointer to the commands buffer
+ */
+struct vgpu_pv_ct_buffer {
+	struct vgpu_pv_ct_buffer_desc *desc;
+	u32 *cmds;
+};
+
 struct i915_virtual_gpu_pv {
 	struct gvt_shared_page *shared_page;
 	bool enabled;
+
+	/* PV command buffer support */
+	struct vgpu_pv_ct_buffer ctb;
+	u32 next_fence;
+
+	/* To serialize the vgpu PV send actions */
+	struct mutex send_mutex;
+
+	/* VGPU's PV specific send function */
+	int (*send)(struct drm_i915_private *dev_priv, u32 *data, u32 len);
+	void (*notify)(struct drm_i915_private *dev_priv);
 };
 
 void i915_detect_vgpu(struct drm_i915_private *dev_priv);
@@ -66,6 +119,19 @@ intel_vgpu_has_pv_caps(struct drm_i915_private *dev_priv)
 	return dev_priv->vgpu.caps & VGT_CAPS_PV;
 }
 
+static inline void
+intel_vgpu_pv_notify(struct drm_i915_private *dev_priv)
+{
+	dev_priv->vgpu.pv->notify(dev_priv);
+}
+
+static inline int
+intel_vgpu_pv_send(struct drm_i915_private *dev_priv,
+		u32 *action, u32 len)
+{
+	return dev_priv->vgpu.pv->send(dev_priv, action, len);
+}
+
 int intel_vgt_balloon(struct i915_ggtt *ggtt);
 void intel_vgt_deballoon(struct i915_ggtt *ggtt);
 
-- 
2.7.4


* [PATCH v7 4/9] drm/i915: vgpu ppgtt update pv optimization
  2019-07-08  1:35 [PATCH v7 0/8] i915 vgpu PV to improve vgpu performance Xiaolin Zhang
                   ` (2 preceding siblings ...)
  2019-07-08  1:35 ` [PATCH v7 3/9] drm/i915: vgpu pv command buffer support Xiaolin Zhang
@ 2019-07-08  1:35 ` Xiaolin Zhang
  2019-07-08  1:35 ` [PATCH v7 5/9] drm/i915: vgpu context submission " Xiaolin Zhang
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Xiaolin Zhang @ 2019-07-08  1:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: intel-gvt-dev

This patch extends the vGPU PPGTT g2v notification to notify the host
GVT-g of PPGTT updates from the guest, including alloc_4lvl, clear_4lvl
and insert_4lvl.

These updates use the shared memory page to pass struct pv_ppgtt_update
from guest to GVT, which is used for the PV optimization implementation
on the host GVT side.

This patch also adds a new pv_caps level to control PPGTT updates.

Use PV_PPGTT_UPDATE to control this level of PV optimization.

v0: RFC.
v1: rebased.
v2: added pv callbacks for vm.{allocate_va_range, insert_entries,
clear_range} within ppgtt.
v3: rebased, disabled huge page ppgtt support when using PVMMIO ppgtt
update due to complexity and performance impact.
v4: moved alloc/insert/clear_4lvl pv callbacks into i915_vgpu_pv.c and
added a single intel_vgpu_config_pv_caps() for vgpu pv callbacks setup.
v5: rebase.
v6: rebase.
v7: rebase.

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/i915_gem.c     |  3 +-
 drivers/gpu/drm/i915/i915_gem_gtt.c |  9 +++--
 drivers/gpu/drm/i915/i915_gem_gtt.h |  8 ++++
 drivers/gpu/drm/i915/i915_vgpu.c    | 79 +++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_vgpu.h    | 25 ++++++++++++
 5 files changed, 120 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 7ade42b..de306e3 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1421,7 +1421,8 @@ int i915_gem_init(struct drm_i915_private *dev_priv)
 	int ret;
 
 	/* We need to fallback to 4K pages if host doesn't support huge gtt. */
-	if (intel_vgpu_active(dev_priv) && !intel_vgpu_has_huge_gtt(dev_priv))
+	if ((intel_vgpu_active(dev_priv) && !intel_vgpu_has_huge_gtt(dev_priv))
+		|| intel_vgpu_enabled_pv_caps(dev_priv, PV_PPGTT_UPDATE))
 		mkwrite_device_info(dev_priv)->page_sizes =
 			I915_GTT_PAGE_SIZE_4K;
 
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 236c964..d6224f9 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -910,7 +910,7 @@ static void gen8_ppgtt_clear_3lvl(struct i915_address_space *vm,
  * This is the top-level structure in 4-level page tables used on gen8+.
  * Empty entries are always scratch pml4e.
  */
-static void gen8_ppgtt_clear_4lvl(struct i915_address_space *vm,
+void gen8_ppgtt_clear_4lvl(struct i915_address_space *vm,
 				  u64 start, u64 length)
 {
 	struct i915_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
@@ -1150,7 +1150,7 @@ static void gen8_ppgtt_insert_huge_entries(struct i915_vma *vma,
 	} while (iter->sg);
 }
 
-static void gen8_ppgtt_insert_4lvl(struct i915_address_space *vm,
+void gen8_ppgtt_insert_4lvl(struct i915_address_space *vm,
 				   struct i915_vma *vma,
 				   enum i915_cache_level cache_level,
 				   u32 flags)
@@ -1466,7 +1466,7 @@ static int gen8_ppgtt_alloc_3lvl(struct i915_address_space *vm,
 				    i915_vm_to_ppgtt(vm)->pd, start, length);
 }
 
-static int gen8_ppgtt_alloc_4lvl(struct i915_address_space *vm,
+int gen8_ppgtt_alloc_4lvl(struct i915_address_space *vm,
 				 u64 start, u64 length)
 {
 	struct i915_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
@@ -1656,6 +1656,9 @@ static struct i915_ppgtt *gen8_ppgtt_create(struct drm_i915_private *i915)
 		ppgtt->vm.allocate_va_range = gen8_ppgtt_alloc_4lvl;
 		ppgtt->vm.insert_entries = gen8_ppgtt_insert_4lvl;
 		ppgtt->vm.clear_range = gen8_ppgtt_clear_4lvl;
+
+		if (intel_vgpu_active(i915))
+			intel_vgpu_config_pv_caps(i915, PV_PPGTT_UPDATE, ppgtt);
 	} else {
 		if (intel_vgpu_active(i915)) {
 			err = gen8_preallocate_top_level_pdp(ppgtt);
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index 57a68ef..e19e66a 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -641,6 +641,14 @@ int gen6_ppgtt_pin(struct i915_ppgtt *base);
 void gen6_ppgtt_unpin(struct i915_ppgtt *base);
 void gen6_ppgtt_unpin_all(struct i915_ppgtt *base);
 
+void gen8_ppgtt_clear_4lvl(struct i915_address_space *vm,
+		u64 start, u64 length);
+void gen8_ppgtt_insert_4lvl(struct i915_address_space *vm,
+		struct i915_vma *vma,
+		enum i915_cache_level cache_level, u32 flags);
+int gen8_ppgtt_alloc_4lvl(struct i915_address_space *vm,
+		u64 start, u64 length);
+
 void i915_gem_suspend_gtt_mappings(struct drm_i915_private *dev_priv);
 void i915_gem_restore_gtt_mappings(struct drm_i915_private *dev_priv);
 
diff --git a/drivers/gpu/drm/i915/i915_vgpu.c b/drivers/gpu/drm/i915/i915_vgpu.c
index acbe3a0..2aad0b8 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.c
+++ b/drivers/gpu/drm/i915/i915_vgpu.c
@@ -96,6 +96,9 @@ void i915_detect_vgpu(struct drm_i915_private *dev_priv)
 
 	dev_priv->vgpu.active = true;
 
+	/* guest driver PV capability */
+	dev_priv->vgpu.pv_caps = PV_PPGTT_UPDATE;
+
 	if (!intel_vgpu_check_pv_caps(dev_priv, shared_area)) {
 		DRM_INFO("Virtual GPU for Intel GVT-g detected.\n");
 		return;
@@ -313,6 +316,82 @@ int intel_vgt_balloon(struct i915_ggtt *ggtt)
  * i915 vgpu PV support for Linux
  */
 
+static int vgpu_ppgtt_pv_update(struct drm_i915_private *dev_priv,
+		u32 action, u64 pdp, u64 start, u64 length, u32 cache_level)
+{
+	u32 data[8];
+
+	data[0] = action;
+	data[1] = lower_32_bits(pdp);
+	data[2] = upper_32_bits(pdp);
+	data[3] = lower_32_bits(start);
+	data[4] = upper_32_bits(start);
+	data[5] = lower_32_bits(length);
+	data[6] = upper_32_bits(length);
+	data[7] = cache_level;
+
+	return intel_vgpu_pv_send(dev_priv, data, ARRAY_SIZE(data));
+}
+
+static void gen8_ppgtt_clear_4lvl_pv(struct i915_address_space *vm,
+		u64 start, u64 length)
+{
+	struct i915_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
+	struct drm_i915_private *dev_priv = vm->i915;
+
+	gen8_ppgtt_clear_4lvl(vm, start, length);
+	vgpu_ppgtt_pv_update(dev_priv, PV_ACTION_PPGTT_L4_CLEAR,
+		px_dma(ppgtt->pd), start, length, 0);
+}
+
+static void gen8_ppgtt_insert_4lvl_pv(struct i915_address_space *vm,
+		struct i915_vma *vma,
+		enum i915_cache_level cache_level, u32 flags)
+{
+	struct i915_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
+	struct drm_i915_private *dev_priv = vm->i915;
+	u64 start = vma->node.start;
+	u64 length = vma->node.size;
+
+	gen8_ppgtt_insert_4lvl(vm, vma, cache_level, flags);
+	vgpu_ppgtt_pv_update(dev_priv, PV_ACTION_PPGTT_L4_INSERT,
+		px_dma(ppgtt->pd), start, length, cache_level);
+}
+
+static int gen8_ppgtt_alloc_4lvl_pv(struct i915_address_space *vm,
+		u64 start, u64 length)
+{
+	struct i915_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
+	struct drm_i915_private *dev_priv = vm->i915;
+	int ret;
+
+	ret = gen8_ppgtt_alloc_4lvl(vm, start, length);
+	if (ret)
+		return ret;
+
+	return vgpu_ppgtt_pv_update(dev_priv, PV_ACTION_PPGTT_L4_ALLOC,
+		px_dma(ppgtt->pd), start, length, 0);
+}
+
+/*
+ * config guest driver PV ops for different PV features
+ */
+void intel_vgpu_config_pv_caps(struct drm_i915_private *dev_priv,
+		enum pv_caps cap, void *data)
+{
+	struct i915_ppgtt *ppgtt;
+
+	if (!intel_vgpu_enabled_pv_caps(dev_priv, cap))
+		return;
+
+	if (cap == PV_PPGTT_UPDATE) {
+		ppgtt = (struct i915_ppgtt *)data;
+		ppgtt->vm.allocate_va_range = gen8_ppgtt_alloc_4lvl_pv;
+		ppgtt->vm.insert_entries = gen8_ppgtt_insert_4lvl_pv;
+		ppgtt->vm.clear_range = gen8_ppgtt_clear_4lvl_pv;
+	}
+}
+
 /**
  * wait_for_desc_update - Wait for the command buffer descriptor update.
  * @desc:	buffer descriptor
diff --git a/drivers/gpu/drm/i915/i915_vgpu.h b/drivers/gpu/drm/i915/i915_vgpu.h
index 24c49b7..92c84d3 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.h
+++ b/drivers/gpu/drm/i915/i915_vgpu.h
@@ -33,6 +33,21 @@
 #define PV_CMD_OFF		(PAGE_SIZE/2)
 
 /*
+ * define different capabilities of PV optimization
+ */
+enum pv_caps {
+	PV_PPGTT_UPDATE = 0x1,
+};
+
+/* PV actions */
+enum intel_vgpu_pv_action {
+	PV_ACTION_DEFAULT = 0x0,
+	PV_ACTION_PPGTT_L4_ALLOC,
+	PV_ACTION_PPGTT_L4_CLEAR,
+	PV_ACTION_PPGTT_L4_INSERT,
+};
+
+/*
  * A shared page(4KB) between gvt and VM, could be allocated by guest driver
  * or a fixed location in PCI bar 0 region
  */
@@ -119,6 +134,14 @@ intel_vgpu_has_pv_caps(struct drm_i915_private *dev_priv)
 	return dev_priv->vgpu.caps & VGT_CAPS_PV;
 }
 
+static inline bool
+intel_vgpu_enabled_pv_caps(struct drm_i915_private *dev_priv,
+		enum pv_caps cap)
+{
+	return (dev_priv->vgpu.active) && intel_vgpu_has_pv_caps(dev_priv)
+			&& (dev_priv->vgpu.pv_caps & cap);
+}
+
 static inline void
 intel_vgpu_pv_notify(struct drm_i915_private *dev_priv)
 {
@@ -138,4 +161,6 @@ void intel_vgt_deballoon(struct i915_ggtt *ggtt);
 /* i915 vgpu pv related functions */
 bool intel_vgpu_check_pv_caps(struct drm_i915_private *dev_priv,
 		void __iomem *shared_area);
+void intel_vgpu_config_pv_caps(struct drm_i915_private *dev_priv,
+		enum pv_caps cap, void *data);
 #endif /* _I915_VGPU_H_ */
-- 
2.7.4


* [PATCH v7 5/9] drm/i915: vgpu context submission pv optimization
  2019-07-08  1:35 [PATCH v7 0/8] i915 vgpu PV to improve vgpu performance Xiaolin Zhang
                   ` (3 preceding siblings ...)
  2019-07-08  1:35 ` [PATCH v7 4/9] drm/i915: vgpu ppgtt update pv optimization Xiaolin Zhang
@ 2019-07-08  1:35 ` Xiaolin Zhang
  2019-07-08 10:40   ` Chris Wilson
  2019-07-08  1:35 ` [PATCH v7 6/9] drm/i915/gvt: GVTg handle pv_caps PVINFO register Xiaolin Zhang
                   ` (7 subsequent siblings)
  12 siblings, 1 reply; 16+ messages in thread
From: Xiaolin Zhang @ 2019-07-08  1:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: intel-gvt-dev

This is a performance optimization that overrides the actual submission
backend in order to eliminate execlists CSB processing and reduce the
number of MMIO traps for workload submission, without a context switch
interrupt, by talking to GVT via a PV submission notification mechanism
between guest and GVT.

Use PV_SUBMISSION to control this level of PV optimization.
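
Per engine, the fast path boils down to filling the engine's slot in the
shared page and issuing a single trapping write; a condensed view of the
pv_submit() path added below (rq is the request being submitted, pv the
per-device PV state):

        struct pv_submission *slot = pv->pv_elsp[engine->hw_id];

        slot->descs[0] = execlists_update_context(rq);  /* context descriptor */
        writel(PV_ACTION_ELSP_SUBMISSION, execlists->submit_reg); /* one trap */

        /* GVT sets slot->submitted once it has consumed the descriptors */
        wait_for_us(READ_ONCE(slot->submitted), 1);
        slot->submitted = false;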

v0: RFC.
v1: rebase.
v2: added pv ops for pv context submission. To maximize code reuse,
introduced 2 more ops (submit_ports & preempt_context) instead of 1 op
(set_default_submission) in the engine structure. pv versions of
submit_ports and preempt_context implemented.
v3:
1. to reduce more code duplication, refactored the code and replaced the 2
ops "submit_ports & preempt_context" from v2 with 1 op "write_desc"
in the engine structure. pv version of write_desc implemented.
2. added VGT_G2V_ELSP_SUBMIT for g2v pv notification.
v4: implemented the pv elsp submission tasklet as the backend workload
submission by talking to GVT with the PV notification mechanism and renamed
VGT_G2V_ELSP_SUBMIT to VGT_G2V_PV_SUBMISIION.
v5: addressed v4 comments from Chris, intel_pv_submission.c added.
v6: addressed v5 comments from Chris, replaced engine id by hw_id.
v7: rebase.

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/Makefile              |   2 +-
 drivers/gpu/drm/i915/gt/intel_lrc.c        |   8 +-
 drivers/gpu/drm/i915/i915_vgpu.c           |  15 ++-
 drivers/gpu/drm/i915/i915_vgpu.h           |  15 +++
 drivers/gpu/drm/i915/intel_pv_submission.c | 189 +++++++++++++++++++++++++++++
 5 files changed, 225 insertions(+), 4 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/intel_pv_submission.c

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 5266dbe..6e13f7c 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -244,7 +244,7 @@ i915-$(CONFIG_DRM_I915_SELFTEST) += \
 	selftests/igt_spinner.o
 
 # virtual gpu code
-i915-y += i915_vgpu.o
+i915-y += i915_vgpu.o intel_pv_submission.o
 
 ifeq ($(CONFIG_DRM_I915_GVT),y)
 i915-y += intel_gvt.o
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index e1ae139..48a9b28 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -2702,10 +2702,14 @@ void intel_execlists_set_default_submission(struct intel_engine_cs *engine)
 	engine->unpark = NULL;
 
 	engine->flags |= I915_ENGINE_SUPPORTS_STATS;
-	if (!intel_vgpu_active(engine->i915))
-		engine->flags |= I915_ENGINE_HAS_SEMAPHORES;
+	engine->flags |= I915_ENGINE_HAS_SEMAPHORES;
 	if (HAS_LOGICAL_RING_PREEMPTION(engine->i915))
 		engine->flags |= I915_ENGINE_HAS_PREEMPTION;
+
+	if (intel_vgpu_active(engine->i915)) {
+		engine->flags &= ~I915_ENGINE_HAS_SEMAPHORES;
+		intel_vgpu_config_pv_caps(engine->i915,	PV_SUBMISSION, engine);
+	}
 }
 
 static void execlists_destroy(struct intel_engine_cs *engine)
diff --git a/drivers/gpu/drm/i915/i915_vgpu.c b/drivers/gpu/drm/i915/i915_vgpu.c
index 2aad0b8..c628be8 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.c
+++ b/drivers/gpu/drm/i915/i915_vgpu.c
@@ -97,7 +97,7 @@ void i915_detect_vgpu(struct drm_i915_private *dev_priv)
 	dev_priv->vgpu.active = true;
 
 	/* guest driver PV capability */
-	dev_priv->vgpu.pv_caps = PV_PPGTT_UPDATE;
+	dev_priv->vgpu.pv_caps = PV_PPGTT_UPDATE | PV_SUBMISSION;
 
 	if (!intel_vgpu_check_pv_caps(dev_priv, shared_area)) {
 		DRM_INFO("Virtual GPU for Intel GVT-g detected.\n");
@@ -380,6 +380,7 @@ void intel_vgpu_config_pv_caps(struct drm_i915_private *dev_priv,
 		enum pv_caps cap, void *data)
 {
 	struct i915_ppgtt *ppgtt;
+	struct intel_engine_cs *engine;
 
 	if (!intel_vgpu_enabled_pv_caps(dev_priv, cap))
 		return;
@@ -390,6 +391,11 @@ void intel_vgpu_config_pv_caps(struct drm_i915_private *dev_priv,
 		ppgtt->vm.insert_entries = gen8_ppgtt_insert_4lvl_pv;
 		ppgtt->vm.clear_range = gen8_ppgtt_clear_4lvl_pv;
 	}
+
+	if (cap == PV_SUBMISSION) {
+		engine = (struct intel_engine_cs *)data;
+		vgpu_set_pv_submission(engine);
+	}
 }
 
 /**
@@ -584,6 +590,8 @@ static int intel_vgpu_setup_shared_page(struct drm_i915_private *dev_priv,
 	struct gvt_shared_page *base;
 	u64 gpa;
 	u16 ver_maj, ver_min;
+	int i;
+	u32 size;
 
 	/* We allocate 1 page shared between guest and GVT for data exchange.
 	 *       ___________.....................
@@ -657,6 +665,11 @@ static int intel_vgpu_setup_shared_page(struct drm_i915_private *dev_priv,
 	pv->notify = intel_vgpu_pv_notify_mmio;
 	mutex_init(&pv->send_mutex);
 
+	/* setup PV per engine data exchange ptr */
+	size = sizeof(struct pv_submission);
+	for (i = 0; i < PV_MAX_ENGINES_NUM; i++)
+		pv->pv_elsp[i] = (void *)base + PV_ELSP_OFF +  size * i;
+
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/i915/i915_vgpu.h b/drivers/gpu/drm/i915/i915_vgpu.h
index 92c84d3..2cafb30 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.h
+++ b/drivers/gpu/drm/i915/i915_vgpu.h
@@ -29,6 +29,8 @@
 
 #define PV_MAJOR		1
 #define PV_MINOR		0
+#define PV_MAX_ENGINES_NUM	(VECS1_HW + 1)
+#define PV_ELSP_OFF		(PAGE_SIZE/8)
 #define PV_DESC_OFF		(PAGE_SIZE/4)
 #define PV_CMD_OFF		(PAGE_SIZE/2)
 
@@ -37,6 +39,7 @@
  */
 enum pv_caps {
 	PV_PPGTT_UPDATE = 0x1,
+	PV_SUBMISSION = 0x2,
 };
 
 /* PV actions */
@@ -45,6 +48,7 @@ enum intel_vgpu_pv_action {
 	PV_ACTION_PPGTT_L4_ALLOC,
 	PV_ACTION_PPGTT_L4_CLEAR,
 	PV_ACTION_PPGTT_L4_INSERT,
+	PV_ACTION_ELSP_SUBMISSION,
 };
 
 /*
@@ -56,6 +60,12 @@ struct gvt_shared_page {
 	u16 ver_minor;
 };
 
+/* workload submission pv support data structure */
+struct pv_submission {
+	u64 descs[EXECLIST_MAX_PORTS];
+	bool submitted;
+};
+
 /*
  * Definition of the command transport message header (DW0)
  *
@@ -100,6 +110,9 @@ struct i915_virtual_gpu_pv {
 	struct gvt_shared_page *shared_page;
 	bool enabled;
 
+	/* per engine PV workload submission data */
+	struct pv_submission *pv_elsp[PV_MAX_ENGINES_NUM];
+
 	/* PV command buffer support */
 	struct vgpu_pv_ct_buffer ctb;
 	u32 next_fence;
@@ -163,4 +176,6 @@ bool intel_vgpu_check_pv_caps(struct drm_i915_private *dev_priv,
 		void __iomem *shared_area);
 void intel_vgpu_config_pv_caps(struct drm_i915_private *dev_priv,
 		enum pv_caps cap, void *data);
+void vgpu_set_pv_submission(struct intel_engine_cs *engine);
+
 #endif /* _I915_VGPU_H_ */
diff --git a/drivers/gpu/drm/i915/intel_pv_submission.c b/drivers/gpu/drm/i915/intel_pv_submission.c
new file mode 100644
index 0000000..18b491b
--- /dev/null
+++ b/drivers/gpu/drm/i915/intel_pv_submission.c
@@ -0,0 +1,189 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2018 Intel Corporation
+ */
+
+#include "intel_drv.h"
+#include "i915_vgpu.h"
+#include "gt/intel_lrc_reg.h"
+#include "gt/intel_engine_pm.h"
+
+static u64 execlists_update_context(struct i915_request *rq)
+{
+	struct intel_context *ce = rq->hw_context;
+	u32 *reg_state = ce->lrc_reg_state;
+
+	reg_state[CTX_RING_TAIL+1] = intel_ring_set_tail(rq->ring, rq->tail);
+
+	return ce->lrc_desc;
+}
+
+static inline struct i915_priolist *to_priolist(struct rb_node *rb)
+{
+	return rb_entry(rb, struct i915_priolist, node);
+}
+
+static void pv_submit(struct intel_engine_cs *engine,
+	       struct i915_request **out,
+	       struct i915_request **end)
+{
+	struct intel_engine_execlists * const execlists = &engine->execlists;
+	struct i915_virtual_gpu_pv *pv = engine->i915->vgpu.pv;
+	struct pv_submission *pv_elsp = pv->pv_elsp[engine->hw_id];
+	struct i915_request *rq;
+
+	u64 descs[2];
+	int n, err;
+
+	memset(descs, 0, sizeof(descs));
+	n = 0;
+	do {
+		rq = *out++;
+		descs[n++] = execlists_update_context(rq);
+	} while (out != end);
+
+	for (n = 0; n < execlists_num_ports(execlists); n++)
+		pv_elsp->descs[n] = descs[n];
+
+	writel(PV_ACTION_ELSP_SUBMISSION, execlists->submit_reg);
+
+#define done (READ_ONCE(pv_elsp->submitted) == true)
+	err = wait_for_us(done, 1);
+	if (err)
+		err = wait_for(done, 1);
+#undef done
+
+	if (unlikely(err))
+		DRM_ERROR("PV (%s) workload submission failed\n", engine->name);
+
+	pv_elsp->submitted = false;
+}
+
+static struct i915_request *schedule_in(struct i915_request *rq, int idx)
+{
+	trace_i915_request_in(rq, idx);
+
+	if (!rq->hw_context->inflight)
+		rq->hw_context->inflight = rq->engine;
+	intel_context_inflight_inc(rq->hw_context);
+
+	return i915_request_get(rq);
+}
+
+static void pv_dequeue(struct intel_engine_cs *engine)
+{
+	struct intel_engine_execlists * const execlists = &engine->execlists;
+	struct i915_request **first = execlists->inflight;
+	struct i915_request *last = first[0];
+	struct i915_request **port;
+	bool submit = false;
+	struct rb_node *rb;
+
+	lockdep_assert_held(&engine->active.lock);
+
+	if (last)
+		return;
+
+	port = first;
+	while ((rb = rb_first_cached(&execlists->queue))) {
+		struct i915_priolist *p = to_priolist(rb);
+		struct i915_request *rq, *rn;
+		int i;
+
+		priolist_for_each_request_consume(rq, rn, p, i) {
+			if (last && rq->hw_context != last->hw_context)
+				goto done;
+
+			list_del_init(&rq->sched.link);
+			__i915_request_submit(rq);
+			submit = true;
+			last = rq;
+		}
+
+		rb_erase_cached(&p->node, &execlists->queue);
+		i915_priolist_free(p);
+	}
+done:
+	execlists->queue_priority_hint =
+		rb ? to_priolist(rb)->priority : INT_MIN;
+	if (submit) {
+		*port = schedule_in(last, port - execlists->inflight);
+		*++port = NULL;
+		pv_submit(engine, first, port);
+	}
+	execlists->active = execlists->inflight;
+}
+
+static void schedule_out(struct i915_request *rq)
+{
+	trace_i915_request_out(rq);
+
+	intel_context_inflight_dec(rq->hw_context);
+	if (!intel_context_inflight_count(rq->hw_context))
+		rq->hw_context->inflight = NULL;
+
+	i915_request_put(rq);
+}
+
+
+static void vgpu_pv_submission_tasklet(unsigned long data)
+{
+	struct intel_engine_cs * const engine = (struct intel_engine_cs *)data;
+	struct intel_engine_execlists * const execlists = &engine->execlists;
+	struct i915_request **port, *rq;
+	unsigned long flags;
+
+	spin_lock_irqsave(&engine->active.lock, flags);
+
+	for (port = execlists->inflight; (rq = *port); port++) {
+		if (!i915_request_completed(rq))
+			break;
+
+		schedule_out(rq);
+	}
+
+	if (port != execlists->inflight) {
+		int idx = port - execlists->inflight;
+		int rem = ARRAY_SIZE(execlists->inflight) - idx;
+
+		memmove(execlists->inflight, port, rem * sizeof(*port));
+	}
+
+	if (!rq)
+		pv_dequeue(engine);
+
+	spin_unlock_irqrestore(&engine->active.lock, flags);
+}
+
+static void vgpu_pv_submission_park(struct intel_engine_cs *engine)
+{
+	intel_engine_park(engine);
+	intel_engine_unpin_breadcrumbs_irq(engine);
+	engine->flags &= ~I915_ENGINE_NEEDS_BREADCRUMB_TASKLET;
+}
+
+static void vgpu_pv_submission_unpark(struct intel_engine_cs *engine)
+{
+	engine->flags |= I915_ENGINE_NEEDS_BREADCRUMB_TASKLET;
+	intel_engine_pin_breadcrumbs_irq(engine);
+}
+
+void vgpu_set_pv_submission(struct intel_engine_cs *engine)
+{
+	/*
+	 * We inherit a bunch of functions from execlists that we'd like
+	 * to keep using:
+	 *
+	 *    engine->submit_request = execlists_submit_request;
+	 *    engine->cancel_requests = execlists_cancel_requests;
+	 *    engine->schedule = execlists_schedule;
+	 *
+	 * But we need to override the actual submission backend in order
+	 * to talk to the GVT with PV notification message.
+	 */
+
+	engine->execlists.tasklet.func = vgpu_pv_submission_tasklet;
+
+	engine->park = vgpu_pv_submission_park;
+	engine->unpark = vgpu_pv_submission_unpark;
+}
-- 
2.7.4


* [PATCH v7 6/9] drm/i915/gvt: GVTg handle pv_caps PVINFO register
  2019-07-08  1:35 [PATCH v7 0/8] i915 vgpu PV to improve vgpu performance Xiaolin Zhang
                   ` (4 preceding siblings ...)
  2019-07-08  1:35 ` [PATCH v7 5/9] drm/i915: vgpu context submission " Xiaolin Zhang
@ 2019-07-08  1:35 ` Xiaolin Zhang
  2019-07-08  1:35 ` [PATCH v7 7/9] drm/i915/gvt: GVTg handle shared_page setup Xiaolin Zhang
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Xiaolin Zhang @ 2019-07-08  1:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: intel-gvt-dev

Implement the pv_caps PVINFO register handler in GVTg to control the
different levels of PV optimization within the guest.

Report the VGT_CAPS_PV capability in the PVINFO page for the guest.

v0: RFC.
v1: rebase.
v2: rebase.
v3: renamed enable_pvmmio to pvmmio_caps which is used for host
pv caps.
v4: renamed pvmmio_caps to pv_caps.
v5: rebase.
v6: rebase.
v7: rebase.

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/gvt/handlers.c | 4 ++++
 drivers/gpu/drm/i915/gvt/vgpu.c     | 3 +++
 2 files changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
index 25f7819..ded97b3 100644
--- a/drivers/gpu/drm/i915/gvt/handlers.c
+++ b/drivers/gpu/drm/i915/gvt/handlers.c
@@ -1194,6 +1194,7 @@ static int pvinfo_mmio_read(struct intel_vgpu *vgpu, unsigned int offset,
 		break;
 	case 0x78010:	/* vgt_caps */
 	case 0x7881c:
+	case _vgtif_reg(pv_caps):
 		break;
 	default:
 		invalid_read = true;
@@ -1264,6 +1265,9 @@ static int pvinfo_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
 	case _vgtif_reg(g2v_notify):
 		handle_g2v_notification(vgpu, data);
 		break;
+	case _vgtif_reg(pv_caps):
+		DRM_INFO("vgpu id=%d pv caps =0x%x\n", vgpu->id, data);
+		break;
 	/* add xhot and yhot to handled list to avoid error log */
 	case _vgtif_reg(cursor_x_hot):
 	case _vgtif_reg(cursor_y_hot):
diff --git a/drivers/gpu/drm/i915/gvt/vgpu.c b/drivers/gpu/drm/i915/gvt/vgpu.c
index 44ce3c2..3ecc45a 100644
--- a/drivers/gpu/drm/i915/gvt/vgpu.c
+++ b/drivers/gpu/drm/i915/gvt/vgpu.c
@@ -47,6 +47,7 @@ void populate_pvinfo_page(struct intel_vgpu *vgpu)
 	vgpu_vreg_t(vgpu, vgtif_reg(vgt_caps)) = VGT_CAPS_FULL_PPGTT;
 	vgpu_vreg_t(vgpu, vgtif_reg(vgt_caps)) |= VGT_CAPS_HWSP_EMULATION;
 	vgpu_vreg_t(vgpu, vgtif_reg(vgt_caps)) |= VGT_CAPS_HUGE_GTT;
+	vgpu_vreg_t(vgpu, vgtif_reg(vgt_caps)) |= VGT_CAPS_PV;
 
 	vgpu_vreg_t(vgpu, vgtif_reg(avail_rs.mappable_gmadr.base)) =
 		vgpu_aperture_gmadr_base(vgpu);
@@ -531,6 +532,7 @@ void intel_gvt_reset_vgpu_locked(struct intel_vgpu *vgpu, bool dmlr,
 	struct intel_gvt *gvt = vgpu->gvt;
 	struct intel_gvt_workload_scheduler *scheduler = &gvt->scheduler;
 	intel_engine_mask_t resetting_eng = dmlr ? ALL_ENGINES : engine_mask;
+	int pv_caps = vgpu_vreg_t(vgpu, vgtif_reg(pv_caps));
 
 	gvt_dbg_core("------------------------------------------\n");
 	gvt_dbg_core("resseting vgpu%d, dmlr %d, engine_mask %08x\n",
@@ -562,6 +564,7 @@ void intel_gvt_reset_vgpu_locked(struct intel_vgpu *vgpu, bool dmlr,
 
 		intel_vgpu_reset_mmio(vgpu, dmlr);
 		populate_pvinfo_page(vgpu);
+		vgpu_vreg_t(vgpu, vgtif_reg(pv_caps)) = pv_caps;
 		intel_vgpu_reset_display(vgpu);
 
 		if (dmlr) {
-- 
2.7.4


* [PATCH v7 7/9] drm/i915/gvt: GVTg handle shared_page setup
  2019-07-08  1:35 [PATCH v7 0/8] i915 vgpu PV to improve vgpu performance Xiaolin Zhang
                   ` (5 preceding siblings ...)
  2019-07-08  1:35 ` [PATCH v7 6/9] drm/i915/gvt: GVTg handle pv_caps PVINFO register Xiaolin Zhang
@ 2019-07-08  1:35 ` Xiaolin Zhang
  2019-07-08  1:35 ` [PATCH v7 8/9] drm/i915/gvt: GVTg support ppgtt pv optimization Xiaolin Zhang
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Xiaolin Zhang @ 2019-07-08  1:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: intel-gvt-dev

GVTg implements the shared_page setup operation and shared page read/write
helpers based on intel_gvt_hypervisor_read_gpa()/write_gpa().

The shared_page_gpa is passed from the guest driver through the PVINFO
shared_page_gpa register.
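
With these helpers, GVT code can pull PV data out of the guest page by
offset. As an illustration only (the real consumers come with the later PV
optimization patches), fetching one engine's submission block using the
struct pv_submission layout and PV_ELSP_OFF offset from the guest side could
look like:

        /* illustration only, not part of this patch */
        static void read_engine_elsp(struct intel_vgpu *vgpu, int ring_id,
                                     struct pv_submission *sub)
        {
                u32 offset = PV_ELSP_OFF + ring_id * sizeof(*sub);

                /* on failure the helper logs an error and zeroes *sub */
                intel_gvt_read_shared_page(vgpu, offset, sub, sizeof(*sub));
        }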

v0: RFC.
v1: rebase.
v2: rebase.
v3: added a shared_page_gpa check; on read_gpa failure, return zeroed
memory; handle the VGT_G2V_SHARED_PAGE_SETUP g2v notification.
v4: rebase.
v5: rebase.
v6: rebase, added PV version support.
v7: rebase.

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/gvt/gvt.h      |  8 ++++++-
 drivers/gpu/drm/i915/gvt/handlers.c | 20 +++++++++++++++++
 drivers/gpu/drm/i915/gvt/vgpu.c     | 43 +++++++++++++++++++++++++++++++++++++
 3 files changed, 70 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
index 7a1fe44..f7f4b03 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.h
+++ b/drivers/gpu/drm/i915/gvt/gvt.h
@@ -49,6 +49,7 @@
 #include "fb_decoder.h"
 #include "dmabuf.h"
 #include "page_track.h"
+#include "i915_vgpu.h"
 
 #define GVT_MAX_VGPU 8
 
@@ -229,6 +230,8 @@ struct intel_vgpu {
 	struct completion vblank_done;
 
 	u32 scan_nonprivbb;
+	u64 shared_page_gpa;
+	bool shared_page_enabled;
 };
 
 /* validating GM healthy status*/
@@ -686,7 +689,10 @@ int intel_gvt_debugfs_add_vgpu(struct intel_vgpu *vgpu);
 void intel_gvt_debugfs_remove_vgpu(struct intel_vgpu *vgpu);
 int intel_gvt_debugfs_init(struct intel_gvt *gvt);
 void intel_gvt_debugfs_clean(struct intel_gvt *gvt);
-
+int intel_gvt_read_shared_page(struct intel_vgpu *vgpu,
+		unsigned int offset, void *buf, unsigned long len);
+int intel_gvt_write_shared_page(struct intel_vgpu *vgpu,
+		unsigned int offset, void *buf, unsigned long len);
 
 #include "trace.h"
 #include "mpt.h"
diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
index ded97b3..4115fd0 100644
--- a/drivers/gpu/drm/i915/gvt/handlers.c
+++ b/drivers/gpu/drm/i915/gvt/handlers.c
@@ -1195,6 +1195,8 @@ static int pvinfo_mmio_read(struct intel_vgpu *vgpu, unsigned int offset,
 	case 0x78010:	/* vgt_caps */
 	case 0x7881c:
 	case _vgtif_reg(pv_caps):
+	case _vgtif_reg(shared_page_gpa):
+	case _vgtif_reg(shared_page_gpa) + 4:
 		break;
 	default:
 		invalid_read = true;
@@ -1212,6 +1214,9 @@ static int handle_g2v_notification(struct intel_vgpu *vgpu, int notification)
 	enum intel_gvt_gtt_type root_entry_type = GTT_TYPE_PPGTT_ROOT_L4_ENTRY;
 	struct intel_vgpu_mm *mm;
 	u64 *pdps;
+	unsigned long gpa, gfn;
+	u16 ver_major = PV_MAJOR;
+	u16 ver_minor = PV_MINOR;
 
 	pdps = (u64 *)&vgpu_vreg64_t(vgpu, vgtif_reg(pdp[0]));
 
@@ -1225,6 +1230,19 @@ static int handle_g2v_notification(struct intel_vgpu *vgpu, int notification)
 	case VGT_G2V_PPGTT_L3_PAGE_TABLE_DESTROY:
 	case VGT_G2V_PPGTT_L4_PAGE_TABLE_DESTROY:
 		return intel_vgpu_put_ppgtt_mm(vgpu, pdps);
+	case VGT_G2V_SHARED_PAGE_SETUP:
+		gpa = vgpu_vreg64_t(vgpu, vgtif_reg(shared_page_gpa));
+		gfn = gpa >> PAGE_SHIFT;
+		if (!intel_gvt_hypervisor_is_valid_gfn(vgpu, gfn)) {
+			vgpu_vreg_t(vgpu, vgtif_reg(pv_caps)) = 0;
+			return 0;
+		}
+		vgpu->shared_page_gpa = gpa;
+		vgpu->shared_page_enabled = true;
+
+		intel_gvt_write_shared_page(vgpu, 0, &ver_major, 2);
+		intel_gvt_write_shared_page(vgpu, 2, &ver_minor, 2);
+		break;
 	case VGT_G2V_EXECLIST_CONTEXT_CREATE:
 	case VGT_G2V_EXECLIST_CONTEXT_DESTROY:
 	case 1:	/* Remove this in guest driver. */
@@ -1281,6 +1299,8 @@ static int pvinfo_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
 	case _vgtif_reg(pdp[3].hi):
 	case _vgtif_reg(execlist_context_descriptor_lo):
 	case _vgtif_reg(execlist_context_descriptor_hi):
+	case _vgtif_reg(shared_page_gpa):
+	case _vgtif_reg(shared_page_gpa) + 4:
 		break;
 	case _vgtif_reg(rsv5[0])..._vgtif_reg(rsv5[3]):
 		invalid_write = true;
diff --git a/drivers/gpu/drm/i915/gvt/vgpu.c b/drivers/gpu/drm/i915/gvt/vgpu.c
index 3ecc45a..151f271 100644
--- a/drivers/gpu/drm/i915/gvt/vgpu.c
+++ b/drivers/gpu/drm/i915/gvt/vgpu.c
@@ -63,6 +63,8 @@ void populate_pvinfo_page(struct intel_vgpu *vgpu)
 	vgpu_vreg_t(vgpu, vgtif_reg(cursor_x_hot)) = UINT_MAX;
 	vgpu_vreg_t(vgpu, vgtif_reg(cursor_y_hot)) = UINT_MAX;
 
+	vgpu_vreg64_t(vgpu, vgtif_reg(shared_page_gpa)) = 0;
+
 	gvt_dbg_core("Populate PVINFO PAGE for vGPU %d\n", vgpu->id);
 	gvt_dbg_core("aperture base [GMADR] 0x%llx size 0x%llx\n",
 		vgpu_aperture_gmadr_base(vgpu), vgpu_aperture_sz(vgpu));
@@ -593,3 +595,44 @@ void intel_gvt_reset_vgpu(struct intel_vgpu *vgpu)
 	intel_gvt_reset_vgpu_locked(vgpu, true, 0);
 	mutex_unlock(&vgpu->vgpu_lock);
 }
+
+/**
+ * intel_gvt_read_shared_page - read content from shared page
+ */
+int intel_gvt_read_shared_page(struct intel_vgpu *vgpu,
+		unsigned int offset, void *buf, unsigned long len)
+{
+	int ret = -EINVAL;
+	unsigned long gpa;
+
+	if (offset >= PAGE_SIZE)
+		goto err;
+
+	gpa = vgpu->shared_page_gpa + offset;
+	ret = intel_gvt_hypervisor_read_gpa(vgpu, gpa, buf, len);
+	if (!ret)
+		return ret;
+err:
+	gvt_vgpu_err("read shared page (offset %x) failed", offset);
+	memset(buf, 0, len);
+	return ret;
+}
+
+int intel_gvt_write_shared_page(struct intel_vgpu *vgpu,
+		unsigned int offset, void *buf, unsigned long len)
+{
+	int ret = -EINVAL;
+	unsigned long gpa;
+
+	if (offset >= PAGE_SIZE)
+		goto err;
+
+	gpa = vgpu->shared_page_gpa + offset;
+	ret = intel_gvt_hypervisor_write_gpa(vgpu, gpa, buf, len);
+	if (!ret)
+		return ret;
+err:
+	gvt_vgpu_err("write shared page (offset %x) failed", offset);
+	memset(buf, 0, len);
+	return ret;
+}
-- 
2.7.4

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v7 8/9] drm/i915/gvt: GVTg support ppgtt pv optimization
  2019-07-08  1:35 [PATCH v7 0/8] i915 vgpu PV to improve vgpu performance Xiaolin Zhang
                   ` (6 preceding siblings ...)
  2019-07-08  1:35 ` [PATCH v7 7/9] drm/i915/gvt: GVTg handle shared_page setup Xiaolin Zhang
@ 2019-07-08  1:35 ` Xiaolin Zhang
  2019-07-08  1:35 ` [PATCH v7 9/9] drm/i915/gvt: GVTg support context submission " Xiaolin Zhang
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Xiaolin Zhang @ 2019-07-08  1:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: intel-gvt-dev

This patch handles ppgtt updates from the g2v notification.

It reads out the ppgtt pte entries from the guest pte table pages and
converts them to host pfns.

It creates local ppgtt tables and inserts the content pages into the
local ppgtt tables directly, which does not track the usage of the
guest page tables and removes the cost of write protection from the
original shadow page mechanism.
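
For context, below is a sketch of the matching guest-side request. It
is illustrative: the exact dword packing of the payload is an
assumption, and the real sender (vgpu_ppgtt_pv_update() in the earlier
"drm/i915: vgpu ppgtt update pv optimization" patch) may differ. The
range parameters are packed into a pv_ppgtt_update-shaped payload and
pushed through the PV command buffer, which GVTg dispatches to
intel_vgpu_handle_pv_ppgtt_update() below.

/* illustrative sketch; payload mirrors struct pv_ppgtt_update */
static int pv_ppgtt_l4_insert(struct drm_i915_private *i915,
		u64 pdp, u64 start, u64 length, u32 cache_level)
{
	u32 action[] = {
		PV_ACTION_PPGTT_L4_INSERT,
		lower_32_bits(pdp), upper_32_bits(pdp),
		lower_32_bits(start), upper_32_bits(start),
		lower_32_bits(length), upper_32_bits(length),
		cache_level,
	};

	/*
	 * Queued into the shared-page command buffer; a single
	 * VGT_G2V_PV_SEND_TRIGGER notification then wakes up GVTg.
	 */
	return intel_vgpu_pv_send(i915, action, ARRAY_SIZE(action));
}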

v0: RFC.
v1: rebase.
v2: rebase.
v3: report the pv ppgtt cap to the guest.
v4: renamed VGPU_PVMMIO to VGPU_PVCAP for naming consistency; no PV
support if gfx VT-d is enabled.
v5: rebase.
v6: rebase.
v7: added command transport buffer support.

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/gvt/gtt.c      | 295 ++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/gvt/gtt.h      |  11 ++
 drivers/gpu/drm/i915/gvt/gvt.h      |   4 +
 drivers/gpu/drm/i915/gvt/handlers.c | 127 +++++++++++++++-
 drivers/gpu/drm/i915/gvt/vgpu.c     |   2 +
 5 files changed, 438 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gvt/gtt.c b/drivers/gpu/drm/i915/gvt/gtt.c
index 53115bd..85c42db 100644
--- a/drivers/gpu/drm/i915/gvt/gtt.c
+++ b/drivers/gpu/drm/i915/gvt/gtt.c
@@ -1771,6 +1771,25 @@ static int ppgtt_handle_guest_write_page_table_bytes(
 	return 0;
 }
 
+static void invalidate_mm_pv(struct intel_vgpu_mm *mm)
+{
+	struct intel_vgpu *vgpu = mm->vgpu;
+	struct intel_gvt *gvt = vgpu->gvt;
+	struct intel_gvt_gtt *gtt = &gvt->gtt;
+	struct intel_gvt_gtt_pte_ops *ops = gtt->pte_ops;
+	struct intel_gvt_gtt_entry se;
+
+	i915_vm_put(&mm->ppgtt->vm);
+
+	ppgtt_get_shadow_root_entry(mm, &se, 0);
+	if (!ops->test_present(&se))
+		return;
+	se.val64 = 0;
+	ppgtt_set_shadow_root_entry(mm, &se, 0);
+
+	mm->ppgtt_mm.shadowed  = false;
+}
+
 static void invalidate_ppgtt_mm(struct intel_vgpu_mm *mm)
 {
 	struct intel_vgpu *vgpu = mm->vgpu;
@@ -1783,6 +1802,11 @@ static void invalidate_ppgtt_mm(struct intel_vgpu_mm *mm)
 	if (!mm->ppgtt_mm.shadowed)
 		return;
 
+	if (VGPU_PVCAP(mm->vgpu, PV_PPGTT_UPDATE)) {
+		invalidate_mm_pv(mm);
+		return;
+	}
+
 	for (index = 0; index < ARRAY_SIZE(mm->ppgtt_mm.shadow_pdps); index++) {
 		ppgtt_get_shadow_root_entry(mm, &se, index);
 
@@ -1800,6 +1824,26 @@ static void invalidate_ppgtt_mm(struct intel_vgpu_mm *mm)
 	mm->ppgtt_mm.shadowed = false;
 }
 
+static int shadow_mm_pv(struct intel_vgpu_mm *mm)
+{
+	struct intel_vgpu *vgpu = mm->vgpu;
+	struct intel_gvt *gvt = vgpu->gvt;
+	struct intel_gvt_gtt_entry se;
+
+	mm->ppgtt = i915_ppgtt_create(gvt->dev_priv);
+	if (IS_ERR(mm->ppgtt)) {
+		gvt_vgpu_err("fail to create ppgtt for pdp 0x%llx\n",
+				px_dma(mm->ppgtt->pd));
+		return PTR_ERR(mm->ppgtt);
+	}
+
+	se.type = GTT_TYPE_PPGTT_ROOT_L4_ENTRY;
+	se.val64 = px_dma(mm->ppgtt->pd);
+	ppgtt_set_shadow_root_entry(mm, &se, 0);
+	mm->ppgtt_mm.shadowed  = true;
+
+	return 0;
+}
 
 static int shadow_ppgtt_mm(struct intel_vgpu_mm *mm)
 {
@@ -1814,6 +1858,9 @@ static int shadow_ppgtt_mm(struct intel_vgpu_mm *mm)
 	if (mm->ppgtt_mm.shadowed)
 		return 0;
 
+	if (VGPU_PVCAP(mm->vgpu, PV_PPGTT_UPDATE))
+		return shadow_mm_pv(mm);
+
 	mm->ppgtt_mm.shadowed = true;
 
 	for (index = 0; index < ARRAY_SIZE(mm->ppgtt_mm.guest_pdps); index++) {
@@ -2816,3 +2863,251 @@ void intel_vgpu_reset_gtt(struct intel_vgpu *vgpu)
 	intel_vgpu_destroy_all_ppgtt_mm(vgpu);
 	intel_vgpu_reset_ggtt(vgpu, true);
 }
+
+#define GEN8_PML4E_SIZE		(1UL << GEN8_PML4E_SHIFT)
+#define GEN8_PML4E_SIZE_MASK	(~(GEN8_PML4E_SIZE - 1))
+#define GEN8_PDPE_SIZE		(1UL << GEN8_PDPE_SHIFT)
+#define GEN8_PDPE_SIZE_MASK	(~(GEN8_PDPE_SIZE - 1))
+#define GEN8_PDE_SIZE		(1UL << GEN8_PDE_SHIFT)
+#define GEN8_PDE_SIZE_MASK	(~(GEN8_PDE_SIZE - 1))
+
+#define pml4_addr_end(addr, end)					\
+({	unsigned long __boundary = \
+			((addr) + GEN8_PML4E_SIZE) & GEN8_PML4E_SIZE_MASK; \
+	(__boundary < (end)) ? __boundary : (end);		\
+})
+
+#define pdp_addr_end(addr, end)						\
+({	unsigned long __boundary = \
+			((addr) + GEN8_PDPE_SIZE) & GEN8_PDPE_SIZE_MASK; \
+	(__boundary < (end)) ? __boundary : (end);		\
+})
+
+#define pd_addr_end(addr, end)						\
+({	unsigned long __boundary = \
+			((addr) + GEN8_PDE_SIZE) & GEN8_PDE_SIZE_MASK;	\
+	(__boundary < (end)) ? __boundary : (end);		\
+})
+
+struct ppgtt_walk {
+	unsigned long *mfns;
+	int mfn_index;
+	unsigned long *pt;
+};
+
+static int walk_pt_range(struct intel_vgpu *vgpu, u64 pt,
+				u64 start, u64 end, struct ppgtt_walk *walk)
+{
+	const struct intel_gvt_device_info *info = &vgpu->gvt->device_info;
+	struct intel_gvt_gtt_gma_ops *gma_ops = vgpu->gvt->gtt.gma_ops;
+	unsigned long start_index, end_index;
+	int ret;
+	int i;
+	unsigned long mfn, gfn;
+
+	start_index = gma_ops->gma_to_pte_index(start);
+	end_index = ((end - start) >> PAGE_SHIFT) + start_index;
+
+	ret = intel_gvt_hypervisor_read_gpa(vgpu,
+		(pt & PAGE_MASK) + (start_index << info->gtt_entry_size_shift),
+		walk->pt + start_index,
+		(end_index - start_index) << info->gtt_entry_size_shift);
+	if (ret) {
+		gvt_vgpu_err("fail to read gpa %llx\n", pt);
+		return ret;
+	}
+
+	for (i = start_index; i < end_index; i++) {
+		gfn = walk->pt[i] >> PAGE_SHIFT;
+		mfn = intel_gvt_hypervisor_gfn_to_mfn(vgpu, gfn);
+		if (mfn == INTEL_GVT_INVALID_ADDR) {
+			gvt_vgpu_err("fail to translate gfn: 0x%lx\n", gfn);
+			return -ENXIO;
+		}
+		walk->mfns[walk->mfn_index++] = mfn << PAGE_SHIFT;
+	}
+
+	return 0;
+}
+
+
+static int walk_pd_range(struct intel_vgpu *vgpu, u64 pd,
+				u64 start, u64 end, struct ppgtt_walk *walk)
+{
+	const struct intel_gvt_device_info *info = &vgpu->gvt->device_info;
+	struct intel_gvt_gtt_gma_ops *gma_ops = vgpu->gvt->gtt.gma_ops;
+	unsigned long index;
+	u64 pt, next;
+	int ret  = 0;
+
+	do {
+		index = gma_ops->gma_to_pde_index(start);
+
+		ret = intel_gvt_hypervisor_read_gpa(vgpu,
+			(pd & PAGE_MASK) + (index <<
+			info->gtt_entry_size_shift), &pt, 8);
+		if (ret)
+			return ret;
+		next = pd_addr_end(start, end);
+		walk_pt_range(vgpu, pt, start, next, walk);
+
+		start = next;
+	} while (start != end);
+
+	return ret;
+}
+
+
+static int walk_pdp_range(struct intel_vgpu *vgpu, u64 pdp,
+				  u64 start, u64 end, struct ppgtt_walk *walk)
+{
+	const struct intel_gvt_device_info *info = &vgpu->gvt->device_info;
+	struct intel_gvt_gtt_gma_ops *gma_ops = vgpu->gvt->gtt.gma_ops;
+	unsigned long index;
+	u64 pd, next;
+	int ret  = 0;
+
+	do {
+		index = gma_ops->gma_to_l4_pdp_index(start);
+
+		ret = intel_gvt_hypervisor_read_gpa(vgpu,
+			(pdp & PAGE_MASK) + (index <<
+			info->gtt_entry_size_shift), &pd, 8);
+		if (ret)
+			return ret;
+		next = pdp_addr_end(start, end);
+		walk_pd_range(vgpu, pd, start, next, walk);
+		start = next;
+	} while (start != end);
+
+	return ret;
+}
+
+
+static int walk_pml4_range(struct intel_vgpu *vgpu, u64 pml4,
+				u64 start, u64 end, struct ppgtt_walk *walk)
+{
+	const struct intel_gvt_device_info *info = &vgpu->gvt->device_info;
+	struct intel_gvt_gtt_gma_ops *gma_ops = vgpu->gvt->gtt.gma_ops;
+	unsigned long index;
+	u64 pdp, next;
+	int ret  = 0;
+
+	do {
+		index = gma_ops->gma_to_pml4_index(start);
+		ret = intel_gvt_hypervisor_read_gpa(vgpu,
+			(pml4 & PAGE_MASK) + (index <<
+			info->gtt_entry_size_shift), &pdp, 8);
+		if (ret)
+			return ret;
+		next = pml4_addr_end(start, end);
+		walk_pdp_range(vgpu, pdp, start, next, walk);
+		start = next;
+	} while (start != end);
+
+	return ret;
+}
+
+static int intel_vgpu_pv_ppgtt_insert_4lvl(struct intel_vgpu *vgpu,
+		struct intel_vgpu_mm *mm,
+		u64 pml4, u64 start, u64 length, u32 cache_level)
+{
+	int ret = 0;
+	struct sg_table st;
+	struct scatterlist *sg = NULL;
+	int num_pages;
+	struct i915_vma vma;
+	struct ppgtt_walk walk;
+	int i;
+
+	num_pages = length >> PAGE_SHIFT;
+
+	walk.mfn_index = 0;
+	walk.mfns = NULL;
+	walk.pt = NULL;
+
+	walk.mfns = kmalloc_array(num_pages,
+			sizeof(unsigned long), GFP_KERNEL);
+	if (!walk.mfns) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	walk.pt = (unsigned long *)__get_free_pages(GFP_KERNEL, 0);
+	if (!walk.pt) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	if (sg_alloc_table(&st, num_pages, GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	ret = walk_pml4_range(vgpu, pml4, start, start + length, &walk);
+	if (ret)
+		goto fail_free_sg;
+
+	WARN_ON(num_pages != walk.mfn_index);
+
+	for_each_sg(st.sgl, sg, num_pages, i) {
+		sg->offset = 0;
+		sg->length = PAGE_SIZE;
+		sg_dma_address(sg) = walk.mfns[i];
+		sg_dma_len(sg) = PAGE_SIZE;
+	}
+
+	memset(&vma, 0, sizeof(vma));
+	vma.node.start = start;
+	vma.pages = &st;
+	mm->ppgtt->vm.insert_entries(&mm->ppgtt->vm, &vma, cache_level, 0);
+
+fail_free_sg:
+	sg_free_table(&st);
+fail:
+	kfree(walk.mfns);
+	free_page((unsigned long)walk.pt);
+
+	return ret;
+}
+
+int intel_vgpu_handle_pv_ppgtt_update(struct intel_vgpu *vgpu,
+		u32 action, struct pv_ppgtt_update *pv_ppgtt)
+{
+	struct intel_vgpu_mm *mm;
+	u64 pdp, start, length;
+	u32 cache_level;
+	int ret = 0;
+
+	pdp = pv_ppgtt->pdp;
+	start = pv_ppgtt->start;
+	length = pv_ppgtt->length;
+	cache_level = pv_ppgtt->cache_level;
+
+	mm = intel_vgpu_find_ppgtt_mm(vgpu, &pdp);
+	if (!mm) {
+		gvt_vgpu_err("failed to find pdp 0x%llx\n", pdp);
+		return -EINVAL;
+	}
+
+	if (action == PV_ACTION_PPGTT_L4_ALLOC) {
+		ret = mm->ppgtt->vm.allocate_va_range(&mm->ppgtt->vm,
+				start, length);
+		if (ret)
+			gvt_vgpu_err("failed to alloc %llx\n", pdp);
+	}
+
+	if (action == PV_ACTION_PPGTT_L4_CLEAR) {
+		mm->ppgtt->vm.clear_range(&mm->ppgtt->vm,
+				start, length);
+	}
+
+	if (action ==  PV_ACTION_PPGTT_L4_INSERT) {
+		ret = intel_vgpu_pv_ppgtt_insert_4lvl(vgpu, mm,
+				pdp, start, length, cache_level);
+		if (ret)
+			gvt_vgpu_err("failed to insert %llx\n", pdp);
+	}
+
+	return ret;
+}
diff --git a/drivers/gpu/drm/i915/gvt/gtt.h b/drivers/gpu/drm/i915/gvt/gtt.h
index 42d0394..7fe6dfa 100644
--- a/drivers/gpu/drm/i915/gvt/gtt.h
+++ b/drivers/gpu/drm/i915/gvt/gtt.h
@@ -141,6 +141,7 @@ struct intel_gvt_partial_pte {
 
 struct intel_vgpu_mm {
 	enum intel_gvt_mm_type type;
+	struct i915_ppgtt *ppgtt;
 	struct intel_vgpu *vgpu;
 
 	struct kref ref;
@@ -252,6 +253,14 @@ struct intel_vgpu_ppgtt_spt {
 	struct list_head post_shadow_list;
 };
 
+/* ppgtt pv support data structure */
+struct pv_ppgtt_update {
+	u64 pdp;
+	u64 start;
+	u64 length;
+	u32 cache_level;
+};
+
 int intel_vgpu_sync_oos_pages(struct intel_vgpu *vgpu);
 
 int intel_vgpu_flush_post_shadow(struct intel_vgpu *vgpu);
@@ -277,4 +286,6 @@ int intel_vgpu_emulate_ggtt_mmio_read(struct intel_vgpu *vgpu,
 int intel_vgpu_emulate_ggtt_mmio_write(struct intel_vgpu *vgpu,
 	unsigned int off, void *p_data, unsigned int bytes);
 
+int intel_vgpu_handle_pv_ppgtt_update(struct intel_vgpu *vgpu,
+		u32 action, struct pv_ppgtt_update *pv_ppgtt);
 #endif /* _GVT_GTT_H_ */
diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
index f7f4b03..c9d5f2c 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.h
+++ b/drivers/gpu/drm/i915/gvt/gvt.h
@@ -53,6 +53,10 @@
 
 #define GVT_MAX_VGPU 8
 
+#define VGPU_PVCAP(vgpu, cap)	\
+	((vgpu_vreg_t(vgpu, vgtif_reg(pv_caps)) & (cap)) \
+			&& vgpu->shared_page_enabled)
+
 struct intel_gvt_host {
 	struct device *dev;
 	bool initialized;
diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
index 4115fd0..85b4346 100644
--- a/drivers/gpu/drm/i915/gvt/handlers.c
+++ b/drivers/gpu/drm/i915/gvt/handlers.c
@@ -1209,6 +1209,127 @@ static int pvinfo_mmio_read(struct intel_vgpu *vgpu, unsigned int offset,
 	return 0;
 }
 
+static inline unsigned int ct_header_get_len(u32 header)
+{
+	return (header >> PV_CT_MSG_LEN_SHIFT) & PV_CT_MSG_LEN_MASK;
+}
+
+static inline unsigned int ct_header_get_action(u32 header)
+{
+	return (header >> PV_CT_MSG_ACTION_SHIFT) & PV_CT_MSG_ACTION_MASK;
+}
+
+static int fetch_pv_command_buffer(struct intel_vgpu *vgpu,
+		struct vgpu_pv_ct_buffer_desc *desc,
+		u32 *fence, u32 *action, u32 *data)
+{
+	u32 head, tail, len, size, off;
+	u32 cmd_head;
+	u32 avail;
+	int ret;
+
+	/* fetch command descriptor */
+	off = PV_DESC_OFF;
+	ret = intel_gvt_read_shared_page(vgpu, off, desc, sizeof(*desc));
+	if (ret)
+		return ret;
+
+	GEM_BUG_ON(desc->size % 4);
+	GEM_BUG_ON(desc->head % 4);
+	GEM_BUG_ON(desc->tail % 4);
+
+	head = desc->head/4;
+	tail = desc->tail/4;
+	size = desc->size/4;
+	GEM_BUG_ON(tail >= size);
+	GEM_BUG_ON(head >= size);
+
+	/* tail == head condition indicates empty */
+	if (unlikely((tail - head) == 0))
+		return -ENODATA;
+
+	/* fetch command head */
+	off = desc->addr + head * 4;
+	ret = intel_gvt_read_shared_page(vgpu, off, &cmd_head, 4);
+	head = (head + 1) % size;
+	if (ret)
+		goto err;
+
+	len = ct_header_get_len(cmd_head) - 1;
+	*action = ct_header_get_action(cmd_head);
+
+	/* fetch command fence */
+	off = desc->addr + head * 4;
+	ret = intel_gvt_read_shared_page(vgpu, off, fence, 4);
+	head = (head + 1) % size;
+	if (ret)
+		goto err;
+
+	/* no command data */
+	if (len == 0)
+		goto err;
+
+	/* fetch command data */
+	avail = size - head;
+	if (len <= avail) {
+		off =  desc->addr + head * 4;
+		ret = intel_gvt_read_shared_page(vgpu, off, data, len * 4);
+		head = (head + len) % size;
+		if (ret)
+			goto err;
+	} else {
+		/* swap case */
+		off =  desc->addr + head * 4;
+		ret = intel_gvt_read_shared_page(vgpu, off, data, avail * 4);
+		head = (head + avail) % size;
+		if (ret)
+			goto err;
+
+		off = desc->addr;
+		ret = intel_gvt_read_shared_page(vgpu, off, &data[avail],
+				(len - avail) * 4);
+		head = (head + len - avail) % size;
+		if (ret)
+			goto err;
+	}
+
+err:
+	desc->head = head * 4;
+	return ret;
+}
+
+static int handle_pv_actions(struct intel_vgpu *vgpu)
+{
+	struct vgpu_pv_ct_buffer_desc desc;
+	u32 fence, action;
+	u32 data[32];
+	int ret;
+	struct pv_ppgtt_update *ppgtt;
+
+	ret = fetch_pv_command_buffer(vgpu, &desc, &fence, &action, data);
+	if (ret)
+		return ret;
+
+	switch (action) {
+	case PV_ACTION_PPGTT_L4_ALLOC:
+	case PV_ACTION_PPGTT_L4_CLEAR:
+	case PV_ACTION_PPGTT_L4_INSERT:
+		ppgtt = (struct pv_ppgtt_update *)data;
+		ret = intel_vgpu_handle_pv_ppgtt_update(vgpu, action, ppgtt);
+		break;
+	default:
+		break;
+	}
+
+	/* write command descriptor back */
+	desc.fence = fence;
+	desc.status = ret;
+
+	ret = intel_gvt_write_shared_page(vgpu, PV_DESC_OFF,
+			&desc, sizeof(desc));
+	return ret;
+}
+
 static int handle_g2v_notification(struct intel_vgpu *vgpu, int notification)
 {
 	enum intel_gvt_gtt_type root_entry_type = GTT_TYPE_PPGTT_ROOT_L4_ENTRY;
@@ -1217,6 +1338,7 @@ static int handle_g2v_notification(struct intel_vgpu *vgpu, int notification)
 	unsigned long gpa, gfn;
 	u16 ver_major = PV_MAJOR;
 	u16 ver_minor = PV_MINOR;
+	int ret = 0;
 
 	pdps = (u64 *)&vgpu_vreg64_t(vgpu, vgtif_reg(pdp[0]));
 
@@ -1243,6 +1365,9 @@ static int handle_g2v_notification(struct intel_vgpu *vgpu, int notification)
 		intel_gvt_write_shared_page(vgpu, 0, &ver_major, 2);
 		intel_gvt_write_shared_page(vgpu, 2, &ver_minor, 2);
 		break;
+	case VGT_G2V_PV_SEND_TRIGGER:
+		ret = handle_pv_actions(vgpu);
+		break;
 	case VGT_G2V_EXECLIST_CONTEXT_CREATE:
 	case VGT_G2V_EXECLIST_CONTEXT_DESTROY:
 	case 1:	/* Remove this in guest driver. */
@@ -1250,7 +1375,7 @@ static int handle_g2v_notification(struct intel_vgpu *vgpu, int notification)
 	default:
 		gvt_vgpu_err("Invalid PV notification %d\n", notification);
 	}
-	return 0;
+	return ret;
 }
 
 static int send_display_ready_uevent(struct intel_vgpu *vgpu, int ready)
diff --git a/drivers/gpu/drm/i915/gvt/vgpu.c b/drivers/gpu/drm/i915/gvt/vgpu.c
index 151f271..f982a23 100644
--- a/drivers/gpu/drm/i915/gvt/vgpu.c
+++ b/drivers/gpu/drm/i915/gvt/vgpu.c
@@ -49,6 +49,8 @@ void populate_pvinfo_page(struct intel_vgpu *vgpu)
 	vgpu_vreg_t(vgpu, vgtif_reg(vgt_caps)) |= VGT_CAPS_HUGE_GTT;
 	vgpu_vreg_t(vgpu, vgtif_reg(vgt_caps)) |= VGT_CAPS_PV;
 
+	if (!intel_vtd_active())
+		vgpu_vreg_t(vgpu, vgtif_reg(pv_caps)) = PV_PPGTT_UPDATE;
 	vgpu_vreg_t(vgpu, vgtif_reg(avail_rs.mappable_gmadr.base)) =
 		vgpu_aperture_gmadr_base(vgpu);
 	vgpu_vreg_t(vgpu, vgtif_reg(avail_rs.mappable_gmadr.size)) =
-- 
2.7.4

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v7 9/9] drm/i915/gvt: GVTg support context submission pv optimization
  2019-07-08  1:35 [PATCH v7 0/8] i915 vgpu PV to improve vgpu performance Xiaolin Zhang
                   ` (7 preceding siblings ...)
  2019-07-08  1:35 ` [PATCH v7 8/9] drm/i915/gvt: GVTg support ppgtt pv optimization Xiaolin Zhang
@ 2019-07-08  1:35 ` Xiaolin Zhang
  2019-07-08  1:48 ` ✗ Fi.CI.CHECKPATCH: warning for i915 vgpu PV to improve vgpu performance Patchwork
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Xiaolin Zhang @ 2019-07-08  1:35 UTC (permalink / raw)
  To: intel-gfx; +Cc: intel-gvt-dev

Implemented the context submission pv optimization within GVTg.

GVTg reads the context submission data (elsp_data) from the shared_page
directly without trap cost, eliminates the execlist HW behavior
emulation and does not inject a context switch interrupt into the guest
under the PV submission mechanism.
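
For reference, below is a guest-side sketch of the flow GVTg serves
here. It is illustrative: the per-engine slot layout (two 64-bit port
descriptors plus a "submitted" status flag) is inferred from the
16-byte descriptor read and the field offsets used on the GVTg side,
and the real path is added by the earlier "drm/i915: vgpu context
submission pv optimization" patch.

/* illustrative per-engine slot, located in the shared page at
 * PV_ELSP_OFF + hw_id * sizeof(struct pv_submission)
 */
struct pv_submission_slot {
	u64 descs[2];	/* the two ELSP port context descriptors */
	bool submitted;	/* written back by GVTg */
};

static void pv_submit_sketch(struct pv_submission_slot *slot,
		void __iomem *elsp_reg, u64 desc0, u64 desc1)
{
	/* publish both port descriptors through the shared page ... */
	slot->descs[0] = desc0;
	slot->descs[1] = desc1;

	/* ... then one trapped write replaces the four ELSP dword writes */
	writel(PV_ACTION_ELSP_SUBMISSION, elsp_reg);

	/* GVTg writes slot->submitted back to report whether the ELSP
	 * was queued (see handle_pv_submission() below)
	 */
}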

v0: RFC.
v1: rebase.
v2: rebase.
v3: report pv context submission cap and handle VGT_G2V_ELSP_SUBMIT
g2v pv notification.
v4: eliminate execlist HW emulation and don't inject context switch
interrupt to the guest under the PV submission mechanism.
v5: rebase.
v6: rebase.
v7: rebase.

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/gvt/execlist.c |  6 ++++++
 drivers/gpu/drm/i915/gvt/handlers.c | 30 +++++++++++++++++++++++++++++-
 drivers/gpu/drm/i915/gvt/vgpu.c     |  2 ++
 3 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gvt/execlist.c b/drivers/gpu/drm/i915/gvt/execlist.c
index f21b8fb..e52bfd6 100644
--- a/drivers/gpu/drm/i915/gvt/execlist.c
+++ b/drivers/gpu/drm/i915/gvt/execlist.c
@@ -382,6 +382,9 @@ static int prepare_execlist_workload(struct intel_vgpu_workload *workload)
 	int ring_id = workload->ring_id;
 	int ret;
 
+	if (VGPU_PVCAP(vgpu, PV_SUBMISSION))
+		return 0;
+
 	if (!workload->emulate_schedule_in)
 		return 0;
 
@@ -429,6 +432,9 @@ static int complete_execlist_workload(struct intel_vgpu_workload *workload)
 		goto out;
 	}
 
+	if (VGPU_PVCAP(vgpu, PV_SUBMISSION))
+		goto out;
+
 	ret = emulate_execlist_ctx_schedule_out(execlist, &workload->ctx_desc);
 out:
 	intel_vgpu_unpin_mm(workload->shadow_mm);
diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
index 85b4346..7da6700 100644
--- a/drivers/gpu/drm/i915/gvt/handlers.c
+++ b/drivers/gpu/drm/i915/gvt/handlers.c
@@ -1810,6 +1810,31 @@ static int mmio_read_from_hw(struct intel_vgpu *vgpu,
 	return intel_vgpu_default_mmio_read(vgpu, offset, p_data, bytes);
 }
 
+static int handle_pv_submission(struct intel_vgpu *vgpu, int ring_id)
+{
+	struct intel_vgpu_execlist *execlist;
+	u32 hw_id = vgpu->gvt->dev_priv->engine[ring_id]->hw_id;
+	u32 base = PV_ELSP_OFF + hw_id * sizeof(struct pv_submission);
+	u32 desc_off = offsetof(struct pv_submission, descs);
+	u32 submitted_off = offsetof(struct pv_submission, submitted);
+	bool submitted = true;
+	int ret;
+
+	execlist = &vgpu->submission.execlist[ring_id];
+	desc_off += base;
+	if (intel_gvt_read_shared_page(vgpu, desc_off,
+			&execlist->elsp_dwords.data, 16))
+		return -EINVAL;
+
+	ret = intel_vgpu_submit_execlist(vgpu, ring_id);
+	if (ret)
+		submitted = false;
+
+	submitted_off += base;
+	ret = intel_gvt_write_shared_page(vgpu, submitted_off, &submitted, 1);
+	return ret;
+}
+
 static int elsp_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
 		void *p_data, unsigned int bytes)
 {
@@ -1821,8 +1846,11 @@ static int elsp_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
 	if (WARN_ON(ring_id < 0 || ring_id >= I915_NUM_ENGINES))
 		return -EINVAL;
 
-	execlist = &vgpu->submission.execlist[ring_id];
+	if (VGPU_PVCAP(vgpu, PV_SUBMISSION) &&
+			data == PV_ACTION_ELSP_SUBMISSION)
+		return handle_pv_submission(vgpu, ring_id);
 
+	execlist = &vgpu->submission.execlist[ring_id];
 	execlist->elsp_dwords.data[3 - execlist->elsp_dwords.index] = data;
 	if (execlist->elsp_dwords.index == 3) {
 		ret = intel_vgpu_submit_execlist(vgpu, ring_id);
diff --git a/drivers/gpu/drm/i915/gvt/vgpu.c b/drivers/gpu/drm/i915/gvt/vgpu.c
index f982a23..cb088a0 100644
--- a/drivers/gpu/drm/i915/gvt/vgpu.c
+++ b/drivers/gpu/drm/i915/gvt/vgpu.c
@@ -51,6 +51,8 @@ void populate_pvinfo_page(struct intel_vgpu *vgpu)
 
 	if (!intel_vtd_active())
 		vgpu_vreg_t(vgpu, vgtif_reg(pv_caps)) = PV_PPGTT_UPDATE;
+	vgpu_vreg_t(vgpu, vgtif_reg(pv_caps)) |= PV_SUBMISSION;
+
 	vgpu_vreg_t(vgpu, vgtif_reg(avail_rs.mappable_gmadr.base)) =
 		vgpu_aperture_gmadr_base(vgpu);
 	vgpu_vreg_t(vgpu, vgtif_reg(avail_rs.mappable_gmadr.size)) =
-- 
2.7.4

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* ✗ Fi.CI.CHECKPATCH: warning for i915 vgpu PV to improve vgpu performance
  2019-07-08  1:35 [PATCH v7 0/8] i915 vgpu PV to improve vgpu performance Xiaolin Zhang
                   ` (8 preceding siblings ...)
  2019-07-08  1:35 ` [PATCH v7 9/9] drm/i915/gvt: GVTg support context submission " Xiaolin Zhang
@ 2019-07-08  1:48 ` Patchwork
  2019-07-08  1:52 ` ✗ Fi.CI.SPARSE: " Patchwork
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Patchwork @ 2019-07-08  1:48 UTC (permalink / raw)
  To: Xiaolin Zhang; +Cc: intel-gfx

== Series Details ==

Series: i915 vgpu PV to improve vgpu performance
URL   : https://patchwork.freedesktop.org/series/63333/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
7dc9248b7979 drm/i915: introduced vgpu pv capability
-:91: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#91: FILE: drivers/gpu/drm/i915/i915_vgpu.c:105:
+	DRM_INFO("Virtual GPU for Intel GVT-g detected with pv_caps 0x%x.\n",
+			dev_priv->vgpu.pv_caps);

-:115: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#115: FILE: drivers/gpu/drm/i915/i915_vgpu.c:323:
+bool intel_vgpu_check_pv_caps(struct drm_i915_private *dev_priv,
+		void __iomem *shared_area)

-:153: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#153: FILE: drivers/gpu/drm/i915/i915_vgpu.h:57:
+bool intel_vgpu_check_pv_caps(struct drm_i915_private *dev_priv,
+		void __iomem *shared_area);

total: 0 errors, 0 warnings, 3 checks, 100 lines checked
35d4aff9c6de drm/i915: vgpu shared memory setup for pv optimization
-:101: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#101: FILE: drivers/gpu/drm/i915/i915_vgpu.c:320:
+static int intel_vgpu_setup_shared_page(struct drm_i915_private *dev_priv,
+		void __iomem *shared_area)

-:159: CHECK:ALLOC_SIZEOF_STRUCT: Prefer kzalloc(sizeof(*pv)...) over kzalloc(sizeof(struct i915_virtual_gpu_pv)...)
#159: FILE: drivers/gpu/drm/i915/i915_vgpu.c:378:
+	pv = kzalloc(sizeof(struct i915_virtual_gpu_pv), GFP_KERNEL);

total: 0 errors, 0 warnings, 2 checks, 158 lines checked
cc318f34fc2e drm/i915: vgpu pv command buffer support
-:51: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#51: FILE: drivers/gpu/drm/i915/i915_vgpu.c:333:
+static int wait_for_desc_update(struct vgpu_pv_ct_buffer_desc *desc,
+		u32 fence, u32 *status)

-:63: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#63: FILE: drivers/gpu/drm/i915/i915_vgpu.c:345:
+		DRM_ERROR("CT: fence %u failed; reported fence=%u\n",
+				fence, desc->fence);

-:88: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#88: FILE: drivers/gpu/drm/i915/i915_vgpu.c:370:
+static int pv_command_buffer_write(struct i915_virtual_gpu_pv *pv,
+		const u32 *action, u32 len /* in dwords */, u32 fence)

-:151: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#151: FILE: drivers/gpu/drm/i915/i915_vgpu.c:433:
+static int pv_send(struct drm_i915_private *dev_priv,
+		const u32 *action, u32 len, u32 *status)

-:186: CHECK:OPEN_ENDED_LINE: Lines should not end with a '('
#186: FILE: drivers/gpu/drm/i915/i915_vgpu.c:468:
+static int intel_vgpu_pv_send_command_buffer(

-:203: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#203: FILE: drivers/gpu/drm/i915/i915_vgpu.c:485:
+		DRM_ERROR("PV: send action %#x returned %d (%#x)\n",
+				action[0], ret, ret);

-:226: CHECK:SPACING: spaces preferred around that '/' (ctx:VxV)
#226: FILE: drivers/gpu/drm/i915/i915_vgpu.c:573:
+	pv->ctb.desc->size = PAGE_SIZE/2;
 	                              ^

-:245: CHECK:SPACING: spaces preferred around that '/' (ctx:VxV)
#245: FILE: drivers/gpu/drm/i915/i915_vgpu.h:32:
+#define PV_DESC_OFF		(PAGE_SIZE/4)
                    		          ^

-:246: CHECK:SPACING: spaces preferred around that '/' (ctx:VxV)
#246: FILE: drivers/gpu/drm/i915/i915_vgpu.h:33:
+#define PV_CMD_OFF		(PAGE_SIZE/2)
                   		          ^

-:323: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#323: FILE: drivers/gpu/drm/i915/i915_vgpu.h:130:
+intel_vgpu_pv_send(struct drm_i915_private *dev_priv,
+		u32 *action, u32 len)

total: 0 errors, 0 warnings, 10 checks, 299 lines checked
e882cebe14fe drm/i915: vgpu ppgtt update pv optimization
-:42: CHECK:LOGICAL_CONTINUATIONS: Logical continuations should be on the previous line
#42: FILE: drivers/gpu/drm/i915/i915_gem.c:1425:
+	if ((intel_vgpu_active(dev_priv) && !intel_vgpu_has_huge_gtt(dev_priv))
+		|| intel_vgpu_enabled_pv_caps(dev_priv, PV_PPGTT_UPDATE))

-:56: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#56: FILE: drivers/gpu/drm/i915/i915_gem_gtt.c:914:
+void gen8_ppgtt_clear_4lvl(struct i915_address_space *vm,
 				  u64 start, u64 length)

-:65: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#65: FILE: drivers/gpu/drm/i915/i915_gem_gtt.c:1154:
+void gen8_ppgtt_insert_4lvl(struct i915_address_space *vm,
 				   struct i915_vma *vma,

-:74: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#74: FILE: drivers/gpu/drm/i915/i915_gem_gtt.c:1470:
+int gen8_ppgtt_alloc_4lvl(struct i915_address_space *vm,
 				 u64 start, u64 length)

-:96: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#96: FILE: drivers/gpu/drm/i915/i915_gem_gtt.h:645:
+void gen8_ppgtt_clear_4lvl(struct i915_address_space *vm,
+		u64 start, u64 length);

-:98: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#98: FILE: drivers/gpu/drm/i915/i915_gem_gtt.h:647:
+void gen8_ppgtt_insert_4lvl(struct i915_address_space *vm,
+		struct i915_vma *vma,

-:101: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#101: FILE: drivers/gpu/drm/i915/i915_gem_gtt.h:650:
+int gen8_ppgtt_alloc_4lvl(struct i915_address_space *vm,
+		u64 start, u64 length);

-:125: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#125: FILE: drivers/gpu/drm/i915/i915_vgpu.c:320:
+static int vgpu_ppgtt_pv_update(struct drm_i915_private *dev_priv,
+		u32 action, u64 pdp, u64 start, u64 length, u32 cache_level)

-:142: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#142: FILE: drivers/gpu/drm/i915/i915_vgpu.c:337:
+static void gen8_ppgtt_clear_4lvl_pv(struct i915_address_space *vm,
+		u64 start, u64 length)

-:149: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#149: FILE: drivers/gpu/drm/i915/i915_vgpu.c:344:
+	vgpu_ppgtt_pv_update(dev_priv, PV_ACTION_PPGTT_L4_CLEAR,
+		px_dma(ppgtt->pd), start, length, 0);

-:153: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#153: FILE: drivers/gpu/drm/i915/i915_vgpu.c:348:
+static void gen8_ppgtt_insert_4lvl_pv(struct i915_address_space *vm,
+		struct i915_vma *vma,

-:163: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#163: FILE: drivers/gpu/drm/i915/i915_vgpu.c:358:
+	vgpu_ppgtt_pv_update(dev_priv, PV_ACTION_PPGTT_L4_INSERT,
+		px_dma(ppgtt->pd), start, length, cache_level);

-:167: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#167: FILE: drivers/gpu/drm/i915/i915_vgpu.c:362:
+static int gen8_ppgtt_alloc_4lvl_pv(struct i915_address_space *vm,
+		u64 start, u64 length)

-:185: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#185: FILE: drivers/gpu/drm/i915/i915_vgpu.c:380:
+void intel_vgpu_config_pv_caps(struct drm_i915_private *dev_priv,
+		enum pv_caps cap, void *data)

-:235: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#235: FILE: drivers/gpu/drm/i915/i915_vgpu.h:139:
+intel_vgpu_enabled_pv_caps(struct drm_i915_private *dev_priv,
+		enum pv_caps cap)

-:238: CHECK:LOGICAL_CONTINUATIONS: Logical continuations should be on the previous line
#238: FILE: drivers/gpu/drm/i915/i915_vgpu.h:142:
+	return (dev_priv->vgpu.active) && intel_vgpu_has_pv_caps(dev_priv)
+			&& (dev_priv->vgpu.pv_caps & cap);

-:249: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#249: FILE: drivers/gpu/drm/i915/i915_vgpu.h:165:
+void intel_vgpu_config_pv_caps(struct drm_i915_private *dev_priv,
+		enum pv_caps cap, void *data);

total: 0 errors, 0 warnings, 17 checks, 188 lines checked
f7ae004ae398 drm/i915: vgpu context submission pv optimization
-:130: CHECK:SPACING: spaces preferred around that '/' (ctx:VxV)
#130: FILE: drivers/gpu/drm/i915/i915_vgpu.h:33:
+#define PV_ELSP_OFF		(PAGE_SIZE/8)
                    		          ^

-:181: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#181: 
new file mode 100644

-:201: CHECK:SPACING: spaces preferred around that '+' (ctx:VxV)
#201: FILE: drivers/gpu/drm/i915/intel_pv_submission.c:16:
+	reg_state[CTX_RING_TAIL+1] = intel_ring_set_tail(rq->ring, rq->tail);
 	                       ^

-:212: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#212: FILE: drivers/gpu/drm/i915/intel_pv_submission.c:27:
+static void pv_submit(struct intel_engine_cs *engine,
+	       struct i915_request **out,

-:313: CHECK:LINE_SPACING: Please don't use multiple blank lines
#313: FILE: drivers/gpu/drm/i915/intel_pv_submission.c:128:
+
+

total: 0 errors, 1 warnings, 4 checks, 307 lines checked
8e123919b35c drm/i915/gvt: GVTg handle pv_caps PVINFO register
01adbd98b998 drm/i915/gvt: GVTg handle shared_page setup
-:51: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#51: FILE: drivers/gpu/drm/i915/gvt/gvt.h:693:
+int intel_gvt_read_shared_page(struct intel_vgpu *vgpu,
+		unsigned int offset, void *buf, unsigned long len);

-:53: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#53: FILE: drivers/gpu/drm/i915/gvt/gvt.h:695:
+int intel_gvt_write_shared_page(struct intel_vgpu *vgpu,
+		unsigned int offset, void *buf, unsigned long len);

-:131: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#131: FILE: drivers/gpu/drm/i915/gvt/vgpu.c:603:
+int intel_gvt_read_shared_page(struct intel_vgpu *vgpu,
+		unsigned int offset, void *buf, unsigned long len)

-:150: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#150: FILE: drivers/gpu/drm/i915/gvt/vgpu.c:622:
+int intel_gvt_write_shared_page(struct intel_vgpu *vgpu,
+		unsigned int offset, void *buf, unsigned long len)

total: 0 errors, 0 warnings, 4 checks, 122 lines checked
d342082a121e drm/i915/gvt: GVTg support ppgtt pv optimization
-:83: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#83: FILE: drivers/gpu/drm/i915/gvt/gtt.c:1836:
+		gvt_vgpu_err("fail to create ppgtt for pdp 0x%llx\n",
+				px_dma(mm->ppgtt->pd));

-:119: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'end' - possible side-effects?
#119: FILE: drivers/gpu/drm/i915/gvt/gtt.c:2874:
+#define pml4_addr_end(addr, end)					\
+({	unsigned long __boundary = \
+			((addr) + GEN8_PML4E_SIZE) & GEN8_PML4E_SIZE_MASK; \
+	(__boundary < (end)) ? __boundary : (end);		\
+})

-:125: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'end' - possible side-effects?
#125: FILE: drivers/gpu/drm/i915/gvt/gtt.c:2880:
+#define pdp_addr_end(addr, end)						\
+({	unsigned long __boundary = \
+			((addr) + GEN8_PDPE_SIZE) & GEN8_PDPE_SIZE_MASK; \
+	(__boundary < (end)) ? __boundary : (end);		\
+})

-:131: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'end' - possible side-effects?
#131: FILE: drivers/gpu/drm/i915/gvt/gtt.c:2886:
+#define pd_addr_end(addr, end)						\
+({	unsigned long __boundary = \
+			((addr) + GEN8_PDE_SIZE) & GEN8_PDE_SIZE_MASK;	\
+	(__boundary < (end)) ? __boundary : (end);		\
+})

-:144: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#144: FILE: drivers/gpu/drm/i915/gvt/gtt.c:2899:
+static int walk_pt_range(struct intel_vgpu *vgpu, u64 pt,
+				u64 start, u64 end, struct ppgtt_walk *walk)

-:157: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#157: FILE: drivers/gpu/drm/i915/gvt/gtt.c:2912:
+	ret = intel_gvt_hypervisor_read_gpa(vgpu,
+		(pt & PAGE_MASK) + (start_index << info->gtt_entry_size_shift),

-:178: CHECK:LINE_SPACING: Please don't use multiple blank lines
#178: FILE: drivers/gpu/drm/i915/gvt/gtt.c:2933:
+
+

-:180: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#180: FILE: drivers/gpu/drm/i915/gvt/gtt.c:2935:
+static int walk_pd_range(struct intel_vgpu *vgpu, u64 pd,
+				u64 start, u64 end, struct ppgtt_walk *walk)

-:192: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#192: FILE: drivers/gpu/drm/i915/gvt/gtt.c:2947:
+		ret = intel_gvt_hypervisor_read_gpa(vgpu,
+			(pd & PAGE_MASK) + (index <<

-:205: CHECK:LINE_SPACING: Please don't use multiple blank lines
#205: FILE: drivers/gpu/drm/i915/gvt/gtt.c:2960:
+
+

-:207: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#207: FILE: drivers/gpu/drm/i915/gvt/gtt.c:2962:
+static int walk_pdp_range(struct intel_vgpu *vgpu, u64 pdp,
+				  u64 start, u64 end, struct ppgtt_walk *walk)

-:219: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#219: FILE: drivers/gpu/drm/i915/gvt/gtt.c:2974:
+		ret = intel_gvt_hypervisor_read_gpa(vgpu,
+			(pdp & PAGE_MASK) + (index <<

-:231: CHECK:LINE_SPACING: Please don't use multiple blank lines
#231: FILE: drivers/gpu/drm/i915/gvt/gtt.c:2986:
+
+

-:233: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#233: FILE: drivers/gpu/drm/i915/gvt/gtt.c:2988:
+static int walk_pml4_range(struct intel_vgpu *vgpu, u64 pml4,
+				u64 start, u64 end, struct ppgtt_walk *walk)

-:244: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#244: FILE: drivers/gpu/drm/i915/gvt/gtt.c:2999:
+		ret = intel_gvt_hypervisor_read_gpa(vgpu,
+			(pml4 & PAGE_MASK) + (index <<

-:257: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#257: FILE: drivers/gpu/drm/i915/gvt/gtt.c:3012:
+static int intel_vgpu_pv_ppgtt_insert_4lvl(struct intel_vgpu *vgpu,
+		struct intel_vgpu_mm *mm,

-:275: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#275: FILE: drivers/gpu/drm/i915/gvt/gtt.c:3030:
+	walk.mfns = kmalloc_array(num_pages,
+			sizeof(unsigned long), GFP_KERNEL);

-:320: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#320: FILE: drivers/gpu/drm/i915/gvt/gtt.c:3075:
+int intel_vgpu_handle_pv_ppgtt_update(struct intel_vgpu *vgpu,
+		u32 action, struct pv_ppgtt_update *pv_ppgtt)

-:352: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#352: FILE: drivers/gpu/drm/i915/gvt/gtt.c:3107:
+		ret = intel_vgpu_pv_ppgtt_insert_4lvl(vgpu, mm,
+				pdp, start, length, cache_level);

-:391: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#391: FILE: drivers/gpu/drm/i915/gvt/gtt.h:290:
+int intel_vgpu_handle_pv_ppgtt_update(struct intel_vgpu *vgpu,
+		u32 action, struct pv_ppgtt_update *pv_ppgtt);

-:401: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'vgpu' - possible side-effects?
#401: FILE: drivers/gpu/drm/i915/gvt/gvt.h:56:
+#define VGPU_PVCAP(vgpu, cap)	\
+	((vgpu_vreg_t(vgpu, vgtif_reg(pv_caps)) & (cap)) \
+			&& vgpu->shared_page_enabled)

-:403: CHECK:LOGICAL_CONTINUATIONS: Logical continuations should be on the previous line
#403: FILE: drivers/gpu/drm/i915/gvt/gvt.h:58:
+	((vgpu_vreg_t(vgpu, vgtif_reg(pv_caps)) & (cap)) \
+			&& vgpu->shared_page_enabled)

-:427: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#427: FILE: drivers/gpu/drm/i915/gvt/handlers.c:1223:
+static int fetch_pv_command_buffer(struct intel_vgpu *vgpu,
+		struct vgpu_pv_ct_buffer_desc *desc,

-:448: CHECK:SPACING: spaces preferred around that '/' (ctx:VxV)
#448: FILE: drivers/gpu/drm/i915/gvt/handlers.c:1244:
+	head = desc->head/4;
 	                 ^

-:449: CHECK:SPACING: spaces preferred around that '/' (ctx:VxV)
#449: FILE: drivers/gpu/drm/i915/gvt/handlers.c:1245:
+	tail = desc->tail/4;
 	                 ^

-:450: CHECK:SPACING: spaces preferred around that '/' (ctx:VxV)
#450: FILE: drivers/gpu/drm/i915/gvt/handlers.c:1246:
+	size = desc->size/4;
 	                 ^

-:494: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#494: FILE: drivers/gpu/drm/i915/gvt/handlers.c:1290:
+		ret = intel_gvt_read_shared_page(vgpu, off, &data[avail],
+				(len - avail) * 4);

-:533: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#533: FILE: drivers/gpu/drm/i915/gvt/handlers.c:1329:
+	ret = intel_gvt_write_shared_page(vgpu, PV_DESC_OFF,
+			&desc, sizeof(desc));

total: 0 errors, 0 warnings, 28 checks, 518 lines checked
3d816a9bccbd drm/i915/gvt: GVTg support context submission pv optimization
-:71: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#71: FILE: drivers/gpu/drm/i915/gvt/handlers.c:1826:
+	if (intel_gvt_read_shared_page(vgpu, desc_off,
+			&execlist->elsp_dwords.data, 16))

-:92: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#92: FILE: drivers/gpu/drm/i915/gvt/handlers.c:1850:
+	if (VGPU_PVCAP(vgpu, PV_SUBMISSION) &&
+			data == PV_ACTION_ELSP_SUBMISSION)

total: 0 errors, 0 warnings, 2 checks, 69 lines checked

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 16+ messages in thread

* ✗ Fi.CI.SPARSE: warning for i915 vgpu PV to improve vgpu performance
  2019-07-08  1:35 [PATCH v7 0/8] i915 vgpu PV to improve vgpu performance Xiaolin Zhang
                   ` (9 preceding siblings ...)
  2019-07-08  1:48 ` ✗ Fi.CI.CHECKPATCH: warning for i915 vgpu PV to improve vgpu performance Patchwork
@ 2019-07-08  1:52 ` Patchwork
  2019-07-08  2:05 ` ✓ Fi.CI.BAT: success " Patchwork
  2019-07-08  7:44 ` ✗ Fi.CI.IGT: failure " Patchwork
  12 siblings, 0 replies; 16+ messages in thread
From: Patchwork @ 2019-07-08  1:52 UTC (permalink / raw)
  To: Xiaolin Zhang; +Cc: intel-gfx

== Series Details ==

Series: i915 vgpu PV to improve vgpu performance
URL   : https://patchwork.freedesktop.org/series/63333/
State : warning

== Summary ==

$ dim sparse origin/drm-tip
Sparse version: v0.5.2
Commit: drm/i915: introduced vgpu pv capability
Okay!

Commit: drm/i915: vgpu shared memory setup for pv optimization
Okay!

Commit: drm/i915: vgpu pv command buffer support
Okay!

Commit: drm/i915: vgpu ppgtt update pv optimization
Okay!

Commit: drm/i915: vgpu context submission pv optimization
+./include/uapi/linux/perf_event.h:147:56: warning: cast truncates bits from constant value (8000000000000000 becomes 0)

Commit: drm/i915/gvt: GVTg handle pv_caps PVINFO register
Okay!

Commit: drm/i915/gvt: GVTg handle shared_page setup
Okay!

Commit: drm/i915/gvt: GVTg support ppgtt pv optimization
+./include/linux/slab.h:666:13: error: undefined identifier '__builtin_mul_overflow'
+./include/linux/slab.h:666:13: warning: call with no type!

Commit: drm/i915/gvt: GVTg support context submission pv optimization
Okay!

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 16+ messages in thread

* ✓ Fi.CI.BAT: success for i915 vgpu PV to improve vgpu performance
  2019-07-08  1:35 [PATCH v7 0/8] i915 vgpu PV to improve vgpu performance Xiaolin Zhang
                   ` (10 preceding siblings ...)
  2019-07-08  1:52 ` ✗ Fi.CI.SPARSE: " Patchwork
@ 2019-07-08  2:05 ` Patchwork
  2019-07-08  7:44 ` ✗ Fi.CI.IGT: failure " Patchwork
  12 siblings, 0 replies; 16+ messages in thread
From: Patchwork @ 2019-07-08  2:05 UTC (permalink / raw)
  To: Xiaolin Zhang; +Cc: intel-gfx

== Series Details ==

Series: i915 vgpu PV to improve vgpu performance
URL   : https://patchwork.freedesktop.org/series/63333/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_6428 -> Patchwork_13557
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/

Known issues
------------

  Here are the changes found in Patchwork_13557 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_ctx_create@basic-files:
    - fi-icl-u2:          [PASS][1] -> [INCOMPLETE][2] ([fdo#107713] / [fdo#109100])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/fi-icl-u2/igt@gem_ctx_create@basic-files.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/fi-icl-u2/igt@gem_ctx_create@basic-files.html

  * igt@i915_selftest@live_hangcheck:
    - fi-kbl-x1275:       [PASS][3] -> [DMESG-WARN][4] ([fdo#111074])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/fi-kbl-x1275/igt@i915_selftest@live_hangcheck.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/fi-kbl-x1275/igt@i915_selftest@live_hangcheck.html

  * igt@i915_selftest@live_sanitycheck:
    - fi-icl-u3:          [PASS][5] -> [DMESG-WARN][6] ([fdo#107724]) +1 similar issue
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/fi-icl-u3/igt@i915_selftest@live_sanitycheck.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/fi-icl-u3/igt@i915_selftest@live_sanitycheck.html

  
#### Possible fixes ####

  * igt@i915_selftest@live_hangcheck:
    - fi-kbl-8809g:       [DMESG-WARN][7] ([fdo#111074]) -> [PASS][8]
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/fi-kbl-8809g/igt@i915_selftest@live_hangcheck.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/fi-kbl-8809g/igt@i915_selftest@live_hangcheck.html

  * igt@kms_chamelium@hdmi-hpd-fast:
    - fi-kbl-7500u:       [FAIL][9] ([fdo#109485]) -> [PASS][10]
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/fi-kbl-7500u/igt@kms_chamelium@hdmi-hpd-fast.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/fi-kbl-7500u/igt@kms_chamelium@hdmi-hpd-fast.html

  * igt@kms_frontbuffer_tracking@basic:
    - fi-icl-u3:          [FAIL][11] ([fdo#103167]) -> [PASS][12]
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/fi-icl-u3/igt@kms_frontbuffer_tracking@basic.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/fi-icl-u3/igt@kms_frontbuffer_tracking@basic.html
    - fi-hsw-peppy:       [DMESG-WARN][13] ([fdo#102614]) -> [PASS][14]
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/fi-hsw-peppy/igt@kms_frontbuffer_tracking@basic.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/fi-hsw-peppy/igt@kms_frontbuffer_tracking@basic.html

  
  [fdo#102614]: https://bugs.freedesktop.org/show_bug.cgi?id=102614
  [fdo#103167]: https://bugs.freedesktop.org/show_bug.cgi?id=103167
  [fdo#107713]: https://bugs.freedesktop.org/show_bug.cgi?id=107713
  [fdo#107724]: https://bugs.freedesktop.org/show_bug.cgi?id=107724
  [fdo#109100]: https://bugs.freedesktop.org/show_bug.cgi?id=109100
  [fdo#109485]: https://bugs.freedesktop.org/show_bug.cgi?id=109485
  [fdo#111074]: https://bugs.freedesktop.org/show_bug.cgi?id=111074


Participating hosts (53 -> 43)
------------------------------

  Missing    (10): fi-kbl-soraka fi-icl-u4 fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan fi-byt-clapper fi-icl-y fi-icl-dsi fi-bdw-samus 


Build changes
-------------

  * Linux: CI_DRM_6428 -> Patchwork_13557

  CI_DRM_6428: b48155ed6d4d6bdfb48ef317f8da109c11c98110 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5088: 3356087442806675438319578f1c964e51ee4965 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_13557: 3d816a9bccbd09f6b8493337b41850c98c775424 @ git://anongit.freedesktop.org/gfx-ci/linux


== Kernel 32bit build ==

Warning: Kernel 32bit buildtest failed:
https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/build_32bit.log

  CALL    scripts/checksyscalls.sh
  CALL    scripts/atomic/check-atomics.sh
  CHK     include/generated/compile.h
Kernel: arch/x86/boot/bzImage is ready  (#1)
  Building modules, stage 2.
  MODPOST 112 modules
ERROR: "__udivdi3" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
ERROR: "__divdi3" [drivers/gpu/drm/amd/amdgpu/amdgpu.ko] undefined!
scripts/Makefile.modpost:91: recipe for target '__modpost' failed
make[1]: *** [__modpost] Error 1
Makefile:1287: recipe for target 'modules' failed
make: *** [modules] Error 2


== Linux commits ==

3d816a9bccbd drm/i915/gvt: GVTg support context submission pv optimization
d342082a121e drm/i915/gvt: GVTg support ppgtt pv optimization
01adbd98b998 drm/i915/gvt: GVTg handle shared_page setup
8e123919b35c drm/i915/gvt: GVTg handle pv_caps PVINFO register
f7ae004ae398 drm/i915: vgpu context submission pv optimization
e882cebe14fe drm/i915: vgpu ppgtt update pv optimization
cc318f34fc2e drm/i915: vgpu pv command buffer support
35d4aff9c6de drm/i915: vgpu shared memory setup for pv optimization
7dc9248b7979 drm/i915: introduced vgpu pv capability

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 16+ messages in thread

* ✗ Fi.CI.IGT: failure for i915 vgpu PV to improve vgpu performance
  2019-07-08  1:35 [PATCH v7 0/8] i915 vgpu PV to improve vgpu performance Xiaolin Zhang
                   ` (11 preceding siblings ...)
  2019-07-08  2:05 ` ✓ Fi.CI.BAT: success " Patchwork
@ 2019-07-08  7:44 ` Patchwork
  12 siblings, 0 replies; 16+ messages in thread
From: Patchwork @ 2019-07-08  7:44 UTC (permalink / raw)
  To: Xiaolin Zhang; +Cc: intel-gfx

== Series Details ==

Series: i915 vgpu PV to improve vgpu performance
URL   : https://patchwork.freedesktop.org/series/63333/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_6428_full -> Patchwork_13557_full
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with Patchwork_13557_full absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_13557_full, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_13557_full:

### IGT changes ###

#### Possible regressions ####

  * igt@i915_selftest@live_hangcheck:
    - shard-iclb:         [PASS][1] -> [DMESG-FAIL][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/shard-iclb1/igt@i915_selftest@live_hangcheck.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/shard-iclb1/igt@i915_selftest@live_hangcheck.html

  * igt@runner@aborted:
    - shard-iclb:         NOTRUN -> [FAIL][3]
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/shard-iclb1/igt@runner@aborted.html

  
Known issues
------------

  Here are the changes found in Patchwork_13557_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_persistent_relocs@forked-thrash-inactive:
    - shard-snb:          [PASS][4] -> [INCOMPLETE][5] ([fdo#105411])
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/shard-snb7/igt@gem_persistent_relocs@forked-thrash-inactive.html
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/shard-snb1/igt@gem_persistent_relocs@forked-thrash-inactive.html

  * igt@i915_selftest@live_execlists:
    - shard-iclb:         [PASS][6] -> [INCOMPLETE][7] ([fdo#107713])
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/shard-iclb1/igt@i915_selftest@live_execlists.html
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/shard-iclb1/igt@i915_selftest@live_execlists.html

  * igt@kms_cursor_crc@pipe-a-cursor-64x21-random:
    - shard-skl:          [PASS][8] -> [FAIL][9] ([fdo#103232])
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/shard-skl10/igt@kms_cursor_crc@pipe-a-cursor-64x21-random.html
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/shard-skl6/igt@kms_cursor_crc@pipe-a-cursor-64x21-random.html

  * igt@kms_fbcon_fbt@psr-suspend:
    - shard-skl:          [PASS][10] -> [INCOMPLETE][11] ([fdo#104108]) +1 similar issue
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/shard-skl9/igt@kms_fbcon_fbt@psr-suspend.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/shard-skl5/igt@kms_fbcon_fbt@psr-suspend.html

  * igt@kms_flip@flip-vs-suspend:
    - shard-hsw:          [PASS][12] -> [INCOMPLETE][13] ([fdo#103540]) +1 similar issue
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/shard-hsw8/igt@kms_flip@flip-vs-suspend.html
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/shard-hsw6/igt@kms_flip@flip-vs-suspend.html

  * igt@kms_frontbuffer_tracking@fbc-rgb565-draw-pwrite:
    - shard-iclb:         [PASS][14] -> [FAIL][15] ([fdo#103167]) +5 similar issues
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/shard-iclb5/igt@kms_frontbuffer_tracking@fbc-rgb565-draw-pwrite.html
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/shard-iclb1/igt@kms_frontbuffer_tracking@fbc-rgb565-draw-pwrite.html

  * igt@kms_frontbuffer_tracking@fbc-suspend:
    - shard-skl:          [PASS][16] -> [FAIL][17] ([fdo#108040])
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/shard-skl10/igt@kms_frontbuffer_tracking@fbc-suspend.html
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/shard-skl6/igt@kms_frontbuffer_tracking@fbc-suspend.html

  * igt@kms_plane@plane-panning-bottom-right-suspend-pipe-b-planes:
    - shard-apl:          [PASS][18] -> [DMESG-WARN][19] ([fdo#108566]) +1 similar issue
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/shard-apl1/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-b-planes.html
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/shard-apl2/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-b-planes.html

  * igt@kms_plane_alpha_blend@pipe-a-constant-alpha-min:
    - shard-skl:          [PASS][20] -> [FAIL][21] ([fdo#108145]) +1 similar issue
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/shard-skl10/igt@kms_plane_alpha_blend@pipe-a-constant-alpha-min.html
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/shard-skl6/igt@kms_plane_alpha_blend@pipe-a-constant-alpha-min.html

  * igt@kms_plane_alpha_blend@pipe-b-coverage-7efc:
    - shard-skl:          [PASS][22] -> [FAIL][23] ([fdo#108145] / [fdo#110403])
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/shard-skl6/igt@kms_plane_alpha_blend@pipe-b-coverage-7efc.html
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/shard-skl7/igt@kms_plane_alpha_blend@pipe-b-coverage-7efc.html

  * igt@kms_psr@psr2_primary_mmap_cpu:
    - shard-iclb:         [PASS][24] -> [SKIP][25] ([fdo#109441]) +2 similar issues
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/shard-iclb2/igt@kms_psr@psr2_primary_mmap_cpu.html
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/shard-iclb5/igt@kms_psr@psr2_primary_mmap_cpu.html

  * igt@kms_setmode@basic:
    - shard-kbl:          [PASS][26] -> [FAIL][27] ([fdo#99912])
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/shard-kbl2/igt@kms_setmode@basic.html
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/shard-kbl4/igt@kms_setmode@basic.html

  
#### Possible fixes ####

  * igt@gem_exec_balancer@smoke:
    - shard-iclb:         [SKIP][28] ([fdo#110854]) -> [PASS][29]
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/shard-iclb6/igt@gem_exec_balancer@smoke.html
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/shard-iclb2/igt@gem_exec_balancer@smoke.html

  * igt@kms_cursor_legacy@cursora-vs-flipa-atomic-transitions-varying-size:
    - shard-apl:          [INCOMPLETE][30] ([fdo#103927]) -> [PASS][31]
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/shard-apl6/igt@kms_cursor_legacy@cursora-vs-flipa-atomic-transitions-varying-size.html
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/shard-apl8/igt@kms_cursor_legacy@cursora-vs-flipa-atomic-transitions-varying-size.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-indfb-draw-pwrite:
    - shard-iclb:         [FAIL][32] ([fdo#103167]) -> [PASS][33] +4 similar issues
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/shard-iclb5/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-indfb-draw-pwrite.html
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/shard-iclb6/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-indfb-draw-pwrite.html

  * igt@kms_pipe_crc_basic@suspend-read-crc-pipe-c:
    - shard-apl:          [DMESG-WARN][34] ([fdo#108566]) -> [PASS][35] +3 similar issues
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/shard-apl1/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-c.html
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/shard-apl6/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-c.html

  * igt@kms_plane_alpha_blend@pipe-a-coverage-7efc:
    - shard-skl:          [FAIL][36] ([fdo#108145]) -> [PASS][37]
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/shard-skl6/igt@kms_plane_alpha_blend@pipe-a-coverage-7efc.html
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/shard-skl7/igt@kms_plane_alpha_blend@pipe-a-coverage-7efc.html

  * igt@kms_plane_lowres@pipe-a-tiling-x:
    - shard-iclb:         [FAIL][38] ([fdo#103166]) -> [PASS][39]
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/shard-iclb7/igt@kms_plane_lowres@pipe-a-tiling-x.html
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/shard-iclb8/igt@kms_plane_lowres@pipe-a-tiling-x.html

  * igt@kms_psr@psr2_basic:
    - shard-iclb:         [SKIP][40] ([fdo#109441]) -> [PASS][41] +2 similar issues
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/shard-iclb1/igt@kms_psr@psr2_basic.html
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/shard-iclb2/igt@kms_psr@psr2_basic.html

  * igt@kms_setmode@basic:
    - shard-apl:          [FAIL][42] ([fdo#99912]) -> [PASS][43]
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/shard-apl5/igt@kms_setmode@basic.html
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/shard-apl1/igt@kms_setmode@basic.html

  * igt@kms_vblank@pipe-a-ts-continuation-suspend:
    - shard-skl:          [INCOMPLETE][44] ([fdo#104108]) -> [PASS][45]
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/shard-skl10/igt@kms_vblank@pipe-a-ts-continuation-suspend.html
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/shard-skl6/igt@kms_vblank@pipe-a-ts-continuation-suspend.html

  * igt@kms_vblank@pipe-a-wait-forked-busy:
    - shard-snb:          [SKIP][46] ([fdo#109271]) -> [PASS][47]
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/shard-snb2/igt@kms_vblank@pipe-a-wait-forked-busy.html
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/shard-snb6/igt@kms_vblank@pipe-a-wait-forked-busy.html

  * igt@tools_test@tools_test:
    - shard-apl:          [SKIP][48] ([fdo#109271]) -> [PASS][49]
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/shard-apl4/igt@tools_test@tools_test.html
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/shard-apl4/igt@tools_test@tools_test.html
    - shard-glk:          [SKIP][50] ([fdo#109271]) -> [PASS][51]
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/shard-glk9/igt@tools_test@tools_test.html
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/shard-glk4/igt@tools_test@tools_test.html

  
#### Warnings ####

  * igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-indfb-draw-mmap-cpu:
    - shard-skl:          [FAIL][52] ([fdo#103167]) -> [FAIL][53] ([fdo#108040])
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6428/shard-skl2/igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-indfb-draw-mmap-cpu.html
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/shard-skl8/igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-indfb-draw-mmap-cpu.html

  
  [fdo#103166]: https://bugs.freedesktop.org/show_bug.cgi?id=103166
  [fdo#103167]: https://bugs.freedesktop.org/show_bug.cgi?id=103167
  [fdo#103232]: https://bugs.freedesktop.org/show_bug.cgi?id=103232
  [fdo#103540]: https://bugs.freedesktop.org/show_bug.cgi?id=103540
  [fdo#103927]: https://bugs.freedesktop.org/show_bug.cgi?id=103927
  [fdo#104108]: https://bugs.freedesktop.org/show_bug.cgi?id=104108
  [fdo#105411]: https://bugs.freedesktop.org/show_bug.cgi?id=105411
  [fdo#107713]: https://bugs.freedesktop.org/show_bug.cgi?id=107713
  [fdo#108040]: https://bugs.freedesktop.org/show_bug.cgi?id=108040
  [fdo#108145]: https://bugs.freedesktop.org/show_bug.cgi?id=108145
  [fdo#108566]: https://bugs.freedesktop.org/show_bug.cgi?id=108566
  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109441]: https://bugs.freedesktop.org/show_bug.cgi?id=109441
  [fdo#110403]: https://bugs.freedesktop.org/show_bug.cgi?id=110403
  [fdo#110854]: https://bugs.freedesktop.org/show_bug.cgi?id=110854
  [fdo#99912]: https://bugs.freedesktop.org/show_bug.cgi?id=99912


Participating hosts (10 -> 10)
------------------------------

  No changes in participating hosts


Build changes
-------------

  * Linux: CI_DRM_6428 -> Patchwork_13557

  CI_DRM_6428: b48155ed6d4d6bdfb48ef317f8da109c11c98110 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5088: 3356087442806675438319578f1c964e51ee4965 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_13557: 3d816a9bccbd09f6b8493337b41850c98c775424 @ git://anongit.freedesktop.org/gfx-ci/linux
  piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @ git://anongit.freedesktop.org/piglit

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_13557/
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v7 5/9] drm/i915: vgpu context submission pv optimization
  2019-07-08  1:35 ` [PATCH v7 5/9] drm/i915: vgpu context submission " Xiaolin Zhang
@ 2019-07-08 10:40   ` Chris Wilson
  2019-07-22  7:05     ` Zhang, Xiaolin
  0 siblings, 1 reply; 16+ messages in thread
From: Chris Wilson @ 2019-07-08 10:40 UTC (permalink / raw)
  To: Xiaolin Zhang, intel-gfx; +Cc: intel-gvt-dev

Quoting Xiaolin Zhang (2019-07-08 02:35:18)
> +static void pv_submit(struct intel_engine_cs *engine,
> +              struct i915_request **out,
> +              struct i915_request **end)
> +{
> +       struct intel_engine_execlists * const execlists = &engine->execlists;
> +       struct i915_virtual_gpu_pv *pv = engine->i915->vgpu.pv;
> +       struct pv_submission *pv_elsp = pv->pv_elsp[engine->hw_id];
> +       struct i915_request *rq;
> +
> +       u64 descs[2];
> +       int n, err;
> +
> +       memset(descs, 0, sizeof(descs));
> +       n = 0;
> +       do {
> +               rq = *out++;
> +               descs[n++] = execlists_update_context(rq);
> +       } while (out != end);
> +
> +       for (n = 0; n < execlists_num_ports(execlists); n++)
> +               pv_elsp->descs[n] = descs[n];

You can polish this a bit, minor nit.
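E.g. fold the two loops into one and drop the temporary array. Untested,
and just one way to read the nit, using only what is visible in the hunk
above:

	/* write each descriptor straight into the shared page,
	 * zeroing any port without a request
	 */
	for (n = 0; n < execlists_num_ports(execlists); n++)
		pv_elsp->descs[n] = out != end ?
				    execlists_update_context(*out++) : 0;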

> +       writel(PV_ACTION_ELSP_SUBMISSION, execlists->submit_reg);
> +
> +#define done (READ_ONCE(pv_elsp->submitted) == true)
> +       err = wait_for_us(done, 1);
> +       if (err)
> +               err = wait_for(done, 1);

Strictly, you need to use wait_for_atomic_us() [under a spinlock here],
and there's no need for a second pass since you are not allowed to sleep
anyway. So just set the timeout to 1000us.
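Something like this should do, i.e. a single atomic wait (untested sketch,
assuming we are already under the engine spinlock as noted):

	/* one atomic wait; sleeping is not allowed in this context */
	err = wait_for_atomic_us(READ_ONCE(pv_elsp->submitted), 1000);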

> +#undef done
> +
> +       if (unlikely(err))
> +               DRM_ERROR("PV (%s) workload submission failed\n", engine->name);
> +
> +       pv_elsp->submitted = false;

However, that looks solid wrt serialisation of this engine with its
pv host, without cross-interference (at least in the comms channel).

If you want to get fancy, you should be able to simply not dequeue until
!pv_elsp->submitted so the wait-for-ack occurs naturally. So long as the
pv host plays nicely, we should always see submitted acked before the
request is signaled. Give or take problems with preemption, and with the
pv host being a black box that may allow requests to complete so that our
submission becomes a no-op (and so generates no interrupt to allow further
submission). Indeed, I would strongly recommend you use the delayed ack
plus jiffie timer to avoid the no-op submission problem.
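Purely to illustrate the shape of that (the retry_timer field is made up
for this sketch and is not in the patch; the real hook would be whatever
retry path you already have):

	/* at the top of the pv dequeue path */
	if (READ_ONCE(pv_elsp->submitted)) {
		/*
		 * The pv host has not consumed the previous ELSP write
		 * yet, so don't submit on top of it.  Instead rearm a
		 * jiffie timer (retry_timer is hypothetical here) so
		 * that a no-op submission, where the host has already
		 * completed the requests, cannot leave us waiting for
		 * a completion interrupt that never comes.
		 */
		mod_timer(&pv->retry_timer, jiffies + 1);
		return;
	}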

If you want to prove this in a bunch of mocked-up selftests that provide
the pv channel on top of the local driver...
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v7 5/9] drm/i915: vgpu context submission pv optimization
  2019-07-08 10:40   ` Chris Wilson
@ 2019-07-22  7:05     ` Zhang, Xiaolin
  0 siblings, 0 replies; 16+ messages in thread
From: Zhang, Xiaolin @ 2019-07-22  7:05 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx; +Cc: intel-gvt-dev

On 07/08/2019 06:41 PM, Chris Wilson wrote:
> Quoting Xiaolin Zhang (2019-07-08 02:35:18)
>> +static void pv_submit(struct intel_engine_cs *engine,
>> +              struct i915_request **out,
>> +              struct i915_request **end)
>> +{
>> +       struct intel_engine_execlists * const execlists = &engine->execlists;
>> +       struct i915_virtual_gpu_pv *pv = engine->i915->vgpu.pv;
>> +       struct pv_submission *pv_elsp = pv->pv_elsp[engine->hw_id];
>> +       struct i915_request *rq;
>> +
>> +       u64 descs[2];
>> +       int n, err;
>> +
>> +       memset(descs, 0, sizeof(descs));
>> +       n = 0;
>> +       do {
>> +               rq = *out++;
>> +               descs[n++] = execlists_update_context(rq);
>> +       } while (out != end);
>> +
>> +       for (n = 0; n < execlists_num_ports(execlists); n++)
>> +               pv_elsp->descs[n] = descs[n];
> You can polish this a bit, minor nit.
Sure.
>
>> +       writel(PV_ACTION_ELSP_SUBMISSION, execlists->submit_reg);
>> +
>> +#define done (READ_ONCE(pv_elsp->submitted) == true)
>> +       err = wait_for_us(done, 1);
>> +       if (err)
>> +               err = wait_for(done, 1);
> Strictly, you need to use wait_for_atomic_us() [under a spinlock here],
> and there's no need for a second pass since you are not allowed to sleep
> anyway. So just set the timeout to 1000us.
Sure.
>> +#undef done
>> +
>> +       if (unlikely(err))
>> +               DRM_ERROR("PV (%s) workload submission failed\n", engine->name);
>> +
>> +       pv_elsp->submitted = false;
> However, that looks solid wrt serialisation of this engine with its
> pv host, without cross-interference (at least in the comms channel).
>
> If you want to get fancy, you should be able to simply not dequeue until
> !pv_elsp->submitted so the wait-for-ack occurs naturally. So long as the
> pv host plays nicely, we should always see submitted acked before the
> request is signaled. Give or take problems with preemption, and with the
> pv host being a black box that may allow requests to complete so that our
> submission becomes a no-op (and so generates no interrupt to allow further
> submission). Indeed, I would strongly recommend you use the delayed ack
> plus jiffie timer to avoid the no-op submission problem.
I will implement this suggestion. Thanks for your feedback, Chris.
-BRs, Xiaolin
> If you want to prove this in a bunch of mocked-up selftests that provide
> the pv channel on top of the local driver...
> -Chris


_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2019-07-22  7:05 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-07-08  1:35 [PATCH v7 0/8] i915 vgpu PV to improve vgpu performance Xiaolin Zhang
2019-07-08  1:35 ` [PATCH v7 1/9] drm/i915: introduced vgpu pv capability Xiaolin Zhang
2019-07-08  1:35 ` [PATCH v7 2/9] drm/i915: vgpu shared memory setup for pv optimization Xiaolin Zhang
2019-07-08  1:35 ` [PATCH v7 3/9] drm/i915: vgpu pv command buffer support Xiaolin Zhang
2019-07-08  1:35 ` [PATCH v7 4/9] drm/i915: vgpu ppgtt update pv optimization Xiaolin Zhang
2019-07-08  1:35 ` [PATCH v7 5/9] drm/i915: vgpu context submission " Xiaolin Zhang
2019-07-08 10:40   ` Chris Wilson
2019-07-22  7:05     ` Zhang, Xiaolin
2019-07-08  1:35 ` [PATCH v7 6/9] drm/i915/gvt: GVTg handle pv_caps PVINFO register Xiaolin Zhang
2019-07-08  1:35 ` [PATCH v7 7/9] drm/i915/gvt: GVTg handle shared_page setup Xiaolin Zhang
2019-07-08  1:35 ` [PATCH v7 8/9] drm/i915/gvt: GVTg support ppgtt pv optimization Xiaolin Zhang
2019-07-08  1:35 ` [PATCH v7 9/9] drm/i915/gvt: GVTg support context submission " Xiaolin Zhang
2019-07-08  1:48 ` ✗ Fi.CI.CHECKPATCH: warning for i915 vgpu PV to improve vgpu performance Patchwork
2019-07-08  1:52 ` ✗ Fi.CI.SPARSE: " Patchwork
2019-07-08  2:05 ` ✓ Fi.CI.BAT: success " Patchwork
2019-07-08  7:44 ` ✗ Fi.CI.IGT: failure " Patchwork
