* [Intel-gfx] [PATCH v2 00/12] enhanced i915 vgpu with PV feature support
From: Xiaolin Zhang @ 2020-10-09  0:04 UTC
  To: intel-gvt-dev, intel-gfx; +Cc: chris, zhiyuan.lv

This is a new i915 vGPU PV design, based on last year's proposal [1].

This new series supersedes the old one, which is discontinued in favour
of this new design.

To improve vGPU performance, PV optimizations can be implemented in
different GPU resource domains to reduce data access overhead and
modeling complexity.

In this patch set, PPGTT and GGTT are identified as PV optimizations from
the vGPU memory resource point of view, and workload submission is
identified as a PV optimization from the vGPU compute resource point of
view. So 3 PV features (PV PPGTT, PV GGTT and PV submission) are designed
and implemented to better support the vGPU model.

To provide the mechanism for PV feature development and implementation,
a simple PV framework is implemented, consisting of 3 sub-items:
a. PV capability: indicates which PV capabilities are provided by the
guest system and by the host GVTg subsystem.
b. PV shared memory: memory allocated in the guest and shared between
guest and host for data exchange, i.e. PV command & PV data communication.
c. PV command transport protocol: on top of the PV shared memory, it
defines the communication protocol & channel between guest and host to
circulate PV commands and PV command data. A condensed sketch of the
resulting guest-side flow is shown below.
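
Illustrative only (error handling dropped; details are in the patches
in this series):

    /* 1. capability negotiation through the pv_caps PVINFO register */
    i915->vgpu.pv_caps = PV_PPGTT | PV_GGTT | PV_SUBMISSION;
    i915->vgpu.pv_caps &= readl(shared_area + vgtif_offset(pv_caps));
    writel(i915->vgpu.pv_caps, shared_area + vgtif_offset(pv_caps));

    /* 2. PV shared memory: one guest page, its GPA handed to GVTg */
    base = (void *)get_zeroed_page(GFP_KERNEL);
    writeq(__pa(base), shared_area + vgtif_offset(shared_page_gpa));

    /* 3. CTB: PV commands and their data circulated via that page */
    data[0] = PV_CMD_BIND_PPGTT;
    memcpy(&data[1], &pvvma, sizeof(pvvma));
    i915->vgpu.pv->send(i915, data, 1 + sizeof(pvvma) / 4);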

For PV PPGTT, to improve efficiency and reduce the complexity of PPGTT
support, vGPU PPGTT page table operations are implemented in a PV fashion,
with PV versions of bind/unbind for the PPGTT vma ops. The PV version of
the PPGTT vma ops uses the CTB protocol to send the PV PPGTT command,
along with the data struct pv_vma, from guest to GVT; GVT then implements
the PV_CMD_BIND_PPGTT and PV_CMD_UNBIND_PPGTT command handlers to achieve
GVA->HPA address translation.

For PV GGTT, it is similar to PV PPGTT, except that it uses the
PV_CMD_BIND_GGTT and PV_CMD_UNBIND_GGTT PV commands.

For PV workload submission, a PV version of the workload submission
backend is implemented, with the engine submission data placed in the
shared memory, meanwhile eliminating execlists CSB processing and the
context switch interrupt in the submission routine to improve efficiency
and reduce complexity.

Based on experiments, small workloads such as glmark2 and the Antutu 3D
benchmark benefit from these PV features with at least a 10% performance
gain. Large workloads such as media and 3D also get some benefit, but
not much.

[1]: https://patchwork.kernel.org/cover/11148059/

v2: addressed v1 comments from Jani Nikula and dim checkpatch issues.

Xiaolin Zhang (12):
  drm/i915: introduced vgpu pv capability
  drm/i915: vgpu shared memory setup for pv support
  drm/i915: vgpu pv command buffer transport protocol
  drm/i915: vgpu ppgtt page table pv support
  drm/i915: vgpu ggtt page table pv support
  drm/i915: vgpu workload submission pv support
  drm/i915/gvt: GVTg expose pv_caps PVINFO register
  drm/i915/gvt: GVTg handle guest shared_page setup
  drm/i915/gvt: GVTg support vgpu pv CTB protocol
  drm/i915/gvt: GVTg support ppgtt pv operations
  drm/i915/gvt: GVTg support ggtt pv operations
  drm/i915/gvt: GVTg support pv workload submission

 drivers/gpu/drm/i915/Makefile              |   2 +-
 drivers/gpu/drm/i915/gt/gen8_ppgtt.c       |   4 +-
 drivers/gpu/drm/i915/gt/intel_lrc.c        |   2 +
 drivers/gpu/drm/i915/gvt/gtt.c             | 258 ++++++++++++++
 drivers/gpu/drm/i915/gvt/gtt.h             |   4 +
 drivers/gpu/drm/i915/gvt/gvt.h             |  19 +-
 drivers/gpu/drm/i915/gvt/handlers.c        | 274 ++++++++++++++-
 drivers/gpu/drm/i915/gvt/vgpu.c            |  49 +++
 drivers/gpu/drm/i915/i915_drv.c            |   2 +
 drivers/gpu/drm/i915/i915_drv.h            |   3 +
 drivers/gpu/drm/i915/i915_gem.c            |   4 +-
 drivers/gpu/drm/i915/i915_pvinfo.h         |   9 +-
 drivers/gpu/drm/i915/i915_vgpu.c           | 543 ++++++++++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_vgpu.h           | 125 +++++++
 drivers/gpu/drm/i915/intel_pv_submission.c | 325 +++++++++++++++++
 15 files changed, 1616 insertions(+), 7 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/intel_pv_submission.c

-- 
2.7.4


* [Intel-gfx] [PATCH v2 01/12] drm/i915: introduced vgpu pv capability
From: Xiaolin Zhang @ 2020-10-09  0:04 UTC
  To: intel-gvt-dev, intel-gfx; +Cc: chris, zhiyuan.lv

To enable the vGPU PV feature, a PV capability is introduced for the
guest via the new pv_caps member in struct i915_virtual_gpu, and for the
host GVT via the new pv_caps register in struct vgt_if.

Both are used to control the individual PV feature support in each
domain; the final PV caps are negotiated at runtime between guest and
host, as in the example below.

It also adds the VGT_CAPS_PV capability bit, used by the guest to query
whether the host GVTg supports any PV feature at all.
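
For example, with illustrative cap values taken from later patches in
this series:

    guest advertises  pv_caps = PV_PPGTT | PV_GGTT  = 0x3
    host exposes      vgt_if.pv_caps = PV_PPGTT     = 0x1
    negotiated        0x3 & 0x1                     = 0x1

The guest then writes 0x1 back to vgt_if.pv_caps so that both sides
agree on the final runtime set.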

v2: addressed dim checkpatch issues and Jani Nikula's comment.

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h    |  1 +
 drivers/gpu/drm/i915/i915_pvinfo.h |  5 ++-
 drivers/gpu/drm/i915/i915_vgpu.c   | 64 +++++++++++++++++++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_vgpu.h   | 10 ++++++
 4 files changed, 78 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index eef9a82..b854a53 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -799,6 +799,7 @@ struct i915_virtual_gpu {
 	struct mutex lock; /* serialises sending of g2v_notify command pkts */
 	bool active;
 	u32 caps;
+	u32 pv_caps;
 };
 
 struct intel_cdclk_config {
diff --git a/drivers/gpu/drm/i915/i915_pvinfo.h b/drivers/gpu/drm/i915/i915_pvinfo.h
index 683e97a..8b0dc25 100644
--- a/drivers/gpu/drm/i915/i915_pvinfo.h
+++ b/drivers/gpu/drm/i915/i915_pvinfo.h
@@ -57,6 +57,7 @@ enum vgt_g2v_type {
 #define VGT_CAPS_FULL_PPGTT		BIT(2)
 #define VGT_CAPS_HWSP_EMULATION		BIT(3)
 #define VGT_CAPS_HUGE_GTT		BIT(4)
+#define VGT_CAPS_PV			BIT(5)
 
 struct vgt_if {
 	u64 magic;		/* VGT_MAGIC */
@@ -109,7 +110,9 @@ struct vgt_if {
 	u32 execlist_context_descriptor_lo;
 	u32 execlist_context_descriptor_hi;
 
-	u32  rsv7[0x200 - 24];    /* pad to one page */
+	u32 pv_caps;
+
+	u32  rsv7[0x200 - 25];    /* pad to one page */
 } __packed;
 
 #define vgtif_offset(x) (offsetof(struct vgt_if, x))
diff --git a/drivers/gpu/drm/i915/i915_vgpu.c b/drivers/gpu/drm/i915/i915_vgpu.c
index 70fca72..d0c9cee 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.c
+++ b/drivers/gpu/drm/i915/i915_vgpu.c
@@ -98,7 +98,15 @@ void intel_vgpu_detect(struct drm_i915_private *dev_priv)
 
 	dev_priv->vgpu.active = true;
 	mutex_init(&dev_priv->vgpu.lock);
-	drm_info(&dev_priv->drm, "Virtual GPU for Intel GVT-g detected.\n");
+
+	if (intel_vgpu_pv_detect_caps(dev_priv, shared_area)) {
+		drm_info(&dev_priv->drm,
+			 "Virtual GPU for Intel GVT-g detected with PV Capabilities.\n");
+		drm_dbg(&dev_priv->drm, "VGPU pv caps: 0x%x\n", dev_priv->vgpu.pv_caps);
+	} else {
+		drm_info(&dev_priv->drm,
+			 "Virtual GPU for Intel GVT-g detected.\n");
+	}
 
 out:
 	pci_iounmap(pdev, shared_area);
@@ -134,6 +142,19 @@ bool intel_vgpu_has_huge_gtt(struct drm_i915_private *dev_priv)
 	return dev_priv->vgpu.caps & VGT_CAPS_HUGE_GTT;
 }
 
+static bool intel_vgpu_pv_check_cap(struct drm_i915_private *dev_priv,
+				    enum intel_vgpu_pv_caps cap)
+{
+	return (dev_priv->vgpu.active &&
+		(dev_priv->vgpu.caps & VGT_CAPS_PV) &&
+		(dev_priv->vgpu.pv_caps & cap));
+}
+
+static bool intel_vgpu_has_pv_caps(struct drm_i915_private *dev_priv)
+{
+	return dev_priv->vgpu.caps & VGT_CAPS_PV;
+}
+
 struct _balloon_info_ {
 	/*
 	 * There are up to 2 regions per mappable/unmappable graphic
@@ -336,3 +357,44 @@ int intel_vgt_balloon(struct i915_ggtt *ggtt)
 	drm_err(&dev_priv->drm, "VGT balloon fail\n");
 	return ret;
 }
+
+/*
+ * i915 vgpu PV support for Linux
+ */
+
+/*
+ * Config vgpu PV ops for different PV capabilities
+ */
+void intel_vgpu_pv_config_caps(struct drm_i915_private *i915,
+			       enum intel_vgpu_pv_caps cap, void *data)
+{
+	if (!intel_vgpu_pv_check_cap(i915, cap))
+		return;
+}
+
+/*
+ * intel_vgpu_pv_detect_caps - detect virtual GPU PV capabilities
+ *
+ * This function is called at the initialization stage to detect VGPU
+ * PV capabilities
+ */
+bool intel_vgpu_pv_detect_caps(struct drm_i915_private *i915,
+			       void __iomem *shared_area)
+{
+	u32 gvt_pvcaps;
+	u32 pvcaps = 0;
+
+	if (!intel_vgpu_has_pv_caps(i915))
+		return false;
+
+	/* PV capability negotiation between PV guest and GVT */
+	gvt_pvcaps = readl(shared_area + vgtif_offset(pv_caps));
+	pvcaps = i915->vgpu.pv_caps & gvt_pvcaps;
+	i915->vgpu.pv_caps = pvcaps;
+	writel(pvcaps, shared_area + vgtif_offset(pv_caps));
+
+	if (!pvcaps)
+		return false;
+
+	return true;
+}
diff --git a/drivers/gpu/drm/i915/i915_vgpu.h b/drivers/gpu/drm/i915/i915_vgpu.h
index ffbb77d..1dccea6 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.h
+++ b/drivers/gpu/drm/i915/i915_vgpu.h
@@ -29,6 +29,11 @@
 struct drm_i915_private;
 struct i915_ggtt;
 
+/* intel vGPU PV capabilities */
+enum intel_vgpu_pv_caps {
+	PV_NONE = 0,
+};
+
 void intel_vgpu_detect(struct drm_i915_private *i915);
 bool intel_vgpu_active(struct drm_i915_private *i915);
 void intel_vgpu_register(struct drm_i915_private *i915);
@@ -39,4 +44,9 @@ bool intel_vgpu_has_huge_gtt(struct drm_i915_private *i915);
 int intel_vgt_balloon(struct i915_ggtt *ggtt);
 void intel_vgt_deballoon(struct i915_ggtt *ggtt);
 
+/* i915 vgpu pv related functions */
+bool intel_vgpu_pv_detect_caps(struct drm_i915_private *i915,
+			       void __iomem *shared_area);
+void intel_vgpu_pv_config_caps(struct drm_i915_private *i915,
+			       enum intel_vgpu_pv_caps cap, void *data);
 #endif /* _I915_VGPU_H_ */
-- 
2.7.4


* [Intel-gfx] [PATCH v2 02/12] drm/i915: vgpu shared memory setup for pv support
From: Xiaolin Zhang @ 2020-10-09  0:04 UTC
  To: intel-gvt-dev, intel-gfx; +Cc: chris, zhiyuan.lv

To support vGPU PV features, a common shared memory is set up, used for
communication and data exchange between guest and host GVTg to reduce
data access overhead and trap cost.

The guest i915 allocates this common memory (one page in size) and then
passes its physical address to host GVTg through a PVINFO register, so
that host GVTg can access this shared guest page without trap cost, using
the hypervisor's facility (see the host-side sketch below).

The guest i915 sends the VGT_G2V_SHARED_PAGE_REGISTER notification to
host GVTg once the shared memory setup has finished successfully.

The layout of the shared_page is defined as well; the first part is the
PV version information, used for compatibility support.
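
The host-side handling arrives with the GVTg patches later in this
series; its expected shape is roughly the sketch below (helper names
are from the existing GVTg MPT layer; exact accessors may differ):

    /* GVTg: on the VGT_G2V_SHARED_PAGE_REGISTER notification, publish
     * the PV version into the guest page named by shared_page_gpa.
     */
    struct intel_vgpu_pv_shared_page ver = {
            .ver_major = PV_MAJOR,
            .ver_minor = PV_MINOR,
    };
    u64 gpa = vgpu_vreg64_t(vgpu, vgtif_reg(shared_page_gpa));

    intel_gvt_hypervisor_write_gpa(vgpu, gpa, &ver, sizeof(ver));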

v2: addressed dim checkpatch issues and Jani Nikula's comment.

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.c    |  2 +
 drivers/gpu/drm/i915/i915_drv.h    |  2 +
 drivers/gpu/drm/i915/i915_pvinfo.h |  5 +-
 drivers/gpu/drm/i915/i915_vgpu.c   | 95 ++++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_vgpu.h   | 14 ++++++
 5 files changed, 117 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index ebc1506..2b8e846 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -969,6 +969,8 @@ static void i915_driver_release(struct drm_device *dev)
 
 	disable_rpm_wakeref_asserts(rpm);
 
+	intel_vgpu_destroy(dev_priv);
+
 	i915_gem_driver_release(dev_priv);
 
 	intel_memory_regions_driver_release(dev_priv);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index b854a53..9af915a 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -800,6 +800,8 @@ struct i915_virtual_gpu {
 	bool active;
 	u32 caps;
 	u32 pv_caps;
+
+	struct intel_vgpu_pv *pv;
 };
 
 struct intel_cdclk_config {
diff --git a/drivers/gpu/drm/i915/i915_pvinfo.h b/drivers/gpu/drm/i915/i915_pvinfo.h
index 8b0dc25..1d44876 100644
--- a/drivers/gpu/drm/i915/i915_pvinfo.h
+++ b/drivers/gpu/drm/i915/i915_pvinfo.h
@@ -48,6 +48,7 @@ enum vgt_g2v_type {
 	VGT_G2V_PPGTT_L4_PAGE_TABLE_DESTROY,
 	VGT_G2V_EXECLIST_CONTEXT_CREATE,
 	VGT_G2V_EXECLIST_CONTEXT_DESTROY,
+	VGT_G2V_SHARED_PAGE_REGISTER,
 	VGT_G2V_MAX,
 };
 
@@ -112,7 +113,9 @@ struct vgt_if {
 
 	u32 pv_caps;
 
-	u32  rsv7[0x200 - 25];    /* pad to one page */
+	u64 shared_page_gpa;
+
+	u32  rsv7[0x200 - 27];    /* pad to one page */
 } __packed;
 
 #define vgtif_offset(x) (offsetof(struct vgt_if, x))
diff --git a/drivers/gpu/drm/i915/i915_vgpu.c b/drivers/gpu/drm/i915/i915_vgpu.c
index d0c9cee..146877f 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.c
+++ b/drivers/gpu/drm/i915/i915_vgpu.c
@@ -112,6 +112,17 @@ void intel_vgpu_detect(struct drm_i915_private *dev_priv)
 	pci_iounmap(pdev, shared_area);
 }
 
+void intel_vgpu_destroy(struct drm_i915_private *i915)
+{
+	struct intel_vgpu_pv *pv = i915->vgpu.pv;
+
+	if (!intel_vgpu_active(i915) || !pv)
+		return;
+
+	__free_page(virt_to_page(pv->mem));
+	kfree(pv);
+}
+
 void intel_vgpu_register(struct drm_i915_private *i915)
 {
 	/*
@@ -363,6 +374,84 @@ int intel_vgt_balloon(struct i915_ggtt *ggtt)
  */
 
 /*
+ * shared_page setup for VGPU PV features
+ */
+static int intel_vgpu_pv_setup_shared_page(struct drm_i915_private *i915,
+					   void __iomem *shared_area)
+{
+	void __iomem *addr;
+	struct intel_vgpu_pv *pv;
+	struct intel_vgpu_pv_shared_page *base;
+	u64 gpa;
+	u16 ver_maj, ver_min;
+	int ret = 0;
+
+	/* We allocate 1 page shared between guest and GVT for data exchange.
+	 *       _______________________________
+	 *      |version                        |
+	 *      |_______________________________PAGE/8
+	 *      |                               |
+	 *      |_______________________________PAGE/4
+	 *      |                               |
+	 *      |                               |
+	 *      |                               |
+	 *      |_______________________________PAGE/2
+	 *      |                               |
+	 *      |                               |
+	 *      |                               |
+	 *      |                               |
+	 *      |                               |
+	 *      |                               |
+	 *      |                               |
+	 *      |_______________________________|
+	 *
+	 * 0 offset: PV version area
+	 */
+
+	base = (struct intel_vgpu_pv_shared_page *)get_zeroed_page(GFP_KERNEL);
+	if (!base) {
+		drm_dbg(&i915->drm, "out of memory for shared memory\n");
+		return -ENOMEM;
+	}
+
+	/* pass guest memory pa address to GVT and then read back to verify */
+	gpa = __pa(base);
+	addr = shared_area + vgtif_offset(shared_page_gpa);
+	writeq(gpa, addr);
+	if (gpa != readq(addr)) {
+		drm_dbg(&i915->drm, "write shared_page_gpa failed\n");
+		ret = -EIO;
+		goto err;
+	}
+
+	addr = shared_area + vgtif_offset(g2v_notify);
+	writel(VGT_G2V_SHARED_PAGE_REGISTER, addr);
+
+	ver_maj = base->ver_major;
+	ver_min = base->ver_minor;
+	if (ver_maj != PV_MAJOR || ver_min != PV_MINOR) {
+		drm_dbg(&i915->drm, "vgpu PV version incompatible\n");
+		ret = -EIO;
+		goto err;
+	}
+
+	pv = kzalloc(sizeof(*pv), GFP_KERNEL);
+	if (!pv) {
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	drm_dbg(&i915->drm,
+		"vgpu PV version major %d and minor %d\n", ver_maj, ver_min);
+	i915->vgpu.pv = pv;
+	pv->mem = base;
+	return ret;
+err:
+	__free_page(virt_to_page(base));
+	return ret;
+}
+
+/*
  * Config vgpu PV ops for different PV capabilities
  */
 void intel_vgpu_pv_config_caps(struct drm_i915_private *i915,
@@ -396,5 +485,11 @@ bool intel_vgpu_pv_detect_caps(struct drm_i915_private *i915,
 	if (!pvcaps)
 		return false;
 
+	if (intel_vgpu_pv_setup_shared_page(i915, shared_area)) {
+		i915->vgpu.pv_caps = 0;
+		writel(0, shared_area + vgtif_offset(pv_caps));
+		return false;
+	}
+
 	return true;
 }
diff --git a/drivers/gpu/drm/i915/i915_vgpu.h b/drivers/gpu/drm/i915/i915_vgpu.h
index 1dccea6..ab8d25b 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.h
+++ b/drivers/gpu/drm/i915/i915_vgpu.h
@@ -29,12 +29,26 @@
 struct drm_i915_private;
 struct i915_ggtt;
 
+#define PV_MAJOR        0
+#define PV_MINOR        1
+
 /* intel vGPU PV capabilities */
 enum intel_vgpu_pv_caps {
 	PV_NONE = 0,
 };
 
+/* A shared memory(4KB) between GVTg and vgpu allocated by guest */
+struct intel_vgpu_pv_shared_page {
+	u16 ver_major;
+	u16 ver_minor;
+};
+
+struct intel_vgpu_pv {
+	struct intel_vgpu_pv_shared_page *mem;
+};
+
 void intel_vgpu_detect(struct drm_i915_private *i915);
+void intel_vgpu_destroy(struct drm_i915_private *i915);
 bool intel_vgpu_active(struct drm_i915_private *i915);
 void intel_vgpu_register(struct drm_i915_private *i915);
 bool intel_vgpu_has_full_ppgtt(struct drm_i915_private *i915);
-- 
2.7.4


* [Intel-gfx] [PATCH v2 03/12] drm/i915: vgpu pv command buffer transport protocol
From: Xiaolin Zhang @ 2020-10-09  0:04 UTC
  To: intel-gvt-dev, intel-gfx; +Cc: chris, zhiyuan.lv

Based on the common shared memory, the vGPU PV command transport buffer
(CTB) protocol is implemented: a simple PV command buffer ring with a PV
command descriptor, used to perform single-direction, guest-to-GVT
communication between guest and host GVTg.

With this CTB, the guest can send a PV command, together with its PV
data, to the host, to have the PV command performed on the host side. A
worked example of the message encoding follows below.
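
As a worked example of the header encoding (DW0): PV_CMD_BIND_PPGTT
from a later patch has action code 0x1 and carries a 32-byte struct
pv_vma, so len = 1 fence dword + 8 payload dwords = 9, and:

    header = (9 << PV_CT_MSG_LEN_SHIFT)        /* len, in dwords */
           | PV_CT_MSG_WRITE_FENCE_TO_DESC     /* BIT(8)         */
           | (0x1 << PV_CT_MSG_ACTION_SHIFT)   /* action code    */
           = 0x00010109;

    ring contents: [0x00010109][fence][8 dwords of struct pv_vma]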

v2: addressed dim checkpatch issues and Jani Nikula's comment.

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/i915_pvinfo.h |   1 +
 drivers/gpu/drm/i915/i915_vgpu.c   | 202 ++++++++++++++++++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_vgpu.h   |  53 ++++++++++
 3 files changed, 254 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_pvinfo.h b/drivers/gpu/drm/i915/i915_pvinfo.h
index 1d44876..ded93c5 100644
--- a/drivers/gpu/drm/i915/i915_pvinfo.h
+++ b/drivers/gpu/drm/i915/i915_pvinfo.h
@@ -49,6 +49,7 @@ enum vgt_g2v_type {
 	VGT_G2V_EXECLIST_CONTEXT_CREATE,
 	VGT_G2V_EXECLIST_CONTEXT_DESTROY,
 	VGT_G2V_SHARED_PAGE_REGISTER,
+	VGT_G2V_PV_SEND_TRIGGER,
 	VGT_G2V_MAX,
 };
 
diff --git a/drivers/gpu/drm/i915/i915_vgpu.c b/drivers/gpu/drm/i915/i915_vgpu.c
index 146877f..c833823 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.c
+++ b/drivers/gpu/drm/i915/i915_vgpu.c
@@ -373,6 +373,189 @@ int intel_vgt_balloon(struct i915_ggtt *ggtt)
  * i915 vgpu PV support for Linux
  */
 
+/**
+ * intel_vgpu_pv_wait_desc_update - Wait for the command buffer descriptor update.
+ * @i915:	i915 device private
+ * @desc:	buffer descriptor
+ * @fence:	response fence
+ * @status:	placeholder for status
+ *
+ * GVTg will update command buffer descriptor with new fence and status
+ * after processing the command identified by the fence. Wait for
+ * specified fence and then read from the descriptor status of the
+ * command.
+ *
+ * Return:
+ * *	0 response received (status is valid)
+ * *	-ETIMEDOUT no response within hardcoded timeout
+ */
+static int
+intel_vgpu_pv_wait_desc_update(struct drm_i915_private *i915,
+			       struct intel_vgpu_pv_ct_buf_desc *desc,
+			       u32 fence, u32 *status)
+{
+	int err;
+
+#define done (READ_ONCE(desc->fence) == fence)
+	err = wait_for_us(done, 5);
+	if (err)
+		err = wait_for(done, 10);
+#undef done
+
+	if (unlikely(err)) {
+		drm_err(&i915->drm, "CT: fence %u failed; reported fence=%u\n",
+			fence, desc->fence);
+	}
+
+	*status = desc->status;
+
+	return err;
+}
+
+/*
+ * CTB Guest to GVT request
+ *
+ * Format of the CTB Guest to GVT request message is as follows::
+ *
+ *      +------------+---------+---------+---------+---------+
+ *      |   msg[0]   |   [1]   |   [2]   |   ...   |  [n-1]  |
+ *      +------------+---------+---------+---------+---------+
+ *      |   MESSAGE  |       MESSAGE PAYLOAD                 |
+ *      +   HEADER   +---------+---------+---------+---------+
+ *      |            |    0    |    1    |   ...   |    n    |
+ *      +============+=========+=========+=========+=========+
+ *      |  len >= 1  |  FENCE  |     request specific data   |
+ *      +------+-----+---------+---------+---------+---------+
+ *
+ *                   ^-----------------len-------------------^
+ */
+static int intel_vgpu_pv_cmd_buf_write(struct intel_vgpu_pv *pv,
+				       const u32 *action,
+				       u32 len /* in dwords */, u32 fence)
+{
+	struct intel_vgpu_pv_ct_buf_desc *desc = pv->ctb.desc;
+	u32 head = desc->head / 4;	/* in dwords */
+	u32 tail = desc->tail / 4;	/* in dwords */
+	u32 size = desc->size / 4;	/* in dwords */
+	u32 used;			/* in dwords */
+	u32 header;
+	u32 *cmds = pv->ctb.cmds;
+	unsigned int i;
+
+	GEM_BUG_ON(desc->size % 4);
+	GEM_BUG_ON(desc->head % 4);
+	GEM_BUG_ON(desc->tail % 4);
+	GEM_BUG_ON(tail >= size);
+
+	 /* tail == head condition indicates empty */
+	if (tail < head)
+		used = (size - head) + tail;
+	else
+		used = tail - head;
+
+	/* make sure there is a space including extra dw for the fence */
+	if (unlikely(used + len + 1 >= size))
+		return -ENOSPC;
+
+	/*
+	 * Write the message. The format is the following:
+	 * DW0: header (including action code)
+	 * DW1: fence
+	 * DW2+: action data
+	 */
+	header = (len << PV_CT_MSG_LEN_SHIFT) |
+		 (PV_CT_MSG_WRITE_FENCE_TO_DESC) |
+		 (action[0] << PV_CT_MSG_ACTION_SHIFT);
+
+	cmds[tail] = header;
+	tail = (tail + 1) % size;
+
+	cmds[tail] = fence;
+	tail = (tail + 1) % size;
+
+	for (i = 1; i < len; i++) {
+		cmds[tail] = action[i];
+		tail = (tail + 1) % size;
+	}
+
+	/* now update desc tail (back in bytes) */
+	desc->tail = tail * 4;
+	GEM_BUG_ON(desc->tail > desc->size);
+
+	return 0;
+}
+
+static u32 intel_vgpu_pv_get_next_fence(struct intel_vgpu_pv *pv)
+{
+	/* For now it's trivial */
+	return ++pv->next_fence;
+}
+
+static int intel_vgpu_pv_send(struct drm_i915_private *i915,
+			      const u32 *action, u32 len, u32 *status)
+{
+	struct i915_virtual_gpu *vgpu = &i915->vgpu;
+	struct intel_vgpu_pv *pv = vgpu->pv;
+
+	struct intel_vgpu_pv_ct_buf_desc *desc = pv->ctb.desc;
+
+	u32 fence;
+	int err;
+
+	GEM_BUG_ON(!len);
+	GEM_BUG_ON(len & ~PV_CT_MSG_LEN_MASK);
+
+	fence = intel_vgpu_pv_get_next_fence(pv);
+	err = intel_vgpu_pv_cmd_buf_write(pv, action, len, fence);
+	if (unlikely(err))
+		goto unlink;
+
+	i915->vgpu.pv->notify(i915);
+
+	err = intel_vgpu_pv_wait_desc_update(i915, desc, fence, status);
+	if (unlikely(err))
+		goto unlink;
+
+	if ((*status)) {
+		err = -EIO;
+		goto unlink;
+	}
+
+	err = (*status);
+
+unlink:
+	return err;
+}
+
+static int intel_vgpu_pv_send_cmd_buf(struct drm_i915_private *i915,
+				      u32 *action, u32 len)
+{
+	struct i915_virtual_gpu *vgpu = &i915->vgpu;
+	unsigned long flags;
+
+	u32 status = ~0; /* undefined */
+	int ret;
+
+	spin_lock_irqsave(&vgpu->pv->lock, flags);
+
+	ret = intel_vgpu_pv_send(i915, action, len, &status);
+	if (unlikely(ret < 0)) {
+		drm_err(&i915->drm, "PV: send action %#X failed; err=%d status=%#X\n",
+			action[0], ret, status);
+	} else if (unlikely(ret)) {
+		drm_err(&i915->drm, "PV: send action %#x returned %d (%#x)\n",
+			action[0], ret, ret);
+	}
+
+	spin_unlock_irqrestore(&vgpu->pv->lock, flags);
+	return ret;
+}
+
+static void intel_vgpu_pv_notify_mmio(struct drm_i915_private *dev_priv)
+{
+	intel_uncore_write(&dev_priv->uncore,
+			   vgtif_reg(g2v_notify), VGT_G2V_PV_SEND_TRIGGER);
+}
+
 /*
  * shared_page setup for VGPU PV features
  */
@@ -388,7 +571,7 @@ static int intel_vgpu_pv_setup_shared_page(struct drm_i915_private *i915,
 
 	/* We allocate 1 page shared between guest and GVT for data exchange.
 	 *       _______________________________
-	 *      |version                        |
+	 *      |version|PV_DESCs(SEND)         |
 	 *      |_______________________________PAGE/8
 	 *      |                               |
 	 *      |_______________________________PAGE/4
@@ -396,7 +579,7 @@ static int intel_vgpu_pv_setup_shared_page(struct drm_i915_private *i915,
 	 *      |                               |
 	 *      |                               |
 	 *      |_______________________________PAGE/2
-	 *      |                               |
+	 *      |PV_CMDs(SEND)                  |
 	 *      |                               |
 	 *      |                               |
 	 *      |                               |
@@ -406,6 +589,8 @@ static int intel_vgpu_pv_setup_shared_page(struct drm_i915_private *i915,
 	 *      |_______________________________|
 	 *
 	 * 0 offset: PV version area
+	 * PAGE/256 offset: PV command buffer command descriptor area
+	 * PAGE/2 offset: PV command buffer command data area
 	 */
 
 	base = (struct intel_vgpu_pv_shared_page *)get_zeroed_page(GFP_KERNEL);
@@ -443,8 +628,21 @@ static int intel_vgpu_pv_setup_shared_page(struct drm_i915_private *i915,
 
 	drm_dbg(&i915->drm,
 		"vgpu PV version major %d and minor %d\n", ver_maj, ver_min);
+
 	i915->vgpu.pv = pv;
 	pv->mem = base;
+
+	/* setup PV command buffer ptr */
+	pv->ctb.cmds = (void *)base + PV_CMD_OFF;
+	pv->ctb.desc = (void *)base + PV_DESC_OFF;
+	pv->ctb.desc->size = PAGE_SIZE / 2;
+	pv->ctb.desc->addr = PV_CMD_OFF;
+
+	/* setup PV command buffer callback */
+	pv->send = intel_vgpu_pv_send_cmd_buf;
+	pv->notify = intel_vgpu_pv_notify_mmio;
+	spin_lock_init(&pv->lock);
+
 	return ret;
 err:
 	__free_page(virt_to_page(base));
diff --git a/drivers/gpu/drm/i915/i915_vgpu.h b/drivers/gpu/drm/i915/i915_vgpu.h
index ab8d25b..18f2dd0 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.h
+++ b/drivers/gpu/drm/i915/i915_vgpu.h
@@ -31,6 +31,8 @@ struct i915_ggtt;
 
 #define PV_MAJOR        0
 #define PV_MINOR        1
+#define PV_DESC_OFF     (PAGE_SIZE / 256)
+#define PV_CMD_OFF      (PAGE_SIZE / 2)
 
 /* intel vGPU PV capabilities */
 enum intel_vgpu_pv_caps {
@@ -43,8 +45,59 @@ struct intel_vgpu_pv_shared_page {
 	u16 ver_minor;
 };
 
+/*
+ * Definition of the command transport message header (DW0)
+ *
+ * bit[0..4]	message len (in dwords)
+ * bit[5..7]	reserved
+ * bit[8..8]	write fence to desc
+ * bit[9..15]	reserved
+ * bit[16..31]	action code
+ */
+#define PV_CT_MSG_LEN_SHIFT             0
+#define PV_CT_MSG_LEN_MASK              0x1F
+#define PV_CT_MSG_WRITE_FENCE_TO_DESC   BIT(8)
+#define PV_CT_MSG_ACTION_SHIFT          16
+#define PV_CT_MSG_ACTION_MASK           0xFFFF
+
+/* PV command transport buffer descriptor */
+struct intel_vgpu_pv_ct_buf_desc {
+	u32 addr;		/* gpa address */
+	u32 size;		/* size in bytes */
+	u32 head;		/* offset updated by GVT */
+	u32 tail;		/* offset updated by owner */
+
+	u32 fence;		/* fence updated by GVT */
+	u32 status;		/* status updated by GVT */
+};
+
+/**
+ * struct intel_vgpu_pv_ct_buf - PV single command transport buffer.
+ *
+ * A single command transport buffer consists of two parts, the header
+ * record (command transport buffer descriptor) and the actual buffer which
+ * holds the commands.
+ *
+ * @desc: pointer to the buffer descriptor
+ * @cmds: pointer to the commands buffer
+ */
+struct intel_vgpu_pv_ct_buf {
+	struct intel_vgpu_pv_ct_buf_desc *desc;
+	u32 *cmds;
+};
+
 struct intel_vgpu_pv {
 	struct intel_vgpu_pv_shared_page *mem;
+
+	/* PV command buffer support */
+	struct intel_vgpu_pv_ct_buf ctb;
+	u32 next_fence;
+
+	/* To serialize the vgpu PV send actions */
+	spinlock_t lock;
+
+	/* VGPU's PV specific send function */
+	int (*send)(struct drm_i915_private *dev_priv, u32 *data, u32 len);
+	void (*notify)(struct drm_i915_private *dev_priv);
 };
 
 void intel_vgpu_detect(struct drm_i915_private *i915);
-- 
2.7.4


* [Intel-gfx] [PATCH v2 04/12] drm/i915: vgpu ppgtt page table pv support
From: Xiaolin Zhang @ 2020-10-09  0:04 UTC
  To: intel-gvt-dev, intel-gfx; +Cc: chris, zhiyuan.lv

To improve efficiency and reduce the complexity of vGPU PPGTT support,
vGPU PPGTT page table operations are implemented in a PV fashion, with
PV versions of bind/unbind implemented for the PPGTT vma ops.

The PV version of the PPGTT vma ops uses the CTB protocol to send the PV
PPGTT command, along with the data struct pv_vma, from guest to GVT; GVT
then implements the PV_CMD_BIND_PPGTT and PV_CMD_UNBIND_PPGTT command
handlers to support the vGPU PPGTT feature (sketched below).

The new PV_PPGTT pv_cap is used to control this level of PV support on
both the guest and the host side.
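
The matching command handlers land on the GVT side in patch 10.
Conceptually, PV_CMD_BIND_PPGTT walks the guest-provided dma address
list and installs shadow PTEs, roughly as sketched here (gfn_to_mfn is
the existing GVTg MPT helper; write_shadow_pte is a hypothetical
stand-in for the real shadow page table update):

    /* GVTg sketch: translate each guest dma address (GPA) to host
     * physical (HPA) and shadow the mapping at the given GPU VA.
     */
    for (i = 0; i < pvvma->size; i++) {
            u64 gpa = gpas[i] & PAGE_MASK;
            unsigned long mfn =
                    intel_gvt_hypervisor_gfn_to_mfn(vgpu, gpa >> PAGE_SHIFT);

            write_shadow_pte(vgpu, pvvma->start + i * PAGE_SIZE,
                             ((u64)mfn << PAGE_SHIFT) | (gpas[i] & ~PAGE_MASK));
    }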

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/gt/gen8_ppgtt.c |  4 +-
 drivers/gpu/drm/i915/i915_vgpu.c     | 96 ++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_vgpu.h     | 17 +++++++
 3 files changed, 116 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
index b236aa0..d5c6007 100644
--- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
+++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
@@ -732,8 +732,10 @@ struct i915_ppgtt *gen8_ppgtt_create(struct intel_gt *gt)
 
 	ppgtt->vm.pte_encode = gen8_pte_encode;
 
-	if (intel_vgpu_active(gt->i915))
+	if (intel_vgpu_active(gt->i915)) {
+		intel_vgpu_pv_config_caps(gt->i915, PV_PPGTT, ppgtt);
 		gen8_ppgtt_notify_vgt(ppgtt, true);
+	}
 
 	ppgtt->vm.cleanup = gen8_ppgtt_cleanup;
 
diff --git a/drivers/gpu/drm/i915/i915_vgpu.c b/drivers/gpu/drm/i915/i915_vgpu.c
index c833823..81be1db5 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.c
+++ b/drivers/gpu/drm/i915/i915_vgpu.c
@@ -99,6 +99,9 @@ void intel_vgpu_detect(struct drm_i915_private *dev_priv)
 	dev_priv->vgpu.active = true;
 	mutex_init(&dev_priv->vgpu.lock);
 
+	/* guest driver PV capability */
+	dev_priv->vgpu.pv_caps = PV_PPGTT;
+
 	if (intel_vgpu_pv_detect_caps(dev_priv, shared_area)) {
 		drm_info(&dev_priv->drm,
 			 "Virtual GPU for Intel GVT-g detected with PV Capabilities.\n");
@@ -373,6 +376,91 @@ int intel_vgt_balloon(struct i915_ggtt *ggtt)
  * i915 vgpu PV support for Linux
  */
 
+static int intel_vgpu_pv_vma_action(struct i915_address_space *vm,
+				    struct i915_vma *vma, u32 action,
+				    u64 flags, u64 pte_flag)
+{
+	struct drm_i915_private *i915 = vma->vm->i915;
+	struct sgt_iter sgt_iter;
+	dma_addr_t addr;
+	struct intel_vgpu_pv_vma pvvma;
+	u32 num_pages;
+	u64 *gpas;
+	int i = 0;
+	u32 data[32];
+	int ret;
+	u32 size = sizeof(pvvma) / 4;
+
+	if (1 + size > ARRAY_SIZE(data))
+		return -EIO;
+
+	memset(&pvvma, 0, sizeof(pvvma));
+	num_pages = vma->node.size >> PAGE_SHIFT;
+	pvvma.size = num_pages;
+	pvvma.start = vma->node.start;
+	pvvma.flags = flags;
+
+	if (action == PV_CMD_BIND_PPGTT || action == PV_CMD_UNBIND_PPGTT)
+		pvvma.pml4 = px_dma(i915_vm_to_ppgtt(vm)->pd);
+
+	if (num_pages == 1) {
+		pvvma.dma_addrs = vma->pages->sgl->dma_address | pte_flag;
+		goto out;
+	}
+
+	gpas = kmalloc_array(num_pages, sizeof(u64), GFP_KERNEL);
+	if (!gpas)
+		return -ENOMEM;
+
+	pvvma.dma_addrs = virt_to_phys((void *)gpas);
+	for_each_sgt_daddr(addr, sgt_iter, vma->pages)
+		gpas[i++] = addr | pte_flag;
+
+	/* Fill the allocated but "unused" space beyond the end of the buffer */
+	while (i < num_pages)
+		gpas[i++] = vm->scratch[0]->encode;
+out:
+	data[0] = action;
+	memcpy(&data[1], &pvvma, sizeof(pvvma));
+	ret = i915->vgpu.pv->send(i915, data, 1 + size);
+
+	if (num_pages > 1)
+		kfree(gpas);
+
+	return ret;
+}
+
+static void intel_vgpu_pv_ppgtt_bind(struct i915_address_space *vm,
+				     struct i915_vm_pt_stash *stash,
+				     struct i915_vma *vma,
+				     enum i915_cache_level cache_level,
+				     u32 flags)
+{
+	u32 pte_flags;
+	u64 pte_encode;
+
+	if (!test_bit(I915_VMA_ALLOC_BIT, __i915_vma_flags(vma))) {
+		set_bit(I915_VMA_ALLOC_BIT, __i915_vma_flags(vma));
+		flags |= BIT(I915_VMA_ALLOC_BIT);
+	}
+
+	/* Applicable to VLV, and gen8+ */
+	pte_flags = 0;
+	if (i915_gem_object_is_readonly(vma->obj))
+		pte_flags |= PTE_READ_ONLY;
+
+	pte_encode = vma->vm->pte_encode(0, cache_level, pte_flags);
+
+	intel_vgpu_pv_vma_action(vm, vma, PV_CMD_BIND_PPGTT, flags, pte_encode);
+}
+
+static void intel_vgpu_pv_ppgtt_unbind(struct i915_address_space *vm,
+				       struct i915_vma *vma)
+{
+	if (test_and_clear_bit(I915_VMA_ALLOC_BIT, __i915_vma_flags(vma)))
+		intel_vgpu_pv_vma_action(vm, vma, PV_CMD_UNBIND_PPGTT, 0, 0);
+}
+
 /**
  * intel_vgpu_pv_wait_desc_update - Wait the command buffer descriptor update.
  * @desc:	buffer descriptor
@@ -655,8 +743,16 @@ static int intel_vgpu_pv_setup_shared_page(struct drm_i915_private *i915,
 void intel_vgpu_pv_config_caps(struct drm_i915_private *i915,
 			       enum intel_vgpu_pv_caps cap, void *data)
 {
+	struct i915_ppgtt *ppgtt;
+
 	if (!intel_vgpu_pv_check_cap(i915, cap))
 		return;
+
+	if (cap == PV_PPGTT) {
+		ppgtt = (struct i915_ppgtt *)data;
+		ppgtt->vm.vma_ops.bind_vma    = intel_vgpu_pv_ppgtt_bind;
+		ppgtt->vm.vma_ops.unbind_vma  = intel_vgpu_pv_ppgtt_unbind;
+	}
 }
 
 /*
diff --git a/drivers/gpu/drm/i915/i915_vgpu.h b/drivers/gpu/drm/i915/i915_vgpu.h
index 18f2dd0..49fc174 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.h
+++ b/drivers/gpu/drm/i915/i915_vgpu.h
@@ -37,6 +37,14 @@ struct i915_ggtt;
 /* intel vGPU PV capabilities */
 enum intel_vgpu_pv_caps {
 	PV_NONE = 0,
+	PV_PPGTT = BIT(0),
+};
+
+/* vgpu PV commands */
+enum intel_vgpu_pv_cmd {
+	PV_CMD_DEFAULT = 0x0,
+	PV_CMD_BIND_PPGTT,
+	PV_CMD_UNBIND_PPGTT,
 };
 
 /* A shared memory(4KB) between GVTg and vgpu allocated by guest */
@@ -45,6 +53,15 @@ struct intel_vgpu_pv_shared_page {
 	u16 ver_minor;
 };
 
+/* PV virtual memory address for GGTT/PPGTT */
+struct intel_vgpu_pv_vma {
+	u32 size; /* num of pages */
+	u32 flags; /* bind or unbind flags */
+	u64 start; /* start of virtual address */
+	u64 dma_addrs; /* BO's dma address list */
+	u64 pml4; /* ppgtt handler */
+};
+
 /*
  * Definition of the command transport message header (DW0)
  *
-- 
2.7.4


* [Intel-gfx] [PATCH v2 05/12] drm/i915: vgpu ggtt page table pv support
From: Xiaolin Zhang @ 2020-10-09  0:04 UTC
  To: intel-gvt-dev, intel-gfx; +Cc: chris, zhiyuan.lv

To improve efficiency and reduce the complexity of vGPU GGTT support,
vGPU GGTT page table operations are implemented in a PV fashion, with PV
versions of bind/unbind implemented for the GGTT vma ops.

The PV version of the GGTT vma ops uses the CTB protocol to send the PV
GGTT command, along with the data struct pv_vma, from guest to GVT; GVT
then implements the PV_CMD_BIND_GGTT and PV_CMD_UNBIND_GGTT command
handlers to support the vGPU GGTT feature.

The new PV_GGTT pv_cap is used to control this level of PV support on
both the guest and the host side.

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/i915_gem.c  |  4 +++-
 drivers/gpu/drm/i915/i915_vgpu.c | 36 +++++++++++++++++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_vgpu.h |  3 +++
 3 files changed, 41 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index bb0c129..cc4d28c 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1129,9 +1129,11 @@ int i915_gem_init(struct drm_i915_private *dev_priv)
 	int ret;
 
 	/* We need to fallback to 4K pages if host doesn't support huge gtt. */
-	if (intel_vgpu_active(dev_priv) && !intel_vgpu_has_huge_gtt(dev_priv))
+	if (intel_vgpu_active(dev_priv)) {
 		mkwrite_device_info(dev_priv)->page_sizes =
 			I915_GTT_PAGE_SIZE_4K;
+		intel_vgpu_pv_config_caps(dev_priv, PV_GGTT, &dev_priv->ggtt);
+	}
 
 	ret = i915_gem_init_userptr(dev_priv);
 	if (ret)
diff --git a/drivers/gpu/drm/i915/i915_vgpu.c b/drivers/gpu/drm/i915/i915_vgpu.c
index 81be1db5..b11fcae 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.c
+++ b/drivers/gpu/drm/i915/i915_vgpu.c
@@ -100,7 +100,7 @@ void intel_vgpu_detect(struct drm_i915_private *dev_priv)
 	mutex_init(&dev_priv->vgpu.lock);
 
 	/* guest driver PV capability */
-	dev_priv->vgpu.pv_caps = PV_PPGTT;
+	dev_priv->vgpu.pv_caps = PV_PPGTT | PV_GGTT;
 
 	if (intel_vgpu_pv_detect_caps(dev_priv, shared_area)) {
 		drm_info(&dev_priv->drm,
@@ -461,6 +461,33 @@ static void intel_vgpu_pv_ppgtt_unbind(struct i915_address_space *vm,
 		intel_vgpu_pv_vma_action(vm, vma, PV_CMD_UNBIND_PPGTT, 0, 0);
 }
 
+static void intel_vgpu_pv_ggtt_bind(struct i915_address_space *vm,
+				    struct i915_vm_pt_stash *stash,
+				    struct i915_vma *vma,
+				    enum i915_cache_level cache_level,
+				    u32 flags)
+{
+	struct drm_i915_gem_object *obj = vma->obj;
+	u32 pte_flags;
+
+	if (i915_vma_is_bound(vma, ~flags & I915_VMA_BIND_MASK))
+		return;
+
+	/* Applicable to VLV (gen8+ do not support RO in the GGTT) */
+	pte_flags = 0;
+	if (i915_gem_object_is_readonly(obj))
+		pte_flags |= PTE_READ_ONLY;
+
+	pte_flags = vma->vm->pte_encode(0, cache_level, pte_flags);
+	intel_vgpu_pv_vma_action(vm, vma, PV_CMD_BIND_GGTT, 0, pte_flags);
+	vma->page_sizes.gtt = I915_GTT_PAGE_SIZE;
+}
+
+static void intel_vgpu_pv_ggtt_unbind_nop(struct i915_address_space *vm,
+					  struct i915_vma *vma)
+{
+}
+
 /**
  * intel_vgpu_pv_wait_desc_update - Wait the command buffer descriptor update.
  * @desc:	buffer descriptor
@@ -744,6 +771,7 @@ void intel_vgpu_pv_config_caps(struct drm_i915_private *i915,
 			       enum intel_vgpu_pv_caps cap, void *data)
 {
 	struct i915_ppgtt *ppgtt;
+	struct i915_ggtt *ggtt;
 
 	if (!intel_vgpu_pv_check_cap(i915, cap))
 		return;
@@ -753,6 +781,12 @@ void intel_vgpu_pv_config_caps(struct drm_i915_private *i915,
 		ppgtt->vm.vma_ops.bind_vma    = intel_vgpu_pv_ppgtt_bind;
 		ppgtt->vm.vma_ops.unbind_vma  = intel_vgpu_pv_ppgtt_unbind;
 	}
+
+	if (cap == PV_GGTT) {
+		ggtt = (struct i915_ggtt *)data;
+		ggtt->vm.vma_ops.bind_vma    = intel_vgpu_pv_ggtt_bind;
+		ggtt->vm.vma_ops.unbind_vma  = intel_vgpu_pv_ggtt_unbind_nop;
+	}
 }
 
 /*
diff --git a/drivers/gpu/drm/i915/i915_vgpu.h b/drivers/gpu/drm/i915/i915_vgpu.h
index 49fc174..39acd93 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.h
+++ b/drivers/gpu/drm/i915/i915_vgpu.h
@@ -38,6 +38,7 @@ struct i915_ggtt;
 enum intel_vgpu_pv_caps {
 	PV_NONE = 0,
 	PV_PPGTT = BIT(0),
+	PV_GGTT = BIT(1),
 };
 
 /* vgpu PV commands */
@@ -45,6 +46,8 @@ enum intel_vgpu_pv_cmd {
 	PV_CMD_DEFAULT = 0x0,
 	PV_CMD_BIND_PPGTT,
 	PV_CMD_UNBIND_PPGTT,
+	PV_CMD_BIND_GGTT,
+	PV_CMD_UNBIND_GGTT,
 };
 
 /* A shared memory(4KB) between GVTg and vgpu allocated by guest */
-- 
2.7.4


* [Intel-gfx] [PATCH v2 06/12] drm/i915: vgpu workload submission pv support
From: Xiaolin Zhang @ 2020-10-09  0:04 UTC
  To: intel-gvt-dev, intel-gfx; +Cc: chris, zhiyuan.lv

To improve efficiency and reduce the complexity of vGPU workload
submission support, a PV version of the workload submission backend is
implemented, with the engine submission data placed in the shared
memory, meanwhile eliminating execlists CSB processing and the context
switch interrupt in the submission routine (see the handshake sketch
below).

The new PV_SUBMISSION pv_cap is used to control this level of PV support
on both the guest and the host side.
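
The per-submission guest/host handshake reduces to the 'submitted' flag
in the per-engine shared area (see pv_submit() below); the host side,
added in patch 12, is expected to mirror it roughly as follows (a
sketch; queue_pv_workload is a hypothetical stand-in for the GVTg
workload queueing):

    /* GVTg sketch: on the PV_CMD_SUBMIT_WORKLOAD trap for an engine */
    if (READ_ONCE(sub_data->submitted)) {
            for (n = 0; n < EXECLIST_MAX_PORTS && sub_data->descs[n]; n++)
                    queue_pv_workload(vgpu, engine, sub_data->descs[n],
                                      sub_data->ctx_gpa[n]);
            WRITE_ONCE(sub_data->submitted, false); /* ack to guest */
    }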

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/Makefile              |   2 +-
 drivers/gpu/drm/i915/gt/intel_lrc.c        |   2 +
 drivers/gpu/drm/i915/i915_vgpu.c           |  68 +++++-
 drivers/gpu/drm/i915/i915_vgpu.h           |  28 +++
 drivers/gpu/drm/i915/intel_pv_submission.c | 325 +++++++++++++++++++++++++++++
 5 files changed, 418 insertions(+), 7 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/intel_pv_submission.c

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index e5574e50..13d1739 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -269,7 +269,7 @@ i915-$(CONFIG_DRM_I915_SELFTEST) += \
 	selftests/librapl.o
 
 # virtual gpu code
-i915-y += i915_vgpu.o
+i915-y += i915_vgpu.o intel_pv_submission.o
 
 ifeq ($(CONFIG_DRM_I915_GVT),y)
 i915-y += intel_gvt.o
diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
index 2875370..d5b6de8 100644
--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
+++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
@@ -5079,6 +5079,8 @@ void intel_execlists_set_default_submission(struct intel_engine_cs *engine)
 			if (IS_ACTIVE(CONFIG_DRM_I915_TIMESLICE_DURATION))
 				engine->flags |= I915_ENGINE_HAS_TIMESLICES;
 		}
+	} else {
+		intel_vgpu_pv_config_caps(engine->i915, PV_SUBMISSION, engine);
 	}
 
 	if (INTEL_GEN(engine->i915) >= 12)
diff --git a/drivers/gpu/drm/i915/i915_vgpu.c b/drivers/gpu/drm/i915/i915_vgpu.c
index b11fcae..e69a2dd 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.c
+++ b/drivers/gpu/drm/i915/i915_vgpu.c
@@ -101,6 +101,7 @@ void intel_vgpu_detect(struct drm_i915_private *dev_priv)
 
 	/* guest driver PV capability */
 	dev_priv->vgpu.pv_caps = PV_PPGTT | PV_GGTT;
+	dev_priv->vgpu.pv_caps |= PV_SUBMISSION;
 
 	if (intel_vgpu_pv_detect_caps(dev_priv, shared_area)) {
 		drm_info(&dev_priv->drm,
@@ -122,6 +123,7 @@ void intel_vgpu_destroy(struct drm_i915_private *i915)
 	if (!intel_vgpu_active(i915) || !pv)
 		return;
 
+	kfree(pv->submission);
 	__free_page(virt_to_page(pv->mem));
 	kfree(pv);
 }
@@ -606,7 +608,8 @@ static u32 intel_vgpu_pv_get_next_fence(struct intel_vgpu_pv *pv)
 }
 
 static int intel_vgpu_pv_send(struct drm_i915_private *i915,
-			      const u32 *action, u32 len, u32 *status)
+			      const u32 *action, u32 len, u32 *status,
+			      void __iomem *addr)
 {
 	struct i915_virtual_gpu *vgpu = &i915->vgpu;
 	struct intel_vgpu_pv *pv = vgpu->pv;
@@ -624,7 +627,10 @@ static int intel_vgpu_pv_send(struct drm_i915_private *i915,
 	if (unlikely(err))
 		goto unlink;
 
-	i915->vgpu.pv->notify(i915);
+	if (addr)
+		writel(VGT_G2V_PV_SEND_TRIGGER, addr + vgtif_offset(g2v_notify));
+	else
+		i915->vgpu.pv->notify(i915);
 
 	err = intel_vgpu_pv_wait_desc_update(i915, desc, fence, status);
 	if (unlikely(err))
@@ -652,7 +658,7 @@ static int intel_vgpu_pv_send_cmd_buf(struct drm_i915_private *i915,
 
 	spin_lock_irqsave(&vgpu->pv->lock, flags);
 
-	ret = intel_vgpu_pv_send(i915, action, len, &status);
+	ret = intel_vgpu_pv_send(i915, action, len, &status, NULL);
 	if (unlikely(ret < 0)) {
 		drm_err(&i915->drm, "PV: send action %#X failed; err=%d status=%#X\n",
 			action[0], ret, status);
@@ -671,6 +677,19 @@ static void intel_vgpu_pv_notify_mmio(struct drm_i915_private *dev_priv)
 			   vgtif_reg(g2v_notify), VGT_G2V_PV_SEND_TRIGGER);
 }
 
+static void
+intel_vgpu_pv_register_cap_gpa(struct drm_i915_private *i915,
+			       struct intel_vgpu_pv_cap_addr *cap_addr,
+			       void __iomem *shared_area)
+{
+	u32 data[32];
+	u32 status = ~0;
+
+	data[0] = PV_CMD_REGISTER_CAP_GPA;
+	memcpy(&data[1], cap_addr, sizeof(*cap_addr));
+	intel_vgpu_pv_send(i915, data, 1 + sizeof(*cap_addr) / 4, &status, shared_area);
+}
+
 /*
  * shared_page setup for VGPU PV features
  */
@@ -680,17 +699,21 @@ static int intel_vgpu_pv_setup_shared_page(struct drm_i915_private *i915,
 	void __iomem *addr;
 	struct intel_vgpu_pv *pv;
 	struct intel_vgpu_pv_shared_page *base;
-	u64 gpa;
+	struct intel_vgpu_pv_cap_addr cap_addr;
+	void *sub_base;
+	u64 gpa, sub_gpa;
 	u16 ver_maj, ver_min;
 	int ret = 0;
+	int i;
+	u32 size;
 
 	/* We allocate 1 page shared between guest and GVT for data exchange.
 	 *       _______________________________
 	 *      |version|PV_DESCs(SEND)         |
 	 *      |_______________________________PAGE/8
-	 *      |                               |
+	 *      |PV_RSVD                        |
 	 *      |_______________________________PAGE/4
-	 *      |                               |
+	 *      |PV_SUBMISSION                  |
 	 *      |                               |
 	 *      |                               |
 	 *      |_______________________________PAGE/2
@@ -758,6 +781,33 @@ static int intel_vgpu_pv_setup_shared_page(struct drm_i915_private *i915,
 	pv->notify = intel_vgpu_pv_notify_mmio;
 	spin_lock_init(&pv->lock);
 
+	/* setup PV per engine data exchange structure */
+	if (i915->vgpu.pv_caps & PV_SUBMISSION) {
+		sub_base = (void *)base + PV_SUB_OFF;
+		sub_gpa = gpa + PV_SUB_OFF;
+
+		size = sizeof(struct intel_vgpu_pv_submission);
+		if (size * I915_NUM_ENGINES > (PV_CMD_OFF - PV_SUB_OFF)) {
+			pv->submission = kmalloc_array(I915_NUM_ENGINES, size, GFP_KERNEL);
+			if (!pv->submission) {
+				ret = -ENOMEM;
+				goto err;
+			}
+			sub_base = pv->submission;
+			sub_gpa = virt_to_phys(pv->submission);
+		}
+
+		for (i = 0; i < I915_NUM_ENGINES; i++) {
+			pv->submission_data[i] = sub_base +  size * i;
+			pv->submission_data[i]->submitted = false;
+			spin_lock_init(&pv->submission_lock[i]);
+		}
+
+		cap_addr.cap = PV_SUBMISSION;
+		cap_addr.gpa = sub_gpa;
+		intel_vgpu_pv_register_cap_gpa(i915, &cap_addr, shared_area);
+	}
+
 	return ret;
 err:
 	__free_page(virt_to_page(base));
@@ -772,6 +822,7 @@ void intel_vgpu_pv_config_caps(struct drm_i915_private *i915,
 {
 	struct i915_ppgtt *ppgtt;
 	struct i915_ggtt *ggtt;
+	struct intel_engine_cs *engine;
 
 	if (!intel_vgpu_pv_check_cap(i915, cap))
 		return;
@@ -787,6 +838,11 @@ void intel_vgpu_pv_config_caps(struct drm_i915_private *i915,
 		ggtt->vm.vma_ops.bind_vma    = intel_vgpu_pv_ggtt_bind;
 		ggtt->vm.vma_ops.unbind_vma  = intel_vgpu_pv_ggtt_unbind_nop;
 	}
+
+	if (cap == PV_SUBMISSION) {
+		engine = (struct intel_engine_cs *)data;
+		intel_vgpu_pv_set_submission(engine);
+	}
 }
 
 /*
diff --git a/drivers/gpu/drm/i915/i915_vgpu.h b/drivers/gpu/drm/i915/i915_vgpu.h
index 39acd93..aab5df3 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.h
+++ b/drivers/gpu/drm/i915/i915_vgpu.h
@@ -25,6 +25,7 @@
 #define _I915_VGPU_H_
 
 #include <linux/types.h>
+#include "gt/intel_engine_types.h"
 
 struct drm_i915_private;
 struct i915_ggtt;
@@ -32,6 +33,8 @@ struct i915_ggtt;
 #define PV_MAJOR        0
 #define PV_MINOR        1
 #define PV_DESC_OFF     (PAGE_SIZE / 256)
+#define PV_RSVD_OFF     (PAGE_SIZE / 8)
+#define PV_SUB_OFF      (PAGE_SIZE / 4)
 #define PV_CMD_OFF      (PAGE_SIZE / 2)
 
 /* intel vGPU PV capabilities */
@@ -39,6 +42,7 @@ enum intel_vgpu_pv_caps {
 	PV_NONE = 0,
 	PV_PPGTT = BIT(0),
 	PV_GGTT = BIT(1),
+	PV_SUBMISSION = BIT(2),
 };
 
 /* vgpu PV commands */
@@ -48,6 +52,8 @@ enum intel_vgpu_pv_cmd {
 	PV_CMD_UNBIND_PPGTT,
 	PV_CMD_BIND_GGTT,
 	PV_CMD_UNBIND_GGTT,
+	PV_CMD_REGISTER_CAP_GPA,
+	PV_CMD_SUBMIT_WORKLOAD
 };
 
 /* A shared memory(4KB) between GVTg and vgpu allocated by guest */
@@ -56,6 +62,11 @@ struct intel_vgpu_pv_shared_page {
 	u16 ver_minor;
 };
 
+struct intel_vgpu_pv_cap_addr {
+	u32 cap;
+	u64 gpa;
+};
+
 /* PV virtual memory address for GGTT/PPGTT */
 struct intel_vgpu_pv_vma {
 	u32 size; /* num of pages */
@@ -65,6 +76,15 @@ struct intel_vgpu_pv_vma {
 	u64 pml4; /* ppgtt handler */
 };
 
+/* PV workload submission */
+struct intel_vgpu_pv_submission {
+	bool submitted;
+	/* workload lrc descriptor */
+	u64 descs[EXECLIST_MAX_PORTS];
+	/* guest logical context handler */
+	u64 ctx_gpa[EXECLIST_MAX_PORTS];
+} __packed;
+
 /*
  * Definition of the command transport message header (DW0)
  *
@@ -108,6 +128,13 @@ struct intel_vgpu_pv_ct_buf {
 struct intel_vgpu_pv {
 	struct intel_vgpu_pv_shared_page *mem;
 
+	/* protect multiple submission per engine */
+	spinlock_t submission_lock[I915_NUM_ENGINES];
+	/* per engine PV workload submission data area */
+	struct intel_vgpu_pv_submission *submission_data[I915_NUM_ENGINES];
+	/* per engine PV workload submission data dynamic allocated */
+	void *submission;
+
 	/* PV command buffer support */
 	struct intel_vgpu_pv_ct_buf ctb;
 	u32 next_fence;
@@ -136,4 +163,5 @@ bool intel_vgpu_pv_detect_caps(struct drm_i915_private *i915,
 			       void __iomem *shared_area);
 void intel_vgpu_pv_config_caps(struct drm_i915_private *i915,
 			       enum intel_vgpu_pv_caps cap, void *data);
+void intel_vgpu_pv_set_submission(struct intel_engine_cs *engine);
 #endif /* _I915_VGPU_H_ */
diff --git a/drivers/gpu/drm/i915/intel_pv_submission.c b/drivers/gpu/drm/i915/intel_pv_submission.c
new file mode 100644
index 0000000..f4d4a65
--- /dev/null
+++ b/drivers/gpu/drm/i915/intel_pv_submission.c
@@ -0,0 +1,325 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2018 Intel Corporation
+ */
+
+#include "i915_vgpu.h"
+#include "gt/intel_lrc_reg.h"
+#include "gt/intel_gt_pm.h"
+#include "gt/intel_ring.h"
+#include "i915_trace.h"
+
+#define CTX_DESC_FORCE_RESTORE BIT_ULL(2)
+
+static u64 execlists_update_context(struct i915_request *rq)
+{
+	struct intel_context *ce = rq->context;
+	u64 desc = ce->lrc.desc;
+	u32 tail, prev;
+
+	tail = intel_ring_set_tail(rq->ring, rq->tail);
+	prev = ce->lrc_reg_state[CTX_RING_TAIL];
+	if (unlikely(intel_ring_direction(rq->ring, tail, prev) <= 0))
+		desc |= CTX_DESC_FORCE_RESTORE;
+	ce->lrc_reg_state[CTX_RING_TAIL] = tail;
+	rq->tail = rq->wa_tail;
+	ce->lrc.desc &= ~CTX_DESC_FORCE_RESTORE;
+	return desc;
+}
+
+static inline struct i915_priolist *to_priolist(struct rb_node *rb)
+{
+	return rb_entry(rb, struct i915_priolist, node);
+}
+
+static struct i915_request *schedule_in(struct i915_request *rq, int idx)
+{
+	__intel_gt_pm_get(rq->engine->gt);
+	return i915_request_get(rq);
+}
+
+static void schedule_out(struct i915_request *rq)
+{
+	intel_gt_pm_put_async(rq->engine->gt);
+	i915_request_put(rq);
+}
+
+static void pv_reset_prepare(struct intel_engine_cs *engine)
+{
+	struct intel_engine_execlists * const execlists = &engine->execlists;
+
+	__tasklet_disable_sync_once(&execlists->tasklet);
+}
+
+static void
+cancel_port_requests(struct intel_engine_execlists * const execlists)
+{
+	struct i915_request * const *port, *rq;
+
+	/* Note we are only using the inflight and not the pending queue */
+	for (port = execlists->active; (rq = *port); port++)
+		schedule_out(rq);
+	execlists->active =
+		memset(execlists->inflight, 0, sizeof(execlists->inflight));
+}
+
+static void pv_reset_rewind(struct intel_engine_cs *engine, bool stalled)
+{
+	struct intel_engine_execlists * const execlists = &engine->execlists;
+	struct i915_request *rq;
+	unsigned long flags;
+
+	spin_lock_irqsave(&engine->active.lock, flags);
+
+	cancel_port_requests(execlists);
+
+	/* Push back any incomplete requests for replay after the reset. */
+	rq = execlists_unwind_incomplete_requests(execlists);
+	if (!rq)
+		goto out_unlock;
+
+	if (!i915_request_started(rq))
+		stalled = false;
+
+	__i915_request_reset(rq, stalled);
+	intel_lr_context_reset(engine, rq->context, rq->head, stalled);
+
+out_unlock:
+	spin_unlock_irqrestore(&engine->active.lock, flags);
+}
+
+static void pv_reset_finish(struct intel_engine_cs *engine)
+{
+	struct intel_engine_execlists * const execlists = &engine->execlists;
+
+	if (__tasklet_enable(&execlists->tasklet))
+		/* And kick in case we missed a new request submission. */
+		tasklet_hi_schedule(&execlists->tasklet);
+}
+
+static void pv_reset_cancel(struct intel_engine_cs *engine)
+{
+	struct intel_engine_execlists * const execlists = &engine->execlists;
+	struct i915_request *rq, *rn;
+	struct rb_node *rb;
+	unsigned long flags;
+
+	spin_lock_irqsave(&engine->active.lock, flags);
+
+	/* Cancel the requests on the HW and clear the ELSP tracker. */
+	cancel_port_requests(execlists);
+
+	/* Mark all executing requests as skipped. */
+	list_for_each_entry(rq, &engine->active.requests, sched.link) {
+		i915_request_set_error_once(rq, -EIO);
+		i915_request_mark_complete(rq);
+	}
+
+	/* Flush the queued requests to the timeline list (for retiring). */
+	while ((rb = rb_first_cached(&execlists->queue))) {
+		struct i915_priolist *p = to_priolist(rb);
+		int i;
+
+		priolist_for_each_request_consume(rq, rn, p, i) {
+			list_del_init(&rq->sched.link);
+			__i915_request_submit(rq);
+			dma_fence_set_error(&rq->fence, -EIO);
+			i915_request_mark_complete(rq);
+		}
+
+		rb_erase_cached(&p->node, &execlists->queue);
+		i915_priolist_free(p);
+	}
+
+	execlists->queue_priority_hint = INT_MIN;
+	execlists->queue = RB_ROOT_CACHED;
+
+	spin_unlock_irqrestore(&engine->active.lock, flags);
+}
+
+static void pv_submit(struct intel_engine_cs *engine,
+		      struct i915_request **out, struct i915_request **end)
+{
+	struct intel_engine_execlists * const execlists = &engine->execlists;
+	struct intel_vgpu_pv *pv = engine->i915->vgpu.pv;
+	struct intel_vgpu_pv_submission *sub_data = pv->submission_data[engine->id];
+	struct i915_request *rq;
+	int n, err;
+
+	memset(sub_data->descs, 0, sizeof(sub_data->descs));
+	n = 0;
+
+	do {
+		rq = *out++;
+		sub_data->descs[n] = execlists_update_context(rq);
+		n++;
+	} while (out != end);
+
+	spin_lock(&pv->submission_lock[engine->id]);
+	sub_data->submitted = true;
+	writel(PV_CMD_SUBMIT_WORKLOAD, execlists->submit_reg);
+
+#define done (READ_ONCE(sub_data->submitted) == false)
+	err = wait_for_atomic_us(done, 1000);
+#undef done
+	spin_unlock(&pv->submission_lock[engine->id]);
+
+	if (unlikely(err))
+		DRM_ERROR("PV (%s) workload submission failed\n", engine->name);
+}
+
+static void pv_dequeue(struct intel_engine_cs *engine)
+{
+	struct intel_engine_execlists * const execlists = &engine->execlists;
+	struct i915_request **first = execlists->inflight;
+	struct i915_request ** const last_port = first + execlists->port_mask;
+	struct i915_request *last = first[0];
+	struct i915_request **port;
+	struct rb_node *rb;
+	bool submit = false;
+
+	lockdep_assert_held(&engine->active.lock);
+
+	if (last) {
+		if (*++first)
+			return;
+		last = NULL;
+	}
+
+	port = first;
+	while ((rb = rb_first_cached(&execlists->queue))) {
+		struct i915_priolist *p = to_priolist(rb);
+		struct i915_request *rq, *rn;
+		int i;
+
+		priolist_for_each_request_consume(rq, rn, p, i) {
+			if (last && rq->context != last->context) {
+				if (port == last_port)
+					goto done;
+
+				*port = schedule_in(last, port - execlists->inflight);
+				port++;
+			}
+
+			list_del_init(&rq->sched.link);
+			__i915_request_submit(rq);
+			submit = true;
+			last = rq;
+		}
+
+		rb_erase_cached(&p->node, &execlists->queue);
+		i915_priolist_free(p);
+	}
+done:
+	execlists->queue_priority_hint =
+		rb ? to_priolist(rb)->priority : INT_MIN;
+	if (submit) {
+		*port = schedule_in(last, port - execlists->inflight);
+		*++port = NULL;
+		pv_submit(engine, first, port);
+	}
+	execlists->active = execlists->inflight;
+}
+
+static void vgpu_pv_submission_tasklet(unsigned long data)
+{
+	struct intel_engine_cs * const engine = (struct intel_engine_cs *)data;
+	struct intel_engine_execlists * const execlists = &engine->execlists;
+	struct i915_request **port, *rq;
+	unsigned long flags;
+	struct intel_vgpu_pv *pv = engine->i915->vgpu.pv;
+	struct intel_vgpu_pv_submission *sub_data = pv->submission_data[engine->id];
+
+	spin_lock_irqsave(&engine->active.lock, flags);
+
+	for (port = execlists->inflight; (rq = *port); port++) {
+		if (!i915_request_completed(rq))
+			break;
+
+		schedule_out(rq);
+	}
+
+	if (port != execlists->inflight) {
+		int idx = port - execlists->inflight;
+		int rem = ARRAY_SIZE(execlists->inflight) - idx;
+
+		memmove(execlists->inflight, port, rem * sizeof(*port));
+	}
+
+	if (!sub_data->submitted)
+		pv_dequeue(engine);
+
+	spin_unlock_irqrestore(&engine->active.lock, flags);
+}
+
+static inline bool
+reset_in_progress(const struct intel_engine_execlists *execlists)
+{
+	return unlikely(!__tasklet_is_enabled(&execlists->tasklet));
+}
+
+static inline int rq_prio(const struct i915_request *rq)
+{
+	return READ_ONCE(rq->sched.attr.priority);
+}
+
+static void queue_request(struct intel_engine_cs *engine,
+			  struct i915_request *rq)
+{
+	GEM_BUG_ON(!list_empty(&rq->sched.link));
+	list_add_tail(&rq->sched.link,
+		      i915_sched_lookup_priolist(engine, rq_prio(rq)));
+	set_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
+}
+
+static void pv_submit_request(struct i915_request *request)
+{
+	struct intel_engine_cs *engine = request->engine;
+	unsigned long flags;
+	struct intel_engine_execlists *execlists = &engine->execlists;
+
+	/* Will be called from irq-context when using foreign fences. */
+	spin_lock_irqsave(&engine->active.lock, flags);
+
+	queue_request(engine, request);
+
+	if (rq_prio(request) <= execlists->queue_priority_hint)
+		goto out;
+
+	execlists->queue_priority_hint = rq_prio(request);
+	if (reset_in_progress(execlists))
+		goto out;
+
+	tasklet_hi_schedule(&execlists->tasklet);
+out:
+	spin_unlock_irqrestore(&engine->active.lock, flags);
+}
+
+void intel_vgpu_pv_set_submission(struct intel_engine_cs *engine)
+{
+	/*
+	 * We inherit a bunch of functions from execlists that we'd like
+	 * to keep using:
+	 *
+	 *    engine->submit_request = execlists_submit_request;
+	 *    engine->cancel_requests = execlists_cancel_requests;
+	 *    engine->schedule = execlists_schedule;
+	 *
+	 * But we need to override the actual submission backend in order
+	 * to talk to the GVT with PV notification message.
+	 */
+	engine->submit_request = pv_submit_request;
+	engine->execlists.tasklet.func = vgpu_pv_submission_tasklet;
+
+	/* do not use execlists park/unpark */
+	engine->park = NULL;
+	engine->unpark = NULL;
+
+	engine->reset.prepare = pv_reset_prepare;
+	engine->reset.rewind = pv_reset_rewind;
+	engine->reset.cancel = pv_reset_cancel;
+	engine->reset.finish = pv_reset_finish;
+
+	engine->flags &= ~I915_ENGINE_SUPPORTS_STATS;
+	engine->flags |= I915_ENGINE_NEEDS_BREADCRUMB_TASKLET;
+}
-- 
2.7.4


* [Intel-gfx] [PATCH v2 07/12] drm/i915/gvt: GVTg expose pv_caps PVINFO register
  2020-10-09  0:04 [Intel-gfx] [PATCH v2 00/12] enhanced i915 vgpu with PV feature support Xiaolin Zhang
                   ` (5 preceding siblings ...)
  2020-10-09  0:04 ` [Intel-gfx] [PATCH v2 06/12] drm/i915: vgpu workload submisison " Xiaolin Zhang
@ 2020-10-09  0:04 ` Xiaolin Zhang
  2020-10-09  0:04 ` [Intel-gfx] [PATCH v2 08/12] drm/i915/gvt: GVTg handle guest shared_page setup Xiaolin Zhang
                   ` (9 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Xiaolin Zhang @ 2020-10-09  0:04 UTC (permalink / raw)
  To: intel-gvt-dev, intel-gfx; +Cc: chris, zhiyuan.lv

Expose the pv_caps PVINFO register from GVTg to the guest so that the
guest can query and control the individual PV capabilities.

Report the VGT_CAPS_PV capability in the pvinfo page to the guest.
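
For reference, the negotiation these two registers enable looks roughly
like this from the guest side (an illustrative sketch only, not code
from this series; "pvinfo" stands for the mapped PVINFO page and the
two offsets stand in for the vgt_caps/pv_caps fields of struct vgt_if):

/* Illustrative guest-side capability negotiation (sketch). */
static u32 guest_negotiate_pv_caps(void __iomem *pvinfo, u32 wanted,
				   u32 vgt_caps_off, u32 pv_caps_off)
{
	u32 granted = 0;

	/* VGT_CAPS_PV in vgt_caps says the host supports PV at all. */
	if (readl(pvinfo + vgt_caps_off) & VGT_CAPS_PV) {
		/* pv_caps advertises the individual PV features. */
		granted = readl(pvinfo + pv_caps_off) & wanted;
		/* Write back the subset the guest actually enables. */
		writel(granted, pvinfo + pv_caps_off);
	}

	return granted;
}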

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/gvt/gvt.h      | 8 ++++++++
 drivers/gpu/drm/i915/gvt/handlers.c | 5 +++++
 drivers/gpu/drm/i915/gvt/vgpu.c     | 1 +
 3 files changed, 14 insertions(+)

diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
index 9831361..3810b17 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.h
+++ b/drivers/gpu/drm/i915/gvt/gvt.h
@@ -49,6 +49,7 @@
 #include "fb_decoder.h"
 #include "dmabuf.h"
 #include "page_track.h"
+#include "i915_vgpu.h"
 
 #define GVT_MAX_VGPU 8
 
@@ -212,6 +213,7 @@ struct intel_vgpu {
 	struct idr object_idr;
 
 	u32 scan_nonprivbb;
+	u32 pv_caps;
 };
 
 static inline void *intel_vgpu_vdev(struct intel_vgpu *vgpu)
@@ -531,6 +533,12 @@ static inline u64 intel_vgpu_get_bar_gpa(struct intel_vgpu *vgpu, int bar)
 			PCI_BASE_ADDRESS_MEM_MASK;
 }
 
+static inline bool intel_vgpu_enabled_pv_cap(struct intel_vgpu *vgpu,
+					     enum intel_vgpu_pv_caps cap)
+{
+	return (vgpu->pv_caps & cap);
+}
+
 void intel_vgpu_clean_opregion(struct intel_vgpu *vgpu);
 int intel_vgpu_init_opregion(struct intel_vgpu *vgpu);
 int intel_vgpu_opregion_base_write_handler(struct intel_vgpu *vgpu, u32 gpa);
diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
index 3be37e6..87d562e 100644
--- a/drivers/gpu/drm/i915/gvt/handlers.c
+++ b/drivers/gpu/drm/i915/gvt/handlers.c
@@ -1203,6 +1203,7 @@ static int pvinfo_mmio_read(struct intel_vgpu *vgpu, unsigned int offset,
 		break;
 	case 0x78010:	/* vgt_caps */
 	case 0x7881c:
+	case _vgtif_reg(pv_caps):
 		break;
 	default:
 		invalid_read = true;
@@ -1272,6 +1273,10 @@ static int pvinfo_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
 	case _vgtif_reg(g2v_notify):
 		handle_g2v_notification(vgpu, data);
 		break;
+	case _vgtif_reg(pv_caps):
+		DRM_INFO("vgpu id=%d pv caps =0x%x\n", vgpu->id, data);
+		vgpu->pv_caps = data;
+		break;
 	/* add xhot and yhot to handled list to avoid error log */
 	case _vgtif_reg(cursor_x_hot):
 	case _vgtif_reg(cursor_y_hot):
diff --git a/drivers/gpu/drm/i915/gvt/vgpu.c b/drivers/gpu/drm/i915/gvt/vgpu.c
index f6d7e33..41d51ec 100644
--- a/drivers/gpu/drm/i915/gvt/vgpu.c
+++ b/drivers/gpu/drm/i915/gvt/vgpu.c
@@ -48,6 +48,7 @@ void populate_pvinfo_page(struct intel_vgpu *vgpu)
 	vgpu_vreg_t(vgpu, vgtif_reg(vgt_caps)) = VGT_CAPS_FULL_PPGTT;
 	vgpu_vreg_t(vgpu, vgtif_reg(vgt_caps)) |= VGT_CAPS_HWSP_EMULATION;
 	vgpu_vreg_t(vgpu, vgtif_reg(vgt_caps)) |= VGT_CAPS_HUGE_GTT;
+	vgpu_vreg_t(vgpu, vgtif_reg(vgt_caps)) |= VGT_CAPS_PV;
 
 	vgpu_vreg_t(vgpu, vgtif_reg(avail_rs.mappable_gmadr.base)) =
 		vgpu_aperture_gmadr_base(vgpu);
-- 
2.7.4


* [Intel-gfx] [PATCH v2 08/12] drm/i915/gvt: GVTg handle guest shared_page setup
  2020-10-09  0:04 [Intel-gfx] [PATCH v2 00/12] enhanced i915 vgpu with PV feature support Xiaolin Zhang
                   ` (6 preceding siblings ...)
  2020-10-09  0:04 ` [Intel-gfx] [PATCH v2 07/12] drm/i915/gvt: GVTg expose pv_caps PVINFO register Xiaolin Zhang
@ 2020-10-09  0:04 ` Xiaolin Zhang
  2020-10-09  0:04 ` [Intel-gfx] [PATCH v2 09/12] drm/i915/gvt: GVTg support vgpu pv CTB protocol Xiaolin Zhang
                   ` (8 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Xiaolin Zhang @ 2020-10-09  0:04 UTC (permalink / raw)
  To: intel-gvt-dev, intel-gfx; +Cc: chris, zhiyuan.lv

Implement the guest shared_page registration operation in GVTg, along
with shared_page read and write helpers built on top of the hypervisor
GPA read and write primitives.

The shared_page_gpa is passed from the guest driver through the PVINFO
shared_page_gpa register.
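
For context, the registration sequence expected from the guest looks
roughly like this (a hedged sketch, not code from this patch; the
helper name and register offsets are placeholders, and the notify write
mirrors the VGT_G2V_SHARED_PAGE_REGISTER handling added below):

/* Illustrative guest-side shared page registration (sketch). */
static void guest_register_shared_page(void __iomem *pvinfo,
				       struct page *page,
				       u32 gpa_off, u32 notify_off)
{
	u64 gpa = page_to_phys(page);

	/* Publish the shared page GPA via the 64-bit PVINFO register. */
	writel(lower_32_bits(gpa), pvinfo + gpa_off);
	writel(upper_32_bits(gpa), pvinfo + gpa_off + 4);

	/*
	 * Notify the host; GVTg validates the GFN and then writes
	 * PV_MAJOR/PV_MINOR into the first two u16 slots of the page.
	 */
	writel(VGT_G2V_SHARED_PAGE_REGISTER, pvinfo + notify_off);
}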

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/gvt/gvt.h      | 11 +++++++--
 drivers/gpu/drm/i915/gvt/handlers.c | 20 +++++++++++++++++
 drivers/gpu/drm/i915/gvt/vgpu.c     | 45 +++++++++++++++++++++++++++++++++++++
 3 files changed, 74 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
index 3810b17..ca2f656 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.h
+++ b/drivers/gpu/drm/i915/gvt/gvt.h
@@ -214,6 +214,8 @@ struct intel_vgpu {
 
 	u32 scan_nonprivbb;
 	u32 pv_caps;
+	u64 shared_page_gpa;
+	bool shared_page_enabled;
 };
 
 static inline void *intel_vgpu_vdev(struct intel_vgpu *vgpu)
@@ -536,7 +538,7 @@ static inline u64 intel_vgpu_get_bar_gpa(struct intel_vgpu *vgpu, int bar)
 static inline bool intel_vgpu_enabled_pv_cap(struct intel_vgpu *vgpu,
 					     enum intel_vgpu_pv_caps cap)
 {
-	return (vgpu->pv_caps & cap);
+	return vgpu->shared_page_enabled && (vgpu->pv_caps & cap);
 }
 
 void intel_vgpu_clean_opregion(struct intel_vgpu *vgpu);
@@ -692,7 +694,12 @@ void intel_gvt_debugfs_add_vgpu(struct intel_vgpu *vgpu);
 void intel_gvt_debugfs_remove_vgpu(struct intel_vgpu *vgpu);
 void intel_gvt_debugfs_init(struct intel_gvt *gvt);
 void intel_gvt_debugfs_clean(struct intel_gvt *gvt);
-
+int intel_gvt_read_shared_page(struct intel_vgpu *vgpu,
+			       unsigned int offset, void *buf,
+			       unsigned long len);
+int intel_gvt_write_shared_page(struct intel_vgpu *vgpu,
+				unsigned int offset, void *buf,
+				unsigned long len);
 
 #include "trace.h"
 #include "mpt.h"
diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
index 87d562e..67cdae4 100644
--- a/drivers/gpu/drm/i915/gvt/handlers.c
+++ b/drivers/gpu/drm/i915/gvt/handlers.c
@@ -1204,6 +1204,8 @@ static int pvinfo_mmio_read(struct intel_vgpu *vgpu, unsigned int offset,
 	case 0x78010:	/* vgt_caps */
 	case 0x7881c:
 	case _vgtif_reg(pv_caps):
+	case _vgtif_reg(shared_page_gpa):
+	case _vgtif_reg(shared_page_gpa) + 4:
 		break;
 	default:
 		invalid_read = true;
@@ -1221,6 +1223,9 @@ static int handle_g2v_notification(struct intel_vgpu *vgpu, int notification)
 	enum intel_gvt_gtt_type root_entry_type = GTT_TYPE_PPGTT_ROOT_L4_ENTRY;
 	struct intel_vgpu_mm *mm;
 	u64 *pdps;
+	unsigned long gpa, gfn;
+	u16 ver_major = PV_MAJOR;
+	u16 ver_minor = PV_MINOR;
 
 	pdps = (u64 *)&vgpu_vreg64_t(vgpu, vgtif_reg(pdp[0]));
 
@@ -1234,6 +1239,19 @@ static int handle_g2v_notification(struct intel_vgpu *vgpu, int notification)
 	case VGT_G2V_PPGTT_L3_PAGE_TABLE_DESTROY:
 	case VGT_G2V_PPGTT_L4_PAGE_TABLE_DESTROY:
 		return intel_vgpu_put_ppgtt_mm(vgpu, pdps);
+	case VGT_G2V_SHARED_PAGE_REGISTER:
+		gpa = vgpu_vreg64_t(vgpu, vgtif_reg(shared_page_gpa));
+		gfn = gpa >> PAGE_SHIFT;
+		if (!intel_gvt_hypervisor_is_valid_gfn(vgpu, gfn)) {
+			vgpu_vreg_t(vgpu, vgtif_reg(pv_caps)) = 0;
+			return 0;
+		}
+		vgpu->shared_page_gpa = gpa;
+		vgpu->shared_page_enabled = true;
+
+		intel_gvt_write_shared_page(vgpu, 0, &ver_major, 2);
+		intel_gvt_write_shared_page(vgpu, 2, &ver_minor, 2);
+		break;
 	case VGT_G2V_EXECLIST_CONTEXT_CREATE:
 	case VGT_G2V_EXECLIST_CONTEXT_DESTROY:
 	case 1:	/* Remove this in guest driver. */
@@ -1290,6 +1308,8 @@ static int pvinfo_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
 	case _vgtif_reg(pdp[3].hi):
 	case _vgtif_reg(execlist_context_descriptor_lo):
 	case _vgtif_reg(execlist_context_descriptor_hi):
+	case _vgtif_reg(shared_page_gpa):
+	case _vgtif_reg(shared_page_gpa) + 4:
 		break;
 	case _vgtif_reg(rsv5[0])..._vgtif_reg(rsv5[3]):
 		invalid_write = true;
diff --git a/drivers/gpu/drm/i915/gvt/vgpu.c b/drivers/gpu/drm/i915/gvt/vgpu.c
index 41d51ec..09bd00a 100644
--- a/drivers/gpu/drm/i915/gvt/vgpu.c
+++ b/drivers/gpu/drm/i915/gvt/vgpu.c
@@ -64,6 +64,8 @@ void populate_pvinfo_page(struct intel_vgpu *vgpu)
 	vgpu_vreg_t(vgpu, vgtif_reg(cursor_x_hot)) = UINT_MAX;
 	vgpu_vreg_t(vgpu, vgtif_reg(cursor_y_hot)) = UINT_MAX;
 
+	vgpu_vreg64_t(vgpu, vgtif_reg(shared_page_gpa)) = 0;
+
 	gvt_dbg_core("Populate PVINFO PAGE for vGPU %d\n", vgpu->id);
 	gvt_dbg_core("aperture base [GMADR] 0x%llx size 0x%llx\n",
 		vgpu_aperture_gmadr_base(vgpu), vgpu_aperture_sz(vgpu));
@@ -613,3 +615,46 @@ void intel_gvt_reset_vgpu(struct intel_vgpu *vgpu)
 	intel_gvt_reset_vgpu_locked(vgpu, true, 0);
 	mutex_unlock(&vgpu->vgpu_lock);
 }
+
+/**
+ * intel_gvt_read_shared_page - read content from the vGPU shared page
+ * @vgpu: a vGPU
+ * @offset: byte offset into the shared page
+ * @buf: destination buffer
+ * @len: number of bytes to read
+ *
+ * Returns 0 on success, negative error code on failure.
+ */
+int intel_gvt_read_shared_page(struct intel_vgpu *vgpu,
+			       unsigned int offset, void *buf,
+			       unsigned long len)
+{
+	int ret = -EINVAL;
+	unsigned long gpa;
+
+	if (offset >= PAGE_SIZE)
+		goto err;
+
+	gpa = vgpu->shared_page_gpa + offset;
+	ret = intel_gvt_hypervisor_read_gpa(vgpu, gpa, buf, len);
+	if (!ret)
+		return ret;
+err:
+	gvt_vgpu_err("read shared page (offset %x) failed", offset);
+	memset(buf, 0, len);
+	return ret;
+}
+
+int intel_gvt_write_shared_page(struct intel_vgpu *vgpu,
+				unsigned int offset, void *buf,
+				unsigned long len)
+{
+	int ret = -EINVAL;
+	unsigned long gpa;
+
+	if (offset >= PAGE_SIZE)
+		goto err;
+
+	gpa = vgpu->shared_page_gpa + offset;
+	ret = intel_gvt_hypervisor_write_gpa(vgpu, gpa, buf, len);
+	if (!ret)
+		return ret;
+err:
+	gvt_vgpu_err("write shared page (offset %x) failed", offset);
+	return ret;
+}
-- 
2.7.4


* [Intel-gfx] [PATCH v2 09/12] drm/i915/gvt: GVTg support vgpu pv CTB protocol
  2020-10-09  0:04 [Intel-gfx] [PATCH v2 00/12] enhanced i915 vgpu with PV feature support Xiaolin Zhang
                   ` (7 preceding siblings ...)
  2020-10-09  0:04 ` [Intel-gfx] [PATCH v2 08/12] drm/i915/gvt: GVTg handle guest shared_page setup Xiaolin Zhang
@ 2020-10-09  0:04 ` Xiaolin Zhang
  2020-10-09  0:04 ` [Intel-gfx] [PATCH v2 10/12] drm/i915/gvt: GVTg support ppgtt pv operations Xiaolin Zhang
                   ` (7 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Xiaolin Zhang @ 2020-10-09  0:04 UTC (permalink / raw)
  To: intel-gvt-dev, intel-gfx; +Cc: chris, zhiyuan.lv

Implement the host side of the vGPU PV command transport buffer (CTB)
protocol. Based on this protocol, CTB read functionality is implemented
to fetch and handle PV commands sent by the guest.
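
As a reading aid for the ring arithmetic below: head, tail and size are
kept as byte offsets in the descriptor while the reader works in
dwords, and one message occupies a header dword, a fence dword and the
payload (the length field in the header covers fence plus payload,
hence the "- 1" after ct_header_get_len()). A minimal sketch of the
availability calculation under those assumptions:

/* Dwords available to read in the CTB ring (illustrative sketch). */
static u32 ctb_dwords_available(u32 head, u32 tail, u32 size)
{
	/* Byte offsets from the descriptor, converted to dwords. */
	head /= 4;
	tail /= 4;
	size /= 4;

	/* tail == head means empty; otherwise account for wrap-around. */
	return (tail >= head) ? tail - head : size - head + tail;
}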

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/gvt/handlers.c | 118 +++++++++++++++++++++++++++++++++++-
 1 file changed, 117 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
index 67cdae4..a9f91da 100644
--- a/drivers/gpu/drm/i915/gvt/handlers.c
+++ b/drivers/gpu/drm/i915/gvt/handlers.c
@@ -1218,6 +1218,118 @@ static int pvinfo_mmio_read(struct intel_vgpu *vgpu, unsigned int offset,
 	return 0;
 }
 
+static inline unsigned int ct_header_get_len(u32 header)
+{
+	return (header >> PV_CT_MSG_LEN_SHIFT) & PV_CT_MSG_LEN_MASK;
+}
+
+static inline unsigned int ct_header_get_action(u32 header)
+{
+	return (header >> PV_CT_MSG_ACTION_SHIFT) & PV_CT_MSG_ACTION_MASK;
+}
+
+static int fetch_pv_command_buffer(struct intel_vgpu *vgpu,
+				   struct intel_vgpu_pv_ct_buf_desc *desc,
+				   u32 *fence, u32 *action, u32 *data)
+{
+	u32 head, tail, len, size, off;
+	u32 cmd_head;
+	u32 avail;
+	int ret;
+
+	/* fetch command descriptor */
+	off = PV_DESC_OFF;
+	ret = intel_gvt_read_shared_page(vgpu, off, desc, sizeof(*desc));
+	if (ret)
+		return ret;
+
+	GEM_BUG_ON(desc->size % 4);
+	GEM_BUG_ON(desc->head % 4);
+	GEM_BUG_ON(desc->tail % 4);
+
+	/* work in dwords from here on */
+	head = desc->head / 4;
+	tail = desc->tail / 4;
+	size = desc->size / 4;
+
+	GEM_BUG_ON(tail >= size);
+	GEM_BUG_ON(head >= size);
+
+	/* tail == head condition indicates empty */
+	if (unlikely((tail - head) == 0))
+		return -ENODATA;
+
+	/* fetch command head */
+	off = desc->addr + head * 4;
+	ret = intel_gvt_read_shared_page(vgpu, off, &cmd_head, 4);
+	head = (head + 1) % size;
+	if (ret)
+		goto err;
+
+	len = ct_header_get_len(cmd_head) - 1;
+	*action = ct_header_get_action(cmd_head);
+
+	/* fetch command fence */
+	off = desc->addr + head * 4;
+	ret = intel_gvt_read_shared_page(vgpu, off, fence, 4);
+	head = (head + 1) % size;
+	if (ret)
+		goto err;
+
+	/* no command data */
+	if (len == 0)
+		goto err;
+
+	/* fetch command data */
+	avail = size - head;
+	if (len <= avail) {
+		off =  desc->addr + head * 4;
+		ret = intel_gvt_read_shared_page(vgpu, off, data, len * 4);
+		head = (head + len) % size;
+	} else {
+		/* swap case */
+		off =  desc->addr + head * 4;
+		ret = intel_gvt_read_shared_page(vgpu, off, data, avail * 4);
+		head = (head + avail) % size;
+		if (ret)
+			goto err;
+
+		off = desc->addr;
+		ret = intel_gvt_read_shared_page(vgpu, off, &data[avail],
+						 (len - avail) * 4);
+		head = (head + len - avail) % size;
+	}
+
+err:
+	desc->head = head * 4;
+	return ret;
+}
+
+static int pv_command_buffer_read(struct intel_vgpu *vgpu,
+				  u32 *cmd, u32 *data)
+{
+	struct intel_vgpu_pv_ct_buf_desc desc;
+	u32 fence, off = PV_DESC_OFF;
+	int ret;
+
+	ret = fetch_pv_command_buffer(vgpu, &desc, &fence, cmd, data);
+
+	/* write command descriptor back */
+	desc.fence = fence;
+	desc.status = ret;
+
+	ret = intel_gvt_write_shared_page(vgpu, off, &desc, sizeof(desc));
+	return ret;
+}
+
+static int handle_pv_commands(struct intel_vgpu *vgpu)
+{
+	u32 cmd;
+	u32 data[32];
+	int ret;
+
+	ret = pv_command_buffer_read(vgpu, &cmd, data);
+	return ret;
+}
+
 static int handle_g2v_notification(struct intel_vgpu *vgpu, int notification)
 {
 	enum intel_gvt_gtt_type root_entry_type = GTT_TYPE_PPGTT_ROOT_L4_ENTRY;
@@ -1226,6 +1338,7 @@ static int handle_g2v_notification(struct intel_vgpu *vgpu, int notification)
 	unsigned long gpa, gfn;
 	u16 ver_major = PV_MAJOR;
 	u16 ver_minor = PV_MINOR;
+	int ret = 0;
 
 	pdps = (u64 *)&vgpu_vreg64_t(vgpu, vgtif_reg(pdp[0]));
 
@@ -1252,6 +1365,9 @@ static int handle_g2v_notification(struct intel_vgpu *vgpu, int notification)
 		intel_gvt_write_shared_page(vgpu, 0, &ver_major, 2);
 		intel_gvt_write_shared_page(vgpu, 2, &ver_minor, 2);
 		break;
+	case VGT_G2V_PV_SEND_TRIGGER:
+		ret = handle_pv_commands(vgpu);
+		break;
 	case VGT_G2V_EXECLIST_CONTEXT_CREATE:
 	case VGT_G2V_EXECLIST_CONTEXT_DESTROY:
 	case 1:	/* Remove this in guest driver. */
@@ -1259,7 +1375,7 @@ static int handle_g2v_notification(struct intel_vgpu *vgpu, int notification)
 	default:
 		gvt_vgpu_err("Invalid PV notification %d\n", notification);
 	}
-	return 0;
+	return ret;
 }
 
 static int send_display_ready_uevent(struct intel_vgpu *vgpu, int ready)
-- 
2.7.4


* [Intel-gfx] [PATCH v2 10/12] drm/i915/gvt: GVTg support ppgtt pv operations
  2020-10-09  0:04 [Intel-gfx] [PATCH v2 00/12] enhanced i915 vgpu with PV feature support Xiaolin Zhang
                   ` (8 preceding siblings ...)
  2020-10-09  0:04 ` [Intel-gfx] [PATCH v2 09/12] drm/i915/gvt: GVTg support vgpu pv CTB protocol Xiaolin Zhang
@ 2020-10-09  0:04 ` Xiaolin Zhang
  2020-10-09  0:04 ` [Intel-gfx] [PATCH v2 11/12] drm/i915/gvt: GVTg support ggtt " Xiaolin Zhang
                   ` (6 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Xiaolin Zhang @ 2020-10-09  0:04 UTC (permalink / raw)
  To: intel-gvt-dev, intel-gfx; +Cc: chris, zhiyuan.lv

Handle the PPGTT PV_CMD_BIND_PPGTT and PV_CMD_UNBIND_PPGTT commands.

For ppgtt, GVTg creates local ppgtt tables and sets up the PTEs
directly from the PV command data sent by the guest. This way GVTg no
longer tracks guest page table usage, which removes the
write-protection cost of the original PPGTT shadow page mechanism.
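
For illustration, the handler below expects the guest to describe each
request with a struct intel_vgpu_pv_vma along these lines (a sketch
inferred from the handler code; the right-hand side names are
placeholders, not identifiers from this series):

/* Guest-side packing of a PV_CMD_BIND_PPGTT request (sketch). */
struct intel_vgpu_pv_vma req = {
	.pml4      = guest_pml4,   /* root looked up via find_ppgtt_mm */
	.start     = vma_start,    /* start address of the range */
	.size      = npages,       /* number of 4K pages */
	.flags     = cache_level | BIT(I915_VMA_ALLOC_BIT),
	.dma_addrs = entries_gpa,  /* GPA of a u64[npages] PTE array, or
	                            * the single PTE value itself when
	                            * npages == 1 */
};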

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/gvt/gtt.c      | 175 ++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/gvt/gtt.h      |   4 +
 drivers/gpu/drm/i915/gvt/gvt.h      |   1 +
 drivers/gpu/drm/i915/gvt/handlers.c |  25 ++++++
 drivers/gpu/drm/i915/gvt/vgpu.c     |   2 +
 5 files changed, 207 insertions(+)

diff --git a/drivers/gpu/drm/i915/gvt/gtt.c b/drivers/gpu/drm/i915/gvt/gtt.c
index a3a4305..c2b7527 100644
--- a/drivers/gpu/drm/i915/gvt/gtt.c
+++ b/drivers/gpu/drm/i915/gvt/gtt.c
@@ -1777,6 +1777,25 @@ static int ppgtt_handle_guest_write_page_table_bytes(
 	return 0;
 }
 
+static void invalidate_mm_pv(struct intel_vgpu_mm *mm)
+{
+	struct intel_vgpu *vgpu = mm->vgpu;
+	struct intel_gvt *gvt = vgpu->gvt;
+	struct intel_gvt_gtt *gtt = &gvt->gtt;
+	struct intel_gvt_gtt_pte_ops *ops = gtt->pte_ops;
+	struct intel_gvt_gtt_entry se;
+
+	i915_vm_put(&mm->ppgtt->vm);
+
+	ppgtt_get_shadow_root_entry(mm, &se, 0);
+	if (!ops->test_present(&se))
+		return;
+	se.val64 = 0;
+	ppgtt_set_shadow_root_entry(mm, &se, 0);
+
+	mm->ppgtt_mm.shadowed = false;
+}
+
 static void invalidate_ppgtt_mm(struct intel_vgpu_mm *mm)
 {
 	struct intel_vgpu *vgpu = mm->vgpu;
@@ -1789,6 +1808,11 @@ static void invalidate_ppgtt_mm(struct intel_vgpu_mm *mm)
 	if (!mm->ppgtt_mm.shadowed)
 		return;
 
+	if (intel_vgpu_enabled_pv_cap(vgpu, PV_PPGTT)) {
+		invalidate_mm_pv(mm);
+		return;
+	}
+
 	for (index = 0; index < ARRAY_SIZE(mm->ppgtt_mm.shadow_pdps); index++) {
 		ppgtt_get_shadow_root_entry(mm, &se, index);
 
@@ -1806,6 +1830,26 @@ static void invalidate_ppgtt_mm(struct intel_vgpu_mm *mm)
 	mm->ppgtt_mm.shadowed = false;
 }
 
+static int shadow_mm_pv(struct intel_vgpu_mm *mm)
+{
+	struct intel_vgpu *vgpu = mm->vgpu;
+	struct intel_gvt *gvt = vgpu->gvt;
+	struct intel_gvt_gtt_entry se;
+
+	mm->ppgtt = i915_ppgtt_create(gvt->gt);
+	if (IS_ERR(mm->ppgtt)) {
+		gvt_vgpu_err("fail to create ppgtt for pdp 0x%llx\n",
+			     px_dma(mm->ppgtt->pd));
+		return PTR_ERR(mm->ppgtt);
+	}
+
+	se.type = GTT_TYPE_PPGTT_ROOT_L4_ENTRY;
+	se.val64 = px_dma(mm->ppgtt->pd);
+	ppgtt_set_shadow_root_entry(mm, &se, 0);
+	mm->ppgtt_mm.shadowed = true;
+
+	return 0;
+}
 
 static int shadow_ppgtt_mm(struct intel_vgpu_mm *mm)
 {
@@ -1820,6 +1864,9 @@ static int shadow_ppgtt_mm(struct intel_vgpu_mm *mm)
 	if (mm->ppgtt_mm.shadowed)
 		return 0;
 
+	if (intel_vgpu_enabled_pv_cap(vgpu, PV_PPGTT))
+		return shadow_mm_pv(mm);
+
 	mm->ppgtt_mm.shadowed = true;
 
 	for (index = 0; index < ARRAY_SIZE(mm->ppgtt_mm.guest_pdps); index++) {
@@ -2606,6 +2653,134 @@ static int setup_spt_oos(struct intel_gvt *gvt)
 	return ret;
 }
 
+static int intel_vgpu_pv_ppgtt_insert_4lvl(struct intel_vgpu *vgpu,
+					   struct intel_vgpu_mm *mm,
+					   struct intel_vgpu_pv_vma *pvvma, u64 *gfns)
+{
+	u32 size = pvvma->size;
+	int ret = 0;
+	u32 cache_level;
+	struct sg_table st;
+	struct scatterlist *sg = NULL;
+	struct i915_vma vma;
+	unsigned long gfn;
+	dma_addr_t dma_addr;
+	int i;
+	u64 pte_flag;
+
+	cache_level = pvvma->flags & 0xffff;
+
+	if (sg_alloc_table(&st, size, GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	pte_flag = gfns[0] & 0xFFF;
+	for_each_sg(st.sgl, sg, size, i) {
+		sg->offset = 0;
+		sg->length = PAGE_SIZE;
+
+		gfn = gfns[i] >> PAGE_SHIFT;
+		ret = intel_gvt_hypervisor_dma_map_guest_page(vgpu, gfn,
+							      PAGE_SIZE, &dma_addr);
+		if (ret) {
+			gvt_vgpu_err("fail to translate gfn: 0x%lx\n", gfn);
+			sg_free_table(&st);
+			return -ENXIO;
+		}
+		sg_dma_address(sg) = dma_addr | pte_flag;
+		sg_dma_len(sg) = PAGE_SIZE;
+	}
+
+	memset(&vma, 0, sizeof(vma));
+	vma.node.start = pvvma->start;
+	vma.pages = &st;
+	mm->ppgtt->vm.insert_entries(&mm->ppgtt->vm, &vma, cache_level, 0);
+	sg_free_table(&st);
+
+fail:
+	return ret;
+}
+
+static void intel_vgpu_pv_ppgtt_bind(struct intel_vgpu *vgpu,
+				     struct intel_vgpu_mm *mm,
+				     struct intel_vgpu_pv_vma *vma, u64 *gfns)
+{
+	struct i915_vm_pt_stash stash = {};
+
+	if (vma->flags & BIT(I915_VMA_ALLOC_BIT)) {
+		i915_vm_alloc_pt_stash(&mm->ppgtt->vm, &stash, vma->size << PAGE_SHIFT);
+		i915_vm_pin_pt_stash(&mm->ppgtt->vm, &stash);
+		mm->ppgtt->vm.allocate_va_range(&mm->ppgtt->vm, &stash,
+						vma->start, vma->size << PAGE_SHIFT);
+	}
+
+	intel_vgpu_pv_ppgtt_insert_4lvl(vgpu, mm, vma, gfns);
+}
+
+static void intel_vgpu_pv_ppgtt_unbind(struct intel_vgpu *vgpu,
+				       struct intel_vgpu_mm *mm,
+				       struct intel_vgpu_pv_vma *vma, u64 *gfns)
+{
+	u32 size = vma->size;
+	int i;
+	u64 dma_addr;
+
+	mm->ppgtt->vm.clear_range(&mm->ppgtt->vm,
+			vma->start, size << PAGE_SHIFT);
+
+	for (i = 0; i < size; i++) {
+		dma_addr = gfns[i];
+		intel_gvt_hypervisor_dma_unmap_guest_page(vgpu, dma_addr);
+	}
+}
+
+int intel_vgpu_handle_pv_vma(struct intel_vgpu *vgpu,
+			     struct intel_vgpu_mm *mm, u32 cmd, u32 data[])
+{
+	struct intel_vgpu_pv_vma *vma = (struct intel_vgpu_pv_vma *)data;
+	int ret = 0;
+
+	u32 num_pages;
+	u64 dma_addr_array[32];
+	u64 *dma_addr_mem;
+	u64 *dma_addrs = dma_addr_array;
+
+	num_pages = vma->size;
+	if (num_pages == 1) {
+		dma_addrs[0] = vma->dma_addrs;
+		goto out;
+	}
+
+	if (num_pages > ARRAY_SIZE(dma_addr_array)) {
+		dma_addr_mem = kmalloc_array(num_pages,
+					     sizeof(u64), GFP_KERNEL);
+		if (!dma_addr_mem)
+			return -ENOMEM;
+		dma_addrs = dma_addr_mem;
+	}
+
+	ret = intel_gvt_hypervisor_read_gpa(vgpu, vma->dma_addrs,
+					    dma_addrs,
+					    num_pages * sizeof(u64));
+	if (ret) {
+		if (num_pages > ARRAY_SIZE(dma_addr_array))
+			kfree(dma_addr_mem);
+		return ret;
+	}
+
+out:
+	switch (cmd) {
+	case PV_CMD_UNBIND_PPGTT:
+		intel_vgpu_pv_ppgtt_unbind(vgpu, mm, vma, dma_addrs);
+		break;
+	case PV_CMD_BIND_PPGTT:
+		intel_vgpu_pv_ppgtt_bind(vgpu, mm, vma, dma_addrs);
+		break;
+	default:
+		break;
+	}
+
+	if (num_pages > ARRAY_SIZE(dma_addr_array))
+		kfree(dma_addr_mem);
+
+	return ret;
+}
+
 /**
  * intel_vgpu_find_ppgtt_mm - find a PPGTT mm object
  * @vgpu: a vGPU
diff --git a/drivers/gpu/drm/i915/gvt/gtt.h b/drivers/gpu/drm/i915/gvt/gtt.h
index 52d0d88..bbe9c62 100644
--- a/drivers/gpu/drm/i915/gvt/gtt.h
+++ b/drivers/gpu/drm/i915/gvt/gtt.h
@@ -141,6 +141,7 @@ struct intel_gvt_partial_pte {
 
 struct intel_vgpu_mm {
 	enum intel_gvt_mm_type type;
+	struct i915_ppgtt *ppgtt;
 	struct intel_vgpu *vgpu;
 
 	struct kref ref;
@@ -281,4 +282,7 @@ int intel_vgpu_emulate_ggtt_mmio_write(struct intel_vgpu *vgpu,
 
 void intel_vgpu_destroy_all_ppgtt_mm(struct intel_vgpu *vgpu);
 
+int intel_vgpu_handle_pv_vma(struct intel_vgpu *vgpu,
+			     struct intel_vgpu_mm *mm, u32 action, u32 data[]);
+
 #endif /* _GVT_GTT_H_ */
diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
index ca2f656..605488c 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.h
+++ b/drivers/gpu/drm/i915/gvt/gvt.h
@@ -50,6 +50,7 @@
 #include "dmabuf.h"
 #include "page_track.h"
 #include "i915_vgpu.h"
+#include "i915_pvinfo.h"
 
 #define GVT_MAX_VGPU 8
 
diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
index a9f91da..732b4ed 100644
--- a/drivers/gpu/drm/i915/gvt/handlers.c
+++ b/drivers/gpu/drm/i915/gvt/handlers.c
@@ -1322,11 +1322,36 @@ static int pv_command_buffer_read(struct intel_vgpu *vgpu,
 
 static int handle_pv_commands(struct intel_vgpu *vgpu)
 {
+	struct intel_vgpu_mm *mm;
+	struct intel_vgpu_pv_vma *vma;
+	u64 pdp;
 	u32 cmd;
 	u32 data[32];
 	int ret;
 
 	ret = pv_command_buffer_read(vgpu, &cmd, data);
+
+	if (ret)
+		return ret;
+
+	switch (cmd) {
+	case PV_CMD_BIND_PPGTT:
+	case PV_CMD_UNBIND_PPGTT:
+		vma = (struct intel_vgpu_pv_vma *)data;
+		pdp = vma->pml4;
+		mm = intel_vgpu_find_ppgtt_mm(vgpu, &pdp);
+		if (!mm) {
+			gvt_vgpu_err("failed to find pdp 0x%llx\n", pdp);
+			ret = -EINVAL;
+			enter_failsafe_mode(vgpu, GVT_FAILSAFE_GUEST_ERR);
+			return ret;
+		}
+		ret = intel_vgpu_handle_pv_vma(vgpu, mm, cmd, data);
+		break;
+	default:
+		break;
+	}
+
 	return ret;
 }
 
diff --git a/drivers/gpu/drm/i915/gvt/vgpu.c b/drivers/gpu/drm/i915/gvt/vgpu.c
index 09bd00a..bf70e5517 100644
--- a/drivers/gpu/drm/i915/gvt/vgpu.c
+++ b/drivers/gpu/drm/i915/gvt/vgpu.c
@@ -50,6 +50,8 @@ void populate_pvinfo_page(struct intel_vgpu *vgpu)
 	vgpu_vreg_t(vgpu, vgtif_reg(vgt_caps)) |= VGT_CAPS_HUGE_GTT;
 	vgpu_vreg_t(vgpu, vgtif_reg(vgt_caps)) |= VGT_CAPS_PV;
 
+	vgpu_vreg_t(vgpu, vgtif_reg(pv_caps)) = PV_PPGTT;
+
 	vgpu_vreg_t(vgpu, vgtif_reg(avail_rs.mappable_gmadr.base)) =
 		vgpu_aperture_gmadr_base(vgpu);
 	vgpu_vreg_t(vgpu, vgtif_reg(avail_rs.mappable_gmadr.size)) =
-- 
2.7.4


* [Intel-gfx] [PATCH v2 11/12] drm/i915/gvt: GVTg support ggtt pv operations
  2020-10-09  0:04 [Intel-gfx] [PATCH v2 00/12] enhanced i915 vgpu with PV feature support Xiaolin Zhang
                   ` (9 preceding siblings ...)
  2020-10-09  0:04 ` [Intel-gfx] [PATCH v2 10/12] drm/i915/gvt: GVTg support ppgtt pv operations Xiaolin Zhang
@ 2020-10-09  0:04 ` Xiaolin Zhang
  2020-10-09  0:04 ` [Intel-gfx] [PATCH v2 12/12] drm/i915/gvt: GVTg support pv workload submssion Xiaolin Zhang
                   ` (5 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Xiaolin Zhang @ 2020-10-09  0:04 UTC (permalink / raw)
  To: intel-gvt-dev, intel-gfx; +Cc: chris, zhiyuan.lv

Handle the GGTT PV_CMD_BIND_GGTT and PV_CMD_UNBIND_GGTT commands.

With pv ggtt, bind/unbind is performed per vma instead of per GGTT
entry MMIO update, which improves efficiency.
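
To make the efficiency argument concrete: under trap-and-emulate every
GGTT PTE update is an individually trapped MMIO write, whereas the PV
path sends a single command for the whole range (a guest-side sketch
with hypothetical names; "gsm" is the ioremapped GTT PTE array and
pv_send_bind_ggtt() stands in for the CTB send added earlier in the
series):

/* Trap-and-emulate: one trapped MMIO write per GGTT entry. */
for (i = 0; i < npages; i++)
	writeq(pte[i], gsm + (start >> PAGE_SHIFT) + i);

/* PV: a single PV_CMD_BIND_GGTT message covers the range. */
pv_send_bind_ggtt(start, npages, pte_array_gpa);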

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/gvt/gtt.c      | 83 +++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/gvt/handlers.c |  4 ++
 drivers/gpu/drm/i915/gvt/vgpu.c     |  2 +-
 3 files changed, 88 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gvt/gtt.c b/drivers/gpu/drm/i915/gvt/gtt.c
index c2b7527..7dcd427 100644
--- a/drivers/gpu/drm/i915/gvt/gtt.c
+++ b/drivers/gpu/drm/i915/gvt/gtt.c
@@ -2734,6 +2734,83 @@ static void intel_vgpu_pv_ppgtt_unbind(struct intel_vgpu *vgpu,
 	}
 }
 
+static int intel_vgpu_pv_ggtt_bind(struct intel_vgpu *vgpu,
+				   struct intel_vgpu_pv_vma *vma, u64 *gpas)
+{
+	u64 off = (vma->start / I915_GTT_PAGE_SIZE) << 3;
+	u32 size = vma->size;
+	struct intel_vgpu_mm *ggtt_mm = vgpu->gtt.ggtt_mm;
+	struct intel_gvt_gtt_pte_ops *ops = vgpu->gvt->gtt.pte_ops;
+	unsigned long g_gtt_index = off >> 3;
+	struct intel_gvt_gtt_entry e = {.val64 = 0, .type = GTT_TYPE_GGTT_PTE};
+	struct intel_gvt_gtt_entry m = {.val64 = 0, .type = GTT_TYPE_GGTT_PTE};
+	int ret = 0;
+	int i;
+	u64 gfn;
+	dma_addr_t dma_addr;
+
+	for (i = 0; i < size; i++) {
+		e.val64 = gpas[i];
+		if (!ops->test_present(&e)) {
+			ops->set_pfn(&m, vgpu->gvt->gtt.scratch_mfn);
+			ops->clear_present(&m);
+			goto out;
+		}
+
+		gfn = ops->get_pfn(&e);
+		m.val64 = e.val64;
+		ret = intel_gvt_hypervisor_dma_map_guest_page(vgpu, gfn,
+							      PAGE_SIZE, &dma_addr);
+		if (ret) {
+			gvt_vgpu_err("failed to map guest ggtt entry\n");
+			ops->set_pfn(&m, vgpu->gvt->gtt.scratch_mfn);
+		} else {
+			ops->set_pfn(&m, dma_addr >> PAGE_SHIFT);
+		}
+out:
+		g_gtt_index = off >> 3;
+		ggtt_set_guest_entry(ggtt_mm, &e, g_gtt_index);
+		ggtt_get_host_entry(ggtt_mm, &e, g_gtt_index);
+		ggtt_invalidate_pte(vgpu, &e);
+		ggtt_set_host_entry(ggtt_mm, &m, g_gtt_index);
+		off += 8;
+	}
+
+	ggtt_invalidate(vgpu->gvt->gt);
+	return ret;
+}
+
+static int intel_vgpu_pv_ggtt_unbind(struct intel_vgpu *vgpu,
+				     struct intel_vgpu_pv_vma *vma, u64 *gpas)
+{
+	u64 off = (vma->start / I915_GTT_PAGE_SIZE) << 3;
+	u32 size = vma->size;
+	struct intel_vgpu_mm *ggtt_mm = vgpu->gtt.ggtt_mm;
+	struct intel_gvt_gtt_pte_ops *ops = vgpu->gvt->gtt.pte_ops;
+	unsigned long g_gtt_index = off >> 3;
+	struct intel_gvt_gtt_entry e = {.val64 = 0, .type = GTT_TYPE_GGTT_PTE};
+	struct intel_gvt_gtt_entry m = {.val64 = 0, .type = GTT_TYPE_GGTT_PTE};
+	int ret = 0;
+	int i;
+
+	for (i = 0; i < size; i++) {
+		g_gtt_index = off >> 3;
+		e.val64 = gpas[i];
+		ops->clear_present(&e);
+		ggtt_set_guest_entry(ggtt_mm, &e, g_gtt_index);
+		/* unmap the old host entry before pointing it at scratch */
+		ggtt_get_host_entry(ggtt_mm, &e, g_gtt_index);
+		ggtt_invalidate_pte(vgpu, &e);
+		ops->set_pfn(&m, vgpu->gvt->gtt.scratch_mfn);
+		ops->clear_present(&m);
+		ggtt_set_host_entry(ggtt_mm, &m, g_gtt_index);
+		off += 8;
+	}
+
+	ggtt_invalidate(vgpu->gvt->gt);
+
+	return ret;
+}
+
 int intel_vgpu_handle_pv_vma(struct intel_vgpu *vgpu,
 			     struct intel_vgpu_mm *mm, u32 cmd, u32 data[])
 {
@@ -2771,6 +2848,12 @@ int intel_vgpu_handle_pv_vma(struct intel_vgpu *vgpu,
 	case PV_CMD_BIND_PPGTT:
 		intel_vgpu_pv_ppgtt_bind(vgpu, mm, vma, dma_addrs);
 		break;
+	case PV_CMD_BIND_GGTT:
+		ret = intel_vgpu_pv_ggtt_bind(vgpu, vma, dma_addrs);
+		break;
+	case PV_CMD_UNBIND_GGTT:
+		ret = intel_vgpu_pv_ggtt_unbind(vgpu, vma, dma_addrs);
+		break;
 	default:
 		break;
 	}
diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
index 732b4ed..7f83b77 100644
--- a/drivers/gpu/drm/i915/gvt/handlers.c
+++ b/drivers/gpu/drm/i915/gvt/handlers.c
@@ -1348,6 +1348,10 @@ static int handle_pv_commands(struct intel_vgpu *vgpu)
 		}
 		ret = intel_vgpu_handle_pv_vma(vgpu, mm, cmd, data);
 		break;
+	case PV_CMD_BIND_GGTT:
+	case PV_CMD_UNBIND_GGTT:
+		ret = intel_vgpu_handle_pv_vma(vgpu, NULL, cmd, data);
+		break;
 	default:
 		break;
 	}
diff --git a/drivers/gpu/drm/i915/gvt/vgpu.c b/drivers/gpu/drm/i915/gvt/vgpu.c
index bf70e5517..c1e2f82 100644
--- a/drivers/gpu/drm/i915/gvt/vgpu.c
+++ b/drivers/gpu/drm/i915/gvt/vgpu.c
@@ -50,7 +50,7 @@ void populate_pvinfo_page(struct intel_vgpu *vgpu)
 	vgpu_vreg_t(vgpu, vgtif_reg(vgt_caps)) |= VGT_CAPS_HUGE_GTT;
 	vgpu_vreg_t(vgpu, vgtif_reg(vgt_caps)) |= VGT_CAPS_PV;
 
-	vgpu_vreg_t(vgpu, vgtif_reg(pv_caps)) = PV_PPGTT;
+	vgpu_vreg_t(vgpu, vgtif_reg(pv_caps)) = PV_PPGTT | PV_GGTT;
 
 	vgpu_vreg_t(vgpu, vgtif_reg(avail_rs.mappable_gmadr.base)) =
 		vgpu_aperture_gmadr_base(vgpu);
-- 
2.7.4


* [Intel-gfx] [PATCH v2 12/12] drm/i915/gvt: GVTg support pv workload submssion
  2020-10-09  0:04 [Intel-gfx] [PATCH v2 00/12] enhanced i915 vgpu with PV feature support Xiaolin Zhang
                   ` (10 preceding siblings ...)
  2020-10-09  0:04 ` [Intel-gfx] [PATCH v2 11/12] drm/i915/gvt: GVTg support ggtt " Xiaolin Zhang
@ 2020-10-09  0:04 ` Xiaolin Zhang
  2020-10-09 11:14 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for enhanced i915 vgpu with PV feature support (rev2) Patchwork
                   ` (4 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Xiaolin Zhang @ 2020-10-09  0:04 UTC (permalink / raw)
  To: intel-gvt-dev, intel-gfx; +Cc: chris, zhiyuan.lv

Implement pv workload submission support within GVTg.

GVTg reads the engine submission data (engine lrc) from the shared_page
through the PV interface, which reduces the MMIO trap cost. It also
eliminates execlist HW behavior emulation by no longer injecting
context switch interrupts into the guest under pv workload submission
mode, improving efficiency.
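
For reference, handle_pv_submission() below consumes one per-engine
submission slot from the PV shared memory; the layout it relies on is
roughly the following (a sketch; the authoritative definition lives in
the guest-side patches of this series):

/* Per-engine submission slot in PV shared memory (sketch). */
struct intel_vgpu_pv_submission {
	u64 descs[2];   /* ELSP context descriptors, ports 0/1 */
	bool submitted; /* guest sets it; host clears when consumed */
};

/* Engine N's slot sits at pv_sub_gpa + N * sizeof(slot). */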

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/gvt/gvt.h      |   1 +
 drivers/gpu/drm/i915/gvt/handlers.c | 102 ++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/gvt/vgpu.c     |   1 +
 3 files changed, 104 insertions(+)

diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
index 605488c..5cd78aa 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.h
+++ b/drivers/gpu/drm/i915/gvt/gvt.h
@@ -217,6 +217,7 @@ struct intel_vgpu {
 	u32 pv_caps;
 	u64 shared_page_gpa;
 	bool shared_page_enabled;
+	u64 pv_sub_gpa;
 };
 
 static inline void *intel_vgpu_vdev(struct intel_vgpu *vgpu)
diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
index 7f83b77..4c19496 100644
--- a/drivers/gpu/drm/i915/gvt/handlers.c
+++ b/drivers/gpu/drm/i915/gvt/handlers.c
@@ -1322,6 +1322,7 @@ static int pv_command_buffer_read(struct intel_vgpu *vgpu,
 
 static int handle_pv_commands(struct intel_vgpu *vgpu)
 {
+	struct intel_vgpu_pv_cap_addr *cap_addr;
 	struct intel_vgpu_mm *mm;
 	struct intel_vgpu_pv_vma *vma;
 	u64 pdp;
@@ -1335,6 +1336,17 @@ static int handle_pv_commands(struct intel_vgpu *vgpu)
 		return ret;
 
 	switch (cmd) {
+	case PV_CMD_REGISTER_CAP_GPA:
+		cap_addr = (struct intel_vgpu_pv_cap_addr *)data;
+		switch (cap_addr->cap) {
+		case PV_SUBMISSION:
+			vgpu->pv_sub_gpa = cap_addr->gpa;
+			break;
+		default:
+			gvt_vgpu_err("invalid pv cap 0x%x\n", cap_addr->cap);
+			break;
+		}
+		break;
 	case PV_CMD_BIND_PPGTT:
 	case PV_CMD_UNBIND_PPGTT:
 		vma = (struct intel_vgpu_pv_vma *)data;
@@ -1857,6 +1869,92 @@ static int mmio_read_from_hw(struct intel_vgpu *vgpu,
 	return intel_vgpu_default_mmio_read(vgpu, offset, p_data, bytes);
 }
 
+static int pv_prepare_workload(struct intel_vgpu_workload *workload)
+{
+	return 0;
+}
+
+static int pv_complete_workload(struct intel_vgpu_workload *workload)
+{
+	return 0;
+}
+
+static int submit_context_pv(struct intel_vgpu *vgpu,
+			     const struct intel_engine_cs *engine,
+			     struct execlist_ctx_descriptor_format *desc,
+			     bool emulate_schedule_in)
+{
+	struct intel_vgpu_workload *workload = NULL;
+
+	workload = intel_vgpu_create_workload(vgpu, engine, desc);
+	if (IS_ERR(workload))
+		return PTR_ERR(workload);
+
+	workload->prepare = pv_prepare_workload;
+	workload->complete = pv_complete_workload;
+
+	intel_vgpu_queue_workload(workload);
+	return 0;
+}
+
+#define get_desc_from_elsp_dwords(ed, i) \
+	((struct execlist_ctx_descriptor_format *)&((ed)->data[(i) * 2]))
+
+static int handle_pv_submission(struct intel_vgpu *vgpu,
+				const struct intel_engine_cs *engine)
+{
+	struct intel_vgpu_execlist *execlist;
+	struct intel_vgpu_pv_submission sub_data;
+	struct execlist_ctx_descriptor_format *desc[2];
+	u32 ring_id = engine->id;
+	u64 base = vgpu->pv_sub_gpa + ring_id * sizeof(struct intel_vgpu_pv_submission);
+	u32 submitted_off = offsetof(struct intel_vgpu_pv_submission, submitted);
+	bool submitted = false;
+	int i, ret;
+
+	execlist = &vgpu->submission.execlist[ring_id];
+	if (intel_gvt_hypervisor_read_gpa(vgpu, base, &sub_data, sizeof(sub_data)))
+		return -EINVAL;
+
+	desc[0] = (struct execlist_ctx_descriptor_format *)&sub_data.descs[0];
+	desc[1] = (struct execlist_ctx_descriptor_format *)&sub_data.descs[1];
+
+	if (!desc[0]->valid) {
+		gvt_vgpu_err("invalid elsp submission, desc0 is invalid\n");
+		goto inv_desc;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(desc); i++) {
+		if (!desc[i]->valid)
+			continue;
+		if (!desc[i]->privilege_access) {
+			gvt_vgpu_err("unexpected GGTT elsp submission\n");
+			goto inv_desc;
+		}
+	}
+
+	/* submit workload */
+	for (i = 0; i < ARRAY_SIZE(desc); i++) {
+		if (!desc[i]->valid)
+			continue;
+
+		ret = submit_context_pv(vgpu, engine, desc[i], i == 0);
+		if (ret) {
+			gvt_vgpu_err("failed to submit desc %d\n", i);
+			return ret;
+		}
+	}
+
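+	/* Ack the submission by clearing the guest's submitted flag. */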
+	base += submitted_off;
+	ret = intel_gvt_hypervisor_write_gpa(vgpu, base, &submitted, 1);
+	return ret;
+
+inv_desc:
+	gvt_vgpu_err("descriptors content: desc0 %08x %08x desc1 %08x %08x\n",
+		     desc[0]->udw, desc[0]->ldw, desc[1]->udw, desc[1]->ldw);
+	return -EINVAL;
+}
+
 static int elsp_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
 		void *p_data, unsigned int bytes)
 {
@@ -1869,6 +1967,10 @@ static int elsp_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
 	if (drm_WARN_ON(&i915->drm, !engine))
 		return -EINVAL;
 
+	if (intel_vgpu_enabled_pv_cap(vgpu, PV_SUBMISSION) &&
+	    data == PV_CMD_SUBMIT_WORKLOAD)
+		return handle_pv_submission(vgpu, engine);
+
 	execlist = &vgpu->submission.execlist[engine->id];
 
 	execlist->elsp_dwords.data[3 - execlist->elsp_dwords.index] = data;
diff --git a/drivers/gpu/drm/i915/gvt/vgpu.c b/drivers/gpu/drm/i915/gvt/vgpu.c
index c1e2f82..6ed2a3c 100644
--- a/drivers/gpu/drm/i915/gvt/vgpu.c
+++ b/drivers/gpu/drm/i915/gvt/vgpu.c
@@ -51,6 +51,7 @@ void populate_pvinfo_page(struct intel_vgpu *vgpu)
 	vgpu_vreg_t(vgpu, vgtif_reg(vgt_caps)) |= VGT_CAPS_PV;
 
 	vgpu_vreg_t(vgpu, vgtif_reg(pv_caps)) = PV_PPGTT | PV_GGTT;
+	vgpu_vreg_t(vgpu, vgtif_reg(pv_caps)) |= PV_SUBMISSION;
 
 	vgpu_vreg_t(vgpu, vgtif_reg(avail_rs.mappable_gmadr.base)) =
 		vgpu_aperture_gmadr_base(vgpu);
-- 
2.7.4


* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for enhanced i915 vgpu with PV feature support (rev2)
  2020-10-09  0:04 [Intel-gfx] [PATCH v2 00/12] enhanced i915 vgpu with PV feature support Xiaolin Zhang
                   ` (11 preceding siblings ...)
  2020-10-09  0:04 ` [Intel-gfx] [PATCH v2 12/12] drm/i915/gvt: GVTg support pv workload submssion Xiaolin Zhang
@ 2020-10-09 11:14 ` Patchwork
  2020-10-09 11:15 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
                   ` (3 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Patchwork @ 2020-10-09 11:14 UTC (permalink / raw)
  To: Xiaolin Zhang; +Cc: intel-gfx

== Series Details ==

Series: enhanced i915 vgpu with PV feature support (rev2)
URL   : https://patchwork.freedesktop.org/series/81400/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
7432e9c8e16e drm/i915: introduced vgpu pv capability
0a6bfa99d671 drm/i915: vgpu shared memory setup for pv support
8ab37fac488e drm/i915: vgpu pv command buffer transport protocol
fc97dc9d2403 drm/i915: vgpu ppgtt page table pv support
e46b20e055ad drm/i915: vgpu ggtt page table pv support
cab28ddee64b drm/i915: vgpu workload submisison pv support
-:280: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#280: 
new file mode 100644

total: 0 errors, 1 warnings, 0 checks, 551 lines checked
05c854e9ca3a drm/i915/gvt: GVTg expose pv_caps PVINFO register
1a14fbb8eeef drm/i915/gvt: GVTg handle guest shared_page setup
193236865422 drm/i915/gvt: GVTg support vgpu pv CTB protocol
6ec632cd384a drm/i915/gvt: GVTg support ppgtt pv operations
f6a8c1ca2f74 drm/i915/gvt: GVTg support ggtt pv operations
a7035c961144 drm/i915/gvt: GVTg support pv workload submssion



* [Intel-gfx] ✗ Fi.CI.SPARSE: warning for enhanced i915 vgpu with PV feature support (rev2)
  2020-10-09  0:04 [Intel-gfx] [PATCH v2 00/12] enhanced i915 vgpu with PV feature support Xiaolin Zhang
                   ` (12 preceding siblings ...)
  2020-10-09 11:14 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for enhanced i915 vgpu with PV feature support (rev2) Patchwork
@ 2020-10-09 11:15 ` Patchwork
  2020-10-09 11:17 ` [Intel-gfx] ✗ Fi.CI.DOCS: " Patchwork
                   ` (2 subsequent siblings)
  16 siblings, 0 replies; 18+ messages in thread
From: Patchwork @ 2020-10-09 11:15 UTC (permalink / raw)
  To: Xiaolin Zhang; +Cc: intel-gfx

== Series Details ==

Series: enhanced i915 vgpu with PV feature support (rev2)
URL   : https://patchwork.freedesktop.org/series/81400/
State : warning

== Summary ==

$ dim sparse --fast origin/drm-tip
Sparse version: v0.6.2
Fast mode used, each commit won't be checked separately.
+drivers/gpu/drm/i915/gt/intel_reset.c:1312:5: warning: context imbalance in 'intel_gt_reset_trylock' - different lock contexts for basic block
+drivers/gpu/drm/i915/i915_perf.c:1440:15: warning: memset with byte count of 16777216
+drivers/gpu/drm/i915/i915_perf.c:1494:15: warning: memset with byte count of 16777216
+./include/linux/seqlock.h:752:24: warning: trying to copy expression type 31
+./include/linux/seqlock.h:778:16: warning: trying to copy expression type 31
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'fwtable_read16' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'fwtable_read32' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'fwtable_read64' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'fwtable_read8' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'fwtable_write16' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'fwtable_write32' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'fwtable_write8' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen11_fwtable_read16' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen11_fwtable_read32' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen11_fwtable_read64' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen11_fwtable_read8' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen11_fwtable_write16' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen11_fwtable_write32' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen11_fwtable_write8' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen12_fwtable_read16' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen12_fwtable_read32' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen12_fwtable_read64' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen12_fwtable_read8' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen12_fwtable_write16' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen12_fwtable_write32' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen12_fwtable_write8' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen6_read16' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen6_read32' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen6_read64' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen6_read8' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen6_write16' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen6_write32' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen6_write8' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen8_write16' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen8_write32' - different lock contexts for basic block
+./include/linux/spinlock.h:409:9: warning: context imbalance in 'gen8_write8' - different lock contexts for basic block



* [Intel-gfx] ✗ Fi.CI.DOCS: warning for enhanced i915 vgpu with PV feature support (rev2)
  2020-10-09  0:04 [Intel-gfx] [PATCH v2 00/12] enhanced i915 vgpu with PV feature support Xiaolin Zhang
                   ` (13 preceding siblings ...)
  2020-10-09 11:15 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
@ 2020-10-09 11:17 ` Patchwork
  2020-10-09 11:36 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
  2020-10-09 15:37 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
  16 siblings, 0 replies; 18+ messages in thread
From: Patchwork @ 2020-10-09 11:17 UTC (permalink / raw)
  To: Xiaolin Zhang; +Cc: intel-gfx

== Series Details ==

Series: enhanced i915 vgpu with PV feature support (rev2)
URL   : https://patchwork.freedesktop.org/series/81400/
State : warning

== Summary ==

$ make htmldocs 2>&1 > /dev/null | grep i915
./drivers/gpu/drm/i915/i915_vgpu.c:513: warning: Function parameter or member 'i915' not described in 'intel_vgpu_pv_wait_desc_update'



* [Intel-gfx] ✓ Fi.CI.BAT: success for enhanced i915 vgpu with PV feature support (rev2)
  2020-10-09  0:04 [Intel-gfx] [PATCH v2 00/12] enhanced i915 vgpu with PV feature support Xiaolin Zhang
                   ` (14 preceding siblings ...)
  2020-10-09 11:17 ` [Intel-gfx] ✗ Fi.CI.DOCS: " Patchwork
@ 2020-10-09 11:36 ` Patchwork
  2020-10-09 15:37 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
  16 siblings, 0 replies; 18+ messages in thread
From: Patchwork @ 2020-10-09 11:36 UTC (permalink / raw)
  To: Xiaolin Zhang; +Cc: intel-gfx


== Series Details ==

Series: enhanced i915 vgpu with PV feature support (rev2)
URL   : https://patchwork.freedesktop.org/series/81400/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_9119 -> Patchwork_18665
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/index.html

Known issues
------------

  Here are the changes found in Patchwork_18665 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@i915_pm_rpm@module-reload:
    - fi-byt-j1900:       [PASS][1] -> [DMESG-WARN][2] ([i915#1982]) +1 similar issue
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/fi-byt-j1900/igt@i915_pm_rpm@module-reload.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/fi-byt-j1900/igt@i915_pm_rpm@module-reload.html

  
#### Possible fixes ####

  * igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic:
    - {fi-tgl-dsi}:       [DMESG-WARN][3] ([i915#1982]) -> [PASS][4]
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/fi-tgl-dsi/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/fi-tgl-dsi/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic.html
    - fi-bsw-kefka:       [DMESG-WARN][5] ([i915#1982]) -> [PASS][6]
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/fi-bsw-kefka/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/fi-bsw-kefka/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic.html

  * igt@kms_cursor_legacy@basic-flip-after-cursor-atomic:
    - fi-icl-u2:          [DMESG-WARN][7] ([i915#1982]) -> [PASS][8]
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/fi-icl-u2/igt@kms_cursor_legacy@basic-flip-after-cursor-atomic.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/fi-icl-u2/igt@kms_cursor_legacy@basic-flip-after-cursor-atomic.html

  * igt@vgem_basic@unload:
    - fi-skl-guc:         [DMESG-WARN][9] ([i915#2203]) -> [PASS][10]
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/fi-skl-guc/igt@vgem_basic@unload.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/fi-skl-guc/igt@vgem_basic@unload.html
    - fi-kbl-x1275:       [DMESG-WARN][11] ([i915#95]) -> [PASS][12]
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/fi-kbl-x1275/igt@vgem_basic@unload.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/fi-kbl-x1275/igt@vgem_basic@unload.html

  
#### Warnings ####

  * igt@i915_pm_rpm@module-reload:
    - fi-kbl-x1275:       [DMESG-FAIL][13] ([i915#62] / [i915#95]) -> [DMESG-FAIL][14] ([i915#62])
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/fi-kbl-x1275/igt@i915_pm_rpm@module-reload.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/fi-kbl-x1275/igt@i915_pm_rpm@module-reload.html
    - fi-kbl-guc:         [DMESG-FAIL][15] ([i915#2203]) -> [DMESG-WARN][16] ([i915#2203])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/fi-kbl-guc/igt@i915_pm_rpm@module-reload.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/fi-kbl-guc/igt@i915_pm_rpm@module-reload.html

  * igt@kms_cursor_legacy@basic-flip-after-cursor-atomic:
    - fi-kbl-x1275:       [DMESG-WARN][17] ([i915#62] / [i915#92]) -> [DMESG-WARN][18] ([i915#62] / [i915#92] / [i915#95])
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/fi-kbl-x1275/igt@kms_cursor_legacy@basic-flip-after-cursor-atomic.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/fi-kbl-x1275/igt@kms_cursor_legacy@basic-flip-after-cursor-atomic.html

  * igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-c:
    - fi-kbl-x1275:       [DMESG-WARN][19] ([i915#62] / [i915#92] / [i915#95]) -> [DMESG-WARN][20] ([i915#62] / [i915#92]) +8 similar issues
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/fi-kbl-x1275/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-c.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/fi-kbl-x1275/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-c.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [i915#1982]: https://gitlab.freedesktop.org/drm/intel/issues/1982
  [i915#2203]: https://gitlab.freedesktop.org/drm/intel/issues/2203
  [i915#62]: https://gitlab.freedesktop.org/drm/intel/issues/62
  [i915#92]: https://gitlab.freedesktop.org/drm/intel/issues/92
  [i915#95]: https://gitlab.freedesktop.org/drm/intel/issues/95


Participating hosts (46 -> 39)
------------------------------

  Missing    (7): fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan fi-ctg-p8600 fi-byt-clapper fi-bdw-samus 


Build changes
-------------

  * Linux: CI_DRM_9119 -> Patchwork_18665

  CI-20190529: 20190529
  CI_DRM_9119: a3e5da35ec06db35cface113da3befab8e00e03a @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5806: 6adb80cd84310b6d90a5259768d03ebb2c30fe50 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_18665: a7035c9611445c4051e7d688632388000b6fdc73 @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

a7035c961144 drm/i915/gvt: GVTg support pv workload submssion
f6a8c1ca2f74 drm/i915/gvt: GVTg support ggtt pv operations
6ec632cd384a drm/i915/gvt: GVTg support ppgtt pv operations
193236865422 drm/i915/gvt: GVTg support vgpu pv CTB protocol
1a14fbb8eeef drm/i915/gvt: GVTg handle guest shared_page setup
05c854e9ca3a drm/i915/gvt: GVTg expose pv_caps PVINFO register
cab28ddee64b drm/i915: vgpu workload submisison pv support
e46b20e055ad drm/i915: vgpu ggtt page table pv support
fc97dc9d2403 drm/i915: vgpu ppgtt page table pv support
8ab37fac488e drm/i915: vgpu pv command buffer transport protocol
0a6bfa99d671 drm/i915: vgpu shared memory setup for pv support
7432e9c8e16e drm/i915: introduced vgpu pv capability

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/index.html


* [Intel-gfx] ✓ Fi.CI.IGT: success for enhanced i915 vgpu with PV feature support (rev2)
  2020-10-09  0:04 [Intel-gfx] [PATCH v2 00/12] enhanced i915 vgpu with PV feature support Xiaolin Zhang
                   ` (15 preceding siblings ...)
  2020-10-09 11:36 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
@ 2020-10-09 15:37 ` Patchwork
  16 siblings, 0 replies; 18+ messages in thread
From: Patchwork @ 2020-10-09 15:37 UTC (permalink / raw)
  To: Xiaolin Zhang; +Cc: intel-gfx



== Series Details ==

Series: enhanced i915 vgpu with PV feature support (rev2)
URL   : https://patchwork.freedesktop.org/series/81400/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_9119_full -> Patchwork_18665_full
==================================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  

Known issues
------------

  Here are the changes found in Patchwork_18665_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_ctx_sseu@invalid-args:
    - shard-iclb:         [PASS][1] -> [DMESG-WARN][2] ([i915#1982])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/shard-iclb8/igt@gem_ctx_sseu@invalid-args.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/shard-iclb2/igt@gem_ctx_sseu@invalid-args.html

  * igt@gem_exec_whisper@basic-normal-all:
    - shard-glk:          [PASS][3] -> [DMESG-WARN][4] ([i915#118] / [i915#95])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/shard-glk2/igt@gem_exec_whisper@basic-normal-all.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/shard-glk9/igt@gem_exec_whisper@basic-normal-all.html

  * igt@gem_mmap_gtt@basic-write-read-distinct:
    - shard-skl:          [PASS][5] -> [DMESG-WARN][6] ([i915#1982]) +11 similar issues
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/shard-skl8/igt@gem_mmap_gtt@basic-write-read-distinct.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/shard-skl2/igt@gem_mmap_gtt@basic-write-read-distinct.html

  * igt@gem_userptr_blits@unsync-unmap-cycles:
    - shard-skl:          [PASS][7] -> [TIMEOUT][8] ([i915#2424]) +1 similar issue
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/shard-skl10/igt@gem_userptr_blits@unsync-unmap-cycles.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/shard-skl8/igt@gem_userptr_blits@unsync-unmap-cycles.html

  * igt@kms_cursor_crc@pipe-c-cursor-suspend:
    - shard-skl:          [PASS][9] -> [INCOMPLETE][10] ([i915#300])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/shard-skl8/igt@kms_cursor_crc@pipe-c-cursor-suspend.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/shard-skl2/igt@kms_cursor_crc@pipe-c-cursor-suspend.html

  * igt@kms_flip@2x-plain-flip-fb-recreate-interruptible@ac-hdmi-a1-hdmi-a2:
    - shard-glk:          [PASS][11] -> [FAIL][12] ([i915#2122])
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/shard-glk7/igt@kms_flip@2x-plain-flip-fb-recreate-interruptible@ac-hdmi-a1-hdmi-a2.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/shard-glk2/igt@kms_flip@2x-plain-flip-fb-recreate-interruptible@ac-hdmi-a1-hdmi-a2.html

  * igt@kms_flip@flip-vs-expired-vblank-interruptible@a-edp1:
    - shard-skl:          [PASS][13] -> [FAIL][14] ([i915#79])
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/shard-skl7/igt@kms_flip@flip-vs-expired-vblank-interruptible@a-edp1.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/shard-skl1/igt@kms_flip@flip-vs-expired-vblank-interruptible@a-edp1.html

  * igt@kms_flip@flip-vs-suspend-interruptible@a-dp1:
    - shard-kbl:          [PASS][15] -> [DMESG-WARN][16] ([i915#180]) +3 similar issues
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/shard-kbl4/igt@kms_flip@flip-vs-suspend-interruptible@a-dp1.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/shard-kbl4/igt@kms_flip@flip-vs-suspend-interruptible@a-dp1.html

  * igt@kms_frontbuffer_tracking@psr-rgb565-draw-blt:
    - shard-tglb:         [PASS][17] -> [DMESG-WARN][18] ([i915#1982])
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/shard-tglb6/igt@kms_frontbuffer_tracking@psr-rgb565-draw-blt.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/shard-tglb6/igt@kms_frontbuffer_tracking@psr-rgb565-draw-blt.html

  * igt@kms_psr@psr2_cursor_blt:
    - shard-iclb:         [PASS][19] -> [SKIP][20] ([fdo#109441]) +1 similar issue
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/shard-iclb2/igt@kms_psr@psr2_cursor_blt.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/shard-iclb6/igt@kms_psr@psr2_cursor_blt.html

  
#### Possible fixes ####

  * igt@gem_eio@kms:
    - shard-skl:          [DMESG-WARN][21] ([i915#1982]) -> [PASS][22] +3 similar issues
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/shard-skl8/igt@gem_eio@kms.html
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/shard-skl2/igt@gem_eio@kms.html

  * {igt@gem_exec_capture@pi@rcs0}:
    - shard-glk:          [INCOMPLETE][23] ([i915#2553]) -> [PASS][24]
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/shard-glk5/igt@gem_exec_capture@pi@rcs0.html
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/shard-glk3/igt@gem_exec_capture@pi@rcs0.html

  * {igt@kms_async_flips@async-flip-with-page-flip-events}:
    - shard-tglb:         [FAIL][25] ([i915#2521]) -> [PASS][26]
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/shard-tglb8/igt@kms_async_flips@async-flip-with-page-flip-events.html
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/shard-tglb6/igt@kms_async_flips@async-flip-with-page-flip-events.html

  * igt@kms_atomic@crtc-invalid-params-fence:
    - shard-snb:          [SKIP][27] ([fdo#109271]) -> [PASS][28] +1 similar issue
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/shard-snb4/igt@kms_atomic@crtc-invalid-params-fence.html
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/shard-snb7/igt@kms_atomic@crtc-invalid-params-fence.html

  * igt@kms_big_fb@yf-tiled-32bpp-rotate-90:
    - shard-apl:          [DMESG-WARN][29] ([i915#1635] / [i915#1982]) -> [PASS][30]
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/shard-apl4/igt@kms_big_fb@yf-tiled-32bpp-rotate-90.html
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/shard-apl2/igt@kms_big_fb@yf-tiled-32bpp-rotate-90.html

  * igt@kms_plane@plane-panning-bottom-right-suspend-pipe-c-planes:
    - shard-skl:          [INCOMPLETE][31] ([i915#198]) -> [PASS][32]
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/shard-skl6/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-c-planes.html
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/shard-skl10/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-c-planes.html

  * igt@kms_plane_alpha_blend@pipe-c-coverage-7efc:
    - shard-skl:          [FAIL][33] ([fdo#108145] / [i915#265]) -> [PASS][34] +1 similar issue
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/shard-skl4/igt@kms_plane_alpha_blend@pipe-c-coverage-7efc.html
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/shard-skl7/igt@kms_plane_alpha_blend@pipe-c-coverage-7efc.html

  * igt@kms_psr@psr2_cursor_plane_move:
    - shard-iclb:         [SKIP][35] ([fdo#109441]) -> [PASS][36] +4 similar issues
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/shard-iclb8/igt@kms_psr@psr2_cursor_plane_move.html
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/shard-iclb2/igt@kms_psr@psr2_cursor_plane_move.html

  
#### Warnings ####

  * igt@i915_pm_dc@dc3co-vpb-simulation:
    - shard-iclb:         [SKIP][37] ([i915#658]) -> [SKIP][38] ([i915#588])
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/shard-iclb8/igt@i915_pm_dc@dc3co-vpb-simulation.html
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/shard-iclb2/igt@i915_pm_dc@dc3co-vpb-simulation.html

  * igt@kms_content_protection@lic:
    - shard-apl:          [TIMEOUT][39] ([i915#1319] / [i915#1635]) -> [FAIL][40] ([fdo#110321] / [i915#1635])
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/shard-apl6/igt@kms_content_protection@lic.html
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/shard-apl8/igt@kms_content_protection@lic.html

  * igt@perf@polling-parameterized:
    - shard-skl:          [DMESG-WARN][41] ([i915#1982]) -> [DMESG-FAIL][42] ([i915#1982])
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_9119/shard-skl3/igt@perf@polling-parameterized.html
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/shard-skl5/igt@perf@polling-parameterized.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#108145]: https://bugs.freedesktop.org/show_bug.cgi?id=108145
  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109441]: https://bugs.freedesktop.org/show_bug.cgi?id=109441
  [fdo#110321]: https://bugs.freedesktop.org/show_bug.cgi?id=110321
  [i915#118]: https://gitlab.freedesktop.org/drm/intel/issues/118
  [i915#1319]: https://gitlab.freedesktop.org/drm/intel/issues/1319
  [i915#1635]: https://gitlab.freedesktop.org/drm/intel/issues/1635
  [i915#180]: https://gitlab.freedesktop.org/drm/intel/issues/180
  [i915#198]: https://gitlab.freedesktop.org/drm/intel/issues/198
  [i915#1982]: https://gitlab.freedesktop.org/drm/intel/issues/1982
  [i915#2122]: https://gitlab.freedesktop.org/drm/intel/issues/2122
  [i915#2424]: https://gitlab.freedesktop.org/drm/intel/issues/2424
  [i915#2521]: https://gitlab.freedesktop.org/drm/intel/issues/2521
  [i915#2553]: https://gitlab.freedesktop.org/drm/intel/issues/2553
  [i915#265]: https://gitlab.freedesktop.org/drm/intel/issues/265
  [i915#300]: https://gitlab.freedesktop.org/drm/intel/issues/300
  [i915#588]: https://gitlab.freedesktop.org/drm/intel/issues/588
  [i915#658]: https://gitlab.freedesktop.org/drm/intel/issues/658
  [i915#79]: https://gitlab.freedesktop.org/drm/intel/issues/79
  [i915#82]: https://gitlab.freedesktop.org/drm/intel/issues/82
  [i915#95]: https://gitlab.freedesktop.org/drm/intel/issues/95


Participating hosts (11 -> 11)
------------------------------

  No changes in participating hosts


Build changes
-------------

  * Linux: CI_DRM_9119 -> Patchwork_18665

  CI-20190529: 20190529
  CI_DRM_9119: a3e5da35ec06db35cface113da3befab8e00e03a @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5806: 6adb80cd84310b6d90a5259768d03ebb2c30fe50 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_18665: a7035c9611445c4051e7d688632388000b6fdc73 @ git://anongit.freedesktop.org/gfx-ci/linux
  piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @ git://anongit.freedesktop.org/piglit

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18665/index.html


end of thread (newest: 2020-10-09 15:37 UTC)

Thread overview: 18+ messages
2020-10-09  0:04 [Intel-gfx] [PATCH v2 00/12] enhanced i915 vgpu with PV feature support Xiaolin Zhang
2020-10-09  0:04 ` [Intel-gfx] [PATCH v2 01/12] drm/i915: introduced vgpu pv capability Xiaolin Zhang
2020-10-09  0:04 ` [Intel-gfx] [PATCH v2 02/12] drm/i915: vgpu shared memory setup for pv support Xiaolin Zhang
2020-10-09  0:04 ` [Intel-gfx] [PATCH v2 03/12] drm/i915: vgpu pv command buffer transport protocol Xiaolin Zhang
2020-10-09  0:04 ` [Intel-gfx] [PATCH v2 04/12] drm/i915: vgpu ppgtt page table pv support Xiaolin Zhang
2020-10-09  0:04 ` [Intel-gfx] [PATCH v2 05/12] drm/i915: vgpu ggtt " Xiaolin Zhang
2020-10-09  0:04 ` [Intel-gfx] [PATCH v2 06/12] drm/i915: vgpu workload submisison " Xiaolin Zhang
2020-10-09  0:04 ` [Intel-gfx] [PATCH v2 07/12] drm/i915/gvt: GVTg expose pv_caps PVINFO register Xiaolin Zhang
2020-10-09  0:04 ` [Intel-gfx] [PATCH v2 08/12] drm/i915/gvt: GVTg handle guest shared_page setup Xiaolin Zhang
2020-10-09  0:04 ` [Intel-gfx] [PATCH v2 09/12] drm/i915/gvt: GVTg support vgpu pv CTB protocol Xiaolin Zhang
2020-10-09  0:04 ` [Intel-gfx] [PATCH v2 10/12] drm/i915/gvt: GVTg support ppgtt pv operations Xiaolin Zhang
2020-10-09  0:04 ` [Intel-gfx] [PATCH v2 11/12] drm/i915/gvt: GVTg support ggtt " Xiaolin Zhang
2020-10-09  0:04 ` [Intel-gfx] [PATCH v2 12/12] drm/i915/gvt: GVTg support pv workload submssion Xiaolin Zhang
2020-10-09 11:14 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for enhanced i915 vgpu with PV feature support (rev2) Patchwork
2020-10-09 11:15 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2020-10-09 11:17 ` [Intel-gfx] ✗ Fi.CI.DOCS: " Patchwork
2020-10-09 11:36 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2020-10-09 15:37 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
