* ✗ Fi.CI.BAT: failure for i915 vgpu PV to improve vgpu performance
  2019-03-29 13:32 [PATCH v4 0/8] i915 vgpu PV to improve vgpu performance Xiaolin Zhang
@ 2019-03-29  5:00 ` Patchwork
  2019-03-29 13:32 ` [PATCH v4 1/8] drm/i915: introduced vgpu pv capability Xiaolin Zhang
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 17+ messages in thread
From: Patchwork @ 2019-03-29  5:00 UTC (permalink / raw)
  To: Xiaolin Zhang; +Cc: intel-gfx

== Series Details ==

Series: i915 vgpu PV to improve vgpu performance
URL   : https://patchwork.freedesktop.org/series/58707/
State : failure

== Summary ==

CALL    scripts/checksyscalls.sh
  CALL    scripts/atomic/check-atomics.sh
  DESCEND  objtool
  CHK     include/generated/compile.h
  CC [M]  drivers/gpu/drm/i915/i915_vgpu.o
drivers/gpu/drm/i915/i915_vgpu.c: In function ‘pv_submit’:
drivers/gpu/drm/i915/i915_vgpu.c:337:2: error: implicit declaration of function ‘__raw_i915_write32’; did you mean ‘__raw_uncore_write32’? [-Werror=implicit-function-declaration]
  __raw_i915_write32(uncore, vgtif_reg(g2v_notify),
  ^~~~~~~~~~~~~~~~~~
  __raw_uncore_write32
drivers/gpu/drm/i915/i915_vgpu.c: In function ‘intel_vgpu_setup_shared_page’:
drivers/gpu/drm/i915/i915_vgpu.c:523:2: error: implicit declaration of function ‘__raw_i915_write64’; did you mean ‘__raw_uncore_write64’? [-Werror=implicit-function-declaration]
  __raw_i915_write64(uncore, vgtif_reg(shared_page_gpa), gpa);
  ^~~~~~~~~~~~~~~~~~
  __raw_uncore_write64
drivers/gpu/drm/i915/i915_vgpu.c:524:13: error: implicit declaration of function ‘__raw_i915_read64’; did you mean ‘__raw_uncore_read64’? [-Werror=implicit-function-declaration]
  if (gpa != __raw_i915_read64(uncore, vgtif_reg(shared_page_gpa))) {
             ^~~~~~~~~~~~~~~~~
             __raw_uncore_read64
drivers/gpu/drm/i915/i915_vgpu.c: In function ‘intel_vgpu_check_pv_caps’:
drivers/gpu/drm/i915/i915_vgpu.c:582:15: error: implicit declaration of function ‘__raw_i915_read32’; did you mean ‘__raw_uncore_read32’? [-Werror=implicit-function-declaration]
  gvt_pvcaps = __raw_i915_read32(uncore, vgtif_reg(pv_caps));
               ^~~~~~~~~~~~~~~~~
               __raw_uncore_read32
cc1: all warnings being treated as errors
scripts/Makefile.build:278: recipe for target 'drivers/gpu/drm/i915/i915_vgpu.o' failed
make[5]: *** [drivers/gpu/drm/i915/i915_vgpu.o] Error 1
scripts/Makefile.build:489: recipe for target 'drivers/gpu/drm/i915' failed
make[4]: *** [drivers/gpu/drm/i915] Error 2
scripts/Makefile.build:489: recipe for target 'drivers/gpu/drm' failed
make[3]: *** [drivers/gpu/drm] Error 2
scripts/Makefile.build:489: recipe for target 'drivers/gpu' failed
make[2]: *** [drivers/gpu] Error 2
/home/cidrm/kernel/Makefile:1046: recipe for target 'drivers' failed
make[1]: *** [drivers] Error 2
Makefile:170: recipe for target 'sub-make' failed
make: *** [sub-make] Error 2

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


* Re: [PATCH v4 4/8] drm/i915: vgpu context submission pv optimization
  2019-03-29 13:32 ` [PATCH v4 4/8] drm/i915: vgpu context submission " Xiaolin Zhang
@ 2019-03-29  7:15   ` Chris Wilson
  2019-03-29 15:40   ` Chris Wilson
  2019-03-29 19:14   ` Chris Wilson
  2 siblings, 0 replies; 17+ messages in thread
From: Chris Wilson @ 2019-03-29  7:15 UTC (permalink / raw)
  To: Xiaolin Zhang, intel-gfx, intel-gvt-dev
  Cc: zhenyu.z.wang, hang.yuan, zhiyuan.lv

Quoting Xiaolin Zhang (2019-03-29 13:32:40)
> This is a performance optimization that overrides the actual submission
> backend in order to eliminate execlists CSB processing and reduce the
> number of MMIO traps for workload submission, without a context switch
> interrupt, by talking to GVT via a PV submission notification mechanism
> between guest and GVT.

> Use PV_SUBMISSION to control this level of pv optimization.
> 
> v0: RFC
> v1: rebase
> v2: added pv ops for pv context submission. to maximize code reuse,
> introduced 2 more ops (submit_ports & preempt_context) instead of 1 op
> (set_default_submission) in the engine structure. pv versions of
> submit_ports and preempt_context implemented.
> v3:
> 1. to reduce code duplication further, refactored and replaced the 2 ops
> "submit_ports & preempt_context" from v2 with 1 op "write_desc"
> in the engine structure. pv version of write_desc implemented.
> 2. added VGT_G2V_ELSP_SUBMIT for g2v pv notification.
> v4: implemented the pv elsp submission tasklet as the backend workload
> submission path, talking to GVT with the PV notification mechanism, and
> renamed VGT_G2V_ELSP_SUBMIT to VGT_G2V_PV_SUBMISIION.
> 
> Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
> ---
>  drivers/gpu/drm/i915/i915_irq.c    |   2 +
>  drivers/gpu/drm/i915/i915_pvinfo.h |   1 +
>  drivers/gpu/drm/i915/i915_vgpu.c   | 158 ++++++++++++++++++++++++++++++++++++-
>  drivers/gpu/drm/i915/i915_vgpu.h   |  10 +++
>  drivers/gpu/drm/i915/intel_lrc.c   |   3 +
>  5 files changed, 173 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> index 2f78829..28e8ee0 100644
> --- a/drivers/gpu/drm/i915/i915_irq.c
> +++ b/drivers/gpu/drm/i915/i915_irq.c
> @@ -37,6 +37,7 @@
>  #include "i915_drv.h"
>  #include "i915_trace.h"
>  #include "intel_drv.h"
> +#include "i915_vgpu.h"
>  
>  /**
>   * DOC: interrupt handling
> @@ -1470,6 +1471,7 @@ gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir)
>         if (iir & GT_RENDER_USER_INTERRUPT) {
>                 intel_engine_breadcrumbs_irq(engine);
>                 tasklet |= USES_GUC_SUBMISSION(engine->i915);
> +               tasklet |= USES_PV_SUBMISSION(engine->i915);

You call this an optimisation!
-Chris

* [PATCH v4 0/8] i915 vgpu PV to improve vgpu performance
@ 2019-03-29 13:32 Xiaolin Zhang
  2019-03-29  5:00 ` ✗ Fi.CI.BAT: failure for " Patchwork
                   ` (9 more replies)
  0 siblings, 10 replies; 17+ messages in thread
From: Xiaolin Zhang @ 2019-03-29 13:32 UTC (permalink / raw)
  To: intel-gvt-dev, intel-gfx; +Cc: zhenyu.z.wang, hang.yuan, zhiyuan.lv

To improve vgpu performance, the guest driver can implement PV
optimizations that reduce the number of trapped MMIO accesses or
eliminate certain pieces of HW emulation, cutting vm exit/vm enter cost.

This patch set implements two such PV optimizations, based on a shared
memory region between the guest and GVTg for data communication. The
shared memory region is allocated by the guest driver, and its guest
physical address is passed to GVTg through a PVINFO register; GVTg can
then access this region directly, without trap cost, to exchange data
with the guest.

In this patch set, 2 kinds of PV optimization are implemented, controlled
by different pv bits in the pv_caps PVINFO register:
1. workload PV submission (context submission): reduces 4 traps to 1 trap
and eliminates execlists HW behaviour emulation.
2. ppgtt PV update: eliminates the cost of ppgtt write protection.

Based on our experiments, for small workloads (specifically, glxgears
with vblank_mode off) the average performance gain on a single vgpu is
30~50%. For large workloads such as media and 3D, the average
performance gain is about 4%.

Building on this PV mechanism, more vgpu optimizations could be achieved,
such as global GTT update, display plane and watermark update.

v0: RFC patch set
v1: addressed RFC review comments
v2: addressed v1 review comments, added pv callbacks for pv operations
v3:
1. addressed v2 review comments, removed the pv callback code duplication
of v2 and unified pv calls under the g2v notification register; defined
different g2v pv notifications.
2. dropped the pv master irq feature due to a hard conflict with recent
i915 changes; it needs time to rework.
v4:
1. addressed v3 review comments.
2. extended workload PV submission to skip execlists HW behaviour emulation
and context switch interrupt injection.

Xiaolin Zhang (8):
  drm/i915: introduced vgpu pv capability
  drm/i915: vgpu shared memory setup for pv optimization
  drm/i915: vgpu ppgtt update pv optimization
  drm/i915: vgpu context submission pv optimization
  drm/i915/gvt: GVTg handle pv_caps PVINFO register
  drm/i915/gvt: GVTg handle shared_page setup
  drm/i915/gvt: GVTg support ppgtt pv optimization
  drm/i915/gvt: GVTg support context submission pv optimization

 drivers/gpu/drm/i915/gvt/execlist.c |   6 +
 drivers/gpu/drm/i915/gvt/gtt.c      | 317 +++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/gvt/gtt.h      |   9 +
 drivers/gpu/drm/i915/gvt/gvt.h      |  10 +-
 drivers/gpu/drm/i915/gvt/handlers.c |  63 ++++++-
 drivers/gpu/drm/i915/gvt/vgpu.c     |  31 ++++
 drivers/gpu/drm/i915/i915_drv.h     |   6 +-
 drivers/gpu/drm/i915/i915_gem.c     |   3 +-
 drivers/gpu/drm/i915/i915_gem_gtt.c |   9 +-
 drivers/gpu/drm/i915/i915_gem_gtt.h |   8 +
 drivers/gpu/drm/i915/i915_irq.c     |   2 +
 drivers/gpu/drm/i915/i915_pvinfo.h  |  12 +-
 drivers/gpu/drm/i915/i915_vgpu.c    | 323 +++++++++++++++++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_vgpu.h    |  52 ++++++
 drivers/gpu/drm/i915/intel_lrc.c    |   3 +
 15 files changed, 845 insertions(+), 9 deletions(-)

-- 
2.7.4


* [PATCH v4 1/8] drm/i915: introduced vgpu pv capability
  2019-03-29 13:32 [PATCH v4 0/8] i915 vgpu PV to improve vgpu performance Xiaolin Zhang
  2019-03-29  5:00 ` ✗ Fi.CI.BAT: failure for " Patchwork
@ 2019-03-29 13:32 ` Xiaolin Zhang
  2019-03-29 13:32 ` [PATCH v4 2/8] drm/i915: vgpu shared memory setup for pv optimization Xiaolin Zhang
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 17+ messages in thread
From: Xiaolin Zhang @ 2019-03-29 13:32 UTC (permalink / raw)
  To: intel-gvt-dev, intel-gfx; +Cc: zhenyu.z.wang, hang.yuan, zhiyuan.lv

A PV capability for the guest GPU is introduced via a new pv_caps field
in struct i915_virtual_gpu, and a new pv_caps register for host GVT is
defined in struct vgt_if, for vgpu PV optimization.

Both are used to control which PV optimization features are supported
and implemented by guest and host.

These fields default to zero: no PV feature is enabled.

This patch also adds the VGT_CAPS_PV capability bit for the guest to
check whether GVTg can support the PV feature at all.

v0: RFC, introduced the enable_pvmmio module parameter.
v1: addressed RFC comment: removed the enable_pvmmio module parameter
in favour of a pv capability check.
v2: rebase
v3: distinguished guest pv caps from host pv caps; renamed enable_pvmmio
to pvmmio_caps, which is used for the host pv caps.
v4: consolidated all pv related functions into a single file, i915_vgpu.c,
and renamed pvmmio to pv_caps.

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h    |  2 ++
 drivers/gpu/drm/i915/i915_pvinfo.h |  5 ++++-
 drivers/gpu/drm/i915/i915_vgpu.c   | 45 +++++++++++++++++++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_vgpu.h   |  8 +++++++
 4 files changed, 58 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index eff0e9a..b0c50bb 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -86,6 +86,7 @@
 #include "i915_vma.h"
 
 #include "intel_gvt.h"
+#include "i915_pvinfo.h"
 
 /* General customization:
  */
@@ -1246,6 +1247,7 @@ struct i915_frontbuffer_tracking {
 struct i915_virtual_gpu {
 	bool active;
 	u32 caps;
+	u32 pv_caps;
 };
 
 /* used in computing the new watermarks state */
diff --git a/drivers/gpu/drm/i915/i915_pvinfo.h b/drivers/gpu/drm/i915/i915_pvinfo.h
index 969e514..619305a 100644
--- a/drivers/gpu/drm/i915/i915_pvinfo.h
+++ b/drivers/gpu/drm/i915/i915_pvinfo.h
@@ -55,6 +55,7 @@ enum vgt_g2v_type {
 #define VGT_CAPS_FULL_PPGTT		BIT(2)
 #define VGT_CAPS_HWSP_EMULATION		BIT(3)
 #define VGT_CAPS_HUGE_GTT		BIT(4)
+#define VGT_CAPS_PV		BIT(5)
 
 struct vgt_if {
 	u64 magic;		/* VGT_MAGIC */
@@ -107,7 +108,9 @@ struct vgt_if {
 	u32 execlist_context_descriptor_lo;
 	u32 execlist_context_descriptor_hi;
 
-	u32  rsv7[0x200 - 24];    /* pad to one page */
+	u32 pv_caps;
+
+	u32  rsv7[0x200 - 25];    /* pad to one page */
 } __packed;
 
 #define vgtif_reg(x) \
diff --git a/drivers/gpu/drm/i915/i915_vgpu.c b/drivers/gpu/drm/i915/i915_vgpu.c
index 3d0b493..e9f6d96 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.c
+++ b/drivers/gpu/drm/i915/i915_vgpu.c
@@ -79,7 +79,14 @@ void i915_check_vgpu(struct drm_i915_private *dev_priv)
 	dev_priv->vgpu.caps = __raw_i915_read32(uncore, vgtif_reg(vgt_caps));
 
 	dev_priv->vgpu.active = true;
-	DRM_INFO("Virtual GPU for Intel GVT-g detected.\n");
+
+	if (!intel_vgpu_check_pv_caps(dev_priv)) {
+		DRM_INFO("Virtual GPU for Intel GVT-g detected.\n");
+		return;
+	}
+
+	DRM_INFO("Virtual GPU for Intel GVT-g detected with pv_caps 0x%x.\n",
+			dev_priv->vgpu.pv_caps);
 }
 
 bool intel_vgpu_has_full_ppgtt(struct drm_i915_private *dev_priv)
@@ -274,3 +281,39 @@ int intel_vgt_balloon(struct drm_i915_private *dev_priv)
 	DRM_ERROR("VGT balloon fail\n");
 	return ret;
 }
+
+/*
+ * i915 vgpu PV support for Linux
+ */
+
+/**
+ * intel_vgpu_check_pv_caps - detect virtual GPU PV capabilities
+ * @dev_priv: i915 device private
+ *
+ * This function is called at the initialization stage, to detect VGPU
+ * PV capabilities
+ *
+ * If guest wants to enable pv_caps, it needs to config it explicitly
+ * through vgt_if interface from gvt layer.
+ */
+bool intel_vgpu_check_pv_caps(struct drm_i915_private *dev_priv)
+{
+	struct intel_uncore *uncore = &dev_priv->uncore;
+	u32 gvt_pvcaps;
+	u32 pvcaps;
+
+	if (!intel_vgpu_has_pv_caps(dev_priv))
+		return false;
+
+	/* PV capability negotiation between PV guest and GVT */
+	gvt_pvcaps = __raw_i915_read32(uncore, vgtif_reg(pv_caps));
+	pvcaps = dev_priv->vgpu.pv_caps & gvt_pvcaps;
+	dev_priv->vgpu.pv_caps = pvcaps;
+
+	if (!pvcaps)
+		return false;
+
+	__raw_i915_write32(uncore, vgtif_reg(pv_caps), pvcaps);
+
+	return true;
+}
diff --git a/drivers/gpu/drm/i915/i915_vgpu.h b/drivers/gpu/drm/i915/i915_vgpu.h
index ebe1b7b..91010fc 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.h
+++ b/drivers/gpu/drm/i915/i915_vgpu.h
@@ -42,7 +42,15 @@ intel_vgpu_has_huge_gtt(struct drm_i915_private *dev_priv)
 	return dev_priv->vgpu.caps & VGT_CAPS_HUGE_GTT;
 }
 
+static inline bool
+intel_vgpu_has_pv_caps(struct drm_i915_private *dev_priv)
+{
+	return dev_priv->vgpu.caps & VGT_CAPS_PV;
+}
+
 int intel_vgt_balloon(struct drm_i915_private *dev_priv);
 void intel_vgt_deballoon(struct drm_i915_private *dev_priv);
 
+/* i915 vgpu pv related functions */
+bool intel_vgpu_check_pv_caps(struct drm_i915_private *dev_priv);
 #endif /* _I915_VGPU_H_ */
-- 
2.7.4


* [PATCH v4 2/8] drm/i915: vgpu shared memory setup for pv optimization
  2019-03-29 13:32 [PATCH v4 0/8] i915 vgpu PV to improve vgpu performance Xiaolin Zhang
  2019-03-29  5:00 ` ✗ Fi.CI.BAT: failure for " Patchwork
  2019-03-29 13:32 ` [PATCH v4 1/8] drm/i915: introduced vgpu pv capability Xiaolin Zhang
@ 2019-03-29 13:32 ` Xiaolin Zhang
  2019-03-29 13:32 ` [PATCH v4 3/8] drm/i915: vgpu ppgtt update " Xiaolin Zhang
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 17+ messages in thread
From: Xiaolin Zhang @ 2019-03-29 13:32 UTC (permalink / raw)
  To: intel-gvt-dev, intel-gfx; +Cc: zhenyu.z.wang, hang.yuan, zhiyuan.lv

To enable vgpu pv features, we need to set up a shared memory page used
for data exchange, directly accessed by both the guest and the backend
i915 driver, to avoid emulation trap cost.

The guest i915 driver allocates this page and passes its physical
address to the backend i915 driver through a PVINFO register, so that
the backend can access the shared page without any trap cost, with the
help of the hypervisor's read-guest-gpa functionality.

The guest i915 driver sends a VGT_G2V_SHARED_PAGE_SETUP notification to
host GVT once shared memory setup is finished.

The layout of the shared_page, used by the pv feature implementations,
is also defined in this patch.

v0: RFC
v1: addressed RFC comment to move both shared_page_lock and shared_page
to i915_virtual_gpu structure
v2: packed i915_virtual_gpu structure
v3: added SHARED_PAGE_SETUP g2v notification for pv shared_page setup
v4: added intel_vgpu_setup_shared_page() in i915_vgpu_pv.c

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h    |  4 +++-
 drivers/gpu/drm/i915/i915_pvinfo.h |  5 ++++-
 drivers/gpu/drm/i915/i915_vgpu.c   | 38 ++++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_vgpu.h   | 17 +++++++++++++++++
 4 files changed, 62 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index b0c50bb..6abd112 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1248,7 +1248,9 @@ struct i915_virtual_gpu {
 	bool active;
 	u32 caps;
 	u32 pv_caps;
-};
+	spinlock_t shared_page_lock;
+	struct gvt_shared_page *shared_page;
+} __packed;
 
 /* used in computing the new watermarks state */
 struct intel_wm_config {
diff --git a/drivers/gpu/drm/i915/i915_pvinfo.h b/drivers/gpu/drm/i915/i915_pvinfo.h
index 619305a..4657bf7 100644
--- a/drivers/gpu/drm/i915/i915_pvinfo.h
+++ b/drivers/gpu/drm/i915/i915_pvinfo.h
@@ -46,6 +46,7 @@ enum vgt_g2v_type {
 	VGT_G2V_PPGTT_L4_PAGE_TABLE_DESTROY,
 	VGT_G2V_EXECLIST_CONTEXT_CREATE,
 	VGT_G2V_EXECLIST_CONTEXT_DESTROY,
+	VGT_G2V_SHARED_PAGE_SETUP,
 	VGT_G2V_MAX,
 };
 
@@ -110,7 +111,9 @@ struct vgt_if {
 
 	u32 pv_caps;
 
-	u32  rsv7[0x200 - 25];    /* pad to one page */
+	u64 shared_page_gpa;
+
+	u32  rsv7[0x200 - 27];    /* pad to one page */
 } __packed;
 
 #define vgtif_reg(x) \
diff --git a/drivers/gpu/drm/i915/i915_vgpu.c b/drivers/gpu/drm/i915/i915_vgpu.c
index e9f6d96..1530552 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.c
+++ b/drivers/gpu/drm/i915/i915_vgpu.c
@@ -135,6 +135,9 @@ void intel_vgt_deballoon(struct drm_i915_private *dev_priv)
 
 	for (i = 0; i < 4; i++)
 		vgt_deballoon_space(&dev_priv->ggtt, &bl_info.space[i]);
+
+	if (dev_priv->vgpu.shared_page)
+		free_page((unsigned long)dev_priv->vgpu.shared_page);
 }
 
 static int vgt_balloon_space(struct i915_ggtt *ggtt,
@@ -286,6 +289,36 @@ int intel_vgt_balloon(struct drm_i915_private *dev_priv)
  * i915 vgpu PV support for Linux
  */
 
+/*
+ * shared_page setup for VGPU PV features
+ */
+static int intel_vgpu_setup_shared_page(struct drm_i915_private *dev_priv)
+{
+	struct intel_uncore *uncore = &dev_priv->uncore;
+	u64 gpa;
+
+	dev_priv->vgpu.shared_page =  (struct gvt_shared_page *)
+			get_zeroed_page(GFP_KERNEL);
+	if (!dev_priv->vgpu.shared_page) {
+		DRM_ERROR("out of memory for shared page memory\n");
+		return -ENOMEM;
+	}
+
+	/* pass guest memory pa address to GVT and then read back to verify */
+	gpa = __pa(dev_priv->vgpu.shared_page);
+	__raw_i915_write64(uncore, vgtif_reg(shared_page_gpa), gpa);
+	if (gpa != __raw_i915_read64(uncore, vgtif_reg(shared_page_gpa))) {
+		DRM_ERROR("vgpu: passed shared_page_gpa failed\n");
+		free_page((unsigned long)dev_priv->vgpu.shared_page);
+		return -EIO;
+	}
+	__raw_i915_write32(uncore, vgtif_reg(g2v_notify),
+			VGT_G2V_SHARED_PAGE_SETUP);
+	spin_lock_init(&dev_priv->vgpu.shared_page_lock);
+
+	return 0;
+}
+
 /**
  * intel_vgpu_check_pv_caps - detect virtual GPU PV capabilities
  * @dev_priv: i915 device private
@@ -313,6 +346,11 @@ bool intel_vgpu_check_pv_caps(struct drm_i915_private *dev_priv)
 	if (!pvcaps)
 		return false;
 
+	if (intel_vgpu_setup_shared_page(dev_priv)) {
+		dev_priv->vgpu.pv_caps = 0;
+		return false;
+	}
+
 	__raw_i915_write32(uncore, vgtif_reg(pv_caps), pvcaps);
 
 	return true;
diff --git a/drivers/gpu/drm/i915/i915_vgpu.h b/drivers/gpu/drm/i915/i915_vgpu.h
index 91010fc..68127d4 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.h
+++ b/drivers/gpu/drm/i915/i915_vgpu.h
@@ -26,6 +26,23 @@
 
 #include "i915_pvinfo.h"
 
+/*
+ * A shared page(4KB) between gvt and VM, could be allocated by guest driver
+ * or a fixed location in PCI bar 0 region
+ */
+struct pv_ppgtt_update {
+	u64 pdp;
+	u64 start;
+	u64 length;
+	u32 cache_level;
+};
+
+struct gvt_shared_page {
+	struct pv_ppgtt_update pv_ppgtt;
+	u32 ring_id;
+	u64 descs[EXECLIST_MAX_PORTS];
+};
+
 void i915_check_vgpu(struct drm_i915_private *dev_priv);
 
 bool intel_vgpu_has_full_ppgtt(struct drm_i915_private *dev_priv);
-- 
2.7.4


* [PATCH v4 3/8] drm/i915: vgpu ppgtt update pv optimization
  2019-03-29 13:32 [PATCH v4 0/8] i915 vgpu PV to improve vgpu performance Xiaolin Zhang
                   ` (2 preceding siblings ...)
  2019-03-29 13:32 ` [PATCH v4 2/8] drm/i915: vgpu shared memory setup for pv optimization Xiaolin Zhang
@ 2019-03-29 13:32 ` Xiaolin Zhang
  2019-03-29 13:32 ` [PATCH v4 4/8] drm/i915: vgpu context submission " Xiaolin Zhang
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 17+ messages in thread
From: Xiaolin Zhang @ 2019-03-29 13:32 UTC (permalink / raw)
  To: intel-gvt-dev, intel-gfx; +Cc: zhenyu.z.wang, hang.yuan, zhiyuan.lv

This patch extends the vgpu ppgtt g2v notification to notify host
GVT-g of ppgtt updates from the guest, covering alloc_4lvl, clear_4lvl
and insert_4lvl.

These updates use the shared memory page to pass struct pv_ppgtt_update
from guest to GVT, which is used for the pv optimization implementation
on the host GVT side.

This patch also adds one new pv_caps level, PV_PPGTT_UPDATE, to control
this level of pv optimization.

v0: RFC
v1: rebased
v2: added pv callbacks for vm.{allocate_va_range, insert_entries,
clear_range} within ppgtt.
v3: rebased; disabled huge page ppgtt support when using PVMMIO ppgtt
update, due to complexity and performance impact.
v4: moved alloc/insert/clear_4lvl pv callbacks into i915_vgpu_pv.c and
added a single intel_vgpu_config_pv_caps() for vgpu pv callbacks setup.

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/i915_gem.c     |  3 +-
 drivers/gpu/drm/i915/i915_gem_gtt.c |  9 ++--
 drivers/gpu/drm/i915/i915_gem_gtt.h |  8 ++++
 drivers/gpu/drm/i915/i915_pvinfo.h  |  3 ++
 drivers/gpu/drm/i915/i915_vgpu.c    | 84 +++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_vgpu.h    | 17 ++++++++
 6 files changed, 120 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index f6cdd5f..c96a6d0 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4826,7 +4826,8 @@ int i915_gem_init(struct drm_i915_private *dev_priv)
 	int ret;
 
 	/* We need to fallback to 4K pages if host doesn't support huge gtt. */
-	if (intel_vgpu_active(dev_priv) && !intel_vgpu_has_huge_gtt(dev_priv))
+	if ((intel_vgpu_active(dev_priv) && !intel_vgpu_has_huge_gtt(dev_priv))
+		|| intel_vgpu_enabled_pv_caps(dev_priv, PV_PPGTT_UPDATE))
 		mkwrite_device_info(dev_priv)->page_sizes =
 			I915_GTT_PAGE_SIZE_4K;
 
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 736c845..8011527 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -923,7 +923,7 @@ static void gen8_ppgtt_set_pml4e(struct i915_pml4 *pml4,
  * This is the top-level structure in 4-level page tables used on gen8+.
  * Empty entries are always scratch pml4e.
  */
-static void gen8_ppgtt_clear_4lvl(struct i915_address_space *vm,
+void gen8_ppgtt_clear_4lvl(struct i915_address_space *vm,
 				  u64 start, u64 length)
 {
 	struct i915_hw_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
@@ -1162,7 +1162,7 @@ static void gen8_ppgtt_insert_huge_entries(struct i915_vma *vma,
 	} while (iter->sg);
 }
 
-static void gen8_ppgtt_insert_4lvl(struct i915_address_space *vm,
+void gen8_ppgtt_insert_4lvl(struct i915_address_space *vm,
 				   struct i915_vma *vma,
 				   enum i915_cache_level cache_level,
 				   u32 flags)
@@ -1444,7 +1444,7 @@ static int gen8_ppgtt_alloc_3lvl(struct i915_address_space *vm,
 				    &i915_vm_to_ppgtt(vm)->pdp, start, length);
 }
 
-static int gen8_ppgtt_alloc_4lvl(struct i915_address_space *vm,
+int gen8_ppgtt_alloc_4lvl(struct i915_address_space *vm,
 				 u64 start, u64 length)
 {
 	struct i915_hw_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
@@ -1571,6 +1571,9 @@ static struct i915_hw_ppgtt *gen8_ppgtt_create(struct drm_i915_private *i915)
 		ppgtt->vm.allocate_va_range = gen8_ppgtt_alloc_4lvl;
 		ppgtt->vm.insert_entries = gen8_ppgtt_insert_4lvl;
 		ppgtt->vm.clear_range = gen8_ppgtt_clear_4lvl;
+
+		if (intel_vgpu_active(i915))
+			intel_vgpu_config_pv_caps(i915, PV_PPGTT_UPDATE, ppgtt);
 	} else {
 		err = __pdp_init(&ppgtt->vm, &ppgtt->pdp);
 		if (err)
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index 83ded9f..03cff75 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -625,6 +625,14 @@ int gen6_ppgtt_pin(struct i915_hw_ppgtt *base);
 void gen6_ppgtt_unpin(struct i915_hw_ppgtt *base);
 void gen6_ppgtt_unpin_all(struct i915_hw_ppgtt *base);
 
+void gen8_ppgtt_clear_4lvl(struct i915_address_space *vm,
+		u64 start, u64 length);
+void gen8_ppgtt_insert_4lvl(struct i915_address_space *vm,
+		struct i915_vma *vma,
+		enum i915_cache_level cache_level, u32 flags);
+int gen8_ppgtt_alloc_4lvl(struct i915_address_space *vm,
+		u64 start, u64 length);
+
 void i915_check_and_clear_faults(struct drm_i915_private *dev_priv);
 void i915_gem_suspend_gtt_mappings(struct drm_i915_private *dev_priv);
 void i915_gem_restore_gtt_mappings(struct drm_i915_private *dev_priv);
diff --git a/drivers/gpu/drm/i915/i915_pvinfo.h b/drivers/gpu/drm/i915/i915_pvinfo.h
index 4657bf7..2408a9d 100644
--- a/drivers/gpu/drm/i915/i915_pvinfo.h
+++ b/drivers/gpu/drm/i915/i915_pvinfo.h
@@ -47,6 +47,9 @@ enum vgt_g2v_type {
 	VGT_G2V_EXECLIST_CONTEXT_CREATE,
 	VGT_G2V_EXECLIST_CONTEXT_DESTROY,
 	VGT_G2V_SHARED_PAGE_SETUP,
+	VGT_G2V_PPGTT_L4_ALLOC,
+	VGT_G2V_PPGTT_L4_CLEAR,
+	VGT_G2V_PPGTT_L4_INSERT,
 	VGT_G2V_MAX,
 };
 
diff --git a/drivers/gpu/drm/i915/i915_vgpu.c b/drivers/gpu/drm/i915/i915_vgpu.c
index 1530552..87a0ca5 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.c
+++ b/drivers/gpu/drm/i915/i915_vgpu.c
@@ -80,6 +80,9 @@ void i915_check_vgpu(struct drm_i915_private *dev_priv)
 
 	dev_priv->vgpu.active = true;
 
+	/* guest driver PV capability */
+	dev_priv->vgpu.pv_caps = PV_PPGTT_UPDATE;
+
 	if (!intel_vgpu_check_pv_caps(dev_priv)) {
 		DRM_INFO("Virtual GPU for Intel GVT-g detected.\n");
 		return;
@@ -289,6 +292,68 @@ int intel_vgt_balloon(struct drm_i915_private *dev_priv)
  * i915 vgpu PV support for Linux
  */
 
+static void gen8_ppgtt_clear_4lvl_pv(struct i915_address_space *vm,
+				  u64 start, u64 length)
+{
+	struct i915_hw_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
+	struct i915_pml4 *pml4 = &ppgtt->pml4;
+	struct drm_i915_private *dev_priv = vm->i915;
+	struct pv_ppgtt_update *pv_ppgtt =
+			&dev_priv->vgpu.shared_page->pv_ppgtt;
+	u64 orig_start = start;
+	u64 orig_length = length;
+
+	gen8_ppgtt_clear_4lvl(vm, start, length);
+
+	pv_ppgtt->pdp = px_dma(pml4);
+	pv_ppgtt->start = orig_start;
+	pv_ppgtt->length = orig_length;
+	I915_WRITE(vgtif_reg(g2v_notify), VGT_G2V_PPGTT_L4_CLEAR);
+}
+
+static void gen8_ppgtt_insert_4lvl_pv(struct i915_address_space *vm,
+				   struct i915_vma *vma,
+				   enum i915_cache_level cache_level,
+				   u32 flags)
+{
+	struct i915_hw_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
+	struct drm_i915_private *dev_priv = vm->i915;
+	struct pv_ppgtt_update *pv_ppgtt =
+			&dev_priv->vgpu.shared_page->pv_ppgtt;
+
+	gen8_ppgtt_insert_4lvl(vm, vma, cache_level, flags);
+
+	pv_ppgtt->pdp = px_dma(&ppgtt->pml4);
+	pv_ppgtt->start = vma->node.start;
+	pv_ppgtt->length = vma->node.size;
+	pv_ppgtt->cache_level = cache_level;
+	I915_WRITE(vgtif_reg(g2v_notify), VGT_G2V_PPGTT_L4_INSERT);
+}
+
+static int gen8_ppgtt_alloc_4lvl_pv(struct i915_address_space *vm,
+				 u64 start, u64 length)
+{
+	struct i915_hw_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
+	struct i915_pml4 *pml4 = &ppgtt->pml4;
+	struct drm_i915_private *dev_priv = vm->i915;
+	struct pv_ppgtt_update *pv_ppgtt =
+			&dev_priv->vgpu.shared_page->pv_ppgtt;
+	int ret;
+	u64 orig_start = start;
+	u64 orig_length = length;
+
+	ret = gen8_ppgtt_alloc_4lvl(vm, start, length);
+	if (ret)
+		return ret;
+
+	pv_ppgtt->pdp = px_dma(pml4);
+	pv_ppgtt->start = orig_start;
+	pv_ppgtt->length = orig_length;
+	I915_WRITE(vgtif_reg(g2v_notify), VGT_G2V_PPGTT_L4_ALLOC);
+
+	return 0;
+}
+
 /*
  * shared_page setup for VGPU PV features
  */
@@ -319,6 +384,25 @@ static int intel_vgpu_setup_shared_page(struct drm_i915_private *dev_priv)
 	return 0;
 }
 
+/*
+ * config guest driver PV ops for different PV features
+ */
+void intel_vgpu_config_pv_caps(struct drm_i915_private *dev_priv,
+		enum pv_caps cap, void *data)
+{
+	struct i915_hw_ppgtt *ppgtt;
+
+	if (!intel_vgpu_enabled_pv_caps(dev_priv, cap))
+		return;
+
+	if (cap == PV_PPGTT_UPDATE) {
+		ppgtt = (struct i915_hw_ppgtt *)data;
+		ppgtt->vm.allocate_va_range = gen8_ppgtt_alloc_4lvl_pv;
+		ppgtt->vm.insert_entries = gen8_ppgtt_insert_4lvl_pv;
+		ppgtt->vm.clear_range = gen8_ppgtt_clear_4lvl_pv;
+	}
+}
+
 /**
  * intel_vgpu_check_pv_caps - detect virtual GPU PV capabilities
  * @dev_priv: i915 device private
diff --git a/drivers/gpu/drm/i915/i915_vgpu.h b/drivers/gpu/drm/i915/i915_vgpu.h
index 68127d4..dfe2eb4 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.h
+++ b/drivers/gpu/drm/i915/i915_vgpu.h
@@ -27,6 +27,13 @@
 #include "i915_pvinfo.h"
 
 /*
+ * define different capabilities of PV optimization
+ */
+enum pv_caps {
+	PV_PPGTT_UPDATE = 0x1,
+};
+
+/*
  * A shared page(4KB) between gvt and VM, could be allocated by guest driver
  * or a fixed location in PCI bar 0 region
  */
@@ -65,9 +72,19 @@ intel_vgpu_has_pv_caps(struct drm_i915_private *dev_priv)
 	return dev_priv->vgpu.caps & VGT_CAPS_PV;
 }
 
+static inline bool
+intel_vgpu_enabled_pv_caps(struct drm_i915_private *dev_priv,
+		enum pv_caps cap)
+{
+	return intel_vgpu_active(dev_priv) && intel_vgpu_has_pv_caps(dev_priv)
+			&& (dev_priv->vgpu.pv_caps & cap);
+}
+
 int intel_vgt_balloon(struct drm_i915_private *dev_priv);
 void intel_vgt_deballoon(struct drm_i915_private *dev_priv);
 
 /* i915 vgpu pv related functions */
 bool intel_vgpu_check_pv_caps(struct drm_i915_private *dev_priv);
+void intel_vgpu_config_pv_caps(struct drm_i915_private *dev_priv,
+		enum pv_caps cap, void *data);
 #endif /* _I915_VGPU_H_ */
-- 
2.7.4


* [PATCH v4 4/8] drm/i915: vgpu context submission pv optimization
  2019-03-29 13:32 [PATCH v4 0/8] i915 vgpu PV to improve vgpu performance Xiaolin Zhang
                   ` (3 preceding siblings ...)
  2019-03-29 13:32 ` [PATCH v4 3/8] drm/i915: vgpu ppgtt update " Xiaolin Zhang
@ 2019-03-29 13:32 ` Xiaolin Zhang
  2019-03-29  7:15   ` Chris Wilson
                     ` (2 more replies)
  2019-03-29 13:32 ` [PATCH v4 5/8] drm/i915/gvt: GVTg handle pv_caps PVINFO register Xiaolin Zhang
                   ` (4 subsequent siblings)
  9 siblings, 3 replies; 17+ messages in thread
From: Xiaolin Zhang @ 2019-03-29 13:32 UTC (permalink / raw)
  To: intel-gvt-dev, intel-gfx; +Cc: zhenyu.z.wang, hang.yuan, zhiyuan.lv

This is a performance optimization that overrides the actual submission
backend in order to eliminate execlists CSB processing and reduce the
number of MMIO traps for workload submission, without a context switch
interrupt, by talking to GVT via a PV submission notification mechanism
between the guest and GVT.

Use PV_SUBMISSION to control this level of PV optimization.

v0: RFC
v1: rebase
v2: added pv ops for pv context submission. To maximize code reuse,
introduced 2 more ops (submit_ports & preempt_context) instead of 1 op
(set_default_submission) in the engine structure. pv versions of
submit_ports and preempt_context implemented.
v3:
1. to further reduce code duplication, refactored and replaced the 2 ops
"submit_ports & preempt_context" from v2 with 1 op "write_desc"
in the engine structure. pv version of write_desc implemented.
2. added VGT_G2V_ELSP_SUBMIT for g2v pv notification.
v4: implemented the pv elsp submission tasklet as the backend workload
submission by talking to GVT with the PV notification mechanism and
renamed VGT_G2V_ELSP_SUBMIT to VGT_G2V_PV_SUBMISSION.

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/i915_irq.c    |   2 +
 drivers/gpu/drm/i915/i915_pvinfo.h |   1 +
 drivers/gpu/drm/i915/i915_vgpu.c   | 158 ++++++++++++++++++++++++++++++++++++-
 drivers/gpu/drm/i915/i915_vgpu.h   |  10 +++
 drivers/gpu/drm/i915/intel_lrc.c   |   3 +
 5 files changed, 173 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 2f78829..28e8ee0 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -37,6 +37,7 @@
 #include "i915_drv.h"
 #include "i915_trace.h"
 #include "intel_drv.h"
+#include "i915_vgpu.h"
 
 /**
  * DOC: interrupt handling
@@ -1470,6 +1471,7 @@ gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir)
 	if (iir & GT_RENDER_USER_INTERRUPT) {
 		intel_engine_breadcrumbs_irq(engine);
 		tasklet |= USES_GUC_SUBMISSION(engine->i915);
+		tasklet |= USES_PV_SUBMISSION(engine->i915);
 	}
 
 	if (tasklet)
diff --git a/drivers/gpu/drm/i915/i915_pvinfo.h b/drivers/gpu/drm/i915/i915_pvinfo.h
index 2408a9d..362d898 100644
--- a/drivers/gpu/drm/i915/i915_pvinfo.h
+++ b/drivers/gpu/drm/i915/i915_pvinfo.h
@@ -50,6 +50,7 @@ enum vgt_g2v_type {
 	VGT_G2V_PPGTT_L4_ALLOC,
 	VGT_G2V_PPGTT_L4_CLEAR,
 	VGT_G2V_PPGTT_L4_INSERT,
+	VGT_G2V_PV_SUBMISSION,
 	VGT_G2V_MAX,
 };
 
diff --git a/drivers/gpu/drm/i915/i915_vgpu.c b/drivers/gpu/drm/i915/i915_vgpu.c
index 87a0ca5..53d05b3 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.c
+++ b/drivers/gpu/drm/i915/i915_vgpu.c
@@ -23,6 +23,7 @@
 
 #include "intel_drv.h"
 #include "i915_vgpu.h"
+#include "intel_lrc_reg.h"
 
 /**
  * DOC: Intel GVT-g guest support
@@ -81,7 +82,7 @@ void i915_check_vgpu(struct drm_i915_private *dev_priv)
 	dev_priv->vgpu.active = true;
 
 	/* guest driver PV capability */
-	dev_priv->vgpu.pv_caps = PV_PPGTT_UPDATE;
+	dev_priv->vgpu.pv_caps = PV_PPGTT_UPDATE | PV_SUBMISSION;
 
 	if (!intel_vgpu_check_pv_caps(dev_priv)) {
 		DRM_INFO("Virtual GPU for Intel GVT-g detected.\n");
@@ -292,6 +293,154 @@ int intel_vgt_balloon(struct drm_i915_private *dev_priv)
  * i915 vgpu PV support for Linux
  */
 
+static u64 execlists_update_context(struct i915_request *rq)
+{
+	struct intel_context *ce = rq->hw_context;
+	u32 *reg_state = ce->lrc_reg_state;
+
+	reg_state[CTX_RING_TAIL+1] = intel_ring_set_tail(rq->ring, rq->tail);
+
+	return ce->lrc_desc;
+}
+
+static inline struct i915_priolist *to_priolist(struct rb_node *rb)
+{
+	return rb_entry(rb, struct i915_priolist, node);
+}
+
+static void pv_submit(struct intel_engine_cs *engine)
+{
+	struct intel_uncore *uncore = &engine->i915->uncore;
+	struct intel_engine_execlists * const execlists = &engine->execlists;
+	struct execlist_port *port = execlists->port;
+	unsigned int n;
+	struct gvt_shared_page *shared_page = engine->i915->vgpu.shared_page;
+	u64 descs[2];
+
+	for (n = 0; n < execlists_num_ports(execlists); n++) {
+		struct i915_request *rq;
+		unsigned int count = 0;
+
+		descs[n] = 0;
+		rq = port_unpack(&port[n], &count);
+		if (rq && count == 0) {
+			port_set(&port[n], port_pack(rq, ++count));
+			descs[n] = execlists_update_context(rq);
+		}
+	}
+
+	spin_lock(&engine->i915->vgpu.shared_page_lock);
+	shared_page->ring_id = engine->id;
+	for (n = 0; n < execlists_num_ports(execlists); n++)
+		shared_page->descs[n] = descs[n];
+
+	__raw_uncore_write32(uncore, vgtif_reg(g2v_notify),
+			     VGT_G2V_PV_SUBMISSION);
+	spin_unlock(&engine->i915->vgpu.shared_page_lock);
+}
+
+static void pv_dequeue(struct intel_engine_cs *engine)
+{
+	struct intel_engine_execlists * const execlists = &engine->execlists;
+	struct execlist_port *port = execlists->port;
+	struct i915_request *last = NULL;
+	bool submit = false;
+	struct rb_node *rb;
+
+	lockdep_assert_held(&engine->timeline.lock);
+
+	GEM_BUG_ON(port_isset(port));
+
+	while ((rb = rb_first_cached(&execlists->queue))) {
+		struct i915_priolist *p = to_priolist(rb);
+		struct i915_request *rq, *rn;
+		int i;
+
+		priolist_for_each_request_consume(rq, rn, p, i) {
+			if (last && rq->hw_context != last->hw_context)
+				goto done;
+
+			list_del_init(&rq->sched.link);
+
+			__i915_request_submit(rq);
+			trace_i915_request_in(rq, port_index(port, execlists));
+
+			last = rq;
+			submit = true;
+		}
+
+		rb_erase_cached(&p->node, &execlists->queue);
+		i915_priolist_free(p);
+	}
+done:
+	execlists->queue_priority_hint =
+			rb ? to_priolist(rb)->priority : INT_MIN;
+	if (submit) {
+		port_set(port, i915_request_get(last));
+		pv_submit(engine);
+	}
+	if (last)
+		execlists_user_begin(execlists, execlists->port);
+
+	/* We must always keep the beast fed if we have work piled up */
+	GEM_BUG_ON(port_isset(execlists->port) &&
+		   !execlists_is_active(execlists, EXECLISTS_ACTIVE_USER));
+	GEM_BUG_ON(rb_first_cached(&execlists->queue) &&
+		   !port_isset(execlists->port));
+}
+
+static void vgpu_pv_submission_tasklet(unsigned long data)
+{
+	struct intel_engine_cs * const engine = (struct intel_engine_cs *)data;
+	struct intel_engine_execlists * const execlists = &engine->execlists;
+	struct execlist_port *port = execlists->port;
+	struct i915_request *rq;
+	unsigned long flags;
+	bool rq_finished = false;
+
+	spin_lock_irqsave(&engine->timeline.lock, flags);
+
+	rq = port_request(port);
+	while (rq && i915_request_completed(rq)) {
+		trace_i915_request_out(rq);
+		rq_finished = true;
+		i915_request_put(rq);
+
+		port = execlists_port_complete(execlists, port);
+		if (port_isset(port)) {
+			rq_finished = false;
+			execlists_user_begin(execlists, port);
+			rq = port_request(port);
+		} else {
+			execlists_user_end(execlists);
+			rq = NULL;
+		}
+	}
+
+	if (rq_finished || !rq)
+		pv_dequeue(engine);
+
+	spin_unlock_irqrestore(&engine->timeline.lock, flags);
+}
+
+static void vgpu_pv_set_default_submission(struct intel_engine_cs *engine)
+{
+	/*
+	 * We inherit a bunch of functions from execlists that we'd like
+	 * to keep using:
+	 *
+	 *    engine->submit_request = execlists_submit_request;
+	 *    engine->cancel_requests = execlists_cancel_requests;
+	 *    engine->schedule = execlists_schedule;
+	 *
+	 * But we need to override the actual submission backend in order
+	 * to talk to the GVT with PV notification message.
+	 */
+	intel_execlists_set_default_submission(engine);
+
+	engine->execlists.tasklet.func = vgpu_pv_submission_tasklet;
+}
+
 static void gen8_ppgtt_clear_4lvl_pv(struct i915_address_space *vm,
 				  u64 start, u64 length)
 {
@@ -391,6 +540,7 @@ void intel_vgpu_config_pv_caps(struct drm_i915_private *dev_priv,
 		enum pv_caps cap, void *data)
 {
 	struct i915_hw_ppgtt *ppgtt;
+	struct intel_engine_cs *engine;
 
 	if (!intel_vgpu_enabled_pv_caps(dev_priv, cap))
 		return;
@@ -401,6 +551,12 @@ void intel_vgpu_config_pv_caps(struct drm_i915_private *dev_priv,
 		ppgtt->vm.insert_entries = gen8_ppgtt_insert_4lvl_pv;
 		ppgtt->vm.clear_range = gen8_ppgtt_clear_4lvl_pv;
 	}
+
+	if (cap == PV_SUBMISSION) {
+		engine = (struct intel_engine_cs *)data;
+		engine->set_default_submission = vgpu_pv_set_default_submission;
+		engine->set_default_submission(engine);
+	}
 }
 
 /**
diff --git a/drivers/gpu/drm/i915/i915_vgpu.h b/drivers/gpu/drm/i915/i915_vgpu.h
index dfe2eb4..cae776d 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.h
+++ b/drivers/gpu/drm/i915/i915_vgpu.h
@@ -31,6 +31,7 @@
  */
 enum pv_caps {
 	PV_PPGTT_UPDATE = 0x1,
+	PV_SUBMISSION = 0x2,
 };
 
 /*
@@ -80,6 +81,12 @@ intel_vgpu_enabled_pv_caps(struct drm_i915_private *dev_priv,
 			&& (dev_priv->vgpu.pv_caps & cap);
 }
 
+static inline bool
+intel_vgpu_is_using_pv_submission(struct drm_i915_private *dev_priv)
+{
+	return intel_vgpu_enabled_pv_caps(dev_priv, PV_SUBMISSION);
+}
+
 int intel_vgt_balloon(struct drm_i915_private *dev_priv);
 void intel_vgt_deballoon(struct drm_i915_private *dev_priv);
 
@@ -87,4 +94,7 @@ void intel_vgt_deballoon(struct drm_i915_private *dev_priv);
 bool intel_vgpu_check_pv_caps(struct drm_i915_private *dev_priv);
 void intel_vgpu_config_pv_caps(struct drm_i915_private *dev_priv,
 		enum pv_caps cap, void *data);
+#define USES_PV_SUBMISSION(dev_priv) \
+	intel_vgpu_is_using_pv_submission(dev_priv)
+
 #endif /* _I915_VGPU_H_ */
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index c1b9780..0a66714 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -2352,6 +2352,9 @@ logical_ring_default_vfuncs(struct intel_engine_cs *engine)
 		 */
 	}
 	engine->emit_bb_start = gen8_emit_bb_start;
+
+	if (intel_vgpu_active(engine->i915))
+		intel_vgpu_config_pv_caps(engine->i915, PV_SUBMISSION, engine);
 }
 
 static inline void
-- 
2.7.4


* [PATCH v4 5/8] drm/i915/gvt: GVTg handle pv_caps PVINFO register
  2019-03-29 13:32 [PATCH v4 0/8] i915 vgpu PV to improve vgpu performance Xiaolin Zhang
                   ` (4 preceding siblings ...)
  2019-03-29 13:32 ` [PATCH v4 4/8] drm/i915: vgpu context submission " Xiaolin Zhang
@ 2019-03-29 13:32 ` Xiaolin Zhang
  2019-03-29 13:32 ` [PATCH v4 6/8] drm/i915/gvt: GVTg handle shared_page setup Xiaolin Zhang
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 17+ messages in thread
From: Xiaolin Zhang @ 2019-03-29 13:32 UTC (permalink / raw)
  To: intel-gvt-dev, intel-gfx; +Cc: zhenyu.z.wang, hang.yuan, zhiyuan.lv

Implement the pv_caps PVINFO register handler in GVTg to control
different levels of PV optimization within the guest.

Report the VGT_CAPS_PV capability in the PVINFO page to the guest.

v0: RFC
v1: rebase
v2: rebase
v3: renamed enable_pvmmio to pvmmio_caps, which is used for the host
pv caps.
v4: renamed pvmmio_caps to pv_caps

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/gvt/handlers.c | 4 ++++
 drivers/gpu/drm/i915/gvt/vgpu.c     | 3 +++
 2 files changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
index 63418c8..0333652 100644
--- a/drivers/gpu/drm/i915/gvt/handlers.c
+++ b/drivers/gpu/drm/i915/gvt/handlers.c
@@ -1167,6 +1167,7 @@ static int pvinfo_mmio_read(struct intel_vgpu *vgpu, unsigned int offset,
 		break;
 	case 0x78010:	/* vgt_caps */
 	case 0x7881c:
+	case _vgtif_reg(pv_caps):
 		break;
 	default:
 		invalid_read = true;
@@ -1240,6 +1241,9 @@ static int pvinfo_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
 	case _vgtif_reg(g2v_notify):
 		ret = handle_g2v_notification(vgpu, data);
 		break;
+	case _vgtif_reg(pv_caps):
+		DRM_INFO("vgpu id=%d pv_caps = 0x%x\n", vgpu->id, data);
+		break;
 	/* add xhot and yhot to handled list to avoid error log */
 	case _vgtif_reg(cursor_x_hot):
 	case _vgtif_reg(cursor_y_hot):
diff --git a/drivers/gpu/drm/i915/gvt/vgpu.c b/drivers/gpu/drm/i915/gvt/vgpu.c
index 314e401..9c9a192 100644
--- a/drivers/gpu/drm/i915/gvt/vgpu.c
+++ b/drivers/gpu/drm/i915/gvt/vgpu.c
@@ -47,6 +47,7 @@ void populate_pvinfo_page(struct intel_vgpu *vgpu)
 	vgpu_vreg_t(vgpu, vgtif_reg(vgt_caps)) = VGT_CAPS_FULL_PPGTT;
 	vgpu_vreg_t(vgpu, vgtif_reg(vgt_caps)) |= VGT_CAPS_HWSP_EMULATION;
 	vgpu_vreg_t(vgpu, vgtif_reg(vgt_caps)) |= VGT_CAPS_HUGE_GTT;
+	vgpu_vreg_t(vgpu, vgtif_reg(vgt_caps)) |= VGT_CAPS_PV;
 
 	vgpu_vreg_t(vgpu, vgtif_reg(avail_rs.mappable_gmadr.base)) =
 		vgpu_aperture_gmadr_base(vgpu);
@@ -531,6 +532,7 @@ void intel_gvt_reset_vgpu_locked(struct intel_vgpu *vgpu, bool dmlr,
 	struct intel_gvt *gvt = vgpu->gvt;
 	struct intel_gvt_workload_scheduler *scheduler = &gvt->scheduler;
 	unsigned int resetting_eng = dmlr ? ALL_ENGINES : engine_mask;
+	int pv_caps = vgpu_vreg_t(vgpu, vgtif_reg(pv_caps));
 
 	gvt_dbg_core("------------------------------------------\n");
 	gvt_dbg_core("resseting vgpu%d, dmlr %d, engine_mask %08x\n",
@@ -562,6 +564,7 @@ void intel_gvt_reset_vgpu_locked(struct intel_vgpu *vgpu, bool dmlr,
 
 		intel_vgpu_reset_mmio(vgpu, dmlr);
 		populate_pvinfo_page(vgpu);
+		vgpu_vreg_t(vgpu, vgtif_reg(pv_caps)) = pv_caps;
 		intel_vgpu_reset_display(vgpu);
 
 		if (dmlr) {
-- 
2.7.4


* [PATCH v4 6/8] drm/i915/gvt: GVTg handle shared_page setup
  2019-03-29 13:32 [PATCH v4 0/8] i915 vgpu PV to improve vgpu performance Xiaolin Zhang
                   ` (5 preceding siblings ...)
  2019-03-29 13:32 ` [PATCH v4 5/8] drm/i915/gvt: GVTg handle pv_caps PVINFO register Xiaolin Zhang
@ 2019-03-29 13:32 ` Xiaolin Zhang
  2019-03-29 13:32 ` [PATCH v4 7/8] drm/i915/gvt: GVTg support ppgtt pv optimization Xiaolin Zhang
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 17+ messages in thread
From: Xiaolin Zhang @ 2019-03-29 13:32 UTC (permalink / raw)
  To: intel-gvt-dev, intel-gfx; +Cc: zhenyu.z.wang, hang.yuan, zhiyuan.lv

GVTg implements the shared_page setup operation and the read_shared_page
functionality based on hypervisor_read_gpa().

The shared_page_gpa is passed from the guest driver through the PVINFO
shared_page_gpa register.

v0: RFC
v1: rebase
v2: rebase
v3: added a shared_page_gpa check; on read_gpa failure, return zeroed
memory. Handle the VGT_G2V_SHARED_PAGE_SETUP g2v notification.
v4: rebase

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/gvt/gvt.h      |  6 +++++-
 drivers/gpu/drm/i915/gvt/handlers.c | 15 +++++++++++++++
 drivers/gpu/drm/i915/gvt/vgpu.c     | 24 ++++++++++++++++++++++++
 3 files changed, 44 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
index a4fd979..b01a54f 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.h
+++ b/drivers/gpu/drm/i915/gvt/gvt.h
@@ -49,6 +49,7 @@
 #include "fb_decoder.h"
 #include "dmabuf.h"
 #include "page_track.h"
+#include "i915_vgpu.h"
 
 #define GVT_MAX_VGPU 8
 
@@ -229,6 +230,8 @@ struct intel_vgpu {
 	struct completion vblank_done;
 
 	u32 scan_nonprivbb;
+	u64 shared_page_gpa;
+	bool shared_page_enabled;
 };
 
 /* validating GM healthy status*/
@@ -686,7 +689,8 @@ int intel_gvt_debugfs_add_vgpu(struct intel_vgpu *vgpu);
 void intel_gvt_debugfs_remove_vgpu(struct intel_vgpu *vgpu);
 int intel_gvt_debugfs_init(struct intel_gvt *gvt);
 void intel_gvt_debugfs_clean(struct intel_gvt *gvt);
-
+int intel_gvt_read_shared_page(struct intel_vgpu *vgpu,
+		unsigned int offset, void *buf, unsigned long len);
 
 #include "trace.h"
 #include "mpt.h"
diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
index 0333652..28b390c 100644
--- a/drivers/gpu/drm/i915/gvt/handlers.c
+++ b/drivers/gpu/drm/i915/gvt/handlers.c
@@ -1168,6 +1168,8 @@ static int pvinfo_mmio_read(struct intel_vgpu *vgpu, unsigned int offset,
 	case 0x78010:	/* vgt_caps */
 	case 0x7881c:
 	case _vgtif_reg(pv_caps):
+	case _vgtif_reg(shared_page_gpa):
+	case _vgtif_reg(shared_page_gpa) + 4:
 		break;
 	default:
 		invalid_read = true;
@@ -1185,6 +1187,7 @@ static int handle_g2v_notification(struct intel_vgpu *vgpu, int notification)
 	intel_gvt_gtt_type_t root_entry_type = GTT_TYPE_PPGTT_ROOT_L4_ENTRY;
 	struct intel_vgpu_mm *mm;
 	u64 *pdps;
+	unsigned long gpa, gfn;
 
 	pdps = (u64 *)&vgpu_vreg64_t(vgpu, vgtif_reg(pdp[0]));
 
@@ -1198,6 +1201,16 @@ static int handle_g2v_notification(struct intel_vgpu *vgpu, int notification)
 	case VGT_G2V_PPGTT_L3_PAGE_TABLE_DESTROY:
 	case VGT_G2V_PPGTT_L4_PAGE_TABLE_DESTROY:
 		return intel_vgpu_put_ppgtt_mm(vgpu, pdps);
+	case VGT_G2V_SHARED_PAGE_SETUP:
+		gpa = vgpu_vreg64_t(vgpu, vgtif_reg(shared_page_gpa));
+		gfn = gpa >> PAGE_SHIFT;
+		if (!intel_gvt_hypervisor_is_valid_gfn(vgpu, gfn)) {
+			vgpu_vreg_t(vgpu, vgtif_reg(pv_caps)) = 0;
+			return 0;
+		}
+		vgpu->shared_page_gpa = gpa;
+		vgpu->shared_page_enabled = true;
+		break;
 	case VGT_G2V_EXECLIST_CONTEXT_CREATE:
 	case VGT_G2V_EXECLIST_CONTEXT_DESTROY:
 	case 1:	/* Remove this in guest driver. */
@@ -1257,6 +1270,8 @@ static int pvinfo_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
 	case _vgtif_reg(pdp[3].hi):
 	case _vgtif_reg(execlist_context_descriptor_lo):
 	case _vgtif_reg(execlist_context_descriptor_hi):
+	case _vgtif_reg(shared_page_gpa):
+	case _vgtif_reg(shared_page_gpa) + 4:
 		break;
 	case _vgtif_reg(rsv5[0])..._vgtif_reg(rsv5[3]):
 		enter_failsafe_mode(vgpu, GVT_FAILSAFE_INSUFFICIENT_RESOURCE);
diff --git a/drivers/gpu/drm/i915/gvt/vgpu.c b/drivers/gpu/drm/i915/gvt/vgpu.c
index 9c9a192..4d241a7 100644
--- a/drivers/gpu/drm/i915/gvt/vgpu.c
+++ b/drivers/gpu/drm/i915/gvt/vgpu.c
@@ -63,6 +63,8 @@ void populate_pvinfo_page(struct intel_vgpu *vgpu)
 	vgpu_vreg_t(vgpu, vgtif_reg(cursor_x_hot)) = UINT_MAX;
 	vgpu_vreg_t(vgpu, vgtif_reg(cursor_y_hot)) = UINT_MAX;
 
+	vgpu_vreg64_t(vgpu, vgtif_reg(shared_page_gpa)) = 0;
+
 	gvt_dbg_core("Populate PVINFO PAGE for vGPU %d\n", vgpu->id);
 	gvt_dbg_core("aperture base [GMADR] 0x%llx size 0x%llx\n",
 		vgpu_aperture_gmadr_base(vgpu), vgpu_aperture_sz(vgpu));
@@ -593,3 +595,25 @@ void intel_gvt_reset_vgpu(struct intel_vgpu *vgpu)
 	intel_gvt_reset_vgpu_locked(vgpu, true, 0);
 	mutex_unlock(&vgpu->vgpu_lock);
 }
+
+/**
+ * intel_gvt_read_shared_page - read content from shared page
+ */
+int intel_gvt_read_shared_page(struct intel_vgpu *vgpu,
+		unsigned int offset, void *buf, unsigned long len)
+{
+	int ret = -EINVAL;
+	unsigned long gpa;
+
+	if (offset >= sizeof(struct gvt_shared_page))
+		goto err;
+
+	gpa = vgpu->shared_page_gpa + offset;
+	ret = intel_gvt_hypervisor_read_gpa(vgpu, gpa, buf, len);
+	if (!ret)
+		return ret;
+err:
+	gvt_vgpu_err("read shared page (offset %x) failed\n", offset);
+	memset(buf, 0, len);
+	return ret;
+}
-- 
2.7.4


* [PATCH v4 7/8] drm/i915/gvt: GVTg support ppgtt pv optimization
  2019-03-29 13:32 [PATCH v4 0/8] i915 vgpu PV to improve vgpu performance Xiaolin Zhang
                   ` (6 preceding siblings ...)
  2019-03-29 13:32 ` [PATCH v4 6/8] drm/i915/gvt: GVTg handle shared_page setup Xiaolin Zhang
@ 2019-03-29 13:32 ` Xiaolin Zhang
  2019-03-29 13:32 ` [PATCH v4 8/8] drm/i915/gvt: GVTg support context submission " Xiaolin Zhang
  2019-03-29 16:05 ` [PATCH v4 0/8] i915 vgpu PV to improve vgpu performance Chris Wilson
  9 siblings, 0 replies; 17+ messages in thread
From: Xiaolin Zhang @ 2019-03-29 13:32 UTC (permalink / raw)
  To: intel-gvt-dev, intel-gfx; +Cc: zhenyu.z.wang, hang.yuan, zhiyuan.lv

This patch handles ppgtt updates from g2v notifications.

It reads out the ppgtt pte entries from the guest pte table pages and
converts them to host pfns.

It creates local ppgtt tables and inserts the content pages into the
local ppgtt tables directly, which does not track the usage of the
guest page tables and removes the cost of write protection from the
original shadow page mechanism.

v0: RFC
v1: rebase
v2: rebase
v3: report the pv ppgtt cap to the guest
v4: renamed VGPU_PVMMIO to VGPU_PVCAP for naming consistency; no PV
support if gfx VT-d is enabled.

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/gvt/gtt.c      | 317 ++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/gvt/gtt.h      |   9 +
 drivers/gpu/drm/i915/gvt/gvt.h      |   4 +
 drivers/gpu/drm/i915/gvt/handlers.c |  12 +-
 drivers/gpu/drm/i915/gvt/vgpu.c     |   3 +
 5 files changed, 344 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gvt/gtt.c b/drivers/gpu/drm/i915/gvt/gtt.c
index f4c992d..f1c07f0 100644
--- a/drivers/gpu/drm/i915/gvt/gtt.c
+++ b/drivers/gpu/drm/i915/gvt/gtt.c
@@ -1744,6 +1744,25 @@ static int ppgtt_handle_guest_write_page_table_bytes(
 	return 0;
 }
 
+static void invalidate_mm_pv(struct intel_vgpu_mm *mm)
+{
+	struct intel_vgpu *vgpu = mm->vgpu;
+	struct intel_gvt *gvt = vgpu->gvt;
+	struct intel_gvt_gtt *gtt = &gvt->gtt;
+	struct intel_gvt_gtt_pte_ops *ops = gtt->pte_ops;
+	struct intel_gvt_gtt_entry se;
+
+	i915_ppgtt_put(mm->ppgtt);
+
+	ppgtt_get_shadow_root_entry(mm, &se, 0);
+	if (!ops->test_present(&se))
+		return;
+	se.val64 = 0;
+	ppgtt_set_shadow_root_entry(mm, &se, 0);
+
+	mm->ppgtt_mm.shadowed = false;
+}
+
 static void invalidate_ppgtt_mm(struct intel_vgpu_mm *mm)
 {
 	struct intel_vgpu *vgpu = mm->vgpu;
@@ -1756,6 +1775,11 @@ static void invalidate_ppgtt_mm(struct intel_vgpu_mm *mm)
 	if (!mm->ppgtt_mm.shadowed)
 		return;
 
+	if (VGPU_PVCAP(mm->vgpu, PV_PPGTT_UPDATE)) {
+		invalidate_mm_pv(mm);
+		return;
+	}
+
 	for (index = 0; index < ARRAY_SIZE(mm->ppgtt_mm.shadow_pdps); index++) {
 		ppgtt_get_shadow_root_entry(mm, &se, index);
 
@@ -1773,6 +1797,26 @@ static void invalidate_ppgtt_mm(struct intel_vgpu_mm *mm)
 	mm->ppgtt_mm.shadowed = false;
 }
 
+static int shadow_mm_pv(struct intel_vgpu_mm *mm)
+{
+	struct intel_vgpu *vgpu = mm->vgpu;
+	struct intel_gvt *gvt = vgpu->gvt;
+	struct intel_gvt_gtt_entry se;
+
+	mm->ppgtt = i915_ppgtt_create(gvt->dev_priv);
+	if (IS_ERR(mm->ppgtt)) {
+		gvt_vgpu_err("fail to create ppgtt for pdp 0x%llx\n",
+				px_dma(&mm->ppgtt->pml4));
+		return PTR_ERR(mm->ppgtt);
+	}
+
+	se.type = GTT_TYPE_PPGTT_ROOT_L4_ENTRY;
+	se.val64 = px_dma(&mm->ppgtt->pml4);
+	ppgtt_set_shadow_root_entry(mm, &se, 0);
+	mm->ppgtt_mm.shadowed = true;
+
+	return 0;
+}
 
 static int shadow_ppgtt_mm(struct intel_vgpu_mm *mm)
 {
@@ -1787,6 +1831,9 @@ static int shadow_ppgtt_mm(struct intel_vgpu_mm *mm)
 	if (mm->ppgtt_mm.shadowed)
 		return 0;
 
+	if (VGPU_PVCAP(mm->vgpu, PV_PPGTT_UPDATE))
+		return shadow_mm_pv(mm);
+
 	mm->ppgtt_mm.shadowed = true;
 
 	for (index = 0; index < ARRAY_SIZE(mm->ppgtt_mm.guest_pdps); index++) {
@@ -2787,3 +2834,273 @@ void intel_vgpu_reset_gtt(struct intel_vgpu *vgpu)
 	intel_vgpu_destroy_all_ppgtt_mm(vgpu);
 	intel_vgpu_reset_ggtt(vgpu, true);
 }
+
+int intel_vgpu_g2v_pv_ppgtt_alloc_4lvl(struct intel_vgpu *vgpu,
+		u64 pdps[])
+{
+	struct intel_vgpu_mm *mm;
+	int ret = 0;
+	u32 offset;
+	struct pv_ppgtt_update pv_ppgtt;
+
+	offset = offsetof(struct gvt_shared_page, pv_ppgtt);
+	intel_gvt_read_shared_page(vgpu, offset, &pv_ppgtt, sizeof(pv_ppgtt));
+
+	mm = intel_vgpu_find_ppgtt_mm(vgpu, &pv_ppgtt.pdp);
+	if (!mm) {
+		gvt_vgpu_err("failed to find pdp 0x%llx\n", pv_ppgtt.pdp);
+		ret = -EINVAL;
+	} else {
+		ret = mm->ppgtt->vm.allocate_va_range(&mm->ppgtt->vm,
+			pv_ppgtt.start, pv_ppgtt.length);
+		if (ret)
+			gvt_vgpu_err("failed to alloc %llx\n", pv_ppgtt.pdp);
+	}
+
+	return ret;
+}
+
+int intel_vgpu_g2v_pv_ppgtt_clear_4lvl(struct intel_vgpu *vgpu,
+		u64 pdps[])
+{
+	struct intel_vgpu_mm *mm;
+	int ret = 0;
+	u32 offset;
+	struct pv_ppgtt_update pv_ppgtt;
+
+	offset = offsetof(struct gvt_shared_page, pv_ppgtt);
+	intel_gvt_read_shared_page(vgpu, offset, &pv_ppgtt, sizeof(pv_ppgtt));
+	mm = intel_vgpu_find_ppgtt_mm(vgpu, &pv_ppgtt.pdp);
+	if (!mm) {
+		gvt_vgpu_err("failed to find pdp 0x%llx\n", pv_ppgtt.pdp);
+		ret = -EINVAL;
+	} else {
+		mm->ppgtt->vm.clear_range(&mm->ppgtt->vm,
+			pv_ppgtt.start, pv_ppgtt.length);
+	}
+
+	return ret;
+}
+
+#define GEN8_PML4E_SIZE		(1UL << GEN8_PML4E_SHIFT)
+#define GEN8_PML4E_SIZE_MASK	(~(GEN8_PML4E_SIZE - 1))
+#define GEN8_PDPE_SIZE		(1UL << GEN8_PDPE_SHIFT)
+#define GEN8_PDPE_SIZE_MASK	(~(GEN8_PDPE_SIZE - 1))
+#define GEN8_PDE_SIZE		(1UL << GEN8_PDE_SHIFT)
+#define GEN8_PDE_SIZE_MASK	(~(GEN8_PDE_SIZE - 1))
+
+#define pml4_addr_end(addr, end)					\
+({	unsigned long __boundary = \
+			((addr) + GEN8_PML4E_SIZE) & GEN8_PML4E_SIZE_MASK; \
+	(__boundary < (end)) ? __boundary : (end);		\
+})
+
+#define pdp_addr_end(addr, end)						\
+({	unsigned long __boundary = \
+			((addr) + GEN8_PDPE_SIZE) & GEN8_PDPE_SIZE_MASK; \
+	(__boundary < (end)) ? __boundary : (end);		\
+})
+
+#define pd_addr_end(addr, end)						\
+({	unsigned long __boundary = \
+			((addr) + GEN8_PDE_SIZE) & GEN8_PDE_SIZE_MASK;	\
+	(__boundary < (end)) ? __boundary : (end);		\
+})
+
+struct ppgtt_walk {
+	unsigned long *mfns;
+	int mfn_index;
+	unsigned long *pt;
+};
+
+static int walk_pt_range(struct intel_vgpu *vgpu, u64 pt,
+				u64 start, u64 end, struct ppgtt_walk *walk)
+{
+	const struct intel_gvt_device_info *info = &vgpu->gvt->device_info;
+	struct intel_gvt_gtt_gma_ops *gma_ops = vgpu->gvt->gtt.gma_ops;
+	unsigned long start_index, end_index;
+	int ret;
+	int i;
+	unsigned long mfn, gfn;
+
+	start_index = gma_ops->gma_to_pte_index(start);
+	end_index = ((end - start) >> PAGE_SHIFT) + start_index;
+
+	ret = intel_gvt_hypervisor_read_gpa(vgpu,
+		(pt & PAGE_MASK) + (start_index << info->gtt_entry_size_shift),
+		walk->pt + start_index,
+		(end_index - start_index) << info->gtt_entry_size_shift);
+	if (ret) {
+		gvt_vgpu_err("fail to read gpa %llx\n", pt);
+		return ret;
+	}
+
+	for (i = start_index; i < end_index; i++) {
+		gfn = walk->pt[i] >> PAGE_SHIFT;
+		mfn = intel_gvt_hypervisor_gfn_to_mfn(vgpu, gfn);
+		if (mfn == INTEL_GVT_INVALID_ADDR) {
+			gvt_vgpu_err("fail to translate gfn: 0x%lx\n", gfn);
+			return -ENXIO;
+		}
+		walk->mfns[walk->mfn_index++] = mfn << PAGE_SHIFT;
+	}
+
+	return 0;
+}
+
+
+static int walk_pd_range(struct intel_vgpu *vgpu, u64 pd,
+				u64 start, u64 end, struct ppgtt_walk *walk)
+{
+	const struct intel_gvt_device_info *info = &vgpu->gvt->device_info;
+	struct intel_gvt_gtt_gma_ops *gma_ops = vgpu->gvt->gtt.gma_ops;
+	unsigned long index;
+	u64 pt, next;
+	int ret  = 0;
+
+	do {
+		index = gma_ops->gma_to_pde_index(start);
+
+		ret = intel_gvt_hypervisor_read_gpa(vgpu,
+			(pd & PAGE_MASK) + (index <<
+			info->gtt_entry_size_shift), &pt, 8);
+		if (ret)
+			return ret;
+		next = pd_addr_end(start, end);
+		walk_pt_range(vgpu, pt, start, next, walk);
+
+		start = next;
+	} while (start != end);
+
+	return ret;
+}
+
+
+static int walk_pdp_range(struct intel_vgpu *vgpu, u64 pdp,
+				  u64 start, u64 end, struct ppgtt_walk *walk)
+{
+	const struct intel_gvt_device_info *info = &vgpu->gvt->device_info;
+	struct intel_gvt_gtt_gma_ops *gma_ops = vgpu->gvt->gtt.gma_ops;
+	unsigned long index;
+	u64 pd, next;
+	int ret  = 0;
+
+	do {
+		index = gma_ops->gma_to_l4_pdp_index(start);
+
+		ret = intel_gvt_hypervisor_read_gpa(vgpu,
+			(pdp & PAGE_MASK) + (index <<
+			info->gtt_entry_size_shift), &pd, 8);
+		if (ret)
+			return ret;
+		next = pdp_addr_end(start, end);
+		walk_pd_range(vgpu, pd, start, next, walk);
+		start = next;
+	} while (start != end);
+
+	return ret;
+}
+
+
+static int walk_pml4_range(struct intel_vgpu *vgpu, u64 pml4,
+				u64 start, u64 end, struct ppgtt_walk *walk)
+{
+	const struct intel_gvt_device_info *info = &vgpu->gvt->device_info;
+	struct intel_gvt_gtt_gma_ops *gma_ops = vgpu->gvt->gtt.gma_ops;
+	unsigned long index;
+	u64 pdp, next;
+	int ret  = 0;
+
+	do {
+		index = gma_ops->gma_to_pml4_index(start);
+		ret = intel_gvt_hypervisor_read_gpa(vgpu,
+			(pml4 & PAGE_MASK) + (index <<
+			info->gtt_entry_size_shift), &pdp, 8);
+		if (ret)
+			return ret;
+		next = pml4_addr_end(start, end);
+		walk_pdp_range(vgpu, pdp, start, next, walk);
+		start = next;
+	} while (start != end);
+
+	return ret;
+}
+
+int intel_vgpu_g2v_pv_ppgtt_insert_4lvl(struct intel_vgpu *vgpu,
+		u64 pdps[])
+{
+	struct intel_vgpu_mm *mm;
+	u64 pml4, start, length;
+	u32 cache_level;
+	int ret = 0;
+	struct sg_table st;
+	struct scatterlist *sg = NULL;
+	int num_pages;
+	struct i915_vma vma;
+	struct ppgtt_walk walk;
+	int i;
+	u32 offset;
+	struct pv_ppgtt_update pv_ppgtt;
+
+	offset = offsetof(struct gvt_shared_page, pv_ppgtt);
+	intel_gvt_read_shared_page(vgpu, offset, &pv_ppgtt, sizeof(pv_ppgtt));
+	pml4 = pv_ppgtt.pdp;
+	start = pv_ppgtt.start;
+	length = pv_ppgtt.length;
+	cache_level = pv_ppgtt.cache_level;
+	num_pages = length >> PAGE_SHIFT;
+
+	mm = intel_vgpu_find_ppgtt_mm(vgpu, &pml4);
+	if (!mm) {
+		gvt_vgpu_err("fail to find mm for pml4 0x%llx\n", pml4);
+		return -EINVAL;
+	}
+
+	walk.mfn_index = 0;
+	walk.mfns = NULL;
+	walk.pt = NULL;
+
+	walk.mfns = kmalloc_array(num_pages,
+			sizeof(unsigned long), GFP_KERNEL);
+	if (!walk.mfns) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	walk.pt = (unsigned long *)__get_free_pages(GFP_KERNEL, 0);
+	if (!walk.pt) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	if (sg_alloc_table(&st, num_pages, GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	ret = walk_pml4_range(vgpu, pml4, start, start + length, &walk);
+	if (ret)
+		goto fail_free_sg;
+
+	WARN_ON(num_pages != walk.mfn_index);
+
+	for_each_sg(st.sgl, sg, num_pages, i) {
+		sg->offset = 0;
+		sg->length = PAGE_SIZE;
+		sg_dma_address(sg) = walk.mfns[i];
+		sg_dma_len(sg) = PAGE_SIZE;
+	}
+
+	memset(&vma, 0, sizeof(vma));
+	vma.node.start = start;
+	vma.pages = &st;
+	mm->ppgtt->vm.insert_entries(&mm->ppgtt->vm, &vma, cache_level, 0);
+
+fail_free_sg:
+	sg_free_table(&st);
+fail:
+	kfree(walk.mfns);
+	free_page((unsigned long)walk.pt);
+
+	return ret;
+}
diff --git a/drivers/gpu/drm/i915/gvt/gtt.h b/drivers/gpu/drm/i915/gvt/gtt.h
index 32c573a..571a5c4 100644
--- a/drivers/gpu/drm/i915/gvt/gtt.h
+++ b/drivers/gpu/drm/i915/gvt/gtt.h
@@ -141,6 +141,7 @@ struct intel_gvt_partial_pte {
 
 struct intel_vgpu_mm {
 	enum intel_gvt_mm_type type;
+	struct i915_hw_ppgtt *ppgtt;
 	struct intel_vgpu *vgpu;
 
 	struct kref ref;
@@ -277,4 +278,12 @@ int intel_vgpu_emulate_ggtt_mmio_read(struct intel_vgpu *vgpu,
 int intel_vgpu_emulate_ggtt_mmio_write(struct intel_vgpu *vgpu,
 	unsigned int off, void *p_data, unsigned int bytes);
 
+int intel_vgpu_g2v_pv_ppgtt_alloc_4lvl(struct intel_vgpu *vgpu,
+		u64 pdps[]);
+
+int intel_vgpu_g2v_pv_ppgtt_clear_4lvl(struct intel_vgpu *vgpu,
+		u64 pdps[]);
+
+int intel_vgpu_g2v_pv_ppgtt_insert_4lvl(struct intel_vgpu *vgpu,
+		u64 pdps[]);
 #endif /* _GVT_GTT_H_ */
diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
index b01a54f..a7ce591 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.h
+++ b/drivers/gpu/drm/i915/gvt/gvt.h
@@ -53,6 +53,10 @@
 
 #define GVT_MAX_VGPU 8
 
+#define VGPU_PVCAP(vgpu, cap)	\
+	((vgpu_vreg_t(vgpu, vgtif_reg(pv_caps)) & (cap)) \
+			&& vgpu->shared_page_enabled)
+
 struct intel_gvt_host {
 	struct device *dev;
 	bool initialized;
diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
index 28b390c..2b4c686 100644
--- a/drivers/gpu/drm/i915/gvt/handlers.c
+++ b/drivers/gpu/drm/i915/gvt/handlers.c
@@ -1188,6 +1188,7 @@ static int handle_g2v_notification(struct intel_vgpu *vgpu, int notification)
 	struct intel_vgpu_mm *mm;
 	u64 *pdps;
 	unsigned long gpa, gfn;
+	int ret = 0;
 
 	pdps = (u64 *)&vgpu_vreg64_t(vgpu, vgtif_reg(pdp[0]));
 
@@ -1211,6 +1212,15 @@ static int handle_g2v_notification(struct intel_vgpu *vgpu, int notification)
 		vgpu->shared_page_gpa = gpa;
 		vgpu->shared_page_enabled = true;
 		break;
+	case VGT_G2V_PPGTT_L4_ALLOC:
+		ret = intel_vgpu_g2v_pv_ppgtt_alloc_4lvl(vgpu, pdps);
+		break;
+	case VGT_G2V_PPGTT_L4_INSERT:
+		ret = intel_vgpu_g2v_pv_ppgtt_insert_4lvl(vgpu, pdps);
+		break;
+	case VGT_G2V_PPGTT_L4_CLEAR:
+		ret = intel_vgpu_g2v_pv_ppgtt_clear_4lvl(vgpu, pdps);
+		break;
 	case VGT_G2V_EXECLIST_CONTEXT_CREATE:
 	case VGT_G2V_EXECLIST_CONTEXT_DESTROY:
 	case 1:	/* Remove this in guest driver. */
@@ -1218,7 +1228,7 @@ static int handle_g2v_notification(struct intel_vgpu *vgpu, int notification)
 	default:
 		gvt_vgpu_err("Invalid PV notification %d\n", notification);
 	}
-	return 0;
+	return ret;
 }
 
 static int send_display_ready_uevent(struct intel_vgpu *vgpu, int ready)
diff --git a/drivers/gpu/drm/i915/gvt/vgpu.c b/drivers/gpu/drm/i915/gvt/vgpu.c
index 4d241a7..bc7387e 100644
--- a/drivers/gpu/drm/i915/gvt/vgpu.c
+++ b/drivers/gpu/drm/i915/gvt/vgpu.c
@@ -49,6 +49,9 @@ void populate_pvinfo_page(struct intel_vgpu *vgpu)
 	vgpu_vreg_t(vgpu, vgtif_reg(vgt_caps)) |= VGT_CAPS_HUGE_GTT;
 	vgpu_vreg_t(vgpu, vgtif_reg(vgt_caps)) |= VGT_CAPS_PV;
 
+	if (!intel_vtd_active())
+		vgpu_vreg_t(vgpu, vgtif_reg(pv_caps)) = PV_PPGTT_UPDATE;
+
 	vgpu_vreg_t(vgpu, vgtif_reg(avail_rs.mappable_gmadr.base)) =
 		vgpu_aperture_gmadr_base(vgpu);
 	vgpu_vreg_t(vgpu, vgtif_reg(avail_rs.mappable_gmadr.size)) =
-- 
2.7.4

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v4 8/8] drm/i915/gvt: GVTg support context submission pv optimization
  2019-03-29 13:32 [PATCH v4 0/8] i915 vgpu PV to improve vgpu performance Xiaolin Zhang
                   ` (7 preceding siblings ...)
  2019-03-29 13:32 ` [PATCH v4 7/8] drm/i915/gvt: GVTg support ppgtt pv optimization Xiaolin Zhang
@ 2019-03-29 13:32 ` Xiaolin Zhang
  2019-03-29 16:05 ` [PATCH v4 0/8] i915 vgpu PV to improve vgpu performance Chris Wilson
  9 siblings, 0 replies; 17+ messages in thread
From: Xiaolin Zhang @ 2019-03-29 13:32 UTC (permalink / raw)
  To: intel-gvt-dev, intel-gfx; +Cc: zhenyu.z.wang, hang.yuan, zhiyuan.lv

Implemented context submission PV optimization within GVTg.

GVTg reads the context submission data (elsp_data) directly from the
shared_page without trap cost, and eliminates execlist HW behavior
emulation by not injecting context switch interrupts into the guest
under the PV submission mechanism.

v0: RFC
v1: rebase
v2: rebase
v3: report pv context submission cap and handle VGT_G2V_ELSP_SUBMIT
g2v pv notification.
v4: eliminate execlist HW emulation and don't inject context switch
interrupts to guest under PV submission mechanism.

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/gvt/execlist.c |  6 ++++++
 drivers/gpu/drm/i915/gvt/handlers.c | 32 ++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/gvt/vgpu.c     |  1 +
 3 files changed, 39 insertions(+)

diff --git a/drivers/gpu/drm/i915/gvt/execlist.c b/drivers/gpu/drm/i915/gvt/execlist.c
index 1a93472..e68f1d4b 100644
--- a/drivers/gpu/drm/i915/gvt/execlist.c
+++ b/drivers/gpu/drm/i915/gvt/execlist.c
@@ -382,6 +382,9 @@ static int prepare_execlist_workload(struct intel_vgpu_workload *workload)
 	int ring_id = workload->ring_id;
 	int ret;
 
+	if (VGPU_PVCAP(vgpu, PV_SUBMISSION))
+		return 0;
+
 	if (!workload->emulate_schedule_in)
 		return 0;
 
@@ -429,6 +432,9 @@ static int complete_execlist_workload(struct intel_vgpu_workload *workload)
 		goto out;
 	}
 
+	if (VGPU_PVCAP(vgpu, PV_SUBMISSION))
+		goto out;
+
 	ret = emulate_execlist_ctx_schedule_out(execlist, &workload->ctx_desc);
 out:
 	intel_vgpu_unpin_mm(workload->shadow_mm);
diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
index 2b4c686..cbba77c 100644
--- a/drivers/gpu/drm/i915/gvt/handlers.c
+++ b/drivers/gpu/drm/i915/gvt/handlers.c
@@ -1182,6 +1182,35 @@ static int pvinfo_mmio_read(struct intel_vgpu *vgpu, unsigned int offset,
 	return 0;
 }
 
+static int intel_vgpu_g2v_pv_elsp_submit(struct intel_vgpu *vgpu)
+{
+	struct intel_vgpu_execlist *execlist;
+	u32 ring_id_off;
+	int ring_id;
+	u32 descs_off;
+
+	int ret = -EINVAL;
+
+	if (!VGPU_PVCAP(vgpu, PV_SUBMISSION))
+		return ret;
+
+	ring_id_off = offsetof(struct gvt_shared_page, ring_id);
+	if (intel_gvt_read_shared_page(vgpu, ring_id_off, &ring_id, 4))
+		return ret;
+
+	if (WARN_ON(ring_id < 0 || ring_id >= I915_NUM_ENGINES))
+		return ret;
+
+	execlist = &vgpu->submission.execlist[ring_id];
+
+	descs_off = offsetof(struct gvt_shared_page, descs);
+	if (intel_gvt_read_shared_page(vgpu, descs_off,
+			&execlist->elsp_dwords.data, 8 * EXECLIST_MAX_PORTS))
+		return ret;
+
+	return intel_vgpu_submit_execlist(vgpu, ring_id);
+}
+
 static int handle_g2v_notification(struct intel_vgpu *vgpu, int notification)
 {
 	intel_gvt_gtt_type_t root_entry_type = GTT_TYPE_PPGTT_ROOT_L4_ENTRY;
@@ -1221,6 +1250,9 @@ static int handle_g2v_notification(struct intel_vgpu *vgpu, int notification)
 	case VGT_G2V_PPGTT_L4_CLEAR:
 		ret = intel_vgpu_g2v_pv_ppgtt_clear_4lvl(vgpu, pdps);
 		break;
+	case VGT_G2V_PV_SUBMISSION:
+		ret = intel_vgpu_g2v_pv_elsp_submit(vgpu);
+		break;
 	case VGT_G2V_EXECLIST_CONTEXT_CREATE:
 	case VGT_G2V_EXECLIST_CONTEXT_DESTROY:
 	case 1:	/* Remove this in guest driver. */
diff --git a/drivers/gpu/drm/i915/gvt/vgpu.c b/drivers/gpu/drm/i915/gvt/vgpu.c
index bc7387e..c71aaf6 100644
--- a/drivers/gpu/drm/i915/gvt/vgpu.c
+++ b/drivers/gpu/drm/i915/gvt/vgpu.c
@@ -51,6 +51,7 @@ void populate_pvinfo_page(struct intel_vgpu *vgpu)
 
 	if (!intel_vtd_active())
 		vgpu_vreg_t(vgpu, vgtif_reg(pv_caps)) = PV_PPGTT_UPDATE;
+	vgpu_vreg_t(vgpu, vgtif_reg(pv_caps)) |= PV_SUBMISSION;
 
 	vgpu_vreg_t(vgpu, vgtif_reg(avail_rs.mappable_gmadr.base)) =
 		vgpu_aperture_gmadr_base(vgpu);
-- 
2.7.4


* Re: [PATCH v4 4/8] drm/i915: vgpu context submission pv optimization
  2019-03-29 13:32 ` [PATCH v4 4/8] drm/i915: vgpu context submission " Xiaolin Zhang
  2019-03-29  7:15   ` Chris Wilson
@ 2019-03-29 15:40   ` Chris Wilson
  2019-04-11  5:46     ` Zhang, Xiaolin
  2019-03-29 19:14   ` Chris Wilson
  2 siblings, 1 reply; 17+ messages in thread
From: Chris Wilson @ 2019-03-29 15:40 UTC (permalink / raw)
  To: Xiaolin Zhang, intel-gfx, intel-gvt-dev
  Cc: zhenyu.z.wang, hang.yuan, zhiyuan.lv

Quoting Xiaolin Zhang (2019-03-29 13:32:40)
> diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> index 2f78829..28e8ee0 100644
> --- a/drivers/gpu/drm/i915/i915_irq.c
> +++ b/drivers/gpu/drm/i915/i915_irq.c
> @@ -37,6 +37,7 @@
>  #include "i915_drv.h"
>  #include "i915_trace.h"
>  #include "intel_drv.h"
> +#include "i915_vgpu.h"
>  
>  /**
>   * DOC: interrupt handling
> @@ -1470,6 +1471,7 @@ gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir)
>         if (iir & GT_RENDER_USER_INTERRUPT) {
>                 intel_engine_breadcrumbs_irq(engine);
>                 tasklet |= USES_GUC_SUBMISSION(engine->i915);
> +               tasklet |= USES_PV_SUBMISSION(engine->i915);

We should move this to an engine->flag.

>         }
>  
>         if (tasklet)
> +static void vgpu_pv_set_default_submission(struct intel_engine_cs *engine)
> +{
> +       /*
> +        * We inherit a bunch of functions from execlists that we'd like
> +        * to keep using:
> +        *
> +        *    engine->submit_request = execlists_submit_request;
> +        *    engine->cancel_requests = execlists_cancel_requests;
> +        *    engine->schedule = execlists_schedule;
> +        *
> +        * But we need to override the actual submission backend in order
> +        * to talk to the GVT with PV notification message.
> +        */
> +       intel_execlists_set_default_submission(engine);
> +
> +       engine->execlists.tasklet.func = vgpu_pv_submission_tasklet;

You need to pin the breadcrumbs irq, or it will not fire on every
request.

I'd push for this to live in intel_pv_submission.c or
intel_vgpu_submission.c

> @@ -401,6 +551,12 @@ void intel_vgpu_config_pv_caps(struct drm_i915_private *dev_priv,
>                 ppgtt->vm.insert_entries = gen8_ppgtt_insert_4lvl_pv;
>                 ppgtt->vm.clear_range = gen8_ppgtt_clear_4lvl_pv;
>         }
> +
> +       if (cap == PV_SUBMISSION) {
> +               engine = (struct intel_engine_cs *)data;
> +               engine->set_default_submission = vgpu_pv_set_default_submission;
> +               engine->set_default_submission(engine);
> +       }
>  }

> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index c1b9780..0a66714 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -2352,6 +2352,9 @@ logical_ring_default_vfuncs(struct intel_engine_cs *engine)
>                  */
>         }
>         engine->emit_bb_start = gen8_emit_bb_start;
> +
> +       if (intel_vgpu_active(engine->i915))
> +               intel_vgpu_config_pv_caps(engine->i915, PV_SUBMISSION, engine);

That pair is ugly. Should clean up the engine initialisation so that it
doesn't involve placing a chunk of code in a foreign class.
-Chris

* Re: [PATCH v4 0/8] i915 vgpu PV to improve vgpu performance
  2019-03-29 13:32 [PATCH v4 0/8] i915 vgpu PV to improve vgpu performance Xiaolin Zhang
                   ` (8 preceding siblings ...)
  2019-03-29 13:32 ` [PATCH v4 8/8] drm/i915/gvt: GVTg support context submission " Xiaolin Zhang
@ 2019-03-29 16:05 ` Chris Wilson
  2019-04-11  5:54   ` Zhang, Xiaolin
  9 siblings, 1 reply; 17+ messages in thread
From: Chris Wilson @ 2019-03-29 16:05 UTC (permalink / raw)
  To: Xiaolin Zhang, intel-gfx, intel-gvt-dev
  Cc: zhenyu.z.wang, hang.yuan, zhiyuan.lv

Quoting Xiaolin Zhang (2019-03-29 13:32:36)
> To improve vgpu performance, the guest driver can implement some PV
> optimizations, such as reducing the number of MMIO access traps or
> eliminating certain pieces of HW emulation, to reduce vm exit/vm enter cost.

Where's the CI for this patchset? The lack of interrupts to drive
submission should have shown up, and if not, we need some testcases to
make sure it doesn't happen again. Every time I see a gvt patch, I ask if
we can get some coverage in intel-gfx-ci :)
-Chris

* Re: [PATCH v4 4/8] drm/i915: vgpu context submission pv optimization
  2019-03-29 13:32 ` [PATCH v4 4/8] drm/i915: vgpu context submission " Xiaolin Zhang
  2019-03-29  7:15   ` Chris Wilson
  2019-03-29 15:40   ` Chris Wilson
@ 2019-03-29 19:14   ` Chris Wilson
  2019-04-11  5:49     ` Zhang, Xiaolin
  2 siblings, 1 reply; 17+ messages in thread
From: Chris Wilson @ 2019-03-29 19:14 UTC (permalink / raw)
  To: Xiaolin Zhang, intel-gfx, intel-gvt-dev
  Cc: zhenyu.z.wang, hang.yuan, zhiyuan.lv

Quoting Xiaolin Zhang (2019-03-29 13:32:40)
> +       spin_lock(&engine->i915->vgpu.shared_page_lock);
> +       shared_page->ring_id = engine->id;
> +       for (n = 0; n < execlists_num_ports(execlists); n++)
> +               shared_page->descs[n] = descs[n];
> +
> +       __raw_i915_write32(uncore, vgtif_reg(g2v_notify),
> +                       VGT_G2V_PV_SUBMISSION);

There's no serialisation with the consumer here. The
shared_page->descs[] will be clobbered by a second engine before they
are read by pv. There should be a wait-for-ack here, or separate
channels for each engine. In execlists, we avoid this by not dequeuing
until after we get a response/ack from HW.

Preemption? I guess preemption was turned off, or else you have to worry
about the same engine overwriting the desc[] (if no ack).
-Chris

* Re: [PATCH v4 4/8] drm/i915: vgpu context submission pv optimization
  2019-03-29 15:40   ` Chris Wilson
@ 2019-04-11  5:46     ` Zhang, Xiaolin
  0 siblings, 0 replies; 17+ messages in thread
From: Zhang, Xiaolin @ 2019-04-11  5:46 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx, intel-gvt-dev
  Cc: Wang, Zhenyu Z, Yuan, Hang, Lv, Zhiyuan

On 03/29/2019 11:40 PM, Chris Wilson wrote:
> Quoting Xiaolin Zhang (2019-03-29 13:32:40)
>> diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
>> index 2f78829..28e8ee0 100644
>> --- a/drivers/gpu/drm/i915/i915_irq.c
>> +++ b/drivers/gpu/drm/i915/i915_irq.c
>> @@ -37,6 +37,7 @@
>>  #include "i915_drv.h"
>>  #include "i915_trace.h"
>>  #include "intel_drv.h"
>> +#include "i915_vgpu.h"
>>  
>>  /**
>>   * DOC: interrupt handling
>> @@ -1470,6 +1471,7 @@ gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir)
>>         if (iir & GT_RENDER_USER_INTERRUPT) {
>>                 intel_engine_breadcrumbs_irq(engine);
>>                 tasklet |= USES_GUC_SUBMISSION(engine->i915);
>> +               tasklet |= USES_PV_SUBMISSION(engine->i915);
> We should move this to an engine->flag.
Sure, I will remove this in the next version.
>
>>         }
>>  
>>         if (tasklet)
>> +static void vgpu_pv_set_default_submission(struct intel_engine_cs *engine)
>> +{
>> +       /*
>> +        * We inherit a bunch of functions from execlists that we'd like
>> +        * to keep using:
>> +        *
>> +        *    engine->submit_request = execlists_submit_request;
>> +        *    engine->cancel_requests = execlists_cancel_requests;
>> +        *    engine->schedule = execlists_schedule;
>> +        *
>> +        * But we need to override the actual submission backend in order
>> +        * to talk to the GVT with PV notification message.
>> +        */
>> +       intel_execlists_set_default_submission(engine);
>> +
>> +       engine->execlists.tasklet.func = vgpu_pv_submission_tasklet;
> You need to pin the breadcrumbs irq, or it will not fire on every
> request.
Indeed, I should. Thanks for the great point.
> I'd push for this to live in intel_pv_submission.c or
> intel_vgpu_submission.c
This will create a new file and require a Makefile change. If that kind
of change is OK, I will create one named intel_pv_submission.c.
>> @@ -401,6 +551,12 @@ void intel_vgpu_config_pv_caps(struct drm_i915_private *dev_priv,
>>                 ppgtt->vm.insert_entries = gen8_ppgtt_insert_4lvl_pv;
>>                 ppgtt->vm.clear_range = gen8_ppgtt_clear_4lvl_pv;
>>         }
>> +
>> +       if (cap == PV_SUBMISSION) {
>> +               engine = (struct intel_engine_cs *)data;
>> +               engine->set_default_submission = vgpu_pv_set_default_submission;
>> +               engine->set_default_submission(engine);
>> +       }
>>  }
>> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
>> index c1b9780..0a66714 100644
>> --- a/drivers/gpu/drm/i915/intel_lrc.c
>> +++ b/drivers/gpu/drm/i915/intel_lrc.c
>> @@ -2352,6 +2352,9 @@ logical_ring_default_vfuncs(struct intel_engine_cs *engine)
>>                  */
>>         }
>>         engine->emit_bb_start = gen8_emit_bb_start;
>> +
>> +       if (intel_vgpu_active(engine->i915))
>> +               intel_vgpu_config_pv_caps(engine->i915, PV_SUBMISSION, engine);
> That pair is ugly. Should clean up the engine initialisation so that it
> doesn't involve placing a chunk of code in a foreign class.
> -Chris
>
Yep, will move it to another location.



* Re: [PATCH v4 4/8] drm/i915: vgpu context submission pv optimization
  2019-03-29 19:14   ` Chris Wilson
@ 2019-04-11  5:49     ` Zhang, Xiaolin
  0 siblings, 0 replies; 17+ messages in thread
From: Zhang, Xiaolin @ 2019-04-11  5:49 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx, intel-gvt-dev
  Cc: Wang, Zhenyu Z, Yuan, Hang, Lv, Zhiyuan

On 03/30/2019 03:14 AM, Chris Wilson wrote:
> Quoting Xiaolin Zhang (2019-03-29 13:32:40)
>> +       spin_lock(&engine->i915->vgpu.shared_page_lock);
>> +       shared_page->ring_id = engine->id;
>> +       for (n = 0; n < execlists_num_ports(execlists); n++)
>> +               shared_page->descs[n] = descs[n];
>> +
>> +       __raw_i915_write32(uncore, vgtif_reg(g2v_notify),
>> +                       VGT_G2V_PV_SUBMISSION);
> There's no serialisation with the consumer here. The
> shared_page->descs[] will be clobbered by a second engine before they
> are read by pv. There should be a wait-for-ack here, or separate
> channels for each engine. In execlists, we avoid this by not dequeuing
> until after we get a response/ack from HW.
>
> Preemption? I guess preemption was turned off, or else you have to worry
> about the same engine overwriting the desc[] (if no ack).
> -Chris
>
Chris, thanks for pointing out my flaw here. I will use the per-engine
channel for notification. Preemption is turned off and not supported in
the current version.


* Re: [PATCH v4 0/8] i915 vgpu PV to improve vgpu performance
  2019-03-29 16:05 ` [PATCH v4 0/8] i915 vgpu PV to improve vgpu performance Chris Wilson
@ 2019-04-11  5:54   ` Zhang, Xiaolin
  0 siblings, 0 replies; 17+ messages in thread
From: Zhang, Xiaolin @ 2019-04-11  5:54 UTC (permalink / raw)
  To: Chris Wilson, intel-gfx, intel-gvt-dev
  Cc: Wang, Zhenyu Z, Yuan, Hang, Lv, Zhiyuan

On 03/30/2019 12:05 AM, Chris Wilson wrote:
> Quoting Xiaolin Zhang (2019-03-29 13:32:36)
>> To improve vgpu performance, the guest driver can implement some PV
>> optimizations, such as reducing the number of MMIO access traps or
>> eliminating certain pieces of HW emulation, to reduce vm exit/vm enter cost.
> Where's the CI for this patchset? The lack of interrupts to drive
> submission should have shown up, and if not, we need some testcases to
> make sure it doesn't happen again. Everytime I see a gvt patch, I ask if
> we can get some coverage in intel-gfx-ci :)
> -Chris
>
The CI for this patchset was not generated due to a build failure. I
will rebase the next version onto the tip of the branch to avoid the
build issue.


end of thread, other threads:[~2019-04-11  5:54 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-03-29 13:32 [PATCH v4 0/8] i915 vgpu PV to improve vgpu performance Xiaolin Zhang
2019-03-29  5:00 ` ✗ Fi.CI.BAT: failure for " Patchwork
2019-03-29 13:32 ` [PATCH v4 1/8] drm/i915: introduced vgpu pv capability Xiaolin Zhang
2019-03-29 13:32 ` [PATCH v4 2/8] drm/i915: vgpu shared memory setup for pv optimization Xiaolin Zhang
2019-03-29 13:32 ` [PATCH v4 3/8] drm/i915: vgpu ppgtt update " Xiaolin Zhang
2019-03-29 13:32 ` [PATCH v4 4/8] drm/i915: vgpu context submission " Xiaolin Zhang
2019-03-29  7:15   ` Chris Wilson
2019-03-29 15:40   ` Chris Wilson
2019-04-11  5:46     ` Zhang, Xiaolin
2019-03-29 19:14   ` Chris Wilson
2019-04-11  5:49     ` Zhang, Xiaolin
2019-03-29 13:32 ` [PATCH v4 5/8] drm/i915/gvt: GVTg handle pv_caps PVINFO register Xiaolin Zhang
2019-03-29 13:32 ` [PATCH v4 6/8] drm/i915/gvt: GVTg handle shared_page setup Xiaolin Zhang
2019-03-29 13:32 ` [PATCH v4 7/8] drm/i915/gvt: GVTg support ppgtt pv optimization Xiaolin Zhang
2019-03-29 13:32 ` [PATCH v4 8/8] drm/i915/gvt: GVTg support context submission " Xiaolin Zhang
2019-03-29 16:05 ` [PATCH v4 0/8] i915 vgpu PV to improve vgpu performance Chris Wilson
2019-04-11  5:54   ` Zhang, Xiaolin
