* [RFC PATCH 0/4] Support for out-of-tree hypervisor modules in i915/gvt
       [not found] <4079ce7c26a2d2a3c7e0828ed1ea6008d6e2c805.camel@cyberus-technology.de>
@ 2020-01-09 17:13 ` Julian Stecklina
  2020-01-09 17:13   ` [RFC PATCH 1/4] drm/i915/gvt: make gvt oblivious of kvmgt data structures Julian Stecklina
                     ` (3 more replies)
  0 siblings, 4 replies; 16+ messages in thread
From: Julian Stecklina @ 2020-01-09 17:13 UTC (permalink / raw)
  To: intel-gvt-dev
  Cc: Julian Stecklina, hang.yuan, linux-kernel, dri-devel, zhiyuan.lv

This patch series removes the dependency of the i915/gvt hypervisor
backends on the internals of the i915 driver. Instead, we add a
small public API that hypervisor backends can use.

This enables out-of-tree hypervisor backends for Intel graphics
virtualization and simplifies development. At the same time, it
raises at least a small barrier against pulling more i915 internals
into kvmgt, which is nice in itself.

The first two patches are pretty much general cleanup and could be
merged without the rest.

Any feedback is welcome.

Julian Stecklina (4):
  drm/i915/gvt: make gvt oblivious of kvmgt data structures
  drm/i915/gvt: remove unused vblank_done completion
  drm/i915/gvt: define a public interface to gvt
  drm/i915/gvt: move public gvt headers out into global include

 drivers/gpu/drm/i915/gvt/Makefile             |   2 +-
 drivers/gpu/drm/i915/gvt/debug.h              |   2 +-
 drivers/gpu/drm/i915/gvt/display.c            |  26 ++
 drivers/gpu/drm/i915/gvt/display.h            |  27 --
 drivers/gpu/drm/i915/gvt/gtt.h                |   2 -
 drivers/gpu/drm/i915/gvt/gvt.h                |  65 +--
 drivers/gpu/drm/i915/gvt/gvt_public.c         | 154 +++++++
 drivers/gpu/drm/i915/gvt/kvmgt.c              | 413 ++++++++++--------
 drivers/gpu/drm/i915/gvt/mpt.h                |   3 -
 drivers/gpu/drm/i915/gvt/reg.h                |   2 -
 include/drm/i915_gvt.h                        | 104 +++++
 .../drm/i915_gvt_hypercall.h                  |  13 +-
 12 files changed, 537 insertions(+), 276 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/gvt/gvt_public.c
 create mode 100644 include/drm/i915_gvt.h
 rename drivers/gpu/drm/i915/gvt/hypercall.h => include/drm/i915_gvt_hypercall.h (92%)

-- 
2.24.1


* [RFC PATCH 1/4] drm/i915/gvt: make gvt oblivious of kvmgt data structures
  2020-01-09 17:13 ` [RFC PATCH 0/4] Support for out-of-tree hypervisor modules in i915/gvt Julian Stecklina
@ 2020-01-09 17:13   ` Julian Stecklina
  2020-01-20  6:22     ` Zhenyu Wang
  2020-01-09 17:13   ` [RFC PATCH 2/4] drm/i915/gvt: remove unused vblank_done completion Julian Stecklina
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 16+ messages in thread
From: Julian Stecklina @ 2020-01-09 17:13 UTC (permalink / raw)
  To: intel-gvt-dev
  Cc: Julian Stecklina, linux-kernel, hang.yuan, dri-devel, zhiyuan.lv

Instead of defining KVMGT per-device state in struct intel_vgpu
directly, add an indirection. This makes the GVT code oblivious of
what state KVMGT needs to keep.

The intention here is to eventually make it possible to build
hypervisor backends for the mediator without having to touch the
mediator itself. This is a first step.
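
The resulting access pattern, excerpted from the diff below: KVMGT
state now lives behind an opaque pointer and is recovered through a
typed accessor that is private to kvmgt.c:

	/* Old: state embedded directly in struct intel_vgpu. */
	mutex_lock(&vgpu->vdev.cache_lock);

	/* New: opaque vgpu->vdev pointer, cast back via kvmgt_vdev(). */
	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);

	mutex_lock(&vdev->cache_lock);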

Cc: Zhenyu Wang <zhenyuw@linux.intel.com>

Signed-off-by: Julian Stecklina <julian.stecklina@cyberus-technology.de>
---
 drivers/gpu/drm/i915/gvt/gvt.h   |  32 +---
 drivers/gpu/drm/i915/gvt/kvmgt.c | 287 +++++++++++++++++++------------
 2 files changed, 184 insertions(+), 135 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
index 0081b051d3e0..2604739e5680 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.h
+++ b/drivers/gpu/drm/i915/gvt/gvt.h
@@ -196,31 +196,8 @@ struct intel_vgpu {
 
 	struct dentry *debugfs;
 
-#if IS_ENABLED(CONFIG_DRM_I915_GVT_KVMGT)
-	struct {
-		struct mdev_device *mdev;
-		struct vfio_region *region;
-		int num_regions;
-		struct eventfd_ctx *intx_trigger;
-		struct eventfd_ctx *msi_trigger;
-
-		/*
-		 * Two caches are used to avoid mapping duplicated pages (eg.
-		 * scratch pages). This help to reduce dma setup overhead.
-		 */
-		struct rb_root gfn_cache;
-		struct rb_root dma_addr_cache;
-		unsigned long nr_cache_entries;
-		struct mutex cache_lock;
-
-		struct notifier_block iommu_notifier;
-		struct notifier_block group_notifier;
-		struct kvm *kvm;
-		struct work_struct release_work;
-		atomic_t released;
-		struct vfio_device *vfio_device;
-	} vdev;
-#endif
+	/* Hypervisor-specific device state. */
+	void *vdev;
 
 	struct list_head dmabuf_obj_list_head;
 	struct mutex dmabuf_lock;
@@ -231,6 +208,11 @@ struct intel_vgpu {
 	u32 scan_nonprivbb;
 };
 
+static inline void *intel_vgpu_vdev(struct intel_vgpu *vgpu)
+{
+	return vgpu->vdev;
+}
+
 /* validating GM healthy status*/
 #define vgpu_is_vm_unhealthy(ret_val) \
 	(((ret_val) == -EBADRQC) || ((ret_val) == -EFAULT))
diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
index bd79a9718cc7..d725a4fb94b9 100644
--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
+++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
@@ -108,6 +108,36 @@ struct gvt_dma {
 	struct kref ref;
 };
 
+struct kvmgt_vdev {
+	struct intel_vgpu *vgpu;
+	struct mdev_device *mdev;
+	struct vfio_region *region;
+	int num_regions;
+	struct eventfd_ctx *intx_trigger;
+	struct eventfd_ctx *msi_trigger;
+
+	/*
+	 * Two caches are used to avoid mapping duplicated pages (e.g.
+	 * scratch pages). This helps to reduce DMA setup overhead.
+	 */
+	struct rb_root gfn_cache;
+	struct rb_root dma_addr_cache;
+	unsigned long nr_cache_entries;
+	struct mutex cache_lock;
+
+	struct notifier_block iommu_notifier;
+	struct notifier_block group_notifier;
+	struct kvm *kvm;
+	struct work_struct release_work;
+	atomic_t released;
+	struct vfio_device *vfio_device;
+};
+
+static inline struct kvmgt_vdev *kvmgt_vdev(struct intel_vgpu *vgpu)
+{
+	return intel_vgpu_vdev(vgpu);
+}
+
 static inline bool handle_valid(unsigned long handle)
 {
 	return !!(handle & ~0xff);
@@ -129,7 +159,7 @@ static void gvt_unpin_guest_page(struct intel_vgpu *vgpu, unsigned long gfn,
 	for (npage = 0; npage < total_pages; npage++) {
 		unsigned long cur_gfn = gfn + npage;
 
-		ret = vfio_unpin_pages(mdev_dev(vgpu->vdev.mdev), &cur_gfn, 1);
+		ret = vfio_unpin_pages(mdev_dev(kvmgt_vdev(vgpu)->mdev), &cur_gfn, 1);
 		WARN_ON(ret != 1);
 	}
 }
@@ -152,7 +182,7 @@ static int gvt_pin_guest_page(struct intel_vgpu *vgpu, unsigned long gfn,
 		unsigned long cur_gfn = gfn + npage;
 		unsigned long pfn;
 
-		ret = vfio_pin_pages(mdev_dev(vgpu->vdev.mdev), &cur_gfn, 1,
+		ret = vfio_pin_pages(mdev_dev(kvmgt_vdev(vgpu)->mdev), &cur_gfn, 1,
 				     IOMMU_READ | IOMMU_WRITE, &pfn);
 		if (ret != 1) {
 			gvt_vgpu_err("vfio_pin_pages failed for gfn 0x%lx, ret %d\n",
@@ -219,7 +249,7 @@ static void gvt_dma_unmap_page(struct intel_vgpu *vgpu, unsigned long gfn,
 static struct gvt_dma *__gvt_cache_find_dma_addr(struct intel_vgpu *vgpu,
 		dma_addr_t dma_addr)
 {
-	struct rb_node *node = vgpu->vdev.dma_addr_cache.rb_node;
+	struct rb_node *node = kvmgt_vdev(vgpu)->dma_addr_cache.rb_node;
 	struct gvt_dma *itr;
 
 	while (node) {
@@ -237,7 +267,7 @@ static struct gvt_dma *__gvt_cache_find_dma_addr(struct intel_vgpu *vgpu,
 
 static struct gvt_dma *__gvt_cache_find_gfn(struct intel_vgpu *vgpu, gfn_t gfn)
 {
-	struct rb_node *node = vgpu->vdev.gfn_cache.rb_node;
+	struct rb_node *node = kvmgt_vdev(vgpu)->gfn_cache.rb_node;
 	struct gvt_dma *itr;
 
 	while (node) {
@@ -258,6 +288,7 @@ static int __gvt_cache_add(struct intel_vgpu *vgpu, gfn_t gfn,
 {
 	struct gvt_dma *new, *itr;
 	struct rb_node **link, *parent = NULL;
+	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
 
 	new = kzalloc(sizeof(struct gvt_dma), GFP_KERNEL);
 	if (!new)
@@ -270,7 +301,7 @@ static int __gvt_cache_add(struct intel_vgpu *vgpu, gfn_t gfn,
 	kref_init(&new->ref);
 
 	/* gfn_cache maps gfn to struct gvt_dma. */
-	link = &vgpu->vdev.gfn_cache.rb_node;
+	link = &vdev->gfn_cache.rb_node;
 	while (*link) {
 		parent = *link;
 		itr = rb_entry(parent, struct gvt_dma, gfn_node);
@@ -281,11 +312,11 @@ static int __gvt_cache_add(struct intel_vgpu *vgpu, gfn_t gfn,
 			link = &parent->rb_right;
 	}
 	rb_link_node(&new->gfn_node, parent, link);
-	rb_insert_color(&new->gfn_node, &vgpu->vdev.gfn_cache);
+	rb_insert_color(&new->gfn_node, &vdev->gfn_cache);
 
 	/* dma_addr_cache maps dma addr to struct gvt_dma. */
 	parent = NULL;
-	link = &vgpu->vdev.dma_addr_cache.rb_node;
+	link = &vdev->dma_addr_cache.rb_node;
 	while (*link) {
 		parent = *link;
 		itr = rb_entry(parent, struct gvt_dma, dma_addr_node);
@@ -296,46 +327,51 @@ static int __gvt_cache_add(struct intel_vgpu *vgpu, gfn_t gfn,
 			link = &parent->rb_right;
 	}
 	rb_link_node(&new->dma_addr_node, parent, link);
-	rb_insert_color(&new->dma_addr_node, &vgpu->vdev.dma_addr_cache);
+	rb_insert_color(&new->dma_addr_node, &vdev->dma_addr_cache);
 
-	vgpu->vdev.nr_cache_entries++;
+	vdev->nr_cache_entries++;
 	return 0;
 }
 
 static void __gvt_cache_remove_entry(struct intel_vgpu *vgpu,
 				struct gvt_dma *entry)
 {
-	rb_erase(&entry->gfn_node, &vgpu->vdev.gfn_cache);
-	rb_erase(&entry->dma_addr_node, &vgpu->vdev.dma_addr_cache);
+	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
+
+	rb_erase(&entry->gfn_node, &vdev->gfn_cache);
+	rb_erase(&entry->dma_addr_node, &vdev->dma_addr_cache);
 	kfree(entry);
-	vgpu->vdev.nr_cache_entries--;
+	vdev->nr_cache_entries--;
 }
 
 static void gvt_cache_destroy(struct intel_vgpu *vgpu)
 {
 	struct gvt_dma *dma;
 	struct rb_node *node = NULL;
+	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
 
 	for (;;) {
-		mutex_lock(&vgpu->vdev.cache_lock);
-		node = rb_first(&vgpu->vdev.gfn_cache);
+		mutex_lock(&vdev->cache_lock);
+		node = rb_first(&vdev->gfn_cache);
 		if (!node) {
-			mutex_unlock(&vgpu->vdev.cache_lock);
+			mutex_unlock(&vdev->cache_lock);
 			break;
 		}
 		dma = rb_entry(node, struct gvt_dma, gfn_node);
 		gvt_dma_unmap_page(vgpu, dma->gfn, dma->dma_addr, dma->size);
 		__gvt_cache_remove_entry(vgpu, dma);
-		mutex_unlock(&vgpu->vdev.cache_lock);
+		mutex_unlock(&vdev->cache_lock);
 	}
 }
 
 static void gvt_cache_init(struct intel_vgpu *vgpu)
 {
-	vgpu->vdev.gfn_cache = RB_ROOT;
-	vgpu->vdev.dma_addr_cache = RB_ROOT;
-	vgpu->vdev.nr_cache_entries = 0;
-	mutex_init(&vgpu->vdev.cache_lock);
+	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
+
+	vdev->gfn_cache = RB_ROOT;
+	vdev->dma_addr_cache = RB_ROOT;
+	vdev->nr_cache_entries = 0;
+	mutex_init(&vdev->cache_lock);
 }
 
 static void kvmgt_protect_table_init(struct kvmgt_guest_info *info)
@@ -409,16 +445,18 @@ static void kvmgt_protect_table_del(struct kvmgt_guest_info *info,
 static size_t intel_vgpu_reg_rw_opregion(struct intel_vgpu *vgpu, char *buf,
 		size_t count, loff_t *ppos, bool iswrite)
 {
+	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
 	unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) -
 			VFIO_PCI_NUM_REGIONS;
-	void *base = vgpu->vdev.region[i].data;
+	void *base = vdev->region[i].data;
 	loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK;
 
-	if (pos >= vgpu->vdev.region[i].size || iswrite) {
+
+	if (pos >= vdev->region[i].size || iswrite) {
 		gvt_vgpu_err("invalid op or offset for Intel vgpu OpRegion\n");
 		return -EINVAL;
 	}
-	count = min(count, (size_t)(vgpu->vdev.region[i].size - pos));
+	count = min(count, (size_t)(vdev->region[i].size - pos));
 	memcpy(buf, base + pos, count);
 
 	return count;
@@ -512,7 +550,7 @@ static size_t intel_vgpu_reg_rw_edid(struct intel_vgpu *vgpu, char *buf,
 	unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) -
 			VFIO_PCI_NUM_REGIONS;
 	struct vfio_edid_region *region =
-		(struct vfio_edid_region *)vgpu->vdev.region[i].data;
+		(struct vfio_edid_region *)kvmgt_vdev(vgpu)->region[i].data;
 	loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK;
 
 	if (pos < region->vfio_edid_regs.edid_offset) {
@@ -544,32 +582,34 @@ static int intel_vgpu_register_reg(struct intel_vgpu *vgpu,
 		const struct intel_vgpu_regops *ops,
 		size_t size, u32 flags, void *data)
 {
+	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
 	struct vfio_region *region;
 
-	region = krealloc(vgpu->vdev.region,
-			(vgpu->vdev.num_regions + 1) * sizeof(*region),
+	region = krealloc(vdev->region,
+			(vdev->num_regions + 1) * sizeof(*region),
 			GFP_KERNEL);
 	if (!region)
 		return -ENOMEM;
 
-	vgpu->vdev.region = region;
-	vgpu->vdev.region[vgpu->vdev.num_regions].type = type;
-	vgpu->vdev.region[vgpu->vdev.num_regions].subtype = subtype;
-	vgpu->vdev.region[vgpu->vdev.num_regions].ops = ops;
-	vgpu->vdev.region[vgpu->vdev.num_regions].size = size;
-	vgpu->vdev.region[vgpu->vdev.num_regions].flags = flags;
-	vgpu->vdev.region[vgpu->vdev.num_regions].data = data;
-	vgpu->vdev.num_regions++;
+	vdev->region = region;
+	vdev->region[vdev->num_regions].type = type;
+	vdev->region[vdev->num_regions].subtype = subtype;
+	vdev->region[vdev->num_regions].ops = ops;
+	vdev->region[vdev->num_regions].size = size;
+	vdev->region[vdev->num_regions].flags = flags;
+	vdev->region[vdev->num_regions].data = data;
+	vdev->num_regions++;
 	return 0;
 }
 
 static int kvmgt_get_vfio_device(void *p_vgpu)
 {
 	struct intel_vgpu *vgpu = (struct intel_vgpu *)p_vgpu;
+	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
 
-	vgpu->vdev.vfio_device = vfio_device_get_from_dev(
-		mdev_dev(vgpu->vdev.mdev));
-	if (!vgpu->vdev.vfio_device) {
+	vdev->vfio_device = vfio_device_get_from_dev(
+		mdev_dev(vdev->mdev));
+	if (!vdev->vfio_device) {
 		gvt_vgpu_err("failed to get vfio device\n");
 		return -ENODEV;
 	}
@@ -637,10 +677,12 @@ static int kvmgt_set_edid(void *p_vgpu, int port_num)
 
 static void kvmgt_put_vfio_device(void *vgpu)
 {
-	if (WARN_ON(!((struct intel_vgpu *)vgpu)->vdev.vfio_device))
+	struct kvmgt_vdev *vdev = kvmgt_vdev((struct intel_vgpu *)vgpu);
+
+	if (WARN_ON(!vdev->vfio_device))
 		return;
 
-	vfio_device_put(((struct intel_vgpu *)vgpu)->vdev.vfio_device);
+	vfio_device_put(vdev->vfio_device);
 }
 
 static int intel_vgpu_create(struct kobject *kobj, struct mdev_device *mdev)
@@ -669,9 +711,9 @@ static int intel_vgpu_create(struct kobject *kobj, struct mdev_device *mdev)
 		goto out;
 	}
 
-	INIT_WORK(&vgpu->vdev.release_work, intel_vgpu_release_work);
+	INIT_WORK(&kvmgt_vdev(vgpu)->release_work, intel_vgpu_release_work);
 
-	vgpu->vdev.mdev = mdev;
+	kvmgt_vdev(vgpu)->mdev = mdev;
 	mdev_set_drvdata(mdev, vgpu);
 
 	gvt_dbg_core("intel_vgpu_create succeeded for mdev: %s\n",
@@ -696,9 +738,10 @@ static int intel_vgpu_remove(struct mdev_device *mdev)
 static int intel_vgpu_iommu_notifier(struct notifier_block *nb,
 				     unsigned long action, void *data)
 {
-	struct intel_vgpu *vgpu = container_of(nb,
-					struct intel_vgpu,
-					vdev.iommu_notifier);
+	struct kvmgt_vdev *vdev = container_of(nb,
+					       struct kvmgt_vdev,
+					       iommu_notifier);
+	struct intel_vgpu *vgpu = vdev->vgpu;
 
 	if (action == VFIO_IOMMU_NOTIFY_DMA_UNMAP) {
 		struct vfio_iommu_type1_dma_unmap *unmap = data;
@@ -708,7 +751,7 @@ static int intel_vgpu_iommu_notifier(struct notifier_block *nb,
 		iov_pfn = unmap->iova >> PAGE_SHIFT;
 		end_iov_pfn = iov_pfn + unmap->size / PAGE_SIZE;
 
-		mutex_lock(&vgpu->vdev.cache_lock);
+		mutex_lock(&vdev->cache_lock);
 		for (; iov_pfn < end_iov_pfn; iov_pfn++) {
 			entry = __gvt_cache_find_gfn(vgpu, iov_pfn);
 			if (!entry)
@@ -718,7 +761,7 @@ static int intel_vgpu_iommu_notifier(struct notifier_block *nb,
 					   entry->size);
 			__gvt_cache_remove_entry(vgpu, entry);
 		}
-		mutex_unlock(&vgpu->vdev.cache_lock);
+		mutex_unlock(&vdev->cache_lock);
 	}
 
 	return NOTIFY_OK;
@@ -727,16 +770,16 @@ static int intel_vgpu_iommu_notifier(struct notifier_block *nb,
 static int intel_vgpu_group_notifier(struct notifier_block *nb,
 				     unsigned long action, void *data)
 {
-	struct intel_vgpu *vgpu = container_of(nb,
-					struct intel_vgpu,
-					vdev.group_notifier);
+	struct kvmgt_vdev *vdev = container_of(nb,
+					       struct kvmgt_vdev,
+					       group_notifier);
 
 	/* the only action we care about */
 	if (action == VFIO_GROUP_NOTIFY_SET_KVM) {
-		vgpu->vdev.kvm = data;
+		vdev->kvm = data;
 
 		if (!data)
-			schedule_work(&vgpu->vdev.release_work);
+			schedule_work(&vdev->release_work);
 	}
 
 	return NOTIFY_OK;
@@ -745,15 +788,16 @@ static int intel_vgpu_group_notifier(struct notifier_block *nb,
 static int intel_vgpu_open(struct mdev_device *mdev)
 {
 	struct intel_vgpu *vgpu = mdev_get_drvdata(mdev);
+	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
 	unsigned long events;
 	int ret;
 
-	vgpu->vdev.iommu_notifier.notifier_call = intel_vgpu_iommu_notifier;
-	vgpu->vdev.group_notifier.notifier_call = intel_vgpu_group_notifier;
+	vdev->iommu_notifier.notifier_call = intel_vgpu_iommu_notifier;
+	vdev->group_notifier.notifier_call = intel_vgpu_group_notifier;
 
 	events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
 	ret = vfio_register_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY, &events,
-				&vgpu->vdev.iommu_notifier);
+				&vdev->iommu_notifier);
 	if (ret != 0) {
 		gvt_vgpu_err("vfio_register_notifier for iommu failed: %d\n",
 			ret);
@@ -762,7 +806,7 @@ static int intel_vgpu_open(struct mdev_device *mdev)
 
 	events = VFIO_GROUP_NOTIFY_SET_KVM;
 	ret = vfio_register_notifier(mdev_dev(mdev), VFIO_GROUP_NOTIFY, &events,
-				&vgpu->vdev.group_notifier);
+				&vdev->group_notifier);
 	if (ret != 0) {
 		gvt_vgpu_err("vfio_register_notifier for group failed: %d\n",
 			ret);
@@ -781,50 +825,52 @@ static int intel_vgpu_open(struct mdev_device *mdev)
 
 	intel_gvt_ops->vgpu_activate(vgpu);
 
-	atomic_set(&vgpu->vdev.released, 0);
+	atomic_set(&vdev->released, 0);
 	return ret;
 
 undo_group:
 	vfio_unregister_notifier(mdev_dev(mdev), VFIO_GROUP_NOTIFY,
-					&vgpu->vdev.group_notifier);
+					&vdev->group_notifier);
 
 undo_iommu:
 	vfio_unregister_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY,
-					&vgpu->vdev.iommu_notifier);
+					&vdev->iommu_notifier);
 out:
 	return ret;
 }
 
 static void intel_vgpu_release_msi_eventfd_ctx(struct intel_vgpu *vgpu)
 {
+	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
 	struct eventfd_ctx *trigger;
 
-	trigger = vgpu->vdev.msi_trigger;
+	trigger = vdev->msi_trigger;
 	if (trigger) {
 		eventfd_ctx_put(trigger);
-		vgpu->vdev.msi_trigger = NULL;
+		vdev->msi_trigger = NULL;
 	}
 }
 
 static void __intel_vgpu_release(struct intel_vgpu *vgpu)
 {
+	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
 	struct kvmgt_guest_info *info;
 	int ret;
 
 	if (!handle_valid(vgpu->handle))
 		return;
 
-	if (atomic_cmpxchg(&vgpu->vdev.released, 0, 1))
+	if (atomic_cmpxchg(&vdev->released, 0, 1))
 		return;
 
 	intel_gvt_ops->vgpu_release(vgpu);
 
-	ret = vfio_unregister_notifier(mdev_dev(vgpu->vdev.mdev), VFIO_IOMMU_NOTIFY,
-					&vgpu->vdev.iommu_notifier);
+	ret = vfio_unregister_notifier(mdev_dev(vdev->mdev), VFIO_IOMMU_NOTIFY,
+					&vdev->iommu_notifier);
 	WARN(ret, "vfio_unregister_notifier for iommu failed: %d\n", ret);
 
-	ret = vfio_unregister_notifier(mdev_dev(vgpu->vdev.mdev), VFIO_GROUP_NOTIFY,
-					&vgpu->vdev.group_notifier);
+	ret = vfio_unregister_notifier(mdev_dev(vdev->mdev), VFIO_GROUP_NOTIFY,
+					&vdev->group_notifier);
 	WARN(ret, "vfio_unregister_notifier for group failed: %d\n", ret);
 
 	/* dereference module reference taken at open */
@@ -835,7 +881,7 @@ static void __intel_vgpu_release(struct intel_vgpu *vgpu)
 
 	intel_vgpu_release_msi_eventfd_ctx(vgpu);
 
-	vgpu->vdev.kvm = NULL;
+	vdev->kvm = NULL;
 	vgpu->handle = 0;
 }
 
@@ -848,10 +894,10 @@ static void intel_vgpu_release(struct mdev_device *mdev)
 
 static void intel_vgpu_release_work(struct work_struct *work)
 {
-	struct intel_vgpu *vgpu = container_of(work, struct intel_vgpu,
-					vdev.release_work);
+	struct kvmgt_vdev *vdev = container_of(work, struct kvmgt_vdev,
+					       release_work);
 
-	__intel_vgpu_release(vgpu);
+	__intel_vgpu_release(vdev->vgpu);
 }
 
 static u64 intel_vgpu_get_bar_addr(struct intel_vgpu *vgpu, int bar)
@@ -933,12 +979,13 @@ static ssize_t intel_vgpu_rw(struct mdev_device *mdev, char *buf,
 			size_t count, loff_t *ppos, bool is_write)
 {
 	struct intel_vgpu *vgpu = mdev_get_drvdata(mdev);
+	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
 	unsigned int index = VFIO_PCI_OFFSET_TO_INDEX(*ppos);
 	u64 pos = *ppos & VFIO_PCI_OFFSET_MASK;
 	int ret = -EINVAL;
 
 
-	if (index >= VFIO_PCI_NUM_REGIONS + vgpu->vdev.num_regions) {
+	if (index >= VFIO_PCI_NUM_REGIONS + vdev->num_regions) {
 		gvt_vgpu_err("invalid index: %u\n", index);
 		return -EINVAL;
 	}
@@ -967,11 +1014,11 @@ static ssize_t intel_vgpu_rw(struct mdev_device *mdev, char *buf,
 	case VFIO_PCI_ROM_REGION_INDEX:
 		break;
 	default:
-		if (index >= VFIO_PCI_NUM_REGIONS + vgpu->vdev.num_regions)
+		if (index >= VFIO_PCI_NUM_REGIONS + vdev->num_regions)
 			return -EINVAL;
 
 		index -= VFIO_PCI_NUM_REGIONS;
-		return vgpu->vdev.region[index].ops->rw(vgpu, buf, count,
+		return vdev->region[index].ops->rw(vgpu, buf, count,
 				ppos, is_write);
 	}
 
@@ -1224,7 +1271,7 @@ static int intel_vgpu_set_msi_trigger(struct intel_vgpu *vgpu,
 			gvt_vgpu_err("eventfd_ctx_fdget failed\n");
 			return PTR_ERR(trigger);
 		}
-		vgpu->vdev.msi_trigger = trigger;
+		kvmgt_vdev(vgpu)->msi_trigger = trigger;
 	} else if ((flags & VFIO_IRQ_SET_DATA_NONE) && !count)
 		intel_vgpu_release_msi_eventfd_ctx(vgpu);
 
@@ -1276,6 +1323,7 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
 			     unsigned long arg)
 {
 	struct intel_vgpu *vgpu = mdev_get_drvdata(mdev);
+	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
 	unsigned long minsz;
 
 	gvt_dbg_core("vgpu%d ioctl, cmd: %d\n", vgpu->id, cmd);
@@ -1294,7 +1342,7 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
 		info.flags = VFIO_DEVICE_FLAGS_PCI;
 		info.flags |= VFIO_DEVICE_FLAGS_RESET;
 		info.num_regions = VFIO_PCI_NUM_REGIONS +
-				vgpu->vdev.num_regions;
+				vdev->num_regions;
 		info.num_irqs = VFIO_PCI_NUM_IRQS;
 
 		return copy_to_user((void __user *)arg, &info, minsz) ?
@@ -1385,22 +1433,22 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
 					.header.version = 1 };
 
 				if (info.index >= VFIO_PCI_NUM_REGIONS +
-						vgpu->vdev.num_regions)
+						vdev->num_regions)
 					return -EINVAL;
 				info.index =
 					array_index_nospec(info.index,
 							VFIO_PCI_NUM_REGIONS +
-							vgpu->vdev.num_regions);
+							vdev->num_regions);
 
 				i = info.index - VFIO_PCI_NUM_REGIONS;
 
 				info.offset =
 					VFIO_PCI_INDEX_TO_OFFSET(info.index);
-				info.size = vgpu->vdev.region[i].size;
-				info.flags = vgpu->vdev.region[i].flags;
+				info.size = vdev->region[i].size;
+				info.flags = vdev->region[i].flags;
 
-				cap_type.type = vgpu->vdev.region[i].type;
-				cap_type.subtype = vgpu->vdev.region[i].subtype;
+				cap_type.type = vdev->region[i].type;
+				cap_type.subtype = vdev->region[i].subtype;
 
 				ret = vfio_info_add_capability(&caps,
 							&cap_type.header,
@@ -1740,13 +1788,15 @@ static int kvmgt_guest_init(struct mdev_device *mdev)
 {
 	struct kvmgt_guest_info *info;
 	struct intel_vgpu *vgpu;
+	struct kvmgt_vdev *vdev;
 	struct kvm *kvm;
 
 	vgpu = mdev_get_drvdata(mdev);
 	if (handle_valid(vgpu->handle))
 		return -EEXIST;
 
-	kvm = vgpu->vdev.kvm;
+	vdev = kvmgt_vdev(vgpu);
+	kvm = vdev->kvm;
 	if (!kvm || kvm->mm != current->mm) {
 		gvt_vgpu_err("KVM is required to use Intel vGPU\n");
 		return -ESRCH;
@@ -1776,7 +1826,7 @@ static int kvmgt_guest_init(struct mdev_device *mdev)
 	info->debugfs_cache_entries = debugfs_create_ulong(
 						"kvmgt_nr_cache_entries",
 						0444, vgpu->debugfs,
-						&vgpu->vdev.nr_cache_entries);
+						&vdev->nr_cache_entries);
 	return 0;
 }
 
@@ -1793,9 +1843,17 @@ static bool kvmgt_guest_exit(struct kvmgt_guest_info *info)
 	return true;
 }
 
-static int kvmgt_attach_vgpu(void *vgpu, unsigned long *handle)
+static int kvmgt_attach_vgpu(void *p_vgpu, unsigned long *handle)
 {
-	/* nothing to do here */
+	struct intel_vgpu *vgpu = (struct intel_vgpu *)p_vgpu;
+
+	vgpu->vdev = kzalloc(sizeof(struct kvmgt_vdev), GFP_KERNEL);
+
+	if (!vgpu->vdev)
+		return -ENOMEM;
+
+	kvmgt_vdev(vgpu)->vgpu = vgpu;
+
 	return 0;
 }
 
@@ -1803,29 +1861,34 @@ static void kvmgt_detach_vgpu(void *p_vgpu)
 {
 	int i;
 	struct intel_vgpu *vgpu = (struct intel_vgpu *)p_vgpu;
+	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
 
-	if (!vgpu->vdev.region)
+	if (!vdev->region)
 		return;
 
-	for (i = 0; i < vgpu->vdev.num_regions; i++)
-		if (vgpu->vdev.region[i].ops->release)
-			vgpu->vdev.region[i].ops->release(vgpu,
-					&vgpu->vdev.region[i]);
-	vgpu->vdev.num_regions = 0;
-	kfree(vgpu->vdev.region);
-	vgpu->vdev.region = NULL;
+	for (i = 0; i < vdev->num_regions; i++)
+		if (vdev->region[i].ops->release)
+			vdev->region[i].ops->release(vgpu,
+					&vdev->region[i]);
+	vdev->num_regions = 0;
+	kfree(vdev->region);
+	vdev->region = NULL;
+
+	kfree(vdev);
 }
 
 static int kvmgt_inject_msi(unsigned long handle, u32 addr, u16 data)
 {
 	struct kvmgt_guest_info *info;
 	struct intel_vgpu *vgpu;
+	struct kvmgt_vdev *vdev;
 
 	if (!handle_valid(handle))
 		return -ESRCH;
 
 	info = (struct kvmgt_guest_info *)handle;
 	vgpu = info->vgpu;
+	vdev = kvmgt_vdev(vgpu);
 
 	/*
 	 * When guest is poweroff, msi_trigger is set to NULL, but vgpu's
@@ -1836,10 +1899,10 @@ static int kvmgt_inject_msi(unsigned long handle, u32 addr, u16 data)
 	 * enabled by guest. so if msi_trigger is null, success is still
 	 * returned and don't inject interrupt into guest.
 	 */
-	if (vgpu->vdev.msi_trigger == NULL)
+	if (vdev->msi_trigger == NULL)
 		return 0;
 
-	if (eventfd_signal(vgpu->vdev.msi_trigger, 1) == 1)
+	if (eventfd_signal(vdev->msi_trigger, 1) == 1)
 		return 0;
 
 	return -EFAULT;
@@ -1865,26 +1928,26 @@ static unsigned long kvmgt_gfn_to_pfn(unsigned long handle, unsigned long gfn)
 static int kvmgt_dma_map_guest_page(unsigned long handle, unsigned long gfn,
 		unsigned long size, dma_addr_t *dma_addr)
 {
-	struct kvmgt_guest_info *info;
 	struct intel_vgpu *vgpu;
+	struct kvmgt_vdev *vdev;
 	struct gvt_dma *entry;
 	int ret;
 
 	if (!handle_valid(handle))
 		return -EINVAL;
 
-	info = (struct kvmgt_guest_info *)handle;
-	vgpu = info->vgpu;
+	vgpu = ((struct kvmgt_guest_info *)handle)->vgpu;
+	vdev = kvmgt_vdev(vgpu);
 
-	mutex_lock(&info->vgpu->vdev.cache_lock);
+	mutex_lock(&vdev->cache_lock);
 
-	entry = __gvt_cache_find_gfn(info->vgpu, gfn);
+	entry = __gvt_cache_find_gfn(vgpu, gfn);
 	if (!entry) {
 		ret = gvt_dma_map_page(vgpu, gfn, dma_addr, size);
 		if (ret)
 			goto err_unlock;
 
-		ret = __gvt_cache_add(info->vgpu, gfn, *dma_addr, size);
+		ret = __gvt_cache_add(vgpu, gfn, *dma_addr, size);
 		if (ret)
 			goto err_unmap;
 	} else if (entry->size != size) {
@@ -1896,7 +1959,7 @@ static int kvmgt_dma_map_guest_page(unsigned long handle, unsigned long gfn,
 		if (ret)
 			goto err_unlock;
 
-		ret = __gvt_cache_add(info->vgpu, gfn, *dma_addr, size);
+		ret = __gvt_cache_add(vgpu, gfn, *dma_addr, size);
 		if (ret)
 			goto err_unmap;
 	} else {
@@ -1904,19 +1967,20 @@ static int kvmgt_dma_map_guest_page(unsigned long handle, unsigned long gfn,
 		*dma_addr = entry->dma_addr;
 	}
 
-	mutex_unlock(&info->vgpu->vdev.cache_lock);
+	mutex_unlock(&vdev->cache_lock);
 	return 0;
 
 err_unmap:
 	gvt_dma_unmap_page(vgpu, gfn, *dma_addr, size);
 err_unlock:
-	mutex_unlock(&info->vgpu->vdev.cache_lock);
+	mutex_unlock(&vdev->cache_lock);
 	return ret;
 }
 
 static int kvmgt_dma_pin_guest_page(unsigned long handle, dma_addr_t dma_addr)
 {
 	struct kvmgt_guest_info *info;
+	struct kvmgt_vdev *vdev;
 	struct gvt_dma *entry;
 	int ret = 0;
 
@@ -1924,14 +1988,15 @@ static int kvmgt_dma_pin_guest_page(unsigned long handle, dma_addr_t dma_addr)
 		return -ENODEV;
 
 	info = (struct kvmgt_guest_info *)handle;
+	vdev = kvmgt_vdev(info->vgpu);
 
-	mutex_lock(&info->vgpu->vdev.cache_lock);
+	mutex_lock(&vdev->cache_lock);
 	entry = __gvt_cache_find_dma_addr(info->vgpu, dma_addr);
 	if (entry)
 		kref_get(&entry->ref);
 	else
 		ret = -ENOMEM;
-	mutex_unlock(&info->vgpu->vdev.cache_lock);
+	mutex_unlock(&vdev->cache_lock);
 
 	return ret;
 }
@@ -1947,19 +2012,21 @@ static void __gvt_dma_release(struct kref *ref)
 
 static void kvmgt_dma_unmap_guest_page(unsigned long handle, dma_addr_t dma_addr)
 {
-	struct kvmgt_guest_info *info;
+	struct intel_vgpu *vgpu;
+	struct kvmgt_vdev *vdev;
 	struct gvt_dma *entry;
 
 	if (!handle_valid(handle))
 		return;
 
-	info = (struct kvmgt_guest_info *)handle;
+	vgpu = ((struct kvmgt_guest_info *)handle)->vgpu;
+	vdev = kvmgt_vdev(vgpu);
 
-	mutex_lock(&info->vgpu->vdev.cache_lock);
-	entry = __gvt_cache_find_dma_addr(info->vgpu, dma_addr);
+	mutex_lock(&vdev->cache_lock);
+	entry = __gvt_cache_find_dma_addr(vgpu, dma_addr);
 	if (entry)
 		kref_put(&entry->ref, __gvt_dma_release);
-	mutex_unlock(&info->vgpu->vdev.cache_lock);
+	mutex_unlock(&vdev->cache_lock);
 }
 
 static int kvmgt_rw_gpa(unsigned long handle, unsigned long gpa,
-- 
2.24.1


* [RFC PATCH 2/4] drm/i915/gvt: remove unused vblank_done completion
  2020-01-09 17:13 ` [RFC PATCH 0/4] Support for out-of-tree hypervisor modules in i915/gvt Julian Stecklina
  2020-01-09 17:13   ` [RFC PATCH 1/4] drm/i915/gvt: make gvt oblivious of kvmgt data structures Julian Stecklina
@ 2020-01-09 17:13   ` Julian Stecklina
  2020-01-20  6:23     ` Zhenyu Wang
  2020-01-09 17:13   ` [RFC PATCH 3/4] drm/i915/gvt: define a public interface to gvt Julian Stecklina
  2020-01-09 17:13   ` [RFC PATCH 4/4] drm/i915/gvt: move public gvt headers out into global include Julian Stecklina
  3 siblings, 1 reply; 16+ messages in thread
From: Julian Stecklina @ 2020-01-09 17:13 UTC (permalink / raw)
  To: intel-gvt-dev
  Cc: Julian Stecklina, linux-kernel, hang.yuan, dri-devel, zhiyuan.lv

The vblank_done completion is not used anywhere, so remove it.

Cc: Zhenyu Wang <zhenyuw@linux.intel.com>

Signed-off-by: Julian Stecklina <julian.stecklina@cyberus-technology.de>
---
 drivers/gpu/drm/i915/gvt/gvt.h   | 2 --
 drivers/gpu/drm/i915/gvt/kvmgt.c | 2 --
 2 files changed, 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
index 2604739e5680..8cf292a8d6bd 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.h
+++ b/drivers/gpu/drm/i915/gvt/gvt.h
@@ -203,8 +203,6 @@ struct intel_vgpu {
 	struct mutex dmabuf_lock;
 	struct idr object_idr;
 
-	struct completion vblank_done;
-
 	u32 scan_nonprivbb;
 };
 
diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
index d725a4fb94b9..9a435bc1a2f0 100644
--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
+++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
@@ -1817,8 +1817,6 @@ static int kvmgt_guest_init(struct mdev_device *mdev)
 	kvmgt_protect_table_init(info);
 	gvt_cache_init(vgpu);
 
-	init_completion(&vgpu->vblank_done);
-
 	info->track_node.track_write = kvmgt_page_track_write;
 	info->track_node.track_flush_slot = kvmgt_page_track_flush_slot;
 	kvm_page_track_register_notifier(kvm, &info->track_node);
-- 
2.24.1


* [RFC PATCH 3/4] drm/i915/gvt: define a public interface to gvt
  2020-01-09 17:13 ` [RFC PATCH 0/4] Support for out-of-tree hypervisor modules in i915/gvt Julian Stecklina
  2020-01-09 17:13   ` [RFC PATCH 1/4] drm/i915/gvt: make gvt oblivious of kvmgt data structures Julian Stecklina
  2020-01-09 17:13   ` [RFC PATCH 2/4] drm/i915/gvt: remove unused vblank_done completion Julian Stecklina
@ 2020-01-09 17:13   ` Julian Stecklina
  2020-01-09 17:13   ` [RFC PATCH 4/4] drm/i915/gvt: move public gvt headers out into global include Julian Stecklina
  3 siblings, 0 replies; 16+ messages in thread
From: Julian Stecklina @ 2020-01-09 17:13 UTC (permalink / raw)
  To: intel-gvt-dev
  Cc: Julian Stecklina, linux-kernel, hang.yuan, dri-devel, zhiyuan.lv

So far, the KVMGT code has simply included the whole i915 driver and
GVT internals to do its job. Change this so that a single public
header defines the interface via accessor functions.

Some ugly things:

a) The handle member of intel_vgpu should be in kvmgt_vdev and the
generic code should just pass the vgpu pointer around.

b) The "public" API is rather ugly, because I tried to limit the
changes I make to KVMGT and keep the conversion mechanical and 1:1;
see the excerpt below. Future patches will need to simplify this to
something sane.
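
As a taste of what the mechanical conversion in (b) looks like on the
KVMGT side (excerpted from the diff below):

	/* Old: kvmgt reaches through i915/GVT internals. */
	struct device *dev = &vgpu->gvt->dev_priv->drm.pdev->dev;

	/* New: only the accessor from gvt_public.h is used. */
	struct device *dev = intel_vgpu_pdev(vgpu);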

v2:
- Remove edid and display constants from public API

Cc: Zhenyu Wang <zhenyuw@linux.intel.com>

Signed-off-by: Julian Stecklina <julian.stecklina@cyberus-technology.de>
---
 drivers/gpu/drm/i915/gvt/Makefile     |   2 +-
 drivers/gpu/drm/i915/gvt/debug.h      |   2 +-
 drivers/gpu/drm/i915/gvt/display.c    |  26 +++++
 drivers/gpu/drm/i915/gvt/display.h    |  27 -----
 drivers/gpu/drm/i915/gvt/gtt.h        |   2 -
 drivers/gpu/drm/i915/gvt/gvt.h        |  40 +------
 drivers/gpu/drm/i915/gvt/gvt_public.c | 154 ++++++++++++++++++++++++++
 drivers/gpu/drm/i915/gvt/gvt_public.h | 104 +++++++++++++++++
 drivers/gpu/drm/i915/gvt/hypercall.h  |   3 +
 drivers/gpu/drm/i915/gvt/kvmgt.c      | 130 +++++++++++-----------
 drivers/gpu/drm/i915/gvt/mpt.h        |   3 -
 drivers/gpu/drm/i915/gvt/reg.h        |   2 -
 12 files changed, 354 insertions(+), 141 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/gvt/gvt_public.c
 create mode 100644 drivers/gpu/drm/i915/gvt/gvt_public.h

diff --git a/drivers/gpu/drm/i915/gvt/Makefile b/drivers/gpu/drm/i915/gvt/Makefile
index ea8324abc784..183827c4f917 100644
--- a/drivers/gpu/drm/i915/gvt/Makefile
+++ b/drivers/gpu/drm/i915/gvt/Makefile
@@ -3,7 +3,7 @@ GVT_DIR := gvt
 GVT_SOURCE := gvt.o aperture_gm.o handlers.o vgpu.o trace_points.o firmware.o \
 	interrupt.o gtt.o cfg_space.o opregion.o mmio.o display.o edid.o \
 	execlist.o scheduler.o sched_policy.o mmio_context.o cmd_parser.o debugfs.o \
-	fb_decoder.o dmabuf.o page_track.o
+	fb_decoder.o dmabuf.o page_track.o gvt_public.o
 
 ccflags-y				+= -I $(srctree)/$(src) -I $(srctree)/$(src)/$(GVT_DIR)/
 i915-y					+= $(addprefix $(GVT_DIR)/, $(GVT_SOURCE))
diff --git a/drivers/gpu/drm/i915/gvt/debug.h b/drivers/gpu/drm/i915/gvt/debug.h
index c6027125c1ec..8270f34cfa43 100644
--- a/drivers/gpu/drm/i915/gvt/debug.h
+++ b/drivers/gpu/drm/i915/gvt/debug.h
@@ -32,7 +32,7 @@ do {									\
 	if (IS_ERR_OR_NULL(vgpu))					\
 		pr_err("gvt: "fmt, ##args);			\
 	else								\
-		pr_err("gvt: vgpu %d: "fmt, vgpu->id, ##args);\
+		pr_err("gvt: vgpu %d: "fmt, intel_vgpu_id(vgpu), ##args);	\
 } while (0)
 
 #define gvt_dbg_core(fmt, args...) \
diff --git a/drivers/gpu/drm/i915/gvt/display.c b/drivers/gpu/drm/i915/gvt/display.c
index e1c313da6c00..08636fa15c46 100644
--- a/drivers/gpu/drm/i915/gvt/display.c
+++ b/drivers/gpu/drm/i915/gvt/display.c
@@ -529,3 +529,29 @@ void intel_vgpu_reset_display(struct intel_vgpu *vgpu)
 {
 	emulate_monitor_status_change(vgpu);
 }
+
+unsigned int vgpu_edid_xres(struct intel_vgpu_port *port)
+{
+	switch (port->id) {
+	case GVT_EDID_1024_768:
+		return 1024;
+	case GVT_EDID_1920_1200:
+		return 1920;
+	default:
+		return 0;
+	}
+}
+EXPORT_SYMBOL_GPL(vgpu_edid_xres);
+
+unsigned int vgpu_edid_yres(struct intel_vgpu_port *port)
+{
+	switch (port->id) {
+	case GVT_EDID_1024_768:
+		return 768;
+	case GVT_EDID_1920_1200:
+		return 1200;
+	default:
+		return 0;
+	}
+}
+EXPORT_SYMBOL_GPL(vgpu_edid_yres);
diff --git a/drivers/gpu/drm/i915/gvt/display.h b/drivers/gpu/drm/i915/gvt/display.h
index b59b34046e1e..6cdfc28b1070 100644
--- a/drivers/gpu/drm/i915/gvt/display.h
+++ b/drivers/gpu/drm/i915/gvt/display.h
@@ -43,9 +43,6 @@ struct intel_vgpu;
 #define SBI_REG_MAX	20
 #define DPCD_SIZE	0x700
 
-#define intel_vgpu_port(vgpu, port) \
-	(&(vgpu->display.ports[port]))
-
 #define intel_vgpu_has_monitor_on_port(vgpu, port) \
 	(intel_vgpu_port(vgpu, port)->edid && \
 		intel_vgpu_port(vgpu, port)->edid->data_valid)
@@ -178,30 +175,6 @@ static inline char *vgpu_edid_str(enum intel_vgpu_edid id)
 	}
 }
 
-static inline unsigned int vgpu_edid_xres(enum intel_vgpu_edid id)
-{
-	switch (id) {
-	case GVT_EDID_1024_768:
-		return 1024;
-	case GVT_EDID_1920_1200:
-		return 1920;
-	default:
-		return 0;
-	}
-}
-
-static inline unsigned int vgpu_edid_yres(enum intel_vgpu_edid id)
-{
-	switch (id) {
-	case GVT_EDID_1024_768:
-		return 768;
-	case GVT_EDID_1920_1200:
-		return 1200;
-	default:
-		return 0;
-	}
-}
-
 void intel_gvt_emulate_vblank(struct intel_gvt *gvt);
 void intel_gvt_check_vblank_emulation(struct intel_gvt *gvt);
 
diff --git a/drivers/gpu/drm/i915/gvt/gtt.h b/drivers/gpu/drm/i915/gvt/gtt.h
index 88789316807d..2618affcd5d9 100644
--- a/drivers/gpu/drm/i915/gvt/gtt.h
+++ b/drivers/gpu/drm/i915/gvt/gtt.h
@@ -38,8 +38,6 @@
 
 struct intel_vgpu_mm;
 
-#define INTEL_GVT_INVALID_ADDR (~0UL)
-
 struct intel_gvt_gtt_entry {
 	u64 val64;
 	int type;
diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
index 8cf292a8d6bd..f9693c44e342 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.h
+++ b/drivers/gpu/drm/i915/gvt/gvt.h
@@ -33,6 +33,7 @@
 #ifndef _GVT_H_
 #define _GVT_H_
 
+#include "gvt_public.h"
 #include "debug.h"
 #include "hypercall.h"
 #include "mmio.h"
@@ -206,11 +207,6 @@ struct intel_vgpu {
 	u32 scan_nonprivbb;
 };
 
-static inline void *intel_vgpu_vdev(struct intel_vgpu *vgpu)
-{
-	return vgpu->vdev;
-}
-
 /* validating GM healthy status*/
 #define vgpu_is_vm_unhealthy(ret_val) \
 	(((ret_val) == -EBADRQC) || ((ret_val) == -EFAULT))
@@ -515,13 +511,6 @@ int intel_vgpu_emulate_cfg_write(struct intel_vgpu *vgpu, unsigned int offset,
 
 void intel_vgpu_emulate_hotplug(struct intel_vgpu *vgpu, bool connected);
 
-static inline u64 intel_vgpu_get_bar_gpa(struct intel_vgpu *vgpu, int bar)
-{
-	/* We are 64bit bar. */
-	return (*(u64 *)(vgpu->cfg_space.virtual_cfg_space + bar)) &
-			PCI_BASE_ADDRESS_MEM_MASK;
-}
-
 void intel_vgpu_clean_opregion(struct intel_vgpu *vgpu);
 int intel_vgpu_init_opregion(struct intel_vgpu *vgpu);
 int intel_vgpu_opregion_base_write_handler(struct intel_vgpu *vgpu, u32 gpa);
@@ -532,33 +521,6 @@ void populate_pvinfo_page(struct intel_vgpu *vgpu);
 int intel_gvt_scan_and_shadow_workload(struct intel_vgpu_workload *workload);
 void enter_failsafe_mode(struct intel_vgpu *vgpu, int reason);
 
-struct intel_gvt_ops {
-	int (*emulate_cfg_read)(struct intel_vgpu *, unsigned int, void *,
-				unsigned int);
-	int (*emulate_cfg_write)(struct intel_vgpu *, unsigned int, void *,
-				unsigned int);
-	int (*emulate_mmio_read)(struct intel_vgpu *, u64, void *,
-				unsigned int);
-	int (*emulate_mmio_write)(struct intel_vgpu *, u64, void *,
-				unsigned int);
-	struct intel_vgpu *(*vgpu_create)(struct intel_gvt *,
-				struct intel_vgpu_type *);
-	void (*vgpu_destroy)(struct intel_vgpu *vgpu);
-	void (*vgpu_release)(struct intel_vgpu *vgpu);
-	void (*vgpu_reset)(struct intel_vgpu *);
-	void (*vgpu_activate)(struct intel_vgpu *);
-	void (*vgpu_deactivate)(struct intel_vgpu *);
-	struct intel_vgpu_type *(*gvt_find_vgpu_type)(struct intel_gvt *gvt,
-			const char *name);
-	bool (*get_gvt_attrs)(struct attribute_group ***intel_vgpu_type_groups);
-	int (*vgpu_query_plane)(struct intel_vgpu *vgpu, void *);
-	int (*vgpu_get_dmabuf)(struct intel_vgpu *vgpu, unsigned int);
-	int (*write_protect_handler)(struct intel_vgpu *, u64, void *,
-				     unsigned int);
-	void (*emulate_hotplug)(struct intel_vgpu *vgpu, bool connected);
-};
-
-
 enum {
 	GVT_FAILSAFE_UNSUPPORTED_GUEST,
 	GVT_FAILSAFE_INSUFFICIENT_RESOURCE,
diff --git a/drivers/gpu/drm/i915/gvt/gvt_public.c b/drivers/gpu/drm/i915/gvt/gvt_public.c
new file mode 100644
index 000000000000..b3f814ec125a
--- /dev/null
+++ b/drivers/gpu/drm/i915/gvt/gvt_public.c
@@ -0,0 +1,154 @@
+#include "i915_drv.h"
+#include "gvt.h"
+
+struct intel_gvt *intel_gvt_from_kdev(struct device *kobj)
+{
+	return kdev_to_i915(kobj)->gvt;
+}
+EXPORT_SYMBOL_GPL(intel_gvt_from_kdev);
+
+void *intel_vgpu_vdev(struct intel_vgpu *vgpu)
+{
+	return vgpu->vdev;
+}
+EXPORT_SYMBOL_GPL(intel_vgpu_vdev);
+
+void intel_vgpu_set_vdev(struct intel_vgpu *vgpu, void *vdev)
+{
+	vgpu->vdev = vdev;
+}
+EXPORT_SYMBOL_GPL(intel_vgpu_set_vdev);
+
+int intel_vgpu_id(struct intel_vgpu *vgpu)
+{
+	return vgpu->id;
+}
+EXPORT_SYMBOL_GPL(intel_vgpu_id);
+
+bool intel_vgpu_active_find(struct intel_vgpu *vgpu, bool (*pred)(struct intel_vgpu *, void *), void *ctx)
+{
+	struct intel_vgpu *itr;
+	int id;
+	bool ret = false;
+
+	mutex_lock(&vgpu->gvt->lock);
+	for_each_active_vgpu(vgpu->gvt, itr, id) {
+		if (pred(itr, ctx)) {
+			ret = true;
+			goto out;
+		}
+	}
+ out:
+	mutex_unlock(&vgpu->gvt->lock);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(intel_vgpu_active_find);
+
+struct device *intel_vgpu_pdev(struct intel_vgpu *vgpu)
+{
+	return &vgpu->gvt->dev_priv->drm.pdev->dev;
+}
+EXPORT_SYMBOL_GPL(intel_vgpu_pdev);
+
+void *intel_vgpu_opregion_va(struct intel_vgpu *vgpu)
+{
+	return vgpu_opregion(vgpu)->va;
+}
+EXPORT_SYMBOL_GPL(intel_vgpu_opregion_va);
+
+struct intel_vgpu_port *intel_vgpu_port(struct intel_vgpu *vgpu, size_t port)
+{
+	return &(vgpu->display.ports[port]);
+}
+EXPORT_SYMBOL_GPL(intel_vgpu_port);
+
+void *intel_vgpu_edid_block(struct intel_vgpu_port *port, size_t *edid_size)
+{
+	*edid_size = EDID_SIZE;
+
+	return port->edid->edid_block;
+}
+EXPORT_SYMBOL_GPL(intel_vgpu_edid_block);
+
+void intel_vgpu_set_handle(struct intel_vgpu *vgpu, unsigned long handle)
+{
+	vgpu->handle = handle;
+}
+EXPORT_SYMBOL_GPL(intel_vgpu_set_handle);
+
+unsigned long intel_vgpu_handle(struct intel_vgpu *vgpu)
+{
+	return vgpu->handle;
+}
+EXPORT_SYMBOL_GPL(intel_vgpu_handle);
+
+u32 *intel_vgpu_bar_ptr(struct intel_vgpu *vgpu, int bar)
+{
+	return (u32 *)(vgpu->cfg_space.virtual_cfg_space + bar);
+}
+EXPORT_SYMBOL_GPL(intel_vgpu_bar_ptr);
+
+u64 intel_vgpu_get_bar_gpa(struct intel_vgpu *vgpu, int bar)
+{
+	/* We are 64bit bar. */
+	return (*(u64 *)(vgpu->cfg_space.virtual_cfg_space + bar)) &
+			PCI_BASE_ADDRESS_MEM_MASK;
+}
+EXPORT_SYMBOL_GPL(intel_vgpu_get_bar_gpa);
+
+u64 intel_vgpu_get_bar_size(struct intel_vgpu *vgpu, int bar)
+{
+	return vgpu->cfg_space.bar[bar].size;
+}
+EXPORT_SYMBOL_GPL(intel_vgpu_get_bar_size);
+
+u64 intel_gvt_cfg_space_size(struct intel_vgpu *vgpu)
+{
+	return vgpu->gvt->device_info.cfg_space_size;
+}
+EXPORT_SYMBOL_GPL(intel_gvt_cfg_space_size);
+
+u64 intel_gvt_aperture_pa_base(struct intel_vgpu *vgpu)
+{
+	return gvt_aperture_pa_base(vgpu->gvt);
+}
+EXPORT_SYMBOL_GPL(intel_gvt_aperture_pa_base);
+
+u64 intel_vgpu_aperture_offset(struct intel_vgpu *vgpu)
+{
+	return vgpu_aperture_offset(vgpu);
+}
+EXPORT_SYMBOL_GPL(intel_vgpu_aperture_offset);
+
+u64 intel_vgpu_aperture_size(struct intel_vgpu *vgpu)
+{
+	return vgpu_aperture_sz(vgpu);
+}
+EXPORT_SYMBOL_GPL(intel_vgpu_aperture_size);
+
+u64 intel_gvt_aperture_size(struct intel_vgpu *vgpu)
+{
+	return gvt_aperture_sz(vgpu->gvt);
+}
+EXPORT_SYMBOL_GPL(intel_gvt_aperture_size);
+
+struct io_mapping *intel_gvt_gtt_iomap(struct intel_vgpu *vgpu)
+{
+	return &vgpu->gvt->dev_priv->ggtt.iomap;
+}
+EXPORT_SYMBOL_GPL(intel_gvt_gtt_iomap);
+
+bool intel_gvt_in_gtt(struct intel_vgpu *vgpu, u64 offset)
+{
+	struct intel_gvt *gvt = vgpu->gvt;
+
+	return (offset >= gvt->device_info.gtt_start_offset &&
+		offset < gvt->device_info.gtt_start_offset + gvt_ggtt_sz(gvt));
+}
+EXPORT_SYMBOL_GPL(intel_gvt_in_gtt);
+
+struct dentry *intel_vgpu_debugfs(struct intel_vgpu *vgpu)
+{
+	return vgpu->debugfs;
+}
+EXPORT_SYMBOL_GPL(intel_vgpu_debugfs);
diff --git a/drivers/gpu/drm/i915/gvt/gvt_public.h b/drivers/gpu/drm/i915/gvt/gvt_public.h
new file mode 100644
index 000000000000..23bf1235e1a1
--- /dev/null
+++ b/drivers/gpu/drm/i915/gvt/gvt_public.h
@@ -0,0 +1,104 @@
+/*
+ * Copyright(c) 2011-2016 Intel Corporation. All rights reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef _GVT_PUBLIC_H_
+#define _GVT_PUBLIC_H_
+
+#include "hypercall.h"
+
+struct attribute;
+struct attribute_group;
+
+struct intel_vgpu;
+struct intel_gvt;
+struct intel_vgpu_type;
+struct intel_vgpu_port;
+
+/* reg.h */
+#define INTEL_GVT_OPREGION_PAGES	2
+#define INTEL_GVT_OPREGION_SIZE		(INTEL_GVT_OPREGION_PAGES * PAGE_SIZE)
+
+/* gtt.h */
+#define INTEL_GVT_INVALID_ADDR (~0UL)
+
+struct intel_gvt_ops {
+	int (*emulate_cfg_read)(struct intel_vgpu *, unsigned int, void *,
+				unsigned int);
+	int (*emulate_cfg_write)(struct intel_vgpu *, unsigned int, void *,
+				unsigned int);
+	int (*emulate_mmio_read)(struct intel_vgpu *, u64, void *,
+				unsigned int);
+	int (*emulate_mmio_write)(struct intel_vgpu *, u64, void *,
+				unsigned int);
+	struct intel_vgpu *(*vgpu_create)(struct intel_gvt *,
+				struct intel_vgpu_type *);
+	void (*vgpu_destroy)(struct intel_vgpu *vgpu);
+	void (*vgpu_release)(struct intel_vgpu *vgpu);
+	void (*vgpu_reset)(struct intel_vgpu *);
+	void (*vgpu_activate)(struct intel_vgpu *);
+	void (*vgpu_deactivate)(struct intel_vgpu *);
+	struct intel_vgpu_type *(*gvt_find_vgpu_type)(struct intel_gvt *gvt,
+			const char *name);
+	bool (*get_gvt_attrs)(struct attribute_group ***intel_vgpu_type_groups);
+	int (*vgpu_query_plane)(struct intel_vgpu *vgpu, void *);
+	int (*vgpu_get_dmabuf)(struct intel_vgpu *vgpu, unsigned int);
+	int (*write_protect_handler)(struct intel_vgpu *, u64, void *,
+				     unsigned int);
+	void (*emulate_hotplug)(struct intel_vgpu *vgpu, bool connected);
+};
+
+bool intel_vgpu_active_find(struct intel_vgpu *vgpu, bool (*pred)(struct intel_vgpu *, void *), void *ctx);
+
+struct intel_gvt *intel_gvt_from_kdev(struct device *kobj);
+void *intel_vgpu_vdev(struct intel_vgpu *vgpu);
+void intel_vgpu_set_vdev(struct intel_vgpu *vgpu, void *vdev);
+int intel_vgpu_id(struct intel_vgpu *vgpu);
+
+struct device *intel_vgpu_pdev(struct intel_vgpu *vgpu);
+void *intel_vgpu_opregion_va(struct intel_vgpu *vgpu);
+struct intel_vgpu_port *intel_vgpu_port(struct intel_vgpu *vgpu, size_t port);
+
+unsigned int vgpu_edid_xres(struct intel_vgpu_port *port);
+unsigned int vgpu_edid_yres(struct intel_vgpu_port *port);
+void *intel_vgpu_edid_block(struct intel_vgpu_port *port, size_t *edid_size);
+
+void intel_vgpu_set_handle(struct intel_vgpu *vgpu, unsigned long handle);
+unsigned long intel_vgpu_handle(struct intel_vgpu *vgpu);
+
+u32 *intel_vgpu_bar_ptr(struct intel_vgpu *vgpu, int bar);
+u64 intel_vgpu_get_bar_gpa(struct intel_vgpu *vgpu, int bar);
+u64 intel_vgpu_get_bar_size(struct intel_vgpu *vgpu, int bar);
+
+u64 intel_gvt_cfg_space_size(struct intel_vgpu *vgpu);
+
+u64 intel_gvt_aperture_pa_base(struct intel_vgpu *vgpu);
+u64 intel_vgpu_aperture_offset(struct intel_vgpu *vgpu);
+u64 intel_vgpu_aperture_size(struct intel_vgpu *vgpu);
+u64 intel_gvt_aperture_size(struct intel_vgpu *vgpu);
+
+struct io_mapping *intel_gvt_gtt_iomap(struct intel_vgpu *vgpu);
+bool intel_gvt_in_gtt(struct intel_vgpu *vgpu, u64 off);
+
+struct dentry *intel_vgpu_debugfs(struct intel_vgpu *vgpu);
+
+#endif /* _GVT_PUBLIC_H_ */
diff --git a/drivers/gpu/drm/i915/gvt/hypercall.h b/drivers/gpu/drm/i915/gvt/hypercall.h
index b17c4a1599cd..7ed33e4919a3 100644
--- a/drivers/gpu/drm/i915/gvt/hypercall.h
+++ b/drivers/gpu/drm/i915/gvt/hypercall.h
@@ -81,4 +81,7 @@ struct intel_gvt_mpt {
 
 extern struct intel_gvt_mpt xengt_mpt;
 
+int intel_gvt_register_hypervisor(struct intel_gvt_mpt *);
+void intel_gvt_unregister_hypervisor(void);
+
 #endif /* _GVT_HYPERCALL_H_ */
diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
index 9a435bc1a2f0..f5157211d45f 100644
--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
+++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
@@ -28,10 +28,15 @@
  *    Xiaoguang Chen <xiaoguang.chen@intel.com>
  */
 
+#include <drm/drm_edid.h>
 #include <linux/init.h>
 #include <linux/device.h>
+#include <linux/dma-mapping.h>
+#include <linux/hashtable.h>
+#include <linux/io-mapping.h>
 #include <linux/mm.h>
 #include <linux/mmu_context.h>
+#include <linux/pci.h>
 #include <linux/sched/mm.h>
 #include <linux/types.h>
 #include <linux/list.h>
@@ -46,8 +51,8 @@
 
 #include <linux/nospec.h>
 
-#include "i915_drv.h"
-#include "gvt.h"
+#include "debug.h"
+#include "gvt_public.h"
 
 static const struct intel_gvt_ops *intel_gvt_ops;
 
@@ -217,7 +222,7 @@ static int gvt_pin_guest_page(struct intel_vgpu *vgpu, unsigned long gfn,
 static int gvt_dma_map_page(struct intel_vgpu *vgpu, unsigned long gfn,
 		dma_addr_t *dma_addr, unsigned long size)
 {
-	struct device *dev = &vgpu->gvt->dev_priv->drm.pdev->dev;
+	struct device *dev = intel_vgpu_pdev(vgpu);
 	struct page *page = NULL;
 	int ret;
 
@@ -240,7 +245,7 @@ static int gvt_dma_map_page(struct intel_vgpu *vgpu, unsigned long gfn,
 static void gvt_dma_unmap_page(struct intel_vgpu *vgpu, unsigned long gfn,
 		dma_addr_t dma_addr, unsigned long size)
 {
-	struct device *dev = &vgpu->gvt->dev_priv->drm.pdev->dev;
+	struct device *dev = intel_vgpu_pdev(vgpu);
 
 	dma_unmap_page(dev, dma_addr, size, PCI_DMA_BIDIRECTIONAL);
 	gvt_unpin_guest_page(vgpu, gfn, size);
@@ -627,7 +632,7 @@ static int kvmgt_set_opregion(void *p_vgpu)
 	 * one later. This one is used to expose opregion to VFIO. And the
 	 * other one created by VFIO later, is used by guest actually.
 	 */
-	base = vgpu_opregion(vgpu)->va;
+	base = intel_vgpu_opregion_va(vgpu);
 	if (!base)
 		return -ENOMEM;
 
@@ -639,7 +644,7 @@ static int kvmgt_set_opregion(void *p_vgpu)
 	ret = intel_vgpu_register_reg(vgpu,
 			PCI_VENDOR_ID_INTEL | VFIO_REGION_TYPE_PCI_VENDOR_TYPE,
 			VFIO_REGION_SUBTYPE_INTEL_IGD_OPREGION,
-			&intel_vgpu_regops_opregion, OPREGION_SIZE,
+			&intel_vgpu_regops_opregion, INTEL_GVT_OPREGION_SIZE,
 			VFIO_REGION_INFO_FLAG_READ, base);
 
 	return ret;
@@ -650,24 +655,26 @@ static int kvmgt_set_edid(void *p_vgpu, int port_num)
 	struct intel_vgpu *vgpu = (struct intel_vgpu *)p_vgpu;
 	struct intel_vgpu_port *port = intel_vgpu_port(vgpu, port_num);
 	struct vfio_edid_region *base;
+	size_t edid_size;
 	int ret;
 
 	base = kzalloc(sizeof(*base), GFP_KERNEL);
 	if (!base)
 		return -ENOMEM;
 
+	base->edid_blob = intel_vgpu_edid_block(port, &edid_size);
+
 	/* TODO: Add multi-port and EDID extension block support */
 	base->vfio_edid_regs.edid_offset = EDID_BLOB_OFFSET;
-	base->vfio_edid_regs.edid_max_size = EDID_SIZE;
-	base->vfio_edid_regs.edid_size = EDID_SIZE;
-	base->vfio_edid_regs.max_xres = vgpu_edid_xres(port->id);
-	base->vfio_edid_regs.max_yres = vgpu_edid_yres(port->id);
-	base->edid_blob = port->edid->edid_block;
+	base->vfio_edid_regs.edid_max_size = edid_size;
+	base->vfio_edid_regs.edid_size = edid_size;
+	base->vfio_edid_regs.max_xres = vgpu_edid_xres(port);
+	base->vfio_edid_regs.max_yres = vgpu_edid_yres(port);
 
 	ret = intel_vgpu_register_reg(vgpu,
 			VFIO_REGION_TYPE_GFX,
 			VFIO_REGION_SUBTYPE_GFX_EDID,
-			&intel_vgpu_regops_edid, EDID_SIZE,
+			&intel_vgpu_regops_edid, edid_size,
 			VFIO_REGION_INFO_FLAG_READ |
 			VFIO_REGION_INFO_FLAG_WRITE |
 			VFIO_REGION_INFO_FLAG_CAPS, base);
@@ -690,11 +697,11 @@ static int intel_vgpu_create(struct kobject *kobj, struct mdev_device *mdev)
 	struct intel_vgpu *vgpu = NULL;
 	struct intel_vgpu_type *type;
 	struct device *pdev;
-	void *gvt;
+	struct intel_gvt *gvt;
 	int ret;
 
 	pdev = mdev_parent_dev(mdev);
-	gvt = kdev_to_i915(pdev)->gvt;
+	gvt = intel_gvt_from_kdev(pdev);
 
 	type = intel_gvt_ops->gvt_find_vgpu_type(gvt, kobject_name(kobj));
 	if (!type) {
@@ -728,7 +735,7 @@ static int intel_vgpu_remove(struct mdev_device *mdev)
 {
 	struct intel_vgpu *vgpu = mdev_get_drvdata(mdev);
 
-	if (handle_valid(vgpu->handle))
+	if (handle_valid(intel_vgpu_handle(vgpu)))
 		return -EBUSY;
 
 	intel_gvt_ops->vgpu_destroy(vgpu);
@@ -857,7 +864,7 @@ static void __intel_vgpu_release(struct intel_vgpu *vgpu)
 	struct kvmgt_guest_info *info;
 	int ret;
 
-	if (!handle_valid(vgpu->handle))
+	if (!handle_valid(intel_vgpu_handle(vgpu)))
 		return;
 
 	if (atomic_cmpxchg(&vdev->released, 0, 1))
@@ -876,13 +883,13 @@ static void __intel_vgpu_release(struct intel_vgpu *vgpu)
 	/* dereference module reference taken at open */
 	module_put(THIS_MODULE);
 
-	info = (struct kvmgt_guest_info *)vgpu->handle;
+	info = (struct kvmgt_guest_info *)intel_vgpu_handle(vgpu);
 	kvmgt_guest_exit(info);
 
 	intel_vgpu_release_msi_eventfd_ctx(vgpu);
 
 	vdev->kvm = NULL;
-	vgpu->handle = 0;
+	intel_vgpu_set_handle(vgpu, 0);
 }
 
 static void intel_vgpu_release(struct mdev_device *mdev)
@@ -902,18 +909,16 @@ static void intel_vgpu_release_work(struct work_struct *work)
 
 static u64 intel_vgpu_get_bar_addr(struct intel_vgpu *vgpu, int bar)
 {
+	u32 *barp = intel_vgpu_bar_ptr(vgpu, bar);
 	u32 start_lo, start_hi;
 	u32 mem_type;
 
-	start_lo = (*(u32 *)(vgpu->cfg_space.virtual_cfg_space + bar)) &
-			PCI_BASE_ADDRESS_MEM_MASK;
-	mem_type = (*(u32 *)(vgpu->cfg_space.virtual_cfg_space + bar)) &
-			PCI_BASE_ADDRESS_MEM_TYPE_MASK;
+	start_lo = barp[0] & PCI_BASE_ADDRESS_MEM_MASK;
+	mem_type = barp[0] & PCI_BASE_ADDRESS_MEM_TYPE_MASK;
 
 	switch (mem_type) {
 	case PCI_BASE_ADDRESS_MEM_TYPE_64:
-		start_hi = (*(u32 *)(vgpu->cfg_space.virtual_cfg_space
-						+ bar + 4));
+		start_hi = barp[1];
 		break;
 	case PCI_BASE_ADDRESS_MEM_TYPE_32:
 	case PCI_BASE_ADDRESS_MEM_TYPE_1M:
@@ -944,8 +949,8 @@ static int intel_vgpu_bar_rw(struct intel_vgpu *vgpu, int bar, u64 off,
 
 static inline bool intel_vgpu_in_aperture(struct intel_vgpu *vgpu, u64 off)
 {
-	return off >= vgpu_aperture_offset(vgpu) &&
-	       off < vgpu_aperture_offset(vgpu) + vgpu_aperture_sz(vgpu);
+	return off >= intel_vgpu_aperture_offset(vgpu) &&
+	       off < intel_vgpu_aperture_offset(vgpu) + intel_vgpu_aperture_size(vgpu);
 }
 
 static int intel_vgpu_aperture_rw(struct intel_vgpu *vgpu, u64 off,
@@ -959,7 +964,7 @@ static int intel_vgpu_aperture_rw(struct intel_vgpu *vgpu, u64 off,
 		return -EINVAL;
 	}
 
-	aperture_va = io_mapping_map_wc(&vgpu->gvt->dev_priv->ggtt.iomap,
+	aperture_va = io_mapping_map_wc(intel_gvt_gtt_iomap(vgpu),
 					ALIGN_DOWN(off, PAGE_SIZE),
 					count + offset_in_page(off));
 	if (!aperture_va)
@@ -1029,8 +1034,7 @@ static bool gtt_entry(struct mdev_device *mdev, loff_t *ppos)
 {
 	struct intel_vgpu *vgpu = mdev_get_drvdata(mdev);
 	unsigned int index = VFIO_PCI_OFFSET_TO_INDEX(*ppos);
-	struct intel_gvt *gvt = vgpu->gvt;
-	int offset;
+	u64 offset;
 
 	/* Only allow MMIO GGTT entry access */
 	if (index != PCI_BASE_ADDRESS_0)
@@ -1039,9 +1043,7 @@ static bool gtt_entry(struct mdev_device *mdev, loff_t *ppos)
 	offset = (u64)(*ppos & VFIO_PCI_OFFSET_MASK) -
 		intel_vgpu_get_bar_gpa(vgpu, PCI_BASE_ADDRESS_0);
 
-	return (offset >= gvt->device_info.gtt_start_offset &&
-		offset < gvt->device_info.gtt_start_offset + gvt_ggtt_sz(gvt)) ?
-			true : false;
+	return intel_gvt_in_gtt(vgpu, offset);
 }
 
 static ssize_t intel_vgpu_read(struct mdev_device *mdev, char __user *buf,
@@ -1219,10 +1221,10 @@ static int intel_vgpu_mmap(struct mdev_device *mdev, struct vm_area_struct *vma)
 	if (!intel_vgpu_in_aperture(vgpu, req_start))
 		return -EINVAL;
 	if (req_start + req_size >
-	    vgpu_aperture_offset(vgpu) + vgpu_aperture_sz(vgpu))
+	    intel_vgpu_aperture_offset(vgpu) + intel_vgpu_aperture_size(vgpu))
 		return -EINVAL;
 
-	pgoff = (gvt_aperture_pa_base(vgpu->gvt) >> PAGE_SHIFT) + pgoff;
+	pgoff = (intel_gvt_aperture_pa_base(vgpu) >> PAGE_SHIFT) + pgoff;
 
 	return remap_pfn_range(vma, virtaddr, pgoff, req_size, pg_prot);
 }
@@ -1326,7 +1328,7 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
 	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
 	unsigned long minsz;
 
-	gvt_dbg_core("vgpu%d ioctl, cmd: %d\n", vgpu->id, cmd);
+	gvt_dbg_core("vgpu%d ioctl, cmd: %d\n", intel_vgpu_id(vgpu), cmd);
 
 	if (cmd == VFIO_DEVICE_GET_INFO) {
 		struct vfio_device_info info;
@@ -1368,13 +1370,13 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
 		switch (info.index) {
 		case VFIO_PCI_CONFIG_REGION_INDEX:
 			info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
-			info.size = vgpu->gvt->device_info.cfg_space_size;
+			info.size = intel_gvt_cfg_space_size(vgpu);
 			info.flags = VFIO_REGION_INFO_FLAG_READ |
 				     VFIO_REGION_INFO_FLAG_WRITE;
 			break;
 		case VFIO_PCI_BAR0_REGION_INDEX:
 			info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
-			info.size = vgpu->cfg_space.bar[info.index].size;
+			info.size = intel_vgpu_get_bar_size(vgpu, info.index);
 			if (!info.size) {
 				info.flags = 0;
 				break;
@@ -1394,7 +1396,7 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
 					VFIO_REGION_INFO_FLAG_MMAP |
 					VFIO_REGION_INFO_FLAG_READ |
 					VFIO_REGION_INFO_FLAG_WRITE;
-			info.size = gvt_aperture_sz(vgpu->gvt);
+			info.size = intel_gvt_aperture_size(vgpu);
 
 			sparse = kzalloc(struct_size(sparse, areas, nr_areas),
 					 GFP_KERNEL);
@@ -1406,8 +1408,8 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
 			sparse->nr_areas = nr_areas;
 			cap_type_id = VFIO_REGION_INFO_CAP_SPARSE_MMAP;
 			sparse->areas[0].offset =
-					PAGE_ALIGN(vgpu_aperture_offset(vgpu));
-			sparse->areas[0].size = vgpu_aperture_sz(vgpu);
+				PAGE_ALIGN(intel_vgpu_aperture_offset(vgpu));
+			sparse->areas[0].size = intel_vgpu_aperture_size(vgpu);
 			break;
 
 		case VFIO_PCI_BAR3_REGION_INDEX ... VFIO_PCI_BAR5_REGION_INDEX:
@@ -1607,7 +1609,7 @@ vgpu_id_show(struct device *dev, struct device_attribute *attr,
 	if (mdev) {
 		struct intel_vgpu *vgpu = (struct intel_vgpu *)
 			mdev_get_drvdata(mdev);
-		return sprintf(buf, "%d\n", vgpu->id);
+		return sprintf(buf, "%d\n", intel_vgpu_id(vgpu));
 	}
 	return sprintf(buf, "\n");
 }
@@ -1761,27 +1763,21 @@ static void kvmgt_page_track_flush_slot(struct kvm *kvm,
 	spin_unlock(&kvm->mmu_lock);
 }
 
-static bool __kvmgt_vgpu_exist(struct intel_vgpu *vgpu, struct kvm *kvm)
+static bool __kvmgt_has_this_kvm(struct intel_vgpu *vgpu, void *kvm)
 {
-	struct intel_vgpu *itr;
+	unsigned long handle = intel_vgpu_handle(vgpu);
 	struct kvmgt_guest_info *info;
-	int id;
-	bool ret = false;
-
-	mutex_lock(&vgpu->gvt->lock);
-	for_each_active_vgpu(vgpu->gvt, itr, id) {
-		if (!handle_valid(itr->handle))
-			continue;
-
-		info = (struct kvmgt_guest_info *)itr->handle;
-		if (kvm && kvm == info->kvm) {
-			ret = true;
-			goto out;
-		}
-	}
-out:
-	mutex_unlock(&vgpu->gvt->lock);
-	return ret;
+
+	if (!handle_valid(handle))
+		return false;
+
+	info = (struct kvmgt_guest_info *)handle;
+	return kvm && kvm == info->kvm;
+}
+
+static bool __kvmgt_vgpu_exist(struct intel_vgpu *vgpu, struct kvm *kvm)
+{
+	return intel_vgpu_active_find(vgpu, __kvmgt_has_this_kvm, kvm);
 }
 
 static int kvmgt_guest_init(struct mdev_device *mdev)
@@ -1792,7 +1788,7 @@ static int kvmgt_guest_init(struct mdev_device *mdev)
 	struct kvm *kvm;
 
 	vgpu = mdev_get_drvdata(mdev);
-	if (handle_valid(vgpu->handle))
+	if (handle_valid(intel_vgpu_handle(vgpu)))
 		return -EEXIST;
 
 	vdev = kvmgt_vdev(vgpu);
@@ -1809,7 +1805,7 @@ static int kvmgt_guest_init(struct mdev_device *mdev)
 	if (!info)
 		return -ENOMEM;
 
-	vgpu->handle = (unsigned long)info;
+	intel_vgpu_set_handle(vgpu, (unsigned long)info);
 	info->vgpu = vgpu;
 	info->kvm = kvm;
 	kvm_get_kvm(info->kvm);
@@ -1823,7 +1819,7 @@ static int kvmgt_guest_init(struct mdev_device *mdev)
 
 	info->debugfs_cache_entries = debugfs_create_ulong(
 						"kvmgt_nr_cache_entries",
-						0444, vgpu->debugfs,
+						0444, intel_vgpu_debugfs(vgpu),
 						&vdev->nr_cache_entries);
 	return 0;
 }
@@ -1844,13 +1840,15 @@ static bool kvmgt_guest_exit(struct kvmgt_guest_info *info)
 static int kvmgt_attach_vgpu(void *p_vgpu, unsigned long *handle)
 {
 	struct intel_vgpu *vgpu = (struct intel_vgpu *)p_vgpu;
+	struct kvmgt_vdev *vdev;
 
-	vgpu->vdev = kzalloc(sizeof(struct kvmgt_vdev), GFP_KERNEL);
+	vdev = kzalloc(sizeof(struct kvmgt_vdev), GFP_KERNEL);
 
-	if (!vgpu->vdev)
+	if (!vdev)
 		return -ENOMEM;
 
-	kvmgt_vdev(vgpu)->vgpu = vgpu;
+	vdev->vgpu = vgpu;
+	intel_vgpu_set_vdev(vgpu, vdev);
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/i915/gvt/mpt.h b/drivers/gpu/drm/i915/gvt/mpt.h
index 9ad224df9c68..8b8184d6a1fb 100644
--- a/drivers/gpu/drm/i915/gvt/mpt.h
+++ b/drivers/gpu/drm/i915/gvt/mpt.h
@@ -392,7 +392,4 @@ static inline bool intel_gvt_hypervisor_is_valid_gfn(
 	return intel_gvt_host.mpt->is_valid_gfn(vgpu->handle, gfn);
 }
 
-int intel_gvt_register_hypervisor(struct intel_gvt_mpt *);
-void intel_gvt_unregister_hypervisor(void);
-
 #endif /* _GVT_MPT_H_ */
diff --git a/drivers/gpu/drm/i915/gvt/reg.h b/drivers/gpu/drm/i915/gvt/reg.h
index 5b66e14c5b7b..85f71d2fbcb7 100644
--- a/drivers/gpu/drm/i915/gvt/reg.h
+++ b/drivers/gpu/drm/i915/gvt/reg.h
@@ -49,8 +49,6 @@
 #define INTEL_GVT_OPREGION_SCIC_SF_REQEUSTEDCALLBACKS 1
 #define INTEL_GVT_OPREGION_PARM                   0x204
 
-#define INTEL_GVT_OPREGION_PAGES	2
-#define INTEL_GVT_OPREGION_SIZE		(INTEL_GVT_OPREGION_PAGES * PAGE_SIZE)
 #define INTEL_GVT_OPREGION_VBT_OFFSET	0x400
 #define INTEL_GVT_OPREGION_VBT_SIZE	\
 		(INTEL_GVT_OPREGION_SIZE - INTEL_GVT_OPREGION_VBT_OFFSET)
-- 
2.24.1


* [RFC PATCH 4/4] drm/i915/gvt: move public gvt headers out into global include
  2020-01-09 17:13 ` [RFC PATCH 0/4] Support for out-of-tree hypervisor modules in i915/gvt Julian Stecklina
                     ` (2 preceding siblings ...)
  2020-01-09 17:13   ` [RFC PATCH 3/4] drm/i915/gvt: define a public interface to gvt Julian Stecklina
@ 2020-01-09 17:13   ` Julian Stecklina
  2020-01-15 15:22     ` Greg KH
  3 siblings, 1 reply; 16+ messages in thread
From: Julian Stecklina @ 2020-01-09 17:13 UTC (permalink / raw)
  To: intel-gvt-dev
  Cc: Julian Stecklina, linux-kernel, hang.yuan, dri-devel, zhiyuan.lv

Now that the GVT interface to hypervisors does not depend on i915/GVT
internals anymore, we can move the headers to the global include/.

This makes out-of-tree modules for hypervisor integration possible.
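
As an illustration, an out-of-tree backend would then only need the
public header plus the registration calls. This is just a sketch -- the
module and variable names below are made up and the mpt members are
elided:

  #include <linux/module.h>
  #include <drm/i915_gvt.h>

  static struct intel_gvt_mpt example_mpt = {
          /* .host_init, .attach_vgpu, ... filled in by the backend */
  };

  static int __init example_backend_init(void)
  {
          return intel_gvt_register_hypervisor(&example_mpt);
  }

  static void __exit example_backend_exit(void)
  {
          intel_gvt_unregister_hypervisor();
  }

  module_init(example_backend_init);
  module_exit(example_backend_exit);
  MODULE_LICENSE("GPL");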

Cc: Zhenyu Wang <zhenyuw@linux.intel.com>

Signed-off-by: Julian Stecklina <julian.stecklina@cyberus-technology.de>
---
 drivers/gpu/drm/i915/gvt/gvt.h                         |  3 +--
 drivers/gpu/drm/i915/gvt/kvmgt.c                       |  2 +-
 .../i915/gvt/gvt_public.h => include/drm/i915_gvt.h    |  8 ++++----
 .../hypercall.h => include/drm/i915_gvt_hypercall.h    | 10 +++++++---
 4 files changed, 13 insertions(+), 10 deletions(-)
 rename drivers/gpu/drm/i915/gvt/gvt_public.h => include/drm/i915_gvt.h (97%)
 rename drivers/gpu/drm/i915/gvt/hypercall.h => include/drm/i915_gvt_hypercall.h (95%)

diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
index f9693c44e342..d09374aa7710 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.h
+++ b/drivers/gpu/drm/i915/gvt/gvt.h
@@ -33,9 +33,8 @@
 #ifndef _GVT_H_
 #define _GVT_H_
 
-#include "gvt_public.h"
+#include <drm/i915_gvt.h>
 #include "debug.h"
-#include "hypercall.h"
 #include "mmio.h"
 #include "reg.h"
 #include "interrupt.h"
diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
index f5157211d45f..280d69ca964b 100644
--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
+++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
@@ -28,6 +28,7 @@
  *    Xiaoguang Chen <xiaoguang.chen@intel.com>
  */
 
+#include <drm/i915_gvt.h>
 #include <drm/drm_edid.h>
 #include <linux/init.h>
 #include <linux/device.h>
@@ -52,7 +53,6 @@
 #include <linux/nospec.h>
 
 #include "debug.h"
-#include "gvt_public.h"
 
 static const struct intel_gvt_ops *intel_gvt_ops;
 
diff --git a/drivers/gpu/drm/i915/gvt/gvt_public.h b/include/drm/i915_gvt.h
similarity index 97%
rename from drivers/gpu/drm/i915/gvt/gvt_public.h
rename to include/drm/i915_gvt.h
index 23bf1235e1a1..3926ca32f773 100644
--- a/drivers/gpu/drm/i915/gvt/gvt_public.h
+++ b/include/drm/i915_gvt.h
@@ -21,10 +21,10 @@
  * SOFTWARE.
  */
 
-#ifndef _GVT_PUBLIC_H_
-#define _GVT_PUBLIC_H_
+#ifndef _I915_GVT_H_
+#define _I915_GVT_H_
 
-#include "hypercall.h"
+#include <drm/i915_gvt_hypercall.h>
 
 struct attribute;
 struct attribute_group;
@@ -101,4 +101,4 @@ bool intel_gvt_in_gtt(struct intel_vgpu *vgpu, u64 off);
 
 struct dentry *intel_vgpu_debugfs(struct intel_vgpu *vgpu);
 
-#endif /* _GVT_PUBLIC_H_ */
+#endif /* _I915_GVT_H_ */
diff --git a/drivers/gpu/drm/i915/gvt/hypercall.h b/include/drm/i915_gvt_hypercall.h
similarity index 95%
rename from drivers/gpu/drm/i915/gvt/hypercall.h
rename to include/drm/i915_gvt_hypercall.h
index 7ed33e4919a3..c26eef7dbdde 100644
--- a/drivers/gpu/drm/i915/gvt/hypercall.h
+++ b/include/drm/i915_gvt_hypercall.h
@@ -30,8 +30,12 @@
  *
  */
 
-#ifndef _GVT_HYPERCALL_H_
-#define _GVT_HYPERCALL_H_
+#ifndef _I915_GVT_HYPERCALL_H_
+#define _I915_GVT_HYPERCALL_H_
+
+#include <linux/types.h>
+
+struct device;
 
 #include <linux/types.h>
 
@@ -84,4 +88,4 @@ extern struct intel_gvt_mpt xengt_mpt;
 int intel_gvt_register_hypervisor(struct intel_gvt_mpt *);
 void intel_gvt_unregister_hypervisor(void);
 
-#endif /* _GVT_HYPERCALL_H_ */
+#endif /* _I915_GVT_HYPERCALL_H_ */
-- 
2.24.1


* Re: [RFC PATCH 4/4] drm/i915/gvt: move public gvt headers out into global include
  2020-01-09 17:13   ` [RFC PATCH 4/4] drm/i915/gvt: move public gvt headers out into global include Julian Stecklina
@ 2020-01-15 15:22     ` Greg KH
  2020-01-16 14:13       ` Julian Stecklina
  0 siblings, 1 reply; 16+ messages in thread
From: Greg KH @ 2020-01-15 15:22 UTC (permalink / raw)
  To: Julian Stecklina
  Cc: linux-kernel, hang.yuan, dri-devel, intel-gvt-dev, zhiyuan.lv

On Thu, Jan 09, 2020 at 07:13:57PM +0200, Julian Stecklina wrote:
> Now that the GVT interface to hypervisors does not depend on i915/GVT
> internals anymore, we can move the headers to the global include/.
> 
> This makes out-of-tree modules for hypervisor integration possible.

What kind of out-of-tree modules do you need/want for this?  And why do
they somehow have to be out of the tree?  We want them in the tree, and
so should you, as it will save you time and money if they are.

Also, as Christoph said, adding exports for functions that are not used
by anything within the kernel tree itself is not ok, that's not how we
work.

thanks,

greg k-h

* Re: [RFC PATCH 4/4] drm/i915/gvt: move public gvt headers out into global include
  2020-01-15 15:22     ` Greg KH
@ 2020-01-16 14:13       ` Julian Stecklina
  2020-01-16 14:23         ` Greg KH
  2020-01-17  2:15         ` Zhenyu Wang
  0 siblings, 2 replies; 16+ messages in thread
From: Julian Stecklina @ 2020-01-16 14:13 UTC (permalink / raw)
  To: Greg KH
  Cc: Thomas Prescher, linux-kernel, hang.yuan, dri-devel,
	intel-gvt-dev, zhiyuan.lv

Hi Greg, Christoph,

On Wed, 2020-01-15 at 16:22 +0100, Greg KH wrote:
> On Thu, Jan 09, 2020 at 07:13:57PM +0200, Julian Stecklina wrote:
> > Now that the GVT interface to hypervisors does not depend on i915/GVT
> > internals anymore, we can move the headers to the global include/.
> > 
> > This makes out-of-tree modules for hypervisor integration possible.
> 
> What kind of out-of-tree modules do you need/want for this?

The mediated virtualization support in the i915 driver needs a backend to the
hypervisor. There is currently one backend for KVM in the tree
(drivers/gpu/drm/i915/gvt/kvmgt.c) and at least 3 other hypervisor backends out
of tree in various states of development that I know of. We are currently
developing one of these.

> 
> Also, as Christoph said, adding exports for functions that are not used
> by anything within the kernel tree itself is not ok, that's not how we
> work.

The exports are used by the KVM hypervisor backend. The patchset I sent
basically decouples KVMGT from i915 driver internals. So personally I would
count this as a benefit in itself.

There is already an indirection in place that looks like it is intended to
decouple the hypervisor backends from the i915 driver core: intel_gvt_ops [1].
This is a struct of function pointers that the hypervisor backend uses to talk
to the GPU mediator code.
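
Roughly this shape (an excerpt from memory, see [1] for the full list):

  /* Excerpt only; the real struct has many more entries. */
  struct intel_gvt_ops {
          int (*emulate_cfg_read)(struct intel_vgpu *vgpu, unsigned int offset,
                                  void *p_data, unsigned int bytes);
          int (*emulate_mmio_read)(struct intel_vgpu *vgpu, u64 pa,
                                   void *p_data, unsigned int bytes);
          void (*vgpu_activate)(struct intel_vgpu *vgpu);
          void (*vgpu_release)(struct intel_vgpu *vgpu);
          /* ... */
  };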

Unfortunately, this struct doesn't cover all use cases, and in a handful of
places the KVM hypervisor backend directly touches the i915 device's internal
state. My current solution was to wrap these accesses in accessor functions
and EXPORT_SYMBOL_GPL them.
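
For example, one of the wrappers this series adds looks roughly like
this (simplified):

  /* kvmgt no longer needs to know struct intel_vgpu's layout. */
  int intel_vgpu_id(struct intel_vgpu *vgpu)
  {
          return vgpu->id;
  }
  EXPORT_SYMBOL_GPL(intel_vgpu_id);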

If the more acceptable solution is to add more function pointers to
intel_gvt_ops instead of exporting symbols, I'm happy to go down that route.

> And why do they somehow have to be out of the tree?  We want them in the
> tree, and so should you, as it will save you time and money if they are.

I also want these hypervisor backends in the tree, but from a
development-workflow perspective, being able to build them as out-of-tree
modules is very convenient. I guess this is also true for the developers
working on the other hypervisor backends.

When I looked at the status quo in i915/gvt a couple of weeks ago, this
decoupling seemed like a win for everyone. Let me just say clearly that we
have no intention of doing binary blob drivers. :)

Thanks,
Julian

[1] https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/i915/gvt/gvt.h#L555


* Re: [RFC PATCH 4/4] drm/i915/gvt: move public gvt headers out into global include
  2020-01-16 14:13       ` Julian Stecklina
@ 2020-01-16 14:23         ` Greg KH
  2020-01-16 15:05           ` Julian Stecklina
  2020-01-17  2:15         ` Zhenyu Wang
  1 sibling, 1 reply; 16+ messages in thread
From: Greg KH @ 2020-01-16 14:23 UTC (permalink / raw)
  To: Julian Stecklina
  Cc: Thomas Prescher, linux-kernel, hang.yuan, dri-devel,
	intel-gvt-dev, zhiyuan.lv

On Thu, Jan 16, 2020 at 03:13:01PM +0100, Julian Stecklina wrote:
> Hi Greg, Christoph,
> 
> On Wed, 2020-01-15 at 16:22 +0100, Greg KH wrote:
> > On Thu, Jan 09, 2020 at 07:13:57PM +0200, Julian Stecklina wrote:
> > > Now that the GVT interface to hypervisors does not depend on i915/GVT
> > > internals anymore, we can move the headers to the global include/.
> > > 
> > > This makes out-of-tree modules for hypervisor integration possible.
> > 
> > What kind of out-of-tree modules do you need/want for this?
> 
> The mediated virtualization support in the i915 driver needs a backend to the
> hypervisor. There is currently one backend for KVM in the tree
> (drivers/gpu/drm/i915/gvt/kvmgt.c) and at least 3 other hypervisor backends out
> of tree in various states of development that I know of. We are currently
> developing one of these.

Great, then just submit this patch series as part of your patch series
when submitting yoru hypervisor code.  That's the normal way to export
new symbols, we can't do so without an in-kernel user.

thanks,

greg k-h

* Re: [RFC PATCH 4/4] drm/i915/gvt: move public gvt headers out into global include
  2020-01-16 14:23         ` Greg KH
@ 2020-01-16 15:05           ` Julian Stecklina
  2020-01-16 19:48             ` Greg KH
  0 siblings, 1 reply; 16+ messages in thread
From: Julian Stecklina @ 2020-01-16 15:05 UTC (permalink / raw)
  To: Greg KH
  Cc: Thomas Prescher, linux-kernel, hang.yuan, dri-devel,
	intel-gvt-dev, zhiyuan.lv

Hi Greg,

On Thu, 2020-01-16 at 15:23 +0100, Greg KH wrote:
> On Thu, Jan 16, 2020 at 03:13:01PM +0100, Julian Stecklina wrote:
> > Hi Greg, Christoph,
> > 
> > On Wed, 2020-01-15 at 16:22 +0100, Greg KH wrote:
> > > On Thu, Jan 09, 2020 at 07:13:57PM +0200, Julian Stecklina wrote:
> > > > Now that the GVT interface to hypervisors does not depend on i915/GVT
> > > > internals anymore, we can move the headers to the global include/.
> > > > 
> > > > This makes out-of-tree modules for hypervisor integration possible.
> > > 
> > > What kind of out-of-tree modules do you need/want for this?
> > 
> > The mediated virtualization support in the i915 driver needs a backend to
> > the
> > hypervisor. There is currently one backend for KVM in the tree
> > (drivers/gpu/drm/i915/gvt/kvmgt.c) and at least 3 other hypervisor backends
> > out
> > of tree in various states of development that I know of. We are currently
> > developing one of these.
> 
> Great, then just submit this patch series as part of your patch series
> when submitting your hypervisor code.  That's the normal way to export
> new symbols, we can't do so without an in-kernel user.

Fair enough.

As I already said, the KVMGT code is the in-kernel user. But I guess I can
extend the existing function-pointer approach to decoupling KVMGT from i915
and be on my way without exporting any symbols.

Somewhat independent of the current discussion, I also think it's valuable to
have a defined API (I'm not saying a stable API) for the hypervisor backends,
one that spells out what's okay and not okay for them to do.

Thanks,
Julian


* Re: [RFC PATCH 4/4] drm/i915/gvt: move public gvt headers out into global include
  2020-01-16 15:05           ` Julian Stecklina
@ 2020-01-16 19:48             ` Greg KH
  0 siblings, 0 replies; 16+ messages in thread
From: Greg KH @ 2020-01-16 19:48 UTC (permalink / raw)
  To: Julian Stecklina
  Cc: Thomas Prescher, linux-kernel, hang.yuan, dri-devel,
	intel-gvt-dev, zhiyuan.lv

On Thu, Jan 16, 2020 at 04:05:22PM +0100, Julian Stecklina wrote:
> Hi Greg,
> 
> On Thu, 2020-01-16 at 15:23 +0100, Greg KH wrote:
> > On Thu, Jan 16, 2020 at 03:13:01PM +0100, Julian Stecklina wrote:
> > > Hi Greg, Christoph,
> > > 
> > > On Wed, 2020-01-15 at 16:22 +0100, Greg KH wrote:
> > > > On Thu, Jan 09, 2020 at 07:13:57PM +0200, Julian Stecklina wrote:
> > > > > Now that the GVT interface to hypervisors does not depend on i915/GVT
> > > > > internals anymore, we can move the headers to the global include/.
> > > > > 
> > > > > This makes out-of-tree modules for hypervisor integration possible.
> > > > 
> > > > What kind of out-of-tree modules do you need/want for this?
> > > 
> > > The mediated virtualization support in the i915 driver needs a backend to
> > > the
> > > hypervisor. There is currently one backend for KVM in the tree
> > > (drivers/gpu/drm/i915/gvt/kvmgt.c) and at least 3 other hypervisor backends
> > > out
> > > of tree in various states of development that I know of. We are currently
> > > developing one of these.
> > 
> > Great, then just submit this patch series as part of your patch series
> > when submitting your hypervisor code.  That's the normal way to export
> > new symbols, we can't do so without an in-kernel user.
> 
> Fair enough.
> 
> As I already said, the KVMGT code is the in-kernel user. But I guess I can
> extend the existing function-pointer approach to decoupling KVMGT from i915
> and be on my way without exporting any symbols.
> 
> Somewhat independent of the current discussion, I also think it's valuable to
> have a defined API (I'm not saying a stable API) for the hypervisor backends,
> one that spells out what's okay and not okay for them to do.

The only way to get a "good" api is for at least 3 users of them get
into the kernel tree.  If all you have is one or two, then you go with
what you got, and evolve over time as more get added and find better
ways to use them.

In short, it's just basic evolution, not intelligent design :)

thanks,

greg k-h

* Re: [RFC PATCH 4/4] drm/i915/gvt: move public gvt headers out into global include
  2020-01-16 14:13       ` Julian Stecklina
  2020-01-16 14:23         ` Greg KH
@ 2020-01-17  2:15         ` Zhenyu Wang
  1 sibling, 0 replies; 16+ messages in thread
From: Zhenyu Wang @ 2020-01-17  2:15 UTC (permalink / raw)
  To: Julian Stecklina
  Cc: Thomas Prescher, Greg KH, linux-kernel, dri-devel, hang.yuan,
	intel-gvt-dev, zhiyuan.lv


On 2020.01.16 15:13:01 +0100, Julian Stecklina wrote:
> Hi Greg, Christoph,
> 
> On Wed, 2020-01-15 at 16:22 +0100, Greg KH wrote:
> > On Thu, Jan 09, 2020 at 07:13:57PM +0200, Julian Stecklina wrote:
> > > Now that the GVT interface to hypervisors does not depend on i915/GVT
> > > internals anymore, we can move the headers to the global include/.
> > > 
> > > This makes out-of-tree modules for hypervisor integration possible.
> > 
> > What kind of out-of-tree modules do you need/want for this?
> 
> The mediated virtualization support in the i915 driver needs a backend to the
> hypervisor. There is currently one backend for KVM in the tree
> (drivers/gpu/drm/i915/gvt/kvmgt.c) and at least 3 other hypervisor backends out
> of tree in various states of development that I know of. We are currently
> developing one of these.
> 
> > 
> > Also, as Christoph said, adding exports for functions that are not used
> > by anything within the kernel tree itself is not ok, that's not how we
> > work.
> 
> The exports are used by the KVM hypervisor backend. The patchset I sent
> basically decouples KVMGT from i915 driver internals. So personally I would
> count this as a benefit in itself.
> 
> There is already an indirection in place that looks like it is intended to
> decouple the hypervisor backends from the i915 driver core: intel_gvt_ops [1].
> This is a struct of function pointers that the hypervisor backend uses to talk
> to the GPU mediator code.
> 
> Unfortunately, this struct doesn't cover all use cases, and in a handful of
> places the KVM hypervisor backend directly touches the i915 device's internal
> state. My current solution was to wrap these accesses in accessor functions
> and EXPORT_SYMBOL_GPL them.
> 
> If the more acceptable solution is to add more function pointers to
> intel_gvt_ops instead of exporting symbols, I'm happy to go down that route.
>

That depends on the hypervisor's requirements and purpose. If it needs the
gvt device model for some function, e.g. to emulate MMIO, we want that to be
a general gvt_ops entry; if it just tries to retrieve some vgpu info, we
might see whether a common wrapper around the internal data would be easier.

> > And why do they somehow have to be out of the tree?  We want them in the
> > tree, and so should you, as it will save you time and money if they are.
> 
> I also want these hypervisor backends in the tree, but from a
> development-workflow perspective, being able to build them as out-of-tree
> modules is very convenient. I guess this is also true for the developers
> working on the other hypervisor backends.
> 
> When I looked at the status quo in i915/gvt a couple of weeks ago, this
> decoupling seemed like a win for everyone. Let me just say clearly that we
> have no intention of doing binary blob drivers. :)
>

yeah, we'd like to see more hypervisor support, and a clearer interface
between the core device model and the backends.

thanks

-- 
Open Source Technology Center, Intel ltd.

$gpg --keyserver wwwkeys.pgp.net --recv-keys 4D781827


* Re: [RFC PATCH 1/4] drm/i915/gvt: make gvt oblivious of kvmgt data structures
  2020-01-09 17:13   ` [RFC PATCH 1/4] drm/i915/gvt: make gvt oblivious of kvmgt data structures Julian Stecklina
@ 2020-01-20  6:22     ` Zhenyu Wang
  2020-01-20  6:33       ` Zhenyu Wang
  0 siblings, 1 reply; 16+ messages in thread
From: Zhenyu Wang @ 2020-01-20  6:22 UTC (permalink / raw)
  To: Julian Stecklina
  Cc: linux-kernel, dri-devel, hang.yuan, zhiyuan.lv, intel-gvt-dev


On 2020.01.09 19:13:54 +0200, Julian Stecklina wrote:
> Instead of defining KVMGT per-device state in struct intel_vgpu
> directly, add an indirection. This makes the GVT code oblivious of
> what state KVMGT needs to keep.
> 
> The intention here is to eventually make it possible to build
> hypervisor backends for the mediator, without having to touch the
> mediator itself. This is a first step.
> 
> Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
> 
> Signed-off-by: Julian Stecklina <julian.stecklina@cyberus-technology.de>
> ---

Acked-by: Zhenyu Wang <zhenyuw@linux.intel.com>

>  drivers/gpu/drm/i915/gvt/gvt.h   |  32 +---
>  drivers/gpu/drm/i915/gvt/kvmgt.c | 287 +++++++++++++++++++------------
>  2 files changed, 184 insertions(+), 135 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
> index 0081b051d3e0..2604739e5680 100644
> --- a/drivers/gpu/drm/i915/gvt/gvt.h
> +++ b/drivers/gpu/drm/i915/gvt/gvt.h
> @@ -196,31 +196,8 @@ struct intel_vgpu {
>  
>  	struct dentry *debugfs;
>  
> -#if IS_ENABLED(CONFIG_DRM_I915_GVT_KVMGT)
> -	struct {
> -		struct mdev_device *mdev;
> -		struct vfio_region *region;
> -		int num_regions;
> -		struct eventfd_ctx *intx_trigger;
> -		struct eventfd_ctx *msi_trigger;
> -
> -		/*
> -		 * Two caches are used to avoid mapping duplicated pages (eg.
> -		 * scratch pages). This help to reduce dma setup overhead.
> -		 */
> -		struct rb_root gfn_cache;
> -		struct rb_root dma_addr_cache;
> -		unsigned long nr_cache_entries;
> -		struct mutex cache_lock;
> -
> -		struct notifier_block iommu_notifier;
> -		struct notifier_block group_notifier;
> -		struct kvm *kvm;
> -		struct work_struct release_work;
> -		atomic_t released;
> -		struct vfio_device *vfio_device;
> -	} vdev;
> -#endif
> +	/* Hypervisor-specific device state. */
> +	void *vdev;
>  
>  	struct list_head dmabuf_obj_list_head;
>  	struct mutex dmabuf_lock;
> @@ -231,6 +208,11 @@ struct intel_vgpu {
>  	u32 scan_nonprivbb;
>  };
>  
> +static inline void *intel_vgpu_vdev(struct intel_vgpu *vgpu)
> +{
> +	return vgpu->vdev;
> +}
> +
>  /* validating GM healthy status*/
>  #define vgpu_is_vm_unhealthy(ret_val) \
>  	(((ret_val) == -EBADRQC) || ((ret_val) == -EFAULT))
> diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
> index bd79a9718cc7..d725a4fb94b9 100644
> --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> @@ -108,6 +108,36 @@ struct gvt_dma {
>  	struct kref ref;
>  };
>  
> +struct kvmgt_vdev {
> +	struct intel_vgpu *vgpu;
> +	struct mdev_device *mdev;
> +	struct vfio_region *region;
> +	int num_regions;
> +	struct eventfd_ctx *intx_trigger;
> +	struct eventfd_ctx *msi_trigger;
> +
> +	/*
> +	 * Two caches are used to avoid mapping duplicated pages (eg.
> +	 * scratch pages). This help to reduce dma setup overhead.
> +	 */
> +	struct rb_root gfn_cache;
> +	struct rb_root dma_addr_cache;
> +	unsigned long nr_cache_entries;
> +	struct mutex cache_lock;
> +
> +	struct notifier_block iommu_notifier;
> +	struct notifier_block group_notifier;
> +	struct kvm *kvm;
> +	struct work_struct release_work;
> +	atomic_t released;
> +	struct vfio_device *vfio_device;
> +};
> +
> +static inline struct kvmgt_vdev *kvmgt_vdev(struct intel_vgpu *vgpu)
> +{
> +	return intel_vgpu_vdev(vgpu);
> +}
> +
>  static inline bool handle_valid(unsigned long handle)
>  {
>  	return !!(handle & ~0xff);
> @@ -129,7 +159,7 @@ static void gvt_unpin_guest_page(struct intel_vgpu *vgpu, unsigned long gfn,
>  	for (npage = 0; npage < total_pages; npage++) {
>  		unsigned long cur_gfn = gfn + npage;
>  
> -		ret = vfio_unpin_pages(mdev_dev(vgpu->vdev.mdev), &cur_gfn, 1);
> +		ret = vfio_unpin_pages(mdev_dev(kvmgt_vdev(vgpu)->mdev), &cur_gfn, 1);
>  		WARN_ON(ret != 1);
>  	}
>  }
> @@ -152,7 +182,7 @@ static int gvt_pin_guest_page(struct intel_vgpu *vgpu, unsigned long gfn,
>  		unsigned long cur_gfn = gfn + npage;
>  		unsigned long pfn;
>  
> -		ret = vfio_pin_pages(mdev_dev(vgpu->vdev.mdev), &cur_gfn, 1,
> +		ret = vfio_pin_pages(mdev_dev(kvmgt_vdev(vgpu)->mdev), &cur_gfn, 1,
>  				     IOMMU_READ | IOMMU_WRITE, &pfn);
>  		if (ret != 1) {
>  			gvt_vgpu_err("vfio_pin_pages failed for gfn 0x%lx, ret %d\n",
> @@ -219,7 +249,7 @@ static void gvt_dma_unmap_page(struct intel_vgpu *vgpu, unsigned long gfn,
>  static struct gvt_dma *__gvt_cache_find_dma_addr(struct intel_vgpu *vgpu,
>  		dma_addr_t dma_addr)
>  {
> -	struct rb_node *node = vgpu->vdev.dma_addr_cache.rb_node;
> +	struct rb_node *node = kvmgt_vdev(vgpu)->dma_addr_cache.rb_node;
>  	struct gvt_dma *itr;
>  
>  	while (node) {
> @@ -237,7 +267,7 @@ static struct gvt_dma *__gvt_cache_find_dma_addr(struct intel_vgpu *vgpu,
>  
>  static struct gvt_dma *__gvt_cache_find_gfn(struct intel_vgpu *vgpu, gfn_t gfn)
>  {
> -	struct rb_node *node = vgpu->vdev.gfn_cache.rb_node;
> +	struct rb_node *node = kvmgt_vdev(vgpu)->gfn_cache.rb_node;
>  	struct gvt_dma *itr;
>  
>  	while (node) {
> @@ -258,6 +288,7 @@ static int __gvt_cache_add(struct intel_vgpu *vgpu, gfn_t gfn,
>  {
>  	struct gvt_dma *new, *itr;
>  	struct rb_node **link, *parent = NULL;
> +	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
>  
>  	new = kzalloc(sizeof(struct gvt_dma), GFP_KERNEL);
>  	if (!new)
> @@ -270,7 +301,7 @@ static int __gvt_cache_add(struct intel_vgpu *vgpu, gfn_t gfn,
>  	kref_init(&new->ref);
>  
>  	/* gfn_cache maps gfn to struct gvt_dma. */
> -	link = &vgpu->vdev.gfn_cache.rb_node;
> +	link = &vdev->gfn_cache.rb_node;
>  	while (*link) {
>  		parent = *link;
>  		itr = rb_entry(parent, struct gvt_dma, gfn_node);
> @@ -281,11 +312,11 @@ static int __gvt_cache_add(struct intel_vgpu *vgpu, gfn_t gfn,
>  			link = &parent->rb_right;
>  	}
>  	rb_link_node(&new->gfn_node, parent, link);
> -	rb_insert_color(&new->gfn_node, &vgpu->vdev.gfn_cache);
> +	rb_insert_color(&new->gfn_node, &vdev->gfn_cache);
>  
>  	/* dma_addr_cache maps dma addr to struct gvt_dma. */
>  	parent = NULL;
> -	link = &vgpu->vdev.dma_addr_cache.rb_node;
> +	link = &vdev->dma_addr_cache.rb_node;
>  	while (*link) {
>  		parent = *link;
>  		itr = rb_entry(parent, struct gvt_dma, dma_addr_node);
> @@ -296,46 +327,51 @@ static int __gvt_cache_add(struct intel_vgpu *vgpu, gfn_t gfn,
>  			link = &parent->rb_right;
>  	}
>  	rb_link_node(&new->dma_addr_node, parent, link);
> -	rb_insert_color(&new->dma_addr_node, &vgpu->vdev.dma_addr_cache);
> +	rb_insert_color(&new->dma_addr_node, &vdev->dma_addr_cache);
>  
> -	vgpu->vdev.nr_cache_entries++;
> +	vdev->nr_cache_entries++;
>  	return 0;
>  }
>  
>  static void __gvt_cache_remove_entry(struct intel_vgpu *vgpu,
>  				struct gvt_dma *entry)
>  {
> -	rb_erase(&entry->gfn_node, &vgpu->vdev.gfn_cache);
> -	rb_erase(&entry->dma_addr_node, &vgpu->vdev.dma_addr_cache);
> +	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
> +
> +	rb_erase(&entry->gfn_node, &vdev->gfn_cache);
> +	rb_erase(&entry->dma_addr_node, &vdev->dma_addr_cache);
>  	kfree(entry);
> -	vgpu->vdev.nr_cache_entries--;
> +	vdev->nr_cache_entries--;
>  }
>  
>  static void gvt_cache_destroy(struct intel_vgpu *vgpu)
>  {
>  	struct gvt_dma *dma;
>  	struct rb_node *node = NULL;
> +	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
>  
>  	for (;;) {
> -		mutex_lock(&vgpu->vdev.cache_lock);
> -		node = rb_first(&vgpu->vdev.gfn_cache);
> +		mutex_lock(&vdev->cache_lock);
> +		node = rb_first(&vdev->gfn_cache);
>  		if (!node) {
> -			mutex_unlock(&vgpu->vdev.cache_lock);
> +			mutex_unlock(&vdev->cache_lock);
>  			break;
>  		}
>  		dma = rb_entry(node, struct gvt_dma, gfn_node);
>  		gvt_dma_unmap_page(vgpu, dma->gfn, dma->dma_addr, dma->size);
>  		__gvt_cache_remove_entry(vgpu, dma);
> -		mutex_unlock(&vgpu->vdev.cache_lock);
> +		mutex_unlock(&vdev->cache_lock);
>  	}
>  }
>  
>  static void gvt_cache_init(struct intel_vgpu *vgpu)
>  {
> -	vgpu->vdev.gfn_cache = RB_ROOT;
> -	vgpu->vdev.dma_addr_cache = RB_ROOT;
> -	vgpu->vdev.nr_cache_entries = 0;
> -	mutex_init(&vgpu->vdev.cache_lock);
> +	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
> +
> +	vdev->gfn_cache = RB_ROOT;
> +	vdev->dma_addr_cache = RB_ROOT;
> +	vdev->nr_cache_entries = 0;
> +	mutex_init(&vdev->cache_lock);
>  }
>  
>  static void kvmgt_protect_table_init(struct kvmgt_guest_info *info)
> @@ -409,16 +445,18 @@ static void kvmgt_protect_table_del(struct kvmgt_guest_info *info,
>  static size_t intel_vgpu_reg_rw_opregion(struct intel_vgpu *vgpu, char *buf,
>  		size_t count, loff_t *ppos, bool iswrite)
>  {
> +	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
>  	unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) -
>  			VFIO_PCI_NUM_REGIONS;
> -	void *base = vgpu->vdev.region[i].data;
> +	void *base = vdev->region[i].data;
>  	loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK;
>  
> -	if (pos >= vgpu->vdev.region[i].size || iswrite) {
> +
> +	if (pos >= vdev->region[i].size || iswrite) {
>  		gvt_vgpu_err("invalid op or offset for Intel vgpu OpRegion\n");
>  		return -EINVAL;
>  	}
> -	count = min(count, (size_t)(vgpu->vdev.region[i].size - pos));
> +	count = min(count, (size_t)(vdev->region[i].size - pos));
>  	memcpy(buf, base + pos, count);
>  
>  	return count;
> @@ -512,7 +550,7 @@ static size_t intel_vgpu_reg_rw_edid(struct intel_vgpu *vgpu, char *buf,
>  	unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) -
>  			VFIO_PCI_NUM_REGIONS;
>  	struct vfio_edid_region *region =
> -		(struct vfio_edid_region *)vgpu->vdev.region[i].data;
> +		(struct vfio_edid_region *)kvmgt_vdev(vgpu)->region[i].data;
>  	loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK;
>  
>  	if (pos < region->vfio_edid_regs.edid_offset) {
> @@ -544,32 +582,34 @@ static int intel_vgpu_register_reg(struct intel_vgpu *vgpu,
>  		const struct intel_vgpu_regops *ops,
>  		size_t size, u32 flags, void *data)
>  {
> +	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
>  	struct vfio_region *region;
>  
> -	region = krealloc(vgpu->vdev.region,
> -			(vgpu->vdev.num_regions + 1) * sizeof(*region),
> +	region = krealloc(vdev->region,
> +			(vdev->num_regions + 1) * sizeof(*region),
>  			GFP_KERNEL);
>  	if (!region)
>  		return -ENOMEM;
>  
> -	vgpu->vdev.region = region;
> -	vgpu->vdev.region[vgpu->vdev.num_regions].type = type;
> -	vgpu->vdev.region[vgpu->vdev.num_regions].subtype = subtype;
> -	vgpu->vdev.region[vgpu->vdev.num_regions].ops = ops;
> -	vgpu->vdev.region[vgpu->vdev.num_regions].size = size;
> -	vgpu->vdev.region[vgpu->vdev.num_regions].flags = flags;
> -	vgpu->vdev.region[vgpu->vdev.num_regions].data = data;
> -	vgpu->vdev.num_regions++;
> +	vdev->region = region;
> +	vdev->region[vdev->num_regions].type = type;
> +	vdev->region[vdev->num_regions].subtype = subtype;
> +	vdev->region[vdev->num_regions].ops = ops;
> +	vdev->region[vdev->num_regions].size = size;
> +	vdev->region[vdev->num_regions].flags = flags;
> +	vdev->region[vdev->num_regions].data = data;
> +	vdev->num_regions++;
>  	return 0;
>  }
>  
>  static int kvmgt_get_vfio_device(void *p_vgpu)
>  {
>  	struct intel_vgpu *vgpu = (struct intel_vgpu *)p_vgpu;
> +	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
>  
> -	vgpu->vdev.vfio_device = vfio_device_get_from_dev(
> -		mdev_dev(vgpu->vdev.mdev));
> -	if (!vgpu->vdev.vfio_device) {
> +	vdev->vfio_device = vfio_device_get_from_dev(
> +		mdev_dev(vdev->mdev));
> +	if (!vdev->vfio_device) {
>  		gvt_vgpu_err("failed to get vfio device\n");
>  		return -ENODEV;
>  	}
> @@ -637,10 +677,12 @@ static int kvmgt_set_edid(void *p_vgpu, int port_num)
>  
>  static void kvmgt_put_vfio_device(void *vgpu)
>  {
> -	if (WARN_ON(!((struct intel_vgpu *)vgpu)->vdev.vfio_device))
> +	struct kvmgt_vdev *vdev = kvmgt_vdev((struct intel_vgpu *)vgpu);
> +
> +	if (WARN_ON(!vdev->vfio_device))
>  		return;
>  
> -	vfio_device_put(((struct intel_vgpu *)vgpu)->vdev.vfio_device);
> +	vfio_device_put(vdev->vfio_device);
>  }
>  
>  static int intel_vgpu_create(struct kobject *kobj, struct mdev_device *mdev)
> @@ -669,9 +711,9 @@ static int intel_vgpu_create(struct kobject *kobj, struct mdev_device *mdev)
>  		goto out;
>  	}
>  
> -	INIT_WORK(&vgpu->vdev.release_work, intel_vgpu_release_work);
> +	INIT_WORK(&kvmgt_vdev(vgpu)->release_work, intel_vgpu_release_work);
>  
> -	vgpu->vdev.mdev = mdev;
> +	kvmgt_vdev(vgpu)->mdev = mdev;
>  	mdev_set_drvdata(mdev, vgpu);
>  
>  	gvt_dbg_core("intel_vgpu_create succeeded for mdev: %s\n",
> @@ -696,9 +738,10 @@ static int intel_vgpu_remove(struct mdev_device *mdev)
>  static int intel_vgpu_iommu_notifier(struct notifier_block *nb,
>  				     unsigned long action, void *data)
>  {
> -	struct intel_vgpu *vgpu = container_of(nb,
> -					struct intel_vgpu,
> -					vdev.iommu_notifier);
> +	struct kvmgt_vdev *vdev = container_of(nb,
> +					       struct kvmgt_vdev,
> +					       iommu_notifier);
> +	struct intel_vgpu *vgpu = vdev->vgpu;
>  
>  	if (action == VFIO_IOMMU_NOTIFY_DMA_UNMAP) {
>  		struct vfio_iommu_type1_dma_unmap *unmap = data;
> @@ -708,7 +751,7 @@ static int intel_vgpu_iommu_notifier(struct notifier_block *nb,
>  		iov_pfn = unmap->iova >> PAGE_SHIFT;
>  		end_iov_pfn = iov_pfn + unmap->size / PAGE_SIZE;
>  
> -		mutex_lock(&vgpu->vdev.cache_lock);
> +		mutex_lock(&vdev->cache_lock);
>  		for (; iov_pfn < end_iov_pfn; iov_pfn++) {
>  			entry = __gvt_cache_find_gfn(vgpu, iov_pfn);
>  			if (!entry)
> @@ -718,7 +761,7 @@ static int intel_vgpu_iommu_notifier(struct notifier_block *nb,
>  					   entry->size);
>  			__gvt_cache_remove_entry(vgpu, entry);
>  		}
> -		mutex_unlock(&vgpu->vdev.cache_lock);
> +		mutex_unlock(&vdev->cache_lock);
>  	}
>  
>  	return NOTIFY_OK;
> @@ -727,16 +770,16 @@ static int intel_vgpu_iommu_notifier(struct notifier_block *nb,
>  static int intel_vgpu_group_notifier(struct notifier_block *nb,
>  				     unsigned long action, void *data)
>  {
> -	struct intel_vgpu *vgpu = container_of(nb,
> -					struct intel_vgpu,
> -					vdev.group_notifier);
> +	struct kvmgt_vdev *vdev = container_of(nb,
> +					       struct kvmgt_vdev,
> +					       group_notifier);
>  
>  	/* the only action we care about */
>  	if (action == VFIO_GROUP_NOTIFY_SET_KVM) {
> -		vgpu->vdev.kvm = data;
> +		vdev->kvm = data;
>  
>  		if (!data)
> -			schedule_work(&vgpu->vdev.release_work);
> +			schedule_work(&vdev->release_work);
>  	}
>  
>  	return NOTIFY_OK;
> @@ -745,15 +788,16 @@ static int intel_vgpu_group_notifier(struct notifier_block *nb,
>  static int intel_vgpu_open(struct mdev_device *mdev)
>  {
>  	struct intel_vgpu *vgpu = mdev_get_drvdata(mdev);
> +	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
>  	unsigned long events;
>  	int ret;
>  
> -	vgpu->vdev.iommu_notifier.notifier_call = intel_vgpu_iommu_notifier;
> -	vgpu->vdev.group_notifier.notifier_call = intel_vgpu_group_notifier;
> +	vdev->iommu_notifier.notifier_call = intel_vgpu_iommu_notifier;
> +	vdev->group_notifier.notifier_call = intel_vgpu_group_notifier;
>  
>  	events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
>  	ret = vfio_register_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY, &events,
> -				&vgpu->vdev.iommu_notifier);
> +				&vdev->iommu_notifier);
>  	if (ret != 0) {
>  		gvt_vgpu_err("vfio_register_notifier for iommu failed: %d\n",
>  			ret);
> @@ -762,7 +806,7 @@ static int intel_vgpu_open(struct mdev_device *mdev)
>  
>  	events = VFIO_GROUP_NOTIFY_SET_KVM;
>  	ret = vfio_register_notifier(mdev_dev(mdev), VFIO_GROUP_NOTIFY, &events,
> -				&vgpu->vdev.group_notifier);
> +				&vdev->group_notifier);
>  	if (ret != 0) {
>  		gvt_vgpu_err("vfio_register_notifier for group failed: %d\n",
>  			ret);
> @@ -781,50 +825,52 @@ static int intel_vgpu_open(struct mdev_device *mdev)
>  
>  	intel_gvt_ops->vgpu_activate(vgpu);
>  
> -	atomic_set(&vgpu->vdev.released, 0);
> +	atomic_set(&vdev->released, 0);
>  	return ret;
>  
>  undo_group:
>  	vfio_unregister_notifier(mdev_dev(mdev), VFIO_GROUP_NOTIFY,
> -					&vgpu->vdev.group_notifier);
> +					&vdev->group_notifier);
>  
>  undo_iommu:
>  	vfio_unregister_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY,
> -					&vgpu->vdev.iommu_notifier);
> +					&vdev->iommu_notifier);
>  out:
>  	return ret;
>  }
>  
>  static void intel_vgpu_release_msi_eventfd_ctx(struct intel_vgpu *vgpu)
>  {
> +	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
>  	struct eventfd_ctx *trigger;
>  
> -	trigger = vgpu->vdev.msi_trigger;
> +	trigger = vdev->msi_trigger;
>  	if (trigger) {
>  		eventfd_ctx_put(trigger);
> -		vgpu->vdev.msi_trigger = NULL;
> +		vdev->msi_trigger = NULL;
>  	}
>  }
>  
>  static void __intel_vgpu_release(struct intel_vgpu *vgpu)
>  {
> +	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
>  	struct kvmgt_guest_info *info;
>  	int ret;
>  
>  	if (!handle_valid(vgpu->handle))
>  		return;
>  
> -	if (atomic_cmpxchg(&vgpu->vdev.released, 0, 1))
> +	if (atomic_cmpxchg(&vdev->released, 0, 1))
>  		return;
>  
>  	intel_gvt_ops->vgpu_release(vgpu);
>  
> -	ret = vfio_unregister_notifier(mdev_dev(vgpu->vdev.mdev), VFIO_IOMMU_NOTIFY,
> -					&vgpu->vdev.iommu_notifier);
> +	ret = vfio_unregister_notifier(mdev_dev(vdev->mdev), VFIO_IOMMU_NOTIFY,
> +					&vdev->iommu_notifier);
>  	WARN(ret, "vfio_unregister_notifier for iommu failed: %d\n", ret);
>  
> -	ret = vfio_unregister_notifier(mdev_dev(vgpu->vdev.mdev), VFIO_GROUP_NOTIFY,
> -					&vgpu->vdev.group_notifier);
> +	ret = vfio_unregister_notifier(mdev_dev(vdev->mdev), VFIO_GROUP_NOTIFY,
> +					&vdev->group_notifier);
>  	WARN(ret, "vfio_unregister_notifier for group failed: %d\n", ret);
>  
>  	/* dereference module reference taken at open */
> @@ -835,7 +881,7 @@ static void __intel_vgpu_release(struct intel_vgpu *vgpu)
>  
>  	intel_vgpu_release_msi_eventfd_ctx(vgpu);
>  
> -	vgpu->vdev.kvm = NULL;
> +	vdev->kvm = NULL;
>  	vgpu->handle = 0;
>  }
>  
> @@ -848,10 +894,10 @@ static void intel_vgpu_release(struct mdev_device *mdev)
>  
>  static void intel_vgpu_release_work(struct work_struct *work)
>  {
> -	struct intel_vgpu *vgpu = container_of(work, struct intel_vgpu,
> -					vdev.release_work);
> +	struct kvmgt_vdev *vdev = container_of(work, struct kvmgt_vdev,
> +					       release_work);
>  
> -	__intel_vgpu_release(vgpu);
> +	__intel_vgpu_release(vdev->vgpu);
>  }
>  
>  static u64 intel_vgpu_get_bar_addr(struct intel_vgpu *vgpu, int bar)
> @@ -933,12 +979,13 @@ static ssize_t intel_vgpu_rw(struct mdev_device *mdev, char *buf,
>  			size_t count, loff_t *ppos, bool is_write)
>  {
>  	struct intel_vgpu *vgpu = mdev_get_drvdata(mdev);
> +	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
>  	unsigned int index = VFIO_PCI_OFFSET_TO_INDEX(*ppos);
>  	u64 pos = *ppos & VFIO_PCI_OFFSET_MASK;
>  	int ret = -EINVAL;
>  
>  
> -	if (index >= VFIO_PCI_NUM_REGIONS + vgpu->vdev.num_regions) {
> +	if (index >= VFIO_PCI_NUM_REGIONS + vdev->num_regions) {
>  		gvt_vgpu_err("invalid index: %u\n", index);
>  		return -EINVAL;
>  	}
> @@ -967,11 +1014,11 @@ static ssize_t intel_vgpu_rw(struct mdev_device *mdev, char *buf,
>  	case VFIO_PCI_ROM_REGION_INDEX:
>  		break;
>  	default:
> -		if (index >= VFIO_PCI_NUM_REGIONS + vgpu->vdev.num_regions)
> +		if (index >= VFIO_PCI_NUM_REGIONS + vdev->num_regions)
>  			return -EINVAL;
>  
>  		index -= VFIO_PCI_NUM_REGIONS;
> -		return vgpu->vdev.region[index].ops->rw(vgpu, buf, count,
> +		return vdev->region[index].ops->rw(vgpu, buf, count,
>  				ppos, is_write);
>  	}
>  
> @@ -1224,7 +1271,7 @@ static int intel_vgpu_set_msi_trigger(struct intel_vgpu *vgpu,
>  			gvt_vgpu_err("eventfd_ctx_fdget failed\n");
>  			return PTR_ERR(trigger);
>  		}
> -		vgpu->vdev.msi_trigger = trigger;
> +		kvmgt_vdev(vgpu)->msi_trigger = trigger;
>  	} else if ((flags & VFIO_IRQ_SET_DATA_NONE) && !count)
>  		intel_vgpu_release_msi_eventfd_ctx(vgpu);
>  
> @@ -1276,6 +1323,7 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
>  			     unsigned long arg)
>  {
>  	struct intel_vgpu *vgpu = mdev_get_drvdata(mdev);
> +	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
>  	unsigned long minsz;
>  
>  	gvt_dbg_core("vgpu%d ioctl, cmd: %d\n", vgpu->id, cmd);
> @@ -1294,7 +1342,7 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
>  		info.flags = VFIO_DEVICE_FLAGS_PCI;
>  		info.flags |= VFIO_DEVICE_FLAGS_RESET;
>  		info.num_regions = VFIO_PCI_NUM_REGIONS +
> -				vgpu->vdev.num_regions;
> +				vdev->num_regions;
>  		info.num_irqs = VFIO_PCI_NUM_IRQS;
>  
>  		return copy_to_user((void __user *)arg, &info, minsz) ?
> @@ -1385,22 +1433,22 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
>  					.header.version = 1 };
>  
>  				if (info.index >= VFIO_PCI_NUM_REGIONS +
> -						vgpu->vdev.num_regions)
> +						vdev->num_regions)
>  					return -EINVAL;
>  				info.index =
>  					array_index_nospec(info.index,
>  							VFIO_PCI_NUM_REGIONS +
> -							vgpu->vdev.num_regions);
> +							vdev->num_regions);
>  
>  				i = info.index - VFIO_PCI_NUM_REGIONS;
>  
>  				info.offset =
>  					VFIO_PCI_INDEX_TO_OFFSET(info.index);
> -				info.size = vgpu->vdev.region[i].size;
> -				info.flags = vgpu->vdev.region[i].flags;
> +				info.size = vdev->region[i].size;
> +				info.flags = vdev->region[i].flags;
>  
> -				cap_type.type = vgpu->vdev.region[i].type;
> -				cap_type.subtype = vgpu->vdev.region[i].subtype;
> +				cap_type.type = vdev->region[i].type;
> +				cap_type.subtype = vdev->region[i].subtype;
>  
>  				ret = vfio_info_add_capability(&caps,
>  							&cap_type.header,
> @@ -1740,13 +1788,15 @@ static int kvmgt_guest_init(struct mdev_device *mdev)
>  {
>  	struct kvmgt_guest_info *info;
>  	struct intel_vgpu *vgpu;
> +	struct kvmgt_vdev *vdev;
>  	struct kvm *kvm;
>  
>  	vgpu = mdev_get_drvdata(mdev);
>  	if (handle_valid(vgpu->handle))
>  		return -EEXIST;
>  
> -	kvm = vgpu->vdev.kvm;
> +	vdev = kvmgt_vdev(vgpu);
> +	kvm = vdev->kvm;
>  	if (!kvm || kvm->mm != current->mm) {
>  		gvt_vgpu_err("KVM is required to use Intel vGPU\n");
>  		return -ESRCH;
> @@ -1776,7 +1826,7 @@ static int kvmgt_guest_init(struct mdev_device *mdev)
>  	info->debugfs_cache_entries = debugfs_create_ulong(
>  						"kvmgt_nr_cache_entries",
>  						0444, vgpu->debugfs,
> -						&vgpu->vdev.nr_cache_entries);
> +						&vdev->nr_cache_entries);
>  	return 0;
>  }
>  
> @@ -1793,9 +1843,17 @@ static bool kvmgt_guest_exit(struct kvmgt_guest_info *info)
>  	return true;
>  }
>  
> -static int kvmgt_attach_vgpu(void *vgpu, unsigned long *handle)
> +static int kvmgt_attach_vgpu(void *p_vgpu, unsigned long *handle)
>  {
> -	/* nothing to do here */
> +	struct intel_vgpu *vgpu = (struct intel_vgpu *)p_vgpu;
> +
> +	vgpu->vdev = kzalloc(sizeof(struct kvmgt_vdev), GFP_KERNEL);
> +
> +	if (!vgpu->vdev)
> +		return -ENOMEM;
> +
> +	kvmgt_vdev(vgpu)->vgpu = vgpu;
> +
>  	return 0;
>  }
>  
> @@ -1803,29 +1861,34 @@ static void kvmgt_detach_vgpu(void *p_vgpu)
>  {
>  	int i;
>  	struct intel_vgpu *vgpu = (struct intel_vgpu *)p_vgpu;
> +	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
>  
> -	if (!vgpu->vdev.region)
> +	if (!vdev->region)
>  		return;
>  
> -	for (i = 0; i < vgpu->vdev.num_regions; i++)
> -		if (vgpu->vdev.region[i].ops->release)
> -			vgpu->vdev.region[i].ops->release(vgpu,
> -					&vgpu->vdev.region[i]);
> -	vgpu->vdev.num_regions = 0;
> -	kfree(vgpu->vdev.region);
> -	vgpu->vdev.region = NULL;
> +	for (i = 0; i < vdev->num_regions; i++)
> +		if (vdev->region[i].ops->release)
> +			vdev->region[i].ops->release(vgpu,
> +					&vdev->region[i]);
> +	vdev->num_regions = 0;
> +	kfree(vdev->region);
> +	vdev->region = NULL;
> +
> +	kfree(vdev);
>  }
>  
>  static int kvmgt_inject_msi(unsigned long handle, u32 addr, u16 data)
>  {
>  	struct kvmgt_guest_info *info;
>  	struct intel_vgpu *vgpu;
> +	struct kvmgt_vdev *vdev;
>  
>  	if (!handle_valid(handle))
>  		return -ESRCH;
>  
>  	info = (struct kvmgt_guest_info *)handle;
>  	vgpu = info->vgpu;
> +	vdev = kvmgt_vdev(vgpu);
>  
>  	/*
>  	 * When guest is poweroff, msi_trigger is set to NULL, but vgpu's
> @@ -1836,10 +1899,10 @@ static int kvmgt_inject_msi(unsigned long handle, u32 addr, u16 data)
>  	 * enabled by guest. so if msi_trigger is null, success is still
>  	 * returned and don't inject interrupt into guest.
>  	 */
> -	if (vgpu->vdev.msi_trigger == NULL)
> +	if (vdev->msi_trigger == NULL)
>  		return 0;
>  
> -	if (eventfd_signal(vgpu->vdev.msi_trigger, 1) == 1)
> +	if (eventfd_signal(vdev->msi_trigger, 1) == 1)
>  		return 0;
>  
>  	return -EFAULT;
> @@ -1865,26 +1928,26 @@ static unsigned long kvmgt_gfn_to_pfn(unsigned long handle, unsigned long gfn)
>  static int kvmgt_dma_map_guest_page(unsigned long handle, unsigned long gfn,
>  		unsigned long size, dma_addr_t *dma_addr)
>  {
> -	struct kvmgt_guest_info *info;
>  	struct intel_vgpu *vgpu;
> +	struct kvmgt_vdev *vdev;
>  	struct gvt_dma *entry;
>  	int ret;
>  
>  	if (!handle_valid(handle))
>  		return -EINVAL;
>  
> -	info = (struct kvmgt_guest_info *)handle;
> -	vgpu = info->vgpu;
> +	vgpu = ((struct kvmgt_guest_info *)handle)->vgpu;
> +	vdev = kvmgt_vdev(vgpu);
>  
> -	mutex_lock(&info->vgpu->vdev.cache_lock);
> +	mutex_lock(&vdev->cache_lock);
>  
> -	entry = __gvt_cache_find_gfn(info->vgpu, gfn);
> +	entry = __gvt_cache_find_gfn(vgpu, gfn);
>  	if (!entry) {
>  		ret = gvt_dma_map_page(vgpu, gfn, dma_addr, size);
>  		if (ret)
>  			goto err_unlock;
>  
> -		ret = __gvt_cache_add(info->vgpu, gfn, *dma_addr, size);
> +		ret = __gvt_cache_add(vgpu, gfn, *dma_addr, size);
>  		if (ret)
>  			goto err_unmap;
>  	} else if (entry->size != size) {
> @@ -1896,7 +1959,7 @@ static int kvmgt_dma_map_guest_page(unsigned long handle, unsigned long gfn,
>  		if (ret)
>  			goto err_unlock;
>  
> -		ret = __gvt_cache_add(info->vgpu, gfn, *dma_addr, size);
> +		ret = __gvt_cache_add(vgpu, gfn, *dma_addr, size);
>  		if (ret)
>  			goto err_unmap;
>  	} else {
> @@ -1904,19 +1967,20 @@ static int kvmgt_dma_map_guest_page(unsigned long handle, unsigned long gfn,
>  		*dma_addr = entry->dma_addr;
>  	}
>  
> -	mutex_unlock(&info->vgpu->vdev.cache_lock);
> +	mutex_unlock(&vdev->cache_lock);
>  	return 0;
>  
>  err_unmap:
>  	gvt_dma_unmap_page(vgpu, gfn, *dma_addr, size);
>  err_unlock:
> -	mutex_unlock(&info->vgpu->vdev.cache_lock);
> +	mutex_unlock(&vdev->cache_lock);
>  	return ret;
>  }
>  
>  static int kvmgt_dma_pin_guest_page(unsigned long handle, dma_addr_t dma_addr)
>  {
>  	struct kvmgt_guest_info *info;
> +	struct kvmgt_vdev *vdev;
>  	struct gvt_dma *entry;
>  	int ret = 0;
>  
> @@ -1924,14 +1988,15 @@ static int kvmgt_dma_pin_guest_page(unsigned long handle, dma_addr_t dma_addr)
>  		return -ENODEV;
>  
>  	info = (struct kvmgt_guest_info *)handle;
> +	vdev = kvmgt_vdev(info->vgpu);
>  
> -	mutex_lock(&info->vgpu->vdev.cache_lock);
> +	mutex_lock(&vdev->cache_lock);
>  	entry = __gvt_cache_find_dma_addr(info->vgpu, dma_addr);
>  	if (entry)
>  		kref_get(&entry->ref);
>  	else
>  		ret = -ENOMEM;
> -	mutex_unlock(&info->vgpu->vdev.cache_lock);
> +	mutex_unlock(&vdev->cache_lock);
>  
>  	return ret;
>  }
> @@ -1947,19 +2012,21 @@ static void __gvt_dma_release(struct kref *ref)
>  
>  static void kvmgt_dma_unmap_guest_page(unsigned long handle, dma_addr_t dma_addr)
>  {
> -	struct kvmgt_guest_info *info;
> +	struct intel_vgpu *vgpu;
> +	struct kvmgt_vdev *vdev;
>  	struct gvt_dma *entry;
>  
>  	if (!handle_valid(handle))
>  		return;
>  
> -	info = (struct kvmgt_guest_info *)handle;
> +	vgpu = ((struct kvmgt_guest_info *)handle)->vgpu;
> +	vdev = kvmgt_vdev(vgpu);
>  
> -	mutex_lock(&info->vgpu->vdev.cache_lock);
> -	entry = __gvt_cache_find_dma_addr(info->vgpu, dma_addr);
> +	mutex_lock(&vdev->cache_lock);
> +	entry = __gvt_cache_find_dma_addr(vgpu, dma_addr);
>  	if (entry)
>  		kref_put(&entry->ref, __gvt_dma_release);
> -	mutex_unlock(&info->vgpu->vdev.cache_lock);
> +	mutex_unlock(&vdev->cache_lock);
>  }
>  
>  static int kvmgt_rw_gpa(unsigned long handle, unsigned long gpa,
> -- 
> 2.24.1
> 

-- 
Open Source Technology Center, Intel ltd.

$gpg --keyserver wwwkeys.pgp.net --recv-keys 4D781827


* Re: [RFC PATCH 2/4] drm/i915/gvt: remove unused vblank_done completion
  2020-01-09 17:13   ` [RFC PATCH 2/4] drm/i915/gvt: remove unused vblank_done completion Julian Stecklina
@ 2020-01-20  6:23     ` Zhenyu Wang
  0 siblings, 0 replies; 16+ messages in thread
From: Zhenyu Wang @ 2020-01-20  6:23 UTC (permalink / raw)
  To: Julian Stecklina
  Cc: linux-kernel, dri-devel, hang.yuan, zhiyuan.lv, intel-gvt-dev


On 2020.01.09 19:13:55 +0200, Julian Stecklina wrote:
> This variable is used nowhere, so remove it.
> 
> Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
> 
> Signed-off-by: Julian Stecklina <julian.stecklina@cyberus-technology.de>
> ---

Thanks for catching this.

Acked-by: Zhenyu Wang <zhenyuw@linux.intel.com>

>  drivers/gpu/drm/i915/gvt/gvt.h   | 2 --
>  drivers/gpu/drm/i915/gvt/kvmgt.c | 2 --
>  2 files changed, 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
> index 2604739e5680..8cf292a8d6bd 100644
> --- a/drivers/gpu/drm/i915/gvt/gvt.h
> +++ b/drivers/gpu/drm/i915/gvt/gvt.h
> @@ -203,8 +203,6 @@ struct intel_vgpu {
>  	struct mutex dmabuf_lock;
>  	struct idr object_idr;
>  
> -	struct completion vblank_done;
> -
>  	u32 scan_nonprivbb;
>  };
>  
> diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
> index d725a4fb94b9..9a435bc1a2f0 100644
> --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> @@ -1817,8 +1817,6 @@ static int kvmgt_guest_init(struct mdev_device *mdev)
>  	kvmgt_protect_table_init(info);
>  	gvt_cache_init(vgpu);
>  
> -	init_completion(&vgpu->vblank_done);
> -
>  	info->track_node.track_write = kvmgt_page_track_write;
>  	info->track_node.track_flush_slot = kvmgt_page_track_flush_slot;
>  	kvm_page_track_register_notifier(kvm, &info->track_node);
> -- 
> 2.24.1
> 

-- 
Open Source Technology Center, Intel ltd.

$gpg --keyserver wwwkeys.pgp.net --recv-keys 4D781827


* Re: [RFC PATCH 1/4] drm/i915/gvt: make gvt oblivious of kvmgt data structures
  2020-01-20  6:22     ` Zhenyu Wang
@ 2020-01-20  6:33       ` Zhenyu Wang
  2020-01-20 17:25         ` [PATCH] " Julian Stecklina
  2020-01-20 17:28         ` [RFC PATCH 1/4] " Julian Stecklina
  0 siblings, 2 replies; 16+ messages in thread
From: Zhenyu Wang @ 2020-01-20  6:33 UTC (permalink / raw)
  To: Julian Stecklina
  Cc: zhiyuan.lv, intel-gvt-dev, linux-kernel, dri-devel, hang.yuan


On 2020.01.20 14:22:10 +0800, Zhenyu Wang wrote:
> On 2020.01.09 19:13:54 +0200, Julian Stecklina wrote:
> > Instead of defining KVMGT per-device state in struct intel_vgpu
> > directly, add an indirection. This makes the GVT code oblivious of
> > what state KVMGT needs to keep.
> > 
> > The intention here is to eventually make it possible to build
> > hypervisor backends for the mediator, without having to touch the
> > mediator itself. This is a first step.
> > 
> > Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
> > 
> > Signed-off-by: Julian Stecklina <julian.stecklina@cyberus-technology.de>
> > ---
> 
> Acked-by: Zhenyu Wang <zhenyuw@linux.intel.com>
>

Hmm, I failed to apply this one. Could you refresh it against the gvt-staging
branch on https://github.com/intel/gvt-linux?

thanks

> >  drivers/gpu/drm/i915/gvt/gvt.h   |  32 +---
> >  drivers/gpu/drm/i915/gvt/kvmgt.c | 287 +++++++++++++++++++------------
> >  2 files changed, 184 insertions(+), 135 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
> > index 0081b051d3e0..2604739e5680 100644
> > --- a/drivers/gpu/drm/i915/gvt/gvt.h
> > +++ b/drivers/gpu/drm/i915/gvt/gvt.h
> > @@ -196,31 +196,8 @@ struct intel_vgpu {
> >  
> >  	struct dentry *debugfs;
> >  
> > -#if IS_ENABLED(CONFIG_DRM_I915_GVT_KVMGT)
> > -	struct {
> > -		struct mdev_device *mdev;
> > -		struct vfio_region *region;
> > -		int num_regions;
> > -		struct eventfd_ctx *intx_trigger;
> > -		struct eventfd_ctx *msi_trigger;
> > -
> > -		/*
> > -		 * Two caches are used to avoid mapping duplicated pages (eg.
> > -		 * scratch pages). This help to reduce dma setup overhead.
> > -		 */
> > -		struct rb_root gfn_cache;
> > -		struct rb_root dma_addr_cache;
> > -		unsigned long nr_cache_entries;
> > -		struct mutex cache_lock;
> > -
> > -		struct notifier_block iommu_notifier;
> > -		struct notifier_block group_notifier;
> > -		struct kvm *kvm;
> > -		struct work_struct release_work;
> > -		atomic_t released;
> > -		struct vfio_device *vfio_device;
> > -	} vdev;
> > -#endif
> > +	/* Hypervisor-specific device state. */
> > +	void *vdev;
> >  
> >  	struct list_head dmabuf_obj_list_head;
> >  	struct mutex dmabuf_lock;
> > @@ -231,6 +208,11 @@ struct intel_vgpu {
> >  	u32 scan_nonprivbb;
> >  };
> >  
> > +static inline void *intel_vgpu_vdev(struct intel_vgpu *vgpu)
> > +{
> > +	return vgpu->vdev;
> > +}
> > +
> >  /* validating GM healthy status*/
> >  #define vgpu_is_vm_unhealthy(ret_val) \
> >  	(((ret_val) == -EBADRQC) || ((ret_val) == -EFAULT))
> > diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
> > index bd79a9718cc7..d725a4fb94b9 100644
> > --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> > +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> > @@ -108,6 +108,36 @@ struct gvt_dma {
> >  	struct kref ref;
> >  };
> >  
> > +struct kvmgt_vdev {
> > +	struct intel_vgpu *vgpu;
> > +	struct mdev_device *mdev;
> > +	struct vfio_region *region;
> > +	int num_regions;
> > +	struct eventfd_ctx *intx_trigger;
> > +	struct eventfd_ctx *msi_trigger;
> > +
> > +	/*
> > +	 * Two caches are used to avoid mapping duplicated pages (eg.
> > +	 * scratch pages). This help to reduce dma setup overhead.
> > +	 */
> > +	struct rb_root gfn_cache;
> > +	struct rb_root dma_addr_cache;
> > +	unsigned long nr_cache_entries;
> > +	struct mutex cache_lock;
> > +
> > +	struct notifier_block iommu_notifier;
> > +	struct notifier_block group_notifier;
> > +	struct kvm *kvm;
> > +	struct work_struct release_work;
> > +	atomic_t released;
> > +	struct vfio_device *vfio_device;
> > +};
> > +
> > +static inline struct kvmgt_vdev *kvmgt_vdev(struct intel_vgpu *vgpu)
> > +{
> > +	return intel_vgpu_vdev(vgpu);
> > +}
> > +
> >  static inline bool handle_valid(unsigned long handle)
> >  {
> >  	return !!(handle & ~0xff);
> > @@ -129,7 +159,7 @@ static void gvt_unpin_guest_page(struct intel_vgpu *vgpu, unsigned long gfn,
> >  	for (npage = 0; npage < total_pages; npage++) {
> >  		unsigned long cur_gfn = gfn + npage;
> >  
> > -		ret = vfio_unpin_pages(mdev_dev(vgpu->vdev.mdev), &cur_gfn, 1);
> > +		ret = vfio_unpin_pages(mdev_dev(kvmgt_vdev(vgpu)->mdev), &cur_gfn, 1);
> >  		WARN_ON(ret != 1);
> >  	}
> >  }
> > @@ -152,7 +182,7 @@ static int gvt_pin_guest_page(struct intel_vgpu *vgpu, unsigned long gfn,
> >  		unsigned long cur_gfn = gfn + npage;
> >  		unsigned long pfn;
> >  
> > -		ret = vfio_pin_pages(mdev_dev(vgpu->vdev.mdev), &cur_gfn, 1,
> > +		ret = vfio_pin_pages(mdev_dev(kvmgt_vdev(vgpu)->mdev), &cur_gfn, 1,
> >  				     IOMMU_READ | IOMMU_WRITE, &pfn);
> >  		if (ret != 1) {
> >  			gvt_vgpu_err("vfio_pin_pages failed for gfn 0x%lx, ret %d\n",
> > @@ -219,7 +249,7 @@ static void gvt_dma_unmap_page(struct intel_vgpu *vgpu, unsigned long gfn,
> >  static struct gvt_dma *__gvt_cache_find_dma_addr(struct intel_vgpu *vgpu,
> >  		dma_addr_t dma_addr)
> >  {
> > -	struct rb_node *node = vgpu->vdev.dma_addr_cache.rb_node;
> > +	struct rb_node *node = kvmgt_vdev(vgpu)->dma_addr_cache.rb_node;
> >  	struct gvt_dma *itr;
> >  
> >  	while (node) {
> > @@ -237,7 +267,7 @@ static struct gvt_dma *__gvt_cache_find_dma_addr(struct intel_vgpu *vgpu,
> >  
> >  static struct gvt_dma *__gvt_cache_find_gfn(struct intel_vgpu *vgpu, gfn_t gfn)
> >  {
> > -	struct rb_node *node = vgpu->vdev.gfn_cache.rb_node;
> > +	struct rb_node *node = kvmgt_vdev(vgpu)->gfn_cache.rb_node;
> >  	struct gvt_dma *itr;
> >  
> >  	while (node) {
> > @@ -258,6 +288,7 @@ static int __gvt_cache_add(struct intel_vgpu *vgpu, gfn_t gfn,
> >  {
> >  	struct gvt_dma *new, *itr;
> >  	struct rb_node **link, *parent = NULL;
> > +	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
> >  
> >  	new = kzalloc(sizeof(struct gvt_dma), GFP_KERNEL);
> >  	if (!new)
> > @@ -270,7 +301,7 @@ static int __gvt_cache_add(struct intel_vgpu *vgpu, gfn_t gfn,
> >  	kref_init(&new->ref);
> >  
> >  	/* gfn_cache maps gfn to struct gvt_dma. */
> > -	link = &vgpu->vdev.gfn_cache.rb_node;
> > +	link = &vdev->gfn_cache.rb_node;
> >  	while (*link) {
> >  		parent = *link;
> >  		itr = rb_entry(parent, struct gvt_dma, gfn_node);
> > @@ -281,11 +312,11 @@ static int __gvt_cache_add(struct intel_vgpu *vgpu, gfn_t gfn,
> >  			link = &parent->rb_right;
> >  	}
> >  	rb_link_node(&new->gfn_node, parent, link);
> > -	rb_insert_color(&new->gfn_node, &vgpu->vdev.gfn_cache);
> > +	rb_insert_color(&new->gfn_node, &vdev->gfn_cache);
> >  
> >  	/* dma_addr_cache maps dma addr to struct gvt_dma. */
> >  	parent = NULL;
> > -	link = &vgpu->vdev.dma_addr_cache.rb_node;
> > +	link = &vdev->dma_addr_cache.rb_node;
> >  	while (*link) {
> >  		parent = *link;
> >  		itr = rb_entry(parent, struct gvt_dma, dma_addr_node);
> > @@ -296,46 +327,51 @@ static int __gvt_cache_add(struct intel_vgpu *vgpu, gfn_t gfn,
> >  			link = &parent->rb_right;
> >  	}
> >  	rb_link_node(&new->dma_addr_node, parent, link);
> > -	rb_insert_color(&new->dma_addr_node, &vgpu->vdev.dma_addr_cache);
> > +	rb_insert_color(&new->dma_addr_node, &vdev->dma_addr_cache);
> >  
> > -	vgpu->vdev.nr_cache_entries++;
> > +	vdev->nr_cache_entries++;
> >  	return 0;
> >  }
> >  
> >  static void __gvt_cache_remove_entry(struct intel_vgpu *vgpu,
> >  				struct gvt_dma *entry)
> >  {
> > -	rb_erase(&entry->gfn_node, &vgpu->vdev.gfn_cache);
> > -	rb_erase(&entry->dma_addr_node, &vgpu->vdev.dma_addr_cache);
> > +	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
> > +
> > +	rb_erase(&entry->gfn_node, &vdev->gfn_cache);
> > +	rb_erase(&entry->dma_addr_node, &vdev->dma_addr_cache);
> >  	kfree(entry);
> > -	vgpu->vdev.nr_cache_entries--;
> > +	vdev->nr_cache_entries--;
> >  }
> >  
> >  static void gvt_cache_destroy(struct intel_vgpu *vgpu)
> >  {
> >  	struct gvt_dma *dma;
> >  	struct rb_node *node = NULL;
> > +	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
> >  
> >  	for (;;) {
> > -		mutex_lock(&vgpu->vdev.cache_lock);
> > -		node = rb_first(&vgpu->vdev.gfn_cache);
> > +		mutex_lock(&vdev->cache_lock);
> > +		node = rb_first(&vdev->gfn_cache);
> >  		if (!node) {
> > -			mutex_unlock(&vgpu->vdev.cache_lock);
> > +			mutex_unlock(&vdev->cache_lock);
> >  			break;
> >  		}
> >  		dma = rb_entry(node, struct gvt_dma, gfn_node);
> >  		gvt_dma_unmap_page(vgpu, dma->gfn, dma->dma_addr, dma->size);
> >  		__gvt_cache_remove_entry(vgpu, dma);
> > -		mutex_unlock(&vgpu->vdev.cache_lock);
> > +		mutex_unlock(&vdev->cache_lock);
> >  	}
> >  }
> >  
> >  static void gvt_cache_init(struct intel_vgpu *vgpu)
> >  {
> > -	vgpu->vdev.gfn_cache = RB_ROOT;
> > -	vgpu->vdev.dma_addr_cache = RB_ROOT;
> > -	vgpu->vdev.nr_cache_entries = 0;
> > -	mutex_init(&vgpu->vdev.cache_lock);
> > +	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
> > +
> > +	vdev->gfn_cache = RB_ROOT;
> > +	vdev->dma_addr_cache = RB_ROOT;
> > +	vdev->nr_cache_entries = 0;
> > +	mutex_init(&vdev->cache_lock);
> >  }
> >  
> >  static void kvmgt_protect_table_init(struct kvmgt_guest_info *info)
> > @@ -409,16 +445,18 @@ static void kvmgt_protect_table_del(struct kvmgt_guest_info *info,
> >  static size_t intel_vgpu_reg_rw_opregion(struct intel_vgpu *vgpu, char *buf,
> >  		size_t count, loff_t *ppos, bool iswrite)
> >  {
> > +	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
> >  	unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) -
> >  			VFIO_PCI_NUM_REGIONS;
> > -	void *base = vgpu->vdev.region[i].data;
> > +	void *base = vdev->region[i].data;
> >  	loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK;
> >  
> > -	if (pos >= vgpu->vdev.region[i].size || iswrite) {
> > +
> > +	if (pos >= vdev->region[i].size || iswrite) {
> >  		gvt_vgpu_err("invalid op or offset for Intel vgpu OpRegion\n");
> >  		return -EINVAL;
> >  	}
> > -	count = min(count, (size_t)(vgpu->vdev.region[i].size - pos));
> > +	count = min(count, (size_t)(vdev->region[i].size - pos));
> >  	memcpy(buf, base + pos, count);
> >  
> >  	return count;
> > @@ -512,7 +550,7 @@ static size_t intel_vgpu_reg_rw_edid(struct intel_vgpu *vgpu, char *buf,
> >  	unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) -
> >  			VFIO_PCI_NUM_REGIONS;
> >  	struct vfio_edid_region *region =
> > -		(struct vfio_edid_region *)vgpu->vdev.region[i].data;
> > +		(struct vfio_edid_region *)kvmgt_vdev(vgpu)->region[i].data;
> >  	loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK;
> >  
> >  	if (pos < region->vfio_edid_regs.edid_offset) {
> > @@ -544,32 +582,34 @@ static int intel_vgpu_register_reg(struct intel_vgpu *vgpu,
> >  		const struct intel_vgpu_regops *ops,
> >  		size_t size, u32 flags, void *data)
> >  {
> > +	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
> >  	struct vfio_region *region;
> >  
> > -	region = krealloc(vgpu->vdev.region,
> > -			(vgpu->vdev.num_regions + 1) * sizeof(*region),
> > +	region = krealloc(vdev->region,
> > +			(vdev->num_regions + 1) * sizeof(*region),
> >  			GFP_KERNEL);
> >  	if (!region)
> >  		return -ENOMEM;
> >  
> > -	vgpu->vdev.region = region;
> > -	vgpu->vdev.region[vgpu->vdev.num_regions].type = type;
> > -	vgpu->vdev.region[vgpu->vdev.num_regions].subtype = subtype;
> > -	vgpu->vdev.region[vgpu->vdev.num_regions].ops = ops;
> > -	vgpu->vdev.region[vgpu->vdev.num_regions].size = size;
> > -	vgpu->vdev.region[vgpu->vdev.num_regions].flags = flags;
> > -	vgpu->vdev.region[vgpu->vdev.num_regions].data = data;
> > -	vgpu->vdev.num_regions++;
> > +	vdev->region = region;
> > +	vdev->region[vdev->num_regions].type = type;
> > +	vdev->region[vdev->num_regions].subtype = subtype;
> > +	vdev->region[vdev->num_regions].ops = ops;
> > +	vdev->region[vdev->num_regions].size = size;
> > +	vdev->region[vdev->num_regions].flags = flags;
> > +	vdev->region[vdev->num_regions].data = data;
> > +	vdev->num_regions++;
> >  	return 0;
> >  }
> >  
> >  static int kvmgt_get_vfio_device(void *p_vgpu)
> >  {
> >  	struct intel_vgpu *vgpu = (struct intel_vgpu *)p_vgpu;
> > +	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
> >  
> > -	vgpu->vdev.vfio_device = vfio_device_get_from_dev(
> > -		mdev_dev(vgpu->vdev.mdev));
> > -	if (!vgpu->vdev.vfio_device) {
> > +	vdev->vfio_device = vfio_device_get_from_dev(
> > +		mdev_dev(vdev->mdev));
> > +	if (!vdev->vfio_device) {
> >  		gvt_vgpu_err("failed to get vfio device\n");
> >  		return -ENODEV;
> >  	}
> > @@ -637,10 +677,12 @@ static int kvmgt_set_edid(void *p_vgpu, int port_num)
> >  
> >  static void kvmgt_put_vfio_device(void *vgpu)
> >  {
> > -	if (WARN_ON(!((struct intel_vgpu *)vgpu)->vdev.vfio_device))
> > +	struct kvmgt_vdev *vdev = kvmgt_vdev((struct intel_vgpu *)vgpu);
> > +
> > +	if (WARN_ON(!vdev->vfio_device))
> >  		return;
> >  
> > -	vfio_device_put(((struct intel_vgpu *)vgpu)->vdev.vfio_device);
> > +	vfio_device_put(vdev->vfio_device);
> >  }
> >  
> >  static int intel_vgpu_create(struct kobject *kobj, struct mdev_device *mdev)
> > @@ -669,9 +711,9 @@ static int intel_vgpu_create(struct kobject *kobj, struct mdev_device *mdev)
> >  		goto out;
> >  	}
> >  
> > -	INIT_WORK(&vgpu->vdev.release_work, intel_vgpu_release_work);
> > +	INIT_WORK(&kvmgt_vdev(vgpu)->release_work, intel_vgpu_release_work);
> >  
> > -	vgpu->vdev.mdev = mdev;
> > +	kvmgt_vdev(vgpu)->mdev = mdev;
> >  	mdev_set_drvdata(mdev, vgpu);
> >  
> >  	gvt_dbg_core("intel_vgpu_create succeeded for mdev: %s\n",
> > @@ -696,9 +738,10 @@ static int intel_vgpu_remove(struct mdev_device *mdev)
> >  static int intel_vgpu_iommu_notifier(struct notifier_block *nb,
> >  				     unsigned long action, void *data)
> >  {
> > -	struct intel_vgpu *vgpu = container_of(nb,
> > -					struct intel_vgpu,
> > -					vdev.iommu_notifier);
> > +	struct kvmgt_vdev *vdev = container_of(nb,
> > +					       struct kvmgt_vdev,
> > +					       iommu_notifier);
> > +	struct intel_vgpu *vgpu = vdev->vgpu;
> >  
> >  	if (action == VFIO_IOMMU_NOTIFY_DMA_UNMAP) {
> >  		struct vfio_iommu_type1_dma_unmap *unmap = data;
> > @@ -708,7 +751,7 @@ static int intel_vgpu_iommu_notifier(struct notifier_block *nb,
> >  		iov_pfn = unmap->iova >> PAGE_SHIFT;
> >  		end_iov_pfn = iov_pfn + unmap->size / PAGE_SIZE;
> >  
> > -		mutex_lock(&vgpu->vdev.cache_lock);
> > +		mutex_lock(&vdev->cache_lock);
> >  		for (; iov_pfn < end_iov_pfn; iov_pfn++) {
> >  			entry = __gvt_cache_find_gfn(vgpu, iov_pfn);
> >  			if (!entry)
> > @@ -718,7 +761,7 @@ static int intel_vgpu_iommu_notifier(struct notifier_block *nb,
> >  					   entry->size);
> >  			__gvt_cache_remove_entry(vgpu, entry);
> >  		}
> > -		mutex_unlock(&vgpu->vdev.cache_lock);
> > +		mutex_unlock(&vdev->cache_lock);
> >  	}
> >  
> >  	return NOTIFY_OK;
> > @@ -727,16 +770,16 @@ static int intel_vgpu_iommu_notifier(struct notifier_block *nb,
> >  static int intel_vgpu_group_notifier(struct notifier_block *nb,
> >  				     unsigned long action, void *data)
> >  {
> > -	struct intel_vgpu *vgpu = container_of(nb,
> > -					struct intel_vgpu,
> > -					vdev.group_notifier);
> > +	struct kvmgt_vdev *vdev = container_of(nb,
> > +					       struct kvmgt_vdev,
> > +					       group_notifier);
> >  
> >  	/* the only action we care about */
> >  	if (action == VFIO_GROUP_NOTIFY_SET_KVM) {
> > -		vgpu->vdev.kvm = data;
> > +		vdev->kvm = data;
> >  
> >  		if (!data)
> > -			schedule_work(&vgpu->vdev.release_work);
> > +			schedule_work(&vdev->release_work);
> >  	}
> >  
> >  	return NOTIFY_OK;
> > @@ -745,15 +788,16 @@ static int intel_vgpu_group_notifier(struct notifier_block *nb,
> >  static int intel_vgpu_open(struct mdev_device *mdev)
> >  {
> >  	struct intel_vgpu *vgpu = mdev_get_drvdata(mdev);
> > +	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
> >  	unsigned long events;
> >  	int ret;
> >  
> > -	vgpu->vdev.iommu_notifier.notifier_call = intel_vgpu_iommu_notifier;
> > -	vgpu->vdev.group_notifier.notifier_call = intel_vgpu_group_notifier;
> > +	vdev->iommu_notifier.notifier_call = intel_vgpu_iommu_notifier;
> > +	vdev->group_notifier.notifier_call = intel_vgpu_group_notifier;
> >  
> >  	events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
> >  	ret = vfio_register_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY, &events,
> > -				&vgpu->vdev.iommu_notifier);
> > +				&vdev->iommu_notifier);
> >  	if (ret != 0) {
> >  		gvt_vgpu_err("vfio_register_notifier for iommu failed: %d\n",
> >  			ret);
> > @@ -762,7 +806,7 @@ static int intel_vgpu_open(struct mdev_device *mdev)
> >  
> >  	events = VFIO_GROUP_NOTIFY_SET_KVM;
> >  	ret = vfio_register_notifier(mdev_dev(mdev), VFIO_GROUP_NOTIFY, &events,
> > -				&vgpu->vdev.group_notifier);
> > +				&vdev->group_notifier);
> >  	if (ret != 0) {
> >  		gvt_vgpu_err("vfio_register_notifier for group failed: %d\n",
> >  			ret);
> > @@ -781,50 +825,52 @@ static int intel_vgpu_open(struct mdev_device *mdev)
> >  
> >  	intel_gvt_ops->vgpu_activate(vgpu);
> >  
> > -	atomic_set(&vgpu->vdev.released, 0);
> > +	atomic_set(&vdev->released, 0);
> >  	return ret;
> >  
> >  undo_group:
> >  	vfio_unregister_notifier(mdev_dev(mdev), VFIO_GROUP_NOTIFY,
> > -					&vgpu->vdev.group_notifier);
> > +					&vdev->group_notifier);
> >  
> >  undo_iommu:
> >  	vfio_unregister_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY,
> > -					&vgpu->vdev.iommu_notifier);
> > +					&vdev->iommu_notifier);
> >  out:
> >  	return ret;
> >  }
> >  
> >  static void intel_vgpu_release_msi_eventfd_ctx(struct intel_vgpu *vgpu)
> >  {
> > +	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
> >  	struct eventfd_ctx *trigger;
> >  
> > -	trigger = vgpu->vdev.msi_trigger;
> > +	trigger = vdev->msi_trigger;
> >  	if (trigger) {
> >  		eventfd_ctx_put(trigger);
> > -		vgpu->vdev.msi_trigger = NULL;
> > +		vdev->msi_trigger = NULL;
> >  	}
> >  }
> >  
> >  static void __intel_vgpu_release(struct intel_vgpu *vgpu)
> >  {
> > +	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
> >  	struct kvmgt_guest_info *info;
> >  	int ret;
> >  
> >  	if (!handle_valid(vgpu->handle))
> >  		return;
> >  
> > -	if (atomic_cmpxchg(&vgpu->vdev.released, 0, 1))
> > +	if (atomic_cmpxchg(&vdev->released, 0, 1))
> >  		return;
> >  
> >  	intel_gvt_ops->vgpu_release(vgpu);
> >  
> > -	ret = vfio_unregister_notifier(mdev_dev(vgpu->vdev.mdev), VFIO_IOMMU_NOTIFY,
> > -					&vgpu->vdev.iommu_notifier);
> > +	ret = vfio_unregister_notifier(mdev_dev(vdev->mdev), VFIO_IOMMU_NOTIFY,
> > +					&vdev->iommu_notifier);
> >  	WARN(ret, "vfio_unregister_notifier for iommu failed: %d\n", ret);
> >  
> > -	ret = vfio_unregister_notifier(mdev_dev(vgpu->vdev.mdev), VFIO_GROUP_NOTIFY,
> > -					&vgpu->vdev.group_notifier);
> > +	ret = vfio_unregister_notifier(mdev_dev(vdev->mdev), VFIO_GROUP_NOTIFY,
> > +					&vdev->group_notifier);
> >  	WARN(ret, "vfio_unregister_notifier for group failed: %d\n", ret);
> >  
> >  	/* dereference module reference taken at open */
> > @@ -835,7 +881,7 @@ static void __intel_vgpu_release(struct intel_vgpu *vgpu)
> >  
> >  	intel_vgpu_release_msi_eventfd_ctx(vgpu);
> >  
> > -	vgpu->vdev.kvm = NULL;
> > +	vdev->kvm = NULL;
> >  	vgpu->handle = 0;
> >  }
> >  
> > @@ -848,10 +894,10 @@ static void intel_vgpu_release(struct mdev_device *mdev)
> >  
> >  static void intel_vgpu_release_work(struct work_struct *work)
> >  {
> > -	struct intel_vgpu *vgpu = container_of(work, struct intel_vgpu,
> > -					vdev.release_work);
> > +	struct kvmgt_vdev *vdev = container_of(work, struct kvmgt_vdev,
> > +					       release_work);
> >  
> > -	__intel_vgpu_release(vgpu);
> > +	__intel_vgpu_release(vdev->vgpu);
> >  }
> >  
> >  static u64 intel_vgpu_get_bar_addr(struct intel_vgpu *vgpu, int bar)
> > @@ -933,12 +979,13 @@ static ssize_t intel_vgpu_rw(struct mdev_device *mdev, char *buf,
> >  			size_t count, loff_t *ppos, bool is_write)
> >  {
> >  	struct intel_vgpu *vgpu = mdev_get_drvdata(mdev);
> > +	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
> >  	unsigned int index = VFIO_PCI_OFFSET_TO_INDEX(*ppos);
> >  	u64 pos = *ppos & VFIO_PCI_OFFSET_MASK;
> >  	int ret = -EINVAL;
> >  
> >  
> > -	if (index >= VFIO_PCI_NUM_REGIONS + vgpu->vdev.num_regions) {
> > +	if (index >= VFIO_PCI_NUM_REGIONS + vdev->num_regions) {
> >  		gvt_vgpu_err("invalid index: %u\n", index);
> >  		return -EINVAL;
> >  	}
> > @@ -967,11 +1014,11 @@ static ssize_t intel_vgpu_rw(struct mdev_device *mdev, char *buf,
> >  	case VFIO_PCI_ROM_REGION_INDEX:
> >  		break;
> >  	default:
> > -		if (index >= VFIO_PCI_NUM_REGIONS + vgpu->vdev.num_regions)
> > +		if (index >= VFIO_PCI_NUM_REGIONS + vdev->num_regions)
> >  			return -EINVAL;
> >  
> >  		index -= VFIO_PCI_NUM_REGIONS;
> > -		return vgpu->vdev.region[index].ops->rw(vgpu, buf, count,
> > +		return vdev->region[index].ops->rw(vgpu, buf, count,
> >  				ppos, is_write);
> >  	}
> >  
> > @@ -1224,7 +1271,7 @@ static int intel_vgpu_set_msi_trigger(struct intel_vgpu *vgpu,
> >  			gvt_vgpu_err("eventfd_ctx_fdget failed\n");
> >  			return PTR_ERR(trigger);
> >  		}
> > -		vgpu->vdev.msi_trigger = trigger;
> > +		kvmgt_vdev(vgpu)->msi_trigger = trigger;
> >  	} else if ((flags & VFIO_IRQ_SET_DATA_NONE) && !count)
> >  		intel_vgpu_release_msi_eventfd_ctx(vgpu);
> >  
> > @@ -1276,6 +1323,7 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
> >  			     unsigned long arg)
> >  {
> >  	struct intel_vgpu *vgpu = mdev_get_drvdata(mdev);
> > +	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
> >  	unsigned long minsz;
> >  
> >  	gvt_dbg_core("vgpu%d ioctl, cmd: %d\n", vgpu->id, cmd);
> > @@ -1294,7 +1342,7 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
> >  		info.flags = VFIO_DEVICE_FLAGS_PCI;
> >  		info.flags |= VFIO_DEVICE_FLAGS_RESET;
> >  		info.num_regions = VFIO_PCI_NUM_REGIONS +
> > -				vgpu->vdev.num_regions;
> > +				vdev->num_regions;
> >  		info.num_irqs = VFIO_PCI_NUM_IRQS;
> >  
> >  		return copy_to_user((void __user *)arg, &info, minsz) ?
> > @@ -1385,22 +1433,22 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
> >  					.header.version = 1 };
> >  
> >  				if (info.index >= VFIO_PCI_NUM_REGIONS +
> > -						vgpu->vdev.num_regions)
> > +						vdev->num_regions)
> >  					return -EINVAL;
> >  				info.index =
> >  					array_index_nospec(info.index,
> >  							VFIO_PCI_NUM_REGIONS +
> > -							vgpu->vdev.num_regions);
> > +							vdev->num_regions);
> >  
> >  				i = info.index - VFIO_PCI_NUM_REGIONS;
> >  
> >  				info.offset =
> >  					VFIO_PCI_INDEX_TO_OFFSET(info.index);
> > -				info.size = vgpu->vdev.region[i].size;
> > -				info.flags = vgpu->vdev.region[i].flags;
> > +				info.size = vdev->region[i].size;
> > +				info.flags = vdev->region[i].flags;
> >  
> > -				cap_type.type = vgpu->vdev.region[i].type;
> > -				cap_type.subtype = vgpu->vdev.region[i].subtype;
> > +				cap_type.type = vdev->region[i].type;
> > +				cap_type.subtype = vdev->region[i].subtype;
> >  
> >  				ret = vfio_info_add_capability(&caps,
> >  							&cap_type.header,
> > @@ -1740,13 +1788,15 @@ static int kvmgt_guest_init(struct mdev_device *mdev)
> >  {
> >  	struct kvmgt_guest_info *info;
> >  	struct intel_vgpu *vgpu;
> > +	struct kvmgt_vdev *vdev;
> >  	struct kvm *kvm;
> >  
> >  	vgpu = mdev_get_drvdata(mdev);
> >  	if (handle_valid(vgpu->handle))
> >  		return -EEXIST;
> >  
> > -	kvm = vgpu->vdev.kvm;
> > +	vdev = kvmgt_vdev(vgpu);
> > +	kvm = vdev->kvm;
> >  	if (!kvm || kvm->mm != current->mm) {
> >  		gvt_vgpu_err("KVM is required to use Intel vGPU\n");
> >  		return -ESRCH;
> > @@ -1776,7 +1826,7 @@ static int kvmgt_guest_init(struct mdev_device *mdev)
> >  	info->debugfs_cache_entries = debugfs_create_ulong(
> >  						"kvmgt_nr_cache_entries",
> >  						0444, vgpu->debugfs,
> > -						&vgpu->vdev.nr_cache_entries);
> > +						&vdev->nr_cache_entries);
> >  	return 0;
> >  }
> >  
> > @@ -1793,9 +1843,17 @@ static bool kvmgt_guest_exit(struct kvmgt_guest_info *info)
> >  	return true;
> >  }
> >  
> > -static int kvmgt_attach_vgpu(void *vgpu, unsigned long *handle)
> > +static int kvmgt_attach_vgpu(void *p_vgpu, unsigned long *handle)
> >  {
> > -	/* nothing to do here */
> > +	struct intel_vgpu *vgpu = (struct intel_vgpu *)p_vgpu;
> > +
> > +	vgpu->vdev = kzalloc(sizeof(struct kvmgt_vdev), GFP_KERNEL);
> > +
> > +	if (!vgpu->vdev)
> > +		return -ENOMEM;
> > +
> > +	kvmgt_vdev(vgpu)->vgpu = vgpu;
> > +
> >  	return 0;
> >  }
> >  
> > @@ -1803,29 +1861,34 @@ static void kvmgt_detach_vgpu(void *p_vgpu)
> >  {
> >  	int i;
> >  	struct intel_vgpu *vgpu = (struct intel_vgpu *)p_vgpu;
> > +	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
> >  
> > -	if (!vgpu->vdev.region)
> > +	if (!vdev->region)
> >  		return;
> >  
> > -	for (i = 0; i < vgpu->vdev.num_regions; i++)
> > -		if (vgpu->vdev.region[i].ops->release)
> > -			vgpu->vdev.region[i].ops->release(vgpu,
> > -					&vgpu->vdev.region[i]);
> > -	vgpu->vdev.num_regions = 0;
> > -	kfree(vgpu->vdev.region);
> > -	vgpu->vdev.region = NULL;
> > +	for (i = 0; i < vdev->num_regions; i++)
> > +		if (vdev->region[i].ops->release)
> > +			vdev->region[i].ops->release(vgpu,
> > +					&vdev->region[i]);
> > +	vdev->num_regions = 0;
> > +	kfree(vdev->region);
> > +	vdev->region = NULL;
> > +
> > +	kfree(vdev);
> >  }
> >  
> >  static int kvmgt_inject_msi(unsigned long handle, u32 addr, u16 data)
> >  {
> >  	struct kvmgt_guest_info *info;
> >  	struct intel_vgpu *vgpu;
> > +	struct kvmgt_vdev *vdev;
> >  
> >  	if (!handle_valid(handle))
> >  		return -ESRCH;
> >  
> >  	info = (struct kvmgt_guest_info *)handle;
> >  	vgpu = info->vgpu;
> > +	vdev = kvmgt_vdev(vgpu);
> >  
> >  	/*
> >  	 * When guest is poweroff, msi_trigger is set to NULL, but vgpu's
> > @@ -1836,10 +1899,10 @@ static int kvmgt_inject_msi(unsigned long handle, u32 addr, u16 data)
> >  	 * enabled by guest. so if msi_trigger is null, success is still
> >  	 * returned and don't inject interrupt into guest.
> >  	 */
> > -	if (vgpu->vdev.msi_trigger == NULL)
> > +	if (vdev->msi_trigger == NULL)
> >  		return 0;
> >  
> > -	if (eventfd_signal(vgpu->vdev.msi_trigger, 1) == 1)
> > +	if (eventfd_signal(vdev->msi_trigger, 1) == 1)
> >  		return 0;
> >  
> >  	return -EFAULT;
> > @@ -1865,26 +1928,26 @@ static unsigned long kvmgt_gfn_to_pfn(unsigned long handle, unsigned long gfn)
> >  static int kvmgt_dma_map_guest_page(unsigned long handle, unsigned long gfn,
> >  		unsigned long size, dma_addr_t *dma_addr)
> >  {
> > -	struct kvmgt_guest_info *info;
> >  	struct intel_vgpu *vgpu;
> > +	struct kvmgt_vdev *vdev;
> >  	struct gvt_dma *entry;
> >  	int ret;
> >  
> >  	if (!handle_valid(handle))
> >  		return -EINVAL;
> >  
> > -	info = (struct kvmgt_guest_info *)handle;
> > -	vgpu = info->vgpu;
> > +	vgpu = ((struct kvmgt_guest_info *)handle)->vgpu;
> > +	vdev = kvmgt_vdev(vgpu);
> >  
> > -	mutex_lock(&info->vgpu->vdev.cache_lock);
> > +	mutex_lock(&vdev->cache_lock);
> >  
> > -	entry = __gvt_cache_find_gfn(info->vgpu, gfn);
> > +	entry = __gvt_cache_find_gfn(vgpu, gfn);
> >  	if (!entry) {
> >  		ret = gvt_dma_map_page(vgpu, gfn, dma_addr, size);
> >  		if (ret)
> >  			goto err_unlock;
> >  
> > -		ret = __gvt_cache_add(info->vgpu, gfn, *dma_addr, size);
> > +		ret = __gvt_cache_add(vgpu, gfn, *dma_addr, size);
> >  		if (ret)
> >  			goto err_unmap;
> >  	} else if (entry->size != size) {
> > @@ -1896,7 +1959,7 @@ static int kvmgt_dma_map_guest_page(unsigned long handle, unsigned long gfn,
> >  		if (ret)
> >  			goto err_unlock;
> >  
> > -		ret = __gvt_cache_add(info->vgpu, gfn, *dma_addr, size);
> > +		ret = __gvt_cache_add(vgpu, gfn, *dma_addr, size);
> >  		if (ret)
> >  			goto err_unmap;
> >  	} else {
> > @@ -1904,19 +1967,20 @@ static int kvmgt_dma_map_guest_page(unsigned long handle, unsigned long gfn,
> >  		*dma_addr = entry->dma_addr;
> >  	}
> >  
> > -	mutex_unlock(&info->vgpu->vdev.cache_lock);
> > +	mutex_unlock(&vdev->cache_lock);
> >  	return 0;
> >  
> >  err_unmap:
> >  	gvt_dma_unmap_page(vgpu, gfn, *dma_addr, size);
> >  err_unlock:
> > -	mutex_unlock(&info->vgpu->vdev.cache_lock);
> > +	mutex_unlock(&vdev->cache_lock);
> >  	return ret;
> >  }
> >  
> >  static int kvmgt_dma_pin_guest_page(unsigned long handle, dma_addr_t dma_addr)
> >  {
> >  	struct kvmgt_guest_info *info;
> > +	struct kvmgt_vdev *vdev;
> >  	struct gvt_dma *entry;
> >  	int ret = 0;
> >  
> > @@ -1924,14 +1988,15 @@ static int kvmgt_dma_pin_guest_page(unsigned long handle, dma_addr_t dma_addr)
> >  		return -ENODEV;
> >  
> >  	info = (struct kvmgt_guest_info *)handle;
> > +	vdev = kvmgt_vdev(info->vgpu);
> >  
> > -	mutex_lock(&info->vgpu->vdev.cache_lock);
> > +	mutex_lock(&vdev->cache_lock);
> >  	entry = __gvt_cache_find_dma_addr(info->vgpu, dma_addr);
> >  	if (entry)
> >  		kref_get(&entry->ref);
> >  	else
> >  		ret = -ENOMEM;
> > -	mutex_unlock(&info->vgpu->vdev.cache_lock);
> > +	mutex_unlock(&vdev->cache_lock);
> >  
> >  	return ret;
> >  }
> > @@ -1947,19 +2012,21 @@ static void __gvt_dma_release(struct kref *ref)
> >  
> >  static void kvmgt_dma_unmap_guest_page(unsigned long handle, dma_addr_t dma_addr)
> >  {
> > -	struct kvmgt_guest_info *info;
> > +	struct intel_vgpu *vgpu;
> > +	struct kvmgt_vdev *vdev;
> >  	struct gvt_dma *entry;
> >  
> >  	if (!handle_valid(handle))
> >  		return;
> >  
> > -	info = (struct kvmgt_guest_info *)handle;
> > +	vgpu = ((struct kvmgt_guest_info *)handle)->vgpu;
> > +	vdev = kvmgt_vdev(vgpu);
> >  
> > -	mutex_lock(&info->vgpu->vdev.cache_lock);
> > -	entry = __gvt_cache_find_dma_addr(info->vgpu, dma_addr);
> > +	mutex_lock(&vdev->cache_lock);
> > +	entry = __gvt_cache_find_dma_addr(vgpu, dma_addr);
> >  	if (entry)
> >  		kref_put(&entry->ref, __gvt_dma_release);
> > -	mutex_unlock(&info->vgpu->vdev.cache_lock);
> > +	mutex_unlock(&vdev->cache_lock);
> >  }
> >  
> >  static int kvmgt_rw_gpa(unsigned long handle, unsigned long gpa,
> > -- 
> > 2.24.1
> > 
> 
> -- 
> Open Source Technology Center, Intel ltd.
> 
> $gpg --keyserver wwwkeys.pgp.net --recv-keys 4D781827





-- 
Open Source Technology Center, Intel ltd.

$gpg --keyserver wwwkeys.pgp.net --recv-keys 4D781827


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH] drm/i915/gvt: make gvt oblivious of kvmgt data structures
  2020-01-20  6:33       ` Zhenyu Wang
@ 2020-01-20 17:25         ` Julian Stecklina
  2020-01-20 17:28         ` [RFC PATCH 1/4] " Julian Stecklina
  1 sibling, 0 replies; 16+ messages in thread
From: Julian Stecklina @ 2020-01-20 17:25 UTC (permalink / raw)
  To: intel-gvt-dev
  Cc: Julian Stecklina, linux-kernel, hang.yuan, dri-devel, zhiyuan.lv

Instead of defining KVMGT per-device state in struct intel_vgpu
directly, add an indirection. This makes the GVT code oblivious of
what state KVMGT needs to keep.

The intention here is to eventually make it possible to build
hypervisor backends for the mediator, without having to touch the
mediator itself. This is a first step.
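
In userspace terms, the indirection boils down to something like the
following minimal sketch. Note that core_vgpu, backend_vdev and the other
names here are made up for illustration only; the real code uses
intel_vgpu, kvmgt_vdev, kvmgt_attach_vgpu() and kvmgt_detach_vgpu():

	#include <stdio.h>
	#include <stdlib.h>

	/* Core structure: carries only an opaque pointer to backend state. */
	struct core_vgpu {
		int id;
		void *vdev;		/* owned by the hypervisor backend */
	};

	/* Backend-private state; only the backend needs this definition. */
	struct backend_vdev {
		struct core_vgpu *vgpu;	/* back-pointer, as kvmgt_vdev keeps */
		unsigned long nr_cache_entries;
	};

	static inline struct backend_vdev *to_backend(struct core_vgpu *vgpu)
	{
		return vgpu->vdev;
	}

	/* On attach, the backend allocates its own state. */
	static int backend_attach(struct core_vgpu *vgpu)
	{
		struct backend_vdev *vdev = calloc(1, sizeof(*vdev));

		if (!vdev)
			return -1;
		vdev->vgpu = vgpu;
		vgpu->vdev = vdev;
		return 0;
	}

	/* On detach, the backend frees what it allocated. */
	static void backend_detach(struct core_vgpu *vgpu)
	{
		free(vgpu->vdev);
		vgpu->vdev = NULL;
	}

	int main(void)
	{
		struct core_vgpu vgpu = { .id = 1 };

		if (backend_attach(&vgpu))
			return 1;
		to_backend(&vgpu)->nr_cache_entries = 42;
		printf("vgpu %d: %lu cache entries\n",
		       vgpu.id, to_backend(&vgpu)->nr_cache_entries);
		backend_detach(&vgpu);
		return 0;
	}

The core structure carries nothing but the opaque pointer; everything
behind it belongs to the backend, which is what allows a backend to be
built without the core knowing its state layout.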

v2:
- rebased onto gvt-staging (no conflicts)

Signed-off-by: Julian Stecklina <julian.stecklina@cyberus-technology.de>
Acked-by: Zhenyu Wang <zhenyuw@linux.intel.com>
---
 drivers/gpu/drm/i915/gvt/gvt.h   |  32 +---
 drivers/gpu/drm/i915/gvt/kvmgt.c | 287 +++++++++++++++++++------------
 2 files changed, 184 insertions(+), 135 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
index 9fe9decc0d86..8cf292a8d6bd 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.h
+++ b/drivers/gpu/drm/i915/gvt/gvt.h
@@ -196,31 +196,8 @@ struct intel_vgpu {
 
 	struct dentry *debugfs;
 
-#if IS_ENABLED(CONFIG_DRM_I915_GVT_KVMGT)
-	struct {
-		struct mdev_device *mdev;
-		struct vfio_region *region;
-		int num_regions;
-		struct eventfd_ctx *intx_trigger;
-		struct eventfd_ctx *msi_trigger;
-
-		/*
-		 * Two caches are used to avoid mapping duplicated pages (eg.
-		 * scratch pages). This help to reduce dma setup overhead.
-		 */
-		struct rb_root gfn_cache;
-		struct rb_root dma_addr_cache;
-		unsigned long nr_cache_entries;
-		struct mutex cache_lock;
-
-		struct notifier_block iommu_notifier;
-		struct notifier_block group_notifier;
-		struct kvm *kvm;
-		struct work_struct release_work;
-		atomic_t released;
-		struct vfio_device *vfio_device;
-	} vdev;
-#endif
+	/* Hypervisor-specific device state. */
+	void *vdev;
 
 	struct list_head dmabuf_obj_list_head;
 	struct mutex dmabuf_lock;
@@ -229,6 +206,11 @@ struct intel_vgpu {
 	u32 scan_nonprivbb;
 };
 
+static inline void *intel_vgpu_vdev(struct intel_vgpu *vgpu)
+{
+	return vgpu->vdev;
+}
+
 /* validating GM healthy status*/
 #define vgpu_is_vm_unhealthy(ret_val) \
 	(((ret_val) == -EBADRQC) || ((ret_val) == -EFAULT))
diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
index 85e59c502ab5..9a435bc1a2f0 100644
--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
+++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
@@ -108,6 +108,36 @@ struct gvt_dma {
 	struct kref ref;
 };
 
+struct kvmgt_vdev {
+	struct intel_vgpu *vgpu;
+	struct mdev_device *mdev;
+	struct vfio_region *region;
+	int num_regions;
+	struct eventfd_ctx *intx_trigger;
+	struct eventfd_ctx *msi_trigger;
+
+	/*
+	 * Two caches are used to avoid mapping duplicated pages (eg.
+	 * scratch pages). This help to reduce dma setup overhead.
+	 */
+	struct rb_root gfn_cache;
+	struct rb_root dma_addr_cache;
+	unsigned long nr_cache_entries;
+	struct mutex cache_lock;
+
+	struct notifier_block iommu_notifier;
+	struct notifier_block group_notifier;
+	struct kvm *kvm;
+	struct work_struct release_work;
+	atomic_t released;
+	struct vfio_device *vfio_device;
+};
+
+static inline struct kvmgt_vdev *kvmgt_vdev(struct intel_vgpu *vgpu)
+{
+	return intel_vgpu_vdev(vgpu);
+}
+
 static inline bool handle_valid(unsigned long handle)
 {
 	return !!(handle & ~0xff);
@@ -129,7 +159,7 @@ static void gvt_unpin_guest_page(struct intel_vgpu *vgpu, unsigned long gfn,
 	for (npage = 0; npage < total_pages; npage++) {
 		unsigned long cur_gfn = gfn + npage;
 
-		ret = vfio_unpin_pages(mdev_dev(vgpu->vdev.mdev), &cur_gfn, 1);
+		ret = vfio_unpin_pages(mdev_dev(kvmgt_vdev(vgpu)->mdev), &cur_gfn, 1);
 		WARN_ON(ret != 1);
 	}
 }
@@ -152,7 +182,7 @@ static int gvt_pin_guest_page(struct intel_vgpu *vgpu, unsigned long gfn,
 		unsigned long cur_gfn = gfn + npage;
 		unsigned long pfn;
 
-		ret = vfio_pin_pages(mdev_dev(vgpu->vdev.mdev), &cur_gfn, 1,
+		ret = vfio_pin_pages(mdev_dev(kvmgt_vdev(vgpu)->mdev), &cur_gfn, 1,
 				     IOMMU_READ | IOMMU_WRITE, &pfn);
 		if (ret != 1) {
 			gvt_vgpu_err("vfio_pin_pages failed for gfn 0x%lx, ret %d\n",
@@ -219,7 +249,7 @@ static void gvt_dma_unmap_page(struct intel_vgpu *vgpu, unsigned long gfn,
 static struct gvt_dma *__gvt_cache_find_dma_addr(struct intel_vgpu *vgpu,
 		dma_addr_t dma_addr)
 {
-	struct rb_node *node = vgpu->vdev.dma_addr_cache.rb_node;
+	struct rb_node *node = kvmgt_vdev(vgpu)->dma_addr_cache.rb_node;
 	struct gvt_dma *itr;
 
 	while (node) {
@@ -237,7 +267,7 @@ static struct gvt_dma *__gvt_cache_find_dma_addr(struct intel_vgpu *vgpu,
 
 static struct gvt_dma *__gvt_cache_find_gfn(struct intel_vgpu *vgpu, gfn_t gfn)
 {
-	struct rb_node *node = vgpu->vdev.gfn_cache.rb_node;
+	struct rb_node *node = kvmgt_vdev(vgpu)->gfn_cache.rb_node;
 	struct gvt_dma *itr;
 
 	while (node) {
@@ -258,6 +288,7 @@ static int __gvt_cache_add(struct intel_vgpu *vgpu, gfn_t gfn,
 {
 	struct gvt_dma *new, *itr;
 	struct rb_node **link, *parent = NULL;
+	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
 
 	new = kzalloc(sizeof(struct gvt_dma), GFP_KERNEL);
 	if (!new)
@@ -270,7 +301,7 @@ static int __gvt_cache_add(struct intel_vgpu *vgpu, gfn_t gfn,
 	kref_init(&new->ref);
 
 	/* gfn_cache maps gfn to struct gvt_dma. */
-	link = &vgpu->vdev.gfn_cache.rb_node;
+	link = &vdev->gfn_cache.rb_node;
 	while (*link) {
 		parent = *link;
 		itr = rb_entry(parent, struct gvt_dma, gfn_node);
@@ -281,11 +312,11 @@ static int __gvt_cache_add(struct intel_vgpu *vgpu, gfn_t gfn,
 			link = &parent->rb_right;
 	}
 	rb_link_node(&new->gfn_node, parent, link);
-	rb_insert_color(&new->gfn_node, &vgpu->vdev.gfn_cache);
+	rb_insert_color(&new->gfn_node, &vdev->gfn_cache);
 
 	/* dma_addr_cache maps dma addr to struct gvt_dma. */
 	parent = NULL;
-	link = &vgpu->vdev.dma_addr_cache.rb_node;
+	link = &vdev->dma_addr_cache.rb_node;
 	while (*link) {
 		parent = *link;
 		itr = rb_entry(parent, struct gvt_dma, dma_addr_node);
@@ -296,46 +327,51 @@ static int __gvt_cache_add(struct intel_vgpu *vgpu, gfn_t gfn,
 			link = &parent->rb_right;
 	}
 	rb_link_node(&new->dma_addr_node, parent, link);
-	rb_insert_color(&new->dma_addr_node, &vgpu->vdev.dma_addr_cache);
+	rb_insert_color(&new->dma_addr_node, &vdev->dma_addr_cache);
 
-	vgpu->vdev.nr_cache_entries++;
+	vdev->nr_cache_entries++;
 	return 0;
 }
 
 static void __gvt_cache_remove_entry(struct intel_vgpu *vgpu,
 				struct gvt_dma *entry)
 {
-	rb_erase(&entry->gfn_node, &vgpu->vdev.gfn_cache);
-	rb_erase(&entry->dma_addr_node, &vgpu->vdev.dma_addr_cache);
+	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
+
+	rb_erase(&entry->gfn_node, &vdev->gfn_cache);
+	rb_erase(&entry->dma_addr_node, &vdev->dma_addr_cache);
 	kfree(entry);
-	vgpu->vdev.nr_cache_entries--;
+	vdev->nr_cache_entries--;
 }
 
 static void gvt_cache_destroy(struct intel_vgpu *vgpu)
 {
 	struct gvt_dma *dma;
 	struct rb_node *node = NULL;
+	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
 
 	for (;;) {
-		mutex_lock(&vgpu->vdev.cache_lock);
-		node = rb_first(&vgpu->vdev.gfn_cache);
+		mutex_lock(&vdev->cache_lock);
+		node = rb_first(&vdev->gfn_cache);
 		if (!node) {
-			mutex_unlock(&vgpu->vdev.cache_lock);
+			mutex_unlock(&vdev->cache_lock);
 			break;
 		}
 		dma = rb_entry(node, struct gvt_dma, gfn_node);
 		gvt_dma_unmap_page(vgpu, dma->gfn, dma->dma_addr, dma->size);
 		__gvt_cache_remove_entry(vgpu, dma);
-		mutex_unlock(&vgpu->vdev.cache_lock);
+		mutex_unlock(&vdev->cache_lock);
 	}
 }
 
 static void gvt_cache_init(struct intel_vgpu *vgpu)
 {
-	vgpu->vdev.gfn_cache = RB_ROOT;
-	vgpu->vdev.dma_addr_cache = RB_ROOT;
-	vgpu->vdev.nr_cache_entries = 0;
-	mutex_init(&vgpu->vdev.cache_lock);
+	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
+
+	vdev->gfn_cache = RB_ROOT;
+	vdev->dma_addr_cache = RB_ROOT;
+	vdev->nr_cache_entries = 0;
+	mutex_init(&vdev->cache_lock);
 }
 
 static void kvmgt_protect_table_init(struct kvmgt_guest_info *info)
@@ -409,16 +445,18 @@ static void kvmgt_protect_table_del(struct kvmgt_guest_info *info,
 static size_t intel_vgpu_reg_rw_opregion(struct intel_vgpu *vgpu, char *buf,
 		size_t count, loff_t *ppos, bool iswrite)
 {
+	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
 	unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) -
 			VFIO_PCI_NUM_REGIONS;
-	void *base = vgpu->vdev.region[i].data;
+	void *base = vdev->region[i].data;
 	loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK;
 
-	if (pos >= vgpu->vdev.region[i].size || iswrite) {
+
+	if (pos >= vdev->region[i].size || iswrite) {
 		gvt_vgpu_err("invalid op or offset for Intel vgpu OpRegion\n");
 		return -EINVAL;
 	}
-	count = min(count, (size_t)(vgpu->vdev.region[i].size - pos));
+	count = min(count, (size_t)(vdev->region[i].size - pos));
 	memcpy(buf, base + pos, count);
 
 	return count;
@@ -512,7 +550,7 @@ static size_t intel_vgpu_reg_rw_edid(struct intel_vgpu *vgpu, char *buf,
 	unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) -
 			VFIO_PCI_NUM_REGIONS;
 	struct vfio_edid_region *region =
-		(struct vfio_edid_region *)vgpu->vdev.region[i].data;
+		(struct vfio_edid_region *)kvmgt_vdev(vgpu)->region[i].data;
 	loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK;
 
 	if (pos < region->vfio_edid_regs.edid_offset) {
@@ -544,32 +582,34 @@ static int intel_vgpu_register_reg(struct intel_vgpu *vgpu,
 		const struct intel_vgpu_regops *ops,
 		size_t size, u32 flags, void *data)
 {
+	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
 	struct vfio_region *region;
 
-	region = krealloc(vgpu->vdev.region,
-			(vgpu->vdev.num_regions + 1) * sizeof(*region),
+	region = krealloc(vdev->region,
+			(vdev->num_regions + 1) * sizeof(*region),
 			GFP_KERNEL);
 	if (!region)
 		return -ENOMEM;
 
-	vgpu->vdev.region = region;
-	vgpu->vdev.region[vgpu->vdev.num_regions].type = type;
-	vgpu->vdev.region[vgpu->vdev.num_regions].subtype = subtype;
-	vgpu->vdev.region[vgpu->vdev.num_regions].ops = ops;
-	vgpu->vdev.region[vgpu->vdev.num_regions].size = size;
-	vgpu->vdev.region[vgpu->vdev.num_regions].flags = flags;
-	vgpu->vdev.region[vgpu->vdev.num_regions].data = data;
-	vgpu->vdev.num_regions++;
+	vdev->region = region;
+	vdev->region[vdev->num_regions].type = type;
+	vdev->region[vdev->num_regions].subtype = subtype;
+	vdev->region[vdev->num_regions].ops = ops;
+	vdev->region[vdev->num_regions].size = size;
+	vdev->region[vdev->num_regions].flags = flags;
+	vdev->region[vdev->num_regions].data = data;
+	vdev->num_regions++;
 	return 0;
 }
 
 static int kvmgt_get_vfio_device(void *p_vgpu)
 {
 	struct intel_vgpu *vgpu = (struct intel_vgpu *)p_vgpu;
+	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
 
-	vgpu->vdev.vfio_device = vfio_device_get_from_dev(
-		mdev_dev(vgpu->vdev.mdev));
-	if (!vgpu->vdev.vfio_device) {
+	vdev->vfio_device = vfio_device_get_from_dev(
+		mdev_dev(vdev->mdev));
+	if (!vdev->vfio_device) {
 		gvt_vgpu_err("failed to get vfio device\n");
 		return -ENODEV;
 	}
@@ -637,10 +677,12 @@ static int kvmgt_set_edid(void *p_vgpu, int port_num)
 
 static void kvmgt_put_vfio_device(void *vgpu)
 {
-	if (WARN_ON(!((struct intel_vgpu *)vgpu)->vdev.vfio_device))
+	struct kvmgt_vdev *vdev = kvmgt_vdev((struct intel_vgpu *)vgpu);
+
+	if (WARN_ON(!vdev->vfio_device))
 		return;
 
-	vfio_device_put(((struct intel_vgpu *)vgpu)->vdev.vfio_device);
+	vfio_device_put(vdev->vfio_device);
 }
 
 static int intel_vgpu_create(struct kobject *kobj, struct mdev_device *mdev)
@@ -669,9 +711,9 @@ static int intel_vgpu_create(struct kobject *kobj, struct mdev_device *mdev)
 		goto out;
 	}
 
-	INIT_WORK(&vgpu->vdev.release_work, intel_vgpu_release_work);
+	INIT_WORK(&kvmgt_vdev(vgpu)->release_work, intel_vgpu_release_work);
 
-	vgpu->vdev.mdev = mdev;
+	kvmgt_vdev(vgpu)->mdev = mdev;
 	mdev_set_drvdata(mdev, vgpu);
 
 	gvt_dbg_core("intel_vgpu_create succeeded for mdev: %s\n",
@@ -696,9 +738,10 @@ static int intel_vgpu_remove(struct mdev_device *mdev)
 static int intel_vgpu_iommu_notifier(struct notifier_block *nb,
 				     unsigned long action, void *data)
 {
-	struct intel_vgpu *vgpu = container_of(nb,
-					struct intel_vgpu,
-					vdev.iommu_notifier);
+	struct kvmgt_vdev *vdev = container_of(nb,
+					       struct kvmgt_vdev,
+					       iommu_notifier);
+	struct intel_vgpu *vgpu = vdev->vgpu;
 
 	if (action == VFIO_IOMMU_NOTIFY_DMA_UNMAP) {
 		struct vfio_iommu_type1_dma_unmap *unmap = data;
@@ -708,7 +751,7 @@ static int intel_vgpu_iommu_notifier(struct notifier_block *nb,
 		iov_pfn = unmap->iova >> PAGE_SHIFT;
 		end_iov_pfn = iov_pfn + unmap->size / PAGE_SIZE;
 
-		mutex_lock(&vgpu->vdev.cache_lock);
+		mutex_lock(&vdev->cache_lock);
 		for (; iov_pfn < end_iov_pfn; iov_pfn++) {
 			entry = __gvt_cache_find_gfn(vgpu, iov_pfn);
 			if (!entry)
@@ -718,7 +761,7 @@ static int intel_vgpu_iommu_notifier(struct notifier_block *nb,
 					   entry->size);
 			__gvt_cache_remove_entry(vgpu, entry);
 		}
-		mutex_unlock(&vgpu->vdev.cache_lock);
+		mutex_unlock(&vdev->cache_lock);
 	}
 
 	return NOTIFY_OK;
@@ -727,16 +770,16 @@ static int intel_vgpu_iommu_notifier(struct notifier_block *nb,
 static int intel_vgpu_group_notifier(struct notifier_block *nb,
 				     unsigned long action, void *data)
 {
-	struct intel_vgpu *vgpu = container_of(nb,
-					struct intel_vgpu,
-					vdev.group_notifier);
+	struct kvmgt_vdev *vdev = container_of(nb,
+					       struct kvmgt_vdev,
+					       group_notifier);
 
 	/* the only action we care about */
 	if (action == VFIO_GROUP_NOTIFY_SET_KVM) {
-		vgpu->vdev.kvm = data;
+		vdev->kvm = data;
 
 		if (!data)
-			schedule_work(&vgpu->vdev.release_work);
+			schedule_work(&vdev->release_work);
 	}
 
 	return NOTIFY_OK;
@@ -745,15 +788,16 @@ static int intel_vgpu_group_notifier(struct notifier_block *nb,
 static int intel_vgpu_open(struct mdev_device *mdev)
 {
 	struct intel_vgpu *vgpu = mdev_get_drvdata(mdev);
+	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
 	unsigned long events;
 	int ret;
 
-	vgpu->vdev.iommu_notifier.notifier_call = intel_vgpu_iommu_notifier;
-	vgpu->vdev.group_notifier.notifier_call = intel_vgpu_group_notifier;
+	vdev->iommu_notifier.notifier_call = intel_vgpu_iommu_notifier;
+	vdev->group_notifier.notifier_call = intel_vgpu_group_notifier;
 
 	events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
 	ret = vfio_register_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY, &events,
-				&vgpu->vdev.iommu_notifier);
+				&vdev->iommu_notifier);
 	if (ret != 0) {
 		gvt_vgpu_err("vfio_register_notifier for iommu failed: %d\n",
 			ret);
@@ -762,7 +806,7 @@ static int intel_vgpu_open(struct mdev_device *mdev)
 
 	events = VFIO_GROUP_NOTIFY_SET_KVM;
 	ret = vfio_register_notifier(mdev_dev(mdev), VFIO_GROUP_NOTIFY, &events,
-				&vgpu->vdev.group_notifier);
+				&vdev->group_notifier);
 	if (ret != 0) {
 		gvt_vgpu_err("vfio_register_notifier for group failed: %d\n",
 			ret);
@@ -781,50 +825,52 @@ static int intel_vgpu_open(struct mdev_device *mdev)
 
 	intel_gvt_ops->vgpu_activate(vgpu);
 
-	atomic_set(&vgpu->vdev.released, 0);
+	atomic_set(&vdev->released, 0);
 	return ret;
 
 undo_group:
 	vfio_unregister_notifier(mdev_dev(mdev), VFIO_GROUP_NOTIFY,
-					&vgpu->vdev.group_notifier);
+					&vdev->group_notifier);
 
 undo_iommu:
 	vfio_unregister_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY,
-					&vgpu->vdev.iommu_notifier);
+					&vdev->iommu_notifier);
 out:
 	return ret;
 }
 
 static void intel_vgpu_release_msi_eventfd_ctx(struct intel_vgpu *vgpu)
 {
+	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
 	struct eventfd_ctx *trigger;
 
-	trigger = vgpu->vdev.msi_trigger;
+	trigger = vdev->msi_trigger;
 	if (trigger) {
 		eventfd_ctx_put(trigger);
-		vgpu->vdev.msi_trigger = NULL;
+		vdev->msi_trigger = NULL;
 	}
 }
 
 static void __intel_vgpu_release(struct intel_vgpu *vgpu)
 {
+	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
 	struct kvmgt_guest_info *info;
 	int ret;
 
 	if (!handle_valid(vgpu->handle))
 		return;
 
-	if (atomic_cmpxchg(&vgpu->vdev.released, 0, 1))
+	if (atomic_cmpxchg(&vdev->released, 0, 1))
 		return;
 
 	intel_gvt_ops->vgpu_release(vgpu);
 
-	ret = vfio_unregister_notifier(mdev_dev(vgpu->vdev.mdev), VFIO_IOMMU_NOTIFY,
-					&vgpu->vdev.iommu_notifier);
+	ret = vfio_unregister_notifier(mdev_dev(vdev->mdev), VFIO_IOMMU_NOTIFY,
+					&vdev->iommu_notifier);
 	WARN(ret, "vfio_unregister_notifier for iommu failed: %d\n", ret);
 
-	ret = vfio_unregister_notifier(mdev_dev(vgpu->vdev.mdev), VFIO_GROUP_NOTIFY,
-					&vgpu->vdev.group_notifier);
+	ret = vfio_unregister_notifier(mdev_dev(vdev->mdev), VFIO_GROUP_NOTIFY,
+					&vdev->group_notifier);
 	WARN(ret, "vfio_unregister_notifier for group failed: %d\n", ret);
 
 	/* dereference module reference taken at open */
@@ -835,7 +881,7 @@ static void __intel_vgpu_release(struct intel_vgpu *vgpu)
 
 	intel_vgpu_release_msi_eventfd_ctx(vgpu);
 
-	vgpu->vdev.kvm = NULL;
+	vdev->kvm = NULL;
 	vgpu->handle = 0;
 }
 
@@ -848,10 +894,10 @@ static void intel_vgpu_release(struct mdev_device *mdev)
 
 static void intel_vgpu_release_work(struct work_struct *work)
 {
-	struct intel_vgpu *vgpu = container_of(work, struct intel_vgpu,
-					vdev.release_work);
+	struct kvmgt_vdev *vdev = container_of(work, struct kvmgt_vdev,
+					       release_work);
 
-	__intel_vgpu_release(vgpu);
+	__intel_vgpu_release(vdev->vgpu);
 }
 
 static u64 intel_vgpu_get_bar_addr(struct intel_vgpu *vgpu, int bar)
@@ -933,12 +979,13 @@ static ssize_t intel_vgpu_rw(struct mdev_device *mdev, char *buf,
 			size_t count, loff_t *ppos, bool is_write)
 {
 	struct intel_vgpu *vgpu = mdev_get_drvdata(mdev);
+	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
 	unsigned int index = VFIO_PCI_OFFSET_TO_INDEX(*ppos);
 	u64 pos = *ppos & VFIO_PCI_OFFSET_MASK;
 	int ret = -EINVAL;
 
 
-	if (index >= VFIO_PCI_NUM_REGIONS + vgpu->vdev.num_regions) {
+	if (index >= VFIO_PCI_NUM_REGIONS + vdev->num_regions) {
 		gvt_vgpu_err("invalid index: %u\n", index);
 		return -EINVAL;
 	}
@@ -967,11 +1014,11 @@ static ssize_t intel_vgpu_rw(struct mdev_device *mdev, char *buf,
 	case VFIO_PCI_ROM_REGION_INDEX:
 		break;
 	default:
-		if (index >= VFIO_PCI_NUM_REGIONS + vgpu->vdev.num_regions)
+		if (index >= VFIO_PCI_NUM_REGIONS + vdev->num_regions)
 			return -EINVAL;
 
 		index -= VFIO_PCI_NUM_REGIONS;
-		return vgpu->vdev.region[index].ops->rw(vgpu, buf, count,
+		return vdev->region[index].ops->rw(vgpu, buf, count,
 				ppos, is_write);
 	}
 
@@ -1224,7 +1271,7 @@ static int intel_vgpu_set_msi_trigger(struct intel_vgpu *vgpu,
 			gvt_vgpu_err("eventfd_ctx_fdget failed\n");
 			return PTR_ERR(trigger);
 		}
-		vgpu->vdev.msi_trigger = trigger;
+		kvmgt_vdev(vgpu)->msi_trigger = trigger;
 	} else if ((flags & VFIO_IRQ_SET_DATA_NONE) && !count)
 		intel_vgpu_release_msi_eventfd_ctx(vgpu);
 
@@ -1276,6 +1323,7 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
 			     unsigned long arg)
 {
 	struct intel_vgpu *vgpu = mdev_get_drvdata(mdev);
+	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
 	unsigned long minsz;
 
 	gvt_dbg_core("vgpu%d ioctl, cmd: %d\n", vgpu->id, cmd);
@@ -1294,7 +1342,7 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
 		info.flags = VFIO_DEVICE_FLAGS_PCI;
 		info.flags |= VFIO_DEVICE_FLAGS_RESET;
 		info.num_regions = VFIO_PCI_NUM_REGIONS +
-				vgpu->vdev.num_regions;
+				vdev->num_regions;
 		info.num_irqs = VFIO_PCI_NUM_IRQS;
 
 		return copy_to_user((void __user *)arg, &info, minsz) ?
@@ -1385,22 +1433,22 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
 					.header.version = 1 };
 
 				if (info.index >= VFIO_PCI_NUM_REGIONS +
-						vgpu->vdev.num_regions)
+						vdev->num_regions)
 					return -EINVAL;
 				info.index =
 					array_index_nospec(info.index,
 							VFIO_PCI_NUM_REGIONS +
-							vgpu->vdev.num_regions);
+							vdev->num_regions);
 
 				i = info.index - VFIO_PCI_NUM_REGIONS;
 
 				info.offset =
 					VFIO_PCI_INDEX_TO_OFFSET(info.index);
-				info.size = vgpu->vdev.region[i].size;
-				info.flags = vgpu->vdev.region[i].flags;
+				info.size = vdev->region[i].size;
+				info.flags = vdev->region[i].flags;
 
-				cap_type.type = vgpu->vdev.region[i].type;
-				cap_type.subtype = vgpu->vdev.region[i].subtype;
+				cap_type.type = vdev->region[i].type;
+				cap_type.subtype = vdev->region[i].subtype;
 
 				ret = vfio_info_add_capability(&caps,
 							&cap_type.header,
@@ -1740,13 +1788,15 @@ static int kvmgt_guest_init(struct mdev_device *mdev)
 {
 	struct kvmgt_guest_info *info;
 	struct intel_vgpu *vgpu;
+	struct kvmgt_vdev *vdev;
 	struct kvm *kvm;
 
 	vgpu = mdev_get_drvdata(mdev);
 	if (handle_valid(vgpu->handle))
 		return -EEXIST;
 
-	kvm = vgpu->vdev.kvm;
+	vdev = kvmgt_vdev(vgpu);
+	kvm = vdev->kvm;
 	if (!kvm || kvm->mm != current->mm) {
 		gvt_vgpu_err("KVM is required to use Intel vGPU\n");
 		return -ESRCH;
@@ -1774,7 +1824,7 @@ static int kvmgt_guest_init(struct mdev_device *mdev)
 	info->debugfs_cache_entries = debugfs_create_ulong(
 						"kvmgt_nr_cache_entries",
 						0444, vgpu->debugfs,
-						&vgpu->vdev.nr_cache_entries);
+						&vdev->nr_cache_entries);
 	return 0;
 }
 
@@ -1791,9 +1841,17 @@ static bool kvmgt_guest_exit(struct kvmgt_guest_info *info)
 	return true;
 }
 
-static int kvmgt_attach_vgpu(void *vgpu, unsigned long *handle)
+static int kvmgt_attach_vgpu(void *p_vgpu, unsigned long *handle)
 {
-	/* nothing to do here */
+	struct intel_vgpu *vgpu = (struct intel_vgpu *)p_vgpu;
+
+	vgpu->vdev = kzalloc(sizeof(struct kvmgt_vdev), GFP_KERNEL);
+
+	if (!vgpu->vdev)
+		return -ENOMEM;
+
+	kvmgt_vdev(vgpu)->vgpu = vgpu;
+
 	return 0;
 }
 
@@ -1801,29 +1859,34 @@ static void kvmgt_detach_vgpu(void *p_vgpu)
 {
 	int i;
 	struct intel_vgpu *vgpu = (struct intel_vgpu *)p_vgpu;
+	struct kvmgt_vdev *vdev = kvmgt_vdev(vgpu);
 
-	if (!vgpu->vdev.region)
+	if (!vdev->region)
 		return;
 
-	for (i = 0; i < vgpu->vdev.num_regions; i++)
-		if (vgpu->vdev.region[i].ops->release)
-			vgpu->vdev.region[i].ops->release(vgpu,
-					&vgpu->vdev.region[i]);
-	vgpu->vdev.num_regions = 0;
-	kfree(vgpu->vdev.region);
-	vgpu->vdev.region = NULL;
+	for (i = 0; i < vdev->num_regions; i++)
+		if (vdev->region[i].ops->release)
+			vdev->region[i].ops->release(vgpu,
+					&vdev->region[i]);
+	vdev->num_regions = 0;
+	kfree(vdev->region);
+	vdev->region = NULL;
+
+	kfree(vdev);
 }
 
 static int kvmgt_inject_msi(unsigned long handle, u32 addr, u16 data)
 {
 	struct kvmgt_guest_info *info;
 	struct intel_vgpu *vgpu;
+	struct kvmgt_vdev *vdev;
 
 	if (!handle_valid(handle))
 		return -ESRCH;
 
 	info = (struct kvmgt_guest_info *)handle;
 	vgpu = info->vgpu;
+	vdev = kvmgt_vdev(vgpu);
 
 	/*
 	 * When guest is poweroff, msi_trigger is set to NULL, but vgpu's
@@ -1834,10 +1897,10 @@ static int kvmgt_inject_msi(unsigned long handle, u32 addr, u16 data)
 	 * enabled by guest. so if msi_trigger is null, success is still
 	 * returned and don't inject interrupt into guest.
 	 */
-	if (vgpu->vdev.msi_trigger == NULL)
+	if (vdev->msi_trigger == NULL)
 		return 0;
 
-	if (eventfd_signal(vgpu->vdev.msi_trigger, 1) == 1)
+	if (eventfd_signal(vdev->msi_trigger, 1) == 1)
 		return 0;
 
 	return -EFAULT;
@@ -1863,26 +1926,26 @@ static unsigned long kvmgt_gfn_to_pfn(unsigned long handle, unsigned long gfn)
 static int kvmgt_dma_map_guest_page(unsigned long handle, unsigned long gfn,
 		unsigned long size, dma_addr_t *dma_addr)
 {
-	struct kvmgt_guest_info *info;
 	struct intel_vgpu *vgpu;
+	struct kvmgt_vdev *vdev;
 	struct gvt_dma *entry;
 	int ret;
 
 	if (!handle_valid(handle))
 		return -EINVAL;
 
-	info = (struct kvmgt_guest_info *)handle;
-	vgpu = info->vgpu;
+	vgpu = ((struct kvmgt_guest_info *)handle)->vgpu;
+	vdev = kvmgt_vdev(vgpu);
 
-	mutex_lock(&info->vgpu->vdev.cache_lock);
+	mutex_lock(&vdev->cache_lock);
 
-	entry = __gvt_cache_find_gfn(info->vgpu, gfn);
+	entry = __gvt_cache_find_gfn(vgpu, gfn);
 	if (!entry) {
 		ret = gvt_dma_map_page(vgpu, gfn, dma_addr, size);
 		if (ret)
 			goto err_unlock;
 
-		ret = __gvt_cache_add(info->vgpu, gfn, *dma_addr, size);
+		ret = __gvt_cache_add(vgpu, gfn, *dma_addr, size);
 		if (ret)
 			goto err_unmap;
 	} else if (entry->size != size) {
@@ -1894,7 +1957,7 @@ static int kvmgt_dma_map_guest_page(unsigned long handle, unsigned long gfn,
 		if (ret)
 			goto err_unlock;
 
-		ret = __gvt_cache_add(info->vgpu, gfn, *dma_addr, size);
+		ret = __gvt_cache_add(vgpu, gfn, *dma_addr, size);
 		if (ret)
 			goto err_unmap;
 	} else {
@@ -1902,19 +1965,20 @@ static int kvmgt_dma_map_guest_page(unsigned long handle, unsigned long gfn,
 		*dma_addr = entry->dma_addr;
 	}
 
-	mutex_unlock(&info->vgpu->vdev.cache_lock);
+	mutex_unlock(&vdev->cache_lock);
 	return 0;
 
 err_unmap:
 	gvt_dma_unmap_page(vgpu, gfn, *dma_addr, size);
 err_unlock:
-	mutex_unlock(&info->vgpu->vdev.cache_lock);
+	mutex_unlock(&vdev->cache_lock);
 	return ret;
 }
 
 static int kvmgt_dma_pin_guest_page(unsigned long handle, dma_addr_t dma_addr)
 {
 	struct kvmgt_guest_info *info;
+	struct kvmgt_vdev *vdev;
 	struct gvt_dma *entry;
 	int ret = 0;
 
@@ -1922,14 +1986,15 @@ static int kvmgt_dma_pin_guest_page(unsigned long handle, dma_addr_t dma_addr)
 		return -ENODEV;
 
 	info = (struct kvmgt_guest_info *)handle;
+	vdev = kvmgt_vdev(info->vgpu);
 
-	mutex_lock(&info->vgpu->vdev.cache_lock);
+	mutex_lock(&vdev->cache_lock);
 	entry = __gvt_cache_find_dma_addr(info->vgpu, dma_addr);
 	if (entry)
 		kref_get(&entry->ref);
 	else
 		ret = -ENOMEM;
-	mutex_unlock(&info->vgpu->vdev.cache_lock);
+	mutex_unlock(&vdev->cache_lock);
 
 	return ret;
 }
@@ -1945,19 +2010,21 @@ static void __gvt_dma_release(struct kref *ref)
 
 static void kvmgt_dma_unmap_guest_page(unsigned long handle, dma_addr_t dma_addr)
 {
-	struct kvmgt_guest_info *info;
+	struct intel_vgpu *vgpu;
+	struct kvmgt_vdev *vdev;
 	struct gvt_dma *entry;
 
 	if (!handle_valid(handle))
 		return;
 
-	info = (struct kvmgt_guest_info *)handle;
+	vgpu = ((struct kvmgt_guest_info *)handle)->vgpu;
+	vdev = kvmgt_vdev(vgpu);
 
-	mutex_lock(&info->vgpu->vdev.cache_lock);
-	entry = __gvt_cache_find_dma_addr(info->vgpu, dma_addr);
+	mutex_lock(&vdev->cache_lock);
+	entry = __gvt_cache_find_dma_addr(vgpu, dma_addr);
 	if (entry)
 		kref_put(&entry->ref, __gvt_dma_release);
-	mutex_unlock(&info->vgpu->vdev.cache_lock);
+	mutex_unlock(&vdev->cache_lock);
 }
 
 static int kvmgt_rw_gpa(unsigned long handle, unsigned long gpa,
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 1/4] drm/i915/gvt: make gvt oblivious of kvmgt data structures
  2020-01-20  6:33       ` Zhenyu Wang
  2020-01-20 17:25         ` [PATCH] " Julian Stecklina
@ 2020-01-20 17:28         ` Julian Stecklina
  1 sibling, 0 replies; 16+ messages in thread
From: Julian Stecklina @ 2020-01-20 17:28 UTC (permalink / raw)
  To: Zhenyu Wang; +Cc: zhiyuan.lv, intel-gvt-dev, linux-kernel, dri-devel, hang.yuan


On Mon, 2020-01-20 at 14:33 +0800, Zhenyu Wang wrote:
> Hmm, I failed to apply this one. Could you refresh it against the gvt-staging
> branch on https://github.com/intel/gvt-linux?

Done. I've sent out the rebased (and re-tested) patch.

Julian


^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2020-01-21  8:32 UTC | newest]

Thread overview: 16+ messages
     [not found] <4079ce7c26a2d2a3c7e0828ed1ea6008d6e2c805.camel@cyberus-technology.de>
2020-01-09 17:13 ` [RFC PATCH 0/4] Support for out-of-tree hypervisor modules in i915/gvt Julian Stecklina
2020-01-09 17:13   ` [RFC PATCH 1/4] drm/i915/gvt: make gvt oblivious of kvmgt data structures Julian Stecklina
2020-01-20  6:22     ` Zhenyu Wang
2020-01-20  6:33       ` Zhenyu Wang
2020-01-20 17:25         ` [PATCH] " Julian Stecklina
2020-01-20 17:28         ` [RFC PATCH 1/4] " Julian Stecklina
2020-01-09 17:13   ` [RFC PATCH 2/4] drm/i915/gvt: remove unused vblank_done completion Julian Stecklina
2020-01-20  6:23     ` Zhenyu Wang
2020-01-09 17:13   ` [RFC PATCH 3/4] drm/i915/gvt: define a public interface to gvt Julian Stecklina
2020-01-09 17:13   ` [RFC PATCH 4/4] drm/i915/gvt: move public gvt headers out into global include Julian Stecklina
2020-01-15 15:22     ` Greg KH
2020-01-16 14:13       ` Julian Stecklina
2020-01-16 14:23         ` Greg KH
2020-01-16 15:05           ` Julian Stecklina
2020-01-16 19:48             ` Greg KH
2020-01-17  2:15         ` Zhenyu Wang
