* [PATCH v1 0/4] Add support for Keem Bay VPU DRM driver
@ 2020-10-09 11:57 kuhanh.murugasen.krishnan
  2020-10-09 11:57 ` [PATCH v1 1/4] drm: Add Keem Bay VPU codec DRM kuhanh.murugasen.krishnan
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: kuhanh.murugasen.krishnan @ 2020-10-09 11:57 UTC (permalink / raw)
  To: dri-devel

From: "Murugasen Krishnan, Kuhanh" <kuhanh.murugasen.krishnan@intel.com>

This is a new DRM media codec driver for Intel's Keem Bay SoC, which
integrates VeriSilicon's Hantro Video Processor Unit (VPU) IP. The SoC
couples an Arm Cortex-A53 CPU with an Intel Movidius VPU.

The Hantro VPU IP is a family of video decoder and encoder semiconductor IP
cores that can be flexibly configured for video surveillance, multimedia
consumer products, the Internet of Things, cloud services, data centers,
aerial photography and recorders, providing video transcoding and
multi-channel HD video encoding and decoding.

The Hantro VPU IP consists of the Hantro VC8000D decoder and the Hantro VC8000E encoder.

Hantro VC8000D provides 4K decoding in a minimal-silicon, single-core solution
supporting the HEVC and H.264 video formats. Key features:
* HEVC Main10 and Main Profiles up to Level 5.2
* HEVC Main Still Profile
* H.264 Main and High Profiles up to Level 5.2
* HEVC, H.264 and JPEG decoding up to 4K@60fps
* 8-channel 1080p@30fps decoding

Hantro VC8000E provides 4K encoding in a minimal-silicon, single-core solution
supporting the HEVC and H.264 video formats. Key features:
* HEVC Main10, Main and Main Still Profiles, Level 5.1
* H.264 Baseline, Main, High and High10 Profiles, Level 5.2
* JPEG encoding at resolutions up to 16K x 16K
* HEVC/H.264 encoding up to 4K@60fps on a single core
* 8-channel 1080p@30fps encoding
* B-frame support for higher compression ratios
* Reference Frame Compression

This driver has been tested on the Keem Bay EVM board, the reference board
for the Keem Bay SoC.
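
For reference, below is a minimal userspace sketch of the dumb-buffer
allocation path this driver exposes. The use of the handle field as an input
DDR-channel selector follows patch 1 of this series; the render node index is
an assumption for illustration only.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <drm/drm.h>	/* DRM_IOCTL_MODE_CREATE_DUMB and friends */

int main(void)
{
	struct drm_mode_create_dumb creq;
	struct drm_mode_map_dumb mreq;
	/* The node index is board-dependent; renderD128 is an example. */
	int fd = open("/dev/dri/renderD128", O_RDWR);

	if (fd < 0)
		return 1;

	memset(&creq, 0, sizeof(creq));
	creq.width = 1920;
	creq.height = 1080;
	creq.bpp = 32;
	creq.handle = 0;	/* DDR0_CHANNEL: selects the allocation pool */
	if (ioctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &creq) == 0) {
		memset(&mreq, 0, sizeof(mreq));
		mreq.handle = creq.handle;	/* now the real GEM handle */
		if (ioctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &mreq) == 0)
			printf("pitch %u size %llu mmap offset 0x%llx\n",
			       creq.pitch,
			       (unsigned long long)creq.size,
			       (unsigned long long)mreq.offset);
	}
	close(fd);
	return 0;
}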

Device tree patches are under review here:
https://lore.kernel.org/linux-arm-kernel/20200708175020.194436-1-daniele.alessandrelli@linux.intel.com/T/

Murugasen Krishnan, Kuhanh (4):
  drm: Add Keem Bay VPU codec DRM
  drm: hantro: Keem Bay VPU DRM encoder
  drm: hantro: Keem Bay VPU DRM decoder
  drm: hantro: Keem Bay VPU DRM build files

 drivers/gpu/drm/Kconfig                  |    2 +
 drivers/gpu/drm/Makefile                 |    1 +
 drivers/gpu/drm/hantro/Kconfig           |   21 +
 drivers/gpu/drm/hantro/Makefile          |    6 +
 drivers/gpu/drm/hantro/hantro_dec.c      | 1441 +++++++++++++++++++++++++
 drivers/gpu/drm/hantro/hantro_dec.h      |   59 ++
 drivers/gpu/drm/hantro/hantro_drm.c      | 1673 ++++++++++++++++++++++++++++++
 drivers/gpu/drm/hantro/hantro_drm.h      |  208 ++++
 drivers/gpu/drm/hantro/hantro_dwl_defs.h |  101 ++
 drivers/gpu/drm/hantro/hantro_enc.c      |  738 +++++++++++++
 drivers/gpu/drm/hantro/hantro_enc.h      |   66 ++
 drivers/gpu/drm/hantro/hantro_fence.c    |  284 +++++
 drivers/gpu/drm/hantro/hantro_priv.h     |  106 ++
 13 files changed, 4706 insertions(+)
 create mode 100644 drivers/gpu/drm/hantro/Kconfig
 create mode 100644 drivers/gpu/drm/hantro/Makefile
 create mode 100644 drivers/gpu/drm/hantro/hantro_dec.c
 create mode 100644 drivers/gpu/drm/hantro/hantro_dec.h
 create mode 100644 drivers/gpu/drm/hantro/hantro_drm.c
 create mode 100644 drivers/gpu/drm/hantro/hantro_drm.h
 create mode 100644 drivers/gpu/drm/hantro/hantro_dwl_defs.h
 create mode 100644 drivers/gpu/drm/hantro/hantro_enc.c
 create mode 100644 drivers/gpu/drm/hantro/hantro_enc.h
 create mode 100644 drivers/gpu/drm/hantro/hantro_fence.c
 create mode 100644 drivers/gpu/drm/hantro/hantro_priv.h

-- 
1.9.1


* [PATCH v1 1/4] drm: Add Keem Bay VPU codec DRM
  2020-10-09 11:57 [PATCH v1 0/4] Add support for Keem Bay VPU DRM driver kuhanh.murugasen.krishnan
@ 2020-10-09 11:57 ` kuhanh.murugasen.krishnan
  2020-10-09 22:15   ` Daniel Vetter
  2020-10-09 11:57 ` [PATCH v1 2/4] drm: hantro: Keem Bay VPU DRM encoder kuhanh.murugasen.krishnan
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 7+ messages in thread
From: kuhanh.murugasen.krishnan @ 2020-10-09 11:57 UTC (permalink / raw)
  To: dri-devel

From: "Murugasen Krishnan, Kuhanh" <kuhanh.murugasen.krishnan@intel.com>

This is a new DRM media codec driver for Intel's Keem Bay SoC, which
integrates VeriSilicon's Hantro Video Processor Unit (VPU) IP. The SoC
couples an Arm Cortex-A53 CPU with an Intel Movidius VPU.

The Hantro VPU IP is a family of video decoder and encoder semiconductor IP
cores that can be flexibly configured for video surveillance, multimedia
consumer products, the Internet of Things, cloud services, data centers,
aerial photography and recorders, providing video transcoding and
multi-channel HD video encoding and decoding.

The Hantro VPU IP consists of the Hantro VC8000D decoder and the Hantro VC8000E encoder.

Signed-off-by: Murugasen Krishnan, Kuhanh <kuhanh.murugasen.krishnan@intel.com>
Acked-by: Mark Gross <mgross@linux.intel.com>
---
 drivers/gpu/drm/hantro/hantro_drm.c   | 1673 +++++++++++++++++++++++++++++++++
 drivers/gpu/drm/hantro/hantro_drm.h   |  208 ++++
 drivers/gpu/drm/hantro/hantro_fence.c |  284 ++++++
 drivers/gpu/drm/hantro/hantro_priv.h  |  106 +++
 4 files changed, 2271 insertions(+)
 create mode 100644 drivers/gpu/drm/hantro/hantro_drm.c
 create mode 100644 drivers/gpu/drm/hantro/hantro_drm.h
 create mode 100644 drivers/gpu/drm/hantro/hantro_fence.c
 create mode 100644 drivers/gpu/drm/hantro/hantro_priv.h

diff --git a/drivers/gpu/drm/hantro/hantro_drm.c b/drivers/gpu/drm/hantro/hantro_drm.c
new file mode 100644
index 0000000..50ccddf
--- /dev/null
+++ b/drivers/gpu/drm/hantro/hantro_drm.c
@@ -0,0 +1,1673 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ *    Hantro driver main DRM file
+ *
+ *    Copyright (c) 2017 - 2020, VeriSilicon Inc.
+ *    Copyright (c) 2020 Intel Corporation
+ */
+
+#include <linux/io.h>
+#include <linux/sched.h>
+#include <linux/uaccess.h>
+#include <linux/errno.h>
+#include <linux/fs.h>
+#include <linux/init.h>
+#include <linux/ioport.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/mm.h>
+#include <linux/shmem_fs.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include <linux/dma-contiguous.h>
+#include <drm/drm_modeset_helper.h>
+/* hantro header */
+#include "hantro_priv.h"
+#include "hantro_enc.h"
+#include "hantro_dec.h"
+/* for dynamic ddr */
+#include <linux/dma-mapping.h>
+#include <linux/of_fdt.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_device.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/cma.h>
+
+struct hantro_device_handle hantro_dev;
+
+/* struct used for dynamic ddr allocations */
+struct hantro_mem ddr1;
+struct device *ddr_dev;
+
+static u32 hantro_vblank_no_hw_counter(struct drm_device *dev,
+				       unsigned int pipe)
+{
+	return 0;
+}
+
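+/*
+ * Track each allocation in the client's private idr so that postclose
+ * can reclaim anything the process failed to free.
+ */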
+static int hantro_recordmem(struct drm_file *priv, void *obj, int size)
+{
+	int ret;
+	struct idr *list = (struct idr *)priv->driver_priv;
+
+	ret = idr_alloc(list, obj, 1, 0, GFP_KERNEL);
+
+	return (ret > 0) ? 0 : ret;
+}
+
+static void hantro_unrecordmem(struct drm_file *priv, void *obj)
+{
+	int id;
+	struct idr *list = (struct idr *)priv->driver_priv;
+	void *gemobj;
+
+	idr_for_each_entry(list, gemobj, id) {
+		if (gemobj == obj) {
+			idr_remove(list, id);
+			break;
+		}
+	}
+}
+
+static void hantro_drm_fb_destroy(struct drm_framebuffer *fb)
+{
+	struct hantro_drm_fb *vsi_fb = (struct hantro_drm_fb *)fb;
+	int i;
+
+	for (i = 0; i < 4; i++) {
+		if (vsi_fb->obj[i])
+			hantro_unref_drmobj(vsi_fb->obj[i]);
+	}
+
+	drm_framebuffer_cleanup(fb);
+	kfree(vsi_fb);
+}
+
+static int hantro_drm_fb_create_handle(struct drm_framebuffer *fb,
+				       struct drm_file *file_priv,
+				       unsigned int *handle)
+{
+	struct hantro_drm_fb *vsi_fb = (struct hantro_drm_fb *)fb;
+
+	return drm_gem_handle_create(file_priv, vsi_fb->obj[0], handle);
+}
+
+static int hantro_drm_fb_dirty(struct drm_framebuffer *fb,
+			       struct drm_file *file, unsigned int flags,
+			       unsigned int color, struct drm_clip_rect *clips,
+			       unsigned int num_clips)
+{
+	return 0;
+}
+
+static const struct drm_framebuffer_funcs hantro_drm_fb_funcs = {
+	.destroy = hantro_drm_fb_destroy,
+	.create_handle = hantro_drm_fb_create_handle,
+	.dirty = hantro_drm_fb_dirty,
+};
+
+static int hantro_gem_dumb_create_internal(struct drm_file *file_priv,
+					   struct drm_device *dev,
+					   struct drm_mode_create_dumb *args)
+{
+	int ret = 0;
+	struct drm_gem_hantro_object *cma_obj;
+	int min_pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
+	struct drm_gem_object *obj;
+
+	if (mutex_lock_interruptible(&dev->struct_mutex))
+		return -EBUSY;
+	cma_obj = kzalloc(sizeof(*cma_obj), GFP_KERNEL);
+	if (!cma_obj) {
+		ret = -ENOMEM;
+		goto out;
+	}
+	obj = &cma_obj->base;
+	args->pitch = ALIGN(min_pitch, 64);
+	args->size = PAGE_ALIGN((__u64)args->pitch * (__u64)args->height);
+
+	cma_obj->num_pages = args->size >> PAGE_SHIFT;
+	cma_obj->flag = 0;
+	cma_obj->pageaddr = NULL;
+	cma_obj->pages = NULL;
+	cma_obj->vaddr = NULL;
+
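+	/*
+	 * On input, args->handle carries the requested DDR channel
+	 * (DDR0_CHANNEL or DDR1_CHANNEL); it is overwritten with the
+	 * real GEM handle once the buffer has been created below.
+	 */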
+	if (args->handle == DDR0_CHANNEL) {
+		ddr_dev = dev->dev;
+		cma_obj->ddr_channel = DDR0_CHANNEL;
+	} else if (args->handle == DDR1_CHANNEL) {
+		ddr_dev = ddr1.dev;
+		cma_obj->ddr_channel = DDR1_CHANNEL;
+	}
+	cma_obj->vaddr = dma_alloc_coherent(ddr_dev, args->size,
+					    &cma_obj->paddr, GFP_KERNEL | GFP_DMA);
+	if (!cma_obj->vaddr) {
+		kfree(cma_obj);
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	ret = drm_gem_object_init(dev, obj, args->size);
+	if (ret) {
+		dma_free_coherent(ddr_dev, args->size, cma_obj->vaddr,
+				  cma_obj->paddr);
+		kfree(cma_obj);
+		goto out;
+	}
+
+	args->handle = 0;
+	ret = drm_gem_handle_create(file_priv, obj, &args->handle);
+	if (ret == 0)
+		ret = hantro_recordmem(file_priv, cma_obj, args->size);
+	if (ret) {
+		dma_free_coherent(ddr_dev, args->size, cma_obj->vaddr,
+				  cma_obj->paddr);
+		kfree(cma_obj);
+		goto out;
+	}
+	init_hantro_resv(&cma_obj->kresv, cma_obj);
+	cma_obj->handle = args->handle;
+out:
+	mutex_unlock(&dev->struct_mutex);
+
+	return ret;
+}
+
+static int hantro_gem_dumb_create(struct drm_device *dev, void *data,
+				  struct drm_file *file_priv)
+{
+	return hantro_gem_dumb_create_internal(file_priv, dev,
+					       (struct drm_mode_create_dumb *)data);
+}
+
+static int hantro_gem_dumb_map_offset(struct drm_file *file_priv,
+				      struct drm_device *dev, uint32_t handle,
+				      uint64_t *offset)
+{
+	struct drm_gem_object *obj;
+	int ret;
+
+	obj = hantro_gem_object_lookup(dev, file_priv, handle);
+	if (!obj)
+		return -EINVAL;
+
+	ret = drm_gem_create_mmap_offset(obj);
+	if (ret == 0)
+		*offset = drm_vma_node_offset_addr(&obj->vma_node);
+	hantro_unref_drmobj(obj);
+
+	return ret;
+}
+
+static int hantro_destroy_dumb(struct drm_device *dev, void *data,
+			       struct drm_file *file_priv)
+{
+	struct drm_mode_destroy_dumb *args = data;
+	struct drm_gem_object *obj;
+	struct drm_gem_hantro_object *cma_obj;
+
+	if (mutex_lock_interruptible(&dev->struct_mutex))
+		return -EBUSY;
+	obj = hantro_gem_object_lookup(dev, file_priv, args->handle);
+	if (!obj) {
+		mutex_unlock(&dev->struct_mutex);
+		return -EINVAL;
+	}
+
+	cma_obj = to_drm_gem_hantro_obj(obj);
+	if ((cma_obj->flag & HANTRO_GEM_FLAG_IMPORT) == 0)
+		hantro_unrecordmem(file_priv, cma_obj);
+
+	drm_gem_handle_delete(file_priv, args->handle);
+	hantro_unref_drmobj(obj);
+	mutex_unlock(&dev->struct_mutex);
+
+	return 0;
+}
+
+static int hantro_release_dumb(struct drm_device *dev,
+			       struct drm_file *file_priv, void *obj)
+{
+	struct drm_gem_object *gemobj = obj;
+	struct drm_gem_hantro_object *cma_obj;
+
+	cma_obj = to_drm_gem_hantro_obj(gemobj);
+
+	drm_gem_free_mmap_offset(&cma_obj->base);
+
+	if (cma_obj->flag & HANTRO_GEM_FLAG_EXPORT) {
+		drm_gem_handle_delete(file_priv, cma_obj->handle);
+		hantro_unref_drmobj(obj);
+		return 0;
+	}
+
+	drm_gem_object_release(gemobj);
+	drm_gem_handle_delete(file_priv, cma_obj->handle);
+
+	if (cma_obj->vaddr) {
+		if (cma_obj->ddr_channel == DDR0_CHANNEL)
+			ddr_dev = gemobj->dev->dev;
+		else if (cma_obj->ddr_channel == DDR1_CHANNEL)
+			ddr_dev = ddr1.dev;
+		dma_free_coherent(ddr_dev, cma_obj->base.size, cma_obj->vaddr,
+				  cma_obj->paddr);
+	}
+	dma_resv_fini(&cma_obj->kresv);
+	kfree(cma_obj);
+
+	return 0;
+}
+
+static int hantro_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+	int ret = 0;
+	struct drm_gem_object *obj = NULL;
+	struct drm_gem_hantro_object *cma_obj;
+	struct drm_vma_offset_node *node;
+	unsigned long page_num = vma_pages(vma);
+
+	if (mutex_lock_interruptible(&hantro_dev.drm_dev->struct_mutex))
+		return -EBUSY;
+	drm_vma_offset_lock_lookup(hantro_dev.drm_dev->vma_offset_manager);
+	node = drm_vma_offset_exact_lookup_locked(hantro_dev.drm_dev->vma_offset_manager,
+						  vma->vm_pgoff, vma_pages(vma));
+	if (likely(node)) {
+		obj = container_of(node, struct drm_gem_object, vma_node);
+		/* Hold this reference until the mapping is set up. */
+		if (!kref_get_unless_zero(&obj->refcount))
+			obj = NULL;
+	}
+	drm_vma_offset_unlock_lookup(hantro_dev.drm_dev->vma_offset_manager);
+
+	if (!obj) {
+		mutex_unlock(&hantro_dev.drm_dev->struct_mutex);
+		return -EINVAL;
+	}
+	cma_obj = to_drm_gem_hantro_obj(obj);
+
+	if (page_num > cma_obj->num_pages) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	if ((cma_obj->flag & HANTRO_GEM_FLAG_IMPORT) == 0) {
+		if (!cma_obj->vaddr) {
+			ret = -EINVAL;
+			goto out;
+		}
+		ret = drm_gem_mmap_obj(obj,
+				       drm_vma_node_size(node) << PAGE_SHIFT,
+				       vma);
+		if (ret)
+			goto out;
+	} else {
+		/* Imported buffers are mapped uncached. */
+		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+	}
+
+	vma->vm_pgoff = 0;
+	if (cma_obj->ddr_channel == DDR0_CHANNEL)
+		ddr_dev = hantro_dev.drm_dev->dev;
+	else if (cma_obj->ddr_channel == DDR1_CHANNEL)
+		ddr_dev = ddr1.dev;
+
+	if (dma_mmap_coherent(ddr_dev, vma, cma_obj->vaddr, cma_obj->paddr,
+			      page_num << PAGE_SHIFT)) {
+		ret = -EAGAIN;
+		goto out;
+	}
+
+	vma->vm_private_data = cma_obj;
+out:
+	hantro_unref_drmobj(obj);
+	mutex_unlock(&hantro_dev.drm_dev->struct_mutex);
+
+	return ret;
+}
+
+static int hantro_gem_open_obj(struct drm_gem_object *obj,
+			       struct drm_file *filp)
+{
+	return 0;
+}
+
+static int hantro_device_open(struct inode *inode, struct file *filp)
+{
+	int ret;
+
+	ret = drm_open(inode, filp);
+	hantrodec_open(inode, filp);
+
+	return ret;
+}
+
+static int hantro_device_release(struct inode *inode, struct file *filp)
+{
+	return drm_release(inode, filp);
+}
+
+static vm_fault_t hantro_vm_fault(struct vm_fault *vmf)
+{
+	return VM_FAULT_SIGBUS;
+}
+
+#ifndef virt_to_bus
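+/* Fallback stub: treats the kernel virtual address as the bus address. */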
+static inline unsigned long virt_to_bus(void *address)
+{
+	return (unsigned long)address;
+}
+#endif
+
+static struct sg_table *
+hantro_gem_prime_get_sg_table(struct drm_gem_object *obj)
+{
+	struct drm_gem_hantro_object *cma_obj = to_drm_gem_hantro_obj(obj);
+	struct sg_table *sgt;
+	int ret;
+
+	sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
+	if (!sgt)
+		return ERR_PTR(-ENOMEM);
+
+	if (cma_obj->ddr_channel == DDR0_CHANNEL)
+		ddr_dev = obj->dev->dev;
+	else if (cma_obj->ddr_channel == DDR1_CHANNEL)
+		ddr_dev = ddr1.dev;
+
+	ret = dma_get_sgtable(ddr_dev, sgt, cma_obj->vaddr, cma_obj->paddr,
+			      obj->size);
+	if (ret < 0)
+		goto out;
+
+	return sgt;
+
+out:
+	kfree(sgt);
+	return ERR_PTR(ret);
+}
+
+static struct drm_gem_object *
+hantro_gem_prime_import_sg_table(struct drm_device *dev,
+				 struct dma_buf_attachment *attach,
+				 struct sg_table *sgt)
+{
+	struct drm_gem_hantro_object *cma_obj;
+	struct drm_gem_object *obj;
+
+	cma_obj = kzalloc(sizeof(*cma_obj), GFP_KERNEL);
+	if (!cma_obj)
+		return ERR_PTR(-ENOMEM);
+
+	obj = &cma_obj->base;
+
+	if (sgt->nents > 1) {
+		/* check if the entries in the sg_table are contiguous */
+		dma_addr_t next_addr = sg_dma_address(sgt->sgl);
+		struct scatterlist *s;
+		unsigned int i;
+
+		for_each_sg(sgt->sgl, s, sgt->nents, i) {
+			/*
+			 * sg_dma_address(s) is only valid for entries
+			 * that have sg_dma_len(s) != 0
+			 */
+			if (!sg_dma_len(s))
+				continue;
+
+			if (sg_dma_address(s) != next_addr) {
+				kfree(cma_obj);
+				return ERR_PTR(-EINVAL);
+			}
+
+			next_addr = sg_dma_address(s) + sg_dma_len(s);
+		}
+	}
+	if (drm_gem_object_init(dev, obj, attach->dmabuf->size) != 0) {
+		kfree(cma_obj);
+		return ERR_PTR(-ENOMEM);
+	}
+	cma_obj->paddr = sg_dma_address(sgt->sgl);
+	cma_obj->vaddr = dma_buf_vmap(attach->dmabuf);
+	cma_obj->sgt = sgt;
+	cma_obj->flag |= HANTRO_GEM_FLAG_IMPORT;
+	cma_obj->num_pages = attach->dmabuf->size >> PAGE_SHIFT;
+
+	return obj;
+}
+
+static void *hantro_gem_prime_vmap(struct drm_gem_object *obj)
+{
+	struct drm_gem_hantro_object *cma_obj = to_drm_gem_hantro_obj(obj);
+
+	return cma_obj->vaddr;
+}
+
+static void hantro_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+{
+}
+
+static int hantro_gem_prime_mmap(struct drm_gem_object *obj,
+				 struct vm_area_struct *vma)
+{
+	struct drm_gem_hantro_object *cma_obj;
+	unsigned long page_num = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
+	int ret = 0;
+
+	cma_obj = to_drm_gem_hantro_obj(obj);
+
+	if (page_num > cma_obj->num_pages)
+		return -EINVAL;
+
+	if ((cma_obj->flag & HANTRO_GEM_FLAG_IMPORT) != 0)
+		return -EINVAL;
+
+	if (!cma_obj->vaddr)
+		return -EINVAL;
+
+	ret = drm_gem_mmap_obj(obj, obj->size, vma);
+	if (ret < 0)
+		return ret;
+
+	vma->vm_flags &= ~VM_PFNMAP;
+	vma->vm_pgoff = 0;
+
+	if (cma_obj->ddr_channel == DDR0_CHANNEL)
+		ddr_dev = obj->dev->dev;
+	else if (cma_obj->ddr_channel == DDR1_CHANNEL)
+		ddr_dev = ddr1.dev;
+
+	if (dma_mmap_coherent(ddr_dev, vma, cma_obj->vaddr, cma_obj->paddr,
+			      vma->vm_end - vma->vm_start)) {
+		drm_gem_vm_close(vma);
+		return -EAGAIN;
+	}
+	vma->vm_private_data = cma_obj;
+
+	return ret;
+}
+
+static struct drm_gem_object *
+hantro_drm_gem_prime_import(struct drm_device *dev, struct dma_buf *dma_buf)
+{
+	return drm_gem_prime_import(dev, dma_buf);
+}
+
+static void hantro_gem_free_object(struct drm_gem_object *gem_obj)
+{
+	struct drm_gem_hantro_object *cma_obj;
+
+	cma_obj = to_drm_gem_hantro_obj(gem_obj);
+	if (cma_obj->pages) {
+		int i;
+
+		for (i = 0; i < cma_obj->num_pages; i++)
+			unref_page(cma_obj->pages[i]);
+
+		kfree(cma_obj->pages);
+		cma_obj->pages = NULL;
+	}
+
+	drm_gem_free_mmap_offset(gem_obj);
+	drm_gem_object_release(gem_obj);
+	if (gem_obj->import_attach) {
+		if (cma_obj->vaddr)
+			dma_buf_vunmap(gem_obj->import_attach->dmabuf,
+				       cma_obj->vaddr);
+		drm_prime_gem_destroy(gem_obj, cma_obj->sgt);
+	} else if (cma_obj->vaddr) {
+		if (cma_obj->ddr_channel == DDR0_CHANNEL)
+			ddr_dev = gem_obj->dev->dev;
+		else if (cma_obj->ddr_channel == DDR1_CHANNEL)
+			ddr_dev = ddr1.dev;
+		dma_free_coherent(ddr_dev, cma_obj->base.size, cma_obj->vaddr,
+				  cma_obj->paddr);
+	}
+
+	dma_resv_fini(&cma_obj->kresv);
+	kfree(cma_obj);
+}
+
+static int hantro_gem_close(struct drm_device *dev, void *data,
+			    struct drm_file *file_priv)
+{
+	struct drm_gem_close *args = data;
+	int ret = 0;
+	struct drm_gem_object *obj =
+		hantro_gem_object_lookup(dev, file_priv, args->handle);
+
+	if (!obj)
+		return -EINVAL;
+
+	ret = drm_gem_handle_delete(file_priv, args->handle);
+	hantro_unref_drmobj(obj);
+
+	return ret;
+}
+
+static int hantro_gem_open(struct drm_device *dev, void *data,
+			   struct drm_file *file_priv)
+{
+	int ret;
+	u32 handle;
+	struct drm_gem_open *openarg;
+	struct drm_gem_object *obj = NULL;
+
+	openarg = (struct drm_gem_open *)data;
+
+	mutex_lock(&dev->object_name_lock);
+	obj = idr_find(&dev->object_name_idr, (int)openarg->name);
+	if (obj)
+		hantro_ref_drmobj(obj);
+	mutex_unlock(&dev->object_name_lock);
+	if (!obj)
+		return -ENOENT;
+
+	ret = drm_gem_handle_create(file_priv, obj, &handle);
+	hantro_unref_drmobj(obj);
+	if (ret)
+		return ret;
+
+	openarg->handle = handle;
+	openarg->size = obj->size;
+
+	return ret;
+}
+
+static int hantro_map_vaddr(struct drm_device *dev, void *data,
+			    struct drm_file *file_priv)
+{
+	struct hantro_addrmap *pamap = data;
+	struct drm_gem_object *obj;
+	struct drm_gem_hantro_object *cma_obj;
+
+	obj = hantro_gem_object_lookup(dev, file_priv, pamap->handle);
+	if (!obj)
+		return -EINVAL;
+
+	cma_obj = to_drm_gem_hantro_obj(obj);
+	pamap->vm_addr = (unsigned long)cma_obj->vaddr;
+	pamap->phy_addr = cma_obj->paddr;
+	hantro_unref_drmobj(obj);
+
+	return 0;
+}
+
+static int hantro_gem_flink(struct drm_device *dev, void *data,
+			    struct drm_file *file_priv)
+{
+	struct drm_gem_flink *args = data;
+	struct drm_gem_object *obj;
+	int ret;
+
+	if (!drm_core_check_feature(dev, DRIVER_GEM))
+		return -ENODEV;
+
+	obj = hantro_gem_object_lookup(dev, file_priv, args->handle);
+	if (!obj)
+		return -ENOENT;
+
+	mutex_lock(&dev->object_name_lock);
+	/* prevent races with concurrent gem_close. */
+	if (obj->handle_count == 0) {
+		ret = -ENOENT;
+		goto err;
+	}
+
+	if (!obj->name) {
+		ret = idr_alloc(&dev->object_name_idr, obj, 1, 0, GFP_KERNEL);
+		if (ret < 0)
+			goto err;
+
+		obj->name = ret;
+	}
+
+	args->name = (uint64_t)obj->name;
+	ret = 0;
+
+err:
+	mutex_unlock(&dev->object_name_lock);
+	hantro_unref_drmobj(obj);
+	return ret;
+}
+
+static int hantro_map_dumb(struct drm_device *dev, void *data,
+			   struct drm_file *file_priv)
+{
+	int ret;
+	struct drm_mode_map_dumb *temparg = (struct drm_mode_map_dumb *)data;
+
+	ret = hantro_gem_dumb_map_offset(file_priv, dev, temparg->handle,
+					 &temparg->offset);
+
+	return ret;
+}
+
+static int hantro_drm_open(struct drm_device *dev, struct drm_file *file)
+{
+	struct idr *ptr;
+
+	ptr = kzalloc(sizeof(*ptr), GFP_KERNEL);
+	if (!ptr)
+		return -ENOMEM;
+	idr_init(ptr);
+	file->driver_priv = ptr;
+
+	return 0;
+}
+
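+/*
+ * Per-file teardown: release every dumb buffer still recorded in the
+ * client's idr, then destroy the idr itself.
+ */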
+static void hantro_drm_postclose(struct drm_device *dev, struct drm_file *file)
+{
+	int id;
+	struct idr *cmalist = (struct idr *)file->driver_priv;
+	void *obj;
+
+	mutex_lock(&dev->struct_mutex);
+	if (file->driver_priv) {
+		idr_for_each_entry(cmalist, obj, id) {
+			if (obj) {
+				hantro_release_dumb(dev, file, obj);
+				idr_remove(cmalist, id);
+			}
+		}
+		idr_destroy(cmalist);
+		kfree(file->driver_priv);
+		file->driver_priv = NULL;
+	}
+	mutex_unlock(&dev->struct_mutex);
+}
+
+static int hantro_handle_to_fd(struct drm_device *dev, void *data,
+			       struct drm_file *file_priv)
+{
+	int ret;
+	struct drm_prime_handle *primeargs = (struct drm_prime_handle *)data;
+	struct drm_gem_object *obj;
+	struct drm_gem_hantro_object *cma_obj;
+
+	obj = hantro_gem_object_lookup(dev, file_priv, primeargs->handle);
+	if (!obj)
+		return -ENOENT;
+
+	ret = drm_gem_prime_handle_to_fd(dev, file_priv, primeargs->handle,
+					 primeargs->flags, &primeargs->fd);
+
+	if (ret == 0) {
+		cma_obj = to_drm_gem_hantro_obj(obj);
+		cma_obj->flag |= HANTRO_GEM_FLAG_EXPORT;
+	}
+	hantro_unref_drmobj(obj);
+
+	return ret;
+}
+
+static int hantro_fd_to_handle(struct drm_device *dev, void *data,
+			       struct drm_file *file_priv)
+{
+	struct drm_prime_handle *primeargs = (struct drm_prime_handle *)data;
+
+	return drm_gem_prime_fd_to_handle(dev, file_priv, primeargs->fd,
+					  &primeargs->handle);
+}
+
+static int hantro_fb_create2(struct drm_device *dev, void *data,
+			     struct drm_file *file_priv)
+{
+	struct drm_mode_fb_cmd2 *mode_cmd = (struct drm_mode_fb_cmd2 *)data;
+	struct hantro_drm_fb *vsifb;
+	struct drm_gem_object *objs[4];
+	struct drm_gem_object *obj;
+	const struct drm_format_info *info = drm_get_format_info(dev, mode_cmd);
+	unsigned int hsub;
+	unsigned int vsub;
+	int num_planes;
+	int ret;
+	int i;
+
+	hsub = info->hsub;
+	vsub = info->vsub;
+	num_planes = min_t(int, info->num_planes, 4);
+	for (i = 0; i < num_planes; i++) {
+		unsigned int width = mode_cmd->width / (i ? hsub : 1);
+		unsigned int height = mode_cmd->height / (i ? vsub : 1);
+		unsigned int min_size;
+
+		obj = hantro_gem_object_lookup(dev, file_priv,
+					       mode_cmd->handles[i]);
+		if (!obj) {
+			ret = -ENXIO;
+			goto err_gem_object_unreference;
+		}
+		min_size = (height - 1) * mode_cmd->pitches[i] +
+			   mode_cmd->offsets[i] + width * info->cpp[i];
+		if (obj->size < min_size) {
+			hantro_unref_drmobj(obj);
+			ret = -EINVAL;
+			goto err_gem_object_unreference;
+		}
+		/* The framebuffer keeps this lookup reference until destroy. */
+		objs[i] = obj;
+	}
+	vsifb = kzalloc(sizeof(*vsifb), GFP_KERNEL);
+	if (!vsifb) {
+		ret = -ENOMEM;
+		goto err_gem_object_unreference;
+	}
+	drm_helper_mode_fill_fb_struct(dev, &vsifb->fb, mode_cmd);
+	for (i = 0; i < num_planes; i++)
+		vsifb->obj[i] = objs[i];
+	ret = drm_framebuffer_init(dev, &vsifb->fb, &hantro_drm_fb_funcs);
+	if (ret) {
+		kfree(vsifb);
+		goto err_gem_object_unreference;
+	}
+	return 0;
+
+err_gem_object_unreference:
+	while (--i >= 0)
+		hantro_unref_drmobj(objs[i]);
+	return ret;
+}
+
+static int hantro_fb_create(struct drm_device *dev, void *data,
+			    struct drm_file *file_priv)
+{
+	struct drm_mode_fb_cmd *or = data;
+	struct drm_mode_fb_cmd2 r = {};
+	int ret;
+
+	/* convert to new format and call new ioctl */
+	r.fb_id = or->fb_id;
+	r.width = or->width;
+	r.height = or->height;
+	r.pitches[0] = or->pitch;
+	r.pixel_format = drm_mode_legacy_fb_format(or->bpp, or->depth);
+	r.handles[0] = or->handle;
+
+	ret = hantro_fb_create2(dev, &r, file_priv);
+	if (ret)
+		return ret;
+
+	or->fb_id = r.fb_id;
+
+	return 0;
+}
+
+static int hantro_get_version(struct drm_device *dev, void *data,
+			      struct drm_file *file_priv)
+{
+	struct drm_version *pversion;
+	const char *sname = DRIVER_NAME;
+	const char *sdesc = DRIVER_DESC;
+	const char *sdate = DRIVER_DATE;
+
+	pversion = (struct drm_version *)data;
+	pversion->version_major = dev->driver->major;
+	pversion->version_minor = dev->driver->minor;
+	pversion->version_patchlevel = 0;
+	pversion->name_len = strlen(DRIVER_NAME);
+	pversion->desc_len = strlen(DRIVER_DESC);
+	pversion->date_len = strlen(DRIVER_DATE);
+
+	if (pversion->name)
+		if (copy_to_user(pversion->name, sname, pversion->name_len))
+			return -EFAULT;
+	if (pversion->date)
+		if (copy_to_user(pversion->date, sdate, pversion->date_len))
+			return -EFAULT;
+	if (pversion->desc)
+		if (copy_to_user(pversion->desc, sdesc, pversion->desc_len))
+			return -EFAULT;
+
+	return 0;
+}
+
+static int hantro_get_cap(struct drm_device *dev, void *data,
+			  struct drm_file *file_priv)
+{
+	struct drm_get_cap *req = (struct drm_get_cap *)data;
+
+	req->value = 0;
+	switch (req->capability) {
+	case DRM_CAP_PRIME:
+		req->value |= dev->driver->prime_fd_to_handle ?
+				      DRM_PRIME_CAP_IMPORT :
+				      0;
+		req->value |= dev->driver->prime_handle_to_fd ?
+				      DRM_PRIME_CAP_EXPORT :
+				      0;
+		return 0;
+	case DRM_CAP_DUMB_BUFFER:
+		req->value = 1;
+		break;
+	case DRM_CAP_VBLANK_HIGH_CRTC:
+		req->value = 1;
+		break;
+	case DRM_CAP_DUMB_PREFERRED_DEPTH:
+		req->value = dev->mode_config.preferred_depth;
+		break;
+	case DRM_CAP_DUMB_PREFER_SHADOW:
+		req->value = dev->mode_config.prefer_shadow;
+		break;
+	case DRM_CAP_ASYNC_PAGE_FLIP:
+		req->value = dev->mode_config.async_page_flip;
+		break;
+	case DRM_CAP_CURSOR_WIDTH:
+		if (dev->mode_config.cursor_width)
+			req->value = dev->mode_config.cursor_width;
+		else
+			req->value = 64;
+		break;
+	case DRM_CAP_CURSOR_HEIGHT:
+		if (dev->mode_config.cursor_height)
+			req->value = dev->mode_config.cursor_height;
+		else
+			req->value = 64;
+		break;
+	case DRM_CAP_ADDFB2_MODIFIERS:
+		req->value = dev->mode_config.allow_fb_modifiers;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
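+/*
+ * Debug ioctl: sleep for roughly ten seconds, then force-signal the
+ * buffer's exclusive fence so that blocked waiters can be exercised.
+ */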
+static int hantro_test(struct drm_device *dev, void *data,
+		       struct drm_file *file_priv)
+{
+	unsigned int *input = data;
+	int handle = *input;
+	struct drm_gem_object *obj;
+	struct dma_fence *pfence;
+	int ret = 10 * HZ; /* timeout */
+
+	obj = hantro_gem_object_lookup(dev, file_priv, handle);
+	if (!obj)
+		return -EINVAL;
+
+	if (!obj->dma_buf) {
+		hantro_unref_drmobj(obj);
+		return -EINVAL;
+	}
+	pfence = dma_resv_get_excl(obj->dma_buf->resv);
+	while (ret > 0) {
+		set_current_state(TASK_INTERRUPTIBLE);
+		ret = schedule_timeout(ret);
+	}
+	hantro_fence_signal(pfence);
+	hantro_unref_drmobj(obj);
+
+	return 0;
+}
+
+static int hantro_getprimeaddr(struct drm_device *dev, void *data,
+			       struct drm_file *file_priv)
+{
+	unsigned long *input = data;
+	int fd = *input;
+	struct drm_gem_hantro_object *cma_obj;
+	struct dma_buf *dma_buf;
+
+	dma_buf = dma_buf_get(fd);
+	if (IS_ERR(dma_buf))
+		return PTR_ERR(dma_buf);
+	cma_obj = (struct drm_gem_hantro_object *)dma_buf->priv;
+	*input = cma_obj->paddr;
+	dma_buf_put(dma_buf);
+
+	return 0;
+}
+
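+/* Translate a user VA inside a hantro GEM mapping to its physical address. */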
+static int hantro_ptr_to_phys(struct drm_device *dev, void *data,
+			      struct drm_file *file_priv)
+{
+	unsigned long *arg = data;
+	struct vm_area_struct *vma;
+	struct drm_gem_hantro_object *cma_obj;
+	unsigned long vaddr = *arg;
+
+	vma = find_vma(current->mm, vaddr);
+	if (!vma)
+		return -EFAULT;
+
+	cma_obj = (struct drm_gem_hantro_object *)vma->vm_private_data;
+	if (!cma_obj)
+		return -EFAULT;
+	if (cma_obj->base.dev != dev)
+		return -EFAULT;
+	if (vaddr < vma->vm_start ||
+	    vaddr >= vma->vm_start + (cma_obj->num_pages << PAGE_SHIFT))
+		return -EFAULT;
+
+	*arg = (phys_addr_t)(vaddr - vma->vm_start) + cma_obj->paddr;
+
+	return 0;
+}
+
+static int hantro_getmagic(struct drm_device *dev, void *data,
+			   struct drm_file *file_priv)
+{
+	struct drm_auth *auth = data;
+	int ret = 0;
+
+	mutex_lock(&dev->struct_mutex);
+	if (!file_priv->magic) {
+		ret = idr_alloc(&file_priv->master->magic_map, file_priv, 1, 0,
+				GFP_KERNEL);
+		if (ret >= 0)
+			file_priv->magic = ret;
+	}
+	auth->magic = file_priv->magic;
+	mutex_unlock(&dev->struct_mutex);
+
+	return ret < 0 ? ret : 0;
+}
+
+static int hantro_authmagic(struct drm_device *dev, void *data,
+			    struct drm_file *file_priv)
+{
+	struct drm_auth *auth = data;
+	struct drm_file *file;
+
+	mutex_lock(&dev->struct_mutex);
+	file = idr_find(&file_priv->master->magic_map, auth->magic);
+	if (file) {
+		file->authenticated = 1;
+		idr_replace(&file_priv->master->magic_map, NULL, auth->magic);
+	}
+	mutex_unlock(&dev->struct_mutex);
+
+	return file ? 0 : -EINVAL;
+}
+
+#define DRM_IOCTL_DEF(ioctl, _func, _flags)                                    \
+	[DRM_IOCTL_NR(ioctl)] = {                                              \
+		.cmd = ioctl, .func = _func, .flags = _flags, .name = #ioctl   \
+	}
+
+#define DRM_CONTROL_ALLOW 0
+/* Ioctl table */
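+/*
+ * Mirrors the DRM core table: legacy and KMS entries are stubbed out with
+ * drm_invalid_op(); only version/caps/magic, GEM, PRIME, framebuffer,
+ * dumb-buffer and the hantro-private commands have real handlers.
+ */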
+static const struct drm_ioctl_desc hantro_ioctls[] = {
+	DRM_IOCTL_DEF(DRM_IOCTL_VERSION, hantro_get_version,
+		      DRM_UNLOCKED | DRM_RENDER_ALLOW | DRM_CONTROL_ALLOW),
+	DRM_IOCTL_DEF(DRM_IOCTL_GET_UNIQUE, drm_invalid_op, DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_GET_MAGIC, hantro_getmagic, DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_IRQ_BUSID, drm_invalid_op,
+		      DRM_MASTER | DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF(DRM_IOCTL_GET_MAP, drm_invalid_op, DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_GET_CLIENT, drm_invalid_op, DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_GET_STATS, drm_invalid_op, DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_GET_CAP, hantro_get_cap,
+		      DRM_UNLOCKED | DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF(DRM_IOCTL_SET_CLIENT_CAP, drm_invalid_op, DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_SET_VERSION, drm_invalid_op,
+		      DRM_UNLOCKED | DRM_MASTER),
+	DRM_IOCTL_DEF(DRM_IOCTL_SET_UNIQUE, drm_invalid_op,
+		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF(DRM_IOCTL_BLOCK, drm_invalid_op,
+		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF(DRM_IOCTL_UNBLOCK, drm_invalid_op,
+		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF(DRM_IOCTL_AUTH_MAGIC, hantro_authmagic,
+		      DRM_AUTH | DRM_UNLOCKED | DRM_MASTER),
+	DRM_IOCTL_DEF(DRM_IOCTL_ADD_MAP, drm_invalid_op,
+		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF(DRM_IOCTL_RM_MAP, drm_invalid_op, DRM_AUTH),
+	DRM_IOCTL_DEF(DRM_IOCTL_SET_SAREA_CTX, drm_invalid_op,
+		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF(DRM_IOCTL_GET_SAREA_CTX, drm_invalid_op, DRM_AUTH),
+	DRM_IOCTL_DEF(DRM_IOCTL_SET_MASTER, drm_invalid_op,
+		      DRM_UNLOCKED | DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF(DRM_IOCTL_DROP_MASTER, drm_invalid_op,
+		      DRM_UNLOCKED | DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF(DRM_IOCTL_ADD_CTX, drm_invalid_op,
+		      DRM_AUTH | DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF(DRM_IOCTL_RM_CTX, drm_invalid_op,
+		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF(DRM_IOCTL_MOD_CTX, drm_invalid_op,
+		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF(DRM_IOCTL_GET_CTX, drm_invalid_op, DRM_AUTH),
+	DRM_IOCTL_DEF(DRM_IOCTL_SWITCH_CTX, drm_invalid_op,
+		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF(DRM_IOCTL_NEW_CTX, drm_invalid_op,
+		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF(DRM_IOCTL_RES_CTX, drm_invalid_op, DRM_AUTH),
+	DRM_IOCTL_DEF(DRM_IOCTL_ADD_DRAW, drm_invalid_op,
+		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF(DRM_IOCTL_RM_DRAW, drm_invalid_op,
+		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF(DRM_IOCTL_LOCK, drm_invalid_op, DRM_AUTH),
+	DRM_IOCTL_DEF(DRM_IOCTL_UNLOCK, drm_invalid_op, DRM_AUTH),
+	DRM_IOCTL_DEF(DRM_IOCTL_FINISH, drm_invalid_op, DRM_AUTH),
+	DRM_IOCTL_DEF(DRM_IOCTL_ADD_BUFS, drm_invalid_op,
+		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF(DRM_IOCTL_MARK_BUFS, drm_invalid_op,
+		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF(DRM_IOCTL_INFO_BUFS, drm_invalid_op, DRM_AUTH),
+	DRM_IOCTL_DEF(DRM_IOCTL_MAP_BUFS, drm_invalid_op, DRM_AUTH),
+	DRM_IOCTL_DEF(DRM_IOCTL_FREE_BUFS, drm_invalid_op, DRM_AUTH),
+	DRM_IOCTL_DEF(DRM_IOCTL_DMA, drm_invalid_op, DRM_AUTH),
+	DRM_IOCTL_DEF(DRM_IOCTL_CONTROL, drm_invalid_op,
+		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
+#if IS_ENABLED(CONFIG_AGP)
+	DRM_IOCTL_DEF(DRM_IOCTL_AGP_ACQUIRE, drm_invalid_op,
+		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF(DRM_IOCTL_AGP_RELEASE, drm_invalid_op,
+		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF(DRM_IOCTL_AGP_ENABLE, drm_invalid_op,
+		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF(DRM_IOCTL_AGP_INFO, drm_invalid_op, DRM_AUTH),
+	DRM_IOCTL_DEF(DRM_IOCTL_AGP_ALLOC, drm_invalid_op,
+		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF(DRM_IOCTL_AGP_FREE, drm_invalid_op,
+		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF(DRM_IOCTL_AGP_BIND, drm_invalid_op,
+		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF(DRM_IOCTL_AGP_UNBIND, drm_invalid_op,
+		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
+#endif
+	DRM_IOCTL_DEF(DRM_IOCTL_SG_ALLOC, drm_invalid_op,
+		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF(DRM_IOCTL_SG_FREE, drm_invalid_op,
+		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF(DRM_IOCTL_WAIT_VBLANK, drm_invalid_op, DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODESET_CTL, drm_invalid_op, 0),
+	DRM_IOCTL_DEF(DRM_IOCTL_UPDATE_DRAW, drm_invalid_op,
+		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
+	DRM_IOCTL_DEF(DRM_IOCTL_GEM_CLOSE, hantro_gem_close,
+		      DRM_UNLOCKED | DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF(DRM_IOCTL_GEM_FLINK, hantro_gem_flink,
+		      DRM_AUTH | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_GEM_OPEN, hantro_gem_open,
+		      DRM_AUTH | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETRESOURCES, drm_invalid_op,
+		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_PRIME_HANDLE_TO_FD, hantro_handle_to_fd,
+		      DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF(DRM_IOCTL_PRIME_FD_TO_HANDLE, hantro_fd_to_handle,
+		      DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPLANERESOURCES, drm_invalid_op,
+		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETCRTC, drm_invalid_op,
+		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETCRTC, drm_invalid_op,
+		      DRM_MASTER | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPLANE, drm_invalid_op,
+		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETPLANE, drm_invalid_op,
+		      DRM_MASTER | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_CURSOR, drm_invalid_op,
+		      DRM_MASTER | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETGAMMA, drm_invalid_op, DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETGAMMA, drm_invalid_op,
+		      DRM_MASTER | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETENCODER, drm_invalid_op,
+		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETCONNECTOR, drm_invalid_op,
+		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_ATTACHMODE, drm_invalid_op,
+		      DRM_MASTER | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_DETACHMODE, drm_invalid_op,
+		      DRM_MASTER | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPROPERTY, drm_invalid_op,
+		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETPROPERTY, drm_invalid_op,
+		      DRM_MASTER | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPROPBLOB, drm_invalid_op,
+		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETFB, drm_invalid_op,
+		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_ADDFB, hantro_fb_create,
+		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_ADDFB2, hantro_fb_create2,
+		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_RMFB, drm_invalid_op,
+		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_PAGE_FLIP, drm_invalid_op,
+		      DRM_MASTER | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_DIRTYFB, drm_invalid_op,
+		      DRM_MASTER | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_CREATE_DUMB, hantro_gem_dumb_create,
+		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_MAP_DUMB, hantro_map_dumb,
+		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_DESTROY_DUMB, hantro_destroy_dumb,
+		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_OBJ_GETPROPERTIES, drm_invalid_op,
+		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_OBJ_SETPROPERTY, drm_invalid_op,
+		      DRM_MASTER | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_CURSOR2, drm_invalid_op,
+		      DRM_MASTER | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_ATOMIC, drm_invalid_op,
+		      DRM_MASTER | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_CREATEPROPBLOB, drm_invalid_op,
+		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODE_DESTROYPROPBLOB, drm_invalid_op,
+		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+
+	/* hantro specific ioctls */
+	DRM_IOCTL_DEF(DRM_IOCTL_HANTRO_TESTCMD, hantro_test,
+		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_HANTRO_GETPADDR, hantro_map_vaddr,
+		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_HANTRO_TESTREADY, hantro_testbufvalid,
+		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_HANTRO_SETDOMAIN, hantro_setdomain,
+		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_HANTRO_ACQUIREBUF, hantro_acquirebuf,
+		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_HANTRO_RELEASEBUF, hantro_releasebuf,
+		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_HANTRO_GETPRIMEADDR, hantro_getprimeaddr,
+		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_HANTRO_PTR_PHYADDR, hantro_ptr_to_phys,
+		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+};
+
+#if DRM_CONTROL_ALLOW == 0
+#undef DRM_CONTROL_ALLOW
+#endif
+
+#define HANTRO_IOCTL_COUNT ARRAY_SIZE(hantro_ioctls)
+static long hantro_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+{
+	struct drm_file *file_priv = filp->private_data;
+	struct drm_device *dev = hantro_dev.drm_dev;
+	const struct drm_ioctl_desc *ioctl = NULL;
+	drm_ioctl_t *func;
+	unsigned int nr = DRM_IOCTL_NR(cmd);
+	int retcode = 0;
+	char stack_kdata[128];
+	char *kdata = stack_kdata;
+	unsigned int in_size, out_size;
+
+	if (drm_dev_is_unplugged(dev))
+		return -ENODEV;
+
+	out_size = _IOC_SIZE(cmd);
+	in_size = _IOC_SIZE(cmd);
+	if (in_size > sizeof(stack_kdata))
+		return -EINVAL;	/* all handled ioctls fit in stack_kdata */
+
+	if (in_size > 0) {
+		if (_IOC_DIR(cmd) & _IOC_READ)
+			retcode = !hantro_access_ok(VERIFY_WRITE, (void *)arg,
+						    in_size);
+		else if (_IOC_DIR(cmd) & _IOC_WRITE)
+			retcode = !hantro_access_ok(VERIFY_READ, (void *)arg,
+						    in_size);
+		if (retcode)
+			return -EFAULT;
+	}
+	if (nr >= DRM_IOCTL_NR(HX280ENC_IOC_START) &&
+	    nr <= DRM_IOCTL_NR(HX280ENC_IOC_END)) {
+		return hantroenc_ioctl(filp, cmd, arg);
+	}
+	if (nr >= DRM_IOCTL_NR(HANTRODEC_IOC_START) &&
+	    nr <= DRM_IOCTL_NR(HANTRODEC_IOC_END)) {
+		return hantrodec_ioctl(filp, cmd, arg);
+	}
+
+	if (nr >= HANTRO_IOCTL_COUNT)
+		return -EINVAL;
+	ioctl = &hantro_ioctls[nr];
+
+	if (copy_from_user(kdata, (void __user *)arg, in_size) != 0)
+		return -EFAULT;
+
+	if (cmd == DRM_IOCTL_MODE_SETCRTC ||
+	    cmd == DRM_IOCTL_MODE_GETRESOURCES ||
+	    cmd == DRM_IOCTL_SET_CLIENT_CAP || cmd == DRM_IOCTL_MODE_GETCRTC ||
+	    cmd == DRM_IOCTL_MODE_GETENCODER ||
+	    cmd == DRM_IOCTL_MODE_GETCONNECTOR || cmd == DRM_IOCTL_MODE_GETFB) {
+		retcode = drm_ioctl(filp, cmd, arg);
+		return retcode;
+	}
+	func = ioctl->func;
+	if (!func)
+		return -EINVAL;
+	retcode = func(dev, kdata, file_priv);
+
+	if (copy_to_user((void __user *)arg, kdata, out_size) != 0)
+		retcode = -EFAULT;
+
+	return retcode;
+}
+
+/* VFS methods */
+static const struct file_operations hantro_fops = {
+	.owner = THIS_MODULE,
+	.open = hantro_device_open,
+	.mmap = hantro_mmap,
+	.release = hantro_device_release,
+	.poll = drm_poll,
+	.read = drm_read,
+	.unlocked_ioctl = hantro_ioctl,
+	.compat_ioctl = drm_compat_ioctl,
+};
+
+void hantro_gem_vm_close(struct vm_area_struct *vma)
+{
+	struct drm_gem_hantro_object *obj =
+		(struct drm_gem_hantro_object *)vma->vm_private_data;
+	/* unmap callback */
+
+	if (obj->pages) {
+		int i;
+
+		for (i = 0; i < obj->num_pages; i++)
+			unref_page(obj->pages[i]);
+
+		kfree(obj->pages);
+		obj->pages = NULL;
+	}
+	drm_gem_vm_close(vma);
+}
+
+static void hantro_release(struct drm_device *dev)
+{
+}
+
+static void hantro_gem_dmabuf_release(struct dma_buf *dma_buf)
+{
+	return drm_gem_dmabuf_release(dma_buf);
+}
+
+static int hantro_gem_map_attach(struct dma_buf *dma_buf,
+				 struct dma_buf_attachment *attach)
+{
+	int ret;
+	struct drm_gem_hantro_object *cma_obj =
+		(struct drm_gem_hantro_object *)dma_buf->priv;
+
+	ret = drm_gem_map_attach(dma_buf, attach);
+	if (ret == 0)
+		cma_obj->flag |= HANTRO_GEM_FLAG_EXPORTUSED;
+
+	return ret;
+}
+
+static void hantro_gem_map_detach(struct dma_buf *dma_buf,
+				  struct dma_buf_attachment *attach)
+{
+	drm_gem_map_detach(dma_buf, attach);
+}
+
+static struct sg_table *
+hantro_gem_map_dma_buf(struct dma_buf_attachment *attach,
+		       enum dma_data_direction dir)
+{
+	return drm_gem_map_dma_buf(attach, dir);
+}
+
+static int hantro_gem_dmabuf_mmap(struct dma_buf *dma_buf,
+				  struct vm_area_struct *vma)
+{
+	return drm_gem_dmabuf_mmap(dma_buf, vma);
+}
+
+static void *hantro_gem_dmabuf_vmap(struct dma_buf *dma_buf)
+{
+	return drm_gem_dmabuf_vmap(dma_buf);
+}
+
+static const struct dma_buf_ops hantro_dmabuf_ops = {
+	.attach = hantro_gem_map_attach,
+	.detach = hantro_gem_map_detach,
+	.map_dma_buf = hantro_gem_map_dma_buf,
+	.unmap_dma_buf = drm_gem_unmap_dma_buf,
+	.release = hantro_gem_dmabuf_release,
+	.mmap = hantro_gem_dmabuf_mmap,
+	.vmap = hantro_gem_dmabuf_vmap,
+	.vunmap = drm_gem_dmabuf_vunmap,
+};
+
+static struct drm_driver hantro_drm_driver;
+static struct dma_buf *hantro_prime_export(struct drm_gem_object *obj,
+					   int flags)
+{
+	struct drm_gem_hantro_object *cma_obj;
+	struct dma_buf_export_info exp_info = {
+		.exp_name = KBUILD_MODNAME,
+		.owner = obj->dev->driver->fops->owner,
+		.ops = &hantro_dmabuf_ops,
+		.flags = flags,
+		.priv = obj,
+	};
+
+	cma_obj = to_drm_gem_hantro_obj(obj);
+	exp_info.resv = &cma_obj->kresv;
+	exp_info.size = cma_obj->num_pages << PAGE_SHIFT;
+
+	return drm_gem_dmabuf_export(obj->dev, &exp_info);
+}
+
+static void hantro_close_object(struct drm_gem_object *obj,
+				struct drm_file *file_priv)
+{
+	struct drm_gem_hantro_object *cma_obj;
+
+	cma_obj = to_drm_gem_hantro_obj(obj);
+	if (obj->dma_buf && (cma_obj->flag & HANTRO_GEM_FLAG_EXPORTUSED))
+		dma_buf_put(obj->dma_buf);
+}
+
+static int hantro_gem_prime_handle_to_fd(struct drm_device *dev,
+					 struct drm_file *filp, u32 handle,
+					 u32 flags, int *prime_fd)
+{
+	return drm_gem_prime_handle_to_fd(dev, filp, handle, flags, prime_fd);
+}
+
+static const struct vm_operations_struct hantro_drm_gem_cma_vm_ops = {
+	.open = drm_gem_vm_open,
+	.close = hantro_gem_vm_close,
+	.fault = hantro_vm_fault,
+};
+
+static struct drm_driver hantro_drm_driver = {
+	/* DRIVER_RENDER exposes a renderD node in addition to the primary node */
+	.driver_features = DRIVER_GEM | DRIVER_RENDER,
+	.get_vblank_counter = hantro_vblank_no_hw_counter,
+	.open = hantro_drm_open,
+	.postclose = hantro_drm_postclose,
+	.release = hantro_release,
+	.dumb_destroy = drm_gem_dumb_destroy,
+	.dumb_create = hantro_gem_dumb_create_internal,
+	.dumb_map_offset = hantro_gem_dumb_map_offset,
+	.gem_open_object = hantro_gem_open_obj,
+	.gem_close_object = hantro_close_object,
+	.gem_prime_export = hantro_prime_export,
+	.gem_prime_import = hantro_drm_gem_prime_import,
+	.prime_handle_to_fd = hantro_gem_prime_handle_to_fd,
+	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
+	.gem_prime_import_sg_table = hantro_gem_prime_import_sg_table,
+	.gem_prime_get_sg_table = hantro_gem_prime_get_sg_table,
+	.gem_prime_vmap = hantro_gem_prime_vmap,
+	.gem_prime_vunmap = hantro_gem_prime_vunmap,
+	.gem_prime_mmap = hantro_gem_prime_mmap,
+	.gem_free_object_unlocked = hantro_gem_free_object,
+	.gem_vm_ops = &hantro_drm_gem_cma_vm_ops,
+	.fops = &hantro_fops,
+	.name = DRIVER_NAME,
+	.desc = DRIVER_DESC,
+	.date = DRIVER_DATE,
+	.major = DRIVER_MAJOR,
+	.minor = DRIVER_MINOR,
+};
+
+static ssize_t bandwidth_dec_read_show(struct device *kdev,
+				       struct device_attribute *attr, char *buf)
+{
+	/*
+	 *  sys/bus/platform/drivers/hantro/xxxxx.vpu/bandwidth_dec_read
+	 *  Used to show bandwidth info to user space.
+	 *  Real data should be read from HW registers
+	 *  This file is read only.
+	 */
+	u32 bandwidth = hantrodec_readbandwidth(1);
+
+	return snprintf(buf, PAGE_SIZE, "%u\n", bandwidth);
+}
+
+static ssize_t bandwidth_dec_write_show(struct device *kdev,
+					struct device_attribute *attr, char *buf)
+{
+	u32 bandwidth = hantrodec_readbandwidth(0);
+
+	return snprintf(buf, PAGE_SIZE, "%u\n", bandwidth);
+}
+
+static ssize_t bandwidth_enc_read_show(struct device *kdev,
+				       struct device_attribute *attr, char *buf)
+{
+	u32 bandwidth = hantroenc_readbandwidth(1);
+
+	return snprintf(buf, PAGE_SIZE, "%u\n", bandwidth);
+}
+
+static ssize_t bandwidth_enc_write_show(struct device *kdev,
+					struct device_attribute *attr, char *buf)
+{
+	u32 bandwidth = hantroenc_readbandwidth(0);
+
+	return snprintf(buf, PAGE_SIZE, "%u\n", bandwidth);
+}
+
+static DEVICE_ATTR(bandwidth_dec_read, 0444, bandwidth_dec_read_show, NULL);
+static DEVICE_ATTR(bandwidth_dec_write, 0444, bandwidth_dec_write_show, NULL);
+static DEVICE_ATTR(bandwidth_enc_read, 0444, bandwidth_enc_read_show, NULL);
+static DEVICE_ATTR(bandwidth_enc_write, 0444, bandwidth_enc_write_show, NULL);
+
+static int hantro_create_sysfs_api(struct device *dev)
+{
+	int result;
+
+	result = device_create_file(dev, &dev_attr_bandwidth_dec_read);
+	if (result != 0)
+		return result;
+
+	result = device_create_file(dev, &dev_attr_bandwidth_dec_write);
+	if (result != 0) {
+		device_remove_file(dev, &dev_attr_bandwidth_dec_read);
+		return result;
+	}
+
+	result = device_create_file(dev, &dev_attr_bandwidth_enc_read);
+	if (result != 0) {
+		device_remove_file(dev, &dev_attr_bandwidth_dec_read);
+		device_remove_file(dev, &dev_attr_bandwidth_dec_write);
+		return result;
+	}
+
+	result = device_create_file(dev, &dev_attr_bandwidth_enc_write);
+	if (result != 0) {
+		device_remove_file(dev, &dev_attr_bandwidth_dec_read);
+		device_remove_file(dev, &dev_attr_bandwidth_dec_write);
+		device_remove_file(dev, &dev_attr_bandwidth_enc_read);
+		return result;
+	}
+
+	return 0;
+}
+
+static int init_hantro_rsvd_mem(struct device *dev, struct hantro_mem *mem,
+				const char *mem_name, unsigned int mem_idx)
+{
+	struct device *mem_dev;
+	struct device_node *np;
+	struct reserved_mem *rmem;
+	int rc;
+
+	/* Create a child device (of dev) to own the reserved memory. */
+	mem_dev =
+		devm_kzalloc(dev, sizeof(struct device), GFP_KERNEL | GFP_DMA);
+	if (!mem_dev)
+		return -ENOMEM;
+
+	device_initialize(mem_dev);
+	dev_set_name(mem_dev, "%s:%s", dev_name(dev), mem_name);
+	mem_dev->parent = dev;
+	mem_dev->dma_mask = dev->dma_mask;
+	mem_dev->coherent_dma_mask = dev->coherent_dma_mask;
+	mem_dev->release = of_reserved_mem_device_release;
+
+	/* Set up DMA configuration using information from parent's DT node. */
+	rc = of_dma_configure(mem_dev, dev->of_node, true);
+	if (rc)
+		goto err;
+
+	rc = device_add(mem_dev);
+	if (rc)
+		goto err;
+	/* Initialize the device reserved memory region. */
+	rc = of_reserved_mem_device_init_by_idx(mem_dev, dev->of_node, mem_idx);
+	if (rc) {
+		dev_err(dev, "Couldn't get reserved memory with idx = %d, %d\n",
+			mem_idx, rc);
+		device_del(mem_dev);
+		goto err;
+	}
+	dev_info(dev, "Got reserved memory with idx = %d\n", mem_idx);
+
+	/* Record the region's bus address and size for later allocations. */
+	np = of_parse_phandle(dev->of_node, "memory-region", mem_idx);
+	rmem = np ? of_reserved_mem_lookup(np) : NULL;
+	of_node_put(np);
+	if (!rmem) {
+		rc = -ENODEV;
+		device_del(mem_dev);
+		goto err;
+	}
+
+	mem->dev = mem_dev;
+	mem->vaddr = NULL;	/* mapped on demand via dma_alloc_coherent() */
+	mem->dma_handle = rmem->base - (dev->dma_pfn_offset << PAGE_SHIFT);
+	mem->size = rmem->size;
+
+	return 0;
+err:
+	put_device(mem_dev);
+	return rc;
+}
+
+static int hantro_drm_probe(struct platform_device *pdev)
+{
+	int result;
+	struct device *dev = &pdev->dev;
+
+	if (!hantro_dev.platformdev)
+		hantro_dev.platformdev = pdev;
+
+	/* try to attach rsv mem to dtb node */
+	result = init_hantro_rsvd_mem(dev, &ddr1, "ddr1", 0);
+	if (result) {
+		dev_err(dev, "Failed to set up DDR1 reserved memory.\n");
+		return result;
+	}
+
+	dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
+	dma_set_mask_and_coherent(ddr1.dev, DMA_BIT_MASK(64));
+
+	dev_info(dev, "ddr1 vaddr 0x%p paddr 0x%pad size 0x%zX\n", ddr1.vaddr,
+		 &ddr1.dma_handle, ddr1.size);
+
+	result = hantro_create_sysfs_api(dev);
+	if (result != 0)
+		dev_warn(dev, "failed to create sysfs entries\n");
+
+	/* check if pdev equals hantro_dev.platformdev */
+	result = hantrodec_init(pdev);
+	if (result != 0)
+		return result;
+	result = hantroenc_init(pdev);
+	if (result != 0)
+		return result;
+
+	return 0;
+}
+
+static int hantro_drm_remove(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+
+	device_remove_file(dev, &dev_attr_bandwidth_dec_read);
+	device_remove_file(dev, &dev_attr_bandwidth_dec_write);
+	device_remove_file(dev, &dev_attr_bandwidth_enc_read);
+	device_remove_file(dev, &dev_attr_bandwidth_enc_write);
+
+	return 0;
+}
+
+static const struct platform_device_id hantro_drm_platform_ids[] = {
+	{
+		.name = "hantro",
+	},
+	{},
+};
+MODULE_DEVICE_TABLE(platform, hantro_drm_platform_ids);
+
+static const struct of_device_id hantro_of_match[] = {
+	/* must match the DT node, otherwise register I/O will fail */
+	{
+		.compatible = "intel,hantro",
+	},
+	{ /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, hantro_of_match);
+
+static int hantro_pm_suspend(struct device *kdev)
+{
+	return 0;
+}
+
+static int hantro_pm_resume(struct device *kdev)
+{
+	return 0;
+}
+
+static const struct dev_pm_ops hantro_pm_ops = {
+	.suspend = hantro_pm_suspend,
+	.resume = hantro_pm_resume,
+};
+
+static struct platform_driver hantro_drm_platform_driver = {
+	.probe = hantro_drm_probe,
+	.remove = hantro_drm_remove,
+	.driver = {
+		.name = DRIVER_NAME,
+		.owner = THIS_MODULE,
+		.of_match_table = hantro_of_match,
+		.pm = &hantro_pm_ops,
+	},
+	.id_table = hantro_drm_platform_ids,
+};
+
+static const struct platform_device_info hantro_platform_info = {
+	.name = DRIVER_NAME,
+	.id = -1,
+	.dma_mask = DMA_BIT_MASK(64),
+};
+
+void __exit hantro_cleanup(void)
+{
+	device_unregister(ddr1.dev);
+	hantrodec_cleanup();
+	hantroenc_cleanup();
+	release_fence_data();
+	drm_dev_unregister(hantro_dev.drm_dev);
+	drm_dev_put(hantro_dev.drm_dev);
+	platform_device_unregister(hantro_dev.platformdev);
+	platform_driver_unregister(&hantro_drm_platform_driver);
+}
+
+int __init hantro_init(void)
+{
+	int result;
+
+	hantro_dev.platformdev = NULL;
+	result = platform_driver_register(&hantro_drm_platform_driver);
+	if (result < 0)
+		return result;
+
+	if (!hantro_dev.platformdev) {
+		pr_err("hantro: no platform device was probed\n");
+		platform_driver_unregister(&hantro_drm_platform_driver);
+		return -ENODEV;
+	}
+
+	hantro_dev.drm_dev =
+		drm_dev_alloc(&hantro_drm_driver, &hantro_dev.platformdev->dev);
+	if (IS_ERR(hantro_dev.drm_dev)) {
+		pr_info("init drm failed\n");
+		platform_driver_unregister(&hantro_drm_platform_driver);
+		return PTR_ERR(hantro_dev.drm_dev);
+	}
+
+	hantro_dev.drm_dev->dev = &hantro_dev.platformdev->dev;
+	pr_info("hantro device created");
+
+	drm_mode_config_init(hantro_dev.drm_dev);
+	result = drm_dev_register(hantro_dev.drm_dev, 0);
+	if (result < 0) {
+		drm_dev_put(hantro_dev.drm_dev);
+		platform_driver_unregister(&hantro_drm_platform_driver);
+	} else {
+	} else {
+		init_fence_data();
+	}
+
+	return result;
+}
+
+module_init(hantro_init);
+module_exit(hantro_cleanup);
+
+/* module description */
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("VeriSilicon");
+MODULE_DESCRIPTION("Hantro DRM manager");
diff --git a/drivers/gpu/drm/hantro/hantro_drm.h b/drivers/gpu/drm/hantro/hantro_drm.h
new file mode 100644
index 0000000..13b6f14
--- /dev/null
+++ b/drivers/gpu/drm/hantro/hantro_drm.h
@@ -0,0 +1,208 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ *    Hantro driver public header file.
+ *
+ *    Copyright (c) 2017 - 2020, VeriSilicon Inc.
+ *    Copyright (c) 2020 Intel Corporation
+ */
+
+#ifndef HANTRO_H
+#define HANTRO_H
+
+#include <linux/ioctl.h>
+#include <linux/dma-resv.h>
+#include <linux/dma-mapping.h>
+#include <drm/drm_vma_manager.h>
+#include <drm/drm_gem_cma_helper.h>
+#include <drm/drm_gem.h>
+#include <linux/dma-buf.h>
+#include <drm/drm.h>
+#include <drm/drm_auth.h>
+#include <drm/drm_framebuffer.h>
+#include <drm/drm_drv.h>
+#include <drm/drm_fourcc.h>
+#include <linux/version.h>
+#include <linux/dma-fence.h>
+#include <linux/platform_device.h>
+
+/* basic driver definitions */
+#define DRIVER_NAME     "hantro"
+#define DRIVER_DESC     "hantro DRM"
+#define DRIVER_DATE     "20201008"
+#define DRIVER_MAJOR    1
+#define DRIVER_MINOR    0
+
+/* these domain definitions are identical to hantro_bufmgr.h */
+#define HANTRO_DOMAIN_NONE		0x00000
+#define HANTRO_CPU_DOMAIN		0x00001
+#define HANTRO_HEVC264_DOMAIN		0x00002
+#define HANTRO_JPEG_DOMAIN		0x00004
+#define HANTRO_DECODER0_DOMAIN		0x00008
+#define HANTRO_DECODER1_DOMAIN		0x00010
+#define HANTRO_DECODER2_DOMAIN		0x00020
+#define HANTRO_GEM_FLAG_IMPORT		BIT(0)
+#define HANTRO_GEM_FLAG_EXPORT		BIT(1)
+#define HANTRO_GEM_FLAG_EXPORTUSED	BIT(2)
+#define HANTRO_FENCE_WRITE 1
+
+/* dynamic ddr allocation defines */
+#define DDR0_CHANNEL			0
+#define DDR1_CHANNEL			1
+
+struct hantro_mem {
+	struct device *dev;	/* Child device managing the memory region. */
+	void *vaddr;		/* Kernel virtual address, if mapped. */
+	dma_addr_t dma_handle;	/* The DMA (bus) address of the region. */
+	size_t size;		/* The size of the memory region. */
+};
+
+struct hantro_drm_fb {
+	struct drm_framebuffer fb;
+	struct drm_gem_object *obj[4];
+};
+
+struct drm_gem_hantro_object {
+	struct drm_gem_object base;
+	dma_addr_t paddr;
+	struct sg_table *sgt;
+	/* For objects with DMA memory allocated by GEM CMA */
+	void *vaddr;
+	struct page *pageaddr;
+	struct page **pages;
+	unsigned long num_pages;
+	/* fence ref */
+	struct dma_resv kresv;
+	unsigned int ctxno;
+	int handle;
+	int flag;
+	int ddr_channel;
+};
+
+struct hantro_fencecheck {
+	unsigned int handle;
+	int ready;
+};
+
+struct hantro_domainset {
+	unsigned int handle;
+	unsigned int writedomain;
+	unsigned int readdomain;
+};
+
+struct hantro_addrmap {
+	unsigned int handle;
+	unsigned long vm_addr;
+	unsigned long phy_addr;
+};
+
+struct hantro_regtransfer {
+	unsigned long coreid;
+	unsigned long offset;
+	unsigned long size;
+	const void *data;
+	int benc; /* encoder core or decoder core */
+	int direction; /* 0=read, 1=write */
+};
+
+struct hantro_corenum {
+	unsigned int deccore;
+	unsigned int enccore;
+};
+
+struct hantro_acquirebuf {
+	unsigned long handle;
+	unsigned long flags;
+	unsigned long timeout;
+	unsigned long fence_handle;
+};
+
+struct hantro_releasebuf {
+	unsigned long fence_handle;
+};
+
+struct core_desc {
+	__u32 id;	/* id of the core */
+	__u32 *regs;	/* pointer to user registers */
+	__u32 size;	/* size of register space */
+	__u32 reg_id;
+};
+
+/* Ioctl definitions */
+/* hantro drm */
+#define HANTRO_IOCTL_START (DRM_COMMAND_BASE)
+#define DRM_IOCTL_HANTRO_TESTCMD DRM_IOWR(HANTRO_IOCTL_START, unsigned int)
+#define DRM_IOCTL_HANTRO_GETPADDR                                              \
+	DRM_IOWR(HANTRO_IOCTL_START + 1, struct hantro_addrmap)
+#define DRM_IOCTL_HANTRO_TESTREADY                                             \
+	DRM_IOWR(HANTRO_IOCTL_START + 3, struct hantro_fencecheck)
+#define DRM_IOCTL_HANTRO_SETDOMAIN                                             \
+	DRM_IOWR(HANTRO_IOCTL_START + 4, struct hantro_domainset)
+#define DRM_IOCTL_HANTRO_ACQUIREBUF                                            \
+	DRM_IOWR(HANTRO_IOCTL_START + 6, struct hantro_acquirebuf)
+#define DRM_IOCTL_HANTRO_RELEASEBUF                                            \
+	DRM_IOWR(HANTRO_IOCTL_START + 7, struct hantro_releasebuf)
+#define DRM_IOCTL_HANTRO_GETPRIMEADDR                                          \
+	DRM_IOWR(HANTRO_IOCTL_START + 8, unsigned long *)
+#define DRM_IOCTL_HANTRO_PTR_PHYADDR                                           \
+	DRM_IOWR(HANTRO_IOCTL_START + 9, unsigned long *)
+
+/* hantro enc */
+#define HX280ENC_IOC_START DRM_IO(HANTRO_IOCTL_START + 16)
+#define HX280ENC_IOCGHWOFFSET DRM_IOR(HANTRO_IOCTL_START + 17, unsigned long *)
+#define HX280ENC_IOCGHWIOSIZE DRM_IOWR(HANTRO_IOCTL_START + 18, unsigned long *)
+#define HX280ENC_IOC_CLI DRM_IO(HANTRO_IOCTL_START + 19)
+#define HX280ENC_IOC_STI DRM_IO(HANTRO_IOCTL_START + 20)
+#define HX280ENC_IOCHARDRESET                                                  \
+	DRM_IO(HANTRO_IOCTL_START + 21) /* debugging tool */
+#define HX280ENC_IOCGSRAMOFFSET                                                \
+	DRM_IOR(HANTRO_IOCTL_START + 22, unsigned long *)
+#define HX280ENC_IOCGSRAMEIOSIZE                                               \
+	DRM_IOR(HANTRO_IOCTL_START + 23, unsigned int *)
+#define HX280ENC_IOCH_ENC_RESERVE                                              \
+	DRM_IOR(HANTRO_IOCTL_START + 24, unsigned int *)
+#define HX280ENC_IOCH_ENC_RELEASE                                              \
+	DRM_IOR(HANTRO_IOCTL_START + 25, unsigned int *)
+#define HX280ENC_IOCG_CORE_NUM DRM_IOR(HANTRO_IOCTL_START + 26, unsigned int *)
+#define HX280ENC_IOCG_CORE_WAIT DRM_IOR(HANTRO_IOCTL_START + 27, unsigned int *)
+#define HX280ENC_IOC_END DRM_IO(HANTRO_IOCTL_START + 39)
+
+/* hantro dec */
+#define HANTRODEC_IOC_START DRM_IO(HANTRO_IOCTL_START + 40)
+#define HANTRODEC_PP_INSTANCE DRM_IO(HANTRO_IOCTL_START + 41)
+#define HANTRODEC_HW_PERFORMANCE DRM_IO(HANTRO_IOCTL_START + 42)
+#define HANTRODEC_IOCGHWOFFSET DRM_IOR(HANTRO_IOCTL_START + 43, unsigned long *)
+#define HANTRODEC_IOCGHWIOSIZE DRM_IOR(HANTRO_IOCTL_START + 44, unsigned int *)
+#define HANTRODEC_IOC_CLI DRM_IO(HANTRO_IOCTL_START + 45)
+#define HANTRODEC_IOC_STI DRM_IO(HANTRO_IOCTL_START + 46)
+#define HANTRODEC_IOC_MC_OFFSETS                                               \
+	DRM_IOR(HANTRO_IOCTL_START + 47, unsigned long *)
+#define HANTRODEC_IOC_MC_CORES DRM_IOR(HANTRO_IOCTL_START + 48, unsigned int *)
+#define HANTRODEC_IOCS_DEC_PUSH_REG                                            \
+	DRM_IOW(HANTRO_IOCTL_START + 49, struct core_desc *)
+#define HANTRODEC_IOCS_PP_PUSH_REG                                             \
+	DRM_IOW(HANTRO_IOCTL_START + 50, struct core_desc *)
+#define HANTRODEC_IOCH_DEC_RESERVE DRM_IO(HANTRO_IOCTL_START + 51)
+#define HANTRODEC_IOCT_DEC_RELEASE DRM_IO(HANTRO_IOCTL_START + 52)
+#define HANTRODEC_IOCQ_PP_RESERVE DRM_IO(HANTRO_IOCTL_START + 53)
+#define HANTRODEC_IOCT_PP_RELEASE DRM_IO(HANTRO_IOCTL_START + 54)
+#define HANTRODEC_IOCX_DEC_WAIT                                                \
+	DRM_IOWR(HANTRO_IOCTL_START + 55, struct core_desc *)
+#define HANTRODEC_IOCX_PP_WAIT                                                 \
+	DRM_IOWR(HANTRO_IOCTL_START + 56, struct core_desc *)
+#define HANTRODEC_IOCS_DEC_PULL_REG                                            \
+	DRM_IOWR(HANTRO_IOCTL_START + 57, struct core_desc *)
+#define HANTRODEC_IOCS_PP_PULL_REG                                             \
+	DRM_IOWR(HANTRO_IOCTL_START + 58, struct core_desc *)
+#define HANTRODEC_IOCG_CORE_WAIT DRM_IOR(HANTRO_IOCTL_START + 59, int *)
+#define HANTRODEC_IOX_ASIC_ID DRM_IOR(HANTRO_IOCTL_START + 60, __u32 *)
+#define HANTRODEC_IOCG_CORE_ID DRM_IO(HANTRO_IOCTL_START + 61)
+#define HANTRODEC_IOCS_DEC_WRITE_REG                                           \
+	DRM_IOW(HANTRO_IOCTL_START + 62, struct core_desc *)
+#define HANTRODEC_IOCS_DEC_READ_REG                                            \
+	DRM_IOWR(HANTRO_IOCTL_START + 63, struct core_desc *)
+#define HANTRODEC_DEBUG_STATUS DRM_IO(HANTRO_IOCTL_START + 64)
+#define HANTRODEC_IOX_ASIC_BUILD_ID DRM_IOR(HANTRO_IOCTL_START + 65, __u32 *)
+#define HANTRODEC_IOC_END DRM_IO(HANTRO_IOCTL_START + 80)
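+
+/*
+ * Illustrative userspace flow for the encoder ioctls above (a sketch only;
+ * the device node path and the core-mask value are assumptions, not part
+ * of this ABI):
+ *
+ *	int fd = open("/dev/dri/card0", O_RDWR);
+ *	unsigned int info = 0x1;		// ask for core 0
+ *
+ *	ioctl(fd, HX280ENC_IOCH_ENC_RESERVE, &info);
+ *	... program registers and start the encoder ...
+ *	ioctl(fd, HX280ENC_IOCG_CORE_WAIT, &info);	// block until the IRQ
+ *	ioctl(fd, HX280ENC_IOCH_ENC_RELEASE, &info);
+ */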
+
+#endif /* HANTRO_H */
diff --git a/drivers/gpu/drm/hantro/hantro_fence.c b/drivers/gpu/drm/hantro/hantro_fence.c
new file mode 100644
index 0000000..e009ba9
--- /dev/null
+++ b/drivers/gpu/drm/hantro/hantro_fence.c
@@ -0,0 +1,284 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ *    Hantro driver DMA_BUF fence operation.
+ *
+ *    Copyright (c) 2017 - 2020, VeriSilicon Inc.
+ *    Copyright (c) 2020 Intel Corporation
+ */
+
+#include "hantro_priv.h"
+
+static unsigned long seqno;
+DEFINE_IDR(fence_idr);
+/* protects fence_idr */
+struct mutex fence_mutex;
+
+static const char *hantro_fence_get_driver_name(struct dma_fence *fence)
+{
+	return "hantro";
+}
+
+static const char *hantro_fence_get_timeline_name(struct dma_fence *fence)
+{
+	return " ";
+}
+
+static bool hantro_fence_enable_signaling(struct dma_fence *fence)
+{
+	return test_bit(HANTRO_FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags);
+}
+
+static bool hantro_fence_signaled(struct dma_fence *fobj)
+{
+	unsigned long irqflags;
+	bool ret;
+
+	spin_lock_irqsave(fobj->lock, irqflags);
+	ret = (test_bit(HANTRO_FENCE_FLAG_SIGNAL_BIT, &fobj->flags) != 0);
+	spin_unlock_irqrestore(fobj->lock, irqflags);
+
+	return ret;
+}
+
+static void hantro_fence_free(struct dma_fence *fence)
+{
+	kfree(fence->lock);
+	fence->lock = NULL;
+	dma_fence_free(fence);
+}
+
+static const struct dma_fence_ops hantro_fenceops = {
+	.get_driver_name = hantro_fence_get_driver_name,
+	.get_timeline_name = hantro_fence_get_timeline_name,
+	.enable_signaling = hantro_fence_enable_signaling,
+	.signaled = hantro_fence_signaled,
+	.wait = hantro_fence_default_wait,
+	.release = hantro_fence_free,
+};
+
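+/*
+ * Allocate a one-shot fence in context ctxno with its own spinlock. The
+ * fence starts unsignaled with signaling enabled; the lock is freed again
+ * in hantro_fence_free() once the last reference is dropped.
+ */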
+static struct dma_fence *alloc_fence(unsigned int ctxno)
+{
+	struct dma_fence *fobj;
+	/* spinlock for fence */
+	spinlock_t *lock;
+
+	fobj = kzalloc(sizeof(*fobj), GFP_KERNEL);
+	if (!fobj)
+		return NULL;
+	lock = kzalloc(sizeof(*lock), GFP_KERNEL);
+	if (!lock) {
+		kfree(fobj);
+		return NULL;
+	}
+
+	spin_lock_init(lock);
+	hantro_fence_init(fobj, &hantro_fenceops, lock, ctxno, seqno++);
+	clear_bit(HANTRO_FENCE_FLAG_SIGNAL_BIT, &fobj->flags);
+	set_bit(HANTRO_FENCE_FLAG_ENABLE_SIGNAL_BIT, &fobj->flags);
+
+	return fobj;
+}
+
+static int is_hantro_fence(struct dma_fence *fence)
+{
+	return (fence->ops == &hantro_fenceops);
+}
+
+int init_hantro_resv(struct dma_resv *presv,
+		     struct drm_gem_hantro_object *cma_obj)
+{
+	dma_resv_init(presv);
+	cma_obj->ctxno = hantro_fence_context_alloc(1);
+
+	return 0;
+}
+
+int hantro_waitfence(struct dma_fence *pfence)
+{
+	if (test_bit(HANTRO_FENCE_FLAG_SIGNAL_BIT, &pfence->flags))
+		return 0;
+
+	if (is_hantro_fence(pfence))
+		return 0;
+	else
+		return hantro_fence_wait_timeout(pfence, true, 30 * HZ);
+}
+
+int hantro_setdomain(struct drm_device *dev, void *data,
+		     struct drm_file *file_priv)
+{
+	return 0;
+}
+
+void init_fence_data(void)
+{
+	seqno = 0;
+	mutex_init(&fence_mutex);
+	idr_init(&fence_idr);
+}
+
+static int fence_idr_fini(int id, void *p, void *data)
+{
+	hantro_fence_signal(p);
+	hantro_fence_put(p);
+
+	return 0;
+}
+
+void release_fence_data(void)
+{
+	mutex_lock(&fence_mutex);
+	idr_for_each(&fence_idr, fence_idr_fini, NULL);
+	idr_destroy(&fence_idr);
+	mutex_unlock(&fence_mutex);
+}
+
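+/*
+ * hantro_acquirebuf - attach a new fence to a GEM object's reservation.
+ *
+ * Waits up to arg->timeout jiffies for fences already attached to the
+ * object, allocates a fresh fence, publishes it in fence_idr and adds it
+ * to the object's dma_resv as exclusive (write) or shared (read). The id
+ * returned in arg->fence_handle is later passed to hantro_releasebuf(),
+ * which signals and drops the fence.
+ */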
+int hantro_acquirebuf(struct drm_device *dev, void *data,
+		      struct drm_file *file_priv)
+{
+	struct hantro_acquirebuf *arg = data;
+	struct dma_resv *resv;
+	struct drm_gem_object *obj;
+	struct dma_fence *fence = NULL;
+	unsigned long timeout = arg->timeout;
+	long fenceid = -1;
+	int ret = 0;
+
+	obj = hantro_gem_object_lookup(dev, file_priv, arg->handle);
+	if (!obj)
+		return -ENOENT;
+
+	if (!obj->dma_buf) {
+		if (hantro_dev.drm_dev == obj->dev) {
+			struct drm_gem_hantro_object *hobj =
+				to_drm_gem_hantro_obj(obj);
+
+			resv = &hobj->kresv;
+		} else {
+			ret = -ENOENT;
+			goto err;
+		}
+	} else {
+		resv = obj->dma_buf->resv;
+	}
+	/* Check for a stalled fence; errors and timeouts both bail out */
+	if (dma_resv_wait_timeout_rcu(resv, arg->flags & HANTRO_FENCE_WRITE,
+				      1, timeout) <= 0) {
+		ret = -EBUSY;
+		goto err;
+	}
+
+	/* Expose the fence via the dma-buf */
+	ret = -ENOMEM;
+	fence = alloc_fence(hantro_fence_context_alloc(1));
+	if (!fence)
+		goto err;
+
+	mutex_lock(&fence_mutex);
+	ret = idr_alloc(&fence_idr, fence, 1, 0, GFP_KERNEL);
+	mutex_unlock(&fence_mutex);
+	if (ret >= 0)
+		fenceid = ret;
+	else
+		goto err;
+
+	dma_resv_lock(resv, NULL);
+	if (resv->fence_excl &&
+	    !hantro_fence_is_signaled(resv->fence_excl)) {
+		dma_resv_unlock(resv);
+		ret = -EBUSY;
+		goto err;
+	}
+	ret = 0;
+	if (arg->flags & HANTRO_FENCE_WRITE) {
+		dma_resv_add_excl_fence(resv, fence);
+	} else {
+		ret = hantro_reserve_obj_shared(resv, 1);
+		if (ret == 0)
+			dma_resv_add_shared_fence(resv, fence);
+	}
+	dma_resv_unlock(resv);
+
+	/* Record the fence in our idr for later signaling */
+	if (ret == 0) {
+		arg->fence_handle = fenceid;
+		hantro_unref_drmobj(obj);
+
+		return ret;
+	}
+err:
+	if (fenceid >= 0) {
+		mutex_lock(&fence_mutex);
+		idr_remove(&fence_idr, fenceid);
+		mutex_unlock(&fence_mutex);
+	}
+	if (fence) {
+		hantro_fence_signal(fence);
+		hantro_fence_put(fence);
+	}
+	hantro_unref_drmobj(obj);
+	return ret;
+}
+
+int hantro_testbufvalid(struct drm_device *dev, void *data,
+			struct drm_file *file_priv)
+{
+	struct hantro_fencecheck *arg = data;
+	struct dma_resv *resv;
+	struct drm_gem_object *obj;
+
+	arg->ready = 0;
+	obj = hantro_gem_object_lookup(dev, file_priv, arg->handle);
+	if (!obj)
+		return -ENOENT;
+
+	if (!obj->dma_buf) {
+		if (hantro_dev.drm_dev == obj->dev) {
+			struct drm_gem_hantro_object *hobj =
+				to_drm_gem_hantro_obj(obj);
+
+			resv = &hobj->kresv;
+		} else {
+			return -ENOENT;
+		}
+	} else {
+		resv = obj->dma_buf->resv;
+	}
+
+	/* Check for a stalled fence */
+	if (dma_resv_wait_timeout_rcu(resv, 1, 1, 0) <= 0)
+		arg->ready = 0;
+	else
+		arg->ready = 1;
+
+	return 0;
+}
+
+int hantro_releasebuf(struct drm_device *dev, void *data,
+		      struct drm_file *file_priv)
+{
+	struct hantro_releasebuf *arg = data;
+	struct dma_fence *fence;
+	int ret = 0;
+
+	mutex_lock(&fence_mutex);
+	fence = idr_replace(&fence_idr, NULL, arg->fence_handle);
+	mutex_unlock(&fence_mutex);
+
+	if (IS_ERR_OR_NULL(fence))
+		return -ENOENT;
+	if (hantro_fence_is_signaled(fence))
+		ret = -ETIMEDOUT;
+
+	hantro_fence_signal(fence);
+	hantro_fence_put(fence);
+	mutex_lock(&fence_mutex);
+	idr_remove(&fence_idr, arg->fence_handle);
+	mutex_unlock(&fence_mutex);
+
+	return ret;
+}
diff --git a/drivers/gpu/drm/hantro/hantro_priv.h b/drivers/gpu/drm/hantro/hantro_priv.h
new file mode 100644
index 0000000..7257cfd
--- /dev/null
+++ b/drivers/gpu/drm/hantro/hantro_priv.h
@@ -0,0 +1,106 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ *    Hantro driver private header file.
+ *
+ *    Copyright (c) 2017 - 2020, VeriSilicon Inc.
+ *    Copyright (c) 2020 Intel Corporation
+ */
+
+#ifndef HANTRO_PRIV_H
+#define HANTRO_PRIV_H
+#include "hantro_drm.h"
+
+#define hantro_access_ok(a, b, c)	access_ok(b, c)
+#define hantro_reserve_obj_shared(a, b)	dma_resv_reserve_shared(a, b)
+#define hantro_ref_drmobj		drm_gem_object_get
+#define hantro_unref_drmobj		drm_gem_object_put
+
+struct hantro_device_handle {
+	struct platform_device *platformdev; /* parent device */
+	struct drm_device *drm_dev;
+	int bprobed;
+};
+
+extern struct hantro_device_handle hantro_dev;
+
+#define HANTRO_FENCE_FLAG_ENABLE_SIGNAL_BIT DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT
+#define HANTRO_FENCE_FLAG_SIGNAL_BIT DMA_FENCE_FLAG_SIGNALED_BIT
+
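+/*
+ * The hantro_fence_*() helpers below are thin wrappers around the kernel's
+ * dma_fence API, presumably kept so that API differences between kernel
+ * versions can be absorbed in one place.
+ */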
+static inline signed long
+hantro_fence_default_wait(struct dma_fence *fence, bool intr, signed long timeout)
+{
+	return dma_fence_default_wait(fence, intr, timeout);
+}
+
+static inline void hantro_fence_init(struct dma_fence *fence,
+				     const struct dma_fence_ops *ops,
+				     spinlock_t *lock, unsigned int context,
+				     unsigned int seqno)
+{
+	return dma_fence_init(fence, ops, lock, context, seqno);
+}
+
+static inline unsigned int hantro_fence_context_alloc(unsigned int num)
+{
+	return dma_fence_context_alloc(num);
+}
+
+static inline signed long
+hantro_fence_wait_timeout(struct dma_fence *fence, bool intr, signed long timeout)
+{
+	return dma_fence_wait_timeout(fence, intr, timeout);
+}
+
+static inline struct drm_gem_object *
+hantro_gem_object_lookup(struct drm_device *dev, struct drm_file *filp,
+			 u32 handle)
+{
+	return drm_gem_object_lookup(filp, handle);
+}
+
+static inline void hantro_fence_put(struct dma_fence *fence)
+{
+	return dma_fence_put(fence);
+}
+
+static inline int hantro_fence_signal(struct dma_fence *fence)
+{
+	return dma_fence_signal(fence);
+}
+
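+/*
+ * ref_page()/unref_page() bias the raw page reference and map counts
+ * directly rather than going through get_page()/put_page().
+ */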
+static inline void ref_page(struct page *pp)
+{
+	atomic_inc(&pp->_refcount);
+	atomic_inc(&pp->_mapcount);
+}
+
+static inline void unref_page(struct page *pp)
+{
+	atomic_dec(&pp->_refcount);
+	atomic_dec(&pp->_mapcount);
+}
+
+static inline bool hantro_fence_is_signaled(struct dma_fence *fence)
+{
+	return dma_fence_is_signaled(fence);
+}
+
+static inline struct drm_gem_hantro_object *
+to_drm_gem_hantro_obj(struct drm_gem_object *gem_obj)
+{
+	return container_of(gem_obj, struct drm_gem_hantro_object, base);
+}
+
+int hantro_setdomain(struct drm_device *dev, void *data,
+		     struct drm_file *file_priv);
+int hantro_acquirebuf(struct drm_device *dev, void *data,
+		      struct drm_file *file_priv);
+int hantro_testbufvalid(struct drm_device *dev, void *data,
+			struct drm_file *file_priv);
+int hantro_releasebuf(struct drm_device *dev, void *data,
+		      struct drm_file *file_priv);
+int init_hantro_resv(struct dma_resv *presv,
+		     struct drm_gem_hantro_object *cma_obj);
+void init_fence_data(void);
+void release_fence_data(void);
+#endif /* HANTRO_PRIV_H */
-- 
1.9.1


* [PATCH v1 2/4] drm: hantro: Keem Bay VPU DRM encoder
  2020-10-09 11:57 [PATCH v1 0/4] Add support for Keem Bay VPU DRM driver kuhanh.murugasen.krishnan
  2020-10-09 11:57 ` [PATCH v1 1/4] drm: Add Keem Bay VPU codec DRM kuhanh.murugasen.krishnan
@ 2020-10-09 11:57 ` kuhanh.murugasen.krishnan
  2020-10-09 11:57 ` [PATCH v1 3/4] drm: hantro: Keem Bay VPU DRM decoder kuhanh.murugasen.krishnan
  2020-10-09 11:57 ` [PATCH v1 4/4] drm: hantro: Keem Bay VPU DRM build files kuhanh.murugasen.krishnan
  3 siblings, 0 replies; 7+ messages in thread
From: kuhanh.murugasen.krishnan @ 2020-10-09 11:57 UTC (permalink / raw)
  To: dri-devel

From: "Murugasen Krishnan, Kuhanh" <kuhanh.murugasen.krishnan@intel.com>

Hantro VC8000E allows 4K encoding with a minimal silicon single-core solution that
supports HEVC and H.264 video formats, key features:
* HEVC Main10, Main and Main Still Profile, level 5.1
* H.264 Baseline, Main and High, High10 level 5.2
* JPEG encoder 16Kx16K max resolution
* HEVC/H264 Support up to 4K@60fps performance single-core
* 8 channels 1080p@30fps encoding
* B-frame support for higher compression rates
* Reference Frame Compression

Signed-off-by: Murugasen Krishnan, Kuhanh <kuhanh.murugasen.krishnan@intel.com>
---
 drivers/gpu/drm/hantro/hantro_enc.c | 738 ++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/hantro/hantro_enc.h |  66 ++++
 2 files changed, 804 insertions(+)
 create mode 100644 drivers/gpu/drm/hantro/hantro_enc.c
 create mode 100644 drivers/gpu/drm/hantro/hantro_enc.h

diff --git a/drivers/gpu/drm/hantro/hantro_enc.c b/drivers/gpu/drm/hantro/hantro_enc.c
new file mode 100644
index 0000000..35d1153
--- /dev/null
+++ b/drivers/gpu/drm/hantro/hantro_enc.c
@@ -0,0 +1,738 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ *    Hantro encoder hardware driver.
+ *
+ *    Copyright (c) 2017 - 2020, VeriSilicon Inc.
+ *    Copyright (c) 2020 Intel Corporation
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/slab.h>
+#include <linux/fs.h>
+#include <linux/errno.h>
+#include <linux/moduleparam.h>
+#include <linux/interrupt.h>
+#include <linux/sched.h>
+#include <linux/semaphore.h>
+#include <linux/spinlock.h>
+#include <linux/io.h>
+#include <linux/pci.h>
+#include <linux/uaccess.h>
+#include <linux/ioport.h>
+#include <linux/version.h>
+#include <linux/vmalloc.h>
+#include <linux/timer.h>
+#include "hantro_enc.h"
+#include <linux/irq.h>
+#include <linux/clk.h>
+
+struct semaphore enc_core_sem;
+static DECLARE_WAIT_QUEUE_HEAD(enc_hw_queue);
+static DEFINE_SPINLOCK(enc_owner_lock);
+static DECLARE_WAIT_QUEUE_HEAD(enc_wait_queue);
+
+/*
+ * Core info for every core is listed here for subsequent use:
+ * base_addr, iosize, irq, resource_shared.
+ */
+struct enc_core enc_core_array[] = {
+	{ CORE_0_IO_ADDR, CORE_0_IO_SIZE, INT_PIN_CORE_0,
+	  RESOURCE_SHARED_INTER_CORES }, /* core_0, hevc and avc */
+	{ CORE_1_IO_ADDR, CORE_1_IO_SIZE, INT_PIN_CORE_1,
+	  RESOURCE_SHARED_INTER_CORES } /* core_1, jpeg */
+};
+
+/* Interrupt Pin Name */
+const char *core_irq_names[] = {
+	"irq_hantro_videoencoder",   /* core_0, hevc and avc */
+	"irq_hantro_jpegencoder"    /* core_1, jpeg */
+};
+
+/* KMB VC8000E page lookup table */
+static unsigned long page_lut_read = KMB_VC8000E_PAGE_LUT;
+static u8 *page_lut_regs_read;
+
+static struct clk *hantro_clk_xin_venc;
+static struct clk *hantro_clk_xin_jpeg;
+static struct clk *hantro_clk_venc;
+static struct clk *hantro_clk_jpeg;
+
+static int hantro_clk_enable(void)
+{
+	clk_prepare_enable(hantro_clk_xin_venc);
+	clk_prepare_enable(hantro_clk_xin_jpeg);
+	clk_prepare_enable(hantro_clk_venc);
+	clk_prepare_enable(hantro_clk_jpeg);
+
+	return 0;
+}
+
+static int hantro_clk_disable(void)
+{
+	if (hantro_clk_xin_venc)
+		clk_disable_unprepare(hantro_clk_xin_venc);
+
+	if (hantro_clk_venc)
+		clk_disable_unprepare(hantro_clk_venc);
+
+	if (hantro_clk_xin_jpeg)
+		clk_disable_unprepare(hantro_clk_xin_jpeg);
+
+	if (hantro_clk_jpeg)
+		clk_disable_unprepare(hantro_clk_jpeg);
+
+	return 0;
+}
+
+/***************************TYPE AND FUNCTION DECLARATION****************/
+struct hantroenc_t {
+	struct enc_core core_cfg; /* core config: base addr, irq, etc. */
+	u32 hw_id;		/* HW ID identifying the project */
+	u32 core_id;		/* core id for driver and SW internal use */
+	u32 is_valid;		/* whether this is a valid Hantro core */
+	u32 is_reserved;	/* whether this core is occupied by a user */
+	int pid;		/* pid of the process occupying the core */
+	u32 irq_received;	/* whether this core has received an irq */
+	u32 irq_status;
+	char *buffer;
+	unsigned int buffsize;
+	u8 *hwregs;
+	struct fasync_struct *async_queue;
+};
+
+static int reserve_io(void);
+static void release_io(void);
+static void reset_asic(struct hantroenc_t *dev);
+static int check_core_occupation(struct hantroenc_t *dev);
+static void release_encoder(struct hantroenc_t *dev, u32 *core_info);
+
+#ifdef hantroenc_DEBUG
+static void dump_regs(unsigned long data);
+#endif
+
+/* IRQ handler */
+static irqreturn_t hantroenc_isr(int irq, void *dev_id);
+
+/*********************local variable declaration*****************/
+unsigned long sram_base;
+unsigned int sram_size;
+/* device major number; 0 selects dynamic allocation */
+static int hantroenc_major;
+static int total_core_num;
+/* dynamic allocation */
+static struct hantroenc_t *hantroenc_data;
+
+/******************************************************************************/
+static int check_enc_irq(struct hantroenc_t *dev, u32 *core_info, u32 *irq_status)
+{
+	unsigned long flags;
+	int rdy = 0;
+	u32 i = 0;
+	u8 core_mapping = 0;
+
+	core_mapping = (u8)(*core_info & 0xFF);
+
+	while (core_mapping) {
+		if (core_mapping & 0x1) {
+			if (i > total_core_num - 1)
+				break;
+
+			spin_lock_irqsave(&enc_owner_lock, flags);
+
+			if (dev[i].irq_received) {
+				/* reset the wait condition(s) */
+				PDEBUG("check %d irq ready\n", i);
+				dev[i].irq_received = 0;
+				rdy = 1;
+				*core_info = i;
+				*irq_status = dev[i].irq_status;
+			}
+
+			spin_unlock_irqrestore(&enc_owner_lock, flags);
+			break;
+		}
+		core_mapping = core_mapping >> 1;
+		i++;
+	}
+
+	return rdy;
+}
+
+static unsigned int wait_enc_ready(struct hantroenc_t *dev, u32 *core_info,
+				   u32 *irq_status)
+{
+	if (wait_event_interruptible(enc_wait_queue,
+				     check_enc_irq(dev, core_info, irq_status))) {
+		PDEBUG("ENC wait_event_interruptible interrupted\n");
+		release_encoder(dev, core_info);
+		return -ERESTARTSYS;
+	}
+
+	return 0;
+}
+
+u32 hantroenc_readbandwidth(int is_read_bw)
+{
+	int i;
+	u32 bandwidth = 0;
+
+	for (i = 0; i < total_core_num; i++) {
+		if (hantroenc_data[i].is_valid == 0)
+			continue;
+
+		if (is_read_bw)
+			bandwidth +=
+				ioread32((void *)(hantroenc_data[i].hwregs +
+						  HANTRO_VC8KE_REG_BWREAD * 4));
+		else
+			bandwidth +=
+				ioread32((void *)(hantroenc_data[i].hwregs +
+					 HANTRO_VC8KE_REG_BWWRITE * 4));
+	}
+
+	return bandwidth * VC8KE_BURSTWIDTH;
+}
+
+static int check_core_occupation(struct hantroenc_t *dev)
+{
+	int ret = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&enc_owner_lock, flags);
+	if (!dev->is_reserved) {
+		dev->is_reserved = 1;
+		dev->pid = current->pid;
+		ret = 1;
+		PDEBUG("%s pid=%d\n", __func__, dev->pid);
+	}
+
+	spin_unlock_irqrestore(&enc_owner_lock, flags);
+
+	return ret;
+}
+
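+/*
+ * core_info encoding (shared with userspace): bits 0-7 form a bitmask of
+ * candidate cores and bits 28-30 (CORE_INFO_AMOUNT_OFFSET) hold the
+ * requested core count minus one. core_info_tmp carries state between
+ * retries: bits 8-11 count the cores still needed, bits 0-7 record the
+ * cores already grabbed.
+ */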
+static int get_workable_core(struct hantroenc_t *dev, u32 *core_info,
+			     u32 *core_info_tmp)
+{
+	int ret = 0;
+	u32 i = 0;
+	u32 cores;
+	u32 core_id = 0;
+	u8 core_mapping = 0;
+	u32 required_num = 0;
+
+	cores = *core_info;
+	required_num = ((cores >> CORE_INFO_AMOUNT_OFFSET) & 0x7) + 1;
+	core_mapping = (u8)(cores & 0xFF);
+
+	if (*core_info_tmp == 0)
+		*core_info_tmp = required_num << 8;
+	else
+		required_num = ((*core_info_tmp & 0xF00) >> 8);
+
+	PDEBUG("%s:required_num=%d,core_info=%x\n", __func__, required_num,
+	       *core_info);
+
+	if (required_num) {
+		/* a valid free Core that has specified core id */
+		while (core_mapping) {
+			if (core_mapping & 0x1) {
+				if (i > total_core_num - 1)
+					break;
+				core_id = i;
+				if (dev[core_id].is_valid &&
+				    check_core_occupation(&dev[core_id])) {
+					*core_info_tmp =
+						((((*core_info_tmp & 0xF00) >>
+						   8) -
+						  1)
+						 << 8) |
+						(*core_info_tmp & 0x0FF);
+					*core_info_tmp =
+						*core_info_tmp | (1 << core_id);
+					if (((*core_info_tmp & 0xF00) >> 8) ==
+					    0) {
+						ret = 1;
+						*core_info =
+							(*core_info &
+							 0xFFFFFF00) |
+							(*core_info_tmp & 0xFF);
+						*core_info_tmp = 0;
+						required_num = 0;
+						break;
+					}
+				}
+			}
+			core_mapping = core_mapping >> 1;
+			i++;
+		}
+	} else {
+		ret = 1;
+	}
+
+	return ret;
+}
+
+static long reserve_encoder(struct hantroenc_t *dev, u32 *core_info)
+{
+	u32 core_info_tmp = 0;
+
+	/*
+	 * If HW resources are shared between cores,
+	 * make sure only one user is on the HW at a time.
+	 */
+	if (dev[0].core_cfg.resource_shared) {
+		if (down_interruptible(&enc_core_sem))
+			return -ERESTARTSYS;
+	}
+
+	/* lock a core that has specified core id */
+	if (wait_event_interruptible(enc_hw_queue,
+				     get_workable_core(dev, core_info,
+						       &core_info_tmp) != 0))
+		return -ERESTARTSYS;
+
+	return 0;
+}
+
+static void release_encoder(struct hantroenc_t *dev, u32 *core_info)
+{
+	unsigned long flags;
+	u32 core_num = 0;
+	u32 i = 0, core_id;
+	u8 core_mapping = 0;
+
+	core_num = ((*core_info >> CORE_INFO_AMOUNT_OFFSET) & 0x7) + 1;
+
+	core_mapping = (u8)(*core_info & 0xFF);
+
+	PDEBUG("%s:core_num=%d,core_mapping=%x\n", __func__, core_num,
+	       core_mapping);
+	/* release specified core id */
+	while (core_mapping) {
+		if (core_mapping & 0x1) {
+			core_id = i;
+			spin_lock_irqsave(&enc_owner_lock, flags);
+			PDEBUG("dev[core_id].pid=%d,current->pid=%d\n",
+			       dev[core_id].pid, current->pid);
+			if (dev[core_id].is_reserved &&
+			    dev[core_id].pid == current->pid) {
+				dev[core_id].pid = -1;
+				dev[core_id].is_reserved = 0;
+			}
+
+			dev[core_id].irq_received = 0;
+			dev[core_id].irq_status = 0;
+			spin_unlock_irqrestore(&enc_owner_lock, flags);
+		}
+		core_mapping = core_mapping >> 1;
+		i++;
+	}
+
+	wake_up_interruptible_all(&enc_hw_queue);
+	if (dev->core_cfg.resource_shared)
+		up(&enc_core_sem);
+}
+
+long hantroenc_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+{
+	unsigned int tmp;
+
+	if (!hantroenc_data)
+		return -ENXIO;
+	switch (cmd) {
+	case HX280ENC_IOCGHWOFFSET: {
+		u32 id;
+
+		__get_user(id, (u32 *)arg);
+
+		if (id >= total_core_num || hantroenc_data[id].is_valid == 0)
+			return -EFAULT;
+
+		__put_user(hantroenc_data[id].core_cfg.base_addr,
+			   (unsigned long *)arg);
+		break;
+	}
+
+	case HX280ENC_IOCGHWIOSIZE: {
+		u32 id;
+		u32 io_size;
+
+		__get_user(id, (u32 *)arg);
+
+		if (id >= total_core_num || hantroenc_data[id].is_valid == 0)
+			return -EFAULT;
+
+		io_size = hantroenc_data[id].core_cfg.iosize;
+		__put_user(io_size, (u32 *)arg);
+
+		return 0;
+	}
+	case HX280ENC_IOCGSRAMOFFSET:
+		__put_user(sram_base, (unsigned long *)arg);
+		break;
+	case HX280ENC_IOCGSRAMEIOSIZE:
+		__put_user(sram_size, (unsigned int *)arg);
+		break;
+	case HX280ENC_IOCG_CORE_NUM:
+		__put_user(total_core_num, (unsigned int *)arg);
+		PDEBUG("enc core num = %d\n", total_core_num);
+		break;
+	case HX280ENC_IOCH_ENC_RESERVE: {
+		u32 core_info;
+		int ret;
+
+		PDEBUG("Reserve ENC Cores\n");
+		__get_user(core_info, (u32 *)arg);
+		ret = reserve_encoder(hantroenc_data, &core_info);
+		if (ret == 0)
+			__put_user(core_info, (u32 *)arg);
+		return ret;
+	}
+	case HX280ENC_IOCH_ENC_RELEASE: {
+		u32 core_info;
+
+		__get_user(core_info, (u32 *)arg);
+
+		PDEBUG("Release ENC Core\n");
+
+		release_encoder(hantroenc_data, &core_info);
+
+		break;
+	}
+
+	case HX280ENC_IOCG_CORE_WAIT: {
+		u32 core_info;
+		u32 irq_status;
+
+		__get_user(core_info, (u32 *)arg);
+
+		tmp = wait_enc_ready(hantroenc_data, &core_info, &irq_status);
+		if (tmp == 0) {
+			__put_user(irq_status, (unsigned int *)arg);
+			return core_info; /* return core_id */
+		}
+		__put_user(0, (unsigned int *)arg);
+		return -1;
+	}
+	}
+
+	return 0;
+}
+
+int hantroenc_init(struct platform_device *pdev)
+{
+	int result = 0;
+	int i;
+
+	sram_base = 0;
+	sram_size = 0;
+	hantroenc_major = 0;
+	total_core_num = 0;
+	hantroenc_data = NULL;
+
+	total_core_num = ARRAY_SIZE(enc_core_array);
+	for (i = 0; i < total_core_num; i++) {
+		pr_info("hantroenc: module init - core[%d] addr = %lx\n", i,
+			(size_t)enc_core_array[i].base_addr);
+	}
+
+	hantroenc_data = vzalloc(sizeof(*hantroenc_data) * total_core_num);
+	if (!hantroenc_data) {
+		result = -ENOMEM;
+		goto err;
+	}
+
+	for (i = 0; i < total_core_num; i++) {
+		hantroenc_data[i].core_cfg = enc_core_array[i];
+		hantroenc_data[i].async_queue = NULL;
+		hantroenc_data[i].hwregs = NULL;
+		hantroenc_data[i].core_id = i;
+	}
+
+	hantroenc_data[0].core_cfg.irq =
+		platform_get_irq_byname(pdev, "irq_hantro_videoencoder");
+	hantroenc_data[1].core_cfg.irq =
+		platform_get_irq_byname(pdev, "irq_hantro_jpgencoder");
+
+	hantro_clk_xin_venc = clk_get(&pdev->dev, "clk_xin_venc");
+	hantro_clk_venc = clk_get(&pdev->dev, "clk_venc");
+	hantro_clk_xin_jpeg = clk_get(&pdev->dev, "clk_xin_jpeg");
+	hantro_clk_jpeg = clk_get(&pdev->dev, "clk_jpeg");
+	hantro_clk_enable();
+
+	/* Set KMB VENC clock to its 700 MHz maximum */
+	pr_info("hx280enc venc: Before setting any clocks: clk_xin_venc: %ld | clk_venc %ld\n",
+		clk_get_rate(hantro_clk_xin_venc),
+		clk_get_rate(hantro_clk_venc));
+	clk_set_rate(hantro_clk_xin_venc, 700000000);
+	pr_info("hx280enc venc: Set clocks to 700 MHz: clk_xin_venc: %ld | clk_venc %ld\n",
+		clk_get_rate(hantro_clk_xin_venc),
+		clk_get_rate(hantro_clk_venc));
+
+	/* Set KMB JPEGENC clock to its 700 MHz maximum */
+	pr_info("hx280enc jpegenc: Before setting any clocks: clk_xin_jpeg: %ld | clk_jpeg %ld\n",
+		clk_get_rate(hantro_clk_xin_jpeg),
+		clk_get_rate(hantro_clk_jpeg));
+	clk_set_rate(hantro_clk_xin_jpeg, 700000000);
+	pr_info("hx280enc jpegenc: Set clocks to 700 MHz: clk_xin_jpeg: %ld | clk_jpeg %ld\n",
+		clk_get_rate(hantro_clk_xin_jpeg),
+		clk_get_rate(hantro_clk_jpeg));
+
+	result = reserve_io();
+	if (result < 0)
+		goto err;
+
+	reset_asic(hantroenc_data); /* reset hardware */
+
+	sema_init(&enc_core_sem, 1);
+
+	/* Dynamic AXI ID and Page LUT routines */
+	/* Register and set the page lookup table for encoder */
+	if (!request_mem_region(page_lut_read,
+				hantroenc_data[0].core_cfg.iosize,
+				"hantroenc_pagelut_read")) {
+		pr_err("hantroenc: failed to reserve page lookup table regs\n");
+		return -EBUSY;
+	}
+	page_lut_regs_read =
+		(u8 *)ioremap(page_lut_read, hantroenc_data[0].core_cfg.iosize);
+	if (!page_lut_regs_read) {
+		pr_err("hantroenc: failed to ioremap page lookup table regs\n");
+		release_mem_region(page_lut_read,
+				   hantroenc_data[0].core_cfg.iosize);
+		return -ENOMEM;
+	}
+
+	/* Set write page LUT AXI ID 1-8 to 0x4 */
+	iowrite32(0x04040400, (void *)page_lut_regs_read + 0x10);
+	pr_info("hx280enc: Page LUT WR AXI ID 3:0 = %x\n",
+		ioread32((void *)page_lut_regs_read + 0x10));
+	iowrite32(0x04040404, (void *)page_lut_regs_read + 0x14);
+	pr_info("hx280enc: Page LUT WR AXI ID 7:4 = %x\n",
+		ioread32((void *)page_lut_regs_read + 0x14));
+	iowrite32(0x4, (void *)page_lut_regs_read + 0x18);
+	pr_info("hx280enc: Page LUT WR AXI ID 8 = %x\n",
+		ioread32((void *)page_lut_regs_read + 0x18));
+
+	/* Set VENC sw_enc_axi_rd_id_e = 1 */
+	iowrite32(1 << 16, (void *)hantroenc_data[0].hwregs + 0x8);
+	pr_info("hx280enc: sw_enc_axi_rd_id_e  = %x\n",
+		ioread32((void *)hantroenc_data[0].hwregs + 0x8));
+	/* Set RD Page LUT AXI ID 0.1 to 0x0 and the rest AXI ID 2-8 to 0x4 */
+	iowrite32(0x04040000, (void *)page_lut_regs_read);
+	pr_info("hx280enc: RD AXI 3:0 = %x\n",
+		ioread32((void *)page_lut_regs_read));
+	iowrite32(0x04040404, (void *)page_lut_regs_read + 0x4);
+	pr_info("hx280enc: RD AXI 7:4  = %x\n",
+		ioread32((void *)page_lut_regs_read + 0x4));
+	iowrite32(0x00000004, (void *)page_lut_regs_read + 0x8);
+	pr_info("hx280enc: RD AXI 8 = %x\n",
+		ioread32((void *)page_lut_regs_read + 0x8));
+
+	/* get the IRQ line */
+	for (i = 0; i < total_core_num; i++) {
+		if (hantroenc_data[i].is_valid == 0)
+			continue;
+		if (hantroenc_data[i].core_cfg.irq > 0) {
+			PDEBUG("hx280enc: Trying to request_irq = %d\n",
+			       hantroenc_data[i].core_cfg.irq);
+
+			result = request_irq(hantroenc_data[i].core_cfg.irq,
+					     hantroenc_isr, IRQF_SHARED,
+					     core_irq_names[i],
+					     (void *)&hantroenc_data[i]);
+
+			if (result == -EINVAL) {
+				PDEBUG("hx280enc: Bad irq number or handler\n");
+				release_io();
+				goto err;
+			} else if (result == -EBUSY) {
+				PDEBUG("hx280enc: IRQ <%d> busy, change config\n",
+				       hantroenc_data[i].core_cfg.irq);
+				release_io();
+				goto err;
+			}
+		}
+	}
+
+	pr_info("hantroenc: module inserted.\n");
+
+	return 0;
+err:
+	pr_info("hantroenc: module not inserted\n");
+	return result;
+}
+
+void hantroenc_cleanup(void)
+{
+	int i = 0;
+
+	for (i = 0; i < total_core_num; i++) {
+		u32 hwid = hantroenc_data[i].hw_id;
+		u32 major_id = (hwid & 0x0000FF00) >> 8;
+		u32 wclr = (major_id >= 0x61) ? (0x1FD) : (0);
+
+		if (hantroenc_data[i].is_valid == 0)
+			continue;
+		iowrite32(0, (void *)(hantroenc_data[i].hwregs +
+				      0x14)); /* disable HW */
+		iowrite32(wclr, (void *)(hantroenc_data[i].hwregs +
+					 0x04)); /* clear enc IRQ */
+
+		/* free the encoder IRQ */
+		if (hantroenc_data[i].core_cfg.irq > 0)
+			free_irq(hantroenc_data[i].core_cfg.irq,
+				 (void *)&hantroenc_data[i]);
+	}
+
+	release_io();
+	vfree(hantroenc_data);
+	hantro_clk_disable();
+	pr_info("hantroenc: module removed\n");
+}
+
+static int reserve_io(void)
+{
+	u32 hwid;
+	int i;
+	u32 found_hw = 0;
+
+	for (i = 0; i < total_core_num; i++) {
+		if (!request_mem_region(hantroenc_data[i].core_cfg.base_addr,
+					hantroenc_data[i].core_cfg.iosize,
+					"hx280enc")) {
+			PDEBUG("hantroenc: failed to reserve HW regs\n");
+			continue;
+		}
+
+		hantroenc_data[i].hwregs =
+			(u8 *)ioremap(hantroenc_data[i].core_cfg.base_addr,
+			hantroenc_data[i].core_cfg.iosize);
+
+		if (!hantroenc_data[i].hwregs) {
+			PDEBUG("hantroenc: failed to ioremap HW regs\n");
+			release_io();
+			continue;
+		}
+
+		/* read hwid and check validness and store it */
+		hwid = (u32)ioread32((void *)hantroenc_data[i].hwregs);
+		PDEBUG("hwid=0x%08x\n", hwid);
+
+		/* check for encoder HW ID */
+		if (((((hwid >> 16) & 0xFFFF) !=
+		      ((ENC_HW_ID1 >> 16) & 0xFFFF))) &&
+		    ((((hwid >> 16) & 0xFFFF) !=
+		      ((ENC_HW_ID2 >> 16) & 0xFFFF)))) {
+			PDEBUG("hantroenc: HW not found at %lx\n",
+			       hantroenc_data[i].core_cfg.base_addr);
+#ifdef hantroenc_DEBUG
+			dump_regs((unsigned long)&hantroenc_data);
+#endif
+			release_io();
+			hantroenc_data[i].is_valid = 0;
+			continue;
+		}
+		hantroenc_data[i].hw_id = hwid;
+		hantroenc_data[i].is_valid = 1;
+		found_hw = 1;
+
+		PDEBUG("hantroenc: HW at base <%lx> with ID <0x%08x>\n",
+		       hantroenc_data[i].core_cfg.base_addr, hwid);
+	}
+
+	if (found_hw == 0) {
+		pr_err("hantroenc: no valid encoder HW found\n");
+		return -ENODEV;
+	}
+
+	return 0;
+}
+
+static void release_io(void)
+{
+	u32 i;
+
+	for (i = 0; i < total_core_num; i++) {
+		if (hantroenc_data[i].is_valid == 0)
+			continue;
+		if (hantroenc_data[i].hwregs)
+			iounmap((void *)hantroenc_data[i].hwregs);
+		release_mem_region(hantroenc_data[i].core_cfg.base_addr,
+				   hantroenc_data[i].core_cfg.iosize);
+	}
+
+	iounmap((void *)page_lut_regs_read);
+	release_mem_region(page_lut_read, hantroenc_data[0].core_cfg.iosize);
+}
+
+static irqreturn_t hantroenc_isr(int irq, void *dev_id)
+{
+	unsigned int handled = 0;
+	struct hantroenc_t *dev = (struct hantroenc_t *)dev_id;
+	u32 irq_status;
+	unsigned long flags;
+
+	/*
+	 * If core is not reserved by any user,
+	 * but irq is received, just ignore it
+	 */
+	spin_lock_irqsave(&enc_owner_lock, flags);
+	if (!dev->is_reserved) {
+		PDEBUG("%s:received IRQ but core is not reserved!\n", __func__);
+		irq_status = (u32)ioread32((void *)(dev->hwregs + 0x04));
+		if (irq_status & 0x01) {
+			/*
+			 * clear all IRQ bits by writing value 1
+			 * (hwid >= 0x80006100) means IRQ is cleared
+			 */
+			u32 hwid = ioread32((void *)dev->hwregs);
+			u32 major_id = (hwid & 0x0000FF00) >> 8;
+			u32 wclr = (major_id >= 0x61) ? irq_status :
+						       (irq_status & (~0x1FD));
+
+			iowrite32(wclr, (void *)(dev->hwregs + 0x04));
+		}
+		spin_unlock_irqrestore(&enc_owner_lock, flags);
+		return IRQ_HANDLED;
+	}
+	spin_unlock_irqrestore(&enc_owner_lock, flags);
+
+	PDEBUG("%s:received IRQ!\n", __func__);
+	irq_status = (u32)ioread32((void *)(dev->hwregs + 0x04));
+	PDEBUG("hx280enc: irq_status of %d is:%x\n", dev->core_id, irq_status);
+	if (irq_status & 0x01) {
+		/*
+		 * clear all IRQ bits by writing value 1
+		 * (hwid >= 0x80006100) means IRQ is cleared
+		 */
+		u32 hwid = ioread32((void *)dev->hwregs);
+		u32 major_id = (hwid & 0x0000FF00) >> 8;
+		u32 wclr = (major_id >= 0x61) ? irq_status :
+					       (irq_status & (~0x1FD));
+
+		iowrite32(wclr, (void *)(dev->hwregs + 0x04));
+		spin_lock_irqsave(&enc_owner_lock, flags);
+		dev->irq_received = 1;
+		dev->irq_status = irq_status & (~0x01);
+		spin_unlock_irqrestore(&enc_owner_lock, flags);
+
+		wake_up_interruptible_all(&enc_wait_queue);
+		handled++;
+	}
+	if (!handled)
+		PDEBUG("IRQ received, but not hantro's!\n");
+
+	return IRQ_HANDLED;
+}
+
+static void reset_asic(struct hantroenc_t *dev)
+{
+	int i, n;
+
+	for (n = 0; n < total_core_num; n++) {
+		if (dev[n].is_valid == 0)
+			continue;
+		iowrite32(0, (void *)(dev[n].hwregs + 0x14));
+		for (i = 4; i < dev[n].core_cfg.iosize; i += 4)
+			iowrite32(0, (void *)(dev[n].hwregs + i));
+	}
+}
diff --git a/drivers/gpu/drm/hantro/hantro_enc.h b/drivers/gpu/drm/hantro/hantro_enc.h
new file mode 100644
index 0000000..711de74
--- /dev/null
+++ b/drivers/gpu/drm/hantro/hantro_enc.h
@@ -0,0 +1,66 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ *    Hantro encoder hardware driver header file.
+ *
+ *    Copyright (c) 2017 - 2020, VeriSilicon Inc.
+ *    Copyright (c) 2020 Intel Corporation
+ */
+
+#ifndef _HX280ENC_H_
+#define _HX280ENC_H_
+
+#include "hantro_drm.h"
+
+/*
+ * Macros to help debugging
+ */
+#undef PDEBUG
+#ifdef HX280ENC_DEBUG
+#ifdef __KERNEL__
+/* This one if debugging is on, and kernel space */
+#define PDEBUG(fmt, args...) pr_info("hmp4e: " fmt, ##args)
+#else
+/* This one for user space */
+#define PDEBUG(fmt, args...) printf(__FILE__ ":%d: " fmt, __LINE__, ##args)
+#endif
+#else
+#define PDEBUG(fmt, args...) /* not debugging: nothing */
+#endif
+
+/*------------------------------------------------------------------------
+ *****************************PORTING LAYER********************************
+ *-------------------------------------------------------------------------
+ */
+/* 0: no resource sharing between cores; 1: resources are shared */
+#define RESOURCE_SHARED_INTER_CORES     0
+#define CORE_0_IO_ADDR                  0x20884000 /* video */
+#define CORE_1_IO_ADDR                  0x208a0000 /* JPEG */
+
+#define CORE_0_IO_SIZE                  (250 * 4) /* bytes */
+#define CORE_1_IO_SIZE                  (250 * 4) /* bytes */
+
+#define INT_PIN_CORE_0                  137
+#define INT_PIN_CORE_1                  64
+
+#define HANTRO_VC8KE_REG_BWREAD         216
+#define HANTRO_VC8KE_REG_BWWRITE        220
+#define VC8KE_BURSTWIDTH                16
+#define KMB_VC8000E_PAGE_LUT            0x20885000
+
+#define ENC_HW_ID1			0x48320100
+#define ENC_HW_ID2			0x80006000
+#define CORE_INFO_MODE_OFFSET		31
+#define CORE_INFO_AMOUNT_OFFSET		28
+
+struct enc_core {
+	unsigned long base_addr;
+	u32 iosize;
+	int irq;
+	u32 resource_shared;
+};
+
+long hantroenc_ioctl(struct file *filp, unsigned int cmd, unsigned long arg);
+int hantroenc_init(struct platform_device *pdev);
+void hantroenc_cleanup(void);
+u32 hantroenc_readbandwidth(int is_read_bw);
+#endif /* !_HX280ENC_H_ */
-- 
1.9.1


* [PATCH v1 3/4] drm: hantro: Keem Bay VPU DRM decoder
  2020-10-09 11:57 [PATCH v1 0/4] Add support for Keem Bay VPU DRM driver kuhanh.murugasen.krishnan
  2020-10-09 11:57 ` [PATCH v1 1/4] drm: Add Keem Bay VPU codec DRM kuhanh.murugasen.krishnan
  2020-10-09 11:57 ` [PATCH v1 2/4] drm: hantro: Keem Bay VPU DRM encoder kuhanh.murugasen.krishnan
@ 2020-10-09 11:57 ` kuhanh.murugasen.krishnan
  2020-10-09 11:57 ` [PATCH v1 4/4] drm: hantro: Keem Bay VPU DRM build files kuhanh.murugasen.krishnan
  3 siblings, 0 replies; 7+ messages in thread
From: kuhanh.murugasen.krishnan @ 2020-10-09 11:57 UTC (permalink / raw)
  To: dri-devel

From: "Murugasen Krishnan, Kuhanh" <kuhanh.murugasen.krishnan@intel.com>

Hantro VC8000D allows 4K decoding with a minimal silicon single-core solution that
supports HEVC and H.264 video formats, key features:
* HEVC Main10 and Main Profiles up to Level 5.2
* HEVC Main Still Profile
* H.264 Main and High Profiles up to Level 5.2
* HEVC, H.264 and JPEG decoding up to 4K@60fps
* 8 channels 1080p@30fps decoding

Signed-off-by: Murugasen Krishnan, Kuhanh <kuhanh.murugasen.krishnan@intel.com>
---
 drivers/gpu/drm/hantro/hantro_dec.c      | 1441 ++++++++++++++++++++++++++++++
 drivers/gpu/drm/hantro/hantro_dec.h      |   59 ++
 drivers/gpu/drm/hantro/hantro_dwl_defs.h |  101 +++
 3 files changed, 1601 insertions(+)
 create mode 100644 drivers/gpu/drm/hantro/hantro_dec.c
 create mode 100644 drivers/gpu/drm/hantro/hantro_dec.h
 create mode 100644 drivers/gpu/drm/hantro/hantro_dwl_defs.h

diff --git a/drivers/gpu/drm/hantro/hantro_dec.c b/drivers/gpu/drm/hantro/hantro_dec.c
new file mode 100644
index 0000000..ac501f3
--- /dev/null
+++ b/drivers/gpu/drm/hantro/hantro_dec.c
@@ -0,0 +1,1441 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ *    Hantro decoder hardware driver.
+ *
+ *    Copyright (c) 2017 - 2020, VeriSilicon Inc.
+ *    Copyright (c) 2020 Intel Corporation
+ */
+
+#include "hantro_dec.h"
+#include "hantro_dwl_defs.h"
+#include <linux/io.h>
+#include <linux/uaccess.h>
+#include <linux/errno.h>
+#include <linux/fs.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/ioport.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/pci.h>
+#include <linux/sched.h>
+#include <linux/semaphore.h>
+#include <linux/spinlock.h>
+#include <linux/slab.h>
+#include <linux/version.h>
+#include <linux/wait.h>
+#include <linux/timer.h>
+#include <linux/clk.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_irq.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/clk.h>
+
+static const int dec_hwid[] = {
+	0x8001 /* VDEC */
+};
+
+ulong multicorebase[HXDEC_MAX_CORES] = {
+	SOCLE_LOGIC_0_BASE,
+	SOCLE_LOGIC_1_BASE
+};
+
+int irq[HXDEC_MAX_CORES] = {
+	DEC_IRQ_0,
+	DEC_IRQ_1
+};
+
+unsigned int iosize[HXDEC_MAX_CORES] = {
+	DEC_IO_SIZE_0,
+	DEC_IO_SIZE_1
+};
+
+/* KMB page lookup table */
+static unsigned long page_lut_read = KMB_VC8000D_PAGE_LUT;
+static u8 *page_lut_regs_read;
+
+/*
+ * One core may contain multiple pipelines, so the actual core base
+ * addresses may differ from the static table above.
+ */
+unsigned long multicorebase_actual[HXDEC_MAX_CORES];
+int elements = 2;
+static struct device *parent_dev;
+
+struct hantrodec_t {
+	char *buffer;
+	unsigned int iosize[HXDEC_MAX_CORES];
+	u8 *hwregs[HXDEC_MAX_CORES];
+	int irq[HXDEC_MAX_CORES];
+	int hw_id[HXDEC_MAX_CORES];
+	int cores;
+	struct fasync_struct *async_queue_dec;
+	struct fasync_struct *async_queue_pp;
+};
+
+struct core_cfg {
+	/* supported formats per core */
+	u32 cfg[HXDEC_MAX_CORES];
+	/* backup of cfg */
+	u32 cfg_backup[HXDEC_MAX_CORES];
+	/* id of the paired main core, negative if none */
+	int its_main_core_id[HXDEC_MAX_CORES];
+	/* id of the paired aux core, negative if none */
+	int its_aux_core_id[HXDEC_MAX_CORES];
+};
+
+static struct hantrodec_t hantrodec_data;
+static int reserve_io(void);
+static void release_io(void);
+static void reset_asic(struct hantrodec_t *dev);
+
+#ifdef HANTRODEC_DEBUG
+static void dump_regs(struct hantrodec_t *dev);
+#endif
+
+/* IRQ handler */
+static irqreturn_t hantrodec_isr(int irq, void *dev_id);
+static u32 dec_regs[HXDEC_MAX_CORES][DEC_IO_SIZE_MAX / 4];
+struct semaphore dec_core_sem;
+struct semaphore pp_core_sem;
+static int dec_irq;
+static int pp_irq;
+
+atomic_t irq_rx = ATOMIC_INIT(0);
+atomic_t irq_tx = ATOMIC_INIT(0);
+
+static struct file *dec_owner[HXDEC_MAX_CORES];
+static struct file *pp_owner[HXDEC_MAX_CORES];
+static int core_has_format(const u32 *cfg, int core, u32 format);
+
+static DEFINE_SPINLOCK(owner_lock);
+static DECLARE_WAIT_QUEUE_HEAD(dec_wait_queue);
+static DECLARE_WAIT_QUEUE_HEAD(pp_wait_queue);
+static DECLARE_WAIT_QUEUE_HEAD(hw_queue);
+
+static struct core_cfg config;
+static u32 timeout;
+
+static struct clk *hantro_clk_xin_vdec;
+static struct clk *hantro_clk_vdec;
+
+static int hantro_clk_enable(void)
+{
+	clk_prepare_enable(hantro_clk_xin_vdec);
+	clk_prepare_enable(hantro_clk_vdec);
+
+	return 0;
+}
+
+static int hantro_clk_disable(void)
+{
+	if (hantro_clk_xin_vdec)
+		clk_disable_unprepare(hantro_clk_xin_vdec);
+
+	if (hantro_clk_vdec)
+		clk_disable_unprepare(hantro_clk_vdec);
+
+	return 0;
+}
+
+u32 hantrodec_readbandwidth(int is_read_bw)
+{
+	int i;
+	u32 bandwidth = 0;
+	struct hantrodec_t *dev = &hantrodec_data;
+
+	for (i = 0; i < hantrodec_data.cores; i++) {
+		if (is_read_bw)
+			bandwidth +=
+				ioread32((void *)(dev->hwregs[i] +
+						  HANTRO_VC8KD_REG_BWREAD * 4));
+		else
+			bandwidth += ioread32
+				((void *)(dev->hwregs[i] +
+					 HANTRO_VC8KD_REG_BWWRITE * 4));
+	}
+
+	return bandwidth * VC8KD_BURSTWIDTH;
+}
+
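+/*
+ * Probe each core's synthesis configuration registers and translate the
+ * feature bits into DWL_CLIENT_TYPE_* capability masks in config.cfg;
+ * config.cfg_backup keeps the unmodified copy so that reservations can
+ * temporarily narrow cfg and releases can restore it.
+ */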
+static void read_core_config(struct hantrodec_t *dev)
+{
+	int c;
+	u32 reg, tmp, mask;
+
+	memset(config.cfg, 0, sizeof(config.cfg));
+
+	for (c = 0; c < dev->cores; c++) {
+		/* Decoder configuration */
+		if ((IS_VC8000D(dev->hw_id[c])) &&
+		    config.its_main_core_id[c] < 0) {
+			reg = ioread32((void *)(dev->hwregs[c] +
+						HANTRODEC_SYNTH_CFG * 4));
+
+			tmp = (reg >> DWL_H264_E) & 0x3U;
+			if (tmp)
+				pr_info("hantrodec: core[%d] has H264\n", c);
+			config.cfg[c] |=
+				tmp ? 1 << DWL_CLIENT_TYPE_H264_DEC : 0;
+
+			tmp = (reg >> DWL_JPEG_E) & 0x01U;
+			if (tmp)
+				pr_info("hantrodec: core[%d] has JPEG\n", c);
+			config.cfg[c] |=
+				tmp ? 1 << DWL_CLIENT_TYPE_JPEG_DEC : 0;
+
+			tmp = (reg >> DWL_MPEG4_E) & 0x3U;
+			if (tmp)
+				pr_info("hantrodec: core[%d] has MPEG4\n", c);
+			config.cfg[c] |=
+				tmp ? 1 << DWL_CLIENT_TYPE_MPEG4_DEC : 0;
+
+			tmp = (reg >> DWL_VC1_E) & 0x3U;
+			if (tmp)
+				pr_info("hantrodec: core[%d] has VC1\n", c);
+			config.cfg[c] |= tmp ? 1 << DWL_CLIENT_TYPE_VC1_DEC : 0;
+
+			tmp = (reg >> DWL_MPEG2_E) & 0x01U;
+			if (tmp)
+				pr_info("hantrodec: core[%d] has MPEG2\n", c);
+			config.cfg[c] |=
+				tmp ? 1 << DWL_CLIENT_TYPE_MPEG2_DEC : 0;
+
+			tmp = (reg >> DWL_VP6_E) & 0x01U;
+			if (tmp)
+				pr_info("hantrodec: core[%d] has VP6\n", c);
+			config.cfg[c] |= tmp ? 1 << DWL_CLIENT_TYPE_VP6_DEC : 0;
+
+			reg = ioread32((void *)(dev->hwregs[c] +
+						HANTRODEC_SYNTH_CFG_2 * 4));
+
+			/* VP7 and WebP are part of VP8 */
+			mask = (1 << DWL_VP8_E) | (1 << DWL_VP7_E) |
+			       (1 << DWL_WEBP_E);
+			tmp = (reg & mask);
+			if (tmp & (1 << DWL_VP8_E))
+				pr_info("hantrodec: core[%d] has VP8\n", c);
+			if (tmp & (1 << DWL_VP7_E))
+				pr_info("hantrodec: core[%d] has VP7\n", c);
+			if (tmp & (1 << DWL_WEBP_E))
+				pr_info("hantrodec: core[%d] has WebP\n", c);
+			config.cfg[c] |= tmp ? 1 << DWL_CLIENT_TYPE_VP8_DEC : 0;
+
+			tmp = (reg >> DWL_AVS_E) & 0x01U;
+			if (tmp)
+				pr_info("hantrodec: core[%d] has AVS\n", c);
+			config.cfg[c] |= tmp ? 1 << DWL_CLIENT_TYPE_AVS_DEC : 0;
+
+			tmp = (reg >> DWL_RV_E) & 0x03U;
+			if (tmp)
+				pr_info("hantrodec: core[%d] has RV\n", c);
+			config.cfg[c] |= tmp ? 1 << DWL_CLIENT_TYPE_RV_DEC : 0;
+
+			reg = ioread32((void *)(dev->hwregs[c] +
+						HANTRODEC_SYNTH_CFG_3 * 4));
+
+			tmp = (reg >> DWL_HEVC_E) & 0x07U;
+			if (tmp)
+				pr_info("hantrodec: core[%d] has HEVC\n", c);
+			config.cfg[c] |=
+				tmp ? 1 << DWL_CLIENT_TYPE_HEVC_DEC : 0;
+
+			tmp = (reg >> DWL_VP9_E) & 0x07U;
+			if (tmp)
+				pr_info("hantrodec: core[%d] has VP9\n", c);
+			config.cfg[c] |= tmp ? 1 << DWL_CLIENT_TYPE_VP9_DEC : 0;
+
+			/* Post-processor configuration */
+			reg = ioread32((void *)(dev->hwregs[c] +
+						HANTRODECPP_CFG_STAT * 4));
+
+			tmp = (reg >> DWL_PP_E) & 0x01U;
+			if (tmp)
+				pr_info("hantrodec: core[%d] has PP\n", c);
+			config.cfg[c] |= tmp ? 1 << DWL_CLIENT_TYPE_PP : 0;
+
+			if (config.its_aux_core_id[c] >= 0) {
+				/* set main_core_id and aux_core_id */
+				reg = ioread32
+					((void *)(dev->hwregs[c] +
+						 HANTRODEC_SYNTH_CFG_2 * 4));
+
+				tmp = (reg >> DWL_H264_PIPELINE_E) & 0x01U;
+				if (tmp)
+					pr_info("hantrodec: core[%d] has pipeline H264\n",
+						c);
+				config.cfg[config.its_aux_core_id[c]] |=
+					tmp ? 1 << DWL_CLIENT_TYPE_H264_DEC : 0;
+
+				tmp = (reg >> DWL_JPEG_PIPELINE_E) & 0x01U;
+				if (tmp)
+					pr_info("hantrodec: core[%d] has pipeline JPEG\n",
+						c);
+				config.cfg[config.its_aux_core_id[c]] |=
+					tmp ? 1 << DWL_CLIENT_TYPE_JPEG_DEC : 0;
+			}
+		}
+	}
+	memcpy(config.cfg_backup, config.cfg, sizeof(config.cfg));
+}
+
+static int core_has_format(const u32 *cfg, int core, u32 format)
+{
+	return (cfg[core] & (1 << format)) ? 1 : 0;
+}
+
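+/*
+ * Try to reserve a specific core for filp: succeeds only if the core
+ * supports the requested format and is currently unowned. On success the
+ * paired main/aux core's advertised formats are narrowed so the two
+ * pipelines of one core are not handed out for incompatible work.
+ */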
+static int get_dec_core(long core, struct hantrodec_t *dev, struct file *filp,
+			unsigned long format)
+{
+	int success = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&owner_lock, flags);
+	if (core_has_format(config.cfg, core, format) &&
+	    !dec_owner[core]) {
+		dec_owner[core] = filp;
+		success = 1;
+		/*
+		 * If a main core takes a format the aux core does not
+		 * support, clear the aux core's cfg so it advertises no
+		 * format; otherwise restrict it to just this format.
+		 */
+		if (config.its_aux_core_id[core] >= 0) {
+			if (!core_has_format(config.cfg,
+					     config.its_aux_core_id[core],
+					     format))
+				config.cfg[config.its_aux_core_id[core]] = 0;
+			else
+				config.cfg[config.its_aux_core_id[core]] =
+					(1 << format);
+		}
+		/*
+		 * If an aux core takes a format, restrict the main
+		 * core's cfg to that same format.
+		 */
+		else if (config.its_main_core_id[core] >= 0)
+			config.cfg[config.its_main_core_id[core]] =
+				(1 << format);
+	}
+
+	spin_unlock_irqrestore(&owner_lock, flags);
+
+	return success;
+}
+
+static int get_dec_core_any(long *core, struct hantrodec_t *dev, struct file *filp,
+			    unsigned long format)
+{
+	int success = 0;
+	long c;
+
+	*core = -1;
+
+	for (c = 0; c < dev->cores; c++) {
+		/* a free core that has format */
+		if (get_dec_core(c, dev, filp, format)) {
+			success = 1;
+			*core = c;
+			break;
+		}
+	}
+
+	return success;
+}
+
+static int get_dec_coreid(struct hantrodec_t *dev, struct file *filp,
+			  unsigned long format)
+{
+	long c;
+	unsigned long flags;
+
+	int core_id = -1;
+
+	for (c = 0; c < dev->cores; c++) {
+		/* a core that has format */
+		spin_lock_irqsave(&owner_lock, flags);
+		if (core_has_format(config.cfg_backup, c, format)) {
+			core_id = c;
+			spin_unlock_irqrestore(&owner_lock, flags);
+			break;
+		}
+		spin_unlock_irqrestore(&owner_lock, flags);
+	}
+
+	return core_id;
+}
+
+static long reserve_decoder(struct hantrodec_t *dev, struct file *filp,
+			    unsigned long format)
+{
+	long core = -1;
+
+	/* reserve a core */
+	if (down_interruptible(&dec_core_sem))
+		return -ERESTARTSYS;
+
+	/* lock a core that has specific format */
+	if (wait_event_interruptible(hw_queue, get_dec_core_any(&core, dev, filp,
+								format) != 0))
+		return -ERESTARTSYS;
+	PDEBUG("reserve core %ld:%lx\n", core, (unsigned long)filp);
+
+	return core;
+}
+
+static void release_decoder(struct hantrodec_t *dev, long core)
+{
+	u32 status;
+	unsigned long flags;
+
+	status = ioread32
+			((void *)(dev->hwregs[core] +
+				  HANTRODEC_IRQ_STAT_DEC_OFF));
+
+	/* make sure HW is disabled */
+	if (status & HANTRODEC_DEC_E) {
+		PDEBUG("hantrodec: DEC[%li] still enabled -> reset\n", core);
+		/* abort decoder */
+		status |= HANTRODEC_DEC_ABORT | HANTRODEC_DEC_IRQ_DISABLE;
+		iowrite32(status, (void *)(dev->hwregs[core] +
+					   HANTRODEC_IRQ_STAT_DEC_OFF));
+	}
+
+	spin_lock_irqsave(&owner_lock, flags);
+	/* If aux core released, revert main core's config back */
+	if (config.its_main_core_id[core] >= 0)
+		config.cfg[config.its_main_core_id[core]] =
+			config.cfg_backup[config.its_main_core_id[core]];
+
+	/* If main core released, revert aux core's config back */
+	if (config.its_aux_core_id[core] >= 0)
+		config.cfg[config.its_aux_core_id[core]] =
+			config.cfg_backup[config.its_aux_core_id[core]];
+
+	dec_owner[core] = NULL;
+	spin_unlock_irqrestore(&owner_lock, flags);
+	up(&dec_core_sem);
+	wake_up_interruptible_all(&hw_queue);
+}
+
+static long reserve_post_processor(struct hantrodec_t *dev, struct file *filp)
+{
+	unsigned long flags;
+	long core = 0;
+
+	/* single core PP only */
+	if (down_interruptible(&pp_core_sem))
+		return -ERESTARTSYS;
+
+	spin_lock_irqsave(&owner_lock, flags);
+	pp_owner[core] = filp;
+	spin_unlock_irqrestore(&owner_lock, flags);
+
+	return core;
+}
+
+static void release_post_processor(struct hantrodec_t *dev, long core)
+{
+	unsigned long flags;
+
+	u32 status =
+		ioread32((void *)(dev->hwregs[core] + HANTRO_IRQ_STAT_PP_OFF));
+
+	/* make sure HW is disabled */
+	if (status & HANTRO_PP_E) {
+		PDEBUG("hantrodec: PP[%li] still enabled -> reset\n", core);
+		/* disable IRQ */
+		status |= HANTRO_PP_IRQ_DISABLE;
+		/* disable postprocessor */
+		status &= (~HANTRO_PP_E);
+		iowrite32(status,
+			  (void *)(dev->hwregs[core] + HANTRO_IRQ_STAT_PP_OFF));
+	}
+
+	spin_lock_irqsave(&owner_lock, flags);
+	pp_owner[core] = NULL;
+	spin_unlock_irqrestore(&owner_lock, flags);
+	up(&pp_core_sem);
+}
+
+long reserve_dec_pp(struct hantrodec_t *dev, struct file *filp,
+		    unsigned long format)
+{
+	/* reserve core 0, DEC+PP for pipeline */
+	unsigned long flags;
+	long core = 0;
+
+	/* check that core has the requested dec format */
+	if (!core_has_format(config.cfg, core, format))
+		return -EFAULT;
+
+	/* check that core has PP */
+	if (!core_has_format(config.cfg, core, DWL_CLIENT_TYPE_PP))
+		return -EFAULT;
+
+	/* reserve a core */
+	if (down_interruptible(&dec_core_sem))
+		return -ERESTARTSYS;
+
+	/* wait until the core is available */
+	if (wait_event_interruptible(hw_queue, get_dec_core(core, dev, filp,
+							    format) != 0)) {
+		up(&dec_core_sem);
+		return -ERESTARTSYS;
+	}
+
+	if (down_interruptible(&pp_core_sem)) {
+		release_decoder(dev, core);
+		return -ERESTARTSYS;
+	}
+
+	spin_lock_irqsave(&owner_lock, flags);
+	pp_owner[core] = filp;
+	spin_unlock_irqrestore(&owner_lock, flags);
+
+	return core;
+}
+
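+/*
+ * Register push/pull protocol: userspace fills a struct core_desc with a
+ * core id and a register image; the flush path writes the image to the
+ * hardware (status reg[1] last, since writing it may start the decoder)
+ * and the refresh path reads the hardware back into the image. A sketch
+ * of the caller side (field values are illustrative only):
+ *
+ *	struct core_desc cd = { .id = 0, .regs = regbuf,
+ *				.size = HANTRO_VC8000D_REGS * 4 };
+ *	ioctl(fd, HANTRODEC_IOCS_DEC_PUSH_REG, &cd);
+ *	ioctl(fd, HANTRODEC_IOCX_DEC_WAIT, &cd);
+ */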
+static long dec_flush_regs(struct hantrodec_t *dev, struct core_desc *core)
+{
+	long ret = 0, i;
+
+	u32 id = core->id;
+
+	ret = copy_from_user(dec_regs[id], core->regs, HANTRO_VC8000D_REGS * 4);
+	if (ret) {
+		PDEBUG("copy_from_user failed, returned %li\n", ret);
+		return -EFAULT;
+	}
+	/* write all regs but the status reg[1] to hardware */
+	for (i = 2; i <= HANTRO_VC8000D_LAST_REG; i++)
+		iowrite32(dec_regs[id][i], (void *)(dev->hwregs[id] + i * 4));
+	/* write the status register, which may start the decoder */
+	iowrite32(dec_regs[id][1], (void *)(dev->hwregs[id] + 4));
+
+	PDEBUG("flushed registers on core %d\n", id);
+
+	return 0;
+}
+
+static long dec_refresh_regs(struct hantrodec_t *dev, struct core_desc *core)
+{
+	long ret, i;
+	u32 id = core->id;
+
+	for (i = 0; i <= HANTRO_VC8000D_LAST_REG; i++)
+		dec_regs[id][i] = ioread32((void *)(dev->hwregs[id] + i * 4));
+
+	ret = copy_to_user(core->regs, dec_regs[id],
+			   HANTRO_VC8000D_LAST_REG * 4);
+	if (ret) {
+		PDEBUG("copy_to_user failed, returned %li\n", ret);
+		return -EFAULT;
+	}
+
+	return 0;
+}
+
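+/*
+ * dec_irq collects per-core interrupt bits (bit n for core n); a waiter
+ * consumes its core's bit here under owner_lock.
+ */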
+static int check_dec_irq(struct hantrodec_t *dev, int id)
+{
+	unsigned long flags;
+	int rdy = 0;
+	const u32 irq_mask = (1 << id);
+
+	spin_lock_irqsave(&owner_lock, flags);
+
+	if (dec_irq & irq_mask) {
+		/* reset the wait condition(s) */
+		dec_irq &= ~irq_mask;
+		rdy = 1;
+	}
+
+	spin_unlock_irqrestore(&owner_lock, flags);
+
+	return rdy;
+}
+
+static long wait_dec_ready_and_refresh_regs(struct hantrodec_t *dev,
+					    struct core_desc *core)
+{
+	u32 id = core->id;
+	long ret;
+
+	PDEBUG("wait_event_interruptible DEC[%d]\n", id);
+	ret = wait_event_interruptible_timeout
+		(dec_wait_queue, check_dec_irq(dev, id), msecs_to_jiffies(10));
+	if (ret == -ERESTARTSYS) {
+		pr_err("DEC[%d] wait_event_interruptible was interrupted\n",
+		       id);
+		return -ERESTARTSYS;
+	} else if (ret == 0) {
+		pr_err("DEC[%d] wait_event_interruptible timed out\n", id);
+		timeout = 1;
+		return -EBUSY;
+	}
+	atomic_inc(&irq_tx);
+
+	/* refresh registers */
+	return dec_refresh_regs(dev, core);
+}
+
+static long dec_write_regs(struct hantrodec_t *dev, struct core_desc *core)
+{
+	long ret = 0, i;
+	u32 id = core->id;
+
+	i = core->reg_id;
+	ret = copy_from_user(dec_regs[id] + core->reg_id,
+			     core->regs + core->reg_id, 4);
+	if (ret) {
+		PDEBUG("copy_from_user failed, returned %li\n", ret);
+		return -EFAULT;
+	}
+	iowrite32(dec_regs[id][i], (void *)dev->hwregs[id] + i * 4);
+
+	return 0;
+}
+
+u32 *hantrodec_get_reg_addr(u32 coreid, u32 regid)
+{
+	if (coreid >= hantrodec_data.cores)
+		return NULL;
+	if (regid * 4 >= hantrodec_data.iosize[coreid])
+		return NULL;
+
+	return (u32 *)(hantrodec_data.hwregs[coreid] + regid * 4);
+}
+
+static long dec_read_regs(struct hantrodec_t *dev, struct core_desc *core)
+{
+	long ret, i;
+	u32 id = core->id;
+
+	/* read the requested register from hardware */
+	i = core->reg_id;
+	dec_regs[id][i] = ioread32((void *)dev->hwregs[id] + i * 4);
+	/* put the register value to user space */
+	ret = copy_to_user(core->regs + core->reg_id,
+			   dec_regs[id] + core->reg_id, 4);
+	if (ret) {
+		PDEBUG("copy_to_user failed, returned %li\n", ret);
+		return -EFAULT;
+	}
+
+	return 0;
+}
+
+static long pp_flush_regs(struct hantrodec_t *dev, struct core_desc *core)
+{
+	long ret = 0;
+	u32 id = core->id;
+	u32 i;
+
+	/* copy original dec regs to kernel space */
+	ret = copy_from_user(dec_regs[id] + HANTRO_PP_ORG_FIRST_REG,
+			     core->regs + HANTRO_PP_ORG_FIRST_REG,
+			     HANTRO_PP_ORG_REGS * 4);
+	if (ret) {
+		pr_err("copy_from_user failed, returned %li\n", ret);
+		return -EFAULT;
+	}
+	/* write all regs but the status reg[1] to hardware */
+	/* both original and extended regs need to be written */
+	for (i = HANTRO_PP_ORG_FIRST_REG + 1; i <= HANTRO_PP_ORG_LAST_REG; i++)
+		iowrite32(dec_regs[id][i], (void *)dev->hwregs[id] + i * 4);
+	/* write the stat reg, which may start the PP */
+	iowrite32(dec_regs[id][HANTRO_PP_ORG_FIRST_REG],
+		  (void *)dev->hwregs[id] + HANTRO_PP_ORG_FIRST_REG * 4);
+
+	return 0;
+}
+
+static long pp_refresh_regs(struct hantrodec_t *dev, struct core_desc *core)
+{
+	long i, ret;
+	u32 id = core->id;
+	/* user has to know exactly what they are asking for */
+	if (core->size != (HANTRO_PP_ORG_REGS * 4))
+		return -EFAULT;
+	/* read all registers from hardware */
+	/* both original and extended regs need to be read */
+	for (i = HANTRO_PP_ORG_FIRST_REG; i <= HANTRO_PP_ORG_LAST_REG; i++)
+		dec_regs[id][i] = ioread32((void *)dev->hwregs[id] + i * 4);
+	/* put original registers to user space */
+	ret = copy_to_user(core->regs + HANTRO_PP_ORG_FIRST_REG,
+			   dec_regs[id] + HANTRO_PP_ORG_FIRST_REG,
+			   HANTRO_PP_ORG_REGS * 4);
+	if (ret) {
+		pr_err("copy_to_user failed, returned %li\n", ret);
+		return -EFAULT;
+	}
+
+	return 0;
+}
+
+static int check_pp_irq(struct hantrodec_t *dev, int id)
+{
+	unsigned long flags;
+	int rdy = 0;
+	const u32 irq_mask = (1 << id);
+
+	spin_lock_irqsave(&owner_lock, flags);
+
+	if (pp_irq & irq_mask) {
+		/* reset the wait condition(s) */
+		pp_irq &= ~irq_mask;
+		rdy = 1;
+	}
+
+	spin_unlock_irqrestore(&owner_lock, flags);
+
+	return rdy;
+}
+
+static long wait_pp_ready_and_refresh_regs(struct hantrodec_t *dev,
+					   struct core_desc *core)
+{
+	u32 id = core->id;
+
+	PDEBUG("wait_event_interruptible PP[%d]\n", id);
+	if (wait_event_interruptible(pp_wait_queue, check_pp_irq(dev, id))) {
+		pr_err("PP[%d]  failed to wait_event_interruptible interrupted\n",
+		       id);
+		return -ERESTARTSYS;
+	}
+
+	atomic_inc(&irq_tx);
+
+	/* refresh registers */
+	return pp_refresh_regs(dev, core);
+}
+
+static int check_core_irq(struct hantrodec_t *dev, const struct file *filp,
+			  int *id)
+{
+	unsigned long flags;
+	int rdy = 0, n = 0;
+
+	do {
+		u32 irq_mask = (1 << n);
+
+		spin_lock_irqsave(&owner_lock, flags);
+
+		if (dec_irq & irq_mask) {
+			PDEBUG("%s get irq for core %d:%lx", __func__, n,
+			       (unsigned long)filp);
+			if (*id == n) {
+				/* we have an IRQ for our client */
+				/* reset the wait condition(s) */
+				dec_irq &= ~irq_mask;
+				/* signal ready core no. for our client */
+				*id = n;
+				rdy = 1;
+				spin_unlock_irqrestore(&owner_lock, flags);
+				break;
+			} else if (!dec_owner[n]) {
+				/* zombie IRQ */
+				PDEBUG("IRQ on core[%d], but no owner!\n", n);
+				/* reset the wait condition(s) */
+				dec_irq &= ~irq_mask;
+			}
+		}
+
+		spin_unlock_irqrestore(&owner_lock, flags);
+		n++; /* next core */
+	} while (n < dev->cores);
+
+	return rdy;
+}
+
+static long wait_core_ready(struct hantrodec_t *dev, const struct file *filp,
+			    int *id)
+{
+	PDEBUG("wait_event_interruptible CORE\n");
+
+	if (wait_event_interruptible(dec_wait_queue,
+				     check_core_irq(dev, filp, id))) {
+		pr_err("CORE  failed to wait_event_interruptible interrupted\n");
+		return -ERESTARTSYS;
+	}
+
+	atomic_inc(&irq_tx);
+
+	return 0;
+}
+
+/*-------------------------------------------------------------------------
+ * Function name   : hantrodec_ioctl
+ * Description     : communication method to/from the user space
+ *
+ * Return type     : long
+ *-------------------------------------------------------------------------
+ */
+long hantrodec_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+{
+	u32 id;
+	long tmp;
+	struct core_desc core;
+
+	switch (_IOC_NR(cmd)) {
+	case _IOC_NR(HANTRODEC_IOC_CLI): {
+		id = arg;
+		if (id >= hantrodec_data.cores)
+			return -EFAULT;
+		disable_irq(hantrodec_data.irq[id]);
+		break;
+	}
+	case _IOC_NR(HANTRODEC_IOC_STI): {
+		id = arg;
+		if (id >= hantrodec_data.cores)
+			return -EFAULT;
+		enable_irq(hantrodec_data.irq[id]);
+		break;
+	}
+	case _IOC_NR(HANTRODEC_IOCGHWOFFSET): {
+		__get_user(id, (__u32 *)arg);
+		if (id >= hantrodec_data.cores)
+			return -EFAULT;
+
+		__put_user(multicorebase_actual[id], (unsigned long *)arg);
+		break;
+	}
+	case _IOC_NR(HANTRODEC_IOCGHWIOSIZE): {
+		__u32 io_size;
+
+		__get_user(id, (__u32 *)arg);
+		if (id >= hantrodec_data.cores)
+			return -EFAULT;
+		io_size = hantrodec_data.iosize[id];
+		__put_user(io_size, (u32 *)arg);
+
+		return 0;
+	}
+	case _IOC_NR(HANTRODEC_IOC_MC_OFFSETS): {
+		tmp = copy_to_user((unsigned long *)arg, multicorebase_actual,
+				   sizeof(multicorebase_actual));
+		if (tmp) {
+			pr_err("copy_to_user failed, returned %li\n", tmp);
+			return -EFAULT;
+		}
+		break;
+	}
+	case _IOC_NR(HANTRODEC_IOC_MC_CORES):
+		__put_user(hantrodec_data.cores, (unsigned int *)arg);
+		PDEBUG("hantrodec_data.cores=%d\n", hantrodec_data.cores);
+		break;
+	case _IOC_NR(HANTRODEC_IOCS_DEC_PUSH_REG): {
+		/* get registers from user space */
+		tmp = copy_from_user(&core, (void *)arg,
+				     sizeof(struct core_desc));
+		if (tmp) {
+			pr_err("copy_from_user failed, returned %li\n", tmp);
+			return -EFAULT;
+		}
+
+		dec_flush_regs(&hantrodec_data, &core);
+		break;
+	}
+
+	case _IOC_NR(HANTRODEC_IOCS_DEC_WRITE_REG): {
+		/* get registers from user space */
+		tmp = copy_from_user(&core, (void *)arg,
+				     sizeof(struct core_desc));
+		if (tmp) {
+			PDEBUG("copy_from_user failed, returned %li\n", tmp);
+			return -EFAULT;
+		}
+
+		dec_write_regs(&hantrodec_data, &core);
+		break;
+	}
+	case _IOC_NR(HANTRODEC_IOCS_PP_PUSH_REG): {
+		/* get registers from user space */
+		tmp = copy_from_user(&core, (void *)arg,
+				     sizeof(struct core_desc));
+		if (tmp) {
+			pr_err("copy_from_user failed, returned %li\n", tmp);
+			return -EFAULT;
+		}
+
+		pp_flush_regs(&hantrodec_data, &core);
+		break;
+	}
+	case _IOC_NR(HANTRODEC_IOCS_DEC_PULL_REG): {
+		tmp = copy_from_user(&core, (void *)arg,
+				     sizeof(struct core_desc));
+		if (tmp) {
+			pr_err("copy_from_user failed, returned %li\n", tmp);
+			return -EFAULT;
+		}
+
+		return dec_refresh_regs(&hantrodec_data, &core);
+	}
+	case _IOC_NR(HANTRODEC_IOCS_DEC_READ_REG): {
+		tmp = copy_from_user(&core, (void *)arg,
+				     sizeof(struct core_desc));
+		if (tmp) {
+			PDEBUG("copy_from_user failed, returned %li\n", tmp);
+			return -EFAULT;
+		}
+
+		return dec_read_regs(&hantrodec_data, &core);
+	}
+	case _IOC_NR(HANTRODEC_IOCS_PP_PULL_REG): {
+		tmp = copy_from_user(&core, (void *)arg,
+				     sizeof(struct core_desc));
+		if (tmp) {
+			pr_err("copy_from_user failed, returned %li\n", tmp);
+			return -EFAULT;
+		}
+
+		return pp_refresh_regs(&hantrodec_data, &core);
+	}
+	case _IOC_NR(HANTRODEC_IOCH_DEC_RESERVE): {
+		PDEBUG("Reserve DEC core, format = %li\n", arg);
+		return reserve_decoder(&hantrodec_data, filp, arg);
+	}
+	case _IOC_NR(HANTRODEC_IOCT_DEC_RELEASE): {
+		if (arg >= hantrodec_data.cores || dec_owner[arg] != filp) {
+			pr_err("bogus DEC release, core = %li\n", arg);
+			return -EFAULT;
+		}
+
+		PDEBUG("Release DEC, core = %li\n", arg);
+
+		release_decoder(&hantrodec_data, arg);
+
+		break;
+	}
+	case _IOC_NR(HANTRODEC_IOCQ_PP_RESERVE):
+		return reserve_post_processor(&hantrodec_data, filp);
+	case _IOC_NR(HANTRODEC_IOCT_PP_RELEASE): {
+		if (arg != 0 || pp_owner[arg] != filp) {
+			pr_err("bogus PP release %li\n", arg);
+			return -EFAULT;
+		}
+
+		release_post_processor(&hantrodec_data, arg);
+
+		break;
+	}
+	case _IOC_NR(HANTRODEC_IOCX_DEC_WAIT): {
+		tmp = copy_from_user(&core, (void *)arg,
+				     sizeof(struct core_desc));
+		if (tmp) {
+			pr_err("copy_from_user failed, returned %li\n", tmp);
+			return -EFAULT;
+		}
+
+		return wait_dec_ready_and_refresh_regs(&hantrodec_data, &core);
+	}
+	case _IOC_NR(HANTRODEC_IOCX_PP_WAIT): {
+		tmp = copy_from_user(&core, (void *)arg,
+				     sizeof(struct core_desc));
+		if (tmp) {
+			pr_err("copy_from_user failed, returned %li\n", tmp);
+			return -EFAULT;
+		}
+
+		return wait_pp_ready_and_refresh_regs(&hantrodec_data, &core);
+	}
+	case _IOC_NR(HANTRODEC_IOCG_CORE_WAIT): {
+		int id;
+
+		__get_user(id, (int *)arg);
+		tmp = wait_core_ready(&hantrodec_data, filp, &id);
+		__put_user(id, (int *)arg);
+		return tmp;
+	}
+	case _IOC_NR(HANTRODEC_IOX_ASIC_ID): {
+		__get_user(id, (u32 *)arg);
+		if (id >= hantrodec_data.cores)
+			return -EFAULT;
+		id = ioread32((void *)hantrodec_data.hwregs[id]);
+		__put_user(id, (u32 *)arg);
+		return 0;
+	}
+	case _IOC_NR(HANTRODEC_IOCG_CORE_ID): {
+		PDEBUG("Get DEC core_id, format = %li\n", arg);
+		tmp = get_dec_coreid(&hantrodec_data, filp, arg);
+		return tmp;
+	}
+	case _IOC_NR(HANTRODEC_DEBUG_STATUS): {
+		PDEBUG("hantrodec: dec_irq     = 0x%08x\n", dec_irq);
+		PDEBUG("hantrodec: pp_irq      = 0x%08x\n", pp_irq);
+
+		PDEBUG("hantrodec: IRQs received/sent2user = %d / %d\n",
+		       atomic_read(&irq_rx), atomic_read(&irq_tx));
+
+		for (tmp = 0; tmp < hantrodec_data.cores; tmp++) {
+			PDEBUG("hantrodec: dec_core[%li] %s\n", tmp,
+			       !dec_owner[tmp] ? "FREE" : "RESERVED");
+			PDEBUG("hantrodec: pp_core[%li]  %s\n", tmp,
+			       !pp_owner[tmp]  ? "FREE" : "RESERVED");
+		}
+		break;
+	}
+	default:
+		return -ENOTTY;
+	}
+
+	return 0;
+}
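+
+/*
+ * Usage sketch (illustrative only, not part of the driver): a
+ * user-space client would typically reserve a core, flush its
+ * registers, wait for the decode IRQ and release the core, e.g.
+ *
+ *	long core = ioctl(fd, HANTRODEC_IOCH_DEC_RESERVE,
+ *			  DWL_CLIENT_TYPE_H264_DEC);
+ *	struct core_desc desc = { .id = core, .regs = regs,
+ *				  .size = HANTRO_VC8000D_REGS * 4 };
+ *	ioctl(fd, HANTRODEC_IOCS_DEC_PUSH_REG, &desc);
+ *	ioctl(fd, HANTRODEC_IOCX_DEC_WAIT, &desc);
+ *	ioctl(fd, HANTRODEC_IOCT_DEC_RELEASE, core);
+ *
+ * where fd is an open handle to the device node (hypothetical here)
+ * and error handling is elided; the exact sequence is up to the
+ * user-space stack.
+ */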
+
+/*---------------------------------------------------------------------------
+ *Function name   : hantrodec_release
+ *Description     : Release driver
+ *
+ *Return type     : int
+ *----------------------------------------------------------------------------
+ */
+int hantrodec_release(struct file *filp)
+{
+	int n;
+	struct hantrodec_t *dev = &hantrodec_data;
+
+	for (n = 0; n < dev->cores; n++) {
+		if (dec_owner[n] == filp) {
+			PDEBUG("releasing dec core %i lock\n", n);
+			release_decoder(dev, n);
+		}
+	}
+
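+	/* only core 0 can own the post-processor (pp_core_sem counts 1) */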
+	for (n = 0; n < 1; n++) {
+		if (pp_owner[n] == filp) {
+			PDEBUG("releasing pp core %i lock\n", n);
+			release_post_processor(dev, n);
+		}
+	}
+
+	return 0;
+}
+
+int hantrodec_open(struct inode *inode, struct file *filp)
+{
+	return 0;
+}
+
+/*---------------------------------------------------------------------------
+ *Function name   : hantrodec_init
+ *Description     : Initialize the driver
+ *
+ *Return type     : int
+ *---------------------------------------------------------------------------
+ */
+int hantrodec_init(struct platform_device *pdev)
+{
+	int result = 0;
+	int irq_0;
+	int i;
+
+	dec_irq = 0;
+	pp_irq = 0;
+	parent_dev = &pdev->dev;
+	pr_info("hantrodec: Init multi core[0] at 0x%16lx\n"
+		"core[1] at 0x%16lx\n",
+		multicorebase[0], multicorebase[1]);
+
+	hantrodec_data.cores = 0;
+	hantrodec_data.iosize[0] = DEC_IO_SIZE_0;
+	hantrodec_data.irq[0] = -1;
+	hantrodec_data.iosize[1] = DEC_IO_SIZE_1;
+	hantrodec_data.irq[1] = -1;
+
+	for (i = 0; i < HXDEC_MAX_CORES; i++) {
+		hantrodec_data.hwregs[i] = 0;
+		/*
+		 * If the user gave fewer core bases than we have by
+		 * default, invalidate the default bases
+		 */
+		if (elements && i >= elements)
+			multicorebase[i] = -1;
+	}
+
+	hantrodec_data.async_queue_dec = NULL;
+	hantrodec_data.async_queue_pp = NULL;
+	/* Enable and set the VDEC clks */
+	hantro_clk_xin_vdec = clk_get(&pdev->dev, "clk_xin_vdec");
+	hantro_clk_vdec = clk_get(&pdev->dev, "clk_vdec");
+	hantro_clk_enable();
+	/* Set the Keem Bay VDEC clock to 700 MHz */
+	pr_info("hantrodec: Before setting any clocks: clk_xin_vdec: %ld | clk_vdec %ld\n",
+		clk_get_rate(hantro_clk_xin_vdec),
+		clk_get_rate(hantro_clk_vdec));
+	clk_set_rate(hantro_clk_xin_vdec, 700000000);
+	pr_info("hantrodec: Set clocks to 700Mhz: clk_xin_vdec: %ld | clk_vdec %ld\n",
+		clk_get_rate(hantro_clk_xin_vdec),
+		clk_get_rate(hantro_clk_vdec));
+
+	result = reserve_io();
+
+	if (result < 0)
+		goto err;
+
+	memset(dec_owner, 0, sizeof(dec_owner));
+	memset(pp_owner, 0, sizeof(pp_owner));
+	sema_init(&dec_core_sem, hantrodec_data.cores);
+	sema_init(&pp_core_sem, 1);
+
+	/* read configuration for all cores */
+	read_core_config(&hantrodec_data);
+	/* reset hardware */
+	reset_asic(&hantrodec_data);
+
+	/* Dynamic AXI ID and Page LUT routines */
+	/* Register and set the page lookup table for read */
+	if (!request_mem_region(page_lut_read, hantrodec_data.iosize[0],
+				"hantrodec_pagelut_read")) {
+		pr_info("hantrodec: failed to reserve page lookup table registers\n");
+		return -EBUSY;
+	}
+	page_lut_regs_read =
+		(u8 *)ioremap(page_lut_read, hantrodec_data.iosize[0]);
+	if (!page_lut_regs_read) {
+		pr_info("hantrodec: failed to ioremap page lookup table registers\n");
+		release_mem_region(page_lut_read, hantrodec_data.iosize[0]);
+		return -EBUSY;
+	}
+
+	/* Set VDEC RD Page LUT AXI ID 0-15 to 0x4 */
+	iowrite32(0x04040404, (void *)page_lut_regs_read);
+	pr_info("hantrodec: RD AXI ID 3:0 = %x\n",
+		ioread32((void *)page_lut_regs_read));
+	iowrite32(0x04040404, (void *)page_lut_regs_read + 0x4);
+	pr_info("hantrodec: RD AXI ID 7:4 = %x\n",
+		ioread32((void *)page_lut_regs_read + 0x4));
+	iowrite32(0x04040404, (void *)page_lut_regs_read + 0x8);
+	pr_info("hantrodec: RD AXI ID 11:8 = %x\n",
+		ioread32((void *)page_lut_regs_read + 0x8));
+	iowrite32(0x04040404, (void *)page_lut_regs_read + 0xc);
+	pr_info("hantrodec: RD AXI ID 15:12 = %x\n",
+		ioread32((void *)page_lut_regs_read + 0xc));
+
+	/* dynamic WR AXI ID */
+	/* Set sw_dec_axi_wr_id_e to 1 */
+	iowrite32(1 << 13, (void *)hantrodec_data.hwregs[0] + 0xE8);
+	pr_info("hantrodec: sw_dec_axi_wr_id_e  = %x\n",
+		ioread32((void *)hantrodec_data.hwregs[0] + 0xE8));
+	/*
+	 * Set WR Page LUT AXI ID 0-3, 6-15 to 0x4 and
+	 * WR Page LUT AXI ID 4,5 to 0x0
+	 */
+	iowrite32(0x04040404, (void *)page_lut_regs_read + 0x10);
+	pr_info("hantrodec: page_lut_regs WR AXI ID 3:0= %x\n",
+		ioread32((void *)page_lut_regs_read + 0x10));
+	iowrite32(0x04040000, (void *)page_lut_regs_read + 0x14);
+	pr_info("hantrodec: page_lut_regs WR AXI ID 7:4= %x\n",
+		ioread32((void *)page_lut_regs_read + 0x14));
+	iowrite32(0x04040404, (void *)page_lut_regs_read + 0x18);
+	pr_info("hantrodec: page_lut_regs WR AXI ID 11:8= %x\n",
+		ioread32((void *)page_lut_regs_read + 0x18));
+	iowrite32(0x04040404, (void *)page_lut_regs_read + 0x1c);
+	pr_info("hantrodec: page_lut_regs WR AXI ID 15:12= %x\n",
+		ioread32((void *)page_lut_regs_read + 0x1c));
+
+	/* register irq for each core */
+	irq_0 = irq[0];
+	if (irq_0 > 0) {
+		PDEBUG("irq_0 platform_get_irq\n");
+		irq_0 = platform_get_irq_byname(pdev, "irq_hantro_decoder");
+		result = request_irq(irq_0, hantrodec_isr, IRQF_SHARED,
+				     "irq_hantro_decoder",
+				     (void *)&hantrodec_data);
+		if (result != 0) {
+			PDEBUG("can't reserve irq0\n");
+			goto err0;
+		}
+		PDEBUG("reserve irq0 success with irq0 = %d\n", irq_0);
+		hantrodec_data.irq[0] = irq_0;
+	} else {
+		PDEBUG("can't get irq0 and irq0 value = %d\n", irq_0);
+		result = -EINVAL;
+		goto err0;
+	}
+
+	pr_info("hantrodec: module inserted.\n");
+
+	return 0;
+err0:
+	release_io();
+
+err:
+	return result;
+}
+
+/*---------------------------------------------------------------------------
+ *Function name   : hantrodec_cleanup
+ *Description     : clean up
+ *
+ *Return type     : int
+ *---------------------------------------------------------------------------
+ */
+void hantrodec_cleanup(void)
+{
+	struct hantrodec_t *dev = &hantrodec_data;
+	int n = 0;
+	/* reset hardware */
+	reset_asic(dev);
+
+	/* free the IRQ */
+	for (n = 0; n < dev->cores; n++) {
+		if (dev->irq[n] != -1)
+			free_irq(dev->irq[n], (void *)dev);
+	}
+
+	release_io();
+	hantro_clk_disable();
+	pr_info("hantrodec: module removed\n");
+}
+
+/*---------------------------------------------------------------------------
+ *Function name   : check_hw_id
+ *Return type     : int
+ *---------------------------------------------------------------------------
+ */
+static int check_hw_id(struct hantrodec_t *dev)
+{
+	long hwid;
+	int i;
+	size_t num_hw = ARRAY_SIZE(dec_hwid);
+
+	int found = 0;
+
+	for (i = 0; i < dev->cores; i++) {
+		if (dev->hwregs[i]) {
+			hwid = readl(dev->hwregs[i]);
+			PDEBUG("hantrodec: core %d HW ID=0x%16lx\n", i, hwid);
+			hwid = (hwid >> 16) & 0xFFFF;
+
+			while (num_hw--) {
+				if (hwid == dec_hwid[num_hw]) {
+					PDEBUG("hantrodec: Supported HW found at 0x%16lx\n",
+					       multicorebase_actual[i]);
+					found++;
+					dev->hw_id[i] = hwid;
+					break;
+				}
+			}
+			if (!found) {
+				PDEBUG("hantrodec: Unknown HW found at 0x%16lx\n",
+				       multicorebase_actual[i]);
+				return 0;
+			}
+			found = 0;
+			num_hw = ARRAY_SIZE(dec_hwid);
+		}
+	}
+
+	return 1;
+}
+
+/*---------------------------------------------------------------------------
+ *Function name   : reserve_io
+ *Description     : IO reserve
+ *
+ *Return type     : int
+ *---------------------------------------------------------------------------
+ */
+static int reserve_io(void)
+{
+	int i;
+	long hwid;
+	u32 reg;
+
+	memcpy(multicorebase_actual, multicorebase,
+	       HXDEC_MAX_CORES * sizeof(unsigned long));
+	memcpy((unsigned int *)(hantrodec_data.iosize), iosize,
+	       HXDEC_MAX_CORES * sizeof(unsigned int));
+	memcpy((unsigned int *)hantrodec_data.irq, irq,
+	       HXDEC_MAX_CORES * sizeof(int));
+
+	for (i = 0; i < HXDEC_MAX_CORES; i++) {
+		if (multicorebase_actual[i] != -1) {
+			if (!request_mem_region(multicorebase_actual[i],
+						hantrodec_data.iosize[i],
+						"hantrodec0")) {
+				PDEBUG("hantrodec: failed to reserve HW regs\n");
+				return -EBUSY;
+			}
+
+			hantrodec_data.hwregs[i] =
+				(u8 *)ioremap(multicorebase_actual[i],
+						      hantrodec_data.iosize[i]);
+
+			if (!hantrodec_data.hwregs[i]) {
+				PDEBUG("hantrodec: failed to ioremap HW regs\n");
+				release_io();
+				return -EBUSY;
+			}
+
+			hantrodec_data.cores++;
+			config.its_main_core_id[i] = -1;
+			config.its_aux_core_id[i] = -1;
+			hwid = ((readl(hantrodec_data.hwregs[i])) >> 16) &
+			       0xFFFF;
+
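+			/*
+			 * A VC8000D that reports an H264 or JPEG
+			 * post-processing pipeline in SYNTH_CFG_2
+			 * exposes an auxiliary core at base + 0x800:
+			 * map it as an extra core and link the
+			 * main/aux core ids in the config table.
+			 */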
+			if (IS_VC8000D(hwid)) {
+				reg = readl(hantrodec_data.hwregs[i] +
+					    HANTRODEC_SYNTH_CFG_2_OFF);
+				if (((reg >> DWL_H264_PIPELINE_E) & 0x01U) ||
+				    ((reg >> DWL_JPEG_PIPELINE_E) & 0x01U)) {
+					i++;
+					config.its_aux_core_id[i - 1] = i;
+					config.its_main_core_id[i] = i - 1;
+					config.its_aux_core_id[i] = -1;
+					multicorebase_actual[i] =
+						multicorebase_actual[i - 1] +
+						0x800;
+					hantrodec_data.iosize[i] =
+						hantrodec_data.iosize[i - 1];
+					memcpy(multicorebase_actual + i + 1,
+					       multicorebase + i,
+					       (HXDEC_MAX_CORES - i - 1) *
+						       sizeof(unsigned long));
+					memcpy((unsigned int *)(hantrodec_data
+									.iosize +
+								i + 1),
+					       iosize + i,
+					       (HXDEC_MAX_CORES - i - 1) *
+						       sizeof(unsigned int));
+					if (!request_mem_region
+					    (multicorebase_actual[i],
+					     hantrodec_data.iosize[i],
+						    "hantrodec0")) {
+						PDEBUG("hantrodec: failed to reserve HW regs\n");
+						return -EBUSY;
+					}
+
+					hantrodec_data.hwregs[i] =
+						(u8 *)ioremap
+							(multicorebase_actual[i],
+							 hantrodec_data
+								.iosize[i]);
+
+					if (!hantrodec_data.hwregs[i]) {
+						PDEBUG("hantrodec: failed to ioremap HW regs\n");
+						release_io();
+						return -EBUSY;
+					}
+					hantrodec_data.cores++;
+				}
+			}
+		}
+	}
+
+	/* check for correct HW */
+	if (!check_hw_id(&hantrodec_data)) {
+		release_io();
+		return -EBUSY;
+	}
+
+	return 0;
+}
+
+/*---------------------------------------------------------------------------
+ *Function name   : release_io
+ *Description     : release
+ *
+ *Return type     : void
+ *---------------------------------------------------------------------------
+ */
+static void release_io(void)
+{
+	int i;
+
+	for (i = 0; i < hantrodec_data.cores; i++) {
+		if (hantrodec_data.hwregs[i])
+			iounmap((void *)hantrodec_data.hwregs[i]);
+		release_mem_region(multicorebase_actual[i],
+				   hantrodec_data.iosize[i]);
+	}
+
+	if (page_lut_regs_read)
+		iounmap((void *)page_lut_regs_read);
+	release_mem_region(page_lut_read, hantrodec_data.iosize[0]);
+}
+
+/*---------------------------------------------------------------------------
+ *Function name   : hantrodec_isr
+ *Description     : interrupt handler
+ *
+ *Return type     : irqreturn_t
+ *---------------------------------------------------------------------------
+ */
+static irqreturn_t hantrodec_isr(int irq, void *dev_id)
+{
+	unsigned long flags;
+	unsigned int handled = 0;
+	int i;
+
+	struct hantrodec_t *dev;
+	u32 irq_status_dec;
+
+	dev = (struct hantrodec_t *)dev_id;
+	spin_lock_irqsave(&owner_lock, flags);
+
+	for (i = 0; i < dev->cores; i++) {
+		u8 *hwregs = dev->hwregs[i];
+
+		/* interrupt status register read */
+		irq_status_dec =
+			ioread32((void *)hwregs + HANTRODEC_IRQ_STAT_DEC_OFF);
+		PDEBUG("%d core irq = %x\n", i, irq_status_dec);
+		if (irq_status_dec & HANTRODEC_DEC_IRQ) {
+			/* clear dec IRQ */
+			irq_status_dec &= (~HANTRODEC_DEC_IRQ);
+			iowrite32(irq_status_dec,
+				  (void *)hwregs + HANTRODEC_IRQ_STAT_DEC_OFF);
+
+			PDEBUG("decoder IRQ received! core %d\n", i);
+
+			atomic_inc(&irq_rx);
+
+			dec_irq |= (1 << i);
+
+			wake_up_interruptible_all(&dec_wait_queue);
+			handled++;
+		}
+	}
+
+	spin_unlock_irqrestore(&owner_lock, flags);
+	if (!handled)
+		PDEBUG("IRQ received, but not hantrodec's!\n");
+
+	return IRQ_RETVAL(handled);
+}
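+
+/*
+ * Note on the IRQ flow (descriptive only): the ISR acknowledges the
+ * decode interrupt in the hardware status register, sets the per-core
+ * bit in dec_irq under owner_lock and wakes dec_wait_queue; the bit is
+ * consumed later by check_dec_irq()/check_core_irq() on the waiting
+ * ioctl path.
+ */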
+
+/*---------------------------------------------------------------------------
+ *Function name   : reset_asic
+ *Description     : reset asic
+ *
+ *Return type     :
+ *---------------------------------------------------------------------------
+ */
+static void reset_asic(struct hantrodec_t *dev)
+{
+	int i, j;
+	u32 status;
+
+	for (j = 0; j < dev->cores; j++) {
+		status = ioread32((void *)dev->hwregs[j] +
+				  HANTRODEC_IRQ_STAT_DEC_OFF);
+
+		if (status & HANTRODEC_DEC_E) {
+			/* abort with IRQ disabled */
+			status =
+				HANTRODEC_DEC_ABORT | HANTRODEC_DEC_IRQ_DISABLE;
+			iowrite32(status, (void *)dev->hwregs[j] +
+						  HANTRODEC_IRQ_STAT_DEC_OFF);
+		}
+
+		for (i = 4; i < dev->iosize[j]; i += 4)
+			iowrite32(0, (void *)dev->hwregs[j] + i);
+	}
+}
diff --git a/drivers/gpu/drm/hantro/hantro_dec.h b/drivers/gpu/drm/hantro/hantro_dec.h
new file mode 100644
index 0000000..c0f7573
--- /dev/null
+++ b/drivers/gpu/drm/hantro/hantro_dec.h
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ *    Hantro decoder hardware driver header file.
+ *
+ *    Copyright (c) 2017 - 2020, VeriSilicon Inc.
+ *    Copyright (c) 2020 Intel Corporation
+ */
+
+#ifndef _HANTRODEC_H_
+#define _HANTRODEC_H_
+#include <linux/ioctl.h>
+#include <linux/types.h>
+#include "hantro_drm.h"
+
+#undef PDEBUG
+#ifdef HANTRODEC_DEBUG
+#ifdef __KERNEL__
+#define PDEBUG(fmt, args...) pr_info("hantrodec: " fmt, ##args)
+#else
+#define PDEBUG(fmt, args...) fprintf(stderr, fmt, ##args)
+#endif
+#else
+#define PDEBUG(fmt, args...)
+#endif
+
+/* hantro PP regs */
+#define HANTRO_PP_ORG_REGS              41
+#define HANTRO_PP_ORG_FIRST_REG         60
+#define HANTRO_PP_ORG_LAST_REG          100
+#define HANTRO_PP_EXT_FIRST_REG         146
+#define HANTRO_PP_EXT_LAST_REG          154
+
+/* hantro VC8000D reg config */
+#define HANTRO_VC8000D_REGS             342 /* VC8000D total regs */
+#define HANTRO_VC8000D_FIRST_REG        0
+#define HANTRO_VC8000D_LAST_REG         (HANTRO_VC8000D_REGS - 1)
+#define HANTRO_VC8KD_REG_BWREAD         300
+#define HANTRO_VC8KD_REG_BWWRITE        301
+#define VC8KD_BURSTWIDTH                16
+#define DEC_IO_SIZE_MAX                 (HANTRO_VC8000D_REGS * 4)
+#define HXDEC_MAX_CORES                 2
+
+/* Logic module base address */
+#define SOCLE_LOGIC_0_BASE              0x20888000
+#define SOCLE_LOGIC_1_BASE              0x20888800
+#define DEC_IO_SIZE_0                   DEC_IO_SIZE_MAX /* bytes */
+#define DEC_IO_SIZE_1                   DEC_IO_SIZE_MAX /* bytes */
+#define DEC_IRQ_0                       138
+#define DEC_IRQ_1                       138
+#define KMB_VC8000D_PAGE_LUT            0x20889000
+#define IS_VC8000D(hw_id)               (((hw_id) == 0x8001) ? 1 : 0)
+
+int hantrodec_init(struct platform_device *pdev);
+void hantrodec_cleanup(void);
+long hantrodec_ioctl(struct file *filp, unsigned int cmd, unsigned long arg);
+u32 *hantrodec_get_reg_addr(u32 coreid, u32 regid);
+int hantrodec_open(struct inode *inode, struct file *filp);
+u32 hantrodec_readbandwidth(int is_read_bw);
+#endif /* !_HANTRODEC_H_ */
diff --git a/drivers/gpu/drm/hantro/hantro_dwl_defs.h b/drivers/gpu/drm/hantro/hantro_dwl_defs.h
new file mode 100644
index 0000000..4411c62
--- /dev/null
+++ b/drivers/gpu/drm/hantro/hantro_dwl_defs.h
@@ -0,0 +1,101 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ *    Hantro driver hardware register definition.
+ *
+ *    Copyright (c) 2017 - 2020, VeriSilicon Inc.
+ *    Copyright (c) 2020 Intel Corporation
+ */
+
+#ifndef SOFTWARE_LINUX_DWL_DWL_DEFS_H_
+#define SOFTWARE_LINUX_DWL_DWL_DEFS_H_
+
+#define DWL_CLIENT_TYPE_H264_DEC        1U
+#define DWL_CLIENT_TYPE_MPEG4_DEC       2U
+#define DWL_CLIENT_TYPE_JPEG_DEC        3U
+#define DWL_CLIENT_TYPE_PP              4U
+#define DWL_CLIENT_TYPE_VC1_DEC         5U
+#define DWL_CLIENT_TYPE_MPEG2_DEC       6U
+#define DWL_CLIENT_TYPE_VP6_DEC         7U
+#define DWL_CLIENT_TYPE_AVS_DEC         8U
+#define DWL_CLIENT_TYPE_RV_DEC          9U
+#define DWL_CLIENT_TYPE_VP8_DEC         10U
+#define DWL_CLIENT_TYPE_VP9_DEC         11U
+#define DWL_CLIENT_TYPE_HEVC_DEC        12U
+
+#define DWL_MPEG2_E			31 /* 1 bit  */
+#define DWL_VC1_E			29 /* 2 bits */
+#define DWL_JPEG_E			28 /* 1 bit  */
+#define DWL_MPEG4_E			26 /* 2 bits */
+#define DWL_H264_E			24 /* 2 bits */
+#define DWL_VP6_E			23 /* 1 bit  */
+#define DWL_RV_E			26 /* 2 bits */
+#define DWL_VP8_E			23 /* 1 bit  */
+#define DWL_VP7_E			24 /* 1 bit  */
+#define DWL_WEBP_E			19 /* 1 bit  */
+#define DWL_AVS_E			22 /* 1 bit  */
+#define DWL_G1_PP_E			16 /* 1 bit  */
+#define DWL_G2_PP_E			31 /* 1 bit  */
+#define DWL_PP_E			31 /* 1 bit  */
+#define DWL_HEVC_E			26 /* 3 bits */
+#define DWL_VP9_E			29 /* 3 bits */
+
+#define DWL_H264_PIPELINE_E		31 /* 1 bit */
+#define DWL_JPEG_PIPELINE_E		30 /* 1 bit */
+
+#define DWL_G2_HEVC_E			0 /* 1 bit  */
+#define DWL_G2_VP9_E			1 /* 1 bit  */
+#define DWL_G2_RFC_E			2 /* 1 bit  */
+#define DWL_RFC_E			17 /* 2 bits */
+#define DWL_G2_DS_E			3 /* 1 bit  */
+#define DWL_DS_E			28 /* 3 bits */
+#define DWL_HEVC_VER			8 /* 4 bits */
+#define DWL_VP9_PROFILE			12 /* 3 bits */
+#define DWL_RING_E			16 /* 1 bit  */
+
+#define HANTRODEC_IRQ_STAT_DEC		1
+#define HANTRODEC_IRQ_STAT_DEC_OFF	(HANTRODEC_IRQ_STAT_DEC * 4)
+
+#define HANTRODECPP_SYNTH_CFG		60
+#define HANTRODECPP_SYNTH_CFG_OFF	(HANTRODECPP_SYNTH_CFG * 4)
+#define HANTRODEC_SYNTH_CFG		50
+#define HANTRODEC_SYNTH_CFG_OFF		(HANTRODEC_SYNTH_CFG * 4)
+#define HANTRODEC_SYNTH_CFG_2		54
+#define HANTRODEC_SYNTH_CFG_2_OFF	(HANTRODEC_SYNTH_CFG_2 * 4)
+#define HANTRODEC_SYNTH_CFG_3		56
+#define HANTRODEC_SYNTH_CFG_3_OFF	(HANTRODEC_SYNTH_CFG_3 * 4)
+#define HANTRODEC_CFG_STAT		23
+#define HANTRODEC_CFG_STAT_OFF		(HANTRODEC_CFG_STAT * 4)
+#define HANTRODECPP_CFG_STAT		260
+#define HANTRODECPP_CFG_STAT_OFF	(HANTRODECPP_CFG_STAT * 4)
+/* VC8000D HW build id */
+#define HANTRODEC_HW_BUILD_ID		309
+#define HANTRODEC_HW_BUILD_ID_OFF	(HANTRODEC_HW_BUILD_ID * 4)
+
+#define HANTRODEC_DEC_E			0x01
+#define HANTRODEC_PP_E			0x01
+#define HANTRODEC_DEC_ABORT		0x20
+#define HANTRODEC_DEC_IRQ_DISABLE	0x10
+#define HANTRODEC_DEC_IRQ		0x100
+
+/* Legacy from G1 */
+#define HANTRO_IRQ_STAT_DEC		1
+#define HANTRO_IRQ_STAT_DEC_OFF		(HANTRO_IRQ_STAT_DEC * 4)
+#define HANTRO_IRQ_STAT_PP		60
+#define HANTRO_IRQ_STAT_PP_OFF		(HANTRO_IRQ_STAT_PP * 4)
+
+#define HANTROPP_SYNTH_CFG		100
+#define HANTROPP_SYNTH_CFG_OFF		(HANTROPP_SYNTH_CFG * 4)
+
+#define HANTRO_DEC_E			0x01
+#define HANTRO_PP_E			0x01
+#define HANTRO_DEC_ABORT		0x20
+#define HANTRO_DEC_IRQ_DISABLE		0x10
+#define HANTRO_PP_IRQ_DISABLE		0x10
+#define HANTRO_DEC_IRQ			0x100
+#define HANTRO_PP_IRQ			0x100
+
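+/*
+ * Example (illustrative): the register indices above convert to byte
+ * offsets as index * 4, e.g. reading the VC8000D HW build id from a
+ * mapped core:
+ *
+ *	u32 build_id = ioread32((void *)hwregs +
+ *				HANTRODEC_HW_BUILD_ID_OFF);
+ */
+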
+#endif /* SOFTWARE_LINUX_DWL_DWL_DEFS_H_ */
-- 
1.9.1


* [PATCH v1 4/4] drm: hantro: Keem Bay VPU DRM build files
  2020-10-09 11:57 [PATCH v1 0/4] Add support for Keem Bay VPU DRM driver kuhanh.murugasen.krishnan
                   ` (2 preceding siblings ...)
  2020-10-09 11:57 ` [PATCH v1 3/4] drm: hantro: Keem Bay VPU DRM decoder kuhanh.murugasen.krishnan
@ 2020-10-09 11:57 ` kuhanh.murugasen.krishnan
  3 siblings, 0 replies; 7+ messages in thread
From: kuhanh.murugasen.krishnan @ 2020-10-09 11:57 UTC (permalink / raw)
  To: dri-devel

From: "Murugasen Krishnan, Kuhanh" <kuhanh.murugasen.krishnan@intel.com>

Signed-off-by: Murugasen Krishnan, Kuhanh <kuhanh.murugasen.krishnan@intel.com>
---
 drivers/gpu/drm/Kconfig         |  2 ++
 drivers/gpu/drm/Makefile        |  1 +
 drivers/gpu/drm/hantro/Kconfig  | 21 +++++++++++++++++++++
 drivers/gpu/drm/hantro/Makefile |  6 ++++++
 4 files changed, 30 insertions(+)
 create mode 100644 drivers/gpu/drm/hantro/Kconfig
 create mode 100644 drivers/gpu/drm/hantro/Makefile

diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 147d61b..723aa68 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -275,6 +275,8 @@ source "drivers/gpu/drm/nouveau/Kconfig"
 
 source "drivers/gpu/drm/i915/Kconfig"
 
+source "drivers/gpu/drm/hantro/Kconfig"
+
 config DRM_VGEM
 	tristate "Virtual GEM provider"
 	depends on DRM
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index 2f31579..d79d1fc 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -71,6 +71,7 @@ obj-$(CONFIG_DRM_AMDGPU)+= amd/amdgpu/
 obj-$(CONFIG_DRM_MGA)	+= mga/
 obj-$(CONFIG_DRM_I810)	+= i810/
 obj-$(CONFIG_DRM_I915)	+= i915/
+obj-$(CONFIG_DRM_HANTRO) += hantro/
 obj-$(CONFIG_DRM_MGAG200) += mgag200/
 obj-$(CONFIG_DRM_V3D)  += v3d/
 obj-$(CONFIG_DRM_VC4)  += vc4/
diff --git a/drivers/gpu/drm/hantro/Kconfig b/drivers/gpu/drm/hantro/Kconfig
new file mode 100644
index 0000000..cbf6d99
--- /dev/null
+++ b/drivers/gpu/drm/hantro/Kconfig
@@ -0,0 +1,21 @@
+# SPDX-License-Identifier: GPL-2.0-only
+config DRM_HANTRO
+	tristate "Hantro DRM"
+	depends on DRM
+	depends on ARM64
+	select DRM_PANEL
+	select DRM_KMS_HELPER
+	help
+	  Choose this option if you have a system that has "Keem
+	  Bay VPU" hardware which supports Verisilicon's Hantro
+	  Video Processor Unit (VPU) IP, a series of video decoder
+	  and encoder semiconductor IP cores which can be flexibly
+	  configured for video surveillance, multimedia consumer
+	  products, Internet of Things, cloud service products, data
+	  centers, aerial photography and recorders, thereby providing
+	  video transcoding and multi-channel HD video encoding and
+	  decoding.
+
+	  Hantro VC8000D allows 4K decoding that supports H264 and HEVC
+	  video formats. Hantro VC8000E allows 4K encoding that supports
+	  H264 and HEVC video formats.
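+
+# Example .config fragment (illustrative): enable the driver as a module
+# on a DRM-enabled arm64 kernel:
+#   CONFIG_DRM=y
+#   CONFIG_DRM_HANTRO=m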
diff --git a/drivers/gpu/drm/hantro/Makefile b/drivers/gpu/drm/hantro/Makefile
new file mode 100644
index 0000000..ade521e7
--- /dev/null
+++ b/drivers/gpu/drm/hantro/Makefile
@@ -0,0 +1,6 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for the Hantro DRM media codec driver. This driver provides
+# support for Keem Bay Hantro VPU IP.
+hantro-objs := hantro_drm.o hantro_enc.o hantro_dec.o hantro_fence.o
+obj-$(CONFIG_DRM_HANTRO) += hantro.o
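+#
+# Example (illustrative): build only this driver as modules against an
+# already-configured kernel tree:
+#   make M=drivers/gpu/drm/hantro modules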
-- 
1.9.1


* Re: [PATCH v1 1/4] drm: Add Keem Bay VPU codec DRM
  2020-10-09 11:57 ` [PATCH v1 1/4] drm: Add Keem Bay VPU codec DRM kuhanh.murugasen.krishnan
@ 2020-10-09 22:15   ` Daniel Vetter
  2020-10-10  8:20     ` Ezequiel Garcia
  0 siblings, 1 reply; 7+ messages in thread
From: Daniel Vetter @ 2020-10-09 22:15 UTC (permalink / raw)
  To: kuhanh.murugasen.krishnan; +Cc: Greg KH, mgross, dri-devel

On Fri, Oct 09, 2020 at 07:57:52PM +0800, kuhanh.murugasen.krishnan@intel.com wrote:
> From: "Murugasen Krishnan, Kuhanh" <kuhanh.murugasen.krishnan@intel.com>
> 
> This is a new DRM media codec driver for Intel's Keem Bay SOC which
> integrates the Verisilicon's Hantro Video Processor Unit (VPU) IP.
> The SoC couples an ARM Cortex A53 CPU with an Intel Movidius VPU.
> 
> Hantro VPU IP is a series of video decoder and encoder semiconductor IP cores,
> which can be flexibly configured for video surveillance, multimedia consumer
> products, Internet of Things, cloud service products, data centers, aerial
> photography and recorders, thereby providing video transcoding and multi-channel
> HD video encoding and decoding.
> 
> Hantro VPU IP consists of Hantro VC8000D for decoder and Hantro VC8000E for encoder.
> 
> Signed-off-by: Murugasen Krishnan, Kuhanh <kuhanh.murugasen.krishnan@intel.com>
> Acked-by: Mark, Gross <mgross@linux.intel.com>

So there's this intel-internal pre-approval review process thing going on,
and apparently it's utterly useless.

This driver here happily copypastes like half of drm. This is not how
upstream development is done.

The other issue, and that's the same as with the other kmb drm driver:
Doesn't start out with dt changes and schema, in the same patch series as
the driver itself. In the case of kmb display driver this means it's at
v9, and only now did we discover that the architecture is (maybe, verdict
by dt/armsoc people is still pending) is all wrong.

And finally you might have picked the wrong subsystem; proposing the first
media codec for drm is certainly bold, but not entirely out of line.

I'm not sure what the goal here is, but you might want to send a mail to
my intel address first before we proceed here.
-Daniel

> ---
>  drivers/gpu/drm/hantro/hantro_drm.c   | 1673 +++++++++++++++++++++++++++++++++
>  drivers/gpu/drm/hantro/hantro_drm.h   |  208 ++++
>  drivers/gpu/drm/hantro/hantro_fence.c |  284 ++++++
>  drivers/gpu/drm/hantro/hantro_priv.h  |  106 +++
>  4 files changed, 2271 insertions(+)
>  create mode 100644 drivers/gpu/drm/hantro/hantro_drm.c
>  create mode 100644 drivers/gpu/drm/hantro/hantro_drm.h
>  create mode 100644 drivers/gpu/drm/hantro/hantro_fence.c
>  create mode 100644 drivers/gpu/drm/hantro/hantro_priv.h
> 
> diff --git a/drivers/gpu/drm/hantro/hantro_drm.c b/drivers/gpu/drm/hantro/hantro_drm.c
> new file mode 100644
> index 0000000..50ccddf
> --- /dev/null
> +++ b/drivers/gpu/drm/hantro/hantro_drm.c
> @@ -0,0 +1,1673 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + *    Hantro driver main DRM file
> + *
> + *    Copyright (c) 2017 - 2020, VeriSilicon Inc.
> + *    Copyright (c) 2020 Intel Corporation
> + */
> +
> +#include <linux/io.h>
> +#include <linux/sched.h>
> +#include <linux/uaccess.h>
> +#include <linux/errno.h>
> +#include <linux/fs.h>
> +#include <linux/init.h>
> +#include <linux/ioport.h>
> +#include <linux/kernel.h>
> +#include <linux/list.h>
> +#include <linux/mm.h>
> +#include <linux/shmem_fs.h>
> +#include <linux/module.h>
> +#include <linux/slab.h>
> +#include <linux/vmalloc.h>
> +#include <linux/dma-contiguous.h>
> +#include <drm/drm_modeset_helper.h>
> +/* hantro header */
> +#include "hantro_priv.h"
> +#include "hantro_enc.h"
> +#include "hantro_dec.h"
> +/* for dynamic ddr */
> +#include <linux/dma-mapping.h>
> +#include <linux/of_fdt.h>
> +#include <linux/of.h>
> +#include <linux/of_address.h>
> +#include <linux/of_device.h>
> +#include <linux/of_reserved_mem.h>
> +#include <linux/cma.h>
> +
> +struct hantro_device_handle hantro_dev;
> +
> +/* struct used for dynamic ddr allocations */
> +struct hantro_mem ddr1;
> +struct device *ddr_dev;
> +
> +static u32 hantro_vblank_no_hw_counter(struct drm_device *dev,
> +				       unsigned int pipe)
> +{
> +	return 0;
> +}
> +
> +static int hantro_recordmem(struct drm_file *priv, void *obj, int size)
> +{
> +	int ret;
> +	struct idr *list = (struct idr *)priv->driver_priv;
> +
> +	ret = idr_alloc(list, obj, 1, 0, GFP_KERNEL);
> +
> +	return (ret > 0 ? 0 : -ENOMEM);
> +}
> +
> +static void hantro_unrecordmem(struct drm_file *priv, void *obj)
> +{
> +	int id;
> +	struct idr *list = (struct idr *)priv->driver_priv;
> +	void *gemobj;
> +
> +	idr_for_each_entry(list, gemobj, id) {
> +		if (gemobj == obj) {
> +			idr_remove(list, id);
> +			break;
> +		}
> +	}
> +}
> +
> +static void hantro_drm_fb_destroy(struct drm_framebuffer *fb)
> +{
> +	struct hantro_drm_fb *vsi_fb = (struct hantro_drm_fb *)fb;
> +	int i;
> +
> +	for (i = 0; i < 4; i++)
> +		hantro_unref_drmobj(vsi_fb->obj[i]);
> +
> +	drm_framebuffer_cleanup(fb);
> +	kfree(vsi_fb);
> +}
> +
> +static int hantro_drm_fb_create_handle(struct drm_framebuffer *fb,
> +				       struct drm_file *file_priv,
> +				       unsigned int *handle)
> +{
> +	struct hantro_drm_fb *vsi_fb = (struct hantro_drm_fb *)fb;
> +
> +	return drm_gem_handle_create(file_priv, vsi_fb->obj[0], handle);
> +}
> +
> +static int hantro_drm_fb_dirty(struct drm_framebuffer *fb,
> +			       struct drm_file *file, unsigned int flags,
> +			       unsigned int color, struct drm_clip_rect *clips,
> +			       unsigned int num_clips)
> +{
> +	return 0;
> +}
> +
> +static const struct drm_framebuffer_funcs hantro_drm_fb_funcs = {
> +	.destroy = hantro_drm_fb_destroy,
> +	.create_handle = hantro_drm_fb_create_handle,
> +	.dirty = hantro_drm_fb_dirty,
> +};
> +
> +static int hantro_gem_dumb_create_internal(struct drm_file *file_priv,
> +					   struct drm_device *dev,
> +					   struct drm_mode_create_dumb *args)
> +{
> +	int ret = 0;
> +	int in_size, out_size;
> +	struct drm_gem_hantro_object *cma_obj;
> +	int min_pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
> +	struct drm_gem_object *obj;
> +
> +	if (mutex_lock_interruptible(&dev->struct_mutex))
> +		return -EBUSY;
> +	cma_obj = kzalloc(sizeof(*cma_obj), GFP_KERNEL);
> +	if (!cma_obj) {
> +		ret = -ENOMEM;
> +		goto out;
> +	}
> +	obj = &cma_obj->base;
> +	out_size = sizeof(*args);
> +	in_size = sizeof(*args);
> +	args->pitch = ALIGN(min_pitch, 64);
> +	args->size = (__u64)args->pitch * (__u64)args->height;
> +	args->size = (args->size + PAGE_SIZE - 1) / PAGE_SIZE * PAGE_SIZE;
> +
> +	cma_obj->num_pages = args->size >> PAGE_SHIFT;
> +	cma_obj->flag = 0;
> +	cma_obj->pageaddr = NULL;
> +	cma_obj->pages = NULL;
> +	cma_obj->vaddr = NULL;
> +
> +	if (args->handle == DDR0_CHANNEL) {
> +		ddr_dev = dev->dev;
> +		cma_obj->ddr_channel = DDR0_CHANNEL;
> +	} else if (args->handle == DDR1_CHANNEL) {
> +		ddr_dev = ddr1.dev;
> +		cma_obj->ddr_channel = DDR1_CHANNEL;
> +	}
> +	cma_obj->vaddr = dma_alloc_coherent(ddr_dev, args->size,
> +					    &cma_obj->paddr, GFP_KERNEL | GFP_DMA);
> +	if (!cma_obj->vaddr) {
> +		kfree(cma_obj);
> +		ret = -ENOMEM;
> +		goto out;
> +	}
> +
> +	drm_gem_object_init(dev, obj, args->size);
> +
> +	args->handle = 0;
> +	ret = drm_gem_handle_create(file_priv, obj, &args->handle);
> +	if (ret == 0)
> +		ret = hantro_recordmem(file_priv, cma_obj, args->size);
> +	if (ret) {
> +		dma_free_coherent(ddr_dev, args->size, cma_obj->vaddr,
> +				  cma_obj->paddr);
> +		kfree(cma_obj);
> +	}
> +	init_hantro_resv(&cma_obj->kresv, cma_obj);
> +	cma_obj->handle = args->handle;
> +out:
> +	mutex_unlock(&dev->struct_mutex);
> +
> +	return ret;
> +}
> +
> +static int hantro_gem_dumb_create(struct drm_device *dev, void *data,
> +				  struct drm_file *file_priv)
> +{
> +	return hantro_gem_dumb_create_internal(file_priv, dev,
> +					       (struct drm_mode_create_dumb *)data);
> +}
> +
> +static int hantro_gem_dumb_map_offset(struct drm_file *file_priv,
> +				      struct drm_device *dev, uint32_t handle,
> +				      uint64_t *offset)
> +{
> +	struct drm_gem_object *obj;
> +	int ret;
> +
> +	obj = hantro_gem_object_lookup(dev, file_priv, handle);
> +	if (!obj)
> +		return -EINVAL;
> +
> +	ret = drm_gem_create_mmap_offset(obj);
> +	if (ret == 0)
> +		*offset = drm_vma_node_offset_addr(&obj->vma_node);
> +	hantro_unref_drmobj(obj);
> +
> +	return ret;
> +}
> +
> +static int hantro_destroy_dumb(struct drm_device *dev, void *data,
> +			       struct drm_file *file_priv)
> +{
> +	struct drm_mode_destroy_dumb *args = data;
> +	struct drm_gem_object *obj;
> +	struct drm_gem_hantro_object *cma_obj;
> +
> +	if (mutex_lock_interruptible(&dev->struct_mutex))
> +		return -EBUSY;
> +	obj = hantro_gem_object_lookup(dev, file_priv, args->handle);
> +	if (!obj) {
> +		mutex_unlock(&dev->struct_mutex);
> +		return -EINVAL;
> +	}
> +	hantro_unref_drmobj(obj);
> +
> +	cma_obj = to_drm_gem_hantro_obj(obj);
> +	if ((cma_obj->flag & HANTRO_GEM_FLAG_IMPORT) == 0)
> +		hantro_unrecordmem(file_priv, cma_obj);
> +
> +	drm_gem_handle_delete(file_priv, args->handle);
> +	hantro_unref_drmobj(obj);
> +	mutex_unlock(&dev->struct_mutex);
> +
> +	return 0;
> +}
> +
> +static int hantro_release_dumb(struct drm_device *dev,
> +			       struct drm_file *file_priv, void *obj)
> +{
> +	struct drm_gem_object *gemobj = obj;
> +	struct drm_gem_hantro_object *cma_obj;
> +
> +	cma_obj = to_drm_gem_hantro_obj(gemobj);
> +
> +	drm_gem_free_mmap_offset(&cma_obj->base);
> +
> +	if (cma_obj->flag & HANTRO_GEM_FLAG_EXPORT) {
> +		drm_gem_handle_delete(file_priv, cma_obj->handle);
> +		hantro_unref_drmobj(obj);
> +		return 0;
> +	}
> +
> +	drm_gem_object_release(gemobj);
> +	drm_gem_handle_delete(file_priv, cma_obj->handle);
> +
> +	if (cma_obj->vaddr) {
> +		if (cma_obj->ddr_channel == DDR0_CHANNEL)
> +			ddr_dev = gemobj->dev->dev;
> +		else if (cma_obj->ddr_channel == DDR1_CHANNEL)
> +			ddr_dev = ddr1.dev;
> +		dma_free_coherent(ddr_dev, cma_obj->base.size, cma_obj->vaddr,
> +				  cma_obj->paddr);
> +	}
> +	dma_resv_fini(&cma_obj->kresv);
> +	kfree(cma_obj);
> +
> +	return 0;
> +}
> +
> +static int hantro_mmap(struct file *filp, struct vm_area_struct *vma)
> +{
> +	int ret = 0;
> +	struct drm_gem_object *obj = NULL;
> +	struct drm_gem_hantro_object *cma_obj;
> +	struct drm_vma_offset_node *node;
> +	unsigned long page_num = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
> +	unsigned long address = 0;
> +	int sgtidx = 0;
> +	struct scatterlist *pscatter = NULL;
> +	struct page **pages = NULL;
> +
> +	if (mutex_lock_interruptible(&hantro_dev.drm_dev->struct_mutex))
> +		return -EBUSY;
> +	drm_vma_offset_lock_lookup(hantro_dev.drm_dev->vma_offset_manager);
> +	node = drm_vma_offset_exact_lookup_locked(hantro_dev.drm_dev->vma_offset_manager,
> +						  vma->vm_pgoff, vma_pages(vma));
> +
> +	if (likely(node)) {
> +		obj = container_of(node, struct drm_gem_object, vma_node);
> +		if (!kref_get_unless_zero(&obj->refcount))
> +			obj = NULL;
> +	}
> +	drm_vma_offset_unlock_lookup(hantro_dev.drm_dev->vma_offset_manager);
> +	hantro_unref_drmobj(obj);
> +
> +	if (!obj) {
> +		mutex_unlock(&hantro_dev.drm_dev->struct_mutex);
> +		return -EINVAL;
> +	}
> +	cma_obj = to_drm_gem_hantro_obj(obj);
> +
> +	if (page_num > cma_obj->num_pages) {
> +		mutex_unlock(&hantro_dev.drm_dev->struct_mutex);
> +		return -EINVAL;
> +	}
> +
> +	if ((cma_obj->flag & HANTRO_GEM_FLAG_IMPORT) == 0) {
> +		address = (unsigned long)cma_obj->vaddr;
> +		if (address == 0) {
> +			mutex_unlock(&hantro_dev.drm_dev->struct_mutex);
> +			return -EINVAL;
> +		}
> +		ret = drm_gem_mmap_obj(obj,
> +				       drm_vma_node_size(node) << PAGE_SHIFT, vma);
> +
> +		if (ret) {
> +			mutex_unlock(&hantro_dev.drm_dev->struct_mutex);
> +			return ret;
> +		}
> +	} else {
> +		pscatter = &cma_obj->sgt->sgl[sgtidx];
> +		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
> +	}
> +
> +	vma->vm_pgoff = 0;
> +	if (cma_obj->ddr_channel == DDR0_CHANNEL)
> +		ddr_dev = hantro_dev.drm_dev->dev;
> +	else if (cma_obj->ddr_channel == DDR1_CHANNEL)
> +		ddr_dev = ddr1.dev;
> +
> +	if (dma_mmap_coherent(ddr_dev, vma, cma_obj->vaddr, cma_obj->paddr,
> +			      page_num << PAGE_SHIFT)) {
> +		mutex_unlock(&hantro_dev.drm_dev->struct_mutex);
> +		return -EAGAIN;
> +	}
> +
> +	vma->vm_private_data = cma_obj;
> +	cma_obj->pages = pages;
> +	mutex_unlock(&hantro_dev.drm_dev->struct_mutex);
> +
> +	return ret;
> +}
> +
> +static int hantro_gem_open_obj(struct drm_gem_object *obj,
> +			       struct drm_file *filp)
> +{
> +	return 0;
> +}
> +
> +static int hantro_device_open(struct inode *inode, struct file *filp)
> +{
> +	int ret;
> +
> +	ret = drm_open(inode, filp);
> +	hantrodec_open(inode, filp);
> +
> +	return ret;
> +}
> +
> +static int hantro_device_release(struct inode *inode, struct file *filp)
> +{
> +	return drm_release(inode, filp);
> +}
> +
> +static vm_fault_t hantro_vm_fault(struct vm_fault *vmf)
> +{
> +	return -EPERM;
> +}
> +
> +#ifndef virt_to_bus
> +static inline unsigned long virt_to_bus(void *address)
> +{
> +	return (unsigned long)address;
> +}
> +#endif
> +
> +static struct sg_table *
> +hantro_gem_prime_get_sg_table(struct drm_gem_object *obj)
> +{
> +	struct drm_gem_hantro_object *cma_obj = to_drm_gem_hantro_obj(obj);
> +	struct sg_table *sgt;
> +	int ret;
> +
> +	sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
> +	if (!sgt)
> +		return NULL;
> +
> +	if (cma_obj->ddr_channel == DDR0_CHANNEL)
> +		ddr_dev = obj->dev->dev;
> +	else if (cma_obj->ddr_channel == DDR1_CHANNEL)
> +		ddr_dev = ddr1.dev;
> +
> +	ret = dma_get_sgtable(ddr_dev, sgt, cma_obj->vaddr, cma_obj->paddr,
> +			      obj->size);
> +	if (ret < 0)
> +		goto out;
> +
> +	return sgt;
> +
> +out:
> +	kfree(sgt);
> +	return NULL;
> +}
> +
> +static struct drm_gem_object *
> +hantro_gem_prime_import_sg_table(struct drm_device *dev,
> +				 struct dma_buf_attachment *attach,
> +				 struct sg_table *sgt)
> +{
> +	struct drm_gem_hantro_object *cma_obj;
> +	struct drm_gem_object *obj;
> +
> +	cma_obj = kzalloc(sizeof(*cma_obj), GFP_KERNEL);
> +	if (!cma_obj)
> +		return ERR_PTR(-ENOMEM);
> +
> +	obj = &cma_obj->base;
> +
> +	if (sgt->nents > 1) {
> +		/* check if the entries in the sg_table are contiguous */
> +		dma_addr_t next_addr = sg_dma_address(sgt->sgl);
> +		struct scatterlist *s;
> +		unsigned int i;
> +
> +		for_each_sg(sgt->sgl, s, sgt->nents, i) {
> +			/*
> +			 * sg_dma_address(s) is only valid for entries
> +			 * that have sg_dma_len(s) != 0
> +			 */
> +			if (!sg_dma_len(s))
> +				continue;
> +
> +			if (sg_dma_address(s) != next_addr) {
> +				kfree(cma_obj);
> +				return ERR_PTR(-EINVAL);
> +			}
> +
> +			next_addr = sg_dma_address(s) + sg_dma_len(s);
> +		}
> +	}
> +	if (drm_gem_object_init(dev, obj, attach->dmabuf->size) != 0) {
> +		kfree(cma_obj);
> +		return ERR_PTR(-ENOMEM);
> +	}
> +	cma_obj->paddr = sg_dma_address(sgt->sgl);
> +	cma_obj->vaddr = dma_buf_vmap(attach->dmabuf);
> +	cma_obj->sgt = sgt;
> +	cma_obj->flag |= HANTRO_GEM_FLAG_IMPORT;
> +	cma_obj->num_pages = attach->dmabuf->size >> PAGE_SHIFT;
> +
> +	return obj;
> +}
> +
> +static void *hantro_gem_prime_vmap(struct drm_gem_object *obj)
> +{
> +	struct drm_gem_hantro_object *cma_obj = to_drm_gem_hantro_obj(obj);
> +
> +	return cma_obj->vaddr;
> +}
> +
> +static void hantro_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +{
> +}
> +
> +static int hantro_gem_prime_mmap(struct drm_gem_object *obj,
> +				 struct vm_area_struct *vma)
> +{
> +	struct drm_gem_hantro_object *cma_obj;
> +	unsigned long page_num = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
> +	int ret = 0;
> +
> +	cma_obj = to_drm_gem_hantro_obj(obj);
> +
> +	if (page_num > cma_obj->num_pages)
> +		return -EINVAL;
> +
> +	if ((cma_obj->flag & HANTRO_GEM_FLAG_IMPORT) != 0)
> +		return -EINVAL;
> +
> +	if ((unsigned long)cma_obj->vaddr == 0)
> +		return -EINVAL;
> +
> +	ret = drm_gem_mmap_obj(obj, obj->size, vma);
> +	if (ret < 0)
> +		return ret;
> +
> +	vma->vm_flags &= ~VM_PFNMAP;
> +	vma->vm_pgoff = 0;
> +
> +	if (cma_obj->ddr_channel == DDR0_CHANNEL)
> +		ddr_dev = obj->dev->dev;
> +	else if (cma_obj->ddr_channel == DDR1_CHANNEL)
> +		ddr_dev = ddr1.dev;
> +
> +	if (dma_mmap_coherent(ddr_dev, vma, cma_obj->vaddr, cma_obj->paddr,
> +			      vma->vm_end - vma->vm_start)) {
> +		drm_gem_vm_close(vma);
> +		mutex_unlock(&hantro_dev.drm_dev->struct_mutex);
> +		return -EAGAIN;
> +	}
> +	vma->vm_private_data = cma_obj;
> +
> +	return ret;
> +}
> +
> +static struct drm_gem_object *
> +hantro_drm_gem_prime_import(struct drm_device *dev, struct dma_buf *dma_buf)
> +{
> +	return drm_gem_prime_import(dev, dma_buf);
> +}
> +
> +static void hantro_gem_free_object(struct drm_gem_object *gem_obj)
> +{
> +	struct drm_gem_hantro_object *cma_obj;
> +
> +	cma_obj = to_drm_gem_hantro_obj(gem_obj);
> +	if (cma_obj->pages) {
> +		int i;
> +
> +		for (i = 0; i < cma_obj->num_pages; i++)
> +			unref_page(cma_obj->pages[i]);
> +
> +		kfree(cma_obj->pages);
> +		cma_obj->pages = NULL;
> +	}
> +
> +	drm_gem_free_mmap_offset(gem_obj);
> +	drm_gem_object_release(gem_obj);
> +	if (gem_obj->import_attach) {
> +		if (cma_obj->vaddr)
> +			dma_buf_vunmap(gem_obj->import_attach->dmabuf,
> +				       cma_obj->vaddr);
> +		drm_prime_gem_destroy(gem_obj, cma_obj->sgt);
> +	} else if (cma_obj->vaddr) {
> +		if (cma_obj->ddr_channel == DDR0_CHANNEL)
> +			ddr_dev = gem_obj->dev->dev;
> +		else if (cma_obj->ddr_channel == DDR1_CHANNEL)
> +			ddr_dev = ddr1.dev;
> +		dma_free_coherent(ddr_dev, cma_obj->base.size, cma_obj->vaddr,
> +				  cma_obj->paddr);
> +	}
> +
> +	dma_resv_fini(&cma_obj->kresv);
> +	kfree(cma_obj);
> +}
> +
> +static int hantro_gem_close(struct drm_device *dev, void *data,
> +			    struct drm_file *file_priv)
> +{
> +	struct drm_gem_close *args = data;
> +	int ret = 0;
> +	struct drm_gem_object *obj =
> +		hantro_gem_object_lookup(dev, file_priv, args->handle);
> +
> +	if (!obj)
> +		return -EINVAL;
> +
> +	ret = drm_gem_handle_delete(file_priv, args->handle);
> +	hantro_unref_drmobj(obj);
> +
> +	return ret;
> +}
> +
> +static int hantro_gem_open(struct drm_device *dev, void *data,
> +			   struct drm_file *file_priv)
> +{
> +	int ret;
> +	u32 handle;
> +	struct drm_gem_open *openarg;
> +	struct drm_gem_object *obj = NULL;
> +
> +	openarg = (struct drm_gem_open *)data;
> +
> +	obj = idr_find(&dev->object_name_idr, (int)openarg->name);
> +	if (obj)
> +		hantro_ref_drmobj(obj);
> +	else
> +		return -ENOENT;
> +
> +	ret = drm_gem_handle_create(file_priv, obj, &handle);
> +	hantro_unref_drmobj(obj);
> +	if (ret)
> +		return ret;
> +
> +	openarg->handle = handle;
> +	openarg->size = obj->size;
> +
> +	return ret;
> +}
> +
> +static int hantro_map_vaddr(struct drm_device *dev, void *data,
> +			    struct drm_file *file_priv)
> +{
> +	struct hantro_addrmap *pamap = data;
> +	struct drm_gem_object *obj;
> +	struct drm_gem_hantro_object *cma_obj;
> +
> +	obj = hantro_gem_object_lookup(dev, file_priv, pamap->handle);
> +	if (!obj)
> +		return -EINVAL;
> +
> +	cma_obj = to_drm_gem_hantro_obj(obj);
> +	pamap->vm_addr = (unsigned long)cma_obj->vaddr;
> +	pamap->phy_addr = cma_obj->paddr;
> +	hantro_unref_drmobj(obj);
> +
> +	return 0;
> +}
> +
> +static int hantro_gem_flink(struct drm_device *dev, void *data,
> +			    struct drm_file *file_priv)
> +{
> +	struct drm_gem_flink *args = data;
> +	struct drm_gem_object *obj;
> +	int ret;
> +
> +	if (!drm_core_check_feature(dev, DRIVER_GEM))
> +		return -ENODEV;
> +
> +	obj = hantro_gem_object_lookup(dev, file_priv, args->handle);
> +	if (!obj)
> +		return -ENOENT;
> +
> +	mutex_lock(&dev->object_name_lock);
> +	/* prevent races with concurrent gem_close. */
> +	if (obj->handle_count == 0) {
> +		ret = -ENOENT;
> +		goto err;
> +	}
> +
> +	if (!obj->name) {
> +		ret = idr_alloc(&dev->object_name_idr, obj, 1, 0, GFP_KERNEL);
> +		if (ret < 0)
> +			goto err;
> +
> +		obj->name = ret;
> +	}
> +
> +	args->name = (uint64_t)obj->name;
> +	ret = 0;
> +
> +err:
> +	mutex_unlock(&dev->object_name_lock);
> +	hantro_unref_drmobj(obj);
> +	return ret;
> +}
> +
> +static int hantro_map_dumb(struct drm_device *dev, void *data,
> +			   struct drm_file *file_priv)
> +{
> +	int ret;
> +	struct drm_mode_map_dumb *temparg = (struct drm_mode_map_dumb *)data;
> +
> +	ret = hantro_gem_dumb_map_offset(file_priv, dev, temparg->handle,
> +					 &temparg->offset);
> +
> +	return ret;
> +}
> +
> +static int hantro_drm_open(struct drm_device *dev, struct drm_file *file)
> +{
> +	struct idr *ptr;
> +
> +	ptr = kzalloc(sizeof(*ptr), GFP_KERNEL);
> +	if (!ptr)
> +		return -ENOMEM;
> +	idr_init(ptr);
> +	file->driver_priv = ptr;
> +
> +	return 0;
> +}
> +
> +static void hantro_drm_postclose(struct drm_device *dev, struct drm_file *file)
> +{
> +	int id;
> +	struct idr *cmalist = (struct idr *)file->driver_priv;
> +	void *obj;
> +
> +	mutex_lock(&dev->struct_mutex);
> +	if (file->driver_priv) {
> +		idr_for_each_entry(cmalist, obj, id) {
> +			if (obj) {
> +				hantro_release_dumb(dev, file, obj);
> +				idr_remove(cmalist, id);
> +			}
> +		}
> +		idr_destroy(cmalist);
> +		kfree(file->driver_priv);
> +		file->driver_priv = NULL;
> +	}
> +	mutex_unlock(&dev->struct_mutex);
> +}
> +
> +static int hantro_handle_to_fd(struct drm_device *dev, void *data,
> +			       struct drm_file *file_priv)
> +{
> +	int ret;
> +	struct drm_prime_handle *primeargs = (struct drm_prime_handle *)data;
> +	struct drm_gem_object *obj;
> +	struct drm_gem_hantro_object *cma_obj;
> +
> +	obj = hantro_gem_object_lookup(dev, file_priv, primeargs->handle);
> +	if (!obj)
> +		return -ENOENT;
> +
> +	ret = drm_gem_prime_handle_to_fd(dev, file_priv, primeargs->handle,
> +					 primeargs->flags, &primeargs->fd);
> +
> +	if (ret == 0) {
> +		cma_obj = to_drm_gem_hantro_obj(obj);
> +		cma_obj->flag |= HANTRO_GEM_FLAG_EXPORT;
> +	}
> +	hantro_unref_drmobj(obj);
> +
> +	return ret;
> +}
> +
> +static int hantro_fd_to_handle(struct drm_device *dev, void *data,
> +			       struct drm_file *file_priv)
> +{
> +	struct drm_prime_handle *primeargs = (struct drm_prime_handle *)data;
> +
> +	return drm_gem_prime_fd_to_handle(dev, file_priv, primeargs->fd,
> +					  &primeargs->handle);
> +}
> +
> +static int hantro_fb_create2(struct drm_device *dev, void *data,
> +			     struct drm_file *file_priv)
> +{
> +	struct drm_mode_fb_cmd2 *mode_cmd = (struct drm_mode_fb_cmd2 *)data;
> +	struct hantro_drm_fb *vsifb;
> +	struct drm_gem_object *objs[4];
> +	struct drm_gem_object *obj;
> +	const struct drm_format_info *info = drm_get_format_info(dev, mode_cmd);
> +	unsigned int hsub;
> +	unsigned int vsub;
> +	int num_planes;
> +	int ret;
> +	int i;
> +
> +	hsub = info->hsub;
> +	vsub = info->vsub;
> +	num_planes = min_t(int, info->num_planes, 4);
> +	for (i = 0; i < num_planes; i++) {
> +		unsigned int width = mode_cmd->width / (i ? hsub : 1);
> +		unsigned int height = mode_cmd->height / (i ? vsub : 1);
> +		unsigned int min_size;
> +
> +		obj = hantro_gem_object_lookup(dev, file_priv,
> +					       mode_cmd->handles[i]);
> +		if (!obj)
> +			return -ENXIO;
> +		min_size = (height - 1) * mode_cmd->pitches[i] +
> +			   mode_cmd->offsets[i] + width * info->cpp[i];
> +		/* check the size while the lookup reference is still held */
> +		if (obj->size < min_size) {
> +			hantro_unref_drmobj(obj);
> +			return -EINVAL;
> +		}
> +		hantro_unref_drmobj(obj);
> +		objs[i] = obj;
> +	}
> +	vsifb = kzalloc(sizeof(*vsifb), GFP_KERNEL);
> +	if (!vsifb)
> +		return -ENOMEM;
> +	drm_helper_mode_fill_fb_struct(dev, &vsifb->fb, mode_cmd);
> +	for (i = 0; i < num_planes; i++)
> +		vsifb->obj[i] = objs[i];
> +	ret = drm_framebuffer_init(dev, &vsifb->fb, &hantro_drm_fb_funcs);
> +	if (ret)
> +		kfree(vsifb);
> +	return ret;
> +}
> +
> +static int hantro_fb_create(struct drm_device *dev, void *data,
> +			    struct drm_file *file_priv)
> +{
> +	struct drm_mode_fb_cmd *or = data;
> +	struct drm_mode_fb_cmd2 r = {};
> +	int ret;
> +
> +	/* convert to new format and call new ioctl */
> +	r.fb_id = or->fb_id;
> +	r.width = or->width;
> +	r.height = or->height;
> +	r.pitches[0] = or->pitch;
> +	r.pixel_format = drm_mode_legacy_fb_format(or->bpp, or->depth);
> +	r.handles[0] = or->handle;
> +
> +	ret = hantro_fb_create2(dev, &r, file_priv);
> +	if (ret)
> +		return ret;
> +
> +	or->fb_id = r.fb_id;
> +
> +	return 0;
> +}
> +
> +static int hantro_get_version(struct drm_device *dev, void *data,
> +			      struct drm_file *file_priv)
> +{
> +	struct drm_version *pversion;
> +	char *sname = DRIVER_NAME;
> +	char *sdesc = DRIVER_DESC;
> +	char *sdate = DRIVER_DATE;
> +
> +	pversion = (struct drm_version *)data;
> +	pversion->version_major = dev->driver->major;
> +	pversion->version_minor = dev->driver->minor;
> +	pversion->version_patchlevel = 0;
> +	pversion->name_len = strlen(DRIVER_NAME);
> +	pversion->desc_len = strlen(DRIVER_DESC);
> +	pversion->date_len = strlen(DRIVER_DATE);
> +
> +	if (pversion->name)
> +		if (copy_to_user(pversion->name, sname, pversion->name_len))
> +			return -EFAULT;
> +	if (pversion->date)
> +		if (copy_to_user(pversion->date, sdate, pversion->date_len))
> +			return -EFAULT;
> +	if (pversion->desc)
> +		if (copy_to_user(pversion->desc, sdesc, pversion->desc_len))
> +			return -EFAULT;
> +
> +	return 0;
> +}
> +
> +static int hantro_get_cap(struct drm_device *dev, void *data,
> +			  struct drm_file *file_priv)
> +{
> +	struct drm_get_cap *req = (struct drm_get_cap *)data;
> +
> +	req->value = 0;
> +	switch (req->capability) {
> +	case DRM_CAP_PRIME:
> +		req->value |= dev->driver->prime_fd_to_handle ?
> +				      DRM_PRIME_CAP_IMPORT :
> +				      0;
> +		req->value |= dev->driver->prime_handle_to_fd ?
> +				      DRM_PRIME_CAP_EXPORT :
> +				      0;
> +		return 0;
> +	case DRM_CAP_DUMB_BUFFER:
> +		req->value = 1;
> +		break;
> +	case DRM_CAP_VBLANK_HIGH_CRTC:
> +		req->value = 1;
> +		break;
> +	case DRM_CAP_DUMB_PREFERRED_DEPTH:
> +		req->value = dev->mode_config.preferred_depth;
> +		break;
> +	case DRM_CAP_DUMB_PREFER_SHADOW:
> +		req->value = dev->mode_config.prefer_shadow;
> +		break;
> +	case DRM_CAP_ASYNC_PAGE_FLIP:
> +		req->value = dev->mode_config.async_page_flip;
> +		break;
> +	case DRM_CAP_CURSOR_WIDTH:
> +		if (dev->mode_config.cursor_width)
> +			req->value = dev->mode_config.cursor_width;
> +		else
> +			req->value = 64;
> +		break;
> +	case DRM_CAP_CURSOR_HEIGHT:
> +		if (dev->mode_config.cursor_height)
> +			req->value = dev->mode_config.cursor_height;
> +		else
> +			req->value = 64;
> +		break;
> +	case DRM_CAP_ADDFB2_MODIFIERS:
> +		req->value = dev->mode_config.allow_fb_modifiers;
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static int hantro_test(struct drm_device *dev, void *data,
> +		       struct drm_file *file_priv)
> +{
> +	unsigned int *input = data;
> +	int handle = *input;
> +	struct drm_gem_object *obj;
> +	struct dma_fence *pfence;
> +	int ret = 10 * HZ; /* timeout */
> +
> +	obj = hantro_gem_object_lookup(dev, file_priv, handle);
> +	if (!obj)
> +		return -EINVAL;
> +	/* only exported objects carry a dma_buf to take the fence from */
> +	if (!obj->dma_buf) {
> +		hantro_unref_drmobj(obj);
> +		return -EINVAL;
> +	}
> +
> +	pfence = dma_resv_get_excl(obj->dma_buf->resv);
> +	while (ret > 0)
> +		ret = schedule_timeout(ret);
> +	hantro_fence_signal(pfence);
> +	hantro_unref_drmobj(obj);
> +
> +	return 0;
> +}
> +
> +static int hantro_getprimeaddr(struct drm_device *dev, void *data,
> +			       struct drm_file *file_priv)
> +{
> +	unsigned long *input = data;
> +	int fd = *input;
> +	struct drm_gem_hantro_object *cma_obj;
> +	struct dma_buf *dma_buf;
> +
> +	dma_buf = dma_buf_get(fd);
> +	if (IS_ERR(dma_buf))
> +		return PTR_ERR(dma_buf);
> +	cma_obj = (struct drm_gem_hantro_object *)dma_buf->priv;
> +	*input = cma_obj->paddr;
> +	dma_buf_put(dma_buf);
> +
> +	return 0;
> +}
> +
> +static int hantro_ptr_to_phys(struct drm_device *dev, void *data,
> +			      struct drm_file *file_priv)
> +{
> +	unsigned long *arg = data;
> +	struct vm_area_struct *vma;
> +	struct drm_gem_hantro_object *cma_obj;
> +	unsigned long vaddr = *arg;
> +	int ret = -EFAULT;
> +
> +	/* find_vma() must be called with the mmap lock held */
> +	mmap_read_lock(current->mm);
> +	vma = find_vma(current->mm, vaddr);
> +	if (!vma)
> +		goto out;
> +
> +	cma_obj = (struct drm_gem_hantro_object *)vma->vm_private_data;
> +	if (!cma_obj || cma_obj->base.dev != dev)
> +		goto out;
> +	if (vaddr < vma->vm_start ||
> +	    vaddr >= vma->vm_start + (cma_obj->num_pages << PAGE_SHIFT))
> +		goto out;
> +
> +	*arg = (phys_addr_t)(vaddr - vma->vm_start) + cma_obj->paddr;
> +	ret = 0;
> +out:
> +	mmap_read_unlock(current->mm);
> +	return ret;
> +}
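
The ioctl above walks the caller's VMA to turn a mapped virtual address back
into a physical one. A hypothetical userspace sketch (assumes mapped_buf came
from an earlier mmap() of a hantro dumb buffer and that hantro_drm.h is
visible to userspace):

  #include <stdio.h>
  #include <sys/ioctl.h>
  #include "hantro_drm.h"

  static void print_phys(int drm_fd, void *mapped_buf)
  {
          unsigned long addr = (unsigned long)mapped_buf;

          /* in/out argument: virtual address in, physical address out */
          if (ioctl(drm_fd, DRM_IOCTL_HANTRO_PTR_PHYADDR, &addr) == 0)
                  printf("physical address: 0x%lx\n", addr);
  }
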
> +
> +static int hantro_getmagic(struct drm_device *dev, void *data,
> +			   struct drm_file *file_priv)
> +{
> +	struct drm_auth *auth = data;
> +	int ret = 0;
> +
> +	mutex_lock(&dev->struct_mutex);
> +	if (!file_priv->magic) {
> +		ret = idr_alloc(&file_priv->master->magic_map, file_priv, 1, 0,
> +				GFP_KERNEL);
> +		if (ret >= 0)
> +			file_priv->magic = ret;
> +	}
> +	auth->magic = file_priv->magic;
> +	mutex_unlock(&dev->struct_mutex);
> +
> +	return ret < 0 ? ret : 0;
> +}
> +
> +static int hantro_authmagic(struct drm_device *dev, void *data,
> +			    struct drm_file *file_priv)
> +{
> +	struct drm_auth *auth = data;
> +	struct drm_file *file;
> +
> +	mutex_lock(&dev->struct_mutex);
> +	file = idr_find(&file_priv->master->magic_map, auth->magic);
> +	if (file) {
> +		file->authenticated = 1;
> +		idr_replace(&file_priv->master->magic_map, NULL, auth->magic);
> +	}
> +	mutex_unlock(&dev->struct_mutex);
> +
> +	return file ? 0 : -EINVAL;
> +}
> +
> +#define DRM_IOCTL_DEF(ioctl, _func, _flags)                                    \
> +	[DRM_IOCTL_NR(ioctl)] = {                                              \
> +		.cmd = ioctl, .func = _func, .flags = _flags, .name = #ioctl   \
> +	}
> +
> +#define DRM_CONTROL_ALLOW 0	/* no-op: control nodes were removed from the DRM core */
> +/* Ioctl table */
> +static const struct drm_ioctl_desc hantro_ioctls[] = {
> +	DRM_IOCTL_DEF(DRM_IOCTL_VERSION, hantro_get_version,
> +		      DRM_UNLOCKED | DRM_RENDER_ALLOW | DRM_CONTROL_ALLOW),
> +	DRM_IOCTL_DEF(DRM_IOCTL_GET_UNIQUE, drm_invalid_op, DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_GET_MAGIC, hantro_getmagic, DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_IRQ_BUSID, drm_invalid_op,
> +		      DRM_MASTER | DRM_ROOT_ONLY),
> +	DRM_IOCTL_DEF(DRM_IOCTL_GET_MAP, drm_invalid_op, DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_GET_CLIENT, drm_invalid_op, DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_GET_STATS, drm_invalid_op, DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_GET_CAP, hantro_get_cap,
> +		      DRM_UNLOCKED | DRM_RENDER_ALLOW),
> +	DRM_IOCTL_DEF(DRM_IOCTL_SET_CLIENT_CAP, drm_invalid_op, DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_SET_VERSION, drm_invalid_op,
> +		      DRM_UNLOCKED | DRM_MASTER),
> +	DRM_IOCTL_DEF(DRM_IOCTL_SET_UNIQUE, drm_invalid_op,
> +		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
> +	DRM_IOCTL_DEF(DRM_IOCTL_BLOCK, drm_invalid_op,
> +		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
> +	DRM_IOCTL_DEF(DRM_IOCTL_UNBLOCK, drm_invalid_op,
> +		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
> +	DRM_IOCTL_DEF(DRM_IOCTL_AUTH_MAGIC, hantro_authmagic,
> +		      DRM_AUTH | DRM_UNLOCKED | DRM_MASTER),
> +	DRM_IOCTL_DEF(DRM_IOCTL_ADD_MAP, drm_invalid_op,
> +		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
> +	DRM_IOCTL_DEF(DRM_IOCTL_RM_MAP, drm_invalid_op, DRM_AUTH),
> +	DRM_IOCTL_DEF(DRM_IOCTL_SET_SAREA_CTX, drm_invalid_op,
> +		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
> +	DRM_IOCTL_DEF(DRM_IOCTL_GET_SAREA_CTX, drm_invalid_op, DRM_AUTH),
> +	DRM_IOCTL_DEF(DRM_IOCTL_SET_MASTER, drm_invalid_op,
> +		      DRM_UNLOCKED | DRM_ROOT_ONLY),
> +	DRM_IOCTL_DEF(DRM_IOCTL_DROP_MASTER, drm_invalid_op,
> +		      DRM_UNLOCKED | DRM_ROOT_ONLY),
> +	DRM_IOCTL_DEF(DRM_IOCTL_ADD_CTX, drm_invalid_op,
> +		      DRM_AUTH | DRM_ROOT_ONLY),
> +	DRM_IOCTL_DEF(DRM_IOCTL_RM_CTX, drm_invalid_op,
> +		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MOD_CTX, drm_invalid_op,
> +		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
> +	DRM_IOCTL_DEF(DRM_IOCTL_GET_CTX, drm_invalid_op, DRM_AUTH),
> +	DRM_IOCTL_DEF(DRM_IOCTL_SWITCH_CTX, drm_invalid_op,
> +		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
> +	DRM_IOCTL_DEF(DRM_IOCTL_NEW_CTX, drm_invalid_op,
> +		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
> +	DRM_IOCTL_DEF(DRM_IOCTL_RES_CTX, drm_invalid_op, DRM_AUTH),
> +	DRM_IOCTL_DEF(DRM_IOCTL_ADD_DRAW, drm_invalid_op,
> +		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
> +	DRM_IOCTL_DEF(DRM_IOCTL_RM_DRAW, drm_invalid_op,
> +		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
> +	DRM_IOCTL_DEF(DRM_IOCTL_LOCK, drm_invalid_op, DRM_AUTH),
> +	DRM_IOCTL_DEF(DRM_IOCTL_UNLOCK, drm_invalid_op, DRM_AUTH),
> +	DRM_IOCTL_DEF(DRM_IOCTL_FINISH, drm_invalid_op, DRM_AUTH),
> +	DRM_IOCTL_DEF(DRM_IOCTL_ADD_BUFS, drm_invalid_op,
> +		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MARK_BUFS, drm_invalid_op,
> +		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
> +	DRM_IOCTL_DEF(DRM_IOCTL_INFO_BUFS, drm_invalid_op, DRM_AUTH),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MAP_BUFS, drm_invalid_op, DRM_AUTH),
> +	DRM_IOCTL_DEF(DRM_IOCTL_FREE_BUFS, drm_invalid_op, DRM_AUTH),
> +	DRM_IOCTL_DEF(DRM_IOCTL_DMA, drm_invalid_op, DRM_AUTH),
> +	DRM_IOCTL_DEF(DRM_IOCTL_CONTROL, drm_invalid_op,
> +		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
> +#if IS_ENABLED(CONFIG_AGP)
> +	DRM_IOCTL_DEF(DRM_IOCTL_AGP_ACQUIRE, drm_invalid_op,
> +		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
> +	DRM_IOCTL_DEF(DRM_IOCTL_AGP_RELEASE, drm_invalid_op,
> +		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
> +	DRM_IOCTL_DEF(DRM_IOCTL_AGP_ENABLE, drm_invalid_op,
> +		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
> +	DRM_IOCTL_DEF(DRM_IOCTL_AGP_INFO, drm_invalid_op, DRM_AUTH),
> +	DRM_IOCTL_DEF(DRM_IOCTL_AGP_ALLOC, drm_invalid_op,
> +		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
> +	DRM_IOCTL_DEF(DRM_IOCTL_AGP_FREE, drm_invalid_op,
> +		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
> +	DRM_IOCTL_DEF(DRM_IOCTL_AGP_BIND, drm_invalid_op,
> +		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
> +	DRM_IOCTL_DEF(DRM_IOCTL_AGP_UNBIND, drm_invalid_op,
> +		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
> +#endif
> +	DRM_IOCTL_DEF(DRM_IOCTL_SG_ALLOC, drm_invalid_op,
> +		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
> +	DRM_IOCTL_DEF(DRM_IOCTL_SG_FREE, drm_invalid_op,
> +		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
> +	DRM_IOCTL_DEF(DRM_IOCTL_WAIT_VBLANK, drm_invalid_op, DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODESET_CTL, drm_invalid_op, 0),
> +	DRM_IOCTL_DEF(DRM_IOCTL_UPDATE_DRAW, drm_invalid_op,
> +		      DRM_AUTH | DRM_MASTER | DRM_ROOT_ONLY),
> +	DRM_IOCTL_DEF(DRM_IOCTL_GEM_CLOSE, hantro_gem_close,
> +		      DRM_UNLOCKED | DRM_RENDER_ALLOW),
> +	DRM_IOCTL_DEF(DRM_IOCTL_GEM_FLINK, hantro_gem_flink,
> +		      DRM_AUTH | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_GEM_OPEN, hantro_gem_open,
> +		      DRM_AUTH | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETRESOURCES, drm_invalid_op,
> +		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_PRIME_HANDLE_TO_FD, hantro_handle_to_fd,
> +		      DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
> +	DRM_IOCTL_DEF(DRM_IOCTL_PRIME_FD_TO_HANDLE, hantro_fd_to_handle,
> +		      DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPLANERESOURCES, drm_invalid_op,
> +		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETCRTC, drm_invalid_op,
> +		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETCRTC, drm_invalid_op,
> +		      DRM_MASTER | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPLANE, drm_invalid_op,
> +		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETPLANE, drm_invalid_op,
> +		      DRM_MASTER | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_CURSOR, drm_invalid_op,
> +		      DRM_MASTER | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETGAMMA, drm_invalid_op, DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETGAMMA, drm_invalid_op,
> +		      DRM_MASTER | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETENCODER, drm_invalid_op,
> +		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETCONNECTOR, drm_invalid_op,
> +		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_ATTACHMODE, drm_invalid_op,
> +		      DRM_MASTER | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_DETACHMODE, drm_invalid_op,
> +		      DRM_MASTER | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPROPERTY, drm_invalid_op,
> +		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETPROPERTY, drm_invalid_op,
> +		      DRM_MASTER | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETPROPBLOB, drm_invalid_op,
> +		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETFB, drm_invalid_op,
> +		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_ADDFB, hantro_fb_create,
> +		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_ADDFB2, hantro_fb_create2,
> +		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_RMFB, drm_invalid_op,
> +		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_PAGE_FLIP, drm_invalid_op,
> +		      DRM_MASTER | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_DIRTYFB, drm_invalid_op,
> +		      DRM_MASTER | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_CREATE_DUMB, hantro_gem_dumb_create,
> +		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_MAP_DUMB, hantro_map_dumb,
> +		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_DESTROY_DUMB, hantro_destroy_dumb,
> +		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_OBJ_GETPROPERTIES, drm_invalid_op,
> +		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_OBJ_SETPROPERTY, drm_invalid_op,
> +		      DRM_MASTER | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_CURSOR2, drm_invalid_op,
> +		      DRM_MASTER | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_ATOMIC, drm_invalid_op,
> +		      DRM_MASTER | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_CREATEPROPBLOB, drm_invalid_op,
> +		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_MODE_DESTROYPROPBLOB, drm_invalid_op,
> +		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +
> +	/* hantro specific ioctls */
> +	DRM_IOCTL_DEF(DRM_IOCTL_HANTRO_TESTCMD, hantro_test,
> +		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_HANTRO_GETPADDR, hantro_map_vaddr,
> +		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_HANTRO_TESTREADY, hantro_testbufvalid,
> +		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_HANTRO_SETDOMAIN, hantro_setdomain,
> +		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_HANTRO_ACQUIREBUF, hantro_acquirebuf,
> +		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_HANTRO_RELEASEBUF, hantro_releasebuf,
> +		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_HANTRO_GETPRIMEADDR, hantro_getprimeaddr,
> +		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +	DRM_IOCTL_DEF(DRM_IOCTL_HANTRO_PTR_PHYADDR, hantro_ptr_to_phys,
> +		      DRM_CONTROL_ALLOW | DRM_UNLOCKED),
> +};
> +
> +#if DRM_CONTROL_ALLOW == 0
> +#undef DRM_CONTROL_ALLOW
> +#endif
> +
> +#define HANTRO_IOCTL_COUNT ARRAY_SIZE(hantro_ioctls)
> +static long hantro_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
> +{
> +	struct drm_file *file_priv = filp->private_data;
> +	struct drm_device *dev = hantro_dev.drm_dev;
> +	const struct drm_ioctl_desc *ioctl = NULL;
> +	drm_ioctl_t *func;
> +	unsigned int nr = DRM_IOCTL_NR(cmd);
> +	int retcode = 0;
> +	char stack_kdata[128];
> +	char *kdata = stack_kdata;
> +	unsigned int in_size, out_size;
> +
> +	if (drm_dev_is_unplugged(dev))
> +		return -ENODEV;
> +
> +	out_size = _IOC_SIZE(cmd);
> +	in_size = _IOC_SIZE(cmd);
> +	/* every argument structure must fit the fixed stack buffer */
> +	if (in_size > sizeof(stack_kdata))
> +		return -EINVAL;
> +
> +	if (in_size > 0) {
> +		if (_IOC_DIR(cmd) & _IOC_READ)
> +			retcode = !hantro_access_ok(VERIFY_WRITE, (void *)arg,
> +						    in_size);
> +		else if (_IOC_DIR(cmd) & _IOC_WRITE)
> +			retcode = !hantro_access_ok(VERIFY_READ, (void *)arg,
> +						    in_size);
> +		if (retcode)
> +			return -EFAULT;
> +	}
> +	if (nr >= DRM_IOCTL_NR(HX280ENC_IOC_START) &&
> +	    nr <= DRM_IOCTL_NR(HX280ENC_IOC_END)) {
> +		return hantroenc_ioctl(filp, cmd, arg);
> +	}
> +	if (nr >= DRM_IOCTL_NR(HANTRODEC_IOC_START) &&
> +	    nr <= DRM_IOCTL_NR(HANTRODEC_IOC_END)) {
> +		return hantrodec_ioctl(filp, cmd, arg);
> +	}
> +
> +	if (nr >= HANTRO_IOCTL_COUNT)
> +		return -EINVAL;
> +	ioctl = &hantro_ioctls[nr];
> +
> +	if (copy_from_user(kdata, (void __user *)arg, in_size) != 0)
> +		return -EFAULT;
> +
> +	if (cmd == DRM_IOCTL_MODE_SETCRTC ||
> +	    cmd == DRM_IOCTL_MODE_GETRESOURCES ||
> +	    cmd == DRM_IOCTL_SET_CLIENT_CAP || cmd == DRM_IOCTL_MODE_GETCRTC ||
> +	    cmd == DRM_IOCTL_MODE_GETENCODER ||
> +	    cmd == DRM_IOCTL_MODE_GETCONNECTOR || cmd == DRM_IOCTL_MODE_GETFB) {
> +		retcode = drm_ioctl(filp, cmd, arg);
> +		return retcode;
> +	}
> +	func = ioctl->func;
> +	if (!func)
> +		return -EINVAL;
> +	retcode = func(dev, kdata, file_priv);
> +
> +	/* copy back only for ioctls that actually return data */
> +	if ((_IOC_DIR(cmd) & _IOC_READ) &&
> +	    copy_to_user((void __user *)arg, kdata, out_size) != 0)
> +		retcode = -EFAULT;
> +
> +	return retcode;
> +}
> +
> +/* VFS methods */
> +static const struct file_operations hantro_fops = {
> +	.owner = THIS_MODULE,
> +	.open = hantro_device_open,
> +	.mmap = hantro_mmap,
> +	.release = hantro_device_release,
> +	.poll = drm_poll,
> +	.read = drm_read,
> +	.unlocked_ioctl = hantro_ioctl,
> +	.compat_ioctl = drm_compat_ioctl,
> +};
> +
> +void hantro_gem_vm_close(struct vm_area_struct *vma)
> +{
> +	struct drm_gem_hantro_object *obj =
> +		(struct drm_gem_hantro_object *)vma->vm_private_data;
> +	/* unmap callback */
> +
> +	if (obj->pages) {
> +		int i;
> +
> +		for (i = 0; i < obj->num_pages; i++)
> +			unref_page(obj->pages[i]);
> +
> +		kfree(obj->pages);
> +		obj->pages = NULL;
> +	}
> +	drm_gem_vm_close(vma);
> +}
> +
> +static void hantro_release(struct drm_device *dev)
> +{
> +}
> +
> +static void hantro_gem_dmabuf_release(struct dma_buf *dma_buf)
> +{
> +	return drm_gem_dmabuf_release(dma_buf);
> +}
> +
> +static int hantro_gem_map_attach(struct dma_buf *dma_buf,
> +				 struct dma_buf_attachment *attach)
> +{
> +	int ret;
> +	struct drm_gem_hantro_object *cma_obj =
> +		(struct drm_gem_hantro_object *)dma_buf->priv;
> +
> +	ret = drm_gem_map_attach(dma_buf, attach);
> +	if (ret == 0)
> +		cma_obj->flag |= HANTRO_GEM_FLAG_EXPORTUSED;
> +
> +	return ret;
> +}
> +
> +static void hantro_gem_map_detach(struct dma_buf *dma_buf,
> +				  struct dma_buf_attachment *attach)
> +{
> +	drm_gem_map_detach(dma_buf, attach);
> +}
> +
> +static struct sg_table *
> +hantro_gem_map_dma_buf(struct dma_buf_attachment *attach,
> +		       enum dma_data_direction dir)
> +{
> +	return drm_gem_map_dma_buf(attach, dir);
> +}
> +
> +static int hantro_gem_dmabuf_mmap(struct dma_buf *dma_buf,
> +				  struct vm_area_struct *vma)
> +{
> +	return drm_gem_dmabuf_mmap(dma_buf, vma);
> +}
> +
> +static void *hantro_gem_dmabuf_vmap(struct dma_buf *dma_buf)
> +{
> +	return drm_gem_dmabuf_vmap(dma_buf);
> +}
> +
> +static const struct dma_buf_ops hantro_dmabuf_ops = {
> +	.attach = hantro_gem_map_attach,
> +	.detach = hantro_gem_map_detach,
> +	.map_dma_buf = hantro_gem_map_dma_buf,
> +	.unmap_dma_buf = drm_gem_unmap_dma_buf,
> +	.release = hantro_gem_dmabuf_release,
> +	.mmap = hantro_gem_dmabuf_mmap,
> +	.vmap = hantro_gem_dmabuf_vmap,
> +	.vunmap = drm_gem_dmabuf_vunmap,
> +};
> +
> +static struct drm_driver hantro_drm_driver;
> +static struct dma_buf *hantro_prime_export(struct drm_gem_object *obj,
> +					   int flags)
> +{
> +	struct drm_gem_hantro_object *cma_obj;
> +	struct dma_buf_export_info exp_info = {
> +		.exp_name = KBUILD_MODNAME,
> +		.owner = obj->dev->driver->fops->owner,
> +		.ops = &hantro_dmabuf_ops,
> +		.flags = flags,
> +		.priv = obj,
> +	};
> +
> +	cma_obj = to_drm_gem_hantro_obj(obj);
> +	exp_info.resv = &cma_obj->kresv;
> +	exp_info.size = cma_obj->num_pages << PAGE_SHIFT;
> +
> +	return drm_gem_dmabuf_export(obj->dev, &exp_info);
> +}
> +
> +static void hantro_close_object(struct drm_gem_object *obj,
> +				struct drm_file *file_priv)
> +{
> +	struct drm_gem_hantro_object *cma_obj;
> +
> +	cma_obj = to_drm_gem_hantro_obj(obj);
> +	if (obj->dma_buf && (cma_obj->flag & HANTRO_GEM_FLAG_EXPORTUSED))
> +		dma_buf_put(obj->dma_buf);
> +}
> +
> +static int hantro_gem_prime_handle_to_fd(struct drm_device *dev,
> +					 struct drm_file *filp, u32 handle,
> +					 u32 flags, int *prime_fd)
> +{
> +	return drm_gem_prime_handle_to_fd(dev, filp, handle, flags, prime_fd);
> +}
> +
> +static const struct vm_operations_struct hantro_drm_gem_cma_vm_ops = {
> +	.open = drm_gem_vm_open,
> +	.close = hantro_gem_vm_close,
> +	.fault = hantro_vm_fault,
> +};
> +
> +static struct drm_driver hantro_drm_driver = {
> +	/* DRIVER_GEM for buffer management, DRIVER_RENDER for a renderD node */
> +	.driver_features = DRIVER_GEM | DRIVER_RENDER,
> +	.get_vblank_counter = hantro_vblank_no_hw_counter,
> +	.open = hantro_drm_open,
> +	.postclose = hantro_drm_postclose,
> +	.release = hantro_release,
> +	.dumb_destroy = drm_gem_dumb_destroy,
> +	.dumb_create = hantro_gem_dumb_create_internal,
> +	.dumb_map_offset = hantro_gem_dumb_map_offset,
> +	.gem_open_object = hantro_gem_open_obj,
> +	.gem_close_object = hantro_close_object,
> +	.gem_prime_export = hantro_prime_export,
> +	.gem_prime_import = hantro_drm_gem_prime_import,
> +	.prime_handle_to_fd = hantro_gem_prime_handle_to_fd,
> +	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
> +	.gem_prime_import_sg_table = hantro_gem_prime_import_sg_table,
> +	.gem_prime_get_sg_table = hantro_gem_prime_get_sg_table,
> +	.gem_prime_vmap = hantro_gem_prime_vmap,
> +	.gem_prime_vunmap = hantro_gem_prime_vunmap,
> +	.gem_prime_mmap = hantro_gem_prime_mmap,
> +	.gem_free_object_unlocked = hantro_gem_free_object,
> +	.gem_vm_ops = &hantro_drm_gem_cma_vm_ops,
> +	.fops = &hantro_fops,
> +	.name = DRIVER_NAME,
> +	.desc = DRIVER_DESC,
> +	.date = DRIVER_DATE,
> +	.major = DRIVER_MAJOR,
> +	.minor = DRIVER_MINOR,
> +};
> +
> +static ssize_t bandwidth_dec_read_show(struct device *kdev,
> +				       struct device_attribute *attr, char *buf)
> +{
> +	/*
> +	 *  /sys/bus/platform/drivers/hantro/xxxxx.vpu/bandwidth_dec_read
> +	 *  Shows bandwidth info to user space.
> +	 *  Real data should be read from HW registers.
> +	 *  This file is read-only.
> +	 */
> +	u32 bandwidth = hantrodec_readbandwidth(1);
> +
> +	return snprintf(buf, PAGE_SIZE, "%u\n", bandwidth);
> +}
> +
> +static ssize_t bandwidth_dec_write_show(struct device *kdev,
> +					struct device_attribute *attr, char *buf)
> +{
> +	u32 bandwidth = hantrodec_readbandwidth(0);
> +
> +	return snprintf(buf, PAGE_SIZE, "%u\n", bandwidth);
> +}
> +
> +static ssize_t bandwidth_enc_read_show(struct device *kdev,
> +				       struct device_attribute *attr, char *buf)
> +{
> +	u32 bandwidth = hantroenc_readbandwidth(1);
> +
> +	return snprintf(buf, PAGE_SIZE, "%u\n", bandwidth);
> +}
> +
> +static ssize_t bandwidth_enc_write_show(struct device *kdev,
> +					struct device_attribute *attr, char *buf)
> +{
> +	u32 bandwidth = hantroenc_readbandwidth(0);
> +
> +	return snprintf(buf, PAGE_SIZE, "%u\n", bandwidth);
> +}
> +
> +static DEVICE_ATTR(bandwidth_dec_read, 0444, bandwidth_dec_read_show, NULL);
> +static DEVICE_ATTR(bandwidth_dec_write, 0444, bandwidth_dec_write_show, NULL);
> +static DEVICE_ATTR(bandwidth_enc_read, 0444, bandwidth_enc_read_show, NULL);
> +static DEVICE_ATTR(bandwidth_enc_write, 0444, bandwidth_enc_write_show, NULL);
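
These four attributes expose the bandwidth counters as read-only sysfs files.
A hypothetical userspace reader; the "xxxxx.vpu" path component depends on
the DT node name, so it is a placeholder here:

  #include <stdio.h>

  static unsigned int read_bandwidth(const char *attr)
  {
          char path[128];
          unsigned int bw = 0;
          FILE *f;

          /* e.g. attr = "bandwidth_dec_read" */
          snprintf(path, sizeof(path),
                   "/sys/bus/platform/drivers/hantro/xxxxx.vpu/%s", attr);
          f = fopen(path, "r");
          if (f) {
                  if (fscanf(f, "%u", &bw) != 1)
                          bw = 0;
                  fclose(f);
          }
          return bw;
  }
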
> +
> +static int hantro_create_sysfs_api(struct device *dev)
> +{
> +	int result;
> +
> +	result = device_create_file(dev, &dev_attr_bandwidth_dec_read);
> +	if (result != 0)
> +		return result;
> +
> +	result = device_create_file(dev, &dev_attr_bandwidth_dec_write);
> +	if (result != 0) {
> +		device_remove_file(dev, &dev_attr_bandwidth_dec_read);
> +		return result;
> +	}
> +
> +	result = device_create_file(dev, &dev_attr_bandwidth_enc_read);
> +	if (result != 0) {
> +		device_remove_file(dev, &dev_attr_bandwidth_dec_read);
> +		device_remove_file(dev, &dev_attr_bandwidth_dec_write);
> +		return result;
> +	}
> +
> +	result = device_create_file(dev, &dev_attr_bandwidth_enc_write);
> +	if (result != 0) {
> +		device_remove_file(dev, &dev_attr_bandwidth_dec_read);
> +		device_remove_file(dev, &dev_attr_bandwidth_dec_write);
> +		device_remove_file(dev, &dev_attr_bandwidth_enc_read);
> +		return result;
> +	}
> +
> +	return 0;
> +}
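
As an aside, the manual create/remove ladder above could be collapsed into a
sysfs attribute group; a sketch of the equivalent, assuming the same four
attributes:

  static struct attribute *hantro_attrs[] = {
          &dev_attr_bandwidth_dec_read.attr,
          &dev_attr_bandwidth_dec_write.attr,
          &dev_attr_bandwidth_enc_read.attr,
          &dev_attr_bandwidth_enc_write.attr,
          NULL,
  };

  static const struct attribute_group hantro_attr_group = {
          .attrs = hantro_attrs,
  };

  /* then a single call: sysfs_create_group(&dev->kobj, &hantro_attr_group); */
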
> +
> +static int init_hantro_rsvd_mem(struct device *dev, struct hantro_mem *mem,
> +				const char *mem_name, unsigned int mem_idx)
> +{
> +	struct device *mem_dev;
> +	struct device_node *np;
> +	struct resource res;
> +	dma_addr_t dma_handle;
> +	int rc;
> +	size_t mem_size;
> +	void *vaddr;
> +
> +	/* Create a child device (of dev) to own the reserved memory. */
> +	mem_dev =
> +		devm_kzalloc(dev, sizeof(struct device), GFP_KERNEL | GFP_DMA);
> +	if (!mem_dev)
> +		return -ENOMEM;
> +
> +	device_initialize(mem_dev);
> +	dev_set_name(mem_dev, "%s:%s", dev_name(dev), mem_name);
> +	mem_dev->parent = dev;
> +	mem_dev->dma_mask = dev->dma_mask;
> +	mem_dev->coherent_dma_mask = dev->coherent_dma_mask;
> +
> +	/* Set up DMA configuration using information from parent's DT node. */
> +	rc = of_dma_configure(mem_dev, dev->of_node, true);
> +	if (rc)
> +		goto err;
> +	mem_dev->release = of_reserved_mem_device_release;
> +
> +	rc = device_add(mem_dev);
> +	if (rc)
> +		goto err;
> +	/* Initialize the device's reserved memory region. */
> +	rc = of_reserved_mem_device_init_by_idx(mem_dev, dev->of_node, mem_idx);
> +	if (rc) {
> +		dev_err(dev, "Couldn't get reserved memory with idx = %d, %d\n",
> +			mem_idx, rc);
> +		device_del(mem_dev);
> +		goto err;
> +	}
> +	dev_info(dev, "Got reserved memory with idx = %d\n", mem_idx);
> +
> +	/*
> +	 * Look up the region's size and map the whole region so the
> +	 * vaddr/dma_handle recorded below refer to valid memory.
> +	 */
> +	np = of_parse_phandle(dev->of_node, "memory-region", mem_idx);
> +	if (!np || of_address_to_resource(np, 0, &res)) {
> +		of_node_put(np);
> +		rc = -ENODEV;
> +		device_del(mem_dev);
> +		goto err;
> +	}
> +	of_node_put(np);
> +	mem_size = resource_size(&res);
> +	vaddr = dmam_alloc_coherent(mem_dev, mem_size, &dma_handle, GFP_KERNEL);
> +	if (!vaddr) {
> +		rc = -ENOMEM;
> +		device_del(mem_dev);
> +		goto err;
> +	}
> +	dma_handle -= dev->dma_pfn_offset << PAGE_SHIFT;
> +
> +	mem->dev = mem_dev;
> +	mem->vaddr = vaddr;
> +	mem->dma_handle = dma_handle;
> +	mem->size = mem_size;
> +
> +	return 0;
> +err:
> +	put_device(mem_dev);
> +	return rc;
> +}
> +
> +static int hantro_drm_probe(struct platform_device *pdev)
> +{
> +	int result;
> +	struct device *dev = &pdev->dev;
> +
> +	if (!hantro_dev.platformdev)
> +		hantro_dev.platformdev = pdev;
> +
> +	/* try to attach rsv mem to dtb node */
> +	result = init_hantro_rsvd_mem(dev, &ddr1, "ddr1", 0);
> +	if (result) {
> +		dev_err(dev, "Failed to set up DDR1 reserved memory.\n");
> +		return result;
> +	}
> +
> +	dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
> +	dma_set_mask_and_coherent(ddr1.dev, DMA_BIT_MASK(64));
> +
> +	dev_info(dev, "ddr1 vaddr 0x%p paddr 0x%pad size 0x%zX\n", ddr1.vaddr,
> +		 &ddr1.dma_handle, ddr1.size);
> +
> +	result = hantro_create_sysfs_api(dev);
> +	if (result != 0)
> +		dev_warn(dev, "failed to create sysfs files\n");
> +
> +	/* check if pdev equals hantro_dev.platformdev */
> +	result = hantrodec_init(pdev);
> +	if (result != 0)
> +		return result;
> +	result = hantroenc_init(pdev);
> +	if (result != 0)
> +		return result;
> +
> +	return 0;
> +}
> +
> +static int hantro_drm_remove(struct platform_device *pdev)
> +{
> +	struct device *dev = &pdev->dev;
> +
> +	device_remove_file(dev, &dev_attr_bandwidth_dec_read);
> +	device_remove_file(dev, &dev_attr_bandwidth_dec_write);
> +	device_remove_file(dev, &dev_attr_bandwidth_enc_read);
> +	device_remove_file(dev, &dev_attr_bandwidth_enc_write);
> +
> +	return 0;
> +}
> +
> +static const struct platform_device_id hantro_drm_platform_ids[] = {
> +	{
> +		.name = "hantro",
> +	},
> +	{},
> +};
> +MODULE_DEVICE_TABLE(platform, hantro_drm_platform_ids);
> +
> +static const struct of_device_id hantro_of_match[] = {
> +	/* to match dtb, else reg io will fail */
> +	{
> +		.compatible = "intel,hantro",
> +	},
> +	{ /* sentinel */ }
> +};
> +
> +static int hantro_pm_suspend(struct device *kdev)
> +{
> +	return 0;
> +}
> +
> +static int hantro_pm_resume(struct device *kdev)
> +{
> +	return 0;
> +}
> +
> +static const struct dev_pm_ops hantro_pm_ops = {
> +	.suspend = hantro_pm_suspend,
> +	.resume = hantro_pm_resume,
> +};
> +
> +static struct platform_driver hantro_drm_platform_driver = {
> +	.probe = hantro_drm_probe,
> +	.remove = hantro_drm_remove,
> +	.driver = {
> +		.name = DRIVER_NAME,
> +		.owner = THIS_MODULE,
> +		.of_match_table = hantro_of_match,
> +		.pm = &hantro_pm_ops,
> +	},
> +	.id_table = hantro_drm_platform_ids,
> +};
> +
> +static const struct platform_device_info hantro_platform_info = {
> +	.name = DRIVER_NAME,
> +	.id = -1,
> +	.dma_mask = DMA_BIT_MASK(64),
> +};
> +
> +static int hantro_major = 1; /* dynamic */
> +void __exit hantro_cleanup(void)
> +{
> +	device_unregister(ddr1.dev);
> +	hantrodec_cleanup();
> +	hantroenc_cleanup();
> +	release_fence_data();
> +	unregister_chrdev(hantro_major, "hantro");
> +	drm_dev_unregister(hantro_dev.drm_dev);
> +	drm_dev_put(hantro_dev.drm_dev);
> +	platform_device_unregister(hantro_dev.platformdev);
> +	platform_driver_unregister(&hantro_drm_platform_driver);
> +}
> +
> +int __init hantro_init(void)
> +{
> +	int result;
> +
> +	hantro_dev.platformdev = NULL;
> +	result = platform_driver_register(&hantro_drm_platform_driver);
> +	if (result < 0)
> +		return result;
> +
> +	if (!hantro_dev.platformdev) {
> +		pr_err("hantro: probe found no platform device\n");
> +		platform_driver_unregister(&hantro_drm_platform_driver);
> +		return -ENODEV;
> +	}
> +
> +	hantro_dev.drm_dev =
> +		drm_dev_alloc(&hantro_drm_driver, &hantro_dev.platformdev->dev);
> +	if (IS_ERR(hantro_dev.drm_dev)) {
> +		pr_info("init drm failed\n");
> +		platform_driver_unregister(&hantro_drm_platform_driver);
> +		return PTR_ERR(hantro_dev.drm_dev);
> +	}
> +
> +	hantro_dev.drm_dev->dev = &hantro_dev.platformdev->dev;
> +	pr_info("hantro device created");
> +
> +	drm_mode_config_init(hantro_dev.drm_dev);
> +	result = drm_dev_register(hantro_dev.drm_dev, 0);
> +	if (result < 0) {
> +		/* registration failed, so only the reference is dropped */
> +		drm_dev_put(hantro_dev.drm_dev);
> +		platform_driver_unregister(&hantro_drm_platform_driver);
> +	} else {
> +		init_fence_data();
> +	}
> +
> +	return result;
> +}
> +
> +module_init(hantro_init);
> +module_exit(hantro_cleanup);
> +
> +/* module description */
> +MODULE_LICENSE("GPL v2");
> +MODULE_AUTHOR("Verisilicon");
> +MODULE_DESCRIPTION("Hantro DRM manager");
> diff --git a/drivers/gpu/drm/hantro/hantro_drm.h b/drivers/gpu/drm/hantro/hantro_drm.h
> new file mode 100644
> index 0000000..13b6f14
> --- /dev/null
> +++ b/drivers/gpu/drm/hantro/hantro_drm.h
> @@ -0,0 +1,208 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + *    Hantro driver public header file.
> + *
> + *    Copyright (c) 2017 - 2020, VeriSilicon Inc.
> + *    Copyright (c) 2020 Intel Corporation
> + */
> +
> +#ifndef HANTRO_H
> +#define HANTRO_H
> +
> +#include <linux/ioctl.h>
> +#include <linux/dma-resv.h>
> +#include <linux/dma-mapping.h>
> +#include <drm/drm_vma_manager.h>
> +#include <drm/drm_gem_cma_helper.h>
> +#include <drm/drm_gem.h>
> +#include <linux/dma-buf.h>
> +#include <drm/drm.h>
> +#include <drm/drm_auth.h>
> +#include <drm/drm_framebuffer.h>
> +#include <drm/drm_drv.h>
> +#include <drm/drm_fourcc.h>
> +#include <linux/version.h>
> +#include <linux/dma-fence.h>
> +#include <linux/platform_device.h>
> +
> +/* basic driver definitions */
> +#define DRIVER_NAME     "hantro"
> +#define DRIVER_DESC     "hantro DRM"
> +#define DRIVER_DATE     "20201008"
> +#define DRIVER_MAJOR    1
> +#define DRIVER_MINOR    0
> +
> +/* these domain definitions are identical to hantro_bufmgr.h */
> +#define HANTRO_DOMAIN_NONE		0x00000
> +#define HANTRO_CPU_DOMAIN		0x00001
> +#define HANTRO_HEVC264_DOMAIN		0x00002
> +#define HANTRO_JPEG_DOMAIN		0x00004
> +#define HANTRO_DECODER0_DOMAIN		0x00008
> +#define HANTRO_DECODER1_DOMAIN		0x00010
> +#define HANTRO_DECODER2_DOMAIN		0x00020
> +#define HANTRO_GEM_FLAG_IMPORT		BIT(0)
> +#define HANTRO_GEM_FLAG_EXPORT		BIT(1)
> +#define HANTRO_GEM_FLAG_EXPORTUSED	BIT(2)
> +#define HANTRO_FENCE_WRITE 1
> +
> +/* dynamic ddr allocation defines */
> +#define DDR0_CHANNEL			0
> +#define DDR1_CHANNEL			1
> +
> +struct hantro_mem {
> +	struct device *dev;	/* Child device managing the memory region. */
> +	void *vaddr;		/* The virtual address of the memory region. */
> +	dma_addr_t dma_handle;	/* The address of the memory region. */
> +	size_t size;		/* The size of the memory region. */
> +};
> +
> +struct hantro_drm_fb {
> +	struct drm_framebuffer fb;
> +	struct drm_gem_object *obj[4];
> +};
> +
> +struct drm_gem_hantro_object {
> +	struct drm_gem_object base;
> +	dma_addr_t paddr;
> +	struct sg_table *sgt;
> +	/* For objects with DMA memory allocated by GEM CMA */
> +	void *vaddr;
> +	struct page *pageaddr;
> +	struct page **pages;
> +	unsigned long num_pages;
> +	/* fence ref */
> +	struct dma_resv kresv;
> +	unsigned int ctxno;
> +	int handle;
> +	int flag;
> +	int ddr_channel;
> +};
> +
> +struct hantro_fencecheck {
> +	unsigned int handle;
> +	int ready;
> +};
> +
> +struct hantro_domainset {
> +	unsigned int handle;
> +	unsigned int writedomain;
> +	unsigned int readdomain;
> +};
> +
> +struct hantro_addrmap {
> +	unsigned int handle;
> +	unsigned long vm_addr;
> +	unsigned long phy_addr;
> +};
> +
> +struct hantro_regtransfer {
> +	unsigned long coreid;
> +	unsigned long offset;
> +	unsigned long size;
> +	const void *data;
> +	int benc; /* encoder core or decoder core */
> +	int direction; /* 0=read, 1=write */
> +};
> +
> +struct hantro_corenum {
> +	unsigned int deccore;
> +	unsigned int enccore;
> +};
> +
> +struct hantro_acquirebuf {
> +	unsigned long handle;
> +	unsigned long flags;
> +	unsigned long timeout;
> +	unsigned long fence_handle;
> +};
> +
> +struct hantro_releasebuf {
> +	unsigned long fence_handle;
> +};
> +
> +struct core_desc {
> +	__u32 id;	/* id of the core */
> +	__u32 *regs;	/* pointer to user registers */
> +	__u32 size;	/* size of register space */
> +	__u32 reg_id;
> +};
> +
> +/* Ioctl definitions */
> +/* hantro drm */
> +#define HANTRO_IOCTL_START (DRM_COMMAND_BASE)
> +#define DRM_IOCTL_HANTRO_TESTCMD DRM_IOWR(HANTRO_IOCTL_START, unsigned int)
> +#define DRM_IOCTL_HANTRO_GETPADDR                                              \
> +	DRM_IOWR(HANTRO_IOCTL_START + 1, struct hantro_addrmap)
> +#define DRM_IOCTL_HANTRO_TESTREADY                                             \
> +	DRM_IOWR(HANTRO_IOCTL_START + 3, struct hantro_fencecheck)
> +#define DRM_IOCTL_HANTRO_SETDOMAIN                                             \
> +	DRM_IOWR(HANTRO_IOCTL_START + 4, struct hantro_domainset)
> +#define DRM_IOCTL_HANTRO_ACQUIREBUF                                            \
> +	DRM_IOWR(HANTRO_IOCTL_START + 6, struct hantro_acquirebuf)
> +#define DRM_IOCTL_HANTRO_RELEASEBUF                                            \
> +	DRM_IOWR(HANTRO_IOCTL_START + 7, struct hantro_releasebuf)
> +#define DRM_IOCTL_HANTRO_GETPRIMEADDR                                          \
> +	DRM_IOWR(HANTRO_IOCTL_START + 8, unsigned long *)
> +#define DRM_IOCTL_HANTRO_PTR_PHYADDR                                           \
> +	DRM_IOWR(HANTRO_IOCTL_START + 9, unsigned long *)
> +
> +/* hantro enc */
> +#define HX280ENC_IOC_START DRM_IO(HANTRO_IOCTL_START + 16)
> +#define HX280ENC_IOCGHWOFFSET DRM_IOR(HANTRO_IOCTL_START + 17, unsigned long *)
> +#define HX280ENC_IOCGHWIOSIZE DRM_IOWR(HANTRO_IOCTL_START + 18, unsigned long *)
> +#define HX280ENC_IOC_CLI DRM_IO(HANTRO_IOCTL_START + 19)
> +#define HX280ENC_IOC_STI DRM_IO(HANTRO_IOCTL_START + 20)
> +#define HX280ENC_IOCHARDRESET                                                  \
> +	DRM_IO(HANTRO_IOCTL_START + 21) /* debugging tool */
> +#define HX280ENC_IOCGSRAMOFFSET                                                \
> +	DRM_IOR(HANTRO_IOCTL_START + 22, unsigned long *)
> +#define HX280ENC_IOCGSRAMEIOSIZE                                               \
> +	DRM_IOR(HANTRO_IOCTL_START + 23, unsigned int *)
> +#define HX280ENC_IOCH_ENC_RESERVE                                              \
> +	DRM_IOR(HANTRO_IOCTL_START + 24, unsigned int *)
> +#define HX280ENC_IOCH_ENC_RELEASE                                              \
> +	DRM_IOR(HANTRO_IOCTL_START + 25, unsigned int *)
> +#define HX280ENC_IOCG_CORE_NUM DRM_IOR(HANTRO_IOCTL_START + 26, unsigned int *)
> +#define HX280ENC_IOCG_CORE_WAIT DRM_IOR(HANTRO_IOCTL_START + 27, unsigned int *)
> +#define HX280ENC_IOC_END DRM_IO(HANTRO_IOCTL_START + 39)
> +
> +/* hantro dec */
> +#define HANTRODEC_IOC_START DRM_IO(HANTRO_IOCTL_START + 40)
> +#define HANTRODEC_PP_INSTANCE DRM_IO(HANTRO_IOCTL_START + 41)
> +#define HANTRODEC_HW_PERFORMANCE DRM_IO(HANTRO_IOCTL_START + 42)
> +#define HANTRODEC_IOCGHWOFFSET DRM_IOR(HANTRO_IOCTL_START + 43, unsigned long *)
> +#define HANTRODEC_IOCGHWIOSIZE DRM_IOR(HANTRO_IOCTL_START + 44, unsigned int *)
> +#define HANTRODEC_IOC_CLI DRM_IO(HANTRO_IOCTL_START + 45)
> +#define HANTRODEC_IOC_STI DRM_IO(HANTRO_IOCTL_START + 46)
> +#define HANTRODEC_IOC_MC_OFFSETS                                               \
> +	DRM_IOR(HANTRO_IOCTL_START + 47, unsigned long *)
> +#define HANTRODEC_IOC_MC_CORES DRM_IOR(HANTRO_IOCTL_START + 48, unsigned int *)
> +#define HANTRODEC_IOCS_DEC_PUSH_REG                                            \
> +	DRM_IOW(HANTRO_IOCTL_START + 49, struct core_desc *)
> +#define HANTRODEC_IOCS_PP_PUSH_REG                                             \
> +	DRM_IOW(HANTRO_IOCTL_START + 50, struct core_desc *)
> +#define HANTRODEC_IOCH_DEC_RESERVE DRM_IO(HANTRO_IOCTL_START + 51)
> +#define HANTRODEC_IOCT_DEC_RELEASE DRM_IO(HANTRO_IOCTL_START + 52)
> +#define HANTRODEC_IOCQ_PP_RESERVE DRM_IO(HANTRO_IOCTL_START + 53)
> +#define HANTRODEC_IOCT_PP_RELEASE DRM_IO(HANTRO_IOCTL_START + 54)
> +#define HANTRODEC_IOCX_DEC_WAIT                                                \
> +	DRM_IOWR(HANTRO_IOCTL_START + 55, struct core_desc *)
> +#define HANTRODEC_IOCX_PP_WAIT                                                 \
> +	DRM_IOWR(HANTRO_IOCTL_START + 56, struct core_desc *)
> +#define HANTRODEC_IOCS_DEC_PULL_REG                                            \
> +	DRM_IOWR(HANTRO_IOCTL_START + 57, struct core_desc *)
> +#define HANTRODEC_IOCS_PP_PULL_REG                                             \
> +	DRM_IOWR(HANTRO_IOCTL_START + 58, struct core_desc *)
> +#define HANTRODEC_IOCG_CORE_WAIT DRM_IOR(HANTRO_IOCTL_START + 59, int *)
> +#define HANTRODEC_IOX_ASIC_ID DRM_IOR(HANTRO_IOCTL_START + 60, __u32 *)
> +#define HANTRODEC_IOCG_CORE_ID DRM_IO(HANTRO_IOCTL_START + 61)
> +#define HANTRODEC_IOCS_DEC_WRITE_REG                                           \
> +	DRM_IOW(HANTRO_IOCTL_START + 62, struct core_desc *)
> +#define HANTRODEC_IOCS_DEC_READ_REG                                            \
> +	DRM_IOWR(HANTRO_IOCTL_START + 63, struct core_desc *)
> +#define HANTRODEC_DEBUG_STATUS DRM_IO(HANTRO_IOCTL_START + 64)
> +#define HANTRODEC_IOX_ASIC_BUILD_ID DRM_IOR(HANTRO_IOCTL_START + 65, __u32 *)
> +#define HANTRODEC_IOC_END DRM_IO(HANTRO_IOCTL_START + 80)
> +
> +#endif /* HANTRO_H */
> diff --git a/drivers/gpu/drm/hantro/hantro_fence.c b/drivers/gpu/drm/hantro/hantro_fence.c
> new file mode 100644
> index 0000000..e009ba9
> --- /dev/null
> +++ b/drivers/gpu/drm/hantro/hantro_fence.c
> @@ -0,0 +1,284 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + *    Hantro driver DMA_BUF fence operation.
> + *
> + *    Copyright (c) 2017 - 2020, VeriSilicon Inc.
> + *    Copyright (c) 2020 Intel Corporation
> + */
> +
> +#include "hantro_priv.h"
> +
> +static unsigned long seqno;
> +DEFINE_IDR(fence_idr);
> +/* fence mutex struct */
> +struct mutex fence_mutex;
> +
> +static const char *hantro_fence_get_driver_name(struct dma_fence *fence)
> +{
> +	return "hantro";
> +}
> +
> +static const char *hantro_fence_get_timeline_name(struct dma_fence *fence)
> +{
> +	return " ";
> +}
> +
> +static bool hantro_fence_enable_signaling(struct dma_fence *fence)
> +{
> +	return test_bit(HANTRO_FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags);
> +}
> +
> +static bool hantro_fence_signaled(struct dma_fence *fobj)
> +{
> +	unsigned long irqflags;
> +	bool ret;
> +
> +	spin_lock_irqsave(fobj->lock, irqflags);
> +	ret = (test_bit(HANTRO_FENCE_FLAG_SIGNAL_BIT, &fobj->flags) != 0);
> +	spin_unlock_irqrestore(fobj->lock, irqflags);
> +
> +	return ret;
> +}
> +
> +static void hantro_fence_free(struct dma_fence *fence)
> +{
> +	kfree(fence->lock);
> +	fence->lock = NULL;
> +	dma_fence_free(fence);
> +}
> +
> +static const struct dma_fence_ops hantro_fenceops = {
> +	.get_driver_name = hantro_fence_get_driver_name,
> +	.get_timeline_name = hantro_fence_get_timeline_name,
> +	.enable_signaling = hantro_fence_enable_signaling,
> +	.signaled = hantro_fence_signaled,
> +	.wait = hantro_fence_default_wait,
> +	.release = hantro_fence_free,
> +};
> +
> +static struct dma_fence *alloc_fence(unsigned int ctxno)
> +{
> +	struct dma_fence *fobj;
> +	/* spinlock for fence */
> +	spinlock_t *lock;
> +
> +	fobj = kzalloc(sizeof(*fobj), GFP_KERNEL);
> +	if (!fobj)
> +		return NULL;
> +	lock = kzalloc(sizeof(*lock), GFP_KERNEL);
> +	if (!lock) {
> +		kfree(fobj);
> +		return NULL;
> +	}
> +
> +	spin_lock_init(lock);
> +	hantro_fence_init(fobj, &hantro_fenceops, lock, ctxno, seqno++);
> +	clear_bit(HANTRO_FENCE_FLAG_SIGNAL_BIT, &fobj->flags);
> +	set_bit(HANTRO_FENCE_FLAG_ENABLE_SIGNAL_BIT, &fobj->flags);
> +
> +	return fobj;
> +}
> +
> +static int is_hantro_fence(struct dma_fence *fence)
> +{
> +	return (fence->ops == &hantro_fenceops);
> +}
> +
> +int init_hantro_resv(struct dma_resv *presv,
> +		     struct drm_gem_hantro_object *cma_obj)
> +{
> +	dma_resv_init(presv);
> +	cma_obj->ctxno = hantro_fence_context_alloc(1);
> +
> +	return 0;
> +}
> +
> +int hantro_waitfence(struct dma_fence *pfence)
> +{
> +	if (test_bit(HANTRO_FENCE_FLAG_SIGNAL_BIT, &pfence->flags))
> +		return 0;
> +
> +	if (is_hantro_fence(pfence))
> +		return 0;
> +	else
> +		return hantro_fence_wait_timeout(pfence, true, 30 * HZ);
> +}
> +
> +int hantro_setdomain(struct drm_device *dev, void *data,
> +		     struct drm_file *file_priv)
> +{
> +	return 0;
> +}
> +
> +void init_fence_data(void)
> +{
> +	seqno = 0;
> +	mutex_init(&fence_mutex);
> +	idr_init(&fence_idr);
> +}
> +
> +static int fence_idr_fini(int id, void *p, void *data)
> +{
> +	hantro_fence_signal(p);
> +	hantro_fence_put(p);
> +
> +	return 0;
> +}
> +
> +void release_fence_data(void)
> +{
> +	mutex_lock(&fence_mutex);
> +	idr_for_each(&fence_idr, fence_idr_fini, NULL);
> +	idr_destroy(&fence_idr);
> +	mutex_unlock(&fence_mutex);
> +}
> +
> +int hantro_acquirebuf(struct drm_device *dev, void *data,
> +		      struct drm_file *file_priv)
> +{
> +	struct hantro_acquirebuf *arg = data;
> +	struct dma_resv *resv;
> +	struct drm_gem_object *obj;
> +	struct dma_fence *fence = NULL;
> +	unsigned long timeout = arg->timeout;
> +	int fenceid = -1;	/* signed so the error path's >= 0 test works */
> +	int ret = 0;
> +
> +	obj = hantro_gem_object_lookup(dev, file_priv, arg->handle);
> +	if (!obj)
> +		return -ENOENT;
> +
> +	if (!obj->dma_buf) {
> +		if (hantro_dev.drm_dev == obj->dev) {
> +			struct drm_gem_hantro_object *hobj =
> +				to_drm_gem_hantro_obj(obj);
> +
> +			resv = &hobj->kresv;
> +		} else {
> +			ret = -ENOENT;
> +			goto err;
> +		}
> +	} else {
> +		resv = obj->dma_buf->resv;
> +	}
> +	/* Wait out any stalled fence; <= 0 covers both timeout and error */
> +	if (dma_resv_wait_timeout_rcu(resv,
> +				      arg->flags & HANTRO_FENCE_WRITE,
> +				      1, timeout) <= 0) {
> +		ret = -EBUSY;
> +		goto err;
> +	}
> +
> +	/* Expose the fence via the dma-buf */
> +	ret = -ENOMEM;
> +	fence = alloc_fence(hantro_fence_context_alloc(1));
> +	if (!fence)
> +		goto err;
> +
> +	mutex_lock(&fence_mutex);
> +	ret = idr_alloc(&fence_idr, fence, 1, 0, GFP_KERNEL);
> +	mutex_unlock(&fence_mutex);
> +	if (ret >= 0)
> +		fenceid = ret;
> +	else
> +		goto err;
> +
> +	dma_resv_lock(resv, NULL);
> +	if (resv->fence_excl  &&
> +	    !hantro_fence_is_signaled(resv->fence_excl)) {
> +		dma_resv_unlock(resv);
> +		ret = -EBUSY;
> +		goto err;
> +	}
> +	ret = 0;
> +	if (arg->flags & HANTRO_FENCE_WRITE) {
> +		dma_resv_add_excl_fence(resv, fence);
> +	} else {
> +		ret = hantro_reserve_obj_shared(resv, 1);
> +		if (ret == 0)
> +			dma_resv_add_shared_fence(resv, fence);
> +	}
> +	dma_resv_unlock(resv);
> +
> +	/* Record the fence in our idr for later signaling */
> +	if (ret == 0) {
> +		arg->fence_handle = fenceid;
> +		hantro_unref_drmobj(obj);
> +
> +		return ret;
> +	}
> +err:
> +	if (fenceid >= 0) {
> +		mutex_lock(&fence_mutex);
> +		idr_remove(&fence_idr, fenceid);
> +		mutex_unlock(&fence_mutex);
> +	}
> +	if (fence) {
> +		hantro_fence_signal(fence);
> +		hantro_fence_put(fence);
> +	}
> +	hantro_unref_drmobj(obj);
> +	return ret;
> +}
> +
> +int hantro_testbufvalid(struct drm_device *dev, void *data,
> +			struct drm_file *file_priv)
> +{
> +	struct hantro_fencecheck *arg = data;
> +	struct dma_resv *resv;
> +	struct drm_gem_object *obj;
> +
> +	arg->ready = 0;
> +	obj = hantro_gem_object_lookup(dev, file_priv, arg->handle);
> +	if (!obj)
> +		return -ENOENT;
> +
> +	if (!obj->dma_buf) {
> +		if (hantro_dev.drm_dev == obj->dev) {
> +			struct drm_gem_hantro_object *hobj =
> +				to_drm_gem_hantro_obj(obj);
> +
> +			resv = &hobj->kresv;
> +		} else {
> +			return -ENOENT;
> +		}
> +	} else {
> +		resv = obj->dma_buf->resv;
> +	}
> +
> +	/* Check for a stalled fence */
> +	if (dma_resv_wait_timeout_rcu(resv, 1, 1, 0) <= 0)
> +		arg->ready = 0;
> +	else
> +		arg->ready = 1;
> +
> +	return 0;
> +}
> +
> +int hantro_releasebuf(struct drm_device *dev, void *data,
> +		      struct drm_file *file_priv)
> +{
> +	struct hantro_releasebuf *arg = data;
> +	struct dma_fence *fence;
> +	int ret = 0;
> +
> +	mutex_lock(&fence_mutex);
> +	fence = idr_replace(&fence_idr, NULL, arg->fence_handle);
> +	mutex_unlock(&fence_mutex);
> +
> +	if (!fence || IS_ERR(fence))
> +		return -ENOENT;
> +	if (hantro_fence_is_signaled(fence))
> +		ret = -ETIMEDOUT;
> +
> +	hantro_fence_signal(fence);
> +	hantro_fence_put(fence);
> +	mutex_lock(&fence_mutex);
> +	idr_remove(&fence_idr, arg->fence_handle);
> +	mutex_unlock(&fence_mutex);
> +
> +	return ret;
> +}
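
Taken together, ACQUIREBUF installs a fence on the buffer's reservation
object and returns its idr id, and RELEASEBUF signals and drops it. A
hypothetical userspace pairing (the timeout value is handed straight to
dma_resv_wait_timeout_rcu(), i.e. jiffies, which is an odd unit for uapi and
worth double-checking):

  #include <sys/ioctl.h>
  #include "hantro_drm.h"

  static int run_fenced_job(int drm_fd, unsigned long handle)
  {
          struct hantro_acquirebuf acq = {
                  .handle  = handle,
                  .flags   = HANTRO_FENCE_WRITE,
                  .timeout = 3000,        /* assumed unit, see above */
          };
          struct hantro_releasebuf rel;

          if (ioctl(drm_fd, DRM_IOCTL_HANTRO_ACQUIREBUF, &acq))
                  return -1;
          /* ... hardware works on the buffer here ... */
          rel.fence_handle = acq.fence_handle;
          return ioctl(drm_fd, DRM_IOCTL_HANTRO_RELEASEBUF, &rel);
  }
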
> diff --git a/drivers/gpu/drm/hantro/hantro_priv.h b/drivers/gpu/drm/hantro/hantro_priv.h
> new file mode 100644
> index 0000000..7257cfd
> --- /dev/null
> +++ b/drivers/gpu/drm/hantro/hantro_priv.h
> @@ -0,0 +1,106 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + *    Hantro driver private header file.
> + *
> + *    Copyright (c) 2017 - 2020, VeriSilicon Inc.
> + *    Copyright (c) 2020 Intel Corporation
> + */
> +
> +#ifndef HANTRO_PRIV_H
> +#define HANTRO_PRIV_H
> +#include "hantro_drm.h"
> +
> +#define hantro_access_ok(a, b, c)	access_ok(b, c)
> +#define hantro_reserve_obj_shared(a, b)	dma_resv_reserve_shared(a, b)
> +#define hantro_ref_drmobj		drm_gem_object_get
> +#define hantro_unref_drmobj		drm_gem_object_put
> +
> +struct hantro_device_handle {
> +	struct platform_device *platformdev; /* parent device */
> +	struct drm_device *drm_dev;
> +	int bprobed;
> +};
> +
> +extern struct hantro_device_handle hantro_dev;
> +
> +#define HANTRO_FENCE_FLAG_ENABLE_SIGNAL_BIT DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT
> +#define HANTRO_FENCE_FLAG_SIGNAL_BIT DMA_FENCE_FLAG_SIGNALED_BIT
> +
> +static inline signed long
> +hantro_fence_default_wait(struct dma_fence *fence, bool intr, signed long timeout)
> +{
> +	return dma_fence_default_wait(fence, intr, timeout);
> +}
> +
> +static inline void hantro_fence_init(struct dma_fence *fence,
> +				     const struct dma_fence_ops *ops,
> +				     spinlock_t *lock, unsigned int context,
> +				     unsigned int seqno)
> +{
> +	return dma_fence_init(fence, ops, lock, context, seqno);
> +}
> +
> +static inline unsigned int hantro_fence_context_alloc(unsigned int num)
> +{
> +	return dma_fence_context_alloc(num);
> +}
> +
> +static inline signed long
> +hantro_fence_wait_timeout(struct dma_fence *fence, bool intr, signed long timeout)
> +{
> +	return dma_fence_wait_timeout(fence, intr, timeout);
> +}
> +
> +static inline struct drm_gem_object *
> +hantro_gem_object_lookup(struct drm_device *dev, struct drm_file *filp,
> +			 u32 handle)
> +{
> +	return drm_gem_object_lookup(filp, handle);
> +}
> +
> +static inline void hantro_fence_put(struct dma_fence *fence)
> +{
> +	return dma_fence_put(fence);
> +}
> +
> +static inline int hantro_fence_signal(struct dma_fence *fence)
> +{
> +	return dma_fence_signal(fence);
> +}
> +
> +static inline void ref_page(struct page *pp)
> +{
> +	atomic_inc(&pp->_refcount);
> +	atomic_inc(&pp->_mapcount);
> +}
> +
> +static inline void unref_page(struct page *pp)
> +{
> +	atomic_dec(&pp->_refcount);
> +	atomic_dec(&pp->_mapcount);
> +}
> +
> +static inline bool hantro_fence_is_signaled(struct dma_fence *fence)
> +{
> +	return dma_fence_is_signaled(fence);
> +}
> +
> +static inline struct drm_gem_hantro_object *
> +to_drm_gem_hantro_obj(struct drm_gem_object *gem_obj)
> +{
> +	return container_of(gem_obj, struct drm_gem_hantro_object, base);
> +}
> +
> +int hantro_setdomain(struct drm_device *dev, void *data,
> +		     struct drm_file *file_priv);
> +int hantro_acquirebuf(struct drm_device *dev, void *data,
> +		      struct drm_file *file_priv);
> +int hantro_testbufvalid(struct drm_device *dev, void *data,
> +			struct drm_file *file_priv);
> +int hantro_releasebuf(struct drm_device *dev, void *data,
> +		      struct drm_file *file_priv);
> +int init_hantro_resv(struct dma_resv *presv,
> +		     struct drm_gem_hantro_object *cma_obj);
> +void init_fence_data(void);
> +void release_fence_data(void);
> +#endif /*HANTRO_PRIV_H*/
> -- 
> 1.9.1
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


* Re: [PATCH v1 1/4] drm: Add Keem Bay VPU codec DRM
  2020-10-09 22:15   ` Daniel Vetter
@ 2020-10-10  8:20     ` Ezequiel Garcia
  0 siblings, 0 replies; 7+ messages in thread
From: Ezequiel Garcia @ 2020-10-10  8:20 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: mgross, Jonas Karlman, Greg KH, Adrian Ratiu, dri-devel,
	Tomasz Figa, kuhanh.murugasen.krishnan, Nicolas Dufresne,
	linux-media

Hello everyone,

(Adding some Hantro developers)

On Fri, 9 Oct 2020 at 19:15, Daniel Vetter <daniel@ffwll.ch> wrote:
>
> On Fri, Oct 09, 2020 at 07:57:52PM +0800, kuhanh.murugasen.krishnan@intel.com wrote:
> > From: "Murugasen Krishnan, Kuhanh" <kuhanh.murugasen.krishnan@intel.com>
> >
> > This is a new DRM media codec driver for Intel's Keem Bay SOC which
> > integrates the Verisilicon's Hantro Video Processor Unit (VPU) IP.
> > The SoC couples an ARM Cortex A53 CPU with an Intel Movidius VPU.
> >
> > Hantro VPU IP is a series of video decoder and encoder semiconductor IP cores,
> > which can be flexibly configured for video surveillance, multimedia consumer
> > products, Internet of Things, cloud service products, data centers, aerial
> > photography and recorders, thereby providing video transcoding and multi-channel
> > HD video encoding and decoding.
> >
> > Hantro VPU IP consists of Hantro VC8000D for decoder and Hantro VC8000E for encoder.
> >

Before you guys even start reviewing or discussing this: good news everyone!
Verisilicon Hantro VPU support has been in mainline for a few releases now.

How about you run a quick "git grep hantro -- drivers/" and see for
yourself :-) ?

Spoiler alert: we currently support G1 core, supporting MPEG-2, H.264, VP8
and some post-processor features.

We are working on G2 for HEVC and VP9, and we have patches ready
for VC8000D for H.264.

Given the VPU is stateless, it requires quite a bit of work on the
application side.
There are implementations in GStreamer (see v4l2codecs plugin), Chromium,
and FFmpeg.

Given that all the stateless codec drivers depend on the stateless controls
API, and that this API is still marked as experimental/unstable, the drivers
are in staging. Other than that, these drivers are just as good as any,
and have been shipping for quite some time now.

I expect to move them out of staging soon, just as soon as we clean and
stabilize this control API.

I will be happy to review patches adding Keem Bay support;
to be honest, I'm unsure what that implies, but we'll see.

Thanks,
Ezequiel
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


end of thread, other threads:[~2020-10-10  8:20 UTC | newest]

Thread overview: 7+ messages
2020-10-09 11:57 [PATCH v1 0/4] Add support for Keem Bay VPU DRM driver kuhanh.murugasen.krishnan
2020-10-09 11:57 ` [PATCH v1 1/4] drm: Add Keem Bay VPU codec DRM kuhanh.murugasen.krishnan
2020-10-09 22:15   ` Daniel Vetter
2020-10-10  8:20     ` Ezequiel Garcia
2020-10-09 11:57 ` [PATCH v1 2/4] drm: hantro: Keem Bay VPU DRM encoder kuhanh.murugasen.krishnan
2020-10-09 11:57 ` [PATCH v1 3/4] drm: hantro: Keem Bay VPU DRM decoder kuhanh.murugasen.krishnan
2020-10-09 11:57 ` [PATCH v1 4/4] drm: hantro: Keem Bay VPU DRM build files kuhanh.murugasen.krishnan
