[RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv
From: Dongwon Kim @ 2017-12-19 19:29 UTC
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

Initial version of the hyper_DMABUF driver, which enables
DMA_BUF exchange between two different VMs on a virtualized
platform based on a hypervisor such as KVM or Xen.

The hyper_DMABUF driver's primary role is to import a DMA_BUF
from the originator and then re-export it to another Linux VM
so that it can be mapped and accessed there.

The functionality of this driver depends heavily on the
hypervisor's native page sharing mechanism and inter-VM
communication support.

This driver has two layers. The upper layer is the main
hyper_DMABUF framework for scatter-gather list management,
which handles the actual import and export of DMA_BUFs. The
lower layer is a hypervisor-specific interface that handles
the actual memory sharing and communication between two VMs.

This driver was initially designed to enable DMA_BUF sharing
across VMs in a Xen environment, so it currently works with
Xen only.

This also adds kernel configuration options for the
hyper_DMABUF driver under Device Drivers->Xen driver
support->hyper_dmabuf options.

A brief description of each source file:

hyper_dmabuf/hyper_dmabuf_conf.h
: configuration info

hyper_dmabuf/hyper_dmabuf_drv.c
: driver interface and initialization

hyper_dmabuf/hyper_dmabuf_imp.c
: scatter-gather list generation and management; DMA_BUF ops
for a DMA_BUF reconstructed from a hyper_DMABUF

hyper_dmabuf/hyper_dmabuf_ioctl.c
: IOCTL handlers for export/import, unexport, and comm
channel creation

hyper_dmabuf/hyper_dmabuf_list.c
: Database (hash table) of exported and imported
hyper_DMABUFs

hyper_dmabuf/hyper_dmabuf_msg.c
: creation and management of messages between exporter and
importer

hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
: communication channel management and ISRs for incoming messages.

hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
: Database (hash table) of information about existing comm
channels among VMs
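
For a sense of how userspace is expected to drive this, below is a
rough sketch of the exporting side (an illustration only: the device
node name follows from the miscdevice registration below, the
importer_domid/dmabuf_fd variables are placeholders, and the ioctl
numbers and argument structs are the ones defined in
hyper_dmabuf_drv.h, not yet split out into a uapi header):

  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include "hyper_dmabuf_drv.h"  /* ioctl numbers and argument structs */

  int fd = open("/dev/xen/hyper_dmabuf", O_RDWR);

  /* set up the shared ring to the importing domain (once) */
  struct ioctl_hyper_dmabuf_exporter_ring_setup ring = {
          .remote_domain = importer_domid,
  };
  ioctl(fd, IOCTL_HYPER_DMABUF_EXPORTER_RING_SETUP, &ring);

  /* export an existing dma-buf fd to the importing domain */
  struct ioctl_hyper_dmabuf_export_remote exp = {
          .dmabuf_fd = dmabuf_fd,
          .remote_domain = importer_domid,
  };
  ioctl(fd, IOCTL_HYPER_DMABUF_EXPORT_REMOTE, &exp);
  /* exp.hyper_dmabuf_id now identifies the buffer across domains */

On the importing side, IOCTL_HYPER_DMABUF_IMPORTER_RING_SETUP and
IOCTL_HYPER_DMABUF_EXPORT_FD turn a received hyper_dmabuf_id back
into a local dma-buf fd.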

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
Signed-off-by: Mateusz Polrola <mateuszx.potrola@intel.com>
---
 drivers/xen/Kconfig                                |   2 +
 drivers/xen/Makefile                               |   1 +
 drivers/xen/hyper_dmabuf/Kconfig                   |  14 +
 drivers/xen/hyper_dmabuf/Makefile                  |  34 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h       |   2 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        |  54 ++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        | 101 +++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c        | 852 +++++++++++++++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h        |  31 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 462 +++++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c       | 119 +++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h       |  40 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        | 212 +++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h        |  45 ++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h      |  16 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h     |  70 ++
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   | 328 ++++++++
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h   |  62 ++
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c  | 106 +++
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h  |  35 +
 20 files changed, 2586 insertions(+)
 create mode 100644 drivers/xen/hyper_dmabuf/Kconfig
 create mode 100644 drivers/xen/hyper_dmabuf/Makefile
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
 create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
 create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
 create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
 create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h

diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index d8dd546..b59b0e3 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -321,4 +321,6 @@ config XEN_SYMS
 config XEN_HAVE_VPMU
        bool
 
+source "drivers/xen/hyper_dmabuf/Kconfig"
+
 endmenu
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index 451e833..a6e253a 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -4,6 +4,7 @@ obj-$(CONFIG_X86)			+= fallback.o
 obj-y	+= grant-table.o features.o balloon.o manage.o preempt.o time.o
 obj-y	+= events/
 obj-y	+= xenbus/
+obj-y	+= hyper_dmabuf/
 
 nostackp := $(call cc-option, -fno-stack-protector)
 CFLAGS_features.o			:= $(nostackp)
diff --git a/drivers/xen/hyper_dmabuf/Kconfig b/drivers/xen/hyper_dmabuf/Kconfig
new file mode 100644
index 0000000..75e1f96
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/Kconfig
@@ -0,0 +1,14 @@
+menu "hyper_dmabuf options"
+
+config HYPER_DMABUF
+	tristate "Enables hyper dmabuf driver"
+	default y
+
+config HYPER_DMABUF_XEN
+	bool "Configure hyper_dmabuf for XEN hypervisor"
+	default y
+	depends on HYPER_DMABUF
+	help
+	  Configuring hyper_dmabuf driver for XEN hypervisor
+
+endmenu
diff --git a/drivers/xen/hyper_dmabuf/Makefile b/drivers/xen/hyper_dmabuf/Makefile
new file mode 100644
index 0000000..0be7445
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/Makefile
@@ -0,0 +1,34 @@
+TARGET_MODULE:=hyper_dmabuf
+
+# If we are invoked by the kernel build system
+ifneq ($(KERNELRELEASE),)
+	$(TARGET_MODULE)-objs := hyper_dmabuf_drv.o \
+                                 hyper_dmabuf_ioctl.o \
+                                 hyper_dmabuf_list.o \
+				 hyper_dmabuf_imp.o \
+				 hyper_dmabuf_msg.o \
+				 xen/hyper_dmabuf_xen_comm.o \
+				 xen/hyper_dmabuf_xen_comm_list.o
+
+obj-$(CONFIG_HYPER_DMABUF) := $(TARGET_MODULE).o
+
+# If we are running without the kernel build system
+else
+BUILDSYSTEM_DIR?=../../../
+PWD:=$(shell pwd)
+
+all:
+# run kernel build system to make module
+	$(MAKE) -C $(BUILDSYSTEM_DIR) M=$(PWD) modules
+
+clean:
+# run kernel build system to clean up in the current directory
+	$(MAKE) -C $(BUILDSYSTEM_DIR) M=$(PWD) clean
+
+load:
+	insmod ./$(TARGET_MODULE).ko
+
+unload:
+	rmmod ./$(TARGET_MODULE).ko
+
+endif
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
new file mode 100644
index 0000000..3d9b2d6
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
@@ -0,0 +1,2 @@
+#define CURRENT_TARGET XEN
+#define INTER_DOMAIN_DMABUF_SYNCHRONIZATION
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
new file mode 100644
index 0000000..0698327
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -0,0 +1,54 @@
+#include <linux/init.h>       /* module_init, module_exit */
+#include <linux/module.h> /* version info, MODULE_LICENSE, MODULE_AUTHOR, printk() */
+#include "hyper_dmabuf_conf.h"
+#include "hyper_dmabuf_list.h"
+#include "xen/hyper_dmabuf_xen_comm_list.h"
+
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_AUTHOR("IOTG-PED, INTEL");
+
+int register_device(void);
+int unregister_device(void);
+
+/*===============================================================================================*/
+static int hyper_dmabuf_drv_init(void)
+{
+	int ret = 0;
+
+	printk( KERN_NOTICE "hyper_dmabuf_starting: Initialization started" );
+
+	ret = register_device();
+	if (ret < 0) {
+		return -EINVAL;
+	}
+
+	printk( KERN_NOTICE "initializing database for imported/exported dmabufs\n");
+
+	ret = hyper_dmabuf_table_init();
+	if (ret < 0) {
+		return -EINVAL;
+	}
+
+	ret = hyper_dmabuf_ring_table_init();
+	if (ret < 0) {
+		return -EINVAL;
+	}
+
+	/* interrupt for comm should be registered here: */
+	return ret;
+}
+
+/*-----------------------------------------------------------------------------------------------*/
+static void hyper_dmabuf_drv_exit(void)
+{
+	/* hash tables for export/import entries and ring_infos */
+	hyper_dmabuf_table_destroy();
+	hyper_dmabuf_ring_table_destroy();
+
+	printk( KERN_NOTICE "dma_buf-src_sink model: Exiting" );
+	unregister_device();
+}
+/*===============================================================================================*/
+
+module_init(hyper_dmabuf_drv_init);
+module_exit(hyper_dmabuf_drv_exit);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
new file mode 100644
index 0000000..2dad9a6
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -0,0 +1,101 @@
+#ifndef __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
+#define __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
+
+typedef int (*hyper_dmabuf_ioctl_t)(void *data);
+
+struct hyper_dmabuf_ioctl_desc {
+	unsigned int cmd;
+	int flags;
+	hyper_dmabuf_ioctl_t func;
+	const char *name;
+};
+
+#define HYPER_DMABUF_IOCTL_DEF(ioctl, _func, _flags) 	\
+	[_IOC_NR(ioctl)] = {				\
+			.cmd = ioctl,			\
+			.func = _func,			\
+			.flags = _flags,		\
+			.name = #ioctl			\
+	}
+
+#define IOCTL_HYPER_DMABUF_EXPORTER_RING_SETUP \
+_IOC(_IOC_NONE, 'G', 0, sizeof(struct ioctl_hyper_dmabuf_exporter_ring_setup))
+struct ioctl_hyper_dmabuf_exporter_ring_setup {
+	/* IN parameters */
+	/* Remote domain id */
+	uint32_t remote_domain;
+	grant_ref_t ring_refid; /* assigned by driver, copied to userspace after initialization */
+	uint32_t port; /* assigned by driver, copied to userspace after initialization */
+};
+
+#define IOCTL_HYPER_DMABUF_IMPORTER_RING_SETUP \
+_IOC(_IOC_NONE, 'G', 1, sizeof(struct ioctl_hyper_dmabuf_importer_ring_setup))
+struct ioctl_hyper_dmabuf_importer_ring_setup {
+	/* IN parameters */
+	/* Source domain id */
+	uint32_t source_domain;
+	/* Ring shared page refid */
+	grant_ref_t ring_refid;
+	/* Port number */
+	uint32_t port;
+};
+
+#define IOCTL_HYPER_DMABUF_EXPORT_REMOTE \
+_IOC(_IOC_NONE, 'G', 2, sizeof(struct ioctl_hyper_dmabuf_export_remote))
+struct ioctl_hyper_dmabuf_export_remote {
+	/* IN parameters */
+	/* DMA buf fd to be exported */
+	uint32_t dmabuf_fd;
+	/* Domain id to which buffer should be exported */
+	uint32_t remote_domain;
+	/* exported dma buf id */
+	uint32_t hyper_dmabuf_id;
+	uint32_t private[4];
+};
+
+#define IOCTL_HYPER_DMABUF_EXPORT_FD \
+_IOC(_IOC_NONE, 'G', 3, sizeof(struct ioctl_hyper_dmabuf_export_fd))
+struct ioctl_hyper_dmabuf_export_fd {
+	/* IN parameters */
+	/* hyper dmabuf id to be imported */
+	uint32_t hyper_dmabuf_id;
+	/* flags */
+	uint32_t flags;
+	/* OUT parameters */
+	/* exported dma buf fd */
+	uint32_t fd;
+};
+
+#define IOCTL_HYPER_DMABUF_DESTROY \
+_IOC(_IOC_NONE, 'G', 4, sizeof(struct ioctl_hyper_dmabuf_destroy))
+struct ioctl_hyper_dmabuf_destroy {
+	/* IN parameters */
+	/* hyper dmabuf id to be destroyed */
+	uint32_t hyper_dmabuf_id;
+	/* OUT parameters */
+	/* Status of request */
+	uint32_t status;
+};
+
+#define IOCTL_HYPER_DMABUF_QUERY \
+_IOC(_IOC_NONE, 'G', 5, sizeof(struct ioctl_hyper_dmabuf_query))
+struct ioctl_hyper_dmabuf_query {
+	/* in parameters */
+	/* hyper dmabuf id to be queried */
+	uint32_t hyper_dmabuf_id;
+	/* item to be queried */
+	uint32_t item;
+	/* OUT parameters */
+	/* Value of queried item */
+	uint32_t info;
+};
+
+#define IOCTL_HYPER_DMABUF_REMOTE_EXPORTER_RING_SETUP \
+_IOC(_IOC_NONE, 'G', 6, sizeof(struct ioctl_hyper_dmabuf_remote_exporter_ring_setup))
+struct ioctl_hyper_dmabuf_remote_exporter_ring_setup {
+	/* in parameters */
+	uint32_t rdomain; /* id of remote domain where exporter's ring need to be setup */
+	uint32_t info;
+};
+
+#endif //__LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
new file mode 100644
index 0000000..faa5c1b
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
@@ -0,0 +1,852 @@
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/dma-buf.h>
+#include <xen/grant_table.h>
+#include <asm/xen/page.h>
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_imp.h"
+#include "xen/hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_msg.h"
+
+#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
+
+/* return total number of pages referenced by a sgt
+ * for pre-calculation of # of pages behind a given sgt
+ */
+static int hyper_dmabuf_get_num_pgs(struct sg_table *sgt)
+{
+	struct scatterlist *sgl;
+	int length, i;
+	/* at least one page */
+	int num_pages = 1;
+
+	sgl = sgt->sgl;
+
+	length = sgl->length - PAGE_SIZE + sgl->offset;
+	num_pages += ((length + PAGE_SIZE - 1)/PAGE_SIZE); /* round-up */
+
+	for (i = 1; i < sgt->nents; i++) {
+		sgl = sg_next(sgl);
+		num_pages += ((sgl->length + PAGE_SIZE - 1) / PAGE_SIZE); /* round-up */
+	}
+
+	return num_pages;
+}
+
+/* extract pages directly from struct sg_table */
+struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
+{
+	struct hyper_dmabuf_pages_info *pinfo;
+	int i, j;
+	int length;
+	struct scatterlist *sgl;
+
+	pinfo = kmalloc(sizeof(*pinfo), GFP_KERNEL);
+	if (pinfo == NULL)
+		return NULL;
+
+	pinfo->pages = kmalloc(sizeof(struct page *)*hyper_dmabuf_get_num_pgs(sgt), GFP_KERNEL);
+	if (pinfo->pages == NULL)
+		return NULL;
+
+	sgl = sgt->sgl;
+
+	pinfo->nents = 1;
+	pinfo->frst_ofst = sgl->offset;
+	pinfo->pages[0] = sg_page(sgl);
+	length = sgl->length - PAGE_SIZE + sgl->offset;
+	i=1;
+
+	while (length > 0) {
+		pinfo->pages[i] = nth_page(sg_page(sgl), i);
+		length -= PAGE_SIZE;
+		pinfo->nents++;
+		i++;
+	}
+
+	for (j = 1; j < sgt->nents; j++) {
+		sgl = sg_next(sgl);
+		pinfo->pages[i++] = sg_page(sgl);
+		length = sgl->length - PAGE_SIZE;
+		pinfo->nents++;
+
+		while (length > 0) {
+			pinfo->pages[i] = nth_page(sg_page(sgl), i);
+			length -= PAGE_SIZE;
+			pinfo->nents++;
+			i++;
+		}
+	}
+
+	/*
+	 * length at this point will be 0 or negative,
+	 * so to calculate the last page size just add it to PAGE_SIZE
+	 */
+	pinfo->last_len = PAGE_SIZE + length;
+
+	return pinfo;
+}
+
+/* create sg_table with given pages and other parameters */
+struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
+				int frst_ofst, int last_len, int nents)
+{
+	struct sg_table *sgt;
+	struct scatterlist *sgl;
+	int i, ret;
+
+	sgt = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
+	if (sgt == NULL) {
+		return NULL;
+	}
+
+	ret = sg_alloc_table(sgt, nents, GFP_KERNEL);
+	if (ret) {
+		kfree(sgt);
+		return NULL;
+	}
+
+	sgl = sgt->sgl;
+
+	sg_set_page(sgl, pages[0], PAGE_SIZE-frst_ofst, frst_ofst);
+
+	for (i=1; i<nents-1; i++) {
+		sgl = sg_next(sgl);
+		sg_set_page(sgl, pages[i], PAGE_SIZE, 0);
+	}
+
+	if (i > 1) /* more than one page */ {
+		sgl = sg_next(sgl);
+		sg_set_page(sgl, pages[i], last_len, 0);
+	}
+
+	return sgt;
+}
+
+/*
+ * Creates 2 level page directory structure for referencing shared pages.
+ * Top level page is a single page that contains up to 1024 refids that
+ * point to 2nd level pages.
+ * Each 2nd level page contains up to 1024 refids that point to shared
+ * data pages.
+ * There will always be one top level page and number of 2nd level pages
+ * depends on number of shared data pages.
+ *
+ *      Top level page                2nd level pages            Data pages
+ * +-------------------------+   ┌>+--------------------+ ┌--->+------------+
+ * |2nd level page 0 refid   |---┘ |Data page 0 refid   |-┘    |Data page 0 |
+ * |2nd level page 1 refid   |---┐ |Data page 1 refid   |-┐    +------------+
+ * |           ...           |   | |     ....           | |
+ * |2nd level page 1023 refid|-┐ | |Data page 1023 refid| └--->+------------+
+ * +-------------------------+ | | +--------------------+      |Data page 1 |
+ *                             | |                             +------------+
+ *                             | └>+--------------------+
+ *                             |   |Data page 1024 refid|
+ *                             |   |Data page 1025 refid|
+ *                             |   |       ...          |
+ *                             |   |Data page 2047 refid|
+ *                             |   +--------------------+
+ *                             |
+ *                             |        .....
+ *                             └-->+-----------------------+
+ *                                 |Data page 1047552 refid|
+ *                                 |Data page 1047553 refid|
+ *                                 |       ...             |
+ *                                 |Data page 1048575 refid|-->+------------------+
+ *                                 +-----------------------+   |Data page 1048575 |
+ *                                                             +------------------+
+ *
+ * Using such 2 level structure it is possible to reference up to 4GB of
+ * shared data using single refid pointing to top level page.
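+ *
+ * (With 4KB pages and 4-byte grant_ref_t entries, REFS_PER_PAGE works
+ * out to 1024, so one top level page addresses up to 1024 * 1024 data
+ * pages, i.e. 1024 * 1024 * 4KB = 4GB.)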
+ *
+ * Returns refid of top level page.
+ */
+grant_ref_t hyper_dmabuf_create_addressing_tables(grant_ref_t *data_refs, int nents, int rdomain,
+						  struct hyper_dmabuf_shared_pages_info *shared_pages_info)
+{
+	/*
+	 * Calculate number of pages needed for 2nd level addressing:
+	 */
+	int n_2nd_level_pages = (nents/REFS_PER_PAGE + ((nents % REFS_PER_PAGE) ? 1: 0));/* rounding */
+	int i;
+	unsigned long gref_page_start;
+	grant_ref_t *tmp_page;
+	grant_ref_t top_level_ref;
+	grant_ref_t * addr_refs;
+	addr_refs = kcalloc(sizeof(grant_ref_t), n_2nd_level_pages, GFP_KERNEL);
+
+	gref_page_start = __get_free_pages(GFP_KERNEL, n_2nd_level_pages);
+	tmp_page = (grant_ref_t *)gref_page_start;
+
+	/* Store 2nd level pages to be freed later */
+	shared_pages_info->addr_pages = tmp_page;
+
+	/*TODO: make sure that allocated memory is filled with 0*/
+
+	/* Share 2nd level addressing pages in readonly mode*/
+	for (i=0; i< n_2nd_level_pages; i++) {
+		addr_refs[i] = gnttab_grant_foreign_access(rdomain, virt_to_mfn((unsigned long)tmp_page+i*PAGE_SIZE ), 1);
+	}
+
+	/*
+	 * fill second level pages with data refs
+	 */
+	for (i = 0; i < nents; i++) {
+		tmp_page[i] = data_refs[i];
+	}
+
+
+	/* allocate top level page */
+	gref_page_start = __get_free_pages(GFP_KERNEL, 1);
+	tmp_page = (grant_ref_t *)gref_page_start;
+
+	/* Store top level page to be freed later */
+	shared_pages_info->top_level_page = tmp_page;
+
+	/*
+	 * fill top level page with reference numbers of second level pages refs.
+	 */
+	for (i=0; i< n_2nd_level_pages; i++) {
+		tmp_page[i] =  addr_refs[i];
+	}
+
+	/* Share top level addressing page in readonly mode*/
+	top_level_ref = gnttab_grant_foreign_access(rdomain, virt_to_mfn((unsigned long)tmp_page), 1);
+
+	kfree(addr_refs);
+
+	return top_level_ref;
+}
+
+/*
+ * Map the provided top level ref id and then return an array of pages containing data refs.
+ */
+struct page** hyper_dmabuf_get_data_refs(grant_ref_t top_level_ref, int domid, int nents,
+					 struct hyper_dmabuf_shared_pages_info *shared_pages_info)
+{
+	struct page *top_level_page;
+	struct page **level2_pages;
+
+	grant_ref_t *top_level_refs;
+
+	struct gnttab_map_grant_ref top_level_map_ops;
+	struct gnttab_unmap_grant_ref top_level_unmap_ops;
+
+	struct gnttab_map_grant_ref *map_ops;
+	struct gnttab_unmap_grant_ref *unmap_ops;
+
+	unsigned long addr;
+	int n_level2_refs = 0;
+	int i;
+
+	n_level2_refs = (nents / REFS_PER_PAGE) + ((nents % REFS_PER_PAGE) ? 1 : 0);
+
+	level2_pages = kcalloc(sizeof(struct page*), n_level2_refs, GFP_KERNEL);
+
+	map_ops = kcalloc(sizeof(map_ops[0]), REFS_PER_PAGE, GFP_KERNEL);
+	unmap_ops = kcalloc(sizeof(unmap_ops[0]), REFS_PER_PAGE, GFP_KERNEL);
+
+	/* Map top level addressing page */
+	if (gnttab_alloc_pages(1, &top_level_page)) {
+		printk("Cannot allocate pages\n");
+		return NULL;
+	}
+
+	addr = (unsigned long)pfn_to_kaddr(page_to_pfn(top_level_page));
+	gnttab_set_map_op(&top_level_map_ops, addr, GNTMAP_host_map | GNTMAP_readonly, top_level_ref, domid);
+	gnttab_set_unmap_op(&top_level_unmap_ops, addr, GNTMAP_host_map | GNTMAP_readonly, -1);
+
+	if (gnttab_map_refs(&top_level_map_ops, NULL, &top_level_page, 1)) {
+		printk("\nxen: dom0: HYPERVISOR map grant ref failed");
+		return NULL;
+	}
+
+	if (top_level_map_ops.status) {
+		printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d",
+				top_level_map_ops.status);
+		return NULL;
+	} else {
+		top_level_unmap_ops.handle = top_level_map_ops.handle;
+	}
+
+	/* Parse contents of top level addressing page to find how many second level pages are there */
+	top_level_refs = pfn_to_kaddr(page_to_pfn(top_level_page));
+
+	/* Map all second level pages */
+	if (gnttab_alloc_pages(n_level2_refs, level2_pages)) {
+		printk("Cannot allocate pages\n");
+		return NULL;
+	}
+
+	for (i = 0; i < n_level2_refs; i++) {
+		addr = (unsigned long)pfn_to_kaddr(page_to_pfn(level2_pages[i]));
+		gnttab_set_map_op(&map_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, top_level_refs[i], domid);
+		gnttab_set_unmap_op(&unmap_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, -1);
+	}
+
+	if (gnttab_map_refs(map_ops, NULL, level2_pages, n_level2_refs)) {
+		printk("\nxen: dom0: HYPERVISOR map grant ref failed");
+		return NULL;
+	}
+
+	/* Check if pages were mapped correctly, and at the same time calculate the total number of data refids */
+	for (i = 0; i < n_level2_refs; i++) {
+		if (map_ops[i].status) {
+			printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d",
+					map_ops[i].status);
+			return NULL;
+		} else {
+			unmap_ops[i].handle = map_ops[i].handle;
+		}
+	}
+
+	/* Unmap top level page, as it won't be needed any longer */
+	if (gnttab_unmap_refs(&top_level_unmap_ops, NULL, &top_level_page, 1)) {
+		printk("\xen: cannot unmap top level page\n");
+		return NULL;
+	}
+
+	gnttab_free_pages(1, &top_level_page);
+	kfree(map_ops);
+	shared_pages_info->unmap_ops = unmap_ops;
+
+	return level2_pages;
+}
+
+
+/* This collects all reference numbers for 2nd level shared pages and creates
+ * a table with those in 1st level shared pages, then returns the reference
+ * number for this top level table. */
+grant_ref_t hyper_dmabuf_create_gref_table(struct page **pages, int rdomain, int nents,
+					   struct hyper_dmabuf_shared_pages_info *shared_pages_info)
+{
+	int i = 0;
+	grant_ref_t *data_refs;
+	grant_ref_t top_level_ref;
+
+	/* allocate temp array for refs of shared data pages */
+	data_refs = kcalloc(nents, sizeof(grant_ref_t), GFP_KERNEL);
+
+	/* share data pages in rw mode*/
+	for (i=0; i<nents; i++) {
+		data_refs[i] = gnttab_grant_foreign_access(rdomain, pfn_to_mfn(page_to_pfn(pages[i])), 0);
+	}
+
+	/* create additional shared pages with 2 level addressing of data pages */
+	top_level_ref = hyper_dmabuf_create_addressing_tables(data_refs, nents, rdomain,
+							      shared_pages_info);
+
+	/* Store exported pages refid to be unshared later */
+	shared_pages_info->data_refs = data_refs;
+	shared_pages_info->top_level_ref = top_level_ref;
+
+	return top_level_ref;
+}
+
+int hyper_dmabuf_cleanup_gref_table(struct hyper_dmabuf_sgt_info *sgt_info) {
+	uint32_t i = 0;
+	struct hyper_dmabuf_shared_pages_info *shared_pages_info = &sgt_info->shared_pages_info;
+
+	grant_ref_t *ref = shared_pages_info->top_level_page;
+	int n_2nd_level_pages = (sgt_info->sgt->nents/REFS_PER_PAGE + ((sgt_info->sgt->nents % REFS_PER_PAGE) ? 1: 0));/* rounding */
+
+
+	if (shared_pages_info->data_refs == NULL ||
+	    shared_pages_info->addr_pages ==  NULL ||
+	    shared_pages_info->top_level_page == NULL ||
+	    shared_pages_info->top_level_ref == -1) {
+		printk("gref table for hyper_dmabuf already cleaned up\n");
+		return 0;
+	}
+
+	/* End foreign access for 2nd level addressing pages */
+	while (i < n_2nd_level_pages && ref[i] != 0) {
+		if (gnttab_query_foreign_access(ref[i])) {
+			printk("refid not shared !!\n");
+		}
+		if (!gnttab_end_foreign_access_ref(ref[i], 1)) {
+			printk("refid still in use!!!\n");
+		}
+		i++;
+	}
+	free_pages((unsigned long)shared_pages_info->addr_pages, i);
+
+	/* End foreign access for top level addressing page */
+	if (gnttab_query_foreign_access(shared_pages_info->top_level_ref)) {
+		printk("refid not shared !!\n");
+	}
+	if (!gnttab_end_foreign_access_ref(shared_pages_info->top_level_ref, 1)) {
+		printk("refid still in use!!!\n");
+	}
+	free_pages((unsigned long)shared_pages_info->top_level_page, 1);
+
+	/* End foreign access for data pages, but do not free them */
+	for (i = 0; i < sgt_info->sgt->nents; i++) {
+		if (gnttab_query_foreign_access(shared_pages_info->data_refs[i])) {
+			printk("refid not shared !!\n");
+		}
+		gnttab_end_foreign_access_ref(shared_pages_info->data_refs[i], 0);
+	}
+
+	kfree(shared_pages_info->data_refs);
+
+	shared_pages_info->data_refs = NULL;
+	shared_pages_info->addr_pages = NULL;
+	shared_pages_info->top_level_page = NULL;
+	shared_pages_info->top_level_ref = -1;
+
+	return 0;
+}
+
+int hyper_dmabuf_cleanup_imported_pages(struct hyper_dmabuf_imported_sgt_info *sgt_info) {
+	struct hyper_dmabuf_shared_pages_info *shared_pages_info = &sgt_info->shared_pages_info;
+
+	if(shared_pages_info->unmap_ops == NULL || shared_pages_info->data_pages == NULL) {
+		printk("Imported pages already cleaned up or buffer was not imported yet\n");
+		return 0;
+	}
+
+	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL, shared_pages_info->data_pages, sgt_info->nents) ) {
+		printk("Cannot unmap data pages\n");
+		return -EINVAL;
+	}
+
+	gnttab_free_pages(sgt_info->nents, shared_pages_info->data_pages);
+	kfree(shared_pages_info->data_pages);
+	kfree(shared_pages_info->unmap_ops);
+	shared_pages_info->unmap_ops = NULL;
+	shared_pages_info->data_pages = NULL;
+
+	return 0;
+}
+
+/* map and construct sg_lists from reference numbers */
+struct sg_table* hyper_dmabuf_map_pages(grant_ref_t top_level_gref, int frst_ofst, int last_len, int nents, int sdomain,
+					struct hyper_dmabuf_shared_pages_info *shared_pages_info)
+{
+	struct sg_table *st;
+	struct page **pages;
+	struct gnttab_map_grant_ref *ops;
+	struct gnttab_unmap_grant_ref *unmap_ops;
+	unsigned long addr;
+	grant_ref_t *refs;
+	int i;
+	int n_level2_refs = (nents / REFS_PER_PAGE) + ((nents % REFS_PER_PAGE) ? 1 : 0);
+
+	/* Get data refids */
+	struct page** refid_pages = hyper_dmabuf_get_data_refs(top_level_gref, sdomain, nents,
+							       shared_pages_info);
+
+	pages = kcalloc(sizeof(struct page*), nents, GFP_KERNEL);
+	if (pages == NULL) {
+		return NULL;
+	}
+
+	/* allocate new pages that are mapped to shared pages via grant-table */
+	if (gnttab_alloc_pages(nents, pages)) {
+		printk("Cannot allocate pages\n");
+		return NULL;
+	}
+
+	ops = (struct gnttab_map_grant_ref *)kcalloc(nents, sizeof(struct gnttab_map_grant_ref), GFP_KERNEL);
+	unmap_ops = (struct gnttab_unmap_grant_ref *)kcalloc(nents, sizeof(struct gnttab_unmap_grant_ref), GFP_KERNEL);
+
+	for (i=0; i<nents; i++) {
+		addr = (unsigned long)pfn_to_kaddr(page_to_pfn(pages[i]));
+		refs = pfn_to_kaddr(page_to_pfn(refid_pages[i / REFS_PER_PAGE]));
+		gnttab_set_map_op(&ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, refs[i % REFS_PER_PAGE], sdomain);
+		gnttab_set_unmap_op(&unmap_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, -1);
+	}
+
+	if (gnttab_map_refs(ops, NULL, pages, nents)) {
+		printk("\nxen: dom0: HYPERVISOR map grant ref failed\n");
+		return NULL;
+	}
+
+	for (i=0; i<nents; i++) {
+		if (ops[i].status) {
+			printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d\n",
+				ops[i].status);
+			return NULL;
+		} else {
+			unmap_ops[i].handle = ops[i].handle;
+		}
+	}
+
+	st = hyper_dmabuf_create_sgt(pages, frst_ofst, last_len, nents);
+
+	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL, refid_pages, n_level2_refs) ) {
+		printk("Cannot unmap 2nd level refs\n");
+		return NULL;
+	}
+
+	gnttab_free_pages(n_level2_refs, refid_pages);
+	kfree(refid_pages);
+
+	kfree(shared_pages_info->unmap_ops);
+	shared_pages_info->unmap_ops = unmap_ops;
+	shared_pages_info->data_pages = pages;
+	kfree(ops);
+
+	return st;
+}
+
+inline int hyper_dmabuf_sync_request_and_wait(int id, int ops)
+{
+	struct hyper_dmabuf_ring_rq *req;
+	int operands[2];
+	int ret;
+
+	operands[0] = id;
+	operands[1] = ops;
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+
+	hyper_dmabuf_create_request(req, HYPER_DMABUF_OPS_TO_SOURCE, &operands[0]);
+
+	/* send request */
+	ret = hyper_dmabuf_send_request(id, req);
+
+	/* TODO: wait until it gets response.. or can we just move on? */
+
+	kfree(req);
+
+	return ret;
+}
+
+static int hyper_dmabuf_ops_attach(struct dma_buf* dmabuf, struct device* dev,
+			struct dma_buf_attachment *attach)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!attach->dmabuf->priv)
+		return -EINVAL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_ATTACH);
+
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+
+	return ret;
+}
+
+static void hyper_dmabuf_ops_detach(struct dma_buf* dmabuf, struct dma_buf_attachment *attach)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!attach->dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_DETACH);
+
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+}
+
+static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachment,
+						enum dma_data_direction dir)
+{
+	struct sg_table *st;
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct hyper_dmabuf_pages_info *page_info;
+	int ret;
+
+	if (!attachment->dmabuf->priv)
+		return NULL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attachment->dmabuf->priv;
+
+	/* extract pages from sgt */
+	page_info = hyper_dmabuf_ext_pgs(sgt_info->sgt);
+
+	/* create a new sg_table with extracted pages */
+	st = hyper_dmabuf_create_sgt(page_info->pages, page_info->frst_ofst,
+				page_info->last_len, page_info->nents);
+	if (st == NULL)
+		return NULL;
+
+	if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir)) {
+		goto err_free_sg;
+	}
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_MAP);
+
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+
+	return st;
+
+err_free_sg:
+	sg_free_table(st);
+	kfree(st);
+	return NULL;
+}
+
+static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
+						struct sg_table *sg,
+						enum dma_data_direction dir)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!attachment->dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attachment->dmabuf->priv;
+
+	dma_unmap_sg(attachment->dev, sg->sgl, sg->nents, dir);
+
+	sg_free_table(sg);
+	kfree(sg);
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_UNMAP);
+
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+}
+
+static void hyper_dmabuf_ops_release(struct dma_buf *dmabuf)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_RELEASE);
+
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+}
+
+static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return -EINVAL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+
+	return ret;
+}
+
+static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return -EINVAL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_END_CPU_ACCESS);
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+
+	return 0;
+}
+
+static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return NULL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_KMAP_ATOMIC);
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+
+	return NULL; /* for now NULL.. need to return the address of mapped region */
+}
+
+static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum, void *vaddr)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+}
+
+static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return NULL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_KMAP);
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+
+	return NULL; /* for now NULL.. need to return the address of mapped region */
+}
+
+static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum, void *vaddr)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_KUNMAP);
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+}
+
+static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return -EINVAL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_MMAP);
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+
+	return ret;
+}
+
+static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return NULL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_VMAP);
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+
+	return NULL;
+}
+
+static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_VUNMAP);
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+}
+
+static const struct dma_buf_ops hyper_dmabuf_ops = {
+		.attach = hyper_dmabuf_ops_attach,
+		.detach = hyper_dmabuf_ops_detach,
+		.map_dma_buf = hyper_dmabuf_ops_map,
+		.unmap_dma_buf = hyper_dmabuf_ops_unmap,
+		.release = hyper_dmabuf_ops_release,
+		.begin_cpu_access = hyper_dmabuf_ops_begin_cpu_access,
+		.end_cpu_access = hyper_dmabuf_ops_end_cpu_access,
+		.map_atomic = hyper_dmabuf_ops_kmap_atomic,
+		.unmap_atomic = hyper_dmabuf_ops_kunmap_atomic,
+		.map = hyper_dmabuf_ops_kmap,
+		.unmap = hyper_dmabuf_ops_kunmap,
+		.mmap = hyper_dmabuf_ops_mmap,
+		.vmap = hyper_dmabuf_ops_vmap,
+		.vunmap = hyper_dmabuf_ops_vunmap,
+};
+
+/* exporting dmabuf as fd */
+int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags)
+{
+	int fd;
+
+	struct dma_buf* dmabuf;
+
+	/* call hyper_dmabuf_export_dma_buf to create and bind a handle for
+	 * it, then release */
+
+	dmabuf = hyper_dmabuf_export_dma_buf(dinfo);
+
+	fd = dma_buf_fd(dmabuf, flags);
+
+	return fd;
+}
+
+struct dma_buf* hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo)
+{
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+
+	exp_info.ops = &hyper_dmabuf_ops;
+	exp_info.size = dinfo->sgt->nents * PAGE_SIZE; /* multiple of PAGE_SIZE, not considering offset */
+	exp_info.flags = /* not sure about flag */0;
+	exp_info.priv = dinfo;
+
+	return dma_buf_export(&exp_info);
+}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
new file mode 100644
index 0000000..003c158
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
@@ -0,0 +1,31 @@
+#ifndef __HYPER_DMABUF_IMP_H__
+#define __HYPER_DMABUF_IMP_H__
+
+#include "hyper_dmabuf_struct.h"
+
+/* extract pages directly from struct sg_table */
+struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt);
+
+/* create sg_table with given pages and other parameters */
+struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
+                                int frst_ofst, int last_len, int nents);
+
+grant_ref_t hyper_dmabuf_create_gref_table(struct page **pages, int rdomain, int nents,
+					   struct hyper_dmabuf_shared_pages_info *shared_pages_info);
+
+int hyper_dmabuf_cleanup_gref_table(struct hyper_dmabuf_sgt_info *sgt_info);
+
+int hyper_dmabuf_cleanup_imported_pages(struct hyper_dmabuf_imported_sgt_info *sgt_info);
+
+/* map first level tables that contains reference numbers for actual shared pages */
+grant_ref_t *hyper_dmabuf_map_gref_table(grant_ref_t *gref_table, int n_pages_table);
+
+/* map and construct sg_lists from reference numbers */
+struct sg_table* hyper_dmabuf_map_pages(grant_ref_t gref, int frst_ofst, int last_len, int nents, int sdomain,
+					struct hyper_dmabuf_shared_pages_info *shared_pages_info);
+
+int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags);
+
+struct dma_buf* hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo);
+
+#endif /* __HYPER_DMABUF_IMP_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
new file mode 100644
index 0000000..5e50908
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -0,0 +1,462 @@
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/miscdevice.h>
+#include <linux/uaccess.h>
+#include <linux/dma-buf.h>
+#include <linux/delay.h>
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_imp.h"
+#include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_query.h"
+#include "xen/hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_msg.h"
+
+struct hyper_dmabuf_private {
+	struct device *device;
+} hyper_dmabuf_private;
+
+static uint32_t hyper_dmabuf_id_gen(void) {
+	/* TODO: add proper implementation */
+	static uint32_t id = 0;
+	static int32_t domid = -1;
+	if (domid == -1) {
+		domid = hyper_dmabuf_get_domid();
+	}
+	return HYPER_DMABUF_ID_IMPORTER(domid, id++);
+}
+
+static int hyper_dmabuf_exporter_ring_setup(void *data)
+{
+	struct ioctl_hyper_dmabuf_exporter_ring_setup *ring_attr;
+	int ret = 0;
+
+	if (!data) {
+		printk("user data is NULL\n");
+		return -1;
+	}
+	ring_attr = (struct ioctl_hyper_dmabuf_exporter_ring_setup *)data;
+
+	ret = hyper_dmabuf_exporter_ringbuf_init(ring_attr->remote_domain,
+						&ring_attr->ring_refid,
+						&ring_attr->port);
+
+	return ret;
+}
+
+static int hyper_dmabuf_importer_ring_setup(void *data)
+{
+	struct ioctl_hyper_dmabuf_importer_ring_setup *setup_imp_ring_attr;
+	int ret = 0;
+
+	if (!data) {
+		printk("user data is NULL\n");
+		return -1;
+	}
+
+	setup_imp_ring_attr = (struct ioctl_hyper_dmabuf_importer_ring_setup *)data;
+
+	/* the user needs to provide a port number and a ref # for the page used as the ring buffer */
+	ret = hyper_dmabuf_importer_ringbuf_init(setup_imp_ring_attr->source_domain,
+						 setup_imp_ring_attr->ring_refid,
+						 setup_imp_ring_attr->port);
+
+	return ret;
+}
+
+static int hyper_dmabuf_export_remote(void *data)
+{
+	struct ioctl_hyper_dmabuf_export_remote *export_remote_attr;
+	struct dma_buf *dma_buf;
+	struct dma_buf_attachment *attachment;
+	struct sg_table *sgt;
+	struct hyper_dmabuf_pages_info *page_info;
+	struct hyper_dmabuf_sgt_info *sgt_info;
+	struct hyper_dmabuf_ring_rq *req;
+	int operands[9];
+	int ret = 0;
+
+	if (!data) {
+		printk("user data is NULL\n");
+		return -1;
+	}
+
+	export_remote_attr = (struct ioctl_hyper_dmabuf_export_remote *)data;
+
+	dma_buf = dma_buf_get(export_remote_attr->dmabuf_fd);
+	if (!dma_buf) {
+		printk("Cannot get dma buf\n");
+		return -1;
+	}
+
+	attachment = dma_buf_attach(dma_buf, hyper_dmabuf_private.device);
+	if (!attachment) {
+		printk("Cannot get attachment\n");
+		return -1;
+	}
+
+	/* we check if this specific attachment was already exported
+	 * to the same domain and if yes, it returns hyper_dmabuf_id
+	 * of pre-exported sgt */
+	ret = hyper_dmabuf_find_id(attachment, export_remote_attr->remote_domain);
+	if (ret != -1) {
+		dma_buf_detach(dma_buf, attachment);
+		dma_buf_put(dma_buf);
+		export_remote_attr->hyper_dmabuf_id = ret;
+		return 0;
+	}
+	/* Clear ret, as otherwise the whole ioctl would wrongly return failure to userspace */
+	ret = 0;
+
+	sgt = dma_buf_map_attachment(attachment, DMA_BIDIRECTIONAL);
+
+	sgt_info = kmalloc(sizeof(*sgt_info), GFP_KERNEL);
+
+	sgt_info->hyper_dmabuf_id = hyper_dmabuf_id_gen();
+	/* TODO: We might need to consider using port number on event channel? */
+	sgt_info->hyper_dmabuf_rdomain = export_remote_attr->remote_domain;
+	sgt_info->sgt = sgt;
+	sgt_info->attachment = attachment;
+	sgt_info->dma_buf = dma_buf;
+
+	page_info = hyper_dmabuf_ext_pgs(sgt);
+	if (page_info == NULL)
+		goto fail_export;
+
+	/* now register it to export list */
+	hyper_dmabuf_register_exported(sgt_info);
+
+	page_info->hyper_dmabuf_rdomain = sgt_info->hyper_dmabuf_rdomain;
+	page_info->hyper_dmabuf_id = sgt_info->hyper_dmabuf_id; /* may not be needed */
+
+	export_remote_attr->hyper_dmabuf_id = sgt_info->hyper_dmabuf_id;
+
+	/* now create a table of grefs for the shared pages */
+
+	/* now create request for importer via ring */
+	operands[0] = page_info->hyper_dmabuf_id;
+	operands[1] = page_info->nents;
+	operands[2] = page_info->frst_ofst;
+	operands[3] = page_info->last_len;
+	operands[4] = hyper_dmabuf_create_gref_table(page_info->pages, export_remote_attr->remote_domain,
+						page_info->nents, &sgt_info->shared_pages_info);
+	/* driver/application specific private info, max 32 bytes */
+	operands[5] = export_remote_attr->private[0];
+	operands[6] = export_remote_attr->private[1];
+	operands[7] = export_remote_attr->private[2];
+	operands[8] = export_remote_attr->private[3];
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+
+	/* composing a message to the importer */
+	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT, &operands[0]);
+	if(hyper_dmabuf_send_request(export_remote_attr->remote_domain, req))
+		goto fail_send_request;
+
+	/* free msg */
+	kfree(req);
+	/* free page_info */
+	kfree(page_info);
+
+	return ret;
+
+fail_send_request:
+	kfree(req);
+	hyper_dmabuf_remove_exported(sgt_info->hyper_dmabuf_id);
+
+fail_export:
+	dma_buf_unmap_attachment(sgt_info->attachment, sgt_info->sgt, DMA_BIDIRECTIONAL);
+	dma_buf_detach(sgt_info->dma_buf, sgt_info->attachment);
+	dma_buf_put(sgt_info->dma_buf);
+
+	return -EINVAL;
+}
+
+static int hyper_dmabuf_export_fd_ioctl(void *data)
+{
+	struct ioctl_hyper_dmabuf_export_fd *export_fd_attr;
+	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
+	int ret = 0;
+
+	if (!data) {
+		printk("user data is NULL\n");
+		return -1;
+	}
+
+	export_fd_attr = (struct ioctl_hyper_dmabuf_export_fd *)data;
+
+	/* look for dmabuf for the id */
+	imported_sgt_info = hyper_dmabuf_find_imported(export_fd_attr->hyper_dmabuf_id);
+	if (imported_sgt_info == NULL) /* can't find sgt from the table */
+		return -1;
+
+	printk("%s Found buffer gref %d  off %d last len %d nents %d domain %d\n", __func__,
+		imported_sgt_info->gref, imported_sgt_info->frst_ofst,
+		imported_sgt_info->last_len, imported_sgt_info->nents,
+		HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported_sgt_info->hyper_dmabuf_id));
+
+	imported_sgt_info->sgt = hyper_dmabuf_map_pages(imported_sgt_info->gref,
+						imported_sgt_info->frst_ofst,
+						imported_sgt_info->last_len,
+						imported_sgt_info->nents,
+						HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported_sgt_info->hyper_dmabuf_id),
+						&imported_sgt_info->shared_pages_info);
+
+	if (!imported_sgt_info->sgt) {
+		return -1;
+	}
+
+	export_fd_attr->fd = hyper_dmabuf_export_fd(imported_sgt_info, export_fd_attr->flags);
+	if (export_fd_attr < 0) {
+		ret = export_fd_attr->fd;
+	}
+
+	return ret;
+}
+
+/* Remove dmabuf from the database and send a request to the source domain
+ * to unmap it. */
+static int hyper_dmabuf_destroy(void *data)
+{
+	struct ioctl_hyper_dmabuf_destroy *destroy_attr;
+	struct hyper_dmabuf_sgt_info *sgt_info;
+	struct hyper_dmabuf_ring_rq *req;
+	int ret;
+
+	if (!data) {
+		printk("user data is NULL\n");
+		return -EINVAL;
+	}
+
+	destroy_attr = (struct ioctl_hyper_dmabuf_destroy *)data;
+
+	/* find dmabuf in export list */
+	sgt_info = hyper_dmabuf_find_exported(destroy_attr->hyper_dmabuf_id);
+	if (sgt_info == NULL) { /* failed to find corresponding entry in export list */
+		destroy_attr->status = -EINVAL;
+		return -EFAULT;
+	}
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+
+	hyper_dmabuf_create_request(req, HYPER_DMABUF_DESTROY, &destroy_attr->hyper_dmabuf_id);
+
+	/* now send destroy request to the remote domain;
+	 * currently assumes there is only one importer */
+	ret = hyper_dmabuf_send_request(sgt_info->hyper_dmabuf_rdomain, req);
+	if (ret < 0) {
+		kfree(req);
+		return -EFAULT;
+	}
+
+	/* free msg */
+	kfree(req);
+	destroy_attr->status = ret;
+
+	/* The rest of the cleanup will follow when the importer frees its
+	 * buffer; the current implementation assumes there is only one
+	 * importer.
+	 */
+
+	return ret;
+}
+
+static int hyper_dmabuf_query(void *data)
+{
+	struct ioctl_hyper_dmabuf_query *query_attr;
+	struct hyper_dmabuf_sgt_info *sgt_info;
+	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
+	int ret = 0;
+
+	if (!data) {
+		printk("user data is NULL\n");
+		return -EINVAL;
+	}
+
+	query_attr = (struct ioctl_hyper_dmabuf_query *)data;
+
+	sgt_info = hyper_dmabuf_find_exported(query_attr->hyper_dmabuf_id);
+	imported_sgt_info = hyper_dmabuf_find_imported(query_attr->hyper_dmabuf_id);
+
+	/* if the dmabuf can't be found in either list, return */
+	if (!sgt_info && !imported_sgt_info) {
+		printk("can't find entry anywhere\n");
+		return -EINVAL;
+	}
+
+	/* not considering the case where a dmabuf is found on both queues
+	 * in one domain */
+	switch (query_attr->item)
+	{
+		case DMABUF_QUERY_TYPE_LIST:
+			if (sgt_info) {
+				query_attr->info = EXPORTED;
+			} else {
+				query_attr->info = IMPORTED;
+			}
+			break;
+
+		/* exporting domain of this specific dmabuf*/
+		case DMABUF_QUERY_EXPORTER:
+			if (sgt_info) {
+				query_attr->info = 0xFFFFFFFF; /* myself */
+			} else {
+				query_attr->info = (HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported_sgt_info->hyper_dmabuf_id));
+			}
+			break;
+
+		/* importing domain of this specific dmabuf */
+		case DMABUF_QUERY_IMPORTER:
+			if (sgt_info) {
+				query_attr->info = sgt_info->hyper_dmabuf_rdomain;
+			} else {
+#if 0 /* TODO: a global variable, current_domain does not exist yet*/
+				query_attr->info = current_domain;
+#endif
+			}
+			break;
+
+		/* size of dmabuf in byte */
+		case DMABUF_QUERY_SIZE:
+			if (sgt_info) {
+#if 0 /* TODO: hyper_dmabuf_buf_size is not implemented yet */
+				query_attr->info = hyper_dmabuf_buf_size(sgt_info->sgt);
+#endif
+			} else {
+				query_attr->info = imported_sgt_info->nents * 4096 -
+						   imported_sgt_info->frst_ofst - 4096 +
+						   imported_sgt_info->last_len;
+			}
+			break;
+	}
+
+	return ret;
+}
+
+static int hyper_dmabuf_remote_exporter_ring_setup(void *data)
+{
+	struct ioctl_hyper_dmabuf_remote_exporter_ring_setup *remote_exporter_ring_setup;
+	struct hyper_dmabuf_ring_rq *req;
+
+	remote_exporter_ring_setup = (struct ioctl_hyper_dmabuf_remote_exporter_ring_setup *)data;
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORTER_RING_SETUP, NULL);
+
+	/* requesting remote domain to set-up exporter's ring */
+	if(hyper_dmabuf_send_request(remote_exporter_ring_setup->rdomain, req) < 0) {
+		kfree(req);
+		return -EINVAL;
+	}
+
+	kfree(req);
+	return 0;
+}
+
+static const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORTER_RING_SETUP, hyper_dmabuf_exporter_ring_setup, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_IMPORTER_RING_SETUP, hyper_dmabuf_importer_ring_setup, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_REMOTE, hyper_dmabuf_export_remote, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_FD, hyper_dmabuf_export_fd_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_DESTROY, hyper_dmabuf_destroy, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_QUERY, hyper_dmabuf_query, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_REMOTE_EXPORTER_RING_SETUP, hyper_dmabuf_remote_exporter_ring_setup, 0),
+};
+
+static long hyper_dmabuf_ioctl(struct file *filp,
+			unsigned int cmd, unsigned long param)
+{
+	const struct hyper_dmabuf_ioctl_desc *ioctl = NULL;
+	unsigned int nr = _IOC_NR(cmd);
+	int ret = -EINVAL;
+	hyper_dmabuf_ioctl_t func;
+	char *kdata;
+
+	ioctl = &hyper_dmabuf_ioctls[nr];
+
+	func = ioctl->func;
+
+	if (unlikely(!func)) {
+		printk("no function\n");
+		return -EINVAL;
+	}
+
+	kdata = kmalloc(_IOC_SIZE(cmd), GFP_KERNEL);
+	if (!kdata) {
+		printk("no memory\n");
+		return -ENOMEM;
+	}
+
+	if (copy_from_user(kdata, (void __user *)param, _IOC_SIZE(cmd)) != 0) {
+		printk("failed to copy from user arguments\n");
+		kfree(kdata);
+		return -EFAULT;
+	}
+
+	ret = func(kdata);
+
+	if (copy_to_user((void __user *)param, kdata, _IOC_SIZE(cmd)) != 0) {
+		printk("failed to copy to user arguments\n");
+		kfree(kdata);
+		return -EFAULT;
+	}
+
+	kfree(kdata);
+
+	return ret;
+}
+
+struct device_info {
+	int curr_domain;
+};
+
+/*===============================================================================================*/
+static struct file_operations hyper_dmabuf_driver_fops =
+{
+	.owner = THIS_MODULE,
+	.unlocked_ioctl = hyper_dmabuf_ioctl,
+};
+
+static struct miscdevice hyper_dmabuf_miscdev = {
+	.minor = MISC_DYNAMIC_MINOR,
+	.name = "xen/hyper_dmabuf",
+	.fops = &hyper_dmabuf_driver_fops,
+};
+
+static const char device_name[] = "hyper_dmabuf";
+
+/*===============================================================================================*/
+int register_device(void)
+{
+	int result = 0;
+
+	result = misc_register(&hyper_dmabuf_miscdev);
+
+	if (result != 0) {
+		printk(KERN_WARNING "hyper_dmabuf: driver can't be registered\n");
+		return result;
+	}
+
+	hyper_dmabuf_private.device = hyper_dmabuf_miscdev.this_device;
+
+	/* TODO: Check if there is a different way to initialize dma mask nicely */
+	dma_coerce_mask_and_coherent(hyper_dmabuf_private.device, 0xFFFFFFFF);
+
+	/* TODO find a way to provide parameters for below function or move that to ioctl */
+/*	err = bind_interdomain_evtchn_to_irqhandler(rdomain, evtchn,
+				src_sink_isr, PORT_NUM, "remote_domain", &info);
+	if (err < 0) {
+		printk("hyper_dmabuf: can't register interrupt handlers\n");
+		return -EFAULT;
+	}
+
+	info.irq = err;
+*/
+	return result;
+}
+
+/*-----------------------------------------------------------------------------------------------*/
+void unregister_device(void)
+{
+	printk( KERN_NOTICE "hyper_dmabuf: unregister_device() is called" );
+	misc_deregister(&hyper_dmabuf_miscdev);
+}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
new file mode 100644
index 0000000..77a7e65
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
@@ -0,0 +1,119 @@
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/cdev.h>
+#include <asm/uaccess.h>
+#include <linux/hashtable.h>
+#include <linux/dma-buf.h>
+#include "hyper_dmabuf_list.h"
+
+DECLARE_HASHTABLE(hyper_dmabuf_hash_imported, MAX_ENTRY_IMPORTED);
+DECLARE_HASHTABLE(hyper_dmabuf_hash_exported, MAX_ENTRY_EXPORTED);
+
+int hyper_dmabuf_table_init()
+{
+	hash_init(hyper_dmabuf_hash_imported);
+	hash_init(hyper_dmabuf_hash_exported);
+	return 0;
+}
+
+int hyper_dmabuf_table_destroy()
+{
+	/* TODO: cleanup hyper_dmabuf_hash_imported and hyper_dmabuf_hash_exported */
+	return 0;
+}
+
+int hyper_dmabuf_register_exported(struct hyper_dmabuf_sgt_info *info)
+{
+	struct hyper_dmabuf_info_entry_exported *info_entry;
+
+	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
+
+	info_entry->info = info;
+
+	hash_add(hyper_dmabuf_hash_exported, &info_entry->node,
+		info_entry->info->hyper_dmabuf_id);
+
+	return 0;
+}
+
+int hyper_dmabuf_register_imported(struct hyper_dmabuf_imported_sgt_info* info)
+{
+	struct hyper_dmabuf_info_entry_imported *info_entry;
+
+	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
+
+	info_entry->info = info;
+
+	hash_add(hyper_dmabuf_hash_imported, &info_entry->node,
+		info_entry->info->hyper_dmabuf_id);
+
+	return 0;
+}
+
+struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(int id)
+{
+	struct hyper_dmabuf_info_entry_exported *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
+		if(info_entry->info->hyper_dmabuf_id == id)
+			return info_entry->info;
+
+	return NULL;
+}
+
+/* search for a pre-exported sgt and return its id if it exists */
+int hyper_dmabuf_find_id(struct dma_buf_attachment *attach, int domid)
+{
+	struct hyper_dmabuf_info_entry_exported *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
+		if(info_entry->info->attachment == attach &&
+			info_entry->info->hyper_dmabuf_rdomain == domid)
+			return info_entry->info->hyper_dmabuf_id;
+
+	return -1;
+}
+
+struct hyper_dmabuf_imported_sgt_info *hyper_dmabuf_find_imported(int id)
+{
+	struct hyper_dmabuf_info_entry_imported *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
+		if(info_entry->info->hyper_dmabuf_id == id)
+			return info_entry->info;
+
+	return NULL;
+}
+
+int hyper_dmabuf_remove_exported(int id)
+{
+	struct hyper_dmabuf_info_entry_exported *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
+		if(info_entry->info->hyper_dmabuf_id == id) {
+			hash_del(&info_entry->node);
+			return 0;
+		}
+
+	return -1;
+}
+
+int hyper_dmabuf_remove_imported(int id)
+{
+	struct hyper_dmabuf_info_entry_imported *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
+		if (info_entry->info->hyper_dmabuf_id == id) {
+			hash_del(&info_entry->node);
+			return 0;
+		}
+
+	return -1;
+}
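
For context, a minimal sketch (illustrative only, not part of the patch) of how the exporter-side table above is meant to be used; the helper name is hypothetical and error handling is elided:

/* hypothetical helper: register an sgt_info under its id, look it up
 * later, and drop it once the buffer is unexported */
static int example_track_export(struct hyper_dmabuf_sgt_info *info)
{
	int ret;

	ret = hyper_dmabuf_register_exported(info);
	if (ret)
		return ret;

	/* ... later, e.g. while handling HYPER_DMABUF_DESTROY_FINISH ... */
	if (hyper_dmabuf_find_exported(info->hyper_dmabuf_id))
		hyper_dmabuf_remove_exported(info->hyper_dmabuf_id);

	return 0;
}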
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
new file mode 100644
index 0000000..869cd9a
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
@@ -0,0 +1,40 @@
+#ifndef __HYPER_DMABUF_LIST_H__
+#define __HYPER_DMABUF_LIST_H__
+
+#include "hyper_dmabuf_struct.h"
+
+/* number of bits to be used for exported dmabufs hash table */
+#define MAX_ENTRY_EXPORTED 7
+/* number of bits to be used for imported dmabufs hash table */
+#define MAX_ENTRY_IMPORTED 7
+
+struct hyper_dmabuf_info_entry_exported {
+	struct hyper_dmabuf_sgt_info *info;
+	struct hlist_node node;
+};
+
+struct hyper_dmabuf_info_entry_imported {
+	struct hyper_dmabuf_imported_sgt_info *info;
+	struct hlist_node node;
+};
+
+int hyper_dmabuf_table_init(void);
+
+int hyper_dmabuf_table_destroy(void);
+
+int hyper_dmabuf_register_exported(struct hyper_dmabuf_sgt_info *info);
+
+/* search for a pre-exported sgt and return its id if it exists */
+int hyper_dmabuf_find_id(struct dma_buf_attachment *attach, int domid);
+
+int hyper_dmabuf_register_imported(struct hyper_dmabuf_imported_sgt_info *info);
+
+struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(int id);
+
+struct hyper_dmabuf_imported_sgt_info *hyper_dmabuf_find_imported(int id);
+
+int hyper_dmabuf_remove_exported(int id);
+
+int hyper_dmabuf_remove_imported(int id);
+
+#endif // __HYPER_DMABUF_LIST_H__
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
new file mode 100644
index 0000000..3237e50
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -0,0 +1,212 @@
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/dma-buf.h>
+#include "hyper_dmabuf_imp.h"
+//#include "hyper_dmabuf_remote_sync.h"
+#include "xen/hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_msg.h"
+#include "hyper_dmabuf_list.h"
+
+void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
+				        enum hyper_dmabuf_command command, int *operands)
+{
+	int i;
+
+	request->request_id = hyper_dmabuf_next_req_id_export();
+	request->status = HYPER_DMABUF_REQ_NOT_RESPONDED;
+	request->command = command;
+
+	switch (command) {
+	/* as exporter, commands to importer */
+	case HYPER_DMABUF_EXPORT:
+		/* exporting pages for dmabuf */
+		/* command : HYPER_DMABUF_EXPORT,
+		 * operands0 : hyper_dmabuf_id
+		 * operands1 : number of pages to be shared
+		 * operands2 : offset of data in the first page
+		 * operands3 : length of data in the last page
+		 * operands4 : top-level reference number for shared pages
+		 * operands5~8 : Driver-specific private data (e.g. graphic buffer's meta info)
+		 */
+		for (i = 0; i < 8; i++)
+			request->operands[i] = operands[i];
+		break;
+
+	case HYPER_DMABUF_DESTROY:
+		/* destroy sg_list for hyper_dmabuf_id on remote side */
+		/* command : DMABUF_DESTROY,
+		 * operands0 : hyper_dmabuf_id
+		 */
+		request->operands[0] = operands[0];
+		break;
+
+	case HYPER_DMABUF_OPS_TO_REMOTE:
+		/* notifying dmabuf map/unmap to importer (probably not needed) */
+		/* for dmabuf synchronization */
+		break;
+
+	/* as importer, command to exporter */
+	case HYPER_DMABUF_OPS_TO_SOURCE:
+		/* notifying dmabuf map/unmap to the exporter; "map" makes the driver
+		 * do shadow mapping or unmapping for synchronization with the
+		 * original exporter (e.g. i915) */
+		/* command : DMABUF_OPS_TO_SOURCE.
+		 * operands0 : hyper_dmabuf_id
+		 * operands1 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
+		 */
+		for (i = 0; i < 2; i++)
+			request->operands[i] = operands[i];
+		break;
+
+	/* requesting the other side to setup another ring channel for reverse direction */
+	case HYPER_DMABUF_EXPORTER_RING_SETUP:
+		/* command : HYPER_DMABUF_EXPORTER_RING_SETUP */
+		/* no operands needed */
+		break;
+
+	default:
+		/* no command found */
+		return;
+	}
+}
+
+int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
+{
+	int i, ret;
+	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
+	struct hyper_dmabuf_sgt_info *sgt_info;
+
+	/* make sure req is not NULL (may not be needed) */
+	if (!req) {
+		return -EINVAL;
+	}
+
+	req->status = HYPER_DMABUF_REQ_PROCESSED;
+
+	switch (req->command) {
+	case HYPER_DMABUF_EXPORT:
+		/* exporting pages for dmabuf */
+		/* command : HYPER_DMABUF_EXPORT,
+		 * operands0 : hyper_dmabuf_id
+		 * operands1 : number of pages to be shared
+		 * operands2 : offset of data in the first page
+		 * operands3 : length of data in the last page
+		 * operands4 : top-level reference number for shared pages
+		 * operands5~8 : Driver-specific private data (e.g. graphic buffer's meta info)
+		 */
+		imported_sgt_info = kcalloc(1, sizeof(*imported_sgt_info), GFP_KERNEL);
+		if (!imported_sgt_info)
+			return -ENOMEM;
+		imported_sgt_info->hyper_dmabuf_id = req->operands[0];
+		imported_sgt_info->frst_ofst = req->operands[2];
+		imported_sgt_info->last_len = req->operands[3];
+		imported_sgt_info->nents = req->operands[1];
+		imported_sgt_info->gref = req->operands[4];
+
+		printk("DMABUF was exported\n");
+		printk("\thyper_dmabuf_id %d\n", req->operands[0]);
+		printk("\tnents %d\n", req->operands[1]);
+		printk("\tfirst offset %d\n", req->operands[2]);
+		printk("\tlast len %d\n", req->operands[3]);
+		printk("\tgrefid %d\n", req->operands[4]);
+
+		for (i = 0; i < 4; i++)
+			imported_sgt_info->private[i] = req->operands[5+i];
+
+		hyper_dmabuf_register_imported(imported_sgt_info);
+		break;
+
+	case HYPER_DMABUF_DESTROY:
+		/* destroy sg_list for hyper_dmabuf_id on remote side */
+		/* command : DMABUF_DESTROY,
+		 * operands0 : hyper_dmabuf_id
+		 */
+
+		imported_sgt_info =
+			hyper_dmabuf_find_imported(req->operands[0]);
+
+		if (imported_sgt_info) {
+			hyper_dmabuf_cleanup_imported_pages(imported_sgt_info);
+
+			hyper_dmabuf_remove_imported(req->operands[0]);
+
+			/* TODO: cleanup sgt on importer side etc */
+		}
+
+		/* notify the exporter that the buffer is freed and that it can clean it up */
+		req->status = HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP;
+		req->command = HYPER_DMABUF_DESTROY_FINISH;
+
+#if 0 /* function is not implemented yet */
+
+		ret = hyper_dmabuf_destroy_sgt(req->hyper_dmabuf_id);
+#endif
+		break;
+
+	case HYPER_DMABUF_DESTROY_FINISH:
+		/* destroy sg_list for hyper_dmabuf_id on local side */
+		/* command : DMABUF_DESTROY_FINISH,
+		 * operands0 : hyper_dmabuf_id
+		 */
+
+		/* TODO: this should be done in a workqueue, once acks have been received from all importers that the buffer is no longer used */
+		sgt_info =
+			hyper_dmabuf_find_exported(req->operands[0]);
+
+		if (sgt_info) {
+			hyper_dmabuf_cleanup_gref_table(sgt_info);
+
+			/* unmap dmabuf */
+			dma_buf_unmap_attachment(sgt_info->attachment, sgt_info->sgt, DMA_BIDIRECTIONAL);
+			dma_buf_detach(sgt_info->dma_buf, sgt_info->attachment);
+			dma_buf_put(sgt_info->dma_buf);
+
+			/* TODO: Rest of cleanup, sgt cleanup etc */
+		}
+
+		break;
+
+	case HYPER_DMABUF_OPS_TO_REMOTE:
+		/* notifying dmabuf map/unmap to importer (probably not needed) */
+		/* for dmabuf synchronization */
+		break;
+
+	/* as importer, command to exporter */
+	case HYPER_DMABUF_OPS_TO_SOURCE:
+		/* notifying dmabuf map/unmap to the exporter; "map" makes the driver
+		 * do shadow mapping or unmapping for synchronization with the
+		 * original exporter (e.g. i915) */
+		/* command : DMABUF_OPS_TO_SOURCE.
+		 * operands0 : hyper_dmabuf_id
+		 * operands1 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
+		 */
+		break;
+
+	/* requesting the other side to setup another ring channel for reverse direction */
+	case HYPER_DMABUF_EXPORTER_RING_SETUP:
+		/* command: HYPER_DMABUF_EXPORTER_RING_SETUP
+		 * no operands needed */
+		ret = hyper_dmabuf_exporter_ringbuf_init(domid, &req->operands[0], &req->operands[1]);
+		if (ret < 0) {
+			req->status = HYPER_DMABUF_REQ_ERROR;
+			return -EINVAL;
+		}
+
+		req->status = HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP;
+		req->command = HYPER_DMABUF_IMPORTER_RING_SETUP;
+		break;
+
+	case HYPER_DMABUF_IMPORTER_RING_SETUP:
+		/* command: HYPER_DMABUF_IMPORTER_RING_SETUP */
+		/* no operands needed */
+		ret = hyper_dmabuf_importer_ringbuf_init(domid, req->operands[0], req->operands[1]);
+		if (ret < 0)
+			return -EINVAL;
+
+		break;
+
+	default:
+		/* no matching command, nothing to do; just return an error */
+		return -EINVAL;
+	}
+
+	return req->command;
+}
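
To show how the two functions above compose, here is a sketch of a caller (a hypothetical helper; the real callers live in hyper_dmabuf_ioctl.c and hyper_dmabuf_imp.c, and error handling is trimmed):

/* illustrative only: build a DESTROY request and push it to rdomain */
static int example_send_destroy(int rdomain, int hyper_dmabuf_id)
{
	struct hyper_dmabuf_ring_rq *req;
	int operands[MAX_NUMBER_OF_OPERANDS] = { hyper_dmabuf_id };
	int ret;

	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
	if (!req)
		return -ENOMEM;

	hyper_dmabuf_create_request(req, HYPER_DMABUF_DESTROY, operands);
	ret = hyper_dmabuf_send_request(rdomain, req);
	kfree(req);

	return ret;
}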
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
new file mode 100644
index 0000000..44bfb70
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -0,0 +1,45 @@
+#ifndef __HYPER_DMABUF_MSG_H__
+#define __HYPER_DMABUF_MSG_H__
+
+enum hyper_dmabuf_command {
+	HYPER_DMABUF_EXPORT = 0x10,
+	HYPER_DMABUF_DESTROY,
+	HYPER_DMABUF_DESTROY_FINISH,
+	HYPER_DMABUF_OPS_TO_REMOTE,
+	HYPER_DMABUF_OPS_TO_SOURCE,
+	HYPER_DMABUF_EXPORTER_RING_SETUP, /* requesting remote domain to set up exporter's ring */
+	HYPER_DMABUF_IMPORTER_RING_SETUP, /* requesting remote domain to set up importer's ring */
+};
+
+enum hyper_dmabuf_ops {
+	HYPER_DMABUF_OPS_ATTACH = 0x1000,
+	HYPER_DMABUF_OPS_DETACH,
+	HYPER_DMABUF_OPS_MAP,
+	HYPER_DMABUF_OPS_UNMAP,
+	HYPER_DMABUF_OPS_RELEASE,
+	HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS,
+	HYPER_DMABUF_OPS_END_CPU_ACCESS,
+	HYPER_DMABUF_OPS_KMAP_ATOMIC,
+	HYPER_DMABUF_OPS_KUNMAP_ATOMIC,
+	HYPER_DMABUF_OPS_KMAP,
+	HYPER_DMABUF_OPS_KUNMAP,
+	HYPER_DMABUF_OPS_MMAP,
+	HYPER_DMABUF_OPS_VMAP,
+	HYPER_DMABUF_OPS_VUNMAP,
+};
+
+enum hyper_dmabuf_req_feedback {
+	HYPER_DMABUF_REQ_PROCESSED = 0x100,
+	HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP,
+	HYPER_DMABUF_REQ_ERROR,
+	HYPER_DMABUF_REQ_NOT_RESPONDED
+};
+
+/* create a request packet with given command and operands */
+void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
+                                        enum hyper_dmabuf_command command, int *operands);
+
+/* parse incoming request packet (or response) and take appropriate actions for those */
+int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req);
+
+#endif // __HYPER_DMABUF_MSG_H__
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
new file mode 100644
index 0000000..a577167
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
@@ -0,0 +1,16 @@
+#ifndef __HYPER_DMABUF_QUERY_H__
+#define __HYPER_DMABUF_QUERY_H__
+
+enum hyper_dmabuf_query {
+	DMABUF_QUERY_TYPE_LIST = 0x10,
+	DMABUF_QUERY_EXPORTER,
+	DMABUF_QUERY_IMPORTER,
+	DMABUF_QUERY_SIZE
+};
+
+enum hyper_dmabuf_status {
+	EXPORTED = 0x01,
+	IMPORTED
+};
+
+#endif /* __HYPER_DMABUF_QUERY_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
new file mode 100644
index 0000000..c8a2f4d
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -0,0 +1,70 @@
+#ifndef __HYPER_DMABUF_STRUCT_H__
+#define __HYPER_DMABUF_STRUCT_H__
+
+#include <xen/interface/grant_table.h>
+
+/* The importer combines the source domain id with the given hyper_dmabuf_id
+ * to make it unique in case there are multiple exporters */
+
+#define HYPER_DMABUF_ID_IMPORTER(sdomain, id) \
+	((((sdomain) & 0xFF) << 24) | ((id) & 0xFFFFFF))
+
+#define HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(id) \
+	(((id) >> 24) & 0xFF)
+
+/* Each grant_ref_t is 4 bytes, so a total of 4096 grant_ref_t fit in
+ * these 4 pages, meaning we can share up to 4096 * 4KB = 16MB of buffer
+ * (needs to be increased for large-buffer use cases such as a 4K
+ * frame buffer) */
+#define MAX_ALLOWED_NUM_PAGES_FOR_GREF_NUM_ARRAYS 4
+
+struct hyper_dmabuf_shared_pages_info {
+	grant_ref_t *data_refs;	/* table with shared buffer pages refid */
+	grant_ref_t *addr_pages; /* pages of 2nd level addressing */
+	grant_ref_t *top_level_page; /* page of top level addressing, it contains refids of 2nd level pages */
+	grant_ref_t top_level_ref; /* top level refid */
+	struct gnttab_unmap_grant_ref* unmap_ops; /* unmap ops for mapped pages */
+	struct page **data_pages; /* data pages to be unmapped */
+};
+
+/* Exporter builds pages_info before sharing pages */
+struct hyper_dmabuf_pages_info {
+	int hyper_dmabuf_id; /* unique id to reference dmabuf in source domain */
+	int hyper_dmabuf_rdomain; /* currently only one remote domain is considered */
+	int frst_ofst; /* offset of data in the first page */
+	int last_len; /* length of data in the last page */
+	int nents; /* # of pages */
+	struct page **pages; /* pages that contain reference numbers of shared pages */
+};
+
+/* Both importer and exporter use this structure to point to sg lists.
+ *
+ * The exporter stores references to the sgt in a hash table and keeps
+ * them for synchronization and tracking purposes.
+ *
+ * The importer uses this structure when exporting to other drivers in
+ * the same domain. */
+struct hyper_dmabuf_sgt_info {
+	int hyper_dmabuf_id; /* unique id to reference dmabuf in remote domain */
+	int hyper_dmabuf_rdomain; /* domain importing this sgt */
+	struct sg_table *sgt; /* pointer to sgt */
+	struct dma_buf *dma_buf; /* kept so it can be freed later */
+	struct dma_buf_attachment *attachment; /* kept so it can be freed later */
+	struct hyper_dmabuf_shared_pages_info shared_pages_info;
+	int private[4]; /* device-specific info (e.g. image's meta info?) */
+};
+
+/* The importer stores references to the shared pages (before mapping).
+ * It keeps them in the table and maps them into its own memory map
+ * once userspace asks for a reference to the buffer. */
+struct hyper_dmabuf_imported_sgt_info {
+	int hyper_dmabuf_id; /* unique id to reference dmabuf (see HYPER_DMABUF_ID_IMPORTER(source domain id, exporter's hyper_dmabuf_id)) */
+	int frst_ofst;	/* start offset in shared page #1 */
+	int last_len;	/* length of data in the last shared page */
+	int nents;	/* number of pages to be shared */
+	grant_ref_t gref; /* reference number of top level addressing page of shared pages */
+	struct sg_table *sgt; /* sgt pointer after importing buffer */
+	struct hyper_dmabuf_shared_pages_info shared_pages_info;
+	int private[4]; /* device specific info (e.g. image's meta info?) */
+};
+
+#endif /* __HYPER_DMABUF_STRUCT_H__ */
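
A quick worked example of the id encoding above, with illustrative values: if domain 3 exports a buffer whose local id is 0x42, then

/* illustrative values only */
int imported = HYPER_DMABUF_ID_IMPORTER(3, 0x42);
/* == ((3 & 0xFF) << 24) | (0x42 & 0xFFFFFF) == 0x03000042 */
int sdomain = HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported);
/* == (0x03000042 >> 24) & 0xFF == 3 */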
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
new file mode 100644
index 0000000..22f2ef0
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -0,0 +1,328 @@
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/workqueue.h>
+#include <xen/grant_table.h>
+#include <xen/events.h>
+#include <xen/xenbus.h>
+#include <asm/xen/page.h>
+#include "hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_xen_comm_list.h"
+#include "../hyper_dmabuf_imp.h"
+#include "../hyper_dmabuf_list.h"
+#include "../hyper_dmabuf_msg.h"
+
+static int export_req_id;
+static int import_req_id;
+
+int32_t hyper_dmabuf_get_domid(void)
+{
+	struct xenbus_transaction xbt;
+	int32_t domid;
+
+	xenbus_transaction_start(&xbt);
+
+	if (!xenbus_scanf(xbt, "domid", "", "%d", &domid))
+		domid = -1;
+
+	xenbus_transaction_end(xbt, 0);
+
+	return domid;
+}
+
+int hyper_dmabuf_next_req_id_export(void)
+{
+	export_req_id++;
+	return export_req_id;
+}
+
+int hyper_dmabuf_next_req_id_import(void)
+{
+	import_req_id++;
+	return import_req_id;
+}
+
+/* For now, cache the latest rings in global variables. TODO: keep them in a list */
+static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *dev_id);
+static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *dev_id);
+
+/* the exporter needs to generate info for page sharing */
+int hyper_dmabuf_exporter_ringbuf_init(int rdomain, grant_ref_t *refid, int *port)
+{
+	struct hyper_dmabuf_ring_info_export *ring_info;
+	struct hyper_dmabuf_sring *sring;
+	struct evtchn_alloc_unbound alloc_unbound;
+	struct evtchn_close close;
+
+	void *shared_ring;
+	int ret;
+
+	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
+	if (!ring_info)
+		return -ENOMEM;
+
+	/* from exporter to importer */
+	shared_ring = (void *)__get_free_pages(GFP_KERNEL, 1);
+	if (!shared_ring) {
+		kfree(ring_info);
+		return -ENOMEM;
+	}
+
+	sring = (struct hyper_dmabuf_sring *) shared_ring;
+
+	SHARED_RING_INIT(sring);
+
+	FRONT_RING_INIT(&(ring_info->ring_front), sring, PAGE_SIZE);
+
+	ring_info->gref_ring = gnttab_grant_foreign_access(rdomain,
+							virt_to_mfn(shared_ring), 0);
+	if (ring_info->gref_ring < 0) {
+		return -EINVAL; /* fail to get gref */
+	}
+
+	alloc_unbound.dom = DOMID_SELF;
+	alloc_unbound.remote_dom = rdomain;
+	ret = HYPERVISOR_event_channel_op(EVTCHNOP_alloc_unbound, &alloc_unbound);
+	if (ret != 0) {
+		printk("Cannot allocate event channel\n");
+		return -EINVAL;
+	}
+
+	/* setting up interrupt */
+	ret = bind_evtchn_to_irqhandler(alloc_unbound.port,
+					hyper_dmabuf_front_ring_isr, 0,
+					NULL, (void*) ring_info);
+
+	if (ret < 0) {
+		printk("Failed to setup event channel\n");
+		close.port = alloc_unbound.port;
+		HYPERVISOR_event_channel_op(EVTCHNOP_close, &close);
+		gnttab_end_foreign_access(ring_info->gref_ring, 0, virt_to_mfn(shared_ring));
+		return -EINVAL;
+	}
+
+	ring_info->rdomain = rdomain;
+	ring_info->irq = ret;
+	ring_info->port = alloc_unbound.port;
+
+	/* store refid and port numbers for userspace's use */
+	*refid = ring_info->gref_ring;
+	*port = ring_info->port;
+
+	printk("%s: allocated event channel: gref %d  port: %d  irq: %d\n", __func__,
+		ring_info->gref_ring,
+		ring_info->port,
+		ring_info->irq);
+
+	/* register ring info */
+	ret = hyper_dmabuf_register_exporter_ring(ring_info);
+
+	return ret;
+}
+
+/* the importer needs the shared ring page's gref and the event channel port */
+int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port)
+{
+	struct hyper_dmabuf_ring_info_import *ring_info;
+	struct hyper_dmabuf_sring *sring;
+
+	struct page *shared_ring;
+
+	struct gnttab_map_grant_ref *ops;
+	struct gnttab_unmap_grant_ref *unmap_ops;
+	int ret;
+
+	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
+	if (!ring_info)
+		return -ENOMEM;
+
+	ring_info->sdomain = sdomain;
+	ring_info->evtchn = port;
+
+	ops = kmalloc(sizeof(*ops), GFP_KERNEL);
+	unmap_ops = kmalloc(sizeof(*unmap_ops), GFP_KERNEL);
+
+	if (gnttab_alloc_pages(1, &shared_ring)) {
+		return -EINVAL;
+	}
+
+	gnttab_set_map_op(&ops[0], (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
+			GNTMAP_host_map, gref, sdomain);
+
+	ret = gnttab_map_refs(ops, NULL, &shared_ring, 1);
+	if (ret < 0) {
+		printk("Cannot map ring\n");
+		return -EINVAL;
+	}
+
+	if (ops[0].status) {
+		printk("Ring mapping failed\n");
+		return -EINVAL;
+	}
+
+	sring = (struct hyper_dmabuf_sring*) pfn_to_kaddr(page_to_pfn(shared_ring));
+
+	BACK_RING_INIT(&ring_info->ring_back, sring, PAGE_SIZE);
+
+	ret = bind_interdomain_evtchn_to_irqhandler(sdomain, port, hyper_dmabuf_back_ring_isr, 0,
+						    NULL, (void*)ring_info);
+	if (ret < 0) {
+		return -EINVAL;
+	}
+
+	ring_info->irq = ret;
+
+	printk("%s: bound to event channel port: %d  irq: %d\n", __func__,
+		port,
+		ring_info->irq);
+
+	ret = hyper_dmabuf_register_importer_ring(ring_info);
+
+	return ret;
+}
+
+int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req)
+{
+	struct hyper_dmabuf_front_ring *ring;
+	struct hyper_dmabuf_ring_rq *new_req;
+	struct hyper_dmabuf_ring_info_export *ring_info;
+	int notify;
+
+	/* find a ring info for the channel */
+	ring_info = hyper_dmabuf_find_exporter_ring(domain);
+	if (!ring_info) {
+		printk("Can't find ring info for the channel\n");
+		return -EINVAL;
+	}
+
+	ring = &ring_info->ring_front;
+
+	if (RING_FULL(ring))
+		return -EBUSY;
+
+	new_req = RING_GET_REQUEST(ring, ring->req_prod_pvt);
+	if (!new_req) {
+		printk("NULL REQUEST\n");
+		return -EIO;
+	}
+
+	memcpy(new_req, req, sizeof(*new_req));
+
+	ring->req_prod_pvt++;
+
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(ring, notify);
+	if (notify) {
+		notify_remote_via_irq(ring_info->irq);
+	}
+
+	return 0;
+}
+
+/* called by interrupt (WORKQUEUE) */
+int hyper_dmabuf_send_response(struct hyper_dmabuf_ring_rp* response, int domain)
+{
+	/* as an importer and as an exporter */
+	return 0;
+}
+
+/* ISR for request from exporter (as an importer) */
+static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *dev_id)
+{
+	RING_IDX rc, rp;
+	struct hyper_dmabuf_ring_rq request;
+	struct hyper_dmabuf_ring_rp response;
+	int notify, more_to_do = 0;
+	int ret;
+//	struct hyper_dmabuf_work *work;
+
+	struct hyper_dmabuf_ring_info_import *ring_info = (struct hyper_dmabuf_ring_info_import *)dev_id;
+	struct hyper_dmabuf_back_ring *ring;
+
+	ring = &ring_info->ring_back;
+
+	do {
+		rc = ring->req_cons;
+		rp = ring->sring->req_prod;
+
+		while (rc != rp) {
+			if (RING_REQUEST_CONS_OVERFLOW(ring, rc))
+				break;
+
+			memcpy(&request, RING_GET_REQUEST(ring, rc), sizeof(request));
+			printk("Got request\n");
+			ring->req_cons = ++rc;
+
+			/* TODO: it is probably a better idea to queue multiple requests
+			 * on a linked list and let a task in a workqueue process them,
+			 * because we do not want to stay in the ISR for long.
+			 */
+			ret = hyper_dmabuf_msg_parse(ring_info->sdomain, &request);
+
+			if (ret > 0) {
+				/* build response */
+				memcpy(&response, &request, sizeof(response));
+
+				/* we send the modified request back as the response; we might need only the request */
+				memcpy(RING_GET_RESPONSE(ring, ring->rsp_prod_pvt), &response, sizeof(response));
+				ring->rsp_prod_pvt++;
+
+				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(ring, notify);
+
+				if (notify) {
+					printk("Notifying\n");
+					notify_remote_via_irq(ring_info->irq);
+				}
+			}
+
+			RING_FINAL_CHECK_FOR_REQUESTS(ring, more_to_do);
+			printk("Final check for requests %d\n", more_to_do);
+		}
+	} while (more_to_do);
+
+	return IRQ_HANDLED;
+}
+
+/* ISR for responses from importer */
+static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *dev_id)
+{
+	/* front ring only care about response from back */
+	struct hyper_dmabuf_ring_rp *response;
+	RING_IDX i, rp;
+	int more_to_do, ret;
+
+	struct hyper_dmabuf_ring_info_export *ring_info = (struct hyper_dmabuf_ring_info_export *)dev_id;
+	struct hyper_dmabuf_front_ring *ring;
+	ring = &ring_info->ring_front;
+
+	do {
+		more_to_do = 0;
+		rp = ring->sring->rsp_prod;
+		for (i = ring->rsp_cons; i != rp; i++) {
+			unsigned long id;
+
+			response = RING_GET_RESPONSE(ring, i);
+			id = response->response_id;
+
+			if (response->status == HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP) {
+				/* parsing response */
+				ret = hyper_dmabuf_msg_parse(ring_info->rdomain, (struct hyper_dmabuf_ring_rq*)response);
+
+				if (ret < 0) {
+					printk("error while parsing response\n");
+				}
+			} else if (response->status == HYPER_DMABUF_REQ_ERROR) {
+				printk("remote domain %d couldn't process request %d\n", ring_info->rdomain, response->command);
+			}
+
+		}
+
+		ring->rsp_cons = i;
+
+		if (i != ring->req_prod_pvt) {
+			RING_FINAL_CHECK_FOR_RESPONSES(ring, more_to_do);
+			printk("more to do %d\n", more_to_do);
+		} else {
+			ring->sring->rsp_event = i+1;
+		}
+	} while (more_to_do);
+
+	return IRQ_HANDLED;
+}
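
Putting the two init paths above together, the expected handshake is roughly the following (a sketch: the two calls actually run in different domains, the domain ids are placeholders, and delivering refid/port to the other side happens out of band via the ioctls/XenStore):

/* illustrative only */
static int example_ring_handshake(void)
{
	grant_ref_t refid;
	int port;
	int ret;

	/* in the exporting domain, targeting remote domain 1 */
	ret = hyper_dmabuf_exporter_ringbuf_init(1, &refid, &port);
	if (ret < 0)
		return ret;

	/* refid and port are then handed to domain 1, which binds its
	 * end back to the exporter (assumed to be domain 0 here) */
	return hyper_dmabuf_importer_ringbuf_init(0, refid, port);
}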
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
new file mode 100644
index 0000000..2754917
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
@@ -0,0 +1,62 @@
+#ifndef __HYPER_DMABUF_XEN_COMM_H__
+#define __HYPER_DMABUF_XEN_COMM_H__
+
+#include <xen/interface/io/ring.h>
+
+#define MAX_NUMBER_OF_OPERANDS 9
+
+struct hyper_dmabuf_ring_rq {
+	unsigned int request_id;
+	unsigned int status;
+	unsigned int command;
+	unsigned int operands[MAX_NUMBER_OF_OPERANDS];
+};
+
+struct hyper_dmabuf_ring_rp {
+	unsigned int response_id;
+	unsigned int status;
+	unsigned int command;
+	unsigned int operands[MAX_NUMBER_OF_OPERANDS];
+};
+
+DEFINE_RING_TYPES(hyper_dmabuf, struct hyper_dmabuf_ring_rq, struct hyper_dmabuf_ring_rp);
+
+struct hyper_dmabuf_ring_info_export {
+	struct hyper_dmabuf_front_ring ring_front;
+	int rdomain;
+	int gref_ring;
+	int irq;
+	int port;
+};
+
+struct hyper_dmabuf_ring_info_import {
+	int sdomain;
+	int irq;
+	int evtchn;
+	struct hyper_dmabuf_back_ring ring_back;
+};
+
+//struct hyper_dmabuf_work {
+//	hyper_dmabuf_ring_rq request;
+//	struct work_struct msg_parse;
+//};
+
+int32_t hyper_dmabuf_get_domid(void);
+
+int hyper_dmabuf_next_req_id_export(void);
+
+int hyper_dmabuf_next_req_id_import(void);
+
+/* the exporter needs to generate info for page sharing */
+int hyper_dmabuf_exporter_ringbuf_init(int rdomain, grant_ref_t *gref, int *port);
+
+/* the importer needs the shared ring page's gref and the event channel port */
+int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port);
+
+/* send request to the remote domain */
+int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req);
+
+/* called by interrupt (WORKQUEUE) */
+int hyper_dmabuf_send_response(struct hyper_dmabuf_ring_rp* response, int domain);
+
+#endif // __HYPER_DMABUF_XEN_COMM_H__
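
As a rough capacity check on the message structs above (assuming the standard layout of Xen's ring.h): each ring slot holds the larger of the request and response structs, i.e. 12 * 4 = 48 bytes, and with a 4 KiB shared page minus the 64-byte sring header the ring macros round (4096 - 64) / 48 = 84 down to the nearest power of two, so each ring can hold 64 in-flight messages.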
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
new file mode 100644
index 0000000..15c9d29
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
@@ -0,0 +1,106 @@
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/cdev.h>
+#include <asm/uaccess.h>
+#include <linux/hashtable.h>
+#include <xen/grant_table.h>
+#include "hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_xen_comm_list.h"
+
+DECLARE_HASHTABLE(hyper_dmabuf_hash_importer_ring, MAX_ENTRY_IMPORT_RING);
+DECLARE_HASHTABLE(hyper_dmabuf_hash_exporter_ring, MAX_ENTRY_EXPORT_RING);
+
+int hyper_dmabuf_ring_table_init(void)
+{
+	hash_init(hyper_dmabuf_hash_importer_ring);
+	hash_init(hyper_dmabuf_hash_exporter_ring);
+	return 0;
+}
+
+int hyper_dmabuf_ring_table_destroy(void)
+{
+	/* TODO: clean up the tables */
+	return 0;
+}
+
+int hyper_dmabuf_register_exporter_ring(struct hyper_dmabuf_ring_info_export *ring_info)
+{
+	struct hyper_dmabuf_exporter_ring_info *info_entry;
+
+	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
+	if (!info_entry)
+		return -ENOMEM;
+
+	info_entry->info = ring_info;
+
+	hash_add(hyper_dmabuf_hash_exporter_ring, &info_entry->node,
+		info_entry->info->rdomain);
+
+	return 0;
+}
+
+int hyper_dmabuf_register_importer_ring(struct hyper_dmabuf_ring_info_import *ring_info)
+{
+	struct hyper_dmabuf_importer_ring_info *info_entry;
+
+	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
+	if (!info_entry)
+		return -ENOMEM;
+
+	info_entry->info = ring_info;
+
+	hash_add(hyper_dmabuf_hash_importer_ring, &info_entry->node,
+		info_entry->info->sdomain);
+
+	return 0;
+}
+
+struct hyper_dmabuf_ring_info_export *hyper_dmabuf_find_exporter_ring(int domid)
+{
+	struct hyper_dmabuf_exporter_ring_info *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_exporter_ring, bkt, info_entry, node)
+		if (info_entry->info->rdomain == domid)
+			return info_entry->info;
+
+	return NULL;
+}
+
+struct hyper_dmabuf_ring_info_import *hyper_dmabuf_find_importer_ring(int domid)
+{
+	struct hyper_dmabuf_importer_ring_info *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_importer_ring, bkt, info_entry, node)
+		if (info_entry->info->sdomain == domid)
+			return info_entry->info;
+
+	return NULL;
+}
+
+int hyper_dmabuf_remove_exporter_ring(int domid)
+{
+	struct hyper_dmabuf_exporter_ring_info *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_exporter_ring, bkt, info_entry, node)
+		if (info_entry->info->rdomain == domid) {
+			hash_del(&info_entry->node);
+			return 0;
+		}
+
+	return -1;
+}
+
+int hyper_dmabuf_remove_importer_ring(int domid)
+{
+	struct hyper_dmabuf_importer_ring_info *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_importer_ring, bkt, info_entry, node)
+		if (info_entry->info->sdomain == domid) {
+			hash_del(&info_entry->node);
+			return 0;
+		}
+
+	return -1;
+}
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
new file mode 100644
index 0000000..5929f99
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
@@ -0,0 +1,35 @@
+#ifndef __HYPER_DMABUF_XEN_COMM_LIST_H__
+#define __HYPER_DMABUF_XEN_COMM_LIST_H__
+
+/* number of bits to be used for exported dmabufs hash table */
+#define MAX_ENTRY_EXPORT_RING 7
+/* number of bits to be used for imported dmabufs hash table */
+#define MAX_ENTRY_IMPORT_RING 7
+
+struct hyper_dmabuf_exporter_ring_info {
+	struct hyper_dmabuf_ring_info_export *info;
+	struct hlist_node node;
+};
+
+struct hyper_dmabuf_importer_ring_info {
+	struct hyper_dmabuf_ring_info_import *info;
+	struct hlist_node node;
+};
+
+int hyper_dmabuf_ring_table_init(void);
+
+int hyper_dmabuf_ring_table_destroy(void);
+
+int hyper_dmabuf_register_exporter_ring(struct hyper_dmabuf_ring_info_export *ring_info);
+
+int hyper_dmabuf_register_importer_ring(struct hyper_dmabuf_ring_info_import *ring_info);
+
+struct hyper_dmabuf_ring_info_export *hyper_dmabuf_find_exporter_ring(int domid);
+
+struct hyper_dmabuf_ring_info_import *hyper_dmabuf_find_importer_ring(int domid);
+
+int hyper_dmabuf_remove_exporter_ring(int domid);
+
+int hyper_dmabuf_remove_importer_ring(int domid);
+
+#endif // __HYPER_DMABUF_XEN_COMM_LIST_H__
-- 
2.7.4

* [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv
@ 2017-12-19 19:29 ` Dongwon Kim
  0 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: xen-devel, mateuszx.potrola, dri-devel, dongwon.kim

Upload of intial version of hyper_DMABUF driver enabling
DMA_BUF exchange between two different VMs in virtualized
platform based on hypervisor such as KVM or XEN.

Hyper_DMABUF drv's primary role is to import a DMA_BUF
from originator then re-export it to another Linux VM
so that it can be mapped and accessed by it.

The functionality of this driver highly depends on
Hypervisor's native page sharing mechanism and inter-VM
communication support.

This driver has two layers, one is main hyper_DMABUF
framework for scatter-gather list management that handles
actual import and export of DMA_BUF. Lower layer is about
actual memory sharing and communication between two VMs,
which is hypervisor-specific interface.

This driver is initially designed to enable DMA_BUF
sharing across VMs in Xen environment, so currently working
with Xen only.

This also adds Kernel configuration for hyper_DMABUF drv
under Device Drivers->Xen driver support->hyper_dmabuf
options.

To give some brief information about each source file,

hyper_dmabuf/hyper_dmabuf_conf.h
: configuration info

hyper_dmabuf/hyper_dmabuf_drv.c
: driver interface and initialization

hyper_dmabuf/hyper_dmabuf_imp.c
: scatter-gather list generation and management. DMA_BUF
ops for DMA_BUF reconstructed from hyper_DMABUF

hyper_dmabuf/hyper_dmabuf_ioctl.c
: IOCTLs calls for export/import and comm channel creation
unexport.

hyper_dmabuf/hyper_dmabuf_list.c
: Database (linked-list) for exported and imported
hyper_DMABUF

hyper_dmabuf/hyper_dmabuf_msg.c
: creation and management of messages between exporter and
importer

hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
: comm ch management and ISRs for incoming messages.

hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
: Database (linked-list) for keeping information about
existing comm channels among VMs

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
Signed-off-by: Mateusz Polrola <mateuszx.potrola@intel.com>
---
 drivers/xen/Kconfig                                |   2 +
 drivers/xen/Makefile                               |   1 +
 drivers/xen/hyper_dmabuf/Kconfig                   |  14 +
 drivers/xen/hyper_dmabuf/Makefile                  |  34 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h       |   2 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        |  54 ++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        | 101 +++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c        | 852 +++++++++++++++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h        |  31 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 462 +++++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c       | 119 +++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h       |  40 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        | 212 +++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h        |  45 ++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h      |  16 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h     |  70 ++
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   | 328 ++++++++
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h   |  62 ++
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c  | 106 +++
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h  |  35 +
 20 files changed, 2586 insertions(+)
 create mode 100644 drivers/xen/hyper_dmabuf/Kconfig
 create mode 100644 drivers/xen/hyper_dmabuf/Makefile
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
 create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
 create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
 create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
 create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h

diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index d8dd546..b59b0e3 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -321,4 +321,6 @@ config XEN_SYMS
 config XEN_HAVE_VPMU
        bool
 
+source "drivers/xen/hyper_dmabuf/Kconfig"
+
 endmenu
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index 451e833..a6e253a 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -4,6 +4,7 @@ obj-$(CONFIG_X86)			+= fallback.o
 obj-y	+= grant-table.o features.o balloon.o manage.o preempt.o time.o
 obj-y	+= events/
 obj-y	+= xenbus/
+obj-y	+= hyper_dmabuf/
 
 nostackp := $(call cc-option, -fno-stack-protector)
 CFLAGS_features.o			:= $(nostackp)
diff --git a/drivers/xen/hyper_dmabuf/Kconfig b/drivers/xen/hyper_dmabuf/Kconfig
new file mode 100644
index 0000000..75e1f96
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/Kconfig
@@ -0,0 +1,14 @@
+menu "hyper_dmabuf options"
+
+config HYPER_DMABUF
+	tristate "Enables hyper dmabuf driver"
+	default y
+
+config HYPER_DMABUF_XEN
+	bool "Configure hyper_dmabuf for XEN hypervisor"
+	default y
+	depends on HYPER_DMABUF
+	help
+	  Configuring hyper_dmabuf driver for XEN hypervisor
+
+endmenu
diff --git a/drivers/xen/hyper_dmabuf/Makefile b/drivers/xen/hyper_dmabuf/Makefile
new file mode 100644
index 0000000..0be7445
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/Makefile
@@ -0,0 +1,34 @@
+TARGET_MODULE:=hyper_dmabuf
+
+# If we running by kernel building system
+ifneq ($(KERNELRELEASE),)
+	$(TARGET_MODULE)-objs := hyper_dmabuf_drv.o \
+                                 hyper_dmabuf_ioctl.o \
+                                 hyper_dmabuf_list.o \
+				 hyper_dmabuf_imp.o \
+				 hyper_dmabuf_msg.o \
+				 xen/hyper_dmabuf_xen_comm.o \
+				 xen/hyper_dmabuf_xen_comm_list.o
+
+obj-$(CONFIG_HYPER_DMABUF) := $(TARGET_MODULE).o
+
+# If we are running without kernel build system
+else
+BUILDSYSTEM_DIR?=../../../
+PWD:=$(shell pwd)
+
+all :
+# run kernel build system to make module
+$(MAKE) -C $(BUILDSYSTEM_DIR) M=$(PWD) modules
+
+clean:
+# run kernel build system to cleanup in current directory
+$(MAKE) -C $(BUILDSYSTEM_DIR) M=$(PWD) clean
+
+load:
+	insmod ./$(TARGET_MODULE).ko
+
+unload:
+	rmmod ./$(TARGET_MODULE).ko
+
+endif
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
new file mode 100644
index 0000000..3d9b2d6
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
@@ -0,0 +1,2 @@
+#define CURRENT_TARGET XEN
+#define INTER_DOMAIN_DMABUF_SYNCHRONIZATION
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
new file mode 100644
index 0000000..0698327
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -0,0 +1,54 @@
+#include <linux/init.h>       /* module_init, module_exit */
+#include <linux/module.h> /* version info, MODULE_LICENSE, MODULE_AUTHOR, printk() */
+#include "hyper_dmabuf_conf.h"
+#include "hyper_dmabuf_list.h"
+#include "xen/hyper_dmabuf_xen_comm_list.h"
+
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_AUTHOR("IOTG-PED, INTEL");
+
+int register_device(void);
+int unregister_device(void);
+
+/*===============================================================================================*/
+static int hyper_dmabuf_drv_init(void)
+{
+	int ret = 0;
+
+	printk( KERN_NOTICE "hyper_dmabuf_starting: Initialization started" );
+
+	ret = register_device();
+	if (ret < 0) {
+		return -EINVAL;
+	}
+
+	printk( KERN_NOTICE "initializing database for imported/exported dmabufs\n");
+
+	ret = hyper_dmabuf_table_init();
+	if (ret < 0) {
+		return -EINVAL;
+	}
+
+	ret = hyper_dmabuf_ring_table_init();
+	if (ret < 0) {
+		return -EINVAL;
+	}
+
+	/* interrupt for comm should be registered here: */
+	return ret;
+}
+
+/*-----------------------------------------------------------------------------------------------*/
+static void hyper_dmabuf_drv_exit(void)
+{
+	/* hash tables for export/import entries and ring_infos */
+	hyper_dmabuf_table_destroy();
+	hyper_dmabuf_ring_table_init();
+
+	printk( KERN_NOTICE "dma_buf-src_sink model: Exiting" );
+	unregister_device();
+}
+/*===============================================================================================*/
+
+module_init(hyper_dmabuf_drv_init);
+module_exit(hyper_dmabuf_drv_exit);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
new file mode 100644
index 0000000..2dad9a6
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -0,0 +1,101 @@
+#ifndef __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
+#define __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
+
+typedef int (*hyper_dmabuf_ioctl_t)(void *data);
+
+struct hyper_dmabuf_ioctl_desc {
+	unsigned int cmd;
+	int flags;
+	hyper_dmabuf_ioctl_t func;
+	const char *name;
+};
+
+#define HYPER_DMABUF_IOCTL_DEF(ioctl, _func, _flags) 	\
+	[_IOC_NR(ioctl)] = {				\
+			.cmd = ioctl,			\
+			.func = _func,			\
+			.flags = _flags,		\
+			.name = #ioctl			\
+	}
+
+#define IOCTL_HYPER_DMABUF_EXPORTER_RING_SETUP \
+_IOC(_IOC_NONE, 'G', 0, sizeof(struct ioctl_hyper_dmabuf_exporter_ring_setup))
+struct ioctl_hyper_dmabuf_exporter_ring_setup {
+	/* IN parameters */
+	/* Remote domain id */
+	uint32_t remote_domain;
+	grant_ref_t ring_refid; /* assigned by driver, copied to userspace after initialization */
+	uint32_t port; /* assigned by driver, copied to userspace after initialization */
+};
+
+#define IOCTL_HYPER_DMABUF_IMPORTER_RING_SETUP \
+_IOC(_IOC_NONE, 'G', 1, sizeof(struct ioctl_hyper_dmabuf_importer_ring_setup))
+struct ioctl_hyper_dmabuf_importer_ring_setup {
+	/* IN parameters */
+	/* Source domain id */
+	uint32_t source_domain;
+	/* Ring shared page refid */
+	grant_ref_t ring_refid;
+	/* Port number */
+	uint32_t port;
+};
+
+#define IOCTL_HYPER_DMABUF_EXPORT_REMOTE \
+_IOC(_IOC_NONE, 'G', 2, sizeof(struct ioctl_hyper_dmabuf_export_remote))
+struct ioctl_hyper_dmabuf_export_remote {
+	/* IN parameters */
+	/* DMA buf fd to be exported */
+	uint32_t dmabuf_fd;
+	/* Domain id to which buffer should be exported */
+	uint32_t remote_domain;
+	/* exported dma buf id */
+	uint32_t hyper_dmabuf_id;
+	uint32_t private[4];
+};
+
+#define IOCTL_HYPER_DMABUF_EXPORT_FD \
+_IOC(_IOC_NONE, 'G', 3, sizeof(struct ioctl_hyper_dmabuf_export_fd))
+struct ioctl_hyper_dmabuf_export_fd {
+	/* IN parameters */
+	/* hyper dmabuf id to be imported */
+	uint32_t hyper_dmabuf_id;
+	/* flags */
+	uint32_t flags;
+	/* OUT parameters */
+	/* exported dma buf fd */
+	uint32_t fd;
+};
+
+#define IOCTL_HYPER_DMABUF_DESTROY \
+_IOC(_IOC_NONE, 'G', 4, sizeof(struct ioctl_hyper_dmabuf_destroy))
+struct ioctl_hyper_dmabuf_destroy {
+	/* IN parameters */
+	/* hyper dmabuf id to be destroyed */
+	uint32_t hyper_dmabuf_id;
+	/* OUT parameters */
+	/* Status of request */
+	uint32_t status;
+};
+
+#define IOCTL_HYPER_DMABUF_QUERY \
+_IOC(_IOC_NONE, 'G', 5, sizeof(struct ioctl_hyper_dmabuf_query))
+struct ioctl_hyper_dmabuf_query {
+	/* in parameters */
+	/* hyper dmabuf id to be queried */
+	uint32_t hyper_dmabuf_id;
+	/* item to be queried */
+	uint32_t item;
+	/* OUT parameters */
+	/* Value of queried item */
+	uint32_t info;
+};
+
+#define IOCTL_HYPER_DMABUF_REMOTE_EXPORTER_RING_SETUP \
+_IOC(_IOC_NONE, 'G', 6, sizeof(struct ioctl_hyper_dmabuf_remote_exporter_ring_setup))
+struct ioctl_hyper_dmabuf_remote_exporter_ring_setup {
+	/* in parameters */
+	uint32_t rdomain; /* id of remote domain where exporter's ring need to be setup */
+	uint32_t info;
+};
+
+#endif //__LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
new file mode 100644
index 0000000..faa5c1b
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
@@ -0,0 +1,852 @@
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/dma-buf.h>
+#include <xen/grant_table.h>
+#include <asm/xen/page.h>
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_imp.h"
+#include "xen/hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_msg.h"
+
+#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
+
+/* return total number of pages referecned by a sgt
+ * for pre-calculation of # of pages behind a given sgt
+ */
+static int hyper_dmabuf_get_num_pgs(struct sg_table *sgt)
+{
+	struct scatterlist *sgl;
+	int length, i;
+	/* at least one page */
+	int num_pages = 1;
+
+	sgl = sgt->sgl;
+
+	length = sgl->length - PAGE_SIZE + sgl->offset;
+	num_pages += ((length + PAGE_SIZE - 1)/PAGE_SIZE); /* round-up */
+
+	for (i = 1; i < sgt->nents; i++) {
+		sgl = sg_next(sgl);
+		num_pages += ((sgl->length + PAGE_SIZE - 1) / PAGE_SIZE); /* round-up */
+	}
+
+	return num_pages;
+}
+
+/* extract pages directly from struct sg_table */
+struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
+{
+	struct hyper_dmabuf_pages_info *pinfo;
+	int i, j;
+	int length;
+	struct scatterlist *sgl;
+
+	pinfo = kmalloc(sizeof(*pinfo), GFP_KERNEL);
+	if (pinfo == NULL)
+		return NULL;
+
+	pinfo->pages = kmalloc(sizeof(struct page *)*hyper_dmabuf_get_num_pgs(sgt), GFP_KERNEL);
+	if (pinfo->pages == NULL)
+		return NULL;
+
+	sgl = sgt->sgl;
+
+	pinfo->nents = 1;
+	pinfo->frst_ofst = sgl->offset;
+	pinfo->pages[0] = sg_page(sgl);
+	length = sgl->length - PAGE_SIZE + sgl->offset;
+	i=1;
+
+	while (length > 0) {
+		pinfo->pages[i] = nth_page(sg_page(sgl), i);
+		length -= PAGE_SIZE;
+		pinfo->nents++;
+		i++;
+	}
+
+	for (j = 1; j < sgt->nents; j++) {
+		sgl = sg_next(sgl);
+		pinfo->pages[i++] = sg_page(sgl);
+		length = sgl->length - PAGE_SIZE;
+		pinfo->nents++;
+
+		while (length > 0) {
+			pinfo->pages[i] = nth_page(sg_page(sgl), i);
+			length -= PAGE_SIZE;
+			pinfo->nents++;
+			i++;
+		}
+	}
+
+	/*
+	 * lenght at that point will be 0 or negative,
+	 * so to calculate last page size just add it to PAGE_SIZE
+	 */
+	pinfo->last_len = PAGE_SIZE + length;
+
+	return pinfo;
+}
+
+/* create sg_table with given pages and other parameters */
+struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
+				int frst_ofst, int last_len, int nents)
+{
+	struct sg_table *sgt;
+	struct scatterlist *sgl;
+	int i, ret;
+
+	sgt = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
+	if (sgt == NULL) {
+		return NULL;
+	}
+
+	ret = sg_alloc_table(sgt, nents, GFP_KERNEL);
+	if (ret) {
+		kfree(sgt);
+		return NULL;
+	}
+
+	sgl = sgt->sgl;
+
+	sg_set_page(sgl, pages[0], PAGE_SIZE-frst_ofst, frst_ofst);
+
+	for (i=1; i<nents-1; i++) {
+		sgl = sg_next(sgl);
+		sg_set_page(sgl, pages[i], PAGE_SIZE, 0);
+	}
+
+	if (i > 1) /* more than one page */ {
+		sgl = sg_next(sgl);
+		sg_set_page(sgl, pages[i], last_len, 0);
+	}
+
+	return sgt;
+}
+
+/*
+ * Creates 2 level page directory structure for referencing shared pages.
+ * Top level page is a single page that contains up to 1024 refids that
+ * point to 2nd level pages.
+ * Each 2nd level page contains up to 1024 refids that point to shared
+ * data pages.
+ * There will always be one top level page and number of 2nd level pages
+ * depends on number of shared data pages.
+ *
+ *      Top level page                2nd level pages            Data pages
+ * +-------------------------+   ┌>+--------------------+ ┌--->+------------+
+ * |2nd level page 0 refid   |---┘ |Data page 0 refid   |-┘    |Data page 0 |
+ * |2nd level page 1 refid   |---┐ |Data page 1 refid   |-┐    +------------+
+ * |           ...           |   | |     ....           | |
+ * |2nd level page 1023 refid|-┐ | |Data page 1023 refid| └--->+------------+
+ * +-------------------------+ | | +--------------------+      |Data page 1 |
+ *                             | |                             +------------+
+ *                             | └>+--------------------+
+ *                             |   |Data page 1024 refid|
+ *                             |   |Data page 1025 refid|
+ *                             |   |       ...          |
+ *                             |   |Data page 2047 refid|
+ *                             |   +--------------------+
+ *                             |
+ *                             |        .....
+ *                             └-->+-----------------------+
+ *                                 |Data page 1047552 refid|
+ *                                 |Data page 1047553 refid|
+ *                                 |       ...             |
+ *                                 |Data page 1048575 refid|-->+------------------+
+ *                                 +-----------------------+   |Data page 1048575 |
+ *                                                             +------------------+
+ *
+ * Using such 2 level structure it is possible to reference up to 4GB of
+ * shared data using single refid pointing to top level page.
+ *
+ * Returns refid of top level page.
+ */
+grant_ref_t hyper_dmabuf_create_addressing_tables(grant_ref_t *data_refs, int nents, int rdomain,
+						  struct hyper_dmabuf_shared_pages_info *shared_pages_info)
+{
+	/*
+	 * Calculate number of pages needed for 2nd level addresing:
+	 */
+	int n_2nd_level_pages = (nents/REFS_PER_PAGE + ((nents % REFS_PER_PAGE) ? 1: 0));/* rounding */
+	int i;
+	unsigned long gref_page_start;
+	grant_ref_t *tmp_page;
+	grant_ref_t top_level_ref;
+	grant_ref_t * addr_refs;
+	addr_refs = kcalloc(sizeof(grant_ref_t), n_2nd_level_pages, GFP_KERNEL);
+
+	gref_page_start = __get_free_pages(GFP_KERNEL, n_2nd_level_pages);
+	tmp_page = (grant_ref_t *)gref_page_start;
+
+	/* Store 2nd level pages to be freed later */
+	shared_pages_info->addr_pages = tmp_page;
+
+	/*TODO: make sure that allocated memory is filled with 0*/
+
+	/* Share 2nd level addressing pages in readonly mode*/
+	for (i=0; i< n_2nd_level_pages; i++) {
+		addr_refs[i] = gnttab_grant_foreign_access(rdomain, virt_to_mfn((unsigned long)tmp_page+i*PAGE_SIZE ), 1);
+	}
+
+	/*
+	 * fill second level pages with data refs
+	 */
+	for (i = 0; i < nents; i++) {
+		tmp_page[i] = data_refs[i];
+	}
+
+
+	/* allocate top level page */
+	gref_page_start = __get_free_pages(GFP_KERNEL, 1);
+	tmp_page = (grant_ref_t *)gref_page_start;
+
+	/* Store top level page to be freed later */
+	shared_pages_info->top_level_page = tmp_page;
+
+	/*
+	 * fill top level page with reference numbers of second level pages refs.
+	 */
+	for (i=0; i< n_2nd_level_pages; i++) {
+		tmp_page[i] =  addr_refs[i];
+	}
+
+	/* Share top level addressing page in readonly mode*/
+	top_level_ref = gnttab_grant_foreign_access(rdomain, virt_to_mfn((unsigned long)tmp_page), 1);
+
+	kfree(addr_refs);
+
+	return top_level_ref;
+}
+
+/*
+ * Maps provided top level ref id and then return array of pages containing data refs.
+ */
+struct page** hyper_dmabuf_get_data_refs(grant_ref_t top_level_ref, int domid, int nents,
+					 struct hyper_dmabuf_shared_pages_info *shared_pages_info)
+{
+	struct page *top_level_page;
+	struct page **level2_pages;
+
+	grant_ref_t *top_level_refs;
+
+	struct gnttab_map_grant_ref top_level_map_ops;
+	struct gnttab_unmap_grant_ref top_level_unmap_ops;
+
+	struct gnttab_map_grant_ref *map_ops;
+	struct gnttab_unmap_grant_ref *unmap_ops;
+
+	unsigned long addr;
+	int n_level2_refs = 0;
+	int i;
+
+	n_level2_refs = (nents / REFS_PER_PAGE) + ((nents % REFS_PER_PAGE) ? 1 : 0);
+
+	level2_pages = kcalloc(sizeof(struct page*), n_level2_refs, GFP_KERNEL);
+
+	map_ops = kcalloc(sizeof(map_ops[0]), REFS_PER_PAGE, GFP_KERNEL);
+	unmap_ops = kcalloc(sizeof(unmap_ops[0]), REFS_PER_PAGE, GFP_KERNEL);
+
+	/* Map top level addressing page */
+	if (gnttab_alloc_pages(1, &top_level_page)) {
+		printk("Cannot allocate pages\n");
+		return NULL;
+	}
+
+	addr = (unsigned long)pfn_to_kaddr(page_to_pfn(top_level_page));
+	gnttab_set_map_op(&top_level_map_ops, addr, GNTMAP_host_map | GNTMAP_readonly, top_level_ref, domid);
+	gnttab_set_unmap_op(&top_level_unmap_ops, addr, GNTMAP_host_map | GNTMAP_readonly, -1);
+
+	if (gnttab_map_refs(&top_level_map_ops, NULL, &top_level_page, 1)) {
+		printk("\nxen: dom0: HYPERVISOR map grant ref failed");
+		return NULL;
+	}
+
+	if (top_level_map_ops.status) {
+		printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d",
+				top_level_map_ops.status);
+		return NULL;
+	} else {
+		top_level_unmap_ops.handle = top_level_map_ops.handle;
+	}
+
+	/* Parse contents of top level addressing page to find how many second level pages is there*/
+	top_level_refs = pfn_to_kaddr(page_to_pfn(top_level_page));
+
+	/* Map all second level pages */
+	if (gnttab_alloc_pages(n_level2_refs, level2_pages)) {
+		printk("Cannot allocate pages\n");
+		return NULL;
+	}
+
+	for (i = 0; i < n_level2_refs; i++) {
+		addr = (unsigned long)pfn_to_kaddr(page_to_pfn(level2_pages[i]));
+		gnttab_set_map_op(&map_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, top_level_refs[i], domid);
+		gnttab_set_unmap_op(&unmap_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, -1);
+	}
+
+	if (gnttab_map_refs(map_ops, NULL, level2_pages, n_level2_refs)) {
+		printk("\nxen: dom0: HYPERVISOR map grant ref failed");
+		return NULL;
+	}
+
+	/* Checks if pages were mapped correctly and at the same time is calculating total number of data refids*/
+	for (i = 0; i < n_level2_refs; i++) {
+		if (map_ops[i].status) {
+			printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d",
+					map_ops[i].status);
+			return NULL;
+		} else {
+			unmap_ops[i].handle = map_ops[i].handle;
+		}
+	}
+
+	/* Unmap top level page, as it won't be needed any longer */
+	if (gnttab_unmap_refs(&top_level_unmap_ops, NULL, &top_level_page, 1)) {
+		printk("\xen: cannot unmap top level page\n");
+		return NULL;
+	}
+
+	gnttab_free_pages(1, &top_level_page);
+	kfree(map_ops);
+	shared_pages_info->unmap_ops = unmap_ops;
+
+	return level2_pages;
+}
+
+
+/* This collects all reference numbers for 2nd level shared pages and create a table
+ * with those in 1st level shared pages then return reference numbers for this top level
+ * table. */
+grant_ref_t hyper_dmabuf_create_gref_table(struct page **pages, int rdomain, int nents,
+					   struct hyper_dmabuf_shared_pages_info *shared_pages_info)
+{
+	int i = 0;
+	grant_ref_t *data_refs;
+	grant_ref_t top_level_ref;
+
+	/* allocate temp array for refs of shared data pages */
+	data_refs = kcalloc(nents, sizeof(grant_ref_t), GFP_KERNEL);
+
+	/* share data pages in rw mode*/
+	for (i=0; i<nents; i++) {
+		data_refs[i] = gnttab_grant_foreign_access(rdomain, pfn_to_mfn(page_to_pfn(pages[i])), 0);
+	}
+
+	/* create additional shared pages with 2 level addressing of data pages */
+	top_level_ref = hyper_dmabuf_create_addressing_tables(data_refs, nents, rdomain,
+							      shared_pages_info);
+
+	/* Store exported pages refid to be unshared later */
+	shared_pages_info->data_refs = data_refs;
+	shared_pages_info->top_level_ref = top_level_ref;
+
+	return top_level_ref;
+}
+
+int hyper_dmabuf_cleanup_gref_table(struct hyper_dmabuf_sgt_info *sgt_info) {
+	uint32_t i = 0;
+	struct hyper_dmabuf_shared_pages_info *shared_pages_info = &sgt_info->shared_pages_info;
+
+	grant_ref_t *ref = shared_pages_info->top_level_page;
+	int n_2nd_level_pages = (sgt_info->sgt->nents/REFS_PER_PAGE + ((sgt_info->sgt->nents % REFS_PER_PAGE) ? 1: 0));/* rounding */
+
+
+	if (shared_pages_info->data_refs == NULL ||
+	    shared_pages_info->addr_pages ==  NULL ||
+	    shared_pages_info->top_level_page == NULL ||
+	    shared_pages_info->top_level_ref == -1) {
+		printk("gref table for hyper_dmabuf already cleaned up\n");
+		return 0;
+	}
+
+	/* End foreign access for 2nd level addressing pages */
+	while(ref[i] != 0 && i < n_2nd_level_pages) {
+		if (gnttab_query_foreign_access(ref[i])) {
+			printk("refid not shared !!\n");
+		}
+		if (!gnttab_end_foreign_access_ref(ref[i], 1)) {
+			printk("refid still in use!!!\n");
+		}
+		i++;
+	}
+	free_pages((unsigned long)shared_pages_info->addr_pages, i);
+
+	/* End foreign access for top level addressing page */
+	if (gnttab_query_foreign_access(shared_pages_info->top_level_ref)) {
+		printk("refid not shared !!\n");
+	}
+	if (!gnttab_end_foreign_access_ref(shared_pages_info->top_level_ref, 1)) {
+		printk("refid still in use!!!\n");
+	}
+	gnttab_end_foreign_access_ref(shared_pages_info->top_level_ref, 1);
+	free_pages((unsigned long)shared_pages_info->top_level_page, 1);
+
+	/* End foreign access for data pages, but do not free them */
+	for (i = 0; i < sgt_info->sgt->nents; i++) {
+		if (gnttab_query_foreign_access(shared_pages_info->data_refs[i])) {
+			printk("refid not shared !!\n");
+		}
+		gnttab_end_foreign_access_ref(shared_pages_info->data_refs[i], 0);
+	}
+
+	kfree(shared_pages_info->data_refs);
+
+	shared_pages_info->data_refs = NULL;
+	shared_pages_info->addr_pages = NULL;
+	shared_pages_info->top_level_page = NULL;
+	shared_pages_info->top_level_ref = -1;
+
+	return 0;
+}
+
+int hyper_dmabuf_cleanup_imported_pages(struct hyper_dmabuf_imported_sgt_info *sgt_info) {
+	struct hyper_dmabuf_shared_pages_info *shared_pages_info = &sgt_info->shared_pages_info;
+
+	if(shared_pages_info->unmap_ops == NULL || shared_pages_info->data_pages == NULL) {
+		printk("Imported pages already cleaned up or buffer was not imported yet\n");
+		return 0;
+	}
+
+	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL, shared_pages_info->data_pages, sgt_info->nents) ) {
+		printk("Cannot unmap data pages\n");
+		return -EINVAL;
+	}
+
+	gnttab_free_pages(sgt_info->nents, shared_pages_info->data_pages);
+	kfree(shared_pages_info->data_pages);
+	kfree(shared_pages_info->unmap_ops);
+	shared_pages_info->unmap_ops = NULL;
+	shared_pages_info->data_pages = NULL;
+
+	return 0;
+}
+
+/* map and construct sg_lists from reference numbers */
+struct sg_table* hyper_dmabuf_map_pages(grant_ref_t top_level_gref, int frst_ofst, int last_len, int nents, int sdomain,
+					struct hyper_dmabuf_shared_pages_info *shared_pages_info)
+{
+	struct sg_table *st;
+	struct page **pages;
+	struct gnttab_map_grant_ref *ops;
+	struct gnttab_unmap_grant_ref *unmap_ops;
+	unsigned long addr;
+	grant_ref_t *refs;
+	int i;
+	int n_level2_refs = (nents / REFS_PER_PAGE) + ((nents % REFS_PER_PAGE) ? 1 : 0);
+
+	/* Get data refids */
+	struct page** refid_pages = hyper_dmabuf_get_data_refs(top_level_gref, sdomain, nents,
+							       shared_pages_info);
+
+	pages = kcalloc(nents, sizeof(struct page *), GFP_KERNEL);
+	if (pages == NULL) {
+		return NULL;
+	}
+
+	/* allocate new pages that are mapped to shared pages via grant-table */
+	if (gnttab_alloc_pages(nents, pages)) {
+		printk("Cannot allocate pages\n");
+		return NULL;
+	}
+
+	ops = kcalloc(nents, sizeof(struct gnttab_map_grant_ref), GFP_KERNEL);
+	unmap_ops = kcalloc(nents, sizeof(struct gnttab_unmap_grant_ref), GFP_KERNEL);
+	if (!ops || !unmap_ops) {
+		printk("Cannot allocate grant-table operation arrays\n");
+		return NULL;
+	}
+
+	/* the refid of data page i lives in 2nd-level page i / REFS_PER_PAGE,
+	 * at entry i % REFS_PER_PAGE */
+	for (i = 0; i < nents; i++) {
+		addr = (unsigned long)pfn_to_kaddr(page_to_pfn(pages[i]));
+		refs = pfn_to_kaddr(page_to_pfn(refid_pages[i / REFS_PER_PAGE]));
+		gnttab_set_map_op(&ops[i], addr, GNTMAP_host_map | GNTMAP_readonly,
+				  refs[i % REFS_PER_PAGE], sdomain);
+		gnttab_set_unmap_op(&unmap_ops[i], addr,
+				    GNTMAP_host_map | GNTMAP_readonly, -1);
+	}
+
+	if (gnttab_map_refs(ops, NULL, pages, nents)) {
+		printk("\nxen: dom0: HYPERVISOR map grant ref failed\n");
+		return NULL;
+	}
+
+	for (i = 0; i < nents; i++) {
+		if (ops[i].status) {
+			printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d\n",
+				ops[i].status);
+			return NULL;
+		} else {
+			unmap_ops[i].handle = ops[i].handle;
+		}
+	}
+
+	st = hyper_dmabuf_create_sgt(pages, frst_ofst, last_len, nents);
+
+	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL, refid_pages, n_level2_refs) ) {
+		printk("Cannot unmap 2nd level refs\n");
+		return NULL;
+	}
+
+	gnttab_free_pages(n_level2_refs, refid_pages);
+	kfree(refid_pages);
+
+	kfree(shared_pages_info->unmap_ops);
+	shared_pages_info->unmap_ops = unmap_ops;
+	shared_pages_info->data_pages = pages;
+	kfree(ops);
+
+	return st;
+}
+
+inline int hyper_dmabuf_sync_request_and_wait(int id, int ops)
+{
+	struct hyper_dmabuf_ring_rq *req;
+	int operands[2];
+	int ret;
+
+	operands[0] = id;
+	operands[1] = ops;
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+	if (!req)
+		return -ENOMEM;
+
+	hyper_dmabuf_create_request(req, HYPER_DMABUF_OPS_TO_SOURCE, &operands[0]);
+
+	/* send request */
+	ret = hyper_dmabuf_send_request(id, req);
+
+	/* TODO: wait until it gets response.. or can we just move on? */
+
+	kfree(req);
+
+	return ret;
+}
+
+static int hyper_dmabuf_ops_attach(struct dma_buf* dmabuf, struct device* dev,
+			struct dma_buf_attachment *attach)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!attach->dmabuf->priv)
+		return -EINVAL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_ATTACH);
+
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+
+	return ret;
+}
+
+static void hyper_dmabuf_ops_detach(struct dma_buf* dmabuf, struct dma_buf_attachment *attach)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!attach->dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_DETACH);
+
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+}
+
+static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachment,
+						enum dma_data_direction dir)
+{
+	struct sg_table *st;
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct hyper_dmabuf_pages_info *page_info;
+	int ret;
+
+	if (!attachment->dmabuf->priv)
+		return NULL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attachment->dmabuf->priv;
+
+	/* extract pages from sgt */
+	page_info = hyper_dmabuf_ext_pgs(sgt_info->sgt);
+	if (!page_info)
+		return NULL;
+
+	/* create a new sg_table with extracted pages */
+	st = hyper_dmabuf_create_sgt(page_info->pages, page_info->frst_ofst,
+				page_info->last_len, page_info->nents);
+	if (!st)
+		return NULL; /* err_free_sg would dereference a NULL table here */
+
+	if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir))
+		goto err_free_sg;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_MAP);
+
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+
+	return st;
+
+err_free_sg:
+	sg_free_table(st);
+	kfree(st);
+	return NULL;
+}
+
+static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
+						struct sg_table *sg,
+						enum dma_data_direction dir)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!attachment->dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attachment->dmabuf->priv;
+
+	dma_unmap_sg(attachment->dev, sg->sgl, sg->nents, dir);
+
+	sg_free_table(sg);
+	kfree(sg);
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_UNMAP);
+
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+}
+
+static void hyper_dmabuf_ops_release(struct dma_buf *dmabuf)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_RELEASE);
+
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+}
+
+static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return -EINVAL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+
+	return ret;
+}
+
+static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return -EINVAL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_END_CPU_ACCESS);
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+
+	return 0;
+}
+
+static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return NULL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_KMAP_ATOMIC);
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+
+	return NULL; /* for now NULL.. need to return the address of mapped region */
+}
+
+static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum, void *vaddr)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+}
+
+static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return NULL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_KMAP);
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+
+	return NULL; /* for now NULL.. need to return the address of mapped region */
+}
+
+static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum, void *vaddr)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_KUNMAP);
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+}
+
+static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return -EINVAL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_MMAP);
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+
+	return ret;
+}
+
+static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return NULL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_VMAP);
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+
+	return NULL;
+}
+
+static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_VUNMAP);
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+}
+
+static const struct dma_buf_ops hyper_dmabuf_ops = {
+		.attach = hyper_dmabuf_ops_attach,
+		.detach = hyper_dmabuf_ops_detach,
+		.map_dma_buf = hyper_dmabuf_ops_map,
+		.unmap_dma_buf = hyper_dmabuf_ops_unmap,
+		.release = hyper_dmabuf_ops_release,
+		.begin_cpu_access = (void*)hyper_dmabuf_ops_begin_cpu_access,
+		.end_cpu_access = (void*)hyper_dmabuf_ops_end_cpu_access,
+		.map_atomic = hyper_dmabuf_ops_kmap_atomic,
+		.unmap_atomic = hyper_dmabuf_ops_kunmap_atomic,
+		.map = hyper_dmabuf_ops_kmap,
+		.unmap = hyper_dmabuf_ops_kunmap,
+		.mmap = hyper_dmabuf_ops_mmap,
+		.vmap = hyper_dmabuf_ops_vmap,
+		.vunmap = hyper_dmabuf_ops_vunmap,
+};
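+
+/* every op above forwards a HYPER_DMABUF_OPS_* notification to the
+ * exporting domain through hyper_dmabuf_sync_request_and_wait(), so the
+ * exporter can mirror attach/map/unmap/release activity on the
+ * original dma-buf */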
+
+/* exporting dmabuf as fd */
+int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags)
+{
+	int fd;
+	struct dma_buf *dmabuf;
+
+	/* call hyper_dmabuf_export_dma_buf to create and bind a dma_buf
+	 * for this buffer, then install an fd for it */
+	dmabuf = hyper_dmabuf_export_dma_buf(dinfo);
+	if (IS_ERR(dmabuf))
+		return PTR_ERR(dmabuf);
+
+	fd = dma_buf_fd(dmabuf, flags);
+
+	return fd;
+}
+
+struct dma_buf* hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo)
+{
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+
+	exp_info.ops = &hyper_dmabuf_ops;
+	exp_info.size = dinfo->sgt->nents * PAGE_SIZE; /* multiple of PAGE_SIZE, not considering offset */
+	exp_info.flags = 0; /* TODO: which flags are appropriate here? */
+	exp_info.priv = dinfo;
+
+	return dma_buf_export(&exp_info);
+}
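+
+/* a local consumer in the importing domain can then use the standard
+ * dma-buf API on the re-exported buffer, e.g. (a sketch):
+ *
+ *   struct dma_buf *buf = dma_buf_get(fd);
+ *   struct dma_buf_attachment *att = dma_buf_attach(buf, dev);
+ *   struct sg_table *sgt = dma_buf_map_attachment(att, DMA_BIDIRECTIONAL);
+ *   ...then dma_buf_unmap_attachment/dma_buf_detach/dma_buf_put in reverse
+ */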
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
new file mode 100644
index 0000000..003c158
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
@@ -0,0 +1,31 @@
+#ifndef __HYPER_DMABUF_IMP_H__
+#define __HYPER_DMABUF_IMP_H__
+
+#include "hyper_dmabuf_struct.h"
+
+/* extract pages directly from struct sg_table */
+struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt);
+
+/* create sg_table with given pages and other parameters */
+struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
+                                int frst_ofst, int last_len, int nents);
+
+grant_ref_t hyper_dmabuf_create_gref_table(struct page **pages, int rdomain, int nents,
+					   struct hyper_dmabuf_shared_pages_info *shared_pages_info);
+
+int hyper_dmabuf_cleanup_gref_table(struct hyper_dmabuf_sgt_info *sgt_info);
+
+int hyper_dmabuf_cleanup_imported_pages(struct hyper_dmabuf_imported_sgt_info *sgt_info);
+
+/* map first level tables that contains reference numbers for actual shared pages */
+grant_ref_t *hyper_dmabuf_map_gref_table(grant_ref_t *gref_table, int n_pages_table);
+
+/* map and construct sg_lists from reference numbers */
+struct sg_table* hyper_dmabuf_map_pages(grant_ref_t gref, int frst_ofst, int last_len, int nents, int sdomain,
+					struct hyper_dmabuf_shared_pages_info *shared_pages_info);
+
+int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags);
+
+struct dma_buf* hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo);
+
+#endif /* __HYPER_DMABUF_IMP_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
new file mode 100644
index 0000000..5e50908
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -0,0 +1,462 @@
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/miscdevice.h>
+#include <linux/uaccess.h>
+#include <linux/dma-buf.h>
+#include <linux/delay.h>
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_imp.h"
+#include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_query.h"
+#include "xen/hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_msg.h"
+
+static struct hyper_dmabuf_private {
+	struct device *device;
+} hyper_dmabuf_private;
+
+static uint32_t hyper_dmabuf_id_gen(void) {
+	/* TODO: add proper implementation */
+	static uint32_t id = 0;
+	static int32_t domid = -1;
+	if (domid == -1) {
+		domid = hyper_dmabuf_get_domid();
+	}
+	return HYPER_DMABUF_ID_IMPORTER(domid, id++);
+}
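+
+/* example: in source domain 3, successive calls yield 0x03000000,
+ * 0x03000001, ... (top byte = domid, low 24 bits = a local counter;
+ * see HYPER_DMABUF_ID_IMPORTER in hyper_dmabuf_struct.h) */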
+
+static int hyper_dmabuf_exporter_ring_setup(void *data)
+{
+	struct ioctl_hyper_dmabuf_exporter_ring_setup *ring_attr;
+	int ret = 0;
+
+	if (!data) {
+		printk("user data is NULL\n");
+		return -EINVAL;
+	}
+	ring_attr = (struct ioctl_hyper_dmabuf_exporter_ring_setup *)data;
+
+	ret = hyper_dmabuf_exporter_ringbuf_init(ring_attr->remote_domain,
+						&ring_attr->ring_refid,
+						&ring_attr->port);
+
+	return ret;
+}
+
+static int hyper_dmabuf_importer_ring_setup(void *data)
+{
+	struct ioctl_hyper_dmabuf_importer_ring_setup *setup_imp_ring_attr;
+	int ret = 0;
+
+	if (!data) {
+		printk("user data is NULL\n");
+		return -EINVAL;
+	}
+
+	setup_imp_ring_attr = (struct ioctl_hyper_dmabuf_importer_ring_setup *)data;
+
+	/* user need to provide a port number and ref # for the page used as ring buffer */
+	ret = hyper_dmabuf_importer_ringbuf_init(setup_imp_ring_attr->source_domain,
+						 setup_imp_ring_attr->ring_refid,
+						 setup_imp_ring_attr->port);
+
+	return ret;
+}
+
+static int hyper_dmabuf_export_remote(void *data)
+{
+	struct ioctl_hyper_dmabuf_export_remote *export_remote_attr;
+	struct dma_buf *dma_buf;
+	struct dma_buf_attachment *attachment;
+	struct sg_table *sgt;
+	struct hyper_dmabuf_pages_info *page_info;
+	struct hyper_dmabuf_sgt_info *sgt_info;
+	struct hyper_dmabuf_ring_rq *req;
+	int operands[9];
+	int ret = 0;
+
+	if (!data) {
+		printk("user data is NULL\n");
+		return -EINVAL;
+	}
+
+	export_remote_attr = (struct ioctl_hyper_dmabuf_export_remote *)data;
+
+	dma_buf = dma_buf_get(export_remote_attr->dmabuf_fd);
+	if (IS_ERR(dma_buf)) {
+		printk("Cannot get dma buf\n");
+		return PTR_ERR(dma_buf);
+	}
+
+	attachment = dma_buf_attach(dma_buf, hyper_dmabuf_private.device);
+	if (IS_ERR(attachment)) {
+		printk("Cannot get attachment\n");
+		dma_buf_put(dma_buf);
+		return PTR_ERR(attachment);
+	}
+
+	/* we check if this specific attachment was already exported
+	 * to the same domain and if yes, it returns hyper_dmabuf_id
+	 * of pre-exported sgt */
+	ret = hyper_dmabuf_find_id(attachment, export_remote_attr->remote_domain);
+	if (ret != -1) {
+		dma_buf_detach(dma_buf, attachment);
+		dma_buf_put(dma_buf);
+		export_remote_attr->hyper_dmabuf_id = ret;
+		return 0;
+	}
+	/* Clear ret, as that will cause whole ioctl to return failure to userspace, which is not true */
+	ret = 0;
+
+	sgt = dma_buf_map_attachment(attachment, DMA_BIDIRECTIONAL);
+	if (IS_ERR(sgt)) {
+		dma_buf_detach(dma_buf, attachment);
+		dma_buf_put(dma_buf);
+		return PTR_ERR(sgt);
+	}
+
+	sgt_info = kmalloc(sizeof(*sgt_info), GFP_KERNEL);
+	if (!sgt_info) {
+		dma_buf_unmap_attachment(attachment, sgt, DMA_BIDIRECTIONAL);
+		dma_buf_detach(dma_buf, attachment);
+		dma_buf_put(dma_buf);
+		return -ENOMEM;
+	}
+
+	sgt_info->hyper_dmabuf_id = hyper_dmabuf_id_gen();
+	/* TODO: We might need to consider using port number on event channel? */
+	sgt_info->hyper_dmabuf_rdomain = export_remote_attr->remote_domain;
+	sgt_info->sgt = sgt;
+	sgt_info->attachment = attachment;
+	sgt_info->dma_buf = dma_buf;
+
+	page_info = hyper_dmabuf_ext_pgs(sgt);
+	if (page_info == NULL)
+		goto fail_export;
+
+	/* now register it to export list */
+	hyper_dmabuf_register_exported(sgt_info);
+
+	page_info->hyper_dmabuf_rdomain = sgt_info->hyper_dmabuf_rdomain;
+	page_info->hyper_dmabuf_id = sgt_info->hyper_dmabuf_id; /* may not be needed */
+
+	export_remote_attr->hyper_dmabuf_id = sgt_info->hyper_dmabuf_id;
+
+	/* now create the table of grefs for the shared pages and compose
+	 * a request for the importer via the ring */
+	operands[0] = page_info->hyper_dmabuf_id;
+	operands[1] = page_info->nents;
+	operands[2] = page_info->frst_ofst;
+	operands[3] = page_info->last_len;
+	operands[4] = hyper_dmabuf_create_gref_table(page_info->pages, export_remote_attr->remote_domain,
+						page_info->nents, &sgt_info->shared_pages_info);
+	/* driver/application specific private info, max 32 bytes */
+	operands[5] = export_remote_attr->private[0];
+	operands[6] = export_remote_attr->private[1];
+	operands[7] = export_remote_attr->private[2];
+	operands[8] = export_remote_attr->private[3];
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+	if (!req)
+		goto fail_send_request; /* kfree(NULL) below is safe */
+
+	/* composing a message to the importer */
+	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT, &operands[0]);
+	if (hyper_dmabuf_send_request(export_remote_attr->remote_domain, req))
+		goto fail_send_request;
+
+	/* free msg */
+	kfree(req);
+	/* free page_info */
+	kfree(page_info);
+
+	return ret;
+
+fail_send_request:
+	kfree(req);
+	hyper_dmabuf_remove_exported(sgt_info->hyper_dmabuf_id);
+
+fail_export:
+	dma_buf_unmap_attachment(sgt_info->attachment, sgt_info->sgt, DMA_BIDIRECTIONAL);
+	dma_buf_detach(sgt_info->dma_buf, sgt_info->attachment);
+	dma_buf_put(sgt_info->dma_buf);
+
+	return -EINVAL;
+}
+
+static int hyper_dmabuf_export_fd_ioctl(void *data)
+{
+	struct ioctl_hyper_dmabuf_export_fd *export_fd_attr;
+	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
+	int ret = 0;
+
+	if (!data) {
+		printk("user data is NULL\n");
+		return -EINVAL;
+	}
+
+	export_fd_attr = (struct ioctl_hyper_dmabuf_export_fd *)data;
+
+	/* look for dmabuf for the id */
+	imported_sgt_info = hyper_dmabuf_find_imported(export_fd_attr->hyper_dmabuf_id);
+	if (imported_sgt_info == NULL) /* can't find sgt from the table */
+		return -EINVAL;
+
+	printk("%s Found buffer gref %d  off %d last len %d nents %d domain %d\n", __func__,
+		imported_sgt_info->gref, imported_sgt_info->frst_ofst,
+		imported_sgt_info->last_len, imported_sgt_info->nents,
+		HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported_sgt_info->hyper_dmabuf_id));
+
+	imported_sgt_info->sgt = hyper_dmabuf_map_pages(imported_sgt_info->gref,
+						imported_sgt_info->frst_ofst,
+						imported_sgt_info->last_len,
+						imported_sgt_info->nents,
+						HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported_sgt_info->hyper_dmabuf_id),
+						&imported_sgt_info->shared_pages_info);
+
+	if (!imported_sgt_info->sgt)
+		return -EINVAL;
+
+	export_fd_attr->fd = hyper_dmabuf_export_fd(imported_sgt_info, export_fd_attr->flags);
+	if (export_fd_attr->fd < 0) {
+		/* failed to get a new fd */
+		ret = export_fd_attr->fd;
+	}
+
+	return ret;
+}
+
+/* remove dmabuf from the database and send a request to the remote
+ * (importing) domain to unmap it */
+static int hyper_dmabuf_destroy(void *data)
+{
+	struct ioctl_hyper_dmabuf_destroy *destroy_attr;
+	struct hyper_dmabuf_sgt_info *sgt_info;
+	struct hyper_dmabuf_ring_rq *req;
+	int ret;
+
+	if (!data) {
+		printk("user data is NULL\n");
+		return -EINVAL;
+	}
+
+	destroy_attr = (struct ioctl_hyper_dmabuf_destroy *)data;
+
+	/* find dmabuf in export list */
+	sgt_info = hyper_dmabuf_find_exported(destroy_attr->hyper_dmabuf_id);
+	if (sgt_info == NULL) { /* failed to find corresponding entry in export list */
+		destroy_attr->status = -EINVAL;
+		return -EFAULT;
+	}
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+	if (!req)
+		return -ENOMEM;
+
+	hyper_dmabuf_create_request(req, HYPER_DMABUF_DESTROY, &destroy_attr->hyper_dmabuf_id);
+
+	/* now send destroy request to remote domain
+	 * currently assuming there's only one importer exist */
+	ret = hyper_dmabuf_send_request(sgt_info->hyper_dmabuf_rdomain, req);
+	if (ret < 0) {
+		kfree(req);
+		return -EFAULT;
+	}
+
+	/* free msg */
+	kfree(req);
+	destroy_attr->status = ret;
+
+	/* the rest of the cleanup will follow when the importer frees its
+	 * buffer; current implementation assumes there is only one importer */
+
+	return ret;
+}
+
+static int hyper_dmabuf_query(void *data)
+{
+	struct ioctl_hyper_dmabuf_query *query_attr;
+	struct hyper_dmabuf_sgt_info *sgt_info;
+	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
+	int ret = 0;
+
+	if (!data) {
+		printk("user data is NULL\n");
+		return -EINVAL;
+	}
+
+	query_attr = (struct ioctl_hyper_dmabuf_query *)data;
+
+	sgt_info = hyper_dmabuf_find_exported(query_attr->hyper_dmabuf_id);
+	imported_sgt_info = hyper_dmabuf_find_imported(query_attr->hyper_dmabuf_id);
+
+	/* if the dmabuf can't be found in either list, return an error */
+	if (!sgt_info && !imported_sgt_info) {
+		printk("can't find entry anywhere\n");
+		return -EINVAL;
+	}
+
+	/* not considering the case where a dmabuf is found on both queues
+	 * in one domain */
+	switch (query_attr->item)
+	{
+		case DMABUF_QUERY_TYPE_LIST:
+			if (sgt_info) {
+				query_attr->info = EXPORTED;
+			} else {
+				query_attr->info = IMPORTED;
+			}
+			break;
+
+		/* exporting domain of this specific dmabuf*/
+		case DMABUF_QUERY_EXPORTER:
+			if (sgt_info) {
+				query_attr->info = 0xFFFFFFFF; /* myself */
+			} else {
+				query_attr->info = (HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported_sgt_info->hyper_dmabuf_id));
+			}
+			break;
+
+		/* importing domain of this specific dmabuf */
+		case DMABUF_QUERY_IMPORTER:
+			if (sgt_info) {
+				query_attr->info = sgt_info->hyper_dmabuf_rdomain;
+			} else {
+#if 0 /* TODO: a global variable, current_domain does not exist yet*/
+				query_attr->info = current_domain;
+#endif
+			}
+			break;
+
+		/* size of dmabuf in byte */
+		case DMABUF_QUERY_SIZE:
+			if (sgt_info) {
+#if 0 /* TODO: hyper_dmabuf_buf_size is not implemented yet */
+				query_attr->info = hyper_dmabuf_buf_size(sgt_info->sgt);
+#endif
+			} else {
+				/* total pages minus the head offset and the
+				 * unused tail of the last page; e.g. nents=3,
+				 * frst_ofst=100, last_len=50 gives
+				 * 3*4096 - 100 - 4096 + 50 = 8142 bytes */
+				query_attr->info = imported_sgt_info->nents * PAGE_SIZE -
+						   imported_sgt_info->frst_ofst - PAGE_SIZE +
+						   imported_sgt_info->last_len;
+			}
+			break;
+	}
+
+	return ret;
+}
+
+static int hyper_dmabuf_remote_exporter_ring_setup(void *data)
+{
+	struct ioctl_hyper_dmabuf_remote_exporter_ring_setup *remote_exporter_ring_setup;
+	struct hyper_dmabuf_ring_rq *req;
+
+	if (!data) {
+		printk("user data is NULL\n");
+		return -EINVAL;
+	}
+
+	remote_exporter_ring_setup = (struct ioctl_hyper_dmabuf_remote_exporter_ring_setup *)data;
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+	if (!req)
+		return -ENOMEM;
+
+	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORTER_RING_SETUP, NULL);
+
+	/* requesting remote domain to set up the exporter's ring */
+	if (hyper_dmabuf_send_request(remote_exporter_ring_setup->rdomain, req) < 0) {
+		kfree(req);
+		return -EINVAL;
+	}
+
+	kfree(req);
+	return 0;
+}
+
+static const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORTER_RING_SETUP, hyper_dmabuf_exporter_ring_setup, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_IMPORTER_RING_SETUP, hyper_dmabuf_importer_ring_setup, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_REMOTE, hyper_dmabuf_export_remote, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_FD, hyper_dmabuf_export_fd_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_DESTROY, hyper_dmabuf_destroy, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_QUERY, hyper_dmabuf_query, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_REMOTE_EXPORTER_RING_SETUP, hyper_dmabuf_remote_exporter_ring_setup, 0),
+};
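+
+/* illustrative userspace flow for the ioctls above (a sketch only;
+ * struct layouts are assumed from the driver's uapi header):
+ *
+ *   int fd = open("/dev/xen/hyper_dmabuf", O_RDWR);
+ *
+ *   // exporting domain
+ *   struct ioctl_hyper_dmabuf_exporter_ring_setup ring = { .remote_domain = N };
+ *   ioctl(fd, IOCTL_HYPER_DMABUF_EXPORTER_RING_SETUP, &ring);
+ *   struct ioctl_hyper_dmabuf_export_remote exp = { .dmabuf_fd = buf_fd,
+ *                                                   .remote_domain = N };
+ *   ioctl(fd, IOCTL_HYPER_DMABUF_EXPORT_REMOTE, &exp);
+ *   // exp.hyper_dmabuf_id now identifies the buffer across domains
+ *
+ *   // importing domain (id passed out of band)
+ *   struct ioctl_hyper_dmabuf_export_fd imp = { .hyper_dmabuf_id = id };
+ *   ioctl(fd, IOCTL_HYPER_DMABUF_EXPORT_FD, &imp);
+ *   // imp.fd is a local dma-buf fd backed by the shared pages
+ */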
+
+static long hyper_dmabuf_ioctl(struct file *filp,
+			unsigned int cmd, unsigned long param)
+{
+	const struct hyper_dmabuf_ioctl_desc *ioctl = NULL;
+	unsigned int nr = _IOC_NR(cmd);
+	int ret = -EINVAL;
+	hyper_dmabuf_ioctl_t func;
+	char *kdata;
+
+	/* reject out-of-range commands before indexing the table */
+	if (nr >= ARRAY_SIZE(hyper_dmabuf_ioctls)) {
+		printk("invalid ioctl nr %u\n", nr);
+		return -EINVAL;
+	}
+
+	ioctl = &hyper_dmabuf_ioctls[nr];
+
+	func = ioctl->func;
+
+	if (unlikely(!func)) {
+		printk("no function\n");
+		return -EINVAL;
+	}
+
+	kdata = kmalloc(_IOC_SIZE(cmd), GFP_KERNEL);
+	if (!kdata) {
+		printk("no memory\n");
+		return -ENOMEM;
+	}
+
+	if (copy_from_user(kdata, (void __user *)param, _IOC_SIZE(cmd)) != 0) {
+		printk("failed to copy from user arguments\n");
+		kfree(kdata);
+		return -EFAULT;
+	}
+
+	ret = func(kdata);
+
+	if (copy_to_user((void __user *)param, kdata, _IOC_SIZE(cmd)) != 0) {
+		printk("failed to copy to user arguments\n");
+		kfree(kdata);
+		return -EFAULT;
+	}
+
+	kfree(kdata);
+
+	return ret;
+}
+
+struct device_info {
+	int curr_domain;
+};
+
+/*===============================================================================================*/
+static struct file_operations hyper_dmabuf_driver_fops =
+{
+   .owner = THIS_MODULE,
+   .unlocked_ioctl = hyper_dmabuf_ioctl,
+};
+
+static struct miscdevice hyper_dmabuf_miscdev = {
+	.minor = MISC_DYNAMIC_MINOR,
+	.name = "xen/hyper_dmabuf",
+	.fops = &hyper_dmabuf_driver_fops,
+};
+
+static const char device_name[] = "hyper_dmabuf";
+
+/*===============================================================================================*/
+int register_device(void)
+{
+	int result = 0;
+
+	result = misc_register(&hyper_dmabuf_miscdev);
+
+	if (result != 0) {
+		printk(KERN_WARNING "hyper_dmabuf: driver can't be registered\n");
+		return result;
+	}
+
+	hyper_dmabuf_private.device = hyper_dmabuf_miscdev.this_device;
+
+	/* TODO: Check if there is a different way to initialize dma mask nicely */
+	dma_coerce_mask_and_coherent(hyper_dmabuf_private.device, 0xFFFFFFFF);
+
+	/* TODO find a way to provide parameters for below function or move that to ioctl */
+/*	err = bind_interdomain_evtchn_to_irqhandler(rdomain, evtchn,
+				src_sink_isr, PORT_NUM, "remote_domain", &info);
+	if (err < 0) {
+		printk("hyper_dmabuf: can't register interrupt handlers\n");
+		return -EFAULT;
+	}
+
+	info.irq = err;
+*/
+	return result;
+}
+
+/*-----------------------------------------------------------------------------------------------*/
+void unregister_device(void)
+{
+	printk(KERN_NOTICE "hyper_dmabuf: unregister_device() is called\n");
+	misc_deregister(&hyper_dmabuf_miscdev);
+}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
new file mode 100644
index 0000000..77a7e65
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
@@ -0,0 +1,119 @@
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/cdev.h>
+#include <asm/uaccess.h>
+#include <linux/hashtable.h>
+#include <linux/dma-buf.h>
+#include "hyper_dmabuf_list.h"
+
+DECLARE_HASHTABLE(hyper_dmabuf_hash_imported, MAX_ENTRY_IMPORTED);
+DECLARE_HASHTABLE(hyper_dmabuf_hash_exported, MAX_ENTRY_EXPORTED);
+
+int hyper_dmabuf_table_init(void)
+{
+	hash_init(hyper_dmabuf_hash_imported);
+	hash_init(hyper_dmabuf_hash_exported);
+	return 0;
+}
+
+int hyper_dmabuf_table_destroy(void)
+{
+	/* TODO: cleanup hyper_dmabuf_hash_imported and hyper_dmabuf_hash_exported */
+	return 0;
+}
+
+int hyper_dmabuf_register_exported(struct hyper_dmabuf_sgt_info *info)
+{
+	struct hyper_dmabuf_info_entry_exported *info_entry;
+
+	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
+	if (!info_entry)
+		return -ENOMEM;
+
+	info_entry->info = info;
+
+	hash_add(hyper_dmabuf_hash_exported, &info_entry->node,
+		info_entry->info->hyper_dmabuf_id);
+
+	return 0;
+}
+
+int hyper_dmabuf_register_imported(struct hyper_dmabuf_imported_sgt_info* info)
+{
+	struct hyper_dmabuf_info_entry_imported *info_entry;
+
+	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
+	if (!info_entry)
+		return -ENOMEM;
+
+	info_entry->info = info;
+
+	hash_add(hyper_dmabuf_hash_imported, &info_entry->node,
+		info_entry->info->hyper_dmabuf_id);
+
+	return 0;
+}
+
+struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(int id)
+{
+	struct hyper_dmabuf_info_entry_exported *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
+		if (info_entry->info->hyper_dmabuf_id == id)
+			return info_entry->info;
+
+	return NULL;
+}
+
+/* search for a pre-exported sgt and return its id if it exists */
+int hyper_dmabuf_find_id(struct dma_buf_attachment *attach, int domid)
+{
+	struct hyper_dmabuf_info_entry_exported *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
+		if (info_entry->info->attachment == attach &&
+		    info_entry->info->hyper_dmabuf_rdomain == domid)
+			return info_entry->info->hyper_dmabuf_id;
+
+	return -1;
+}
+
+struct hyper_dmabuf_imported_sgt_info *hyper_dmabuf_find_imported(int id)
+{
+	struct hyper_dmabuf_info_entry_imported *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
+		if (info_entry->info->hyper_dmabuf_id == id)
+			return info_entry->info;
+
+	return NULL;
+}
+
+int hyper_dmabuf_remove_exported(int id)
+{
+	struct hyper_dmabuf_info_entry_exported *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
+		if (info_entry->info->hyper_dmabuf_id == id) {
+			hash_del(&info_entry->node);
+			kfree(info_entry);
+			return 0;
+		}
+
+	return -1;
+}
+
+int hyper_dmabuf_remove_imported(int id)
+{
+	struct hyper_dmabuf_info_entry_imported *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
+		if (info_entry->info->hyper_dmabuf_id == id) {
+			hash_del(&info_entry->node);
+			kfree(info_entry);
+			return 0;
+		}
+
+	return -1;
+}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
new file mode 100644
index 0000000..869cd9a
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
@@ -0,0 +1,40 @@
+#ifndef __HYPER_DMABUF_LIST_H__
+#define __HYPER_DMABUF_LIST_H__
+
+#include "hyper_dmabuf_struct.h"
+
+/* number of bits to be used for exported dmabufs hash table */
+#define MAX_ENTRY_EXPORTED 7
+/* number of bits to be used for imported dmabufs hash table */
+#define MAX_ENTRY_IMPORTED 7
+
+struct hyper_dmabuf_info_entry_exported {
+        struct hyper_dmabuf_sgt_info *info;
+        struct hlist_node node;
+};
+
+struct hyper_dmabuf_info_entry_imported {
+        struct hyper_dmabuf_imported_sgt_info *info;
+        struct hlist_node node;
+};
+
+int hyper_dmabuf_table_init(void);
+
+int hyper_dmabuf_table_destroy(void);
+
+int hyper_dmabuf_register_exported(struct hyper_dmabuf_sgt_info *info);
+
+/* search for a pre-exported sgt and return its id if it exists */
+int hyper_dmabuf_find_id(struct dma_buf_attachment *attach, int domid);
+
+int hyper_dmabuf_register_imported(struct hyper_dmabuf_imported_sgt_info* info);
+
+struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(int id);
+
+struct hyper_dmabuf_imported_sgt_info *hyper_dmabuf_find_imported(int id);
+
+int hyper_dmabuf_remove_exported(int id);
+
+int hyper_dmabuf_remove_imported(int id);
+
+#endif // __HYPER_DMABUF_LIST_H__
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
new file mode 100644
index 0000000..3237e50
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -0,0 +1,212 @@
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/dma-buf.h>
+#include "hyper_dmabuf_imp.h"
+//#include "hyper_dmabuf_remote_sync.h"
+#include "xen/hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_msg.h"
+#include "hyper_dmabuf_list.h"
+
+void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
+				        enum hyper_dmabuf_command command, int *operands)
+{
+	int i;
+
+	request->request_id = hyper_dmabuf_next_req_id_export();
+	request->status = HYPER_DMABUF_REQ_NOT_RESPONDED;
+	request->command = command;
+
+	switch(command) {
+	/* as exporter, commands to importer */
+	case HYPER_DMABUF_EXPORT:
+		/* exporting pages for dmabuf */
+		/* command : HYPER_DMABUF_EXPORT,
+		 * operands0 : hyper_dmabuf_id
+		 * operands1 : number of pages to be shared
+		 * operands2 : offset of data in the first page
+		 * operands3 : length of data in the last page
+		 * operands4 : top-level reference number for shared pages
+		 * operands5~8 : Driver-specific private data (e.g. graphic buffer's meta info)
+		 */
+		/* 9 operands in total: id, nents, first offset, last length,
+		 * top-level gref, plus 4 words of private data */
+		for (i = 0; i < 9; i++)
+			request->operands[i] = operands[i];
+		break;
+
+	case HYPER_DMABUF_DESTROY:
+		/* destroy sg_list for hyper_dmabuf_id on remote side */
+		/* command : DMABUF_DESTROY,
+		 * operands0 : hyper_dmabuf_id
+		 */
+		request->operands[0] = operands[0];
+		break;
+
+	case HYPER_DMABUF_OPS_TO_REMOTE:
+		/* notifying dmabuf map/unmap to importer (probably not needed) */
+		/* for dmabuf synchronization */
+		break;
+
+	/* as importer, command to exporter */
+	case HYPER_DMABUF_OPS_TO_SOURCE:
+		/* notifying dmabuf map/unmap to exporter, map will make the driver to do shadow mapping
+		* or unmapping for synchronization with original exporter (e.g. i915) */
+		/* command : DMABUF_OPS_TO_SOURCE.
+		 * operands0 : hyper_dmabuf_id
+		 * operands1 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
+		 */
+		for (i = 0; i < 2; i++)
+			request->operands[i] = operands[i];
+		break;
+
+	/* requesting the other side to setup another ring channel for reverse direction */
+	case HYPER_DMABUF_EXPORTER_RING_SETUP:
+		/* command : HYPER_DMABUF_EXPORTER_RING_SETUP */
+		/* no operands needed */
+		break;
+
+	default:
+		/* no command found */
+		return;
+	}
+}
+
+int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
+{
+	int i, ret; /* ret must be signed: it is compared against 0 below */
+	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
+	struct hyper_dmabuf_sgt_info *sgt_info;
+
+	/* make sure req is not NULL (may not be needed) */
+	if (!req) {
+		return -EINVAL;
+	}
+
+	req->status = HYPER_DMABUF_REQ_PROCESSED;
+
+	switch (req->command) {
+	case HYPER_DMABUF_EXPORT:
+		/* exporting pages for dmabuf */
+		/* command : HYPER_DMABUF_EXPORT,
+		 * operands0 : hyper_dmabuf_id
+		 * operands1 : number of pages to be shared
+		 * operands2 : offset of data in the first page
+		 * operands3 : length of data in the last page
+		 * operands4 : top-level reference number for shared pages
+		 * operands5~8 : Driver-specific private data (e.g. graphic buffer's meta info)
+		 */
+		imported_sgt_info = kcalloc(1, sizeof(*imported_sgt_info), GFP_KERNEL);
+		if (!imported_sgt_info)
+			return -ENOMEM;
+		imported_sgt_info->hyper_dmabuf_id = req->operands[0];
+		imported_sgt_info->frst_ofst = req->operands[2];
+		imported_sgt_info->last_len = req->operands[3];
+		imported_sgt_info->nents = req->operands[1];
+		imported_sgt_info->gref = req->operands[4];
+
+		printk("DMABUF was exported\n");
+		printk("\thyper_dmabuf_id %d\n", req->operands[0]);
+		printk("\tnents %d\n", req->operands[1]);
+		printk("\tfirst offset %d\n", req->operands[2]);
+		printk("\tlast len %d\n", req->operands[3]);
+		printk("\tgrefid %d\n", req->operands[4]);
+
+		for (i = 0; i < 4; i++)
+			imported_sgt_info->private[i] = req->operands[5+i];
+
+		hyper_dmabuf_register_imported(imported_sgt_info);
+		break;
+
+	case HYPER_DMABUF_DESTROY:
+		/* destroy sg_list for hyper_dmabuf_id on remote side */
+		/* command : DMABUF_DESTROY,
+		 * operands0 : hyper_dmabuf_id
+		 */
+
+		imported_sgt_info =
+			hyper_dmabuf_find_imported(req->operands[0]);
+
+		if (imported_sgt_info) {
+			hyper_dmabuf_cleanup_imported_pages(imported_sgt_info);
+
+			hyper_dmabuf_remove_imported(req->operands[0]);
+
+			/* TODO: cleanup sgt on importer side etc */
+		}
+
+		/* notify the exporter that the buffer is freed so it can clean it up */
+		req->status = HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP;
+		req->command = HYPER_DMABUF_DESTROY_FINISH;
+
+#if 0 /* function is not implemented yet */
+
+		ret = hyper_dmabuf_destroy_sgt(req->hyper_dmabuf_id);
+#endif
+		break;
+
+	case HYPER_DMABUF_DESTROY_FINISH:
+		/* destroy sg_list for hyper_dmabuf_id on local side */
+		/* command : DMABUF_DESTROY_FINISH,
+		 * operands0 : hyper_dmabuf_id
+		 */
+
+		/* TODO: that should be done on workqueue, when received ack from all importers that buffer is no longer used */
+		sgt_info =
+			hyper_dmabuf_find_exported(req->operands[0]);
+
+		if (sgt_info) {
+			hyper_dmabuf_cleanup_gref_table(sgt_info);
+
+			/* unmap dmabuf */
+			dma_buf_unmap_attachment(sgt_info->attachment, sgt_info->sgt, DMA_BIDIRECTIONAL);
+			dma_buf_detach(sgt_info->dma_buf, sgt_info->attachment);
+			dma_buf_put(sgt_info->dma_buf);
+
+			/* TODO: Rest of cleanup, sgt cleanup etc */
+		}
+
+		break;
+
+	case HYPER_DMABUF_OPS_TO_REMOTE:
+		/* notifying dmabuf map/unmap to importer (probably not needed) */
+		/* for dmabuf synchronization */
+		break;
+
+	/* as importer, command to exporter */
+	case HYPER_DMABUF_OPS_TO_SOURCE:
+		/* notifying dmabuf map/unmap to exporter, map will make the driver to do shadow mapping
+		* or unmapping for synchronization with original exporter (e.g. i915) */
+		/* command : DMABUF_OPS_TO_SOURCE.
+		 * operands0 : hyper_dmabuf_id
+		 * operands1 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
+		 */
+		break;
+
+	/* requesting the other side to setup another ring channel for reverse direction */
+	case HYPER_DMABUF_EXPORTER_RING_SETUP:
+		/* command: HYPER_DMABUF_EXPORTER_RING_SETUP
+		 * no operands needed */
+		ret = hyper_dmabuf_exporter_ringbuf_init(domid, &req->operands[0], &req->operands[1]);
+		if (ret < 0) {
+			req->status = HYPER_DMABUF_REQ_ERROR;
+			return -EINVAL;
+		}
+
+		req->status = HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP;
+		req->command = HYPER_DMABUF_IMPORTER_RING_SETUP;
+		break;
+
+	case HYPER_DMABUF_IMPORTER_RING_SETUP:
+		/* command: HYPER_DMABUF_IMPORTER_RING_SETUP */
+		/* no operands needed */
+		ret = hyper_dmabuf_importer_ringbuf_init(domid, req->operands[0], req->operands[1]);
+		if (ret < 0)
+			return -EINVAL;
+
+		break;
+
+	default:
+		/* no matched command, nothing to do.. just return error */
+		return -EINVAL;
+	}
+
+	return req->command;
+}
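+
+/* teardown handshake as implemented above: the exporter sends
+ * HYPER_DMABUF_DESTROY; the importer unmaps its pages, drops the entry
+ * and turns the request into HYPER_DMABUF_DESTROY_FINISH with status
+ * HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP; the exporter's ring ISR re-parses
+ * that response and releases the gref tables and the dma-buf */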
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
new file mode 100644
index 0000000..44bfb70
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -0,0 +1,45 @@
+#ifndef __HYPER_DMABUF_MSG_H__
+#define __HYPER_DMABUF_MSG_H__
+
+enum hyper_dmabuf_command {
+	HYPER_DMABUF_EXPORT = 0x10,
+	HYPER_DMABUF_DESTROY,
+	HYPER_DMABUF_DESTROY_FINISH,
+	HYPER_DMABUF_OPS_TO_REMOTE,
+	HYPER_DMABUF_OPS_TO_SOURCE,
+	HYPER_DMABUF_EXPORTER_RING_SETUP, /* requesting remote domain to set up exporter's ring */
+	HYPER_DMABUF_IMPORTER_RING_SETUP, /* requesting remote domain to set up importer's ring */
+};
+
+enum hyper_dmabuf_ops {
+	HYPER_DMABUF_OPS_ATTACH = 0x1000,
+	HYPER_DMABUF_OPS_DETACH,
+	HYPER_DMABUF_OPS_MAP,
+	HYPER_DMABUF_OPS_UNMAP,
+	HYPER_DMABUF_OPS_RELEASE,
+	HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS,
+	HYPER_DMABUF_OPS_END_CPU_ACCESS,
+	HYPER_DMABUF_OPS_KMAP_ATOMIC,
+	HYPER_DMABUF_OPS_KUNMAP_ATOMIC,
+	HYPER_DMABUF_OPS_KMAP,
+	HYPER_DMABUF_OPS_KUNMAP,
+	HYPER_DMABUF_OPS_MMAP,
+	HYPER_DMABUF_OPS_VMAP,
+	HYPER_DMABUF_OPS_VUNMAP,
+};
+
+enum hyper_dmabuf_req_feedback {
+	HYPER_DMABUF_REQ_PROCESSED = 0x100,
+	HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP,
+	HYPER_DMABUF_REQ_ERROR,
+	HYPER_DMABUF_REQ_NOT_RESPONDED
+};
+
+/* create a request packet with given command and operands */
+void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
+                                        enum hyper_dmabuf_command command, int *operands);
+
+/* parse incoming request packet (or response) and take appropriate actions for those */
+int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req);
+
+#endif // __HYPER_DMABUF_MSG_H__
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
new file mode 100644
index 0000000..a577167
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
@@ -0,0 +1,16 @@
+#ifndef __HYPER_DMABUF_QUERY_H__
+#define __HYPER_DMABUF_QUERY_H__
+
+enum hyper_dmabuf_query {
+	DMABUF_QUERY_TYPE_LIST = 0x10,
+	DMABUF_QUERY_EXPORTER,
+	DMABUF_QUERY_IMPORTER,
+	DMABUF_QUERY_SIZE
+};
+
+enum hyper_dmabuf_status {
+	EXPORTED = 0x01,
+	IMPORTED
+};
+
+#endif /* __HYPER_DMABUF_QUERY_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
new file mode 100644
index 0000000..c8a2f4d
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -0,0 +1,70 @@
+#ifndef __HYPER_DMABUF_STRUCT_H__
+#define __HYPER_DMABUF_STRUCT_H__
+
+#include <xen/interface/grant_table.h>
+
+/* the importer combines the source domain id with the given
+ * hyper_dmabuf_id to make it unique in case there are multiple exporters */
+
+#define HYPER_DMABUF_ID_IMPORTER(sdomain, id) \
+	((((sdomain) & 0xFF) << 24) | ((id) & 0xFFFFFF))
+
+#define HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(id) \
+	(((id) >> 24) & 0xFF)
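+
+/* e.g. HYPER_DMABUF_ID_IMPORTER(3, 5) == 0x03000005, and
+ * HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(0x03000005) == 3;
+ * only 24 bits remain for the per-domain buffer counter */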
+
+/* each grant_ref_t is 4 bytes, so one 4KB page holds 1024 of them;
+ * with up to 4 pages of refids we can address 4 * 1024 = 4096 data
+ * pages, i.e. 4KB * 4096 = 16MB of buffer (needs to be increased for
+ * large buffer use-cases such as 4K frame buffers) */
+#define MAX_ALLOWED_NUM_PAGES_FOR_GREF_NUM_ARRAYS 4
+
+struct hyper_dmabuf_shared_pages_info {
+	grant_ref_t *data_refs;	/* table with shared buffer pages refid */
+	grant_ref_t *addr_pages; /* pages of 2nd level addressing */
+	grant_ref_t *top_level_page; /* page of top level addressing, it contains refids of 2nd level pages */
+	grant_ref_t top_level_ref; /* top level refid */
+	struct gnttab_unmap_grant_ref* unmap_ops; /* unmap ops for mapped pages */
+	struct page **data_pages; /* data pages to be unmapped */
+};
+
+/* Exporter builds pages_info before sharing pages */
+struct hyper_dmabuf_pages_info {
+        int hyper_dmabuf_id; /* unique id to reference dmabuf in source domain */
+        int hyper_dmabuf_rdomain; /* currently considering just one remote domain accessing it */
+        int frst_ofst; /* offset of data in the first page */
+        int last_len; /* length of data in the last page */
+        int nents; /* # of pages */
+        struct page **pages; /* pages that contains reference numbers of shared pages*/
+};
+
+/* both importer and exporter use this structure to point to sg lists
+ *
+ * the exporter stores references to the sgt in a hash table and keeps
+ * them for synchronization and tracking purposes
+ *
+ * the importer uses this structure when re-exporting the buffer to
+ * other drivers in the same domain */
+struct hyper_dmabuf_sgt_info {
+        int hyper_dmabuf_id; /* unique id to reference dmabuf in remote domain */
+	int hyper_dmabuf_rdomain; /* domain importing this sgt */
+        struct sg_table *sgt; /* pointer to sgt */
+	struct dma_buf *dma_buf; /* needed to store this for freeing it later */
+	struct dma_buf_attachment *attachment; /* needed to store this for freeing this later */
+	struct hyper_dmabuf_shared_pages_info shared_pages_info;
+	int private[4]; /* device specific info (e.g. image's meta info?) */
+};
+
+/* the importer stores references (before mapping) to the shared pages
+ * in this table and maps them into its own memory map once userspace
+ * asks for a reference to the buffer */
+struct hyper_dmabuf_imported_sgt_info {
+	int hyper_dmabuf_id; /* unique id: HYPER_DMABUF_ID_IMPORTER(source domain id, exporter's hyper_dmabuf_id) */
+	int frst_ofst;	/* start offset in shared page #1 */
+	int last_len;	/* length of data in the last shared page */
+	int nents;	/* number of pages to be shared */
+	grant_ref_t gref; /* reference number of top level addressing page of shared pages */
+	struct sg_table *sgt; /* sgt pointer after importing buffer */
+	struct hyper_dmabuf_shared_pages_info shared_pages_info;
+	int private[4]; /* device specific info (e.g. image's meta info?) */
+};
+
+#endif /* __HYPER_DMABUF_STRUCT_H__ */
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
new file mode 100644
index 0000000..22f2ef0
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -0,0 +1,328 @@
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/workqueue.h>
+#include <xen/grant_table.h>
+#include <xen/events.h>
+#include <xen/xenbus.h>
+#include <asm/xen/page.h>
+#include "hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_xen_comm_list.h"
+#include "../hyper_dmabuf_imp.h"
+#include "../hyper_dmabuf_list.h"
+#include "../hyper_dmabuf_msg.h"
+
+static int export_req_id = 0;
+static int import_req_id = 0;
+
+int32_t hyper_dmabuf_get_domid(void)
+{
+	struct xenbus_transaction xbt;
+	int32_t domid;
+
+	xenbus_transaction_start(&xbt);
+
+	/* xenbus_scanf returns the number of parsed values, or a negative
+	 * errno, so anything <= 0 means we did not get a domid */
+	if (xenbus_scanf(xbt, "domid", "", "%d", &domid) <= 0)
+		domid = -1;
+
+	xenbus_transaction_end(xbt, 0);
+
+	return domid;
+}
+
+int hyper_dmabuf_next_req_id_export(void)
+{
+        export_req_id++;
+        return export_req_id;
+}
+
+int hyper_dmabuf_next_req_id_import(void)
+{
+        import_req_id++;
+        return import_req_id;
+}
+
+/* For now cache latest rings as global variables TODO: keep them in a list */
+static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *dev_id);
+static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *dev_id);
+
+/* exporter needs to generate info for page sharing */
+int hyper_dmabuf_exporter_ringbuf_init(int rdomain, grant_ref_t *refid, int *port)
+{
+	struct hyper_dmabuf_ring_info_export *ring_info;
+	struct hyper_dmabuf_sring *sring;
+	struct evtchn_alloc_unbound alloc_unbound;
+	struct evtchn_close close;
+
+	void *shared_ring;
+	int ret;
+
+	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
+	if (!ring_info)
+		return -ENOMEM;
+
+	/* from exporter to importer */
+	shared_ring = (void *)__get_free_pages(GFP_KERNEL, 1);
+	if (!shared_ring) {
+		kfree(ring_info);
+		return -ENOMEM;
+	}
+
+	sring = (struct hyper_dmabuf_sring *) shared_ring;
+
+	SHARED_RING_INIT(sring);
+
+	FRONT_RING_INIT(&(ring_info->ring_front), sring, PAGE_SIZE);
+
+	ring_info->gref_ring = gnttab_grant_foreign_access(rdomain,
+							virt_to_mfn(shared_ring), 0);
+	if (ring_info->gref_ring < 0) {
+		/* failed to get gref */
+		kfree(ring_info);
+		free_pages((unsigned long)shared_ring, 1);
+		return -EINVAL;
+	}
+
+	alloc_unbound.dom = DOMID_SELF;
+	alloc_unbound.remote_dom = rdomain;
+	ret = HYPERVISOR_event_channel_op(EVTCHNOP_alloc_unbound, &alloc_unbound);
+	if (ret != 0) {
+		printk("Cannot allocate event channel\n");
+		return -EINVAL;
+	}
+
+	/* setting up interrupt */
+	ret = bind_evtchn_to_irqhandler(alloc_unbound.port,
+					hyper_dmabuf_front_ring_isr, 0,
+					NULL, (void*) ring_info);
+
+	if (ret < 0) {
+		printk("Failed to setup event channel\n");
+		close.port = alloc_unbound.port;
+		HYPERVISOR_event_channel_op(EVTCHNOP_close, &close);
+		gnttab_end_foreign_access(ring_info->gref_ring, 0, virt_to_mfn(shared_ring));
+		return -EINVAL;
+	}
+
+	ring_info->rdomain = rdomain;
+	ring_info->irq = ret;
+	ring_info->port = alloc_unbound.port;
+
+	/* store refid and port numbers for userspace's use */
+	*refid = ring_info->gref_ring;
+	*port = ring_info->port;
+
+	printk("%s: allocated eventchannel gref %d  port: %d  irq: %d\n", __func__,
+		ring_info->gref_ring,
+		ring_info->port,
+		ring_info->irq);
+
+	/* register ring info */
+	ret = hyper_dmabuf_register_exporter_ring(ring_info);
+
+	return ret;
+}
+
+/* importer needs to know about shared page and port numbers for ring buffer and event channel */
+int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port)
+{
+	struct hyper_dmabuf_ring_info_import *ring_info;
+	struct hyper_dmabuf_sring *sring;
+
+	struct page *shared_ring;
+
+	struct gnttab_map_grant_ref *ops;
+	int ret;
+
+	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
+	if (!ring_info)
+		return -ENOMEM;
+
+	ring_info->sdomain = sdomain;
+	ring_info->evtchn = port;
+
+	ops = kmalloc(sizeof(*ops), GFP_KERNEL);
+	if (!ops) {
+		kfree(ring_info);
+		return -ENOMEM;
+	}
+
+	if (gnttab_alloc_pages(1, &shared_ring)) {
+		return -EINVAL;
+	}
+
+	gnttab_set_map_op(&ops[0], (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
+			GNTMAP_host_map, gref, sdomain);
+
+	ret = gnttab_map_refs(ops, NULL, &shared_ring, 1);
+	if (ret < 0) {
+		printk("Cannot map ring\n");
+		return -EINVAL;
+	}
+
+	if (ops[0].status) {
+		printk("Ring mapping failed\n");
+		return -EINVAL;
+	}
+
+	sring = (struct hyper_dmabuf_sring*) pfn_to_kaddr(page_to_pfn(shared_ring));
+
+	BACK_RING_INIT(&ring_info->ring_back, sring, PAGE_SIZE);
+
+	ret = bind_interdomain_evtchn_to_irqhandler(sdomain, port, hyper_dmabuf_back_ring_isr, 0,
+						    NULL, (void*)ring_info);
+	if (ret < 0) {
+		return -EINVAL;
+	}
+
+	ring_info->irq = ret;
+
+	printk("%s: bound to eventchannel port: %d  irq: %d\n", __func__,
+		port,
+		ring_info->irq);
+
+	ret = hyper_dmabuf_register_importer_ring(ring_info);
+
+	return ret;
+}
+
+int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req)
+{
+	struct hyper_dmabuf_front_ring *ring;
+	struct hyper_dmabuf_ring_rq *new_req;
+	struct hyper_dmabuf_ring_info_export *ring_info;
+	int notify;
+
+	/* find a ring info for the channel */
+	ring_info = hyper_dmabuf_find_exporter_ring(domain);
+	if (!ring_info) {
+		printk("Can't find ring info for the channel\n");
+		return -EINVAL;
+	}
+
+	ring = &ring_info->ring_front;
+
+	if (RING_FULL(ring))
+		return -EBUSY;
+
+	new_req = RING_GET_REQUEST(ring, ring->req_prod_pvt);
+	if (!new_req) {
+		printk("NULL REQUEST\n");
+		return -EIO;
+	}
+
+	memcpy(new_req, req, sizeof(*new_req));
+
+	ring->req_prod_pvt++;
+
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(ring, notify);
+	if (notify) {
+		notify_remote_via_irq(ring_info->irq);
+	}
+
+	return 0;
+}
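+
+/* note: RING_PUSH_REQUESTS_AND_CHECK_NOTIFY publishes req_prod to the
+ * shared ring and sets 'notify' only when the remote end has asked to
+ * be woken, so the event channel is kicked only when needed */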
+
+/* called by interrupt (WORKQUEUE) */
+int hyper_dmabuf_send_response(struct hyper_dmabuf_ring_rp* response, int domain)
+{
+	/* as an importer and as an exporter */
+	return 0;
+}
+
+/* ISR for request from exporter (as an importer) */
+static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *dev_id)
+{
+	RING_IDX rc, rp;
+	struct hyper_dmabuf_ring_rq request;
+	struct hyper_dmabuf_ring_rp response;
+	int notify, more_to_do = 0; /* avoid reading it uninitialized when the ring is empty */
+	int ret;
+//	struct hyper_dmabuf_work *work;
+
+	struct hyper_dmabuf_ring_info_import *ring_info = (struct hyper_dmabuf_ring_info_import *)dev_id;
+	struct hyper_dmabuf_back_ring *ring;
+
+	ring = &ring_info->ring_back;
+
+	do {
+		rc = ring->req_cons;
+		rp = ring->sring->req_prod;
+
+		while (rc != rp) {
+			if (RING_REQUEST_CONS_OVERFLOW(ring, rc))
+				break;
+
+			memcpy(&request, RING_GET_REQUEST(ring, rc), sizeof(request));
+			printk("Got request\n");
+			ring->req_cons = ++rc;
+
+			/* TODO: probably better to queue multiple requests on a
+			 * linked list and let a workqueue task process them,
+			 * because we do not want to stay in the ISR for long.
+			 */
+			ret = hyper_dmabuf_msg_parse(ring_info->sdomain, &request);
+
+			if (ret > 0) {
+				/* build response */
+				memcpy(&response, &request, sizeof(response));
+
+				/* we sent back modified request as a response.. we might just need to have request only..*/
+				memcpy(RING_GET_RESPONSE(ring, ring->rsp_prod_pvt), &response, sizeof(response));
+				ring->rsp_prod_pvt++;
+
+				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(ring, notify);
+
+				if (notify) {
+					printk("Notyfing\n");
+					notify_remote_via_irq(ring_info->irq);
+				}
+			}
+
+			RING_FINAL_CHECK_FOR_REQUESTS(ring, more_to_do);
+			printk("Final check for requests %d\n", more_to_do);
+		}
+	} while (more_to_do);
+
+	return IRQ_HANDLED;
+}
+
+/* ISR for responses from importer */
+static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *dev_id)
+{
+	/* front ring only care about response from back */
+	struct hyper_dmabuf_ring_rp *response;
+	RING_IDX i, rp;
+	int more_to_do, ret;
+
+	struct hyper_dmabuf_ring_info_export *ring_info = (struct hyper_dmabuf_ring_info_export *)dev_id;
+	struct hyper_dmabuf_front_ring *ring;
+	ring = &ring_info->ring_front;
+
+	do {
+		more_to_do = 0;
+		rp = ring->sring->rsp_prod;
+		for (i = ring->rsp_cons; i != rp; i++) {
+			unsigned long id;
+
+			response = RING_GET_RESPONSE(ring, i);
+			id = response->response_id;
+
+			if (response->status == HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP) {
+				/* parsing response */
+				ret = hyper_dmabuf_msg_parse(ring_info->rdomain, (struct hyper_dmabuf_ring_rq*)response);
+
+				if (ret < 0) {
+					printk("getting error while parsing response\n");
+				}
+			} else if (response->status == HYPER_DMABUF_REQ_ERROR) {
+				printk("remote domain %d couldn't process request %d\n", ring_info->rdomain, response->command);
+			}
+
+		}
+
+		ring->rsp_cons = i;
+
+		if (i != ring->req_prod_pvt) {
+			RING_FINAL_CHECK_FOR_RESPONSES(ring, more_to_do);
+			printk("more to do %d\n", more_to_do);
+		} else {
+			ring->sring->rsp_event = i+1;
+		}
+	} while (more_to_do);
+
+	return IRQ_HANDLED;
+}
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
new file mode 100644
index 0000000..2754917
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
@@ -0,0 +1,62 @@
+#ifndef __HYPER_DMABUF_XEN_COMM_H__
+#define __HYPER_DMABUF_XEN_COMM_H__
+
+#include "xen/interface/io/ring.h"
+
+#define MAX_NUMBER_OF_OPERANDS 9
+
+struct hyper_dmabuf_ring_rq {
+        unsigned int request_id;
+        unsigned int status;
+        unsigned int command;
+        unsigned int operands[MAX_NUMBER_OF_OPERANDS];
+};
+
+struct hyper_dmabuf_ring_rp {
+        unsigned int response_id;
+        unsigned int status;
+        unsigned int command;
+        unsigned int operands[MAX_NUMBER_OF_OPERANDS];
+};
+
+DEFINE_RING_TYPES(hyper_dmabuf, struct hyper_dmabuf_ring_rq, struct hyper_dmabuf_ring_rp);
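+
+/* DEFINE_RING_TYPES() generates struct hyper_dmabuf_sring (the shared
+ * page layout) plus struct hyper_dmabuf_front_ring and
+ * struct hyper_dmabuf_back_ring used by the exporter and importer */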
+
+struct hyper_dmabuf_ring_info_export {
+        struct hyper_dmabuf_front_ring ring_front;
+	int rdomain;
+        int gref_ring;
+        int irq;
+        int port;
+};
+
+struct hyper_dmabuf_ring_info_import {
+        int sdomain;
+        int irq;
+        int evtchn;
+        struct hyper_dmabuf_back_ring ring_back;
+};
+
+//struct hyper_dmabuf_work {
+//	hyper_dmabuf_ring_rq requrest;
+//	struct work_struct msg_parse;
+//};
+
+int32_t hyper_dmabuf_get_domid(void);
+
+int hyper_dmabuf_next_req_id_export(void);
+
+int hyper_dmabuf_next_req_id_import(void);
+
+/* exporter needs to generate info for page sharing */
+int hyper_dmabuf_exporter_ringbuf_init(int rdomain, grant_ref_t *gref, int *port);
+
+/* importer needs to know about shared page and port numbers for ring buffer and event channel */
+int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port);
+
+/* send request to the remote domain */
+int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req);
+
+/* called by interrupt (WORKQUEUE) */
+int hyper_dmabuf_send_response(struct hyper_dmabuf_ring_rp* response, int domain);
+
+#endif // __HYPER_DMABUF_XEN_COMM_H__
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
new file mode 100644
index 0000000..15c9d29
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
@@ -0,0 +1,106 @@
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/cdev.h>
+#include <asm/uaccess.h>
+#include <linux/hashtable.h>
+#include <xen/grant_table.h>
+#include "hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_xen_comm_list.h"
+
+DECLARE_HASHTABLE(hyper_dmabuf_hash_importer_ring, MAX_ENTRY_IMPORT_RING);
+DECLARE_HASHTABLE(hyper_dmabuf_hash_exporter_ring, MAX_ENTRY_EXPORT_RING);
+
+int hyper_dmabuf_ring_table_init(void)
+{
+	hash_init(hyper_dmabuf_hash_importer_ring);
+	hash_init(hyper_dmabuf_hash_exporter_ring);
+	return 0;
+}
+
+int hyper_dmabuf_ring_table_destroy(void)
+{
+	/* TODO: cleanup tables*/
+	return 0;
+}
+
+int hyper_dmabuf_register_exporter_ring(struct hyper_dmabuf_ring_info_export *ring_info)
+{
+	struct hyper_dmabuf_exporter_ring_info *info_entry;
+
+	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
+	if (!info_entry)
+		return -ENOMEM;
+
+	info_entry->info = ring_info;
+
+	hash_add(hyper_dmabuf_hash_exporter_ring, &info_entry->node,
+		info_entry->info->rdomain);
+
+	return 0;
+}
+
+int hyper_dmabuf_register_importer_ring(struct hyper_dmabuf_ring_info_import *ring_info)
+{
+	struct hyper_dmabuf_importer_ring_info *info_entry;
+
+	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
+	if (!info_entry)
+		return -ENOMEM;
+
+	info_entry->info = ring_info;
+
+	hash_add(hyper_dmabuf_hash_importer_ring, &info_entry->node,
+		info_entry->info->sdomain);
+
+	return 0;
+}
+
+struct hyper_dmabuf_ring_info_export *hyper_dmabuf_find_exporter_ring(int domid)
+{
+	struct hyper_dmabuf_exporter_ring_info *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_exporter_ring, bkt, info_entry, node)
+		if (info_entry->info->rdomain == domid)
+			return info_entry->info;
+
+	return NULL;
+}
+
+struct hyper_dmabuf_ring_info_import *hyper_dmabuf_find_importer_ring(int domid)
+{
+	struct hyper_dmabuf_importer_ring_info *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_importer_ring, bkt, info_entry, node)
+		if (info_entry->info->sdomain == domid)
+			return info_entry->info;
+
+	return NULL;
+}
+
+int hyper_dmabuf_remove_exporter_ring(int domid)
+{
+	struct hyper_dmabuf_exporter_ring_info *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_exporter_ring, bkt, info_entry, node)
+		if (info_entry->info->rdomain == domid) {
+			hash_del(&info_entry->node);
+			kfree(info_entry);
+			return 0;
+		}
+
+	return -1;
+}
+
+int hyper_dmabuf_remove_importer_ring(int domid)
+{
+	struct hyper_dmabuf_importer_ring_info *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_importer_ring, bkt, info_entry, node)
+		if (info_entry->info->sdomain == domid) {
+			hash_del(&info_entry->node);
+			kfree(info_entry);
+			return 0;
+		}
+
+	return -1;
+}
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
new file mode 100644
index 0000000..5929f99
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
@@ -0,0 +1,35 @@
+#ifndef __HYPER_DMABUF_XEN_COMM_LIST_H__
+#define __HYPER_DMABUF_XEN_COMM_LIST_H__
+
+/* number of bits to be used for exported dmabufs hash table */
+#define MAX_ENTRY_EXPORT_RING 7
+/* number of bits to be used for imported dmabufs hash table */
+#define MAX_ENTRY_IMPORT_RING 7
+
+struct hyper_dmabuf_exporter_ring_info {
+	struct hyper_dmabuf_ring_info_export *info;
+	struct hlist_node node;
+};
+
+struct hyper_dmabuf_importer_ring_info {
+	struct hyper_dmabuf_ring_info_import *info;
+	struct hlist_node node;
+};
+
+int hyper_dmabuf_ring_table_init(void);
+
+int hyper_dmabuf_ring_table_destroy(void);
+
+int hyper_dmabuf_register_exporter_ring(struct hyper_dmabuf_ring_info_export *ring_info);
+
+int hyper_dmabuf_register_importer_ring(struct hyper_dmabuf_ring_info_import *ring_info);
+
+struct hyper_dmabuf_ring_info_export *hyper_dmabuf_find_exporter_ring(int domid);
+
+struct hyper_dmabuf_ring_info_import *hyper_dmabuf_find_importer_ring(int domid);
+
+int hyper_dmabuf_remove_exporter_ring(int domid);
+
+int hyper_dmabuf_remove_importer_ring(int domid);
+
+#endif /* __HYPER_DMABUF_XEN_COMM_LIST_H__ */
-- 
2.7.4

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 02/60] hyper_dmabuf: added a doc for hyper_dmabuf sharing
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29 ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

High-level description of hyper_dmabuf driver has been added
to "Documentation" directory.

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 Documentation/hyper-dmabuf-sharing.txt | 734 +++++++++++++++++++++++++++++++++
 1 file changed, 734 insertions(+)
 create mode 100644 Documentation/hyper-dmabuf-sharing.txt

diff --git a/Documentation/hyper-dmabuf-sharing.txt b/Documentation/hyper-dmabuf-sharing.txt
new file mode 100644
index 0000000..a6744f8
--- /dev/null
+++ b/Documentation/hyper-dmabuf-sharing.txt
@@ -0,0 +1,734 @@
+Linux Hyper DMABUF Driver
+
+------------------------------------------------------------------------------
+Section 1. Overview
+------------------------------------------------------------------------------
+
+The Hyper_DMABUF driver is a Linux device driver running on multiple Virtual
+Machines (VMs), which expands the DMA-BUF sharing capability to the VM
+environment, where multiple different OS instances need to share the same
+physical data without copying it across VMs.
+
+To share a DMA_BUF across VMs, an instance of the Hyper_DMABUF driver on the
+exporting VM (the so-called “exporter”) imports a local DMA_BUF from the
+original producer of the buffer, then re-exports it to the importing VM (the
+so-called “importer”) with a unique ID, hyper_dmabuf_id, for the buffer.
+
+Another instance of the Hyper_DMABUF driver on the importer registers
+the hyper_dmabuf_id, together with reference information for the shared
+physical pages associated with the DMA_BUF, in its database when the export
+happens.
+
+The actual mapping of the DMA_BUF on the importer’s side is done by
+the Hyper_DMABUF driver when user space issues the IOCTL command to access
+the shared DMA_BUF. The Hyper_DMABUF driver works as both an importing and
+an exporting driver as-is; that is, no special configuration is required.
+Consequently, only a single module per VM is needed to enable cross-VM DMA_BUF
+exchange.
+
+------------------------------------------------------------------------------
+Section 2. Architecture
+------------------------------------------------------------------------------
+
+1. Hyper_DMABUF ID
+
+hyper_dmabuf_id is a global handle for shared DMA BUFs, which is compatible
+across VMs. It is a key used by the importer to retrieve information about
+shared Kernel pages behind the DMA_BUF structure from the IMPORT list. When
+a DMA_BUF is exported to another domain, its hyper_dmabuf_id and META data
+are also kept in the EXPORT list by the exporter for further synchronization
+of control over the DMA_BUF.
+
+hyper_dmabuf_id is “targeted”, meaning it is valid only in exporting (owner of
+the buffer) and importing VMs, where the corresponding hyper_dmabuf_id is
+stored in their database (EXPORT and IMPORT lists).
+
+A user-space application specifies the targeted VM id in the user parameter
+when it calls the IOCTL command to export shared DMA_BUF to another VM.
+
+hyper_dmabuf_id_t is the data type for hyper_dmabuf_id. It is defined as a
+16-byte data structure containing id and rng_key[3] as its elements.
+
+typedef struct {
+        int id;
+        int rng_key[3]; /* 12bytes long random number */
+} hyper_dmabuf_id_t;
+
+The first element in the hyper_dmabuf_id structure, int id, combines a count
+number generated by the driver running on the exporter with the exporter’s
+ID. The VM’s ID is a one-byte value located at the MSB of int id, as the
+macro below shows. The remaining three bytes in int id are reserved for
+a count number.
+
+However, the count number is limited to 1000. Therefore, only a little more
+than a byte starting from the LSB is actually used for storing this count
+number.
+
+#define HYPER_DMABUF_ID_CREATE(domid, id) \
+        ((((domid) & 0xFF) << 24) | ((id) & 0xFFFFFF))
+
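+Given the CREATE macro above, the two components can be recovered from int
+id with complementary macros. The inverse macros below are illustrative
+only and are not part of the driver’s headers:
+
+#define HYPER_DMABUF_DOM_ID(id) (((id) >> 24) & 0xFF)
+#define HYPER_DMABUF_ID_COUNT(id) ((id) & 0xFFFFFF)
+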
+This limit on the count number directly determines the maximum number of DMA
+BUFs that can be shared simultaneously by one VM. The second element of
+hyper_dmabuf_id, int rng_key[3], is an array of three integers. These
+numbers are generated by Linux’s native random number generation mechanism.
+This field is added to enhance the security of the Hyper DMABUF driver by
+maximizing the entropy of hyper_dmabuf_id (that is, preventing it from being
+guessed by an attacker).
+
+Once a DMA_BUF is no longer shared, the hyper_dmabuf_id associated with
+the DMA_BUF is released, but the count number in hyper_dmabuf_id is saved in
+the ID list for reuse. However, random keys stored in int rng_key[3] are not
+reused. Instead, those keys are always filled with freshly generated random
+keys for security.
+
+2. IOCTLs
+
+a. IOCTL_HYPER_DMABUF_TX_CH_SETUP
+
+This type of IOCTL is used for initialization of a one-directional transmit
+communication channel with a remote domain.
+
+The user space argument for this type of IOCTL is defined as:
+
+struct ioctl_hyper_dmabuf_tx_ch_setup {
+    /* IN parameters */
+    /* Remote domain id */
+    int remote_domain;
+};
+
+b. IOCTL_HYPER_DMABUF_RX_CH_SETUP
+
+This type of IOCTL is used for initialization of a one-directional receive
+communication channel with a remote domain.
+
+The user space argument for this type of IOCTL is defined as:
+
+struct ioctl_hyper_dmabuf_rx_ch_setup {
+    /* IN parameters */
+    /* Source domain id */
+    int source_domain;
+};
+
+c. IOCTL_HYPER_DMABUF_EXPORT_REMOTE
+
+This type of IOCTL is used to export a DMA BUF to another VM. When a user
+space application makes this call, the driver extracts the kernel pages
+associated with the DMA_BUF, then shares them with the importing VM.
+
+All reference information for these shared pages, together with the
+hyper_dmabuf_id, is created, then passed to the importing domain through a
+communication channel for synchronous registration. In the meantime, the
+hyper_dmabuf_id for the shared DMA_BUF is also returned to the user-space
+application.
+
+This IOCTL can accept a reference to “user-defined” data as well as an FD
+for the DMA BUF. This private data is then attached to the DMA BUF and
+exported together with it.
+
+More details regarding this private data can be found in the chapter on
+“Hyper_DMABUF Private Data”.
+
+The user space argument for this type of IOCTL is defined as:
+
+struct ioctl_hyper_dmabuf_export_remote {
+    /* IN parameters */
+    /* DMA buf fd to be exported */
+    int dmabuf_fd;
+    /* Domain id to which buffer should be exported */
+    int remote_domain;
+    /* exported dma buf id */
+    hyper_dmabuf_id_t hid;
+    /* size of private data */
+    int sz_priv;
+    /* ptr to the private data for Hyper_DMABUF */
+    char *priv;
+};
+
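+As an illustration, a minimal exporter-side sequence could look as follows.
+This is only a sketch: the device node path and the uapi header name are
+assumptions, while the IOCTL names and argument structures are the ones
+described in this document.
+
+#include <fcntl.h>
+#include <stddef.h>
+#include <sys/ioctl.h>
+#include <linux/hyper_dmabuf.h> /* assumed uapi header for the IOCTLs */
+
+int export_to_domain(int dmabuf_fd, int remote_domain, hyper_dmabuf_id_t *hid)
+{
+    /* device node name is an assumption; check the driver for the real one */
+    int hdev = open("/dev/hyper_dmabuf", O_RDWR);
+    struct ioctl_hyper_dmabuf_tx_ch_setup tx = {
+        .remote_domain = remote_domain,
+    };
+    struct ioctl_hyper_dmabuf_export_remote exp = {
+        .dmabuf_fd = dmabuf_fd,
+        .remote_domain = remote_domain,
+        .sz_priv = 0,
+        .priv = NULL,
+    };
+
+    if (hdev < 0)
+        return -1;
+    /* one-time setup of the transmit channel to the remote domain */
+    if (ioctl(hdev, IOCTL_HYPER_DMABUF_TX_CH_SETUP, &tx) < 0)
+        return -1;
+    /* share the buffer; the driver fills in exp.hid on success */
+    if (ioctl(hdev, IOCTL_HYPER_DMABUF_EXPORT_REMOTE, &exp) < 0)
+        return -1;
+    *hid = exp.hid;
+    return 0; /* hdev is kept open while the sharing is active */
+}
+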
+d. IOCTL_HYPER_DMABUF_EXPORT_FD
+
+The importing VM uses this IOCTL to import and re-export a shared DMA_BUF
+locally to the end-consumer using the standard Linux DMA_BUF framework.
+Upon IOCTL call, the Hyper_DMABUF driver finds the reference information
+of the shared DMA_BUF with the given hyper_dmabuf_id, then maps all shared
+pages in its own Kernel space. The driver then constructs a scatter-gather
+list with those mapped pages and creates a brand-new DMA_BUF with the list,
+which is eventually exported with a file descriptor to the local consumer.
+
+The user space argument for this type of IOCTL is defined as:
+
+struct ioctl_hyper_dmabuf_export_fd {
+    /* IN parameters */
+    /* hyper dmabuf id to be imported */
+    int hyper_dmabuf_id;
+    /* flags */
+    int flags;
+    /* OUT parameters */
+    /* exported dma buf fd */
+    int fd;
+};
+
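+On the importing VM, the counterpart is a sketch along the same lines (hdev
+is an open handle to the same assumed device node). It first sets up the
+receive channel, then turns a received hyper_dmabuf_id into a local DMA_BUF
+fd:
+
+int import_as_fd(int hdev, int source_domain, int hyper_dmabuf_id)
+{
+    struct ioctl_hyper_dmabuf_rx_ch_setup rx = {
+        .source_domain = source_domain,
+    };
+    struct ioctl_hyper_dmabuf_export_fd imp = {
+        .hyper_dmabuf_id = hyper_dmabuf_id,
+        .flags = 0,
+    };
+
+    /* one-time setup of the receive channel from the exporting domain */
+    if (ioctl(hdev, IOCTL_HYPER_DMABUF_RX_CH_SETUP, &rx) < 0)
+        return -1;
+    /* map the shared pages and wrap them in a local DMA_BUF */
+    if (ioctl(hdev, IOCTL_HYPER_DMABUF_EXPORT_FD, &imp) < 0)
+        return -1;
+    return imp.fd; /* usable with the standard Linux DMA_BUF framework */
+}
+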
+e. IOCTL_HYPER_DMABUF_UNEXPORT
+
+This type of IOCTL is used when it is necessary to terminate the current
+sharing of a DMA_BUF. When called, the driver first checks if there are any
+consumers actively using the DMA_BUF. Then, it unexports it if it is not
+mapped or used by any consumers. Otherwise, it postpones unexporting, but
+makes the buffer invalid to prevent any further import of the same DMA_BUF.
+DMA_BUF is completely unexported after the last consumer releases it.
+
+“Unexport” means removing all reference information about the DMA_BUF from
+the LISTs and making all pages private again.
+
+The user space argument for this type of IOCTL is defined as:
+
+struct ioctl_hyper_dmabuf_unexport {
+    /* IN parameters */
+    /* hyper dmabuf id to be unexported */
+    int hyper_dmabuf_id;
+    /* delay in ms by which unexport processing will be postponed */
+    int delay_ms;
+    /* OUT parameters */
+    /* Status of request */
+    int status;
+};
+
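+Terminating the sharing from the exporter’s side follows the same pattern;
+a small sketch against the same assumed device handle hdev:
+
+struct ioctl_hyper_dmabuf_unexport unexp = {
+    .hyper_dmabuf_id = hyper_dmabuf_id,
+    .delay_ms = 0, /* 0: do not postpone the unexport */
+};
+
+ioctl(hdev, IOCTL_HYPER_DMABUF_UNEXPORT, &unexp);
+/* unexp.status then reports the outcome of the request */
+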
+f. IOCTL_HYPER_DMABUF_QUERY
+
+This IOCTL is used to retrieve specific information about a DMA_BUF that
+is being shared.
+
+The user space argument for this type of IOCTL is defined as:
+
+struct ioctl_hyper_dmabuf_query {
+    /* IN parameters */
+    /* hyper dmabuf id to be queried */
+    int hyper_dmabuf_id;
+    /* item to be queried */
+    int item;
+    /* OUT parameters */
+    /* output of query */
+    /* info can be either value or reference */
+    unsigned long info;
+};
+
+<Available Queries>
+
+HYPER_DMABUF_QUERY_TYPE
+ - Return the type of DMA_BUF from the current domain, Exported or Imported.
+
+HYPER_DMABUF_QUERY_EXPORTER
+ - Return the exporting domain’s ID of a shared DMA_BUF.
+
+HYPER_DMABUF_QUERY_IMPORTER
+ - Return the importing domain’s ID of a shared DMA_BUF.
+
+HYPER_DMABUF_QUERY_SIZE
+ - Return the size of a shared DMA_BUF in bytes.
+
+HYPER_DMABUF_QUERY_BUSY
+ - Return ‘true’ if a shared DMA_BUF is currently used
+   (mapped by the end-consumer).
+
+HYPER_DMABUF_QUERY_UNEXPORTED
+ - Return ‘true’ if a shared DMA_BUF is not valid anymore
+   (so it does not allow a new consumer to map it).
+
+HYPER_DMABUF_QUERY_DELAYED_UNEXPORTED
+ - Return ‘true’ if a shared DMA_BUF is scheduled to be unexported
+   (but is still valid) within a fixed time.
+
+HYPER_DMABUF_QUERY_PRIV_INFO
+ - Return ‘private’ data attached to shared DMA_BUF to the user space.
+   ‘unsigned long info’ is the user space pointer for the buffer, where
+   private data will be copied to.
+
+HYPER_DMABUF_QUERY_PRIV_INFO_SIZE
+ - Return the size of the private data attached to the shared DMA_BUF.
+
+3. Event Polling
+
+Event-polling can be enabled optionally by selecting the Kernel config option,
+Enable event-generation and polling operation under xen/hypervisor in Kernel’s
+menuconfig. The event-polling mechanism includes the generation of
+an import-event, adding it to the event-queue and providing a notification to
+the application so that it can retrieve the event data from the queue.
+
+For this mechanism, “Poll” and “Read” operations are added to the Hyper_DMABUF
+driver. A user application that polls the driver goes into a sleep state until
+there is a new event added to the queue. An application uses “Read” to retrieve
+event data from the event queue. Event data contains the hyper_dmabuf_id and
+the private data of the buffer that has been received by the importer.
+
+For more information on private data, refer to the “Hyper_DMABUF Private
+Data” chapter.
+Using this method, it is possible to lower the risk of the hyper_dmabuf_id and
+other sensitive information about the shared buffer (for example, meta-data
+for shared images) being leaked while being transferred to the importer because
+all of this data is shared as “private info” at the driver level. However,
+please note there should be a way for the importer to find the correct DMA_BUF
+in this case when there are multiple Hyper_DMABUFs being shared simultaneously.
+For example, the surface name or the surface ID of a specific rendering surface
+needs to be sent to the importer in advance before it is exported in a surface-
+sharing use-case.
+
+Each event data given to the user-space consists of a header and the private
+information of the buffer. The data type is defined as follows:
+
+struct hyper_dmabuf_event_hdr {
+        int event_type; /* one type only for now - new import */
+        hyper_dmabuf_id_t hid; /* hyper_dmabuf_id of specific hyper_dmabuf */
+        int size; /* size of data */
+};
+
+struct hyper_dmabuf_event_data {
+        struct hyper_dmabuf_event_hdr hdr;
+        void *data; /* private data */
+};
+
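+A consumer that waits for import events could therefore be structured as in
+the sketch below. It assumes the same device handle as in the earlier
+examples and that one “Read” returns a whole event (header plus private
+data); the 256-byte tail of the buffer is an assumption sized above the
+private data limit:
+
+#include <poll.h>
+#include <stdio.h>
+#include <unistd.h>
+
+void wait_for_import(int hdev)
+{
+    char buf[sizeof(struct hyper_dmabuf_event_hdr) + 256];
+    struct hyper_dmabuf_event_hdr *hdr = (void *)buf;
+    struct pollfd pfd = { .fd = hdev, .events = POLLIN };
+
+    /* sleep until the driver queues a new import event */
+    if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN)) {
+        /* the header comes first; hdr->size bytes of private data follow */
+        if (read(hdev, buf, sizeof(buf)) >= (ssize_t)sizeof(*hdr))
+            printf("new import %d, priv size %d\n", hdr->hid.id, hdr->size);
+    }
+}
+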
+4. Hyper_DMABUF Private Data
+
+Each Hyper_DMABUF can come with private data, the size of which can be up to
+MAX_SIZE_PRIV_DATA (currently 192 bytes). This private data is just a chunk of
+plain data attached to every Hyper_DMABUF. It is guaranteed to be synchronized
+across VMs, exporter and importer. This private data does not have any specific
+structure defined at the driver level, so any “user-defined” format or
+structure can be used. In addition, there is no dedicated use-case for this
+data. It can be used virtually for any purpose. For example, it can be used to
+share meta-data such as dimension and color formats for shared images in
+a surface sharing model. Another example is when we share protected media
+contents.
+
+This private data can be used to transfer flags related to content protection
+information on streamed media to the importer.
+
+Private data is initially generated when a buffer is exported for the first
+time. Then, it is updated whenever the same buffer is re-exported. During the
+re-exporting process, the Hyper_DMABUF driver only updates private data on
+both sides with new data from user-space since the same buffer already exists
+on both the IMPORT LIST and EXPORT LIST.
+
+There are two different ways to retrieve this private data from user-space.
+The first way is to use “Read” on the Hyper_DMABUF driver. “Read” returns the
+data of events containing private data of the buffer. The second way is to
+make a query to Hyper_DMABUF. There are two query items,
+HYPER_DMABUF_QUERY_PRIV_INFO and HYPER_DMABUF_QUERY_PRIV_INFO_SIZE available
+for retrieving private data and its size.
+
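+The query path naturally becomes a two-step call: first ask for the size,
+then pass a user-space destination pointer through the info field. A sketch
+with the same assumed device handle:
+
+int read_priv_data(int hdev, int id, void *out, unsigned long out_sz)
+{
+    struct ioctl_hyper_dmabuf_query q = {
+        .hyper_dmabuf_id = id,
+        .item = HYPER_DMABUF_QUERY_PRIV_INFO_SIZE,
+    };
+
+    if (ioctl(hdev, IOCTL_HYPER_DMABUF_QUERY, &q) < 0 || q.info > out_sz)
+        return -1;
+
+    /* for PRIV_INFO, 'info' carries the user space destination pointer */
+    q.item = HYPER_DMABUF_QUERY_PRIV_INFO;
+    q.info = (unsigned long)out;
+    return ioctl(hdev, IOCTL_HYPER_DMABUF_QUERY, &q);
+}
+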
+5. Scatter-Gather List Table (SGT) Management
+
+SGT management is the core part of the Hyper_DMABUF driver that manages an
+SGT, a representation of the group of kernel pages associated with a DMA_BUF.
+This block includes four different sub-blocks:
+
+a. Hyper_DMABUF_id Manager
+
+This ID manager is responsible for generating a hyper_dmabuf_id for an
+exported DMA_BUF. When an ID is requested, the ID Manager first checks if
+there are any reusable IDs left in the list and returns one of those,
+if available. Otherwise, it creates the next count number and returns it
+to the caller.
+
+b. SGT Creator
+
+The SGT (struct sg_table) contains information about the DMA_BUF such as
+references to all kernel pages for the buffer and their connections. The
+SGT Creator creates a new SGT on the importer side with pages shared by
+the hypervisor.
+
+c. Kernel Page Extractor
+
+The Page Extractor extracts pages from a given SGT before those pages
+are shared.
+
+d. List Manager Interface
+
+The SGT manager also interacts with export and import list managers. It
+sends out information (for example, hyper_dmabuf_id, reference, and
+DMA_BUF information) about the exported or imported DMA_BUFs to the
+list manager. Also, on IOCTL request, it asks the list manager to find
+and return the information for a corresponding DMA_BUF in the list.
+
+6. DMA-BUF Interface
+
+The DMA-BUF interface provides standard methods to manage DMA_BUFs
+reconstructed by the Hyper_DMABUF driver from shared pages. All of the
+relevant operations are listed in struct dma_buf_ops. These operations
+are standard DMA_BUF operations, therefore they follow standard DMA BUF
+protocols.
+
+Each DMA_BUF operation communicates with the exporter at the end of the
+routine for “indirect DMA_BUF synchronization”.
+
+7. Export/Import List Management
+
+Whenever a DMA_BUF is shared and exported, its information is added to the
+database (EXPORT-list) on the exporting VM. Similarly, information about an
+imported DMA_BUF is added to the importing database (IMPORT list) on the
+importing VM, when the export happens.
+
+All of the entries in the lists are needed to manage the exported/imported
+DMA_BUF more efficiently. Both lists are implemented as Linux hash tables.
+The key to the list is hyper_dmabuf_id and the output is the information of
+the DMA_BUF. The List Manager manages all requests from other blocks and
+transactions within lists to ensure that all entries are up-to-date and
+that the list structure is consistent.
+
+The List Manager provides basic functionality, such as:
+
+- Adding to the List
+- Removal from the List
+- Finding information about a DMA_BUF, given the hyper_dmabuf_id
+
+8. Page Sharing by Hypercalls
+
+The Hyper_DMABUF driver assumes that there is a native page-by-page memory
+sharing mechanism available on the hypervisor. Referencing a group of pages
+that are being shared is what the driver expects from “backend” APIs or the
+hypervisor itself.
+
+For example, the Xen backend integrated in the current code base utilizes
+Xen’s grant-table interface for sharing the underlying kernel pages
+(struct page).
+
+More details about the grant-table interface can be found at the following
+locations:
+
+https://wiki.xen.org/wiki/Grant_Table
+https://xenbits.xen.org/docs/4.6-testing/misc/grant-tables.txt
+
+9. Message Management
+
+The exporter and importer can each create a message that consists of an opcode
+(command) and operands (parameters) and send it to each other.
+
+The message format is defined as:
+
+struct hyper_dmabuf_req {
+        unsigned int req_id; /* Sequence number. Used for RING BUF
+                                synchronization */
+        unsigned int stat; /* Status. Response from receiver. */
+        unsigned int cmd;  /* Opcode */
+        unsigned int op[MAX_NUMBER_OF_OPERANDS]; /* Operands */
+};
+
+The following table gives the list of opcodes:
+
+<Opcodes in Message to Exporter/Importer>
+
+HYPER_DMABUF_EXPORT (exporter --> importer)
+ - Export a DMA_BUF to the importer. The importer registers the corresponding
+   DMA_BUF in its IMPORT LIST when the message is received.
+
+HYPER_DMABUF_EXPORT_FD (importer --> exporter)
+ - Locally exported as FD. The importer sends out this command to the exporter
+   to notify that the buffer is now locally exported (mapped and used).
+
+HYPER_DMABUF_EXPORT_FD_FAILED (importer --> exporter)
+ - Failed while exporting locally. The importer sends out this command to the
+   exporter to notify the exporter that the EXPORT_FD failed.
+
+HYPER_DMABUF_NOTIFY_UNEXPORT (exporter --> importer)
+ - Termination of sharing. The exporter notifies the importer that the DMA_BUF
+   has been unexported.
+
+HYPER_DMABUF_OPS_TO_REMOTE (importer --> exporter)
+ - Not implemented yet.
+
+HYPER_DMABUF_OPS_TO_SOURCE (exporter --> importer)
+ - DMA_BUF ops to the exporter, for DMA_BUF upstream synchronization.
+   Note: implemented, but handled asynchronously due to performance issues.
+
+The following table shows the list of operands for each opcode.
+
+<Operands in Message to Exporter/Importer>
+
+- HYPER_DMABUF_EXPORT
+
+op0 to op3 – hyper_dmabuf_id
+op4 – number of pages to be shared
+op5 – offset of data in the first page
+op6 – length of data in the last page
+op7 – reference number for the group of shared pages
+op8 – size of private data
+op9 to (op9+op8)  – private data
+
+- HYPER_DMABUF_EXPORT_FD
+
+op0 to op3 – hyper_dmabuf_id
+
+- HYPER_DMABUF_EXPORT_FD_FAILED
+
+op0 to op3 – hyper_dmabuf_id
+
+- HYPER_DMABUF_NOTIFY_UNEXPORT
+
+op0 to op3 – hyper_dmabuf_id
+
+- HYPER_DMABUF_OPS_TO_REMOTE (not implemented)
+
+- HYPER_DMABUF_OPS_TO_SOURCE
+
+op0 to op3 – hyper_dmabuf_id
+op4 – type of DMA_BUF operation
+
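+Putting the message format and the operand layout together, an exporter
+building an EXPORT request might fill the structure roughly as below. This
+is a kernel-side sketch: the sequence-number and send helpers are the ones
+declared in the Xen backend of this series (where the concrete message type
+is named struct hyper_dmabuf_ring_rq), while the local variables (hid,
+nents, and so on) are placeholders:
+
+struct hyper_dmabuf_req req = { 0 };
+
+req.req_id = hyper_dmabuf_next_req_id_export(); /* ring synchronization */
+req.cmd = HYPER_DMABUF_EXPORT;
+memcpy(&req.op[0], &hid, sizeof(hid)); /* op0 to op3: 16-byte hyper_dmabuf_id */
+req.op[4] = nents;      /* number of pages to be shared */
+req.op[5] = first_ofst; /* offset of data in the first page */
+req.op[6] = last_len;   /* length of data in the last page */
+req.op[7] = grefid;     /* reference for the group of shared pages */
+req.op[8] = sz_priv;    /* size of private data */
+/* op9 onwards: sz_priv bytes of private data follow when present */
+
+hyper_dmabuf_send_request(remote_domain, &req);
+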
+10. Inter-VM (Domain) Communication
+
+Two different types of inter-domain communication channels are required,
+one in kernel space and the other in user space. The communication channel
+in user space is for transmitting or receiving the hyper_dmabuf_id. Since
+there is no specific security (for example, encryption) involved in the
+generation of a global id at the driver level, it is highly recommended that
+the customer’s user application set up a very secure channel for exchanging
+hyper_dmabuf_id between VMs.
+
+The communication channel in kernel space is required for exchanging messages
+from the “message management” block between two VMs. In the current reference
+backend for the Xen hypervisor, Xen ring-buffer and event-channel mechanisms
+are used for message exchange between importer and exporter.
+
+11. What is required from the hypervisor
+
+Memory sharing and message communication between VMs.
+
+------------------------------------------------------------------------------
+Section 3. Hyper DMABUF Sharing Flow
+------------------------------------------------------------------------------
+
+1. Exporting
+
+To export a DMA_BUF to another VM, user space has to call an IOCTL
+(IOCTL_HYPER_DMABUF_EXPORT_REMOTE) with a file descriptor for the buffer given
+by the original exporter. The Hyper_DMABUF driver maps a DMA_BUF locally, then
+issues a hyper_dmabuf_id and SGT for the DMA_BUF, which is registered to the
+EXPORT list. Then, all pages for the SGT are extracted and each individual
+page is shared via a hypervisor-specific memory sharing mechanism
+(for example, in Xen this is grant-table).
+
+One important requirement on this memory sharing method is that it needs to
+create a single integer value that represents the list of pages, which can
+then be used by the importer for retrieving the group of shared pages. For
+this, the “Backend” in the reference driver utilizes a multi-level
+addressing mechanism.
+
+Once the integer reference to the list of pages is created, the exporter
+builds the “export” command and sends it to the importer, then notifies the
+importer.
+
+2. Importing
+
+The Import process is divided into two sections. One is the registration
+of DMA_BUF from the exporter. The other is the actual mapping of the buffer
+before accessing the data in the buffer. The former (termed “Registration”)
+happens on an export event (that is, the export command with an interrupt)
+in the exporter.
+
+The latter (termed “Mapping”) is done asynchronously when the driver gets the
+IOCTL call from user space. When the importer gets an interrupt from the
+exporter, it checks the command in the receiving queue and if it is an
+“export” command, the registration process is started. It first finds
+hyper_dmabuf_id and the integer reference for the shared pages, then stores
+all of that information together with the “domain id” of the exporting domain
+in the IMPORT LIST.
+
+In the case where “event-polling” is enabled (Kernel Config - Enable event-
+generation and polling operation), a “new sharing available” event is
+generated right after the reference info for the new shared DMA_BUF is
+registered to the IMPORT LIST. This event is added to the event-queue.
+
+The user process that polls Hyper_DMABUF driver wakes up when this event-queue
+is not empty and is able to read back event data from the queue using the
+driver’s “Read” function. Once the user-application calls EXPORT_FD IOCTL with
+the proper parameters including hyper_dmabuf_id, the Hyper_DMABUF driver
+retrieves information about the matched DMA_BUF from the IMPORT LIST. Then, it
+maps all pages shared (referenced by the integer reference) in its kernel
+space and creates its own DMA_BUF referencing the same shared pages. After
+this, it exports this new DMA_BUF to the other drivers with a file descriptor.
+The DMA_BUF can then be used in just the same way a local DMA_BUF is.
+
+3. Indirect Synchronization of DMA_BUF
+
+Synchronization of a DMA_BUF within a single OS is automatically achieved
+because all of importer’s DMA_BUF operations are done using functions defined
+on the exporter’s side, which means there is one central place that has full
+control over the DMA_BUF. In other words, any primary activities such as
+attaching/detaching and mapping/un-mapping are all captured by the exporter,
+meaning that the exporter knows basic information such as who is using the
+DMA_BUF and how it is being used. This, however, is not applicable if this
+sharing is done beyond a single OS because kernel space (where the exporter’s
+DMA_BUF operations reside) is simply not visible to the importing VM.
+
+Therefore, “indirect synchronization” was introduced as an alternative solution,
+which is now implemented in the Hyper_DMABUF driver. This technique makes
+the exporter create a shadow DMA_BUF when the end-consumer of the buffer maps
+the DMA_BUF, then duplicates any DMA_BUF operations performed on
+the importer’s side. Through this “indirect synchronization”, the exporter is
+able to virtually track all activities done by the consumer (mostly reference
+counting) as if they were done on the exporter’s local system.
+
+------------------------------------------------------------------------------
+Section 4. Hypervisor Backend Interface
+------------------------------------------------------------------------------
+
+The Hyper_DMABUF driver has a standard “Backend” structure that contains
+mappings to various functions designed for a specific Hypervisor. Most of
+these API functions should provide a low-level implementation of communication
+and memory sharing capability that utilize a Hypervisor’s native mechanisms.
+
+struct hyper_dmabuf_backend_ops {
+        /* retrieve the id of the current virtual machine */
+        int (*get_vm_id)(void);
+        /* get pages shared via hypervisor-specific method */
+        int (*share_pages)(struct page **, int, int, void **);
+        /* make shared pages unshared via hypervisor specific method */
+        int (*unshare_pages)(void **, int);
+        /* map remotely shared pages on importer's side via
+         *  hypervisor-specific method
+         */
+        struct page ** (*map_shared_pages)(int, int, int, void **);
+        /* unmap and free shared pages on importer's side via
+         *  hypervisor-specific method
+         */
+        int (*unmap_shared_pages)(void **, int);
+        /* initialize communication environment */
+        int (*init_comm_env)(void);
+        /* destroy communication channel */
+        void (*destroy_comm)(void);
+        /* upstream ch setup (receiving and responding) */
+        int (*init_rx_ch)(int);
+        /* downstream ch setup (transmitting and parsing responses) */
+        int (*init_tx_ch)(int);
+        /* send msg via communication ch */
+        int (*send_req)(int, struct hyper_dmabuf_req *, int);
+};
+
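+For instance, a Xen backend would populate this table with its grant-table
+and event-channel based implementations. The slot assignment below is the
+illustrative part; the xen_be_* function names are placeholders:
+
+static struct hyper_dmabuf_backend_ops xen_backend_ops = {
+        .get_vm_id = xen_be_get_domid,
+        .share_pages = xen_be_share_pages,
+        .unshare_pages = xen_be_unshare_pages,
+        .map_shared_pages = xen_be_map_shared_pages,
+        .unmap_shared_pages = xen_be_unmap_shared_pages,
+        .init_comm_env = xen_be_init_comm_env,
+        .destroy_comm = xen_be_destroy_comm,
+        .init_rx_ch = xen_be_init_rx_rbuf,
+        .init_tx_ch = xen_be_init_tx_rbuf,
+        .send_req = xen_be_send_req,
+};
+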
+<Hypervisor-specific Backend Structure>
+
+1. get_vm_id
+
+	Returns the VM (domain) ID
+
+	Input:
+
+		None
+
+	Output:
+
+		-ID of the current domain
+
+2. share_pages
+
+	Get pages shared via hypervisor-specific method and return one reference
+	ID that represents the complete list of shared pages
+
+	Input:
+
+		-Array of pages
+		-ID of importing VM
+		-Number of pages
+		-Hypervisor specific Representation of reference info of shared
+		 pages
+
+	Output:
+
+		-Hypervisor specific integer value that represents all of
+		 the shared pages
+
+3. unshare_pages
+
+	Stop sharing pages
+
+	Input:
+
+		-Hypervisor specific Representation of reference info of shared
+		 pages
+		-Number of shared pages
+
+	Output:
+
+		0
+
+4. map_shared_pages
+
+	Map shared pages locally using a hypervisor-specific method
+
+	Input:
+
+		-Reference number that represents all of shared pages
+		-ID of exporting VM, Number of pages
+		-Reference information for any purpose
+
+	Output:
+
+		-An array of shared pages (struct page**)
+
+5. unmap_shared_pages
+
+	Unmap shared pages
+
+	Input:
+
+		-Hypervisor specific Representation of reference info of shared pages
+
+	Output:
+
+		-0 (successful) or one of Standard Kernel errors
+
+6. init_comm_env
+
+	Setup infrastructure needed for communication channel
+
+	Input:
+
+		None
+
+	Output:
+
+		None
+
+7. destroy_comm
+
+	Cleanup everything done via init_comm_env
+
+	Input:
+
+		None
+
+	Output:
+
+		None
+
+8. init_rx_ch
+
+	Configure receive channel
+
+	Input:
+
+		-ID of VM on the other side of the channel
+
+	Output:
+
+		-0 (successful) or one of Standard Kernel errors
+
+9. init_tx_ch
+
+	Configure transmit channel
+
+	Input:
+
+		-ID of VM on the other side of the channel
+
+	Output:
+
+		-0 (success) or one of Standard Kernel errors
+
+10. send_req
+
+	Send message to other VM
+
+	Input:
+
+		-ID of VM that receives the message
+		-Message
+
+	Output:
+
+		-0 (success) or one of Standard Kernel errors
+
+-------------------------------------------------------------------------------
+-------------------------------------------------------------------------------
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 02/60] hyper_dmabuf: added a doc for hyper_dmabuf sharing
  2017-12-19 19:29 ` Dongwon Kim
  (?)
@ 2017-12-19 19:29 ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: xen-devel, mateuszx.potrola, dri-devel, dongwon.kim

High-level description of hyper_dmabuf driver has been added
to "Documentation" directory.

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 Documentation/hyper-dmabuf-sharing.txt | 734 +++++++++++++++++++++++++++++++++
 1 file changed, 734 insertions(+)
 create mode 100644 Documentation/hyper-dmabuf-sharing.txt

diff --git a/Documentation/hyper-dmabuf-sharing.txt b/Documentation/hyper-dmabuf-sharing.txt
new file mode 100644
index 0000000..a6744f8
--- /dev/null
+++ b/Documentation/hyper-dmabuf-sharing.txt
@@ -0,0 +1,734 @@
+Linux Hyper DMABUF Driver
+
+------------------------------------------------------------------------------
+Section 1. Overview
+------------------------------------------------------------------------------
+
+Hyper_DMABUF driver is a Linux device driver running on multiple Virtual
+achines (VMs), which expands DMA-BUF sharing capability to the VM environment
+where multiple different OS instances need to share same physical data without
+data-copy across VMs.
+
+To share a DMA_BUF across VMs, an instance of the Hyper_DMABUF drv on the
+exporting VM (so called, “exporter”) imports a local DMA_BUF from the original
+producer of the buffer, then re-exports it with an unique ID, hyper_dmabuf_id
+for the buffer to the importing VM (so called, “importer”).
+
+Another instance of the Hyper_DMABUF driver on importer registers
+a hyper_dmabuf_id together with reference information for the shared physical
+pages associated with the DMA_BUF to its database when the export happens.
+
+The actual mapping of the DMA_BUF on the importer’s side is done by
+the Hyper_DMABUF driver when user space issues the IOCTL command to access
+the shared DMA_BUF. The Hyper_DMABUF driver works as both an importing and
+exporting driver as is, that is, no special configuration is required.
+Consequently, only a single module per VM is needed to enable cross-VM DMA_BUF
+exchange.
+
+------------------------------------------------------------------------------
+Section 2. Architecture
+------------------------------------------------------------------------------
+
+1. Hyper_DMABUF ID
+
+hyper_dmabuf_id is a global handle for shared DMA BUFs, which is compatible
+across VMs. It is a key used by the importer to retrieve information about
+shared Kernel pages behind the DMA_BUF structure from the IMPORT list. When
+a DMA_BUF is exported to another domain, its hyper_dmabuf_id and META data
+are also kept in the EXPORT list by the exporter for further synchronization
+of control over the DMA_BUF.
+
+hyper_dmabuf_id is “targeted”, meaning it is valid only in exporting (owner of
+the buffer) and importing VMs, where the corresponding hyper_dmabuf_id is
+stored in their database (EXPORT and IMPORT lists).
+
+A user-space application specifies the targeted VM id in the user parameter
+when it calls the IOCTL command to export shared DMA_BUF to another VM.
+
+hyper_dmabuf_id_t is a data type for hyper_dmabuf_id. It is defined as 16-byte
+data structure, and it contains id and rng_key[3] as elements for
+the structure.
+
+typedef struct {
+        int id;
+        int rng_key[3]; /* 12bytes long random number */
+} hyper_dmabuf_id_t;
+
+The first element in the hyper_dmabuf_id structure, int id is combined data of
+a count number generated by the driver running on the exporter and
+the exporter’s ID. The VM’s ID is a one byte value and located at the field’s
+SB in int id. The remaining three bytes in int id are reserved for a count
+number.
+
+However, there is a limit related to this count number, which is 1000.
+Therefore, only little more than a byte starting from the LSB is actually used
+for storing this count number.
+
+#define HYPER_DMABUF_ID_CREATE(domid, id) \
+        ((((domid) & 0xFF) << 24) | ((id) & 0xFFFFFF))
+
+This limit on the count number directly means the maximum number of DMA BUFs
+that  can be shared simultaneously by one VM. The second element of
+hyper_dmabuf_id, that is int rng_key[3], is an array of three integers. These
+numbers are generated by Linux’s native random number generation mechanism.
+This field is added to enhance the security of the Hyper DMABUF driver by
+maximizing the entropy of hyper_dmabuf_id (that is, preventing it from being
+guessed by a security attacker).
+
+Once DMA_BUF is no longer shared, the hyper_dmabuf_id associated with
+the DMA_BUF is released, but the count number in hyper_dmabuf_id is saved in
+the ID list for reuse. However, random keys stored in int rng_key[3] are not
+reused. Instead, those keys are always filled with freshly generated random
+keys for security.
+
+2. IOCTLs
+
+a. IOCTL_HYPER_DMABUF_TX_CH_SETUP
+
+This type of IOCTL is used for initialization of a one-directional transmit
+communication channel with a remote domain.
+
+The user space argument for this type of IOCTL is defined as:
+
+struct ioctl_hyper_dmabuf_tx_ch_setup {
+    /* IN parameters */
+    /* Remote domain id */
+    int remote_domain;
+};
+
+b. IOCTL_HYPER_DMABUF_RX_CH_SETUP
+
+This type of IOCTL is used for initialization of a one-directional receive
+communication channel with a remote domain.
+
+The user space argument for this type of IOCTL is defined as:
+
+struct ioctl_hyper_dmabuf_rx_ch_setup {
+    /* IN parameters */
+    /* Source domain id */
+    int source_domain;
+};
+
+c. IOCTL_HYPER_DMABUF_EXPORT_REMOTE
+
+This type of IOCTL is used to export a DMA BUF to another VM. When a user
+space application makes this call to the driver, it extracts Kernel pages
+associated with the DMA_BUF, then makes those shared with the importing VM.
+
+All reference information for this shared pages and hyper_dmabuf_id is
+created, then passed to the importing domain through a communications
+channel for synchronous registration. In the meantime, the hyper_dmabuf_id
+for the shared DMA_BUF is also returned to user-space application.
+
+This IOCTL can accept a reference to “user-defined” data as well as a FD
+for the DMA BUF. This private data is then attached to the DMA BUF and
+exported together with it.
+
+ore details regarding this private data can be found in chapter for
+“Hyper_DMABUF Private Data”.
+
+The user space argument for this type of IOCTL is defined as:
+
+struct ioctl_hyper_dmabuf_export_remote {
+    /* IN parameters */
+    /* DMA buf fd to be exported */
+    int dmabuf_fd;
+    /* Domain id to which buffer should be exported */
+    int remote_domain;
+    /* exported dma buf id */
+    hyper_dmabuf_id_t hid;
+    /* size of private data */
+    int sz_priv;
+    /* ptr to the private data for Hyper_DMABUF */
+    char *priv;
+};
+
+d. IOCTL_HYPER_DMABUF_EXPORT_FD
+
+The importing VM uses this IOCTL to import and re-export a shared DMA_BUF
+locally to the end-consumer using the standard Linux DMA_BUF framework.
+Upon IOCTL call, the Hyper_DMABUF driver finds the reference information
+of the shared DMA_BUF with the given hyper_dmabuf_id, then maps all shared
+pages in its own Kernel space. The driver then constructs a scatter-gather
+list with those mapped pages and creates a brand-new DMA_BUF with the list,
+which is eventually exported with a file descriptor to the local consumer.
+
+The user space argument for this type of IOCTL is defined as:
+
+struct ioctl_hyper_dmabuf_export_fd {
+    /* IN parameters */
+    /* hyper dmabuf id to be imported */
+    int hyper_dmabuf_id;
+    /* flags */
+    int flags;
+    /* OUT parameters */
+    /* exported dma buf fd */
+    int fd;
+};
+
+e. IOCTL_HYPER_DMABUF_UNEXPORT
+
+This type of IOCTL is used when it is necessary to terminate the current
+sharing of a DMA_BUF. When called, the driver first checks if there are any
+consumers actively using the DMA_BUF. Then, it unexports it if it is not
+mapped or used by any consumers. Otherwise, it postpones unexporting, but
+makes the buffer invalid to prevent any further import of the same DMA_BUF.
+DMA_BUF is completely unexported after the last consumer releases it.
+
+”Unexport” means remove all reference information about the DMA_BUF from the
+LISTs and make all pages private again.
+
+The user space argument for this type of IOCTL is defined as:
+
+struct ioctl_hyper_dmabuf_unexport {
+    /* IN parameters */
+    /* hyper dmabuf id to be unexported */
+    int hyper_dmabuf_id;
+    /* delay in ms by which unexport processing will be postponed */
+    int delay_ms;
+    /* OUT parameters */
+    /* Status of request */
+    int status;
+};
+
+f. IOCTL_HYPER_DMABUF_QUERY
+
+This IOCTL is used to retrieve specific information about a DMA_BUF that
+is being shared.
+
+The user space argument for this type of IOCTL is defined as:
+
+struct ioctl_hyper_dmabuf_query {
+    /* in parameters */
+    /* hyper dmabuf id to be queried */
+    int hyper_dmabuf_id;
+    /* item to be queried */
+    int item;
+    /* OUT parameters */
+    /* output of query */
+    /* info can be either value or reference */
+    unsigned long info;
+};
+
+<Available Queries>
+
+HYPER_DMABUF_QUERY_TYPE
+ - Return the type of DMA_BUF from the current domain, Exported or Imported.
+
+HYPER_DMABUF_QUERY_EXPORTER
+ - Return the exporting domain’s ID of a shared DMA_BUF.
+
+HYPER_DMABUF_QUERY_IMPORTER
+ - Return the importing domain’s ID of a shared DMA_BUF.
+
+HYPER_DMABUF_QUERY_SIZE
+ - Return the size of a shared DMA_BUF in bytes.
+
+HYPER_DMABUF_QUERY_BUSY
+ - Return ‘true’ if a shared DMA_BUF is currently used
+   (mapped by the end-consumer).
+
+HYPER_DMABUF_QUERY_UNEXPORTED
+ - Return ‘true’ if a shared DMA_BUF is not valid anymore
+   (so it does not allow a new consumer to map it).
+
+HYPER_DMABUF_QUERY_DELAYED_UNEXPORTED
+ - Return ‘true’ if a shared DMA_BUF is scheduled to be unexported
+   (but is still valid) within a fixed time.
+
+HYPER_DMABUF_QUERY_PRIV_INFO
+ - Return ‘private’ data attached to shared DMA_BUF to the user space.
+   ‘unsigned long info’ is the user space pointer for the buffer, where
+   private data will be copied to.
+
+HYPER_DMABUF_QUERY_PRIV_INFO_SIZE
+ - Return the size of the private data attached to the shared DMA_BUF.
+
+3. Event Polling
+
+Event-polling can be enabled optionally by selecting the Kernel config option,
+Enable event-generation and polling operation under xen/hypervisor in Kernel’s
+menuconfig. The event-polling mechanism includes the generation of
+an import-event, adding it to the event-queue and providing a notification to
+the application so that it can retrieve the event data from the queue.
+
+For this mechanism, “Poll” and “Read” operations are added to the Hyper_DMABUF
+driver. A user application that polls the driver goes into a sleep state until
+there is a new event added to the queue. An application uses “Read” to retrieve
+event data from the event queue. Event data contains the hyper_dmabuf_id and
+the private data of the buffer that has been received by the importer.
+
+For more information on private data, refer to Section 3.5).
+Using this method, it is possible to lower the risk of the hyper_dmabuf_id and
+other sensitive information about the shared buffer (for example, meta-data
+for shared images) being leaked while being transferred to the importer because
+all of this data is shared as “private info” at the driver level. However,
+please note there should be a way for the importer to find the correct DMA_BUF
+in this case when there are multiple Hyper_DMABUFs being shared simultaneously.
+For example, the surface name or the surface ID of a specific rendering surface
+needs to be sent to the importer in advance before it is exported in a surface-
+sharing use-case.
+
+Each event data given to the user-space consists of a header and the private
+information of the buffer. The data type is defined as follows:
+
+struct hyper_dmabuf_event_hdr {
+        int event_type; /* one type only for now - new import */
+        hyper_dmabuf_id_t hid; /* hyper_dmabuf_id of specific hyper_dmabuf */
+        int size; /* size of data */
+};
+
+struct hyper_dmabuf_event_data {
+        struct hyper_dmabuf_event_hdr hdr;
+        void *data; /* private data */
+};
+
+4. Hyper_DMABUF Private Data
+
+Each Hyper_DMABUF can come with private data, the size of which can be up to
+AX_SIZE_PRIV_DATA (currently 192 byte). This private data is just a chunk of
+plain data attached to every Hyper_DMABUF. It is guaranteed to be synchronized
+across VMs, exporter and importer. This private data does not have any specific
+structure defined at the driver level, so any “user-defined” format or
+structure can be used. In addition, there is no dedicated use-case for this
+data. It can be used virtually for any purpose. For example, it can be used to
+share meta-data such as dimension and color formats for shared images in
+a surface sharing model. Another example is when we share protected media
+contents.
+
+This private data can be used to transfer flags related to content protection
+information on streamed media to the importer.
+
+Private data is initially generated when a buffer is exported for the first
+time. Then, it is updated whenever the same buffer is re-exported. During the
+re-exporting process, the Hyper_DMABUF driver only updates private data on
+both sides with new data from user-space since the same buffer already exists
+on both the IMPORT LIST and EXPORT LIST.
+
+There are two different ways to retrieve this private data from user-space.
+The first way is to use “Read” on the Hyper_DMABUF driver. “Read” returns the
+data of events containing private data of the buffer. The second way is to
+make a query to Hyper_DMABUF. There are two query items,
+HYPER_DMABUF_QUERY_PRIV_INFO and HYPER_DMABUF_QUERY_PRIV_INFO_SIZE available
+for retrieving private data and its size.
+
+5. Scatter-Gather List Table (SGT) Management
+
+SGT management is the core part of the Hyper_DMABUF driver that manages an
+SGT, a representation of the group of kernel pages associated with a DMA_BUF.
+This block includes four different sub-blocks:
+
+a. Hyper_DMABUF_id Manager
+
+This ID manager is responsible for generating a hyper_dmabuf_id for an
+exported DMA_BUF. When an ID is requested, the ID Manager first checks if
+there are any reusable IDs left in the list and returns one of those,
+if available. Otherwise, it creates the next count number and returns it
+to the caller.
+
+b. SGT Creator
+
+The SGT (struct sg_table) contains information about the DMA_BUF such as
+references to all kernel pages for the buffer and their connections. The
+SGT Creator creates a new SGT on the importer side with pages shared by
+the hypervisor.
+
+c. Kernel Page Extractor
+
+The Page Extractor extracts pages from a given SGT before those pages
+are shared.
+
+d. List Manager Interface
+
+The SGT manger also interacts with export and import list managers. It
+sends out information (for example, hyper_dmabuf_id, reference, and
+DMA_BUF information) about the exported or imported DMA_BUFs to the
+list manager. Also, on IOCTL request, it asks the list manager to find
+and return the information for a corresponding DMA_BUF in the list.
+
+6. DMA-BUF Interface
+
+The DMA-BUF interface provides standard methods to manage DMA_BUFs
+reconstructed by the Hyper_DMABUF driver from shared pages. All of the
+relevant operations are listed in struct dma_buf_ops. These operations
+are standard DMA_BUF operations, therefore they follow standard DMA BUF
+protocols.
+
+Each DMA_BUF operation communicates with the exporter at the end of the
+routine for “indirect DMA_BUF synchronization”.
+
+7. Export/Import List Management
+
+Whenever a DMA_BUF is shared and exported, its information is added to the
+database (EXPORT-list) on the exporting VM. Similarly, information about an
+imported DMA_BUF is added to the importing database (IMPORT list) on the
+importing VM, when the export happens.
+
+All of the entries in the lists are needed to manage the exported/imported
+DMA_BUF more efficiently. Both lists are implemented as Linux hash tables.
+The key to the list is hyper_dmabuf_id and the output is the information of
+the DMA_BUF. The List Manager manages all requests from other blocks and
+transactions within lists to ensure that all entries are up-to-date and
+that the list structure is consistent.
+
+The List Manager provides basic functionality, such as:
+
+- Adding to the List
+- Removal from the List
+- Finding information about a DMA_BUF, given the hyper_dmabuf_id
+
+8. Page Sharing by Hypercalls
+
+The Hyper_DMABUF driver assumes that there is a native page-by-page memory
+sharing mechanism available on the hypervisor. Referencing a group of pages
+that are being shared is what the driver expects from “backend” APIs or the
+hypervisor itself.
+
+For the example, xen backend integrated in current code base utilizes Xen’s
+grant-table interface for sharing the underlying kernel pages (struct *page).
+
+ore details about grant-table interface can be found at the following locations:
+
+https://wiki.xen.org/wiki/Grant_Table
+https://xenbits.xen.org/docs/4.6-testing/misc/grant-tables.txt
+
+9. Message Management
+
+The exporter and importer can each create a message that consists of an opcode
+(command) and operands (parameters) and send it to each other.
+
+The message format is defined as:
+
+struct hyper_dmabuf_req {
+        unsigned int req_id; /* Sequence number. Used for RING BUF
+                                synchronization */
+        unsigned int stat; /* Status.Response from receiver. */
+        unsigned int cmd;  /* Opcode */
+        unsigned int op[MAX_NUMBER_OF_OPERANDS]; /* Operands */
+};
+
+The following table gives the list of opcodes:
+
+<Opcodes in Message to Exporter/Importer>
+
+HYPER_DMABUF_EXPORT (exporter --> importer)
+ - Export a DMA_BUF to the importer. The importer registers the corresponding
+   DMA_BUF in its IMPORT LIST when the message is received.
+
+HYPER_DMABUF_EXPORT_FD (importer --> exporter)
+ - Locally exported as FD. The importer sends out this command to the exporter
+   to notify that the buffer is now locally exported (mapped and used).
+
+HYPER_DMABUF_EXPORT_FD_FAILED (importer --> exporter)
+ - Failed while exporting locally. The importer sends out this command to the
+   exporter to notify the exporter that the EXPORT_FD failed.
+
+HYPER_DMABUF_NOTIFY_UNEXPORT (exporter --> importer)
+ - Termination of sharing. The exporter notifies the importer that the DMA_BUF
+   has been unexported.
+
+HYPER_DMABUF_OPS_TO_REMOTE (importer --> exporter)
+ - Not implemented yet.
+
+HYPER_DMABUF_OPS_TO_SOURCE (exporter --> importer)
+ - DMA_BUF ops to the exporter, for DMA_BUF upstream synchronization.
+   Note: Implemented but it is done asynchronously due to performance issues.
+
+The following table shows the list of operands for each opcode.
+
+<Operands in Message to Exporter/Importer>
+
+- HYPER_DMABUF_EXPORT
+
+op0 to op3 – hyper_dmabuf_id
+op4 – number of pages to be shared
+op5 – offset of data in the first page
+op6 – length of data in the last page
+op7 – reference number for the group of shared pages
+op8 – size of private data
+op9 to (op9+op8)  – private data
+
+- HYPER_DMABUF_EXPORT_FD
+
+op0 to op3 – hyper_dmabuf_id
+
+- HYPER_DMABUF_EXPORT_FD_FAILED
+
+op0 to op3 – hyper_dmabuf_id
+
+- HYPER_DMABUF_NOTIFY_UNEXPORT
+
+op0 to op3 – hyper_dmabuf_id
+
+- HYPER_DMABUF_OPS_TO_REMOTE(Not implemented)
+
+- HYPER_DMABUF_OPS_TO_SOURCE
+
+op0 to op3 – hyper_dmabuf_id
+op4 – type of DMA_BUF operation
+
+9. Inter VM (Domain) Communication
+
+Two different types of inter-domain communication channels are required,
+one in kernel space and the other in user space. The communication channel
+in user space is for transmitting or receiving the hyper_dmabuf_id. Since
+there is no specific security (for example, encryption) involved in the
+generation of a global id at the driver level, it is highly recommended that
+the customer’s user application set up a very secure channel for exchanging
+hyper_dmabuf_id between VMs.
+
+The communication channel in kernel space is required for exchanging messages
+from “message management” block between two VMs. In the current reference
+backend for Xen hypervisor, Xen ring-buffer and event-channel mechanisms are
+used for message exchange between impoter and exporter.
+
+10. What are required in hypervisor
+
+emory sharing and message communication between VMs
+
+------------------------------------------------------------------------------
+Section 3. Hyper DMABUF Sharing Flow
+------------------------------------------------------------------------------
+
+1. Exporting
+
+To export a DMA_BUF to another VM, user space has to call an IOCTL
+(IOCTL_HYPER_DMABUF_EXPORT_REMOTE) with a file descriptor for the buffer given
+by the original exporter. The Hyper_DMABUF driver maps a DMA_BUF locally, then
+issues a hyper_dmabuf_id and SGT for the DMA_BUF, which is registered to the
+EXPORT list. Then, all pages for the SGT are extracted and each individual
+page is shared via a hypervisor-specific memory sharing mechanism
+(for example, in Xen this is grant-table).
+
+One important requirement on this memory sharing method is that it needs to
+create a single integer value that represents the list of pages, which can
+then be used by the importer for retrieving the group of shared pages.  For
+this, the “Backend” in the reference driver utilizes the multiple level
+addressing mechanism.
+
+Once the integer reference to the list of pages is created, the exporter
+builds the “export” command and sends it to the importer, then notifies the
+importer.
+
+2. Importing
+
+The Import process is divided into two sections. One is the registration
+of DMA_BUF from the exporter. The other is the actual mapping of the buffer
+before accessing the data in the buffer. The former (termed “Registration”)
+happens on an export event (that is, the export command with an interrupt)
+in the exporter.
+
+The latter (termed “Mapping”) is done asynchronously when the driver gets the
+IOCTL call from user space. When the importer gets an interrupt from the
+exporter, it checks the command in the receiving queue and if it is an
+“export” command, the registration process is started. It first finds
+hyper_dmabuf_id and the integer reference for the shared pages, then stores
+all of that information together with the “domain id” of the exporting domain
+in the IMPORT LIST.
+
+In the case where “event-polling” is enabled (Kernel Config - Enable event-
+generation and polling operation), a “new sharing available” event is
+generated right after the reference info for the new shared DMA_BUF is
+registered to the IMPORT LIST. This event is added to the event-queue.
+
+The user process that polls Hyper_DMABUF driver wakes up when this event-queue
+is not empty and is able to read back event data from the queue using the
+driver’s “Read” function. Once the user-application calls EXPORT_FD IOCTL with
+the proper parameters including hyper_dmabuf_id, the Hyper_DMABUF driver
+retrieves information about the matched DMA_BUF from the IMPORT LIST. Then, it
+maps all pages shared (referenced by the integer reference) in its kernel
+space and creates its own DMA_BUF referencing the same shared pages. After
+this, it exports this new DMA_BUF to the other drivers with a file descriptor.
+DMA_BUF can then be used just in the same way a local DMA_BUF is.
+
+3. Indirect Synchronization of DMA_BUF
+
+Synchronization of a DMA_BUF within a single OS is automatically achieved
+because all of importer’s DMA_BUF operations are done using functions defined
+on the exporter’s side, which means there is one central place that has full
+control over the DMA_BUF. In other words, any primary activities such as
+attaching/detaching and mapping/un-mapping are all captured by the exporter,
+meaning that the exporter knows basic information such as who is using the
+DMA_BUF and how it is being used. This, however, is not applicable if this
+sharing is done beyond a single OS because kernel space (where the exporter’s
+DMA_BUF operations reside) is simply not visible to the importing VM.
+
+Therefore, “indirect synchronization” was introduced as an alternative
+solution, which is now implemented in the Hyper_DMABUF driver. This technique
+makes the exporter create a shadow DMA_BUF when the end-consumer of the
+buffer maps the DMA_BUF, and then replays on that shadow any DMA_BUF
+operations performed on the importer’s side. Through this “indirect
+synchronization”, the exporter is able to virtually track all activities done
+by the consumer (mostly reference counting) as if they were done in the
+exporter’s local system.
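+
+In code terms, each importer-side dma_buf callback reduces to a message sent
+to the exporter. A simplified sketch of the pattern used by the importer's
+ops in this series (the real callbacks carry extra arguments and
+bookkeeping):
+
+        static int hyper_dmabuf_ops_attach(struct dma_buf *dmabuf,
+                                           struct dma_buf_attachment *attach)
+        {
+                struct hyper_dmabuf_imported_sgt_info *sgt_info = dmabuf->priv;
+
+                /* forward the event so the exporter's shadow DMA_BUF can
+                 * mirror it and keep its bookkeeping in sync */
+                return hyper_dmabuf_sync_request_and_wait(
+                        HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+                        HYPER_DMABUF_OPS_ATTACH);
+        }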
+
+------------------------------------------------------------------------------
+Section 4. Hypervisor Backend Interface
+------------------------------------------------------------------------------
+
+The Hyper_DMABUF driver has a standard “Backend” structure that contains
+mappings to various functions designed for a specific hypervisor. Most of
+these API functions should provide a low-level implementation of the
+communication and memory sharing capabilities that utilize the hypervisor’s
+native mechanisms.
+
+struct hyper_dmabuf_backend_ops {
+        /* retrieving id of current virtual machine */
+        int (*get_vm_id)(void);
+        /* share pages via hypervisor-specific method */
+        int (*share_pages)(struct page **, int, int, void **);
+        /* make shared pages unshared via hypervisor-specific method */
+        int (*unshare_pages)(void **, int);
+        /* map remotely shared pages on importer's side via
+         *  hypervisor-specific method
+         */
+        struct page ** (*map_shared_pages)(int, int, int, void **);
+        /* unmap and free shared pages on importer's side via
+         *  hypervisor-specific method
+         */
+        int (*unmap_shared_pages)(void **, int);
+        /* initialize communication environment */
+        int (*init_comm_env)(void);
+        /* destroy communication channel */
+        void (*destroy_comm)(void);
+        /* upstream ch setup (receiving and responding) */
+        int (*init_rx_ch)(int);
+        /* downstream ch setup (transmitting and parsing responses) */
+        int (*init_tx_ch)(int);
+        /* send msg via communication ch */
+        int (*send_req)(int, struct hyper_dmabuf_req *, int);
+};
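+
+A hypervisor port fills in this structure and registers it with the core
+driver. As an illustration, a Xen backend could be wired up as follows (the
+xen_be_* names are placeholders for the Xen-specific implementations, not
+necessarily the names used by the driver):
+
+        static struct hyper_dmabuf_backend_ops xen_backend_ops = {
+                .get_vm_id          = xen_be_get_domid,
+                .share_pages        = xen_be_share_pages,
+                .unshare_pages      = xen_be_unshare_pages,
+                .map_shared_pages   = xen_be_map_shared_pages,
+                .unmap_shared_pages = xen_be_unmap_shared_pages,
+                .init_comm_env      = xen_be_init_comm_env,
+                .destroy_comm       = xen_be_destroy_comm,
+                .init_rx_ch         = xen_be_init_rx_rbuf,
+                .init_tx_ch         = xen_be_init_tx_rbuf,
+                .send_req           = xen_be_send_req,
+        };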
+
+<Hypervisor-specific Backend Structure>
+
+1. get_vm_id
+
+	Returns the VM (domain) ID of the current domain
+
+	Input:
+
+		None
+
+	Output:
+
+		-ID of the current domain
+
+2. share_pages
+
+	Share pages via a hypervisor-specific method and return one reference
+	ID that represents the complete list of shared pages
+
+	Input:
+
+		-Array of pages
+		-ID of the importing VM
+		-Number of pages
+		-Hypervisor-specific representation of reference info for the
+		 shared pages (filled in by the backend)
+
+	Output:
+
+		-Hypervisor-specific integer value that represents all of
+		 the shared pages
+
+3. unshare_pages
+
+	Stop sharing pages
+
+	Input:
+
+		-Hypervisor-specific representation of reference info for the
+		 shared pages
+		-Number of shared pages
+
+	Output:
+
+		-0 (success) or one of the standard kernel error codes
+
+4. map_shared_pages
+
+	Map shared pages locally using a hypervisor-specific method
+
+	Input:
+
+		-Reference number that represents all of the shared pages
+		-ID of the exporting VM
+		-Number of pages
+		-Hypervisor-specific reference info (for the backend's own
+		 bookkeeping)
+
+	Output:
+
+		-An array of the mapped pages (struct page **)
+
+5. unmap_shared_pages
+
+	Unmap shared pages
+
+	Input:
+
+		-Hypervisor-specific representation of reference info for the
+		 shared pages
+		-Number of shared pages
+
+	Output:
+
+		-0 (success) or one of the standard kernel error codes
+
+6. init_comm_env
+
+	Set up the infrastructure needed for the communication channels
+
+	Input:
+
+		None
+
+	Output:
+
+		-0 (success) or one of the standard kernel error codes
+
+7. destroy_comm
+
+	Clean up everything set up via init_comm_env
+
+	Input:
+
+		None
+
+	Output:
+
+		None
+
+8. init_rx_ch
+
+	Configure the receive channel
+
+	Input:
+
+		-ID of the VM on the other side of the channel
+
+	Output:
+
+		-0 (success) or one of the standard kernel error codes
+
+9. init_tx_ch
+
+	Configure the transmit channel
+
+	Input:
+
+		-ID of the VM on the other side of the channel
+
+	Output:
+
+		-0 (success) or one of the standard kernel error codes
+
+10. send_req
+
+	Send a message to another VM
+
+	Input:
+
+		-ID of the VM that receives the message
+		-Message
+		-Flag indicating whether to wait for a response
+
+	Output:
+
+		-0 (success) or one of the standard kernel error codes
+
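+Putting the backend API together, the exporter-side flow from Section 3 maps
+onto these hooks roughly as follows (a sketch only: error handling is
+omitted, 'ops' is the registered backend, and build_export_request() is a
+placeholder for the driver's message-building code):
+
+        void *refs_info;
+        struct hyper_dmabuf_req req;
+        int ref;
+
+        /* turn the buffer's page array into one integer reference */
+        ref = ops->share_pages(pages, importer_domid, nents, &refs_info);
+
+        /* wrap the reference (plus hyper_dmabuf_id, size info, ...) into
+         * an "export" request; the operand layout is driver-internal */
+        build_export_request(&req, hyper_dmabuf_id, ref);
+
+        /* push it through the downstream channel and wait for the ack;
+         * the notification then triggers Registration on the importer */
+        ops->send_req(importer_domid, &req, 1);
+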
+-------------------------------------------------------------------------------
+-------------------------------------------------------------------------------
-- 
2.7.4



* [RFC PATCH 03/60] hyper_dmabuf: re-use dma_buf previously exported if exist
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

Re-use the previously exported dma_buf instead of going through the
normal export process again (including creating new mappings). For
this, hyper_dmabuf list entries can now be searched with
"struct dma_buf*". Also, the ioctl (export_remote) is modified to just
return the hyper_dmabuf_id if the specific dmabuf has already been
exported to the target domain.

This patch also includes changes to printk calls for debugging.

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c   | 28 +++++++++++++--------------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c | 17 ++++++++--------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c  |  4 ++--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h  |  2 +-
 4 files changed, 26 insertions(+), 25 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
index faa5c1b..7cb5c35 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
@@ -532,7 +532,7 @@ static int hyper_dmabuf_ops_attach(struct dma_buf* dmabuf, struct device* dev,
 						HYPER_DMABUF_OPS_ATTACH);
 
 	if (ret < 0) {
-		printk("send dmabuf sync request failed\n");
+		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 
 	return ret;
@@ -552,7 +552,7 @@ static void hyper_dmabuf_ops_detach(struct dma_buf* dmabuf, struct dma_buf_attac
 						HYPER_DMABUF_OPS_DETACH);
 
 	if (ret < 0) {
-		printk("send dmabuf sync request failed\n");
+		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 }
 
@@ -586,7 +586,7 @@ static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachme
 						HYPER_DMABUF_OPS_MAP);
 
 	if (ret < 0) {
-		printk("send dmabuf sync request failed\n");
+		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 
 	return st;
@@ -618,7 +618,7 @@ static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
 						HYPER_DMABUF_OPS_UNMAP);
 
 	if (ret < 0) {
-		printk("send dmabuf sync request failed\n");
+		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 }
 
@@ -636,7 +636,7 @@ static void hyper_dmabuf_ops_release(struct dma_buf *dmabuf)
 						HYPER_DMABUF_OPS_RELEASE);
 
 	if (ret < 0) {
-		printk("send dmabuf sync request failed\n");
+		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 }
 
@@ -653,7 +653,7 @@ static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf, enum dma_da
 	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
 						HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
 	if (ret < 0) {
-		printk("send dmabuf sync request failed\n");
+		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 
 	return ret;
@@ -672,7 +672,7 @@ static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf, enum dma_data
 	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
 						HYPER_DMABUF_OPS_END_CPU_ACCESS);
 	if (ret < 0) {
-		printk("send dmabuf sync request failed\n");
+		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 
 	return 0;
@@ -691,7 +691,7 @@ static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf, unsigned long
 	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
 						HYPER_DMABUF_OPS_KMAP_ATOMIC);
 	if (ret < 0) {
-		printk("send dmabuf sync request failed\n");
+		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 
 	return NULL; /* for now NULL.. need to return the address of mapped region */
@@ -710,7 +710,7 @@ static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf, unsigned long
 	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
 						HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
 	if (ret < 0) {
-		printk("send dmabuf sync request failed\n");
+		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 }
 
@@ -727,7 +727,7 @@ static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
 	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
 						HYPER_DMABUF_OPS_KMAP);
 	if (ret < 0) {
-		printk("send dmabuf sync request failed\n");
+		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 
 	return NULL; /* for now NULL.. need to return the address of mapped region */
@@ -746,7 +746,7 @@ static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum,
 	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
 						HYPER_DMABUF_OPS_KUNMAP);
 	if (ret < 0) {
-		printk("send dmabuf sync request failed\n");
+		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 }
 
@@ -763,7 +763,7 @@ static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf, struct vm_area_struct *
 	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
 						HYPER_DMABUF_OPS_MMAP);
 	if (ret < 0) {
-		printk("send dmabuf sync request failed\n");
+		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 
 	return ret;
@@ -782,7 +782,7 @@ static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
 	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
 						HYPER_DMABUF_OPS_VMAP);
 	if (ret < 0) {
-		printk("send dmabuf sync request failed\n");
+		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 
 	return NULL;
@@ -801,7 +801,7 @@ static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
 	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
 						HYPER_DMABUF_OPS_VUNMAP);
 	if (ret < 0) {
-		printk("send dmabuf sync request failed\n");
+		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 }
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 5e50908..665cada 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -86,27 +86,28 @@ static int hyper_dmabuf_export_remote(void *data)
 	export_remote_attr = (struct ioctl_hyper_dmabuf_export_remote *)data;
 
 	dma_buf = dma_buf_get(export_remote_attr->dmabuf_fd);
+
 	if (!dma_buf) {
 		printk("Cannot get dma buf\n");
 		return -1;
 	}
 
-	attachment = dma_buf_attach(dma_buf, hyper_dmabuf_private.device);
-	if (!attachment) {
-		printk("Cannot get attachment\n");
-		return -1;
-	}
-
 	/* we check if this specific attachment was already exported
 	 * to the same domain and if yes, it returns hyper_dmabuf_id
 	 * of pre-exported sgt */
-	ret = hyper_dmabuf_find_id(attachment, export_remote_attr->remote_domain);
+	ret = hyper_dmabuf_find_id(dma_buf, export_remote_attr->remote_domain);
 	if (ret != -1) {
-		dma_buf_detach(dma_buf, attachment);
 		dma_buf_put(dma_buf);
 		export_remote_attr->hyper_dmabuf_id = ret;
 		return 0;
 	}
+
+	attachment = dma_buf_attach(dma_buf, hyper_dmabuf_private.device);
+	if (!attachment) {
+		printk("Cannot get attachment\n");
+		return -1;
+	}
+
 	/* Clear ret, as that will cause whole ioctl to return failure to userspace, which is not true */
 	ret = 0;
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
index 77a7e65..ad2109c 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
@@ -65,13 +65,13 @@ struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(int id)
 }
 
 /* search for pre-exported sgt and return id of it if it exist */
-int hyper_dmabuf_find_id(struct dma_buf_attachment *attach, int domid)
+int hyper_dmabuf_find_id(struct dma_buf *dmabuf, int domid)
 {
 	struct hyper_dmabuf_info_entry_exported *info_entry;
 	int bkt;
 
 	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
-		if(info_entry->info->attachment == attach &&
+		if(info_entry->info->attachment->dmabuf == dmabuf &&
 			info_entry->info->hyper_dmabuf_rdomain == domid)
 			return info_entry->info->hyper_dmabuf_id;
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
index 869cd9a..463a6da 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
@@ -25,7 +25,7 @@ int hyper_dmabuf_table_destroy(void);
 int hyper_dmabuf_register_exported(struct hyper_dmabuf_sgt_info *info);
 
 /* search for pre-exported sgt and return id of it if it exist */
-int hyper_dmabuf_find_id(struct dma_buf_attachment *attach, int domid);
+int hyper_dmabuf_find_id(struct dma_buf *dmabuf, int domid);
 
 int hyper_dmabuf_register_imported(struct hyper_dmabuf_imported_sgt_info* info);
 
-- 
2.7.4

* [RFC PATCH 04/60] hyper_dmabuf: new index, k for pointing a right n-th page
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

A new index, k, is needed in the hyper_dmabuf_ext_pgs function for
picking up the correct n-th page in a contiguous memory region: i
indexes the whole output page array, so it cannot double as the page
offset within the current scatterlist entry.

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
index 7cb5c35..3b40ec0 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
@@ -39,7 +39,7 @@ static int hyper_dmabuf_get_num_pgs(struct sg_table *sgt)
 struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
 {
 	struct hyper_dmabuf_pages_info *pinfo;
-	int i, j;
+	int i, j, k;
 	int length;
 	struct scatterlist *sgl;
 
@@ -57,7 +57,7 @@ struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
 	pinfo->frst_ofst = sgl->offset;
 	pinfo->pages[0] = sg_page(sgl);
 	length = sgl->length - PAGE_SIZE + sgl->offset;
-	i=1;
+	i = 1;
 
 	while (length > 0) {
 		pinfo->pages[i] = nth_page(sg_page(sgl), i);
@@ -71,12 +71,12 @@ struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
 		pinfo->pages[i++] = sg_page(sgl);
 		length = sgl->length - PAGE_SIZE;
 		pinfo->nents++;
+		k = 1;
 
 		while (length > 0) {
-			pinfo->pages[i] = nth_page(sg_page(sgl), i);
+			pinfo->pages[i++] = nth_page(sg_page(sgl), k++);
 			length -= PAGE_SIZE;
 			pinfo->nents++;
-			i++;
 		}
 	}
 
@@ -535,7 +535,8 @@ static int hyper_dmabuf_ops_attach(struct dma_buf* dmabuf, struct device* dev,
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 
-	return ret;
+	/* Ignoring ret for now */
+	return 0;
 }
 
 static void hyper_dmabuf_ops_detach(struct dma_buf* dmabuf, struct dma_buf_attachment *attach)
-- 
2.7.4

* [RFC PATCH 05/60] hyper_dmabuf: skip creating a comm ch if exist for the VM
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

hyper_dmabuf_importer_ring_setup now creates a new channel only if
there is no existing downstream communication channel previously
created for the exporter VM. The exporter-side ring setup gets the
same check for its ring to the importer VM.

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c   | 13 +++++++------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c | 20 ++++++++++++++++++++
 2 files changed, 27 insertions(+), 6 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
index 3b40ec0..6b16e37 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
@@ -827,12 +827,11 @@ static const struct dma_buf_ops hyper_dmabuf_ops = {
 int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags)
 {
 	int fd;
-
 	struct dma_buf* dmabuf;
 
-/* call hyper_dmabuf_export_dmabuf and create and bind a handle for it
- * then release */
-
+	/* call hyper_dmabuf_export_dmabuf and create
+	 * and bind a handle for it then release
+	 */
 	dmabuf = hyper_dmabuf_export_dma_buf(dinfo);
 
 	fd = dma_buf_fd(dmabuf, flags);
@@ -845,9 +844,11 @@ struct dma_buf* hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_inf
 	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
 
 	exp_info.ops = &hyper_dmabuf_ops;
-	exp_info.size = dinfo->sgt->nents * PAGE_SIZE; /* multiple of PAGE_SIZE, not considering offset */
+
+	/* multiple of PAGE_SIZE, not considering offset */
+	exp_info.size = dinfo->sgt->nents * PAGE_SIZE;
 	exp_info.flags = /* not sure about flag */0;
 	exp_info.priv = dinfo;
 
 	return dma_buf_export(&exp_info);
-};
+}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 665cada..90e0c65 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -12,6 +12,7 @@
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_query.h"
 #include "xen/hyper_dmabuf_xen_comm.h"
+#include "xen/hyper_dmabuf_xen_comm_list.h"
 #include "hyper_dmabuf_msg.h"
 
 struct hyper_dmabuf_private {
@@ -31,6 +32,7 @@ static uint32_t hyper_dmabuf_id_gen(void) {
 static int hyper_dmabuf_exporter_ring_setup(void *data)
 {
 	struct ioctl_hyper_dmabuf_exporter_ring_setup *ring_attr;
+	struct hyper_dmabuf_ring_info_export *ring_info;
 	int ret = 0;
 
 	if (!data) {
@@ -39,6 +41,15 @@ static int hyper_dmabuf_exporter_ring_setup(void *data)
 	}
 	ring_attr = (struct ioctl_hyper_dmabuf_exporter_ring_setup *)data;
 
+	/* check if the ring ch already exists */
+	ring_info = hyper_dmabuf_find_exporter_ring(ring_attr->remote_domain);
+
+	if (ring_info) {
+		printk("(exporter's) ring ch to domid = %d already exist\ngref = %d, port = %d\n",
+			ring_info->rdomain, ring_info->gref_ring, ring_info->port);
+		return 0;
+	}
+
 	ret = hyper_dmabuf_exporter_ringbuf_init(ring_attr->remote_domain,
 						&ring_attr->ring_refid,
 						&ring_attr->port);
@@ -49,6 +60,7 @@ static int hyper_dmabuf_exporter_ring_setup(void *data)
 static int hyper_dmabuf_importer_ring_setup(void *data)
 {
 	struct ioctl_hyper_dmabuf_importer_ring_setup *setup_imp_ring_attr;
+	struct hyper_dmabuf_ring_info_import *ring_info;
 	int ret = 0;
 
 	if (!data) {
@@ -58,6 +70,14 @@ static int hyper_dmabuf_importer_ring_setup(void *data)
 
 	setup_imp_ring_attr = (struct ioctl_hyper_dmabuf_importer_ring_setup *)data;
 
+	/* check if the ring ch already exist */
+	ring_info = hyper_dmabuf_find_importer_ring(setup_imp_ring_attr->source_domain);
+
+	if (ring_info) {
+		printk("(importer's) ring ch to domid = %d already exist\n", ring_info->sdomain);
+		return 0;
+	}
+
 	/* user need to provide a port number and ref # for the page used as ring buffer */
 	ret = hyper_dmabuf_importer_ringbuf_init(setup_imp_ring_attr->source_domain,
 						 setup_imp_ring_attr->ring_refid,
-- 
2.7.4

* [RFC PATCH 06/60] hyper_dmabuf: map shared pages only once when importing.
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

From: Mateusz Polrola <mateuszx.potrola@intel.com>

If the shared pages of a buffer were already mapped on the importer
side, do not map them again on the next request to export the fd.

Signed-off-by: Mateusz Polrola <mateuszx.potrola@intel.com>
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 90e0c65..af94359 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -203,7 +203,7 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 
 	if (!data) {
 		printk("user data is NULL\n");
-		return -1;
+		return -EINVAL;
 	}
 
 	export_fd_attr = (struct ioctl_hyper_dmabuf_export_fd *)data;
@@ -218,15 +218,17 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 		imported_sgt_info->last_len, imported_sgt_info->nents,
 		HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported_sgt_info->hyper_dmabuf_id));
 
-	imported_sgt_info->sgt = hyper_dmabuf_map_pages(imported_sgt_info->gref,
-						imported_sgt_info->frst_ofst,
-						imported_sgt_info->last_len,
-						imported_sgt_info->nents,
-						HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported_sgt_info->hyper_dmabuf_id),
-						&imported_sgt_info->shared_pages_info);
-
 	if (!imported_sgt_info->sgt) {
-		return -1;
+		imported_sgt_info->sgt = hyper_dmabuf_map_pages(imported_sgt_info->gref,
+							imported_sgt_info->frst_ofst,
+							imported_sgt_info->last_len,
+							imported_sgt_info->nents,
+							HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported_sgt_info->hyper_dmabuf_id),
+							&imported_sgt_info->shared_pages_info);
+		if (!imported_sgt_info->sgt) {
+			printk("Failed to create sgt\n");
+			return -EINVAL;
+		}
 	}
 
 	export_fd_attr->fd = hyper_dmabuf_export_fd(imported_sgt_info, export_fd_attr->flags);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 07/60] hyper_dmabuf: message parsing done via workqueue
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

Use the workqueue mechanism to defer message parsing until after
the ISR has exited, reducing time spent in interrupt context.
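
The handoff follows the usual Linux ISR-to-workqueue pattern: the
interrupt handler copies the request off the shared ring, wraps it
in a work item, and queues it; the command dispatch then runs later
in process context. A condensed sketch of that pattern, with a
stand-in struct ring_rq for hyper_dmabuf_ring_rq and the command
switch elided:

  #include <linux/kernel.h>
  #include <linux/slab.h>
  #include <linux/string.h>
  #include <linux/workqueue.h>

  struct ring_rq {                /* stand-in for hyper_dmabuf_ring_rq */
          unsigned int command;
          int operands[9];
  };

  struct cmd_process {
          struct work_struct work;
          struct ring_rq *rq;
          int domid;
  };

  static void cmd_process_work(struct work_struct *work)
  {
          struct cmd_process *proc =
                  container_of(work, struct cmd_process, work);

          /* parse proc->rq->command here, in process context */

          kfree(proc->rq);
          kfree(proc);
  }

  /* called from the ring ISR: copy the request, defer the parsing */
  static int defer_msg_parse(struct workqueue_struct *wq,
                             int domid, struct ring_rq *req)
  {
          struct ring_rq *copy;
          struct cmd_process *proc;

          copy = kmemdup(req, sizeof(*copy), GFP_ATOMIC);
          if (!copy)
                  return -ENOMEM;

          proc = kzalloc(sizeof(*proc), GFP_ATOMIC);
          if (!proc) {
                  kfree(copy);
                  return -ENOMEM;
          }

          proc->rq = copy;
          proc->domid = domid;
          INIT_WORK(&proc->work, cmd_process_work);
          queue_work(wq, &proc->work);

          return 0;
  }

One caveat worth noting: allocations on the ISR path should use
GFP_ATOMIC, as in the sketch; the patch body below allocates with
GFP_KERNEL and does not check the kmalloc/kcalloc results.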

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        |  13 ++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        |   5 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      |   4 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        | 155 ++++++++++++++-------
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   |  75 ++++------
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h   |   7 -
 6 files changed, 152 insertions(+), 107 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 0698327..70b4878 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -1,5 +1,8 @@
 #include <linux/init.h>       /* module_init, module_exit */
 #include <linux/module.h> /* version info, MODULE_LICENSE, MODULE_AUTHOR, printk() */
+#include <linux/workqueue.h>
+#include <xen/grant_table.h>
+#include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_conf.h"
 #include "hyper_dmabuf_list.h"
 #include "xen/hyper_dmabuf_xen_comm_list.h"
@@ -10,6 +13,8 @@ MODULE_AUTHOR("IOTG-PED, INTEL");
 int register_device(void);
 int unregister_device(void);
 
+struct hyper_dmabuf_private hyper_dmabuf_private;
+
 /*===============================================================================================*/
 static int hyper_dmabuf_drv_init(void)
 {
@@ -24,6 +29,10 @@ static int hyper_dmabuf_drv_init(void)
 
 	printk( KERN_NOTICE "initializing database for imported/exported dmabufs\n");
 
+	/* device structure initialization */
+	/* currently only does work-queue initialization */
+	hyper_dmabuf_private.work_queue = create_workqueue("hyper_dmabuf_wqueue");
+
 	ret = hyper_dmabuf_table_init();
 	if (ret < 0) {
 		return -EINVAL;
@@ -45,6 +54,10 @@ static void hyper_dmabuf_drv_exit(void)
 	hyper_dmabuf_table_destroy();
 	hyper_dmabuf_ring_table_init();
 
+	/* destroy workqueue */
+	if (hyper_dmabuf_private.work_queue)
+		destroy_workqueue(hyper_dmabuf_private.work_queue);
+
 	printk( KERN_NOTICE "dma_buf-src_sink model: Exiting" );
 	unregister_device();
 }
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
index 2dad9a6..6145d29 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -1,6 +1,11 @@
 #ifndef __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
 #define __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
 
+struct hyper_dmabuf_private {
+        struct device *device;
+	struct workqueue_struct *work_queue;
+};
+
 typedef int (*hyper_dmabuf_ioctl_t)(void *data);
 
 struct hyper_dmabuf_ioctl_desc {
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index af94359..e4d8316 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -15,9 +15,7 @@
 #include "xen/hyper_dmabuf_xen_comm_list.h"
 #include "hyper_dmabuf_msg.h"
 
-struct hyper_dmabuf_private {
-	struct device *device;
-} hyper_dmabuf_private;
+extern struct hyper_dmabuf_private hyper_dmabuf_private;
 
 static uint32_t hyper_dmabuf_id_gen(void) {
 	/* TODO: add proper implementation */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index 3237e50..0166e61 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -3,12 +3,23 @@
 #include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/dma-buf.h>
+#include <xen/grant_table.h>
+#include <linux/workqueue.h>
+#include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_imp.h"
 //#include "hyper_dmabuf_remote_sync.h"
 #include "xen/hyper_dmabuf_xen_comm.h"
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_list.h"
 
+extern struct hyper_dmabuf_private hyper_dmabuf_private;
+
+struct cmd_process {
+	struct work_struct work;
+	struct hyper_dmabuf_ring_rq *rq;
+	int domid;
+};
+
 void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
 				        enum hyper_dmabuf_command command, int *operands)
 {
@@ -71,18 +82,17 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
 	}
 }
 
-int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
+void cmd_process_work(struct work_struct *work)
 {
-	uint32_t i, ret;
 	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
-	struct hyper_dmabuf_sgt_info *sgt_info;
-
-	/* make sure req is not NULL (may not be needed) */
-	if (!req) {
-		return -EINVAL;
-	}
+        struct hyper_dmabuf_sgt_info *sgt_info;
+	struct cmd_process *proc = container_of(work, struct cmd_process, work);
+	struct hyper_dmabuf_ring_rq *req;
+	int domid;
+	int i;
 
-	req->status = HYPER_DMABUF_REQ_PROCESSED;
+	req = proc->rq;
+	domid = proc->domid;
 
 	switch (req->command) {
 	case HYPER_DMABUF_EXPORT:
@@ -115,33 +125,6 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
 		hyper_dmabuf_register_imported(imported_sgt_info);
 		break;
 
-	case HYPER_DMABUF_DESTROY:
-		/* destroy sg_list for hyper_dmabuf_id on remote side */
-		/* command : DMABUF_DESTROY,
-		 * operands0 : hyper_dmabuf_id
-		 */
-
-		imported_sgt_info =
-			hyper_dmabuf_find_imported(req->operands[0]);
-
-		if (imported_sgt_info) {
-			hyper_dmabuf_cleanup_imported_pages(imported_sgt_info);
-
-			hyper_dmabuf_remove_imported(req->operands[0]);
-
-			/* TODO: cleanup sgt on importer side etc */
-		}
-
-		/* Notify exporter that buffer is freed and it can cleanup it */
-		req->status = HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP;
-		req->command = HYPER_DMABUF_DESTROY_FINISH;
-
-#if 0 /* function is not implemented yet */
-
-		ret = hyper_dmabuf_destroy_sgt(req->hyper_dmabuf_id);
-#endif
-		break;
-
 	case HYPER_DMABUF_DESTROY_FINISH:
 		/* destroy sg_list for hyper_dmabuf_id on local side */
 		/* command : DMABUF_DESTROY_FINISH,
@@ -180,33 +163,101 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
 		 */
 		break;
 
-	/* requesting the other side to setup another ring channel for reverse direction */
-	case HYPER_DMABUF_EXPORTER_RING_SETUP:
-		/* command: HYPER_DMABUF_EXPORTER_RING_SETUP
-		 * no operands needed */
+	case HYPER_DMABUF_IMPORTER_RING_SETUP:
+		/* command: HYPER_DMABUF_IMPORTER_RING_SETUP */
+		/* no operands needed */
+		hyper_dmabuf_importer_ringbuf_init(domid, req->operands[0], req->operands[1]);
+
+		break;
+
+	default:
+		/* shouldn't get here */
+		/* no matched command, nothing to do.. just return error */
+		break;
+	}
+
+	kfree(req);
+	kfree(proc);
+}
+
+int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
+{
+	struct cmd_process *proc;
+	struct hyper_dmabuf_ring_rq *temp_req;
+	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
+	int ret;
+
+	if (!req) {
+		printk("request is NULL\n");
+		return -EINVAL;
+	}
+
+	if ((req->command < HYPER_DMABUF_EXPORT) ||
+		(req->command > HYPER_DMABUF_IMPORTER_RING_SETUP)) {
+		printk("invalid command\n");
+		return -EINVAL;
+	}
+
+	req->status = HYPER_DMABUF_REQ_PROCESSED;
+
+	/* HYPER_DMABUF_EXPORTER_RING_SETUP requires immediate
+	 * follow up so can't be processed in workqueue
+	 */
+	if (req->command == HYPER_DMABUF_EXPORTER_RING_SETUP) {
 		ret = hyper_dmabuf_exporter_ringbuf_init(domid, &req->operands[0], &req->operands[1]);
 		if (ret < 0) {
 			req->status = HYPER_DMABUF_REQ_ERROR;
-			return -EINVAL;
 		}
 
 		req->status = HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP;
 		req->command = HYPER_DMABUF_IMPORTER_RING_SETUP;
-		break;
 
-	case HYPER_DMABUF_IMPORTER_RING_SETUP:
-		/* command: HYPER_DMABUF_IMPORTER_RING_SETUP */
-		/* no operands needed */
-		ret = hyper_dmabuf_importer_ringbuf_init(domid, req->operands[0], req->operands[1]);
-		if (ret < 0)
-			return -EINVAL;
+		return req->command;
+	}
 
-		break;
+	/* HYPER_DMABUF_DESTROY requires immediate
+	 * follow up so can't be processed in workqueue
+	 */
+	if (req->command == HYPER_DMABUF_DESTROY) {
+		/* destroy sg_list for hyper_dmabuf_id on remote side */
+		/* command : DMABUF_DESTROY,
+		 * operands0 : hyper_dmabuf_id
+		 */
+		imported_sgt_info =
+			hyper_dmabuf_find_imported(req->operands[0]);
 
-	default:
-		/* no matched command, nothing to do.. just return error */
-		return -EINVAL;
+		if (imported_sgt_info) {
+			hyper_dmabuf_cleanup_imported_pages(imported_sgt_info);
+
+			hyper_dmabuf_remove_imported(req->operands[0]);
+
+			/* TODO: cleanup sgt on importer side etc */
+		}
+
+		/* Notify exporter that buffer is freed and it can cleanup it */
+		req->status = HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP;
+		req->command = HYPER_DMABUF_DESTROY_FINISH;
+
+#if 0 /* function is not implemented yet */
+
+		ret = hyper_dmabuf_destroy_sgt(req->hyper_dmabuf_id);
+#endif
+		return req->command;
 	}
 
+	temp_req = (struct hyper_dmabuf_ring_rq *)kmalloc(sizeof(*temp_req), GFP_KERNEL);
+
+	memcpy(temp_req, req, sizeof(*temp_req));
+
+	proc = (struct cmd_process *) kcalloc(1, sizeof(struct cmd_process),
+						GFP_KERNEL);
+
+	proc->rq = temp_req;
+	proc->domid = domid;
+
+	INIT_WORK(&(proc->work), cmd_process_work);
+
+	queue_work(hyper_dmabuf_private.work_queue, &(proc->work));
+
 	return req->command;
 }
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index 22f2ef0..05855ba1 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -14,7 +14,6 @@
 #include "../hyper_dmabuf_msg.h"
 
 static int export_req_id = 0;
-static int import_req_id = 0;
 
 int32_t hyper_dmabuf_get_domid(void)
 {
@@ -37,12 +36,6 @@ int hyper_dmabuf_next_req_id_export(void)
         return export_req_id;
 }
 
-int hyper_dmabuf_next_req_id_import(void)
-{
-        import_req_id++;
-        return import_req_id;
-}
-
 /* For now cache latast rings as global variables TODO: keep them in list*/
 static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *dev_id);
 static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *dev_id);
@@ -81,7 +74,8 @@ int hyper_dmabuf_exporter_ringbuf_init(int rdomain, grant_ref_t *refid, int *por
 
 	alloc_unbound.dom = DOMID_SELF;
 	alloc_unbound.remote_dom = rdomain;
-	ret = HYPERVISOR_event_channel_op(EVTCHNOP_alloc_unbound, &alloc_unbound);
+	ret = HYPERVISOR_event_channel_op(EVTCHNOP_alloc_unbound,
+					&alloc_unbound);
 	if (ret != 0) {
 		printk("Cannot allocate event channel\n");
 		return -EINVAL;
@@ -96,7 +90,8 @@ int hyper_dmabuf_exporter_ringbuf_init(int rdomain, grant_ref_t *refid, int *por
 		printk("Failed to setup event channel\n");
 		close.port = alloc_unbound.port;
 		HYPERVISOR_event_channel_op(EVTCHNOP_close, &close);
-		gnttab_end_foreign_access(ring_info->gref_ring, 0, virt_to_mfn(shared_ring));
+		gnttab_end_foreign_access(ring_info->gref_ring, 0,
+					virt_to_mfn(shared_ring));
 		return -EINVAL;
 	}
 
@@ -108,7 +103,8 @@ int hyper_dmabuf_exporter_ringbuf_init(int rdomain, grant_ref_t *refid, int *por
 	*refid = ring_info->gref_ring;
 	*port = ring_info->port;
 
-	printk("%s: allocated eventchannel gref %d  port: %d  irq: %d\n", __func__,
+	printk("%s: allocated eventchannel gref %d  port: %d  irq: %d\n",
+		__func__,
 		ring_info->gref_ring,
 		ring_info->port,
 		ring_info->irq);
@@ -162,8 +158,9 @@ int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port)
 
 	BACK_RING_INIT(&ring_info->ring_back, sring, PAGE_SIZE);
 
-	ret = bind_interdomain_evtchn_to_irqhandler(sdomain, port, hyper_dmabuf_back_ring_isr, 0,
-						    NULL, (void*)ring_info);
+	ret = bind_interdomain_evtchn_to_irqhandler(sdomain, port,
+						hyper_dmabuf_back_ring_isr, 0,
+						NULL, (void*)ring_info);
 	if (ret < 0) {
 		return -EINVAL;
 	}
@@ -216,26 +213,20 @@ int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req)
 	return 0;
 }
 
-/* called by interrupt (WORKQUEUE) */
-int hyper_dmabuf_send_response(struct hyper_dmabuf_ring_rp* response, int domain)
-{
-	/* as a importer and as a exporter */
-	return 0;
-}
-
 /* ISR for request from exporter (as an importer) */
-static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *dev_id)
+static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *info)
 {
 	RING_IDX rc, rp;
-	struct hyper_dmabuf_ring_rq request;
-	struct hyper_dmabuf_ring_rp response;
+	struct hyper_dmabuf_ring_rq req;
+	struct hyper_dmabuf_ring_rp resp;
+
 	int notify, more_to_do;
 	int ret;
-//	struct hyper_dmabuf_work *work;
 
-	struct hyper_dmabuf_ring_info_import *ring_info = (struct hyper_dmabuf_ring_info_import *)dev_id;
+	struct hyper_dmabuf_ring_info_import *ring_info;
 	struct hyper_dmabuf_back_ring *ring;
 
+	ring_info = (struct hyper_dmabuf_ring_info_import *)info;
 	ring = &ring_info->ring_back;
 
 	do {
@@ -246,22 +237,16 @@ static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *dev_id)
 			if (RING_REQUEST_CONS_OVERFLOW(ring, rc))
 				break;
 
-			memcpy(&request, RING_GET_REQUEST(ring, rc), sizeof(request));
+			memcpy(&req, RING_GET_REQUEST(ring, rc), sizeof(req));
 			printk("Got request\n");
 			ring->req_cons = ++rc;
 
-			/* TODO: probably using linked list for multiple requests then let
-			 * a task in a workqueue to process those is better idea becuase
-			 * we do not want to stay in ISR for long.
-			 */
-			ret = hyper_dmabuf_msg_parse(ring_info->sdomain, &request);
+			ret = hyper_dmabuf_msg_parse(ring_info->sdomain, &req);
 
 			if (ret > 0) {
-				/* build response */
-				memcpy(&response, &request, sizeof(response));
-
-				/* we sent back modified request as a response.. we might just need to have request only..*/
-				memcpy(RING_GET_RESPONSE(ring, ring->rsp_prod_pvt), &response, sizeof(response));
+				memcpy(&resp, &req, sizeof(resp));
+				memcpy(RING_GET_RESPONSE(ring, ring->rsp_prod_pvt), &resp,
+							sizeof(resp));
 				ring->rsp_prod_pvt++;
 
 				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(ring, notify);
@@ -281,15 +266,17 @@ static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *dev_id)
 }
 
 /* ISR for responses from importer */
-static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *dev_id)
+static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *info)
 {
 	/* front ring only care about response from back */
-	struct hyper_dmabuf_ring_rp *response;
+	struct hyper_dmabuf_ring_rp *resp;
 	RING_IDX i, rp;
 	int more_to_do, ret;
 
-	struct hyper_dmabuf_ring_info_export *ring_info = (struct hyper_dmabuf_ring_info_export *)dev_id;
+	struct hyper_dmabuf_ring_info_export *ring_info;
 	struct hyper_dmabuf_front_ring *ring;
+
+	ring_info = (struct hyper_dmabuf_ring_info_export *)info;
 	ring = &ring_info->ring_front;
 
 	do {
@@ -298,20 +285,18 @@ static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *dev_id)
 		for (i = ring->rsp_cons; i != rp; i++) {
 			unsigned long id;
 
-			response = RING_GET_RESPONSE(ring, i);
-			id = response->response_id;
+			resp = RING_GET_RESPONSE(ring, i);
+			id = resp->response_id;
 
-			if (response->status == HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP) {
+			if (resp->status == HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP) {
 				/* parsing response */
-				ret = hyper_dmabuf_msg_parse(ring_info->rdomain, (struct hyper_dmabuf_ring_rq*)response);
+				ret = hyper_dmabuf_msg_parse(ring_info->rdomain,
+							(struct hyper_dmabuf_ring_rq *)resp);
 
 				if (ret < 0) {
 					printk("getting error while parsing response\n");
 				}
-			} else if (response->status == HYPER_DMABUF_REQ_ERROR) {
-				printk("remote domain %d couldn't process request %d\n", ring_info->rdomain, response->command);
 			}
-
 		}
 
 		ring->rsp_cons = i;
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
index 2754917..4ad0529 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
@@ -36,17 +36,10 @@ struct hyper_dmabuf_ring_info_import {
         struct hyper_dmabuf_back_ring ring_back;
 };
 
-//struct hyper_dmabuf_work {
-//	hyper_dmabuf_ring_rq requrest;
-//	struct work_struct msg_parse;
-//};
-
 int32_t hyper_dmabuf_get_domid(void);
 
 int hyper_dmabuf_next_req_id_export(void);
 
-int hyper_dmabuf_next_req_id_import(void);
-
 /* exporter needs to generated info for page sharing */
 int hyper_dmabuf_exporter_ringbuf_init(int rdomain, grant_ref_t *gref, int *port);
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 08/60] hyper_dmabuf: automatic comm channel initialization using xenstore
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

From: Mateusz Polrola <mateuszx.potrola@intel.com>

This introduces the use of xenstore for creating and managing
communication channels between two VMs in the system.

When the hyper_dmabuf driver is loaded in the service VM (host OS),
a new xenstore directory, "/local/domain/<domid>/data/hyper_dmabuf",
is created in the xenstore filesystem. Whenever a guest OS creates
and initializes its own upstream channel to the service VM, a new
directory is created under the main directory created above, as
shown here:

/local/domain/<domid>/data/hyper_dmabuf/<remote domid>/port
/local/domain/<domid>/data/hyper_dmabuf/<remote domid>/gref

This patch also adds a "xenstore watch" callback that is called
when a new upstream connection is made from another VM (VM-b).
Upon detection, this VM (VM-a) initializes a downstream channel
paired with the detected upstream connection, as shown below:

VM-a (downstream) <----- (upstream) VM-b

As soon as this downstream channel is created, a new upstream
channel from VM-a to VM-b is automatically created and initialized
via the "xenstore watch" callback on VM-b:

VM-a (upstream) -----> (downstream) VM-b

As a result, a bi-directional communication channel is available
between the two VMs.

When an upstream channel is removed (e.g. when the driver is
unloaded), the VM on the other side is notified and its "xenstore
watch" callback is invoked. Via this callback, the VM can remove
the corresponding downstream channel.
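
For reference, publishing a channel under the layout above uses the
stock xenbus helpers. A minimal sketch, assuming a hypothetical
publish_channel() wrapper (xenbus_mkdir/xenbus_printf and XBT_NIL
are the standard kernel API; error handling is trimmed to the
essentials):

  #include <linux/kernel.h>
  #include <xen/xenbus.h>
  #include <xen/grant_table.h>

  /* hypothetical helper: advertise a new upstream channel in
   * xenstore so the remote domain's watch fires and it can set
   * up the paired downstream channel
   */
  static int publish_channel(int domid, int rdomid,
                             grant_ref_t gref, int port)
  {
          char dir[64], node[16], path[80];
          int ret;

          snprintf(dir, sizeof(dir),
                   "/local/domain/%d/data/hyper_dmabuf", domid);
          snprintf(node, sizeof(node), "%d", rdomid);

          ret = xenbus_mkdir(XBT_NIL, dir, node);
          if (ret)
                  return ret;

          snprintf(path, sizeof(path), "%s/%s", dir, node);

          ret = xenbus_printf(XBT_NIL, path, "port", "%d", port);
          if (ret)
                  return ret;

          return xenbus_printf(XBT_NIL, path, "gref", "%u", gref);
  }

The remote side registers a xenstore watch on its own
".../data/hyper_dmabuf" subtree, so its callback fires whenever a
guest writes these nodes.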

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        |  11 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        |  14 --
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      |  30 +--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        |  31 +--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h        |   2 -
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   | 226 +++++++++++++++++++--
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h   |  18 +-
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c  |  22 ++
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h  |   6 +
 9 files changed, 270 insertions(+), 90 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 70b4878..5b5dae44 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -6,6 +6,7 @@
 #include "hyper_dmabuf_conf.h"
 #include "hyper_dmabuf_list.h"
 #include "xen/hyper_dmabuf_xen_comm_list.h"
+#include "xen/hyper_dmabuf_xen_comm.h"
 
 MODULE_LICENSE("Dual BSD/GPL");
 MODULE_AUTHOR("IOTG-PED, INTEL");
@@ -43,6 +44,11 @@ static int hyper_dmabuf_drv_init(void)
 		return -EINVAL;
 	}
 
+	ret = hyper_dmabuf_setup_data_dir();
+	if (ret < 0) {
+		return -EINVAL;
+	}
+
 	/* interrupt for comm should be registered here: */
 	return ret;
 }
@@ -52,12 +58,15 @@ static void hyper_dmabuf_drv_exit(void)
 {
 	/* hash tables for export/import entries and ring_infos */
 	hyper_dmabuf_table_destroy();
-	hyper_dmabuf_ring_table_init();
+
+	hyper_dmabuf_cleanup_ringbufs();
+	hyper_dmabuf_ring_table_destroy();
 
 	/* destroy workqueue */
 	if (hyper_dmabuf_private.work_queue)
 		destroy_workqueue(hyper_dmabuf_private.work_queue);
 
+	hyper_dmabuf_destroy_data_dir();
 	printk( KERN_NOTICE "dma_buf-src_sink model: Exiting" );
 	unregister_device();
 }
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
index 6145d29..7511afb 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -29,8 +29,6 @@ struct ioctl_hyper_dmabuf_exporter_ring_setup {
 	/* IN parameters */
 	/* Remote domain id */
 	uint32_t remote_domain;
-	grant_ref_t ring_refid; /* assigned by driver, copied to userspace after initialization */
-	uint32_t port; /* assigned by driver, copied to userspace after initialization */
 };
 
 #define IOCTL_HYPER_DMABUF_IMPORTER_RING_SETUP \
@@ -39,10 +37,6 @@ struct ioctl_hyper_dmabuf_importer_ring_setup {
 	/* IN parameters */
 	/* Source domain id */
 	uint32_t source_domain;
-	/* Ring shared page refid */
-	grant_ref_t ring_refid;
-	/* Port number */
-	uint32_t port;
 };
 
 #define IOCTL_HYPER_DMABUF_EXPORT_REMOTE \
@@ -95,12 +89,4 @@ struct ioctl_hyper_dmabuf_query {
 	uint32_t info;
 };
 
-#define IOCTL_HYPER_DMABUF_REMOTE_EXPORTER_RING_SETUP \
-_IOC(_IOC_NONE, 'G', 6, sizeof(struct ioctl_hyper_dmabuf_remote_exporter_ring_setup))
-struct ioctl_hyper_dmabuf_remote_exporter_ring_setup {
-	/* in parameters */
-	uint32_t rdomain; /* id of remote domain where exporter's ring need to be setup */
-	uint32_t info;
-};
-
 #endif //__LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index e4d8316..44a153b 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -48,9 +48,7 @@ static int hyper_dmabuf_exporter_ring_setup(void *data)
 		return 0;
 	}
 
-	ret = hyper_dmabuf_exporter_ringbuf_init(ring_attr->remote_domain,
-						&ring_attr->ring_refid,
-						&ring_attr->port);
+	ret = hyper_dmabuf_exporter_ringbuf_init(ring_attr->remote_domain);
 
 	return ret;
 }
@@ -76,10 +74,7 @@ static int hyper_dmabuf_importer_ring_setup(void *data)
 		return 0;
 	}
 
-	/* user need to provide a port number and ref # for the page used as ring buffer */
-	ret = hyper_dmabuf_importer_ringbuf_init(setup_imp_ring_attr->source_domain,
-						 setup_imp_ring_attr->ring_refid,
-						 setup_imp_ring_attr->port);
+	ret = hyper_dmabuf_importer_ringbuf_init(setup_imp_ring_attr->source_domain);
 
 	return ret;
 }
@@ -355,26 +350,6 @@ static int hyper_dmabuf_query(void *data)
 	return ret;
 }
 
-static int hyper_dmabuf_remote_exporter_ring_setup(void *data)
-{
-	struct ioctl_hyper_dmabuf_remote_exporter_ring_setup *remote_exporter_ring_setup;
-	struct hyper_dmabuf_ring_rq *req;
-
-	remote_exporter_ring_setup = (struct ioctl_hyper_dmabuf_remote_exporter_ring_setup *)data;
-
-	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
-	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORTER_RING_SETUP, NULL);
-
-	/* requesting remote domain to set-up exporter's ring */
-	if(hyper_dmabuf_send_request(remote_exporter_ring_setup->rdomain, req) < 0) {
-		kfree(req);
-		return -EINVAL;
-	}
-
-	kfree(req);
-	return 0;
-}
-
 static const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORTER_RING_SETUP, hyper_dmabuf_exporter_ring_setup, 0),
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_IMPORTER_RING_SETUP, hyper_dmabuf_importer_ring_setup, 0),
@@ -382,7 +357,6 @@ static const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_FD, hyper_dmabuf_export_fd_ioctl, 0),
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_DESTROY, hyper_dmabuf_destroy, 0),
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_QUERY, hyper_dmabuf_query, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_REMOTE_EXPORTER_RING_SETUP, hyper_dmabuf_remote_exporter_ring_setup, 0),
 };
 
 static long hyper_dmabuf_ioctl(struct file *filp,
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index 0166e61..8a059c8 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -70,12 +70,6 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
 			request->operands[i] = operands[i];
 		break;
 
-	/* requesting the other side to setup another ring channel for reverse direction */
-	case HYPER_DMABUF_EXPORTER_RING_SETUP:
-		/* command : HYPER_DMABUF_EXPORTER_RING_SETUP */
-		/* no operands needed */
-		break;
-
 	default:
 		/* no command found */
 		return;
@@ -163,13 +157,6 @@ void cmd_process_work(struct work_struct *work)
 		 */
 		break;
 
-	case HYPER_DMABUF_IMPORTER_RING_SETUP:
-		/* command: HYPER_DMABUF_IMPORTER_RING_SETUP */
-		/* no operands needed */
-		hyper_dmabuf_importer_ringbuf_init(domid, req->operands[0], req->operands[1]);
-
-		break;
-
 	default:
 		/* shouldn't get here */
 		/* no matched command, nothing to do.. just return error */
@@ -185,7 +172,6 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
 	struct cmd_process *proc;
 	struct hyper_dmabuf_ring_rq *temp_req;
 	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
-	int ret;
 
 	if (!req) {
 		printk("request is NULL\n");
@@ -193,28 +179,13 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
 	}
 
 	if ((req->command < HYPER_DMABUF_EXPORT) ||
-		(req->command > HYPER_DMABUF_IMPORTER_RING_SETUP)) {
+		(req->command > HYPER_DMABUF_OPS_TO_SOURCE)) {
 		printk("invalid command\n");
 		return -EINVAL;
 	}
 
 	req->status = HYPER_DMABUF_REQ_PROCESSED;
 
-	/* HYPER_DMABUF_EXPORTER_RING_SETUP requires immediate
-	 * follow up so can't be processed in workqueue
-	 */
-	if (req->command == HYPER_DMABUF_EXPORTER_RING_SETUP) {
-		ret = hyper_dmabuf_exporter_ringbuf_init(domid, &req->operands[0], &req->operands[1]);
-		if (ret < 0) {
-			req->status = HYPER_DMABUF_REQ_ERROR;
-		}
-
-		req->status = HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP;
-		req->command = HYPER_DMABUF_IMPORTER_RING_SETUP;
-
-		return req->command;
-	}
-
 	/* HYPER_DMABUF_DESTROY requires immediate
 	 * follow up so can't be processed in workqueue
 	 */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
index 44bfb70..9b25bdb 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -7,8 +7,6 @@ enum hyper_dmabuf_command {
 	HYPER_DMABUF_DESTROY_FINISH,
 	HYPER_DMABUF_OPS_TO_REMOTE,
 	HYPER_DMABUF_OPS_TO_SOURCE,
-	HYPER_DMABUF_EXPORTER_RING_SETUP, /* requesting remote domain to set up exporter's ring */
-	HYPER_DMABUF_IMPORTER_RING_SETUP, /* requesting remote domain to set up importer's ring */
 };
 
 enum hyper_dmabuf_ops {
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index 05855ba1..5db58b0 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -15,6 +15,83 @@
 
 static int export_req_id = 0;
 
+/* Creates an entry in xenstore that will keep details of all exporter rings created by this domain */
+int32_t hyper_dmabuf_setup_data_dir()
+{
+	char buf[255];
+
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf", hyper_dmabuf_get_domid());
+	return xenbus_mkdir(XBT_NIL, buf, "");
+}
+
+
+/* Removes the entry with exporter ring details from xenstore.
+ * Other domains that have connected to any of the exporter rings created by
+ * this domain will be notified about the removal of this entry and will treat
+ * that as a signal to clean up the importer rings created for this domain.
+ */
+int32_t hyper_dmabuf_destroy_data_dir()
+{
+	char buf[255];
+
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf", hyper_dmabuf_get_domid());
+	return xenbus_rm(XBT_NIL, buf, "");
+}
+
+/*
+ * Adds xenstore entries with details of the exporter ring created for the given
+ * remote domain. It requires a special daemon running in dom0 to make sure that
+ * the given remote domain has the right permissions to access that data.
+ */
+static int32_t hyper_dmabuf_expose_ring_details(uint32_t domid, uint32_t rdomid, uint32_t grefid, uint32_t port)
+{
+	char buf[255];
+	int ret;
+
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d", domid, rdomid);
+	ret = xenbus_printf(XBT_NIL, buf, "grefid", "%d", grefid);
+
+	if (ret) {
+		printk("Failed to write xenbus entry %s: %d\n", buf, ret);
+		return ret;
+	}
+
+	ret = xenbus_printf(XBT_NIL, buf, "port", "%d", port);
+
+	if (ret) {
+		printk("Failed to write xenbus entry %s: %d\n", buf, ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+/*
+ * Queries details of the ring exposed by the remote domain.
+ */
+static int32_t hyper_dmabuf_get_ring_details(uint32_t domid, uint32_t rdomid, uint32_t *grefid, uint32_t *port)
+{
+	char buf[255];
+	int ret;
+
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d", rdomid, domid);
+	ret = xenbus_scanf(XBT_NIL, buf, "grefid", "%d", grefid);
+
+	if (ret <= 0) {
+		printk("Failed to read xenbus entry %s: %d\n", buf, ret);
+		return ret;
+	}
+
+	ret = xenbus_scanf(XBT_NIL, buf, "port", "%d", port);
+
+	if (ret <= 0) {
+		printk("Failed to read xenbus entry %s: %d\n", buf, ret);
+		return ret;
+	}
+
+	return (ret <= 0 ? 1 : 0);
+}
+
 int32_t hyper_dmabuf_get_domid(void)
 {
 	struct xenbus_transaction xbt;
@@ -40,8 +117,49 @@ int hyper_dmabuf_next_req_id_export(void)
 static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *dev_id);
 static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *dev_id);
 
+/*
+ * Callback function called on any change of the xenbus path being watched.
+ * Used for detecting creation/destruction of the remote domain's exporter ring.
+ * When the remote ring is detected, an importer ring is created on this domain;
+ * when its destruction is detected, this domain's importer ring is cleaned up.
+ * Destruction can be caused by module unload, crash, or forced shutdown of the remote domain.
+ */
+static void remote_domain_exporter_watch_cb(struct xenbus_watch *watch,
+				   const char *path, const char *token)
+{
+	int rdom, ret;
+	uint32_t grefid, port;
+	struct hyper_dmabuf_ring_info_import *ring_info;
+
+	/* Check which domain has changed its exporter rings */
+	ret = sscanf(watch->node, "/local/domain/%d/", &rdom);
+	if (ret <= 0) {
+		return;
+	}
+
+	/* Check if we have already created an importer ring for the given remote domain */
+	ring_info = hyper_dmabuf_find_importer_ring(rdom);
+
+	/*
+	 * Try to query the remote domain's exporter ring details. If that fails
+	 * and we have an importer ring, it means the remote domain has cleaned
+	 * up its exporter ring, so our importer ring is no longer useful.
+	 * If querying the details succeeds and we don't have an importer ring,
+	 * it means the remote domain has set one up for us and we should connect to it.
+	 */
+	ret = hyper_dmabuf_get_ring_details(hyper_dmabuf_get_domid(), rdom, &grefid, &port);
+
+	if (ring_info && ret != 0) {
+		printk("Remote exporter closed, cleaninup importer\n");
+		hyper_dmabuf_importer_ringbuf_cleanup(rdom);
+	} else if (!ring_info && ret == 0) {
+		printk("Registering importer\n");
+		hyper_dmabuf_importer_ringbuf_init(rdom);
+	}
+}
+
 /* exporter needs to generate info for page sharing */
-int hyper_dmabuf_exporter_ringbuf_init(int rdomain, grant_ref_t *refid, int *port)
+int hyper_dmabuf_exporter_ringbuf_init(int rdomain)
 {
 	struct hyper_dmabuf_ring_info_export *ring_info;
 	struct hyper_dmabuf_sring *sring;
@@ -99,24 +217,58 @@ int hyper_dmabuf_exporter_ringbuf_init(int rdomain, grant_ref_t *refid, int *por
 	ring_info->irq = ret;
 	ring_info->port = alloc_unbound.port;
 
-	/* store refid and port numbers for userspace's use */
-	*refid = ring_info->gref_ring;
-	*port = ring_info->port;
-
 	printk("%s: allocated eventchannel gref %d  port: %d  irq: %d\n",
 		__func__,
 		ring_info->gref_ring,
 		ring_info->port,
 		ring_info->irq);
 
-	/* register ring info */
 	ret = hyper_dmabuf_register_exporter_ring(ring_info);
 
+	ret = hyper_dmabuf_expose_ring_details(hyper_dmabuf_get_domid(), rdomain,
+                                               ring_info->gref_ring, ring_info->port);
+
+	/*
+	 * Register a watch for the remote domain's exporter ring. When the remote
+	 * domain sets up its exporter ring, we will automatically connect our importer ring to it.
+	 */
+	ring_info->watch.callback = remote_domain_exporter_watch_cb;
+	ring_info->watch.node = (const char*) kmalloc(sizeof(char) * 255, GFP_KERNEL);
+	sprintf((char*)ring_info->watch.node, "/local/domain/%d/data/hyper_dmabuf/%d/port", rdomain, hyper_dmabuf_get_domid());
+	register_xenbus_watch(&ring_info->watch);
+
 	return ret;
 }
 
+/* cleans up exporter ring created for given remote domain */
+void hyper_dmabuf_exporter_ringbuf_cleanup(int rdomain)
+{
+	struct hyper_dmabuf_ring_info_export *ring_info;
+
+	/* check if we have an exporter ring for the given rdomain at all */
+	ring_info = hyper_dmabuf_find_exporter_ring(rdomain);
+
+	if (!ring_info) {
+		return;
+	}
+
+	hyper_dmabuf_remove_exporter_ring(rdomain);
+
+	unregister_xenbus_watch(&ring_info->watch);
+	kfree(ring_info->watch.node);
+
+	/* No need to close the communication channel; unbind_from_irqhandler() takes care of that */
+	unbind_from_irqhandler(ring_info->irq,	(void*) ring_info);
+
+	/* No need to free the sring page; gnttab_end_foreign_access() frees it once the other side ends its access */
+	gnttab_end_foreign_access(ring_info->gref_ring, 0,
+				  (unsigned long) ring_info->ring_front.sring);
+
+	kfree(ring_info);
+}
+
 /* importer needs to know about shared page and port numbers for ring buffer and event channel */
-int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port)
+int hyper_dmabuf_importer_ringbuf_init(int sdomain)
 {
 	struct hyper_dmabuf_ring_info_import *ring_info;
 	struct hyper_dmabuf_sring *sring;
@@ -124,24 +276,33 @@ int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port)
 	struct page *shared_ring;
 
 	struct gnttab_map_grant_ref *ops;
-	struct gnttab_unmap_grant_ref *unmap_ops;
 	int ret;
+	uint32_t importer_gref, importer_port;
+
+	ret = hyper_dmabuf_get_ring_details(hyper_dmabuf_get_domid(), sdomain,
+					    &importer_gref, &importer_port);
+
+	if (ret) {
+		printk("Domain %d has not created exporter ring for current domain\n", sdomain);
+		return ret;
+	}
 
 	ring_info = (struct hyper_dmabuf_ring_info_import *)
 			kmalloc(sizeof(*ring_info), GFP_KERNEL);
 
 	ring_info->sdomain = sdomain;
-	ring_info->evtchn = port;
+	ring_info->evtchn = importer_port;
 
 	ops = (struct gnttab_map_grant_ref*)kmalloc(sizeof(*ops), GFP_KERNEL);
-	unmap_ops = (struct gnttab_unmap_grant_ref*)kmalloc(sizeof(*unmap_ops), GFP_KERNEL);
 
 	if (gnttab_alloc_pages(1, &shared_ring)) {
 		return -EINVAL;
 	}
 
 	gnttab_set_map_op(&ops[0], (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
-			GNTMAP_host_map, gref, sdomain);
+			GNTMAP_host_map, importer_gref, sdomain);
+	gnttab_set_unmap_op(&ring_info->unmap_op, (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
+			GNTMAP_host_map, -1);
 
 	ret = gnttab_map_refs(ops, NULL, &shared_ring, 1);
 	if (ret < 0) {
@@ -152,13 +313,15 @@ int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port)
 	if (ops[0].status) {
 		printk("Ring mapping failed\n");
 		return -EINVAL;
+	} else {
+		ring_info->unmap_op.handle = ops[0].handle;
 	}
 
 	sring = (struct hyper_dmabuf_sring*) pfn_to_kaddr(page_to_pfn(shared_ring));
 
 	BACK_RING_INIT(&ring_info->ring_back, sring, PAGE_SIZE);
 
-	ret = bind_interdomain_evtchn_to_irqhandler(sdomain, port,
+	ret = bind_interdomain_evtchn_to_irqhandler(sdomain, importer_port,
 						hyper_dmabuf_back_ring_isr, 0,
 						NULL, (void*)ring_info);
 	if (ret < 0) {
@@ -168,14 +331,51 @@ int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port)
 	ring_info->irq = ret;
 
 	printk("%s: bound to eventchannel port: %d  irq: %d\n", __func__,
-		port,
+		importer_port,
 		ring_info->irq);
 
 	ret = hyper_dmabuf_register_importer_ring(ring_info);
 
+	/* Set up communication channel in the opposite direction */
+	if (!hyper_dmabuf_find_exporter_ring(sdomain)) {
+		ret = hyper_dmabuf_exporter_ringbuf_init(sdomain);
+	}
+
 	return ret;
 }
 
+/* cleans up importer ring created for given source domain */
+void hyper_dmabuf_importer_ringbuf_cleanup(int sdomain)
+{
+	struct hyper_dmabuf_ring_info_import *ring_info;
+	struct page *shared_ring;
+
+	/* check if we have importer ring created for given sdomain */
+	ring_info = hyper_dmabuf_find_importer_ring(sdomain);
+
+	if (!ring_info)
+		return;
+
+	hyper_dmabuf_remove_importer_ring(sdomain);
+
+	/* no need to close the event channel; unbind_from_irqhandler() takes care of that */
+	unbind_from_irqhandler(ring_info->irq,	(void*) ring_info);
+
+	/* unmapping shared ring page */
+	shared_ring = virt_to_page(ring_info->ring_back.sring);
+	gnttab_unmap_refs(&ring_info->unmap_op, NULL, &shared_ring, 1);
+	gnttab_free_pages(1, &shared_ring);
+
+	kfree(ring_info);
+}
+
+/* cleans up all exporter/importer rings */
+void hyper_dmabuf_cleanup_ringbufs(void)
+{
+	hyper_dmabuf_foreach_exporter_ring(hyper_dmabuf_exporter_ringbuf_cleanup);
+	hyper_dmabuf_foreach_importer_ring(hyper_dmabuf_importer_ringbuf_cleanup);
+}
+
 int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req)
 {
 	struct hyper_dmabuf_front_ring *ring;
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
index 4ad0529..a4819ca 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
@@ -2,6 +2,7 @@
 #define __HYPER_DMABUF_XEN_COMM_H__
 
 #include "xen/interface/io/ring.h"
+#include "xen/xenbus.h"
 
 #define MAX_NUMBER_OF_OPERANDS 9
 
@@ -27,6 +28,7 @@ struct hyper_dmabuf_ring_info_export {
         int gref_ring;
         int irq;
         int port;
+	struct xenbus_watch watch;
 };
 
 struct hyper_dmabuf_ring_info_import {
@@ -34,17 +36,29 @@ struct hyper_dmabuf_ring_info_import {
         int irq;
         int evtchn;
         struct hyper_dmabuf_back_ring ring_back;
+	struct gnttab_unmap_grant_ref unmap_op;
 };
 
 int32_t hyper_dmabuf_get_domid(void);
+int32_t hyper_dmabuf_setup_data_dir(void);
+int32_t hyper_dmabuf_destroy_data_dir(void);
 
 int hyper_dmabuf_next_req_id_export(void);
 
 /* exporter needs to generate info for page sharing */
-int hyper_dmabuf_exporter_ringbuf_init(int rdomain, grant_ref_t *gref, int *port);
+int hyper_dmabuf_exporter_ringbuf_init(int rdomain);
 
 /* importer needs to know about shared page and port numbers for ring buffer and event channel */
-int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port);
+int hyper_dmabuf_importer_ringbuf_init(int sdomain);
+
+/* cleans up exporter ring created for given domain */
+void hyper_dmabuf_exporter_ringbuf_cleanup(int rdomain);
+
+/* cleans up importer ring created for given domain */
+void hyper_dmabuf_importer_ringbuf_cleanup(int sdomain);
+
+/* cleans up all exporter/importer rings */
+void hyper_dmabuf_cleanup_ringbufs(void);
 
 /* send request to the remote domain */
 int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req);
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
index 15c9d29..5778468 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
@@ -104,3 +104,25 @@ int hyper_dmabuf_remove_importer_ring(int domid)
 
 	return -1;
 }
+
+void hyper_dmabuf_foreach_exporter_ring(void (*func)(int rdom))
+{
+	struct hyper_dmabuf_exporter_ring_info *info_entry;
+	struct hlist_node *tmp;
+	int bkt;
+
+	hash_for_each_safe(hyper_dmabuf_hash_exporter_ring, bkt, tmp, info_entry, node) {
+		func(info_entry->info->rdomain);
+	}
+}
+
+void hyper_dmabuf_foreach_importer_ring(void (*func)(int sdom))
+{
+	struct hyper_dmabuf_importer_ring_info *info_entry;
+	struct hlist_node *tmp;
+	int bkt;
+
+	hash_for_each_safe(hyper_dmabuf_hash_importer_ring, bkt, tmp, info_entry, node) {
+		func(info_entry->info->sdomain);
+	}
+}
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
index 5929f99..fd1958c 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
@@ -32,4 +32,10 @@ int hyper_dmabuf_remove_exporter_ring(int domid);
 
 int hyper_dmabuf_remove_importer_ring(int domid);
 
+/* iterates over all exporter rings and calls provided function for each of them */
+void hyper_dmabuf_foreach_exporter_ring(void (*func)(int rdom));
+
+/* iterates over all importer rings and calls provided function for each of them */
+void hyper_dmabuf_foreach_importer_ring(void (*func)(int sdom));
+
 #endif // __HYPER_DMABUF_XEN_COMM_LIST_H__
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 08/60] hyper_dmabuf: automatic comm channel initialization using xenstore
@ 2017-12-19 19:29   ` Dongwon Kim
  0 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: xen-devel, mateuszx.potrola, dri-devel, dongwon.kim

From: Mateusz Polrola <mateuszx.potrola@intel.com>

This introduces use of xenstore for creating and managing
communication channels between two VMs in the system.

When hyper_dmabuf driver is loaded in the service VM (host OS),
a new xenstore directory, "/local/domain/<domid>/data/hyper_dmabuf"
is created in xenstore filesystem. Whenever a new guest OS
creates and initailizes its own upstream channel the service VM,
new directory is created under the main directory created above
as shown here:

/local/domain/<domid>/data/hyper_dmabuf/<remote domid>/port
/local/domain/<domid>/data/hyper_dmabuf/<remote domid>/gref

This patch also adds a "xenstore watch" callback is called
when a new upstream connection is made from another VM (VM-b).
Upon detection, this VM (VM-a) intializes a downstream channel
,paired with detected upstream connection as shown below.

VM-a (downstream) <----- (upstream) VM-a

And as soon as this downstream channel is created, a new upstream
channel from VM-a to VM-b is automatically created and initialized
via "xenstore watch" call back on VM-b.

VM-a (upstream) <----- (downstream) VM-b

As a result, there will be bi-directional communication channel
available between two VMs.

When upstream channel is removed (e.g. unloading driver), VM on the
other side is notified and "xenstore watch" callback is invoked.
Via this callback, VM can remove corresponding downstream channel.

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        |  11 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        |  14 --
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      |  30 +--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        |  31 +--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h        |   2 -
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   | 226 +++++++++++++++++++--
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h   |  18 +-
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c  |  22 ++
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h  |   6 +
 9 files changed, 270 insertions(+), 90 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 70b4878..5b5dae44 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -6,6 +6,7 @@
 #include "hyper_dmabuf_conf.h"
 #include "hyper_dmabuf_list.h"
 #include "xen/hyper_dmabuf_xen_comm_list.h"
+#include "xen/hyper_dmabuf_xen_comm.h"
 
 MODULE_LICENSE("Dual BSD/GPL");
 MODULE_AUTHOR("IOTG-PED, INTEL");
@@ -43,6 +44,11 @@ static int hyper_dmabuf_drv_init(void)
 		return -EINVAL;
 	}
 
+	ret = hyper_dmabuf_setup_data_dir();
+	if (ret < 0) {
+		return -EINVAL;
+	}
+
 	/* interrupt for comm should be registered here: */
 	return ret;
 }
@@ -52,12 +58,15 @@ static void hyper_dmabuf_drv_exit(void)
 {
 	/* hash tables for export/import entries and ring_infos */
 	hyper_dmabuf_table_destroy();
-	hyper_dmabuf_ring_table_init();
+
+	hyper_dmabuf_cleanup_ringbufs();
+	hyper_dmabuf_ring_table_destroy();
 
 	/* destroy workqueue */
 	if (hyper_dmabuf_private.work_queue)
 		destroy_workqueue(hyper_dmabuf_private.work_queue);
 
+	hyper_dmabuf_destroy_data_dir();
 	printk( KERN_NOTICE "dma_buf-src_sink model: Exiting" );
 	unregister_device();
 }
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
index 6145d29..7511afb 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -29,8 +29,6 @@ struct ioctl_hyper_dmabuf_exporter_ring_setup {
 	/* IN parameters */
 	/* Remote domain id */
 	uint32_t remote_domain;
-	grant_ref_t ring_refid; /* assigned by driver, copied to userspace after initialization */
-	uint32_t port; /* assigned by driver, copied to userspace after initialization */
 };
 
 #define IOCTL_HYPER_DMABUF_IMPORTER_RING_SETUP \
@@ -39,10 +37,6 @@ struct ioctl_hyper_dmabuf_importer_ring_setup {
 	/* IN parameters */
 	/* Source domain id */
 	uint32_t source_domain;
-	/* Ring shared page refid */
-	grant_ref_t ring_refid;
-	/* Port number */
-	uint32_t port;
 };
 
 #define IOCTL_HYPER_DMABUF_EXPORT_REMOTE \
@@ -95,12 +89,4 @@ struct ioctl_hyper_dmabuf_query {
 	uint32_t info;
 };
 
-#define IOCTL_HYPER_DMABUF_REMOTE_EXPORTER_RING_SETUP \
-_IOC(_IOC_NONE, 'G', 6, sizeof(struct ioctl_hyper_dmabuf_remote_exporter_ring_setup))
-struct ioctl_hyper_dmabuf_remote_exporter_ring_setup {
-	/* in parameters */
-	uint32_t rdomain; /* id of remote domain where exporter's ring need to be setup */
-	uint32_t info;
-};
-
 #endif //__LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index e4d8316..44a153b 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -48,9 +48,7 @@ static int hyper_dmabuf_exporter_ring_setup(void *data)
 		return 0;
 	}
 
-	ret = hyper_dmabuf_exporter_ringbuf_init(ring_attr->remote_domain,
-						&ring_attr->ring_refid,
-						&ring_attr->port);
+	ret = hyper_dmabuf_exporter_ringbuf_init(ring_attr->remote_domain);
 
 	return ret;
 }
@@ -76,10 +74,7 @@ static int hyper_dmabuf_importer_ring_setup(void *data)
 		return 0;
 	}
 
-	/* user need to provide a port number and ref # for the page used as ring buffer */
-	ret = hyper_dmabuf_importer_ringbuf_init(setup_imp_ring_attr->source_domain,
-						 setup_imp_ring_attr->ring_refid,
-						 setup_imp_ring_attr->port);
+	ret = hyper_dmabuf_importer_ringbuf_init(setup_imp_ring_attr->source_domain);
 
 	return ret;
 }
@@ -355,26 +350,6 @@ static int hyper_dmabuf_query(void *data)
 	return ret;
 }
 
-static int hyper_dmabuf_remote_exporter_ring_setup(void *data)
-{
-	struct ioctl_hyper_dmabuf_remote_exporter_ring_setup *remote_exporter_ring_setup;
-	struct hyper_dmabuf_ring_rq *req;
-
-	remote_exporter_ring_setup = (struct ioctl_hyper_dmabuf_remote_exporter_ring_setup *)data;
-
-	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
-	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORTER_RING_SETUP, NULL);
-
-	/* requesting remote domain to set-up exporter's ring */
-	if(hyper_dmabuf_send_request(remote_exporter_ring_setup->rdomain, req) < 0) {
-		kfree(req);
-		return -EINVAL;
-	}
-
-	kfree(req);
-	return 0;
-}
-
 static const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORTER_RING_SETUP, hyper_dmabuf_exporter_ring_setup, 0),
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_IMPORTER_RING_SETUP, hyper_dmabuf_importer_ring_setup, 0),
@@ -382,7 +357,6 @@ static const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_FD, hyper_dmabuf_export_fd_ioctl, 0),
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_DESTROY, hyper_dmabuf_destroy, 0),
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_QUERY, hyper_dmabuf_query, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_REMOTE_EXPORTER_RING_SETUP, hyper_dmabuf_remote_exporter_ring_setup, 0),
 };
 
 static long hyper_dmabuf_ioctl(struct file *filp,
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index 0166e61..8a059c8 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -70,12 +70,6 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
 			request->operands[i] = operands[i];
 		break;
 
-	/* requesting the other side to setup another ring channel for reverse direction */
-	case HYPER_DMABUF_EXPORTER_RING_SETUP:
-		/* command : HYPER_DMABUF_EXPORTER_RING_SETUP */
-		/* no operands needed */
-		break;
-
 	default:
 		/* no command found */
 		return;
@@ -163,13 +157,6 @@ void cmd_process_work(struct work_struct *work)
 		 */
 		break;
 
-	case HYPER_DMABUF_IMPORTER_RING_SETUP:
-		/* command: HYPER_DMABUF_IMPORTER_RING_SETUP */
-		/* no operands needed */
-		hyper_dmabuf_importer_ringbuf_init(domid, req->operands[0], req->operands[1]);
-
-		break;
-
 	default:
 		/* shouldn't get here */
 		/* no matched command, nothing to do.. just return error */
@@ -185,7 +172,6 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
 	struct cmd_process *proc;
 	struct hyper_dmabuf_ring_rq *temp_req;
 	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
-	int ret;
 
 	if (!req) {
 		printk("request is NULL\n");
@@ -193,28 +179,13 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
 	}
 
 	if ((req->command < HYPER_DMABUF_EXPORT) ||
-		(req->command > HYPER_DMABUF_IMPORTER_RING_SETUP)) {
+		(req->command > HYPER_DMABUF_OPS_TO_SOURCE)) {
 		printk("invalid command\n");
 		return -EINVAL;
 	}
 
 	req->status = HYPER_DMABUF_REQ_PROCESSED;
 
-	/* HYPER_DMABUF_EXPORTER_RING_SETUP requires immediate
-	 * follow up so can't be processed in workqueue
-	 */
-	if (req->command == HYPER_DMABUF_EXPORTER_RING_SETUP) {
-		ret = hyper_dmabuf_exporter_ringbuf_init(domid, &req->operands[0], &req->operands[1]);
-		if (ret < 0) {
-			req->status = HYPER_DMABUF_REQ_ERROR;
-		}
-
-		req->status = HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP;
-		req->command = HYPER_DMABUF_IMPORTER_RING_SETUP;
-
-		return req->command;
-	}
-
 	/* HYPER_DMABUF_DESTROY requires immediate
 	 * follow up so can't be processed in workqueue
 	 */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
index 44bfb70..9b25bdb 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -7,8 +7,6 @@ enum hyper_dmabuf_command {
 	HYPER_DMABUF_DESTROY_FINISH,
 	HYPER_DMABUF_OPS_TO_REMOTE,
 	HYPER_DMABUF_OPS_TO_SOURCE,
-	HYPER_DMABUF_EXPORTER_RING_SETUP, /* requesting remote domain to set up exporter's ring */
-	HYPER_DMABUF_IMPORTER_RING_SETUP, /* requesting remote domain to set up importer's ring */
 };
 
 enum hyper_dmabuf_ops {
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index 05855ba1..5db58b0 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -15,6 +15,83 @@
 
 static int export_req_id = 0;
 
+/* Creates entry in xen store that will keep details of all exporter rings created by this domain */
+int32_t hyper_dmabuf_setup_data_dir()
+{
+	char buf[255];
+
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf", hyper_dmabuf_get_domid());
+	return xenbus_mkdir(XBT_NIL, buf, "");
+}
+
+
+/* Removes entry from xenstore with exporter ring details.
+ * Other domains that has connected to any of exporter rings created by this domain,
+ * will be notified about removal of this entry and will treat that as signal to
+ * cleanup importer rings created for this domain
+ */
+int32_t hyper_dmabuf_destroy_data_dir()
+{
+	char buf[255];
+
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf", hyper_dmabuf_get_domid());
+	return xenbus_rm(XBT_NIL, buf, "");
+}
+
+/*
+ * Adds xenstore entries with details of exporter ring created for given remote domain.
+ * It requires special daemon running in dom0 to make sure that given remote domain will
+ * have right permissions to access that data.
+ */
+static int32_t hyper_dmabuf_expose_ring_details(uint32_t domid, uint32_t rdomid, uint32_t grefid, uint32_t port)
+{
+	char buf[255];
+	int ret;
+
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d", domid, rdomid);
+	ret = xenbus_printf(XBT_NIL, buf, "grefid", "%d", grefid);
+
+	if (ret) {
+		printk("Failed to write xenbus entry %s: %d\n", buf, ret);
+		return ret;
+	}
+
+	ret = xenbus_printf(XBT_NIL, buf, "port", "%d", port);
+
+	if (ret) {
+		printk("Failed to write xenbus entry %s: %d\n", buf, ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+/*
+ * Queries details of ring exposed by remote domain.
+ */
+static int32_t hyper_dmabuf_get_ring_details(uint32_t domid, uint32_t rdomid, uint32_t *grefid, uint32_t *port)
+{
+	char buf[255];
+	int ret;
+
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d", rdomid, domid);
+	ret = xenbus_scanf(XBT_NIL, buf, "grefid", "%d", grefid);
+
+	if (ret <= 0) {
+		printk("Failed to read xenbus entry %s: %d\n", buf, ret);
+		return ret;
+	}
+
+	ret = xenbus_scanf(XBT_NIL, buf, "port", "%d", port);
+
+	if (ret <= 0) {
+		printk("Failed to read xenbus entry %s: %d\n", buf, ret);
+		return ret;
+	}
+
+	return (ret <= 0 ? 1 : 0);
+}
+
 int32_t hyper_dmabuf_get_domid(void)
 {
 	struct xenbus_transaction xbt;
@@ -40,8 +117,49 @@ int hyper_dmabuf_next_req_id_export(void)
 static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *dev_id);
 static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *dev_id);
 
+/*
+ * Callback function that will be called on any change of xenbus path being watched.
+ * Used for detecting creation/destruction of remote domain exporter ring.
+ * When remote domain's exporter ring will be detected, importer ring on this domain will be created.
+ * When remote domain's exporter ring destruction will be detected it will celanup this domain importer ring.
+ * Destruction can be caused by unloading module by remote domain or it's crash/force shutdown.
+ */
+static void remote_domain_exporter_watch_cb(struct xenbus_watch *watch,
+				   const char *path, const char *token)
+{
+	int rdom,ret;
+	uint32_t grefid, port;
+	struct hyper_dmabuf_ring_info_import *ring_info;
+
+	/* Check which domain has changed its exporter rings */
+	ret = sscanf(watch->node, "/local/domain/%d/", &rdom);
+	if (ret <= 0) {
+		return;
+	}
+
+	/* Check if we have importer ring for given remote domain alrady created */
+	ring_info = hyper_dmabuf_find_importer_ring(rdom);
+
+	/*
+	 * Try to query remote domain exporter ring details - if that will fail and we have
+	 * importer ring that means remote domains has cleanup its exporter ring, so our
+	 * importer ring is no longer useful.
+	 * If querying details will succeed and we don't have importer ring, it means that
+	 * remote domain has setup it for us and we should connect to it.
+	 */
+	ret = hyper_dmabuf_get_ring_details(hyper_dmabuf_get_domid(), rdom, &grefid, &port);
+
+	if (ring_info && ret != 0) {
+		printk("Remote exporter closed, cleaninup importer\n");
+		hyper_dmabuf_importer_ringbuf_cleanup(rdom);
+	} else if (!ring_info && ret == 0) {
+		printk("Registering importer\n");
+		hyper_dmabuf_importer_ringbuf_init(rdom);
+	}
+}
+
 /* exporter needs to generated info for page sharing */
-int hyper_dmabuf_exporter_ringbuf_init(int rdomain, grant_ref_t *refid, int *port)
+int hyper_dmabuf_exporter_ringbuf_init(int rdomain)
 {
 	struct hyper_dmabuf_ring_info_export *ring_info;
 	struct hyper_dmabuf_sring *sring;
@@ -99,24 +217,58 @@ int hyper_dmabuf_exporter_ringbuf_init(int rdomain, grant_ref_t *refid, int *por
 	ring_info->irq = ret;
 	ring_info->port = alloc_unbound.port;
 
-	/* store refid and port numbers for userspace's use */
-	*refid = ring_info->gref_ring;
-	*port = ring_info->port;
-
 	printk("%s: allocated eventchannel gref %d  port: %d  irq: %d\n",
 		__func__,
 		ring_info->gref_ring,
 		ring_info->port,
 		ring_info->irq);
 
-	/* register ring info */
 	ret = hyper_dmabuf_register_exporter_ring(ring_info);
 
+	ret = hyper_dmabuf_expose_ring_details(hyper_dmabuf_get_domid(), rdomain,
+                                               ring_info->gref_ring, ring_info->port);
+
+	/*
+	 * Register watch for remote domain exporter ring.
+	 * When remote domain will setup its exporter ring, we will automatically connect our importer ring to it.
+	 */
+	ring_info->watch.callback = remote_domain_exporter_watch_cb;
+	ring_info->watch.node = (const char*) kmalloc(sizeof(char) * 255, GFP_KERNEL);
+	sprintf((char*)ring_info->watch.node, "/local/domain/%d/data/hyper_dmabuf/%d/port", rdomain, hyper_dmabuf_get_domid());
+	register_xenbus_watch(&ring_info->watch);
+
 	return ret;
 }
 
+/* cleans up exporter ring created for given remote domain */
+void hyper_dmabuf_exporter_ringbuf_cleanup(int rdomain)
+{
+	struct hyper_dmabuf_ring_info_export *ring_info;
+
+	/* check if we at all have exporter ring for given rdomain */
+	ring_info = hyper_dmabuf_find_exporter_ring(rdomain);
+
+	if (!ring_info) {
+		return;
+	}
+
+	hyper_dmabuf_remove_exporter_ring(rdomain);
+
+	unregister_xenbus_watch(&ring_info->watch);
+	kfree(ring_info->watch.node);
+
+	/* No need to close communication channel, will be done by this function */
+	unbind_from_irqhandler(ring_info->irq,	(void*) ring_info);
+
+	/* No need to free sring page, will be freed by this function when other side will end its access */
+	gnttab_end_foreign_access(ring_info->gref_ring, 0,
+				  (unsigned long) ring_info->ring_front.sring);
+
+	kfree(ring_info);
+}
+
 /* importer needs to know about shared page and port numbers for ring buffer and event channel */
-int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port)
+int hyper_dmabuf_importer_ringbuf_init(int sdomain)
 {
 	struct hyper_dmabuf_ring_info_import *ring_info;
 	struct hyper_dmabuf_sring *sring;
@@ -124,24 +276,33 @@ int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port)
 	struct page *shared_ring;
 
 	struct gnttab_map_grant_ref *ops;
-	struct gnttab_unmap_grant_ref *unmap_ops;
 	int ret;
+	int importer_gref, importer_port;
+
+	ret = hyper_dmabuf_get_ring_details(hyper_dmabuf_get_domid(), sdomain,
+					    &importer_gref, &importer_port);
+
+	if (ret) {
+		printk("Domain %d has not created exporter ring for current domain\n", sdomain);
+		return ret;
+	}
 
 	ring_info = (struct hyper_dmabuf_ring_info_import *)
 			kmalloc(sizeof(*ring_info), GFP_KERNEL);
 
 	ring_info->sdomain = sdomain;
-	ring_info->evtchn = port;
+	ring_info->evtchn = importer_port;
 
 	ops = (struct gnttab_map_grant_ref*)kmalloc(sizeof(*ops), GFP_KERNEL);
-	unmap_ops = (struct gnttab_unmap_grant_ref*)kmalloc(sizeof(*unmap_ops), GFP_KERNEL);
 
 	if (gnttab_alloc_pages(1, &shared_ring)) {
 		return -EINVAL;
 	}
 
 	gnttab_set_map_op(&ops[0], (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
-			GNTMAP_host_map, gref, sdomain);
+			GNTMAP_host_map, importer_gref, sdomain);
+	gnttab_set_unmap_op(&ring_info->unmap_op, (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
+			GNTMAP_host_map, -1);
 
 	ret = gnttab_map_refs(ops, NULL, &shared_ring, 1);
 	if (ret < 0) {
@@ -152,13 +313,15 @@ int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port)
 	if (ops[0].status) {
 		printk("Ring mapping failed\n");
 		return -EINVAL;
+	} else {
+		ring_info->unmap_op.handle = ops[0].handle;
 	}
 
 	sring = (struct hyper_dmabuf_sring*) pfn_to_kaddr(page_to_pfn(shared_ring));
 
 	BACK_RING_INIT(&ring_info->ring_back, sring, PAGE_SIZE);
 
-	ret = bind_interdomain_evtchn_to_irqhandler(sdomain, port,
+	ret = bind_interdomain_evtchn_to_irqhandler(sdomain, importer_port,
 						hyper_dmabuf_back_ring_isr, 0,
 						NULL, (void*)ring_info);
 	if (ret < 0) {
@@ -168,14 +331,51 @@ int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port)
 	ring_info->irq = ret;
 
 	printk("%s: bound to eventchannel port: %d  irq: %d\n", __func__,
-		port,
+		importer_port,
 		ring_info->irq);
 
 	ret = hyper_dmabuf_register_importer_ring(ring_info);
 
+	/* Setup communcation channel in opposite direction */
+	if (!hyper_dmabuf_find_exporter_ring(sdomain)) {
+		ret = hyper_dmabuf_exporter_ringbuf_init(sdomain);
+	}
+
 	return ret;
 }
 
+/* clenas up importer ring create for given source domain */
+void hyper_dmabuf_importer_ringbuf_cleanup(int sdomain)
+{
+	struct hyper_dmabuf_ring_info_import *ring_info;
+	struct page *shared_ring;
+
+	/* check if we have importer ring created for given sdomain */
+	ring_info = hyper_dmabuf_find_importer_ring(sdomain);
+
+	if (!ring_info)
+		return;
+
+	hyper_dmabuf_remove_importer_ring(sdomain);
+
+	/* no need to close event channel, will be done by that function */
+	unbind_from_irqhandler(ring_info->irq,	(void*) ring_info);
+
+	/* unmapping shared ring page */
+	shared_ring = virt_to_page(ring_info->ring_back.sring);
+	gnttab_unmap_refs(&ring_info->unmap_op, NULL, &shared_ring, 1);
+	gnttab_free_pages(1, &shared_ring);
+
+	kfree(ring_info);
+}
+
+/* cleans up all exporter/importer rings */
+void hyper_dmabuf_cleanup_ringbufs(void)
+{
+	hyper_dmabuf_foreach_exporter_ring(hyper_dmabuf_exporter_ringbuf_cleanup);
+	hyper_dmabuf_foreach_importer_ring(hyper_dmabuf_importer_ringbuf_cleanup);
+}
+
 int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req)
 {
 	struct hyper_dmabuf_front_ring *ring;
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
index 4ad0529..a4819ca 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
@@ -2,6 +2,7 @@
 #define __HYPER_DMABUF_XEN_COMM_H__
 
 #include "xen/interface/io/ring.h"
+#include "xen/xenbus.h"
 
 #define MAX_NUMBER_OF_OPERANDS 9
 
@@ -27,6 +28,7 @@ struct hyper_dmabuf_ring_info_export {
         int gref_ring;
         int irq;
         int port;
+	struct xenbus_watch watch;
 };
 
 struct hyper_dmabuf_ring_info_import {
@@ -34,17 +36,29 @@ struct hyper_dmabuf_ring_info_import {
         int irq;
         int evtchn;
         struct hyper_dmabuf_back_ring ring_back;
+	struct gnttab_unmap_grant_ref unmap_op;
 };
 
 int32_t hyper_dmabuf_get_domid(void);
+int32_t hyper_dmabuf_setup_data_dir(void);
+int32_t hyper_dmabuf_destroy_data_dir(void);
 
 int hyper_dmabuf_next_req_id_export(void);
 
 /* exporter needs to generated info for page sharing */
-int hyper_dmabuf_exporter_ringbuf_init(int rdomain, grant_ref_t *gref, int *port);
+int hyper_dmabuf_exporter_ringbuf_init(int rdomain);
 
 /* importer needs to know about shared page and port numbers for ring buffer and event channel */
-int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port);
+int hyper_dmabuf_importer_ringbuf_init(int sdomain);
+
+/* cleans up exporter ring created for given domain */
+void hyper_dmabuf_exporter_ringbuf_cleanup(int rdomain);
+
+/* cleans up importer ring created for given domain */
+void hyper_dmabuf_importer_ringbuf_cleanup(int sdomain);
+
+/* cleans up all exporter/importer rings */
+void hyper_dmabuf_cleanup_ringbufs(void);
 
 /* send request to the remote domain */
 int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req);
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
index 15c9d29..5778468 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
@@ -104,3 +104,25 @@ int hyper_dmabuf_remove_importer_ring(int domid)
 
 	return -1;
 }
+
+void hyper_dmabuf_foreach_exporter_ring(void (*func)(int rdom))
+{
+	struct hyper_dmabuf_exporter_ring_info *info_entry;
+	struct hlist_node *tmp;
+	int bkt;
+
+	hash_for_each_safe(hyper_dmabuf_hash_exporter_ring, bkt, tmp, info_entry, node) {
+		func(info_entry->info->rdomain);
+	}
+}
+
+void hyper_dmabuf_foreach_importer_ring(void (*func)(int sdom))
+{
+	struct hyper_dmabuf_importer_ring_info *info_entry;
+	struct hlist_node *tmp;
+	int bkt;
+
+	hash_for_each_safe(hyper_dmabuf_hash_importer_ring, bkt, tmp, info_entry, node) {
+		func(info_entry->info->sdomain);
+	}
+}
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
index 5929f99..fd1958c 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
@@ -32,4 +32,10 @@ int hyper_dmabuf_remove_exporter_ring(int domid);
 
 int hyper_dmabuf_remove_importer_ring(int domid);
 
+/* iterates over all exporter rings and calls provided function for each of them */
+void hyper_dmabuf_foreach_exporter_ring(void (*func)(int rdom));
+
+/* iterates over all importer rings and calls provided function for each of them */
+void hyper_dmabuf_foreach_importer_ring(void (*func)(int sdom));
+
 #endif // __HYPER_DMABUF_XEN_COMM_LIST_H__
-- 
2.7.4

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 08/60] hyper_dmabuf: automatic comm channel initialization using xenstore
  2017-12-19 19:29 ` Dongwon Kim
                   ` (13 preceding siblings ...)
  (?)
@ 2017-12-19 19:29 ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: xen-devel, mateuszx.potrola, dri-devel, dongwon.kim

From: Mateusz Polrola <mateuszx.potrola@intel.com>

This introduces use of xenstore for creating and managing
communication channels between two VMs in the system.

When hyper_dmabuf driver is loaded in the service VM (host OS),
a new xenstore directory, "/local/domain/<domid>/data/hyper_dmabuf"
is created in xenstore filesystem. Whenever a new guest OS
creates and initailizes its own upstream channel the service VM,
new directory is created under the main directory created above
as shown here:

/local/domain/<domid>/data/hyper_dmabuf/<remote domid>/port
/local/domain/<domid>/data/hyper_dmabuf/<remote domid>/gref

This patch also adds a "xenstore watch" callback is called
when a new upstream connection is made from another VM (VM-b).
Upon detection, this VM (VM-a) intializes a downstream channel
,paired with detected upstream connection as shown below.

VM-a (downstream) <----- (upstream) VM-a

And as soon as this downstream channel is created, a new upstream
channel from VM-a to VM-b is automatically created and initialized
via "xenstore watch" call back on VM-b.

VM-a (upstream) <----- (downstream) VM-b

As a result, there will be bi-directional communication channel
available between two VMs.

When upstream channel is removed (e.g. unloading driver), VM on the
other side is notified and "xenstore watch" callback is invoked.
Via this callback, VM can remove corresponding downstream channel.

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        |  11 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        |  14 --
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      |  30 +--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        |  31 +--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h        |   2 -
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   | 226 +++++++++++++++++++--
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h   |  18 +-
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c  |  22 ++
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h  |   6 +
 9 files changed, 270 insertions(+), 90 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 70b4878..5b5dae44 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -6,6 +6,7 @@
 #include "hyper_dmabuf_conf.h"
 #include "hyper_dmabuf_list.h"
 #include "xen/hyper_dmabuf_xen_comm_list.h"
+#include "xen/hyper_dmabuf_xen_comm.h"
 
 MODULE_LICENSE("Dual BSD/GPL");
 MODULE_AUTHOR("IOTG-PED, INTEL");
@@ -43,6 +44,11 @@ static int hyper_dmabuf_drv_init(void)
 		return -EINVAL;
 	}
 
+	ret = hyper_dmabuf_setup_data_dir();
+	if (ret < 0) {
+		return -EINVAL;
+	}
+
 	/* interrupt for comm should be registered here: */
 	return ret;
 }
@@ -52,12 +58,15 @@ static void hyper_dmabuf_drv_exit(void)
 {
 	/* hash tables for export/import entries and ring_infos */
 	hyper_dmabuf_table_destroy();
-	hyper_dmabuf_ring_table_init();
+
+	hyper_dmabuf_cleanup_ringbufs();
+	hyper_dmabuf_ring_table_destroy();
 
 	/* destroy workqueue */
 	if (hyper_dmabuf_private.work_queue)
 		destroy_workqueue(hyper_dmabuf_private.work_queue);
 
+	hyper_dmabuf_destroy_data_dir();
 	printk( KERN_NOTICE "dma_buf-src_sink model: Exiting" );
 	unregister_device();
 }
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
index 6145d29..7511afb 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -29,8 +29,6 @@ struct ioctl_hyper_dmabuf_exporter_ring_setup {
 	/* IN parameters */
 	/* Remote domain id */
 	uint32_t remote_domain;
-	grant_ref_t ring_refid; /* assigned by driver, copied to userspace after initialization */
-	uint32_t port; /* assigned by driver, copied to userspace after initialization */
 };
 
 #define IOCTL_HYPER_DMABUF_IMPORTER_RING_SETUP \
@@ -39,10 +37,6 @@ struct ioctl_hyper_dmabuf_importer_ring_setup {
 	/* IN parameters */
 	/* Source domain id */
 	uint32_t source_domain;
-	/* Ring shared page refid */
-	grant_ref_t ring_refid;
-	/* Port number */
-	uint32_t port;
 };
 
 #define IOCTL_HYPER_DMABUF_EXPORT_REMOTE \
@@ -95,12 +89,4 @@ struct ioctl_hyper_dmabuf_query {
 	uint32_t info;
 };
 
-#define IOCTL_HYPER_DMABUF_REMOTE_EXPORTER_RING_SETUP \
-_IOC(_IOC_NONE, 'G', 6, sizeof(struct ioctl_hyper_dmabuf_remote_exporter_ring_setup))
-struct ioctl_hyper_dmabuf_remote_exporter_ring_setup {
-	/* in parameters */
-	uint32_t rdomain; /* id of remote domain where exporter's ring need to be setup */
-	uint32_t info;
-};
-
 #endif //__LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index e4d8316..44a153b 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -48,9 +48,7 @@ static int hyper_dmabuf_exporter_ring_setup(void *data)
 		return 0;
 	}
 
-	ret = hyper_dmabuf_exporter_ringbuf_init(ring_attr->remote_domain,
-						&ring_attr->ring_refid,
-						&ring_attr->port);
+	ret = hyper_dmabuf_exporter_ringbuf_init(ring_attr->remote_domain);
 
 	return ret;
 }
@@ -76,10 +74,7 @@ static int hyper_dmabuf_importer_ring_setup(void *data)
 		return 0;
 	}
 
-	/* user need to provide a port number and ref # for the page used as ring buffer */
-	ret = hyper_dmabuf_importer_ringbuf_init(setup_imp_ring_attr->source_domain,
-						 setup_imp_ring_attr->ring_refid,
-						 setup_imp_ring_attr->port);
+	ret = hyper_dmabuf_importer_ringbuf_init(setup_imp_ring_attr->source_domain);
 
 	return ret;
 }
@@ -355,26 +350,6 @@ static int hyper_dmabuf_query(void *data)
 	return ret;
 }
 
-static int hyper_dmabuf_remote_exporter_ring_setup(void *data)
-{
-	struct ioctl_hyper_dmabuf_remote_exporter_ring_setup *remote_exporter_ring_setup;
-	struct hyper_dmabuf_ring_rq *req;
-
-	remote_exporter_ring_setup = (struct ioctl_hyper_dmabuf_remote_exporter_ring_setup *)data;
-
-	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
-	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORTER_RING_SETUP, NULL);
-
-	/* requesting remote domain to set-up exporter's ring */
-	if(hyper_dmabuf_send_request(remote_exporter_ring_setup->rdomain, req) < 0) {
-		kfree(req);
-		return -EINVAL;
-	}
-
-	kfree(req);
-	return 0;
-}
-
 static const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORTER_RING_SETUP, hyper_dmabuf_exporter_ring_setup, 0),
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_IMPORTER_RING_SETUP, hyper_dmabuf_importer_ring_setup, 0),
@@ -382,7 +357,6 @@ static const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_FD, hyper_dmabuf_export_fd_ioctl, 0),
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_DESTROY, hyper_dmabuf_destroy, 0),
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_QUERY, hyper_dmabuf_query, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_REMOTE_EXPORTER_RING_SETUP, hyper_dmabuf_remote_exporter_ring_setup, 0),
 };
 
 static long hyper_dmabuf_ioctl(struct file *filp,
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index 0166e61..8a059c8 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -70,12 +70,6 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
 			request->operands[i] = operands[i];
 		break;
 
-	/* requesting the other side to setup another ring channel for reverse direction */
-	case HYPER_DMABUF_EXPORTER_RING_SETUP:
-		/* command : HYPER_DMABUF_EXPORTER_RING_SETUP */
-		/* no operands needed */
-		break;
-
 	default:
 		/* no command found */
 		return;
@@ -163,13 +157,6 @@ void cmd_process_work(struct work_struct *work)
 		 */
 		break;
 
-	case HYPER_DMABUF_IMPORTER_RING_SETUP:
-		/* command: HYPER_DMABUF_IMPORTER_RING_SETUP */
-		/* no operands needed */
-		hyper_dmabuf_importer_ringbuf_init(domid, req->operands[0], req->operands[1]);
-
-		break;
-
 	default:
 		/* shouldn't get here */
 		/* no matched command, nothing to do.. just return error */
@@ -185,7 +172,6 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
 	struct cmd_process *proc;
 	struct hyper_dmabuf_ring_rq *temp_req;
 	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
-	int ret;
 
 	if (!req) {
 		printk("request is NULL\n");
@@ -193,28 +179,13 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
 	}
 
 	if ((req->command < HYPER_DMABUF_EXPORT) ||
-		(req->command > HYPER_DMABUF_IMPORTER_RING_SETUP)) {
+		(req->command > HYPER_DMABUF_OPS_TO_SOURCE)) {
 		printk("invalid command\n");
 		return -EINVAL;
 	}
 
 	req->status = HYPER_DMABUF_REQ_PROCESSED;
 
-	/* HYPER_DMABUF_EXPORTER_RING_SETUP requires immediate
-	 * follow up so can't be processed in workqueue
-	 */
-	if (req->command == HYPER_DMABUF_EXPORTER_RING_SETUP) {
-		ret = hyper_dmabuf_exporter_ringbuf_init(domid, &req->operands[0], &req->operands[1]);
-		if (ret < 0) {
-			req->status = HYPER_DMABUF_REQ_ERROR;
-		}
-
-		req->status = HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP;
-		req->command = HYPER_DMABUF_IMPORTER_RING_SETUP;
-
-		return req->command;
-	}
-
 	/* HYPER_DMABUF_DESTROY requires immediate
 	 * follow up so can't be processed in workqueue
 	 */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
index 44bfb70..9b25bdb 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -7,8 +7,6 @@ enum hyper_dmabuf_command {
 	HYPER_DMABUF_DESTROY_FINISH,
 	HYPER_DMABUF_OPS_TO_REMOTE,
 	HYPER_DMABUF_OPS_TO_SOURCE,
-	HYPER_DMABUF_EXPORTER_RING_SETUP, /* requesting remote domain to set up exporter's ring */
-	HYPER_DMABUF_IMPORTER_RING_SETUP, /* requesting remote domain to set up importer's ring */
 };
 
 enum hyper_dmabuf_ops {
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index 05855ba1..5db58b0 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -15,6 +15,83 @@
 
 static int export_req_id = 0;
 
+/* Creates an entry in xenstore that keeps details of all exporter rings created by this domain */
+int32_t hyper_dmabuf_setup_data_dir(void)
+{
+	char buf[255];
+
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf", hyper_dmabuf_get_domid());
+	return xenbus_mkdir(XBT_NIL, buf, "");
+}
+
+/* Removes the entry with exporter ring details from xenstore.
+ * Other domains that have connected to any of the exporter rings created by
+ * this domain will be notified about the removal of this entry and will
+ * treat it as a signal to clean up the importer rings created for this domain.
+ */
+int32_t hyper_dmabuf_destroy_data_dir(void)
+{
+	char buf[255];
+
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf", hyper_dmabuf_get_domid());
+	return xenbus_rm(XBT_NIL, buf, "");
+}
+
+/*
+ * Adds xenstore entries with details of the exporter ring created for the
+ * given remote domain. A special daemon running in dom0 is required to make
+ * sure that the given remote domain gets the right permissions to access
+ * that data.
+ */
+static int32_t hyper_dmabuf_expose_ring_details(uint32_t domid, uint32_t rdomid, uint32_t grefid, uint32_t port)
+{
+	char buf[255];
+	int ret;
+
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d", domid, rdomid);
+	ret = xenbus_printf(XBT_NIL, buf, "grefid", "%d", grefid);
+
+	if (ret) {
+		printk("Failed to write xenbus entry %s: %d\n", buf, ret);
+		return ret;
+	}
+
+	ret = xenbus_printf(XBT_NIL, buf, "port", "%d", port);
+
+	if (ret) {
+		printk("Failed to write xenbus entry %s: %d\n", buf, ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+/*
+ * Queries details of ring exposed by remote domain.
+ */
+static int32_t hyper_dmabuf_get_ring_details(uint32_t domid, uint32_t rdomid, uint32_t *grefid, uint32_t *port)
+{
+	char buf[255];
+	int ret;
+
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d", rdomid, domid);
+	ret = xenbus_scanf(XBT_NIL, buf, "grefid", "%d", grefid);
+
+	if (ret <= 0) {
+		printk("Failed to read xenbus entry %s: %d\n", buf, ret);
+		return ret;
+	}
+
+	ret = xenbus_scanf(XBT_NIL, buf, "port", "%d", port);
+
+	if (ret <= 0) {
+		printk("Failed to read xenbus entry %s: %d\n", buf, ret);
+		return ret;
+	}
+
+	return 0;
+}
+
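+/* Example layout in xenstore after hyper_dmabuf_expose_ring_details(),
+ * assuming domain 1 created an exporter ring for domain 2:
+ *
+ *   /local/domain/1/data/hyper_dmabuf/2/grefid = <grant ref of ring page>
+ *   /local/domain/1/data/hyper_dmabuf/2/port   = <event channel port>
+ */
+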
 int32_t hyper_dmabuf_get_domid(void)
 {
 	struct xenbus_transaction xbt;
@@ -40,8 +117,49 @@ int hyper_dmabuf_next_req_id_export(void)
 static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *dev_id);
 static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *dev_id);
 
+/*
+ * Callback invoked on any change of the watched xenbus path.
+ * Used for detecting creation/destruction of remote domain exporter rings.
+ * When a remote domain's exporter ring is detected, an importer ring is
+ * created on this domain. When destruction of a remote domain's exporter
+ * ring is detected, this domain's importer ring is cleaned up. Destruction
+ * can be caused by the remote domain unloading the module, or by its
+ * crash/forced shutdown.
+ */
+static void remote_domain_exporter_watch_cb(struct xenbus_watch *watch,
+				   const char *path, const char *token)
+{
+	int rdom, ret;
+	uint32_t grefid, port;
+	struct hyper_dmabuf_ring_info_import *ring_info;
+
+	/* Check which domain has changed its exporter rings */
+	ret = sscanf(watch->node, "/local/domain/%d/", &rdom);
+	if (ret <= 0) {
+		return;
+	}
+
+	/* Check if we already have an importer ring created for the given remote domain */
+	ring_info = hyper_dmabuf_find_importer_ring(rdom);
+
+	/*
+	 * Try to query the remote domain's exporter ring details. If that fails
+	 * and we have an importer ring, the remote domain has cleaned up its
+	 * exporter ring, so our importer ring is no longer useful.
+	 * If querying the details succeeds and we don't have an importer ring,
+	 * the remote domain has set one up for us and we should connect to it.
+	 */
+	ret = hyper_dmabuf_get_ring_details(hyper_dmabuf_get_domid(), rdom, &grefid, &port);
+
+	if (ring_info && ret != 0) {
+		printk("Remote exporter closed, cleaninup importer\n");
+		hyper_dmabuf_importer_ringbuf_cleanup(rdom);
+	} else if (!ring_info && ret == 0) {
+		printk("Registering importer\n");
+		hyper_dmabuf_importer_ringbuf_init(rdom);
+	}
+}
+
 /* exporter needs to generate info for page sharing */
-int hyper_dmabuf_exporter_ringbuf_init(int rdomain, grant_ref_t *refid, int *port)
+int hyper_dmabuf_exporter_ringbuf_init(int rdomain)
 {
 	struct hyper_dmabuf_ring_info_export *ring_info;
 	struct hyper_dmabuf_sring *sring;
@@ -99,24 +217,58 @@ int hyper_dmabuf_exporter_ringbuf_init(int rdomain, grant_ref_t *refid, int *por
 	ring_info->irq = ret;
 	ring_info->port = alloc_unbound.port;
 
-	/* store refid and port numbers for userspace's use */
-	*refid = ring_info->gref_ring;
-	*port = ring_info->port;
-
 	printk("%s: allocated eventchannel gref %d  port: %d  irq: %d\n",
 		__func__,
 		ring_info->gref_ring,
 		ring_info->port,
 		ring_info->irq);
 
-	/* register ring info */
 	ret = hyper_dmabuf_register_exporter_ring(ring_info);
 
+	ret = hyper_dmabuf_expose_ring_details(hyper_dmabuf_get_domid(), rdomain,
+                                               ring_info->gref_ring, ring_info->port);
+
+	/*
+	 * Register a watch for the remote domain's exporter ring.
+	 * When the remote domain sets up its exporter ring, we will
+	 * automatically connect our importer ring to it.
+	 */
+	ring_info->watch.callback = remote_domain_exporter_watch_cb;
+	ring_info->watch.node = (const char*) kmalloc(sizeof(char) * 255, GFP_KERNEL);
+	sprintf((char*)ring_info->watch.node, "/local/domain/%d/data/hyper_dmabuf/%d/port", rdomain, hyper_dmabuf_get_domid());
+	register_xenbus_watch(&ring_info->watch);
+
 	return ret;
 }
 
+/* cleans up exporter ring created for given remote domain */
+void hyper_dmabuf_exporter_ringbuf_cleanup(int rdomain)
+{
+	struct hyper_dmabuf_ring_info_export *ring_info;
+
+	/* check if we at all have exporter ring for given rdomain */
+	ring_info = hyper_dmabuf_find_exporter_ring(rdomain);
+
+	if (!ring_info) {
+		return;
+	}
+
+	hyper_dmabuf_remove_exporter_ring(rdomain);
+
+	unregister_xenbus_watch(&ring_info->watch);
+	kfree(ring_info->watch.node);
+
+	/* No need to close the communication channel;
+	 * unbind_from_irqhandler() takes care of that
+	 */
+	unbind_from_irqhandler(ring_info->irq, (void*) ring_info);
+
+	/* No need to free the sring page; gnttab_end_foreign_access() frees it once the other side ends its access */
+	gnttab_end_foreign_access(ring_info->gref_ring, 0,
+				  (unsigned long) ring_info->ring_front.sring);
+
+	kfree(ring_info);
+}
+
 /* importer needs to know about shared page and port numbers for ring buffer and event channel */
-int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port)
+int hyper_dmabuf_importer_ringbuf_init(int sdomain)
 {
 	struct hyper_dmabuf_ring_info_import *ring_info;
 	struct hyper_dmabuf_sring *sring;
@@ -124,24 +276,33 @@ int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port)
 	struct page *shared_ring;
 
 	struct gnttab_map_grant_ref *ops;
-	struct gnttab_unmap_grant_ref *unmap_ops;
 	int ret;
+	uint32_t importer_gref, importer_port;
+
+	ret = hyper_dmabuf_get_ring_details(hyper_dmabuf_get_domid(), sdomain,
+					    &importer_gref, &importer_port);
+
+	if (ret) {
+		printk("Domain %d has not created exporter ring for current domain\n", sdomain);
+		return ret;
+	}
 
 	ring_info = (struct hyper_dmabuf_ring_info_import *)
 			kmalloc(sizeof(*ring_info), GFP_KERNEL);
 
 	ring_info->sdomain = sdomain;
-	ring_info->evtchn = port;
+	ring_info->evtchn = importer_port;
 
 	ops = (struct gnttab_map_grant_ref*)kmalloc(sizeof(*ops), GFP_KERNEL);
-	unmap_ops = (struct gnttab_unmap_grant_ref*)kmalloc(sizeof(*unmap_ops), GFP_KERNEL);
 
 	if (gnttab_alloc_pages(1, &shared_ring)) {
 		return -EINVAL;
 	}
 
 	gnttab_set_map_op(&ops[0], (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
-			GNTMAP_host_map, gref, sdomain);
+			GNTMAP_host_map, importer_gref, sdomain);
+	gnttab_set_unmap_op(&ring_info->unmap_op, (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
+			GNTMAP_host_map, -1);
 
 	ret = gnttab_map_refs(ops, NULL, &shared_ring, 1);
 	if (ret < 0) {
@@ -152,13 +313,15 @@ int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port)
 	if (ops[0].status) {
 		printk("Ring mapping failed\n");
 		return -EINVAL;
+	} else {
+		ring_info->unmap_op.handle = ops[0].handle;
 	}
 
 	sring = (struct hyper_dmabuf_sring*) pfn_to_kaddr(page_to_pfn(shared_ring));
 
 	BACK_RING_INIT(&ring_info->ring_back, sring, PAGE_SIZE);
 
-	ret = bind_interdomain_evtchn_to_irqhandler(sdomain, port,
+	ret = bind_interdomain_evtchn_to_irqhandler(sdomain, importer_port,
 						hyper_dmabuf_back_ring_isr, 0,
 						NULL, (void*)ring_info);
 	if (ret < 0) {
@@ -168,14 +331,51 @@ int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port)
 	ring_info->irq = ret;
 
 	printk("%s: bound to eventchannel port: %d  irq: %d\n", __func__,
-		port,
+		importer_port,
 		ring_info->irq);
 
 	ret = hyper_dmabuf_register_importer_ring(ring_info);
 
+	/* Set up communication channel in the opposite direction */
+	if (!hyper_dmabuf_find_exporter_ring(sdomain)) {
+		ret = hyper_dmabuf_exporter_ringbuf_init(sdomain);
+	}
+
 	return ret;
 }
 
+/* cleans up importer ring created for given source domain */
+void hyper_dmabuf_importer_ringbuf_cleanup(int sdomain)
+{
+	struct hyper_dmabuf_ring_info_import *ring_info;
+	struct page *shared_ring;
+
+	/* check if we have importer ring created for given sdomain */
+	ring_info = hyper_dmabuf_find_importer_ring(sdomain);
+
+	if (!ring_info)
+		return;
+
+	hyper_dmabuf_remove_importer_ring(sdomain);
+
+	/* no need to close the event channel; unbind_from_irqhandler() takes care of it */
+	unbind_from_irqhandler(ring_info->irq, (void*) ring_info);
+
+	/* unmapping shared ring page */
+	shared_ring = virt_to_page(ring_info->ring_back.sring);
+	gnttab_unmap_refs(&ring_info->unmap_op, NULL, &shared_ring, 1);
+	gnttab_free_pages(1, &shared_ring);
+
+	kfree(ring_info);
+}
+
+/* cleans up all exporter/importer rings */
+void hyper_dmabuf_cleanup_ringbufs(void)
+{
+	hyper_dmabuf_foreach_exporter_ring(hyper_dmabuf_exporter_ringbuf_cleanup);
+	hyper_dmabuf_foreach_importer_ring(hyper_dmabuf_importer_ringbuf_cleanup);
+}
+
 int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req)
 {
 	struct hyper_dmabuf_front_ring *ring;
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
index 4ad0529..a4819ca 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
@@ -2,6 +2,7 @@
 #define __HYPER_DMABUF_XEN_COMM_H__
 
 #include "xen/interface/io/ring.h"
+#include "xen/xenbus.h"
 
 #define MAX_NUMBER_OF_OPERANDS 9
 
@@ -27,6 +28,7 @@ struct hyper_dmabuf_ring_info_export {
         int gref_ring;
         int irq;
         int port;
+	struct xenbus_watch watch;
 };
 
 struct hyper_dmabuf_ring_info_import {
@@ -34,17 +36,29 @@ struct hyper_dmabuf_ring_info_import {
         int irq;
         int evtchn;
         struct hyper_dmabuf_back_ring ring_back;
+	struct gnttab_unmap_grant_ref unmap_op;
 };
 
 int32_t hyper_dmabuf_get_domid(void);
+int32_t hyper_dmabuf_setup_data_dir(void);
+int32_t hyper_dmabuf_destroy_data_dir(void);
 
 int hyper_dmabuf_next_req_id_export(void);
 
 /* exporter needs to generate info for page sharing */
-int hyper_dmabuf_exporter_ringbuf_init(int rdomain, grant_ref_t *gref, int *port);
+int hyper_dmabuf_exporter_ringbuf_init(int rdomain);
 
 /* importer needs to know about shared page and port numbers for ring buffer and event channel */
-int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port);
+int hyper_dmabuf_importer_ringbuf_init(int sdomain);
+
+/* cleans up exporter ring created for given domain */
+void hyper_dmabuf_exporter_ringbuf_cleanup(int rdomain);
+
+/* cleans up importer ring created for given domain */
+void hyper_dmabuf_importer_ringbuf_cleanup(int sdomain);
+
+/* cleans up all exporter/importer rings */
+void hyper_dmabuf_cleanup_ringbufs(void);
 
 /* send request to the remote domain */
 int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req);
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
index 15c9d29..5778468 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
@@ -104,3 +104,25 @@ int hyper_dmabuf_remove_importer_ring(int domid)
 
 	return -1;
 }
+
+void hyper_dmabuf_foreach_exporter_ring(void (*func)(int rdom))
+{
+	struct hyper_dmabuf_exporter_ring_info *info_entry;
+	struct hlist_node *tmp;
+	int bkt;
+
+	hash_for_each_safe(hyper_dmabuf_hash_exporter_ring, bkt, tmp, info_entry, node) {
+		func(info_entry->info->rdomain);
+	}
+}
+
+void hyper_dmabuf_foreach_importer_ring(void (*func)(int sdom))
+{
+	struct hyper_dmabuf_importer_ring_info *info_entry;
+	struct hlist_node *tmp;
+	int bkt;
+
+	hash_for_each_safe(hyper_dmabuf_hash_importer_ring, bkt, tmp, info_entry, node) {
+		func(info_entry->info->sdomain);
+	}
+}
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
index 5929f99..fd1958c 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
@@ -32,4 +32,10 @@ int hyper_dmabuf_remove_exporter_ring(int domid);
 
 int hyper_dmabuf_remove_importer_ring(int domid);
 
+/* iterates over all exporter rings and calls provided function for each of them */
+void hyper_dmabuf_foreach_exporter_ring(void (*func)(int rdom));
+
+/* iterates over all importer rings and calls provided function for each of them */
+void hyper_dmabuf_foreach_importer_ring(void (*func)(int sdom));
+
 #endif // __HYPER_DMABUF_XEN_COMM_LIST_H__
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 09/60] hyper_dmabuf: indirect DMA_BUF synchronization via shadowing
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

The importer now sends a synchronization request to the
exporter whenever any DMA_BUF operation (e.g. dma_buf_map
and dma_buf_unmap) is executed on an imported Hyper_DMABUF.
This results in the creation of a shadow DMA_BUF and exactly
the same DMA_BUF operation being executed on it.

The main purpose of this is to get DMA_BUF synchronized
eventually between the original creator of DMA_BUF and the
end consumer of it running on the importer VM.
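
In sketch form (condensed from the code in this patch; see
hyper_dmabuf_imp.c and hyper_dmabuf_msg.c for the full paths):

  /* importer side, e.g. triggered by dma_buf_map_attachment() */
  operands[0] = hyper_dmabuf_id;
  operands[1] = HYPER_DMABUF_OPS_MAP;
  hyper_dmabuf_create_request(req, HYPER_DMABUF_OPS_TO_SOURCE, &operands[0]);
  hyper_dmabuf_send_request(sdomain, req, true);  /* waits for the response */

  /* exporter side, in hyper_dmabuf_msg_parse() */
  ret = hyper_dmabuf_remote_sync(req->operands[0], req->operands[1]);
  req->status = ret ? HYPER_DMABUF_REQ_ERROR : HYPER_DMABUF_REQ_PROCESSED;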

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/Makefile                  |   1 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c        |  90 ++++++----
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      |  52 ++++--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c       |   8 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        |  43 +++--
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c    | 189 +++++++++++++++++++++
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h    |   6 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h     |  32 +++-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   |  52 +++++-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h   |   2 +-
 10 files changed, 397 insertions(+), 78 deletions(-)
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h

diff --git a/drivers/xen/hyper_dmabuf/Makefile b/drivers/xen/hyper_dmabuf/Makefile
index 0be7445..3459382 100644
--- a/drivers/xen/hyper_dmabuf/Makefile
+++ b/drivers/xen/hyper_dmabuf/Makefile
@@ -7,6 +7,7 @@ ifneq ($(KERNELRELEASE),)
                                  hyper_dmabuf_list.o \
 				 hyper_dmabuf_imp.o \
 				 hyper_dmabuf_msg.o \
+				 hyper_dmabuf_remote_sync.o \
 				 xen/hyper_dmabuf_xen_comm.o \
 				 xen/hyper_dmabuf_xen_comm_list.o
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
index 6b16e37..2c78bc1 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
@@ -169,7 +169,8 @@ grant_ref_t hyper_dmabuf_create_addressing_tables(grant_ref_t *data_refs, int ne
 	/*
 	 * Calculate number of pages needed for 2nd level addressing:
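 	 * e.g. with 4 KiB pages and 32-bit grant refs, REFS_PER_PAGE is 1024,
 	 * so a 2560-page buffer needs three 2nd-level pages (plus the single
 	 * top-level page)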
 	 */
-	int n_2nd_level_pages = (nents/REFS_PER_PAGE + ((nents % REFS_PER_PAGE) ? 1: 0));/* rounding */
+	int n_2nd_level_pages = (nents/REFS_PER_PAGE +
+				((nents % REFS_PER_PAGE) ? 1: 0));
 	int i;
 	unsigned long gref_page_start;
 	grant_ref_t *tmp_page;
@@ -187,7 +188,9 @@ grant_ref_t hyper_dmabuf_create_addressing_tables(grant_ref_t *data_refs, int ne
 
 	/* Share 2nd level addressing pages in readonly mode*/
 	for (i=0; i< n_2nd_level_pages; i++) {
-		addr_refs[i] = gnttab_grant_foreign_access(rdomain, virt_to_mfn((unsigned long)tmp_page+i*PAGE_SIZE ), 1);
+		addr_refs[i] = gnttab_grant_foreign_access(rdomain,
+							   virt_to_mfn((unsigned long)tmp_page+i*PAGE_SIZE ),
+							   1);
 	}
 
 	/*
@@ -213,7 +216,9 @@ grant_ref_t hyper_dmabuf_create_addressing_tables(grant_ref_t *data_refs, int ne
 	}
 
 	/* Share top level addressing page in readonly mode*/
-	top_level_ref = gnttab_grant_foreign_access(rdomain, virt_to_mfn((unsigned long)tmp_page), 1);
+	top_level_ref = gnttab_grant_foreign_access(rdomain,
+						    virt_to_mfn((unsigned long)tmp_page),
+						    1);
 
 	kfree(addr_refs);
 
@@ -255,7 +260,9 @@ struct page** hyper_dmabuf_get_data_refs(grant_ref_t top_level_ref, int domid, i
 	}
 
 	addr = (unsigned long)pfn_to_kaddr(page_to_pfn(top_level_page));
-	gnttab_set_map_op(&top_level_map_ops, addr, GNTMAP_host_map | GNTMAP_readonly, top_level_ref, domid);
+	gnttab_set_map_op(&top_level_map_ops, addr, GNTMAP_host_map | GNTMAP_readonly,
+			  top_level_ref, domid);
+
 	gnttab_set_unmap_op(&top_level_unmap_ops, addr, GNTMAP_host_map | GNTMAP_readonly, -1);
 
 	if (gnttab_map_refs(&top_level_map_ops, NULL, &top_level_page, 1)) {
@@ -282,7 +289,8 @@ struct page** hyper_dmabuf_get_data_refs(grant_ref_t top_level_ref, int domid, i
 
 	for (i = 0; i < n_level2_refs; i++) {
 		addr = (unsigned long)pfn_to_kaddr(page_to_pfn(level2_pages[i]));
-		gnttab_set_map_op(&map_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, top_level_refs[i], domid);
+		gnttab_set_map_op(&map_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly,
+				  top_level_refs[i], domid);
 		gnttab_set_unmap_op(&unmap_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, -1);
 	}
 
@@ -295,7 +303,7 @@ struct page** hyper_dmabuf_get_data_refs(grant_ref_t top_level_ref, int domid, i
 	for (i = 0; i < n_level2_refs; i++) {
 		if (map_ops[i].status) {
 			printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d",
-					map_ops[i].status);
+			       map_ops[i].status);
 			return NULL;
 		} else {
 			unmap_ops[i].handle = map_ops[i].handle;
@@ -331,7 +339,9 @@ grant_ref_t hyper_dmabuf_create_gref_table(struct page **pages, int rdomain, int
 
 	/* share data pages in rw mode*/
 	for (i=0; i<nents; i++) {
-		data_refs[i] = gnttab_grant_foreign_access(rdomain, pfn_to_mfn(page_to_pfn(pages[i])), 0);
+		data_refs[i] = gnttab_grant_foreign_access(rdomain,
+							   pfn_to_mfn(page_to_pfn(pages[i])),
+							   0);
 	}
 
 	/* create additional shared pages with 2 level addressing of data pages */
@@ -350,7 +360,8 @@ int hyper_dmabuf_cleanup_gref_table(struct hyper_dmabuf_sgt_info *sgt_info) {
 	struct hyper_dmabuf_shared_pages_info *shared_pages_info = &sgt_info->shared_pages_info;
 
 	grant_ref_t *ref = shared_pages_info->top_level_page;
-	int n_2nd_level_pages = (sgt_info->sgt->nents/REFS_PER_PAGE + ((sgt_info->sgt->nents % REFS_PER_PAGE) ? 1: 0));/* rounding */
+	int n_2nd_level_pages = (sgt_info->active_sgts->sgt->nents/REFS_PER_PAGE +
+				((sgt_info->active_sgts->sgt->nents % REFS_PER_PAGE) ? 1: 0));
 
 
 	if (shared_pages_info->data_refs == NULL ||
@@ -384,7 +395,7 @@ int hyper_dmabuf_cleanup_gref_table(struct hyper_dmabuf_sgt_info *sgt_info) {
 	free_pages((unsigned long)shared_pages_info->top_level_page, 1);
 
 	/* End foreign access for data pages, but do not free them */
-	for (i = 0; i < sgt_info->sgt->nents; i++) {
+	for (i = 0; i < sgt_info->active_sgts->sgt->nents; i++) {
 		if (gnttab_query_foreign_access(shared_pages_info->data_refs[i])) {
 			printk("refid not shared !!\n");
 		}
@@ -404,12 +415,14 @@ int hyper_dmabuf_cleanup_gref_table(struct hyper_dmabuf_sgt_info *sgt_info) {
 int hyper_dmabuf_cleanup_imported_pages(struct hyper_dmabuf_imported_sgt_info *sgt_info) {
 	struct hyper_dmabuf_shared_pages_info *shared_pages_info = &sgt_info->shared_pages_info;
 
-	if(shared_pages_info->unmap_ops == NULL || shared_pages_info->data_pages == NULL) {
+	if(shared_pages_info->unmap_ops == NULL ||
+	   shared_pages_info->data_pages == NULL) {
 		printk("Imported pages already cleaned up or buffer was not imported yet\n");
 		return 0;
 	}
 
-	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL, shared_pages_info->data_pages, sgt_info->nents) ) {
+	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL,
+			      shared_pages_info->data_pages, sgt_info->nents) ) {
 		printk("Cannot unmap data pages\n");
 		return -EINVAL;
 	}
@@ -424,7 +437,8 @@ int hyper_dmabuf_cleanup_imported_pages(struct hyper_dmabuf_imported_sgt_info *s
 }
 
 /* map and construct sg_lists from reference numbers */
-struct sg_table* hyper_dmabuf_map_pages(grant_ref_t top_level_gref, int frst_ofst, int last_len, int nents, int sdomain,
+struct sg_table* hyper_dmabuf_map_pages(grant_ref_t top_level_gref, int frst_ofst,
+					int last_len, int nents, int sdomain,
 					struct hyper_dmabuf_shared_pages_info *shared_pages_info)
 {
 	struct sg_table *st;
@@ -451,13 +465,16 @@ struct sg_table* hyper_dmabuf_map_pages(grant_ref_t top_level_gref, int frst_ofs
 		return NULL;
 	}
 
-	ops = (struct gnttab_map_grant_ref *)kcalloc(nents, sizeof(struct gnttab_map_grant_ref), GFP_KERNEL);
-	unmap_ops = (struct gnttab_unmap_grant_ref *)kcalloc(nents, sizeof(struct gnttab_unmap_grant_ref), GFP_KERNEL);
+	ops = kcalloc(nents, sizeof(struct gnttab_map_grant_ref),
+		      GFP_KERNEL);
+	unmap_ops = kcalloc(nents, sizeof(struct gnttab_unmap_grant_ref),
+			    GFP_KERNEL);
 
 	for (i=0; i<nents; i++) {
 		addr = (unsigned long)pfn_to_kaddr(page_to_pfn(pages[i]));
 		refs = pfn_to_kaddr(page_to_pfn(refid_pages[i / REFS_PER_PAGE]));
-		gnttab_set_map_op(&ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, refs[i % REFS_PER_PAGE], sdomain);
+		gnttab_set_map_op(&ops[i], addr, GNTMAP_host_map | GNTMAP_readonly,
+				refs[i % REFS_PER_PAGE], sdomain);
 		gnttab_set_unmap_op(&unmap_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, -1);
 	}
 
@@ -478,7 +495,8 @@ struct sg_table* hyper_dmabuf_map_pages(grant_ref_t top_level_gref, int frst_ofs
 
 	st = hyper_dmabuf_create_sgt(pages, frst_ofst, last_len, nents);
 
-	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL, refid_pages, n_level2_refs) ) {
+	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL, refid_pages,
+			n_level2_refs) ) {
 		printk("Cannot unmap 2nd level refs\n");
 		return NULL;
 	}
@@ -507,10 +525,8 @@ inline int hyper_dmabuf_sync_request_and_wait(int id, int ops)
 
 	hyper_dmabuf_create_request(req, HYPER_DMABUF_OPS_TO_SOURCE, &operands[0]);
 
-	/* send request */
-	ret = hyper_dmabuf_send_request(id, req);
-
-	/* TODO: wait until it gets response.. or can we just move on? */
+	/* send request and wait for a response */
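+	/* the source domain id is extracted from the hyper_dmabuf_id by
+	 * HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID()
+	 */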
+	ret = hyper_dmabuf_send_request(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(id), req, true);
 
 	kfree(req);
 
@@ -528,14 +544,14 @@ static int hyper_dmabuf_ops_attach(struct dma_buf* dmabuf, struct device* dev,
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
-						HYPER_DMABUF_OPS_ATTACH);
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
+						 HYPER_DMABUF_OPS_ATTACH);
 
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		return ret;
 	}
 
-	/* Ignoring ret for now */
 	return 0;
 }
 
@@ -549,8 +565,8 @@ static void hyper_dmabuf_ops_detach(struct dma_buf* dmabuf, struct dma_buf_attac
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
-						HYPER_DMABUF_OPS_DETACH);
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
+						 HYPER_DMABUF_OPS_DETACH);
 
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -583,7 +599,7 @@ static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachme
                 goto err_free_sg;
         }
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_MAP);
 
 	if (ret < 0) {
@@ -615,7 +631,7 @@ static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
 	sg_free_table(sg);
 	kfree(sg);
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_UNMAP);
 
 	if (ret < 0) {
@@ -633,7 +649,7 @@ static void hyper_dmabuf_ops_release(struct dma_buf *dmabuf)
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_RELEASE);
 
 	if (ret < 0) {
@@ -651,7 +667,7 @@ static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf, enum dma_da
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -670,7 +686,7 @@ static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf, enum dma_data
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_END_CPU_ACCESS);
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -689,7 +705,7 @@ static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf, unsigned long
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_KMAP_ATOMIC);
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -708,7 +724,7 @@ static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf, unsigned long
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -725,7 +741,7 @@ static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_KMAP);
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -744,7 +760,7 @@ static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum,
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_KUNMAP);
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -761,7 +777,7 @@ static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf, struct vm_area_struct *
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_MMAP);
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -780,7 +796,7 @@ static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_VMAP);
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -799,7 +815,7 @@ static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_VUNMAP);
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 44a153b..bace8b2 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -6,6 +6,7 @@
 #include <linux/uaccess.h>
 #include <linux/dma-buf.h>
 #include <linux/delay.h>
+#include <linux/list.h>
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_imp.h"
 #include "hyper_dmabuf_list.h"
@@ -121,7 +122,9 @@ static int hyper_dmabuf_export_remote(void *data)
 		return -1;
 	}
 
-	/* Clear ret, as that will cause whole ioctl to return failure to userspace, which is not true */
+	/* Clear ret, as that will cause whole ioctl to return failure
+	 * to userspace, which is not true
+	 */
 	ret = 0;
 
 	sgt = dma_buf_map_attachment(attachment, DMA_BIDIRECTIONAL);
@@ -131,10 +134,26 @@ static int hyper_dmabuf_export_remote(void *data)
 	sgt_info->hyper_dmabuf_id = hyper_dmabuf_id_gen();
 	/* TODO: We might need to consider using port number on event channel? */
 	sgt_info->hyper_dmabuf_rdomain = export_remote_attr->remote_domain;
-	sgt_info->sgt = sgt;
-	sgt_info->attachment = attachment;
 	sgt_info->dma_buf = dma_buf;
 
+	sgt_info->active_sgts = kcalloc(1, sizeof(struct sgt_list), GFP_KERNEL);
+	sgt_info->active_attached = kcalloc(1, sizeof(struct attachment_list), GFP_KERNEL);
+	sgt_info->va_kmapped = kcalloc(1, sizeof(struct kmap_vaddr_list), GFP_KERNEL);
+	sgt_info->va_vmapped = kcalloc(1, sizeof(struct vmap_vaddr_list), GFP_KERNEL);
+
+	sgt_info->active_sgts->sgt = sgt;
+	sgt_info->active_attached->attach = attachment;
+	sgt_info->va_kmapped->vaddr = NULL; /* first vaddr is NULL */
+	sgt_info->va_vmapped->vaddr = NULL; /* first vaddr is NULL */
+
+	/* initialize list of sgt, attachment and vaddr for dmabuf sync
+	 * via shadow dma-buf
+	 */
+	INIT_LIST_HEAD(&sgt_info->active_sgts->list);
+	INIT_LIST_HEAD(&sgt_info->active_attached->list);
+	INIT_LIST_HEAD(&sgt_info->va_kmapped->list);
+	INIT_LIST_HEAD(&sgt_info->va_vmapped->list);
+
 	page_info = hyper_dmabuf_ext_pgs(sgt);
 	if (page_info == NULL)
 		goto fail_export;
@@ -155,7 +174,7 @@ static int hyper_dmabuf_export_remote(void *data)
 	operands[2] = page_info->frst_ofst;
 	operands[3] = page_info->last_len;
 	operands[4] = hyper_dmabuf_create_gref_table(page_info->pages, export_remote_attr->remote_domain,
-						page_info->nents, &sgt_info->shared_pages_info);
+						     page_info->nents, &sgt_info->shared_pages_info);
 	/* driver/application specific private info, max 32 bytes */
 	operands[5] = export_remote_attr->private[0];
 	operands[6] = export_remote_attr->private[1];
@@ -166,7 +185,7 @@ static int hyper_dmabuf_export_remote(void *data)
 
 	/* composing a message to the importer */
 	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT, &operands[0]);
-	if(hyper_dmabuf_send_request(export_remote_attr->remote_domain, req))
+	if(hyper_dmabuf_send_request(export_remote_attr->remote_domain, req, false))
 		goto fail_send_request;
 
 	/* free msg */
@@ -181,10 +200,17 @@ static int hyper_dmabuf_export_remote(void *data)
 	hyper_dmabuf_remove_exported(sgt_info->hyper_dmabuf_id);
 
 fail_export:
-	dma_buf_unmap_attachment(sgt_info->attachment, sgt_info->sgt, DMA_BIDIRECTIONAL);
-	dma_buf_detach(sgt_info->dma_buf, sgt_info->attachment);
+	dma_buf_unmap_attachment(sgt_info->active_attached->attach,
+				 sgt_info->active_sgts->sgt,
+				 DMA_BIDIRECTIONAL);
+	dma_buf_detach(sgt_info->dma_buf, sgt_info->active_attached->attach);
 	dma_buf_put(sgt_info->dma_buf);
 
+	kfree(sgt_info->active_attached);
+	kfree(sgt_info->active_sgts);
+	kfree(sgt_info->va_kmapped);
+	kfree(sgt_info->va_vmapped);
+
 	return -EINVAL;
 }
 
@@ -233,7 +259,8 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 }
 
 /* removing dmabuf from the database and send a request to the source domain
-* to unmap it. */
+ * to unmap it.
+ */
 static int hyper_dmabuf_destroy(void *data)
 {
 	struct ioctl_hyper_dmabuf_destroy *destroy_attr;
@@ -250,7 +277,9 @@ static int hyper_dmabuf_destroy(void *data)
 
 	/* find dmabuf in export list */
 	sgt_info = hyper_dmabuf_find_exported(destroy_attr->hyper_dmabuf_id);
-	if (sgt_info == NULL) { /* failed to find corresponding entry in export list */
+
+	/* failed to find corresponding entry in export list */
+	if (sgt_info == NULL) {
 		destroy_attr->status = -EINVAL;
 		return -EFAULT;
 	}
@@ -260,8 +289,9 @@ static int hyper_dmabuf_destroy(void *data)
 	hyper_dmabuf_create_request(req, HYPER_DMABUF_DESTROY, &destroy_attr->hyper_dmabuf_id);
 
 	/* now send destroy request to remote domain
-	 * currently assuming there's only one importer exist */
-	ret = hyper_dmabuf_send_request(sgt_info->hyper_dmabuf_rdomain, req);
+	 * currently assuming only one importer exists
+	 */
+	ret = hyper_dmabuf_send_request(sgt_info->hyper_dmabuf_rdomain, req, true);
 	if (ret < 0) {
 		kfree(req);
 		return -EFAULT;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
index ad2109c..2b3ef6b 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
@@ -33,7 +33,7 @@ int hyper_dmabuf_register_exported(struct hyper_dmabuf_sgt_info *info)
 	info_entry->info = info;
 
 	hash_add(hyper_dmabuf_hash_exported, &info_entry->node,
-		info_entry->info->hyper_dmabuf_id);
+		 info_entry->info->hyper_dmabuf_id);
 
 	return 0;
 }
@@ -47,7 +47,7 @@ int hyper_dmabuf_register_imported(struct hyper_dmabuf_imported_sgt_info* info)
 	info_entry->info = info;
 
 	hash_add(hyper_dmabuf_hash_imported, &info_entry->node,
-		info_entry->info->hyper_dmabuf_id);
+		 info_entry->info->hyper_dmabuf_id);
 
 	return 0;
 }
@@ -71,8 +71,8 @@ int hyper_dmabuf_find_id(struct dma_buf *dmabuf, int domid)
 	int bkt;
 
 	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
-		if(info_entry->info->attachment->dmabuf == dmabuf &&
-			info_entry->info->hyper_dmabuf_rdomain == domid)
+		if(info_entry->info->dma_buf == dmabuf &&
+		   info_entry->info->hyper_dmabuf_rdomain == domid)
 			return info_entry->info->hyper_dmabuf_id;
 
 	return -1;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index 8a059c8..2432a4e 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -7,7 +7,7 @@
 #include <linux/workqueue.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_imp.h"
-//#include "hyper_dmabuf_remote_sync.h"
+#include "hyper_dmabuf_remote_sync.h"
 #include "xen/hyper_dmabuf_xen_comm.h"
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_list.h"
@@ -125,7 +125,9 @@ void cmd_process_work(struct work_struct *work)
 		 * operands0 : hyper_dmabuf_id
 		 */
 
-		/* TODO: that should be done on workqueue, when received ack from all importers that buffer is no longer used */
+		/* TODO: that should be done on workqueue, when received ack from
+		 * all importers that buffer is no longer used
+		 */
 		sgt_info =
 			hyper_dmabuf_find_exported(req->operands[0]);
 
@@ -133,8 +135,10 @@ void cmd_process_work(struct work_struct *work)
 			hyper_dmabuf_cleanup_gref_table(sgt_info);
 
 			/* unmap dmabuf */
-			dma_buf_unmap_attachment(sgt_info->attachment, sgt_info->sgt, DMA_BIDIRECTIONAL);
-			dma_buf_detach(sgt_info->dma_buf, sgt_info->attachment);
+			dma_buf_unmap_attachment(sgt_info->active_attached->attach,
+						 sgt_info->active_sgts->sgt,
+						 DMA_BIDIRECTIONAL);
+			dma_buf_detach(sgt_info->dma_buf, sgt_info->active_attached->attach);
 			dma_buf_put(sgt_info->dma_buf);
 
 			/* TODO: Rest of cleanup, sgt cleanup etc */
@@ -147,16 +151,6 @@ void cmd_process_work(struct work_struct *work)
 		/* for dmabuf synchronization */
 		break;
 
-	/* as importer, command to exporter */
-	case HYPER_DMABUF_OPS_TO_SOURCE:
-		/* notifying dmabuf map/unmap to exporter, map will make the driver to do shadow mapping
-		* or unmapping for synchronization with original exporter (e.g. i915) */
-		/* command : DMABUF_OPS_TO_SOURCE.
-		 * operands0 : hyper_dmabuf_id
-		 * operands1 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
-		 */
-		break;
-
 	default:
 		/* shouldn't get here */
 		/* no matched command, nothing to do.. just return error */
@@ -172,6 +166,7 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
 	struct cmd_process *proc;
 	struct hyper_dmabuf_ring_rq *temp_req;
 	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
+	int ret;
 
 	if (!req) {
 		printk("request is NULL\n");
@@ -216,7 +211,25 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
 		return req->command;
 	}
 
-	temp_req = (struct hyper_dmabuf_ring_rq *)kmalloc(sizeof(*temp_req), GFP_KERNEL);
+	/* dma buf remote synchronization */
+	if (req->command == HYPER_DMABUF_OPS_TO_SOURCE) {
+		/* notifying dmabuf map/unmap to the exporter; a map request makes
+		 * the driver do shadow mapping, or unmapping, for synchronization
+		 * with the original exporter (e.g. i915) */
+
+		/* command : DMABUF_OPS_TO_SOURCE.
+		 * operands0 : hyper_dmabuf_id
+		 * operands1 : enum hyper_dmabuf_ops {....}
+		 */
+		ret = hyper_dmabuf_remote_sync(req->operands[0], req->operands[1]);
+		if (ret)
+			req->status = HYPER_DMABUF_REQ_ERROR;
+		else
+			req->status = HYPER_DMABUF_REQ_PROCESSED;
+
+		return req->command;
+	}
+
+	temp_req = kmalloc(sizeof(*temp_req), GFP_KERNEL);
 
 	memcpy(temp_req, req, sizeof(*temp_req));
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
new file mode 100644
index 0000000..6ba932f
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -0,0 +1,189 @@
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/dma-buf.h>
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_drv.h"
+#include "xen/hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_msg.h"
+
+extern struct hyper_dmabuf_private hyper_dmabuf_private;
+
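+/* Runs on the exporter when the importer reports a DMA_BUF operation
+ * via HYPER_DMABUF_OPS_TO_SOURCE: the same operation is replayed on a
+ * shadow of the original dma-buf, so the local exporter (e.g. i915)
+ * observes the same attach/map/kmap/vmap activity as the remote
+ * consumer. Objects created by a replayed op are pushed onto the
+ * corresponding list in sgt_info; the reverse ops pop them.
+ */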
+int hyper_dmabuf_remote_sync(int id, int ops)
+{
+	struct hyper_dmabuf_sgt_info *sgt_info;
+	struct sgt_list *sgtl;
+	struct attachment_list *attachl;
+	struct kmap_vaddr_list *va_kmapl;
+	struct vmap_vaddr_list *va_vmapl;
+	int ret;
+
+	/* find a corresponding SGT for the id */
+	sgt_info = hyper_dmabuf_find_exported(id);
+
+	if (!sgt_info) {
+		printk("dmabuf remote sync::can't find exported list\n");
+		return -EINVAL;
+	}
+
+	switch (ops) {
+	case HYPER_DMABUF_OPS_ATTACH:
+		attachl = kcalloc(1, sizeof(*attachl), GFP_KERNEL);
+
+		attachl->attach = dma_buf_attach(sgt_info->dma_buf,
+						hyper_dmabuf_private.device);
+
+		if (!attachl->attach) {
+			kfree(attachl);
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_ATTACH\n");
+			return -EINVAL;
+		}
+
+		list_add(&attachl->list, &sgt_info->active_attached->list);
+		break;
+
+	case HYPER_DMABUF_OPS_DETACH:
+		attachl = list_first_entry(&sgt_info->active_attached->list,
+					struct attachment_list, list);
+
+		if (!attachl) {
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_DETACH\n");
+			return -EINVAL;
+		}
+		dma_buf_detach(sgt_info->dma_buf, attachl->attach);
+		list_del(&attachl->list);
+		kfree(attachl);
+		break;
+
+	case HYPER_DMABUF_OPS_MAP:
+		sgtl = kcalloc(1, sizeof(*sgtl), GFP_KERNEL);
+		attachl = list_first_entry(&sgt_info->active_attached->list,
+					struct attachment_list, list);
+		sgtl->sgt = dma_buf_map_attachment(attachl->attach, DMA_BIDIRECTIONAL);
+		if (!sgtl->sgt) {
+			kfree(sgtl);
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_MAP\n");
+			return -EINVAL;
+		}
+		list_add(&sgtl->list, &sgt_info->active_sgts->list);
+		break;
+
+	case HYPER_DMABUF_OPS_UNMAP:
+		attachl = list_first_entry(&sgt_info->active_attached->list,
+					struct attachment_list, list);
+		sgtl = list_first_entry(&sgt_info->active_sgts->list,
+					struct sgt_list, list);
+		if (!attachl || !sgtl) {
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_UNMAP\n");
+			return -EINVAL;
+		}
+
+		dma_buf_unmap_attachment(attachl->attach, sgtl->sgt,
+					DMA_BIDIRECTIONAL);
+		list_del(&sgtl->list);
+		kfree(sgtl);
+		break;
+
+	case HYPER_DMABUF_OPS_RELEASE:
+		/* the remote importer shouldn't release the dma_buf because
+		 * the exporter holds a handle to the dma_buf as
+		 * long as it is shared with other domains.
+		 */
+		break;
+
+	case HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS:
+		ret = dma_buf_begin_cpu_access(sgt_info->dma_buf, DMA_BIDIRECTIONAL);
+		if (ret < 0) {
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS\n");
+			ret = -EINVAL;
+		}
+		break;
+
+	case HYPER_DMABUF_OPS_END_CPU_ACCESS:
+		ret = dma_buf_end_cpu_access(sgt_info->dma_buf, DMA_BIDIRECTIONAL);
+		if (ret < 0) {
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_END_CPU_ACCESS\n");
+			ret = -EINVAL;
+		}
+		break;
+
+	case HYPER_DMABUF_OPS_KMAP_ATOMIC:
+	case HYPER_DMABUF_OPS_KMAP:
+		va_kmapl = kcalloc(1, sizeof(*va_kmapl), GFP_KERNEL);
+
+		/* dummy kmapping of 1 page */
+		if (ops == HYPER_DMABUF_OPS_KMAP_ATOMIC)
+			va_kmapl->vaddr = dma_buf_kmap_atomic(sgt_info->dma_buf, 1);
+		else
+			va_kmapl->vaddr = dma_buf_kmap(sgt_info->dma_buf, 1);
+
+		if (!va_kmapl->vaddr) {
+			kfree(va_kmapl);
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
+			return -EINVAL;
+		}
+		list_add(&va_kmapl->list, &sgt_info->va_kmapped->list);
+		break;
+
+	case HYPER_DMABUF_OPS_KUNMAP_ATOMIC:
+	case HYPER_DMABUF_OPS_KUNMAP:
+		va_kmapl = list_first_entry(&sgt_info->va_kmapped->list,
+					struct kmap_vaddr_list, list);
+		if (!va_kmapl || va_kmapl->vaddr == NULL) {
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
+			return -EINVAL;
+		}
+
+		/* unmapping 1 page */
+		if (ops == HYPER_DMABUF_OPS_KUNMAP_ATOMIC)
+			dma_buf_kunmap_atomic(sgt_info->dma_buf, 1, va_kmapl->vaddr);
+		else
+			dma_buf_kunmap(sgt_info->dma_buf, 1, va_kmapl->vaddr);
+
+		list_del(&va_kmapl->list);
+		kfree(va_kmapl);
+		break;
+
+	case HYPER_DMABUF_OPS_MMAP:
+		/* currently not supported: looking for a way to create
+		 * a dummy vma */
+		printk("dmabuf remote sync::sychronized mmap is not supported\n");
+		break;
+
+	case HYPER_DMABUF_OPS_VMAP:
+		va_vmapl = kcalloc(1, sizeof(*va_vmapl), GFP_KERNEL);
+
+		/* dummy vmapping */
+		va_vmapl->vaddr = dma_buf_vmap(sgt_info->dma_buf);
+
+		if (!va_vmapl->vaddr) {
+			kfree(va_vmapl);
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VMAP\n");
+			return -EINVAL;
+		}
+		list_add(&va_vmapl->list, &sgt_info->va_vmapped->list);
+		break;
+
+	case HYPER_DMABUF_OPS_VUNMAP:
+		va_vmapl = list_first_entry(&sgt_info->va_vmapped->list,
+					struct vmap_vaddr_list, list);
+		if (!va_vmapl || va_vmapl->vaddr == NULL) {
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VUNMAP\n");
+			return -EINVAL;
+		}
+
+		dma_buf_vunmap(sgt_info->dma_buf, va_vmapl->vaddr);
+
+		list_del(&va_vmapl->list);
+		kfree(va_vmapl);
+		break;
+
+	default:
+		/* program should not get here */
+		break;
+	}
+
+	return 0;
+}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h
new file mode 100644
index 0000000..fc85fa8
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h
@@ -0,0 +1,6 @@
+#ifndef __HYPER_DMABUF_REMOTE_SYNC_H__
+#define __HYPER_DMABUF_REMOTE_SYNC_H__
+
+int hyper_dmabuf_remote_sync(int id, int ops);
+
+#endif // __HYPER_DMABUF_REMOTE_SYNC_H__
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
index c8a2f4d..bfe80ee 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -18,6 +18,30 @@
  * frame buffer) */
 #define MAX_ALLOWED_NUM_PAGES_FOR_GREF_NUM_ARRAYS 4
 
+/* stack of mapped sgts */
+struct sgt_list {
+	struct sg_table *sgt;
+	struct list_head list;
+};
+
+/* stack of attachments */
+struct attachment_list {
+	struct dma_buf_attachment *attach;
+	struct list_head list;
+};
+
+/* stack of vaddr mapped via kmap */
+struct kmap_vaddr_list {
+	void *vaddr;
+	struct list_head list;
+};
+
+/* stack of vaddr mapped via vmap */
+struct vmap_vaddr_list {
+	void *vaddr;
+	struct list_head list;
+};
+
 struct hyper_dmabuf_shared_pages_info {
 	grant_ref_t *data_refs;	/* table with shared buffer pages refid */
 	grant_ref_t *addr_pages; /* pages of 2nd level addressing */
@@ -46,9 +70,13 @@ struct hyper_dmabuf_pages_info {
 struct hyper_dmabuf_sgt_info {
         int hyper_dmabuf_id; /* unique id to reference dmabuf in remote domain */
 	int hyper_dmabuf_rdomain; /* domain importing this sgt */
-        struct sg_table *sgt; /* pointer to sgt */
+
 	struct dma_buf *dma_buf; /* needed to store this for freeing it later */
-	struct dma_buf_attachment *attachment; /* needed to store this for freeing this later */
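+	/* heads of these lists are pre-allocated at export time
+	 * (hyper_dmabuf_export_remote); remotely synchronized ops add
+	 * further entries via hyper_dmabuf_remote_sync()
+	 */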
+	struct sgt_list *active_sgts;
+	struct attachment_list *active_attached;
+	struct kmap_vaddr_list *va_kmapped;
+	struct vmap_vaddr_list *va_vmapped;
+
 	struct hyper_dmabuf_shared_pages_info shared_pages_info;
 	int private[4]; /* device specific info (e.g. image's meta info?) */
 };
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index 5db58b0..576085f 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -3,6 +3,7 @@
 #include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/workqueue.h>
+#include <linux/delay.h>
 #include <xen/grant_table.h>
 #include <xen/events.h>
 #include <xen/xenbus.h>
@@ -15,6 +16,8 @@
 
 static int export_req_id = 0;
 
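+/* Latest request sent with wait == true; the front ring ISR matches
+ * responses against this single slot, so only one synchronous request
+ * can be in flight at a time.
+ */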
+struct hyper_dmabuf_ring_rq req_pending = {0};
+
 /* Creates an entry in xenstore that keeps details of all exporter rings created by this domain */
 int32_t hyper_dmabuf_setup_data_dir(void)
 {
@@ -114,8 +117,8 @@ int hyper_dmabuf_next_req_id_export(void)
 }
 
 /* For now cache latest rings as global variables. TODO: keep them in a list */
-static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *dev_id);
-static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *dev_id);
+static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *info);
+static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *info);
 
 /*
  * Callback function that will be called on any change of xenbus path being watched.
@@ -376,12 +379,13 @@ void hyper_dmabuf_cleanup_ringbufs(void)
 	hyper_dmabuf_foreach_importer_ring(hyper_dmabuf_importer_ringbuf_cleanup);
 }
 
-int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req)
+int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req, int wait)
 {
 	struct hyper_dmabuf_front_ring *ring;
 	struct hyper_dmabuf_ring_rq *new_req;
 	struct hyper_dmabuf_ring_info_export *ring_info;
 	int notify;
+	int timeout = 1000;
 
 	/* find a ring info for the channel */
 	ring_info = hyper_dmabuf_find_exporter_ring(domain);
@@ -401,6 +405,10 @@ int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req)
 		return -EIO;
 	}
 
+	/* update req_pending with current request */
+	memcpy(&req_pending, req, sizeof(req_pending));
+
+	/* pass current request to the ring */
 	memcpy(new_req, req, sizeof(*new_req));
 
 	ring->req_prod_pvt++;
@@ -410,10 +418,24 @@ int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req)
 		notify_remote_via_irq(ring_info->irq);
 	}
 
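+	/* with wait set, poll for the response: up to 1000 iterations of
+	 * usleep_range(100, 120) bounds the wait at roughly 100-120 ms
+	 */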
+	if (wait) {
+		while (timeout--) {
+			if (req_pending.status !=
+			    HYPER_DMABUF_REQ_NOT_RESPONDED)
+				break;
+			usleep_range(100, 120);
+		}
+
+		if (timeout < 0) {
+			printk("request timed-out\n");
+			return -EBUSY;
+		}
+	}
+
 	return 0;
 }
 
-/* ISR for request from exporter (as an importer) */
+/* ISR for handling request */
 static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *info)
 {
 	RING_IDX rc, rp;
@@ -444,6 +466,9 @@ static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *info)
 			ret = hyper_dmabuf_msg_parse(ring_info->sdomain, &req);
 
 			if (ret > 0) {
+				/* preparing a response for the request and send it to
+				 * the requester
+				 */
 				memcpy(&resp, &req, sizeof(resp));
 				memcpy(RING_GET_RESPONSE(ring, ring->rsp_prod_pvt), &resp,
 							sizeof(resp));
@@ -465,7 +490,7 @@ static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *info)
 	return IRQ_HANDLED;
 }
 
-/* ISR for responses from importer */
+/* ISR for handling responses */
 static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *info)
 {
 	/* front ring only care about response from back */
@@ -483,10 +508,13 @@ static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *info)
 		more_to_do = 0;
 		rp = ring->sring->rsp_prod;
 		for (i = ring->rsp_cons; i != rp; i++) {
-			unsigned long id;
-
 			resp = RING_GET_RESPONSE(ring, i);
-			id = resp->response_id;
+
+			/* update pending request's status with what is
+			 * in the response
+			 */
+			if (req_pending.request_id == resp->response_id)
+				req_pending.status = resp->status;
 
 			if (resp->status == HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP) {
 				/* parsing response */
@@ -496,6 +524,14 @@ static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *info)
 				if (ret < 0) {
 					printk("getting error while parsing response\n");
 				}
+			} else if (resp->status == HYPER_DMABUF_REQ_PROCESSED) {
+				/* for debugging dma_buf remote synchronization */
+				printk("original request = 0x%x\n", resp->command);
+				printk("Just got HYPER_DMABUF_REQ_PROCESSED\n");
+			} else if (resp->status == HYPER_DMABUF_REQ_ERROR) {
+				/* for debugging dma_buf remote synchronization */
+				printk("original request = 0x%x\n", resp->command);
+				printk("Just got HYPER_DMABUF_REQ_ERROR\n");
 			}
 		}
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
index a4819ca..4ab031a 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
@@ -61,7 +61,7 @@ void hyper_dmabuf_importer_ringbuf_cleanup(int sdomain);
 void hyper_dmabuf_cleanup_ringbufs(void);
 
 /* send request to the remote domain */
-int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req);
+int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req, int wait);
 
 /* called by interrupt (WORKQUEUE) */
 int hyper_dmabuf_send_response(struct hyper_dmabuf_ring_rp* response, int domain);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

@@ -384,7 +395,7 @@ int hyper_dmabuf_cleanup_gref_table(struct hyper_dmabuf_sgt_info *sgt_info) {
 	free_pages((unsigned long)shared_pages_info->top_level_page, 1);
 
 	/* End foreign access for data pages, but do not free them */
-	for (i = 0; i < sgt_info->sgt->nents; i++) {
+	for (i = 0; i < sgt_info->active_sgts->sgt->nents; i++) {
 		if (gnttab_query_foreign_access(shared_pages_info->data_refs[i])) {
 			printk("refid not shared !!\n");
 		}
@@ -404,12 +415,14 @@ int hyper_dmabuf_cleanup_gref_table(struct hyper_dmabuf_sgt_info *sgt_info) {
 int hyper_dmabuf_cleanup_imported_pages(struct hyper_dmabuf_imported_sgt_info *sgt_info) {
 	struct hyper_dmabuf_shared_pages_info *shared_pages_info = &sgt_info->shared_pages_info;
 
-	if(shared_pages_info->unmap_ops == NULL || shared_pages_info->data_pages == NULL) {
+	if(shared_pages_info->unmap_ops == NULL ||
+	   shared_pages_info->data_pages == NULL) {
 		printk("Imported pages already cleaned up or buffer was not imported yet\n");
 		return 0;
 	}
 
-	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL, shared_pages_info->data_pages, sgt_info->nents) ) {
+	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL,
+			      shared_pages_info->data_pages, sgt_info->nents) ) {
 		printk("Cannot unmap data pages\n");
 		return -EINVAL;
 	}
@@ -424,7 +437,8 @@ int hyper_dmabuf_cleanup_imported_pages(struct hyper_dmabuf_imported_sgt_info *s
 }
 
 /* map and construct sg_lists from reference numbers */
-struct sg_table* hyper_dmabuf_map_pages(grant_ref_t top_level_gref, int frst_ofst, int last_len, int nents, int sdomain,
+struct sg_table* hyper_dmabuf_map_pages(grant_ref_t top_level_gref, int frst_ofst,
+					int last_len, int nents, int sdomain,
 					struct hyper_dmabuf_shared_pages_info *shared_pages_info)
 {
 	struct sg_table *st;
@@ -451,13 +465,16 @@ struct sg_table* hyper_dmabuf_map_pages(grant_ref_t top_level_gref, int frst_ofs
 		return NULL;
 	}
 
-	ops = (struct gnttab_map_grant_ref *)kcalloc(nents, sizeof(struct gnttab_map_grant_ref), GFP_KERNEL);
-	unmap_ops = (struct gnttab_unmap_grant_ref *)kcalloc(nents, sizeof(struct gnttab_unmap_grant_ref), GFP_KERNEL);
+	ops = kcalloc(nents, sizeof(struct gnttab_map_grant_ref),
+		      GFP_KERNEL);
+	unmap_ops = kcalloc(nents, sizeof(struct gnttab_unmap_grant_ref),
+			    GFP_KERNEL);
 
 	for (i=0; i<nents; i++) {
 		addr = (unsigned long)pfn_to_kaddr(page_to_pfn(pages[i]));
 		refs = pfn_to_kaddr(page_to_pfn(refid_pages[i / REFS_PER_PAGE]));
-		gnttab_set_map_op(&ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, refs[i % REFS_PER_PAGE], sdomain);
+		gnttab_set_map_op(&ops[i], addr, GNTMAP_host_map | GNTMAP_readonly,
+				refs[i % REFS_PER_PAGE], sdomain);
 		gnttab_set_unmap_op(&unmap_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, -1);
 	}
 
@@ -478,7 +495,8 @@ struct sg_table* hyper_dmabuf_map_pages(grant_ref_t top_level_gref, int frst_ofs
 
 	st = hyper_dmabuf_create_sgt(pages, frst_ofst, last_len, nents);
 
-	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL, refid_pages, n_level2_refs) ) {
+	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL, refid_pages,
+			n_level2_refs) ) {
 		printk("Cannot unmap 2nd level refs\n");
 		return NULL;
 	}
@@ -507,10 +525,8 @@ inline int hyper_dmabuf_sync_request_and_wait(int id, int ops)
 
 	hyper_dmabuf_create_request(req, HYPER_DMABUF_OPS_TO_SOURCE, &operands[0]);
 
-	/* send request */
-	ret = hyper_dmabuf_send_request(id, req);
-
-	/* TODO: wait until it gets response.. or can we just move on? */
+	/* send request and wait for a response */
+	ret = hyper_dmabuf_send_request(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(id), req, true);
 
 	kfree(req);
 
@@ -528,14 +544,14 @@ static int hyper_dmabuf_ops_attach(struct dma_buf* dmabuf, struct device* dev,
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
-						HYPER_DMABUF_OPS_ATTACH);
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
+						 HYPER_DMABUF_OPS_ATTACH);
 
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		return ret;
 	}
 
-	/* Ignoring ret for now */
 	return 0;
 }
 
@@ -549,8 +565,8 @@ static void hyper_dmabuf_ops_detach(struct dma_buf* dmabuf, struct dma_buf_attac
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
-						HYPER_DMABUF_OPS_DETACH);
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
+						 HYPER_DMABUF_OPS_DETACH);
 
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -583,7 +599,7 @@ static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachme
                 goto err_free_sg;
         }
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_MAP);
 
 	if (ret < 0) {
@@ -615,7 +631,7 @@ static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
 	sg_free_table(sg);
 	kfree(sg);
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_UNMAP);
 
 	if (ret < 0) {
@@ -633,7 +649,7 @@ static void hyper_dmabuf_ops_release(struct dma_buf *dmabuf)
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_RELEASE);
 
 	if (ret < 0) {
@@ -651,7 +667,7 @@ static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf, enum dma_da
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -670,7 +686,7 @@ static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf, enum dma_data
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_END_CPU_ACCESS);
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -689,7 +705,7 @@ static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf, unsigned long
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_KMAP_ATOMIC);
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -708,7 +724,7 @@ static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf, unsigned long
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -725,7 +741,7 @@ static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_KMAP);
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -744,7 +760,7 @@ static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum,
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_KUNMAP);
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -761,7 +777,7 @@ static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf, struct vm_area_struct *
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_MMAP);
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -780,7 +796,7 @@ static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_VMAP);
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -799,7 +815,7 @@ static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_VUNMAP);
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 44a153b..bace8b2 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -6,6 +6,7 @@
 #include <linux/uaccess.h>
 #include <linux/dma-buf.h>
 #include <linux/delay.h>
+#include <linux/list.h>
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_imp.h"
 #include "hyper_dmabuf_list.h"
@@ -121,7 +122,9 @@ static int hyper_dmabuf_export_remote(void *data)
 		return -1;
 	}
 
-	/* Clear ret, as that will cause whole ioctl to return failure to userspace, which is not true */
+	/* Clear ret; otherwise a stale non-zero value would make the
+	 * whole ioctl report failure to userspace, which is not true
+	 */
 	ret = 0;
 
 	sgt = dma_buf_map_attachment(attachment, DMA_BIDIRECTIONAL);
@@ -131,10 +134,26 @@ static int hyper_dmabuf_export_remote(void *data)
 	sgt_info->hyper_dmabuf_id = hyper_dmabuf_id_gen();
 	/* TODO: We might need to consider using port number on event channel? */
 	sgt_info->hyper_dmabuf_rdomain = export_remote_attr->remote_domain;
-	sgt_info->sgt = sgt;
-	sgt_info->attachment = attachment;
 	sgt_info->dma_buf = dma_buf;
 
+	sgt_info->active_sgts = kcalloc(1, sizeof(struct sgt_list), GFP_KERNEL);
+	sgt_info->active_attached = kcalloc(1, sizeof(struct attachment_list), GFP_KERNEL);
+	sgt_info->va_kmapped = kcalloc(1, sizeof(struct kmap_vaddr_list), GFP_KERNEL);
+	sgt_info->va_vmapped = kcalloc(1, sizeof(struct vmap_vaddr_list), GFP_KERNEL);
+
+	sgt_info->active_sgts->sgt = sgt;
+	sgt_info->active_attached->attach = attachment;
+	sgt_info->va_kmapped->vaddr = NULL; /* first vaddr is NULL */
+	sgt_info->va_vmapped->vaddr = NULL; /* first vaddr is NULL */
+
+	/* initialize list of sgt, attachment and vaddr for dmabuf sync
+	 * via shadow dma-buf
+	 */
+	INIT_LIST_HEAD(&sgt_info->active_sgts->list);
+	INIT_LIST_HEAD(&sgt_info->active_attached->list);
+	INIT_LIST_HEAD(&sgt_info->va_kmapped->list);
+	INIT_LIST_HEAD(&sgt_info->va_vmapped->list);
+
 	page_info = hyper_dmabuf_ext_pgs(sgt);
 	if (page_info == NULL)
 		goto fail_export;
@@ -155,7 +174,7 @@ static int hyper_dmabuf_export_remote(void *data)
 	operands[2] = page_info->frst_ofst;
 	operands[3] = page_info->last_len;
 	operands[4] = hyper_dmabuf_create_gref_table(page_info->pages, export_remote_attr->remote_domain,
-						page_info->nents, &sgt_info->shared_pages_info);
+						     page_info->nents, &sgt_info->shared_pages_info);
 	/* driver/application specific private info, max 32 bytes */
 	operands[5] = export_remote_attr->private[0];
 	operands[6] = export_remote_attr->private[1];
@@ -166,7 +185,7 @@ static int hyper_dmabuf_export_remote(void *data)
 
 	/* composing a message to the importer */
 	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT, &operands[0]);
-	if(hyper_dmabuf_send_request(export_remote_attr->remote_domain, req))
+	if(hyper_dmabuf_send_request(export_remote_attr->remote_domain, req, false))
 		goto fail_send_request;
 
 	/* free msg */
@@ -181,10 +200,17 @@ static int hyper_dmabuf_export_remote(void *data)
 	hyper_dmabuf_remove_exported(sgt_info->hyper_dmabuf_id);
 
 fail_export:
-	dma_buf_unmap_attachment(sgt_info->attachment, sgt_info->sgt, DMA_BIDIRECTIONAL);
-	dma_buf_detach(sgt_info->dma_buf, sgt_info->attachment);
+	dma_buf_unmap_attachment(sgt_info->active_attached->attach,
+				 sgt_info->active_sgts->sgt,
+				 DMA_BIDIRECTIONAL);
+	dma_buf_detach(sgt_info->dma_buf, sgt_info->active_attached->attach);
 	dma_buf_put(sgt_info->dma_buf);
 
+	kfree(sgt_info->active_attached);
+	kfree(sgt_info->active_sgts);
+	kfree(sgt_info->va_kmapped);
+	kfree(sgt_info->va_vmapped);
+
 	return -EINVAL;
 }
 
@@ -233,7 +259,8 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 }
 
 /* remove the dmabuf from the database and send a request to the source domain
-* to unmap it. */
+ * to unmap it.
+ */
 static int hyper_dmabuf_destroy(void *data)
 {
 	struct ioctl_hyper_dmabuf_destroy *destroy_attr;
@@ -250,7 +277,9 @@ static int hyper_dmabuf_destroy(void *data)
 
 	/* find dmabuf in export list */
 	sgt_info = hyper_dmabuf_find_exported(destroy_attr->hyper_dmabuf_id);
-	if (sgt_info == NULL) { /* failed to find corresponding entry in export list */
+
+	/* failed to find corresponding entry in export list */
+	if (sgt_info == NULL) {
 		destroy_attr->status = -EINVAL;
 		return -EFAULT;
 	}
@@ -260,8 +289,9 @@ static int hyper_dmabuf_destroy(void *data)
 	hyper_dmabuf_create_request(req, HYPER_DMABUF_DESTROY, &destroy_attr->hyper_dmabuf_id);
 
 	/* now send destroy request to remote domain
-	 * currently assuming there's only one importer exist */
-	ret = hyper_dmabuf_send_request(sgt_info->hyper_dmabuf_rdomain, req);
+	 * currently assuming there is only one importer
+	 */
+	ret = hyper_dmabuf_send_request(sgt_info->hyper_dmabuf_rdomain, req, true);
 	if (ret < 0) {
 		kfree(req);
 		return -EFAULT;
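
One thing worth noting about hyper_dmabuf_export_remote() above: the
four kcalloc() calls are dereferenced without being checked for
failure. A more defensive sequence might look like the following
sketch (not part of the patch; it reuses the function's existing
locals sgt, attachment and dma_buf, and cannot reuse fail_export,
which itself dereferences the new pointers):

	if (!sgt_info->active_sgts || !sgt_info->active_attached ||
	    !sgt_info->va_kmapped || !sgt_info->va_vmapped) {
		/* kfree(NULL) is a no-op, so partial failure is fine */
		kfree(sgt_info->active_attached);
		kfree(sgt_info->active_sgts);
		kfree(sgt_info->va_kmapped);
		kfree(sgt_info->va_vmapped);
		dma_buf_unmap_attachment(attachment, sgt, DMA_BIDIRECTIONAL);
		dma_buf_detach(dma_buf, attachment);
		dma_buf_put(dma_buf);
		return -ENOMEM;
	}
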
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
index ad2109c..2b3ef6b 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
@@ -33,7 +33,7 @@ int hyper_dmabuf_register_exported(struct hyper_dmabuf_sgt_info *info)
 	info_entry->info = info;
 
 	hash_add(hyper_dmabuf_hash_exported, &info_entry->node,
-		info_entry->info->hyper_dmabuf_id);
+		 info_entry->info->hyper_dmabuf_id);
 
 	return 0;
 }
@@ -47,7 +47,7 @@ int hyper_dmabuf_register_imported(struct hyper_dmabuf_imported_sgt_info* info)
 	info_entry->info = info;
 
 	hash_add(hyper_dmabuf_hash_imported, &info_entry->node,
-		info_entry->info->hyper_dmabuf_id);
+		 info_entry->info->hyper_dmabuf_id);
 
 	return 0;
 }
@@ -71,8 +71,8 @@ int hyper_dmabuf_find_id(struct dma_buf *dmabuf, int domid)
 	int bkt;
 
 	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
-		if(info_entry->info->attachment->dmabuf == dmabuf &&
-			info_entry->info->hyper_dmabuf_rdomain == domid)
+		if(info_entry->info->dma_buf == dmabuf &&
+		   info_entry->info->hyper_dmabuf_rdomain == domid)
 			return info_entry->info->hyper_dmabuf_id;
 
 	return -1;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index 8a059c8..2432a4e 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -7,7 +7,7 @@
 #include <linux/workqueue.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_imp.h"
-//#include "hyper_dmabuf_remote_sync.h"
+#include "hyper_dmabuf_remote_sync.h"
 #include "xen/hyper_dmabuf_xen_comm.h"
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_list.h"
@@ -125,7 +125,9 @@ void cmd_process_work(struct work_struct *work)
 		 * operands0 : hyper_dmabuf_id
 		 */
 
-		/* TODO: that should be done on workqueue, when received ack from all importers that buffer is no longer used */
+		/* TODO: this should be done on a workqueue, once an ack
+		 * that the buffer is no longer used has been received
+		 * from all importers
+		 */
 		sgt_info =
 			hyper_dmabuf_find_exported(req->operands[0]);
 
@@ -133,8 +135,10 @@ void cmd_process_work(struct work_struct *work)
 			hyper_dmabuf_cleanup_gref_table(sgt_info);
 
 			/* unmap dmabuf */
-			dma_buf_unmap_attachment(sgt_info->attachment, sgt_info->sgt, DMA_BIDIRECTIONAL);
-			dma_buf_detach(sgt_info->dma_buf, sgt_info->attachment);
+			dma_buf_unmap_attachment(sgt_info->active_attached->attach,
+						 sgt_info->active_sgts->sgt,
+						 DMA_BIDIRECTIONAL);
+			dma_buf_detach(sgt_info->dma_buf, sgt_info->active_attached->attach);
 			dma_buf_put(sgt_info->dma_buf);
 
 			/* TODO: Rest of cleanup, sgt cleanup etc */
@@ -147,16 +151,6 @@ void cmd_process_work(struct work_struct *work)
 		/* for dmabuf synchronization */
 		break;
 
-	/* as importer, command to exporter */
-	case HYPER_DMABUF_OPS_TO_SOURCE:
-		/* notifying dmabuf map/unmap to exporter, map will make the driver to do shadow mapping
-		* or unmapping for synchronization with original exporter (e.g. i915) */
-		/* command : DMABUF_OPS_TO_SOURCE.
-		 * operands0 : hyper_dmabuf_id
-		 * operands1 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
-		 */
-		break;
-
 	default:
 		/* shouldn't get here */
 		/* no matched command, nothing to do.. just return error */
@@ -172,6 +166,7 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
 	struct cmd_process *proc;
 	struct hyper_dmabuf_ring_rq *temp_req;
 	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
+	int ret;
 
 	if (!req) {
 		printk("request is NULL\n");
@@ -216,7 +211,25 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
 		return req->command;
 	}
 
-	temp_req = (struct hyper_dmabuf_ring_rq *)kmalloc(sizeof(*temp_req), GFP_KERNEL);
+	/* dma buf remote synchronization */
+	if (req->command == HYPER_DMABUF_OPS_TO_SOURCE) {
+		/* notify the exporter of a dmabuf map/unmap; a map makes
+		 * the driver do a shadow mapping (an unmap the matching
+		 * unmapping) for synchronization with the original
+		 * exporter (e.g. i915) */
+
+		/* command : DMABUF_OPS_TO_SOURCE.
+		 * operands0 : hyper_dmabuf_id
+		 * operands1 : enum hyper_dmabuf_ops {....}
+		 */
+		ret = hyper_dmabuf_remote_sync(req->operands[0], req->operands[1]);
+		if (ret)
+			req->status = HYPER_DMABUF_REQ_ERROR;
+		else
+			req->status = HYPER_DMABUF_REQ_PROCESSED;
+
+		return req->command;
+	}
+
+	temp_req = kmalloc(sizeof(*temp_req), GFP_KERNEL);
+	if (!temp_req)
+		return -ENOMEM;
 
 	memcpy(temp_req, req, sizeof(*temp_req));
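
To make the new message format concrete: a HYPER_DMABUF_OPS_TO_SOURCE
request carries the buffer id in operands[0] and the dma_buf op code
in operands[1]. The sender side looks roughly like this (a sketch
mirroring hyper_dmabuf_sync_request_and_wait() in hyper_dmabuf_imp.c;
the domid variable stands for the source domain id decoded from the
buffer id via HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID()):

	struct hyper_dmabuf_ring_rq *req;
	int operands[2];
	int ret;

	operands[0] = hyper_dmabuf_id;		/* which exported buffer */
	operands[1] = HYPER_DMABUF_OPS_MAP;	/* which op to replay */

	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
	if (!req)
		return -ENOMEM;

	hyper_dmabuf_create_request(req, HYPER_DMABUF_OPS_TO_SOURCE,
				    &operands[0]);

	/* wait == true: block until the exporter reports either
	 * HYPER_DMABUF_REQ_PROCESSED or HYPER_DMABUF_REQ_ERROR
	 */
	ret = hyper_dmabuf_send_request(domid, req, true);
	kfree(req);
	return ret;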
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
new file mode 100644
index 0000000..6ba932f
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -0,0 +1,189 @@
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/dma-buf.h>
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_drv.h"
+#include "xen/hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_msg.h"
+
+extern struct hyper_dmabuf_private hyper_dmabuf_private;
+
+int hyper_dmabuf_remote_sync(int id, int ops)
+{
+	struct hyper_dmabuf_sgt_info *sgt_info;
+	struct sgt_list *sgtl;
+	struct attachment_list *attachl;
+	struct kmap_vaddr_list *va_kmapl;
+	struct vmap_vaddr_list *va_vmapl;
+	int ret;
+
+	/* find the corresponding SGT for the id */
+	sgt_info = hyper_dmabuf_find_exported(id);
+
+	if (!sgt_info) {
+		printk("dmabuf remote sync::can't find exported list\n");
+		return -EINVAL;
+	}
+
+	switch (ops) {
+	case HYPER_DMABUF_OPS_ATTACH:
+		attachl = kcalloc(1, sizeof(*attachl), GFP_KERNEL);
+
+		attachl->attach = dma_buf_attach(sgt_info->dma_buf,
+						hyper_dmabuf_private.device);
+
+		if (IS_ERR(attachl->attach)) {
+			kfree(attachl);
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_ATTACH\n");
+			return -EINVAL;
+		}
+
+		list_add(&attachl->list, &sgt_info->active_attached->list);
+		break;
+
+	case HYPER_DMABUF_OPS_DETACH:
+		attachl = list_first_entry(&sgt_info->active_attached->list,
+					struct attachment_list, list);
+
+		if (!attachl) {
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_DETACH\n");
+			return -EINVAL;
+		}
+		dma_buf_detach(sgt_info->dma_buf, attachl->attach);
+		list_del(&attachl->list);
+		kfree(attachl);
+		break;
+
+	case HYPER_DMABUF_OPS_MAP:
+		sgtl = kcalloc(1, sizeof(*sgtl), GFP_KERNEL);
+		attachl = list_first_entry(&sgt_info->active_attached->list,
+					struct attachment_list, list);
+		sgtl->sgt = dma_buf_map_attachment(attachl->attach, DMA_BIDIRECTIONAL);
+		if (IS_ERR(sgtl->sgt)) {
+			kfree(sgtl);
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_MAP\n");
+			return -EINVAL;
+		}
+		list_add(&sgtl->list, &sgt_info->active_sgts->list);
+		break;
+
+	case HYPER_DMABUF_OPS_UNMAP:
+		attachl = list_first_entry(&sgt_info->active_attached->list,
+					struct attachment_list, list);
+		sgtl = list_first_entry(&sgt_info->active_sgts->list,
+					struct sgt_list, list);
+		if (!attachl || !sgtl) {
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_UNMAP\n");
+			return -EINVAL;
+		}
+
+		dma_buf_unmap_attachment(attachl->attach, sgtl->sgt,
+					DMA_BIDIRECTIONAL);
+		list_del(&sgtl->list);
+		kfree(sgtl);
+		break;
+
+	case HYPER_DMABUF_OPS_RELEASE:
+		/* remote importer shouldn't release dma_buf because
+		 * exporter will hold handle to the dma_buf as
+		 * far as dma_buf is shared with other domains.
+		 */
+		break;
+
+	case HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS:
+		ret = dma_buf_begin_cpu_access(sgt_info->dma_buf, DMA_BIDIRECTIONAL);
+		if (ret) {
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS\n");
+			return ret;
+		}
+		break;
+
+	case HYPER_DMABUF_OPS_END_CPU_ACCESS:
+		ret = dma_buf_end_cpu_access(sgt_info->dma_buf, DMA_BIDIRECTIONAL);
+		if (ret) {
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_END_CPU_ACCESS\n");
+			return ret;
+		}
+		break;
+
+	case HYPER_DMABUF_OPS_KMAP_ATOMIC:
+	case HYPER_DMABUF_OPS_KMAP:
+		va_kmapl = kcalloc(1, sizeof(*va_kmapl), GFP_KERNEL);
+
+		/* dummy kmapping of 1 page */
+		if (ops == HYPER_DMABUF_OPS_KMAP_ATOMIC)
+			va_kmapl->vaddr = dma_buf_kmap_atomic(sgt_info->dma_buf, 1);
+		else
+			va_kmapl->vaddr = dma_buf_kmap(sgt_info->dma_buf, 1);
+
+		if (!va_kmapl->vaddr) {
+			kfree(va_kmapl);
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
+			return -EINVAL;
+		}
+		list_add(&va_kmapl->list, &sgt_info->va_kmapped->list);
+		break;
+
+	case HYPER_DMABUF_OPS_KUNMAP_ATOMIC:
+	case HYPER_DMABUF_OPS_KUNMAP:
+		va_kmapl = list_first_entry(&sgt_info->va_kmapped->list,
+					struct kmap_vaddr_list, list);
+		if (!va_kmapl || va_kmapl->vaddr == NULL) {
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
+			return -EINVAL;
+		}
+
+		/* unmapping 1 page */
+		if (ops == HYPER_DMABUF_OPS_KUNMAP_ATOMIC)
+			dma_buf_kunmap_atomic(sgt_info->dma_buf, 1, va_kmapl->vaddr);
+		else
+			dma_buf_kunmap(sgt_info->dma_buf, 1, va_kmapl->vaddr);
+
+		list_del(&va_kmapl->list);
+		kfree(va_kmapl);
+		break;
+
+	case HYPER_DMABUF_OPS_MMAP:
+		/* currently not supported: looking for a way to create
+		 * a dummy vma */
+		printk("dmabuf remote sync::synchronized mmap is not supported\n");
+		break;
+
+	case HYPER_DMABUF_OPS_VMAP:
+		va_vmapl = kcalloc(1, sizeof(*va_vmapl), GFP_KERNEL);
+
+		/* dummy vmapping */
+		va_vmapl->vaddr = dma_buf_vmap(sgt_info->dma_buf);
+
+		if (!va_vmapl->vaddr) {
+			kfree(va_vmapl);
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VMAP\n");
+			return -EINVAL;
+		}
+		list_add(&va_vmapl->list, &sgt_info->va_vmapped->list);
+		break;
+
+	case HYPER_DMABUF_OPS_VUNMAP:
+		va_vmapl = list_first_entry(&sgt_info->va_vmapped->list,
+					struct vmap_vaddr_list, list);
+		if (!va_vmapl || va_vmapl->vaddr == NULL) {
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VUNMAP\n");
+			return -EINVAL;
+		}
+
+		dma_buf_vunmap(sgt_info->dma_buf, va_vmapl->vaddr);
+
+		list_del(&va_vmapl->list);
+		kfree(va_vmapl);
+		break;
+
+	default:
+		/* program should not get here */
+		break;
+	}
+
+	return 0;
+}
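
One detail worth flagging in the handler above: list_first_entry()
is a container_of() wrapper and never returns NULL, so checks of the
form if (!attachl) cannot detect an empty list. A check that does
catch the empty case would look like this sketch:

	if (list_empty(&sgt_info->active_attached->list)) {
		printk("dmabuf remote sync::no attachment left to detach\n");
		return -EINVAL;
	}

	attachl = list_first_entry(&sgt_info->active_attached->list,
				   struct attachment_list, list);

The same applies to the UNMAP, KUNMAP(_ATOMIC) and VUNMAP cases.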
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h
new file mode 100644
index 0000000..fc85fa8
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h
@@ -0,0 +1,6 @@
+#ifndef __HYPER_DMABUF_REMOTE_SYNC_H__
+#define __HYPER_DMABUF_REMOTE_SYNC_H__
+
+int hyper_dmabuf_remote_sync(int id, int ops);
+
+#endif // __HYPER_DMABUF_REMOTE_SYNC_H__
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
index c8a2f4d..bfe80ee 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -18,6 +18,30 @@
  * frame buffer) */
 #define MAX_ALLOWED_NUM_PAGES_FOR_GREF_NUM_ARRAYS 4
 
+/* stack of mapped sgts */
+struct sgt_list {
+	struct sg_table *sgt;
+	struct list_head list;
+};
+
+/* stack of attachments */
+struct attachment_list {
+	struct dma_buf_attachment *attach;
+	struct list_head list;
+};
+
+/* stack of vaddr mapped via kmap */
+struct kmap_vaddr_list {
+	void *vaddr;
+	struct list_head list;
+};
+
+/* stack of vaddr mapped via vmap */
+struct vmap_vaddr_list {
+	void *vaddr;
+	struct list_head list;
+};
+
 struct hyper_dmabuf_shared_pages_info {
 	grant_ref_t *data_refs;	/* table with shared buffer pages refid */
 	grant_ref_t *addr_pages; /* pages of 2nd level addressing */
@@ -46,9 +70,13 @@ struct hyper_dmabuf_pages_info {
 struct hyper_dmabuf_sgt_info {
         int hyper_dmabuf_id; /* unique id to reference dmabuf in remote domain */
 	int hyper_dmabuf_rdomain; /* domain importing this sgt */
-        struct sg_table *sgt; /* pointer to sgt */
+
 	struct dma_buf *dma_buf; /* needed to store this for freeing it later */
-	struct dma_buf_attachment *attachment; /* needed to store this for freeing this later */
+	struct sgt_list *active_sgts;
+	struct attachment_list *active_attached;
+	struct kmap_vaddr_list *va_kmapped;
+	struct vmap_vaddr_list *va_vmapped;
+
 	struct hyper_dmabuf_shared_pages_info shared_pages_info;
 	int private[4]; /* device specific info (e.g. image's meta info?) */
 };
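
Since the DESTROY path earlier in this patch still carries a
"TODO: Rest of cleanup" note, it may help to spell out what draining
one of these per-buffer stacks would look like. A hypothetical
teardown for leftover kmaps (sketch only, not part of the patch):

	struct kmap_vaddr_list *va_kmapl, *tmp;

	/* undo any kmap the importer never paired with a kunmap */
	list_for_each_entry_safe(va_kmapl, tmp,
				 &sgt_info->va_kmapped->list, list) {
		if (va_kmapl->vaddr)
			dma_buf_kunmap(sgt_info->dma_buf, 1,
				       va_kmapl->vaddr);
		list_del(&va_kmapl->list);
		kfree(va_kmapl);
	}
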
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index 5db58b0..576085f 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -3,6 +3,7 @@
 #include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/workqueue.h>
+#include <linux/delay.h>
 #include <xen/grant_table.h>
 #include <xen/events.h>
 #include <xen/xenbus.h>
@@ -15,6 +16,8 @@
 
 static int export_req_id = 0;
 
+static struct hyper_dmabuf_ring_rq req_pending = {0};
+
 /* Creates entry in xen store that will keep details of all exporter rings created by this domain */
 int32_t hyper_dmabuf_setup_data_dir()
 {
@@ -114,8 +117,8 @@ int hyper_dmabuf_next_req_id_export(void)
 }
 
 /* For now cache the latest rings as global variables. TODO: keep them in a list */
-static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *dev_id);
-static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *dev_id);
+static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *info);
+static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *info);
 
 /*
  * Callback function that will be called on any change of xenbus path being watched.
@@ -376,12 +379,13 @@ void hyper_dmabuf_cleanup_ringbufs(void)
 	hyper_dmabuf_foreach_importer_ring(hyper_dmabuf_importer_ringbuf_cleanup);
 }
 
-int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req)
+int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req, int wait)
 {
 	struct hyper_dmabuf_front_ring *ring;
 	struct hyper_dmabuf_ring_rq *new_req;
 	struct hyper_dmabuf_ring_info_export *ring_info;
 	int notify;
+	int timeout = 1000;
 
 	/* find a ring info for the channel */
 	ring_info = hyper_dmabuf_find_exporter_ring(domain);
@@ -401,6 +405,10 @@ int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req)
 		return -EIO;
 	}
 
+	/* update req_pending with current request */
+	memcpy(&req_pending, req, sizeof(req_pending));
+
+	/* pass current request to the ring */
 	memcpy(new_req, req, sizeof(*new_req));
 
 	ring->req_prod_pvt++;
@@ -410,10 +418,24 @@ int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req)
 		notify_remote_via_irq(ring_info->irq);
 	}
 
+	if (wait) {
+		while (timeout--) {
+			if (req_pending.status !=
+			    HYPER_DMABUF_REQ_NOT_RESPONDED)
+				break;
+			usleep_range(100, 120);
+		}
+
+		if (timeout < 0) {
+			printk("request timed out\n");
+			return -EBUSY;
+		}
+	}
+
 	return 0;
 }
 
-/* ISR for request from exporter (as an importer) */
+/* ISR for handling request */
 static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *info)
 {
 	RING_IDX rc, rp;
@@ -444,6 +466,9 @@ static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *info)
 			ret = hyper_dmabuf_msg_parse(ring_info->sdomain, &req);
 
 			if (ret > 0) {
+				/* prepare a response for the request and
+				 * send it back to the requester
+				 */
 				memcpy(&resp, &req, sizeof(resp));
 				memcpy(RING_GET_RESPONSE(ring, ring->rsp_prod_pvt), &resp,
 							sizeof(resp));
@@ -465,7 +490,7 @@ static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *info)
 	return IRQ_HANDLED;
 }
 
-/* ISR for responses from importer */
+/* ISR for handling responses */
 static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *info)
 {
 	/* front ring only care about response from back */
@@ -483,10 +508,13 @@ static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *info)
 		more_to_do = 0;
 		rp = ring->sring->rsp_prod;
 		for (i = ring->rsp_cons; i != rp; i++) {
-			unsigned long id;
-
 			resp = RING_GET_RESPONSE(ring, i);
-			id = resp->response_id;
+
+			/* update pending request's status with what is
+			 * in the response
+			 */
+			if (req_pending.request_id == resp->response_id)
+				req_pending.status = resp->status;
 
 			if (resp->status == HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP) {
 				/* parsing response */
@@ -496,6 +524,14 @@ static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *info)
 				if (ret < 0) {
 					printk("getting error while parsing response\n");
 				}
+			} else if (resp->status == HYPER_DMABUF_REQ_PROCESSED) {
+				/* for debugging dma_buf remote synchronization */
+				printk("original request = 0x%x\n", resp->command);
+				printk("Just got HYPER_DMABUF_REQ_PROCESSED\n");
+			} else if (resp->status == HYPER_DMABUF_REQ_ERROR) {
+				/* for debugging dma_buf remote synchronization */
+				printk("original request = 0x%x\n", resp->command);
+				printk("Just got HYPER_DMABUF_REQ_ERROR\n");
 			}
 		}
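
Also note that the wait path added above polls a single global
req_pending (up to 1000 iterations of usleep_range(100, 120), i.e. a
budget of roughly 100-120 ms) with no locking, so only one
synchronous request can be outstanding at a time. A per-request wait,
sketched here purely as a possible alternative (the struct and the
waiter variable w are hypothetical; w's done field is set up with
init_completion() when the request is issued), would typically use a
completion from <linux/completion.h>:

	struct hyper_dmabuf_req_wait {
		int request_id;
		int status;
		struct completion done;
	};

	/* sender, after pushing the request onto the ring */
	if (wait) {
		if (!wait_for_completion_timeout(&w->done,
						 msecs_to_jiffies(100)))
			return -EBUSY;	/* no response in time */
		if (w->status == HYPER_DMABUF_REQ_ERROR)
			return -EIO;
	}

	/* front ring ISR, matching the response to the waiter */
	if (w->request_id == resp->response_id) {
		w->status = resp->status;
		complete(&w->done);
	}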
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
index a4819ca..4ab031a 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
@@ -61,7 +61,7 @@ void hyper_dmabuf_importer_ringbuf_cleanup(int sdomain);
 void hyper_dmabuf_cleanup_ringbufs(void);
 
 /* send request to the remote domain */
-int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req);
+int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req, int wait);
 
 /* called by interrupt (WORKQUEUE) */
 int hyper_dmabuf_send_response(struct hyper_dmabuf_ring_rp* response, int domain);
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 09/60] hyper_dmabuf: indirect DMA_BUF synchronization via shadowing
  2017-12-19 19:29 ` Dongwon Kim
                   ` (15 preceding siblings ...)
  (?)
@ 2017-12-19 19:29 ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: xen-devel, mateuszx.potrola, dri-devel, dongwon.kim

Importer now sends a synchronization request to the
exporter when any of DMA_BUF operations on imported
Hyper_DMABUF is executed (e.g dma_buf_map and dma_buf_unmap).
This results in a creation of shadow DMA_BUF and exactly same
DMA_BUF operation to be executed on it.

The main purpose of this is to get DMA_BUF synchronized
eventually between the original creator of DMA_BUF and the
end consumer of it running on the importer VM.

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/Makefile                  |   1 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c        |  90 ++++++----
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      |  52 ++++--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c       |   8 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        |  43 +++--
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c    | 189 +++++++++++++++++++++
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h    |   6 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h     |  32 +++-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   |  52 +++++-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h   |   2 +-
 10 files changed, 397 insertions(+), 78 deletions(-)
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h

diff --git a/drivers/xen/hyper_dmabuf/Makefile b/drivers/xen/hyper_dmabuf/Makefile
index 0be7445..3459382 100644
--- a/drivers/xen/hyper_dmabuf/Makefile
+++ b/drivers/xen/hyper_dmabuf/Makefile
@@ -7,6 +7,7 @@ ifneq ($(KERNELRELEASE),)
                                  hyper_dmabuf_list.o \
 				 hyper_dmabuf_imp.o \
 				 hyper_dmabuf_msg.o \
+				 hyper_dmabuf_remote_sync.o \
 				 xen/hyper_dmabuf_xen_comm.o \
 				 xen/hyper_dmabuf_xen_comm_list.o
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
index 6b16e37..2c78bc1 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
@@ -169,7 +169,8 @@ grant_ref_t hyper_dmabuf_create_addressing_tables(grant_ref_t *data_refs, int ne
 	/*
 	 * Calculate number of pages needed for 2nd level addresing:
 	 */
-	int n_2nd_level_pages = (nents/REFS_PER_PAGE + ((nents % REFS_PER_PAGE) ? 1: 0));/* rounding */
+	int n_2nd_level_pages = (nents/REFS_PER_PAGE +
+				((nents % REFS_PER_PAGE) ? 1: 0));
 	int i;
 	unsigned long gref_page_start;
 	grant_ref_t *tmp_page;
@@ -187,7 +188,9 @@ grant_ref_t hyper_dmabuf_create_addressing_tables(grant_ref_t *data_refs, int ne
 
 	/* Share 2nd level addressing pages in readonly mode*/
 	for (i=0; i< n_2nd_level_pages; i++) {
-		addr_refs[i] = gnttab_grant_foreign_access(rdomain, virt_to_mfn((unsigned long)tmp_page+i*PAGE_SIZE ), 1);
+		addr_refs[i] = gnttab_grant_foreign_access(rdomain,
+							   virt_to_mfn((unsigned long)tmp_page+i*PAGE_SIZE ),
+							   1);
 	}
 
 	/*
@@ -213,7 +216,9 @@ grant_ref_t hyper_dmabuf_create_addressing_tables(grant_ref_t *data_refs, int ne
 	}
 
 	/* Share top level addressing page in readonly mode*/
-	top_level_ref = gnttab_grant_foreign_access(rdomain, virt_to_mfn((unsigned long)tmp_page), 1);
+	top_level_ref = gnttab_grant_foreign_access(rdomain,
+						    virt_to_mfn((unsigned long)tmp_page),
+						    1);
 
 	kfree(addr_refs);
 
@@ -255,7 +260,9 @@ struct page** hyper_dmabuf_get_data_refs(grant_ref_t top_level_ref, int domid, i
 	}
 
 	addr = (unsigned long)pfn_to_kaddr(page_to_pfn(top_level_page));
-	gnttab_set_map_op(&top_level_map_ops, addr, GNTMAP_host_map | GNTMAP_readonly, top_level_ref, domid);
+	gnttab_set_map_op(&top_level_map_ops, addr, GNTMAP_host_map | GNTMAP_readonly,
+			  top_level_ref, domid);
+
 	gnttab_set_unmap_op(&top_level_unmap_ops, addr, GNTMAP_host_map | GNTMAP_readonly, -1);
 
 	if (gnttab_map_refs(&top_level_map_ops, NULL, &top_level_page, 1)) {
@@ -282,7 +289,8 @@ struct page** hyper_dmabuf_get_data_refs(grant_ref_t top_level_ref, int domid, i
 
 	for (i = 0; i < n_level2_refs; i++) {
 		addr = (unsigned long)pfn_to_kaddr(page_to_pfn(level2_pages[i]));
-		gnttab_set_map_op(&map_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, top_level_refs[i], domid);
+		gnttab_set_map_op(&map_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly,
+				  top_level_refs[i], domid);
 		gnttab_set_unmap_op(&unmap_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, -1);
 	}
 
@@ -295,7 +303,7 @@ struct page** hyper_dmabuf_get_data_refs(grant_ref_t top_level_ref, int domid, i
 	for (i = 0; i < n_level2_refs; i++) {
 		if (map_ops[i].status) {
 			printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d",
-					map_ops[i].status);
+			       map_ops[i].status);
 			return NULL;
 		} else {
 			unmap_ops[i].handle = map_ops[i].handle;
@@ -331,7 +339,9 @@ grant_ref_t hyper_dmabuf_create_gref_table(struct page **pages, int rdomain, int
 
 	/* share data pages in rw mode*/
 	for (i=0; i<nents; i++) {
-		data_refs[i] = gnttab_grant_foreign_access(rdomain, pfn_to_mfn(page_to_pfn(pages[i])), 0);
+		data_refs[i] = gnttab_grant_foreign_access(rdomain,
+							   pfn_to_mfn(page_to_pfn(pages[i])),
+							   0);
 	}
 
 	/* create additional shared pages with 2 level addressing of data pages */
@@ -350,7 +360,8 @@ int hyper_dmabuf_cleanup_gref_table(struct hyper_dmabuf_sgt_info *sgt_info) {
 	struct hyper_dmabuf_shared_pages_info *shared_pages_info = &sgt_info->shared_pages_info;
 
 	grant_ref_t *ref = shared_pages_info->top_level_page;
-	int n_2nd_level_pages = (sgt_info->sgt->nents/REFS_PER_PAGE + ((sgt_info->sgt->nents % REFS_PER_PAGE) ? 1: 0));/* rounding */
+	int n_2nd_level_pages = (sgt_info->active_sgts->sgt->nents/REFS_PER_PAGE +
+				((sgt_info->active_sgts->sgt->nents % REFS_PER_PAGE) ? 1: 0));
 
 
 	if (shared_pages_info->data_refs == NULL ||
@@ -384,7 +395,7 @@ int hyper_dmabuf_cleanup_gref_table(struct hyper_dmabuf_sgt_info *sgt_info) {
 	free_pages((unsigned long)shared_pages_info->top_level_page, 1);
 
 	/* End foreign access for data pages, but do not free them */
-	for (i = 0; i < sgt_info->sgt->nents; i++) {
+	for (i = 0; i < sgt_info->active_sgts->sgt->nents; i++) {
 		if (gnttab_query_foreign_access(shared_pages_info->data_refs[i])) {
 			printk("refid not shared !!\n");
 		}
@@ -404,12 +415,14 @@ int hyper_dmabuf_cleanup_gref_table(struct hyper_dmabuf_sgt_info *sgt_info) {
 int hyper_dmabuf_cleanup_imported_pages(struct hyper_dmabuf_imported_sgt_info *sgt_info) {
 	struct hyper_dmabuf_shared_pages_info *shared_pages_info = &sgt_info->shared_pages_info;
 
-	if(shared_pages_info->unmap_ops == NULL || shared_pages_info->data_pages == NULL) {
+	if(shared_pages_info->unmap_ops == NULL ||
+	   shared_pages_info->data_pages == NULL) {
 		printk("Imported pages already cleaned up or buffer was not imported yet\n");
 		return 0;
 	}
 
-	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL, shared_pages_info->data_pages, sgt_info->nents) ) {
+	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL,
+			      shared_pages_info->data_pages, sgt_info->nents) ) {
 		printk("Cannot unmap data pages\n");
 		return -EINVAL;
 	}
@@ -424,7 +437,8 @@ int hyper_dmabuf_cleanup_imported_pages(struct hyper_dmabuf_imported_sgt_info *s
 }
 
 /* map and construct sg_lists from reference numbers */
-struct sg_table* hyper_dmabuf_map_pages(grant_ref_t top_level_gref, int frst_ofst, int last_len, int nents, int sdomain,
+struct sg_table* hyper_dmabuf_map_pages(grant_ref_t top_level_gref, int frst_ofst,
+					int last_len, int nents, int sdomain,
 					struct hyper_dmabuf_shared_pages_info *shared_pages_info)
 {
 	struct sg_table *st;
@@ -451,13 +465,16 @@ struct sg_table* hyper_dmabuf_map_pages(grant_ref_t top_level_gref, int frst_ofs
 		return NULL;
 	}
 
-	ops = (struct gnttab_map_grant_ref *)kcalloc(nents, sizeof(struct gnttab_map_grant_ref), GFP_KERNEL);
-	unmap_ops = (struct gnttab_unmap_grant_ref *)kcalloc(nents, sizeof(struct gnttab_unmap_grant_ref), GFP_KERNEL);
+	ops = kcalloc(nents, sizeof(struct gnttab_map_grant_ref),
+		      GFP_KERNEL);
+	unmap_ops = kcalloc(nents, sizeof(struct gnttab_unmap_grant_ref),
+			    GFP_KERNEL);
 
 	for (i=0; i<nents; i++) {
 		addr = (unsigned long)pfn_to_kaddr(page_to_pfn(pages[i]));
 		refs = pfn_to_kaddr(page_to_pfn(refid_pages[i / REFS_PER_PAGE]));
-		gnttab_set_map_op(&ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, refs[i % REFS_PER_PAGE], sdomain);
+		gnttab_set_map_op(&ops[i], addr, GNTMAP_host_map | GNTMAP_readonly,
+				refs[i % REFS_PER_PAGE], sdomain);
 		gnttab_set_unmap_op(&unmap_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, -1);
 	}
 
@@ -478,7 +495,8 @@ struct sg_table* hyper_dmabuf_map_pages(grant_ref_t top_level_gref, int frst_ofs
 
 	st = hyper_dmabuf_create_sgt(pages, frst_ofst, last_len, nents);
 
-	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL, refid_pages, n_level2_refs) ) {
+	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL, refid_pages,
+			n_level2_refs) ) {
 		printk("Cannot unmap 2nd level refs\n");
 		return NULL;
 	}
@@ -507,10 +525,8 @@ inline int hyper_dmabuf_sync_request_and_wait(int id, int ops)
 
 	hyper_dmabuf_create_request(req, HYPER_DMABUF_OPS_TO_SOURCE, &operands[0]);
 
-	/* send request */
-	ret = hyper_dmabuf_send_request(id, req);
-
-	/* TODO: wait until it gets response.. or can we just move on? */
+	/* send request and wait for a response */
+	ret = hyper_dmabuf_send_request(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(id), req, true);
 
 	kfree(req);
 
@@ -528,14 +544,14 @@ static int hyper_dmabuf_ops_attach(struct dma_buf* dmabuf, struct device* dev,
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
-						HYPER_DMABUF_OPS_ATTACH);
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
+						 HYPER_DMABUF_OPS_ATTACH);
 
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		return ret;
 	}
 
-	/* Ignoring ret for now */
 	return 0;
 }
 
@@ -549,8 +565,8 @@ static void hyper_dmabuf_ops_detach(struct dma_buf* dmabuf, struct dma_buf_attac
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
-						HYPER_DMABUF_OPS_DETACH);
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
+						 HYPER_DMABUF_OPS_DETACH);
 
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -583,7 +599,7 @@ static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachme
                 goto err_free_sg;
         }
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_MAP);
 
 	if (ret < 0) {
@@ -615,7 +631,7 @@ static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
 	sg_free_table(sg);
 	kfree(sg);
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_UNMAP);
 
 	if (ret < 0) {
@@ -633,7 +649,7 @@ static void hyper_dmabuf_ops_release(struct dma_buf *dmabuf)
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_RELEASE);
 
 	if (ret < 0) {
@@ -651,7 +667,7 @@ static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf, enum dma_da
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -670,7 +686,7 @@ static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf, enum dma_data
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_END_CPU_ACCESS);
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -689,7 +705,7 @@ static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf, unsigned long
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_KMAP_ATOMIC);
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -708,7 +724,7 @@ static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf, unsigned long
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -725,7 +741,7 @@ static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_KMAP);
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -744,7 +760,7 @@ static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum,
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_KUNMAP);
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -761,7 +777,7 @@ static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf, struct vm_area_struct *
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_MMAP);
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -780,7 +796,7 @@ static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_VMAP);
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -799,7 +815,7 @@ static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_VUNMAP);
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 44a153b..bace8b2 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -6,6 +6,7 @@
 #include <linux/uaccess.h>
 #include <linux/dma-buf.h>
 #include <linux/delay.h>
+#include <linux/list.h>
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_imp.h"
 #include "hyper_dmabuf_list.h"
@@ -121,7 +122,9 @@ static int hyper_dmabuf_export_remote(void *data)
 		return -1;
 	}
 
-	/* Clear ret, as that will cause whole ioctl to return failure to userspace, which is not true */
+	/* Clear ret, as that will cause whole ioctl to return failure
+	 * to userspace, which is not true
+	 */
 	ret = 0;
 
 	sgt = dma_buf_map_attachment(attachment, DMA_BIDIRECTIONAL);
@@ -131,10 +134,26 @@ static int hyper_dmabuf_export_remote(void *data)
 	sgt_info->hyper_dmabuf_id = hyper_dmabuf_id_gen();
 	/* TODO: We might need to consider using port number on event channel? */
 	sgt_info->hyper_dmabuf_rdomain = export_remote_attr->remote_domain;
-	sgt_info->sgt = sgt;
-	sgt_info->attachment = attachment;
 	sgt_info->dma_buf = dma_buf;
 
+	sgt_info->active_sgts = kcalloc(1, sizeof(struct sgt_list), GFP_KERNEL);
+	sgt_info->active_attached = kcalloc(1, sizeof(struct attachment_list), GFP_KERNEL);
+	sgt_info->va_kmapped = kcalloc(1, sizeof(struct kmap_vaddr_list), GFP_KERNEL);
+	sgt_info->va_vmapped = kcalloc(1, sizeof(struct vmap_vaddr_list), GFP_KERNEL);
+
+	sgt_info->active_sgts->sgt = sgt;
+	sgt_info->active_attached->attach = attachment;
+	sgt_info->va_kmapped->vaddr = NULL; /* first vaddr is NULL */
+	sgt_info->va_vmapped->vaddr = NULL; /* first vaddr is NULL */
+
+	/* initialize list of sgt, attachment and vaddr for dmabuf sync
+	 * via shadow dma-buf
+	 */
+	INIT_LIST_HEAD(&sgt_info->active_sgts->list);
+	INIT_LIST_HEAD(&sgt_info->active_attached->list);
+	INIT_LIST_HEAD(&sgt_info->va_kmapped->list);
+	INIT_LIST_HEAD(&sgt_info->va_vmapped->list);
+
 	page_info = hyper_dmabuf_ext_pgs(sgt);
 	if (page_info == NULL)
 		goto fail_export;
@@ -155,7 +174,7 @@ static int hyper_dmabuf_export_remote(void *data)
 	operands[2] = page_info->frst_ofst;
 	operands[3] = page_info->last_len;
 	operands[4] = hyper_dmabuf_create_gref_table(page_info->pages, export_remote_attr->remote_domain,
-						page_info->nents, &sgt_info->shared_pages_info);
+						     page_info->nents, &sgt_info->shared_pages_info);
 	/* driver/application specific private info, max 32 bytes */
 	operands[5] = export_remote_attr->private[0];
 	operands[6] = export_remote_attr->private[1];
@@ -166,7 +185,7 @@ static int hyper_dmabuf_export_remote(void *data)
 
 	/* composing a message to the importer */
 	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT, &operands[0]);
-	if(hyper_dmabuf_send_request(export_remote_attr->remote_domain, req))
+	if(hyper_dmabuf_send_request(export_remote_attr->remote_domain, req, false))
 		goto fail_send_request;
 
 	/* free msg */
@@ -181,10 +200,17 @@ static int hyper_dmabuf_export_remote(void *data)
 	hyper_dmabuf_remove_exported(sgt_info->hyper_dmabuf_id);
 
 fail_export:
-	dma_buf_unmap_attachment(sgt_info->attachment, sgt_info->sgt, DMA_BIDIRECTIONAL);
-	dma_buf_detach(sgt_info->dma_buf, sgt_info->attachment);
+	dma_buf_unmap_attachment(sgt_info->active_attached->attach,
+				 sgt_info->active_sgts->sgt,
+				 DMA_BIDIRECTIONAL);
+	dma_buf_detach(sgt_info->dma_buf, sgt_info->active_attached->attach);
 	dma_buf_put(sgt_info->dma_buf);
 
+	kfree(sgt_info->active_attached);
+	kfree(sgt_info->active_sgts);
+	kfree(sgt_info->va_kmapped);
+	kfree(sgt_info->va_vmapped);
+
 	return -EINVAL;
 }
 
@@ -233,7 +259,8 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 }
 
 /* removing dmabuf from the database and send int req to the source domain
-* to unmap it. */
+ * to unmap it.
+ */
 static int hyper_dmabuf_destroy(void *data)
 {
 	struct ioctl_hyper_dmabuf_destroy *destroy_attr;
@@ -250,7 +277,9 @@ static int hyper_dmabuf_destroy(void *data)
 
 	/* find dmabuf in export list */
 	sgt_info = hyper_dmabuf_find_exported(destroy_attr->hyper_dmabuf_id);
-	if (sgt_info == NULL) { /* failed to find corresponding entry in export list */
+
+	/* failed to find corresponding entry in export list */
+	if (sgt_info == NULL) {
 		destroy_attr->status = -EINVAL;
 		return -EFAULT;
 	}
@@ -260,8 +289,9 @@ static int hyper_dmabuf_destroy(void *data)
 	hyper_dmabuf_create_request(req, HYPER_DMABUF_DESTROY, &destroy_attr->hyper_dmabuf_id);
 
 	/* now send destroy request to remote domain
-	 * currently assuming there's only one importer exist */
-	ret = hyper_dmabuf_send_request(sgt_info->hyper_dmabuf_rdomain, req);
+	 * currently assuming there's only one importer exists
+	 */
+	ret = hyper_dmabuf_send_request(sgt_info->hyper_dmabuf_rdomain, req, true);
 	if (ret < 0) {
 		kfree(req);
 		return -EFAULT;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
index ad2109c..2b3ef6b 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
@@ -33,7 +33,7 @@ int hyper_dmabuf_register_exported(struct hyper_dmabuf_sgt_info *info)
 	info_entry->info = info;
 
 	hash_add(hyper_dmabuf_hash_exported, &info_entry->node,
-		info_entry->info->hyper_dmabuf_id);
+		 info_entry->info->hyper_dmabuf_id);
 
 	return 0;
 }
@@ -47,7 +47,7 @@ int hyper_dmabuf_register_imported(struct hyper_dmabuf_imported_sgt_info* info)
 	info_entry->info = info;
 
 	hash_add(hyper_dmabuf_hash_imported, &info_entry->node,
-		info_entry->info->hyper_dmabuf_id);
+		 info_entry->info->hyper_dmabuf_id);
 
 	return 0;
 }
@@ -71,8 +71,8 @@ int hyper_dmabuf_find_id(struct dma_buf *dmabuf, int domid)
 	int bkt;
 
 	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
-		if(info_entry->info->attachment->dmabuf == dmabuf &&
-			info_entry->info->hyper_dmabuf_rdomain == domid)
+		if(info_entry->info->dma_buf == dmabuf &&
+		   info_entry->info->hyper_dmabuf_rdomain == domid)
 			return info_entry->info->hyper_dmabuf_id;
 
 	return -1;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index 8a059c8..2432a4e 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -7,7 +7,7 @@
 #include <linux/workqueue.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_imp.h"
-//#include "hyper_dmabuf_remote_sync.h"
+#include "hyper_dmabuf_remote_sync.h"
 #include "xen/hyper_dmabuf_xen_comm.h"
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_list.h"
@@ -125,7 +125,9 @@ void cmd_process_work(struct work_struct *work)
 		 * operands0 : hyper_dmabuf_id
 		 */
 
-		/* TODO: that should be done on workqueue, when received ack from all importers that buffer is no longer used */
+		/* TODO: that should be done on workqueue, when received ack from
+		 * all importers that buffer is no longer used
+		 */
 		sgt_info =
 			hyper_dmabuf_find_exported(req->operands[0]);
 
@@ -133,8 +135,10 @@ void cmd_process_work(struct work_struct *work)
 			hyper_dmabuf_cleanup_gref_table(sgt_info);
 
 			/* unmap dmabuf */
-			dma_buf_unmap_attachment(sgt_info->attachment, sgt_info->sgt, DMA_BIDIRECTIONAL);
-			dma_buf_detach(sgt_info->dma_buf, sgt_info->attachment);
+			dma_buf_unmap_attachment(sgt_info->active_attached->attach,
+						 sgt_info->active_sgts->sgt,
+						 DMA_BIDIRECTIONAL);
+			dma_buf_detach(sgt_info->dma_buf, sgt_info->active_attached->attach);
 			dma_buf_put(sgt_info->dma_buf);
 
 			/* TODO: Rest of cleanup, sgt cleanup etc */
@@ -147,16 +151,6 @@ void cmd_process_work(struct work_struct *work)
 		/* for dmabuf synchronization */
 		break;
 
-	/* as importer, command to exporter */
-	case HYPER_DMABUF_OPS_TO_SOURCE:
-		/* notifying dmabuf map/unmap to exporter, map will make the driver to do shadow mapping
-		* or unmapping for synchronization with original exporter (e.g. i915) */
-		/* command : DMABUF_OPS_TO_SOURCE.
-		 * operands0 : hyper_dmabuf_id
-		 * operands1 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
-		 */
-		break;
-
 	default:
 		/* shouldn't get here */
 		/* no matched command, nothing to do.. just return error */
@@ -172,6 +166,7 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
 	struct cmd_process *proc;
 	struct hyper_dmabuf_ring_rq *temp_req;
 	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
+	int ret;
 
 	if (!req) {
 		printk("request is NULL\n");
@@ -216,7 +211,25 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
 		return req->command;
 	}
 
-	temp_req = (struct hyper_dmabuf_ring_rq *)kmalloc(sizeof(*temp_req), GFP_KERNEL);
+	/* dma buf remote synchronization */
+	if (req->command == HYPER_DMABUF_OPS_TO_SOURCE) {
+		/* notifying the exporter of a dmabuf map/unmap; a map makes the driver
+		 * do shadow mapping/unmapping in sync with the original exporter (e.g. i915) */
+
+		/* command : DMABUF_OPS_TO_SOURCE.
+		 * operands0 : hyper_dmabuf_id
+		 * operands1 : enum hyper_dmabuf_ops {....}
+		 */
+		ret = hyper_dmabuf_remote_sync(req->operands[0], req->operands[1]);
+		if (ret)
+			req->status = HYPER_DMABUF_REQ_ERROR;
+		else
+			req->status = HYPER_DMABUF_REQ_PROCESSED;
+
+		return req->command;
+	}
+
+	temp_req = kmalloc(sizeof(*temp_req), GFP_KERNEL);
 
 	memcpy(temp_req, req, sizeof(*temp_req));
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
new file mode 100644
index 0000000..6ba932f
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -0,0 +1,189 @@
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/dma-buf.h>
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_drv.h"
+#include "xen/hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_msg.h"
+
+extern struct hyper_dmabuf_private hyper_dmabuf_private;
+
+int hyper_dmabuf_remote_sync(int id, int ops)
+{
+	struct hyper_dmabuf_sgt_info *sgt_info;
+	struct sgt_list *sgtl;
+	struct attachment_list *attachl;
+	struct kmap_vaddr_list *va_kmapl;
+	struct vmap_vaddr_list *va_vmapl;
+	int ret;
+
+	/* find a corresponding SGT for the id */
+	sgt_info = hyper_dmabuf_find_exported(id);
+
+	if (!sgt_info) {
+		printk("dmabuf remote sync::can't find exported list\n");
+		return -EINVAL;
+	}
+
+	switch (ops) {
+	case HYPER_DMABUF_OPS_ATTACH:
+		attachl = kcalloc(1, sizeof(*attachl), GFP_KERNEL);
+
+		attachl->attach = dma_buf_attach(sgt_info->dma_buf,
+						hyper_dmabuf_private.device);
+
+		if (!attachl->attach) {
+			kfree(attachl);
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_ATTACH\n");
+			return -EINVAL;
+		}
+
+		list_add(&attachl->list, &sgt_info->active_attached->list);
+		break;
+
+	case HYPER_DMABUF_OPS_DETACH:
+		attachl = list_first_entry(&sgt_info->active_attached->list,
+					struct attachment_list, list);
+
+		if (!attachl) {
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_DETACH\n");
+			return -EINVAL;
+		}
+		dma_buf_detach(sgt_info->dma_buf, attachl->attach);
+		list_del(&attachl->list);
+		kfree(attachl);
+		break;
+
+	case HYPER_DMABUF_OPS_MAP:
+		sgtl = kcalloc(1, sizeof(*sgtl), GFP_KERNEL);
+		attachl = list_first_entry(&sgt_info->active_attached->list,
+					struct attachment_list, list);
+		sgtl->sgt = dma_buf_map_attachment(attachl->attach, DMA_BIDIRECTIONAL);
+		if (!sgtl->sgt) {
+			kfree(sgtl);
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_MAP\n");
+			return -EINVAL;
+		}
+		list_add(&sgtl->list, &sgt_info->active_sgts->list);
+		break;
+
+	case HYPER_DMABUF_OPS_UNMAP:
+		attachl = list_first_entry(&sgt_info->active_attached->list,
+					struct attachment_list, list);
+		sgtl = list_first_entry(&sgt_info->active_sgts->list,
+					struct sgt_list, list);
+		if (!attachl || !sgtl) {
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_UNMAP\n");
+			return -EINVAL;
+		}
+
+		dma_buf_unmap_attachment(attachl->attach, sgtl->sgt,
+					DMA_BIDIRECTIONAL);
+		list_del(&sgtl->list);
+		kfree(sgtl);
+		break;
+
+	case HYPER_DMABUF_OPS_RELEASE:
+		/* remote importer shouldn't release dma_buf because
+		 * the exporter will hold a handle to the dma_buf as
+		 * long as the dma_buf is shared with other domains.
+		 */
+		break;
+
+	case HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS:
+		ret = dma_buf_begin_cpu_access(sgt_info->dma_buf, DMA_BIDIRECTIONAL);
+		if (ret < 0) {
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS\n");
+			return ret;
+		}
+		break;
+
+	case HYPER_DMABUF_OPS_END_CPU_ACCESS:
+		ret = dma_buf_end_cpu_access(sgt_info->dma_buf, DMA_BIDIRECTIONAL);
+		if (ret < 0) {
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_END_CPU_ACCESS\n");
+			return ret;
+		}
+		break;
+
+	case HYPER_DMABUF_OPS_KMAP_ATOMIC:
+	case HYPER_DMABUF_OPS_KMAP:
+		va_kmapl = kcalloc(1, sizeof(*va_kmapl), GFP_KERNEL);
+
+		/* dummy kmapping of 1 page */
+		if (ops == HYPER_DMABUF_OPS_KMAP_ATOMIC)
+			va_kmapl->vaddr = dma_buf_kmap_atomic(sgt_info->dma_buf, 1);
+		else
+			va_kmapl->vaddr = dma_buf_kmap(sgt_info->dma_buf, 1);
+
+		if (!va_kmapl->vaddr) {
+			kfree(va_kmapl);
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
+			return -EINVAL;
+		}
+		list_add(&va_kmapl->list, &sgt_info->va_kmapped->list);
+		break;
+
+	case HYPER_DMABUF_OPS_KUNMAP_ATOMIC:
+	case HYPER_DMABUF_OPS_KUNMAP:
+		va_kmapl = list_first_entry(&sgt_info->va_kmapped->list,
+					struct kmap_vaddr_list, list);
+		if (!va_kmapl || va_kmapl->vaddr == NULL) {
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
+			return -EINVAL;
+		}
+
+		/* unmapping 1 page */
+		if (ops == HYPER_DMABUF_OPS_KUNMAP_ATOMIC)
+			dma_buf_kunmap_atomic(sgt_info->dma_buf, 1, va_kmapl->vaddr);
+		else
+			dma_buf_kunmap(sgt_info->dma_buf, 1, va_kmapl->vaddr);
+
+		list_del(&va_kmapl->list);
+		kfree(va_kmapl);
+		break;
+
+	case HYPER_DMABUF_OPS_MMAP:
+		/* currently not supported: looking for a way to create
+		 * a dummy vma */
+		printk("dmabuf remote sync::sychronized mmap is not supported\n");
+		break;
+
+	case HYPER_DMABUF_OPS_VMAP:
+		va_vmapl = kcalloc(1, sizeof(*va_vmapl), GFP_KERNEL);
+
+		/* dummy vmapping */
+		va_vmapl->vaddr = dma_buf_vmap(sgt_info->dma_buf);
+
+		if (!va_vmapl->vaddr) {
+			kfree(va_vmapl);
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VMAP\n");
+			return -EINVAL;
+		}
+		list_add(&va_vmapl->list, &sgt_info->va_vmapped->list);
+		break;
+
+	case HYPER_DMABUF_OPS_VUNMAP:
+		va_vmapl = list_first_entry(&sgt_info->va_vmapped->list,
+					struct vmap_vaddr_list, list);
+		if (!va_vmapl || va_vmapl->vaddr == NULL) {
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VUNMAP\n");
+			return -EINVAL;
+		}
+
+		dma_buf_vunmap(sgt_info->dma_buf, va_vmapl->vaddr);
+
+		list_del(&va_vmapl->list);
+		kfree(va_vmapl);
+		break;
+
+	default:
+		/* program should not get here */
+		break;
+	}
+
+	return 0;
+}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h
new file mode 100644
index 0000000..fc85fa8
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h
@@ -0,0 +1,6 @@
+#ifndef __HYPER_DMABUF_REMOTE_SYNC_H__
+#define __HYPER_DMABUF_REMOTE_SYNC_H__
+
+int hyper_dmabuf_remote_sync(int id, int ops);
+
+#endif // __HYPER_DMABUF_REMOTE_SYNC_H__
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
index c8a2f4d..bfe80ee 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -18,6 +18,30 @@
  * frame buffer) */
 #define MAX_ALLOWED_NUM_PAGES_FOR_GREF_NUM_ARRAYS 4
 
+/* stack of mapped sgts */
+struct sgt_list {
+	struct sg_table *sgt;
+	struct list_head list;
+};
+
+/* stack of attachments */
+struct attachment_list {
+	struct dma_buf_attachment *attach;
+	struct list_head list;
+};
+
+/* stack of vaddr mapped via kmap */
+struct kmap_vaddr_list {
+	void *vaddr;
+	struct list_head list;
+};
+
+/* stack of vaddr mapped via vmap */
+struct vmap_vaddr_list {
+	void *vaddr;
+	struct list_head list;
+};
+
 struct hyper_dmabuf_shared_pages_info {
 	grant_ref_t *data_refs;	/* table with shared buffer pages refid */
 	grant_ref_t *addr_pages; /* pages of 2nd level addressing */
@@ -46,9 +70,13 @@ struct hyper_dmabuf_pages_info {
 struct hyper_dmabuf_sgt_info {
         int hyper_dmabuf_id; /* unique id to reference dmabuf in remote domain */
 	int hyper_dmabuf_rdomain; /* domain importing this sgt */
-        struct sg_table *sgt; /* pointer to sgt */
+
 	struct dma_buf *dma_buf; /* needed to store this for freeing it later */
-	struct dma_buf_attachment *attachment; /* needed to store this for freeing this later */
+	struct sgt_list *active_sgts;
+	struct attachment_list *active_attached;
+	struct kmap_vaddr_list *va_kmapped;
+	struct vmap_vaddr_list *va_vmapped;
+
 	struct hyper_dmabuf_shared_pages_info shared_pages_info;
 	int private[4]; /* device specific info (e.g. image's meta info?) */
 };
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index 5db58b0..576085f 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -3,6 +3,7 @@
 #include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/workqueue.h>
+#include <linux/delay.h>
 #include <xen/grant_table.h>
 #include <xen/events.h>
 #include <xen/xenbus.h>
@@ -15,6 +16,8 @@
 
 static int export_req_id = 0;
 
+struct hyper_dmabuf_ring_rq req_pending = {0};
+
 /* Creates entry in xen store that will keep details of all exporter rings created by this domain */
 int32_t hyper_dmabuf_setup_data_dir()
 {
@@ -114,8 +117,8 @@ int hyper_dmabuf_next_req_id_export(void)
 }
 
 /* For now cache latast rings as global variables TODO: keep them in list*/
-static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *dev_id);
-static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *dev_id);
+static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *info);
+static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *info);
 
 /*
  * Callback function that will be called on any change of xenbus path being watched.
@@ -376,12 +379,13 @@ void hyper_dmabuf_cleanup_ringbufs(void)
 	hyper_dmabuf_foreach_importer_ring(hyper_dmabuf_importer_ringbuf_cleanup);
 }
 
-int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req)
+int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req, int wait)
 {
 	struct hyper_dmabuf_front_ring *ring;
 	struct hyper_dmabuf_ring_rq *new_req;
 	struct hyper_dmabuf_ring_info_export *ring_info;
 	int notify;
+	int timeout = 1000;
 
 	/* find a ring info for the channel */
 	ring_info = hyper_dmabuf_find_exporter_ring(domain);
@@ -401,6 +405,10 @@ int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req)
 		return -EIO;
 	}
 
+	/* update req_pending with current request */
+	memcpy(&req_pending, req, sizeof(req_pending));
+
+	/* pass current request to the ring */
 	memcpy(new_req, req, sizeof(*new_req));
 
 	ring->req_prod_pvt++;
@@ -410,10 +418,24 @@ int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req)
 		notify_remote_via_irq(ring_info->irq);
 	}
 
+	if (wait) {
+		while (timeout--) {
+			if (req_pending.status !=
+			    HYPER_DMABUF_REQ_NOT_RESPONDED)
+				break;
+			usleep_range(100, 120);
+		}
+
+		if (timeout < 0) {
+			printk("request timed-out\n");
+			return -EBUSY;
+		}
+	}
+
 	return 0;
 }
 
-/* ISR for request from exporter (as an importer) */
+/* ISR for handling request */
 static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *info)
 {
 	RING_IDX rc, rp;
@@ -444,6 +466,9 @@ static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *info)
 			ret = hyper_dmabuf_msg_parse(ring_info->sdomain, &req);
 
 			if (ret > 0) {
+				/* preparing a response for the request and send it to
+				 * the requester
+				 */
 				memcpy(&resp, &req, sizeof(resp));
 				memcpy(RING_GET_RESPONSE(ring, ring->rsp_prod_pvt), &resp,
 							sizeof(resp));
@@ -465,7 +490,7 @@ static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *info)
 	return IRQ_HANDLED;
 }
 
-/* ISR for responses from importer */
+/* ISR for handling responses */
 static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *info)
 {
 	/* front ring only care about response from back */
@@ -483,10 +508,13 @@ static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *info)
 		more_to_do = 0;
 		rp = ring->sring->rsp_prod;
 		for (i = ring->rsp_cons; i != rp; i++) {
-			unsigned long id;
-
 			resp = RING_GET_RESPONSE(ring, i);
-			id = resp->response_id;
+
+			/* update pending request's status with what is
+			 * in the response
+			 */
+			if (req_pending.request_id == resp->response_id)
+				req_pending.status = resp->status;
 
 			if (resp->status == HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP) {
 				/* parsing response */
@@ -496,6 +524,14 @@ static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *info)
 				if (ret < 0) {
 					printk("getting error while parsing response\n");
 				}
+			} else if (resp->status == HYPER_DMABUF_REQ_PROCESSED) {
+				/* for debugging dma_buf remote synchronization */
+				printk("original request = 0x%x\n", resp->command);
+				printk("Just got HYPER_DMABUF_REQ_PROCESSED\n");
+			} else if (resp->status == HYPER_DMABUF_REQ_ERROR) {
+				/* for debugging dma_buf remote synchronization */
+				printk("original request = 0x%x\n", resp->command);
+				printk("Just got HYPER_DMABUF_REQ_ERROR\n");
 			}
 		}
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
index a4819ca..4ab031a 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
@@ -61,7 +61,7 @@ void hyper_dmabuf_importer_ringbuf_cleanup(int sdomain);
 void hyper_dmabuf_cleanup_ringbufs(void);
 
 /* send request to the remote domain */
-int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req);
+int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req, int wait);
 
 /* called by interrupt (WORKQUEUE) */
 int hyper_dmabuf_send_response(struct hyper_dmabuf_ring_rp* response, int domain);
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 10/60] hyper_dmabuf: make sure to free memory to prevent leak
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

From: Mateusz Polrola <mateuszx.potrola@intel.com>

In hyper_dmabuf_export_remote, page_info->pages needs to
be freed before freeing page_info.

Also, info_entry in hyper_dmabuf_remove_exported/imported
and hyper_dmabuf_remove_exporter/importer_ring needs to
be freed after removal of an entry.

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c             | 1 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c              | 2 ++
 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c      | 2 ++
 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c | 2 ++
 4 files changed, 7 insertions(+)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index bace8b2..6f100ef 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -191,6 +191,7 @@ static int hyper_dmabuf_export_remote(void *data)
 	/* free msg */
 	kfree(req);
 	/* free page_info */
+	kfree(page_info->pages);
 	kfree(page_info);
 
 	return ret;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
index 2b3ef6b..1420df9 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
@@ -98,6 +98,7 @@ int hyper_dmabuf_remove_exported(int id)
 	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
 		if(info_entry->info->hyper_dmabuf_id == id) {
 			hash_del(&info_entry->node);
+			kfree(info_entry);
 			return 0;
 		}
 
@@ -112,6 +113,7 @@ int hyper_dmabuf_remove_imported(int id)
 	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
 		if(info_entry->info->hyper_dmabuf_id == id) {
 			hash_del(&info_entry->node);
+			kfree(info_entry);
 			return 0;
 		}
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index 576085f..116850e 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -320,6 +320,8 @@ int hyper_dmabuf_importer_ringbuf_init(int sdomain)
 		ring_info->unmap_op.handle = ops[0].handle;
 	}
 
+	kfree(ops);
+
 	sring = (struct hyper_dmabuf_sring*) pfn_to_kaddr(page_to_pfn(shared_ring));
 
 	BACK_RING_INIT(&ring_info->ring_back, sring, PAGE_SIZE);
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
index 5778468..a068276 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
@@ -85,6 +85,7 @@ int hyper_dmabuf_remove_exporter_ring(int domid)
 	hash_for_each(hyper_dmabuf_hash_exporter_ring, bkt, info_entry, node)
 		if(info_entry->info->rdomain == domid) {
 			hash_del(&info_entry->node);
+			kfree(info_entry);
 			return 0;
 		}
 
@@ -99,6 +100,7 @@ int hyper_dmabuf_remove_importer_ring(int domid)
 	hash_for_each(hyper_dmabuf_hash_importer_ring, bkt, info_entry, node)
 		if(info_entry->info->sdomain == domid) {
 			hash_del(&info_entry->node);
+			kfree(info_entry);
 			return 0;
 		}
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 11/60] hyper_dmabuf: check stack before unmapping/detaching shadow DMA_BUF
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

Make sure the list of mapping/attaching activities on the importer
VM is not empty before unmapping/detaching the shadow DMA_BUF for
indirect synchronization.
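
Note that list_first_entry() never returns NULL: on an empty list it
effectively returns the list head itself cast to the entry type, so
the old "if (!attachl)" style checks could never catch the empty
case. A minimal sketch of the safe pattern (illustrative only):

	if (list_empty(&sgt_info->active_attached->list))
		return -EINVAL;	/* nothing left to detach */

	attachl = list_first_entry(&sgt_info->active_attached->list,
				   struct attachment_list, list);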

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c    | 68 +++++++++++++++++-----
 1 file changed, 53 insertions(+), 15 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
index 6ba932f..fa2fa11 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -11,6 +11,21 @@
 
 extern struct hyper_dmabuf_private hyper_dmabuf_private;
 
+/* Whenever the importer does dma operations from the remote domain,
+ * a notification is sent to the exporter so that the exporter
+ * issues the equivalent dma operation on the original dma buf
+ * for indirect synchronization via shadow operations.
+ *
+ * All ptrs and references (e.g. struct sg_table*,
+ * struct dma_buf_attachment) created via these operations on the
+ * exporter's side are kept in separate stacks (implemented as
+ * circular linked-lists) so that they can be re-referenced
+ * later when unmapping operations are invoked to free them.
+ *
+ * The very first element at the bottom of each stack holds
+ * what was created when the initial export was issued, so it
+ * should not be modified or released by this function.
+ */
 int hyper_dmabuf_remote_sync(int id, int ops)
 {
 	struct hyper_dmabuf_sgt_info *sgt_info;
@@ -33,7 +48,7 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 		attachl = kcalloc(1, sizeof(*attachl), GFP_KERNEL);
 
 		attachl->attach = dma_buf_attach(sgt_info->dma_buf,
-						hyper_dmabuf_private.device);
+						 hyper_dmabuf_private.device);
 
 		if (!attachl->attach) {
 			kfree(attachl);
@@ -45,22 +60,31 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 		break;
 
 	case HYPER_DMABUF_OPS_DETACH:
-		attachl = list_first_entry(&sgt_info->active_attached->list,
-					struct attachment_list, list);
-
-		if (!attachl) {
+		if (list_empty(&sgt_info->active_attached->list)) {
 			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_DETACH\n");
+			printk("no more dmabuf attachment left to be detached\n");
 			return -EINVAL;
 		}
+
+		attachl = list_first_entry(&sgt_info->active_attached->list,
+					   struct attachment_list, list);
+
 		dma_buf_detach(sgt_info->dma_buf, attachl->attach);
 		list_del(&attachl->list);
 		kfree(attachl);
 		break;
 
 	case HYPER_DMABUF_OPS_MAP:
-		sgtl = kcalloc(1, sizeof(*sgtl), GFP_KERNEL);
+		if (list_empty(&sgt_info->active_attached->list)) {
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_MAP\n");
+			printk("no more dmabuf attachment left to be detached\n");
+			return -EINVAL;
+		}
+
 		attachl = list_first_entry(&sgt_info->active_attached->list,
-					struct attachment_list, list);
+					   struct attachment_list, list);
+
+		sgtl = kcalloc(1, sizeof(*sgtl), GFP_KERNEL);
 		sgtl->sgt = dma_buf_map_attachment(attachl->attach, DMA_BIDIRECTIONAL);
 		if (!sgtl->sgt) {
 			kfree(sgtl);
@@ -71,17 +95,20 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 		break;
 
 	case HYPER_DMABUF_OPS_UNMAP:
-		attachl = list_first_entry(&sgt_info->active_attached->list,
-					struct attachment_list, list);
-		sgtl = list_first_entry(&sgt_info->active_sgts->list,
-					struct sgt_list, list);
-		if (!attachl || !sgtl) {
+		if (list_empty(&sgt_info->active_sgts->list) ||
+		    list_empty(&sgt_info->active_attached->list)) {
 			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_UNMAP\n");
+			printk("no more SGT or attachment left to be freed\n");
 			return -EINVAL;
 		}
 
+		attachl = list_first_entry(&sgt_info->active_attached->list,
+					   struct attachment_list, list);
+		sgtl = list_first_entry(&sgt_info->active_sgts->list,
+					struct sgt_list, list);
+
 		dma_buf_unmap_attachment(attachl->attach, sgtl->sgt,
-					DMA_BIDIRECTIONAL);
+					 DMA_BIDIRECTIONAL);
 		list_del(&sgtl->list);
 		kfree(sgtl);
 		break;
@@ -129,9 +156,15 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 
 	case HYPER_DMABUF_OPS_KUNMAP_ATOMIC:
 	case HYPER_DMABUF_OPS_KUNMAP:
+		if (list_empty(&sgt_info->va_kmapped->list)) {
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
+			printk("no more dmabuf VA to be freed\n");
+			return -EINVAL;
+		}
+
 		va_kmapl = list_first_entry(&sgt_info->va_kmapped->list,
-					struct kmap_vaddr_list, list);
-		if (!va_kmapl || va_kmapl->vaddr == NULL) {
+					    struct kmap_vaddr_list, list);
+		if (va_kmapl->vaddr == NULL) {
 			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
 			return -EINVAL;
 		}
@@ -167,6 +200,11 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 		break;
 
 	case HYPER_DMABUF_OPS_VUNMAP:
+		if (list_empty(&sgt_info->va_vmapped->list)) {
+			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VUNMAP\n");
+			printk("no more dmabuf VA to be freed\n");
+			return -EINVAL;
+		}
 		va_vmapl = list_first_entry(&sgt_info->va_vmapped->list,
 					struct vmap_vaddr_list, list);
 		if (!va_vmapl || va_vmapl->vaddr == NULL) {
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 12/60] hyper_dmabuf: two different unexporting mechanisms
  2017-12-19 19:29 ` Dongwon Kim
                   ` (20 preceding siblings ...)
  (?)
@ 2017-12-19 19:29 ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

Unexporting on the exporter's side now has two options: one removes
and frees everything to literally "disconnect" from the importer;
the other simply returns failure if any application running on the
importer is still attached or doing DMA. Whether unexporting is
forced or not is currently determined by how "FORCED_UNEXPORTING"
is defined.

Also, the word "destroy" in IOCTL commands and several function
names has been changed to "unexport", which sounds more reasonable.
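
A rough sketch of the intended policy (illustrative, not the exact
driver code): with force == 0 the exporter refuses to tear the
buffer down while the importer still holds any mapping or
attachment, while force != 0 unconditionally unmaps, detaches and
frees everything.

	if (!force &&
	    (!list_empty(&info->active_attached->list) ||
	     !list_empty(&info->active_sgts->list)))
		return -EPERM;	/* importer still uses the buffer */

	/* force != 0: drain the stacks, then unmap, detach, free */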

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h   |  8 +--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c   | 94 ++++++++++++++++++++++++++-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h   |  4 ++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c | 20 +++---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c   | 62 +++++++++---------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h   |  4 +-
 6 files changed, 142 insertions(+), 50 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
index 7511afb..8778a19 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -65,11 +65,11 @@ struct ioctl_hyper_dmabuf_export_fd {
 	uint32_t fd;
 };
 
-#define IOCTL_HYPER_DMABUF_DESTROY \
-_IOC(_IOC_NONE, 'G', 4, sizeof(struct ioctl_hyper_dmabuf_destroy))
-struct ioctl_hyper_dmabuf_destroy {
+#define IOCTL_HYPER_DMABUF_UNEXPORT \
+_IOC(_IOC_NONE, 'G', 4, sizeof(struct ioctl_hyper_dmabuf_unexport))
+struct ioctl_hyper_dmabuf_unexport {
 	/* IN parameters */
-	/* hyper dmabuf id to be destroyed */
+	/* hyper dmabuf id to be unexported */
 	uint32_t hyper_dmabuf_id;
 	/* OUT parameters */
 	/* Status of request */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
index 2c78bc1..06bd8e5 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
@@ -104,7 +104,7 @@ struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
 
 	ret = sg_alloc_table(sgt, nents, GFP_KERNEL);
 	if (ret) {
-		kfree(sgt);
+		sg_free_table(sgt);
 		return NULL;
 	}
 
@@ -125,6 +125,12 @@ struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
 	return sgt;
 }
 
+/* free sg_table */
+void hyper_dmabuf_free_sgt(struct sg_table* sgt)
+{
+	sg_free_table(sgt);
+}
+
 /*
  * Creates 2 level page directory structure for referencing shared pages.
  * Top level page is a single page that contains up to 1024 refids that
@@ -512,6 +518,92 @@ struct sg_table* hyper_dmabuf_map_pages(grant_ref_t top_level_gref, int frst_ofs
 	return st;
 }
 
+int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int force)
+{
+	struct sgt_list *sgtl;
+	struct attachment_list *attachl;
+	struct kmap_vaddr_list *va_kmapl;
+	struct vmap_vaddr_list *va_vmapl;
+
+	if (!sgt_info) {
+		printk("invalid hyper_dmabuf_id\n");
+		return -EINVAL;
+	}
+
+	/* if force != 1, sgt_info can be released only if
+	 * there's no activity on exported dma-buf on importer
+	 * side.
+	 */
+	if (!force &&
+	    (!list_empty(&sgt_info->va_kmapped->list) ||
+	    !list_empty(&sgt_info->va_vmapped->list) ||
+	    !list_empty(&sgt_info->active_sgts->list) ||
+	    !list_empty(&sgt_info->active_attached->list))) {
+		printk("dma-buf is used by importer\n");
+		return -EPERM;
+	}
+
+	while (!list_empty(&sgt_info->va_kmapped->list)) {
+		va_kmapl = list_first_entry(&sgt_info->va_kmapped->list,
+					    struct kmap_vaddr_list, list);
+
+		dma_buf_kunmap(sgt_info->dma_buf, 1, va_kmapl->vaddr);
+		list_del(&va_kmapl->list);
+		kfree(va_kmapl);
+	}
+
+	while (!list_empty(&sgt_info->va_vmapped->list)) {
+		va_vmapl = list_first_entry(&sgt_info->va_vmapped->list,
+					    struct vmap_vaddr_list, list);
+
+		dma_buf_vunmap(sgt_info->dma_buf, va_vmapl->vaddr);
+		list_del(&va_vmapl->list);
+		kfree(va_vmapl);
+	}
+
+	while (!list_empty(&sgt_info->active_sgts->list)) {
+		attachl = list_first_entry(&sgt_info->active_attached->list,
+					   struct attachment_list, list);
+
+		sgtl = list_first_entry(&sgt_info->active_sgts->list,
+					struct sgt_list, list);
+
+		dma_buf_unmap_attachment(attachl->attach, sgtl->sgt,
+					 DMA_BIDIRECTIONAL);
+		list_del(&sgtl->list);
+		kfree(sgtl);
+	}
+
+	while (!list_empty(&sgt_info->active_sgts->list)) {
+		attachl = list_first_entry(&sgt_info->active_attached->list,
+					   struct attachment_list, list);
+
+		dma_buf_detach(sgt_info->dma_buf, attachl->attach);
+		list_del(&attachl->list);
+		kfree(attachl);
+	}
+
+	/* unmap dma-buf */
+	dma_buf_unmap_attachment(sgt_info->active_attached->attach,
+				 sgt_info->active_sgts->sgt,
+				 DMA_BIDIRECTIONAL);
+
+	/* detatch dma-buf */
+	dma_buf_detach(sgt_info->dma_buf, sgt_info->active_attached->attach);
+
+	/* close connection to dma-buf completely */
+	dma_buf_put(sgt_info->dma_buf);
+
+	hyper_dmabuf_cleanup_gref_table(sgt_info);
+
+	kfree(sgt_info->active_sgts);
+	kfree(sgt_info->active_attached);
+	kfree(sgt_info->va_kmapped);
+	kfree(sgt_info->va_vmapped);
+
+	return 0;
+}
+
 inline int hyper_dmabuf_sync_request_and_wait(int id, int ops)
 {
 	struct hyper_dmabuf_ring_rq *req;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
index 003c158..71c1bb0 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
@@ -24,6 +24,10 @@ grant_ref_t *hyper_dmabuf_map_gref_table(grant_ref_t *gref_table, int n_pages_ta
 struct sg_table* hyper_dmabuf_map_pages(grant_ref_t gref, int frst_ofst, int last_len, int nents, int sdomain,
 					struct hyper_dmabuf_shared_pages_info *shared_pages_info);
 
+int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int force);
+
+void hyper_dmabuf_free_sgt(struct sg_table *sgt);
+
 int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags);
 
 struct dma_buf* hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 6f100ef..a222c1b 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -20,7 +20,7 @@ extern struct hyper_dmabuf_private hyper_dmabuf_private;
 
 static uint32_t hyper_dmabuf_id_gen(void) {
 	/* TODO: add proper implementation */
-	static uint32_t id = 0;
+	static uint32_t id = 1000;
 	static int32_t domid = -1;
 	if (domid == -1) {
 		domid = hyper_dmabuf_get_domid();
@@ -259,12 +259,12 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 	return ret;
 }
 
-/* removing dmabuf from the database and send int req to the source domain
+/* unexport dmabuf from the database and send a request to the source domain
  * to unmap it.
  */
-static int hyper_dmabuf_destroy(void *data)
+static int hyper_dmabuf_unexport(void *data)
 {
-	struct ioctl_hyper_dmabuf_destroy *destroy_attr;
+	struct ioctl_hyper_dmabuf_unexport *unexport_attr;
 	struct hyper_dmabuf_sgt_info *sgt_info;
 	struct hyper_dmabuf_ring_rq *req;
 	int ret;
@@ -274,20 +274,20 @@ static int hyper_dmabuf_destroy(void *data)
 		return -EINVAL;
 	}
 
-	destroy_attr = (struct ioctl_hyper_dmabuf_destroy *)data;
+	unexport_attr = (struct ioctl_hyper_dmabuf_unexport *)data;
 
 	/* find dmabuf in export list */
-	sgt_info = hyper_dmabuf_find_exported(destroy_attr->hyper_dmabuf_id);
+	sgt_info = hyper_dmabuf_find_exported(unexport_attr->hyper_dmabuf_id);
 
 	/* failed to find corresponding entry in export list */
 	if (sgt_info == NULL) {
-		destroy_attr->status = -EINVAL;
+		unexport_attr->status = -EINVAL;
 		return -EFAULT;
 	}
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
-	hyper_dmabuf_create_request(req, HYPER_DMABUF_DESTROY, &destroy_attr->hyper_dmabuf_id);
+	hyper_dmabuf_create_request(req, HYPER_DMABUF_NOTIFY_UNEXPORT, &unexport_attr->hyper_dmabuf_id);
 
 	/* now send destroy request to remote domain
 	 * currently assuming there's only one importer exists
@@ -300,7 +300,7 @@ static int hyper_dmabuf_destroy(void *data)
 
 	/* free msg */
 	kfree(req);
-	destroy_attr->status = ret;
+	unexport_attr->status = ret;
 
 	/* Rest of cleanup will follow when importer will free it's buffer,
 	 * current implementation assumes that there is only one importer
@@ -386,7 +386,7 @@ static const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_IMPORTER_RING_SETUP, hyper_dmabuf_importer_ring_setup, 0),
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_REMOTE, hyper_dmabuf_export_remote, 0),
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_FD, hyper_dmabuf_export_fd_ioctl, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_DESTROY, hyper_dmabuf_destroy, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_UNEXPORT, hyper_dmabuf_unexport, 0),
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_QUERY, hyper_dmabuf_query, 0),
 };
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index 2432a4e..e7532b5 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -12,6 +12,8 @@
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_list.h"
 
+#define FORCED_UNEXPORTING 0
+
 extern struct hyper_dmabuf_private hyper_dmabuf_private;
 
 struct cmd_process {
@@ -45,7 +47,7 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
 			request->operands[i] = operands[i];
 		break;
 
-	case HYPER_DMABUF_DESTROY:
+	case HYPER_DMABUF_NOTIFY_UNEXPORT:
 		/* destroy sg_list for hyper_dmabuf_id on remote side */
 		/* command : DMABUF_DESTROY,
 		 * operands0 : hyper_dmabuf_id
@@ -83,7 +85,7 @@ void cmd_process_work(struct work_struct *work)
 	struct cmd_process *proc = container_of(work, struct cmd_process, work);
 	struct hyper_dmabuf_ring_rq *req;
 	int domid;
-	int i;
+	int i, ret;
 
 	req = proc->rq;
 	domid = proc->domid;
@@ -99,7 +101,7 @@ void cmd_process_work(struct work_struct *work)
 		 * operands4 : top-level reference number for shared pages
 		 * operands5~8 : Driver-specific private data (e.g. graphic buffer's meta info)
 		 */
-		imported_sgt_info = (struct hyper_dmabuf_imported_sgt_info*)kcalloc(1, sizeof(*imported_sgt_info), GFP_KERNEL);
+		imported_sgt_info = kcalloc(1, sizeof(*imported_sgt_info), GFP_KERNEL);
 		imported_sgt_info->hyper_dmabuf_id = req->operands[0];
 		imported_sgt_info->frst_ofst = req->operands[2];
 		imported_sgt_info->last_len = req->operands[3];
@@ -119,7 +121,7 @@ void cmd_process_work(struct work_struct *work)
 		hyper_dmabuf_register_imported(imported_sgt_info);
 		break;
 
-	case HYPER_DMABUF_DESTROY_FINISH:
+	case HYPER_DMABUF_UNEXPORT_FINISH:
 		/* destroy sg_list for hyper_dmabuf_id on local side */
 		/* command : DMABUF_DESTROY_FINISH,
 		 * operands0 : hyper_dmabuf_id
@@ -128,21 +130,16 @@ void cmd_process_work(struct work_struct *work)
 		/* TODO: that should be done on workqueue, when received ack from
 		 * all importers that buffer is no longer used
 		 */
-		sgt_info =
-			hyper_dmabuf_find_exported(req->operands[0]);
-
-		if (sgt_info) {
-			hyper_dmabuf_cleanup_gref_table(sgt_info);
-
-			/* unmap dmabuf */
-			dma_buf_unmap_attachment(sgt_info->active_attached->attach,
-						 sgt_info->active_sgts->sgt,
-						 DMA_BIDIRECTIONAL);
-			dma_buf_detach(sgt_info->dma_buf, sgt_info->active_attached->attach);
-			dma_buf_put(sgt_info->dma_buf);
-
-			/* TODO: Rest of cleanup, sgt cleanup etc */
-		}
+		sgt_info = hyper_dmabuf_find_exported(req->operands[0]);
+		hyper_dmabuf_remove_exported(req->operands[0]);
+		if (!sgt_info)
+			printk("sgt_info does not exist in the list\n");
+
+		ret = hyper_dmabuf_cleanup_sgt_info(sgt_info, FORCED_UNEXPORTING);
+		if (!ret)
+			kfree(sgt_info);
+		else
+			printk("failed to clean up sgt_info\n");
 
 		break;
 
@@ -184,30 +181,30 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
 	/* HYPER_DMABUF_DESTROY requires immediate
 	 * follow up so can't be processed in workqueue
 	 */
-	if (req->command == HYPER_DMABUF_DESTROY) {
+	if (req->command == HYPER_DMABUF_NOTIFY_UNEXPORT) {
 		/* destroy sg_list for hyper_dmabuf_id on remote side */
-		/* command : DMABUF_DESTROY,
+		/* command : HYPER_DMABUF_NOTIFY_UNEXPORT,
 		 * operands0 : hyper_dmabuf_id
 		 */
+
 		imported_sgt_info =
 			hyper_dmabuf_find_imported(req->operands[0]);
 
 		if (imported_sgt_info) {
-			hyper_dmabuf_cleanup_imported_pages(imported_sgt_info);
+			hyper_dmabuf_free_sgt(imported_sgt_info->sgt);
 
+			hyper_dmabuf_cleanup_imported_pages(imported_sgt_info);
 			hyper_dmabuf_remove_imported(req->operands[0]);
 
-			/* TODO: cleanup sgt on importer side etc */
+			/* Notify exporter that buffer is freed and it can
+			 * clean it up
+			 */
+			req->status = HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP;
+			req->command = HYPER_DMABUF_UNEXPORT_FINISH;
+		} else {
+			req->status = HYPER_DMABUF_REQ_ERROR;
 		}
 
-		/* Notify exporter that buffer is freed and it can cleanup it */
-		req->status = HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP;
-		req->command = HYPER_DMABUF_DESTROY_FINISH;
-
-#if 0 /* function is not implemented yet */
-
-		ret = hyper_dmabuf_destroy_sgt(req->hyper_dmabuf_id);
-#endif
 		return req->command;
 	}
 
@@ -233,8 +230,7 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
 
 	memcpy(temp_req, req, sizeof(*temp_req));
 
-	proc = (struct cmd_process *) kcalloc(1, sizeof(struct cmd_process),
-						GFP_KERNEL);
+	proc = kcalloc(1, sizeof(struct cmd_process), GFP_KERNEL);
 
 	proc->rq = temp_req;
 	proc->domid = domid;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
index 9b25bdb..39a114a 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -3,8 +3,8 @@
 
 enum hyper_dmabuf_command {
 	HYPER_DMABUF_EXPORT = 0x10,
-	HYPER_DMABUF_DESTROY,
-	HYPER_DMABUF_DESTROY_FINISH,
+	HYPER_DMABUF_NOTIFY_UNEXPORT,
+	HYPER_DMABUF_UNEXPORT_FINISH,
 	HYPER_DMABUF_OPS_TO_REMOTE,
 	HYPER_DMABUF_OPS_TO_SOURCE,
 };
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 13/60] hyper_dmabuf: postponing cleanup of hyper_DMABUF
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

From: Mateusz Polrola <mateuszx.potrola@intel.com>

Immediate cleanup of a buffer is not possible if the buffer is
actively used by the importer. In this case, we need to postpone
freeing the hyper_DMABUF until the last consumer unmaps and releases
the buffer on the importer VM. A new reference count is added to track
usage by importers.
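
A minimal sketch of the postponed-release rule on the importer side
(the names mirror this patch's dma_buf release path; treat it as an
illustration rather than the literal driver code):

	/* one locally exported dmabuf fd is going away */
	imported_sgt_info->ref_count--;

	if (imported_sgt_info->ref_count == 0) {
		/* last local user is gone - unmap shared pages for now */
		hyper_dmabuf_cleanup_imported_pages(imported_sgt_info);
		hyper_dmabuf_free_sgt(imported_sgt_info->sgt);
		imported_sgt_info->sgt = NULL;

		/* if the exporter already unexported the buffer,
		 * drop the whole entry as well */
		if (imported_sgt_info->flags == HYPER_DMABUF_SGT_INVALID) {
			hyper_dmabuf_remove_imported(imported_sgt_info->hyper_dmabuf_id);
			kfree(imported_sgt_info);
		}
	}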

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c        | 37 ++++++++++++++--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 34 +++++++++++----
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        | 49 +++++++---------------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h        |  1 -
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c    | 14 +++++--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h     |  9 +++-
 6 files changed, 95 insertions(+), 49 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
index 06bd8e5..f258981 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
@@ -9,6 +9,7 @@
 #include "hyper_dmabuf_imp.h"
 #include "xen/hyper_dmabuf_xen_comm.h"
 #include "hyper_dmabuf_msg.h"
+#include "hyper_dmabuf_list.h"
 
 #define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
 
@@ -104,7 +105,7 @@ struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
 
 	ret = sg_alloc_table(sgt, nents, GFP_KERNEL);
 	if (ret) {
-		sg_free_table(sgt);
+		hyper_dmabuf_free_sgt(sgt);
 		return NULL;
 	}
 
@@ -129,6 +130,7 @@ struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
 void hyper_dmabuf_free_sgt(struct sg_table* sgt)
 {
 	sg_free_table(sgt);
+	kfree(sgt);
 }
 
 /*
@@ -583,6 +585,9 @@ int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int fo
 		kfree(attachl);
 	}
 
+	/* Start cleanup of the buffer in reverse order of exporting */
+	hyper_dmabuf_cleanup_gref_table(sgt_info);
+
 	/* unmap dma-buf */
 	dma_buf_unmap_attachment(sgt_info->active_attached->attach,
 				 sgt_info->active_sgts->sgt,
@@ -594,8 +599,6 @@ int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int fo
 	/* close connection to dma-buf completely */
 	dma_buf_put(sgt_info->dma_buf);
 
-	hyper_dmabuf_cleanup_gref_table(sgt_info);
-
 	kfree(sgt_info->active_sgts);
 	kfree(sgt_info->active_attached);
 	kfree(sgt_info->va_kmapped);
@@ -694,6 +697,9 @@ static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachme
 	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_MAP);
 
+	kfree(page_info->pages);
+	kfree(page_info);
+
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
@@ -741,12 +747,34 @@ static void hyper_dmabuf_ops_release(struct dma_buf *dmabuf)
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
+	if (sgt_info) {
+		/* dmabuf fd is being released - decrease refcount */
+		sgt_info->ref_count--;
+
+		/* if no one else in that domain is using that buffer, unmap it for now */
+		if (sgt_info->ref_count == 0) {
+			hyper_dmabuf_cleanup_imported_pages(sgt_info);
+			hyper_dmabuf_free_sgt(sgt_info->sgt);
+			sgt_info->sgt = NULL;
+		}
+	}
+
 	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_RELEASE);
 
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
+
+	/*
+	 * Check if the buffer is still valid and if not remove it from the
+	 * imported list. That has to be done after sending the sync request.
+	 */
+	if (sgt_info && sgt_info->ref_count == 0 &&
+	    sgt_info->flags == HYPER_DMABUF_SGT_INVALID) {
+		hyper_dmabuf_remove_imported(sgt_info->hyper_dmabuf_id);
+		kfree(sgt_info);
+	}
 }
 
 static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
@@ -944,6 +972,9 @@ int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int fla
 
 	fd = dma_buf_fd(dmabuf, flags);
 
+	/* dmabuf fd is exported for the given buffer - increase its ref count */
+	dinfo->ref_count++;
+
 	return fd;
 }
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index a222c1b..c57acafe 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -135,6 +135,7 @@ static int hyper_dmabuf_export_remote(void *data)
 	/* TODO: We might need to consider using port number on event channel? */
 	sgt_info->hyper_dmabuf_rdomain = export_remote_attr->remote_domain;
 	sgt_info->dma_buf = dma_buf;
+	sgt_info->flags = 0;
 
 	sgt_info->active_sgts = kcalloc(1, sizeof(struct sgt_list), GFP_KERNEL);
 	sgt_info->active_attached = kcalloc(1, sizeof(struct attachment_list), GFP_KERNEL);
@@ -233,6 +234,15 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 	if (imported_sgt_info == NULL) /* can't find sgt from the table */
 		return -1;
 
+	/*
+	 * Check if the buffer has been unexported by the exporter.
+	 * In that case the exporter is waiting for the importer to finish
+	 * using the buffer, so do not allow exporting an fd for it anymore.
+	 */
+	if (imported_sgt_info->flags == HYPER_DMABUF_SGT_INVALID) {
+		return -EINVAL;
+	}
+
 	printk("%s Found buffer gref %d  off %d last len %d nents %d domain %d\n", __func__,
 		imported_sgt_info->gref, imported_sgt_info->frst_ofst,
 		imported_sgt_info->last_len, imported_sgt_info->nents,
@@ -289,9 +299,7 @@ static int hyper_dmabuf_unexport(void *data)
 
 	hyper_dmabuf_create_request(req, HYPER_DMABUF_NOTIFY_UNEXPORT, &unexport_attr->hyper_dmabuf_id);
 
-	/* now send destroy request to remote domain
-	 * currently assuming there's only one importer exist
-	 */
+	/* Now send unexport request to remote domain, marking that buffer should not be used anymore */
 	ret = hyper_dmabuf_send_request(sgt_info->hyper_dmabuf_rdomain, req, true);
 	if (ret < 0) {
 		kfree(req);
@@ -300,11 +308,23 @@ static int hyper_dmabuf_unexport(void *data)
 
 	/* free msg */
 	kfree(req);
-	unexport_attr->status = ret;
 
-	/* Rest of cleanup will follow when importer will free it's buffer,
-	 * current implementation assumes that there is only one importer
-         */
+	/*
+	 * Check if any importer is still using the buffer; if not, clean it up
+	 * completely, otherwise mark the buffer as unexported and postpone its
+	 * cleanup until the importer finishes using it.
+	 */
+	if (list_empty(&sgt_info->active_sgts->list) &&
+	    list_empty(&sgt_info->active_attached->list)) {
+		hyper_dmabuf_cleanup_sgt_info(sgt_info, false);
+		hyper_dmabuf_remove_exported(unexport_attr->hyper_dmabuf_id);
+		kfree(sgt_info);
+	} else {
+		sgt_info->flags = HYPER_DMABUF_SGT_UNEXPORTED;
+	}
+
+	/* TODO: should we mark here whether the buffer was destroyed immediately or postponed? */
+	unexport_attr->status = ret;
 
 	return ret;
 }
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index e7532b5..97b42a4 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -81,11 +81,10 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
 void cmd_process_work(struct work_struct *work)
 {
 	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
-        struct hyper_dmabuf_sgt_info *sgt_info;
 	struct cmd_process *proc = container_of(work, struct cmd_process, work);
 	struct hyper_dmabuf_ring_rq *req;
 	int domid;
-	int i, ret;
+	int i;
 
 	req = proc->rq;
 	domid = proc->domid;
@@ -118,31 +117,11 @@ void cmd_process_work(struct work_struct *work)
 		for (i=0; i<4; i++)
 			imported_sgt_info->private[i] = req->operands[5+i];
 
+		imported_sgt_info->flags = 0;
+		imported_sgt_info->ref_count = 0;
 		hyper_dmabuf_register_imported(imported_sgt_info);
 		break;
 
-	case HYPER_DMABUF_UNEXPORT_FINISH:
-		/* destroy sg_list for hyper_dmabuf_id on local side */
-		/* command : DMABUF_DESTROY_FINISH,
-		 * operands0 : hyper_dmabuf_id
-		 */
-
-		/* TODO: that should be done on workqueue, when received ack from
-		 * all importers that buffer is no longer used
-		 */
-		sgt_info = hyper_dmabuf_find_exported(req->operands[0]);
-		hyper_dmabuf_remove_exported(req->operands[0]);
-		if (!sgt_info)
-			printk("sgt_info does not exist in the list\n");
-
-		ret = hyper_dmabuf_cleanup_sgt_info(sgt_info, FORCED_UNEXPORTING);
-		if (!ret)
-			kfree(sgt_info);
-		else
-			printk("failed to clean up sgt_info\n");
-
-		break;
-
 	case HYPER_DMABUF_OPS_TO_REMOTE:
 		/* notifying dmabuf map/unmap to importer (probably not needed) */
 		/* for dmabuf synchronization */
@@ -191,16 +170,18 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
 			hyper_dmabuf_find_imported(req->operands[0]);
 
 		if (imported_sgt_info) {
-			hyper_dmabuf_free_sgt(imported_sgt_info->sgt);
-
-			hyper_dmabuf_cleanup_imported_pages(imported_sgt_info);
-			hyper_dmabuf_remove_imported(req->operands[0]);
-
-			/* Notify exporter that buffer is freed and it can
-			 * cleanup it
-			 */
-			req->status = HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP;
-			req->command = HYPER_DMABUF_UNEXPORT_FINISH;
+			/* check if buffer is still mapped and in use */
+			if (imported_sgt_info->sgt) {
+				/*
+				 * Buffer is still in use, just mark that it should
+				 * not be allowed to export its fd anymore.
+				 */
+				imported_sgt_info->flags = HYPER_DMABUF_SGT_INVALID;
+			} else {
+				/* No one is using buffer, remove it from imported list */
+				hyper_dmabuf_remove_imported(req->operands[0]);
+				kfree(imported_sgt_info);
+			}
 		} else {
 			req->status = HYPER_DMABUF_REQ_ERROR;
 		}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
index 39a114a..fc1365b 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -4,7 +4,6 @@
 enum hyper_dmabuf_command {
 	HYPER_DMABUF_EXPORT = 0x10,
 	HYPER_DMABUF_NOTIFY_UNEXPORT,
-	HYPER_DMABUF_UNEXPORT_FINISH,
 	HYPER_DMABUF_OPS_TO_REMOTE,
 	HYPER_DMABUF_OPS_TO_SOURCE,
 };
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
index fa2fa11..61ba4ed 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -8,6 +8,7 @@
 #include "hyper_dmabuf_drv.h"
 #include "xen/hyper_dmabuf_xen_comm.h"
 #include "hyper_dmabuf_msg.h"
+#include "hyper_dmabuf_imp.h"
 
 extern struct hyper_dmabuf_private hyper_dmabuf_private;
 
@@ -114,10 +115,17 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 		break;
 
 	case HYPER_DMABUF_OPS_RELEASE:
-		/* remote importer shouldn't release dma_buf because
-		 * exporter will hold handle to the dma_buf as
-		 * far as dma_buf is shared with other domains.
+		/*
+		 * Importer just released buffer fd, check if there is any other importer still using it.
+		 * If not and buffer was unexported, clean up shared data and remove that buffer.
 		 */
+		 if (list_empty(&sgt_info->active_sgts->list) &&
+		     list_empty(&sgt_info->active_attached->list) &&
+		     sgt_info->flags == HYPER_DMABUF_SGT_UNEXPORTED) {
+			hyper_dmabuf_cleanup_sgt_info(sgt_info, false);
+			hyper_dmabuf_remove_exported(id);
+			kfree(sgt_info);
+		}
+
 		break;
 
 	case HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS:
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
index bfe80ee..1194cf2 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -18,6 +18,11 @@
  * frame buffer) */
 #define MAX_ALLOWED_NUM_PAGES_FOR_GREF_NUM_ARRAYS 4
 
+enum hyper_dmabuf_sgt_flags {
+        HYPER_DMABUF_SGT_INVALID = 0x10,
+        HYPER_DMABUF_SGT_UNEXPORTED,
+};
+
 /* stack of mapped sgts */
 struct sgt_list {
 	struct sg_table *sgt;
@@ -76,7 +81,7 @@ struct hyper_dmabuf_sgt_info {
 	struct attachment_list *active_attached;
 	struct kmap_vaddr_list *va_kmapped;
 	struct vmap_vaddr_list *va_vmapped;
-
+	int flags;
 	struct hyper_dmabuf_shared_pages_info shared_pages_info;
 	int private[4]; /* device specific info (e.g. image's meta info?) */
 };
@@ -92,6 +97,8 @@ struct hyper_dmabuf_imported_sgt_info {
 	grant_ref_t gref; /* reference number of top level addressing page of shared pages */
 	struct sg_table *sgt; /* sgt pointer after importing buffer */
 	struct hyper_dmabuf_shared_pages_info shared_pages_info;
+	int flags;
+	int ref_count;
 	int private[4]; /* device specific info (e.g. image's meta info?) */
 };
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 14/60] hyper_dmabuf: clean-up process based on file->f_count
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

Now the release func checks f_count of the file instead of our own
refcount, because our refcount can't track dma_buf_get.

Also, the importer now sends out HYPER_DMABUF_FIRST_EXPORT
to let the exporter know the corresponding dma-buf has been
exported on the importer's side. This is to cover the case
where the exporter exports a buffer and unexports it right
away before the importer does its first export_fd (there won't
be any dma_buf_release notification to the exporter since an SGT
was never created by the importer.)
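
A sketch of the resulting handshake; send_request_to_exporter() is a
hypothetical stand-in for the driver's ring-request plumbing, while the
message and flag names come from this patch:

	/* importer, on the first export_fd for a given buffer */
	send_request_to_exporter(HYPER_DMABUF_FIRST_EXPORT,
				 imported_sgt_info->hyper_dmabuf_id);

	/* exporter, on receiving it: a local fd now exists on the
	 * importer, so a release notification can be expected later */
	sgt_info->importer_exported = true;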

After the importer creates its own SGT, the only condition under
which it is completely released is that the dma_buf has been
unexported (so valid == 0) and the user app has closed all locally
assigned FDs (i.e., when dma_buf_release is called.)
Otherwise, it needs to stay there since a previously exported
FD can be reused.
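
Expressed as the release condition (a sketch only; both helper names
below are placeholders for the actual logic in hyper_dmabuf_ops_release):

	/* importer-side entry may be freed only when BOTH hold:
	 *  1) exporter unexported the buffer -> valid == 0
	 *  2) the last locally assigned fd was closed
	 */
	if (!imported_sgt_info->valid &&
	    last_local_fd_closed(imported_sgt_info))	/* placeholder */
		free_imported_entry(imported_sgt_info);	/* placeholder */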

Also includes minor changes:

1. The flag has been changed to "bool valid" for conciseness.
2. Added bool importer_exported to sgt_info as an indicator
   of buffer usage on the importer.
3. The number of pages (nents) is added to hyper_dmabuf_sgt_info
   to keep the size info in the EXPORT list.
4. More minor changes and clean-ups.

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
Signed-off-by: Mateusz Polrola <mateuszx.potrola@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        |  1 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        |  1 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c        | 76 ++++++++++++---------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h        |  5 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 78 ++++++++++++----------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c       |  2 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h       |  2 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        | 34 ++++++++--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h        |  2 +
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c    | 10 ++-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h     | 19 +++---
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   |  6 +-
 12 files changed, 143 insertions(+), 93 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 5b5dae44..5a7cfa5 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -33,6 +33,7 @@ static int hyper_dmabuf_drv_init(void)
 	/* device structure initialization */
 	/* currently only does work-queue initialization */
 	hyper_dmabuf_private.work_queue = create_workqueue("hyper_dmabuf_wqueue");
+	hyper_dmabuf_private.domid = hyper_dmabuf_get_domid();
 
 	ret = hyper_dmabuf_table_init();
 	if (ret < 0) {
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
index 8778a19..ff883e1 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -3,6 +3,7 @@
 
 struct hyper_dmabuf_private {
         struct device *device;
+	int domid;
 	struct workqueue_struct *work_queue;
 };
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
index f258981..fa445e5 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
@@ -13,6 +13,14 @@
 
 #define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
 
+int dmabuf_refcount(struct dma_buf *dma_buf)
+{
+	if ((dma_buf != NULL) && (dma_buf->file != NULL))
+		return file_count(dma_buf->file);
+
+	return -1;
+}
+
 /* return total number of pages referenced by an sgt
  * for pre-calculation of # of pages behind a given sgt
  */
@@ -368,8 +376,8 @@ int hyper_dmabuf_cleanup_gref_table(struct hyper_dmabuf_sgt_info *sgt_info) {
 	struct hyper_dmabuf_shared_pages_info *shared_pages_info = &sgt_info->shared_pages_info;
 
 	grant_ref_t *ref = shared_pages_info->top_level_page;
-	int n_2nd_level_pages = (sgt_info->active_sgts->sgt->nents/REFS_PER_PAGE +
-				((sgt_info->active_sgts->sgt->nents % REFS_PER_PAGE) ? 1: 0));
+	int n_2nd_level_pages = (sgt_info->nents/REFS_PER_PAGE +
+				((sgt_info->nents % REFS_PER_PAGE) ? 1: 0));
 
 
 	if (shared_pages_info->data_refs == NULL ||
@@ -388,26 +396,28 @@ int hyper_dmabuf_cleanup_gref_table(struct hyper_dmabuf_sgt_info *sgt_info) {
 		if (!gnttab_end_foreign_access_ref(ref[i], 1)) {
 			printk("refid still in use!!!\n");
 		}
+		gnttab_free_grant_reference(ref[i]);
 		i++;
 	}
 	free_pages((unsigned long)shared_pages_info->addr_pages, i);
 
+
 	/* End foreign access for top level addressing page */
 	if (gnttab_query_foreign_access(shared_pages_info->top_level_ref)) {
 		printk("refid not shared !!\n");
 	}
-	if (!gnttab_end_foreign_access_ref(shared_pages_info->top_level_ref, 1)) {
-		printk("refid still in use!!!\n");
-	}
 	gnttab_end_foreign_access_ref(shared_pages_info->top_level_ref, 1);
+	gnttab_free_grant_reference(shared_pages_info->top_level_ref);
+
 	free_pages((unsigned long)shared_pages_info->top_level_page, 1);
 
 	/* End foreign access for data pages, but do not free them */
-	for (i = 0; i < sgt_info->active_sgts->sgt->nents; i++) {
+	for (i = 0; i < sgt_info->nents; i++) {
 		if (gnttab_query_foreign_access(shared_pages_info->data_refs[i])) {
 			printk("refid not shared !!\n");
 		}
 		gnttab_end_foreign_access_ref(shared_pages_info->data_refs[i], 0);
+		gnttab_free_grant_reference(shared_pages_info->data_refs[i]);
 	}
 
 	kfree(shared_pages_info->data_refs);
@@ -545,6 +555,7 @@ int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int fo
 		return -EPERM;
 	}
 
+	/* force == 1 is not recommended */
 	while (!list_empty(&sgt_info->va_kmapped->list)) {
 		va_kmapl = list_first_entry(&sgt_info->va_kmapped->list,
 					    struct kmap_vaddr_list, list);
@@ -598,6 +609,7 @@ int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int fo
 
 	/* close connection to dma-buf completely */
 	dma_buf_put(sgt_info->dma_buf);
+	sgt_info->dma_buf = NULL;
 
 	kfree(sgt_info->active_sgts);
 	kfree(sgt_info->active_attached);
@@ -621,7 +633,7 @@ inline int hyper_dmabuf_sync_request_and_wait(int id, int ops)
 	hyper_dmabuf_create_request(req, HYPER_DMABUF_OPS_TO_SOURCE, &operands[0]);
 
 	/* send request and wait for a response */
-	ret = hyper_dmabuf_send_request(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(id), req, true);
+	ret = hyper_dmabuf_send_request(HYPER_DMABUF_DOM_ID(id), req, true);
 
 	kfree(req);
 
@@ -737,30 +749,33 @@ static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
 	}
 }
 
-static void hyper_dmabuf_ops_release(struct dma_buf *dmabuf)
+static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 {
 	struct hyper_dmabuf_imported_sgt_info *sgt_info;
 	int ret;
+	int final_release;
 
-	if (!dmabuf->priv)
+	if (!dma_buf->priv)
 		return;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dma_buf->priv;
 
-	if (sgt_info) {
-		/* dmabuf fd is being released - decrease refcount */
-		sgt_info->ref_count--;
+	final_release = sgt_info && !sgt_info->valid &&
+		       !dmabuf_refcount(sgt_info->dma_buf);
 
-		/* if no one else in that domain is using that buffer, unmap it for now */
-		if (sgt_info->ref_count == 0) {
-			hyper_dmabuf_cleanup_imported_pages(sgt_info);
-			hyper_dmabuf_free_sgt(sgt_info->sgt);
-			sgt_info->sgt = NULL;
-		}
+	if (!dmabuf_refcount(sgt_info->dma_buf)) {
+		sgt_info->dma_buf = NULL;
 	}
 
-	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
-						HYPER_DMABUF_OPS_RELEASE);
+	if (final_release) {
+		hyper_dmabuf_cleanup_imported_pages(sgt_info);
+		hyper_dmabuf_free_sgt(sgt_info->sgt);
+		ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
+							HYPER_DMABUF_OPS_RELEASE_FINAL);
+	} else {
+		ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
+							HYPER_DMABUF_OPS_RELEASE);
+	}
 
 	if (ret < 0) {
 		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -770,8 +785,7 @@ static void hyper_dmabuf_ops_release(struct dma_buf *dmabuf)
 	 * Check if buffer is still valid and if not remove it from imported list.
 	 * That has to be done after sending sync request
 	 */
-	if (sgt_info && sgt_info->ref_count == 0 &&
-	    sgt_info->flags == HYPER_DMABUF_SGT_INVALID) {
+	if (final_release) {
 		hyper_dmabuf_remove_imported(sgt_info->hyper_dmabuf_id);
 		kfree(sgt_info);
 	}
@@ -962,23 +976,21 @@ static const struct dma_buf_ops hyper_dmabuf_ops = {
 /* exporting dmabuf as fd */
 int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags)
 {
-	int fd;
-	struct dma_buf* dmabuf;
+	int fd = -1;
 
 	/* call hyper_dmabuf_export_dmabuf and create
 	 * and bind a handle for it then release
 	 */
-	dmabuf = hyper_dmabuf_export_dma_buf(dinfo);
-
-	fd = dma_buf_fd(dmabuf, flags);
+	hyper_dmabuf_export_dma_buf(dinfo);
 
-	/* dmabuf fd is exported for given bufer - increase its ref count */
-	dinfo->ref_count++;
+	if (dinfo->dma_buf) {
+		fd = dma_buf_fd(dinfo->dma_buf, flags);
+	}
 
 	return fd;
 }
 
-struct dma_buf* hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo)
+void hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo)
 {
 	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
 
@@ -989,5 +1001,5 @@ struct dma_buf* hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_inf
 	exp_info.flags = /* not sure about flag */0;
 	exp_info.priv = dinfo;
 
-	return dma_buf_export(&exp_info);
+	dinfo->dma_buf = dma_buf_export(&exp_info);
 }
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
index 71c1bb0..1b0801f 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
@@ -1,6 +1,7 @@
 #ifndef __HYPER_DMABUF_IMP_H__
 #define __HYPER_DMABUF_IMP_H__
 
+#include <linux/fs.h>
 #include "hyper_dmabuf_struct.h"
 
 /* extract pages directly from struct sg_table */
@@ -30,6 +31,8 @@ void hyper_dmabuf_free_sgt(struct sg_table *sgt);
 
 int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags);
 
-struct dma_buf* hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo);
+void hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo);
+
+int dmabuf_refcount(struct dma_buf *dma_buf);
 
 #endif /* __HYPER_DMABUF_IMP_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index c57acafe..e334b77 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -107,10 +107,12 @@ static int hyper_dmabuf_export_remote(void *data)
 	}
 
 	/* we check if this specific attachment was already exported
-	 * to the same domain and if yes, it returns hyper_dmabuf_id
-	 * of pre-exported sgt */
-	ret = hyper_dmabuf_find_id(dma_buf, export_remote_attr->remote_domain);
-	if (ret != -1) {
+	 * to the same domain and, if so and its sgt_info is valid,
+	 * it returns the hyper_dmabuf_id of the pre-exported sgt_info
+	 */
+	ret = hyper_dmabuf_find_id_exported(dma_buf, export_remote_attr->remote_domain);
+	sgt_info = hyper_dmabuf_find_exported(ret);
+	if (ret != -1 && sgt_info && sgt_info->valid) {
 		dma_buf_put(dma_buf);
 		export_remote_attr->hyper_dmabuf_id = ret;
 		return 0;
@@ -135,12 +137,13 @@ static int hyper_dmabuf_export_remote(void *data)
 	/* TODO: We might need to consider using port number on event channel? */
 	sgt_info->hyper_dmabuf_rdomain = export_remote_attr->remote_domain;
 	sgt_info->dma_buf = dma_buf;
-	sgt_info->flags = 0;
+	sgt_info->valid = 1;
+	sgt_info->importer_exported = 0;
 
-	sgt_info->active_sgts = kcalloc(1, sizeof(struct sgt_list), GFP_KERNEL);
-	sgt_info->active_attached = kcalloc(1, sizeof(struct attachment_list), GFP_KERNEL);
-	sgt_info->va_kmapped = kcalloc(1, sizeof(struct kmap_vaddr_list), GFP_KERNEL);
-	sgt_info->va_vmapped = kcalloc(1, sizeof(struct vmap_vaddr_list), GFP_KERNEL);
+	sgt_info->active_sgts = kmalloc(sizeof(struct sgt_list), GFP_KERNEL);
+	sgt_info->active_attached = kmalloc(sizeof(struct attachment_list), GFP_KERNEL);
+	sgt_info->va_kmapped = kmalloc(sizeof(struct kmap_vaddr_list), GFP_KERNEL);
+	sgt_info->va_vmapped = kmalloc(sizeof(struct vmap_vaddr_list), GFP_KERNEL);
 
 	sgt_info->active_sgts->sgt = sgt;
 	sgt_info->active_attached->attach = attachment;
@@ -159,6 +162,8 @@ static int hyper_dmabuf_export_remote(void *data)
 	if (page_info == NULL)
 		goto fail_export;
 
+	sgt_info->nents = page_info->nents;
+
 	/* now register it to export list */
 	hyper_dmabuf_register_exported(sgt_info);
 
@@ -220,6 +225,8 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 {
 	struct ioctl_hyper_dmabuf_export_fd *export_fd_attr;
 	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
+	struct hyper_dmabuf_ring_rq *req;
+	int operand;
 	int ret = 0;
 
 	if (!data) {
@@ -234,35 +241,38 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 	if (imported_sgt_info == NULL) /* can't find sgt from the table */
 		return -1;
 
-	/*
-	 * Check if buffer was not unexported by exporter.
-	 * In such exporter is waiting for importer to finish using that buffer,
-	 * so do not allow export fd of such buffer anymore.
-	 */
-	if (imported_sgt_info->flags == HYPER_DMABUF_SGT_INVALID) {
-		return -EINVAL;
-	}
-
 	printk("%s Found buffer gref %d  off %d last len %d nents %d domain %d\n", __func__,
 		imported_sgt_info->gref, imported_sgt_info->frst_ofst,
 		imported_sgt_info->last_len, imported_sgt_info->nents,
-		HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported_sgt_info->hyper_dmabuf_id));
+		HYPER_DMABUF_DOM_ID(imported_sgt_info->hyper_dmabuf_id));
 
 	if (!imported_sgt_info->sgt) {
 		imported_sgt_info->sgt = hyper_dmabuf_map_pages(imported_sgt_info->gref,
 							imported_sgt_info->frst_ofst,
 							imported_sgt_info->last_len,
 							imported_sgt_info->nents,
-							HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported_sgt_info->hyper_dmabuf_id),
+							HYPER_DMABUF_DOM_ID(imported_sgt_info->hyper_dmabuf_id),
 							&imported_sgt_info->shared_pages_info);
-		if (!imported_sgt_info->sgt) {
-			printk("Failed to create sgt\n");
+
+		/* send notification of first export_fd to exporter */
+		operand = imported_sgt_info->hyper_dmabuf_id;
+		req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+		hyper_dmabuf_create_request(req, HYPER_DMABUF_FIRST_EXPORT, &operand);
+
+		ret = hyper_dmabuf_send_request(HYPER_DMABUF_DOM_ID(operand), req, false);
+
+		if (!imported_sgt_info->sgt || ret) {
+			kfree(req);
+			printk("Failed to create sgt or notify exporter\n");
 			return -EINVAL;
 		}
+		kfree(req);
 	}
 
 	export_fd_attr->fd = hyper_dmabuf_export_fd(imported_sgt_info, export_fd_attr->flags);
-	if (export_fd_attr < 0) {
+
+	if (export_fd_attr->fd < 0) {
+		/* fail to get fd */
 		ret = export_fd_attr->fd;
 	}
 
@@ -309,23 +319,23 @@ static int hyper_dmabuf_unexport(void *data)
 	/* free msg */
 	kfree(req);
 
+	/* no longer valid */
+	sgt_info->valid = 0;
+
 	/*
-	 * Check if any importer is still using buffer, if not clean it up completly,
-	 * otherwise mark buffer as unexported and postpone its cleanup to time when
-	 * importer will finish using it.
+	 * Immediately clean up if the buffer has never been exported
+	 * by the importer (so no SGT was constructed on the importer's
+	 * side). Otherwise, clean it up later in remote sync when the
+	 * final release op is called (the importer issues it only when
+	 * no consumer of locally exported FDs is left)
 	 */
-	if (list_empty(&sgt_info->active_sgts->list) &&
-	    list_empty(&sgt_info->active_attached->list)) {
+	printk("before claning up buffer completly\n");
+	if (!sgt_info->importer_exported) {
 		hyper_dmabuf_cleanup_sgt_info(sgt_info, false);
 		hyper_dmabuf_remove_exported(unexport_attr->hyper_dmabuf_id);
 		kfree(sgt_info);
-	} else {
-		sgt_info->flags = HYPER_DMABUF_SGT_UNEXPORTED;
 	}
 
-	/* TODO: should we mark here that buffer was destroyed immiedetaly or that was postponed ? */
-	unexport_attr->status = ret;
-
 	return ret;
 }
 
@@ -369,7 +379,7 @@ static int hyper_dmabuf_query(void *data)
 			if (sgt_info) {
 				query_attr->info = 0xFFFFFFFF; /* myself */
 			} else {
-				query_attr->info = (HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported_sgt_info->hyper_dmabuf_id));
+				query_attr->info = (HYPER_DMABUF_DOM_ID(imported_sgt_info->hyper_dmabuf_id));
 			}
 			break;
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
index 1420df9..18731de 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
@@ -65,7 +65,7 @@ struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(int id)
 }
 
 /* search for pre-exported sgt and return its id if it exists */
-int hyper_dmabuf_find_id(struct dma_buf *dmabuf, int domid)
+int hyper_dmabuf_find_id_exported(struct dma_buf *dmabuf, int domid)
 {
 	struct hyper_dmabuf_info_entry_exported *info_entry;
 	int bkt;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
index 463a6da..f55d06e 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
@@ -25,7 +25,7 @@ int hyper_dmabuf_table_destroy(void);
 int hyper_dmabuf_register_exported(struct hyper_dmabuf_sgt_info *info);
 
 /* search for pre-exported sgt and return its id if it exists */
-int hyper_dmabuf_find_id(struct dma_buf *dmabuf, int domid);
+int hyper_dmabuf_find_id_exported(struct dma_buf *dmabuf, int domid);
 
 int hyper_dmabuf_register_imported(struct hyper_dmabuf_imported_sgt_info* info);
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index 97b42a4..a2d687f 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -55,6 +55,14 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
 		request->operands[0] = operands[0];
 		break;
 
+	case HYPER_DMABUF_FIRST_EXPORT:
+		/* dmabuf fd is being created on importer's side for the first time */
+		/* command : HYPER_DMABUF_FIRST_EXPORT,
+		 * operands0 : hyper_dmabuf_id
+		 */
+		request->operands[0] = operands[0];
+		break;
+
 	case HYPER_DMABUF_OPS_TO_REMOTE:
 		/* notifying dmabuf map/unmap to importer (probably not needed) */
 		/* for dmabuf synchronization */
@@ -81,6 +89,7 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
 void cmd_process_work(struct work_struct *work)
 {
 	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
+	struct hyper_dmabuf_sgt_info *sgt_info;
 	struct cmd_process *proc = container_of(work, struct cmd_process, work);
 	struct hyper_dmabuf_ring_rq *req;
 	int domid;
@@ -117,11 +126,25 @@ void cmd_process_work(struct work_struct *work)
 		for (i=0; i<4; i++)
 			imported_sgt_info->private[i] = req->operands[5+i];
 
-		imported_sgt_info->flags = 0;
-		imported_sgt_info->ref_count = 0;
+		imported_sgt_info->valid = 1;
 		hyper_dmabuf_register_imported(imported_sgt_info);
 		break;
 
+	case HYPER_DMABUF_FIRST_EXPORT:
+		/* find a corresponding SGT for the id */
+		sgt_info = hyper_dmabuf_find_exported(req->operands[0]);
+
+		if (!sgt_info) {
+			printk("critical err: requested sgt_info can't be found %d\n", req->operands[0]);
+			break;
+		}
+
+		if (sgt_info->importer_exported)
+			printk("warning: exported flag is not supposed to be 1 already\n");
+
+		sgt_info->importer_exported = 1;
+		break;
+
 	case HYPER_DMABUF_OPS_TO_REMOTE:
 		/* notifying dmabuf map/unmap to importer (probably not needed) */
 		/* for dmabuf synchronization */
@@ -170,13 +193,14 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
 			hyper_dmabuf_find_imported(req->operands[0]);
 
 		if (imported_sgt_info) {
-			/* check if buffer is still mapped and in use */
-			if (imported_sgt_info->sgt) {
+			/* if anything is still using dma_buf */
+			if (imported_sgt_info->dma_buf &&
+			    dmabuf_refcount(imported_sgt_info->dma_buf) > 0) {
 				/*
 				 * Buffer is still in use, just mark that it should
 				 * not be allowed to export its fd anymore.
 				 */
-				imported_sgt_info->flags = HYPER_DMABUF_SGT_INVALID;
+				imported_sgt_info->valid = 0;
 			} else {
 				/* No one is using buffer, remove it from imported list */
 				hyper_dmabuf_remove_imported(req->operands[0]);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
index fc1365b..1e9d827 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -3,6 +3,7 @@
 
 enum hyper_dmabuf_command {
 	HYPER_DMABUF_EXPORT = 0x10,
+	HYPER_DMABUF_FIRST_EXPORT,
 	HYPER_DMABUF_NOTIFY_UNEXPORT,
 	HYPER_DMABUF_OPS_TO_REMOTE,
 	HYPER_DMABUF_OPS_TO_SOURCE,
@@ -14,6 +15,7 @@ enum hyper_dmabuf_ops {
 	HYPER_DMABUF_OPS_MAP,
 	HYPER_DMABUF_OPS_UNMAP,
 	HYPER_DMABUF_OPS_RELEASE,
+	HYPER_DMABUF_OPS_RELEASE_FINAL,
 	HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS,
 	HYPER_DMABUF_OPS_END_CPU_ACCESS,
 	HYPER_DMABUF_OPS_KMAP_ATOMIC,
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
index 61ba4ed..5017b17 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -114,13 +114,13 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 		kfree(sgtl);
 		break;
 
-	case HYPER_DMABUF_OPS_RELEASE:
+	case HYPER_DMABUF_OPS_RELEASE_FINAL:
 		/*
 		 * Importer just released buffer fd, check if there is any other importer still using it.
 		 * If not and buffer was unexported, clean up shared data and remove that buffer.
 		 */
-		 if (list_empty(&sgt_info->active_sgts->list) &&
-		     list_empty(&sgt_info->active_attached->list) &&
-		     sgt_info->flags == HYPER_DMABUF_SGT_UNEXPORTED) {
+		 if (list_empty(&sgt_info->active_attached->list) &&
+		     !sgt_info->valid) {
 			hyper_dmabuf_cleanup_sgt_info(sgt_info, false);
 			hyper_dmabuf_remove_exported(id);
 			kfree(sgt_info);
@@ -128,6 +128,10 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 
 		break;
 
+	case HYPER_DMABUF_OPS_RELEASE:
+		/* placeholder */
+		break;
+
 	case HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS:
 		ret = dma_buf_begin_cpu_access(sgt_info->dma_buf, DMA_BIDIRECTIONAL);
 		if (!ret) {
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
index 1194cf2..92e06ff 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -6,10 +6,10 @@
 /* Importer combines source domain id with given hyper_dmabuf_id
  * to make it unique in case there are multiple exporters */
 
-#define HYPER_DMABUF_ID_IMPORTER(sdomain, id) \
-	((((sdomain) & 0xFF) << 24) | ((id) & 0xFFFFFF))
+#define HYPER_DMABUF_ID_IMPORTER(domid, id) \
+	((((domid) & 0xFF) << 24) | ((id) & 0xFFFFFF))
 
-#define HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(id) \
+#define HYPER_DMABUF_DOM_ID(id) \
 	(((id) >> 24) & 0xFF)
 
 /* each grant_ref_t is 4 bytes, so total 4096 grant_ref_t can be
@@ -18,11 +18,6 @@
  * frame buffer) */
 #define MAX_ALLOWED_NUM_PAGES_FOR_GREF_NUM_ARRAYS 4
 
-enum hyper_dmabuf_sgt_flags {
-        HYPER_DMABUF_SGT_INVALID = 0x10,
-        HYPER_DMABUF_SGT_UNEXPORTED,
-};
-
 /* stack of mapped sgts */
 struct sgt_list {
 	struct sg_table *sgt;
@@ -77,11 +72,13 @@ struct hyper_dmabuf_sgt_info {
 	int hyper_dmabuf_rdomain; /* domain importing this sgt */
 
 	struct dma_buf *dma_buf; /* needed to store this for freeing it later */
+	int nents; /* number of pages, which may be different than sgt->nents */
 	struct sgt_list *active_sgts;
 	struct attachment_list *active_attached;
 	struct kmap_vaddr_list *va_kmapped;
 	struct vmap_vaddr_list *va_vmapped;
-	int flags;
+	bool valid;
+	bool importer_exported; /* exported locally on importer's side */
 	struct hyper_dmabuf_shared_pages_info shared_pages_info;
 	int private[4]; /* device specific info (e.g. image's meta info?) */
 };
@@ -95,10 +92,10 @@ struct hyper_dmabuf_imported_sgt_info {
 	int last_len;	/* length of data in the last shared page */
 	int nents;	/* number of pages to be shared */
 	grant_ref_t gref; /* reference number of top level addressing page of shared pages */
+	struct dma_buf *dma_buf;
 	struct sg_table *sgt; /* sgt pointer after importing buffer */
 	struct hyper_dmabuf_shared_pages_info shared_pages_info;
-	int flags;
-	int ref_count;
+	bool valid;
 	int private[4]; /* device specific info (e.g. image's meta info?) */
 };
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index 116850e..f9e0df3 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -456,13 +456,12 @@ static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *info)
 	do {
 		rc = ring->req_cons;
 		rp = ring->sring->req_prod;
-
+		more_to_do = 0;
 		while (rc != rp) {
 			if (RING_REQUEST_CONS_OVERFLOW(ring, rc))
 				break;
 
 			memcpy(&req, RING_GET_REQUEST(ring, rc), sizeof(req));
-			printk("Got request\n");
 			ring->req_cons = ++rc;
 
 			ret = hyper_dmabuf_msg_parse(ring_info->sdomain, &req);
@@ -479,13 +478,11 @@ static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *info)
 				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(ring, notify);
 
 				if (notify) {
-					printk("Notyfing\n");
 					notify_remote_via_irq(ring_info->irq);
 				}
 			}
 
 			RING_FINAL_CHECK_FOR_REQUESTS(ring, more_to_do);
-			printk("Final check for requests %d\n", more_to_do);
 		}
 	} while (more_to_do);
 
@@ -541,7 +538,6 @@ static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *info)
 
 		if (i != ring->req_prod_pvt) {
 			RING_FINAL_CHECK_FOR_RESPONSES(ring, more_to_do);
-			printk("more to do %d\n", more_to_do);
 		} else {
 			ring->sring->rsp_event = i+1;
 		}
-- 
2.7.4

 {
-	int fd;
-	struct dma_buf* dmabuf;
+	int fd = -1;
 
 	/* call hyper_dmabuf_export_dmabuf and create
 	 * and bind a handle for it then release
 	 */
-	dmabuf = hyper_dmabuf_export_dma_buf(dinfo);
-
-	fd = dma_buf_fd(dmabuf, flags);
+	hyper_dmabuf_export_dma_buf(dinfo);
 
-	/* dmabuf fd is exported for given bufer - increase its ref count */
-	dinfo->ref_count++;
+	if (dinfo->dma_buf) {
+		fd = dma_buf_fd(dinfo->dma_buf, flags);
+	}
 
 	return fd;
 }
 
-struct dma_buf* hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo)
+void hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo)
 {
 	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
 
@@ -989,5 +1001,5 @@ struct dma_buf* hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_inf
 	exp_info.flags = /* not sure about flag */0;
 	exp_info.priv = dinfo;
 
-	return dma_buf_export(&exp_info);
+	dinfo->dma_buf = dma_buf_export(&exp_info);
 }
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
index 71c1bb0..1b0801f 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
@@ -1,6 +1,7 @@
 #ifndef __HYPER_DMABUF_IMP_H__
 #define __HYPER_DMABUF_IMP_H__
 
+#include <linux/fs.h>
 #include "hyper_dmabuf_struct.h"
 
 /* extract pages directly from struct sg_table */
@@ -30,6 +31,8 @@ void hyper_dmabuf_free_sgt(struct sg_table *sgt);
 
 int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags);
 
-struct dma_buf* hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo);
+void hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo);
+
+int dmabuf_refcount(struct dma_buf *dma_buf);
 
 #endif /* __HYPER_DMABUF_IMP_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index c57acafe..e334b77 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -107,10 +107,12 @@ static int hyper_dmabuf_export_remote(void *data)
 	}
 
 	/* we check if this specific attachment was already exported
-	 * to the same domain and if yes, it returns hyper_dmabuf_id
-	 * of pre-exported sgt */
-	ret = hyper_dmabuf_find_id(dma_buf, export_remote_attr->remote_domain);
-	if (ret != -1) {
+	 * to the same domain and if yes and it's valid sgt_info,
+	 * it returns hyper_dmabuf_id of pre-exported sgt_info
+	 */
+	ret = hyper_dmabuf_find_id_exported(dma_buf, export_remote_attr->remote_domain);
+	sgt_info = hyper_dmabuf_find_exported(ret);
+	if (ret != -1 && sgt_info->valid) {
 		dma_buf_put(dma_buf);
 		export_remote_attr->hyper_dmabuf_id = ret;
 		return 0;
@@ -135,12 +137,13 @@ static int hyper_dmabuf_export_remote(void *data)
 	/* TODO: We might need to consider using port number on event channel? */
 	sgt_info->hyper_dmabuf_rdomain = export_remote_attr->remote_domain;
 	sgt_info->dma_buf = dma_buf;
-	sgt_info->flags = 0;
+	sgt_info->valid = 1;
+	sgt_info->importer_exported = 0;
 
-	sgt_info->active_sgts = kcalloc(1, sizeof(struct sgt_list), GFP_KERNEL);
-	sgt_info->active_attached = kcalloc(1, sizeof(struct attachment_list), GFP_KERNEL);
-	sgt_info->va_kmapped = kcalloc(1, sizeof(struct kmap_vaddr_list), GFP_KERNEL);
-	sgt_info->va_vmapped = kcalloc(1, sizeof(struct vmap_vaddr_list), GFP_KERNEL);
+	sgt_info->active_sgts = kmalloc(sizeof(struct sgt_list), GFP_KERNEL);
+	sgt_info->active_attached = kmalloc(sizeof(struct attachment_list), GFP_KERNEL);
+	sgt_info->va_kmapped = kmalloc(sizeof(struct kmap_vaddr_list), GFP_KERNEL);
+	sgt_info->va_vmapped = kmalloc(sizeof(struct vmap_vaddr_list), GFP_KERNEL);
 
 	sgt_info->active_sgts->sgt = sgt;
 	sgt_info->active_attached->attach = attachment;
@@ -159,6 +162,8 @@ static int hyper_dmabuf_export_remote(void *data)
 	if (page_info == NULL)
 		goto fail_export;
 
+	sgt_info->nents = page_info->nents;
+
 	/* now register it to export list */
 	hyper_dmabuf_register_exported(sgt_info);
 
@@ -220,6 +225,8 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 {
 	struct ioctl_hyper_dmabuf_export_fd *export_fd_attr;
 	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
+	struct hyper_dmabuf_ring_rq *req;
+	int operand;
 	int ret = 0;
 
 	if (!data) {
@@ -234,35 +241,38 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 	if (imported_sgt_info == NULL) /* can't find sgt from the table */
 		return -1;
 
-	/*
-	 * Check if buffer was not unexported by exporter.
-	 * In such exporter is waiting for importer to finish using that buffer,
-	 * so do not allow export fd of such buffer anymore.
-	 */
-	if (imported_sgt_info->flags == HYPER_DMABUF_SGT_INVALID) {
-		return -EINVAL;
-	}
-
 	printk("%s Found buffer gref %d  off %d last len %d nents %d domain %d\n", __func__,
 		imported_sgt_info->gref, imported_sgt_info->frst_ofst,
 		imported_sgt_info->last_len, imported_sgt_info->nents,
-		HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported_sgt_info->hyper_dmabuf_id));
+		HYPER_DMABUF_DOM_ID(imported_sgt_info->hyper_dmabuf_id));
 
 	if (!imported_sgt_info->sgt) {
 		imported_sgt_info->sgt = hyper_dmabuf_map_pages(imported_sgt_info->gref,
 							imported_sgt_info->frst_ofst,
 							imported_sgt_info->last_len,
 							imported_sgt_info->nents,
-							HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported_sgt_info->hyper_dmabuf_id),
+							HYPER_DMABUF_DOM_ID(imported_sgt_info->hyper_dmabuf_id),
 							&imported_sgt_info->shared_pages_info);
-		if (!imported_sgt_info->sgt) {
-			printk("Failed to create sgt\n");
+
+		/* send notification for first export_fd to exporter */
+		operand = imported_sgt_info->hyper_dmabuf_id;
+		req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+		hyper_dmabuf_create_request(req, HYPER_DMABUF_FIRST_EXPORT, &operand);
+
+		ret = hyper_dmabuf_send_request(HYPER_DMABUF_DOM_ID(operand), req, false);
+
+		if (!imported_sgt_info->sgt || ret) {
+			kfree(req);
+			printk("Failed to create sgt or notify exporter\n");
 			return -EINVAL;
 		}
+		kfree(req);
 	}
 
 	export_fd_attr->fd = hyper_dmabuf_export_fd(imported_sgt_info, export_fd_attr->flags);
-	if (export_fd_attr < 0) {
+
+	if (export_fd_attr->fd < 0) {
+		/* fail to get fd */
 		ret = export_fd_attr->fd;
 	}
 
@@ -309,23 +319,23 @@ static int hyper_dmabuf_unexport(void *data)
 	/* free msg */
 	kfree(req);
 
+	/* no longer valid */
+	sgt_info->valid = 0;
+
 	/*
-	 * Check if any importer is still using buffer, if not clean it up completly,
-	 * otherwise mark buffer as unexported and postpone its cleanup to time when
-	 * importer will finish using it.
+	 * Immediately clean up if the buffer has never been re-exported
+	 * by the importer (so no SGT was constructed on the importer's
+	 * side). Otherwise it is cleaned up later in remote sync, when
+	 * the final release op arrives (the importer sends it only when
+	 * no consumer of its locally exported FDs remains).
 	 */
-	if (list_empty(&sgt_info->active_sgts->list) &&
-	    list_empty(&sgt_info->active_attached->list)) {
+	printk("before claning up buffer completly\n");
+	if (!sgt_info->importer_exported) {
 		hyper_dmabuf_cleanup_sgt_info(sgt_info, false);
 		hyper_dmabuf_remove_exported(unexport_attr->hyper_dmabuf_id);
 		kfree(sgt_info);
-	} else {
-		sgt_info->flags = HYPER_DMABUF_SGT_UNEXPORTED;
 	}
 
-	/* TODO: should we mark here that buffer was destroyed immiedetaly or that was postponed ? */
-	unexport_attr->status = ret;
-
 	return ret;
 }
 
@@ -369,7 +379,7 @@ static int hyper_dmabuf_query(void *data)
 			if (sgt_info) {
 				query_attr->info = 0xFFFFFFFF; /* myself */
 			} else {
-				query_attr->info = (HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported_sgt_info->hyper_dmabuf_id));
+				query_attr->info = (HYPER_DMABUF_DOM_ID(imported_sgt_info->hyper_dmabuf_id));
 			}
 			break;
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
index 1420df9..18731de 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
@@ -65,7 +65,7 @@ struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(int id)
 }
 
 /* search for pre-exported sgt and return its id if it exists */
-int hyper_dmabuf_find_id(struct dma_buf *dmabuf, int domid)
+int hyper_dmabuf_find_id_exported(struct dma_buf *dmabuf, int domid)
 {
 	struct hyper_dmabuf_info_entry_exported *info_entry;
 	int bkt;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
index 463a6da..f55d06e 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
@@ -25,7 +25,7 @@ int hyper_dmabuf_table_destroy(void);
 int hyper_dmabuf_register_exported(struct hyper_dmabuf_sgt_info *info);
 
 /* search for pre-exported sgt and return its id if it exists */
-int hyper_dmabuf_find_id(struct dma_buf *dmabuf, int domid);
+int hyper_dmabuf_find_id_exported(struct dma_buf *dmabuf, int domid);
 
 int hyper_dmabuf_register_imported(struct hyper_dmabuf_imported_sgt_info* info);
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index 97b42a4..a2d687f 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -55,6 +55,14 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
 		request->operands[0] = operands[0];
 		break;
 
+	case HYPER_DMABUF_FIRST_EXPORT:
+		/* dmabuf fd is being created on imported side for first time */
+		/* command : HYPER_DMABUF_FIRST_EXPORT,
+		 * operands0 : hyper_dmabuf_id
+		 */
+		request->operands[0] = operands[0];
+		break;
+
 	case HYPER_DMABUF_OPS_TO_REMOTE:
 		/* notifying dmabuf map/unmap to importer (probably not needed) */
 		/* for dmabuf synchronization */
@@ -81,6 +89,7 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
 void cmd_process_work(struct work_struct *work)
 {
 	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
+	struct hyper_dmabuf_sgt_info *sgt_info;
 	struct cmd_process *proc = container_of(work, struct cmd_process, work);
 	struct hyper_dmabuf_ring_rq *req;
 	int domid;
@@ -117,11 +126,25 @@ void cmd_process_work(struct work_struct *work)
 		for (i=0; i<4; i++)
 			imported_sgt_info->private[i] = req->operands[5+i];
 
-		imported_sgt_info->flags = 0;
-		imported_sgt_info->ref_count = 0;
+		imported_sgt_info->valid = 1;
 		hyper_dmabuf_register_imported(imported_sgt_info);
 		break;
 
+	case HYPER_DMABUF_FIRST_EXPORT:
+		/* find a corresponding SGT for the id */
+		sgt_info = hyper_dmabuf_find_exported(req->operands[0]);
+
+		if (!sgt_info) {
+			printk("critical err: requested sgt_info can't be found %d\n", req->operands[0]);
+			break;
+		}
+
+		if (sgt_info->importer_exported)
+			printk("warning: exported flag is not supposed to be 1 already\n");
+
+		sgt_info->importer_exported = 1;
+		break;
+
 	case HYPER_DMABUF_OPS_TO_REMOTE:
 		/* notifying dmabuf map/unmap to importer (probably not needed) */
 		/* for dmabuf synchronization */
@@ -170,13 +193,14 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
 			hyper_dmabuf_find_imported(req->operands[0]);
 
 		if (imported_sgt_info) {
-			/* check if buffer is still mapped and in use */
-			if (imported_sgt_info->sgt) {
+			/* if anything is still using dma_buf */
+			if (imported_sgt_info->dma_buf &&
+			    dmabuf_refcount(imported_sgt_info->dma_buf) > 0) {
 				/*
 				 * Buffer is still in  use, just mark that it should
 				 * not be allowed to export its fd anymore.
 				 */
-				imported_sgt_info->flags = HYPER_DMABUF_SGT_INVALID;
+				imported_sgt_info->valid = 0;
 			} else {
 				/* No one is using buffer, remove it from imported list */
 				hyper_dmabuf_remove_imported(req->operands[0]);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
index fc1365b..1e9d827 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -3,6 +3,7 @@
 
 enum hyper_dmabuf_command {
 	HYPER_DMABUF_EXPORT = 0x10,
+	HYPER_DMABUF_FIRST_EXPORT,
 	HYPER_DMABUF_NOTIFY_UNEXPORT,
 	HYPER_DMABUF_OPS_TO_REMOTE,
 	HYPER_DMABUF_OPS_TO_SOURCE,
@@ -14,6 +15,7 @@ enum hyper_dmabuf_ops {
 	HYPER_DMABUF_OPS_MAP,
 	HYPER_DMABUF_OPS_UNMAP,
 	HYPER_DMABUF_OPS_RELEASE,
+	HYPER_DMABUF_OPS_RELEASE_FINAL,
 	HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS,
 	HYPER_DMABUF_OPS_END_CPU_ACCESS,
 	HYPER_DMABUF_OPS_KMAP_ATOMIC,
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
index 61ba4ed..5017b17 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -114,13 +114,13 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 		kfree(sgtl);
 		break;
 
-	case HYPER_DMABUF_OPS_RELEASE:
+	case HYPER_DMABUF_OPS_RELEASE_FINAL:
 		/*
 		 * Importer just released buffer fd, check if there is any other importer still using it.
 		 * If not and buffer was unexported, clean up shared data and remove that buffer.
 		 */
-		 if (list_empty(&sgt_info->active_sgts->list) &&                                                                  	    list_empty(&sgt_info->active_attached->list) &&
-		     sgt_info->flags == HYPER_DMABUF_SGT_UNEXPORTED) {
+		 if (list_empty(&sgt_info->active_attached->list) &&
+		     !sgt_info->valid) {
 			hyper_dmabuf_cleanup_sgt_info(sgt_info, false);
 			hyper_dmabuf_remove_exported(id);
 			kfree(sgt_info);
@@ -128,6 +128,10 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 
 		break;
 
+	case HYPER_DMABUF_OPS_RELEASE:
+		/* place holder */
+		break;
+
 	case HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS:
 		ret = dma_buf_begin_cpu_access(sgt_info->dma_buf, DMA_BIDIRECTIONAL);
 		if (!ret) {
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
index 1194cf2..92e06ff 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -6,10 +6,10 @@
 /* Importer combine source domain id with given hyper_dmabuf_id
  * to make it unique in case there are multiple exporters */
 
-#define HYPER_DMABUF_ID_IMPORTER(sdomain, id) \
-	((((sdomain) & 0xFF) << 24) | ((id) & 0xFFFFFF))
+#define HYPER_DMABUF_ID_IMPORTER(domid, id) \
+	((((domid) & 0xFF) << 24) | ((id) & 0xFFFFFF))
 
-#define HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(id) \
+#define HYPER_DMABUF_DOM_ID(id) \
 	(((id) >> 24) & 0xFF)
 
 /* each grant_ref_t is 4 bytes, so total 4096 grant_ref_t can be
@@ -18,11 +18,6 @@
  * frame buffer) */
 #define MAX_ALLOWED_NUM_PAGES_FOR_GREF_NUM_ARRAYS 4
 
-enum hyper_dmabuf_sgt_flags {
-        HYPER_DMABUF_SGT_INVALID = 0x10,
-        HYPER_DMABUF_SGT_UNEXPORTED,
-};
-
 /* stack of mapped sgts */
 struct sgt_list {
 	struct sg_table *sgt;
@@ -77,11 +72,13 @@ struct hyper_dmabuf_sgt_info {
 	int hyper_dmabuf_rdomain; /* domain importing this sgt */
 
 	struct dma_buf *dma_buf; /* needed to store this for freeing it later */
+	int nents; /* number of pages, which may be different than sgt->nents */
 	struct sgt_list *active_sgts;
 	struct attachment_list *active_attached;
 	struct kmap_vaddr_list *va_kmapped;
 	struct vmap_vaddr_list *va_vmapped;
-	int flags;
+	bool valid;
+	bool importer_exported; /* exported locally on importer's side */
 	struct hyper_dmabuf_shared_pages_info shared_pages_info;
 	int private[4]; /* device specific info (e.g. image's meta info?) */
 };
@@ -95,10 +92,10 @@ struct hyper_dmabuf_imported_sgt_info {
 	int last_len;	/* length of data in the last shared page */
 	int nents;	/* number of pages to be shared */
 	grant_ref_t gref; /* reference number of top level addressing page of shared pages */
+	struct dma_buf *dma_buf;
 	struct sg_table *sgt; /* sgt pointer after importing buffer */
 	struct hyper_dmabuf_shared_pages_info shared_pages_info;
-	int flags;
-	int ref_count;
+	bool valid;
 	int private[4]; /* device specific info (e.g. image's meta info?) */
 };
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index 116850e..f9e0df3 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -456,13 +456,12 @@ static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *info)
 	do {
 		rc = ring->req_cons;
 		rp = ring->sring->req_prod;
-
+		more_to_do = 0;
 		while (rc != rp) {
 			if (RING_REQUEST_CONS_OVERFLOW(ring, rc))
 				break;
 
 			memcpy(&req, RING_GET_REQUEST(ring, rc), sizeof(req));
-			printk("Got request\n");
 			ring->req_cons = ++rc;
 
 			ret = hyper_dmabuf_msg_parse(ring_info->sdomain, &req);
@@ -479,13 +478,11 @@ static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *info)
 				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(ring, notify);
 
 				if (notify) {
-					printk("Notyfing\n");
 					notify_remote_via_irq(ring_info->irq);
 				}
 			}
 
 			RING_FINAL_CHECK_FOR_REQUESTS(ring, more_to_do);
-			printk("Final check for requests %d\n", more_to_do);
 		}
 	} while (more_to_do);
 
@@ -541,7 +538,6 @@ static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *info)
 
 		if (i != ring->req_prod_pvt) {
 			RING_FINAL_CHECK_FOR_RESPONSES(ring, more_to_do);
-			printk("more to do %d\n", more_to_do);
 		} else {
 			ring->sring->rsp_event = i+1;
 		}
-- 
2.7.4

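The release path in this patch derives "final release" from the
dma-buf file's own reference count (via dmabuf_refcount()) instead
of a hand-maintained per-buffer ref_count. A minimal stand-alone
sketch of that check, with simplified stand-in fields rather than
the driver's real hyper_dmabuf_imported_sgt_info:

#include <stdio.h>

/* stand-in for the imported sgt_info; fields are illustrative */
struct imported_buf {
        int valid;      /* cleared when the exporter unexports */
        int file_refs;  /* file_count() of the re-exported dma_buf */
};

/* mirrors the final_release test in hyper_dmabuf_ops_release():
 * tear the buffer down only when it is both invalidated and no
 * local fd still holds a file reference to it
 */
static int is_final_release(const struct imported_buf *b)
{
        return b && !b->valid && b->file_refs == 0;
}

int main(void)
{
        struct imported_buf b = { 0, 0 };
        printf("final release? %d\n", is_final_release(&b)); /* 1 */
        return 0;
}
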
^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 15/60] hyper_dmabuf: reusing previously released hyper_dmabuf_id
  2017-12-19 19:29 ` Dongwon Kim
                   ` (26 preceding siblings ...)
  (?)
@ 2017-12-19 19:29 ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

Released hyper_dmabuf_ids are now stored in a stack
(hyper_dmabuf_private.id_queue) for reuse. This is to prevent the
id space for buffers from overflowing. We also limit the maximum
number of ids to 1000 for stability and optimal performance.
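
To make the id layout concrete, here is a minimal user-space sketch
using the two macros this patch adds in hyper_dmabuf_id.h (the top
8 bits carry the exporter's domain id, the low 24 bits carry the
per-domain counter or a reused id popped off the stack); the domain
and buffer numbers are illustrative only:

#include <stdio.h>

/* macros as defined in hyper_dmabuf_id.h below */
#define HYPER_DMABUF_ID_CREATE(domid, id) \
        ((((domid) & 0xFF) << 24) | ((id) & 0xFFFFFF))
#define HYPER_DMABUF_DOM_ID(id) \
        (((id) >> 24) & 0xFF)

int main(void)
{
        /* 43rd buffer exported from domain 3 */
        int id = HYPER_DMABUF_ID_CREATE(3, 42);
        printf("id=0x%08x dom=%d\n", id, HYPER_DMABUF_DOM_ID(id));
        /* prints: id=0x0300002a dom=3 */
        return 0;
}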

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/Makefile                  |  1 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        |  5 ++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        |  6 ++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c         | 76 ++++++++++++++++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h         | 24 +++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c        |  1 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 15 ++---
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c    |  3 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h     |  9 ---
 9 files changed, 120 insertions(+), 20 deletions(-)
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h

diff --git a/drivers/xen/hyper_dmabuf/Makefile b/drivers/xen/hyper_dmabuf/Makefile
index 3459382..c9b8b7f 100644
--- a/drivers/xen/hyper_dmabuf/Makefile
+++ b/drivers/xen/hyper_dmabuf/Makefile
@@ -7,6 +7,7 @@ ifneq ($(KERNELRELEASE),)
                                  hyper_dmabuf_list.o \
 				 hyper_dmabuf_imp.o \
 				 hyper_dmabuf_msg.o \
+				 hyper_dmabuf_id.o \
 				 hyper_dmabuf_remote_sync.o \
 				 xen/hyper_dmabuf_xen_comm.o \
 				 xen/hyper_dmabuf_xen_comm_list.o
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 5a7cfa5..66d6cb9 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -5,6 +5,7 @@
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_conf.h"
 #include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_id.h"
 #include "xen/hyper_dmabuf_xen_comm_list.h"
 #include "xen/hyper_dmabuf_xen_comm.h"
 
@@ -67,6 +68,10 @@ static void hyper_dmabuf_drv_exit(void)
 	if (hyper_dmabuf_private.work_queue)
 		destroy_workqueue(hyper_dmabuf_private.work_queue);
 
+	/* destroy id_queue */
+	if (hyper_dmabuf_private.id_queue)
+		destroy_reusable_list();
+
 	hyper_dmabuf_destroy_data_dir();
 	printk( KERN_NOTICE "dma_buf-src_sink model: Exiting" );
 	unregister_device();
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
index ff883e1..37b0cc1 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -1,10 +1,16 @@
 #ifndef __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
 #define __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
 
+struct list_reusable_id {
+	int id;
+	struct list_head list;
+};
+
 struct hyper_dmabuf_private {
         struct device *device;
 	int domid;
 	struct workqueue_struct *work_queue;
+	struct list_reusable_id *id_queue;
 };
 
 typedef int (*hyper_dmabuf_ioctl_t)(void *data);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
new file mode 100644
index 0000000..7bbb179
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
@@ -0,0 +1,76 @@
+#include <linux/list.h>
+#include <linux/slab.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_id.h"
+
+extern struct hyper_dmabuf_private hyper_dmabuf_private;
+
+void store_reusable_id(int id)
+{
+	struct list_reusable_id *reusable_head = hyper_dmabuf_private.id_queue;
+	struct list_reusable_id *new_reusable;
+
+	new_reusable = kmalloc(sizeof(*new_reusable), GFP_KERNEL);
+	new_reusable->id = id;
+
+	list_add(&new_reusable->list, &reusable_head->list);
+}
+
+static int retrieve_reusable_id(void)
+{
+	struct list_reusable_id *reusable_head = hyper_dmabuf_private.id_queue;
+
+	/* check if there is a reusable id */
+	if (!list_empty(&reusable_head->list)) {
+		reusable_head = list_first_entry(&reusable_head->list,
+						 struct list_reusable_id,
+						 list);
+
+		list_del(&reusable_head->list);
+		return reusable_head->id;
+	}
+
+	return -1;
+}
+
+void destroy_reusable_list(void)
+{
+	struct list_reusable_id *reusable_head = hyper_dmabuf_private.id_queue;
+	struct list_reusable_id *temp_head;
+
+	if (reusable_head) {
+		/* free memory of all reusable ids in the stack */
+		while (!list_empty(&reusable_head->list)) {
+			temp_head = list_first_entry(&reusable_head->list,
+						     struct list_reusable_id,
+						     list);
+			list_del(&temp_head->list);
+			kfree(temp_head);
+		}
+
+		/* freeing head */
+		kfree(reusable_head);
+	}
+}
+
+int hyper_dmabuf_get_id(void)
+{
+	static int id = 0;
+	struct list_reusable_id *reusable_head;
+	int ret;
+
+	/* first call to hyper_dmabuf_get_id */
+	if (id == 0) {
+		reusable_head = kmalloc(sizeof(*reusable_head), GFP_KERNEL);
+		reusable_head->id = -1; /* list head has an invalid id */
+		INIT_LIST_HEAD(&reusable_head->list);
+		hyper_dmabuf_private.id_queue = reusable_head;
+	}
+
+	ret = retrieve_reusable_id();
+
+	if (ret < 0 && id < HYPER_DMABUF_ID_MAX)
+		return HYPER_DMABUF_ID_CREATE(hyper_dmabuf_private.domid, id++);
+
+	return ret;
+}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
new file mode 100644
index 0000000..2c8daf3
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
@@ -0,0 +1,24 @@
+#ifndef __HYPER_DMABUF_ID_H__
+#define __HYPER_DMABUF_ID_H__
+
+/* Importer combines source domain id with given hyper_dmabuf_id
+ * to make it unique in case there are multiple exporters */
+
+#define HYPER_DMABUF_ID_CREATE(domid, id) \
+	((((domid) & 0xFF) << 24) | ((id) & 0xFFFFFF))
+
+#define HYPER_DMABUF_DOM_ID(id) \
+	(((id) >> 24) & 0xFF)
+
+/* currently the maximum number of buffers shared
+ * at any given moment is limited to 1000
+ */
+#define HYPER_DMABUF_ID_MAX 1000
+
+void store_reusable_id(int id);
+
+void destroy_reusable_list(void);
+
+int hyper_dmabuf_get_id(void);
+
+#endif /* __HYPER_DMABUF_ID_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
index fa445e5..b109138 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
@@ -7,6 +7,7 @@
 #include <asm/xen/page.h>
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_imp.h"
+#include "hyper_dmabuf_id.h"
 #include "xen/hyper_dmabuf_xen_comm.h"
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_list.h"
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index e334b77..5c6d9c8 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -11,6 +11,7 @@
 #include "hyper_dmabuf_imp.h"
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_id.h"
 #include "hyper_dmabuf_query.h"
 #include "xen/hyper_dmabuf_xen_comm.h"
 #include "xen/hyper_dmabuf_xen_comm_list.h"
@@ -18,16 +19,6 @@
 
 extern struct hyper_dmabuf_private hyper_dmabuf_private;
 
-static uint32_t hyper_dmabuf_id_gen(void) {
-	/* TODO: add proper implementation */
-	static uint32_t id = 1000;
-	static int32_t domid = -1;
-	if (domid == -1) {
-		domid = hyper_dmabuf_get_domid();
-	}
-	return HYPER_DMABUF_ID_IMPORTER(domid, id++);
-}
-
 static int hyper_dmabuf_exporter_ring_setup(void *data)
 {
 	struct ioctl_hyper_dmabuf_exporter_ring_setup *ring_attr;
@@ -133,7 +124,7 @@ static int hyper_dmabuf_export_remote(void *data)
 
 	sgt_info = kmalloc(sizeof(*sgt_info), GFP_KERNEL);
 
-	sgt_info->hyper_dmabuf_id = hyper_dmabuf_id_gen();
+	sgt_info->hyper_dmabuf_id = hyper_dmabuf_get_id();
 	/* TODO: We might need to consider using port number on event channel? */
 	sgt_info->hyper_dmabuf_rdomain = export_remote_attr->remote_domain;
 	sgt_info->dma_buf = dma_buf;
@@ -334,6 +325,8 @@ static int hyper_dmabuf_unexport(void *data)
 		hyper_dmabuf_cleanup_sgt_info(sgt_info, false);
 		hyper_dmabuf_remove_exported(unexport_attr->hyper_dmabuf_id);
 		kfree(sgt_info);
+		/* register hyper_dmabuf_id to the list for reuse */
+		store_reusable_id(unexport_attr->hyper_dmabuf_id);
 	}
 
 	return ret;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
index 5017b17..c5950e0 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -6,6 +6,7 @@
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_id.h"
 #include "xen/hyper_dmabuf_xen_comm.h"
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_imp.h"
@@ -124,6 +125,8 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 			hyper_dmabuf_cleanup_sgt_info(sgt_info, false);
 			hyper_dmabuf_remove_exported(id);
 			kfree(sgt_info);
+			/* store hyper_dmabuf_id in the list for reuse */
+			store_reusable_id(id);
 		}
 
 		break;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
index 92e06ff..b52f958 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -3,15 +3,6 @@
 
 #include <xen/interface/grant_table.h>
 
-/* Importer combine source domain id with given hyper_dmabuf_id
- * to make it unique in case there are multiple exporters */
-
-#define HYPER_DMABUF_ID_IMPORTER(domid, id) \
-	((((domid) & 0xFF) << 24) | ((id) & 0xFFFFFF))
-
-#define HYPER_DMABUF_DOM_ID(id) \
-	(((id) >> 24) & 0xFF)
-
 /* each grant_ref_t is 4 bytes, so total 4096 grant_ref_t can be
  * in this block meaning we can share 4KB*4096 = 16MB of buffer
  * (needs to be increased for large buffer use-cases such as 4K
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 16/60] hyper_dmabuf: define hypervisor specific backend API
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

For adoption of the hyper_dmabuf driver on hypervisors other
than Xen, a "backend" layer is defined and separated out from
the existing one-body structure.

A "backend" is essentially a table of function entry points that
provide the methods for the kernel's page-level sharing and
inter-VM communication using the hypervisor's native mechanism
(hypercalls).

All backend APIs are listed in "struct hyper_dmabuf_backend_ops"
as shown below.

struct hyper_dmabuf_backend_ops {
        /* retrieving id of current virtual machine */
        int (*get_vm_id)(void);

        /* get pages shared via hypervisor-specific method */
        int (*share_pages)(struct page **, int, int, void **);

        /* make shared pages unshared via hypervisor specific method */
        int (*unshare_pages)(void **, int);

        /* map remotely shared pages on importer's side via
         * hypervisor-specific method
         */
        struct page ** (*map_shared_pages)(int, int, int, void **);

        /* unmap and free shared pages on importer's side via
         * hypervisor-specific method
         */
        int (*unmap_shared_pages)(void **, int);

        /* initialize communication environment */
        int (*init_comm_env)(void);

        void (*destroy_comm)(void);

        /* upstream ch setup (receiving and responding) */
        int (*init_rx_ch)(int);

        /* downstream ch setup (transmitting and parsing responses) */
        int (*init_tx_ch)(int);

        int (*send_req)(int, struct hyper_dmabuf_req *, int);
};

Within this new structure, only the backend APIs need to be
re-designed or replaced with new ones when porting this sharing
model to a different hypervisor environment, which is a lot simpler
than completely redesigning the whole driver for a new hypervisor.
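
As an illustration of what such a port involves, here is a
hypothetical skeleton for some other hypervisor "foo" (the names
are illustrative only, not part of this series); it would be
plugged into hyper_dmabuf_private.backend_ops the same way
xen_backend_ops is in this patch:

/* hypothetical foo backend - a sketch, not working code */
static int foo_get_vm_id(void)
{
        /* ask the hypervisor for this VM's id */
        return 0;
}

static int foo_share_pages(struct page **pages, int domid,
                           int nents, void **refs_info)
{
        /* grant domid access to nents pages and record the
         * hypervisor-specific handles in *refs_info */
        return -1; /* stub */
}

/* ...the remaining ops are filled in the same way... */

struct hyper_dmabuf_backend_ops foo_backend_ops = {
        .get_vm_id   = foo_get_vm_id,
        .share_pages = foo_share_pages,
        /* ... */
};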

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/Makefile                  |  11 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h       |   1 -
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        |  33 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        | 112 ++----
 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c         |   6 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c        | 426 ++-------------------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h        |  14 -
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 134 +++----
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h      |  87 +++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        |  52 ++-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h        |  23 +-
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c    |   4 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h     |  26 +-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   | 303 +++++++++------
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h   |  51 +--
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c  |  67 ++--
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h  |  32 +-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c    |  22 ++
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h    |  20 +
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c    | 356 +++++++++++++++++
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h    |  19 +
 21 files changed, 949 insertions(+), 850 deletions(-)
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
 create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
 create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
 create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
 create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h

diff --git a/drivers/xen/hyper_dmabuf/Makefile b/drivers/xen/hyper_dmabuf/Makefile
index c9b8b7f..d90cfc3 100644
--- a/drivers/xen/hyper_dmabuf/Makefile
+++ b/drivers/xen/hyper_dmabuf/Makefile
@@ -1,5 +1,7 @@
 TARGET_MODULE:=hyper_dmabuf
 
+PLATFORM:=XEN
+
 # If we running by kernel building system
 ifneq ($(KERNELRELEASE),)
 	$(TARGET_MODULE)-objs := hyper_dmabuf_drv.o \
@@ -9,8 +11,13 @@ ifneq ($(KERNELRELEASE),)
 				 hyper_dmabuf_msg.o \
 				 hyper_dmabuf_id.o \
 				 hyper_dmabuf_remote_sync.o \
-				 xen/hyper_dmabuf_xen_comm.o \
-				 xen/hyper_dmabuf_xen_comm_list.o
+
+ifeq ($(CONFIG_XEN), y)
+	$(TARGET_MODULE)-objs += xen/hyper_dmabuf_xen_comm.o \
+				 xen/hyper_dmabuf_xen_comm_list.o \
+				 xen/hyper_dmabuf_xen_shm.o \
+				 xen/hyper_dmabuf_xen_drv.o
+endif
 
 obj-$(CONFIG_HYPER_DMABUF) := $(TARGET_MODULE).o
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
index 3d9b2d6..d012b05 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
@@ -1,2 +1 @@
 #define CURRENT_TARGET XEN
-#define INTER_DOMAIN_DMABUF_SYNCHRONIZATION
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 66d6cb9..ddcc955 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -1,15 +1,18 @@
-#include <linux/init.h>       /* module_init, module_exit */
-#include <linux/module.h> /* version info, MODULE_LICENSE, MODULE_AUTHOR, printk() */
+#include <linux/init.h>
+#include <linux/module.h>
 #include <linux/workqueue.h>
-#include <xen/grant_table.h>
-#include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_conf.h"
+#include "hyper_dmabuf_msg.h"
+#include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_id.h"
-#include "xen/hyper_dmabuf_xen_comm_list.h"
-#include "xen/hyper_dmabuf_xen_comm.h"
 
-MODULE_LICENSE("Dual BSD/GPL");
+#ifdef CONFIG_XEN
+#include "xen/hyper_dmabuf_xen_drv.h"
+extern struct hyper_dmabuf_backend_ops xen_backend_ops;
+#endif
+
+MODULE_LICENSE("GPL");
 MODULE_AUTHOR("IOTG-PED, INTEL");
 
 int register_device(void);
@@ -29,24 +32,24 @@ static int hyper_dmabuf_drv_init(void)
 		return -EINVAL;
 	}
 
+#ifdef CONFIG_XEN
+	hyper_dmabuf_private.backend_ops = &xen_backend_ops;
+#endif
+
 	printk( KERN_NOTICE "initializing database for imported/exported dmabufs\n");
 
 	/* device structure initialization */
 	/* currently only does work-queue initialization */
 	hyper_dmabuf_private.work_queue = create_workqueue("hyper_dmabuf_wqueue");
-	hyper_dmabuf_private.domid = hyper_dmabuf_get_domid();
+	hyper_dmabuf_private.domid = hyper_dmabuf_private.backend_ops->get_vm_id();
 
 	ret = hyper_dmabuf_table_init();
 	if (ret < 0) {
 		return -EINVAL;
 	}
 
-	ret = hyper_dmabuf_ring_table_init();
-	if (ret < 0) {
-		return -EINVAL;
-	}
+	ret = hyper_dmabuf_private.backend_ops->init_comm_env();
 
-	ret = hyper_dmabuf_setup_data_dir();
 	if (ret < 0) {
 		return -EINVAL;
 	}
@@ -61,8 +64,7 @@ static void hyper_dmabuf_drv_exit(void)
 	/* hash tables for export/import entries and ring_infos */
 	hyper_dmabuf_table_destroy();
 
-	hyper_dmabuf_cleanup_ringbufs();
-	hyper_dmabuf_ring_table_destroy();
+	hyper_dmabuf_private.backend_ops->destroy_comm();
 
 	/* destroy workqueue */
 	if (hyper_dmabuf_private.work_queue)
@@ -72,7 +74,6 @@ static void hyper_dmabuf_drv_exit(void)
 	if (hyper_dmabuf_private.id_queue)
 		destroy_reusable_list();
 
-	hyper_dmabuf_destroy_data_dir();
 	printk( KERN_NOTICE "dma_buf-src_sink model: Exiting" );
 	unregister_device();
 }
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
index 37b0cc1..03d77d7 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -6,94 +6,48 @@ struct list_reusable_id {
 	struct list_head list;
 };
 
-struct hyper_dmabuf_private {
-        struct device *device;
-	int domid;
-	struct workqueue_struct *work_queue;
-	struct list_reusable_id *id_queue;
-};
+struct hyper_dmabuf_backend_ops {
+	/* retrieving id of current virtual machine */
+	int (*get_vm_id)(void);
 
-typedef int (*hyper_dmabuf_ioctl_t)(void *data);
+	/* get pages shared via hypervisor-specific method */
+	int (*share_pages)(struct page **, int, int, void **);
 
-struct hyper_dmabuf_ioctl_desc {
-	unsigned int cmd;
-	int flags;
-	hyper_dmabuf_ioctl_t func;
-	const char *name;
-};
+	/* make shared pages unshared via hypervisor specific method */
+	int (*unshare_pages)(void **, int);
 
-#define HYPER_DMABUF_IOCTL_DEF(ioctl, _func, _flags) 	\
-	[_IOC_NR(ioctl)] = {				\
-			.cmd = ioctl,			\
-			.func = _func,			\
-			.flags = _flags,		\
-			.name = #ioctl			\
-	}
+	/* map remotely shared pages on importer's side via
+	 * hypervisor-specific method
+	 */
+	struct page ** (*map_shared_pages)(int, int, int, void **);
 
-#define IOCTL_HYPER_DMABUF_EXPORTER_RING_SETUP \
-_IOC(_IOC_NONE, 'G', 0, sizeof(struct ioctl_hyper_dmabuf_exporter_ring_setup))
-struct ioctl_hyper_dmabuf_exporter_ring_setup {
-	/* IN parameters */
-	/* Remote domain id */
-	uint32_t remote_domain;
-};
+	/* unmap and free shared pages on importer's side via
+	 * hypervisor-specific method
+	 */
+	int (*unmap_shared_pages)(void **, int);
 
-#define IOCTL_HYPER_DMABUF_IMPORTER_RING_SETUP \
-_IOC(_IOC_NONE, 'G', 1, sizeof(struct ioctl_hyper_dmabuf_importer_ring_setup))
-struct ioctl_hyper_dmabuf_importer_ring_setup {
-	/* IN parameters */
-	/* Source domain id */
-	uint32_t source_domain;
-};
+	/* initialize communication environment */
+	int (*init_comm_env)(void);
 
-#define IOCTL_HYPER_DMABUF_EXPORT_REMOTE \
-_IOC(_IOC_NONE, 'G', 2, sizeof(struct ioctl_hyper_dmabuf_export_remote))
-struct ioctl_hyper_dmabuf_export_remote {
-	/* IN parameters */
-	/* DMA buf fd to be exported */
-	uint32_t dmabuf_fd;
-	/* Domain id to which buffer should be exported */
-	uint32_t remote_domain;
-	/* exported dma buf id */
-	uint32_t hyper_dmabuf_id;
-	uint32_t private[4];
-};
+	void (*destroy_comm)(void);
 
-#define IOCTL_HYPER_DMABUF_EXPORT_FD \
-_IOC(_IOC_NONE, 'G', 3, sizeof(struct ioctl_hyper_dmabuf_export_fd))
-struct ioctl_hyper_dmabuf_export_fd {
-	/* IN parameters */
-	/* hyper dmabuf id to be imported */
-	uint32_t hyper_dmabuf_id;
-	/* flags */
-	uint32_t flags;
-	/* OUT parameters */
-	/* exported dma buf fd */
-	uint32_t fd;
-};
+	/* upstream ch setup (receiving and responding) */
+	int (*init_rx_ch)(int);
+
+	/* downstream ch setup (transmitting and parsing responses) */
+	int (*init_tx_ch)(int);
 
-#define IOCTL_HYPER_DMABUF_UNEXPORT \
-_IOC(_IOC_NONE, 'G', 4, sizeof(struct ioctl_hyper_dmabuf_unexport))
-struct ioctl_hyper_dmabuf_unexport {
-	/* IN parameters */
-	/* hyper dmabuf id to be unexported */
-	uint32_t hyper_dmabuf_id;
-	/* OUT parameters */
-	/* Status of request */
-	uint32_t status;
+	int (*send_req)(int, struct hyper_dmabuf_req *, int);
 };
 
-#define IOCTL_HYPER_DMABUF_QUERY \
-_IOC(_IOC_NONE, 'G', 5, sizeof(struct ioctl_hyper_dmabuf_query))
-struct ioctl_hyper_dmabuf_query {
-	/* in parameters */
-	/* hyper dmabuf id to be queried */
-	uint32_t hyper_dmabuf_id;
-	/* item to be queried */
-	uint32_t item;
-	/* OUT parameters */
-	/* Value of queried item */
-	uint32_t info;
+struct hyper_dmabuf_private {
+        struct device *device;
+	int domid;
+	struct workqueue_struct *work_queue;
+	struct list_reusable_id *id_queue;
+
+	/* backend ops - hypervisor specific */
+	struct hyper_dmabuf_backend_ops *backend_ops;
 };
 
-#endif //__LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
+#endif /* __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
index 7bbb179..b58a111 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
@@ -1,5 +1,6 @@
 #include <linux/list.h>
 #include <linux/slab.h>
+#include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_id.h"
 
@@ -19,6 +20,7 @@ void store_reusable_id(int id)
 static int retrieve_reusable_id(void)
 {
 	struct list_reusable_id *reusable_head = hyper_dmabuf_private.id_queue;
+	int id;
 
 	/* check there is reusable id */
 	if (!list_empty(&reusable_head->list)) {
@@ -27,7 +29,9 @@ static int retrieve_reusable_id(void)
 						 list);
 
 		list_del(&reusable_head->list);
-		return reusable_head->id;
+		id = reusable_head->id;
+		kfree(reusable_head);
+		return id;
 	}
 
 	return -1;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
index b109138..0f104b9 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
@@ -8,10 +8,12 @@
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_imp.h"
 #include "hyper_dmabuf_id.h"
-#include "xen/hyper_dmabuf_xen_comm.h"
 #include "hyper_dmabuf_msg.h"
+#include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_list.h"
 
+extern struct hyper_dmabuf_private hyper_dmabuf_private;
+
 #define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
 
 int dmabuf_refcount(struct dma_buf *dma_buf)
@@ -138,397 +140,10 @@ struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
 /* free sg_table */
 void hyper_dmabuf_free_sgt(struct sg_table* sgt)
 {
-	sg_free_table(sgt);
-	kfree(sgt);
-}
-
-/*
- * Creates 2 level page directory structure for referencing shared pages.
- * Top level page is a single page that contains up to 1024 refids that
- * point to 2nd level pages.
- * Each 2nd level page contains up to 1024 refids that point to shared
- * data pages.
- * There will always be one top level page and number of 2nd level pages
- * depends on number of shared data pages.
- *
- *      Top level page                2nd level pages            Data pages
- * +-------------------------+   ┌>+--------------------+ ┌--->+------------+
- * |2nd level page 0 refid   |---┘ |Data page 0 refid   |-┘    |Data page 0 |
- * |2nd level page 1 refid   |---┐ |Data page 1 refid   |-┐    +------------+
- * |           ...           |   | |     ....           | |
- * |2nd level page 1023 refid|-┐ | |Data page 1023 refid| └--->+------------+
- * +-------------------------+ | | +--------------------+      |Data page 1 |
- *                             | |                             +------------+
- *                             | └>+--------------------+
- *                             |   |Data page 1024 refid|
- *                             |   |Data page 1025 refid|
- *                             |   |       ...          |
- *                             |   |Data page 2047 refid|
- *                             |   +--------------------+
- *                             |
- *                             |        .....
- *                             └-->+-----------------------+
- *                                 |Data page 1047552 refid|
- *                                 |Data page 1047553 refid|
- *                                 |       ...             |
- *                                 |Data page 1048575 refid|-->+------------------+
- *                                 +-----------------------+   |Data page 1048575 |
- *                                                             +------------------+
- *
- * Using such 2 level structure it is possible to reference up to 4GB of
- * shared data using single refid pointing to top level page.
- *
- * Returns refid of top level page.
- */
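
For concreteness, the capacity claim in the diagram works out as follows
with 4KB pages and a 4-byte grant_ref_t (the common x86 case):

    REFS_PER_PAGE = PAGE_SIZE / sizeof(grant_ref_t) = 4096 / 4 = 1024

    1 top-level page     -> up to 1024 refids of 2nd-level pages
    1024 2nd-level pages -> up to 1024 * 1024 = 1048576 data-page refids
    1048576 data pages   -> 1048576 * 4KB = 4GB of shared data,
                            all reachable from a single top-level refid
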
-grant_ref_t hyper_dmabuf_create_addressing_tables(grant_ref_t *data_refs, int nents, int rdomain,
-						  struct hyper_dmabuf_shared_pages_info *shared_pages_info)
-{
-	/*
-	 * Calculate number of pages needed for 2nd level addressing:
-	 */
-	int n_2nd_level_pages = (nents/REFS_PER_PAGE +
-				((nents % REFS_PER_PAGE) ? 1: 0));
-	int i;
-	unsigned long gref_page_start;
-	grant_ref_t *tmp_page;
-	grant_ref_t top_level_ref;
-	grant_ref_t * addr_refs;
-	addr_refs = kcalloc(sizeof(grant_ref_t), n_2nd_level_pages, GFP_KERNEL);
-
-	gref_page_start = __get_free_pages(GFP_KERNEL, n_2nd_level_pages);
-	tmp_page = (grant_ref_t *)gref_page_start;
-
-	/* Store 2nd level pages to be freed later */
-	shared_pages_info->addr_pages = tmp_page;
-
-	/*TODO: make sure that allocated memory is filled with 0*/
-
-	/* Share 2nd level addressing pages in readonly mode*/
-	for (i=0; i< n_2nd_level_pages; i++) {
-		addr_refs[i] = gnttab_grant_foreign_access(rdomain,
-							   virt_to_mfn((unsigned long)tmp_page+i*PAGE_SIZE ),
-							   1);
-	}
-
-	/*
-	 * fill second level pages with data refs
-	 */
-	for (i = 0; i < nents; i++) {
-		tmp_page[i] = data_refs[i];
-	}
-
-
-	/* allocate top level page */
-	gref_page_start = __get_free_pages(GFP_KERNEL, 1);
-	tmp_page = (grant_ref_t *)gref_page_start;
-
-	/* Store top level page to be freed later */
-	shared_pages_info->top_level_page = tmp_page;
-
-	/*
-	 * fill top level page with reference numbers of second level pages refs.
-	 */
-	for (i=0; i< n_2nd_level_pages; i++) {
-		tmp_page[i] =  addr_refs[i];
-	}
-
-	/* Share top level addressing page in readonly mode*/
-	top_level_ref = gnttab_grant_foreign_access(rdomain,
-						    virt_to_mfn((unsigned long)tmp_page),
-						    1);
-
-	kfree(addr_refs);
-
-	return top_level_ref;
-}
-
-/*
- * Maps provided top level ref id and then return array of pages containing data refs.
- */
-struct page** hyper_dmabuf_get_data_refs(grant_ref_t top_level_ref, int domid, int nents,
-					 struct hyper_dmabuf_shared_pages_info *shared_pages_info)
-{
-	struct page *top_level_page;
-	struct page **level2_pages;
-
-	grant_ref_t *top_level_refs;
-
-	struct gnttab_map_grant_ref top_level_map_ops;
-	struct gnttab_unmap_grant_ref top_level_unmap_ops;
-
-	struct gnttab_map_grant_ref *map_ops;
-	struct gnttab_unmap_grant_ref *unmap_ops;
-
-	unsigned long addr;
-	int n_level2_refs = 0;
-	int i;
-
-	n_level2_refs = (nents / REFS_PER_PAGE) + ((nents % REFS_PER_PAGE) ? 1 : 0);
-
-	level2_pages = kcalloc(sizeof(struct page*), n_level2_refs, GFP_KERNEL);
-
-	map_ops = kcalloc(sizeof(map_ops[0]), REFS_PER_PAGE, GFP_KERNEL);
-	unmap_ops = kcalloc(sizeof(unmap_ops[0]), REFS_PER_PAGE, GFP_KERNEL);
-
-	/* Map top level addressing page */
-	if (gnttab_alloc_pages(1, &top_level_page)) {
-		printk("Cannot allocate pages\n");
-		return NULL;
-	}
-
-	addr = (unsigned long)pfn_to_kaddr(page_to_pfn(top_level_page));
-	gnttab_set_map_op(&top_level_map_ops, addr, GNTMAP_host_map | GNTMAP_readonly,
-			  top_level_ref, domid);
-
-	gnttab_set_unmap_op(&top_level_unmap_ops, addr, GNTMAP_host_map | GNTMAP_readonly, -1);
-
-	if (gnttab_map_refs(&top_level_map_ops, NULL, &top_level_page, 1)) {
-		printk("\nxen: dom0: HYPERVISOR map grant ref failed");
-		return NULL;
-	}
-
-	if (top_level_map_ops.status) {
-		printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d",
-				top_level_map_ops.status);
-		return NULL;
-	} else {
-		top_level_unmap_ops.handle = top_level_map_ops.handle;
-	}
-
-	/* Parse contents of top level addressing page to find how many second level pages are there*/
-	top_level_refs = pfn_to_kaddr(page_to_pfn(top_level_page));
-
-	/* Map all second level pages */
-	if (gnttab_alloc_pages(n_level2_refs, level2_pages)) {
-		printk("Cannot allocate pages\n");
-		return NULL;
-	}
-
-	for (i = 0; i < n_level2_refs; i++) {
-		addr = (unsigned long)pfn_to_kaddr(page_to_pfn(level2_pages[i]));
-		gnttab_set_map_op(&map_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly,
-				  top_level_refs[i], domid);
-		gnttab_set_unmap_op(&unmap_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, -1);
-	}
-
-	if (gnttab_map_refs(map_ops, NULL, level2_pages, n_level2_refs)) {
-		printk("\nxen: dom0: HYPERVISOR map grant ref failed");
-		return NULL;
-	}
-
-	/* Checks if pages were mapped correctly and at the same time is calculating total number of data refids*/
-	for (i = 0; i < n_level2_refs; i++) {
-		if (map_ops[i].status) {
-			printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d",
-			       map_ops[i].status);
-			return NULL;
-		} else {
-			unmap_ops[i].handle = map_ops[i].handle;
-		}
-	}
-
-	/* Unmap top level page, as it won't be needed any longer */
-	if (gnttab_unmap_refs(&top_level_unmap_ops, NULL, &top_level_page, 1)) {
-		printk("\xen: cannot unmap top level page\n");
-		return NULL;
-	}
-
-	gnttab_free_pages(1, &top_level_page);
-	kfree(map_ops);
-	shared_pages_info->unmap_ops = unmap_ops;
-
-	return level2_pages;
-}
-
-
-/* This collects all reference numbers for 2nd level shared pages and create a table
- * with those in 1st level shared pages then return reference numbers for this top level
- * table. */
-grant_ref_t hyper_dmabuf_create_gref_table(struct page **pages, int rdomain, int nents,
-					   struct hyper_dmabuf_shared_pages_info *shared_pages_info)
-{
-	int i = 0;
-	grant_ref_t *data_refs;
-	grant_ref_t top_level_ref;
-
-	/* allocate temp array for refs of shared data pages */
-	data_refs = kcalloc(nents, sizeof(grant_ref_t), GFP_KERNEL);
-
-	/* share data pages in rw mode*/
-	for (i=0; i<nents; i++) {
-		data_refs[i] = gnttab_grant_foreign_access(rdomain,
-							   pfn_to_mfn(page_to_pfn(pages[i])),
-							   0);
-	}
-
-	/* create additional shared pages with 2 level addressing of data pages */
-	top_level_ref = hyper_dmabuf_create_addressing_tables(data_refs, nents, rdomain,
-							      shared_pages_info);
-
-	/* Store exported pages refid to be unshared later */
-	shared_pages_info->data_refs = data_refs;
-	shared_pages_info->top_level_ref = top_level_ref;
-
-	return top_level_ref;
-}
-
-int hyper_dmabuf_cleanup_gref_table(struct hyper_dmabuf_sgt_info *sgt_info) {
-	uint32_t i = 0;
-	struct hyper_dmabuf_shared_pages_info *shared_pages_info = &sgt_info->shared_pages_info;
-
-	grant_ref_t *ref = shared_pages_info->top_level_page;
-	int n_2nd_level_pages = (sgt_info->nents/REFS_PER_PAGE +
-				((sgt_info->nents % REFS_PER_PAGE) ? 1: 0));
-
-
-	if (shared_pages_info->data_refs == NULL ||
-	    shared_pages_info->addr_pages ==  NULL ||
-	    shared_pages_info->top_level_page == NULL ||
-	    shared_pages_info->top_level_ref == -1) {
-		printk("gref table for hyper_dmabuf already cleaned up\n");
-		return 0;
-	}
-
-	/* End foreign access for 2nd level addressing pages */
-	while(ref[i] != 0 && i < n_2nd_level_pages) {
-		if (gnttab_query_foreign_access(ref[i])) {
-			printk("refid not shared !!\n");
-		}
-		if (!gnttab_end_foreign_access_ref(ref[i], 1)) {
-			printk("refid still in use!!!\n");
-		}
-		gnttab_free_grant_reference(ref[i]);
-		i++;
-	}
-	free_pages((unsigned long)shared_pages_info->addr_pages, i);
-
-
-	/* End foreign access for top level addressing page */
-	if (gnttab_query_foreign_access(shared_pages_info->top_level_ref)) {
-		printk("refid not shared !!\n");
-	}
-	gnttab_end_foreign_access_ref(shared_pages_info->top_level_ref, 1);
-	gnttab_free_grant_reference(shared_pages_info->top_level_ref);
-
-	free_pages((unsigned long)shared_pages_info->top_level_page, 1);
-
-	/* End foreign access for data pages, but do not free them */
-	for (i = 0; i < sgt_info->nents; i++) {
-		if (gnttab_query_foreign_access(shared_pages_info->data_refs[i])) {
-			printk("refid not shared !!\n");
-		}
-		gnttab_end_foreign_access_ref(shared_pages_info->data_refs[i], 0);
-		gnttab_free_grant_reference(shared_pages_info->data_refs[i]);
-	}
-
-	kfree(shared_pages_info->data_refs);
-
-	shared_pages_info->data_refs = NULL;
-	shared_pages_info->addr_pages = NULL;
-	shared_pages_info->top_level_page = NULL;
-	shared_pages_info->top_level_ref = -1;
-
-	return 0;
-}
-
-int hyper_dmabuf_cleanup_imported_pages(struct hyper_dmabuf_imported_sgt_info *sgt_info) {
-	struct hyper_dmabuf_shared_pages_info *shared_pages_info = &sgt_info->shared_pages_info;
-
-	if(shared_pages_info->unmap_ops == NULL ||
-	   shared_pages_info->data_pages == NULL) {
-		printk("Imported pages already cleaned up or buffer was not imported yet\n");
-		return 0;
-	}
-
-	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL,
-			      shared_pages_info->data_pages, sgt_info->nents) ) {
-		printk("Cannot unmap data pages\n");
-		return -EINVAL;
-	}
-
-	gnttab_free_pages(sgt_info->nents, shared_pages_info->data_pages);
-	kfree(shared_pages_info->data_pages);
-	kfree(shared_pages_info->unmap_ops);
-	shared_pages_info->unmap_ops = NULL;
-	shared_pages_info->data_pages = NULL;
-
-	return 0;
-}
-
-/* map and construct sg_lists from reference numbers */
-struct sg_table* hyper_dmabuf_map_pages(grant_ref_t top_level_gref, int frst_ofst,
-					int last_len, int nents, int sdomain,
-					struct hyper_dmabuf_shared_pages_info *shared_pages_info)
-{
-	struct sg_table *st;
-	struct page **pages;
-	struct gnttab_map_grant_ref *ops;
-	struct gnttab_unmap_grant_ref *unmap_ops;
-	unsigned long addr;
-	grant_ref_t *refs;
-	int i;
-	int n_level2_refs = (nents / REFS_PER_PAGE) + ((nents % REFS_PER_PAGE) ? 1 : 0);
-
-	/* Get data refids */
-	struct page** refid_pages = hyper_dmabuf_get_data_refs(top_level_gref, sdomain, nents,
-							       shared_pages_info);
-
-	pages = kcalloc(sizeof(struct page*), nents, GFP_KERNEL);
-	if (pages == NULL) {
-		return NULL;
-	}
-
-	/* allocate new pages that are mapped to shared pages via grant-table */
-	if (gnttab_alloc_pages(nents, pages)) {
-		printk("Cannot allocate pages\n");
-		return NULL;
-	}
-
-	ops = kcalloc(nents, sizeof(struct gnttab_map_grant_ref),
-		      GFP_KERNEL);
-	unmap_ops = kcalloc(nents, sizeof(struct gnttab_unmap_grant_ref),
-			    GFP_KERNEL);
-
-	for (i=0; i<nents; i++) {
-		addr = (unsigned long)pfn_to_kaddr(page_to_pfn(pages[i]));
-		refs = pfn_to_kaddr(page_to_pfn(refid_pages[i / REFS_PER_PAGE]));
-		gnttab_set_map_op(&ops[i], addr, GNTMAP_host_map | GNTMAP_readonly,
-				refs[i % REFS_PER_PAGE], sdomain);
-		gnttab_set_unmap_op(&unmap_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, -1);
-	}
-
-	if (gnttab_map_refs(ops, NULL, pages, nents)) {
-		printk("\nxen: dom0: HYPERVISOR map grant ref failed\n");
-		return NULL;
-	}
-
-	for (i=0; i<nents; i++) {
-		if (ops[i].status) {
-			printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d\n",
-				ops[0].status);
-			return NULL;
-		} else {
-			unmap_ops[i].handle = ops[i].handle;
-		}
+	if (sgt) {
+		sg_free_table(sgt);
+		kfree(sgt);
 	}
-
-	st = hyper_dmabuf_create_sgt(pages, frst_ofst, last_len, nents);
-
-	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL, refid_pages,
-			n_level2_refs) ) {
-		printk("Cannot unmap 2nd level refs\n");
-		return NULL;
-	}
-
-	gnttab_free_pages(n_level2_refs, refid_pages);
-	kfree(refid_pages);
-
-	kfree(shared_pages_info->unmap_ops);
-	shared_pages_info->unmap_ops = unmap_ops;
-	shared_pages_info->data_pages = pages;
-	kfree(ops);
-
-	return st;
 }
 
 int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int force)
@@ -537,6 +152,7 @@ int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int fo
 	struct attachment_list *attachl;
 	struct kmap_vaddr_list *va_kmapl;
 	struct vmap_vaddr_list *va_vmapl;
+	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
 
 	if (!sgt_info) {
 		printk("invalid hyper_dmabuf_id\n");
@@ -598,7 +214,7 @@ int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int fo
 	}
 
 	/* Start cleanup of buffer in reverse order to exporting */
-	hyper_dmabuf_cleanup_gref_table(sgt_info);
+	ops->unshare_pages(&sgt_info->refs_info, sgt_info->nents);
 
 	/* unmap dma-buf */
 	dma_buf_unmap_attachment(sgt_info->active_attached->attach,
@@ -620,21 +236,22 @@ int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int fo
 	return 0;
 }
 
-inline int hyper_dmabuf_sync_request_and_wait(int id, int ops)
+inline int hyper_dmabuf_sync_request_and_wait(int id, int dmabuf_ops)
 {
-	struct hyper_dmabuf_ring_rq *req;
+	struct hyper_dmabuf_req *req;
+	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
 	int operands[2];
 	int ret;
 
 	operands[0] = id;
-	operands[1] = ops;
+	operands[1] = dmabuf_ops;
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
 	hyper_dmabuf_create_request(req, HYPER_DMABUF_OPS_TO_SOURCE, &operands[0]);
 
 	/* send request and wait for a response */
-	ret = hyper_dmabuf_send_request(HYPER_DMABUF_DOM_ID(id), req, true);
+	ret = ops->send_req(HYPER_DMABUF_DOM_ID(id), req, true);
 
 	kfree(req);
 
@@ -753,6 +370,7 @@ static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
 static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 {
 	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
 	int ret;
 	int final_release;
 
@@ -761,16 +379,22 @@ static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dma_buf->priv;
 
-	final_release = sgt_info && !sgt_info->valid &&
-		       !dmabuf_refcount(sgt_info->dma_buf);
-
 	if (!dmabuf_refcount(sgt_info->dma_buf)) {
 		sgt_info->dma_buf = NULL;
 	}
 
-	if (final_release) {
-		hyper_dmabuf_cleanup_imported_pages(sgt_info);
+	sgt_info->num_importers--;
+
+	if (sgt_info->num_importers == 0) {
+		ops->unmap_shared_pages(&sgt_info->refs_info, sgt_info->nents);
 		hyper_dmabuf_free_sgt(sgt_info->sgt);
+		sgt_info->sgt = NULL;
+	}
+
+	final_release = sgt_info && !sgt_info->valid &&
+		        !sgt_info->num_importers;
+
+	if (final_release) {
 		ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 							HYPER_DMABUF_OPS_RELEASE_FINAL);
 	} else {
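
With num_importers replacing the old boolean, the release path behaves
like a refcount. A worked sequence for a buffer whose fd was exported
twice on the importing domain:

    export_fd called twice -> num_importers == 2, pages mapped once
    1st dma_buf release    -> num_importers == 1, mapping kept
    2nd dma_buf release    -> num_importers == 0, pages unmapped, sgt freed
    if the exporter has also unexported (valid == 0), RELEASE_FINAL is
    sent; otherwise the else branch the hunk ends in takes the non-final
    release path
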
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
index 1b0801f..a4a6d63 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
@@ -11,20 +11,6 @@ struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt);
 struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
                                 int frst_ofst, int last_len, int nents);
 
-grant_ref_t hyper_dmabuf_create_gref_table(struct page **pages, int rdomain, int nents,
-					   struct hyper_dmabuf_shared_pages_info *shared_pages_info);
-
-int hyper_dmabuf_cleanup_gref_table(struct hyper_dmabuf_sgt_info *sgt_info);
-
-int hyper_dmabuf_cleanup_imported_pages(struct hyper_dmabuf_imported_sgt_info *sgt_info);
-
-/* map first level tables that contains reference numbers for actual shared pages */
-grant_ref_t *hyper_dmabuf_map_gref_table(grant_ref_t *gref_table, int n_pages_table);
-
-/* map and construct sg_lists from reference numbers */
-struct sg_table* hyper_dmabuf_map_pages(grant_ref_t gref, int frst_ofst, int last_len, int nents, int sdomain,
-					struct hyper_dmabuf_shared_pages_info *shared_pages_info);
-
 int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int force);
 
 void hyper_dmabuf_free_sgt(struct sg_table *sgt);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 5c6d9c8..70107bb 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -8,47 +8,37 @@
 #include <linux/delay.h>
 #include <linux/list.h>
 #include "hyper_dmabuf_struct.h"
-#include "hyper_dmabuf_imp.h"
+#include "hyper_dmabuf_ioctl.h"
 #include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_id.h"
+#include "hyper_dmabuf_imp.h"
 #include "hyper_dmabuf_query.h"
-#include "xen/hyper_dmabuf_xen_comm.h"
-#include "xen/hyper_dmabuf_xen_comm_list.h"
-#include "hyper_dmabuf_msg.h"
 
 extern struct hyper_dmabuf_private hyper_dmabuf_private;
 
-static int hyper_dmabuf_exporter_ring_setup(void *data)
+static int hyper_dmabuf_tx_ch_setup(void *data)
 {
-	struct ioctl_hyper_dmabuf_exporter_ring_setup *ring_attr;
-	struct hyper_dmabuf_ring_info_export *ring_info;
+	struct ioctl_hyper_dmabuf_tx_ch_setup *tx_ch_attr;
+	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
 	int ret = 0;
 
 	if (!data) {
 		printk("user data is NULL\n");
 		return -1;
 	}
-	ring_attr = (struct ioctl_hyper_dmabuf_exporter_ring_setup *)data;
-
-	/* check if the ring ch already exists */
-	ring_info = hyper_dmabuf_find_exporter_ring(ring_attr->remote_domain);
-
-	if (ring_info) {
-		printk("(exporter's) ring ch to domid = %d already exist\ngref = %d, port = %d\n",
-			ring_info->rdomain, ring_info->gref_ring, ring_info->port);
-		return 0;
-	}
+	tx_ch_attr = (struct ioctl_hyper_dmabuf_tx_ch_setup *)data;
 
-	ret = hyper_dmabuf_exporter_ringbuf_init(ring_attr->remote_domain);
+	ret = ops->init_tx_ch(tx_ch_attr->remote_domain);
 
 	return ret;
 }
 
-static int hyper_dmabuf_importer_ring_setup(void *data)
+static int hyper_dmabuf_rx_ch_setup(void *data)
 {
-	struct ioctl_hyper_dmabuf_importer_ring_setup *setup_imp_ring_attr;
-	struct hyper_dmabuf_ring_info_import *ring_info;
+	struct ioctl_hyper_dmabuf_rx_ch_setup *rx_ch_attr;
+	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
 	int ret = 0;
 
 	if (!data) {
@@ -56,17 +46,9 @@ static int hyper_dmabuf_importer_ring_setup(void *data)
 		return -1;
 	}
 
-	setup_imp_ring_attr = (struct ioctl_hyper_dmabuf_importer_ring_setup *)data;
-
-	/* check if the ring ch already exist */
-	ring_info = hyper_dmabuf_find_importer_ring(setup_imp_ring_attr->source_domain);
+	rx_ch_attr = (struct ioctl_hyper_dmabuf_rx_ch_setup *)data;
 
-	if (ring_info) {
-		printk("(importer's) ring ch to domid = %d already exist\n", ring_info->sdomain);
-		return 0;
-	}
-
-	ret = hyper_dmabuf_importer_ringbuf_init(setup_imp_ring_attr->source_domain);
+	ret = ops->init_rx_ch(rx_ch_attr->source_domain);
 
 	return ret;
 }
@@ -74,13 +56,14 @@ static int hyper_dmabuf_importer_ring_setup(void *data)
 static int hyper_dmabuf_export_remote(void *data)
 {
 	struct ioctl_hyper_dmabuf_export_remote *export_remote_attr;
+	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
 	struct dma_buf *dma_buf;
 	struct dma_buf_attachment *attachment;
 	struct sg_table *sgt;
 	struct hyper_dmabuf_pages_info *page_info;
 	struct hyper_dmabuf_sgt_info *sgt_info;
-	struct hyper_dmabuf_ring_rq *req;
-	int operands[9];
+	struct hyper_dmabuf_req *req;
+	int operands[MAX_NUMBER_OF_OPERANDS];
 	int ret = 0;
 
 	if (!data) {
@@ -125,6 +108,7 @@ static int hyper_dmabuf_export_remote(void *data)
 	sgt_info = kmalloc(sizeof(*sgt_info), GFP_KERNEL);
 
 	sgt_info->hyper_dmabuf_id = hyper_dmabuf_get_id();
+
 	/* TODO: We might need to consider using port number on event channel? */
 	sgt_info->hyper_dmabuf_rdomain = export_remote_attr->remote_domain;
 	sgt_info->dma_buf = dma_buf;
@@ -163,15 +147,14 @@ static int hyper_dmabuf_export_remote(void *data)
 
 	export_remote_attr->hyper_dmabuf_id = sgt_info->hyper_dmabuf_id;
 
-	/* now create table of grefs for shared pages and */
-
 	/* now create request for importer via ring */
 	operands[0] = page_info->hyper_dmabuf_id;
 	operands[1] = page_info->nents;
 	operands[2] = page_info->frst_ofst;
 	operands[3] = page_info->last_len;
-	operands[4] = hyper_dmabuf_create_gref_table(page_info->pages, export_remote_attr->remote_domain,
-						     page_info->nents, &sgt_info->shared_pages_info);
+	operands[4] = ops->share_pages(page_info->pages, export_remote_attr->remote_domain,
+					page_info->nents, &sgt_info->refs_info);
+
 	/* driver/application specific private info, max 32 bytes */
 	operands[5] = export_remote_attr->private[0];
 	operands[6] = export_remote_attr->private[1];
@@ -182,7 +165,8 @@ static int hyper_dmabuf_export_remote(void *data)
 
 	/* composing a message to the importer */
 	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT, &operands[0]);
-	if(hyper_dmabuf_send_request(export_remote_attr->remote_domain, req, false))
+
+	if (ops->send_req(export_remote_attr->remote_domain, req, false))
 		goto fail_send_request;
 
 	/* free msg */
@@ -215,8 +199,10 @@ static int hyper_dmabuf_export_remote(void *data)
 static int hyper_dmabuf_export_fd_ioctl(void *data)
 {
 	struct ioctl_hyper_dmabuf_export_fd *export_fd_attr;
-	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
-	struct hyper_dmabuf_ring_rq *req;
+	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct hyper_dmabuf_req *req;
+	struct page **data_pages;
 	int operand;
 	int ret = 0;
 
@@ -228,43 +214,48 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 	export_fd_attr = (struct ioctl_hyper_dmabuf_export_fd *)data;
 
 	/* look for dmabuf for the id */
-	imported_sgt_info = hyper_dmabuf_find_imported(export_fd_attr->hyper_dmabuf_id);
-	if (imported_sgt_info == NULL) /* can't find sgt from the table */
+	sgt_info = hyper_dmabuf_find_imported(export_fd_attr->hyper_dmabuf_id);
+	if (sgt_info == NULL) /* can't find sgt from the table */
 		return -1;
 
 	printk("%s Found buffer gref %d  off %d last len %d nents %d domain %d\n", __func__,
-		imported_sgt_info->gref, imported_sgt_info->frst_ofst,
-		imported_sgt_info->last_len, imported_sgt_info->nents,
-		HYPER_DMABUF_DOM_ID(imported_sgt_info->hyper_dmabuf_id));
-
-	if (!imported_sgt_info->sgt) {
-		imported_sgt_info->sgt = hyper_dmabuf_map_pages(imported_sgt_info->gref,
-							imported_sgt_info->frst_ofst,
-							imported_sgt_info->last_len,
-							imported_sgt_info->nents,
-							HYPER_DMABUF_DOM_ID(imported_sgt_info->hyper_dmabuf_id),
-							&imported_sgt_info->shared_pages_info);
-
-		/* send notifiticatio for first export_fd to exporter */
-		operand = imported_sgt_info->hyper_dmabuf_id;
-		req = kcalloc(1, sizeof(*req), GFP_KERNEL);
-		hyper_dmabuf_create_request(req, HYPER_DMABUF_FIRST_EXPORT, &operand);
-
-		ret = hyper_dmabuf_send_request(HYPER_DMABUF_DOM_ID(operand), req, false);
-
-		if (!imported_sgt_info->sgt || ret) {
-			kfree(req);
-			printk("Failed to create sgt or notify exporter\n");
-			return -EINVAL;
-		}
+		sgt_info->ref_handle, sgt_info->frst_ofst,
+		sgt_info->last_len, sgt_info->nents,
+		HYPER_DMABUF_DOM_ID(sgt_info->hyper_dmabuf_id));
+
+	if (!sgt_info->sgt) {
+		data_pages = ops->map_shared_pages(sgt_info->ref_handle,
+						   HYPER_DMABUF_DOM_ID(sgt_info->hyper_dmabuf_id),
+						   sgt_info->nents,
+						   &sgt_info->refs_info);
+
+		sgt_info->sgt = hyper_dmabuf_create_sgt(data_pages, sgt_info->frst_ofst,
+							sgt_info->last_len, sgt_info->nents);
+
+	}
+
+	/* send notification for export_fd to exporter */
+	operand = sgt_info->hyper_dmabuf_id;
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+	hyper_dmabuf_create_request(req, HYPER_DMABUF_FIRST_EXPORT, &operand);
+
+	ret = ops->send_req(HYPER_DMABUF_DOM_ID(operand), req, false);
+
+	if (!sgt_info->sgt || ret) {
 		kfree(req);
+		printk("Failed to create sgt or notify exporter\n");
+		return -EINVAL;
 	}
+	kfree(req);
 
-	export_fd_attr->fd = hyper_dmabuf_export_fd(imported_sgt_info, export_fd_attr->flags);
+	export_fd_attr->fd = hyper_dmabuf_export_fd(sgt_info, export_fd_attr->flags);
 
 	if (export_fd_attr->fd < 0) {
 		/* fail to get fd */
 		ret = export_fd_attr->fd;
+	} else {
+		sgt_info->num_importers++;
 	}
 
 	return ret;
@@ -276,8 +267,9 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 static int hyper_dmabuf_unexport(void *data)
 {
 	struct ioctl_hyper_dmabuf_unexport *unexport_attr;
+	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
 	struct hyper_dmabuf_sgt_info *sgt_info;
-	struct hyper_dmabuf_ring_rq *req;
+	struct hyper_dmabuf_req *req;
 	int ret;
 
 	if (!data) {
@@ -301,7 +293,7 @@ static int hyper_dmabuf_unexport(void *data)
 	hyper_dmabuf_create_request(req, HYPER_DMABUF_NOTIFY_UNEXPORT, &unexport_attr->hyper_dmabuf_id);
 
 	/* Now send unexport request to remote domain, marking that buffer should not be used anymore */
-	ret = hyper_dmabuf_send_request(sgt_info->hyper_dmabuf_rdomain, req, true);
+	ret = ops->send_req(sgt_info->hyper_dmabuf_rdomain, req, true);
 	if (ret < 0) {
 		kfree(req);
 		return -EFAULT;
@@ -405,8 +397,8 @@ static int hyper_dmabuf_query(void *data)
 }
 
 static const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORTER_RING_SETUP, hyper_dmabuf_exporter_ring_setup, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_IMPORTER_RING_SETUP, hyper_dmabuf_importer_ring_setup, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_TX_CH_SETUP, hyper_dmabuf_tx_ch_setup, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_RX_CH_SETUP, hyper_dmabuf_rx_ch_setup, 0),
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_REMOTE, hyper_dmabuf_export_remote, 0),
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_FD, hyper_dmabuf_export_fd_ioctl, 0),
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_UNEXPORT, hyper_dmabuf_unexport, 0),
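
HYPER_DMABUF_IOCTL_DEF (see the new header below) places each handler at
index _IOC_NR(cmd), so the top-level ioctl entry point - unchanged and
not shown in this hunk - can presumably dispatch with a bounds-checked
array lookup, roughly:

    unsigned int nr = _IOC_NR(cmd);
    const struct hyper_dmabuf_ioctl_desc *desc;

    if (nr >= ARRAY_SIZE(hyper_dmabuf_ioctls))
        return -EINVAL;

    desc = &hyper_dmabuf_ioctls[nr];
    ret = desc->func(kdata);   /* kdata: ioctl argument copied from user */
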
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
new file mode 100644
index 0000000..de216d3
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
@@ -0,0 +1,87 @@
+#ifndef __LINUX_PUBLIC_HYPER_DMABUF_IOCTL_H__
+#define __LINUX_PUBLIC_HYPER_DMABUF_IOCTL_H__
+
+typedef int (*hyper_dmabuf_ioctl_t)(void *data);
+
+struct hyper_dmabuf_ioctl_desc {
+	unsigned int cmd;
+	int flags;
+	hyper_dmabuf_ioctl_t func;
+	const char *name;
+};
+
+#define HYPER_DMABUF_IOCTL_DEF(ioctl, _func, _flags) 	\
+	[_IOC_NR(ioctl)] = {				\
+			.cmd = ioctl,			\
+			.func = _func,			\
+			.flags = _flags,		\
+			.name = #ioctl			\
+	}
+
+#define IOCTL_HYPER_DMABUF_TX_CH_SETUP \
+_IOC(_IOC_NONE, 'G', 0, sizeof(struct ioctl_hyper_dmabuf_tx_ch_setup))
+struct ioctl_hyper_dmabuf_tx_ch_setup {
+	/* IN parameters */
+	/* Remote domain id */
+	int remote_domain;
+};
+
+#define IOCTL_HYPER_DMABUF_RX_CH_SETUP \
+_IOC(_IOC_NONE, 'G', 1, sizeof(struct ioctl_hyper_dmabuf_rx_ch_setup))
+struct ioctl_hyper_dmabuf_rx_ch_setup {
+	/* IN parameters */
+	/* Source domain id */
+	int source_domain;
+};
+
+#define IOCTL_HYPER_DMABUF_EXPORT_REMOTE \
+_IOC(_IOC_NONE, 'G', 2, sizeof(struct ioctl_hyper_dmabuf_export_remote))
+struct ioctl_hyper_dmabuf_export_remote {
+	/* IN parameters */
+	/* DMA buf fd to be exported */
+	int dmabuf_fd;
+	/* Domain id to which buffer should be exported */
+	int remote_domain;
+	/* exported dma buf id (OUT, filled in by the driver) */
+	int hyper_dmabuf_id;
+	int private[4];
+};
+
+#define IOCTL_HYPER_DMABUF_EXPORT_FD \
+_IOC(_IOC_NONE, 'G', 3, sizeof(struct ioctl_hyper_dmabuf_export_fd))
+struct ioctl_hyper_dmabuf_export_fd {
+	/* IN parameters */
+	/* hyper dmabuf id to be imported */
+	int hyper_dmabuf_id;
+	/* flags */
+	int flags;
+	/* OUT parameters */
+	/* exported dma buf fd */
+	int fd;
+};
+
+#define IOCTL_HYPER_DMABUF_UNEXPORT \
+_IOC(_IOC_NONE, 'G', 4, sizeof(struct ioctl_hyper_dmabuf_unexport))
+struct ioctl_hyper_dmabuf_unexport {
+	/* IN parameters */
+	/* hyper dmabuf id to be unexported */
+	int hyper_dmabuf_id;
+	/* OUT parameters */
+	/* Status of request */
+	int status;
+};
+
+#define IOCTL_HYPER_DMABUF_QUERY \
+_IOC(_IOC_NONE, 'G', 5, sizeof(struct ioctl_hyper_dmabuf_query))
+struct ioctl_hyper_dmabuf_query {
+	/* IN parameters */
+	/* hyper dmabuf id to be queried */
+	int hyper_dmabuf_id;
+	/* item to be queried */
+	int item;
+	/* OUT parameters */
+	/* Value of queried item */
+	int info;
+};
+
+#endif //__LINUX_PUBLIC_HYPER_DMABUF_IOCTL_H__
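
For completeness, a minimal userspace sketch driving these ioctls (error
handling elided; the device node name /dev/hyper_dmabuf is an assumption,
nothing in this header pins it down):

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include "hyper_dmabuf_ioctl.h"

    int export_to_domain(int dmabuf_fd, int remote_domid)
    {
        /* device node name is assumed, not defined by this header */
        int fd = open("/dev/hyper_dmabuf", O_RDWR);
        struct ioctl_hyper_dmabuf_tx_ch_setup tx = {
            .remote_domain = remote_domid,
        };
        struct ioctl_hyper_dmabuf_export_remote exp = {
            .dmabuf_fd = dmabuf_fd,
            .remote_domain = remote_domid,
        };

        /* set up the ring to the remote domain once, then export */
        ioctl(fd, IOCTL_HYPER_DMABUF_TX_CH_SETUP, &tx);
        ioctl(fd, IOCTL_HYPER_DMABUF_EXPORT_REMOTE, &exp);

        return exp.hyper_dmabuf_id;   /* filled in by the driver */
    }
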
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index a2d687f..4647115 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -5,11 +5,10 @@
 #include <linux/dma-buf.h>
 #include <xen/grant_table.h>
 #include <linux/workqueue.h>
+#include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_imp.h"
 #include "hyper_dmabuf_remote_sync.h"
-#include "xen/hyper_dmabuf_xen_comm.h"
-#include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_list.h"
 
 #define FORCED_UNEXPORTING 0
@@ -18,18 +17,17 @@ extern struct hyper_dmabuf_private hyper_dmabuf_private;
 
 struct cmd_process {
 	struct work_struct work;
-	struct hyper_dmabuf_ring_rq *rq;
+	struct hyper_dmabuf_req *rq;
 	int domid;
 };
 
-void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
-				        enum hyper_dmabuf_command command, int *operands)
+void hyper_dmabuf_create_request(struct hyper_dmabuf_req *req,
+				 enum hyper_dmabuf_command command, int *operands)
 {
 	int i;
 
-	request->request_id = hyper_dmabuf_next_req_id_export();
-	request->status = HYPER_DMABUF_REQ_NOT_RESPONDED;
-	request->command = command;
+	req->status = HYPER_DMABUF_REQ_NOT_RESPONDED;
+	req->command = command;
 
 	switch(command) {
 	/* as exporter, commands to importer */
@@ -44,7 +42,7 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
 		 * operands5~8 : Driver-specific private data (e.g. graphic buffer's meta info)
 		 */
 		for (i=0; i < 8; i++)
-			request->operands[i] = operands[i];
+			req->operands[i] = operands[i];
 		break;
 
 	case HYPER_DMABUF_NOTIFY_UNEXPORT:
@@ -52,7 +50,7 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
 		/* command : DMABUF_DESTROY,
 		 * operands0 : hyper_dmabuf_id
 		 */
-		request->operands[0] = operands[0];
+		req->operands[0] = operands[0];
 		break;
 
 	case HYPER_DMABUF_FIRST_EXPORT:
@@ -60,7 +58,7 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
 		/* command : HYPER_DMABUF_FIRST_EXPORT,
 		 * operands0 : hyper_dmabuf_id
 		 */
-		request->operands[0] = operands[0];
+		req->operands[0] = operands[0];
 		break;
 
 	case HYPER_DMABUF_OPS_TO_REMOTE:
@@ -77,7 +75,7 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
 		 * operands1 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
 		 */
 		for (i=0; i<2; i++)
-			request->operands[i] = operands[i];
+			req->operands[i] = operands[i];
 		break;
 
 	default:
@@ -88,10 +86,10 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
 
 void cmd_process_work(struct work_struct *work)
 {
-	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
 	struct hyper_dmabuf_sgt_info *sgt_info;
+	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
 	struct cmd_process *proc = container_of(work, struct cmd_process, work);
-	struct hyper_dmabuf_ring_rq *req;
+	struct hyper_dmabuf_req *req;
 	int domid;
 	int i;
 
@@ -114,7 +112,7 @@ void cmd_process_work(struct work_struct *work)
 		imported_sgt_info->frst_ofst = req->operands[2];
 		imported_sgt_info->last_len = req->operands[3];
 		imported_sgt_info->nents = req->operands[1];
-		imported_sgt_info->gref = req->operands[4];
+		imported_sgt_info->ref_handle = req->operands[4];
 
 		printk("DMABUF was exported\n");
 		printk("\thyper_dmabuf_id %d\n", req->operands[0]);
@@ -139,10 +137,7 @@ void cmd_process_work(struct work_struct *work)
 			break;
 		}
 
-		if (sgt_info->importer_exported)
-			printk("warning: exported flag is not supposed to be 1 already\n");
-
-		sgt_info->importer_exported = 1;
+		sgt_info->importer_exported++;
 		break;
 
 	case HYPER_DMABUF_OPS_TO_REMOTE:
@@ -160,11 +155,11 @@ void cmd_process_work(struct work_struct *work)
 	kfree(proc);
 }
 
-int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
+int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 {
 	struct cmd_process *proc;
-	struct hyper_dmabuf_ring_rq *temp_req;
-	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
+	struct hyper_dmabuf_req *temp_req;
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
 	int ret;
 
 	if (!req) {
@@ -189,22 +184,21 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
 		 * operands0 : hyper_dmabuf_id
 		 */
 
-		imported_sgt_info =
-			hyper_dmabuf_find_imported(req->operands[0]);
+		sgt_info = hyper_dmabuf_find_imported(req->operands[0]);
 
-		if (imported_sgt_info) {
+		if (sgt_info) {
 			/* if anything is still using dma_buf */
-			if (imported_sgt_info->dma_buf &&
-			    dmabuf_refcount(imported_sgt_info->dma_buf) > 0) {
+			if (sgt_info->dma_buf &&
+			    dmabuf_refcount(sgt_info->dma_buf) > 0) {
 				/*
 				 * Buffer is still in  use, just mark that it should
 				 * not be allowed to export its fd anymore.
 				 */
-				imported_sgt_info->valid = 0;
+				sgt_info->valid = 0;
 			} else {
 				/* No one is using buffer, remove it from imported list */
 				hyper_dmabuf_remove_imported(req->operands[0]);
-				kfree(imported_sgt_info);
+				kfree(sgt_info);
 			}
 		} else {
 			req->status = HYPER_DMABUF_REQ_ERROR;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
index 1e9d827..ac4caeb 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -1,6 +1,22 @@
 #ifndef __HYPER_DMABUF_MSG_H__
 #define __HYPER_DMABUF_MSG_H__
 
+#define MAX_NUMBER_OF_OPERANDS 9
+
+struct hyper_dmabuf_req {
+	unsigned int request_id;
+	unsigned int status;
+	unsigned int command;
+	unsigned int operands[MAX_NUMBER_OF_OPERANDS];
+};
+
+struct hyper_dmabuf_resp {
+	unsigned int response_id;
+	unsigned int status;
+	unsigned int command;
+	unsigned int operands[MAX_NUMBER_OF_OPERANDS];
+};
+
 enum hyper_dmabuf_command {
 	HYPER_DMABUF_EXPORT = 0x10,
 	HYPER_DMABUF_FIRST_EXPORT,
@@ -35,10 +51,11 @@ enum hyper_dmabuf_req_feedback {
 };
 
 /* create a request packet with given command and operands */
-void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
-                                        enum hyper_dmabuf_command command, int *operands);
+void hyper_dmabuf_create_request(struct hyper_dmabuf_req *req,
+				 enum hyper_dmabuf_command command,
+				 int *operands);
 
 /* parse incoming request packet (or response) and take appropriate actions for those */
-int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req);
+int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req);
 
 #endif // __HYPER_DMABUF_MSG_H__
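
As a usage sketch, tearing down an export with this API takes only a few
lines on the exporter side, mirroring hyper_dmabuf_unexport() in the
ioctl file above (error handling elided; remote_domid and id are assumed
to be in scope, and ops is the backend-ops pointer fetched from
hyper_dmabuf_private as in the callers):

    struct hyper_dmabuf_req *req;

    req = kcalloc(1, sizeof(*req), GFP_KERNEL);
    hyper_dmabuf_create_request(req, HYPER_DMABUF_NOTIFY_UNEXPORT, &id);

    /* the backend stamps request_id and puts the packet on the ring;
     * wait == true blocks until the remote side responds */
    ret = ops->send_req(remote_domid, req, true);
    kfree(req);
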
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
index c5950e0..0f4735c 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -5,9 +5,9 @@
 #include <linux/dma-buf.h>
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_id.h"
-#include "xen/hyper_dmabuf_xen_comm.h"
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_imp.h"
 
@@ -133,6 +133,8 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 
 	case HYPER_DMABUF_OPS_RELEASE:
 		/* place holder */
+		sgt_info->importer_exported--;
+
 		break;
 
 	case HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS:
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
index b52f958..f053dd10 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -1,14 +1,6 @@
 #ifndef __HYPER_DMABUF_STRUCT_H__
 #define __HYPER_DMABUF_STRUCT_H__
 
-#include <xen/interface/grant_table.h>
-
-/* each grant_ref_t is 4 bytes, so total 4096 grant_ref_t can be
- * in this block meaning we can share 4KB*4096 = 16MB of buffer
- * (needs to be increased for large buffer use-cases such as 4K
- * frame buffer) */
-#define MAX_ALLOWED_NUM_PAGES_FOR_GREF_NUM_ARRAYS 4
-
 /* stack of mapped sgts */
 struct sgt_list {
 	struct sg_table *sgt;
@@ -33,15 +25,6 @@ struct vmap_vaddr_list {
 	struct list_head list;
 };
 
-struct hyper_dmabuf_shared_pages_info {
-	grant_ref_t *data_refs;	/* table with shared buffer pages refid */
-	grant_ref_t *addr_pages; /* pages of 2nd level addressing */
-	grant_ref_t *top_level_page; /* page of top level addressing, it contains refids of 2nd level pages */
-	grant_ref_t top_level_ref; /* top level refid */
-	struct gnttab_unmap_grant_ref* unmap_ops; /* unmap ops for mapped pages */
-	struct page **data_pages; /* data pages to be unmapped */
-};
-
 /* Exporter builds pages_info before sharing pages */
 struct hyper_dmabuf_pages_info {
         int hyper_dmabuf_id; /* unique id to reference dmabuf in source domain */
@@ -69,8 +52,8 @@ struct hyper_dmabuf_sgt_info {
 	struct kmap_vaddr_list *va_kmapped;
 	struct vmap_vaddr_list *va_vmapped;
 	bool valid;
-	bool importer_exported; /* exported locally on importer's side */
-	struct hyper_dmabuf_shared_pages_info shared_pages_info;
+	int importer_exported; /* exported locally on importer's side */
+	void *refs_info; /* hypervisor-specific info for the references */
 	int private[4]; /* device specific info (e.g. image's meta info?) */
 };
 
@@ -79,14 +62,15 @@ struct hyper_dmabuf_sgt_info {
  * its own memory map once userspace asks for reference for the buffer */
 struct hyper_dmabuf_imported_sgt_info {
 	int hyper_dmabuf_id; /* unique id to reference dmabuf (HYPER_DMABUF_ID_IMPORTER(source domain id, exporter's hyper_dmabuf_id */
+	int ref_handle; /* reference number of top level addressing page of shared pages */
 	int frst_ofst;	/* start offset in shared page #1 */
 	int last_len;	/* length of data in the last shared page */
 	int nents;	/* number of pages to be shared */
-	grant_ref_t gref; /* reference number of top level addressing page of shared pages */
 	struct dma_buf *dma_buf;
 	struct sg_table *sgt; /* sgt pointer after importing buffer */
-	struct hyper_dmabuf_shared_pages_info shared_pages_info;
+	void *refs_info; /* hypervisor-specific info for the references */
 	bool valid;
+	int num_importers;
 	int private[4]; /* device specific info (e.g. image's meta info?) */
 };
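
The void *refs_info is the abstraction point: it replaces the Xen-only
shared_pages_info with a pointer that only the backend interprets. A
hypothetical sketch of how the Xen backend can stow its state there
(field names echo the struct removed above; the backend's real record is
added elsewhere in this series):

    /* hypothetical backend-private record hung off refs_info */
    struct xen_shared_pages_info {
        grant_ref_t top_level_ref;   /* refid handed to the importer */
        grant_ref_t *data_refs;      /* per-page data refids */
        struct gnttab_unmap_grant_ref *unmap_ops;
    };

    static int xen_share_pages(struct page **pages, int domid, int nents,
                               void **refs_info)
    {
        struct xen_shared_pages_info *info;

        info = kmalloc(sizeof(*info), GFP_KERNEL);
        /* ... grant the pages, build the 2-level table, fill info ... */
        *refs_info = info;           /* opaque to the core driver */
        return info->top_level_ref;
    }
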
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index f9e0df3..bd37ec2 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -10,16 +10,15 @@
 #include <asm/xen/page.h>
 #include "hyper_dmabuf_xen_comm.h"
 #include "hyper_dmabuf_xen_comm_list.h"
-#include "../hyper_dmabuf_imp.h"
-#include "../hyper_dmabuf_list.h"
-#include "../hyper_dmabuf_msg.h"
 
 static int export_req_id = 0;
 
-struct hyper_dmabuf_ring_rq req_pending = {0};
+struct hyper_dmabuf_req req_pending = {0};
 
-/* Creates entry in xen store that will keep details of all exporter rings created by this domain */
-int32_t hyper_dmabuf_setup_data_dir()
+/* Creates entry in xen store that will keep details of all
+ * exporter rings created by this domain
+ */
+static int xen_comm_setup_data_dir(void)
 {
 	char buf[255];
 
@@ -27,13 +26,13 @@ int32_t hyper_dmabuf_setup_data_dir()
 	return xenbus_mkdir(XBT_NIL, buf, "");
 }
 
-
 /* Removes entry from xenstore with exporter ring details.
- * Other domains that has connected to any of exporter rings created by this domain,
- * will be notified about removal of this entry and will treat that as signal to
- * cleanup importer rings created for this domain
+ * Other domains that have connected to any of the exporter rings
+ * created by this domain will be notified about removal of this
+ * entry and will treat that as a signal to clean up the importer
+ * rings created for this domain
  */
-int32_t hyper_dmabuf_destroy_data_dir()
+static int xen_comm_destroy_data_dir(void)
 {
 	char buf[255];
 
@@ -41,18 +40,19 @@ int32_t hyper_dmabuf_destroy_data_dir()
 	return xenbus_rm(XBT_NIL, buf, "");
 }
 
-/*
- * Adds xenstore entries with details of exporter ring created for given remote domain.
- * It requires special daemon running in dom0 to make sure that given remote domain will
- * have right permissions to access that data.
+/* Adds xenstore entries with details of the exporter ring created
+ * for a given remote domain. A special daemon running in dom0 is
+ * required to make sure that the given remote domain will have the
+ * right permissions to access that data.
  */
-static int32_t hyper_dmabuf_expose_ring_details(uint32_t domid, uint32_t rdomid, uint32_t grefid, uint32_t port)
+static int xen_comm_expose_ring_details(int domid, int rdomid,
+					int gref, int port)
 {
 	char buf[255];
 	int ret;
 
 	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d", domid, rdomid);
-	ret = xenbus_printf(XBT_NIL, buf, "grefid", "%d", grefid);
+	ret = xenbus_printf(XBT_NIL, buf, "grefid", "%d", gref);
 
 	if (ret) {
 		printk("Failed to write xenbus entry %s: %d\n", buf, ret);
@@ -72,7 +72,7 @@ static int32_t hyper_dmabuf_expose_ring_details(uint32_t domid, uint32_t rdomid,
 /*
  * Queries details of ring exposed by remote domain.
  */
-static int32_t hyper_dmabuf_get_ring_details(uint32_t domid, uint32_t rdomid, uint32_t *grefid, uint32_t *port)
+static int xen_comm_get_ring_details(int domid, int rdomid, int *grefid, int *port)
 {
 	char buf[255];
 	int ret;
@@ -95,10 +95,10 @@ static int32_t hyper_dmabuf_get_ring_details(uint32_t domid, uint32_t rdomid, ui
 	return (ret <= 0 ? 1 : 0);
 }
 
-int32_t hyper_dmabuf_get_domid(void)
+int hyper_dmabuf_get_domid(void)
 {
 	struct xenbus_transaction xbt;
-	int32_t domid;
+	int domid;
 
         xenbus_transaction_start(&xbt);
 
@@ -110,29 +110,35 @@ int32_t hyper_dmabuf_get_domid(void)
 	return domid;
 }
 
-int hyper_dmabuf_next_req_id_export(void)
+static int xen_comm_next_req_id(void)
 {
         export_req_id++;
         return export_req_id;
 }
 
 /* For now cache latest rings as global variables. TODO: keep them in a list */
-static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *info);
-static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *info);
-
-/*
- * Callback function that will be called on any change of xenbus path being watched.
- * Used for detecting creation/destruction of remote domain exporter ring.
- * When remote domain's exporter ring will be detected, importer ring on this domain will be created.
- * When remote domain's exporter ring destruction will be detected it will celanup this domain importer ring.
- * Destruction can be caused by unloading module by remote domain or it's crash/force shutdown.
+static irqreturn_t front_ring_isr(int irq, void *info);
+static irqreturn_t back_ring_isr(int irq, void *info);
+
+/* Callback function that will be called on any change of the xenbus
+ * path being watched. Used for detecting creation/destruction of a
+ * remote domain's exporter ring.
+ *
+ * When a remote domain's exporter ring is detected, an importer ring
+ * on this domain will be created.
+ *
+ * When destruction of a remote domain's exporter ring is detected,
+ * the importer ring on this domain will be cleaned up.
+ *
+ * Destruction can be caused by the remote domain unloading the module
+ * or by its crash/forced shutdown.
  */
-static void remote_domain_exporter_watch_cb(struct xenbus_watch *watch,
-				   const char *path, const char *token)
+static void remote_dom_exporter_watch_cb(struct xenbus_watch *watch,
+					 const char *path, const char *token)
 {
 	int rdom,ret;
 	uint32_t grefid, port;
-	struct hyper_dmabuf_ring_info_import *ring_info;
+	struct xen_comm_rx_ring_info *ring_info;
 
 	/* Check which domain has changed its exporter rings */
 	ret = sscanf(watch->node, "/local/domain/%d/", &rdom);
@@ -141,39 +147,49 @@ static void remote_domain_exporter_watch_cb(struct xenbus_watch *watch,
 	}
 
 	/* Check if we have an importer ring for the given remote domain already created */
-	ring_info = hyper_dmabuf_find_importer_ring(rdom);
-
-	/*
-	 * Try to query remote domain exporter ring details - if that will fail and we have
-	 * importer ring that means remote domains has cleanup its exporter ring, so our
-	 * importer ring is no longer useful.
-	 * If querying details will succeed and we don't have importer ring, it means that
-	 * remote domain has setup it for us and we should connect to it.
+	ring_info = xen_comm_find_rx_ring(rdom);
+
+	/* Try to query remote domain exporter ring details - if that
+	 * fails and we have an importer ring, it means the remote domain
+	 * has cleaned up its exporter ring, so our importer ring is no
+	 * longer useful.
+	 *
+	 * If querying the details succeeds and we don't have an importer
+	 * ring, it means the remote domain has set one up for us and we
+	 * should connect to it.
 	 */
-	ret = hyper_dmabuf_get_ring_details(hyper_dmabuf_get_domid(), rdom, &grefid, &port);
+	ret = xen_comm_get_ring_details(hyper_dmabuf_get_domid(), rdom,
+					&grefid, &port);
 
 	if (ring_info && ret != 0) {
 		printk("Remote exporter closed, cleaninup importer\n");
-		hyper_dmabuf_importer_ringbuf_cleanup(rdom);
+		hyper_dmabuf_xen_cleanup_rx_rbuf(rdom);
 	} else if (!ring_info && ret == 0) {
 		printk("Registering importer\n");
-		hyper_dmabuf_importer_ringbuf_init(rdom);
+		hyper_dmabuf_xen_init_rx_rbuf(rdom);
 	}
 }
 
 /* exporter needs to generate info for page sharing */
-int hyper_dmabuf_exporter_ringbuf_init(int rdomain)
+int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 {
-	struct hyper_dmabuf_ring_info_export *ring_info;
-	struct hyper_dmabuf_sring *sring;
+	struct xen_comm_tx_ring_info *ring_info;
+	struct xen_comm_sring *sring;
 	struct evtchn_alloc_unbound alloc_unbound;
 	struct evtchn_close close;
 
 	void *shared_ring;
 	int ret;
 
-	ring_info = (struct hyper_dmabuf_ring_info_export*)
-				kmalloc(sizeof(*ring_info), GFP_KERNEL);
+	/* check if there's any existing tx channel in the table */
+	ring_info = xen_comm_find_tx_ring(domid);
+
+	if (ring_info) {
+		printk("tx ring ch to domid = %d already exist\ngref = %d, port = %d\n",
+		ring_info->rdomain, ring_info->gref_ring, ring_info->port);
+		return 0;
+	}
+
+	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
 
 	/* from exporter to importer */
 	shared_ring = (void *)__get_free_pages(GFP_KERNEL, 1);
@@ -181,20 +197,22 @@ int hyper_dmabuf_exporter_ringbuf_init(int rdomain)
 		return -EINVAL;
 	}
 
-	sring = (struct hyper_dmabuf_sring *) shared_ring;
+	sring = (struct xen_comm_sring *) shared_ring;
 
 	SHARED_RING_INIT(sring);
 
 	FRONT_RING_INIT(&(ring_info->ring_front), sring, PAGE_SIZE);
 
-	ring_info->gref_ring = gnttab_grant_foreign_access(rdomain,
-							virt_to_mfn(shared_ring), 0);
+	ring_info->gref_ring = gnttab_grant_foreign_access(domid,
+							   virt_to_mfn(shared_ring),
+							   0);
 	if (ring_info->gref_ring < 0) {
-		return -EINVAL; /* fail to get gref */
+		/* fail to get gref */
+		return -EINVAL;
 	}
 
 	alloc_unbound.dom = DOMID_SELF;
-	alloc_unbound.remote_dom = rdomain;
+	alloc_unbound.remote_dom = domid;
 	ret = HYPERVISOR_event_channel_op(EVTCHNOP_alloc_unbound,
 					&alloc_unbound);
 	if (ret != 0) {
@@ -204,7 +222,7 @@ int hyper_dmabuf_exporter_ringbuf_init(int rdomain)
 
 	/* setting up interrupt */
 	ret = bind_evtchn_to_irqhandler(alloc_unbound.port,
-					hyper_dmabuf_front_ring_isr, 0,
+					front_ring_isr, 0,
 					NULL, (void*) ring_info);
 
 	if (ret < 0) {
@@ -216,7 +234,7 @@ int hyper_dmabuf_exporter_ringbuf_init(int rdomain)
 		return -EINVAL;
 	}
 
-	ring_info->rdomain = rdomain;
+	ring_info->rdomain = domid;
 	ring_info->irq = ret;
 	ring_info->port = alloc_unbound.port;
 
@@ -226,109 +244,128 @@ int hyper_dmabuf_exporter_ringbuf_init(int rdomain)
 		ring_info->port,
 		ring_info->irq);
 
-	ret = hyper_dmabuf_register_exporter_ring(ring_info);
+	ret = xen_comm_add_tx_ring(ring_info);
 
-	ret = hyper_dmabuf_expose_ring_details(hyper_dmabuf_get_domid(), rdomain,
-                                               ring_info->gref_ring, ring_info->port);
+	ret = xen_comm_expose_ring_details(hyper_dmabuf_get_domid(), domid,
+					   ring_info->gref_ring, ring_info->port);
 
 	/*
 	 * Register watch for remote domain exporter ring.
-	 * When remote domain will setup its exporter ring, we will automatically connect our importer ring to it.
+	 * When the remote domain sets up its exporter ring,
+	 * we will automatically connect our importer ring to it.
 	 */
-	ring_info->watch.callback = remote_domain_exporter_watch_cb;
+	ring_info->watch.callback = remote_dom_exporter_watch_cb;
 	ring_info->watch.node = (const char*) kmalloc(sizeof(char) * 255, GFP_KERNEL);
-	sprintf((char*)ring_info->watch.node, "/local/domain/%d/data/hyper_dmabuf/%d/port", rdomain, hyper_dmabuf_get_domid());
+	sprintf((char*)ring_info->watch.node,
+		"/local/domain/%d/data/hyper_dmabuf/%d/port",
+		domid, hyper_dmabuf_get_domid());
+
 	register_xenbus_watch(&ring_info->watch);
 
 	return ret;
 }
 
 /* cleans up exporter ring created for given remote domain */
-void hyper_dmabuf_exporter_ringbuf_cleanup(int rdomain)
+void hyper_dmabuf_xen_cleanup_tx_rbuf(int domid)
 {
-	struct hyper_dmabuf_ring_info_export *ring_info;
+	struct xen_comm_tx_ring_info *ring_info;
 
 	/* check if we at all have exporter ring for given rdomain */
-	ring_info = hyper_dmabuf_find_exporter_ring(rdomain);
+	ring_info = xen_comm_find_tx_ring(domid);
 
 	if (!ring_info) {
 		return;
 	}
 
-	hyper_dmabuf_remove_exporter_ring(rdomain);
+	xen_comm_remove_tx_ring(domid);
 
 	unregister_xenbus_watch(&ring_info->watch);
 	kfree(ring_info->watch.node);
 
-	/* No need to close communication channel, will be done by this function */
-	unbind_from_irqhandler(ring_info->irq,	(void*) ring_info);
+	/* No need to close communication channel, will be done by
+	 * this function
+	 */
+	unbind_from_irqhandler(ring_info->irq, (void*) ring_info);
 
-	/* No need to free sring page, will be freed by this function when other side will end its access */
+	/* No need to free sring page, will be freed by this function
+	 * when the other side ends its access
+	 */
 	gnttab_end_foreign_access(ring_info->gref_ring, 0,
 				  (unsigned long) ring_info->ring_front.sring);
 
 	kfree(ring_info);
 }
 
-/* importer needs to know about shared page and port numbers for ring buffer and event channel */
-int hyper_dmabuf_importer_ringbuf_init(int sdomain)
+/* importer needs to know about shared page and port numbers for
+ * ring buffer and event channel
+ */
+int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 {
-	struct hyper_dmabuf_ring_info_import *ring_info;
-	struct hyper_dmabuf_sring *sring;
+	struct xen_comm_rx_ring_info *ring_info;
+	struct xen_comm_sring *sring;
 
 	struct page *shared_ring;
 
-	struct gnttab_map_grant_ref *ops;
+	struct gnttab_map_grant_ref *map_ops;
+
 	int ret;
-	int importer_gref, importer_port;
+	int rx_gref, rx_port;
 
-	ret = hyper_dmabuf_get_ring_details(hyper_dmabuf_get_domid(), sdomain,
-					    &importer_gref, &importer_port);
+	/* check if there's an existing rx ring channel */
+	ring_info = xen_comm_find_rx_ring(domid);
+
+	if (ring_info) {
+		printk("rx ring ch from domid = %d already exists\n", ring_info->sdomain);
+		return 0;
+	}
+
+	ret = xen_comm_get_ring_details(hyper_dmabuf_get_domid(), domid,
+					&rx_gref, &rx_port);
 
 	if (ret) {
-		printk("Domain %d has not created exporter ring for current domain\n", sdomain);
+		printk("Domain %d has not created exporter ring for current domain\n", domid);
 		return ret;
 	}
 
-	ring_info = (struct hyper_dmabuf_ring_info_import *)
-			kmalloc(sizeof(*ring_info), GFP_KERNEL);
+	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
 
-	ring_info->sdomain = sdomain;
-	ring_info->evtchn = importer_port;
+	ring_info->sdomain = domid;
+	ring_info->evtchn = rx_port;
 
-	ops = (struct gnttab_map_grant_ref*)kmalloc(sizeof(*ops), GFP_KERNEL);
+	map_ops = kmalloc(sizeof(*map_ops), GFP_KERNEL);
 
 	if (gnttab_alloc_pages(1, &shared_ring)) {
 		return -EINVAL;
 	}
 
-	gnttab_set_map_op(&ops[0], (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
-			GNTMAP_host_map, importer_gref, sdomain);
+	gnttab_set_map_op(&map_ops[0], (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
+			  GNTMAP_host_map, rx_gref, domid);
+
 	gnttab_set_unmap_op(&ring_info->unmap_op, (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
-			GNTMAP_host_map, -1);
+			    GNTMAP_host_map, -1);
 
-	ret = gnttab_map_refs(ops, NULL, &shared_ring, 1);
+	ret = gnttab_map_refs(map_ops, NULL, &shared_ring, 1);
 	if (ret < 0) {
 		printk("Cannot map ring\n");
 		return -EINVAL;
 	}
 
-	if (ops[0].status) {
+	if (map_ops[0].status) {
 		printk("Ring mapping failed\n");
 		return -EINVAL;
 	} else {
-		ring_info->unmap_op.handle = ops[0].handle;
+		ring_info->unmap_op.handle = map_ops[0].handle;
 	}
 
-	kfree(ops);
+	kfree(map_ops);
 
-	sring = (struct hyper_dmabuf_sring*) pfn_to_kaddr(page_to_pfn(shared_ring));
+	sring = (struct xen_comm_sring *)pfn_to_kaddr(page_to_pfn(shared_ring));
 
 	BACK_RING_INIT(&ring_info->ring_back, sring, PAGE_SIZE);
 
-	ret = bind_interdomain_evtchn_to_irqhandler(sdomain, importer_port,
-						hyper_dmabuf_back_ring_isr, 0,
-						NULL, (void*)ring_info);
+	ret = bind_interdomain_evtchn_to_irqhandler(domid, rx_port,
+						    back_ring_isr, 0,
+						    NULL, (void*)ring_info);
 	if (ret < 0) {
 		return -EINVAL;
 	}
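
Putting the tx and rx halves together, channel bring-up follows a fixed
order (a condensed sketch; the dom0 daemon that fixes up xenstore
permissions acts out of band):

    exporter (domid A)                    importer (domid B)
    ------------------                    ------------------
    hyper_dmabuf_xen_init_tx_rbuf(B)
      alloc shared ring, grant to B
      alloc unbound evtchn for B
      write grefid/port to xenstore
      watch B's .../port node
                                          hyper_dmabuf_xen_init_rx_rbuf(A)
                                            read grefid/port from xenstore
                                            map ring via grant table
                                            bind evtchn -> back_ring_isr
                                            set up tx ring back to A
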
@@ -336,35 +373,35 @@ int hyper_dmabuf_importer_ringbuf_init(int sdomain)
 	ring_info->irq = ret;
 
 	printk("%s: bound to eventchannel port: %d  irq: %d\n", __func__,
-		importer_port,
+		rx_port,
 		ring_info->irq);
 
-	ret = hyper_dmabuf_register_importer_ring(ring_info);
+	ret = xen_comm_add_rx_ring(ring_info);
 
 	/* Setup communication channel in opposite direction */
-	if (!hyper_dmabuf_find_exporter_ring(sdomain)) {
-		ret = hyper_dmabuf_exporter_ringbuf_init(sdomain);
+	if (!xen_comm_find_tx_ring(domid)) {
+		ret = hyper_dmabuf_xen_init_tx_rbuf(domid);
 	}
 
 	return ret;
 }
 
 /* cleans up importer ring created for given source domain */
-void hyper_dmabuf_importer_ringbuf_cleanup(int sdomain)
+void hyper_dmabuf_xen_cleanup_rx_rbuf(int domid)
 {
-	struct hyper_dmabuf_ring_info_import *ring_info;
+	struct xen_comm_rx_ring_info *ring_info;
 	struct page *shared_ring;
 
 	/* check if we have importer ring created for given sdomain */
-	ring_info = hyper_dmabuf_find_importer_ring(sdomain);
+	ring_info = xen_comm_find_rx_ring(domid);
 
 	if (!ring_info)
 		return;
 
-	hyper_dmabuf_remove_importer_ring(sdomain);
+	xen_comm_remove_rx_ring(domid);
 
 	/* no need to close event channel, will be done by that function */
-	unbind_from_irqhandler(ring_info->irq,	(void*) ring_info);
+	unbind_from_irqhandler(ring_info->irq, (void*)ring_info);
 
 	/* unmapping shared ring page */
 	shared_ring = virt_to_page(ring_info->ring_back.sring);
@@ -374,23 +411,39 @@ void hyper_dmabuf_importer_ringbuf_cleanup(int sdomain)
 	kfree(ring_info);
 }
 
-/* cleans up all exporter/importer rings */
-void hyper_dmabuf_cleanup_ringbufs(void)
+int hyper_dmabuf_xen_init_comm_env(void)
 {
-	hyper_dmabuf_foreach_exporter_ring(hyper_dmabuf_exporter_ringbuf_cleanup);
-	hyper_dmabuf_foreach_importer_ring(hyper_dmabuf_importer_ringbuf_cleanup);
+	int ret;
+
+	xen_comm_ring_table_init();
+	ret = xen_comm_setup_data_dir();
+
+	return ret;
 }
 
-int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req, int wait)
+/* cleans up all tx/rx rings */
+static void hyper_dmabuf_xen_cleanup_all_rbufs(void)
 {
-	struct hyper_dmabuf_front_ring *ring;
-	struct hyper_dmabuf_ring_rq *new_req;
-	struct hyper_dmabuf_ring_info_export *ring_info;
+	xen_comm_foreach_tx_ring(hyper_dmabuf_xen_cleanup_tx_rbuf);
+	xen_comm_foreach_rx_ring(hyper_dmabuf_xen_cleanup_rx_rbuf);
+}
+
+void hyper_dmabuf_xen_destroy_comm(void)
+{
+	hyper_dmabuf_xen_cleanup_all_rbufs();
+	xen_comm_destroy_data_dir();
+}
+
+int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
+{
+	struct xen_comm_front_ring *ring;
+	struct hyper_dmabuf_req *new_req;
+	struct xen_comm_tx_ring_info *ring_info;
 	int notify;
 	int timeout = 1000;
 
 	/* find a ring info for the channel */
-	ring_info = hyper_dmabuf_find_exporter_ring(domain);
+	ring_info = xen_comm_find_tx_ring(domid);
 	if (!ring_info) {
 		printk("Can't find ring info for the channel\n");
 		return -EINVAL;
@@ -407,6 +460,8 @@ int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req, int
 		return -EIO;
 	}
 
+	req->request_id = xen_comm_next_req_id();
+
 	/* update req_pending with current request */
 	memcpy(&req_pending, req, sizeof(req_pending));
 
@@ -438,19 +493,19 @@ int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req, int
 }
 
 /* ISR for handling request */
-static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *info)
+static irqreturn_t back_ring_isr(int irq, void *info)
 {
 	RING_IDX rc, rp;
-	struct hyper_dmabuf_ring_rq req;
-	struct hyper_dmabuf_ring_rp resp;
+	struct hyper_dmabuf_req req;
+	struct hyper_dmabuf_resp resp;
 
 	int notify, more_to_do;
 	int ret;
 
-	struct hyper_dmabuf_ring_info_import *ring_info;
-	struct hyper_dmabuf_back_ring *ring;
+	struct xen_comm_rx_ring_info *ring_info;
+	struct xen_comm_back_ring *ring;
 
-	ring_info = (struct hyper_dmabuf_ring_info_import *)info;
+	ring_info = (struct xen_comm_rx_ring_info *)info;
 	ring = &ring_info->ring_back;
 
 	do {
@@ -490,17 +545,17 @@ static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *info)
 }
 
 /* ISR for handling responses */
-static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *info)
+static irqreturn_t front_ring_isr(int irq, void *info)
 {
 	/* front ring only care about response from back */
-	struct hyper_dmabuf_ring_rp *resp;
+	struct hyper_dmabuf_resp *resp;
 	RING_IDX i, rp;
 	int more_to_do, ret;
 
-	struct hyper_dmabuf_ring_info_export *ring_info;
-	struct hyper_dmabuf_front_ring *ring;
+	struct xen_comm_tx_ring_info *ring_info;
+	struct xen_comm_front_ring *ring;
 
-	ring_info = (struct hyper_dmabuf_ring_info_export *)info;
+	ring_info = (struct xen_comm_tx_ring_info *)info;
 	ring = &ring_info->ring_front;
 
 	do {
@@ -518,7 +573,7 @@ static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *info)
 			if (resp->status == HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP) {
 				/* parsing response */
 				ret = hyper_dmabuf_msg_parse(ring_info->rdomain,
-							(struct hyper_dmabuf_ring_rq *)resp);
+							(struct hyper_dmabuf_req *)resp);
 
 				if (ret < 0) {
 					printk("error while parsing response\n");
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
index 4ab031a..ba41e9d 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
@@ -3,27 +3,14 @@
 
 #include "xen/interface/io/ring.h"
 #include "xen/xenbus.h"
+#include "../hyper_dmabuf_msg.h"
 
 #define MAX_NUMBER_OF_OPERANDS 9
 
-struct hyper_dmabuf_ring_rq {
-        unsigned int request_id;
-        unsigned int status;
-        unsigned int command;
-        unsigned int operands[MAX_NUMBER_OF_OPERANDS];
-};
-
-struct hyper_dmabuf_ring_rp {
-        unsigned int response_id;
-        unsigned int status;
-        unsigned int command;
-        unsigned int operands[MAX_NUMBER_OF_OPERANDS];
-};
+DEFINE_RING_TYPES(xen_comm, struct hyper_dmabuf_req, struct hyper_dmabuf_resp);
 
-DEFINE_RING_TYPES(hyper_dmabuf, struct hyper_dmabuf_ring_rq, struct hyper_dmabuf_ring_rp);
-
-struct hyper_dmabuf_ring_info_export {
-        struct hyper_dmabuf_front_ring ring_front;
+struct xen_comm_tx_ring_info {
+        struct xen_comm_front_ring ring_front;
 	int rdomain;
         int gref_ring;
         int irq;
@@ -31,39 +18,35 @@ struct hyper_dmabuf_ring_info_export {
 	struct xenbus_watch watch;
 };
 
-struct hyper_dmabuf_ring_info_import {
+struct xen_comm_rx_ring_info {
         int sdomain;
         int irq;
         int evtchn;
-        struct hyper_dmabuf_back_ring ring_back;
+        struct xen_comm_back_ring ring_back;
 	struct gnttab_unmap_grant_ref unmap_op;
 };
 
-int32_t hyper_dmabuf_get_domid(void);
-int32_t hyper_dmabuf_setup_data_dir(void);
-int32_t hyper_dmabuf_destroy_data_dir(void);
+int hyper_dmabuf_get_domid(void);
 
-int hyper_dmabuf_next_req_id_export(void);
+int hyper_dmabuf_xen_init_comm_env(void);
 
 /* exporter needs to generate info for page sharing */
-int hyper_dmabuf_exporter_ringbuf_init(int rdomain);
+int hyper_dmabuf_xen_init_tx_rbuf(int domid);
 
-/* importer needs to know about shared page and port numbers for ring buffer and event channel */
-int hyper_dmabuf_importer_ringbuf_init(int sdomain);
+/* importer needs to know about shared page and port numbers
+ * for ring buffer and event channel
+ */
+int hyper_dmabuf_xen_init_rx_rbuf(int domid);
 
 /* cleans up exporter ring created for given domain */
-void hyper_dmabuf_exporter_ringbuf_cleanup(int rdomain);
+void hyper_dmabuf_xen_cleanup_tx_rbuf(int domid);
 
 /* cleans up importer ring created for given domain */
-void hyper_dmabuf_importer_ringbuf_cleanup(int sdomain);
+void hyper_dmabuf_xen_cleanup_rx_rbuf(int domid);
 
-/* cleans up all exporter/importer rings */
-void hyper_dmabuf_cleanup_ringbufs(void);
+void hyper_dmabuf_xen_destroy_comm(void);
 
 /* send request to the remote domain */
-int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req, int wait);
-
-/* called by interrupt (WORKQUEUE) */
-int hyper_dmabuf_send_response(struct hyper_dmabuf_ring_rp* response, int domain);
+int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait);
 
 #endif // __HYPER_DMABUF_XEN_COMM_H__
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
index a068276..2a1f45b 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
@@ -9,80 +9,73 @@
 #include "hyper_dmabuf_xen_comm.h"
 #include "hyper_dmabuf_xen_comm_list.h"
 
-DECLARE_HASHTABLE(hyper_dmabuf_hash_importer_ring, MAX_ENTRY_IMPORT_RING);
-DECLARE_HASHTABLE(hyper_dmabuf_hash_exporter_ring, MAX_ENTRY_EXPORT_RING);
+DECLARE_HASHTABLE(xen_comm_tx_ring_hash, MAX_ENTRY_TX_RING);
+DECLARE_HASHTABLE(xen_comm_rx_ring_hash, MAX_ENTRY_RX_RING);
 
-int hyper_dmabuf_ring_table_init()
+void xen_comm_ring_table_init()
 {
-	hash_init(hyper_dmabuf_hash_importer_ring);
-	hash_init(hyper_dmabuf_hash_exporter_ring);
-	return 0;
-}
-
-int hyper_dmabuf_ring_table_destroy()
-{
-	/* TODO: cleanup tables*/
-	return 0;
+	hash_init(xen_comm_rx_ring_hash);
+	hash_init(xen_comm_tx_ring_hash);
 }
 
-int hyper_dmabuf_register_exporter_ring(struct hyper_dmabuf_ring_info_export *ring_info)
+int xen_comm_add_tx_ring(struct xen_comm_tx_ring_info *ring_info)
 {
-	struct hyper_dmabuf_exporter_ring_info *info_entry;
+	struct xen_comm_tx_ring_info_entry *info_entry;
 
 	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
 
 	info_entry->info = ring_info;
 
-	hash_add(hyper_dmabuf_hash_exporter_ring, &info_entry->node,
+	hash_add(xen_comm_tx_ring_hash, &info_entry->node,
 		info_entry->info->rdomain);
 
 	return 0;
 }
 
-int hyper_dmabuf_register_importer_ring(struct hyper_dmabuf_ring_info_import *ring_info)
+int xen_comm_add_rx_ring(struct xen_comm_rx_ring_info *ring_info)
 {
-	struct hyper_dmabuf_importer_ring_info *info_entry;
+	struct xen_comm_rx_ring_info_entry *info_entry;
 
 	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
 
 	info_entry->info = ring_info;
 
-	hash_add(hyper_dmabuf_hash_importer_ring, &info_entry->node,
+	hash_add(xen_comm_rx_ring_hash, &info_entry->node,
 		info_entry->info->sdomain);
 
 	return 0;
 }
 
-struct hyper_dmabuf_ring_info_export *hyper_dmabuf_find_exporter_ring(int domid)
+struct xen_comm_tx_ring_info *xen_comm_find_tx_ring(int domid)
 {
-	struct hyper_dmabuf_exporter_ring_info *info_entry;
+	struct xen_comm_tx_ring_info_entry *info_entry;
 	int bkt;
 
-	hash_for_each(hyper_dmabuf_hash_exporter_ring, bkt, info_entry, node)
+	hash_for_each(xen_comm_tx_ring_hash, bkt, info_entry, node)
 		if(info_entry->info->rdomain == domid)
 			return info_entry->info;
 
 	return NULL;
 }
 
-struct hyper_dmabuf_ring_info_import *hyper_dmabuf_find_importer_ring(int domid)
+struct xen_comm_rx_ring_info *xen_comm_find_rx_ring(int domid)
 {
-	struct hyper_dmabuf_importer_ring_info *info_entry;
+	struct xen_comm_rx_ring_info_entry *info_entry;
 	int bkt;
 
-	hash_for_each(hyper_dmabuf_hash_importer_ring, bkt, info_entry, node)
+	hash_for_each(xen_comm_rx_ring_hash, bkt, info_entry, node)
 		if(info_entry->info->sdomain == domid)
 			return info_entry->info;
 
 	return NULL;
 }
 
-int hyper_dmabuf_remove_exporter_ring(int domid)
+int xen_comm_remove_tx_ring(int domid)
 {
-	struct hyper_dmabuf_exporter_ring_info *info_entry;
+	struct xen_comm_tx_ring_info_entry *info_entry;
 	int bkt;
 
-	hash_for_each(hyper_dmabuf_hash_exporter_ring, bkt, info_entry, node)
+	hash_for_each(xen_comm_tx_ring_hash, bkt, info_entry, node)
 		if(info_entry->info->rdomain == domid) {
 			hash_del(&info_entry->node);
 			kfree(info_entry);
@@ -92,12 +85,12 @@ int hyper_dmabuf_remove_exporter_ring(int domid)
 	return -1;
 }
 
-int hyper_dmabuf_remove_importer_ring(int domid)
+int xen_comm_remove_rx_ring(int domid)
 {
-	struct hyper_dmabuf_importer_ring_info *info_entry;
+	struct xen_comm_rx_ring_info_entry *info_entry;
 	int bkt;
 
-	hash_for_each(hyper_dmabuf_hash_importer_ring, bkt, info_entry, node)
+	hash_for_each(xen_comm_rx_ring_hash, bkt, info_entry, node)
 		if(info_entry->info->sdomain == domid) {
 			hash_del(&info_entry->node);
 			kfree(info_entry);
@@ -107,24 +100,26 @@ int hyper_dmabuf_remove_importer_ring(int domid)
 	return -1;
 }
 
-void hyper_dmabuf_foreach_exporter_ring(void (*func)(int rdom))
+void xen_comm_foreach_tx_ring(void (*func)(int domid))
 {
-	struct hyper_dmabuf_exporter_ring_info *info_entry;
+	struct xen_comm_tx_ring_info_entry *info_entry;
 	struct hlist_node *tmp;
 	int bkt;
 
-	hash_for_each_safe(hyper_dmabuf_hash_exporter_ring, bkt, tmp, info_entry, node) {
+	hash_for_each_safe(xen_comm_tx_ring_hash, bkt, tmp,
+			   info_entry, node) {
 		func(info_entry->info->rdomain);
 	}
 }
 
-void hyper_dmabuf_foreach_importer_ring(void (*func)(int sdom))
+void xen_comm_foreach_rx_ring(void (*func)(int domid))
 {
-	struct hyper_dmabuf_importer_ring_info *info_entry;
+	struct xen_comm_rx_ring_info_entry *info_entry;
 	struct hlist_node *tmp;
 	int bkt;
 
-	hash_for_each_safe(hyper_dmabuf_hash_importer_ring, bkt, tmp, info_entry, node) {
+	hash_for_each_safe(xen_comm_rx_ring_hash, bkt, tmp,
+			   info_entry, node) {
 		func(info_entry->info->sdomain);
 	}
 }
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
index fd1958c..18b3afd 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
@@ -2,40 +2,38 @@
 #define __HYPER_DMABUF_XEN_COMM_LIST_H__
 
 /* number of bits to be used for exported dmabufs hash table */
-#define MAX_ENTRY_EXPORT_RING 7
+#define MAX_ENTRY_TX_RING 7
 /* number of bits to be used for imported dmabufs hash table */
-#define MAX_ENTRY_IMPORT_RING 7
+#define MAX_ENTRY_RX_RING 7
 
-struct hyper_dmabuf_exporter_ring_info {
-        struct hyper_dmabuf_ring_info_export *info;
+struct xen_comm_tx_ring_info_entry {
+        struct xen_comm_tx_ring_info *info;
         struct hlist_node node;
 };
 
-struct hyper_dmabuf_importer_ring_info {
-        struct hyper_dmabuf_ring_info_import *info;
+struct xen_comm_rx_ring_info_entry {
+        struct xen_comm_rx_ring_info *info;
         struct hlist_node node;
 };
 
-int hyper_dmabuf_ring_table_init(void);
+void xen_comm_ring_table_init(void);
 
-int hyper_dmabuf_ring_table_destroy(void);
+int xen_comm_add_tx_ring(struct xen_comm_tx_ring_info *ring_info);
 
-int hyper_dmabuf_register_exporter_ring(struct hyper_dmabuf_ring_info_export *ring_info);
+int xen_comm_add_rx_ring(struct xen_comm_rx_ring_info *ring_info);
 
-int hyper_dmabuf_register_importer_ring(struct hyper_dmabuf_ring_info_import *ring_info);
+int xen_comm_remove_tx_ring(int domid);
 
-struct hyper_dmabuf_ring_info_export *hyper_dmabuf_find_exporter_ring(int domid);
+int xen_comm_remove_rx_ring(int domid);
 
-struct hyper_dmabuf_ring_info_import *hyper_dmabuf_find_importer_ring(int domid);
+struct xen_comm_tx_ring_info *xen_comm_find_tx_ring(int domid);
 
-int hyper_dmabuf_remove_exporter_ring(int domid);
-
-int hyper_dmabuf_remove_importer_ring(int domid);
+struct xen_comm_rx_ring_info *xen_comm_find_rx_ring(int domid);
 
 /* iterates over all exporter rings and calls provided function for each of them */
-void hyper_dmabuf_foreach_exporter_ring(void (*func)(int rdom));
+void xen_comm_foreach_tx_ring(void (*func)(int domid));
 
 /* iterates over all importer rings and calls provided function for each of them */
-void hyper_dmabuf_foreach_importer_ring(void (*func)(int sdom));
+void xen_comm_foreach_rx_ring(void (*func)(int domid));
 
 #endif // __HYPER_DMABUF_XEN_COMM_LIST_H__
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
new file mode 100644
index 0000000..e7b871a
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
@@ -0,0 +1,22 @@
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/module.h>
+#include <xen/grant_table.h>
+#include "../hyper_dmabuf_msg.h"
+#include "../hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_xen_drv.h"
+#include "hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_xen_shm.h"
+
+struct hyper_dmabuf_backend_ops xen_backend_ops = {
+	.get_vm_id = hyper_dmabuf_get_domid,
+	.share_pages = hyper_dmabuf_xen_share_pages,
+	.unshare_pages = hyper_dmabuf_xen_unshare_pages,
+	.map_shared_pages = (void *)hyper_dmabuf_xen_map_shared_pages,
+	.unmap_shared_pages = hyper_dmabuf_xen_unmap_shared_pages,
+	.init_comm_env = hyper_dmabuf_xen_init_comm_env,
+	.destroy_comm = hyper_dmabuf_xen_destroy_comm,
+	.init_rx_ch = hyper_dmabuf_xen_init_rx_rbuf,
+	.init_tx_ch = hyper_dmabuf_xen_init_tx_rbuf,
+	.send_req = hyper_dmabuf_xen_send_req,
+};
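+
+/* The core driver selects this table at init time (with CONFIG_XEN,
+ * hyper_dmabuf_drv_init() sets hyper_dmabuf_private.backend_ops to
+ * &xen_backend_ops) and from then on reaches the hypervisor only
+ * through these callbacks.
+ */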
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
new file mode 100644
index 0000000..e351c08
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
@@ -0,0 +1,20 @@
+#ifndef __HYPER_DMABUF_XEN_DRV_H__
+#define __HYPER_DMABUF_XEN_DRV_H__
+#include <xen/interface/grant_table.h>
+
+extern struct hyper_dmabuf_backend_ops xen_backend_ops;
+
+/* Main purpose of this structure is to keep
+ * all references created or acquired for sharing
+ * pages with another domain, so that they can be
+ * freed later when unsharing.
+ */
+struct xen_shared_pages_info {
+        grant_ref_t lvl3_gref; /* top level refid */
+        grant_ref_t *lvl3_table; /* page of top level addressing, it contains refids of 2nd level pages */
+        grant_ref_t *lvl2_table; /* table of 2nd level pages, that contains refids to data pages */
+        struct gnttab_unmap_grant_ref* unmap_ops; /* unmap ops for mapped pages */
+        struct page **data_pages; /* data pages to be unmapped */
+};
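+
+/* Note: the exporting side (hyper_dmabuf_xen_share_pages) fills in
+ * lvl3_gref, lvl3_table and lvl2_table, while the importing side
+ * (hyper_dmabuf_xen_map_shared_pages) fills in unmap_ops and data_pages;
+ * each side allocates and owns its own instance of this structure.
+ */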
+
+#endif // __HYPER_DMABUF_XEN_DRV_H__
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
new file mode 100644
index 0000000..c0045d4
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
@@ -0,0 +1,356 @@
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <xen/grant_table.h>
+#include <asm/xen/page.h>
+#include "hyper_dmabuf_xen_drv.h"
+
+#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
+
+/*
+ * Creates 2 level page directory structure for referencing shared pages.
+ * Top level page is a single page that contains up to 1024 refids that
+ * point to 2nd level pages.
+ * Each 2nd level page contains up to 1024 refids that point to shared
+ * data pages.
+ * There will always be one top level page and number of 2nd level pages
+ * depends on number of shared data pages.
+ *
+ *      3rd level page                2nd level pages            Data pages
+ * +-------------------------+   ┌>+--------------------+ ┌--->+------------+
+ * |2nd level page 0 refid   |---┘ |Data page 0 refid   |-┘    |Data page 0 |
+ * |2nd level page 1 refid   |---┐ |Data page 1 refid   |-┐    +------------+
+ * |           ...           |   | |     ....           | |
+ * |2nd level page 1023 refid|-┐ | |Data page 1023 refid| └--->+------------+
+ * +-------------------------+ | | +--------------------+      |Data page 1 |
+ *                             | |                             +------------+
+ *                             | └>+--------------------+
+ *                             |   |Data page 1024 refid|
+ *                             |   |Data page 1025 refid|
+ *                             |   |       ...          |
+ *                             |   |Data page 2047 refid|
+ *                             |   +--------------------+
+ *                             |
+ *                             |        .....
+ *                             └-->+-----------------------+
+ *                                 |Data page 1047552 refid|
+ *                                 |Data page 1047553 refid|
+ *                                 |       ...             |
+ *                                 |Data page 1048575 refid|-->+------------------+
+ *                                 +-----------------------+   |Data page 1048575 |
+ *                                                             +------------------+
+ *
+ * Using such a 2 level structure it is possible to reference up to 4GB of
+ * shared data using a single refid pointing to the top level page.
+ *
+ * Returns refid of top level page.
+ */
+int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
+				 void **refs_info)
+{
+	grant_ref_t lvl3_gref;
+	grant_ref_t *lvl2_table;
+	grant_ref_t *lvl3_table;
+
+	/*
+	 * Calculate number of pages needed for 2nd level addressing:
+	 */
+	int n_lvl2_grefs = (nents/REFS_PER_PAGE +
+			   ((nents % REFS_PER_PAGE) ? 1: 0));
+
+	struct xen_shared_pages_info *sh_pages_info;
+	int i;
+
+	lvl3_table = (grant_ref_t *)__get_free_pages(GFP_KERNEL, 1);
+	lvl2_table = (grant_ref_t *)__get_free_pages(GFP_KERNEL,
+				get_order(n_lvl2_grefs * PAGE_SIZE));
+
+	sh_pages_info = kmalloc(sizeof(*sh_pages_info), GFP_KERNEL);
+	*refs_info = (void *)sh_pages_info;
+
+	/* share data pages in rw mode*/
+	for (i=0; i<nents; i++) {
+		lvl2_table[i] = gnttab_grant_foreign_access(domid,
+							    pfn_to_mfn(page_to_pfn(pages[i])),
+							    0);
+	}
+
+	/* Share 2nd level addressing pages in readonly mode*/
+	for (i=0; i< n_lvl2_grefs; i++) {
+		lvl3_table[i] = gnttab_grant_foreign_access(domid,
+							   virt_to_mfn((unsigned long)lvl2_table+i*PAGE_SIZE ),
+							   1);
+	}
+
+	/* Share lvl3_table in readonly mode*/
+	lvl3_gref = gnttab_grant_foreign_access(domid,
+						virt_to_mfn((unsigned long)lvl3_table),
+						1);
+
+	/* Store lvl3_table page to be freed later */
+	sh_pages_info->lvl3_table = lvl3_table;
+
+	/* Store lvl2_table pages to be freed later */
+	sh_pages_info->lvl2_table = lvl2_table;
+
+	/* Store exported pages refid to be unshared later */
+	sh_pages_info->lvl3_gref = lvl3_gref;
+
+	return lvl3_gref;
+}
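+
+/* Worked example of the capacity, assuming 4 KiB pages and a 4-byte
+ * grant_ref_t: REFS_PER_PAGE = 4096 / 4 = 1024, so the single top level
+ * page addresses up to 1024 2nd level pages, which in turn address up to
+ * 1024 * 1024 = 1048576 data pages, i.e. 4 GiB from one lvl3_gref.
+ */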
+
+int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents) {
+	struct xen_shared_pages_info *sh_pages_info;
+	int n_lvl2_grefs = (nents/REFS_PER_PAGE + ((nents % REFS_PER_PAGE) ? 1: 0));
+	int i;
+
+	sh_pages_info = (struct xen_shared_pages_info *)(*refs_info);
+
+	if (sh_pages_info->lvl3_table == NULL ||
+	    sh_pages_info->lvl2_table ==  NULL ||
+	    sh_pages_info->lvl3_gref == -1) {
+		printk("gref table for hyper_dmabuf already cleaned up\n");
+		return 0;
+	}
+
+	/* End foreign access for data pages, but do not free them */
+	for (i = 0; i < nents; i++) {
+		if (gnttab_query_foreign_access(sh_pages_info->lvl2_table[i])) {
+			printk("refid not shared !!\n");
+		}
+		gnttab_end_foreign_access_ref(sh_pages_info->lvl2_table[i], 0);
+		gnttab_free_grant_reference(sh_pages_info->lvl2_table[i]);
+	}
+
+	/* End foreign access for 2nd level addressing pages */
+	for (i = 0; i < n_lvl2_grefs; i++) {
+		if (gnttab_query_foreign_access(sh_pages_info->lvl3_table[i])) {
+			printk("refid not shared !!\n");
+		}
+		if (!gnttab_end_foreign_access_ref(sh_pages_info->lvl3_table[i], 1)) {
+			printk("refid still in use!!!\n");
+		}
+		gnttab_free_grant_reference(sh_pages_info->lvl3_table[i]);
+	}
+
+	/* End foreign access for top level addressing page */
+	if (gnttab_query_foreign_access(sh_pages_info->lvl3_gref)) {
+		printk("gref not shared !!\n");
+	}
+
+	gnttab_end_foreign_access_ref(sh_pages_info->lvl3_gref, 1);
+	gnttab_free_grant_reference(sh_pages_info->lvl3_gref);
+
+	/* freeing all pages used for 2 level addressing */
+	free_pages((unsigned long)sh_pages_info->lvl2_table,
+		   get_order(n_lvl2_grefs * PAGE_SIZE));
+	free_pages((unsigned long)sh_pages_info->lvl3_table, 1);
+
+	sh_pages_info->lvl3_gref = -1;
+	sh_pages_info->lvl2_table = NULL;
+	sh_pages_info->lvl3_table = NULL;
+	kfree(sh_pages_info);
+	sh_pages_info = NULL;
+
+	return 0;
+}
+
+/*
+ * Maps the provided top level ref id and then returns an array of pages containing data refs.
+ */
+struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int nents, void **refs_info)
+{
+	struct page *lvl3_table_page;
+	struct page **lvl2_table_pages;
+	struct page **data_pages;
+	struct xen_shared_pages_info *sh_pages_info;
+
+	grant_ref_t *lvl3_table;
+	grant_ref_t *lvl2_table;
+
+	struct gnttab_map_grant_ref lvl3_map_ops;
+	struct gnttab_unmap_grant_ref lvl3_unmap_ops;
+
+	struct gnttab_map_grant_ref *lvl2_map_ops;
+	struct gnttab_unmap_grant_ref *lvl2_unmap_ops;
+
+	struct gnttab_map_grant_ref *data_map_ops;
+	struct gnttab_unmap_grant_ref *data_unmap_ops;
+
+	int nents_last = nents % REFS_PER_PAGE;
+	int n_lvl2_grefs = (nents / REFS_PER_PAGE) + ((nents_last > 0) ? 1 : 0);
+	int i, j, k;
+
+	sh_pages_info = kmalloc(sizeof(*sh_pages_info), GFP_KERNEL);
+	*refs_info = (void *) sh_pages_info;
+
+	lvl2_table_pages = kcalloc(n_lvl2_grefs, sizeof(struct page *), GFP_KERNEL);
+	data_pages = kcalloc(nents, sizeof(struct page *), GFP_KERNEL);
+
+	lvl2_map_ops = kcalloc(n_lvl2_grefs, sizeof(*lvl2_map_ops), GFP_KERNEL);
+	lvl2_unmap_ops = kcalloc(n_lvl2_grefs, sizeof(*lvl2_unmap_ops), GFP_KERNEL);
+
+	data_map_ops = kcalloc(nents, sizeof(*data_map_ops), GFP_KERNEL);
+	data_unmap_ops = kcalloc(nents, sizeof(*data_unmap_ops), GFP_KERNEL);
+
+	/* Map top level addressing page */
+	if (gnttab_alloc_pages(1, &lvl3_table_page)) {
+		printk("Cannot allocate pages\n");
+		return NULL;
+	}
+
+	lvl3_table = (grant_ref_t *)pfn_to_kaddr(page_to_pfn(lvl3_table_page));
+
+	gnttab_set_map_op(&lvl3_map_ops, (unsigned long)lvl3_table, GNTMAP_host_map | GNTMAP_readonly,
+			  (grant_ref_t)lvl3_gref, domid);
+
+	gnttab_set_unmap_op(&lvl3_unmap_ops, (unsigned long)lvl3_table, GNTMAP_host_map | GNTMAP_readonly, -1);
+
+	if (gnttab_map_refs(&lvl3_map_ops, NULL, &lvl3_table_page, 1)) {
+		printk("\nxen: dom0: HYPERVISOR map grant ref failed");
+		return NULL;
+	}
+
+	if (lvl3_map_ops.status) {
+		printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d",
+			lvl3_map_ops.status);
+		return NULL;
+	} else {
+		lvl3_unmap_ops.handle = lvl3_map_ops.handle;
+	}
+
+	/* Map all second level pages */
+	if (gnttab_alloc_pages(n_lvl2_grefs, lvl2_table_pages)) {
+		printk("Cannot allocate pages\n");
+		return NULL;
+	}
+
+	for (i = 0; i < n_lvl2_grefs; i++) {
+		lvl2_table = (grant_ref_t *)pfn_to_kaddr(page_to_pfn(lvl2_table_pages[i]));
+		gnttab_set_map_op(&lvl2_map_ops[i], (unsigned long)lvl2_table, GNTMAP_host_map | GNTMAP_readonly,
+				  lvl3_table[i], domid);
+		gnttab_set_unmap_op(&lvl2_unmap_ops[i], (unsigned long)lvl2_table, GNTMAP_host_map | GNTMAP_readonly, -1);
+	}
+
+	/* Unmap top level page, as it won't be needed any longer */
+	if (gnttab_unmap_refs(&lvl3_unmap_ops, NULL, &lvl3_table_page, 1)) {
+		printk("\nxen: cannot unmap top level page\n");
+		return NULL;
+	}
+
+	if (gnttab_map_refs(lvl2_map_ops, NULL, lvl2_table_pages, n_lvl2_grefs)) {
+		printk("\nxen: dom0: HYPERVISOR map grant ref failed");
+		return NULL;
+	}
+
+	/* Checks if pages were mapped correctly */
+	for (i = 0; i < n_lvl2_grefs; i++) {
+		if (lvl2_map_ops[i].status) {
+			printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d",
+			       lvl2_map_ops[i].status);
+			return NULL;
+		} else {
+			lvl2_unmap_ops[i].handle = lvl2_map_ops[i].handle;
+		}
+	}
+
+	if (gnttab_alloc_pages(nents, data_pages)) {
+		printk("Cannot allocate pages\n");
+		return NULL;
+	}
+
+	k = 0;
+
+	for (i = 0; i < (nents_last ? n_lvl2_grefs - 1 : n_lvl2_grefs); i++) {
+		lvl2_table = pfn_to_kaddr(page_to_pfn(lvl2_table_pages[i]));
+		for (j = 0; j < REFS_PER_PAGE; j++) {
+			gnttab_set_map_op(&data_map_ops[k],
+					  (unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
+					  GNTMAP_host_map,
+					  lvl2_table[j], domid);
+
+			gnttab_set_unmap_op(&data_unmap_ops[k],
+					    (unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
+					    GNTMAP_host_map, -1);
+			k++;
+		}
+	}
+
+	/* for grefs in the last lvl2 table page */
+	lvl2_table = pfn_to_kaddr(page_to_pfn(lvl2_table_pages[n_lvl2_grefs - 1]));
+
+	for (j = 0; j < nents_last; j++) {
+		gnttab_set_map_op(&data_map_ops[k],
+				  (unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
+				  GNTMAP_host_map,
+				  lvl2_table[j], domid);
+
+		gnttab_set_unmap_op(&data_unmap_ops[k],
+				    (unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
+				    GNTMAP_host_map, -1);
+		k++;
+	}
+
+	if (gnttab_map_refs(data_map_ops, NULL, data_pages, nents)) {
+		printk("\nxen: dom0: HYPERVISOR map grant ref failed\n");
+		return NULL;
+	}
+
+	/* unmapping lvl2 table pages */
+	if (gnttab_unmap_refs(lvl2_unmap_ops, NULL, lvl2_table_pages,
+			      n_lvl2_grefs)) {
+		printk("Cannot unmap 2nd level refs\n");
+		return NULL;
+	}
+
+	for (i = 0; i < nents; i++) {
+		if (data_map_ops[i].status) {
+			printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d\n",
+				data_map_ops[i].status);
+			return NULL;
+		} else {
+			data_unmap_ops[i].handle = data_map_ops[i].handle;
+		}
+	}
+
+	/* store these references for unmapping in the future */
+	sh_pages_info->unmap_ops = data_unmap_ops;
+	sh_pages_info->data_pages = data_pages;
+
+	gnttab_free_pages(1, &lvl3_table_page);
+	gnttab_free_pages(n_lvl2_grefs, lvl2_table_pages);
+	kfree(lvl2_table_pages);
+	kfree(lvl2_map_ops);
+	kfree(lvl2_unmap_ops);
+	kfree(data_map_ops);
+
+	return data_pages;
+}
+
+int hyper_dmabuf_xen_unmap_shared_pages(void **refs_info, int nents) {
+	struct xen_shared_pages_info *sh_pages_info;
+
+	sh_pages_info = (struct xen_shared_pages_info *)(*refs_info);
+
+	if (sh_pages_info->unmap_ops == NULL ||
+	    sh_pages_info->data_pages == NULL) {
+		printk("Imported pages already cleaned up or buffer was not imported yet\n");
+		return 0;
+	}
+
+	if (gnttab_unmap_refs(sh_pages_info->unmap_ops, NULL,
+			      sh_pages_info->data_pages, nents) ) {
+		printk("Cannot unmap data pages\n");
+		return -EINVAL;
+	}
+
+	gnttab_free_pages(nents, sh_pages_info->data_pages);
+
+	kfree(sh_pages_info->data_pages);
+	kfree(sh_pages_info->unmap_ops);
+	sh_pages_info->unmap_ops = NULL;
+	sh_pages_info->data_pages = NULL;
+	kfree(sh_pages_info);
+	sh_pages_info = NULL;
+
+	return 0;
+}
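+
+/* A rough sketch of how the two sides are meant to pair up, using the
+ * functions above (error handling omitted):
+ *
+ *   exporter: gref = hyper_dmabuf_xen_share_pages(pages, domid, nents, &ri);
+ *             ...send gref to the importing domain over the comm channel...
+ *             hyper_dmabuf_xen_unshare_pages(&ri, nents);
+ *
+ *   importer: pgs = hyper_dmabuf_xen_map_shared_pages(gref, domid, nents, &ri);
+ *             ...access the shared data...
+ *             hyper_dmabuf_xen_unmap_shared_pages(&ri, nents);
+ */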
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
new file mode 100644
index 0000000..2287804
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
@@ -0,0 +1,19 @@
+#ifndef __HYPER_DMABUF_XEN_SHM_H__
+#define __HYPER_DMABUF_XEN_SHM_H__
+
+/* Collects reference numbers of all 2nd level shared pages, creates a table
+ * with those in the 1st (top) level shared page and returns the reference
+ * number of that top level page. */
+int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
+				 void **refs_info);
+
+int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents);
+
+/* Maps the provided top level ref id and then returns an array of pages
+ * containing data refs.
+ */
+struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int nents,
+						void **refs_info);
+
+int hyper_dmabuf_xen_unmap_shared_pages(void **refs_info, int nents);
+
+#endif /* __HYPER_DMABUF_XEN_SHM_H__ */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 16/60] hyper_dmabuf: define hypervisor specific backend API
@ 2017-12-19 19:29   ` Dongwon Kim
  0 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: xen-devel, mateuszx.potrola, dri-devel, dongwon.kim

To allow the hyper_dmabuf driver to be adopted on hypervisors
other than Xen, a "backend" layer is defined and separated out
from the existing monolithic structure.

A "backend" is basically a table of function entry points that
provide methods for kernel page-level sharing and inter-VM
communication using the hypervisor's native mechanisms
(e.g. hypercalls).

All backend APIs are listed in "struct hyper_dmabuf_backend_ops",
as shown below.

struct hyper_dmabuf_backend_ops {
        /* retrieving id of current virtual machine */
        int (*get_vm_id)(void);

        /* get pages shared via hypervisor-specific method */
        int (*share_pages)(struct page **, int, int, void **);

        /* make shared pages unshared via hypervisor-specific method */
        int (*unshare_pages)(void **, int);

        /* map remotely shared pages on importer's side via
         * hypervisor-specific method
         */
        struct page ** (*map_shared_pages)(int, int, int, void **);

        /* unmap and free shared pages on importer's side via
         * hypervisor-specific method
         */
        int (*unmap_shared_pages)(void **, int);

        /* initialize communication environment */
        int (*init_comm_env)(void);

        void (*destroy_comm)(void);

        /* upstream ch setup (receiving and responding) */
        int (*init_rx_ch)(int);

        /* downstream ch setup (transmitting and parsing responses) */
        int (*init_tx_ch)(int);

        int (*send_req)(int, struct hyper_dmabuf_req *, int);
};

Within this new structure, only the backend APIs need to be redesigned or
replaced when porting this sharing model to a different hypervisor
environment, which is a lot simpler than completely redesigning the
whole driver for a new hypervisor.
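
As a rough illustration, a port to another hypervisor would mostly
amount to providing its own ops table and hooking it up at driver
init time. The sketch below is hypothetical (the kvm_* names do not
exist anywhere in this series) and only shows the shape of such a
port:

struct hyper_dmabuf_backend_ops kvm_backend_ops = {
        .get_vm_id = kvm_get_vm_id,
        .share_pages = kvm_share_pages,
        .unshare_pages = kvm_unshare_pages,
        .map_shared_pages = kvm_map_shared_pages,
        .unmap_shared_pages = kvm_unmap_shared_pages,
        .init_comm_env = kvm_init_comm_env,
        .destroy_comm = kvm_destroy_comm,
        .init_rx_ch = kvm_init_rx_ch,
        .init_tx_ch = kvm_init_tx_ch,
        .send_req = kvm_send_req,
};

The core itself never calls hypervisor code directly; it only goes
through hyper_dmabuf_private.backend_ops, for example
backend_ops->send_req(domid, req, true) when forwarding a request
to the remote domain.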

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/Makefile                  |  11 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h       |   1 -
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        |  33 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        | 112 ++----
 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c         |   6 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c        | 426 ++-------------------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h        |  14 -
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 134 +++----
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h      |  87 +++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        |  52 ++-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h        |  23 +-
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c    |   4 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h     |  26 +-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   | 303 +++++++++------
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h   |  51 +--
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c  |  67 ++--
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h  |  32 +-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c    |  22 ++
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h    |  20 +
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c    | 356 +++++++++++++++++
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h    |  19 +
 21 files changed, 949 insertions(+), 850 deletions(-)
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
 create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
 create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
 create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
 create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h

diff --git a/drivers/xen/hyper_dmabuf/Makefile b/drivers/xen/hyper_dmabuf/Makefile
index c9b8b7f..d90cfc3 100644
--- a/drivers/xen/hyper_dmabuf/Makefile
+++ b/drivers/xen/hyper_dmabuf/Makefile
@@ -1,5 +1,7 @@
 TARGET_MODULE:=hyper_dmabuf
 
+PLATFORM:=XEN
+
 # If we are invoked by the kernel build system
 ifneq ($(KERNELRELEASE),)
 	$(TARGET_MODULE)-objs := hyper_dmabuf_drv.o \
@@ -9,8 +11,13 @@ ifneq ($(KERNELRELEASE),)
 				 hyper_dmabuf_msg.o \
 				 hyper_dmabuf_id.o \
 				 hyper_dmabuf_remote_sync.o \
-				 xen/hyper_dmabuf_xen_comm.o \
-				 xen/hyper_dmabuf_xen_comm_list.o
+
+ifeq ($(CONFIG_XEN), y)
+	$(TARGET_MODULE)-objs += xen/hyper_dmabuf_xen_comm.o \
+				 xen/hyper_dmabuf_xen_comm_list.o \
+				 xen/hyper_dmabuf_xen_shm.o \
+				 xen/hyper_dmabuf_xen_drv.o
+endif
 
 obj-$(CONFIG_HYPER_DMABUF) := $(TARGET_MODULE).o
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
index 3d9b2d6..d012b05 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
@@ -1,2 +1 @@
 #define CURRENT_TARGET XEN
-#define INTER_DOMAIN_DMABUF_SYNCHRONIZATION
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 66d6cb9..ddcc955 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -1,15 +1,18 @@
-#include <linux/init.h>       /* module_init, module_exit */
-#include <linux/module.h> /* version info, MODULE_LICENSE, MODULE_AUTHOR, printk() */
+#include <linux/init.h>
+#include <linux/module.h>
 #include <linux/workqueue.h>
-#include <xen/grant_table.h>
-#include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_conf.h"
+#include "hyper_dmabuf_msg.h"
+#include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_id.h"
-#include "xen/hyper_dmabuf_xen_comm_list.h"
-#include "xen/hyper_dmabuf_xen_comm.h"
 
-MODULE_LICENSE("Dual BSD/GPL");
+#ifdef CONFIG_XEN
+#include "xen/hyper_dmabuf_xen_drv.h"
+extern struct hyper_dmabuf_backend_ops xen_backend_ops;
+#endif
+
+MODULE_LICENSE("GPL");
 MODULE_AUTHOR("IOTG-PED, INTEL");
 
 int register_device(void);
@@ -29,24 +32,24 @@ static int hyper_dmabuf_drv_init(void)
 		return -EINVAL;
 	}
 
+#ifdef CONFIG_XEN
+	hyper_dmabuf_private.backend_ops = &xen_backend_ops;
+#endif
+
 	printk( KERN_NOTICE "initializing database for imported/exported dmabufs\n");
 
 	/* device structure initialization */
 	/* currently only does work-queue initialization */
 	hyper_dmabuf_private.work_queue = create_workqueue("hyper_dmabuf_wqueue");
-	hyper_dmabuf_private.domid = hyper_dmabuf_get_domid();
+	hyper_dmabuf_private.domid = hyper_dmabuf_private.backend_ops->get_vm_id();
 
 	ret = hyper_dmabuf_table_init();
 	if (ret < 0) {
 		return -EINVAL;
 	}
 
-	ret = hyper_dmabuf_ring_table_init();
-	if (ret < 0) {
-		return -EINVAL;
-	}
+	ret = hyper_dmabuf_private.backend_ops->init_comm_env();
 
-	ret = hyper_dmabuf_setup_data_dir();
 	if (ret < 0) {
 		return -EINVAL;
 	}
@@ -61,8 +64,7 @@ static void hyper_dmabuf_drv_exit(void)
 	/* hash tables for export/import entries and ring_infos */
 	hyper_dmabuf_table_destroy();
 
-	hyper_dmabuf_cleanup_ringbufs();
-	hyper_dmabuf_ring_table_destroy();
+	hyper_dmabuf_private.backend_ops->destroy_comm();
 
 	/* destroy workqueue */
 	if (hyper_dmabuf_private.work_queue)
@@ -72,7 +74,6 @@ static void hyper_dmabuf_drv_exit(void)
 	if (hyper_dmabuf_private.id_queue)
 		destroy_reusable_list();
 
-	hyper_dmabuf_destroy_data_dir();
 	printk( KERN_NOTICE "dma_buf-src_sink model: Exiting" );
 	unregister_device();
 }
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
index 37b0cc1..03d77d7 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -6,94 +6,48 @@ struct list_reusable_id {
 	struct list_head list;
 };
 
-struct hyper_dmabuf_private {
-        struct device *device;
-	int domid;
-	struct workqueue_struct *work_queue;
-	struct list_reusable_id *id_queue;
-};
+struct hyper_dmabuf_backend_ops {
+	/* retrieving id of current virtual machine */
+	int (*get_vm_id)(void);
 
-typedef int (*hyper_dmabuf_ioctl_t)(void *data);
+	/* get pages shared via hypervisor-specific method */
+	int (*share_pages)(struct page **, int, int, void **);
 
-struct hyper_dmabuf_ioctl_desc {
-	unsigned int cmd;
-	int flags;
-	hyper_dmabuf_ioctl_t func;
-	const char *name;
-};
+	/* make shared pages unshared via hypervisor-specific method */
+	int (*unshare_pages)(void **, int);
 
-#define HYPER_DMABUF_IOCTL_DEF(ioctl, _func, _flags) 	\
-	[_IOC_NR(ioctl)] = {				\
-			.cmd = ioctl,			\
-			.func = _func,			\
-			.flags = _flags,		\
-			.name = #ioctl			\
-	}
+	/* map remotely shared pages on importer's side via
+	 * hypervisor-specific method
+	 */
+	struct page ** (*map_shared_pages)(int, int, int, void **);
 
-#define IOCTL_HYPER_DMABUF_EXPORTER_RING_SETUP \
-_IOC(_IOC_NONE, 'G', 0, sizeof(struct ioctl_hyper_dmabuf_exporter_ring_setup))
-struct ioctl_hyper_dmabuf_exporter_ring_setup {
-	/* IN parameters */
-	/* Remote domain id */
-	uint32_t remote_domain;
-};
+	/* unmap and free shared pages on importer's side via
+	 * hypervisor-specific method
+	 */
+	int (*unmap_shared_pages)(void **, int);
 
-#define IOCTL_HYPER_DMABUF_IMPORTER_RING_SETUP \
-_IOC(_IOC_NONE, 'G', 1, sizeof(struct ioctl_hyper_dmabuf_importer_ring_setup))
-struct ioctl_hyper_dmabuf_importer_ring_setup {
-	/* IN parameters */
-	/* Source domain id */
-	uint32_t source_domain;
-};
+	/* initialize communication environment */
+	int (*init_comm_env)(void);
 
-#define IOCTL_HYPER_DMABUF_EXPORT_REMOTE \
-_IOC(_IOC_NONE, 'G', 2, sizeof(struct ioctl_hyper_dmabuf_export_remote))
-struct ioctl_hyper_dmabuf_export_remote {
-	/* IN parameters */
-	/* DMA buf fd to be exported */
-	uint32_t dmabuf_fd;
-	/* Domain id to which buffer should be exported */
-	uint32_t remote_domain;
-	/* exported dma buf id */
-	uint32_t hyper_dmabuf_id;
-	uint32_t private[4];
-};
+	void (*destroy_comm)(void);
 
-#define IOCTL_HYPER_DMABUF_EXPORT_FD \
-_IOC(_IOC_NONE, 'G', 3, sizeof(struct ioctl_hyper_dmabuf_export_fd))
-struct ioctl_hyper_dmabuf_export_fd {
-	/* IN parameters */
-	/* hyper dmabuf id to be imported */
-	uint32_t hyper_dmabuf_id;
-	/* flags */
-	uint32_t flags;
-	/* OUT parameters */
-	/* exported dma buf fd */
-	uint32_t fd;
-};
+	/* upstream ch setup (receiving and responding) */
+	int (*init_rx_ch)(int);
+
+	/* downstream ch setup (transmitting and parsing responses) */
+	int (*init_tx_ch)(int);
 
-#define IOCTL_HYPER_DMABUF_UNEXPORT \
-_IOC(_IOC_NONE, 'G', 4, sizeof(struct ioctl_hyper_dmabuf_unexport))
-struct ioctl_hyper_dmabuf_unexport {
-	/* IN parameters */
-	/* hyper dmabuf id to be unexported */
-	uint32_t hyper_dmabuf_id;
-	/* OUT parameters */
-	/* Status of request */
-	uint32_t status;
+	int (*send_req)(int, struct hyper_dmabuf_req *, int);
 };
 
-#define IOCTL_HYPER_DMABUF_QUERY \
-_IOC(_IOC_NONE, 'G', 5, sizeof(struct ioctl_hyper_dmabuf_query))
-struct ioctl_hyper_dmabuf_query {
-	/* in parameters */
-	/* hyper dmabuf id to be queried */
-	uint32_t hyper_dmabuf_id;
-	/* item to be queried */
-	uint32_t item;
-	/* OUT parameters */
-	/* Value of queried item */
-	uint32_t info;
+struct hyper_dmabuf_private {
+        struct device *device;
+	int domid;
+	struct workqueue_struct *work_queue;
+	struct list_reusable_id *id_queue;
+
+	/* backend ops - hypervisor specific */
+	struct hyper_dmabuf_backend_ops *backend_ops;
 };
 
-#endif //__LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
+#endif /* __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
index 7bbb179..b58a111 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
@@ -1,5 +1,6 @@
 #include <linux/list.h>
 #include <linux/slab.h>
+#include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_id.h"
 
@@ -19,6 +20,7 @@ void store_reusable_id(int id)
 static int retrieve_reusable_id(void)
 {
 	struct list_reusable_id *reusable_head = hyper_dmabuf_private.id_queue;
+	int id;
 
 	/* check there is reusable id */
 	if (!list_empty(&reusable_head->list)) {
@@ -27,7 +29,9 @@ static int retrieve_reusable_id(void)
 						 list);
 
 		list_del(&reusable_head->list);
-		return reusable_head->id;
+		id = reusable_head->id;
+		kfree(reusable_head);
+		return id;
 	}
 
 	return -1;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
index b109138..0f104b9 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
@@ -8,10 +8,12 @@
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_imp.h"
 #include "hyper_dmabuf_id.h"
-#include "xen/hyper_dmabuf_xen_comm.h"
 #include "hyper_dmabuf_msg.h"
+#include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_list.h"
 
+extern struct hyper_dmabuf_private hyper_dmabuf_private;
+
 #define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
 
 int dmabuf_refcount(struct dma_buf *dma_buf)
@@ -138,397 +140,10 @@ struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
 /* free sg_table */
 void hyper_dmabuf_free_sgt(struct sg_table* sgt)
 {
-	sg_free_table(sgt);
-	kfree(sgt);
-}
-
-/*
- * Creates 2 level page directory structure for referencing shared pages.
- * Top level page is a single page that contains up to 1024 refids that
- * point to 2nd level pages.
- * Each 2nd level page contains up to 1024 refids that point to shared
- * data pages.
- * There will always be one top level page and number of 2nd level pages
- * depends on number of shared data pages.
- *
- *      Top level page                2nd level pages            Data pages
- * +-------------------------+   ┌>+--------------------+ ┌--->+------------+
- * |2nd level page 0 refid   |---┘ |Data page 0 refid   |-┘    |Data page 0 |
- * |2nd level page 1 refid   |---┐ |Data page 1 refid   |-┐    +------------+
- * |           ...           |   | |     ....           | |
- * |2nd level page 1023 refid|-┐ | |Data page 1023 refid| └--->+------------+
- * +-------------------------+ | | +--------------------+      |Data page 1 |
- *                             | |                             +------------+
- *                             | └>+--------------------+
- *                             |   |Data page 1024 refid|
- *                             |   |Data page 1025 refid|
- *                             |   |       ...          |
- *                             |   |Data page 2047 refid|
- *                             |   +--------------------+
- *                             |
- *                             |        .....
- *                             └-->+-----------------------+
- *                                 |Data page 1047552 refid|
- *                                 |Data page 1047553 refid|
- *                                 |       ...             |
- *                                 |Data page 1048575 refid|-->+------------------+
- *                                 +-----------------------+   |Data page 1048575 |
- *                                                             +------------------+
- *
- * Using such 2 level structure it is possible to reference up to 4GB of
- * shared data using single refid pointing to top level page.
- *
- * Returns refid of top level page.
- */
-grant_ref_t hyper_dmabuf_create_addressing_tables(grant_ref_t *data_refs, int nents, int rdomain,
-						  struct hyper_dmabuf_shared_pages_info *shared_pages_info)
-{
-	/*
-	 * Calculate number of pages needed for 2nd level addresing:
-	 */
-	int n_2nd_level_pages = (nents/REFS_PER_PAGE +
-				((nents % REFS_PER_PAGE) ? 1: 0));
-	int i;
-	unsigned long gref_page_start;
-	grant_ref_t *tmp_page;
-	grant_ref_t top_level_ref;
-	grant_ref_t * addr_refs;
-	addr_refs = kcalloc(sizeof(grant_ref_t), n_2nd_level_pages, GFP_KERNEL);
-
-	gref_page_start = __get_free_pages(GFP_KERNEL, n_2nd_level_pages);
-	tmp_page = (grant_ref_t *)gref_page_start;
-
-	/* Store 2nd level pages to be freed later */
-	shared_pages_info->addr_pages = tmp_page;
-
-	/*TODO: make sure that allocated memory is filled with 0*/
-
-	/* Share 2nd level addressing pages in readonly mode*/
-	for (i=0; i< n_2nd_level_pages; i++) {
-		addr_refs[i] = gnttab_grant_foreign_access(rdomain,
-							   virt_to_mfn((unsigned long)tmp_page+i*PAGE_SIZE ),
-							   1);
-	}
-
-	/*
-	 * fill second level pages with data refs
-	 */
-	for (i = 0; i < nents; i++) {
-		tmp_page[i] = data_refs[i];
-	}
-
-
-	/* allocate top level page */
-	gref_page_start = __get_free_pages(GFP_KERNEL, 1);
-	tmp_page = (grant_ref_t *)gref_page_start;
-
-	/* Store top level page to be freed later */
-	shared_pages_info->top_level_page = tmp_page;
-
-	/*
-	 * fill top level page with reference numbers of second level pages refs.
-	 */
-	for (i=0; i< n_2nd_level_pages; i++) {
-		tmp_page[i] =  addr_refs[i];
-	}
-
-	/* Share top level addressing page in readonly mode*/
-	top_level_ref = gnttab_grant_foreign_access(rdomain,
-						    virt_to_mfn((unsigned long)tmp_page),
-						    1);
-
-	kfree(addr_refs);
-
-	return top_level_ref;
-}
-
-/*
- * Maps provided top level ref id and then return array of pages containing data refs.
- */
-struct page** hyper_dmabuf_get_data_refs(grant_ref_t top_level_ref, int domid, int nents,
-					 struct hyper_dmabuf_shared_pages_info *shared_pages_info)
-{
-	struct page *top_level_page;
-	struct page **level2_pages;
-
-	grant_ref_t *top_level_refs;
-
-	struct gnttab_map_grant_ref top_level_map_ops;
-	struct gnttab_unmap_grant_ref top_level_unmap_ops;
-
-	struct gnttab_map_grant_ref *map_ops;
-	struct gnttab_unmap_grant_ref *unmap_ops;
-
-	unsigned long addr;
-	int n_level2_refs = 0;
-	int i;
-
-	n_level2_refs = (nents / REFS_PER_PAGE) + ((nents % REFS_PER_PAGE) ? 1 : 0);
-
-	level2_pages = kcalloc(sizeof(struct page*), n_level2_refs, GFP_KERNEL);
-
-	map_ops = kcalloc(sizeof(map_ops[0]), REFS_PER_PAGE, GFP_KERNEL);
-	unmap_ops = kcalloc(sizeof(unmap_ops[0]), REFS_PER_PAGE, GFP_KERNEL);
-
-	/* Map top level addressing page */
-	if (gnttab_alloc_pages(1, &top_level_page)) {
-		printk("Cannot allocate pages\n");
-		return NULL;
-	}
-
-	addr = (unsigned long)pfn_to_kaddr(page_to_pfn(top_level_page));
-	gnttab_set_map_op(&top_level_map_ops, addr, GNTMAP_host_map | GNTMAP_readonly,
-			  top_level_ref, domid);
-
-	gnttab_set_unmap_op(&top_level_unmap_ops, addr, GNTMAP_host_map | GNTMAP_readonly, -1);
-
-	if (gnttab_map_refs(&top_level_map_ops, NULL, &top_level_page, 1)) {
-		printk("\nxen: dom0: HYPERVISOR map grant ref failed");
-		return NULL;
-	}
-
-	if (top_level_map_ops.status) {
-		printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d",
-				top_level_map_ops.status);
-		return NULL;
-	} else {
-		top_level_unmap_ops.handle = top_level_map_ops.handle;
-	}
-
-	/* Parse contents of top level addressing page to find how many second level pages is there*/
-	top_level_refs = pfn_to_kaddr(page_to_pfn(top_level_page));
-
-	/* Map all second level pages */
-	if (gnttab_alloc_pages(n_level2_refs, level2_pages)) {
-		printk("Cannot allocate pages\n");
-		return NULL;
-	}
-
-	for (i = 0; i < n_level2_refs; i++) {
-		addr = (unsigned long)pfn_to_kaddr(page_to_pfn(level2_pages[i]));
-		gnttab_set_map_op(&map_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly,
-				  top_level_refs[i], domid);
-		gnttab_set_unmap_op(&unmap_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, -1);
-	}
-
-	if (gnttab_map_refs(map_ops, NULL, level2_pages, n_level2_refs)) {
-		printk("\nxen: dom0: HYPERVISOR map grant ref failed");
-		return NULL;
-	}
-
-	/* Checks if pages were mapped correctly and at the same time is calculating total number of data refids*/
-	for (i = 0; i < n_level2_refs; i++) {
-		if (map_ops[i].status) {
-			printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d",
-			       map_ops[i].status);
-			return NULL;
-		} else {
-			unmap_ops[i].handle = map_ops[i].handle;
-		}
-	}
-
-	/* Unmap top level page, as it won't be needed any longer */
-	if (gnttab_unmap_refs(&top_level_unmap_ops, NULL, &top_level_page, 1)) {
-		printk("\xen: cannot unmap top level page\n");
-		return NULL;
-	}
-
-	gnttab_free_pages(1, &top_level_page);
-	kfree(map_ops);
-	shared_pages_info->unmap_ops = unmap_ops;
-
-	return level2_pages;
-}
-
-
-/* This collects all reference numbers for 2nd level shared pages and create a table
- * with those in 1st level shared pages then return reference numbers for this top level
- * table. */
-grant_ref_t hyper_dmabuf_create_gref_table(struct page **pages, int rdomain, int nents,
-					   struct hyper_dmabuf_shared_pages_info *shared_pages_info)
-{
-	int i = 0;
-	grant_ref_t *data_refs;
-	grant_ref_t top_level_ref;
-
-	/* allocate temp array for refs of shared data pages */
-	data_refs = kcalloc(nents, sizeof(grant_ref_t), GFP_KERNEL);
-
-	/* share data pages in rw mode*/
-	for (i=0; i<nents; i++) {
-		data_refs[i] = gnttab_grant_foreign_access(rdomain,
-							   pfn_to_mfn(page_to_pfn(pages[i])),
-							   0);
-	}
-
-	/* create additional shared pages with 2 level addressing of data pages */
-	top_level_ref = hyper_dmabuf_create_addressing_tables(data_refs, nents, rdomain,
-							      shared_pages_info);
-
-	/* Store exported pages refid to be unshared later */
-	shared_pages_info->data_refs = data_refs;
-	shared_pages_info->top_level_ref = top_level_ref;
-
-	return top_level_ref;
-}
-
-int hyper_dmabuf_cleanup_gref_table(struct hyper_dmabuf_sgt_info *sgt_info) {
-	uint32_t i = 0;
-	struct hyper_dmabuf_shared_pages_info *shared_pages_info = &sgt_info->shared_pages_info;
-
-	grant_ref_t *ref = shared_pages_info->top_level_page;
-	int n_2nd_level_pages = (sgt_info->nents/REFS_PER_PAGE +
-				((sgt_info->nents % REFS_PER_PAGE) ? 1: 0));
-
-
-	if (shared_pages_info->data_refs == NULL ||
-	    shared_pages_info->addr_pages ==  NULL ||
-	    shared_pages_info->top_level_page == NULL ||
-	    shared_pages_info->top_level_ref == -1) {
-		printk("gref table for hyper_dmabuf already cleaned up\n");
-		return 0;
-	}
-
-	/* End foreign access for 2nd level addressing pages */
-	while(ref[i] != 0 && i < n_2nd_level_pages) {
-		if (gnttab_query_foreign_access(ref[i])) {
-			printk("refid not shared !!\n");
-		}
-		if (!gnttab_end_foreign_access_ref(ref[i], 1)) {
-			printk("refid still in use!!!\n");
-		}
-		gnttab_free_grant_reference(ref[i]);
-		i++;
-	}
-	free_pages((unsigned long)shared_pages_info->addr_pages, i);
-
-
-	/* End foreign access for top level addressing page */
-	if (gnttab_query_foreign_access(shared_pages_info->top_level_ref)) {
-		printk("refid not shared !!\n");
-	}
-	gnttab_end_foreign_access_ref(shared_pages_info->top_level_ref, 1);
-	gnttab_free_grant_reference(shared_pages_info->top_level_ref);
-
-	free_pages((unsigned long)shared_pages_info->top_level_page, 1);
-
-	/* End foreign access for data pages, but do not free them */
-	for (i = 0; i < sgt_info->nents; i++) {
-		if (gnttab_query_foreign_access(shared_pages_info->data_refs[i])) {
-			printk("refid not shared !!\n");
-		}
-		gnttab_end_foreign_access_ref(shared_pages_info->data_refs[i], 0);
-		gnttab_free_grant_reference(shared_pages_info->data_refs[i]);
-	}
-
-	kfree(shared_pages_info->data_refs);
-
-	shared_pages_info->data_refs = NULL;
-	shared_pages_info->addr_pages = NULL;
-	shared_pages_info->top_level_page = NULL;
-	shared_pages_info->top_level_ref = -1;
-
-	return 0;
-}
-
-int hyper_dmabuf_cleanup_imported_pages(struct hyper_dmabuf_imported_sgt_info *sgt_info) {
-	struct hyper_dmabuf_shared_pages_info *shared_pages_info = &sgt_info->shared_pages_info;
-
-	if(shared_pages_info->unmap_ops == NULL ||
-	   shared_pages_info->data_pages == NULL) {
-		printk("Imported pages already cleaned up or buffer was not imported yet\n");
-		return 0;
-	}
-
-	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL,
-			      shared_pages_info->data_pages, sgt_info->nents) ) {
-		printk("Cannot unmap data pages\n");
-		return -EINVAL;
-	}
-
-	gnttab_free_pages(sgt_info->nents, shared_pages_info->data_pages);
-	kfree(shared_pages_info->data_pages);
-	kfree(shared_pages_info->unmap_ops);
-	shared_pages_info->unmap_ops = NULL;
-	shared_pages_info->data_pages = NULL;
-
-	return 0;
-}
-
-/* map and construct sg_lists from reference numbers */
-struct sg_table* hyper_dmabuf_map_pages(grant_ref_t top_level_gref, int frst_ofst,
-					int last_len, int nents, int sdomain,
-					struct hyper_dmabuf_shared_pages_info *shared_pages_info)
-{
-	struct sg_table *st;
-	struct page **pages;
-	struct gnttab_map_grant_ref *ops;
-	struct gnttab_unmap_grant_ref *unmap_ops;
-	unsigned long addr;
-	grant_ref_t *refs;
-	int i;
-	int n_level2_refs = (nents / REFS_PER_PAGE) + ((nents % REFS_PER_PAGE) ? 1 : 0);
-
-	/* Get data refids */
-	struct page** refid_pages = hyper_dmabuf_get_data_refs(top_level_gref, sdomain, nents,
-							       shared_pages_info);
-
-	pages = kcalloc(sizeof(struct page*), nents, GFP_KERNEL);
-	if (pages == NULL) {
-		return NULL;
-	}
-
-	/* allocate new pages that are mapped to shared pages via grant-table */
-	if (gnttab_alloc_pages(nents, pages)) {
-		printk("Cannot allocate pages\n");
-		return NULL;
-	}
-
-	ops = kcalloc(nents, sizeof(struct gnttab_map_grant_ref),
-		      GFP_KERNEL);
-	unmap_ops = kcalloc(nents, sizeof(struct gnttab_unmap_grant_ref),
-			    GFP_KERNEL);
-
-	for (i=0; i<nents; i++) {
-		addr = (unsigned long)pfn_to_kaddr(page_to_pfn(pages[i]));
-		refs = pfn_to_kaddr(page_to_pfn(refid_pages[i / REFS_PER_PAGE]));
-		gnttab_set_map_op(&ops[i], addr, GNTMAP_host_map | GNTMAP_readonly,
-				refs[i % REFS_PER_PAGE], sdomain);
-		gnttab_set_unmap_op(&unmap_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, -1);
-	}
-
-	if (gnttab_map_refs(ops, NULL, pages, nents)) {
-		printk("\nxen: dom0: HYPERVISOR map grant ref failed\n");
-		return NULL;
-	}
-
-	for (i=0; i<nents; i++) {
-		if (ops[i].status) {
-			printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d\n",
-				ops[0].status);
-			return NULL;
-		} else {
-			unmap_ops[i].handle = ops[i].handle;
-		}
+	if (sgt) {
+		sg_free_table(sgt);
+		kfree(sgt);
 	}
-
-	st = hyper_dmabuf_create_sgt(pages, frst_ofst, last_len, nents);
-
-	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL, refid_pages,
-			n_level2_refs) ) {
-		printk("Cannot unmap 2nd level refs\n");
-		return NULL;
-	}
-
-	gnttab_free_pages(n_level2_refs, refid_pages);
-	kfree(refid_pages);
-
-	kfree(shared_pages_info->unmap_ops);
-	shared_pages_info->unmap_ops = unmap_ops;
-	shared_pages_info->data_pages = pages;
-	kfree(ops);
-
-	return st;
 }
 
 int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int force)
@@ -537,6 +152,7 @@ int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int fo
 	struct attachment_list *attachl;
 	struct kmap_vaddr_list *va_kmapl;
 	struct vmap_vaddr_list *va_vmapl;
+	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
 
 	if (!sgt_info) {
 		printk("invalid hyper_dmabuf_id\n");
@@ -598,7 +214,7 @@ int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int fo
 	}
 
 	/* Start cleanup of buffer in reverse order to exporting */
-	hyper_dmabuf_cleanup_gref_table(sgt_info);
+	ops->unshare_pages(&sgt_info->refs_info, sgt_info->nents);
 
 	/* unmap dma-buf */
 	dma_buf_unmap_attachment(sgt_info->active_attached->attach,
@@ -620,21 +236,22 @@ int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int fo
 	return 0;
 }
 
-inline int hyper_dmabuf_sync_request_and_wait(int id, int ops)
+inline int hyper_dmabuf_sync_request_and_wait(int id, int dmabuf_ops)
 {
-	struct hyper_dmabuf_ring_rq *req;
+	struct hyper_dmabuf_req *req;
+	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
 	int operands[2];
 	int ret;
 
 	operands[0] = id;
-	operands[1] = ops;
+	operands[1] = dmabuf_ops;
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
 	hyper_dmabuf_create_request(req, HYPER_DMABUF_OPS_TO_SOURCE, &operands[0]);
 
 	/* send request and wait for a response */
-	ret = hyper_dmabuf_send_request(HYPER_DMABUF_DOM_ID(id), req, true);
+	ret = ops->send_req(HYPER_DMABUF_DOM_ID(id), req, true);
 
 	kfree(req);
 
@@ -753,6 +370,7 @@ static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
 static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 {
 	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
 	int ret;
 	int final_release;
 
@@ -761,16 +379,22 @@ static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dma_buf->priv;
 
-	final_release = sgt_info && !sgt_info->valid &&
-		       !dmabuf_refcount(sgt_info->dma_buf);
-
 	if (!dmabuf_refcount(sgt_info->dma_buf)) {
 		sgt_info->dma_buf = NULL;
 	}
 
-	if (final_release) {
-		hyper_dmabuf_cleanup_imported_pages(sgt_info);
+	sgt_info->num_importers--;
+
+	if (sgt_info->num_importers == 0) {
+		ops->unmap_shared_pages(&sgt_info->refs_info, sgt_info->nents);
 		hyper_dmabuf_free_sgt(sgt_info->sgt);
+		sgt_info->sgt = NULL;
+	}
+
+	final_release = sgt_info && !sgt_info->valid &&
+		        !sgt_info->num_importers;
+
+	if (final_release) {
 		ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 							HYPER_DMABUF_OPS_RELEASE_FINAL);
 	} else {
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
index 1b0801f..a4a6d63 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
@@ -11,20 +11,6 @@ struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt);
 struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
                                 int frst_ofst, int last_len, int nents);
 
-grant_ref_t hyper_dmabuf_create_gref_table(struct page **pages, int rdomain, int nents,
-					   struct hyper_dmabuf_shared_pages_info *shared_pages_info);
-
-int hyper_dmabuf_cleanup_gref_table(struct hyper_dmabuf_sgt_info *sgt_info);
-
-int hyper_dmabuf_cleanup_imported_pages(struct hyper_dmabuf_imported_sgt_info *sgt_info);
-
-/* map first level tables that contains reference numbers for actual shared pages */
-grant_ref_t *hyper_dmabuf_map_gref_table(grant_ref_t *gref_table, int n_pages_table);
-
-/* map and construct sg_lists from reference numbers */
-struct sg_table* hyper_dmabuf_map_pages(grant_ref_t gref, int frst_ofst, int last_len, int nents, int sdomain,
-					struct hyper_dmabuf_shared_pages_info *shared_pages_info);
-
 int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int force);
 
 void hyper_dmabuf_free_sgt(struct sg_table *sgt);
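As a reading aid: hyper_dmabuf_create_sgt() rebuilds an sg_table from the
array of mapped pages, honoring the offset into the first page and the
valid length of the last page. A minimal sketch of that logic (assuming
nents >= 2 for brevity; a real implementation must also handle the
single-page case):

	struct sg_table *sgt;
	struct scatterlist *sgl;
	int i;

	sgt = kmalloc(sizeof(*sgt), GFP_KERNEL);
	if (!sgt || sg_alloc_table(sgt, nents, GFP_KERNEL))
		return NULL;

	sgl = sgt->sgl;
	/* first page: data starts at frst_ofst */
	sg_set_page(sgl, pages[0], PAGE_SIZE - frst_ofst, frst_ofst);
	for (i = 1; i < nents - 1; i++) {
		sgl = sg_next(sgl);
		sg_set_page(sgl, pages[i], PAGE_SIZE, 0);
	}
	/* last page: only the first last_len bytes are valid */
	sgl = sg_next(sgl);
	sg_set_page(sgl, pages[nents - 1], last_len, 0);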
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 5c6d9c8..70107bb 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -8,47 +8,37 @@
 #include <linux/delay.h>
 #include <linux/list.h>
 #include "hyper_dmabuf_struct.h"
-#include "hyper_dmabuf_imp.h"
+#include "hyper_dmabuf_ioctl.h"
 #include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_id.h"
+#include "hyper_dmabuf_imp.h"
 #include "hyper_dmabuf_query.h"
-#include "xen/hyper_dmabuf_xen_comm.h"
-#include "xen/hyper_dmabuf_xen_comm_list.h"
-#include "hyper_dmabuf_msg.h"
 
 extern struct hyper_dmabuf_private hyper_dmabuf_private;
 
-static int hyper_dmabuf_exporter_ring_setup(void *data)
+static int hyper_dmabuf_tx_ch_setup(void *data)
 {
-	struct ioctl_hyper_dmabuf_exporter_ring_setup *ring_attr;
-	struct hyper_dmabuf_ring_info_export *ring_info;
+	struct ioctl_hyper_dmabuf_tx_ch_setup *tx_ch_attr;
+	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
 	int ret = 0;
 
 	if (!data) {
 		printk("user data is NULL\n");
 		return -1;
 	}
-	ring_attr = (struct ioctl_hyper_dmabuf_exporter_ring_setup *)data;
-
-	/* check if the ring ch already exists */
-	ring_info = hyper_dmabuf_find_exporter_ring(ring_attr->remote_domain);
-
-	if (ring_info) {
-		printk("(exporter's) ring ch to domid = %d already exist\ngref = %d, port = %d\n",
-			ring_info->rdomain, ring_info->gref_ring, ring_info->port);
-		return 0;
-	}
+	tx_ch_attr = (struct ioctl_hyper_dmabuf_tx_ch_setup *)data;
 
-	ret = hyper_dmabuf_exporter_ringbuf_init(ring_attr->remote_domain);
+	ret = ops->init_tx_ch(tx_ch_attr->remote_domain);
 
 	return ret;
 }
 
-static int hyper_dmabuf_importer_ring_setup(void *data)
+static int hyper_dmabuf_rx_ch_setup(void *data)
 {
-	struct ioctl_hyper_dmabuf_importer_ring_setup *setup_imp_ring_attr;
-	struct hyper_dmabuf_ring_info_import *ring_info;
+	struct ioctl_hyper_dmabuf_rx_ch_setup *rx_ch_attr;
+	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
 	int ret = 0;
 
 	if (!data) {
@@ -56,17 +46,9 @@ static int hyper_dmabuf_importer_ring_setup(void *data)
 		return -1;
 	}
 
-	setup_imp_ring_attr = (struct ioctl_hyper_dmabuf_importer_ring_setup *)data;
-
-	/* check if the ring ch already exist */
-	ring_info = hyper_dmabuf_find_importer_ring(setup_imp_ring_attr->source_domain);
+	rx_ch_attr = (struct ioctl_hyper_dmabuf_rx_ch_setup *)data;
 
-	if (ring_info) {
-		printk("(importer's) ring ch to domid = %d already exist\n", ring_info->sdomain);
-		return 0;
-	}
-
-	ret = hyper_dmabuf_importer_ringbuf_init(setup_imp_ring_attr->source_domain);
+	ret = ops->init_rx_ch(rx_ch_attr->source_domain);
 
 	return ret;
 }
@@ -74,13 +56,14 @@ static int hyper_dmabuf_importer_ring_setup(void *data)
 static int hyper_dmabuf_export_remote(void *data)
 {
 	struct ioctl_hyper_dmabuf_export_remote *export_remote_attr;
+	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
 	struct dma_buf *dma_buf;
 	struct dma_buf_attachment *attachment;
 	struct sg_table *sgt;
 	struct hyper_dmabuf_pages_info *page_info;
 	struct hyper_dmabuf_sgt_info *sgt_info;
-	struct hyper_dmabuf_ring_rq *req;
-	int operands[9];
+	struct hyper_dmabuf_req *req;
+	int operands[MAX_NUMBER_OF_OPERANDS];
 	int ret = 0;
 
 	if (!data) {
@@ -125,6 +108,7 @@ static int hyper_dmabuf_export_remote(void *data)
 	sgt_info = kmalloc(sizeof(*sgt_info), GFP_KERNEL);
 
 	sgt_info->hyper_dmabuf_id = hyper_dmabuf_get_id();
+
 	/* TODO: We might need to consider using port number on event channel? */
 	sgt_info->hyper_dmabuf_rdomain = export_remote_attr->remote_domain;
 	sgt_info->dma_buf = dma_buf;
@@ -163,15 +147,14 @@ static int hyper_dmabuf_export_remote(void *data)
 
 	export_remote_attr->hyper_dmabuf_id = sgt_info->hyper_dmabuf_id;
 
-	/* now create table of grefs for shared pages and */
-
 	/* now create request for importer via ring */
 	operands[0] = page_info->hyper_dmabuf_id;
 	operands[1] = page_info->nents;
 	operands[2] = page_info->frst_ofst;
 	operands[3] = page_info->last_len;
-	operands[4] = hyper_dmabuf_create_gref_table(page_info->pages, export_remote_attr->remote_domain,
-						     page_info->nents, &sgt_info->shared_pages_info);
+	operands[4] = ops->share_pages(page_info->pages, export_remote_attr->remote_domain,
+				       page_info->nents, &sgt_info->refs_info);
+
 	/* driver/application specific private info, max 32 bytes */
 	operands[5] = export_remote_attr->private[0];
 	operands[6] = export_remote_attr->private[1];
@@ -182,7 +165,8 @@ static int hyper_dmabuf_export_remote(void *data)
 
 	/* composing a message to the importer */
 	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT, &operands[0]);
-	if(hyper_dmabuf_send_request(export_remote_attr->remote_domain, req, false))
+
+	if(ops->send_req(export_remote_attr->remote_domain, req, false))
 		goto fail_send_request;
 
 	/* free msg */
@@ -215,8 +199,10 @@ static int hyper_dmabuf_export_remote(void *data)
 static int hyper_dmabuf_export_fd_ioctl(void *data)
 {
 	struct ioctl_hyper_dmabuf_export_fd *export_fd_attr;
-	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
-	struct hyper_dmabuf_ring_rq *req;
+	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct hyper_dmabuf_req *req;
+	struct page **data_pages;
 	int operand;
 	int ret = 0;
 
@@ -228,43 +214,48 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 	export_fd_attr = (struct ioctl_hyper_dmabuf_export_fd *)data;
 
 	/* look for dmabuf for the id */
-	imported_sgt_info = hyper_dmabuf_find_imported(export_fd_attr->hyper_dmabuf_id);
-	if (imported_sgt_info == NULL) /* can't find sgt from the table */
+	sgt_info = hyper_dmabuf_find_imported(export_fd_attr->hyper_dmabuf_id);
+	if (sgt_info == NULL) /* can't find sgt from the table */
 		return -1;
 
 	printk("%s Found buffer gref %d  off %d last len %d nents %d domain %d\n", __func__,
-		imported_sgt_info->gref, imported_sgt_info->frst_ofst,
-		imported_sgt_info->last_len, imported_sgt_info->nents,
-		HYPER_DMABUF_DOM_ID(imported_sgt_info->hyper_dmabuf_id));
-
-	if (!imported_sgt_info->sgt) {
-		imported_sgt_info->sgt = hyper_dmabuf_map_pages(imported_sgt_info->gref,
-							imported_sgt_info->frst_ofst,
-							imported_sgt_info->last_len,
-							imported_sgt_info->nents,
-							HYPER_DMABUF_DOM_ID(imported_sgt_info->hyper_dmabuf_id),
-							&imported_sgt_info->shared_pages_info);
-
-		/* send notifiticatio for first export_fd to exporter */
-		operand = imported_sgt_info->hyper_dmabuf_id;
-		req = kcalloc(1, sizeof(*req), GFP_KERNEL);
-		hyper_dmabuf_create_request(req, HYPER_DMABUF_FIRST_EXPORT, &operand);
-
-		ret = hyper_dmabuf_send_request(HYPER_DMABUF_DOM_ID(operand), req, false);
-
-		if (!imported_sgt_info->sgt || ret) {
-			kfree(req);
-			printk("Failed to create sgt or notify exporter\n");
-			return -EINVAL;
-		}
+		sgt_info->ref_handle, sgt_info->frst_ofst,
+		sgt_info->last_len, sgt_info->nents,
+		HYPER_DMABUF_DOM_ID(sgt_info->hyper_dmabuf_id));
+
+	if (!sgt_info->sgt) {
+		data_pages = ops->map_shared_pages(sgt_info->ref_handle,
+						   HYPER_DMABUF_DOM_ID(sgt_info->hyper_dmabuf_id),
+						   sgt_info->nents,
+						   &sgt_info->refs_info);
+
+		sgt_info->sgt = hyper_dmabuf_create_sgt(data_pages, sgt_info->frst_ofst,
+							sgt_info->last_len, sgt_info->nents);
+
+	}
+
+	/* send notification for export_fd to exporter */
+	operand = sgt_info->hyper_dmabuf_id;
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+	hyper_dmabuf_create_request(req, HYPER_DMABUF_FIRST_EXPORT, &operand);
+
+	ret = ops->send_req(HYPER_DMABUF_DOM_ID(operand), req, false);
+
+	if (!sgt_info->sgt || ret) {
 		kfree(req);
+		printk("Failed to create sgt or notify exporter\n");
+		return -EINVAL;
 	}
+	kfree(req);
 
-	export_fd_attr->fd = hyper_dmabuf_export_fd(imported_sgt_info, export_fd_attr->flags);
+	export_fd_attr->fd = hyper_dmabuf_export_fd(sgt_info, export_fd_attr->flags);
 
 	if (export_fd_attr->fd < 0) {
 		/* fail to get fd */
 		ret = export_fd_attr->fd;
+	} else {
+		sgt_info->num_importers++;
 	}
 
 	return ret;
@@ -276,8 +267,9 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 static int hyper_dmabuf_unexport(void *data)
 {
 	struct ioctl_hyper_dmabuf_unexport *unexport_attr;
+	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
 	struct hyper_dmabuf_sgt_info *sgt_info;
-	struct hyper_dmabuf_ring_rq *req;
+	struct hyper_dmabuf_req *req;
 	int ret;
 
 	if (!data) {
@@ -301,7 +293,7 @@ static int hyper_dmabuf_unexport(void *data)
 	hyper_dmabuf_create_request(req, HYPER_DMABUF_NOTIFY_UNEXPORT, &unexport_attr->hyper_dmabuf_id);
 
 	/* Now send unexport request to remote domain, marking that buffer should not be used anymore */
-	ret = hyper_dmabuf_send_request(sgt_info->hyper_dmabuf_rdomain, req, true);
+	ret = ops->send_req(sgt_info->hyper_dmabuf_rdomain, req, true);
 	if (ret < 0) {
 		kfree(req);
 		return -EFAULT;
@@ -405,8 +397,8 @@ static int hyper_dmabuf_query(void *data)
 }
 
 static const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORTER_RING_SETUP, hyper_dmabuf_exporter_ring_setup, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_IMPORTER_RING_SETUP, hyper_dmabuf_importer_ring_setup, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_TX_CH_SETUP, hyper_dmabuf_tx_ch_setup, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_RX_CH_SETUP, hyper_dmabuf_rx_ch_setup, 0),
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_REMOTE, hyper_dmabuf_export_remote, 0),
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_FD, hyper_dmabuf_export_fd_ioctl, 0),
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_UNEXPORT, hyper_dmabuf_unexport, 0),
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
new file mode 100644
index 0000000..de216d3
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
@@ -0,0 +1,87 @@
+#ifndef __LINUX_PUBLIC_HYPER_DMABUF_IOCTL_H__
+#define __LINUX_PUBLIC_HYPER_DMABUF_IOCTL_H__
+
+typedef int (*hyper_dmabuf_ioctl_t)(void *data);
+
+struct hyper_dmabuf_ioctl_desc {
+	unsigned int cmd;
+	int flags;
+	hyper_dmabuf_ioctl_t func;
+	const char *name;
+};
+
+#define HYPER_DMABUF_IOCTL_DEF(ioctl, _func, _flags) 	\
+	[_IOC_NR(ioctl)] = {				\
+			.cmd = ioctl,			\
+			.func = _func,			\
+			.flags = _flags,		\
+			.name = #ioctl			\
+	}
+
+#define IOCTL_HYPER_DMABUF_TX_CH_SETUP \
+_IOC(_IOC_NONE, 'G', 0, sizeof(struct ioctl_hyper_dmabuf_tx_ch_setup))
+struct ioctl_hyper_dmabuf_tx_ch_setup {
+	/* IN parameters */
+	/* Remote domain id */
+	int remote_domain;
+};
+
+#define IOCTL_HYPER_DMABUF_RX_CH_SETUP \
+_IOC(_IOC_NONE, 'G', 1, sizeof(struct ioctl_hyper_dmabuf_rx_ch_setup))
+struct ioctl_hyper_dmabuf_rx_ch_setup {
+	/* IN parameters */
+	/* Source domain id */
+	int source_domain;
+};
+
+#define IOCTL_HYPER_DMABUF_EXPORT_REMOTE \
+_IOC(_IOC_NONE, 'G', 2, sizeof(struct ioctl_hyper_dmabuf_export_remote))
+struct ioctl_hyper_dmabuf_export_remote {
+	/* IN parameters */
+	/* DMA buf fd to be exported */
+	int dmabuf_fd;
+	/* Domain id to which buffer should be exported */
+	int remote_domain;
+	/* exported dma buf id */
+	int hyper_dmabuf_id;
+	int private[4];
+};
+
+#define IOCTL_HYPER_DMABUF_EXPORT_FD \
+_IOC(_IOC_NONE, 'G', 3, sizeof(struct ioctl_hyper_dmabuf_export_fd))
+struct ioctl_hyper_dmabuf_export_fd {
+	/* IN parameters */
+	/* hyper dmabuf id to be imported */
+	int hyper_dmabuf_id;
+	/* flags */
+	int flags;
+	/* OUT parameters */
+	/* exported dma buf fd */
+	int fd;
+};
+
+#define IOCTL_HYPER_DMABUF_UNEXPORT \
+_IOC(_IOC_NONE, 'G', 4, sizeof(struct ioctl_hyper_dmabuf_unexport))
+struct ioctl_hyper_dmabuf_unexport {
+	/* IN parameters */
+	/* hyper dmabuf id to be unexported */
+	int hyper_dmabuf_id;
+	/* OUT parameters */
+	/* Status of request */
+	int status;
+};
+
+#define IOCTL_HYPER_DMABUF_QUERY \
+_IOC(_IOC_NONE, 'G', 5, sizeof(struct ioctl_hyper_dmabuf_query))
+struct ioctl_hyper_dmabuf_query {
+	/* IN parameters */
+	/* hyper dmabuf id to be queried */
+	int hyper_dmabuf_id;
+	/* item to be queried */
+	int item;
+	/* OUT parameters */
+	/* Value of queried item */
+	int info;
+};
+
+#endif //__LINUX_PUBLIC_HYPER_DMABUF_IOCTL_H__
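For context, a user-space sketch of the exporter-side flow these ioctls
enable (the device node path and error handling below are assumptions for
illustration, not defined by this header):

	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include "hyper_dmabuf_ioctl.h"

	/* Export an existing dma-buf fd to a remote domain; returns the
	 * hyper_dmabuf_id that the importing domain passes to
	 * IOCTL_HYPER_DMABUF_EXPORT_FD. */
	int export_to_domain(int dmabuf_fd, int remote_domain)
	{
		struct ioctl_hyper_dmabuf_export_remote arg = {
			.dmabuf_fd = dmabuf_fd,
			.remote_domain = remote_domain,
		};
		int fd = open("/dev/hyper_dmabuf", O_RDWR); /* assumed node */
		int ret;

		if (fd < 0)
			return -1;
		ret = ioctl(fd, IOCTL_HYPER_DMABUF_EXPORT_REMOTE, &arg);
		close(fd);
		return ret < 0 ? ret : arg.hyper_dmabuf_id;
	}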
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index a2d687f..4647115 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -5,11 +5,10 @@
 #include <linux/dma-buf.h>
 #include <xen/grant_table.h>
 #include <linux/workqueue.h>
+#include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_imp.h"
 #include "hyper_dmabuf_remote_sync.h"
-#include "xen/hyper_dmabuf_xen_comm.h"
-#include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_list.h"
 
 #define FORCED_UNEXPORTING 0
@@ -18,18 +17,17 @@ extern struct hyper_dmabuf_private hyper_dmabuf_private;
 
 struct cmd_process {
 	struct work_struct work;
-	struct hyper_dmabuf_ring_rq *rq;
+	struct hyper_dmabuf_req *rq;
 	int domid;
 };
 
-void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
-				        enum hyper_dmabuf_command command, int *operands)
+void hyper_dmabuf_create_request(struct hyper_dmabuf_req *req,
+				 enum hyper_dmabuf_command command, int *operands)
 {
 	int i;
 
-	request->request_id = hyper_dmabuf_next_req_id_export();
-	request->status = HYPER_DMABUF_REQ_NOT_RESPONDED;
-	request->command = command;
+	req->status = HYPER_DMABUF_REQ_NOT_RESPONDED;
+	req->command = command;
 
 	switch(command) {
 	/* as exporter, commands to importer */
@@ -44,7 +42,7 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
 		 * operands5~8 : Driver-specific private data (e.g. graphic buffer's meta info)
 		 */
 		for (i=0; i < 8; i++)
-			request->operands[i] = operands[i];
+			req->operands[i] = operands[i];
 		break;
 
 	case HYPER_DMABUF_NOTIFY_UNEXPORT:
@@ -52,7 +50,7 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
 		/* command : DMABUF_DESTROY,
 		 * operands0 : hyper_dmabuf_id
 		 */
-		request->operands[0] = operands[0];
+		req->operands[0] = operands[0];
 		break;
 
 	case HYPER_DMABUF_FIRST_EXPORT:
@@ -60,7 +58,7 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
 		/* command : HYPER_DMABUF_FIRST_EXPORT,
 		 * operands0 : hyper_dmabuf_id
 		 */
-		request->operands[0] = operands[0];
+		req->operands[0] = operands[0];
 		break;
 
 	case HYPER_DMABUF_OPS_TO_REMOTE:
@@ -77,7 +75,7 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
 		 * operands1 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
 		 */
 		for (i=0; i<2; i++)
-			request->operands[i] = operands[i];
+			req->operands[i] = operands[i];
 		break;
 
 	default:
@@ -88,10 +86,10 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
 
 void cmd_process_work(struct work_struct *work)
 {
-	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
 	struct hyper_dmabuf_sgt_info *sgt_info;
+	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
 	struct cmd_process *proc = container_of(work, struct cmd_process, work);
-	struct hyper_dmabuf_ring_rq *req;
+	struct hyper_dmabuf_req *req;
 	int domid;
 	int i;
 
@@ -114,7 +112,7 @@ void cmd_process_work(struct work_struct *work)
 		imported_sgt_info->frst_ofst = req->operands[2];
 		imported_sgt_info->last_len = req->operands[3];
 		imported_sgt_info->nents = req->operands[1];
-		imported_sgt_info->gref = req->operands[4];
+		imported_sgt_info->ref_handle = req->operands[4];
 
 		printk("DMABUF was exported\n");
 		printk("\thyper_dmabuf_id %d\n", req->operands[0]);
@@ -139,10 +137,7 @@ void cmd_process_work(struct work_struct *work)
 			break;
 		}
 
-		if (sgt_info->importer_exported)
-			printk("warning: exported flag is not supposed to be 1 already\n");
-
-		sgt_info->importer_exported = 1;
+		sgt_info->importer_exported++;
 		break;
 
 	case HYPER_DMABUF_OPS_TO_REMOTE:
@@ -160,11 +155,11 @@ void cmd_process_work(struct work_struct *work)
 	kfree(proc);
 }
 
-int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
+int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 {
 	struct cmd_process *proc;
-	struct hyper_dmabuf_ring_rq *temp_req;
-	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
+	struct hyper_dmabuf_req *temp_req;
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
 	int ret;
 
 	if (!req) {
@@ -189,22 +184,21 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
 		 * operands0 : hyper_dmabuf_id
 		 */
 
-		imported_sgt_info =
-			hyper_dmabuf_find_imported(req->operands[0]);
+		sgt_info = hyper_dmabuf_find_imported(req->operands[0]);
 
-		if (imported_sgt_info) {
+		if (sgt_info) {
 			/* if anything is still using dma_buf */
-			if (imported_sgt_info->dma_buf &&
-			    dmabuf_refcount(imported_sgt_info->dma_buf) > 0) {
+			if (sgt_info->dma_buf &&
+			    dmabuf_refcount(sgt_info->dma_buf) > 0) {
 				/*
 				 * Buffer is still in  use, just mark that it should
 				 * not be allowed to export its fd anymore.
 				 */
-				imported_sgt_info->valid = 0;
+				sgt_info->valid = 0;
 			} else {
 				/* No one is using buffer, remove it from imported list */
 				hyper_dmabuf_remove_imported(req->operands[0]);
-				kfree(imported_sgt_info);
+				kfree(sgt_info);
 			}
 		} else {
 			req->status = HYPER_DMABUF_REQ_ERROR;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
index 1e9d827..ac4caeb 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -1,6 +1,22 @@
 #ifndef __HYPER_DMABUF_MSG_H__
 #define __HYPER_DMABUF_MSG_H__
 
+#define MAX_NUMBER_OF_OPERANDS 9
+
+struct hyper_dmabuf_req {
+	unsigned int request_id;
+	unsigned int status;
+	unsigned int command;
+	unsigned int operands[MAX_NUMBER_OF_OPERANDS];
+};
+
+struct hyper_dmabuf_resp {
+	unsigned int response_id;
+	unsigned int status;
+	unsigned int command;
+	unsigned int operands[MAX_NUMBER_OF_OPERANDS];
+};
+
 enum hyper_dmabuf_command {
 	HYPER_DMABUF_EXPORT = 0x10,
 	HYPER_DMABUF_FIRST_EXPORT,
@@ -35,10 +51,11 @@ enum hyper_dmabuf_req_feedback {
 };
 
 /* create a request packet with given command and operands */
-void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
-                                        enum hyper_dmabuf_command command, int *operands);
+void hyper_dmabuf_create_request(struct hyper_dmabuf_req *req,
+				 enum hyper_dmabuf_command command,
+				 int *operands);
 
 /* parse incoming request packet (or response) and take appropriate actions for those */
-int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req);
+int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req);
 
 #endif // __HYPER_DMABUF_MSG_H__
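Note that struct hyper_dmabuf_req and struct hyper_dmabuf_resp are laid out
identically; the front-ring ISR in hyper_dmabuf_xen_comm.c relies on this
when it casts a response to a request before handing it to
hyper_dmabuf_msg_parse(). A compile-time guard making that assumption
explicit could look like this (a sketch, not part of the patch):

	BUILD_BUG_ON(sizeof(struct hyper_dmabuf_req) !=
		     sizeof(struct hyper_dmabuf_resp));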
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
index c5950e0..0f4735c 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -5,9 +5,9 @@
 #include <linux/dma-buf.h>
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_id.h"
-#include "xen/hyper_dmabuf_xen_comm.h"
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_imp.h"
 
@@ -133,6 +133,8 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 
 	case HYPER_DMABUF_OPS_RELEASE:
 		/* place holder */
+		sgt_info->importer_exported--;
+
 		break;
 
 	case HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS:
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
index b52f958..f053dd10 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -1,14 +1,6 @@
 #ifndef __HYPER_DMABUF_STRUCT_H__
 #define __HYPER_DMABUF_STRUCT_H__
 
-#include <xen/interface/grant_table.h>
-
-/* each grant_ref_t is 4 bytes, so total 4096 grant_ref_t can be
- * in this block meaning we can share 4KB*4096 = 16MB of buffer
- * (needs to be increased for large buffer use-cases such as 4K
- * frame buffer) */
-#define MAX_ALLOWED_NUM_PAGES_FOR_GREF_NUM_ARRAYS 4
-
 /* stack of mapped sgts */
 struct sgt_list {
 	struct sg_table *sgt;
@@ -33,15 +25,6 @@ struct vmap_vaddr_list {
 	struct list_head list;
 };
 
-struct hyper_dmabuf_shared_pages_info {
-	grant_ref_t *data_refs;	/* table with shared buffer pages refid */
-	grant_ref_t *addr_pages; /* pages of 2nd level addressing */
-	grant_ref_t *top_level_page; /* page of top level addressing, it contains refids of 2nd level pages */
-	grant_ref_t top_level_ref; /* top level refid */
-	struct gnttab_unmap_grant_ref* unmap_ops; /* unmap ops for mapped pages */
-	struct page **data_pages; /* data pages to be unmapped */
-};
-
 /* Exporter builds pages_info before sharing pages */
 struct hyper_dmabuf_pages_info {
         int hyper_dmabuf_id; /* unique id to reference dmabuf in source domain */
@@ -69,8 +52,8 @@ struct hyper_dmabuf_sgt_info {
 	struct kmap_vaddr_list *va_kmapped;
 	struct vmap_vaddr_list *va_vmapped;
 	bool valid;
-	bool importer_exported; /* exported locally on importer's side */
-	struct hyper_dmabuf_shared_pages_info shared_pages_info;
+	int importer_exported; /* exported locally on importer's side */
+	void *refs_info; /* hypervisor-specific info for the references */
 	int private[4]; /* device specific info (e.g. image's meta info?) */
 };
 
@@ -79,14 +62,15 @@ struct hyper_dmabuf_sgt_info {
  * its own memory map once userspace asks for reference for the buffer */
 struct hyper_dmabuf_imported_sgt_info {
 	int hyper_dmabuf_id; /* unique id to reference dmabuf (HYPER_DMABUF_ID_IMPORTER(source domain id, exporter's hyper_dmabuf_id */
+	int ref_handle; /* reference number of top level addressing page of shared pages */
 	int frst_ofst;	/* start offset in shared page #1 */
 	int last_len;	/* length of data in the last shared page */
 	int nents;	/* number of pages to be shared */
-	grant_ref_t gref; /* reference number of top level addressing page of shared pages */
 	struct dma_buf *dma_buf;
 	struct sg_table *sgt; /* sgt pointer after importing buffer */
-	struct hyper_dmabuf_shared_pages_info shared_pages_info;
+	void *refs_info;
 	bool valid;
+	int num_importers;
 	int private[4]; /* device specific info (e.g. image's meta info?) */
 };
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index f9e0df3..bd37ec2 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -10,16 +10,15 @@
 #include <asm/xen/page.h>
 #include "hyper_dmabuf_xen_comm.h"
 #include "hyper_dmabuf_xen_comm_list.h"
-#include "../hyper_dmabuf_imp.h"
-#include "../hyper_dmabuf_list.h"
-#include "../hyper_dmabuf_msg.h"
 
 static int export_req_id = 0;
 
-struct hyper_dmabuf_ring_rq req_pending = {0};
+struct hyper_dmabuf_req req_pending = {0};
 
-/* Creates entry in xen store that will keep details of all exporter rings created by this domain */
-int32_t hyper_dmabuf_setup_data_dir()
+/* Creates an entry in the Xen store that keeps details of all
+ * exporter rings created by this domain
+ */
+static int xen_comm_setup_data_dir(void)
 {
 	char buf[255];
 
@@ -27,13 +26,13 @@ int32_t hyper_dmabuf_setup_data_dir()
 	return xenbus_mkdir(XBT_NIL, buf, "");
 }
 
-
 /* Removes entry from xenstore with exporter ring details.
- * Other domains that has connected to any of exporter rings created by this domain,
- * will be notified about removal of this entry and will treat that as signal to
- * cleanup importer rings created for this domain
+ * Other domains that have connected to any of the exporter rings
+ * created by this domain will be notified about removal of this
+ * entry and will treat that as a signal to clean up the importer
+ * rings created for this domain
  */
-int32_t hyper_dmabuf_destroy_data_dir()
+static int xen_comm_destroy_data_dir(void)
 {
 	char buf[255];
 
@@ -41,18 +40,19 @@ int32_t hyper_dmabuf_destroy_data_dir()
 	return xenbus_rm(XBT_NIL, buf, "");
 }
 
-/*
- * Adds xenstore entries with details of exporter ring created for given remote domain.
- * It requires special daemon running in dom0 to make sure that given remote domain will
- * have right permissions to access that data.
+/* Adds xenstore entries with details of the exporter ring created
+ * for the given remote domain. It requires a special daemon running
+ * in dom0 to make sure that the given remote domain will have the
+ * right permissions to access that data.
  */
-static int32_t hyper_dmabuf_expose_ring_details(uint32_t domid, uint32_t rdomid, uint32_t grefid, uint32_t port)
+static int xen_comm_expose_ring_details(int domid, int rdomid,
+					int gref, int port)
 {
 	char buf[255];
 	int ret;
 
 	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d", domid, rdomid);
-	ret = xenbus_printf(XBT_NIL, buf, "grefid", "%d", grefid);
+	ret = xenbus_printf(XBT_NIL, buf, "grefid", "%d", gref);
 
 	if (ret) {
 		printk("Failed to write xenbus entry %s: %d\n", buf, ret);
@@ -72,7 +72,7 @@ static int32_t hyper_dmabuf_expose_ring_details(uint32_t domid, uint32_t rdomid,
 /*
  * Queries details of ring exposed by remote domain.
  */
-static int32_t hyper_dmabuf_get_ring_details(uint32_t domid, uint32_t rdomid, uint32_t *grefid, uint32_t *port)
+static int xen_comm_get_ring_details(int domid, int rdomid, int *grefid, int *port)
 {
 	char buf[255];
 	int ret;
@@ -95,10 +95,10 @@ static int32_t hyper_dmabuf_get_ring_details(uint32_t domid, uint32_t rdomid, ui
 	return (ret <= 0 ? 1 : 0);
 }
 
-int32_t hyper_dmabuf_get_domid(void)
+int hyper_dmabuf_get_domid(void)
 {
 	struct xenbus_transaction xbt;
-	int32_t domid;
+	int domid;
 
         xenbus_transaction_start(&xbt);
 
@@ -110,29 +110,35 @@ int32_t hyper_dmabuf_get_domid(void)
 	return domid;
 }
 
-int hyper_dmabuf_next_req_id_export(void)
+static int xen_comm_next_req_id(void)
 {
         export_req_id++;
         return export_req_id;
 }
 
 /* For now cache latest rings as global variables TODO: keep them in list */
-static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *info);
-static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *info);
-
-/*
- * Callback function that will be called on any change of xenbus path being watched.
- * Used for detecting creation/destruction of remote domain exporter ring.
- * When remote domain's exporter ring will be detected, importer ring on this domain will be created.
- * When remote domain's exporter ring destruction will be detected it will celanup this domain importer ring.
- * Destruction can be caused by unloading module by remote domain or it's crash/force shutdown.
+static irqreturn_t front_ring_isr(int irq, void *info);
+static irqreturn_t back_ring_isr(int irq, void *info);
+
+/* Callback function that will be called on any change of the xenbus
+ * path being watched. Used for detecting creation/destruction of a
+ * remote domain's exporter ring.
+ *
+ * When a remote domain's exporter ring is detected, an importer ring
+ * on this domain will be created.
+ *
+ * When destruction of a remote domain's exporter ring is detected,
+ * this domain's importer ring will be cleaned up.
+ *
+ * Destruction can be caused by the remote domain unloading the module
+ * or by its crash/forced shutdown.
  */
-static void remote_domain_exporter_watch_cb(struct xenbus_watch *watch,
-				   const char *path, const char *token)
+static void remote_dom_exporter_watch_cb(struct xenbus_watch *watch,
+					 const char *path, const char *token)
 {
 	int rdom,ret;
 	uint32_t grefid, port;
-	struct hyper_dmabuf_ring_info_import *ring_info;
+	struct xen_comm_rx_ring_info *ring_info;
 
 	/* Check which domain has changed its exporter rings */
 	ret = sscanf(watch->node, "/local/domain/%d/", &rdom);
@@ -141,39 +147,49 @@ static void remote_domain_exporter_watch_cb(struct xenbus_watch *watch,
 	}
 
 	/* Check if we have importer ring for given remote domain already created */
-	ring_info = hyper_dmabuf_find_importer_ring(rdom);
-
-	/*
-	 * Try to query remote domain exporter ring details - if that will fail and we have
-	 * importer ring that means remote domains has cleanup its exporter ring, so our
-	 * importer ring is no longer useful.
-	 * If querying details will succeed and we don't have importer ring, it means that
-	 * remote domain has setup it for us and we should connect to it.
+	ring_info = xen_comm_find_rx_ring(rdom);
+
+	/* Try to query remote domain exporter ring details - if that
+	 * fails and we have an importer ring, it means the remote domain
+	 * has cleaned up its exporter ring, so our importer ring is no
+	 * longer useful.
+	 *
+	 * If querying details succeeds and we don't have an importer ring,
+	 * it means that the remote domain has set one up for us and we
+	 * should connect to it.
+	 */
-	ret = hyper_dmabuf_get_ring_details(hyper_dmabuf_get_domid(), rdom, &grefid, &port);
+	ret = xen_comm_get_ring_details(hyper_dmabuf_get_domid(), rdom,
+					&grefid, &port);
 
 	if (ring_info && ret != 0) {
 		printk("Remote exporter closed, cleaninup importer\n");
-		hyper_dmabuf_importer_ringbuf_cleanup(rdom);
+		hyper_dmabuf_xen_cleanup_rx_rbuf(rdom);
 	} else if (!ring_info && ret == 0) {
 		printk("Registering importer\n");
-		hyper_dmabuf_importer_ringbuf_init(rdom);
+		hyper_dmabuf_xen_init_rx_rbuf(rdom);
 	}
 }
 
 /* exporter needs to generate info for page sharing */
-int hyper_dmabuf_exporter_ringbuf_init(int rdomain)
+int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 {
-	struct hyper_dmabuf_ring_info_export *ring_info;
-	struct hyper_dmabuf_sring *sring;
+	struct xen_comm_tx_ring_info *ring_info;
+	struct xen_comm_sring *sring;
 	struct evtchn_alloc_unbound alloc_unbound;
 	struct evtchn_close close;
 
 	void *shared_ring;
 	int ret;
 
-	ring_info = (struct hyper_dmabuf_ring_info_export*)
-				kmalloc(sizeof(*ring_info), GFP_KERNEL);
+	/* check if there's any existing tx channel in the table */
+	ring_info = xen_comm_find_tx_ring(domid);
+
+	if (ring_info) {
+		printk("tx ring ch to domid = %d already exist\ngref = %d, port = %d\n",
+		ring_info->rdomain, ring_info->gref_ring, ring_info->port);
+		return 0;
+	}
+
+	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
 
 	/* from exporter to importer */
 	shared_ring = (void *)__get_free_pages(GFP_KERNEL, 1);
@@ -181,20 +197,22 @@ int hyper_dmabuf_exporter_ringbuf_init(int rdomain)
 		return -EINVAL;
 	}
 
-	sring = (struct hyper_dmabuf_sring *) shared_ring;
+	sring = (struct xen_comm_sring *) shared_ring;
 
 	SHARED_RING_INIT(sring);
 
 	FRONT_RING_INIT(&(ring_info->ring_front), sring, PAGE_SIZE);
 
-	ring_info->gref_ring = gnttab_grant_foreign_access(rdomain,
-							virt_to_mfn(shared_ring), 0);
+	ring_info->gref_ring = gnttab_grant_foreign_access(domid,
+							   virt_to_mfn(shared_ring),
+							   0);
 	if (ring_info->gref_ring < 0) {
-		return -EINVAL; /* fail to get gref */
+		/* fail to get gref */
+		return -EINVAL;
 	}
 
 	alloc_unbound.dom = DOMID_SELF;
-	alloc_unbound.remote_dom = rdomain;
+	alloc_unbound.remote_dom = domid;
 	ret = HYPERVISOR_event_channel_op(EVTCHNOP_alloc_unbound,
 					&alloc_unbound);
 	if (ret != 0) {
@@ -204,7 +222,7 @@ int hyper_dmabuf_exporter_ringbuf_init(int rdomain)
 
 	/* setting up interrupt */
 	ret = bind_evtchn_to_irqhandler(alloc_unbound.port,
-					hyper_dmabuf_front_ring_isr, 0,
+					front_ring_isr, 0,
 					NULL, (void*) ring_info);
 
 	if (ret < 0) {
@@ -216,7 +234,7 @@ int hyper_dmabuf_exporter_ringbuf_init(int rdomain)
 		return -EINVAL;
 	}
 
-	ring_info->rdomain = rdomain;
+	ring_info->rdomain = domid;
 	ring_info->irq = ret;
 	ring_info->port = alloc_unbound.port;
 
@@ -226,109 +244,128 @@ int hyper_dmabuf_exporter_ringbuf_init(int rdomain)
 		ring_info->port,
 		ring_info->irq);
 
-	ret = hyper_dmabuf_register_exporter_ring(ring_info);
+	ret = xen_comm_add_tx_ring(ring_info);
 
-	ret = hyper_dmabuf_expose_ring_details(hyper_dmabuf_get_domid(), rdomain,
-                                               ring_info->gref_ring, ring_info->port);
+	ret = xen_comm_expose_ring_details(hyper_dmabuf_get_domid(), domid,
+					   ring_info->gref_ring, ring_info->port);
 
 	/*
 	 * Register watch for remote domain exporter ring.
-	 * When remote domain will setup its exporter ring, we will automatically connect our importer ring to it.
+	 * When remote domain will setup its exporter ring,
+	 * we will automatically connect our importer ring to it.
 	 */
-	ring_info->watch.callback = remote_domain_exporter_watch_cb;
+	ring_info->watch.callback = remote_dom_exporter_watch_cb;
 	ring_info->watch.node = (const char*) kmalloc(sizeof(char) * 255, GFP_KERNEL);
-	sprintf((char*)ring_info->watch.node, "/local/domain/%d/data/hyper_dmabuf/%d/port", rdomain, hyper_dmabuf_get_domid());
+	sprintf((char*)ring_info->watch.node,
+		"/local/domain/%d/data/hyper_dmabuf/%d/port",
+		domid, hyper_dmabuf_get_domid());
+
 	register_xenbus_watch(&ring_info->watch);
 
 	return ret;
 }
 
 /* cleans up exporter ring created for given remote domain */
-void hyper_dmabuf_exporter_ringbuf_cleanup(int rdomain)
+void hyper_dmabuf_xen_cleanup_tx_rbuf(int domid)
 {
-	struct hyper_dmabuf_ring_info_export *ring_info;
+	struct xen_comm_tx_ring_info *ring_info;
 
 	/* check if we at all have exporter ring for given rdomain */
-	ring_info = hyper_dmabuf_find_exporter_ring(rdomain);
+	ring_info = xen_comm_find_tx_ring(domid);
 
 	if (!ring_info) {
 		return;
 	}
 
-	hyper_dmabuf_remove_exporter_ring(rdomain);
+	xen_comm_remove_tx_ring(domid);
 
 	unregister_xenbus_watch(&ring_info->watch);
 	kfree(ring_info->watch.node);
 
-	/* No need to close communication channel, will be done by this function */
-	unbind_from_irqhandler(ring_info->irq,	(void*) ring_info);
+	/* No need to close the communication channel; it will be closed
+	 * by unbind_from_irqhandler()
+	 */
+	unbind_from_irqhandler(ring_info->irq, (void*) ring_info);
 
-	/* No need to free sring page, will be freed by this function when other side will end its access */
+	/* No need to free the sring page; gnttab_end_foreign_access() will
+	 * free it once the other side ends its access
+	 */
 	gnttab_end_foreign_access(ring_info->gref_ring, 0,
 				  (unsigned long) ring_info->ring_front.sring);
 
 	kfree(ring_info);
 }
 
-/* importer needs to know about shared page and port numbers for ring buffer and event channel */
-int hyper_dmabuf_importer_ringbuf_init(int sdomain)
+/* importer needs to know about shared page and port numbers for
+ * ring buffer and event channel
+ */
+int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 {
-	struct hyper_dmabuf_ring_info_import *ring_info;
-	struct hyper_dmabuf_sring *sring;
+	struct xen_comm_rx_ring_info *ring_info;
+	struct xen_comm_sring *sring;
 
 	struct page *shared_ring;
 
-	struct gnttab_map_grant_ref *ops;
+	struct gnttab_map_grant_ref *map_ops;
+
 	int ret;
-	int importer_gref, importer_port;
+	int rx_gref, rx_port;
 
-	ret = hyper_dmabuf_get_ring_details(hyper_dmabuf_get_domid(), sdomain,
-					    &importer_gref, &importer_port);
+	/* check if there's existing rx ring channel */
+	ring_info = xen_comm_find_rx_ring(domid);
+
+	if (ring_info) {
+		printk("rx ring ch from domid = %d already exist\n", ring_info->sdomain);
+		return 0;
+	}
+
+	ret = xen_comm_get_ring_details(hyper_dmabuf_get_domid(), domid,
+					&rx_gref, &rx_port);
 
 	if (ret) {
-		printk("Domain %d has not created exporter ring for current domain\n", sdomain);
+		printk("Domain %d has not created exporter ring for current domain\n", domid);
 		return ret;
 	}
 
-	ring_info = (struct hyper_dmabuf_ring_info_import *)
-			kmalloc(sizeof(*ring_info), GFP_KERNEL);
+	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
 
-	ring_info->sdomain = sdomain;
-	ring_info->evtchn = importer_port;
+	ring_info->sdomain = domid;
+	ring_info->evtchn = rx_port;
 
-	ops = (struct gnttab_map_grant_ref*)kmalloc(sizeof(*ops), GFP_KERNEL);
+	map_ops = kmalloc(sizeof(*map_ops), GFP_KERNEL);
 
 	if (gnttab_alloc_pages(1, &shared_ring)) {
 		return -EINVAL;
 	}
 
-	gnttab_set_map_op(&ops[0], (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
-			GNTMAP_host_map, importer_gref, sdomain);
+	gnttab_set_map_op(&map_ops[0], (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
+			  GNTMAP_host_map, rx_gref, domid);
+
 	gnttab_set_unmap_op(&ring_info->unmap_op, (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
-			GNTMAP_host_map, -1);
+			    GNTMAP_host_map, -1);
 
-	ret = gnttab_map_refs(ops, NULL, &shared_ring, 1);
+	ret = gnttab_map_refs(map_ops, NULL, &shared_ring, 1);
 	if (ret < 0) {
 		printk("Cannot map ring\n");
 		return -EINVAL;
 	}
 
-	if (ops[0].status) {
+	if (map_ops[0].status) {
 		printk("Ring mapping failed\n");
 		return -EINVAL;
 	} else {
-		ring_info->unmap_op.handle = ops[0].handle;
+		ring_info->unmap_op.handle = map_ops[0].handle;
 	}
 
-	kfree(ops);
+	kfree(map_ops);
 
-	sring = (struct hyper_dmabuf_sring*) pfn_to_kaddr(page_to_pfn(shared_ring));
+	sring = (struct xen_comm_sring *)pfn_to_kaddr(page_to_pfn(shared_ring));
 
 	BACK_RING_INIT(&ring_info->ring_back, sring, PAGE_SIZE);
 
-	ret = bind_interdomain_evtchn_to_irqhandler(sdomain, importer_port,
-						hyper_dmabuf_back_ring_isr, 0,
-						NULL, (void*)ring_info);
+	ret = bind_interdomain_evtchn_to_irqhandler(domid, rx_port,
+						    back_ring_isr, 0,
+						    NULL, (void*)ring_info);
 	if (ret < 0) {
 		return -EINVAL;
 	}
@@ -336,35 +373,35 @@ int hyper_dmabuf_importer_ringbuf_init(int sdomain)
 	ring_info->irq = ret;
 
 	printk("%s: bound to eventchannel port: %d  irq: %d\n", __func__,
-		importer_port,
+		rx_port,
 		ring_info->irq);
 
-	ret = hyper_dmabuf_register_importer_ring(ring_info);
+	ret = xen_comm_add_rx_ring(ring_info);
 
 	/* Setup communication channel in opposite direction */
-	if (!hyper_dmabuf_find_exporter_ring(sdomain)) {
-		ret = hyper_dmabuf_exporter_ringbuf_init(sdomain);
+	if (!xen_comm_find_tx_ring(domid)) {
+		ret = hyper_dmabuf_xen_init_tx_rbuf(domid);
 	}
 
 	return ret;
 }
 
 /* cleans up importer ring created for given source domain */
-void hyper_dmabuf_importer_ringbuf_cleanup(int sdomain)
+void hyper_dmabuf_xen_cleanup_rx_rbuf(int domid)
 {
-	struct hyper_dmabuf_ring_info_import *ring_info;
+	struct xen_comm_rx_ring_info *ring_info;
 	struct page *shared_ring;
 
 	/* check if we have importer ring created for given sdomain */
-	ring_info = hyper_dmabuf_find_importer_ring(sdomain);
+	ring_info = xen_comm_find_rx_ring(domid);
 
 	if (!ring_info)
 		return;
 
-	hyper_dmabuf_remove_importer_ring(sdomain);
+	xen_comm_remove_rx_ring(domid);
 
 	/* no need to close event channel, will be done by that function */
-	unbind_from_irqhandler(ring_info->irq,	(void*) ring_info);
+	unbind_from_irqhandler(ring_info->irq, (void*)ring_info);
 
 	/* unmapping shared ring page */
 	shared_ring = virt_to_page(ring_info->ring_back.sring);
@@ -374,23 +411,39 @@ void hyper_dmabuf_importer_ringbuf_cleanup(int sdomain)
 	kfree(ring_info);
 }
 
-/* cleans up all exporter/importer rings */
-void hyper_dmabuf_cleanup_ringbufs(void)
+int hyper_dmabuf_xen_init_comm_env(void)
 {
-	hyper_dmabuf_foreach_exporter_ring(hyper_dmabuf_exporter_ringbuf_cleanup);
-	hyper_dmabuf_foreach_importer_ring(hyper_dmabuf_importer_ringbuf_cleanup);
+	int ret;
+
+	xen_comm_ring_table_init();
+	ret = xen_comm_setup_data_dir();
+
+	return ret;
 }
 
-int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req, int wait)
+/* cleans up all tx/rx rings */
+static void hyper_dmabuf_xen_cleanup_all_rbufs(void)
 {
-	struct hyper_dmabuf_front_ring *ring;
-	struct hyper_dmabuf_ring_rq *new_req;
-	struct hyper_dmabuf_ring_info_export *ring_info;
+	xen_comm_foreach_tx_ring(hyper_dmabuf_xen_cleanup_tx_rbuf);
+	xen_comm_foreach_rx_ring(hyper_dmabuf_xen_cleanup_rx_rbuf);
+}
+
+void hyper_dmabuf_xen_destroy_comm(void)
+{
+	hyper_dmabuf_xen_cleanup_all_rbufs();
+	xen_comm_destroy_data_dir();
+}
+
+int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
+{
+	struct xen_comm_front_ring *ring;
+	struct hyper_dmabuf_req *new_req;
+	struct xen_comm_tx_ring_info *ring_info;
 	int notify;
 	int timeout = 1000;
 
 	/* find a ring info for the channel */
-	ring_info = hyper_dmabuf_find_exporter_ring(domain);
+	ring_info = xen_comm_find_tx_ring(domid);
 	if (!ring_info) {
 		printk("Can't find ring info for the channel\n");
 		return -EINVAL;
@@ -407,6 +460,8 @@ int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req, int
 		return -EIO;
 	}
 
+	req->request_id = xen_comm_next_req_id();
+
 	/* update req_pending with current request */
 	memcpy(&req_pending, req, sizeof(req_pending));
 
@@ -438,19 +493,19 @@ int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req, int
 }
 
 /* ISR for handling request */
-static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *info)
+static irqreturn_t back_ring_isr(int irq, void *info)
 {
 	RING_IDX rc, rp;
-	struct hyper_dmabuf_ring_rq req;
-	struct hyper_dmabuf_ring_rp resp;
+	struct hyper_dmabuf_req req;
+	struct hyper_dmabuf_resp resp;
 
 	int notify, more_to_do;
 	int ret;
 
-	struct hyper_dmabuf_ring_info_import *ring_info;
-	struct hyper_dmabuf_back_ring *ring;
+	struct xen_comm_rx_ring_info *ring_info;
+	struct xen_comm_back_ring *ring;
 
-	ring_info = (struct hyper_dmabuf_ring_info_import *)info;
+	ring_info = (struct xen_comm_rx_ring_info *)info;
 	ring = &ring_info->ring_back;
 
 	do {
@@ -490,17 +545,17 @@ static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *info)
 }
 
 /* ISR for handling responses */
-static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *info)
+static irqreturn_t front_ring_isr(int irq, void *info)
 {
 	/* front ring only care about response from back */
-	struct hyper_dmabuf_ring_rp *resp;
+	struct hyper_dmabuf_resp *resp;
 	RING_IDX i, rp;
 	int more_to_do, ret;
 
-	struct hyper_dmabuf_ring_info_export *ring_info;
-	struct hyper_dmabuf_front_ring *ring;
+	struct xen_comm_tx_ring_info *ring_info;
+	struct xen_comm_front_ring *ring;
 
-	ring_info = (struct hyper_dmabuf_ring_info_export *)info;
+	ring_info = (struct xen_comm_tx_ring_info *)info;
 	ring = &ring_info->ring_front;
 
 	do {
@@ -518,7 +573,7 @@ static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *info)
 			if (resp->status == HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP) {
 				/* parsing response */
 				ret = hyper_dmabuf_msg_parse(ring_info->rdomain,
-							(struct hyper_dmabuf_ring_rq *)resp);
+							(struct hyper_dmabuf_req *)resp);
 
 				if (ret < 0) {
 					printk("getting error while parsing response\n");
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
index 4ab031a..ba41e9d 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
@@ -3,27 +3,14 @@
 
 #include "xen/interface/io/ring.h"
 #include "xen/xenbus.h"
+#include "../hyper_dmabuf_msg.h"
 
 #define MAX_NUMBER_OF_OPERANDS 9
 
-struct hyper_dmabuf_ring_rq {
-        unsigned int request_id;
-        unsigned int status;
-        unsigned int command;
-        unsigned int operands[MAX_NUMBER_OF_OPERANDS];
-};
-
-struct hyper_dmabuf_ring_rp {
-        unsigned int response_id;
-        unsigned int status;
-        unsigned int command;
-        unsigned int operands[MAX_NUMBER_OF_OPERANDS];
-};
+DEFINE_RING_TYPES(xen_comm, struct hyper_dmabuf_req, struct hyper_dmabuf_resp);
 
-DEFINE_RING_TYPES(hyper_dmabuf, struct hyper_dmabuf_ring_rq, struct hyper_dmabuf_ring_rp);
-
-struct hyper_dmabuf_ring_info_export {
-        struct hyper_dmabuf_front_ring ring_front;
+struct xen_comm_tx_ring_info {
+        struct xen_comm_front_ring ring_front;
 	int rdomain;
         int gref_ring;
         int irq;
@@ -31,39 +18,35 @@ struct hyper_dmabuf_ring_info_export {
 	struct xenbus_watch watch;
 };
 
-struct hyper_dmabuf_ring_info_import {
+struct xen_comm_rx_ring_info {
         int sdomain;
         int irq;
         int evtchn;
-        struct hyper_dmabuf_back_ring ring_back;
+        struct xen_comm_back_ring ring_back;
 	struct gnttab_unmap_grant_ref unmap_op;
 };
 
-int32_t hyper_dmabuf_get_domid(void);
-int32_t hyper_dmabuf_setup_data_dir(void);
-int32_t hyper_dmabuf_destroy_data_dir(void);
+int hyper_dmabuf_get_domid(void);
 
-int hyper_dmabuf_next_req_id_export(void);
+int hyper_dmabuf_xen_init_comm_env(void);
 
 /* exporter needs to generate info for page sharing */
-int hyper_dmabuf_exporter_ringbuf_init(int rdomain);
+int hyper_dmabuf_xen_init_tx_rbuf(int domid);
 
-/* importer needs to know about shared page and port numbers for ring buffer and event channel */
-int hyper_dmabuf_importer_ringbuf_init(int sdomain);
+/* importer needs to know about shared page and port numbers
+ * for ring buffer and event channel
+ */
+int hyper_dmabuf_xen_init_rx_rbuf(int domid);
 
 /* cleans up exporter ring created for given domain */
-void hyper_dmabuf_exporter_ringbuf_cleanup(int rdomain);
+void hyper_dmabuf_xen_cleanup_tx_rbuf(int domid);
 
 /* cleans up importer ring created for given domain */
-void hyper_dmabuf_importer_ringbuf_cleanup(int sdomain);
+void hyper_dmabuf_xen_cleanup_rx_rbuf(int domid);
 
-/* cleans up all exporter/importer rings */
-void hyper_dmabuf_cleanup_ringbufs(void);
+void hyper_dmabuf_xen_destroy_comm(void);
 
 /* send request to the remote domain */
-int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req, int wait);
-
-/* called by interrupt (WORKQUEUE) */
-int hyper_dmabuf_send_response(struct hyper_dmabuf_ring_rp* response, int domain);
+int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait);
 
 #endif // __HYPER_DMABUF_XEN_COMM_H__
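For readers unfamiliar with the Xen ring macros: DEFINE_RING_TYPES(xen_comm,
...) from xen/interface/io/ring.h expands, roughly, into the three types
used above:

	struct xen_comm_sring;      /* the shared page: request/response
	                             * slots plus producer/consumer indices */
	struct xen_comm_front_ring; /* the tx side's private view */
	struct xen_comm_back_ring;  /* the rx side's private view */

which the generic SHARED_RING_INIT()/FRONT_RING_INIT()/BACK_RING_INIT()
macros used in hyper_dmabuf_xen_comm.c then operate on.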
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
index a068276..2a1f45b 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
@@ -9,80 +9,73 @@
 #include "hyper_dmabuf_xen_comm.h"
 #include "hyper_dmabuf_xen_comm_list.h"
 
-DECLARE_HASHTABLE(hyper_dmabuf_hash_importer_ring, MAX_ENTRY_IMPORT_RING);
-DECLARE_HASHTABLE(hyper_dmabuf_hash_exporter_ring, MAX_ENTRY_EXPORT_RING);
+DECLARE_HASHTABLE(xen_comm_tx_ring_hash, MAX_ENTRY_TX_RING);
+DECLARE_HASHTABLE(xen_comm_rx_ring_hash, MAX_ENTRY_RX_RING);
 
-int hyper_dmabuf_ring_table_init()
+void xen_comm_ring_table_init()
 {
-	hash_init(hyper_dmabuf_hash_importer_ring);
-	hash_init(hyper_dmabuf_hash_exporter_ring);
-	return 0;
-}
-
-int hyper_dmabuf_ring_table_destroy()
-{
-	/* TODO: cleanup tables*/
-	return 0;
+	hash_init(xen_comm_rx_ring_hash);
+	hash_init(xen_comm_tx_ring_hash);
 }
 
-int hyper_dmabuf_register_exporter_ring(struct hyper_dmabuf_ring_info_export *ring_info)
+int xen_comm_add_tx_ring(struct xen_comm_tx_ring_info *ring_info)
 {
-	struct hyper_dmabuf_exporter_ring_info *info_entry;
+	struct xen_comm_tx_ring_info_entry *info_entry;
 
 	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
 
 	info_entry->info = ring_info;
 
-	hash_add(hyper_dmabuf_hash_exporter_ring, &info_entry->node,
+	hash_add(xen_comm_tx_ring_hash, &info_entry->node,
 		info_entry->info->rdomain);
 
 	return 0;
 }
 
-int hyper_dmabuf_register_importer_ring(struct hyper_dmabuf_ring_info_import *ring_info)
+int xen_comm_add_rx_ring(struct xen_comm_rx_ring_info *ring_info)
 {
-	struct hyper_dmabuf_importer_ring_info *info_entry;
+	struct xen_comm_rx_ring_info_entry *info_entry;
 
 	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
 
 	info_entry->info = ring_info;
 
-	hash_add(hyper_dmabuf_hash_importer_ring, &info_entry->node,
+	hash_add(xen_comm_rx_ring_hash, &info_entry->node,
 		info_entry->info->sdomain);
 
 	return 0;
 }
 
-struct hyper_dmabuf_ring_info_export *hyper_dmabuf_find_exporter_ring(int domid)
+struct xen_comm_tx_ring_info *xen_comm_find_tx_ring(int domid)
 {
-	struct hyper_dmabuf_exporter_ring_info *info_entry;
+	struct xen_comm_tx_ring_info_entry *info_entry;
 	int bkt;
 
-	hash_for_each(hyper_dmabuf_hash_exporter_ring, bkt, info_entry, node)
+	hash_for_each(xen_comm_tx_ring_hash, bkt, info_entry, node)
 		if(info_entry->info->rdomain == domid)
 			return info_entry->info;
 
 	return NULL;
 }
 
-struct hyper_dmabuf_ring_info_import *hyper_dmabuf_find_importer_ring(int domid)
+struct xen_comm_rx_ring_info *xen_comm_find_rx_ring(int domid)
 {
-	struct hyper_dmabuf_importer_ring_info *info_entry;
+	struct xen_comm_rx_ring_info_entry *info_entry;
 	int bkt;
 
-	hash_for_each(hyper_dmabuf_hash_importer_ring, bkt, info_entry, node)
+	hash_for_each(xen_comm_rx_ring_hash, bkt, info_entry, node)
 		if(info_entry->info->sdomain == domid)
 			return info_entry->info;
 
 	return NULL;
 }
 
-int hyper_dmabuf_remove_exporter_ring(int domid)
+int xen_comm_remove_tx_ring(int domid)
 {
-	struct hyper_dmabuf_exporter_ring_info *info_entry;
+	struct xen_comm_tx_ring_info_entry *info_entry;
 	int bkt;
 
-	hash_for_each(hyper_dmabuf_hash_exporter_ring, bkt, info_entry, node)
+	hash_for_each(xen_comm_tx_ring_hash, bkt, info_entry, node)
 		if(info_entry->info->rdomain == domid) {
 			hash_del(&info_entry->node);
 			kfree(info_entry);
@@ -92,12 +85,12 @@ int hyper_dmabuf_remove_exporter_ring(int domid)
 	return -1;
 }
 
-int hyper_dmabuf_remove_importer_ring(int domid)
+int xen_comm_remove_rx_ring(int domid)
 {
-	struct hyper_dmabuf_importer_ring_info *info_entry;
+	struct xen_comm_rx_ring_info_entry *info_entry;
 	int bkt;
 
-	hash_for_each(hyper_dmabuf_hash_importer_ring, bkt, info_entry, node)
+	hash_for_each(xen_comm_rx_ring_hash, bkt, info_entry, node)
 		if(info_entry->info->sdomain == domid) {
 			hash_del(&info_entry->node);
 			kfree(info_entry);
@@ -107,24 +100,26 @@ int hyper_dmabuf_remove_importer_ring(int domid)
 	return -1;
 }
 
-void hyper_dmabuf_foreach_exporter_ring(void (*func)(int rdom))
+void xen_comm_foreach_tx_ring(void (*func)(int domid))
 {
-	struct hyper_dmabuf_exporter_ring_info *info_entry;
+	struct xen_comm_tx_ring_info_entry *info_entry;
 	struct hlist_node *tmp;
 	int bkt;
 
-	hash_for_each_safe(hyper_dmabuf_hash_exporter_ring, bkt, tmp, info_entry, node) {
+	hash_for_each_safe(xen_comm_tx_ring_hash, bkt, tmp,
+			   info_entry, node) {
 		func(info_entry->info->rdomain);
 	}
 }
 
-void hyper_dmabuf_foreach_importer_ring(void (*func)(int sdom))
+void xen_comm_foreach_rx_ring(void (*func)(int domid))
 {
-	struct hyper_dmabuf_importer_ring_info *info_entry;
+	struct xen_comm_rx_ring_info_entry *info_entry;
 	struct hlist_node *tmp;
 	int bkt;
 
-	hash_for_each_safe(hyper_dmabuf_hash_importer_ring, bkt, tmp, info_entry, node) {
+	hash_for_each_safe(xen_comm_rx_ring_hash, bkt, tmp,
+			   info_entry, node) {
 		func(info_entry->info->sdomain);
 	}
 }
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
index fd1958c..18b3afd 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
@@ -2,40 +2,38 @@
 #define __HYPER_DMABUF_XEN_COMM_LIST_H__
 
 /* number of bits to be used for tx ring hash table */
-#define MAX_ENTRY_EXPORT_RING 7
+#define MAX_ENTRY_TX_RING 7
 /* number of bits to be used for rx ring hash table */
-#define MAX_ENTRY_IMPORT_RING 7
+#define MAX_ENTRY_RX_RING 7
 
-struct hyper_dmabuf_exporter_ring_info {
-        struct hyper_dmabuf_ring_info_export *info;
+struct xen_comm_tx_ring_info_entry {
+        struct xen_comm_tx_ring_info *info;
         struct hlist_node node;
 };
 
-struct hyper_dmabuf_importer_ring_info {
-        struct hyper_dmabuf_ring_info_import *info;
+struct xen_comm_rx_ring_info_entry {
+        struct xen_comm_rx_ring_info *info;
         struct hlist_node node;
 };
 
-int hyper_dmabuf_ring_table_init(void);
+void xen_comm_ring_table_init(void);
 
-int hyper_dmabuf_ring_table_destroy(void);
+int xen_comm_add_tx_ring(struct xen_comm_tx_ring_info *ring_info);
 
-int hyper_dmabuf_register_exporter_ring(struct hyper_dmabuf_ring_info_export *ring_info);
+int xen_comm_add_rx_ring(struct xen_comm_rx_ring_info *ring_info);
 
-int hyper_dmabuf_register_importer_ring(struct hyper_dmabuf_ring_info_import *ring_info);
+int xen_comm_remove_tx_ring(int domid);
 
-struct hyper_dmabuf_ring_info_export *hyper_dmabuf_find_exporter_ring(int domid);
+int xen_comm_remove_rx_ring(int domid);
 
-struct hyper_dmabuf_ring_info_import *hyper_dmabuf_find_importer_ring(int domid);
+struct xen_comm_tx_ring_info *xen_comm_find_tx_ring(int domid);
 
-int hyper_dmabuf_remove_exporter_ring(int domid);
-
-int hyper_dmabuf_remove_importer_ring(int domid);
+struct xen_comm_rx_ring_info *xen_comm_find_rx_ring(int domid);
 
 /* iterates over all exporter rings and calls provided function for each of them */
-void hyper_dmabuf_foreach_exporter_ring(void (*func)(int rdom));
+void xen_comm_foreach_tx_ring(void (*func)(int domid));
 
 /* iterates over all importer rings and calls provided function for each of them */
-void hyper_dmabuf_foreach_importer_ring(void (*func)(int sdom));
+void xen_comm_foreach_rx_ring(void (*func)(int domid));
 
 #endif // __HYPER_DMABUF_XEN_COMM_LIST_H__
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
new file mode 100644
index 0000000..e7b871a
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
@@ -0,0 +1,22 @@
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/module.h>
+#include <xen/grant_table.h>
+#include "../hyper_dmabuf_msg.h"
+#include "../hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_xen_drv.h"
+#include "hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_xen_shm.h"
+
+struct hyper_dmabuf_backend_ops xen_backend_ops = {
+	.get_vm_id = hyper_dmabuf_get_domid,
+	.share_pages = hyper_dmabuf_xen_share_pages,
+	.unshare_pages = hyper_dmabuf_xen_unshare_pages,
+	.map_shared_pages = (void *)hyper_dmabuf_xen_map_shared_pages,
+	.unmap_shared_pages = hyper_dmabuf_xen_unmap_shared_pages,
+	.init_comm_env = hyper_dmabuf_xen_init_comm_env,
+	.destroy_comm = hyper_dmabuf_xen_destroy_comm,
+	.init_rx_ch = hyper_dmabuf_xen_init_rx_rbuf,
+	.init_tx_ch = hyper_dmabuf_xen_init_tx_rbuf,
+	.send_req = hyper_dmabuf_xen_send_req,
+};
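The core driver reaches this table through hyper_dmabuf_private.backend_ops;
one plausible hook-up at module init time (illustrative only -- the actual
wiring belongs in hyper_dmabuf_drv.c, outside this hunk):

	hyper_dmabuf_private.backend_ops = &xen_backend_ops;
	ret = hyper_dmabuf_private.backend_ops->init_comm_env();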
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
new file mode 100644
index 0000000..e351c08
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
@@ -0,0 +1,20 @@
+#ifndef __HYPER_DMABUF_XEN_DRV_H__
+#define __HYPER_DMABUF_XEN_DRV_H__
+#include <xen/interface/grant_table.h>
+
+extern struct hyper_dmabuf_backend_ops xen_backend_ops;
+
+/* Main purpose of this structure is to keep
+ * all references created or acquired for sharing
+ * pages with another domain for freeing those later
+ * when unsharing.
+ */
+struct xen_shared_pages_info {
+        grant_ref_t lvl3_gref; /* top level refid */
+        grant_ref_t *lvl3_table; /* page of top level addressing, it contains refids of 2nd level pages */
+        grant_ref_t *lvl2_table; /* table of 2nd level pages, that contains refids to data pages */
+        struct gnttab_unmap_grant_ref* unmap_ops; /* unmap ops for mapped pages */
+        struct page **data_pages; /* data pages to be unmapped */
+};
+
+#endif // __HYPER_DMABUF_XEN_DRV_H__
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
new file mode 100644
index 0000000..c0045d4
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
@@ -0,0 +1,356 @@
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <xen/grant_table.h>
+#include <asm/xen/page.h>
+#include "hyper_dmabuf_xen_drv.h"
+
+#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
+
+/*
+ * Creates 2 level page directory structure for referencing shared pages.
+ * Top level page is a single page that contains up to 1024 refids that
+ * point to 2nd level pages.
+ * Each 2nd level page contains up to 1024 refids that point to shared
+ * data pages.
+ * There will always be one top level page, and the number of 2nd level
+ * pages depends on the number of shared data pages.
+ *
+ *      3rd level page                2nd level pages            Data pages
+ * +-------------------------+   ┌>+--------------------+ ┌--->+------------+
+ * |2nd level page 0 refid   |---┘ |Data page 0 refid   |-┘    |Data page 0 |
+ * |2nd level page 1 refid   |---┐ |Data page 1 refid   |-┐    +------------+
+ * |           ...           |   | |     ....           | |
+ * |2nd level page 1023 refid|-┐ | |Data page 1023 refid| └--->+------------+
+ * +-------------------------+ | | +--------------------+      |Data page 1 |
+ *                             | |                             +------------+
+ *                             | └>+--------------------+
+ *                             |   |Data page 1024 refid|
+ *                             |   |Data page 1025 refid|
+ *                             |   |       ...          |
+ *                             |   |Data page 2047 refid|
+ *                             |   +--------------------+
+ *                             |
+ *                             |        .....
+ *                             └-->+-----------------------+
+ *                                 |Data page 1047552 refid|
+ *                                 |Data page 1047553 refid|
+ *                                 |       ...             |
+ *                                 |Data page 1048575 refid|-->+------------------+
+ *                                 +-----------------------+   |Data page 1048575 |
+ *                                                             +------------------+
+ *
+ * Using such a 2-level structure it is possible to reference up to 4GB of
+ * shared data using a single refid pointing to the top level page.
+ *
+ * Returns refid of top level page.
+ */
+int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
+				 void **refs_info)
+{
+	grant_ref_t lvl3_gref;
+	grant_ref_t *lvl2_table;
+	grant_ref_t *lvl3_table;
+
+	/*
+	 * Calculate the number of pages needed for 2nd level addressing:
+	 */
+	int n_lvl2_grefs = (nents/REFS_PER_PAGE +
+			   ((nents % REFS_PER_PAGE) ? 1: 0));
+
+	struct xen_shared_pages_info *sh_pages_info;
+	int i;
+
+	lvl3_table = (grant_ref_t *)__get_free_pages(GFP_KERNEL, 0);
+	lvl2_table = (grant_ref_t *)__get_free_pages(GFP_KERNEL, get_order(n_lvl2_grefs * PAGE_SIZE));
+
+	sh_pages_info = kmalloc(sizeof(*sh_pages_info), GFP_KERNEL);
+	*refs_info = (void *)sh_pages_info;
+
+	/* share data pages in rw mode */
+	for (i=0; i<nents; i++) {
+		lvl2_table[i] = gnttab_grant_foreign_access(domid,
+							    pfn_to_mfn(page_to_pfn(pages[i])),
+							    0);
+	}
+
+	/* Share 2nd level addressing pages in readonly mode */
+	for (i=0; i< n_lvl2_grefs; i++) {
+		lvl3_table[i] = gnttab_grant_foreign_access(domid,
+							   virt_to_mfn((unsigned long)lvl2_table+i*PAGE_SIZE ),
+							   1);
+	}
+
+	/* Share lvl3_table in readonly mode */
+	lvl3_gref = gnttab_grant_foreign_access(domid,
+						virt_to_mfn((unsigned long)lvl3_table),
+						1);
+
+
+	/* Store lvl3_table page to be freed later */
+	sh_pages_info->lvl3_table = lvl3_table;
+
+	/* Store lvl2_table pages to be freed later */
+	sh_pages_info->lvl2_table = lvl2_table;
+
+	/* Store exported pages refid to be unshared later */
+	sh_pages_info->lvl3_gref = lvl3_gref;
+
+	return lvl3_gref;
+}
+
+int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents) {
+	struct xen_shared_pages_info *sh_pages_info;
+	int n_lvl2_grefs = (nents/REFS_PER_PAGE + ((nents % REFS_PER_PAGE) ? 1: 0));
+	int i;
+
+	sh_pages_info = (struct xen_shared_pages_info *)(*refs_info);
+
+	if (sh_pages_info->lvl3_table == NULL ||
+	    sh_pages_info->lvl2_table ==  NULL ||
+	    sh_pages_info->lvl3_gref == -1) {
+		printk("gref table for hyper_dmabuf already cleaned up\n");
+		return 0;
+	}
+
+	/* End foreign access for data pages, but do not free them */
+	for (i = 0; i < nents; i++) {
+		if (gnttab_query_foreign_access(sh_pages_info->lvl2_table[i])) {
+			printk("refid not shared !!\n");
+		}
+		gnttab_end_foreign_access_ref(sh_pages_info->lvl2_table[i], 0);
+		gnttab_free_grant_reference(sh_pages_info->lvl2_table[i]);
+	}
+
+	/* End foreign access for 2nd level addressing pages */
+	for (i = 0; i < n_lvl2_grefs; i++) {
+		if (gnttab_query_foreign_access(sh_pages_info->lvl3_table[i])) {
+			printk("refid not shared !!\n");
+		}
+		if (!gnttab_end_foreign_access_ref(sh_pages_info->lvl3_table[i], 1)) {
+			printk("refid still in use!!!\n");
+		}
+		gnttab_free_grant_reference(sh_pages_info->lvl3_table[i]);
+	}
+
+	/* End foreign access for top level addressing page */
+	if (gnttab_query_foreign_access(sh_pages_info->lvl3_gref)) {
+		printk("gref not shared !!\n");
+	}
+
+	gnttab_end_foreign_access_ref(sh_pages_info->lvl3_gref, 1);
+	gnttab_free_grant_reference(sh_pages_info->lvl3_gref);
+
+	/* freeing all pages used for 2 level addressing */
+	free_pages((unsigned long)sh_pages_info->lvl2_table, get_order(n_lvl2_grefs * PAGE_SIZE));
+	free_pages((unsigned long)sh_pages_info->lvl3_table, 0);
+
+	sh_pages_info->lvl3_gref = -1;
+	sh_pages_info->lvl2_table = NULL;
+	sh_pages_info->lvl3_table = NULL;
+	kfree(sh_pages_info);
+	sh_pages_info = NULL;
+
+	return 0;
+}
+
+/*
+ * Maps the provided top level refid and then returns an array of pages containing the data refs.
+ */
+struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int nents, void **refs_info)
+{
+	struct page *lvl3_table_page;
+	struct page **lvl2_table_pages;
+	struct page **data_pages;
+	struct xen_shared_pages_info *sh_pages_info;
+
+	grant_ref_t *lvl3_table;
+	grant_ref_t *lvl2_table;
+
+	struct gnttab_map_grant_ref lvl3_map_ops;
+	struct gnttab_unmap_grant_ref lvl3_unmap_ops;
+
+	struct gnttab_map_grant_ref *lvl2_map_ops;
+	struct gnttab_unmap_grant_ref *lvl2_unmap_ops;
+
+	struct gnttab_map_grant_ref *data_map_ops;
+	struct gnttab_unmap_grant_ref *data_unmap_ops;
+
+	int nents_last = nents % REFS_PER_PAGE;
+	int n_lvl2_grefs = (nents / REFS_PER_PAGE) + ((nents_last > 0) ? 1 : 0);
+	int i, j, k;
+
+	sh_pages_info = kmalloc(sizeof(*sh_pages_info), GFP_KERNEL);
+	*refs_info = (void *) sh_pages_info;
+
+	lvl2_table_pages = kcalloc(n_lvl2_grefs, sizeof(struct page *), GFP_KERNEL);
+	data_pages = kcalloc(nents, sizeof(struct page *), GFP_KERNEL);
+
+	lvl2_map_ops = kcalloc(n_lvl2_grefs, sizeof(*lvl2_map_ops), GFP_KERNEL);
+	lvl2_unmap_ops = kcalloc(n_lvl2_grefs, sizeof(*lvl2_unmap_ops), GFP_KERNEL);
+
+	data_map_ops = kcalloc(nents, sizeof(*data_map_ops), GFP_KERNEL);
+	data_unmap_ops = kcalloc(nents, sizeof(*data_unmap_ops), GFP_KERNEL);
+
+	/* Map top level addressing page */
+	if (gnttab_alloc_pages(1, &lvl3_table_page)) {
+		printk("Cannot allocate pages\n");
+		return NULL;
+	}
+
+	lvl3_table = (grant_ref_t *)pfn_to_kaddr(page_to_pfn(lvl3_table_page));
+
+	gnttab_set_map_op(&lvl3_map_ops, (unsigned long)lvl3_table, GNTMAP_host_map | GNTMAP_readonly,
+			  (grant_ref_t)lvl3_gref, domid);
+
+	gnttab_set_unmap_op(&lvl3_unmap_ops, (unsigned long)lvl3_table, GNTMAP_host_map | GNTMAP_readonly, -1);
+
+	if (gnttab_map_refs(&lvl3_map_ops, NULL, &lvl3_table_page, 1)) {
+		printk("\nxen: dom0: HYPERVISOR map grant ref failed");
+		return NULL;
+	}
+
+	if (lvl3_map_ops.status) {
+		printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d",
+			lvl3_map_ops.status);
+		return NULL;
+	} else {
+		lvl3_unmap_ops.handle = lvl3_map_ops.handle;
+	}
+
+	/* Map all second level pages */
+	if (gnttab_alloc_pages(n_lvl2_grefs, lvl2_table_pages)) {
+		printk("Cannot allocate pages\n");
+		return NULL;
+	}
+
+	for (i = 0; i < n_lvl2_grefs; i++) {
+		lvl2_table = (grant_ref_t *)pfn_to_kaddr(page_to_pfn(lvl2_table_pages[i]));
+		gnttab_set_map_op(&lvl2_map_ops[i], (unsigned long)lvl2_table, GNTMAP_host_map | GNTMAP_readonly,
+				  lvl3_table[i], domid);
+		gnttab_set_unmap_op(&lvl2_unmap_ops[i], (unsigned long)lvl2_table, GNTMAP_host_map | GNTMAP_readonly, -1);
+	}
+
+	/* Unmap top level page, as it won't be needed any longer */
+	if (gnttab_unmap_refs(&lvl3_unmap_ops, NULL, &lvl3_table_page, 1)) {
+		printk("\nxen: cannot unmap top level page\n");
+		return NULL;
+	}
+
+	if (gnttab_map_refs(lvl2_map_ops, NULL, lvl2_table_pages, n_lvl2_grefs)) {
+		printk("\nxen: dom0: HYPERVISOR map grant ref failed");
+		return NULL;
+	}
+
+	/* Checks if pages were mapped correctly */
+	for (i = 0; i < n_lvl2_grefs; i++) {
+		if (lvl2_map_ops[i].status) {
+			printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d",
+			       lvl2_map_ops[i].status);
+			return NULL;
+		} else {
+			lvl2_unmap_ops[i].handle = lvl2_map_ops[i].handle;
+		}
+	}
+
+	if (gnttab_alloc_pages(nents, data_pages)) {
+		printk("Cannot allocate pages\n");
+		return NULL;
+	}
+
+	k = 0;
+
+	for (i = 0; i < (nents_last ? n_lvl2_grefs - 1 : n_lvl2_grefs); i++) {
+		lvl2_table = pfn_to_kaddr(page_to_pfn(lvl2_table_pages[i]));
+		for (j = 0; j < REFS_PER_PAGE; j++) {
+			gnttab_set_map_op(&data_map_ops[k],
+					  (unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
+					  GNTMAP_host_map,
+					  lvl2_table[j], domid);
+
+			gnttab_set_unmap_op(&data_unmap_ops[k],
+					    (unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
+					    GNTMAP_host_map, -1);
+			k++;
+		}
+	}
+
+	/* for grefs in the last lvl2 table page */
+	lvl2_table = pfn_to_kaddr(page_to_pfn(lvl2_table_pages[n_lvl2_grefs - 1]));
+
+	for (j = 0; j < nents_last; j++) {
+		gnttab_set_map_op(&data_map_ops[k],
+				  (unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
+				  GNTMAP_host_map,
+				  lvl2_table[j], domid);
+
+		gnttab_set_unmap_op(&data_unmap_ops[k],
+				    (unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
+				    GNTMAP_host_map, -1);
+		k++;
+	}
+
+	if (gnttab_map_refs(data_map_ops, NULL, data_pages, nents)) {
+		printk("\nxen: dom0: HYPERVISOR map grant ref failed\n");
+		return NULL;
+	}
+
+	/* unmapping lvl2 table pages */
+	if (gnttab_unmap_refs(lvl2_unmap_ops, NULL, lvl2_table_pages,
+			      n_lvl2_grefs)) {
+		printk("Cannot unmap 2nd level refs\n");
+		return NULL;
+	}
+
+	for (i = 0; i < nents; i++) {
+		if (data_map_ops[i].status) {
+			printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d\n",
+				data_map_ops[i].status);
+			return NULL;
+		} else {
+			data_unmap_ops[i].handle = data_map_ops[i].handle;
+		}
+	}
+
+	/* store these references for unmapping in the future */
+	sh_pages_info->unmap_ops = data_unmap_ops;
+	sh_pages_info->data_pages = data_pages;
+
+	gnttab_free_pages(1, &lvl3_table_page);
+	gnttab_free_pages(n_lvl2_grefs, lvl2_table_pages);
+	kfree(lvl2_table_pages);
+	kfree(lvl2_map_ops);
+	kfree(lvl2_unmap_ops);
+	kfree(data_map_ops);
+
+	return data_pages;
+}
+
+int hyper_dmabuf_xen_unmap_shared_pages(void **refs_info, int nents) {
+	struct xen_shared_pages_info *sh_pages_info;
+
+	sh_pages_info = (struct xen_shared_pages_info *)(*refs_info);
+
+	if (sh_pages_info->unmap_ops == NULL ||
+	    sh_pages_info->data_pages == NULL) {
+		printk("Imported pages already cleaned up or buffer was not imported yet\n");
+		return 0;
+	}
+
+	if (gnttab_unmap_refs(sh_pages_info->unmap_ops, NULL,
+			      sh_pages_info->data_pages, nents) ) {
+		printk("Cannot unmap data pages\n");
+		return -EINVAL;
+	}
+
+	gnttab_free_pages(nents, sh_pages_info->data_pages);
+
+	kfree(sh_pages_info->data_pages);
+	kfree(sh_pages_info->unmap_ops);
+	sh_pages_info->unmap_ops = NULL;
+	sh_pages_info->data_pages = NULL;
+	kfree(sh_pages_info);
+	sh_pages_info = NULL;
+
+	return 0;
+}
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
new file mode 100644
index 0000000..2287804
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
@@ -0,0 +1,19 @@
+#ifndef __HYPER_DMABUF_XEN_SHM_H__
+#define __HYPER_DMABUF_XEN_SHM_H__
+
+/* Collects the grant references of all 2nd level pages into a single
+ * top level table and returns the grant reference of that top level
+ * page. */
+int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
+				 void **refs_info);
+
+int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents);
+
+/* Maps the provided top level refid and then returns an array of pages
+ * containing the data refs. */
+struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int nents,
+						void **refs_info);
+
+int hyper_dmabuf_xen_unmap_shared_pages(void **refs_info, int nents);
+
+#endif /* __HYPER_DMABUF_XEN_SHM_H__ */
-- 
2.7.4
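
To make the capacity claim in the addressing comment above concrete: with
4 KiB pages and 4-byte grant references, REFS_PER_PAGE is 1024, so one top
level page can reach 1024 second level pages, each of which can reach 1024
data pages, i.e. 1024 * 1024 * 4 KiB = 4 GiB. A standalone sketch of that
arithmetic (the constants mirror the driver's REFS_PER_PAGE; the program
itself is illustrative only and not part of the patch):

#include <stdio.h>

/* Mirrors the driver's REFS_PER_PAGE for 4 KiB pages and 32-bit
 * grant references; illustrative only. */
#define PAGE_SZ       4096UL
#define REFS_PER_PAGE (PAGE_SZ / sizeof(unsigned int))

int main(void)
{
	unsigned long nents = 2048;	/* e.g. an 8 MiB buffer */

	/* same rounding the driver uses to size the 2nd level */
	unsigned long n_lvl2 = nents / REFS_PER_PAGE +
			       ((nents % REFS_PER_PAGE) ? 1 : 0);

	unsigned long long max_bytes =
		(unsigned long long)REFS_PER_PAGE * REFS_PER_PAGE * PAGE_SZ;

	printf("2nd level pages needed: %lu\n", n_lvl2);      /* 2 */
	printf("max shareable: %llu GiB\n", max_bytes >> 30); /* 4 */
	return 0;
}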


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 17/60] hyper_dmabuf: use dynamic debug macros for logging
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

From: Mateusz Polrola <mateuszx.potrola@intel.com>

Replace printk calls with dynamic debug macros (dev_dbg, dev_info,
dev_warn, dev_err)

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        |  4 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        |  4 ++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c        | 46 +++++++++-----
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 50 +++++++++------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        | 26 +++++---
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c    | 60 ++++++++++++------
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   | 73 +++++++++++++++-------
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c    | 50 +++++++++------
 8 files changed, 206 insertions(+), 107 deletions(-)
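
For context on what this conversion buys: unlike bare printk(), the dev_*()
helpers prefix every message with the driver and device name and carry an
explicit severity, and dev_dbg() in particular compiles away entirely unless
DEBUG or CONFIG_DYNAMIC_DEBUG is enabled. With dynamic debug, individual
callsites can then be toggled at runtime. A minimal sketch of the pattern
the patch adopts (hyper_dmabuf_private.device is the driver's own device
pointer, as used throughout the diff below):

	/* Compiled out unless DEBUG or CONFIG_DYNAMIC_DEBUG is set;
	 * with dynamic debug it can be enabled per file at runtime:
	 *   echo 'file hyper_dmabuf_ioctl.c +p' \
	 *        > /sys/kernel/debug/dynamic_debug/control
	 */
	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);

	/* Always logged, with device prefix and explicit severity. */
	dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
	dev_warn(hyper_dmabuf_private.device, "dma-buf is used by importer\n");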

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index ddcc955..9d99769 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -1,6 +1,7 @@
 #include <linux/init.h>
 #include <linux/module.h>
 #include <linux/workqueue.h>
+#include <linux/device.h>
 #include "hyper_dmabuf_conf.h"
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_drv.h"
@@ -36,7 +37,8 @@ static int hyper_dmabuf_drv_init(void)
 	hyper_dmabuf_private.backend_ops = &xen_backend_ops;
 #endif
 
-	printk( KERN_NOTICE "initializing database for imported/exported dmabufs\n");
+	dev_info(hyper_dmabuf_private.device,
+		 "initializing database for imported/exported dmabufs\n");
 
 	/* device structure initialization */
 	/* currently only does work-queue initialization */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
index 03d77d7..c16e8d4 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -1,6 +1,10 @@
 #ifndef __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
 #define __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
 
+#include <linux/device.h>
+
+struct hyper_dmabuf_req;
+
 struct list_reusable_id {
 	int id;
 	struct list_head list;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
index 0f104b9..b61d29a 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
@@ -155,7 +155,7 @@ int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int fo
 	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
 
 	if (!sgt_info) {
-		printk("invalid hyper_dmabuf_id\n");
+		dev_err(hyper_dmabuf_private.device, "invalid hyper_dmabuf_id\n");
 		return -EINVAL;
 	}
 
@@ -168,7 +168,7 @@ int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int fo
 	    !list_empty(&sgt_info->va_vmapped->list) ||
 	    !list_empty(&sgt_info->active_sgts->list) ||
 	    !list_empty(&sgt_info->active_attached->list))) {
-		printk("dma-buf is used by importer\n");
+		dev_warn(hyper_dmabuf_private.device, "dma-buf is used by importer\n");
 		return -EPERM;
 	}
 
@@ -273,7 +273,8 @@ static int hyper_dmabuf_ops_attach(struct dma_buf* dmabuf, struct device* dev,
 						 HYPER_DMABUF_OPS_ATTACH);
 
 	if (ret < 0) {
-		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 		return ret;
 	}
 
@@ -294,7 +295,8 @@ static void hyper_dmabuf_ops_detach(struct dma_buf* dmabuf, struct dma_buf_attac
 						 HYPER_DMABUF_OPS_DETACH);
 
 	if (ret < 0) {
-		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 }
 
@@ -331,7 +333,8 @@ static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachme
 	kfree(page_info);
 
 	if (ret < 0) {
-		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 
 	return st;
@@ -363,7 +366,8 @@ static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
 						HYPER_DMABUF_OPS_UNMAP);
 
 	if (ret < 0) {
-		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 }
 
@@ -403,7 +407,8 @@ static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 	}
 
 	if (ret < 0) {
-		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 
 	/*
@@ -429,7 +434,8 @@ static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf, enum dma_da
 	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
 	if (ret < 0) {
-		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 
 	return ret;
@@ -448,7 +454,8 @@ static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf, enum dma_data
 	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_END_CPU_ACCESS);
 	if (ret < 0) {
-		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 
 	return 0;
@@ -467,7 +474,8 @@ static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf, unsigned long
 	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_KMAP_ATOMIC);
 	if (ret < 0) {
-		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 
 	return NULL; /* for now NULL.. need to return the address of mapped region */
@@ -486,7 +494,8 @@ static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf, unsigned long
 	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
 	if (ret < 0) {
-		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 }
 
@@ -503,7 +512,8 @@ static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
 	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_KMAP);
 	if (ret < 0) {
-		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 
 	return NULL; /* for now NULL.. need to return the address of mapped region */
@@ -522,7 +532,8 @@ static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum,
 	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_KUNMAP);
 	if (ret < 0) {
-		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 }
 
@@ -539,7 +550,8 @@ static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf, struct vm_area_struct *
 	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_MMAP);
 	if (ret < 0) {
-		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 
 	return ret;
@@ -558,7 +570,8 @@ static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
 	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_VMAP);
 	if (ret < 0) {
-		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 
 	return NULL;
@@ -577,7 +590,8 @@ static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
 	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_VUNMAP);
 	if (ret < 0) {
-		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 }
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 70107bb..b1e0bdb 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -25,7 +25,7 @@ static int hyper_dmabuf_tx_ch_setup(void *data)
 	int ret = 0;
 
 	if (!data) {
-		printk("user data is NULL\n");
+		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
 		return -1;
 	}
 	tx_ch_attr = (struct ioctl_hyper_dmabuf_tx_ch_setup *)data;
@@ -42,7 +42,7 @@ static int hyper_dmabuf_rx_ch_setup(void *data)
 	int ret = 0;
 
 	if (!data) {
-		printk("user data is NULL\n");
+		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
 		return -1;
 	}
 
@@ -67,7 +67,7 @@ static int hyper_dmabuf_export_remote(void *data)
 	int ret = 0;
 
 	if (!data) {
-		printk("user data is NULL\n");
+		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
 		return -1;
 	}
 
@@ -76,7 +76,7 @@ static int hyper_dmabuf_export_remote(void *data)
 	dma_buf = dma_buf_get(export_remote_attr->dmabuf_fd);
 
 	if (!dma_buf) {
-		printk("Cannot get dma buf\n");
+		dev_err(hyper_dmabuf_private.device,  "Cannot get dma buf\n");
 		return -1;
 	}
 
@@ -94,7 +94,7 @@ static int hyper_dmabuf_export_remote(void *data)
 
 	attachment = dma_buf_attach(dma_buf, hyper_dmabuf_private.device);
 	if (!attachment) {
-		printk("Cannot get attachment\n");
+		dev_err(hyper_dmabuf_private.device, "Cannot get attachment\n");
 		return -1;
 	}
 
@@ -206,8 +206,10 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 	int operand;
 	int ret = 0;
 
+	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
+
 	if (!data) {
-		printk("user data is NULL\n");
+		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
 		return -EINVAL;
 	}
 
@@ -218,12 +220,15 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 	if (sgt_info == NULL) /* can't find sgt from the table */
 		return -1;
 
-	printk("%s Found buffer gref %d  off %d last len %d nents %d domain %d\n", __func__,
-		sgt_info->ref_handle, sgt_info->frst_ofst,
-		sgt_info->last_len, sgt_info->nents,
-		HYPER_DMABUF_DOM_ID(sgt_info->hyper_dmabuf_id));
+	dev_dbg(hyper_dmabuf_private.device,
+		  "%s Found buffer gref %d  off %d last len %d nents %d domain %d\n", __func__,
+		  sgt_info->ref_handle, sgt_info->frst_ofst,
+		  sgt_info->last_len, sgt_info->nents,
+		  HYPER_DMABUF_DOM_ID(sgt_info->hyper_dmabuf_id));
 
 	if (!sgt_info->sgt) {
+		dev_dbg(hyper_dmabuf_private.device,
+			"%s buffer %d pages not mapped yet\n", __func__,sgt_info->hyper_dmabuf_id);
 		data_pages = ops->map_shared_pages(sgt_info->ref_handle,
 						   HYPER_DMABUF_DOM_ID(sgt_info->hyper_dmabuf_id),
 						   sgt_info->nents,
@@ -244,7 +249,7 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 
 	if (!sgt_info->sgt || ret) {
 		kfree(req);
-		printk("Failed to create sgt or notify exporter\n");
+		dev_err(hyper_dmabuf_private.device, "Failed to create sgt or notify exporter\n");
 		return -EINVAL;
 	}
 	kfree(req);
@@ -258,6 +263,7 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 		sgt_info->num_importers++;
 	}
 
+	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
 	return ret;
 }
 
@@ -272,8 +278,10 @@ static int hyper_dmabuf_unexport(void *data)
 	struct hyper_dmabuf_req *req;
 	int ret;
 
+	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
+
 	if (!data) {
-		printk("user data is NULL\n");
+		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
 		return -EINVAL;
 	}
 
@@ -302,6 +310,8 @@ static int hyper_dmabuf_unexport(void *data)
 	/* free msg */
 	kfree(req);
 
+	dev_dbg(hyper_dmabuf_private.device,
+		"Marking buffer %d as invalid\n", unexport_attr->hyper_dmabuf_id);
 	/* no longer valid */
 	sgt_info->valid = 0;
 
@@ -312,8 +322,9 @@ static int hyper_dmabuf_unexport(void *data)
 	 * is called (importer does this only when there's no
 	 * no consumer of locally exported FDs)
 	 */
-	printk("before claning up buffer completly\n");
 	if (!sgt_info->importer_exported) {
+		dev_dbg(hyper_dmabuf_private.device,
+			"claning up buffer %d completly\n", unexport_attr->hyper_dmabuf_id);
 		hyper_dmabuf_cleanup_sgt_info(sgt_info, false);
 		hyper_dmabuf_remove_exported(unexport_attr->hyper_dmabuf_id);
 		kfree(sgt_info);
@@ -321,6 +332,7 @@ static int hyper_dmabuf_unexport(void *data)
 		store_reusable_id(unexport_attr->hyper_dmabuf_id);
 	}
 
+	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
 	return ret;
 }
 
@@ -332,7 +344,7 @@ static int hyper_dmabuf_query(void *data)
 	int ret = 0;
 
 	if (!data) {
-		printk("user data is NULL\n");
+		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
 		return -EINVAL;
 	}
 
@@ -343,7 +355,7 @@ static int hyper_dmabuf_query(void *data)
 
 	/* if dmabuf can't be found in both lists, return */
 	if (!(sgt_info && imported_sgt_info)) {
-		printk("can't find entry anywhere\n");
+		dev_err(hyper_dmabuf_private.device, "can't find entry anywhere\n");
 		return -EINVAL;
 	}
 
@@ -419,25 +431,25 @@ static long hyper_dmabuf_ioctl(struct file *filp,
 	func = ioctl->func;
 
 	if (unlikely(!func)) {
-		printk("no function\n");
+		dev_err(hyper_dmabuf_private.device, "no function\n");
 		return -EINVAL;
 	}
 
 	kdata = kmalloc(_IOC_SIZE(cmd), GFP_KERNEL);
 	if (!kdata) {
-		printk("no memory\n");
+		dev_err(hyper_dmabuf_private.device, "no memory\n");
 		return -ENOMEM;
 	}
 
 	if (copy_from_user(kdata, (void __user *)param, _IOC_SIZE(cmd)) != 0) {
-		printk("failed to copy from user arguments\n");
+		dev_err(hyper_dmabuf_private.device, "failed to copy from user arguments\n");
 		return -EFAULT;
 	}
 
 	ret = func(kdata);
 
 	if (copy_to_user((void __user *)param, kdata, _IOC_SIZE(cmd)) != 0) {
-		printk("failed to copy to user arguments\n");
+		dev_err(hyper_dmabuf_private.device, "failed to copy to user arguments\n");
 		return -EFAULT;
 	}
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index 4647115..9c38900 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -114,12 +114,12 @@ void cmd_process_work(struct work_struct *work)
 		imported_sgt_info->nents = req->operands[1];
 		imported_sgt_info->ref_handle = req->operands[4];
 
-		printk("DMABUF was exported\n");
-		printk("\thyper_dmabuf_id %d\n", req->operands[0]);
-		printk("\tnents %d\n", req->operands[1]);
-		printk("\tfirst offset %d\n", req->operands[2]);
-		printk("\tlast len %d\n", req->operands[3]);
-		printk("\tgrefid %d\n", req->operands[4]);
+		dev_dbg(hyper_dmabuf_private.device, "DMABUF was exported\n");
+		dev_dbg(hyper_dmabuf_private.device, "\thyper_dmabuf_id %d\n", req->operands[0]);
+		dev_dbg(hyper_dmabuf_private.device, "\tnents %d\n", req->operands[1]);
+		dev_dbg(hyper_dmabuf_private.device, "\tfirst offset %d\n", req->operands[2]);
+		dev_dbg(hyper_dmabuf_private.device, "\tlast len %d\n", req->operands[3]);
+		dev_dbg(hyper_dmabuf_private.device, "\tgrefid %d\n", req->operands[4]);
 
 		for (i=0; i<4; i++)
 			imported_sgt_info->private[i] = req->operands[5+i];
@@ -133,7 +133,8 @@ void cmd_process_work(struct work_struct *work)
 		sgt_info = hyper_dmabuf_find_exported(req->operands[0]);
 
 		if (!sgt_info) {
-			printk("critical err: requested sgt_info can't be found %d\n", req->operands[0]);
+			dev_err(hyper_dmabuf_private.device,
+				"critical err: requested sgt_info can't be found %d\n", req->operands[0]);
 			break;
 		}
 
@@ -163,13 +164,13 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 	int ret;
 
 	if (!req) {
-		printk("request is NULL\n");
+		dev_err(hyper_dmabuf_private.device, "request is NULL\n");
 		return -EINVAL;
 	}
 
 	if ((req->command < HYPER_DMABUF_EXPORT) ||
 		(req->command > HYPER_DMABUF_OPS_TO_SOURCE)) {
-		printk("invalid command\n");
+		dev_err(hyper_dmabuf_private.device, "invalid command\n");
 		return -EINVAL;
 	}
 
@@ -183,7 +184,8 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 		/* command : HYPER_DMABUF_NOTIFY_UNEXPORT,
 		 * operands0 : hyper_dmabuf_id
 		 */
-
+		dev_dbg(hyper_dmabuf_private.device,
+			"%s: processing HYPER_DMABUF_NOTIFY_UNEXPORT\n", __func__);
 		sgt_info = hyper_dmabuf_find_imported(req->operands[0]);
 
 		if (sgt_info) {
@@ -216,6 +218,8 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 		 * operands0 : hyper_dmabuf_id
 		 * operands1 : enum hyper_dmabuf_ops {....}
 		 */
+		dev_dbg(hyper_dmabuf_private.device,
+			"%s: HYPER_DMABUF_OPS_TO_SOURCE\n", __func__);
 		ret = hyper_dmabuf_remote_sync(req->operands[0], req->operands[1]);
 		if (ret)
 			req->status = HYPER_DMABUF_REQ_ERROR;
@@ -225,6 +229,8 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 		return req->command;
 	}
 
+	dev_dbg(hyper_dmabuf_private.device,
+		"%s: putting request to workqueue\n", __func__);
 	temp_req = kmalloc(sizeof(*temp_req), GFP_KERNEL);
 
 	memcpy(temp_req, req, sizeof(*temp_req));
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
index 0f4735c..2758915 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -41,7 +41,8 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 	sgt_info = hyper_dmabuf_find_exported(id);
 
 	if (!sgt_info) {
-		printk("dmabuf remote sync::can't find exported list\n");
+		dev_err(hyper_dmabuf_private.device,
+			"dmabuf remote sync::can't find exported list\n");
 		return -EINVAL;
 	}
 
@@ -54,7 +55,8 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 
 		if (!attachl->attach) {
 			kfree(attachl);
-			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_ATTACH\n");
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_ATTACH\n");
 			return -EINVAL;
 		}
 
@@ -63,8 +65,10 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 
 	case HYPER_DMABUF_OPS_DETACH:
 		if (list_empty(&sgt_info->active_attached->list)) {
-			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_DETACH\n");
-			printk("no more dmabuf attachment left to be detached\n");
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_DETACH\n");
+			dev_err(hyper_dmabuf_private.device,
+				"no more dmabuf attachment left to be detached\n");
 			return -EINVAL;
 		}
 
@@ -78,8 +82,10 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 
 	case HYPER_DMABUF_OPS_MAP:
 		if (list_empty(&sgt_info->active_attached->list)) {
-			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_MAP\n");
-			printk("no more dmabuf attachment left to be detached\n");
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_MAP\n");
+			dev_err(hyper_dmabuf_private.device,
+				"no more dmabuf attachment left to be detached\n");
 			return -EINVAL;
 		}
 
@@ -90,7 +96,8 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 		sgtl->sgt = dma_buf_map_attachment(attachl->attach, DMA_BIDIRECTIONAL);
 		if (!sgtl->sgt) {
 			kfree(sgtl);
-			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_MAP\n");
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_MAP\n");
 			return -EINVAL;
 		}
 		list_add(&sgtl->list, &sgt_info->active_sgts->list);
@@ -99,8 +106,10 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 	case HYPER_DMABUF_OPS_UNMAP:
 		if (list_empty(&sgt_info->active_sgts->list) ||
 		    list_empty(&sgt_info->active_attached->list)) {
-			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_UNMAP\n");
-			printk("no more SGT or attachment left to be freed\n");
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_UNMAP\n");
+			dev_err(hyper_dmabuf_private.device,
+				"no more SGT or attachment left to be freed\n");
 			return -EINVAL;
 		}
 
@@ -140,7 +149,8 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 	case HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS:
 		ret = dma_buf_begin_cpu_access(sgt_info->dma_buf, DMA_BIDIRECTIONAL);
 		if (!ret) {
-			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS\n");
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS\n");
 			ret = -EINVAL;
 		}
 		break;
@@ -148,7 +158,8 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 	case HYPER_DMABUF_OPS_END_CPU_ACCESS:
 		ret = dma_buf_end_cpu_access(sgt_info->dma_buf, DMA_BIDIRECTIONAL);
 		if (!ret) {
-			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_END_CPU_ACCESS\n");
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_END_CPU_ACCESS\n");
 			ret = -EINVAL;
 		}
 		break;
@@ -165,7 +176,8 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 
 		if (!va_kmapl->vaddr) {
 			kfree(va_kmapl);
-			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
 			return -EINVAL;
 		}
 		list_add(&va_kmapl->list, &sgt_info->va_kmapped->list);
@@ -174,15 +186,18 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 	case HYPER_DMABUF_OPS_KUNMAP_ATOMIC:
 	case HYPER_DMABUF_OPS_KUNMAP:
 		if (list_empty(&sgt_info->va_kmapped->list)) {
-			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
-			printk("no more dmabuf VA to be freed\n");
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
+			dev_err(hyper_dmabuf_private.device,
+				"no more dmabuf VA to be freed\n");
 			return -EINVAL;
 		}
 
 		va_kmapl = list_first_entry(&sgt_info->va_kmapped->list,
 					    struct kmap_vaddr_list, list);
 		if (va_kmapl->vaddr == NULL) {
-			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
 			return -EINVAL;
 		}
 
@@ -199,7 +214,8 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 	case HYPER_DMABUF_OPS_MMAP:
 		/* currently not supported: looking for a way to create
 		 * a dummy vma */
-		printk("dmabuf remote sync::sychronized mmap is not supported\n");
+		dev_warn(hyper_dmabuf_private.device,
+			 "dmabuf remote sync::sychronized mmap is not supported\n");
 		break;
 
 	case HYPER_DMABUF_OPS_VMAP:
@@ -210,7 +226,8 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 
 		if (!va_vmapl->vaddr) {
 			kfree(va_vmapl);
-			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VMAP\n");
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VMAP\n");
 			return -EINVAL;
 		}
 		list_add(&va_vmapl->list, &sgt_info->va_vmapped->list);
@@ -218,14 +235,17 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 
 	case HYPER_DMABUF_OPS_VUNMAP:
 		if (list_empty(&sgt_info->va_vmapped->list)) {
-			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VUNMAP\n");
-			printk("no more dmabuf VA to be freed\n");
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VUNMAP\n");
+			dev_err(hyper_dmabuf_private.device,
+				"no more dmabuf VA to be freed\n");
 			return -EINVAL;
 		}
 		va_vmapl = list_first_entry(&sgt_info->va_vmapped->list,
 					struct vmap_vaddr_list, list);
 		if (!va_vmapl || va_vmapl->vaddr == NULL) {
-			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VUNMAP\n");
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VUNMAP\n");
 			return -EINVAL;
 		}
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index bd37ec2..5e7a250 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -10,11 +10,14 @@
 #include <asm/xen/page.h>
 #include "hyper_dmabuf_xen_comm.h"
 #include "hyper_dmabuf_xen_comm_list.h"
+#include "../hyper_dmabuf_drv.h"
 
 static int export_req_id = 0;
 
 struct hyper_dmabuf_req req_pending = {0};
 
+extern struct hyper_dmabuf_private hyper_dmabuf_private;
+
 /* Creates entry in xen store that will keep details of all
  * exporter rings created by this domain
  */
@@ -55,14 +58,16 @@ static int xen_comm_expose_ring_details(int domid, int rdomid,
 	ret = xenbus_printf(XBT_NIL, buf, "grefid", "%d", gref);
 
 	if (ret) {
-		printk("Failed to write xenbus entry %s: %d\n", buf, ret);
+		dev_err(hyper_dmabuf_private.device,
+			"Failed to write xenbus entry %s: %d\n", buf, ret);
 		return ret;
 	}
 
 	ret = xenbus_printf(XBT_NIL, buf, "port", "%d", port);
 
 	if (ret) {
-		printk("Failed to write xenbus entry %s: %d\n", buf, ret);
+		dev_err(hyper_dmabuf_private.device,
+			"Failed to write xenbus entry %s: %d\n", buf, ret);
 		return ret;
 	}
 
@@ -81,14 +86,16 @@ static int xen_comm_get_ring_details(int domid, int rdomid, int *grefid, int *po
 	ret = xenbus_scanf(XBT_NIL, buf, "grefid", "%d", grefid);
 
 	if (ret <= 0) {
-		printk("Failed to read xenbus entry %s: %d\n", buf, ret);
+		dev_err(hyper_dmabuf_private.device,
+			"Failed to read xenbus entry %s: %d\n", buf, ret);
 		return ret;
 	}
 
 	ret = xenbus_scanf(XBT_NIL, buf, "port", "%d", port);
 
 	if (ret <= 0) {
-		printk("Failed to read xenbus entry %s: %d\n", buf, ret);
+		dev_err(hyper_dmabuf_private.device,
+			"Failed to read xenbus entry %s: %d\n", buf, ret);
 		return ret;
 	}
 
@@ -161,10 +168,12 @@ static void remote_dom_exporter_watch_cb(struct xenbus_watch *watch,
 					&grefid, &port);
 
 	if (ring_info && ret != 0) {
-		printk("Remote exporter closed, cleaninup importer\n");
+		dev_info(hyper_dmabuf_private.device,
+			 "Remote exporter closed, cleaninup importer\n");
 		hyper_dmabuf_xen_cleanup_rx_rbuf(rdom);
 	} else if (!ring_info && ret == 0) {
-		printk("Registering importer\n");
+		dev_info(hyper_dmabuf_private.device,
+			 "Registering importer\n");
 		hyper_dmabuf_xen_init_rx_rbuf(rdom);
 	}
 }
@@ -184,7 +193,8 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 	ring_info = xen_comm_find_tx_ring(domid);
 
 	if (ring_info) {
-		printk("tx ring ch to domid = %d already exist\ngref = %d, port = %d\n",
+		dev_info(hyper_dmabuf_private.device,
+			 "tx ring ch to domid = %d already exist\ngref = %d, port = %d\n",
 		ring_info->rdomain, ring_info->gref_ring, ring_info->port);
 		return 0;
 	}
@@ -216,7 +226,8 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 	ret = HYPERVISOR_event_channel_op(EVTCHNOP_alloc_unbound,
 					&alloc_unbound);
 	if (ret != 0) {
-		printk("Cannot allocate event channel\n");
+		dev_err(hyper_dmabuf_private.device,
+			"Cannot allocate event channel\n");
 		return -EINVAL;
 	}
 
@@ -226,7 +237,8 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 					NULL, (void*) ring_info);
 
 	if (ret < 0) {
-		printk("Failed to setup event channel\n");
+		dev_err(hyper_dmabuf_private.device,
+			"Failed to setup event channel\n");
 		close.port = alloc_unbound.port;
 		HYPERVISOR_event_channel_op(EVTCHNOP_close, &close);
 		gnttab_end_foreign_access(ring_info->gref_ring, 0,
@@ -238,7 +250,8 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 	ring_info->irq = ret;
 	ring_info->port = alloc_unbound.port;
 
-	printk("%s: allocated eventchannel gref %d  port: %d  irq: %d\n",
+	dev_dbg(hyper_dmabuf_private.device,
+		"%s: allocated eventchannel gref %d  port: %d  irq: %d\n",
 		__func__,
 		ring_info->gref_ring,
 		ring_info->port,
@@ -315,7 +328,8 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 	ring_info = xen_comm_find_rx_ring(domid);
 
 	if (ring_info) {
-		printk("rx ring ch from domid = %d already exist\n", ring_info->sdomain);
+		dev_info(hyper_dmabuf_private.device,
+			 "rx ring ch from domid = %d already exist\n", ring_info->sdomain);
 		return 0;
 	}
 
@@ -323,7 +337,8 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 					&rx_gref, &rx_port);
 
 	if (ret) {
-		printk("Domain %d has not created exporter ring for current domain\n", domid);
+		dev_err(hyper_dmabuf_private.device,
+			"Domain %d has not created exporter ring for current domain\n", domid);
 		return ret;
 	}
 
@@ -346,12 +361,12 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 
 	ret = gnttab_map_refs(map_ops, NULL, &shared_ring, 1);
 	if (ret < 0) {
-		printk("Cannot map ring\n");
+		dev_err(hyper_dmabuf_private.device, "Cannot map ring\n");
 		return -EINVAL;
 	}
 
 	if (map_ops[0].status) {
-		printk("Ring mapping failed\n");
+		dev_err(hyper_dmabuf_private.device, "Ring mapping failed\n");
 		return -EINVAL;
 	} else {
 		ring_info->unmap_op.handle = map_ops[0].handle;
@@ -372,7 +387,8 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 
 	ring_info->irq = ret;
 
-	printk("%s: bound to eventchannel port: %d  irq: %d\n", __func__,
+	dev_dbg(hyper_dmabuf_private.device,
+		"%s: bound to eventchannel port: %d  irq: %d\n", __func__,
 		rx_port,
 		ring_info->irq);
 
@@ -445,7 +461,8 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
 	/* find a ring info for the channel */
 	ring_info = xen_comm_find_tx_ring(domid);
 	if (!ring_info) {
-		printk("Can't find ring info for the channel\n");
+		dev_err(hyper_dmabuf_private.device,
+			"Can't find ring info for the channel\n");
 		return -EINVAL;
 	}
 
@@ -456,7 +473,8 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
 
 	new_req = RING_GET_REQUEST(ring, ring->req_prod_pvt);
 	if (!new_req) {
-		printk("NULL REQUEST\n");
+		dev_err(hyper_dmabuf_private.device,
+			"NULL REQUEST\n");
 		return -EIO;
 	}
 
@@ -484,7 +502,7 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
 		}
 
 		if (timeout < 0) {
-			printk("request timed-out\n");
+			dev_err(hyper_dmabuf_private.device, "request timed-out\n");
 			return -EBUSY;
 		}
 	}
@@ -508,6 +526,8 @@ static irqreturn_t back_ring_isr(int irq, void *info)
 	ring_info = (struct xen_comm_rx_ring_info *)info;
 	ring = &ring_info->ring_back;
 
+	dev_dbg(hyper_dmabuf_private.device, "%s\n", __func__);
+
 	do {
 		rc = ring->req_cons;
 		rp = ring->sring->req_prod;
@@ -558,6 +578,8 @@ static irqreturn_t front_ring_isr(int irq, void *info)
 	ring_info = (struct xen_comm_tx_ring_info *)info;
 	ring = &ring_info->ring_front;
 
+	dev_dbg(hyper_dmabuf_private.device, "%s\n", __func__);
+
 	do {
 		more_to_do = 0;
 		rp = ring->sring->rsp_prod;
@@ -576,16 +598,21 @@ static irqreturn_t front_ring_isr(int irq, void *info)
 							(struct hyper_dmabuf_req *)resp);
 
 				if (ret < 0) {
-					printk("getting error while parsing response\n");
+					dev_err(hyper_dmabuf_private.device,
+						"getting error while parsing response\n");
 				}
 			} else if (resp->status == HYPER_DMABUF_REQ_PROCESSED) {
 				/* for debugging dma_buf remote synchronization */
-				printk("original request = 0x%x\n", resp->command);
-				printk("Just got HYPER_DMABUF_REQ_PROCESSED\n");
+				dev_dbg(hyper_dmabuf_private.device,
+					"original request = 0x%x\n", resp->command);
+				dev_dbg(hyper_dmabuf_private.device,
+					"Just got HYPER_DMABUF_REQ_PROCESSED\n");
 			} else if (resp->status == HYPER_DMABUF_REQ_ERROR) {
 				/* for debugging dma_buf remote synchronization */
-				printk("original request = 0x%x\n", resp->command);
-				printk("Just got HYPER_DMABUF_REQ_ERROR\n");
+				dev_dbg(hyper_dmabuf_private.device,
+					"original request = 0x%x\n", resp->command);
+				dev_dbg(hyper_dmabuf_private.device,
+					"Just got HYPER_DMABUF_REQ_ERROR\n");
 			}
 		}
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
index c0045d4..cc9860b 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
@@ -4,9 +4,12 @@
 #include <xen/grant_table.h>
 #include <asm/xen/page.h>
 #include "hyper_dmabuf_xen_drv.h"
+#include "../hyper_dmabuf_drv.h"
 
 #define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
 
+extern struct hyper_dmabuf_private hyper_dmabuf_private;
+
 /*
  * Creates 2 level page directory structure for referencing shared pages.
  * Top level page is a single page that contains up to 1024 refids that
@@ -93,9 +96,11 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 	/* Store lvl2_table pages to be freed later */
 	sh_pages_info->lvl2_table = lvl2_table;
 
+
 	/* Store exported pages refid to be unshared later */
 	sh_pages_info->lvl3_gref = lvl3_gref;
 
+	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
 	return lvl3_gref;
 }
 
@@ -104,19 +109,21 @@ int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents) {
 	int n_lvl2_grefs = (nents/REFS_PER_PAGE + ((nents % REFS_PER_PAGE) ? 1: 0));
 	int i;
 
+	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
 	sh_pages_info = (struct xen_shared_pages_info *)(*refs_info);
 
 	if (sh_pages_info->lvl3_table == NULL ||
 	    sh_pages_info->lvl2_table ==  NULL ||
 	    sh_pages_info->lvl3_gref == -1) {
-		printk("gref table for hyper_dmabuf already cleaned up\n");
+		dev_warn(hyper_dmabuf_private.device,
+			 "gref table for hyper_dmabuf already cleaned up\n");
 		return 0;
 	}
 
 	/* End foreign access for data pages, but do not free them */
 	for (i = 0; i < nents; i++) {
 		if (gnttab_query_foreign_access(sh_pages_info->lvl2_table[i])) {
-			printk("refid not shared !!\n");
+			dev_warn(hyper_dmabuf_private.device, "refid not shared !!\n");
 		}
 		gnttab_end_foreign_access_ref(sh_pages_info->lvl2_table[i], 0);
 		gnttab_free_grant_reference(sh_pages_info->lvl2_table[i]);
@@ -125,17 +132,17 @@ int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents) {
 	/* End foreign access for 2nd level addressing pages */
 	for (i = 0; i < n_lvl2_grefs; i++) {
 		if (gnttab_query_foreign_access(sh_pages_info->lvl3_table[i])) {
-			printk("refid not shared !!\n");
+			dev_warn(hyper_dmabuf_private.device, "refid not shared !!\n");
 		}
 		if (!gnttab_end_foreign_access_ref(sh_pages_info->lvl3_table[i], 1)) {
-			printk("refid still in use!!!\n");
+			dev_warn(hyper_dmabuf_private.device, "refid still in use!!!\n");
 		}
 		gnttab_free_grant_reference(sh_pages_info->lvl3_table[i]);
 	}
 
 	/* End foreign access for top level addressing page */
 	if (gnttab_query_foreign_access(sh_pages_info->lvl3_gref)) {
-		printk("gref not shared !!\n");
+		dev_warn(hyper_dmabuf_private.device, "gref not shared !!\n");
 	}
 
 	gnttab_end_foreign_access_ref(sh_pages_info->lvl3_gref, 1);
@@ -151,6 +158,7 @@ int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents) {
 	kfree(sh_pages_info);
 	sh_pages_info = NULL;
 
+	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
 	return 0;
 }
 
@@ -180,6 +188,8 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 	int n_lvl2_grefs = (nents / REFS_PER_PAGE) + ((nents_last > 0) ? 1 : 0);
 	int i, j, k;
 
+	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
+
 	sh_pages_info = kmalloc(sizeof(*sh_pages_info), GFP_KERNEL);
 	*refs_info = (void *) sh_pages_info;
 
@@ -194,7 +204,7 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 
 	/* Map top level addressing page */
 	if (gnttab_alloc_pages(1, &lvl3_table_page)) {
-		printk("Cannot allocate pages\n");
+		dev_err(hyper_dmabuf_private.device, "Cannot allocate pages\n");
 		return NULL;
 	}
 
@@ -206,12 +216,12 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 	gnttab_set_unmap_op(&lvl3_unmap_ops, (unsigned long)lvl3_table, GNTMAP_host_map | GNTMAP_readonly, -1);
 
 	if (gnttab_map_refs(&lvl3_map_ops, NULL, &lvl3_table_page, 1)) {
-		printk("\nxen: dom0: HYPERVISOR map grant ref failed");
+		dev_err(hyper_dmabuf_private.device, "HYPERVISOR map grant ref failed");
 		return NULL;
 	}
 
 	if (lvl3_map_ops.status) {
-		printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d",
+		dev_err(hyper_dmabuf_private.device, "HYPERVISOR map grant ref failed status = %d",
 			lvl3_map_ops.status);
 		return NULL;
 	} else {
@@ -220,7 +230,7 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 
 	/* Map all second level pages */
 	if (gnttab_alloc_pages(n_lvl2_grefs, lvl2_table_pages)) {
-		printk("Cannot allocate pages\n");
+		dev_err(hyper_dmabuf_private.device, "Cannot allocate pages\n");
 		return NULL;
 	}
 
@@ -233,19 +243,19 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 
 	/* Unmap top level page, as it won't be needed any longer */
 	if (gnttab_unmap_refs(&lvl3_unmap_ops, NULL, &lvl3_table_page, 1)) {
-		printk("\nxen: cannot unmap top level page\n");
+		dev_err(hyper_dmabuf_private.device, "xen: cannot unmap top level page\n");
 		return NULL;
 	}
 
 	if (gnttab_map_refs(lvl2_map_ops, NULL, lvl2_table_pages, n_lvl2_grefs)) {
-		printk("\nxen: dom0: HYPERVISOR map grant ref failed");
+		dev_err(hyper_dmabuf_private.device, "HYPERVISOR map grant ref failed");
 		return NULL;
 	}
 
 	/* Checks if pages were mapped correctly */
 	for (i = 0; i < n_lvl2_grefs; i++) {
 		if (lvl2_map_ops[i].status) {
-			printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d",
+			dev_err(hyper_dmabuf_private.device, "HYPERVISOR map grant ref failed status = %d",
 			       lvl2_map_ops[i].status);
 			return NULL;
 		} else {
@@ -254,7 +264,7 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 	}
 
 	if (gnttab_alloc_pages(nents, data_pages)) {
-		printk("Cannot allocate pages\n");
+		dev_err(hyper_dmabuf_private.device, "Cannot allocate pages\n");
 		return NULL;
 	}
 
@@ -291,20 +301,20 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 	}
 
 	if (gnttab_map_refs(data_map_ops, NULL, data_pages, nents)) {
-		printk("\nxen: dom0: HYPERVISOR map grant ref failed\n");
+		dev_err(hyper_dmabuf_private.device, "HYPERVISOR map grant ref failed\n");
 		return NULL;
 	}
 
 	/* unmapping lvl2 table pages */
 	if (gnttab_unmap_refs(lvl2_unmap_ops, NULL, lvl2_table_pages,
 			      n_lvl2_grefs)) {
-		printk("Cannot unmap 2nd level refs\n");
+		dev_err(hyper_dmabuf_private.device, "Cannot unmap 2nd level refs\n");
 		return NULL;
 	}
 
 	for (i = 0; i < nents; i++) {
 		if (data_map_ops[i].status) {
-			printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d\n",
+			dev_err(hyper_dmabuf_private.device, "HYPERVISOR map grant ref failed status = %d\n",
 				data_map_ops[i].status);
 			return NULL;
 		} else {
@@ -323,23 +333,26 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 	kfree(lvl2_unmap_ops);
 	kfree(data_map_ops);
 
+	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
 	return data_pages;
 }
 
 int hyper_dmabuf_xen_unmap_shared_pages(void **refs_info, int nents) {
 	struct xen_shared_pages_info *sh_pages_info;
 
+	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
+
 	sh_pages_info = (struct xen_shared_pages_info *)(*refs_info);
 
 	if (sh_pages_info->unmap_ops == NULL ||
 	    sh_pages_info->data_pages == NULL) {
-		printk("Imported pages already cleaned up or buffer was not imported yet\n");
+		dev_warn(hyper_dmabuf_private.device, "Imported pages already cleaned up or buffer was not imported yet\n");
 		return 0;
 	}
 
 	if (gnttab_unmap_refs(sh_pages_info->unmap_ops, NULL,
 			      sh_pages_info->data_pages, nents) ) {
-		printk("Cannot unmap data pages\n");
+		dev_err(hyper_dmabuf_private.device, "Cannot unmap data pages\n");
 		return -EINVAL;
 	}
 
@@ -352,5 +365,6 @@ int hyper_dmabuf_xen_unmap_shared_pages(void **refs_info, int nents) {
 	kfree(sh_pages_info);
 	sh_pages_info = NULL;
 
+	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
 	return 0;
 }
-- 
2.7.4
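
A note on the grant-mapping pattern that runs through the xen_shm changes in
this series: each gnttab_set_map_op() is paired with a gnttab_set_unmap_op()
on the same kernel address, and once gnttab_map_refs() succeeds the handle
returned by the hypervisor must be copied into the unmap op so the grant can
be released later. Condensed to a single gref (the function name and error
code here are illustrative, not from the driver; assumes <xen/grant_table.h>
and <asm/xen/page.h> as in the files above):

static int map_one_gref(grant_ref_t gref, domid_t domid, struct page *page,
			struct gnttab_unmap_grant_ref *unmap)
{
	struct gnttab_map_grant_ref map;
	unsigned long addr = (unsigned long)pfn_to_kaddr(page_to_pfn(page));

	gnttab_set_map_op(&map, addr, GNTMAP_host_map, gref, domid);
	gnttab_set_unmap_op(unmap, addr, GNTMAP_host_map, -1);

	if (gnttab_map_refs(&map, NULL, &page, 1) || map.status)
		return -EFAULT;

	/* keep the handle for the eventual gnttab_unmap_refs() call */
	unmap->handle = map.handle;
	return 0;
}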

^ permalink raw reply related	[flat|nested] 160+ messages in thread

 	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
 	if (ret < 0) {
-		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 
 	return ret;
@@ -448,7 +454,8 @@ static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf, enum dma_data
 	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_END_CPU_ACCESS);
 	if (ret < 0) {
-		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 
 	return 0;
@@ -467,7 +474,8 @@ static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf, unsigned long
 	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_KMAP_ATOMIC);
 	if (ret < 0) {
-		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 
 	return NULL; /* for now NULL.. need to return the address of mapped region */
@@ -486,7 +494,8 @@ static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf, unsigned long
 	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
 	if (ret < 0) {
-		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 }
 
@@ -503,7 +512,8 @@ static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
 	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_KMAP);
 	if (ret < 0) {
-		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 
 	return NULL; /* for now NULL.. need to return the address of mapped region */
@@ -522,7 +532,8 @@ static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum,
 	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_KUNMAP);
 	if (ret < 0) {
-		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 }
 
@@ -539,7 +550,8 @@ static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf, struct vm_area_struct *
 	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_MMAP);
 	if (ret < 0) {
-		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 
 	return ret;
@@ -558,7 +570,8 @@ static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
 	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_VMAP);
 	if (ret < 0) {
-		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 
 	return NULL;
@@ -577,7 +590,8 @@ static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
 	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
 						HYPER_DMABUF_OPS_VUNMAP);
 	if (ret < 0) {
-		printk("hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 }
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 70107bb..b1e0bdb 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -25,7 +25,7 @@ static int hyper_dmabuf_tx_ch_setup(void *data)
 	int ret = 0;
 
 	if (!data) {
-		printk("user data is NULL\n");
+		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
 		return -1;
 	}
 	tx_ch_attr = (struct ioctl_hyper_dmabuf_tx_ch_setup *)data;
@@ -42,7 +42,7 @@ static int hyper_dmabuf_rx_ch_setup(void *data)
 	int ret = 0;
 
 	if (!data) {
-		printk("user data is NULL\n");
+		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
 		return -1;
 	}
 
@@ -67,7 +67,7 @@ static int hyper_dmabuf_export_remote(void *data)
 	int ret = 0;
 
 	if (!data) {
-		printk("user data is NULL\n");
+		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
 		return -1;
 	}
 
@@ -76,7 +76,7 @@ static int hyper_dmabuf_export_remote(void *data)
 	dma_buf = dma_buf_get(export_remote_attr->dmabuf_fd);
 
 	if (!dma_buf) {
-		printk("Cannot get dma buf\n");
+		dev_err(hyper_dmabuf_private.device, "Cannot get dma buf\n");
 		return -1;
 	}
 
@@ -94,7 +94,7 @@ static int hyper_dmabuf_export_remote(void *data)
 
 	attachment = dma_buf_attach(dma_buf, hyper_dmabuf_private.device);
 	if (!attachment) {
-		printk("Cannot get attachment\n");
+		dev_err(hyper_dmabuf_private.device, "Cannot get attachment\n");
 		return -1;
 	}
 
@@ -206,8 +206,10 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 	int operand;
 	int ret = 0;
 
+	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
+
 	if (!data) {
-		printk("user data is NULL\n");
+		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
 		return -EINVAL;
 	}
 
@@ -218,12 +220,15 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 	if (sgt_info == NULL) /* can't find sgt from the table */
 		return -1;
 
-	printk("%s Found buffer gref %d  off %d last len %d nents %d domain %d\n", __func__,
-		sgt_info->ref_handle, sgt_info->frst_ofst,
-		sgt_info->last_len, sgt_info->nents,
-		HYPER_DMABUF_DOM_ID(sgt_info->hyper_dmabuf_id));
+	dev_dbg(hyper_dmabuf_private.device,
+		  "%s Found buffer gref %d  off %d last len %d nents %d domain %d\n", __func__,
+		  sgt_info->ref_handle, sgt_info->frst_ofst,
+		  sgt_info->last_len, sgt_info->nents,
+		  HYPER_DMABUF_DOM_ID(sgt_info->hyper_dmabuf_id));
 
 	if (!sgt_info->sgt) {
+		dev_dbg(hyper_dmabuf_private.device,
+			"%s buffer %d pages not mapped yet\n", __func__, sgt_info->hyper_dmabuf_id);
 		data_pages = ops->map_shared_pages(sgt_info->ref_handle,
 						   HYPER_DMABUF_DOM_ID(sgt_info->hyper_dmabuf_id),
 						   sgt_info->nents,
@@ -244,7 +249,7 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 
 	if (!sgt_info->sgt || ret) {
 		kfree(req);
-		printk("Failed to create sgt or notify exporter\n");
+		dev_err(hyper_dmabuf_private.device, "Failed to create sgt or notify exporter\n");
 		return -EINVAL;
 	}
 	kfree(req);
@@ -258,6 +263,7 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 		sgt_info->num_importers++;
 	}
 
+	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
 	return ret;
 }
 
@@ -272,8 +278,10 @@ static int hyper_dmabuf_unexport(void *data)
 	struct hyper_dmabuf_req *req;
 	int ret;
 
+	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
+
 	if (!data) {
-		printk("user data is NULL\n");
+		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
 		return -EINVAL;
 	}
 
@@ -302,6 +310,8 @@ static int hyper_dmabuf_unexport(void *data)
 	/* free msg */
 	kfree(req);
 
+	dev_dbg(hyper_dmabuf_private.device,
+		"Marking buffer %d as invalid\n", unexport_attr->hyper_dmabuf_id);
 	/* no longer valid */
 	sgt_info->valid = 0;
 
@@ -312,8 +322,9 @@ static int hyper_dmabuf_unexport(void *data)
 	 * is called (importer does this only when there's no
 	 * no consumer of locally exported FDs)
 	 */
-	printk("before claning up buffer completly\n");
 	if (!sgt_info->importer_exported) {
+		dev_dbg(hyper_dmabuf_private.device,
+			"cleaning up buffer %d completely\n", unexport_attr->hyper_dmabuf_id);
 		hyper_dmabuf_cleanup_sgt_info(sgt_info, false);
 		hyper_dmabuf_remove_exported(unexport_attr->hyper_dmabuf_id);
 		kfree(sgt_info);
@@ -321,6 +332,7 @@ static int hyper_dmabuf_unexport(void *data)
 		store_reusable_id(unexport_attr->hyper_dmabuf_id);
 	}
 
+	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
 	return ret;
 }
 
@@ -332,7 +344,7 @@ static int hyper_dmabuf_query(void *data)
 	int ret = 0;
 
 	if (!data) {
-		printk("user data is NULL\n");
+		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
 		return -EINVAL;
 	}
 
@@ -343,7 +355,7 @@ static int hyper_dmabuf_query(void *data)
 
 	/* if dmabuf can't be found in both lists, return */
 	if (!(sgt_info && imported_sgt_info)) {
-		printk("can't find entry anywhere\n");
+		dev_err(hyper_dmabuf_private.device, "can't find entry anywhere\n");
 		return -EINVAL;
 	}
 
@@ -419,25 +431,25 @@ static long hyper_dmabuf_ioctl(struct file *filp,
 	func = ioctl->func;
 
 	if (unlikely(!func)) {
-		printk("no function\n");
+		dev_err(hyper_dmabuf_private.device, "no function\n");
 		return -EINVAL;
 	}
 
 	kdata = kmalloc(_IOC_SIZE(cmd), GFP_KERNEL);
 	if (!kdata) {
-		printk("no memory\n");
+		dev_err(hyper_dmabuf_private.device, "no memory\n");
 		return -ENOMEM;
 	}
 
 	if (copy_from_user(kdata, (void __user *)param, _IOC_SIZE(cmd)) != 0) {
-		printk("failed to copy from user arguments\n");
+		dev_err(hyper_dmabuf_private.device, "failed to copy from user arguments\n");
 		return -EFAULT;
 	}
 
 	ret = func(kdata);
 
 	if (copy_to_user((void __user *)param, kdata, _IOC_SIZE(cmd)) != 0) {
-		printk("failed to copy to user arguments\n");
+		dev_err(hyper_dmabuf_private.device, "failed to copy to user arguments\n");
 		return -EFAULT;
 	}
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index 4647115..9c38900 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -114,12 +114,12 @@ void cmd_process_work(struct work_struct *work)
 		imported_sgt_info->nents = req->operands[1];
 		imported_sgt_info->ref_handle = req->operands[4];
 
-		printk("DMABUF was exported\n");
-		printk("\thyper_dmabuf_id %d\n", req->operands[0]);
-		printk("\tnents %d\n", req->operands[1]);
-		printk("\tfirst offset %d\n", req->operands[2]);
-		printk("\tlast len %d\n", req->operands[3]);
-		printk("\tgrefid %d\n", req->operands[4]);
+		dev_dbg(hyper_dmabuf_private.device, "DMABUF was exported\n");
+		dev_dbg(hyper_dmabuf_private.device, "\thyper_dmabuf_id %d\n", req->operands[0]);
+		dev_dbg(hyper_dmabuf_private.device, "\tnents %d\n", req->operands[1]);
+		dev_dbg(hyper_dmabuf_private.device, "\tfirst offset %d\n", req->operands[2]);
+		dev_dbg(hyper_dmabuf_private.device, "\tlast len %d\n", req->operands[3]);
+		dev_dbg(hyper_dmabuf_private.device, "\tgrefid %d\n", req->operands[4]);
 
 		for (i=0; i<4; i++)
 			imported_sgt_info->private[i] = req->operands[5+i];
@@ -133,7 +133,8 @@ void cmd_process_work(struct work_struct *work)
 		sgt_info = hyper_dmabuf_find_exported(req->operands[0]);
 
 		if (!sgt_info) {
-			printk("critical err: requested sgt_info can't be found %d\n", req->operands[0]);
+			dev_err(hyper_dmabuf_private.device,
+				"critical err: requested sgt_info can't be found %d\n", req->operands[0]);
 			break;
 		}
 
@@ -163,13 +164,13 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 	int ret;
 
 	if (!req) {
-		printk("request is NULL\n");
+		dev_err(hyper_dmabuf_private.device, "request is NULL\n");
 		return -EINVAL;
 	}
 
 	if ((req->command < HYPER_DMABUF_EXPORT) ||
 		(req->command > HYPER_DMABUF_OPS_TO_SOURCE)) {
-		printk("invalid command\n");
+		dev_err(hyper_dmabuf_private.device, "invalid command\n");
 		return -EINVAL;
 	}
 
@@ -183,7 +184,8 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 		/* command : HYPER_DMABUF_NOTIFY_UNEXPORT,
 		 * operands0 : hyper_dmabuf_id
 		 */
-
+		dev_dbg(hyper_dmabuf_private.device,
+			"%s: processing HYPER_DMABUF_NOTIFY_UNEXPORT\n", __func__);
 		sgt_info = hyper_dmabuf_find_imported(req->operands[0]);
 
 		if (sgt_info) {
@@ -216,6 +218,8 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 		 * operands0 : hyper_dmabuf_id
 		 * operands1 : enum hyper_dmabuf_ops {....}
 		 */
+		dev_dbg(hyper_dmabuf_private.device,
+			"%s: HYPER_DMABUF_OPS_TO_SOURCE\n", __func__);
 		ret = hyper_dmabuf_remote_sync(req->operands[0], req->operands[1]);
 		if (ret)
 			req->status = HYPER_DMABUF_REQ_ERROR;
@@ -225,6 +229,8 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 		return req->command;
 	}
 
+	dev_dbg(hyper_dmabuf_private.device,
+		"%s: putting request to workqueue\n", __func__);
 	temp_req = kmalloc(sizeof(*temp_req), GFP_KERNEL);
 
 	memcpy(temp_req, req, sizeof(*temp_req));
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
index 0f4735c..2758915 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -41,7 +41,8 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 	sgt_info = hyper_dmabuf_find_exported(id);
 
 	if (!sgt_info) {
-		printk("dmabuf remote sync::can't find exported list\n");
+		dev_err(hyper_dmabuf_private.device,
+			"dmabuf remote sync::can't find exported list\n");
 		return -EINVAL;
 	}
 
@@ -54,7 +55,8 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 
 		if (!attachl->attach) {
 			kfree(attachl);
-			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_ATTACH\n");
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_ATTACH\n");
 			return -EINVAL;
 		}
 
@@ -63,8 +65,10 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 
 	case HYPER_DMABUF_OPS_DETACH:
 		if (list_empty(&sgt_info->active_attached->list)) {
-			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_DETACH\n");
-			printk("no more dmabuf attachment left to be detached\n");
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_DETACH\n");
+			dev_err(hyper_dmabuf_private.device,
+				"no more dmabuf attachment left to be detached\n");
 			return -EINVAL;
 		}
 
@@ -78,8 +82,10 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 
 	case HYPER_DMABUF_OPS_MAP:
 		if (list_empty(&sgt_info->active_attached->list)) {
-			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_MAP\n");
-			printk("no more dmabuf attachment left to be detached\n");
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_MAP\n");
+			dev_err(hyper_dmabuf_private.device,
+				"no more dmabuf attachment left to be detached\n");
 			return -EINVAL;
 		}
 
@@ -90,7 +96,8 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 		sgtl->sgt = dma_buf_map_attachment(attachl->attach, DMA_BIDIRECTIONAL);
 		if (!sgtl->sgt) {
 			kfree(sgtl);
-			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_MAP\n");
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_MAP\n");
 			return -EINVAL;
 		}
 		list_add(&sgtl->list, &sgt_info->active_sgts->list);
@@ -99,8 +106,10 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 	case HYPER_DMABUF_OPS_UNMAP:
 		if (list_empty(&sgt_info->active_sgts->list) ||
 		    list_empty(&sgt_info->active_attached->list)) {
-			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_UNMAP\n");
-			printk("no more SGT or attachment left to be freed\n");
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_UNMAP\n");
+			dev_err(hyper_dmabuf_private.device,
+				"no more SGT or attachment left to be freed\n");
 			return -EINVAL;
 		}
 
@@ -140,7 +149,8 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 	case HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS:
 		ret = dma_buf_begin_cpu_access(sgt_info->dma_buf, DMA_BIDIRECTIONAL);
 		if (!ret) {
-			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS\n");
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS\n");
 			ret = -EINVAL;
 		}
 		break;
@@ -148,7 +158,8 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 	case HYPER_DMABUF_OPS_END_CPU_ACCESS:
 		ret = dma_buf_end_cpu_access(sgt_info->dma_buf, DMA_BIDIRECTIONAL);
 		if (!ret) {
-			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_END_CPU_ACCESS\n");
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_END_CPU_ACCESS\n");
 			ret = -EINVAL;
 		}
 		break;
@@ -165,7 +176,8 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 
 		if (!va_kmapl->vaddr) {
 			kfree(va_kmapl);
-			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
 			return -EINVAL;
 		}
 		list_add(&va_kmapl->list, &sgt_info->va_kmapped->list);
@@ -174,15 +186,18 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 	case HYPER_DMABUF_OPS_KUNMAP_ATOMIC:
 	case HYPER_DMABUF_OPS_KUNMAP:
 		if (list_empty(&sgt_info->va_kmapped->list)) {
-			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
-			printk("no more dmabuf VA to be freed\n");
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
+			dev_err(hyper_dmabuf_private.device,
+				"no more dmabuf VA to be freed\n");
 			return -EINVAL;
 		}
 
 		va_kmapl = list_first_entry(&sgt_info->va_kmapped->list,
 					    struct kmap_vaddr_list, list);
 		if (va_kmapl->vaddr == NULL) {
-			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
 			return -EINVAL;
 		}
 
@@ -199,7 +214,8 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 	case HYPER_DMABUF_OPS_MMAP:
 		/* currently not supported: looking for a way to create
 		 * a dummy vma */
-		printk("dmabuf remote sync::sychronized mmap is not supported\n");
+		dev_warn(hyper_dmabuf_private.device,
+			 "dmabuf remote sync::synchronized mmap is not supported\n");
 		break;
 
 	case HYPER_DMABUF_OPS_VMAP:
@@ -210,7 +226,8 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 
 		if (!va_vmapl->vaddr) {
 			kfree(va_vmapl);
-			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VMAP\n");
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VMAP\n");
 			return -EINVAL;
 		}
 		list_add(&va_vmapl->list, &sgt_info->va_vmapped->list);
@@ -218,14 +235,17 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 
 	case HYPER_DMABUF_OPS_VUNMAP:
 		if (list_empty(&sgt_info->va_vmapped->list)) {
-			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VUNMAP\n");
-			printk("no more dmabuf VA to be freed\n");
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VUNMAP\n");
+			dev_err(hyper_dmabuf_private.device,
+				"no more dmabuf VA to be freed\n");
 			return -EINVAL;
 		}
 		va_vmapl = list_first_entry(&sgt_info->va_vmapped->list,
 					struct vmap_vaddr_list, list);
 		if (!va_vmapl || va_vmapl->vaddr == NULL) {
-			printk("dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VUNMAP\n");
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VUNMAP\n");
 			return -EINVAL;
 		}
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index bd37ec2..5e7a250 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -10,11 +10,14 @@
 #include <asm/xen/page.h>
 #include "hyper_dmabuf_xen_comm.h"
 #include "hyper_dmabuf_xen_comm_list.h"
+#include "../hyper_dmabuf_drv.h"
 
 static int export_req_id = 0;
 
 struct hyper_dmabuf_req req_pending = {0};
 
+extern struct hyper_dmabuf_private hyper_dmabuf_private;
+
 /* Creates entry in xen store that will keep details of all
  * exporter rings created by this domain
  */
@@ -55,14 +58,16 @@ static int xen_comm_expose_ring_details(int domid, int rdomid,
 	ret = xenbus_printf(XBT_NIL, buf, "grefid", "%d", gref);
 
 	if (ret) {
-		printk("Failed to write xenbus entry %s: %d\n", buf, ret);
+		dev_err(hyper_dmabuf_private.device,
+			"Failed to write xenbus entry %s: %d\n", buf, ret);
 		return ret;
 	}
 
 	ret = xenbus_printf(XBT_NIL, buf, "port", "%d", port);
 
 	if (ret) {
-		printk("Failed to write xenbus entry %s: %d\n", buf, ret);
+		dev_err(hyper_dmabuf_private.device,
+			"Failed to write xenbus entry %s: %d\n", buf, ret);
 		return ret;
 	}
 
@@ -81,14 +86,16 @@ static int xen_comm_get_ring_details(int domid, int rdomid, int *grefid, int *po
 	ret = xenbus_scanf(XBT_NIL, buf, "grefid", "%d", grefid);
 
 	if (ret <= 0) {
-		printk("Failed to read xenbus entry %s: %d\n", buf, ret);
+		dev_err(hyper_dmabuf_private.device,
+			"Failed to read xenbus entry %s: %d\n", buf, ret);
 		return ret;
 	}
 
 	ret = xenbus_scanf(XBT_NIL, buf, "port", "%d", port);
 
 	if (ret <= 0) {
-		printk("Failed to read xenbus entry %s: %d\n", buf, ret);
+		dev_err(hyper_dmabuf_private.device,
+			"Failed to read xenbus entry %s: %d\n", buf, ret);
 		return ret;
 	}
 
@@ -161,10 +168,12 @@ static void remote_dom_exporter_watch_cb(struct xenbus_watch *watch,
 					&grefid, &port);
 
 	if (ring_info && ret != 0) {
-		printk("Remote exporter closed, cleaninup importer\n");
+		dev_info(hyper_dmabuf_private.device,
+			 "Remote exporter closed, cleaning up importer\n");
 		hyper_dmabuf_xen_cleanup_rx_rbuf(rdom);
 	} else if (!ring_info && ret == 0) {
-		printk("Registering importer\n");
+		dev_info(hyper_dmabuf_private.device,
+			 "Registering importer\n");
 		hyper_dmabuf_xen_init_rx_rbuf(rdom);
 	}
 }
@@ -184,7 +193,8 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 	ring_info = xen_comm_find_tx_ring(domid);
 
 	if (ring_info) {
-		printk("tx ring ch to domid = %d already exist\ngref = %d, port = %d\n",
+		dev_info(hyper_dmabuf_private.device,
+			 "tx ring ch to domid = %d already exists\ngref = %d, port = %d\n",
 		ring_info->rdomain, ring_info->gref_ring, ring_info->port);
 		return 0;
 	}
@@ -216,7 +226,8 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 	ret = HYPERVISOR_event_channel_op(EVTCHNOP_alloc_unbound,
 					&alloc_unbound);
 	if (ret != 0) {
-		printk("Cannot allocate event channel\n");
+		dev_err(hyper_dmabuf_private.device,
+			"Cannot allocate event channel\n");
 		return -EINVAL;
 	}
 
@@ -226,7 +237,8 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 					NULL, (void*) ring_info);
 
 	if (ret < 0) {
-		printk("Failed to setup event channel\n");
+		dev_err(hyper_dmabuf_private.device,
+			"Failed to setup event channel\n");
 		close.port = alloc_unbound.port;
 		HYPERVISOR_event_channel_op(EVTCHNOP_close, &close);
 		gnttab_end_foreign_access(ring_info->gref_ring, 0,
@@ -238,7 +250,8 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 	ring_info->irq = ret;
 	ring_info->port = alloc_unbound.port;
 
-	printk("%s: allocated eventchannel gref %d  port: %d  irq: %d\n",
+	dev_dbg(hyper_dmabuf_private.device,
+		"%s: allocated eventchannel gref %d  port: %d  irq: %d\n",
 		__func__,
 		ring_info->gref_ring,
 		ring_info->port,
@@ -315,7 +328,8 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 	ring_info = xen_comm_find_rx_ring(domid);
 
 	if (ring_info) {
-		printk("rx ring ch from domid = %d already exist\n", ring_info->sdomain);
+		dev_info(hyper_dmabuf_private.device,
+			 "rx ring ch from domid = %d already exists\n", ring_info->sdomain);
 		return 0;
 	}
 
@@ -323,7 +337,8 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 					&rx_gref, &rx_port);
 
 	if (ret) {
-		printk("Domain %d has not created exporter ring for current domain\n", domid);
+		dev_err(hyper_dmabuf_private.device,
+			"Domain %d has not created exporter ring for current domain\n", domid);
 		return ret;
 	}
 
@@ -346,12 +361,12 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 
 	ret = gnttab_map_refs(map_ops, NULL, &shared_ring, 1);
 	if (ret < 0) {
-		printk("Cannot map ring\n");
+		dev_err(hyper_dmabuf_private.device, "Cannot map ring\n");
 		return -EINVAL;
 	}
 
 	if (map_ops[0].status) {
-		printk("Ring mapping failed\n");
+		dev_err(hyper_dmabuf_private.device, "Ring mapping failed\n");
 		return -EINVAL;
 	} else {
 		ring_info->unmap_op.handle = map_ops[0].handle;
@@ -372,7 +387,8 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 
 	ring_info->irq = ret;
 
-	printk("%s: bound to eventchannel port: %d  irq: %d\n", __func__,
+	dev_dbg(hyper_dmabuf_private.device,
+		"%s: bound to eventchannel port: %d  irq: %d\n", __func__,
 		rx_port,
 		ring_info->irq);
 
@@ -445,7 +461,8 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
 	/* find a ring info for the channel */
 	ring_info = xen_comm_find_tx_ring(domid);
 	if (!ring_info) {
-		printk("Can't find ring info for the channel\n");
+		dev_err(hyper_dmabuf_private.device,
+			"Can't find ring info for the channel\n");
 		return -EINVAL;
 	}
 
@@ -456,7 +473,8 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
 
 	new_req = RING_GET_REQUEST(ring, ring->req_prod_pvt);
 	if (!new_req) {
-		printk("NULL REQUEST\n");
+		dev_err(hyper_dmabuf_private.device,
+			"NULL REQUEST\n");
 		return -EIO;
 	}
 
@@ -484,7 +502,7 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
 		}
 
 		if (timeout < 0) {
-			printk("request timed-out\n");
+			dev_err(hyper_dmabuf_private.device, "request timed-out\n");
 			return -EBUSY;
 		}
 	}
@@ -508,6 +526,8 @@ static irqreturn_t back_ring_isr(int irq, void *info)
 	ring_info = (struct xen_comm_rx_ring_info *)info;
 	ring = &ring_info->ring_back;
 
+	dev_dbg(hyper_dmabuf_private.device, "%s\n", __func__);
+
 	do {
 		rc = ring->req_cons;
 		rp = ring->sring->req_prod;
@@ -558,6 +578,8 @@ static irqreturn_t front_ring_isr(int irq, void *info)
 	ring_info = (struct xen_comm_tx_ring_info *)info;
 	ring = &ring_info->ring_front;
 
+	dev_dbg(hyper_dmabuf_private.device, "%s\n", __func__);
+
 	do {
 		more_to_do = 0;
 		rp = ring->sring->rsp_prod;
@@ -576,16 +598,21 @@ static irqreturn_t front_ring_isr(int irq, void *info)
 							(struct hyper_dmabuf_req *)resp);
 
 				if (ret < 0) {
-					printk("getting error while parsing response\n");
+					dev_err(hyper_dmabuf_private.device,
+						"getting error while parsing response\n");
 				}
 			} else if (resp->status == HYPER_DMABUF_REQ_PROCESSED) {
 				/* for debugging dma_buf remote synchronization */
-				printk("original request = 0x%x\n", resp->command);
-				printk("Just got HYPER_DMABUF_REQ_PROCESSED\n");
+				dev_dbg(hyper_dmabuf_private.device,
+					"original request = 0x%x\n", resp->command);
+				dev_dbg(hyper_dmabuf_private.device,
+					"Just got HYPER_DMABUF_REQ_PROCESSED\n");
 			} else if (resp->status == HYPER_DMABUF_REQ_ERROR) {
 				/* for debugging dma_buf remote synchronization */
-				printk("original request = 0x%x\n", resp->command);
-				printk("Just got HYPER_DMABUF_REQ_ERROR\n");
+				dev_dbg(hyper_dmabuf_private.device,
+					"original request = 0x%x\n", resp->command);
+				dev_dbg(hyper_dmabuf_private.device,
+					"Just got HYPER_DMABUF_REQ_ERROR\n");
 			}
 		}
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
index c0045d4..cc9860b 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
@@ -4,9 +4,12 @@
 #include <xen/grant_table.h>
 #include <asm/xen/page.h>
 #include "hyper_dmabuf_xen_drv.h"
+#include "../hyper_dmabuf_drv.h"
 
 #define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
 
+extern struct hyper_dmabuf_private hyper_dmabuf_private;
+
 /*
  * Creates 2 level page directory structure for referencing shared pages.
  * Top level page is a single page that contains up to 1024 refids that
@@ -93,9 +96,11 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 	/* Store lvl2_table pages to be freed later */
 	sh_pages_info->lvl2_table = lvl2_table;
 
+
 	/* Store exported pages refid to be unshared later */
 	sh_pages_info->lvl3_gref = lvl3_gref;
 
+	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
 	return lvl3_gref;
 }
 
@@ -104,19 +109,21 @@ int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents) {
 	int n_lvl2_grefs = (nents/REFS_PER_PAGE + ((nents % REFS_PER_PAGE) ? 1: 0));
 	int i;
 
+	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
 	sh_pages_info = (struct xen_shared_pages_info *)(*refs_info);
 
 	if (sh_pages_info->lvl3_table == NULL ||
 	    sh_pages_info->lvl2_table ==  NULL ||
 	    sh_pages_info->lvl3_gref == -1) {
-		printk("gref table for hyper_dmabuf already cleaned up\n");
+		dev_warn(hyper_dmabuf_private.device,
+			 "gref table for hyper_dmabuf already cleaned up\n");
 		return 0;
 	}
 
 	/* End foreign access for data pages, but do not free them */
 	for (i = 0; i < nents; i++) {
 		if (gnttab_query_foreign_access(sh_pages_info->lvl2_table[i])) {
-			printk("refid not shared !!\n");
+			dev_warn(hyper_dmabuf_private.device, "refid not shared !!\n");
 		}
 		gnttab_end_foreign_access_ref(sh_pages_info->lvl2_table[i], 0);
 		gnttab_free_grant_reference(sh_pages_info->lvl2_table[i]);
@@ -125,17 +132,17 @@ int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents) {
 	/* End foreign access for 2nd level addressing pages */
 	for (i = 0; i < n_lvl2_grefs; i++) {
 		if (gnttab_query_foreign_access(sh_pages_info->lvl3_table[i])) {
-			printk("refid not shared !!\n");
+			dev_warn(hyper_dmabuf_private.device, "refid not shared !!\n");
 		}
 		if (!gnttab_end_foreign_access_ref(sh_pages_info->lvl3_table[i], 1)) {
-			printk("refid still in use!!!\n");
+			dev_warn(hyper_dmabuf_private.device, "refid still in use!!!\n");
 		}
 		gnttab_free_grant_reference(sh_pages_info->lvl3_table[i]);
 	}
 
 	/* End foreign access for top level addressing page */
 	if (gnttab_query_foreign_access(sh_pages_info->lvl3_gref)) {
-		printk("gref not shared !!\n");
+		dev_warn(hyper_dmabuf_private.device, "gref not shared !!\n");
 	}
 
 	gnttab_end_foreign_access_ref(sh_pages_info->lvl3_gref, 1);
@@ -151,6 +158,7 @@ int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents) {
 	kfree(sh_pages_info);
 	sh_pages_info = NULL;
 
+	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
 	return 0;
 }
 
@@ -180,6 +188,8 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 	int n_lvl2_grefs = (nents / REFS_PER_PAGE) + ((nents_last > 0) ? 1 : 0);
 	int i, j, k;
 
+	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
+
 	sh_pages_info = kmalloc(sizeof(*sh_pages_info), GFP_KERNEL);
 	*refs_info = (void *) sh_pages_info;
 
@@ -194,7 +204,7 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 
 	/* Map top level addressing page */
 	if (gnttab_alloc_pages(1, &lvl3_table_page)) {
-		printk("Cannot allocate pages\n");
+		dev_err(hyper_dmabuf_private.device, "Cannot allocate pages\n");
 		return NULL;
 	}
 
@@ -206,12 +216,12 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 	gnttab_set_unmap_op(&lvl3_unmap_ops, (unsigned long)lvl3_table, GNTMAP_host_map | GNTMAP_readonly, -1);
 
 	if (gnttab_map_refs(&lvl3_map_ops, NULL, &lvl3_table_page, 1)) {
-		printk("\nxen: dom0: HYPERVISOR map grant ref failed");
+		dev_err(hyper_dmabuf_private.device, "HYPERVISOR map grant ref failed");
 		return NULL;
 	}
 
 	if (lvl3_map_ops.status) {
-		printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d",
+		dev_err(hyper_dmabuf_private.device, "HYPERVISOR map grant ref failed status = %d",
 			lvl3_map_ops.status);
 		return NULL;
 	} else {
@@ -220,7 +230,7 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 
 	/* Map all second level pages */
 	if (gnttab_alloc_pages(n_lvl2_grefs, lvl2_table_pages)) {
-		printk("Cannot allocate pages\n");
+		dev_err(hyper_dmabuf_private.device, "Cannot allocate pages\n");
 		return NULL;
 	}
 
@@ -233,19 +243,19 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 
 	/* Unmap top level page, as it won't be needed any longer */
 	if (gnttab_unmap_refs(&lvl3_unmap_ops, NULL, &lvl3_table_page, 1)) {
-		printk("\xen: cannot unmap top level page\n");
+		dev_err(hyper_dmabuf_private.device, "xen: cannot unmap top level page\n");
 		return NULL;
 	}
 
 	if (gnttab_map_refs(lvl2_map_ops, NULL, lvl2_table_pages, n_lvl2_grefs)) {
-		printk("\nxen: dom0: HYPERVISOR map grant ref failed");
+		dev_err(hyper_dmabuf_private.device, "HYPERVISOR map grant ref failed");
 		return NULL;
 	}
 
 	/* Checks if pages were mapped correctly */
 	for (i = 0; i < n_lvl2_grefs; i++) {
 		if (lvl2_map_ops[i].status) {
-			printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d",
+			dev_err(hyper_dmabuf_private.device, "HYPERVISOR map grant ref failed status = %d",
 			       lvl2_map_ops[i].status);
 			return NULL;
 		} else {
@@ -254,7 +264,7 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 	}
 
 	if (gnttab_alloc_pages(nents, data_pages)) {
-		printk("Cannot allocate pages\n");
+		dev_err(hyper_dmabuf_private.device, "Cannot allocate pages\n");
 		return NULL;
 	}
 
@@ -291,20 +301,20 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 	}
 
 	if (gnttab_map_refs(data_map_ops, NULL, data_pages, nents)) {
-		printk("\nxen: dom0: HYPERVISOR map grant ref failed\n");
+		dev_err(hyper_dmabuf_private.device, "HYPERVISOR map grant ref failed\n");
 		return NULL;
 	}
 
 	/* unmapping lvl2 table pages */
 	if (gnttab_unmap_refs(lvl2_unmap_ops, NULL, lvl2_table_pages,
 			      n_lvl2_grefs)) {
-		printk("Cannot unmap 2nd level refs\n");
+		dev_err(hyper_dmabuf_private.device, "Cannot unmap 2nd level refs\n");
 		return NULL;
 	}
 
 	for (i = 0; i < nents; i++) {
 		if (data_map_ops[i].status) {
-			printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d\n",
+			dev_err(hyper_dmabuf_private.device, "HYPERVISOR map grant ref failed status = %d\n",
 				data_map_ops[i].status);
 			return NULL;
 		} else {
@@ -323,23 +333,26 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 	kfree(lvl2_unmap_ops);
 	kfree(data_map_ops);
 
+	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
 	return data_pages;
 }
 
 int hyper_dmabuf_xen_unmap_shared_pages(void **refs_info, int nents) {
 	struct xen_shared_pages_info *sh_pages_info;
 
+	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
+
 	sh_pages_info = (struct xen_shared_pages_info *)(*refs_info);
 
 	if (sh_pages_info->unmap_ops == NULL ||
 	    sh_pages_info->data_pages == NULL) {
-		printk("Imported pages already cleaned up or buffer was not imported yet\n");
+		dev_warn(hyper_dmabuf_private.device, "Imported pages already cleaned up or buffer was not imported yet\n");
 		return 0;
 	}
 
 	if (gnttab_unmap_refs(sh_pages_info->unmap_ops, NULL,
 			      sh_pages_info->data_pages, nents) ) {
-		printk("Cannot unmap data pages\n");
+		dev_err(hyper_dmabuf_private.device, "Cannot unmap data pages\n");
 		return -EINVAL;
 	}
 
@@ -352,5 +365,6 @@ int hyper_dmabuf_xen_unmap_shared_pages(void **refs_info, int nents) {
 	kfree(sh_pages_info);
 	sh_pages_info = NULL;
 
+	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
 	return 0;
 }
-- 
2.7.4



* [RFC PATCH 18/60] hyper_dmabuf: reset comm channel when one end has disconnected.
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

From: Mateusz Polrola <mateuszx.potrola@intel.com>

When the exporter or importer is disconnected, the ring buffer should
be reinitialized; otherwise, on the next reconnection the
exporter/importer will receive stale requests/responses remaining in
the ring buffer, which are no longer valid.

This patch also keeps the back ring IRQ handler unregistered until the
communication channel is initialized and fully active, to prevent a
race condition.

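A sketch of the reworked rx-ring bring-up (not the exact hunks below;
error handling is elided and the IRQ name string is illustrative only):

    BACK_RING_INIT(&ring_info->ring_back, sring, PAGE_SIZE);

    /* bind the event channel without a handler first ... */
    irq = bind_interdomain_evtchn_to_irq(domid, rx_port);
    if (irq < 0)
            return -EINVAL;
    ring_info->irq = irq;

    /* ... finish the remaining channel setup ... */

    /* ... and only then make the ISR live, so it cannot race init */
    ret = request_irq(ring_info->irq, back_ring_isr, 0,
                      "hyper_dmabuf-rx" /* illustrative name */,
                      ring_info);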
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   | 24 +++++++++++++++++++---
 1 file changed, 21 insertions(+), 3 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index 5e7a250..b629032 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -282,6 +282,7 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 void hyper_dmabuf_xen_cleanup_tx_rbuf(int domid)
 {
 	struct xen_comm_tx_ring_info *ring_info;
+	struct xen_comm_rx_ring_info *rx_ring_info;
 
 	/* check if we at all have exporter ring for given rdomain */
 	ring_info = xen_comm_find_tx_ring(domid);
@@ -307,6 +308,12 @@ void hyper_dmabuf_xen_cleanup_tx_rbuf(int domid)
 				  (unsigned long) ring_info->ring_front.sring);
 
 	kfree(ring_info);
+
+	rx_ring_info = xen_comm_find_rx_ring(domid);
+	if (!rx_ring_info)
+		return;
+
+	BACK_RING_INIT(&(rx_ring_info->ring_back), rx_ring_info->ring_back.sring, PAGE_SIZE);
 }
 
 /* importer needs to know about shared page and port numbers for
@@ -378,9 +385,8 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 
 	BACK_RING_INIT(&ring_info->ring_back, sring, PAGE_SIZE);
 
-	ret = bind_interdomain_evtchn_to_irqhandler(domid, rx_port,
-						    back_ring_isr, 0,
-						    NULL, (void*)ring_info);
+	ret = bind_interdomain_evtchn_to_irq(domid, rx_port);
+
 	if (ret < 0) {
 		return -EINVAL;
 	}
@@ -399,6 +405,10 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 		ret = hyper_dmabuf_xen_init_tx_rbuf(domid);
 	}
 
+	ret = request_irq(ring_info->irq,
+			  back_ring_isr, 0,
+			  NULL, (void*)ring_info);
+
 	return ret;
 }
 
@@ -406,6 +416,7 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 void hyper_dmabuf_xen_cleanup_rx_rbuf(int domid)
 {
 	struct xen_comm_rx_ring_info *ring_info;
+	struct xen_comm_tx_ring_info *tx_ring_info;
 	struct page *shared_ring;
 
 	/* check if we have importer ring created for given sdomain */
@@ -425,6 +436,13 @@ void hyper_dmabuf_xen_cleanup_rx_rbuf(int domid)
 	gnttab_free_pages(1, &shared_ring);
 
 	kfree(ring_info);
+
+	tx_ring_info = xen_comm_find_tx_ring(domid);
+	if (!tx_ring_info)
+		return;
+
+	SHARED_RING_INIT(tx_ring_info->ring_front.sring);
+	FRONT_RING_INIT(&(tx_ring_info->ring_front), tx_ring_info->ring_front.sring, PAGE_SIZE);
 }
 
 int hyper_dmabuf_xen_init_comm_env(void)
-- 
2.7.4

* [RFC PATCH 19/60] hyper_dmabuf: fix the case with sharing a buffer with 2 pages
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

From: Mateusz Polrola <mateuszx.potrola@intel.com>

Checking whether a buffer has more than one page should be done by
evaluating nents > 1 instead of i > 1; the old check only held for
three or more pages, missing the case when nents == 2.

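A worked trace of the fixed condition (a sketch; last_len is the
length of the partial tail page):

    /*
     * nents == 2 (one full page plus a tail): after the loop that
     * fills the full pages, i == 1.
     *
     *   old check: i > 1      -> false, tail page silently dropped
     *   new check: nents > 1  -> true,  tail page added via
     *                            sg_set_page(sgl, pages[i], last_len, 0)
     */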
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
index b61d29a..9b05063 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
@@ -129,7 +129,7 @@ struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
 		sg_set_page(sgl, pages[i], PAGE_SIZE, 0);
 	}
 
-	if (i > 1) /* more than one page */ {
+	if (nents > 1) /* more than one page */ {
 		sgl = sg_next(sgl);
 		sg_set_page(sgl, pages[i], last_len, 0);
 	}
-- 
2.7.4

* [RFC PATCH 20/60] hyper_dmabuf: optimized loop with less condition check
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

Redefine nents_last to mean the number of grefs in the last page of
the lvl2 table in all cases, even when it equals REFS_PER_PAGE. With
this, the loop can be simplified with fewer condition checks.

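A worked example of the new arithmetic (REFS_PER_PAGE is
PAGE_SIZE / sizeof(grant_ref_t), i.e. 1024 for 4 KiB pages):

    /*
     * nents = 1024: nents_last = (1023 % 1024) + 1 = 1024
     *               n_lvl2_grefs = (1024 / 1024) + 1 - 1 = 1
     *
     * nents = 1025: nents_last = (1024 % 1024) + 1 = 1
     *               n_lvl2_grefs = (1025 / 1024) + 1 - 0 = 2
     *
     * The map loop now always covers n_lvl2_grefs - 1 full pages and
     * the last page always holds exactly nents_last grefs.
     */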
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
index cc9860b..cb5b86f 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
@@ -184,8 +184,10 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 	struct gnttab_map_grant_ref *data_map_ops;
 	struct gnttab_unmap_grant_ref *data_unmap_ops;
 
-	int nents_last = nents % REFS_PER_PAGE;
-	int n_lvl2_grefs = (nents / REFS_PER_PAGE) + ((nents_last > 0) ? 1 : 0);
+	/* # of grefs in the last page of lvl2 table */
+	int nents_last = (nents - 1) % REFS_PER_PAGE + 1;
+	int n_lvl2_grefs = (nents / REFS_PER_PAGE) + ((nents_last > 0) ? 1 : 0) -
+			   (nents_last == REFS_PER_PAGE);
 	int i, j, k;
 
 	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
@@ -270,7 +272,7 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 
 	k = 0;
 
-	for (i = 0; i < (nents_last ? n_lvl2_grefs - 1 : n_lvl2_grefs); i++) {
+	for (i = 0; i < n_lvl2_grefs - 1; i++) {
 		lvl2_table = pfn_to_kaddr(page_to_pfn(lvl2_table_pages[i]));
 		for (j = 0; j < REFS_PER_PAGE; j++) {
 			gnttab_set_map_op(&data_map_ops[k],
-- 
2.7.4

* [RFC PATCH 21/60] hyper_dmabuf: exposing drv information using sysfs
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel
  Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim,
	Michał Janiszewski

From: Michał Janiszewski <michal1x.janiszewski@intel.com>

This adds two sysfs entries exposing information about imported and
exported buffers. The information exposed contains details about the
number of pages, whether a buffer is valid or not, and the
importer/exporter count.

Sysfs support for hyper_dmabuf can be enabled by setting the new
config option CONFIG_HYPER_DMABUF_SYSFS to 'y'.

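A hedged usage sketch; the sysfs path is hypothetical (it depends on
where the hyper_dmabuf device node is registered), but the line format
follows the scnprintf() calls in this patch:

    /*
     * # cat <hyper_dmabuf sysfs dir>/exported
     * id:1, nents:4, v:t, ie:0
     * total nents: 4
     *
     * 'v' is the valid flag ('t'/'f'); 'ie' is importer_exported.
     * The "imported" attribute prints "numi" (num_importers) instead.
     */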
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/Kconfig             |  7 +++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c  | 12 ++++-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c  |  2 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c | 74 ++++++++++++++++++++++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h |  3 ++
 5 files changed, 96 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/Kconfig b/drivers/xen/hyper_dmabuf/Kconfig
index 75e1f96..56633a2 100644
--- a/drivers/xen/hyper_dmabuf/Kconfig
+++ b/drivers/xen/hyper_dmabuf/Kconfig
@@ -11,4 +11,11 @@ config HYPER_DMABUF_XEN
 	help
 	  Configuring hyper_dmabuf driver for XEN hypervisor
 
+config HYPER_DMABUF_SYSFS
+	bool "Enable sysfs information about hyper DMA buffers"
+	default y
+	help
+	  Expose information about imported and exported buffers using
+	  hyper_dmabuf driver
+
 endmenu
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 9d99769..3fc30e6 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -22,7 +22,7 @@ int unregister_device(void);
 struct hyper_dmabuf_private hyper_dmabuf_private;
 
 /*===============================================================================================*/
-static int hyper_dmabuf_drv_init(void)
+static int __init hyper_dmabuf_drv_init(void)
 {
 	int ret = 0;
 
@@ -51,10 +51,16 @@ static int hyper_dmabuf_drv_init(void)
 	}
 
 	ret = hyper_dmabuf_private.backend_ops->init_comm_env();
+	if (ret < 0) {
+		return -EINVAL;
+	}
 
+#ifdef CONFIG_HYPER_DMABUF_SYSFS
+	ret = hyper_dmabuf_register_sysfs(hyper_dmabuf_private.device);
 	if (ret < 0) {
 		return -EINVAL;
 	}
+#endif
 
 	/* interrupt for comm should be registered here: */
 	return ret;
@@ -63,6 +69,10 @@ static int hyper_dmabuf_drv_init(void)
 /*-----------------------------------------------------------------------------------------------*/
 static void hyper_dmabuf_drv_exit(void)
 {
+#ifdef CONFIG_HYPER_DMABUF_SYSFS
+	hyper_dmabuf_unregister_sysfs(hyper_dmabuf_private.device);
+#endif
+
 	/* hash tables for export/import entries and ring_infos */
 	hyper_dmabuf_table_destroy();
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
index 9b05063..924710f 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
@@ -24,7 +24,7 @@ int dmabuf_refcount(struct dma_buf *dma_buf)
 	return -1;
 }
 
-/* return total number of pages referecned by a sgt
+/* return total number of pages referenced by a sgt
  * for pre-calculation of # of pages behind a given sgt
  */
 static int hyper_dmabuf_get_num_pgs(struct sg_table *sgt)
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
index 18731de..1d224c4 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
@@ -11,6 +11,80 @@
 DECLARE_HASHTABLE(hyper_dmabuf_hash_imported, MAX_ENTRY_IMPORTED);
 DECLARE_HASHTABLE(hyper_dmabuf_hash_exported, MAX_ENTRY_EXPORTED);
 
+#ifdef CONFIG_HYPER_DMABUF_SYSFS
+static ssize_t hyper_dmabuf_imported_show(struct device *drv, struct device_attribute *attr, char *buf)
+{
+	struct hyper_dmabuf_info_entry_imported *info_entry;
+	int bkt;
+	ssize_t count = 0;
+	size_t total = 0;
+
+	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node) {
+		int id = info_entry->info->hyper_dmabuf_id;
+		int nents = info_entry->info->nents;
+		bool valid = info_entry->info->valid;
+		int num_importers = info_entry->info->num_importers;
+		total += nents;
+		count += scnprintf(buf + count, PAGE_SIZE - count, "id:%d, nents:%d, v:%c, numi:%d\n",
+				   id, nents, (valid ? 't' : 'f'), num_importers);
+	}
+	count += scnprintf(buf + count, PAGE_SIZE - count, "total nents: %zu\n",
+			   total);
+
+	return count;
+}
+
+static ssize_t hyper_dmabuf_exported_show(struct device *drv, struct device_attribute *attr, char *buf)
+{
+	struct hyper_dmabuf_info_entry_exported *info_entry;
+	int bkt;
+	ssize_t count = 0;
+	size_t total = 0;
+
+	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node) {
+		int id = info_entry->info->hyper_dmabuf_id;
+		int nents = info_entry->info->nents;
+		bool valid = info_entry->info->valid;
+		int importer_exported = info_entry->info->importer_exported;
+		total += nents;
+		count += scnprintf(buf + count, PAGE_SIZE - count, "id:%d, nents:%d, v:%c, ie:%d\n",
+				   id, nents, (valid ? 't' : 'f'), importer_exported);
+	}
+	count += scnprintf(buf + count, PAGE_SIZE - count, "total nents: %zu\n",
+			   total);
+
+	return count;
+}
+
+static DEVICE_ATTR(imported, S_IRUSR, hyper_dmabuf_imported_show, NULL);
+static DEVICE_ATTR(exported, S_IRUSR, hyper_dmabuf_exported_show, NULL);
+
+int hyper_dmabuf_register_sysfs(struct device *dev)
+{
+	int err;
+
+	err = device_create_file(dev, &dev_attr_imported);
+	if (err < 0)
+		goto err1;
+	err = device_create_file(dev, &dev_attr_exported);
+	if (err < 0)
+		goto err2;
+
+	return 0;
+err2:
+	device_remove_file(dev, &dev_attr_imported);
+err1:
+	return err;
+}
+
+int hyper_dmabuf_unregister_sysfs(struct device *dev)
+{
+	device_remove_file(dev, &dev_attr_imported);
+	device_remove_file(dev, &dev_attr_exported);
+	return 0;
+}
+#endif
+
 int hyper_dmabuf_table_init()
 {
 	hash_init(hyper_dmabuf_hash_imported);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
index f55d06e..a46f884 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
@@ -37,4 +37,7 @@ int hyper_dmabuf_remove_exported(int id);
 
 int hyper_dmabuf_remove_imported(int id);
 
+int hyper_dmabuf_register_sysfs(struct device *dev);
+int hyper_dmabuf_unregister_sysfs(struct device *dev);
+
 #endif // __HYPER_DMABUF_LIST_H__
-- 
2.7.4
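
As a side note, the attribute machinery this patch adds (a read-only
DEVICE_ATTR whose ->show walks a kernel hashtable with hash_for_each
and formats one line per entry with scnprintf(), bounded by
PAGE_SIZE, registered via device_create_file) can be condensed into
a self-contained demo module. This is a minimal sketch of the
pattern, not hyper_dmabuf code; all demo_* names are invented for
illustration and only the kernel APIs are real:

/* Standalone sketch of the DEVICE_ATTR + hashtable-walk pattern. */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/device.h>
#include <linux/miscdevice.h>
#include <linux/hashtable.h>

struct demo_entry {
	int id;
	int nents;
	struct hlist_node node;
};

static DEFINE_HASHTABLE(demo_hash, 4);	/* 2^4 buckets */

static struct demo_entry demo_entries[] = {
	{ .id = 1, .nents = 8 },
	{ .id = 2, .nents = 16 },
};

static ssize_t demo_show(struct device *dev,
			 struct device_attribute *attr, char *buf)
{
	struct demo_entry *e;
	ssize_t count = 0;
	int bkt;

	/* scnprintf() never writes past the space left in the page
	 * buffer and returns the number of characters stored, so the
	 * running 'count' stays valid up to PAGE_SIZE.
	 */
	hash_for_each(demo_hash, bkt, e, node)
		count += scnprintf(buf + count, PAGE_SIZE - count,
				   "id:%d, nents:%d\n", e->id, e->nents);
	return count;
}
static DEVICE_ATTR(demo, 0400, demo_show, NULL);

static struct miscdevice demo_dev = {
	.minor = MISC_DYNAMIC_MINOR,
	.name = "sysfs_demo",	/* only used for its sysfs directory */
};

static int __init demo_init(void)
{
	int i, err;

	for (i = 0; i < ARRAY_SIZE(demo_entries); i++)
		hash_add(demo_hash, &demo_entries[i].node,
			 demo_entries[i].id);

	err = misc_register(&demo_dev);
	if (err)
		return err;

	err = device_create_file(demo_dev.this_device, &dev_attr_demo);
	if (err)
		misc_deregister(&demo_dev);
	return err;
}

static void __exit demo_exit(void)
{
	device_remove_file(demo_dev.this_device, &dev_attr_demo);
	misc_deregister(&demo_dev);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");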

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 22/60] hyper_dmabuf: configure license
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

Set the license of the driver to "GPL and MIT-X dual" and the
owner to "Intel". Also attach the license terms to all source
and header files.
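
As a note on the MODULE_LICENSE tag chosen below: the kernel has no
"GPL and MIT-X dual" ident, so the patch expresses the dual license
with "GPL and additional rights"; "Dual MIT/GPL" is another
recognized tag with the same intent. A minimal sketch of the
resulting module metadata, with the values taken from the diff:

#include <linux/module.h>

/* "GPL and additional rights" = GPL plus extra (here MIT) permissions */
MODULE_LICENSE("GPL and additional rights");
MODULE_AUTHOR("Intel Corporation");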

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h       | 26 ++++++++++++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        | 32 ++++++++++++++++++++--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        | 24 ++++++++++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c         | 28 +++++++++++++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h         | 24 ++++++++++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c        | 28 +++++++++++++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h        | 24 ++++++++++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 28 +++++++++++++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h      | 24 ++++++++++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c       | 28 +++++++++++++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h       | 24 ++++++++++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        | 28 +++++++++++++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h        | 24 ++++++++++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h      | 24 ++++++++++++++++
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c    | 28 +++++++++++++++++++
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h    | 24 ++++++++++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h     | 24 ++++++++++++++++
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   | 28 +++++++++++++++++++
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h   | 24 ++++++++++++++++
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c  | 28 +++++++++++++++++++
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h  | 24 ++++++++++++++++
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c    | 28 +++++++++++++++++++
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h    | 24 ++++++++++++++++
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c    | 28 +++++++++++++++++++
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h    | 24 ++++++++++++++++
 25 files changed, 648 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
index d012b05..ee1886c 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
@@ -1 +1,27 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+/* configuration */
+
 #define CURRENT_TARGET XEN
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 3fc30e6..4e0ccdd 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -1,3 +1,31 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
 #include <linux/init.h>
 #include <linux/module.h>
 #include <linux/workqueue.h>
@@ -13,8 +41,8 @@
 extern struct hyper_dmabuf_backend_ops xen_backend_ops;
 #endif
 
-MODULE_LICENSE("GPL");
-MODULE_AUTHOR("IOTG-PED, INTEL");
+MODULE_LICENSE("GPL and additional rights");
+MODULE_AUTHOR("Intel Corporation");
 
 int register_device(void);
 int unregister_device(void);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
index c16e8d4..0b1441e 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -1,3 +1,27 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
 #ifndef __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
 #define __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
index b58a111..9b4ff45 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
@@ -1,3 +1,31 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
 #include <linux/list.h>
 #include <linux/slab.h>
 #include "hyper_dmabuf_msg.h"
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
index 2c8daf3..4394903 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
@@ -1,3 +1,27 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
 #ifndef __HYPER_DMABUF_ID_H__
 #define __HYPER_DMABUF_ID_H__
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
index 924710f..a017070 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
@@ -1,3 +1,31 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
 #include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/slab.h>
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
index a4a6d63..eda075b3 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
@@ -1,3 +1,27 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
 #ifndef __HYPER_DMABUF_IMP_H__
 #define __HYPER_DMABUF_IMP_H__
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index b1e0bdb..b0f5b5b 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -1,3 +1,31 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
 #include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/module.h>
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
index de216d3..e43a25f 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
@@ -1,3 +1,27 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
 #ifndef __LINUX_PUBLIC_HYPER_DMABUF_IOCTL_H__
 #define __LINUX_PUBLIC_HYPER_DMABUF_IOCTL_H__
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
index 1d224c4..e46ae19 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
@@ -1,3 +1,31 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
 #include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/module.h>
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
index a46f884..35dc722 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
@@ -1,3 +1,27 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
 #ifndef __HYPER_DMABUF_LIST_H__
 #define __HYPER_DMABUF_LIST_H__
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index 9c38900..b9bd6d8 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -1,3 +1,31 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
 #include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/module.h>
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
index ac4caeb..8b3c857 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -1,3 +1,27 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
 #ifndef __HYPER_DMABUF_MSG_H__
 #define __HYPER_DMABUF_MSG_H__
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
index a577167..6cf5b2d 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
@@ -1,3 +1,27 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
 #ifndef __HYPER_DMABUF_QUERY_H__
 #define __HYPER_DMABUF_QUERY_H__
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
index 2758915..4c28f11 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -1,3 +1,31 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
 #include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/module.h>
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h
index fc85fa8..71ee358 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h
@@ -1,3 +1,27 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
 #ifndef __HYPER_DMABUF_REMOTE_SYNC_H__
 #define __HYPER_DMABUF_REMOTE_SYNC_H__
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
index f053dd10..2a58218 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -1,3 +1,27 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
 #ifndef __HYPER_DMABUF_STRUCT_H__
 #define __HYPER_DMABUF_STRUCT_H__
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index b629032..14336c9 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -1,3 +1,31 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
 #include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/module.h>
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
index ba41e9d..298af08 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
@@ -1,3 +1,27 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
 #ifndef __HYPER_DMABUF_XEN_COMM_H__
 #define __HYPER_DMABUF_XEN_COMM_H__
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
index 2a1f45b..0fa2d55 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
@@ -1,3 +1,31 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
 #include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/module.h>
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
index 18b3afd..cde8ade 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
@@ -1,3 +1,27 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
 #ifndef __HYPER_DMABUF_XEN_COMM_LIST_H__
 #define __HYPER_DMABUF_XEN_COMM_LIST_H__
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
index e7b871a..6afb520 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
@@ -1,3 +1,31 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
 #include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/module.h>
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
index e351c08..c5fec24 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
@@ -1,3 +1,27 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
 #ifndef __HYPER_DMABUF_XEN_DRV_H__
 #define __HYPER_DMABUF_XEN_DRV_H__
 #include <xen/interface/grant_table.h>
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
index cb5b86f..122aac1 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
@@ -1,3 +1,31 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
 #include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/slab.h>
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
index 2287804..629ec0f 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
@@ -1,3 +1,27 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
 #ifndef __HYPER_DMABUF_XEN_SHM_H__
 #define __HYPER_DMABUF_XEN_SHM_H__
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
 #include <linux/list.h>
 #include <linux/slab.h>
 #include "hyper_dmabuf_msg.h"
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
index 2c8daf3..4394903 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
@@ -1,3 +1,27 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
 #ifndef __HYPER_DMABUF_ID_H__
 #define __HYPER_DMABUF_ID_H__
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
index 924710f..a017070 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
@@ -1,3 +1,31 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
 #include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/slab.h>
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
index a4a6d63..eda075b3 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
@@ -1,3 +1,27 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
 #ifndef __HYPER_DMABUF_IMP_H__
 #define __HYPER_DMABUF_IMP_H__
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index b1e0bdb..b0f5b5b 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -1,3 +1,31 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
 #include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/module.h>
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
index de216d3..e43a25f 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
@@ -1,3 +1,27 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
 #ifndef __LINUX_PUBLIC_HYPER_DMABUF_IOCTL_H__
 #define __LINUX_PUBLIC_HYPER_DMABUF_IOCTL_H__
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
index 1d224c4..e46ae19 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
@@ -1,3 +1,31 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
 #include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/module.h>
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
index a46f884..35dc722 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
@@ -1,3 +1,27 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
 #ifndef __HYPER_DMABUF_LIST_H__
 #define __HYPER_DMABUF_LIST_H__
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index 9c38900..b9bd6d8 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -1,3 +1,31 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
 #include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/module.h>
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
index ac4caeb..8b3c857 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -1,3 +1,27 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
 #ifndef __HYPER_DMABUF_MSG_H__
 #define __HYPER_DMABUF_MSG_H__
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
index a577167..6cf5b2d 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
@@ -1,3 +1,27 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
 #ifndef __HYPER_DMABUF_QUERY_H__
 #define __HYPER_DMABUF_QUERY_H__
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
index 2758915..4c28f11 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -1,3 +1,31 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
 #include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/module.h>
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h
index fc85fa8..71ee358 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h
@@ -1,3 +1,27 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
 #ifndef __HYPER_DMABUF_REMOTE_SYNC_H__
 #define __HYPER_DMABUF_REMOTE_SYNC_H__
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
index f053dd10..2a58218 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -1,3 +1,27 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
 #ifndef __HYPER_DMABUF_STRUCT_H__
 #define __HYPER_DMABUF_STRUCT_H__
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index b629032..14336c9 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -1,3 +1,31 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
 #include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/module.h>
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
index ba41e9d..298af08 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
@@ -1,3 +1,27 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
 #ifndef __HYPER_DMABUF_XEN_COMM_H__
 #define __HYPER_DMABUF_XEN_COMM_H__
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
index 2a1f45b..0fa2d55 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
@@ -1,3 +1,31 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
 #include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/module.h>
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
index 18b3afd..cde8ade 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
@@ -1,3 +1,27 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
 #ifndef __HYPER_DMABUF_XEN_COMM_LIST_H__
 #define __HYPER_DMABUF_XEN_COMM_LIST_H__
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
index e7b871a..6afb520 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
@@ -1,3 +1,31 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
 #include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/module.h>
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
index e351c08..c5fec24 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
@@ -1,3 +1,27 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
 #ifndef __HYPER_DMABUF_XEN_DRV_H__
 #define __HYPER_DMABUF_XEN_DRV_H__
 #include <xen/interface/grant_table.h>
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
index cb5b86f..122aac1 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
@@ -1,3 +1,31 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
 #include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/slab.h>
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
index 2287804..629ec0f 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
@@ -1,3 +1,27 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
 #ifndef __HYPER_DMABUF_XEN_SHM_H__
 #define __HYPER_DMABUF_XEN_SHM_H__
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 23/60] hyper_dmabuf: use CONFIG_HYPER_DMABUF_XEN instead of CONFIG_XEN
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

Use CONFIG_HYPER_DMABUF_XEN, rather than CONFIG_XEN, as the
configuration option that gates building hyper_dmabuf for the Xen
hypervisor.
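
The symbol itself would come from drivers/xen/hyper_dmabuf/Kconfig;
a minimal sketch of such an entry follows (the prompt text and the
HYPER_DMABUF dependency are assumptions, not taken from this series):

  config HYPER_DMABUF_XEN
          bool "Hyper_dmabuf backend for the Xen hypervisor"
          depends on HYPER_DMABUF && XEN
          help
            Build the Xen-specific backend of hyper_dmabuf, which uses
            grant tables for page sharing and event channels for
            inter-VM messaging.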

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 4e0ccdd..569b95e 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -36,7 +36,7 @@
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_id.h"
 
-#ifdef CONFIG_XEN
+#ifdef CONFIG_HYPER_DMABUF_XEN
 #include "xen/hyper_dmabuf_xen_drv.h"
 extern struct hyper_dmabuf_backend_ops xen_backend_ops;
 #endif
@@ -61,7 +61,7 @@ static int __init hyper_dmabuf_drv_init(void)
 		return -EINVAL;
 	}
 
-#ifdef CONFIG_XEN
+#ifdef CONFIG_HYPER_DMABUF_XEN
 	hyper_dmabuf_private.backend_ops = &xen_backend_ops;
 #endif
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 24/60] hyper_dmabuf: waits for resp only if WAIT_AFTER_SYNC_REQ == 1
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

hyper_dmabuf's sync_request (previously
hyper_dmabuf_sync_request_and_wait) no longer waits for the exporter's
response when WAIT_AFTER_SYNC_REQ == 0. This prevents performance
degradation caused by communication latency during indirect
hyper_DMABUF synchronization.
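
A minimal sketch of the resulting pattern (names follow the driver;
the exact body is an assumption, not a copy of this patch): the
compile-time flag is simply passed as the "wait" argument of the
backend's send_req hook.

  #define WAIT_AFTER_SYNC_REQ 1   /* 0 = fire-and-forget */

  inline int hyper_dmabuf_sync_request(int id, int dmabuf_ops)
  {
          struct hyper_dmabuf_backend_ops *ops =
                  hyper_dmabuf_private.backend_ops;
          struct hyper_dmabuf_req *req;
          int operands[2] = { id, dmabuf_ops };
          int ret;

          req = kcalloc(1, sizeof(*req), GFP_KERNEL);
          if (!req)
                  return -ENOMEM;

          hyper_dmabuf_create_request(req, HYPER_DMABUF_OPS_TO_SOURCE,
                                      &operands[0]);

          /* returns immediately unless WAIT_AFTER_SYNC_REQ is set */
          ret = ops->send_req(HYPER_DMABUF_DOM_ID(id), req,
                              WAIT_AFTER_SYNC_REQ);

          kfree(req);
          return ret;
  }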

This patch also includes the following minor changes:

1. hyper_dmabuf_free_sgt is removed. sg_free_table and kfree are now
   called directly at every site that used this helper, for
   conciseness.

2. renamed hyper_dmabuf_get_domid to hyper_dmabuf_xen_get_domid for
   consistency with other function names in the backend.

3. some minor clean-ups

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h       |  2 -
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c        | 91 +++++++++++-----------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        |  2 -
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c    |  2 +-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   | 14 ++--
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h   |  2 +-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c    |  2 +-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c    | 21 +++--
 8 files changed, 69 insertions(+), 67 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
index ee1886c..d5125f2 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
@@ -23,5 +23,3 @@
  */
 
 /* configuration */
-
-#define CURRENT_TARGET XEN
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
index a017070..d7a35fc 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
@@ -131,7 +131,7 @@ struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
 
 /* create sg_table with given pages and other parameters */
 struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
-				int frst_ofst, int last_len, int nents)
+					 int frst_ofst, int last_len, int nents)
 {
 	struct sg_table *sgt;
 	struct scatterlist *sgl;
@@ -144,7 +144,11 @@ struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
 
 	ret = sg_alloc_table(sgt, nents, GFP_KERNEL);
 	if (ret) {
-		hyper_dmabuf_free_sgt(sgt);
+		if (sgt) {
+			sg_free_table(sgt);
+			kfree(sgt);
+		}
+
 		return NULL;
 	}
 
@@ -165,15 +169,6 @@ struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
 	return sgt;
 }
 
-/* free sg_table */
-void hyper_dmabuf_free_sgt(struct sg_table* sgt)
-{
-	if (sgt) {
-		sg_free_table(sgt);
-		kfree(sgt);
-	}
-}
-
 int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int force)
 {
 	struct sgt_list *sgtl;
@@ -264,7 +259,9 @@ int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int fo
 	return 0;
 }
 
-inline int hyper_dmabuf_sync_request_and_wait(int id, int dmabuf_ops)
+#define WAIT_AFTER_SYNC_REQ 1
+
+inline int hyper_dmabuf_sync_request(int id, int dmabuf_ops)
 {
 	struct hyper_dmabuf_req *req;
 	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
@@ -279,7 +276,7 @@ inline int hyper_dmabuf_sync_request_and_wait(int id, int dmabuf_ops)
 	hyper_dmabuf_create_request(req, HYPER_DMABUF_OPS_TO_SOURCE, &operands[0]);
 
 	/* send request and wait for a response */
-	ret = ops->send_req(HYPER_DMABUF_DOM_ID(id), req, true);
+	ret = ops->send_req(HYPER_DMABUF_DOM_ID(id), req, WAIT_AFTER_SYNC_REQ);
 
 	kfree(req);
 
@@ -297,8 +294,8 @@ static int hyper_dmabuf_ops_attach(struct dma_buf* dmabuf, struct device* dev,
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
-						 HYPER_DMABUF_OPS_ATTACH);
+	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+					HYPER_DMABUF_OPS_ATTACH);
 
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
@@ -319,8 +316,8 @@ static void hyper_dmabuf_ops_detach(struct dma_buf* dmabuf, struct dma_buf_attac
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
-						 HYPER_DMABUF_OPS_DETACH);
+	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+					HYPER_DMABUF_OPS_DETACH);
 
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
@@ -354,8 +351,8 @@ static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachme
                 goto err_free_sg;
         }
 
-	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
-						HYPER_DMABUF_OPS_MAP);
+	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+					HYPER_DMABUF_OPS_MAP);
 
 	kfree(page_info->pages);
 	kfree(page_info);
@@ -390,8 +387,8 @@ static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
 	sg_free_table(sg);
 	kfree(sg);
 
-	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
-						HYPER_DMABUF_OPS_UNMAP);
+	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+					HYPER_DMABUF_OPS_UNMAP);
 
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
@@ -419,19 +416,23 @@ static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 
 	if (sgt_info->num_importers == 0) {
 		ops->unmap_shared_pages(&sgt_info->refs_info, sgt_info->nents);
-		hyper_dmabuf_free_sgt(sgt_info->sgt);
-		sgt_info->sgt = NULL;
+
+		if (sgt_info->sgt) {
+			sg_free_table(sgt_info->sgt);
+			kfree(sgt_info->sgt);
+			sgt_info->sgt = NULL;
+		}
 	}
 
 	final_release = sgt_info && !sgt_info->valid &&
 		        !sgt_info->num_importers;
 
 	if (final_release) {
-		ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
-							HYPER_DMABUF_OPS_RELEASE_FINAL);
+		ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+						HYPER_DMABUF_OPS_RELEASE_FINAL);
 	} else {
-		ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
-							HYPER_DMABUF_OPS_RELEASE);
+		ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+						HYPER_DMABUF_OPS_RELEASE);
 	}
 
 	if (ret < 0) {
@@ -459,8 +460,8 @@ static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf, enum dma_da
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
-						HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
+	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+					HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
 			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -479,8 +480,8 @@ static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf, enum dma_data
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
-						HYPER_DMABUF_OPS_END_CPU_ACCESS);
+	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+					HYPER_DMABUF_OPS_END_CPU_ACCESS);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
 			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -499,8 +500,8 @@ static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf, unsigned long
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
-						HYPER_DMABUF_OPS_KMAP_ATOMIC);
+	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+					HYPER_DMABUF_OPS_KMAP_ATOMIC);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
 			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -519,8 +520,8 @@ static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf, unsigned long
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
-						HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
+	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+					HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
 			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -537,8 +538,8 @@ static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
-						HYPER_DMABUF_OPS_KMAP);
+	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+					HYPER_DMABUF_OPS_KMAP);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
 			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -557,8 +558,8 @@ static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum,
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
-						HYPER_DMABUF_OPS_KUNMAP);
+	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+					HYPER_DMABUF_OPS_KUNMAP);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
 			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -575,8 +576,8 @@ static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf, struct vm_area_struct *
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
-						HYPER_DMABUF_OPS_MMAP);
+	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+					HYPER_DMABUF_OPS_MMAP);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
 			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -595,8 +596,8 @@ static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
-						HYPER_DMABUF_OPS_VMAP);
+	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+					HYPER_DMABUF_OPS_VMAP);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
 			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
@@ -615,8 +616,8 @@ static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request_and_wait(sgt_info->hyper_dmabuf_id,
-						HYPER_DMABUF_OPS_VUNMAP);
+	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+					HYPER_DMABUF_OPS_VUNMAP);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
 			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index b9bd6d8..c99176ac 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -39,8 +39,6 @@
 #include "hyper_dmabuf_remote_sync.h"
 #include "hyper_dmabuf_list.h"
 
-#define FORCED_UNEXPORTING 0
-
 extern struct hyper_dmabuf_private hyper_dmabuf_private;
 
 struct cmd_process {
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
index 4c28f11..f93c936 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -53,7 +53,7 @@ extern struct hyper_dmabuf_private hyper_dmabuf_private;
  * later when unmapping operations are invoked to free those.
  *
  * The very first element on the bottom of each stack holds
- * are what is created when initial exporting is issued so it
+ * is what is created when initial exporting is issued so it
  * should not be modified or released by this fuction.
  */
 int hyper_dmabuf_remote_sync(int id, int ops)
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index 14336c9..ba6b126 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -53,7 +53,7 @@ static int xen_comm_setup_data_dir(void)
 {
 	char buf[255];
 
-	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf", hyper_dmabuf_get_domid());
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf", hyper_dmabuf_xen_get_domid());
 	return xenbus_mkdir(XBT_NIL, buf, "");
 }
 
@@ -67,7 +67,7 @@ static int xen_comm_destroy_data_dir(void)
 {
 	char buf[255];
 
-	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf", hyper_dmabuf_get_domid());
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf", hyper_dmabuf_xen_get_domid());
 	return xenbus_rm(XBT_NIL, buf, "");
 }
 
@@ -130,7 +130,7 @@ static int xen_comm_get_ring_details(int domid, int rdomid, int *grefid, int *po
 	return (ret <= 0 ? 1 : 0);
 }
 
-int hyper_dmabuf_get_domid(void)
+int hyper_dmabuf_xen_get_domid(void)
 {
 	struct xenbus_transaction xbt;
 	int domid;
@@ -192,7 +192,7 @@ static void remote_dom_exporter_watch_cb(struct xenbus_watch *watch,
 	 * it means that remote domain has setup it for us and we should connect
 	 * to it.
 	 */
-	ret = xen_comm_get_ring_details(hyper_dmabuf_get_domid(), rdom,
+	ret = xen_comm_get_ring_details(hyper_dmabuf_xen_get_domid(), rdom,
 					&grefid, &port);
 
 	if (ring_info && ret != 0) {
@@ -287,7 +287,7 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 
 	ret = xen_comm_add_tx_ring(ring_info);
 
-	ret = xen_comm_expose_ring_details(hyper_dmabuf_get_domid(), domid,
+	ret = xen_comm_expose_ring_details(hyper_dmabuf_xen_get_domid(), domid,
 					   ring_info->gref_ring, ring_info->port);
 
 	/*
@@ -299,7 +299,7 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 	ring_info->watch.node = (const char*) kmalloc(sizeof(char) * 255, GFP_KERNEL);
 	sprintf((char*)ring_info->watch.node,
 		"/local/domain/%d/data/hyper_dmabuf/%d/port",
-		domid, hyper_dmabuf_get_domid());
+		domid, hyper_dmabuf_xen_get_domid());
 
 	register_xenbus_watch(&ring_info->watch);
 
@@ -368,7 +368,7 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 		return 0;
 	}
 
-	ret = xen_comm_get_ring_details(hyper_dmabuf_get_domid(), domid,
+	ret = xen_comm_get_ring_details(hyper_dmabuf_xen_get_domid(), domid,
 					&rx_gref, &rx_port);
 
 	if (ret) {
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
index 298af08..9c93165 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
@@ -50,7 +50,7 @@ struct xen_comm_rx_ring_info {
 	struct gnttab_unmap_grant_ref unmap_op;
 };
 
-int hyper_dmabuf_get_domid(void);
+int hyper_dmabuf_xen_get_domid(void);
 
 int hyper_dmabuf_xen_init_comm_env(void);
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
index 6afb520..aa4c2f5 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
@@ -37,7 +37,7 @@
 #include "hyper_dmabuf_xen_shm.h"
 
 struct hyper_dmabuf_backend_ops xen_backend_ops = {
-	.get_vm_id = hyper_dmabuf_get_domid,
+	.get_vm_id = hyper_dmabuf_xen_get_domid,
 	.share_pages = hyper_dmabuf_xen_share_pages,
 	.unshare_pages = hyper_dmabuf_xen_unshare_pages,
 	.map_shared_pages = (void *)hyper_dmabuf_xen_map_shared_pages,
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
index 122aac1..b158c11 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
@@ -108,8 +108,8 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 	/* Share 2nd level addressing pages in readonly mode*/
 	for (i=0; i< n_lvl2_grefs; i++) {
 		lvl3_table[i] = gnttab_grant_foreign_access(domid,
-							   virt_to_mfn((unsigned long)lvl2_table+i*PAGE_SIZE ),
-							   1);
+							    virt_to_mfn((unsigned long)lvl2_table+i*PAGE_SIZE ),
+							    1);
 	}
 
 	/* Share lvl3_table in readonly mode*/
@@ -240,10 +240,12 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 
 	lvl3_table = (grant_ref_t *)pfn_to_kaddr(page_to_pfn(lvl3_table_page));
 
-	gnttab_set_map_op(&lvl3_map_ops, (unsigned long)lvl3_table, GNTMAP_host_map | GNTMAP_readonly,
+	gnttab_set_map_op(&lvl3_map_ops, (unsigned long)lvl3_table,
+			  GNTMAP_host_map | GNTMAP_readonly,
 			  (grant_ref_t)lvl3_gref, domid);
 
-	gnttab_set_unmap_op(&lvl3_unmap_ops, (unsigned long)lvl3_table, GNTMAP_host_map | GNTMAP_readonly, -1);
+	gnttab_set_unmap_op(&lvl3_unmap_ops, (unsigned long)lvl3_table,
+			    GNTMAP_host_map | GNTMAP_readonly, -1);
 
 	if (gnttab_map_refs(&lvl3_map_ops, NULL, &lvl3_table_page, 1)) {
 		dev_err(hyper_dmabuf_private.device, "HYPERVISOR map grant ref failed");
@@ -285,8 +287,9 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 	/* Checks if pages were mapped correctly */
 	for (i = 0; i < n_lvl2_grefs; i++) {
 		if (lvl2_map_ops[i].status) {
-			dev_err(hyper_dmabuf_private.device, "HYPERVISOR map grant ref failed status = %d",
-			       lvl2_map_ops[i].status);
+			dev_err(hyper_dmabuf_private.device,
+				"HYPERVISOR map grant ref failed status = %d",
+				lvl2_map_ops[i].status);
 			return NULL;
 		} else {
 			lvl2_unmap_ops[i].handle = lvl2_map_ops[i].handle;
@@ -344,7 +347,8 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 
 	for (i = 0; i < nents; i++) {
 		if (data_map_ops[i].status) {
-			dev_err(hyper_dmabuf_private.device, "HYPERVISOR map grant ref failed status = %d\n",
+			dev_err(hyper_dmabuf_private.device,
+				"HYPERVISOR map grant ref failed status = %d\n",
 				data_map_ops[i].status);
 			return NULL;
 		} else {
@@ -376,7 +380,8 @@ int hyper_dmabuf_xen_unmap_shared_pages(void **refs_info, int nents) {
 
 	if (sh_pages_info->unmap_ops == NULL ||
 	    sh_pages_info->data_pages == NULL) {
-		dev_warn(hyper_dmabuf_private.device, "Imported pages already cleaned up or buffer was not imported yet\n");
+		dev_warn(hyper_dmabuf_private.device,
+			 "Imported pages already cleaned up or buffer was not imported yet\n");
 		return 0;
 	}
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 25/60] hyper_dmabuf: introduced delayed unexport
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

From: Mateusz Polrola <mateuszx.potrola@intel.com>

To prevent overhead when a DMA_BUF needs to be exported right after
it is unexported, a marginal delay is introduced into the unexporting
process. This gives the buffer a probation period: if the same
DMA_BUF is requested to be exported again while the unexport is still
pending, the pending unexport is canceled right away and the buffer
can be reused without going through the full re-exporting process.
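
The mechanism is a standard delayed_work pattern. A minimal sketch
of it outside the driver, with illustrative names (buf_entry and the
helpers below are not from this series):

#include <linux/kernel.h>
#include <linux/workqueue.h>
#include <linux/jiffies.h>

struct buf_entry {
	struct delayed_work unexport_work;
	bool unexport_scheduled;
};

static void do_unexport(struct work_struct *work)
{
	struct buf_entry *e =
		container_of(work, struct buf_entry, unexport_work.work);

	e->unexport_scheduled = false;
	/* actual tear-down (notify remote domain, free pages) here */
}

/* unexport ioctl path: arm the probation timer */
static void schedule_unexport(struct buf_entry *e, unsigned int delay_ms)
{
	if (e->unexport_scheduled)
		return;

	e->unexport_scheduled = true;
	INIT_DELAYED_WORK(&e->unexport_work, do_unexport);
	schedule_delayed_work(&e->unexport_work, msecs_to_jiffies(delay_ms));
}

/* re-export path: try to cancel a pending unexport and reuse */
static bool try_reuse(struct buf_entry *e)
{
	if (e->unexport_scheduled &&
	    cancel_delayed_work_sync(&e->unexport_work)) {
		e->unexport_scheduled = false;
		return true;	/* canceled in time, buffer reusable */
	}

	return false;		/* too late, do a full re-export */
}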

Additionally, "FIRST EXPORT" message is synchronously transmitted to
the exporter VM (importer VM waits for the response.) to make sure
the buffer is still valid (not unexported) on expoter VM's side before
importer VM starts to use it.
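
Condensed, the handshake on the importer side looks like this (a
sketch of what hyper_dmabuf_export_fd_ioctl now does, error paths
trimmed):

static int first_export_handshake(struct hyper_dmabuf_backend_ops *ops,
				  int id)
{
	struct hyper_dmabuf_req *req;
	int ret;

	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
	if (!req)
		return -ENOMEM;

	hyper_dmabuf_create_request(req, HYPER_DMABUF_FIRST_EXPORT, &id);

	/* wait == true: send_req now returns the remote status */
	ret = ops->send_req(HYPER_DMABUF_DOM_ID(id), req, true);
	kfree(req);

	if (ret < 0)
		return ret;		/* transport failure */
	if (ret == HYPER_DMABUF_REQ_ERROR)
		return -EINVAL;		/* buffer already unexported */

	return 0;			/* buffer valid, safe to import */
}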

"delayed_ms" attribute is added to unexport ioctl, used for hardcoding
delay from userspace.
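
From userspace this looks roughly as follows; the request code below
is a placeholder (the real macro and the device node name come from
this series' headers and are assumptions here):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

struct ioctl_hyper_dmabuf_unexport {
	int hyper_dmabuf_id;	/* IN: id of the buffer to unexport */
	int delay_ms;		/* IN: probation delay in ms */
	int status;		/* OUT: status of the request */
};

/* placeholder -- use the real macro from hyper_dmabuf_ioctl.h */
#define HYPER_DMABUF_UNEXPORT_SKETCH \
	_IOC(_IOC_NONE, 'G', 3, sizeof(struct ioctl_hyper_dmabuf_unexport))

int main(void)
{
	struct ioctl_hyper_dmabuf_unexport arg = {
		.hyper_dmabuf_id = 1,	/* id obtained at export time */
		.delay_ms = 100,	/* keep the buffer for 100 ms */
	};
	int fd = open("/dev/hyper_dmabuf", O_RDWR);	/* node name assumed */

	if (fd < 0 || ioctl(fd, HYPER_DMABUF_UNEXPORT_SKETCH, &arg) < 0)
		perror("hyper_dmabuf unexport");

	return 0;
}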

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c        |   4 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 157 ++++++++++++++-------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h      |   2 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        |  41 +++---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h     |   2 +
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   |   2 +
 6 files changed, 139 insertions(+), 69 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
index d7a35fc..a9bc354 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
@@ -341,6 +341,10 @@ static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachme
 	/* extract pages from sgt */
 	page_info = hyper_dmabuf_ext_pgs(sgt_info->sgt);
 
+	if (!page_info) {
+		return NULL;
+	}
+
 	/* create a new sg_table with extracted pages */
 	st = hyper_dmabuf_create_sgt(page_info->pages, page_info->frst_ofst,
 				page_info->last_len, page_info->nents);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index b0f5b5b..018de8c 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -115,11 +115,24 @@ static int hyper_dmabuf_export_remote(void *data)
 	ret = hyper_dmabuf_find_id_exported(dma_buf, export_remote_attr->remote_domain);
 	sgt_info = hyper_dmabuf_find_exported(ret);
 	if (ret != -1 && sgt_info->valid) {
+		/*
+		 * Check if unexport is already scheduled for that buffer,
+		 * if so try to cancel it. If that will fail, buffer needs
+		 * to be reexport once again.
+		 */
+		if (sgt_info->unexport_scheduled) {
+			if (!cancel_delayed_work_sync(&sgt_info->unexport_work)) {
+				dma_buf_put(dma_buf);
+				goto reexport;
+			}
+			sgt_info->unexport_scheduled = 0;
+		}
 		dma_buf_put(dma_buf);
 		export_remote_attr->hyper_dmabuf_id = ret;
 		return 0;
 	}
 
+reexport:
 	attachment = dma_buf_attach(dma_buf, hyper_dmabuf_private.device);
 	if (!attachment) {
 		dev_err(hyper_dmabuf_private.device, "Cannot get attachment\n");
@@ -133,7 +146,7 @@ static int hyper_dmabuf_export_remote(void *data)
 
 	sgt = dma_buf_map_attachment(attachment, DMA_BIDIRECTIONAL);
 
-	sgt_info = kmalloc(sizeof(*sgt_info), GFP_KERNEL);
+	sgt_info = kcalloc(1, sizeof(*sgt_info), GFP_KERNEL);
 
 	sgt_info->hyper_dmabuf_id = hyper_dmabuf_get_id();
 
@@ -141,7 +154,6 @@ static int hyper_dmabuf_export_remote(void *data)
 	sgt_info->hyper_dmabuf_rdomain = export_remote_attr->remote_domain;
 	sgt_info->dma_buf = dma_buf;
 	sgt_info->valid = 1;
-	sgt_info->importer_exported = 0;
 
 	sgt_info->active_sgts = kmalloc(sizeof(struct sgt_list), GFP_KERNEL);
 	sgt_info->active_attached = kmalloc(sizeof(struct attachment_list), GFP_KERNEL);
@@ -245,8 +257,35 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 
 	/* look for dmabuf for the id */
 	sgt_info = hyper_dmabuf_find_imported(export_fd_attr->hyper_dmabuf_id);
-	if (sgt_info == NULL) /* can't find sgt from the table */
+	if (sgt_info == NULL || !sgt_info->valid) /* can't find sgt from the table */
+		return -1;
+
+	sgt_info->num_importers++;
+
+	/* send notification for export_fd to exporter */
+	operand = sgt_info->hyper_dmabuf_id;
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+	hyper_dmabuf_create_request(req, HYPER_DMABUF_FIRST_EXPORT, &operand);
+
+	ret = ops->send_req(HYPER_DMABUF_DOM_ID(operand), req, true);
+
+	if (ret < 0) {
+		kfree(req);
+		dev_err(hyper_dmabuf_private.device, "Failed to create sgt or notify exporter\n");
+		sgt_info->num_importers--;
+		return -EINVAL;
+	}
+	kfree(req);
+
+	if (ret == HYPER_DMABUF_REQ_ERROR) {
+		dev_err(hyper_dmabuf_private.device,
+			"Buffer invalid\n");
+		sgt_info->num_importers--;
 		return -1;
+	} else {
+		dev_dbg(hyper_dmabuf_private.device, "Can import buffer\n");
+	}
 
 	dev_dbg(hyper_dmabuf_private.device,
 		  "%s Found buffer gref %d  off %d last len %d nents %d domain %d\n", __func__,
@@ -262,86 +301,62 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 						   sgt_info->nents,
 						   &sgt_info->refs_info);
 
+		if (!data_pages) {
+			sgt_info->num_importers--;
+			return -EINVAL;
+		}
+
 		sgt_info->sgt = hyper_dmabuf_create_sgt(data_pages, sgt_info->frst_ofst,
 							sgt_info->last_len, sgt_info->nents);
 
 	}
 
-	/* send notification for export_fd to exporter */
-	operand = sgt_info->hyper_dmabuf_id;
-
-	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
-	hyper_dmabuf_create_request(req, HYPER_DMABUF_FIRST_EXPORT, &operand);
-
-	ret = ops->send_req(HYPER_DMABUF_DOM_ID(operand), req, false);
-
-	if (!sgt_info->sgt || ret) {
-		kfree(req);
-		dev_err(hyper_dmabuf_private.device, "Failed to create sgt or notify exporter\n");
-		return -EINVAL;
-	}
-	kfree(req);
-
 	export_fd_attr->fd = hyper_dmabuf_export_fd(sgt_info, export_fd_attr->flags);
 
 	if (export_fd_attr->fd < 0) {
 		/* fail to get fd */
 		ret = export_fd_attr->fd;
-	} else {
-		sgt_info->num_importers++;
 	}
 
-	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
-	return ret;
+	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
+	return 0;
 }
 
 /* unexport dmabuf from the database and send int req to the source domain
  * to unmap it.
  */
-static int hyper_dmabuf_unexport(void *data)
+static void hyper_dmabuf_delayed_unexport(struct work_struct *work)
 {
-	struct ioctl_hyper_dmabuf_unexport *unexport_attr;
-	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
-	struct hyper_dmabuf_sgt_info *sgt_info;
 	struct hyper_dmabuf_req *req;
+	int hyper_dmabuf_id;
 	int ret;
+	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
+	struct hyper_dmabuf_sgt_info *sgt_info =
+		container_of(work, struct hyper_dmabuf_sgt_info, unexport_work.work);
 
-	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
-
-	if (!data) {
-		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
-		return -EINVAL;
-	}
-
-	unexport_attr = (struct ioctl_hyper_dmabuf_unexport *)data;
+	if (!sgt_info)
+		return;
 
-	/* find dmabuf in export list */
-	sgt_info = hyper_dmabuf_find_exported(unexport_attr->hyper_dmabuf_id);
+	hyper_dmabuf_id = sgt_info->hyper_dmabuf_id;
 
-	/* failed to find corresponding entry in export list */
-	if (sgt_info == NULL) {
-		unexport_attr->status = -EINVAL;
-		return -EFAULT;
-	}
+	dev_dbg(hyper_dmabuf_private.device,
+		"Marking buffer %d as invalid\n", hyper_dmabuf_id);
+	/* no longer valid */
+	sgt_info->valid = 0;
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
-	hyper_dmabuf_create_request(req, HYPER_DMABUF_NOTIFY_UNEXPORT, &unexport_attr->hyper_dmabuf_id);
+	hyper_dmabuf_create_request(req, HYPER_DMABUF_NOTIFY_UNEXPORT, &hyper_dmabuf_id);
 
 	/* Now send unexport request to remote domain, marking that buffer should not be used anymore */
 	ret = ops->send_req(sgt_info->hyper_dmabuf_rdomain, req, true);
 	if (ret < 0) {
-		kfree(req);
-		return -EFAULT;
+		dev_err(hyper_dmabuf_private.device, "unexport message for buffer %d failed\n", hyper_dmabuf_id);
 	}
 
 	/* free msg */
 	kfree(req);
-
-	dev_dbg(hyper_dmabuf_private.device,
-		"Marking buffer %d as invalid\n", unexport_attr->hyper_dmabuf_id);
-	/* no longer valid */
-	sgt_info->valid = 0;
+	sgt_info->unexport_scheduled = 0;
 
 	/*
 	 * Immediately clean-up if it has never been exported by importer
@@ -352,16 +367,52 @@ static int hyper_dmabuf_unexport(void *data)
 	 */
 	if (!sgt_info->importer_exported) {
 		dev_dbg(hyper_dmabuf_private.device,
-			"claning up buffer %d completly\n", unexport_attr->hyper_dmabuf_id);
+			"claning up buffer %d completly\n", hyper_dmabuf_id);
 		hyper_dmabuf_cleanup_sgt_info(sgt_info, false);
-		hyper_dmabuf_remove_exported(unexport_attr->hyper_dmabuf_id);
+		hyper_dmabuf_remove_exported(hyper_dmabuf_id);
 		kfree(sgt_info);
 		/* register hyper_dmabuf_id to the list for reuse */
-		store_reusable_id(unexport_attr->hyper_dmabuf_id);
+		store_reusable_id(hyper_dmabuf_id);
 	}
+}
+
+/* Schedules unexport of dmabuf.
+ */
+static int hyper_dmabuf_unexport(void *data)
+{
+	struct ioctl_hyper_dmabuf_unexport *unexport_attr;
+	struct hyper_dmabuf_sgt_info *sgt_info;
 
 	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
-	return ret;
+
+	if (!data) {
+		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
+		return -EINVAL;
+	}
+
+	unexport_attr = (struct ioctl_hyper_dmabuf_unexport *)data;
+
+	/* find dmabuf in export list */
+	sgt_info = hyper_dmabuf_find_exported(unexport_attr->hyper_dmabuf_id);
+
+	dev_dbg(hyper_dmabuf_private.device, "scheduling unexport of buffer %d\n", unexport_attr->hyper_dmabuf_id);
+
+	/* failed to find corresponding entry in export list */
+	if (sgt_info == NULL) {
+		unexport_attr->status = -EINVAL;
+		return -EFAULT;
+	}
+
+	if (sgt_info->unexport_scheduled)
+		return 0;
+
+	sgt_info->unexport_scheduled = 1;
+	INIT_DELAYED_WORK(&sgt_info->unexport_work, hyper_dmabuf_delayed_unexport);
+	schedule_delayed_work(&sgt_info->unexport_work,
+			      msecs_to_jiffies(unexport_attr->delay_ms));
+
+	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
+	return 0;
 }
 
 static int hyper_dmabuf_query(void *data)
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
index e43a25f..558964c 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
@@ -90,6 +90,8 @@ struct ioctl_hyper_dmabuf_unexport {
 	/* IN parameters */
 	/* hyper dmabuf id to be unexported */
 	int hyper_dmabuf_id;
+	/* delay in ms by which unexport processing will be postponed */
+	int delay_ms;
 	/* OUT parameters */
 	/* Status of request */
 	int status;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index c99176ac..dd4bb01 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -112,7 +112,6 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_req *req,
 
 void cmd_process_work(struct work_struct *work)
 {
-	struct hyper_dmabuf_sgt_info *sgt_info;
 	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
 	struct cmd_process *proc = container_of(work, struct cmd_process, work);
 	struct hyper_dmabuf_req *req;
@@ -154,19 +153,6 @@ void cmd_process_work(struct work_struct *work)
 		hyper_dmabuf_register_imported(imported_sgt_info);
 		break;
 
-	case HYPER_DMABUF_FIRST_EXPORT:
-		/* find a corresponding SGT for the id */
-		sgt_info = hyper_dmabuf_find_exported(req->operands[0]);
-
-		if (!sgt_info) {
-			dev_err(hyper_dmabuf_private.device,
-				"critical err: requested sgt_info can't be found %d\n", req->operands[0]);
-			break;
-		}
-
-		sgt_info->importer_exported++;
-		break;
-
 	case HYPER_DMABUF_OPS_TO_REMOTE:
 		/* notifying dmabuf map/unmap to importer (probably not needed) */
 		/* for dmabuf synchronization */
@@ -187,6 +173,7 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 	struct cmd_process *proc;
 	struct hyper_dmabuf_req *temp_req;
 	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct hyper_dmabuf_sgt_info *exp_sgt_info;
 	int ret;
 
 	if (!req) {
@@ -216,8 +203,7 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 
 		if (sgt_info) {
 			/* if anything is still using dma_buf */
-			if (sgt_info->dma_buf &&
-			    dmabuf_refcount(sgt_info->dma_buf) > 0) {
+			if (sgt_info->num_importers) {
 				/*
 				 * Buffer is still in  use, just mark that it should
 				 * not be allowed to export its fd anymore.
@@ -255,6 +241,29 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 		return req->command;
 	}
 
+	/* synchronous dma_buf_fd export */
+	if (req->command == HYPER_DMABUF_FIRST_EXPORT) {
+		/* find a corresponding SGT for the id */
+		exp_sgt_info = hyper_dmabuf_find_exported(req->operands[0]);
+
+		if (!exp_sgt_info) {
+			dev_err(hyper_dmabuf_private.device,
+				"critical err: requested sgt_info can't be found %d\n", req->operands[0]);
+			req->status = HYPER_DMABUF_REQ_ERROR;
+		} else if (!exp_sgt_info->valid) {
+			dev_dbg(hyper_dmabuf_private.device,
+				"Buffer no longer valid - cannot export\n");
+			req->status = HYPER_DMABUF_REQ_ERROR;
+		} else {
+			dev_dbg(hyper_dmabuf_private.device,
+				"Buffer still valid - can export\n");
+			exp_sgt_info->importer_exported++;
+			req->status = HYPER_DMABUF_REQ_PROCESSED;
+		}
+		return req->command;
+	}
+
+
 	dev_dbg(hyper_dmabuf_private.device,
 		"%s: putting request to workqueue\n", __func__);
 	temp_req = kmalloc(sizeof(*temp_req), GFP_KERNEL);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
index 2a58218..a41fd0a 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -78,6 +78,8 @@ struct hyper_dmabuf_sgt_info {
 	bool valid;
 	int importer_exported; /* exported locally on importer's side */
 	void *refs_info; /* hypervisor-specific info for the references */
+	struct delayed_work unexport_work;
+	bool unexport_scheduled;
 	int private[4]; /* device specific info (e.g. image's meta info?) */
 };
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index ba6b126..a8cce26 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -551,6 +551,8 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
 			dev_err(hyper_dmabuf_private.device, "request timed-out\n");
 			return -EBUSY;
 		}
+
+		return req_pending.status;
 	}
 
 	return 0;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 26/60] hyper_dmabuf: add mutexes to prevent several race conditions
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

From: Mateusz Polrola <mateuszx.potrola@intel.com>

Added a mutex to the export_fd ioctl to prevent the same buffer's
pages from being mapped twice when two consumers race to map that
buffer on the importer VM.

Also lock a mutex before sending a request over the Xen
communication channel, so that req_pending cannot be overwritten by
another caller.
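
The req_pending issue is the classic shared-reply-slot problem; the
fix reduces to this pattern (illustrative names, not the driver's
own):

#include <linux/mutex.h>

struct tx_channel {
	struct mutex lock;	/* serializes senders, mutex_init()'d
				 * when the ring is set up */
	int reply_status;	/* shared reply slot for sync requests */
};

static int send_request_sync(struct tx_channel *ch)
{
	int ret;

	/* only one sender may own the reply slot at a time */
	mutex_lock(&ch->lock);

	/* ... put the request on the ring, kick the remote side,
	 * and wait for the response ... */

	/* safe to read: the lock keeps other senders from reusing
	 * reply_status underneath us */
	ret = ch->reply_status;

	mutex_unlock(&ch->lock);

	return ret;
}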

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c          |  2 ++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h          |  1 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c        |  6 ++++++
 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c | 10 ++++++++++
 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h |  1 +
 5 files changed, 20 insertions(+)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 569b95e..584d55d 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -56,6 +56,8 @@ static int __init hyper_dmabuf_drv_init(void)
 
 	printk( KERN_NOTICE "hyper_dmabuf_starting: Initialization started" );
 
+	mutex_init(&hyper_dmabuf_private.lock);
+
 	ret = register_device();
 	if (ret < 0) {
 		return -EINVAL;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
index 0b1441e..8445416 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -76,6 +76,7 @@ struct hyper_dmabuf_private {
 
 	/* backend ops - hypervisor specific */
 	struct hyper_dmabuf_backend_ops *backend_ops;
+	struct mutex lock;
 };
 
 #endif /* __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 018de8c..8851a9c 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -260,6 +260,8 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 	if (sgt_info == NULL || !sgt_info->valid) /* can't find sgt from the table */
 		return -1;
 
+	mutex_lock(&hyper_dmabuf_private.lock);
+
 	sgt_info->num_importers++;
 
 	/* send notification for export_fd to exporter */
@@ -274,6 +276,7 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 		kfree(req);
 		dev_err(hyper_dmabuf_private.device, "Failed to create sgt or notify exporter\n");
 		sgt_info->num_importers--;
+		mutex_unlock(&hyper_dmabuf_private.lock);
 		return -EINVAL;
 	}
 	kfree(req);
@@ -282,6 +285,7 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 		dev_err(hyper_dmabuf_private.device,
 			"Buffer invalid\n");
 		sgt_info->num_importers--;
+		mutex_unlock(&hyper_dmabuf_private.lock);
 		return -1;
 	} else {
 		dev_dbg(hyper_dmabuf_private.device, "Can import buffer\n");
@@ -303,6 +307,7 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 
 		if (!data_pages) {
 			sgt_info->num_importers--;
+			mutex_unlock(&hyper_dmabuf_private.lock);
 			return -EINVAL;
 		}
 
@@ -318,6 +323,7 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 		ret = export_fd_attr->fd;
 	}
 
+	mutex_unlock(&hyper_dmabuf_private.lock);
 	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
 	return 0;
 }
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index a8cce26..9d67b47 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -278,6 +278,8 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 	ring_info->irq = ret;
 	ring_info->port = alloc_unbound.port;
 
+	mutex_init(&ring_info->lock);
+
 	dev_dbg(hyper_dmabuf_private.device,
 		"%s: allocated eventchannel gref %d  port: %d  irq: %d\n",
 		__func__,
@@ -512,6 +514,9 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
 		return -EINVAL;
 	}
 
+
+	mutex_lock(&ring_info->lock);
+
 	ring = &ring_info->ring_front;
 
 	if (RING_FULL(ring))
@@ -519,6 +524,7 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
 
 	new_req = RING_GET_REQUEST(ring, ring->req_prod_pvt);
 	if (!new_req) {
+		mutex_unlock(&ring_info->lock);
 		dev_err(hyper_dmabuf_private.device,
 			"NULL REQUEST\n");
 		return -EIO;
@@ -548,13 +554,17 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
 		}
 
 		if (timeout < 0) {
+			mutex_unlock(&ring_info->lock);
 			dev_err(hyper_dmabuf_private.device, "request timed-out\n");
 			return -EBUSY;
 		}
 
+		mutex_unlock(&ring_info->lock);
 		return req_pending.status;
 	}
 
+	mutex_unlock(&ring_info->lock);
+
 	return 0;
 }
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
index 9c93165..0533e4d 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
@@ -39,6 +39,7 @@ struct xen_comm_tx_ring_info {
         int gref_ring;
         int irq;
         int port;
+	struct mutex lock;
 	struct xenbus_watch watch;
 };
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 27/60] hyper_dmabuf: use proper error codes
  2017-12-19 19:29 ` Dongwon Kim
                   ` (40 preceding siblings ...)
  (?)
@ 2017-12-19 19:29 ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

Cleaned up and corrected error codes and conditions in various
error-check routines. Also added proper error messages where a
function returns an error.
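
For instance (an illustrative fragment condensed from two of the hunks
below, not a new hunk), the recurring pattern is to propagate a distinct
errno instead of a bare -1, and to use IS_ERR()/PTR_ERR() where an API
returns error pointers:

    struct dma_buf *dma_buf = dma_buf_get(fd);
    struct hyper_dmabuf_imported_sgt_info *entry;

    if (IS_ERR(dma_buf))
            return PTR_ERR(dma_buf);    /* propagate the real error code */

    entry = hyper_dmabuf_find_imported(id);
    if (!entry)
            return -ENOENT;             /* lookup failure, instead of -1 */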

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        | 14 +++--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c         |  2 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c        |  8 +--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 66 ++++++++++++++--------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c       |  6 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        |  5 +-
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c    | 38 ++++++-------
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   | 20 +++----
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c  |  4 +-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c    |  2 +-
 10 files changed, 94 insertions(+), 71 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 584d55d..44a9139 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -60,7 +60,7 @@ static int __init hyper_dmabuf_drv_init(void)
 
 	ret = register_device();
 	if (ret < 0) {
-		return -EINVAL;
+		return ret;
 	}
 
 #ifdef CONFIG_HYPER_DMABUF_XEN
@@ -77,18 +77,24 @@ static int __init hyper_dmabuf_drv_init(void)
 
 	ret = hyper_dmabuf_table_init();
 	if (ret < 0) {
-		return -EINVAL;
+		dev_err(hyper_dmabuf_private.device,
+			"failed to initialize table for exported/imported entries\n");
+		return ret;
 	}
 
 	ret = hyper_dmabuf_private.backend_ops->init_comm_env();
 	if (ret < 0) {
-		return -EINVAL;
+		dev_err(hyper_dmabuf_private.device,
+			"failed to initialize hypervisor-specific comm env\n");
+		return ret;
 	}
 
 #ifdef CONFIG_HYPER_DMABUF_SYSFS
 	ret = hyper_dmabuf_register_sysfs(hyper_dmabuf_private.device);
 	if (ret < 0) {
-		return -EINVAL;
+		dev_err(hyper_dmabuf_private.device,
+			"failed to initialize sysfs\n");
+		return ret;
 	}
 #endif
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
index 9b4ff45..35bfdfb 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
@@ -62,7 +62,7 @@ static int retrieve_reusable_id(void)
 		return id;
 	}
 
-	return -1;
+	return -ENOENT;
 }
 
 void destroy_reusable_list(void)
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
index a9bc354..a0b3946 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
@@ -84,11 +84,11 @@ struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
 	struct scatterlist *sgl;
 
 	pinfo = kmalloc(sizeof(*pinfo), GFP_KERNEL);
-	if (pinfo == NULL)
+	if (!pinfo)
 		return NULL;
 
 	pinfo->pages = kmalloc(sizeof(struct page *)*hyper_dmabuf_get_num_pgs(sgt), GFP_KERNEL);
-	if (pinfo->pages == NULL)
+	if (!pinfo->pages)
 		return NULL;
 
 	sgl = sgt->sgl;
@@ -138,7 +138,7 @@ struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
 	int i, ret;
 
 	sgt = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
-	if (sgt == NULL) {
+	if (!sgt) {
 		return NULL;
 	}
 
@@ -348,7 +348,7 @@ static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachme
 	/* create a new sg_table with extracted pages */
 	st = hyper_dmabuf_create_sgt(page_info->pages, page_info->frst_ofst,
 				page_info->last_len, page_info->nents);
-	if (st == NULL)
+	if (!st)
 		goto err_free_sg;
 
         if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir)) {
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 8851a9c..19ca725 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -54,7 +54,7 @@ static int hyper_dmabuf_tx_ch_setup(void *data)
 
 	if (!data) {
 		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
-		return -1;
+		return -EINVAL;
 	}
 	tx_ch_attr = (struct ioctl_hyper_dmabuf_tx_ch_setup *)data;
 
@@ -71,7 +71,7 @@ static int hyper_dmabuf_rx_ch_setup(void *data)
 
 	if (!data) {
 		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
-		return -1;
+		return -EINVAL;
 	}
 
 	rx_ch_attr = (struct ioctl_hyper_dmabuf_rx_ch_setup *)data;
@@ -96,16 +96,16 @@ static int hyper_dmabuf_export_remote(void *data)
 
 	if (!data) {
 		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
-		return -1;
+		return -EINVAL;
 	}
 
 	export_remote_attr = (struct ioctl_hyper_dmabuf_export_remote *)data;
 
 	dma_buf = dma_buf_get(export_remote_attr->dmabuf_fd);
 
-	if (!dma_buf) {
+	if (IS_ERR(dma_buf)) {
 		dev_err(hyper_dmabuf_private.device,  "Cannot get dma buf\n");
-		return -1;
+		return PTR_ERR(dma_buf);
 	}
 
 	/* we check if this specific attachment was already exported
@@ -114,7 +114,7 @@ static int hyper_dmabuf_export_remote(void *data)
 	 */
 	ret = hyper_dmabuf_find_id_exported(dma_buf, export_remote_attr->remote_domain);
 	sgt_info = hyper_dmabuf_find_exported(ret);
-	if (ret != -1 && sgt_info->valid) {
+	if (ret != -ENOENT && sgt_info->valid) {
 		/*
 		 * Check if unexport is already scheduled for that buffer,
 		 * if so try to cancel it. If that will fail, buffer needs
@@ -134,9 +134,9 @@ static int hyper_dmabuf_export_remote(void *data)
 
 reexport:
 	attachment = dma_buf_attach(dma_buf, hyper_dmabuf_private.device);
-	if (!attachment) {
+	if (IS_ERR(attachment)) {
 		dev_err(hyper_dmabuf_private.device, "Cannot get attachment\n");
-		return -1;
+		return PTR_ERR(attachment);
 	}
 
 	/* Clear ret, as that will cause whole ioctl to return failure
@@ -148,6 +148,11 @@ static int hyper_dmabuf_export_remote(void *data)
 
 	sgt_info = kcalloc(1, sizeof(*sgt_info), GFP_KERNEL);
 
+	if(!sgt_info) {
+		dev_err(hyper_dmabuf_private.device, "no more space left\n");
+		return -ENOMEM;
+	}
+
 	sgt_info->hyper_dmabuf_id = hyper_dmabuf_get_id();
 
 	/* TODO: We might need to consider using port number on event channel? */
@@ -174,8 +179,10 @@ static int hyper_dmabuf_export_remote(void *data)
 	INIT_LIST_HEAD(&sgt_info->va_vmapped->list);
 
 	page_info = hyper_dmabuf_ext_pgs(sgt);
-	if (page_info == NULL)
+	if (!page_info) {
+		dev_err(hyper_dmabuf_private.device, "failed to construct page_info\n");
 		goto fail_export;
+	}
 
 	sgt_info->nents = page_info->nents;
 
@@ -206,8 +213,12 @@ static int hyper_dmabuf_export_remote(void *data)
 	/* composing a message to the importer */
 	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT, &operands[0]);
 
-	if(ops->send_req(export_remote_attr->remote_domain, req, false))
+	ret = ops->send_req(export_remote_attr->remote_domain, req, false);
+
+	if(ret) {
+		dev_err(hyper_dmabuf_private.device, "error while communicating\n");
 		goto fail_send_request;
+	}
 
 	/* free msg */
 	kfree(req);
@@ -233,7 +244,7 @@ static int hyper_dmabuf_export_remote(void *data)
 	kfree(sgt_info->va_kmapped);
 	kfree(sgt_info->va_vmapped);
 
-	return -EINVAL;
+	return ret;
 }
 
 static int hyper_dmabuf_export_fd_ioctl(void *data)
@@ -257,8 +268,12 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 
 	/* look for dmabuf for the id */
 	sgt_info = hyper_dmabuf_find_imported(export_fd_attr->hyper_dmabuf_id);
-	if (sgt_info == NULL || !sgt_info->valid) /* can't find sgt from the table */
-		return -1;
+
+	/* can't find sgt from the table */
+	if (!sgt_info) {
+		dev_err(hyper_dmabuf_private.device, "can't find the entry\n");
+		return -ENOENT;
+	}
 
 	mutex_lock(&hyper_dmabuf_private.lock);
 
@@ -277,7 +292,7 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 		dev_err(hyper_dmabuf_private.device, "Failed to create sgt or notify exporter\n");
 		sgt_info->num_importers--;
 		mutex_unlock(&hyper_dmabuf_private.lock);
-		return -EINVAL;
+		return ret;
 	}
 	kfree(req);
 
@@ -286,9 +301,10 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 			"Buffer invalid\n");
 		sgt_info->num_importers--;
 		mutex_unlock(&hyper_dmabuf_private.lock);
-		return -1;
+		return -EINVAL;
 	} else {
 		dev_dbg(hyper_dmabuf_private.device, "Can import buffer\n");
+		ret = 0;
 	}
 
 	dev_dbg(hyper_dmabuf_private.device,
@@ -325,7 +341,7 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 
 	mutex_unlock(&hyper_dmabuf_private.lock);
 	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
-	return 0;
+	return ret;
 }
 
 /* unexport dmabuf from the database and send int req to the source domain
@@ -405,8 +421,8 @@ static int hyper_dmabuf_unexport(void *data)
 
 	/* failed to find corresponding entry in export list */
 	if (sgt_info == NULL) {
-		unexport_attr->status = -EINVAL;
-		return -EFAULT;
+		unexport_attr->status = -ENOENT;
+		return -ENOENT;
 	}
 
 	if (sgt_info->unexport_scheduled)
@@ -441,7 +457,7 @@ static int hyper_dmabuf_query(void *data)
 	/* if dmabuf can't be found in both lists, return */
 	if (!(sgt_info && imported_sgt_info)) {
 		dev_err(hyper_dmabuf_private.device, "can't find entry anywhere\n");
-		return -EINVAL;
+		return -ENOENT;
 	}
 
 	/* not considering the case where a dmabuf is found on both queues
@@ -507,7 +523,7 @@ static long hyper_dmabuf_ioctl(struct file *filp,
 {
 	const struct hyper_dmabuf_ioctl_desc *ioctl = NULL;
 	unsigned int nr = _IOC_NR(cmd);
-	int ret = -EINVAL;
+	int ret;
 	hyper_dmabuf_ioctl_t func;
 	char *kdata;
 
@@ -565,13 +581,13 @@ static const char device_name[] = "hyper_dmabuf";
 /*===============================================================================================*/
 int register_device(void)
 {
-	int result = 0;
+	int ret = 0;
 
-	result = misc_register(&hyper_dmabuf_miscdev);
+	ret = misc_register(&hyper_dmabuf_miscdev);
 
-	if (result != 0) {
+	if (ret) {
 		printk(KERN_WARNING "hyper_dmabuf: driver can't be registered\n");
-		return result;
+		return ret;
 	}
 
 	hyper_dmabuf_private.device = hyper_dmabuf_miscdev.this_device;
@@ -589,7 +605,7 @@ int register_device(void)
 
 	info.irq = err;
 */
-	return result;
+	return ret;
 }
 
 /*-----------------------------------------------------------------------------------------------*/
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
index e46ae19..2cb4bb4 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
@@ -177,7 +177,7 @@ int hyper_dmabuf_find_id_exported(struct dma_buf *dmabuf, int domid)
 		   info_entry->info->hyper_dmabuf_rdomain == domid)
 			return info_entry->info->hyper_dmabuf_id;
 
-	return -1;
+	return -ENOENT;
 }
 
 struct hyper_dmabuf_imported_sgt_info *hyper_dmabuf_find_imported(int id)
@@ -204,7 +204,7 @@ int hyper_dmabuf_remove_exported(int id)
 			return 0;
 		}
 
-	return -1;
+	return -ENOENT;
 }
 
 int hyper_dmabuf_remove_imported(int id)
@@ -219,5 +219,5 @@ int hyper_dmabuf_remove_imported(int id)
 			return 0;
 		}
 
-	return -1;
+	return -ENOENT;
 }
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index dd4bb01..6e24442 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -100,7 +100,7 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_req *req,
 		 * operands0 : hyper_dmabuf_id
 		 * operands1 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
 		 */
-		for (i=0; i<2; i++)
+		for (i = 0; i < 2; i++)
 			req->operands[i] = operands[i];
 		break;
 
@@ -199,6 +199,7 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 		 */
 		dev_dbg(hyper_dmabuf_private.device,
 			"%s: processing HYPER_DMABUF_NOTIFY_UNEXPORT\n", __func__);
+
 		sgt_info = hyper_dmabuf_find_imported(req->operands[0]);
 
 		if (sgt_info) {
@@ -232,6 +233,7 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 		 */
 		dev_dbg(hyper_dmabuf_private.device,
 			"%s: HYPER_DMABUF_OPS_TO_SOURCE\n", __func__);
+
 		ret = hyper_dmabuf_remote_sync(req->operands[0], req->operands[1]);
 		if (ret)
 			req->status = HYPER_DMABUF_REQ_ERROR;
@@ -271,7 +273,6 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 	memcpy(temp_req, req, sizeof(*temp_req));
 
 	proc = kcalloc(1, sizeof(struct cmd_process), GFP_KERNEL);
-
 	proc->rq = temp_req;
 	proc->domid = domid;
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
index f93c936..a74e800 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -71,7 +71,7 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 	if (!sgt_info) {
 		dev_err(hyper_dmabuf_private.device,
 			"dmabuf remote sync::can't find exported list\n");
-		return -EINVAL;
+		return -ENOENT;
 	}
 
 	switch (ops) {
@@ -85,7 +85,7 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 			kfree(attachl);
 			dev_err(hyper_dmabuf_private.device,
 				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_ATTACH\n");
-			return -EINVAL;
+			return PTR_ERR(attachl->attach);
 		}
 
 		list_add(&attachl->list, &sgt_info->active_attached->list);
@@ -97,7 +97,7 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_DETACH\n");
 			dev_err(hyper_dmabuf_private.device,
 				"no more dmabuf attachment left to be detached\n");
-			return -EINVAL;
+			return -EFAULT;
 		}
 
 		attachl = list_first_entry(&sgt_info->active_attached->list,
@@ -113,8 +113,8 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 			dev_err(hyper_dmabuf_private.device,
 				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_MAP\n");
 			dev_err(hyper_dmabuf_private.device,
-				"no more dmabuf attachment left to be detached\n");
-			return -EINVAL;
+				"no more dmabuf attachment left to be mapped\n");
+			return -EFAULT;
 		}
 
 		attachl = list_first_entry(&sgt_info->active_attached->list,
@@ -126,7 +126,7 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 			kfree(sgtl);
 			dev_err(hyper_dmabuf_private.device,
 				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_MAP\n");
-			return -EINVAL;
+			return PTR_ERR(sgtl->sgt);
 		}
 		list_add(&sgtl->list, &sgt_info->active_sgts->list);
 		break;
@@ -137,8 +137,8 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 			dev_err(hyper_dmabuf_private.device,
 				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_UNMAP\n");
 			dev_err(hyper_dmabuf_private.device,
-				"no more SGT or attachment left to be freed\n");
-			return -EINVAL;
+				"no more SGT or attachment left to be unmapped\n");
+			return -EFAULT;
 		}
 
 		attachl = list_first_entry(&sgt_info->active_attached->list,
@@ -176,19 +176,19 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 
 	case HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS:
 		ret = dma_buf_begin_cpu_access(sgt_info->dma_buf, DMA_BIDIRECTIONAL);
-		if (!ret) {
+		if (ret) {
 			dev_err(hyper_dmabuf_private.device,
 				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS\n");
-			ret = -EINVAL;
+			return ret;
 		}
 		break;
 
 	case HYPER_DMABUF_OPS_END_CPU_ACCESS:
 		ret = dma_buf_end_cpu_access(sgt_info->dma_buf, DMA_BIDIRECTIONAL);
-		if (!ret) {
+		if (ret) {
 			dev_err(hyper_dmabuf_private.device,
 				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_END_CPU_ACCESS\n");
-			ret = -EINVAL;
+			return ret;
 		}
 		break;
 
@@ -206,7 +206,7 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 			kfree(va_kmapl);
 			dev_err(hyper_dmabuf_private.device,
 				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
-			return -EINVAL;
+			return PTR_ERR(va_kmapl->vaddr);
 		}
 		list_add(&va_kmapl->list, &sgt_info->va_kmapped->list);
 		break;
@@ -218,15 +218,15 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
 			dev_err(hyper_dmabuf_private.device,
 				"no more dmabuf VA to be freed\n");
-			return -EINVAL;
+			return -EFAULT;
 		}
 
 		va_kmapl = list_first_entry(&sgt_info->va_kmapped->list,
 					    struct kmap_vaddr_list, list);
-		if (va_kmapl->vaddr == NULL) {
+		if (!va_kmapl->vaddr) {
 			dev_err(hyper_dmabuf_private.device,
 				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
-			return -EINVAL;
+			return PTR_ERR(va_kmapl->vaddr);
 		}
 
 		/* unmapping 1 page */
@@ -256,7 +256,7 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 			kfree(va_vmapl);
 			dev_err(hyper_dmabuf_private.device,
 				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VMAP\n");
-			return -EINVAL;
+			return PTR_ERR(va_vmapl->vaddr);
 		}
 		list_add(&va_vmapl->list, &sgt_info->va_vmapped->list);
 		break;
@@ -267,14 +267,14 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VUNMAP\n");
 			dev_err(hyper_dmabuf_private.device,
 				"no more dmabuf VA to be freed\n");
-			return -EINVAL;
+			return -EFAULT;
 		}
 		va_vmapl = list_first_entry(&sgt_info->va_vmapped->list,
 					struct vmap_vaddr_list, list);
 		if (!va_vmapl || va_vmapl->vaddr == NULL) {
 			dev_err(hyper_dmabuf_private.device,
 				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VUNMAP\n");
-			return -EINVAL;
+			return -EFAULT;
 		}
 
 		dma_buf_vunmap(sgt_info->dma_buf, va_vmapl->vaddr);
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index 9d67b47..2cc35e3 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -232,7 +232,7 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 	/* from exporter to importer */
 	shared_ring = (void *)__get_free_pages(GFP_KERNEL, 1);
 	if (shared_ring == 0) {
-		return -EINVAL;
+		return -ENOMEM;
 	}
 
 	sring = (struct xen_comm_sring *) shared_ring;
@@ -246,17 +246,17 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 							   0);
 	if (ring_info->gref_ring < 0) {
 		/* fail to get gref */
-		return -EINVAL;
+		return -EFAULT;
 	}
 
 	alloc_unbound.dom = DOMID_SELF;
 	alloc_unbound.remote_dom = domid;
 	ret = HYPERVISOR_event_channel_op(EVTCHNOP_alloc_unbound,
 					&alloc_unbound);
-	if (ret != 0) {
+	if (ret) {
 		dev_err(hyper_dmabuf_private.device,
 			"Cannot allocate event channel\n");
-		return -EINVAL;
+		return -EIO;
 	}
 
 	/* setting up interrupt */
@@ -271,7 +271,7 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 		HYPERVISOR_event_channel_op(EVTCHNOP_close, &close);
 		gnttab_end_foreign_access(ring_info->gref_ring, 0,
 					virt_to_mfn(shared_ring));
-		return -EINVAL;
+		return -EIO;
 	}
 
 	ring_info->rdomain = domid;
@@ -387,7 +387,7 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 	map_ops = kmalloc(sizeof(*map_ops), GFP_KERNEL);
 
 	if (gnttab_alloc_pages(1, &shared_ring)) {
-		return -EINVAL;
+		return -ENOMEM;
 	}
 
 	gnttab_set_map_op(&map_ops[0], (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
@@ -399,12 +399,12 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 	ret = gnttab_map_refs(map_ops, NULL, &shared_ring, 1);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device, "Cannot map ring\n");
-		return -EINVAL;
+		return -EFAULT;
 	}
 
 	if (map_ops[0].status) {
 		dev_err(hyper_dmabuf_private.device, "Ring mapping failed\n");
-		return -EINVAL;
+		return -EFAULT;
 	} else {
 		ring_info->unmap_op.handle = map_ops[0].handle;
 	}
@@ -418,7 +418,7 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 	ret = bind_interdomain_evtchn_to_irq(domid, rx_port);
 
 	if (ret < 0) {
-		return -EINVAL;
+		return -EIO;
 	}
 
 	ring_info->irq = ret;
@@ -511,7 +511,7 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
 	if (!ring_info) {
 		dev_err(hyper_dmabuf_private.device,
 			"Can't find ring info for the channel\n");
-		return -EINVAL;
+		return -ENOENT;
 	}
 
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
index 0fa2d55..2f469da 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
@@ -110,7 +110,7 @@ int xen_comm_remove_tx_ring(int domid)
 			return 0;
 		}
 
-	return -1;
+	return -ENOENT;
 }
 
 int xen_comm_remove_rx_ring(int domid)
@@ -125,7 +125,7 @@ int xen_comm_remove_rx_ring(int domid)
 			return 0;
 		}
 
-	return -1;
+	return -ENOENT;
 }
 
 void xen_comm_foreach_tx_ring(void (*func)(int domid))
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
index b158c11..c03e5a0 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
@@ -388,7 +388,7 @@ int hyper_dmabuf_xen_unmap_shared_pages(void **refs_info, int nents) {
 	if (gnttab_unmap_refs(sh_pages_info->unmap_ops, NULL,
 			      sh_pages_info->data_pages, nents) ) {
 		dev_err(hyper_dmabuf_private.device, "Cannot unmap data pages\n");
-		return -EINVAL;
+		return -EFAULT;
 	}
 
 	gnttab_free_pages(nents, sh_pages_info->data_pages);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 28/60] hyper_dmabuf: address several synchronization issues
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

From: Mateusz Polrola <mateuszx.potrola@intel.com>

This patch addresses several synchronization issues while sharing
DMA_BUF with another VM.

1. Set WAIT_AFTER_SYNC_REQ to false by default to prevent possible
   performance degradation when waiting for the response to every
   synchronization request sent to the exporter VM.

2. Removed the HYPER_DMABUF_OPS_RELEASE_FINAL message - the exporter
   now detects automatically when there are no more consumers of the
   DMA_BUF, so the importer VM doesn't have to send this message.

3. Renamed HYPER_DMABUF_FIRST_EXPORT to HYPER_DMABUF_EXPORT_FD

4. Introduced the HYPER_DMABUF_EXPORT_FD_FAILED message to undo
   HYPER_DMABUF_EXPORT_FD in case of any failure while executing
   hyper_dmabuf_export_fd_ioctl (see the sketch after this list)

5. Made the sender wait until the other VM has processed all pending
   requests when the ring buffer is full.

6. Created hyper_dmabuf.h with definitions of the driver interface
   under include/uapi/xen/
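
As a rough illustration of item 4, the undo path looks like the sketch
below; it reuses the request/ops helpers from this patch and omits the
request allocation and most error handling around the calls:

    /* Hedged sketch, not a hunk from this patch: if the synchronous
     * EXPORT_FD request fails (e.g. times out), the exporter may still
     * process it later, so the importer sends EXPORT_FD_FAILED to roll
     * the first export back.
     */
    hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT_FD, &operand);
    ret = ops->send_req(HYPER_DMABUF_DOM_ID(operand), req, true);
    if (ret < 0) {
            hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT_FD_FAILED,
                                        &operand);
            ops->send_req(HYPER_DMABUF_DOM_ID(operand), req, false);
    }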

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c        | 21 ++---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 17 +++-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h      | 74 +----------------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        | 30 +++++--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h        |  4 +-
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c    | 24 ++++--
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   |  5 +-
 include/uapi/xen/hyper_dmabuf.h                    | 96 ++++++++++++++++++++++
 8 files changed, 163 insertions(+), 108 deletions(-)
 create mode 100644 include/uapi/xen/hyper_dmabuf.h

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
index a0b3946..5a034ffb 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
@@ -187,10 +187,7 @@ int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int fo
 	 * side.
 	 */
 	if (!force &&
-	    (!list_empty(&sgt_info->va_kmapped->list) ||
-	    !list_empty(&sgt_info->va_vmapped->list) ||
-	    !list_empty(&sgt_info->active_sgts->list) ||
-	    !list_empty(&sgt_info->active_attached->list))) {
+	    sgt_info->importer_exported) {
 		dev_warn(hyper_dmabuf_private.device, "dma-buf is used by importer\n");
 		return -EPERM;
 	}
@@ -259,7 +256,7 @@ int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int fo
 	return 0;
 }
 
-#define WAIT_AFTER_SYNC_REQ 1
+#define WAIT_AFTER_SYNC_REQ 0
 
 inline int hyper_dmabuf_sync_request(int id, int dmabuf_ops)
 {
@@ -431,17 +428,11 @@ static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 	final_release = sgt_info && !sgt_info->valid &&
 		        !sgt_info->num_importers;
 
-	if (final_release) {
-		ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
-						HYPER_DMABUF_OPS_RELEASE_FINAL);
-	} else {
-		ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
-						HYPER_DMABUF_OPS_RELEASE);
-	}
-
+	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+					HYPER_DMABUF_OPS_RELEASE);
 	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		dev_warn(hyper_dmabuf_private.device,
+			 "hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
 	}
 
 	/*
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 19ca725..58b115a 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -35,6 +35,7 @@
 #include <linux/dma-buf.h>
 #include <linux/delay.h>
 #include <linux/list.h>
+#include <xen/hyper_dmabuf.h>
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_ioctl.h"
 #include "hyper_dmabuf_list.h"
@@ -282,12 +283,17 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 	/* send notification for export_fd to exporter */
 	operand = sgt_info->hyper_dmabuf_id;
 
+	dev_dbg(hyper_dmabuf_private.device, "Exporting fd of buffer %d\n", operand);
+
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
-	hyper_dmabuf_create_request(req, HYPER_DMABUF_FIRST_EXPORT, &operand);
+	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT_FD, &operand);
 
 	ret = ops->send_req(HYPER_DMABUF_DOM_ID(operand), req, true);
 
 	if (ret < 0) {
+		/* in case of timeout the other end will eventually receive the request, so we need to undo it */
+		hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT_FD_FAILED, &operand);
+		ops->send_req(HYPER_DMABUF_DOM_ID(operand), req, false);
 		kfree(req);
 		dev_err(hyper_dmabuf_private.device, "Failed to create sgt or notify exporter\n");
 		sgt_info->num_importers--;
@@ -298,12 +304,12 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 
 	if (ret == HYPER_DMABUF_REQ_ERROR) {
 		dev_err(hyper_dmabuf_private.device,
-			"Buffer invalid\n");
+			"Buffer invalid %d, cannot import\n", operand);
 		sgt_info->num_importers--;
 		mutex_unlock(&hyper_dmabuf_private.lock);
 		return -EINVAL;
 	} else {
-		dev_dbg(hyper_dmabuf_private.device, "Can import buffer\n");
+		dev_dbg(hyper_dmabuf_private.device, "Can import buffer %d\n", operand);
 		ret = 0;
 	}
 
@@ -322,7 +328,12 @@ static int hyper_dmabuf_export_fd_ioctl(void *data)
 						   &sgt_info->refs_info);
 
 		if (!data_pages) {
+			dev_err(hyper_dmabuf_private.device, "Cannot map pages of buffer %d\n", operand);
 			sgt_info->num_importers--;
+			req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+			hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT_FD_FAILED, &operand);
+			ops->send_req(HYPER_DMABUF_DOM_ID(operand), req, false);
+			kfree(req);
 			mutex_unlock(&hyper_dmabuf_private.lock);
 			return -EINVAL;
 		}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
index 558964c..8355e30 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
@@ -22,8 +22,8 @@
  *
  */
 
-#ifndef __LINUX_PUBLIC_HYPER_DMABUF_IOCTL_H__
-#define __LINUX_PUBLIC_HYPER_DMABUF_IOCTL_H__
+#ifndef __HYPER_DMABUF_IOCTL_H__
+#define __HYPER_DMABUF_IOCTL_H__
 
 typedef int (*hyper_dmabuf_ioctl_t)(void *data);
 
@@ -42,72 +42,4 @@ struct hyper_dmabuf_ioctl_desc {
 			.name = #ioctl			\
 	}
 
-#define IOCTL_HYPER_DMABUF_TX_CH_SETUP \
-_IOC(_IOC_NONE, 'G', 0, sizeof(struct ioctl_hyper_dmabuf_tx_ch_setup))
-struct ioctl_hyper_dmabuf_tx_ch_setup {
-	/* IN parameters */
-	/* Remote domain id */
-	int remote_domain;
-};
-
-#define IOCTL_HYPER_DMABUF_RX_CH_SETUP \
-_IOC(_IOC_NONE, 'G', 1, sizeof(struct ioctl_hyper_dmabuf_rx_ch_setup))
-struct ioctl_hyper_dmabuf_rx_ch_setup {
-	/* IN parameters */
-	/* Source domain id */
-	int source_domain;
-};
-
-#define IOCTL_HYPER_DMABUF_EXPORT_REMOTE \
-_IOC(_IOC_NONE, 'G', 2, sizeof(struct ioctl_hyper_dmabuf_export_remote))
-struct ioctl_hyper_dmabuf_export_remote {
-	/* IN parameters */
-	/* DMA buf fd to be exported */
-	int dmabuf_fd;
-	/* Domain id to which buffer should be exported */
-	int remote_domain;
-	/* exported dma buf id */
-	int hyper_dmabuf_id;
-	int private[4];
-};
-
-#define IOCTL_HYPER_DMABUF_EXPORT_FD \
-_IOC(_IOC_NONE, 'G', 3, sizeof(struct ioctl_hyper_dmabuf_export_fd))
-struct ioctl_hyper_dmabuf_export_fd {
-	/* IN parameters */
-	/* hyper dmabuf id to be imported */
-	int hyper_dmabuf_id;
-	/* flags */
-	int flags;
-	/* OUT parameters */
-	/* exported dma buf fd */
-	int fd;
-};
-
-#define IOCTL_HYPER_DMABUF_UNEXPORT \
-_IOC(_IOC_NONE, 'G', 4, sizeof(struct ioctl_hyper_dmabuf_unexport))
-struct ioctl_hyper_dmabuf_unexport {
-	/* IN parameters */
-	/* hyper dmabuf id to be unexported */
-	int hyper_dmabuf_id;
-	/* delay in ms by which unexport processing will be postponed */
-	int delay_ms;
-	/* OUT parameters */
-	/* Status of request */
-	int status;
-};
-
-#define IOCTL_HYPER_DMABUF_QUERY \
-_IOC(_IOC_NONE, 'G', 5, sizeof(struct ioctl_hyper_dmabuf_query))
-struct ioctl_hyper_dmabuf_query {
-	/* in parameters */
-	/* hyper dmabuf id to be queried */
-	int hyper_dmabuf_id;
-	/* item to be queried */
-	int item;
-	/* OUT parameters */
-	/* Value of queried item */
-	int info;
-};
-
-#endif //__LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
+#endif //__HYPER_DMABUF_IOCTL_H__
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index 6e24442..3111cdc 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -79,9 +79,10 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_req *req,
 		req->operands[0] = operands[0];
 		break;
 
-	case HYPER_DMABUF_FIRST_EXPORT:
-		/* dmabuf fd is being created on imported side for first time */
-		/* command : HYPER_DMABUF_FIRST_EXPORT,
+	case HYPER_DMABUF_EXPORT_FD:
+	case HYPER_DMABUF_EXPORT_FD_FAILED:
+		/* dmabuf fd is being created on the importer side, or importing failed */
+		/* command : HYPER_DMABUF_EXPORT_FD or HYPER_DMABUF_EXPORT_FD_FAILED,
 		 * operands0 : hyper_dmabuf_id
 		 */
 		req->operands[0] = operands[0];
@@ -244,8 +245,10 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 	}
 
 	/* synchronous dma_buf_fd export */
-	if (req->command == HYPER_DMABUF_FIRST_EXPORT) {
+	if (req->command == HYPER_DMABUF_EXPORT_FD) {
 		/* find a corresponding SGT for the id */
+		dev_dbg(hyper_dmabuf_private.device,
+			"Processing HYPER_DMABUF_EXPORT_FD %d\n", req->operands[0]);
 		exp_sgt_info = hyper_dmabuf_find_exported(req->operands[0]);
 
 		if (!exp_sgt_info) {
@@ -254,17 +257,32 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 			req->status = HYPER_DMABUF_REQ_ERROR;
 		} else if (!exp_sgt_info->valid) {
 			dev_dbg(hyper_dmabuf_private.device,
-				"Buffer no longer valid - cannot export\n");
+				"Buffer no longer valid - cannot export fd %d\n", req->operands[0]);
 			req->status = HYPER_DMABUF_REQ_ERROR;
 		} else {
 			dev_dbg(hyper_dmabuf_private.device,
-				"Buffer still valid - can export\n");
+				"Buffer still valid - can export fd%d\n", req->operands[0]);
 			exp_sgt_info->importer_exported++;
 			req->status = HYPER_DMABUF_REQ_PROCESSED;
 		}
 		return req->command;
 	}
 
+	if (req->command == HYPER_DMABUF_EXPORT_FD_FAILED) {
+		dev_dbg(hyper_dmabuf_private.device,
+			"Processing HYPER_DMABUF_EXPORT_FD_FAILED %d\n", req->operands[0]);
+		exp_sgt_info = hyper_dmabuf_find_exported(req->operands[0]);
+
+		if (!exp_sgt_info) {
+			dev_err(hyper_dmabuf_private.device,
+				"critical err: requested sgt_info can't be found %d\n", req->operands[0]);
+			req->status = HYPER_DMABUF_REQ_ERROR;
+		} else {
+			exp_sgt_info->importer_exported--;
+			req->status = HYPER_DMABUF_REQ_PROCESSED;
+		}
+		return req->command;
+	}
 
 	dev_dbg(hyper_dmabuf_private.device,
 		"%s: putting request to workqueue\n", __func__);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
index 8b3c857..50ce617 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -43,7 +43,8 @@ struct hyper_dmabuf_resp {
 
 enum hyper_dmabuf_command {
 	HYPER_DMABUF_EXPORT = 0x10,
-	HYPER_DMABUF_FIRST_EXPORT,
+	HYPER_DMABUF_EXPORT_FD,
+	HYPER_DMABUF_EXPORT_FD_FAILED,
 	HYPER_DMABUF_NOTIFY_UNEXPORT,
 	HYPER_DMABUF_OPS_TO_REMOTE,
 	HYPER_DMABUF_OPS_TO_SOURCE,
@@ -55,7 +56,6 @@ enum hyper_dmabuf_ops {
 	HYPER_DMABUF_OPS_MAP,
 	HYPER_DMABUF_OPS_UNMAP,
 	HYPER_DMABUF_OPS_RELEASE,
-	HYPER_DMABUF_OPS_RELEASE_FINAL,
 	HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS,
 	HYPER_DMABUF_OPS_END_CPU_ACCESS,
 	HYPER_DMABUF_OPS_KMAP_ATOMIC,
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
index a74e800..0eded61 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -152,13 +152,25 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 		kfree(sgtl);
 		break;
 
-	case HYPER_DMABUF_OPS_RELEASE_FINAL:
+	case HYPER_DMABUF_OPS_RELEASE:
+		dev_dbg(hyper_dmabuf_private.device,
+			"Buffer %d released, references left: %d\n",
+			 sgt_info->hyper_dmabuf_id,
+			 sgt_info->importer_exported - 1);
+		sgt_info->importer_exported--;
+		/* If there are still importers just break; if not, continue with final cleanup */
+		if (sgt_info->importer_exported)
+			break;
+
 		/*
 		 * Importer just released buffer fd, check if there is any other importer still using it.
 		 * If not and buffer was unexported, clean up shared data and remove that buffer.
 		 */
-		 if (list_empty(&sgt_info->active_attached->list) &&
-		     !sgt_info->valid) {
+		dev_dbg(hyper_dmabuf_private.device,
+			"Buffer %d final released\n", sgt_info->hyper_dmabuf_id);
+
+		if (!sgt_info->valid && !sgt_info->importer_exported &&
+		    !sgt_info->unexport_scheduled) {
 			hyper_dmabuf_cleanup_sgt_info(sgt_info, false);
 			hyper_dmabuf_remove_exported(id);
 			kfree(sgt_info);
@@ -168,12 +180,6 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 
 		break;
 
-	case HYPER_DMABUF_OPS_RELEASE:
-		/* place holder */
-                sgt_info->importer_exported--;
-
-		break;
-
 	case HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS:
 		ret = dma_buf_begin_cpu_access(sgt_info->dma_buf, DMA_BIDIRECTIONAL);
 		if (ret) {
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index 2cc35e3..ce9862a 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -519,8 +519,9 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
 
 	ring = &ring_info->ring_front;
 
-	if (RING_FULL(ring))
-		return -EBUSY;
+	while (RING_FULL(ring)) {
+		usleep_range(100, 120);
+	}
 
 	new_req = RING_GET_REQUEST(ring, ring->req_prod_pvt);
 	if (!new_req) {
diff --git a/include/uapi/xen/hyper_dmabuf.h b/include/uapi/xen/hyper_dmabuf.h
new file mode 100644
index 0000000..2eff3a8e
--- /dev/null
+++ b/include/uapi/xen/hyper_dmabuf.h
@@ -0,0 +1,96 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __LINUX_PUBLIC_HYPER_DMABUF_H__
+#define __LINUX_PUBLIC_HYPER_DMABUF_H__
+
+#define IOCTL_HYPER_DMABUF_TX_CH_SETUP \
+_IOC(_IOC_NONE, 'G', 0, sizeof(struct ioctl_hyper_dmabuf_tx_ch_setup))
+struct ioctl_hyper_dmabuf_tx_ch_setup {
+	/* IN parameters */
+	/* Remote domain id */
+	int remote_domain;
+};
+
+#define IOCTL_HYPER_DMABUF_RX_CH_SETUP \
+_IOC(_IOC_NONE, 'G', 1, sizeof(struct ioctl_hyper_dmabuf_rx_ch_setup))
+struct ioctl_hyper_dmabuf_rx_ch_setup {
+	/* IN parameters */
+	/* Source domain id */
+	int source_domain;
+};
+
+#define IOCTL_HYPER_DMABUF_EXPORT_REMOTE \
+_IOC(_IOC_NONE, 'G', 2, sizeof(struct ioctl_hyper_dmabuf_export_remote))
+struct ioctl_hyper_dmabuf_export_remote {
+	/* IN parameters */
+	/* DMA buf fd to be exported */
+	int dmabuf_fd;
+	/* Domain id to which buffer should be exported */
+	int remote_domain;
+	/* exported dma buf id */
+	int hyper_dmabuf_id;
+	int private[4];
+};
+
+#define IOCTL_HYPER_DMABUF_EXPORT_FD \
+_IOC(_IOC_NONE, 'G', 3, sizeof(struct ioctl_hyper_dmabuf_export_fd))
+struct ioctl_hyper_dmabuf_export_fd {
+	/* IN parameters */
+	/* hyper dmabuf id to be imported */
+	int hyper_dmabuf_id;
+	/* flags */
+	int flags;
+	/* OUT parameters */
+	/* exported dma buf fd */
+	int fd;
+};
+
+#define IOCTL_HYPER_DMABUF_UNEXPORT \
+_IOC(_IOC_NONE, 'G', 4, sizeof(struct ioctl_hyper_dmabuf_unexport))
+struct ioctl_hyper_dmabuf_unexport {
+	/* IN parameters */
+	/* hyper dmabuf id to be unexported */
+	int hyper_dmabuf_id;
+	/* delay in ms by which unexport processing will be postponed */
+	int delay_ms;
+	/* OUT parameters */
+	/* Status of request */
+	int status;
+};
+
+#define IOCTL_HYPER_DMABUF_QUERY \
+_IOC(_IOC_NONE, 'G', 5, sizeof(struct ioctl_hyper_dmabuf_query))
+struct ioctl_hyper_dmabuf_query {
+	/* in parameters */
+	/* hyper dmabuf id to be queried */
+	int hyper_dmabuf_id;
+	/* item to be queried */
+	int item;
+	/* OUT parameters */
+	/* Value of queried item */
+	int info;
+};
+
+#endif //__LINUX_PUBLIC_HYPER_DMABUF_H__
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread
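
For reference, a compact sketch of how an importing domain might drive
the UAPI introduced above. This is an illustration only -- the device
node name "/dev/hyper_dmabuf" and the source domain id are assumptions,
not taken from this patch:

  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <unistd.h>
  #include <xen/hyper_dmabuf.h>

  /* Import side: turn a hyper_dmabuf id received out-of-band from the
   * exporting domain into a local dma-buf fd. */
  int import_example(int hyper_dmabuf_id)
  {
          struct ioctl_hyper_dmabuf_rx_ch_setup rx = { .source_domain = 0 };
          struct ioctl_hyper_dmabuf_export_fd exp = {
                  .hyper_dmabuf_id = hyper_dmabuf_id,
          };
          int fd = open("/dev/hyper_dmabuf", O_RDWR);

          if (fd < 0)
                  return -1;
          /* one-time comm channel setup towards the source domain */
          if (ioctl(fd, IOCTL_HYPER_DMABUF_RX_CH_SETUP, &rx) < 0)
                  goto err;
          /* request a local dma-buf fd for the shared buffer */
          if (ioctl(fd, IOCTL_HYPER_DMABUF_EXPORT_FD, &exp) < 0)
                  goto err;
          return exp.fd;
  err:
          close(fd);
          return -1;
  }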

* [RFC PATCH 29/60] hyper_dmabuf: make sure to release allocated buffers when exiting
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

From: Mateusz Polrola <mateuszx.potrola@intel.com>

All allocated buffers must be released when the application that
exported them crashes. With this change, hyper_dmabuf_sgt_info keeps
the file pointer of the driver instance used to export the buffer.
If that file is released unexpectedly, the driver now unexports all
already-exported buffers to prevent memory leaks.

In case multiple applications export the same buffer to another VM,
unexporting is not started when just one of them crashes. Actual
unexporting is invoked only once the last application that exported
the buffer has crashed or exited, via an "emergency-unexport"
routine that runs automatically when all file pointers opened for
accessing the hyper_dmabuf driver are closed.
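
A minimal sketch of the scenario from the exporter's point of view
(the device node name and target domain are assumptions; the point is
that no explicit unexport call is needed on the crash path):

  /* Hypothetical exporter that never unexports explicitly. */
  int export_and_die(int dmabuf_fd)
  {
          struct ioctl_hyper_dmabuf_export_remote arg = {
                  .dmabuf_fd = dmabuf_fd,     /* a valid dma-buf fd */
                  .remote_domain = 1,         /* assumed target domain */
          };
          int fd = open("/dev/hyper_dmabuf", O_RDWR);

          if (fd < 0 || ioctl(fd, IOCTL_HYPER_DMABUF_EXPORT_REMOTE, &arg) < 0)
                  return -1;
          abort();        /* the process dies, the kernel closes fd,
                           * .release runs, and every buffer whose
                           * sgt_info->filp matches this file is
                           * unexported with delay_ms = 0 */
  }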

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c    |  6 ++-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c  | 73 ++++++++++++++++++--------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h  |  2 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c   | 14 +++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h   |  4 ++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h |  7 +++
 6 files changed, 81 insertions(+), 25 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 44a9139..a12d4dc 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -54,7 +54,7 @@ static int __init hyper_dmabuf_drv_init(void)
 {
 	int ret = 0;
 
-	printk( KERN_NOTICE "hyper_dmabuf_starting: Initialization started" );
+	printk( KERN_NOTICE "hyper_dmabuf_starting: Initialization started\n");
 
 	mutex_init(&hyper_dmabuf_private.lock);
 
@@ -122,7 +122,9 @@ static void hyper_dmabuf_drv_exit(void)
 	if (hyper_dmabuf_private.id_queue)
 		destroy_reusable_list();
 
-	printk( KERN_NOTICE "dma_buf-src_sink model: Exiting" );
+	dev_info(hyper_dmabuf_private.device,
+		 "hyper_dmabuf driver: Exiting\n");
+
 	unregister_device();
 }
 /*===============================================================================================*/
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 58b115a..fa700f2 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -47,7 +47,7 @@
 
 extern struct hyper_dmabuf_private hyper_dmabuf_private;
 
-static int hyper_dmabuf_tx_ch_setup(void *data)
+static int hyper_dmabuf_tx_ch_setup(struct file *filp, void *data)
 {
 	struct ioctl_hyper_dmabuf_tx_ch_setup *tx_ch_attr;
 	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
@@ -64,7 +64,7 @@ static int hyper_dmabuf_tx_ch_setup(void *data)
 	return ret;
 }
 
-static int hyper_dmabuf_rx_ch_setup(void *data)
+static int hyper_dmabuf_rx_ch_setup(struct file *filp, void *data)
 {
 	struct ioctl_hyper_dmabuf_rx_ch_setup *rx_ch_attr;
 	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
@@ -82,7 +82,7 @@ static int hyper_dmabuf_rx_ch_setup(void *data)
 	return ret;
 }
 
-static int hyper_dmabuf_export_remote(void *data)
+static int hyper_dmabuf_export_remote(struct file *filp, void *data)
 {
 	struct ioctl_hyper_dmabuf_export_remote *export_remote_attr;
 	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
@@ -227,6 +227,8 @@ static int hyper_dmabuf_export_remote(void *data)
 	kfree(page_info->pages);
 	kfree(page_info);
 
+	sgt_info->filp = filp;
+
 	return ret;
 
 fail_send_request:
@@ -248,7 +250,7 @@ static int hyper_dmabuf_export_remote(void *data)
 	return ret;
 }
 
-static int hyper_dmabuf_export_fd_ioctl(void *data)
+static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 {
 	struct ioctl_hyper_dmabuf_export_fd *export_fd_attr;
 	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
@@ -411,7 +413,7 @@ static void hyper_dmabuf_delayed_unexport(struct work_struct *work)
 
 /* Schedules unexport of dmabuf.
  */
-static int hyper_dmabuf_unexport(void *data)
+static int hyper_dmabuf_unexport(struct file *filp, void *data)
 {
 	struct ioctl_hyper_dmabuf_unexport *unexport_attr;
 	struct hyper_dmabuf_sgt_info *sgt_info;
@@ -448,7 +450,7 @@ static int hyper_dmabuf_unexport(void *data)
 	return 0;
 }
 
-static int hyper_dmabuf_query(void *data)
+static int hyper_dmabuf_query(struct file *filp, void *data)
 {
 	struct ioctl_hyper_dmabuf_query *query_attr;
 	struct hyper_dmabuf_sgt_info *sgt_info;
@@ -558,7 +560,7 @@ static long hyper_dmabuf_ioctl(struct file *filp,
 		return -EFAULT;
 	}
 
-	ret = func(kdata);
+	ret = func(filp, kdata);
 
 	if (copy_to_user((void __user *)param, kdata, _IOC_SIZE(cmd)) != 0) {
 		dev_err(hyper_dmabuf_private.device, "failed to copy to user arguments\n");
@@ -570,14 +572,49 @@ static long hyper_dmabuf_ioctl(struct file *filp,
 	return ret;
 }
 
-struct device_info {
-	int curr_domain;
-};
+int hyper_dmabuf_open(struct inode *inode, struct file *filp)
+{
+	/* Do not allow exclusive open */
+	if (filp->f_flags & O_EXCL)
+		return -EBUSY;
+
+	return 0;
+}
+
+static void hyper_dmabuf_emergency_release(struct hyper_dmabuf_sgt_info *sgt_info,
+					   void *attr)
+{
+	struct ioctl_hyper_dmabuf_unexport unexport_attr;
+	struct file *filp = (struct file *)attr;
+
+	if (!filp || !sgt_info)
+		return;
+
+	if (sgt_info->filp == filp) {
+		dev_dbg(hyper_dmabuf_private.device,
+			"Executing emergency release of buffer %d\n",
+			 sgt_info->hyper_dmabuf_id);
+
+		unexport_attr.hyper_dmabuf_id = sgt_info->hyper_dmabuf_id;
+		unexport_attr.delay_ms = 0;
+
+		hyper_dmabuf_unexport(filp, &unexport_attr);
+	}
+}
+
+int hyper_dmabuf_release(struct inode *inode, struct file *filp)
+{
+	hyper_dmabuf_foreach_exported(hyper_dmabuf_emergency_release, filp);
+
+	return 0;
+}
 
 /*===============================================================================================*/
 static struct file_operations hyper_dmabuf_driver_fops =
 {
    .owner = THIS_MODULE,
+   .open = hyper_dmabuf_open,
+   .release = hyper_dmabuf_release,
    .unlocked_ioctl = hyper_dmabuf_ioctl,
 };
 
@@ -597,7 +634,7 @@ int register_device(void)
 	ret = misc_register(&hyper_dmabuf_miscdev);
 
 	if (ret) {
-		printk(KERN_WARNING "hyper_dmabuf: driver can't be registered\n");
+		printk(KERN_ERR "hyper_dmabuf: driver can't be registered\n");
 		return ret;
 	}
 
@@ -606,22 +643,14 @@ int register_device(void)
 	/* TODO: Check if there is a different way to initialize dma mask nicely */
 	dma_coerce_mask_and_coherent(hyper_dmabuf_private.device, 0xFFFFFFFF);
 
-	/* TODO find a way to provide parameters for below function or move that to ioctl */
-/*	err = bind_interdomain_evtchn_to_irqhandler(rdomain, evtchn,
-				src_sink_isr, PORT_NUM, "remote_domain", &info);
-	if (err < 0) {
-		printk("hyper_dmabuf: can't register interrupt handlers\n");
-		return -EFAULT;
-	}
-
-	info.irq = err;
-*/
 	return ret;
 }
 
 /*-----------------------------------------------------------------------------------------------*/
 void unregister_device(void)
 {
-	printk( KERN_NOTICE "hyper_dmabuf: unregister_device() is called" );
+	dev_info(hyper_dmabuf_private.device,
+		 "hyper_dmabuf: unregister_device() is called\n");
+
 	misc_deregister(&hyper_dmabuf_miscdev);
 }
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
index 8355e30..ebfbb84 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
@@ -25,7 +25,7 @@
 #ifndef __HYPER_DMABUF_IOCTL_H__
 #define __HYPER_DMABUF_IOCTL_H__
 
-typedef int (*hyper_dmabuf_ioctl_t)(void *data);
+typedef int (*hyper_dmabuf_ioctl_t)(struct file *filp, void *data);
 
 struct hyper_dmabuf_ioctl_desc {
 	unsigned int cmd;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
index 2cb4bb4..c1285eb 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
@@ -221,3 +221,17 @@ int hyper_dmabuf_remove_imported(int id)
 
 	return -ENOENT;
 }
+
+void hyper_dmabuf_foreach_exported(
+	void (*func)(struct hyper_dmabuf_sgt_info *, void *attr),
+	void *attr)
+{
+	struct hyper_dmabuf_info_entry_exported *info_entry;
+	struct hlist_node *tmp;
+	int bkt;
+
+	hash_for_each_safe(hyper_dmabuf_hash_exported, bkt, tmp,
+			info_entry, node) {
+		func(info_entry->info, attr);
+	}
+}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
index 35dc722..925b0d1 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
@@ -61,6 +61,10 @@ int hyper_dmabuf_remove_exported(int id);
 
 int hyper_dmabuf_remove_imported(int id);
 
+void hyper_dmabuf_foreach_exported(
+	void (*func)(struct hyper_dmabuf_sgt_info *, void *attr),
+	void *attr);
+
 int hyper_dmabuf_register_sysfs(struct device *dev);
 int hyper_dmabuf_unregister_sysfs(struct device *dev);
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
index a41fd0a..9952b3f 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -80,6 +80,13 @@ struct hyper_dmabuf_sgt_info {
 	void *refs_info; /* hypervisor-specific info for the references */
 	struct delayed_work unexport_work;
 	bool unexport_scheduled;
+	/* owner of buffer
+	 * TODO: this is naive, as the buffer may be reused by
+	 * another userspace app; a list of struct file should be kept
+	 * here and emergency unexport executed only after the last
+	 * user of the buffer releases the hyper_dmabuf device
+	 */
+	struct file *filp;
 	int private[4]; /* device specific info (e.g. image's meta info?) */
 };
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 30/60] hyper_dmabuf: free already mapped pages when error happens
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

From: Mateusz Polrola <mateuszx.potrola@intel.com>

Already-mapped pages need to be freed if an error occurs before
all of the pages have been mapped.
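
The error paths added below follow the kernel's usual staged-cleanup
pattern; a rough self-contained sketch of the idea (the stage functions
are illustrative stand-ins, not taken from this driver):

  #include <errno.h>

  static int map_level3(void)    { return 0; }
  static int map_level2(void)    { return 0; }
  static int map_data(void)      { return -1; }   /* simulated failure */
  static void unmap_level2(void) { }
  static void unmap_level3(void) { }

  /* Every stage that succeeded before the failure is undone in reverse
   * order by falling through the labels, just as the patch does with
   * its error_cleanup_* labels and handle = -1 sentinels. */
  static int map_all(void)
  {
          if (map_level3())
                  goto err;
          if (map_level2())
                  goto err_unmap_lvl3;
          if (map_data())
                  goto err_unmap_lvl2;
          return 0;

  err_unmap_lvl2:
          unmap_level2();
  err_unmap_lvl3:
          unmap_level3();
  err:
          return -EFAULT;
  }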

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c    | 43 +++++++++++++++++++---
 1 file changed, 38 insertions(+), 5 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
index c03e5a0..524f75c 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
@@ -255,7 +255,7 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 	if (lvl3_map_ops.status) {
 		dev_err(hyper_dmabuf_private.device, "HYPERVISOR map grant ref failed status = %d",
 			lvl3_map_ops.status);
-		return NULL;
+		goto error_cleanup_lvl3;
 	} else {
 		lvl3_unmap_ops.handle = lvl3_map_ops.handle;
 	}
@@ -263,7 +263,7 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 	/* Map all second level pages */
 	if (gnttab_alloc_pages(n_lvl2_grefs, lvl2_table_pages)) {
 		dev_err(hyper_dmabuf_private.device, "Cannot allocate pages\n");
-		return NULL;
+		goto error_cleanup_lvl3;
 	}
 
 	for (i = 0; i < n_lvl2_grefs; i++) {
@@ -277,6 +277,9 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 	if (gnttab_unmap_refs(&lvl3_unmap_ops, NULL, &lvl3_table_page, 1)) {
 		dev_err(hyper_dmabuf_private.device, "xen: cannot unmap top level page\n");
 		return NULL;
+	} else {
+		/* Mark that page was unmapped */
+		lvl3_unmap_ops.handle = -1;
 	}
 
 	if (gnttab_map_refs(lvl2_map_ops, NULL, lvl2_table_pages, n_lvl2_grefs)) {
@@ -290,7 +293,7 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 			dev_err(hyper_dmabuf_private.device,
 				"HYPERVISOR map grant ref failed status = %d",
 				lvl2_map_ops[i].status);
-			return NULL;
+			goto error_cleanup_lvl2;
 		} else {
 			lvl2_unmap_ops[i].handle = lvl2_map_ops[i].handle;
 		}
@@ -298,7 +301,7 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 
 	if (gnttab_alloc_pages(nents, data_pages)) {
 		dev_err(hyper_dmabuf_private.device, "Cannot allocate pages\n");
-		return NULL;
+		goto error_cleanup_lvl2;
 	}
 
 	k = 0;
@@ -343,6 +346,11 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 			      n_lvl2_grefs)) {
 		dev_err(hyper_dmabuf_private.device, "Cannot unmap 2nd level refs\n");
 		return NULL;
+	} else {
+		/* Mark that pages were unmapped */
+		for (i = 0; i < n_lvl2_grefs; i++) {
+			lvl2_unmap_ops[i].handle = -1;
+		}
 	}
 
 	for (i = 0; i < nents; i++) {
@@ -350,7 +358,7 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 			dev_err(hyper_dmabuf_private.device,
 				"HYPERVISOR map grant ref failed status = %d\n",
 				data_map_ops[i].status);
-			return NULL;
+			goto error_cleanup_data;
 		} else {
 			data_unmap_ops[i].handle = data_map_ops[i].handle;
 		}
@@ -369,6 +377,31 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 
 	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
 	return data_pages;
+
+error_cleanup_data:
+	gnttab_unmap_refs(data_unmap_ops, NULL, data_pages,
+			  nents);
+
+	gnttab_free_pages(nents, data_pages);
+
+error_cleanup_lvl2:
+	if (lvl2_unmap_ops[0].handle != -1)
+		gnttab_unmap_refs(lvl2_unmap_ops, NULL, lvl2_table_pages,
+				  n_lvl2_grefs);
+	gnttab_free_pages(n_lvl2_grefs, lvl2_table_pages);
+
+error_cleanup_lvl3:
+	if (lvl3_unmap_ops.handle != -1)
+		gnttab_unmap_refs(&lvl3_unmap_ops, NULL, &lvl3_table_page, 1);
+	gnttab_free_pages(1, &lvl3_table_page);
+
+	kfree(lvl2_table_pages);
+	kfree(lvl2_map_ops);
+	kfree(lvl2_unmap_ops);
+	kfree(data_map_ops);
+
+
+	return NULL;
 }
 
 int hyper_dmabuf_xen_unmap_shared_pages(void **refs_info, int nents) {
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 31/60] hyper_dmabuf: built-in compilation option
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

From: Mateusz Polrola <mateuszx.potrola@intel.com>

Enable the built-in compilation option for the hyper_dmabuf
driver. Also, move backend initialization into open() to
remove its dependency on the kernel boot sequence.

hyper_dmabuf.h is now installed as one of the kernel's
standard UAPI headers.

This patch also adds missing allocation-failure checks and
fixes error-path cleanups in various places.

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/Kconfig                   |   1 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        |  17 ++--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        |   1 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c         |  14 +++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c        |  13 ++-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 113 +++++++++++++++++----
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c       |  15 +++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        |  20 ++++
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c    |  32 +++++-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   |   6 ++
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c  |  15 +++
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c    |   6 ++
 include/uapi/xen/Kbuild                            |   6 ++
 13 files changed, 227 insertions(+), 32 deletions(-)
 create mode 100644 include/uapi/xen/Kbuild

diff --git a/drivers/xen/hyper_dmabuf/Kconfig b/drivers/xen/hyper_dmabuf/Kconfig
index 56633a2..185fdf8 100644
--- a/drivers/xen/hyper_dmabuf/Kconfig
+++ b/drivers/xen/hyper_dmabuf/Kconfig
@@ -14,6 +14,7 @@ config HYPER_DMABUF_XEN
 config HYPER_DMABUF_SYSFS
 	bool "Enable sysfs information about hyper DMA buffers"
 	default y
+	depends on HYPER_DMABUF
 	help
 	  Expose information about imported and exported buffers using
 	  hyper_dmabuf driver
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index a12d4dc..92d710e 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -66,6 +66,15 @@ static int __init hyper_dmabuf_drv_init(void)
 #ifdef CONFIG_HYPER_DMABUF_XEN
 	hyper_dmabuf_private.backend_ops = &xen_backend_ops;
 #endif
+	/*
+	 * Defer backend setup to the first open() call.
+	 * Some hypervisors, e.g. Xen, depend on userspace daemons
+	 * such as xenstored; any xenstore call made from the kernel
+	 * blocks until that daemon has started. If the module is
+	 * built in, that would stall the entire kernel
+	 * initialization.
+	 */
+	hyper_dmabuf_private.backend_initialized = false;
 
 	dev_info(hyper_dmabuf_private.device,
 		 "initializing database for imported/exported dmabufs\n");
@@ -73,7 +82,6 @@ static int __init hyper_dmabuf_drv_init(void)
 	/* device structure initialization */
 	/* currently only does work-queue initialization */
 	hyper_dmabuf_private.work_queue = create_workqueue("hyper_dmabuf_wqueue");
-	hyper_dmabuf_private.domid = hyper_dmabuf_private.backend_ops->get_vm_id();
 
 	ret = hyper_dmabuf_table_init();
 	if (ret < 0) {
@@ -82,13 +90,6 @@ static int __init hyper_dmabuf_drv_init(void)
 		return ret;
 	}
 
-	ret = hyper_dmabuf_private.backend_ops->init_comm_env();
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"failed to initiailize hypervisor-specific comm env\n");
-		return ret;
-	}
-
 #ifdef CONFIG_HYPER_DMABUF_SYSFS
 	ret = hyper_dmabuf_register_sysfs(hyper_dmabuf_private.device);
 	if (ret < 0) {
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
index 8445416..91fda04 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -77,6 +77,7 @@ struct hyper_dmabuf_private {
 	/* backend ops - hypervisor specific */
 	struct hyper_dmabuf_backend_ops *backend_ops;
 	struct mutex lock;
+	bool backend_initialized;
 };
 
 #endif /* __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
index 35bfdfb..fe95091 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
@@ -40,6 +40,13 @@ void store_reusable_id(int id)
 	struct list_reusable_id *new_reusable;
 
 	new_reusable = kmalloc(sizeof(*new_reusable), GFP_KERNEL);
+
+	if (!new_reusable) {
+		dev_err(hyper_dmabuf_private.device,
+			"No memory left to be allocated\n");
+		return;
+	}
+
 	new_reusable->id = id;
 
 	list_add(&new_reusable->list, &reusable_head->list);
@@ -94,6 +101,13 @@ int hyper_dmabuf_get_id(void)
 	/* first cla to hyper_dmabuf_get_id */
 	if (id == 0) {
 		reusable_head = kmalloc(sizeof(*reusable_head), GFP_KERNEL);
+
+		if (!reusable_head) {
+			dev_err(hyper_dmabuf_private.device,
+				"No memory left to be allocated\n");
+			return -ENOMEM;
+		}
+
 		reusable_head->id = -1; /* list head have invalid id */
 		INIT_LIST_HEAD(&reusable_head->list);
 		hyper_dmabuf_private.id_queue = reusable_head;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
index 5a034ffb..34dfa18 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
@@ -270,6 +270,12 @@ inline int hyper_dmabuf_sync_request(int id, int dmabuf_ops)
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
+	if (!req) {
+		dev_err(hyper_dmabuf_private.device,
+			"No memory left to be allocated\n");
+		return -ENOMEM;
+	}
+
 	hyper_dmabuf_create_request(req, HYPER_DMABUF_OPS_TO_SOURCE, &operands[0]);
 
 	/* send request and wait for a response */
@@ -366,8 +372,11 @@ static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachme
 	return st;
 
 err_free_sg:
-	sg_free_table(st);
-	kfree(st);
+	if (st) {
+		sg_free_table(st);
+		kfree(st);
+	}
+
 	return NULL;
 }
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index fa700f2..c0048d9 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -115,22 +115,24 @@ static int hyper_dmabuf_export_remote(struct file *filp, void *data)
 	 */
 	ret = hyper_dmabuf_find_id_exported(dma_buf, export_remote_attr->remote_domain);
 	sgt_info = hyper_dmabuf_find_exported(ret);
-	if (ret != -ENOENT && sgt_info->valid) {
-		/*
-		 * Check if unexport is already scheduled for that buffer,
-		 * if so try to cancel it. If that will fail, buffer needs
-		 * to be reexport once again.
-		 */
-		if (sgt_info->unexport_scheduled) {
-			if (!cancel_delayed_work_sync(&sgt_info->unexport_work)) {
-				dma_buf_put(dma_buf);
-				goto reexport;
+	if (ret != -ENOENT && sgt_info != NULL) {
+		if (sgt_info->valid) {
+			/*
+			 * Check if unexport is already scheduled for that buffer;
+			 * if so, try to cancel it. If cancelling fails, the buffer
+			 * needs to be re-exported once again.
+			 */
+			if (sgt_info->unexport_scheduled) {
+				if (!cancel_delayed_work_sync(&sgt_info->unexport_work)) {
+					dma_buf_put(dma_buf);
+					goto reexport;
+				}
+				sgt_info->unexport_scheduled = 0;
 			}
-			sgt_info->unexport_scheduled = 0;
+			dma_buf_put(dma_buf);
+			export_remote_attr->hyper_dmabuf_id = ret;
+			return 0;
 		}
-		dma_buf_put(dma_buf);
-		export_remote_attr->hyper_dmabuf_id = ret;
-		return 0;
 	}
 
 reexport:
@@ -162,9 +164,32 @@ static int hyper_dmabuf_export_remote(struct file *filp, void *data)
 	sgt_info->valid = 1;
 
 	sgt_info->active_sgts = kmalloc(sizeof(struct sgt_list), GFP_KERNEL);
+	if (!sgt_info->active_sgts) {
+		dev_err(hyper_dmabuf_private.device, "no more space left\n");
+		ret = -ENOMEM;
+		goto fail_map_active_sgts;
+	}
+
 	sgt_info->active_attached = kmalloc(sizeof(struct attachment_list), GFP_KERNEL);
+	if (!sgt_info->active_attached) {
+		dev_err(hyper_dmabuf_private.device, "no more space left\n");
+		ret = -ENOMEM;
+		goto fail_map_active_attached;
+	}
+
 	sgt_info->va_kmapped = kmalloc(sizeof(struct kmap_vaddr_list), GFP_KERNEL);
+	if (!sgt_info->va_kmapped) {
+		dev_err(hyper_dmabuf_private.device, "no more space left\n");
+		ret = -ENOMEM;
+		goto fail_map_va_kmapped;
+	}
+
 	sgt_info->va_vmapped = kmalloc(sizeof(struct vmap_vaddr_list), GFP_KERNEL);
+	if (!sgt_info->va_vmapped) {
+		dev_err(hyper_dmabuf_private.device, "no more space left\n");
+		ret = -ENOMEM;
+		goto fail_map_va_vmapped;
+	}
 
 	sgt_info->active_sgts->sgt = sgt;
 	sgt_info->active_attached->attach = attachment;
@@ -211,6 +236,11 @@ static int hyper_dmabuf_export_remote(struct file *filp, void *data)
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
+	if (!req) {
+		dev_err(hyper_dmabuf_private.device, "no more space left\n");
+		ret = -ENOMEM;
+		goto fail_map_req;
+	}
 	/* composing a message to the importer */
 	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT, &operands[0]);
 
@@ -233,6 +263,8 @@ static int hyper_dmabuf_export_remote(struct file *filp, void *data)
 
 fail_send_request:
 	kfree(req);
+
+fail_map_req:
 	hyper_dmabuf_remove_exported(sgt_info->hyper_dmabuf_id);
 
 fail_export:
@@ -242,10 +274,14 @@ static int hyper_dmabuf_export_remote(struct file *filp, void *data)
 	dma_buf_detach(sgt_info->dma_buf, sgt_info->active_attached->attach);
 	dma_buf_put(sgt_info->dma_buf);
 
-	kfree(sgt_info->active_attached);
-	kfree(sgt_info->active_sgts);
-	kfree(sgt_info->va_kmapped);
 	kfree(sgt_info->va_vmapped);
+fail_map_va_vmapped:
+	kfree(sgt_info->va_kmapped);
+fail_map_va_kmapped:
+	kfree(sgt_info->active_sgts);
+fail_map_active_sgts:
+	kfree(sgt_info->active_attached);
+fail_map_active_attached:
 
 	return ret;
 }
@@ -288,6 +324,13 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 	dev_dbg(hyper_dmabuf_private.device, "Exporting fd of buffer %d\n", operand);
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+
+	if (!req) {
+		dev_err(hyper_dmabuf_private.device,
+			"No memory left to be allocated\n");
+		return -ENOMEM;
+	}
+
 	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT_FD, &operand);
 
 	ret = ops->send_req(HYPER_DMABUF_DOM_ID(operand), req, true);
@@ -381,6 +424,12 @@ static void hyper_dmabuf_delayed_unexport(struct work_struct *work)
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
+	if (!req) {
+		dev_err(hyper_dmabuf_private.device,
+			"No memory left to be allocated\n");
+		return;
+	}
+
 	hyper_dmabuf_create_request(req, HYPER_DMABUF_NOTIFY_UNEXPORT, &hyper_dmabuf_id);
 
 	/* Now send unexport request to remote domain, marking that buffer should not be used anymore */
@@ -540,6 +589,11 @@ static long hyper_dmabuf_ioctl(struct file *filp,
 	hyper_dmabuf_ioctl_t func;
 	char *kdata;
 
+	if (nr >= ARRAY_SIZE(hyper_dmabuf_ioctls)) {
+		dev_err(hyper_dmabuf_private.device, "invalid ioctl\n");
+		return -EINVAL;
+	}
+
 	ioctl = &hyper_dmabuf_ioctls[nr];
 
 	func = ioctl->func;
@@ -574,11 +628,34 @@ static long hyper_dmabuf_ioctl(struct file *filp,
 
 int hyper_dmabuf_open(struct inode *inode, struct file *filp)
 {
+	int ret = 0;
+
 	/* Do not allow exclusive open */
 	if (filp->f_flags & O_EXCL)
 		return -EBUSY;
 
-	return 0;
+	/*
+	 * Initialize the backend if needed; hold the mutex to
+	 * prevent a race when two userspace apps open the
+	 * device at the same time.
+	 */
+	mutex_lock(&hyper_dmabuf_private.lock);
+
+	if (!hyper_dmabuf_private.backend_initialized) {
+		hyper_dmabuf_private.domid = hyper_dmabuf_private.backend_ops->get_vm_id();
+
+		ret = hyper_dmabuf_private.backend_ops->init_comm_env();
+		if (ret < 0) {
+			dev_err(hyper_dmabuf_private.device,
+				"failed to initialize hypervisor-specific comm env\n");
+		} else {
+			hyper_dmabuf_private.backend_initialized = true;
+		}
+	}
+
+	mutex_unlock(&hyper_dmabuf_private.lock);
+
+	return ret;
 }
 
 static void hyper_dmabuf_emergency_release(struct hyper_dmabuf_sgt_info* sgt_info,
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
index c1285eb..90c8c56 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
@@ -34,8 +34,11 @@
 #include <asm/uaccess.h>
 #include <linux/hashtable.h>
 #include <linux/dma-buf.h>
+#include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_list.h"
 
+extern struct hyper_dmabuf_private hyper_dmabuf_private;
+
 DECLARE_HASHTABLE(hyper_dmabuf_hash_imported, MAX_ENTRY_IMPORTED);
 DECLARE_HASHTABLE(hyper_dmabuf_hash_exported, MAX_ENTRY_EXPORTED);
 
@@ -132,6 +135,12 @@ int hyper_dmabuf_register_exported(struct hyper_dmabuf_sgt_info *info)
 
 	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
 
+	if (!info_entry) {
+		dev_err(hyper_dmabuf_private.device,
+			"No memory left to be allocated\n");
+		return -ENOMEM;
+	}
+
 	info_entry->info = info;
 
 	hash_add(hyper_dmabuf_hash_exported, &info_entry->node,
@@ -146,6 +155,12 @@ int hyper_dmabuf_register_imported(struct hyper_dmabuf_imported_sgt_info* info)
 
 	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
 
+	if (!info_entry) {
+		dev_err(hyper_dmabuf_private.device,
+			"No memory left to be allocated\n");
+		return -ENOMEM;
+	}
+
 	info_entry->info = info;
 
 	hash_add(hyper_dmabuf_hash_imported, &info_entry->node,
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index 3111cdc..5f64261 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -134,6 +134,13 @@ void cmd_process_work(struct work_struct *work)
 		 * operands5~8 : Driver-specific private data (e.g. graphic buffer's meta info)
 		 */
 		imported_sgt_info = kcalloc(1, sizeof(*imported_sgt_info), GFP_KERNEL);
+
+		if (!imported_sgt_info) {
+			dev_err(hyper_dmabuf_private.device,
+				"No memory left to be allocated\n");
+			break;
+		}
+
 		imported_sgt_info->hyper_dmabuf_id = req->operands[0];
 		imported_sgt_info->frst_ofst = req->operands[2];
 		imported_sgt_info->last_len = req->operands[3];
@@ -288,9 +295,22 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 		"%s: putting request to workqueue\n", __func__);
 	temp_req = kmalloc(sizeof(*temp_req), GFP_KERNEL);
 
+	if (!temp_req) {
+		dev_err(hyper_dmabuf_private.device,
+			"No memory left to be allocated\n");
+		return -ENOMEM;
+	}
+
 	memcpy(temp_req, req, sizeof(*temp_req));
 
 	proc = kcalloc(1, sizeof(struct cmd_process), GFP_KERNEL);
+
+	if (!proc) {
+		dev_err(hyper_dmabuf_private.device,
+			"No memory left to be allocated\n");
+		return -ENOMEM;
+	}
+
 	proc->rq = temp_req;
 	proc->domid = domid;
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
index 0eded61..2dab833 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -78,6 +78,12 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 	case HYPER_DMABUF_OPS_ATTACH:
 		attachl = kcalloc(1, sizeof(*attachl), GFP_KERNEL);
 
+		if (!attachl) {
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_ATTACH\n");
+			return -ENOMEM;
+		}
+
 		attachl->attach = dma_buf_attach(sgt_info->dma_buf,
 						 hyper_dmabuf_private.device);
 
@@ -85,7 +91,7 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 			kfree(attachl);
 			dev_err(hyper_dmabuf_private.device,
 				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_ATTACH\n");
-			return PTR_ERR(attachl->attach);
+			return -ENOMEM;
 		}
 
 		list_add(&attachl->list, &sgt_info->active_attached->list);
@@ -121,12 +127,19 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 					   struct attachment_list, list);
 
 		sgtl = kcalloc(1, sizeof(*sgtl), GFP_KERNEL);
+
+		if (!sgtl) {
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_MAP\n");
+			return -ENOMEM;
+		}
+
 		sgtl->sgt = dma_buf_map_attachment(attachl->attach, DMA_BIDIRECTIONAL);
 		if (!sgtl->sgt) {
 			kfree(sgtl);
 			dev_err(hyper_dmabuf_private.device,
 				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_MAP\n");
-			return PTR_ERR(sgtl->sgt);
+			return -ENOMEM;
 		}
 		list_add(&sgtl->list, &sgt_info->active_sgts->list);
 		break;
@@ -201,6 +214,11 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 	case HYPER_DMABUF_OPS_KMAP_ATOMIC:
 	case HYPER_DMABUF_OPS_KMAP:
 		va_kmapl = kcalloc(1, sizeof(*va_kmapl), GFP_KERNEL);
+		if (!va_kmapl) {
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
+			return -ENOMEM;
+		}
 
 		/* dummy kmapping of 1 page */
 		if (ops == HYPER_DMABUF_OPS_KMAP_ATOMIC)
@@ -212,7 +230,7 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 			kfree(va_kmapl);
 			dev_err(hyper_dmabuf_private.device,
 				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
-			return PTR_ERR(va_kmapl->vaddr);
+			return -ENOMEM;
 		}
 		list_add(&va_kmapl->list, &sgt_info->va_kmapped->list);
 		break;
@@ -255,6 +273,12 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 	case HYPER_DMABUF_OPS_VMAP:
 		va_vmapl = kcalloc(1, sizeof(*va_vmapl), GFP_KERNEL);
 
+		if (!va_vmapl) {
+			dev_err(hyper_dmabuf_private.device,
+				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VMAP\n");
+			return -ENOMEM;
+		}
+
 		/* dummy vmapping */
 		va_vmapl->vaddr = dma_buf_vmap(sgt_info->dma_buf);
 
@@ -262,7 +286,7 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 			kfree(va_vmapl);
 			dev_err(hyper_dmabuf_private.device,
 				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VMAP\n");
-			return PTR_ERR(va_vmapl->vaddr);
+			return -ENOMEM;
 		}
 		list_add(&va_vmapl->list, &sgt_info->va_vmapped->list);
 		break;
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index ce9862a..43dd3b6 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -381,6 +381,12 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 
 	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
 
+	if (!ring_info) {
+		dev_err(hyper_dmabuf_private.device,
+			"No memory left to be allocated\n");
+		return -ENOMEM;
+	}
+
 	ring_info->sdomain = domid;
 	ring_info->evtchn = rx_port;
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
index 2f469da..4708b49 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
@@ -34,9 +34,12 @@
 #include <asm/uaccess.h>
 #include <linux/hashtable.h>
 #include <xen/grant_table.h>
+#include "../hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_xen_comm.h"
 #include "hyper_dmabuf_xen_comm_list.h"
 
+extern struct hyper_dmabuf_private hyper_dmabuf_private;
+
 DECLARE_HASHTABLE(xen_comm_tx_ring_hash, MAX_ENTRY_TX_RING);
 DECLARE_HASHTABLE(xen_comm_rx_ring_hash, MAX_ENTRY_RX_RING);
 
@@ -52,6 +55,12 @@ int xen_comm_add_tx_ring(struct xen_comm_tx_ring_info *ring_info)
 
 	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
 
+	if (!info_entry) {
+		dev_err(hyper_dmabuf_private.device,
+			"No memory left to be allocated\n");
+		return -ENOMEM;
+	}
+
 	info_entry->info = ring_info;
 
 	hash_add(xen_comm_tx_ring_hash, &info_entry->node,
@@ -66,6 +75,12 @@ int xen_comm_add_rx_ring(struct xen_comm_rx_ring_info *ring_info)
 
 	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
 
+	if (!info_entry) {
+		dev_err(hyper_dmabuf_private.device,
+			"No memory left to be allocated\n");
+		return -ENOMEM;
+	}
+
 	info_entry->info = ring_info;
 
 	hash_add(xen_comm_rx_ring_hash, &info_entry->node,
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
index 524f75c..c6a2993 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
@@ -96,6 +96,12 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 	lvl2_table = (grant_ref_t *)__get_free_pages(GFP_KERNEL, n_lvl2_grefs);
 
 	sh_pages_info = kmalloc(sizeof(*sh_pages_info), GFP_KERNEL);
+
+	if (!sh_pages_info) {
+		dev_err(hyper_dmabuf_private.device, "No more space left\n");
+		return -ENOMEM;
+	}
+
 	*refs_info = (void *)sh_pages_info;
 
 	/* share data pages in rw mode*/
diff --git a/include/uapi/xen/Kbuild b/include/uapi/xen/Kbuild
new file mode 100644
index 0000000..bf81f42
--- /dev/null
+++ b/include/uapi/xen/Kbuild
@@ -0,0 +1,6 @@
+# UAPI Header export list
+header-y += evtchn.h
+header-y += gntalloc.h
+header-y += gntdev.h
+header-y += privcmd.h
+header-y += hyper_dmabuf.h
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread
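
The deferred-initialization scheme introduced here reduces to a flag
guarded by a mutex, so that the first open() performs backend setup
exactly once. A simplified sketch of the pattern, where
do_backend_init() is a hypothetical stand-in for the init_comm_env hook:

#include <linux/fs.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(init_lock);
static bool initialized;

static int dev_open(struct inode *inode, struct file *filp)
{
	int ret = 0;

	mutex_lock(&init_lock);
	if (!initialized) {
		ret = do_backend_init();	/* hypothetical setup hook */
		if (ret == 0)
			initialized = true;
	}
	mutex_unlock(&init_lock);

	return ret;
}

Later opens take the mutex briefly, see the flag already set, and return
immediately, so the cost is a single uncontended lock per open().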

* [RFC PATCH 32/60] hyper_dmabuf: make all shared pages read-only
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

All shared pages need to be read-only from the importer's
point of view to prevent the buffer from being corrupted.

This patch may need to be reverted if we find a better
way to protect the original content in this sharing
model.

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
index c6a2993..1416a69 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
@@ -104,24 +104,24 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 
 	*refs_info = (void *)sh_pages_info;
 
-	/* share data pages in rw mode*/
+	/* share data pages in readonly mode for security */
 	for (i=0; i<nents; i++) {
 		lvl2_table[i] = gnttab_grant_foreign_access(domid,
 							    pfn_to_mfn(page_to_pfn(pages[i])),
-							    0);
+							    true /* read-only from remote domain */);
 	}
 
 	/* Share 2nd level addressing pages in readonly mode*/
 	for (i=0; i< n_lvl2_grefs; i++) {
 		lvl3_table[i] = gnttab_grant_foreign_access(domid,
 							    virt_to_mfn((unsigned long)lvl2_table+i*PAGE_SIZE ),
-							    1);
+							    true);
 	}
 
 	/* Share lvl3_table in readonly mode*/
 	lvl3_gref = gnttab_grant_foreign_access(domid,
 						virt_to_mfn((unsigned long)lvl3_table),
-						1);
+						true);
 
 
 	/* Store lvl3_table page to be freed later */
@@ -317,12 +317,12 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 		for (j = 0; j < REFS_PER_PAGE; j++) {
 			gnttab_set_map_op(&data_map_ops[k],
 					  (unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
-					  GNTMAP_host_map,
+					  GNTMAP_host_map | GNTMAP_readonly,
 					  lvl2_table[j], domid);
 
 			gnttab_set_unmap_op(&data_unmap_ops[k],
 					    (unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
-					    GNTMAP_host_map, -1);
+					    GNTMAP_host_map | GNTMAP_readonly, -1);
 			k++;
 		}
 	}
@@ -333,12 +333,12 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 	for (j = 0; j < nents_last; j++) {
 		gnttab_set_map_op(&data_map_ops[k],
 				  (unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
-				  GNTMAP_host_map,
+				  GNTMAP_host_map | GNTMAP_readonly,
 				  lvl2_table[j], domid);
 
 		gnttab_set_unmap_op(&data_unmap_ops[k],
 				    (unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
-				    GNTMAP_host_map, -1);
+				    GNTMAP_host_map | GNTMAP_readonly, -1);
 		k++;
 	}
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread
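
Note the pairing here: once the exporter grants a page read-only, the
importer's map operation must also carry GNTMAP_readonly, because the
hypervisor rejects a writable mapping of a read-only grant.
Schematically (error handling omitted; 'page', 'local_page' and
'map_op' are illustrative names):

/* Exporter side: grant 'page' to domain 'domid' read-only. */
grant_ref_t gref = gnttab_grant_foreign_access(domid,
				pfn_to_mfn(page_to_pfn(page)),
				true /* readonly */);

/* Importer side: request a read-only host mapping of that grant. */
gnttab_set_map_op(&map_op,
		  (unsigned long)pfn_to_kaddr(page_to_pfn(local_page)),
		  GNTMAP_host_map | GNTMAP_readonly,
		  gref, domid);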

* [RFC PATCH 33/60] hyper_dmabuf: error checking on the result of dma_buf_map_attachment
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

From: Mateusz Polrola <mateuszx.potrola@intel.com>

Add error checking on the result of the function call
dma_buf_map_attachment().

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index c0048d9..476c0d7 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -149,6 +149,11 @@ static int hyper_dmabuf_export_remote(struct file *filp, void *data)
 
 	sgt = dma_buf_map_attachment(attachment, DMA_BIDIRECTIONAL);
 
+	if (IS_ERR(sgt)) {
+		dev_err(hyper_dmabuf_private.device, "Cannot map attachment\n");
+		return PTR_ERR(sgt);
+	}
+
 	sgt_info = kcalloc(1, sizeof(*sgt_info), GFP_KERNEL);
 
 	if(!sgt_info) {
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread
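
dma_buf_map_attachment() reports failure through ERR_PTR() rather than
by returning NULL, so the result has to be tested with IS_ERR() and
decoded with PTR_ERR(), as in this minimal sketch:

	struct sg_table *sgt;

	sgt = dma_buf_map_attachment(attachment, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt))
		return PTR_ERR(sgt);	/* e.g. -ENOMEM or -EINVAL */

One caveat with the hunk above: returning straight from this point
leaks the attachment taken just before it, so a fuller fix would also
detach and drop the dma_buf reference on this path.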

* [RFC PATCH 34/60] hyper_dmabuf: extend DMA bitmask to 64-bits.
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

From: Mateusz Polrola <mateuszx.potrola@intel.com>

Extend the DMA bitmask of the hyper_dmabuf device to cover
the whole address space the driver may access.

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 476c0d7..f7d98c1 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -723,7 +723,7 @@ int register_device(void)
 	hyper_dmabuf_private.device = hyper_dmabuf_miscdev.this_device;
 
 	/* TODO: Check if there is a different way to initialize dma mask nicely */
-	dma_coerce_mask_and_coherent(hyper_dmabuf_private.device, 0xFFFFFFFF);
+	dma_coerce_mask_and_coherent(hyper_dmabuf_private.device, DMA_BIT_MASK(64));
 
 	return ret;
 }
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread
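
For reference, DMA_BIT_MASK(n) from <linux/dma-mapping.h> expands to
((1ULL << (n)) - 1) for n < 64 and to ~0ULL for n == 64, and
dma_coerce_mask_and_coherent() applies the mask to both the streaming
and the coherent DMA paths:

	/* Accept DMA addresses anywhere in the 64-bit space. */
	ret = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(64));
	if (ret)
		dev_warn(dev, "failed to set 64-bit DMA mask\n");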

* [RFC PATCH 35/60] hyper_dmabuf: 128bit hyper_dmabuf_id with random keys
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

The length of hyper_dmabuf_id is increased to 128 bits by adding
a random 96-bit key to the id. This prevents an unauthorized
application on the importing VM from guessing a valid id.

hyper_dmabuf_id_t is now defined as,

	typedef struct {
	    int id;
	    int rng_key[3];
	} hyper_dmabuf_id_t;
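
The 96-bit rng_key would typically be filled from the kernel RNG via
get_random_bytes() from <linux/random.h>; a plausible sketch, with
'new_id' standing in for whatever id the driver assigns (the actual
generation code lives in hyper_dmabuf_id.c and is only partly shown
below):

	hyper_dmabuf_id_t hid;

	hid.id = new_id;	/* driver-assigned id, as before */
	get_random_bytes(hid.rng_key, sizeof(hid.rng_key));	/* 96 random bits */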

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        |   2 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        |   3 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c         |  57 ++++--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h         |  17 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c        |  51 +++---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 199 +++++++++++++--------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c       |  87 ++++++---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h       |  10 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        | 115 +++++++-----
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h        |   2 +-
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c    |  21 ++-
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h    |   2 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h     |  20 ++-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h   |   2 -
 include/uapi/xen/hyper_dmabuf.h                    |  13 +-
 15 files changed, 372 insertions(+), 229 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 92d710e..c802c3e 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -30,9 +30,9 @@
 #include <linux/module.h>
 #include <linux/workqueue.h>
 #include <linux/device.h>
+#include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_conf.h"
 #include "hyper_dmabuf_msg.h"
-#include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_id.h"
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
index 91fda04..ffe4d53 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -26,11 +26,12 @@
 #define __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
 
 #include <linux/device.h>
+#include <xen/hyper_dmabuf.h>
 
 struct hyper_dmabuf_req;
 
 struct list_reusable_id {
-	int id;
+	hyper_dmabuf_id_t hid;
 	struct list_head list;
 };
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
index fe95091..f59dee3 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
@@ -28,13 +28,14 @@
 
 #include <linux/list.h>
 #include <linux/slab.h>
-#include "hyper_dmabuf_msg.h"
+#include <linux/random.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_id.h"
+#include "hyper_dmabuf_msg.h"
 
 extern struct hyper_dmabuf_private hyper_dmabuf_private;
 
-void store_reusable_id(int id)
+void store_reusable_hid(hyper_dmabuf_id_t hid)
 {
 	struct list_reusable_id *reusable_head = hyper_dmabuf_private.id_queue;
 	struct list_reusable_id *new_reusable;
@@ -47,15 +48,15 @@ void store_reusable_id(int id)
 		return;
 	}
 
-	new_reusable->id = id;
+	new_reusable->hid = hid;
 
 	list_add(&new_reusable->list, &reusable_head->list);
 }
 
-static int retrieve_reusable_id(void)
+static hyper_dmabuf_id_t retrieve_reusable_hid(void)
 {
 	struct list_reusable_id *reusable_head = hyper_dmabuf_private.id_queue;
-	int id;
+	hyper_dmabuf_id_t hid = {-1, {0,0,0}};
 
 	/* check there is reusable id */
 	if (!list_empty(&reusable_head->list)) {
@@ -64,12 +65,11 @@ static int retrieve_reusable_id(void)
 						 list);
 
 		list_del(&reusable_head->list);
-		id = reusable_head->id;
+		hid = reusable_head->hid;
 		kfree(reusable_head);
-		return id;
 	}
 
-	return -ENOENT;
+	return hid;
 }
 
 void destroy_reusable_list(void)
@@ -92,31 +92,50 @@ void destroy_reusable_list(void)
 	}
 }
 
-int hyper_dmabuf_get_id(void)
+hyper_dmabuf_id_t hyper_dmabuf_get_hid(void)
 {
-	static int id = 0;
+	static int count = 0;
+	hyper_dmabuf_id_t hid;
 	struct list_reusable_id *reusable_head;
-	int ret;
 
-	/* first cla to hyper_dmabuf_get_id */
-	if (id == 0) {
+	/* first call to hyper_dmabuf_get_id */
+	if (count == 0) {
 		reusable_head = kmalloc(sizeof(*reusable_head), GFP_KERNEL);
 
 		if (!reusable_head) {
 			dev_err(hyper_dmabuf_private.device,
 				"No memory left to be allocated\n");
-			return -ENOMEM;
+			return (hyper_dmabuf_id_t){-1, {0,0,0}};
 		}
 
-		reusable_head->id = -1; /* list head have invalid id */
+		reusable_head->hid.id = -1; /* list head has an invalid id */
 		INIT_LIST_HEAD(&reusable_head->list);
 		hyper_dmabuf_private.id_queue = reusable_head;
 	}
 
-	ret = retrieve_reusable_id();
+	hid = retrieve_reusable_hid();
 
-	if (ret < 0 && id < HYPER_DMABUF_ID_MAX)
-		return HYPER_DMABUF_ID_CREATE(hyper_dmabuf_private.domid, id++);
+	/* create a new hid only if the reusable id queue is empty
+	 * and count is less than the maximum allowed
+	 */
+	if (hid.id == -1 && count < HYPER_DMABUF_ID_MAX) {
+		hid.id = HYPER_DMABUF_ID_CREATE(hyper_dmabuf_private.domid, count++);
+		/* random data embedded in the id for security */
+		get_random_bytes(&hid.rng_key[0], 12);
+	}
+
+	return hid;
+}
+
+bool hyper_dmabuf_hid_keycomp(hyper_dmabuf_id_t hid1, hyper_dmabuf_id_t hid2)
+{
+	int i;
+
+	/* compare keys */
+	for (i=0; i<3; i++) {
+		if (hid1.rng_key[i] != hid2.rng_key[i])
+			return false;
+	}
 
-	return ret;
+	return true;
 }
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
index 4394903..a3336d9 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
@@ -25,24 +25,23 @@
 #ifndef __HYPER_DMABUF_ID_H__
 #define __HYPER_DMABUF_ID_H__
 
-/* Importer combine source domain id with given hyper_dmabuf_id
- * to make it unique in case there are multiple exporters */
+#define HYPER_DMABUF_ID_CREATE(domid, cnt) \
+	((((domid) & 0xFF) << 24) | ((cnt) & 0xFFFFFF))
 
-#define HYPER_DMABUF_ID_CREATE(domid, id) \
-	((((domid) & 0xFF) << 24) | ((id) & 0xFFFFFF))
-
-#define HYPER_DMABUF_DOM_ID(id) \
-	(((id) >> 24) & 0xFF)
+#define HYPER_DMABUF_DOM_ID(hid) \
+	(((hid.id) >> 24) & 0xFF)
 
 /* currently maximum number of buffers shared
  * at any given moment is limited to 1000
  */
 #define HYPER_DMABUF_ID_MAX 1000
 
-void store_reusable_id(int id);
+void store_reusable_hid(hyper_dmabuf_id_t hid);
 
 void destroy_reusable_list(void);
 
-int hyper_dmabuf_get_id(void);
+hyper_dmabuf_id_t hyper_dmabuf_get_hid(void);
+
+bool hyper_dmabuf_hid_keycomp(hyper_dmabuf_id_t hid1, hyper_dmabuf_id_t hid2);
 
 #endif /*__HYPER_DMABUF_ID_H*/
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
index 34dfa18..2bf0835 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
@@ -33,11 +33,11 @@
 #include <linux/dma-buf.h>
 #include <xen/grant_table.h>
 #include <asm/xen/page.h>
+#include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_imp.h"
 #include "hyper_dmabuf_id.h"
 #include "hyper_dmabuf_msg.h"
-#include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_list.h"
 
 extern struct hyper_dmabuf_private hyper_dmabuf_private;
@@ -258,15 +258,20 @@ int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int fo
 
 #define WAIT_AFTER_SYNC_REQ 0
 
-inline int hyper_dmabuf_sync_request(int id, int dmabuf_ops)
+inline int hyper_dmabuf_sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
 {
 	struct hyper_dmabuf_req *req;
 	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
-	int operands[2];
+	int operands[5];
+	int i;
 	int ret;
 
-	operands[0] = id;
-	operands[1] = dmabuf_ops;
+	operands[0] = hid.id;
+
+	for (i=0; i<3; i++)
+		operands[i+1] = hid.rng_key[i];
+
+	operands[4] = dmabuf_ops;
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
@@ -279,7 +284,7 @@ inline int hyper_dmabuf_sync_request(int id, int dmabuf_ops)
 	hyper_dmabuf_create_request(req, HYPER_DMABUF_OPS_TO_SOURCE, &operands[0]);
 
 	/* send request and wait for a response */
-	ret = ops->send_req(HYPER_DMABUF_DOM_ID(id), req, WAIT_AFTER_SYNC_REQ);
+	ret = ops->send_req(HYPER_DMABUF_DOM_ID(hid), req, WAIT_AFTER_SYNC_REQ);
 
 	kfree(req);
 
@@ -297,7 +302,7 @@ static int hyper_dmabuf_ops_attach(struct dma_buf* dmabuf, struct device* dev,
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_ATTACH);
 
 	if (ret < 0) {
@@ -319,7 +324,7 @@ static void hyper_dmabuf_ops_detach(struct dma_buf* dmabuf, struct dma_buf_attac
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_DETACH);
 
 	if (ret < 0) {
@@ -358,7 +363,7 @@ static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachme
                 goto err_free_sg;
         }
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_MAP);
 
 	kfree(page_info->pages);
@@ -381,8 +386,8 @@ static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachme
 }
 
 static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
-						struct sg_table *sg,
-						enum dma_data_direction dir)
+				   struct sg_table *sg,
+				   enum dma_data_direction dir)
 {
 	struct hyper_dmabuf_imported_sgt_info *sgt_info;
 	int ret;
@@ -397,7 +402,7 @@ static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
 	sg_free_table(sg);
 	kfree(sg);
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_UNMAP);
 
 	if (ret < 0) {
@@ -437,7 +442,7 @@ static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 	final_release = sgt_info && !sgt_info->valid &&
 		        !sgt_info->num_importers;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_RELEASE);
 	if (ret < 0) {
 		dev_warn(hyper_dmabuf_private.device,
@@ -449,7 +454,7 @@ static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 	 * That has to be done after sending sync request
 	 */
 	if (final_release) {
-		hyper_dmabuf_remove_imported(sgt_info->hyper_dmabuf_id);
+		hyper_dmabuf_remove_imported(sgt_info->hid);
 		kfree(sgt_info);
 	}
 }
@@ -464,7 +469,7 @@ static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf, enum dma_da
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
@@ -484,7 +489,7 @@ static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf, enum dma_data
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_END_CPU_ACCESS);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
@@ -504,7 +509,7 @@ static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf, unsigned long
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_KMAP_ATOMIC);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
@@ -524,7 +529,7 @@ static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf, unsigned long
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
@@ -542,7 +547,7 @@ static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_KMAP);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
@@ -562,7 +567,7 @@ static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum,
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_KUNMAP);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
@@ -580,7 +585,7 @@ static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf, struct vm_area_struct *
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_MMAP);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
@@ -600,7 +605,7 @@ static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_VMAP);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
@@ -620,7 +625,7 @@ static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_VUNMAP);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index f7d98c1..f1581d5 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -35,13 +35,12 @@
 #include <linux/dma-buf.h>
 #include <linux/delay.h>
 #include <linux/list.h>
-#include <xen/hyper_dmabuf.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_id.h"
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_ioctl.h"
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_msg.h"
-#include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_id.h"
 #include "hyper_dmabuf_imp.h"
 #include "hyper_dmabuf_query.h"
 
@@ -93,6 +92,8 @@ static int hyper_dmabuf_export_remote(struct file *filp, void *data)
 	struct hyper_dmabuf_sgt_info *sgt_info;
 	struct hyper_dmabuf_req *req;
 	int operands[MAX_NUMBER_OF_OPERANDS];
+	hyper_dmabuf_id_t hid;
+	int i;
 	int ret = 0;
 
 	if (!data) {
@@ -113,25 +114,27 @@ static int hyper_dmabuf_export_remote(struct file *filp, void *data)
 	 * to the same domain and if yes and it's valid sgt_info,
 	 * it returns hyper_dmabuf_id of pre-exported sgt_info
 	 */
-	ret = hyper_dmabuf_find_id_exported(dma_buf, export_remote_attr->remote_domain);
-	sgt_info = hyper_dmabuf_find_exported(ret);
-	if (ret != -ENOENT && sgt_info != NULL) {
-		if (sgt_info->valid) {
-			/*
-			 * Check if unexport is already scheduled for that buffer,
-			 * if so try to cancel it. If that will fail, buffer needs
-			 * to be reexport once again.
-			 */
-			if (sgt_info->unexport_scheduled) {
-				if (!cancel_delayed_work_sync(&sgt_info->unexport_work)) {
-					dma_buf_put(dma_buf);
-					goto reexport;
+	hid = hyper_dmabuf_find_hid_exported(dma_buf, export_remote_attr->remote_domain);
+	if (hid.id != -1) {
+		sgt_info = hyper_dmabuf_find_exported(hid);
+		if (sgt_info != NULL) {
+			if (sgt_info->valid) {
+				/*
+				 * Check if unexport is already scheduled for that buffer,
+				 * if so try to cancel it. If that will fail, buffer needs
+				 * to be reexport once again.
+				 */
+				if (sgt_info->unexport_scheduled) {
+					if (!cancel_delayed_work_sync(&sgt_info->unexport_work)) {
+						dma_buf_put(dma_buf);
+						goto reexport;
+					}
+					sgt_info->unexport_scheduled = 0;
 				}
-				sgt_info->unexport_scheduled = 0;
+				dma_buf_put(dma_buf);
+				export_remote_attr->hid = hid;
+				return 0;
 			}
-			dma_buf_put(dma_buf);
-			export_remote_attr->hyper_dmabuf_id = ret;
-			return 0;
 		}
 	}
 
@@ -142,11 +145,6 @@ static int hyper_dmabuf_export_remote(struct file *filp, void *data)
 		return PTR_ERR(attachment);
 	}
 
-	/* Clear ret, as that will cause whole ioctl to return failure
-	 * to userspace, which is not true
-	 */
-	ret = 0;
-
 	sgt = dma_buf_map_attachment(attachment, DMA_BIDIRECTIONAL);
 
 	if (IS_ERR(sgt)) {
@@ -161,7 +159,15 @@ static int hyper_dmabuf_export_remote(struct file *filp, void *data)
 		return -ENOMEM;
 	}
 
-	sgt_info->hyper_dmabuf_id = hyper_dmabuf_get_id();
+	sgt_info->hid = hyper_dmabuf_get_hid();
+
+	/* no more exported dmabuf allowed */
+	if (sgt_info->hid.id == -1) {
+		dev_err(hyper_dmabuf_private.device,
+			"exceeded the allowed number of dmabufs to be exported\n");
+		/* TODO: Cleanup sgt */
+		return -ENOMEM;
+	}
 
 	/* TODO: We might need to consider using port number on event channel? */
 	sgt_info->hyper_dmabuf_rdomain = export_remote_attr->remote_domain;
@@ -198,8 +204,8 @@ static int hyper_dmabuf_export_remote(struct file *filp, void *data)
 
 	sgt_info->active_sgts->sgt = sgt;
 	sgt_info->active_attached->attach = attachment;
-	sgt_info->va_kmapped->vaddr = NULL; /* first vaddr is NULL */
-	sgt_info->va_vmapped->vaddr = NULL; /* first vaddr is NULL */
+	sgt_info->va_kmapped->vaddr = NULL;
+	sgt_info->va_vmapped->vaddr = NULL;
 
 	/* initialize list of sgt, attachment and vaddr for dmabuf sync
 	 * via shadow dma-buf
@@ -221,23 +227,27 @@ static int hyper_dmabuf_export_remote(struct file *filp, void *data)
 	hyper_dmabuf_register_exported(sgt_info);
 
 	page_info->hyper_dmabuf_rdomain = sgt_info->hyper_dmabuf_rdomain;
-	page_info->hyper_dmabuf_id = sgt_info->hyper_dmabuf_id; /* may not be needed */
+	page_info->hid = sgt_info->hid; /* may not be needed */
 
-	export_remote_attr->hyper_dmabuf_id = sgt_info->hyper_dmabuf_id;
+	export_remote_attr->hid = sgt_info->hid;
 
 	/* now create request for importer via ring */
-	operands[0] = page_info->hyper_dmabuf_id;
-	operands[1] = page_info->nents;
-	operands[2] = page_info->frst_ofst;
-	operands[3] = page_info->last_len;
-	operands[4] = ops->share_pages (page_info->pages, export_remote_attr->remote_domain,
+	operands[0] = page_info->hid.id;
+
+	for (i=0; i<3; i++)
+		operands[i+1] = page_info->hid.rng_key[i];
+
+	operands[4] = page_info->nents;
+	operands[5] = page_info->frst_ofst;
+	operands[6] = page_info->last_len;
+	operands[7] = ops->share_pages(page_info->pages, export_remote_attr->remote_domain,
 					page_info->nents, &sgt_info->refs_info);
 
-	/* driver/application specific private info, max 32 bytes */
-	operands[5] = export_remote_attr->private[0];
-	operands[6] = export_remote_attr->private[1];
-	operands[7] = export_remote_attr->private[2];
-	operands[8] = export_remote_attr->private[3];
+	/* driver/application specific private info, max 4x4 bytes */
+	operands[8] = export_remote_attr->private[0];
+	operands[9] = export_remote_attr->private[1];
+	operands[10] = export_remote_attr->private[2];
+	operands[11] = export_remote_attr->private[3];
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
@@ -270,7 +280,7 @@ static int hyper_dmabuf_export_remote(struct file *filp, void *data)
 	kfree(req);
 
 fail_map_req:
-	hyper_dmabuf_remove_exported(sgt_info->hyper_dmabuf_id);
+	hyper_dmabuf_remove_exported(sgt_info->hid);
 
 fail_export:
 	dma_buf_unmap_attachment(sgt_info->active_attached->attach,
@@ -298,7 +308,8 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 	struct hyper_dmabuf_imported_sgt_info *sgt_info;
 	struct hyper_dmabuf_req *req;
 	struct page **data_pages;
-	int operand;
+	int operands[4];
+	int i;
 	int ret = 0;
 
 	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
@@ -311,7 +322,7 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 	export_fd_attr = (struct ioctl_hyper_dmabuf_export_fd *)data;
 
 	/* look for dmabuf for the id */
-	sgt_info = hyper_dmabuf_find_imported(export_fd_attr->hyper_dmabuf_id);
+	sgt_info = hyper_dmabuf_find_imported(export_fd_attr->hid);
 
 	/* can't find sgt from the table */
 	if (!sgt_info) {
@@ -324,9 +335,14 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 	sgt_info->num_importers++;
 
 	/* send notification for export_fd to exporter */
-	operand = sgt_info->hyper_dmabuf_id;
+	operands[0] = sgt_info->hid.id;
 
-	dev_dbg(hyper_dmabuf_private.device, "Exporting fd of buffer %d\n", operand);
+	for (i=0; i<3; i++)
+		operands[i+1] = sgt_info->hid.rng_key[i];
+
+	dev_dbg(hyper_dmabuf_private.device, "Exporting fd of buffer {id:%d key:%d %d %d}\n",
+		sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
+		sgt_info->hid.rng_key[2]);
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
@@ -336,30 +352,37 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 		return -ENOMEM;
 	}
 
-	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT_FD, &operand);
+	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT_FD, &operands[0]);
 
-	ret = ops->send_req(HYPER_DMABUF_DOM_ID(operand), req, true);
+	ret = ops->send_req(HYPER_DMABUF_DOM_ID(sgt_info->hid), req, true);
 
 	if (ret < 0) {
 		/* in case of a timeout the other end will eventually receive the request, so we need to undo it */
-		hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT_FD_FAILED, &operand);
-		ops->send_req(HYPER_DMABUF_DOM_ID(operand), req, false);
+		hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT_FD_FAILED, &operands[0]);
+		ops->send_req(HYPER_DMABUF_DOM_ID(sgt_info->hid), req, false);
 		kfree(req);
 		dev_err(hyper_dmabuf_private.device, "Failed to create sgt or notify exporter\n");
 		sgt_info->num_importers--;
 		mutex_unlock(&hyper_dmabuf_private.lock);
 		return ret;
 	}
+
 	kfree(req);
 
 	if (ret == HYPER_DMABUF_REQ_ERROR) {
 		dev_err(hyper_dmabuf_private.device,
-			"Buffer invalid %d, cannot import\n", operand);
+			"Buffer invalid {id:%d key:%d %d %d}, cannot import\n",
+			sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
+			sgt_info->hid.rng_key[2]);
+
 		sgt_info->num_importers--;
 		mutex_unlock(&hyper_dmabuf_private.lock);
 		return -EINVAL;
 	} else {
-		dev_dbg(hyper_dmabuf_private.device, "Can import buffer %d\n", operand);
+		dev_dbg(hyper_dmabuf_private.device, "Can import buffer {id:%d key:%d %d %d}\n",
+			sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
+			sgt_info->hid.rng_key[2]);
+
 		ret = 0;
 	}
 
@@ -367,22 +390,29 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 		  "%s Found buffer gref %d  off %d last len %d nents %d domain %d\n", __func__,
 		  sgt_info->ref_handle, sgt_info->frst_ofst,
 		  sgt_info->last_len, sgt_info->nents,
-		  HYPER_DMABUF_DOM_ID(sgt_info->hyper_dmabuf_id));
+		  HYPER_DMABUF_DOM_ID(sgt_info->hid));
 
 	if (!sgt_info->sgt) {
 		dev_dbg(hyper_dmabuf_private.device,
-			"%s buffer %d pages not mapped yet\n", __func__,sgt_info->hyper_dmabuf_id);
+			"%s buffer {id:%d key:%d %d %d} pages not mapped yet\n", __func__,
+			sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
+			sgt_info->hid.rng_key[2]);
+
 		data_pages = ops->map_shared_pages(sgt_info->ref_handle,
-						   HYPER_DMABUF_DOM_ID(sgt_info->hyper_dmabuf_id),
+						   HYPER_DMABUF_DOM_ID(sgt_info->hid),
 						   sgt_info->nents,
 						   &sgt_info->refs_info);
 
 		if (!data_pages) {
-			dev_err(hyper_dmabuf_private.device, "Cannot map pages of buffer %d\n", operand);
+			dev_err(hyper_dmabuf_private.device,
+				"Cannot map pages of buffer {id:%d key:%d %d %d}\n",
+				sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
+				sgt_info->hid.rng_key[2]);
+
 			sgt_info->num_importers--;
 			req = kcalloc(1, sizeof(*req), GFP_KERNEL);
-			hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT_FD_FAILED, &operand);
-			ops->send_req(HYPER_DMABUF_DOM_ID(operand), req, false);
+			hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT_FD_FAILED, &operands[0]);
+			ops->send_req(HYPER_DMABUF_DOM_ID(sgt_info->hid), req, false);
 			kfree(req);
 			mutex_unlock(&hyper_dmabuf_private.lock);
 			return -EINVAL;
@@ -401,6 +431,7 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 	}
 
 	mutex_unlock(&hyper_dmabuf_private.lock);
+
 	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
 	return ret;
 }
@@ -411,8 +442,8 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 static void hyper_dmabuf_delayed_unexport(struct work_struct *work)
 {
 	struct hyper_dmabuf_req *req;
-	int hyper_dmabuf_id;
-	int ret;
+	int i, ret;
+	int operands[4];
 	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
 	struct hyper_dmabuf_sgt_info *sgt_info =
 		container_of(work, struct hyper_dmabuf_sgt_info, unexport_work.work);
@@ -420,10 +451,11 @@ static void hyper_dmabuf_delayed_unexport(struct work_struct *work)
 	if (!sgt_info)
 		return;
 
-	hyper_dmabuf_id = sgt_info->hyper_dmabuf_id;
-
 	dev_dbg(hyper_dmabuf_private.device,
-		"Marking buffer %d as invalid\n", hyper_dmabuf_id);
+		"Marking buffer {id:%d key:%d %d %d} as invalid\n",
+		sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
+		sgt_info->hid.rng_key[2]);
+
 	/* no longer valid */
 	sgt_info->valid = 0;
 
@@ -435,12 +467,20 @@ static void hyper_dmabuf_delayed_unexport(struct work_struct *work)
 		return;
 	}
 
-	hyper_dmabuf_create_request(req, HYPER_DMABUF_NOTIFY_UNEXPORT, &hyper_dmabuf_id);
+	operands[0] = sgt_info->hid.id;
+
+	for (i=0; i<3; i++)
+		operands[i+1] = sgt_info->hid.rng_key[i];
+
+	hyper_dmabuf_create_request(req, HYPER_DMABUF_NOTIFY_UNEXPORT, &operands[0]);
 
 	/* Now send unexport request to remote domain, marking that buffer should not be used anymore */
 	ret = ops->send_req(sgt_info->hyper_dmabuf_rdomain, req, true);
 	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device, "unexport message for buffer %d failed\n", hyper_dmabuf_id);
+		dev_err(hyper_dmabuf_private.device,
+			"unexport message for buffer {id:%d key:%d %d %d} failed\n",
+			sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
+			sgt_info->hid.rng_key[2]);
 	}
 
 	/* free msg */
@@ -456,12 +496,15 @@ static void hyper_dmabuf_delayed_unexport(struct work_struct *work)
 	 */
 	if (!sgt_info->importer_exported) {
 		dev_dbg(hyper_dmabuf_private.device,
-			"claning up buffer %d completly\n", hyper_dmabuf_id);
+			"claning up buffer {id:%d key:%d %d %d} completly\n",
+			sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
+			sgt_info->hid.rng_key[2]);
+
 		hyper_dmabuf_cleanup_sgt_info(sgt_info, false);
-		hyper_dmabuf_remove_exported(hyper_dmabuf_id);
-		kfree(sgt_info);
+		hyper_dmabuf_remove_exported(sgt_info->hid);
 		/* register hyper_dmabuf_id to the list for reuse */
-		store_reusable_id(hyper_dmabuf_id);
+		store_reusable_hid(sgt_info->hid);
+		kfree(sgt_info);
 	}
 }
 
@@ -482,9 +525,12 @@ static int hyper_dmabuf_unexport(struct file *filp, void *data)
 	unexport_attr = (struct ioctl_hyper_dmabuf_unexport *)data;
 
 	/* find dmabuf in export list */
-	sgt_info = hyper_dmabuf_find_exported(unexport_attr->hyper_dmabuf_id);
+	sgt_info = hyper_dmabuf_find_exported(unexport_attr->hid);
 
-	dev_dbg(hyper_dmabuf_private.device, "scheduling unexport of buffer %d\n", unexport_attr->hyper_dmabuf_id);
+	dev_dbg(hyper_dmabuf_private.device,
+		"scheduling unexport of buffer {id:%d key:%d %d %d}\n",
+		unexport_attr->hid.id, unexport_attr->hid.rng_key[0],
+		unexport_attr->hid.rng_key[1], unexport_attr->hid.rng_key[2]);
 
 	/* failed to find corresponding entry in export list */
 	if (sgt_info == NULL) {
@@ -518,8 +564,8 @@ static int hyper_dmabuf_query(struct file *filp, void *data)
 
 	query_attr = (struct ioctl_hyper_dmabuf_query *)data;
 
-	sgt_info = hyper_dmabuf_find_exported(query_attr->hyper_dmabuf_id);
-	imported_sgt_info = hyper_dmabuf_find_imported(query_attr->hyper_dmabuf_id);
+	sgt_info = hyper_dmabuf_find_exported(query_attr->hid);
+	imported_sgt_info = hyper_dmabuf_find_imported(query_attr->hid);
 
 	/* if dmabuf can't be found in both lists, return */
 	if (!(sgt_info && imported_sgt_info)) {
@@ -544,7 +590,7 @@ static int hyper_dmabuf_query(struct file *filp, void *data)
 			if (sgt_info) {
 				query_attr->info = 0xFFFFFFFF; /* myself */
 			} else {
-				query_attr->info = (HYPER_DMABUF_DOM_ID(imported_sgt_info->hyper_dmabuf_id));
+				query_attr->info = HYPER_DMABUF_DOM_ID(imported_sgt_info->hid);
 			}
 			break;
 
@@ -674,10 +720,11 @@ static void hyper_dmabuf_emergency_release(struct hyper_dmabuf_sgt_info* sgt_inf
 
 	if (sgt_info->filp == filp) {
 		dev_dbg(hyper_dmabuf_private.device,
-			"Executing emergency release of buffer %d\n",
-			 sgt_info->hyper_dmabuf_id);
+			"Executing emergency release of buffer {id:%d key:%d %d %d}\n",
+			 sgt_info->hid.id, sgt_info->hid.rng_key[0],
+			 sgt_info->hid.rng_key[1], sgt_info->hid.rng_key[2]);
 
-		unexport_attr.hyper_dmabuf_id = sgt_info->hyper_dmabuf_id;
+		unexport_attr.hid = sgt_info->hid;
 		unexport_attr.delay_ms = 0;
 
 		hyper_dmabuf_unexport(filp, &unexport_attr);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
index 90c8c56..21fc7d0 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
@@ -36,6 +36,7 @@
 #include <linux/dma-buf.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_id.h"
 
 extern struct hyper_dmabuf_private hyper_dmabuf_private;
 
@@ -51,13 +52,15 @@ static ssize_t hyper_dmabuf_imported_show(struct device *drv, struct device_attr
 	size_t total = 0;
 
 	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node) {
-		int id = info_entry->info->hyper_dmabuf_id;
+		hyper_dmabuf_id_t hid = info_entry->info->hid;
 		int nents = info_entry->info->nents;
 		bool valid = info_entry->info->valid;
 		int num_importers = info_entry->info->num_importers;
 		total += nents;
-		count += scnprintf(buf + count, PAGE_SIZE - count, "id:%d, nents:%d, v:%c, numi:%d\n",
-				   id, nents, (valid ? 't' : 'f'), num_importers);
+		count += scnprintf(buf + count, PAGE_SIZE - count,
+				   "hid:{id:%d keys:%d %d %d}, nents:%d, v:%c, numi:%d\n",
+				   hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2],
+				   nents, (valid ? 't' : 'f'), num_importers);
 	}
 	count += scnprintf(buf + count, PAGE_SIZE - count, "total nents: %lu\n",
 			   total);
@@ -73,13 +76,15 @@ static ssize_t hyper_dmabuf_exported_show(struct device *drv, struct device_attr
 	size_t total = 0;
 
 	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node) {
-		int id = info_entry->info->hyper_dmabuf_id;
+		hyper_dmabuf_id_t hid = info_entry->info->hid;
 		int nents = info_entry->info->nents;
 		bool valid = info_entry->info->valid;
 		int importer_exported = info_entry->info->importer_exported;
 		total += nents;
-		count += scnprintf(buf + count, PAGE_SIZE - count, "id:%d, nents:%d, v:%c, ie:%d\n",
-				   id, nents, (valid ? 't' : 'f'), importer_exported);
+		count += scnprintf(buf + count, PAGE_SIZE - count,
+				   "hid:{hid:%d keys:%d %d %d}, nents:%d, v:%c, ie:%d\n",
+				   hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2],
+				   nents, (valid ? 't' : 'f'), importer_exported);
 	}
 	count += scnprintf(buf + count, PAGE_SIZE - count, "total nents: %lu\n",
 			   total);
@@ -144,7 +149,7 @@ int hyper_dmabuf_register_exported(struct hyper_dmabuf_sgt_info *info)
 	info_entry->info = info;
 
 	hash_add(hyper_dmabuf_hash_exported, &info_entry->node,
-		 info_entry->info->hyper_dmabuf_id);
+		 info_entry->info->hid.id);
 
 	return 0;
 }
@@ -164,74 +169,102 @@ int hyper_dmabuf_register_imported(struct hyper_dmabuf_imported_sgt_info* info)
 	info_entry->info = info;
 
 	hash_add(hyper_dmabuf_hash_imported, &info_entry->node,
-		 info_entry->info->hyper_dmabuf_id);
+		 info_entry->info->hid.id);
 
 	return 0;
 }
 
-struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(int id)
+struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid)
 {
 	struct hyper_dmabuf_info_entry_exported *info_entry;
 	int bkt;
 
 	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
-		if(info_entry->info->hyper_dmabuf_id == id)
-			return info_entry->info;
+		/* checking hid.id first */
+		if(info_entry->info->hid.id == hid.id) {
+			/* then key is compared */
+			if(hyper_dmabuf_hid_keycomp(info_entry->info->hid, hid))
+				return info_entry->info;
+			/* if key is unmatched, given HID is invalid, so returning NULL */
+			else
+				break;
+		}
 
 	return NULL;
 }
 
 /* search for a pre-exported sgt and return its id if it exists */
-int hyper_dmabuf_find_id_exported(struct dma_buf *dmabuf, int domid)
+hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf, int domid)
 {
 	struct hyper_dmabuf_info_entry_exported *info_entry;
+	hyper_dmabuf_id_t hid = {-1, {0, 0, 0}};
 	int bkt;
 
 	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
 		if(info_entry->info->dma_buf == dmabuf &&
 		   info_entry->info->hyper_dmabuf_rdomain == domid)
-			return info_entry->info->hyper_dmabuf_id;
+			return info_entry->info->hid;
 
-	return -ENOENT;
+	return hid;
 }
 
-struct hyper_dmabuf_imported_sgt_info *hyper_dmabuf_find_imported(int id)
+struct hyper_dmabuf_imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_id_t hid)
 {
 	struct hyper_dmabuf_info_entry_imported *info_entry;
 	int bkt;
 
 	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
-		if(info_entry->info->hyper_dmabuf_id == id)
-			return info_entry->info;
+		/* checking hid.id first */
+		if(info_entry->info->hid.id == hid.id) {
+			/* then key is compared */
+			if(hyper_dmabuf_hid_keycomp(info_entry->info->hid, hid))
+				return info_entry->info;
+			/* if key is unmatched, given HID is invalid, so returning NULL */
+			else {
+				break;
+			}
+		}
 
 	return NULL;
 }
 
-int hyper_dmabuf_remove_exported(int id)
+int hyper_dmabuf_remove_exported(hyper_dmabuf_id_t hid)
 {
 	struct hyper_dmabuf_info_entry_exported *info_entry;
 	int bkt;
 
 	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
-		if(info_entry->info->hyper_dmabuf_id == id) {
-			hash_del(&info_entry->node);
-			kfree(info_entry);
-			return 0;
+		/* checking hid.id first */
+		if(info_entry->info->hid.id == hid.id) {
+			/* then key is compared */
+			if(hyper_dmabuf_hid_keycomp(info_entry->info->hid, hid)) {
+				hash_del(&info_entry->node);
+				kfree(info_entry);
+				return 0;
+			} else {
+				break;
+			}
 		}
 
 	return -ENOENT;
 }
 
-int hyper_dmabuf_remove_imported(int id)
+int hyper_dmabuf_remove_imported(hyper_dmabuf_id_t hid)
 {
 	struct hyper_dmabuf_info_entry_imported *info_entry;
 	int bkt;
 
 	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
-		if(info_entry->info->hyper_dmabuf_id == id) {
-			hash_del(&info_entry->node);
-			kfree(info_entry);
-			return 0;
+		/* checking hid.id first */
+		if(info_entry->info->hid.id == hid.id) {
+			/* then key is compared */
+			if(hyper_dmabuf_hid_keycomp(info_entry->info->hid, hid)) {
+				hash_del(&info_entry->node);
+				kfree(info_entry);
+				return 0;
+			} else {
+				break;
+			}
 		}
 
 	return -ENOENT;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
index 925b0d1..8f64db8 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
@@ -49,17 +49,17 @@ int hyper_dmabuf_table_destroy(void);
 int hyper_dmabuf_register_exported(struct hyper_dmabuf_sgt_info *info);
 
 /* search for a pre-exported sgt and return its id if it exists */
-int hyper_dmabuf_find_id_exported(struct dma_buf *dmabuf, int domid);
+hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf, int domid);
 
 int hyper_dmabuf_register_imported(struct hyper_dmabuf_imported_sgt_info* info);
 
-struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(int id);
+struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid);
 
-struct hyper_dmabuf_imported_sgt_info *hyper_dmabuf_find_imported(int id);
+struct hyper_dmabuf_imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_id_t hid);
 
-int hyper_dmabuf_remove_exported(int id);
+int hyper_dmabuf_remove_exported(hyper_dmabuf_id_t hid);
 
-int hyper_dmabuf_remove_imported(int id);
+int hyper_dmabuf_remove_imported(hyper_dmabuf_id_t hid);
 
 void hyper_dmabuf_foreach_exported(
 	void (*func)(struct hyper_dmabuf_sgt_info *, void *attr),
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index 5f64261..12ebad3 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -60,32 +60,36 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_req *req,
 	case HYPER_DMABUF_EXPORT:
 		/* exporting pages for dmabuf */
 		/* command : HYPER_DMABUF_EXPORT,
-		 * operands0 : hyper_dmabuf_id
-		 * operands1 : number of pages to be shared
-		 * operands2 : offset of data in the first page
-		 * operands3 : length of data in the last page
-		 * operands4 : top-level reference number for shared pages
-		 * operands5~8 : Driver-specific private data (e.g. graphic buffer's meta info)
+		 * operands0~3 : hyper_dmabuf_id
+		 * operands4 : number of pages to be shared
+		 * operands5 : offset of data in the first page
+		 * operands6 : length of data in the last page
+		 * operands7 : top-level reference number for shared pages
+		 * operands8~11 : Driver-specific private data (e.g. graphic buffer's meta info)
 		 */
-		for (i=0; i < 8; i++)
+		for (i=0; i < 12; i++)
 			req->operands[i] = operands[i];
 		break;
 
 	case HYPER_DMABUF_NOTIFY_UNEXPORT:
 		/* destroy sg_list for hyper_dmabuf_id on remote side */
 		/* command : DMABUF_DESTROY,
-		 * operands0 : hyper_dmabuf_id
+		 * operands0~3 : hyper_dmabuf_id_t hid
 		 */
-		req->operands[0] = operands[0];
+
+		for (i=0; i < 4; i++)
+			req->operands[i] = operands[i];
 		break;
 
 	case HYPER_DMABUF_EXPORT_FD:
 	case HYPER_DMABUF_EXPORT_FD_FAILED:
 		/* dmabuf fd is being created on imported side or importing failed */
 		/* command : HYPER_DMABUF_EXPORT_FD or HYPER_DMABUF_EXPORT_FD_FAILED,
-		 * operands0 : hyper_dmabuf_id
+		 * operands0~3 : hyper_dmabuf_id
 		 */
-		req->operands[0] = operands[0];
+
+		for (i=0; i < 4; i++)
+			req->operands[i] = operands[i];
 		break;
 
 	case HYPER_DMABUF_OPS_TO_REMOTE:
@@ -98,10 +102,10 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_req *req,
 		/* notifying dmabuf map/unmap to exporter, map will make the driver do shadow mapping
 		* or unmapping for synchronization with original exporter (e.g. i915) */
 		/* command : DMABUF_OPS_TO_SOURCE.
-		 * operands0 : hyper_dmabuf_id
-		 * operands1 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
+		 * operands0~3 : hyper_dmabuf_id
+		 * operands4 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
 		 */
-		for (i = 0; i < 2; i++)
+		for (i = 0; i < 5; i++)
 			req->operands[i] = operands[i];
 		break;
 
@@ -126,12 +130,12 @@ void cmd_process_work(struct work_struct *work)
 	case HYPER_DMABUF_EXPORT:
 		/* exporting pages for dmabuf */
 		/* command : HYPER_DMABUF_EXPORT,
-		 * operands0 : hyper_dmabuf_id
-		 * operands1 : number of pages to be shared
-		 * operands2 : offset of data in the first page
-		 * operands3 : length of data in the last page
-		 * operands4 : top-level reference number for shared pages
-		 * operands5~8 : Driver-specific private data (e.g. graphic buffer's meta info)
+		 * operands0~3 : hyper_dmabuf_id
+		 * operands4 : number of pages to be shared
+		 * operands5 : offset of data in the first page
+		 * operands6 : length of data in the last page
+		 * operands7 : top-level reference number for shared pages
+		 * operands8~11 : Driver-specific private data (e.g. graphic buffer's meta info)
 		 */
 		imported_sgt_info = kcalloc(1, sizeof(*imported_sgt_info), GFP_KERNEL);
 
@@ -141,25 +145,31 @@ void cmd_process_work(struct work_struct *work)
 			break;
 		}
 
-		imported_sgt_info->hyper_dmabuf_id = req->operands[0];
-		imported_sgt_info->frst_ofst = req->operands[2];
-		imported_sgt_info->last_len = req->operands[3];
-		imported_sgt_info->nents = req->operands[1];
-		imported_sgt_info->ref_handle = req->operands[4];
+		imported_sgt_info->hid.id = req->operands[0];
+
+		for (i=0; i<3; i++)
+			imported_sgt_info->hid.rng_key[i] = req->operands[i+1];
+
+		imported_sgt_info->nents = req->operands[4];
+		imported_sgt_info->frst_ofst = req->operands[5];
+		imported_sgt_info->last_len = req->operands[6];
+		imported_sgt_info->ref_handle = req->operands[7];
 
 		dev_dbg(hyper_dmabuf_private.device, "DMABUF was exported\n");
-		dev_dbg(hyper_dmabuf_private.device, "\thyper_dmabuf_id %d\n", req->operands[0]);
-		dev_dbg(hyper_dmabuf_private.device, "\tnents %d\n", req->operands[1]);
-		dev_dbg(hyper_dmabuf_private.device, "\tfirst offset %d\n", req->operands[2]);
-		dev_dbg(hyper_dmabuf_private.device, "\tlast len %d\n", req->operands[3]);
-		dev_dbg(hyper_dmabuf_private.device, "\tgrefid %d\n", req->operands[4]);
+		dev_dbg(hyper_dmabuf_private.device, "\thid{id:%d key:%d %d %d}\n",
+			req->operands[0], req->operands[1], req->operands[2],
+			req->operands[3]);
+		dev_dbg(hyper_dmabuf_private.device, "\tnents %d\n", req->operands[4]);
+		dev_dbg(hyper_dmabuf_private.device, "\tfirst offset %d\n", req->operands[5]);
+		dev_dbg(hyper_dmabuf_private.device, "\tlast len %d\n", req->operands[6]);
+		dev_dbg(hyper_dmabuf_private.device, "\tgrefid %d\n", req->operands[7]);
 
 		for (i=0; i<4; i++)
-			imported_sgt_info->private[i] = req->operands[5+i];
+			imported_sgt_info->private[i] = req->operands[8+i];
 
 		imported_sgt_info->valid = 1;
 		hyper_dmabuf_register_imported(imported_sgt_info);
 		break;
 
 	case HYPER_DMABUF_OPS_TO_REMOTE:
 		/* notifying dmabuf map/unmap to importer (probably not needed) */
@@ -182,6 +192,8 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 	struct hyper_dmabuf_req *temp_req;
 	struct hyper_dmabuf_imported_sgt_info *sgt_info;
 	struct hyper_dmabuf_sgt_info *exp_sgt_info;
+	hyper_dmabuf_id_t hid = {req->operands[0], /* hid.id */
+			       {req->operands[1], req->operands[2], req->operands[3]}}; /* hid.rng_key */
 	int ret;
 
 	if (!req) {
@@ -203,12 +215,12 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 	if (req->command == HYPER_DMABUF_NOTIFY_UNEXPORT) {
 		/* destroy sg_list for hyper_dmabuf_id on remote side */
 		/* command : HYPER_DMABUF_NOTIFY_UNEXPORT,
-		 * operands0 : hyper_dmabuf_id
+		 * operands0~3 : hyper_dmabuf_id
 		 */
 		dev_dbg(hyper_dmabuf_private.device,
 			"%s: processing HYPER_DMABUF_NOTIFY_UNEXPORT\n", __func__);
 
-		sgt_info = hyper_dmabuf_find_imported(req->operands[0]);
+		sgt_info = hyper_dmabuf_find_imported(hid);
 
 		if (sgt_info) {
 			/* if anything is still using dma_buf */
@@ -220,7 +232,7 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 				sgt_info->valid = 0;
 			} else {
 				/* No one is using buffer, remove it from imported list */
-				hyper_dmabuf_remove_imported(req->operands[0]);
+				hyper_dmabuf_remove_imported(hid);
 				kfree(sgt_info);
 			}
 		} else {
@@ -236,13 +248,14 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 		 * or unmapping for synchronization with original exporter (e.g. i915) */
 
 		/* command : DMABUF_OPS_TO_SOURCE.
-		 * operands0 : hyper_dmabuf_id
-		 * operands1 : enum hyper_dmabuf_ops {....}
+		 * operands0~3 : hyper_dmabuf_id
+		 * operands4 : enum hyper_dmabuf_ops {....}
 		 */
 		dev_dbg(hyper_dmabuf_private.device,
 			"%s: HYPER_DMABUF_OPS_TO_SOURCE\n", __func__);
 
-		ret = hyper_dmabuf_remote_sync(req->operands[0], req->operands[1]);
+		ret = hyper_dmabuf_remote_sync(hid, req->operands[4]);
+
 		if (ret)
 			req->status = HYPER_DMABUF_REQ_ERROR;
 		else
@@ -255,20 +268,28 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 	if (req->command == HYPER_DMABUF_EXPORT_FD) {
 		/* find a corresponding SGT for the id */
 		dev_dbg(hyper_dmabuf_private.device,
-			"Processing HYPER_DMABUF_EXPORT_FD %d\n", req->operands[0]);
-		exp_sgt_info = hyper_dmabuf_find_exported(req->operands[0]);
+			"Processing HYPER_DMABUF_EXPORT_FD for buffer {id:%d key:%d %d %d}\n",
+			hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
+
+		exp_sgt_info = hyper_dmabuf_find_exported(hid);
 
 		if (!exp_sgt_info) {
 			dev_err(hyper_dmabuf_private.device,
-				"critical err: requested sgt_info can't be found %d\n", req->operands[0]);
+				"critical err: requested sgt_info can't be found for buffer {id:%d key:%d %d %d}\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
+
 			req->status = HYPER_DMABUF_REQ_ERROR;
 		} else if (!exp_sgt_info->valid) {
 			dev_dbg(hyper_dmabuf_private.device,
-				"Buffer no longer valid - cannot export fd %d\n", req->operands[0]);
+				"Buffer no longer valid - cannot export fd for buffer {id:%d key:%d %d %d}\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
+
 			req->status = HYPER_DMABUF_REQ_ERROR;
 		} else {
 			dev_dbg(hyper_dmabuf_private.device,
-				"Buffer still valid - can export fd%d\n", req->operands[0]);
+				"Buffer still valid - can export fd for buffer {id:%d key:%d %d %d}\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
+
 			exp_sgt_info->importer_exported++;
 			req->status = HYPER_DMABUF_REQ_PROCESSED;
 		}
@@ -277,12 +298,16 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 
 	if (req->command == HYPER_DMABUF_EXPORT_FD_FAILED) {
 		dev_dbg(hyper_dmabuf_private.device,
-			"Processing HYPER_DMABUF_EXPORT_FD_FAILED %d\n", req->operands[0]);
-		exp_sgt_info = hyper_dmabuf_find_exported(req->operands[0]);
+			"Processing HYPER_DMABUF_EXPORT_FD_FAILED for buffer {id:%d key:%d %d %d}\n",
+			hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
+
+		exp_sgt_info = hyper_dmabuf_find_exported(hid);
 
 		if (!exp_sgt_info) {
 			dev_err(hyper_dmabuf_private.device,
-				"critical err: requested sgt_info can't be found %d\n", req->operands[0]);
+				"critical err: requested sgt_info can't be found for buffer {id:%d key:%d %d %d}\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
+
 			req->status = HYPER_DMABUF_REQ_ERROR;
 		} else {
 			exp_sgt_info->importer_exported--;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
index 50ce617..636d6f1 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -25,7 +25,7 @@
 #ifndef __HYPER_DMABUF_MSG_H__
 #define __HYPER_DMABUF_MSG_H__
 
-#define MAX_NUMBER_OF_OPERANDS 9
+#define MAX_NUMBER_OF_OPERANDS 13
 
 struct hyper_dmabuf_req {
 	unsigned int request_id;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
index 2dab833..be1d395 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -31,10 +31,10 @@
 #include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/dma-buf.h>
+#include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_msg.h"
-#include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_id.h"
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_imp.h"
@@ -56,7 +56,7 @@ extern struct hyper_dmabuf_private hyper_dmabuf_private;
  * is what is created when initial exporting is issued so it
 * should not be modified or released by this function.
  */
-int hyper_dmabuf_remote_sync(int id, int ops)
+int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 {
 	struct hyper_dmabuf_sgt_info *sgt_info;
 	struct sgt_list *sgtl;
@@ -66,7 +66,7 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 	int ret;
 
 	/* find a corresponding SGT for the id */
-	sgt_info = hyper_dmabuf_find_exported(id);
+	sgt_info = hyper_dmabuf_find_exported(hid);
 
 	if (!sgt_info) {
 		dev_err(hyper_dmabuf_private.device,
@@ -167,9 +167,10 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 
 	case HYPER_DMABUF_OPS_RELEASE:
 		dev_dbg(hyper_dmabuf_private.device,
-			"Buffer %d released, references left: %d\n",
-			 sgt_info->hyper_dmabuf_id,
-			 sgt_info->importer_exported -1);
+			"Buffer {id:%d key:%d %d %d} released, references left: %d\n",
+			 sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
+			 sgt_info->hid.rng_key[2], sgt_info->importer_exported -1);
+
                 sgt_info->importer_exported--;
 		/* If there are still importers just break, if no then continue with final cleanup */
 		if (sgt_info->importer_exported)
@@ -180,15 +181,17 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 		 * If not and buffer was unexported, clean up shared data and remove that buffer.
 		 */
 		dev_dbg(hyper_dmabuf_private.device,
-			"Buffer %d final released\n", sgt_info->hyper_dmabuf_id);
+			"Buffer {id:%d key:%d %d %d} final released\n",
+			sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
+			sgt_info->hid.rng_key[2]);
 
 		if (!sgt_info->valid && !sgt_info->importer_exported &&
 		    !sgt_info->unexport_scheduled) {
 			hyper_dmabuf_cleanup_sgt_info(sgt_info, false);
-			hyper_dmabuf_remove_exported(id);
+			hyper_dmabuf_remove_exported(hid);
 			kfree(sgt_info);
 			/* store hyper_dmabuf_id in the list for reuse */
-			store_reusable_id(id);
+			store_reusable_hid(hid);
 		}
 
 		break;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h
index 71ee358..36638928 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h
@@ -25,6 +25,6 @@
 #ifndef __HYPER_DMABUF_REMOTE_SYNC_H__
 #define __HYPER_DMABUF_REMOTE_SYNC_H__
 
-int hyper_dmabuf_remote_sync(int id, int ops);
+int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops);
 
 #endif // __HYPER_DMABUF_REMOTE_SYNC_H__
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
index 9952b3f..991a8d4 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -51,7 +51,7 @@ struct vmap_vaddr_list {
 
 /* Exporter builds pages_info before sharing pages */
 struct hyper_dmabuf_pages_info {
-        int hyper_dmabuf_id; /* unique id to reference dmabuf in source domain */
+        hyper_dmabuf_id_t hid; /* unique id to reference dmabuf in source domain */
         int hyper_dmabuf_rdomain; /* currently considering just one remote domain accessing it */
         int frst_ofst; /* offset of data in the first page */
         int last_len; /* length of data in the last page */
@@ -64,22 +64,27 @@ struct hyper_dmabuf_pages_info {
  * Exporter stores references to sgt in a hash table
  * Exporter keeps these references for synchronization and tracking purposes
  *
- * Importer use this structure exporting to other drivers in the same domain */
+ * Importer uses this structure when exporting to other drivers in the same domain
+ */
 struct hyper_dmabuf_sgt_info {
-        int hyper_dmabuf_id; /* unique id to reference dmabuf in remote domain */
+        hyper_dmabuf_id_t hid; /* unique id to reference dmabuf in remote domain */
 	int hyper_dmabuf_rdomain; /* domain importing this sgt */
 
 	struct dma_buf *dma_buf; /* needed to store this for freeing it later */
 	int nents; /* number of pages, which may be different than sgt->nents */
+
+	/* list of remote activities on dma_buf */
 	struct sgt_list *active_sgts;
 	struct attachment_list *active_attached;
 	struct kmap_vaddr_list *va_kmapped;
 	struct vmap_vaddr_list *va_vmapped;
-	bool valid;
+
+	bool valid; /* set to 0 once unexported. Needed to prevent further mapping by importer */
 	int importer_exported; /* exported locally on importer's side */
 	void *refs_info; /* hypervisor-specific info for the references */
 	struct delayed_work unexport_work;
 	bool unexport_scheduled;
+
 	/* owner of buffer
 	 * TODO: that is naive as the buffer may be reused by
 	 * another userspace app, so a list of struct file should be kept here
@@ -94,13 +99,16 @@ struct hyper_dmabuf_sgt_info {
  * Importer store these references in the table and map it in
  * its own memory map once userspace asks for reference for the buffer */
 struct hyper_dmabuf_imported_sgt_info {
-	int hyper_dmabuf_id; /* unique id to reference dmabuf (HYPER_DMABUF_ID_IMPORTER(source domain id, exporter's hyper_dmabuf_id */
+	hyper_dmabuf_id_t hid; /* unique id for shared dmabuf imported */
+
 	int ref_handle; /* reference number of top level addressing page of shared pages */
-	int frst_ofst;	/* start offset in shared page #1 */
+	int frst_ofst;	/* start offset in first shared page */
 	int last_len;	/* length of data in the last shared page */
 	int nents;	/* number of pages to be shared */
+
 	struct dma_buf *dma_buf;
 	struct sg_table *sgt; /* sgt pointer after importing buffer */
+
 	void *refs_info;
 	bool valid;
 	int num_importers;
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
index 0533e4d..80741c1 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
@@ -29,8 +29,6 @@
 #include "xen/xenbus.h"
 #include "../hyper_dmabuf_msg.h"
 
-#define MAX_NUMBER_OF_OPERANDS 9
-
 DEFINE_RING_TYPES(xen_comm, struct hyper_dmabuf_req, struct hyper_dmabuf_resp);
 
 struct xen_comm_tx_ring_info {
diff --git a/include/uapi/xen/hyper_dmabuf.h b/include/uapi/xen/hyper_dmabuf.h
index 2eff3a8e..992a542 100644
--- a/include/uapi/xen/hyper_dmabuf.h
+++ b/include/uapi/xen/hyper_dmabuf.h
@@ -25,6 +25,11 @@
 #ifndef __LINUX_PUBLIC_HYPER_DMABUF_H__
 #define __LINUX_PUBLIC_HYPER_DMABUF_H__
 
+typedef struct {
+        int id;
+        int rng_key[3]; /* 12-byte random number */
+} hyper_dmabuf_id_t;
+
 #define IOCTL_HYPER_DMABUF_TX_CH_SETUP \
 _IOC(_IOC_NONE, 'G', 0, sizeof(struct ioctl_hyper_dmabuf_tx_ch_setup))
 struct ioctl_hyper_dmabuf_tx_ch_setup {
@@ -50,7 +55,7 @@ struct ioctl_hyper_dmabuf_export_remote {
 	/* Domain id to which buffer should be exported */
 	int remote_domain;
 	/* exported dma buf id */
-	int hyper_dmabuf_id;
+	hyper_dmabuf_id_t hid;
 	int private[4];
 };
 
@@ -59,7 +64,7 @@ _IOC(_IOC_NONE, 'G', 3, sizeof(struct ioctl_hyper_dmabuf_export_fd))
 struct ioctl_hyper_dmabuf_export_fd {
 	/* IN parameters */
 	/* hyper dmabuf id to be imported */
-	int hyper_dmabuf_id;
+	hyper_dmabuf_id_t hid;
 	/* flags */
 	int flags;
 	/* OUT parameters */
@@ -72,7 +77,7 @@ _IOC(_IOC_NONE, 'G', 4, sizeof(struct ioctl_hyper_dmabuf_unexport))
 struct ioctl_hyper_dmabuf_unexport {
 	/* IN parameters */
 	/* hyper dmabuf id to be unexported */
-	int hyper_dmabuf_id;
+	hyper_dmabuf_id_t hid;
 	/* delay in ms by which unexport processing will be postponed */
 	int delay_ms;
 	/* OUT parameters */
@@ -85,7 +90,7 @@ _IOC(_IOC_NONE, 'G', 5, sizeof(struct ioctl_hyper_dmabuf_query))
 struct ioctl_hyper_dmabuf_query {
 	/* in parameters */
 	/* hyper dmabuf id to be queried */
-	int hyper_dmabuf_id;
+	hyper_dmabuf_id_t hid;
 	/* item to be queried */
 	int item;
 	/* OUT parameters */
-- 
2.7.4

* [RFC PATCH 35/60] hyper_dmabuf: 128bit hyper_dmabuf_id with random keys
@ 2017-12-19 19:29   ` Dongwon Kim
  0 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: xen-devel, mateuszx.potrola, dri-devel, dongwon.kim

The length of hyper_dmabuf_id is increased to 128 bits by adding
a random key (96 bits) to the id. This prevents an unauthorized
application on the importer VM from leaking the id by guessing it.

hyper_dmabuf_id_t is now defined as:

	typedef struct {
	    int id;
	    int rng_key[3];
	} hyper_dmabuf_id_t;

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        |   2 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        |   3 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c         |  57 ++++--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h         |  17 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c        |  51 +++---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 199 +++++++++++++--------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c       |  87 ++++++---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h       |  10 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        | 115 +++++++-----
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h        |   2 +-
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c    |  21 ++-
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h    |   2 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h     |  20 ++-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h   |   2 -
 include/uapi/xen/hyper_dmabuf.h                    |  13 +-
 15 files changed, 372 insertions(+), 229 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 92d710e..c802c3e 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -30,9 +30,9 @@
 #include <linux/module.h>
 #include <linux/workqueue.h>
 #include <linux/device.h>
+#include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_conf.h"
 #include "hyper_dmabuf_msg.h"
-#include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_id.h"
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
index 91fda04..ffe4d53 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -26,11 +26,12 @@
 #define __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
 
 #include <linux/device.h>
+#include <xen/hyper_dmabuf.h>
 
 struct hyper_dmabuf_req;
 
 struct list_reusable_id {
-	int id;
+	hyper_dmabuf_id_t hid;
 	struct list_head list;
 };
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
index fe95091..f59dee3 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
@@ -28,13 +28,14 @@
 
 #include <linux/list.h>
 #include <linux/slab.h>
-#include "hyper_dmabuf_msg.h"
+#include <linux/random.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_id.h"
+#include "hyper_dmabuf_msg.h"
 
 extern struct hyper_dmabuf_private hyper_dmabuf_private;
 
-void store_reusable_id(int id)
+void store_reusable_hid(hyper_dmabuf_id_t hid)
 {
 	struct list_reusable_id *reusable_head = hyper_dmabuf_private.id_queue;
 	struct list_reusable_id *new_reusable;
@@ -47,15 +48,15 @@ void store_reusable_id(int id)
 		return;
 	}
 
-	new_reusable->id = id;
+	new_reusable->hid = hid;
 
 	list_add(&new_reusable->list, &reusable_head->list);
 }
 
-static int retrieve_reusable_id(void)
+static hyper_dmabuf_id_t retrieve_reusable_hid(void)
 {
 	struct list_reusable_id *reusable_head = hyper_dmabuf_private.id_queue;
-	int id;
+	hyper_dmabuf_id_t hid = {-1, {0,0,0}};
 
 	/* check there is reusable id */
 	if (!list_empty(&reusable_head->list)) {
@@ -64,12 +65,11 @@ static int retrieve_reusable_id(void)
 						 list);
 
 		list_del(&reusable_head->list);
-		id = reusable_head->id;
+		hid = reusable_head->hid;
 		kfree(reusable_head);
-		return id;
 	}
 
-	return -ENOENT;
+	return hid;
 }
 
 void destroy_reusable_list(void)
@@ -92,31 +92,50 @@ void destroy_reusable_list(void)
 	}
 }
 
-int hyper_dmabuf_get_id(void)
+hyper_dmabuf_id_t hyper_dmabuf_get_hid(void)
 {
-	static int id = 0;
+	static int count = 0;
+	hyper_dmabuf_id_t hid;
 	struct list_reusable_id *reusable_head;
-	int ret;
 
-	/* first cla to hyper_dmabuf_get_id */
-	if (id == 0) {
+	/* first call to hyper_dmabuf_get_id */
+	if (count == 0) {
 		reusable_head = kmalloc(sizeof(*reusable_head), GFP_KERNEL);
 
 		if (!reusable_head) {
 			dev_err(hyper_dmabuf_private.device,
 				"No memory left to be allocated\n");
-			return -ENOMEM;
+			return (hyper_dmabuf_id_t){-1, {0,0,0}};
 		}
 
-		reusable_head->id = -1; /* list head have invalid id */
+		reusable_head->hid.id = -1; /* list head has an invalid id */
 		INIT_LIST_HEAD(&reusable_head->list);
 		hyper_dmabuf_private.id_queue = reusable_head;
 	}
 
-	ret = retrieve_reusable_id();
+	hid = retrieve_reusable_hid();
 
-	if (ret < 0 && id < HYPER_DMABUF_ID_MAX)
-		return HYPER_DMABUF_ID_CREATE(hyper_dmabuf_private.domid, id++);
+	/* create a new hid only if the reusable id queue is empty
+	 * and count is less than the maximum allowed
+	 */
+	if (hid.id == -1 && count < HYPER_DMABUF_ID_MAX) {
+		hid.id = HYPER_DMABUF_ID_CREATE(hyper_dmabuf_private.domid, count++);
+		/* random data embedded in the id for security */
+		get_random_bytes(&hid.rng_key[0], 12);
+	}
+
+	return hid;
+}
+
+bool hyper_dmabuf_hid_keycomp(hyper_dmabuf_id_t hid1, hyper_dmabuf_id_t hid2)
+{
+	int i;
+
+	/* compare keys */
+	for (i=0; i<3; i++) {
+		if (hid1.rng_key[i] != hid2.rng_key[i])
+			return false;
+	}
 
-	return ret;
+	return true;
 }
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
index 4394903..a3336d9 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
@@ -25,24 +25,23 @@
 #ifndef __HYPER_DMABUF_ID_H__
 #define __HYPER_DMABUF_ID_H__
 
-/* Importer combine source domain id with given hyper_dmabuf_id
- * to make it unique in case there are multiple exporters */
+#define HYPER_DMABUF_ID_CREATE(domid, cnt) \
+        ((((domid) & 0xFF) << 24) | ((cnt) & 0xFFFFFF))
 
-#define HYPER_DMABUF_ID_CREATE(domid, id) \
-	((((domid) & 0xFF) << 24) | ((id) & 0xFFFFFF))
-
-#define HYPER_DMABUF_DOM_ID(id) \
-	(((id) >> 24) & 0xFF)
+#define HYPER_DMABUF_DOM_ID(hid) \
+        (((hid.id) >> 24) & 0xFF)
 
 /* currently maximum number of buffers shared
  * at any given moment is limited to 1000
  */
 #define HYPER_DMABUF_ID_MAX 1000
 
-void store_reusable_id(int id);
+void store_reusable_hid(hyper_dmabuf_id_t hid);
 
 void destroy_reusable_list(void);
 
-int hyper_dmabuf_get_id(void);
+hyper_dmabuf_id_t hyper_dmabuf_get_hid(void);
+
+bool hyper_dmabuf_hid_keycomp(hyper_dmabuf_id_t hid1, hyper_dmabuf_id_t hid2);
 
 #endif /*__HYPER_DMABUF_ID_H*/
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
index 34dfa18..2bf0835 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
@@ -33,11 +33,11 @@
 #include <linux/dma-buf.h>
 #include <xen/grant_table.h>
 #include <asm/xen/page.h>
+#include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_imp.h"
 #include "hyper_dmabuf_id.h"
 #include "hyper_dmabuf_msg.h"
-#include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_list.h"
 
 extern struct hyper_dmabuf_private hyper_dmabuf_private;
@@ -258,15 +258,20 @@ int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int fo
 
 #define WAIT_AFTER_SYNC_REQ 0
 
-inline int hyper_dmabuf_sync_request(int id, int dmabuf_ops)
+inline int hyper_dmabuf_sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
 {
 	struct hyper_dmabuf_req *req;
 	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
-	int operands[2];
+	int operands[5];
+	int i;
 	int ret;
 
-	operands[0] = id;
-	operands[1] = dmabuf_ops;
+	operands[0] = hid.id;
+
+	for (i=0; i<3; i++)
+		operands[i+1] = hid.rng_key[i];
+
+	operands[4] = dmabuf_ops;
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
@@ -279,7 +284,7 @@ inline int hyper_dmabuf_sync_request(int id, int dmabuf_ops)
 	hyper_dmabuf_create_request(req, HYPER_DMABUF_OPS_TO_SOURCE, &operands[0]);
 
 	/* send request and wait for a response */
-	ret = ops->send_req(HYPER_DMABUF_DOM_ID(id), req, WAIT_AFTER_SYNC_REQ);
+	ret = ops->send_req(HYPER_DMABUF_DOM_ID(hid), req, WAIT_AFTER_SYNC_REQ);
 
 	kfree(req);
 
@@ -297,7 +302,7 @@ static int hyper_dmabuf_ops_attach(struct dma_buf* dmabuf, struct device* dev,
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_ATTACH);
 
 	if (ret < 0) {
@@ -319,7 +324,7 @@ static void hyper_dmabuf_ops_detach(struct dma_buf* dmabuf, struct dma_buf_attac
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_DETACH);
 
 	if (ret < 0) {
@@ -358,7 +363,7 @@ static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachme
                 goto err_free_sg;
         }
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_MAP);
 
 	kfree(page_info->pages);
@@ -381,8 +386,8 @@ static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachme
 }
 
 static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
-						struct sg_table *sg,
-						enum dma_data_direction dir)
+				   struct sg_table *sg,
+				   enum dma_data_direction dir)
 {
 	struct hyper_dmabuf_imported_sgt_info *sgt_info;
 	int ret;
@@ -397,7 +402,7 @@ static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
 	sg_free_table(sg);
 	kfree(sg);
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_UNMAP);
 
 	if (ret < 0) {
@@ -437,7 +442,7 @@ static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 	final_release = sgt_info && !sgt_info->valid &&
 		        !sgt_info->num_importers;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_RELEASE);
 	if (ret < 0) {
 		dev_warn(hyper_dmabuf_private.device,
@@ -449,7 +454,7 @@ static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 	 * That has to be done after sending sync request
 	 */
 	if (final_release) {
-		hyper_dmabuf_remove_imported(sgt_info->hyper_dmabuf_id);
+		hyper_dmabuf_remove_imported(sgt_info->hid);
 		kfree(sgt_info);
 	}
 }
@@ -464,7 +469,7 @@ static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf, enum dma_da
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
@@ -484,7 +489,7 @@ static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf, enum dma_data
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_END_CPU_ACCESS);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
@@ -504,7 +509,7 @@ static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf, unsigned long
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_KMAP_ATOMIC);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
@@ -524,7 +529,7 @@ static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf, unsigned long
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
@@ -542,7 +547,7 @@ static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_KMAP);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
@@ -562,7 +567,7 @@ static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum,
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_KUNMAP);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
@@ -580,7 +585,7 @@ static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf, struct vm_area_struct *
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_MMAP);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
@@ -600,7 +605,7 @@ static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_VMAP);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
@@ -620,7 +625,7 @@ static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
 
 	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hyper_dmabuf_id,
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_VUNMAP);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device,
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index f7d98c1..f1581d5 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -35,13 +35,12 @@
 #include <linux/dma-buf.h>
 #include <linux/delay.h>
 #include <linux/list.h>
-#include <xen/hyper_dmabuf.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_id.h"
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_ioctl.h"
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_msg.h"
-#include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_id.h"
 #include "hyper_dmabuf_imp.h"
 #include "hyper_dmabuf_query.h"
 
@@ -93,6 +92,8 @@ static int hyper_dmabuf_export_remote(struct file *filp, void *data)
 	struct hyper_dmabuf_sgt_info *sgt_info;
 	struct hyper_dmabuf_req *req;
 	int operands[MAX_NUMBER_OF_OPERANDS];
+	hyper_dmabuf_id_t hid;
+	int i;
 	int ret = 0;
 
 	if (!data) {
@@ -113,25 +114,27 @@ static int hyper_dmabuf_export_remote(struct file *filp, void *data)
 	 * to the same domain and if yes and it's valid sgt_info,
 	 * it returns hyper_dmabuf_id of pre-exported sgt_info
 	 */
-	ret = hyper_dmabuf_find_id_exported(dma_buf, export_remote_attr->remote_domain);
-	sgt_info = hyper_dmabuf_find_exported(ret);
-	if (ret != -ENOENT && sgt_info != NULL) {
-		if (sgt_info->valid) {
-			/*
-			 * Check if unexport is already scheduled for that buffer,
-			 * if so try to cancel it. If that will fail, buffer needs
-			 * to be reexport once again.
-			 */
-			if (sgt_info->unexport_scheduled) {
-				if (!cancel_delayed_work_sync(&sgt_info->unexport_work)) {
-					dma_buf_put(dma_buf);
-					goto reexport;
+	hid = hyper_dmabuf_find_hid_exported(dma_buf, export_remote_attr->remote_domain);
+	if (hid.id != -1) {
+		sgt_info = hyper_dmabuf_find_exported(hid);
+		if (sgt_info != NULL) {
+			if (sgt_info->valid) {
+				/*
+				 * Check if unexport is already scheduled for that buffer,
+				 * if so, try to cancel it. If that fails, the buffer
+				 * needs to be re-exported once again.
+				 */
+				if (sgt_info->unexport_scheduled) {
+					if (!cancel_delayed_work_sync(&sgt_info->unexport_work)) {
+						dma_buf_put(dma_buf);
+						goto reexport;
+					}
+					sgt_info->unexport_scheduled = 0;
 				}
-				sgt_info->unexport_scheduled = 0;
+				dma_buf_put(dma_buf);
+				export_remote_attr->hid = hid;
+				return 0;
 			}
-			dma_buf_put(dma_buf);
-			export_remote_attr->hyper_dmabuf_id = ret;
-			return 0;
 		}
 	}
 
@@ -142,11 +145,6 @@ static int hyper_dmabuf_export_remote(struct file *filp, void *data)
 		return PTR_ERR(attachment);
 	}
 
-	/* Clear ret, as that will cause whole ioctl to return failure
-	 * to userspace, which is not true
-	 */
-	ret = 0;
-
 	sgt = dma_buf_map_attachment(attachment, DMA_BIDIRECTIONAL);
 
 	if (IS_ERR(sgt)) {
@@ -161,7 +159,15 @@ static int hyper_dmabuf_export_remote(struct file *filp, void *data)
 		return -ENOMEM;
 	}
 
-	sgt_info->hyper_dmabuf_id = hyper_dmabuf_get_id();
+	sgt_info->hid = hyper_dmabuf_get_hid();
+
+	/* no more exported dmabuf allowed */
+	if(sgt_info->hid.id == -1) {
+		dev_err(hyper_dmabuf_private.device,
+			"exceeds allowed number of dmabuf to be exported\n");
+		/* TODO: Cleanup sgt */
+		return -ENOMEM;
+	}
 
 	/* TODO: We might need to consider using port number on event channel? */
 	sgt_info->hyper_dmabuf_rdomain = export_remote_attr->remote_domain;
@@ -198,8 +204,8 @@ static int hyper_dmabuf_export_remote(struct file *filp, void *data)
 
 	sgt_info->active_sgts->sgt = sgt;
 	sgt_info->active_attached->attach = attachment;
-	sgt_info->va_kmapped->vaddr = NULL; /* first vaddr is NULL */
-	sgt_info->va_vmapped->vaddr = NULL; /* first vaddr is NULL */
+	sgt_info->va_kmapped->vaddr = NULL;
+	sgt_info->va_vmapped->vaddr = NULL;
 
 	/* initialize list of sgt, attachment and vaddr for dmabuf sync
 	 * via shadow dma-buf
@@ -221,23 +227,27 @@ static int hyper_dmabuf_export_remote(struct file *filp, void *data)
 	hyper_dmabuf_register_exported(sgt_info);
 
 	page_info->hyper_dmabuf_rdomain = sgt_info->hyper_dmabuf_rdomain;
-	page_info->hyper_dmabuf_id = sgt_info->hyper_dmabuf_id; /* may not be needed */
+	page_info->hid = sgt_info->hid; /* may not be needed */
 
-	export_remote_attr->hyper_dmabuf_id = sgt_info->hyper_dmabuf_id;
+	export_remote_attr->hid = sgt_info->hid;
 
 	/* now create request for importer via ring */
-	operands[0] = page_info->hyper_dmabuf_id;
-	operands[1] = page_info->nents;
-	operands[2] = page_info->frst_ofst;
-	operands[3] = page_info->last_len;
-	operands[4] = ops->share_pages (page_info->pages, export_remote_attr->remote_domain,
+	operands[0] = page_info->hid.id;
+
+	for (i=0; i<3; i++)
+		operands[i+1] = page_info->hid.rng_key[i];
+
+	operands[4] = page_info->nents;
+	operands[5] = page_info->frst_ofst;
+	operands[6] = page_info->last_len;
+	operands[7] = ops->share_pages (page_info->pages, export_remote_attr->remote_domain,
 					page_info->nents, &sgt_info->refs_info);
 
-	/* driver/application specific private info, max 32 bytes */
-	operands[5] = export_remote_attr->private[0];
-	operands[6] = export_remote_attr->private[1];
-	operands[7] = export_remote_attr->private[2];
-	operands[8] = export_remote_attr->private[3];
+	/* driver/application specific private info, max 4x4 bytes */
+	operands[8] = export_remote_attr->private[0];
+	operands[9] = export_remote_attr->private[1];
+	operands[10] = export_remote_attr->private[2];
+	operands[11] = export_remote_attr->private[3];
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
@@ -270,7 +280,7 @@ static int hyper_dmabuf_export_remote(struct file *filp, void *data)
 	kfree(req);
 
 fail_map_req:
-	hyper_dmabuf_remove_exported(sgt_info->hyper_dmabuf_id);
+	hyper_dmabuf_remove_exported(sgt_info->hid);
 
 fail_export:
 	dma_buf_unmap_attachment(sgt_info->active_attached->attach,
@@ -298,7 +308,8 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 	struct hyper_dmabuf_imported_sgt_info *sgt_info;
 	struct hyper_dmabuf_req *req;
 	struct page **data_pages;
-	int operand;
+	int operands[4];
+	int i;
 	int ret = 0;
 
 	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
@@ -311,7 +322,7 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 	export_fd_attr = (struct ioctl_hyper_dmabuf_export_fd *)data;
 
 	/* look for dmabuf for the id */
-	sgt_info = hyper_dmabuf_find_imported(export_fd_attr->hyper_dmabuf_id);
+	sgt_info = hyper_dmabuf_find_imported(export_fd_attr->hid);
 
 	/* can't find sgt from the table */
 	if (!sgt_info) {
@@ -324,9 +335,14 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 	sgt_info->num_importers++;
 
 	/* send notification for export_fd to exporter */
-	operand = sgt_info->hyper_dmabuf_id;
+	operands[0] = sgt_info->hid.id;
 
-	dev_dbg(hyper_dmabuf_private.device, "Exporting fd of buffer %d\n", operand);
+	for (i=0; i<3; i++)
+		operands[i+1] = sgt_info->hid.rng_key[i];
+
+	dev_dbg(hyper_dmabuf_private.device, "Exporting fd of buffer {id:%d key:%d %d %d}\n",
+		sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
+		sgt_info->hid.rng_key[2]);
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
@@ -336,30 +352,37 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 		return -ENOMEM;
 	}
 
-	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT_FD, &operand);
+	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT_FD, &operands[0]);
 
-	ret = ops->send_req(HYPER_DMABUF_DOM_ID(operand), req, true);
+	ret = ops->send_req(HYPER_DMABUF_DOM_ID(sgt_info->hid), req, true);
 
 	if (ret < 0) {
 		/* in case of timeout, the other end will eventually receive the request, so we need to undo it */
-		hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT_FD_FAILED, &operand);
-		ops->send_req(HYPER_DMABUF_DOM_ID(operand), req, false);
+		hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT_FD_FAILED, &operands[0]);
+		ops->send_req(HYPER_DMABUF_DOM_ID(sgt_info->hid), req, false);
 		kfree(req);
 		dev_err(hyper_dmabuf_private.device, "Failed to create sgt or notify exporter\n");
 		sgt_info->num_importers--;
 		mutex_unlock(&hyper_dmabuf_private.lock);
 		return ret;
 	}
+
 	kfree(req);
 
 	if (ret == HYPER_DMABUF_REQ_ERROR) {
 		dev_err(hyper_dmabuf_private.device,
-			"Buffer invalid %d, cannot import\n", operand);
+			"Buffer invalid {id:%d key:%d %d %d}, cannot import\n",
+			sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
+			sgt_info->hid.rng_key[2]);
+
 		sgt_info->num_importers--;
 		mutex_unlock(&hyper_dmabuf_private.lock);
 		return -EINVAL;
 	} else {
-		dev_dbg(hyper_dmabuf_private.device, "Can import buffer %d\n", operand);
+		dev_dbg(hyper_dmabuf_private.device, "Can import buffer {id:%d key:%d %d %d}\n",
+			sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
+			sgt_info->hid.rng_key[2]);
+
 		ret = 0;
 	}
 
@@ -367,22 +390,29 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 		  "%s Found buffer gref %d  off %d last len %d nents %d domain %d\n", __func__,
 		  sgt_info->ref_handle, sgt_info->frst_ofst,
 		  sgt_info->last_len, sgt_info->nents,
-		  HYPER_DMABUF_DOM_ID(sgt_info->hyper_dmabuf_id));
+		  HYPER_DMABUF_DOM_ID(sgt_info->hid));
 
 	if (!sgt_info->sgt) {
 		dev_dbg(hyper_dmabuf_private.device,
-			"%s buffer %d pages not mapped yet\n", __func__,sgt_info->hyper_dmabuf_id);
+			"%s buffer {id:%d key:%d %d %d} pages not mapped yet\n", __func__,
+			sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
+			sgt_info->hid.rng_key[2]);
+
 		data_pages = ops->map_shared_pages(sgt_info->ref_handle,
-						   HYPER_DMABUF_DOM_ID(sgt_info->hyper_dmabuf_id),
+						   HYPER_DMABUF_DOM_ID(sgt_info->hid),
 						   sgt_info->nents,
 						   &sgt_info->refs_info);
 
 		if (!data_pages) {
-			dev_err(hyper_dmabuf_private.device, "Cannot map pages of buffer %d\n", operand);
+			dev_err(hyper_dmabuf_private.device,
+				"Cannot map pages of buffer {id:%d key:%d %d %d}\n",
+				sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
+				sgt_info->hid.rng_key[2]);
+
 			sgt_info->num_importers--;
 			req = kcalloc(1, sizeof(*req), GFP_KERNEL);
-			hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT_FD_FAILED, &operand);
-			ops->send_req(HYPER_DMABUF_DOM_ID(operand), req, false);
+			hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT_FD_FAILED, &operands[0]);
+			ops->send_req(HYPER_DMABUF_DOM_ID(sgt_info->hid), req, false);
 			kfree(req);
 			mutex_unlock(&hyper_dmabuf_private.lock);
 			return -EINVAL;
@@ -401,6 +431,7 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 	}
 
 	mutex_unlock(&hyper_dmabuf_private.lock);
+
 	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
 	return ret;
 }
@@ -411,8 +442,8 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 static void hyper_dmabuf_delayed_unexport(struct work_struct *work)
 {
 	struct hyper_dmabuf_req *req;
-	int hyper_dmabuf_id;
-	int ret;
+	int i, ret;
+	int operands[4];
 	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
 	struct hyper_dmabuf_sgt_info *sgt_info =
 		container_of(work, struct hyper_dmabuf_sgt_info, unexport_work.work);
@@ -420,10 +451,11 @@ static void hyper_dmabuf_delayed_unexport(struct work_struct *work)
 	if (!sgt_info)
 		return;
 
-	hyper_dmabuf_id = sgt_info->hyper_dmabuf_id;
-
 	dev_dbg(hyper_dmabuf_private.device,
-		"Marking buffer %d as invalid\n", hyper_dmabuf_id);
+		"Marking buffer {id:%d key:%d %d %d} as invalid\n",
+		sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
+		sgt_info->hid.rng_key[2]);
+
 	/* no longer valid */
 	sgt_info->valid = 0;
 
@@ -435,12 +467,20 @@ static void hyper_dmabuf_delayed_unexport(struct work_struct *work)
 		return;
 	}
 
-	hyper_dmabuf_create_request(req, HYPER_DMABUF_NOTIFY_UNEXPORT, &hyper_dmabuf_id);
+	operands[0] = sgt_info->hid.id;
+
+	for (i=0; i<3; i++)
+		operands[i+1] = sgt_info->hid.rng_key[i];
+
+	hyper_dmabuf_create_request(req, HYPER_DMABUF_NOTIFY_UNEXPORT, &operands[0]);
 
 	/* Now send unexport request to remote domain, marking that buffer should not be used anymore */
 	ret = ops->send_req(sgt_info->hyper_dmabuf_rdomain, req, true);
 	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device, "unexport message for buffer %d failed\n", hyper_dmabuf_id);
+		dev_err(hyper_dmabuf_private.device,
+			"unexport message for buffer {id:%d key:%d %d %d} failed\n",
+			sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
+			sgt_info->hid.rng_key[2]);
 	}
 
 	/* free msg */
@@ -456,12 +496,15 @@ static void hyper_dmabuf_delayed_unexport(struct work_struct *work)
 	 */
 	if (!sgt_info->importer_exported) {
 		dev_dbg(hyper_dmabuf_private.device,
-			"claning up buffer %d completly\n", hyper_dmabuf_id);
+			"claning up buffer {id:%d key:%d %d %d} completly\n",
+			sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
+			sgt_info->hid.rng_key[2]);
+
 		hyper_dmabuf_cleanup_sgt_info(sgt_info, false);
-		hyper_dmabuf_remove_exported(hyper_dmabuf_id);
-		kfree(sgt_info);
+		hyper_dmabuf_remove_exported(sgt_info->hid);
 		/* register hyper_dmabuf_id to the list for reuse */
-		store_reusable_id(hyper_dmabuf_id);
+		store_reusable_hid(sgt_info->hid);
+		kfree(sgt_info);
 	}
 }
 
@@ -482,9 +525,12 @@ static int hyper_dmabuf_unexport(struct file *filp, void *data)
 	unexport_attr = (struct ioctl_hyper_dmabuf_unexport *)data;
 
 	/* find dmabuf in export list */
-	sgt_info = hyper_dmabuf_find_exported(unexport_attr->hyper_dmabuf_id);
+	sgt_info = hyper_dmabuf_find_exported(unexport_attr->hid);
 
-	dev_dbg(hyper_dmabuf_private.device, "scheduling unexport of buffer %d\n", unexport_attr->hyper_dmabuf_id);
+	dev_dbg(hyper_dmabuf_private.device,
+		"scheduling unexport of buffer {id:%d key:%d %d %d}\n",
+		unexport_attr->hid.id, unexport_attr->hid.rng_key[0],
+		unexport_attr->hid.rng_key[1], unexport_attr->hid.rng_key[2]);
 
 	/* failed to find corresponding entry in export list */
 	if (sgt_info == NULL) {
@@ -518,8 +564,8 @@ static int hyper_dmabuf_query(struct file *filp, void *data)
 
 	query_attr = (struct ioctl_hyper_dmabuf_query *)data;
 
-	sgt_info = hyper_dmabuf_find_exported(query_attr->hyper_dmabuf_id);
-	imported_sgt_info = hyper_dmabuf_find_imported(query_attr->hyper_dmabuf_id);
+	sgt_info = hyper_dmabuf_find_exported(query_attr->hid);
+	imported_sgt_info = hyper_dmabuf_find_imported(query_attr->hid);
 
 	/* if dmabuf can't be found in both lists, return */
 	if (!(sgt_info && imported_sgt_info)) {
@@ -544,7 +590,7 @@ static int hyper_dmabuf_query(struct file *filp, void *data)
 			if (sgt_info) {
 				query_attr->info = 0xFFFFFFFF; /* myself */
 			} else {
-				query_attr->info = (HYPER_DMABUF_DOM_ID(imported_sgt_info->hyper_dmabuf_id));
+				query_attr->info = HYPER_DMABUF_DOM_ID(imported_sgt_info->hid);
 			}
 			break;
 
@@ -674,10 +720,11 @@ static void hyper_dmabuf_emergency_release(struct hyper_dmabuf_sgt_info* sgt_inf
 
 	if (sgt_info->filp == filp) {
 		dev_dbg(hyper_dmabuf_private.device,
-			"Executing emergency release of buffer %d\n",
-			 sgt_info->hyper_dmabuf_id);
+			"Executing emergency release of buffer {id:%d key:%d %d %d}\n",
+			 sgt_info->hid.id, sgt_info->hid.rng_key[0],
+			 sgt_info->hid.rng_key[1], sgt_info->hid.rng_key[2]);
 
-		unexport_attr.hyper_dmabuf_id = sgt_info->hyper_dmabuf_id;
+		unexport_attr.hid = sgt_info->hid;
 		unexport_attr.delay_ms = 0;
 
 		hyper_dmabuf_unexport(filp, &unexport_attr);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
index 90c8c56..21fc7d0 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
@@ -36,6 +36,7 @@
 #include <linux/dma-buf.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_id.h"
 
 extern struct hyper_dmabuf_private hyper_dmabuf_private;
 
@@ -51,13 +52,15 @@ static ssize_t hyper_dmabuf_imported_show(struct device *drv, struct device_attr
 	size_t total = 0;
 
 	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node) {
-		int id = info_entry->info->hyper_dmabuf_id;
+		hyper_dmabuf_id_t hid = info_entry->info->hid;
 		int nents = info_entry->info->nents;
 		bool valid = info_entry->info->valid;
 		int num_importers = info_entry->info->num_importers;
 		total += nents;
-		count += scnprintf(buf + count, PAGE_SIZE - count, "id:%d, nents:%d, v:%c, numi:%d\n",
-				   id, nents, (valid ? 't' : 'f'), num_importers);
+		count += scnprintf(buf + count, PAGE_SIZE - count,
+				   "hid:{id:%d keys:%d %d %d}, nents:%d, v:%c, numi:%d\n",
+				   hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2],
+				   nents, (valid ? 't' : 'f'), num_importers);
 	}
 	count += scnprintf(buf + count, PAGE_SIZE - count, "total nents: %lu\n",
 			   total);
@@ -73,13 +76,15 @@ static ssize_t hyper_dmabuf_exported_show(struct device *drv, struct device_attr
 	size_t total = 0;
 
 	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node) {
-		int id = info_entry->info->hyper_dmabuf_id;
+		hyper_dmabuf_id_t hid = info_entry->info->hid;
 		int nents = info_entry->info->nents;
 		bool valid = info_entry->info->valid;
 		int importer_exported = info_entry->info->importer_exported;
 		total += nents;
-		count += scnprintf(buf + count, PAGE_SIZE - count, "id:%d, nents:%d, v:%c, ie:%d\n",
-				   id, nents, (valid ? 't' : 'f'), importer_exported);
+		count += scnprintf(buf + count, PAGE_SIZE - count,
+				   "hid:{hid:%d keys:%d %d %d}, nents:%d, v:%c, ie:%d\n",
+				   hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2],
+				   nents, (valid ? 't' : 'f'), importer_exported);
 	}
 	count += scnprintf(buf + count, PAGE_SIZE - count, "total nents: %lu\n",
 			   total);
@@ -144,7 +149,7 @@ int hyper_dmabuf_register_exported(struct hyper_dmabuf_sgt_info *info)
 	info_entry->info = info;
 
 	hash_add(hyper_dmabuf_hash_exported, &info_entry->node,
-		 info_entry->info->hyper_dmabuf_id);
+		 info_entry->info->hid.id);
 
 	return 0;
 }
@@ -164,74 +169,102 @@ int hyper_dmabuf_register_imported(struct hyper_dmabuf_imported_sgt_info* info)
 	info_entry->info = info;
 
 	hash_add(hyper_dmabuf_hash_imported, &info_entry->node,
-		 info_entry->info->hyper_dmabuf_id);
+		 info_entry->info->hid.id);
 
 	return 0;
 }
 
-struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(int id)
+struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid)
 {
 	struct hyper_dmabuf_info_entry_exported *info_entry;
 	int bkt;
 
 	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
-		if(info_entry->info->hyper_dmabuf_id == id)
-			return info_entry->info;
+		/* checking hid.id first */
+		if(info_entry->info->hid.id == hid.id) {
+			/* then key is compared */
+			if(hyper_dmabuf_hid_keycomp(info_entry->info->hid, hid))
+				return info_entry->info;
+			/* if the keys don't match, the given hid is invalid, so return NULL */
+			else
+				break;
+		}
 
 	return NULL;
 }
 
 /* search for a pre-exported sgt and return its hid if it exists */
-int hyper_dmabuf_find_id_exported(struct dma_buf *dmabuf, int domid)
+hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf, int domid)
 {
 	struct hyper_dmabuf_info_entry_exported *info_entry;
+	hyper_dmabuf_id_t hid = {-1, {0, 0, 0}};
 	int bkt;
 
 	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
 		if(info_entry->info->dma_buf == dmabuf &&
 		   info_entry->info->hyper_dmabuf_rdomain == domid)
-			return info_entry->info->hyper_dmabuf_id;
+			return info_entry->info->hid;
 
-	return -ENOENT;
+	return hid;
 }
 
-struct hyper_dmabuf_imported_sgt_info *hyper_dmabuf_find_imported(int id)
+struct hyper_dmabuf_imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_id_t hid)
 {
 	struct hyper_dmabuf_info_entry_imported *info_entry;
 	int bkt;
 
 	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
-		if(info_entry->info->hyper_dmabuf_id == id)
-			return info_entry->info;
+		/* checking hid.id first */
+		if(info_entry->info->hid.id == hid.id) {
+			/* then key is compared */
+			if(hyper_dmabuf_hid_keycomp(info_entry->info->hid, hid))
+				return info_entry->info;
+			/* if the keys don't match, the given hid is invalid, so return NULL */
+			else {
+				break;
+			}
+		}
 
 	return NULL;
 }
 
-int hyper_dmabuf_remove_exported(int id)
+int hyper_dmabuf_remove_exported(hyper_dmabuf_id_t hid)
 {
 	struct hyper_dmabuf_info_entry_exported *info_entry;
 	int bkt;
 
 	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
-		if(info_entry->info->hyper_dmabuf_id == id) {
-			hash_del(&info_entry->node);
-			kfree(info_entry);
-			return 0;
+		/* checking hid.id first */
+		if(info_entry->info->hid.id == hid.id) {
+			/* then key is compared */
+			if(hyper_dmabuf_hid_keycomp(info_entry->info->hid, hid)) {
+				hash_del(&info_entry->node);
+				kfree(info_entry);
+				return 0;
+			} else {
+				break;
+			}
 		}
 
 	return -ENOENT;
 }
 
-int hyper_dmabuf_remove_imported(int id)
+int hyper_dmabuf_remove_imported(hyper_dmabuf_id_t hid)
 {
 	struct hyper_dmabuf_info_entry_imported *info_entry;
 	int bkt;
 
 	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
-		if(info_entry->info->hyper_dmabuf_id == id) {
-			hash_del(&info_entry->node);
-			kfree(info_entry);
-			return 0;
+		/* checking hid.id first */
+		if(info_entry->info->hid.id == hid.id) {
+			/* then key is compared */
+			if(hyper_dmabuf_hid_keycomp(info_entry->info->hid, hid)) {
+				hash_del(&info_entry->node);
+				kfree(info_entry);
+				return 0;
+			} else {
+				break;
+			}
 		}
 
 	return -ENOENT;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
index 925b0d1..8f64db8 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
@@ -49,17 +49,17 @@ int hyper_dmabuf_table_destroy(void);
 int hyper_dmabuf_register_exported(struct hyper_dmabuf_sgt_info *info);
 
 /* search for a pre-exported sgt and return its hid if it exists */
-int hyper_dmabuf_find_id_exported(struct dma_buf *dmabuf, int domid);
+hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf, int domid);
 
 int hyper_dmabuf_register_imported(struct hyper_dmabuf_imported_sgt_info* info);
 
-struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(int id);
+struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid);
 
-struct hyper_dmabuf_imported_sgt_info *hyper_dmabuf_find_imported(int id);
+struct hyper_dmabuf_imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_id_t hid);
 
-int hyper_dmabuf_remove_exported(int id);
+int hyper_dmabuf_remove_exported(hyper_dmabuf_id_t hid);
 
-int hyper_dmabuf_remove_imported(int id);
+int hyper_dmabuf_remove_imported(hyper_dmabuf_id_t hid);
 
 void hyper_dmabuf_foreach_exported(
 	void (*func)(struct hyper_dmabuf_sgt_info *, void *attr),
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index 5f64261..12ebad3 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -60,32 +60,36 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_req *req,
 	case HYPER_DMABUF_EXPORT:
 		/* exporting pages for dmabuf */
 		/* command : HYPER_DMABUF_EXPORT,
-		 * operands0 : hyper_dmabuf_id
-		 * operands1 : number of pages to be shared
-		 * operands2 : offset of data in the first page
-		 * operands3 : length of data in the last page
-		 * operands4 : top-level reference number for shared pages
-		 * operands5~8 : Driver-specific private data (e.g. graphic buffer's meta info)
+		 * operands0~3 : hyper_dmabuf_id
+		 * operands4 : number of pages to be shared
+		 * operands5 : offset of data in the first page
+		 * operands6 : length of data in the last page
+		 * operands7 : top-level reference number for shared pages
+		 * operands8~11 : Driver-specific private data (e.g. graphic buffer's meta info)
 		 */
-		for (i=0; i < 8; i++)
+		for (i = 0; i < 12; i++)
 			req->operands[i] = operands[i];
 		break;
 
 	case HYPER_DMABUF_NOTIFY_UNEXPORT:
 		/* destroy sg_list for hyper_dmabuf_id on remote side */
 		/* command : DMABUF_DESTROY,
-		 * operands0 : hyper_dmabuf_id
+		 * operands0~3 : hyper_dmabuf_id_t hid
 		 */
-		req->operands[0] = operands[0];
+
+		for (i=0; i < 4; i++)
+			req->operands[i] = operands[i];
 		break;
 
 	case HYPER_DMABUF_EXPORT_FD:
 	case HYPER_DMABUF_EXPORT_FD_FAILED:
 		/* dmabuf fd is being created on imported side or importing failed */
 		/* command : HYPER_DMABUF_EXPORT_FD or HYPER_DMABUF_EXPORT_FD_FAILED,
-		 * operands0 : hyper_dmabuf_id
+		 * operands0~3 : hyper_dmabuf_id
 		 */
-		req->operands[0] = operands[0];
+
+		for (i=0; i < 4; i++)
+			req->operands[i] = operands[i];
 		break;
 
 	case HYPER_DMABUF_OPS_TO_REMOTE:
@@ -98,10 +102,10 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_req *req,
 		/* notifying dmabuf map/unmap to exporter, map will make the driver do shadow mapping
 		* or unmapping for synchronization with original exporter (e.g. i915) */
 		/* command : DMABUF_OPS_TO_SOURCE.
-		 * operands0 : hyper_dmabuf_id
-		 * operands1 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
+		 * operands0~3 : hyper_dmabuf_id
+		 * operands4 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
 		 */
-		for (i = 0; i < 2; i++)
+		for (i = 0; i < 5; i++)
 			req->operands[i] = operands[i];
 		break;
 
@@ -126,12 +130,12 @@ void cmd_process_work(struct work_struct *work)
 	case HYPER_DMABUF_EXPORT:
 		/* exporting pages for dmabuf */
 		/* command : HYPER_DMABUF_EXPORT,
-		 * operands0 : hyper_dmabuf_id
-		 * operands1 : number of pages to be shared
-		 * operands2 : offset of data in the first page
-		 * operands3 : length of data in the last page
-		 * operands4 : top-level reference number for shared pages
-		 * operands5~8 : Driver-specific private data (e.g. graphic buffer's meta info)
+		 * operands0~3 : hyper_dmabuf_id
+		 * operands4 : number of pages to be shared
+		 * operands5 : offset of data in the first page
+		 * operands6 : length of data in the last page
+		 * operands7 : top-level reference number for shared pages
+		 * operands8~11 : Driver-specific private data (e.g. graphic buffer's meta info)
 		 */
 		imported_sgt_info = kcalloc(1, sizeof(*imported_sgt_info), GFP_KERNEL);
 
@@ -141,25 +145,31 @@ void cmd_process_work(struct work_struct *work)
 			break;
 		}
 
-		imported_sgt_info->hyper_dmabuf_id = req->operands[0];
-		imported_sgt_info->frst_ofst = req->operands[2];
-		imported_sgt_info->last_len = req->operands[3];
-		imported_sgt_info->nents = req->operands[1];
-		imported_sgt_info->ref_handle = req->operands[4];
+		imported_sgt_info->hid.id = req->operands[0];
+
+		for (i=0; i<3; i++)
+			imported_sgt_info->hid.rng_key[i] = req->operands[i+1];
+
+		imported_sgt_info->nents = req->operands[4];
+		imported_sgt_info->frst_ofst = req->operands[5];
+		imported_sgt_info->last_len = req->operands[6];
+		imported_sgt_info->ref_handle = req->operands[7];
 
 		dev_dbg(hyper_dmabuf_private.device, "DMABUF was exported\n");
-		dev_dbg(hyper_dmabuf_private.device, "\thyper_dmabuf_id %d\n", req->operands[0]);
-		dev_dbg(hyper_dmabuf_private.device, "\tnents %d\n", req->operands[1]);
-		dev_dbg(hyper_dmabuf_private.device, "\tfirst offset %d\n", req->operands[2]);
-		dev_dbg(hyper_dmabuf_private.device, "\tlast len %d\n", req->operands[3]);
-		dev_dbg(hyper_dmabuf_private.device, "\tgrefid %d\n", req->operands[4]);
+		dev_dbg(hyper_dmabuf_private.device, "\thid{id:%d key:%d %d %d}\n",
+			req->operands[0], req->operands[1], req->operands[2],
+			req->operands[3]);
+		dev_dbg(hyper_dmabuf_private.device, "\tnents %d\n", req->operands[4]);
+		dev_dbg(hyper_dmabuf_private.device, "\tfirst offset %d\n", req->operands[5]);
+		dev_dbg(hyper_dmabuf_private.device, "\tlast len %d\n", req->operands[6]);
+		dev_dbg(hyper_dmabuf_private.device, "\tgrefid %d\n", req->operands[7]);
 
 		for (i=0; i<4; i++)
-			imported_sgt_info->private[i] = req->operands[5+i];
+			imported_sgt_info->private[i] = req->operands[8+i];
 
 		imported_sgt_info->valid = 1;
 		hyper_dmabuf_register_imported(imported_sgt_info);
-		break;
+	break;
 
 	case HYPER_DMABUF_OPS_TO_REMOTE:
 		/* notifying dmabuf map/unmap to importer (probably not needed) */
@@ -182,6 +192,8 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 	struct hyper_dmabuf_req *temp_req;
 	struct hyper_dmabuf_imported_sgt_info *sgt_info;
 	struct hyper_dmabuf_sgt_info *exp_sgt_info;
+	hyper_dmabuf_id_t hid = {req->operands[0], /* hid.id */
+			       {req->operands[1], req->operands[2], req->operands[3]}}; /* hid.rng_key */
 	int ret;
 
 	if (!req) {
@@ -203,12 +215,12 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 	if (req->command == HYPER_DMABUF_NOTIFY_UNEXPORT) {
 		/* destroy sg_list for hyper_dmabuf_id on remote side */
 		/* command : HYPER_DMABUF_NOTIFY_UNEXPORT,
-		 * operands0 : hyper_dmabuf_id
+		 * operands0~3 : hyper_dmabuf_id
 		 */
 		dev_dbg(hyper_dmabuf_private.device,
 			"%s: processing HYPER_DMABUF_NOTIFY_UNEXPORT\n", __func__);
 
-		sgt_info = hyper_dmabuf_find_imported(req->operands[0]);
+		sgt_info = hyper_dmabuf_find_imported(hid);
 
 		if (sgt_info) {
 			/* if anything is still using dma_buf */
@@ -220,7 +232,7 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 				sgt_info->valid = 0;
 			} else {
 				/* No one is using buffer, remove it from imported list */
-				hyper_dmabuf_remove_imported(req->operands[0]);
+				hyper_dmabuf_remove_imported(hid);
 				kfree(sgt_info);
 			}
 		} else {
@@ -236,13 +248,14 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 		 * or unmapping for synchronization with original exporter (e.g. i915) */
 
 		/* command : DMABUF_OPS_TO_SOURCE.
-		 * operands0 : hyper_dmabuf_id
+		 * operands0~3 : hyper_dmabuf_id
 		 * operands1 : enum hyper_dmabuf_ops {....}
 		 */
 		dev_dbg(hyper_dmabuf_private.device,
 			"%s: HYPER_DMABUF_OPS_TO_SOURCE\n", __func__);
 
-		ret = hyper_dmabuf_remote_sync(req->operands[0], req->operands[1]);
+		ret = hyper_dmabuf_remote_sync(hid, req->operands[4]);
+
 		if (ret)
 			req->status = HYPER_DMABUF_REQ_ERROR;
 		else
@@ -255,20 +268,28 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 	if (req->command == HYPER_DMABUF_EXPORT_FD) {
 		/* find a corresponding SGT for the id */
 		dev_dbg(hyper_dmabuf_private.device,
-			"Processing HYPER_DMABUF_EXPORT_FD %d\n", req->operands[0]);
-		exp_sgt_info = hyper_dmabuf_find_exported(req->operands[0]);
+			"Processing HYPER_DMABUF_EXPORT_FD for buffer {id:%d key:%d %d %d}\n",
+			hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
+
+		exp_sgt_info = hyper_dmabuf_find_exported(hid);
 
 		if (!exp_sgt_info) {
 			dev_err(hyper_dmabuf_private.device,
-				"critical err: requested sgt_info can't be found %d\n", req->operands[0]);
+				"critical err: requested sgt_info can't be found for buffer {id:%d key:%d %d %d}\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
+
 			req->status = HYPER_DMABUF_REQ_ERROR;
 		} else if (!exp_sgt_info->valid) {
 			dev_dbg(hyper_dmabuf_private.device,
-				"Buffer no longer valid - cannot export fd %d\n", req->operands[0]);
+				"Buffer no longer valid - cannot export fd for buffer {id:%d key:%d %d %d}\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
+
 			req->status = HYPER_DMABUF_REQ_ERROR;
 		} else {
 			dev_dbg(hyper_dmabuf_private.device,
-				"Buffer still valid - can export fd%d\n", req->operands[0]);
+				"Buffer still valid - can export fd for buffer {id:%d key:%d %d %d}\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
+
 			exp_sgt_info->importer_exported++;
 			req->status = HYPER_DMABUF_REQ_PROCESSED;
 		}
@@ -277,12 +298,16 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 
 	if (req->command == HYPER_DMABUF_EXPORT_FD_FAILED) {
 		dev_dbg(hyper_dmabuf_private.device,
-			"Processing HYPER_DMABUF_EXPORT_FD_FAILED %d\n", req->operands[0]);
-		exp_sgt_info = hyper_dmabuf_find_exported(req->operands[0]);
+			"Processing HYPER_DMABUF_EXPORT_FD_FAILED for buffer {id:%d key:%d %d %d}\n",
+			hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
+
+		exp_sgt_info = hyper_dmabuf_find_exported(hid);
 
 		if (!exp_sgt_info) {
 			dev_err(hyper_dmabuf_private.device,
-				"critical err: requested sgt_info can't be found %d\n", req->operands[0]);
+				"critical err: requested sgt_info can't be found for buffer {id:%d key:%d %d %d}\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
+
 			req->status = HYPER_DMABUF_REQ_ERROR;
 		} else {
 			exp_sgt_info->importer_exported--;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
index 50ce617..636d6f1 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -25,7 +25,7 @@
 #ifndef __HYPER_DMABUF_MSG_H__
 #define __HYPER_DMABUF_MSG_H__
 
-#define MAX_NUMBER_OF_OPERANDS 9
+#define MAX_NUMBER_OF_OPERANDS 13
 
 struct hyper_dmabuf_req {
 	unsigned int request_id;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
index 2dab833..be1d395 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -31,10 +31,10 @@
 #include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/dma-buf.h>
+#include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_msg.h"
-#include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_id.h"
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_imp.h"
@@ -56,7 +56,7 @@ extern struct hyper_dmabuf_private hyper_dmabuf_private;
  * is what is created when initial exporting is issued so it
  * should not be modified or released by this function.
  */
-int hyper_dmabuf_remote_sync(int id, int ops)
+int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 {
 	struct hyper_dmabuf_sgt_info *sgt_info;
 	struct sgt_list *sgtl;
@@ -66,7 +66,7 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 	int ret;
 
 	/* find a corresponding SGT for the id */
-	sgt_info = hyper_dmabuf_find_exported(id);
+	sgt_info = hyper_dmabuf_find_exported(hid);
 
 	if (!sgt_info) {
 		dev_err(hyper_dmabuf_private.device,
@@ -167,9 +167,10 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 
 	case HYPER_DMABUF_OPS_RELEASE:
 		dev_dbg(hyper_dmabuf_private.device,
-			"Buffer %d released, references left: %d\n",
-			 sgt_info->hyper_dmabuf_id,
-			 sgt_info->importer_exported -1);
+			"Buffer {id:%d key:%d %d %d} released, references left: %d\n",
+			 sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
+			 sgt_info->hid.rng_key[2], sgt_info->importer_exported - 1);
+
                 sgt_info->importer_exported--;
 		/* If there are still importers just break, if no then continue with final cleanup */
 		if (sgt_info->importer_exported)
@@ -180,15 +181,17 @@ int hyper_dmabuf_remote_sync(int id, int ops)
 		 * If not and buffer was unexported, clean up shared data and remove that buffer.
 		 */
 		dev_dbg(hyper_dmabuf_private.device,
-			"Buffer %d final released\n", sgt_info->hyper_dmabuf_id);
+			"Buffer {id:%d key:%d %d %d} final released\n",
+			sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
+			sgt_info->hid.rng_key[2]);
 
 		if (!sgt_info->valid && !sgt_info->importer_exported &&
 		    !sgt_info->unexport_scheduled) {
 			hyper_dmabuf_cleanup_sgt_info(sgt_info, false);
-			hyper_dmabuf_remove_exported(id);
+			hyper_dmabuf_remove_exported(hid);
 			kfree(sgt_info);
 			/* store hyper_dmabuf_id in the list for reuse */
-			store_reusable_id(id);
+			store_reusable_hid(hid);
 		}
 
 		break;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h
index 71ee358..36638928 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h
@@ -25,6 +25,6 @@
 #ifndef __HYPER_DMABUF_REMOTE_SYNC_H__
 #define __HYPER_DMABUF_REMOTE_SYNC_H__
 
-int hyper_dmabuf_remote_sync(int id, int ops);
+int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops);
 
 #endif // __HYPER_DMABUF_REMOTE_SYNC_H__
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
index 9952b3f..991a8d4 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -51,7 +51,7 @@ struct vmap_vaddr_list {
 
 /* Exporter builds pages_info before sharing pages */
 struct hyper_dmabuf_pages_info {
-        int hyper_dmabuf_id; /* unique id to reference dmabuf in source domain */
+        hyper_dmabuf_id_t hid; /* unique id to reference dmabuf in source domain */
         int hyper_dmabuf_rdomain; /* currently considering just one remote domain accessing it */
         int frst_ofst; /* offset of data in the first page */
         int last_len; /* length of data in the last page */
@@ -64,22 +64,27 @@ struct hyper_dmabuf_pages_info {
  * Exporter stores references to sgt in a hash table
  * Exporter keeps these references for synchronization and tracking purposes
  *
- * Importer use this structure exporting to other drivers in the same domain */
+ * Importer uses this structure when exporting to other drivers in the same domain
+ */
 struct hyper_dmabuf_sgt_info {
-        int hyper_dmabuf_id; /* unique id to reference dmabuf in remote domain */
+        hyper_dmabuf_id_t hid; /* unique id to reference dmabuf in remote domain */
 	int hyper_dmabuf_rdomain; /* domain importing this sgt */
 
 	struct dma_buf *dma_buf; /* needed to store this for freeing it later */
 	int nents; /* number of pages, which may be different than sgt->nents */
+
+	/* list of remote activities on dma_buf */
 	struct sgt_list *active_sgts;
 	struct attachment_list *active_attached;
 	struct kmap_vaddr_list *va_kmapped;
 	struct vmap_vaddr_list *va_vmapped;
-	bool valid;
+
+	bool valid; /* set to 0 once unexported. Needed to prevent further mapping by importer */
 	int importer_exported; /* exported locally on importer's side */
 	void *refs_info; /* hypervisor-specific info for the references */
 	struct delayed_work unexport_work;
 	bool unexport_scheduled;
+
 	/* owner of buffer
 	 * TODO: that is naiive as buffer may be reused by
 	 * another userspace app, so here list of struct file should be kept
@@ -94,13 +99,16 @@ struct hyper_dmabuf_sgt_info {
  * Importer store these references in the table and map it in
  * its own memory map once userspace asks for reference for the buffer */
 struct hyper_dmabuf_imported_sgt_info {
-	int hyper_dmabuf_id; /* unique id to reference dmabuf (HYPER_DMABUF_ID_IMPORTER(source domain id, exporter's hyper_dmabuf_id */
+	hyper_dmabuf_id_t hid; /* unique id of the imported shared dmabuf */
+
 	int ref_handle; /* reference number of top level addressing page of shared pages */
-	int frst_ofst;	/* start offset in shared page #1 */
+	int frst_ofst;	/* start offset in first shared page */
 	int last_len;	/* length of data in the last shared page */
 	int nents;	/* number of pages to be shared */
+
 	struct dma_buf *dma_buf;
 	struct sg_table *sgt; /* sgt pointer after importing buffer */
+
 	void *refs_info;
 	bool valid;
 	int num_importers;
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
index 0533e4d..80741c1 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
@@ -29,8 +29,6 @@
 #include "xen/xenbus.h"
 #include "../hyper_dmabuf_msg.h"
 
-#define MAX_NUMBER_OF_OPERANDS 9
-
 DEFINE_RING_TYPES(xen_comm, struct hyper_dmabuf_req, struct hyper_dmabuf_resp);
 
 struct xen_comm_tx_ring_info {
diff --git a/include/uapi/xen/hyper_dmabuf.h b/include/uapi/xen/hyper_dmabuf.h
index 2eff3a8e..992a542 100644
--- a/include/uapi/xen/hyper_dmabuf.h
+++ b/include/uapi/xen/hyper_dmabuf.h
@@ -25,6 +25,11 @@
 #ifndef __LINUX_PUBLIC_HYPER_DMABUF_H__
 #define __LINUX_PUBLIC_HYPER_DMABUF_H__
 
+typedef struct {
+        int id;
+        int rng_key[3]; /* 12-byte random number */
+} hyper_dmabuf_id_t;
+
 #define IOCTL_HYPER_DMABUF_TX_CH_SETUP \
 _IOC(_IOC_NONE, 'G', 0, sizeof(struct ioctl_hyper_dmabuf_tx_ch_setup))
 struct ioctl_hyper_dmabuf_tx_ch_setup {
@@ -50,7 +55,7 @@ struct ioctl_hyper_dmabuf_export_remote {
 	/* Domain id to which buffer should be exported */
 	int remote_domain;
 	/* exported dma buf id */
-	int hyper_dmabuf_id;
+	hyper_dmabuf_id_t hid;
 	int private[4];
 };
 
@@ -59,7 +64,7 @@ _IOC(_IOC_NONE, 'G', 3, sizeof(struct ioctl_hyper_dmabuf_export_fd))
 struct ioctl_hyper_dmabuf_export_fd {
 	/* IN parameters */
 	/* hyper dmabuf id to be imported */
-	int hyper_dmabuf_id;
+	hyper_dmabuf_id_t hid;
 	/* flags */
 	int flags;
 	/* OUT parameters */
@@ -72,7 +77,7 @@ _IOC(_IOC_NONE, 'G', 4, sizeof(struct ioctl_hyper_dmabuf_unexport))
 struct ioctl_hyper_dmabuf_unexport {
 	/* IN parameters */
 	/* hyper dmabuf id to be unexported */
-	int hyper_dmabuf_id;
+	hyper_dmabuf_id_t hid;
 	/* delay in ms by which unexport processing will be postponed */
 	int delay_ms;
 	/* OUT parameters */
@@ -85,7 +90,7 @@ _IOC(_IOC_NONE, 'G', 5, sizeof(struct ioctl_hyper_dmabuf_query))
 struct ioctl_hyper_dmabuf_query {
 	/* in parameters */
 	/* hyper dmabuf id to be queried */
-	int hyper_dmabuf_id;
+	hyper_dmabuf_id_t hid;
 	/* item to be queried */
 	int item;
 	/* OUT parameters */
-- 
2.7.4
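
Every message touched by this patch now carries the full id in
operands[0..3]. A minimal sketch of the resulting pack/unpack
convention (the helper names are illustrative; the driver open-codes
these loops at each call site):

	static void hid_pack(int *operands, hyper_dmabuf_id_t hid)
	{
		int i;

		operands[0] = hid.id;
		for (i = 0; i < 3; i++)
			operands[i + 1] = hid.rng_key[i];
	}

	/* cf. hyper_dmabuf_msg_parse(), which rebuilds the hid this way */
	static hyper_dmabuf_id_t hid_unpack(const int *operands)
	{
		hyper_dmabuf_id_t hid = { operands[0],
					  { operands[1],
					    operands[2],
					    operands[3] } };
		return hid;
	}

Carrying the key alongside the id in every message lets the remote
side validate the pair before touching its hash tables.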



* [RFC PATCH 36/60] hyper_dmabuf: error handling when share_pages fails
@ 2017-12-19 19:29 ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

From: Mateusz Polrola <mateuszx.potrola@intel.com>

When an error occurs while sharing pages, all pages shared so far
need to be un-shared and a proper error code has to be returned.
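
The unwinding follows the usual while (i--) rollback idiom. Below is
a minimal sketch of the pattern using the same Xen grant-table calls
as the patch (the function name and layout are illustrative, not the
driver's actual helper):

	#include <xen/grant_table.h>
	#include <asm/xen/page.h>

	/* Grant nents pages to domid read-only. On -ENOSPC, revoke
	 * every grant handed out so far and report the error.
	 */
	static int share_all_or_nothing(int *refs, struct page **pages,
					int domid, int nents)
	{
		int i;

		for (i = 0; i < nents; i++) {
			refs[i] = gnttab_grant_foreign_access(domid,
					pfn_to_mfn(page_to_pfn(pages[i])),
					true);
			if (refs[i] == -ENOSPC) {
				while (i--) { /* unwind grants 0 .. i-1 */
					gnttab_end_foreign_access_ref(refs[i], 0);
					gnttab_free_grant_reference(refs[i]);
				}
				return -ENOSPC;
			}
		}

		return 0;
	}

Rolling back inside the same function keeps the grant table
consistent without the caller having to know how many grants had
already succeeded.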

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      |  6 ++-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c    | 50 ++++++++++++++++++++++
 2 files changed, 55 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index f1581d5..375b664 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -31,7 +31,7 @@
 #include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/miscdevice.h>
-#include <linux/uaccess.h>
+#include <asm/uaccess.h>
 #include <linux/dma-buf.h>
 #include <linux/delay.h>
 #include <linux/list.h>
@@ -242,6 +242,10 @@ static int hyper_dmabuf_export_remote(struct file *filp, void *data)
 	operands[6] = page_info->last_len;
 	operands[7] = ops->share_pages (page_info->pages, export_remote_attr->remote_domain,
 					page_info->nents, &sgt_info->refs_info);
+	if (operands[7] < 0) {
+		dev_err(hyper_dmabuf_private.device, "pages sharing failed\n");
+		goto fail_map_req;
+	}
 
 	/* driver/application specific private info, max 4x4 bytes */
 	operands[8] = export_remote_attr->private[0];
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
index 1416a69..908eda8 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
@@ -109,6 +109,16 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 		lvl2_table[i] = gnttab_grant_foreign_access(domid,
 							    pfn_to_mfn(page_to_pfn(pages[i])),
 							    true /* read-only from remote domain */);
+		if (lvl2_table[i] == -ENOSPC) {
+			dev_err(hyper_dmabuf_private.device, "No more space left in grant table\n");
+
+			/* Unshare all already shared pages for lvl2 */
+			while (i--) {
+				gnttab_end_foreign_access_ref(lvl2_table[i], 0);
+				gnttab_free_grant_reference(lvl2_table[i]);
+			}
+			goto err_cleanup;
+		}
 	}
 
 	/* Share 2nd level addressing pages in readonly mode*/
@@ -116,6 +126,23 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 		lvl3_table[i] = gnttab_grant_foreign_access(domid,
 							    virt_to_mfn((unsigned long)lvl2_table+i*PAGE_SIZE ),
 							    true);
+		if (lvl3_table[i] == -ENOSPC) {
+			dev_err(hyper_dmabuf_private.device, "No more space left in grant table\n");
+
+			/* Unshare all already shared pages for lvl3 */
+			while (i--) {
+				gnttab_end_foreign_access_ref(lvl3_table[i], 1);
+				gnttab_free_grant_reference(lvl3_table[i]);
+			}
+
+			/* Unshare all pages for lvl2 */
+			while (nents--) {
+				gnttab_end_foreign_access_ref(lvl2_table[nents], 0);
+				gnttab_free_grant_reference(lvl2_table[nents]);
+			}
+
+			goto err_cleanup;
+		}
 	}
 
 	/* Share lvl3_table in readonly mode*/
@@ -123,6 +150,23 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 						virt_to_mfn((unsigned long)lvl3_table),
 						true);
 
+	if (lvl3_gref == -ENOSPC) {
+		dev_err(hyper_dmabuf_private.device, "No more space left in grant table\n");
+
+		/* Unshare all pages for lvl3 */
+		while (i--) {
+			gnttab_end_foreign_access_ref(lvl3_table[i], 1);
+			gnttab_free_grant_reference(lvl3_table[i]);
+		}
+
+		/* Unshare all pages for lvl2 */
+		while (nents--) {
+			gnttab_end_foreign_access_ref(lvl2_table[nents], 0);
+			gnttab_free_grant_reference(lvl2_table[nents]);
+		}
+
+		goto err_cleanup;
+	}
 
 	/* Store lvl3_table page to be freed later */
 	sh_pages_info->lvl3_table = lvl3_table;
@@ -136,6 +180,12 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 
 	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
 	return lvl3_gref;
+
+err_cleanup:
+	free_pages((unsigned long)lvl2_table, n_lvl2_grefs);
+	free_pages((unsigned long)lvl3_table, 1);
+
+	return -ENOSPC;
 }
 
 int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents) {
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 37/60] hyper_dmabuf: implementation of query ioctl
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

The list of queries is re-defined. It now supports the following
items:

	enum hyper_dmabuf_query {
		DMABUF_QUERY_TYPE = 0x10,
		DMABUF_QUERY_EXPORTER,
		DMABUF_QUERY_IMPORTER,
		DMABUF_QUERY_SIZE,
		DMABUF_QUERY_BUSY,
		DMABUF_QUERY_UNEXPORTED,
		DMABUF_QUERY_DELAYED_UNEXPORTED,
	};

Also, the actual querying part of the function has been moved to hyper_dmabuf_query.c.
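
For reference, a userspace caller could exercise one of these queries
as in the sketch below (a hypothetical helper built only from the uapi
definitions touched by this patch; dev_fd is assumed to be an open fd
for the hyper_dmabuf device node):

	#include <sys/ioctl.h>
	#include <xen/hyper_dmabuf.h>

	/* returns the buffer size in bytes, or -1 on error */
	static int query_size(int dev_fd, hyper_dmabuf_id_t hid)
	{
		struct ioctl_hyper_dmabuf_query q = {
			.hid  = hid,
			.item = HYPER_DMABUF_QUERY_SIZE,
		};

		if (ioctl(dev_fd, IOCTL_HYPER_DMABUF_QUERY, &q) < 0)
			return -1;

		return q.info;
	}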

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/Makefile             |   1 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c | 111 ++++++++++---------------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c | 115 ++++++++++++++++++++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h |  38 +--------
 include/uapi/xen/hyper_dmabuf.h               |  17 ++++
 5 files changed, 179 insertions(+), 103 deletions(-)
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c

diff --git a/drivers/xen/hyper_dmabuf/Makefile b/drivers/xen/hyper_dmabuf/Makefile
index d90cfc3..8865f50 100644
--- a/drivers/xen/hyper_dmabuf/Makefile
+++ b/drivers/xen/hyper_dmabuf/Makefile
@@ -11,6 +11,7 @@ ifneq ($(KERNELRELEASE),)
 				 hyper_dmabuf_msg.o \
 				 hyper_dmabuf_id.o \
 				 hyper_dmabuf_remote_sync.o \
+				 hyper_dmabuf_query.o \
 
 ifeq ($(CONFIG_XEN), y)
 	$(TARGET_MODULE)-objs += xen/hyper_dmabuf_xen_comm.o \
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 375b664..12f7ce4 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -31,7 +31,7 @@
 #include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/miscdevice.h>
-#include <asm/uaccess.h>
+#include <linux/uaccess.h>
 #include <linux/dma-buf.h>
 #include <linux/delay.h>
 #include <linux/list.h>
@@ -46,7 +46,7 @@
 
 extern struct hyper_dmabuf_private hyper_dmabuf_private;
 
-static int hyper_dmabuf_tx_ch_setup(struct file *filp, void *data)
+static int hyper_dmabuf_tx_ch_setup_ioctl(struct file *filp, void *data)
 {
 	struct ioctl_hyper_dmabuf_tx_ch_setup *tx_ch_attr;
 	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
@@ -63,7 +63,7 @@ static int hyper_dmabuf_tx_ch_setup(struct file *filp, void *data)
 	return ret;
 }
 
-static int hyper_dmabuf_rx_ch_setup(struct file *filp, void *data)
+static int hyper_dmabuf_rx_ch_setup_ioctl(struct file *filp, void *data)
 {
 	struct ioctl_hyper_dmabuf_rx_ch_setup *rx_ch_attr;
 	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
@@ -81,7 +81,7 @@ static int hyper_dmabuf_rx_ch_setup(struct file *filp, void *data)
 	return ret;
 }
 
-static int hyper_dmabuf_export_remote(struct file *filp, void *data)
+static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 {
 	struct ioctl_hyper_dmabuf_export_remote *export_remote_attr;
 	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
@@ -514,7 +514,7 @@ static void hyper_dmabuf_delayed_unexport(struct work_struct *work)
 
 /* Schedules unexport of dmabuf.
  */
-static int hyper_dmabuf_unexport(struct file *filp, void *data)
+static int hyper_dmabuf_unexport_ioctl(struct file *filp, void *data)
 {
 	struct ioctl_hyper_dmabuf_unexport *unexport_attr;
 	struct hyper_dmabuf_sgt_info *sgt_info;
@@ -554,11 +554,11 @@ static int hyper_dmabuf_unexport(struct file *filp, void *data)
 	return 0;
 }
 
-static int hyper_dmabuf_query(struct file *filp, void *data)
+static int hyper_dmabuf_query_ioctl(struct file *filp, void *data)
 {
 	struct ioctl_hyper_dmabuf_query *query_attr;
-	struct hyper_dmabuf_sgt_info *sgt_info;
-	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
+	struct hyper_dmabuf_sgt_info *sgt_info = NULL;
+	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info = NULL;
 	int ret = 0;
 
 	if (!data) {
@@ -568,71 +568,46 @@ static int hyper_dmabuf_query(struct file *filp, void *data)
 
 	query_attr = (struct ioctl_hyper_dmabuf_query *)data;
 
-	sgt_info = hyper_dmabuf_find_exported(query_attr->hid);
-	imported_sgt_info = hyper_dmabuf_find_imported(query_attr->hid);
-
-	/* if dmabuf can't be found in both lists, return */
-	if (!(sgt_info && imported_sgt_info)) {
-		dev_err(hyper_dmabuf_private.device, "can't find entry anywhere\n");
-		return -ENOENT;
-	}
-
-	/* not considering the case where a dmabuf is found on both queues
-	 * in one domain */
-	switch (query_attr->item)
-	{
-		case DMABUF_QUERY_TYPE_LIST:
-			if (sgt_info) {
-				query_attr->info = EXPORTED;
-			} else {
-				query_attr->info = IMPORTED;
-			}
-			break;
-
-		/* exporting domain of this specific dmabuf*/
-		case DMABUF_QUERY_EXPORTER:
-			if (sgt_info) {
-				query_attr->info = 0xFFFFFFFF; /* myself */
-			} else {
-				query_attr->info = HYPER_DMABUF_DOM_ID(imported_sgt_info->hid);
-			}
-			break;
-
-		/* importing domain of this specific dmabuf */
-		case DMABUF_QUERY_IMPORTER:
-			if (sgt_info) {
-				query_attr->info = sgt_info->hyper_dmabuf_rdomain;
-			} else {
-#if 0 /* TODO: a global variable, current_domain does not exist yet*/
-				query_attr->info = current_domain;
-#endif
-			}
-			break;
-
-		/* size of dmabuf in byte */
-		case DMABUF_QUERY_SIZE:
-			if (sgt_info) {
-#if 0 /* TODO: hyper_dmabuf_buf_size is not implemented yet */
-				query_attr->info = hyper_dmabuf_buf_size(sgt_info->sgt);
-#endif
-			} else {
-				query_attr->info = imported_sgt_info->nents * 4096 -
-						   imported_sgt_info->frst_ofst - 4096 +
-						   imported_sgt_info->last_len;
-			}
-			break;
+	if (HYPER_DMABUF_DOM_ID(query_attr->hid) == hyper_dmabuf_private.domid) {
+		/* query for exported dmabuf */
+		sgt_info = hyper_dmabuf_find_exported(query_attr->hid);
+		if (sgt_info) {
+			ret = hyper_dmabuf_query_exported(sgt_info, query_attr->item);
+			if (ret != -EINVAL)
+				query_attr->info = ret;
+		} else {
+			dev_err(hyper_dmabuf_private.device,
+				"DMA BUF {id:%d key:%d %d %d} can't be found in the export list\n",
+				query_attr->hid.id, query_attr->hid.rng_key[0], query_attr->hid.rng_key[1],
+				query_attr->hid.rng_key[2]);
+			return -ENOENT;
+		}
+	} else {
+		/* query for imported dmabuf */
+		imported_sgt_info = hyper_dmabuf_find_imported(query_attr->hid);
+		if (imported_sgt_info) {
+			ret = hyper_dmabuf_query_imported(imported_sgt_info, query_attr->item);
+			if (ret != -EINVAL)
+				query_attr->info = ret;
+		} else {
+			dev_err(hyper_dmabuf_private.device,
+				"DMA BUF {id:%d key:%d %d %d} can't be found in the imported list\n",
+				query_attr->hid.id, query_attr->hid.rng_key[0], query_attr->hid.rng_key[1],
+				query_attr->hid.rng_key[2]);
+			return -ENOENT;
+		}
 	}
 
-	return ret;
+	return 0;
 }
 
 static const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_TX_CH_SETUP, hyper_dmabuf_tx_ch_setup, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_RX_CH_SETUP, hyper_dmabuf_rx_ch_setup, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_REMOTE, hyper_dmabuf_export_remote, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_TX_CH_SETUP, hyper_dmabuf_tx_ch_setup_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_RX_CH_SETUP, hyper_dmabuf_rx_ch_setup_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_REMOTE, hyper_dmabuf_export_remote_ioctl, 0),
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_FD, hyper_dmabuf_export_fd_ioctl, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_UNEXPORT, hyper_dmabuf_unexport, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_QUERY, hyper_dmabuf_query, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_UNEXPORT, hyper_dmabuf_unexport_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_QUERY, hyper_dmabuf_query_ioctl, 0),
 };
 
 static long hyper_dmabuf_ioctl(struct file *filp,
@@ -731,7 +706,7 @@ static void hyper_dmabuf_emergency_release(struct hyper_dmabuf_sgt_info* sgt_inf
 		unexport_attr.hid = sgt_info->hid;
 		unexport_attr.delay_ms = 0;
 
-		hyper_dmabuf_unexport(filp, &unexport_attr);
+		hyper_dmabuf_unexport_ioctl(filp, &unexport_attr);
 	}
 }
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c
new file mode 100644
index 0000000..2a5201b
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c
@@ -0,0 +1,115 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/dma-buf.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_id.h"
+
+extern struct hyper_dmabuf_private hyper_dmabuf_private;
+
+#define HYPER_DMABUF_SIZE(nents, first_offset, last_len) \
+	((nents)*PAGE_SIZE - (first_offset) - PAGE_SIZE + (last_len))
+
+int hyper_dmabuf_query_exported(struct hyper_dmabuf_sgt_info *sgt_info, int query)
+{
+	switch (query)
+	{
+		case HYPER_DMABUF_QUERY_TYPE:
+			return EXPORTED;
+
+		/* exporting domain of this specific dmabuf */
+		case HYPER_DMABUF_QUERY_EXPORTER:
+			return HYPER_DMABUF_DOM_ID(sgt_info->hid);
+
+		/* importing domain of this specific dmabuf */
+		case HYPER_DMABUF_QUERY_IMPORTER:
+			return sgt_info->hyper_dmabuf_rdomain;
+
+		/* size of dmabuf in bytes */
+		case HYPER_DMABUF_QUERY_SIZE:
+			return sgt_info->dma_buf->size;
+
+		/* whether the buffer is used by importer */
+		case HYPER_DMABUF_QUERY_BUSY:
+			return sgt_info->importer_exported != 0;
+
+		/* whether the buffer is unexported */
+		case HYPER_DMABUF_QUERY_UNEXPORTED:
+			return !sgt_info->valid;
+
+		/* whether the buffer is scheduled to be unexported */
+		case HYPER_DMABUF_QUERY_DELAYED_UNEXPORTED:
+			return sgt_info->unexport_scheduled;
+	}
+
+	return -EINVAL;
+}
+
+
+int hyper_dmabuf_query_imported(struct hyper_dmabuf_imported_sgt_info *imported_sgt_info, int query)
+{
+	switch (query)
+	{
+		case HYPER_DMABUF_QUERY_TYPE:
+			return IMPORTED;
+
+		/* exporting domain of this specific dmabuf */
+		case HYPER_DMABUF_QUERY_EXPORTER:
+			return HYPER_DMABUF_DOM_ID(imported_sgt_info->hid);
+
+		/* importing domain of this specific dmabuf */
+		case HYPER_DMABUF_QUERY_IMPORTER:
+			return hyper_dmabuf_private.domid;
+
+		/* size of dmabuf in bytes */
+		case HYPER_DMABUF_QUERY_SIZE:
+			if (imported_sgt_info->dma_buf) {
+				/* if local dma_buf is created (if it's ever mapped),
+				 * retrieve it directly from struct dma_buf *
+				 */
+				return imported_sgt_info->dma_buf->size;
+			} else {
+				/* calculate it from given nents, frst_ofst and last_len */
+				return HYPER_DMABUF_SIZE(imported_sgt_info->nents,
+							 imported_sgt_info->frst_ofst,
+							 imported_sgt_info->last_len);
+			}
+
+		/* whether the buffer is used or not */
+		case HYPER_DMABUF_QUERY_BUSY:
+			/* checks if it's used by importer */
+			return imported_sgt_info->num_importers > 0;
+
+		/* whether the buffer is unexported */
+		case HYPER_DMABUF_QUERY_UNEXPORTED:
+			return !imported_sgt_info->valid;
+	}
+
+	return -EINVAL;
+}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
index 6cf5b2d..295e923 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
@@ -1,40 +1,8 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
 #ifndef __HYPER_DMABUF_QUERY_H__
 #define __HYPER_DMABUF_QUERY_H__
 
-enum hyper_dmabuf_query {
-	DMABUF_QUERY_TYPE_LIST = 0x10,
-	DMABUF_QUERY_EXPORTER,
-	DMABUF_QUERY_IMPORTER,
-	DMABUF_QUERY_SIZE
-};
+int hyper_dmabuf_query_imported(struct hyper_dmabuf_imported_sgt_info *imported_sgt_info, int query);
 
-enum hyper_dmabuf_status {
-	EXPORTED = 0x01,
-	IMPORTED
-};
+int hyper_dmabuf_query_exported(struct hyper_dmabuf_sgt_info *sgt_info, int query);
 
-#endif /* __HYPER_DMABUF_QUERY_H__ */
+#endif // __HYPER_DMABUF_QUERY_H__
diff --git a/include/uapi/xen/hyper_dmabuf.h b/include/uapi/xen/hyper_dmabuf.h
index 992a542..bee0f86 100644
--- a/include/uapi/xen/hyper_dmabuf.h
+++ b/include/uapi/xen/hyper_dmabuf.h
@@ -98,4 +98,21 @@ struct ioctl_hyper_dmabuf_query {
 	int info;
 };
 
+/* DMABUF query */
+
+enum hyper_dmabuf_query {
+	HYPER_DMABUF_QUERY_TYPE = 0x10,
+	HYPER_DMABUF_QUERY_EXPORTER,
+	HYPER_DMABUF_QUERY_IMPORTER,
+	HYPER_DMABUF_QUERY_SIZE,
+	HYPER_DMABUF_QUERY_BUSY,
+	HYPER_DMABUF_QUERY_UNEXPORTED,
+	HYPER_DMABUF_QUERY_DELAYED_UNEXPORTED,
+};
+
+enum hyper_dmabuf_status {
+	EXPORTED = 0x01,
+	IMPORTED,
+};
+
 #endif //__LINUX_PUBLIC_HYPER_DMABUF_H__
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 38/60] hyper_dmabuf: preventing self exporting of dma_buf
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

Adding an ID check to make sure a dma-buf is being exported
externally, since hyper_dmabuf only allows exporting a dmabuf
to a different VM.
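
After this patch, passing the caller's own domain id fails right away;
a hypothetical userspace fragment using only fields already present in
the uapi structure (dev_fd, fd and my_domid are assumed to exist):

	struct ioctl_hyper_dmabuf_export_remote arg = {
		.dmabuf_fd     = fd,        /* local dma-buf to share */
		.remote_domain = my_domid,  /* same VM: now rejected */
	};

	/* fails with -EINVAL after this patch */
	ret = ioctl(dev_fd, IOCTL_HYPER_DMABUF_EXPORT_REMOTE, &arg);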

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 12f7ce4..b77b156 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -103,6 +103,12 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 
 	export_remote_attr = (struct ioctl_hyper_dmabuf_export_remote *)data;
 
+	if (hyper_dmabuf_private.domid == export_remote_attr->remote_domain) {
+		dev_err(hyper_dmabuf_private.device,
+			"exporting to the same VM is not permitted\n");
+		return -EINVAL;
+	}
+
 	dma_buf = dma_buf_get(export_remote_attr->dmabuf_fd);
 
 	if (IS_ERR(dma_buf)) {
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 39/60] hyper_dmabuf: correcting DMA-BUF clean-up order
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

Reorder the clean-up procedure in hyper_dmabuf_export_remote_ioctl:
replace the early returns on failure with goto-based unwinding so that
each error path releases exactly the resources acquired before it (a
minimal sketch of the idiom follows the patch).

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c | 37 +++++++++++++++++----------
 1 file changed, 24 insertions(+), 13 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index b77b156..2ff2c145 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -148,21 +148,24 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 	attachment = dma_buf_attach(dma_buf, hyper_dmabuf_private.device);
 	if (IS_ERR(attachment)) {
 		dev_err(hyper_dmabuf_private.device, "Cannot get attachment\n");
-		return PTR_ERR(attachment);
+		ret = PTR_ERR(attachment);
+		goto fail_attach;
 	}
 
 	sgt = dma_buf_map_attachment(attachment, DMA_BIDIRECTIONAL);
 
 	if (IS_ERR(sgt)) {
 		dev_err(hyper_dmabuf_private.device, "Cannot map attachment\n");
-		return PTR_ERR(sgt);
+		ret = PTR_ERR(sgt);
+		goto fail_map_attachment;
 	}
 
 	sgt_info = kcalloc(1, sizeof(*sgt_info), GFP_KERNEL);
 
 	if(!sgt_info) {
 		dev_err(hyper_dmabuf_private.device, "no more space left\n");
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto fail_sgt_info_creation;
 	}
 
 	sgt_info->hid = hyper_dmabuf_get_hid();
@@ -171,8 +174,8 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 	if(sgt_info->hid.id == -1) {
 		dev_err(hyper_dmabuf_private.device,
 			"exceeds allowed number of dmabuf to be exported\n");
-		/* TODO: Cleanup sgt */
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto fail_sgt_info_creation;
 	}
 
 	/* TODO: We might need to consider using port number on event channel? */
@@ -286,6 +289,8 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 
 	return ret;
 
+/* Clean-up if error occurs */
+
 fail_send_request:
 	kfree(req);
 
@@ -293,20 +298,26 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 	hyper_dmabuf_remove_exported(sgt_info->hid);
 
 fail_export:
-	dma_buf_unmap_attachment(sgt_info->active_attached->attach,
-				 sgt_info->active_sgts->sgt,
-				 DMA_BIDIRECTIONAL);
-	dma_buf_detach(sgt_info->dma_buf, sgt_info->active_attached->attach);
-	dma_buf_put(sgt_info->dma_buf);
-
 	kfree(sgt_info->va_vmapped);
+
 fail_map_va_vmapped:
 	kfree(sgt_info->va_kmapped);
+
 fail_map_va_kmapped:
-	kfree(sgt_info->active_sgts);
-fail_map_active_sgts:
 	kfree(sgt_info->active_attached);
+
 fail_map_active_attached:
+	kfree(sgt_info->active_sgts);
+
+fail_map_active_sgts:
+fail_sgt_info_creation:
+	dma_buf_unmap_attachment(attachment, sgt, DMA_BIDIRECTIONAL);
+
+fail_map_attachment:
+	dma_buf_detach(dma_buf, attachment);
+
+fail_attach:
+	dma_buf_put(dma_buf);
 
 	return ret;
 }
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread
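
The fix adopts the kernel's standard goto-unwind idiom: releases run in
reverse order of acquisition, and each failure label skips the releases
for resources that were never acquired. A minimal userspace sketch of
the idiom, with illustrative names not taken from the patch:

#include <stdlib.h>

static int export_example(void)
{
	char *a, *b;
	int ret = 0;

	a = malloc(16);
	if (!a) {
		ret = -1;
		goto fail_a;
	}

	b = malloc(16);
	if (!b) {
		ret = -1;
		goto fail_b;
	}

	/* ... use a and b, then release in reverse order ... */
	free(b);
	free(a);
	return ret;

fail_b:			/* b was never allocated; only unwind a */
	free(a);
fail_a:
	return ret;
}

int main(void)
{
	return export_example() ? 1 : 0;
}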

* [RFC PATCH 40/60] hyper_dmabuf: do not use 'private' as field name
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

From: Mateusz Polrola <mateuszx.potrola@intel.com>

Using the word 'private' as a field name is not recommended because it
conflicts with a reserved keyword when the uapi header is compiled as
C++. Rename the field to 'priv' (see the illustration after the patch).

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c | 8 ++++----
 include/uapi/xen/hyper_dmabuf.h               | 2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 2ff2c145..9d05d66 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -257,10 +257,10 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 	}
 
 	/* driver/application specific private info, max 4x4 bytes */
-	operands[8] = export_remote_attr->private[0];
-	operands[9] = export_remote_attr->private[1];
-	operands[10] = export_remote_attr->private[2];
-	operands[11] = export_remote_attr->private[3];
+	operands[8] = export_remote_attr->priv[0];
+	operands[9] = export_remote_attr->priv[1];
+	operands[10] = export_remote_attr->priv[2];
+	operands[11] = export_remote_attr->priv[3];
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
diff --git a/include/uapi/xen/hyper_dmabuf.h b/include/uapi/xen/hyper_dmabuf.h
index bee0f86..a2d22d0 100644
--- a/include/uapi/xen/hyper_dmabuf.h
+++ b/include/uapi/xen/hyper_dmabuf.h
@@ -56,7 +56,7 @@ struct ioctl_hyper_dmabuf_export_remote {
 	int remote_domain;
 	/* exported dma buf id */
 	hyper_dmabuf_id_t hid;
-	int private[4];
+	int priv[4];
 };
 
 #define IOCTL_HYPER_DMABUF_EXPORT_FD \
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread
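
Since uapi headers are consumed by userspace programs built as either C
or C++, field names must avoid C++ reserved words. A minimal
illustration with a hypothetical struct (the real header is larger):

/*
 * Compiles with a C compiler; a C++ translation unit including the same
 * declaration fails to build, because 'private' is a C++ keyword.
 */
struct ioctl_example {
	int remote_domain;
	int private[4];		/* gcc accepts this; g++ rejects it */
};

int main(void)
{
	struct ioctl_example e = { .remote_domain = 1 };
	return e.private[0];	/* zero-filled by the partial initializer */
}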

* [RFC PATCH 41/60] hyper_dmabuf: re-organize driver source
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

Re-organized the driver source for a more intuitive structure.

For this,

1. The driver's file operations other than the ioctls have been moved
to hyper_dmabuf_drv.c (a sketch of the registration pattern follows
this list).

2. The dma-buf operations have been separated out of hyper_dmabuf_imp.c
into a new file, 'hyper_dmabuf_ops.c'. The remaining part (SGT core
management) is put in another new file, 'hyper_dmabuf_sgl_proc.c'.
hyper_dmabuf_imp.c and hyper_dmabuf_imp.h are removed as a result.

3. Header files and Makefile are also updated accordingly.
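
As a standalone illustration of item 1, this is roughly the miscdevice
registration pattern that now lives in hyper_dmabuf_drv.c; the module
and node names here are hypothetical, and the sketch omits the ioctl
plumbing:

#include <linux/fcntl.h>
#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <linux/module.h>

static int demo_open(struct inode *inode, struct file *filp)
{
	/* mirror the driver: refuse exclusive opens */
	if (filp->f_flags & O_EXCL)
		return -EBUSY;
	return 0;
}

static const struct file_operations demo_fops = {
	.owner = THIS_MODULE,
	.open  = demo_open,
};

static struct miscdevice demo_miscdev = {
	.minor = MISC_DYNAMIC_MINOR,
	.name  = "demo_hyper_dmabuf",	/* hypothetical /dev node name */
	.fops  = &demo_fops,
};

module_misc_device(demo_miscdev);
MODULE_LICENSE("GPL");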

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/Makefile                  |   3 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        |  95 ++-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c        | 682 ---------------------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h        |  48 --
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 136 +---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        |   1 -
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c        | 471 ++++++++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h        |  32 +
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c    |   2 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c   | 258 ++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h   |  41 ++
 11 files changed, 920 insertions(+), 849 deletions(-)
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h

diff --git a/drivers/xen/hyper_dmabuf/Makefile b/drivers/xen/hyper_dmabuf/Makefile
index 8865f50..5040b9f 100644
--- a/drivers/xen/hyper_dmabuf/Makefile
+++ b/drivers/xen/hyper_dmabuf/Makefile
@@ -7,7 +7,8 @@ ifneq ($(KERNELRELEASE),)
 	$(TARGET_MODULE)-objs := hyper_dmabuf_drv.o \
                                  hyper_dmabuf_ioctl.o \
                                  hyper_dmabuf_list.o \
-				 hyper_dmabuf_imp.o \
+				 hyper_dmabuf_sgl_proc.o \
+				 hyper_dmabuf_ops.o \
 				 hyper_dmabuf_msg.o \
 				 hyper_dmabuf_id.o \
 				 hyper_dmabuf_remote_sync.o \
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index c802c3e..8c488d7 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -28,10 +28,13 @@
 
 #include <linux/init.h>
 #include <linux/module.h>
+#include <linux/miscdevice.h>
 #include <linux/workqueue.h>
 #include <linux/device.h>
+#include <linux/dma-buf.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_conf.h"
+#include "hyper_dmabuf_ioctl.h"
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_id.h"
@@ -44,12 +47,94 @@ extern struct hyper_dmabuf_backend_ops xen_backend_ops;
 MODULE_LICENSE("GPL and additional rights");
 MODULE_AUTHOR("Intel Corporation");
 
-int register_device(void);
-int unregister_device(void);
-
 struct hyper_dmabuf_private hyper_dmabuf_private;
 
-/*===============================================================================================*/
+long hyper_dmabuf_ioctl(struct file *filp,
+			unsigned int cmd, unsigned long param);
+
+void hyper_dmabuf_emergency_release(struct hyper_dmabuf_sgt_info* sgt_info,
+				    void *attr);
+
+int hyper_dmabuf_open(struct inode *inode, struct file *filp)
+{
+	int ret = 0;
+
+	/* Do not allow exclusive open */
+	if (filp->f_flags & O_EXCL)
+		return -EBUSY;
+
+	/*
+	 * Initialize backend if neededm,
+	 * use mutex to prevent race conditions when
+	 * two userspace apps will open device at the same time
+	 */
+	mutex_lock(&hyper_dmabuf_private.lock);
+
+	if (!hyper_dmabuf_private.backend_initialized) {
+		hyper_dmabuf_private.domid = hyper_dmabuf_private.backend_ops->get_vm_id();
+
+		ret = hyper_dmabuf_private.backend_ops->init_comm_env();
+	        if (ret < 0) {
+			dev_err(hyper_dmabuf_private.device,
+				"failed to initiailize hypervisor-specific comm env\n");
+		} else {
+			hyper_dmabuf_private.backend_initialized = true;
+		}
+	}
+
+	mutex_unlock(&hyper_dmabuf_private.lock);
+
+	return ret;
+}
+
+int hyper_dmabuf_release(struct inode *inode, struct file *filp)
+{
+	hyper_dmabuf_foreach_exported(hyper_dmabuf_emergency_release, filp);
+
+	return 0;
+}
+
+static struct file_operations hyper_dmabuf_driver_fops =
+{
+	.owner = THIS_MODULE,
+	.open = hyper_dmabuf_open,
+	.release = hyper_dmabuf_release,
+	.unlocked_ioctl = hyper_dmabuf_ioctl,
+};
+
+static struct miscdevice hyper_dmabuf_miscdev = {
+	.minor = MISC_DYNAMIC_MINOR,
+	.name = "xen/hyper_dmabuf",
+	.fops = &hyper_dmabuf_driver_fops,
+};
+
+int register_device(void)
+{
+	int ret = 0;
+
+	ret = misc_register(&hyper_dmabuf_miscdev);
+
+	if (ret) {
+		printk(KERN_ERR "hyper_dmabuf: driver can't be registered\n");
+		return ret;
+	}
+
+	hyper_dmabuf_private.device = hyper_dmabuf_miscdev.this_device;
+
+	/* TODO: Check if there is a different way to initialize dma mask nicely */
+	dma_coerce_mask_and_coherent(hyper_dmabuf_private.device, DMA_BIT_MASK(64));
+
+	return ret;
+}
+
+void unregister_device(void)
+{
+	dev_info(hyper_dmabuf_private.device,
+		"hyper_dmabuf: unregister_device() is called\n");
+
+	misc_deregister(&hyper_dmabuf_miscdev);
+}
+
 static int __init hyper_dmabuf_drv_init(void)
 {
 	int ret = 0;
@@ -103,7 +188,6 @@ static int __init hyper_dmabuf_drv_init(void)
 	return ret;
 }
 
-/*-----------------------------------------------------------------------------------------------*/
 static void hyper_dmabuf_drv_exit(void)
 {
 #ifdef CONFIG_HYPER_DMABUF_SYSFS
@@ -128,7 +212,6 @@ static void hyper_dmabuf_drv_exit(void)
 
 	unregister_device();
 }
-/*===============================================================================================*/
 
 module_init(hyper_dmabuf_drv_init);
 module_exit(hyper_dmabuf_drv_exit);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
deleted file mode 100644
index 2bf0835..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
+++ /dev/null
@@ -1,682 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/slab.h>
-#include <linux/module.h>
-#include <linux/dma-buf.h>
-#include <xen/grant_table.h>
-#include <asm/xen/page.h>
-#include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_struct.h"
-#include "hyper_dmabuf_imp.h"
-#include "hyper_dmabuf_id.h"
-#include "hyper_dmabuf_msg.h"
-#include "hyper_dmabuf_list.h"
-
-extern struct hyper_dmabuf_private hyper_dmabuf_private;
-
-#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
-
-int dmabuf_refcount(struct dma_buf *dma_buf)
-{
-	if ((dma_buf != NULL) && (dma_buf->file != NULL))
-		return file_count(dma_buf->file);
-
-	return -1;
-}
-
-/* return total number of pages referenced by a sgt
- * for pre-calculation of # of pages behind a given sgt
- */
-static int hyper_dmabuf_get_num_pgs(struct sg_table *sgt)
-{
-	struct scatterlist *sgl;
-	int length, i;
-	/* at least one page */
-	int num_pages = 1;
-
-	sgl = sgt->sgl;
-
-	length = sgl->length - PAGE_SIZE + sgl->offset;
-	num_pages += ((length + PAGE_SIZE - 1)/PAGE_SIZE); /* round-up */
-
-	for (i = 1; i < sgt->nents; i++) {
-		sgl = sg_next(sgl);
-		num_pages += ((sgl->length + PAGE_SIZE - 1) / PAGE_SIZE); /* round-up */
-	}
-
-	return num_pages;
-}
-
-/* extract pages directly from struct sg_table */
-struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
-{
-	struct hyper_dmabuf_pages_info *pinfo;
-	int i, j, k;
-	int length;
-	struct scatterlist *sgl;
-
-	pinfo = kmalloc(sizeof(*pinfo), GFP_KERNEL);
-	if (!pinfo)
-		return NULL;
-
-	pinfo->pages = kmalloc(sizeof(struct page *)*hyper_dmabuf_get_num_pgs(sgt), GFP_KERNEL);
-	if (!pinfo->pages)
-		return NULL;
-
-	sgl = sgt->sgl;
-
-	pinfo->nents = 1;
-	pinfo->frst_ofst = sgl->offset;
-	pinfo->pages[0] = sg_page(sgl);
-	length = sgl->length - PAGE_SIZE + sgl->offset;
-	i = 1;
-
-	while (length > 0) {
-		pinfo->pages[i] = nth_page(sg_page(sgl), i);
-		length -= PAGE_SIZE;
-		pinfo->nents++;
-		i++;
-	}
-
-	for (j = 1; j < sgt->nents; j++) {
-		sgl = sg_next(sgl);
-		pinfo->pages[i++] = sg_page(sgl);
-		length = sgl->length - PAGE_SIZE;
-		pinfo->nents++;
-		k = 1;
-
-		while (length > 0) {
-			pinfo->pages[i++] = nth_page(sg_page(sgl), k++);
-			length -= PAGE_SIZE;
-			pinfo->nents++;
-		}
-	}
-
-	/*
-	 * lenght at that point will be 0 or negative,
-	 * so to calculate last page size just add it to PAGE_SIZE
-	 */
-	pinfo->last_len = PAGE_SIZE + length;
-
-	return pinfo;
-}
-
-/* create sg_table with given pages and other parameters */
-struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
-					 int frst_ofst, int last_len, int nents)
-{
-	struct sg_table *sgt;
-	struct scatterlist *sgl;
-	int i, ret;
-
-	sgt = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
-	if (!sgt) {
-		return NULL;
-	}
-
-	ret = sg_alloc_table(sgt, nents, GFP_KERNEL);
-	if (ret) {
-		if (sgt) {
-			sg_free_table(sgt);
-			kfree(sgt);
-		}
-
-		return NULL;
-	}
-
-	sgl = sgt->sgl;
-
-	sg_set_page(sgl, pages[0], PAGE_SIZE-frst_ofst, frst_ofst);
-
-	for (i=1; i<nents-1; i++) {
-		sgl = sg_next(sgl);
-		sg_set_page(sgl, pages[i], PAGE_SIZE, 0);
-	}
-
-	if (nents > 1) /* more than one page */ {
-		sgl = sg_next(sgl);
-		sg_set_page(sgl, pages[i], last_len, 0);
-	}
-
-	return sgt;
-}
-
-int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int force)
-{
-	struct sgt_list *sgtl;
-	struct attachment_list *attachl;
-	struct kmap_vaddr_list *va_kmapl;
-	struct vmap_vaddr_list *va_vmapl;
-	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
-
-	if (!sgt_info) {
-		dev_err(hyper_dmabuf_private.device, "invalid hyper_dmabuf_id\n");
-		return -EINVAL;
-	}
-
-	/* if force != 1, sgt_info can be released only if
-	 * there's no activity on exported dma-buf on importer
-	 * side.
-	 */
-	if (!force &&
-	    sgt_info->importer_exported) {
-		dev_warn(hyper_dmabuf_private.device, "dma-buf is used by importer\n");
-		return -EPERM;
-	}
-
-	/* force == 1 is not recommended */
-	while (!list_empty(&sgt_info->va_kmapped->list)) {
-		va_kmapl = list_first_entry(&sgt_info->va_kmapped->list,
-					    struct kmap_vaddr_list, list);
-
-		dma_buf_kunmap(sgt_info->dma_buf, 1, va_kmapl->vaddr);
-		list_del(&va_kmapl->list);
-		kfree(va_kmapl);
-	}
-
-	while (!list_empty(&sgt_info->va_vmapped->list)) {
-		va_vmapl = list_first_entry(&sgt_info->va_vmapped->list,
-					    struct vmap_vaddr_list, list);
-
-		dma_buf_vunmap(sgt_info->dma_buf, va_vmapl->vaddr);
-		list_del(&va_vmapl->list);
-		kfree(va_vmapl);
-	}
-
-	while (!list_empty(&sgt_info->active_sgts->list)) {
-		attachl = list_first_entry(&sgt_info->active_attached->list,
-					   struct attachment_list, list);
-
-		sgtl = list_first_entry(&sgt_info->active_sgts->list,
-					struct sgt_list, list);
-
-		dma_buf_unmap_attachment(attachl->attach, sgtl->sgt,
-					 DMA_BIDIRECTIONAL);
-		list_del(&sgtl->list);
-		kfree(sgtl);
-	}
-
-	while (!list_empty(&sgt_info->active_sgts->list)) {
-		attachl = list_first_entry(&sgt_info->active_attached->list,
-					   struct attachment_list, list);
-
-		dma_buf_detach(sgt_info->dma_buf, attachl->attach);
-		list_del(&attachl->list);
-		kfree(attachl);
-	}
-
-	/* Start cleanup of buffer in reverse order to exporting */
-	ops->unshare_pages(&sgt_info->refs_info, sgt_info->nents);
-
-	/* unmap dma-buf */
-	dma_buf_unmap_attachment(sgt_info->active_attached->attach,
-				 sgt_info->active_sgts->sgt,
-				 DMA_BIDIRECTIONAL);
-
-	/* detatch dma-buf */
-	dma_buf_detach(sgt_info->dma_buf, sgt_info->active_attached->attach);
-
-	/* close connection to dma-buf completely */
-	dma_buf_put(sgt_info->dma_buf);
-	sgt_info->dma_buf = NULL;
-
-	kfree(sgt_info->active_sgts);
-	kfree(sgt_info->active_attached);
-	kfree(sgt_info->va_kmapped);
-	kfree(sgt_info->va_vmapped);
-
-	return 0;
-}
-
-#define WAIT_AFTER_SYNC_REQ 0
-
-inline int hyper_dmabuf_sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
-{
-	struct hyper_dmabuf_req *req;
-	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
-	int operands[5];
-	int i;
-	int ret;
-
-	operands[0] = hid.id;
-
-	for (i=0; i<3; i++)
-		operands[i+1] = hid.rng_key[i];
-
-	operands[4] = dmabuf_ops;
-
-	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
-
-	if (!req) {
-		dev_err(hyper_dmabuf_private.device,
-			"No memory left to be allocated\n");
-		return -ENOMEM;
-	}
-
-	hyper_dmabuf_create_request(req, HYPER_DMABUF_OPS_TO_SOURCE, &operands[0]);
-
-	/* send request and wait for a response */
-	ret = ops->send_req(HYPER_DMABUF_DOM_ID(hid), req, WAIT_AFTER_SYNC_REQ);
-
-	kfree(req);
-
-	return ret;
-}
-
-static int hyper_dmabuf_ops_attach(struct dma_buf* dmabuf, struct device* dev,
-			struct dma_buf_attachment *attach)
-{
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	int ret;
-
-	if (!attach->dmabuf->priv)
-		return -EINVAL;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_ATTACH);
-
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-		return ret;
-	}
-
-	return 0;
-}
-
-static void hyper_dmabuf_ops_detach(struct dma_buf* dmabuf, struct dma_buf_attachment *attach)
-{
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	int ret;
-
-	if (!attach->dmabuf->priv)
-		return;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_DETACH);
-
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
-}
-
-static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachment,
-						enum dma_data_direction dir)
-{
-	struct sg_table *st;
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	struct hyper_dmabuf_pages_info *page_info;
-	int ret;
-
-	if (!attachment->dmabuf->priv)
-		return NULL;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attachment->dmabuf->priv;
-
-	/* extract pages from sgt */
-	page_info = hyper_dmabuf_ext_pgs(sgt_info->sgt);
-
-	if (!page_info) {
-		return NULL;
-	}
-
-	/* create a new sg_table with extracted pages */
-	st = hyper_dmabuf_create_sgt(page_info->pages, page_info->frst_ofst,
-				page_info->last_len, page_info->nents);
-	if (!st)
-		goto err_free_sg;
-
-        if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir)) {
-                goto err_free_sg;
-        }
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_MAP);
-
-	kfree(page_info->pages);
-	kfree(page_info);
-
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
-
-	return st;
-
-err_free_sg:
-	if (st) {
-		sg_free_table(st);
-		kfree(st);
-	}
-
-	return NULL;
-}
-
-static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
-				   struct sg_table *sg,
-				   enum dma_data_direction dir)
-{
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	int ret;
-
-	if (!attachment->dmabuf->priv)
-		return;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attachment->dmabuf->priv;
-
-	dma_unmap_sg(attachment->dev, sg->sgl, sg->nents, dir);
-
-	sg_free_table(sg);
-	kfree(sg);
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_UNMAP);
-
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
-}
-
-static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
-{
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
-	int ret;
-	int final_release;
-
-	if (!dma_buf->priv)
-		return;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dma_buf->priv;
-
-	if (!dmabuf_refcount(sgt_info->dma_buf)) {
-		sgt_info->dma_buf = NULL;
-	}
-
-	sgt_info->num_importers--;
-
-	if (sgt_info->num_importers == 0) {
-		ops->unmap_shared_pages(&sgt_info->refs_info, sgt_info->nents);
-
-		if (sgt_info->sgt) {
-			sg_free_table(sgt_info->sgt);
-			kfree(sgt_info->sgt);
-			sgt_info->sgt = NULL;
-		}
-	}
-
-	final_release = sgt_info && !sgt_info->valid &&
-		        !sgt_info->num_importers;
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_RELEASE);
-	if (ret < 0) {
-		dev_warn(hyper_dmabuf_private.device,
-			 "hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
-
-	/*
-	 * Check if buffer is still valid and if not remove it from imported list.
-	 * That has to be done after sending sync request
-	 */
-	if (final_release) {
-		hyper_dmabuf_remove_imported(sgt_info->hid);
-		kfree(sgt_info);
-	}
-}
-
-static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
-{
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	int ret;
-
-	if (!dmabuf->priv)
-		return -EINVAL;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
-
-	return ret;
-}
-
-static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
-{
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	int ret;
-
-	if (!dmabuf->priv)
-		return -EINVAL;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_END_CPU_ACCESS);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
-
-	return 0;
-}
-
-static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum)
-{
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	int ret;
-
-	if (!dmabuf->priv)
-		return NULL;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_KMAP_ATOMIC);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
-
-	return NULL; /* for now NULL.. need to return the address of mapped region */
-}
-
-static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum, void *vaddr)
-{
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	int ret;
-
-	if (!dmabuf->priv)
-		return;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
-}
-
-static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
-{
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	int ret;
-
-	if (!dmabuf->priv)
-		return NULL;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_KMAP);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
-
-	return NULL; /* for now NULL.. need to return the address of mapped region */
-}
-
-static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum, void *vaddr)
-{
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	int ret;
-
-	if (!dmabuf->priv)
-		return;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_KUNMAP);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
-}
-
-static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
-{
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	int ret;
-
-	if (!dmabuf->priv)
-		return -EINVAL;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_MMAP);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
-
-	return ret;
-}
-
-static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
-{
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	int ret;
-
-	if (!dmabuf->priv)
-		return NULL;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_VMAP);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
-
-	return NULL;
-}
-
-static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
-{
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	int ret;
-
-	if (!dmabuf->priv)
-		return;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_VUNMAP);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
-}
-
-static const struct dma_buf_ops hyper_dmabuf_ops = {
-		.attach = hyper_dmabuf_ops_attach,
-		.detach = hyper_dmabuf_ops_detach,
-		.map_dma_buf = hyper_dmabuf_ops_map,
-		.unmap_dma_buf = hyper_dmabuf_ops_unmap,
-		.release = hyper_dmabuf_ops_release,
-		.begin_cpu_access = (void*)hyper_dmabuf_ops_begin_cpu_access,
-		.end_cpu_access = (void*)hyper_dmabuf_ops_end_cpu_access,
-		.map_atomic = hyper_dmabuf_ops_kmap_atomic,
-		.unmap_atomic = hyper_dmabuf_ops_kunmap_atomic,
-		.map = hyper_dmabuf_ops_kmap,
-		.unmap = hyper_dmabuf_ops_kunmap,
-		.mmap = hyper_dmabuf_ops_mmap,
-		.vmap = hyper_dmabuf_ops_vmap,
-		.vunmap = hyper_dmabuf_ops_vunmap,
-};
-
-/* exporting dmabuf as fd */
-int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags)
-{
-	int fd = -1;
-
-	/* call hyper_dmabuf_export_dmabuf and create
-	 * and bind a handle for it then release
-	 */
-	hyper_dmabuf_export_dma_buf(dinfo);
-
-	if (dinfo->dma_buf) {
-		fd = dma_buf_fd(dinfo->dma_buf, flags);
-	}
-
-	return fd;
-}
-
-void hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo)
-{
-	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
-
-	exp_info.ops = &hyper_dmabuf_ops;
-
-	/* multiple of PAGE_SIZE, not considering offset */
-	exp_info.size = dinfo->sgt->nents * PAGE_SIZE;
-	exp_info.flags = /* not sure about flag */0;
-	exp_info.priv = dinfo;
-
-	dinfo->dma_buf = dma_buf_export(&exp_info);
-}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
deleted file mode 100644
index eda075b3..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
+++ /dev/null
@@ -1,48 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_IMP_H__
-#define __HYPER_DMABUF_IMP_H__
-
-#include <linux/fs.h>
-#include "hyper_dmabuf_struct.h"
-
-/* extract pages directly from struct sg_table */
-struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt);
-
-/* create sg_table with given pages and other parameters */
-struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
-                                int frst_ofst, int last_len, int nents);
-
-int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int force);
-
-void hyper_dmabuf_free_sgt(struct sg_table *sgt);
-
-int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags);
-
-void hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo);
-
-int dmabuf_refcount(struct dma_buf *dma_buf);
-
-#endif /* __HYPER_DMABUF_IMP_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 9d05d66..283fe5a 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -41,7 +41,8 @@
 #include "hyper_dmabuf_ioctl.h"
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_msg.h"
-#include "hyper_dmabuf_imp.h"
+#include "hyper_dmabuf_sgl_proc.h"
+#include "hyper_dmabuf_ops.h"
 #include "hyper_dmabuf_query.h"
 
 extern struct hyper_dmabuf_private hyper_dmabuf_private;
@@ -618,7 +619,29 @@ static int hyper_dmabuf_query_ioctl(struct file *filp, void *data)
 	return 0;
 }
 
-static const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
+void hyper_dmabuf_emergency_release(struct hyper_dmabuf_sgt_info* sgt_info,
+				    void *attr)
+{
+	struct ioctl_hyper_dmabuf_unexport unexport_attr;
+	struct file *filp = (struct file*) attr;
+
+	if (!filp || !sgt_info)
+		return;
+
+	if (sgt_info->filp == filp) {
+		dev_dbg(hyper_dmabuf_private.device,
+			"Executing emergency release of buffer {id:%d key:%d %d %d}\n",
+			 sgt_info->hid.id, sgt_info->hid.rng_key[0],
+			 sgt_info->hid.rng_key[1], sgt_info->hid.rng_key[2]);
+
+		unexport_attr.hid = sgt_info->hid;
+		unexport_attr.delay_ms = 0;
+
+		hyper_dmabuf_unexport_ioctl(filp, &unexport_attr);
+	}
+}
+
+const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_TX_CH_SETUP, hyper_dmabuf_tx_ch_setup_ioctl, 0),
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_RX_CH_SETUP, hyper_dmabuf_rx_ch_setup_ioctl, 0),
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_REMOTE, hyper_dmabuf_export_remote_ioctl, 0),
@@ -627,7 +650,7 @@ static const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_QUERY, hyper_dmabuf_query_ioctl, 0),
 };
 
-static long hyper_dmabuf_ioctl(struct file *filp,
+long hyper_dmabuf_ioctl(struct file *filp,
 			unsigned int cmd, unsigned long param)
 {
 	const struct hyper_dmabuf_ioctl_desc *ioctl = NULL;
@@ -672,110 +695,3 @@ static long hyper_dmabuf_ioctl(struct file *filp,
 
 	return ret;
 }
-
-int hyper_dmabuf_open(struct inode *inode, struct file *filp)
-{
-	int ret = 0;
-
-	/* Do not allow exclusive open */
-	if (filp->f_flags & O_EXCL)
-		return -EBUSY;
-
-	/*
-	 * Initialize backend if neededm,
-	 * use mutex to prevent race conditions when
-	 * two userspace apps will open device at the same time
-	 */
-	mutex_lock(&hyper_dmabuf_private.lock);
-
-	if (!hyper_dmabuf_private.backend_initialized) {
-		hyper_dmabuf_private.domid = hyper_dmabuf_private.backend_ops->get_vm_id();
-
-		ret = hyper_dmabuf_private.backend_ops->init_comm_env();
-	        if (ret < 0) {
-			dev_err(hyper_dmabuf_private.device,
-				"failed to initiailize hypervisor-specific comm env\n");
-		} else {
-			hyper_dmabuf_private.backend_initialized = true;
-		}
-	}
-
-	mutex_unlock(&hyper_dmabuf_private.lock);
-
-	return ret;
-}
-
-static void hyper_dmabuf_emergency_release(struct hyper_dmabuf_sgt_info* sgt_info,
-					   void *attr)
-{
-	struct ioctl_hyper_dmabuf_unexport unexport_attr;
-	struct file *filp = (struct file*) attr;
-
-	if (!filp || !sgt_info)
-		return;
-
-	if (sgt_info->filp == filp) {
-		dev_dbg(hyper_dmabuf_private.device,
-			"Executing emergency release of buffer {id:%d key:%d %d %d}\n",
-			 sgt_info->hid.id, sgt_info->hid.rng_key[0],
-			 sgt_info->hid.rng_key[1], sgt_info->hid.rng_key[2]);
-
-		unexport_attr.hid = sgt_info->hid;
-		unexport_attr.delay_ms = 0;
-
-		hyper_dmabuf_unexport_ioctl(filp, &unexport_attr);
-	}
-}
-
-int hyper_dmabuf_release(struct inode *inode, struct file *filp)
-{
-	hyper_dmabuf_foreach_exported(hyper_dmabuf_emergency_release, filp);
-
-	return 0;
-}
-
-/*===============================================================================================*/
-static struct file_operations hyper_dmabuf_driver_fops =
-{
-   .owner = THIS_MODULE,
-   .open = hyper_dmabuf_open,
-   .release = hyper_dmabuf_release,
-   .unlocked_ioctl = hyper_dmabuf_ioctl,
-};
-
-static struct miscdevice hyper_dmabuf_miscdev = {
-	.minor = MISC_DYNAMIC_MINOR,
-	.name = "xen/hyper_dmabuf",
-	.fops = &hyper_dmabuf_driver_fops,
-};
-
-static const char device_name[] = "hyper_dmabuf";
-
-/*===============================================================================================*/
-int register_device(void)
-{
-	int ret = 0;
-
-	ret = misc_register(&hyper_dmabuf_miscdev);
-
-	if (ret) {
-		printk(KERN_ERR "hyper_dmabuf: driver can't be registered\n");
-		return ret;
-	}
-
-	hyper_dmabuf_private.device = hyper_dmabuf_miscdev.this_device;
-
-	/* TODO: Check if there is a different way to initialize dma mask nicely */
-	dma_coerce_mask_and_coherent(hyper_dmabuf_private.device, DMA_BIT_MASK(64));
-
-	return ret;
-}
-
-/*-----------------------------------------------------------------------------------------------*/
-void unregister_device(void)
-{
-	dev_info(hyper_dmabuf_private.device,
-		 "hyper_dmabuf: unregister_device() is called\n");
-
-	misc_deregister(&hyper_dmabuf_miscdev);
-}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index 12ebad3..c516df8 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -35,7 +35,6 @@
 #include <linux/workqueue.h>
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_imp.h"
 #include "hyper_dmabuf_remote_sync.h"
 #include "hyper_dmabuf_list.h"
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
new file mode 100644
index 0000000..81cb09f
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
@@ -0,0 +1,471 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/fs.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/dma-buf.h>
+#include <xen/grant_table.h>
+#include <asm/xen/page.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_ops.h"
+#include "hyper_dmabuf_sgl_proc.h"
+#include "hyper_dmabuf_id.h"
+#include "hyper_dmabuf_msg.h"
+#include "hyper_dmabuf_list.h"
+
+#define WAIT_AFTER_SYNC_REQ 0
+#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
+
+extern struct hyper_dmabuf_private hyper_dmabuf_private;
+
+inline int hyper_dmabuf_sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
+{
+	struct hyper_dmabuf_req *req;
+	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
+	int operands[5];
+	int i;
+	int ret;
+
+	operands[0] = hid.id;
+
+	for (i=0; i<3; i++)
+		operands[i+1] = hid.rng_key[i];
+
+	operands[4] = dmabuf_ops;
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+
+	if (!req) {
+		dev_err(hyper_dmabuf_private.device,
+			"No memory left to be allocated\n");
+		return -ENOMEM;
+	}
+
+	hyper_dmabuf_create_request(req, HYPER_DMABUF_OPS_TO_SOURCE, &operands[0]);
+
+	/* send request and wait for a response */
+	ret = ops->send_req(HYPER_DMABUF_DOM_ID(hid), req, WAIT_AFTER_SYNC_REQ);
+
+	kfree(req);
+
+	return ret;
+}
+
+static int hyper_dmabuf_ops_attach(struct dma_buf* dmabuf, struct device* dev,
+			struct dma_buf_attachment *attach)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!attach->dmabuf->priv)
+		return -EINVAL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_ATTACH);
+
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void hyper_dmabuf_ops_detach(struct dma_buf* dmabuf, struct dma_buf_attachment *attach)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!attach->dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_DETACH);
+
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+	}
+}
+
+static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachment,
+						enum dma_data_direction dir)
+{
+	struct sg_table *st;
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct hyper_dmabuf_pages_info *page_info;
+	int ret;
+
+	if (!attachment->dmabuf->priv)
+		return NULL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attachment->dmabuf->priv;
+
+	/* extract pages from sgt */
+	page_info = hyper_dmabuf_ext_pgs(sgt_info->sgt);
+
+	if (!page_info) {
+		return NULL;
+	}
+
+	/* create a new sg_table with extracted pages */
+	st = hyper_dmabuf_create_sgt(page_info->pages, page_info->frst_ofst,
+				page_info->last_len, page_info->nents);
+	if (!st)
+		goto err_free_sg;
+
+        if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir)) {
+                goto err_free_sg;
+        }
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_MAP);
+
+	kfree(page_info->pages);
+	kfree(page_info);
+
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+	}
+
+	return st;
+
+err_free_sg:
+	if (st) {
+		sg_free_table(st);
+		kfree(st);
+	}
+
+	return NULL;
+}
+
+static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
+				   struct sg_table *sg,
+				   enum dma_data_direction dir)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!attachment->dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attachment->dmabuf->priv;
+
+	dma_unmap_sg(attachment->dev, sg->sgl, sg->nents, dir);
+
+	sg_free_table(sg);
+	kfree(sg);
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_UNMAP);
+
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+	}
+}
+
+static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
+	int ret;
+	int final_release;
+
+	if (!dma_buf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dma_buf->priv;
+
+	if (!dmabuf_refcount(sgt_info->dma_buf)) {
+		sgt_info->dma_buf = NULL;
+	}
+
+	sgt_info->num_importers--;
+
+	if (sgt_info->num_importers == 0) {
+		ops->unmap_shared_pages(&sgt_info->refs_info, sgt_info->nents);
+
+		if (sgt_info->sgt) {
+			sg_free_table(sgt_info->sgt);
+			kfree(sgt_info->sgt);
+			sgt_info->sgt = NULL;
+		}
+	}
+
+	final_release = sgt_info && !sgt_info->valid &&
+		        !sgt_info->num_importers;
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_RELEASE);
+	if (ret < 0) {
+		dev_warn(hyper_dmabuf_private.device,
+			 "hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+	}
+
+	/*
+	 * Check if buffer is still valid and if not remove it from imported list.
+	 * That has to be done after sending sync request
+	 */
+	if (final_release) {
+		hyper_dmabuf_remove_imported(sgt_info->hid);
+		kfree(sgt_info);
+	}
+}
+
+static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return -EINVAL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+	}
+
+	return ret;
+}
+
+static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return -EINVAL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_END_CPU_ACCESS);
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+	}
+
+	return 0;
+}
+
+static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return NULL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_KMAP_ATOMIC);
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+	}
+
+	return NULL; /* for now NULL.. need to return the address of mapped region */
+}
+
+static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum, void *vaddr)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+	}
+}
+
+static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return NULL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_KMAP);
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+	}
+
+	return NULL; /* for now NULL.. need to return the address of mapped region */
+}
+
+static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum, void *vaddr)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_KUNMAP);
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+	}
+}
+
+static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return -EINVAL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_MMAP);
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+	}
+
+	return ret;
+}
+
+static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return NULL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_VMAP);
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+	}
+
+	return NULL;
+}
+
+static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_VUNMAP);
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+	}
+}
+
+static const struct dma_buf_ops hyper_dmabuf_ops = {
+		.attach = hyper_dmabuf_ops_attach,
+		.detach = hyper_dmabuf_ops_detach,
+		.map_dma_buf = hyper_dmabuf_ops_map,
+		.unmap_dma_buf = hyper_dmabuf_ops_unmap,
+		.release = hyper_dmabuf_ops_release,
+		.begin_cpu_access = (void*)hyper_dmabuf_ops_begin_cpu_access,
+		.end_cpu_access = (void*)hyper_dmabuf_ops_end_cpu_access,
+		.map_atomic = hyper_dmabuf_ops_kmap_atomic,
+		.unmap_atomic = hyper_dmabuf_ops_kunmap_atomic,
+		.map = hyper_dmabuf_ops_kmap,
+		.unmap = hyper_dmabuf_ops_kunmap,
+		.mmap = hyper_dmabuf_ops_mmap,
+		.vmap = hyper_dmabuf_ops_vmap,
+		.vunmap = hyper_dmabuf_ops_vunmap,
+};
+
+/* exporting dmabuf as fd */
+int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags)
+{
+	int fd = -1;
+
+	/* call hyper_dmabuf_export_dmabuf and create
+	 * and bind a handle for it then release
+	 */
+	hyper_dmabuf_export_dma_buf(dinfo);
+
+	if (dinfo->dma_buf) {
+		fd = dma_buf_fd(dinfo->dma_buf, flags);
+	}
+
+	return fd;
+}
+
+void hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo)
+{
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+
+	exp_info.ops = &hyper_dmabuf_ops;
+
+	/* multiple of PAGE_SIZE, not considering offset */
+	exp_info.size = dinfo->sgt->nents * PAGE_SIZE;
+	exp_info.flags = /* not sure about flag */0;
+	exp_info.priv = dinfo;
+
+	dinfo->dma_buf = dma_buf_export(&exp_info);
+}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h
new file mode 100644
index 0000000..8c06fc6
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h
@@ -0,0 +1,32 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_OPS_H__
+#define __HYPER_DMABUF_OPS_H__
+
+int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags);
+
+void hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo);
+
+#endif /* __HYPER_DMABUF_OPS_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
index be1d395..9004406 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -37,7 +37,7 @@
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_id.h"
 #include "hyper_dmabuf_msg.h"
-#include "hyper_dmabuf_imp.h"
+#include "hyper_dmabuf_sgl_proc.h"
 
 extern struct hyper_dmabuf_private hyper_dmabuf_private;
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
new file mode 100644
index 0000000..c2d013a
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
@@ -0,0 +1,258 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/fs.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/dma-buf.h>
+#include <xen/grant_table.h>
+#include <asm/xen/page.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_sgl_proc.h"
+#include "hyper_dmabuf_id.h"
+#include "hyper_dmabuf_msg.h"
+#include "hyper_dmabuf_list.h"
+
+extern struct hyper_dmabuf_private hyper_dmabuf_private;
+
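+/* number of grant references that fit in a single page */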
+#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
+
+int dmabuf_refcount(struct dma_buf *dma_buf)
+{
+	if ((dma_buf != NULL) && (dma_buf->file != NULL))
+		return file_count(dma_buf->file);
+
+	return -1;
+}
+
+/* return total number of pages referenced by a sgt
+ * for pre-calculation of # of pages behind a given sgt
+ */
+static int hyper_dmabuf_get_num_pgs(struct sg_table *sgt)
+{
+	struct scatterlist *sgl;
+	int length, i;
+	/* at least one page */
+	int num_pages = 1;
+
+	sgl = sgt->sgl;
+
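+	/* bytes of the first entry that extend past its first page */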
+	length = sgl->length - PAGE_SIZE + sgl->offset;
+	num_pages += ((length + PAGE_SIZE - 1)/PAGE_SIZE); /* round-up */
+
+	for (i = 1; i < sgt->nents; i++) {
+		sgl = sg_next(sgl);
+		num_pages += ((sgl->length + PAGE_SIZE - 1) / PAGE_SIZE); /* round-up */
+	}
+
+	return num_pages;
+}
+
+/* extract pages directly from struct sg_table */
+struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
+{
+	struct hyper_dmabuf_pages_info *pinfo;
+	int i, j, k;
+	int length;
+	struct scatterlist *sgl;
+
+	pinfo = kmalloc(sizeof(*pinfo), GFP_KERNEL);
+	if (!pinfo)
+		return NULL;
+
+	pinfo->pages = kmalloc(sizeof(struct page *) *
+			       hyper_dmabuf_get_num_pgs(sgt), GFP_KERNEL);
+	if (!pinfo->pages) {
+		kfree(pinfo);
+		return NULL;
+	}
+
+	sgl = sgt->sgl;
+
+	pinfo->nents = 1;
+	pinfo->frst_ofst = sgl->offset;
+	pinfo->pages[0] = sg_page(sgl);
+	length = sgl->length - PAGE_SIZE + sgl->offset;
+	i = 1;
+
+	while (length > 0) {
+		pinfo->pages[i] = nth_page(sg_page(sgl), i);
+		length -= PAGE_SIZE;
+		pinfo->nents++;
+		i++;
+	}
+
+	for (j = 1; j < sgt->nents; j++) {
+		sgl = sg_next(sgl);
+		pinfo->pages[i++] = sg_page(sgl);
+		length = sgl->length - PAGE_SIZE;
+		pinfo->nents++;
+		k = 1;
+
+		while (length > 0) {
+			pinfo->pages[i++] = nth_page(sg_page(sgl), k++);
+			length -= PAGE_SIZE;
+			pinfo->nents++;
+		}
+	}
+
+	/*
+	 * length at this point is 0 or negative, so the size of
+	 * the last page is just PAGE_SIZE plus that remainder
+	 */
+	pinfo->last_len = PAGE_SIZE + length;
+
+	return pinfo;
+}
+
+/* create sg_table with given pages and other parameters */
+struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
+					 int frst_ofst, int last_len, int nents)
+{
+	struct sg_table *sgt;
+	struct scatterlist *sgl;
+	int i, ret;
+
+	sgt = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
+	if (!sgt)
+		return NULL;
+
+	ret = sg_alloc_table(sgt, nents, GFP_KERNEL);
+	if (ret) {
+		/* sg_alloc_table cleans up after itself on failure */
+		kfree(sgt);
+		return NULL;
+	}
+
+	sgl = sgt->sgl;
+
+	sg_set_page(sgl, pages[0], PAGE_SIZE - frst_ofst, frst_ofst);
+
+	for (i = 1; i < nents - 1; i++) {
+		sgl = sg_next(sgl);
+		sg_set_page(sgl, pages[i], PAGE_SIZE, 0);
+	}
+
+	if (nents > 1) { /* more than one page */
+		sgl = sg_next(sgl);
+		sg_set_page(sgl, pages[i], last_len, 0);
+	}
+
+	return sgt;
+}
+
+int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int force)
+{
+	struct sgt_list *sgtl;
+	struct attachment_list *attachl;
+	struct kmap_vaddr_list *va_kmapl;
+	struct vmap_vaddr_list *va_vmapl;
+	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
+
+	if (!sgt_info) {
+		dev_err(hyper_dmabuf_private.device, "invalid hyper_dmabuf_id\n");
+		return -EINVAL;
+	}
+
+	/* if force != 1, sgt_info can be released only if
+	 * there's no activity on exported dma-buf on importer
+	 * side.
+	 */
+	if (!force &&
+	    sgt_info->importer_exported) {
+		dev_warn(hyper_dmabuf_private.device, "dma-buf is used by importer\n");
+		return -EPERM;
+	}
+
+	/* force == 1 is not recommended */
+	while (!list_empty(&sgt_info->va_kmapped->list)) {
+		va_kmapl = list_first_entry(&sgt_info->va_kmapped->list,
+					    struct kmap_vaddr_list, list);
+
+		dma_buf_kunmap(sgt_info->dma_buf, 1, va_kmapl->vaddr);
+		list_del(&va_kmapl->list);
+		kfree(va_kmapl);
+	}
+
+	while (!list_empty(&sgt_info->va_vmapped->list)) {
+		va_vmapl = list_first_entry(&sgt_info->va_vmapped->list,
+					    struct vmap_vaddr_list, list);
+
+		dma_buf_vunmap(sgt_info->dma_buf, va_vmapl->vaddr);
+		list_del(&va_vmapl->list);
+		kfree(va_vmapl);
+	}
+
+	while (!list_empty(&sgt_info->active_sgts->list)) {
+		attachl = list_first_entry(&sgt_info->active_attached->list,
+					   struct attachment_list, list);
+
+		sgtl = list_first_entry(&sgt_info->active_sgts->list,
+					struct sgt_list, list);
+
+		dma_buf_unmap_attachment(attachl->attach, sgtl->sgt,
+					 DMA_BIDIRECTIONAL);
+		list_del(&sgtl->list);
+		kfree(sgtl);
+	}
+
+	while (!list_empty(&sgt_info->active_attached->list)) {
+		attachl = list_first_entry(&sgt_info->active_attached->list,
+					   struct attachment_list, list);
+
+		dma_buf_detach(sgt_info->dma_buf, attachl->attach);
+		list_del(&attachl->list);
+		kfree(attachl);
+	}
+
+	/* Start cleanup of buffer in reverse order to exporting */
+	ops->unshare_pages(&sgt_info->refs_info, sgt_info->nents);
+
+	/* unmap dma-buf */
+	dma_buf_unmap_attachment(sgt_info->active_attached->attach,
+				 sgt_info->active_sgts->sgt,
+				 DMA_BIDIRECTIONAL);
+
+	/* detach dma-buf */
+	dma_buf_detach(sgt_info->dma_buf, sgt_info->active_attached->attach);
+
+	/* close connection to dma-buf completely */
+	dma_buf_put(sgt_info->dma_buf);
+	sgt_info->dma_buf = NULL;
+
+	kfree(sgt_info->active_sgts);
+	kfree(sgt_info->active_attached);
+	kfree(sgt_info->va_kmapped);
+	kfree(sgt_info->va_vmapped);
+
+	return 0;
+}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
new file mode 100644
index 0000000..237ccf5
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_SGL_PROC_H__
+#define __HYPER_DMABUF_SGL_PROC_H__
+
+int dmabuf_refcount(struct dma_buf *dma_buf);
+
+/* extract pages directly from struct sg_table */
+struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt);
+
+/* create sg_table with given pages and other parameters */
+struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
+                                int frst_ofst, int last_len, int nents);
+
+int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int force);
+
+void hyper_dmabuf_free_sgt(struct sg_table *sgt);
+
+#endif /* __HYPER_DMABUF_SGL_PROC_H__ */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 41/60] hyper_dmabuf: re-organize driver source
@ 2017-12-19 19:29   ` Dongwon Kim
  0 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: xen-devel, mateuszx.potrola, dri-devel, dongwon.kim

Re-organized the source code into a more intuitive structure

For this,

1. driver's file operations other than ioctls have been moved to
hyper_dmabuf_drv.c.

2. Separated out dma-buf operations from hyper_dmabuf_imp.c
and put those in a new file, 'hyper_dmabuf_ops.c' (the pattern all
of these ops share is sketched below). The remaining part (SGT core
management) is put in another new file, 'hyper_dmabuf_sgl_proc.c'.
hyper_dmabuf_imp.c and hyper_dmabuf_imp.h are removed as a result.

3. Header files and Makefile are also updated accordingly.

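For reference, every importer-side dma_buf op that moved into
hyper_dmabuf_ops.c follows the same shape. A minimal sketch of that
pattern is below; hyper_dmabuf_fwd is illustrative only and not part
of this patch, while hyper_dmabuf_sync_request and the
HYPER_DMABUF_OPS_* codes are the real ones used in the diff:

	/* sketch: forward notification of a local dma_buf op to the
	 * exporting domain, which performs the matching operation on
	 * the original buffer
	 */
	static int hyper_dmabuf_fwd(struct dma_buf *dmabuf, int op)
	{
		struct hyper_dmabuf_imported_sgt_info *info = dmabuf->priv;

		if (!info)
			return -EINVAL;

		return hyper_dmabuf_sync_request(info->hid, op);
	}
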
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/Makefile                  |   3 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        |  95 ++-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c        | 682 ---------------------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h        |  48 --
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 136 +---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        |   1 -
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c        | 471 ++++++++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h        |  32 +
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c    |   2 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c   | 258 ++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h   |  41 ++
 11 files changed, 920 insertions(+), 849 deletions(-)
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h

diff --git a/drivers/xen/hyper_dmabuf/Makefile b/drivers/xen/hyper_dmabuf/Makefile
index 8865f50..5040b9f 100644
--- a/drivers/xen/hyper_dmabuf/Makefile
+++ b/drivers/xen/hyper_dmabuf/Makefile
@@ -7,7 +7,8 @@ ifneq ($(KERNELRELEASE),)
 	$(TARGET_MODULE)-objs := hyper_dmabuf_drv.o \
                                  hyper_dmabuf_ioctl.o \
                                  hyper_dmabuf_list.o \
-				 hyper_dmabuf_imp.o \
+				 hyper_dmabuf_sgl_proc.o \
+				 hyper_dmabuf_ops.o \
 				 hyper_dmabuf_msg.o \
 				 hyper_dmabuf_id.o \
 				 hyper_dmabuf_remote_sync.o \
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index c802c3e..8c488d7 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -28,10 +28,13 @@
 
 #include <linux/init.h>
 #include <linux/module.h>
+#include <linux/miscdevice.h>
 #include <linux/workqueue.h>
 #include <linux/device.h>
+#include <linux/dma-buf.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_conf.h"
+#include "hyper_dmabuf_ioctl.h"
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_id.h"
@@ -44,12 +47,94 @@ extern struct hyper_dmabuf_backend_ops xen_backend_ops;
 MODULE_LICENSE("GPL and additional rights");
 MODULE_AUTHOR("Intel Corporation");
 
-int register_device(void);
-int unregister_device(void);
-
 struct hyper_dmabuf_private hyper_dmabuf_private;
 
-/*===============================================================================================*/
+long hyper_dmabuf_ioctl(struct file *filp,
+			unsigned int cmd, unsigned long param);
+
+void hyper_dmabuf_emergency_release(struct hyper_dmabuf_sgt_info* sgt_info,
+				    void *attr);
+
+int hyper_dmabuf_open(struct inode *inode, struct file *filp)
+{
+	int ret = 0;
+
+	/* Do not allow exclusive open */
+	if (filp->f_flags & O_EXCL)
+		return -EBUSY;
+
+	/*
+	 * Initialize the backend if needed; use a mutex to
+	 * prevent a race when two userspace apps open the
+	 * device at the same time.
+	 */
+	mutex_lock(&hyper_dmabuf_private.lock);
+
+	if (!hyper_dmabuf_private.backend_initialized) {
+		hyper_dmabuf_private.domid = hyper_dmabuf_private.backend_ops->get_vm_id();
+
+		ret = hyper_dmabuf_private.backend_ops->init_comm_env();
+		if (ret < 0) {
+			dev_err(hyper_dmabuf_private.device,
+				"failed to initialize hypervisor-specific comm env\n");
+		} else {
+			hyper_dmabuf_private.backend_initialized = true;
+		}
+	}
+
+	mutex_unlock(&hyper_dmabuf_private.lock);
+
+	return ret;
+}
+
+int hyper_dmabuf_release(struct inode *inode, struct file *filp)
+{
+	hyper_dmabuf_foreach_exported(hyper_dmabuf_emergency_release, filp);
+
+	return 0;
+}
+
+static const struct file_operations hyper_dmabuf_driver_fops = {
+	.owner = THIS_MODULE,
+	.open = hyper_dmabuf_open,
+	.release = hyper_dmabuf_release,
+	.unlocked_ioctl = hyper_dmabuf_ioctl,
+};
+
+static struct miscdevice hyper_dmabuf_miscdev = {
+	.minor = MISC_DYNAMIC_MINOR,
+	.name = "xen/hyper_dmabuf",
+	.fops = &hyper_dmabuf_driver_fops,
+};
+
+int register_device(void)
+{
+	int ret = 0;
+
+	ret = misc_register(&hyper_dmabuf_miscdev);
+
+	if (ret) {
+		pr_err("hyper_dmabuf: driver can't be registered\n");
+		return ret;
+	}
+
+	hyper_dmabuf_private.device = hyper_dmabuf_miscdev.this_device;
+
+	/* TODO: Check if there is a different way to initialize dma mask nicely */
+	dma_coerce_mask_and_coherent(hyper_dmabuf_private.device, DMA_BIT_MASK(64));
+
+	return ret;
+}
+
+void unregister_device(void)
+{
+	dev_info(hyper_dmabuf_private.device,
+		"hyper_dmabuf: unregister_device() is called\n");
+
+	misc_deregister(&hyper_dmabuf_miscdev);
+}
+
 static int __init hyper_dmabuf_drv_init(void)
 {
 	int ret = 0;
@@ -103,7 +188,6 @@ static int __init hyper_dmabuf_drv_init(void)
 	return ret;
 }
 
-/*-----------------------------------------------------------------------------------------------*/
 static void hyper_dmabuf_drv_exit(void)
 {
 #ifdef CONFIG_HYPER_DMABUF_SYSFS
@@ -128,7 +212,6 @@ static void hyper_dmabuf_drv_exit(void)
 
 	unregister_device();
 }
-/*===============================================================================================*/
 
 module_init(hyper_dmabuf_drv_init);
 module_exit(hyper_dmabuf_drv_exit);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
deleted file mode 100644
index 2bf0835..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
+++ /dev/null
@@ -1,682 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/slab.h>
-#include <linux/module.h>
-#include <linux/dma-buf.h>
-#include <xen/grant_table.h>
-#include <asm/xen/page.h>
-#include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_struct.h"
-#include "hyper_dmabuf_imp.h"
-#include "hyper_dmabuf_id.h"
-#include "hyper_dmabuf_msg.h"
-#include "hyper_dmabuf_list.h"
-
-extern struct hyper_dmabuf_private hyper_dmabuf_private;
-
-#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
-
-int dmabuf_refcount(struct dma_buf *dma_buf)
-{
-	if ((dma_buf != NULL) && (dma_buf->file != NULL))
-		return file_count(dma_buf->file);
-
-	return -1;
-}
-
-/* return total number of pages referenced by a sgt
- * for pre-calculation of # of pages behind a given sgt
- */
-static int hyper_dmabuf_get_num_pgs(struct sg_table *sgt)
-{
-	struct scatterlist *sgl;
-	int length, i;
-	/* at least one page */
-	int num_pages = 1;
-
-	sgl = sgt->sgl;
-
-	length = sgl->length - PAGE_SIZE + sgl->offset;
-	num_pages += ((length + PAGE_SIZE - 1)/PAGE_SIZE); /* round-up */
-
-	for (i = 1; i < sgt->nents; i++) {
-		sgl = sg_next(sgl);
-		num_pages += ((sgl->length + PAGE_SIZE - 1) / PAGE_SIZE); /* round-up */
-	}
-
-	return num_pages;
-}
-
-/* extract pages directly from struct sg_table */
-struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
-{
-	struct hyper_dmabuf_pages_info *pinfo;
-	int i, j, k;
-	int length;
-	struct scatterlist *sgl;
-
-	pinfo = kmalloc(sizeof(*pinfo), GFP_KERNEL);
-	if (!pinfo)
-		return NULL;
-
-	pinfo->pages = kmalloc(sizeof(struct page *)*hyper_dmabuf_get_num_pgs(sgt), GFP_KERNEL);
-	if (!pinfo->pages)
-		return NULL;
-
-	sgl = sgt->sgl;
-
-	pinfo->nents = 1;
-	pinfo->frst_ofst = sgl->offset;
-	pinfo->pages[0] = sg_page(sgl);
-	length = sgl->length - PAGE_SIZE + sgl->offset;
-	i = 1;
-
-	while (length > 0) {
-		pinfo->pages[i] = nth_page(sg_page(sgl), i);
-		length -= PAGE_SIZE;
-		pinfo->nents++;
-		i++;
-	}
-
-	for (j = 1; j < sgt->nents; j++) {
-		sgl = sg_next(sgl);
-		pinfo->pages[i++] = sg_page(sgl);
-		length = sgl->length - PAGE_SIZE;
-		pinfo->nents++;
-		k = 1;
-
-		while (length > 0) {
-			pinfo->pages[i++] = nth_page(sg_page(sgl), k++);
-			length -= PAGE_SIZE;
-			pinfo->nents++;
-		}
-	}
-
-	/*
-	 * lenght at that point will be 0 or negative,
-	 * so to calculate last page size just add it to PAGE_SIZE
-	 */
-	pinfo->last_len = PAGE_SIZE + length;
-
-	return pinfo;
-}
-
-/* create sg_table with given pages and other parameters */
-struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
-					 int frst_ofst, int last_len, int nents)
-{
-	struct sg_table *sgt;
-	struct scatterlist *sgl;
-	int i, ret;
-
-	sgt = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
-	if (!sgt) {
-		return NULL;
-	}
-
-	ret = sg_alloc_table(sgt, nents, GFP_KERNEL);
-	if (ret) {
-		if (sgt) {
-			sg_free_table(sgt);
-			kfree(sgt);
-		}
-
-		return NULL;
-	}
-
-	sgl = sgt->sgl;
-
-	sg_set_page(sgl, pages[0], PAGE_SIZE-frst_ofst, frst_ofst);
-
-	for (i=1; i<nents-1; i++) {
-		sgl = sg_next(sgl);
-		sg_set_page(sgl, pages[i], PAGE_SIZE, 0);
-	}
-
-	if (nents > 1) /* more than one page */ {
-		sgl = sg_next(sgl);
-		sg_set_page(sgl, pages[i], last_len, 0);
-	}
-
-	return sgt;
-}
-
-int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int force)
-{
-	struct sgt_list *sgtl;
-	struct attachment_list *attachl;
-	struct kmap_vaddr_list *va_kmapl;
-	struct vmap_vaddr_list *va_vmapl;
-	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
-
-	if (!sgt_info) {
-		dev_err(hyper_dmabuf_private.device, "invalid hyper_dmabuf_id\n");
-		return -EINVAL;
-	}
-
-	/* if force != 1, sgt_info can be released only if
-	 * there's no activity on exported dma-buf on importer
-	 * side.
-	 */
-	if (!force &&
-	    sgt_info->importer_exported) {
-		dev_warn(hyper_dmabuf_private.device, "dma-buf is used by importer\n");
-		return -EPERM;
-	}
-
-	/* force == 1 is not recommended */
-	while (!list_empty(&sgt_info->va_kmapped->list)) {
-		va_kmapl = list_first_entry(&sgt_info->va_kmapped->list,
-					    struct kmap_vaddr_list, list);
-
-		dma_buf_kunmap(sgt_info->dma_buf, 1, va_kmapl->vaddr);
-		list_del(&va_kmapl->list);
-		kfree(va_kmapl);
-	}
-
-	while (!list_empty(&sgt_info->va_vmapped->list)) {
-		va_vmapl = list_first_entry(&sgt_info->va_vmapped->list,
-					    struct vmap_vaddr_list, list);
-
-		dma_buf_vunmap(sgt_info->dma_buf, va_vmapl->vaddr);
-		list_del(&va_vmapl->list);
-		kfree(va_vmapl);
-	}
-
-	while (!list_empty(&sgt_info->active_sgts->list)) {
-		attachl = list_first_entry(&sgt_info->active_attached->list,
-					   struct attachment_list, list);
-
-		sgtl = list_first_entry(&sgt_info->active_sgts->list,
-					struct sgt_list, list);
-
-		dma_buf_unmap_attachment(attachl->attach, sgtl->sgt,
-					 DMA_BIDIRECTIONAL);
-		list_del(&sgtl->list);
-		kfree(sgtl);
-	}
-
-	while (!list_empty(&sgt_info->active_sgts->list)) {
-		attachl = list_first_entry(&sgt_info->active_attached->list,
-					   struct attachment_list, list);
-
-		dma_buf_detach(sgt_info->dma_buf, attachl->attach);
-		list_del(&attachl->list);
-		kfree(attachl);
-	}
-
-	/* Start cleanup of buffer in reverse order to exporting */
-	ops->unshare_pages(&sgt_info->refs_info, sgt_info->nents);
-
-	/* unmap dma-buf */
-	dma_buf_unmap_attachment(sgt_info->active_attached->attach,
-				 sgt_info->active_sgts->sgt,
-				 DMA_BIDIRECTIONAL);
-
-	/* detatch dma-buf */
-	dma_buf_detach(sgt_info->dma_buf, sgt_info->active_attached->attach);
-
-	/* close connection to dma-buf completely */
-	dma_buf_put(sgt_info->dma_buf);
-	sgt_info->dma_buf = NULL;
-
-	kfree(sgt_info->active_sgts);
-	kfree(sgt_info->active_attached);
-	kfree(sgt_info->va_kmapped);
-	kfree(sgt_info->va_vmapped);
-
-	return 0;
-}
-
-#define WAIT_AFTER_SYNC_REQ 0
-
-inline int hyper_dmabuf_sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
-{
-	struct hyper_dmabuf_req *req;
-	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
-	int operands[5];
-	int i;
-	int ret;
-
-	operands[0] = hid.id;
-
-	for (i=0; i<3; i++)
-		operands[i+1] = hid.rng_key[i];
-
-	operands[4] = dmabuf_ops;
-
-	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
-
-	if (!req) {
-		dev_err(hyper_dmabuf_private.device,
-			"No memory left to be allocated\n");
-		return -ENOMEM;
-	}
-
-	hyper_dmabuf_create_request(req, HYPER_DMABUF_OPS_TO_SOURCE, &operands[0]);
-
-	/* send request and wait for a response */
-	ret = ops->send_req(HYPER_DMABUF_DOM_ID(hid), req, WAIT_AFTER_SYNC_REQ);
-
-	kfree(req);
-
-	return ret;
-}
-
-static int hyper_dmabuf_ops_attach(struct dma_buf* dmabuf, struct device* dev,
-			struct dma_buf_attachment *attach)
-{
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	int ret;
-
-	if (!attach->dmabuf->priv)
-		return -EINVAL;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_ATTACH);
-
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-		return ret;
-	}
-
-	return 0;
-}
-
-static void hyper_dmabuf_ops_detach(struct dma_buf* dmabuf, struct dma_buf_attachment *attach)
-{
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	int ret;
-
-	if (!attach->dmabuf->priv)
-		return;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_DETACH);
-
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
-}
-
-static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachment,
-						enum dma_data_direction dir)
-{
-	struct sg_table *st;
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	struct hyper_dmabuf_pages_info *page_info;
-	int ret;
-
-	if (!attachment->dmabuf->priv)
-		return NULL;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attachment->dmabuf->priv;
-
-	/* extract pages from sgt */
-	page_info = hyper_dmabuf_ext_pgs(sgt_info->sgt);
-
-	if (!page_info) {
-		return NULL;
-	}
-
-	/* create a new sg_table with extracted pages */
-	st = hyper_dmabuf_create_sgt(page_info->pages, page_info->frst_ofst,
-				page_info->last_len, page_info->nents);
-	if (!st)
-		goto err_free_sg;
-
-        if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir)) {
-                goto err_free_sg;
-        }
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_MAP);
-
-	kfree(page_info->pages);
-	kfree(page_info);
-
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
-
-	return st;
-
-err_free_sg:
-	if (st) {
-		sg_free_table(st);
-		kfree(st);
-	}
-
-	return NULL;
-}
-
-static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
-				   struct sg_table *sg,
-				   enum dma_data_direction dir)
-{
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	int ret;
-
-	if (!attachment->dmabuf->priv)
-		return;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attachment->dmabuf->priv;
-
-	dma_unmap_sg(attachment->dev, sg->sgl, sg->nents, dir);
-
-	sg_free_table(sg);
-	kfree(sg);
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_UNMAP);
-
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
-}
-
-static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
-{
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
-	int ret;
-	int final_release;
-
-	if (!dma_buf->priv)
-		return;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dma_buf->priv;
-
-	if (!dmabuf_refcount(sgt_info->dma_buf)) {
-		sgt_info->dma_buf = NULL;
-	}
-
-	sgt_info->num_importers--;
-
-	if (sgt_info->num_importers == 0) {
-		ops->unmap_shared_pages(&sgt_info->refs_info, sgt_info->nents);
-
-		if (sgt_info->sgt) {
-			sg_free_table(sgt_info->sgt);
-			kfree(sgt_info->sgt);
-			sgt_info->sgt = NULL;
-		}
-	}
-
-	final_release = sgt_info && !sgt_info->valid &&
-		        !sgt_info->num_importers;
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_RELEASE);
-	if (ret < 0) {
-		dev_warn(hyper_dmabuf_private.device,
-			 "hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
-
-	/*
-	 * Check if buffer is still valid and if not remove it from imported list.
-	 * That has to be done after sending sync request
-	 */
-	if (final_release) {
-		hyper_dmabuf_remove_imported(sgt_info->hid);
-		kfree(sgt_info);
-	}
-}
-
-static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
-{
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	int ret;
-
-	if (!dmabuf->priv)
-		return -EINVAL;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
-
-	return ret;
-}
-
-static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
-{
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	int ret;
-
-	if (!dmabuf->priv)
-		return -EINVAL;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_END_CPU_ACCESS);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
-
-	return 0;
-}
-
-static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum)
-{
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	int ret;
-
-	if (!dmabuf->priv)
-		return NULL;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_KMAP_ATOMIC);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
-
-	return NULL; /* for now NULL.. need to return the address of mapped region */
-}
-
-static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum, void *vaddr)
-{
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	int ret;
-
-	if (!dmabuf->priv)
-		return;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
-}
-
-static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
-{
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	int ret;
-
-	if (!dmabuf->priv)
-		return NULL;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_KMAP);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
-
-	return NULL; /* for now NULL.. need to return the address of mapped region */
-}
-
-static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum, void *vaddr)
-{
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	int ret;
-
-	if (!dmabuf->priv)
-		return;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_KUNMAP);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
-}
-
-static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
-{
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	int ret;
-
-	if (!dmabuf->priv)
-		return -EINVAL;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_MMAP);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
-
-	return ret;
-}
-
-static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
-{
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	int ret;
-
-	if (!dmabuf->priv)
-		return NULL;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_VMAP);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
-
-	return NULL;
-}
-
-static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
-{
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	int ret;
-
-	if (!dmabuf->priv)
-		return;
-
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
-
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
-					HYPER_DMABUF_OPS_VUNMAP);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
-}
-
-static const struct dma_buf_ops hyper_dmabuf_ops = {
-		.attach = hyper_dmabuf_ops_attach,
-		.detach = hyper_dmabuf_ops_detach,
-		.map_dma_buf = hyper_dmabuf_ops_map,
-		.unmap_dma_buf = hyper_dmabuf_ops_unmap,
-		.release = hyper_dmabuf_ops_release,
-		.begin_cpu_access = (void*)hyper_dmabuf_ops_begin_cpu_access,
-		.end_cpu_access = (void*)hyper_dmabuf_ops_end_cpu_access,
-		.map_atomic = hyper_dmabuf_ops_kmap_atomic,
-		.unmap_atomic = hyper_dmabuf_ops_kunmap_atomic,
-		.map = hyper_dmabuf_ops_kmap,
-		.unmap = hyper_dmabuf_ops_kunmap,
-		.mmap = hyper_dmabuf_ops_mmap,
-		.vmap = hyper_dmabuf_ops_vmap,
-		.vunmap = hyper_dmabuf_ops_vunmap,
-};
-
-/* exporting dmabuf as fd */
-int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags)
-{
-	int fd = -1;
-
-	/* call hyper_dmabuf_export_dmabuf and create
-	 * and bind a handle for it then release
-	 */
-	hyper_dmabuf_export_dma_buf(dinfo);
-
-	if (dinfo->dma_buf) {
-		fd = dma_buf_fd(dinfo->dma_buf, flags);
-	}
-
-	return fd;
-}
-
-void hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo)
-{
-	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
-
-	exp_info.ops = &hyper_dmabuf_ops;
-
-	/* multiple of PAGE_SIZE, not considering offset */
-	exp_info.size = dinfo->sgt->nents * PAGE_SIZE;
-	exp_info.flags = /* not sure about flag */0;
-	exp_info.priv = dinfo;
-
-	dinfo->dma_buf = dma_buf_export(&exp_info);
-}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
deleted file mode 100644
index eda075b3..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
+++ /dev/null
@@ -1,48 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_IMP_H__
-#define __HYPER_DMABUF_IMP_H__
-
-#include <linux/fs.h>
-#include "hyper_dmabuf_struct.h"
-
-/* extract pages directly from struct sg_table */
-struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt);
-
-/* create sg_table with given pages and other parameters */
-struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
-                                int frst_ofst, int last_len, int nents);
-
-int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int force);
-
-void hyper_dmabuf_free_sgt(struct sg_table *sgt);
-
-int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags);
-
-void hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo);
-
-int dmabuf_refcount(struct dma_buf *dma_buf);
-
-#endif /* __HYPER_DMABUF_IMP_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 9d05d66..283fe5a 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -41,7 +41,8 @@
 #include "hyper_dmabuf_ioctl.h"
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_msg.h"
-#include "hyper_dmabuf_imp.h"
+#include "hyper_dmabuf_sgl_proc.h"
+#include "hyper_dmabuf_ops.h"
 #include "hyper_dmabuf_query.h"
 
 extern struct hyper_dmabuf_private hyper_dmabuf_private;
@@ -618,7 +619,29 @@ static int hyper_dmabuf_query_ioctl(struct file *filp, void *data)
 	return 0;
 }
 
-static const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
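+/* device-close helper: force-unexports every buffer that was
+ * exported through the file being released
+ */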
+void hyper_dmabuf_emergency_release(struct hyper_dmabuf_sgt_info* sgt_info,
+				    void *attr)
+{
+	struct ioctl_hyper_dmabuf_unexport unexport_attr;
+	struct file *filp = (struct file*) attr;
+
+	if (!filp || !sgt_info)
+		return;
+
+	if (sgt_info->filp == filp) {
+		dev_dbg(hyper_dmabuf_private.device,
+			"Executing emergency release of buffer {id:%d key:%d %d %d}\n",
+			 sgt_info->hid.id, sgt_info->hid.rng_key[0],
+			 sgt_info->hid.rng_key[1], sgt_info->hid.rng_key[2]);
+
+		unexport_attr.hid = sgt_info->hid;
+		unexport_attr.delay_ms = 0;
+
+		hyper_dmabuf_unexport_ioctl(filp, &unexport_attr);
+	}
+}
+
+const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_TX_CH_SETUP, hyper_dmabuf_tx_ch_setup_ioctl, 0),
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_RX_CH_SETUP, hyper_dmabuf_rx_ch_setup_ioctl, 0),
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_REMOTE, hyper_dmabuf_export_remote_ioctl, 0),
@@ -627,7 +650,7 @@ static const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_QUERY, hyper_dmabuf_query_ioctl, 0),
 };
 
-static long hyper_dmabuf_ioctl(struct file *filp,
+long hyper_dmabuf_ioctl(struct file *filp,
 			unsigned int cmd, unsigned long param)
 {
 	const struct hyper_dmabuf_ioctl_desc *ioctl = NULL;
@@ -672,110 +695,3 @@ static long hyper_dmabuf_ioctl(struct file *filp,
 
 	return ret;
 }
-
-int hyper_dmabuf_open(struct inode *inode, struct file *filp)
-{
-	int ret = 0;
-
-	/* Do not allow exclusive open */
-	if (filp->f_flags & O_EXCL)
-		return -EBUSY;
-
-	/*
-	 * Initialize backend if neededm,
-	 * use mutex to prevent race conditions when
-	 * two userspace apps will open device at the same time
-	 */
-	mutex_lock(&hyper_dmabuf_private.lock);
-
-	if (!hyper_dmabuf_private.backend_initialized) {
-		hyper_dmabuf_private.domid = hyper_dmabuf_private.backend_ops->get_vm_id();
-
-		ret = hyper_dmabuf_private.backend_ops->init_comm_env();
-	        if (ret < 0) {
-			dev_err(hyper_dmabuf_private.device,
-				"failed to initiailize hypervisor-specific comm env\n");
-		} else {
-			hyper_dmabuf_private.backend_initialized = true;
-		}
-	}
-
-	mutex_unlock(&hyper_dmabuf_private.lock);
-
-	return ret;
-}
-
-static void hyper_dmabuf_emergency_release(struct hyper_dmabuf_sgt_info* sgt_info,
-					   void *attr)
-{
-	struct ioctl_hyper_dmabuf_unexport unexport_attr;
-	struct file *filp = (struct file*) attr;
-
-	if (!filp || !sgt_info)
-		return;
-
-	if (sgt_info->filp == filp) {
-		dev_dbg(hyper_dmabuf_private.device,
-			"Executing emergency release of buffer {id:%d key:%d %d %d}\n",
-			 sgt_info->hid.id, sgt_info->hid.rng_key[0],
-			 sgt_info->hid.rng_key[1], sgt_info->hid.rng_key[2]);
-
-		unexport_attr.hid = sgt_info->hid;
-		unexport_attr.delay_ms = 0;
-
-		hyper_dmabuf_unexport_ioctl(filp, &unexport_attr);
-	}
-}
-
-int hyper_dmabuf_release(struct inode *inode, struct file *filp)
-{
-	hyper_dmabuf_foreach_exported(hyper_dmabuf_emergency_release, filp);
-
-	return 0;
-}
-
-/*===============================================================================================*/
-static struct file_operations hyper_dmabuf_driver_fops =
-{
-   .owner = THIS_MODULE,
-   .open = hyper_dmabuf_open,
-   .release = hyper_dmabuf_release,
-   .unlocked_ioctl = hyper_dmabuf_ioctl,
-};
-
-static struct miscdevice hyper_dmabuf_miscdev = {
-	.minor = MISC_DYNAMIC_MINOR,
-	.name = "xen/hyper_dmabuf",
-	.fops = &hyper_dmabuf_driver_fops,
-};
-
-static const char device_name[] = "hyper_dmabuf";
-
-/*===============================================================================================*/
-int register_device(void)
-{
-	int ret = 0;
-
-	ret = misc_register(&hyper_dmabuf_miscdev);
-
-	if (ret) {
-		printk(KERN_ERR "hyper_dmabuf: driver can't be registered\n");
-		return ret;
-	}
-
-	hyper_dmabuf_private.device = hyper_dmabuf_miscdev.this_device;
-
-	/* TODO: Check if there is a different way to initialize dma mask nicely */
-	dma_coerce_mask_and_coherent(hyper_dmabuf_private.device, DMA_BIT_MASK(64));
-
-	return ret;
-}
-
-/*-----------------------------------------------------------------------------------------------*/
-void unregister_device(void)
-{
-	dev_info(hyper_dmabuf_private.device,
-		 "hyper_dmabuf: unregister_device() is called\n");
-
-	misc_deregister(&hyper_dmabuf_miscdev);
-}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index 12ebad3..c516df8 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -35,7 +35,6 @@
 #include <linux/workqueue.h>
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_imp.h"
 #include "hyper_dmabuf_remote_sync.h"
 #include "hyper_dmabuf_list.h"
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
new file mode 100644
index 0000000..81cb09f
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
@@ -0,0 +1,471 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/fs.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/dma-buf.h>
+#include <xen/grant_table.h>
+#include <asm/xen/page.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_ops.h"
+#include "hyper_dmabuf_sgl_proc.h"
+#include "hyper_dmabuf_id.h"
+#include "hyper_dmabuf_msg.h"
+#include "hyper_dmabuf_list.h"
+
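+/* third argument to send_req(): 0 means do not block waiting for
+ * the exporter's response
+ */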
+#define WAIT_AFTER_SYNC_REQ 0
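+/* number of grant references that fit in a single page */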
+#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
+
+extern struct hyper_dmabuf_private hyper_dmabuf_private;
+
+inline int hyper_dmabuf_sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
+{
+	struct hyper_dmabuf_req *req;
+	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
+	int operands[5];
+	int i;
+	int ret;
+
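+	/* operands: [0] = buffer id, [1..3] = random keys that together
+	 * form the hyper_dmabuf id, [4] = HYPER_DMABUF_OPS_* code to sync
+	 */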
+	operands[0] = hid.id;
+
+	for (i = 0; i < 3; i++)
+		operands[i+1] = hid.rng_key[i];
+
+	operands[4] = dmabuf_ops;
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+
+	if (!req) {
+		dev_err(hyper_dmabuf_private.device,
+			"No memory left to be allocated\n");
+		return -ENOMEM;
+	}
+
+	hyper_dmabuf_create_request(req, HYPER_DMABUF_OPS_TO_SOURCE, &operands[0]);
+
+	/* send the request; WAIT_AFTER_SYNC_REQ is 0, so do not block on the response */
+	ret = ops->send_req(HYPER_DMABUF_DOM_ID(hid), req, WAIT_AFTER_SYNC_REQ);
+
+	kfree(req);
+
+	return ret;
+}
+
+static int hyper_dmabuf_ops_attach(struct dma_buf* dmabuf, struct device* dev,
+			struct dma_buf_attachment *attach)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!attach->dmabuf->priv)
+		return -EINVAL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_ATTACH);
+
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void hyper_dmabuf_ops_detach(struct dma_buf* dmabuf, struct dma_buf_attachment *attach)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!attach->dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_DETACH);
+
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+	}
+}
+
+static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachment,
+						enum dma_data_direction dir)
+{
+	struct sg_table *st;
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct hyper_dmabuf_pages_info *page_info;
+	int ret;
+
+	if (!attachment->dmabuf->priv)
+		return NULL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attachment->dmabuf->priv;
+
+	/* extract pages from sgt */
+	page_info = hyper_dmabuf_ext_pgs(sgt_info->sgt);
+
+	if (!page_info) {
+		return NULL;
+	}
+
+	/* create a new sg_table with extracted pages */
+	st = hyper_dmabuf_create_sgt(page_info->pages, page_info->frst_ofst,
+				page_info->last_len, page_info->nents);
+	if (!st)
+		goto err_free_sg;
+
+	if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir))
+		goto err_free_sg;
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_MAP);
+
+	kfree(page_info->pages);
+	kfree(page_info);
+
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+	}
+
+	return st;
+
+err_free_sg:
+	if (st) {
+		sg_free_table(st);
+		kfree(st);
+	}
+
+	kfree(page_info->pages);
+	kfree(page_info);
+
+	return NULL;
+}
+
+static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
+				   struct sg_table *sg,
+				   enum dma_data_direction dir)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!attachment->dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attachment->dmabuf->priv;
+
+	dma_unmap_sg(attachment->dev, sg->sgl, sg->nents, dir);
+
+	sg_free_table(sg);
+	kfree(sg);
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_UNMAP);
+
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+	}
+}
+
+static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
+	int ret;
+	int final_release;
+
+	if (!dma_buf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dma_buf->priv;
+
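+	/* a file refcount of zero means no local handle still refers
+	 * to the re-exported dma_buf
+	 */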
+	if (!dmabuf_refcount(sgt_info->dma_buf)) {
+		sgt_info->dma_buf = NULL;
+	}
+
+	sgt_info->num_importers--;
+
+	if (sgt_info->num_importers == 0) {
+		ops->unmap_shared_pages(&sgt_info->refs_info, sgt_info->nents);
+
+		if (sgt_info->sgt) {
+			sg_free_table(sgt_info->sgt);
+			kfree(sgt_info->sgt);
+			sgt_info->sgt = NULL;
+		}
+	}
+
+	final_release = sgt_info && !sgt_info->valid &&
+		        !sgt_info->num_importers;
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_RELEASE);
+	if (ret < 0) {
+		dev_warn(hyper_dmabuf_private.device,
+			 "hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+	}
+
+	/*
+	 * Check if buffer is still valid and if not remove it from imported list.
+	 * That has to be done after sending sync request
+	 */
+	if (final_release) {
+		hyper_dmabuf_remove_imported(sgt_info->hid);
+		kfree(sgt_info);
+	}
+}
+
+static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return -EINVAL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+	}
+
+	return ret;
+}
+
+static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return -EINVAL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_END_CPU_ACCESS);
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+	}
+
+	return 0;
+}
+
+static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return NULL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_KMAP_ATOMIC);
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+	}
+
+	return NULL; /* FIXME: should return the address of the mapped region */
+}
+
+static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum, void *vaddr)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+	}
+}
+
+static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return NULL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_KMAP);
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+	}
+
+	return NULL; /* FIXME: should return the address of the mapped region */
+}
+
+static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum, void *vaddr)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_KUNMAP);
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+	}
+}
+
+static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return -EINVAL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_MMAP);
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+	}
+
+	return ret;
+}
+
+static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return NULL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_VMAP);
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+	}
+
+	return NULL;
+}
+
+static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+					HYPER_DMABUF_OPS_VUNMAP);
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device,
+			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
+	}
+}
+
+static const struct dma_buf_ops hyper_dmabuf_ops = {
+		.attach = hyper_dmabuf_ops_attach,
+		.detach = hyper_dmabuf_ops_detach,
+		.map_dma_buf = hyper_dmabuf_ops_map,
+		.unmap_dma_buf = hyper_dmabuf_ops_unmap,
+		.release = hyper_dmabuf_ops_release,
+		.begin_cpu_access = (void*)hyper_dmabuf_ops_begin_cpu_access,
+		.end_cpu_access = (void*)hyper_dmabuf_ops_end_cpu_access,
+		.map_atomic = hyper_dmabuf_ops_kmap_atomic,
+		.unmap_atomic = hyper_dmabuf_ops_kunmap_atomic,
+		.map = hyper_dmabuf_ops_kmap,
+		.unmap = hyper_dmabuf_ops_kunmap,
+		.mmap = hyper_dmabuf_ops_mmap,
+		.vmap = hyper_dmabuf_ops_vmap,
+		.vunmap = hyper_dmabuf_ops_vunmap,
+};
+
+/* exporting dmabuf as fd */
+int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags)
+{
+	int fd = -1;
+
+	/* export the imported buffer as a local dma_buf, then
+	 * install a file descriptor for it
+	 */
+	hyper_dmabuf_export_dma_buf(dinfo);
+
+	if (dinfo->dma_buf) {
+		fd = dma_buf_fd(dinfo->dma_buf, flags);
+	}
+
+	return fd;
+}
+
+void hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo)
+{
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+
+	exp_info.ops = &hyper_dmabuf_ops;
+
+	/* multiple of PAGE_SIZE, not considering offset */
+	exp_info.size = dinfo->sgt->nents * PAGE_SIZE;
+	exp_info.flags = 0; /* TODO: which flags, if any, are needed here? */
+	exp_info.priv = dinfo;
+
+	dinfo->dma_buf = dma_buf_export(&exp_info);
+}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h
new file mode 100644
index 0000000..8c06fc6
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h
@@ -0,0 +1,32 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_OPS_H__
+#define __HYPER_DMABUF_OPS_H__
+
+int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags);
+
+void hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo);
+
+#endif /* __HYPER_DMABUF_OPS_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
index be1d395..9004406 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -37,7 +37,7 @@
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_id.h"
 #include "hyper_dmabuf_msg.h"
-#include "hyper_dmabuf_imp.h"
+#include "hyper_dmabuf_sgl_proc.h"
 
 extern struct hyper_dmabuf_private hyper_dmabuf_private;
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
new file mode 100644
index 0000000..c2d013a
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
@@ -0,0 +1,258 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/fs.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/dma-buf.h>
+#include <xen/grant_table.h>
+#include <asm/xen/page.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_sgl_proc.h"
+#include "hyper_dmabuf_id.h"
+#include "hyper_dmabuf_msg.h"
+#include "hyper_dmabuf_list.h"
+
+extern struct hyper_dmabuf_private hyper_dmabuf_private;
+
+#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
+
+int dmabuf_refcount(struct dma_buf *dma_buf)
+{
+	if (dma_buf && dma_buf->file)
+		return file_count(dma_buf->file);
+
+	return -1;
+}
+
+/* return the total number of pages referenced by an sgt,
+ * used to pre-size the page array backing a given sgt
+ */
+static int hyper_dmabuf_get_num_pgs(struct sg_table *sgt)
+{
+	struct scatterlist *sgl;
+	int length, i;
+	/* at least one page */
+	int num_pages = 1;
+
+	sgl = sgt->sgl;
+
+	length = sgl->length - PAGE_SIZE + sgl->offset;
+	num_pages += ((length + PAGE_SIZE - 1)/PAGE_SIZE); /* round-up */
+
+	for (i = 1; i < sgt->nents; i++) {
+		sgl = sg_next(sgl);
+		num_pages += ((sgl->length + PAGE_SIZE - 1) / PAGE_SIZE); /* round-up */
+	}
+
+	return num_pages;
+}
+
+/* extract pages directly from struct sg_table */
+struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
+{
+	struct hyper_dmabuf_pages_info *pinfo;
+	int i, j, k;
+	int length;
+	struct scatterlist *sgl;
+
+	pinfo = kmalloc(sizeof(*pinfo), GFP_KERNEL);
+	if (!pinfo)
+		return NULL;
+
+	pinfo->pages = kmalloc(sizeof(struct page *)*hyper_dmabuf_get_num_pgs(sgt), GFP_KERNEL);
+	if (!pinfo->pages)
+		return NULL;
+
+	sgl = sgt->sgl;
+
+	pinfo->nents = 1;
+	pinfo->frst_ofst = sgl->offset;
+	pinfo->pages[0] = sg_page(sgl);
+	length = sgl->length - PAGE_SIZE + sgl->offset;
+	i = 1;
+
+	while (length > 0) {
+		pinfo->pages[i] = nth_page(sg_page(sgl), i);
+		length -= PAGE_SIZE;
+		pinfo->nents++;
+		i++;
+	}
+
+	for (j = 1; j < sgt->nents; j++) {
+		sgl = sg_next(sgl);
+		pinfo->pages[i++] = sg_page(sgl);
+		length = sgl->length - PAGE_SIZE;
+		pinfo->nents++;
+		k = 1;
+
+		while (length > 0) {
+			pinfo->pages[i++] = nth_page(sg_page(sgl), k++);
+			length -= PAGE_SIZE;
+			pinfo->nents++;
+		}
+	}
+
+	/*
+	 * length at this point is 0 or negative, so the size
+	 * of data in the last page is just PAGE_SIZE + length
+	 */
+	pinfo->last_len = PAGE_SIZE + length;
+
+	return pinfo;
+}
+
+/* create sg_table with given pages and other parameters */
+struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
+					 int frst_ofst, int last_len, int nents)
+{
+	struct sg_table *sgt;
+	struct scatterlist *sgl;
+	int i, ret;
+
+	sgt = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
+	if (!sgt) {
+		return NULL;
+	}
+
+	ret = sg_alloc_table(sgt, nents, GFP_KERNEL);
+	if (ret) {
+		/* sg_alloc_table() undoes its own partial allocations
+		 * on failure, so only the table itself needs freeing
+		 */
+		kfree(sgt);
+
+		return NULL;
+	}
+
+	sgl = sgt->sgl;
+
+	sg_set_page(sgl, pages[0], PAGE_SIZE-frst_ofst, frst_ofst);
+
+	for (i = 1; i < nents - 1; i++) {
+		sgl = sg_next(sgl);
+		sg_set_page(sgl, pages[i], PAGE_SIZE, 0);
+	}
+
+	if (nents > 1) { /* more than one page */
+		sgl = sg_next(sgl);
+		sg_set_page(sgl, pages[i], last_len, 0);
+	}
+
+	return sgt;
+}
+
+int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int force)
+{
+	struct sgt_list *sgtl;
+	struct attachment_list *attachl;
+	struct kmap_vaddr_list *va_kmapl;
+	struct vmap_vaddr_list *va_vmapl;
+	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
+
+	if (!sgt_info) {
+		dev_err(hyper_dmabuf_private.device, "invalid hyper_dmabuf_id\n");
+		return -EINVAL;
+	}
+
+	/* if force != 1, sgt_info can be released only if
+	 * there's no activity on exported dma-buf on importer
+	 * side.
+	 */
+	if (!force &&
+	    sgt_info->importer_exported) {
+		dev_warn(hyper_dmabuf_private.device, "dma-buf is used by importer\n");
+		return -EPERM;
+	}
+
+	/* force == 1 is not recommended */
+	while (!list_empty(&sgt_info->va_kmapped->list)) {
+		va_kmapl = list_first_entry(&sgt_info->va_kmapped->list,
+					    struct kmap_vaddr_list, list);
+
+		dma_buf_kunmap(sgt_info->dma_buf, 1, va_kmapl->vaddr);
+		list_del(&va_kmapl->list);
+		kfree(va_kmapl);
+	}
+
+	while (!list_empty(&sgt_info->va_vmapped->list)) {
+		va_vmapl = list_first_entry(&sgt_info->va_vmapped->list,
+					    struct vmap_vaddr_list, list);
+
+		dma_buf_vunmap(sgt_info->dma_buf, va_vmapl->vaddr);
+		list_del(&va_vmapl->list);
+		kfree(va_vmapl);
+	}
+
+	while (!list_empty(&sgt_info->active_sgts->list)) {
+		attachl = list_first_entry(&sgt_info->active_attached->list,
+					   struct attachment_list, list);
+
+		sgtl = list_first_entry(&sgt_info->active_sgts->list,
+					struct sgt_list, list);
+
+		dma_buf_unmap_attachment(attachl->attach, sgtl->sgt,
+					 DMA_BIDIRECTIONAL);
+		list_del(&sgtl->list);
+		kfree(sgtl);
+	}
+
+	while (!list_empty(&sgt_info->active_attached->list)) {
+		attachl = list_first_entry(&sgt_info->active_attached->list,
+					   struct attachment_list, list);
+
+		dma_buf_detach(sgt_info->dma_buf, attachl->attach);
+		list_del(&attachl->list);
+		kfree(attachl);
+	}
+
+	/* Start cleanup of buffer in reverse order to exporting */
+	ops->unshare_pages(&sgt_info->refs_info, sgt_info->nents);
+
+	/* unmap dma-buf */
+	dma_buf_unmap_attachment(sgt_info->active_attached->attach,
+				 sgt_info->active_sgts->sgt,
+				 DMA_BIDIRECTIONAL);
+
+	/* detach dma-buf */
+	dma_buf_detach(sgt_info->dma_buf, sgt_info->active_attached->attach);
+
+	/* close connection to dma-buf completely */
+	dma_buf_put(sgt_info->dma_buf);
+	sgt_info->dma_buf = NULL;
+
+	kfree(sgt_info->active_sgts);
+	kfree(sgt_info->active_attached);
+	kfree(sgt_info->va_kmapped);
+	kfree(sgt_info->va_vmapped);
+
+	return 0;
+}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
new file mode 100644
index 0000000..237ccf5
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_SGL_PROC_H__
+#define __HYPER_DMABUF_SGL_PROC_H__
+
+int dmabuf_refcount(struct dma_buf *dma_buf);
+
+/* extract pages directly from struct sg_table */
+struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt);
+
+/* create sg_table with given pages and other parameters */
+struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
+					 int frst_ofst, int last_len, int nents);
+
+int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int force);
+
+void hyper_dmabuf_free_sgt(struct sg_table *sgt);
+
+#endif /* __HYPER_DMABUF_SGL_PROC_H__ */
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 160+ messages in thread
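
To make the arithmetic in hyper_dmabuf_ext_pgs() and
hyper_dmabuf_create_sgt() above concrete, here is a small standalone
sketch (plain userspace C, not driver code; the toy scatterlist and all
names are illustrative) that derives nents, frst_ofst and last_len for
a buffer whose first entry starts mid-page and whose last entry ends
mid-page:

#include <stdio.h>

#define PAGE_SIZE 4096UL

struct sg_ent { unsigned long offset, length; };

int main(void)
{
	/* 3 entries: first starts mid-page, last ends mid-page */
	struct sg_ent sg[] = {
		{ 512, 2 * PAGE_SIZE - 512 },	/* spans 2 pages */
		{ 0, PAGE_SIZE },		/* one full page */
		{ 0, 1000 },			/* partial page  */
	};
	int n = sizeof(sg) / sizeof(sg[0]);
	unsigned long frst_ofst = sg[0].offset;
	unsigned long nents = 0, last_len = 0;
	int i;

	for (i = 0; i < n; i++) {
		/* pages spanned by this entry, rounding up */
		unsigned long span = sg[i].offset + sg[i].length;
		unsigned long pages = (span + PAGE_SIZE - 1) / PAGE_SIZE;

		nents += pages;
		/* valid data length within the very last page */
		if (i == n - 1)
			last_len = span - (pages - 1) * PAGE_SIZE;
	}

	/* prints: nents=4 frst_ofst=512 last_len=1000 */
	printf("nents=%lu frst_ofst=%lu last_len=%lu\n",
	       nents, frst_ofst, last_len);
	return 0;
}

These three values are exactly what the importer needs, besides the
shared page references themselves, to rebuild an equivalent sg_table
on its side.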

* [RFC PATCH 42/60] hyper_dmabuf: always generate new random keys
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

From: Mateusz Polrola <mateuszx.potrola@intel.com>

The random key embedded in a hyper_dmabuf_id needs to be regenerated
whenever an id is reused from the list, so that recycled ids stay
unpredictable.

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
index f59dee3..cccdc19 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
@@ -120,10 +120,11 @@ hyper_dmabuf_id_t hyper_dmabuf_get_hid(void)
 	 */
 	if (hid.id == -1 && count < HYPER_DMABUF_ID_MAX) {
 		hid.id = HYPER_DMABUF_ID_CREATE(hyper_dmabuf_private.domid, count++);
-		/* random data embedded in the id for security */
-		get_random_bytes(&hid.rng_key[0], 12);
 	}
 
+	/* random data embedded in the id for security */
+	get_random_bytes(&hid.rng_key[0], 12);
+
 	return hid;
 }
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread
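
A minimal userspace-style sketch of the scheme this patch hardens: a
handle is only considered valid when both the id and all three 32-bit
random key words match, so an id slot reused from the free list must be
paired with a freshly generated key. The type below mirrors the layout
of the driver's hyper_dmabuf_id_t; the comparison helper and main() are
made up for illustration.

#include <stdbool.h>
#include <stdint.h>

typedef struct {
	int id;			/* domid plus sequence number */
	uint32_t rng_key[3];	/* 12 bytes of random data */
} sample_hid_t;

/* a handle is valid only if id and the whole random key match */
static bool hid_is_same(sample_hid_t a, sample_hid_t b)
{
	return a.id == b.id &&
	       a.rng_key[0] == b.rng_key[0] &&
	       a.rng_key[1] == b.rng_key[1] &&
	       a.rng_key[2] == b.rng_key[2];
}

int main(void)
{
	sample_hid_t old = { 42, { 0x1111, 0x2222, 0x3333 } };
	sample_hid_t reused = old;

	/* reuse id 42 with a fresh key, as hyper_dmabuf_get_hid() now does */
	reused.rng_key[0] ^= 0xdeadbeef;

	/* returns 0: a stale handle held elsewhere no longer matches */
	return hid_is_same(old, reused);
}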

* [RFC PATCH 43/60] hyper_dmabuf: fix memory leaks in various places
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:29   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

Make sure to free buffers before returning, to prevent memory leaks.

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 19 +++++++-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        |  9 +++-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c        |  6 ++-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c   |  4 +-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   | 52 +++++++++++++++++++---
 5 files changed, 78 insertions(+), 12 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 283fe5a..3215003 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -282,6 +282,7 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 
 	/* free msg */
 	kfree(req);
+
 	/* free page_info */
 	kfree(page_info->pages);
 	kfree(page_info);
@@ -298,6 +299,10 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 fail_map_req:
 	hyper_dmabuf_remove_exported(sgt_info->hid);
 
+	/* free page_info */
+	kfree(page_info->pages);
+	kfree(page_info);
+
 fail_export:
 	kfree(sgt_info->va_vmapped);
 
@@ -433,6 +438,13 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 
 			sgt_info->num_importers--;
 			req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+
+			if (!req) {
+				dev_err(hyper_dmabuf_private.device,
+					"No more space left\n");
+				return -ENOMEM;
+			}
+
 			hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT_FD_FAILED, &operands[0]);
 			ops->send_req(HYPER_DMABUF_DOM_ID(sgt_info->hid), req, false);
 			kfree(req);
@@ -681,16 +693,19 @@ long hyper_dmabuf_ioctl(struct file *filp,
 
 	if (copy_from_user(kdata, (void __user *)param, _IOC_SIZE(cmd)) != 0) {
 		dev_err(hyper_dmabuf_private.device, "failed to copy from user arguments\n");
-		return -EFAULT;
+		ret = -EFAULT;
+		goto ioctl_error;
 	}
 
 	ret = func(filp, kdata);
 
 	if (copy_to_user((void __user *)param, kdata, _IOC_SIZE(cmd)) != 0) {
 		dev_err(hyper_dmabuf_private.device, "failed to copy to user arguments\n");
-		return -EFAULT;
+		ret = -EFAULT;
+		goto ioctl_error;
 	}
 
+ioctl_error:
 	kfree(kdata);
 
 	return ret;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index c516df8..46cf9a4 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -191,8 +191,7 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 	struct hyper_dmabuf_req *temp_req;
 	struct hyper_dmabuf_imported_sgt_info *sgt_info;
 	struct hyper_dmabuf_sgt_info *exp_sgt_info;
-	hyper_dmabuf_id_t hid = {req->operands[0], /* hid.id */
-			       {req->operands[1], req->operands[2], req->operands[3]}}; /* hid.rng_key */
+	hyper_dmabuf_id_t hid;
 	int ret;
 
 	if (!req) {
@@ -200,6 +199,11 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 		return -EINVAL;
 	}
 
+	hid.id = req->operands[0];
+	hid.rng_key[0] = req->operands[1];
+	hid.rng_key[1] = req->operands[2];
+	hid.rng_key[2] = req->operands[3];
+
 	if ((req->command < HYPER_DMABUF_EXPORT) ||
 		(req->command > HYPER_DMABUF_OPS_TO_SOURCE)) {
 		dev_err(hyper_dmabuf_private.device, "invalid command\n");
@@ -332,6 +336,7 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 	if (!proc) {
 		dev_err(hyper_dmabuf_private.device,
 			"No memory left to be allocated\n");
+		kfree(temp_req);
 		return -ENOMEM;
 	}
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
index 81cb09f..9313c42 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
@@ -148,9 +148,8 @@ static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachme
 	if (!st)
 		goto err_free_sg;
 
-        if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir)) {
+        if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir))
                 goto err_free_sg;
-        }
 
 	ret = hyper_dmabuf_sync_request(sgt_info->hid,
 					HYPER_DMABUF_OPS_MAP);
@@ -171,6 +170,9 @@ static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachme
 		kfree(st);
 	}
 
+	kfree(page_info->pages);
+	kfree(page_info);
+
 	return NULL;
 }
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
index c2d013a..dd17d26 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
@@ -89,8 +89,10 @@ struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
 		return NULL;
 
 	pinfo->pages = kmalloc(sizeof(struct page *)*hyper_dmabuf_get_num_pgs(sgt), GFP_KERNEL);
-	if (!pinfo->pages)
+	if (!pinfo->pages) {
+		kfree(pinfo);
 		return NULL;
+	}
 
 	sgl = sgt->sgl;
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index 43dd3b6..9689346 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -229,9 +229,16 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 
 	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
 
+	if (!ring_info) {
+		dev_err(hyper_dmabuf_private.device,
+			"No more spae left\n");
+		return -ENOMEM;
+	}
+
 	/* from exporter to importer */
 	shared_ring = (void *)__get_free_pages(GFP_KERNEL, 1);
 	if (shared_ring == 0) {
+		kfree(ring_info);
 		return -ENOMEM;
 	}
 
@@ -246,6 +253,7 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 							   0);
 	if (ring_info->gref_ring < 0) {
 		/* fail to get gref */
+		kfree(ring_info);
 		return -EFAULT;
 	}
 
@@ -256,6 +264,7 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 	if (ret) {
 		dev_err(hyper_dmabuf_private.device,
 			"Cannot allocate event channel\n");
+		kfree(ring_info);
 		return -EIO;
 	}
 
@@ -271,6 +280,7 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 		HYPERVISOR_event_channel_op(EVTCHNOP_close, &close);
 		gnttab_end_foreign_access(ring_info->gref_ring, 0,
 					virt_to_mfn(shared_ring));
+		kfree(ring_info);
 		return -EIO;
 	}
 
@@ -299,6 +309,14 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 	 */
 	ring_info->watch.callback = remote_dom_exporter_watch_cb;
 	ring_info->watch.node = (const char*) kmalloc(sizeof(char) * 255, GFP_KERNEL);
+
+	if (!ring_info->watch.node) {
+		dev_err(hyper_dmabuf_private.device,
+			"No more space left\n");
+		kfree(ring_info);
+		return -ENOMEM;
+	}
+
 	sprintf((char*)ring_info->watch.node,
 		"/local/domain/%d/data/hyper_dmabuf/%d/port",
 		domid, hyper_dmabuf_xen_get_domid());
@@ -392,8 +410,16 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 
 	map_ops = kmalloc(sizeof(*map_ops), GFP_KERNEL);
 
+	if (!map_ops) {
+		dev_err(hyper_dmabuf_private.device,
+			"No memory left to be allocated\n");
+		ret = -ENOMEM;
+		goto fail_no_map_ops;
+	}
+
 	if (gnttab_alloc_pages(1, &shared_ring)) {
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto fail_others;
 	}
 
 	gnttab_set_map_op(&map_ops[0], (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
@@ -405,12 +431,14 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 	ret = gnttab_map_refs(map_ops, NULL, &shared_ring, 1);
 	if (ret < 0) {
 		dev_err(hyper_dmabuf_private.device, "Cannot map ring\n");
-		return -EFAULT;
+		ret = -EFAULT;
+		goto fail_others;
 	}
 
 	if (map_ops[0].status) {
 		dev_err(hyper_dmabuf_private.device, "Ring mapping failed\n");
-		return -EFAULT;
+		ret = -EFAULT;
+		goto fail_others;
 	} else {
 		ring_info->unmap_op.handle = map_ops[0].handle;
 	}
@@ -424,7 +452,8 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 	ret = bind_interdomain_evtchn_to_irq(domid, rx_port);
 
 	if (ret < 0) {
-		return -EIO;
+		ret = -EIO;
+		goto fail_others;
 	}
 
 	ring_info->irq = ret;
@@ -445,6 +474,12 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 			  back_ring_isr, 0,
 			  NULL, (void*)ring_info);
 
+fail_others:
+	kfree(map_ops);
+
+fail_no_map_ops:
+	kfree(ring_info);
+
 	return ret;
 }
 
@@ -520,15 +555,22 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
 		return -ENOENT;
 	}
 
-
 	mutex_lock(&ring_info->lock);
 
 	ring = &ring_info->ring_front;
 
 	while (RING_FULL(ring)) {
+		if (timeout == 0) {
+			dev_err(hyper_dmabuf_private.device,
+				"Timeout while waiting for an entry in the ring\n");
+			mutex_unlock(&ring_info->lock);
+			return -EIO;
+		}
 		usleep_range(100, 120);
+		timeout--;
 	}
 
+	timeout = 1000;
+
 	new_req = RING_GET_REQUEST(ring, ring->req_prod_pvt);
 	if (!new_req) {
 		mutex_unlock(&ring_info->lock);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread
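
The common thread in these fixes is the kernel's usual unwind idiom:
acquire resources in order and, on failure, jump to a label that
releases everything acquired so far, in reverse order. A minimal sketch
of that pattern (standalone C with malloc/free standing in for the
driver's allocations; all names are illustrative):

#include <stdlib.h>

struct ring { void *info; void *ops; };

int ring_init(struct ring *r)
{
	int ret;

	r->info = malloc(64);
	if (!r->info)
		return -1;		/* nothing to unwind yet */

	r->ops = malloc(64);
	if (!r->ops) {
		ret = -1;
		goto fail_ops;		/* unwind everything before this */
	}

	return 0;			/* success: caller owns info/ops */

fail_ops:
	free(r->info);
	r->info = NULL;
	return ret;
}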

* [RFC PATCH 44/60] hyper_dmabuf: proper handling of sgt_info->priv
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:30   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

sgt_info->priv will be used to store user private info passed in
ioctl. Data in sgt_info->priv is transferred via the comm channel to
the importer VM whenever a DMA_BUF is exported, to keep the private
data synchronized across VMs.

This patch also adds hyper_dmabuf_send_export_msg, which takes over
part of export_remote_ioctl to make it more readable and compact.

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c  | 110 ++++++++++++++-----------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h |   6 +-
 2 files changed, 65 insertions(+), 51 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 3215003..dfdb889 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -82,17 +82,64 @@ static int hyper_dmabuf_rx_ch_setup_ioctl(struct file *filp, void *data)
 	return ret;
 }
 
+static int hyper_dmabuf_send_export_msg(struct hyper_dmabuf_sgt_info *sgt_info,
+					struct hyper_dmabuf_pages_info *page_info)
+{
+	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
+	struct hyper_dmabuf_req *req;
+	int operands[12] = {0};
+	int ret, i;
+
+	/* now create request for importer via ring */
+	operands[0] = sgt_info->hid.id;
+
+	for (i = 0; i < 3; i++)
+		operands[i+1] = sgt_info->hid.rng_key[i];
+
+	if (page_info) {
+		operands[4] = page_info->nents;
+		operands[5] = page_info->frst_ofst;
+		operands[6] = page_info->last_len;
+		operands[7] = ops->share_pages(page_info->pages, sgt_info->hyper_dmabuf_rdomain,
+						page_info->nents, &sgt_info->refs_info);
+		if (operands[7] < 0) {
+			dev_err(hyper_dmabuf_private.device, "pages sharing failed\n");
+			return -1;
+		}
+	}
+
+	/* driver/application specific private info, max 4x4 bytes */
+	memcpy(&operands[8], &sgt_info->priv[0], sizeof(unsigned int) * 4);
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+
+	if (!req) {
+		dev_err(hyper_dmabuf_private.device, "no more space left\n");
+		return -1;
+	}
+
+	/* composing a message to the importer */
+	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT, &operands[0]);
+
+	ret = ops->send_req(sgt_info->hyper_dmabuf_rdomain, req, false);
+
+	if (ret) {
+		dev_err(hyper_dmabuf_private.device, "error while communicating\n");
+	}
+
+	kfree(req);
+
+	return ret;
+}
+
 static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 {
 	struct ioctl_hyper_dmabuf_export_remote *export_remote_attr;
-	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
 	struct dma_buf *dma_buf;
 	struct dma_buf_attachment *attachment;
 	struct sg_table *sgt;
 	struct hyper_dmabuf_pages_info *page_info;
 	struct hyper_dmabuf_sgt_info *sgt_info;
-	struct hyper_dmabuf_req *req;
-	int operands[MAX_NUMBER_OF_OPERANDS];
 	hyper_dmabuf_id_t hid;
 	int i;
 	int ret = 0;
@@ -138,6 +185,13 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 					}
 					sgt_info->unexport_scheduled = 0;
 				}
+
+				/* update private data in sgt_info with new ones */
+				memcpy(&sgt_info->priv[0], &export_remote_attr->priv[0], sizeof(unsigned int) * 4);
+
+				/* TODO: send this private info to the importer
+				 * so its copy is updated as well */
+
 				dma_buf_put(dma_buf);
 				export_remote_attr->hid = hid;
 				return 0;
@@ -225,6 +279,9 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 	INIT_LIST_HEAD(&sgt_info->va_kmapped->list);
 	INIT_LIST_HEAD(&sgt_info->va_vmapped->list);
 
+	/* copy private data to sgt_info */
+	memcpy(&sgt_info->priv[0], &export_remote_attr->priv[0], sizeof(unsigned int) * 4);
+
 	page_info = hyper_dmabuf_ext_pgs(sgt);
 	if (!page_info) {
 		dev_err(hyper_dmabuf_private.device, "failed to construct page_info\n");
@@ -236,53 +293,15 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 	/* now register it to export list */
 	hyper_dmabuf_register_exported(sgt_info);
 
-	page_info->hyper_dmabuf_rdomain = sgt_info->hyper_dmabuf_rdomain;
-	page_info->hid = sgt_info->hid; /* may not be needed */
-
 	export_remote_attr->hid = sgt_info->hid;
 
-	/* now create request for importer via ring */
-	operands[0] = page_info->hid.id;
-
-	for (i=0; i<3; i++)
-		operands[i+1] = page_info->hid.rng_key[i];
-
-	operands[4] = page_info->nents;
-	operands[5] = page_info->frst_ofst;
-	operands[6] = page_info->last_len;
-	operands[7] = ops->share_pages (page_info->pages, export_remote_attr->remote_domain,
-					page_info->nents, &sgt_info->refs_info);
-	if (operands[7] < 0) {
-		dev_err(hyper_dmabuf_private.device, "pages sharing failed\n");
-		goto fail_map_req;
-	}
-
-	/* driver/application specific private info, max 4x4 bytes */
-	operands[8] = export_remote_attr->priv[0];
-	operands[9] = export_remote_attr->priv[1];
-	operands[10] = export_remote_attr->priv[2];
-	operands[11] = export_remote_attr->priv[3];
-
-	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+	ret = hyper_dmabuf_send_export_msg(sgt_info, page_info);
 
-	if(!req) {
-		dev_err(hyper_dmabuf_private.device, "no more space left\n");
-		goto fail_map_req;
-	}
-
-	/* composing a message to the importer */
-	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT, &operands[0]);
-
-	ret = ops->send_req(export_remote_attr->remote_domain, req, false);
-
-	if(ret) {
-		dev_err(hyper_dmabuf_private.device, "error while communicating\n");
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device, "failed to send out the export request\n");
 		goto fail_send_request;
 	}
 
-	/* free msg */
-	kfree(req);
-
 	/* free page_info */
 	kfree(page_info->pages);
 	kfree(page_info);
@@ -294,9 +313,6 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 /* Clean-up if error occurs */
 
 fail_send_request:
-	kfree(req);
-
-fail_map_req:
 	hyper_dmabuf_remove_exported(sgt_info->hid);
 
 	/* free page_info */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
index 991a8d4..a1d3ec6 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -51,8 +51,6 @@ struct vmap_vaddr_list {
 
 /* Exporter builds pages_info before sharing pages */
 struct hyper_dmabuf_pages_info {
-        hyper_dmabuf_id_t hid; /* unique id to reference dmabuf in source domain */
-        int hyper_dmabuf_rdomain; /* currenting considering just one remote domain access it */
         int frst_ofst; /* offset of data in the first page */
         int last_len; /* length of data in the last page */
         int nents; /* # of pages */
@@ -71,7 +69,7 @@ struct hyper_dmabuf_sgt_info {
 	int hyper_dmabuf_rdomain; /* domain importing this sgt */
 
 	struct dma_buf *dma_buf; /* needed to store this for freeing it later */
-	int nents; /* number of pages, which may be different than sgt->nents */
+	int nents;
 
 	/* list of remote activities on dma_buf */
 	struct sgt_list *active_sgts;
@@ -92,7 +90,7 @@ struct hyper_dmabuf_sgt_info {
 	 * uses releases hyper_dmabuf device
 	 */
 	struct file *filp;
-	int private[4]; /* device specific info (e.g. image's meta info?) */
+	int priv[4]; /* device specific info (e.g. image's meta info?) */
 };
 
 /* Importer store references (before mapping) on shared pages
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread
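
A short sketch of how the 16 bytes of user private data travel in the
export message after this patch: the ioctl's priv[4] is copied into
sgt_info->priv, and from there into operands[8..11] of the request with
a single memcpy. The operand layout below follows the patch; the rest
of the program is illustrative userspace C.

#include <stdio.h>
#include <string.h>

int main(void)
{
	int priv[4] = { 0xaa, 0xbb, 0xcc, 0xdd };	/* from the ioctl */
	int operands[12] = { 0 };
	int i;

	/* operands[0..3]: hid.id + rng_key, operands[4..7]: sharing info */

	/* driver/application specific private info, max 4x4 bytes */
	memcpy(&operands[8], &priv[0], sizeof(int) * 4);

	for (i = 8; i < 12; i++)
		printf("operands[%d] = 0x%x\n", i, operands[i]);

	return 0;
}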

* [RFC PATCH 45/60] hyper_dmabuf: adding poll/read for event generation
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:30   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

The hyper_dmabuf driver on the importing domain now generates an
event every time a new hyper_dmabuf becomes available (visible)
to the importer.

Each event carries 128 bytes of private data, which can contain
any metadata or user data specific to the originator of the
DMA_BUF.

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/Kconfig                   |  10 ++
 drivers/xen/hyper_dmabuf/Makefile                  |   1 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        | 120 +++++++++++++++++++-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        |  48 ++++++--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c      | 125 +++++++++++++++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_event.h      |  38 +++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      |  23 ++--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c       |   1 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        |  44 +++++++-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h        |   2 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h     |   4 +-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   |  37 +++++-
 include/uapi/xen/hyper_dmabuf.h                    |  13 ++-
 13 files changed, 430 insertions(+), 36 deletions(-)
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_event.h

diff --git a/drivers/xen/hyper_dmabuf/Kconfig b/drivers/xen/hyper_dmabuf/Kconfig
index 185fdf8..eb1b637 100644
--- a/drivers/xen/hyper_dmabuf/Kconfig
+++ b/drivers/xen/hyper_dmabuf/Kconfig
@@ -19,4 +19,14 @@ config HYPER_DMABUF_SYSFS
 	  Expose information about imported and exported buffers using
 	  hyper_dmabuf driver
 
+config HYPER_DMABUF_EVENT_GEN
+	bool "Enable event-generation and polling operation"
+	default n
+	depends on HYPER_DMABUF
+	help
+	  With this config enabled, hyper_dmabuf driver on the importer side
+	  generates events and queue those up in the event list whenever a new
+	  shared DMA-BUF is available. Events in the list can be retrieved by
+	  read operation.
+
 endmenu
diff --git a/drivers/xen/hyper_dmabuf/Makefile b/drivers/xen/hyper_dmabuf/Makefile
index 5040b9f..1cd7a81 100644
--- a/drivers/xen/hyper_dmabuf/Makefile
+++ b/drivers/xen/hyper_dmabuf/Makefile
@@ -13,6 +13,7 @@ ifneq ($(KERNELRELEASE),)
 				 hyper_dmabuf_id.o \
 				 hyper_dmabuf_remote_sync.o \
 				 hyper_dmabuf_query.o \
+				 hyper_dmabuf_event.o \
 
 ifeq ($(CONFIG_XEN), y)
 	$(TARGET_MODULE)-objs += xen/hyper_dmabuf_xen_comm.o \
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 8c488d7..2845224 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -30,7 +30,10 @@
 #include <linux/module.h>
 #include <linux/miscdevice.h>
 #include <linux/workqueue.h>
+#include <linux/slab.h>
 #include <linux/device.h>
+#include <linux/uaccess.h>
+#include <linux/poll.h>
 #include <linux/dma-buf.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_conf.h"
@@ -38,6 +41,7 @@
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_id.h"
+#include "hyper_dmabuf_event.h"
 
 #ifdef CONFIG_HYPER_DMABUF_XEN
 #include "xen/hyper_dmabuf_xen_drv.h"
@@ -64,7 +68,7 @@ int hyper_dmabuf_open(struct inode *inode, struct file *filp)
 		return -EBUSY;
 
 	/*
-	 * Initialize backend if neededm,
+	 * Initialize backend if needed,
 	 * use mutex to prevent race conditions when
 	 * two userspace apps will open device at the same time
 	 */
@@ -91,6 +95,112 @@ int hyper_dmabuf_release(struct inode *inode, struct file *filp)
 {
 	hyper_dmabuf_foreach_exported(hyper_dmabuf_emergency_release, filp);
 
+	/* clean up event queue */
+	hyper_dmabuf_events_release();
+
+	return 0;
+}
+
+unsigned int hyper_dmabuf_event_poll(struct file *filp, struct poll_table_struct *wait)
+{
+	unsigned int mask = 0;
+
+	poll_wait(filp, &hyper_dmabuf_private.event_wait, wait);
+
+	if (!list_empty(&hyper_dmabuf_private.event_list))
+		mask |= POLLIN | POLLRDNORM;
+
+	return mask;
+}
+
+ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
+		size_t count, loff_t *offset)
+{
+	int ret;
+
+	/* only root can read events */
+	if (!capable(CAP_DAC_OVERRIDE))
+		return -EPERM;
+
+	/* make sure user buffer can be written */
+	if (!access_ok(VERIFY_WRITE, buffer, count))
+		return -EFAULT;
+
+	ret = mutex_lock_interruptible(&hyper_dmabuf_private.event_read_lock);
+	if (ret)
+		return ret;
+
+	while (1) {
+		struct hyper_dmabuf_event *e = NULL;
+
+		spin_lock_irq(&hyper_dmabuf_private.event_lock);
+		if (!list_empty(&hyper_dmabuf_private.event_list)) {
+			e = list_first_entry(&hyper_dmabuf_private.event_list,
+					struct hyper_dmabuf_event, link);
+			list_del(&e->link);
+		}
+		spin_unlock_irq(&hyper_dmabuf_private.event_lock);
+
+		if (!e) {
+			if (ret)
+				break;
+			if (filp->f_flags & O_NONBLOCK) {
+				ret = -EAGAIN;
+				break;
+			}
+
+			mutex_unlock(&hyper_dmabuf_private.event_read_lock);
+			ret = wait_event_interruptible(hyper_dmabuf_private.event_wait,
+						       !list_empty(&hyper_dmabuf_private.event_list));
+
+			if (ret >= 0)
+				ret = mutex_lock_interruptible(&hyper_dmabuf_private.event_read_lock);
+
+			if (ret)
+				return ret;
+		} else {
+			unsigned length = (sizeof(struct hyper_dmabuf_event_hdr) + e->event_data.hdr.size);
+
+			if (length > count - ret) {
+put_back_event:
+				spin_lock_irq(&hyper_dmabuf_private.event_lock);
+				list_add(&e->link, &hyper_dmabuf_private.event_list);
+				spin_unlock_irq(&hyper_dmabuf_private.event_lock);
+				break;
+			}
+
+			if (copy_to_user(buffer + ret, &e->event_data.hdr,
+					 sizeof(struct hyper_dmabuf_event_hdr))) {
+				if (ret == 0)
+					ret = -EFAULT;
+
+				goto put_back_event;
+			}
+
+			ret += sizeof(struct hyper_dmabuf_event_hdr);
+
+			if (copy_to_user(buffer + ret, e->event_data.data, e->event_data.hdr.size)) {
+				/* error while copying void *data */
+
+				struct hyper_dmabuf_event_hdr dummy_hdr = {0};
+				ret -= sizeof(struct hyper_dmabuf_event_hdr);
+
+				/* nullifying hdr of the event in user buffer */
+				copy_to_user(buffer + ret, &dummy_hdr,
+					     sizeof(dummy_hdr));
+
+				ret = -EFAULT;
+
+				goto put_back_event;
+			}
+
+			ret += e->event_data.hdr.size;
+			kfree(e);
+		}
+	}
+
+	mutex_unlock(&hyper_dmabuf_private.event_read_lock);
+
 	return 0;
 }
 
@@ -99,6 +209,8 @@ static struct file_operations hyper_dmabuf_driver_fops =
 	.owner = THIS_MODULE,
 	.open = hyper_dmabuf_open,
 	.release = hyper_dmabuf_release,
+	.read = hyper_dmabuf_event_read,
+	.poll = hyper_dmabuf_event_poll,
 	.unlocked_ioctl = hyper_dmabuf_ioctl,
 };
 
@@ -184,6 +296,12 @@ static int __init hyper_dmabuf_drv_init(void)
 	}
 #endif
 
+	/* Initialize event queue */
+	INIT_LIST_HEAD(&hyper_dmabuf_private.event_list);
+	init_waitqueue_head(&hyper_dmabuf_private.event_wait);
+
+	hyper_dmabuf_private.curr_num_event = 0;
+
 	/* interrupt for comm should be registered here: */
 	return ret;
 }
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
index ffe4d53..08e8ed7 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -30,6 +30,42 @@
 
 struct hyper_dmabuf_req;
 
+struct hyper_dmabuf_event {
+	struct hyper_dmabuf_event_data event_data;
+	struct list_head link;
+};
+
+struct hyper_dmabuf_private {
+        struct device *device;
+
+	/* VM(domain) id of current VM instance */
+	int domid;
+
+	/* workqueue dedicated to hyper_dmabuf driver */
+	struct workqueue_struct *work_queue;
+
+	/* list of reusable hyper_dmabuf_ids */
+	struct list_reusable_id *id_queue;
+
+	/* backend ops - hypervisor specific */
+	struct hyper_dmabuf_backend_ops *backend_ops;
+
+	/* device global lock */
+	/* TODO: might need a lock per resource (e.g. EXPORT LIST) */
+	struct mutex lock;
+
+	/* flag that shows whether backend is initialized */
+	bool backend_initialized;
+
+	wait_queue_head_t event_wait;
+	struct list_head event_list;
+
+	spinlock_t event_lock;
+	struct mutex event_read_lock;
+
+	int curr_num_event;
+};
+
 struct list_reusable_id {
 	hyper_dmabuf_id_t hid;
 	struct list_head list;
@@ -69,16 +105,4 @@ struct hyper_dmabuf_backend_ops {
 	int (*send_req)(int, struct hyper_dmabuf_req *, int);
 };
 
-struct hyper_dmabuf_private {
-        struct device *device;
-	int domid;
-	struct workqueue_struct *work_queue;
-	struct list_reusable_id *id_queue;
-
-	/* backend ops - hypervisor specific */
-	struct hyper_dmabuf_backend_ops *backend_ops;
-	struct mutex lock;
-	bool backend_initialized;
-};
-
 #endif /* __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
new file mode 100644
index 0000000..be70e54
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
@@ -0,0 +1,125 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/fs.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/dma-buf.h>
+#include <xen/grant_table.h>
+#include <asm/xen/page.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_event.h"
+
+extern struct hyper_dmabuf_private hyper_dmabuf_private;
+
+static void hyper_dmabuf_send_event_locked(struct hyper_dmabuf_event *e)
+{
+	struct hyper_dmabuf_event *oldest;
+
+	assert_spin_locked(&hyper_dmabuf_private.event_lock);
+
+	/* if the number of pending events hits the max allowed,
+	 * remove the oldest event in the list */
+	if (hyper_dmabuf_private.curr_num_event > MAX_NUMBER_OF_EVENT - 1) {
+		oldest = list_first_entry(&hyper_dmabuf_private.event_list,
+				struct hyper_dmabuf_event, link);
+		list_del(&oldest->link);
+		hyper_dmabuf_private.curr_num_event--;
+	}
+
+	list_add_tail(&e->link,
+		      &hyper_dmabuf_private.event_list);
+
+	hyper_dmabuf_private.curr_num_event++;
+
+	wake_up_interruptible(&hyper_dmabuf_private.event_wait);
+}
+
+void hyper_dmabuf_events_release(void)
+{
+	struct hyper_dmabuf_event *e, *et;
+	unsigned long irqflags;
+
+	spin_lock_irqsave(&hyper_dmabuf_private.event_lock, irqflags);
+
+	list_for_each_entry_safe(e, et, &hyper_dmabuf_private.event_list,
+				 link) {
+		list_del(&e->link);
+		hyper_dmabuf_private.curr_num_event--;
+	}
+
+	if (hyper_dmabuf_private.curr_num_event) {
+		dev_err(hyper_dmabuf_private.device,
+			"possible leak on event_list\n");
+	}
+
+	spin_unlock_irqrestore(&hyper_dmabuf_private.event_lock, irqflags);
+}
+
+int hyper_dmabuf_import_event(hyper_dmabuf_id_t hid)
+{
+	struct hyper_dmabuf_event *e;
+	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
+
+	unsigned long irqflags;
+
+	imported_sgt_info = hyper_dmabuf_find_imported(hid);
+
+	if (!imported_sgt_info) {
+		dev_err(hyper_dmabuf_private.device,
+			"can't find imported_sgt_info in the list\n");
+		return -EINVAL;
+	}
+
+	e = kzalloc(sizeof(*e), GFP_KERNEL);
+
+	e->event_data.hdr.event_type = HYPER_DMABUF_NEW_IMPORT;
+	e->event_data.hdr.hid = hid;
+	e->event_data.data = (void*)&imported_sgt_info->priv[0];
+	e->event_data.hdr.size = 128;
+
+	spin_lock_irqsave(&hyper_dmabuf_private.event_lock, irqflags);
+
+	hyper_dmabuf_send_event_locked(e);
+
+	spin_unlock_irqrestore(&hyper_dmabuf_private.event_lock, irqflags);
+
+	dev_dbg(hyper_dmabuf_private.device,
+			"event number = %d\n", hyper_dmabuf_private.curr_num_event);
+
+	dev_dbg(hyper_dmabuf_private.device,
+			"generating events for {%d, %d, %d, %d}\n",
+			imported_sgt_info->hid.id, imported_sgt_info->hid.rng_key[0],
+			imported_sgt_info->hid.rng_key[1], imported_sgt_info->hid.rng_key[2]);
+
+	return 0;
+}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.h
new file mode 100644
index 0000000..44c4856
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.h
@@ -0,0 +1,38 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_EVENT_H__
+#define __HYPER_DMABUF_EVENT_H__
+
+#define MAX_NUMBER_OF_EVENT 1024
+
+enum hyper_dmabuf_event_type {
+	HYPER_DMABUF_NEW_IMPORT = 0x10000,
+};
+
+void hyper_dmabuf_events_release(void);
+
+int hyper_dmabuf_import_event(hyper_dmabuf_id_t hid);
+
+#endif /* __HYPER_DMABUF_EVENT_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index dfdb889..85b70db 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -87,7 +87,7 @@ static int hyper_dmabuf_send_export_msg(struct hyper_dmabuf_sgt_info *sgt_info,
 {
 	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
 	struct hyper_dmabuf_req *req;
-	int operands[12] = {0};
+	int operands[40] = {0};
 	int ret, i;
 
 	/* now create request for importer via ring */
@@ -109,7 +109,7 @@ static int hyper_dmabuf_send_export_msg(struct hyper_dmabuf_sgt_info *sgt_info,
 	}
 
 	/* driver/application specific private info, max 4x4 bytes */
-	memcpy(&operands[8], &sgt_info->priv[0], sizeof(unsigned int) * 4);
+	memcpy(&operands[8], &sgt_info->priv[0], sizeof(unsigned int) * 32);
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
@@ -121,11 +121,7 @@ static int hyper_dmabuf_send_export_msg(struct hyper_dmabuf_sgt_info *sgt_info,
 	/* composing a message to the importer */
 	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT, &operands[0]);
 
-	ret = ops->send_req(sgt_info->hyper_dmabuf_rdomain, req, false);
-
-	if(ret) {
-		dev_err(hyper_dmabuf_private.device, "error while communicating\n");
-	}
+	ret = ops->send_req(sgt_info->hyper_dmabuf_rdomain, req, true);
 
 	kfree(req);
 
@@ -141,7 +137,6 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 	struct hyper_dmabuf_pages_info *page_info;
 	struct hyper_dmabuf_sgt_info *sgt_info;
 	hyper_dmabuf_id_t hid;
-	int i;
 	int ret = 0;
 
 	if (!data) {
@@ -187,10 +182,14 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 				}
 
 				/* update private data in sgt_info with new ones */
-				memcpy(&sgt_info->priv[0], &export_remote_attr->priv[0], sizeof(unsigned int) * 4);
+				memcpy(&sgt_info->priv[0], &export_remote_attr->priv[0], sizeof(unsigned int) * 32);
+
+				/* send an export msg for updating priv in importer */
+				ret = hyper_dmabuf_send_export_msg(sgt_info, NULL);
 
-				/* TODO: need to send this private info to the importer so that those
-				 * on importer's side are also updated */
+				if (ret < 0) {
+					dev_err(hyper_dmabuf_private.device, "Failed to send a new private data\n");
+				}
 
 				dma_buf_put(dma_buf);
 				export_remote_attr->hid = hid;
@@ -280,7 +279,7 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 	INIT_LIST_HEAD(&sgt_info->va_vmapped->list);
 
 	/* copy private data to sgt_info */
-	memcpy(&sgt_info->priv[0], &export_remote_attr->priv[0], sizeof(unsigned int) * 4);
+	memcpy(&sgt_info->priv[0], &export_remote_attr->priv[0], sizeof(unsigned int) * 32);
 
 	page_info = hyper_dmabuf_ext_pgs(sgt);
 	if (!page_info) {
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
index 21fc7d0..eaef2c1 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
@@ -37,6 +37,7 @@
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_id.h"
+#include "hyper_dmabuf_event.h"
 
 extern struct hyper_dmabuf_private hyper_dmabuf_private;
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index 46cf9a4..152f9e3 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -36,6 +36,7 @@
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_remote_sync.h"
+#include "hyper_dmabuf_event.h"
 #include "hyper_dmabuf_list.h"
 
 extern struct hyper_dmabuf_private hyper_dmabuf_private;
@@ -64,10 +65,10 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_req *req,
 		 * operands5 : offset of data in the first page
 		 * operands6 : length of data in the last page
 		 * operands7 : top-level reference number for shared pages
-		 * operands8~11 : Driver-specific private data (e.g. graphic buffer's meta info)
+		 * operands8~39 : Driver-specific private data (e.g. graphic buffer's meta info)
 		 */
-		for (i=0; i < 11; i++)
-			req->operands[i] = operands[i];
+
+		memcpy(&req->operands[0], &operands[0], 40 * sizeof(int));
 		break;
 
 	case HYPER_DMABUF_NOTIFY_UNEXPORT:
@@ -136,6 +137,32 @@ void cmd_process_work(struct work_struct *work)
 		 * operands7 : top-level reference number for shared pages
 		 * operands8~11 : Driver-specific private data (e.g. graphic buffer's meta info)
 		 */
+
+		/* if nents == 0, the message only synchronizes priv data
+		 * for an existing imported_sgt_info, so don't create a new one */
+		if (req->operands[4] == 0) {
+			hyper_dmabuf_id_t exist = {req->operands[0],
+						   {req->operands[1], req->operands[2],
+						    req->operands[3]}};
+
+			imported_sgt_info = hyper_dmabuf_find_imported(exist);
+
+			if (!imported_sgt_info) {
+				dev_err(hyper_dmabuf_private.device,
+					"Can't find imported sgt_info from IMPORT_LIST\n");
+				break;
+			}
+			/* updating priv data */
+			memcpy(&imported_sgt_info->priv[0], &req->operands[8], 32 * sizeof(int));
+
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
+			/* generating import event */
+			hyper_dmabuf_import_event(imported_sgt_info->hid);
+#endif
+
+			break;
+		}
+
 		imported_sgt_info = kcalloc(1, sizeof(*imported_sgt_info), GFP_KERNEL);
 
 		if (!imported_sgt_info) {
@@ -163,12 +190,17 @@ void cmd_process_work(struct work_struct *work)
 		dev_dbg(hyper_dmabuf_private.device, "\tlast len %d\n", req->operands[6]);
 		dev_dbg(hyper_dmabuf_private.device, "\tgrefid %d\n", req->operands[7]);
 
-		for (i=0; i<4; i++)
-			imported_sgt_info->private[i] = req->operands[8+i];
+		memcpy(&imported_sgt_info->priv[0], &req->operands[8], 32 * sizeof(int));
 
 		imported_sgt_info->valid = 1;
 		hyper_dmabuf_register_imported(imported_sgt_info);
-	break;
+
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
+		/* generating import event */
+		hyper_dmabuf_import_event(imported_sgt_info->hid);
+#endif
+
+		break;
 
 	case HYPER_DMABUF_OPS_TO_REMOTE:
 		/* notifying dmabuf map/unmap to importer (probably not needed) */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
index 636d6f1..7464273 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -25,7 +25,7 @@
 #ifndef __HYPER_DMABUF_MSG_H__
 #define __HYPER_DMABUF_MSG_H__
 
-#define MAX_NUMBER_OF_OPERANDS 13
+#define MAX_NUMBER_OF_OPERANDS 40
 
 struct hyper_dmabuf_req {
 	unsigned int request_id;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
index a1d3ec6..f01f535 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -90,7 +90,7 @@ struct hyper_dmabuf_sgt_info {
 	 * uses releases hyper_dmabuf device
 	 */
 	struct file *filp;
-	int priv[4]; /* device specific info (e.g. image's meta info?) */
+	int priv[32]; /* device specific info (e.g. image's meta info?) */
 };
 
 /* Importer store references (before mapping) on shared pages
@@ -110,7 +110,7 @@ struct hyper_dmabuf_imported_sgt_info {
 	void *refs_info;
 	bool valid;
 	int num_importers;
-	int private[4]; /* device specific info (e.g. image's meta info?) */
+	int priv[32]; /* device specific info (e.g. image's meta info?) */
 };
 
 #endif /* __HYPER_DMABUF_STRUCT_H__ */
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index 9689346..370a07d 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -32,6 +32,7 @@
 #include <linux/slab.h>
 #include <linux/workqueue.h>
 #include <linux/delay.h>
+#include <linux/time.h>
 #include <xen/grant_table.h>
 #include <xen/events.h>
 #include <xen/xenbus.h>
@@ -474,6 +475,8 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 			  back_ring_isr, 0,
 			  NULL, (void*)ring_info);
 
+	return ret;
+
 fail_others:
 	kfree(map_ops);
 
@@ -545,6 +548,10 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
 	struct hyper_dmabuf_req *new_req;
 	struct xen_comm_tx_ring_info *ring_info;
 	int notify;
+
+	struct timeval tv_start, tv_end;
+	struct timeval tv_diff;
+
 	int timeout = 1000;
 
 	/* find a ring info for the channel */
@@ -559,7 +566,11 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
 
 	ring = &ring_info->ring_front;
 
+	do_gettimeofday(&tv_start);
+
 	while (RING_FULL(ring)) {
+		dev_dbg(hyper_dmabuf_private.device, "RING_FULL\n");
+
 		if (timeout == 0) {
 			dev_err(hyper_dmabuf_private.device,
 				"Timeout while waiting for an entry in the ring\n");
@@ -609,6 +620,21 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
 		}
 
 		mutex_unlock(&ring_info->lock);
+		do_gettimeofday(&tv_end);
+
+		/* checking time duration for round-trip of a request for debugging */
+		if (tv_end.tv_usec >= tv_start.tv_usec) {
+			tv_diff.tv_sec = tv_end.tv_sec-tv_start.tv_sec;
+			tv_diff.tv_usec = tv_end.tv_usec-tv_start.tv_usec;
+		} else {
+			tv_diff.tv_sec = tv_end.tv_sec-tv_start.tv_sec-1;
+			tv_diff.tv_usec = tv_end.tv_usec+1000000-tv_start.tv_usec;
+		}
+
+		if (tv_diff.tv_sec != 0 || tv_diff.tv_usec > 16000)
+			dev_dbg(hyper_dmabuf_private.device, "send_req:time diff: %ld sec, %ld usec\n",
+				tv_diff.tv_sec, tv_diff.tv_usec);
+
 		return req_pending.status;
 	}
 
@@ -657,6 +683,10 @@ static irqreturn_t back_ring_isr(int irq, void *info)
 							sizeof(resp));
 				ring->rsp_prod_pvt++;
 
+				dev_dbg(hyper_dmabuf_private.device,
+					"sending response to exporter for request id:%d\n",
+					resp.response_id);
+
 				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(ring, notify);
 
 				if (notify) {
@@ -696,8 +726,13 @@ static irqreturn_t front_ring_isr(int irq, void *info)
 			/* update pending request's status with what is
 			 * in the response
 			 */
-			if (req_pending.request_id == resp->response_id)
+
+			dev_dbg(hyper_dmabuf_private.device,
+				"getting response from importer\n");
+
+			if (req_pending.request_id == resp->response_id) {
 				req_pending.status = resp->status;
+			}
 
 			if (resp->status == HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP) {
 				/* parsing response */
diff --git a/include/uapi/xen/hyper_dmabuf.h b/include/uapi/xen/hyper_dmabuf.h
index a2d22d0..3a6172e 100644
--- a/include/uapi/xen/hyper_dmabuf.h
+++ b/include/uapi/xen/hyper_dmabuf.h
@@ -30,6 +30,17 @@ typedef struct {
         int rng_key[3]; /* 12bytes long random number */
 } hyper_dmabuf_id_t;
 
+struct hyper_dmabuf_event_hdr {
+	int event_type; /* one type only for now - new import */
+	hyper_dmabuf_id_t hid; /* hyper_dmabuf_id of specific hyper_dmabuf */
+	size_t size; /* size of data */
+};
+
+struct hyper_dmabuf_event_data {
+	struct hyper_dmabuf_event_hdr hdr;
+	void *data; /* private data */
+};
+
 #define IOCTL_HYPER_DMABUF_TX_CH_SETUP \
 _IOC(_IOC_NONE, 'G', 0, sizeof(struct ioctl_hyper_dmabuf_tx_ch_setup))
 struct ioctl_hyper_dmabuf_tx_ch_setup {
@@ -56,7 +67,7 @@ struct ioctl_hyper_dmabuf_export_remote {
 	int remote_domain;
 	/* exported dma buf id */
 	hyper_dmabuf_id_t hid;
-	int priv[4];
+	int priv[32];
 };
 
 #define IOCTL_HYPER_DMABUF_EXPORT_FD \
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread
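
To illustrate how userspace is expected to consume the new event
interface, here is a minimal sketch. The device node path is an
assumption (it is not shown in this excerpt), error handling is
trimmed, and the driver restricts reads to processes with
CAP_DAC_OVERRIDE. Each record on the stream is a
struct hyper_dmabuf_event_hdr immediately followed by hdr.size bytes
of exporter-provided private data (128 bytes in this series).

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <poll.h>
#include <xen/hyper_dmabuf.h>

int main(void)
{
	struct {
		struct hyper_dmabuf_event_hdr hdr;
		char data[128];	/* priv payload follows the header */
	} ev;
	/* assumed device node; the actual name depends on driver setup */
	int fd = open("/dev/hyper_dmabuf", O_RDONLY);
	struct pollfd pfd = { .fd = fd, .events = POLLIN };
	ssize_t n;

	if (fd < 0)
		return 1;

	while (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN)) {
		n = read(fd, &ev, sizeof(ev));
		if (n < (ssize_t)sizeof(ev.hdr))
			continue;

		printf("new import: hid {%d, %d, %d, %d}, %zu bytes of priv data\n",
		       ev.hdr.hid.id, ev.hdr.hid.rng_key[0],
		       ev.hdr.hid.rng_key[1], ev.hdr.hid.rng_key[2],
		       ev.hdr.size);
	}

	close(fd);
	return 0;
}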

* [RFC PATCH 46/60] hyper_dmabuf: delay auto initialization of comm_env
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:30   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

Comm env initialization is now scheduled to run once
xenstore is initialized. This scheduling is done in
the driver's init routine.

Also adds a self-rescheduling routine that monitors
any new tx channel setup from other domains and
automatically configures a matching rx channel
(every 10 seconds).

The only limitation is that it currently scans domain
IDs 1 to 10. This range can be increased if needed.

With this patch, the comm channel setup IOCTL no
longer has to be called on the importer side. For
example, the ioctl call in init_hyper_dmabuf can be
removed from vmdisplay.
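
As a reference for the deferral mechanism described above, the
pattern boils down to the following minimal, self-contained sketch
(all names here are illustrative, not taken from the driver):

#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/jiffies.h>

static void init_retry_fn(struct work_struct *unused);
static DECLARE_DELAYED_WORK(init_retry_work, init_retry_fn);

static bool precondition_met;	/* stand-in for xenstored_ready */

static void init_retry_fn(struct work_struct *unused)
{
	if (!precondition_met) {
		/* not ready yet: re-arm and try again in 500 ms */
		schedule_delayed_work(&init_retry_work,
				      msecs_to_jiffies(500));
		return;
	}
	pr_info("dependency ready, completing deferred init\n");
}

static int __init demo_init(void)
{
	/* kick off the first attempt without blocking module load */
	schedule_delayed_work(&init_retry_work, 0);
	return 0;
}

static void __exit demo_exit(void)
{
	/* a self-rescheduling work item must be stopped on unload */
	cancel_delayed_work_sync(&init_retry_work);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");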

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/Kconfig                   |  10 ++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        |  64 +++++----
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        |   3 +
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   | 153 ++++++++++++++++++++-
 4 files changed, 199 insertions(+), 31 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/Kconfig b/drivers/xen/hyper_dmabuf/Kconfig
index eb1b637..5efcd44 100644
--- a/drivers/xen/hyper_dmabuf/Kconfig
+++ b/drivers/xen/hyper_dmabuf/Kconfig
@@ -29,4 +29,14 @@ config HYPER_DMABUF_EVENT_GEN
 	  shared DMA-BUF is available. Events in the list can be retrieved by
 	  read operation.
 
+config HYPER_DMABUF_XEN_AUTO_RX_CH_ADD
+	bool "Enable automatic rx-ch add with 10 secs interval"
+	default y
+	depends on HYPER_DMABUF && HYPER_DMABUF_XEN
+	help
+	  If enabled, the driver reads a node in xenstore every 10 seconds
+	  to check whether any tx comm ch has been configured by another
+	  domain, and automatically initializes a matching rx comm ch for
+	  every existing tx comm ch.
+
 endmenu
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 2845224..005677d 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -67,27 +67,6 @@ int hyper_dmabuf_open(struct inode *inode, struct file *filp)
 	if (filp->f_flags & O_EXCL)
 		return -EBUSY;
 
-	/*
-	 * Initialize backend if needed,
-	 * use mutex to prevent race conditions when
-	 * two userspace apps will open device at the same time
-	 */
-	mutex_lock(&hyper_dmabuf_private.lock);
-
-	if (!hyper_dmabuf_private.backend_initialized) {
-		hyper_dmabuf_private.domid = hyper_dmabuf_private.backend_ops->get_vm_id();
-
-		ret = hyper_dmabuf_private.backend_ops->init_comm_env();
-	        if (ret < 0) {
-			dev_err(hyper_dmabuf_private.device,
-				"failed to initiailize hypervisor-specific comm env\n");
-		} else {
-			hyper_dmabuf_private.backend_initialized = true;
-		}
-	}
-
-	mutex_unlock(&hyper_dmabuf_private.lock);
-
 	return ret;
 }
 
@@ -260,17 +239,22 @@ static int __init hyper_dmabuf_drv_init(void)
 		return ret;
 	}
 
+/* currently only supports XEN hypervisor */
+
 #ifdef CONFIG_HYPER_DMABUF_XEN
 	hyper_dmabuf_private.backend_ops = &xen_backend_ops;
+#else
+	hyper_dmabuf_private.backend_ops = NULL;
+	printk(KERN_ERR "hyper_dmabuf drv currently supports XEN only.\n");
 #endif
-	/*
-	 * Defer backend setup to first open call.
-	 * Due to fact that some hypervisors eg. Xen, may have dependencies
-	 * to userspace daemons like xenstored, in that case all xenstore
-	 * calls done from kernel will block until that deamon will be
-	 * started, in case where module is built in that will block entire
-	 * kernel initialization.
-	 */
+
+	if (hyper_dmabuf_private.backend_ops == NULL) {
+		printk(KERN_ERR "Hyper_dmabuf: failed to load - no backend found\n");
+		return -ENODEV;
+	}
+
+	mutex_lock(&hyper_dmabuf_private.lock);
+
 	hyper_dmabuf_private.backend_initialized = false;
 
 	dev_info(hyper_dmabuf_private.device,
@@ -301,6 +285,22 @@ static int __init hyper_dmabuf_drv_init(void)
 	init_waitqueue_head(&hyper_dmabuf_private.event_wait);
 
 	hyper_dmabuf_private.curr_num_event = 0;
+	hyper_dmabuf_private.exited = false;
+
+	hyper_dmabuf_private.domid = hyper_dmabuf_private.backend_ops->get_vm_id();
+
+	ret = hyper_dmabuf_private.backend_ops->init_comm_env();
+	if (ret < 0) {
+		dev_dbg(hyper_dmabuf_private.device,
+			"failed to initialize comm-env but it will re-attempt.\n");
+	} else {
+		hyper_dmabuf_private.backend_initialized = true;
+	}
+
+	mutex_unlock(&hyper_dmabuf_private.lock);
+
+	dev_info(hyper_dmabuf_private.device,
+		"Finishing up initialization of hyper_dmabuf drv\n");
 
 	/* interrupt for comm should be registered here: */
 	return ret;
@@ -312,6 +312,8 @@ static void hyper_dmabuf_drv_exit(void)
 	hyper_dmabuf_unregister_sysfs(hyper_dmabuf_private.device);
 #endif
 
+	mutex_lock(&hyper_dmabuf_private.lock);
+
 	/* hash tables for export/import entries and ring_infos */
 	hyper_dmabuf_table_destroy();
 
@@ -325,6 +327,10 @@ static void hyper_dmabuf_drv_exit(void)
 	if (hyper_dmabuf_private.id_queue)
 		destroy_reusable_list();
 
+	hyper_dmabuf_private.exited = true;
+
+	mutex_unlock(&hyper_dmabuf_private.lock);
+
 	dev_info(hyper_dmabuf_private.device,
 		 "hyper_dmabuf driver: Exiting\n");
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
index 08e8ed7..a4acdd9f 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -64,6 +64,9 @@ struct hyper_dmabuf_private {
 	struct mutex event_read_lock;
 
 	int curr_num_event;
+
+	/* indicate whether the driver is unloaded */
+	bool exited;
 };
 
 struct list_reusable_id {
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index 370a07d..920ecf4 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -47,6 +47,14 @@ struct hyper_dmabuf_req req_pending = {0};
 
 extern struct hyper_dmabuf_private hyper_dmabuf_private;
 
+extern int xenstored_ready;
+
+static void xen_get_domid_delayed(struct work_struct *unused);
+static void xen_init_comm_env_delayed(struct work_struct *unused);
+
+static DECLARE_DELAYED_WORK(get_vm_id_work, xen_get_domid_delayed);
+static DECLARE_DELAYED_WORK(xen_init_comm_env_work, xen_init_comm_env_delayed);
+
 /* Creates entry in xen store that will keep details of all
  * exporter rings created by this domain
  */
@@ -54,7 +62,7 @@ static int xen_comm_setup_data_dir(void)
 {
 	char buf[255];
 
-	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf", hyper_dmabuf_xen_get_domid());
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf", hyper_dmabuf_private.domid);
 	return xenbus_mkdir(XBT_NIL, buf, "");
 }
 
@@ -68,7 +76,7 @@ static int xen_comm_destroy_data_dir(void)
 {
 	char buf[255];
 
-	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf", hyper_dmabuf_xen_get_domid());
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf", hyper_dmabuf_private.domid);
 	return xenbus_rm(XBT_NIL, buf, "");
 }
 
@@ -131,16 +139,58 @@ static int xen_comm_get_ring_details(int domid, int rdomid, int *grefid, int *po
 	return (ret <= 0 ? 1 : 0);
 }
 
+void xen_get_domid_delayed(struct work_struct *unused)
+{
+	struct xenbus_transaction xbt;
+	int domid, ret;
+
+	/* schedule another attempt if the driver is still running
+	 * and xenstore has not been initialized yet */
+	if (hyper_dmabuf_private.exited == false &&
+	    likely(xenstored_ready == 0)) {
+		dev_dbg(hyper_dmabuf_private.device,
+			"Xenstore is not quite ready yet. Will retry it in 500ms\n");
+		schedule_delayed_work(&get_vm_id_work, msecs_to_jiffies(500));
+	} else {
+		xenbus_transaction_start(&xbt);
+
+		ret = xenbus_scanf(xbt, "domid", "", "%d", &domid);
+
+		if (ret <= 0)
+			domid = -1;
+
+		xenbus_transaction_end(xbt, 0);
+
+		/* retry since -1 is not a valid domain id
+		 * (but only if the driver is still running) */
+		if (hyper_dmabuf_private.exited == false && unlikely(domid == -1)) {
+			dev_dbg(hyper_dmabuf_private.device,
+				"domid==-1 is invalid. Will retry it in 500ms\n");
+			schedule_delayed_work(&get_vm_id_work, msecs_to_jiffies(500));
+		} else {
+			dev_info(hyper_dmabuf_private.device,
+				"Successfully retrieved domid from Xenstore:%d\n", domid);
+			hyper_dmabuf_private.domid = domid;
+		}
+	}
+}
+
 int hyper_dmabuf_xen_get_domid(void)
 {
 	struct xenbus_transaction xbt;
 	int domid;
 
+	if (unlikely(xenstored_ready == 0)) {
+		xen_get_domid_delayed(NULL);
+		return -1;
+	}
+
         xenbus_transaction_start(&xbt);
 
         if (!xenbus_scanf(xbt, "domid","", "%d", &domid)) {
 		domid = -1;
         }
+
         xenbus_transaction_end(xbt, 0);
 
 	return domid;
@@ -193,6 +243,8 @@ static void remote_dom_exporter_watch_cb(struct xenbus_watch *watch,
 	 * it means that remote domain has setup it for us and we should connect
 	 * to it.
 	 */
+
+
 	ret = xen_comm_get_ring_details(hyper_dmabuf_xen_get_domid(), rdom,
 					&grefid, &port);
 
@@ -389,6 +441,7 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 		return 0;
 	}
 
+
 	ret = xen_comm_get_ring_details(hyper_dmabuf_xen_get_domid(), domid,
 					&rx_gref, &rx_port);
 
@@ -519,12 +572,108 @@ void hyper_dmabuf_xen_cleanup_rx_rbuf(int domid)
 	FRONT_RING_INIT(&(tx_ring_info->ring_front), tx_ring_info->ring_front.sring, PAGE_SIZE);
 }
 
+#ifdef CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD
+
+static void xen_rx_ch_add_delayed(struct work_struct *unused);
+
+static DECLARE_DELAYED_WORK(xen_rx_ch_auto_add_work, xen_rx_ch_add_delayed);
+
+#define DOMID_SCAN_START	1	/*  domid = 1 */
+#define DOMID_SCAN_END		10	/* domid = 10 */
+
+static void xen_rx_ch_add_delayed(struct work_struct *unused)
+{
+	int ret;
+	char buf[128];
+	int i, dummy;
+
+	dev_dbg(hyper_dmabuf_private.device,
+		"Scanning for new tx channels coming from other domains\n");
+
+	/* check other domains and schedule another work if driver
+	 * is still running and backend is valid
+	 */
+	if (hyper_dmabuf_private.exited == false &&
+	    hyper_dmabuf_private.backend_initialized == true) {
+		for (i = DOMID_SCAN_START; i < DOMID_SCAN_END + 1; i++) {
+			if (i == hyper_dmabuf_private.domid)
+				continue;
+
+			sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d", i,
+				hyper_dmabuf_private.domid);
+
+			ret = xenbus_scanf(XBT_NIL, buf, "port", "%d", &dummy);
+
+			if (ret > 0) {
+				if (xen_comm_find_rx_ring(i) != NULL)
+					continue;
+
+				ret = hyper_dmabuf_xen_init_rx_rbuf(i);
+
+				if (!ret)
+					dev_info(hyper_dmabuf_private.device,
+						 "Finishing up setting up rx channel for domain %d\n", i);
+			}
+		}
+
+		/* check every 10 seconds */
+		schedule_delayed_work(&xen_rx_ch_auto_add_work, msecs_to_jiffies(10000));
+	}
+}
+
+#endif /* CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD */
+
+void xen_init_comm_env_delayed(struct work_struct *unused)
+{
+	int ret;
+
+	/* schedule another work item if the driver is still running
+	 * and xenstore hasn't been initialized or the domid hasn't
+	 * been correctly retrieved yet */
+	if (hyper_dmabuf_private.exited == false &&
+	    likely(xenstored_ready == 0 ||
+	    hyper_dmabuf_private.domid == -1)) {
+		dev_dbg(hyper_dmabuf_private.device,
+			"Xenstore is not ready yet. Re-try this again in 500ms\n");
+		schedule_delayed_work(&xen_init_comm_env_work, msecs_to_jiffies(500));
+	} else {
+		ret = xen_comm_setup_data_dir();
+		if (ret < 0) {
+			dev_err(hyper_dmabuf_private.device,
+				"Failed to create data dir in Xenstore\n");
+		} else {
+			dev_info(hyper_dmabuf_private.device,
+				"Successfully finished comm env initialization\n");
+			hyper_dmabuf_private.backend_initialized = true;
+
+#ifdef CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD
+			xen_rx_ch_add_delayed(NULL);
+#endif /* CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD */
+		}
+	}
+}
+
 int hyper_dmabuf_xen_init_comm_env(void)
 {
 	int ret;
 
 	xen_comm_ring_table_init();
+
+	if (unlikely(xenstored_ready == 0 || hyper_dmabuf_private.domid == -1)) {
+		xen_init_comm_env_delayed(NULL);
+		return -1;
+	}
+
 	ret = xen_comm_setup_data_dir();
+	if (ret < 0) {
+		dev_err(hyper_dmabuf_private.device,
+			"Failed to create data dir in Xenstore\n");
+	} else {
+		dev_info(hyper_dmabuf_private.device,
+			"Successfully finished comm env initialization\n");
+
+		hyper_dmabuf_private.backend_initialized = true;
+	}
 
 	return ret;
 }
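
The retry logic above boils down to a standard kernel pattern: a delayed
work item that re-arms itself until its dependency becomes ready or the
driver unloads. Below is a minimal, self-contained sketch of that pattern
(illustrative only, not part of the patch; 'dependency_ready' stands in
for checks such as xenstored_ready):

#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/jiffies.h>

static bool exited;
static bool dependency_ready;	/* e.g. set once xenstore is usable */

static void retry_fn(struct work_struct *work);
static DECLARE_DELAYED_WORK(retry_work, retry_fn);

static void retry_fn(struct work_struct *work)
{
	if (!exited && !dependency_ready) {
		/* not ready yet: re-arm and check again in 500ms */
		schedule_delayed_work(&retry_work, msecs_to_jiffies(500));
		return;
	}
	pr_info("dependency ready, finishing initialization\n");
}

static int __init retry_demo_init(void)
{
	schedule_delayed_work(&retry_work, 0);
	return 0;
}

static void __exit retry_demo_exit(void)
{
	exited = true;
	cancel_delayed_work_sync(&retry_work);
}

module_init(retry_demo_init);
module_exit(retry_demo_exit);
MODULE_LICENSE("GPL");
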
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 47/60] hyper_dmabuf: fix issues with event-polling
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:30   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

This patch fixes several defects in event handling
including:

1. Imported sgt info and exported sgt info now have
   buffer for private data (priv) with variable size

2. User input to export_remote_ioctl now contains sz_priv,
   which specifies the size of the private data (e.g. metadata)

3. Increased the max size of operands to 64 * sizeof(int)
   to accommodate the maximum size of private data (see the
   sketch after this list)

4. Initialize mutexes and spinlock

5. Changed the max event queue depth to 32 to prevent user
   apps from displaying too many outdated frames.

6. Free the oldest event when the event queue is full to
   prevent overflow.
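
To make the new wire layout concrete, here is a minimal user-space
sketch (illustrative only, not part of the patch) of how a request's
operand array is packed after this change; the constants mirror
MAX_NUMBER_OF_OPERANDS from hyper_dmabuf_msg.h and MAX_SIZE_PRIV_DATA
from the uapi header:

#include <stdio.h>
#include <string.h>

#define MAX_NUMBER_OF_OPERANDS 64
#define MAX_SIZE_PRIV_DATA 192

int main(void)
{
	int operands[MAX_NUMBER_OF_OPERANDS] = {0};
	const char priv[] = "example buffer metadata";
	size_t sz_priv = sizeof(priv);

	if (sz_priv > MAX_SIZE_PRIV_DATA)
		sz_priv = MAX_SIZE_PRIV_DATA;	/* truncated, as in the ioctl */

	/* operands 0..7 carry hid, nents, offsets, gref, etc. (omitted) */
	operands[8] = (int)sz_priv;		/* size of private data */
	memcpy(&operands[9], priv, sz_priv);	/* private data itself */

	/* only 9 * sizeof(int) + sz_priv bytes need to cross the ring */
	printf("payload bytes: %zu\n", 9 * sizeof(int) + sz_priv);
	return 0;
}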

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c      | 23 ++++++---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c    |  8 +--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_event.h    |  2 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c    | 64 ++++++++++++++++++++++--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c      | 42 +++++++++++++---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h      |  2 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c |  1 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h   |  9 +++-
 include/uapi/xen/hyper_dmabuf.h                  |  7 ++-
 9 files changed, 131 insertions(+), 27 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 005677d..87ea6ca 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -74,9 +74,6 @@ int hyper_dmabuf_release(struct inode *inode, struct file *filp)
 {
 	hyper_dmabuf_foreach_exported(hyper_dmabuf_emergency_release, filp);
 
-	/* clean up event queue */
-	hyper_dmabuf_events_release();
-
 	return 0;
 }
 
@@ -98,12 +95,18 @@ ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
 	int ret;
 
 	/* only root can read events */
-	if (!capable(CAP_DAC_OVERRIDE))
+	if (!capable(CAP_DAC_OVERRIDE)) {
+		dev_err(hyper_dmabuf_private.device,
+			"Only root can read events\n");
 		return -EFAULT;
+	}
 
 	/* make sure user buffer can be written */
-	if (!access_ok(VERIFY_WRITE, buffer, count))
+	if (!access_ok(VERIFY_WRITE, buffer, count)) {
+		dev_err(hyper_dmabuf_private.device,
+			"User buffer can't be written.\n");
 		return -EFAULT;
+	}
 
 	ret = mutex_lock_interruptible(&hyper_dmabuf_private.event_read_lock);
 	if (ret)
@@ -132,7 +135,7 @@ ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
 			ret = wait_event_interruptible(hyper_dmabuf_private.event_wait,
 						       !list_empty(&hyper_dmabuf_private.event_list));
 
-			if (ret >= 0)
+			if (ret == 0)
 				ret = mutex_lock_interruptible(&hyper_dmabuf_private.event_read_lock);
 
 			if (ret)
@@ -174,13 +177,14 @@ ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
 			}
 
 			ret += e->event_data.hdr.size;
+			hyper_dmabuf_private.curr_num_event--;
 			kfree(e);
 		}
 	}
 
 	mutex_unlock(&hyper_dmabuf_private.event_read_lock);
 
-	return 0;
+	return ret;
 }
 
 static struct file_operations hyper_dmabuf_driver_fops =
@@ -233,6 +237,8 @@ static int __init hyper_dmabuf_drv_init(void)
 	printk( KERN_NOTICE "hyper_dmabuf_starting: Initialization started\n");
 
 	mutex_init(&hyper_dmabuf_private.lock);
+	mutex_init(&hyper_dmabuf_private.event_read_lock);
+	spin_lock_init(&hyper_dmabuf_private.event_lock);
 
 	ret = register_device();
 	if (ret < 0) {
@@ -329,6 +335,9 @@ static void hyper_dmabuf_drv_exit(void)
 
 	hyper_dmabuf_private.exited = true;
 
+	/* clean up event queue */
+	hyper_dmabuf_events_release();
+
 	mutex_unlock(&hyper_dmabuf_private.lock);
 
 	dev_info(hyper_dmabuf_private.device,
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
index be70e54..8998a7d 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
@@ -49,11 +49,12 @@ static void hyper_dmabuf_send_event_locked(struct hyper_dmabuf_event *e)
 
 	/* check current number of event then if it hits the max num allowed
 	 * then remove the oldest event in the list */
-	if (hyper_dmabuf_private.curr_num_event > MAX_NUMBER_OF_EVENT - 1) {
+	if (hyper_dmabuf_private.curr_num_event > MAX_DEPTH_EVENT_QUEUE - 1) {
 		oldest = list_first_entry(&hyper_dmabuf_private.event_list,
 				struct hyper_dmabuf_event, link);
 		list_del(&oldest->link);
 		hyper_dmabuf_private.curr_num_event--;
+		kfree(oldest);
 	}
 
 	list_add_tail(&e->link,
@@ -74,6 +75,7 @@ void hyper_dmabuf_events_release()
 	list_for_each_entry_safe(e, et, &hyper_dmabuf_private.event_list,
 				 link) {
 		list_del(&e->link);
+		kfree(e);
 		hyper_dmabuf_private.curr_num_event--;
 	}
 
@@ -104,8 +106,8 @@ int hyper_dmabuf_import_event(hyper_dmabuf_id_t hid)
 
 	e->event_data.hdr.event_type = HYPER_DMABUF_NEW_IMPORT;
 	e->event_data.hdr.hid = hid;
-	e->event_data.data = (void*)&imported_sgt_info->priv[0];
-	e->event_data.hdr.size = 128;
+	e->event_data.data = (void*)imported_sgt_info->priv;
+	e->event_data.hdr.size = imported_sgt_info->sz_priv;
 
 	spin_lock_irqsave(&hyper_dmabuf_private.event_lock, irqflags);
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.h
index 44c4856..50db04f 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.h
@@ -25,7 +25,7 @@
 #ifndef __HYPER_DMABUF_EVENT_H__
 #define __HYPER_DMABUF_EVENT_H__
 
-#define MAX_NUMBER_OF_EVENT 1024
+#define MAX_DEPTH_EVENT_QUEUE 32
 
 enum hyper_dmabuf_event_type {
 	HYPER_DMABUF_NEW_IMPORT = 0x10000,
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 85b70db..06f95ca 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -87,7 +87,7 @@ static int hyper_dmabuf_send_export_msg(struct hyper_dmabuf_sgt_info *sgt_info,
 {
 	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
 	struct hyper_dmabuf_req *req;
-	int operands[40] = {0};
+	int operands[MAX_NUMBER_OF_OPERANDS] = {0};
 	int ret, i;
 
 	/* now create request for importer via ring */
@@ -108,8 +108,10 @@ static int hyper_dmabuf_send_export_msg(struct hyper_dmabuf_sgt_info *sgt_info,
 		}
 	}
 
-	/* driver/application specific private info, max 4x4 bytes */
-	memcpy(&operands[8], &sgt_info->priv[0], sizeof(unsigned int) * 32);
+	operands[8] = sgt_info->sz_priv;
+
+	/* driver/application specific private info */
+	memcpy(&operands[9], sgt_info->priv, operands[8]);
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
@@ -181,8 +183,32 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 					sgt_info->unexport_scheduled = 0;
 				}
 
+				/* if there's any change in the size of the private
+				 * data, reallocate space for it with the new size */
+				if (export_remote_attr->sz_priv != sgt_info->sz_priv) {
+					kfree(sgt_info->priv);
+
+					/* truncating size */
+					if (export_remote_attr->sz_priv > MAX_SIZE_PRIV_DATA) {
+						sgt_info->sz_priv = MAX_SIZE_PRIV_DATA;
+					} else {
+						sgt_info->sz_priv = export_remote_attr->sz_priv;
+					}
+
+					sgt_info->priv = kcalloc(1, sgt_info->sz_priv, GFP_KERNEL);
+
+					if(!sgt_info->priv) {
+						dev_err(hyper_dmabuf_private.device,
+							"Can't reallocate priv because there's no more space left\n");
+						hyper_dmabuf_remove_exported(sgt_info->hid);
+						hyper_dmabuf_cleanup_sgt_info(sgt_info, true);
+						kfree(sgt_info);
+						return -ENOMEM;
+					}
+				}
+
 				/* update private data in sgt_info with new ones */
-				memcpy(&sgt_info->priv[0], &export_remote_attr->priv[0], sizeof(unsigned int) * 32);
+				copy_from_user(sgt_info->priv, export_remote_attr->priv, sgt_info->sz_priv);
 
 				/* send an export msg for updating priv in importer */
 				ret = hyper_dmabuf_send_export_msg(sgt_info, NULL);
@@ -222,6 +248,26 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 		goto fail_sgt_info_creation;
 	}
 
+	/* possible truncation */
+	if (export_remote_attr->sz_priv > MAX_SIZE_PRIV_DATA) {
+		sgt_info->sz_priv = MAX_SIZE_PRIV_DATA;
+	} else {
+		sgt_info->sz_priv = export_remote_attr->sz_priv;
+	}
+
+	/* create a buffer for the buffer's private data */
+	if(sgt_info->sz_priv != 0) {
+		sgt_info->priv = kcalloc(1, sgt_info->sz_priv, GFP_KERNEL);
+
+		if(!sgt_info->priv) {
+			dev_err(hyper_dmabuf_private.device, "no more space left\n");
+			ret = -ENOMEM;
+			goto fail_priv_creation;
+		}
+	} else {
+		dev_err(hyper_dmabuf_private.device, "size is 0\n");
+	}
+
 	sgt_info->hid = hyper_dmabuf_get_hid();
 
 	/* no more exported dmabuf allowed */
@@ -279,7 +325,7 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 	INIT_LIST_HEAD(&sgt_info->va_vmapped->list);
 
 	/* copy private data to sgt_info */
-	memcpy(&sgt_info->priv[0], &export_remote_attr->priv[0], sizeof(unsigned int) * 32);
+	copy_from_user(sgt_info->priv, export_remote_attr->priv, sgt_info->sz_priv);
 
 	page_info = hyper_dmabuf_ext_pgs(sgt);
 	if (!page_info) {
@@ -329,6 +375,10 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 
 fail_map_active_attached:
 	kfree(sgt_info->active_sgts);
+	kfree(sgt_info->priv);
+
+fail_priv_creation:
+	kfree(sgt_info);
 
 fail_map_active_sgts:
 fail_sgt_info_creation:
@@ -553,6 +603,10 @@ static void hyper_dmabuf_delayed_unexport(struct work_struct *work)
 		hyper_dmabuf_remove_exported(sgt_info->hid);
 		/* register hyper_dmabuf_id to the list for reuse */
 		store_reusable_hid(sgt_info->hid);
+
+		if (sgt_info->sz_priv > 0 && sgt_info->priv)
+			kfree(sgt_info->priv);
+
 		kfree(sgt_info);
 	}
 }
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index 152f9e3..ec37c3b 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -65,10 +65,11 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_req *req,
 		 * operands5 : offset of data in the first page
 		 * operands6 : length of data in the last page
 		 * operands7 : top-level reference number for shared pages
-		 * operands8~39 : Driver-specific private data (e.g. graphic buffer's meta info)
+		 * operands8 : size of private data (from operands9)
+		 * operands9 ~ : Driver-specific private data (e.g. graphic buffer's meta info)
 		 */
 
-		memcpy(&req->operands[0], &operands[0], 40 * sizeof(int));
+		memcpy(&req->operands[0], &operands[0], 9 * sizeof(int) + operands[8]);
 		break;
 
 	case HYPER_DMABUF_NOTIFY_UNEXPORT:
@@ -135,7 +136,8 @@ void cmd_process_work(struct work_struct *work)
 		 * operands5 : offset of data in the first page
 		 * operands6 : length of data in the last page
 		 * operands7 : top-level reference number for shared pages
-		 * operands8~11 : Driver-specific private data (e.g. graphic buffer's meta info)
+		 * operands8 : size of private data (from operands9)
+		 * operands9 ~ : Driver-specific private data (e.g. graphic buffer's meta info)
 		 */
 
 		/* if nents == 0, it means it is a message only for priv synchronization
@@ -152,8 +154,25 @@ void cmd_process_work(struct work_struct *work)
 					"Can't find imported sgt_info from IMPORT_LIST\n");
 				break;
 			}
-			/* updating pri data */
-			memcpy(&imported_sgt_info->priv[0], &req->operands[8], 32 * sizeof(int));
+
+			/* if size of new private data is different,
+			 * we reallocate it. */
+			if (imported_sgt_info->sz_priv != req->operands[8]) {
+				kfree(imported_sgt_info->priv);
+				imported_sgt_info->sz_priv = req->operands[8];
+				imported_sgt_info->priv = kcalloc(1, req->operands[8], GFP_KERNEL);
+				if (!imported_sgt_info->priv) {
+					dev_err(hyper_dmabuf_private.device,
+						"Fail to allocate priv\n");
+
+					/* set it invalid */
+					imported_sgt_info->valid = 0;
+					break;
+				}
+			}
+
+			/* updating priv data */
+			memcpy(imported_sgt_info->priv, &req->operands[9], req->operands[8]);
 
 #ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
 			/* generating import event */
@@ -171,6 +190,17 @@ void cmd_process_work(struct work_struct *work)
 			break;
 		}
 
+		imported_sgt_info->sz_priv = req->operands[8];
+		imported_sgt_info->priv = kcalloc(1, req->operands[8], GFP_KERNEL);
+
+		if (!imported_sgt_info->priv) {
+			dev_err(hyper_dmabuf_private.device,
+				"Fail to allocate priv\n");
+
+			kfree(imported_sgt_info);
+			break;
+		}
+
 		imported_sgt_info->hid.id = req->operands[0];
 
 		for (i=0; i<3; i++)
@@ -190,7 +220,7 @@ void cmd_process_work(struct work_struct *work)
 		dev_dbg(hyper_dmabuf_private.device, "\tlast len %d\n", req->operands[6]);
 		dev_dbg(hyper_dmabuf_private.device, "\tgrefid %d\n", req->operands[7]);
 
-		memcpy(&imported_sgt_info->priv[0], &req->operands[8], 32 * sizeof(int));
+		memcpy(imported_sgt_info->priv, &req->operands[9], req->operands[8]);
 
 		imported_sgt_info->valid = 1;
 		hyper_dmabuf_register_imported(imported_sgt_info);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
index 7464273..0f6e795 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -25,7 +25,7 @@
 #ifndef __HYPER_DMABUF_MSG_H__
 #define __HYPER_DMABUF_MSG_H__
 
-#define MAX_NUMBER_OF_OPERANDS 40
+#define MAX_NUMBER_OF_OPERANDS 64
 
 struct hyper_dmabuf_req {
 	unsigned int request_id;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
index dd17d26..691a714 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
@@ -255,6 +255,7 @@ int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int fo
 	kfree(sgt_info->active_attached);
 	kfree(sgt_info->va_kmapped);
 	kfree(sgt_info->va_vmapped);
+	kfree(sgt_info->priv);
 
 	return 0;
 }
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
index f01f535..6f929f2 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -57,6 +57,7 @@ struct hyper_dmabuf_pages_info {
         struct page **pages; /* pages that contains reference numbers of shared pages*/
 };
 
+
 /* Both importer and exporter use this structure to point to sg lists
  *
  * Exporter stores references to sgt in a hash table
@@ -90,7 +91,9 @@ struct hyper_dmabuf_sgt_info {
 	 * uses releases hyper_dmabuf device
 	 */
 	struct file *filp;
-	int priv[32]; /* device specific info (e.g. image's meta info?) */
+
+	size_t sz_priv;
+	char *priv; /* device specific info (e.g. image's meta info?) */
 };
 
 /* Importer store references (before mapping) on shared pages
@@ -110,7 +113,9 @@ struct hyper_dmabuf_imported_sgt_info {
 	void *refs_info;
 	bool valid;
 	int num_importers;
-	int priv[32]; /* device specific info (e.g. image's meta info?) */
+
+	size_t sz_priv;
+	char *priv; /* device specific info (e.g. image's meta info?) */
 };
 
 #endif /* __HYPER_DMABUF_STRUCT_H__ */
diff --git a/include/uapi/xen/hyper_dmabuf.h b/include/uapi/xen/hyper_dmabuf.h
index 3a6172e..df01b17 100644
--- a/include/uapi/xen/hyper_dmabuf.h
+++ b/include/uapi/xen/hyper_dmabuf.h
@@ -25,6 +25,8 @@
 #ifndef __LINUX_PUBLIC_HYPER_DMABUF_H__
 #define __LINUX_PUBLIC_HYPER_DMABUF_H__
 
+#define MAX_SIZE_PRIV_DATA 192
+
 typedef struct {
         int id;
         int rng_key[3]; /* 12bytes long random number */
@@ -33,7 +35,7 @@ typedef struct {
 struct hyper_dmabuf_event_hdr {
 	int event_type; /* one type only for now - new import */
 	hyper_dmabuf_id_t hid; /* hyper_dmabuf_id of specific hyper_dmabuf */
-	size_t size; /* size of data */
+	int size; /* size of data */
 };
 
 struct hyper_dmabuf_event_data {
@@ -67,7 +69,8 @@ struct ioctl_hyper_dmabuf_export_remote {
 	int remote_domain;
 	/* exported dma buf id */
 	hyper_dmabuf_id_t hid;
-	int priv[32];
+	int sz_priv;
+	char *priv;
 };
 
 #define IOCTL_HYPER_DMABUF_EXPORT_FD \
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread


* [RFC PATCH 48/60] hyper_dmabuf: add query items for buffer private info.
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:30   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

From: Mateusz Polrola <mateuszx.potrola@intel.com>

This change adds two query items, 'HYPER_DMABUF_QUERY_PRIV_INFO_SIZE'
and 'HYPER_DMABUF_QUERY_PRIV_INFO', for retrieving buffer's private
info and its size.

'info' is the address of a user-space buffer (provided by the user
application) where private data will be copied when the query item is
'HYPER_DMABUF_QUERY_PRIV_INFO'.
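
A hypothetical two-step user-space flow for these query items (the
struct, enum values and two-call pattern follow this patch; the ioctl
macro name, header path and error handling are assumptions for
illustration):

#include <stdlib.h>
#include <sys/ioctl.h>
#include <xen/hyper_dmabuf.h>

static char *read_priv(int fd, hyper_dmabuf_id_t hid, size_t *sz)
{
	struct ioctl_hyper_dmabuf_query q = { .hid = hid };
	char *priv;

	/* step 1: query the size of the attached private data */
	q.item = HYPER_DMABUF_QUERY_PRIV_INFO_SIZE;
	if (ioctl(fd, IOCTL_HYPER_DMABUF_QUERY, &q) < 0)
		return NULL;
	*sz = q.info;

	/* step 2: pass the user buffer address in 'info'; the driver
	 * copies the private data into it with copy_to_user()
	 */
	priv = malloc(*sz);
	if (!priv)
		return NULL;
	q.item = HYPER_DMABUF_QUERY_PRIV_INFO;
	q.info = (unsigned long)priv;
	if (ioctl(fd, IOCTL_HYPER_DMABUF_QUERY, &q) < 0) {
		free(priv);
		return NULL;
	}
	return priv;
}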

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c   |  7 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c |  6 ++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c | 12 ++--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c | 97 +++++++++++++++++++++------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h |  6 +-
 include/uapi/xen/hyper_dmabuf.h               |  4 +-
 6 files changed, 100 insertions(+), 32 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 87ea6ca..1c35a59 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -168,8 +168,11 @@ ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
 				ret -= sizeof(struct hyper_dmabuf_event_hdr);
 
 				/* nullifying hdr of the event in user buffer */
-				copy_to_user(buffer + ret, &dummy_hdr,
-					     sizeof(dummy_hdr));
+				if (copy_to_user(buffer + ret, &dummy_hdr,
+						 sizeof(dummy_hdr))) {
+					dev_err(hyper_dmabuf_private.device,
+						"failed to nullify invalid hdr already in userspace\n");
+				}
 
 				ret = -EFAULT;
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
index 8998a7d..3e1498c 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
@@ -104,6 +104,12 @@ int hyper_dmabuf_import_event(hyper_dmabuf_id_t hid)
 
 	e = kzalloc(sizeof(*e), GFP_KERNEL);
 
+	if (!e) {
+		dev_err(hyper_dmabuf_private.device,
+			"no space left\n");
+		return -ENOMEM;
+	}
+
 	e->event_data.hdr.event_type = HYPER_DMABUF_NEW_IMPORT;
 	e->event_data.hdr.hid = hid;
 	e->event_data.data = (void*)imported_sgt_info->priv;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 06f95ca..15191c2 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -671,9 +671,8 @@ static int hyper_dmabuf_query_ioctl(struct file *filp, void *data)
 		/* query for exported dmabuf */
 		sgt_info = hyper_dmabuf_find_exported(query_attr->hid);
 		if (sgt_info) {
-			ret = hyper_dmabuf_query_exported(sgt_info, query_attr->item);
-			if (ret != -EINVAL)
-				query_attr->info = ret;
+			ret = hyper_dmabuf_query_exported(sgt_info,
+							  query_attr->item, &query_attr->info);
 		} else {
 			dev_err(hyper_dmabuf_private.device,
 				"DMA BUF {id:%d key:%d %d %d} can't be found in the export list\n",
@@ -685,9 +684,8 @@ static int hyper_dmabuf_query_ioctl(struct file *filp, void *data)
 		/* query for imported dmabuf */
 		imported_sgt_info = hyper_dmabuf_find_imported(query_attr->hid);
 		if (imported_sgt_info) {
-			ret = hyper_dmabuf_query_imported(imported_sgt_info, query_attr->item);
-			if (ret != -EINVAL)
-				query_attr->info = ret;
+			ret = hyper_dmabuf_query_imported(imported_sgt_info,
+							  query_attr->item, &query_attr->info);
 		} else {
 			dev_err(hyper_dmabuf_private.device,
 				"DMA BUF {id:%d key:%d %d %d} can't be found in the imported list\n",
@@ -697,7 +695,7 @@ static int hyper_dmabuf_query_ioctl(struct file *filp, void *data)
 		}
 	}
 
-	return 0;
+	return ret;
 }
 
 void hyper_dmabuf_emergency_release(struct hyper_dmabuf_sgt_info* sgt_info,
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c
index 2a5201b..39c9dee 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c
@@ -27,6 +27,7 @@
  */
 
 #include <linux/dma-buf.h>
+#include <linux/uaccess.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_id.h"
@@ -36,56 +37,91 @@ extern struct hyper_dmabuf_private hyper_dmabuf_private;
 #define HYPER_DMABUF_SIZE(nents, first_offset, last_len) \
 	((nents)*PAGE_SIZE - (first_offset) - PAGE_SIZE + (last_len))
 
-int hyper_dmabuf_query_exported(struct hyper_dmabuf_sgt_info *sgt_info, int query)
+int hyper_dmabuf_query_exported(struct hyper_dmabuf_sgt_info *sgt_info,
+				int query, unsigned long* info)
 {
+	int n;
+
 	switch (query)
 	{
 		case HYPER_DMABUF_QUERY_TYPE:
-			return EXPORTED;
+			*info = EXPORTED;
+			break;
 
 		/* exporting domain of this specific dmabuf*/
 		case HYPER_DMABUF_QUERY_EXPORTER:
-			return HYPER_DMABUF_DOM_ID(sgt_info->hid);
+			*info = HYPER_DMABUF_DOM_ID(sgt_info->hid);
+			break;
 
 		/* importing domain of this specific dmabuf */
 		case HYPER_DMABUF_QUERY_IMPORTER:
-			return sgt_info->hyper_dmabuf_rdomain;
+			*info = sgt_info->hyper_dmabuf_rdomain;
+			break;
 
 		/* size of dmabuf in byte */
 		case HYPER_DMABUF_QUERY_SIZE:
-			return sgt_info->dma_buf->size;
+			*info = sgt_info->dma_buf->size;
+			break;
 
 		/* whether the buffer is used by importer */
 		case HYPER_DMABUF_QUERY_BUSY:
-			return (sgt_info->importer_exported == 0) ? false : true;
+			*info = (sgt_info->importer_exported == 0) ? false : true;
+			break;
 
 		/* whether the buffer is unexported */
 		case HYPER_DMABUF_QUERY_UNEXPORTED:
-			return !sgt_info->valid;
+			*info = !sgt_info->valid;
+			break;
 
 		/* whether the buffer is scheduled to be unexported */
 		case HYPER_DMABUF_QUERY_DELAYED_UNEXPORTED:
-			return !sgt_info->unexport_scheduled;
+			*info = !sgt_info->unexport_scheduled;
+			break;
+
+		/* size of private info attached to buffer */
+		case HYPER_DMABUF_QUERY_PRIV_INFO_SIZE:
+			*info = sgt_info->sz_priv;
+			break;
+
+		/* copy private info attached to buffer */
+		case HYPER_DMABUF_QUERY_PRIV_INFO:
+			if (sgt_info->sz_priv > 0) {
+				n = copy_to_user((void __user*) *info,
+						sgt_info->priv,
+						sgt_info->sz_priv);
+				if (n != 0)
+					return -EINVAL;
+			}
+			break;
+
+		default:
+			return -EINVAL;
 	}
 
-	return -EINVAL;
+	return 0;
 }
 
 
-int hyper_dmabuf_query_imported(struct hyper_dmabuf_imported_sgt_info *imported_sgt_info, int query)
+int hyper_dmabuf_query_imported(struct hyper_dmabuf_imported_sgt_info *imported_sgt_info,
+				int query, unsigned long *info)
 {
+	int n;
+
 	switch (query)
 	{
 		case HYPER_DMABUF_QUERY_TYPE:
-			return IMPORTED;
+			*info = IMPORTED;
+			break;
 
 		/* exporting domain of this specific dmabuf*/
 		case HYPER_DMABUF_QUERY_EXPORTER:
-			return HYPER_DMABUF_DOM_ID(imported_sgt_info->hid);
+			*info = HYPER_DMABUF_DOM_ID(imported_sgt_info->hid);
+			break;
 
 		/* importing domain of this specific dmabuf */
 		case HYPER_DMABUF_QUERY_IMPORTER:
-			return  hyper_dmabuf_private.domid;
+			*info = hyper_dmabuf_private.domid;
+			break;
 
 		/* size of dmabuf in byte */
 		case HYPER_DMABUF_QUERY_SIZE:
@@ -93,23 +129,44 @@ int hyper_dmabuf_query_imported(struct hyper_dmabuf_imported_sgt_info *imported_
 				/* if local dma_buf is created (if it's ever mapped),
 				 * retrieve it directly from struct dma_buf *
 				 */
-				return imported_sgt_info->dma_buf->size;
+				*info = imported_sgt_info->dma_buf->size;
 			} else {
 				/* calcuate it from given nents, frst_ofst and last_len */
-				return HYPER_DMABUF_SIZE(imported_sgt_info->nents,
-							 imported_sgt_info->frst_ofst,
-							 imported_sgt_info->last_len);
+				*info = HYPER_DMABUF_SIZE(imported_sgt_info->nents,
+							  imported_sgt_info->frst_ofst,
+							  imported_sgt_info->last_len);
 			}
+			break;
 
 		/* whether the buffer is used or not */
 		case HYPER_DMABUF_QUERY_BUSY:
 			/* checks if it's used by importer */
-			return (imported_sgt_info->num_importers > 0) ? true : false;
+			*info = (imported_sgt_info->num_importers > 0) ? true : false;
+			break;
 
 		/* whether the buffer is unexported */
 		case HYPER_DMABUF_QUERY_UNEXPORTED:
-			return !imported_sgt_info->valid;
+			*info = !imported_sgt_info->valid;
+			break;
+		/* size of private info attached to buffer */
+		case HYPER_DMABUF_QUERY_PRIV_INFO_SIZE:
+			*info = imported_sgt_info->sz_priv;
+			break;
+
+		/* copy private info attached to buffer */
+		case HYPER_DMABUF_QUERY_PRIV_INFO:
+			if (imported_sgt_info->sz_priv > 0) {
+				n = copy_to_user((void __user*) *info,
+						imported_sgt_info->priv,
+						imported_sgt_info->sz_priv);
+				if (n != 0)
+					return -EINVAL;
+			}
+			break;
+
+		default:
+			return -EINVAL;
 	}
 
-	return -EINVAL;
+	return 0;
 }
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
index 295e923..7bbb322 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
@@ -1,8 +1,10 @@
 #ifndef __HYPER_DMABUF_QUERY_H__
 #define __HYPER_DMABUF_QUERY_H__
 
-int hyper_dmabuf_query_imported(struct hyper_dmabuf_imported_sgt_info *imported_sgt_info, int query);
+int hyper_dmabuf_query_imported(struct hyper_dmabuf_imported_sgt_info *imported_sgt_info,
+				int query, unsigned long *info);
 
-int hyper_dmabuf_query_exported(struct hyper_dmabuf_sgt_info *sgt_info, int query);
+int hyper_dmabuf_query_exported(struct hyper_dmabuf_sgt_info *sgt_info,
+				int query, unsigned long *info);
 
 #endif // __HYPER_DMABUF_QUERY_H__
diff --git a/include/uapi/xen/hyper_dmabuf.h b/include/uapi/xen/hyper_dmabuf.h
index df01b17..e18dd9b 100644
--- a/include/uapi/xen/hyper_dmabuf.h
+++ b/include/uapi/xen/hyper_dmabuf.h
@@ -109,7 +109,7 @@ struct ioctl_hyper_dmabuf_query {
 	int item;
 	/* OUT parameters */
 	/* Value of queried item */
-	int info;
+	unsigned long info;
 };
 
 /* DMABUF query */
@@ -122,6 +122,8 @@ enum hyper_dmabuf_query {
         HYPER_DMABUF_QUERY_BUSY,
         HYPER_DMABUF_QUERY_UNEXPORTED,
         HYPER_DMABUF_QUERY_DELAYED_UNEXPORTED,
+        HYPER_DMABUF_QUERY_PRIV_INFO_SIZE,
+        HYPER_DMABUF_QUERY_PRIV_INFO,
 };
 
 enum hyper_dmabuf_status {
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 49/60] hyper_dmabuf: general clean-up and fixes
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:30   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

1. The global hyper_dmabuf_private structure is now a pointer
   (*hy_drv_priv) to a private data structure allocated when the
   driver initializes; it is freed when the driver exits (see the
   sketch after this list).

2. Use shorter variable and type names.

3. Remove unnecessary NULL checks.

4. Event-polling related functions are now compiled only if
   CONFIG_HYPER_DMABUF_EVENT_GEN is enabled.
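
To illustrate items 1 and 4, here is a minimal, self-contained sketch
of the same pattern: driver state behind a global pointer, allocated
at init and freed at exit, with the event read handler compiled in
only under CONFIG_HYPER_DMABUF_EVENT_GEN. All names in the sketch
(demo_priv, demo_fops, demo_miscdev, "demo_drv") are hypothetical
placeholders, not the driver's actual symbols:

#include <linux/module.h>
#include <linux/slab.h>
#include <linux/fs.h>
#include <linux/miscdevice.h>

struct demo_priv {
	struct device *dev;	/* set from the miscdev after registration */
	int pending;		/* # of pending events */
};

/* global pointer instead of a global instance (cf. hy_drv_priv) */
static struct demo_priv *demo_priv;

#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
/* built only when event polling is configured in */
static ssize_t demo_read(struct file *filp, char __user *buf,
			 size_t count, loff_t *ppos)
{
	return 0;	/* event delivery would live here */
}
#endif

static const struct file_operations demo_fops = {
	.owner = THIS_MODULE,
#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
	.read = demo_read,
#endif
};

static struct miscdevice demo_miscdev = {
	.minor = MISC_DYNAMIC_MINOR,
	.name = "demo_drv",
	.fops = &demo_fops,
};

static int __init demo_init(void)
{
	int ret;

	/* allocate driver-private data at init time */
	demo_priv = kzalloc(sizeof(*demo_priv), GFP_KERNEL);
	if (!demo_priv)
		return -ENOMEM;

	ret = misc_register(&demo_miscdev);
	if (ret) {
		kfree(demo_priv);
		return ret;
	}

	demo_priv->dev = demo_miscdev.this_device;
	return 0;
}

static void __exit demo_exit(void)
{
	misc_deregister(&demo_miscdev);
	kfree(demo_priv);	/* freed when the driver exits */
	demo_priv = NULL;
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

The hunks below apply this same pattern to the real hy_drv_priv and
hyper_dmabuf_driver_fops.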

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/Makefile                  |   7 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h       |  25 -
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        | 164 +++---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        |  13 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c      |  60 +--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c         |  16 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 569 ++++++++++-----------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h      |   2 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c       |  88 ++--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h       |  18 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        | 259 +++++-----
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h        |  18 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c        | 284 +++++-----
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h        |   4 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c      |  58 +--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h      |   4 +-
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c    | 170 +++---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c   | 123 ++---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h   |  10 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h     |  24 +-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   | 240 +++++----
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c  |   6 +-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c    | 147 +++---
 23 files changed, 1144 insertions(+), 1165 deletions(-)
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h

diff --git a/drivers/xen/hyper_dmabuf/Makefile b/drivers/xen/hyper_dmabuf/Makefile
index 1cd7a81..a113bfc 100644
--- a/drivers/xen/hyper_dmabuf/Makefile
+++ b/drivers/xen/hyper_dmabuf/Makefile
@@ -13,9 +13,12 @@ ifneq ($(KERNELRELEASE),)
 				 hyper_dmabuf_id.o \
 				 hyper_dmabuf_remote_sync.o \
 				 hyper_dmabuf_query.o \
-				 hyper_dmabuf_event.o \
 
-ifeq ($(CONFIG_XEN), y)
+ifeq ($(CONFIG_HYPER_DMABUF_EVENT_GEN), y)
+	$(TARGET_MODULE)-objs += hyper_dmabuf_event.o
+endif
+
+ifeq ($(CONFIG_HYPER_DMABUF_XEN), y)
 	$(TARGET_MODULE)-objs += xen/hyper_dmabuf_xen_comm.o \
 				 xen/hyper_dmabuf_xen_comm_list.o \
 				 xen/hyper_dmabuf_xen_shm.o \
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
deleted file mode 100644
index d5125f2..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
+++ /dev/null
@@ -1,25 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-/* configuration */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 1c35a59..525ee78 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -36,7 +36,6 @@
 #include <linux/poll.h>
 #include <linux/dma-buf.h>
 #include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_conf.h"
 #include "hyper_dmabuf_ioctl.h"
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_list.h"
@@ -51,13 +50,32 @@ extern struct hyper_dmabuf_backend_ops xen_backend_ops;
 MODULE_LICENSE("GPL and additional rights");
 MODULE_AUTHOR("Intel Corporation");
 
-struct hyper_dmabuf_private hyper_dmabuf_private;
+struct hyper_dmabuf_private *hy_drv_priv;
 
 long hyper_dmabuf_ioctl(struct file *filp,
 			unsigned int cmd, unsigned long param);
 
-void hyper_dmabuf_emergency_release(struct hyper_dmabuf_sgt_info* sgt_info,
-				    void *attr);
+static void hyper_dmabuf_force_free(struct exported_sgt_info* exported,
+			            void *attr)
+{
+	struct ioctl_hyper_dmabuf_unexport unexport_attr;
+	struct file *filp = (struct file*) attr;
+
+	if (!filp || !exported)
+		return;
+
+	if (exported->filp == filp) {
+		dev_dbg(hy_drv_priv->dev,
+			"Forcefully releasing buffer {id:%d key:%d %d %d}\n",
+			 exported->hid.id, exported->hid.rng_key[0],
+			 exported->hid.rng_key[1], exported->hid.rng_key[2]);
+
+		unexport_attr.hid = exported->hid;
+		unexport_attr.delay_ms = 0;
+
+		hyper_dmabuf_unexport_ioctl(filp, &unexport_attr);
+	}
+}
 
 int hyper_dmabuf_open(struct inode *inode, struct file *filp)
 {
@@ -72,18 +90,20 @@ int hyper_dmabuf_open(struct inode *inode, struct file *filp)
 
 int hyper_dmabuf_release(struct inode *inode, struct file *filp)
 {
-	hyper_dmabuf_foreach_exported(hyper_dmabuf_emergency_release, filp);
+	hyper_dmabuf_foreach_exported(hyper_dmabuf_force_free, filp);
 
 	return 0;
 }
 
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
+
 unsigned int hyper_dmabuf_event_poll(struct file *filp, struct poll_table_struct *wait)
 {
 	unsigned int mask = 0;
 
-	poll_wait(filp, &hyper_dmabuf_private.event_wait, wait);
+	poll_wait(filp, &hy_drv_priv->event_wait, wait);
 
-	if (!list_empty(&hyper_dmabuf_private.event_list))
+	if (!list_empty(&hy_drv_priv->event_list))
 		mask |= POLLIN | POLLRDNORM;
 
 	return mask;
@@ -96,32 +116,32 @@ ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
 
 	/* only root can read events */
 	if (!capable(CAP_DAC_OVERRIDE)) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"Only root can read events\n");
 		return -EFAULT;
 	}
 
 	/* make sure user buffer can be written */
 	if (!access_ok(VERIFY_WRITE, buffer, count)) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"User buffer can't be written.\n");
 		return -EFAULT;
 	}
 
-	ret = mutex_lock_interruptible(&hyper_dmabuf_private.event_read_lock);
+	ret = mutex_lock_interruptible(&hy_drv_priv->event_read_lock);
 	if (ret)
 		return ret;
 
 	while (1) {
 		struct hyper_dmabuf_event *e = NULL;
 
-		spin_lock_irq(&hyper_dmabuf_private.event_lock);
-		if (!list_empty(&hyper_dmabuf_private.event_list)) {
-			e = list_first_entry(&hyper_dmabuf_private.event_list,
+		spin_lock_irq(&hy_drv_priv->event_lock);
+		if (!list_empty(&hy_drv_priv->event_list)) {
+			e = list_first_entry(&hy_drv_priv->event_list,
 					struct hyper_dmabuf_event, link);
 			list_del(&e->link);
 		}
-		spin_unlock_irq(&hyper_dmabuf_private.event_lock);
+		spin_unlock_irq(&hy_drv_priv->event_lock);
 
 		if (!e) {
 			if (ret)
@@ -131,12 +151,12 @@ ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
 				break;
 			}
 
-			mutex_unlock(&hyper_dmabuf_private.event_read_lock);
-			ret = wait_event_interruptible(hyper_dmabuf_private.event_wait,
-						       !list_empty(&hyper_dmabuf_private.event_list));
+			mutex_unlock(&hy_drv_priv->event_read_lock);
+			ret = wait_event_interruptible(hy_drv_priv->event_wait,
+						       !list_empty(&hy_drv_priv->event_list));
 
 			if (ret == 0)
-				ret = mutex_lock_interruptible(&hyper_dmabuf_private.event_read_lock);
+				ret = mutex_lock_interruptible(&hy_drv_priv->event_read_lock);
 
 			if (ret)
 				return ret;
@@ -145,9 +165,9 @@ ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
 
 			if (length > count - ret) {
 put_back_event:
-				spin_lock_irq(&hyper_dmabuf_private.event_lock);
-				list_add(&e->link, &hyper_dmabuf_private.event_list);
-				spin_unlock_irq(&hyper_dmabuf_private.event_lock);
+				spin_lock_irq(&hy_drv_priv->event_lock);
+				list_add(&e->link, &hy_drv_priv->event_list);
+				spin_unlock_irq(&hy_drv_priv->event_lock);
 				break;
 			}
 
@@ -170,7 +190,7 @@ ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
 				/* nullifying hdr of the event in user buffer */
 				if (copy_to_user(buffer + ret, &dummy_hdr,
 						 sizeof(dummy_hdr))) {
-					dev_err(hyper_dmabuf_private.device,
+					dev_err(hy_drv_priv->dev,
 						"failed to nullify invalid hdr already in userspace\n");
 				}
 
@@ -180,23 +200,30 @@ ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
 			}
 
 			ret += e->event_data.hdr.size;
-			hyper_dmabuf_private.curr_num_event--;
+			hy_drv_priv->pending--;
 			kfree(e);
 		}
 	}
 
-	mutex_unlock(&hyper_dmabuf_private.event_read_lock);
+	mutex_unlock(&hy_drv_priv->event_read_lock);
 
 	return ret;
 }
 
+#endif
+
 static struct file_operations hyper_dmabuf_driver_fops =
 {
 	.owner = THIS_MODULE,
 	.open = hyper_dmabuf_open,
 	.release = hyper_dmabuf_release,
+
+/* poll and read interfaces are needed only for event-polling */
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
 	.read = hyper_dmabuf_event_read,
 	.poll = hyper_dmabuf_event_poll,
+#endif
+
 	.unlocked_ioctl = hyper_dmabuf_ioctl,
 };
 
@@ -217,17 +244,17 @@ int register_device(void)
 		return ret;
 	}
 
-	hyper_dmabuf_private.device = hyper_dmabuf_miscdev.this_device;
+	hy_drv_priv->dev = hyper_dmabuf_miscdev.this_device;
 
 	/* TODO: Check if there is a different way to initialize dma mask nicely */
-	dma_coerce_mask_and_coherent(hyper_dmabuf_private.device, DMA_BIT_MASK(64));
+	dma_coerce_mask_and_coherent(hy_drv_priv->dev, DMA_BIT_MASK(64));
 
 	return ret;
 }
 
 void unregister_device(void)
 {
-	dev_info(hyper_dmabuf_private.device,
+	dev_info(hy_drv_priv->dev,
 		"hyper_dmabuf: unregister_device() is called\n");
 
 	misc_deregister(&hyper_dmabuf_miscdev);
@@ -239,9 +266,13 @@ static int __init hyper_dmabuf_drv_init(void)
 
 	printk( KERN_NOTICE "hyper_dmabuf_starting: Initialization started\n");
 
-	mutex_init(&hyper_dmabuf_private.lock);
-	mutex_init(&hyper_dmabuf_private.event_read_lock);
-	spin_lock_init(&hyper_dmabuf_private.event_lock);
+	hy_drv_priv = kcalloc(1, sizeof(struct hyper_dmabuf_private),
+			      GFP_KERNEL);
+
+	if (!hy_drv_priv) {
+		printk( KERN_ERR "hyper_dmabuf: Failed to create drv\n");
+		return -1;
+	}
 
 	ret = register_device();
 	if (ret < 0) {
@@ -251,64 +282,72 @@ static int __init hyper_dmabuf_drv_init(void)
 /* currently only supports XEN hypervisor */
 
 #ifdef CONFIG_HYPER_DMABUF_XEN
-	hyper_dmabuf_private.backend_ops = &xen_backend_ops;
+	hy_drv_priv->backend_ops = &xen_backend_ops;
 #else
-	hyper_dmabuf_private.backend_ops = NULL;
+	hy_drv_priv->backend_ops = NULL;
 	printk( KERN_ERR "hyper_dmabuf drv currently supports XEN only.\n");
 #endif
 
-	if (hyper_dmabuf_private.backend_ops == NULL) {
+	if (hy_drv_priv->backend_ops == NULL) {
 		printk( KERN_ERR "Hyper_dmabuf: failed to be loaded - no backend found\n");
 		return -1;
 	}
 
-	mutex_lock(&hyper_dmabuf_private.lock);
+	/* initializing mutexes and a spinlock */
+	mutex_init(&hy_drv_priv->lock);
+
+	mutex_lock(&hy_drv_priv->lock);
 
-	hyper_dmabuf_private.backend_initialized = false;
+	hy_drv_priv->initialized = false;
 
-	dev_info(hyper_dmabuf_private.device,
+	dev_info(hy_drv_priv->dev,
 		 "initializing database for imported/exported dmabufs\n");
 
 	/* device structure initialization */
 	/* currently only does work-queue initialization */
-	hyper_dmabuf_private.work_queue = create_workqueue("hyper_dmabuf_wqueue");
+	hy_drv_priv->work_queue = create_workqueue("hyper_dmabuf_wqueue");
 
 	ret = hyper_dmabuf_table_init();
 	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"failed to initialize table for exported/imported entries\n");
 		return ret;
 	}
 
 #ifdef CONFIG_HYPER_DMABUF_SYSFS
-	ret = hyper_dmabuf_register_sysfs(hyper_dmabuf_private.device);
+	ret = hyper_dmabuf_register_sysfs(hy_drv_priv->dev);
 	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"failed to initialize sysfs\n");
 		return ret;
 	}
 #endif
 
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
+	mutex_init(&hy_drv_priv->event_read_lock);
+	spin_lock_init(&hy_drv_priv->event_lock);
+
 	/* Initialize event queue */
-	INIT_LIST_HEAD(&hyper_dmabuf_private.event_list);
-	init_waitqueue_head(&hyper_dmabuf_private.event_wait);
+	INIT_LIST_HEAD(&hy_drv_priv->event_list);
+	init_waitqueue_head(&hy_drv_priv->event_wait);
 
-	hyper_dmabuf_private.curr_num_event = 0;
-	hyper_dmabuf_private.exited = false;
+	/* resetting number of pending events */
+	hy_drv_priv->pending = 0;
+#endif
 
-	hyper_dmabuf_private.domid = hyper_dmabuf_private.backend_ops->get_vm_id();
+	hy_drv_priv->domid = hy_drv_priv->backend_ops->get_vm_id();
 
-	ret = hyper_dmabuf_private.backend_ops->init_comm_env();
+	ret = hy_drv_priv->backend_ops->init_comm_env();
 	if (ret < 0) {
-		dev_dbg(hyper_dmabuf_private.device,
+		dev_dbg(hy_drv_priv->dev,
 			"failed to initialize comm-env but it will re-attempt.\n");
 	} else {
-		hyper_dmabuf_private.backend_initialized = true;
+		hy_drv_priv->initialized = true;
 	}
 
-	mutex_unlock(&hyper_dmabuf_private.lock);
+	mutex_unlock(&hy_drv_priv->lock);
 
-	dev_info(hyper_dmabuf_private.device,
+	dev_info(hy_drv_priv->dev,
 		"Finishing up initialization of hyper_dmabuf drv\n");
 
 	/* interrupt for comm should be registered here: */
@@ -318,34 +357,39 @@ static int __init hyper_dmabuf_drv_init(void)
 static void hyper_dmabuf_drv_exit(void)
 {
 #ifdef CONFIG_HYPER_DMABUF_SYSFS
-	hyper_dmabuf_unregister_sysfs(hyper_dmabuf_private.device);
+	hyper_dmabuf_unregister_sysfs(hy_drv_priv->dev);
 #endif
 
-	mutex_lock(&hyper_dmabuf_private.lock);
+	mutex_lock(&hy_drv_priv->lock);
 
 	/* hash tables for export/import entries and ring_infos */
 	hyper_dmabuf_table_destroy();
 
-	hyper_dmabuf_private.backend_ops->destroy_comm();
+	hy_drv_priv->backend_ops->destroy_comm();
 
 	/* destroy workqueue */
-	if (hyper_dmabuf_private.work_queue)
-		destroy_workqueue(hyper_dmabuf_private.work_queue);
+	if (hy_drv_priv->work_queue)
+		destroy_workqueue(hy_drv_priv->work_queue);
 
 	/* destroy id_queue */
-	if (hyper_dmabuf_private.id_queue)
+	if (hy_drv_priv->id_queue)
 		destroy_reusable_list();
 
-	hyper_dmabuf_private.exited = true;
-
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
 	/* clean up event queue */
 	hyper_dmabuf_events_release();
+#endif
 
-	mutex_unlock(&hyper_dmabuf_private.lock);
+	mutex_unlock(&hy_drv_priv->lock);
 
-	dev_info(hyper_dmabuf_private.device,
+	dev_info(hy_drv_priv->dev,
 		 "hyper_dmabuf driver: Exiting\n");
 
+	if (hy_drv_priv) {
+		kfree(hy_drv_priv);
+		hy_drv_priv = NULL;
+	}
+
 	unregister_device();
 }
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
index a4acdd9f..2ead41b 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -36,7 +36,7 @@ struct hyper_dmabuf_event {
 };
 
 struct hyper_dmabuf_private {
-        struct device *device;
+        struct device *dev;
 
 	/* VM(domain) id of current VM instance */
 	int domid;
@@ -55,7 +55,7 @@ struct hyper_dmabuf_private {
 	struct mutex lock;
 
 	/* flag that shows whether backend is initialized */
-	bool backend_initialized;
+	bool initialized;
 
         wait_queue_head_t event_wait;
         struct list_head event_list;
@@ -63,10 +63,8 @@ struct hyper_dmabuf_private {
 	spinlock_t event_lock;
 	struct mutex event_read_lock;
 
-	int curr_num_event;
-
-	/* indicate whether the driver is unloaded */
-	bool exited;
+	/* # of pending events */
+	int pending;
 };
 
 struct list_reusable_id {
@@ -108,4 +106,7 @@ struct hyper_dmabuf_backend_ops {
 	int (*send_req)(int, struct hyper_dmabuf_req *, int);
 };
 
+/* exporting global drv private info */
+extern struct hyper_dmabuf_private *hy_drv_priv;
+
 #endif /* __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
index 3e1498c..0498cda 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
@@ -32,37 +32,33 @@
 #include <linux/slab.h>
 #include <linux/module.h>
 #include <linux/dma-buf.h>
-#include <xen/grant_table.h>
-#include <asm/xen/page.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_event.h"
 
-extern struct hyper_dmabuf_private hyper_dmabuf_private;
-
 static void hyper_dmabuf_send_event_locked(struct hyper_dmabuf_event *e)
 {
 	struct hyper_dmabuf_event *oldest;
 
-	assert_spin_locked(&hyper_dmabuf_private.event_lock);
+	assert_spin_locked(&hy_drv_priv->event_lock);
 
 	/* check current number of event then if it hits the max num allowed
 	 * then remove the oldest event in the list */
-	if (hyper_dmabuf_private.curr_num_event > MAX_DEPTH_EVENT_QUEUE - 1) {
-		oldest = list_first_entry(&hyper_dmabuf_private.event_list,
+	if (hy_drv_priv->pending > MAX_DEPTH_EVENT_QUEUE - 1) {
+		oldest = list_first_entry(&hy_drv_priv->event_list,
 				struct hyper_dmabuf_event, link);
 		list_del(&oldest->link);
-		hyper_dmabuf_private.curr_num_event--;
+		hy_drv_priv->pending--;
 		kfree(oldest);
 	}
 
 	list_add_tail(&e->link,
-		      &hyper_dmabuf_private.event_list);
+		      &hy_drv_priv->event_list);
 
-	hyper_dmabuf_private.curr_num_event++;
+	hy_drv_priv->pending++;
 
-	wake_up_interruptible(&hyper_dmabuf_private.event_wait);
+	wake_up_interruptible(&hy_drv_priv->event_wait);
 }
 
 void hyper_dmabuf_events_release()
@@ -70,34 +66,34 @@ void hyper_dmabuf_events_release()
 	struct hyper_dmabuf_event *e, *et;
 	unsigned long irqflags;
 
-	spin_lock_irqsave(&hyper_dmabuf_private.event_lock, irqflags);
+	spin_lock_irqsave(&hy_drv_priv->event_lock, irqflags);
 
-	list_for_each_entry_safe(e, et, &hyper_dmabuf_private.event_list,
+	list_for_each_entry_safe(e, et, &hy_drv_priv->event_list,
 				 link) {
 		list_del(&e->link);
 		kfree(e);
-		hyper_dmabuf_private.curr_num_event--;
+		hy_drv_priv->pending--;
 	}
 
-	if (hyper_dmabuf_private.curr_num_event) {
-		dev_err(hyper_dmabuf_private.device,
+	if (hy_drv_priv->pending) {
+		dev_err(hy_drv_priv->dev,
 			"possible leak on event_list\n");
 	}
 
-	spin_unlock_irqrestore(&hyper_dmabuf_private.event_lock, irqflags);
+	spin_unlock_irqrestore(&hy_drv_priv->event_lock, irqflags);
 }
 
 int hyper_dmabuf_import_event(hyper_dmabuf_id_t hid)
 {
 	struct hyper_dmabuf_event *e;
-	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
+	struct imported_sgt_info *imported;
 
 	unsigned long irqflags;
 
-	imported_sgt_info = hyper_dmabuf_find_imported(hid);
+	imported = hyper_dmabuf_find_imported(hid);
 
-	if (!imported_sgt_info) {
-		dev_err(hyper_dmabuf_private.device,
+	if (!imported) {
+		dev_err(hy_drv_priv->dev,
 			"can't find imported_sgt_info in the list\n");
 		return -EINVAL;
 	}
@@ -105,29 +101,29 @@ int hyper_dmabuf_import_event(hyper_dmabuf_id_t hid)
 	e = kzalloc(sizeof(*e), GFP_KERNEL);
 
 	if (!e) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"no space left\n");
 		return -ENOMEM;
 	}
 
 	e->event_data.hdr.event_type = HYPER_DMABUF_NEW_IMPORT;
 	e->event_data.hdr.hid = hid;
-	e->event_data.data = (void*)imported_sgt_info->priv;
-	e->event_data.hdr.size = imported_sgt_info->sz_priv;
+	e->event_data.data = (void*)imported->priv;
+	e->event_data.hdr.size = imported->sz_priv;
 
-	spin_lock_irqsave(&hyper_dmabuf_private.event_lock, irqflags);
+	spin_lock_irqsave(&hy_drv_priv->event_lock, irqflags);
 
 	hyper_dmabuf_send_event_locked(e);
 
-	spin_unlock_irqrestore(&hyper_dmabuf_private.event_lock, irqflags);
+	spin_unlock_irqrestore(&hy_drv_priv->event_lock, irqflags);
 
-	dev_dbg(hyper_dmabuf_private.device,
-			"event number = %d :", hyper_dmabuf_private.curr_num_event);
+	dev_dbg(hy_drv_priv->dev,
+		"event number = %d :", hy_drv_priv->pending);
 
-	dev_dbg(hyper_dmabuf_private.device,
-			"generating events for {%d, %d, %d, %d}\n",
-			imported_sgt_info->hid.id, imported_sgt_info->hid.rng_key[0],
-			imported_sgt_info->hid.rng_key[1], imported_sgt_info->hid.rng_key[2]);
+	dev_dbg(hy_drv_priv->dev,
+		"generating events for {%d, %d, %d, %d}\n",
+		imported->hid.id, imported->hid.rng_key[0],
+		imported->hid.rng_key[1], imported->hid.rng_key[2]);
 
 	return 0;
 }
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
index cccdc19..e2466c7 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
@@ -33,17 +33,15 @@
 #include "hyper_dmabuf_id.h"
 #include "hyper_dmabuf_msg.h"
 
-extern struct hyper_dmabuf_private hyper_dmabuf_private;
-
 void store_reusable_hid(hyper_dmabuf_id_t hid)
 {
-	struct list_reusable_id *reusable_head = hyper_dmabuf_private.id_queue;
+	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
 	struct list_reusable_id *new_reusable;
 
 	new_reusable = kmalloc(sizeof(*new_reusable), GFP_KERNEL);
 
 	if (!new_reusable) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"No memory left to be allocated\n");
 		return;
 	}
@@ -55,7 +53,7 @@ void store_reusable_hid(hyper_dmabuf_id_t hid)
 
 static hyper_dmabuf_id_t retrieve_reusable_hid(void)
 {
-	struct list_reusable_id *reusable_head = hyper_dmabuf_private.id_queue;
+	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
 	hyper_dmabuf_id_t hid = {-1, {0,0,0}};
 
 	/* check there is reusable id */
@@ -74,7 +72,7 @@ static hyper_dmabuf_id_t retrieve_reusable_hid(void)
 
 void destroy_reusable_list(void)
 {
-	struct list_reusable_id *reusable_head = hyper_dmabuf_private.id_queue;
+	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
 	struct list_reusable_id *temp_head;
 
 	if (reusable_head) {
@@ -103,14 +101,14 @@ hyper_dmabuf_id_t hyper_dmabuf_get_hid(void)
 		reusable_head = kmalloc(sizeof(*reusable_head), GFP_KERNEL);
 
 		if (!reusable_head) {
-			dev_err(hyper_dmabuf_private.device,
+			dev_err(hy_drv_priv->dev,
 				"No memory left to be allocated\n");
 			return (hyper_dmabuf_id_t){-1, {0,0,0}};
 		}
 
 		reusable_head->hid.id = -1; /* list head has an invalid count */
 		INIT_LIST_HEAD(&reusable_head->list);
-		hyper_dmabuf_private.id_queue = reusable_head;
+		hy_drv_priv->id_queue = reusable_head;
 	}
 
 	hid = retrieve_reusable_hid();
@@ -119,7 +117,7 @@ hyper_dmabuf_id_t hyper_dmabuf_get_hid(void)
 	 * and count is less than maximum allowed
 	 */
 	if (hid.id == -1 && count < HYPER_DMABUF_ID_MAX) {
-		hid.id = HYPER_DMABUF_ID_CREATE(hyper_dmabuf_private.domid, count++);
+		hid.id = HYPER_DMABUF_ID_CREATE(hy_drv_priv->domid, count++);
 	}
 
 	/* random data embedded in the id for security */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 15191c2..b328df7 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -45,16 +45,14 @@
 #include "hyper_dmabuf_ops.h"
 #include "hyper_dmabuf_query.h"
 
-extern struct hyper_dmabuf_private hyper_dmabuf_private;
-
 static int hyper_dmabuf_tx_ch_setup_ioctl(struct file *filp, void *data)
 {
 	struct ioctl_hyper_dmabuf_tx_ch_setup *tx_ch_attr;
-	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
+	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
 	int ret = 0;
 
 	if (!data) {
-		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
+		dev_err(hy_drv_priv->dev, "user data is NULL\n");
 		return -EINVAL;
 	}
 	tx_ch_attr = (struct ioctl_hyper_dmabuf_tx_ch_setup *)data;
@@ -67,11 +65,11 @@ static int hyper_dmabuf_tx_ch_setup_ioctl(struct file *filp, void *data)
 static int hyper_dmabuf_rx_ch_setup_ioctl(struct file *filp, void *data)
 {
 	struct ioctl_hyper_dmabuf_rx_ch_setup *rx_ch_attr;
-	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
+	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
 	int ret = 0;
 
 	if (!data) {
-		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
+		dev_err(hy_drv_priv->dev, "user data is NULL\n");
 		return -EINVAL;
 	}
 
@@ -82,48 +80,48 @@ static int hyper_dmabuf_rx_ch_setup_ioctl(struct file *filp, void *data)
 	return ret;
 }
 
-static int hyper_dmabuf_send_export_msg(struct hyper_dmabuf_sgt_info *sgt_info,
-					struct hyper_dmabuf_pages_info *page_info)
+static int hyper_dmabuf_send_export_msg(struct exported_sgt_info *exported,
+					struct pages_info *pg_info)
 {
-	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
+	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
 	struct hyper_dmabuf_req *req;
-	int operands[MAX_NUMBER_OF_OPERANDS] = {0};
+	int op[MAX_NUMBER_OF_OPERANDS] = {0};
 	int ret, i;
 
 	/* now create request for importer via ring */
-	operands[0] = sgt_info->hid.id;
+	op[0] = exported->hid.id;
 
 	for (i=0; i<3; i++)
-		operands[i+1] = sgt_info->hid.rng_key[i];
-
-	if (page_info) {
-		operands[4] = page_info->nents;
-		operands[5] = page_info->frst_ofst;
-		operands[6] = page_info->last_len;
-		operands[7] = ops->share_pages (page_info->pages, sgt_info->hyper_dmabuf_rdomain,
-						page_info->nents, &sgt_info->refs_info);
-		if (operands[7] < 0) {
-			dev_err(hyper_dmabuf_private.device, "pages sharing failed\n");
+		op[i+1] = exported->hid.rng_key[i];
+
+	if (pg_info) {
+		op[4] = pg_info->nents;
+		op[5] = pg_info->frst_ofst;
+		op[6] = pg_info->last_len;
+		op[7] = ops->share_pages(pg_info->pgs, exported->rdomid,
+					 pg_info->nents, &exported->refs_info);
+		if (op[7] < 0) {
+			dev_err(hy_drv_priv->dev, "pages sharing failed\n");
 			return -1;
 		}
 	}
 
-	operands[8] = sgt_info->sz_priv;
+	op[8] = exported->sz_priv;
 
 	/* driver/application specific private info */
-	memcpy(&operands[9], sgt_info->priv, operands[8]);
+	memcpy(&op[9], exported->priv, op[8]);
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
 	if(!req) {
-		dev_err(hyper_dmabuf_private.device, "no more space left\n");
+		dev_err(hy_drv_priv->dev, "no more space left\n");
 		return -1;
 	}
 
 	/* composing a message to the importer */
-	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT, &operands[0]);
+	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT, &op[0]);
 
-	ret = ops->send_req(sgt_info->hyper_dmabuf_rdomain, req, true);
+	ret = ops->send_req(exported->rdomid, req, true);
 
 	kfree(req);
 
@@ -132,24 +130,18 @@ static int hyper_dmabuf_send_export_msg(struct hyper_dmabuf_sgt_info *sgt_info,
 
 static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 {
-	struct ioctl_hyper_dmabuf_export_remote *export_remote_attr;
+	struct ioctl_hyper_dmabuf_export_remote *export_remote_attr =
+			(struct ioctl_hyper_dmabuf_export_remote *)data;
 	struct dma_buf *dma_buf;
 	struct dma_buf_attachment *attachment;
 	struct sg_table *sgt;
-	struct hyper_dmabuf_pages_info *page_info;
-	struct hyper_dmabuf_sgt_info *sgt_info;
+	struct pages_info *pg_info;
+	struct exported_sgt_info *exported;
 	hyper_dmabuf_id_t hid;
 	int ret = 0;
 
-	if (!data) {
-		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
-		return -EINVAL;
-	}
-
-	export_remote_attr = (struct ioctl_hyper_dmabuf_export_remote *)data;
-
-	if (hyper_dmabuf_private.domid == export_remote_attr->remote_domain) {
-		dev_err(hyper_dmabuf_private.device,
+	if (hy_drv_priv->domid == export_remote_attr->remote_domain) {
+		dev_err(hy_drv_priv->dev,
 			"exporting to the same VM is not permitted\n");
 		return -EINVAL;
 	}
@@ -157,7 +149,7 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 	dma_buf = dma_buf_get(export_remote_attr->dmabuf_fd);
 
 	if (IS_ERR(dma_buf)) {
-		dev_err(hyper_dmabuf_private.device,  "Cannot get dma buf\n");
+		dev_err(hy_drv_priv->dev, "Cannot get dma buf\n");
 		return PTR_ERR(dma_buf);
 	}
 
@@ -165,69 +157,79 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 	 * to the same domain and if yes and it's valid sgt_info,
 	 * it returns hyper_dmabuf_id of pre-exported sgt_info
 	 */
-	hid = hyper_dmabuf_find_hid_exported(dma_buf, export_remote_attr->remote_domain);
+	hid = hyper_dmabuf_find_hid_exported(dma_buf,
+					     export_remote_attr->remote_domain);
 	if (hid.id != -1) {
-		sgt_info = hyper_dmabuf_find_exported(hid);
-		if (sgt_info != NULL) {
-			if (sgt_info->valid) {
+		exported = hyper_dmabuf_find_exported(hid);
+		if (exported != NULL) {
+			if (exported->valid) {
 				/*
 				 * Check if unexport is already scheduled for that buffer,
 				 * if so try to cancel it. If that will fail, buffer needs
 				 * to be reexport once again.
 				 */
-				if (sgt_info->unexport_scheduled) {
-					if (!cancel_delayed_work_sync(&sgt_info->unexport_work)) {
+				if (exported->unexport_sched) {
+					if (!cancel_delayed_work_sync(&exported->unexport)) {
 						dma_buf_put(dma_buf);
 						goto reexport;
 					}
-					sgt_info->unexport_scheduled = 0;
+					exported->unexport_sched = false;
 				}
 
 				/* if there's any change in size of private data.
 				 * we reallocate space for private data with new size */
-				if (export_remote_attr->sz_priv != sgt_info->sz_priv) {
-					kfree(sgt_info->priv);
+				if (export_remote_attr->sz_priv != exported->sz_priv) {
+					kfree(exported->priv);
 
 					/* truncating size */
 					if (export_remote_attr->sz_priv > MAX_SIZE_PRIV_DATA) {
-						sgt_info->sz_priv = MAX_SIZE_PRIV_DATA;
+						exported->sz_priv = MAX_SIZE_PRIV_DATA;
 					} else {
-						sgt_info->sz_priv = export_remote_attr->sz_priv;
+						exported->sz_priv = export_remote_attr->sz_priv;
 					}
 
-					sgt_info->priv = kcalloc(1, sgt_info->sz_priv, GFP_KERNEL);
+					exported->priv = kcalloc(1, exported->sz_priv, GFP_KERNEL);
 
-					if(!sgt_info->priv) {
-						dev_err(hyper_dmabuf_private.device,
-							"Can't reallocate priv because there's no more space left\n");
-						hyper_dmabuf_remove_exported(sgt_info->hid);
-						hyper_dmabuf_cleanup_sgt_info(sgt_info, true);
-						kfree(sgt_info);
+					if(!exported->priv) {
+						dev_err(hy_drv_priv->dev,
+							"no more space left for priv\n");
+						hyper_dmabuf_remove_exported(exported->hid);
+						hyper_dmabuf_cleanup_sgt_info(exported, true);
+						kfree(exported);
+						dma_buf_put(dma_buf);
 						return -ENOMEM;
 					}
 				}
 
 				/* update private data in sgt_info with new ones */
-				copy_from_user(sgt_info->priv, export_remote_attr->priv, sgt_info->sz_priv);
-
-				/* send an export msg for updating priv in importer */
-				ret = hyper_dmabuf_send_export_msg(sgt_info, NULL);
-
-				if (ret < 0) {
-					dev_err(hyper_dmabuf_private.device, "Failed to send a new private data\n");
+				ret = copy_from_user(exported->priv, export_remote_attr->priv,
+						     exported->sz_priv);
+				if (ret) {
+					dev_err(hy_drv_priv->dev,
+						"Failed to load a new private data\n");
+					ret = -EINVAL;
+				} else {
+					/* send an export msg for updating priv in importer */
+					ret = hyper_dmabuf_send_export_msg(exported, NULL);
+
+					if (ret < 0) {
+						dev_err(hy_drv_priv->dev,
+							"Failed to send a new private data\n");
+						ret = -EBUSY;
+					}
 				}
 
 				dma_buf_put(dma_buf);
 				export_remote_attr->hid = hid;
-				return 0;
+				return ret;
 			}
 		}
 	}
 
 reexport:
-	attachment = dma_buf_attach(dma_buf, hyper_dmabuf_private.device);
+	attachment = dma_buf_attach(dma_buf, hy_drv_priv->dev);
 	if (IS_ERR(attachment)) {
-		dev_err(hyper_dmabuf_private.device, "Cannot get attachment\n");
+		dev_err(hy_drv_priv->dev, "Cannot get attachment\n");
 		ret = PTR_ERR(attachment);
 		goto fail_attach;
 	}
@@ -235,154 +237,165 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 	sgt = dma_buf_map_attachment(attachment, DMA_BIDIRECTIONAL);
 
 	if (IS_ERR(sgt)) {
-		dev_err(hyper_dmabuf_private.device, "Cannot map attachment\n");
+		dev_err(hy_drv_priv->dev, "Cannot map attachment\n");
 		ret = PTR_ERR(sgt);
 		goto fail_map_attachment;
 	}
 
-	sgt_info = kcalloc(1, sizeof(*sgt_info), GFP_KERNEL);
+	exported = kcalloc(1, sizeof(*exported), GFP_KERNEL);
 
-	if(!sgt_info) {
-		dev_err(hyper_dmabuf_private.device, "no more space left\n");
+	if(!exported) {
+		dev_err(hy_drv_priv->dev, "no more space left\n");
 		ret = -ENOMEM;
 		goto fail_sgt_info_creation;
 	}
 
 	/* possible truncation */
 	if (export_remote_attr->sz_priv > MAX_SIZE_PRIV_DATA) {
-		sgt_info->sz_priv = MAX_SIZE_PRIV_DATA;
+		exported->sz_priv = MAX_SIZE_PRIV_DATA;
 	} else {
-		sgt_info->sz_priv = export_remote_attr->sz_priv;
+		exported->sz_priv = export_remote_attr->sz_priv;
 	}
 
 	/* creating buffer for private data of buffer */
-	if(sgt_info->sz_priv != 0) {
-		sgt_info->priv = kcalloc(1, sgt_info->sz_priv, GFP_KERNEL);
+	if(exported->sz_priv != 0) {
+		exported->priv = kcalloc(1, exported->sz_priv, GFP_KERNEL);
 
-		if(!sgt_info->priv) {
-			dev_err(hyper_dmabuf_private.device, "no more space left\n");
+		if(!exported->priv) {
+			dev_err(hy_drv_priv->dev, "no more space left\n");
 			ret = -ENOMEM;
 			goto fail_priv_creation;
 		}
 	} else {
-		dev_err(hyper_dmabuf_private.device, "size is 0\n");
+		dev_err(hy_drv_priv->dev, "size is 0\n");
 	}
 
-	sgt_info->hid = hyper_dmabuf_get_hid();
+	exported->hid = hyper_dmabuf_get_hid();
 
 	/* no more exported dmabuf allowed */
-	if(sgt_info->hid.id == -1) {
-		dev_err(hyper_dmabuf_private.device,
+	if(exported->hid.id == -1) {
+		dev_err(hy_drv_priv->dev,
 			"exceeds allowed number of dmabuf to be exported\n");
 		ret = -ENOMEM;
 		goto fail_sgt_info_creation;
 	}
 
-	/* TODO: We might need to consider using port number on event channel? */
-	sgt_info->hyper_dmabuf_rdomain = export_remote_attr->remote_domain;
-	sgt_info->dma_buf = dma_buf;
-	sgt_info->valid = 1;
+	exported->rdomid = export_remote_attr->remote_domain;
+	exported->dma_buf = dma_buf;
+	exported->valid = true;
 
-	sgt_info->active_sgts = kmalloc(sizeof(struct sgt_list), GFP_KERNEL);
-	if (!sgt_info->active_sgts) {
-		dev_err(hyper_dmabuf_private.device, "no more space left\n");
+	exported->active_sgts = kmalloc(sizeof(struct sgt_list), GFP_KERNEL);
+	if (!exported->active_sgts) {
+		dev_err(hy_drv_priv->dev, "no more space left\n");
 		ret = -ENOMEM;
 		goto fail_map_active_sgts;
 	}
 
-	sgt_info->active_attached = kmalloc(sizeof(struct attachment_list), GFP_KERNEL);
-	if (!sgt_info->active_attached) {
-		dev_err(hyper_dmabuf_private.device, "no more space left\n");
+	exported->active_attached = kmalloc(sizeof(struct attachment_list), GFP_KERNEL);
+	if (!exported->active_attached) {
+		dev_err(hy_drv_priv->dev, "no more space left\n");
 		ret = -ENOMEM;
 		goto fail_map_active_attached;
 	}
 
-	sgt_info->va_kmapped = kmalloc(sizeof(struct kmap_vaddr_list), GFP_KERNEL);
-	if (!sgt_info->va_kmapped) {
-		dev_err(hyper_dmabuf_private.device, "no more space left\n");
+	exported->va_kmapped = kmalloc(sizeof(struct kmap_vaddr_list), GFP_KERNEL);
+	if (!exported->va_kmapped) {
+		dev_err(hy_drv_priv->dev, "no more space left\n");
 		ret = -ENOMEM;
 		goto fail_map_va_kmapped;
 	}
 
-	sgt_info->va_vmapped = kmalloc(sizeof(struct vmap_vaddr_list), GFP_KERNEL);
-	if (!sgt_info->va_vmapped) {
-		dev_err(hyper_dmabuf_private.device, "no more space left\n");
+	exported->va_vmapped = kmalloc(sizeof(struct vmap_vaddr_list), GFP_KERNEL);
+	if (!exported->va_vmapped) {
+		dev_err(hy_drv_priv->dev, "no more space left\n");
 		ret = -ENOMEM;
 		goto fail_map_va_vmapped;
 	}
 
-	sgt_info->active_sgts->sgt = sgt;
-	sgt_info->active_attached->attach = attachment;
-	sgt_info->va_kmapped->vaddr = NULL;
-	sgt_info->va_vmapped->vaddr = NULL;
+	exported->active_sgts->sgt = sgt;
+	exported->active_attached->attach = attachment;
+	exported->va_kmapped->vaddr = NULL;
+	exported->va_vmapped->vaddr = NULL;
 
 	/* initialize list of sgt, attachment and vaddr for dmabuf sync
 	 * via shadow dma-buf
 	 */
-	INIT_LIST_HEAD(&sgt_info->active_sgts->list);
-	INIT_LIST_HEAD(&sgt_info->active_attached->list);
-	INIT_LIST_HEAD(&sgt_info->va_kmapped->list);
-	INIT_LIST_HEAD(&sgt_info->va_vmapped->list);
+	INIT_LIST_HEAD(&exported->active_sgts->list);
+	INIT_LIST_HEAD(&exported->active_attached->list);
+	INIT_LIST_HEAD(&exported->va_kmapped->list);
+	INIT_LIST_HEAD(&exported->va_vmapped->list);
 
 	/* copy private data to sgt_info */
-	copy_from_user(sgt_info->priv, export_remote_attr->priv, sgt_info->sz_priv);
+	ret = copy_from_user(exported->priv, export_remote_attr->priv,
+			     exported->sz_priv);
 
-	page_info = hyper_dmabuf_ext_pgs(sgt);
-	if (!page_info) {
-		dev_err(hyper_dmabuf_private.device, "failed to construct page_info\n");
+	if (ret) {
+		dev_err(hy_drv_priv->dev,
+			"failed to load private data\n");
+		ret = -EINVAL;
 		goto fail_export;
 	}
 
-	sgt_info->nents = page_info->nents;
+	pg_info = hyper_dmabuf_ext_pgs(sgt);
+	if (!pg_info) {
+		dev_err(hy_drv_priv->dev,
+			"failed to construct pg_info\n");
+		ret = -ENOMEM;
+		goto fail_export;
+	}
+
+	exported->nents = pg_info->nents;
 
 	/* now register it to export list */
-	hyper_dmabuf_register_exported(sgt_info);
+	hyper_dmabuf_register_exported(exported);
 
-	export_remote_attr->hid = sgt_info->hid;
+	export_remote_attr->hid = exported->hid;
 
-	ret = hyper_dmabuf_send_export_msg(sgt_info, page_info);
+	ret = hyper_dmabuf_send_export_msg(exported, pg_info);
 
 	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device, "failed to send out the export request\n");
+		dev_err(hy_drv_priv->dev,
+			"failed to send out the export request\n");
 		goto fail_send_request;
 	}
 
-	/* free page_info */
-	kfree(page_info->pages);
-	kfree(page_info);
+	/* free pg_info */
+	kfree(pg_info->pgs);
+	kfree(pg_info);
 
-	sgt_info->filp = filp;
+	exported->filp = filp;
 
 	return ret;
 
 /* Clean-up if error occurs */
 
 fail_send_request:
-	hyper_dmabuf_remove_exported(sgt_info->hid);
+	hyper_dmabuf_remove_exported(exported->hid);
 
-	/* free page_info */
-	kfree(page_info->pages);
-	kfree(page_info);
+	/* free pg_info */
+	kfree(pg_info->pgs);
+	kfree(pg_info);
 
 fail_export:
-	kfree(sgt_info->va_vmapped);
+	kfree(exported->va_vmapped);
 
 fail_map_va_vmapped:
-	kfree(sgt_info->va_kmapped);
+	kfree(exported->va_kmapped);
 
 fail_map_va_kmapped:
-	kfree(sgt_info->active_attached);
+	kfree(exported->active_attached);
 
 fail_map_active_attached:
-	kfree(sgt_info->active_sgts);
-	kfree(sgt_info->priv);
+	kfree(exported->active_sgts);
+	kfree(exported->priv);
 
 fail_priv_creation:
-	kfree(sgt_info);
+	kfree(exported);
 
 fail_map_active_sgts:
 fail_sgt_info_creation:
-	dma_buf_unmap_attachment(attachment, sgt, DMA_BIDIRECTIONAL);
+	dma_buf_unmap_attachment(attachment, sgt,
+				 DMA_BIDIRECTIONAL);
 
 fail_map_attachment:
 	dma_buf_detach(dma_buf, attachment);
@@ -395,143 +408,136 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 
 static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 {
-	struct ioctl_hyper_dmabuf_export_fd *export_fd_attr;
-	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct ioctl_hyper_dmabuf_export_fd *export_fd_attr =
+			(struct ioctl_hyper_dmabuf_export_fd *)data;
+	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
+	struct imported_sgt_info *imported;
 	struct hyper_dmabuf_req *req;
-	struct page **data_pages;
-	int operands[4];
+	struct page **data_pgs;
+	int op[4];
 	int i;
 	int ret = 0;
 
-	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
-
-	if (!data) {
-		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
-		return -EINVAL;
-	}
-
-	export_fd_attr = (struct ioctl_hyper_dmabuf_export_fd *)data;
+	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
 
 	/* look for dmabuf for the id */
-	sgt_info = hyper_dmabuf_find_imported(export_fd_attr->hid);
+	imported = hyper_dmabuf_find_imported(export_fd_attr->hid);
 
 	/* can't find sgt from the table */
-	if (!sgt_info) {
-		dev_err(hyper_dmabuf_private.device, "can't find the entry\n");
+	if (!imported) {
+		dev_err(hy_drv_priv->dev, "can't find the entry\n");
 		return -ENOENT;
 	}
 
-	mutex_lock(&hyper_dmabuf_private.lock);
+	mutex_lock(&hy_drv_priv->lock);
 
-	sgt_info->num_importers++;
+	imported->importers++;
 
 	/* send notification for export_fd to exporter */
-	operands[0] = sgt_info->hid.id;
+	op[0] = imported->hid.id;
 
 	for (i=0; i<3; i++)
-		operands[i+1] = sgt_info->hid.rng_key[i];
+		op[i+1] = imported->hid.rng_key[i];
 
-	dev_dbg(hyper_dmabuf_private.device, "Exporting fd of buffer {id:%d key:%d %d %d}\n",
-		sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
-		sgt_info->hid.rng_key[2]);
+	dev_dbg(hy_drv_priv->dev, "Exporting fd of buffer {id:%d key:%d %d %d}\n",
+		imported->hid.id, imported->hid.rng_key[0], imported->hid.rng_key[1],
+		imported->hid.rng_key[2]);
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
 	if (!req) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"No memory left to be allocated\n");
 		return -ENOMEM;
 	}
 
-	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT_FD, &operands[0]);
+	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD, &op[0]);
 
-	ret = ops->send_req(HYPER_DMABUF_DOM_ID(sgt_info->hid), req, true);
+	ret = ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req, true);
 
 	if (ret < 0) {
 		/* in case of timeout other end eventually will receive request, so we need to undo it */
-		hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT_FD_FAILED, &operands[0]);
-		ops->send_req(operands[0], req, false);
+		hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD_FAILED, &op[0]);
+		ops->send_req(op[0], req, false);
 		kfree(req);
-		dev_err(hyper_dmabuf_private.device, "Failed to create sgt or notify exporter\n");
-		sgt_info->num_importers--;
-		mutex_unlock(&hyper_dmabuf_private.lock);
+		dev_err(hy_drv_priv->dev, "Failed to create sgt or notify exporter\n");
+		imported->importers--;
+		mutex_unlock(&hy_drv_priv->lock);
 		return ret;
 	}
 
 	kfree(req);
 
 	if (ret == HYPER_DMABUF_REQ_ERROR) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"Buffer invalid {id:%d key:%d %d %d}, cannot import\n",
-			sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
-			sgt_info->hid.rng_key[2]);
+			imported->hid.id, imported->hid.rng_key[0], imported->hid.rng_key[1],
+			imported->hid.rng_key[2]);
 
-		sgt_info->num_importers--;
-		mutex_unlock(&hyper_dmabuf_private.lock);
+		imported->importers--;
+		mutex_unlock(&hy_drv_priv->lock);
 		return -EINVAL;
 	} else {
-		dev_dbg(hyper_dmabuf_private.device, "Can import buffer {id:%d key:%d %d %d}\n",
-			sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
-			sgt_info->hid.rng_key[2]);
+		dev_dbg(hy_drv_priv->dev, "Can import buffer {id:%d key:%d %d %d}\n",
+			imported->hid.id, imported->hid.rng_key[0], imported->hid.rng_key[1],
+			imported->hid.rng_key[2]);
 
 		ret = 0;
 	}
 
-	dev_dbg(hyper_dmabuf_private.device,
-		  "%s Found buffer gref %d  off %d last len %d nents %d domain %d\n", __func__,
-		  sgt_info->ref_handle, sgt_info->frst_ofst,
-		  sgt_info->last_len, sgt_info->nents,
-		  HYPER_DMABUF_DOM_ID(sgt_info->hid));
+	dev_dbg(hy_drv_priv->dev,
+		  "%s Found buffer gref %d  off %d last len %d nents %d domain %d\n",
+		  __func__, imported->ref_handle, imported->frst_ofst,
+		  imported->last_len, imported->nents, HYPER_DMABUF_DOM_ID(imported->hid));
 
-	if (!sgt_info->sgt) {
-		dev_dbg(hyper_dmabuf_private.device,
+	if (!imported->sgt) {
+		dev_dbg(hy_drv_priv->dev,
 			"%s buffer {id:%d key:%d %d %d} pages not mapped yet\n", __func__,
-			sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
-			sgt_info->hid.rng_key[2]);
+			imported->hid.id, imported->hid.rng_key[0], imported->hid.rng_key[1],
+			imported->hid.rng_key[2]);
 
-		data_pages = ops->map_shared_pages(sgt_info->ref_handle,
-						   HYPER_DMABUF_DOM_ID(sgt_info->hid),
-						   sgt_info->nents,
-						   &sgt_info->refs_info);
+		data_pgs = ops->map_shared_pages(imported->ref_handle,
+						   HYPER_DMABUF_DOM_ID(imported->hid),
+						   imported->nents,
+						   &imported->refs_info);
 
-		if (!data_pages) {
-			dev_err(hyper_dmabuf_private.device,
+		if (!data_pgs) {
+			dev_err(hy_drv_priv->dev,
 				"Cannot map pages of buffer {id:%d key:%d %d %d}\n",
-				sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
-				sgt_info->hid.rng_key[2]);
+				imported->hid.id, imported->hid.rng_key[0], imported->hid.rng_key[1],
+				imported->hid.rng_key[2]);
 
-			sgt_info->num_importers--;
+			imported->importers--;
 			req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
 			if (!req) {
-				dev_err(hyper_dmabuf_private.device,
+				dev_err(hy_drv_priv->dev,
 					"No more space left\n");
 				return -ENOMEM;
 			}
 
-			hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT_FD_FAILED, &operands[0]);
-			ops->send_req(HYPER_DMABUF_DOM_ID(sgt_info->hid), req, false);
+			hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD_FAILED, &op[0]);
+			ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req, false);
 			kfree(req);
-			mutex_unlock(&hyper_dmabuf_private.lock);
+			mutex_unlock(&hy_drv_priv->lock);
 			return -EINVAL;
 		}
 
-		sgt_info->sgt = hyper_dmabuf_create_sgt(data_pages, sgt_info->frst_ofst,
-							sgt_info->last_len, sgt_info->nents);
+		imported->sgt = hyper_dmabuf_create_sgt(data_pgs, imported->frst_ofst,
+							imported->last_len, imported->nents);
 
 	}
 
-	export_fd_attr->fd = hyper_dmabuf_export_fd(sgt_info, export_fd_attr->flags);
+	export_fd_attr->fd = hyper_dmabuf_export_fd(imported, export_fd_attr->flags);
 
 	if (export_fd_attr->fd < 0) {
 		/* fail to get fd */
 		ret = export_fd_attr->fd;
 	}
 
-	mutex_unlock(&hyper_dmabuf_private.lock);
+	mutex_unlock(&hy_drv_priv->lock);
 
-	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
 	return ret;
 }
 
@@ -541,50 +547,51 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 static void hyper_dmabuf_delayed_unexport(struct work_struct *work)
 {
 	struct hyper_dmabuf_req *req;
+	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
+	struct exported_sgt_info *exported =
+		container_of(work, struct exported_sgt_info, unexport.work);
+	int op[4];
 	int i, ret;
-	int operands[4];
-	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
-	struct hyper_dmabuf_sgt_info *sgt_info =
-		container_of(work, struct hyper_dmabuf_sgt_info, unexport_work.work);
 
-	if (!sgt_info)
+	if (!exported)
 		return;
 
-	dev_dbg(hyper_dmabuf_private.device,
+	dev_dbg(hy_drv_priv->dev,
 		"Marking buffer {id:%d key:%d %d %d} as invalid\n",
-		sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
-		sgt_info->hid.rng_key[2]);
+		exported->hid.id, exported->hid.rng_key[0],
+		exported->hid.rng_key[1], exported->hid.rng_key[2]);
 
 	/* no longer valid */
-	sgt_info->valid = 0;
+	exported->valid = false;
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
 	if (!req) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"No memory left to be allocated\n");
 		return;
 	}
 
-	operands[0] = sgt_info->hid.id;
+	op[0] = exported->hid.id;
 
 	for (i=0; i<3; i++)
-		operands[i+1] = sgt_info->hid.rng_key[i];
+		op[i+1] = exported->hid.rng_key[i];
 
-	hyper_dmabuf_create_request(req, HYPER_DMABUF_NOTIFY_UNEXPORT, &operands[0]);
+	hyper_dmabuf_create_req(req, HYPER_DMABUF_NOTIFY_UNEXPORT, &op[0]);
 
-	/* Now send unexport request to remote domain, marking that buffer should not be used anymore */
-	ret = ops->send_req(sgt_info->hyper_dmabuf_rdomain, req, true);
+	/* Now send unexport request to remote domain, marking
+	 * that the buffer should not be used anymore */
+	ret = ops->send_req(exported->rdomid, req, true);
 	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"unexport message for buffer {id:%d key:%d %d %d} failed\n",
-			sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
-			sgt_info->hid.rng_key[2]);
+			exported->hid.id, exported->hid.rng_key[0],
+			exported->hid.rng_key[1], exported->hid.rng_key[2]);
 	}
 
 	/* free msg */
 	kfree(req);
-	sgt_info->unexport_scheduled = 0;
+	exported->unexport_sched = false;
 
 	/*
 	 * Immediately clean up if it has never been exported by the importer
@@ -593,104 +600,94 @@ static void hyper_dmabuf_delayed_unexport(struct work_struct *work)
 	 * is called (importer does this only when there's no
 	 * consumer of locally exported FDs)
 	 */
-	if (!sgt_info->importer_exported) {
-		dev_dbg(hyper_dmabuf_private.device,
+	if (exported->active == 0) {
+		dev_dbg(hy_drv_priv->dev,
 			"claning up buffer {id:%d key:%d %d %d} completly\n",
-			sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
-			sgt_info->hid.rng_key[2]);
+			exported->hid.id, exported->hid.rng_key[0],
+			exported->hid.rng_key[1], exported->hid.rng_key[2]);
+
+		hyper_dmabuf_cleanup_sgt_info(exported, false);
+		hyper_dmabuf_remove_exported(exported->hid);
 
-		hyper_dmabuf_cleanup_sgt_info(sgt_info, false);
-		hyper_dmabuf_remove_exported(sgt_info->hid);
 		/* register hyper_dmabuf_id to the list for reuse */
-		store_reusable_hid(sgt_info->hid);
+		store_reusable_hid(exported->hid);
 
-		if (sgt_info->sz_priv > 0 && !sgt_info->priv)
-			kfree(sgt_info->priv);
+		if (exported->sz_priv > 0 && exported->priv)
+			kfree(exported->priv);
 
-		kfree(sgt_info);
+		kfree(exported);
 	}
 }
 
-/* Schedules unexport of dmabuf.
+/* Schedule unexport of dmabuf.
  */
-static int hyper_dmabuf_unexport_ioctl(struct file *filp, void *data)
+int hyper_dmabuf_unexport_ioctl(struct file *filp, void *data)
 {
-	struct ioctl_hyper_dmabuf_unexport *unexport_attr;
-	struct hyper_dmabuf_sgt_info *sgt_info;
+	struct ioctl_hyper_dmabuf_unexport *unexport_attr =
+			(struct ioctl_hyper_dmabuf_unexport *)data;
+	struct exported_sgt_info *exported;
 
-	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
-
-	if (!data) {
-		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
-		return -EINVAL;
-	}
-
-	unexport_attr = (struct ioctl_hyper_dmabuf_unexport *)data;
+	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
 
 	/* find dmabuf in export list */
-	sgt_info = hyper_dmabuf_find_exported(unexport_attr->hid);
+	exported = hyper_dmabuf_find_exported(unexport_attr->hid);
 
-	dev_dbg(hyper_dmabuf_private.device,
+	dev_dbg(hy_drv_priv->dev,
 		"scheduling unexport of buffer {id:%d key:%d %d %d}\n",
 		unexport_attr->hid.id, unexport_attr->hid.rng_key[0],
 		unexport_attr->hid.rng_key[1], unexport_attr->hid.rng_key[2]);
 
 	/* failed to find corresponding entry in export list */
-	if (sgt_info == NULL) {
+	if (exported == NULL) {
 		unexport_attr->status = -ENOENT;
 		return -ENOENT;
 	}
 
-	if (sgt_info->unexport_scheduled)
+	if (exported->unexport_sched)
 		return 0;
 
-	sgt_info->unexport_scheduled = 1;
-	INIT_DELAYED_WORK(&sgt_info->unexport_work, hyper_dmabuf_delayed_unexport);
-	schedule_delayed_work(&sgt_info->unexport_work,
+	exported->unexport_sched = true;
+	INIT_DELAYED_WORK(&exported->unexport,
+			  hyper_dmabuf_delayed_unexport);
+	schedule_delayed_work(&exported->unexport,
 			      msecs_to_jiffies(unexport_attr->delay_ms));
 
-	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
 	return 0;
 }
 
 static int hyper_dmabuf_query_ioctl(struct file *filp, void *data)
 {
-	struct ioctl_hyper_dmabuf_query *query_attr;
-	struct hyper_dmabuf_sgt_info *sgt_info = NULL;
-	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info = NULL;
+	struct ioctl_hyper_dmabuf_query *query_attr =
+			(struct ioctl_hyper_dmabuf_query *)data;
+	struct exported_sgt_info *exported = NULL;
+	struct imported_sgt_info *imported = NULL;
 	int ret = 0;
 
-	if (!data) {
-		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
-		return -EINVAL;
-	}
-
-	query_attr = (struct ioctl_hyper_dmabuf_query *)data;
-
-	if (HYPER_DMABUF_DOM_ID(query_attr->hid) == hyper_dmabuf_private.domid) {
+	if (HYPER_DMABUF_DOM_ID(query_attr->hid) == hy_drv_priv->domid) {
 		/* query for exported dmabuf */
-		sgt_info = hyper_dmabuf_find_exported(query_attr->hid);
-		if (sgt_info) {
-			ret = hyper_dmabuf_query_exported(sgt_info,
+		exported = hyper_dmabuf_find_exported(query_attr->hid);
+		if (exported) {
+			ret = hyper_dmabuf_query_exported(exported,
 							  query_attr->item, &query_attr->info);
 		} else {
-			dev_err(hyper_dmabuf_private.device,
-				"DMA BUF {id:%d key:%d %d %d} can't be found in the export list\n",
-				query_attr->hid.id, query_attr->hid.rng_key[0], query_attr->hid.rng_key[1],
-				query_attr->hid.rng_key[2]);
+			dev_err(hy_drv_priv->dev,
+				"DMA BUF {id:%d key:%d %d %d} not in the export list\n",
+				query_attr->hid.id, query_attr->hid.rng_key[0],
+				query_attr->hid.rng_key[1], query_attr->hid.rng_key[2]);
 			return -ENOENT;
 		}
 	} else {
 		/* query for imported dmabuf */
-		imported_sgt_info = hyper_dmabuf_find_imported(query_attr->hid);
-		if (imported_sgt_info) {
-			ret = hyper_dmabuf_query_imported(imported_sgt_info,
-							  query_attr->item, &query_attr->info);
+		imported = hyper_dmabuf_find_imported(query_attr->hid);
+		if (imported) {
+			ret = hyper_dmabuf_query_imported(imported, query_attr->item,
+							  &query_attr->info);
 		} else {
-			dev_err(hyper_dmabuf_private.device,
-				"DMA BUF {id:%d key:%d %d %d} can't be found in the imported list\n",
-				query_attr->hid.id, query_attr->hid.rng_key[0], query_attr->hid.rng_key[1],
-				query_attr->hid.rng_key[2]);
+			dev_err(hy_drv_priv->dev,
+				"DMA BUF {id:%d key:%d %d %d} not in the imported list\n",
+				query_attr->hid.id, query_attr->hid.rng_key[0],
+				query_attr->hid.rng_key[1], query_attr->hid.rng_key[2]);
 			return -ENOENT;
 		}
 	}
@@ -698,28 +695,6 @@ static int hyper_dmabuf_query_ioctl(struct file *filp, void *data)
 	return ret;
 }
 
-void hyper_dmabuf_emergency_release(struct hyper_dmabuf_sgt_info* sgt_info,
-				    void *attr)
-{
-	struct ioctl_hyper_dmabuf_unexport unexport_attr;
-	struct file *filp = (struct file*) attr;
-
-	if (!filp || !sgt_info)
-		return;
-
-	if (sgt_info->filp == filp) {
-		dev_dbg(hyper_dmabuf_private.device,
-			"Executing emergency release of buffer {id:%d key:%d %d %d}\n",
-			 sgt_info->hid.id, sgt_info->hid.rng_key[0],
-			 sgt_info->hid.rng_key[1], sgt_info->hid.rng_key[2]);
-
-		unexport_attr.hid = sgt_info->hid;
-		unexport_attr.delay_ms = 0;
-
-		hyper_dmabuf_unexport_ioctl(filp, &unexport_attr);
-	}
-}
-
 const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_TX_CH_SETUP, hyper_dmabuf_tx_ch_setup_ioctl, 0),
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_RX_CH_SETUP, hyper_dmabuf_rx_ch_setup_ioctl, 0),
@@ -739,7 +714,7 @@ long hyper_dmabuf_ioctl(struct file *filp,
 	char *kdata;
 
 	if (nr >= ARRAY_SIZE(hyper_dmabuf_ioctls)) {
-		dev_err(hyper_dmabuf_private.device, "invalid ioctl\n");
+		dev_err(hy_drv_priv->dev, "invalid ioctl\n");
 		return -EINVAL;
 	}
 
@@ -748,18 +723,18 @@ long hyper_dmabuf_ioctl(struct file *filp,
 	func = ioctl->func;
 
 	if (unlikely(!func)) {
-		dev_err(hyper_dmabuf_private.device, "no function\n");
+		dev_err(hy_drv_priv->dev, "no function\n");
 		return -EINVAL;
 	}
 
 	kdata = kmalloc(_IOC_SIZE(cmd), GFP_KERNEL);
 	if (!kdata) {
-		dev_err(hyper_dmabuf_private.device, "no memory\n");
+		dev_err(hy_drv_priv->dev, "no memory\n");
 		return -ENOMEM;
 	}
 
 	if (copy_from_user(kdata, (void __user *)param, _IOC_SIZE(cmd)) != 0) {
-		dev_err(hyper_dmabuf_private.device, "failed to copy from user arguments\n");
+		dev_err(hy_drv_priv->dev, "failed to copy from user arguments\n");
 		ret = -EFAULT;
 		goto ioctl_error;
 	}
@@ -767,7 +742,7 @@ long hyper_dmabuf_ioctl(struct file *filp,
 	ret = func(filp, kdata);
 
 	if (copy_to_user((void __user *)param, kdata, _IOC_SIZE(cmd)) != 0) {
-		dev_err(hyper_dmabuf_private.device, "failed to copy to user arguments\n");
+		dev_err(hy_drv_priv->dev, "failed to copy to user arguments\n");
 		ret = -EFAULT;
 		goto ioctl_error;
 	}
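
The unexport path above is built on the kernel's delayed-work API: the ioctl only marks unexport_sched and queues hyper_dmabuf_delayed_unexport() to run after delay_ms. A minimal sketch of the same pattern, with hypothetical my_* names standing in for the driver's structures:

	#include <linux/workqueue.h>
	#include <linux/jiffies.h>
	#include <linux/slab.h>

	struct my_obj {
		bool teardown_sched;
		struct delayed_work teardown;
	};

	static void my_teardown(struct work_struct *work)
	{
		/* recover the container from the embedded work item */
		struct my_obj *obj =
			container_of(work, struct my_obj, teardown.work);

		obj->teardown_sched = false;
		/* ... notify peers, then free obj if nothing uses it ... */
	}

	static void my_schedule_teardown(struct my_obj *obj, int delay_ms)
	{
		if (obj->teardown_sched)
			return;		/* already queued */

		obj->teardown_sched = true;
		INIT_DELAYED_WORK(&obj->teardown, my_teardown);
		schedule_delayed_work(&obj->teardown,
				      msecs_to_jiffies(delay_ms));
	}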
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
index ebfbb84..3e9470a 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
@@ -42,4 +42,6 @@ struct hyper_dmabuf_ioctl_desc {
 			.name = #ioctl			\
 	}
 
+int hyper_dmabuf_unexport_ioctl(struct file *filp, void *data);
+
 #endif //__HYPER_DMABUF_IOCTL_H__
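
For reference, user space drives this header's unexport entry point roughly as below. This is a hedged sketch: the hyper_dmabuf_id_t layout mirrors the fields used in this patch, while the device node fd and the IOCTL_HYPER_DMABUF_UNEXPORT request code are assumed to come from the driver's uapi header and are not shown here.

	#include <sys/ioctl.h>

	/* mirrors the id layout used throughout this patch */
	typedef struct {
		int id;
		int rng_key[3];
	} hyper_dmabuf_id_t;

	/* mirrors struct ioctl_hyper_dmabuf_unexport used above */
	struct ioctl_hyper_dmabuf_unexport {
		hyper_dmabuf_id_t hid;	/* buffer to tear down */
		int delay_ms;		/* grace period before the work runs */
		int status;
	};

	int unexport_buffer(int dev_fd, hyper_dmabuf_id_t hid)
	{
		struct ioctl_hyper_dmabuf_unexport arg = {
			.hid = hid,
			.delay_ms = 1000,	/* give importers 1s to drop fds */
		};

		/* IOCTL_HYPER_DMABUF_UNEXPORT is assumed here; the real
		 * request code comes from the driver's uapi header. */
		return ioctl(dev_fd, IOCTL_HYPER_DMABUF_UNEXPORT, &arg);
	}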
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
index eaef2c1..1b3745e 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
@@ -39,24 +39,22 @@
 #include "hyper_dmabuf_id.h"
 #include "hyper_dmabuf_event.h"
 
-extern struct hyper_dmabuf_private hyper_dmabuf_private;
-
 DECLARE_HASHTABLE(hyper_dmabuf_hash_imported, MAX_ENTRY_IMPORTED);
 DECLARE_HASHTABLE(hyper_dmabuf_hash_exported, MAX_ENTRY_EXPORTED);
 
 #ifdef CONFIG_HYPER_DMABUF_SYSFS
 static ssize_t hyper_dmabuf_imported_show(struct device *drv, struct device_attribute *attr, char *buf)
 {
-	struct hyper_dmabuf_info_entry_imported *info_entry;
+	struct list_entry_imported *info_entry;
 	int bkt;
 	ssize_t count = 0;
 	size_t total = 0;
 
 	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node) {
-		hyper_dmabuf_id_t hid = info_entry->info->hid;
-		int nents = info_entry->info->nents;
-		bool valid = info_entry->info->valid;
-		int num_importers = info_entry->info->num_importers;
+		hyper_dmabuf_id_t hid = info_entry->imported->hid;
+		int nents = info_entry->imported->nents;
+		bool valid = info_entry->imported->valid;
+		int num_importers = info_entry->imported->importers;
 		total += nents;
 		count += scnprintf(buf + count, PAGE_SIZE - count,
 				   "hid:{id:%d keys:%d %d %d}, nents:%d, v:%c, numi:%d\n",
@@ -71,16 +69,16 @@ static ssize_t hyper_dmabuf_imported_show(struct device *drv, struct device_attr
 
 static ssize_t hyper_dmabuf_exported_show(struct device *drv, struct device_attribute *attr, char *buf)
 {
-	struct hyper_dmabuf_info_entry_exported *info_entry;
+	struct list_entry_exported *info_entry;
 	int bkt;
 	ssize_t count = 0;
 	size_t total = 0;
 
 	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node) {
-		hyper_dmabuf_id_t hid = info_entry->info->hid;
-		int nents = info_entry->info->nents;
-		bool valid = info_entry->info->valid;
-		int importer_exported = info_entry->info->importer_exported;
+		hyper_dmabuf_id_t hid = info_entry->exported->hid;
+		int nents = info_entry->exported->nents;
+		bool valid = info_entry->exported->valid;
+		int importer_exported = info_entry->exported->active;
 		total += nents;
 		count += scnprintf(buf + count, PAGE_SIZE - count,
 				   "hid:{hid:%d keys:%d %d %d}, nents:%d, v:%c, ie:%d\n",
@@ -135,57 +133,57 @@ int hyper_dmabuf_table_destroy()
 	return 0;
 }
 
-int hyper_dmabuf_register_exported(struct hyper_dmabuf_sgt_info *info)
+int hyper_dmabuf_register_exported(struct exported_sgt_info *exported)
 {
-	struct hyper_dmabuf_info_entry_exported *info_entry;
+	struct list_entry_exported *info_entry;
 
 	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
 
 	if (!info_entry) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
                         "No memory left to be allocated\n");
 		return -ENOMEM;
 	}
 
-	info_entry->info = info;
+	info_entry->exported = exported;
 
 	hash_add(hyper_dmabuf_hash_exported, &info_entry->node,
-		 info_entry->info->hid.id);
+		 info_entry->exported->hid.id);
 
 	return 0;
 }
 
-int hyper_dmabuf_register_imported(struct hyper_dmabuf_imported_sgt_info* info)
+int hyper_dmabuf_register_imported(struct imported_sgt_info* imported)
 {
-	struct hyper_dmabuf_info_entry_imported *info_entry;
+	struct list_entry_imported *info_entry;
 
 	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
 
 	if (!info_entry) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
                         "No memory left to be allocated\n");
 		return -ENOMEM;
 	}
 
-	info_entry->info = info;
+	info_entry->imported = imported;
 
 	hash_add(hyper_dmabuf_hash_imported, &info_entry->node,
-		 info_entry->info->hid.id);
+		 info_entry->imported->hid.id);
 
 	return 0;
 }
 
-struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid)
+struct exported_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid)
 {
-	struct hyper_dmabuf_info_entry_exported *info_entry;
+	struct list_entry_exported *info_entry;
 	int bkt;
 
 	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
 		/* checking hid.id first */
-		if(info_entry->info->hid.id == hid.id) {
+		if(info_entry->exported->hid.id == hid.id) {
 			/* then key is compared */
-			if(hyper_dmabuf_hid_keycomp(info_entry->info->hid, hid))
-				return info_entry->info;
+			if(hyper_dmabuf_hid_keycomp(info_entry->exported->hid, hid))
+				return info_entry->exported;
 			/* if the key doesn't match, the given HID is invalid, so return NULL */
 			else
 				break;
@@ -197,29 +195,29 @@ struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid)
 /* search for a pre-exported sgt and return its id if it exists */
 hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf, int domid)
 {
-	struct hyper_dmabuf_info_entry_exported *info_entry;
+	struct list_entry_exported *info_entry;
 	hyper_dmabuf_id_t hid = {-1, {0, 0, 0}};
 	int bkt;
 
 	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
-		if(info_entry->info->dma_buf == dmabuf &&
-		   info_entry->info->hyper_dmabuf_rdomain == domid)
-			return info_entry->info->hid;
+		if(info_entry->exported->dma_buf == dmabuf &&
+		   info_entry->exported->rdomid == domid)
+			return info_entry->exported->hid;
 
 	return hid;
 }
 
-struct hyper_dmabuf_imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_id_t hid)
+struct imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_id_t hid)
 {
-	struct hyper_dmabuf_info_entry_imported *info_entry;
+	struct list_entry_imported *info_entry;
 	int bkt;
 
 	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
 		/* checking hid.id first */
-		if(info_entry->info->hid.id == hid.id) {
+		if(info_entry->imported->hid.id == hid.id) {
 			/* then key is compared */
-			if(hyper_dmabuf_hid_keycomp(info_entry->info->hid, hid))
-				return info_entry->info;
+			if(hyper_dmabuf_hid_keycomp(info_entry->imported->hid, hid))
+				return info_entry->imported;
 			/* if the key doesn't match, the given HID is invalid, so return NULL */
 			else {
 				break;
@@ -231,14 +229,14 @@ struct hyper_dmabuf_imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_i
 
 int hyper_dmabuf_remove_exported(hyper_dmabuf_id_t hid)
 {
-	struct hyper_dmabuf_info_entry_exported *info_entry;
+	struct list_entry_exported *info_entry;
 	int bkt;
 
 	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
 		/* checking hid.id first */
-		if(info_entry->info->hid.id == hid.id) {
+		if(info_entry->exported->hid.id == hid.id) {
 			/* then key is compared */
-			if(hyper_dmabuf_hid_keycomp(info_entry->info->hid, hid)) {
+			if(hyper_dmabuf_hid_keycomp(info_entry->exported->hid, hid)) {
 				hash_del(&info_entry->node);
 				kfree(info_entry);
 				return 0;
@@ -252,14 +250,14 @@ int hyper_dmabuf_remove_exported(hyper_dmabuf_id_t hid)
 
 int hyper_dmabuf_remove_imported(hyper_dmabuf_id_t hid)
 {
-	struct hyper_dmabuf_info_entry_imported *info_entry;
+	struct list_entry_imported *info_entry;
 	int bkt;
 
 	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
 		/* checking hid.id first */
-		if(info_entry->info->hid.id == hid.id) {
+		if(info_entry->imported->hid.id == hid.id) {
 			/* then key is compared */
-			if(hyper_dmabuf_hid_keycomp(info_entry->info->hid, hid)) {
+			if(hyper_dmabuf_hid_keycomp(info_entry->imported->hid, hid)) {
 				hash_del(&info_entry->node);
 				kfree(info_entry);
 				return 0;
@@ -272,15 +270,15 @@ int hyper_dmabuf_remove_imported(hyper_dmabuf_id_t hid)
 }
 
 void hyper_dmabuf_foreach_exported(
-	void (*func)(struct hyper_dmabuf_sgt_info *, void *attr),
+	void (*func)(struct exported_sgt_info *, void *attr),
 	void *attr)
 {
-	struct hyper_dmabuf_info_entry_exported *info_entry;
+	struct list_entry_exported *info_entry;
 	struct hlist_node *tmp;
 	int bkt;
 
 	hash_for_each_safe(hyper_dmabuf_hash_exported, bkt, tmp,
 			info_entry, node) {
-		func(info_entry->info, attr);
+		func(info_entry->exported, attr);
 	}
 }
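
Both databases above are plain fixed-size kernel hashtables from <linux/hashtable.h>: each entry is a small wrapper embedding an hlist_node, inserted with hid.id as the key, with the full hid compared on lookup. The register/find pattern, reduced to a sketch with hypothetical names:

	#include <linux/hashtable.h>
	#include <linux/slab.h>

	struct entry {
		int id;			/* hash key (hid.id in the driver) */
		void *payload;
		struct hlist_node node;
	};

	static DEFINE_HASHTABLE(my_table, 7);	/* 2^7 buckets */

	static int my_register(int id, void *payload)
	{
		struct entry *e = kmalloc(sizeof(*e), GFP_KERNEL);

		if (!e)
			return -ENOMEM;

		e->id = id;
		e->payload = payload;
		hash_add(my_table, &e->node, id);
		return 0;
	}

	static void *my_find(int id)
	{
		struct entry *e;
		int bkt;

		/* walk every bucket; cheap for the small tables used here */
		hash_for_each(my_table, bkt, e, node)
			if (e->id == id)
				return e->payload;
		return NULL;
	}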
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
index 8f64db8..d5c17ef 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
@@ -32,13 +32,13 @@
 /* number of bits to be used for imported dmabufs hash table */
 #define MAX_ENTRY_IMPORTED 7
 
-struct hyper_dmabuf_info_entry_exported {
-        struct hyper_dmabuf_sgt_info *info;
+struct list_entry_exported {
+        struct exported_sgt_info *exported;
         struct hlist_node node;
 };
 
-struct hyper_dmabuf_info_entry_imported {
-        struct hyper_dmabuf_imported_sgt_info *info;
+struct list_entry_imported {
+        struct imported_sgt_info *imported;
         struct hlist_node node;
 };
 
@@ -46,23 +46,23 @@ int hyper_dmabuf_table_init(void);
 
 int hyper_dmabuf_table_destroy(void);
 
-int hyper_dmabuf_register_exported(struct hyper_dmabuf_sgt_info *info);
+int hyper_dmabuf_register_exported(struct exported_sgt_info *info);
 
 /* search for a pre-exported sgt and return its id if it exists */
 hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf, int domid);
 
-int hyper_dmabuf_register_imported(struct hyper_dmabuf_imported_sgt_info* info);
+int hyper_dmabuf_register_imported(struct imported_sgt_info* info);
 
-struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid);
+struct exported_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid);
 
-struct hyper_dmabuf_imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_id_t hid);
+struct imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_id_t hid);
 
 int hyper_dmabuf_remove_exported(hyper_dmabuf_id_t hid);
 
 int hyper_dmabuf_remove_imported(hyper_dmabuf_id_t hid);
 
 void hyper_dmabuf_foreach_exported(
-	void (*func)(struct hyper_dmabuf_sgt_info *, void *attr),
+	void (*func)(struct exported_sgt_info *, void *attr),
 	void *attr);
 
 int hyper_dmabuf_register_sysfs(struct device *dev);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index ec37c3b..907f76e 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -31,7 +31,6 @@
 #include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/dma-buf.h>
-#include <xen/grant_table.h>
 #include <linux/workqueue.h>
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_drv.h"
@@ -39,58 +38,56 @@
 #include "hyper_dmabuf_event.h"
 #include "hyper_dmabuf_list.h"
 
-extern struct hyper_dmabuf_private hyper_dmabuf_private;
-
 struct cmd_process {
 	struct work_struct work;
 	struct hyper_dmabuf_req *rq;
 	int domid;
 };
 
-void hyper_dmabuf_create_request(struct hyper_dmabuf_req *req,
-				 enum hyper_dmabuf_command command, int *operands)
+void hyper_dmabuf_create_req(struct hyper_dmabuf_req *req,
+			     enum hyper_dmabuf_command cmd, int *op)
 {
 	int i;
 
-	req->status = HYPER_DMABUF_REQ_NOT_RESPONDED;
-	req->command = command;
+	req->stat = HYPER_DMABUF_REQ_NOT_RESPONDED;
+	req->cmd = cmd;
 
-	switch(command) {
+	switch(cmd) {
 	/* as exporter, commands to importer */
 	case HYPER_DMABUF_EXPORT:
 		/* exporting pages for dmabuf */
 		/* command : HYPER_DMABUF_EXPORT,
-		 * operands0~3 : hyper_dmabuf_id
-		 * operands4 : number of pages to be shared
-		 * operands5 : offset of data in the first page
-		 * operands6 : length of data in the last page
-		 * operands7 : top-level reference number for shared pages
-		 * operands8 : size of private data (from operands9)
-		 * operands9 ~ : Driver-specific private data (e.g. graphic buffer's meta info)
+		 * op0~3 : hyper_dmabuf_id
+		 * op4 : number of pages to be shared
+		 * op5 : offset of data in the first page
+		 * op6 : length of data in the last page
+		 * op7 : top-level reference number for shared pages
+		 * op8 : size of private data (from op9)
+		 * op9 ~ : Driver-specific private data (e.g. graphic buffer's meta info)
 		 */
 
-		memcpy(&req->operands[0], &operands[0], 9 * sizeof(int) + operands[8]);
+		memcpy(&req->op[0], &op[0], 9 * sizeof(int) + op[8]);
 		break;
 
 	case HYPER_DMABUF_NOTIFY_UNEXPORT:
 		/* destroy sg_list for hyper_dmabuf_id on remote side */
 		/* command : DMABUF_DESTROY,
-		 * operands0~3 : hyper_dmabuf_id_t hid
+		 * op0~3 : hyper_dmabuf_id_t hid
 		 */
 
 		for (i=0; i < 4; i++)
-			req->operands[i] = operands[i];
+			req->op[i] = op[i];
 		break;
 
 	case HYPER_DMABUF_EXPORT_FD:
 	case HYPER_DMABUF_EXPORT_FD_FAILED:
 		/* dmabuf fd is being created on imported side or importing failed */
 		/* command : HYPER_DMABUF_EXPORT_FD or HYPER_DMABUF_EXPORT_FD_FAILED,
-		 * operands0~3 : hyper_dmabuf_id
+		 * op0~3 : hyper_dmabuf_id
 		 */
 
 		for (i=0; i < 4; i++)
-			req->operands[i] = operands[i];
+			req->op[i] = op[i];
 		break;
 
 	case HYPER_DMABUF_OPS_TO_REMOTE:
@@ -103,11 +100,11 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_req *req,
 		/* notifying dmabuf map/unmap to exporter; map makes the driver do shadow mapping
 		* or unmapping for synchronization with the original exporter (e.g. i915) */
 		/* command : DMABUF_OPS_TO_SOURCE.
-		 * operands0~3 : hyper_dmabuf_id
-		 * operands4 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
+		 * op0~3 : hyper_dmabuf_id
+		 * op4 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
 		 */
 		for (i = 0; i < 5; i++)
-			req->operands[i] = operands[i];
+			req->op[i] = op[i];
 		break;
 
 	default:
@@ -116,9 +113,9 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_req *req,
 	}
 }
 
-void cmd_process_work(struct work_struct *work)
+static void cmd_process_work(struct work_struct *work)
 {
-	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
+	struct imported_sgt_info *imported;
 	struct cmd_process *proc = container_of(work, struct cmd_process, work);
 	struct hyper_dmabuf_req *req;
 	int domid;
@@ -127,107 +124,107 @@ void cmd_process_work(struct work_struct *work)
 	req = proc->rq;
 	domid = proc->domid;
 
-	switch (req->command) {
+	switch (req->cmd) {
 	case HYPER_DMABUF_EXPORT:
 		/* exporting pages for dmabuf */
 		/* command : HYPER_DMABUF_EXPORT,
-		 * operands0~3 : hyper_dmabuf_id
-		 * operands4 : number of pages to be shared
-		 * operands5 : offset of data in the first page
-		 * operands6 : length of data in the last page
-		 * operands7 : top-level reference number for shared pages
-		 * operands8 : size of private data (from operands9)
-		 * operands9 ~ : Driver-specific private data (e.g. graphic buffer's meta info)
+		 * op0~3 : hyper_dmabuf_id
+		 * op4 : number of pages to be shared
+		 * op5 : offset of data in the first page
+		 * op6 : length of data in the last page
+		 * op7 : top-level reference number for shared pages
+		 * op8 : size of private data (from op9)
+		 * op9 ~ : Driver-specific private data (e.g. graphic buffer's meta info)
 		 */
 
 		/* if nents == 0, this is a message only for priv synchronization
 		 * of an existing imported_sgt_info, so don't create a new one */
-		if (req->operands[4] == 0) {
-			hyper_dmabuf_id_t exist = {req->operands[0],
-						   {req->operands[1], req->operands[2],
-						    req->operands[3]}};
+		if (req->op[4] == 0) {
+			hyper_dmabuf_id_t exist = {req->op[0],
+						   {req->op[1], req->op[2],
+						   req->op[3]}};
 
-			imported_sgt_info = hyper_dmabuf_find_imported(exist);
+			imported = hyper_dmabuf_find_imported(exist);
 
-			if (!imported_sgt_info) {
-				dev_err(hyper_dmabuf_private.device,
+			if (!imported) {
+				dev_err(hy_drv_priv->dev,
 					"Can't find imported sgt_info from IMPORT_LIST\n");
 				break;
 			}
 
 			/* if size of new private data is different,
 			 * we reallocate it. */
-			if (imported_sgt_info->sz_priv != req->operands[8]) {
-				kfree(imported_sgt_info->priv);
-				imported_sgt_info->sz_priv = req->operands[8];
-				imported_sgt_info->priv = kcalloc(1, req->operands[8], GFP_KERNEL);
-				if (!imported_sgt_info->priv) {
-					dev_err(hyper_dmabuf_private.device,
+			if (imported->sz_priv != req->op[8]) {
+				kfree(imported->priv);
+				imported->sz_priv = req->op[8];
+				imported->priv = kcalloc(1, req->op[8], GFP_KERNEL);
+				if (!imported->priv) {
+					dev_err(hy_drv_priv->dev,
 						"Fail to allocate priv\n");
 
 					/* set it invalid */
-					imported_sgt_info->valid = 0;
+					imported->valid = 0;
 					break;
 				}
 			}
 
 			/* updating priv data */
-			memcpy(imported_sgt_info->priv, &req->operands[9], req->operands[8]);
+			memcpy(imported->priv, &req->op[9], req->op[8]);
 
 #ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
 			/* generating import event */
-			hyper_dmabuf_import_event(imported_sgt_info->hid);
+			hyper_dmabuf_import_event(imported->hid);
 #endif
 
 			break;
 		}
 
-		imported_sgt_info = kcalloc(1, sizeof(*imported_sgt_info), GFP_KERNEL);
+		imported = kcalloc(1, sizeof(*imported), GFP_KERNEL);
 
-		if (!imported_sgt_info) {
-			dev_err(hyper_dmabuf_private.device,
+		if (!imported) {
+			dev_err(hy_drv_priv->dev,
 				"No memory left to be allocated\n");
 			break;
 		}
 
-		imported_sgt_info->sz_priv = req->operands[8];
-		imported_sgt_info->priv = kcalloc(1, req->operands[8], GFP_KERNEL);
+		imported->sz_priv = req->op[8];
+		imported->priv = kcalloc(1, req->op[8], GFP_KERNEL);
 
-		if (!imported_sgt_info->priv) {
-			dev_err(hyper_dmabuf_private.device,
+		if (!imported->priv) {
+			dev_err(hy_drv_priv->dev,
 				"Fail to allocate priv\n");
 
-			kfree(imported_sgt_info);
+			kfree(imported);
 			break;
 		}
 
-		imported_sgt_info->hid.id = req->operands[0];
+		imported->hid.id = req->op[0];
 
 		for (i=0; i<3; i++)
-			imported_sgt_info->hid.rng_key[i] = req->operands[i+1];
+			imported->hid.rng_key[i] = req->op[i+1];
 
-		imported_sgt_info->nents = req->operands[4];
-		imported_sgt_info->frst_ofst = req->operands[5];
-		imported_sgt_info->last_len = req->operands[6];
-		imported_sgt_info->ref_handle = req->operands[7];
+		imported->nents = req->op[4];
+		imported->frst_ofst = req->op[5];
+		imported->last_len = req->op[6];
+		imported->ref_handle = req->op[7];
 
-		dev_dbg(hyper_dmabuf_private.device, "DMABUF was exported\n");
-		dev_dbg(hyper_dmabuf_private.device, "\thid{id:%d key:%d %d %d}\n",
-			req->operands[0], req->operands[1], req->operands[2],
-			req->operands[3]);
-		dev_dbg(hyper_dmabuf_private.device, "\tnents %d\n", req->operands[4]);
-		dev_dbg(hyper_dmabuf_private.device, "\tfirst offset %d\n", req->operands[5]);
-		dev_dbg(hyper_dmabuf_private.device, "\tlast len %d\n", req->operands[6]);
-		dev_dbg(hyper_dmabuf_private.device, "\tgrefid %d\n", req->operands[7]);
+		dev_dbg(hy_drv_priv->dev, "DMABUF was exported\n");
+		dev_dbg(hy_drv_priv->dev, "\thid{id:%d key:%d %d %d}\n",
+			req->op[0], req->op[1], req->op[2],
+			req->op[3]);
+		dev_dbg(hy_drv_priv->dev, "\tnents %d\n", req->op[4]);
+		dev_dbg(hy_drv_priv->dev, "\tfirst offset %d\n", req->op[5]);
+		dev_dbg(hy_drv_priv->dev, "\tlast len %d\n", req->op[6]);
+		dev_dbg(hy_drv_priv->dev, "\tgrefid %d\n", req->op[7]);
 
-		memcpy(imported_sgt_info->priv, &req->operands[9], req->operands[8]);
+		memcpy(imported->priv, &req->op[9], req->op[8]);
 
-		imported_sgt_info->valid = 1;
-		hyper_dmabuf_register_imported(imported_sgt_info);
+		imported->valid = true;
+		hyper_dmabuf_register_imported(imported);
 
 #ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
 		/* generating import event */
-		hyper_dmabuf_import_event(imported_sgt_info->hid);
+		hyper_dmabuf_import_event(imported->hid);
 #endif
 
 		break;
@@ -251,142 +248,142 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 {
 	struct cmd_process *proc;
 	struct hyper_dmabuf_req *temp_req;
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	struct hyper_dmabuf_sgt_info *exp_sgt_info;
+	struct imported_sgt_info *imported;
+	struct exported_sgt_info *exported;
 	hyper_dmabuf_id_t hid;
 	int ret;
 
 	if (!req) {
-		dev_err(hyper_dmabuf_private.device, "request is NULL\n");
+		dev_err(hy_drv_priv->dev, "request is NULL\n");
 		return -EINVAL;
 	}
 
-	hid.id = req->operands[0];
-	hid.rng_key[0] = req->operands[1];
-	hid.rng_key[1] = req->operands[2];
-	hid.rng_key[2] = req->operands[3];
+	hid.id = req->op[0];
+	hid.rng_key[0] = req->op[1];
+	hid.rng_key[1] = req->op[2];
+	hid.rng_key[2] = req->op[3];
 
-	if ((req->command < HYPER_DMABUF_EXPORT) ||
-		(req->command > HYPER_DMABUF_OPS_TO_SOURCE)) {
-		dev_err(hyper_dmabuf_private.device, "invalid command\n");
+	if ((req->cmd < HYPER_DMABUF_EXPORT) ||
+		(req->cmd > HYPER_DMABUF_OPS_TO_SOURCE)) {
+		dev_err(hy_drv_priv->dev, "invalid command\n");
 		return -EINVAL;
 	}
 
-	req->status = HYPER_DMABUF_REQ_PROCESSED;
+	req->stat = HYPER_DMABUF_REQ_PROCESSED;
 
 	/* HYPER_DMABUF_DESTROY requires immediate
 	 * follow up so can't be processed in workqueue
 	 */
-	if (req->command == HYPER_DMABUF_NOTIFY_UNEXPORT) {
+	if (req->cmd == HYPER_DMABUF_NOTIFY_UNEXPORT) {
 		/* destroy sg_list for hyper_dmabuf_id on remote side */
 		/* command : HYPER_DMABUF_NOTIFY_UNEXPORT,
-		 * operands0~3 : hyper_dmabuf_id
+		 * op0~3 : hyper_dmabuf_id
 		 */
-		dev_dbg(hyper_dmabuf_private.device,
+		dev_dbg(hy_drv_priv->dev,
 			"%s: processing HYPER_DMABUF_NOTIFY_UNEXPORT\n", __func__);
 
-		sgt_info = hyper_dmabuf_find_imported(hid);
+		imported = hyper_dmabuf_find_imported(hid);
 
-		if (sgt_info) {
+		if (imported) {
 			/* if anything is still using dma_buf */
-			if (sgt_info->num_importers) {
+			if (imported->importers) {
 				/*
 				 * Buffer is still in use, just mark that it should
 				 * not be allowed to export its fd anymore.
 				 */
-				sgt_info->valid = 0;
+				imported->valid = false;
 			} else {
 				/* No one is using buffer, remove it from imported list */
 				hyper_dmabuf_remove_imported(hid);
-				kfree(sgt_info);
+				kfree(imported);
 			}
 		} else {
-			req->status = HYPER_DMABUF_REQ_ERROR;
+			req->stat = HYPER_DMABUF_REQ_ERROR;
 		}
 
-		return req->command;
+		return req->cmd;
 	}
 
 	/* dma buf remote synchronization */
-	if (req->command == HYPER_DMABUF_OPS_TO_SOURCE) {
+	if (req->cmd == HYPER_DMABUF_OPS_TO_SOURCE) {
 		/* notifying dmabuf map/unmap to exporter; map makes the driver do shadow mapping
 		 * or unmapping for synchronization with the original exporter (e.g. i915) */
 
 		/* command : DMABUF_OPS_TO_SOURCE.
-		 * operands0~3 : hyper_dmabuf_id
-		 * operands1 : enum hyper_dmabuf_ops {....}
+		 * op0~3 : hyper_dmabuf_id
+		 * op1 : enum hyper_dmabuf_ops {....}
 		 */
-		dev_dbg(hyper_dmabuf_private.device,
+		dev_dbg(hy_drv_priv->dev,
 			"%s: HYPER_DMABUF_OPS_TO_SOURCE\n", __func__);
 
-		ret = hyper_dmabuf_remote_sync(hid, req->operands[4]);
+		ret = hyper_dmabuf_remote_sync(hid, req->op[4]);
 
 		if (ret)
-			req->status = HYPER_DMABUF_REQ_ERROR;
+			req->stat = HYPER_DMABUF_REQ_ERROR;
 		else
-			req->status = HYPER_DMABUF_REQ_PROCESSED;
+			req->stat = HYPER_DMABUF_REQ_PROCESSED;
 
-		return req->command;
+		return req->cmd;
 	}
 
 	/* synchronous dma_buf_fd export */
-	if (req->command == HYPER_DMABUF_EXPORT_FD) {
+	if (req->cmd == HYPER_DMABUF_EXPORT_FD) {
 		/* find a corresponding SGT for the id */
-		dev_dbg(hyper_dmabuf_private.device,
+		dev_dbg(hy_drv_priv->dev,
 			"Processing HYPER_DMABUF_EXPORT_FD for buffer {id:%d key:%d %d %d}\n",
 			hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
 
-		exp_sgt_info = hyper_dmabuf_find_exported(hid);
+		exported = hyper_dmabuf_find_exported(hid);
 
-		if (!exp_sgt_info) {
-			dev_err(hyper_dmabuf_private.device,
+		if (!exported) {
+			dev_err(hy_drv_priv->dev,
 				"critical err: requested sgt_info can't be found for buffer {id:%d key:%d %d %d}\n",
 				hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
 
-			req->status = HYPER_DMABUF_REQ_ERROR;
-		} else if (!exp_sgt_info->valid) {
-			dev_dbg(hyper_dmabuf_private.device,
+			req->stat = HYPER_DMABUF_REQ_ERROR;
+		} else if (!exported->valid) {
+			dev_dbg(hy_drv_priv->dev,
 				"Buffer no longer valid - cannot export fd for buffer {id:%d key:%d %d %d}\n",
 				hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
 
-			req->status = HYPER_DMABUF_REQ_ERROR;
+			req->stat = HYPER_DMABUF_REQ_ERROR;
 		} else {
-			dev_dbg(hyper_dmabuf_private.device,
+			dev_dbg(hy_drv_priv->dev,
 				"Buffer still valid - can export fd for buffer {id:%d key:%d %d %d}\n",
 				hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
 
-			exp_sgt_info->importer_exported++;
-			req->status = HYPER_DMABUF_REQ_PROCESSED;
+			exported->active++;
+			req->stat = HYPER_DMABUF_REQ_PROCESSED;
 		}
-		return req->command;
+		return req->cmd;
 	}
 
-	if (req->command == HYPER_DMABUF_EXPORT_FD_FAILED) {
-		dev_dbg(hyper_dmabuf_private.device,
+	if (req->cmd == HYPER_DMABUF_EXPORT_FD_FAILED) {
+		dev_dbg(hy_drv_priv->dev,
 			"Processing HYPER_DMABUF_EXPORT_FD_FAILED for buffer {id:%d key:%d %d %d}\n",
 			hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
 
-		exp_sgt_info = hyper_dmabuf_find_exported(hid);
+		exported = hyper_dmabuf_find_exported(hid);
 
-		if (!exp_sgt_info) {
-			dev_err(hyper_dmabuf_private.device,
+		if (!exported) {
+			dev_err(hy_drv_priv->dev,
 				"critical err: requested sgt_info can't be found for buffer {id:%d key:%d %d %d}\n",
 				hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
 
-			req->status = HYPER_DMABUF_REQ_ERROR;
+			req->stat = HYPER_DMABUF_REQ_ERROR;
 		} else {
-			exp_sgt_info->importer_exported--;
-			req->status = HYPER_DMABUF_REQ_PROCESSED;
+			exported->active--;
+			req->stat = HYPER_DMABUF_REQ_PROCESSED;
 		}
-		return req->command;
+		return req->cmd;
 	}
 
-	dev_dbg(hyper_dmabuf_private.device,
+	dev_dbg(hy_drv_priv->dev,
 		"%s: putting request to workqueue\n", __func__);
 	temp_req = kmalloc(sizeof(*temp_req), GFP_KERNEL);
 
 	if (!temp_req) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"No memory left to be allocated\n");
 		return -ENOMEM;
 	}
@@ -396,7 +393,7 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 	proc = kcalloc(1, sizeof(struct cmd_process), GFP_KERNEL);
 
 	if (!proc) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"No memory left to be allocated\n");
 		kfree(temp_req);
 		return -ENOMEM;
@@ -407,7 +404,7 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 
 	INIT_WORK(&(proc->work), cmd_process_work);
 
-	queue_work(hyper_dmabuf_private.work_queue, &(proc->work));
+	queue_work(hy_drv_priv->work_queue, &(proc->work));
 
-	return req->command;
+	return req->cmd;
 }
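
Commands that need no synchronous answer take the final path in hyper_dmabuf_msg_parse(): the request is copied, wrapped in a struct cmd_process and queued on the driver's workqueue so the interrupt-side caller returns quickly. The underlying pattern, sketched with hypothetical my_* names:

	#include <linux/workqueue.h>
	#include <linux/slab.h>

	struct my_job {
		struct work_struct work;
		int payload;		/* copy of the request data */
	};

	static void my_job_fn(struct work_struct *work)
	{
		struct my_job *job = container_of(work, struct my_job, work);

		/* ... process job->payload in sleepable context ... */
		kfree(job);		/* the worker owns and frees the job */
	}

	static int my_defer(struct workqueue_struct *wq, int payload)
	{
		struct my_job *job = kzalloc(sizeof(*job), GFP_KERNEL);

		if (!job)
			return -ENOMEM;

		job->payload = payload;
		INIT_WORK(&job->work, my_job_fn);
		queue_work(wq, &job->work);
		return 0;
	}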
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
index 0f6e795..7c694ec 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -28,17 +28,17 @@
 #define MAX_NUMBER_OF_OPERANDS 64
 
 struct hyper_dmabuf_req {
-	unsigned int request_id;
-	unsigned int status;
-	unsigned int command;
-	unsigned int operands[MAX_NUMBER_OF_OPERANDS];
+	unsigned int req_id;
+	unsigned int stat;
+	unsigned int cmd;
+	unsigned int op[MAX_NUMBER_OF_OPERANDS];
 };
 
 struct hyper_dmabuf_resp {
-	unsigned int response_id;
-	unsigned int status;
-	unsigned int command;
-	unsigned int operands[MAX_NUMBER_OF_OPERANDS];
+	unsigned int resp_id;
+	unsigned int stat;
+	unsigned int cmd;
+	unsigned int op[MAX_NUMBER_OF_OPERANDS];
 };
 
 enum hyper_dmabuf_command {
@@ -75,7 +75,7 @@ enum hyper_dmabuf_req_feedback {
 };
 
 /* create a request packet with given command and operands */
-void hyper_dmabuf_create_request(struct hyper_dmabuf_req *req,
+void hyper_dmabuf_create_req(struct hyper_dmabuf_req *req,
 				 enum hyper_dmabuf_command command,
 				 int *operands);
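
To make the op[] layout concrete, a hypothetical exporter-side helper that packs a HYPER_DMABUF_EXPORT request according to the layout documented in hyper_dmabuf_create_req() (op0~3 hid, op4 nents, op5 first offset, op6 last length, op7 top-level gref, op8 private-data size, op9~ private data) could look like this; the caller must keep 9 + sz_priv bytes' worth of slots within MAX_NUMBER_OF_OPERANDS:

	#include <linux/string.h>

	static void pack_export_ops(int *op, hyper_dmabuf_id_t hid,
				    int nents, int frst_ofst, int last_len,
				    int top_gref, int sz_priv,
				    const void *priv)
	{
		int i;

		op[0] = hid.id;
		for (i = 0; i < 3; i++)
			op[i + 1] = hid.rng_key[i];

		op[4] = nents;
		op[5] = frst_ofst;
		op[6] = last_len;
		op[7] = top_gref;
		op[8] = sz_priv;
		memcpy(&op[9], priv, sz_priv);	/* driver-specific metadata */
	}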
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
index 9313c42..7e73170 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
@@ -32,8 +32,6 @@
 #include <linux/slab.h>
 #include <linux/module.h>
 #include <linux/dma-buf.h>
-#include <xen/grant_table.h>
-#include <asm/xen/page.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_ops.h"
@@ -45,122 +43,111 @@
 #define WAIT_AFTER_SYNC_REQ 0
 #define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
 
-extern struct hyper_dmabuf_private hyper_dmabuf_private;
-
-inline int hyper_dmabuf_sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
+static int hyper_dmabuf_sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
 {
 	struct hyper_dmabuf_req *req;
-	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
-	int operands[5];
+	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
+	int op[5];
 	int i;
 	int ret;
 
-	operands[0] = hid.id;
+	op[0] = hid.id;
 
 	for (i=0; i<3; i++)
-		operands[i+1] = hid.rng_key[i];
+		op[i+1] = hid.rng_key[i];
 
-	operands[4] = dmabuf_ops;
+	op[4] = dmabuf_ops;
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
 	if (!req) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"No memory left to be allocated\n");
 		return -ENOMEM;
 	}
 
-	hyper_dmabuf_create_request(req, HYPER_DMABUF_OPS_TO_SOURCE, &operands[0]);
+	hyper_dmabuf_create_req(req, HYPER_DMABUF_OPS_TO_SOURCE, &op[0]);
 
 	/* send request and wait for a response */
 	ret = ops->send_req(HYPER_DMABUF_DOM_ID(hid), req, WAIT_AFTER_SYNC_REQ);
 
+	if (ret < 0) {
+		dev_dbg(hy_drv_priv->dev,
+			"dmabuf sync request failed:%d\n", req->op[4]);
+	}
+
 	kfree(req);
 
 	return ret;
 }
 
-static int hyper_dmabuf_ops_attach(struct dma_buf* dmabuf, struct device* dev,
-			struct dma_buf_attachment *attach)
+static int hyper_dmabuf_ops_attach(struct dma_buf* dmabuf,
+				   struct device* dev,
+				   struct dma_buf_attachment *attach)
 {
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct imported_sgt_info *imported;
 	int ret;
 
 	if (!attach->dmabuf->priv)
 		return -EINVAL;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
+	imported = (struct imported_sgt_info *)attach->dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_ATTACH);
 
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-		return ret;
-	}
-
-	return 0;
+	return ret;
 }
 
-static void hyper_dmabuf_ops_detach(struct dma_buf* dmabuf, struct dma_buf_attachment *attach)
+static void hyper_dmabuf_ops_detach(struct dma_buf* dmabuf,
+				    struct dma_buf_attachment *attach)
 {
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct imported_sgt_info *imported;
 	int ret;
 
 	if (!attach->dmabuf->priv)
 		return;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
+	imported = (struct imported_sgt_info *)attach->dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_DETACH);
-
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
 }
 
 static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachment,
-						enum dma_data_direction dir)
+					     enum dma_data_direction dir)
 {
 	struct sg_table *st;
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	struct hyper_dmabuf_pages_info *page_info;
+	struct imported_sgt_info *imported;
+	struct pages_info *pg_info;
 	int ret;
 
 	if (!attachment->dmabuf->priv)
 		return NULL;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attachment->dmabuf->priv;
+	imported = (struct imported_sgt_info *)attachment->dmabuf->priv;
 
 	/* extract pages from sgt */
-	page_info = hyper_dmabuf_ext_pgs(sgt_info->sgt);
+	pg_info = hyper_dmabuf_ext_pgs(imported->sgt);
 
-	if (!page_info) {
+	if (!pg_info) {
 		return NULL;
 	}
 
 	/* create a new sg_table with extracted pages */
-	st = hyper_dmabuf_create_sgt(page_info->pages, page_info->frst_ofst,
-				page_info->last_len, page_info->nents);
+	st = hyper_dmabuf_create_sgt(pg_info->pgs, pg_info->frst_ofst,
+				     pg_info->last_len, pg_info->nents);
 	if (!st)
 		goto err_free_sg;
 
         if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir))
                 goto err_free_sg;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_MAP);
 
-	kfree(page_info->pages);
-	kfree(page_info);
-
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
+	kfree(pg_info->pgs);
+	kfree(pg_info);
 
 	return st;
 
@@ -170,8 +157,8 @@ static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachme
 		kfree(st);
 	}
 
-	kfree(page_info->pages);
-	kfree(page_info);
+	kfree(pg_info->pgs);
+	kfree(pg_info);
 
 	return NULL;
 }
@@ -180,294 +167,251 @@ static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
 				   struct sg_table *sg,
 				   enum dma_data_direction dir)
 {
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct imported_sgt_info *imported;
 	int ret;
 
 	if (!attachment->dmabuf->priv)
 		return;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attachment->dmabuf->priv;
+	imported = (struct imported_sgt_info *)attachment->dmabuf->priv;
 
 	dma_unmap_sg(attachment->dev, sg->sgl, sg->nents, dir);
 
 	sg_free_table(sg);
 	kfree(sg);
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_UNMAP);
-
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
 }
 
 static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 {
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
+	struct imported_sgt_info *imported;
+	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
 	int ret;
-	int final_release;
+	int finish;
 
 	if (!dma_buf->priv)
 		return;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dma_buf->priv;
+	imported = (struct imported_sgt_info *)dma_buf->priv;
 
-	if (!dmabuf_refcount(sgt_info->dma_buf)) {
-		sgt_info->dma_buf = NULL;
+	if (!dmabuf_refcount(imported->dma_buf)) {
+		imported->dma_buf = NULL;
 	}
 
-	sgt_info->num_importers--;
+	imported->importers--;
 
-	if (sgt_info->num_importers == 0) {
-		ops->unmap_shared_pages(&sgt_info->refs_info, sgt_info->nents);
+	if (imported->importers == 0) {
+		ops->unmap_shared_pages(&imported->refs_info, imported->nents);
 
-		if (sgt_info->sgt) {
-			sg_free_table(sgt_info->sgt);
-			kfree(sgt_info->sgt);
-			sgt_info->sgt = NULL;
+		if (imported->sgt) {
+			sg_free_table(imported->sgt);
+			kfree(imported->sgt);
+			imported->sgt = NULL;
 		}
 	}
 
-	final_release = sgt_info && !sgt_info->valid &&
-		        !sgt_info->num_importers;
+	finish = imported && !imported->valid &&
+		 !imported->importers;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_RELEASE);
-	if (ret < 0) {
-		dev_warn(hyper_dmabuf_private.device,
-			 "hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
 
 	/*
 	 * Check if buffer is still valid and if not remove it from imported list.
 	 * That has to be done after sending sync request
 	 */
-	if (final_release) {
-		hyper_dmabuf_remove_imported(sgt_info->hid);
-		kfree(sgt_info);
+	if (finish) {
+		hyper_dmabuf_remove_imported(imported->hid);
+		kfree(imported);
 	}
 }
 
 static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
 {
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct imported_sgt_info *imported;
 	int ret;
 
 	if (!dmabuf->priv)
 		return -EINVAL;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
 
 	return ret;
 }
 
 static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
 {
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct imported_sgt_info *imported;
 	int ret;
 
 	if (!dmabuf->priv)
 		return -EINVAL;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_END_CPU_ACCESS);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
 
 	return 0;
 }
 
 static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum)
 {
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct imported_sgt_info *imported;
 	int ret;
 
 	if (!dmabuf->priv)
 		return NULL;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_KMAP_ATOMIC);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
 
 	return NULL; /* NULL for now; need to return the address of the mapped region */
 }
 
 static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum, void *vaddr)
 {
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct imported_sgt_info *imported;
 	int ret;
 
 	if (!dmabuf->priv)
 		return;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
 }
 
 static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
 {
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct imported_sgt_info *imported;
 	int ret;
 
 	if (!dmabuf->priv)
 		return NULL;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_KMAP);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
 
-	return NULL; /* for now NULL.. need to return the address of mapped region */
+	/* NULL for now; need to return the address of the mapped region */
+	return NULL;
 }
 
-static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum, void *vaddr)
+static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum,
+				    void *vaddr)
 {
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct imported_sgt_info *imported;
 	int ret;
 
 	if (!dmabuf->priv)
 		return;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_KUNMAP);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
 }
 
 static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
 {
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct imported_sgt_info *imported;
 	int ret;
 
 	if (!dmabuf->priv)
 		return -EINVAL;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_MMAP);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
 
 	return ret;
 }
 
 static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
 {
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct imported_sgt_info *imported;
 	int ret;
 
 	if (!dmabuf->priv)
 		return NULL;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_VMAP);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
 
 	return NULL;
 }
 
 static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
 {
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct imported_sgt_info *imported;
 	int ret;
 
 	if (!dmabuf->priv)
 		return;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_VUNMAP);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
 }
 
 static const struct dma_buf_ops hyper_dmabuf_ops = {
-		.attach = hyper_dmabuf_ops_attach,
-		.detach = hyper_dmabuf_ops_detach,
-		.map_dma_buf = hyper_dmabuf_ops_map,
-		.unmap_dma_buf = hyper_dmabuf_ops_unmap,
-		.release = hyper_dmabuf_ops_release,
-		.begin_cpu_access = (void*)hyper_dmabuf_ops_begin_cpu_access,
-		.end_cpu_access = (void*)hyper_dmabuf_ops_end_cpu_access,
-		.map_atomic = hyper_dmabuf_ops_kmap_atomic,
-		.unmap_atomic = hyper_dmabuf_ops_kunmap_atomic,
-		.map = hyper_dmabuf_ops_kmap,
-		.unmap = hyper_dmabuf_ops_kunmap,
-		.mmap = hyper_dmabuf_ops_mmap,
-		.vmap = hyper_dmabuf_ops_vmap,
-		.vunmap = hyper_dmabuf_ops_vunmap,
+	.attach = hyper_dmabuf_ops_attach,
+	.detach = hyper_dmabuf_ops_detach,
+	.map_dma_buf = hyper_dmabuf_ops_map,
+	.unmap_dma_buf = hyper_dmabuf_ops_unmap,
+	.release = hyper_dmabuf_ops_release,
+	.begin_cpu_access = (void*)hyper_dmabuf_ops_begin_cpu_access,
+	.end_cpu_access = (void*)hyper_dmabuf_ops_end_cpu_access,
+	.map_atomic = hyper_dmabuf_ops_kmap_atomic,
+	.unmap_atomic = hyper_dmabuf_ops_kunmap_atomic,
+	.map = hyper_dmabuf_ops_kmap,
+	.unmap = hyper_dmabuf_ops_kunmap,
+	.mmap = hyper_dmabuf_ops_mmap,
+	.vmap = hyper_dmabuf_ops_vmap,
+	.vunmap = hyper_dmabuf_ops_vunmap,
 };
 
 /* exporting dmabuf as fd */
-int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags)
+int hyper_dmabuf_export_fd(struct imported_sgt_info *imported, int flags)
 {
 	int fd = -1;
 
 	/* call hyper_dmabuf_export_dma_buf to create
 	 * and bind a handle for it, then release
 	 */
-	hyper_dmabuf_export_dma_buf(dinfo);
+	hyper_dmabuf_export_dma_buf(imported);
 
-	if (dinfo->dma_buf) {
-		fd = dma_buf_fd(dinfo->dma_buf, flags);
+	if (imported->dma_buf) {
+		fd = dma_buf_fd(imported->dma_buf, flags);
 	}
 
 	return fd;
 }
 
-void hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo)
+void hyper_dmabuf_export_dma_buf(struct imported_sgt_info *imported)
 {
 	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
 
 	exp_info.ops = &hyper_dmabuf_ops;
 
 	/* multiple of PAGE_SIZE, not considering offset */
-	exp_info.size = dinfo->sgt->nents * PAGE_SIZE;
-	exp_info.flags = /* not sure about flag */0;
-	exp_info.priv = dinfo;
+	exp_info.size = imported->sgt->nents * PAGE_SIZE;
+	exp_info.flags = /* not sure about flag */ 0;
+	exp_info.priv = imported;
 
-	dinfo->dma_buf = dma_buf_export(&exp_info);
+	imported->dma_buf = dma_buf_export(&exp_info);
 }
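
The fd returned by hyper_dmabuf_export_fd() is an ordinary dma-buf fd, so an in-kernel consumer picks it up through the standard dma-buf import sequence, which in turn lands in the hyper_dmabuf_ops callbacks above. A hedged sketch (error handling trimmed; dev is the consuming device, consume_fd is a hypothetical name):

	#include <linux/dma-buf.h>
	#include <linux/err.h>

	static struct sg_table *consume_fd(int fd, struct device *dev,
					   struct dma_buf_attachment **out)
	{
		struct dma_buf *buf = dma_buf_get(fd);	/* takes a reference */
		struct dma_buf_attachment *att;

		if (IS_ERR(buf))
			return ERR_CAST(buf);

		att = dma_buf_attach(buf, dev);		/* -> ops_attach */
		if (IS_ERR(att)) {
			dma_buf_put(buf);
			return ERR_CAST(att);
		}

		*out = att;
		/* -> ops_map, which sends an OPS_MAP sync request */
		return dma_buf_map_attachment(att, DMA_BIDIRECTIONAL);
	}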
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h
index 8c06fc6..c5505a4 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h
@@ -25,8 +25,8 @@
 #ifndef __HYPER_DMABUF_OPS_H__
 #define __HYPER_DMABUF_OPS_H__
 
-int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags);
+int hyper_dmabuf_export_fd(struct imported_sgt_info *imported, int flags);
 
-void hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo);
+void hyper_dmabuf_export_dma_buf(struct imported_sgt_info *imported);
 
 #endif /* __HYPER_DMABUF_IMP_H__ */
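
The query code in the next file derives the buffer size from (nents, frst_ofst, last_len) when no local dma_buf exists yet, via the HYPER_DMABUF_SIZE macro. A worked example of that arithmetic, assuming 4 KiB pages:

	/* HYPER_DMABUF_SIZE(nents, first_offset, last_len)
	 *   = nents * PAGE_SIZE - first_offset - PAGE_SIZE + last_len
	 *
	 * e.g. nents = 3, first_offset = 512, last_len = 100, PAGE_SIZE = 4096:
	 *   first page : 4096 - 512 = 3584 bytes
	 *   middle page:             4096 bytes
	 *   last page  :              100 bytes
	 *   total      : 3*4096 - 512 - 4096 + 100 = 7780 bytes
	 */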
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c
index 39c9dee..36e888c 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c
@@ -32,16 +32,12 @@
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_id.h"
 
-extern struct hyper_dmabuf_private hyper_dmabuf_private;
-
 #define HYPER_DMABUF_SIZE(nents, first_offset, last_len) \
 	((nents)*PAGE_SIZE - (first_offset) - PAGE_SIZE + (last_len))
 
-int hyper_dmabuf_query_exported(struct hyper_dmabuf_sgt_info *sgt_info,
+int hyper_dmabuf_query_exported(struct exported_sgt_info *exported,
 				int query, unsigned long* info)
 {
-	int n;
-
 	switch (query)
 	{
 		case HYPER_DMABUF_QUERY_TYPE:
@@ -50,45 +46,46 @@ int hyper_dmabuf_query_exported(struct hyper_dmabuf_sgt_info *sgt_info,
 
 		/* exporting domain of this specific dmabuf*/
 		case HYPER_DMABUF_QUERY_EXPORTER:
-			*info = HYPER_DMABUF_DOM_ID(sgt_info->hid);
+			*info = HYPER_DMABUF_DOM_ID(exported->hid);
 			break;
 
 		/* importing domain of this specific dmabuf */
 		case HYPER_DMABUF_QUERY_IMPORTER:
-			*info = sgt_info->hyper_dmabuf_rdomain;
+			*info = exported->rdomid;
 			break;
 
 		/* size of dmabuf in byte */
 		case HYPER_DMABUF_QUERY_SIZE:
-			*info = sgt_info->dma_buf->size;
+			*info = exported->dma_buf->size;
 			break;
 
 		/* whether the buffer is used by importer */
 		case HYPER_DMABUF_QUERY_BUSY:
-			*info = (sgt_info->importer_exported == 0) ? false : true;
+			*info = (exported->active > 0);
 			break;
 
 		/* whether the buffer is unexported */
 		case HYPER_DMABUF_QUERY_UNEXPORTED:
-			*info = !sgt_info->valid;
+			*info = !exported->valid;
 			break;
 
 		/* whether the buffer is scheduled to be unexported */
 		case HYPER_DMABUF_QUERY_DELAYED_UNEXPORTED:
-			*info = !sgt_info->unexport_scheduled;
+			*info = !exported->unexport_sched;
 			break;
 
 		/* size of private info attached to buffer */
 		case HYPER_DMABUF_QUERY_PRIV_INFO_SIZE:
-			*info = sgt_info->sz_priv;
+			*info = exported->sz_priv;
 			break;
 
 		/* copy private info attached to buffer */
 		case HYPER_DMABUF_QUERY_PRIV_INFO:
-			if (sgt_info->sz_priv > 0) {
+			if (exported->sz_priv > 0) {
+				int n;
 				n = copy_to_user((void __user*) *info,
-						sgt_info->priv,
-						sgt_info->sz_priv);
+						exported->priv,
+						exported->sz_priv);
 				if (n != 0)
 					return -EINVAL;
 			}
@@ -102,11 +99,9 @@ int hyper_dmabuf_query_exported(struct hyper_dmabuf_sgt_info *sgt_info,
 }
 
 
-int hyper_dmabuf_query_imported(struct hyper_dmabuf_imported_sgt_info *imported_sgt_info,
+int hyper_dmabuf_query_imported(struct imported_sgt_info *imported,
 				int query, unsigned long *info)
 {
-	int n;
-
 	switch (query)
 	{
 		case HYPER_DMABUF_QUERY_TYPE:
@@ -115,50 +110,51 @@ int hyper_dmabuf_query_imported(struct hyper_dmabuf_imported_sgt_info *imported_
 
 		/* exporting domain of this specific dmabuf*/
 		case HYPER_DMABUF_QUERY_EXPORTER:
-			*info = HYPER_DMABUF_DOM_ID(imported_sgt_info->hid);
+			*info = HYPER_DMABUF_DOM_ID(imported->hid);
 			break;
 
 		/* importing domain of this specific dmabuf */
 		case HYPER_DMABUF_QUERY_IMPORTER:
-			*info = hyper_dmabuf_private.domid;
+			*info = hy_drv_priv->domid;
 			break;
 
 		/* size of dmabuf in byte */
 		case HYPER_DMABUF_QUERY_SIZE:
-			if (imported_sgt_info->dma_buf) {
+			if (imported->dma_buf) {
 				/* if local dma_buf is created (if it's ever mapped),
 				 * retrieve it directly from struct dma_buf *
 				 */
-				*info = imported_sgt_info->dma_buf->size;
+				*info = imported->dma_buf->size;
 			} else {
 				/* calculate it from given nents, frst_ofst and last_len */
-				*info = HYPER_DMABUF_SIZE(imported_sgt_info->nents,
-							  imported_sgt_info->frst_ofst,
-							  imported_sgt_info->last_len);
+				*info = HYPER_DMABUF_SIZE(imported->nents,
+							  imported->frst_ofst,
+							  imported->last_len);
 			}
 			break;
 
 		/* whether the buffer is used or not */
 		case HYPER_DMABUF_QUERY_BUSY:
 			/* checks if it's used by importer */
-			*info = (imported_sgt_info->num_importers > 0) ? true : false;
+			*info = (imported->importers > 0);
 			break;
 
 		/* whether the buffer is unexported */
 		case HYPER_DMABUF_QUERY_UNEXPORTED:
-			*info = !imported_sgt_info->valid;
+			*info = !imported->valid;
 			break;
 		/* size of private info attached to buffer */
 		case HYPER_DMABUF_QUERY_PRIV_INFO_SIZE:
-			*info = imported_sgt_info->sz_priv;
+			*info = imported->sz_priv;
 			break;
 
 		/* copy private info attached to buffer */
 		case HYPER_DMABUF_QUERY_PRIV_INFO:
-			if (imported_sgt_info->sz_priv > 0) {
+			if (imported->sz_priv > 0) {
+				int n;
 				n = copy_to_user((void __user*) *info,
-						imported_sgt_info->priv,
-						imported_sgt_info->sz_priv);
+						imported->priv,
+						imported->sz_priv);
 				if (n != 0)
 					return -EINVAL;
 			}
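
Two notes on the query paths above. First, a worked example of HYPER_DMABUF_SIZE with illustrative values (PAGE_SIZE = 4096, nents = 3, frst_ofst = 256, last_len = 2048):

	3*4096 - 256 - 4096 + 2048 = 9984 bytes

i.e. one full middle page plus the used parts of the first and last pages. Second, copy_to_user() returns the number of bytes it could not copy, and the conventional error for a faulting user pointer is -EFAULT; a sketch of the PRIV_INFO copy written that way:

	if (exported->sz_priv > 0) {
		/* non-zero return means some bytes were not copied */
		if (copy_to_user((void __user *)*info, exported->priv,
				 exported->sz_priv))
			return -EFAULT;
	}
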
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
index 7bbb322..65ae738 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
@@ -1,10 +1,10 @@
 #ifndef __HYPER_DMABUF_QUERY_H__
 #define __HYPER_DMABUF_QUERY_H__
 
-int hyper_dmabuf_query_imported(struct hyper_dmabuf_imported_sgt_info *imported_sgt_info,
+int hyper_dmabuf_query_imported(struct imported_sgt_info *imported,
 				int query, unsigned long *info);
 
-int hyper_dmabuf_query_exported(struct hyper_dmabuf_sgt_info *sgt_info,
+int hyper_dmabuf_query_exported(struct exported_sgt_info *exported,
 				int query, unsigned long *info);
 
 #endif // __HYPER_DMABUF_QUERY_H__
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
index 9004406..01ec98c 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -39,8 +39,6 @@
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_sgl_proc.h"
 
-extern struct hyper_dmabuf_private hyper_dmabuf_private;
-
 /* Whenever importer does dma operations from remote domain,
  * a notification is sent to the exporter so that exporter
  * issues equivalent dma operation on the original dma buf
@@ -58,7 +56,7 @@ extern struct hyper_dmabuf_private hyper_dmabuf_private;
  */
 int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 {
-	struct hyper_dmabuf_sgt_info *sgt_info;
+	struct exported_sgt_info *exported;
 	struct sgt_list *sgtl;
 	struct attachment_list *attachl;
 	struct kmap_vaddr_list *va_kmapl;
@@ -66,10 +64,10 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 	int ret;
 
 	/* find a corresponding SGT for the id */
-	sgt_info = hyper_dmabuf_find_exported(hid);
+	exported = hyper_dmabuf_find_exported(hid);
 
-	if (!sgt_info) {
-		dev_err(hyper_dmabuf_private.device,
+	if (!exported) {
+		dev_err(hy_drv_priv->dev,
 			"dmabuf remote sync::can't find exported list\n");
 		return -ENOENT;
 	}
@@ -79,84 +77,84 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 		attachl = kcalloc(1, sizeof(*attachl), GFP_KERNEL);
 
 		if (!attachl) {
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_ATTACH\n");
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_ATTACH\n");
 			return -ENOMEM;
 		}
 
-		attachl->attach = dma_buf_attach(sgt_info->dma_buf,
-						 hyper_dmabuf_private.device);
+		attachl->attach = dma_buf_attach(exported->dma_buf,
+						 hy_drv_priv->dev);
 
 		if (!attachl->attach) {
 			kfree(attachl);
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_ATTACH\n");
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_ATTACH\n");
 			return -ENOMEM;
 		}
 
-		list_add(&attachl->list, &sgt_info->active_attached->list);
+		list_add(&attachl->list, &exported->active_attached->list);
 		break;
 
 	case HYPER_DMABUF_OPS_DETACH:
-		if (list_empty(&sgt_info->active_attached->list)) {
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_DETACH\n");
-			dev_err(hyper_dmabuf_private.device,
+		if (list_empty(&exported->active_attached->list)) {
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_DETACH\n");
+			dev_err(hy_drv_priv->dev,
 				"no more dmabuf attachment left to be detached\n");
 			return -EFAULT;
 		}
 
-		attachl = list_first_entry(&sgt_info->active_attached->list,
+		attachl = list_first_entry(&exported->active_attached->list,
 					   struct attachment_list, list);
 
-		dma_buf_detach(sgt_info->dma_buf, attachl->attach);
+		dma_buf_detach(exported->dma_buf, attachl->attach);
 		list_del(&attachl->list);
 		kfree(attachl);
 		break;
 
 	case HYPER_DMABUF_OPS_MAP:
-		if (list_empty(&sgt_info->active_attached->list)) {
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_MAP\n");
-			dev_err(hyper_dmabuf_private.device,
+		if (list_empty(&exported->active_attached->list)) {
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_MAP\n");
+			dev_err(hy_drv_priv->dev,
 				"no more dmabuf attachment left to be mapped\n");
 			return -EFAULT;
 		}
 
-		attachl = list_first_entry(&sgt_info->active_attached->list,
+		attachl = list_first_entry(&exported->active_attached->list,
 					   struct attachment_list, list);
 
 		sgtl = kcalloc(1, sizeof(*sgtl), GFP_KERNEL);
 
 		if (!sgtl) {
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_MAP\n");
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_MAP\n");
 			return -ENOMEM;
 		}
 
 		sgtl->sgt = dma_buf_map_attachment(attachl->attach, DMA_BIDIRECTIONAL);
 		if (!sgtl->sgt) {
 			kfree(sgtl);
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_MAP\n");
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_MAP\n");
 			return -ENOMEM;
 		}
-		list_add(&sgtl->list, &sgt_info->active_sgts->list);
+		list_add(&sgtl->list, &exported->active_sgts->list);
 		break;
 
 	case HYPER_DMABUF_OPS_UNMAP:
-		if (list_empty(&sgt_info->active_sgts->list) ||
-		    list_empty(&sgt_info->active_attached->list)) {
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_UNMAP\n");
-			dev_err(hyper_dmabuf_private.device,
+		if (list_empty(&exported->active_sgts->list) ||
+		    list_empty(&exported->active_attached->list)) {
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_UNMAP\n");
+			dev_err(hy_drv_priv->dev,
 				"no more SGT or attachment left to be unmapped\n");
 			return -EFAULT;
 		}
 
-		attachl = list_first_entry(&sgt_info->active_attached->list,
+		attachl = list_first_entry(&exported->active_attached->list,
 					   struct attachment_list, list);
-		sgtl = list_first_entry(&sgt_info->active_sgts->list,
+		sgtl = list_first_entry(&exported->active_sgts->list,
 					struct sgt_list, list);
 
 		dma_buf_unmap_attachment(attachl->attach, sgtl->sgt,
@@ -166,30 +164,30 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 		break;
 
 	case HYPER_DMABUF_OPS_RELEASE:
-		dev_dbg(hyper_dmabuf_private.device,
+		dev_dbg(hy_drv_priv->dev,
 			"Buffer {id:%d key:%d %d %d} released, references left: %d\n",
-			 sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
-			 sgt_info->hid.rng_key[2], sgt_info->importer_exported -1);
+			 exported->hid.id, exported->hid.rng_key[0], exported->hid.rng_key[1],
+			 exported->hid.rng_key[2], exported->active - 1);
 
-                sgt_info->importer_exported--;
+		exported->active--;
 		/* If there are still importers just break; if not, continue with final cleanup */
-		if (sgt_info->importer_exported)
+		if (exported->active)
 			break;
 
 		/*
 		 * Importer just released the buffer fd; check if any other importer is still using it.
 		 * If not and buffer was unexported, clean up shared data and remove that buffer.
 		 */
-		dev_dbg(hyper_dmabuf_private.device,
+		dev_dbg(hy_drv_priv->dev,
 			"Buffer {id:%d key:%d %d %d} final released\n",
-			sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
-			sgt_info->hid.rng_key[2]);
+			exported->hid.id, exported->hid.rng_key[0], exported->hid.rng_key[1],
+			exported->hid.rng_key[2]);
 
-		if (!sgt_info->valid && !sgt_info->importer_exported &&
-		    !sgt_info->unexport_scheduled) {
-			hyper_dmabuf_cleanup_sgt_info(sgt_info, false);
+		if (!exported->valid && !exported->active &&
+		    !exported->unexport_sched) {
+			hyper_dmabuf_cleanup_sgt_info(exported, false);
 			hyper_dmabuf_remove_exported(hid);
-			kfree(sgt_info);
+			kfree(exported);
 			/* store hyper_dmabuf_id in the list for reuse */
 			store_reusable_hid(hid);
 		}
@@ -197,19 +195,19 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 		break;
 
 	case HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS:
-		ret = dma_buf_begin_cpu_access(sgt_info->dma_buf, DMA_BIDIRECTIONAL);
+		ret = dma_buf_begin_cpu_access(exported->dma_buf, DMA_BIDIRECTIONAL);
 		if (ret) {
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS\n");
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS\n");
 			return ret;
 		}
 		break;
 
 	case HYPER_DMABUF_OPS_END_CPU_ACCESS:
-		ret = dma_buf_end_cpu_access(sgt_info->dma_buf, DMA_BIDIRECTIONAL);
+		ret = dma_buf_end_cpu_access(exported->dma_buf, DMA_BIDIRECTIONAL);
 		if (ret) {
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_END_CPU_ACCESS\n");
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_END_CPU_ACCESS\n");
 			return ret;
 		}
 		break;
@@ -218,49 +216,49 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 	case HYPER_DMABUF_OPS_KMAP:
 		va_kmapl = kcalloc(1, sizeof(*va_kmapl), GFP_KERNEL);
 		if (!va_kmapl) {
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
 			return -ENOMEM;
 		}
 
 		/* dummy kmapping of 1 page */
 		if (ops == HYPER_DMABUF_OPS_KMAP_ATOMIC)
-			va_kmapl->vaddr = dma_buf_kmap_atomic(sgt_info->dma_buf, 1);
+			va_kmapl->vaddr = dma_buf_kmap_atomic(exported->dma_buf, 1);
 		else
-			va_kmapl->vaddr = dma_buf_kmap(sgt_info->dma_buf, 1);
+			va_kmapl->vaddr = dma_buf_kmap(exported->dma_buf, 1);
 
 		if (!va_kmapl->vaddr) {
 			kfree(va_kmapl);
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
 			return -ENOMEM;
 		}
-		list_add(&va_kmapl->list, &sgt_info->va_kmapped->list);
+		list_add(&va_kmapl->list, &exported->va_kmapped->list);
 		break;
 
 	case HYPER_DMABUF_OPS_KUNMAP_ATOMIC:
 	case HYPER_DMABUF_OPS_KUNMAP:
-		if (list_empty(&sgt_info->va_kmapped->list)) {
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
-			dev_err(hyper_dmabuf_private.device,
+		if (list_empty(&exported->va_kmapped->list)) {
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
+			dev_err(hy_drv_priv->dev,
 				"no more dmabuf VA to be freed\n");
 			return -EFAULT;
 		}
 
-		va_kmapl = list_first_entry(&sgt_info->va_kmapped->list,
+		va_kmapl = list_first_entry(&exported->va_kmapped->list,
 					    struct kmap_vaddr_list, list);
 		if (!va_kmapl->vaddr) {
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
 			return PTR_ERR(va_kmapl->vaddr);
 		}
 
 		/* unmapping 1 page */
 		if (ops == HYPER_DMABUF_OPS_KUNMAP_ATOMIC)
-			dma_buf_kunmap_atomic(sgt_info->dma_buf, 1, va_kmapl->vaddr);
+			dma_buf_kunmap_atomic(exported->dma_buf, 1, va_kmapl->vaddr);
 		else
-			dma_buf_kunmap(sgt_info->dma_buf, 1, va_kmapl->vaddr);
+			dma_buf_kunmap(exported->dma_buf, 1, va_kmapl->vaddr);
 
 		list_del(&va_kmapl->list);
 		kfree(va_kmapl);
@@ -269,48 +267,48 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 	case HYPER_DMABUF_OPS_MMAP:
 		/* currently not supported: looking for a way to create
 		 * a dummy vma */
-		dev_warn(hyper_dmabuf_private.device,
-			 "dmabuf remote sync::sychronized mmap is not supported\n");
+		dev_warn(hy_drv_priv->dev,
+			 "remote sync::synchronized mmap is not supported\n");
 		break;
 
 	case HYPER_DMABUF_OPS_VMAP:
 		va_vmapl = kcalloc(1, sizeof(*va_vmapl), GFP_KERNEL);
 
 		if (!va_vmapl) {
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VMAP\n");
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_VMAP\n");
 			return -ENOMEM;
 		}
 
 		/* dummy vmapping */
-		va_vmapl->vaddr = dma_buf_vmap(sgt_info->dma_buf);
+		va_vmapl->vaddr = dma_buf_vmap(exported->dma_buf);
 
 		if (!va_vmapl->vaddr) {
 			kfree(va_vmapl);
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VMAP\n");
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_VMAP\n");
 			return -ENOMEM;
 		}
-		list_add(&va_vmapl->list, &sgt_info->va_vmapped->list);
+		list_add(&va_vmapl->list, &exported->va_vmapped->list);
 		break;
 
 	case HYPER_DMABUF_OPS_VUNMAP:
-		if (list_empty(&sgt_info->va_vmapped->list)) {
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VUNMAP\n");
-			dev_err(hyper_dmabuf_private.device,
+		if (list_empty(&exported->va_vmapped->list)) {
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_VUNMAP\n");
+			dev_err(hy_drv_priv->dev,
 				"no more dmabuf VA to be freed\n");
 			return -EFAULT;
 		}
-		va_vmapl = list_first_entry(&sgt_info->va_vmapped->list,
+		va_vmapl = list_first_entry(&exported->va_vmapped->list,
 					struct vmap_vaddr_list, list);
 		if (!va_vmapl || va_vmapl->vaddr == NULL) {
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VUNMAP\n");
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_VUNMAP\n");
 			return -EFAULT;
 		}
 
-		dma_buf_vunmap(sgt_info->dma_buf, va_vmapl->vaddr);
+		dma_buf_vunmap(exported->dma_buf, va_vmapl->vaddr);
 
 		list_del(&va_vmapl->list);
 		kfree(va_vmapl);
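
Each case above repeats one bookkeeping pattern: allocate a small list node, replay the mirrored dma-buf call on the exporter's copy of the buffer, then queue the node (or pop and free the head entry for the inverse op). A condensed sketch of the attach case; note dma_buf_attach() returns an ERR_PTR() on failure, so IS_ERR() is a safer check than the NULL test used above:

static int remote_sync_attach(struct exported_sgt_info *exported)
{
	struct attachment_list *attachl;

	attachl = kzalloc(sizeof(*attachl), GFP_KERNEL);
	if (!attachl)
		return -ENOMEM;

	attachl->attach = dma_buf_attach(exported->dma_buf,
					 hy_drv_priv->dev);
	if (IS_ERR(attachl->attach)) {
		int err = PTR_ERR(attachl->attach);

		kfree(attachl);
		return err;
	}

	list_add(&attachl->list, &exported->active_attached->list);
	return 0;
}
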
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
index 691a714..315c354 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
@@ -32,8 +32,6 @@
 #include <linux/slab.h>
 #include <linux/module.h>
 #include <linux/dma-buf.h>
-#include <xen/grant_table.h>
-#include <asm/xen/page.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_sgl_proc.h"
@@ -41,8 +39,6 @@
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_list.h"
 
-extern struct hyper_dmabuf_private hyper_dmabuf_private;
-
 #define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
 
 int dmabuf_refcount(struct dma_buf *dma_buf)
@@ -66,60 +62,68 @@ static int hyper_dmabuf_get_num_pgs(struct sg_table *sgt)
 	sgl = sgt->sgl;
 
 	length = sgl->length - PAGE_SIZE + sgl->offset;
-	num_pages += ((length + PAGE_SIZE - 1)/PAGE_SIZE); /* round-up */
+
+	/* round-up */
+	num_pages += ((length + PAGE_SIZE - 1)/PAGE_SIZE);
 
 	for (i = 1; i < sgt->nents; i++) {
 		sgl = sg_next(sgl);
-		num_pages += ((sgl->length + PAGE_SIZE - 1) / PAGE_SIZE); /* round-up */
+
+		/* round-up */
+		num_pages += ((sgl->length + PAGE_SIZE - 1) /
+			     PAGE_SIZE);
 	}
 
 	return num_pages;
 }
 
 /* extract pages directly from struct sg_table */
-struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
+struct pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
 {
-	struct hyper_dmabuf_pages_info *pinfo;
+	struct pages_info *pg_info;
 	int i, j, k;
 	int length;
 	struct scatterlist *sgl;
 
-	pinfo = kmalloc(sizeof(*pinfo), GFP_KERNEL);
-	if (!pinfo)
+	pg_info = kmalloc(sizeof(*pg_info), GFP_KERNEL);
+	if (!pg_info)
 		return NULL;
 
-	pinfo->pages = kmalloc(sizeof(struct page *)*hyper_dmabuf_get_num_pgs(sgt), GFP_KERNEL);
-	if (!pinfo->pages) {
-		kfree(pinfo);
+	pg_info->pgs = kmalloc(sizeof(struct page *) *
+			       hyper_dmabuf_get_num_pgs(sgt),
+			       GFP_KERNEL);
+
+	if (!pg_info->pgs) {
+		kfree(pg_info);
 		return NULL;
 	}
 
 	sgl = sgt->sgl;
 
-	pinfo->nents = 1;
-	pinfo->frst_ofst = sgl->offset;
-	pinfo->pages[0] = sg_page(sgl);
+	pg_info->nents = 1;
+	pg_info->frst_ofst = sgl->offset;
+	pg_info->pgs[0] = sg_page(sgl);
 	length = sgl->length - PAGE_SIZE + sgl->offset;
 	i = 1;
 
 	while (length > 0) {
-		pinfo->pages[i] = nth_page(sg_page(sgl), i);
+		pg_info->pgs[i] = nth_page(sg_page(sgl), i);
 		length -= PAGE_SIZE;
-		pinfo->nents++;
+		pg_info->nents++;
 		i++;
 	}
 
 	for (j = 1; j < sgt->nents; j++) {
 		sgl = sg_next(sgl);
-		pinfo->pages[i++] = sg_page(sgl);
+		pg_info->pgs[i++] = sg_page(sgl);
 		length = sgl->length - PAGE_SIZE;
-		pinfo->nents++;
+		pg_info->nents++;
 		k = 1;
 
 		while (length > 0) {
-			pinfo->pages[i++] = nth_page(sg_page(sgl), k++);
+			pg_info->pgs[i++] = nth_page(sg_page(sgl), k++);
 			length -= PAGE_SIZE;
-			pinfo->nents++;
+			pg_info->nents++;
 		}
 	}
 
@@ -127,13 +131,13 @@ struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
 	 * length at that point will be 0 or negative,
 	 * so to calculate last page size just add it to PAGE_SIZE
 	 */
-	pinfo->last_len = PAGE_SIZE + length;
+	pg_info->last_len = PAGE_SIZE + length;
 
-	return pinfo;
+	return pg_info;
 }
 
 /* create sg_table with given pages and other parameters */
-struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
+struct sg_table* hyper_dmabuf_create_sgt(struct page **pgs,
 					 int frst_ofst, int last_len, int nents)
 {
 	struct sg_table *sgt;
@@ -157,31 +161,32 @@ struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
 
 	sgl = sgt->sgl;
 
-	sg_set_page(sgl, pages[0], PAGE_SIZE-frst_ofst, frst_ofst);
+	sg_set_page(sgl, pgs[0], PAGE_SIZE-frst_ofst, frst_ofst);
 
 	for (i=1; i<nents-1; i++) {
 		sgl = sg_next(sgl);
-		sg_set_page(sgl, pages[i], PAGE_SIZE, 0);
+		sg_set_page(sgl, pgs[i], PAGE_SIZE, 0);
 	}
 
 	if (nents > 1) /* more than one page */ {
 		sgl = sg_next(sgl);
-		sg_set_page(sgl, pages[i], last_len, 0);
+		sg_set_page(sgl, pgs[i], last_len, 0);
 	}
 
 	return sgt;
 }
 
-int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int force)
+int hyper_dmabuf_cleanup_sgt_info(struct exported_sgt_info *exported,
+				  int force)
 {
 	struct sgt_list *sgtl;
 	struct attachment_list *attachl;
 	struct kmap_vaddr_list *va_kmapl;
 	struct vmap_vaddr_list *va_vmapl;
-	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
+	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
 
-	if (!sgt_info) {
-		dev_err(hyper_dmabuf_private.device, "invalid hyper_dmabuf_id\n");
+	if (!exported) {
+		dev_err(hy_drv_priv->dev, "invalid hyper_dmabuf_id\n");
 		return -EINVAL;
 	}
 
@@ -190,35 +195,37 @@ int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int fo
 	 * side.
 	 */
 	if (!force &&
-	    sgt_info->importer_exported) {
-		dev_warn(hyper_dmabuf_private.device, "dma-buf is used by importer\n");
+	    exported->active) {
+		dev_warn(hy_drv_priv->dev,
+			 "dma-buf is used by importer\n");
+
 		return -EPERM;
 	}
 
 	/* force == 1 is not recommended */
-	while (!list_empty(&sgt_info->va_kmapped->list)) {
-		va_kmapl = list_first_entry(&sgt_info->va_kmapped->list,
+	while (!list_empty(&exported->va_kmapped->list)) {
+		va_kmapl = list_first_entry(&exported->va_kmapped->list,
 					    struct kmap_vaddr_list, list);
 
-		dma_buf_kunmap(sgt_info->dma_buf, 1, va_kmapl->vaddr);
+		dma_buf_kunmap(exported->dma_buf, 1, va_kmapl->vaddr);
 		list_del(&va_kmapl->list);
 		kfree(va_kmapl);
 	}
 
-	while (!list_empty(&sgt_info->va_vmapped->list)) {
-		va_vmapl = list_first_entry(&sgt_info->va_vmapped->list,
+	while (!list_empty(&exported->va_vmapped->list)) {
+		va_vmapl = list_first_entry(&exported->va_vmapped->list,
 					    struct vmap_vaddr_list, list);
 
-		dma_buf_vunmap(sgt_info->dma_buf, va_vmapl->vaddr);
+		dma_buf_vunmap(exported->dma_buf, va_vmapl->vaddr);
 		list_del(&va_vmapl->list);
 		kfree(va_vmapl);
 	}
 
-	while (!list_empty(&sgt_info->active_sgts->list)) {
-		attachl = list_first_entry(&sgt_info->active_attached->list,
+	while (!list_empty(&exported->active_sgts->list)) {
+		attachl = list_first_entry(&exported->active_attached->list,
 					   struct attachment_list, list);
 
-		sgtl = list_first_entry(&sgt_info->active_sgts->list,
+		sgtl = list_first_entry(&exported->active_sgts->list,
 					struct sgt_list, list);
 
 		dma_buf_unmap_attachment(attachl->attach, sgtl->sgt,
@@ -227,35 +234,35 @@ int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int fo
 		kfree(sgtl);
 	}
 
-	while (!list_empty(&sgt_info->active_sgts->list)) {
-		attachl = list_first_entry(&sgt_info->active_attached->list,
+	while (!list_empty(&exported->active_attached->list)) {
+		attachl = list_first_entry(&exported->active_attached->list,
 					   struct attachment_list, list);
 
-		dma_buf_detach(sgt_info->dma_buf, attachl->attach);
+		dma_buf_detach(exported->dma_buf, attachl->attach);
 		list_del(&attachl->list);
 		kfree(attachl);
 	}
 
 	/* Start cleanup of buffer in reverse order to exporting */
-	ops->unshare_pages(&sgt_info->refs_info, sgt_info->nents);
+	ops->unshare_pages(&exported->refs_info, exported->nents);
 
 	/* unmap dma-buf */
-	dma_buf_unmap_attachment(sgt_info->active_attached->attach,
-				 sgt_info->active_sgts->sgt,
+	dma_buf_unmap_attachment(exported->active_attached->attach,
+				 exported->active_sgts->sgt,
 				 DMA_BIDIRECTIONAL);
 
 	/* detach dma-buf */
-	dma_buf_detach(sgt_info->dma_buf, sgt_info->active_attached->attach);
+	dma_buf_detach(exported->dma_buf, exported->active_attached->attach);
 
 	/* close connection to dma-buf completely */
-	dma_buf_put(sgt_info->dma_buf);
-	sgt_info->dma_buf = NULL;
-
-	kfree(sgt_info->active_sgts);
-	kfree(sgt_info->active_attached);
-	kfree(sgt_info->va_kmapped);
-	kfree(sgt_info->va_vmapped);
-	kfree(sgt_info->priv);
+	dma_buf_put(exported->dma_buf);
+	exported->dma_buf = NULL;
+
+	kfree(exported->active_sgts);
+	kfree(exported->active_attached);
+	kfree(exported->va_kmapped);
+	kfree(exported->va_vmapped);
+	kfree(exported->priv);
 
 	return 0;
 }
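
hyper_dmabuf_ext_pgs() and hyper_dmabuf_create_sgt() form a round trip: the exporter flattens an sg_table into a page array plus (frst_ofst, last_len, nents), and the importer rebuilds an equivalent table from exactly those values. A usage sketch (pg_info as returned by hyper_dmabuf_ext_pgs(); error handling omitted):

	struct sg_table *sgt;

	/* invariant for nents > 1:
	 * total bytes = (PAGE_SIZE - frst_ofst)
	 *             + (nents - 2) * PAGE_SIZE + last_len
	 */
	sgt = hyper_dmabuf_create_sgt(pg_info->pgs, pg_info->frst_ofst,
				      pg_info->last_len, pg_info->nents);
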
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
index 237ccf5..930bade 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
@@ -28,13 +28,15 @@
 int dmabuf_refcount(struct dma_buf *dma_buf);
 
 /* extract pages directly from struct sg_table */
-struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt);
+struct pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt);
 
 /* create sg_table with given pages and other parameters */
-struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
-                                int frst_ofst, int last_len, int nents);
+struct sg_table* hyper_dmabuf_create_sgt(struct page **pgs,
+					 int frst_ofst, int last_len,
+					 int nents);
 
-int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int force);
+int hyper_dmabuf_cleanup_sgt_info(struct exported_sgt_info *exported,
+				  int force);
 
 void hyper_dmabuf_free_sgt(struct sg_table *sgt);
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
index 6f929f2..8a612d1 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -50,24 +50,20 @@ struct vmap_vaddr_list {
 };
 
 /* Exporter builds pages_info before sharing pages */
-struct hyper_dmabuf_pages_info {
+struct pages_info {
         int frst_ofst; /* offset of data in the first page */
         int last_len; /* length of data in the last page */
         int nents; /* # of pages */
-        struct page **pages; /* pages that contains reference numbers of shared pages*/
+        struct page **pgs; /* pages backing the shared buffer */
 };
 
 
-/* Both importer and exporter use this structure to point to sg lists
- *
- * Exporter stores references to sgt in a hash table
+/* Exporter stores references to sgt in a hash table
  * Exporter keeps these references for synchronization and tracking purposes
- *
- * Importer use this structure exporting to other drivers in the same domain
  */
-struct hyper_dmabuf_sgt_info {
+struct exported_sgt_info {
         hyper_dmabuf_id_t hid; /* unique id to reference dmabuf in remote domain */
-	int hyper_dmabuf_rdomain; /* domain importing this sgt */
+	int rdomid; /* domain importing this sgt */
 
 	struct dma_buf *dma_buf; /* needed to store this for freeing it later */
 	int nents;
@@ -79,10 +75,10 @@ struct hyper_dmabuf_sgt_info {
 	struct vmap_vaddr_list *va_vmapped;
 
 	bool valid; /* set to 0 once unexported. Needed to prevent further mapping by importer */
-	int importer_exported; /* exported locally on importer's side */
+	int active; /* locally shared on importer's side */
 	void *refs_info; /* hypervisor-specific info for the references */
-	struct delayed_work unexport_work;
-	bool unexport_scheduled;
+	struct delayed_work unexport;
+	bool unexport_sched;
 
 	/* owner of buffer
 	 * TODO: that is naive as buffer may be reused by
@@ -99,7 +95,7 @@ struct hyper_dmabuf_sgt_info {
 /* Importer stores references (before mapping) to shared pages.
  * Importer stores these references in the table and maps them in
  * its own memory map once userspace asks for a reference to the buffer */
-struct hyper_dmabuf_imported_sgt_info {
+struct imported_sgt_info {
 	hyper_dmabuf_id_t hid; /* unique id for shared dmabuf imported */
 
 	int ref_handle; /* reference number of top level addressing page of shared pages */
@@ -112,7 +108,7 @@ struct hyper_dmabuf_imported_sgt_info {
 
 	void *refs_info;
 	bool valid;
-	int num_importers;
+	int importers;
 
 	size_t sz_priv;
 	char *priv; /* device specific info (e.g. image's meta info?) */
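
A concrete example of the layout fields shared by struct pages_info and struct imported_sgt_info (illustrative numbers, PAGE_SIZE = 4096): a 10000-byte buffer starting 512 bytes into its first page occupies bytes [512, 10512) of a three-page span, giving

	frst_ofst = 512		/* offset into the first page */
	nents     = 3		/* pages touched */
	last_len  = 2320	/* 10512 - 2*4096 bytes used in the last page */
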
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index 920ecf4..f70b4ea 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -45,8 +45,6 @@ static int export_req_id = 0;
 
 struct hyper_dmabuf_req req_pending = {0};
 
-extern struct hyper_dmabuf_private hyper_dmabuf_private;
-
 extern int xenstored_ready;
 
 static void xen_get_domid_delayed(struct work_struct *unused);
@@ -62,7 +60,9 @@ static int xen_comm_setup_data_dir(void)
 {
 	char buf[255];
 
-	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf", hyper_dmabuf_private.domid);
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf",
+		hy_drv_priv->domid);
+
 	return xenbus_mkdir(XBT_NIL, buf, "");
 }
 
@@ -76,7 +76,9 @@ static int xen_comm_destroy_data_dir(void)
 {
 	char buf[255];
 
-	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf", hyper_dmabuf_private.domid);
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf",
+		hy_drv_priv->domid);
+
 	return xenbus_rm(XBT_NIL, buf, "");
 }
 
@@ -91,20 +93,26 @@ static int xen_comm_expose_ring_details(int domid, int rdomid,
 	char buf[255];
 	int ret;
 
-	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d", domid, rdomid);
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d",
+		domid, rdomid);
+
 	ret = xenbus_printf(XBT_NIL, buf, "grefid", "%d", gref);
 
 	if (ret) {
-		dev_err(hyper_dmabuf_private.device,
-			"Failed to write xenbus entry %s: %d\n", buf, ret);
+		dev_err(hy_drv_priv->dev,
+			"Failed to write xenbus entry %s: %d\n",
+			buf, ret);
+
 		return ret;
 	}
 
 	ret = xenbus_printf(XBT_NIL, buf, "port", "%d", port);
 
 	if (ret) {
-		dev_err(hyper_dmabuf_private.device,
-			"Failed to write xenbus entry %s: %d\n", buf, ret);
+		dev_err(hy_drv_priv->dev,
+			"Failed to write xenbus entry %s: %d\n",
+			buf, ret);
+
 		return ret;
 	}
 
@@ -114,25 +122,32 @@ static int xen_comm_expose_ring_details(int domid, int rdomid,
 /*
  * Queries details of ring exposed by remote domain.
  */
-static int xen_comm_get_ring_details(int domid, int rdomid, int *grefid, int *port)
+static int xen_comm_get_ring_details(int domid, int rdomid,
+				     int *grefid, int *port)
 {
 	char buf[255];
 	int ret;
 
-	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d", rdomid, domid);
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d",
+		rdomid, domid);
+
 	ret = xenbus_scanf(XBT_NIL, buf, "grefid", "%d", grefid);
 
 	if (ret <= 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"Failed to read xenbus entry %s: %d\n", buf, ret);
+		dev_err(hy_drv_priv->dev,
+			"Failed to read xenbus entry %s: %d\n",
+			buf, ret);
+
 		return ret;
 	}
 
 	ret = xenbus_scanf(XBT_NIL, buf, "port", "%d", port);
 
 	if (ret <= 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"Failed to read xenbus entry %s: %d\n", buf, ret);
+		dev_err(hy_drv_priv->dev,
+			"Failed to read xenbus entry %s: %d\n",
+			buf, ret);
+
 		return ret;
 	}
 
@@ -146,9 +161,8 @@ void xen_get_domid_delayed(struct work_struct *unused)
 
 	/* scheduling another if driver is still running
 	 * and xenstore has not been initialized */
-	if (hyper_dmabuf_private.exited == false &&
-	    likely(xenstored_ready == 0)) {
-		dev_dbg(hyper_dmabuf_private.device,
+	if (likely(xenstored_ready == 0)) {
+		dev_dbg(hy_drv_priv->dev,
 			"Xenstore is not quite ready yet. Will retry it in 500ms\n");
 		schedule_delayed_work(&get_vm_id_work, msecs_to_jiffies(500));
 	} else {
@@ -163,14 +177,14 @@ void xen_get_domid_delayed(struct work_struct *unused)
 
 		/* try again since -1 is an invalid id for domain
 		 * (but only if driver is still running) */
-		if (hyper_dmabuf_private.exited == false && unlikely(domid == -1)) {
-			dev_dbg(hyper_dmabuf_private.device,
+		if (unlikely(domid == -1)) {
+			dev_dbg(hy_drv_priv->dev,
 				"domid==-1 is invalid. Will retry it in 500ms\n");
 			schedule_delayed_work(&get_vm_id_work, msecs_to_jiffies(500));
 		} else {
-			dev_info(hyper_dmabuf_private.device,
+			dev_info(hy_drv_priv->dev,
 				"Successfully retrieved domid from Xenstore:%d\n", domid);
-			hyper_dmabuf_private.domid = domid;
+			hy_drv_priv->domid = domid;
 		}
 	}
 }
@@ -232,28 +246,30 @@ static void remote_dom_exporter_watch_cb(struct xenbus_watch *watch,
 		return;
 	}
 
-	/* Check if we have importer ring for given remote domain alrady created */
+	/* Check if we have an importer ring for the given remote
+	 * domain already created */
+
 	ring_info = xen_comm_find_rx_ring(rdom);
 
-	/* Try to query remote domain exporter ring details - if that will
-	 * fail and we have importer ring that means remote domains has cleanup
-	 * its exporter ring, so our importer ring is no longer useful.
+	/* Try to query remote domain exporter ring details - if
+	 * that fails and we have an importer ring, that means the
+	 * remote domain has cleaned up its exporter ring, so our
+	 * importer ring is no longer useful.
 	 *
 	 * If querying details succeeds and we don't have an importer ring,
-	 * it means that remote domain has setup it for us and we should connect
-	 * to it.
+	 * it means that remote domain has set it up for us and we
+	 * should connect to it.
 	 */
 
-
-	ret = xen_comm_get_ring_details(hyper_dmabuf_xen_get_domid(), rdom,
-					&grefid, &port);
+	ret = xen_comm_get_ring_details(hyper_dmabuf_xen_get_domid(),
+					rdom, &grefid, &port);
 
 	if (ring_info && ret != 0) {
-		dev_info(hyper_dmabuf_private.device,
+		dev_info(hy_drv_priv->dev,
 			 "Remote exporter closed, cleaninup importer\n");
 		hyper_dmabuf_xen_cleanup_rx_rbuf(rdom);
 	} else if (!ring_info && ret == 0) {
-		dev_info(hyper_dmabuf_private.device,
+		dev_info(hy_drv_priv->dev,
 			 "Registering importer\n");
 		hyper_dmabuf_xen_init_rx_rbuf(rdom);
 	}
@@ -274,7 +290,7 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 	ring_info = xen_comm_find_tx_ring(domid);
 
 	if (ring_info) {
-		dev_info(hyper_dmabuf_private.device,
+		dev_info(hy_drv_priv->dev,
 			 "tx ring ch to domid = %d already exist\ngref = %d, port = %d\n",
 		ring_info->rdomain, ring_info->gref_ring, ring_info->port);
 		return 0;
@@ -283,7 +299,7 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
 
 	if (!ring_info) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"No more spae left\n");
 		return -ENOMEM;
 	}
@@ -313,9 +329,9 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 	alloc_unbound.dom = DOMID_SELF;
 	alloc_unbound.remote_dom = domid;
 	ret = HYPERVISOR_event_channel_op(EVTCHNOP_alloc_unbound,
-					&alloc_unbound);
+					  &alloc_unbound);
 	if (ret) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"Cannot allocate event channel\n");
 		kfree(ring_info);
 		return -EIO;
@@ -327,7 +343,7 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 					NULL, (void*) ring_info);
 
 	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"Failed to setup event channel\n");
 		close.port = alloc_unbound.port;
 		HYPERVISOR_event_channel_op(EVTCHNOP_close, &close);
@@ -343,7 +359,7 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 
 	mutex_init(&ring_info->lock);
 
-	dev_dbg(hyper_dmabuf_private.device,
+	dev_dbg(hy_drv_priv->dev,
 		"%s: allocated eventchannel gref %d  port: %d  irq: %d\n",
 		__func__,
 		ring_info->gref_ring,
@@ -364,7 +380,7 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 	ring_info->watch.node = (const char*) kmalloc(sizeof(char) * 255, GFP_KERNEL);
 
 	if (!ring_info->watch.node) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"No more space left\n");
 		kfree(ring_info);
 		return -ENOMEM;
@@ -414,7 +430,8 @@ void hyper_dmabuf_xen_cleanup_tx_rbuf(int domid)
 	if (!rx_ring_info)
 		return;
 
-	BACK_RING_INIT(&(rx_ring_info->ring_back), rx_ring_info->ring_back.sring, PAGE_SIZE);
+	BACK_RING_INIT(&(rx_ring_info->ring_back), rx_ring_info->ring_back.sring,
+		       PAGE_SIZE);
 }
 
 /* importer needs to know about shared page and port numbers for
@@ -436,25 +453,28 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 	ring_info = xen_comm_find_rx_ring(domid);
 
 	if (ring_info) {
-		dev_info(hyper_dmabuf_private.device,
-			 "rx ring ch from domid = %d already exist\n", ring_info->sdomain);
+		dev_info(hy_drv_priv->dev,
+			 "rx ring ch from domid = %d already exists\n",
+			 ring_info->sdomain);
+
 		return 0;
 	}
 
-
 	ret = xen_comm_get_ring_details(hyper_dmabuf_xen_get_domid(), domid,
 					&rx_gref, &rx_port);
 
 	if (ret) {
-		dev_err(hyper_dmabuf_private.device,
-			"Domain %d has not created exporter ring for current domain\n", domid);
+		dev_err(hy_drv_priv->dev,
+			"Domain %d has not created exporter ring for current domain\n",
+			domid);
+
 		return ret;
 	}
 
 	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
 
 	if (!ring_info) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"No memory left to be allocated\n");
 		return -ENOMEM;
 	}
@@ -465,7 +485,7 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 	map_ops = kmalloc(sizeof(*map_ops), GFP_KERNEL);
 
 	if (!map_ops) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"No memory left to be allocated\n");
 		ret = -ENOMEM;
 		goto fail_no_map_ops;
@@ -476,21 +496,23 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 		goto fail_others;
 	}
 
-	gnttab_set_map_op(&map_ops[0], (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
+	gnttab_set_map_op(&map_ops[0],
+			  (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
 			  GNTMAP_host_map, rx_gref, domid);
 
-	gnttab_set_unmap_op(&ring_info->unmap_op, (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
+	gnttab_set_unmap_op(&ring_info->unmap_op,
+			    (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
 			    GNTMAP_host_map, -1);
 
 	ret = gnttab_map_refs(map_ops, NULL, &shared_ring, 1);
 	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device, "Cannot map ring\n");
+		dev_err(hy_drv_priv->dev, "Cannot map ring\n");
 		ret = -EFAULT;
 		goto fail_others;
 	}
 
 	if (map_ops[0].status) {
-		dev_err(hyper_dmabuf_private.device, "Ring mapping failed\n");
+		dev_err(hy_drv_priv->dev, "Ring mapping failed\n");
 		ret = -EFAULT;
 		goto fail_others;
 	} else {
@@ -512,7 +534,7 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 
 	ring_info->irq = ret;
 
-	dev_dbg(hyper_dmabuf_private.device,
+	dev_dbg(hy_drv_priv->dev,
 		"%s: bound to eventchannel port: %d  irq: %d\n", __func__,
 		rx_port,
 		ring_info->irq);
@@ -569,7 +591,9 @@ void hyper_dmabuf_xen_cleanup_rx_rbuf(int domid)
 		return;
 
 	SHARED_RING_INIT(tx_ring_info->ring_front.sring);
-	FRONT_RING_INIT(&(tx_ring_info->ring_front), tx_ring_info->ring_front.sring, PAGE_SIZE);
+	FRONT_RING_INIT(&(tx_ring_info->ring_front),
+			tx_ring_info->ring_front.sring,
+			PAGE_SIZE);
 }
 
 #ifdef CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD
@@ -587,20 +611,20 @@ static void xen_rx_ch_add_delayed(struct work_struct *unused)
 	char buf[128];
 	int i, dummy;
 
-	dev_dbg(hyper_dmabuf_private.device,
+	dev_dbg(hy_drv_priv->dev,
 		"Scanning new tx channel comming from another domain\n");
 
 	/* check other domains and schedule another work if driver
 	 * is still running and backend is valid
 	 */
-	if (hyper_dmabuf_private.exited == false &&
-	    hyper_dmabuf_private.backend_initialized == true) {
+	if (hy_drv_priv &&
+	    hy_drv_priv->initialized) {
 		for (i = DOMID_SCAN_START; i < DOMID_SCAN_END + 1; i++) {
-			if (i == hyper_dmabuf_private.domid)
+			if (i == hy_drv_priv->domid)
 				continue;
 
-			sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d", i,
-				hyper_dmabuf_private.domid);
+			sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d",
+				i, hy_drv_priv->domid);
 
 			ret = xenbus_scanf(XBT_NIL, buf, "port", "%d", &dummy);
 
@@ -611,13 +635,14 @@ static void xen_rx_ch_add_delayed(struct work_struct *unused)
 				ret = hyper_dmabuf_xen_init_rx_rbuf(i);
 
 				if (!ret)
-					dev_info(hyper_dmabuf_private.device,
+					dev_info(hy_drv_priv->dev,
 						 "Finishing up setting up rx channel for domain %d\n", i);
 			}
 		}
 
 		/* check every 10 seconds */
-		schedule_delayed_work(&xen_rx_ch_auto_add_work, msecs_to_jiffies(10000));
+		schedule_delayed_work(&xen_rx_ch_auto_add_work,
+				      msecs_to_jiffies(10000));
 	}
 }
 
@@ -630,21 +655,21 @@ void xen_init_comm_env_delayed(struct work_struct *unused)
 	/* scheduling another work if driver is still running
 	 * and xenstore hasn't been initialized or dom_id hasn't
 	 * been correctly retrieved. */
-	if (hyper_dmabuf_private.exited == false &&
-	    likely(xenstored_ready == 0 ||
-	    hyper_dmabuf_private.domid == -1)) {
-		dev_dbg(hyper_dmabuf_private.device,
-			"Xenstore is not ready yet. Re-try this again in 500ms\n");
-		schedule_delayed_work(&xen_init_comm_env_work, msecs_to_jiffies(500));
+	if (likely(xenstored_ready == 0 ||
+	    hy_drv_priv->domid == -1)) {
+		dev_dbg(hy_drv_priv->dev,
+			"Xenstore not ready. Will re-try in 500ms\n");
+		schedule_delayed_work(&xen_init_comm_env_work,
+				      msecs_to_jiffies(500));
 	} else {
 		ret = xen_comm_setup_data_dir();
 		if (ret < 0) {
-			dev_err(hyper_dmabuf_private.device,
+			dev_err(hy_drv_priv->dev,
 				"Failed to create data dir in Xenstore\n");
 		} else {
-			dev_info(hyper_dmabuf_private.device,
-				"Successfully finished comm env initialization\n");
-			hyper_dmabuf_private.backend_initialized = true;
+			dev_info(hy_drv_priv->dev,
+				"Successfully finished comm env init\n");
+			hy_drv_priv->initialized = true;
 
 #ifdef CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD
 			xen_rx_ch_add_delayed(NULL);
@@ -659,20 +684,21 @@ int hyper_dmabuf_xen_init_comm_env(void)
 
 	xen_comm_ring_table_init();
 
-	if (unlikely(xenstored_ready == 0 || hyper_dmabuf_private.domid == -1)) {
+	if (unlikely(xenstored_ready == 0 ||
+	    hy_drv_priv->domid == -1)) {
 		xen_init_comm_env_delayed(NULL);
 		return -1;
 	}
 
 	ret = xen_comm_setup_data_dir();
 	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"Failed to create data dir in Xenstore\n");
 	} else {
-		dev_info(hyper_dmabuf_private.device,
+		dev_info(hy_drv_priv->dev,
 			"Successfully finished comm env initialization\n");
 
-		hyper_dmabuf_private.backend_initialized = true;
+		hy_drv_priv->initialized = true;
 	}
 
 	return ret;
@@ -691,7 +717,8 @@ void hyper_dmabuf_xen_destroy_comm(void)
 	xen_comm_destroy_data_dir();
 }
 
-int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
+int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req,
+			      int wait)
 {
 	struct xen_comm_front_ring *ring;
 	struct hyper_dmabuf_req *new_req;
@@ -706,22 +733,21 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
 	/* find a ring info for the channel */
 	ring_info = xen_comm_find_tx_ring(domid);
 	if (!ring_info) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"Can't find ring info for the channel\n");
 		return -ENOENT;
 	}
 
-	mutex_lock(&ring_info->lock);
 
 	ring = &ring_info->ring_front;
 
 	do_gettimeofday(&tv_start);
 
 	while (RING_FULL(ring)) {
-		dev_dbg(hyper_dmabuf_private.device, "RING_FULL\n");
+		dev_dbg(hy_drv_priv->dev, "RING_FULL\n");
 
 		if (timeout == 0) {
-			dev_err(hyper_dmabuf_private.device,
+			dev_err(hy_drv_priv->dev,
 				"Timeout while waiting for an entry in the ring\n");
 			return -EIO;
 		}
@@ -731,15 +757,17 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
 
 	timeout = 1000;
 
+	mutex_lock(&ring_info->lock);
+
 	new_req = RING_GET_REQUEST(ring, ring->req_prod_pvt);
 	if (!new_req) {
 		mutex_unlock(&ring_info->lock);
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"NULL REQUEST\n");
 		return -EIO;
 	}
 
-	req->request_id = xen_comm_next_req_id();
+	req->req_id = xen_comm_next_req_id();
 
 	/* update req_pending with current request */
 	memcpy(&req_pending, req, sizeof(req_pending));
@@ -756,7 +784,7 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
 
 	if (wait) {
 		while (timeout--) {
-			if (req_pending.status !=
+			if (req_pending.stat !=
 			    HYPER_DMABUF_REQ_NOT_RESPONDED)
 				break;
 			usleep_range(100, 120);
@@ -764,7 +792,7 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
 
 		if (timeout < 0) {
 			mutex_unlock(&ring_info->lock);
-			dev_err(hyper_dmabuf_private.device, "request timed-out\n");
+			dev_err(hy_drv_priv->dev, "request timed-out\n");
 			return -EBUSY;
 		}
 
@@ -781,10 +809,8 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
 		}
 
 		if (tv_diff.tv_sec != 0 && tv_diff.tv_usec > 16000)
-			dev_dbg(hyper_dmabuf_private.device, "send_req:time diff: %ld sec, %ld usec\n",
+			dev_dbg(hy_drv_priv->dev, "send_req:time diff: %ld sec, %ld usec\n",
 				tv_diff.tv_sec, tv_diff.tv_usec);
-
-		return req_pending.status;
 	}
 
 	mutex_unlock(&ring_info->lock);
@@ -808,7 +834,7 @@ static irqreturn_t back_ring_isr(int irq, void *info)
 	ring_info = (struct xen_comm_rx_ring_info *)info;
 	ring = &ring_info->ring_back;
 
-	dev_dbg(hyper_dmabuf_private.device, "%s\n", __func__);
+	dev_dbg(hy_drv_priv->dev, "%s\n", __func__);
 
 	do {
 		rc = ring->req_cons;
@@ -828,13 +854,13 @@ static irqreturn_t back_ring_isr(int irq, void *info)
 				 * the requester
 				 */
 				memcpy(&resp, &req, sizeof(resp));
-				memcpy(RING_GET_RESPONSE(ring, ring->rsp_prod_pvt), &resp,
-							sizeof(resp));
+				memcpy(RING_GET_RESPONSE(ring, ring->rsp_prod_pvt),
+							 &resp, sizeof(resp));
 				ring->rsp_prod_pvt++;
 
-				dev_dbg(hyper_dmabuf_private.device,
+				dev_dbg(hy_drv_priv->dev,
 					"sending response to exporter for request id:%d\n",
-					resp.response_id);
+					resp.resp_id);
 
 				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(ring, notify);
 
@@ -864,7 +890,7 @@ static irqreturn_t front_ring_isr(int irq, void *info)
 	ring_info = (struct xen_comm_tx_ring_info *)info;
 	ring = &ring_info->ring_front;
 
-	dev_dbg(hyper_dmabuf_private.device, "%s\n", __func__);
+	dev_dbg(hy_drv_priv->dev, "%s\n", __func__);
 
 	do {
 		more_to_do = 0;
@@ -876,33 +902,33 @@ static irqreturn_t front_ring_isr(int irq, void *info)
 			 * in the response
 			 */
 
-			dev_dbg(hyper_dmabuf_private.device,
+			dev_dbg(hy_drv_priv->dev,
 				"getting response from importer\n");
 
-			if (req_pending.request_id == resp->response_id) {
-				req_pending.status = resp->status;
+			if (req_pending.req_id == resp->resp_id) {
+				req_pending.stat = resp->stat;
 			}
 
-			if (resp->status == HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP) {
+			if (resp->stat == HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP) {
 				/* parsing response */
 				ret = hyper_dmabuf_msg_parse(ring_info->rdomain,
 							(struct hyper_dmabuf_req *)resp);
 
 				if (ret < 0) {
-					dev_err(hyper_dmabuf_private.device,
+					dev_err(hy_drv_priv->dev,
 						"getting error while parsing response\n");
 				}
-			} else if (resp->status == HYPER_DMABUF_REQ_PROCESSED) {
+			} else if (resp->stat == HYPER_DMABUF_REQ_PROCESSED) {
 				/* for debugging dma_buf remote synchronization */
-				dev_dbg(hyper_dmabuf_private.device,
-					"original request = 0x%x\n", resp->command);
-				dev_dbg(hyper_dmabuf_private.device,
+				dev_dbg(hy_drv_priv->dev,
+					"original request = 0x%x\n", resp->cmd);
+				dev_dbg(hy_drv_priv->dev,
 					"Just got HYPER_DMABUF_REQ_PROCESSED\n");
-			} else if (resp->status == HYPER_DMABUF_REQ_ERROR) {
+			} else if (resp->stat == HYPER_DMABUF_REQ_ERROR) {
 				/* for debugging dma_buf remote synchronization */
-				dev_dbg(hyper_dmabuf_private.device,
-					"original request = 0x%x\n", resp->command);
-				dev_dbg(hyper_dmabuf_private.device,
+				dev_dbg(hy_drv_priv->dev,
+					"original request = 0x%x\n", resp->cmd);
+				dev_dbg(hy_drv_priv->dev,
 					"Just got HYPER_DMABUF_REQ_ERROR\n");
 			}
 		}
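
The send path above follows the standard Xen split-ring idiom. A condensed sketch of just the ring interaction (ring macros from xen/interface/io/ring.h; assumes the ring is not full and that notification goes through the event-channel irq, as is conventional):

static void ring_send_one(struct xen_comm_front_ring *ring,
			  struct hyper_dmabuf_req *req, int irq)
{
	struct hyper_dmabuf_req *slot;
	int notify;

	slot = RING_GET_REQUEST(ring, ring->req_prod_pvt);
	memcpy(slot, req, sizeof(*slot));
	ring->req_prod_pvt++;

	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(ring, notify);
	if (notify)
		notify_remote_via_irq(irq);
}
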
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
index 4708b49..7a8ec73 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
@@ -38,8 +38,6 @@
 #include "hyper_dmabuf_xen_comm.h"
 #include "hyper_dmabuf_xen_comm_list.h"
 
-extern struct hyper_dmabuf_private hyper_dmabuf_private;
-
 DECLARE_HASHTABLE(xen_comm_tx_ring_hash, MAX_ENTRY_TX_RING);
 DECLARE_HASHTABLE(xen_comm_rx_ring_hash, MAX_ENTRY_RX_RING);
 
@@ -56,7 +54,7 @@ int xen_comm_add_tx_ring(struct xen_comm_tx_ring_info *ring_info)
 	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
 
 	if (!info_entry) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"No memory left to be allocated\n");
 		return -ENOMEM;
 	}
@@ -76,7 +74,7 @@ int xen_comm_add_rx_ring(struct xen_comm_rx_ring_info *ring_info)
 	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
 
 	if (!info_entry) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"No memory left to be allocated\n");
 		return -ENOMEM;
 	}
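
The two DECLARE_HASHTABLE() tables above pair the add paths shown here with lookups using the usual hash_for_each_possible() pattern, keyed by remote domain id. A sketch of the tx-side lookup (the entry type's field names are assumed, not taken from this patch):

struct xen_comm_tx_ring_info *xen_comm_find_tx_ring(int domid)
{
	struct xen_comm_tx_ring_info_entry *entry;

	hash_for_each_possible(xen_comm_tx_ring_hash, entry, node, domid)
		if (entry->info->rdomain == domid)
			return entry->info;

	return NULL;
}
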
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
index 908eda8..424417d 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
@@ -36,8 +36,6 @@
 
 #define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
 
-extern struct hyper_dmabuf_private hyper_dmabuf_private;
-
 /*
  * Creates 2 level page directory structure for referencing shared pages.
  * Top level page is a single page that contains up to 1024 refids that
@@ -98,7 +96,7 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 	sh_pages_info = kmalloc(sizeof(*sh_pages_info), GFP_KERNEL);
 
 	if (!sh_pages_info) {
-		dev_err(hyper_dmabuf_private.device, "No more space left\n");
+		dev_err(hy_drv_priv->dev, "No more space left\n");
 		return -ENOMEM;
 	}
 
@@ -107,10 +105,10 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 	/* share data pages in readonly mode for security */
 	for (i=0; i<nents; i++) {
 		lvl2_table[i] = gnttab_grant_foreign_access(domid,
-							    pfn_to_mfn(page_to_pfn(pages[i])),
-							    true /* read-only from remote domain */);
+					pfn_to_mfn(page_to_pfn(pages[i])),
+					true /* read-only from remote domain */);
 		if (lvl2_table[i] == -ENOSPC) {
-			dev_err(hyper_dmabuf_private.device, "No more space left in grant table\n");
+			dev_err(hy_drv_priv->dev, "No more space left in grant table\n");
 
 			/* Unshare all already shared pages for lvl2 */
 			while(i--) {
@@ -124,10 +122,11 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 	/* Share 2nd level addressing pages in readonly mode*/
 	for (i=0; i< n_lvl2_grefs; i++) {
 		lvl3_table[i] = gnttab_grant_foreign_access(domid,
-							    virt_to_mfn((unsigned long)lvl2_table+i*PAGE_SIZE ),
-							    true);
+					virt_to_mfn((unsigned long)lvl2_table+i*PAGE_SIZE ),
+					true);
+
 		if (lvl3_table[i] == -ENOSPC) {
-			dev_err(hyper_dmabuf_private.device, "No more space left in grant table\n");
+			dev_err(hy_drv_priv->dev, "No more space left in grant table\n");
 
 			/* Unshare all already shared pages for lvl3 */
 			while(i--) {
@@ -147,11 +146,11 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 
 	/* Share lvl3_table in readonly mode*/
 	lvl3_gref = gnttab_grant_foreign_access(domid,
-						virt_to_mfn((unsigned long)lvl3_table),
-						true);
+			virt_to_mfn((unsigned long)lvl3_table),
+			true);
 
 	if (lvl3_gref == -ENOSPC) {
-		dev_err(hyper_dmabuf_private.device, "No more space left in grant table\n");
+		dev_err(hy_drv_priv->dev, "No more space left in grant table\n");
 
 		/* Unshare all pages for lvl3 */
 		while(i--) {
@@ -178,7 +177,7 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 	/* Store exported pages refid to be unshared later */
 	sh_pages_info->lvl3_gref = lvl3_gref;
 
-	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
 	return lvl3_gref;
 
 err_cleanup:
@@ -190,16 +189,17 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 
 int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents) {
 	struct xen_shared_pages_info *sh_pages_info;
-	int n_lvl2_grefs = (nents/REFS_PER_PAGE + ((nents % REFS_PER_PAGE) ? 1: 0));
+	int n_lvl2_grefs = (nents/REFS_PER_PAGE +
+			    ((nents % REFS_PER_PAGE) ? 1: 0));
 	int i;
 
-	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
+	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
 	sh_pages_info = (struct xen_shared_pages_info *)(*refs_info);
 
 	if (sh_pages_info->lvl3_table == NULL ||
 	    sh_pages_info->lvl2_table ==  NULL ||
 	    sh_pages_info->lvl3_gref == -1) {
-		dev_warn(hyper_dmabuf_private.device,
+		dev_warn(hy_drv_priv->dev,
 			 "gref table for hyper_dmabuf already cleaned up\n");
 		return 0;
 	}
@@ -207,7 +207,7 @@ int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents) {
 	/* End foreign access for data pages, but do not free them */
 	for (i = 0; i < nents; i++) {
 		if (gnttab_query_foreign_access(sh_pages_info->lvl2_table[i])) {
-			dev_warn(hyper_dmabuf_private.device, "refid not shared !!\n");
+			dev_warn(hy_drv_priv->dev, "refid not shared !!\n");
 		}
 		gnttab_end_foreign_access_ref(sh_pages_info->lvl2_table[i], 0);
 		gnttab_free_grant_reference(sh_pages_info->lvl2_table[i]);
@@ -216,17 +216,17 @@ int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents) {
 	/* End foreign access for 2nd level addressing pages */
 	for (i = 0; i < n_lvl2_grefs; i++) {
 		if (gnttab_query_foreign_access(sh_pages_info->lvl3_table[i])) {
-			dev_warn(hyper_dmabuf_private.device, "refid not shared !!\n");
+			dev_warn(hy_drv_priv->dev, "refid not shared !!\n");
 		}
 		if (!gnttab_end_foreign_access_ref(sh_pages_info->lvl3_table[i], 1)) {
-			dev_warn(hyper_dmabuf_private.device, "refid still in use!!!\n");
+			dev_warn(hy_drv_priv->dev, "refid still in use!!!\n");
 		}
 		gnttab_free_grant_reference(sh_pages_info->lvl3_table[i]);
 	}
 
 	/* End foreign access for top level addressing page */
 	if (gnttab_query_foreign_access(sh_pages_info->lvl3_gref)) {
-		dev_warn(hyper_dmabuf_private.device, "gref not shared !!\n");
+		dev_warn(hy_drv_priv->dev, "gref not shared !!\n");
 	}
 
 	gnttab_end_foreign_access_ref(sh_pages_info->lvl3_gref, 1);
@@ -242,7 +242,7 @@ int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents) {
 	kfree(sh_pages_info);
 	sh_pages_info = NULL;
 
-	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
 	return 0;
 }
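
A worked example of the two-level table sizing used throughout this file (4 KiB pages and 4-byte grant_ref_t give REFS_PER_PAGE = 1024; nents = 2500 is illustrative):

	nents        = 2500 shared data pages
	n_lvl2_grefs = 2500/1024 + 1 = 3 second-level pages of refids
	lvl3         = 1 top-level page holding those 3 refids

Only the single grant reference of the top-level page (lvl3_gref) crosses the domain boundary; the importer walks it to reach everything else.
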
 
@@ -270,27 +270,33 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 
 	/* # of grefs in the last page of lvl2 table */
 	int nents_last = (nents - 1) % REFS_PER_PAGE + 1;
-	int n_lvl2_grefs = (nents / REFS_PER_PAGE) + ((nents_last > 0) ? 1 : 0) -
+	int n_lvl2_grefs = (nents / REFS_PER_PAGE) +
+			   ((nents_last > 0) ? 1 : 0) -
 			   (nents_last == REFS_PER_PAGE);
 	int i, j, k;
 
-	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
+	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
 
 	sh_pages_info = kmalloc(sizeof(*sh_pages_info), GFP_KERNEL);
 	*refs_info = (void *) sh_pages_info;
 
-	lvl2_table_pages = kcalloc(sizeof(struct page*), n_lvl2_grefs, GFP_KERNEL);
+	lvl2_table_pages = kcalloc(sizeof(struct page*), n_lvl2_grefs,
+				   GFP_KERNEL);
+
 	data_pages = kcalloc(sizeof(struct page*), nents, GFP_KERNEL);
 
-	lvl2_map_ops = kcalloc(sizeof(*lvl2_map_ops), n_lvl2_grefs, GFP_KERNEL);
-	lvl2_unmap_ops = kcalloc(sizeof(*lvl2_unmap_ops), n_lvl2_grefs, GFP_KERNEL);
+	lvl2_map_ops = kcalloc(sizeof(*lvl2_map_ops), n_lvl2_grefs,
+			       GFP_KERNEL);
+
+	lvl2_unmap_ops = kcalloc(sizeof(*lvl2_unmap_ops), n_lvl2_grefs,
+				 GFP_KERNEL);
 
 	data_map_ops = kcalloc(sizeof(*data_map_ops), nents, GFP_KERNEL);
 	data_unmap_ops = kcalloc(sizeof(*data_unmap_ops), nents, GFP_KERNEL);
 
 	/* Map top level addressing page */
 	if (gnttab_alloc_pages(1, &lvl3_table_page)) {
-		dev_err(hyper_dmabuf_private.device, "Cannot allocate pages\n");
+		dev_err(hy_drv_priv->dev, "Cannot allocate pages\n");
 		return NULL;
 	}
 
@@ -304,13 +310,16 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 			    GNTMAP_host_map | GNTMAP_readonly, -1);
 
 	if (gnttab_map_refs(&lvl3_map_ops, NULL, &lvl3_table_page, 1)) {
-		dev_err(hyper_dmabuf_private.device, "HYPERVISOR map grant ref failed");
+		dev_err(hy_drv_priv->dev,
+			"HYPERVISOR map grant ref failed");
 		return NULL;
 	}
 
 	if (lvl3_map_ops.status) {
-		dev_err(hyper_dmabuf_private.device, "HYPERVISOR map grant ref failed status = %d",
+		dev_err(hy_drv_priv->dev,
+			"HYPERVISOR map grant ref failed status = %d",
 			lvl3_map_ops.status);
+
 		goto error_cleanup_lvl3;
 	} else {
 		lvl3_unmap_ops.handle = lvl3_map_ops.handle;
@@ -318,35 +327,43 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 
 	/* Map all second level pages */
 	if (gnttab_alloc_pages(n_lvl2_grefs, lvl2_table_pages)) {
-		dev_err(hyper_dmabuf_private.device, "Cannot allocate pages\n");
+		dev_err(hy_drv_priv->dev, "Cannot allocate pages\n");
 		goto error_cleanup_lvl3;
 	}
 
 	for (i = 0; i < n_lvl2_grefs; i++) {
 		lvl2_table = (grant_ref_t *)pfn_to_kaddr(page_to_pfn(lvl2_table_pages[i]));
-		gnttab_set_map_op(&lvl2_map_ops[i], (unsigned long)lvl2_table, GNTMAP_host_map | GNTMAP_readonly,
+		gnttab_set_map_op(&lvl2_map_ops[i],
+				  (unsigned long)lvl2_table, GNTMAP_host_map |
+				  GNTMAP_readonly,
 				  lvl3_table[i], domid);
-		gnttab_set_unmap_op(&lvl2_unmap_ops[i], (unsigned long)lvl2_table, GNTMAP_host_map | GNTMAP_readonly, -1);
+		gnttab_set_unmap_op(&lvl2_unmap_ops[i],
+				    (unsigned long)lvl2_table, GNTMAP_host_map |
+				    GNTMAP_readonly, -1);
 	}
 
 	/* Unmap top level page, as it won't be needed any longer */
-	if (gnttab_unmap_refs(&lvl3_unmap_ops, NULL, &lvl3_table_page, 1)) {
-		dev_err(hyper_dmabuf_private.device, "xen: cannot unmap top level page\n");
+	if (gnttab_unmap_refs(&lvl3_unmap_ops, NULL,
+			      &lvl3_table_page, 1)) {
+		dev_err(hy_drv_priv->dev,
+			"xen: cannot unmap top level page\n");
 		return NULL;
 	} else {
 		/* Mark that page was unmapped */
 		lvl3_unmap_ops.handle = -1;
 	}
 
-	if (gnttab_map_refs(lvl2_map_ops, NULL, lvl2_table_pages, n_lvl2_grefs)) {
-		dev_err(hyper_dmabuf_private.device, "HYPERVISOR map grant ref failed");
+	if (gnttab_map_refs(lvl2_map_ops, NULL,
+			    lvl2_table_pages, n_lvl2_grefs)) {
+		dev_err(hy_drv_priv->dev,
+			"HYPERVISOR map grant ref failed");
 		return NULL;
 	}
 
 	/* Checks if pages were mapped correctly */
 	for (i = 0; i < n_lvl2_grefs; i++) {
 		if (lvl2_map_ops[i].status) {
-			dev_err(hyper_dmabuf_private.device,
+			dev_err(hy_drv_priv->dev,
 				"HYPERVISOR map grant ref failed status = %d",
 				lvl2_map_ops[i].status);
 			goto error_cleanup_lvl2;
@@ -356,7 +373,8 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 	}
 
 	if (gnttab_alloc_pages(nents, data_pages)) {
-		dev_err(hyper_dmabuf_private.device, "Cannot allocate pages\n");
+		dev_err(hy_drv_priv->dev,
+			"Cannot allocate pages\n");
 		goto error_cleanup_lvl2;
 	}
 
@@ -366,13 +384,13 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 		lvl2_table = pfn_to_kaddr(page_to_pfn(lvl2_table_pages[i]));
 		for (j = 0; j < REFS_PER_PAGE; j++) {
 			gnttab_set_map_op(&data_map_ops[k],
-					  (unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
-					  GNTMAP_host_map | GNTMAP_readonly,
-					  lvl2_table[j], domid);
+				(unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
+				GNTMAP_host_map | GNTMAP_readonly,
+				lvl2_table[j], domid);
 
 			gnttab_set_unmap_op(&data_unmap_ops[k],
-					    (unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
-					    GNTMAP_host_map | GNTMAP_readonly, -1);
+				(unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
+				GNTMAP_host_map | GNTMAP_readonly, -1);
 			k++;
 		}
 	}
@@ -382,25 +400,29 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 
 	for (j = 0; j < nents_last; j++) {
 		gnttab_set_map_op(&data_map_ops[k],
-				  (unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
-				  GNTMAP_host_map | GNTMAP_readonly,
-				  lvl2_table[j], domid);
+			(unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
+			GNTMAP_host_map | GNTMAP_readonly,
+			lvl2_table[j], domid);
 
 		gnttab_set_unmap_op(&data_unmap_ops[k],
-				    (unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
-				    GNTMAP_host_map | GNTMAP_readonly, -1);
+			(unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
+			GNTMAP_host_map | GNTMAP_readonly, -1);
 		k++;
 	}
 
-	if (gnttab_map_refs(data_map_ops, NULL, data_pages, nents)) {
-		dev_err(hyper_dmabuf_private.device, "HYPERVISOR map grant ref failed\n");
+	if (gnttab_map_refs(data_map_ops, NULL,
+			    data_pages, nents)) {
+		dev_err(hy_drv_priv->dev,
+			"HYPERVISOR map grant ref failed\n");
 		return NULL;
 	}
 
 	/* unmapping lvl2 table pages */
-	if (gnttab_unmap_refs(lvl2_unmap_ops, NULL, lvl2_table_pages,
+	if (gnttab_unmap_refs(lvl2_unmap_ops,
+			      NULL, lvl2_table_pages,
 			      n_lvl2_grefs)) {
-		dev_err(hyper_dmabuf_private.device, "Cannot unmap 2nd level refs\n");
+		dev_err(hy_drv_priv->dev,
+			"Cannot unmap 2nd level refs\n");
 		return NULL;
 	} else {
 		/* Mark that pages were unmapped */
@@ -411,7 +433,7 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 
 	for (i = 0; i < nents; i++) {
 		if (data_map_ops[i].status) {
-			dev_err(hyper_dmabuf_private.device,
+			dev_err(hy_drv_priv->dev,
 				"HYPERVISOR map grant ref failed status = %d\n",
 				data_map_ops[i].status);
 			goto error_cleanup_data;
@@ -431,7 +453,7 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 	kfree(lvl2_unmap_ops);
 	kfree(data_map_ops);
 
-	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
 	return data_pages;
 
 error_cleanup_data:
@@ -442,13 +464,14 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 
 error_cleanup_lvl2:
 	if (lvl2_unmap_ops[0].handle != -1)
-		gnttab_unmap_refs(lvl2_unmap_ops, NULL, lvl2_table_pages,
-				  n_lvl2_grefs);
+		gnttab_unmap_refs(lvl2_unmap_ops, NULL,
+				  lvl2_table_pages, n_lvl2_grefs);
 	gnttab_free_pages(n_lvl2_grefs, lvl2_table_pages);
 
 error_cleanup_lvl3:
 	if (lvl3_unmap_ops.handle != -1)
-		gnttab_unmap_refs(&lvl3_unmap_ops, NULL, &lvl3_table_page, 1);
+		gnttab_unmap_refs(&lvl3_unmap_ops, NULL,
+				  &lvl3_table_page, 1);
 	gnttab_free_pages(1, &lvl3_table_page);
 
 	kfree(lvl2_table_pages);
@@ -463,20 +486,20 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 int hyper_dmabuf_xen_unmap_shared_pages(void **refs_info, int nents) {
 	struct xen_shared_pages_info *sh_pages_info;
 
-	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
+	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
 
 	sh_pages_info = (struct xen_shared_pages_info *)(*refs_info);
 
 	if (sh_pages_info->unmap_ops == NULL ||
 	    sh_pages_info->data_pages == NULL) {
-		dev_warn(hyper_dmabuf_private.device,
-			 "Imported pages already cleaned up or buffer was not imported yet\n");
+		dev_warn(hy_drv_priv->dev,
+			 "pages already cleaned up or buffer not imported yet\n");
 		return 0;
 	}
 
 	if (gnttab_unmap_refs(sh_pages_info->unmap_ops, NULL,
 			      sh_pages_info->data_pages, nents) ) {
-		dev_err(hyper_dmabuf_private.device, "Cannot unmap data pages\n");
+		dev_err(hy_drv_priv->dev, "Cannot unmap data pages\n");
 		return -EFAULT;
 	}
 
@@ -489,6 +512,6 @@ int hyper_dmabuf_xen_unmap_shared_pages(void **refs_info, int nents) {
 	kfree(sh_pages_info);
 	sh_pages_info = NULL;
 
-	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
 	return 0;
 }
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 49/60] hyper_dmabuf: general clean-up and fixes
@ 2017-12-19 19:30   ` Dongwon Kim
  0 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: xen-devel, mateuszx.potrola, dri-devel, dongwon.kim

1. The global hyper_dmabuf_private is now a pointer (*hy_drv_priv)
   to a private data structure that is allocated when the driver
   initializes and freed when the driver exits (a minimal sketch of
   this pattern follows this list).

2. Use shorter variable and type names.

3. Remove unnecessary NULL checks.

4. Event-polling related functions are now compiled only if
   CONFIG_HYPER_DMABUF_EVENT_GEN is enabled.

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/Makefile                  |   7 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h       |  25 -
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        | 164 +++---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        |  13 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c      |  60 +--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c         |  16 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 569 ++++++++++-----------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h      |   2 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c       |  88 ++--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h       |  18 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        | 259 +++++-----
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h        |  18 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c        | 284 +++++-----
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h        |   4 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c      |  58 +--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h      |   4 +-
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c    | 170 +++---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c   | 123 ++---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h   |  10 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h     |  24 +-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   | 240 +++++----
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c  |   6 +-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c    | 147 +++---
 23 files changed, 1144 insertions(+), 1165 deletions(-)
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h

diff --git a/drivers/xen/hyper_dmabuf/Makefile b/drivers/xen/hyper_dmabuf/Makefile
index 1cd7a81..a113bfc 100644
--- a/drivers/xen/hyper_dmabuf/Makefile
+++ b/drivers/xen/hyper_dmabuf/Makefile
@@ -13,9 +13,12 @@ ifneq ($(KERNELRELEASE),)
 				 hyper_dmabuf_id.o \
 				 hyper_dmabuf_remote_sync.o \
 				 hyper_dmabuf_query.o \
-				 hyper_dmabuf_event.o \
 
-ifeq ($(CONFIG_XEN), y)
+ifeq ($(CONFIG_HYPER_DMABUF_EVENT_GEN), y)
+	$(TARGET_MODULE)-objs += hyper_dmabuf_event.o
+endif
+
+ifeq ($(CONFIG_HYPER_DMABUF_XEN), y)
 	$(TARGET_MODULE)-objs += xen/hyper_dmabuf_xen_comm.o \
 				 xen/hyper_dmabuf_xen_comm_list.o \
 				 xen/hyper_dmabuf_xen_shm.o \
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
deleted file mode 100644
index d5125f2..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
+++ /dev/null
@@ -1,25 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-/* configuration */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 1c35a59..525ee78 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -36,7 +36,6 @@
 #include <linux/poll.h>
 #include <linux/dma-buf.h>
 #include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_conf.h"
 #include "hyper_dmabuf_ioctl.h"
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_list.h"
@@ -51,13 +50,32 @@ extern struct hyper_dmabuf_backend_ops xen_backend_ops;
 MODULE_LICENSE("GPL and additional rights");
 MODULE_AUTHOR("Intel Corporation");
 
-struct hyper_dmabuf_private hyper_dmabuf_private;
+struct hyper_dmabuf_private *hy_drv_priv;
 
 long hyper_dmabuf_ioctl(struct file *filp,
 			unsigned int cmd, unsigned long param);
 
-void hyper_dmabuf_emergency_release(struct hyper_dmabuf_sgt_info* sgt_info,
-				    void *attr);
+static void hyper_dmabuf_force_free(struct exported_sgt_info* exported,
+			            void *attr)
+{
+	struct ioctl_hyper_dmabuf_unexport unexport_attr;
+	struct file *filp = (struct file*) attr;
+
+	if (!filp || !exported)
+		return;
+
+	if (exported->filp == filp) {
+		dev_dbg(hy_drv_priv->dev,
+			"Forcefully releasing buffer {id:%d key:%d %d %d}\n",
+			 exported->hid.id, exported->hid.rng_key[0],
+			 exported->hid.rng_key[1], exported->hid.rng_key[2]);
+
+		unexport_attr.hid = exported->hid;
+		unexport_attr.delay_ms = 0;
+
+		hyper_dmabuf_unexport_ioctl(filp, &unexport_attr);
+	}
+}
 
 int hyper_dmabuf_open(struct inode *inode, struct file *filp)
 {
@@ -72,18 +90,20 @@ int hyper_dmabuf_open(struct inode *inode, struct file *filp)
 
 int hyper_dmabuf_release(struct inode *inode, struct file *filp)
 {
-	hyper_dmabuf_foreach_exported(hyper_dmabuf_emergency_release, filp);
+	hyper_dmabuf_foreach_exported(hyper_dmabuf_force_free, filp);
 
 	return 0;
 }
 
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
+
 unsigned int hyper_dmabuf_event_poll(struct file *filp, struct poll_table_struct *wait)
 {
 	unsigned int mask = 0;
 
-	poll_wait(filp, &hyper_dmabuf_private.event_wait, wait);
+	poll_wait(filp, &hy_drv_priv->event_wait, wait);
 
-	if (!list_empty(&hyper_dmabuf_private.event_list))
+	if (!list_empty(&hy_drv_priv->event_list))
 		mask |= POLLIN | POLLRDNORM;
 
 	return mask;
@@ -96,32 +116,32 @@ ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
 
 	/* only root can read events */
 	if (!capable(CAP_DAC_OVERRIDE)) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"Only root can read events\n");
 		return -EFAULT;
 	}
 
 	/* make sure user buffer can be written */
 	if (!access_ok(VERIFY_WRITE, buffer, count)) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"User buffer can't be written.\n");
 		return -EFAULT;
 	}
 
-	ret = mutex_lock_interruptible(&hyper_dmabuf_private.event_read_lock);
+	ret = mutex_lock_interruptible(&hy_drv_priv->event_read_lock);
 	if (ret)
 		return ret;
 
 	while (1) {
 		struct hyper_dmabuf_event *e = NULL;
 
-		spin_lock_irq(&hyper_dmabuf_private.event_lock);
-		if (!list_empty(&hyper_dmabuf_private.event_list)) {
-			e = list_first_entry(&hyper_dmabuf_private.event_list,
+		spin_lock_irq(&hy_drv_priv->event_lock);
+		if (!list_empty(&hy_drv_priv->event_list)) {
+			e = list_first_entry(&hy_drv_priv->event_list,
 					struct hyper_dmabuf_event, link);
 			list_del(&e->link);
 		}
-		spin_unlock_irq(&hyper_dmabuf_private.event_lock);
+		spin_unlock_irq(&hy_drv_priv->event_lock);
 
 		if (!e) {
 			if (ret)
@@ -131,12 +151,12 @@ ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
 				break;
 			}
 
-			mutex_unlock(&hyper_dmabuf_private.event_read_lock);
-			ret = wait_event_interruptible(hyper_dmabuf_private.event_wait,
-						       !list_empty(&hyper_dmabuf_private.event_list));
+			mutex_unlock(&hy_drv_priv->event_read_lock);
+			ret = wait_event_interruptible(hy_drv_priv->event_wait,
+						       !list_empty(&hy_drv_priv->event_list));
 
 			if (ret == 0)
-				ret = mutex_lock_interruptible(&hyper_dmabuf_private.event_read_lock);
+				ret = mutex_lock_interruptible(&hy_drv_priv->event_read_lock);
 
 			if (ret)
 				return ret;
@@ -145,9 +165,9 @@ ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
 
 			if (length > count - ret) {
 put_back_event:
-				spin_lock_irq(&hyper_dmabuf_private.event_lock);
-				list_add(&e->link, &hyper_dmabuf_private.event_list);
-				spin_unlock_irq(&hyper_dmabuf_private.event_lock);
+				spin_lock_irq(&hy_drv_priv->event_lock);
+				list_add(&e->link, &hy_drv_priv->event_list);
+				spin_unlock_irq(&hy_drv_priv->event_lock);
 				break;
 			}
 
@@ -170,7 +190,7 @@ ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
 				/* nullifying hdr of the event in user buffer */
 				if (copy_to_user(buffer + ret, &dummy_hdr,
 						 sizeof(dummy_hdr))) {
-					dev_err(hyper_dmabuf_private.device,
+					dev_err(hy_drv_priv->dev,
 						"failed to nullify invalid hdr already in userspace\n");
 				}
 
@@ -180,23 +200,30 @@ ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
 			}
 
 			ret += e->event_data.hdr.size;
-			hyper_dmabuf_private.curr_num_event--;
+			hy_drv_priv->pending--;
 			kfree(e);
 		}
 	}
 
-	mutex_unlock(&hyper_dmabuf_private.event_read_lock);
+	mutex_unlock(&hy_drv_priv->event_read_lock);
 
 	return ret;
 }
 
+#endif
+
 static struct file_operations hyper_dmabuf_driver_fops =
 {
 	.owner = THIS_MODULE,
 	.open = hyper_dmabuf_open,
 	.release = hyper_dmabuf_release,
+
+/* poll and read interfaces are needed only for event-polling */
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
 	.read = hyper_dmabuf_event_read,
 	.poll = hyper_dmabuf_event_poll,
+#endif
+
 	.unlocked_ioctl = hyper_dmabuf_ioctl,
 };
 
@@ -217,17 +244,17 @@ int register_device(void)
 		return ret;
 	}
 
-	hyper_dmabuf_private.device = hyper_dmabuf_miscdev.this_device;
+	hy_drv_priv->dev = hyper_dmabuf_miscdev.this_device;
 
 	/* TODO: Check if there is a different way to initialize dma mask nicely */
-	dma_coerce_mask_and_coherent(hyper_dmabuf_private.device, DMA_BIT_MASK(64));
+	dma_coerce_mask_and_coherent(hy_drv_priv->dev, DMA_BIT_MASK(64));
 
 	return ret;
 }
 
 void unregister_device(void)
 {
-	dev_info(hyper_dmabuf_private.device,
+	dev_info(hy_drv_priv->dev,
 		"hyper_dmabuf: unregister_device() is called\n");
 
 	misc_deregister(&hyper_dmabuf_miscdev);
@@ -239,9 +266,13 @@ static int __init hyper_dmabuf_drv_init(void)
 
 	printk( KERN_NOTICE "hyper_dmabuf_starting: Initialization started\n");
 
-	mutex_init(&hyper_dmabuf_private.lock);
-	mutex_init(&hyper_dmabuf_private.event_read_lock);
-	spin_lock_init(&hyper_dmabuf_private.event_lock);
+	hy_drv_priv = kcalloc(1, sizeof(struct hyper_dmabuf_private),
+			      GFP_KERNEL);
+
+	if (!hy_drv_priv) {
+		printk( KERN_ERR "hyper_dmabuf: Failed to create drv\n");
+		return -1;
+	}
 
 	ret = register_device();
 	if (ret < 0) {
@@ -251,64 +282,72 @@ static int __init hyper_dmabuf_drv_init(void)
 /* currently only supports XEN hypervisor */
 
 #ifdef CONFIG_HYPER_DMABUF_XEN
-	hyper_dmabuf_private.backend_ops = &xen_backend_ops;
+	hy_drv_priv->backend_ops = &xen_backend_ops;
 #else
-	hyper_dmabuf_private.backend_ops = NULL;
+	hy_drv_priv->backend_ops = NULL;
 	printk( KERN_ERR "hyper_dmabuf drv currently supports XEN only.\n");
 #endif
 
-	if (hyper_dmabuf_private.backend_ops == NULL) {
+	if (hy_drv_priv->backend_ops == NULL) {
 		printk( KERN_ERR "Hyper_dmabuf: failed to be loaded - no backend found\n");
 		return -1;
 	}
 
-	mutex_lock(&hyper_dmabuf_private.lock);
+	/* initializing mutexes and a spinlock */
+	mutex_init(&hy_drv_priv->lock);
+
+	mutex_lock(&hy_drv_priv->lock);
 
-	hyper_dmabuf_private.backend_initialized = false;
+	hy_drv_priv->initialized = false;
 
-	dev_info(hyper_dmabuf_private.device,
+	dev_info(hy_drv_priv->dev,
 		 "initializing database for imported/exported dmabufs\n");
 
 	/* device structure initialization */
 	/* currently only does work-queue initialization */
-	hyper_dmabuf_private.work_queue = create_workqueue("hyper_dmabuf_wqueue");
+	hy_drv_priv->work_queue = create_workqueue("hyper_dmabuf_wqueue");
 
 	ret = hyper_dmabuf_table_init();
 	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"failed to initialize table for exported/imported entries\n");
 		return ret;
 	}
 
 #ifdef CONFIG_HYPER_DMABUF_SYSFS
-	ret = hyper_dmabuf_register_sysfs(hyper_dmabuf_private.device);
+	ret = hyper_dmabuf_register_sysfs(hy_drv_priv->dev);
 	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"failed to initialize sysfs\n");
 		return ret;
 	}
 #endif
 
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
+	mutex_init(&hy_drv_priv->event_read_lock);
+	spin_lock_init(&hy_drv_priv->event_lock);
+
 	/* Initialize event queue */
-	INIT_LIST_HEAD(&hyper_dmabuf_private.event_list);
-	init_waitqueue_head(&hyper_dmabuf_private.event_wait);
+	INIT_LIST_HEAD(&hy_drv_priv->event_list);
+	init_waitqueue_head(&hy_drv_priv->event_wait);
 
-	hyper_dmabuf_private.curr_num_event = 0;
-	hyper_dmabuf_private.exited = false;
+	/* resetting number of pending events */
+	hy_drv_priv->pending = 0;
+#endif
 
-	hyper_dmabuf_private.domid = hyper_dmabuf_private.backend_ops->get_vm_id();
+	hy_drv_priv->domid = hy_drv_priv->backend_ops->get_vm_id();
 
-	ret = hyper_dmabuf_private.backend_ops->init_comm_env();
+	ret = hy_drv_priv->backend_ops->init_comm_env();
 	if (ret < 0) {
-		dev_dbg(hyper_dmabuf_private.device,
+		dev_dbg(hy_drv_priv->dev,
 			"failed to initialize comm-env but it will re-attempt.\n");
 	} else {
-		hyper_dmabuf_private.backend_initialized = true;
+		hy_drv_priv->initialized = true;
 	}
 
-	mutex_unlock(&hyper_dmabuf_private.lock);
+	mutex_unlock(&hy_drv_priv->lock);
 
-	dev_info(hyper_dmabuf_private.device,
+	dev_info(hy_drv_priv->dev,
 		"Finishing up initialization of hyper_dmabuf drv\n");
 
 	/* interrupt for comm should be registered here: */
@@ -318,34 +357,39 @@ static int __init hyper_dmabuf_drv_init(void)
 static void hyper_dmabuf_drv_exit(void)
 {
 #ifdef CONFIG_HYPER_DMABUF_SYSFS
-	hyper_dmabuf_unregister_sysfs(hyper_dmabuf_private.device);
+	hyper_dmabuf_unregister_sysfs(hy_drv_priv->dev);
 #endif
 
-	mutex_lock(&hyper_dmabuf_private.lock);
+	mutex_lock(&hy_drv_priv->lock);
 
 	/* hash tables for export/import entries and ring_infos */
 	hyper_dmabuf_table_destroy();
 
-	hyper_dmabuf_private.backend_ops->destroy_comm();
+	hy_drv_priv->backend_ops->destroy_comm();
 
 	/* destroy workqueue */
-	if (hyper_dmabuf_private.work_queue)
-		destroy_workqueue(hyper_dmabuf_private.work_queue);
+	if (hy_drv_priv->work_queue)
+		destroy_workqueue(hy_drv_priv->work_queue);
 
 	/* destroy id_queue */
-	if (hyper_dmabuf_private.id_queue)
+	if (hy_drv_priv->id_queue)
 		destroy_reusable_list();
 
-	hyper_dmabuf_private.exited = true;
-
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
 	/* clean up event queue */
 	hyper_dmabuf_events_release();
+#endif
 
-	mutex_unlock(&hyper_dmabuf_private.lock);
+	mutex_unlock(&hy_drv_priv->lock);
 
-	dev_info(hyper_dmabuf_private.device,
+	dev_info(hy_drv_priv->dev,
 		 "hyper_dmabuf driver: Exiting\n");
 
+	if (hy_drv_priv) {
+		kfree(hy_drv_priv);
+		hy_drv_priv = NULL;
+	}
+
 	unregister_device();
 }
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
index a4acdd9f..2ead41b 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -36,7 +36,7 @@ struct hyper_dmabuf_event {
 };
 
 struct hyper_dmabuf_private {
-        struct device *device;
+        struct device *dev;
 
 	/* VM(domain) id of current VM instance */
 	int domid;
@@ -55,7 +55,7 @@ struct hyper_dmabuf_private {
 	struct mutex lock;
 
 	/* flag that shows whether backend is initialized */
-	bool backend_initialized;
+	bool initialized;
 
         wait_queue_head_t event_wait;
         struct list_head event_list;
@@ -63,10 +63,8 @@ struct hyper_dmabuf_private {
 	spinlock_t event_lock;
 	struct mutex event_read_lock;
 
-	int curr_num_event;
-
-	/* indicate whether the driver is unloaded */
-	bool exited;
+	/* # of pending events */
+	int pending;
 };
 
 struct list_reusable_id {
@@ -108,4 +106,7 @@ struct hyper_dmabuf_backend_ops {
 	int (*send_req)(int, struct hyper_dmabuf_req *, int);
 };
 
+/* exporting global drv private info */
+extern struct hyper_dmabuf_private *hy_drv_priv;
+
 #endif /* __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
index 3e1498c..0498cda 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
@@ -32,37 +32,33 @@
 #include <linux/slab.h>
 #include <linux/module.h>
 #include <linux/dma-buf.h>
-#include <xen/grant_table.h>
-#include <asm/xen/page.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_event.h"
 
-extern struct hyper_dmabuf_private hyper_dmabuf_private;
-
 static void hyper_dmabuf_send_event_locked(struct hyper_dmabuf_event *e)
 {
 	struct hyper_dmabuf_event *oldest;
 
-	assert_spin_locked(&hyper_dmabuf_private.event_lock);
+	assert_spin_locked(&hy_drv_priv->event_lock);
 
 	/* check the current number of events; if it hits the max allowed,
 	 * remove the oldest event in the list */
-	if (hyper_dmabuf_private.curr_num_event > MAX_DEPTH_EVENT_QUEUE - 1) {
-		oldest = list_first_entry(&hyper_dmabuf_private.event_list,
+	if (hy_drv_priv->pending > MAX_DEPTH_EVENT_QUEUE - 1) {
+		oldest = list_first_entry(&hy_drv_priv->event_list,
 				struct hyper_dmabuf_event, link);
 		list_del(&oldest->link);
-		hyper_dmabuf_private.curr_num_event--;
+		hy_drv_priv->pending--;
 		kfree(oldest);
 	}
 
 	list_add_tail(&e->link,
-		      &hyper_dmabuf_private.event_list);
+		      &hy_drv_priv->event_list);
 
-	hyper_dmabuf_private.curr_num_event++;
+	hy_drv_priv->pending++;
 
-	wake_up_interruptible(&hyper_dmabuf_private.event_wait);
+	wake_up_interruptible(&hy_drv_priv->event_wait);
 }
 
 void hyper_dmabuf_events_release()
@@ -70,34 +66,34 @@ void hyper_dmabuf_events_release()
 	struct hyper_dmabuf_event *e, *et;
 	unsigned long irqflags;
 
-	spin_lock_irqsave(&hyper_dmabuf_private.event_lock, irqflags);
+	spin_lock_irqsave(&hy_drv_priv->event_lock, irqflags);
 
-	list_for_each_entry_safe(e, et, &hyper_dmabuf_private.event_list,
+	list_for_each_entry_safe(e, et, &hy_drv_priv->event_list,
 				 link) {
 		list_del(&e->link);
 		kfree(e);
-		hyper_dmabuf_private.curr_num_event--;
+		hy_drv_priv->pending--;
 	}
 
-	if (hyper_dmabuf_private.curr_num_event) {
-		dev_err(hyper_dmabuf_private.device,
+	if (hy_drv_priv->pending) {
+		dev_err(hy_drv_priv->dev,
 			"possible leak on event_list\n");
 	}
 
-	spin_unlock_irqrestore(&hyper_dmabuf_private.event_lock, irqflags);
+	spin_unlock_irqrestore(&hy_drv_priv->event_lock, irqflags);
 }
 
 int hyper_dmabuf_import_event(hyper_dmabuf_id_t hid)
 {
 	struct hyper_dmabuf_event *e;
-	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
+	struct imported_sgt_info *imported;
 
 	unsigned long irqflags;
 
-	imported_sgt_info = hyper_dmabuf_find_imported(hid);
+	imported = hyper_dmabuf_find_imported(hid);
 
-	if (!imported_sgt_info) {
-		dev_err(hyper_dmabuf_private.device,
+	if (!imported) {
+		dev_err(hy_drv_priv->dev,
 			"can't find imported_sgt_info in the list\n");
 		return -EINVAL;
 	}
@@ -105,29 +101,29 @@ int hyper_dmabuf_import_event(hyper_dmabuf_id_t hid)
 	e = kzalloc(sizeof(*e), GFP_KERNEL);
 
 	if (!e) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"no space left\n");
 		return -ENOMEM;
 	}
 
 	e->event_data.hdr.event_type = HYPER_DMABUF_NEW_IMPORT;
 	e->event_data.hdr.hid = hid;
-	e->event_data.data = (void*)imported_sgt_info->priv;
-	e->event_data.hdr.size = imported_sgt_info->sz_priv;
+	e->event_data.data = (void*)imported->priv;
+	e->event_data.hdr.size = imported->sz_priv;
 
-	spin_lock_irqsave(&hyper_dmabuf_private.event_lock, irqflags);
+	spin_lock_irqsave(&hy_drv_priv->event_lock, irqflags);
 
 	hyper_dmabuf_send_event_locked(e);
 
-	spin_unlock_irqrestore(&hyper_dmabuf_private.event_lock, irqflags);
+	spin_unlock_irqrestore(&hy_drv_priv->event_lock, irqflags);
 
-	dev_dbg(hyper_dmabuf_private.device,
-			"event number = %d :", hyper_dmabuf_private.curr_num_event);
+	dev_dbg(hy_drv_priv->dev,
+		"event number = %d :", hy_drv_priv->pending);
 
-	dev_dbg(hyper_dmabuf_private.device,
-			"generating events for {%d, %d, %d, %d}\n",
-			imported_sgt_info->hid.id, imported_sgt_info->hid.rng_key[0],
-			imported_sgt_info->hid.rng_key[1], imported_sgt_info->hid.rng_key[2]);
+	dev_dbg(hy_drv_priv->dev,
+		"generating events for {%d, %d, %d, %d}\n",
+		imported->hid.id, imported->hid.rng_key[0],
+		imported->hid.rng_key[1], imported->hid.rng_key[2]);
 
 	return 0;
 }
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
index cccdc19..e2466c7 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
@@ -33,17 +33,15 @@
 #include "hyper_dmabuf_id.h"
 #include "hyper_dmabuf_msg.h"
 
-extern struct hyper_dmabuf_private hyper_dmabuf_private;
-
 void store_reusable_hid(hyper_dmabuf_id_t hid)
 {
-	struct list_reusable_id *reusable_head = hyper_dmabuf_private.id_queue;
+	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
 	struct list_reusable_id *new_reusable;
 
 	new_reusable = kmalloc(sizeof(*new_reusable), GFP_KERNEL);
 
 	if (!new_reusable) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"No memory left to be allocated\n");
 		return;
 	}
@@ -55,7 +53,7 @@ void store_reusable_hid(hyper_dmabuf_id_t hid)
 
 static hyper_dmabuf_id_t retrieve_reusable_hid(void)
 {
-	struct list_reusable_id *reusable_head = hyper_dmabuf_private.id_queue;
+	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
 	hyper_dmabuf_id_t hid = {-1, {0,0,0}};
 
 	/* check there is reusable id */
@@ -74,7 +72,7 @@ static hyper_dmabuf_id_t retrieve_reusable_hid(void)
 
 void destroy_reusable_list(void)
 {
-	struct list_reusable_id *reusable_head = hyper_dmabuf_private.id_queue;
+	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
 	struct list_reusable_id *temp_head;
 
 	if (reusable_head) {
@@ -103,14 +101,14 @@ hyper_dmabuf_id_t hyper_dmabuf_get_hid(void)
 		reusable_head = kmalloc(sizeof(*reusable_head), GFP_KERNEL);
 
 		if (!reusable_head) {
-			dev_err(hyper_dmabuf_private.device,
+			dev_err(hy_drv_priv->dev,
 				"No memory left to be allocated\n");
 			return (hyper_dmabuf_id_t){-1, {0,0,0}};
 		}
 
 		reusable_head->hid.id = -1; /* list head has an invalid count */
 		INIT_LIST_HEAD(&reusable_head->list);
-		hyper_dmabuf_private.id_queue = reusable_head;
+		hy_drv_priv->id_queue = reusable_head;
 	}
 
 	hid = retrieve_reusable_hid();
@@ -119,7 +117,7 @@ hyper_dmabuf_id_t hyper_dmabuf_get_hid(void)
 	 * and count is less than maximum allowed
 	 */
 	if (hid.id == -1 && count < HYPER_DMABUF_ID_MAX) {
-		hid.id = HYPER_DMABUF_ID_CREATE(hyper_dmabuf_private.domid, count++);
+		hid.id = HYPER_DMABUF_ID_CREATE(hy_drv_priv->domid, count++);
 	}
 
 	/* random data embedded in the id for security */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 15191c2..b328df7 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -45,16 +45,14 @@
 #include "hyper_dmabuf_ops.h"
 #include "hyper_dmabuf_query.h"
 
-extern struct hyper_dmabuf_private hyper_dmabuf_private;
-
 static int hyper_dmabuf_tx_ch_setup_ioctl(struct file *filp, void *data)
 {
 	struct ioctl_hyper_dmabuf_tx_ch_setup *tx_ch_attr;
-	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
+	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
 	int ret = 0;
 
 	if (!data) {
-		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
+		dev_err(hy_drv_priv->dev, "user data is NULL\n");
 		return -EINVAL;
 	}
 	tx_ch_attr = (struct ioctl_hyper_dmabuf_tx_ch_setup *)data;
@@ -67,11 +65,11 @@ static int hyper_dmabuf_tx_ch_setup_ioctl(struct file *filp, void *data)
 static int hyper_dmabuf_rx_ch_setup_ioctl(struct file *filp, void *data)
 {
 	struct ioctl_hyper_dmabuf_rx_ch_setup *rx_ch_attr;
-	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
+	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
 	int ret = 0;
 
 	if (!data) {
-		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
+		dev_err(hy_drv_priv->dev, "user data is NULL\n");
 		return -EINVAL;
 	}
 
@@ -82,48 +80,48 @@ static int hyper_dmabuf_rx_ch_setup_ioctl(struct file *filp, void *data)
 	return ret;
 }
 
-static int hyper_dmabuf_send_export_msg(struct hyper_dmabuf_sgt_info *sgt_info,
-					struct hyper_dmabuf_pages_info *page_info)
+static int hyper_dmabuf_send_export_msg(struct exported_sgt_info *exported,
+					struct pages_info *pg_info)
 {
-	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
+	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
 	struct hyper_dmabuf_req *req;
-	int operands[MAX_NUMBER_OF_OPERANDS] = {0};
+	int op[MAX_NUMBER_OF_OPERANDS] = {0};
 	int ret, i;
 
 	/* now create request for importer via ring */
-	operands[0] = sgt_info->hid.id;
+	op[0] = exported->hid.id;
 
 	for (i=0; i<3; i++)
-		operands[i+1] = sgt_info->hid.rng_key[i];
-
-	if (page_info) {
-		operands[4] = page_info->nents;
-		operands[5] = page_info->frst_ofst;
-		operands[6] = page_info->last_len;
-		operands[7] = ops->share_pages (page_info->pages, sgt_info->hyper_dmabuf_rdomain,
-						page_info->nents, &sgt_info->refs_info);
-		if (operands[7] < 0) {
-			dev_err(hyper_dmabuf_private.device, "pages sharing failed\n");
+		op[i+1] = exported->hid.rng_key[i];
+
+	if (pg_info) {
+		op[4] = pg_info->nents;
+		op[5] = pg_info->frst_ofst;
+		op[6] = pg_info->last_len;
+		op[7] = ops->share_pages(pg_info->pgs, exported->rdomid,
+					 pg_info->nents, &exported->refs_info);
+		if (op[7] < 0) {
+			dev_err(hy_drv_priv->dev, "pages sharing failed\n");
 			return -1;
 		}
 	}
 
-	operands[8] = sgt_info->sz_priv;
+	op[8] = exported->sz_priv;
 
 	/* driver/application specific private info */
-	memcpy(&operands[9], sgt_info->priv, operands[8]);
+	memcpy(&op[9], exported->priv, op[8]);
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
 	if(!req) {
-		dev_err(hyper_dmabuf_private.device, "no more space left\n");
+		dev_err(hy_drv_priv->dev, "no more space left\n");
 		return -1;
 	}
 
 	/* composing a message to the importer */
-	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT, &operands[0]);
+	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT, &op[0]);
 
-	ret = ops->send_req(sgt_info->hyper_dmabuf_rdomain, req, true);
+	ret = ops->send_req(exported->rdomid, req, true);
 
 	kfree(req);
 
@@ -132,24 +130,18 @@ static int hyper_dmabuf_send_export_msg(struct hyper_dmabuf_sgt_info *sgt_info,
 
 static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 {
-	struct ioctl_hyper_dmabuf_export_remote *export_remote_attr;
+	struct ioctl_hyper_dmabuf_export_remote *export_remote_attr =
+			(struct ioctl_hyper_dmabuf_export_remote *)data;
 	struct dma_buf *dma_buf;
 	struct dma_buf_attachment *attachment;
 	struct sg_table *sgt;
-	struct hyper_dmabuf_pages_info *page_info;
-	struct hyper_dmabuf_sgt_info *sgt_info;
+	struct pages_info *pg_info;
+	struct exported_sgt_info *exported;
 	hyper_dmabuf_id_t hid;
 	int ret = 0;
 
-	if (!data) {
-		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
-		return -EINVAL;
-	}
-
-	export_remote_attr = (struct ioctl_hyper_dmabuf_export_remote *)data;
-
-	if (hyper_dmabuf_private.domid == export_remote_attr->remote_domain) {
-		dev_err(hyper_dmabuf_private.device,
+	if (hy_drv_priv->domid == export_remote_attr->remote_domain) {
+		dev_err(hy_drv_priv->dev,
 			"exporting to the same VM is not permitted\n");
 		return -EINVAL;
 	}
@@ -157,7 +149,7 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 	dma_buf = dma_buf_get(export_remote_attr->dmabuf_fd);
 
 	if (IS_ERR(dma_buf)) {
-		dev_err(hyper_dmabuf_private.device,  "Cannot get dma buf\n");
+		dev_err(hy_drv_priv->dev, "Cannot get dma buf\n");
 		return PTR_ERR(dma_buf);
 	}
 
@@ -165,69 +157,79 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 	 * to the same domain and if yes and it's valid sgt_info,
 	 * it returns hyper_dmabuf_id of pre-exported sgt_info
 	 */
-	hid = hyper_dmabuf_find_hid_exported(dma_buf, export_remote_attr->remote_domain);
+	hid = hyper_dmabuf_find_hid_exported(dma_buf,
+					     export_remote_attr->remote_domain);
 	if (hid.id != -1) {
-		sgt_info = hyper_dmabuf_find_exported(hid);
-		if (sgt_info != NULL) {
-			if (sgt_info->valid) {
+		exported = hyper_dmabuf_find_exported(hid);
+		if (exported != NULL) {
+			if (exported->valid) {
 				/*
 				 * Check if unexport is already scheduled for that buffer,
 				 * if so, try to cancel it. If that fails, the buffer
 				 * needs to be re-exported once again.
 				 */
-				if (sgt_info->unexport_scheduled) {
-					if (!cancel_delayed_work_sync(&sgt_info->unexport_work)) {
+				if (exported->unexport_sched) {
+					if (!cancel_delayed_work_sync(&exported->unexport)) {
 						dma_buf_put(dma_buf);
 						goto reexport;
 					}
-					sgt_info->unexport_scheduled = 0;
+					exported->unexport_sched = false;
 				}
 
 				/* if there's any change in the size of the private data,
 				 * we reallocate space for it with the new size */
-				if (export_remote_attr->sz_priv != sgt_info->sz_priv) {
-					kfree(sgt_info->priv);
+				if (export_remote_attr->sz_priv != exported->sz_priv) {
+					kfree(exported->priv);
 
 					/* truncating size */
 					if (export_remote_attr->sz_priv > MAX_SIZE_PRIV_DATA) {
-						sgt_info->sz_priv = MAX_SIZE_PRIV_DATA;
+						exported->sz_priv = MAX_SIZE_PRIV_DATA;
 					} else {
-						sgt_info->sz_priv = export_remote_attr->sz_priv;
+						exported->sz_priv = export_remote_attr->sz_priv;
 					}
 
-					sgt_info->priv = kcalloc(1, sgt_info->sz_priv, GFP_KERNEL);
+					exported->priv = kcalloc(1, exported->sz_priv, GFP_KERNEL);
 
-					if(!sgt_info->priv) {
-						dev_err(hyper_dmabuf_private.device,
-							"Can't reallocate priv because there's no more space left\n");
-						hyper_dmabuf_remove_exported(sgt_info->hid);
-						hyper_dmabuf_cleanup_sgt_info(sgt_info, true);
-						kfree(sgt_info);
+					if(!exported->priv) {
+						dev_err(hy_drv_priv->dev,
+							"no more space left for priv\n");
+						hyper_dmabuf_remove_exported(exported->hid);
+						hyper_dmabuf_cleanup_sgt_info(exported, true);
+						kfree(exported);
+						dma_buf_put(dma_buf);
 						return -ENOMEM;
 					}
 				}
 
 				/* update private data in sgt_info with new ones */
-				copy_from_user(sgt_info->priv, export_remote_attr->priv, sgt_info->sz_priv);
-
-				/* send an export msg for updating priv in importer */
-				ret = hyper_dmabuf_send_export_msg(sgt_info, NULL);
-
-				if (ret < 0) {
-					dev_err(hyper_dmabuf_private.device, "Failed to send a new private data\n");
+				ret = copy_from_user(exported->priv, export_remote_attr->priv,
+						     exported->sz_priv);
+				if (ret) {
+					dev_err(hy_drv_priv->dev,
+						"Failed to load a new private data\n");
+					ret = -EINVAL;
+				} else {
+					/* send an export msg for updating priv in importer */
+					ret = hyper_dmabuf_send_export_msg(exported, NULL);
+
+					if (ret < 0) {
+						dev_err(hy_drv_priv->dev,
+							"Failed to send a new private data\n");
+						ret = -EBUSY;
+					}
 				}
 
 				dma_buf_put(dma_buf);
 				export_remote_attr->hid = hid;
-				return 0;
+				return ret;
 			}
 		}
 	}
 
 reexport:
-	attachment = dma_buf_attach(dma_buf, hyper_dmabuf_private.device);
+	attachment = dma_buf_attach(dma_buf, hy_drv_priv->dev);
 	if (IS_ERR(attachment)) {
-		dev_err(hyper_dmabuf_private.device, "Cannot get attachment\n");
+		dev_err(hy_drv_priv->dev, "Cannot get attachment\n");
 		ret = PTR_ERR(attachment);
 		goto fail_attach;
 	}
@@ -235,154 +237,165 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 	sgt = dma_buf_map_attachment(attachment, DMA_BIDIRECTIONAL);
 
 	if (IS_ERR(sgt)) {
-		dev_err(hyper_dmabuf_private.device, "Cannot map attachment\n");
+		dev_err(hy_drv_priv->dev, "Cannot map attachment\n");
 		ret = PTR_ERR(sgt);
 		goto fail_map_attachment;
 	}
 
-	sgt_info = kcalloc(1, sizeof(*sgt_info), GFP_KERNEL);
+	exported = kcalloc(1, sizeof(*exported), GFP_KERNEL);
 
-	if(!sgt_info) {
-		dev_err(hyper_dmabuf_private.device, "no more space left\n");
+	if(!exported) {
+		dev_err(hy_drv_priv->dev, "no more space left\n");
 		ret = -ENOMEM;
 		goto fail_sgt_info_creation;
 	}
 
 	/* possible truncation */
 	if (export_remote_attr->sz_priv > MAX_SIZE_PRIV_DATA) {
-		sgt_info->sz_priv = MAX_SIZE_PRIV_DATA;
+		exported->sz_priv = MAX_SIZE_PRIV_DATA;
 	} else {
-		sgt_info->sz_priv = export_remote_attr->sz_priv;
+		exported->sz_priv = export_remote_attr->sz_priv;
 	}
 
 	/* creating buffer for private data of buffer */
-	if(sgt_info->sz_priv != 0) {
-		sgt_info->priv = kcalloc(1, sgt_info->sz_priv, GFP_KERNEL);
+	if(exported->sz_priv != 0) {
+		exported->priv = kcalloc(1, exported->sz_priv, GFP_KERNEL);
 
-		if(!sgt_info->priv) {
-			dev_err(hyper_dmabuf_private.device, "no more space left\n");
+		if(!exported->priv) {
+			dev_err(hy_drv_priv->dev, "no more space left\n");
 			ret = -ENOMEM;
 			goto fail_priv_creation;
 		}
 	} else {
-		dev_err(hyper_dmabuf_private.device, "size is 0\n");
+		dev_err(hy_drv_priv->dev, "size is 0\n");
 	}
 
-	sgt_info->hid = hyper_dmabuf_get_hid();
+	exported->hid = hyper_dmabuf_get_hid();
 
 	/* no more exported dmabuf allowed */
-	if(sgt_info->hid.id == -1) {
-		dev_err(hyper_dmabuf_private.device,
+	if(exported->hid.id == -1) {
+		dev_err(hy_drv_priv->dev,
 			"exceeds allowed number of dmabuf to be exported\n");
 		ret = -ENOMEM;
 		goto fail_sgt_info_creation;
 	}
 
-	/* TODO: We might need to consider using port number on event channel? */
-	sgt_info->hyper_dmabuf_rdomain = export_remote_attr->remote_domain;
-	sgt_info->dma_buf = dma_buf;
-	sgt_info->valid = 1;
+	exported->rdomid = export_remote_attr->remote_domain;
+	exported->dma_buf = dma_buf;
+	exported->valid = true;
 
-	sgt_info->active_sgts = kmalloc(sizeof(struct sgt_list), GFP_KERNEL);
-	if (!sgt_info->active_sgts) {
-		dev_err(hyper_dmabuf_private.device, "no more space left\n");
+	exported->active_sgts = kmalloc(sizeof(struct sgt_list), GFP_KERNEL);
+	if (!exported->active_sgts) {
+		dev_err(hy_drv_priv->dev, "no more space left\n");
 		ret = -ENOMEM;
 		goto fail_map_active_sgts;
 	}
 
-	sgt_info->active_attached = kmalloc(sizeof(struct attachment_list), GFP_KERNEL);
-	if (!sgt_info->active_attached) {
-		dev_err(hyper_dmabuf_private.device, "no more space left\n");
+	exported->active_attached = kmalloc(sizeof(struct attachment_list), GFP_KERNEL);
+	if (!exported->active_attached) {
+		dev_err(hy_drv_priv->dev, "no more space left\n");
 		ret = -ENOMEM;
 		goto fail_map_active_attached;
 	}
 
-	sgt_info->va_kmapped = kmalloc(sizeof(struct kmap_vaddr_list), GFP_KERNEL);
-	if (!sgt_info->va_kmapped) {
-		dev_err(hyper_dmabuf_private.device, "no more space left\n");
+	exported->va_kmapped = kmalloc(sizeof(struct kmap_vaddr_list), GFP_KERNEL);
+	if (!exported->va_kmapped) {
+		dev_err(hy_drv_priv->dev, "no more space left\n");
 		ret = -ENOMEM;
 		goto fail_map_va_kmapped;
 	}
 
-	sgt_info->va_vmapped = kmalloc(sizeof(struct vmap_vaddr_list), GFP_KERNEL);
-	if (!sgt_info->va_vmapped) {
-		dev_err(hyper_dmabuf_private.device, "no more space left\n");
+	exported->va_vmapped = kmalloc(sizeof(struct vmap_vaddr_list), GFP_KERNEL);
+	if (!exported->va_vmapped) {
+		dev_err(hy_drv_priv->dev, "no more space left\n");
 		ret = -ENOMEM;
 		goto fail_map_va_vmapped;
 	}
 
-	sgt_info->active_sgts->sgt = sgt;
-	sgt_info->active_attached->attach = attachment;
-	sgt_info->va_kmapped->vaddr = NULL;
-	sgt_info->va_vmapped->vaddr = NULL;
+	exported->active_sgts->sgt = sgt;
+	exported->active_attached->attach = attachment;
+	exported->va_kmapped->vaddr = NULL;
+	exported->va_vmapped->vaddr = NULL;
 
 	/* initialize list of sgt, attachment and vaddr for dmabuf sync
 	 * via shadow dma-buf
 	 */
-	INIT_LIST_HEAD(&sgt_info->active_sgts->list);
-	INIT_LIST_HEAD(&sgt_info->active_attached->list);
-	INIT_LIST_HEAD(&sgt_info->va_kmapped->list);
-	INIT_LIST_HEAD(&sgt_info->va_vmapped->list);
+	INIT_LIST_HEAD(&exported->active_sgts->list);
+	INIT_LIST_HEAD(&exported->active_attached->list);
+	INIT_LIST_HEAD(&exported->va_kmapped->list);
+	INIT_LIST_HEAD(&exported->va_vmapped->list);
 
 	/* copy private data to sgt_info */
-	copy_from_user(sgt_info->priv, export_remote_attr->priv, sgt_info->sz_priv);
+	ret = copy_from_user(exported->priv, export_remote_attr->priv,
+			     exported->sz_priv);
 
-	page_info = hyper_dmabuf_ext_pgs(sgt);
-	if (!page_info) {
-		dev_err(hyper_dmabuf_private.device, "failed to construct page_info\n");
+	if (ret) {
+		dev_err(hy_drv_priv->dev,
+			"failed to load private data\n");
+		ret = -EINVAL;
 		goto fail_export;
 	}
 
-	sgt_info->nents = page_info->nents;
+	pg_info = hyper_dmabuf_ext_pgs(sgt);
+	if (!pg_info) {
+		dev_err(hy_drv_priv->dev,
+			"failed to construct pg_info\n");
+		ret = -ENOMEM;
+		goto fail_export;
+	}
+
+	exported->nents = pg_info->nents;
 
 	/* now register it to export list */
-	hyper_dmabuf_register_exported(sgt_info);
+	hyper_dmabuf_register_exported(exported);
 
-	export_remote_attr->hid = sgt_info->hid;
+	export_remote_attr->hid = exported->hid;
 
-	ret = hyper_dmabuf_send_export_msg(sgt_info, page_info);
+	ret = hyper_dmabuf_send_export_msg(exported, pg_info);
 
 	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device, "failed to send out the export request\n");
+		dev_err(hy_drv_priv->dev,
+			"failed to send out the export request\n");
 		goto fail_send_request;
 	}
 
-	/* free page_info */
-	kfree(page_info->pages);
-	kfree(page_info);
+	/* free pg_info */
+	kfree(pg_info->pgs);
+	kfree(pg_info);
 
-	sgt_info->filp = filp;
+	exported->filp = filp;
 
 	return ret;
 
 /* Clean-up if error occurs */
 
 fail_send_request:
-	hyper_dmabuf_remove_exported(sgt_info->hid);
+	hyper_dmabuf_remove_exported(exported->hid);
 
-	/* free page_info */
-	kfree(page_info->pages);
-	kfree(page_info);
+	/* free pg_info */
+	kfree(pg_info->pgs);
+	kfree(pg_info);
 
 fail_export:
-	kfree(sgt_info->va_vmapped);
+	kfree(exported->va_vmapped);
 
 fail_map_va_vmapped:
-	kfree(sgt_info->va_kmapped);
+	kfree(exported->va_kmapped);
 
 fail_map_va_kmapped:
-	kfree(sgt_info->active_attached);
+	kfree(exported->active_attached);
 
 fail_map_active_attached:
-	kfree(sgt_info->active_sgts);
-	kfree(sgt_info->priv);
+	kfree(exported->active_sgts);
+	kfree(exported->priv);
 
 fail_priv_creation:
-	kfree(sgt_info);
+	kfree(exported);
 
 fail_map_active_sgts:
 fail_sgt_info_creation:
-	dma_buf_unmap_attachment(attachment, sgt, DMA_BIDIRECTIONAL);
+	dma_buf_unmap_attachment(attachment, sgt,
+				 DMA_BIDIRECTIONAL);
 
 fail_map_attachment:
 	dma_buf_detach(dma_buf, attachment);
@@ -395,143 +408,136 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 
 static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 {
-	struct ioctl_hyper_dmabuf_export_fd *export_fd_attr;
-	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct ioctl_hyper_dmabuf_export_fd *export_fd_attr =
+			(struct ioctl_hyper_dmabuf_export_fd *)data;
+	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
+	struct imported_sgt_info *imported;
 	struct hyper_dmabuf_req *req;
-	struct page **data_pages;
-	int operands[4];
+	struct page **data_pgs;
+	int op[4];
 	int i;
 	int ret = 0;
 
-	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
-
-	if (!data) {
-		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
-		return -EINVAL;
-	}
-
-	export_fd_attr = (struct ioctl_hyper_dmabuf_export_fd *)data;
+	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
 
 	/* look for dmabuf for the id */
-	sgt_info = hyper_dmabuf_find_imported(export_fd_attr->hid);
+	imported = hyper_dmabuf_find_imported(export_fd_attr->hid);
 
 	/* can't find sgt from the table */
-	if (!sgt_info) {
-		dev_err(hyper_dmabuf_private.device, "can't find the entry\n");
+	if (!imported) {
+		dev_err(hy_drv_priv->dev, "can't find the entry\n");
 		return -ENOENT;
 	}
 
-	mutex_lock(&hyper_dmabuf_private.lock);
+	mutex_lock(&hy_drv_priv->lock);
 
-	sgt_info->num_importers++;
+	imported->importers++;
 
 	/* send notification for export_fd to exporter */
-	operands[0] = sgt_info->hid.id;
+	op[0] = imported->hid.id;
 
 	for (i=0; i<3; i++)
-		operands[i+1] = sgt_info->hid.rng_key[i];
+		op[i+1] = imported->hid.rng_key[i];
 
-	dev_dbg(hyper_dmabuf_private.device, "Exporting fd of buffer {id:%d key:%d %d %d}\n",
-		sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
-		sgt_info->hid.rng_key[2]);
+	dev_dbg(hy_drv_priv->dev, "Exporting fd of buffer {id:%d key:%d %d %d}\n",
+		imported->hid.id, imported->hid.rng_key[0], imported->hid.rng_key[1],
+		imported->hid.rng_key[2]);
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
 	if (!req) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"No memory left to be allocated\n");
 		return -ENOMEM;
 	}
 
-	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT_FD, &operands[0]);
+	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD, &op[0]);
 
-	ret = ops->send_req(HYPER_DMABUF_DOM_ID(sgt_info->hid), req, true);
+	ret = ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req, true);
 
 	if (ret < 0) {
 		/* in case of a timeout the other end will eventually receive the request, so we need to undo it */
-		hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT_FD_FAILED, &operands[0]);
-		ops->send_req(operands[0], req, false);
+		hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD_FAILED, &op[0]);
+		ops->send_req(op[0], req, false);
 		kfree(req);
-		dev_err(hyper_dmabuf_private.device, "Failed to create sgt or notify exporter\n");
-		sgt_info->num_importers--;
-		mutex_unlock(&hyper_dmabuf_private.lock);
+		dev_err(hy_drv_priv->dev, "Failed to create sgt or notify exporter\n");
+		imported->importers--;
+		mutex_unlock(&hy_drv_priv->lock);
 		return ret;
 	}
 
 	kfree(req);
 
 	if (ret == HYPER_DMABUF_REQ_ERROR) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"Buffer invalid {id:%d key:%d %d %d}, cannot import\n",
-			sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
-			sgt_info->hid.rng_key[2]);
+			imported->hid.id, imported->hid.rng_key[0], imported->hid.rng_key[1],
+			imported->hid.rng_key[2]);
 
-		sgt_info->num_importers--;
-		mutex_unlock(&hyper_dmabuf_private.lock);
+		imported->importers--;
+		mutex_unlock(&hy_drv_priv->lock);
 		return -EINVAL;
 	} else {
-		dev_dbg(hyper_dmabuf_private.device, "Can import buffer {id:%d key:%d %d %d}\n",
-			sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
-			sgt_info->hid.rng_key[2]);
+		dev_dbg(hy_drv_priv->dev, "Can import buffer {id:%d key:%d %d %d}\n",
+			imported->hid.id, imported->hid.rng_key[0], imported->hid.rng_key[1],
+			imported->hid.rng_key[2]);
 
 		ret = 0;
 	}
 
-	dev_dbg(hyper_dmabuf_private.device,
-		  "%s Found buffer gref %d  off %d last len %d nents %d domain %d\n", __func__,
-		  sgt_info->ref_handle, sgt_info->frst_ofst,
-		  sgt_info->last_len, sgt_info->nents,
-		  HYPER_DMABUF_DOM_ID(sgt_info->hid));
+	dev_dbg(hy_drv_priv->dev,
+		  "%s Found buffer gref %d  off %d last len %d nents %d domain %d\n",
+		  __func__, imported->ref_handle, imported->frst_ofst,
+		  imported->last_len, imported->nents, HYPER_DMABUF_DOM_ID(imported->hid));
 
-	if (!sgt_info->sgt) {
-		dev_dbg(hyper_dmabuf_private.device,
+	if (!imported->sgt) {
+		dev_dbg(hy_drv_priv->dev,
 			"%s buffer {id:%d key:%d %d %d} pages not mapped yet\n", __func__,
-			sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
-			sgt_info->hid.rng_key[2]);
+			imported->hid.id, imported->hid.rng_key[0], imported->hid.rng_key[1],
+			imported->hid.rng_key[2]);
 
-		data_pages = ops->map_shared_pages(sgt_info->ref_handle,
-						   HYPER_DMABUF_DOM_ID(sgt_info->hid),
-						   sgt_info->nents,
-						   &sgt_info->refs_info);
+		data_pgs = ops->map_shared_pages(imported->ref_handle,
+						   HYPER_DMABUF_DOM_ID(imported->hid),
+						   imported->nents,
+						   &imported->refs_info);
 
-		if (!data_pages) {
-			dev_err(hyper_dmabuf_private.device,
+		if (!data_pgs) {
+			dev_err(hy_drv_priv->dev,
 				"Cannot map pages of buffer {id:%d key:%d %d %d}\n",
-				sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
-				sgt_info->hid.rng_key[2]);
+				imported->hid.id, imported->hid.rng_key[0], imported->hid.rng_key[1],
+				imported->hid.rng_key[2]);
 
-			sgt_info->num_importers--;
+			imported->importers--;
 			req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
 			if (!req) {
-				dev_err(hyper_dmabuf_private.device,
+				dev_err(hy_drv_priv->dev,
 					"No more space left\n");
 				return -ENOMEM;
 			}
 
-			hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT_FD_FAILED, &operands[0]);
-			ops->send_req(HYPER_DMABUF_DOM_ID(sgt_info->hid), req, false);
+			hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD_FAILED, &op[0]);
+			ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req, false);
 			kfree(req);
-			mutex_unlock(&hyper_dmabuf_private.lock);
+			mutex_unlock(&hy_drv_priv->lock);
 			return -EINVAL;
 		}
 
-		sgt_info->sgt = hyper_dmabuf_create_sgt(data_pages, sgt_info->frst_ofst,
-							sgt_info->last_len, sgt_info->nents);
+		imported->sgt = hyper_dmabuf_create_sgt(data_pgs, imported->frst_ofst,
+							imported->last_len, imported->nents);
 
 	}
 
-	export_fd_attr->fd = hyper_dmabuf_export_fd(sgt_info, export_fd_attr->flags);
+	export_fd_attr->fd = hyper_dmabuf_export_fd(imported, export_fd_attr->flags);
 
 	if (export_fd_attr->fd < 0) {
 		/* fail to get fd */
 		ret = export_fd_attr->fd;
 	}
 
-	mutex_unlock(&hyper_dmabuf_private.lock);
+	mutex_unlock(&hy_drv_priv->lock);
 
-	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
 	return ret;
 }
 
@@ -541,50 +547,51 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 static void hyper_dmabuf_delayed_unexport(struct work_struct *work)
 {
 	struct hyper_dmabuf_req *req;
+	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
+	struct exported_sgt_info *exported =
+		container_of(work, struct exported_sgt_info, unexport.work);
+	int op[4];
 	int i, ret;
-	int operands[4];
-	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
-	struct hyper_dmabuf_sgt_info *sgt_info =
-		container_of(work, struct hyper_dmabuf_sgt_info, unexport_work.work);
 
-	if (!sgt_info)
+	if (!exported)
 		return;
 
-	dev_dbg(hyper_dmabuf_private.device,
+	dev_dbg(hy_drv_priv->dev,
 		"Marking buffer {id:%d key:%d %d %d} as invalid\n",
-		sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
-		sgt_info->hid.rng_key[2]);
+		exported->hid.id, exported->hid.rng_key[0],
+		exported->hid.rng_key[1], exported->hid.rng_key[2]);
 
 	/* no longer valid */
-	sgt_info->valid = 0;
+	exported->valid = false;
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
 	if (!req) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"No memory left to be allocated\n");
 		return;
 	}
 
-	operands[0] = sgt_info->hid.id;
+	op[0] = exported->hid.id;
 
 	for (i=0; i<3; i++)
-		operands[i+1] = sgt_info->hid.rng_key[i];
+		op[i+1] = exported->hid.rng_key[i];
 
-	hyper_dmabuf_create_request(req, HYPER_DMABUF_NOTIFY_UNEXPORT, &operands[0]);
+	hyper_dmabuf_create_req(req, HYPER_DMABUF_NOTIFY_UNEXPORT, &op[0]);
 
-	/* Now send unexport request to remote domain, marking that buffer should not be used anymore */
-	ret = ops->send_req(sgt_info->hyper_dmabuf_rdomain, req, true);
+	/* Now send unexport request to remote domain, marking
+	 * that the buffer should not be used anymore */
+	ret = ops->send_req(exported->rdomid, req, true);
 	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"unexport message for buffer {id:%d key:%d %d %d} failed\n",
-			sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
-			sgt_info->hid.rng_key[2]);
+			exported->hid.id, exported->hid.rng_key[0],
+			exported->hid.rng_key[1], exported->hid.rng_key[2]);
 	}
 
 	/* free msg */
 	kfree(req);
-	sgt_info->unexport_scheduled = 0;
+	exported->unexport_sched = false;
 
 	/*
 	 * Immediately clean-up if it has never been exported by importer
@@ -593,104 +600,94 @@ static void hyper_dmabuf_delayed_unexport(struct work_struct *work)
 	 * is called (importer does this only when there's
 	 * no consumer of locally exported FDs)
 	 */
-	if (!sgt_info->importer_exported) {
-		dev_dbg(hyper_dmabuf_private.device,
+	if (exported->active == 0) {
+		dev_dbg(hy_drv_priv->dev,
 			"claning up buffer {id:%d key:%d %d %d} completly\n",
-			sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
-			sgt_info->hid.rng_key[2]);
+			exported->hid.id, exported->hid.rng_key[0],
+			exported->hid.rng_key[1], exported->hid.rng_key[2]);
+
+		hyper_dmabuf_cleanup_sgt_info(exported, false);
+		hyper_dmabuf_remove_exported(exported->hid);
 
-		hyper_dmabuf_cleanup_sgt_info(sgt_info, false);
-		hyper_dmabuf_remove_exported(sgt_info->hid);
 		/* register hyper_dmabuf_id to the list for reuse */
-		store_reusable_hid(sgt_info->hid);
+		store_reusable_hid(exported->hid);
 
-		if (sgt_info->sz_priv > 0 && !sgt_info->priv)
-			kfree(sgt_info->priv);
+		if (exported->sz_priv > 0 && exported->priv)
+			kfree(exported->priv);
 
-		kfree(sgt_info);
+		kfree(exported);
 	}
 }
 
-/* Schedules unexport of dmabuf.
+/* Schedule unexport of dmabuf.
  */
-static int hyper_dmabuf_unexport_ioctl(struct file *filp, void *data)
+int hyper_dmabuf_unexport_ioctl(struct file *filp, void *data)
 {
-	struct ioctl_hyper_dmabuf_unexport *unexport_attr;
-	struct hyper_dmabuf_sgt_info *sgt_info;
+	struct ioctl_hyper_dmabuf_unexport *unexport_attr =
+			(struct ioctl_hyper_dmabuf_unexport *)data;
+	struct exported_sgt_info *exported;
 
-	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
-
-	if (!data) {
-		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
-		return -EINVAL;
-	}
-
-	unexport_attr = (struct ioctl_hyper_dmabuf_unexport *)data;
+	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
 
 	/* find dmabuf in export list */
-	sgt_info = hyper_dmabuf_find_exported(unexport_attr->hid);
+	exported = hyper_dmabuf_find_exported(unexport_attr->hid);
 
-	dev_dbg(hyper_dmabuf_private.device,
+	dev_dbg(hy_drv_priv->dev,
 		"scheduling unexport of buffer {id:%d key:%d %d %d}\n",
 		unexport_attr->hid.id, unexport_attr->hid.rng_key[0],
 		unexport_attr->hid.rng_key[1], unexport_attr->hid.rng_key[2]);
 
 	/* failed to find corresponding entry in export list */
-	if (sgt_info == NULL) {
+	if (exported == NULL) {
 		unexport_attr->status = -ENOENT;
 		return -ENOENT;
 	}
 
-	if (sgt_info->unexport_scheduled)
+	if (exported->unexport_sched)
 		return 0;
 
-	sgt_info->unexport_scheduled = 1;
-	INIT_DELAYED_WORK(&sgt_info->unexport_work, hyper_dmabuf_delayed_unexport);
-	schedule_delayed_work(&sgt_info->unexport_work,
+	exported->unexport_sched = true;
+	INIT_DELAYED_WORK(&exported->unexport,
+			  hyper_dmabuf_delayed_unexport);
+	schedule_delayed_work(&exported->unexport,
 			      msecs_to_jiffies(unexport_attr->delay_ms));
 
-	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
 	return 0;
 }
 
 static int hyper_dmabuf_query_ioctl(struct file *filp, void *data)
 {
-	struct ioctl_hyper_dmabuf_query *query_attr;
-	struct hyper_dmabuf_sgt_info *sgt_info = NULL;
-	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info = NULL;
+	struct ioctl_hyper_dmabuf_query *query_attr =
+			(struct ioctl_hyper_dmabuf_query *)data;
+	struct exported_sgt_info *exported = NULL;
+	struct imported_sgt_info *imported = NULL;
 	int ret = 0;
 
-	if (!data) {
-		dev_err(hyper_dmabuf_private.device, "user data is NULL\n");
-		return -EINVAL;
-	}
-
-	query_attr = (struct ioctl_hyper_dmabuf_query *)data;
-
-	if (HYPER_DMABUF_DOM_ID(query_attr->hid) == hyper_dmabuf_private.domid) {
+	if (HYPER_DMABUF_DOM_ID(query_attr->hid) == hy_drv_priv->domid) {
 		/* query for exported dmabuf */
-		sgt_info = hyper_dmabuf_find_exported(query_attr->hid);
-		if (sgt_info) {
-			ret = hyper_dmabuf_query_exported(sgt_info,
+		exported = hyper_dmabuf_find_exported(query_attr->hid);
+		if (exported) {
+			ret = hyper_dmabuf_query_exported(exported,
 							  query_attr->item, &query_attr->info);
 		} else {
-			dev_err(hyper_dmabuf_private.device,
-				"DMA BUF {id:%d key:%d %d %d} can't be found in the export list\n",
-				query_attr->hid.id, query_attr->hid.rng_key[0], query_attr->hid.rng_key[1],
-				query_attr->hid.rng_key[2]);
+			dev_err(hy_drv_priv->dev,
+				"DMA BUF {id:%d key:%d %d %d} not in the export list\n",
+				query_attr->hid.id, query_attr->hid.rng_key[0],
+				query_attr->hid.rng_key[1], query_attr->hid.rng_key[2]);
 			return -ENOENT;
 		}
 	} else {
 		/* query for imported dmabuf */
-		imported_sgt_info = hyper_dmabuf_find_imported(query_attr->hid);
-		if (imported_sgt_info) {
-			ret = hyper_dmabuf_query_imported(imported_sgt_info,
-							  query_attr->item, &query_attr->info);
+		imported = hyper_dmabuf_find_imported(query_attr->hid);
+		if (imported) {
+			ret = hyper_dmabuf_query_imported(imported, query_attr->item,
+							  &query_attr->info);
 		} else {
-			dev_err(hyper_dmabuf_private.device,
-				"DMA BUF {id:%d key:%d %d %d} can't be found in the imported list\n",
-				query_attr->hid.id, query_attr->hid.rng_key[0], query_attr->hid.rng_key[1],
-				query_attr->hid.rng_key[2]);
+			dev_err(hy_drv_priv->dev,
+				"DMA BUF {id:%d key:%d %d %d} not in the imported list\n",
+				query_attr->hid.id, query_attr->hid.rng_key[0],
+				query_attr->hid.rng_key[1], query_attr->hid.rng_key[2]);
 			return -ENOENT;
 		}
 	}
@@ -698,28 +695,6 @@ static int hyper_dmabuf_query_ioctl(struct file *filp, void *data)
 	return ret;
 }
 
-void hyper_dmabuf_emergency_release(struct hyper_dmabuf_sgt_info* sgt_info,
-				    void *attr)
-{
-	struct ioctl_hyper_dmabuf_unexport unexport_attr;
-	struct file *filp = (struct file*) attr;
-
-	if (!filp || !sgt_info)
-		return;
-
-	if (sgt_info->filp == filp) {
-		dev_dbg(hyper_dmabuf_private.device,
-			"Executing emergency release of buffer {id:%d key:%d %d %d}\n",
-			 sgt_info->hid.id, sgt_info->hid.rng_key[0],
-			 sgt_info->hid.rng_key[1], sgt_info->hid.rng_key[2]);
-
-		unexport_attr.hid = sgt_info->hid;
-		unexport_attr.delay_ms = 0;
-
-		hyper_dmabuf_unexport_ioctl(filp, &unexport_attr);
-	}
-}
-
 const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_TX_CH_SETUP, hyper_dmabuf_tx_ch_setup_ioctl, 0),
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_RX_CH_SETUP, hyper_dmabuf_rx_ch_setup_ioctl, 0),
@@ -739,7 +714,7 @@ long hyper_dmabuf_ioctl(struct file *filp,
 	char *kdata;
 
 	if (nr >= ARRAY_SIZE(hyper_dmabuf_ioctls)) {
-		dev_err(hyper_dmabuf_private.device, "invalid ioctl\n");
+		dev_err(hy_drv_priv->dev, "invalid ioctl\n");
 		return -EINVAL;
 	}
 
@@ -748,18 +723,18 @@ long hyper_dmabuf_ioctl(struct file *filp,
 	func = ioctl->func;
 
 	if (unlikely(!func)) {
-		dev_err(hyper_dmabuf_private.device, "no function\n");
+		dev_err(hy_drv_priv->dev, "no function\n");
 		return -EINVAL;
 	}
 
 	kdata = kmalloc(_IOC_SIZE(cmd), GFP_KERNEL);
 	if (!kdata) {
-		dev_err(hyper_dmabuf_private.device, "no memory\n");
+		dev_err(hy_drv_priv->dev, "no memory\n");
 		return -ENOMEM;
 	}
 
 	if (copy_from_user(kdata, (void __user *)param, _IOC_SIZE(cmd)) != 0) {
-		dev_err(hyper_dmabuf_private.device, "failed to copy from user arguments\n");
+		dev_err(hy_drv_priv->dev, "failed to copy from user arguments\n");
 		ret = -EFAULT;
 		goto ioctl_error;
 	}
@@ -767,7 +742,7 @@ long hyper_dmabuf_ioctl(struct file *filp,
 	ret = func(filp, kdata);
 
 	if (copy_to_user((void __user *)param, kdata, _IOC_SIZE(cmd)) != 0) {
-		dev_err(hyper_dmabuf_private.device, "failed to copy to user arguments\n");
+		dev_err(hy_drv_priv->dev, "failed to copy to user arguments\n");
 		ret = -EFAULT;
 		goto ioctl_error;
 	}
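
For reference, the delayed-unexport scheme above is the stock delayed_work
pattern: mark the buffer as scheduled, queue the work with the
caller-supplied delay, and let the worker notify the remote domain and tear
down state. A minimal self-contained sketch of the same pattern (the
example_* names are illustrative, not part of this series):

    #include <linux/workqueue.h>
    #include <linux/jiffies.h>

    struct example_buf {
    	bool unexport_sched;
    	struct delayed_work unexport;
    };

    static void example_delayed_unexport(struct work_struct *work)
    {
    	struct example_buf *buf =
    		container_of(work, struct example_buf, unexport.work);

    	/* notify the remote domain, then free local state if unused */
    	buf->unexport_sched = false;
    }

    static void example_schedule_unexport(struct example_buf *buf, int delay_ms)
    {
    	if (buf->unexport_sched)
    		return;

    	buf->unexport_sched = true;
    	INIT_DELAYED_WORK(&buf->unexport, example_delayed_unexport);
    	schedule_delayed_work(&buf->unexport, msecs_to_jiffies(delay_ms));
    }

cancel_delayed_work_sync() would be the natural counterpart if a scheduled
unexport ever needed to be aborted.
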
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
index ebfbb84..3e9470a 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
@@ -42,4 +42,6 @@ struct hyper_dmabuf_ioctl_desc {
 			.name = #ioctl			\
 	}
 
+int hyper_dmabuf_unexport_ioctl(struct file *filp, void *data);
+
 #endif //__HYPER_DMABUF_IOCTL_H__
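
With hyper_dmabuf_unexport_ioctl() exposed here, the matching userspace call
is roughly the following sketch. The uapi header, the hyper_dmabuf_id_t
type, and the IOCTL_HYPER_DMABUF_UNEXPORT number are defined elsewhere in
this series, so they are assumptions here:

    #include <sys/ioctl.h>
    /* hyper_dmabuf uapi header assumed to provide the types below */

    static int unexport_buffer(int drv_fd, hyper_dmabuf_id_t hid)
    {
    	struct ioctl_hyper_dmabuf_unexport arg = {
    		.hid      = hid,
    		.delay_ms = 0,	/* 0 = schedule unexport right away */
    	};

    	if (ioctl(drv_fd, IOCTL_HYPER_DMABUF_UNEXPORT, &arg) < 0)
    		return -1;	/* e.g. ENOENT if hid was never exported */

    	return 0;
    }
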
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
index eaef2c1..1b3745e 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
@@ -39,24 +39,22 @@
 #include "hyper_dmabuf_id.h"
 #include "hyper_dmabuf_event.h"
 
-extern struct hyper_dmabuf_private hyper_dmabuf_private;
-
 DECLARE_HASHTABLE(hyper_dmabuf_hash_imported, MAX_ENTRY_IMPORTED);
 DECLARE_HASHTABLE(hyper_dmabuf_hash_exported, MAX_ENTRY_EXPORTED);
 
 #ifdef CONFIG_HYPER_DMABUF_SYSFS
 static ssize_t hyper_dmabuf_imported_show(struct device *drv, struct device_attribute *attr, char *buf)
 {
-	struct hyper_dmabuf_info_entry_imported *info_entry;
+	struct list_entry_imported *info_entry;
 	int bkt;
 	ssize_t count = 0;
 	size_t total = 0;
 
 	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node) {
-		hyper_dmabuf_id_t hid = info_entry->info->hid;
-		int nents = info_entry->info->nents;
-		bool valid = info_entry->info->valid;
-		int num_importers = info_entry->info->num_importers;
+		hyper_dmabuf_id_t hid = info_entry->imported->hid;
+		int nents = info_entry->imported->nents;
+		bool valid = info_entry->imported->valid;
+		int num_importers = info_entry->imported->importers;
 		total += nents;
 		count += scnprintf(buf + count, PAGE_SIZE - count,
 				   "hid:{id:%d keys:%d %d %d}, nents:%d, v:%c, numi:%d\n",
@@ -71,16 +69,16 @@ static ssize_t hyper_dmabuf_imported_show(struct device *drv, struct device_attr
 
 static ssize_t hyper_dmabuf_exported_show(struct device *drv, struct device_attribute *attr, char *buf)
 {
-	struct hyper_dmabuf_info_entry_exported *info_entry;
+	struct list_entry_exported *info_entry;
 	int bkt;
 	ssize_t count = 0;
 	size_t total = 0;
 
 	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node) {
-		hyper_dmabuf_id_t hid = info_entry->info->hid;
-		int nents = info_entry->info->nents;
-		bool valid = info_entry->info->valid;
-		int importer_exported = info_entry->info->importer_exported;
+		hyper_dmabuf_id_t hid = info_entry->exported->hid;
+		int nents = info_entry->exported->nents;
+		bool valid = info_entry->exported->valid;
+		int importer_exported = info_entry->exported->active;
 		total += nents;
 		count += scnprintf(buf + count, PAGE_SIZE - count,
 				   "hid:{hid:%d keys:%d %d %d}, nents:%d, v:%c, ie:%d\n",
@@ -135,57 +133,57 @@ int hyper_dmabuf_table_destroy()
 	return 0;
 }
 
-int hyper_dmabuf_register_exported(struct hyper_dmabuf_sgt_info *info)
+int hyper_dmabuf_register_exported(struct exported_sgt_info *exported)
 {
-	struct hyper_dmabuf_info_entry_exported *info_entry;
+	struct list_entry_exported *info_entry;
 
 	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
 
 	if (!info_entry) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
                         "No memory left to be allocated\n");
 		return -ENOMEM;
 	}
 
-	info_entry->info = info;
+	info_entry->exported = exported;
 
 	hash_add(hyper_dmabuf_hash_exported, &info_entry->node,
-		 info_entry->info->hid.id);
+		 info_entry->exported->hid.id);
 
 	return 0;
 }
 
-int hyper_dmabuf_register_imported(struct hyper_dmabuf_imported_sgt_info* info)
+int hyper_dmabuf_register_imported(struct imported_sgt_info* imported)
 {
-	struct hyper_dmabuf_info_entry_imported *info_entry;
+	struct list_entry_imported *info_entry;
 
 	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
 
 	if (!info_entry) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
                         "No memory left to be allocated\n");
 		return -ENOMEM;
 	}
 
-	info_entry->info = info;
+	info_entry->imported = imported;
 
 	hash_add(hyper_dmabuf_hash_imported, &info_entry->node,
-		 info_entry->info->hid.id);
+		 info_entry->imported->hid.id);
 
 	return 0;
 }
 
-struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid)
+struct exported_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid)
 {
-	struct hyper_dmabuf_info_entry_exported *info_entry;
+	struct list_entry_exported *info_entry;
 	int bkt;
 
 	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
 		/* checking hid.id first */
-		if(info_entry->info->hid.id == hid.id) {
+		if(info_entry->exported->hid.id == hid.id) {
 			/* then key is compared */
-			if(hyper_dmabuf_hid_keycomp(info_entry->info->hid, hid))
-				return info_entry->info;
+			if(hyper_dmabuf_hid_keycomp(info_entry->exported->hid, hid))
+				return info_entry->exported;
 			/* if keys don't match, the given HID is invalid, so return NULL */
 			else
 				break;
@@ -197,29 +195,29 @@ struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid)
 /* search for pre-exported sgt and return its id if it exists */
 hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf, int domid)
 {
-	struct hyper_dmabuf_info_entry_exported *info_entry;
+	struct list_entry_exported *info_entry;
 	hyper_dmabuf_id_t hid = {-1, {0, 0, 0}};
 	int bkt;
 
 	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
-		if(info_entry->info->dma_buf == dmabuf &&
-		   info_entry->info->hyper_dmabuf_rdomain == domid)
-			return info_entry->info->hid;
+		if(info_entry->exported->dma_buf == dmabuf &&
+		   info_entry->exported->rdomid == domid)
+			return info_entry->exported->hid;
 
 	return hid;
 }
 
-struct hyper_dmabuf_imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_id_t hid)
+struct imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_id_t hid)
 {
-	struct hyper_dmabuf_info_entry_imported *info_entry;
+	struct list_entry_imported *info_entry;
 	int bkt;
 
 	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
 		/* checking hid.id first */
-		if(info_entry->info->hid.id == hid.id) {
+		if(info_entry->imported->hid.id == hid.id) {
 			/* then key is compared */
-			if(hyper_dmabuf_hid_keycomp(info_entry->info->hid, hid))
-				return info_entry->info;
+			if(hyper_dmabuf_hid_keycomp(info_entry->imported->hid, hid))
+				return info_entry->imported;
 			/* if keys don't match, the given HID is invalid, so return NULL */
 			else {
 				break;
@@ -231,14 +229,14 @@ struct hyper_dmabuf_imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_i
 
 int hyper_dmabuf_remove_exported(hyper_dmabuf_id_t hid)
 {
-	struct hyper_dmabuf_info_entry_exported *info_entry;
+	struct list_entry_exported *info_entry;
 	int bkt;
 
 	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
 		/* checking hid.id first */
-		if(info_entry->info->hid.id == hid.id) {
+		if(info_entry->exported->hid.id == hid.id) {
 			/* then key is compared */
-			if(hyper_dmabuf_hid_keycomp(info_entry->info->hid, hid)) {
+			if(hyper_dmabuf_hid_keycomp(info_entry->exported->hid, hid)) {
 				hash_del(&info_entry->node);
 				kfree(info_entry);
 				return 0;
@@ -252,14 +250,14 @@ int hyper_dmabuf_remove_exported(hyper_dmabuf_id_t hid)
 
 int hyper_dmabuf_remove_imported(hyper_dmabuf_id_t hid)
 {
-	struct hyper_dmabuf_info_entry_imported *info_entry;
+	struct list_entry_imported *info_entry;
 	int bkt;
 
 	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
 		/* checking hid.id first */
-		if(info_entry->info->hid.id == hid.id) {
+		if(info_entry->imported->hid.id == hid.id) {
 			/* then key is compared */
-			if(hyper_dmabuf_hid_keycomp(info_entry->info->hid, hid)) {
+			if(hyper_dmabuf_hid_keycomp(info_entry->imported->hid, hid)) {
 				hash_del(&info_entry->node);
 				kfree(info_entry);
 				return 0;
@@ -272,15 +270,15 @@ int hyper_dmabuf_remove_imported(hyper_dmabuf_id_t hid)
 }
 
 void hyper_dmabuf_foreach_exported(
-	void (*func)(struct hyper_dmabuf_sgt_info *, void *attr),
+	void (*func)(struct exported_sgt_info *, void *attr),
 	void *attr)
 {
-	struct hyper_dmabuf_info_entry_exported *info_entry;
+	struct list_entry_exported *info_entry;
 	struct hlist_node *tmp;
 	int bkt;
 
 	hash_for_each_safe(hyper_dmabuf_hash_exported, bkt, tmp,
 			info_entry, node) {
-		func(info_entry->info, attr);
+		func(info_entry->exported, attr);
 	}
 }
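
Both lookup paths above hash only on hid.id and then verify the three
random keys through hyper_dmabuf_hid_keycomp(), so a stale or guessed id
with the wrong keys is rejected. The comparison amounts to this sketch of
the helper, which lives in hyper_dmabuf_id.h:

    static bool example_hid_keycomp(hyper_dmabuf_id_t a, hyper_dmabuf_id_t b)
    {
    	int i;

    	/* hid.id already matched in the caller; all keys must match too */
    	for (i = 0; i < 3; i++)
    		if (a.rng_key[i] != b.rng_key[i])
    			return false;

    	return true;
    }
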
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
index 8f64db8..d5c17ef 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
@@ -32,13 +32,13 @@
 /* number of bits to be used for imported dmabufs hash table */
 #define MAX_ENTRY_IMPORTED 7
 
-struct hyper_dmabuf_info_entry_exported {
-        struct hyper_dmabuf_sgt_info *info;
+struct list_entry_exported {
+        struct exported_sgt_info *exported;
         struct hlist_node node;
 };
 
-struct hyper_dmabuf_info_entry_imported {
-        struct hyper_dmabuf_imported_sgt_info *info;
+struct list_entry_imported {
+        struct imported_sgt_info *imported;
         struct hlist_node node;
 };
 
@@ -46,23 +46,23 @@ int hyper_dmabuf_table_init(void);
 
 int hyper_dmabuf_table_destroy(void);
 
-int hyper_dmabuf_register_exported(struct hyper_dmabuf_sgt_info *info);
+int hyper_dmabuf_register_exported(struct exported_sgt_info *info);
 
 /* search for pre-exported sgt and return its id if it exists */
 hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf, int domid);
 
-int hyper_dmabuf_register_imported(struct hyper_dmabuf_imported_sgt_info* info);
+int hyper_dmabuf_register_imported(struct imported_sgt_info* info);
 
-struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid);
+struct exported_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid);
 
-struct hyper_dmabuf_imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_id_t hid);
+struct imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_id_t hid);
 
 int hyper_dmabuf_remove_exported(hyper_dmabuf_id_t hid);
 
 int hyper_dmabuf_remove_imported(hyper_dmabuf_id_t hid);
 
 void hyper_dmabuf_foreach_exported(
-	void (*func)(struct hyper_dmabuf_sgt_info *, void *attr),
+	void (*func)(struct exported_sgt_info *, void *attr),
 	void *attr);
 
 int hyper_dmabuf_register_sysfs(struct device *dev);
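
hyper_dmabuf_foreach_exported() walks the exported table with a per-entry
callback plus an opaque attr, which is how a release handler can
force-unexport everything a closing file still owns. A sketch, assuming
exported_sgt_info keeps the owning filp the way the removed
emergency-release helper used it:

    static void example_force_unexport(struct exported_sgt_info *exported,
    				   void *attr)
    {
    	struct ioctl_hyper_dmabuf_unexport unexport_attr = {
    		.hid      = exported->hid,
    		.delay_ms = 0,
    	};

    	if (exported->filp == (struct file *)attr)
    		hyper_dmabuf_unexport_ioctl((struct file *)attr,
    					    &unexport_attr);
    }

    /* e.g. in the fops release path: */
    hyper_dmabuf_foreach_exported(example_force_unexport, filp);
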
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index ec37c3b..907f76e 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -31,7 +31,6 @@
 #include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/dma-buf.h>
-#include <xen/grant_table.h>
 #include <linux/workqueue.h>
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_drv.h"
@@ -39,58 +38,56 @@
 #include "hyper_dmabuf_event.h"
 #include "hyper_dmabuf_list.h"
 
-extern struct hyper_dmabuf_private hyper_dmabuf_private;
-
 struct cmd_process {
 	struct work_struct work;
 	struct hyper_dmabuf_req *rq;
 	int domid;
 };
 
-void hyper_dmabuf_create_request(struct hyper_dmabuf_req *req,
-				 enum hyper_dmabuf_command command, int *operands)
+void hyper_dmabuf_create_req(struct hyper_dmabuf_req *req,
+			     enum hyper_dmabuf_command cmd, int *op)
 {
 	int i;
 
-	req->status = HYPER_DMABUF_REQ_NOT_RESPONDED;
-	req->command = command;
+	req->stat = HYPER_DMABUF_REQ_NOT_RESPONDED;
+	req->cmd = cmd;
 
-	switch(command) {
+	switch(cmd) {
 	/* as exporter, commands to importer */
 	case HYPER_DMABUF_EXPORT:
 		/* exporting pages for dmabuf */
 		/* command : HYPER_DMABUF_EXPORT,
-		 * operands0~3 : hyper_dmabuf_id
-		 * operands4 : number of pages to be shared
-		 * operands5 : offset of data in the first page
-		 * operands6 : length of data in the last page
-		 * operands7 : top-level reference number for shared pages
-		 * operands8 : size of private data (from operands9)
-		 * operands9 ~ : Driver-specific private data (e.g. graphic buffer's meta info)
+		 * op0~3 : hyper_dmabuf_id
+		 * op4 : number of pages to be shared
+		 * op5 : offset of data in the first page
+		 * op6 : length of data in the last page
+		 * op7 : top-level reference number for shared pages
+		 * op8 : size of private data (from op9)
+		 * op9 ~ : Driver-specific private data (e.g. graphic buffer's meta info)
 		 */
 
-		memcpy(&req->operands[0], &operands[0], 9 * sizeof(int) + operands[8]);
+		memcpy(&req->op[0], &op[0], 9 * sizeof(int) + op[8]);
 		break;
 
 	case HYPER_DMABUF_NOTIFY_UNEXPORT:
 		/* destroy sg_list for hyper_dmabuf_id on remote side */
 		/* command : DMABUF_DESTROY,
-		 * operands0~3 : hyper_dmabuf_id_t hid
+		 * op0~3 : hyper_dmabuf_id_t hid
 		 */
 
 		for (i=0; i < 4; i++)
-			req->operands[i] = operands[i];
+			req->op[i] = op[i];
 		break;
 
 	case HYPER_DMABUF_EXPORT_FD:
 	case HYPER_DMABUF_EXPORT_FD_FAILED:
 		/* dmabuf fd is being created on the importer side, or importing failed */
 		/* command : HYPER_DMABUF_EXPORT_FD or HYPER_DMABUF_EXPORT_FD_FAILED,
-		 * operands0~3 : hyper_dmabuf_id
+		 * op0~3 : hyper_dmabuf_id
 		 */
 
 		for (i=0; i < 4; i++)
-			req->operands[i] = operands[i];
+			req->op[i] = op[i];
 		break;
 
 	case HYPER_DMABUF_OPS_TO_REMOTE:
@@ -103,11 +100,11 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_req *req,
 		/* notifying dmabuf map/unmap to exporter, map will make the driver do shadow mapping
 		* or unmapping for synchronization with original exporter (e.g. i915) */
 		/* command : DMABUF_OPS_TO_SOURCE.
-		 * operands0~3 : hyper_dmabuf_id
-		 * operands4 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
+		 * op0~3 : hyper_dmabuf_id
+		 * op4 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
 		 */
 		for (i = 0; i < 5; i++)
-			req->operands[i] = operands[i];
+			req->op[i] = op[i];
 		break;
 
 	default:
@@ -116,9 +113,9 @@ void hyper_dmabuf_create_request(struct hyper_dmabuf_req *req,
 	}
 }
 
-void cmd_process_work(struct work_struct *work)
+static void cmd_process_work(struct work_struct *work)
 {
-	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
+	struct imported_sgt_info *imported;
 	struct cmd_process *proc = container_of(work, struct cmd_process, work);
 	struct hyper_dmabuf_req *req;
 	int domid;
@@ -127,107 +124,107 @@ void cmd_process_work(struct work_struct *work)
 	req = proc->rq;
 	domid = proc->domid;
 
-	switch (req->command) {
+	switch (req->cmd) {
 	case HYPER_DMABUF_EXPORT:
 		/* exporting pages for dmabuf */
 		/* command : HYPER_DMABUF_EXPORT,
-		 * operands0~3 : hyper_dmabuf_id
-		 * operands4 : number of pages to be shared
-		 * operands5 : offset of data in the first page
-		 * operands6 : length of data in the last page
-		 * operands7 : top-level reference number for shared pages
-		 * operands8 : size of private data (from operands9)
-		 * operands9 ~ : Driver-specific private data (e.g. graphic buffer's meta info)
+		 * op0~3 : hyper_dmabuf_id
+		 * op4 : number of pages to be shared
+		 * op5 : offset of data in the first page
+		 * op6 : length of data in the last page
+		 * op7 : top-level reference number for shared pages
+		 * op8 : size of private data (from op9)
+		 * op9 ~ : Driver-specific private data (e.g. graphic buffer's meta info)
 		 */
 
 		/* if nents == 0, this is a message only for priv synchronization
 		 * of an existing imported_sgt_info, so don't create a new one */
-		if (req->operands[4] == 0) {
-			hyper_dmabuf_id_t exist = {req->operands[0],
-						   {req->operands[1], req->operands[2],
-						    req->operands[3]}};
+		if (req->op[4] == 0) {
+			hyper_dmabuf_id_t exist = {req->op[0],
+						   {req->op[1], req->op[2],
+						   req->op[3]}};
 
-			imported_sgt_info = hyper_dmabuf_find_imported(exist);
+			imported = hyper_dmabuf_find_imported(exist);
 
-			if (!imported_sgt_info) {
-				dev_err(hyper_dmabuf_private.device,
+			if (!imported) {
+				dev_err(hy_drv_priv->dev,
 					"Can't find imported sgt_info from IMPORT_LIST\n");
 				break;
 			}
 
 			/* if size of new private data is different,
 			 * we reallocate it. */
-			if (imported_sgt_info->sz_priv != req->operands[8]) {
-				kfree(imported_sgt_info->priv);
-				imported_sgt_info->sz_priv = req->operands[8];
-				imported_sgt_info->priv = kcalloc(1, req->operands[8], GFP_KERNEL);
-				if (!imported_sgt_info->priv) {
-					dev_err(hyper_dmabuf_private.device,
+			if (imported->sz_priv != req->op[8]) {
+				kfree(imported->priv);
+				imported->sz_priv = req->op[8];
+				imported->priv = kcalloc(1, req->op[8], GFP_KERNEL);
+				if (!imported->priv) {
+					dev_err(hy_drv_priv->dev,
 						"Fail to allocate priv\n");
 
 					/* set it invalid */
-					imported_sgt_info->valid = 0;
+					imported->valid = 0;
 					break;
 				}
 			}
 
 			/* updating priv data */
-			memcpy(imported_sgt_info->priv, &req->operands[9], req->operands[8]);
+			memcpy(imported->priv, &req->op[9], req->op[8]);
 
 #ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
 			/* generating import event */
-			hyper_dmabuf_import_event(imported_sgt_info->hid);
+			hyper_dmabuf_import_event(imported->hid);
 #endif
 
 			break;
 		}
 
-		imported_sgt_info = kcalloc(1, sizeof(*imported_sgt_info), GFP_KERNEL);
+		imported = kcalloc(1, sizeof(*imported), GFP_KERNEL);
 
-		if (!imported_sgt_info) {
-			dev_err(hyper_dmabuf_private.device,
+		if (!imported) {
+			dev_err(hy_drv_priv->dev,
 				"No memory left to be allocated\n");
 			break;
 		}
 
-		imported_sgt_info->sz_priv = req->operands[8];
-		imported_sgt_info->priv = kcalloc(1, req->operands[8], GFP_KERNEL);
+		imported->sz_priv = req->op[8];
+		imported->priv = kcalloc(1, req->op[8], GFP_KERNEL);
 
-		if (!imported_sgt_info->priv) {
-			dev_err(hyper_dmabuf_private.device,
+		if (!imported->priv) {
+			dev_err(hy_drv_priv->dev,
 				"Fail to allocate priv\n");
 
-			kfree(imported_sgt_info);
+			kfree(imported);
 			break;
 		}
 
-		imported_sgt_info->hid.id = req->operands[0];
+		imported->hid.id = req->op[0];
 
 		for (i=0; i<3; i++)
-			imported_sgt_info->hid.rng_key[i] = req->operands[i+1];
+			imported->hid.rng_key[i] = req->op[i+1];
 
-		imported_sgt_info->nents = req->operands[4];
-		imported_sgt_info->frst_ofst = req->operands[5];
-		imported_sgt_info->last_len = req->operands[6];
-		imported_sgt_info->ref_handle = req->operands[7];
+		imported->nents = req->op[4];
+		imported->frst_ofst = req->op[5];
+		imported->last_len = req->op[6];
+		imported->ref_handle = req->op[7];
 
-		dev_dbg(hyper_dmabuf_private.device, "DMABUF was exported\n");
-		dev_dbg(hyper_dmabuf_private.device, "\thid{id:%d key:%d %d %d}\n",
-			req->operands[0], req->operands[1], req->operands[2],
-			req->operands[3]);
-		dev_dbg(hyper_dmabuf_private.device, "\tnents %d\n", req->operands[4]);
-		dev_dbg(hyper_dmabuf_private.device, "\tfirst offset %d\n", req->operands[5]);
-		dev_dbg(hyper_dmabuf_private.device, "\tlast len %d\n", req->operands[6]);
-		dev_dbg(hyper_dmabuf_private.device, "\tgrefid %d\n", req->operands[7]);
+		dev_dbg(hy_drv_priv->dev, "DMABUF was exported\n");
+		dev_dbg(hy_drv_priv->dev, "\thid{id:%d key:%d %d %d}\n",
+			req->op[0], req->op[1], req->op[2],
+			req->op[3]);
+		dev_dbg(hy_drv_priv->dev, "\tnents %d\n", req->op[4]);
+		dev_dbg(hy_drv_priv->dev, "\tfirst offset %d\n", req->op[5]);
+		dev_dbg(hy_drv_priv->dev, "\tlast len %d\n", req->op[6]);
+		dev_dbg(hy_drv_priv->dev, "\tgrefid %d\n", req->op[7]);
 
-		memcpy(imported_sgt_info->priv, &req->operands[9], req->operands[8]);
+		memcpy(imported->priv, &req->op[9], req->op[8]);
 
-		imported_sgt_info->valid = 1;
-		hyper_dmabuf_register_imported(imported_sgt_info);
+		imported->valid = true;
+		hyper_dmabuf_register_imported(imported);
 
 #ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
 		/* generating import event */
-		hyper_dmabuf_import_event(imported_sgt_info->hid);
+		hyper_dmabuf_import_event(imported->hid);
 #endif
 
 		break;
@@ -251,142 +248,142 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 {
 	struct cmd_process *proc;
 	struct hyper_dmabuf_req *temp_req;
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	struct hyper_dmabuf_sgt_info *exp_sgt_info;
+	struct imported_sgt_info *imported;
+	struct exported_sgt_info *exported;
 	hyper_dmabuf_id_t hid;
 	int ret;
 
 	if (!req) {
-		dev_err(hyper_dmabuf_private.device, "request is NULL\n");
+		dev_err(hy_drv_priv->dev, "request is NULL\n");
 		return -EINVAL;
 	}
 
-	hid.id = req->operands[0];
-	hid.rng_key[0] = req->operands[1];
-	hid.rng_key[1] = req->operands[2];
-	hid.rng_key[2] = req->operands[3];
+	hid.id = req->op[0];
+	hid.rng_key[0] = req->op[1];
+	hid.rng_key[1] = req->op[2];
+	hid.rng_key[2] = req->op[3];
 
-	if ((req->command < HYPER_DMABUF_EXPORT) ||
-		(req->command > HYPER_DMABUF_OPS_TO_SOURCE)) {
-		dev_err(hyper_dmabuf_private.device, "invalid command\n");
+	if ((req->cmd < HYPER_DMABUF_EXPORT) ||
+		(req->cmd > HYPER_DMABUF_OPS_TO_SOURCE)) {
+		dev_err(hy_drv_priv->dev, "invalid command\n");
 		return -EINVAL;
 	}
 
-	req->status = HYPER_DMABUF_REQ_PROCESSED;
+	req->stat = HYPER_DMABUF_REQ_PROCESSED;
 
 	/* HYPER_DMABUF_DESTROY requires immediate
 	 * follow up so can't be processed in workqueue
 	 */
-	if (req->command == HYPER_DMABUF_NOTIFY_UNEXPORT) {
+	if (req->cmd == HYPER_DMABUF_NOTIFY_UNEXPORT) {
 		/* destroy sg_list for hyper_dmabuf_id on remote side */
 		/* command : HYPER_DMABUF_NOTIFY_UNEXPORT,
-		 * operands0~3 : hyper_dmabuf_id
+		 * op0~3 : hyper_dmabuf_id
 		 */
-		dev_dbg(hyper_dmabuf_private.device,
+		dev_dbg(hy_drv_priv->dev,
 			"%s: processing HYPER_DMABUF_NOTIFY_UNEXPORT\n", __func__);
 
-		sgt_info = hyper_dmabuf_find_imported(hid);
+		imported = hyper_dmabuf_find_imported(hid);
 
-		if (sgt_info) {
+		if (imported) {
 			/* if anything is still using dma_buf */
-			if (sgt_info->num_importers) {
+			if (imported->importers) {
 				/*
 				 * Buffer is still in use; just mark that it should
 				 * not be allowed to export its fd anymore.
 				 */
-				sgt_info->valid = 0;
+				imported->valid = false;
 			} else {
 				/* No one is using buffer, remove it from imported list */
 				hyper_dmabuf_remove_imported(hid);
-				kfree(sgt_info);
+				kfree(imported);
 			}
 		} else {
-			req->status = HYPER_DMABUF_REQ_ERROR;
+			req->stat = HYPER_DMABUF_REQ_ERROR;
 		}
 
-		return req->command;
+		return req->cmd;
 	}
 
 	/* dma buf remote synchronization */
-	if (req->command == HYPER_DMABUF_OPS_TO_SOURCE) {
+	if (req->cmd == HYPER_DMABUF_OPS_TO_SOURCE) {
 		/* notifying dmabuf map/unmap to exporter, map will make the driver do shadow mapping
 		 * or unmapping for synchronization with original exporter (e.g. i915) */
 
 		/* command : DMABUF_OPS_TO_SOURCE.
-		 * operands0~3 : hyper_dmabuf_id
-		 * operands1 : enum hyper_dmabuf_ops {....}
+		 * op0~3 : hyper_dmabuf_id
+		 * op4 : enum hyper_dmabuf_ops {....}
 		 */
-		dev_dbg(hyper_dmabuf_private.device,
+		dev_dbg(hy_drv_priv->dev,
 			"%s: HYPER_DMABUF_OPS_TO_SOURCE\n", __func__);
 
-		ret = hyper_dmabuf_remote_sync(hid, req->operands[4]);
+		ret = hyper_dmabuf_remote_sync(hid, req->op[4]);
 
 		if (ret)
-			req->status = HYPER_DMABUF_REQ_ERROR;
+			req->stat = HYPER_DMABUF_REQ_ERROR;
 		else
-			req->status = HYPER_DMABUF_REQ_PROCESSED;
+			req->stat = HYPER_DMABUF_REQ_PROCESSED;
 
-		return req->command;
+		return req->cmd;
 	}
 
 	/* synchronous dma_buf_fd export */
-	if (req->command == HYPER_DMABUF_EXPORT_FD) {
+	if (req->cmd == HYPER_DMABUF_EXPORT_FD) {
 		/* find a corresponding SGT for the id */
-		dev_dbg(hyper_dmabuf_private.device,
+		dev_dbg(hy_drv_priv->dev,
 			"Processing HYPER_DMABUF_EXPORT_FD for buffer {id:%d key:%d %d %d}\n",
 			hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
 
-		exp_sgt_info = hyper_dmabuf_find_exported(hid);
+		exported = hyper_dmabuf_find_exported(hid);
 
-		if (!exp_sgt_info) {
-			dev_err(hyper_dmabuf_private.device,
+		if (!exported) {
+			dev_err(hy_drv_priv->dev,
 				"critical err: requested sgt_info can't be found for buffer {id:%d key:%d %d %d}\n",
 				hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
 
-			req->status = HYPER_DMABUF_REQ_ERROR;
-		} else if (!exp_sgt_info->valid) {
-			dev_dbg(hyper_dmabuf_private.device,
+			req->stat = HYPER_DMABUF_REQ_ERROR;
+		} else if (!exported->valid) {
+			dev_dbg(hy_drv_priv->dev,
 				"Buffer no longer valid - cannot export fd for buffer {id:%d key:%d %d %d}\n",
 				hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
 
-			req->status = HYPER_DMABUF_REQ_ERROR;
+			req->stat = HYPER_DMABUF_REQ_ERROR;
 		} else {
-			dev_dbg(hyper_dmabuf_private.device,
+			dev_dbg(hy_drv_priv->dev,
 				"Buffer still valid - can export fd for buffer {id:%d key:%d %d %d}\n",
 				hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
 
-			exp_sgt_info->importer_exported++;
-			req->status = HYPER_DMABUF_REQ_PROCESSED;
+			exported->active++;
+			req->stat = HYPER_DMABUF_REQ_PROCESSED;
 		}
-		return req->command;
+		return req->cmd;
 	}
 
-	if (req->command == HYPER_DMABUF_EXPORT_FD_FAILED) {
-		dev_dbg(hyper_dmabuf_private.device,
+	if (req->cmd == HYPER_DMABUF_EXPORT_FD_FAILED) {
+		dev_dbg(hy_drv_priv->dev,
 			"Processing HYPER_DMABUF_EXPORT_FD_FAILED for buffer {id:%d key:%d %d %d}\n",
 			hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
 
-		exp_sgt_info = hyper_dmabuf_find_exported(hid);
+		exported = hyper_dmabuf_find_exported(hid);
 
-		if (!exp_sgt_info) {
-			dev_err(hyper_dmabuf_private.device,
+		if (!exported) {
+			dev_err(hy_drv_priv->dev,
 				"critical err: requested sgt_info can't be found for buffer {id:%d key:%d %d %d}\n",
 				hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
 
-			req->status = HYPER_DMABUF_REQ_ERROR;
+			req->stat = HYPER_DMABUF_REQ_ERROR;
 		} else {
-			exp_sgt_info->importer_exported--;
-			req->status = HYPER_DMABUF_REQ_PROCESSED;
+			exported->active--;
+			req->stat = HYPER_DMABUF_REQ_PROCESSED;
 		}
-		return req->command;
+		return req->cmd;
 	}
 
-	dev_dbg(hyper_dmabuf_private.device,
+	dev_dbg(hy_drv_priv->dev,
 		"%s: putting request to workqueue\n", __func__);
 	temp_req = kmalloc(sizeof(*temp_req), GFP_KERNEL);
 
 	if (!temp_req) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"No memory left to be allocated\n");
 		return -ENOMEM;
 	}
@@ -396,7 +393,7 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 	proc = kcalloc(1, sizeof(struct cmd_process), GFP_KERNEL);
 
 	if (!proc) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"No memory left to be allocated\n");
 		kfree(temp_req);
 		return -ENOMEM;
@@ -407,7 +404,7 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 
 	INIT_WORK(&(proc->work), cmd_process_work);
 
-	queue_work(hyper_dmabuf_private.work_queue, &(proc->work));
+	queue_work(hy_drv_priv->work_queue, &(proc->work));
 
-	return req->command;
+	return req->cmd;
 }
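
For HYPER_DMABUF_EXPORT the memcpy above moves 9 * sizeof(int) + op[8]
bytes: the fixed header (hid, nents, offsets, top-level gref, priv size)
plus op[8] bytes of inline private data. Packing such a request on the
exporter side therefore looks like this (a fragment; surrounding
declarations omitted and values illustrative):

    int op[MAX_NUMBER_OF_OPERANDS];

    op[0] = hid.id;
    op[1] = hid.rng_key[0];
    op[2] = hid.rng_key[1];
    op[3] = hid.rng_key[2];
    op[4] = nents;		/* number of pages to be shared */
    op[5] = frst_ofst;		/* offset of data in the first page */
    op[6] = last_len;		/* length of data in the last page */
    op[7] = top_level_gref;	/* top-level reference for shared pages */
    op[8] = sz_priv;		/* bytes of private data that follow */
    memcpy(&op[9], priv, sz_priv);	/* must fit within op[9..63] */

    hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT, &op[0]);
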
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
index 0f6e795..7c694ec 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -28,17 +28,17 @@
 #define MAX_NUMBER_OF_OPERANDS 64
 
 struct hyper_dmabuf_req {
-	unsigned int request_id;
-	unsigned int status;
-	unsigned int command;
-	unsigned int operands[MAX_NUMBER_OF_OPERANDS];
+	unsigned int req_id;
+	unsigned int stat;
+	unsigned int cmd;
+	unsigned int op[MAX_NUMBER_OF_OPERANDS];
 };
 
 struct hyper_dmabuf_resp {
-	unsigned int response_id;
-	unsigned int status;
-	unsigned int command;
-	unsigned int operands[MAX_NUMBER_OF_OPERANDS];
+	unsigned int resp_id;
+	unsigned int stat;
+	unsigned int cmd;
+	unsigned int op[MAX_NUMBER_OF_OPERANDS];
 };
 
 enum hyper_dmabuf_command {
@@ -75,7 +75,7 @@ enum hyper_dmabuf_req_feedback {
 };
 
 /* create a request packet with given command and operands */
-void hyper_dmabuf_create_request(struct hyper_dmabuf_req *req,
+void hyper_dmabuf_create_req(struct hyper_dmabuf_req *req,
 				 enum hyper_dmabuf_command command,
 				 int *operands);
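
With MAX_NUMBER_OF_OPERANDS at 64, each request and response is a fixed
268-byte message (3 header words plus 64 operand words of 4 bytes each),
which caps the inline private data of an EXPORT at op[9..63], i.e. 220
bytes per message. A compile-time check of that layout, e.g. from the
driver's init function, would be:

    BUILD_BUG_ON(sizeof(struct hyper_dmabuf_req) !=
    	     (3 + MAX_NUMBER_OF_OPERANDS) * sizeof(unsigned int));
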
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
index 9313c42..7e73170 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
@@ -32,8 +32,6 @@
 #include <linux/slab.h>
 #include <linux/module.h>
 #include <linux/dma-buf.h>
-#include <xen/grant_table.h>
-#include <asm/xen/page.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_ops.h"
@@ -45,122 +43,111 @@
 #define WAIT_AFTER_SYNC_REQ 0
 #define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
 
-extern struct hyper_dmabuf_private hyper_dmabuf_private;
-
-inline int hyper_dmabuf_sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
+static int hyper_dmabuf_sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
 {
 	struct hyper_dmabuf_req *req;
-	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
-	int operands[5];
+	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
+	int op[5];
 	int i;
 	int ret;
 
-	operands[0] = hid.id;
+	op[0] = hid.id;
 
 	for (i=0; i<3; i++)
-		operands[i+1] = hid.rng_key[i];
+		op[i+1] = hid.rng_key[i];
 
-	operands[4] = dmabuf_ops;
+	op[4] = dmabuf_ops;
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
 	if (!req) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"No memory left to be allocated\n");
 		return -ENOMEM;
 	}
 
-	hyper_dmabuf_create_request(req, HYPER_DMABUF_OPS_TO_SOURCE, &operands[0]);
+	hyper_dmabuf_create_req(req, HYPER_DMABUF_OPS_TO_SOURCE, &op[0]);
 
 	/* send request and wait for a response */
 	ret = ops->send_req(HYPER_DMABUF_DOM_ID(hid), req, WAIT_AFTER_SYNC_REQ);
 
+	if (ret < 0) {
+		dev_dbg(hy_drv_priv->dev,
+			"dmabuf sync request failed:%d\n", req->op[4]);
+	}
+
 	kfree(req);
 
 	return ret;
 }
 
-static int hyper_dmabuf_ops_attach(struct dma_buf* dmabuf, struct device* dev,
-			struct dma_buf_attachment *attach)
+static int hyper_dmabuf_ops_attach(struct dma_buf* dmabuf,
+				   struct device* dev,
+				   struct dma_buf_attachment *attach)
 {
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct imported_sgt_info *imported;
 	int ret;
 
 	if (!attach->dmabuf->priv)
 		return -EINVAL;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
+	imported = (struct imported_sgt_info *)attach->dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_ATTACH);
 
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-		return ret;
-	}
-
-	return 0;
+	return ret;
 }
 
-static void hyper_dmabuf_ops_detach(struct dma_buf* dmabuf, struct dma_buf_attachment *attach)
+static void hyper_dmabuf_ops_detach(struct dma_buf* dmabuf,
+				    struct dma_buf_attachment *attach)
 {
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct imported_sgt_info *imported;
 	int ret;
 
 	if (!attach->dmabuf->priv)
 		return;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
+	imported = (struct imported_sgt_info *)attach->dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_DETACH);
-
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
 }
 
 static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachment,
-						enum dma_data_direction dir)
+					     enum dma_data_direction dir)
 {
 	struct sg_table *st;
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	struct hyper_dmabuf_pages_info *page_info;
+	struct imported_sgt_info *imported;
+	struct pages_info *pg_info;
 	int ret;
 
 	if (!attachment->dmabuf->priv)
 		return NULL;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attachment->dmabuf->priv;
+	imported = (struct imported_sgt_info *)attachment->dmabuf->priv;
 
 	/* extract pages from sgt */
-	page_info = hyper_dmabuf_ext_pgs(sgt_info->sgt);
+	pg_info = hyper_dmabuf_ext_pgs(imported->sgt);
 
-	if (!page_info) {
+	if (!pg_info) {
 		return NULL;
 	}
 
 	/* create a new sg_table with extracted pages */
-	st = hyper_dmabuf_create_sgt(page_info->pages, page_info->frst_ofst,
-				page_info->last_len, page_info->nents);
+	st = hyper_dmabuf_create_sgt(pg_info->pgs, pg_info->frst_ofst,
+				     pg_info->last_len, pg_info->nents);
 	if (!st)
 		goto err_free_sg;
 
         if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir))
                 goto err_free_sg;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_MAP);
 
-	kfree(page_info->pages);
-	kfree(page_info);
-
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
+	kfree(pg_info->pgs);
+	kfree(pg_info);
 
 	return st;
 
@@ -170,8 +157,8 @@ static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachme
 		kfree(st);
 	}
 
-	kfree(page_info->pages);
-	kfree(page_info);
+	kfree(pg_info->pgs);
+	kfree(pg_info);
 
 	return NULL;
 }
@@ -180,294 +167,251 @@ static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
 				   struct sg_table *sg,
 				   enum dma_data_direction dir)
 {
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct imported_sgt_info *imported;
 	int ret;
 
 	if (!attachment->dmabuf->priv)
 		return;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attachment->dmabuf->priv;
+	imported = (struct imported_sgt_info *)attachment->dmabuf->priv;
 
 	dma_unmap_sg(attachment->dev, sg->sgl, sg->nents, dir);
 
 	sg_free_table(sg);
 	kfree(sg);
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_UNMAP);
-
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
 }
 
 static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 {
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
-	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
+	struct imported_sgt_info *imported;
+	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
 	int ret;
-	int final_release;
+	int finish;
 
 	if (!dma_buf->priv)
 		return;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dma_buf->priv;
+	imported = (struct imported_sgt_info *)dma_buf->priv;
 
-	if (!dmabuf_refcount(sgt_info->dma_buf)) {
-		sgt_info->dma_buf = NULL;
+	if (!dmabuf_refcount(imported->dma_buf)) {
+		imported->dma_buf = NULL;
 	}
 
-	sgt_info->num_importers--;
+	imported->importers--;
 
-	if (sgt_info->num_importers == 0) {
-		ops->unmap_shared_pages(&sgt_info->refs_info, sgt_info->nents);
+	if (imported->importers == 0) {
+		ops->unmap_shared_pages(&imported->refs_info, imported->nents);
 
-		if (sgt_info->sgt) {
-			sg_free_table(sgt_info->sgt);
-			kfree(sgt_info->sgt);
-			sgt_info->sgt = NULL;
+		if (imported->sgt) {
+			sg_free_table(imported->sgt);
+			kfree(imported->sgt);
+			imported->sgt = NULL;
 		}
 	}
 
-	final_release = sgt_info && !sgt_info->valid &&
-		        !sgt_info->num_importers;
+	finish = imported && !imported->valid &&
+		 !imported->importers;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_RELEASE);
-	if (ret < 0) {
-		dev_warn(hyper_dmabuf_private.device,
-			 "hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
 
 	/*
 	 * Check if buffer is still valid and if not remove it from imported list.
 	 * That has to be done after sending sync request
 	 */
-	if (final_release) {
-		hyper_dmabuf_remove_imported(sgt_info->hid);
-		kfree(sgt_info);
+	if (finish) {
+		hyper_dmabuf_remove_imported(imported->hid);
+		kfree(imported);
 	}
 }
 
 static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
 {
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct imported_sgt_info *imported;
 	int ret;
 
 	if (!dmabuf->priv)
 		return -EINVAL;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
 
 	return ret;
 }
 
 static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
 {
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct imported_sgt_info *imported;
 	int ret;
 
 	if (!dmabuf->priv)
 		return -EINVAL;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_END_CPU_ACCESS);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
 
 	return 0;
 }
 
 static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum)
 {
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct imported_sgt_info *imported;
 	int ret;
 
 	if (!dmabuf->priv)
 		return NULL;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_KMAP_ATOMIC);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
 
 	return NULL; /* for now NULL.. need to return the address of mapped region */
 }
 
 static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum, void *vaddr)
 {
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct imported_sgt_info *imported;
 	int ret;
 
 	if (!dmabuf->priv)
 		return;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
 }
 
 static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
 {
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct imported_sgt_info *imported;
 	int ret;
 
 	if (!dmabuf->priv)
 		return NULL;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_KMAP);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
 
-	return NULL; /* for now NULL.. need to return the address of mapped region */
+	/* for now NULL.. need to return the address of mapped region */
+	return NULL;
 }
 
-static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum, void *vaddr)
+static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum,
+				    void *vaddr)
 {
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct imported_sgt_info *imported;
 	int ret;
 
 	if (!dmabuf->priv)
 		return;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_KUNMAP);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
 }
 
 static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
 {
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct imported_sgt_info *imported;
 	int ret;
 
 	if (!dmabuf->priv)
 		return -EINVAL;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_MMAP);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
 
 	return ret;
 }
 
 static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
 {
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct imported_sgt_info *imported;
 	int ret;
 
 	if (!dmabuf->priv)
 		return NULL;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_VMAP);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
 
 	return NULL;
 }
 
 static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
 {
-	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct imported_sgt_info *imported;
 	int ret;
 
 	if (!dmabuf->priv)
 		return;
 
-	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(sgt_info->hid,
+	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_VUNMAP);
-	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"hyper_dmabuf::%s Error:send dmabuf sync request failed\n", __func__);
-	}
 }
 
 static const struct dma_buf_ops hyper_dmabuf_ops = {
-		.attach = hyper_dmabuf_ops_attach,
-		.detach = hyper_dmabuf_ops_detach,
-		.map_dma_buf = hyper_dmabuf_ops_map,
-		.unmap_dma_buf = hyper_dmabuf_ops_unmap,
-		.release = hyper_dmabuf_ops_release,
-		.begin_cpu_access = (void*)hyper_dmabuf_ops_begin_cpu_access,
-		.end_cpu_access = (void*)hyper_dmabuf_ops_end_cpu_access,
-		.map_atomic = hyper_dmabuf_ops_kmap_atomic,
-		.unmap_atomic = hyper_dmabuf_ops_kunmap_atomic,
-		.map = hyper_dmabuf_ops_kmap,
-		.unmap = hyper_dmabuf_ops_kunmap,
-		.mmap = hyper_dmabuf_ops_mmap,
-		.vmap = hyper_dmabuf_ops_vmap,
-		.vunmap = hyper_dmabuf_ops_vunmap,
+	.attach = hyper_dmabuf_ops_attach,
+	.detach = hyper_dmabuf_ops_detach,
+	.map_dma_buf = hyper_dmabuf_ops_map,
+	.unmap_dma_buf = hyper_dmabuf_ops_unmap,
+	.release = hyper_dmabuf_ops_release,
+	.begin_cpu_access = (void*)hyper_dmabuf_ops_begin_cpu_access,
+	.end_cpu_access = (void*)hyper_dmabuf_ops_end_cpu_access,
+	.map_atomic = hyper_dmabuf_ops_kmap_atomic,
+	.unmap_atomic = hyper_dmabuf_ops_kunmap_atomic,
+	.map = hyper_dmabuf_ops_kmap,
+	.unmap = hyper_dmabuf_ops_kunmap,
+	.mmap = hyper_dmabuf_ops_mmap,
+	.vmap = hyper_dmabuf_ops_vmap,
+	.vunmap = hyper_dmabuf_ops_vunmap,
 };
 
 /* exporting dmabuf as fd */
-int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags)
+int hyper_dmabuf_export_fd(struct imported_sgt_info *imported, int flags)
 {
 	int fd = -1;
 
 	/* call hyper_dmabuf_export_dmabuf and create
 	 * and bind a handle for it then release
 	 */
-	hyper_dmabuf_export_dma_buf(dinfo);
+	hyper_dmabuf_export_dma_buf(imported);
 
-	if (dinfo->dma_buf) {
-		fd = dma_buf_fd(dinfo->dma_buf, flags);
+	if (imported->dma_buf) {
+		fd = dma_buf_fd(imported->dma_buf, flags);
 	}
 
 	return fd;
 }
 
-void hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo)
+void hyper_dmabuf_export_dma_buf(struct imported_sgt_info *imported)
 {
 	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
 
 	exp_info.ops = &hyper_dmabuf_ops;
 
 	/* multiple of PAGE_SIZE, not considering offset */
-	exp_info.size = dinfo->sgt->nents * PAGE_SIZE;
-	exp_info.flags = /* not sure about flag */0;
-	exp_info.priv = dinfo;
+	exp_info.size = imported->sgt->nents * PAGE_SIZE;
+	exp_info.flags = /* not sure about flag */ 0;
+	exp_info.priv = imported;
 
-	dinfo->dma_buf = dma_buf_export(&exp_info);
+	imported->dma_buf = dma_buf_export(&exp_info);
 }
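
Every local dma_buf op on an imported buffer is mirrored to the exporting
domain through hyper_dmabuf_sync_request() so the exporter can shadow the
map/unmap/attach/detach on its side. The importer-side fd creation then
ties the ops table together; the call sequence is roughly as follows
(O_CLOEXEC is illustrative here, the flags come from userspace in the real
ioctl):

    static int example_import_to_fd(hyper_dmabuf_id_t hid)
    {
    	struct imported_sgt_info *imported;

    	imported = hyper_dmabuf_find_imported(hid);
    	if (!imported)
    		return -ENOENT;

    	/* wraps the imported pages in a local dma_buf backed by
    	 * hyper_dmabuf_ops, then installs an fd for it */
    	return hyper_dmabuf_export_fd(imported, O_CLOEXEC);
    }
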
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h
index 8c06fc6..c5505a4 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h
@@ -25,8 +25,8 @@
 #ifndef __HYPER_DMABUF_OPS_H__
 #define __HYPER_DMABUF_OPS_H__
 
-int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags);
+int hyper_dmabuf_export_fd(struct imported_sgt_info *imported, int flags);
 
-void hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo);
+void hyper_dmabuf_export_dma_buf(struct imported_sgt_info *imported);
 
 #endif /* __HYPER_DMABUF_IMP_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c
index 39c9dee..36e888c 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c
@@ -32,16 +32,12 @@
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_id.h"
 
-extern struct hyper_dmabuf_private hyper_dmabuf_private;
-
 #define HYPER_DMABUF_SIZE(nents, first_offset, last_len) \
 	((nents)*PAGE_SIZE - (first_offset) - PAGE_SIZE + (last_len))
 
-int hyper_dmabuf_query_exported(struct hyper_dmabuf_sgt_info *sgt_info,
+int hyper_dmabuf_query_exported(struct exported_sgt_info *exported,
 				int query, unsigned long* info)
 {
-	int n;
-
 	switch (query)
 	{
 		case HYPER_DMABUF_QUERY_TYPE:
@@ -50,45 +46,46 @@ int hyper_dmabuf_query_exported(struct hyper_dmabuf_sgt_info *sgt_info,
 
 		/* exporting domain of this specific dmabuf*/
 		case HYPER_DMABUF_QUERY_EXPORTER:
-			*info = HYPER_DMABUF_DOM_ID(sgt_info->hid);
+			*info = HYPER_DMABUF_DOM_ID(exported->hid);
 			break;
 
 		/* importing domain of this specific dmabuf */
 		case HYPER_DMABUF_QUERY_IMPORTER:
-			*info = sgt_info->hyper_dmabuf_rdomain;
+			*info = exported->rdomid;
 			break;
 
 		/* size of dmabuf in byte */
 		case HYPER_DMABUF_QUERY_SIZE:
-			*info = sgt_info->dma_buf->size;
+			*info = exported->dma_buf->size;
 			break;
 
 		/* whether the buffer is used by importer */
 		case HYPER_DMABUF_QUERY_BUSY:
-			*info = (sgt_info->importer_exported == 0) ? false : true;
+			*info = (exported->active > 0);
 			break;
 
 		/* whether the buffer is unexported */
 		case HYPER_DMABUF_QUERY_UNEXPORTED:
-			*info = !sgt_info->valid;
+			*info = !exported->valid;
 			break;
 
 		/* whether the buffer is scheduled to be unexported */
 		case HYPER_DMABUF_QUERY_DELAYED_UNEXPORTED:
-			*info = !sgt_info->unexport_scheduled;
+			*info = !exported->unexport_sched;
 			break;
 
 		/* size of private info attached to buffer */
 		case HYPER_DMABUF_QUERY_PRIV_INFO_SIZE:
-			*info = sgt_info->sz_priv;
+			*info = exported->sz_priv;
 			break;
 
 		/* copy private info attached to buffer */
 		case HYPER_DMABUF_QUERY_PRIV_INFO:
-			if (sgt_info->sz_priv > 0) {
+			if (exported->sz_priv > 0) {
+				int n;
 				n = copy_to_user((void __user*) *info,
-						sgt_info->priv,
-						sgt_info->sz_priv);
+						exported->priv,
+						exported->sz_priv);
 				if (n != 0)
 					return -EINVAL;
 			}
@@ -102,11 +99,9 @@ int hyper_dmabuf_query_exported(struct hyper_dmabuf_sgt_info *sgt_info,
 }
 
 
-int hyper_dmabuf_query_imported(struct hyper_dmabuf_imported_sgt_info *imported_sgt_info,
+int hyper_dmabuf_query_imported(struct imported_sgt_info *imported,
 				int query, unsigned long *info)
 {
-	int n;
-
 	switch (query)
 	{
 		case HYPER_DMABUF_QUERY_TYPE:
@@ -115,50 +110,51 @@ int hyper_dmabuf_query_imported(struct hyper_dmabuf_imported_sgt_info *imported_
 
 		/* exporting domain of this specific dmabuf */
 		case HYPER_DMABUF_QUERY_EXPORTER:
-			*info = HYPER_DMABUF_DOM_ID(imported_sgt_info->hid);
+			*info = HYPER_DMABUF_DOM_ID(imported->hid);
 			break;
 
 		/* importing domain of this specific dmabuf */
 		case HYPER_DMABUF_QUERY_IMPORTER:
-			*info = hyper_dmabuf_private.domid;
+			*info = hy_drv_priv->domid;
 			break;
 
 		/* size of dmabuf in bytes */
 		case HYPER_DMABUF_QUERY_SIZE:
-			if (imported_sgt_info->dma_buf) {
+			if (imported->dma_buf) {
 				/* if local dma_buf is created (if it's ever mapped),
 				 * retrieve it directly from struct dma_buf *
 				 */
-				*info = imported_sgt_info->dma_buf->size;
+				*info = imported->dma_buf->size;
 			} else {
 				/* calculate it from given nents, frst_ofst and last_len */
-				*info = HYPER_DMABUF_SIZE(imported_sgt_info->nents,
-							  imported_sgt_info->frst_ofst,
-							  imported_sgt_info->last_len);
+				*info = HYPER_DMABUF_SIZE(imported->nents,
+							  imported->frst_ofst,
+							  imported->last_len);
 			}
 			break;
 
 		/* whether the buffer is used or not */
 		case HYPER_DMABUF_QUERY_BUSY:
 			/* checks if it's used by importer */
-			*info = (imported_sgt_info->num_importers > 0) ? true : false;
+			*info = (imported->importers > 0);
 			break;
 
 		/* whether the buffer is unexported */
 		case HYPER_DMABUF_QUERY_UNEXPORTED:
-			*info = !imported_sgt_info->valid;
+			*info = !imported->valid;
 			break;
 		/* size of private info attached to buffer */
 		case HYPER_DMABUF_QUERY_PRIV_INFO_SIZE:
-			*info = imported_sgt_info->sz_priv;
+			*info = imported->sz_priv;
 			break;
 
 		/* copy private info attached to buffer */
 		case HYPER_DMABUF_QUERY_PRIV_INFO:
-			if (imported_sgt_info->sz_priv > 0) {
+			if (imported->sz_priv > 0) {
+				int n;
 				n = copy_to_user((void __user*) *info,
-						imported_sgt_info->priv,
-						imported_sgt_info->sz_priv);
+						imported->priv,
+						imported->sz_priv);
 				if (n != 0)
 					return -EINVAL;
 			}
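
As a concrete check of the HYPER_DMABUF_SIZE() macro above: for a buffer
spanning nents = 3 pages with frst_ofst = 512 and last_len = 100 on 4 KiB
pages, the macro yields

	HYPER_DMABUF_SIZE(3, 512, 100)
		= 3*4096 - 512 - 4096 + 100
		= 7780 bytes

	/* equivalently: (PAGE_SIZE - frst_ofst)    data in first page
	 *             + (nents - 2) * PAGE_SIZE    full middle pages
	 *             + last_len                   data in last page
	 */
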
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
index 7bbb322..65ae738 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
@@ -1,10 +1,10 @@
 #ifndef __HYPER_DMABUF_QUERY_H__
 #define __HYPER_DMABUF_QUERY_H__
 
-int hyper_dmabuf_query_imported(struct hyper_dmabuf_imported_sgt_info *imported_sgt_info,
+int hyper_dmabuf_query_imported(struct imported_sgt_info *imported,
 				int query, unsigned long *info);
 
-int hyper_dmabuf_query_exported(struct hyper_dmabuf_sgt_info *sgt_info,
+int hyper_dmabuf_query_exported(struct exported_sgt_info *exported,
 				int query, unsigned long *info);
 
 #endif // __HYPER_DMABUF_QUERY_H__
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
index 9004406..01ec98c 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -39,8 +39,6 @@
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_sgl_proc.h"
 
-extern struct hyper_dmabuf_private hyper_dmabuf_private;
-
 /* Whenever importer does dma operations from remote domain,
  * a notification is sent to the exporter so that exporter
  * issues equivalent dma operation on the original dma buf
@@ -58,7 +56,7 @@ extern struct hyper_dmabuf_private hyper_dmabuf_private;
  */
 int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 {
-	struct hyper_dmabuf_sgt_info *sgt_info;
+	struct exported_sgt_info *exported;
 	struct sgt_list *sgtl;
 	struct attachment_list *attachl;
 	struct kmap_vaddr_list *va_kmapl;
@@ -66,10 +64,10 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 	int ret;
 
 	/* find a corresponding SGT for the id */
-	sgt_info = hyper_dmabuf_find_exported(hid);
+	exported = hyper_dmabuf_find_exported(hid);
 
-	if (!sgt_info) {
-		dev_err(hyper_dmabuf_private.device,
+	if (!exported) {
+		dev_err(hy_drv_priv->dev,
 			"dmabuf remote sync::can't find exported list\n");
 		return -ENOENT;
 	}
@@ -79,84 +77,84 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 		attachl = kcalloc(1, sizeof(*attachl), GFP_KERNEL);
 
 		if (!attachl) {
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_ATTACH\n");
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_ATTACH\n");
 			return -ENOMEM;
 		}
 
-		attachl->attach = dma_buf_attach(sgt_info->dma_buf,
-						 hyper_dmabuf_private.device);
+		attachl->attach = dma_buf_attach(exported->dma_buf,
+						 hy_drv_priv->dev);
 
 		if (!attachl->attach) {
 			kfree(attachl);
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_ATTACH\n");
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_ATTACH\n");
 			return -ENOMEM;
 		}
 
-		list_add(&attachl->list, &sgt_info->active_attached->list);
+		list_add(&attachl->list, &exported->active_attached->list);
 		break;
 
 	case HYPER_DMABUF_OPS_DETACH:
-		if (list_empty(&sgt_info->active_attached->list)) {
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_DETACH\n");
-			dev_err(hyper_dmabuf_private.device,
+		if (list_empty(&exported->active_attached->list)) {
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_DETACH\n");
+			dev_err(hy_drv_priv->dev,
 				"no more dmabuf attachment left to be detached\n");
 			return -EFAULT;
 		}
 
-		attachl = list_first_entry(&sgt_info->active_attached->list,
+		attachl = list_first_entry(&exported->active_attached->list,
 					   struct attachment_list, list);
 
-		dma_buf_detach(sgt_info->dma_buf, attachl->attach);
+		dma_buf_detach(exported->dma_buf, attachl->attach);
 		list_del(&attachl->list);
 		kfree(attachl);
 		break;
 
 	case HYPER_DMABUF_OPS_MAP:
-		if (list_empty(&sgt_info->active_attached->list)) {
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_MAP\n");
-			dev_err(hyper_dmabuf_private.device,
+		if (list_empty(&exported->active_attached->list)) {
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_MAP\n");
+			dev_err(hy_drv_priv->dev,
 				"no more dmabuf attachment left to be mapped\n");
 			return -EFAULT;
 		}
 
-		attachl = list_first_entry(&sgt_info->active_attached->list,
+		attachl = list_first_entry(&exported->active_attached->list,
 					   struct attachment_list, list);
 
 		sgtl = kcalloc(1, sizeof(*sgtl), GFP_KERNEL);
 
 		if (!sgtl) {
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_MAP\n");
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_MAP\n");
 			return -ENOMEM;
 		}
 
 		sgtl->sgt = dma_buf_map_attachment(attachl->attach, DMA_BIDIRECTIONAL);
 		if (!sgtl->sgt) {
 			kfree(sgtl);
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_MAP\n");
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_MAP\n");
 			return -ENOMEM;
 		}
-		list_add(&sgtl->list, &sgt_info->active_sgts->list);
+		list_add(&sgtl->list, &exported->active_sgts->list);
 		break;
 
 	case HYPER_DMABUF_OPS_UNMAP:
-		if (list_empty(&sgt_info->active_sgts->list) ||
-		    list_empty(&sgt_info->active_attached->list)) {
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_UNMAP\n");
-			dev_err(hyper_dmabuf_private.device,
+		if (list_empty(&exported->active_sgts->list) ||
+		    list_empty(&exported->active_attached->list)) {
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_UNMAP\n");
+			dev_err(hy_drv_priv->dev,
 				"no more SGT or attachment left to be unmapped\n");
 			return -EFAULT;
 		}
 
-		attachl = list_first_entry(&sgt_info->active_attached->list,
+		attachl = list_first_entry(&exported->active_attached->list,
 					   struct attachment_list, list);
-		sgtl = list_first_entry(&sgt_info->active_sgts->list,
+		sgtl = list_first_entry(&exported->active_sgts->list,
 					struct sgt_list, list);
 
 		dma_buf_unmap_attachment(attachl->attach, sgtl->sgt,
@@ -166,30 +164,30 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 		break;
 
 	case HYPER_DMABUF_OPS_RELEASE:
-		dev_dbg(hyper_dmabuf_private.device,
+		dev_dbg(hy_drv_priv->dev,
 			"Buffer {id:%d key:%d %d %d} released, references left: %d\n",
-			 sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
-			 sgt_info->hid.rng_key[2], sgt_info->importer_exported -1);
+			 exported->hid.id, exported->hid.rng_key[0], exported->hid.rng_key[1],
+			 exported->hid.rng_key[2], exported->active - 1);
 
-                sgt_info->importer_exported--;
+		exported->active--;
 		/* If there are still importers just break; if not, continue with final cleanup */
-		if (sgt_info->importer_exported)
+		if (exported->active)
 			break;
 
 		/*
 		 * Importer just released buffer fd; check if any other importer is still using it.
 		 * If not and buffer was unexported, clean up shared data and remove that buffer.
 		 */
-		dev_dbg(hyper_dmabuf_private.device,
+		dev_dbg(hy_drv_priv->dev,
 			"Buffer {id:%d key:%d %d %d} final released\n",
-			sgt_info->hid.id, sgt_info->hid.rng_key[0], sgt_info->hid.rng_key[1],
-			sgt_info->hid.rng_key[2]);
+			exported->hid.id, exported->hid.rng_key[0], exported->hid.rng_key[1],
+			exported->hid.rng_key[2]);
 
-		if (!sgt_info->valid && !sgt_info->importer_exported &&
-		    !sgt_info->unexport_scheduled) {
-			hyper_dmabuf_cleanup_sgt_info(sgt_info, false);
+		if (!exported->valid && !exported->active &&
+		    !exported->unexport_sched) {
+			hyper_dmabuf_cleanup_sgt_info(exported, false);
 			hyper_dmabuf_remove_exported(hid);
-			kfree(sgt_info);
+			kfree(exported);
 			/* store hyper_dmabuf_id in the list for reuse */
 			store_reusable_hid(hid);
 		}
@@ -197,19 +195,19 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 		break;
 
 	case HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS:
-		ret = dma_buf_begin_cpu_access(sgt_info->dma_buf, DMA_BIDIRECTIONAL);
+		ret = dma_buf_begin_cpu_access(exported->dma_buf, DMA_BIDIRECTIONAL);
 		if (ret) {
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS\n");
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS\n");
 			return ret;
 		}
 		break;
 
 	case HYPER_DMABUF_OPS_END_CPU_ACCESS:
-		ret = dma_buf_end_cpu_access(sgt_info->dma_buf, DMA_BIDIRECTIONAL);
+		ret = dma_buf_end_cpu_access(exported->dma_buf, DMA_BIDIRECTIONAL);
 		if (ret) {
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_END_CPU_ACCESS\n");
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_END_CPU_ACCESS\n");
 			return ret;
 		}
 		break;
@@ -218,49 +216,49 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 	case HYPER_DMABUF_OPS_KMAP:
 		va_kmapl = kcalloc(1, sizeof(*va_kmapl), GFP_KERNEL);
 		if (!va_kmapl) {
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
 			return -ENOMEM;
 		}
 
 		/* dummy kmapping of 1 page */
 		if (ops == HYPER_DMABUF_OPS_KMAP_ATOMIC)
-			va_kmapl->vaddr = dma_buf_kmap_atomic(sgt_info->dma_buf, 1);
+			va_kmapl->vaddr = dma_buf_kmap_atomic(exported->dma_buf, 1);
 		else
-			va_kmapl->vaddr = dma_buf_kmap(sgt_info->dma_buf, 1);
+			va_kmapl->vaddr = dma_buf_kmap(exported->dma_buf, 1);
 
 		if (!va_kmapl->vaddr) {
 			kfree(va_kmapl);
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
 			return -ENOMEM;
 		}
-		list_add(&va_kmapl->list, &sgt_info->va_kmapped->list);
+		list_add(&va_kmapl->list, &exported->va_kmapped->list);
 		break;
 
 	case HYPER_DMABUF_OPS_KUNMAP_ATOMIC:
 	case HYPER_DMABUF_OPS_KUNMAP:
-		if (list_empty(&sgt_info->va_kmapped->list)) {
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
-			dev_err(hyper_dmabuf_private.device,
+		if (list_empty(&exported->va_kmapped->list)) {
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
+			dev_err(hy_drv_priv->dev,
 				"no more dmabuf VA to be freed\n");
 			return -EFAULT;
 		}
 
-		va_kmapl = list_first_entry(&sgt_info->va_kmapped->list,
+		va_kmapl = list_first_entry(&exported->va_kmapped->list,
 					    struct kmap_vaddr_list, list);
 		if (!va_kmapl->vaddr) {
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
 			return PTR_ERR(va_kmapl->vaddr);
 		}
 
 		/* unmapping 1 page */
 		if (ops == HYPER_DMABUF_OPS_KUNMAP_ATOMIC)
-			dma_buf_kunmap_atomic(sgt_info->dma_buf, 1, va_kmapl->vaddr);
+			dma_buf_kunmap_atomic(exported->dma_buf, 1, va_kmapl->vaddr);
 		else
-			dma_buf_kunmap(sgt_info->dma_buf, 1, va_kmapl->vaddr);
+			dma_buf_kunmap(exported->dma_buf, 1, va_kmapl->vaddr);
 
 		list_del(&va_kmapl->list);
 		kfree(va_kmapl);
@@ -269,48 +267,48 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 	case HYPER_DMABUF_OPS_MMAP:
 		/* currently not supported: looking for a way to create
 		 * a dummy vma */
-		dev_warn(hyper_dmabuf_private.device,
-			 "dmabuf remote sync::sychronized mmap is not supported\n");
+		dev_warn(hy_drv_priv->dev,
+			 "remote sync::sychronized mmap is not supported\n");
 		break;
 
 	case HYPER_DMABUF_OPS_VMAP:
 		va_vmapl = kcalloc(1, sizeof(*va_vmapl), GFP_KERNEL);
 
 		if (!va_vmapl) {
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VMAP\n");
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_VMAP\n");
 			return -ENOMEM;
 		}
 
 		/* dummy vmapping */
-		va_vmapl->vaddr = dma_buf_vmap(sgt_info->dma_buf);
+		va_vmapl->vaddr = dma_buf_vmap(exported->dma_buf);
 
 		if (!va_vmapl->vaddr) {
 			kfree(va_vmapl);
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VMAP\n");
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_VMAP\n");
 			return -ENOMEM;
 		}
-		list_add(&va_vmapl->list, &sgt_info->va_vmapped->list);
+		list_add(&va_vmapl->list, &exported->va_vmapped->list);
 		break;
 
 	case HYPER_DMABUF_OPS_VUNMAP:
-		if (list_empty(&sgt_info->va_vmapped->list)) {
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VUNMAP\n");
-			dev_err(hyper_dmabuf_private.device,
+		if (list_empty(&exported->va_vmapped->list)) {
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_VUNMAP\n");
+			dev_err(hy_drv_priv->dev,
 				"no more dmabuf VA to be freed\n");
 			return -EFAULT;
 		}
-		va_vmapl = list_first_entry(&sgt_info->va_vmapped->list,
+		va_vmapl = list_first_entry(&exported->va_vmapped->list,
 					struct vmap_vaddr_list, list);
 		if (!va_vmapl || va_vmapl->vaddr == NULL) {
-			dev_err(hyper_dmabuf_private.device,
-				"dmabuf remote sync::error while processing HYPER_DMABUF_OPS_VUNMAP\n");
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_VUNMAP\n");
 			return -EFAULT;
 		}
 
-		dma_buf_vunmap(sgt_info->dma_buf, va_vmapl->vaddr);
+		dma_buf_vunmap(exported->dma_buf, va_vmapl->vaddr);
 
 		list_del(&va_vmapl->list);
 		kfree(va_vmapl);
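
For context, the importer-side trigger for hyper_dmabuf_remote_sync() is a
message carrying the buffer id and the op code, sent to the exporting
domain. A rough sketch of that sender, with the request-builder name,
operand layout and the HYPER_DMABUF_OPS_TO_SOURCE command assumed rather
than quoted from this patch:

	/* hypothetical importer-side sender: ship {hid, ops} to the
	 * exporter, whose message handler then calls
	 * hyper_dmabuf_remote_sync(hid, ops) as above
	 */
	static int sync_req_to_exporter(hyper_dmabuf_id_t hid, int ops)
	{
		struct hyper_dmabuf_req *req;
		int op[5] = { hid.id, hid.rng_key[0], hid.rng_key[1],
			      hid.rng_key[2], ops };
		int ret;

		req = kcalloc(1, sizeof(*req), GFP_KERNEL);
		if (!req)
			return -ENOMEM;

		hyper_dmabuf_create_req(req, HYPER_DMABUF_OPS_TO_SOURCE, op);
		ret = hy_drv_priv->backend_ops->send_req(
				HYPER_DMABUF_DOM_ID(hid), req, true);
		kfree(req);
		return ret;
	}
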
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
index 691a714..315c354 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
@@ -32,8 +32,6 @@
 #include <linux/slab.h>
 #include <linux/module.h>
 #include <linux/dma-buf.h>
-#include <xen/grant_table.h>
-#include <asm/xen/page.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_sgl_proc.h"
@@ -41,8 +39,6 @@
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_list.h"
 
-extern struct hyper_dmabuf_private hyper_dmabuf_private;
-
 #define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
 
 int dmabuf_refcount(struct dma_buf *dma_buf)
@@ -66,60 +62,68 @@ static int hyper_dmabuf_get_num_pgs(struct sg_table *sgt)
 	sgl = sgt->sgl;
 
 	length = sgl->length - PAGE_SIZE + sgl->offset;
-	num_pages += ((length + PAGE_SIZE - 1)/PAGE_SIZE); /* round-up */
+
+	/* round-up */
+	num_pages += ((length + PAGE_SIZE - 1)/PAGE_SIZE);
 
 	for (i = 1; i < sgt->nents; i++) {
 		sgl = sg_next(sgl);
-		num_pages += ((sgl->length + PAGE_SIZE - 1) / PAGE_SIZE); /* round-up */
+
+		/* round-up */
+	num_pages += ((sgl->length + PAGE_SIZE - 1) /
+		     PAGE_SIZE);
 	}
 
 	return num_pages;
 }
 
 /* extract pages directly from struct sg_table */
-struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
+struct pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
 {
-	struct hyper_dmabuf_pages_info *pinfo;
+	struct pages_info *pg_info;
 	int i, j, k;
 	int length;
 	struct scatterlist *sgl;
 
-	pinfo = kmalloc(sizeof(*pinfo), GFP_KERNEL);
-	if (!pinfo)
+	pg_info = kmalloc(sizeof(*pg_info), GFP_KERNEL);
+	if (!pg_info)
 		return NULL;
 
-	pinfo->pages = kmalloc(sizeof(struct page *)*hyper_dmabuf_get_num_pgs(sgt), GFP_KERNEL);
-	if (!pinfo->pages) {
-		kfree(pinfo);
+	pg_info->pgs = kmalloc(sizeof(struct page *) *
+			       hyper_dmabuf_get_num_pgs(sgt),
+			       GFP_KERNEL);
+
+	if (!pg_info->pgs) {
+		kfree(pg_info);
 		return NULL;
 	}
 
 	sgl = sgt->sgl;
 
-	pinfo->nents = 1;
-	pinfo->frst_ofst = sgl->offset;
-	pinfo->pages[0] = sg_page(sgl);
+	pg_info->nents = 1;
+	pg_info->frst_ofst = sgl->offset;
+	pg_info->pgs[0] = sg_page(sgl);
 	length = sgl->length - PAGE_SIZE + sgl->offset;
 	i = 1;
 
 	while (length > 0) {
-		pinfo->pages[i] = nth_page(sg_page(sgl), i);
+		pg_info->pgs[i] = nth_page(sg_page(sgl), i);
 		length -= PAGE_SIZE;
-		pinfo->nents++;
+		pg_info->nents++;
 		i++;
 	}
 
 	for (j = 1; j < sgt->nents; j++) {
 		sgl = sg_next(sgl);
-		pinfo->pages[i++] = sg_page(sgl);
+		pg_info->pgs[i++] = sg_page(sgl);
 		length = sgl->length - PAGE_SIZE;
-		pinfo->nents++;
+		pg_info->nents++;
 		k = 1;
 
 		while (length > 0) {
-			pinfo->pages[i++] = nth_page(sg_page(sgl), k++);
+			pg_info->pgs[i++] = nth_page(sg_page(sgl), k++);
 			length -= PAGE_SIZE;
-			pinfo->nents++;
+			pg_info->nents++;
 		}
 	}
 
@@ -127,13 +131,13 @@ struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
 	 * length at that point will be 0 or negative,
 	 * so to calculate last page size just add it to PAGE_SIZE
 	 */
-	pinfo->last_len = PAGE_SIZE + length;
+	pg_info->last_len = PAGE_SIZE + length;
 
-	return pinfo;
+	return pg_info;
 }
 
 /* create sg_table with given pages and other parameters */
-struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
+struct sg_table* hyper_dmabuf_create_sgt(struct page **pgs,
 					 int frst_ofst, int last_len, int nents)
 {
 	struct sg_table *sgt;
@@ -157,31 +161,32 @@ struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
 
 	sgl = sgt->sgl;
 
-	sg_set_page(sgl, pages[0], PAGE_SIZE-frst_ofst, frst_ofst);
+	sg_set_page(sgl, pgs[0], PAGE_SIZE-frst_ofst, frst_ofst);
 
 	for (i=1; i<nents-1; i++) {
 		sgl = sg_next(sgl);
-		sg_set_page(sgl, pages[i], PAGE_SIZE, 0);
+		sg_set_page(sgl, pgs[i], PAGE_SIZE, 0);
 	}
 
 	if (nents > 1) /* more than one page */ {
 		sgl = sg_next(sgl);
-		sg_set_page(sgl, pages[i], last_len, 0);
+		sg_set_page(sgl, pgs[i], last_len, 0);
 	}
 
 	return sgt;
 }
 
-int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int force)
+int hyper_dmabuf_cleanup_sgt_info(struct exported_sgt_info *exported,
+				  int force)
 {
 	struct sgt_list *sgtl;
 	struct attachment_list *attachl;
 	struct kmap_vaddr_list *va_kmapl;
 	struct vmap_vaddr_list *va_vmapl;
-	struct hyper_dmabuf_backend_ops *ops = hyper_dmabuf_private.backend_ops;
+	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
 
-	if (!sgt_info) {
-		dev_err(hyper_dmabuf_private.device, "invalid hyper_dmabuf_id\n");
+	if (!exported) {
+		dev_err(hy_drv_priv->dev, "invalid hyper_dmabuf_id\n");
 		return -EINVAL;
 	}
 
@@ -190,35 +195,37 @@ int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int fo
 	 * side.
 	 */
 	if (!force &&
-	    sgt_info->importer_exported) {
-		dev_warn(hyper_dmabuf_private.device, "dma-buf is used by importer\n");
+	    exported->active) {
+		dev_warn(hy_drv_priv->dev,
+			 "dma-buf is used by importer\n");
+
 		return -EPERM;
 	}
 
 	/* force == 1 is not recommended */
-	while (!list_empty(&sgt_info->va_kmapped->list)) {
-		va_kmapl = list_first_entry(&sgt_info->va_kmapped->list,
+	while (!list_empty(&exported->va_kmapped->list)) {
+		va_kmapl = list_first_entry(&exported->va_kmapped->list,
 					    struct kmap_vaddr_list, list);
 
-		dma_buf_kunmap(sgt_info->dma_buf, 1, va_kmapl->vaddr);
+		dma_buf_kunmap(exported->dma_buf, 1, va_kmapl->vaddr);
 		list_del(&va_kmapl->list);
 		kfree(va_kmapl);
 	}
 
-	while (!list_empty(&sgt_info->va_vmapped->list)) {
-		va_vmapl = list_first_entry(&sgt_info->va_vmapped->list,
+	while (!list_empty(&exported->va_vmapped->list)) {
+		va_vmapl = list_first_entry(&exported->va_vmapped->list,
 					    struct vmap_vaddr_list, list);
 
-		dma_buf_vunmap(sgt_info->dma_buf, va_vmapl->vaddr);
+		dma_buf_vunmap(exported->dma_buf, va_vmapl->vaddr);
 		list_del(&va_vmapl->list);
 		kfree(va_vmapl);
 	}
 
-	while (!list_empty(&sgt_info->active_sgts->list)) {
-		attachl = list_first_entry(&sgt_info->active_attached->list,
+	while (!list_empty(&exported->active_sgts->list)) {
+		attachl = list_first_entry(&exported->active_attached->list,
 					   struct attachment_list, list);
 
-		sgtl = list_first_entry(&sgt_info->active_sgts->list,
+		sgtl = list_first_entry(&exported->active_sgts->list,
 					struct sgt_list, list);
 
 		dma_buf_unmap_attachment(attachl->attach, sgtl->sgt,
@@ -227,35 +234,35 @@ int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int fo
 		kfree(sgtl);
 	}
 
-	while (!list_empty(&sgt_info->active_sgts->list)) {
-		attachl = list_first_entry(&sgt_info->active_attached->list,
+	while (!list_empty(&exported->active_sgts->list)) {
+		attachl = list_first_entry(&exported->active_attached->list,
 					   struct attachment_list, list);
 
-		dma_buf_detach(sgt_info->dma_buf, attachl->attach);
+		dma_buf_detach(exported->dma_buf, attachl->attach);
 		list_del(&attachl->list);
 		kfree(attachl);
 	}
 
 	/* Start cleanup of buffer in reverse order to exporting */
-	ops->unshare_pages(&sgt_info->refs_info, sgt_info->nents);
+	ops->unshare_pages(&exported->refs_info, exported->nents);
 
 	/* unmap dma-buf */
-	dma_buf_unmap_attachment(sgt_info->active_attached->attach,
-				 sgt_info->active_sgts->sgt,
+	dma_buf_unmap_attachment(exported->active_attached->attach,
+				 exported->active_sgts->sgt,
 				 DMA_BIDIRECTIONAL);
 
 	/* detatch dma-buf */
-	dma_buf_detach(sgt_info->dma_buf, sgt_info->active_attached->attach);
+	dma_buf_detach(exported->dma_buf, exported->active_attached->attach);
 
 	/* close connection to dma-buf completely */
-	dma_buf_put(sgt_info->dma_buf);
-	sgt_info->dma_buf = NULL;
-
-	kfree(sgt_info->active_sgts);
-	kfree(sgt_info->active_attached);
-	kfree(sgt_info->va_kmapped);
-	kfree(sgt_info->va_vmapped);
-	kfree(sgt_info->priv);
+	dma_buf_put(exported->dma_buf);
+	exported->dma_buf = NULL;
+
+	kfree(exported->active_sgts);
+	kfree(exported->active_attached);
+	kfree(exported->va_kmapped);
+	kfree(exported->va_vmapped);
+	kfree(exported->priv);
 
 	return 0;
 }
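
hyper_dmabuf_ext_pgs() and hyper_dmabuf_create_sgt() are meant to be
round-trip companions: the exporter flattens its sg_table into a plain
page array plus (frst_ofst, last_len, nents), and the importer rebuilds an
equivalent table from exactly those values. A minimal sketch of the round
trip, using only the helpers defined above (illustrative only):

	static struct sg_table *sgt_roundtrip(struct sg_table *sgt)
	{
		struct pages_info *pg_info = hyper_dmabuf_ext_pgs(sgt);
		struct sg_table *copy;

		if (!pg_info)
			return NULL;

		/* importer-side reconstruction from the flattened form */
		copy = hyper_dmabuf_create_sgt(pg_info->pgs,
					       pg_info->frst_ofst,
					       pg_info->last_len,
					       pg_info->nents);

		kfree(pg_info->pgs);
		kfree(pg_info);
		return copy;
	}
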
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
index 237ccf5..930bade 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
@@ -28,13 +28,15 @@
 int dmabuf_refcount(struct dma_buf *dma_buf);
 
 /* extract pages directly from struct sg_table */
-struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt);
+struct pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt);
 
 /* create sg_table with given pages and other parameters */
-struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
-                                int frst_ofst, int last_len, int nents);
+struct sg_table* hyper_dmabuf_create_sgt(struct page **pgs,
+					 int frst_ofst, int last_len,
+					 int nents);
 
-int hyper_dmabuf_cleanup_sgt_info(struct hyper_dmabuf_sgt_info *sgt_info, int force);
+int hyper_dmabuf_cleanup_sgt_info(struct exported_sgt_info *exported,
+				  int force);
 
 void hyper_dmabuf_free_sgt(struct sg_table *sgt);
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
index 6f929f2..8a612d1 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -50,24 +50,20 @@ struct vmap_vaddr_list {
 };
 
 /* Exporter builds pages_info before sharing pages */
-struct hyper_dmabuf_pages_info {
+struct pages_info {
         int frst_ofst; /* offset of data in the first page */
         int last_len; /* length of data in the last page */
         int nents; /* # of pages */
-        struct page **pages; /* pages that contains reference numbers of shared pages*/
+        struct page **pgs; /* pages that contain reference numbers of shared pages */
 };
 
 
-/* Both importer and exporter use this structure to point to sg lists
- *
- * Exporter stores references to sgt in a hash table
+/* Exporter stores references to sgt in a hash table
  * Exporter keeps these references for synchronization and tracking purposes
- *
- * Importer use this structure exporting to other drivers in the same domain
  */
-struct hyper_dmabuf_sgt_info {
+struct exported_sgt_info {
         hyper_dmabuf_id_t hid; /* unique id to reference dmabuf in remote domain */
-	int hyper_dmabuf_rdomain; /* domain importing this sgt */
+	int rdomid; /* domain importing this sgt */
 
 	struct dma_buf *dma_buf; /* needed to store this for freeing it later */
 	int nents;
@@ -79,10 +75,10 @@ struct hyper_dmabuf_sgt_info {
 	struct vmap_vaddr_list *va_vmapped;
 
 	bool valid; /* set to 0 once unexported. Needed to prevent further mapping by importer */
-	int importer_exported; /* exported locally on importer's side */
+	int active; /* locally shared on importer's side */
 	void *refs_info; /* hypervisor-specific info for the references */
-	struct delayed_work unexport_work;
-	bool unexport_scheduled;
+	struct delayed_work unexport;
+	bool unexport_sched;
 
 	/* owner of buffer
 	 * TODO: that is naive as buffer may be reused by
@@ -99,7 +95,7 @@ struct hyper_dmabuf_sgt_info {
 /* Importer store references (before mapping) on shared pages
  * Importer store these references in the table and map it in
  * its own memory map once userspace asks for reference for the buffer */
-struct hyper_dmabuf_imported_sgt_info {
+struct imported_sgt_info {
 	hyper_dmabuf_id_t hid; /* unique id for shared dmabuf imported */
 
 	int ref_handle; /* reference number of top level addressing page of shared pages */
@@ -112,7 +108,7 @@ struct hyper_dmabuf_imported_sgt_info {
 
 	void *refs_info;
 	bool valid;
-	int num_importers;
+	int importers;
 
 	size_t sz_priv;
 	char *priv; /* device specific info (e.g. image's meta info?) */
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index 920ecf4..f70b4ea 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -45,8 +45,6 @@ static int export_req_id = 0;
 
 struct hyper_dmabuf_req req_pending = {0};
 
-extern struct hyper_dmabuf_private hyper_dmabuf_private;
-
 extern int xenstored_ready;
 
 static void xen_get_domid_delayed(struct work_struct *unused);
@@ -62,7 +60,9 @@ static int xen_comm_setup_data_dir(void)
 {
 	char buf[255];
 
-	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf", hyper_dmabuf_private.domid);
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf",
+		hy_drv_priv->domid);
+
 	return xenbus_mkdir(XBT_NIL, buf, "");
 }
 
@@ -76,7 +76,9 @@ static int xen_comm_destroy_data_dir(void)
 {
 	char buf[255];
 
-	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf", hyper_dmabuf_private.domid);
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf",
+		hy_drv_priv->domid);
+
 	return xenbus_rm(XBT_NIL, buf, "");
 }
 
@@ -91,20 +93,26 @@ static int xen_comm_expose_ring_details(int domid, int rdomid,
 	char buf[255];
 	int ret;
 
-	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d", domid, rdomid);
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d",
+		domid, rdomid);
+
 	ret = xenbus_printf(XBT_NIL, buf, "grefid", "%d", gref);
 
 	if (ret) {
-		dev_err(hyper_dmabuf_private.device,
-			"Failed to write xenbus entry %s: %d\n", buf, ret);
+		dev_err(hy_drv_priv->dev,
+			"Failed to write xenbus entry %s: %d\n",
+			buf, ret);
+
 		return ret;
 	}
 
 	ret = xenbus_printf(XBT_NIL, buf, "port", "%d", port);
 
 	if (ret) {
-		dev_err(hyper_dmabuf_private.device,
-			"Failed to write xenbus entry %s: %d\n", buf, ret);
+		dev_err(hy_drv_priv->dev,
+			"Failed to write xenbus entry %s: %d\n",
+			buf, ret);
+
 		return ret;
 	}
 
@@ -114,25 +122,32 @@ static int xen_comm_expose_ring_details(int domid, int rdomid,
 /*
  * Queries details of ring exposed by remote domain.
  */
-static int xen_comm_get_ring_details(int domid, int rdomid, int *grefid, int *port)
+static int xen_comm_get_ring_details(int domid, int rdomid,
+				     int *grefid, int *port)
 {
 	char buf[255];
 	int ret;
 
-	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d", rdomid, domid);
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d",
+		rdomid, domid);
+
 	ret = xenbus_scanf(XBT_NIL, buf, "grefid", "%d", grefid);
 
 	if (ret <= 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"Failed to read xenbus entry %s: %d\n", buf, ret);
+		dev_err(hy_drv_priv->dev,
+			"Failed to read xenbus entry %s: %d\n",
+			buf, ret);
+
 		return ret;
 	}
 
 	ret = xenbus_scanf(XBT_NIL, buf, "port", "%d", port);
 
 	if (ret <= 0) {
-		dev_err(hyper_dmabuf_private.device,
-			"Failed to read xenbus entry %s: %d\n", buf, ret);
+		dev_err(hy_drv_priv->dev,
+			"Failed to read xenbus entry %s: %d\n",
+			buf, ret);
+
 		return ret;
 	}
 
@@ -146,9 +161,8 @@ void xen_get_domid_delayed(struct work_struct *unused)
 
 	/* scheduling another if driver is still running
 	 * and xenstore has not been initialized */
-	if (hyper_dmabuf_private.exited == false &&
-	    likely(xenstored_ready == 0)) {
-		dev_dbg(hyper_dmabuf_private.device,
+	if (likely(xenstored_ready == 0)) {
+		dev_dbg(hy_drv_priv->dev,
 			"Xenstore is not quite ready yet. Will retry it in 500ms\n");
 		schedule_delayed_work(&get_vm_id_work, msecs_to_jiffies(500));
 	} else {
@@ -163,14 +177,14 @@ void xen_get_domid_delayed(struct work_struct *unused)
 
 		/* try again since -1 is an invalid id for domain
 		 * (but only if driver is still running) */
-		if (hyper_dmabuf_private.exited == false && unlikely(domid == -1)) {
-			dev_dbg(hyper_dmabuf_private.device,
+		if (unlikely(domid == -1)) {
+			dev_dbg(hy_drv_priv->dev,
 				"domid==-1 is invalid. Will retry it in 500ms\n");
 			schedule_delayed_work(&get_vm_id_work, msecs_to_jiffies(500));
 		} else {
-			dev_info(hyper_dmabuf_private.device,
+			dev_info(hy_drv_priv->dev,
 				"Successfully retrieved domid from Xenstore:%d\n", domid);
-			hyper_dmabuf_private.domid = domid;
+			hy_drv_priv->domid = domid;
 		}
 	}
 }
@@ -232,28 +246,30 @@ static void remote_dom_exporter_watch_cb(struct xenbus_watch *watch,
 		return;
 	}
 
-	/* Check if we have importer ring for given remote domain alrady created */
+	/* Check if we already have an importer ring created for the
+	 * given remote domain */
+
 	ring_info = xen_comm_find_rx_ring(rdom);
 
-	/* Try to query remote domain exporter ring details - if that will
-	 * fail and we have importer ring that means remote domains has cleanup
-	 * its exporter ring, so our importer ring is no longer useful.
+	/* Try to query remote domain exporter ring details - if
+	 * that fails and we have an importer ring, it means the remote
+	 * domain has cleaned up its exporter ring, so our importer
+	 * ring is no longer useful.
 	 *
 	 * If querying details succeeds and we don't have an importer ring,
-	 * it means that remote domain has setup it for us and we should connect
-	 * to it.
+	 * it means that the remote domain has set it up for us and we
+	 * should connect to it.
 	 */
 
-
-	ret = xen_comm_get_ring_details(hyper_dmabuf_xen_get_domid(), rdom,
-					&grefid, &port);
+	ret = xen_comm_get_ring_details(hyper_dmabuf_xen_get_domid(),
+					rdom, &grefid, &port);
 
 	if (ring_info && ret != 0) {
-		dev_info(hyper_dmabuf_private.device,
+		dev_info(hy_drv_priv->dev,
 			 "Remote exporter closed, cleaninup importer\n");
 		hyper_dmabuf_xen_cleanup_rx_rbuf(rdom);
 	} else if (!ring_info && ret == 0) {
-		dev_info(hyper_dmabuf_private.device,
+		dev_info(hy_drv_priv->dev,
 			 "Registering importer\n");
 		hyper_dmabuf_xen_init_rx_rbuf(rdom);
 	}
@@ -274,7 +290,7 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 	ring_info = xen_comm_find_tx_ring(domid);
 
 	if (ring_info) {
-		dev_info(hyper_dmabuf_private.device,
+		dev_info(hy_drv_priv->dev,
 			 "tx ring ch to domid = %d already exist\ngref = %d, port = %d\n",
 		ring_info->rdomain, ring_info->gref_ring, ring_info->port);
 		return 0;
@@ -283,7 +299,7 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
 
 	if (!ring_info) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"No more spae left\n");
 		return -ENOMEM;
 	}
@@ -313,9 +329,9 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 	alloc_unbound.dom = DOMID_SELF;
 	alloc_unbound.remote_dom = domid;
 	ret = HYPERVISOR_event_channel_op(EVTCHNOP_alloc_unbound,
-					&alloc_unbound);
+					  &alloc_unbound);
 	if (ret) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"Cannot allocate event channel\n");
 		kfree(ring_info);
 		return -EIO;
@@ -327,7 +343,7 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 					NULL, (void*) ring_info);
 
 	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"Failed to setup event channel\n");
 		close.port = alloc_unbound.port;
 		HYPERVISOR_event_channel_op(EVTCHNOP_close, &close);
@@ -343,7 +359,7 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 
 	mutex_init(&ring_info->lock);
 
-	dev_dbg(hyper_dmabuf_private.device,
+	dev_dbg(hy_drv_priv->dev,
 		"%s: allocated eventchannel gref %d  port: %d  irq: %d\n",
 		__func__,
 		ring_info->gref_ring,
@@ -364,7 +380,7 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 	ring_info->watch.node = (const char*) kmalloc(sizeof(char) * 255, GFP_KERNEL);
 
 	if (!ring_info->watch.node) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"No more space left\n");
 		kfree(ring_info);
 		return -ENOMEM;
@@ -414,7 +430,8 @@ void hyper_dmabuf_xen_cleanup_tx_rbuf(int domid)
 	if (!rx_ring_info)
 		return;
 
-	BACK_RING_INIT(&(rx_ring_info->ring_back), rx_ring_info->ring_back.sring, PAGE_SIZE);
+	BACK_RING_INIT(&(rx_ring_info->ring_back), rx_ring_info->ring_back.sring,
+		       PAGE_SIZE);
 }
 
 /* importer needs to know about shared page and port numbers for
@@ -436,25 +453,28 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 	ring_info = xen_comm_find_rx_ring(domid);
 
 	if (ring_info) {
-		dev_info(hyper_dmabuf_private.device,
-			 "rx ring ch from domid = %d already exist\n", ring_info->sdomain);
+		dev_info(hy_drv_priv->dev,
+			 "rx ring ch from domid = %d already exist\n",
+			 ring_info->sdomain);
+
 		return 0;
 	}
 
-
 	ret = xen_comm_get_ring_details(hyper_dmabuf_xen_get_domid(), domid,
 					&rx_gref, &rx_port);
 
 	if (ret) {
-		dev_err(hyper_dmabuf_private.device,
-			"Domain %d has not created exporter ring for current domain\n", domid);
+		dev_err(hy_drv_priv->dev,
+			"Domain %d has not created exporter ring for current domain\n",
+			domid);
+
 		return ret;
 	}
 
 	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
 
 	if (!ring_info) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"No memory left to be allocated\n");
 		return -ENOMEM;
 	}
@@ -465,7 +485,7 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 	map_ops = kmalloc(sizeof(*map_ops), GFP_KERNEL);
 
 	if (!map_ops) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"No memory left to be allocated\n");
 		ret = -ENOMEM;
 		goto fail_no_map_ops;
@@ -476,21 +496,23 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 		goto fail_others;
 	}
 
-	gnttab_set_map_op(&map_ops[0], (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
+	gnttab_set_map_op(&map_ops[0],
+			  (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
 			  GNTMAP_host_map, rx_gref, domid);
 
-	gnttab_set_unmap_op(&ring_info->unmap_op, (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
+	gnttab_set_unmap_op(&ring_info->unmap_op,
+			    (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
 			    GNTMAP_host_map, -1);
 
 	ret = gnttab_map_refs(map_ops, NULL, &shared_ring, 1);
 	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device, "Cannot map ring\n");
+		dev_err(hy_drv_priv->dev, "Cannot map ring\n");
 		ret = -EFAULT;
 		goto fail_others;
 	}
 
 	if (map_ops[0].status) {
-		dev_err(hyper_dmabuf_private.device, "Ring mapping failed\n");
+		dev_err(hy_drv_priv->dev, "Ring mapping failed\n");
 		ret = -EFAULT;
 		goto fail_others;
 	} else {
@@ -512,7 +534,7 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 
 	ring_info->irq = ret;
 
-	dev_dbg(hyper_dmabuf_private.device,
+	dev_dbg(hy_drv_priv->dev,
 		"%s: bound to eventchannel port: %d  irq: %d\n", __func__,
 		rx_port,
 		ring_info->irq);
@@ -569,7 +591,9 @@ void hyper_dmabuf_xen_cleanup_rx_rbuf(int domid)
 		return;
 
 	SHARED_RING_INIT(tx_ring_info->ring_front.sring);
-	FRONT_RING_INIT(&(tx_ring_info->ring_front), tx_ring_info->ring_front.sring, PAGE_SIZE);
+	FRONT_RING_INIT(&(tx_ring_info->ring_front),
+			tx_ring_info->ring_front.sring,
+			PAGE_SIZE);
 }
 
 #ifdef CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD
@@ -587,20 +611,20 @@ static void xen_rx_ch_add_delayed(struct work_struct *unused)
 	char buf[128];
 	int i, dummy;
 
-	dev_dbg(hyper_dmabuf_private.device,
+	dev_dbg(hy_drv_priv->dev,
 		"Scanning new tx channel comming from another domain\n");
 
 	/* check other domains and schedule another work if driver
 	 * is still running and backend is valid
 	 */
-	if (hyper_dmabuf_private.exited == false &&
-	    hyper_dmabuf_private.backend_initialized == true) {
+	if (hy_drv_priv &&
+	    hy_drv_priv->initialized) {
 		for (i = DOMID_SCAN_START; i < DOMID_SCAN_END + 1; i++) {
-			if (i == hyper_dmabuf_private.domid)
+			if (i == hy_drv_priv->domid)
 				continue;
 
-			sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d", i,
-				hyper_dmabuf_private.domid);
+			sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d",
+				i, hy_drv_priv->domid);
 
 			ret = xenbus_scanf(XBT_NIL, buf, "port", "%d", &dummy);
 
@@ -611,13 +635,14 @@ static void xen_rx_ch_add_delayed(struct work_struct *unused)
 				ret = hyper_dmabuf_xen_init_rx_rbuf(i);
 
 				if (!ret)
-					dev_info(hyper_dmabuf_private.device,
+					dev_info(hy_drv_priv->dev,
 						 "Finishing up setting up rx channel for domain %d\n", i);
 			}
 		}
 
 		/* check every 10 seconds */
-		schedule_delayed_work(&xen_rx_ch_auto_add_work, msecs_to_jiffies(10000));
+		schedule_delayed_work(&xen_rx_ch_auto_add_work,
+				      msecs_to_jiffies(10000));
 	}
 }
 
@@ -630,21 +655,21 @@ void xen_init_comm_env_delayed(struct work_struct *unused)
 	/* scheduling another work if driver is still running
 	 * and xenstore hasn't been initialized or dom_id hasn't
 	 * been correctly retrieved. */
-	if (hyper_dmabuf_private.exited == false &&
-	    likely(xenstored_ready == 0 ||
-	    hyper_dmabuf_private.domid == -1)) {
-		dev_dbg(hyper_dmabuf_private.device,
-			"Xenstore is not ready yet. Re-try this again in 500ms\n");
-		schedule_delayed_work(&xen_init_comm_env_work, msecs_to_jiffies(500));
+	if (likely(xenstored_ready == 0 ||
+	    hy_drv_priv->domid == -1)) {
+		dev_dbg(hy_drv_priv->dev,
+			"Xenstore not ready Will re-try in 500ms\n");
+		schedule_delayed_work(&xen_init_comm_env_work,
+				      msecs_to_jiffies(500));
 	} else {
 		ret = xen_comm_setup_data_dir();
 		if (ret < 0) {
-			dev_err(hyper_dmabuf_private.device,
+			dev_err(hy_drv_priv->dev,
 				"Failed to create data dir in Xenstore\n");
 		} else {
-			dev_info(hyper_dmabuf_private.device,
-				"Successfully finished comm env initialization\n");
-			hyper_dmabuf_private.backend_initialized = true;
+			dev_info(hy_drv_priv->dev,
+				"Successfully finished comm env init\n");
+			hy_drv_priv->initialized = true;
 
 #ifdef CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD
 			xen_rx_ch_add_delayed(NULL);
@@ -659,20 +684,21 @@ int hyper_dmabuf_xen_init_comm_env(void)
 
 	xen_comm_ring_table_init();
 
-	if (unlikely(xenstored_ready == 0 || hyper_dmabuf_private.domid == -1)) {
+	if (unlikely(xenstored_ready == 0 ||
+	    hy_drv_priv->domid == -1)) {
 		xen_init_comm_env_delayed(NULL);
 		return -1;
 	}
 
 	ret = xen_comm_setup_data_dir();
 	if (ret < 0) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"Failed to create data dir in Xenstore\n");
 	} else {
-		dev_info(hyper_dmabuf_private.device,
+		dev_info(hy_drv_priv->dev,
 			"Successfully finished comm env initialization\n");
 
-		hyper_dmabuf_private.backend_initialized = true;
+		hy_drv_priv->initialized = true;
 	}
 
 	return ret;
@@ -691,7 +717,8 @@ void hyper_dmabuf_xen_destroy_comm(void)
 	xen_comm_destroy_data_dir();
 }
 
-int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
+int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req,
+			      int wait)
 {
 	struct xen_comm_front_ring *ring;
 	struct hyper_dmabuf_req *new_req;
@@ -706,22 +733,21 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
 	/* find a ring info for the channel */
 	ring_info = xen_comm_find_tx_ring(domid);
 	if (!ring_info) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"Can't find ring info for the channel\n");
 		return -ENOENT;
 	}
 
-	mutex_lock(&ring_info->lock);
 
 	ring = &ring_info->ring_front;
 
 	do_gettimeofday(&tv_start);
 
 	while (RING_FULL(ring)) {
-		dev_dbg(hyper_dmabuf_private.device, "RING_FULL\n");
+		dev_dbg(hy_drv_priv->dev, "RING_FULL\n");
 
 		if (timeout == 0) {
-			dev_err(hyper_dmabuf_private.device,
+			dev_err(hy_drv_priv->dev,
 				"Timeout while waiting for an entry in the ring\n");
 			return -EIO;
 		}
@@ -731,15 +757,17 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
 
 	timeout = 1000;
 
+	mutex_lock(&ring_info->lock);
+
 	new_req = RING_GET_REQUEST(ring, ring->req_prod_pvt);
 	if (!new_req) {
 		mutex_unlock(&ring_info->lock);
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"NULL REQUEST\n");
 		return -EIO;
 	}
 
-	req->request_id = xen_comm_next_req_id();
+	req->req_id = xen_comm_next_req_id();
 
 	/* update req_pending with current request */
 	memcpy(&req_pending, req, sizeof(req_pending));
@@ -756,7 +784,7 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
 
 	if (wait) {
 		while (timeout--) {
-			if (req_pending.status !=
+			if (req_pending.stat !=
 			    HYPER_DMABUF_REQ_NOT_RESPONDED)
 				break;
 			usleep_range(100, 120);
@@ -764,7 +792,7 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
 
 		if (timeout < 0) {
 			mutex_unlock(&ring_info->lock);
-			dev_err(hyper_dmabuf_private.device, "request timed-out\n");
+			dev_err(hy_drv_priv->dev, "request timed-out\n");
 			return -EBUSY;
 		}
 
@@ -781,10 +809,8 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait)
 		}
 
 		if (tv_diff.tv_sec != 0 && tv_diff.tv_usec > 16000)
-			dev_dbg(hyper_dmabuf_private.device, "send_req:time diff: %ld sec, %ld usec\n",
+			dev_dbg(hy_drv_priv->dev, "send_req:time diff: %ld sec, %ld usec\n",
 				tv_diff.tv_sec, tv_diff.tv_usec);
-
-		return req_pending.status;
 	}
 
 	mutex_unlock(&ring_info->lock);
@@ -808,7 +834,7 @@ static irqreturn_t back_ring_isr(int irq, void *info)
 	ring_info = (struct xen_comm_rx_ring_info *)info;
 	ring = &ring_info->ring_back;
 
-	dev_dbg(hyper_dmabuf_private.device, "%s\n", __func__);
+	dev_dbg(hy_drv_priv->dev, "%s\n", __func__);
 
 	do {
 		rc = ring->req_cons;
@@ -828,13 +854,13 @@ static irqreturn_t back_ring_isr(int irq, void *info)
 				 * the requester
 				 */
 				memcpy(&resp, &req, sizeof(resp));
-				memcpy(RING_GET_RESPONSE(ring, ring->rsp_prod_pvt), &resp,
-							sizeof(resp));
+				memcpy(RING_GET_RESPONSE(ring, ring->rsp_prod_pvt),
+							 &resp, sizeof(resp));
 				ring->rsp_prod_pvt++;
 
-				dev_dbg(hyper_dmabuf_private.device,
+				dev_dbg(hy_drv_priv->dev,
 					"sending response to exporter for request id:%d\n",
-					resp.response_id);
+					resp.resp_id);
 
 				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(ring, notify);
 
@@ -864,7 +890,7 @@ static irqreturn_t front_ring_isr(int irq, void *info)
 	ring_info = (struct xen_comm_tx_ring_info *)info;
 	ring = &ring_info->ring_front;
 
-	dev_dbg(hyper_dmabuf_private.device, "%s\n", __func__);
+	dev_dbg(hy_drv_priv->dev, "%s\n", __func__);
 
 	do {
 		more_to_do = 0;
@@ -876,33 +902,33 @@ static irqreturn_t front_ring_isr(int irq, void *info)
 			 * in the response
 			 */
 
-			dev_dbg(hyper_dmabuf_private.device,
+			dev_dbg(hy_drv_priv->dev,
 				"getting response from importer\n");
 
-			if (req_pending.request_id == resp->response_id) {
-				req_pending.status = resp->status;
+			if (req_pending.req_id == resp->resp_id) {
+				req_pending.stat = resp->stat;
 			}
 
-			if (resp->status == HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP) {
+			if (resp->stat == HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP) {
 				/* parsing response */
 				ret = hyper_dmabuf_msg_parse(ring_info->rdomain,
 							(struct hyper_dmabuf_req *)resp);
 
 				if (ret < 0) {
-					dev_err(hyper_dmabuf_private.device,
+					dev_err(hy_drv_priv->dev,
 						"getting error while parsing response\n");
 				}
-			} else if (resp->status == HYPER_DMABUF_REQ_PROCESSED) {
+			} else if (resp->stat == HYPER_DMABUF_REQ_PROCESSED) {
 				/* for debugging dma_buf remote synchronization */
-				dev_dbg(hyper_dmabuf_private.device,
-					"original request = 0x%x\n", resp->command);
-				dev_dbg(hyper_dmabuf_private.device,
+				dev_dbg(hy_drv_priv->dev,
+					"original request = 0x%x\n", resp->cmd);
+				dev_dbg(hy_drv_priv->dev,
 					"Just got HYPER_DMABUF_REQ_PROCESSED\n");
-			} else if (resp->status == HYPER_DMABUF_REQ_ERROR) {
+			} else if (resp->stat == HYPER_DMABUF_REQ_ERROR) {
 				/* for debugging dma_buf remote synchronization */
-				dev_dbg(hyper_dmabuf_private.device,
-					"original request = 0x%x\n", resp->command);
-				dev_dbg(hyper_dmabuf_private.device,
+				dev_dbg(hy_drv_priv->dev,
+					"original request = 0x%x\n", resp->cmd);
+				dev_dbg(hy_drv_priv->dev,
 					"Just got HYPER_DMABUF_REQ_ERROR\n");
 			}
 		}
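
A short caller-side sketch of the synchronous request path above: the
sender fills req_pending, pushes the request onto the front ring and, when
wait is set, polls req_pending.stat until front_ring_isr() copies in the
matching response. Command value and request-builder name are assumed, not
quoted from this patch:

	struct hyper_dmabuf_req req;
	int op[4] = { hid.id, hid.rng_key[0],
		      hid.rng_key[1], hid.rng_key[2] };
	int ret;

	hyper_dmabuf_create_req(&req, HYPER_DMABUF_NOTIFY_UNEXPORT, op);

	ret = hyper_dmabuf_xen_send_req(rdomid, &req, true /* wait */);
	if (ret < 0) {
		/* -ENOENT: no tx ring for rdomid
		 * -EIO:    ring stayed full / NULL request slot
		 * -EBUSY:  importer did not respond in time
		 */
		return ret;
	}
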
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
index 4708b49..7a8ec73 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
@@ -38,8 +38,6 @@
 #include "hyper_dmabuf_xen_comm.h"
 #include "hyper_dmabuf_xen_comm_list.h"
 
-extern struct hyper_dmabuf_private hyper_dmabuf_private;
-
 DECLARE_HASHTABLE(xen_comm_tx_ring_hash, MAX_ENTRY_TX_RING);
 DECLARE_HASHTABLE(xen_comm_rx_ring_hash, MAX_ENTRY_RX_RING);
 
@@ -56,7 +54,7 @@ int xen_comm_add_tx_ring(struct xen_comm_tx_ring_info *ring_info)
 	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
 
 	if (!info_entry) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"No memory left to be allocated\n");
 		return -ENOMEM;
 	}
@@ -76,7 +74,7 @@ int xen_comm_add_rx_ring(struct xen_comm_rx_ring_info *ring_info)
 	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
 
 	if (!info_entry) {
-		dev_err(hyper_dmabuf_private.device,
+		dev_err(hy_drv_priv->dev,
 			"No memory left to be allocated\n");
 		return -ENOMEM;
 	}
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
index 908eda8..424417d 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
@@ -36,8 +36,6 @@
 
 #define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
 
-extern struct hyper_dmabuf_private hyper_dmabuf_private;
-
 /*
  * Creates 2 level page directory structure for referencing shared pages.
  * Top level page is a single page that contains up to 1024 refids that
@@ -98,7 +96,7 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 	sh_pages_info = kmalloc(sizeof(*sh_pages_info), GFP_KERNEL);
 
 	if (!sh_pages_info) {
-		dev_err(hyper_dmabuf_private.device, "No more space left\n");
+		dev_err(hy_drv_priv->dev, "No more space left\n");
 		return -ENOMEM;
 	}
 
@@ -107,10 +105,10 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 	/* share data pages in readonly mode for security */
 	for (i=0; i<nents; i++) {
 		lvl2_table[i] = gnttab_grant_foreign_access(domid,
-							    pfn_to_mfn(page_to_pfn(pages[i])),
-							    true /* read-only from remote domain */);
+					pfn_to_mfn(page_to_pfn(pages[i])),
+					true /* read-only from remote domain */);
 		if (lvl2_table[i] == -ENOSPC) {
-			dev_err(hyper_dmabuf_private.device, "No more space left in grant table\n");
+			dev_err(hy_drv_priv->dev, "No more space left in grant table\n");
 
 			/* Unshare all already shared pages for lvl2 */
 			while(i--) {
@@ -124,10 +122,11 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 	/* Share 2nd level addressing pages in readonly mode*/
 	for (i=0; i< n_lvl2_grefs; i++) {
 		lvl3_table[i] = gnttab_grant_foreign_access(domid,
-							    virt_to_mfn((unsigned long)lvl2_table+i*PAGE_SIZE ),
-							    true);
+					virt_to_mfn((unsigned long)lvl2_table+i*PAGE_SIZE),
+					true);
+
 		if (lvl3_table[i] == -ENOSPC) {
-			dev_err(hyper_dmabuf_private.device, "No more space left in grant table\n");
+			dev_err(hy_drv_priv->dev, "No more space left in grant table\n");
 
 			/* Unshare all already shared pages for lvl3 */
 			while(i--) {
@@ -147,11 +146,11 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 
 	/* Share lvl3_table in readonly mode*/
 	lvl3_gref = gnttab_grant_foreign_access(domid,
-						virt_to_mfn((unsigned long)lvl3_table),
-						true);
+			virt_to_mfn((unsigned long)lvl3_table),
+			true);
 
 	if (lvl3_gref == -ENOSPC) {
-		dev_err(hyper_dmabuf_private.device, "No more space left in grant table\n");
+		dev_err(hy_drv_priv->dev, "No more space left in grant table\n");
 
 		/* Unshare all pages for lvl3 */
 		while(i--) {
@@ -178,7 +177,7 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 	/* Store exported pages refid to be unshared later */
 	sh_pages_info->lvl3_gref = lvl3_gref;
 
-	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
 	return lvl3_gref;
 
 err_cleanup:
@@ -190,16 +189,17 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 
 int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents) {
 	struct xen_shared_pages_info *sh_pages_info;
-	int n_lvl2_grefs = (nents/REFS_PER_PAGE + ((nents % REFS_PER_PAGE) ? 1: 0));
+	int n_lvl2_grefs = (nents/REFS_PER_PAGE +
+			    ((nents % REFS_PER_PAGE) ? 1 : 0));
 	int i;
 
-	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
+	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
 	sh_pages_info = (struct xen_shared_pages_info *)(*refs_info);
 
 	if (sh_pages_info->lvl3_table == NULL ||
 	    sh_pages_info->lvl2_table ==  NULL ||
 	    sh_pages_info->lvl3_gref == -1) {
-		dev_warn(hyper_dmabuf_private.device,
+		dev_warn(hy_drv_priv->dev,
 			 "gref table for hyper_dmabuf already cleaned up\n");
 		return 0;
 	}
@@ -207,7 +207,7 @@ int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents) {
 	/* End foreign access for data pages, but do not free them */
 	for (i = 0; i < nents; i++) {
 		if (gnttab_query_foreign_access(sh_pages_info->lvl2_table[i])) {
-			dev_warn(hyper_dmabuf_private.device, "refid not shared !!\n");
+			dev_warn(hy_drv_priv->dev, "refid not shared !!\n");
 		}
 		gnttab_end_foreign_access_ref(sh_pages_info->lvl2_table[i], 0);
 		gnttab_free_grant_reference(sh_pages_info->lvl2_table[i]);
@@ -216,17 +216,17 @@ int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents) {
 	/* End foreign access for 2nd level addressing pages */
 	for (i = 0; i < n_lvl2_grefs; i++) {
 		if (gnttab_query_foreign_access(sh_pages_info->lvl3_table[i])) {
-			dev_warn(hyper_dmabuf_private.device, "refid not shared !!\n");
+			dev_warn(hy_drv_priv->dev, "refid not shared !!\n");
 		}
 		if (!gnttab_end_foreign_access_ref(sh_pages_info->lvl3_table[i], 1)) {
-			dev_warn(hyper_dmabuf_private.device, "refid still in use!!!\n");
+			dev_warn(hy_drv_priv->dev, "refid still in use!!!\n");
 		}
 		gnttab_free_grant_reference(sh_pages_info->lvl3_table[i]);
 	}
 
 	/* End foreign access for top level addressing page */
 	if (gnttab_query_foreign_access(sh_pages_info->lvl3_gref)) {
-		dev_warn(hyper_dmabuf_private.device, "gref not shared !!\n");
+		dev_warn(hy_drv_priv->dev, "gref not shared !!\n");
 	}
 
 	gnttab_end_foreign_access_ref(sh_pages_info->lvl3_gref, 1);
@@ -242,7 +242,7 @@ int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents) {
 	kfree(sh_pages_info);
 	sh_pages_info = NULL;
 
-	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
 	return 0;
 }
 
@@ -270,27 +270,33 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 
 	/* # of grefs in the last page of lvl2 table */
 	int nents_last = (nents - 1) % REFS_PER_PAGE + 1;
-	int n_lvl2_grefs = (nents / REFS_PER_PAGE) + ((nents_last > 0) ? 1 : 0) -
+	int n_lvl2_grefs = (nents / REFS_PER_PAGE) +
+			   ((nents_last > 0) ? 1 : 0) -
 			   (nents_last == REFS_PER_PAGE);
 	int i, j, k;
 
-	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
+	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
 
 	sh_pages_info = kmalloc(sizeof(*sh_pages_info), GFP_KERNEL);
 	*refs_info = (void *) sh_pages_info;
 
-	lvl2_table_pages = kcalloc(sizeof(struct page*), n_lvl2_grefs, GFP_KERNEL);
+	lvl2_table_pages = kcalloc(sizeof(struct page*), n_lvl2_grefs,
+				   GFP_KERNEL);
+
 	data_pages = kcalloc(sizeof(struct page*), nents, GFP_KERNEL);
 
-	lvl2_map_ops = kcalloc(sizeof(*lvl2_map_ops), n_lvl2_grefs, GFP_KERNEL);
-	lvl2_unmap_ops = kcalloc(sizeof(*lvl2_unmap_ops), n_lvl2_grefs, GFP_KERNEL);
+	lvl2_map_ops = kcalloc(sizeof(*lvl2_map_ops), n_lvl2_grefs,
+			       GFP_KERNEL);
+
+	lvl2_unmap_ops = kcalloc(sizeof(*lvl2_unmap_ops), n_lvl2_grefs,
+				 GFP_KERNEL);
 
 	data_map_ops = kcalloc(sizeof(*data_map_ops), nents, GFP_KERNEL);
 	data_unmap_ops = kcalloc(sizeof(*data_unmap_ops), nents, GFP_KERNEL);
 
 	/* Map top level addressing page */
 	if (gnttab_alloc_pages(1, &lvl3_table_page)) {
-		dev_err(hyper_dmabuf_private.device, "Cannot allocate pages\n");
+		dev_err(hy_drv_priv->dev, "Cannot allocate pages\n");
 		return NULL;
 	}
 
@@ -304,13 +310,16 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 			    GNTMAP_host_map | GNTMAP_readonly, -1);
 
 	if (gnttab_map_refs(&lvl3_map_ops, NULL, &lvl3_table_page, 1)) {
-		dev_err(hyper_dmabuf_private.device, "HYPERVISOR map grant ref failed");
+		dev_err(hy_drv_priv->dev,
+			"HYPERVISOR map grant ref failed");
 		return NULL;
 	}
 
 	if (lvl3_map_ops.status) {
-		dev_err(hyper_dmabuf_private.device, "HYPERVISOR map grant ref failed status = %d",
+		dev_err(hy_drv_priv->dev,
+			"HYPERVISOR map grant ref failed status = %d",
 			lvl3_map_ops.status);
+
 		goto error_cleanup_lvl3;
 	} else {
 		lvl3_unmap_ops.handle = lvl3_map_ops.handle;
@@ -318,35 +327,43 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 
 	/* Map all second level pages */
 	if (gnttab_alloc_pages(n_lvl2_grefs, lvl2_table_pages)) {
-		dev_err(hyper_dmabuf_private.device, "Cannot allocate pages\n");
+		dev_err(hy_drv_priv->dev, "Cannot allocate pages\n");
 		goto error_cleanup_lvl3;
 	}
 
 	for (i = 0; i < n_lvl2_grefs; i++) {
 		lvl2_table = (grant_ref_t *)pfn_to_kaddr(page_to_pfn(lvl2_table_pages[i]));
-		gnttab_set_map_op(&lvl2_map_ops[i], (unsigned long)lvl2_table, GNTMAP_host_map | GNTMAP_readonly,
+		gnttab_set_map_op(&lvl2_map_ops[i],
+				  (unsigned long)lvl2_table, GNTMAP_host_map |
+				  GNTMAP_readonly,
 				  lvl3_table[i], domid);
-		gnttab_set_unmap_op(&lvl2_unmap_ops[i], (unsigned long)lvl2_table, GNTMAP_host_map | GNTMAP_readonly, -1);
+		gnttab_set_unmap_op(&lvl2_unmap_ops[i],
+				    (unsigned long)lvl2_table, GNTMAP_host_map |
+				    GNTMAP_readonly, -1);
 	}
 
 	/* Unmap top level page, as it won't be needed any longer */
-	if (gnttab_unmap_refs(&lvl3_unmap_ops, NULL, &lvl3_table_page, 1)) {
-		dev_err(hyper_dmabuf_private.device, "xen: cannot unmap top level page\n");
+	if (gnttab_unmap_refs(&lvl3_unmap_ops, NULL,
+			      &lvl3_table_page, 1)) {
+		dev_err(hy_drv_priv->dev,
+			"xen: cannot unmap top level page\n");
 		return NULL;
 	} else {
 		/* Mark that page was unmapped */
 		lvl3_unmap_ops.handle = -1;
 	}
 
-	if (gnttab_map_refs(lvl2_map_ops, NULL, lvl2_table_pages, n_lvl2_grefs)) {
-		dev_err(hyper_dmabuf_private.device, "HYPERVISOR map grant ref failed");
+	if (gnttab_map_refs(lvl2_map_ops, NULL,
+			    lvl2_table_pages, n_lvl2_grefs)) {
+		dev_err(hy_drv_priv->dev,
+			"HYPERVISOR map grant ref failed");
 		return NULL;
 	}
 
 	/* Checks if pages were mapped correctly */
 	for (i = 0; i < n_lvl2_grefs; i++) {
 		if (lvl2_map_ops[i].status) {
-			dev_err(hyper_dmabuf_private.device,
+			dev_err(hy_drv_priv->dev,
 				"HYPERVISOR map grant ref failed status = %d",
 				lvl2_map_ops[i].status);
 			goto error_cleanup_lvl2;
@@ -356,7 +373,8 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 	}
 
 	if (gnttab_alloc_pages(nents, data_pages)) {
-		dev_err(hyper_dmabuf_private.device, "Cannot allocate pages\n");
+		dev_err(hy_drv_priv->dev,
+			"Cannot allocate pages\n");
 		goto error_cleanup_lvl2;
 	}
 
@@ -366,13 +384,13 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 		lvl2_table = pfn_to_kaddr(page_to_pfn(lvl2_table_pages[i]));
 		for (j = 0; j < REFS_PER_PAGE; j++) {
 			gnttab_set_map_op(&data_map_ops[k],
-					  (unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
-					  GNTMAP_host_map | GNTMAP_readonly,
-					  lvl2_table[j], domid);
+				(unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
+				GNTMAP_host_map | GNTMAP_readonly,
+				lvl2_table[j], domid);
 
 			gnttab_set_unmap_op(&data_unmap_ops[k],
-					    (unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
-					    GNTMAP_host_map | GNTMAP_readonly, -1);
+				(unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
+				GNTMAP_host_map | GNTMAP_readonly, -1);
 			k++;
 		}
 	}
@@ -382,25 +400,29 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 
 	for (j = 0; j < nents_last; j++) {
 		gnttab_set_map_op(&data_map_ops[k],
-				  (unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
-				  GNTMAP_host_map | GNTMAP_readonly,
-				  lvl2_table[j], domid);
+			(unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
+			GNTMAP_host_map | GNTMAP_readonly,
+			lvl2_table[j], domid);
 
 		gnttab_set_unmap_op(&data_unmap_ops[k],
-				    (unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
-				    GNTMAP_host_map | GNTMAP_readonly, -1);
+			(unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
+			GNTMAP_host_map | GNTMAP_readonly, -1);
 		k++;
 	}
 
-	if (gnttab_map_refs(data_map_ops, NULL, data_pages, nents)) {
-		dev_err(hyper_dmabuf_private.device, "HYPERVISOR map grant ref failed\n");
+	if (gnttab_map_refs(data_map_ops, NULL,
+			    data_pages, nents)) {
+		dev_err(hy_drv_priv->dev,
+			"HYPERVISOR map grant ref failed\n");
 		return NULL;
 	}
 
 	/* unmapping lvl2 table pages */
-	if (gnttab_unmap_refs(lvl2_unmap_ops, NULL, lvl2_table_pages,
+	if (gnttab_unmap_refs(lvl2_unmap_ops,
+			      NULL, lvl2_table_pages,
 			      n_lvl2_grefs)) {
-		dev_err(hyper_dmabuf_private.device, "Cannot unmap 2nd level refs\n");
+		dev_err(hy_drv_priv->dev,
+			"Cannot unmap 2nd level refs\n");
 		return NULL;
 	} else {
 		/* Mark that pages were unmapped */
@@ -411,7 +433,7 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 
 	for (i = 0; i < nents; i++) {
 		if (data_map_ops[i].status) {
-			dev_err(hyper_dmabuf_private.device,
+			dev_err(hy_drv_priv->dev,
 				"HYPERVISOR map grant ref failed status = %d\n",
 				data_map_ops[i].status);
 			goto error_cleanup_data;
@@ -431,7 +453,7 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 	kfree(lvl2_unmap_ops);
 	kfree(data_map_ops);
 
-	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
 	return data_pages;
 
 error_cleanup_data:
@@ -442,13 +464,14 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 
 error_cleanup_lvl2:
 	if (lvl2_unmap_ops[0].handle != -1)
-		gnttab_unmap_refs(lvl2_unmap_ops, NULL, lvl2_table_pages,
-				  n_lvl2_grefs);
+		gnttab_unmap_refs(lvl2_unmap_ops, NULL,
+				  lvl2_table_pages, n_lvl2_grefs);
 	gnttab_free_pages(n_lvl2_grefs, lvl2_table_pages);
 
 error_cleanup_lvl3:
 	if (lvl3_unmap_ops.handle != -1)
-		gnttab_unmap_refs(&lvl3_unmap_ops, NULL, &lvl3_table_page, 1);
+		gnttab_unmap_refs(&lvl3_unmap_ops, NULL,
+				  &lvl3_table_page, 1);
 	gnttab_free_pages(1, &lvl3_table_page);
 
 	kfree(lvl2_table_pages);
@@ -463,20 +486,20 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 int hyper_dmabuf_xen_unmap_shared_pages(void **refs_info, int nents) {
 	struct xen_shared_pages_info *sh_pages_info;
 
-	dev_dbg(hyper_dmabuf_private.device, "%s entry\n", __func__);
+	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
 
 	sh_pages_info = (struct xen_shared_pages_info *)(*refs_info);
 
 	if (sh_pages_info->unmap_ops == NULL ||
 	    sh_pages_info->data_pages == NULL) {
-		dev_warn(hyper_dmabuf_private.device,
-			 "Imported pages already cleaned up or buffer was not imported yet\n");
+		dev_warn(hy_drv_priv->dev,
+			 "pages already cleaned up or buffer not imported yet\n");
 		return 0;
 	}
 
 	if (gnttab_unmap_refs(sh_pages_info->unmap_ops, NULL,
 			      sh_pages_info->data_pages, nents) ) {
-		dev_err(hyper_dmabuf_private.device, "Cannot unmap data pages\n");
+		dev_err(hy_drv_priv->dev, "Cannot unmap data pages\n");
 		return -EFAULT;
 	}
 
@@ -489,6 +512,6 @@ int hyper_dmabuf_xen_unmap_shared_pages(void **refs_info, int nents) {
 	kfree(sh_pages_info);
 	sh_pages_info = NULL;
 
-	dev_dbg(hyper_dmabuf_private.device, "%s exit\n", __func__);
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
 	return 0;
 }
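
For reference, the layout math used by the sharing code above packs
REFS_PER_PAGE grant references into each level-2 page, with the single
level-3 page holding references to the level-2 pages. A minimal
standalone sketch of that arithmetic (hypothetical values; not part of
the patch):

  #include <stdio.h>

  #define PAGE_SIZE     4096
  /* grant_ref_t is a 32-bit value, so one page holds 1024 refs */
  #define REFS_PER_PAGE (PAGE_SIZE / 4)

  int main(void)
  {
          int nents = 2500; /* assumed buffer size, in pages */

          /* # of grefs in the last page of the lvl2 table */
          int nents_last = (nents - 1) % REFS_PER_PAGE + 1;
          /* # of lvl2 pages needed to hold nents grefs */
          int n_lvl2_grefs = nents / REFS_PER_PAGE +
                             (nents_last > 0 ? 1 : 0) -
                             (nents_last == REFS_PER_PAGE);

          /* prints: nents=2500 lvl2 pages=3 (1024 + 1024 + 452 refs) */
          printf("nents=%d lvl2 pages=%d last holds %d refs\n",
                 nents, n_lvl2_grefs, nents_last);
          return 0;
  }
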
-- 
2.7.4



* [RFC PATCH 50/60] hyper_dmabuf: fix styling err and warns caught by checkpatch.pl
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:30 ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

Fix all coding-style errors and warnings reported by checkpatch.pl.
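
A typical instance of the changes below, shown schematically (the
before/after pair is illustrative, not lifted verbatim from any one
file): spaces around operators in loops, and allocation-failure
messages dropped since the allocator already warns on failure.

  /* before: checkpatch errors on operator spacing and on the
   * redundant out-of-memory message
   */
  for (i=0; i<3; i++)
          op[i+1] = hid.rng_key[i];

  req = kcalloc(1, sizeof(*req), GFP_KERNEL);
  if (!req) {
          dev_err(hy_drv_priv->dev, "No memory left to be allocated\n");
          return -ENOMEM;
  }

  /* after */
  for (i = 0; i < 3; i++)
          op[i+1] = hid.rng_key[i];

  req = kcalloc(1, sizeof(*req), GFP_KERNEL);
  if (!req)
          return -ENOMEM;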

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        |  53 ++--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        |   6 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c      |  12 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c         |  24 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h         |   4 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 308 +++++++++++----------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h      |   5 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        | 132 ++++-----
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h        |   4 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c        |  58 ++--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c      | 236 ++++++++--------
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c    |  81 +++---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c   |  15 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h   |   2 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h     |  78 ++++--
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   | 154 +++++------
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h   |  21 +-
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c  |  21 +-
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h  |  16 +-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h    |  19 +-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c    | 128 +++++----
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h    |  15 +-
 include/uapi/xen/hyper_dmabuf.h                    |  26 +-
 23 files changed, 739 insertions(+), 679 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 525ee78..023d7f4 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -44,7 +44,6 @@
 
 #ifdef CONFIG_HYPER_DMABUF_XEN
 #include "xen/hyper_dmabuf_xen_drv.h"
-extern struct hyper_dmabuf_backend_ops xen_backend_ops;
 #endif
 
 MODULE_LICENSE("GPL and additional rights");
@@ -52,14 +51,11 @@ MODULE_AUTHOR("Intel Corporation");
 
 struct hyper_dmabuf_private *hy_drv_priv;
 
-long hyper_dmabuf_ioctl(struct file *filp,
-			unsigned int cmd, unsigned long param);
-
-static void hyper_dmabuf_force_free(struct exported_sgt_info* exported,
-			            void *attr)
+static void hyper_dmabuf_force_free(struct exported_sgt_info *exported,
+				    void *attr)
 {
 	struct ioctl_hyper_dmabuf_unexport unexport_attr;
-	struct file *filp = (struct file*) attr;
+	struct file *filp = (struct file *)attr;
 
 	if (!filp || !exported)
 		return;
@@ -97,7 +93,8 @@ int hyper_dmabuf_release(struct inode *inode, struct file *filp)
 
 #ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
 
-unsigned int hyper_dmabuf_event_poll(struct file *filp, struct poll_table_struct *wait)
+unsigned int hyper_dmabuf_event_poll(struct file *filp,
+				     struct poll_table_struct *wait)
 {
 	unsigned int mask = 0;
 
@@ -153,15 +150,17 @@ ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
 
 			mutex_unlock(&hy_drv_priv->event_read_lock);
 			ret = wait_event_interruptible(hy_drv_priv->event_wait,
-						       !list_empty(&hy_drv_priv->event_list));
+				  !list_empty(&hy_drv_priv->event_list));
 
 			if (ret == 0)
-				ret = mutex_lock_interruptible(&hy_drv_priv->event_read_lock);
+				ret = mutex_lock_interruptible(
+					&hy_drv_priv->event_read_lock);
 
 			if (ret)
 				return ret;
 		} else {
-			unsigned length = (sizeof(struct hyper_dmabuf_event_hdr) + e->event_data.hdr.size);
+			unsigned int length = (sizeof(e->event_data.hdr) +
+						      e->event_data.hdr.size);
 
 			if (length > count - ret) {
 put_back_event:
@@ -172,20 +171,22 @@ ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
 			}
 
 			if (copy_to_user(buffer + ret, &e->event_data.hdr,
-					 sizeof(struct hyper_dmabuf_event_hdr))) {
+					 sizeof(e->event_data.hdr))) {
 				if (ret == 0)
 					ret = -EFAULT;
 
 				goto put_back_event;
 			}
 
-			ret += sizeof(struct hyper_dmabuf_event_hdr);
+			ret += sizeof(e->event_data.hdr);
 
-			if (copy_to_user(buffer + ret, e->event_data.data, e->event_data.hdr.size)) {
+			if (copy_to_user(buffer + ret, e->event_data.data,
+					 e->event_data.hdr.size)) {
 				/* error while copying void *data */
 
 				struct hyper_dmabuf_event_hdr dummy_hdr = {0};
-				ret -= sizeof(struct hyper_dmabuf_event_hdr);
+
+				ret -= sizeof(e->event_data.hdr);
 
 				/* nullifying hdr of the event in user buffer */
 				if (copy_to_user(buffer + ret, &dummy_hdr,
@@ -212,8 +213,7 @@ ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
 
 #endif
 
-static struct file_operations hyper_dmabuf_driver_fops =
-{
+static const struct file_operations hyper_dmabuf_driver_fops = {
 	.owner = THIS_MODULE,
 	.open = hyper_dmabuf_open,
 	.release = hyper_dmabuf_release,
@@ -246,7 +246,7 @@ int register_device(void)
 
 	hy_drv_priv->dev = hyper_dmabuf_miscdev.this_device;
 
-	/* TODO: Check if there is a different way to initialize dma mask nicely */
+	/* TODO: Check if there is a different way to initialize dma mask */
 	dma_coerce_mask_and_coherent(hy_drv_priv->dev, DMA_BIT_MASK(64));
 
 	return ret;
@@ -264,32 +264,30 @@ static int __init hyper_dmabuf_drv_init(void)
 {
 	int ret = 0;
 
-	printk( KERN_NOTICE "hyper_dmabuf_starting: Initialization started\n");
+	printk(KERN_NOTICE "hyper_dmabuf_starting: Initialization started\n");
 
 	hy_drv_priv = kcalloc(1, sizeof(struct hyper_dmabuf_private),
 			      GFP_KERNEL);
 
 	if (!hy_drv_priv) {
-		printk( KERN_ERR "hyper_dmabuf: Failed to create drv\n");
+		printk(KERN_ERR "hyper_dmabuf: Failed to create drv\n");
 		return -1;
 	}
 
 	ret = register_device();
-	if (ret < 0) {
+	if (ret < 0)
 		return ret;
-	}
 
 /* currently only supports XEN hypervisor */
-
 #ifdef CONFIG_HYPER_DMABUF_XEN
 	hy_drv_priv->backend_ops = &xen_backend_ops;
 #else
 	hy_drv_priv->backend_ops = NULL;
-	printk( KERN_ERR "hyper_dmabuf drv currently supports XEN only.\n");
+	printk(KERN_ERR "hyper_dmabuf drv currently supports XEN only.\n");
 #endif
 
 	if (hy_drv_priv->backend_ops == NULL) {
-		printk( KERN_ERR "Hyper_dmabuf: failed to be loaded - no backend found\n");
+		printk(KERN_ERR "Hyper_dmabuf: no backend found\n");
 		return -1;
 	}
 
@@ -385,10 +383,7 @@ static void hyper_dmabuf_drv_exit(void)
 	dev_info(hy_drv_priv->dev,
 		 "hyper_dmabuf driver: Exiting\n");
 
-	if (hy_drv_priv) {
-		kfree(hy_drv_priv);
-		hy_drv_priv = NULL;
-	}
+	kfree(hy_drv_priv);
 
 	unregister_device();
 }
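
The event interface above streams one [header][private data] record
per event to user space. A hypothetical reader, assuming a
/dev/hyper_dmabuf node and mirroring the header fields used by
hyper_dmabuf_event_read (the authoritative layout lives in
include/uapi/xen/hyper_dmabuf.h):

  #include <stdio.h>
  #include <fcntl.h>
  #include <unistd.h>

  /* assumed mirror of the uapi event header; field names and order
   * are illustrative only
   */
  struct evt_hdr {
          unsigned int event_type;
          struct { int id; int rng_key[3]; } hid;
          unsigned int size; /* bytes of private data that follow */
  };

  int main(void)
  {
          char buf[4096];
          struct evt_hdr *hdr;
          ssize_t n;
          int fd = open("/dev/hyper_dmabuf", O_RDONLY); /* assumed node */

          if (fd < 0)
                  return 1;

          /* read() blocks until an event is queued, then returns the
           * header plus its payload in one go
           */
          n = read(fd, buf, sizeof(buf));
          if (n >= (ssize_t)sizeof(*hdr)) {
                  hdr = (struct evt_hdr *)buf;
                  printf("event %u for hid %d, %u priv bytes\n",
                         hdr->event_type, hdr->hid.id, hdr->size);
          }

          close(fd);
          return 0;
  }
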
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
index 2ead41b..049c694 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -36,7 +36,7 @@ struct hyper_dmabuf_event {
 };
 
 struct hyper_dmabuf_private {
-        struct device *dev;
+	struct device *dev;
 
 	/* VM(domain) id of current VM instance */
 	int domid;
@@ -57,8 +57,8 @@ struct hyper_dmabuf_private {
 	/* flag that shows whether backend is initialized */
 	bool initialized;
 
-        wait_queue_head_t event_wait;
-        struct list_head event_list;
+	wait_queue_head_t event_wait;
+	struct list_head event_list;
 
 	spinlock_t event_lock;
 	struct mutex event_read_lock;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
index 0498cda..a4945af 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
@@ -44,7 +44,8 @@ static void hyper_dmabuf_send_event_locked(struct hyper_dmabuf_event *e)
 	assert_spin_locked(&hy_drv_priv->event_lock);
 
 	/* check current number of event then if it hits the max num allowed
-	 * then remove the oldest event in the list */
+	 * then remove the oldest event in the list
+	 */
 	if (hy_drv_priv->pending > MAX_DEPTH_EVENT_QUEUE - 1) {
 		oldest = list_first_entry(&hy_drv_priv->event_list,
 				struct hyper_dmabuf_event, link);
@@ -61,7 +62,7 @@ static void hyper_dmabuf_send_event_locked(struct hyper_dmabuf_event *e)
 	wake_up_interruptible(&hy_drv_priv->event_wait);
 }
 
-void hyper_dmabuf_events_release()
+void hyper_dmabuf_events_release(void)
 {
 	struct hyper_dmabuf_event *e, *et;
 	unsigned long irqflags;
@@ -100,15 +101,12 @@ int hyper_dmabuf_import_event(hyper_dmabuf_id_t hid)
 
 	e = kzalloc(sizeof(*e), GFP_KERNEL);
 
-	if (!e) {
-		dev_err(hy_drv_priv->dev,
-			"no space left\n");
+	if (!e)
 		return -ENOMEM;
-	}
 
 	e->event_data.hdr.event_type = HYPER_DMABUF_NEW_IMPORT;
 	e->event_data.hdr.hid = hid;
-	e->event_data.data = (void*)imported->priv;
+	e->event_data.data = (void *)imported->priv;
 	e->event_data.hdr.size = imported->sz_priv;
 
 	spin_lock_irqsave(&hy_drv_priv->event_lock, irqflags);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
index e2466c7..312dea5 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
@@ -40,11 +40,8 @@ void store_reusable_hid(hyper_dmabuf_id_t hid)
 
 	new_reusable = kmalloc(sizeof(*new_reusable), GFP_KERNEL);
 
-	if (!new_reusable) {
-		dev_err(hy_drv_priv->dev,
-			"No memory left to be allocated\n");
+	if (!new_reusable)
 		return;
-	}
 
 	new_reusable->hid = hid;
 
@@ -54,7 +51,7 @@ void store_reusable_hid(hyper_dmabuf_id_t hid)
 static hyper_dmabuf_id_t retrieve_reusable_hid(void)
 {
 	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
-	hyper_dmabuf_id_t hid = {-1, {0,0,0}};
+	hyper_dmabuf_id_t hid = {-1, {0, 0, 0} };
 
 	/* check there is reusable id */
 	if (!list_empty(&reusable_head->list)) {
@@ -92,7 +89,7 @@ void destroy_reusable_list(void)
 
 hyper_dmabuf_id_t hyper_dmabuf_get_hid(void)
 {
-	static int count = 0;
+	static int count;
 	hyper_dmabuf_id_t hid;
 	struct list_reusable_id *reusable_head;
 
@@ -100,13 +97,11 @@ hyper_dmabuf_id_t hyper_dmabuf_get_hid(void)
 	if (count == 0) {
 		reusable_head = kmalloc(sizeof(*reusable_head), GFP_KERNEL);
 
-		if (!reusable_head) {
-			dev_err(hy_drv_priv->dev,
-				"No memory left to be allocated\n");
-			return (hyper_dmabuf_id_t){-1, {0,0,0}};
-		}
+		if (!reusable_head)
+			return (hyper_dmabuf_id_t){-1, {0, 0, 0} };
 
-		reusable_head->hid.id = -1; /* list head has an invalid count */
+		/* list head has an invalid count */
+		reusable_head->hid.id = -1;
 		INIT_LIST_HEAD(&reusable_head->list);
 		hy_drv_priv->id_queue = reusable_head;
 	}
@@ -116,9 +111,8 @@ hyper_dmabuf_id_t hyper_dmabuf_get_hid(void)
 	/*creating a new H-ID only if nothing in the reusable id queue
 	 * and count is less than maximum allowed
 	 */
-	if (hid.id == -1 && count < HYPER_DMABUF_ID_MAX) {
+	if (hid.id == -1 && count < HYPER_DMABUF_ID_MAX)
 		hid.id = HYPER_DMABUF_ID_CREATE(hy_drv_priv->domid, count++);
-	}
 
 	/* random data embedded in the id for security */
 	get_random_bytes(&hid.rng_key[0], 12);
@@ -131,7 +125,7 @@ bool hyper_dmabuf_hid_keycomp(hyper_dmabuf_id_t hid1, hyper_dmabuf_id_t hid2)
 	int i;
 
 	/* compare keys */
-	for (i=0; i<3; i++) {
+	for (i = 0; i < 3; i++) {
 		if (hid1.rng_key[i] != hid2.rng_key[i])
 			return false;
 	}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
index a3336d9..61c4fb3 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
@@ -26,10 +26,10 @@
 #define __HYPER_DMABUF_ID_H__
 
 #define HYPER_DMABUF_ID_CREATE(domid, cnt) \
-        ((((domid) & 0xFF) << 24) | ((cnt) & 0xFFFFFF))
+	((((domid) & 0xFF) << 24) | ((cnt) & 0xFFFFFF))
 
 #define HYPER_DMABUF_DOM_ID(hid) \
-        (((hid.id) >> 24) & 0xFF)
+	(((hid.id) >> 24) & 0xFF)
 
 /* currently maximum number of buffers shared
  * at any given moment is limited to 1000
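
Adapted to a plain integer for illustration, the id layout these
macros encode (domain id in the top 8 bits, a 24-bit counter below;
the 96-bit random key travels separately in hid.rng_key) can be
checked with a short sketch:

  #include <stdio.h>

  /* same bit layout as the driver macros, but taking a raw int
   * instead of the hid struct for this standalone sketch
   */
  #define ID_CREATE(domid, cnt) \
          ((((domid) & 0xFF) << 24) | ((cnt) & 0xFFFFFF))
  #define DOM_ID(id) (((id) >> 24) & 0xFF)

  int main(void)
  {
          int id = ID_CREATE(3, 1000); /* domain 3, 1000th buffer */

          /* prints: id=0x030003e8 domid=3 cnt=1000 */
          printf("id=0x%08x domid=%d cnt=%d\n",
                 id, DOM_ID(id), id & 0xFFFFFF);
          return 0;
  }
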
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index b328df7..f9040ed 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -91,7 +91,7 @@ static int hyper_dmabuf_send_export_msg(struct exported_sgt_info *exported,
 	/* now create request for importer via ring */
 	op[0] = exported->hid.id;
 
-	for (i=0; i<3; i++)
+	for (i = 0; i < 3; i++)
 		op[i+1] = exported->hid.rng_key[i];
 
 	if (pg_info) {
@@ -113,10 +113,8 @@ static int hyper_dmabuf_send_export_msg(struct exported_sgt_info *exported,
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
-	if(!req) {
-		dev_err(hy_drv_priv->dev, "no more space left\n");
+	if (!req)
 		return -1;
-	}
 
 	/* composing a message to the importer */
 	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT, &op[0]);
@@ -161,69 +159,71 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 					     export_remote_attr->remote_domain);
 	if (hid.id != -1) {
 		exported = hyper_dmabuf_find_exported(hid);
-		if (exported != NULL) {
-			if (exported->valid) {
-				/*
-				 * Check if unexport is already scheduled for that buffer,
-				 * if so try to cancel it. If that will fail, buffer needs
-				 * to be reexport once again.
-				 */
-				if (exported->unexport_sched) {
-					if (!cancel_delayed_work_sync(&exported->unexport)) {
-						dma_buf_put(dma_buf);
-						goto reexport;
-					}
-					exported->unexport_sched = false;
-				}
-
-				/* if there's any change in size of private data.
-				 * we reallocate space for private data with new size */
-				if (export_remote_attr->sz_priv != exported->sz_priv) {
-					kfree(exported->priv);
-
-					/* truncating size */
-					if (export_remote_attr->sz_priv > MAX_SIZE_PRIV_DATA) {
-						exported->sz_priv = MAX_SIZE_PRIV_DATA;
-					} else {
-						exported->sz_priv = export_remote_attr->sz_priv;
-					}
-
-					exported->priv = kcalloc(1, exported->sz_priv, GFP_KERNEL);
-
-					if(!exported->priv) {
-						dev_err(hy_drv_priv->dev,
-							"no more space left for priv\n");
-						hyper_dmabuf_remove_exported(exported->hid);
-						hyper_dmabuf_cleanup_sgt_info(exported, true);
-						kfree(exported);
-						dma_buf_put(dma_buf);
-						return -ENOMEM;
-					}
-				}
-
-				/* update private data in sgt_info with new ones */
-				ret = copy_from_user(exported->priv, export_remote_attr->priv,
-						     exported->sz_priv);
-				if (ret) {
-					dev_err(hy_drv_priv->dev,
-						"Failed to load a new private data\n");
-					ret = -EINVAL;
-				} else {
-					/* send an export msg for updating priv in importer */
-					ret = hyper_dmabuf_send_export_msg(exported, NULL);
-
-					if (ret < 0) {
-						dev_err(hy_drv_priv->dev,
-							"Failed to send a new private data\n");
-						ret = -EBUSY;
-					}
-				}
 
+		if (!exported)
+			goto reexport;
+
+		if (exported->valid == false)
+			goto reexport;
+
+		/*
+		 * Check if unexport is already scheduled for that buffer,
+		 * if so try to cancel it. If that fails, the buffer needs
+		 * to be re-exported once again.
+		 */
+		if (exported->unexport_sched) {
+			if (!cancel_delayed_work_sync(&exported->unexport)) {
 				dma_buf_put(dma_buf);
-				export_remote_attr->hid = hid;
-				return ret;
+				goto reexport;
 			}
+			exported->unexport_sched = false;
 		}
+
+		/* if there's any change in size of private data,
+		 * we reallocate space for private data with new size
+		 */
+		if (export_remote_attr->sz_priv != exported->sz_priv) {
+			kfree(exported->priv);
+
+			/* truncating size */
+			if (export_remote_attr->sz_priv > MAX_SIZE_PRIV_DATA)
+				exported->sz_priv = MAX_SIZE_PRIV_DATA;
+			else
+				exported->sz_priv = export_remote_attr->sz_priv;
+
+			exported->priv = kcalloc(1, exported->sz_priv,
+						 GFP_KERNEL);
+
+			if (!exported->priv) {
+				hyper_dmabuf_remove_exported(exported->hid);
+				hyper_dmabuf_cleanup_sgt_info(exported, true);
+				kfree(exported);
+				dma_buf_put(dma_buf);
+				return -ENOMEM;
+			}
+		}
+
+		/* update private data in sgt_info with new ones */
+		ret = copy_from_user(exported->priv, export_remote_attr->priv,
+				     exported->sz_priv);
+		if (ret) {
+			dev_err(hy_drv_priv->dev,
+				"Failed to load a new private data\n");
+			ret = -EINVAL;
+		} else {
+			/* send an export msg for updating priv in importer */
+			ret = hyper_dmabuf_send_export_msg(exported, NULL);
+
+			if (ret < 0) {
+				dev_err(hy_drv_priv->dev,
+					"Failed to send a new private data\n");
+				ret = -EBUSY;
+			}
+		}
+
+		dma_buf_put(dma_buf);
+		export_remote_attr->hid = hid;
+		return ret;
 	}
 
 reexport:
@@ -244,25 +244,22 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 
 	exported = kcalloc(1, sizeof(*exported), GFP_KERNEL);
 
-	if(!exported) {
-		dev_err(hy_drv_priv->dev, "no more space left\n");
+	if (!exported) {
 		ret = -ENOMEM;
 		goto fail_sgt_info_creation;
 	}
 
 	/* possible truncation */
-	if (export_remote_attr->sz_priv > MAX_SIZE_PRIV_DATA) {
+	if (export_remote_attr->sz_priv > MAX_SIZE_PRIV_DATA)
 		exported->sz_priv = MAX_SIZE_PRIV_DATA;
-	} else {
+	else
 		exported->sz_priv = export_remote_attr->sz_priv;
-	}
 
 	/* creating buffer for private data of buffer */
-	if(exported->sz_priv != 0) {
+	if (exported->sz_priv != 0) {
 		exported->priv = kcalloc(1, exported->sz_priv, GFP_KERNEL);
 
-		if(!exported->priv) {
-			dev_err(hy_drv_priv->dev, "no more space left\n");
+		if (!exported->priv) {
 			ret = -ENOMEM;
 			goto fail_priv_creation;
 		}
@@ -273,7 +270,7 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 	exported->hid = hyper_dmabuf_get_hid();
 
 	/* no more exported dmabuf allowed */
-	if(exported->hid.id == -1) {
+	if (exported->hid.id == -1) {
 		dev_err(hy_drv_priv->dev,
 			"exceeds allowed number of dmabuf to be exported\n");
 		ret = -ENOMEM;
@@ -286,28 +283,27 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 
 	exported->active_sgts = kmalloc(sizeof(struct sgt_list), GFP_KERNEL);
 	if (!exported->active_sgts) {
-		dev_err(hy_drv_priv->dev, "no more space left\n");
 		ret = -ENOMEM;
 		goto fail_map_active_sgts;
 	}
 
-	exported->active_attached = kmalloc(sizeof(struct attachment_list), GFP_KERNEL);
+	exported->active_attached = kmalloc(sizeof(struct attachment_list),
+					    GFP_KERNEL);
 	if (!exported->active_attached) {
-		dev_err(hy_drv_priv->dev, "no more space left\n");
 		ret = -ENOMEM;
 		goto fail_map_active_attached;
 	}
 
-	exported->va_kmapped = kmalloc(sizeof(struct kmap_vaddr_list), GFP_KERNEL);
+	exported->va_kmapped = kmalloc(sizeof(struct kmap_vaddr_list),
+				       GFP_KERNEL);
 	if (!exported->va_kmapped) {
-		dev_err(hy_drv_priv->dev, "no more space left\n");
 		ret = -ENOMEM;
 		goto fail_map_va_kmapped;
 	}
 
-	exported->va_vmapped = kmalloc(sizeof(struct vmap_vaddr_list), GFP_KERNEL);
+	exported->va_vmapped = kmalloc(sizeof(struct vmap_vaddr_list),
+				       GFP_KERNEL);
 	if (!exported->va_vmapped) {
-		dev_err(hy_drv_priv->dev, "no more space left\n");
 		ret = -ENOMEM;
 		goto fail_map_va_vmapped;
 	}
@@ -436,31 +432,32 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 	/* send notification for export_fd to exporter */
 	op[0] = imported->hid.id;
 
-	for (i=0; i<3; i++)
+	for (i = 0; i < 3; i++)
 		op[i+1] = imported->hid.rng_key[i];
 
-	dev_dbg(hy_drv_priv->dev, "Exporting fd of buffer {id:%d key:%d %d %d}\n",
-		imported->hid.id, imported->hid.rng_key[0], imported->hid.rng_key[1],
-		imported->hid.rng_key[2]);
+	dev_dbg(hy_drv_priv->dev, "Export FD of buffer {id:%d key:%d %d %d}\n",
+		imported->hid.id, imported->hid.rng_key[0],
+		imported->hid.rng_key[1], imported->hid.rng_key[2]);
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
-	if (!req) {
-		dev_err(hy_drv_priv->dev,
-			"No memory left to be allocated\n");
+	if (!req)
 		return -ENOMEM;
-	}
 
 	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD, &op[0]);
 
 	ret = ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req, true);
 
 	if (ret < 0) {
-		/* in case of timeout other end eventually will receive request, so we need to undo it */
-		hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD_FAILED, &op[0]);
+		/* in case of timeout the other end will eventually
+		 * receive the request, so we need to undo it
+		 */
+		hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD_FAILED,
+					&op[0]);
 		ops->send_req(op[0], req, false);
 		kfree(req);
-		dev_err(hy_drv_priv->dev, "Failed to create sgt or notify exporter\n");
+		dev_err(hy_drv_priv->dev,
+			"Failed to create sgt or notify exporter\n");
 		imported->importers--;
 		mutex_unlock(&hy_drv_priv->lock);
 		return ret;
@@ -471,64 +468,69 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 	if (ret == HYPER_DMABUF_REQ_ERROR) {
 		dev_err(hy_drv_priv->dev,
 			"Buffer invalid {id:%d key:%d %d %d}, cannot import\n",
-			imported->hid.id, imported->hid.rng_key[0], imported->hid.rng_key[1],
-			imported->hid.rng_key[2]);
+			imported->hid.id, imported->hid.rng_key[0],
+			imported->hid.rng_key[1], imported->hid.rng_key[2]);
 
 		imported->importers--;
 		mutex_unlock(&hy_drv_priv->lock);
 		return -EINVAL;
-	} else {
-		dev_dbg(hy_drv_priv->dev, "Can import buffer {id:%d key:%d %d %d}\n",
-			imported->hid.id, imported->hid.rng_key[0], imported->hid.rng_key[1],
-			imported->hid.rng_key[2]);
-
-		ret = 0;
 	}
 
+	ret = 0;
+
+	dev_dbg(hy_drv_priv->dev,
+		"Found buffer gref %d off %d\n",
+		imported->ref_handle, imported->frst_ofst);
+
 	dev_dbg(hy_drv_priv->dev,
-		  "%s Found buffer gref %d  off %d last len %d nents %d domain %d\n",
-		  __func__, imported->ref_handle, imported->frst_ofst,
-		  imported->last_len, imported->nents, HYPER_DMABUF_DOM_ID(imported->hid));
+		"last len %d nents %d domain %d\n",
+		imported->last_len, imported->nents,
+		HYPER_DMABUF_DOM_ID(imported->hid));
 
 	if (!imported->sgt) {
 		dev_dbg(hy_drv_priv->dev,
-			"%s buffer {id:%d key:%d %d %d} pages not mapped yet\n", __func__,
-			imported->hid.id, imported->hid.rng_key[0], imported->hid.rng_key[1],
-			imported->hid.rng_key[2]);
+			"buffer {id:%d key:%d %d %d} pages not mapped yet\n",
+			imported->hid.id, imported->hid.rng_key[0],
+			imported->hid.rng_key[1], imported->hid.rng_key[2]);
 
 		data_pgs = ops->map_shared_pages(imported->ref_handle,
-						   HYPER_DMABUF_DOM_ID(imported->hid),
-						   imported->nents,
-						   &imported->refs_info);
+					HYPER_DMABUF_DOM_ID(imported->hid),
+					imported->nents,
+					&imported->refs_info);
 
 		if (!data_pgs) {
 			dev_err(hy_drv_priv->dev,
-				"Cannot map pages of buffer {id:%d key:%d %d %d}\n",
-				imported->hid.id, imported->hid.rng_key[0], imported->hid.rng_key[1],
+				"can't map pages hid {id:%d key:%d %d %d}\n",
+				imported->hid.id, imported->hid.rng_key[0],
+				imported->hid.rng_key[1],
 				imported->hid.rng_key[2]);
 
 			imported->importers--;
+
 			req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
-			if (!req) {
-				dev_err(hy_drv_priv->dev,
-					"No more space left\n");
+			if (!req)
 				return -ENOMEM;
-			}
 
-			hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD_FAILED, &op[0]);
-			ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req, false);
+			hyper_dmabuf_create_req(req,
+						HYPER_DMABUF_EXPORT_FD_FAILED,
+						&op[0]);
+			ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req,
+							  false);
 			kfree(req);
 			mutex_unlock(&hy_drv_priv->lock);
 			return -EINVAL;
 		}
 
-		imported->sgt = hyper_dmabuf_create_sgt(data_pgs, imported->frst_ofst,
-							imported->last_len, imported->nents);
+		imported->sgt = hyper_dmabuf_create_sgt(data_pgs,
+							imported->frst_ofst,
+							imported->last_len,
+							imported->nents);
 
 	}
 
-	export_fd_attr->fd = hyper_dmabuf_export_fd(imported, export_fd_attr->flags);
+	export_fd_attr->fd = hyper_dmabuf_export_fd(imported,
+						    export_fd_attr->flags);
 
 	if (export_fd_attr->fd < 0) {
 		/* fail to get fd */
@@ -566,21 +568,19 @@ static void hyper_dmabuf_delayed_unexport(struct work_struct *work)
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
-	if (!req) {
-		dev_err(hy_drv_priv->dev,
-			"No memory left to be allocated\n");
+	if (!req)
 		return;
-	}
 
 	op[0] = exported->hid.id;
 
-	for (i=0; i<3; i++)
+	for (i = 0; i < 3; i++)
 		op[i+1] = exported->hid.rng_key[i];
 
 	hyper_dmabuf_create_req(req, HYPER_DMABUF_NOTIFY_UNEXPORT, &op[0]);
 
 	/* Now send unexport request to remote domain, marking
-	 * that buffer should not be used anymore */
+	 * that buffer should not be used anymore
+	 */
 	ret = ops->send_req(exported->rdomid, req, true);
 	if (ret < 0) {
 		dev_err(hy_drv_priv->dev,
@@ -589,12 +589,10 @@ static void hyper_dmabuf_delayed_unexport(struct work_struct *work)
 			exported->hid.rng_key[1], exported->hid.rng_key[2]);
 	}
 
-	/* free msg */
 	kfree(req);
 	exported->unexport_sched = false;
 
-	/*
-	 * Immediately clean-up if it has never been exported by importer
+	/* Immediately clean-up if it has never been exported by importer
 	 * (so no SGT is constructed on importer).
 	 * clean it up later in remote sync when final release ops
 	 * is called (importer does this only when there's no
@@ -669,25 +667,31 @@ static int hyper_dmabuf_query_ioctl(struct file *filp, void *data)
 		exported = hyper_dmabuf_find_exported(query_attr->hid);
 		if (exported) {
 			ret = hyper_dmabuf_query_exported(exported,
-							  query_attr->item, &query_attr->info);
+							  query_attr->item,
+							  &query_attr->info);
 		} else {
 			dev_err(hy_drv_priv->dev,
-				"DMA BUF {id:%d key:%d %d %d} not in the export list\n",
-				query_attr->hid.id, query_attr->hid.rng_key[0],
-				query_attr->hid.rng_key[1], query_attr->hid.rng_key[2]);
+				"hid {id:%d key:%d %d %d} not in exp list\n",
+				query_attr->hid.id,
+				query_attr->hid.rng_key[0],
+				query_attr->hid.rng_key[1],
+				query_attr->hid.rng_key[2]);
 			return -ENOENT;
 		}
 	} else {
 		/* query for imported dmabuf */
 		imported = hyper_dmabuf_find_imported(query_attr->hid);
 		if (imported) {
-			ret = hyper_dmabuf_query_imported(imported, query_attr->item,
+			ret = hyper_dmabuf_query_imported(imported,
+							  query_attr->item,
 							  &query_attr->info);
 		} else {
 			dev_err(hy_drv_priv->dev,
-				"DMA BUF {id:%d key:%d %d %d} not in the imported list\n",
-				query_attr->hid.id, query_attr->hid.rng_key[0],
-				query_attr->hid.rng_key[1], query_attr->hid.rng_key[2]);
+				"hid {id:%d key:%d %d %d} not in imp list\n",
+				query_attr->hid.id,
+				query_attr->hid.rng_key[0],
+				query_attr->hid.rng_key[1],
+				query_attr->hid.rng_key[2]);
 			return -ENOENT;
 		}
 	}
@@ -696,12 +700,18 @@ static int hyper_dmabuf_query_ioctl(struct file *filp, void *data)
 }
 
 const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_TX_CH_SETUP, hyper_dmabuf_tx_ch_setup_ioctl, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_RX_CH_SETUP, hyper_dmabuf_rx_ch_setup_ioctl, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_REMOTE, hyper_dmabuf_export_remote_ioctl, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_FD, hyper_dmabuf_export_fd_ioctl, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_UNEXPORT, hyper_dmabuf_unexport_ioctl, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_QUERY, hyper_dmabuf_query_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_TX_CH_SETUP,
+			       hyper_dmabuf_tx_ch_setup_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_RX_CH_SETUP,
+			       hyper_dmabuf_rx_ch_setup_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_REMOTE,
+			       hyper_dmabuf_export_remote_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_FD,
+			       hyper_dmabuf_export_fd_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_UNEXPORT,
+			       hyper_dmabuf_unexport_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_QUERY,
+			       hyper_dmabuf_query_ioctl, 0),
 };
 
 long hyper_dmabuf_ioctl(struct file *filp,
@@ -728,21 +738,23 @@ long hyper_dmabuf_ioctl(struct file *filp,
 	}
 
 	kdata = kmalloc(_IOC_SIZE(cmd), GFP_KERNEL);
-	if (!kdata) {
-		dev_err(hy_drv_priv->dev, "no memory\n");
+	if (!kdata)
 		return -ENOMEM;
-	}
 
-	if (copy_from_user(kdata, (void __user *)param, _IOC_SIZE(cmd)) != 0) {
-		dev_err(hy_drv_priv->dev, "failed to copy from user arguments\n");
+	if (copy_from_user(kdata, (void __user *)param,
+			   _IOC_SIZE(cmd)) != 0) {
+		dev_err(hy_drv_priv->dev,
+			"failed to copy from user arguments\n");
 		ret = -EFAULT;
 		goto ioctl_error;
 	}
 
 	ret = func(filp, kdata);
 
-	if (copy_to_user((void __user *)param, kdata, _IOC_SIZE(cmd)) != 0) {
-		dev_err(hy_drv_priv->dev, "failed to copy to user arguments\n");
+	if (copy_to_user((void __user *)param, kdata,
+			 _IOC_SIZE(cmd)) != 0) {
+		dev_err(hy_drv_priv->dev,
+			"failed to copy to user arguments\n");
 		ret = -EFAULT;
 		goto ioctl_error;
 	}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
index 3e9470a..5991a87 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
@@ -34,7 +34,7 @@ struct hyper_dmabuf_ioctl_desc {
 	const char *name;
 };
 
-#define HYPER_DMABUF_IOCTL_DEF(ioctl, _func, _flags) 	\
+#define HYPER_DMABUF_IOCTL_DEF(ioctl, _func, _flags)	\
 	[_IOC_NR(ioctl)] = {				\
 			.cmd = ioctl,			\
 			.func = _func,			\
@@ -42,6 +42,9 @@ struct hyper_dmabuf_ioctl_desc {
 			.name = #ioctl			\
 	}
 
+long hyper_dmabuf_ioctl(struct file *filp,
+			unsigned int cmd, unsigned long param);
+
 int hyper_dmabuf_unexport_ioctl(struct file *filp, void *data);
 
 #endif //__HYPER_DMABUF_IOCTL_H__
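
The descriptor table that HYPER_DMABUF_IOCTL_DEF fills is indexed by
_IOC_NR(cmd), so dispatch in hyper_dmabuf_ioctl reduces to a bounds
check plus one table lookup. A simplified kernel-side sketch of that
pattern (not the driver code itself):

  #include <linux/errno.h>
  #include <linux/fs.h>
  #include <linux/ioctl.h>

  struct ioctl_desc {
          unsigned int cmd;
          int (*func)(struct file *filp, void *data);
  };

  static long dispatch(const struct ioctl_desc *table, unsigned int n,
                       struct file *filp, unsigned int cmd, void *kdata)
  {
          unsigned int nr = _IOC_NR(cmd);

          /* reject commands outside the table or without a handler */
          if (nr >= n || !table[nr].func)
                  return -EINVAL;

          return table[nr].func(filp, kdata);
  }
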
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index 907f76e..fbbcc39 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -52,18 +52,19 @@ void hyper_dmabuf_create_req(struct hyper_dmabuf_req *req,
 	req->stat = HYPER_DMABUF_REQ_NOT_RESPONDED;
 	req->cmd = cmd;
 
-	switch(cmd) {
+	switch (cmd) {
 	/* as exporter, commands to importer */
 	case HYPER_DMABUF_EXPORT:
 		/* exporting pages for dmabuf */
 		/* command : HYPER_DMABUF_EXPORT,
-		 * op0~3 : hyper_dmabuf_id
+		 * op0~op3 : hyper_dmabuf_id
 		 * op4 : number of pages to be shared
 		 * op5 : offset of data in the first page
 		 * op6 : length of data in the last page
 		 * op7 : top-level reference number for shared pages
 		 * op8 : size of private data (from op9)
-		 * op9 ~ : Driver-specific private data (e.g. graphic buffer's meta info)
+		 * op9 ~ : Driver-specific private data
+		 *	   (e.g. graphic buffer's meta info)
 		 */
 
 		memcpy(&req->op[0], &op[0], 9 * sizeof(int) + op[8]);
@@ -72,34 +73,39 @@ void hyper_dmabuf_create_req(struct hyper_dmabuf_req *req,
 	case HYPER_DMABUF_NOTIFY_UNEXPORT:
 		/* destroy sg_list for hyper_dmabuf_id on remote side */
 		/* command : DMABUF_DESTROY,
-		 * op0~3 : hyper_dmabuf_id_t hid
+		 * op0~op3 : hyper_dmabuf_id_t hid
 		 */
 
-		for (i=0; i < 4; i++)
+		for (i = 0; i < 4; i++)
 			req->op[i] = op[i];
 		break;
 
 	case HYPER_DMABUF_EXPORT_FD:
 	case HYPER_DMABUF_EXPORT_FD_FAILED:
-		/* dmabuf fd is being created on imported side or importing failed */
-		/* command : HYPER_DMABUF_EXPORT_FD or HYPER_DMABUF_EXPORT_FD_FAILED,
-		 * op0~3 : hyper_dmabuf_id
+		/* dmabuf fd is being created on imported side or importing
+		 * failed
+		 *
+		 * command : HYPER_DMABUF_EXPORT_FD or
+		 *	     HYPER_DMABUF_EXPORT_FD_FAILED,
+		 * op0~op3 : hyper_dmabuf_id
 		 */
 
-		for (i=0; i < 4; i++)
+		for (i = 0; i < 4; i++)
 			req->op[i] = op[i];
 		break;
 
 	case HYPER_DMABUF_OPS_TO_REMOTE:
-		/* notifying dmabuf map/unmap to importer (probably not needed) */
-		/* for dmabuf synchronization */
+		/* notifying dmabuf map/unmap to importer (probably not needed)
+		 * for dmabuf synchronization
+		 */
 		break;
 
-	/* as importer, command to exporter */
 	case HYPER_DMABUF_OPS_TO_SOURCE:
-		/* notifying dmabuf map/unmap to exporter, map will make the driver to do shadow mapping
-		* or unmapping for synchronization with original exporter (e.g. i915) */
-		/* command : DMABUF_OPS_TO_SOURCE.
+		/* notifying dmabuf map/unmap to exporter, map will make
+		 * the driver do shadow mapping or unmapping for
+		 * synchronization with original exporter (e.g. i915)
+		 *
+		 * command : DMABUF_OPS_TO_SOURCE.
 		 * op0~3 : hyper_dmabuf_id
 		 * op4 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
 		 */
@@ -116,7 +122,8 @@ void hyper_dmabuf_create_req(struct hyper_dmabuf_req *req,
 static void cmd_process_work(struct work_struct *work)
 {
 	struct imported_sgt_info *imported;
-	struct cmd_process *proc = container_of(work, struct cmd_process, work);
+	struct cmd_process *proc = container_of(work,
+						struct cmd_process, work);
 	struct hyper_dmabuf_req *req;
 	int domid;
 	int i;
@@ -128,40 +135,42 @@ static void cmd_process_work(struct work_struct *work)
 	case HYPER_DMABUF_EXPORT:
 		/* exporting pages for dmabuf */
 		/* command : HYPER_DMABUF_EXPORT,
-		 * op0~3 : hyper_dmabuf_id
+		 * op0~op3 : hyper_dmabuf_id
 		 * op4 : number of pages to be shared
 		 * op5 : offset of data in the first page
 		 * op6 : length of data in the last page
 		 * op7 : top-level reference number for shared pages
 		 * op8 : size of private data (from op9)
-		 * op9 ~ : Driver-specific private data (e.g. graphic buffer's meta info)
+		 * op9 ~ : Driver-specific private data
+		 *         (e.g. graphic buffer's meta info)
 		 */
 
-		/* if nents == 0, it means it is a message only for priv synchronization
-		 * for existing imported_sgt_info so not creating a new one */
+		/* if nents == 0, it means it is a message only for
+		 * priv synchronization for an existing imported_sgt_info,
+		 * so a new one is not created
+		 */
 		if (req->op[4] == 0) {
 			hyper_dmabuf_id_t exist = {req->op[0],
 						   {req->op[1], req->op[2],
-						   req->op[3]}};
+						   req->op[3] } };
 
 			imported = hyper_dmabuf_find_imported(exist);
 
 			if (!imported) {
 				dev_err(hy_drv_priv->dev,
-					"Can't find imported sgt_info from IMPORT_LIST\n");
+					"Can't find imported sgt_info\n");
 				break;
 			}
 
 			/* if size of new private data is different,
-			 * we reallocate it. */
+			 * we reallocate it.
+			 */
 			if (imported->sz_priv != req->op[8]) {
 				kfree(imported->priv);
 				imported->sz_priv = req->op[8];
-				imported->priv = kcalloc(1, req->op[8], GFP_KERNEL);
+				imported->priv = kcalloc(1, req->op[8],
+							 GFP_KERNEL);
 				if (!imported->priv) {
-					dev_err(hy_drv_priv->dev,
-						"Fail to allocate priv\n");
-
 					/* set it invalid */
 					imported->valid = 0;
 					break;
@@ -181,26 +190,20 @@ static void cmd_process_work(struct work_struct *work)
 
 		imported = kcalloc(1, sizeof(*imported), GFP_KERNEL);
 
-		if (!imported) {
-			dev_err(hy_drv_priv->dev,
-				"No memory left to be allocated\n");
+		if (!imported)
 			break;
-		}
 
 		imported->sz_priv = req->op[8];
 		imported->priv = kcalloc(1, req->op[8], GFP_KERNEL);
 
 		if (!imported->priv) {
-			dev_err(hy_drv_priv->dev,
-				"Fail to allocate priv\n");
-
 			kfree(imported);
 			break;
 		}
 
 		imported->hid.id = req->op[0];
 
-		for (i=0; i<3; i++)
+		for (i = 0; i < 3; i++)
 			imported->hid.rng_key[i] = req->op[i+1];
 
 		imported->nents = req->op[4];
@@ -230,13 +233,13 @@ static void cmd_process_work(struct work_struct *work)
 		break;
 
 	case HYPER_DMABUF_OPS_TO_REMOTE:
-		/* notifying dmabuf map/unmap to importer (probably not needed) */
-		/* for dmabuf synchronization */
+		/* notifying dmabuf map/unmap to importer
+		 * (probably not needed) for dmabuf synchronization
+		 */
 		break;
 
 	default:
 		/* shouldn't get here */
-		/* no matched command, nothing to do.. just return error */
 		break;
 	}
 
@@ -280,20 +283,22 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 		 * op0~3 : hyper_dmabuf_id
 		 */
 		dev_dbg(hy_drv_priv->dev,
-			"%s: processing HYPER_DMABUF_NOTIFY_UNEXPORT\n", __func__);
+			"processing HYPER_DMABUF_NOTIFY_UNEXPORT\n");
 
 		imported = hyper_dmabuf_find_imported(hid);
 
 		if (imported) {
 			/* if anything is still using dma_buf */
 			if (imported->importers) {
-				/*
-				 * Buffer is still in  use, just mark that it should
-				 * not be allowed to export its fd anymore.
+				/* Buffer is still in use, just mark that
+				 * it should not be allowed to export its fd
+				 * anymore.
 				 */
 				imported->valid = false;
 			} else {
-				/* No one is using buffer, remove it from imported list */
+				/* No one is using buffer, remove it from
+				 * imported list
+				 */
 				hyper_dmabuf_remove_imported(hid);
 				kfree(imported);
 			}
@@ -306,10 +311,12 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 
 	/* dma buf remote synchronization */
 	if (req->cmd == HYPER_DMABUF_OPS_TO_SOURCE) {
-		/* notifying dmabuf map/unmap to exporter, map will make the driver to do shadow mapping
-		 * or unmapping for synchronization with original exporter (e.g. i915) */
-
-		/* command : DMABUF_OPS_TO_SOURCE.
+		/* notifying dmabuf map/unmap to exporter, map will
+		 * make the driver do shadow mapping
+		 * or unmapping for synchronization with original
+		 * exporter (e.g. i915)
+		 *
+		 * command : DMABUF_OPS_TO_SOURCE.
 		 * op0~3 : hyper_dmabuf_id
 		 * op1 : enum hyper_dmabuf_ops {....}
 		 */
@@ -330,27 +337,30 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 	if (req->cmd == HYPER_DMABUF_EXPORT_FD) {
 		/* find a corresponding SGT for the id */
 		dev_dbg(hy_drv_priv->dev,
-			"Processing HYPER_DMABUF_EXPORT_FD for buffer {id:%d key:%d %d %d}\n",
+			"HYPER_DMABUF_EXPORT_FD for {id:%d key:%d %d %d}\n",
 			hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
 
 		exported = hyper_dmabuf_find_exported(hid);
 
 		if (!exported) {
 			dev_err(hy_drv_priv->dev,
-				"critical err: requested sgt_info can't be found for buffer {id:%d key:%d %d %d}\n",
-				hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
+				"buffer {id:%d key:%d %d %d} not found\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1],
+				hid.rng_key[2]);
 
 			req->stat = HYPER_DMABUF_REQ_ERROR;
 		} else if (!exported->valid) {
 			dev_dbg(hy_drv_priv->dev,
-				"Buffer no longer valid - cannot export fd for buffer {id:%d key:%d %d %d}\n",
-				hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
+				"Buffer no longer valid {id:%d key:%d %d %d}\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1],
+				hid.rng_key[2]);
 
 			req->stat = HYPER_DMABUF_REQ_ERROR;
 		} else {
 			dev_dbg(hy_drv_priv->dev,
-				"Buffer still valid - can export fd for buffer {id:%d key:%d %d %d}\n",
-				hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
+				"Buffer still valid {id:%d key:%d %d %d}\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1],
+				hid.rng_key[2]);
 
 			exported->active++;
 			req->stat = HYPER_DMABUF_REQ_PROCESSED;
@@ -360,15 +370,16 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 
 	if (req->cmd == HYPER_DMABUF_EXPORT_FD_FAILED) {
 		dev_dbg(hy_drv_priv->dev,
-			"Processing HYPER_DMABUF_EXPORT_FD_FAILED for buffer {id:%d key:%d %d %d}\n",
+			"HYPER_DMABUF_EXPORT_FD_FAILED for {id:%d key:%d %d %d}\n",
 			hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
 
 		exported = hyper_dmabuf_find_exported(hid);
 
 		if (!exported) {
 			dev_err(hy_drv_priv->dev,
-				"critical err: requested sgt_info can't be found for buffer {id:%d key:%d %d %d}\n",
-				hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
+				"buffer {id:%d key:%d %d %d} not found\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1],
+				hid.rng_key[2]);
 
 			req->stat = HYPER_DMABUF_REQ_ERROR;
 		} else {
@@ -382,19 +393,14 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 		"%s: putting request to workqueue\n", __func__);
 	temp_req = kmalloc(sizeof(*temp_req), GFP_KERNEL);
 
-	if (!temp_req) {
-		dev_err(hy_drv_priv->dev,
-			"No memory left to be allocated\n");
+	if (!temp_req)
 		return -ENOMEM;
-	}
 
 	memcpy(temp_req, req, sizeof(*temp_req));
 
 	proc = kcalloc(1, sizeof(struct cmd_process), GFP_KERNEL);
 
 	if (!proc) {
-		dev_err(hy_drv_priv->dev,
-			"No memory left to be allocated\n");
 		kfree(temp_req);
 		return -ENOMEM;
 	}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
index 7c694ec..9c8a76b 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -79,7 +79,9 @@ void hyper_dmabuf_create_req(struct hyper_dmabuf_req *req,
 				 enum hyper_dmabuf_command command,
 				 int *operands);
 
-/* parse incoming request packet (or response) and take appropriate actions for those */
+/* parse incoming request packet (or response) and take
+ * appropriate actions for those
+ */
 int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req);
 
 #endif // __HYPER_DMABUF_MSG_H__
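
Per the op0~op9 comments in hyper_dmabuf_msg.c, an EXPORT request
carries the id and key in op0~op3, the page-layout fields in op4~op8,
and private data from op9 on. A hypothetical helper showing that
layout (the hid typedef mirrors the driver's; this is not a function
from the patch):

  #include <string.h>

  typedef struct {
          int id;
          int rng_key[3];
  } hyper_dmabuf_id_t;

  static void fill_export_ops(int *op, hyper_dmabuf_id_t hid,
                              int nents, int frst_ofst, int last_len,
                              int top_gref, int sz_priv,
                              const void *priv)
  {
          int i;

          op[0] = hid.id;
          for (i = 0; i < 3; i++)
                  op[i + 1] = hid.rng_key[i]; /* op1~op3: random key */

          op[4] = nents;     /* number of pages to be shared */
          op[5] = frst_ofst; /* offset of data in the first page */
          op[6] = last_len;  /* length of data in the last page */
          op[7] = top_gref;  /* top-level reference for shared pages */
          op[8] = sz_priv;   /* size of private data */

          /* op9~ : driver-specific private data */
          memcpy(&op[9], priv, sz_priv);
  }
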
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
index 7e73170..03fdd30 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
@@ -53,18 +53,15 @@ static int hyper_dmabuf_sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
 
 	op[0] = hid.id;
 
-	for (i=0; i<3; i++)
+	for (i = 0; i < 3; i++)
 		op[i+1] = hid.rng_key[i];
 
 	op[4] = dmabuf_ops;
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
-	if (!req) {
-		dev_err(hy_drv_priv->dev,
-			"No memory left to be allocated\n");
+	if (!req)
 		return -ENOMEM;
-	}
 
 	hyper_dmabuf_create_req(req, HYPER_DMABUF_OPS_TO_SOURCE, &op[0]);
 
@@ -81,8 +78,8 @@ static int hyper_dmabuf_sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
 	return ret;
 }
 
-static int hyper_dmabuf_ops_attach(struct dma_buf* dmabuf,
-				   struct device* dev,
+static int hyper_dmabuf_ops_attach(struct dma_buf *dmabuf,
+				   struct device *dev,
 				   struct dma_buf_attachment *attach)
 {
 	struct imported_sgt_info *imported;
@@ -99,7 +96,7 @@ static int hyper_dmabuf_ops_attach(struct dma_buf* dmabuf,
 	return ret;
 }
 
-static void hyper_dmabuf_ops_detach(struct dma_buf* dmabuf,
+static void hyper_dmabuf_ops_detach(struct dma_buf *dmabuf,
 				    struct dma_buf_attachment *attach)
 {
 	struct imported_sgt_info *imported;
@@ -114,8 +111,9 @@ static void hyper_dmabuf_ops_detach(struct dma_buf* dmabuf,
 					HYPER_DMABUF_OPS_DETACH);
 }
 
-static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachment,
-					     enum dma_data_direction dir)
+static struct sg_table *hyper_dmabuf_ops_map(
+				struct dma_buf_attachment *attachment,
+				enum dma_data_direction dir)
 {
 	struct sg_table *st;
 	struct imported_sgt_info *imported;
@@ -130,9 +128,8 @@ static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachme
 	/* extract pages from sgt */
 	pg_info = hyper_dmabuf_ext_pgs(imported->sgt);
 
-	if (!pg_info) {
+	if (!pg_info)
 		return NULL;
-	}
 
 	/* create a new sg_table with extracted pages */
 	st = hyper_dmabuf_create_sgt(pg_info->pgs, pg_info->frst_ofst,
@@ -140,8 +137,8 @@ static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachme
 	if (!st)
 		goto err_free_sg;
 
-        if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir))
-                goto err_free_sg;
+	if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir))
+		goto err_free_sg;
 
 	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_MAP);
@@ -196,9 +193,8 @@ static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 
 	imported = (struct imported_sgt_info *)dma_buf->priv;
 
-	if (!dmabuf_refcount(imported->dma_buf)) {
+	if (!dmabuf_refcount(imported->dma_buf))
 		imported->dma_buf = NULL;
-	}
 
 	imported->importers--;
 
@@ -219,8 +215,9 @@ static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 					HYPER_DMABUF_OPS_RELEASE);
 
 	/*
-	 * Check if buffer is still valid and if not remove it from imported list.
-	 * That has to be done after sending sync request
+	 * Check if buffer is still valid and if not remove it
+	 * from imported list. That has to be done after sending
+	 * sync request
 	 */
 	if (finish) {
 		hyper_dmabuf_remove_imported(imported->hid);
@@ -228,7 +225,8 @@ static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 	}
 }
 
-static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
+static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf,
+					     enum dma_data_direction dir)
 {
 	struct imported_sgt_info *imported;
 	int ret;
@@ -244,7 +242,8 @@ static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf, enum dma_da
 	return ret;
 }
 
-static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
+static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf,
+					   enum dma_data_direction dir)
 {
 	struct imported_sgt_info *imported;
 	int ret;
@@ -260,7 +259,8 @@ static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf, enum dma_data
 	return 0;
 }
 
-static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum)
+static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf,
+					  unsigned long pgnum)
 {
 	struct imported_sgt_info *imported;
 	int ret;
@@ -273,10 +273,12 @@ static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf, unsigned long
 	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_KMAP_ATOMIC);
 
-	return NULL; /* for now NULL.. need to return the address of mapped region */
+	/* TODO: NULL for now. Need to return the addr of mapped region */
+	return NULL;
 }
 
-static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum, void *vaddr)
+static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf,
+					   unsigned long pgnum, void *vaddr)
 {
 	struct imported_sgt_info *imported;
 	int ret;
@@ -322,7 +324,8 @@ static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum,
 					HYPER_DMABUF_OPS_KUNMAP);
 }
 
-static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf,
+				 struct vm_area_struct *vma)
 {
 	struct imported_sgt_info *imported;
 	int ret;
@@ -374,8 +377,8 @@ static const struct dma_buf_ops hyper_dmabuf_ops = {
 	.map_dma_buf = hyper_dmabuf_ops_map,
 	.unmap_dma_buf = hyper_dmabuf_ops_unmap,
 	.release = hyper_dmabuf_ops_release,
-	.begin_cpu_access = (void*)hyper_dmabuf_ops_begin_cpu_access,
-	.end_cpu_access = (void*)hyper_dmabuf_ops_end_cpu_access,
+	.begin_cpu_access = (void *)hyper_dmabuf_ops_begin_cpu_access,
+	.end_cpu_access = (void *)hyper_dmabuf_ops_end_cpu_access,
 	.map_atomic = hyper_dmabuf_ops_kmap_atomic,
 	.unmap_atomic = hyper_dmabuf_ops_kunmap_atomic,
 	.map = hyper_dmabuf_ops_kmap,
@@ -395,9 +398,8 @@ int hyper_dmabuf_export_fd(struct imported_sgt_info *imported, int flags)
 	 */
 	hyper_dmabuf_export_dma_buf(imported);
 
-	if (imported->dma_buf) {
+	if (imported->dma_buf)
 		fd = dma_buf_fd(imported->dma_buf, flags);
-	}
 
 	return fd;
 }
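
The query code below computes the size of a not-yet-mapped import
from nents, the first-page offset and the last-page length. A worked
instance of that formula, assuming 4 KiB pages:

  #include <stdio.h>

  #define PAGE_SIZE 4096
  #define HYPER_DMABUF_SIZE(nents, first_offset, last_len) \
          ((nents)*PAGE_SIZE - (first_offset) - PAGE_SIZE + (last_len))

  int main(void)
  {
          /* assumed example: 3 pages, data starts 512 bytes into
           * the first page and fills 100 bytes of the last page
           */
          printf("%d bytes\n", HYPER_DMABUF_SIZE(3, 512, 100));
          /* 3*4096 - 512 - 4096 + 100 = 7780 */
          return 0;
  }
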
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c
index 36e888c..1f2f56b 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c
@@ -36,63 +36,63 @@
 	((nents)*PAGE_SIZE - (first_offset) - PAGE_SIZE + (last_len))
 
 int hyper_dmabuf_query_exported(struct exported_sgt_info *exported,
-				int query, unsigned long* info)
+				int query, unsigned long *info)
 {
-	switch (query)
-	{
-		case HYPER_DMABUF_QUERY_TYPE:
-			*info = EXPORTED;
-			break;
-
-		/* exporting domain of this specific dmabuf*/
-		case HYPER_DMABUF_QUERY_EXPORTER:
-			*info = HYPER_DMABUF_DOM_ID(exported->hid);
-			break;
-
-		/* importing domain of this specific dmabuf */
-		case HYPER_DMABUF_QUERY_IMPORTER:
-			*info = exported->rdomid;
-			break;
-
-		/* size of dmabuf in byte */
-		case HYPER_DMABUF_QUERY_SIZE:
-			*info = exported->dma_buf->size;
-			break;
-
-		/* whether the buffer is used by importer */
-		case HYPER_DMABUF_QUERY_BUSY:
-			*info = (exported->active > 0);
-			break;
-
-		/* whether the buffer is unexported */
-		case HYPER_DMABUF_QUERY_UNEXPORTED:
-			*info = !exported->valid;
-			break;
-
-		/* whether the buffer is scheduled to be unexported */
-		case HYPER_DMABUF_QUERY_DELAYED_UNEXPORTED:
-			*info = !exported->unexport_sched;
-			break;
-
-		/* size of private info attached to buffer */
-		case HYPER_DMABUF_QUERY_PRIV_INFO_SIZE:
-			*info = exported->sz_priv;
-			break;
-
-		/* copy private info attached to buffer */
-		case HYPER_DMABUF_QUERY_PRIV_INFO:
-			if (exported->sz_priv > 0) {
-				int n;
-				n = copy_to_user((void __user*) *info,
-						exported->priv,
-						exported->sz_priv);
-				if (n != 0)
-					return -EINVAL;
-			}
-			break;
-
-		default:
-			return -EINVAL;
+	switch (query) {
+	case HYPER_DMABUF_QUERY_TYPE:
+		*info = EXPORTED;
+		break;
+
+	/* exporting domain of this specific dmabuf */
+	case HYPER_DMABUF_QUERY_EXPORTER:
+		*info = HYPER_DMABUF_DOM_ID(exported->hid);
+		break;
+
+	/* importing domain of this specific dmabuf */
+	case HYPER_DMABUF_QUERY_IMPORTER:
+		*info = exported->rdomid;
+		break;
+
+	/* size of dmabuf in bytes */
+	case HYPER_DMABUF_QUERY_SIZE:
+		*info = exported->dma_buf->size;
+		break;
+
+	/* whether the buffer is used by importer */
+	case HYPER_DMABUF_QUERY_BUSY:
+		*info = (exported->active > 0);
+		break;
+
+	/* whether the buffer is unexported */
+	case HYPER_DMABUF_QUERY_UNEXPORTED:
+		*info = !exported->valid;
+		break;
+
+	/* whether the buffer is scheduled to be unexported */
+	case HYPER_DMABUF_QUERY_DELAYED_UNEXPORTED:
+		*info = !exported->unexport_sched;
+		break;
+
+	/* size of private info attached to buffer */
+	case HYPER_DMABUF_QUERY_PRIV_INFO_SIZE:
+		*info = exported->sz_priv;
+		break;
+
+	/* copy private info attached to buffer */
+	case HYPER_DMABUF_QUERY_PRIV_INFO:
+		if (exported->sz_priv > 0) {
+			int n;
+
+			n = copy_to_user((void __user *) *info,
+					exported->priv,
+					exported->sz_priv);
+			if (n != 0)
+				return -EINVAL;
+		}
+		break;
+
+	default:
+		return -EINVAL;
 	}
 
 	return 0;
@@ -102,66 +102,70 @@ int hyper_dmabuf_query_exported(struct exported_sgt_info *exported,
 int hyper_dmabuf_query_imported(struct imported_sgt_info *imported,
 				int query, unsigned long *info)
 {
-	switch (query)
-	{
-		case HYPER_DMABUF_QUERY_TYPE:
-			*info = IMPORTED;
-			break;
-
-		/* exporting domain of this specific dmabuf*/
-		case HYPER_DMABUF_QUERY_EXPORTER:
-			*info = HYPER_DMABUF_DOM_ID(imported->hid);
-			break;
-
-		/* importing domain of this specific dmabuf */
-		case HYPER_DMABUF_QUERY_IMPORTER:
-			*info = hy_drv_priv->domid;
-			break;
-
-		/* size of dmabuf in byte */
-		case HYPER_DMABUF_QUERY_SIZE:
-			if (imported->dma_buf) {
-				/* if local dma_buf is created (if it's ever mapped),
-				 * retrieve it directly from struct dma_buf *
-				 */
-				*info = imported->dma_buf->size;
-			} else {
-				/* calcuate it from given nents, frst_ofst and last_len */
-				*info = HYPER_DMABUF_SIZE(imported->nents,
-							  imported->frst_ofst,
-							  imported->last_len);
-			}
-			break;
-
-		/* whether the buffer is used or not */
-		case HYPER_DMABUF_QUERY_BUSY:
-			/* checks if it's used by importer */
-			*info = (imported->importers > 0);
-			break;
-
-		/* whether the buffer is unexported */
-		case HYPER_DMABUF_QUERY_UNEXPORTED:
-			*info = !imported->valid;
-			break;
-		/* size of private info attached to buffer */
-		case HYPER_DMABUF_QUERY_PRIV_INFO_SIZE:
-			*info = imported->sz_priv;
-			break;
-
-		/* copy private info attached to buffer */
-		case HYPER_DMABUF_QUERY_PRIV_INFO:
-			if (imported->sz_priv > 0) {
-				int n;
-				n = copy_to_user((void __user*) *info,
-						imported->priv,
-						imported->sz_priv);
-				if (n != 0)
-					return -EINVAL;
-			}
-			break;
-
-		default:
-			return -EINVAL;
+	switch (query) {
+	case HYPER_DMABUF_QUERY_TYPE:
+		*info = IMPORTED;
+		break;
+
+	/* exporting domain of this specific dmabuf */
+	case HYPER_DMABUF_QUERY_EXPORTER:
+		*info = HYPER_DMABUF_DOM_ID(imported->hid);
+		break;
+
+	/* importing domain of this specific dmabuf */
+	case HYPER_DMABUF_QUERY_IMPORTER:
+		*info = hy_drv_priv->domid;
+		break;
+
+	/* size of dmabuf in bytes */
+	case HYPER_DMABUF_QUERY_SIZE:
+		if (imported->dma_buf) {
+			/* if local dma_buf is created (if it's
+			 * ever mapped), retrieve it directly
+			 * from struct dma_buf *
+			 */
+			*info = imported->dma_buf->size;
+		} else {
+			/* calculate it from given nents, frst_ofst
+			 * and last_len
+			 */
+			*info = HYPER_DMABUF_SIZE(imported->nents,
+						  imported->frst_ofst,
+						  imported->last_len);
+		}
+		break;
+
+	/* whether the buffer is used or not */
+	case HYPER_DMABUF_QUERY_BUSY:
+		/* checks if it's used by importer */
+		*info = (imported->importers > 0);
+		break;
+
+	/* whether the buffer is unexported */
+	case HYPER_DMABUF_QUERY_UNEXPORTED:
+		*info = !imported->valid;
+		break;
+
+	/* size of private info attached to buffer */
+	case HYPER_DMABUF_QUERY_PRIV_INFO_SIZE:
+		*info = imported->sz_priv;
+		break;
+
+	/* copy private info attached to buffer */
+	case HYPER_DMABUF_QUERY_PRIV_INFO:
+		if (imported->sz_priv > 0) {
+			int n;
+
+			n = copy_to_user((void __user *)*info,
+					imported->priv,
+					imported->sz_priv);
+			if (n != 0)
+				return -EINVAL;
+		}
+		break;
+
+	default:
+		return -EINVAL;
 	}
 
 	return 0;
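
/* A minimal userspace sketch of the two-step private-info query pattern
 * implemented above: ask for HYPER_DMABUF_QUERY_PRIV_INFO_SIZE first,
 * then pass a buffer pointer through the unsigned long info field so
 * the driver can copy_to_user() into it.  The ioctl number and the
 * field names of struct ioctl_hyper_dmabuf_query used below are
 * illustrative assumptions, not taken from this patch.
 */
#include <stdlib.h>
#include <sys/ioctl.h>
#include <xen/hyper_dmabuf.h>

static char *read_priv_info(int fd, hyper_dmabuf_id_t hid, size_t *len)
{
	struct ioctl_hyper_dmabuf_query q = { .hid = hid };
	char *buf;

	q.item = HYPER_DMABUF_QUERY_PRIV_INFO_SIZE;
	if (ioctl(fd, IOCTL_HYPER_DMABUF_QUERY, &q) || !q.info)
		return NULL;

	buf = malloc(q.info);
	if (!buf)
		return NULL;

	*len = q.info;
	q.item = HYPER_DMABUF_QUERY_PRIV_INFO;
	q.info = (unsigned long)buf;	/* kernel copies sz_priv bytes here */
	if (ioctl(fd, IOCTL_HYPER_DMABUF_QUERY, &q)) {
		free(buf);
		return NULL;
	}

	return buf;
}
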
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
index 01ec98c..c9fe040 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -76,11 +76,8 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 	case HYPER_DMABUF_OPS_ATTACH:
 		attachl = kcalloc(1, sizeof(*attachl), GFP_KERNEL);
 
-		if (!attachl) {
-			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_ATTACH\n");
+		if (!attachl)
 			return -ENOMEM;
-		}
 
 		attachl->attach = dma_buf_attach(exported->dma_buf,
 						 hy_drv_priv->dev);
@@ -126,13 +123,11 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 
 		sgtl = kcalloc(1, sizeof(*sgtl), GFP_KERNEL);
 
-		if (!sgtl) {
-			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_MAP\n");
+		if (!sgtl)
 			return -ENOMEM;
-		}
 
-		sgtl->sgt = dma_buf_map_attachment(attachl->attach, DMA_BIDIRECTIONAL);
+		sgtl->sgt = dma_buf_map_attachment(attachl->attach,
+						   DMA_BIDIRECTIONAL);
 		if (!sgtl->sgt) {
 			kfree(sgtl);
 			dev_err(hy_drv_priv->dev,
@@ -148,7 +143,7 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 			dev_err(hy_drv_priv->dev,
 				"remote sync::HYPER_DMABUF_OPS_UNMAP\n");
 			dev_err(hy_drv_priv->dev,
-				"no more SGT or attachment left to be unmapped\n");
+				"no SGT or attach left to be unmapped\n");
 			return -EFAULT;
 		}
 
@@ -165,23 +160,28 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 
 	case HYPER_DMABUF_OPS_RELEASE:
 		dev_dbg(hy_drv_priv->dev,
-			"Buffer {id:%d key:%d %d %d} released, references left: %d\n",
-			 exported->hid.id, exported->hid.rng_key[0], exported->hid.rng_key[1],
-			 exported->hid.rng_key[2], exported->active - 1);
+			"id:%d key:%d %d %d} released, ref left: %d\n",
+			 exported->hid.id, exported->hid.rng_key[0],
+			 exported->hid.rng_key[1], exported->hid.rng_key[2],
+			 exported->active - 1);
+
+		exported->active--;
 
-                exported->active--;
-		/* If there are still importers just break, if no then continue with final cleanup */
+		/* If there are still importers just break; if not,
+		 * continue with final cleanup
+		 */
 		if (exported->active)
 			break;
 
-		/*
-		 * Importer just released buffer fd, check if there is any other importer still using it.
-		 * If not and buffer was unexported, clean up shared data and remove that buffer.
+		/* Importer just released buffer fd, check if there is
+		 * any other importer still using it.
+		 * If not and buffer was unexported, clean up shared
+		 * data and remove that buffer.
 		 */
 		dev_dbg(hy_drv_priv->dev,
 			"Buffer {id:%d key:%d %d %d} final released\n",
-			exported->hid.id, exported->hid.rng_key[0], exported->hid.rng_key[1],
-			exported->hid.rng_key[2]);
+			exported->hid.id, exported->hid.rng_key[0],
+			exported->hid.rng_key[1], exported->hid.rng_key[2]);
 
 		if (!exported->valid && !exported->active &&
 		    !exported->unexport_sched) {
@@ -195,19 +195,21 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 		break;
 
 	case HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS:
-		ret = dma_buf_begin_cpu_access(exported->dma_buf, DMA_BIDIRECTIONAL);
+		ret = dma_buf_begin_cpu_access(exported->dma_buf,
+					       DMA_BIDIRECTIONAL);
 		if (ret) {
 			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS\n");
+				"HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS\n");
 			return ret;
 		}
 		break;
 
 	case HYPER_DMABUF_OPS_END_CPU_ACCESS:
-		ret = dma_buf_end_cpu_access(exported->dma_buf, DMA_BIDIRECTIONAL);
+		ret = dma_buf_end_cpu_access(exported->dma_buf,
+					     DMA_BIDIRECTIONAL);
 		if (ret) {
 			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_END_CPU_ACCESS\n");
+				"HYPER_DMABUF_OPS_END_CPU_ACCESS\n");
 			return ret;
 		}
 		break;
@@ -215,22 +217,21 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 	case HYPER_DMABUF_OPS_KMAP_ATOMIC:
 	case HYPER_DMABUF_OPS_KMAP:
 		va_kmapl = kcalloc(1, sizeof(*va_kmapl), GFP_KERNEL);
-		if (!va_kmapl) {
-			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
+		if (!va_kmapl)
 			return -ENOMEM;
-		}
 
 		/* dummy kmapping of 1 page */
 		if (ops == HYPER_DMABUF_OPS_KMAP_ATOMIC)
-			va_kmapl->vaddr = dma_buf_kmap_atomic(exported->dma_buf, 1);
+			va_kmapl->vaddr = dma_buf_kmap_atomic(
+						exported->dma_buf, 1);
 		else
-			va_kmapl->vaddr = dma_buf_kmap(exported->dma_buf, 1);
+			va_kmapl->vaddr = dma_buf_kmap(
+						exported->dma_buf, 1);
 
 		if (!va_kmapl->vaddr) {
 			kfree(va_kmapl);
 			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
+				"HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
 			return -ENOMEM;
 		}
 		list_add(&va_kmapl->list, &exported->va_kmapped->list);
@@ -240,7 +241,7 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 	case HYPER_DMABUF_OPS_KUNMAP:
 		if (list_empty(&exported->va_kmapped->list)) {
 			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
+				"HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
 			dev_err(hy_drv_priv->dev,
 				"no more dmabuf VA to be freed\n");
 			return -EFAULT;
@@ -250,15 +251,17 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 					    struct kmap_vaddr_list, list);
 		if (!va_kmapl->vaddr) {
 			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
+				"HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
 			return PTR_ERR(va_kmapl->vaddr);
 		}
 
 		/* unmapping 1 page */
 		if (ops == HYPER_DMABUF_OPS_KUNMAP_ATOMIC)
-			dma_buf_kunmap_atomic(exported->dma_buf, 1, va_kmapl->vaddr);
+			dma_buf_kunmap_atomic(exported->dma_buf,
+					      1, va_kmapl->vaddr);
 		else
-			dma_buf_kunmap(exported->dma_buf, 1, va_kmapl->vaddr);
+			dma_buf_kunmap(exported->dma_buf,
+				       1, va_kmapl->vaddr);
 
 		list_del(&va_kmapl->list);
 		kfree(va_kmapl);
@@ -266,7 +269,8 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 
 	case HYPER_DMABUF_OPS_MMAP:
 		/* currently not supported: looking for a way to create
-		 * a dummy vma */
+		 * a dummy vma
+		 */
 		dev_warn(hy_drv_priv->dev,
 			 "remote sync::sychronized mmap is not supported\n");
 		break;
@@ -274,11 +278,8 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 	case HYPER_DMABUF_OPS_VMAP:
 		va_vmapl = kcalloc(1, sizeof(*va_vmapl), GFP_KERNEL);
 
-		if (!va_vmapl) {
-			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_VMAP\n");
+		if (!va_vmapl)
 			return -ENOMEM;
-		}
 
 		/* dummy vmapping */
 		va_vmapl->vaddr = dma_buf_vmap(exported->dma_buf);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
index 315c354..e9299e5 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
@@ -89,9 +89,8 @@ struct pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
 	if (!pg_info)
 		return NULL;
 
-	pg_info->pgs = kmalloc(sizeof(struct page *) *
-			       hyper_dmabuf_get_num_pgs(sgt),
-			       GFP_KERNEL);
+	pg_info->pgs = kmalloc_array(hyper_dmabuf_get_num_pgs(sgt),
+				     sizeof(struct page *), GFP_KERNEL);
 
 	if (!pg_info->pgs) {
 		kfree(pg_info);
@@ -137,17 +136,17 @@ struct pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
 }
 
 /* create sg_table with given pages and other parameters */
-struct sg_table* hyper_dmabuf_create_sgt(struct page **pgs,
-					 int frst_ofst, int last_len, int nents)
+struct sg_table *hyper_dmabuf_create_sgt(struct page **pgs,
+					 int frst_ofst, int last_len,
+					 int nents)
 {
 	struct sg_table *sgt;
 	struct scatterlist *sgl;
 	int i, ret;
 
 	sgt = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
-	if (!sgt) {
+	if (!sgt)
 		return NULL;
-	}
 
 	ret = sg_alloc_table(sgt, nents, GFP_KERNEL);
 	if (ret) {
@@ -163,7 +162,7 @@ struct sg_table* hyper_dmabuf_create_sgt(struct page **pgs,
 
 	sg_set_page(sgl, pgs[0], PAGE_SIZE-frst_ofst, frst_ofst);
 
-	for (i=1; i<nents-1; i++) {
+	for (i = 1; i < nents-1; i++) {
 		sgl = sg_next(sgl);
 		sg_set_page(sgl, pgs[i], PAGE_SIZE, 0);
 	}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
index 930bade..152f78c 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
@@ -31,7 +31,7 @@ int dmabuf_refcount(struct dma_buf *dma_buf);
 struct pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt);
 
 /* create sg_table with given pages and other parameters */
-struct sg_table* hyper_dmabuf_create_sgt(struct page **pgs,
+struct sg_table *hyper_dmabuf_create_sgt(struct page **pgs,
 					 int frst_ofst, int last_len,
 					 int nents);
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
index 8a612d1..a11f804 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -51,67 +51,91 @@ struct vmap_vaddr_list {
 
 /* Exporter builds pages_info before sharing pages */
 struct pages_info {
-        int frst_ofst; /* offset of data in the first page */
-        int last_len; /* length of data in the last page */
-        int nents; /* # of pages */
-        struct page **pgs; /* pages that contains reference numbers of shared pages*/
+	int frst_ofst;
+	int last_len;
+	int nents;
+	struct page **pgs;
 };
 
 
 /* Exporter stores references to sgt in a hash table
- * Exporter keeps these references for synchronization and tracking purposes
+ * Exporter keeps these references for synchronization
+ * and tracking purposes
  */
 struct exported_sgt_info {
-        hyper_dmabuf_id_t hid; /* unique id to reference dmabuf in remote domain */
-	int rdomid; /* domain importing this sgt */
+	hyper_dmabuf_id_t hid;
+
+	/* VM ID of importer */
+	int rdomid;
 
-	struct dma_buf *dma_buf; /* needed to store this for freeing it later */
+	struct dma_buf *dma_buf;
 	int nents;
 
-	/* list of remote activities on dma_buf */
+	/* list for tracking activities on dma_buf */
 	struct sgt_list *active_sgts;
 	struct attachment_list *active_attached;
 	struct kmap_vaddr_list *va_kmapped;
 	struct vmap_vaddr_list *va_vmapped;
 
-	bool valid; /* set to 0 once unexported. Needed to prevent further mapping by importer */
-	int active; /* locally shared on importer's side */
-	void *refs_info; /* hypervisor-specific info for the references */
+	/* set to 0 when unexported. Importer doesn't
+	 * do a new mapping of buffer if valid == false
+	 */
+	bool valid;
+
+	/* active == true if the buffer is actively used
+	 * (mapped) by importer
+	 */
+	int active;
+
+	/* hypervisor specific reference data for shared pages */
+	void *refs_info;
+
 	struct delayed_work unexport;
 	bool unexport_sched;
 
-	/* owner of buffer
-	 * TODO: that is naiive as buffer may be reused by
-	 * another userspace app, so here list of struct file should be kept
-	 * and emergency unexport should be executed only after last of buffer
-	 * uses releases hyper_dmabuf device
+	/* list for file pointers associated with all user space
+	 * applications that have exported this same buffer to
+	 * another VM. This needs to be tracked to know whether
+	 * the buffer can be completely freed.
 	 */
 	struct file *filp;
 
+	/* size of private */
 	size_t sz_priv;
-	char *priv; /* device specific info (e.g. image's meta info?) */
+
+	/* private data associated with the exported buffer */
+	char *priv;
 };
 
-/* Importer store references (before mapping) on shared pages
- * Importer store these references in the table and map it in
- * its own memory map once userspace asks for reference for the buffer */
+/* imported_sgt_info contains information about an imported DMA_BUF.
+ * This info is kept in the IMPORT list and asynchronously retrieved
+ * and used to map the DMA_BUF on the importer VM's side upon an
+ * export fd ioctl request from user-space.
+ */
+
 struct imported_sgt_info {
 	hyper_dmabuf_id_t hid; /* unique id for shared dmabuf imported */
 
-	int ref_handle; /* reference number of top level addressing page of shared pages */
-	int frst_ofst;	/* start offset in first shared page */
-	int last_len;	/* length of data in the last shared page */
-	int nents;	/* number of pages to be shared */
+	/* hypervisor-specific handle to pages */
+	int ref_handle;
+
+	/* offset and size info of DMA_BUF */
+	int frst_ofst;
+	int last_len;
+	int nents;
 
 	struct dma_buf *dma_buf;
-	struct sg_table *sgt; /* sgt pointer after importing buffer */
+	struct sg_table *sgt;
 
 	void *refs_info;
 	bool valid;
 	int importers;
 
+	/* size of private */
 	size_t sz_priv;
-	char *priv; /* device specific info (e.g. image's meta info?) */
+
+	/* private data associated with the exported buffer */
+	char *priv;
 };
 
 #endif /* __HYPER_DMABUF_STRUCT_H__ */
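
/* A self-contained sketch of the page-layout bookkeeping kept in
 * pages_info and imported_sgt_info above, assuming 4 KiB pages:
 * frst_ofst is the data offset inside the first shared page, last_len
 * the number of valid bytes in the last one.  The assert reproduces
 * the HYPER_DMABUF_SIZE() arithmetic from hyper_dmabuf_query.c.
 */
#include <assert.h>
#include <stddef.h>

#define PG_SIZE 4096UL

static void layout(size_t start, size_t len,
		   size_t *frst_ofst, size_t *last_len, size_t *nents)
{
	size_t end = start + len;

	*frst_ofst = start % PG_SIZE;
	*nents = (end - 1) / PG_SIZE - start / PG_SIZE + 1;
	*last_len = end - ((end - 1) / PG_SIZE) * PG_SIZE;
}

int main(void)
{
	size_t fo, ll, n;

	layout(100, 10000, &fo, &ll, &n);	/* 3 pages, fo=100, ll=1908 */
	assert(n * PG_SIZE - fo - PG_SIZE + ll == 10000);
	return 0;
}
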
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index f70b4ea..05f3521 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -41,12 +41,10 @@
 #include "hyper_dmabuf_xen_comm_list.h"
 #include "../hyper_dmabuf_drv.h"
 
-static int export_req_id = 0;
+static int export_req_id;
 
 struct hyper_dmabuf_req req_pending = {0};
 
-extern int xenstored_ready;
-
 static void xen_get_domid_delayed(struct work_struct *unused);
 static void xen_init_comm_env_delayed(struct work_struct *unused);
 
@@ -160,15 +158,16 @@ void xen_get_domid_delayed(struct work_struct *unused)
 	int domid, ret;
 
 	/* scheduling another work if driver is still running
-	 * and xenstore has not been initialized */
+	 * and xenstore has not been initialized
+	 */
 	if (likely(xenstored_ready == 0)) {
 		dev_dbg(hy_drv_priv->dev,
-			"Xenstore is not quite ready yet. Will retry it in 500ms\n");
+			"Xenstore is not ready yet. Will retry in 500ms\n");
 		schedule_delayed_work(&get_vm_id_work, msecs_to_jiffies(500));
 	} else {
-	        xenbus_transaction_start(&xbt);
+		xenbus_transaction_start(&xbt);
 
-		ret = xenbus_scanf(xbt, "domid","", "%d", &domid);
+		ret = xenbus_scanf(xbt, "domid", "", "%d", &domid);
 
 		if (ret <= 0)
 			domid = -1;
@@ -176,14 +175,17 @@ void xen_get_domid_delayed(struct work_struct *unused)
 		xenbus_transaction_end(xbt, 0);
 
 		/* try again since -1 is an invalid id for domain
-		 * (but only if driver is still running) */
+		 * (but only if driver is still running)
+		 */
 		if (unlikely(domid == -1)) {
 			dev_dbg(hy_drv_priv->dev,
 				"domid==-1 is invalid. Will retry it in 500ms\n");
-			schedule_delayed_work(&get_vm_id_work, msecs_to_jiffies(500));
+			schedule_delayed_work(&get_vm_id_work,
+					      msecs_to_jiffies(500));
 		} else {
 			dev_info(hy_drv_priv->dev,
-				"Successfully retrieved domid from Xenstore:%d\n", domid);
+				 "Successfully retrieved domid from Xenstore:%d\n",
+				 domid);
 			hy_drv_priv->domid = domid;
 		}
 	}
@@ -199,21 +201,20 @@ int hyper_dmabuf_xen_get_domid(void)
 		return -1;
 	}
 
-        xenbus_transaction_start(&xbt);
+	xenbus_transaction_start(&xbt);
 
-        if (!xenbus_scanf(xbt, "domid","", "%d", &domid)) {
+	if (!xenbus_scanf(xbt, "domid", "", "%d", &domid))
 		domid = -1;
-        }
 
-        xenbus_transaction_end(xbt, 0);
+	xenbus_transaction_end(xbt, 0);
 
 	return domid;
 }
 
 static int xen_comm_next_req_id(void)
 {
-        export_req_id++;
-        return export_req_id;
+	export_req_id++;
+	return export_req_id;
 }
 
 /* For now cache latest rings as global variables TODO: keep them in list */
@@ -236,19 +237,18 @@ static irqreturn_t back_ring_isr(int irq, void *info);
 static void remote_dom_exporter_watch_cb(struct xenbus_watch *watch,
 					 const char *path, const char *token)
 {
-	int rdom,ret;
+	int rdom, ret;
 	uint32_t grefid, port;
 	struct xen_comm_rx_ring_info *ring_info;
 
 	/* Check which domain has changed its exporter rings */
 	ret = sscanf(watch->node, "/local/domain/%d/", &rdom);
-	if (ret <= 0) {
+	if (ret <= 0)
 		return;
-	}
 
 	/* Check if we have importer ring for given remote domain already
-	 * created */
-
+	 * created
+	 */
 	ring_info = xen_comm_find_rx_ring(rdom);
 
 	/* Try to query remote domain exporter ring details - if
@@ -298,11 +298,8 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 
 	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
 
-	if (!ring_info) {
-		dev_err(hy_drv_priv->dev,
-			"No more spae left\n");
+	if (!ring_info)
 		return -ENOMEM;
-	}
 
 	/* from exporter to importer */
 	shared_ring = (void *)__get_free_pages(GFP_KERNEL, 1);
@@ -318,8 +315,8 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 	FRONT_RING_INIT(&(ring_info->ring_front), sring, PAGE_SIZE);
 
 	ring_info->gref_ring = gnttab_grant_foreign_access(domid,
-							   virt_to_mfn(shared_ring),
-							   0);
+						virt_to_mfn(shared_ring),
+						0);
 	if (ring_info->gref_ring < 0) {
 		/* fail to get gref */
 		kfree(ring_info);
@@ -340,7 +337,7 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 	/* setting up interrupt */
 	ret = bind_evtchn_to_irqhandler(alloc_unbound.port,
 					front_ring_isr, 0,
-					NULL, (void*) ring_info);
+					NULL, (void *) ring_info);
 
 	if (ret < 0) {
 		dev_err(hy_drv_priv->dev,
@@ -368,25 +365,24 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 
 	ret = xen_comm_add_tx_ring(ring_info);
 
-	ret = xen_comm_expose_ring_details(hyper_dmabuf_xen_get_domid(), domid,
-					   ring_info->gref_ring, ring_info->port);
+	ret = xen_comm_expose_ring_details(hyper_dmabuf_xen_get_domid(),
+					   domid,
+					   ring_info->gref_ring,
+					   ring_info->port);
 
-	/*
-	 * Register watch for remote domain exporter ring.
+	/* Register watch for remote domain exporter ring.
 	 * When remote domain will setup its exporter ring,
 	 * we will automatically connect our importer ring to it.
 	 */
 	ring_info->watch.callback = remote_dom_exporter_watch_cb;
-	ring_info->watch.node = (const char*) kmalloc(sizeof(char) * 255, GFP_KERNEL);
+	ring_info->watch.node = kmalloc(255, GFP_KERNEL);
 
 	if (!ring_info->watch.node) {
-		dev_err(hy_drv_priv->dev,
-			"No more space left\n");
 		kfree(ring_info);
 		return -ENOMEM;
 	}
 
-	sprintf((char*)ring_info->watch.node,
+	sprintf((char *)ring_info->watch.node,
 		"/local/domain/%d/data/hyper_dmabuf/%d/port",
 		domid, hyper_dmabuf_xen_get_domid());
 
@@ -404,9 +400,8 @@ void hyper_dmabuf_xen_cleanup_tx_rbuf(int domid)
 	/* check if we at all have exporter ring for given rdomain */
 	ring_info = xen_comm_find_tx_ring(domid);
 
-	if (!ring_info) {
+	if (!ring_info)
 		return;
-	}
 
 	xen_comm_remove_tx_ring(domid);
 
@@ -416,7 +411,7 @@ void hyper_dmabuf_xen_cleanup_tx_rbuf(int domid)
 	/* No need to close communication channel, will be done by
 	 * this function
 	 */
-	unbind_from_irqhandler(ring_info->irq, (void*) ring_info);
+	unbind_from_irqhandler(ring_info->irq, (void *) ring_info);
 
 	/* No need to free sring page, will be freed by this function
 	 * when other side will end its access
@@ -430,7 +425,8 @@ void hyper_dmabuf_xen_cleanup_tx_rbuf(int domid)
 	if (!rx_ring_info)
 		return;
 
-	BACK_RING_INIT(&(rx_ring_info->ring_back), rx_ring_info->ring_back.sring,
+	BACK_RING_INIT(&(rx_ring_info->ring_back),
+		       rx_ring_info->ring_back.sring,
 		       PAGE_SIZE);
 }
 
@@ -473,11 +469,8 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 
 	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
 
-	if (!ring_info) {
-		dev_err(hy_drv_priv->dev,
-			"No memory left to be allocated\n");
+	if (!ring_info)
 		return -ENOMEM;
-	}
 
 	ring_info->sdomain = domid;
 	ring_info->evtchn = rx_port;
@@ -485,8 +478,6 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 	map_ops = kmalloc(sizeof(*map_ops), GFP_KERNEL);
 
 	if (!map_ops) {
-		dev_err(hy_drv_priv->dev,
-			"No memory left to be allocated\n");
 		ret = -ENOMEM;
 		goto fail_no_map_ops;
 	}
@@ -497,11 +488,13 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 	}
 
 	gnttab_set_map_op(&map_ops[0],
-			  (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
+			  (unsigned long)pfn_to_kaddr(
+					page_to_pfn(shared_ring)),
 			  GNTMAP_host_map, rx_gref, domid);
 
 	gnttab_set_unmap_op(&ring_info->unmap_op,
-			    (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
+			    (unsigned long)pfn_to_kaddr(
+					page_to_pfn(shared_ring)),
 			    GNTMAP_host_map, -1);
 
 	ret = gnttab_map_refs(map_ops, NULL, &shared_ring, 1);
@@ -542,13 +535,12 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 	ret = xen_comm_add_rx_ring(ring_info);
 
 	/* Setup communication channel in opposite direction */
-	if (!xen_comm_find_tx_ring(domid)) {
+	if (!xen_comm_find_tx_ring(domid))
 		ret = hyper_dmabuf_xen_init_tx_rbuf(domid);
-	}
 
 	ret = request_irq(ring_info->irq,
 			  back_ring_isr, 0,
-			  NULL, (void*)ring_info);
+			  NULL, (void *)ring_info);
 
 	return ret;
 
@@ -577,7 +569,7 @@ void hyper_dmabuf_xen_cleanup_rx_rbuf(int domid)
 	xen_comm_remove_rx_ring(domid);
 
 	/* no need to close event channel, will be done by that function */
-	unbind_from_irqhandler(ring_info->irq, (void*)ring_info);
+	unbind_from_irqhandler(ring_info->irq, (void *)ring_info);
 
 	/* unmapping shared ring page */
 	shared_ring = virt_to_page(ring_info->ring_back.sring);
@@ -636,7 +628,8 @@ static void xen_rx_ch_add_delayed(struct work_struct *unused)
 
 				if (!ret)
 					dev_info(hy_drv_priv->dev,
-						 "Finishing up setting up rx channel for domain %d\n", i);
+						 "Done rx ch init for VM %d\n",
+						 i);
 			}
 		}
 
@@ -654,7 +647,8 @@ void xen_init_comm_env_delayed(struct work_struct *unused)
 
 	/* scheduling another work if driver is still running
 	 * and xenstore hasn't been initialized or dom_id hasn't
-	 * been correctly retrieved. */
+	 * been correctly retrieved.
+	 */
 	if (likely(xenstored_ready == 0 ||
 	    hy_drv_priv->domid == -1)) {
 		dev_dbg(hy_drv_priv->dev,
@@ -778,9 +772,8 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req,
 	ring->req_prod_pvt++;
 
 	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(ring, notify);
-	if (notify) {
+	if (notify)
 		notify_remote_via_irq(ring_info->irq);
-	}
 
 	if (wait) {
 		while (timeout--) {
@@ -792,24 +785,29 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req,
 
 		if (timeout < 0) {
 			mutex_unlock(&ring_info->lock);
-			dev_err(hy_drv_priv->dev, "request timed-out\n");
+			dev_err(hy_drv_priv->dev,
+				"request timed-out\n");
 			return -EBUSY;
 		}
 
 		mutex_unlock(&ring_info->lock);
 		do_gettimeofday(&tv_end);
 
-		/* checking time duration for round-trip of a request for debugging */
+		/* checking time duration for round-trip of a request
+		 * for debugging
+		 */
 		if (tv_end.tv_usec >= tv_start.tv_usec) {
 			tv_diff.tv_sec = tv_end.tv_sec-tv_start.tv_sec;
 			tv_diff.tv_usec = tv_end.tv_usec-tv_start.tv_usec;
 		} else {
 			tv_diff.tv_sec = tv_end.tv_sec-tv_start.tv_sec-1;
-			tv_diff.tv_usec = tv_end.tv_usec+1000000-tv_start.tv_usec;
+			tv_diff.tv_usec = tv_end.tv_usec+1000000-
+					  tv_start.tv_usec;
 		}
 
 		if (tv_diff.tv_sec != 0 && tv_diff.tv_usec > 16000)
-			dev_dbg(hy_drv_priv->dev, "send_req:time diff: %ld sec, %ld usec\n",
+			dev_dbg(hy_drv_priv->dev,
+				"send_req:time diff: %ld sec, %ld usec\n",
 				tv_diff.tv_sec, tv_diff.tv_usec);
 	}
 
@@ -850,23 +848,24 @@ static irqreturn_t back_ring_isr(int irq, void *info)
 			ret = hyper_dmabuf_msg_parse(ring_info->sdomain, &req);
 
 			if (ret > 0) {
-				/* preparing a response for the request and send it to
-				 * the requester
+				/* preparing a response for the request and
+				 * send it to the requester
 				 */
 				memcpy(&resp, &req, sizeof(resp));
-				memcpy(RING_GET_RESPONSE(ring, ring->rsp_prod_pvt),
+				memcpy(RING_GET_RESPONSE(ring,
+							 ring->rsp_prod_pvt),
 							 &resp, sizeof(resp));
 				ring->rsp_prod_pvt++;
 
 				dev_dbg(hy_drv_priv->dev,
-					"sending response to exporter for request id:%d\n",
+					"responding to exporter for req:%d\n",
 					resp.resp_id);
 
-				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(ring, notify);
+				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(ring,
+								     notify);
 
-				if (notify) {
+				if (notify)
 					notify_remote_via_irq(ring_info->irq);
-				}
 			}
 
 			RING_FINAL_CHECK_FOR_REQUESTS(ring, more_to_do);
@@ -905,41 +904,40 @@ static irqreturn_t front_ring_isr(int irq, void *info)
 			dev_dbg(hy_drv_priv->dev,
 				"getting response from importer\n");
 
-			if (req_pending.req_id == resp->resp_id) {
+			if (req_pending.req_id == resp->resp_id)
 				req_pending.stat = resp->stat;
-			}
 
 			if (resp->stat == HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP) {
 				/* parsing response */
 				ret = hyper_dmabuf_msg_parse(ring_info->rdomain,
-							(struct hyper_dmabuf_req *)resp);
+					(struct hyper_dmabuf_req *)resp);
 
 				if (ret < 0) {
 					dev_err(hy_drv_priv->dev,
-						"getting error while parsing response\n");
+						"err while parsing resp\n");
 				}
 			} else if (resp->stat == HYPER_DMABUF_REQ_PROCESSED) {
-				/* for debugging dma_buf remote synchronization */
+				/* for debugging dma_buf remote synch */
 				dev_dbg(hy_drv_priv->dev,
 					"original request = 0x%x\n", resp->cmd);
 				dev_dbg(hy_drv_priv->dev,
-					"Just got HYPER_DMABUF_REQ_PROCESSED\n");
+					"got HYPER_DMABUF_REQ_PROCESSED\n");
 			} else if (resp->stat == HYPER_DMABUF_REQ_ERROR) {
-				/* for debugging dma_buf remote synchronization */
+				/* for debugging dma_buf remote synch */
 				dev_dbg(hy_drv_priv->dev,
 					"original request = 0x%x\n", resp->cmd);
 				dev_dbg(hy_drv_priv->dev,
-					"Just got HYPER_DMABUF_REQ_ERROR\n");
+					"got HYPER_DMABUF_REQ_ERROR\n");
 			}
 		}
 
 		ring->rsp_cons = i;
 
-		if (i != ring->req_prod_pvt) {
+		if (i != ring->req_prod_pvt)
 			RING_FINAL_CHECK_FOR_RESPONSES(ring, more_to_do);
-		} else {
+		else
 			ring->sring->rsp_event = i+1;
-		}
+
 	} while (more_to_do);
 
 	return IRQ_HANDLED;
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
index 80741c1..8e2d1d0 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
@@ -29,23 +29,25 @@
 #include "xen/xenbus.h"
 #include "../hyper_dmabuf_msg.h"
 
+extern int xenstored_ready;
+
 DEFINE_RING_TYPES(xen_comm, struct hyper_dmabuf_req, struct hyper_dmabuf_resp);
 
 struct xen_comm_tx_ring_info {
-        struct xen_comm_front_ring ring_front;
+	struct xen_comm_front_ring ring_front;
 	int rdomain;
-        int gref_ring;
-        int irq;
-        int port;
+	int gref_ring;
+	int irq;
+	int port;
 	struct mutex lock;
 	struct xenbus_watch watch;
 };
 
 struct xen_comm_rx_ring_info {
-        int sdomain;
-        int irq;
-        int evtchn;
-        struct xen_comm_back_ring ring_back;
+	int sdomain;
+	int irq;
+	int evtchn;
+	struct xen_comm_back_ring ring_back;
 	struct gnttab_unmap_grant_ref unmap_op;
 };
 
@@ -70,6 +72,7 @@ void hyper_dmabuf_xen_cleanup_rx_rbuf(int domid);
 void hyper_dmabuf_xen_destroy_comm(void);
 
 /* send request to the remote domain */
-int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait);
+int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req,
+			      int wait);
 
 #endif // __HYPER_DMABUF_XEN_COMM_H__
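
/* A simplified stand-in for the synchronous send path declared above
 * (see hyper_dmabuf_xen_send_req() with wait == true): push a request,
 * then poll the shared pending slot until the remote side writes a
 * response or the retry budget runs out.  usleep() and a volatile flag
 * replace the real ring/event-channel machinery; the 1000 x 100 us
 * budget mirrors the timeout loop in hyper_dmabuf_xen_comm.c.
 */
#include <errno.h>
#include <unistd.h>

static volatile int pending_stat;	/* written by the "remote side" */

static int send_req_sync(void)
{
	int timeout = 1000;

	/* real code: RING_PUSH_REQUESTS_AND_CHECK_NOTIFY + irq notify */
	while (timeout--) {
		if (pending_stat != 0)	/* response arrived */
			return 0;
		usleep(100);
	}

	return -EBUSY;			/* request timed out */
}
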
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
index 7a8ec73..343aab3 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
@@ -31,7 +31,6 @@
 #include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/cdev.h>
-#include <asm/uaccess.h>
 #include <linux/hashtable.h>
 #include <xen/grant_table.h>
 #include "../hyper_dmabuf_drv.h"
@@ -41,7 +40,7 @@
 DECLARE_HASHTABLE(xen_comm_tx_ring_hash, MAX_ENTRY_TX_RING);
 DECLARE_HASHTABLE(xen_comm_rx_ring_hash, MAX_ENTRY_RX_RING);
 
-void xen_comm_ring_table_init()
+void xen_comm_ring_table_init(void)
 {
 	hash_init(xen_comm_rx_ring_hash);
 	hash_init(xen_comm_tx_ring_hash);
@@ -53,11 +52,8 @@ int xen_comm_add_tx_ring(struct xen_comm_tx_ring_info *ring_info)
 
 	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
 
-	if (!info_entry) {
-		dev_err(hy_drv_priv->dev,
-			"No memory left to be allocated\n");
+	if (!info_entry)
 		return -ENOMEM;
-	}
 
 	info_entry->info = ring_info;
 
@@ -73,11 +69,8 @@ int xen_comm_add_rx_ring(struct xen_comm_rx_ring_info *ring_info)
 
 	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
 
-	if (!info_entry) {
-		dev_err(hy_drv_priv->dev,
-			"No memory left to be allocated\n");
+	if (!info_entry)
 		return -ENOMEM;
-	}
 
 	info_entry->info = ring_info;
 
@@ -93,7 +86,7 @@ struct xen_comm_tx_ring_info *xen_comm_find_tx_ring(int domid)
 	int bkt;
 
 	hash_for_each(xen_comm_tx_ring_hash, bkt, info_entry, node)
-		if(info_entry->info->rdomain == domid)
+		if (info_entry->info->rdomain == domid)
 			return info_entry->info;
 
 	return NULL;
@@ -105,7 +98,7 @@ struct xen_comm_rx_ring_info *xen_comm_find_rx_ring(int domid)
 	int bkt;
 
 	hash_for_each(xen_comm_rx_ring_hash, bkt, info_entry, node)
-		if(info_entry->info->sdomain == domid)
+		if (info_entry->info->sdomain == domid)
 			return info_entry->info;
 
 	return NULL;
@@ -117,7 +110,7 @@ int xen_comm_remove_tx_ring(int domid)
 	int bkt;
 
 	hash_for_each(xen_comm_tx_ring_hash, bkt, info_entry, node)
-		if(info_entry->info->rdomain == domid) {
+		if (info_entry->info->rdomain == domid) {
 			hash_del(&info_entry->node);
 			kfree(info_entry);
 			return 0;
@@ -132,7 +125,7 @@ int xen_comm_remove_rx_ring(int domid)
 	int bkt;
 
 	hash_for_each(xen_comm_rx_ring_hash, bkt, info_entry, node)
-		if(info_entry->info->sdomain == domid) {
+		if (info_entry->info->sdomain == domid) {
 			hash_del(&info_entry->node);
 			kfree(info_entry);
 			return 0;
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
index cde8ade..8502fe7 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
@@ -31,13 +31,13 @@
 #define MAX_ENTRY_RX_RING 7
 
 struct xen_comm_tx_ring_info_entry {
-        struct xen_comm_tx_ring_info *info;
-        struct hlist_node node;
+	struct xen_comm_tx_ring_info *info;
+	struct hlist_node node;
 };
 
 struct xen_comm_rx_ring_info_entry {
-        struct xen_comm_rx_ring_info *info;
-        struct hlist_node node;
+	struct xen_comm_rx_ring_info *info;
+	struct hlist_node node;
 };
 
 void xen_comm_ring_table_init(void);
@@ -54,10 +54,14 @@ struct xen_comm_tx_ring_info *xen_comm_find_tx_ring(int domid);
 
 struct xen_comm_rx_ring_info *xen_comm_find_rx_ring(int domid);
 
-/* iterates over all exporter rings and calls provided function for each of them */
+/* iterates over all exporter rings and calls provided
+ * function for each of them
+ */
 void xen_comm_foreach_tx_ring(void (*func)(int domid));
 
-/* iterates over all importer rings and calls provided function for each of them */
+/* iterates over all importer rings and calls provided
+ * function for each of them
+ */
 void xen_comm_foreach_rx_ring(void (*func)(int domid));
 
 #endif // __HYPER_DMABUF_XEN_COMM_LIST_H__
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
index c5fec24..e5bff09 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
@@ -34,11 +34,20 @@ extern struct hyper_dmabuf_backend_ops xen_backend_ops;
  * when unsharing.
  */
 struct xen_shared_pages_info {
-        grant_ref_t lvl3_gref; /* top level refid */
-        grant_ref_t *lvl3_table; /* page of top level addressing, it contains refids of 2nd level pages */
-        grant_ref_t *lvl2_table; /* table of 2nd level pages, that contains refids to data pages */
-        struct gnttab_unmap_grant_ref* unmap_ops; /* unmap ops for mapped pages */
-        struct page **data_pages; /* data pages to be unmapped */
+	/* top level refid */
+	grant_ref_t lvl3_gref;
+
+	/* page of top level addressing, it contains refids of 2nd lvl pages */
+	grant_ref_t *lvl3_table;
+
+	/* table of 2nd level pages, that contains refids to data pages */
+	grant_ref_t *lvl2_table;
+
+	/* unmap ops for mapped pages */
+	struct gnttab_unmap_grant_ref *unmap_ops;
+
+	/* data pages to be unmapped */
+	struct page **data_pages;
 };
 
 #endif // __HYPER_DMABUF_XEN_COMM_H__
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
index 424417d..a86313a 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
@@ -40,19 +40,21 @@
  * Creates 2 level page directory structure for referencing shared pages.
  * Top level page is a single page that contains up to 1024 refids that
  * point to 2nd level pages.
+ *
  * Each 2nd level page contains up to 1024 refids that point to shared
  * data pages.
+ *
  * There will always be one top level page and number of 2nd level pages
  * depends on number of shared data pages.
  *
  *      3rd level page                2nd level pages            Data pages
- * +-------------------------+   ┌>+--------------------+ ┌--->+------------+
- * |2nd level page 0 refid   |---┘ |Data page 0 refid   |-┘    |Data page 0 |
- * |2nd level page 1 refid   |---┐ |Data page 1 refid   |-┐    +------------+
+ * +-------------------------+   ┌>+--------------------+ ┌>+------------+
+ * |2nd level page 0 refid   |---┘ |Data page 0 refid   |-┘ |Data page 0 |
+ * |2nd level page 1 refid   |---┐ |Data page 1 refid   |-┐ +------------+
  * |           ...           |   | |     ....           | |
- * |2nd level page 1023 refid|-┐ | |Data page 1023 refid| └--->+------------+
- * +-------------------------+ | | +--------------------+      |Data page 1 |
- *                             | |                             +------------+
+ * |2nd level page 1023 refid|-┐ | |Data page 1023 refid| └>+------------+
+ * +-------------------------+ | | +--------------------+   |Data page 1 |
+ *                             | |                          +------------+
  *                             | └>+--------------------+
  *                             |   |Data page 1024 refid|
  *                             |   |Data page 1025 refid|
@@ -65,9 +67,8 @@
  *                                 |Data page 1047552 refid|
  *                                 |Data page 1047553 refid|
  *                                 |       ...             |
- *                                 |Data page 1048575 refid|-->+------------------+
- *                                 +-----------------------+   |Data page 1048575 |
- *                                                             +------------------+
+ *                                 |Data page 1048575 refid|
+ *                                 +-----------------------+
  *
  * Using such 2 level structure it is possible to reference up to 4GB of
  * shared data using single refid pointing to top level page.
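
/* A back-of-the-envelope check of the figures above, assuming 4 KiB
 * pages and 4-byte grant references (so REFS_PER_PAGE == 1024): one
 * top level page addresses 1024 2nd level pages, which address up to
 * 1024 * 1024 data pages, i.e. 4 GiB.  The n_lvl2 computation matches
 * the one in hyper_dmabuf_xen_share_pages() below.
 */
#include <stdio.h>

int main(void)
{
	unsigned long refs_per_page = 4096 / 4;		/* 1024 */
	unsigned long max_pages = refs_per_page * refs_per_page;
	unsigned long nents = 5000;			/* example buffer */
	unsigned long n_lvl2 = nents / refs_per_page +
			       ((nents % refs_per_page) ? 1 : 0);

	printf("max %lu MiB, %lu 2nd level pages for %lu data pages\n",
	       max_pages * 4096 / (1024 * 1024), n_lvl2, nents);
	return 0;	/* prints: max 4096 MiB, 5 2nd level pages ... */
}
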
@@ -85,7 +86,7 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 	 * Calculate number of pages needed for 2nd level addressing:
 	 */
 	int n_lvl2_grefs = (nents/REFS_PER_PAGE +
-			   ((nents % REFS_PER_PAGE) ? 1: 0));
+			   ((nents % REFS_PER_PAGE) ? 1 : 0));
 
 	struct xen_shared_pages_info *sh_pages_info;
 	int i;
@@ -95,23 +96,22 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 
 	sh_pages_info = kmalloc(sizeof(*sh_pages_info), GFP_KERNEL);
 
-	if (!sh_pages_info) {
-		dev_err(hy_drv_priv->dev, "No more space left\n");
+	if (!sh_pages_info)
 		return -ENOMEM;
-	}
 
 	*refs_info = (void *)sh_pages_info;
 
 	/* share data pages in readonly mode for security */
-	for (i=0; i<nents; i++) {
+	for (i = 0; i < nents; i++) {
 		lvl2_table[i] = gnttab_grant_foreign_access(domid,
 					pfn_to_mfn(page_to_pfn(pages[i])),
-					true /* read-only from remote domain */);
+					true /* read only */);
 		if (lvl2_table[i] == -ENOSPC) {
-			dev_err(hy_drv_priv->dev, "No more space left in grant table\n");
+			dev_err(hy_drv_priv->dev,
+				"No more space left in grant table\n");
 
 			/* Unshare all already shared pages for lvl2 */
-			while(i--) {
+			while (i--) {
 				gnttab_end_foreign_access_ref(lvl2_table[i], 0);
 				gnttab_free_grant_reference(lvl2_table[i]);
 			}
@@ -120,23 +120,26 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 	}
 
 	/* Share 2nd level addressing pages in readonly mode */
-	for (i=0; i< n_lvl2_grefs; i++) {
+	for (i = 0; i < n_lvl2_grefs; i++) {
 		lvl3_table[i] = gnttab_grant_foreign_access(domid,
-					virt_to_mfn((unsigned long)lvl2_table+i*PAGE_SIZE ),
+					virt_to_mfn(
+					(unsigned long)lvl2_table+i*PAGE_SIZE),
 					true);
 
 		if (lvl3_table[i] == -ENOSPC) {
-			dev_err(hy_drv_priv->dev, "No more space left in grant table\n");
+			dev_err(hy_drv_priv->dev,
+				"No more space left in grant table\n");
 
 			/* Unshare all already shared pages for lvl3 */
-			while(i--) {
+			while (i--) {
 				gnttab_end_foreign_access_ref(lvl3_table[i], 1);
 				gnttab_free_grant_reference(lvl3_table[i]);
 			}
 
 			/* Unshare all pages for lvl2 */
-			while(nents--) {
-				gnttab_end_foreign_access_ref(lvl2_table[nents], 0);
+			while (nents--) {
+				gnttab_end_foreign_access_ref(
+							lvl2_table[nents], 0);
 				gnttab_free_grant_reference(lvl2_table[nents]);
 			}
 
@@ -150,16 +153,17 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 			true);
 
 	if (lvl3_gref == -ENOSPC) {
-		dev_err(hy_drv_priv->dev, "No more space left in grant table\n");
+		dev_err(hy_drv_priv->dev,
+			"No more space left in grant table\n");
 
 		/* Unshare all pages for lvl3 */
-		while(i--) {
+		while (i--) {
 			gnttab_end_foreign_access_ref(lvl3_table[i], 1);
 			gnttab_free_grant_reference(lvl3_table[i]);
 		}
 
 		/* Unshare all pages for lvl2 */
-		while(nents--) {
+		while (nents--) {
 			gnttab_end_foreign_access_ref(lvl2_table[nents], 0);
 			gnttab_free_grant_reference(lvl2_table[nents]);
 		}
@@ -187,10 +191,11 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 	return -ENOSPC;
 }
 
-int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents) {
+int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents)
+{
 	struct xen_shared_pages_info *sh_pages_info;
 	int n_lvl2_grefs = (nents/REFS_PER_PAGE +
-			    ((nents % REFS_PER_PAGE) ? 1: 0));
+			    ((nents % REFS_PER_PAGE) ? 1 : 0));
 	int i;
 
 	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
@@ -206,28 +211,28 @@ int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents) {
 
 	/* End foreign access for data pages, but do not free them */
 	for (i = 0; i < nents; i++) {
-		if (gnttab_query_foreign_access(sh_pages_info->lvl2_table[i])) {
+		if (gnttab_query_foreign_access(sh_pages_info->lvl2_table[i]))
 			dev_warn(hy_drv_priv->dev, "refid not shared !!\n");
-		}
+
 		gnttab_end_foreign_access_ref(sh_pages_info->lvl2_table[i], 0);
 		gnttab_free_grant_reference(sh_pages_info->lvl2_table[i]);
 	}
 
 	/* End foreign access for 2nd level addressing pages */
 	for (i = 0; i < n_lvl2_grefs; i++) {
-		if (gnttab_query_foreign_access(sh_pages_info->lvl3_table[i])) {
+		if (gnttab_query_foreign_access(sh_pages_info->lvl3_table[i]))
 			dev_warn(hy_drv_priv->dev, "refid not shared !!\n");
-		}
-		if (!gnttab_end_foreign_access_ref(sh_pages_info->lvl3_table[i], 1)) {
+
+		if (!gnttab_end_foreign_access_ref(
+					sh_pages_info->lvl3_table[i], 1))
 			dev_warn(hy_drv_priv->dev, "refid still in use!!!\n");
-		}
+
 		gnttab_free_grant_reference(sh_pages_info->lvl3_table[i]);
 	}
 
 	/* End foreign access for top level addressing page */
-	if (gnttab_query_foreign_access(sh_pages_info->lvl3_gref)) {
+	if (gnttab_query_foreign_access(sh_pages_info->lvl3_gref))
 		dev_warn(hy_drv_priv->dev, "gref not shared !!\n");
-	}
 
 	gnttab_end_foreign_access_ref(sh_pages_info->lvl3_gref, 1);
 	gnttab_free_grant_reference(sh_pages_info->lvl3_gref);
@@ -246,10 +251,11 @@ int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents) {
 	return 0;
 }
 
-/*
- * Maps provided top level ref id and then return array of pages containing data refs.
+/* Maps provided top level ref id and then returns an array of pages
+ * containing data refs.
  */
-struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int nents, void **refs_info)
+struct page **hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid,
+						int nents, void **refs_info)
 {
 	struct page *lvl3_table_page;
 	struct page **lvl2_table_pages;
@@ -280,19 +286,19 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 	sh_pages_info = kmalloc(sizeof(*sh_pages_info), GFP_KERNEL);
 	*refs_info = (void *) sh_pages_info;
 
-	lvl2_table_pages = kcalloc(sizeof(struct page*), n_lvl2_grefs,
+	lvl2_table_pages = kcalloc(n_lvl2_grefs, sizeof(struct page *),
 				   GFP_KERNEL);
 
-	data_pages = kcalloc(sizeof(struct page*), nents, GFP_KERNEL);
+	data_pages = kcalloc(nents, sizeof(struct page *), GFP_KERNEL);
 
-	lvl2_map_ops = kcalloc(sizeof(*lvl2_map_ops), n_lvl2_grefs,
+	lvl2_map_ops = kcalloc(n_lvl2_grefs, sizeof(*lvl2_map_ops),
 			       GFP_KERNEL);
 
-	lvl2_unmap_ops = kcalloc(sizeof(*lvl2_unmap_ops), n_lvl2_grefs,
+	lvl2_unmap_ops = kcalloc(n_lvl2_grefs, sizeof(*lvl2_unmap_ops),
 				 GFP_KERNEL);
 
-	data_map_ops = kcalloc(sizeof(*data_map_ops), nents, GFP_KERNEL);
-	data_unmap_ops = kcalloc(sizeof(*data_unmap_ops), nents, GFP_KERNEL);
+	data_map_ops = kcalloc(nents, sizeof(*data_map_ops), GFP_KERNEL);
+	data_unmap_ops = kcalloc(nents, sizeof(*data_unmap_ops), GFP_KERNEL);
 
 	/* Map top level addressing page */
 	if (gnttab_alloc_pages(1, &lvl3_table_page)) {
@@ -332,7 +338,8 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 	}
 
 	for (i = 0; i < n_lvl2_grefs; i++) {
-		lvl2_table = (grant_ref_t *)pfn_to_kaddr(page_to_pfn(lvl2_table_pages[i]));
+		lvl2_table = (grant_ref_t *)pfn_to_kaddr(
+					page_to_pfn(lvl2_table_pages[i]));
 		gnttab_set_map_op(&lvl2_map_ops[i],
 				  (unsigned long)lvl2_table, GNTMAP_host_map |
 				  GNTMAP_readonly,
@@ -348,11 +355,11 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 		dev_err(hy_drv_priv->dev,
 			"xen: cannot unmap top level page\n");
 		return NULL;
-	} else {
-		/* Mark that page was unmapped */
-		lvl3_unmap_ops.handle = -1;
 	}
 
+	/* Mark that page was unmapped */
+	lvl3_unmap_ops.handle = -1;
+
 	if (gnttab_map_refs(lvl2_map_ops, NULL,
 			    lvl2_table_pages, n_lvl2_grefs)) {
 		dev_err(hy_drv_priv->dev,
@@ -384,19 +391,22 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 		lvl2_table = pfn_to_kaddr(page_to_pfn(lvl2_table_pages[i]));
 		for (j = 0; j < REFS_PER_PAGE; j++) {
 			gnttab_set_map_op(&data_map_ops[k],
-				(unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
+				(unsigned long)pfn_to_kaddr(
+						page_to_pfn(data_pages[k])),
 				GNTMAP_host_map | GNTMAP_readonly,
 				lvl2_table[j], domid);
 
 			gnttab_set_unmap_op(&data_unmap_ops[k],
-				(unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
+				(unsigned long)pfn_to_kaddr(
+						page_to_pfn(data_pages[k])),
 				GNTMAP_host_map | GNTMAP_readonly, -1);
 			k++;
 		}
 	}
 
 	/* for grefs in the last lvl2 table page */
-	lvl2_table = pfn_to_kaddr(page_to_pfn(lvl2_table_pages[n_lvl2_grefs - 1]));
+	lvl2_table = pfn_to_kaddr(page_to_pfn(
+				lvl2_table_pages[n_lvl2_grefs - 1]));
 
 	for (j = 0; j < nents_last; j++) {
 		gnttab_set_map_op(&data_map_ops[k],
@@ -424,13 +434,12 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 		dev_err(hy_drv_priv->dev,
 			"Cannot unmap 2nd level refs\n");
 		return NULL;
-	} else {
-		/* Mark that pages were unmapped */
-		for (i = 0; i < n_lvl2_grefs; i++) {
-			lvl2_unmap_ops[i].handle = -1;
-		}
 	}
 
+	/* Mark that pages were unmapped */
+	for (i = 0; i < n_lvl2_grefs; i++)
+		lvl2_unmap_ops[i].handle = -1;
+
 	for (i = 0; i < nents; i++) {
 		if (data_map_ops[i].status) {
 			dev_err(hy_drv_priv->dev,
@@ -483,7 +492,8 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 	return NULL;
 }
 
-int hyper_dmabuf_xen_unmap_shared_pages(void **refs_info, int nents) {
+int hyper_dmabuf_xen_unmap_shared_pages(void **refs_info, int nents)
+{
 	struct xen_shared_pages_info *sh_pages_info;
 
 	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
@@ -498,7 +508,7 @@ int hyper_dmabuf_xen_unmap_shared_pages(void **refs_info, int nents) {
 	}
 
 	if (gnttab_unmap_refs(sh_pages_info->unmap_ops, NULL,
-			      sh_pages_info->data_pages, nents) ) {
+			      sh_pages_info->data_pages, nents)) {
 		dev_err(hy_drv_priv->dev, "Cannot unmap data pages\n");
 		return -EFAULT;
 	}
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
index 629ec0f..e7ae731 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
@@ -25,18 +25,21 @@
 #ifndef __HYPER_DMABUF_XEN_SHM_H__
 #define __HYPER_DMABUF_XEN_SHM_H__
 
-/* This collects all reference numbers for 2nd level shared pages and create a table
- * with those in 1st level shared pages then return reference numbers for this top level
- * table. */
+/* This collects all reference numbers for 2nd level shared pages and
+ * creates a table with those in 1st level shared pages, then returns
+ * the reference number for this top level table.
+ */
 int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 				 void **refs_info);
 
 int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents);
 
-/* Maps provided top level ref id and then return array of pages containing data refs.
+/* Maps provided top level ref id and then returns an array of pages
+ * containing data refs.
  */
-struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int nents,
-						void **refs_info);
+struct page **hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid,
+						 int nents,
+						 void **refs_info);
 
 int hyper_dmabuf_xen_unmap_shared_pages(void **refs_info, int nents);
 
diff --git a/include/uapi/xen/hyper_dmabuf.h b/include/uapi/xen/hyper_dmabuf.h
index e18dd9b..cb25299 100644
--- a/include/uapi/xen/hyper_dmabuf.h
+++ b/include/uapi/xen/hyper_dmabuf.h
@@ -28,8 +28,8 @@
 #define MAX_SIZE_PRIV_DATA 192
 
 typedef struct {
-        int id;
-        int rng_key[3]; /* 12bytes long random number */
+	int id;
+	int rng_key[3]; /* 12-byte-long random number */
 } hyper_dmabuf_id_t;
 
 struct hyper_dmabuf_event_hdr {
@@ -115,20 +115,20 @@ struct ioctl_hyper_dmabuf_query {
 /* DMABUF query */
 
 enum hyper_dmabuf_query {
-        HYPER_DMABUF_QUERY_TYPE = 0x10,
-        HYPER_DMABUF_QUERY_EXPORTER,
-        HYPER_DMABUF_QUERY_IMPORTER,
-        HYPER_DMABUF_QUERY_SIZE,
-        HYPER_DMABUF_QUERY_BUSY,
-        HYPER_DMABUF_QUERY_UNEXPORTED,
-        HYPER_DMABUF_QUERY_DELAYED_UNEXPORTED,
-        HYPER_DMABUF_QUERY_PRIV_INFO_SIZE,
-        HYPER_DMABUF_QUERY_PRIV_INFO,
+	HYPER_DMABUF_QUERY_TYPE = 0x10,
+	HYPER_DMABUF_QUERY_EXPORTER,
+	HYPER_DMABUF_QUERY_IMPORTER,
+	HYPER_DMABUF_QUERY_SIZE,
+	HYPER_DMABUF_QUERY_BUSY,
+	HYPER_DMABUF_QUERY_UNEXPORTED,
+	HYPER_DMABUF_QUERY_DELAYED_UNEXPORTED,
+	HYPER_DMABUF_QUERY_PRIV_INFO_SIZE,
+	HYPER_DMABUF_QUERY_PRIV_INFO,
 };
 
 enum hyper_dmabuf_status {
-        EXPORTED= 0x01,
-        IMPORTED,
+	EXPORTED = 0x01,
+	IMPORTED,
 };
 
 #endif //__LINUX_PUBLIC_HYPER_DMABUF_H__
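
/* hyper_dmabuf_id_t pairs a small integer id with a 12-byte random key.
 * A minimal sketch of an equality helper, under the assumption that two
 * handles name the same shared buffer only if both the id and the full
 * rng_key match:
 */
#include <stdbool.h>
#include <string.h>
#include <xen/hyper_dmabuf.h>

static bool hid_equal(hyper_dmabuf_id_t a, hyper_dmabuf_id_t b)
{
	return a.id == b.id &&
	       !memcmp(a.rng_key, b.rng_key, sizeof(a.rng_key));
}
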
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 50/60] hyper_dmabuf: fix styling err and warns caught by checkpatch.pl
  2017-12-19 19:29 ` Dongwon Kim
                   ` (66 preceding siblings ...)
  (?)
@ 2017-12-19 19:30 ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: xen-devel, mateuszx.potrola, dri-devel, dongwon.kim

Fix all styling problems caught by checkpatch.pl.

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        |  53 ++--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        |   6 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c      |  12 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c         |  24 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h         |   4 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 308 +++++++++++----------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h      |   5 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        | 132 ++++-----
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h        |   4 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c        |  58 ++--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c      | 236 ++++++++--------
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c    |  81 +++---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c   |  15 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h   |   2 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h     |  78 ++++--
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   | 154 +++++------
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h   |  21 +-
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c  |  21 +-
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h  |  16 +-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h    |  19 +-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c    | 128 +++++----
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h    |  15 +-
 include/uapi/xen/hyper_dmabuf.h                    |  26 +-
 23 files changed, 739 insertions(+), 679 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 525ee78..023d7f4 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -44,7 +44,6 @@
 
 #ifdef CONFIG_HYPER_DMABUF_XEN
 #include "xen/hyper_dmabuf_xen_drv.h"
-extern struct hyper_dmabuf_backend_ops xen_backend_ops;
 #endif
 
 MODULE_LICENSE("GPL and additional rights");
@@ -52,14 +51,11 @@ MODULE_AUTHOR("Intel Corporation");
 
 struct hyper_dmabuf_private *hy_drv_priv;
 
-long hyper_dmabuf_ioctl(struct file *filp,
-			unsigned int cmd, unsigned long param);
-
-static void hyper_dmabuf_force_free(struct exported_sgt_info* exported,
-			            void *attr)
+static void hyper_dmabuf_force_free(struct exported_sgt_info *exported,
+				    void *attr)
 {
 	struct ioctl_hyper_dmabuf_unexport unexport_attr;
-	struct file *filp = (struct file*) attr;
+	struct file *filp = (struct file *)attr;
 
 	if (!filp || !exported)
 		return;
@@ -97,7 +93,8 @@ int hyper_dmabuf_release(struct inode *inode, struct file *filp)
 
 #ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
 
-unsigned int hyper_dmabuf_event_poll(struct file *filp, struct poll_table_struct *wait)
+unsigned int hyper_dmabuf_event_poll(struct file *filp,
+				     struct poll_table_struct *wait)
 {
 	unsigned int mask = 0;
 
@@ -153,15 +150,17 @@ ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
 
 			mutex_unlock(&hy_drv_priv->event_read_lock);
 			ret = wait_event_interruptible(hy_drv_priv->event_wait,
-						       !list_empty(&hy_drv_priv->event_list));
+				  !list_empty(&hy_drv_priv->event_list));
 
 			if (ret == 0)
-				ret = mutex_lock_interruptible(&hy_drv_priv->event_read_lock);
+				ret = mutex_lock_interruptible(
+					&hy_drv_priv->event_read_lock);
 
 			if (ret)
 				return ret;
 		} else {
-			unsigned length = (sizeof(struct hyper_dmabuf_event_hdr) + e->event_data.hdr.size);
+			unsigned int length = (sizeof(e->event_data.hdr) +
+						      e->event_data.hdr.size);
 
 			if (length > count - ret) {
 put_back_event:
@@ -172,20 +171,22 @@ ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
 			}
 
 			if (copy_to_user(buffer + ret, &e->event_data.hdr,
-					 sizeof(struct hyper_dmabuf_event_hdr))) {
+					 sizeof(e->event_data.hdr))) {
 				if (ret == 0)
 					ret = -EFAULT;
 
 				goto put_back_event;
 			}
 
-			ret += sizeof(struct hyper_dmabuf_event_hdr);
+			ret += sizeof(e->event_data.hdr);
 
-			if (copy_to_user(buffer + ret, e->event_data.data, e->event_data.hdr.size)) {
+			if (copy_to_user(buffer + ret, e->event_data.data,
+					 e->event_data.hdr.size)) {
 				/* error while copying void *data */
 
 				struct hyper_dmabuf_event_hdr dummy_hdr = {0};
-				ret -= sizeof(struct hyper_dmabuf_event_hdr);
+
+				ret -= sizeof(e->event_data.hdr);
 
 				/* nullifying hdr of the event in user buffer */
 				if (copy_to_user(buffer + ret, &dummy_hdr,
@@ -212,8 +213,7 @@ ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
 
 #endif
 
-static struct file_operations hyper_dmabuf_driver_fops =
-{
+static const struct file_operations hyper_dmabuf_driver_fops = {
 	.owner = THIS_MODULE,
 	.open = hyper_dmabuf_open,
 	.release = hyper_dmabuf_release,
@@ -246,7 +246,7 @@ int register_device(void)
 
 	hy_drv_priv->dev = hyper_dmabuf_miscdev.this_device;
 
-	/* TODO: Check if there is a different way to initialize dma mask nicely */
+	/* TODO: Check if there is a different way to initialize dma mask */
 	dma_coerce_mask_and_coherent(hy_drv_priv->dev, DMA_BIT_MASK(64));
 
 	return ret;
@@ -264,32 +264,30 @@ static int __init hyper_dmabuf_drv_init(void)
 {
 	int ret = 0;
 
-	printk( KERN_NOTICE "hyper_dmabuf_starting: Initialization started\n");
+	printk(KERN_NOTICE "hyper_dmabuf_starting: Initialization started\n");
 
 	hy_drv_priv = kcalloc(1, sizeof(struct hyper_dmabuf_private),
 			      GFP_KERNEL);
 
 	if (!hy_drv_priv) {
-		printk( KERN_ERR "hyper_dmabuf: Failed to create drv\n");
+		printk(KERN_ERR "hyper_dmabuf: Failed to create drv\n");
 		return -1;
 	}
 
 	ret = register_device();
-	if (ret < 0) {
+	if (ret < 0)
 		return ret;
-	}
 
 /* currently only supports XEN hypervisor */
-
 #ifdef CONFIG_HYPER_DMABUF_XEN
 	hy_drv_priv->backend_ops = &xen_backend_ops;
 #else
 	hy_drv_priv->backend_ops = NULL;
-	printk( KERN_ERR "hyper_dmabuf drv currently supports XEN only.\n");
+	printk(KERN_ERR "hyper_dmabuf drv currently supports XEN only.\n");
 #endif
 
 	if (hy_drv_priv->backend_ops == NULL) {
-		printk( KERN_ERR "Hyper_dmabuf: failed to be loaded - no backend found\n");
+		printk(KERN_ERR "Hyper_dmabuf: no backend found\n");
 		return -1;
 	}
 
@@ -385,10 +383,7 @@ static void hyper_dmabuf_drv_exit(void)
 	dev_info(hy_drv_priv->dev,
 		 "hyper_dmabuf driver: Exiting\n");
 
-	if (hy_drv_priv) {
-		kfree(hy_drv_priv);
-		hy_drv_priv = NULL;
-	}
+	kfree(hy_drv_priv);
 
 	unregister_device();
 }
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
index 2ead41b..049c694 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -36,7 +36,7 @@ struct hyper_dmabuf_event {
 };
 
 struct hyper_dmabuf_private {
-        struct device *dev;
+	struct device *dev;
 
 	/* VM(domain) id of current VM instance */
 	int domid;
@@ -57,8 +57,8 @@ struct hyper_dmabuf_private {
 	/* flag that shows whether backend is initialized */
 	bool initialized;
 
-        wait_queue_head_t event_wait;
-        struct list_head event_list;
+	wait_queue_head_t event_wait;
+	struct list_head event_list;
 
 	spinlock_t event_lock;
 	struct mutex event_read_lock;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
index 0498cda..a4945af 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
@@ -44,7 +44,8 @@ static void hyper_dmabuf_send_event_locked(struct hyper_dmabuf_event *e)
 	assert_spin_locked(&hy_drv_priv->event_lock);
 
 	/* check current number of event then if it hits the max num allowed
-	 * then remove the oldest event in the list */
+	 * then remove the oldest event in the list
+	 */
 	if (hy_drv_priv->pending > MAX_DEPTH_EVENT_QUEUE - 1) {
 		oldest = list_first_entry(&hy_drv_priv->event_list,
 				struct hyper_dmabuf_event, link);
@@ -61,7 +62,7 @@ static void hyper_dmabuf_send_event_locked(struct hyper_dmabuf_event *e)
 	wake_up_interruptible(&hy_drv_priv->event_wait);
 }
 
-void hyper_dmabuf_events_release()
+void hyper_dmabuf_events_release(void)
 {
 	struct hyper_dmabuf_event *e, *et;
 	unsigned long irqflags;
@@ -100,15 +101,12 @@ int hyper_dmabuf_import_event(hyper_dmabuf_id_t hid)
 
 	e = kzalloc(sizeof(*e), GFP_KERNEL);
 
-	if (!e) {
-		dev_err(hy_drv_priv->dev,
-			"no space left\n");
+	if (!e)
 		return -ENOMEM;
-	}
 
 	e->event_data.hdr.event_type = HYPER_DMABUF_NEW_IMPORT;
 	e->event_data.hdr.hid = hid;
-	e->event_data.data = (void*)imported->priv;
+	e->event_data.data = (void *)imported->priv;
 	e->event_data.hdr.size = imported->sz_priv;
 
 	spin_lock_irqsave(&hy_drv_priv->event_lock, irqflags);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
index e2466c7..312dea5 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
@@ -40,11 +40,8 @@ void store_reusable_hid(hyper_dmabuf_id_t hid)
 
 	new_reusable = kmalloc(sizeof(*new_reusable), GFP_KERNEL);
 
-	if (!new_reusable) {
-		dev_err(hy_drv_priv->dev,
-			"No memory left to be allocated\n");
+	if (!new_reusable)
 		return;
-	}
 
 	new_reusable->hid = hid;
 
@@ -54,7 +51,7 @@ void store_reusable_hid(hyper_dmabuf_id_t hid)
 static hyper_dmabuf_id_t retrieve_reusable_hid(void)
 {
 	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
-	hyper_dmabuf_id_t hid = {-1, {0,0,0}};
+	hyper_dmabuf_id_t hid = {-1, {0, 0, 0} };
 
 	/* check there is reusable id */
 	if (!list_empty(&reusable_head->list)) {
@@ -92,7 +89,7 @@ void destroy_reusable_list(void)
 
 hyper_dmabuf_id_t hyper_dmabuf_get_hid(void)
 {
-	static int count = 0;
+	static int count;
 	hyper_dmabuf_id_t hid;
 	struct list_reusable_id *reusable_head;
 
@@ -100,13 +97,11 @@ hyper_dmabuf_id_t hyper_dmabuf_get_hid(void)
 	if (count == 0) {
 		reusable_head = kmalloc(sizeof(*reusable_head), GFP_KERNEL);
 
-		if (!reusable_head) {
-			dev_err(hy_drv_priv->dev,
-				"No memory left to be allocated\n");
-			return (hyper_dmabuf_id_t){-1, {0,0,0}};
-		}
+		if (!reusable_head)
+			return (hyper_dmabuf_id_t){-1, {0, 0, 0} };
 
-		reusable_head->hid.id = -1; /* list head has an invalid count */
+		/* list head has an invalid count */
+		reusable_head->hid.id = -1;
 		INIT_LIST_HEAD(&reusable_head->list);
 		hy_drv_priv->id_queue = reusable_head;
 	}
@@ -116,9 +111,8 @@ hyper_dmabuf_id_t hyper_dmabuf_get_hid(void)
 	/*creating a new H-ID only if nothing in the reusable id queue
 	 * and count is less than maximum allowed
 	 */
-	if (hid.id == -1 && count < HYPER_DMABUF_ID_MAX) {
+	if (hid.id == -1 && count < HYPER_DMABUF_ID_MAX)
 		hid.id = HYPER_DMABUF_ID_CREATE(hy_drv_priv->domid, count++);
-	}
 
 	/* random data embedded in the id for security */
 	get_random_bytes(&hid.rng_key[0], 12);
@@ -131,7 +125,7 @@ bool hyper_dmabuf_hid_keycomp(hyper_dmabuf_id_t hid1, hyper_dmabuf_id_t hid2)
 	int i;
 
 	/* compare keys */
-	for (i=0; i<3; i++) {
+	for (i = 0; i < 3; i++) {
 		if (hid1.rng_key[i] != hid2.rng_key[i])
 			return false;
 	}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
index a3336d9..61c4fb3 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
@@ -26,10 +26,10 @@
 #define __HYPER_DMABUF_ID_H__
 
 #define HYPER_DMABUF_ID_CREATE(domid, cnt) \
-        ((((domid) & 0xFF) << 24) | ((cnt) & 0xFFFFFF))
+	((((domid) & 0xFF) << 24) | ((cnt) & 0xFFFFFF))
 
 #define HYPER_DMABUF_DOM_ID(hid) \
-        (((hid.id) >> 24) & 0xFF)
+	(((hid.id) >> 24) & 0xFF)
 
 /* currently maximum number of buffers shared
  * at any given moment is limited to 1000
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index b328df7..f9040ed 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -91,7 +91,7 @@ static int hyper_dmabuf_send_export_msg(struct exported_sgt_info *exported,
 	/* now create request for importer via ring */
 	op[0] = exported->hid.id;
 
-	for (i=0; i<3; i++)
+	for (i = 0; i < 3; i++)
 		op[i+1] = exported->hid.rng_key[i];
 
 	if (pg_info) {
@@ -113,10 +113,8 @@ static int hyper_dmabuf_send_export_msg(struct exported_sgt_info *exported,
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
-	if(!req) {
-		dev_err(hy_drv_priv->dev, "no more space left\n");
+	if (!req)
 		return -1;
-	}
 
 	/* composing a message to the importer */
 	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT, &op[0]);
@@ -161,69 +159,71 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 					     export_remote_attr->remote_domain);
 	if (hid.id != -1) {
 		exported = hyper_dmabuf_find_exported(hid);
-		if (exported != NULL) {
-			if (exported->valid) {
-				/*
-				 * Check if unexport is already scheduled for that buffer,
-				 * if so try to cancel it. If that will fail, buffer needs
-				 * to be reexport once again.
-				 */
-				if (exported->unexport_sched) {
-					if (!cancel_delayed_work_sync(&exported->unexport)) {
-						dma_buf_put(dma_buf);
-						goto reexport;
-					}
-					exported->unexport_sched = false;
-				}
-
-				/* if there's any change in size of private data.
-				 * we reallocate space for private data with new size */
-				if (export_remote_attr->sz_priv != exported->sz_priv) {
-					kfree(exported->priv);
-
-					/* truncating size */
-					if (export_remote_attr->sz_priv > MAX_SIZE_PRIV_DATA) {
-						exported->sz_priv = MAX_SIZE_PRIV_DATA;
-					} else {
-						exported->sz_priv = export_remote_attr->sz_priv;
-					}
-
-					exported->priv = kcalloc(1, exported->sz_priv, GFP_KERNEL);
-
-					if(!exported->priv) {
-						dev_err(hy_drv_priv->dev,
-							"no more space left for priv\n");
-						hyper_dmabuf_remove_exported(exported->hid);
-						hyper_dmabuf_cleanup_sgt_info(exported, true);
-						kfree(exported);
-						dma_buf_put(dma_buf);
-						return -ENOMEM;
-					}
-				}
-
-				/* update private data in sgt_info with new ones */
-				ret = copy_from_user(exported->priv, export_remote_attr->priv,
-						     exported->sz_priv);
-				if (ret) {
-					dev_err(hy_drv_priv->dev,
-						"Failed to load a new private data\n");
-					ret = -EINVAL;
-				} else {
-					/* send an export msg for updating priv in importer */
-					ret = hyper_dmabuf_send_export_msg(exported, NULL);
-
-					if (ret < 0) {
-						dev_err(hy_drv_priv->dev,
-							"Failed to send a new private data\n");
-						ret = -EBUSY;
-					}
-				}
 
+		if (!exported)
+			goto reexport;
+
+		if (exported->valid == false)
+			goto reexport;
+
+		/*
+		 * Check if unexport is already scheduled for that buffer,
+		 * if so try to cancel it. If that will fail, buffer needs
+		 * to be reexport once again.
+		 */
+		if (exported->unexport_sched) {
+			if (!cancel_delayed_work_sync(&exported->unexport)) {
 				dma_buf_put(dma_buf);
-				export_remote_attr->hid = hid;
-				return ret;
+				goto reexport;
 			}
+			exported->unexport_sched = false;
 		}
+
+		/* if there's any change in size of private data.
+		 * we reallocate space for private data with new size
+		 */
+		if (export_remote_attr->sz_priv != exported->sz_priv) {
+			kfree(exported->priv);
+
+			/* truncating size */
+			if (export_remote_attr->sz_priv > MAX_SIZE_PRIV_DATA)
+				exported->sz_priv = MAX_SIZE_PRIV_DATA;
+			else
+				exported->sz_priv = export_remote_attr->sz_priv;
+
+			exported->priv = kcalloc(1, exported->sz_priv,
+						 GFP_KERNEL);
+
+			if (!exported->priv) {
+				hyper_dmabuf_remove_exported(exported->hid);
+				hyper_dmabuf_cleanup_sgt_info(exported, true);
+				kfree(exported);
+				dma_buf_put(dma_buf);
+				return -ENOMEM;
+			}
+		}
+
+		/* update private data in sgt_info with new ones */
+		ret = copy_from_user(exported->priv, export_remote_attr->priv,
+				     exported->sz_priv);
+		if (ret) {
+			dev_err(hy_drv_priv->dev,
+				"Failed to load a new private data\n");
+			ret = -EINVAL;
+		} else {
+			/* send an export msg for updating priv in importer */
+			ret = hyper_dmabuf_send_export_msg(exported, NULL);
+
+			if (ret < 0) {
+				dev_err(hy_drv_priv->dev,
+					"Failed to send a new private data\n");
+				ret = -EBUSY;
+			}
+		}
+
+		dma_buf_put(dma_buf);
+		export_remote_attr->hid = hid;
+		return ret;
 	}
 
 reexport:
@@ -244,25 +244,22 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 
 	exported = kcalloc(1, sizeof(*exported), GFP_KERNEL);
 
-	if(!exported) {
-		dev_err(hy_drv_priv->dev, "no more space left\n");
+	if (!exported) {
 		ret = -ENOMEM;
 		goto fail_sgt_info_creation;
 	}
 
 	/* possible truncation */
-	if (export_remote_attr->sz_priv > MAX_SIZE_PRIV_DATA) {
+	if (export_remote_attr->sz_priv > MAX_SIZE_PRIV_DATA)
 		exported->sz_priv = MAX_SIZE_PRIV_DATA;
-	} else {
+	else
 		exported->sz_priv = export_remote_attr->sz_priv;
-	}
 
 	/* creating buffer for private data of buffer */
-	if(exported->sz_priv != 0) {
+	if (exported->sz_priv != 0) {
 		exported->priv = kcalloc(1, exported->sz_priv, GFP_KERNEL);
 
-		if(!exported->priv) {
-			dev_err(hy_drv_priv->dev, "no more space left\n");
+		if (!exported->priv) {
 			ret = -ENOMEM;
 			goto fail_priv_creation;
 		}
@@ -273,7 +270,7 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 	exported->hid = hyper_dmabuf_get_hid();
 
 	/* no more exported dmabuf allowed */
-	if(exported->hid.id == -1) {
+	if (exported->hid.id == -1) {
 		dev_err(hy_drv_priv->dev,
 			"exceeds allowed number of dmabuf to be exported\n");
 		ret = -ENOMEM;
@@ -286,28 +283,27 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 
 	exported->active_sgts = kmalloc(sizeof(struct sgt_list), GFP_KERNEL);
 	if (!exported->active_sgts) {
-		dev_err(hy_drv_priv->dev, "no more space left\n");
 		ret = -ENOMEM;
 		goto fail_map_active_sgts;
 	}
 
-	exported->active_attached = kmalloc(sizeof(struct attachment_list), GFP_KERNEL);
+	exported->active_attached = kmalloc(sizeof(struct attachment_list),
+					    GFP_KERNEL);
 	if (!exported->active_attached) {
-		dev_err(hy_drv_priv->dev, "no more space left\n");
 		ret = -ENOMEM;
 		goto fail_map_active_attached;
 	}
 
-	exported->va_kmapped = kmalloc(sizeof(struct kmap_vaddr_list), GFP_KERNEL);
+	exported->va_kmapped = kmalloc(sizeof(struct kmap_vaddr_list),
+				       GFP_KERNEL);
 	if (!exported->va_kmapped) {
-		dev_err(hy_drv_priv->dev, "no more space left\n");
 		ret = -ENOMEM;
 		goto fail_map_va_kmapped;
 	}
 
-	exported->va_vmapped = kmalloc(sizeof(struct vmap_vaddr_list), GFP_KERNEL);
+	exported->va_vmapped = kmalloc(sizeof(struct vmap_vaddr_list),
+				       GFP_KERNEL);
 	if (!exported->va_vmapped) {
-		dev_err(hy_drv_priv->dev, "no more space left\n");
 		ret = -ENOMEM;
 		goto fail_map_va_vmapped;
 	}
@@ -436,31 +432,32 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 	/* send notification for export_fd to exporter */
 	op[0] = imported->hid.id;
 
-	for (i=0; i<3; i++)
+	for (i = 0; i < 3; i++)
 		op[i+1] = imported->hid.rng_key[i];
 
-	dev_dbg(hy_drv_priv->dev, "Exporting fd of buffer {id:%d key:%d %d %d}\n",
-		imported->hid.id, imported->hid.rng_key[0], imported->hid.rng_key[1],
-		imported->hid.rng_key[2]);
+	dev_dbg(hy_drv_priv->dev, "Export FD of buffer {id:%d key:%d %d %d}\n",
+		imported->hid.id, imported->hid.rng_key[0],
+		imported->hid.rng_key[1], imported->hid.rng_key[2]);
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
-	if (!req) {
-		dev_err(hy_drv_priv->dev,
-			"No memory left to be allocated\n");
+	if (!req)
 		return -ENOMEM;
-	}
 
 	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD, &op[0]);
 
 	ret = ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req, true);
 
 	if (ret < 0) {
-		/* in case of timeout other end eventually will receive request, so we need to undo it */
-		hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD_FAILED, &op[0]);
+		/* in case of a timeout the other end will eventually
+		 * receive the request, so we need to undo it
+		 */
+		hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD_FAILED,
+					&op[0]);
 		ops->send_req(op[0], req, false);
 		kfree(req);
-		dev_err(hy_drv_priv->dev, "Failed to create sgt or notify exporter\n");
+		dev_err(hy_drv_priv->dev,
+			"Failed to create sgt or notify exporter\n");
 		imported->importers--;
 		mutex_unlock(&hy_drv_priv->lock);
 		return ret;
@@ -471,64 +468,69 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 	if (ret == HYPER_DMABUF_REQ_ERROR) {
 		dev_err(hy_drv_priv->dev,
 			"Buffer invalid {id:%d key:%d %d %d}, cannot import\n",
-			imported->hid.id, imported->hid.rng_key[0], imported->hid.rng_key[1],
-			imported->hid.rng_key[2]);
+			imported->hid.id, imported->hid.rng_key[0],
+			imported->hid.rng_key[1], imported->hid.rng_key[2]);
 
 		imported->importers--;
 		mutex_unlock(&hy_drv_priv->lock);
 		return -EINVAL;
-	} else {
-		dev_dbg(hy_drv_priv->dev, "Can import buffer {id:%d key:%d %d %d}\n",
-			imported->hid.id, imported->hid.rng_key[0], imported->hid.rng_key[1],
-			imported->hid.rng_key[2]);
-
-		ret = 0;
 	}
 
+	ret = 0;
+
+	dev_dbg(hy_drv_priv->dev,
+		"Found buffer gref %d off %d\n",
+		imported->ref_handle, imported->frst_ofst);
+
 	dev_dbg(hy_drv_priv->dev,
-		  "%s Found buffer gref %d  off %d last len %d nents %d domain %d\n",
-		  __func__, imported->ref_handle, imported->frst_ofst,
-		  imported->last_len, imported->nents, HYPER_DMABUF_DOM_ID(imported->hid));
+		"last len %d nents %d domain %d\n",
+		imported->last_len, imported->nents,
+		HYPER_DMABUF_DOM_ID(imported->hid));
 
 	if (!imported->sgt) {
 		dev_dbg(hy_drv_priv->dev,
-			"%s buffer {id:%d key:%d %d %d} pages not mapped yet\n", __func__,
-			imported->hid.id, imported->hid.rng_key[0], imported->hid.rng_key[1],
-			imported->hid.rng_key[2]);
+			"buffer {id:%d key:%d %d %d} pages not mapped yet\n",
+			imported->hid.id, imported->hid.rng_key[0],
+			imported->hid.rng_key[1], imported->hid.rng_key[2]);
 
 		data_pgs = ops->map_shared_pages(imported->ref_handle,
-						   HYPER_DMABUF_DOM_ID(imported->hid),
-						   imported->nents,
-						   &imported->refs_info);
+					HYPER_DMABUF_DOM_ID(imported->hid),
+					imported->nents,
+					&imported->refs_info);
 
 		if (!data_pgs) {
 			dev_err(hy_drv_priv->dev,
-				"Cannot map pages of buffer {id:%d key:%d %d %d}\n",
-				imported->hid.id, imported->hid.rng_key[0], imported->hid.rng_key[1],
+				"can't map pages hid {id:%d key:%d %d %d}\n",
+				imported->hid.id, imported->hid.rng_key[0],
+				imported->hid.rng_key[1],
 				imported->hid.rng_key[2]);
 
 			imported->importers--;
+
 			req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
-			if (!req) {
-				dev_err(hy_drv_priv->dev,
-					"No more space left\n");
+			if (!req)
 				return -ENOMEM;
-			}
 
-			hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD_FAILED, &op[0]);
-			ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req, false);
+			hyper_dmabuf_create_req(req,
+						HYPER_DMABUF_EXPORT_FD_FAILED,
+						&op[0]);
+			ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req,
+							  false);
 			kfree(req);
 			mutex_unlock(&hy_drv_priv->lock);
 			return -EINVAL;
 		}
 
-		imported->sgt = hyper_dmabuf_create_sgt(data_pgs, imported->frst_ofst,
-							imported->last_len, imported->nents);
+		imported->sgt = hyper_dmabuf_create_sgt(data_pgs,
+							imported->frst_ofst,
+							imported->last_len,
+							imported->nents);
 
 	}
 
-	export_fd_attr->fd = hyper_dmabuf_export_fd(imported, export_fd_attr->flags);
+	export_fd_attr->fd = hyper_dmabuf_export_fd(imported,
+						    export_fd_attr->flags);
 
 	if (export_fd_attr->fd < 0) {
 		/* fail to get fd */
@@ -566,21 +568,19 @@ static void hyper_dmabuf_delayed_unexport(struct work_struct *work)
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
-	if (!req) {
-		dev_err(hy_drv_priv->dev,
-			"No memory left to be allocated\n");
+	if (!req)
 		return;
-	}
 
 	op[0] = exported->hid.id;
 
-	for (i=0; i<3; i++)
+	for (i = 0; i < 3; i++)
 		op[i+1] = exported->hid.rng_key[i];
 
 	hyper_dmabuf_create_req(req, HYPER_DMABUF_NOTIFY_UNEXPORT, &op[0]);
 
 	/* Now send unexport request to remote domain, marking
-	 * that buffer should not be used anymore */
+	 * that buffer should not be used anymore
+	 */
 	ret = ops->send_req(exported->rdomid, req, true);
 	if (ret < 0) {
 		dev_err(hy_drv_priv->dev,
@@ -589,12 +589,10 @@ static void hyper_dmabuf_delayed_unexport(struct work_struct *work)
 			exported->hid.rng_key[1], exported->hid.rng_key[2]);
 	}
 
-	/* free msg */
 	kfree(req);
 	exported->unexport_sched = false;
 
-	/*
-	 * Immediately clean-up if it has never been exported by importer
+	/* Immediately clean up if it has never been exported by importer
 	 * (so no SGT is constructed on importer).
 	 * clean it up later in remote sync when final release ops
 	 * is called (importer does this only when there's no
@@ -669,25 +667,31 @@ static int hyper_dmabuf_query_ioctl(struct file *filp, void *data)
 		exported = hyper_dmabuf_find_exported(query_attr->hid);
 		if (exported) {
 			ret = hyper_dmabuf_query_exported(exported,
-							  query_attr->item, &query_attr->info);
+							  query_attr->item,
+							  &query_attr->info);
 		} else {
 			dev_err(hy_drv_priv->dev,
-				"DMA BUF {id:%d key:%d %d %d} not in the export list\n",
-				query_attr->hid.id, query_attr->hid.rng_key[0],
-				query_attr->hid.rng_key[1], query_attr->hid.rng_key[2]);
+				"hid {id:%d key:%d %d %d} not in exp list\n",
+				query_attr->hid.id,
+				query_attr->hid.rng_key[0],
+				query_attr->hid.rng_key[1],
+				query_attr->hid.rng_key[2]);
 			return -ENOENT;
 		}
 	} else {
 		/* query for imported dmabuf */
 		imported = hyper_dmabuf_find_imported(query_attr->hid);
 		if (imported) {
-			ret = hyper_dmabuf_query_imported(imported, query_attr->item,
+			ret = hyper_dmabuf_query_imported(imported,
+							  query_attr->item,
 							  &query_attr->info);
 		} else {
 			dev_err(hy_drv_priv->dev,
-				"DMA BUF {id:%d key:%d %d %d} not in the imported list\n",
-				query_attr->hid.id, query_attr->hid.rng_key[0],
-				query_attr->hid.rng_key[1], query_attr->hid.rng_key[2]);
+				"hid {id:%d key:%d %d %d} not in imp list\n",
+				query_attr->hid.id,
+				query_attr->hid.rng_key[0],
+				query_attr->hid.rng_key[1],
+				query_attr->hid.rng_key[2]);
 			return -ENOENT;
 		}
 	}
@@ -696,12 +700,18 @@ static int hyper_dmabuf_query_ioctl(struct file *filp, void *data)
 }
 
 const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_TX_CH_SETUP, hyper_dmabuf_tx_ch_setup_ioctl, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_RX_CH_SETUP, hyper_dmabuf_rx_ch_setup_ioctl, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_REMOTE, hyper_dmabuf_export_remote_ioctl, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_FD, hyper_dmabuf_export_fd_ioctl, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_UNEXPORT, hyper_dmabuf_unexport_ioctl, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_QUERY, hyper_dmabuf_query_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_TX_CH_SETUP,
+			       hyper_dmabuf_tx_ch_setup_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_RX_CH_SETUP,
+			       hyper_dmabuf_rx_ch_setup_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_REMOTE,
+			       hyper_dmabuf_export_remote_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_FD,
+			       hyper_dmabuf_export_fd_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_UNEXPORT,
+			       hyper_dmabuf_unexport_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_QUERY,
+			       hyper_dmabuf_query_ioctl, 0),
 };
 
 long hyper_dmabuf_ioctl(struct file *filp,
@@ -728,21 +738,23 @@ long hyper_dmabuf_ioctl(struct file *filp,
 	}
 
 	kdata = kmalloc(_IOC_SIZE(cmd), GFP_KERNEL);
-	if (!kdata) {
-		dev_err(hy_drv_priv->dev, "no memory\n");
+	if (!kdata)
 		return -ENOMEM;
-	}
 
-	if (copy_from_user(kdata, (void __user *)param, _IOC_SIZE(cmd)) != 0) {
-		dev_err(hy_drv_priv->dev, "failed to copy from user arguments\n");
+	if (copy_from_user(kdata, (void __user *)param,
+			   _IOC_SIZE(cmd)) != 0) {
+		dev_err(hy_drv_priv->dev,
+			"failed to copy from user arguments\n");
 		ret = -EFAULT;
 		goto ioctl_error;
 	}
 
 	ret = func(filp, kdata);
 
-	if (copy_to_user((void __user *)param, kdata, _IOC_SIZE(cmd)) != 0) {
-		dev_err(hy_drv_priv->dev, "failed to copy to user arguments\n");
+	if (copy_to_user((void __user *)param, kdata,
+			 _IOC_SIZE(cmd)) != 0) {
+		dev_err(hy_drv_priv->dev,
+			"failed to copy to user arguments\n");
 		ret = -EFAULT;
 		goto ioctl_error;
 	}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
index 3e9470a..5991a87 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
@@ -34,7 +34,7 @@ struct hyper_dmabuf_ioctl_desc {
 	const char *name;
 };
 
-#define HYPER_DMABUF_IOCTL_DEF(ioctl, _func, _flags) 	\
+#define HYPER_DMABUF_IOCTL_DEF(ioctl, _func, _flags)	\
 	[_IOC_NR(ioctl)] = {				\
 			.cmd = ioctl,			\
 			.func = _func,			\
@@ -42,6 +42,9 @@ struct hyper_dmabuf_ioctl_desc {
 			.name = #ioctl			\
 	}
 
+long hyper_dmabuf_ioctl(struct file *filp,
+			unsigned int cmd, unsigned long param);
+
 int hyper_dmabuf_unexport_ioctl(struct file *filp, void *data);
 
 #endif //__HYPER_DMABUF_IOCTL_H__
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index 907f76e..fbbcc39 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -52,18 +52,19 @@ void hyper_dmabuf_create_req(struct hyper_dmabuf_req *req,
 	req->stat = HYPER_DMABUF_REQ_NOT_RESPONDED;
 	req->cmd = cmd;
 
-	switch(cmd) {
+	switch (cmd) {
 	/* as exporter, commands to importer */
 	case HYPER_DMABUF_EXPORT:
 		/* exporting pages for dmabuf */
 		/* command : HYPER_DMABUF_EXPORT,
-		 * op0~3 : hyper_dmabuf_id
+		 * op0~op3 : hyper_dmabuf_id
 		 * op4 : number of pages to be shared
 		 * op5 : offset of data in the first page
 		 * op6 : length of data in the last page
 		 * op7 : top-level reference number for shared pages
 		 * op8 : size of private data (from op9)
-		 * op9 ~ : Driver-specific private data (e.g. graphic buffer's meta info)
+		 * op9 ~ : Driver-specific private data
+		 *	   (e.g. graphic buffer's meta info)
 		 */
 
 		memcpy(&req->op[0], &op[0], 9 * sizeof(int) + op[8]);
@@ -72,34 +73,39 @@ void hyper_dmabuf_create_req(struct hyper_dmabuf_req *req,
 	case HYPER_DMABUF_NOTIFY_UNEXPORT:
 		/* destroy sg_list for hyper_dmabuf_id on remote side */
 		/* command : DMABUF_DESTROY,
-		 * op0~3 : hyper_dmabuf_id_t hid
+		 * op0~op3 : hyper_dmabuf_id_t hid
 		 */
 
-		for (i=0; i < 4; i++)
+		for (i = 0; i < 4; i++)
 			req->op[i] = op[i];
 		break;
 
 	case HYPER_DMABUF_EXPORT_FD:
 	case HYPER_DMABUF_EXPORT_FD_FAILED:
-		/* dmabuf fd is being created on imported side or importing failed */
-		/* command : HYPER_DMABUF_EXPORT_FD or HYPER_DMABUF_EXPORT_FD_FAILED,
-		 * op0~3 : hyper_dmabuf_id
+		/* dmabuf fd is being created on imported side or importing
+		 * failed
+		 *
+		 * command : HYPER_DMABUF_EXPORT_FD or
+		 *	     HYPER_DMABUF_EXPORT_FD_FAILED,
+		 * op0~op3 : hyper_dmabuf_id
 		 */
 
-		for (i=0; i < 4; i++)
+		for (i = 0; i < 4; i++)
 			req->op[i] = op[i];
 		break;
 
 	case HYPER_DMABUF_OPS_TO_REMOTE:
-		/* notifying dmabuf map/unmap to importer (probably not needed) */
-		/* for dmabuf synchronization */
+		/* notifying dmabuf map/unmap to importer (probably not needed)
+		 * for dmabuf synchronization
+		 */
 		break;
 
-	/* as importer, command to exporter */
 	case HYPER_DMABUF_OPS_TO_SOURCE:
-		/* notifying dmabuf map/unmap to exporter, map will make the driver to do shadow mapping
-		* or unmapping for synchronization with original exporter (e.g. i915) */
-		/* command : DMABUF_OPS_TO_SOURCE.
+		/* notifying dmabuf map/unmap to exporter, map will make
+		 * the driver do shadow mapping or unmapping for
+		 * synchronization with original exporter (e.g. i915)
+		 *
+		 * command : DMABUF_OPS_TO_SOURCE.
 		 * op0~3 : hyper_dmabuf_id
 		 * op4 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
 		 */
@@ -116,7 +122,8 @@ void hyper_dmabuf_create_req(struct hyper_dmabuf_req *req,
 static void cmd_process_work(struct work_struct *work)
 {
 	struct imported_sgt_info *imported;
-	struct cmd_process *proc = container_of(work, struct cmd_process, work);
+	struct cmd_process *proc = container_of(work,
+						struct cmd_process, work);
 	struct hyper_dmabuf_req *req;
 	int domid;
 	int i;
@@ -128,40 +135,42 @@ static void cmd_process_work(struct work_struct *work)
 	case HYPER_DMABUF_EXPORT:
 		/* exporting pages for dmabuf */
 		/* command : HYPER_DMABUF_EXPORT,
-		 * op0~3 : hyper_dmabuf_id
+		 * op0~op3 : hyper_dmabuf_id
 		 * op4 : number of pages to be shared
 		 * op5 : offset of data in the first page
 		 * op6 : length of data in the last page
 		 * op7 : top-level reference number for shared pages
 		 * op8 : size of private data (from op9)
-		 * op9 ~ : Driver-specific private data (e.g. graphic buffer's meta info)
+		 * op9 ~ : Driver-specific private data
+		 *         (e.g. graphic buffer's meta info)
 		 */
 
-		/* if nents == 0, it means it is a message only for priv synchronization
-		 * for existing imported_sgt_info so not creating a new one */
+		/* if nents == 0, it means it is a message only for
+		 * priv synchronization for an existing imported_sgt_info,
+		 * so we do not create a new one
+		 */
 		if (req->op[4] == 0) {
 			hyper_dmabuf_id_t exist = {req->op[0],
 						   {req->op[1], req->op[2],
-						   req->op[3]}};
+						   req->op[3] } };
 
 			imported = hyper_dmabuf_find_imported(exist);
 
 			if (!imported) {
 				dev_err(hy_drv_priv->dev,
-					"Can't find imported sgt_info from IMPORT_LIST\n");
+					"Can't find imported sgt_info\n");
 				break;
 			}
 
 			/* if size of new private data is different,
-			 * we reallocate it. */
+			 * we reallocate it.
+			 */
 			if (imported->sz_priv != req->op[8]) {
 				kfree(imported->priv);
 				imported->sz_priv = req->op[8];
-				imported->priv = kcalloc(1, req->op[8], GFP_KERNEL);
+				imported->priv = kcalloc(1, req->op[8],
+							 GFP_KERNEL);
 				if (!imported->priv) {
-					dev_err(hy_drv_priv->dev,
-						"Fail to allocate priv\n");
-
 					/* set it invalid */
 					imported->valid = 0;
 					break;
@@ -181,26 +190,20 @@ static void cmd_process_work(struct work_struct *work)
 
 		imported = kcalloc(1, sizeof(*imported), GFP_KERNEL);
 
-		if (!imported) {
-			dev_err(hy_drv_priv->dev,
-				"No memory left to be allocated\n");
+		if (!imported)
 			break;
-		}
 
 		imported->sz_priv = req->op[8];
 		imported->priv = kcalloc(1, req->op[8], GFP_KERNEL);
 
 		if (!imported->priv) {
-			dev_err(hy_drv_priv->dev,
-				"Fail to allocate priv\n");
-
 			kfree(imported);
 			break;
 		}
 
 		imported->hid.id = req->op[0];
 
-		for (i=0; i<3; i++)
+		for (i = 0; i < 3; i++)
 			imported->hid.rng_key[i] = req->op[i+1];
 
 		imported->nents = req->op[4];
@@ -230,13 +233,13 @@ static void cmd_process_work(struct work_struct *work)
 		break;
 
 	case HYPER_DMABUF_OPS_TO_REMOTE:
-		/* notifying dmabuf map/unmap to importer (probably not needed) */
-		/* for dmabuf synchronization */
+		/* notifying dmabuf map/unmap to importer
+		 * (probably not needed) for dmabuf synchronization
+		 */
 		break;
 
 	default:
 		/* shouldn't get here */
-		/* no matched command, nothing to do.. just return error */
 		break;
 	}
 
@@ -280,20 +283,22 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 		 * op0~3 : hyper_dmabuf_id
 		 */
 		dev_dbg(hy_drv_priv->dev,
-			"%s: processing HYPER_DMABUF_NOTIFY_UNEXPORT\n", __func__);
+			"processing HYPER_DMABUF_NOTIFY_UNEXPORT\n");
 
 		imported = hyper_dmabuf_find_imported(hid);
 
 		if (imported) {
 			/* if anything is still using dma_buf */
 			if (imported->importers) {
-				/*
-				 * Buffer is still in  use, just mark that it should
-				 * not be allowed to export its fd anymore.
+				/* Buffer is still in use, just mark that
+				 * it should not be allowed to export its fd
+				 * anymore.
 				 */
 				imported->valid = false;
 			} else {
-				/* No one is using buffer, remove it from imported list */
+				/* No one is using buffer, remove it from
+				 * imported list
+				 */
 				hyper_dmabuf_remove_imported(hid);
 				kfree(imported);
 			}
@@ -306,10 +311,12 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 
 	/* dma buf remote synchronization */
 	if (req->cmd == HYPER_DMABUF_OPS_TO_SOURCE) {
-		/* notifying dmabuf map/unmap to exporter, map will make the driver to do shadow mapping
-		 * or unmapping for synchronization with original exporter (e.g. i915) */
-
-		/* command : DMABUF_OPS_TO_SOURCE.
+		/* notifying dmabuf map/unmap to exporter, map will
+		 * make the driver do shadow mapping
+		 * or unmapping for synchronization with original
+		 * exporter (e.g. i915)
+		 *
+		 * command : DMABUF_OPS_TO_SOURCE.
 		 * op0~3 : hyper_dmabuf_id
 		 * op1 : enum hyper_dmabuf_ops {....}
 		 */
@@ -330,27 +337,30 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 	if (req->cmd == HYPER_DMABUF_EXPORT_FD) {
 		/* find a corresponding SGT for the id */
 		dev_dbg(hy_drv_priv->dev,
-			"Processing HYPER_DMABUF_EXPORT_FD for buffer {id:%d key:%d %d %d}\n",
+			"HYPER_DMABUF_EXPORT_FD for {id:%d key:%d %d %d}\n",
 			hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
 
 		exported = hyper_dmabuf_find_exported(hid);
 
 		if (!exported) {
 			dev_err(hy_drv_priv->dev,
-				"critical err: requested sgt_info can't be found for buffer {id:%d key:%d %d %d}\n",
-				hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
+				"buffer {id:%d key:%d %d %d} not found\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1],
+				hid.rng_key[2]);
 
 			req->stat = HYPER_DMABUF_REQ_ERROR;
 		} else if (!exported->valid) {
 			dev_dbg(hy_drv_priv->dev,
-				"Buffer no longer valid - cannot export fd for buffer {id:%d key:%d %d %d}\n",
-				hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
+				"Buffer no longer valid {id:%d key:%d %d %d}\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1],
+				hid.rng_key[2]);
 
 			req->stat = HYPER_DMABUF_REQ_ERROR;
 		} else {
 			dev_dbg(hy_drv_priv->dev,
-				"Buffer still valid - can export fd for buffer {id:%d key:%d %d %d}\n",
-				hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
+				"Buffer still valid {id:%d key:%d %d %d}\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1],
+				hid.rng_key[2]);
 
 			exported->active++;
 			req->stat = HYPER_DMABUF_REQ_PROCESSED;
@@ -360,15 +370,16 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 
 	if (req->cmd == HYPER_DMABUF_EXPORT_FD_FAILED) {
 		dev_dbg(hy_drv_priv->dev,
-			"Processing HYPER_DMABUF_EXPORT_FD_FAILED for buffer {id:%d key:%d %d %d}\n",
+			"HYPER_DMABUF_EXPORT_FD_FAILED for {id:%d key:%d %d %d}\n",
 			hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
 
 		exported = hyper_dmabuf_find_exported(hid);
 
 		if (!exported) {
 			dev_err(hy_drv_priv->dev,
-				"critical err: requested sgt_info can't be found for buffer {id:%d key:%d %d %d}\n",
-				hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
+				"buffer {id:%d key:%d %d %d} not found\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1],
+				hid.rng_key[2]);
 
 			req->stat = HYPER_DMABUF_REQ_ERROR;
 		} else {
@@ -382,19 +393,14 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 		"%s: putting request to workqueue\n", __func__);
 	temp_req = kmalloc(sizeof(*temp_req), GFP_KERNEL);
 
-	if (!temp_req) {
-		dev_err(hy_drv_priv->dev,
-			"No memory left to be allocated\n");
+	if (!temp_req)
 		return -ENOMEM;
-	}
 
 	memcpy(temp_req, req, sizeof(*temp_req));
 
 	proc = kcalloc(1, sizeof(struct cmd_process), GFP_KERNEL);
 
 	if (!proc) {
-		dev_err(hy_drv_priv->dev,
-			"No memory left to be allocated\n");
 		kfree(temp_req);
 		return -ENOMEM;
 	}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
index 7c694ec..9c8a76b 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -79,7 +79,9 @@ void hyper_dmabuf_create_req(struct hyper_dmabuf_req *req,
 				 enum hyper_dmabuf_command command,
 				 int *operands);
 
-/* parse incoming request packet (or response) and take appropriate actions for those */
+/* parse incoming request packet (or response) and take
+ * appropriate actions for those
+ */
 int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req);
 
 #endif // __HYPER_DMABUF_MSG_H__
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
index 7e73170..03fdd30 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
@@ -53,18 +53,15 @@ static int hyper_dmabuf_sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
 
 	op[0] = hid.id;
 
-	for (i=0; i<3; i++)
+	for (i = 0; i < 3; i++)
 		op[i+1] = hid.rng_key[i];
 
 	op[4] = dmabuf_ops;
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
-	if (!req) {
-		dev_err(hy_drv_priv->dev,
-			"No memory left to be allocated\n");
+	if (!req)
 		return -ENOMEM;
-	}
 
 	hyper_dmabuf_create_req(req, HYPER_DMABUF_OPS_TO_SOURCE, &op[0]);
 
@@ -81,8 +78,8 @@ static int hyper_dmabuf_sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
 	return ret;
 }
 
-static int hyper_dmabuf_ops_attach(struct dma_buf* dmabuf,
-				   struct device* dev,
+static int hyper_dmabuf_ops_attach(struct dma_buf *dmabuf,
+				   struct device *dev,
 				   struct dma_buf_attachment *attach)
 {
 	struct imported_sgt_info *imported;
@@ -99,7 +96,7 @@ static int hyper_dmabuf_ops_attach(struct dma_buf* dmabuf,
 	return ret;
 }
 
-static void hyper_dmabuf_ops_detach(struct dma_buf* dmabuf,
+static void hyper_dmabuf_ops_detach(struct dma_buf *dmabuf,
 				    struct dma_buf_attachment *attach)
 {
 	struct imported_sgt_info *imported;
@@ -114,8 +111,9 @@ static void hyper_dmabuf_ops_detach(struct dma_buf* dmabuf,
 					HYPER_DMABUF_OPS_DETACH);
 }
 
-static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachment,
-					     enum dma_data_direction dir)
+static struct sg_table *hyper_dmabuf_ops_map(
+				struct dma_buf_attachment *attachment,
+				enum dma_data_direction dir)
 {
 	struct sg_table *st;
 	struct imported_sgt_info *imported;
@@ -130,9 +128,8 @@ static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachme
 	/* extract pages from sgt */
 	pg_info = hyper_dmabuf_ext_pgs(imported->sgt);
 
-	if (!pg_info) {
+	if (!pg_info)
 		return NULL;
-	}
 
 	/* create a new sg_table with extracted pages */
 	st = hyper_dmabuf_create_sgt(pg_info->pgs, pg_info->frst_ofst,
@@ -140,8 +137,8 @@ static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachme
 	if (!st)
 		goto err_free_sg;
 
-        if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir))
-                goto err_free_sg;
+	if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir))
+		goto err_free_sg;
 
 	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_MAP);
@@ -196,9 +193,8 @@ static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 
 	imported = (struct imported_sgt_info *)dma_buf->priv;
 
-	if (!dmabuf_refcount(imported->dma_buf)) {
+	if (!dmabuf_refcount(imported->dma_buf))
 		imported->dma_buf = NULL;
-	}
 
 	imported->importers--;
 
@@ -219,8 +215,9 @@ static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 					HYPER_DMABUF_OPS_RELEASE);
 
 	/*
-	 * Check if buffer is still valid and if not remove it from imported list.
-	 * That has to be done after sending sync request
+	 * Check if buffer is still valid and if not remove it
+	 * from imported list. That has to be done after sending
+	 * sync request
 	 */
 	if (finish) {
 		hyper_dmabuf_remove_imported(imported->hid);
@@ -228,7 +225,8 @@ static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 	}
 }
 
-static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
+static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf,
+					     enum dma_data_direction dir)
 {
 	struct imported_sgt_info *imported;
 	int ret;
@@ -244,7 +242,8 @@ static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf, enum dma_da
 	return ret;
 }
 
-static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
+static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf,
+					   enum dma_data_direction dir)
 {
 	struct imported_sgt_info *imported;
 	int ret;
@@ -260,7 +259,8 @@ static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf, enum dma_data
 	return 0;
 }
 
-static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum)
+static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf,
+					  unsigned long pgnum)
 {
 	struct imported_sgt_info *imported;
 	int ret;
@@ -273,10 +273,12 @@ static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf, unsigned long
 	ret = hyper_dmabuf_sync_request(imported->hid,
 					HYPER_DMABUF_OPS_KMAP_ATOMIC);
 
-	return NULL; /* for now NULL.. need to return the address of mapped region */
+	/* TODO: NULL for now. Need to return the addr of mapped region */
+	return NULL;
 }
 
-static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum, void *vaddr)
+static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf,
+					   unsigned long pgnum, void *vaddr)
 {
 	struct imported_sgt_info *imported;
 	int ret;
@@ -322,7 +324,8 @@ static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum,
 					HYPER_DMABUF_OPS_KUNMAP);
 }
 
-static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf,
+				 struct vm_area_struct *vma)
 {
 	struct imported_sgt_info *imported;
 	int ret;
@@ -374,8 +377,8 @@ static const struct dma_buf_ops hyper_dmabuf_ops = {
 	.map_dma_buf = hyper_dmabuf_ops_map,
 	.unmap_dma_buf = hyper_dmabuf_ops_unmap,
 	.release = hyper_dmabuf_ops_release,
-	.begin_cpu_access = (void*)hyper_dmabuf_ops_begin_cpu_access,
-	.end_cpu_access = (void*)hyper_dmabuf_ops_end_cpu_access,
+	.begin_cpu_access = (void *)hyper_dmabuf_ops_begin_cpu_access,
+	.end_cpu_access = (void *)hyper_dmabuf_ops_end_cpu_access,
 	.map_atomic = hyper_dmabuf_ops_kmap_atomic,
 	.unmap_atomic = hyper_dmabuf_ops_kunmap_atomic,
 	.map = hyper_dmabuf_ops_kmap,
@@ -395,9 +398,8 @@ int hyper_dmabuf_export_fd(struct imported_sgt_info *imported, int flags)
 	 */
 	hyper_dmabuf_export_dma_buf(imported);
 
-	if (imported->dma_buf) {
+	if (imported->dma_buf)
 		fd = dma_buf_fd(imported->dma_buf, flags);
-	}
 
 	return fd;
 }
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c
index 36e888c..1f2f56b 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c
@@ -36,63 +36,63 @@
 	((nents)*PAGE_SIZE - (first_offset) - PAGE_SIZE + (last_len))
 
 int hyper_dmabuf_query_exported(struct exported_sgt_info *exported,
-				int query, unsigned long* info)
+				int query, unsigned long *info)
 {
-	switch (query)
-	{
-		case HYPER_DMABUF_QUERY_TYPE:
-			*info = EXPORTED;
-			break;
-
-		/* exporting domain of this specific dmabuf*/
-		case HYPER_DMABUF_QUERY_EXPORTER:
-			*info = HYPER_DMABUF_DOM_ID(exported->hid);
-			break;
-
-		/* importing domain of this specific dmabuf */
-		case HYPER_DMABUF_QUERY_IMPORTER:
-			*info = exported->rdomid;
-			break;
-
-		/* size of dmabuf in byte */
-		case HYPER_DMABUF_QUERY_SIZE:
-			*info = exported->dma_buf->size;
-			break;
-
-		/* whether the buffer is used by importer */
-		case HYPER_DMABUF_QUERY_BUSY:
-			*info = (exported->active > 0);
-			break;
-
-		/* whether the buffer is unexported */
-		case HYPER_DMABUF_QUERY_UNEXPORTED:
-			*info = !exported->valid;
-			break;
-
-		/* whether the buffer is scheduled to be unexported */
-		case HYPER_DMABUF_QUERY_DELAYED_UNEXPORTED:
-			*info = !exported->unexport_sched;
-			break;
-
-		/* size of private info attached to buffer */
-		case HYPER_DMABUF_QUERY_PRIV_INFO_SIZE:
-			*info = exported->sz_priv;
-			break;
-
-		/* copy private info attached to buffer */
-		case HYPER_DMABUF_QUERY_PRIV_INFO:
-			if (exported->sz_priv > 0) {
-				int n;
-				n = copy_to_user((void __user*) *info,
-						exported->priv,
-						exported->sz_priv);
-				if (n != 0)
-					return -EINVAL;
-			}
-			break;
-
-		default:
-			return -EINVAL;
+	switch (query) {
+	case HYPER_DMABUF_QUERY_TYPE:
+		*info = EXPORTED;
+		break;
+
+	/* exporting domain of this specific dmabuf */
+	case HYPER_DMABUF_QUERY_EXPORTER:
+		*info = HYPER_DMABUF_DOM_ID(exported->hid);
+		break;
+
+	/* importing domain of this specific dmabuf */
+	case HYPER_DMABUF_QUERY_IMPORTER:
+		*info = exported->rdomid;
+		break;
+
+	/* size of dmabuf in byte */
+	case HYPER_DMABUF_QUERY_SIZE:
+		*info = exported->dma_buf->size;
+		break;
+
+	/* whether the buffer is used by importer */
+	case HYPER_DMABUF_QUERY_BUSY:
+		*info = (exported->active > 0);
+		break;
+
+	/* whether the buffer is unexported */
+	case HYPER_DMABUF_QUERY_UNEXPORTED:
+		*info = !exported->valid;
+		break;
+
+	/* whether the buffer is scheduled to be unexported */
+	case HYPER_DMABUF_QUERY_DELAYED_UNEXPORTED:
+		*info = !exported->unexport_sched;
+		break;
+
+	/* size of private info attached to buffer */
+	case HYPER_DMABUF_QUERY_PRIV_INFO_SIZE:
+		*info = exported->sz_priv;
+		break;
+
+	/* copy private info attached to buffer */
+	case HYPER_DMABUF_QUERY_PRIV_INFO:
+		if (exported->sz_priv > 0) {
+			int n;
+
+			n = copy_to_user((void __user *) *info,
+					exported->priv,
+					exported->sz_priv);
+			if (n != 0)
+				return -EINVAL;
+		}
+		break;
+
+	default:
+		return -EINVAL;
 	}
 
 	return 0;
@@ -102,66 +102,70 @@ int hyper_dmabuf_query_exported(struct exported_sgt_info *exported,
 int hyper_dmabuf_query_imported(struct imported_sgt_info *imported,
 				int query, unsigned long *info)
 {
-	switch (query)
-	{
-		case HYPER_DMABUF_QUERY_TYPE:
-			*info = IMPORTED;
-			break;
-
-		/* exporting domain of this specific dmabuf*/
-		case HYPER_DMABUF_QUERY_EXPORTER:
-			*info = HYPER_DMABUF_DOM_ID(imported->hid);
-			break;
-
-		/* importing domain of this specific dmabuf */
-		case HYPER_DMABUF_QUERY_IMPORTER:
-			*info = hy_drv_priv->domid;
-			break;
-
-		/* size of dmabuf in byte */
-		case HYPER_DMABUF_QUERY_SIZE:
-			if (imported->dma_buf) {
-				/* if local dma_buf is created (if it's ever mapped),
-				 * retrieve it directly from struct dma_buf *
-				 */
-				*info = imported->dma_buf->size;
-			} else {
-				/* calcuate it from given nents, frst_ofst and last_len */
-				*info = HYPER_DMABUF_SIZE(imported->nents,
-							  imported->frst_ofst,
-							  imported->last_len);
-			}
-			break;
-
-		/* whether the buffer is used or not */
-		case HYPER_DMABUF_QUERY_BUSY:
-			/* checks if it's used by importer */
-			*info = (imported->importers > 0);
-			break;
-
-		/* whether the buffer is unexported */
-		case HYPER_DMABUF_QUERY_UNEXPORTED:
-			*info = !imported->valid;
-			break;
-		/* size of private info attached to buffer */
-		case HYPER_DMABUF_QUERY_PRIV_INFO_SIZE:
-			*info = imported->sz_priv;
-			break;
-
-		/* copy private info attached to buffer */
-		case HYPER_DMABUF_QUERY_PRIV_INFO:
-			if (imported->sz_priv > 0) {
-				int n;
-				n = copy_to_user((void __user*) *info,
-						imported->priv,
-						imported->sz_priv);
-				if (n != 0)
-					return -EINVAL;
-			}
-			break;
-
-		default:
-			return -EINVAL;
+	switch (query) {
+	case HYPER_DMABUF_QUERY_TYPE:
+		*info = IMPORTED;
+		break;
+
+	/* exporting domain of this specific dmabuf */
+	case HYPER_DMABUF_QUERY_EXPORTER:
+		*info = HYPER_DMABUF_DOM_ID(imported->hid);
+		break;
+
+	/* importing domain of this specific dmabuf */
+	case HYPER_DMABUF_QUERY_IMPORTER:
+		*info = hy_drv_priv->domid;
+		break;
+
+	/* size of dmabuf in byte */
+	case HYPER_DMABUF_QUERY_SIZE:
+		if (imported->dma_buf) {
+			/* if local dma_buf is created (if it's
+			 * ever mapped), retrieve it directly
+			 * from struct dma_buf *
+			 */
+			*info = imported->dma_buf->size;
+		} else {
+			/* calculate it from given nents, frst_ofst
+			 * and last_len
+			 */
+			*info = HYPER_DMABUF_SIZE(imported->nents,
+						  imported->frst_ofst,
+						  imported->last_len);
+		}
+		break;
+
+	/* whether the buffer is used or not */
+	case HYPER_DMABUF_QUERY_BUSY:
+		/* checks if it's used by importer */
+		*info = (imported->importers > 0);
+		break;
+
+	/* whether the buffer is unexported */
+	case HYPER_DMABUF_QUERY_UNEXPORTED:
+		*info = !imported->valid;
+		break;
+
+	/* size of private info attached to buffer */
+	case HYPER_DMABUF_QUERY_PRIV_INFO_SIZE:
+		*info = imported->sz_priv;
+		break;
+
+	/* copy private info attached to buffer */
+	case HYPER_DMABUF_QUERY_PRIV_INFO:
+		if (imported->sz_priv > 0) {
+			int n;
+
+			n = copy_to_user((void __user *)*info,
+					imported->priv,
+					imported->sz_priv);
+			if (n != 0)
+				return -EINVAL;
+		}
+		break;
+
+	default:
+		return -EINVAL;
 	}
 
 	return 0;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
index 01ec98c..c9fe040 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -76,11 +76,8 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 	case HYPER_DMABUF_OPS_ATTACH:
 		attachl = kcalloc(1, sizeof(*attachl), GFP_KERNEL);
 
-		if (!attachl) {
-			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_ATTACH\n");
+		if (!attachl)
 			return -ENOMEM;
-		}
 
 		attachl->attach = dma_buf_attach(exported->dma_buf,
 						 hy_drv_priv->dev);
@@ -126,13 +123,11 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 
 		sgtl = kcalloc(1, sizeof(*sgtl), GFP_KERNEL);
 
-		if (!sgtl) {
-			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_MAP\n");
+		if (!sgtl)
 			return -ENOMEM;
-		}
 
-		sgtl->sgt = dma_buf_map_attachment(attachl->attach, DMA_BIDIRECTIONAL);
+		sgtl->sgt = dma_buf_map_attachment(attachl->attach,
+						   DMA_BIDIRECTIONAL);
 		if (!sgtl->sgt) {
 			kfree(sgtl);
 			dev_err(hy_drv_priv->dev,
@@ -148,7 +143,7 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 			dev_err(hy_drv_priv->dev,
 				"remote sync::HYPER_DMABUF_OPS_UNMAP\n");
 			dev_err(hy_drv_priv->dev,
-				"no more SGT or attachment left to be unmapped\n");
+				"no SGT or attach left to be unmapped\n");
 			return -EFAULT;
 		}
 
@@ -165,23 +160,28 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 
 	case HYPER_DMABUF_OPS_RELEASE:
 		dev_dbg(hy_drv_priv->dev,
-			"Buffer {id:%d key:%d %d %d} released, references left: %d\n",
-			 exported->hid.id, exported->hid.rng_key[0], exported->hid.rng_key[1],
-			 exported->hid.rng_key[2], exported->active - 1);
+			"{id:%d key:%d %d %d} released, ref left: %d\n",
+			 exported->hid.id, exported->hid.rng_key[0],
+			 exported->hid.rng_key[1], exported->hid.rng_key[2],
+			 exported->active - 1);
+
+		exported->active--;
 
-                exported->active--;
-		/* If there are still importers just break, if no then continue with final cleanup */
+		/* If there are still importers just break, if no then
+		 * continue with final cleanup
+		 */
 		if (exported->active)
 			break;
 
-		/*
-		 * Importer just released buffer fd, check if there is any other importer still using it.
-		 * If not and buffer was unexported, clean up shared data and remove that buffer.
+		/* Importer just released buffer fd, check if there is
+		 * any other importer still using it.
+		 * If not and buffer was unexported, clean up shared
+		 * data and remove that buffer.
 		 */
 		dev_dbg(hy_drv_priv->dev,
 			"Buffer {id:%d key:%d %d %d} final released\n",
-			exported->hid.id, exported->hid.rng_key[0], exported->hid.rng_key[1],
-			exported->hid.rng_key[2]);
+			exported->hid.id, exported->hid.rng_key[0],
+			exported->hid.rng_key[1], exported->hid.rng_key[2]);
 
 		if (!exported->valid && !exported->active &&
 		    !exported->unexport_sched) {
@@ -195,19 +195,21 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 		break;
 
 	case HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS:
-		ret = dma_buf_begin_cpu_access(exported->dma_buf, DMA_BIDIRECTIONAL);
+		ret = dma_buf_begin_cpu_access(exported->dma_buf,
+					       DMA_BIDIRECTIONAL);
 		if (ret) {
 			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS\n");
+				"HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS\n");
 			return ret;
 		}
 		break;
 
 	case HYPER_DMABUF_OPS_END_CPU_ACCESS:
-		ret = dma_buf_end_cpu_access(exported->dma_buf, DMA_BIDIRECTIONAL);
+		ret = dma_buf_end_cpu_access(exported->dma_buf,
+					     DMA_BIDIRECTIONAL);
 		if (ret) {
 			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_END_CPU_ACCESS\n");
+				"HYPER_DMABUF_OPS_END_CPU_ACCESS\n");
 			return ret;
 		}
 		break;
@@ -215,22 +217,21 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 	case HYPER_DMABUF_OPS_KMAP_ATOMIC:
 	case HYPER_DMABUF_OPS_KMAP:
 		va_kmapl = kcalloc(1, sizeof(*va_kmapl), GFP_KERNEL);
-		if (!va_kmapl) {
-			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
+		if (!va_kmapl)
 			return -ENOMEM;
-		}
 
 		/* dummy kmapping of 1 page */
 		if (ops == HYPER_DMABUF_OPS_KMAP_ATOMIC)
-			va_kmapl->vaddr = dma_buf_kmap_atomic(exported->dma_buf, 1);
+			va_kmapl->vaddr = dma_buf_kmap_atomic(
+						exported->dma_buf, 1);
 		else
-			va_kmapl->vaddr = dma_buf_kmap(exported->dma_buf, 1);
+			va_kmapl->vaddr = dma_buf_kmap(
+						exported->dma_buf, 1);
 
 		if (!va_kmapl->vaddr) {
 			kfree(va_kmapl);
 			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
+				"HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
 			return -ENOMEM;
 		}
 		list_add(&va_kmapl->list, &exported->va_kmapped->list);
@@ -240,7 +241,7 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 	case HYPER_DMABUF_OPS_KUNMAP:
 		if (list_empty(&exported->va_kmapped->list)) {
 			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
+				"HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
 			dev_err(hy_drv_priv->dev,
 				"no more dmabuf VA to be freed\n");
 			return -EFAULT;
@@ -250,15 +251,17 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 					    struct kmap_vaddr_list, list);
 		if (!va_kmapl->vaddr) {
 			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
+				"HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
 			return PTR_ERR(va_kmapl->vaddr);
 		}
 
 		/* unmapping 1 page */
 		if (ops == HYPER_DMABUF_OPS_KUNMAP_ATOMIC)
-			dma_buf_kunmap_atomic(exported->dma_buf, 1, va_kmapl->vaddr);
+			dma_buf_kunmap_atomic(exported->dma_buf,
+					      1, va_kmapl->vaddr);
 		else
-			dma_buf_kunmap(exported->dma_buf, 1, va_kmapl->vaddr);
+			dma_buf_kunmap(exported->dma_buf,
+				       1, va_kmapl->vaddr);
 
 		list_del(&va_kmapl->list);
 		kfree(va_kmapl);
@@ -266,7 +269,8 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 
 	case HYPER_DMABUF_OPS_MMAP:
 		/* currently not supported: looking for a way to create
-		 * a dummy vma */
+		 * a dummy vma
+		 */
 		dev_warn(hy_drv_priv->dev,
 			 "remote sync::sychronized mmap is not supported\n");
 		break;
@@ -274,11 +278,8 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 	case HYPER_DMABUF_OPS_VMAP:
 		va_vmapl = kcalloc(1, sizeof(*va_vmapl), GFP_KERNEL);
 
-		if (!va_vmapl) {
-			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_VMAP\n");
+		if (!va_vmapl)
 			return -ENOMEM;
-		}
 
 		/* dummy vmapping */
 		va_vmapl->vaddr = dma_buf_vmap(exported->dma_buf);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
index 315c354..e9299e5 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
@@ -89,9 +89,8 @@ struct pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
 	if (!pg_info)
 		return NULL;
 
-	pg_info->pgs = kmalloc(sizeof(struct page *) *
-			       hyper_dmabuf_get_num_pgs(sgt),
-			       GFP_KERNEL);
+	pg_info->pgs = kmalloc_array(hyper_dmabuf_get_num_pgs(sgt),
+				     sizeof(struct page *), GFP_KERNEL);
 
 	if (!pg_info->pgs) {
 		kfree(pg_info);
@@ -137,17 +136,17 @@ struct pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
 }
 
 /* create sg_table with given pages and other parameters */
-struct sg_table* hyper_dmabuf_create_sgt(struct page **pgs,
-					 int frst_ofst, int last_len, int nents)
+struct sg_table *hyper_dmabuf_create_sgt(struct page **pgs,
+					 int frst_ofst, int last_len,
+					 int nents)
 {
 	struct sg_table *sgt;
 	struct scatterlist *sgl;
 	int i, ret;
 
 	sgt = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
-	if (!sgt) {
+	if (!sgt)
 		return NULL;
-	}
 
 	ret = sg_alloc_table(sgt, nents, GFP_KERNEL);
 	if (ret) {
@@ -163,7 +162,7 @@ struct sg_table* hyper_dmabuf_create_sgt(struct page **pgs,
 
 	sg_set_page(sgl, pgs[0], PAGE_SIZE-frst_ofst, frst_ofst);
 
-	for (i=1; i<nents-1; i++) {
+	for (i = 1; i < nents-1; i++) {
 		sgl = sg_next(sgl);
 		sg_set_page(sgl, pgs[i], PAGE_SIZE, 0);
 	}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
index 930bade..152f78c 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
@@ -31,7 +31,7 @@ int dmabuf_refcount(struct dma_buf *dma_buf);
 struct pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt);
 
 /* create sg_table with given pages and other parameters */
-struct sg_table* hyper_dmabuf_create_sgt(struct page **pgs,
+struct sg_table *hyper_dmabuf_create_sgt(struct page **pgs,
 					 int frst_ofst, int last_len,
 					 int nents);
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
index 8a612d1..a11f804 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -51,67 +51,91 @@ struct vmap_vaddr_list {
 
 /* Exporter builds pages_info before sharing pages */
 struct pages_info {
-        int frst_ofst; /* offset of data in the first page */
-        int last_len; /* length of data in the last page */
-        int nents; /* # of pages */
-        struct page **pgs; /* pages that contains reference numbers of shared pages*/
+	int frst_ofst;
+	int last_len;
+	int nents;
+	struct page **pgs;
 };
 
 
 /* Exporter stores references to sgt in a hash table
- * Exporter keeps these references for synchronization and tracking purposes
+ * Exporter keeps these references for synchronization
+ * and tracking purposes
  */
 struct exported_sgt_info {
-        hyper_dmabuf_id_t hid; /* unique id to reference dmabuf in remote domain */
-	int rdomid; /* domain importing this sgt */
+	hyper_dmabuf_id_t hid;
+
+	/* VM ID of importer */
+	int rdomid;
 
-	struct dma_buf *dma_buf; /* needed to store this for freeing it later */
+	struct dma_buf *dma_buf;
 	int nents;
 
-	/* list of remote activities on dma_buf */
+	/* list for tracking activities on dma_buf */
 	struct sgt_list *active_sgts;
 	struct attachment_list *active_attached;
 	struct kmap_vaddr_list *va_kmapped;
 	struct vmap_vaddr_list *va_vmapped;
 
-	bool valid; /* set to 0 once unexported. Needed to prevent further mapping by importer */
-	int active; /* locally shared on importer's side */
-	void *refs_info; /* hypervisor-specific info for the references */
+	/* set to 0 when unexported. Importer doesn't
+	 * do a new mapping of the buffer if valid == false
+	 */
+	bool valid;
+
+	/* active == true if the buffer is actively used
+	 * (mapped) by importer
+	 */
+	int active;
+
+	/* hypervisor specific reference data for shared pages */
+	void *refs_info;
+
 	struct delayed_work unexport;
 	bool unexport_sched;
 
-	/* owner of buffer
-	 * TODO: that is naiive as buffer may be reused by
-	 * another userspace app, so here list of struct file should be kept
-	 * and emergency unexport should be executed only after last of buffer
-	 * uses releases hyper_dmabuf device
+	/* list for file pointers associated with all user space
+	 * applications that have exported this same buffer to
+	 * another VM. This needs to be tracked to know whether
+	 * the buffer can be completely freed.
 	 */
 	struct file *filp;
 
+	/* size of private data */
 	size_t sz_priv;
-	char *priv; /* device specific info (e.g. image's meta info?) */
+
+	/* private data associated with the exported buffer */
+	char *priv;
 };
 
-/* Importer store references (before mapping) on shared pages
- * Importer store these references in the table and map it in
- * its own memory map once userspace asks for reference for the buffer */
+/* imported_sgt_info contains information about an imported DMA_BUF.
+ * This info is kept in the IMPORT list and asynchronously retrieved
+ * and used to map the DMA_BUF on the importer VM's side upon an
+ * export fd ioctl request from user space.
+ */
+
 struct imported_sgt_info {
 	hyper_dmabuf_id_t hid; /* unique id for shared dmabuf imported */
 
-	int ref_handle; /* reference number of top level addressing page of shared pages */
-	int frst_ofst;	/* start offset in first shared page */
-	int last_len;	/* length of data in the last shared page */
-	int nents;	/* number of pages to be shared */
+	/* hypervisor-specific handle to pages */
+	int ref_handle;
+
+	/* offset and size info of DMA_BUF */
+	int frst_ofst;
+	int last_len;
+	int nents;
 
 	struct dma_buf *dma_buf;
-	struct sg_table *sgt; /* sgt pointer after importing buffer */
+	struct sg_table *sgt;
 
 	void *refs_info;
 	bool valid;
 	int importers;
 
+	/* size of private data */
 	size_t sz_priv;
-	char *priv; /* device specific info (e.g. image's meta info?) */
+
+	/* private data associated with the imported buffer */
+	char *priv;
 };
 
 #endif /* __HYPER_DMABUF_STRUCT_H__ */
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index f70b4ea..05f3521 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -41,12 +41,10 @@
 #include "hyper_dmabuf_xen_comm_list.h"
 #include "../hyper_dmabuf_drv.h"
 
-static int export_req_id = 0;
+static int export_req_id;
 
 struct hyper_dmabuf_req req_pending = {0};
 
-extern int xenstored_ready;
-
 static void xen_get_domid_delayed(struct work_struct *unused);
 static void xen_init_comm_env_delayed(struct work_struct *unused);
 
@@ -160,15 +158,16 @@ void xen_get_domid_delayed(struct work_struct *unused)
 	int domid, ret;
 
 	/* scheduling another if driver is still running
-	 * and xenstore has not been initialized */
+	 * and xenstore has not been initialized
+	 */
 	if (likely(xenstored_ready == 0)) {
 		dev_dbg(hy_drv_priv->dev,
-			"Xenstore is not quite ready yet. Will retry it in 500ms\n");
+			"Xenstore is not ready yet. Will retry in 500ms\n");
 		schedule_delayed_work(&get_vm_id_work, msecs_to_jiffies(500));
 	} else {
-	        xenbus_transaction_start(&xbt);
+		xenbus_transaction_start(&xbt);
 
-		ret = xenbus_scanf(xbt, "domid","", "%d", &domid);
+		ret = xenbus_scanf(xbt, "domid", "", "%d", &domid);
 
 		if (ret <= 0)
 			domid = -1;
@@ -176,14 +175,17 @@ void xen_get_domid_delayed(struct work_struct *unused)
 		xenbus_transaction_end(xbt, 0);
 
 		/* try again since -1 is an invalid id for domain
-		 * (but only if driver is still running) */
+		 * (but only if driver is still running)
+		 */
 		if (unlikely(domid == -1)) {
 			dev_dbg(hy_drv_priv->dev,
 				"domid==-1 is invalid. Will retry it in 500ms\n");
-			schedule_delayed_work(&get_vm_id_work, msecs_to_jiffies(500));
+			schedule_delayed_work(&get_vm_id_work,
+					      msecs_to_jiffies(500));
 		} else {
 			dev_info(hy_drv_priv->dev,
-				"Successfully retrieved domid from Xenstore:%d\n", domid);
+				 "Successfully retrieved domid from Xenstore:%d\n",
+				 domid);
 			hy_drv_priv->domid = domid;
 		}
 	}
@@ -199,21 +201,20 @@ int hyper_dmabuf_xen_get_domid(void)
 		return -1;
 	}
 
-        xenbus_transaction_start(&xbt);
+	xenbus_transaction_start(&xbt);
 
-        if (!xenbus_scanf(xbt, "domid","", "%d", &domid)) {
+	if (!xenbus_scanf(xbt, "domid", "", "%d", &domid))
 		domid = -1;
-        }
 
-        xenbus_transaction_end(xbt, 0);
+	xenbus_transaction_end(xbt, 0);
 
 	return domid;
 }
 
 static int xen_comm_next_req_id(void)
 {
-        export_req_id++;
-        return export_req_id;
+	export_req_id++;
+	return export_req_id;
 }
 
 /* For now cache latest rings as global variables. TODO: keep them in a list */
@@ -236,19 +237,18 @@ static irqreturn_t back_ring_isr(int irq, void *info);
 static void remote_dom_exporter_watch_cb(struct xenbus_watch *watch,
 					 const char *path, const char *token)
 {
-	int rdom,ret;
+	int rdom, ret;
 	uint32_t grefid, port;
 	struct xen_comm_rx_ring_info *ring_info;
 
 	/* Check which domain has changed its exporter rings */
 	ret = sscanf(watch->node, "/local/domain/%d/", &rdom);
-	if (ret <= 0) {
+	if (ret <= 0)
 		return;
-	}
 
 	/* Check if we have importer ring for given remote domain already
-	 * created */
-
+	 * created
+	 */
 	ring_info = xen_comm_find_rx_ring(rdom);
 
 	/* Try to query remote domain exporter ring details - if
@@ -298,11 +298,8 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 
 	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
 
-	if (!ring_info) {
-		dev_err(hy_drv_priv->dev,
-			"No more spae left\n");
+	if (!ring_info)
 		return -ENOMEM;
-	}
 
 	/* from exporter to importer */
 	shared_ring = (void *)__get_free_pages(GFP_KERNEL, 1);
@@ -318,8 +315,8 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 	FRONT_RING_INIT(&(ring_info->ring_front), sring, PAGE_SIZE);
 
 	ring_info->gref_ring = gnttab_grant_foreign_access(domid,
-							   virt_to_mfn(shared_ring),
-							   0);
+						virt_to_mfn(shared_ring),
+						0);
 	if (ring_info->gref_ring < 0) {
 		/* fail to get gref */
 		kfree(ring_info);
@@ -340,7 +337,7 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 	/* setting up interrupt */
 	ret = bind_evtchn_to_irqhandler(alloc_unbound.port,
 					front_ring_isr, 0,
-					NULL, (void*) ring_info);
+					NULL, (void *) ring_info);
 
 	if (ret < 0) {
 		dev_err(hy_drv_priv->dev,
@@ -368,25 +365,24 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 
 	ret = xen_comm_add_tx_ring(ring_info);
 
-	ret = xen_comm_expose_ring_details(hyper_dmabuf_xen_get_domid(), domid,
-					   ring_info->gref_ring, ring_info->port);
+	ret = xen_comm_expose_ring_details(hyper_dmabuf_xen_get_domid(),
+					   domid,
+					   ring_info->gref_ring,
+					   ring_info->port);
 
-	/*
-	 * Register watch for remote domain exporter ring.
+	/* Register watch for remote domain exporter ring.
 	 * When remote domain will setup its exporter ring,
 	 * we will automatically connect our importer ring to it.
 	 */
 	ring_info->watch.callback = remote_dom_exporter_watch_cb;
-	ring_info->watch.node = (const char*) kmalloc(sizeof(char) * 255, GFP_KERNEL);
+	ring_info->watch.node = kmalloc(255, GFP_KERNEL);
 
 	if (!ring_info->watch.node) {
-		dev_err(hy_drv_priv->dev,
-			"No more space left\n");
 		kfree(ring_info);
 		return -ENOMEM;
 	}
 
-	sprintf((char*)ring_info->watch.node,
+	sprintf((char *)ring_info->watch.node,
 		"/local/domain/%d/data/hyper_dmabuf/%d/port",
 		domid, hyper_dmabuf_xen_get_domid());
 
@@ -404,9 +400,8 @@ void hyper_dmabuf_xen_cleanup_tx_rbuf(int domid)
 	/* check if we at all have exporter ring for given rdomain */
 	ring_info = xen_comm_find_tx_ring(domid);
 
-	if (!ring_info) {
+	if (!ring_info)
 		return;
-	}
 
 	xen_comm_remove_tx_ring(domid);
 
@@ -416,7 +411,7 @@ void hyper_dmabuf_xen_cleanup_tx_rbuf(int domid)
 	/* No need to close communication channel, will be done by
 	 * this function
 	 */
-	unbind_from_irqhandler(ring_info->irq, (void*) ring_info);
+	unbind_from_irqhandler(ring_info->irq, (void *) ring_info);
 
 	/* No need to free sring page, will be freed by this function
 	 * when other side will end its access
@@ -430,7 +425,8 @@ void hyper_dmabuf_xen_cleanup_tx_rbuf(int domid)
 	if (!rx_ring_info)
 		return;
 
-	BACK_RING_INIT(&(rx_ring_info->ring_back), rx_ring_info->ring_back.sring,
+	BACK_RING_INIT(&(rx_ring_info->ring_back),
+		       rx_ring_info->ring_back.sring,
 		       PAGE_SIZE);
 }
 
@@ -473,11 +469,8 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 
 	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
 
-	if (!ring_info) {
-		dev_err(hy_drv_priv->dev,
-			"No memory left to be allocated\n");
+	if (!ring_info)
 		return -ENOMEM;
-	}
 
 	ring_info->sdomain = domid;
 	ring_info->evtchn = rx_port;
@@ -485,8 +478,6 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 	map_ops = kmalloc(sizeof(*map_ops), GFP_KERNEL);
 
 	if (!map_ops) {
-		dev_err(hy_drv_priv->dev,
-			"No memory left to be allocated\n");
 		ret = -ENOMEM;
 		goto fail_no_map_ops;
 	}
@@ -497,11 +488,13 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 	}
 
 	gnttab_set_map_op(&map_ops[0],
-			  (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
+			  (unsigned long)pfn_to_kaddr(
+					page_to_pfn(shared_ring)),
 			  GNTMAP_host_map, rx_gref, domid);
 
 	gnttab_set_unmap_op(&ring_info->unmap_op,
-			    (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
+			    (unsigned long)pfn_to_kaddr(
+					page_to_pfn(shared_ring)),
 			    GNTMAP_host_map, -1);
 
 	ret = gnttab_map_refs(map_ops, NULL, &shared_ring, 1);
@@ -542,13 +535,12 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 	ret = xen_comm_add_rx_ring(ring_info);
 
 	/* Setup communication channel in opposite direction */
-	if (!xen_comm_find_tx_ring(domid)) {
+	if (!xen_comm_find_tx_ring(domid))
 		ret = hyper_dmabuf_xen_init_tx_rbuf(domid);
-	}
 
 	ret = request_irq(ring_info->irq,
 			  back_ring_isr, 0,
-			  NULL, (void*)ring_info);
+			  NULL, (void *)ring_info);
 
 	return ret;
 
@@ -577,7 +569,7 @@ void hyper_dmabuf_xen_cleanup_rx_rbuf(int domid)
 	xen_comm_remove_rx_ring(domid);
 
 	/* no need to close event channel, will be done by that function */
-	unbind_from_irqhandler(ring_info->irq, (void*)ring_info);
+	unbind_from_irqhandler(ring_info->irq, (void *)ring_info);
 
 	/* unmapping shared ring page */
 	shared_ring = virt_to_page(ring_info->ring_back.sring);
@@ -636,7 +628,8 @@ static void xen_rx_ch_add_delayed(struct work_struct *unused)
 
 				if (!ret)
 					dev_info(hy_drv_priv->dev,
-						 "Finishing up setting up rx channel for domain %d\n", i);
+						 "Done rx ch init for VM %d\n",
+						 i);
 			}
 		}
 
@@ -654,7 +647,8 @@ void xen_init_comm_env_delayed(struct work_struct *unused)
 
 	/* scheduling another work if driver is still running
 	 * and xenstore hasn't been initialized or dom_id hasn't
-	 * been correctly retrieved. */
+	 * been correctly retrieved.
+	 */
 	if (likely(xenstored_ready == 0 ||
 	    hy_drv_priv->domid == -1)) {
 		dev_dbg(hy_drv_priv->dev,
@@ -778,9 +772,8 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req,
 	ring->req_prod_pvt++;
 
 	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(ring, notify);
-	if (notify) {
+	if (notify)
 		notify_remote_via_irq(ring_info->irq);
-	}
 
 	if (wait) {
 		while (timeout--) {
@@ -792,24 +785,29 @@ int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req,
 
 		if (timeout < 0) {
 			mutex_unlock(&ring_info->lock);
-			dev_err(hy_drv_priv->dev, "request timed-out\n");
+			dev_err(hy_drv_priv->dev,
+				"request timed-out\n");
 			return -EBUSY;
 		}
 
 		mutex_unlock(&ring_info->lock);
 		do_gettimeofday(&tv_end);
 
-		/* checking time duration for round-trip of a request for debugging */
+		/* checking time duration for round-trip of a request
+		 * for debugging
+		 */
 		if (tv_end.tv_usec >= tv_start.tv_usec) {
 			tv_diff.tv_sec = tv_end.tv_sec-tv_start.tv_sec;
 			tv_diff.tv_usec = tv_end.tv_usec-tv_start.tv_usec;
 		} else {
 			tv_diff.tv_sec = tv_end.tv_sec-tv_start.tv_sec-1;
-			tv_diff.tv_usec = tv_end.tv_usec+1000000-tv_start.tv_usec;
+			tv_diff.tv_usec = tv_end.tv_usec+1000000-
+					  tv_start.tv_usec;
 		}
 
 		if (tv_diff.tv_sec != 0 && tv_diff.tv_usec > 16000)
-			dev_dbg(hy_drv_priv->dev, "send_req:time diff: %ld sec, %ld usec\n",
+			dev_dbg(hy_drv_priv->dev,
+				"send_req:time diff: %ld sec, %ld usec\n",
 				tv_diff.tv_sec, tv_diff.tv_usec);
 	}
 
@@ -850,23 +848,24 @@ static irqreturn_t back_ring_isr(int irq, void *info)
 			ret = hyper_dmabuf_msg_parse(ring_info->sdomain, &req);
 
 			if (ret > 0) {
-				/* preparing a response for the request and send it to
-				 * the requester
+				/* preparing a response for the request and
+				 * send it to the requester
 				 */
 				memcpy(&resp, &req, sizeof(resp));
-				memcpy(RING_GET_RESPONSE(ring, ring->rsp_prod_pvt),
+				memcpy(RING_GET_RESPONSE(ring,
+							 ring->rsp_prod_pvt),
 							 &resp, sizeof(resp));
 				ring->rsp_prod_pvt++;
 
 				dev_dbg(hy_drv_priv->dev,
-					"sending response to exporter for request id:%d\n",
+					"responding to exporter for req:%d\n",
 					resp.resp_id);
 
-				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(ring, notify);
+				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(ring,
+								     notify);
 
-				if (notify) {
+				if (notify)
 					notify_remote_via_irq(ring_info->irq);
-				}
 			}
 
 			RING_FINAL_CHECK_FOR_REQUESTS(ring, more_to_do);
@@ -905,41 +904,40 @@ static irqreturn_t front_ring_isr(int irq, void *info)
 			dev_dbg(hy_drv_priv->dev,
 				"getting response from importer\n");
 
-			if (req_pending.req_id == resp->resp_id) {
+			if (req_pending.req_id == resp->resp_id)
 				req_pending.stat = resp->stat;
-			}
 
 			if (resp->stat == HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP) {
 				/* parsing response */
 				ret = hyper_dmabuf_msg_parse(ring_info->rdomain,
-							(struct hyper_dmabuf_req *)resp);
+					(struct hyper_dmabuf_req *)resp);
 
 				if (ret < 0) {
 					dev_err(hy_drv_priv->dev,
-						"getting error while parsing response\n");
+						"err while parsing resp\n");
 				}
 			} else if (resp->stat == HYPER_DMABUF_REQ_PROCESSED) {
-				/* for debugging dma_buf remote synchronization */
+				/* for debugging dma_buf remote synch */
 				dev_dbg(hy_drv_priv->dev,
 					"original request = 0x%x\n", resp->cmd);
 				dev_dbg(hy_drv_priv->dev,
-					"Just got HYPER_DMABUF_REQ_PROCESSED\n");
+					"got HYPER_DMABUF_REQ_PROCESSED\n");
 			} else if (resp->stat == HYPER_DMABUF_REQ_ERROR) {
-				/* for debugging dma_buf remote synchronization */
+				/* for debugging dma_buf remote synch */
 				dev_dbg(hy_drv_priv->dev,
 					"original request = 0x%x\n", resp->cmd);
 				dev_dbg(hy_drv_priv->dev,
-					"Just got HYPER_DMABUF_REQ_ERROR\n");
+					"got HYPER_DMABUF_REQ_ERROR\n");
 			}
 		}
 
 		ring->rsp_cons = i;
 
-		if (i != ring->req_prod_pvt) {
+		if (i != ring->req_prod_pvt)
 			RING_FINAL_CHECK_FOR_RESPONSES(ring, more_to_do);
-		} else {
+		else
 			ring->sring->rsp_event = i+1;
-		}
+
 	} while (more_to_do);
 
 	return IRQ_HANDLED;
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
index 80741c1..8e2d1d0 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
@@ -29,23 +29,25 @@
 #include "xen/xenbus.h"
 #include "../hyper_dmabuf_msg.h"
 
+extern int xenstored_ready;
+
 DEFINE_RING_TYPES(xen_comm, struct hyper_dmabuf_req, struct hyper_dmabuf_resp);
 
 struct xen_comm_tx_ring_info {
-        struct xen_comm_front_ring ring_front;
+	struct xen_comm_front_ring ring_front;
 	int rdomain;
-        int gref_ring;
-        int irq;
-        int port;
+	int gref_ring;
+	int irq;
+	int port;
 	struct mutex lock;
 	struct xenbus_watch watch;
 };
 
 struct xen_comm_rx_ring_info {
-        int sdomain;
-        int irq;
-        int evtchn;
-        struct xen_comm_back_ring ring_back;
+	int sdomain;
+	int irq;
+	int evtchn;
+	struct xen_comm_back_ring ring_back;
 	struct gnttab_unmap_grant_ref unmap_op;
 };
 
@@ -70,6 +72,7 @@ void hyper_dmabuf_xen_cleanup_rx_rbuf(int domid);
 void hyper_dmabuf_xen_destroy_comm(void);
 
 /* send request to the remote domain */
-int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req, int wait);
+int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req,
+			      int wait);
 
 #endif // __HYPER_DMABUF_XEN_COMM_H__
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
index 7a8ec73..343aab3 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
@@ -31,7 +31,6 @@
 #include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/cdev.h>
-#include <asm/uaccess.h>
 #include <linux/hashtable.h>
 #include <xen/grant_table.h>
 #include "../hyper_dmabuf_drv.h"
@@ -41,7 +40,7 @@
 DECLARE_HASHTABLE(xen_comm_tx_ring_hash, MAX_ENTRY_TX_RING);
 DECLARE_HASHTABLE(xen_comm_rx_ring_hash, MAX_ENTRY_RX_RING);
 
-void xen_comm_ring_table_init()
+void xen_comm_ring_table_init(void)
 {
 	hash_init(xen_comm_rx_ring_hash);
 	hash_init(xen_comm_tx_ring_hash);
@@ -53,11 +52,8 @@ int xen_comm_add_tx_ring(struct xen_comm_tx_ring_info *ring_info)
 
 	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
 
-	if (!info_entry) {
-		dev_err(hy_drv_priv->dev,
-			"No memory left to be allocated\n");
+	if (!info_entry)
 		return -ENOMEM;
-	}
 
 	info_entry->info = ring_info;
 
@@ -73,11 +69,8 @@ int xen_comm_add_rx_ring(struct xen_comm_rx_ring_info *ring_info)
 
 	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
 
-	if (!info_entry) {
-		dev_err(hy_drv_priv->dev,
-			"No memory left to be allocated\n");
+	if (!info_entry)
 		return -ENOMEM;
-	}
 
 	info_entry->info = ring_info;
 
@@ -93,7 +86,7 @@ struct xen_comm_tx_ring_info *xen_comm_find_tx_ring(int domid)
 	int bkt;
 
 	hash_for_each(xen_comm_tx_ring_hash, bkt, info_entry, node)
-		if(info_entry->info->rdomain == domid)
+		if (info_entry->info->rdomain == domid)
 			return info_entry->info;
 
 	return NULL;
@@ -105,7 +98,7 @@ struct xen_comm_rx_ring_info *xen_comm_find_rx_ring(int domid)
 	int bkt;
 
 	hash_for_each(xen_comm_rx_ring_hash, bkt, info_entry, node)
-		if(info_entry->info->sdomain == domid)
+		if (info_entry->info->sdomain == domid)
 			return info_entry->info;
 
 	return NULL;
@@ -117,7 +110,7 @@ int xen_comm_remove_tx_ring(int domid)
 	int bkt;
 
 	hash_for_each(xen_comm_tx_ring_hash, bkt, info_entry, node)
-		if(info_entry->info->rdomain == domid) {
+		if (info_entry->info->rdomain == domid) {
 			hash_del(&info_entry->node);
 			kfree(info_entry);
 			return 0;
@@ -132,7 +125,7 @@ int xen_comm_remove_rx_ring(int domid)
 	int bkt;
 
 	hash_for_each(xen_comm_rx_ring_hash, bkt, info_entry, node)
-		if(info_entry->info->sdomain == domid) {
+		if (info_entry->info->sdomain == domid) {
 			hash_del(&info_entry->node);
 			kfree(info_entry);
 			return 0;
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
index cde8ade..8502fe7 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
@@ -31,13 +31,13 @@
 #define MAX_ENTRY_RX_RING 7
 
 struct xen_comm_tx_ring_info_entry {
-        struct xen_comm_tx_ring_info *info;
-        struct hlist_node node;
+	struct xen_comm_tx_ring_info *info;
+	struct hlist_node node;
 };
 
 struct xen_comm_rx_ring_info_entry {
-        struct xen_comm_rx_ring_info *info;
-        struct hlist_node node;
+	struct xen_comm_rx_ring_info *info;
+	struct hlist_node node;
 };
 
 void xen_comm_ring_table_init(void);
@@ -54,10 +54,14 @@ struct xen_comm_tx_ring_info *xen_comm_find_tx_ring(int domid);
 
 struct xen_comm_rx_ring_info *xen_comm_find_rx_ring(int domid);
 
-/* iterates over all exporter rings and calls provided function for each of them */
+/* iterates over all exporter rings and calls provided
+ * function for each of them
+ */
 void xen_comm_foreach_tx_ring(void (*func)(int domid));
 
-/* iterates over all importer rings and calls provided function for each of them */
+/* iterates over all importer rings and calls provided
+ * function for each of them
+ */
 void xen_comm_foreach_rx_ring(void (*func)(int domid));
 
 #endif // __HYPER_DMABUF_XEN_COMM_LIST_H__
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
index c5fec24..e5bff09 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
@@ -34,11 +34,20 @@ extern struct hyper_dmabuf_backend_ops xen_backend_ops;
  * when unsharing.
  */
 struct xen_shared_pages_info {
-        grant_ref_t lvl3_gref; /* top level refid */
-        grant_ref_t *lvl3_table; /* page of top level addressing, it contains refids of 2nd level pages */
-        grant_ref_t *lvl2_table; /* table of 2nd level pages, that contains refids to data pages */
-        struct gnttab_unmap_grant_ref* unmap_ops; /* unmap ops for mapped pages */
-        struct page **data_pages; /* data pages to be unmapped */
+	/* top level refid */
+	grant_ref_t lvl3_gref;
+
+	/* page of top level addressing, it contains refids of 2nd lvl pages */
+	grant_ref_t *lvl3_table;
+
+	/* table of 2nd level pages, that contains refids to data pages */
+	grant_ref_t *lvl2_table;
+
+	/* unmap ops for mapped pages */
+	struct gnttab_unmap_grant_ref *unmap_ops;
+
+	/* data pages to be unmapped */
+	struct page **data_pages;
 };
 
 #endif // __HYPER_DMABUF_XEN_COMM_H__
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
index 424417d..a86313a 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
@@ -40,19 +40,21 @@
  * Creates 2 level page directory structure for referencing shared pages.
  * Top level page is a single page that contains up to 1024 refids that
  * point to 2nd level pages.
+ *
  * Each 2nd level page contains up to 1024 refids that point to shared
  * data pages.
+ *
  * There will always be one top level page and number of 2nd level pages
  * depends on number of shared data pages.
  *
  *      3rd level page                2nd level pages            Data pages
- * +-------------------------+   ┌>+--------------------+ ┌--->+------------+
- * |2nd level page 0 refid   |---┘ |Data page 0 refid   |-┘    |Data page 0 |
- * |2nd level page 1 refid   |---┐ |Data page 1 refid   |-┐    +------------+
+ * +-------------------------+   ┌>+--------------------+ ┌>+------------+
+ * |2nd level page 0 refid   |---┘ |Data page 0 refid   |-┘ |Data page 0 |
+ * |2nd level page 1 refid   |---┐ |Data page 1 refid   |-┐ +------------+
  * |           ...           |   | |     ....           | |
- * |2nd level page 1023 refid|-┐ | |Data page 1023 refid| └--->+------------+
- * +-------------------------+ | | +--------------------+      |Data page 1 |
- *                             | |                             +------------+
+ * |2nd level page 1023 refid|-┐ | |Data page 1023 refid| └>+------------+
+ * +-------------------------+ | | +--------------------+   |Data page 1 |
+ *                             | |                          +------------+
  *                             | └>+--------------------+
  *                             |   |Data page 1024 refid|
  *                             |   |Data page 1025 refid|
@@ -65,9 +67,8 @@
  *                                 |Data page 1047552 refid|
  *                                 |Data page 1047553 refid|
  *                                 |       ...             |
- *                                 |Data page 1048575 refid|-->+------------------+
- *                                 +-----------------------+   |Data page 1048575 |
- *                                                             +------------------+
+ *                                 |Data page 1048575 refid|
+ *                                 +-----------------------+
  *
  * Using such 2 level structure it is possible to reference up to 4GB of
  * shared data using single refid pointing to top level page.
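  *
  * (Arithmetic behind the 4GB figure, added here for reference and
  * assuming 4KB pages with 32-bit refids: one 4KB page holds 1024
  * refids, so one top level page -> up to 1024 2nd level pages ->
  * up to 1024*1024 data pages, and 1024*1024*4KB = 4GB.)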
@@ -85,7 +86,7 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 	 * Calculate number of pages needed for 2nd level addressing:
 	 */
 	int n_lvl2_grefs = (nents/REFS_PER_PAGE +
-			   ((nents % REFS_PER_PAGE) ? 1: 0));
+			   ((nents % REFS_PER_PAGE) ? 1 : 0));
 
 	struct xen_shared_pages_info *sh_pages_info;
 	int i;
@@ -95,23 +96,22 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 
 	sh_pages_info = kmalloc(sizeof(*sh_pages_info), GFP_KERNEL);
 
-	if (!sh_pages_info) {
-		dev_err(hy_drv_priv->dev, "No more space left\n");
+	if (!sh_pages_info)
 		return -ENOMEM;
-	}
 
 	*refs_info = (void *)sh_pages_info;
 
 	/* share data pages in readonly mode for security */
-	for (i=0; i<nents; i++) {
+	for (i = 0; i < nents; i++) {
 		lvl2_table[i] = gnttab_grant_foreign_access(domid,
 					pfn_to_mfn(page_to_pfn(pages[i])),
-					true /* read-only from remote domain */);
+					true /* read only */);
 		if (lvl2_table[i] == -ENOSPC) {
-			dev_err(hy_drv_priv->dev, "No more space left in grant table\n");
+			dev_err(hy_drv_priv->dev,
+				"No more space left in grant table\n");
 
 			/* Unshare all already shared pages for lvl2 */
-			while(i--) {
+			while (i--) {
 				gnttab_end_foreign_access_ref(lvl2_table[i], 0);
 				gnttab_free_grant_reference(lvl2_table[i]);
 			}
@@ -120,23 +120,26 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 	}
 
 	/* Share 2nd level addressing pages in readonly mode*/
-	for (i=0; i< n_lvl2_grefs; i++) {
+	for (i = 0; i < n_lvl2_grefs; i++) {
 		lvl3_table[i] = gnttab_grant_foreign_access(domid,
-					virt_to_mfn((unsigned long)lvl2_table+i*PAGE_SIZE ),
+					virt_to_mfn(
+					(unsigned long)lvl2_table+i*PAGE_SIZE),
 					true);
 
 		if (lvl3_table[i] == -ENOSPC) {
-			dev_err(hy_drv_priv->dev, "No more space left in grant table\n");
+			dev_err(hy_drv_priv->dev,
+				"No more space left in grant table\n");
 
 			/* Unshare all already shared pages for lvl3 */
-			while(i--) {
+			while (i--) {
 				gnttab_end_foreign_access_ref(lvl3_table[i], 1);
 				gnttab_free_grant_reference(lvl3_table[i]);
 			}
 
 			/* Unshare all pages for lvl2 */
-			while(nents--) {
-				gnttab_end_foreign_access_ref(lvl2_table[nents], 0);
+			while (nents--) {
+				gnttab_end_foreign_access_ref(
+							lvl2_table[nents], 0);
 				gnttab_free_grant_reference(lvl2_table[nents]);
 			}
 
@@ -150,16 +153,17 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 			true);
 
 	if (lvl3_gref == -ENOSPC) {
-		dev_err(hy_drv_priv->dev, "No more space left in grant table\n");
+		dev_err(hy_drv_priv->dev,
+			"No more space left in grant table\n");
 
 		/* Unshare all pages for lvl3 */
-		while(i--) {
+		while (i--) {
 			gnttab_end_foreign_access_ref(lvl3_table[i], 1);
 			gnttab_free_grant_reference(lvl3_table[i]);
 		}
 
 		/* Unshare all pages for lvl2 */
-		while(nents--) {
+		while (nents--) {
 			gnttab_end_foreign_access_ref(lvl2_table[nents], 0);
 			gnttab_free_grant_reference(lvl2_table[nents]);
 		}
@@ -187,10 +191,11 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 	return -ENOSPC;
 }
 
-int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents) {
+int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents)
+{
 	struct xen_shared_pages_info *sh_pages_info;
 	int n_lvl2_grefs = (nents/REFS_PER_PAGE +
-			    ((nents % REFS_PER_PAGE) ? 1: 0));
+			    ((nents % REFS_PER_PAGE) ? 1 : 0));
 	int i;
 
 	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
@@ -206,28 +211,28 @@ int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents) {
 
 	/* End foreign access for data pages, but do not free them */
 	for (i = 0; i < nents; i++) {
-		if (gnttab_query_foreign_access(sh_pages_info->lvl2_table[i])) {
+		if (gnttab_query_foreign_access(sh_pages_info->lvl2_table[i]))
 			dev_warn(hy_drv_priv->dev, "refid not shared !!\n");
-		}
+
 		gnttab_end_foreign_access_ref(sh_pages_info->lvl2_table[i], 0);
 		gnttab_free_grant_reference(sh_pages_info->lvl2_table[i]);
 	}
 
 	/* End foreign access for 2nd level addressing pages */
 	for (i = 0; i < n_lvl2_grefs; i++) {
-		if (gnttab_query_foreign_access(sh_pages_info->lvl3_table[i])) {
+		if (gnttab_query_foreign_access(sh_pages_info->lvl3_table[i]))
 			dev_warn(hy_drv_priv->dev, "refid not shared !!\n");
-		}
-		if (!gnttab_end_foreign_access_ref(sh_pages_info->lvl3_table[i], 1)) {
+
+		if (!gnttab_end_foreign_access_ref(
+					sh_pages_info->lvl3_table[i], 1))
 			dev_warn(hy_drv_priv->dev, "refid still in use!!!\n");
-		}
+
 		gnttab_free_grant_reference(sh_pages_info->lvl3_table[i]);
 	}
 
 	/* End foreign access for top level addressing page */
-	if (gnttab_query_foreign_access(sh_pages_info->lvl3_gref)) {
+	if (gnttab_query_foreign_access(sh_pages_info->lvl3_gref))
 		dev_warn(hy_drv_priv->dev, "gref not shared !!\n");
-	}
 
 	gnttab_end_foreign_access_ref(sh_pages_info->lvl3_gref, 1);
 	gnttab_free_grant_reference(sh_pages_info->lvl3_gref);
@@ -246,10 +251,11 @@ int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents) {
 	return 0;
 }
 
-/*
- * Maps provided top level ref id and then return array of pages containing data refs.
+/* Maps provided top level ref id and then returns an array of pages
+ * containing data refs.
  */
-struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int nents, void **refs_info)
+struct page **hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid,
+						int nents, void **refs_info)
 {
 	struct page *lvl3_table_page;
 	struct page **lvl2_table_pages;
@@ -280,19 +286,19 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 	sh_pages_info = kmalloc(sizeof(*sh_pages_info), GFP_KERNEL);
 	*refs_info = (void *) sh_pages_info;
 
-	lvl2_table_pages = kcalloc(sizeof(struct page*), n_lvl2_grefs,
+	lvl2_table_pages = kcalloc(n_lvl2_grefs, sizeof(struct page *),
 				   GFP_KERNEL);
 
-	data_pages = kcalloc(sizeof(struct page*), nents, GFP_KERNEL);
+	data_pages = kcalloc(nents, sizeof(struct page *), GFP_KERNEL);
 
-	lvl2_map_ops = kcalloc(sizeof(*lvl2_map_ops), n_lvl2_grefs,
+	lvl2_map_ops = kcalloc(n_lvl2_grefs, sizeof(*lvl2_map_ops),
 			       GFP_KERNEL);
 
-	lvl2_unmap_ops = kcalloc(sizeof(*lvl2_unmap_ops), n_lvl2_grefs,
+	lvl2_unmap_ops = kcalloc(n_lvl2_grefs, sizeof(*lvl2_unmap_ops),
 				 GFP_KERNEL);
 
-	data_map_ops = kcalloc(sizeof(*data_map_ops), nents, GFP_KERNEL);
-	data_unmap_ops = kcalloc(sizeof(*data_unmap_ops), nents, GFP_KERNEL);
+	data_map_ops = kcalloc(nents, sizeof(*data_map_ops), GFP_KERNEL);
+	data_unmap_ops = kcalloc(nents, sizeof(*data_unmap_ops), GFP_KERNEL);
 
 	/* Map top level addressing page */
 	if (gnttab_alloc_pages(1, &lvl3_table_page)) {
@@ -332,7 +338,8 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 	}
 
 	for (i = 0; i < n_lvl2_grefs; i++) {
-		lvl2_table = (grant_ref_t *)pfn_to_kaddr(page_to_pfn(lvl2_table_pages[i]));
+		lvl2_table = (grant_ref_t *)pfn_to_kaddr(
+					page_to_pfn(lvl2_table_pages[i]));
 		gnttab_set_map_op(&lvl2_map_ops[i],
 				  (unsigned long)lvl2_table, GNTMAP_host_map |
 				  GNTMAP_readonly,
@@ -348,11 +355,11 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 		dev_err(hy_drv_priv->dev,
 			"xen: cannot unmap top level page\n");
 		return NULL;
-	} else {
-		/* Mark that page was unmapped */
-		lvl3_unmap_ops.handle = -1;
 	}
 
+	/* Mark that page was unmapped */
+	lvl3_unmap_ops.handle = -1;
+
 	if (gnttab_map_refs(lvl2_map_ops, NULL,
 			    lvl2_table_pages, n_lvl2_grefs)) {
 		dev_err(hy_drv_priv->dev,
@@ -384,19 +391,22 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 		lvl2_table = pfn_to_kaddr(page_to_pfn(lvl2_table_pages[i]));
 		for (j = 0; j < REFS_PER_PAGE; j++) {
 			gnttab_set_map_op(&data_map_ops[k],
-				(unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
+				(unsigned long)pfn_to_kaddr(
+						page_to_pfn(data_pages[k])),
 				GNTMAP_host_map | GNTMAP_readonly,
 				lvl2_table[j], domid);
 
 			gnttab_set_unmap_op(&data_unmap_ops[k],
-				(unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
+				(unsigned long)pfn_to_kaddr(
+						page_to_pfn(data_pages[k])),
 				GNTMAP_host_map | GNTMAP_readonly, -1);
 			k++;
 		}
 	}
 
 	/* for grefs in the last lvl2 table page */
-	lvl2_table = pfn_to_kaddr(page_to_pfn(lvl2_table_pages[n_lvl2_grefs - 1]));
+	lvl2_table = pfn_to_kaddr(page_to_pfn(
+				lvl2_table_pages[n_lvl2_grefs - 1]));
 
 	for (j = 0; j < nents_last; j++) {
 		gnttab_set_map_op(&data_map_ops[k],
@@ -424,13 +434,12 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 		dev_err(hy_drv_priv->dev,
 			"Cannot unmap 2nd level refs\n");
 		return NULL;
-	} else {
-		/* Mark that pages were unmapped */
-		for (i = 0; i < n_lvl2_grefs; i++) {
-			lvl2_unmap_ops[i].handle = -1;
-		}
 	}
 
+	/* Mark that pages were unmapped */
+	for (i = 0; i < n_lvl2_grefs; i++)
+		lvl2_unmap_ops[i].handle = -1;
+
 	for (i = 0; i < nents; i++) {
 		if (data_map_ops[i].status) {
 			dev_err(hy_drv_priv->dev,
@@ -483,7 +492,8 @@ struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int n
 	return NULL;
 }
 
-int hyper_dmabuf_xen_unmap_shared_pages(void **refs_info, int nents) {
+int hyper_dmabuf_xen_unmap_shared_pages(void **refs_info, int nents)
+{
 	struct xen_shared_pages_info *sh_pages_info;
 
 	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
@@ -498,7 +508,7 @@ int hyper_dmabuf_xen_unmap_shared_pages(void **refs_info, int nents) {
 	}
 
 	if (gnttab_unmap_refs(sh_pages_info->unmap_ops, NULL,
-			      sh_pages_info->data_pages, nents) ) {
+			      sh_pages_info->data_pages, nents)) {
 		dev_err(hy_drv_priv->dev, "Cannot unmap data pages\n");
 		return -EFAULT;
 	}
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
index 629ec0f..e7ae731 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
@@ -25,18 +25,21 @@
 #ifndef __HYPER_DMABUF_XEN_SHM_H__
 #define __HYPER_DMABUF_XEN_SHM_H__
 
-/* This collects all reference numbers for 2nd level shared pages and create a table
- * with those in 1st level shared pages then return reference numbers for this top level
- * table. */
+/* This collects all reference numbers for 2nd level shared pages and
+ * creates a table with those in 1st level shared pages, then returns
+ * reference numbers for this top level table.
+ */
 int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 				 void **refs_info);
 
 int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents);
 
-/* Maps provided top level ref id and then return array of pages containing data refs.
+/* Maps provided top level ref id and then returns an array of pages containing
+ * data refs.
  */
-struct page ** hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid, int nents,
-						void **refs_info);
+struct page **hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid,
+						 int nents,
+						 void **refs_info);
 
 int hyper_dmabuf_xen_unmap_shared_pages(void **refs_info, int nents);
 
diff --git a/include/uapi/xen/hyper_dmabuf.h b/include/uapi/xen/hyper_dmabuf.h
index e18dd9b..cb25299 100644
--- a/include/uapi/xen/hyper_dmabuf.h
+++ b/include/uapi/xen/hyper_dmabuf.h
@@ -28,8 +28,8 @@
 #define MAX_SIZE_PRIV_DATA 192
 
 typedef struct {
-        int id;
-        int rng_key[3]; /* 12bytes long random number */
+	int id;
+	int rng_key[3]; /* 12-byte random number */
 } hyper_dmabuf_id_t;
 
 struct hyper_dmabuf_event_hdr {
@@ -115,20 +115,20 @@ struct ioctl_hyper_dmabuf_query {
 /* DMABUF query */
 
 enum hyper_dmabuf_query {
-        HYPER_DMABUF_QUERY_TYPE = 0x10,
-        HYPER_DMABUF_QUERY_EXPORTER,
-        HYPER_DMABUF_QUERY_IMPORTER,
-        HYPER_DMABUF_QUERY_SIZE,
-        HYPER_DMABUF_QUERY_BUSY,
-        HYPER_DMABUF_QUERY_UNEXPORTED,
-        HYPER_DMABUF_QUERY_DELAYED_UNEXPORTED,
-        HYPER_DMABUF_QUERY_PRIV_INFO_SIZE,
-        HYPER_DMABUF_QUERY_PRIV_INFO,
+	HYPER_DMABUF_QUERY_TYPE = 0x10,
+	HYPER_DMABUF_QUERY_EXPORTER,
+	HYPER_DMABUF_QUERY_IMPORTER,
+	HYPER_DMABUF_QUERY_SIZE,
+	HYPER_DMABUF_QUERY_BUSY,
+	HYPER_DMABUF_QUERY_UNEXPORTED,
+	HYPER_DMABUF_QUERY_DELAYED_UNEXPORTED,
+	HYPER_DMABUF_QUERY_PRIV_INFO_SIZE,
+	HYPER_DMABUF_QUERY_PRIV_INFO,
 };
 
 enum hyper_dmabuf_status {
-        EXPORTED= 0x01,
-        IMPORTED,
+	EXPORTED = 0x01,
+	IMPORTED,
 };
 
 #endif //__LINUX_PUBLIC_HYPER_DMABUF_H__
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 51/60] hyper_dmabuf: missing mutex_unlock and move spinlock
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:30   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

Added a missing mutex_unlock to make sure the mutex is released
before returning.

Also moved the spinlock lock/unlock into hyper_dmabuf_send_event
and removed the assertion that the caller already holds the
spinlock, which makes the locking more straightforward.

This patch also includes a couple of minor modifications: making
several functions static and correcting some error codes.
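
As a condensed sketch of the caller-side effect (derived from the
hyper_dmabuf_import_event hunk below; declarations and error
handling elided):

	/* before: the caller managed the event lock itself */
	spin_lock_irqsave(&hy_drv_priv->event_lock, irqflags);
	hyper_dmabuf_send_event_locked(e);
	spin_unlock_irqrestore(&hy_drv_priv->event_lock, irqflags);

	/* after: the lock is taken and released inside the call */
	hyper_dmabuf_send_event(e);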

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c   | 38 +++++++++++++--------------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c | 15 +++++------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c |  8 ++++--
 3 files changed, 30 insertions(+), 31 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 023d7f4..76f57c2 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -73,7 +73,7 @@ static void hyper_dmabuf_force_free(struct exported_sgt_info *exported,
 	}
 }
 
-int hyper_dmabuf_open(struct inode *inode, struct file *filp)
+static int hyper_dmabuf_open(struct inode *inode, struct file *filp)
 {
 	int ret = 0;
 
@@ -84,7 +84,7 @@ int hyper_dmabuf_open(struct inode *inode, struct file *filp)
 	return ret;
 }
 
-int hyper_dmabuf_release(struct inode *inode, struct file *filp)
+static int hyper_dmabuf_release(struct inode *inode, struct file *filp)
 {
 	hyper_dmabuf_foreach_exported(hyper_dmabuf_force_free, filp);
 
@@ -93,20 +93,18 @@ int hyper_dmabuf_release(struct inode *inode, struct file *filp)
 
 #ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
 
-unsigned int hyper_dmabuf_event_poll(struct file *filp,
+static unsigned int hyper_dmabuf_event_poll(struct file *filp,
 				     struct poll_table_struct *wait)
 {
-	unsigned int mask = 0;
-
 	poll_wait(filp, &hy_drv_priv->event_wait, wait);
 
 	if (!list_empty(&hy_drv_priv->event_list))
-		mask |= POLLIN | POLLRDNORM;
+		return POLLIN | POLLRDNORM;
 
-	return mask;
+	return 0;
 }
 
-ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
+static ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
 		size_t count, loff_t *offset)
 {
 	int ret;
@@ -115,14 +113,14 @@ ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
 	if (!capable(CAP_DAC_OVERRIDE)) {
 		dev_err(hy_drv_priv->dev,
 			"Only root can read events\n");
-		return -EFAULT;
+		return -EPERM;
 	}
 
 	/* make sure user buffer can be written */
 	if (!access_ok(VERIFY_WRITE, buffer, count)) {
 		dev_err(hy_drv_priv->dev,
 			"User buffer can't be written.\n");
-		return -EFAULT;
+		return -EINVAL;
 	}
 
 	ret = mutex_lock_interruptible(&hy_drv_priv->event_read_lock);
@@ -143,6 +141,7 @@ ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
 		if (!e) {
 			if (ret)
 				break;
+
 			if (filp->f_flags & O_NONBLOCK) {
 				ret = -EAGAIN;
 				break;
@@ -233,7 +232,7 @@ static struct miscdevice hyper_dmabuf_miscdev = {
 	.fops = &hyper_dmabuf_driver_fops,
 };
 
-int register_device(void)
+static int register_device(void)
 {
 	int ret = 0;
 
@@ -252,7 +251,7 @@ int register_device(void)
 	return ret;
 }
 
-void unregister_device(void)
+static void unregister_device(void)
 {
 	dev_info(hy_drv_priv->dev,
 		"hyper_dmabuf: unregister_device() is called\n");
@@ -269,10 +268,8 @@ static int __init hyper_dmabuf_drv_init(void)
 	hy_drv_priv = kcalloc(1, sizeof(struct hyper_dmabuf_private),
 			      GFP_KERNEL);
 
-	if (!hy_drv_priv) {
-		printk(KERN_ERR "hyper_dmabuf: Failed to create drv\n");
-		return -1;
-	}
+	if (!hy_drv_priv)
+		return -ENOMEM;
 
 	ret = register_device();
 	if (ret < 0)
@@ -291,7 +288,6 @@ static int __init hyper_dmabuf_drv_init(void)
 		return -1;
 	}
 
-	/* initializing mutexes and a spinlock */
 	mutex_init(&hy_drv_priv->lock);
 
 	mutex_lock(&hy_drv_priv->lock);
@@ -301,14 +297,14 @@ static int __init hyper_dmabuf_drv_init(void)
 	dev_info(hy_drv_priv->dev,
 		 "initializing database for imported/exported dmabufs\n");
 
-	/* device structure initialization */
-	/* currently only does work-queue initialization */
 	hy_drv_priv->work_queue = create_workqueue("hyper_dmabuf_wqueue");
 
 	ret = hyper_dmabuf_table_init();
 	if (ret < 0) {
 		dev_err(hy_drv_priv->dev,
 			"failed to initialize table for exported/imported entries\n");
+		mutex_unlock(&hy_drv_priv->lock);
+		kfree(hy_drv_priv);
 		return ret;
 	}
 
@@ -317,6 +313,8 @@ static int __init hyper_dmabuf_drv_init(void)
 	if (ret < 0) {
 		dev_err(hy_drv_priv->dev,
 			"failed to initialize sysfs\n");
+		mutex_unlock(&hy_drv_priv->lock);
+		kfree(hy_drv_priv);
 		return ret;
 	}
 #endif
@@ -338,7 +336,7 @@ static int __init hyper_dmabuf_drv_init(void)
 	ret = hy_drv_priv->backend_ops->init_comm_env();
 	if (ret < 0) {
 		dev_dbg(hy_drv_priv->dev,
-			"failed to initialize comm-env but it will re-attempt.\n");
+			"failed to initialize comm-env.\n");
 	} else {
 		hy_drv_priv->initialized = true;
 	}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
index a4945af..ae8cb43 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
@@ -37,11 +37,12 @@
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_event.h"
 
-static void hyper_dmabuf_send_event_locked(struct hyper_dmabuf_event *e)
+static void hyper_dmabuf_send_event(struct hyper_dmabuf_event *e)
 {
 	struct hyper_dmabuf_event *oldest;
+	unsigned long irqflags;
 
-	assert_spin_locked(&hy_drv_priv->event_lock);
+	spin_lock_irqsave(&hy_drv_priv->event_lock, irqflags);
 
 	/* check current number of event then if it hits the max num allowed
 	 * then remove the oldest event in the list
@@ -60,6 +61,8 @@ static void hyper_dmabuf_send_event_locked(struct hyper_dmabuf_event *e)
 	hy_drv_priv->pending++;
 
 	wake_up_interruptible(&hy_drv_priv->event_wait);
+
+	spin_unlock_irqrestore(&hy_drv_priv->event_lock, irqflags);
 }
 
 void hyper_dmabuf_events_release(void)
@@ -89,8 +92,6 @@ int hyper_dmabuf_import_event(hyper_dmabuf_id_t hid)
 	struct hyper_dmabuf_event *e;
 	struct imported_sgt_info *imported;
 
-	unsigned long irqflags;
-
 	imported = hyper_dmabuf_find_imported(hid);
 
 	if (!imported) {
@@ -109,11 +110,7 @@ int hyper_dmabuf_import_event(hyper_dmabuf_id_t hid)
 	e->event_data.data = (void *)imported->priv;
 	e->event_data.hdr.size = imported->sz_priv;
 
-	spin_lock_irqsave(&hy_drv_priv->event_lock, irqflags);
-
-	hyper_dmabuf_send_event_locked(e);
-
-	spin_unlock_irqrestore(&hy_drv_priv->event_lock, irqflags);
+	hyper_dmabuf_send_event(e);
 
 	dev_dbg(hy_drv_priv->dev,
 		"event number = %d :", hy_drv_priv->pending);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index f9040ed..195cede 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -441,8 +441,10 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
-	if (!req)
+	if (!req) {
+		mutex_unlock(&hy_drv_priv->lock);
 		return -ENOMEM;
+	}
 
 	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD, &op[0]);
 
@@ -509,8 +511,10 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 
 			req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
-			if (!req)
+			if (!req) {
+				mutex_unlock(&hy_drv_priv->lock);
 				return -ENOMEM;
+			}
 
 			hyper_dmabuf_create_req(req,
 						HYPER_DMABUF_EXPORT_FD_FAILED,
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 51/60] hyper_dmabuf: missing mutex_unlock and move spinlock
@ 2017-12-19 19:30   ` Dongwon Kim
  0 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: xen-devel, mateuszx.potrola, dri-devel, dongwon.kim

Added missing mutex_unlock to make sure mutex is unlocked
before returning.

Also, moved spinlock lock/unlock into hyper_dmabuf_send_event
and remove checking on spinlock (with assumption caller does
the spinlock in advance) to make it more straight forward.

This patch includes a couple of minor modifications, changing type
of function calls to static and correcting some of error code.

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c   | 38 +++++++++++++--------------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c | 15 +++++------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c |  8 ++++--
 3 files changed, 30 insertions(+), 31 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 023d7f4..76f57c2 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -73,7 +73,7 @@ static void hyper_dmabuf_force_free(struct exported_sgt_info *exported,
 	}
 }
 
-int hyper_dmabuf_open(struct inode *inode, struct file *filp)
+static int hyper_dmabuf_open(struct inode *inode, struct file *filp)
 {
 	int ret = 0;
 
@@ -84,7 +84,7 @@ int hyper_dmabuf_open(struct inode *inode, struct file *filp)
 	return ret;
 }
 
-int hyper_dmabuf_release(struct inode *inode, struct file *filp)
+static int hyper_dmabuf_release(struct inode *inode, struct file *filp)
 {
 	hyper_dmabuf_foreach_exported(hyper_dmabuf_force_free, filp);
 
@@ -93,20 +93,18 @@ int hyper_dmabuf_release(struct inode *inode, struct file *filp)
 
 #ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
 
-unsigned int hyper_dmabuf_event_poll(struct file *filp,
+static unsigned int hyper_dmabuf_event_poll(struct file *filp,
 				     struct poll_table_struct *wait)
 {
-	unsigned int mask = 0;
-
 	poll_wait(filp, &hy_drv_priv->event_wait, wait);
 
 	if (!list_empty(&hy_drv_priv->event_list))
-		mask |= POLLIN | POLLRDNORM;
+		return POLLIN | POLLRDNORM;
 
-	return mask;
+	return 0;
 }
 
-ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
+static ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
 		size_t count, loff_t *offset)
 {
 	int ret;
@@ -115,14 +113,14 @@ ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
 	if (!capable(CAP_DAC_OVERRIDE)) {
 		dev_err(hy_drv_priv->dev,
 			"Only root can read events\n");
-		return -EFAULT;
+		return -EPERM;
 	}
 
 	/* make sure user buffer can be written */
 	if (!access_ok(VERIFY_WRITE, buffer, count)) {
 		dev_err(hy_drv_priv->dev,
 			"User buffer can't be written.\n");
-		return -EFAULT;
+		return -EINVAL;
 	}
 
 	ret = mutex_lock_interruptible(&hy_drv_priv->event_read_lock);
@@ -143,6 +141,7 @@ ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
 		if (!e) {
 			if (ret)
 				break;
+
 			if (filp->f_flags & O_NONBLOCK) {
 				ret = -EAGAIN;
 				break;
@@ -233,7 +232,7 @@ static struct miscdevice hyper_dmabuf_miscdev = {
 	.fops = &hyper_dmabuf_driver_fops,
 };
 
-int register_device(void)
+static int register_device(void)
 {
 	int ret = 0;
 
@@ -252,7 +251,7 @@ int register_device(void)
 	return ret;
 }
 
-void unregister_device(void)
+static void unregister_device(void)
 {
 	dev_info(hy_drv_priv->dev,
 		"hyper_dmabuf: unregister_device() is called\n");
@@ -269,10 +268,8 @@ static int __init hyper_dmabuf_drv_init(void)
 	hy_drv_priv = kcalloc(1, sizeof(struct hyper_dmabuf_private),
 			      GFP_KERNEL);
 
-	if (!hy_drv_priv) {
-		printk(KERN_ERR "hyper_dmabuf: Failed to create drv\n");
-		return -1;
-	}
+	if (!hy_drv_priv)
+		return -ENOMEM;
 
 	ret = register_device();
 	if (ret < 0)
@@ -291,7 +288,6 @@ static int __init hyper_dmabuf_drv_init(void)
 		return -1;
 	}
 
-	/* initializing mutexes and a spinlock */
 	mutex_init(&hy_drv_priv->lock);
 
 	mutex_lock(&hy_drv_priv->lock);
@@ -301,14 +297,14 @@ static int __init hyper_dmabuf_drv_init(void)
 	dev_info(hy_drv_priv->dev,
 		 "initializing database for imported/exported dmabufs\n");
 
-	/* device structure initialization */
-	/* currently only does work-queue initialization */
 	hy_drv_priv->work_queue = create_workqueue("hyper_dmabuf_wqueue");
 
 	ret = hyper_dmabuf_table_init();
 	if (ret < 0) {
 		dev_err(hy_drv_priv->dev,
 			"failed to initialize table for exported/imported entries\n");
+		mutex_unlock(&hy_drv_priv->lock);
+		kfree(hy_drv_priv);
 		return ret;
 	}
 
@@ -317,6 +313,8 @@ static int __init hyper_dmabuf_drv_init(void)
 	if (ret < 0) {
 		dev_err(hy_drv_priv->dev,
 			"failed to initialize sysfs\n");
+		mutex_unlock(&hy_drv_priv->lock);
+		kfree(hy_drv_priv);
 		return ret;
 	}
 #endif
@@ -338,7 +336,7 @@ static int __init hyper_dmabuf_drv_init(void)
 	ret = hy_drv_priv->backend_ops->init_comm_env();
 	if (ret < 0) {
 		dev_dbg(hy_drv_priv->dev,
-			"failed to initialize comm-env but it will re-attempt.\n");
+			"failed to initialize comm-env.\n");
 	} else {
 		hy_drv_priv->initialized = true;
 	}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
index a4945af..ae8cb43 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
@@ -37,11 +37,12 @@
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_event.h"
 
-static void hyper_dmabuf_send_event_locked(struct hyper_dmabuf_event *e)
+static void hyper_dmabuf_send_event(struct hyper_dmabuf_event *e)
 {
 	struct hyper_dmabuf_event *oldest;
+	unsigned long irqflags;
 
-	assert_spin_locked(&hy_drv_priv->event_lock);
+	spin_lock_irqsave(&hy_drv_priv->event_lock, irqflags);
 
 	/* check current number of event then if it hits the max num allowed
 	 * then remove the oldest event in the list
@@ -60,6 +61,8 @@ static void hyper_dmabuf_send_event_locked(struct hyper_dmabuf_event *e)
 	hy_drv_priv->pending++;
 
 	wake_up_interruptible(&hy_drv_priv->event_wait);
+
+	spin_unlock_irqrestore(&hy_drv_priv->event_lock, irqflags);
 }
 
 void hyper_dmabuf_events_release(void)
@@ -89,8 +92,6 @@ int hyper_dmabuf_import_event(hyper_dmabuf_id_t hid)
 	struct hyper_dmabuf_event *e;
 	struct imported_sgt_info *imported;
 
-	unsigned long irqflags;
-
 	imported = hyper_dmabuf_find_imported(hid);
 
 	if (!imported) {
@@ -109,11 +110,7 @@ int hyper_dmabuf_import_event(hyper_dmabuf_id_t hid)
 	e->event_data.data = (void *)imported->priv;
 	e->event_data.hdr.size = imported->sz_priv;
 
-	spin_lock_irqsave(&hy_drv_priv->event_lock, irqflags);
-
-	hyper_dmabuf_send_event_locked(e);
-
-	spin_unlock_irqrestore(&hy_drv_priv->event_lock, irqflags);
+	hyper_dmabuf_send_event(e);
 
 	dev_dbg(hy_drv_priv->dev,
 		"event number = %d :", hy_drv_priv->pending);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index f9040ed..195cede 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -441,8 +441,10 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
-	if (!req)
+	if (!req) {
+		mutex_unlock(&hy_drv_priv->lock);
 		return -ENOMEM;
+	}
 
 	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD, &op[0]);
 
@@ -509,8 +511,10 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 
 			req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
-			if (!req)
+			if (!req) {
+				mutex_unlock(&hy_drv_priv->lock);
 				return -ENOMEM;
+			}
 
 			hyper_dmabuf_create_req(req,
 						HYPER_DMABUF_EXPORT_FD_FAILED,
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 52/60] hyper_dmabuf: remove prefix 'hyper_dmabuf' from static func and backend APIs
  2017-12-19 19:29 ` Dongwon Kim
                   ` (69 preceding siblings ...)
  (?)
@ 2017-12-19 19:30 ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

Removed prefix "hyper_dmabuf" from backend functions and static func
(except for driver APIs) and add 'be' after 'xen' in backend function
calls to show those are backend APIs.

Also, modified some of function names for clarification and  addressed
some missing errors and warnings in hyper_dmabuf_list.c and
hyper_dmabuf_list.h.

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        |   9 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c      |   6 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c         |   9 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h         |   8 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      |  23 ++---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c       | 113 +++++++++++----------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h       |  20 ++--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        |   4 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c        |  54 +++++-----
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c    |   4 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c   |  20 +---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h   |   2 -
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   |  45 ++++----
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h   |  20 ++--
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c  |   1 -
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c    |  26 ++---
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c    |  14 ++-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h    |  14 +--
 18 files changed, 179 insertions(+), 213 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 76f57c2..387cc63 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -37,7 +37,6 @@
 #include <linux/dma-buf.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_ioctl.h"
-#include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_id.h"
 #include "hyper_dmabuf_event.h"
@@ -51,8 +50,8 @@ MODULE_AUTHOR("Intel Corporation");
 
 struct hyper_dmabuf_private *hy_drv_priv;
 
-static void hyper_dmabuf_force_free(struct exported_sgt_info *exported,
-				    void *attr)
+static void force_free(struct exported_sgt_info *exported,
+		       void *attr)
 {
 	struct ioctl_hyper_dmabuf_unexport unexport_attr;
 	struct file *filp = (struct file *)attr;
@@ -86,7 +85,7 @@ static int hyper_dmabuf_open(struct inode *inode, struct file *filp)
 
 static int hyper_dmabuf_release(struct inode *inode, struct file *filp)
 {
-	hyper_dmabuf_foreach_exported(hyper_dmabuf_force_free, filp);
+	hyper_dmabuf_foreach_exported(force_free, filp);
 
 	return 0;
 }
@@ -369,7 +368,7 @@ static void hyper_dmabuf_drv_exit(void)
 
 	/* destroy id_queue */
 	if (hy_drv_priv->id_queue)
-		destroy_reusable_list();
+		hyper_dmabuf_free_hid_list();
 
 #ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
 	/* clean up event queue */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
index ae8cb43..392ea99 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
@@ -28,16 +28,14 @@
 
 #include <linux/kernel.h>
 #include <linux/errno.h>
-#include <linux/fs.h>
 #include <linux/slab.h>
 #include <linux/module.h>
-#include <linux/dma-buf.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_event.h"
 
-static void hyper_dmabuf_send_event(struct hyper_dmabuf_event *e)
+static void send_event(struct hyper_dmabuf_event *e)
 {
 	struct hyper_dmabuf_event *oldest;
 	unsigned long irqflags;
@@ -110,7 +108,7 @@ int hyper_dmabuf_import_event(hyper_dmabuf_id_t hid)
 	e->event_data.data = (void *)imported->priv;
 	e->event_data.hdr.size = imported->sz_priv;
 
-	hyper_dmabuf_send_event(e);
+	send_event(e);
 
 	dev_dbg(hy_drv_priv->dev,
 		"event number = %d :", hy_drv_priv->pending);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
index 312dea5..e67b84a 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
@@ -31,9 +31,8 @@
 #include <linux/random.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_id.h"
-#include "hyper_dmabuf_msg.h"
 
-void store_reusable_hid(hyper_dmabuf_id_t hid)
+void hyper_dmabuf_store_hid(hyper_dmabuf_id_t hid)
 {
 	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
 	struct list_reusable_id *new_reusable;
@@ -48,7 +47,7 @@ void store_reusable_hid(hyper_dmabuf_id_t hid)
 	list_add(&new_reusable->list, &reusable_head->list);
 }
 
-static hyper_dmabuf_id_t retrieve_reusable_hid(void)
+static hyper_dmabuf_id_t get_reusable_hid(void)
 {
 	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
 	hyper_dmabuf_id_t hid = {-1, {0, 0, 0} };
@@ -67,7 +66,7 @@ static hyper_dmabuf_id_t retrieve_reusable_hid(void)
 	return hid;
 }
 
-void destroy_reusable_list(void)
+void hyper_dmabuf_free_hid_list(void)
 {
 	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
 	struct list_reusable_id *temp_head;
@@ -106,7 +105,7 @@ hyper_dmabuf_id_t hyper_dmabuf_get_hid(void)
 		hy_drv_priv->id_queue = reusable_head;
 	}
 
-	hid = retrieve_reusable_hid();
+	hid = get_reusable_hid();
 
 	/*creating a new H-ID only if nothing in the reusable id queue
 	 * and count is less than maximum allowed
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
index 61c4fb3..ed690f3 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
@@ -36,12 +36,16 @@
  */
 #define HYPER_DMABUF_ID_MAX 1000
 
-void store_reusable_hid(hyper_dmabuf_id_t hid);
+/* adding freed hid to the reusable list */
+void hyper_dmabuf_store_hid(hyper_dmabuf_id_t hid);
 
-void destroy_reusable_list(void);
+/* freeing the reusable list */
+void hyper_dmabuf_free_hid_list(void);
 
+/* getting a hid available for use */
 hyper_dmabuf_id_t hyper_dmabuf_get_hid(void);
 
+/* comparing two hids */
 bool hyper_dmabuf_hid_keycomp(hyper_dmabuf_id_t hid1, hyper_dmabuf_id_t hid2);
 
 #endif /*__HYPER_DMABUF_ID_H*/
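
The three calls above form a small recycle loop for buffer IDs. A hypothetical sketch of the intended lifecycle (the negative-id check mirrors the {-1, {0, 0, 0} } sentinel used elsewhere in the series; the error code is illustrative):

	hyper_dmabuf_id_t hid;

	hid = hyper_dmabuf_get_hid();	/* reuse a freed hid or mint a new one */
	if (hid.id < 0)
		return -ENOSPC;		/* HYPER_DMABUF_ID_MAX already in use */

	/* ... export the buffer under hid ... */

	hyper_dmabuf_store_hid(hid);	/* on unexport, park the hid for reuse */

	/* hyper_dmabuf_free_hid_list() then runs once, at driver exit */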
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 195cede..b40cf89 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -28,13 +28,9 @@
 
 #include <linux/kernel.h>
 #include <linux/errno.h>
-#include <linux/module.h>
 #include <linux/slab.h>
-#include <linux/miscdevice.h>
 #include <linux/uaccess.h>
 #include <linux/dma-buf.h>
-#include <linux/delay.h>
-#include <linux/list.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_id.h"
 #include "hyper_dmabuf_struct.h"
@@ -80,8 +76,8 @@ static int hyper_dmabuf_rx_ch_setup_ioctl(struct file *filp, void *data)
 	return ret;
 }
 
-static int hyper_dmabuf_send_export_msg(struct exported_sgt_info *exported,
-					struct pages_info *pg_info)
+static int send_export_msg(struct exported_sgt_info *exported,
+			   struct pages_info *pg_info)
 {
 	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
 	struct hyper_dmabuf_req *req;
@@ -102,7 +98,7 @@ static int hyper_dmabuf_send_export_msg(struct exported_sgt_info *exported,
 					 pg_info->nents, &exported->refs_info);
 		if (op[7] < 0) {
 			dev_err(hy_drv_priv->dev, "pages sharing failed\n");
-			return -1;
+			return op[7];
 		}
 	}
 
@@ -114,7 +110,7 @@ static int hyper_dmabuf_send_export_msg(struct exported_sgt_info *exported,
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
 	if (!req)
-		return -1;
+		return -ENOMEM;
 
 	/* composing a message to the importer */
 	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT, &op[0]);
@@ -212,7 +208,7 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 			ret = -EINVAL;
 		} else {
 			/* send an export msg for updating priv in importer */
-			ret = hyper_dmabuf_send_export_msg(exported, NULL);
+			ret = send_export_msg(exported, NULL);
 
 			if (ret < 0) {
 				dev_err(hy_drv_priv->dev,
@@ -347,7 +343,7 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 
 	export_remote_attr->hid = exported->hid;
 
-	ret = hyper_dmabuf_send_export_msg(exported, pg_info);
+	ret = send_export_msg(exported, pg_info);
 
 	if (ret < 0) {
 		dev_err(hy_drv_priv->dev,
@@ -550,7 +546,7 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 /* unexport dmabuf from the database and send a request to the source
  * domain to unmap it.
  */
-static void hyper_dmabuf_delayed_unexport(struct work_struct *work)
+static void delayed_unexport(struct work_struct *work)
 {
 	struct hyper_dmabuf_req *req;
 	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
@@ -612,7 +608,7 @@ static void hyper_dmabuf_delayed_unexport(struct work_struct *work)
 		hyper_dmabuf_remove_exported(exported->hid);
 
 		/* register hyper_dmabuf_id to the list for reuse */
-		store_reusable_hid(exported->hid);
+		hyper_dmabuf_store_hid(exported->hid);
 
 		if (exported->sz_priv > 0 && !exported->priv)
 			kfree(exported->priv);
@@ -649,8 +645,7 @@ int hyper_dmabuf_unexport_ioctl(struct file *filp, void *data)
 		return 0;
 
 	exported->unexport_sched = true;
-	INIT_DELAYED_WORK(&exported->unexport,
-			  hyper_dmabuf_delayed_unexport);
+	INIT_DELAYED_WORK(&exported->unexport, delayed_unexport);
 	schedule_delayed_work(&exported->unexport,
 			      msecs_to_jiffies(unexport_attr->delay_ms));
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
index 1b3745e..bba6d1d 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
@@ -28,12 +28,9 @@
 
 #include <linux/kernel.h>
 #include <linux/errno.h>
-#include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/cdev.h>
-#include <asm/uaccess.h>
 #include <linux/hashtable.h>
-#include <linux/dma-buf.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_id.h"
@@ -43,7 +40,9 @@ DECLARE_HASHTABLE(hyper_dmabuf_hash_imported, MAX_ENTRY_IMPORTED);
 DECLARE_HASHTABLE(hyper_dmabuf_hash_exported, MAX_ENTRY_EXPORTED);
 
 #ifdef CONFIG_HYPER_DMABUF_SYSFS
-static ssize_t hyper_dmabuf_imported_show(struct device *drv, struct device_attribute *attr, char *buf)
+static ssize_t hyper_dmabuf_imported_show(struct device *drv,
+					  struct device_attribute *attr,
+					  char *buf)
 {
 	struct list_entry_imported *info_entry;
 	int bkt;
@@ -55,19 +54,23 @@ static ssize_t hyper_dmabuf_imported_show(struct device *drv, struct device_attr
 		int nents = info_entry->imported->nents;
 		bool valid = info_entry->imported->valid;
 		int num_importers = info_entry->imported->importers;
+
 		total += nents;
 		count += scnprintf(buf + count, PAGE_SIZE - count,
-				   "hid:{id:%d keys:%d %d %d}, nents:%d, v:%c, numi:%d\n",
-				   hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2],
-				   nents, (valid ? 't' : 'f'), num_importers);
+				"hid:{%d %d %d %d}, nent:%d, v:%c, numi:%d\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1],
+				hid.rng_key[2], nents, (valid ? 't' : 'f'),
+				num_importers);
 	}
-	count += scnprintf(buf + count, PAGE_SIZE - count, "total nents: %lu\n",
-			   total);
+	count += scnprintf(buf + count, PAGE_SIZE - count,
+			   "total nents: %lu\n", total);
 
 	return count;
 }
 
-static ssize_t hyper_dmabuf_exported_show(struct device *drv, struct device_attribute *attr, char *buf)
+static ssize_t hyper_dmabuf_exported_show(struct device *drv,
+					  struct device_attribute *attr,
+					  char *buf)
 {
 	struct list_entry_exported *info_entry;
 	int bkt;
@@ -79,20 +82,22 @@ static ssize_t hyper_dmabuf_exported_show(struct device *drv, struct device_attr
 		int nents = info_entry->exported->nents;
 		bool valid = info_entry->exported->valid;
 		int importer_exported = info_entry->exported->active;
+
 		total += nents;
 		count += scnprintf(buf + count, PAGE_SIZE - count,
-				   "hid:{hid:%d keys:%d %d %d}, nents:%d, v:%c, ie:%d\n",
-				   hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2],
-				   nents, (valid ? 't' : 'f'), importer_exported);
+				   "hid:{%d %d %d %d}, nent:%d, v:%c, ie:%d\n",
+				   hid.id, hid.rng_key[0], hid.rng_key[1],
+				   hid.rng_key[2], nents, (valid ? 't' : 'f'),
+				   importer_exported);
 	}
-	count += scnprintf(buf + count, PAGE_SIZE - count, "total nents: %lu\n",
-			   total);
+	count += scnprintf(buf + count, PAGE_SIZE - count,
+			   "total nents: %lu\n", total);
 
 	return count;
 }
 
-static DEVICE_ATTR(imported, S_IRUSR, hyper_dmabuf_imported_show, NULL);
-static DEVICE_ATTR(exported, S_IRUSR, hyper_dmabuf_exported_show, NULL);
+static DEVICE_ATTR(imported, 0400, hyper_dmabuf_imported_show, NULL);
+static DEVICE_ATTR(exported, 0400, hyper_dmabuf_exported_show, NULL);
 
 int hyper_dmabuf_register_sysfs(struct device *dev)
 {
@@ -118,18 +123,21 @@ int hyper_dmabuf_unregister_sysfs(struct device *dev)
 	device_remove_file(dev, &dev_attr_exported);
 	return 0;
 }
+
 #endif
 
-int hyper_dmabuf_table_init()
+int hyper_dmabuf_table_init(void)
 {
 	hash_init(hyper_dmabuf_hash_imported);
 	hash_init(hyper_dmabuf_hash_exported);
 	return 0;
 }
 
-int hyper_dmabuf_table_destroy()
+int hyper_dmabuf_table_destroy(void)
 {
-	/* TODO: cleanup hyper_dmabuf_hash_imported and hyper_dmabuf_hash_exported */
+	/* TODO: cleanup hyper_dmabuf_hash_imported
+	 * and hyper_dmabuf_hash_exported
+	 */
 	return 0;
 }
 
@@ -139,11 +147,8 @@ int hyper_dmabuf_register_exported(struct exported_sgt_info *exported)
 
 	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
 
-	if (!info_entry) {
-		dev_err(hy_drv_priv->dev,
-                        "No memory left to be allocated\n");
+	if (!info_entry)
 		return -ENOMEM;
-	}
 
 	info_entry->exported = exported;
 
@@ -153,17 +158,14 @@ int hyper_dmabuf_register_exported(struct exported_sgt_info *exported)
 	return 0;
 }
 
-int hyper_dmabuf_register_imported(struct imported_sgt_info* imported)
+int hyper_dmabuf_register_imported(struct imported_sgt_info *imported)
 {
 	struct list_entry_imported *info_entry;
 
 	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
 
-	if (!info_entry) {
-		dev_err(hy_drv_priv->dev,
-                        "No memory left to be allocated\n");
+	if (!info_entry)
 		return -ENOMEM;
-	}
 
 	info_entry->imported = imported;
 
@@ -180,28 +182,32 @@ struct exported_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid)
 
 	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
 		/* checking hid.id first */
-		if(info_entry->exported->hid.id == hid.id) {
+		if (info_entry->exported->hid.id == hid.id) {
 			/* then key is compared */
-			if(hyper_dmabuf_hid_keycomp(info_entry->exported->hid, hid))
+			if (hyper_dmabuf_hid_keycomp(info_entry->exported->hid,
+						    hid))
 				return info_entry->exported;
-			/* if key is unmatched, given HID is invalid, so returning NULL */
-			else
-				break;
+
+			/* if key is unmatched, given HID is invalid,
+			 * so returning NULL
+			 */
+			break;
 		}
 
 	return NULL;
 }
 
 /* search for a pre-exported sgt and return its id if it exists */
-hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf, int domid)
+hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf,
+						 int domid)
 {
 	struct list_entry_exported *info_entry;
-	hyper_dmabuf_id_t hid = {-1, {0, 0, 0}};
+	hyper_dmabuf_id_t hid = {-1, {0, 0, 0} };
 	int bkt;
 
 	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
-		if(info_entry->exported->dma_buf == dmabuf &&
-		   info_entry->exported->rdomid == domid)
+		if (info_entry->exported->dma_buf == dmabuf &&
+		    info_entry->exported->rdomid == domid)
 			return info_entry->exported->hid;
 
 	return hid;
@@ -214,14 +220,15 @@ struct imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_id_t hid)
 
 	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
 		/* checking hid.id first */
-		if(info_entry->imported->hid.id == hid.id) {
+		if (info_entry->imported->hid.id == hid.id) {
 			/* then key is compared */
-			if(hyper_dmabuf_hid_keycomp(info_entry->imported->hid, hid))
+			if (hyper_dmabuf_hid_keycomp(info_entry->imported->hid,
+						    hid))
 				return info_entry->imported;
-			/* if key is unmatched, given HID is invalid, so returning NULL */
-			else {
-				break;
-			}
+			/* if key is unmatched, given HID is invalid,
+			 * so returning NULL
+			 */
+			break;
 		}
 
 	return NULL;
@@ -234,15 +241,16 @@ int hyper_dmabuf_remove_exported(hyper_dmabuf_id_t hid)
 
 	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
 		/* checking hid.id first */
-		if(info_entry->exported->hid.id == hid.id) {
+		if (info_entry->exported->hid.id == hid.id) {
 			/* then key is compared */
-			if(hyper_dmabuf_hid_keycomp(info_entry->exported->hid, hid)) {
+			if (hyper_dmabuf_hid_keycomp(info_entry->exported->hid,
+						    hid)) {
 				hash_del(&info_entry->node);
 				kfree(info_entry);
 				return 0;
-			} else {
-				break;
 			}
+
+			break;
 		}
 
 	return -ENOENT;
@@ -255,15 +263,16 @@ int hyper_dmabuf_remove_imported(hyper_dmabuf_id_t hid)
 
 	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
 		/* checking hid.id first */
-		if(info_entry->imported->hid.id == hid.id) {
+		if (info_entry->imported->hid.id == hid.id) {
 			/* then key is compared */
-			if(hyper_dmabuf_hid_keycomp(info_entry->imported->hid, hid)) {
+			if (hyper_dmabuf_hid_keycomp(info_entry->imported->hid,
+						    hid)) {
 				hash_del(&info_entry->node);
 				kfree(info_entry);
 				return 0;
-			} else {
-				break;
 			}
+
+			break;
 		}
 
 	return -ENOENT;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
index d5c17ef..f7102f5 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
@@ -33,13 +33,13 @@
 #define MAX_ENTRY_IMPORTED 7
 
 struct list_entry_exported {
-        struct exported_sgt_info *exported;
-        struct hlist_node node;
+	struct exported_sgt_info *exported;
+	struct hlist_node node;
 };
 
 struct list_entry_imported {
-        struct imported_sgt_info *imported;
-        struct hlist_node node;
+	struct imported_sgt_info *imported;
+	struct hlist_node node;
 };
 
 int hyper_dmabuf_table_init(void);
@@ -49,9 +49,10 @@ int hyper_dmabuf_table_destroy(void);
 int hyper_dmabuf_register_exported(struct exported_sgt_info *info);
 
 /* search for a pre-exported sgt and return its id if it exists */
-hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf, int domid);
+hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf,
+						 int domid);
 
-int hyper_dmabuf_register_imported(struct imported_sgt_info* info);
+int hyper_dmabuf_register_imported(struct imported_sgt_info *info);
 
 struct exported_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid);
 
@@ -61,11 +62,10 @@ int hyper_dmabuf_remove_exported(hyper_dmabuf_id_t hid);
 
 int hyper_dmabuf_remove_imported(hyper_dmabuf_id_t hid);
 
-void hyper_dmabuf_foreach_exported(
-	void (*func)(struct exported_sgt_info *, void *attr),
-	void *attr);
+void hyper_dmabuf_foreach_exported(void (*func)(struct exported_sgt_info *,
+				   void *attr), void *attr);
 
 int hyper_dmabuf_register_sysfs(struct device *dev);
 int hyper_dmabuf_unregister_sysfs(struct device *dev);
 
-#endif // __HYPER_DMABUF_LIST_H__
+#endif /* __HYPER_DMABUF_LIST_H__ */
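
All the lookup and remove helpers above share one two-step match: compare the cheap hid.id first, and only then the random keys via hyper_dmabuf_hid_keycomp(); an id hit with a key miss means the caller holds a stale or invalid handle, so the walk can stop early. Condensed, the loop body reduces to this (a restatement of the code above, not an addition to it):

	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node) {
		if (info_entry->exported->hid.id != hid.id)
			continue;	/* cheap integer compare first */
		if (hyper_dmabuf_hid_keycomp(info_entry->exported->hid, hid))
			return info_entry->exported;	/* full match */
		break;	/* id matched but keys did not: HID is invalid */
	}
	return NULL;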
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index fbbcc39..afc1fd6e 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -28,12 +28,10 @@
 
 #include <linux/kernel.h>
 #include <linux/errno.h>
-#include <linux/module.h>
 #include <linux/slab.h>
-#include <linux/dma-buf.h>
 #include <linux/workqueue.h>
-#include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_remote_sync.h"
 #include "hyper_dmabuf_event.h"
 #include "hyper_dmabuf_list.h"
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
index 03fdd30..bf805b1 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
@@ -28,9 +28,7 @@
 
 #include <linux/kernel.h>
 #include <linux/errno.h>
-#include <linux/fs.h>
 #include <linux/slab.h>
-#include <linux/module.h>
 #include <linux/dma-buf.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_struct.h"
@@ -43,7 +41,15 @@
 #define WAIT_AFTER_SYNC_REQ 0
 #define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
 
-static int hyper_dmabuf_sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
+static int dmabuf_refcount(struct dma_buf *dma_buf)
+{
+	if ((dma_buf != NULL) && (dma_buf->file != NULL))
+		return file_count(dma_buf->file);
+
+	return -EINVAL;
+}
+
+static int sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
 {
 	struct hyper_dmabuf_req *req;
 	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
@@ -90,8 +96,7 @@ static int hyper_dmabuf_ops_attach(struct dma_buf *dmabuf,
 
 	imported = (struct imported_sgt_info *)attach->dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_ATTACH);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_ATTACH);
 
 	return ret;
 }
@@ -107,8 +112,7 @@ static void hyper_dmabuf_ops_detach(struct dma_buf *dmabuf,
 
 	imported = (struct imported_sgt_info *)attach->dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_DETACH);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_DETACH);
 }
 
 static struct sg_table *hyper_dmabuf_ops_map(
@@ -140,8 +144,7 @@ static struct sg_table *hyper_dmabuf_ops_map(
 	if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir))
 		goto err_free_sg;
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_MAP);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_MAP);
 
 	kfree(pg_info->pgs);
 	kfree(pg_info);
@@ -177,8 +180,7 @@ static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
 	sg_free_table(sg);
 	kfree(sg);
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_UNMAP);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_UNMAP);
 }
 
 static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
@@ -211,8 +213,7 @@ static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 	finish = imported && !imported->valid &&
 		 !imported->importers;
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_RELEASE);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_RELEASE);
 
 	/*
 	 * Check if buffer is still valid and if not remove it
@@ -236,8 +237,7 @@ static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf,
 
 	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
 
 	return ret;
 }
@@ -253,8 +253,7 @@ static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf,
 
 	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_END_CPU_ACCESS);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_END_CPU_ACCESS);
 
 	return 0;
 }
@@ -270,8 +269,7 @@ static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf,
 
 	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_KMAP_ATOMIC);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KMAP_ATOMIC);
 
 	/* TODO: NULL for now. Need to return the addr of mapped region */
 	return NULL;
@@ -288,8 +286,7 @@ static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf,
 
 	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
 }
 
 static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
@@ -302,8 +299,7 @@ static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
 
 	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_KMAP);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KMAP);
 
 	/* for now NULL.. need to return the address of mapped region */
 	return NULL;
@@ -320,8 +316,7 @@ static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum,
 
 	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_KUNMAP);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KUNMAP);
 }
 
 static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf,
@@ -335,8 +330,7 @@ static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf,
 
 	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_MMAP);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_MMAP);
 
 	return ret;
 }
@@ -351,8 +345,7 @@ static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
 
 	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_VMAP);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_VMAP);
 
 	return NULL;
 }
@@ -367,8 +360,7 @@ static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
 
 	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_VUNMAP);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_VUNMAP);
 }
 
 static const struct dma_buf_ops hyper_dmabuf_ops = {
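
Every dma_buf op in this file now reduces to the same skeleton: recover the imported_sgt_info stashed in dmabuf->priv and forward one HYPER_DMABUF_OPS_* code to the exporting domain through sync_request(). A minimal sketch of that common shape (the op name and code here are hypothetical):

static int hyper_dmabuf_ops_example(struct dma_buf *dmabuf)
{
	struct imported_sgt_info *imported;

	if (!dmabuf->priv)
		return -EINVAL;

	imported = (struct imported_sgt_info *)dmabuf->priv;

	/* tell the exporter which operation just ran on the importer side */
	return sync_request(imported->hid, HYPER_DMABUF_OPS_EXAMPLE);
}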
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
index c9fe040..a82fd7b 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -28,7 +28,6 @@
 
 #include <linux/kernel.h>
 #include <linux/errno.h>
-#include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/dma-buf.h>
 #include "hyper_dmabuf_drv.h"
@@ -36,7 +35,6 @@
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_id.h"
-#include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_sgl_proc.h"
 
 /* Whenever importer does dma operations from remote domain,
@@ -189,7 +187,7 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 			hyper_dmabuf_remove_exported(hid);
 			kfree(exported);
 			/* store hyper_dmabuf_id in the list for reuse */
-			store_reusable_hid(hid);
+			hyper_dmabuf_store_hid(hid);
 		}
 
 		break;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
index e9299e5..9ad7ab9 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
@@ -28,31 +28,18 @@
 
 #include <linux/kernel.h>
 #include <linux/errno.h>
-#include <linux/fs.h>
 #include <linux/slab.h>
-#include <linux/module.h>
 #include <linux/dma-buf.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_sgl_proc.h"
-#include "hyper_dmabuf_id.h"
-#include "hyper_dmabuf_msg.h"
-#include "hyper_dmabuf_list.h"
 
 #define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
 
-int dmabuf_refcount(struct dma_buf *dma_buf)
-{
-	if ((dma_buf != NULL) && (dma_buf->file != NULL))
-		return file_count(dma_buf->file);
-
-	return -1;
-}
-
 /* return total number of pages referenced by a sgt
  * for pre-calculation of # of pages behind a given sgt
  */
-static int hyper_dmabuf_get_num_pgs(struct sg_table *sgt)
+static int get_num_pgs(struct sg_table *sgt)
 {
 	struct scatterlist *sgl;
 	int length, i;
@@ -89,8 +76,9 @@ struct pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
 	if (!pg_info)
 		return NULL;
 
-	pg_info->pgs = kmalloc_array(hyper_dmabuf_get_num_pgs(sgt),
-				     sizeof(struct page *), GFP_KERNEL);
+	pg_info->pgs = kmalloc_array(get_num_pgs(sgt),
+				     sizeof(struct page *),
+				     GFP_KERNEL);
 
 	if (!pg_info->pgs) {
 		kfree(pg_info);
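
get_num_pgs() pre-sizes the page array that hyper_dmabuf_ext_pgs() fills in. Its body is not shown in this hunk; a plausible sketch of the counting, assuming it walks the scatterlist and rounds each segment up to whole pages, would be:

static int get_num_pgs(struct sg_table *sgt)
{
	struct scatterlist *sgl;
	int i, num_pages = 0;

	for_each_sg(sgt->sgl, sgl, sgt->nents, i)
		num_pages += DIV_ROUND_UP(sgl->length, PAGE_SIZE);

	return num_pages;
}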
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
index 152f78c..869d982 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
@@ -25,8 +25,6 @@
 #ifndef __HYPER_DMABUF_IMP_H__
 #define __HYPER_DMABUF_IMP_H__
 
-int dmabuf_refcount(struct dma_buf *dma_buf);
-
 /* extract pages directly from struct sg_table */
 struct pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt);
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index 05f3521..4a073ce 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -26,13 +26,10 @@
  *
  */
 
-#include <linux/kernel.h>
 #include <linux/errno.h>
-#include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/workqueue.h>
 #include <linux/delay.h>
-#include <linux/time.h>
 #include <xen/grant_table.h>
 #include <xen/events.h>
 #include <xen/xenbus.h>
@@ -152,7 +149,7 @@ static int xen_comm_get_ring_details(int domid, int rdomid,
 	return (ret <= 0 ? 1 : 0);
 }
 
-void xen_get_domid_delayed(struct work_struct *unused)
+static void xen_get_domid_delayed(struct work_struct *unused)
 {
 	struct xenbus_transaction xbt;
 	int domid, ret;
@@ -191,7 +188,7 @@ void xen_get_domid_delayed(struct work_struct *unused)
 	}
 }
 
-int hyper_dmabuf_xen_get_domid(void)
+int xen_be_get_domid(void)
 {
 	struct xenbus_transaction xbt;
 	int domid;
@@ -261,22 +258,22 @@ static void remote_dom_exporter_watch_cb(struct xenbus_watch *watch,
 	 * connect to it.
 	 */
 
-	ret = xen_comm_get_ring_details(hyper_dmabuf_xen_get_domid(),
+	ret = xen_comm_get_ring_details(xen_be_get_domid(),
 					rdom, &grefid, &port);
 
 	if (ring_info && ret != 0) {
 		dev_info(hy_drv_priv->dev,
 			 "Remote exporter closed, cleaninup importer\n");
-		hyper_dmabuf_xen_cleanup_rx_rbuf(rdom);
+		xen_be_cleanup_rx_rbuf(rdom);
 	} else if (!ring_info && ret == 0) {
 		dev_info(hy_drv_priv->dev,
 			 "Registering importer\n");
-		hyper_dmabuf_xen_init_rx_rbuf(rdom);
+		xen_be_init_rx_rbuf(rdom);
 	}
 }
 
 /* exporter needs to generate info for page sharing */
-int hyper_dmabuf_xen_init_tx_rbuf(int domid)
+int xen_be_init_tx_rbuf(int domid)
 {
 	struct xen_comm_tx_ring_info *ring_info;
 	struct xen_comm_sring *sring;
@@ -365,7 +362,7 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 
 	ret = xen_comm_add_tx_ring(ring_info);
 
-	ret = xen_comm_expose_ring_details(hyper_dmabuf_xen_get_domid(),
+	ret = xen_comm_expose_ring_details(xen_be_get_domid(),
 					   domid,
 					   ring_info->gref_ring,
 					   ring_info->port);
@@ -384,7 +381,7 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 
 	sprintf((char *)ring_info->watch.node,
 		"/local/domain/%d/data/hyper_dmabuf/%d/port",
-		domid, hyper_dmabuf_xen_get_domid());
+		domid, xen_be_get_domid());
 
 	register_xenbus_watch(&ring_info->watch);
 
@@ -392,7 +389,7 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 }
 
 /* cleans up exporter ring created for given remote domain */
-void hyper_dmabuf_xen_cleanup_tx_rbuf(int domid)
+void xen_be_cleanup_tx_rbuf(int domid)
 {
 	struct xen_comm_tx_ring_info *ring_info;
 	struct xen_comm_rx_ring_info *rx_ring_info;
@@ -433,7 +430,7 @@ void hyper_dmabuf_xen_cleanup_tx_rbuf(int domid)
 /* importer needs to know about shared page and port numbers for
  * ring buffer and event channel
  */
-int hyper_dmabuf_xen_init_rx_rbuf(int domid)
+int xen_be_init_rx_rbuf(int domid)
 {
 	struct xen_comm_rx_ring_info *ring_info;
 	struct xen_comm_sring *sring;
@@ -456,7 +453,7 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 		return 0;
 	}
 
-	ret = xen_comm_get_ring_details(hyper_dmabuf_xen_get_domid(), domid,
+	ret = xen_comm_get_ring_details(xen_be_get_domid(), domid,
 					&rx_gref, &rx_port);
 
 	if (ret) {
@@ -536,7 +533,7 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 
 	/* Set up communication channel in opposite direction */
 	if (!xen_comm_find_tx_ring(domid))
-		ret = hyper_dmabuf_xen_init_tx_rbuf(domid);
+		ret = xen_be_init_tx_rbuf(domid);
 
 	ret = request_irq(ring_info->irq,
 			  back_ring_isr, 0,
@@ -554,7 +551,7 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 }
 
 /* cleans up importer ring created for given source domain */
-void hyper_dmabuf_xen_cleanup_rx_rbuf(int domid)
+void xen_be_cleanup_rx_rbuf(int domid)
 {
 	struct xen_comm_rx_ring_info *ring_info;
 	struct xen_comm_tx_ring_info *tx_ring_info;
@@ -624,7 +621,7 @@ static void xen_rx_ch_add_delayed(struct work_struct *unused)
 				if (xen_comm_find_rx_ring(i) != NULL)
 					continue;
 
-				ret = hyper_dmabuf_xen_init_rx_rbuf(i);
+				ret = xen_be_init_rx_rbuf(i);
 
 				if (!ret)
 					dev_info(hy_drv_priv->dev,
@@ -672,7 +669,7 @@ void xen_init_comm_env_delayed(struct work_struct *unused)
 	}
 }
 
-int hyper_dmabuf_xen_init_comm_env(void)
+int xen_be_init_comm_env(void)
 {
 	int ret;
 
@@ -699,19 +696,19 @@ int hyper_dmabuf_xen_init_comm_env(void)
 }
 
 /* cleans up all tx/rx rings */
-static void hyper_dmabuf_xen_cleanup_all_rbufs(void)
+static void xen_be_cleanup_all_rbufs(void)
 {
-	xen_comm_foreach_tx_ring(hyper_dmabuf_xen_cleanup_tx_rbuf);
-	xen_comm_foreach_rx_ring(hyper_dmabuf_xen_cleanup_rx_rbuf);
+	xen_comm_foreach_tx_ring(xen_be_cleanup_tx_rbuf);
+	xen_comm_foreach_rx_ring(xen_be_cleanup_rx_rbuf);
 }
 
-void hyper_dmabuf_xen_destroy_comm(void)
+void xen_be_destroy_comm(void)
 {
-	hyper_dmabuf_xen_cleanup_all_rbufs();
+	xen_be_cleanup_all_rbufs();
 	xen_comm_destroy_data_dir();
 }
 
-int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req,
+int xen_be_send_req(int domid, struct hyper_dmabuf_req *req,
 			      int wait)
 {
 	struct xen_comm_front_ring *ring;
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
index 8e2d1d0..70a2b70 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
@@ -51,28 +51,28 @@ struct xen_comm_rx_ring_info {
 	struct gnttab_unmap_grant_ref unmap_op;
 };
 
-int hyper_dmabuf_xen_get_domid(void);
+int xen_be_get_domid(void);
 
-int hyper_dmabuf_xen_init_comm_env(void);
+int xen_be_init_comm_env(void);
 
 /* exporter needs to generate info for page sharing */
-int hyper_dmabuf_xen_init_tx_rbuf(int domid);
+int xen_be_init_tx_rbuf(int domid);
 
 /* importer needs to know about shared page and port numbers
  * for ring buffer and event channel
  */
-int hyper_dmabuf_xen_init_rx_rbuf(int domid);
+int xen_be_init_rx_rbuf(int domid);
 
 /* cleans up exporter ring created for given domain */
-void hyper_dmabuf_xen_cleanup_tx_rbuf(int domid);
+void xen_be_cleanup_tx_rbuf(int domid);
 
 /* cleans up importer ring created for given domain */
-void hyper_dmabuf_xen_cleanup_rx_rbuf(int domid);
+void xen_be_cleanup_rx_rbuf(int domid);
 
-void hyper_dmabuf_xen_destroy_comm(void);
+void xen_be_destroy_comm(void);
 
 /* send request to the remote domain */
-int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req,
-			      int wait);
+int xen_be_send_req(int domid, struct hyper_dmabuf_req *req,
+		    int wait);
 
-#endif // __HYPER_DMABUF_XEN_COMM_H__
+#endif /* __HYPER_DMABUF_XEN_COMM_H__ */
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
index 343aab3..15023db 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
@@ -28,7 +28,6 @@
 
 #include <linux/kernel.h>
 #include <linux/errno.h>
-#include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/cdev.h>
 #include <linux/hashtable.h>
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
index aa4c2f5..23965b8 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
@@ -26,25 +26,19 @@
  *
  */
 
-#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/module.h>
-#include <xen/grant_table.h>
-#include "../hyper_dmabuf_msg.h"
 #include "../hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_xen_drv.h"
 #include "hyper_dmabuf_xen_comm.h"
 #include "hyper_dmabuf_xen_shm.h"
 
 struct hyper_dmabuf_backend_ops xen_backend_ops = {
-	.get_vm_id = hyper_dmabuf_xen_get_domid,
-	.share_pages = hyper_dmabuf_xen_share_pages,
-	.unshare_pages = hyper_dmabuf_xen_unshare_pages,
-	.map_shared_pages = (void *)hyper_dmabuf_xen_map_shared_pages,
-	.unmap_shared_pages = hyper_dmabuf_xen_unmap_shared_pages,
-	.init_comm_env = hyper_dmabuf_xen_init_comm_env,
-	.destroy_comm = hyper_dmabuf_xen_destroy_comm,
-	.init_rx_ch = hyper_dmabuf_xen_init_rx_rbuf,
-	.init_tx_ch = hyper_dmabuf_xen_init_tx_rbuf,
-	.send_req = hyper_dmabuf_xen_send_req,
+	.get_vm_id = xen_be_get_domid,
+	.share_pages = xen_be_share_pages,
+	.unshare_pages = xen_be_unshare_pages,
+	.map_shared_pages = (void *)xen_be_map_shared_pages,
+	.unmap_shared_pages = xen_be_unmap_shared_pages,
+	.init_comm_env = xen_be_init_comm_env,
+	.destroy_comm = xen_be_destroy_comm,
+	.init_rx_ch = xen_be_init_rx_rbuf,
+	.init_tx_ch = xen_be_init_tx_rbuf,
+	.send_req = xen_be_send_req,
 };
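
With the renames in place, the hypervisor-agnostic core never spells a Xen symbol directly; everything goes through this ops table. A minimal sketch of a call site, assuming the core keeps the pointer in hy_drv_priv->backend_ops as elsewhere in the series:

	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
	int ret;

	ret = ops->init_comm_env();	/* resolves to xen_be_init_comm_env() */
	if (ret < 0)
		dev_dbg(hy_drv_priv->dev, "comm env not ready yet\n");

	/* page sharing takes the same indirection */
	ret = ops->share_pages(pg_info->pgs, exported->rdomid,
			       pg_info->nents, &exported->refs_info);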
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
index a86313a..16416f8 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
@@ -26,8 +26,6 @@
  *
  */
 
-#include <linux/kernel.h>
-#include <linux/errno.h>
 #include <linux/slab.h>
 #include <xen/grant_table.h>
 #include <asm/xen/page.h>
@@ -75,8 +73,8 @@
  *
  * Returns refid of top level page.
  */
-int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
-				 void **refs_info)
+int xen_be_share_pages(struct page **pages, int domid, int nents,
+		       void **refs_info)
 {
 	grant_ref_t lvl3_gref;
 	grant_ref_t *lvl2_table;
@@ -191,7 +189,7 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 	return -ENOSPC;
 }
 
-int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents)
+int xen_be_unshare_pages(void **refs_info, int nents)
 {
 	struct xen_shared_pages_info *sh_pages_info;
 	int n_lvl2_grefs = (nents/REFS_PER_PAGE +
@@ -254,8 +252,8 @@ int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents)
 /* Maps provided top level ref id and then returns an array of pages
  * containing data refs.
  */
-struct page **hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid,
-						int nents, void **refs_info)
+struct page **xen_be_map_shared_pages(int lvl3_gref, int domid,
+				   int nents, void **refs_info)
 {
 	struct page *lvl3_table_page;
 	struct page **lvl2_table_pages;
@@ -492,7 +490,7 @@ struct page **hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid,
 	return NULL;
 }
 
-int hyper_dmabuf_xen_unmap_shared_pages(void **refs_info, int nents)
+int xen_be_unmap_shared_pages(void **refs_info, int nents)
 {
 	struct xen_shared_pages_info *sh_pages_info;
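
The sizing math behind the shared-page table is worth a worked example. REFS_PER_PAGE is PAGE_SIZE/sizeof(grant_ref_t); assuming the typical 4 KiB page and 4-byte grant_ref_t, that is 1024 refs per page, so (as in the n_lvl2_grefs computation above):

	#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))

	/* level-2 pages needed: one grant ref per shared data page */
	int n_lvl2_grefs = (nents/REFS_PER_PAGE +
			    ((nents % REFS_PER_PAGE) ? 1 : 0));

	/* e.g. nents = 3000: 3000/1024 = 2 remainder 952 -> 3 level-2
	 * pages, and those 3 refs fit in a single level-3 page, whose
	 * one grant ref (lvl3_gref) is what gets sent to the importer
	 */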
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
index e7ae731..e02fab0b 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
@@ -29,18 +29,18 @@
  * create a table with those in 1st level shared pages then return reference
  * numbers for this top level table.
  */
-int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
-				 void **refs_info);
+int xen_be_share_pages(struct page **pages, int domid, int nents,
+		    void **refs_info);
 
-int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents);
+int xen_be_unshare_pages(void **refs_info, int nents);
 
 /* Maps provided top level ref id and then returns an array of pages containing
  * data refs.
  */
-struct page **hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid,
-						 int nents,
-						 void **refs_info);
+struct page **xen_be_map_shared_pages(int lvl3_gref, int domid,
+				      int nents,
+				      void **refs_info);
 
-int hyper_dmabuf_xen_unmap_shared_pages(void **refs_info, int nents);
+int xen_be_unmap_shared_pages(void **refs_info, int nents);
 
 #endif /* __HYPER_DMABUF_XEN_SHM_H__ */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 52/60] hyper_dmabuf: remove prefix 'hyper_dmabuf' from static func and backend APIs
  2017-12-19 19:29 ` Dongwon Kim
                   ` (70 preceding siblings ...)
  (?)
@ 2017-12-19 19:30 ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: xen-devel, mateuszx.potrola, dri-devel, dongwon.kim

Removed prefix "hyper_dmabuf" from backend functions and static func
(except for driver APIs) and add 'be' after 'xen' in backend function
calls to show those are backend APIs.

Also, modified some of function names for clarification and  addressed
some missing errors and warnings in hyper_dmabuf_list.c and
hyper_dmabuf_list.h.

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        |   9 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c      |   6 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c         |   9 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h         |   8 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      |  23 ++---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c       | 113 +++++++++++----------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h       |  20 ++--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        |   4 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c        |  54 +++++-----
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c    |   4 +-
 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c   |  20 +---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h   |   2 -
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   |  45 ++++----
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h   |  20 ++--
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c  |   1 -
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c    |  26 ++---
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c    |  14 ++-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h    |  14 +--
 18 files changed, 179 insertions(+), 213 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 76f57c2..387cc63 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -37,7 +37,6 @@
 #include <linux/dma-buf.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_ioctl.h"
-#include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_id.h"
 #include "hyper_dmabuf_event.h"
@@ -51,8 +50,8 @@ MODULE_AUTHOR("Intel Corporation");
 
 struct hyper_dmabuf_private *hy_drv_priv;
 
-static void hyper_dmabuf_force_free(struct exported_sgt_info *exported,
-				    void *attr)
+static void force_free(struct exported_sgt_info *exported,
+		       void *attr)
 {
 	struct ioctl_hyper_dmabuf_unexport unexport_attr;
 	struct file *filp = (struct file *)attr;
@@ -86,7 +85,7 @@ static int hyper_dmabuf_open(struct inode *inode, struct file *filp)
 
 static int hyper_dmabuf_release(struct inode *inode, struct file *filp)
 {
-	hyper_dmabuf_foreach_exported(hyper_dmabuf_force_free, filp);
+	hyper_dmabuf_foreach_exported(force_free, filp);
 
 	return 0;
 }
@@ -369,7 +368,7 @@ static void hyper_dmabuf_drv_exit(void)
 
 	/* destroy id_queue */
 	if (hy_drv_priv->id_queue)
-		destroy_reusable_list();
+		hyper_dmabuf_free_hid_list();
 
 #ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
 	/* clean up event queue */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
index ae8cb43..392ea99 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
@@ -28,16 +28,14 @@
 
 #include <linux/kernel.h>
 #include <linux/errno.h>
-#include <linux/fs.h>
 #include <linux/slab.h>
 #include <linux/module.h>
-#include <linux/dma-buf.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_event.h"
 
-static void hyper_dmabuf_send_event(struct hyper_dmabuf_event *e)
+static void send_event(struct hyper_dmabuf_event *e)
 {
 	struct hyper_dmabuf_event *oldest;
 	unsigned long irqflags;
@@ -110,7 +108,7 @@ int hyper_dmabuf_import_event(hyper_dmabuf_id_t hid)
 	e->event_data.data = (void *)imported->priv;
 	e->event_data.hdr.size = imported->sz_priv;
 
-	hyper_dmabuf_send_event(e);
+	send_event(e);
 
 	dev_dbg(hy_drv_priv->dev,
 		"event number = %d :", hy_drv_priv->pending);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
index 312dea5..e67b84a 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
@@ -31,9 +31,8 @@
 #include <linux/random.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_id.h"
-#include "hyper_dmabuf_msg.h"
 
-void store_reusable_hid(hyper_dmabuf_id_t hid)
+void hyper_dmabuf_store_hid(hyper_dmabuf_id_t hid)
 {
 	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
 	struct list_reusable_id *new_reusable;
@@ -48,7 +47,7 @@ void store_reusable_hid(hyper_dmabuf_id_t hid)
 	list_add(&new_reusable->list, &reusable_head->list);
 }
 
-static hyper_dmabuf_id_t retrieve_reusable_hid(void)
+static hyper_dmabuf_id_t get_reusable_hid(void)
 {
 	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
 	hyper_dmabuf_id_t hid = {-1, {0, 0, 0} };
@@ -67,7 +66,7 @@ static hyper_dmabuf_id_t retrieve_reusable_hid(void)
 	return hid;
 }
 
-void destroy_reusable_list(void)
+void hyper_dmabuf_free_hid_list(void)
 {
 	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
 	struct list_reusable_id *temp_head;
@@ -106,7 +105,7 @@ hyper_dmabuf_id_t hyper_dmabuf_get_hid(void)
 		hy_drv_priv->id_queue = reusable_head;
 	}
 
-	hid = retrieve_reusable_hid();
+	hid = get_reusable_hid();
 
 	/*creating a new H-ID only if nothing in the reusable id queue
 	 * and count is less than maximum allowed
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
index 61c4fb3..ed690f3 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
@@ -36,12 +36,16 @@
  */
 #define HYPER_DMABUF_ID_MAX 1000
 
-void store_reusable_hid(hyper_dmabuf_id_t hid);
+/* adding freed hid to the reusable list */
+void hyper_dmabuf_store_hid(hyper_dmabuf_id_t hid);
 
-void destroy_reusable_list(void);
+/* freeing the reusasble list */
+void hyper_dmabuf_free_hid_list(void);
 
+/* getting a hid available to use. */
 hyper_dmabuf_id_t hyper_dmabuf_get_hid(void);
 
+/* comparing two different hid */
 bool hyper_dmabuf_hid_keycomp(hyper_dmabuf_id_t hid1, hyper_dmabuf_id_t hid2);
 
 #endif /*__HYPER_DMABUF_ID_H*/
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 195cede..b40cf89 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -28,13 +28,9 @@
 
 #include <linux/kernel.h>
 #include <linux/errno.h>
-#include <linux/module.h>
 #include <linux/slab.h>
-#include <linux/miscdevice.h>
 #include <linux/uaccess.h>
 #include <linux/dma-buf.h>
-#include <linux/delay.h>
-#include <linux/list.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_id.h"
 #include "hyper_dmabuf_struct.h"
@@ -80,8 +76,8 @@ static int hyper_dmabuf_rx_ch_setup_ioctl(struct file *filp, void *data)
 	return ret;
 }
 
-static int hyper_dmabuf_send_export_msg(struct exported_sgt_info *exported,
-					struct pages_info *pg_info)
+static int send_export_msg(struct exported_sgt_info *exported,
+			   struct pages_info *pg_info)
 {
 	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
 	struct hyper_dmabuf_req *req;
@@ -102,7 +98,7 @@ static int hyper_dmabuf_send_export_msg(struct exported_sgt_info *exported,
 					 pg_info->nents, &exported->refs_info);
 		if (op[7] < 0) {
 			dev_err(hy_drv_priv->dev, "pages sharing failed\n");
-			return -1;
+			return op[7];
 		}
 	}
 
@@ -114,7 +110,7 @@ static int hyper_dmabuf_send_export_msg(struct exported_sgt_info *exported,
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
 	if (!req)
-		return -1;
+		return -ENOMEM;
 
 	/* composing a message to the importer */
 	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT, &op[0]);
@@ -212,7 +208,7 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 			ret = -EINVAL;
 		} else {
 			/* send an export msg for updating priv in importer */
-			ret = hyper_dmabuf_send_export_msg(exported, NULL);
+			ret = send_export_msg(exported, NULL);
 
 			if (ret < 0) {
 				dev_err(hy_drv_priv->dev,
@@ -347,7 +343,7 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 
 	export_remote_attr->hid = exported->hid;
 
-	ret = hyper_dmabuf_send_export_msg(exported, pg_info);
+	ret = send_export_msg(exported, pg_info);
 
 	if (ret < 0) {
 		dev_err(hy_drv_priv->dev,
@@ -550,7 +546,7 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 /* unexport dmabuf from the database and send int req to the source domain
  * to unmap it.
  */
-static void hyper_dmabuf_delayed_unexport(struct work_struct *work)
+static void delayed_unexport(struct work_struct *work)
 {
 	struct hyper_dmabuf_req *req;
 	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
@@ -612,7 +608,7 @@ static void hyper_dmabuf_delayed_unexport(struct work_struct *work)
 		hyper_dmabuf_remove_exported(exported->hid);
 
 		/* register hyper_dmabuf_id to the list for reuse */
-		store_reusable_hid(exported->hid);
+		hyper_dmabuf_store_hid(exported->hid);
 
 		if (exported->sz_priv > 0 && !exported->priv)
 			kfree(exported->priv);
@@ -649,8 +645,7 @@ int hyper_dmabuf_unexport_ioctl(struct file *filp, void *data)
 		return 0;
 
 	exported->unexport_sched = true;
-	INIT_DELAYED_WORK(&exported->unexport,
-			  hyper_dmabuf_delayed_unexport);
+	INIT_DELAYED_WORK(&exported->unexport, delayed_unexport);
 	schedule_delayed_work(&exported->unexport,
 			      msecs_to_jiffies(unexport_attr->delay_ms));
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
index 1b3745e..bba6d1d 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
@@ -28,12 +28,9 @@
 
 #include <linux/kernel.h>
 #include <linux/errno.h>
-#include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/cdev.h>
-#include <asm/uaccess.h>
 #include <linux/hashtable.h>
-#include <linux/dma-buf.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_id.h"
@@ -43,7 +40,9 @@ DECLARE_HASHTABLE(hyper_dmabuf_hash_imported, MAX_ENTRY_IMPORTED);
 DECLARE_HASHTABLE(hyper_dmabuf_hash_exported, MAX_ENTRY_EXPORTED);
 
 #ifdef CONFIG_HYPER_DMABUF_SYSFS
-static ssize_t hyper_dmabuf_imported_show(struct device *drv, struct device_attribute *attr, char *buf)
+static ssize_t hyper_dmabuf_imported_show(struct device *drv,
+					  struct device_attribute *attr,
+					  char *buf)
 {
 	struct list_entry_imported *info_entry;
 	int bkt;
@@ -55,19 +54,23 @@ static ssize_t hyper_dmabuf_imported_show(struct device *drv, struct device_attr
 		int nents = info_entry->imported->nents;
 		bool valid = info_entry->imported->valid;
 		int num_importers = info_entry->imported->importers;
+
 		total += nents;
 		count += scnprintf(buf + count, PAGE_SIZE - count,
-				   "hid:{id:%d keys:%d %d %d}, nents:%d, v:%c, numi:%d\n",
-				   hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2],
-				   nents, (valid ? 't' : 'f'), num_importers);
+				"hid:{%d %d %d %d}, nent:%d, v:%c, numi:%d\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1],
+				hid.rng_key[2], nents, (valid ? 't' : 'f'),
+				num_importers);
 	}
-	count += scnprintf(buf + count, PAGE_SIZE - count, "total nents: %lu\n",
-			   total);
+	count += scnprintf(buf + count, PAGE_SIZE - count,
+			   "total nents: %lu\n", total);
 
 	return count;
 }
 
-static ssize_t hyper_dmabuf_exported_show(struct device *drv, struct device_attribute *attr, char *buf)
+static ssize_t hyper_dmabuf_exported_show(struct device *drv,
+					  struct device_attribute *attr,
+					  char *buf)
 {
 	struct list_entry_exported *info_entry;
 	int bkt;
@@ -79,20 +82,22 @@ static ssize_t hyper_dmabuf_exported_show(struct device *drv, struct device_attr
 		int nents = info_entry->exported->nents;
 		bool valid = info_entry->exported->valid;
 		int importer_exported = info_entry->exported->active;
+
 		total += nents;
 		count += scnprintf(buf + count, PAGE_SIZE - count,
-				   "hid:{hid:%d keys:%d %d %d}, nents:%d, v:%c, ie:%d\n",
-				   hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2],
-				   nents, (valid ? 't' : 'f'), importer_exported);
+				   "hid:{%d %d %d %d}, nent:%d, v:%c, ie:%d\n",
+				   hid.id, hid.rng_key[0], hid.rng_key[1],
+				   hid.rng_key[2], nents, (valid ? 't' : 'f'),
+				   importer_exported);
 	}
-	count += scnprintf(buf + count, PAGE_SIZE - count, "total nents: %lu\n",
-			   total);
+	count += scnprintf(buf + count, PAGE_SIZE - count,
+			   "total nents: %lu\n", total);
 
 	return count;
 }
 
-static DEVICE_ATTR(imported, S_IRUSR, hyper_dmabuf_imported_show, NULL);
-static DEVICE_ATTR(exported, S_IRUSR, hyper_dmabuf_exported_show, NULL);
+static DEVICE_ATTR(imported, 0400, hyper_dmabuf_imported_show, NULL);
+static DEVICE_ATTR(exported, 0400, hyper_dmabuf_exported_show, NULL);
 
 int hyper_dmabuf_register_sysfs(struct device *dev)
 {
@@ -118,18 +123,21 @@ int hyper_dmabuf_unregister_sysfs(struct device *dev)
 	device_remove_file(dev, &dev_attr_exported);
 	return 0;
 }
+
 #endif
 
-int hyper_dmabuf_table_init()
+int hyper_dmabuf_table_init(void)
 {
 	hash_init(hyper_dmabuf_hash_imported);
 	hash_init(hyper_dmabuf_hash_exported);
 	return 0;
 }
 
-int hyper_dmabuf_table_destroy()
+int hyper_dmabuf_table_destroy(void)
 {
-	/* TODO: cleanup hyper_dmabuf_hash_imported and hyper_dmabuf_hash_exported */
+	/* TODO: cleanup hyper_dmabuf_hash_imported
+	 * and hyper_dmabuf_hash_exported
+	 */
 	return 0;
 }
 
@@ -139,11 +147,8 @@ int hyper_dmabuf_register_exported(struct exported_sgt_info *exported)
 
 	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
 
-	if (!info_entry) {
-		dev_err(hy_drv_priv->dev,
-                        "No memory left to be allocated\n");
+	if (!info_entry)
 		return -ENOMEM;
-	}
 
 	info_entry->exported = exported;
 
@@ -153,17 +158,14 @@ int hyper_dmabuf_register_exported(struct exported_sgt_info *exported)
 	return 0;
 }
 
-int hyper_dmabuf_register_imported(struct imported_sgt_info* imported)
+int hyper_dmabuf_register_imported(struct imported_sgt_info *imported)
 {
 	struct list_entry_imported *info_entry;
 
 	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
 
-	if (!info_entry) {
-		dev_err(hy_drv_priv->dev,
-                        "No memory left to be allocated\n");
+	if (!info_entry)
 		return -ENOMEM;
-	}
 
 	info_entry->imported = imported;
 
@@ -180,28 +182,32 @@ struct exported_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid)
 
 	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
 		/* checking hid.id first */
-		if(info_entry->exported->hid.id == hid.id) {
+		if (info_entry->exported->hid.id == hid.id) {
 			/* then key is compared */
-			if(hyper_dmabuf_hid_keycomp(info_entry->exported->hid, hid))
+			if (hyper_dmabuf_hid_keycomp(info_entry->exported->hid,
+						    hid))
 				return info_entry->exported;
-			/* if key is unmatched, given HID is invalid, so returning NULL */
-			else
-				break;
+
+			/* if the key doesn't match, the given HID is
+			 * invalid, so return NULL
+			 */
+			break;
 		}
 
 	return NULL;
 }
 
 /* search for pre-exported sgt and return its id if it exists */
-hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf, int domid)
+hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf,
+						 int domid)
 {
 	struct list_entry_exported *info_entry;
-	hyper_dmabuf_id_t hid = {-1, {0, 0, 0}};
+	hyper_dmabuf_id_t hid = {-1, {0, 0, 0} };
 	int bkt;
 
 	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
-		if(info_entry->exported->dma_buf == dmabuf &&
-		   info_entry->exported->rdomid == domid)
+		if (info_entry->exported->dma_buf == dmabuf &&
+		    info_entry->exported->rdomid == domid)
 			return info_entry->exported->hid;
 
 	return hid;
@@ -214,14 +220,15 @@ struct imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_id_t hid)
 
 	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
 		/* checking hid.id first */
-		if(info_entry->imported->hid.id == hid.id) {
+		if (info_entry->imported->hid.id == hid.id) {
 			/* then key is compared */
-			if(hyper_dmabuf_hid_keycomp(info_entry->imported->hid, hid))
+			if (hyper_dmabuf_hid_keycomp(info_entry->imported->hid,
+						    hid))
 				return info_entry->imported;
-			/* if key is unmatched, given HID is invalid, so returning NULL */
-			else {
-				break;
-			}
+			/* if the key doesn't match, the given HID is
+			 * invalid, so return NULL
+			 */
+			break;
 		}
 
 	return NULL;
@@ -234,15 +241,16 @@ int hyper_dmabuf_remove_exported(hyper_dmabuf_id_t hid)
 
 	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
 		/* checking hid.id first */
-		if(info_entry->exported->hid.id == hid.id) {
+		if (info_entry->exported->hid.id == hid.id) {
 			/* then key is compared */
-			if(hyper_dmabuf_hid_keycomp(info_entry->exported->hid, hid)) {
+			if (hyper_dmabuf_hid_keycomp(info_entry->exported->hid,
+						    hid)) {
 				hash_del(&info_entry->node);
 				kfree(info_entry);
 				return 0;
-			} else {
-				break;
 			}
+
+			break;
 		}
 
 	return -ENOENT;
@@ -255,15 +263,16 @@ int hyper_dmabuf_remove_imported(hyper_dmabuf_id_t hid)
 
 	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
 		/* checking hid.id first */
-		if(info_entry->imported->hid.id == hid.id) {
+		if (info_entry->imported->hid.id == hid.id) {
 			/* then key is compared */
-			if(hyper_dmabuf_hid_keycomp(info_entry->imported->hid, hid)) {
+			if (hyper_dmabuf_hid_keycomp(info_entry->imported->hid,
+						    hid)) {
 				hash_del(&info_entry->node);
 				kfree(info_entry);
 				return 0;
-			} else {
-				break;
 			}
+
+			break;
 		}
 
 	return -ENOENT;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
index d5c17ef..f7102f5 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
@@ -33,13 +33,13 @@
 #define MAX_ENTRY_IMPORTED 7
 
 struct list_entry_exported {
-        struct exported_sgt_info *exported;
-        struct hlist_node node;
+	struct exported_sgt_info *exported;
+	struct hlist_node node;
 };
 
 struct list_entry_imported {
-        struct imported_sgt_info *imported;
-        struct hlist_node node;
+	struct imported_sgt_info *imported;
+	struct hlist_node node;
 };
 
 int hyper_dmabuf_table_init(void);
@@ -49,9 +49,10 @@ int hyper_dmabuf_table_destroy(void);
 int hyper_dmabuf_register_exported(struct exported_sgt_info *info);
 
 /* search for pre-exported sgt and return its id if it exists */
-hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf, int domid);
+hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf,
+						 int domid);
 
-int hyper_dmabuf_register_imported(struct imported_sgt_info* info);
+int hyper_dmabuf_register_imported(struct imported_sgt_info *info);
 
 struct exported_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid);
 
@@ -61,11 +62,10 @@ int hyper_dmabuf_remove_exported(hyper_dmabuf_id_t hid);
 
 int hyper_dmabuf_remove_imported(hyper_dmabuf_id_t hid);
 
-void hyper_dmabuf_foreach_exported(
-	void (*func)(struct exported_sgt_info *, void *attr),
-	void *attr);
+void hyper_dmabuf_foreach_exported(void (*func)(struct exported_sgt_info *,
+				   void *attr), void *attr);
 
 int hyper_dmabuf_register_sysfs(struct device *dev);
 int hyper_dmabuf_unregister_sysfs(struct device *dev);
 
-#endif // __HYPER_DMABUF_LIST_H__
+#endif /* __HYPER_DMABUF_LIST_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
index fbbcc39..afc1fd6e 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -28,12 +28,10 @@
 
 #include <linux/kernel.h>
 #include <linux/errno.h>
-#include <linux/module.h>
 #include <linux/slab.h>
-#include <linux/dma-buf.h>
 #include <linux/workqueue.h>
-#include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_remote_sync.h"
 #include "hyper_dmabuf_event.h"
 #include "hyper_dmabuf_list.h"
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
index 03fdd30..bf805b1 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
@@ -28,9 +28,7 @@
 
 #include <linux/kernel.h>
 #include <linux/errno.h>
-#include <linux/fs.h>
 #include <linux/slab.h>
-#include <linux/module.h>
 #include <linux/dma-buf.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_struct.h"
@@ -43,7 +41,15 @@
 #define WAIT_AFTER_SYNC_REQ 0
 #define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
 
-static int hyper_dmabuf_sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
+static int dmabuf_refcount(struct dma_buf *dma_buf)
+{
+	if ((dma_buf != NULL) && (dma_buf->file != NULL))
+		return file_count(dma_buf->file);
+
+	return -EINVAL;
+}
+
+static int sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
 {
 	struct hyper_dmabuf_req *req;
 	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
@@ -90,8 +96,7 @@ static int hyper_dmabuf_ops_attach(struct dma_buf *dmabuf,
 
 	imported = (struct imported_sgt_info *)attach->dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_ATTACH);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_ATTACH);
 
 	return ret;
 }
@@ -107,8 +112,7 @@ static void hyper_dmabuf_ops_detach(struct dma_buf *dmabuf,
 
 	imported = (struct imported_sgt_info *)attach->dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_DETACH);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_DETACH);
 }
 
 static struct sg_table *hyper_dmabuf_ops_map(
@@ -140,8 +144,7 @@ static struct sg_table *hyper_dmabuf_ops_map(
 	if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir))
 		goto err_free_sg;
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_MAP);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_MAP);
 
 	kfree(pg_info->pgs);
 	kfree(pg_info);
@@ -177,8 +180,7 @@ static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
 	sg_free_table(sg);
 	kfree(sg);
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_UNMAP);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_UNMAP);
 }
 
 static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
@@ -211,8 +213,7 @@ static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 	finish = imported && !imported->valid &&
 		 !imported->importers;
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_RELEASE);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_RELEASE);
 
 	/*
 	 * Check if buffer is still valid and if not remove it
@@ -236,8 +237,7 @@ static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf,
 
 	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
 
 	return ret;
 }
@@ -253,8 +253,7 @@ static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf,
 
 	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_END_CPU_ACCESS);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_END_CPU_ACCESS);
 
 	return 0;
 }
@@ -270,8 +269,7 @@ static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf,
 
 	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_KMAP_ATOMIC);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KMAP_ATOMIC);
 
 	/* TODO: NULL for now. Need to return the addr of mapped region */
 	return NULL;
@@ -288,8 +286,7 @@ static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf,
 
 	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
 }
 
 static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
@@ -302,8 +299,7 @@ static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
 
 	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_KMAP);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KMAP);
 
 	/* for now NULL; need to return the address of the mapped region */
 	return NULL;
@@ -320,8 +316,7 @@ static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum,
 
 	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_KUNMAP);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KUNMAP);
 }
 
 static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf,
@@ -335,8 +330,7 @@ static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf,
 
 	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_MMAP);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_MMAP);
 
 	return ret;
 }
@@ -351,8 +345,7 @@ static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
 
 	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_VMAP);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_VMAP);
 
 	return NULL;
 }
@@ -367,8 +360,7 @@ static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
 
 	imported = (struct imported_sgt_info *)dmabuf->priv;
 
-	ret = hyper_dmabuf_sync_request(imported->hid,
-					HYPER_DMABUF_OPS_VUNMAP);
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_VUNMAP);
 }
 
 static const struct dma_buf_ops hyper_dmabuf_ops = {
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
index c9fe040..a82fd7b 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -28,7 +28,6 @@
 
 #include <linux/kernel.h>
 #include <linux/errno.h>
-#include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/dma-buf.h>
 #include "hyper_dmabuf_drv.h"
@@ -36,7 +35,6 @@
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_id.h"
-#include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_sgl_proc.h"
 
 /* Whenever importer does dma operations from remote domain,
@@ -189,7 +187,7 @@ int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
 			hyper_dmabuf_remove_exported(hid);
 			kfree(exported);
 			/* store hyper_dmabuf_id in the list for reuse */
-			store_reusable_hid(hid);
+			hyper_dmabuf_store_hid(hid);
 		}
 
 		break;
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
index e9299e5..9ad7ab9 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
@@ -28,31 +28,18 @@
 
 #include <linux/kernel.h>
 #include <linux/errno.h>
-#include <linux/fs.h>
 #include <linux/slab.h>
-#include <linux/module.h>
 #include <linux/dma-buf.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_struct.h"
 #include "hyper_dmabuf_sgl_proc.h"
-#include "hyper_dmabuf_id.h"
-#include "hyper_dmabuf_msg.h"
-#include "hyper_dmabuf_list.h"
 
 #define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
 
-int dmabuf_refcount(struct dma_buf *dma_buf)
-{
-	if ((dma_buf != NULL) && (dma_buf->file != NULL))
-		return file_count(dma_buf->file);
-
-	return -1;
-}
-
 /* return total number of pages referenced by a sgt
  * for pre-calculation of # of pages behind a given sgt
  */
-static int hyper_dmabuf_get_num_pgs(struct sg_table *sgt)
+static int get_num_pgs(struct sg_table *sgt)
 {
 	struct scatterlist *sgl;
 	int length, i;
@@ -89,8 +76,9 @@ struct pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
 	if (!pg_info)
 		return NULL;
 
-	pg_info->pgs = kmalloc_array(hyper_dmabuf_get_num_pgs(sgt),
-				     sizeof(struct page *), GFP_KERNEL);
+	pg_info->pgs = kmalloc_array(get_num_pgs(sgt),
+				     sizeof(struct page *),
+				     GFP_KERNEL);
 
 	if (!pg_info->pgs) {
 		kfree(pg_info);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
index 152f78c..869d982 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
@@ -25,8 +25,6 @@
 #ifndef __HYPER_DMABUF_IMP_H__
 #define __HYPER_DMABUF_IMP_H__
 
-int dmabuf_refcount(struct dma_buf *dma_buf);
-
 /* extract pages directly from struct sg_table */
 struct pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt);
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
index 05f3521..4a073ce 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -26,13 +26,10 @@
  *
  */
 
-#include <linux/kernel.h>
 #include <linux/errno.h>
-#include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/workqueue.h>
 #include <linux/delay.h>
-#include <linux/time.h>
 #include <xen/grant_table.h>
 #include <xen/events.h>
 #include <xen/xenbus.h>
@@ -152,7 +149,7 @@ static int xen_comm_get_ring_details(int domid, int rdomid,
 	return (ret <= 0 ? 1 : 0);
 }
 
-void xen_get_domid_delayed(struct work_struct *unused)
+static void xen_get_domid_delayed(struct work_struct *unused)
 {
 	struct xenbus_transaction xbt;
 	int domid, ret;
@@ -191,7 +188,7 @@ void xen_get_domid_delayed(struct work_struct *unused)
 	}
 }
 
-int hyper_dmabuf_xen_get_domid(void)
+int xen_be_get_domid(void)
 {
 	struct xenbus_transaction xbt;
 	int domid;
@@ -261,22 +258,22 @@ static void remote_dom_exporter_watch_cb(struct xenbus_watch *watch,
 	 * connect to it.
 	 */
 
-	ret = xen_comm_get_ring_details(hyper_dmabuf_xen_get_domid(),
+	ret = xen_comm_get_ring_details(xen_be_get_domid(),
 					rdom, &grefid, &port);
 
 	if (ring_info && ret != 0) {
 		dev_info(hy_drv_priv->dev,
 			 "Remote exporter closed, cleaning up importer\n");
-		hyper_dmabuf_xen_cleanup_rx_rbuf(rdom);
+		xen_be_cleanup_rx_rbuf(rdom);
 	} else if (!ring_info && ret == 0) {
 		dev_info(hy_drv_priv->dev,
 			 "Registering importer\n");
-		hyper_dmabuf_xen_init_rx_rbuf(rdom);
+		xen_be_init_rx_rbuf(rdom);
 	}
 }
 
 /* exporter needs to generate info for page sharing */
-int hyper_dmabuf_xen_init_tx_rbuf(int domid)
+int xen_be_init_tx_rbuf(int domid)
 {
 	struct xen_comm_tx_ring_info *ring_info;
 	struct xen_comm_sring *sring;
@@ -365,7 +362,7 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 
 	ret = xen_comm_add_tx_ring(ring_info);
 
-	ret = xen_comm_expose_ring_details(hyper_dmabuf_xen_get_domid(),
+	ret = xen_comm_expose_ring_details(xen_be_get_domid(),
 					   domid,
 					   ring_info->gref_ring,
 					   ring_info->port);
@@ -384,7 +381,7 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 
 	sprintf((char *)ring_info->watch.node,
 		"/local/domain/%d/data/hyper_dmabuf/%d/port",
-		domid, hyper_dmabuf_xen_get_domid());
+		domid, xen_be_get_domid());
 
 	register_xenbus_watch(&ring_info->watch);
 
@@ -392,7 +389,7 @@ int hyper_dmabuf_xen_init_tx_rbuf(int domid)
 }
 
 /* cleans up exporter ring created for given remote domain */
-void hyper_dmabuf_xen_cleanup_tx_rbuf(int domid)
+void xen_be_cleanup_tx_rbuf(int domid)
 {
 	struct xen_comm_tx_ring_info *ring_info;
 	struct xen_comm_rx_ring_info *rx_ring_info;
@@ -433,7 +430,7 @@ void hyper_dmabuf_xen_cleanup_tx_rbuf(int domid)
 /* importer needs to know about shared page and port numbers for
  * ring buffer and event channel
  */
-int hyper_dmabuf_xen_init_rx_rbuf(int domid)
+int xen_be_init_rx_rbuf(int domid)
 {
 	struct xen_comm_rx_ring_info *ring_info;
 	struct xen_comm_sring *sring;
@@ -456,7 +453,7 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 		return 0;
 	}
 
-	ret = xen_comm_get_ring_details(hyper_dmabuf_xen_get_domid(), domid,
+	ret = xen_comm_get_ring_details(xen_be_get_domid(), domid,
 					&rx_gref, &rx_port);
 
 	if (ret) {
@@ -536,7 +533,7 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 
 	/* Setup communication channel in opposite direction */
 	if (!xen_comm_find_tx_ring(domid))
-		ret = hyper_dmabuf_xen_init_tx_rbuf(domid);
+		ret = xen_be_init_tx_rbuf(domid);
 
 	ret = request_irq(ring_info->irq,
 			  back_ring_isr, 0,
@@ -554,7 +551,7 @@ int hyper_dmabuf_xen_init_rx_rbuf(int domid)
 }
 
 /* cleans up importer ring created for given source domain */
-void hyper_dmabuf_xen_cleanup_rx_rbuf(int domid)
+void xen_be_cleanup_rx_rbuf(int domid)
 {
 	struct xen_comm_rx_ring_info *ring_info;
 	struct xen_comm_tx_ring_info *tx_ring_info;
@@ -624,7 +621,7 @@ static void xen_rx_ch_add_delayed(struct work_struct *unused)
 				if (xen_comm_find_rx_ring(i) != NULL)
 					continue;
 
-				ret = hyper_dmabuf_xen_init_rx_rbuf(i);
+				ret = xen_be_init_rx_rbuf(i);
 
 				if (!ret)
 					dev_info(hy_drv_priv->dev,
@@ -672,7 +669,7 @@ void xen_init_comm_env_delayed(struct work_struct *unused)
 	}
 }
 
-int hyper_dmabuf_xen_init_comm_env(void)
+int xen_be_init_comm_env(void)
 {
 	int ret;
 
@@ -699,19 +696,19 @@ int hyper_dmabuf_xen_init_comm_env(void)
 }
 
 /* cleans up all tx/rx rings */
-static void hyper_dmabuf_xen_cleanup_all_rbufs(void)
+static void xen_be_cleanup_all_rbufs(void)
 {
-	xen_comm_foreach_tx_ring(hyper_dmabuf_xen_cleanup_tx_rbuf);
-	xen_comm_foreach_rx_ring(hyper_dmabuf_xen_cleanup_rx_rbuf);
+	xen_comm_foreach_tx_ring(xen_be_cleanup_tx_rbuf);
+	xen_comm_foreach_rx_ring(xen_be_cleanup_rx_rbuf);
 }
 
-void hyper_dmabuf_xen_destroy_comm(void)
+void xen_be_destroy_comm(void)
 {
-	hyper_dmabuf_xen_cleanup_all_rbufs();
+	xen_be_cleanup_all_rbufs();
 	xen_comm_destroy_data_dir();
 }
 
-int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req,
+int xen_be_send_req(int domid, struct hyper_dmabuf_req *req,
 			      int wait)
 {
 	struct xen_comm_front_ring *ring;
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
index 8e2d1d0..70a2b70 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
@@ -51,28 +51,28 @@ struct xen_comm_rx_ring_info {
 	struct gnttab_unmap_grant_ref unmap_op;
 };
 
-int hyper_dmabuf_xen_get_domid(void);
+int xen_be_get_domid(void);
 
-int hyper_dmabuf_xen_init_comm_env(void);
+int xen_be_init_comm_env(void);
 
 /* exporter needs to generate info for page sharing */
-int hyper_dmabuf_xen_init_tx_rbuf(int domid);
+int xen_be_init_tx_rbuf(int domid);
 
 /* importer needs to know about shared page and port numbers
  * for ring buffer and event channel
  */
-int hyper_dmabuf_xen_init_rx_rbuf(int domid);
+int xen_be_init_rx_rbuf(int domid);
 
 /* cleans up exporter ring created for given domain */
-void hyper_dmabuf_xen_cleanup_tx_rbuf(int domid);
+void xen_be_cleanup_tx_rbuf(int domid);
 
 /* cleans up importer ring created for given domain */
-void hyper_dmabuf_xen_cleanup_rx_rbuf(int domid);
+void xen_be_cleanup_rx_rbuf(int domid);
 
-void hyper_dmabuf_xen_destroy_comm(void);
+void xen_be_destroy_comm(void);
 
 /* send request to the remote domain */
-int hyper_dmabuf_xen_send_req(int domid, struct hyper_dmabuf_req *req,
-			      int wait);
+int xen_be_send_req(int domid, struct hyper_dmabuf_req *req,
+		    int wait);
 
-#endif // __HYPER_DMABUF_XEN_COMM_H__
+#endif /* __HYPER_DMABUF_XEN_COMM_H__ */
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
index 343aab3..15023db 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
@@ -28,7 +28,6 @@
 
 #include <linux/kernel.h>
 #include <linux/errno.h>
-#include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/cdev.h>
 #include <linux/hashtable.h>
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
index aa4c2f5..23965b8 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
@@ -26,25 +26,19 @@
  *
  */
 
-#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/module.h>
-#include <xen/grant_table.h>
-#include "../hyper_dmabuf_msg.h"
 #include "../hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_xen_drv.h"
 #include "hyper_dmabuf_xen_comm.h"
 #include "hyper_dmabuf_xen_shm.h"
 
 struct hyper_dmabuf_backend_ops xen_backend_ops = {
-	.get_vm_id = hyper_dmabuf_xen_get_domid,
-	.share_pages = hyper_dmabuf_xen_share_pages,
-	.unshare_pages = hyper_dmabuf_xen_unshare_pages,
-	.map_shared_pages = (void *)hyper_dmabuf_xen_map_shared_pages,
-	.unmap_shared_pages = hyper_dmabuf_xen_unmap_shared_pages,
-	.init_comm_env = hyper_dmabuf_xen_init_comm_env,
-	.destroy_comm = hyper_dmabuf_xen_destroy_comm,
-	.init_rx_ch = hyper_dmabuf_xen_init_rx_rbuf,
-	.init_tx_ch = hyper_dmabuf_xen_init_tx_rbuf,
-	.send_req = hyper_dmabuf_xen_send_req,
+	.get_vm_id = xen_be_get_domid,
+	.share_pages = xen_be_share_pages,
+	.unshare_pages = xen_be_unshare_pages,
+	.map_shared_pages = (void *)xen_be_map_shared_pages,
+	.unmap_shared_pages = xen_be_unmap_shared_pages,
+	.init_comm_env = xen_be_init_comm_env,
+	.destroy_comm = xen_be_destroy_comm,
+	.init_rx_ch = xen_be_init_rx_rbuf,
+	.init_tx_ch = xen_be_init_tx_rbuf,
+	.send_req = xen_be_send_req,
 };
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
index a86313a..16416f8 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
@@ -26,8 +26,6 @@
  *
  */
 
-#include <linux/kernel.h>
-#include <linux/errno.h>
 #include <linux/slab.h>
 #include <xen/grant_table.h>
 #include <asm/xen/page.h>
@@ -75,8 +73,8 @@
  *
  * Returns refid of top level page.
  */
-int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
-				 void **refs_info)
+int xen_be_share_pages(struct page **pages, int domid, int nents,
+		       void **refs_info)
 {
 	grant_ref_t lvl3_gref;
 	grant_ref_t *lvl2_table;
@@ -191,7 +189,7 @@ int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
 	return -ENOSPC;
 }
 
-int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents)
+int xen_be_unshare_pages(void **refs_info, int nents)
 {
 	struct xen_shared_pages_info *sh_pages_info;
 	int n_lvl2_grefs = (nents/REFS_PER_PAGE +
@@ -254,8 +252,8 @@ int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents)
 /* Maps provided top level ref id and then returns an array of pages
  * containing data refs.
  */
-struct page **hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid,
-						int nents, void **refs_info)
+struct page **xen_be_map_shared_pages(int lvl3_gref, int domid,
+				   int nents, void **refs_info)
 {
 	struct page *lvl3_table_page;
 	struct page **lvl2_table_pages;
@@ -492,7 +490,7 @@ struct page **hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid,
 	return NULL;
 }
 
-int hyper_dmabuf_xen_unmap_shared_pages(void **refs_info, int nents)
+int xen_be_unmap_shared_pages(void **refs_info, int nents)
 {
 	struct xen_shared_pages_info *sh_pages_info;
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
index e7ae731..e02fab0b 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
@@ -29,18 +29,18 @@
  * create a table with those in 1st level shared pages then return reference
  * numbers for this top level table.
  */
-int hyper_dmabuf_xen_share_pages(struct page **pages, int domid, int nents,
-				 void **refs_info);
+int xen_be_share_pages(struct page **pages, int domid, int nents,
+		    void **refs_info);
 
-int hyper_dmabuf_xen_unshare_pages(void **refs_info, int nents);
+int xen_be_unshare_pages(void **refs_info, int nents);
 
 /* Maps provided top level ref id and then returns an array of pages containing
  * data refs.
  */
-struct page **hyper_dmabuf_xen_map_shared_pages(int lvl3_gref, int domid,
-						 int nents,
-						 void **refs_info);
+struct page **xen_be_map_shared_pages(int lvl3_gref, int domid,
+				      int nents,
+				      void **refs_info);
 
-int hyper_dmabuf_xen_unmap_shared_pages(void **refs_info, int nents);
+int xen_be_unmap_shared_pages(void **refs_info, int nents);
 
 #endif /* __HYPER_DMABUF_XEN_SHM_H__ */
-- 
2.7.4



* [RFC PATCH 53/60] hyper_dmabuf: define fastpath_export for exporting existing buffer
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:30   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

To make hyper_dmabuf_export_remote_ioctl more compact and readable,
a new function, 'fastpath_export', is created to replace the routine
in hyper_dmabuf_export_remote_ioctl that handles the case where the
buffer requested for export is already in the list (i.e., it was
exported previously).
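
Distilled to its control flow, the restructured ioctl reduces to the
sketch below. This is an illustration only, with export_remote_attr
fields abbreviated to bare names and error paths elided, not the
verbatim hunk:

	hid = hyper_dmabuf_find_hid_exported(dma_buf, remote_domain);
	if (hid.id != -1) {
		ret = fastpath_export(hid, sz_priv, priv);
		/* 0 on success, < 0 on a fatal error: done either way */
		if (ret <= 0) {
			dma_buf_put(dma_buf);
			export_remote_attr->hid = hid;
			return ret;
		}
		/* ret == 1: fall through and re-export from scratch */
	}
	/* ... the normal attach/map/export path continues below ... */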

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c | 153 +++++++++++++++-----------
 1 file changed, 87 insertions(+), 66 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index b40cf89..d11f609 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -122,6 +122,82 @@ static int send_export_msg(struct exported_sgt_info *exported,
 	return ret;
 }
 
+/* Fast path exporting routine in case same buffer is already exported.
+ * In this function, we skip normal exporting process and just update
+ * private data on both VMs (importer and exporter)
+ *
+ * Returns '1' if re-export is needed, '0' on success, or a kernel
+ * error code if something goes wrong
+ */
+static int fastpath_export(hyper_dmabuf_id_t hid, int sz_priv, char *priv)
+{
+	int reexport = 1;
+	int ret = 0;
+	struct exported_sgt_info *exported;
+
+	exported = hyper_dmabuf_find_exported(hid);
+
+	if (!exported)
+		return reexport;
+
+	if (exported->valid == false)
+		return reexport;
+
+	/*
+	 * Check if unexport is already scheduled for that buffer,
+	 * if so try to cancel it. If that fails, the buffer needs
+	 * to be re-exported once again.
+	 */
+	if (exported->unexport_sched) {
+		if (!cancel_delayed_work_sync(&exported->unexport))
+			return reexport;
+
+		exported->unexport_sched = false;
+	}
+
+	/* if there's any change in the size of private data,
+	 * we reallocate space for it with the new size
+	 */
+	if (sz_priv != exported->sz_priv) {
+		kfree(exported->priv);
+
+		/* truncating size */
+		if (sz_priv > MAX_SIZE_PRIV_DATA)
+			exported->sz_priv = MAX_SIZE_PRIV_DATA;
+		else
+			exported->sz_priv = sz_priv;
+
+		exported->priv = kcalloc(1, exported->sz_priv,
+					 GFP_KERNEL);
+
+		if (!exported->priv) {
+			hyper_dmabuf_remove_exported(exported->hid);
+			hyper_dmabuf_cleanup_sgt_info(exported, true);
+			kfree(exported);
+			return -ENOMEM;
+		}
+	}
+
+	/* update private data in sgt_info with new ones */
+	ret = copy_from_user(exported->priv, priv, exported->sz_priv);
+	if (ret) {
+		dev_err(hy_drv_priv->dev,
+			"Failed to load a new private data\n");
+		ret = -EINVAL;
+	} else {
+		/* send an export msg for updating priv in importer */
+		ret = send_export_msg(exported, NULL);
+
+		if (ret < 0) {
+			dev_err(hy_drv_priv->dev,
+				"Failed to send a new private data\n");
+			ret = -EBUSY;
+		}
+	}
+
+	return ret;
+}
+
 static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 {
 	struct ioctl_hyper_dmabuf_export_remote *export_remote_attr =
@@ -153,79 +229,24 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 	 */
 	hid = hyper_dmabuf_find_hid_exported(dma_buf,
 					     export_remote_attr->remote_domain);
-	if (hid.id != -1) {
-		exported = hyper_dmabuf_find_exported(hid);
 
-		if (!exported)
-			goto reexport;
-
-		if (exported->valid == false)
-			goto reexport;
+	if (hid.id != -1) {
+		ret = fastpath_export(hid, export_remote_attr->sz_priv,
+				      export_remote_attr->priv);
 
-		/*
-		 * Check if unexport is already scheduled for that buffer,
-		 * if so try to cancel it. If that will fail, buffer needs
-		 * to be reexport once again.
+		/* return if fastpath_export succeeds or
+		 * gets some fatal error
 		 */
-		if (exported->unexport_sched) {
-			if (!cancel_delayed_work_sync(&exported->unexport)) {
-				dma_buf_put(dma_buf);
-				goto reexport;
-			}
-			exported->unexport_sched = false;
+		if (ret <= 0) {
+			dma_buf_put(dma_buf);
+			export_remote_attr->hid = hid;
+			return ret;
 		}
-
-		/* if there's any change in size of private data.
-		 * we reallocate space for private data with new size
-		 */
-		if (export_remote_attr->sz_priv != exported->sz_priv) {
-			kfree(exported->priv);
-
-			/* truncating size */
-			if (export_remote_attr->sz_priv > MAX_SIZE_PRIV_DATA)
-				exported->sz_priv = MAX_SIZE_PRIV_DATA;
-			else
-				exported->sz_priv = export_remote_attr->sz_priv;
-
-			exported->priv = kcalloc(1, exported->sz_priv,
-						 GFP_KERNEL);
-
-			if (!exported->priv) {
-				hyper_dmabuf_remove_exported(exported->hid);
-				hyper_dmabuf_cleanup_sgt_info(exported, true);
-				kfree(exported);
-				dma_buf_put(dma_buf);
-				return -ENOMEM;
-			}
-		}
-
-		/* update private data in sgt_info with new ones */
-		ret = copy_from_user(exported->priv, export_remote_attr->priv,
-				     exported->sz_priv);
-		if (ret) {
-			dev_err(hy_drv_priv->dev,
-				"Failed to load a new private data\n");
-			ret = -EINVAL;
-		} else {
-			/* send an export msg for updating priv in importer */
-			ret = send_export_msg(exported, NULL);
-
-			if (ret < 0) {
-				dev_err(hy_drv_priv->dev,
-					"Failed to send a new private data\n");
-				ret = -EBUSY;
-			}
-		}
-
-		dma_buf_put(dma_buf);
-		export_remote_attr->hid = hid;
-		return ret;
 	}
 
-reexport:
 	attachment = dma_buf_attach(dma_buf, hy_drv_priv->dev);
 	if (IS_ERR(attachment)) {
-		dev_err(hy_drv_priv->dev, "Cannot get attachment\n");
+		dev_err(hy_drv_priv->dev, "cannot get attachment\n");
 		ret = PTR_ERR(attachment);
 		goto fail_attach;
 	}
@@ -233,7 +254,7 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 	sgt = dma_buf_map_attachment(attachment, DMA_BIDIRECTIONAL);
 
 	if (IS_ERR(sgt)) {
-		dev_err(hy_drv_priv->dev, "Cannot map attachment\n");
+		dev_err(hy_drv_priv->dev, "cannot map attachment\n");
 		ret = PTR_ERR(sgt);
 		goto fail_map_attachment;
 	}
-- 
2.7.4


* [RFC PATCH 54/60] hyper_dmabuf: 'backend_ops' reduced to 'bknd_ops' and 'ops' to 'bknd_ops'
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:30   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

To make the type name more compact, '*_backend_ops' is changed to
'*_bknd_ops'. Variables named 'ops' are also renamed to 'bknd_ops' to
clarify that the structure holds the entry points of the 'backend'
operations.
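
As a minimal sketch of how callers now consume the renamed table
(member names as in hyper_dmabuf_drv.h in this patch; error handling
elided):

	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;

	hy_drv_priv->domid = bknd_ops->get_vm_id();
	ret = bknd_ops->init_comm_env();

	/* ... later, when sending a request to the remote domain ... */
	ret = bknd_ops->send_req(exported->rdomid, req, true);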

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        | 14 +++++------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        |  4 ++--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 28 +++++++++++-----------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c        | 10 ++++----
 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c   |  4 ++--
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c    |  2 +-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h    |  2 +-
 7 files changed, 33 insertions(+), 31 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 387cc63..161fee7 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -276,13 +276,13 @@ static int __init hyper_dmabuf_drv_init(void)
 
 /* currently only supports XEN hypervisor */
 #ifdef CONFIG_HYPER_DMABUF_XEN
-	hy_drv_priv->backend_ops = &xen_backend_ops;
+	hy_drv_priv->bknd_ops = &xen_bknd_ops;
 #else
-	hy_drv_priv->backend_ops = NULL;
+	hy_drv_priv->bknd_ops = NULL;
 	printk(KERN_ERR "hyper_dmabuf drv currently supports XEN only.\n");
 #endif
 
-	if (hy_drv_priv->backend_ops == NULL) {
+	if (hy_drv_priv->bknd_ops == NULL) {
 		printk(KERN_ERR "Hyper_dmabuf: no backend found\n");
 		return -1;
 	}
@@ -301,7 +301,7 @@ static int __init hyper_dmabuf_drv_init(void)
 	ret = hyper_dmabuf_table_init();
 	if (ret < 0) {
 		dev_err(hy_drv_priv->dev,
-			"failed to initialize table for exported/imported entries\n");
+			"fail to init table for exported/imported entries\n");
 		mutex_unlock(&hy_drv_priv->lock);
 		kfree(hy_drv_priv);
 		return ret;
@@ -330,9 +330,9 @@ static int __init hyper_dmabuf_drv_init(void)
 	hy_drv_priv->pending = 0;
 #endif
 
-	hy_drv_priv->domid = hy_drv_priv->backend_ops->get_vm_id();
+	hy_drv_priv->domid = hy_drv_priv->bknd_ops->get_vm_id();
 
-	ret = hy_drv_priv->backend_ops->init_comm_env();
+	ret = hy_drv_priv->bknd_ops->init_comm_env();
 	if (ret < 0) {
 		dev_dbg(hy_drv_priv->dev,
 			"failed to initialize comm-env.\n");
@@ -360,7 +360,7 @@ static void hyper_dmabuf_drv_exit(void)
 	/* hash tables for export/import entries and ring_infos */
 	hyper_dmabuf_table_destroy();
 
-	hy_drv_priv->backend_ops->destroy_comm();
+	hy_drv_priv->bknd_ops->destroy_comm();
 
 	/* destroy workqueue */
 	if (hy_drv_priv->work_queue)
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
index 049c694..4a51f9e 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -48,7 +48,7 @@ struct hyper_dmabuf_private {
 	struct list_reusable_id *id_queue;
 
 	/* backend ops - hypervisor specific */
-	struct hyper_dmabuf_backend_ops *backend_ops;
+	struct hyper_dmabuf_bknd_ops *bknd_ops;
 
 	/* device global lock */
 	/* TODO: might need a lock per resource (e.g. EXPORT LIST) */
@@ -72,7 +72,7 @@ struct list_reusable_id {
 	struct list_head list;
 };
 
-struct hyper_dmabuf_backend_ops {
+struct hyper_dmabuf_bknd_ops {
 	/* retrieving id of current virtual machine */
 	int (*get_vm_id)(void);
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index d11f609..d1970c8 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -44,7 +44,7 @@
 static int hyper_dmabuf_tx_ch_setup_ioctl(struct file *filp, void *data)
 {
 	struct ioctl_hyper_dmabuf_tx_ch_setup *tx_ch_attr;
-	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
 	int ret = 0;
 
 	if (!data) {
@@ -53,7 +53,7 @@ static int hyper_dmabuf_tx_ch_setup_ioctl(struct file *filp, void *data)
 	}
 	tx_ch_attr = (struct ioctl_hyper_dmabuf_tx_ch_setup *)data;
 
-	ret = ops->init_tx_ch(tx_ch_attr->remote_domain);
+	ret = bknd_ops->init_tx_ch(tx_ch_attr->remote_domain);
 
 	return ret;
 }
@@ -61,7 +61,7 @@ static int hyper_dmabuf_tx_ch_setup_ioctl(struct file *filp, void *data)
 static int hyper_dmabuf_rx_ch_setup_ioctl(struct file *filp, void *data)
 {
 	struct ioctl_hyper_dmabuf_rx_ch_setup *rx_ch_attr;
-	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
 	int ret = 0;
 
 	if (!data) {
@@ -71,7 +71,7 @@ static int hyper_dmabuf_rx_ch_setup_ioctl(struct file *filp, void *data)
 
 	rx_ch_attr = (struct ioctl_hyper_dmabuf_rx_ch_setup *)data;
 
-	ret = ops->init_rx_ch(rx_ch_attr->source_domain);
+	ret = bknd_ops->init_rx_ch(rx_ch_attr->source_domain);
 
 	return ret;
 }
@@ -79,7 +79,7 @@ static int hyper_dmabuf_rx_ch_setup_ioctl(struct file *filp, void *data)
 static int send_export_msg(struct exported_sgt_info *exported,
 			   struct pages_info *pg_info)
 {
-	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
 	struct hyper_dmabuf_req *req;
 	int op[MAX_NUMBER_OF_OPERANDS] = {0};
 	int ret, i;
@@ -94,7 +94,7 @@ static int send_export_msg(struct exported_sgt_info *exported,
 		op[4] = pg_info->nents;
 		op[5] = pg_info->frst_ofst;
 		op[6] = pg_info->last_len;
-		op[7] = ops->share_pages(pg_info->pgs, exported->rdomid,
+		op[7] = bknd_ops->share_pages(pg_info->pgs, exported->rdomid,
 					 pg_info->nents, &exported->refs_info);
 		if (op[7] < 0) {
 			dev_err(hy_drv_priv->dev, "pages sharing failed\n");
@@ -115,7 +115,7 @@ static int send_export_msg(struct exported_sgt_info *exported,
 	/* composing a message to the importer */
 	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT, &op[0]);
 
-	ret = ops->send_req(exported->rdomid, req, true);
+	ret = bknd_ops->send_req(exported->rdomid, req, true);
 
 	kfree(req);
 
@@ -423,7 +423,7 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 {
 	struct ioctl_hyper_dmabuf_export_fd *export_fd_attr =
 			(struct ioctl_hyper_dmabuf_export_fd *)data;
-	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
 	struct imported_sgt_info *imported;
 	struct hyper_dmabuf_req *req;
 	struct page **data_pgs;
@@ -465,7 +465,7 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 
 	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD, &op[0]);
 
-	ret = ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req, true);
+	ret = bknd_ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req, true);
 
 	if (ret < 0) {
 		/* in case of timeout other end eventually will receive request,
@@ -473,7 +473,7 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 		 */
 		hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD_FAILED,
 					&op[0]);
-		ops->send_req(op[0], req, false);
+		bknd_ops->send_req(op[0], req, false);
 		kfree(req);
 		dev_err(hy_drv_priv->dev,
 			"Failed to create sgt or notify exporter\n");
@@ -512,7 +512,7 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 			imported->hid.id, imported->hid.rng_key[0],
 			imported->hid.rng_key[1], imported->hid.rng_key[2]);
 
-		data_pgs = ops->map_shared_pages(imported->ref_handle,
+		data_pgs = bknd_ops->map_shared_pages(imported->ref_handle,
 					HYPER_DMABUF_DOM_ID(imported->hid),
 					imported->nents,
 					&imported->refs_info);
@@ -536,7 +536,7 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 			hyper_dmabuf_create_req(req,
 						HYPER_DMABUF_EXPORT_FD_FAILED,
 						&op[0]);
-			ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req,
+			bknd_ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req,
 							  false);
 			kfree(req);
 			mutex_unlock(&hy_drv_priv->lock);
@@ -570,7 +570,7 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 static void delayed_unexport(struct work_struct *work)
 {
 	struct hyper_dmabuf_req *req;
-	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
 	struct exported_sgt_info *exported =
 		container_of(work, struct exported_sgt_info, unexport.work);
 	int op[4];
@@ -602,7 +602,7 @@ static void delayed_unexport(struct work_struct *work)
 	/* Now send unexport request to remote domain, marking
 	 * that buffer should not be used anymore
 	 */
-	ret = ops->send_req(exported->rdomid, req, true);
+	ret = bknd_ops->send_req(exported->rdomid, req, true);
 	if (ret < 0) {
 		dev_err(hy_drv_priv->dev,
 			"unexport message for buffer {id:%d key:%d %d %d} failed\n",
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
index bf805b1..e85f619 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
@@ -52,7 +52,7 @@ static int dmabuf_refcount(struct dma_buf *dma_buf)
 static int sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
 {
 	struct hyper_dmabuf_req *req;
-	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
 	int op[5];
 	int i;
 	int ret;
@@ -72,7 +72,8 @@ static int sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
 	hyper_dmabuf_create_req(req, HYPER_DMABUF_OPS_TO_SOURCE, &op[0]);
 
 	/* send request and wait for a response */
-	ret = ops->send_req(HYPER_DMABUF_DOM_ID(hid), req, WAIT_AFTER_SYNC_REQ);
+	ret = bknd_ops->send_req(HYPER_DMABUF_DOM_ID(hid), req,
+				 WAIT_AFTER_SYNC_REQ);
 
 	if (ret < 0) {
 		dev_dbg(hy_drv_priv->dev,
@@ -186,7 +187,7 @@ static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
 static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 {
 	struct imported_sgt_info *imported;
-	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
 	int ret;
 	int finish;
 
@@ -201,7 +202,8 @@ static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 	imported->importers--;
 
 	if (imported->importers == 0) {
-		ops->unmap_shared_pages(&imported->refs_info, imported->nents);
+		bknd_ops->unmap_shared_pages(&imported->refs_info,
+					     imported->nents);
 
 		if (imported->sgt) {
 			sg_free_table(imported->sgt);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
index 9ad7ab9..d15eb17 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
@@ -170,7 +170,7 @@ int hyper_dmabuf_cleanup_sgt_info(struct exported_sgt_info *exported,
 	struct attachment_list *attachl;
 	struct kmap_vaddr_list *va_kmapl;
 	struct vmap_vaddr_list *va_vmapl;
-	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
 
 	if (!exported) {
 		dev_err(hy_drv_priv->dev, "invalid hyper_dmabuf_id\n");
@@ -231,7 +231,7 @@ int hyper_dmabuf_cleanup_sgt_info(struct exported_sgt_info *exported,
 	}
 
 	/* Start cleanup of buffer in reverse order to exporting */
-	ops->unshare_pages(&exported->refs_info, exported->nents);
+	bknd_ops->unshare_pages(&exported->refs_info, exported->nents);
 
 	/* unmap dma-buf */
 	dma_buf_unmap_attachment(exported->active_attached->attach,
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
index 23965b8..1d7249d 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
@@ -30,7 +30,7 @@
 #include "hyper_dmabuf_xen_comm.h"
 #include "hyper_dmabuf_xen_shm.h"
 
-struct hyper_dmabuf_backend_ops xen_backend_ops = {
+struct hyper_dmabuf_bknd_ops xen_bknd_ops = {
 	.get_vm_id = xen_be_get_domid,
 	.share_pages = xen_be_share_pages,
 	.unshare_pages = xen_be_unshare_pages,
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
index e5bff09..a4902b7 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
@@ -26,7 +26,7 @@
 #define __HYPER_DMABUF_XEN_DRV_H__
 #include <xen/interface/grant_table.h>
 
-extern struct hyper_dmabuf_backend_ops xen_backend_ops;
+extern struct hyper_dmabuf_bknd_ops xen_bknd_ops;
 
 /* Main purpose of this structure is to keep
  * all references created or acquired for sharing
-- 
2.7.4


* [RFC PATCH 54/60] hyper_dmabuf: 'backend_ops' reduced to 'bknd_ops' and 'ops' to 'bknd_ops'
@ 2017-12-19 19:30   ` Dongwon Kim
  0 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: xen-devel, mateuszx.potrola, dri-devel, dongwon.kim

To make the type name more compact, '*_backend_ops' is changed to
'*_bknd_ops'. Variables named 'ops' are also renamed to 'bknd_ops' to
clarify that the structure holds the entry points of the 'backend'
operations.

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        | 14 +++++------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        |  4 ++--
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 28 +++++++++++-----------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c        | 10 ++++----
 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c   |  4 ++--
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c    |  2 +-
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h    |  2 +-
 7 files changed, 33 insertions(+), 31 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 387cc63..161fee7 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -276,13 +276,13 @@ static int __init hyper_dmabuf_drv_init(void)
 
 /* currently only supports XEN hypervisor */
 #ifdef CONFIG_HYPER_DMABUF_XEN
-	hy_drv_priv->backend_ops = &xen_backend_ops;
+	hy_drv_priv->bknd_ops = &xen_bknd_ops;
 #else
-	hy_drv_priv->backend_ops = NULL;
+	hy_drv_priv->bknd_ops = NULL;
 	printk(KERN_ERR "hyper_dmabuf drv currently supports XEN only.\n");
 #endif
 
-	if (hy_drv_priv->backend_ops == NULL) {
+	if (hy_drv_priv->bknd_ops == NULL) {
 		printk(KERN_ERR "Hyper_dmabuf: no backend found\n");
 		return -1;
 	}
@@ -301,7 +301,7 @@ static int __init hyper_dmabuf_drv_init(void)
 	ret = hyper_dmabuf_table_init();
 	if (ret < 0) {
 		dev_err(hy_drv_priv->dev,
-			"failed to initialize table for exported/imported entries\n");
+			"fail to init table for exported/imported entries\n");
 		mutex_unlock(&hy_drv_priv->lock);
 		kfree(hy_drv_priv);
 		return ret;
@@ -330,9 +330,9 @@ static int __init hyper_dmabuf_drv_init(void)
 	hy_drv_priv->pending = 0;
 #endif
 
-	hy_drv_priv->domid = hy_drv_priv->backend_ops->get_vm_id();
+	hy_drv_priv->domid = hy_drv_priv->bknd_ops->get_vm_id();
 
-	ret = hy_drv_priv->backend_ops->init_comm_env();
+	ret = hy_drv_priv->bknd_ops->init_comm_env();
 	if (ret < 0) {
 		dev_dbg(hy_drv_priv->dev,
 			"failed to initialize comm-env.\n");
@@ -360,7 +360,7 @@ static void hyper_dmabuf_drv_exit(void)
 	/* hash tables for export/import entries and ring_infos */
 	hyper_dmabuf_table_destroy();
 
-	hy_drv_priv->backend_ops->destroy_comm();
+	hy_drv_priv->bknd_ops->destroy_comm();
 
 	/* destroy workqueue */
 	if (hy_drv_priv->work_queue)
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
index 049c694..4a51f9e 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -48,7 +48,7 @@ struct hyper_dmabuf_private {
 	struct list_reusable_id *id_queue;
 
 	/* backend ops - hypervisor specific */
-	struct hyper_dmabuf_backend_ops *backend_ops;
+	struct hyper_dmabuf_bknd_ops *bknd_ops;
 
 	/* device global lock */
 	/* TODO: might need a lock per resource (e.g. EXPORT LIST) */
@@ -72,7 +72,7 @@ struct list_reusable_id {
 	struct list_head list;
 };
 
-struct hyper_dmabuf_backend_ops {
+struct hyper_dmabuf_bknd_ops {
 	/* retrieving id of current virtual machine */
 	int (*get_vm_id)(void);
 
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index d11f609..d1970c8 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -44,7 +44,7 @@
 static int hyper_dmabuf_tx_ch_setup_ioctl(struct file *filp, void *data)
 {
 	struct ioctl_hyper_dmabuf_tx_ch_setup *tx_ch_attr;
-	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
 	int ret = 0;
 
 	if (!data) {
@@ -53,7 +53,7 @@ static int hyper_dmabuf_tx_ch_setup_ioctl(struct file *filp, void *data)
 	}
 	tx_ch_attr = (struct ioctl_hyper_dmabuf_tx_ch_setup *)data;
 
-	ret = ops->init_tx_ch(tx_ch_attr->remote_domain);
+	ret = bknd_ops->init_tx_ch(tx_ch_attr->remote_domain);
 
 	return ret;
 }
@@ -61,7 +61,7 @@ static int hyper_dmabuf_tx_ch_setup_ioctl(struct file *filp, void *data)
 static int hyper_dmabuf_rx_ch_setup_ioctl(struct file *filp, void *data)
 {
 	struct ioctl_hyper_dmabuf_rx_ch_setup *rx_ch_attr;
-	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
 	int ret = 0;
 
 	if (!data) {
@@ -71,7 +71,7 @@ static int hyper_dmabuf_rx_ch_setup_ioctl(struct file *filp, void *data)
 
 	rx_ch_attr = (struct ioctl_hyper_dmabuf_rx_ch_setup *)data;
 
-	ret = ops->init_rx_ch(rx_ch_attr->source_domain);
+	ret = bknd_ops->init_rx_ch(rx_ch_attr->source_domain);
 
 	return ret;
 }
@@ -79,7 +79,7 @@ static int hyper_dmabuf_rx_ch_setup_ioctl(struct file *filp, void *data)
 static int send_export_msg(struct exported_sgt_info *exported,
 			   struct pages_info *pg_info)
 {
-	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
 	struct hyper_dmabuf_req *req;
 	int op[MAX_NUMBER_OF_OPERANDS] = {0};
 	int ret, i;
@@ -94,7 +94,7 @@ static int send_export_msg(struct exported_sgt_info *exported,
 		op[4] = pg_info->nents;
 		op[5] = pg_info->frst_ofst;
 		op[6] = pg_info->last_len;
-		op[7] = ops->share_pages(pg_info->pgs, exported->rdomid,
+		op[7] = bknd_ops->share_pages(pg_info->pgs, exported->rdomid,
 					 pg_info->nents, &exported->refs_info);
 		if (op[7] < 0) {
 			dev_err(hy_drv_priv->dev, "pages sharing failed\n");
@@ -115,7 +115,7 @@ static int send_export_msg(struct exported_sgt_info *exported,
 	/* composing a message to the importer */
 	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT, &op[0]);
 
-	ret = ops->send_req(exported->rdomid, req, true);
+	ret = bknd_ops->send_req(exported->rdomid, req, true);
 
 	kfree(req);
 
@@ -423,7 +423,7 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 {
 	struct ioctl_hyper_dmabuf_export_fd *export_fd_attr =
 			(struct ioctl_hyper_dmabuf_export_fd *)data;
-	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
 	struct imported_sgt_info *imported;
 	struct hyper_dmabuf_req *req;
 	struct page **data_pgs;
@@ -465,7 +465,7 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 
 	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD, &op[0]);
 
-	ret = ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req, true);
+	ret = bknd_ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req, true);
 
 	if (ret < 0) {
 		/* in case of timeout other end eventually will receive request,
@@ -473,7 +473,7 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 		 */
 		hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD_FAILED,
 					&op[0]);
-		ops->send_req(op[0], req, false);
+		bknd_ops->send_req(op[0], req, false);
 		kfree(req);
 		dev_err(hy_drv_priv->dev,
 			"Failed to create sgt or notify exporter\n");
@@ -512,7 +512,7 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 			imported->hid.id, imported->hid.rng_key[0],
 			imported->hid.rng_key[1], imported->hid.rng_key[2]);
 
-		data_pgs = ops->map_shared_pages(imported->ref_handle,
+		data_pgs = bknd_ops->map_shared_pages(imported->ref_handle,
 					HYPER_DMABUF_DOM_ID(imported->hid),
 					imported->nents,
 					&imported->refs_info);
@@ -536,7 +536,7 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 			hyper_dmabuf_create_req(req,
 						HYPER_DMABUF_EXPORT_FD_FAILED,
 						&op[0]);
-			ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req,
+			bknd_ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req,
 							  false);
 			kfree(req);
 			mutex_unlock(&hy_drv_priv->lock);
@@ -570,7 +570,7 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 static void delayed_unexport(struct work_struct *work)
 {
 	struct hyper_dmabuf_req *req;
-	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
 	struct exported_sgt_info *exported =
 		container_of(work, struct exported_sgt_info, unexport.work);
 	int op[4];
@@ -602,7 +602,7 @@ static void delayed_unexport(struct work_struct *work)
 	/* Now send unexport request to remote domain, marking
 	 * that buffer should not be used anymore
 	 */
-	ret = ops->send_req(exported->rdomid, req, true);
+	ret = bknd_ops->send_req(exported->rdomid, req, true);
 	if (ret < 0) {
 		dev_err(hy_drv_priv->dev,
 			"unexport message for buffer {id:%d key:%d %d %d} failed\n",
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
index bf805b1..e85f619 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
@@ -52,7 +52,7 @@ static int dmabuf_refcount(struct dma_buf *dma_buf)
 static int sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
 {
 	struct hyper_dmabuf_req *req;
-	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
 	int op[5];
 	int i;
 	int ret;
@@ -72,7 +72,8 @@ static int sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
 	hyper_dmabuf_create_req(req, HYPER_DMABUF_OPS_TO_SOURCE, &op[0]);
 
 	/* send request and wait for a response */
-	ret = ops->send_req(HYPER_DMABUF_DOM_ID(hid), req, WAIT_AFTER_SYNC_REQ);
+	ret = bknd_ops->send_req(HYPER_DMABUF_DOM_ID(hid), req,
+				 WAIT_AFTER_SYNC_REQ);
 
 	if (ret < 0) {
 		dev_dbg(hy_drv_priv->dev,
@@ -186,7 +187,7 @@ static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
 static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 {
 	struct imported_sgt_info *imported;
-	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
 	int ret;
 	int finish;
 
@@ -201,7 +202,8 @@ static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 	imported->importers--;
 
 	if (imported->importers == 0) {
-		ops->unmap_shared_pages(&imported->refs_info, imported->nents);
+		bknd_ops->unmap_shared_pages(&imported->refs_info,
+					     imported->nents);
 
 		if (imported->sgt) {
 			sg_free_table(imported->sgt);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
index 9ad7ab9..d15eb17 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
@@ -170,7 +170,7 @@ int hyper_dmabuf_cleanup_sgt_info(struct exported_sgt_info *exported,
 	struct attachment_list *attachl;
 	struct kmap_vaddr_list *va_kmapl;
 	struct vmap_vaddr_list *va_vmapl;
-	struct hyper_dmabuf_backend_ops *ops = hy_drv_priv->backend_ops;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
 
 	if (!exported) {
 		dev_err(hy_drv_priv->dev, "invalid hyper_dmabuf_id\n");
@@ -231,7 +231,7 @@ int hyper_dmabuf_cleanup_sgt_info(struct exported_sgt_info *exported,
 	}
 
 	/* Start cleanup of buffer in reverse order to exporting */
-	ops->unshare_pages(&exported->refs_info, exported->nents);
+	bknd_ops->unshare_pages(&exported->refs_info, exported->nents);
 
 	/* unmap dma-buf */
 	dma_buf_unmap_attachment(exported->active_attached->attach,
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
index 23965b8..1d7249d 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
@@ -30,7 +30,7 @@
 #include "hyper_dmabuf_xen_comm.h"
 #include "hyper_dmabuf_xen_shm.h"
 
-struct hyper_dmabuf_backend_ops xen_backend_ops = {
+struct hyper_dmabuf_bknd_ops xen_bknd_ops = {
 	.get_vm_id = xen_be_get_domid,
 	.share_pages = xen_be_share_pages,
 	.unshare_pages = xen_be_unshare_pages,
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
index e5bff09..a4902b7 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
@@ -26,7 +26,7 @@
 #define __HYPER_DMABUF_XEN_DRV_H__
 #include <xen/interface/grant_table.h>
 
-extern struct hyper_dmabuf_backend_ops xen_backend_ops;
+extern struct hyper_dmabuf_bknd_ops xen_bknd_ops;
 
 /* Main purpose of this structure is to keep
  * all references created or acquired for sharing
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 55/60] hyper_dmabuf: fixed wrong send_req call
  2017-12-19 19:29 ` Dongwon Kim
                   ` (73 preceding siblings ...)
  (?)
@ 2017-12-19 19:30 ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

From: Mateusz Polrola <mateuszx.potrola@intel.com>

A wrong vmid was used when sending the HYPER_DMABUF_EXPORT_FD_FAILED
message: the hyper_dmabuf id was passed instead of the vmid.
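
For reference, the destination of this request is the exporting
domain, whose id is packed into the hyper_dmabuf id. A rough sketch
of the assumed layout (the authoritative macros live in
hyper_dmabuf_id.h; this is illustration only, not part of the patch):

  /* exporter's domid is assumed to sit in the top byte of hid.id */
  #define HYPER_DMABUF_ID_CREATE(domid, cnt) \
          ((((domid) & 0xFF) << 24) | ((cnt) & 0xFFFFFF))
  #define HYPER_DMABUF_DOM_ID(hid) (((hid).id >> 24) & 0xFF)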

Signed-off-by: Mateusz Polrola <mateuszx.potrola@intel.com>
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
index d1970c8..ca6edf2 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -473,7 +473,7 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 		 */
 		hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD_FAILED,
 					&op[0]);
-		bknd_ops->send_req(op[0], req, false);
+		bknd_ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req, false);
 		kfree(req);
 		dev_err(hy_drv_priv->dev,
 			"Failed to create sgt or notify exporter\n");
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 56/60] hyper_dmabuf: add initialization and cleanup to bknd_ops
  2017-12-19 19:29 ` Dongwon Kim
                   ` (76 preceding siblings ...)
  (?)
@ 2017-12-19 19:30 ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

From: Mateusz Polrola <mateuszx.potrola@intel.com>

Introduced additional init and cleanup routines in the backend
API structure that might be useful for hypervisors other than Xen.
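
As an illustration only (the names below are invented, not part of
this series), a hypothetical non-Xen backend could use the new hooks
to set up and tear down backend-global state:

  static int kvm_be_init(void)
  {
          /* e.g. allocate backend-wide bookkeeping, connect to host */
          return 0;
  }

  static int kvm_be_cleanup(void)
  {
          /* release whatever kvm_be_init() set up */
          return 0;
  }

  struct hyper_dmabuf_bknd_ops kvm_bknd_ops = {
          .init = kvm_be_init,
          .cleanup = kvm_be_cleanup,
          /* remaining ops filled in as for xen_bknd_ops */
  };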

Signed-off-by: Mateusz Polrola <mateuszx.potrola@intel.com>
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c         | 14 ++++++++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h         |  6 ++++++
 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c |  2 ++
 3 files changed, 22 insertions(+)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index 161fee7..f2731bf 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -330,6 +330,16 @@ static int __init hyper_dmabuf_drv_init(void)
 	hy_drv_priv->pending = 0;
 #endif
 
+	if (hy_drv_priv->bknd_ops->init) {
+		ret = hy_drv_priv->bknd_ops->init();
+
+		if (ret < 0) {
+			dev_dbg(hy_drv_priv->dev,
+				"failed to initialize backend.\n");
+			return ret;
+		}
+	}
+
 	hy_drv_priv->domid = hy_drv_priv->bknd_ops->get_vm_id();
 
 	ret = hy_drv_priv->bknd_ops->init_comm_env();
@@ -362,6 +372,10 @@ static void hyper_dmabuf_drv_exit(void)
 
 	hy_drv_priv->bknd_ops->destroy_comm();
 
+	if (hy_drv_priv->bknd_ops->cleanup) {
+		hy_drv_priv->bknd_ops->cleanup();
+	}
+
 	/* destroy workqueue */
 	if (hy_drv_priv->work_queue)
 		destroy_workqueue(hy_drv_priv->work_queue);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
index 4a51f9e..9337d53 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -73,6 +73,12 @@ struct list_reusable_id {
 };
 
 struct hyper_dmabuf_bknd_ops {
+	/* backend initialization routine (optional) */
+	int (*init)(void);
+
+	/* backend cleanup routine (optional) */
+	int (*cleanup)(void);
+
 	/* retrieving id of current virtual machine */
 	int (*get_vm_id)(void);
 
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
index 1d7249d..14ed3bc 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
@@ -31,6 +31,8 @@
 #include "hyper_dmabuf_xen_shm.h"
 
 struct hyper_dmabuf_bknd_ops xen_bknd_ops = {
+	.init = NULL, /* not needed for xen */
+	.cleanup = NULL, /* not needed for xen */
 	.get_vm_id = xen_be_get_domid,
 	.share_pages = xen_be_share_pages,
 	.unshare_pages = xen_be_unshare_pages,
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 57/60] hyper_dmabuf: change type of ref to shared pages to unsigned long
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:30   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

From: Mateusz Polrola <mateuszx.potrola@intel.com>

Changed the data type of the reference to the group of pages to be
shared to unsigned long, in case the reference is a direct
representation of a memory address (which would not fit in an int).
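
Concretely: if the reference is ever a raw address on a 64-bit
kernel, an int cannot hold it, while unsigned long always matches
the native pointer width. A minimal illustration (value made up):

  unsigned long ref = 0x100000000UL; /* needs more than 32 bits */
  int bad_ref = (int)ref;            /* truncates to 0 - ref lost */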

Signed-off-by: Mateusz Polrola <mateuszx.potrola@intel.com>
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h         | 2 +-
 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c | 4 ++--
 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
index 9337d53..c2bb3ce 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -91,7 +91,7 @@ struct hyper_dmabuf_bknd_ops {
 	/* map remotely shared pages on importer's side via
 	 * hypervisor-specific method
 	 */
-	struct page ** (*map_shared_pages)(int, int, int, void **);
+	struct page ** (*map_shared_pages)(unsigned long, int, int, void **);
 
 	/* unmap and free shared pages on importer's side via
 	 * hypervisor-specific method
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
index 16416f8..c6a15f1 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
@@ -252,8 +252,8 @@ int xen_be_unshare_pages(void **refs_info, int nents)
 /* Maps provided top level ref id and then return array of pages
  * containing data refs.
  */
-struct page **xen_be_map_shared_pages(int lvl3_gref, int domid,
-				   int nents, void **refs_info)
+struct page **xen_be_map_shared_pages(unsigned long lvl3_gref, int domid,
+				      int nents, void **refs_info)
 {
 	struct page *lvl3_table_page;
 	struct page **lvl2_table_pages;
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
index e02fab0b..d5236b5 100644
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
@@ -37,7 +37,7 @@ int xen_be_unshare_pages(void **refs_info, int nents);
 /* Maps provided top level ref id and then return array of pages containing
  * data refs.
  */
-struct page **xen_be_map_shared_pages(int lvl3_gref, int domid,
+struct page **xen_be_map_shared_pages(unsigned long lvl3_gref, int domid,
 				      int nents,
 				      void **refs_info);
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 58/60] hyper_dmabuf: move device node out of /dev/xen/
  2017-12-19 19:29 ` Dongwon Kim
                   ` (79 preceding siblings ...)
  (?)
@ 2017-12-19 19:30 ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

From: Mateusz Polrola <mateuszx.potrola@intel.com>

The hyper_dmabuf driver is a generic driver designed to work with
any hypervisor through various backend implementations, so its
device node is moved out of /dev/xen.
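
For userspace this only changes the path used to open the device
node; a minimal sketch (error handling omitted):

  /* before: int fd = open("/dev/xen/hyper_dmabuf", O_RDWR); */
  int fd = open("/dev/hyper_dmabuf", O_RDWR);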

Signed-off-by: Mateusz Polrola <mateuszx.potrola@intel.com>
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index f2731bf..bbb3414 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -227,7 +227,7 @@ static const struct file_operations hyper_dmabuf_driver_fops = {
 
 static struct miscdevice hyper_dmabuf_miscdev = {
 	.minor = MISC_DYNAMIC_MINOR,
-	.name = "xen/hyper_dmabuf",
+	.name = "hyper_dmabuf",
 	.fops = &hyper_dmabuf_driver_fops,
 };
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 59/60] hyper_dmabuf: freeing hy_drv_priv when drv init fails (v2)
  2017-12-19 19:29 ` Dongwon Kim
                   ` (80 preceding siblings ...)
  (?)
@ 2017-12-19 19:30 ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim, Kai Chen

From: Kai Chen <kai.chen@intel.com>

Make sure hy_drv_priv is freed before exiting in several places
in hyper_dmabuf_drv_init.

v2: unlocking mutex before freeing hy_drv_priv when bknd_ops->init
fails
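
The same unwinding is commonly written with goto labels so that each
new failure point needs only one line; roughly (shown for comparison,
not what this patch does):

  ret = register_device();
  if (ret < 0)
          goto err_free;
  ...
  ret = hyper_dmabuf_table_init();
  if (ret < 0)
          goto err_unlock;
  ...
err_unlock:
  mutex_unlock(&hy_drv_priv->lock);
err_free:
  kfree(hy_drv_priv);
  return ret;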

Signed-off-by: Kai Chen <kai.chen@intel.com>
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
index bbb3414..eead4c0 100644
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -271,8 +271,10 @@ static int __init hyper_dmabuf_drv_init(void)
 		return -ENOMEM;
 
 	ret = register_device();
-	if (ret < 0)
+	if (ret < 0) {
+		kfree(hy_drv_priv);
 		return ret;
+	}
 
 /* currently only supports XEN hypervisor */
 #ifdef CONFIG_HYPER_DMABUF_XEN
@@ -284,6 +286,7 @@ static int __init hyper_dmabuf_drv_init(void)
 
 	if (hy_drv_priv->bknd_ops == NULL) {
 		printk(KERN_ERR "Hyper_dmabuf: no backend found\n");
+		kfree(hy_drv_priv);
 		return -1;
 	}
 
@@ -336,6 +339,8 @@ static int __init hyper_dmabuf_drv_init(void)
 		if (ret < 0) {
 			dev_dbg(hy_drv_priv->dev,
 				"failed to initialize backend.\n");
+			mutex_unlock(&hy_drv_priv->lock);
+			kfree(hy_drv_priv);
 			return ret;
 		}
 	}
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 60/60] hyper_dmabuf: move hyper_dmabuf to under drivers/dma-buf/
  2017-12-19 19:29 ` Dongwon Kim
                   ` (83 preceding siblings ...)
  (?)
@ 2017-12-19 19:30 ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, mateuszx.potrola, dongwon.kim

This driver's ultimate goal is to expand the boundary of data
sharing via DMA-BUF across different OSes running on the same
hardware, regardless of which hypervisor is used for the OS
virtualization. So it makes more sense to have its implementation
under drivers/dma-buf.

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/dma-buf/hyper_dmabuf/Kconfig               |  42 +
 drivers/dma-buf/hyper_dmabuf/Makefile              |  49 ++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c    | 408 +++++++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h    | 118 +++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.c  | 122 +++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.h  |  38 +
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.c     | 133 +++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h     |  51 ++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c  | 786 +++++++++++++++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.h  |  50 ++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c   | 293 +++++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.h   |  71 ++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c    | 414 +++++++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h    |  87 ++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c    | 413 +++++++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.h    |  32 +
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.c  | 172 ++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.h  |  10 +
 .../hyper_dmabuf/hyper_dmabuf_remote_sync.c        | 322 +++++++
 .../hyper_dmabuf/hyper_dmabuf_remote_sync.h        |  30 +
 .../dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c   | 255 ++++++
 .../dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.h   |  41 +
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h | 141 +++
 .../xen-backend/hyper_dmabuf_xen_comm.c            | 941 +++++++++++++++++++++
 .../xen-backend/hyper_dmabuf_xen_comm.h            |  78 ++
 .../xen-backend/hyper_dmabuf_xen_comm_list.c       | 158 ++++
 .../xen-backend/hyper_dmabuf_xen_comm_list.h       |  67 ++
 .../xen-backend/hyper_dmabuf_xen_drv.c             |  46 +
 .../xen-backend/hyper_dmabuf_xen_drv.h             |  53 ++
 .../xen-backend/hyper_dmabuf_xen_shm.c             | 525 ++++++++++++
 .../xen-backend/hyper_dmabuf_xen_shm.h             |  46 +
 drivers/xen/Kconfig                                |   2 +-
 drivers/xen/Makefile                               |   2 +-
 drivers/xen/hyper_dmabuf/Kconfig                   |  42 -
 drivers/xen/hyper_dmabuf/Makefile                  |  49 --
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        | 408 ---------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        | 118 ---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c      | 122 ---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_event.h      |  38 -
 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c         | 133 ---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h         |  51 --
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 786 -----------------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h      |  50 --
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c       | 293 -------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h       |  71 --
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        | 414 ---------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h        |  87 --
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c        | 413 ---------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h        |  32 -
 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c      | 172 ----
 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h      |  10 -
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c    | 322 -------
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h    |  30 -
 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c   | 255 ------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h   |  41 -
 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h     | 141 ---
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   | 941 ---------------------
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h   |  78 --
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c  | 158 ----
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h  |  67 --
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c    |  46 -
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h    |  53 --
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c    | 525 ------------
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h    |  46 -
 64 files changed, 5994 insertions(+), 5994 deletions(-)
 create mode 100644 drivers/dma-buf/hyper_dmabuf/Kconfig
 create mode 100644 drivers/dma-buf/hyper_dmabuf/Makefile
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm_list.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm_list.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_drv.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_drv.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_shm.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_shm.h
 delete mode 100644 drivers/xen/hyper_dmabuf/Kconfig
 delete mode 100644 drivers/xen/hyper_dmabuf/Makefile
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_event.h
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
 delete mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
 delete mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
 delete mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
 delete mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
 delete mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
 delete mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
 delete mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
 delete mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h

diff --git a/drivers/dma-buf/hyper_dmabuf/Kconfig b/drivers/dma-buf/hyper_dmabuf/Kconfig
new file mode 100644
index 0000000..5efcd44
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/Kconfig
@@ -0,0 +1,42 @@
+menu "hyper_dmabuf options"
+
+config HYPER_DMABUF
+	tristate "Enables hyper dmabuf driver"
+	default y
+
+config HYPER_DMABUF_XEN
+	bool "Configure hyper_dmabuf for XEN hypervisor"
+	default y
+	depends on HYPER_DMABUF
+	help
+	  Configuring hyper_dmabuf driver for XEN hypervisor
+
+config HYPER_DMABUF_SYSFS
+	bool "Enable sysfs information about hyper DMA buffers"
+	default y
+	depends on HYPER_DMABUF
+	help
+	  Expose information about imported and exported buffers using
+	  hyper_dmabuf driver
+
+config HYPER_DMABUF_EVENT_GEN
+	bool "Enable event-generation and polling operation"
+	default n
+	depends on HYPER_DMABUF
+	help
+	  With this config enabled, the hyper_dmabuf driver on the importer
+	  side generates events and queues them up in the event list whenever
+	  a new shared DMA-BUF is available. Events in the list can be
+	  retrieved via the read operation.
+
+config HYPER_DMABUF_XEN_AUTO_RX_CH_ADD
+	bool "Enable automatic rx-ch add with 10 secs interval"
+	default y
+	depends on HYPER_DMABUF && HYPER_DMABUF_XEN
+	help
+	  If enabled, the driver reads a node in xenstore every 10 seconds
+	  to check whether any tx comm ch has been configured by another
+	  domain, then automatically initializes a matching rx comm ch for
+	  each existing tx comm ch.
+
+endmenu
diff --git a/drivers/dma-buf/hyper_dmabuf/Makefile b/drivers/dma-buf/hyper_dmabuf/Makefile
new file mode 100644
index 0000000..cce8e69
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/Makefile
@@ -0,0 +1,49 @@
+TARGET_MODULE:=hyper_dmabuf
+
+PLATFORM:=XEN
+
+# If we are running via the kernel build system
+ifneq ($(KERNELRELEASE),)
+	$(TARGET_MODULE)-objs := hyper_dmabuf_drv.o \
+                                 hyper_dmabuf_ioctl.o \
+                                 hyper_dmabuf_list.o \
+				 hyper_dmabuf_sgl_proc.o \
+				 hyper_dmabuf_ops.o \
+				 hyper_dmabuf_msg.o \
+				 hyper_dmabuf_id.o \
+				 hyper_dmabuf_remote_sync.o \
+				 hyper_dmabuf_query.o
+
+ifeq ($(CONFIG_HYPER_DMABUF_EVENT_GEN), y)
+	$(TARGET_MODULE)-objs += hyper_dmabuf_event.o
+endif
+
+ifeq ($(CONFIG_HYPER_DMABUF_XEN), y)
+	$(TARGET_MODULE)-objs += xen-backend/hyper_dmabuf_xen_comm.o \
+				 xen-backend/hyper_dmabuf_xen_comm_list.o \
+				 xen-backend/hyper_dmabuf_xen_shm.o \
+				 xen-backend/hyper_dmabuf_xen_drv.o
+endif
+
+obj-$(CONFIG_HYPER_DMABUF) := $(TARGET_MODULE).o
+
+# If we are running without kernel build system
+else
+BUILDSYSTEM_DIR?=../../../
+PWD:=$(shell pwd)
+
+all:
+# run kernel build system to make module
+	$(MAKE) -C $(BUILDSYSTEM_DIR) M=$(PWD) modules
+
+clean:
+# run kernel build system to cleanup in current directory
+	$(MAKE) -C $(BUILDSYSTEM_DIR) M=$(PWD) clean
+
+load:
+	insmod ./$(TARGET_MODULE).ko
+
+unload:
+	rmmod ./$(TARGET_MODULE).ko
+
+endif
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c
new file mode 100644
index 0000000..498b06c
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -0,0 +1,408 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/miscdevice.h>
+#include <linux/workqueue.h>
+#include <linux/slab.h>
+#include <linux/device.h>
+#include <linux/uaccess.h>
+#include <linux/poll.h>
+#include <linux/dma-buf.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_ioctl.h"
+#include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_id.h"
+#include "hyper_dmabuf_event.h"
+
+#ifdef CONFIG_HYPER_DMABUF_XEN
+#include "xen-backend/hyper_dmabuf_xen_drv.h"
+#endif
+
+MODULE_LICENSE("GPL and additional rights");
+MODULE_AUTHOR("Intel Corporation");
+
+struct hyper_dmabuf_private *hy_drv_priv;
+
+static void force_free(struct exported_sgt_info *exported,
+		       void *attr)
+{
+	struct ioctl_hyper_dmabuf_unexport unexport_attr;
+	struct file *filp = (struct file *)attr;
+
+	if (!filp || !exported)
+		return;
+
+	if (exported->filp == filp) {
+		dev_dbg(hy_drv_priv->dev,
+			"Forcefully releasing buffer {id:%d key:%d %d %d}\n",
+			 exported->hid.id, exported->hid.rng_key[0],
+			 exported->hid.rng_key[1], exported->hid.rng_key[2]);
+
+		unexport_attr.hid = exported->hid;
+		unexport_attr.delay_ms = 0;
+
+		hyper_dmabuf_unexport_ioctl(filp, &unexport_attr);
+	}
+}
+
+static int hyper_dmabuf_open(struct inode *inode, struct file *filp)
+{
+	int ret = 0;
+
+	/* Do not allow exclusive open */
+	if (filp->f_flags & O_EXCL)
+		return -EBUSY;
+
+	return ret;
+}
+
+static int hyper_dmabuf_release(struct inode *inode, struct file *filp)
+{
+	hyper_dmabuf_foreach_exported(force_free, filp);
+
+	return 0;
+}
+
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
+
+static unsigned int hyper_dmabuf_event_poll(struct file *filp,
+				     struct poll_table_struct *wait)
+{
+	poll_wait(filp, &hy_drv_priv->event_wait, wait);
+
+	if (!list_empty(&hy_drv_priv->event_list))
+		return POLLIN | POLLRDNORM;
+
+	return 0;
+}
+
+static ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
+		size_t count, loff_t *offset)
+{
+	int ret;
+
+	/* only root can read events */
+	if (!capable(CAP_DAC_OVERRIDE)) {
+		dev_err(hy_drv_priv->dev,
+			"Only root can read events\n");
+		return -EPERM;
+	}
+
+	/* make sure user buffer can be written */
+	if (!access_ok(VERIFY_WRITE, buffer, count)) {
+		dev_err(hy_drv_priv->dev,
+			"User buffer can't be written.\n");
+		return -EINVAL;
+	}
+
+	ret = mutex_lock_interruptible(&hy_drv_priv->event_read_lock);
+	if (ret)
+		return ret;
+
+	while (1) {
+		struct hyper_dmabuf_event *e = NULL;
+
+		spin_lock_irq(&hy_drv_priv->event_lock);
+		if (!list_empty(&hy_drv_priv->event_list)) {
+			e = list_first_entry(&hy_drv_priv->event_list,
+					struct hyper_dmabuf_event, link);
+			list_del(&e->link);
+		}
+		spin_unlock_irq(&hy_drv_priv->event_lock);
+
+		if (!e) {
+			if (ret)
+				break;
+
+			if (filp->f_flags & O_NONBLOCK) {
+				ret = -EAGAIN;
+				break;
+			}
+
+			mutex_unlock(&hy_drv_priv->event_read_lock);
+			ret = wait_event_interruptible(hy_drv_priv->event_wait,
+				  !list_empty(&hy_drv_priv->event_list));
+
+			if (ret == 0)
+				ret = mutex_lock_interruptible(
+					&hy_drv_priv->event_read_lock);
+
+			if (ret)
+				return ret;
+		} else {
+			unsigned int length = (sizeof(e->event_data.hdr) +
+						      e->event_data.hdr.size);
+
+			if (length > count - ret) {
+put_back_event:
+				spin_lock_irq(&hy_drv_priv->event_lock);
+				list_add(&e->link, &hy_drv_priv->event_list);
+				spin_unlock_irq(&hy_drv_priv->event_lock);
+				break;
+			}
+
+			if (copy_to_user(buffer + ret, &e->event_data.hdr,
+					 sizeof(e->event_data.hdr))) {
+				if (ret == 0)
+					ret = -EFAULT;
+
+				goto put_back_event;
+			}
+
+			ret += sizeof(e->event_data.hdr);
+
+			if (copy_to_user(buffer + ret, e->event_data.data,
+					 e->event_data.hdr.size)) {
+				/* error while copying void *data */
+
+				struct hyper_dmabuf_event_hdr dummy_hdr = {0};
+
+				ret -= sizeof(e->event_data.hdr);
+
+				/* nullifying hdr of the event in user buffer */
+				if (copy_to_user(buffer + ret, &dummy_hdr,
+						 sizeof(dummy_hdr))) {
+					dev_err(hy_drv_priv->dev,
+						"failed to nullify invalid hdr already in userspace\n");
+				}
+
+				ret = -EFAULT;
+
+				goto put_back_event;
+			}
+
+			ret += e->event_data.hdr.size;
+			hy_drv_priv->pending--;
+			kfree(e);
+		}
+	}
+
+	mutex_unlock(&hy_drv_priv->event_read_lock);
+
+	return ret;
+}
+
+#endif
+
+static const struct file_operations hyper_dmabuf_driver_fops = {
+	.owner = THIS_MODULE,
+	.open = hyper_dmabuf_open,
+	.release = hyper_dmabuf_release,
+
+/* poll and read interfaces are needed only for event-polling */
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
+	.read = hyper_dmabuf_event_read,
+	.poll = hyper_dmabuf_event_poll,
+#endif
+
+	.unlocked_ioctl = hyper_dmabuf_ioctl,
+};
+
+static struct miscdevice hyper_dmabuf_miscdev = {
+	.minor = MISC_DYNAMIC_MINOR,
+	.name = "hyper_dmabuf",
+	.fops = &hyper_dmabuf_driver_fops,
+};
+
+static int register_device(void)
+{
+	int ret = 0;
+
+	ret = misc_register(&hyper_dmabuf_miscdev);
+
+	if (ret) {
+		printk(KERN_ERR "hyper_dmabuf: driver can't be registered\n");
+		return ret;
+	}
+
+	hy_drv_priv->dev = hyper_dmabuf_miscdev.this_device;
+
+	/* TODO: Check if there is a different way to initialize dma mask */
+	dma_coerce_mask_and_coherent(hy_drv_priv->dev, DMA_BIT_MASK(64));
+
+	return ret;
+}
+
+static void unregister_device(void)
+{
+	dev_info(hy_drv_priv->dev,
+		"hyper_dmabuf: unregister_device() is called\n");
+
+	misc_deregister(&hyper_dmabuf_miscdev);
+}
+
+static int __init hyper_dmabuf_drv_init(void)
+{
+	int ret = 0;
+
+	printk(KERN_NOTICE "hyper_dmabuf: Initialization started\n");
+
+	hy_drv_priv = kcalloc(1, sizeof(struct hyper_dmabuf_private),
+			      GFP_KERNEL);
+
+	if (!hy_drv_priv)
+		return -ENOMEM;
+
+	ret = register_device();
+	if (ret < 0) {
+		kfree(hy_drv_priv);
+		return ret;
+	}
+
+/* currently only supports XEN hypervisor */
+#ifdef CONFIG_HYPER_DMABUF_XEN
+	hy_drv_priv->bknd_ops = &xen_bknd_ops;
+#else
+	hy_drv_priv->bknd_ops = NULL;
+	printk(KERN_ERR "hyper_dmabuf drv currently supports XEN only.\n");
+#endif
+
+	if (hy_drv_priv->bknd_ops == NULL) {
+		printk(KERN_ERR "Hyper_dmabuf: no backend found\n");
+		kfree(hy_drv_priv);
+		return -1;
+	}
+
+	mutex_init(&hy_drv_priv->lock);
+
+	mutex_lock(&hy_drv_priv->lock);
+
+	hy_drv_priv->initialized = false;
+
+	dev_info(hy_drv_priv->dev,
+		 "initializing database for imported/exported dmabufs\n");
+
+	hy_drv_priv->work_queue = create_workqueue("hyper_dmabuf_wqueue");
+
+	ret = hyper_dmabuf_table_init();
+	if (ret < 0) {
+		dev_err(hy_drv_priv->dev,
+			"failed to init table for exported/imported entries\n");
+		mutex_unlock(&hy_drv_priv->lock);
+		kfree(hy_drv_priv);
+		return ret;
+	}
+
+#ifdef CONFIG_HYPER_DMABUF_SYSFS
+	ret = hyper_dmabuf_register_sysfs(hy_drv_priv->dev);
+	if (ret < 0) {
+		dev_err(hy_drv_priv->dev,
+			"failed to initialize sysfs\n");
+		mutex_unlock(&hy_drv_priv->lock);
+		kfree(hy_drv_priv);
+		return ret;
+	}
+#endif
+
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
+	mutex_init(&hy_drv_priv->event_read_lock);
+	spin_lock_init(&hy_drv_priv->event_lock);
+
+	/* Initialize event queue */
+	INIT_LIST_HEAD(&hy_drv_priv->event_list);
+	init_waitqueue_head(&hy_drv_priv->event_wait);
+
+	/* resetting number of pending events */
+	hy_drv_priv->pending = 0;
+#endif
+
+	if (hy_drv_priv->bknd_ops->init) {
+		ret = hy_drv_priv->bknd_ops->init();
+
+		if (ret < 0) {
+			dev_dbg(hy_drv_priv->dev,
+				"failed to initialize backend.\n");
+			mutex_unlock(&hy_drv_priv->lock);
+			kfree(hy_drv_priv);
+			return ret;
+		}
+	}
+
+	hy_drv_priv->domid = hy_drv_priv->bknd_ops->get_vm_id();
+
+	ret = hy_drv_priv->bknd_ops->init_comm_env();
+	if (ret < 0) {
+		dev_dbg(hy_drv_priv->dev,
+			"failed to initialize comm-env.\n");
+	} else {
+		hy_drv_priv->initialized = true;
+	}
+
+	mutex_unlock(&hy_drv_priv->lock);
+
+	dev_info(hy_drv_priv->dev,
+		"Finishing up initialization of hyper_dmabuf drv\n");
+
+	/* interrupt for comm should be registered here: */
+	return ret;
+}
+
+static void hyper_dmabuf_drv_exit(void)
+{
+#ifdef CONFIG_HYPER_DMABUF_SYSFS
+	hyper_dmabuf_unregister_sysfs(hy_drv_priv->dev);
+#endif
+
+	mutex_lock(&hy_drv_priv->lock);
+
+	/* hash tables for export/import entries and ring_infos */
+	hyper_dmabuf_table_destroy();
+
+	hy_drv_priv->bknd_ops->destroy_comm();
+
+	if (hy_drv_priv->bknd_ops->cleanup) {
+		hy_drv_priv->bknd_ops->cleanup();
+	}
+
+	/* destroy workqueue */
+	if (hy_drv_priv->work_queue)
+		destroy_workqueue(hy_drv_priv->work_queue);
+
+	/* destroy id_queue */
+	if (hy_drv_priv->id_queue)
+		hyper_dmabuf_free_hid_list();
+
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
+	/* clean up event queue */
+	hyper_dmabuf_events_release();
+#endif
+
+	mutex_unlock(&hy_drv_priv->lock);
+
+	dev_info(hy_drv_priv->dev,
+		 "hyper_dmabuf driver: Exiting\n");
+
+	unregister_device();
+
+	kfree(hy_drv_priv);
+}
+
+module_init(hyper_dmabuf_drv_init);
+module_exit(hyper_dmabuf_drv_exit);
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h
new file mode 100644
index 0000000..c2bb3ce
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -0,0 +1,118 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
+#define __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
+
+#include <linux/device.h>
+#include <xen/hyper_dmabuf.h>
+
+struct hyper_dmabuf_req;
+
+struct hyper_dmabuf_event {
+	struct hyper_dmabuf_event_data event_data;
+	struct list_head link;
+};
+
+struct hyper_dmabuf_private {
+	struct device *dev;
+
+	/* VM(domain) id of current VM instance */
+	int domid;
+
+	/* workqueue dedicated to hyper_dmabuf driver */
+	struct workqueue_struct *work_queue;
+
+	/* list of reusable hyper_dmabuf_ids */
+	struct list_reusable_id *id_queue;
+
+	/* backend ops - hypervisor specific */
+	struct hyper_dmabuf_bknd_ops *bknd_ops;
+
+	/* device global lock */
+	/* TODO: might need a lock per resource (e.g. EXPORT LIST) */
+	struct mutex lock;
+
+	/* flag that shows whether backend is initialized */
+	bool initialized;
+
+	wait_queue_head_t event_wait;
+	struct list_head event_list;
+
+	spinlock_t event_lock;
+	struct mutex event_read_lock;
+
+	/* # of pending events */
+	int pending;
+};
+
+struct list_reusable_id {
+	hyper_dmabuf_id_t hid;
+	struct list_head list;
+};
+
+struct hyper_dmabuf_bknd_ops {
+	/* backend initialization routine (optional) */
+	int (*init)(void);
+
+	/* backend cleanup routine (optional) */
+	int (*cleanup)(void);
+
+	/* retrieving id of current virtual machine */
+	int (*get_vm_id)(void);
+
+	/* get pages shared via hypervisor-specific method */
+	int (*share_pages)(struct page **, int, int, void **);
+
+	/* make shared pages unshared via hypervisor specific method */
+	int (*unshare_pages)(void **, int);
+
+	/* map remotely shared pages on importer's side via
+	 * hypervisor-specific method
+	 */
+	struct page ** (*map_shared_pages)(unsigned long, int, int, void **);
+
+	/* unmap and free shared pages on importer's side via
+	 * hypervisor-specific method
+	 */
+	int (*unmap_shared_pages)(void **, int);
+
+	/* initialize communication environment */
+	int (*init_comm_env)(void);
+
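+	/* tear down the communication environment */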
+	void (*destroy_comm)(void);
+
+	/* upstream ch setup (receiving and responding) */
+	int (*init_rx_ch)(int);
+
+	/* downstream ch setup (transmitting and parsing responses) */
+	int (*init_tx_ch)(int);
+
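+	/* send a request to a remote domain; judging by its callers
+	 * in hyper_dmabuf_ioctl.c, the last argument selects whether
+	 * the call waits for the response
+	 */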
+	int (*send_req)(int, struct hyper_dmabuf_req *, int);
+};
+
+/* exporting global drv private info */
+extern struct hyper_dmabuf_private *hy_drv_priv;
+
+#endif /* __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.c
new file mode 100644
index 0000000..392ea99
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.c
@@ -0,0 +1,122 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_event.h"
+
+static void send_event(struct hyper_dmabuf_event *e)
+{
+	struct hyper_dmabuf_event *oldest;
+	unsigned long irqflags;
+
+	spin_lock_irqsave(&hy_drv_priv->event_lock, irqflags);
+
+	/* check the current number of pending events; if it has hit
+	 * the maximum allowed, remove the oldest event in the list
+	 */
+	if (hy_drv_priv->pending >= MAX_DEPTH_EVENT_QUEUE) {
+		oldest = list_first_entry(&hy_drv_priv->event_list,
+				struct hyper_dmabuf_event, link);
+		list_del(&oldest->link);
+		hy_drv_priv->pending--;
+		kfree(oldest);
+	}
+
+	list_add_tail(&e->link,
+		      &hy_drv_priv->event_list);
+
+	hy_drv_priv->pending++;
+
+	wake_up_interruptible(&hy_drv_priv->event_wait);
+
+	spin_unlock_irqrestore(&hy_drv_priv->event_lock, irqflags);
+}
+
+void hyper_dmabuf_events_release(void)
+{
+	struct hyper_dmabuf_event *e, *et;
+	unsigned long irqflags;
+
+	spin_lock_irqsave(&hy_drv_priv->event_lock, irqflags);
+
+	list_for_each_entry_safe(e, et, &hy_drv_priv->event_list,
+				 link) {
+		list_del(&e->link);
+		kfree(e);
+		hy_drv_priv->pending--;
+	}
+
+	if (hy_drv_priv->pending) {
+		dev_err(hy_drv_priv->dev,
+			"possible leak on event_list\n");
+	}
+
+	spin_unlock_irqrestore(&hy_drv_priv->event_lock, irqflags);
+}
+
+int hyper_dmabuf_import_event(hyper_dmabuf_id_t hid)
+{
+	struct hyper_dmabuf_event *e;
+	struct imported_sgt_info *imported;
+
+	imported = hyper_dmabuf_find_imported(hid);
+
+	if (!imported) {
+		dev_err(hy_drv_priv->dev,
+			"can't find imported_sgt_info in the list\n");
+		return -EINVAL;
+	}
+
+	e = kzalloc(sizeof(*e), GFP_KERNEL);
+
+	if (!e)
+		return -ENOMEM;
+
+	e->event_data.hdr.event_type = HYPER_DMABUF_NEW_IMPORT;
+	e->event_data.hdr.hid = hid;
+	e->event_data.data = (void *)imported->priv;
+	e->event_data.hdr.size = imported->sz_priv;
+
+	send_event(e);
+
+	dev_dbg(hy_drv_priv->dev,
+		"event number = %d\n", hy_drv_priv->pending);
+
+	dev_dbg(hy_drv_priv->dev,
+		"generating events for {%d, %d, %d, %d}\n",
+		imported->hid.id, imported->hid.rng_key[0],
+		imported->hid.rng_key[1], imported->hid.rng_key[2]);
+
+	return 0;
+}
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.h
new file mode 100644
index 0000000..50db04f
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.h
@@ -0,0 +1,38 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_EVENT_H__
+#define __HYPER_DMABUF_EVENT_H__
+
+#define MAX_DEPTH_EVENT_QUEUE 32
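+/* when the queue is full, send_event() drops the oldest pending
+ * event to make room for the new one
+ */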
+
+enum hyper_dmabuf_event_type {
+	HYPER_DMABUF_NEW_IMPORT = 0x10000,
+};
+
+void hyper_dmabuf_events_release(void);
+
+int hyper_dmabuf_import_event(hyper_dmabuf_id_t hid);
+
+#endif /* __HYPER_DMABUF_EVENT_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.c
new file mode 100644
index 0000000..e67b84a
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.c
@@ -0,0 +1,133 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/random.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_id.h"
+
+void hyper_dmabuf_store_hid(hyper_dmabuf_id_t hid)
+{
+	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
+	struct list_reusable_id *new_reusable;
+
+	new_reusable = kmalloc(sizeof(*new_reusable), GFP_KERNEL);
+
+	if (!new_reusable)
+		return;
+
+	new_reusable->hid = hid;
+
+	list_add(&new_reusable->list, &reusable_head->list);
+}
+
+static hyper_dmabuf_id_t get_reusable_hid(void)
+{
+	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
+	hyper_dmabuf_id_t hid = {-1, {0, 0, 0} };
+
+	/* check if there is a reusable id */
+	if (!list_empty(&reusable_head->list)) {
+		reusable_head = list_first_entry(&reusable_head->list,
+						 struct list_reusable_id,
+						 list);
+
+		list_del(&reusable_head->list);
+		hid = reusable_head->hid;
+		kfree(reusable_head);
+	}
+
+	return hid;
+}
+
+void hyper_dmabuf_free_hid_list(void)
+{
+	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
+	struct list_reusable_id *temp_head;
+
+	if (reusable_head) {
+		/* free memory of all reusable ids in the list */
+		while (!list_empty(&reusable_head->list)) {
+			temp_head = list_first_entry(&reusable_head->list,
+						     struct list_reusable_id,
+						     list);
+			list_del(&temp_head->list);
+			kfree(temp_head);
+		}
+
+		/* freeing head */
+		kfree(reusable_head);
+	}
+}
+
+hyper_dmabuf_id_t hyper_dmabuf_get_hid(void)
+{
+	static int count;
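+	/* count persists across calls; once it reaches
+	 * HYPER_DMABUF_ID_MAX, only recycled hids can be handed out
+	 */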
+	hyper_dmabuf_id_t hid;
+	struct list_reusable_id *reusable_head;
+
+	/* first call to hyper_dmabuf_get_hid */
+	if (count == 0) {
+		reusable_head = kmalloc(sizeof(*reusable_head), GFP_KERNEL);
+
+		if (!reusable_head)
+			return (hyper_dmabuf_id_t){-1, {0, 0, 0} };
+
+		/* list head carries an invalid id (-1) */
+		reusable_head->hid.id = -1;
+		INIT_LIST_HEAD(&reusable_head->list);
+		hy_drv_priv->id_queue = reusable_head;
+	}
+
+	hid = get_reusable_hid();
+
+	/* create a new hid only if there is nothing in the reusable
+	 * id queue and count is less than the maximum allowed
+	 */
+	if (hid.id == -1 && count < HYPER_DMABUF_ID_MAX)
+		hid.id = HYPER_DMABUF_ID_CREATE(hy_drv_priv->domid, count++);
+
+	/* random data embedded in the id for security */
+	get_random_bytes(&hid.rng_key[0], sizeof(hid.rng_key));
+
+	return hid;
+}
+
+bool hyper_dmabuf_hid_keycomp(hyper_dmabuf_id_t hid1, hyper_dmabuf_id_t hid2)
+{
+	int i;
+
+	/* compare keys */
+	for (i = 0; i < 3; i++) {
+		if (hid1.rng_key[i] != hid2.rng_key[i])
+			return false;
+	}
+
+	return true;
+}
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h
new file mode 100644
index 0000000..ed690f3
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h
@@ -0,0 +1,51 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_ID_H__
+#define __HYPER_DMABUF_ID_H__
+
+#define HYPER_DMABUF_ID_CREATE(domid, cnt) \
+	((((domid) & 0xFF) << 24) | ((cnt) & 0xFFFFFF))
+
+#define HYPER_DMABUF_DOM_ID(hid) \
+	(((hid.id) >> 24) & 0xFF)
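+
+/* example: HYPER_DMABUF_ID_CREATE(2, 5) yields hid.id 0x02000005,
+ * from which HYPER_DMABUF_DOM_ID() recovers domain 2
+ */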
+
+/* currently maximum number of buffers shared
+ * at any given moment is limited to 1000
+ */
+#define HYPER_DMABUF_ID_MAX 1000
+
+/* adding freed hid to the reusable list */
+void hyper_dmabuf_store_hid(hyper_dmabuf_id_t hid);
+
+/* freeing the reusable list */
+void hyper_dmabuf_free_hid_list(void);
+
+/* getting a hid available to use. */
+hyper_dmabuf_id_t hyper_dmabuf_get_hid(void);
+
+/* comparing the random keys of two hids */
+bool hyper_dmabuf_hid_keycomp(hyper_dmabuf_id_t hid1, hyper_dmabuf_id_t hid2);
+
+#endif /* __HYPER_DMABUF_ID_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c
new file mode 100644
index 0000000..ca6edf2
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -0,0 +1,786 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include <linux/dma-buf.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_id.h"
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_ioctl.h"
+#include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_msg.h"
+#include "hyper_dmabuf_sgl_proc.h"
+#include "hyper_dmabuf_ops.h"
+#include "hyper_dmabuf_query.h"
+
+static int hyper_dmabuf_tx_ch_setup_ioctl(struct file *filp, void *data)
+{
+	struct ioctl_hyper_dmabuf_tx_ch_setup *tx_ch_attr;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
+	int ret = 0;
+
+	if (!data) {
+		dev_err(hy_drv_priv->dev, "user data is NULL\n");
+		return -EINVAL;
+	}
+	tx_ch_attr = (struct ioctl_hyper_dmabuf_tx_ch_setup *)data;
+
+	ret = bknd_ops->init_tx_ch(tx_ch_attr->remote_domain);
+
+	return ret;
+}
+
+static int hyper_dmabuf_rx_ch_setup_ioctl(struct file *filp, void *data)
+{
+	struct ioctl_hyper_dmabuf_rx_ch_setup *rx_ch_attr;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
+	int ret = 0;
+
+	if (!data) {
+		dev_err(hy_drv_priv->dev, "user data is NULL\n");
+		return -EINVAL;
+	}
+
+	rx_ch_attr = (struct ioctl_hyper_dmabuf_rx_ch_setup *)data;
+
+	ret = bknd_ops->init_rx_ch(rx_ch_attr->source_domain);
+
+	return ret;
+}
+
+static int send_export_msg(struct exported_sgt_info *exported,
+			   struct pages_info *pg_info)
+{
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
+	struct hyper_dmabuf_req *req;
+	int op[MAX_NUMBER_OF_OPERANDS] = {0};
+	int ret, i;
+
+	/* now create request for importer via ring */
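+	/* operand layout, mirrored in hyper_dmabuf_create_req():
+	 * op0~op3 : hid.id and its three random keys
+	 * op4~op6 : nents, offset in first page, length in last page
+	 * op7     : top-level reference of the shared pages
+	 * op8~    : size of the private data, then the data itself
+	 */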
+	op[0] = exported->hid.id;
+
+	for (i = 0; i < 3; i++)
+		op[i+1] = exported->hid.rng_key[i];
+
+	if (pg_info) {
+		op[4] = pg_info->nents;
+		op[5] = pg_info->frst_ofst;
+		op[6] = pg_info->last_len;
+		op[7] = bknd_ops->share_pages(pg_info->pgs, exported->rdomid,
+					 pg_info->nents, &exported->refs_info);
+		if (op[7] < 0) {
+			dev_err(hy_drv_priv->dev, "pages sharing failed\n");
+			return op[7];
+		}
+	}
+
+	op[8] = exported->sz_priv;
+
+	/* driver/application specific private info */
+	memcpy(&op[9], exported->priv, op[8]);
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+
+	if (!req)
+		return -ENOMEM;
+
+	/* composing a message to the importer */
+	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT, &op[0]);
+
+	ret = bknd_ops->send_req(exported->rdomid, req, true);
+
+	kfree(req);
+
+	return ret;
+}
+
+/* Fast-path exporting routine for the case where the same buffer is
+ * already exported. Here we skip the normal exporting process and just
+ * update the private data on both VMs (importer and exporter).
+ *
+ * Returns '1' if re-export is needed, '0' on success, or a negative
+ * kernel error code if something goes wrong.
+ */
+static int fastpath_export(hyper_dmabuf_id_t hid, int sz_priv, char *priv)
+{
+	int reexport = 1;
+	int ret = 0;
+	struct exported_sgt_info *exported;
+
+	exported = hyper_dmabuf_find_exported(hid);
+
+	if (!exported)
+		return reexport;
+
+	if (!exported->valid)
+		return reexport;
+
+	/*
+	 * Check if unexport is already scheduled for that buffer;
+	 * if so, try to cancel it. If cancelling fails, the buffer
+	 * needs to be re-exported once again.
+	 */
+	if (exported->unexport_sched) {
+		if (!cancel_delayed_work_sync(&exported->unexport))
+			return reexport;
+
+		exported->unexport_sched = false;
+	}
+
+	/* if the size of the private data has changed,
+	 * reallocate space for it with the new size
+	 */
+	if (sz_priv != exported->sz_priv) {
+		kfree(exported->priv);
+
+		/* truncating size */
+		if (sz_priv > MAX_SIZE_PRIV_DATA)
+			exported->sz_priv = MAX_SIZE_PRIV_DATA;
+		else
+			exported->sz_priv = sz_priv;
+
+		exported->priv = kcalloc(1, exported->sz_priv,
+					 GFP_KERNEL);
+
+		if (!exported->priv) {
+			hyper_dmabuf_remove_exported(exported->hid);
+			hyper_dmabuf_cleanup_sgt_info(exported, true);
+			kfree(exported);
+			return -ENOMEM;
+		}
+	}
+
+	/* update private data in sgt_info with new ones */
+	ret = copy_from_user(exported->priv, priv, exported->sz_priv);
+	if (ret) {
+		dev_err(hy_drv_priv->dev,
+			"Failed to load a new private data\n");
+		ret = -EINVAL;
+	} else {
+		/* send an export msg for updating priv in importer */
+		ret = send_export_msg(exported, NULL);
+
+		if (ret < 0) {
+			dev_err(hy_drv_priv->dev,
+				"Failed to send a new private data\n");
+			ret = -EBUSY;
+		}
+	}
+
+	return ret;
+}
+
+static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
+{
+	struct ioctl_hyper_dmabuf_export_remote *export_remote_attr =
+			(struct ioctl_hyper_dmabuf_export_remote *)data;
+	struct dma_buf *dma_buf;
+	struct dma_buf_attachment *attachment;
+	struct sg_table *sgt;
+	struct pages_info *pg_info;
+	struct exported_sgt_info *exported;
+	hyper_dmabuf_id_t hid;
+	int ret = 0;
+
+	if (hy_drv_priv->domid == export_remote_attr->remote_domain) {
+		dev_err(hy_drv_priv->dev,
+			"exporting to the same VM is not permitted\n");
+		return -EINVAL;
+	}
+
+	dma_buf = dma_buf_get(export_remote_attr->dmabuf_fd);
+
+	if (IS_ERR(dma_buf)) {
+		dev_err(hy_drv_priv->dev, "Cannot get dma buf\n");
+		return PTR_ERR(dma_buf);
+	}
+
+	/* check if this specific buffer was already exported to the
+	 * same domain; if so, and its sgt_info is valid, return the
+	 * hyper_dmabuf_id of the pre-exported sgt_info
+	 */
+	hid = hyper_dmabuf_find_hid_exported(dma_buf,
+					     export_remote_attr->remote_domain);
+
+	if (hid.id != -1) {
+		ret = fastpath_export(hid, export_remote_attr->sz_priv,
+				      export_remote_attr->priv);
+
+		/* return if fastpath_export succeeds or
+		 * gets some fatal error
+		 */
+		if (ret <= 0) {
+			dma_buf_put(dma_buf);
+			export_remote_attr->hid = hid;
+			return ret;
+		}
+	}
+
+	attachment = dma_buf_attach(dma_buf, hy_drv_priv->dev);
+	if (IS_ERR(attachment)) {
+		dev_err(hy_drv_priv->dev, "cannot get attachment\n");
+		ret = PTR_ERR(attachment);
+		goto fail_attach;
+	}
+
+	sgt = dma_buf_map_attachment(attachment, DMA_BIDIRECTIONAL);
+
+	if (IS_ERR(sgt)) {
+		dev_err(hy_drv_priv->dev, "cannot map attachment\n");
+		ret = PTR_ERR(sgt);
+		goto fail_map_attachment;
+	}
+
+	exported = kcalloc(1, sizeof(*exported), GFP_KERNEL);
+
+	if (!exported) {
+		ret = -ENOMEM;
+		goto fail_sgt_info_creation;
+	}
+
+	/* possible truncation */
+	if (export_remote_attr->sz_priv > MAX_SIZE_PRIV_DATA)
+		exported->sz_priv = MAX_SIZE_PRIV_DATA;
+	else
+		exported->sz_priv = export_remote_attr->sz_priv;
+
+	/* allocating a buffer for the buffer's private data */
+	if (exported->sz_priv != 0) {
+		exported->priv = kcalloc(1, exported->sz_priv, GFP_KERNEL);
+
+		if (!exported->priv) {
+			ret = -ENOMEM;
+			goto fail_priv_creation;
+		}
+	} else {
+		dev_err(hy_drv_priv->dev, "private data size is 0\n");
+	}
+
+	exported->hid = hyper_dmabuf_get_hid();
+
+	/* no more exported dmabufs allowed */
+	if (exported->hid.id == -1) {
+		dev_err(hy_drv_priv->dev,
+			"exceeded allowed number of dmabufs to be exported\n");
+		ret = -ENOMEM;
+		goto fail_map_active_sgts;
+	}
+
+	exported->rdomid = export_remote_attr->remote_domain;
+	exported->dma_buf = dma_buf;
+	exported->valid = true;
+
+	exported->active_sgts = kmalloc(sizeof(struct sgt_list), GFP_KERNEL);
+	if (!exported->active_sgts) {
+		ret = -ENOMEM;
+		goto fail_map_active_sgts;
+	}
+
+	exported->active_attached = kmalloc(sizeof(struct attachment_list),
+					    GFP_KERNEL);
+	if (!exported->active_attached) {
+		ret = -ENOMEM;
+		goto fail_map_active_attached;
+	}
+
+	exported->va_kmapped = kmalloc(sizeof(struct kmap_vaddr_list),
+				       GFP_KERNEL);
+	if (!exported->va_kmapped) {
+		ret = -ENOMEM;
+		goto fail_map_va_kmapped;
+	}
+
+	exported->va_vmapped = kmalloc(sizeof(struct vmap_vaddr_list),
+				       GFP_KERNEL);
+	if (!exported->va_vmapped) {
+		ret = -ENOMEM;
+		goto fail_map_va_vmapped;
+	}
+
+	exported->active_sgts->sgt = sgt;
+	exported->active_attached->attach = attachment;
+	exported->va_kmapped->vaddr = NULL;
+	exported->va_vmapped->vaddr = NULL;
+
+	/* initialize list of sgt, attachment and vaddr for dmabuf sync
+	 * via shadow dma-buf
+	 */
+	INIT_LIST_HEAD(&exported->active_sgts->list);
+	INIT_LIST_HEAD(&exported->active_attached->list);
+	INIT_LIST_HEAD(&exported->va_kmapped->list);
+	INIT_LIST_HEAD(&exported->va_vmapped->list);
+
+	/* copy private data to sgt_info */
+	ret = copy_from_user(exported->priv, export_remote_attr->priv,
+			     exported->sz_priv);
+
+	if (ret) {
+		dev_err(hy_drv_priv->dev,
+			"failed to load private data\n");
+		ret = -EINVAL;
+		goto fail_export;
+	}
+
+	pg_info = hyper_dmabuf_ext_pgs(sgt);
+	if (!pg_info) {
+		dev_err(hy_drv_priv->dev,
+			"failed to construct pg_info\n");
+		ret = -ENOMEM;
+		goto fail_export;
+	}
+
+	exported->nents = pg_info->nents;
+
+	/* now register it to export list */
+	hyper_dmabuf_register_exported(exported);
+
+	export_remote_attr->hid = exported->hid;
+
+	ret = send_export_msg(exported, pg_info);
+
+	if (ret < 0) {
+		dev_err(hy_drv_priv->dev,
+			"failed to send out the export request\n");
+		goto fail_send_request;
+	}
+
+	/* free pg_info */
+	kfree(pg_info->pgs);
+	kfree(pg_info);
+
+	exported->filp = filp;
+
+	return ret;
+
+/* Clean-up if error occurs */
+
+fail_send_request:
+	hyper_dmabuf_remove_exported(exported->hid);
+
+	/* free pg_info */
+	kfree(pg_info->pgs);
+	kfree(pg_info);
+
+fail_export:
+	kfree(exported->va_vmapped);
+
+fail_map_va_vmapped:
+	kfree(exported->va_kmapped);
+
+fail_map_va_kmapped:
+	kfree(exported->active_attached);
+
+fail_map_active_attached:
+	kfree(exported->active_sgts);
+
+fail_map_active_sgts:
+	kfree(exported->priv);
+
+fail_priv_creation:
+	kfree(exported);
+
+fail_sgt_info_creation:
+	dma_buf_unmap_attachment(attachment, sgt,
+				 DMA_BIDIRECTIONAL);
+
+fail_map_attachment:
+	dma_buf_detach(dma_buf, attachment);
+
+fail_attach:
+	dma_buf_put(dma_buf);
+
+	return ret;
+}
+
+static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
+{
+	struct ioctl_hyper_dmabuf_export_fd *export_fd_attr =
+			(struct ioctl_hyper_dmabuf_export_fd *)data;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
+	struct imported_sgt_info *imported;
+	struct hyper_dmabuf_req *req;
+	struct page **data_pgs;
+	int op[4];
+	int i;
+	int ret = 0;
+
+	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
+
+	/* look for dmabuf for the id */
+	imported = hyper_dmabuf_find_imported(export_fd_attr->hid);
+
+	/* can't find sgt from the table */
+	if (!imported) {
+		dev_err(hy_drv_priv->dev, "can't find the entry\n");
+		return -ENOENT;
+	}
+
+	mutex_lock(&hy_drv_priv->lock);
+
+	imported->importers++;
+
+	/* send notification for export_fd to exporter */
+	op[0] = imported->hid.id;
+
+	for (i = 0; i < 3; i++)
+		op[i+1] = imported->hid.rng_key[i];
+
+	dev_dbg(hy_drv_priv->dev, "Export FD of buffer {id:%d key:%d %d %d}\n",
+		imported->hid.id, imported->hid.rng_key[0],
+		imported->hid.rng_key[1], imported->hid.rng_key[2]);
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+
+	if (!req) {
+		mutex_unlock(&hy_drv_priv->lock);
+		return -ENOMEM;
+	}
+
+	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD, &op[0]);
+
+	ret = bknd_ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req, true);
+
+	if (ret < 0) {
+		/* in case of timeout, the other end will eventually
+		 * receive the request, so we need to undo it
+		 */
+		hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD_FAILED,
+					&op[0]);
+		bknd_ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req, false);
+		kfree(req);
+		dev_err(hy_drv_priv->dev,
+			"Failed to notify exporter\n");
+		imported->importers--;
+		mutex_unlock(&hy_drv_priv->lock);
+		return ret;
+	}
+
+	kfree(req);
+
+	if (ret == HYPER_DMABUF_REQ_ERROR) {
+		dev_err(hy_drv_priv->dev,
+			"Buffer invalid {id:%d key:%d %d %d}, cannot import\n",
+			imported->hid.id, imported->hid.rng_key[0],
+			imported->hid.rng_key[1], imported->hid.rng_key[2]);
+
+		imported->importers--;
+		mutex_unlock(&hy_drv_priv->lock);
+		return -EINVAL;
+	}
+
+	ret = 0;
+
+	dev_dbg(hy_drv_priv->dev,
+		"Found buffer gref %d off %d\n",
+		imported->ref_handle, imported->frst_ofst);
+
+	dev_dbg(hy_drv_priv->dev,
+		"last len %d nents %d domain %d\n",
+		imported->last_len, imported->nents,
+		HYPER_DMABUF_DOM_ID(imported->hid));
+
+	if (!imported->sgt) {
+		dev_dbg(hy_drv_priv->dev,
+			"buffer {id:%d key:%d %d %d} pages not mapped yet\n",
+			imported->hid.id, imported->hid.rng_key[0],
+			imported->hid.rng_key[1], imported->hid.rng_key[2]);
+
+		data_pgs = bknd_ops->map_shared_pages(imported->ref_handle,
+					HYPER_DMABUF_DOM_ID(imported->hid),
+					imported->nents,
+					&imported->refs_info);
+
+		if (!data_pgs) {
+			dev_err(hy_drv_priv->dev,
+				"can't map pages hid {id:%d key:%d %d %d}\n",
+				imported->hid.id, imported->hid.rng_key[0],
+				imported->hid.rng_key[1],
+				imported->hid.rng_key[2]);
+
+			imported->importers--;
+
+			req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+
+			if (!req) {
+				mutex_unlock(&hy_drv_priv->lock);
+				return -ENOMEM;
+			}
+
+			hyper_dmabuf_create_req(req,
+						HYPER_DMABUF_EXPORT_FD_FAILED,
+						&op[0]);
+			bknd_ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req,
+							  false);
+			kfree(req);
+			mutex_unlock(&hy_drv_priv->lock);
+			return -EINVAL;
+		}
+
+		imported->sgt = hyper_dmabuf_create_sgt(data_pgs,
+							imported->frst_ofst,
+							imported->last_len,
+							imported->nents);
+
+	}
+
+	export_fd_attr->fd = hyper_dmabuf_export_fd(imported,
+						    export_fd_attr->flags);
+
+	if (export_fd_attr->fd < 0) {
+		/* fail to get fd */
+		ret = export_fd_attr->fd;
+	}
+
+	mutex_unlock(&hy_drv_priv->lock);
+
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
+	return ret;
+}
+
+/* unexport dmabuf from the database and send a request to the remote
+ * domain asking it to unmap the buffer.
+ */
+static void delayed_unexport(struct work_struct *work)
+{
+	struct hyper_dmabuf_req *req;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
+	struct exported_sgt_info *exported =
+		container_of(work, struct exported_sgt_info, unexport.work);
+	int op[4];
+	int i, ret;
+
+	if (!exported)
+		return;
+
+	dev_dbg(hy_drv_priv->dev,
+		"Marking buffer {id:%d key:%d %d %d} as invalid\n",
+		exported->hid.id, exported->hid.rng_key[0],
+		exported->hid.rng_key[1], exported->hid.rng_key[2]);
+
+	/* no longer valid */
+	exported->valid = false;
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+
+	if (!req)
+		return;
+
+	op[0] = exported->hid.id;
+
+	for (i = 0; i < 3; i++)
+		op[i+1] = exported->hid.rng_key[i];
+
+	hyper_dmabuf_create_req(req, HYPER_DMABUF_NOTIFY_UNEXPORT, &op[0]);
+
+	/* Now send unexport request to remote domain, marking
+	 * that buffer should not be used anymore
+	 */
+	ret = bknd_ops->send_req(exported->rdomid, req, true);
+	if (ret < 0) {
+		dev_err(hy_drv_priv->dev,
+			"unexport message for buffer {id:%d key:%d %d %d} failed\n",
+			exported->hid.id, exported->hid.rng_key[0],
+			exported->hid.rng_key[1], exported->hid.rng_key[2]);
+	}
+
+	kfree(req);
+	exported->unexport_sched = false;
+
+	/* Clean up immediately if the buffer has never been exported
+	 * by the importer (so no SGT was constructed on the importer's
+	 * side). Otherwise it is cleaned up later in remote sync, when
+	 * the final release op is called (the importer does this only
+	 * when there is no consumer of locally exported FDs).
+	 */
+	if (exported->active == 0) {
+		dev_dbg(hy_drv_priv->dev,
+			"cleaning up buffer {id:%d key:%d %d %d} completely\n",
+			exported->hid.id, exported->hid.rng_key[0],
+			exported->hid.rng_key[1], exported->hid.rng_key[2]);
+
+		hyper_dmabuf_cleanup_sgt_info(exported, false);
+		hyper_dmabuf_remove_exported(exported->hid);
+
+		/* register hyper_dmabuf_id to the list for reuse */
+		hyper_dmabuf_store_hid(exported->hid);
+
+		kfree(exported->priv);
+
+		kfree(exported);
+	}
+}
+
+/* Schedule unexport of dmabuf.
+ */
+int hyper_dmabuf_unexport_ioctl(struct file *filp, void *data)
+{
+	struct ioctl_hyper_dmabuf_unexport *unexport_attr =
+			(struct ioctl_hyper_dmabuf_unexport *)data;
+	struct exported_sgt_info *exported;
+
+	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
+
+	/* find dmabuf in export list */
+	exported = hyper_dmabuf_find_exported(unexport_attr->hid);
+
+	dev_dbg(hy_drv_priv->dev,
+		"scheduling unexport of buffer {id:%d key:%d %d %d}\n",
+		unexport_attr->hid.id, unexport_attr->hid.rng_key[0],
+		unexport_attr->hid.rng_key[1], unexport_attr->hid.rng_key[2]);
+
+	/* failed to find corresponding entry in export list */
+	if (!exported) {
+		unexport_attr->status = -ENOENT;
+		return -ENOENT;
+	}
+
+	if (exported->unexport_sched)
+		return 0;
+
+	exported->unexport_sched = true;
+	INIT_DELAYED_WORK(&exported->unexport, delayed_unexport);
+	schedule_delayed_work(&exported->unexport,
+			      msecs_to_jiffies(unexport_attr->delay_ms));
+
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
+	return 0;
+}
+
+static int hyper_dmabuf_query_ioctl(struct file *filp, void *data)
+{
+	struct ioctl_hyper_dmabuf_query *query_attr =
+			(struct ioctl_hyper_dmabuf_query *)data;
+	struct exported_sgt_info *exported = NULL;
+	struct imported_sgt_info *imported = NULL;
+	int ret = 0;
+
+	if (HYPER_DMABUF_DOM_ID(query_attr->hid) == hy_drv_priv->domid) {
+		/* query for exported dmabuf */
+		exported = hyper_dmabuf_find_exported(query_attr->hid);
+		if (exported) {
+			ret = hyper_dmabuf_query_exported(exported,
+							  query_attr->item,
+							  &query_attr->info);
+		} else {
+			dev_err(hy_drv_priv->dev,
+				"hid {id:%d key:%d %d %d} not in exp list\n",
+				query_attr->hid.id,
+				query_attr->hid.rng_key[0],
+				query_attr->hid.rng_key[1],
+				query_attr->hid.rng_key[2]);
+			return -ENOENT;
+		}
+	} else {
+		/* query for imported dmabuf */
+		imported = hyper_dmabuf_find_imported(query_attr->hid);
+		if (imported) {
+			ret = hyper_dmabuf_query_imported(imported,
+							  query_attr->item,
+							  &query_attr->info);
+		} else {
+			dev_err(hy_drv_priv->dev,
+				"hid {id:%d key:%d %d %d} not in imp list\n",
+				query_attr->hid.id,
+				query_attr->hid.rng_key[0],
+				query_attr->hid.rng_key[1],
+				query_attr->hid.rng_key[2]);
+			return -ENOENT;
+		}
+	}
+
+	return ret;
+}
+
+const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_TX_CH_SETUP,
+			       hyper_dmabuf_tx_ch_setup_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_RX_CH_SETUP,
+			       hyper_dmabuf_rx_ch_setup_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_REMOTE,
+			       hyper_dmabuf_export_remote_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_FD,
+			       hyper_dmabuf_export_fd_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_UNEXPORT,
+			       hyper_dmabuf_unexport_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_QUERY,
+			       hyper_dmabuf_query_ioctl, 0),
+};
+
+long hyper_dmabuf_ioctl(struct file *filp,
+			unsigned int cmd, unsigned long param)
+{
+	const struct hyper_dmabuf_ioctl_desc *ioctl = NULL;
+	unsigned int nr = _IOC_NR(cmd);
+	int ret;
+	hyper_dmabuf_ioctl_t func;
+	char *kdata;
+
+	if (nr >= ARRAY_SIZE(hyper_dmabuf_ioctls)) {
+		dev_err(hy_drv_priv->dev, "invalid ioctl\n");
+		return -EINVAL;
+	}
+
+	ioctl = &hyper_dmabuf_ioctls[nr];
+
+	func = ioctl->func;
+
+	if (unlikely(!func)) {
+		dev_err(hy_drv_priv->dev, "no function\n");
+		return -EINVAL;
+	}
+
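+	/* bounce the user argument struct through a kernel buffer:
+	 * copy it in, run the handler, then copy it back so output
+	 * fields (e.g. a returned hid or fd) reach user space
+	 */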
+	kdata = kmalloc(_IOC_SIZE(cmd), GFP_KERNEL);
+	if (!kdata)
+		return -ENOMEM;
+
+	if (copy_from_user(kdata, (void __user *)param,
+			   _IOC_SIZE(cmd)) != 0) {
+		dev_err(hy_drv_priv->dev,
+			"failed to copy from user arguments\n");
+		ret = -EFAULT;
+		goto ioctl_error;
+	}
+
+	ret = func(filp, kdata);
+
+	if (copy_to_user((void __user *)param, kdata,
+			 _IOC_SIZE(cmd)) != 0) {
+		dev_err(hy_drv_priv->dev,
+			"failed to copy to user arguments\n");
+		ret = -EFAULT;
+		goto ioctl_error;
+	}
+
+ioctl_error:
+	kfree(kdata);
+
+	return ret;
+}
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.h
new file mode 100644
index 0000000..5991a87
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.h
@@ -0,0 +1,50 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_IOCTL_H__
+#define __HYPER_DMABUF_IOCTL_H__
+
+typedef int (*hyper_dmabuf_ioctl_t)(struct file *filp, void *data);
+
+struct hyper_dmabuf_ioctl_desc {
+	unsigned int cmd;
+	int flags;
+	hyper_dmabuf_ioctl_t func;
+	const char *name;
+};
+
+#define HYPER_DMABUF_IOCTL_DEF(ioctl, _func, _flags)	\
+	[_IOC_NR(ioctl)] = {				\
+			.cmd = ioctl,			\
+			.func = _func,			\
+			.flags = _flags,		\
+			.name = #ioctl			\
+	}
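+
+/* usage (from hyper_dmabuf_ioctl.c):
+ *   HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_QUERY,
+ *                          hyper_dmabuf_query_ioctl, 0)
+ * places the descriptor at index _IOC_NR(IOCTL_HYPER_DMABUF_QUERY)
+ * of the hyper_dmabuf_ioctls[] table
+ */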
+
+long hyper_dmabuf_ioctl(struct file *filp,
+			unsigned int cmd, unsigned long param);
+
+int hyper_dmabuf_unexport_ioctl(struct file *filp, void *data);
+
+#endif /* __HYPER_DMABUF_IOCTL_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c
new file mode 100644
index 0000000..bba6d1d
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c
@@ -0,0 +1,293 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/cdev.h>
+#include <linux/hashtable.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_id.h"
+#include "hyper_dmabuf_event.h"
+
+DECLARE_HASHTABLE(hyper_dmabuf_hash_imported, MAX_ENTRY_IMPORTED);
+DECLARE_HASHTABLE(hyper_dmabuf_hash_exported, MAX_ENTRY_EXPORTED);
+
+#ifdef CONFIG_HYPER_DMABUF_SYSFS
+static ssize_t hyper_dmabuf_imported_show(struct device *drv,
+					  struct device_attribute *attr,
+					  char *buf)
+{
+	struct list_entry_imported *info_entry;
+	int bkt;
+	ssize_t count = 0;
+	size_t total = 0;
+
+	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node) {
+		hyper_dmabuf_id_t hid = info_entry->imported->hid;
+		int nents = info_entry->imported->nents;
+		bool valid = info_entry->imported->valid;
+		int num_importers = info_entry->imported->importers;
+
+		total += nents;
+		count += scnprintf(buf + count, PAGE_SIZE - count,
+				"hid:{%d %d %d %d}, nent:%d, v:%c, numi:%d\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1],
+				hid.rng_key[2], nents, (valid ? 't' : 'f'),
+				num_importers);
+	}
+	count += scnprintf(buf + count, PAGE_SIZE - count,
+			   "total nents: %zu\n", total);
+
+	return count;
+}
+
+static ssize_t hyper_dmabuf_exported_show(struct device *drv,
+					  struct device_attribute *attr,
+					  char *buf)
+{
+	struct list_entry_exported *info_entry;
+	int bkt;
+	ssize_t count = 0;
+	size_t total = 0;
+
+	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node) {
+		hyper_dmabuf_id_t hid = info_entry->exported->hid;
+		int nents = info_entry->exported->nents;
+		bool valid = info_entry->exported->valid;
+		int importer_exported = info_entry->exported->active;
+
+		total += nents;
+		count += scnprintf(buf + count, PAGE_SIZE - count,
+				   "hid:{%d %d %d %d}, nent:%d, v:%c, ie:%d\n",
+				   hid.id, hid.rng_key[0], hid.rng_key[1],
+				   hid.rng_key[2], nents, (valid ? 't' : 'f'),
+				   importer_exported);
+	}
+	count += scnprintf(buf + count, PAGE_SIZE - count,
+			   "total nents: %zu\n", total);
+
+	return count;
+}
+
+static DEVICE_ATTR(imported, 0400, hyper_dmabuf_imported_show, NULL);
+static DEVICE_ATTR(exported, 0400, hyper_dmabuf_exported_show, NULL);
+
+int hyper_dmabuf_register_sysfs(struct device *dev)
+{
+	int err;
+
+	err = device_create_file(dev, &dev_attr_imported);
+	if (err < 0)
+		goto err1;
+	err = device_create_file(dev, &dev_attr_exported);
+	if (err < 0)
+		goto err2;
+
+	return 0;
+err2:
+	device_remove_file(dev, &dev_attr_imported);
+err1:
+	return err;
+}
+
+int hyper_dmabuf_unregister_sysfs(struct device *dev)
+{
+	device_remove_file(dev, &dev_attr_imported);
+	device_remove_file(dev, &dev_attr_exported);
+	return 0;
+}
+
+#endif
+
+int hyper_dmabuf_table_init(void)
+{
+	hash_init(hyper_dmabuf_hash_imported);
+	hash_init(hyper_dmabuf_hash_exported);
+	return 0;
+}
+
+int hyper_dmabuf_table_destroy(void)
+{
+	/* TODO: cleanup hyper_dmabuf_hash_imported
+	 * and hyper_dmabuf_hash_exported
+	 */
+	return 0;
+}
+
+int hyper_dmabuf_register_exported(struct exported_sgt_info *exported)
+{
+	struct list_entry_exported *info_entry;
+
+	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
+
+	if (!info_entry)
+		return -ENOMEM;
+
+	info_entry->exported = exported;
+
+	hash_add(hyper_dmabuf_hash_exported, &info_entry->node,
+		 info_entry->exported->hid.id);
+
+	return 0;
+}
+
+int hyper_dmabuf_register_imported(struct imported_sgt_info *imported)
+{
+	struct list_entry_imported *info_entry;
+
+	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
+
+	if (!info_entry)
+		return -ENOMEM;
+
+	info_entry->imported = imported;
+
+	hash_add(hyper_dmabuf_hash_imported, &info_entry->node,
+		 info_entry->imported->hid.id);
+
+	return 0;
+}
+
+struct exported_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid)
+{
+	struct list_entry_exported *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
+		/* checking hid.id first */
+		if (info_entry->exported->hid.id == hid.id) {
+			/* then key is compared */
+			if (hyper_dmabuf_hid_keycomp(info_entry->exported->hid,
+						    hid))
+				return info_entry->exported;
+
+			/* if the keys don't match, the given hid is
+			 * invalid, so return NULL
+			 */
+			break;
+		}
+
+	return NULL;
+}
+
+/* search for a pre-exported sgt and return its id if it exists */
+hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf,
+						 int domid)
+{
+	struct list_entry_exported *info_entry;
+	hyper_dmabuf_id_t hid = {-1, {0, 0, 0} };
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
+		if (info_entry->exported->dma_buf == dmabuf &&
+		    info_entry->exported->rdomid == domid)
+			return info_entry->exported->hid;
+
+	return hid;
+}
+
+struct imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_id_t hid)
+{
+	struct list_entry_imported *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
+		/* checking hid.id first */
+		if (info_entry->imported->hid.id == hid.id) {
+			/* then key is compared */
+			if (hyper_dmabuf_hid_keycomp(info_entry->imported->hid,
+						    hid))
+				return info_entry->imported;
+			/* if the keys don't match, the given hid is
+			 * invalid, so return NULL
+			 */
+			break;
+		}
+
+	return NULL;
+}
+
+int hyper_dmabuf_remove_exported(hyper_dmabuf_id_t hid)
+{
+	struct list_entry_exported *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
+		/* checking hid.id first */
+		if (info_entry->exported->hid.id == hid.id) {
+			/* then key is compared */
+			if (hyper_dmabuf_hid_keycomp(info_entry->exported->hid,
+						    hid)) {
+				hash_del(&info_entry->node);
+				kfree(info_entry);
+				return 0;
+			}
+
+			break;
+		}
+
+	return -ENOENT;
+}
+
+int hyper_dmabuf_remove_imported(hyper_dmabuf_id_t hid)
+{
+	struct list_entry_imported *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
+		/* checking hid.id first */
+		if (info_entry->imported->hid.id == hid.id) {
+			/* then key is compared */
+			if (hyper_dmabuf_hid_keycomp(info_entry->imported->hid,
+						    hid)) {
+				hash_del(&info_entry->node);
+				kfree(info_entry);
+				return 0;
+			}
+
+			break;
+		}
+
+	return -ENOENT;
+}
+
+void hyper_dmabuf_foreach_exported(
+	void (*func)(struct exported_sgt_info *, void *attr),
+	void *attr)
+{
+	struct list_entry_exported *info_entry;
+	struct hlist_node *tmp;
+	int bkt;
+
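+	/* _safe variant so func() may remove the entry it is given */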
+	hash_for_each_safe(hyper_dmabuf_hash_exported, bkt, tmp,
+			info_entry, node) {
+		func(info_entry->exported, attr);
+	}
+}
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.h
new file mode 100644
index 0000000..f7102f5
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.h
@@ -0,0 +1,71 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_LIST_H__
+#define __HYPER_DMABUF_LIST_H__
+
+#include "hyper_dmabuf_struct.h"
+
+/* number of bits to be used for exported dmabufs hash table */
+#define MAX_ENTRY_EXPORTED 7
+/* number of bits to be used for imported dmabufs hash table */
+#define MAX_ENTRY_IMPORTED 7
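+/* i.e. each table has 1 << 7 = 128 hash buckets */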
+
+struct list_entry_exported {
+	struct exported_sgt_info *exported;
+	struct hlist_node node;
+};
+
+struct list_entry_imported {
+	struct imported_sgt_info *imported;
+	struct hlist_node node;
+};
+
+int hyper_dmabuf_table_init(void);
+
+int hyper_dmabuf_table_destroy(void);
+
+int hyper_dmabuf_register_exported(struct exported_sgt_info *info);
+
+/* search for a pre-exported sgt and return its id if it exists */
+hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf,
+						 int domid);
+
+int hyper_dmabuf_register_imported(struct imported_sgt_info *info);
+
+struct exported_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid);
+
+struct imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_id_t hid);
+
+int hyper_dmabuf_remove_exported(hyper_dmabuf_id_t hid);
+
+int hyper_dmabuf_remove_imported(hyper_dmabuf_id_t hid);
+
+void hyper_dmabuf_foreach_exported(void (*func)(struct exported_sgt_info *,
+				   void *attr), void *attr);
+
+int hyper_dmabuf_register_sysfs(struct device *dev);
+int hyper_dmabuf_unregister_sysfs(struct device *dev);
+
+#endif /* __HYPER_DMABUF_LIST_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c
new file mode 100644
index 0000000..afc1fd6e
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -0,0 +1,414 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/workqueue.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_msg.h"
+#include "hyper_dmabuf_remote_sync.h"
+#include "hyper_dmabuf_event.h"
+#include "hyper_dmabuf_list.h"
+
+struct cmd_process {
+	struct work_struct work;
+	struct hyper_dmabuf_req *rq;
+	int domid;
+};
+
+void hyper_dmabuf_create_req(struct hyper_dmabuf_req *req,
+			     enum hyper_dmabuf_command cmd, int *op)
+{
+	int i;
+
+	req->stat = HYPER_DMABUF_REQ_NOT_RESPONDED;
+	req->cmd = cmd;
+
+	switch (cmd) {
+	/* as exporter, commands to importer */
+	case HYPER_DMABUF_EXPORT:
+		/* exporting pages for dmabuf */
+		/* command : HYPER_DMABUF_EXPORT,
+		 * op0~op3 : hyper_dmabuf_id
+		 * op4 : number of pages to be shared
+		 * op5 : offset of data in the first page
+		 * op6 : length of data in the last page
+		 * op7 : top-level reference number for shared pages
+		 * op8 : size of private data (from op9)
+		 * op9 ~ : Driver-specific private data
+		 *	   (e.g. graphic buffer's meta info)
+		 */
+
+		memcpy(&req->op[0], &op[0], 9 * sizeof(int) + op[8]);
+		break;
+
+	case HYPER_DMABUF_NOTIFY_UNEXPORT:
+		/* destroy sg_list for hyper_dmabuf_id on remote side */
+		/* command : HYPER_DMABUF_NOTIFY_UNEXPORT,
+		 * op0~op3 : hyper_dmabuf_id_t hid
+		 */
+
+		for (i = 0; i < 4; i++)
+			req->op[i] = op[i];
+		break;
+
+	case HYPER_DMABUF_EXPORT_FD:
+	case HYPER_DMABUF_EXPORT_FD_FAILED:
+		/* dmabuf fd is being created on imported side or importing
+		 * failed
+		 *
+		 * command : HYPER_DMABUF_EXPORT_FD or
+		 *	     HYPER_DMABUF_EXPORT_FD_FAILED,
+		 * op0~op3 : hyper_dmabuf_id
+		 */
+
+		for (i = 0; i < 4; i++)
+			req->op[i] = op[i];
+		break;
+
+	case HYPER_DMABUF_OPS_TO_REMOTE:
+		/* notifying dmabuf map/unmap to importer (probably not needed)
+		 * for dmabuf synchronization
+		 */
+		break;
+
+	case HYPER_DMABUF_OPS_TO_SOURCE:
+		/* notifying dmabuf map/unmap to exporter; map will make
+		 * the driver do shadow mapping or unmapping for
+		 * synchronization with the original exporter (e.g. i915)
+		 *
+		 * command : HYPER_DMABUF_OPS_TO_SOURCE,
+		 * op0~3 : hyper_dmabuf_id
+		 * op4 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
+		 */
+		for (i = 0; i < 5; i++)
+			req->op[i] = op[i];
+		break;
+
+	default:
+		/* no command found */
+		return;
+	}
+}
+
+static void cmd_process_work(struct work_struct *work)
+{
+	struct imported_sgt_info *imported;
+	struct cmd_process *proc = container_of(work,
+						struct cmd_process, work);
+	struct hyper_dmabuf_req *req;
+	int domid;
+	int i;
+
+	req = proc->rq;
+	domid = proc->domid;
+
+	switch (req->cmd) {
+	case HYPER_DMABUF_EXPORT:
+		/* exporting pages for dmabuf */
+		/* command : HYPER_DMABUF_EXPORT,
+		 * op0~op3 : hyper_dmabuf_id
+		 * op4 : number of pages to be shared
+		 * op5 : offset of data in the first page
+		 * op6 : length of data in the last page
+		 * op7 : top-level reference number for shared pages
+		 * op8 : size of private data (from op9)
+		 * op9 ~ : Driver-specific private data
+		 *         (e.g. graphic buffer's meta info)
+		 */
+
+		/* if nents == 0, this message is only for priv
+		 * synchronization of an existing imported_sgt_info,
+		 * so a new one is not created
+		 */
+		if (req->op[4] == 0) {
+			hyper_dmabuf_id_t exist = {req->op[0],
+						   {req->op[1], req->op[2],
+						   req->op[3] } };
+
+			imported = hyper_dmabuf_find_imported(exist);
+
+			if (!imported) {
+				dev_err(hy_drv_priv->dev,
+					"Can't find imported sgt_info\n");
+				break;
+			}
+
+			/* if the size of the new private data is
+			 * different, reallocate it
+			 */
+			if (imported->sz_priv != req->op[8]) {
+				kfree(imported->priv);
+				imported->sz_priv = req->op[8];
+				imported->priv = kcalloc(1, req->op[8],
+							 GFP_KERNEL);
+				if (!imported->priv) {
+					/* set it invalid */
+					imported->valid = 0;
+					break;
+				}
+			}
+
+			/* updating priv data */
+			memcpy(imported->priv, &req->op[9], req->op[8]);
+
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
+			/* generating import event */
+			hyper_dmabuf_import_event(imported->hid);
+#endif
+
+			break;
+		}
+
+		imported = kcalloc(1, sizeof(*imported), GFP_KERNEL);
+
+		if (!imported)
+			break;
+
+		imported->sz_priv = req->op[8];
+		imported->priv = kcalloc(1, req->op[8], GFP_KERNEL);
+
+		if (!imported->priv) {
+			kfree(imported);
+			break;
+		}
+
+		imported->hid.id = req->op[0];
+
+		for (i = 0; i < 3; i++)
+			imported->hid.rng_key[i] = req->op[i+1];
+
+		imported->nents = req->op[4];
+		imported->frst_ofst = req->op[5];
+		imported->last_len = req->op[6];
+		imported->ref_handle = req->op[7];
+
+		dev_dbg(hy_drv_priv->dev, "DMABUF was exported\n");
+		dev_dbg(hy_drv_priv->dev, "\thid{id:%d key:%d %d %d}\n",
+			req->op[0], req->op[1], req->op[2],
+			req->op[3]);
+		dev_dbg(hy_drv_priv->dev, "\tnents %d\n", req->op[4]);
+		dev_dbg(hy_drv_priv->dev, "\tfirst offset %d\n", req->op[5]);
+		dev_dbg(hy_drv_priv->dev, "\tlast len %d\n", req->op[6]);
+		dev_dbg(hy_drv_priv->dev, "\tgrefid %d\n", req->op[7]);
+
+		memcpy(imported->priv, &req->op[9], req->op[8]);
+
+		imported->valid = true;
+		hyper_dmabuf_register_imported(imported);
+
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
+		/* generating import event */
+		hyper_dmabuf_import_event(imported->hid);
+#endif
+
+		break;
+
+	case HYPER_DMABUF_OPS_TO_REMOTE:
+		/* notifying dmabuf map/unmap to importer
+		 * (probably not needed) for dmabuf synchronization
+		 */
+		break;
+
+	default:
+		/* shouldn't get here */
+		break;
+	}
+
+	kfree(req);
+	kfree(proc);
+}
+
+int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
+{
+	struct cmd_process *proc;
+	struct hyper_dmabuf_req *temp_req;
+	struct imported_sgt_info *imported;
+	struct exported_sgt_info *exported;
+	hyper_dmabuf_id_t hid;
+	int ret;
+
+	if (!req) {
+		dev_err(hy_drv_priv->dev, "request is NULL\n");
+		return -EINVAL;
+	}
+
+	hid.id = req->op[0];
+	hid.rng_key[0] = req->op[1];
+	hid.rng_key[1] = req->op[2];
+	hid.rng_key[2] = req->op[3];
+
+	if ((req->cmd < HYPER_DMABUF_EXPORT) ||
+		(req->cmd > HYPER_DMABUF_OPS_TO_SOURCE)) {
+		dev_err(hy_drv_priv->dev, "invalid command\n");
+		return -EINVAL;
+	}
+
+	req->stat = HYPER_DMABUF_REQ_PROCESSED;
+
+	/* HYPER_DMABUF_NOTIFY_UNEXPORT requires an immediate
+	 * follow-up, so it can't be processed in the workqueue
+	 */
+	if (req->cmd == HYPER_DMABUF_NOTIFY_UNEXPORT) {
+		/* destroy sg_list for hyper_dmabuf_id on remote side */
+		/* command : HYPER_DMABUF_NOTIFY_UNEXPORT,
+		 * op0~3 : hyper_dmabuf_id
+		 */
+		dev_dbg(hy_drv_priv->dev,
+			"processing HYPER_DMABUF_NOTIFY_UNEXPORT\n");
+
+		imported = hyper_dmabuf_find_imported(hid);
+
+		if (imported) {
+			/* if anything is still using dma_buf */
+			if (imported->importers) {
+				/* Buffer is still in use; just mark that
+				 * it should not be allowed to export its fd
+				 * anymore.
+				 */
+				imported->valid = false;
+			} else {
+				/* No one is using buffer, remove it from
+				 * imported list
+				 */
+				hyper_dmabuf_remove_imported(hid);
+				kfree(imported);
+			}
+		} else {
+			req->stat = HYPER_DMABUF_REQ_ERROR;
+		}
+
+		return req->cmd;
+	}
+
+	/* dma buf remote synchronization */
+	if (req->cmd == HYPER_DMABUF_OPS_TO_SOURCE) {
+		/* notifying dmabuf map/unmap to exporter; map will
+		 * make the driver do shadow mapping
+		 * or unmapping for synchronization with the original
+		 * exporter (e.g. i915)
+		 *
+		 * command : HYPER_DMABUF_OPS_TO_SOURCE,
+		 * op0~3 : hyper_dmabuf_id
+		 * op4 : enum hyper_dmabuf_ops {....}
+		 */
+		dev_dbg(hy_drv_priv->dev,
+			"%s: HYPER_DMABUF_OPS_TO_SOURCE\n", __func__);
+
+		ret = hyper_dmabuf_remote_sync(hid, req->op[4]);
+
+		if (ret)
+			req->stat = HYPER_DMABUF_REQ_ERROR;
+		else
+			req->stat = HYPER_DMABUF_REQ_PROCESSED;
+
+		return req->cmd;
+	}
+
+	/* synchronous dma_buf_fd export */
+	if (req->cmd == HYPER_DMABUF_EXPORT_FD) {
+		/* find a corresponding SGT for the id */
+		dev_dbg(hy_drv_priv->dev,
+			"HYPER_DMABUF_EXPORT_FD for {id:%d key:%d %d %d}\n",
+			hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
+
+		exported = hyper_dmabuf_find_exported(hid);
+
+		if (!exported) {
+			dev_err(hy_drv_priv->dev,
+				"buffer {id:%d key:%d %d %d} not found\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1],
+				hid.rng_key[2]);
+
+			req->stat = HYPER_DMABUF_REQ_ERROR;
+		} else if (!exported->valid) {
+			dev_dbg(hy_drv_priv->dev,
+				"Buffer no longer valid {id:%d key:%d %d %d}\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1],
+				hid.rng_key[2]);
+
+			req->stat = HYPER_DMABUF_REQ_ERROR;
+		} else {
+			dev_dbg(hy_drv_priv->dev,
+				"Buffer still valid {id:%d key:%d %d %d}\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1],
+				hid.rng_key[2]);
+
+			exported->active++;
+			req->stat = HYPER_DMABUF_REQ_PROCESSED;
+		}
+		return req->cmd;
+	}
+
+	if (req->cmd == HYPER_DMABUF_EXPORT_FD_FAILED) {
+		dev_dbg(hy_drv_priv->dev,
+			"HYPER_DMABUF_EXPORT_FD_FAILED for {id:%d key:%d %d %d}\n",
+			hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
+
+		exported = hyper_dmabuf_find_exported(hid);
+
+		if (!exported) {
+			dev_err(hy_drv_priv->dev,
+				"buffer {id:%d key:%d %d %d} not found\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1],
+				hid.rng_key[2]);
+
+			req->stat = HYPER_DMABUF_REQ_ERROR;
+		} else {
+			exported->active--;
+			req->stat = HYPER_DMABUF_REQ_PROCESSED;
+		}
+		return req->cmd;
+	}
+
+	dev_dbg(hy_drv_priv->dev,
+		"%s: putting request to workqueue\n", __func__);
+	temp_req = kmalloc(sizeof(*temp_req), GFP_KERNEL);
+
+	if (!temp_req)
+		return -ENOMEM;
+
+	memcpy(temp_req, req, sizeof(*temp_req));
+
+	proc = kcalloc(1, sizeof(struct cmd_process), GFP_KERNEL);
+
+	if (!proc) {
+		kfree(temp_req);
+		return -ENOMEM;
+	}
+
+	proc->rq = temp_req;
+	proc->domid = domid;
+
+	INIT_WORK(&(proc->work), cmd_process_work);
+
+	queue_work(hy_drv_priv->work_queue, &(proc->work));
+
+	return req->cmd;
+}
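(For orientation: the EXPORT case above consumes its operands in the
order shown by the dev_dbg() calls -- op[0]~op[3] carry the
hyper_dmabuf_id, op[4] nents, op[5] the first-page offset, op[6] the
last-page length, op[7] the grant reference, and op[8]/op[9] the size
and start of the private data blob.)

A backend's receive path is expected to feed each incoming message into
hyper_dmabuf_msg_parse() and relay req->stat back to the sender. A
minimal sketch, assuming a hypothetical send_response() reply hook (the
real Xen backend replies over its shared ring instead):

static void handle_incoming_req(int domid, struct hyper_dmabuf_req *req)
{
	int ret;

	/* synchronous commands update req->stat in place;
	 * asynchronous ones are queued to the workqueue
	 */
	ret = hyper_dmabuf_msg_parse(domid, req);
	if (ret < 0)
		req->stat = HYPER_DMABUF_REQ_ERROR;

	send_response(domid, req);	/* hypothetical reply hook */
}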
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h
new file mode 100644
index 0000000..9c8a76b
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -0,0 +1,87 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_MSG_H__
+#define __HYPER_DMABUF_MSG_H__
+
+#define MAX_NUMBER_OF_OPERANDS 64
+
+struct hyper_dmabuf_req {
+	unsigned int req_id;
+	unsigned int stat;
+	unsigned int cmd;
+	unsigned int op[MAX_NUMBER_OF_OPERANDS];
+};
+
+struct hyper_dmabuf_resp {
+	unsigned int resp_id;
+	unsigned int stat;
+	unsigned int cmd;
+	unsigned int op[MAX_NUMBER_OF_OPERANDS];
+};
+
+enum hyper_dmabuf_command {
+	HYPER_DMABUF_EXPORT = 0x10,
+	HYPER_DMABUF_EXPORT_FD,
+	HYPER_DMABUF_EXPORT_FD_FAILED,
+	HYPER_DMABUF_NOTIFY_UNEXPORT,
+	HYPER_DMABUF_OPS_TO_REMOTE,
+	HYPER_DMABUF_OPS_TO_SOURCE,
+};
+
+enum hyper_dmabuf_ops {
+	HYPER_DMABUF_OPS_ATTACH = 0x1000,
+	HYPER_DMABUF_OPS_DETACH,
+	HYPER_DMABUF_OPS_MAP,
+	HYPER_DMABUF_OPS_UNMAP,
+	HYPER_DMABUF_OPS_RELEASE,
+	HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS,
+	HYPER_DMABUF_OPS_END_CPU_ACCESS,
+	HYPER_DMABUF_OPS_KMAP_ATOMIC,
+	HYPER_DMABUF_OPS_KUNMAP_ATOMIC,
+	HYPER_DMABUF_OPS_KMAP,
+	HYPER_DMABUF_OPS_KUNMAP,
+	HYPER_DMABUF_OPS_MMAP,
+	HYPER_DMABUF_OPS_VMAP,
+	HYPER_DMABUF_OPS_VUNMAP,
+};
+
+enum hyper_dmabuf_req_feedback {
+	HYPER_DMABUF_REQ_PROCESSED = 0x100,
+	HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP,
+	HYPER_DMABUF_REQ_ERROR,
+	HYPER_DMABUF_REQ_NOT_RESPONDED
+};
+
+/* create a request packet with given command and operands */
+void hyper_dmabuf_create_req(struct hyper_dmabuf_req *req,
+				 enum hyper_dmabuf_command command,
+				 int *operands);
+
+/* parse incoming request packet (or response) and take
+ * appropriate actions for those
+ */
+int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req);
+
+#endif /* __HYPER_DMABUF_MSG_H__ */
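As a usage sketch of the packet format above: un-exporting a buffer
only needs the id words in op[0]~op[3], matching what
hyper_dmabuf_msg_parse() expects for HYPER_DMABUF_NOTIFY_UNEXPORT. The
send_req hook is the backend callback used elsewhere in this series;
the helper itself is illustrative:

static int notify_unexport(hyper_dmabuf_id_t hid, int rdomid)
{
	struct hyper_dmabuf_req *req;
	int op[4] = { hid.id, hid.rng_key[0],
		      hid.rng_key[1], hid.rng_key[2] };
	int ret;

	req = kzalloc(sizeof(*req), GFP_KERNEL);
	if (!req)
		return -ENOMEM;

	hyper_dmabuf_create_req(req, HYPER_DMABUF_NOTIFY_UNEXPORT, op);

	/* last argument selects whether to block for the response */
	ret = hy_drv_priv->bknd_ops->send_req(rdomid, req, 1);

	kfree(req);
	return ret;
}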
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c
new file mode 100644
index 0000000..e85f619
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c
@@ -0,0 +1,413 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/dma-buf.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_ops.h"
+#include "hyper_dmabuf_sgl_proc.h"
+#include "hyper_dmabuf_id.h"
+#include "hyper_dmabuf_msg.h"
+#include "hyper_dmabuf_list.h"
+
+#define WAIT_AFTER_SYNC_REQ 0
+#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
+
+static int dmabuf_refcount(struct dma_buf *dma_buf)
+{
+	if ((dma_buf != NULL) && (dma_buf->file != NULL))
+		return file_count(dma_buf->file);
+
+	return -EINVAL;
+}
+
+static int sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
+{
+	struct hyper_dmabuf_req *req;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
+	int op[5];
+	int i;
+	int ret;
+
+	op[0] = hid.id;
+
+	for (i = 0; i < 3; i++)
+		op[i+1] = hid.rng_key[i];
+
+	op[4] = dmabuf_ops;
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+
+	if (!req)
+		return -ENOMEM;
+
+	hyper_dmabuf_create_req(req, HYPER_DMABUF_OPS_TO_SOURCE, &op[0]);
+
+	/* send request and wait for a response */
+	ret = bknd_ops->send_req(HYPER_DMABUF_DOM_ID(hid), req,
+				 WAIT_AFTER_SYNC_REQ);
+
+	if (ret < 0) {
+		dev_dbg(hy_drv_priv->dev,
+			"dmabuf sync request failed:%d\n", req->op[4]);
+	}
+
+	kfree(req);
+
+	return ret;
+}
+
+static int hyper_dmabuf_ops_attach(struct dma_buf *dmabuf,
+				   struct device *dev,
+				   struct dma_buf_attachment *attach)
+{
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!attach->dmabuf->priv)
+		return -EINVAL;
+
+	imported = (struct imported_sgt_info *)attach->dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_ATTACH);
+
+	return ret;
+}
+
+static void hyper_dmabuf_ops_detach(struct dma_buf *dmabuf,
+				    struct dma_buf_attachment *attach)
+{
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!attach->dmabuf->priv)
+		return;
+
+	imported = (struct imported_sgt_info *)attach->dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_DETACH);
+}
+
+static struct sg_table *hyper_dmabuf_ops_map(
+				struct dma_buf_attachment *attachment,
+				enum dma_data_direction dir)
+{
+	struct sg_table *st;
+	struct imported_sgt_info *imported;
+	struct pages_info *pg_info;
+	int ret;
+
+	if (!attachment->dmabuf->priv)
+		return NULL;
+
+	imported = (struct imported_sgt_info *)attachment->dmabuf->priv;
+
+	/* extract pages from sgt */
+	pg_info = hyper_dmabuf_ext_pgs(imported->sgt);
+
+	if (!pg_info)
+		return NULL;
+
+	/* create a new sg_table with extracted pages */
+	st = hyper_dmabuf_create_sgt(pg_info->pgs, pg_info->frst_ofst,
+				     pg_info->last_len, pg_info->nents);
+	if (!st)
+		goto err_free_sg;
+
+	if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir))
+		goto err_free_sg;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_MAP);
+
+	kfree(pg_info->pgs);
+	kfree(pg_info);
+
+	return st;
+
+err_free_sg:
+	if (st) {
+		sg_free_table(st);
+		kfree(st);
+	}
+
+	kfree(pg_info->pgs);
+	kfree(pg_info);
+
+	return NULL;
+}
+
+static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
+				   struct sg_table *sg,
+				   enum dma_data_direction dir)
+{
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!attachment->dmabuf->priv)
+		return;
+
+	imported = (struct imported_sgt_info *)attachment->dmabuf->priv;
+
+	dma_unmap_sg(attachment->dev, sg->sgl, sg->nents, dir);
+
+	sg_free_table(sg);
+	kfree(sg);
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_UNMAP);
+}
+
+static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
+{
+	struct imported_sgt_info *imported;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
+	int ret;
+	int finish;
+
+	if (!dma_buf->priv)
+		return;
+
+	imported = (struct imported_sgt_info *)dma_buf->priv;
+
+	if (!dmabuf_refcount(imported->dma_buf))
+		imported->dma_buf = NULL;
+
+	imported->importers--;
+
+	if (imported->importers == 0) {
+		bknd_ops->unmap_shared_pages(&imported->refs_info,
+					     imported->nents);
+
+		if (imported->sgt) {
+			sg_free_table(imported->sgt);
+			kfree(imported->sgt);
+			imported->sgt = NULL;
+		}
+	}
+
+	finish = !imported->valid && !imported->importers;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_RELEASE);
+
+	/*
+	 * Check if buffer is still valid and if not remove it
+	 * from imported list. That has to be done after sending
+	 * sync request
+	 */
+	if (finish) {
+		hyper_dmabuf_remove_imported(imported->hid);
+		kfree(imported);
+	}
+}
+
+static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf,
+					     enum dma_data_direction dir)
+{
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!dmabuf->priv)
+		return -EINVAL;
+
+	imported = (struct imported_sgt_info *)dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
+
+	return ret;
+}
+
+static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf,
+					   enum dma_data_direction dir)
+{
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!dmabuf->priv)
+		return -EINVAL;
+
+	imported = (struct imported_sgt_info *)dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_END_CPU_ACCESS);
+
+	return ret;
+}
+
+static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf,
+					  unsigned long pgnum)
+{
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!dmabuf->priv)
+		return NULL;
+
+	imported = (struct imported_sgt_info *)dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KMAP_ATOMIC);
+
+	/* TODO: NULL for now. Need to return the addr of mapped region */
+	return NULL;
+}
+
+static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf,
+					   unsigned long pgnum, void *vaddr)
+{
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!dmabuf->priv)
+		return;
+
+	imported = (struct imported_sgt_info *)dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
+}
+
+static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
+{
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!dmabuf->priv)
+		return NULL;
+
+	imported = (struct imported_sgt_info *)dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KMAP);
+
+	/* for now NULL.. need to return the address of mapped region */
+	return NULL;
+}
+
+static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum,
+				    void *vaddr)
+{
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!dmabuf->priv)
+		return;
+
+	imported = (struct imported_sgt_info *)dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KUNMAP);
+}
+
+static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf,
+				 struct vm_area_struct *vma)
+{
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!dmabuf->priv)
+		return -EINVAL;
+
+	imported = (struct imported_sgt_info *)dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_MMAP);
+
+	return ret;
+}
+
+static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
+{
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!dmabuf->priv)
+		return NULL;
+
+	imported = (struct imported_sgt_info *)dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_VMAP);
+
+	return NULL;
+}
+
+static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
+{
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!dmabuf->priv)
+		return;
+
+	imported = (struct imported_sgt_info *)dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_VUNMAP);
+}
+
+static const struct dma_buf_ops hyper_dmabuf_ops = {
+	.attach = hyper_dmabuf_ops_attach,
+	.detach = hyper_dmabuf_ops_detach,
+	.map_dma_buf = hyper_dmabuf_ops_map,
+	.unmap_dma_buf = hyper_dmabuf_ops_unmap,
+	.release = hyper_dmabuf_ops_release,
+	.begin_cpu_access = hyper_dmabuf_ops_begin_cpu_access,
+	.end_cpu_access = hyper_dmabuf_ops_end_cpu_access,
+	.map_atomic = hyper_dmabuf_ops_kmap_atomic,
+	.unmap_atomic = hyper_dmabuf_ops_kunmap_atomic,
+	.map = hyper_dmabuf_ops_kmap,
+	.unmap = hyper_dmabuf_ops_kunmap,
+	.mmap = hyper_dmabuf_ops_mmap,
+	.vmap = hyper_dmabuf_ops_vmap,
+	.vunmap = hyper_dmabuf_ops_vunmap,
+};
+
+/* exporting dmabuf as fd */
+int hyper_dmabuf_export_fd(struct imported_sgt_info *imported, int flags)
+{
+	int fd = -1;
+
+	/* call hyper_dmabuf_export_dma_buf() to create the dma_buf,
+	 * then bind an fd to it
+	 */
+	hyper_dmabuf_export_dma_buf(imported);
+
+	if (imported->dma_buf)
+		fd = dma_buf_fd(imported->dma_buf, flags);
+
+	return fd;
+}
+
+void hyper_dmabuf_export_dma_buf(struct imported_sgt_info *imported)
+{
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+
+	exp_info.ops = &hyper_dmabuf_ops;
+
+	/* multiple of PAGE_SIZE, not considering offset */
+	exp_info.size = imported->sgt->nents * PAGE_SIZE;
+	/* no file flags (e.g. O_RDWR) set on the exported dma_buf for now */
+	exp_info.flags = 0;
+	exp_info.priv = imported;
+
+	imported->dma_buf = dma_buf_export(&exp_info);
+
+	/* dma_buf_export() returns ERR_PTR() on failure, not NULL */
+	if (IS_ERR(imported->dma_buf))
+		imported->dma_buf = NULL;
+}
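Note that exp_info.size above is page-granular; when the exact byte
count is needed it can be derived from the stored layout fields, using
the same arithmetic as the HYPER_DMABUF_SIZE macro in the query code
later in this patch. A sketch:

/* exact data size of an imported buffer, assuming frst_ofst and
 * last_len describe partial first/last pages as elsewhere in
 * this series
 */
static size_t imported_buf_size(struct imported_sgt_info *imported)
{
	return imported->nents * PAGE_SIZE - imported->frst_ofst
	       - PAGE_SIZE + imported->last_len;
}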
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.h
new file mode 100644
index 0000000..c5505a4
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.h
@@ -0,0 +1,32 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_OPS_H__
+#define __HYPER_DMABUF_OPS_H__
+
+int hyper_dmabuf_export_fd(struct imported_sgt_info *imported, int flags);
+
+void hyper_dmabuf_export_dma_buf(struct imported_sgt_info *imported);
+
+#endif /* __HYPER_DMABUF_OPS_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.c
new file mode 100644
index 0000000..1f2f56b
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.c
@@ -0,0 +1,172 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/dma-buf.h>
+#include <linux/uaccess.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_id.h"
+
+#define HYPER_DMABUF_SIZE(nents, first_offset, last_len) \
+	((nents)*PAGE_SIZE - (first_offset) - PAGE_SIZE + (last_len))
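+/* e.g. with 4K pages, nents = 3, first_offset = 256, last_len = 2048:
+ * 3*4096 - 256 - 4096 + 2048 = 9984 bytes
+ * (3840 in the first page + 4096 in the middle + 2048 in the last)
+ */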
+
+int hyper_dmabuf_query_exported(struct exported_sgt_info *exported,
+				int query, unsigned long *info)
+{
+	switch (query) {
+	case HYPER_DMABUF_QUERY_TYPE:
+		*info = EXPORTED;
+		break;
+
+	/* exporting domain of this specific dmabuf */
+	case HYPER_DMABUF_QUERY_EXPORTER:
+		*info = HYPER_DMABUF_DOM_ID(exported->hid);
+		break;
+
+	/* importing domain of this specific dmabuf */
+	case HYPER_DMABUF_QUERY_IMPORTER:
+		*info = exported->rdomid;
+		break;
+
+	/* size of dmabuf in byte */
+	case HYPER_DMABUF_QUERY_SIZE:
+		*info = exported->dma_buf->size;
+		break;
+
+	/* whether the buffer is used by importer */
+	case HYPER_DMABUF_QUERY_BUSY:
+		*info = (exported->active > 0);
+		break;
+
+	/* whether the buffer is unexported */
+	case HYPER_DMABUF_QUERY_UNEXPORTED:
+		*info = !exported->valid;
+		break;
+
+	/* whether the buffer is scheduled to be unexported */
+	case HYPER_DMABUF_QUERY_DELAYED_UNEXPORTED:
+		*info = exported->unexport_sched;
+		break;
+
+	/* size of private info attached to buffer */
+	case HYPER_DMABUF_QUERY_PRIV_INFO_SIZE:
+		*info = exported->sz_priv;
+		break;
+
+	/* copy private info attached to buffer */
+	case HYPER_DMABUF_QUERY_PRIV_INFO:
+		if (exported->sz_priv > 0) {
+			int n;
+
+			n = copy_to_user((void __user *) *info,
+					exported->priv,
+					exported->sz_priv);
+			if (n != 0)
+				return -EINVAL;
+		}
+		break;
+
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+
+int hyper_dmabuf_query_imported(struct imported_sgt_info *imported,
+				int query, unsigned long *info)
+{
+	switch (query) {
+	case HYPER_DMABUF_QUERY_TYPE:
+		*info = IMPORTED;
+		break;
+
+	/* exporting domain of this specific dmabuf */
+	case HYPER_DMABUF_QUERY_EXPORTER:
+		*info = HYPER_DMABUF_DOM_ID(imported->hid);
+		break;
+
+	/* importing domain of this specific dmabuf */
+	case HYPER_DMABUF_QUERY_IMPORTER:
+		*info = hy_drv_priv->domid;
+		break;
+
+	/* size of dmabuf in byte */
+	case HYPER_DMABUF_QUERY_SIZE:
+		if (imported->dma_buf) {
+			/* if local dma_buf is created (if it's
+			 * ever mapped), retrieve it directly
+			 * from struct dma_buf *
+			 */
+			*info = imported->dma_buf->size;
+		} else {
+			/* calculate it from the given nents, frst_ofst
+			 * and last_len
+			 */
+			*info = HYPER_DMABUF_SIZE(imported->nents,
+						  imported->frst_ofst,
+						  imported->last_len);
+		}
+		break;
+
+	/* whether the buffer is used or not */
+	case HYPER_DMABUF_QUERY_BUSY:
+		/* checks if it's used by importer */
+		*info = (imported->importers > 0);
+		break;
+
+	/* whether the buffer is unexported */
+	case HYPER_DMABUF_QUERY_UNEXPORTED:
+		*info = !imported->valid;
+		break;
+
+	/* size of private info attached to buffer */
+	case HYPER_DMABUF_QUERY_PRIV_INFO_SIZE:
+		*info = imported->sz_priv;
+		break;
+
+	/* copy private info attached to buffer */
+	case HYPER_DMABUF_QUERY_PRIV_INFO:
+		if (imported->sz_priv > 0) {
+			int n;
+
+			n = copy_to_user((void __user *)*info,
+					imported->priv,
+					imported->sz_priv);
+			if (n != 0)
+				return -EINVAL;
+		}
+		break;
+
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
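Wiring these up from the ioctl layer (not part of this hunk) reduces to
a lookup followed by a dispatch; a sketch, with the query code and info
pointer assumed to come from the userspace argument structure:

static int do_query(hyper_dmabuf_id_t hid, int query, unsigned long *info)
{
	struct exported_sgt_info *exported;
	struct imported_sgt_info *imported;

	exported = hyper_dmabuf_find_exported(hid);
	if (exported)
		return hyper_dmabuf_query_exported(exported, query, info);

	imported = hyper_dmabuf_find_imported(hid);
	if (imported)
		return hyper_dmabuf_query_imported(imported, query, info);

	return -ENOENT;
}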
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.h
new file mode 100644
index 0000000..65ae738
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.h
@@ -0,0 +1,10 @@
+#ifndef __HYPER_DMABUF_QUERY_H__
+#define __HYPER_DMABUF_QUERY_H__
+
+int hyper_dmabuf_query_imported(struct imported_sgt_info *imported,
+				int query, unsigned long *info);
+
+int hyper_dmabuf_query_exported(struct exported_sgt_info *exported,
+				int query, unsigned long *info);
+
+#endif /* __HYPER_DMABUF_QUERY_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.c
new file mode 100644
index 0000000..a82fd7b
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -0,0 +1,322 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/dma-buf.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_msg.h"
+#include "hyper_dmabuf_id.h"
+#include "hyper_dmabuf_sgl_proc.h"
+
+/* Whenever an importer does a dma operation on an imported buffer,
+ * a notification is sent to the exporter so that the exporter
+ * issues the equivalent dma operation on the original dma buf
+ * for indirect synchronization via shadow operations.
+ *
+ * All ptrs and references (e.g struct sg_table*,
+ * struct dma_buf_attachment) created via these operations on
+ * the exporter's side are kept in stacks (implemented as circular
+ * linked-lists) separately so that they can be re-referenced
+ * later when unmapping operations are invoked to free them.
+ *
+ * The very first element at the bottom of each stack is the one
+ * created when the initial export was issued, so it should not
+ * be modified or released by this function.
+ */
+int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
+{
+	struct exported_sgt_info *exported;
+	struct sgt_list *sgtl;
+	struct attachment_list *attachl;
+	struct kmap_vaddr_list *va_kmapl;
+	struct vmap_vaddr_list *va_vmapl;
+	int ret;
+
+	/* find a corresponding SGT for the id */
+	exported = hyper_dmabuf_find_exported(hid);
+
+	if (!exported) {
+		dev_err(hy_drv_priv->dev,
+			"dmabuf remote sync::can't find exported list\n");
+		return -ENOENT;
+	}
+
+	switch (ops) {
+	case HYPER_DMABUF_OPS_ATTACH:
+		attachl = kcalloc(1, sizeof(*attachl), GFP_KERNEL);
+
+		if (!attachl)
+			return -ENOMEM;
+
+		attachl->attach = dma_buf_attach(exported->dma_buf,
+						 hy_drv_priv->dev);
+
+		if (IS_ERR(attachl->attach)) {
+			kfree(attachl);
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_ATTACH\n");
+			return -ENOMEM;
+		}
+
+		list_add(&attachl->list, &exported->active_attached->list);
+		break;
+
+	case HYPER_DMABUF_OPS_DETACH:
+		if (list_empty(&exported->active_attached->list)) {
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_DETACH\n");
+			dev_err(hy_drv_priv->dev,
+				"no more dmabuf attachment left to be detached\n");
+			return -EFAULT;
+		}
+
+		attachl = list_first_entry(&exported->active_attached->list,
+					   struct attachment_list, list);
+
+		dma_buf_detach(exported->dma_buf, attachl->attach);
+		list_del(&attachl->list);
+		kfree(attachl);
+		break;
+
+	case HYPER_DMABUF_OPS_MAP:
+		if (list_empty(&exported->active_attached->list)) {
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_MAP\n");
+			dev_err(hy_drv_priv->dev,
+				"no more dmabuf attachment left to be mapped\n");
+			return -EFAULT;
+		}
+
+		attachl = list_first_entry(&exported->active_attached->list,
+					   struct attachment_list, list);
+
+		sgtl = kcalloc(1, sizeof(*sgtl), GFP_KERNEL);
+
+		if (!sgtl)
+			return -ENOMEM;
+
+		sgtl->sgt = dma_buf_map_attachment(attachl->attach,
+						   DMA_BIDIRECTIONAL);
+		if (IS_ERR(sgtl->sgt)) {
+			kfree(sgtl);
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_MAP\n");
+			return -ENOMEM;
+		}
+		list_add(&sgtl->list, &exported->active_sgts->list);
+		break;
+
+	case HYPER_DMABUF_OPS_UNMAP:
+		if (list_empty(&exported->active_sgts->list) ||
+		    list_empty(&exported->active_attached->list)) {
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_UNMAP\n");
+			dev_err(hy_drv_priv->dev,
+				"no SGT or attach left to be unmapped\n");
+			return -EFAULT;
+		}
+
+		attachl = list_first_entry(&exported->active_attached->list,
+					   struct attachment_list, list);
+		sgtl = list_first_entry(&exported->active_sgts->list,
+					struct sgt_list, list);
+
+		dma_buf_unmap_attachment(attachl->attach, sgtl->sgt,
+					 DMA_BIDIRECTIONAL);
+		list_del(&sgtl->list);
+		kfree(sgtl);
+		break;
+
+	case HYPER_DMABUF_OPS_RELEASE:
+		dev_dbg(hy_drv_priv->dev,
+			"id:%d key:%d %d %d} released, ref left: %d\n",
+			 exported->hid.id, exported->hid.rng_key[0],
+			 exported->hid.rng_key[1], exported->hid.rng_key[2],
+			 exported->active - 1);
+
+		exported->active--;
+
+		/* If there are still importers just break, if no then
+		 * continue with final cleanup
+		 */
+		if (exported->active)
+			break;
+
+		/* Importer just released buffer fd, check if there is
+		 * any other importer still using it.
+		 * If not and buffer was unexported, clean up shared
+		 * data and remove that buffer.
+		 */
+		dev_dbg(hy_drv_priv->dev,
+			"Buffer {id:%d key:%d %d %d} final released\n",
+			exported->hid.id, exported->hid.rng_key[0],
+			exported->hid.rng_key[1], exported->hid.rng_key[2]);
+
+		if (!exported->valid && !exported->active &&
+		    !exported->unexport_sched) {
+			hyper_dmabuf_cleanup_sgt_info(exported, false);
+			hyper_dmabuf_remove_exported(hid);
+			kfree(exported);
+			/* store hyper_dmabuf_id in the list for reuse */
+			hyper_dmabuf_store_hid(hid);
+		}
+
+		break;
+
+	case HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS:
+		ret = dma_buf_begin_cpu_access(exported->dma_buf,
+					       DMA_BIDIRECTIONAL);
+		if (ret) {
+			dev_err(hy_drv_priv->dev,
+				"HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS\n");
+			return ret;
+		}
+		break;
+
+	case HYPER_DMABUF_OPS_END_CPU_ACCESS:
+		ret = dma_buf_end_cpu_access(exported->dma_buf,
+					     DMA_BIDIRECTIONAL);
+		if (ret) {
+			dev_err(hy_drv_priv->dev,
+				"HYPER_DMABUF_OPS_END_CPU_ACCESS\n");
+			return ret;
+		}
+		break;
+
+	case HYPER_DMABUF_OPS_KMAP_ATOMIC:
+	case HYPER_DMABUF_OPS_KMAP:
+		va_kmapl = kcalloc(1, sizeof(*va_kmapl), GFP_KERNEL);
+		if (!va_kmapl)
+			return -ENOMEM;
+
+		/* dummy kmapping of 1 page */
+		if (ops == HYPER_DMABUF_OPS_KMAP_ATOMIC)
+			va_kmapl->vaddr = dma_buf_kmap_atomic(
+						exported->dma_buf, 1);
+		else
+			va_kmapl->vaddr = dma_buf_kmap(
+						exported->dma_buf, 1);
+
+		if (!va_kmapl->vaddr) {
+			kfree(va_kmapl);
+			dev_err(hy_drv_priv->dev,
+				"HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
+			return -ENOMEM;
+		}
+		list_add(&va_kmapl->list, &exported->va_kmapped->list);
+		break;
+
+	case HYPER_DMABUF_OPS_KUNMAP_ATOMIC:
+	case HYPER_DMABUF_OPS_KUNMAP:
+		if (list_empty(&exported->va_kmapped->list)) {
+			dev_err(hy_drv_priv->dev,
+				"HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
+			dev_err(hy_drv_priv->dev,
+				"no more dmabuf VA to be freed\n");
+			return -EFAULT;
+		}
+
+		va_kmapl = list_first_entry(&exported->va_kmapped->list,
+					    struct kmap_vaddr_list, list);
+		if (!va_kmapl->vaddr) {
+			dev_err(hy_drv_priv->dev,
+				"HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
+			return -EINVAL; /* PTR_ERR(NULL) would be 0 */
+		}
+
+		/* unmapping 1 page */
+		if (ops == HYPER_DMABUF_OPS_KUNMAP_ATOMIC)
+			dma_buf_kunmap_atomic(exported->dma_buf,
+					      1, va_kmapl->vaddr);
+		else
+			dma_buf_kunmap(exported->dma_buf,
+				       1, va_kmapl->vaddr);
+
+		list_del(&va_kmapl->list);
+		kfree(va_kmapl);
+		break;
+
+	case HYPER_DMABUF_OPS_MMAP:
+		/* currently not supported: looking for a way to create
+		 * a dummy vma
+		 */
+		dev_warn(hy_drv_priv->dev,
+			 "remote sync::sychronized mmap is not supported\n");
+		break;
+
+	case HYPER_DMABUF_OPS_VMAP:
+		va_vmapl = kcalloc(1, sizeof(*va_vmapl), GFP_KERNEL);
+
+		if (!va_vmapl)
+			return -ENOMEM;
+
+		/* dummy vmapping */
+		va_vmapl->vaddr = dma_buf_vmap(exported->dma_buf);
+
+		if (!va_vmapl->vaddr) {
+			kfree(va_vmapl);
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_VMAP\n");
+			return -ENOMEM;
+		}
+		list_add(&va_vmapl->list, &exported->va_vmapped->list);
+		break;
+
+	case HYPER_DMABUF_OPS_VUNMAP:
+		if (list_empty(&exported->va_vmapped->list)) {
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_VUNMAP\n");
+			dev_err(hy_drv_priv->dev,
+				"no more dmabuf VA to be freed\n");
+			return -EFAULT;
+		}
+		va_vmapl = list_first_entry(&exported->va_vmapped->list,
+					struct vmap_vaddr_list, list);
+		if (!va_vmapl || va_vmapl->vaddr == NULL) {
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_VUNMAP\n");
+			return -EFAULT;
+		}
+
+		dma_buf_vunmap(exported->dma_buf, va_vmapl->vaddr);
+
+		list_del(&va_vmapl->list);
+		kfree(va_vmapl);
+		break;
+
+	default:
+		/* program should not get here */
+		break;
+	}
+
+	return 0;
+}
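The tracking lists used above are strictly LIFO: every map-type op
pushes with list_add() and the matching unmap-type op pops the most
recent entry with list_first_entry()/list_del(), so pairs resolve in
reverse order. In isolation, the pop half of that pattern is just:

/* illustrative only: the LIFO discipline used by the sync stacks */
static struct sgt_list *pop_sgt(struct sgt_list *stack)
{
	struct sgt_list *top;

	if (list_empty(&stack->list))
		return NULL;

	top = list_first_entry(&stack->list, struct sgt_list, list);
	list_del(&top->list);
	return top;
}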
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.h
new file mode 100644
index 0000000..36638928
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.h
@@ -0,0 +1,30 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_REMOTE_SYNC_H__
+#define __HYPER_DMABUF_REMOTE_SYNC_H__
+
+int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops);
+
+#endif /* __HYPER_DMABUF_REMOTE_SYNC_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
new file mode 100644
index 0000000..d15eb17
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
@@ -0,0 +1,255 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/dma-buf.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_sgl_proc.h"
+
+#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
+
+/* return the total number of pages referenced by an sgt,
+ * used to pre-calculate how many pages sit behind a given sgt
+ */
+static int get_num_pgs(struct sg_table *sgt)
+{
+	struct scatterlist *sgl;
+	int length, i;
+	/* at least one page */
+	int num_pages = 1;
+
+	sgl = sgt->sgl;
+
+	length = sgl->length - PAGE_SIZE + sgl->offset;
+
+	/* round-up */
+	num_pages += ((length + PAGE_SIZE - 1)/PAGE_SIZE);
+
+	for (i = 1; i < sgt->nents; i++) {
+		sgl = sg_next(sgl);
+
+		/* round-up */
+		num_pages += ((sgl->length + PAGE_SIZE - 1) /
+			     PAGE_SIZE);
+	}
+
+	return num_pages;
+}
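+/* e.g. with 4K pages, a first entry of length 0x2000 at offset 0x100:
+ * remaining length = 0x2000 - 0x1000 + 0x100 = 0x1100, which rounds
+ * up to 2 more pages, i.e. 3 pages total for that entry
+ */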
+
+/* extract pages directly from struct sg_table */
+struct pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
+{
+	struct pages_info *pg_info;
+	int i, j, k;
+	int length;
+	struct scatterlist *sgl;
+
+	pg_info = kmalloc(sizeof(*pg_info), GFP_KERNEL);
+	if (!pg_info)
+		return NULL;
+
+	pg_info->pgs = kmalloc_array(get_num_pgs(sgt),
+				     sizeof(struct page *),
+				     GFP_KERNEL);
+
+	if (!pg_info->pgs) {
+		kfree(pg_info);
+		return NULL;
+	}
+
+	sgl = sgt->sgl;
+
+	pg_info->nents = 1;
+	pg_info->frst_ofst = sgl->offset;
+	pg_info->pgs[0] = sg_page(sgl);
+	length = sgl->length - PAGE_SIZE + sgl->offset;
+	i = 1;
+
+	while (length > 0) {
+		pg_info->pgs[i] = nth_page(sg_page(sgl), i);
+		length -= PAGE_SIZE;
+		pg_info->nents++;
+		i++;
+	}
+
+	for (j = 1; j < sgt->nents; j++) {
+		sgl = sg_next(sgl);
+		pg_info->pgs[i++] = sg_page(sgl);
+		length = sgl->length - PAGE_SIZE;
+		pg_info->nents++;
+		k = 1;
+
+		while (length > 0) {
+			pg_info->pgs[i++] = nth_page(sg_page(sgl), k++);
+			length -= PAGE_SIZE;
+			pg_info->nents++;
+		}
+	}
+
+	/*
+	 * length at this point will be 0 or negative, so to
+	 * calculate the last page's size just add it to PAGE_SIZE
+	 */
+	pg_info->last_len = PAGE_SIZE + length;
+
+	return pg_info;
+}
+
+/* create sg_table with given pages and other parameters */
+struct sg_table *hyper_dmabuf_create_sgt(struct page **pgs,
+					 int frst_ofst, int last_len,
+					 int nents)
+{
+	struct sg_table *sgt;
+	struct scatterlist *sgl;
+	int i, ret;
+
+	sgt = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
+	if (!sgt)
+		return NULL;
+
+	ret = sg_alloc_table(sgt, nents, GFP_KERNEL);
+	if (ret) {
+		sg_free_table(sgt);
+		kfree(sgt);
+		return NULL;
+	}
+
+	sgl = sgt->sgl;
+
+	sg_set_page(sgl, pgs[0], PAGE_SIZE-frst_ofst, frst_ofst);
+
+	for (i = 1; i < nents-1; i++) {
+		sgl = sg_next(sgl);
+		sg_set_page(sgl, pgs[i], PAGE_SIZE, 0);
+	}
+
+	if (nents > 1) /* more than one page */ {
+		sgl = sg_next(sgl);
+		sg_set_page(sgl, pgs[i], last_len, 0);
+	}
+
+	return sgt;
+}
+
+int hyper_dmabuf_cleanup_sgt_info(struct exported_sgt_info *exported,
+				  int force)
+{
+	struct sgt_list *sgtl;
+	struct attachment_list *attachl;
+	struct kmap_vaddr_list *va_kmapl;
+	struct vmap_vaddr_list *va_vmapl;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
+
+	if (!exported) {
+		dev_err(hy_drv_priv->dev, "invalid hyper_dmabuf_id\n");
+		return -EINVAL;
+	}
+
+	/* if force != 1, sgt_info can be released only if
+	 * there's no activity on exported dma-buf on importer
+	 * side.
+	 */
+	if (!force &&
+	    exported->active) {
+		dev_warn(hy_drv_priv->dev,
+			 "dma-buf is used by importer\n");
+
+		return -EPERM;
+	}
+
+	/* force == 1 is not recommended */
+	while (!list_empty(&exported->va_kmapped->list)) {
+		va_kmapl = list_first_entry(&exported->va_kmapped->list,
+					    struct kmap_vaddr_list, list);
+
+		dma_buf_kunmap(exported->dma_buf, 1, va_kmapl->vaddr);
+		list_del(&va_kmapl->list);
+		kfree(va_kmapl);
+	}
+
+	while (!list_empty(&exported->va_vmapped->list)) {
+		va_vmapl = list_first_entry(&exported->va_vmapped->list,
+					    struct vmap_vaddr_list, list);
+
+		dma_buf_vunmap(exported->dma_buf, va_vmapl->vaddr);
+		list_del(&va_vmapl->list);
+		kfree(va_vmapl);
+	}
+
+	while (!list_empty(&exported->active_sgts->list)) {
+		attachl = list_first_entry(&exported->active_attached->list,
+					   struct attachment_list, list);
+
+		sgtl = list_first_entry(&exported->active_sgts->list,
+					struct sgt_list, list);
+
+		dma_buf_unmap_attachment(attachl->attach, sgtl->sgt,
+					 DMA_BIDIRECTIONAL);
+		list_del(&sgtl->list);
+		kfree(sgtl);
+	}
+
+	while (!list_empty(&exported->active_attached->list)) {
+		attachl = list_first_entry(&exported->active_attached->list,
+					   struct attachment_list, list);
+
+		dma_buf_detach(exported->dma_buf, attachl->attach);
+		list_del(&attachl->list);
+		kfree(attachl);
+	}
+
+	/* Start cleanup of buffer in reverse order to exporting */
+	bknd_ops->unshare_pages(&exported->refs_info, exported->nents);
+
+	/* unmap dma-buf */
+	dma_buf_unmap_attachment(exported->active_attached->attach,
+				 exported->active_sgts->sgt,
+				 DMA_BIDIRECTIONAL);
+
+	/* detach dma-buf */
+	dma_buf_detach(exported->dma_buf, exported->active_attached->attach);
+
+	/* close connection to dma-buf completely */
+	dma_buf_put(exported->dma_buf);
+	exported->dma_buf = NULL;
+
+	kfree(exported->active_sgts);
+	kfree(exported->active_attached);
+	kfree(exported->va_kmapped);
+	kfree(exported->va_vmapped);
+	kfree(exported->priv);
+
+	return 0;
+}
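Taken together, hyper_dmabuf_ext_pgs() and hyper_dmabuf_create_sgt()
are inverses of each other, which is exactly how the importer's map
path uses them. A round-trip sketch (illustrative, not part of the
driver):

/* decompose an sgt into pages plus layout info, then rebuild an
 * equivalent sgt from them
 */
static struct sg_table *clone_sgt(struct sg_table *sgt)
{
	struct pages_info *pg_info;
	struct sg_table *clone;

	pg_info = hyper_dmabuf_ext_pgs(sgt);
	if (!pg_info)
		return NULL;

	clone = hyper_dmabuf_create_sgt(pg_info->pgs, pg_info->frst_ofst,
					pg_info->last_len, pg_info->nents);

	kfree(pg_info->pgs);
	kfree(pg_info);
	return clone;
}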
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
new file mode 100644
index 0000000..869d982
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_SGL_PROC_H__
+#define __HYPER_DMABUF_SGL_PROC_H__
+
+/* extract pages directly from struct sg_table */
+struct pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt);
+
+/* create sg_table with given pages and other parameters */
+struct sg_table *hyper_dmabuf_create_sgt(struct page **pgs,
+					 int frst_ofst, int last_len,
+					 int nents);
+
+int hyper_dmabuf_cleanup_sgt_info(struct exported_sgt_info *exported,
+				  int force);
+
+void hyper_dmabuf_free_sgt(struct sg_table *sgt);
+
+#endif /* __HYPER_DMABUF_SGL_PROC_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h
new file mode 100644
index 0000000..a11f804
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -0,0 +1,141 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_STRUCT_H__
+#define __HYPER_DMABUF_STRUCT_H__
+
+/* stack of mapped sgts */
+struct sgt_list {
+	struct sg_table *sgt;
+	struct list_head list;
+};
+
+/* stack of attachments */
+struct attachment_list {
+	struct dma_buf_attachment *attach;
+	struct list_head list;
+};
+
+/* stack of vaddr mapped via kmap */
+struct kmap_vaddr_list {
+	void *vaddr;
+	struct list_head list;
+};
+
+/* stack of vaddr mapped via vmap */
+struct vmap_vaddr_list {
+	void *vaddr;
+	struct list_head list;
+};
+
+/* Exporter builds pages_info before sharing pages */
+struct pages_info {
+	int frst_ofst;
+	int last_len;
+	int nents;
+	struct page **pgs;
+};
+
+
+/* Exporter stores references to sgt in a hash table
+ * Exporter keeps these references for synchronization
+ * and tracking purposes
+ */
+struct exported_sgt_info {
+	hyper_dmabuf_id_t hid;
+
+	/* VM ID of importer */
+	int rdomid;
+
+	struct dma_buf *dma_buf;
+	int nents;
+
+	/* list for tracking activities on dma_buf */
+	struct sgt_list *active_sgts;
+	struct attachment_list *active_attached;
+	struct kmap_vaddr_list *va_kmapped;
+	struct vmap_vaddr_list *va_vmapped;
+
+	/* set to 0 when unexported. Importer doesn't
+	 * do a new mapping of buffer if valid == false
+	 */
+	bool valid;
+
+	/* active == true if the buffer is actively used
+	 * (mapped) by importer
+	 */
+	int active;
+
+	/* hypervisor specific reference data for shared pages */
+	void *refs_info;
+
+	struct delayed_work unexport;
+	bool unexport_sched;
+
+	/* list for file pointers associated with all user space
+	 * application that have exported this same buffer to
+	 * another VM. This needs to be tracked to know whether
+	 * the buffer can be completely freed.
+	 */
+	struct file *filp;
+
+	/* size of private */
+	size_t sz_priv;
+
+	/* private data associated with the exported buffer */
+	char *priv;
+};
+
+/* imported_sgt_info contains information about imported DMA_BUF
+ * this info is kept in IMPORT list and asynchorously retrieved and
+ * used to map DMA_BUF on importer VM's side upon export fd ioctl
+ * request from user-space
+ */
+
+struct imported_sgt_info {
+	hyper_dmabuf_id_t hid; /* unique id for shared dmabuf imported */
+
+	/* hypervisor-specific handle to pages */
+	int ref_handle;
+
+	/* offset and size info of DMA_BUF */
+	int frst_ofst;
+	int last_len;
+	int nents;
+
+	struct dma_buf *dma_buf;
+	struct sg_table *sgt;
+
+	void *refs_info;
+	bool valid;
+	int importers;
+
+	/* size of private */
+	size_t sz_priv;
+
+	/* private data associated with the exported buffer */
+	char *priv;
+};
+
+#endif /* __HYPER_DMABUF_STRUCT_H__ */
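Each of the four tracking stacks above embeds its own list_head, so an
exported_sgt_info needs every head allocated and initialized before the
first remote-sync request arrives (presumably done by the export ioctl,
which is not in this hunk). A minimal sketch of that setup, with error
unwinding left to the caller:

static int init_tracking(struct exported_sgt_info *exported)
{
	exported->active_sgts = kzalloc(sizeof(*exported->active_sgts),
					GFP_KERNEL);
	exported->active_attached = kzalloc(sizeof(*exported->active_attached),
					    GFP_KERNEL);
	exported->va_kmapped = kzalloc(sizeof(*exported->va_kmapped),
				       GFP_KERNEL);
	exported->va_vmapped = kzalloc(sizeof(*exported->va_vmapped),
				       GFP_KERNEL);

	if (!exported->active_sgts || !exported->active_attached ||
	    !exported->va_kmapped || !exported->va_vmapped)
		return -ENOMEM;

	INIT_LIST_HEAD(&exported->active_sgts->list);
	INIT_LIST_HEAD(&exported->active_attached->list);
	INIT_LIST_HEAD(&exported->va_kmapped->list);
	INIT_LIST_HEAD(&exported->va_vmapped->list);

	return 0;
}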
diff --git a/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm.c b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm.c
new file mode 100644
index 0000000..4a073ce
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm.c
@@ -0,0 +1,941 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/workqueue.h>
+#include <linux/delay.h>
+#include <xen/grant_table.h>
+#include <xen/events.h>
+#include <xen/xenbus.h>
+#include <asm/xen/page.h>
+#include "hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_xen_comm_list.h"
+#include "../hyper_dmabuf_drv.h"
+
+static int export_req_id;
+
+struct hyper_dmabuf_req req_pending = {0};
+
+static void xen_get_domid_delayed(struct work_struct *unused);
+static void xen_init_comm_env_delayed(struct work_struct *unused);
+
+static DECLARE_DELAYED_WORK(get_vm_id_work, xen_get_domid_delayed);
+static DECLARE_DELAYED_WORK(xen_init_comm_env_work, xen_init_comm_env_delayed);
+
+/* Creates entry in xen store that will keep details of all
+ * exporter rings created by this domain
+ */
+static int xen_comm_setup_data_dir(void)
+{
+	char buf[255];
+
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf",
+		hy_drv_priv->domid);
+
+	return xenbus_mkdir(XBT_NIL, buf, "");
+}
+
+/* Removes the xenstore entry with the exporter ring details.
+ * Other domains that have connected to any of the exporter rings
+ * created by this domain will be notified about the removal of
+ * this entry and will treat that as a signal to clean up the
+ * importer rings created for this domain
+ */
+static int xen_comm_destroy_data_dir(void)
+{
+	char buf[255];
+
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf",
+		hy_drv_priv->domid);
+
+	return xenbus_rm(XBT_NIL, buf, "");
+}
+
+/* Adds xenstore entries with details of an exporter ring created
+ * for the given remote domain. It requires a special daemon running
+ * in dom0 to make sure that the given remote domain will have the
+ * right permissions to access that data.
+ */
+static int xen_comm_expose_ring_details(int domid, int rdomid,
+					int gref, int port)
+{
+	char buf[255];
+	int ret;
+
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d",
+		domid, rdomid);
+
+	ret = xenbus_printf(XBT_NIL, buf, "grefid", "%d", gref);
+
+	if (ret) {
+		dev_err(hy_drv_priv->dev,
+			"Failed to write xenbus entry %s: %d\n",
+			buf, ret);
+
+		return ret;
+	}
+
+	ret = xenbus_printf(XBT_NIL, buf, "port", "%d", port);
+
+	if (ret) {
+		dev_err(hy_drv_priv->dev,
+			"Failed to write xenbus entry %s: %d\n",
+			buf, ret);
+
+		return ret;
+	}
+
+	return 0;
+}
+
+/*
+ * Queries details of ring exposed by remote domain.
+ */
+static int xen_comm_get_ring_details(int domid, int rdomid,
+				     int *grefid, int *port)
+{
+	char buf[255];
+	int ret;
+
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d",
+		rdomid, domid);
+
+	ret = xenbus_scanf(XBT_NIL, buf, "grefid", "%d", grefid);
+
+	if (ret <= 0) {
+		dev_err(hy_drv_priv->dev,
+			"Failed to read xenbus entry %s: %d\n",
+			buf, ret);
+
+		return ret;
+	}
+
+	ret = xenbus_scanf(XBT_NIL, buf, "port", "%d", port);
+
+	if (ret <= 0) {
+		dev_err(hy_drv_priv->dev,
+			"Failed to read xenbus entry %s: %d\n",
+			buf, ret);
+
+		return ret;
+	}
+
+	return (ret <= 0 ? 1 : 0);
+}
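+/* Combining the two helpers above, exporter domain E advertising a
+ * ring for importer domain I results in the xenstore layout:
+ *
+ *   /local/domain/E/data/hyper_dmabuf/I/grefid = <gref of sring>
+ *   /local/domain/E/data/hyper_dmabuf/I/port   = <event channel port>
+ */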
+
+static void xen_get_domid_delayed(struct work_struct *unused)
+{
+	struct xenbus_transaction xbt;
+	int domid, ret;
+
+	/* schedule another attempt if the driver is still running
+	 * and xenstore has not been initialized yet
+	 */
+	if (likely(xenstored_ready == 0)) {
+		dev_dbg(hy_drv_priv->dev,
+			"Xenstore is not ready yet. Will retry in 500ms\n");
+		schedule_delayed_work(&get_vm_id_work, msecs_to_jiffies(500));
+	} else {
+		xenbus_transaction_start(&xbt);
+
+		ret = xenbus_scanf(xbt, "domid", "", "%d", &domid);
+
+		if (ret <= 0)
+			domid = -1;
+
+		xenbus_transaction_end(xbt, 0);
+
+		/* try again since -1 is an invalid id for domain
+		 * (but only if driver is still running)
+		 */
+		if (unlikely(domid == -1)) {
+			dev_dbg(hy_drv_priv->dev,
+				"domid==-1 is invalid. Will retry it in 500ms\n");
+			schedule_delayed_work(&get_vm_id_work,
+					      msecs_to_jiffies(500));
+		} else {
+			dev_info(hy_drv_priv->dev,
+				 "Successfully retrieved domid from Xenstore:%d\n",
+				 domid);
+			hy_drv_priv->domid = domid;
+		}
+	}
+}
+
+int xen_be_get_domid(void)
+{
+	struct xenbus_transaction xbt;
+	int domid;
+
+	if (unlikely(xenstored_ready == 0)) {
+		xen_get_domid_delayed(NULL);
+		return -1;
+	}
+
+	xenbus_transaction_start(&xbt);
+
+	if (!xenbus_scanf(xbt, "domid", "", "%d", &domid))
+		domid = -1;
+
+	xenbus_transaction_end(xbt, 0);
+
+	return domid;
+}
+
+static int xen_comm_next_req_id(void)
+{
+	export_req_id++;
+	return export_req_id;
+}
+
+/* For now cache the latest rings as global variables. TODO: keep them in a list */
+static irqreturn_t front_ring_isr(int irq, void *info);
+static irqreturn_t back_ring_isr(int irq, void *info);
+
+/* Callback function invoked on any change of the watched xenbus path.
+ * Used for detecting creation/destruction of a remote domain's
+ * exporter ring.
+ *
+ * When a remote domain's exporter ring is detected, an importer ring
+ * on this domain is created.
+ *
+ * When destruction of a remote domain's exporter ring is detected,
+ * this domain's importer ring is cleaned up.
+ *
+ * Destruction can be caused by the remote domain unloading the module
+ * or by its crash/forced shutdown.
+ */
+static void remote_dom_exporter_watch_cb(struct xenbus_watch *watch,
+					 const char *path, const char *token)
+{
+	int rdom, ret;
+	uint32_t grefid, port;
+	struct xen_comm_rx_ring_info *ring_info;
+
+	/* Check which domain has changed its exporter rings */
+	ret = sscanf(watch->node, "/local/domain/%d/", &rdom);
+	if (ret <= 0)
+		return;
+
+	/* Check if we have importer ring for given remote domain already
+	 * created
+	 */
+	ring_info = xen_comm_find_rx_ring(rdom);
+
+	/* Try to query the remote domain's exporter ring details. If
+	 * that fails and we have an importer ring, the remote domain
+	 * has cleaned up its exporter ring, so our importer ring is
+	 * no longer useful.
+	 *
+	 * If querying the details succeeds and we don't have an importer
+	 * ring, the remote domain has set one up for us and we should
+	 * connect to it.
+	 */
+
+	ret = xen_comm_get_ring_details(xen_be_get_domid(),
+					rdom, &grefid, &port);
+
+	if (ring_info && ret != 0) {
+		dev_info(hy_drv_priv->dev,
+			 "Remote exporter closed, cleaninup importer\n");
+		xen_be_cleanup_rx_rbuf(rdom);
+	} else if (!ring_info && ret == 0) {
+		dev_info(hy_drv_priv->dev,
+			 "Registering importer\n");
+		xen_be_init_rx_rbuf(rdom);
+	}
+}
+
+/* exporter needs to generate info for page sharing */
+int xen_be_init_tx_rbuf(int domid)
+{
+	struct xen_comm_tx_ring_info *ring_info;
+	struct xen_comm_sring *sring;
+	struct evtchn_alloc_unbound alloc_unbound;
+	struct evtchn_close close;
+
+	void *shared_ring;
+	int ret;
+
+	/* check if there's any existing tx channel in the table */
+	ring_info = xen_comm_find_tx_ring(domid);
+
+	if (ring_info) {
+		dev_info(hy_drv_priv->dev,
+			 "tx ring ch to domid = %d already exist\ngref = %d, port = %d\n",
+		ring_info->rdomain, ring_info->gref_ring, ring_info->port);
+		return 0;
+	}
+
+	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
+
+	if (!ring_info)
+		return -ENOMEM;
+
+	/* from exporter to importer */
+	shared_ring = (void *)__get_free_pages(GFP_KERNEL, 1);
+	if (!shared_ring) {
+		kfree(ring_info);
+		return -ENOMEM;
+	}
+
+	sring = (struct xen_comm_sring *) shared_ring;
+
+	SHARED_RING_INIT(sring);
+
+	FRONT_RING_INIT(&(ring_info->ring_front), sring, PAGE_SIZE);
+
+	ring_info->gref_ring = gnttab_grant_foreign_access(domid,
+						virt_to_mfn(shared_ring),
+						0);
+	if (ring_info->gref_ring < 0) {
+		/* failed to get gref; free the shared page too */
+		free_pages((unsigned long)shared_ring, 1);
+		kfree(ring_info);
+		return -EFAULT;
+	}
+
+	alloc_unbound.dom = DOMID_SELF;
+	alloc_unbound.remote_dom = domid;
+	ret = HYPERVISOR_event_channel_op(EVTCHNOP_alloc_unbound,
+					  &alloc_unbound);
+	if (ret) {
+		dev_err(hy_drv_priv->dev,
+			"Cannot allocate event channel\n");
+		kfree(ring_info);
+		return -EIO;
+	}
+
+	/* setting up interrupt */
+	ret = bind_evtchn_to_irqhandler(alloc_unbound.port,
+					front_ring_isr, 0,
+					NULL, (void *) ring_info);
+
+	if (ret < 0) {
+		dev_err(hy_drv_priv->dev,
+			"Failed to setup event channel\n");
+		close.port = alloc_unbound.port;
+		HYPERVISOR_event_channel_op(EVTCHNOP_close, &close);
+		gnttab_end_foreign_access(ring_info->gref_ring, 0,
+					virt_to_mfn(shared_ring));
+		kfree(ring_info);
+		return -EIO;
+	}
+
+	ring_info->rdomain = domid;
+	ring_info->irq = ret;
+	ring_info->port = alloc_unbound.port;
+
+	mutex_init(&ring_info->lock);
+
+	dev_dbg(hy_drv_priv->dev,
+		"%s: allocated eventchannel gref %d  port: %d  irq: %d\n",
+		__func__,
+		ring_info->gref_ring,
+		ring_info->port,
+		ring_info->irq);
+
+	ret = xen_comm_add_tx_ring(ring_info);
+
+	ret = xen_comm_expose_ring_details(xen_be_get_domid(),
+					   domid,
+					   ring_info->gref_ring,
+					   ring_info->port);
+
+	/* Register watch for remote domain exporter ring.
+	 * When remote domain will setup its exporter ring,
+	 * we will automatically connect our importer ring to it.
+	 */
+	ring_info->watch.callback = remote_dom_exporter_watch_cb;
+	ring_info->watch.node = kmalloc(255, GFP_KERNEL);
+
+	if (!ring_info->watch.node) {
+		kfree(ring_info);
+		return -ENOMEM;
+	}
+
+	sprintf((char *)ring_info->watch.node,
+		"/local/domain/%d/data/hyper_dmabuf/%d/port",
+		domid, xen_be_get_domid());
+
+	register_xenbus_watch(&ring_info->watch);
+
+	return ret;
+}
+
+/* cleans up exporter ring created for given remote domain */
+void xen_be_cleanup_tx_rbuf(int domid)
+{
+	struct xen_comm_tx_ring_info *ring_info;
+	struct xen_comm_rx_ring_info *rx_ring_info;
+
+	/* check if we have an exporter ring for the given remote domain */
+	ring_info = xen_comm_find_tx_ring(domid);
+
+	if (!ring_info)
+		return;
+
+	xen_comm_remove_tx_ring(domid);
+
+	unregister_xenbus_watch(&ring_info->watch);
+	kfree(ring_info->watch.node);
+
+	/* No need to close the event channel; unbind_from_irqhandler()
+	 * takes care of that
+	 */
+	unbind_from_irqhandler(ring_info->irq, (void *) ring_info);
+
+	/* No need to free the sring page here; gnttab_end_foreign_access()
+	 * frees it once the other side ends its access
+	 */
+	gnttab_end_foreign_access(ring_info->gref_ring, 0,
+				  (unsigned long) ring_info->ring_front.sring);
+
+	kfree(ring_info);
+
+	rx_ring_info = xen_comm_find_rx_ring(domid);
+	if (!rx_ring_info)
+		return;
+
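+	/* A paired rx ring for the same domain still exists; re-init the
+	 * back ring here so that stale producer/consumer indices are not
+	 * carried over if the channel is re-established later.
+	 */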
+	BACK_RING_INIT(&(rx_ring_info->ring_back),
+		       rx_ring_info->ring_back.sring,
+		       PAGE_SIZE);
+}
+
+/* importer needs to know about shared page and port numbers for
+ * ring buffer and event channel
+ */
+int xen_be_init_rx_rbuf(int domid)
+{
+	struct xen_comm_rx_ring_info *ring_info;
+	struct xen_comm_sring *sring;
+
+	struct page *shared_ring;
+
+	struct gnttab_map_grant_ref *map_ops;
+
+	int ret;
+	int rx_gref, rx_port;
+
+	/* check if there's an existing rx ring channel */
+	ring_info = xen_comm_find_rx_ring(domid);
+
+	if (ring_info) {
+		dev_info(hy_drv_priv->dev,
+			 "rx ring ch from domid = %d already exist\n",
+			 ring_info->sdomain);
+
+		return 0;
+	}
+
+	ret = xen_comm_get_ring_details(xen_be_get_domid(), domid,
+					&rx_gref, &rx_port);
+
+	if (ret) {
+		dev_err(hy_drv_priv->dev,
+			"Domain %d has not created exporter ring for current domain\n",
+			domid);
+
+		return ret;
+	}
+
+	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
+
+	if (!ring_info)
+		return -ENOMEM;
+
+	ring_info->sdomain = domid;
+	ring_info->evtchn = rx_port;
+
+	map_ops = kmalloc(sizeof(*map_ops), GFP_KERNEL);
+
+	if (!map_ops) {
+		ret = -ENOMEM;
+		goto fail_no_map_ops;
+	}
+
+	if (gnttab_alloc_pages(1, &shared_ring)) {
+		ret = -ENOMEM;
+		goto fail_others;
+	}
+
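+	/* Prepare the map op for the exporter's sring gref together with
+	 * the matching unmap op; the unmap handle gets filled in once
+	 * gnttab_map_refs() succeeds below.
+	 */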
+	gnttab_set_map_op(&map_ops[0],
+			  (unsigned long)pfn_to_kaddr(
+					page_to_pfn(shared_ring)),
+			  GNTMAP_host_map, rx_gref, domid);
+
+	gnttab_set_unmap_op(&ring_info->unmap_op,
+			    (unsigned long)pfn_to_kaddr(
+					page_to_pfn(shared_ring)),
+			    GNTMAP_host_map, -1);
+
+	ret = gnttab_map_refs(map_ops, NULL, &shared_ring, 1);
+	if (ret < 0) {
+		dev_err(hy_drv_priv->dev, "Cannot map ring\n");
+		gnttab_free_pages(1, &shared_ring);
+		ret = -EFAULT;
+		goto fail_others;
+	}
+
+	if (map_ops[0].status) {
+		dev_err(hy_drv_priv->dev, "Ring mapping failed\n");
+		gnttab_free_pages(1, &shared_ring);
+		ret = -EFAULT;
+		goto fail_others;
+	}
+
+	ring_info->unmap_op.handle = map_ops[0].handle;
+
+	kfree(map_ops);
+
+	sring = (struct xen_comm_sring *)pfn_to_kaddr(page_to_pfn(shared_ring));
+
+	BACK_RING_INIT(&ring_info->ring_back, sring, PAGE_SIZE);
+
+	ret = bind_interdomain_evtchn_to_irq(domid, rx_port);
+
+	if (ret < 0) {
+		/* map_ops was freed above; unmap the ring page directly */
+		gnttab_unmap_refs(&ring_info->unmap_op, NULL,
+				  &shared_ring, 1);
+		gnttab_free_pages(1, &shared_ring);
+		ret = -EIO;
+		goto fail_no_map_ops;
+	}
+
+	ring_info->irq = ret;
+
+	dev_dbg(hy_drv_priv->dev,
+		"%s: bound to eventchannel port: %d  irq: %d\n", __func__,
+		rx_port,
+		ring_info->irq);
+
+	ret = xen_comm_add_rx_ring(ring_info);
+
+	/* Setup communication channel in the opposite direction */
+	if (!xen_comm_find_tx_ring(domid))
+		ret = xen_be_init_tx_rbuf(domid);
+
+	ret = request_irq(ring_info->irq,
+			  back_ring_isr, 0,
+			  NULL, (void *)ring_info);
+
+	return ret;
+
+fail_others:
+	kfree(map_ops);
+
+fail_no_map_ops:
+	kfree(ring_info);
+
+	return ret;
+}
+
+/* cleans up importer ring created for given source domain */
+void xen_be_cleanup_rx_rbuf(int domid)
+{
+	struct xen_comm_rx_ring_info *ring_info;
+	struct xen_comm_tx_ring_info *tx_ring_info;
+	struct page *shared_ring;
+
+	/* check if we have importer ring created for given sdomain */
+	ring_info = xen_comm_find_rx_ring(domid);
+
+	if (!ring_info)
+		return;
+
+	xen_comm_remove_rx_ring(domid);
+
+	/* no need to close event channel, will be done by that function */
+	unbind_from_irqhandler(ring_info->irq, (void *)ring_info);
+
+	/* unmapping shared ring page */
+	shared_ring = virt_to_page(ring_info->ring_back.sring);
+	gnttab_unmap_refs(&ring_info->unmap_op, NULL, &shared_ring, 1);
+	gnttab_free_pages(1, &shared_ring);
+
+	kfree(ring_info);
+
+	tx_ring_info = xen_comm_find_tx_ring(domid);
+	if (!tx_ring_info)
+		return;
+
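+	/* Likewise re-init the paired tx (front) ring so that a later
+	 * reconnect starts from clean ring indices.
+	 */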
+	SHARED_RING_INIT(tx_ring_info->ring_front.sring);
+	FRONT_RING_INIT(&(tx_ring_info->ring_front),
+			tx_ring_info->ring_front.sring,
+			PAGE_SIZE);
+}
+
+#ifdef CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD
+
+static void xen_rx_ch_add_delayed(struct work_struct *unused);
+
+static DECLARE_DELAYED_WORK(xen_rx_ch_auto_add_work, xen_rx_ch_add_delayed);
+
+#define DOMID_SCAN_START	1	/*  domid = 1 */
+#define DOMID_SCAN_END		10	/* domid = 10 */
+
+static void xen_rx_ch_add_delayed(struct work_struct *unused)
+{
+	int ret;
+	char buf[128];
+	int i, dummy;
+
+	dev_dbg(hy_drv_priv->dev,
+		"Scanning for new tx channels coming from other domains\n");
+
+	/* check other domains and schedule another work if driver
+	 * is still running and backend is valid
+	 */
+	if (hy_drv_priv &&
+	    hy_drv_priv->initialized) {
+		for (i = DOMID_SCAN_START; i < DOMID_SCAN_END + 1; i++) {
+			if (i == hy_drv_priv->domid)
+				continue;
+
+			sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d",
+				i, hy_drv_priv->domid);
+
+			ret = xenbus_scanf(XBT_NIL, buf, "port", "%d", &dummy);
+
+			if (ret > 0) {
+				if (xen_comm_find_rx_ring(i) != NULL)
+					continue;
+
+				ret = xen_be_init_rx_rbuf(i);
+
+				if (!ret)
+					dev_info(hy_drv_priv->dev,
+						 "Done rx ch init for VM %d\n",
+						 i);
+			}
+		}
+
+		/* check every 10 seconds */
+		schedule_delayed_work(&xen_rx_ch_auto_add_work,
+				      msecs_to_jiffies(10000));
+	}
+}
+
+#endif /* CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD */
+
+void xen_init_comm_env_delayed(struct work_struct *unused)
+{
+	int ret;
+
+	/* scheduling another work if driver is still running
+	 * and xenstore hasn't been initialized or dom_id hasn't
+	 * been correctly retrieved.
+	 */
+	if (likely(xenstored_ready == 0 ||
+	    hy_drv_priv->domid == -1)) {
+		dev_dbg(hy_drv_priv->dev,
+			"Xenstore not ready Will re-try in 500ms\n");
+		schedule_delayed_work(&xen_init_comm_env_work,
+				      msecs_to_jiffies(500));
+	} else {
+		ret = xen_comm_setup_data_dir();
+		if (ret < 0) {
+			dev_err(hy_drv_priv->dev,
+				"Failed to create data dir in Xenstore\n");
+		} else {
+			dev_info(hy_drv_priv->dev,
+				"Successfully finished comm env init\n");
+			hy_drv_priv->initialized = true;
+
+#ifdef CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD
+			xen_rx_ch_add_delayed(NULL);
+#endif /* CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD */
+		}
+	}
+}
+
+int xen_be_init_comm_env(void)
+{
+	int ret;
+
+	xen_comm_ring_table_init();
+
+	if (unlikely(xenstored_ready == 0 ||
+	    hy_drv_priv->domid == -1)) {
+		xen_init_comm_env_delayed(NULL);
+		return -1;
+	}
+
+	ret = xen_comm_setup_data_dir();
+	if (ret < 0) {
+		dev_err(hy_drv_priv->dev,
+			"Failed to create data dir in Xenstore\n");
+	} else {
+		dev_info(hy_drv_priv->dev,
+			"Successfully finished comm env initialization\n");
+
+		hy_drv_priv->initialized = true;
+	}
+
+	return ret;
+}
+
+/* cleans up all tx/rx rings */
+static void xen_be_cleanup_all_rbufs(void)
+{
+	xen_comm_foreach_tx_ring(xen_be_cleanup_tx_rbuf);
+	xen_comm_foreach_rx_ring(xen_be_cleanup_rx_rbuf);
+}
+
+void xen_be_destroy_comm(void)
+{
+	xen_be_cleanup_all_rbufs();
+	xen_comm_destroy_data_dir();
+}
+
+int xen_be_send_req(int domid, struct hyper_dmabuf_req *req,
+		    int wait)
+{
+	struct xen_comm_front_ring *ring;
+	struct hyper_dmabuf_req *new_req;
+	struct xen_comm_tx_ring_info *ring_info;
+	int notify;
+
+	struct timeval tv_start, tv_end;
+	struct timeval tv_diff;
+
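+	/* 1000 polls with usleep_range(100, 120) below gives a rough
+	 * 100-120 ms budget, used once while waiting for free ring space
+	 * and once more while waiting for the remote side's response.
+	 */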
+	int timeout = 1000;
+
+	/* find a ring info for the channel */
+	ring_info = xen_comm_find_tx_ring(domid);
+	if (!ring_info) {
+		dev_err(hy_drv_priv->dev,
+			"Can't find ring info for the channel\n");
+		return -ENOENT;
+	}
+
+	ring = &ring_info->ring_front;
+
+	do_gettimeofday(&tv_start);
+
+	while (RING_FULL(ring)) {
+		dev_dbg(hy_drv_priv->dev, "RING_FULL\n");
+
+		if (timeout == 0) {
+			dev_err(hy_drv_priv->dev,
+				"Timeout while waiting for an entry in the ring\n");
+			return -EIO;
+		}
+		usleep_range(100, 120);
+		timeout--;
+	}
+
+	timeout = 1000;
+
+	mutex_lock(&ring_info->lock);
+
+	new_req = RING_GET_REQUEST(ring, ring->req_prod_pvt);
+	if (!new_req) {
+		mutex_unlock(&ring_info->lock);
+		dev_err(hy_drv_priv->dev,
+			"NULL REQUEST\n");
+		return -EIO;
+	}
+
+	req->req_id = xen_comm_next_req_id();
+
+	/* update req_pending with current request */
+	memcpy(&req_pending, req, sizeof(req_pending));
+
+	/* pass current request to the ring */
+	memcpy(new_req, req, sizeof(*new_req));
+
+	ring->req_prod_pvt++;
+
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(ring, notify);
+	if (notify)
+		notify_remote_via_irq(ring_info->irq);
+
+	if (wait) {
+		while (timeout--) {
+			if (req_pending.stat !=
+			    HYPER_DMABUF_REQ_NOT_RESPONDED)
+				break;
+			usleep_range(100, 120);
+		}
+
+		if (timeout < 0) {
+			mutex_unlock(&ring_info->lock);
+			dev_err(hy_drv_priv->dev,
+				"request timed-out\n");
+			return -EBUSY;
+		}
+
+		do_gettimeofday(&tv_end);
+
+		/* checking time duration for round-trip of a request
+		 * for debugging
+		 */
+		if (tv_end.tv_usec >= tv_start.tv_usec) {
+			tv_diff.tv_sec = tv_end.tv_sec-tv_start.tv_sec;
+			tv_diff.tv_usec = tv_end.tv_usec-tv_start.tv_usec;
+		} else {
+			tv_diff.tv_sec = tv_end.tv_sec-tv_start.tv_sec-1;
+			tv_diff.tv_usec = tv_end.tv_usec+1000000-
+					  tv_start.tv_usec;
+		}
+
+		if (tv_diff.tv_sec != 0 || tv_diff.tv_usec > 16000)
+			dev_dbg(hy_drv_priv->dev,
+				"send_req:time diff: %ld sec, %ld usec\n",
+				tv_diff.tv_sec, tv_diff.tv_usec);
+	}
+
+	mutex_unlock(&ring_info->lock);
+
+	return 0;
+}
+
+/* ISR for handling request */
+static irqreturn_t back_ring_isr(int irq, void *info)
+{
+	RING_IDX rc, rp;
+	struct hyper_dmabuf_req req;
+	struct hyper_dmabuf_resp resp;
+
+	int notify, more_to_do;
+	int ret;
+
+	struct xen_comm_rx_ring_info *ring_info;
+	struct xen_comm_back_ring *ring;
+
+	ring_info = (struct xen_comm_rx_ring_info *)info;
+	ring = &ring_info->ring_back;
+
+	dev_dbg(hy_drv_priv->dev, "%s\n", __func__);
+
+	do {
+		rc = ring->req_cons;
+		rp = ring->sring->req_prod;
+		more_to_do = 0;
+		while (rc != rp) {
+			if (RING_REQUEST_CONS_OVERFLOW(ring, rc))
+				break;
+
+			memcpy(&req, RING_GET_REQUEST(ring, rc), sizeof(req));
+			ring->req_cons = ++rc;
+
+			ret = hyper_dmabuf_msg_parse(ring_info->sdomain, &req);
+
+			if (ret > 0) {
+				/* preparing a response for the request and
+				 * send it to the requester
+				 */
+				memcpy(&resp, &req, sizeof(resp));
+				memcpy(RING_GET_RESPONSE(ring,
+							 ring->rsp_prod_pvt),
+							 &resp, sizeof(resp));
+				ring->rsp_prod_pvt++;
+
+				dev_dbg(hy_drv_priv->dev,
+					"responding to exporter for req:%d\n",
+					resp.resp_id);
+
+				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(ring,
+								     notify);
+
+				if (notify)
+					notify_remote_via_irq(ring_info->irq);
+			}
+
+			RING_FINAL_CHECK_FOR_REQUESTS(ring, more_to_do);
+		}
+	} while (more_to_do);
+
+	return IRQ_HANDLED;
+}
+
+/* ISR for handling responses */
+static irqreturn_t front_ring_isr(int irq, void *info)
+{
+	/* the front ring only cares about responses from the back end */
+	struct hyper_dmabuf_resp *resp;
+	RING_IDX i, rp;
+	int more_to_do, ret;
+
+	struct xen_comm_tx_ring_info *ring_info;
+	struct xen_comm_front_ring *ring;
+
+	ring_info = (struct xen_comm_tx_ring_info *)info;
+	ring = &ring_info->ring_front;
+
+	dev_dbg(hy_drv_priv->dev, "%s\n", __func__);
+
+	do {
+		more_to_do = 0;
+		rp = ring->sring->rsp_prod;
+		for (i = ring->rsp_cons; i != rp; i++) {
+			resp = RING_GET_RESPONSE(ring, i);
+
+			/* update pending request's status with what is
+			 * in the response
+			 */
+
+			dev_dbg(hy_drv_priv->dev,
+				"getting response from importer\n");
+
+			if (req_pending.req_id == resp->resp_id)
+				req_pending.stat = resp->stat;
+
+			if (resp->stat == HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP) {
+				/* parsing response */
+				ret = hyper_dmabuf_msg_parse(ring_info->rdomain,
+					(struct hyper_dmabuf_req *)resp);
+
+				if (ret < 0) {
+					dev_err(hy_drv_priv->dev,
+						"err while parsing resp\n");
+				}
+			} else if (resp->stat == HYPER_DMABUF_REQ_PROCESSED) {
+				/* for debugging dma_buf remote synch */
+				dev_dbg(hy_drv_priv->dev,
+					"original request = 0x%x\n", resp->cmd);
+				dev_dbg(hy_drv_priv->dev,
+					"got HYPER_DMABUF_REQ_PROCESSED\n");
+			} else if (resp->stat == HYPER_DMABUF_REQ_ERROR) {
+				/* for debugging dma_buf remote synch */
+				dev_dbg(hy_drv_priv->dev,
+					"original request = 0x%x\n", resp->cmd);
+				dev_dbg(hy_drv_priv->dev,
+					"got HYPER_DMABUF_REQ_ERROR\n");
+			}
+		}
+
+		ring->rsp_cons = i;
+
+		if (i != ring->req_prod_pvt)
+			RING_FINAL_CHECK_FOR_RESPONSES(ring, more_to_do);
+		else
+			ring->sring->rsp_event = i+1;
+
+	} while (more_to_do);
+
+	return IRQ_HANDLED;
+}
diff --git a/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm.h b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm.h
new file mode 100644
index 0000000..70a2b70
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm.h
@@ -0,0 +1,78 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_XEN_COMM_H__
+#define __HYPER_DMABUF_XEN_COMM_H__
+
+#include "xen/interface/io/ring.h"
+#include "xen/xenbus.h"
+#include "../hyper_dmabuf_msg.h"
+
+extern int xenstored_ready;
+
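+/* Expands to struct xen_comm_sring plus the xen_comm_front_ring and
+ * xen_comm_back_ring types used by the ring info structures below.
+ */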
+DEFINE_RING_TYPES(xen_comm, struct hyper_dmabuf_req, struct hyper_dmabuf_resp);
+
+struct xen_comm_tx_ring_info {
+	struct xen_comm_front_ring ring_front;
+	int rdomain;
+	int gref_ring;
+	int irq;
+	int port;
+	struct mutex lock;
+	struct xenbus_watch watch;
+};
+
+struct xen_comm_rx_ring_info {
+	int sdomain;
+	int irq;
+	int evtchn;
+	struct xen_comm_back_ring ring_back;
+	struct gnttab_unmap_grant_ref unmap_op;
+};
+
+int xen_be_get_domid(void);
+
+int xen_be_init_comm_env(void);
+
+/* exporter needs to generate info for page sharing */
+int xen_be_init_tx_rbuf(int domid);
+
+/* importer needs to know about shared page and port numbers
+ * for ring buffer and event channel
+ */
+int xen_be_init_rx_rbuf(int domid);
+
+/* cleans up exporter ring created for given domain */
+void xen_be_cleanup_tx_rbuf(int domid);
+
+/* cleans up importer ring created for given domain */
+void xen_be_cleanup_rx_rbuf(int domid);
+
+void xen_be_destroy_comm(void);
+
+/* send request to the remote domain */
+int xen_be_send_req(int domid, struct hyper_dmabuf_req *req,
+		    int wait);
+
+#endif /* __HYPER_DMABUF_XEN_COMM_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm_list.c b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm_list.c
new file mode 100644
index 0000000..15023db
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm_list.c
@@ -0,0 +1,158 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/cdev.h>
+#include <linux/hashtable.h>
+#include <xen/grant_table.h>
+#include "../hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_xen_comm_list.h"
+
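+/* Both tables use 7 bits of bucket index (128 buckets) and are keyed
+ * by the remote (tx) or source (rx) domain id, one ring per domain.
+ */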
+DECLARE_HASHTABLE(xen_comm_tx_ring_hash, MAX_ENTRY_TX_RING);
+DECLARE_HASHTABLE(xen_comm_rx_ring_hash, MAX_ENTRY_RX_RING);
+
+void xen_comm_ring_table_init(void)
+{
+	hash_init(xen_comm_rx_ring_hash);
+	hash_init(xen_comm_tx_ring_hash);
+}
+
+int xen_comm_add_tx_ring(struct xen_comm_tx_ring_info *ring_info)
+{
+	struct xen_comm_tx_ring_info_entry *info_entry;
+
+	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
+
+	if (!info_entry)
+		return -ENOMEM;
+
+	info_entry->info = ring_info;
+
+	hash_add(xen_comm_tx_ring_hash, &info_entry->node,
+		info_entry->info->rdomain);
+
+	return 0;
+}
+
+int xen_comm_add_rx_ring(struct xen_comm_rx_ring_info *ring_info)
+{
+	struct xen_comm_rx_ring_info_entry *info_entry;
+
+	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
+
+	if (!info_entry)
+		return -ENOMEM;
+
+	info_entry->info = ring_info;
+
+	hash_add(xen_comm_rx_ring_hash, &info_entry->node,
+		info_entry->info->sdomain);
+
+	return 0;
+}
+
+struct xen_comm_tx_ring_info *xen_comm_find_tx_ring(int domid)
+{
+	struct xen_comm_tx_ring_info_entry *info_entry;
+	int bkt;
+
+	hash_for_each(xen_comm_tx_ring_hash, bkt, info_entry, node)
+		if (info_entry->info->rdomain == domid)
+			return info_entry->info;
+
+	return NULL;
+}
+
+struct xen_comm_rx_ring_info *xen_comm_find_rx_ring(int domid)
+{
+	struct xen_comm_rx_ring_info_entry *info_entry;
+	int bkt;
+
+	hash_for_each(xen_comm_rx_ring_hash, bkt, info_entry, node)
+		if (info_entry->info->sdomain == domid)
+			return info_entry->info;
+
+	return NULL;
+}
+
+int xen_comm_remove_tx_ring(int domid)
+{
+	struct xen_comm_tx_ring_info_entry *info_entry;
+	int bkt;
+
+	hash_for_each(xen_comm_tx_ring_hash, bkt, info_entry, node)
+		if (info_entry->info->rdomain == domid) {
+			hash_del(&info_entry->node);
+			kfree(info_entry);
+			return 0;
+		}
+
+	return -ENOENT;
+}
+
+int xen_comm_remove_rx_ring(int domid)
+{
+	struct xen_comm_rx_ring_info_entry *info_entry;
+	int bkt;
+
+	hash_for_each(xen_comm_rx_ring_hash, bkt, info_entry, node)
+		if (info_entry->info->sdomain == domid) {
+			hash_del(&info_entry->node);
+			kfree(info_entry);
+			return 0;
+		}
+
+	return -ENOENT;
+}
+
+void xen_comm_foreach_tx_ring(void (*func)(int domid))
+{
+	struct xen_comm_tx_ring_info_entry *info_entry;
+	struct hlist_node *tmp;
+	int bkt;
+
+	hash_for_each_safe(xen_comm_tx_ring_hash, bkt, tmp,
+			   info_entry, node) {
+		func(info_entry->info->rdomain);
+	}
+}
+
+void xen_comm_foreach_rx_ring(void (*func)(int domid))
+{
+	struct xen_comm_rx_ring_info_entry *info_entry;
+	struct hlist_node *tmp;
+	int bkt;
+
+	hash_for_each_safe(xen_comm_rx_ring_hash, bkt, tmp,
+			   info_entry, node) {
+		func(info_entry->info->sdomain);
+	}
+}
diff --git a/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm_list.h b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm_list.h
new file mode 100644
index 0000000..8502fe7
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm_list.h
@@ -0,0 +1,67 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_XEN_COMM_LIST_H__
+#define __HYPER_DMABUF_XEN_COMM_LIST_H__
+
+/* number of bits to be used for exported dmabufs hash table */
+#define MAX_ENTRY_TX_RING 7
+/* number of bits to be used for imported dmabufs hash table */
+#define MAX_ENTRY_RX_RING 7
+
+struct xen_comm_tx_ring_info_entry {
+	struct xen_comm_tx_ring_info *info;
+	struct hlist_node node;
+};
+
+struct xen_comm_rx_ring_info_entry {
+	struct xen_comm_rx_ring_info *info;
+	struct hlist_node node;
+};
+
+void xen_comm_ring_table_init(void);
+
+int xen_comm_add_tx_ring(struct xen_comm_tx_ring_info *ring_info);
+
+int xen_comm_add_rx_ring(struct xen_comm_rx_ring_info *ring_info);
+
+int xen_comm_remove_tx_ring(int domid);
+
+int xen_comm_remove_rx_ring(int domid);
+
+struct xen_comm_tx_ring_info *xen_comm_find_tx_ring(int domid);
+
+struct xen_comm_rx_ring_info *xen_comm_find_rx_ring(int domid);
+
+/* iterates over all exporter rings and calls provided
+ * function for each of them
+ */
+void xen_comm_foreach_tx_ring(void (*func)(int domid));
+
+/* iterates over all importer rings and calls provided
+ * function for each of them
+ */
+void xen_comm_foreach_rx_ring(void (*func)(int domid));
+
+#endif /* __HYPER_DMABUF_XEN_COMM_LIST_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_drv.c b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_drv.c
new file mode 100644
index 0000000..14ed3bc
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_drv.c
@@ -0,0 +1,46 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include "../hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_xen_shm.h"
+
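+/* The hypervisor-agnostic core calls through this ops table (e.g. a
+ * bknd_ops->send_req() invocation ends up in xen_be_send_req()), so
+ * everything Xen-specific stays behind struct hyper_dmabuf_bknd_ops.
+ */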
+struct hyper_dmabuf_bknd_ops xen_bknd_ops = {
+	.init = NULL, /* not needed for xen */
+	.cleanup = NULL, /* not needed for xen */
+	.get_vm_id = xen_be_get_domid,
+	.share_pages = xen_be_share_pages,
+	.unshare_pages = xen_be_unshare_pages,
+	.map_shared_pages = (void *)xen_be_map_shared_pages,
+	.unmap_shared_pages = xen_be_unmap_shared_pages,
+	.init_comm_env = xen_be_init_comm_env,
+	.destroy_comm = xen_be_destroy_comm,
+	.init_rx_ch = xen_be_init_rx_rbuf,
+	.init_tx_ch = xen_be_init_tx_rbuf,
+	.send_req = xen_be_send_req,
+};
diff --git a/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_drv.h b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_drv.h
new file mode 100644
index 0000000..a4902b7
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_drv.h
@@ -0,0 +1,53 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_XEN_DRV_H__
+#define __HYPER_DMABUF_XEN_DRV_H__
+#include <xen/interface/grant_table.h>
+
+extern struct hyper_dmabuf_bknd_ops xen_bknd_ops;
+
+/* Main purpose of this structure is to keep
+ * all references created or acquired for sharing
+ * pages with another domain for freeing those later
+ * when unsharing.
+ */
+struct xen_shared_pages_info {
+	/* top level refid */
+	grant_ref_t lvl3_gref;
+
+	/* page of top level addressing, it contains refids of 2nd lvl pages */
+	grant_ref_t *lvl3_table;
+
+	/* table of 2nd level pages, that contains refids to data pages */
+	grant_ref_t *lvl2_table;
+
+	/* unmap ops for mapped pages */
+	struct gnttab_unmap_grant_ref *unmap_ops;
+
+	/* data pages to be unmapped */
+	struct page **data_pages;
+};
+
+#endif /* __HYPER_DMABUF_XEN_DRV_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_shm.c b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_shm.c
new file mode 100644
index 0000000..c6a15f1
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_shm.c
@@ -0,0 +1,525 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/slab.h>
+#include <xen/grant_table.h>
+#include <asm/xen/page.h>
+#include "hyper_dmabuf_xen_drv.h"
+#include "../hyper_dmabuf_drv.h"
+
+#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
+
+/*
+ * Creates 2 level page directory structure for referencing shared pages.
+ * Top level page is a single page that contains up to 1024 refids that
+ * point to 2nd level pages.
+ *
+ * Each 2nd level page contains up to 1024 refids that point to shared
+ * data pages.
+ *
+ * There will always be one top level page and number of 2nd level pages
+ * depends on number of shared data pages.
+ *
+ *      3rd level page                2nd level pages            Data pages
+ * +-------------------------+   ┌>+--------------------+ ┌>+------------+
+ * |2nd level page 0 refid   |---┘ |Data page 0 refid   |-┘ |Data page 0 |
+ * |2nd level page 1 refid   |---┐ |Data page 1 refid   |-┐ +------------+
+ * |           ...           |   | |     ....           | |
+ * |2nd level page 1023 refid|-┐ | |Data page 1023 refid| └>+------------+
+ * +-------------------------+ | | +--------------------+   |Data page 1 |
+ *                             | |                          +------------+
+ *                             | └>+--------------------+
+ *                             |   |Data page 1024 refid|
+ *                             |   |Data page 1025 refid|
+ *                             |   |       ...          |
+ *                             |   |Data page 2047 refid|
+ *                             |   +--------------------+
+ *                             |
+ *                             |        .....
+ *                             └-->+-----------------------+
+ *                                 |Data page 1047552 refid|
+ *                                 |Data page 1047553 refid|
+ *                                 |       ...             |
+ *                                 |Data page 1048575 refid|
+ *                                 +-----------------------+
+ *
+ * Using such a 2-level structure it is possible to reference up to 4GB
+ * of shared data with a single refid pointing to the top level page.
+ *
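+ * Worked example (illustrative, assuming 4 KiB pages and 32-bit
+ * grant_ref_t, i.e. REFS_PER_PAGE == 1024): sharing nents == 2500
+ * data pages takes DIV_ROUND_UP(2500, 1024) == 3 2nd level pages
+ * plus the single top level page, i.e. 4 extra granted pages.
+ *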
+ * Returns refid of top level page.
+ */
+int xen_be_share_pages(struct page **pages, int domid, int nents,
+		       void **refs_info)
+{
+	grant_ref_t lvl3_gref;
+	grant_ref_t *lvl2_table;
+	grant_ref_t *lvl3_table;
+
+	/*
+	 * Calculate the number of pages needed for 2nd level addressing:
+	 */
+	int n_lvl2_grefs = DIV_ROUND_UP(nents, REFS_PER_PAGE);
+
+	struct xen_shared_pages_info *sh_pages_info;
+	int i;
+
+	lvl3_table = (grant_ref_t *)__get_free_pages(GFP_KERNEL, 1);
+	lvl2_table = (grant_ref_t *)__get_free_pages(GFP_KERNEL, n_lvl2_grefs);
+
+	sh_pages_info = kmalloc(sizeof(*sh_pages_info), GFP_KERNEL);
+
+	if (!lvl3_table || !lvl2_table || !sh_pages_info) {
+		/* free_pages() tolerates a zero address */
+		free_pages((unsigned long)lvl2_table, n_lvl2_grefs);
+		free_pages((unsigned long)lvl3_table, 1);
+		kfree(sh_pages_info);
+		return -ENOMEM;
+	}
+
+	*refs_info = (void *)sh_pages_info;
+
+	/* share data pages in readonly mode for security */
+	for (i = 0; i < nents; i++) {
+		lvl2_table[i] = gnttab_grant_foreign_access(domid,
+					pfn_to_mfn(page_to_pfn(pages[i])),
+					true /* read only */);
+		if (lvl2_table[i] == -ENOSPC) {
+			dev_err(hy_drv_priv->dev,
+				"No more space left in grant table\n");
+
+			/* Unshare all already shared pages for lvl2 */
+			while (i--) {
+				gnttab_end_foreign_access_ref(lvl2_table[i], 0);
+				gnttab_free_grant_reference(lvl2_table[i]);
+			}
+			goto err_cleanup;
+		}
+	}
+
+	/* Share 2nd level addressing pages in readonly mode */
+	for (i = 0; i < n_lvl2_grefs; i++) {
+		lvl3_table[i] = gnttab_grant_foreign_access(domid,
+					virt_to_mfn(
+					(unsigned long)lvl2_table+i*PAGE_SIZE),
+					true);
+
+		if (lvl3_table[i] == -ENOSPC) {
+			dev_err(hy_drv_priv->dev,
+				"No more space left in grant table\n");
+
+			/* Unshare all already shared pages for lvl3 */
+			while (i--) {
+				gnttab_end_foreign_access_ref(lvl3_table[i], 1);
+				gnttab_free_grant_reference(lvl3_table[i]);
+			}
+
+			/* Unshare all pages for lvl2 */
+			while (nents--) {
+				gnttab_end_foreign_access_ref(
+							lvl2_table[nents], 0);
+				gnttab_free_grant_reference(lvl2_table[nents]);
+			}
+
+			goto err_cleanup;
+		}
+	}
+
+	/* Share lvl3_table in readonly mode */
+	lvl3_gref = gnttab_grant_foreign_access(domid,
+			virt_to_mfn((unsigned long)lvl3_table),
+			true);
+
+	if (lvl3_gref == -ENOSPC) {
+		dev_err(hy_drv_priv->dev,
+			"No more space left in grant table\n");
+
+		/* Unshare all pages for lvl3 */
+		while (i--) {
+			gnttab_end_foreign_access_ref(lvl3_table[i], 1);
+			gnttab_free_grant_reference(lvl3_table[i]);
+		}
+
+		/* Unshare all pages for lvl2 */
+		while (nents--) {
+			gnttab_end_foreign_access_ref(lvl2_table[nents], 0);
+			gnttab_free_grant_reference(lvl2_table[nents]);
+		}
+
+		goto err_cleanup;
+	}
+
+	/* Store lvl3_table page to be freed later */
+	sh_pages_info->lvl3_table = lvl3_table;
+
+	/* Store lvl2_table pages to be freed later */
+	sh_pages_info->lvl2_table = lvl2_table;
+
+	/* Store exported pages refid to be unshared later */
+	sh_pages_info->lvl3_gref = lvl3_gref;
+
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
+	return lvl3_gref;
+
+err_cleanup:
+	free_pages((unsigned long)lvl2_table, n_lvl2_grefs);
+	free_pages((unsigned long)lvl3_table, 1);
+
+	return -ENOSPC;
+}
+
+int xen_be_unshare_pages(void **refs_info, int nents)
+{
+	struct xen_shared_pages_info *sh_pages_info;
+	int n_lvl2_grefs = DIV_ROUND_UP(nents, REFS_PER_PAGE);
+	int i;
+
+	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
+	sh_pages_info = (struct xen_shared_pages_info *)(*refs_info);
+
+	if (sh_pages_info->lvl3_table == NULL ||
+	    sh_pages_info->lvl2_table ==  NULL ||
+	    sh_pages_info->lvl3_gref == -1) {
+		dev_warn(hy_drv_priv->dev,
+			 "gref table for hyper_dmabuf already cleaned up\n");
+		return 0;
+	}
+
+	/* End foreign access for data pages, but do not free them */
+	for (i = 0; i < nents; i++) {
+		if (gnttab_query_foreign_access(sh_pages_info->lvl2_table[i]))
+			dev_warn(hy_drv_priv->dev, "refid not shared !!\n");
+
+		gnttab_end_foreign_access_ref(sh_pages_info->lvl2_table[i], 0);
+		gnttab_free_grant_reference(sh_pages_info->lvl2_table[i]);
+	}
+
+	/* End foreign access for 2nd level addressing pages */
+	for (i = 0; i < n_lvl2_grefs; i++) {
+		if (gnttab_query_foreign_access(sh_pages_info->lvl3_table[i]))
+			dev_warn(hy_drv_priv->dev, "refid not shared !!\n");
+
+		if (!gnttab_end_foreign_access_ref(
+					sh_pages_info->lvl3_table[i], 1))
+			dev_warn(hy_drv_priv->dev, "refid still in use!!!\n");
+
+		gnttab_free_grant_reference(sh_pages_info->lvl3_table[i]);
+	}
+
+	/* End foreign access for top level addressing page */
+	if (gnttab_query_foreign_access(sh_pages_info->lvl3_gref))
+		dev_warn(hy_drv_priv->dev, "gref not shared !!\n");
+
+	gnttab_end_foreign_access_ref(sh_pages_info->lvl3_gref, 1);
+	gnttab_free_grant_reference(sh_pages_info->lvl3_gref);
+
+	/* freeing all pages used for 2 level addressing */
+	free_pages((unsigned long)sh_pages_info->lvl2_table, n_lvl2_grefs);
+	free_pages((unsigned long)sh_pages_info->lvl3_table, 1);
+
+	sh_pages_info->lvl3_gref = -1;
+	sh_pages_info->lvl2_table = NULL;
+	sh_pages_info->lvl3_table = NULL;
+	kfree(sh_pages_info);
+	sh_pages_info = NULL;
+
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
+	return 0;
+}
+
+/* Maps the provided top level ref id and then returns an array of
+ * pages containing the data refs.
+ */
+struct page **xen_be_map_shared_pages(unsigned long lvl3_gref, int domid,
+				      int nents, void **refs_info)
+{
+	struct page *lvl3_table_page;
+	struct page **lvl2_table_pages;
+	struct page **data_pages;
+	struct xen_shared_pages_info *sh_pages_info;
+
+	grant_ref_t *lvl3_table;
+	grant_ref_t *lvl2_table;
+
+	struct gnttab_map_grant_ref lvl3_map_ops;
+	struct gnttab_unmap_grant_ref lvl3_unmap_ops;
+
+	struct gnttab_map_grant_ref *lvl2_map_ops;
+	struct gnttab_unmap_grant_ref *lvl2_unmap_ops;
+
+	struct gnttab_map_grant_ref *data_map_ops;
+	struct gnttab_unmap_grant_ref *data_unmap_ops;
+
+	/* # of grefs in the last page of lvl2 table */
+	int nents_last = (nents - 1) % REFS_PER_PAGE + 1;
+	int n_lvl2_grefs = DIV_ROUND_UP(nents, REFS_PER_PAGE);
+	int i, j, k;
+
+	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
+
+	sh_pages_info = kmalloc(sizeof(*sh_pages_info), GFP_KERNEL);
+
+	lvl2_table_pages = kcalloc(n_lvl2_grefs, sizeof(struct page *),
+				   GFP_KERNEL);
+
+	data_pages = kcalloc(nents, sizeof(struct page *), GFP_KERNEL);
+
+	lvl2_map_ops = kcalloc(n_lvl2_grefs, sizeof(*lvl2_map_ops),
+			       GFP_KERNEL);
+
+	lvl2_unmap_ops = kcalloc(n_lvl2_grefs, sizeof(*lvl2_unmap_ops),
+				 GFP_KERNEL);
+
+	data_map_ops = kcalloc(nents, sizeof(*data_map_ops), GFP_KERNEL);
+	data_unmap_ops = kcalloc(nents, sizeof(*data_unmap_ops), GFP_KERNEL);
+
+	if (!sh_pages_info || !lvl2_table_pages || !data_pages ||
+	    !lvl2_map_ops || !lvl2_unmap_ops || !data_map_ops ||
+	    !data_unmap_ops) {
+		kfree(sh_pages_info);
+		kfree(lvl2_table_pages);
+		kfree(data_pages);
+		kfree(lvl2_map_ops);
+		kfree(lvl2_unmap_ops);
+		kfree(data_map_ops);
+		kfree(data_unmap_ops);
+		return NULL;
+	}
+
+	*refs_info = (void *)sh_pages_info;
+
+	/* Map top level addressing page */
+	if (gnttab_alloc_pages(1, &lvl3_table_page)) {
+		dev_err(hy_drv_priv->dev, "Cannot allocate pages\n");
+		return NULL;
+	}
+
+	lvl3_table = (grant_ref_t *)pfn_to_kaddr(page_to_pfn(lvl3_table_page));
+
+	gnttab_set_map_op(&lvl3_map_ops, (unsigned long)lvl3_table,
+			  GNTMAP_host_map | GNTMAP_readonly,
+			  (grant_ref_t)lvl3_gref, domid);
+
+	gnttab_set_unmap_op(&lvl3_unmap_ops, (unsigned long)lvl3_table,
+			    GNTMAP_host_map | GNTMAP_readonly, -1);
+
+	if (gnttab_map_refs(&lvl3_map_ops, NULL, &lvl3_table_page, 1)) {
+		dev_err(hy_drv_priv->dev,
+			"HYPERVISOR map grant ref failed");
+		return NULL;
+	}
+
+	if (lvl3_map_ops.status) {
+		dev_err(hy_drv_priv->dev,
+			"HYPERVISOR map grant ref failed status = %d",
+			lvl3_map_ops.status);
+		goto error_cleanup_lvl3;
+	}
+
+	lvl3_unmap_ops.handle = lvl3_map_ops.handle;
+
+	/* Map all second level pages */
+	if (gnttab_alloc_pages(n_lvl2_grefs, lvl2_table_pages)) {
+		dev_err(hy_drv_priv->dev, "Cannot allocate pages\n");
+		goto error_cleanup_lvl3;
+	}
+
+	for (i = 0; i < n_lvl2_grefs; i++) {
+		lvl2_table = (grant_ref_t *)pfn_to_kaddr(
+					page_to_pfn(lvl2_table_pages[i]));
+		gnttab_set_map_op(&lvl2_map_ops[i],
+				  (unsigned long)lvl2_table, GNTMAP_host_map |
+				  GNTMAP_readonly,
+				  lvl3_table[i], domid);
+		gnttab_set_unmap_op(&lvl2_unmap_ops[i],
+				    (unsigned long)lvl2_table, GNTMAP_host_map |
+				    GNTMAP_readonly, -1);
+	}
+
+	/* Unmap top level page, as it won't be needed any longer */
+	if (gnttab_unmap_refs(&lvl3_unmap_ops, NULL,
+			      &lvl3_table_page, 1)) {
+		dev_err(hy_drv_priv->dev,
+			"xen: cannot unmap top level page\n");
+		return NULL;
+	}
+
+	/* Mark that page was unmapped */
+	lvl3_unmap_ops.handle = -1;
+
+	if (gnttab_map_refs(lvl2_map_ops, NULL,
+			    lvl2_table_pages, n_lvl2_grefs)) {
+		dev_err(hy_drv_priv->dev,
+			"HYPERVISOR map grant ref failed");
+		return NULL;
+	}
+
+	/* Check that the lvl2 pages were mapped correctly */
+	for (i = 0; i < n_lvl2_grefs; i++) {
+		if (lvl2_map_ops[i].status) {
+			dev_err(hy_drv_priv->dev,
+				"HYPERVISOR map grant ref failed status = %d",
+				lvl2_map_ops[i].status);
+			goto error_cleanup_lvl2;
+		}
+		lvl2_unmap_ops[i].handle = lvl2_map_ops[i].handle;
+	}
+
+	if (gnttab_alloc_pages(nents, data_pages)) {
+		dev_err(hy_drv_priv->dev,
+			"Cannot allocate pages\n");
+		goto error_cleanup_lvl2;
+	}
+
+	k = 0;
+
+	for (i = 0; i < n_lvl2_grefs - 1; i++) {
+		lvl2_table = pfn_to_kaddr(page_to_pfn(lvl2_table_pages[i]));
+		for (j = 0; j < REFS_PER_PAGE; j++) {
+			gnttab_set_map_op(&data_map_ops[k],
+				(unsigned long)pfn_to_kaddr(
+						page_to_pfn(data_pages[k])),
+				GNTMAP_host_map | GNTMAP_readonly,
+				lvl2_table[j], domid);
+
+			gnttab_set_unmap_op(&data_unmap_ops[k],
+				(unsigned long)pfn_to_kaddr(
+						page_to_pfn(data_pages[k])),
+				GNTMAP_host_map | GNTMAP_readonly, -1);
+			k++;
+		}
+	}
+
+	/* for grefs in the last lvl2 table page */
+	lvl2_table = pfn_to_kaddr(page_to_pfn(
+				lvl2_table_pages[n_lvl2_grefs - 1]));
+
+	for (j = 0; j < nents_last; j++) {
+		gnttab_set_map_op(&data_map_ops[k],
+			(unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
+			GNTMAP_host_map | GNTMAP_readonly,
+			lvl2_table[j], domid);
+
+		gnttab_set_unmap_op(&data_unmap_ops[k],
+			(unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
+			GNTMAP_host_map | GNTMAP_readonly, -1);
+		k++;
+	}
+
+	if (gnttab_map_refs(data_map_ops, NULL,
+			    data_pages, nents)) {
+		dev_err(hy_drv_priv->dev,
+			"HYPERVISOR map grant ref failed\n");
+		return NULL;
+	}
+
+	/* unmapping lvl2 table pages */
+	if (gnttab_unmap_refs(lvl2_unmap_ops,
+			      NULL, lvl2_table_pages,
+			      n_lvl2_grefs)) {
+		dev_err(hy_drv_priv->dev,
+			"Cannot unmap 2nd level refs\n");
+		return NULL;
+	}
+
+	/* Mark that pages were unmapped */
+	for (i = 0; i < n_lvl2_grefs; i++)
+		lvl2_unmap_ops[i].handle = -1;
+
+	for (i = 0; i < nents; i++) {
+		if (data_map_ops[i].status) {
+			dev_err(hy_drv_priv->dev,
+				"HYPERVISOR map grant ref failed status = %d\n",
+				data_map_ops[i].status);
+			goto error_cleanup_data;
+		}
+		data_unmap_ops[i].handle = data_map_ops[i].handle;
+	}
+
+	/* store these references for unmapping in the future */
+	sh_pages_info->unmap_ops = data_unmap_ops;
+	sh_pages_info->data_pages = data_pages;
+
+	gnttab_free_pages(1, &lvl3_table_page);
+	gnttab_free_pages(n_lvl2_grefs, lvl2_table_pages);
+	kfree(lvl2_table_pages);
+	kfree(lvl2_map_ops);
+	kfree(lvl2_unmap_ops);
+	kfree(data_map_ops);
+
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
+	return data_pages;
+
+error_cleanup_data:
+	gnttab_unmap_refs(data_unmap_ops, NULL, data_pages,
+			  nents);
+
+	gnttab_free_pages(nents, data_pages);
+
+error_cleanup_lvl2:
+	if (lvl2_unmap_ops[0].handle != -1)
+		gnttab_unmap_refs(lvl2_unmap_ops, NULL,
+				  lvl2_table_pages, n_lvl2_grefs);
+	gnttab_free_pages(n_lvl2_grefs, lvl2_table_pages);
+
+error_cleanup_lvl3:
+	if (lvl3_unmap_ops.handle != -1)
+		gnttab_unmap_refs(&lvl3_unmap_ops, NULL,
+				  &lvl3_table_page, 1);
+	gnttab_free_pages(1, &lvl3_table_page);
+
+	kfree(lvl2_table_pages);
+	kfree(lvl2_map_ops);
+	kfree(lvl2_unmap_ops);
+	kfree(data_map_ops);
+
+	return NULL;
+}
+
+int xen_be_unmap_shared_pages(void **refs_info, int nents)
+{
+	struct xen_shared_pages_info *sh_pages_info;
+
+	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
+
+	sh_pages_info = (struct xen_shared_pages_info *)(*refs_info);
+
+	if (sh_pages_info->unmap_ops == NULL ||
+	    sh_pages_info->data_pages == NULL) {
+		dev_warn(hy_drv_priv->dev,
+			 "pages already cleaned up or buffer not imported yet\n");
+		return 0;
+	}
+
+	if (gnttab_unmap_refs(sh_pages_info->unmap_ops, NULL,
+			      sh_pages_info->data_pages, nents)) {
+		dev_err(hy_drv_priv->dev, "Cannot unmap data pages\n");
+		return -EFAULT;
+	}
+
+	gnttab_free_pages(nents, sh_pages_info->data_pages);
+
+	kfree(sh_pages_info->data_pages);
+	kfree(sh_pages_info->unmap_ops);
+	sh_pages_info->unmap_ops = NULL;
+	sh_pages_info->data_pages = NULL;
+	kfree(sh_pages_info);
+	sh_pages_info = NULL;
+
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
+	return 0;
+}
diff --git a/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_shm.h b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_shm.h
new file mode 100644
index 0000000..d5236b5
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_shm.h
@@ -0,0 +1,46 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_XEN_SHM_H__
+#define __HYPER_DMABUF_XEN_SHM_H__
+
+/* Collects the reference numbers of all 2nd level shared pages,
+ * builds a table of them in the top level shared page and returns
+ * the reference number of that top level table.
+ */
+int xen_be_share_pages(struct page **pages, int domid, int nents,
+		    void **refs_info);
+
+int xen_be_unshare_pages(void **refs_info, int nents);
+
+/* Maps the provided top level ref id and then returns an array of
+ * pages containing the data refs.
+ */
+struct page **xen_be_map_shared_pages(unsigned long lvl3_gref, int domid,
+				      int nents,
+				      void **refs_info);
+
+int xen_be_unmap_shared_pages(void **refs_info, int nents);
+
+#endif /* __HYPER_DMABUF_XEN_SHM_H__ */
diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index b59b0e3..6aa302d 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -321,6 +321,6 @@ config XEN_SYMS
 config XEN_HAVE_VPMU
        bool
 
-source "drivers/xen/hyper_dmabuf/Kconfig"
+source "drivers/dma-buf/hyper_dmabuf/Kconfig"
 
 endmenu
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index a6e253a..ede7082 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -4,7 +4,7 @@ obj-$(CONFIG_X86)			+= fallback.o
 obj-y	+= grant-table.o features.o balloon.o manage.o preempt.o time.o
 obj-y	+= events/
 obj-y	+= xenbus/
-obj-y	+= hyper_dmabuf/
+obj-y	+= ../dma-buf/hyper_dmabuf/
 
 nostackp := $(call cc-option, -fno-stack-protector)
 CFLAGS_features.o			:= $(nostackp)
diff --git a/drivers/xen/hyper_dmabuf/Kconfig b/drivers/xen/hyper_dmabuf/Kconfig
deleted file mode 100644
index 5efcd44..0000000
--- a/drivers/xen/hyper_dmabuf/Kconfig
+++ /dev/null
@@ -1,42 +0,0 @@
-menu "hyper_dmabuf options"
-
-config HYPER_DMABUF
-	tristate "Enables hyper dmabuf driver"
-	default y
-
-config HYPER_DMABUF_XEN
-	bool "Configure hyper_dmabuf for XEN hypervisor"
-	default y
-	depends on HYPER_DMABUF
-	help
-	  Configuring hyper_dmabuf driver for XEN hypervisor
-
-config HYPER_DMABUF_SYSFS
-	bool "Enable sysfs information about hyper DMA buffers"
-	default y
-	depends on HYPER_DMABUF
-	help
-	  Expose information about imported and exported buffers using
-	  hyper_dmabuf driver
-
-config HYPER_DMABUF_EVENT_GEN
-	bool "Enable event-generation and polling operation"
-	default n
-	depends on HYPER_DMABUF
-	help
-	  With this config enabled, hyper_dmabuf driver on the importer side
-	  generates events and queue those up in the event list whenever a new
-	  shared DMA-BUF is available. Events in the list can be retrieved by
-	  read operation.
-
-config HYPER_DMABUF_XEN_AUTO_RX_CH_ADD
-	bool "Enable automatic rx-ch add with 10 secs interval"
-	default y
-	depends on HYPER_DMABUF && HYPER_DMABUF_XEN
-	help
-	  If enabled, driver reads a node in xenstore every 10 seconds
-	  to check whether there is any tx comm ch configured by another
-	  domain then initialize matched rx comm ch automatically for any
-	  existing tx comm chs.
-
-endmenu
diff --git a/drivers/xen/hyper_dmabuf/Makefile b/drivers/xen/hyper_dmabuf/Makefile
deleted file mode 100644
index a113bfc..0000000
--- a/drivers/xen/hyper_dmabuf/Makefile
+++ /dev/null
@@ -1,49 +0,0 @@
-TARGET_MODULE:=hyper_dmabuf
-
-PLATFORM:=XEN
-
-# If we running by kernel building system
-ifneq ($(KERNELRELEASE),)
-	$(TARGET_MODULE)-objs := hyper_dmabuf_drv.o \
-                                 hyper_dmabuf_ioctl.o \
-                                 hyper_dmabuf_list.o \
-				 hyper_dmabuf_sgl_proc.o \
-				 hyper_dmabuf_ops.o \
-				 hyper_dmabuf_msg.o \
-				 hyper_dmabuf_id.o \
-				 hyper_dmabuf_remote_sync.o \
-				 hyper_dmabuf_query.o \
-
-ifeq ($(CONFIG_HYPER_DMABUF_EVENT_GEN), y)
-	$(TARGET_MODULE)-objs += hyper_dmabuf_event.o
-endif
-
-ifeq ($(CONFIG_HYPER_DMABUF_XEN), y)
-	$(TARGET_MODULE)-objs += xen/hyper_dmabuf_xen_comm.o \
-				 xen/hyper_dmabuf_xen_comm_list.o \
-				 xen/hyper_dmabuf_xen_shm.o \
-				 xen/hyper_dmabuf_xen_drv.o
-endif
-
-obj-$(CONFIG_HYPER_DMABUF) := $(TARGET_MODULE).o
-
-# If we are running without kernel build system
-else
-BUILDSYSTEM_DIR?=../../../
-PWD:=$(shell pwd)
-
-all :
-# run kernel build system to make module
-$(MAKE) -C $(BUILDSYSTEM_DIR) M=$(PWD) modules
-
-clean:
-# run kernel build system to cleanup in current directory
-$(MAKE) -C $(BUILDSYSTEM_DIR) M=$(PWD) clean
-
-load:
-	insmod ./$(TARGET_MODULE).ko
-
-unload:
-	rmmod ./$(TARGET_MODULE).ko
-
-endif
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
deleted file mode 100644
index eead4c0..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ /dev/null
@@ -1,408 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/init.h>
-#include <linux/module.h>
-#include <linux/miscdevice.h>
-#include <linux/workqueue.h>
-#include <linux/slab.h>
-#include <linux/device.h>
-#include <linux/uaccess.h>
-#include <linux/poll.h>
-#include <linux/dma-buf.h>
-#include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_ioctl.h"
-#include "hyper_dmabuf_list.h"
-#include "hyper_dmabuf_id.h"
-#include "hyper_dmabuf_event.h"
-
-#ifdef CONFIG_HYPER_DMABUF_XEN
-#include "xen/hyper_dmabuf_xen_drv.h"
-#endif
-
-MODULE_LICENSE("GPL and additional rights");
-MODULE_AUTHOR("Intel Corporation");
-
-struct hyper_dmabuf_private *hy_drv_priv;
-
-static void force_free(struct exported_sgt_info *exported,
-		       void *attr)
-{
-	struct ioctl_hyper_dmabuf_unexport unexport_attr;
-	struct file *filp = (struct file *)attr;
-
-	if (!filp || !exported)
-		return;
-
-	if (exported->filp == filp) {
-		dev_dbg(hy_drv_priv->dev,
-			"Forcefully releasing buffer {id:%d key:%d %d %d}\n",
-			 exported->hid.id, exported->hid.rng_key[0],
-			 exported->hid.rng_key[1], exported->hid.rng_key[2]);
-
-		unexport_attr.hid = exported->hid;
-		unexport_attr.delay_ms = 0;
-
-		hyper_dmabuf_unexport_ioctl(filp, &unexport_attr);
-	}
-}
-
-static int hyper_dmabuf_open(struct inode *inode, struct file *filp)
-{
-	int ret = 0;
-
-	/* Do not allow exclusive open */
-	if (filp->f_flags & O_EXCL)
-		return -EBUSY;
-
-	return ret;
-}
-
-static int hyper_dmabuf_release(struct inode *inode, struct file *filp)
-{
-	hyper_dmabuf_foreach_exported(force_free, filp);
-
-	return 0;
-}
-
-#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
-
-static unsigned int hyper_dmabuf_event_poll(struct file *filp,
-				     struct poll_table_struct *wait)
-{
-	poll_wait(filp, &hy_drv_priv->event_wait, wait);
-
-	if (!list_empty(&hy_drv_priv->event_list))
-		return POLLIN | POLLRDNORM;
-
-	return 0;
-}
-
-static ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
-		size_t count, loff_t *offset)
-{
-	int ret;
-
-	/* only root can read events */
-	if (!capable(CAP_DAC_OVERRIDE)) {
-		dev_err(hy_drv_priv->dev,
-			"Only root can read events\n");
-		return -EPERM;
-	}
-
-	/* make sure user buffer can be written */
-	if (!access_ok(VERIFY_WRITE, buffer, count)) {
-		dev_err(hy_drv_priv->dev,
-			"User buffer can't be written.\n");
-		return -EINVAL;
-	}
-
-	ret = mutex_lock_interruptible(&hy_drv_priv->event_read_lock);
-	if (ret)
-		return ret;
-
-	while (1) {
-		struct hyper_dmabuf_event *e = NULL;
-
-		spin_lock_irq(&hy_drv_priv->event_lock);
-		if (!list_empty(&hy_drv_priv->event_list)) {
-			e = list_first_entry(&hy_drv_priv->event_list,
-					struct hyper_dmabuf_event, link);
-			list_del(&e->link);
-		}
-		spin_unlock_irq(&hy_drv_priv->event_lock);
-
-		if (!e) {
-			if (ret)
-				break;
-
-			if (filp->f_flags & O_NONBLOCK) {
-				ret = -EAGAIN;
-				break;
-			}
-
-			mutex_unlock(&hy_drv_priv->event_read_lock);
-			ret = wait_event_interruptible(hy_drv_priv->event_wait,
-				  !list_empty(&hy_drv_priv->event_list));
-
-			if (ret == 0)
-				ret = mutex_lock_interruptible(
-					&hy_drv_priv->event_read_lock);
-
-			if (ret)
-				return ret;
-		} else {
-			unsigned int length = (sizeof(e->event_data.hdr) +
-						      e->event_data.hdr.size);
-
-			if (length > count - ret) {
-put_back_event:
-				spin_lock_irq(&hy_drv_priv->event_lock);
-				list_add(&e->link, &hy_drv_priv->event_list);
-				spin_unlock_irq(&hy_drv_priv->event_lock);
-				break;
-			}
-
-			if (copy_to_user(buffer + ret, &e->event_data.hdr,
-					 sizeof(e->event_data.hdr))) {
-				if (ret == 0)
-					ret = -EFAULT;
-
-				goto put_back_event;
-			}
-
-			ret += sizeof(e->event_data.hdr);
-
-			if (copy_to_user(buffer + ret, e->event_data.data,
-					 e->event_data.hdr.size)) {
-				/* error while copying void *data */
-
-				struct hyper_dmabuf_event_hdr dummy_hdr = {0};
-
-				ret -= sizeof(e->event_data.hdr);
-
-				/* nullifying hdr of the event in user buffer */
-				if (copy_to_user(buffer + ret, &dummy_hdr,
-						 sizeof(dummy_hdr))) {
-					dev_err(hy_drv_priv->dev,
-						"failed to nullify invalid hdr already in userspace\n");
-				}
-
-				ret = -EFAULT;
-
-				goto put_back_event;
-			}
-
-			ret += e->event_data.hdr.size;
-			hy_drv_priv->pending--;
-			kfree(e);
-		}
-	}
-
-	mutex_unlock(&hy_drv_priv->event_read_lock);
-
-	return ret;
-}
-
-#endif
-
-static const struct file_operations hyper_dmabuf_driver_fops = {
-	.owner = THIS_MODULE,
-	.open = hyper_dmabuf_open,
-	.release = hyper_dmabuf_release,
-
-/* poll and read interfaces are needed only for event-polling */
-#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
-	.read = hyper_dmabuf_event_read,
-	.poll = hyper_dmabuf_event_poll,
-#endif
-
-	.unlocked_ioctl = hyper_dmabuf_ioctl,
-};
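-
-/* Illustrative sketch, not part of the original driver: with
- * CONFIG_HYPER_DMABUF_EVENT_GEN enabled, a root process could wait for
- * import events on the device node (assuming the miscdevice shows up
- * as /dev/hyper_dmabuf):
- *
- *	int fd = open("/dev/hyper_dmabuf", O_RDONLY);
- *	struct pollfd pfd = { .fd = fd, .events = POLLIN };
- *
- *	poll(&pfd, 1, -1);
- *	read(fd, buf, sizeof(buf));
- *
- * Each read returns a hyper_dmabuf_event_hdr (event type, hid and
- * payload size) followed by the exporter's private data.
- */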
-
-static struct miscdevice hyper_dmabuf_miscdev = {
-	.minor = MISC_DYNAMIC_MINOR,
-	.name = "hyper_dmabuf",
-	.fops = &hyper_dmabuf_driver_fops,
-};
-
-static int register_device(void)
-{
-	int ret = 0;
-
-	ret = misc_register(&hyper_dmabuf_miscdev);
-
-	if (ret) {
-		printk(KERN_ERR "hyper_dmabuf: driver can't be registered\n");
-		return ret;
-	}
-
-	hy_drv_priv->dev = hyper_dmabuf_miscdev.this_device;
-
-	/* TODO: Check if there is a different way to initialize dma mask */
-	dma_coerce_mask_and_coherent(hy_drv_priv->dev, DMA_BIT_MASK(64));
-
-	return ret;
-}
-
-static void unregister_device(void)
-{
-	dev_info(hy_drv_priv->dev,
-		"hyper_dmabuf: unregister_device() is called\n");
-
-	misc_deregister(&hyper_dmabuf_miscdev);
-}
-
-static int __init hyper_dmabuf_drv_init(void)
-{
-	int ret = 0;
-
-	printk(KERN_NOTICE "hyper_dmabuf: initialization started\n");
-
-	hy_drv_priv = kcalloc(1, sizeof(struct hyper_dmabuf_private),
-			      GFP_KERNEL);
-
-	if (!hy_drv_priv)
-		return -ENOMEM;
-
-	ret = register_device();
-	if (ret < 0) {
-		kfree(hy_drv_priv);
-		return ret;
-	}
-
-/* currently only supports XEN hypervisor */
-#ifdef CONFIG_HYPER_DMABUF_XEN
-	hy_drv_priv->bknd_ops = &xen_bknd_ops;
-#else
-	hy_drv_priv->bknd_ops = NULL;
-	printk(KERN_ERR "hyper_dmabuf drv currently supports XEN only.\n");
-#endif
-
-	if (hy_drv_priv->bknd_ops == NULL) {
-		printk(KERN_ERR "Hyper_dmabuf: no backend found\n");
-		unregister_device();
-		kfree(hy_drv_priv);
-		return -ENODEV;
-	}
-
-	mutex_init(&hy_drv_priv->lock);
-
-	mutex_lock(&hy_drv_priv->lock);
-
-	hy_drv_priv->initialized = false;
-
-	dev_info(hy_drv_priv->dev,
-		 "initializing database for imported/exported dmabufs\n");
-
-	hy_drv_priv->work_queue = create_workqueue("hyper_dmabuf_wqueue");
-
-	ret = hyper_dmabuf_table_init();
-	if (ret < 0) {
-		dev_err(hy_drv_priv->dev,
-			"fail to init table for exported/imported entries\n");
-		mutex_unlock(&hy_drv_priv->lock);
-		kfree(hy_drv_priv);
-		return ret;
-	}
-
-#ifdef CONFIG_HYPER_DMABUF_SYSFS
-	ret = hyper_dmabuf_register_sysfs(hy_drv_priv->dev);
-	if (ret < 0) {
-		dev_err(hy_drv_priv->dev,
-			"failed to initialize sysfs\n");
-		mutex_unlock(&hy_drv_priv->lock);
-		kfree(hy_drv_priv);
-		return ret;
-	}
-#endif
-
-#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
-	mutex_init(&hy_drv_priv->event_read_lock);
-	spin_lock_init(&hy_drv_priv->event_lock);
-
-	/* Initialize event queue */
-	INIT_LIST_HEAD(&hy_drv_priv->event_list);
-	init_waitqueue_head(&hy_drv_priv->event_wait);
-
-	/* resetting number of pending events */
-	hy_drv_priv->pending = 0;
-#endif
-
-	if (hy_drv_priv->bknd_ops->init) {
-		ret = hy_drv_priv->bknd_ops->init();
-
-		if (ret < 0) {
-			dev_dbg(hy_drv_priv->dev,
-				"failed to initialize backend.\n");
-			mutex_unlock(&hy_drv_priv->lock);
-			kfree(hy_drv_priv);
-			return ret;
-		}
-	}
-
-	hy_drv_priv->domid = hy_drv_priv->bknd_ops->get_vm_id();
-
-	ret = hy_drv_priv->bknd_ops->init_comm_env();
-	if (ret < 0) {
-		dev_dbg(hy_drv_priv->dev,
-			"failed to initialize comm-env.\n");
-	} else {
-		hy_drv_priv->initialized = true;
-	}
-
-	mutex_unlock(&hy_drv_priv->lock);
-
-	dev_info(hy_drv_priv->dev,
-		"Finishing up initialization of hyper_dmabuf drv\n");
-
-	/* interrupt for comm should be registered here: */
-	return ret;
-}
-
-static void hyper_dmabuf_drv_exit(void)
-{
-#ifdef CONFIG_HYPER_DMABUF_SYSFS
-	hyper_dmabuf_unregister_sysfs(hy_drv_priv->dev);
-#endif
-
-	mutex_lock(&hy_drv_priv->lock);
-
-	/* hash tables for export/import entries and ring_infos */
-	hyper_dmabuf_table_destroy();
-
-	hy_drv_priv->bknd_ops->destroy_comm();
-
-	if (hy_drv_priv->bknd_ops->cleanup)
-		hy_drv_priv->bknd_ops->cleanup();
-
-	/* destroy workqueue */
-	if (hy_drv_priv->work_queue)
-		destroy_workqueue(hy_drv_priv->work_queue);
-
-	/* destroy id_queue */
-	if (hy_drv_priv->id_queue)
-		hyper_dmabuf_free_hid_list();
-
-#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
-	/* clean up event queue */
-	hyper_dmabuf_events_release();
-#endif
-
-	mutex_unlock(&hy_drv_priv->lock);
-
-	dev_info(hy_drv_priv->dev,
-		 "hyper_dmabuf driver: Exiting\n");
-
-	unregister_device();
-
-	kfree(hy_drv_priv);
-}
-
-module_init(hyper_dmabuf_drv_init);
-module_exit(hyper_dmabuf_drv_exit);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
deleted file mode 100644
index c2bb3ce..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
+++ /dev/null
@@ -1,118 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
-#define __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
-
-#include <linux/device.h>
-#include <xen/hyper_dmabuf.h>
-
-struct hyper_dmabuf_req;
-
-struct hyper_dmabuf_event {
-	struct hyper_dmabuf_event_data event_data;
-	struct list_head link;
-};
-
-struct hyper_dmabuf_private {
-	struct device *dev;
-
-	/* VM(domain) id of current VM instance */
-	int domid;
-
-	/* workqueue dedicated to hyper_dmabuf driver */
-	struct workqueue_struct *work_queue;
-
-	/* list of reusable hyper_dmabuf_ids */
-	struct list_reusable_id *id_queue;
-
-	/* backend ops - hypervisor specific */
-	struct hyper_dmabuf_bknd_ops *bknd_ops;
-
-	/* device global lock */
-	/* TODO: might need a lock per resource (e.g. EXPORT LIST) */
-	struct mutex lock;
-
-	/* flag that shows whether backend is initialized */
-	bool initialized;
-
-	wait_queue_head_t event_wait;
-	struct list_head event_list;
-
-	spinlock_t event_lock;
-	struct mutex event_read_lock;
-
-	/* # of pending events */
-	int pending;
-};
-
-struct list_reusable_id {
-	hyper_dmabuf_id_t hid;
-	struct list_head list;
-};
-
-struct hyper_dmabuf_bknd_ops {
-	/* backend initialization routine (optional) */
-	int (*init)(void);
-
-	/* backend cleanup routine (optional) */
-	int (*cleanup)(void);
-
-	/* retrieving the id of the current virtual machine */
-	int (*get_vm_id)(void);
-
-	/* get pages shared via hypervisor-specific method */
-	int (*share_pages)(struct page **, int, int, void **);
-
-	/* make shared pages unshared via hypervisor-specific method */
-	int (*unshare_pages)(void **, int);
-
-	/* map remotely shared pages on importer's side via
-	 * hypervisor-specific method
-	 */
-	struct page ** (*map_shared_pages)(unsigned long, int, int, void **);
-
-	/* unmap and free shared pages on importer's side via
-	 * hypervisor-specific method
-	 */
-	int (*unmap_shared_pages)(void **, int);
-
-	/* initialize communication environment */
-	int (*init_comm_env)(void);
-
-	void (*destroy_comm)(void);
-
-	/* upstream ch setup (receiving and responding) */
-	int (*init_rx_ch)(int);
-
-	/* downstream ch setup (transmitting and parsing responses) */
-	int (*init_tx_ch)(int);
-
-	int (*send_req)(int, struct hyper_dmabuf_req *, int);
-};
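-
-/* A minimal sketch of a hypothetical backend (the my_* names are
- * illustrative; only xen_bknd_ops exists in this patch set). The .init
- * and .cleanup hooks are optional, everything else is expected by the
- * core:
- *
- *	static struct hyper_dmabuf_bknd_ops my_bknd_ops = {
- *		.get_vm_id          = my_get_vm_id,
- *		.share_pages        = my_share_pages,
- *		.unshare_pages      = my_unshare_pages,
- *		.map_shared_pages   = my_map_shared_pages,
- *		.unmap_shared_pages = my_unmap_shared_pages,
- *		.init_comm_env      = my_init_comm_env,
- *		.destroy_comm       = my_destroy_comm,
- *		.init_rx_ch         = my_init_rx_ch,
- *		.init_tx_ch         = my_init_tx_ch,
- *		.send_req           = my_send_req,
- *	};
- */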
-
-/* exporting global drv private info */
-extern struct hyper_dmabuf_private *hy_drv_priv;
-
-#endif /* __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
deleted file mode 100644
index 392ea99..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
+++ /dev/null
@@ -1,122 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/slab.h>
-#include <linux/module.h>
-#include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_struct.h"
-#include "hyper_dmabuf_list.h"
-#include "hyper_dmabuf_event.h"
-
-static void send_event(struct hyper_dmabuf_event *e)
-{
-	struct hyper_dmabuf_event *oldest;
-	unsigned long irqflags;
-
-	spin_lock_irqsave(&hy_drv_priv->event_lock, irqflags);
-
-	/* check the current number of events and, if it hits the max
-	 * allowed, remove the oldest event in the list
-	 */
-	if (hy_drv_priv->pending > MAX_DEPTH_EVENT_QUEUE - 1) {
-		oldest = list_first_entry(&hy_drv_priv->event_list,
-				struct hyper_dmabuf_event, link);
-		list_del(&oldest->link);
-		hy_drv_priv->pending--;
-		kfree(oldest);
-	}
-
-	list_add_tail(&e->link,
-		      &hy_drv_priv->event_list);
-
-	hy_drv_priv->pending++;
-
-	wake_up_interruptible(&hy_drv_priv->event_wait);
-
-	spin_unlock_irqrestore(&hy_drv_priv->event_lock, irqflags);
-}
-
-void hyper_dmabuf_events_release(void)
-{
-	struct hyper_dmabuf_event *e, *et;
-	unsigned long irqflags;
-
-	spin_lock_irqsave(&hy_drv_priv->event_lock, irqflags);
-
-	list_for_each_entry_safe(e, et, &hy_drv_priv->event_list,
-				 link) {
-		list_del(&e->link);
-		kfree(e);
-		hy_drv_priv->pending--;
-	}
-
-	if (hy_drv_priv->pending) {
-		dev_err(hy_drv_priv->dev,
-			"possible leak on event_list\n");
-	}
-
-	spin_unlock_irqrestore(&hy_drv_priv->event_lock, irqflags);
-}
-
-int hyper_dmabuf_import_event(hyper_dmabuf_id_t hid)
-{
-	struct hyper_dmabuf_event *e;
-	struct imported_sgt_info *imported;
-
-	imported = hyper_dmabuf_find_imported(hid);
-
-	if (!imported) {
-		dev_err(hy_drv_priv->dev,
-			"can't find imported_sgt_info in the list\n");
-		return -EINVAL;
-	}
-
-	e = kzalloc(sizeof(*e), GFP_KERNEL);
-
-	if (!e)
-		return -ENOMEM;
-
-	e->event_data.hdr.event_type = HYPER_DMABUF_NEW_IMPORT;
-	e->event_data.hdr.hid = hid;
-	e->event_data.data = (void *)imported->priv;
-	e->event_data.hdr.size = imported->sz_priv;
-
-	send_event(e);
-
-	dev_dbg(hy_drv_priv->dev,
-		"event number = %d :", hy_drv_priv->pending);
-
-	dev_dbg(hy_drv_priv->dev,
-		"generating events for {%d, %d, %d, %d}\n",
-		imported->hid.id, imported->hid.rng_key[0],
-		imported->hid.rng_key[1], imported->hid.rng_key[2]);
-
-	return 0;
-}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.h
deleted file mode 100644
index 50db04f..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.h
+++ /dev/null
@@ -1,38 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_EVENT_H__
-#define __HYPER_DMABUF_EVENT_H__
-
-#define MAX_DEPTH_EVENT_QUEUE 32
-
-enum hyper_dmabuf_event_type {
-	HYPER_DMABUF_NEW_IMPORT = 0x10000,
-};
-
-void hyper_dmabuf_events_release(void);
-
-int hyper_dmabuf_import_event(hyper_dmabuf_id_t hid);
-
-#endif /* __HYPER_DMABUF_EVENT_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
deleted file mode 100644
index e67b84a..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
+++ /dev/null
@@ -1,133 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/list.h>
-#include <linux/slab.h>
-#include <linux/random.h>
-#include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_id.h"
-
-void hyper_dmabuf_store_hid(hyper_dmabuf_id_t hid)
-{
-	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
-	struct list_reusable_id *new_reusable;
-
-	new_reusable = kmalloc(sizeof(*new_reusable), GFP_KERNEL);
-
-	if (!new_reusable)
-		return;
-
-	new_reusable->hid = hid;
-
-	list_add(&new_reusable->list, &reusable_head->list);
-}
-
-static hyper_dmabuf_id_t get_reusable_hid(void)
-{
-	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
-	hyper_dmabuf_id_t hid = {-1, {0, 0, 0} };
-
-	/* check if there is a reusable id */
-	if (!list_empty(&reusable_head->list)) {
-		reusable_head = list_first_entry(&reusable_head->list,
-						 struct list_reusable_id,
-						 list);
-
-		list_del(&reusable_head->list);
-		hid = reusable_head->hid;
-		kfree(reusable_head);
-	}
-
-	return hid;
-}
-
-void hyper_dmabuf_free_hid_list(void)
-{
-	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
-	struct list_reusable_id *temp_head;
-
-	if (reusable_head) {
-		/* freeing memory for all reusable ids in the list */
-		while (!list_empty(&reusable_head->list)) {
-			temp_head = list_first_entry(&reusable_head->list,
-						     struct list_reusable_id,
-						     list);
-			list_del(&temp_head->list);
-			kfree(temp_head);
-		}
-
-		/* freeing head */
-		kfree(reusable_head);
-	}
-}
-
-hyper_dmabuf_id_t hyper_dmabuf_get_hid(void)
-{
-	static int count;
-	hyper_dmabuf_id_t hid;
-	struct list_reusable_id *reusable_head;
-
-	/* first call to hyper_dmabuf_get_id */
-	if (count == 0) {
-		reusable_head = kmalloc(sizeof(*reusable_head), GFP_KERNEL);
-
-		if (!reusable_head)
-			return (hyper_dmabuf_id_t){-1, {0, 0, 0} };
-
-		/* list head carries an invalid hid */
-		reusable_head->hid.id = -1;
-		INIT_LIST_HEAD(&reusable_head->list);
-		hy_drv_priv->id_queue = reusable_head;
-	}
-
-	hid = get_reusable_hid();
-
-	/* creating a new hid only if nothing is in the reusable id
-	 * queue and count is less than the maximum allowed
-	 */
-	if (hid.id == -1 && count < HYPER_DMABUF_ID_MAX)
-		hid.id = HYPER_DMABUF_ID_CREATE(hy_drv_priv->domid, count++);
-
-	/* random data embedded in the id for security */
-	get_random_bytes(&hid.rng_key[0], 12);
-
-	return hid;
-}
-
-bool hyper_dmabuf_hid_keycomp(hyper_dmabuf_id_t hid1, hyper_dmabuf_id_t hid2)
-{
-	int i;
-
-	/* compare keys */
-	for (i = 0; i < 3; i++) {
-		if (hid1.rng_key[i] != hid2.rng_key[i])
-			return false;
-	}
-
-	return true;
-}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
deleted file mode 100644
index ed690f3..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
+++ /dev/null
@@ -1,51 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_ID_H__
-#define __HYPER_DMABUF_ID_H__
-
-#define HYPER_DMABUF_ID_CREATE(domid, cnt) \
-	((((domid) & 0xFF) << 24) | ((cnt) & 0xFFFFFF))
-
-#define HYPER_DMABUF_DOM_ID(hid) \
-	(((hid.id) >> 24) & 0xFF)
-
-/* currently maximum number of buffers shared
- * at any given moment is limited to 1000
- */
-#define HYPER_DMABUF_ID_MAX 1000
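-
-/* Worked example: HYPER_DMABUF_ID_CREATE(3, 5) yields 0x03000005.
- * The exporter's domain id occupies the top byte and the 24-bit
- * counter the rest, so HYPER_DMABUF_DOM_ID() can always recover the
- * source VM from a hid.
- */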
-
-/* adding freed hid to the reusable list */
-void hyper_dmabuf_store_hid(hyper_dmabuf_id_t hid);
-
-/* freeing the reusable list */
-void hyper_dmabuf_free_hid_list(void);
-
-/* getting a hid available to use. */
-hyper_dmabuf_id_t hyper_dmabuf_get_hid(void);
-
-/* comparing the random keys of two hids */
-bool hyper_dmabuf_hid_keycomp(hyper_dmabuf_id_t hid1, hyper_dmabuf_id_t hid2);
-
-#endif /* __HYPER_DMABUF_ID_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
deleted file mode 100644
index ca6edf2..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ /dev/null
@@ -1,786 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/slab.h>
-#include <linux/uaccess.h>
-#include <linux/dma-buf.h>
-#include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_id.h"
-#include "hyper_dmabuf_struct.h"
-#include "hyper_dmabuf_ioctl.h"
-#include "hyper_dmabuf_list.h"
-#include "hyper_dmabuf_msg.h"
-#include "hyper_dmabuf_sgl_proc.h"
-#include "hyper_dmabuf_ops.h"
-#include "hyper_dmabuf_query.h"
-
-static int hyper_dmabuf_tx_ch_setup_ioctl(struct file *filp, void *data)
-{
-	struct ioctl_hyper_dmabuf_tx_ch_setup *tx_ch_attr;
-	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
-	int ret = 0;
-
-	if (!data) {
-		dev_err(hy_drv_priv->dev, "user data is NULL\n");
-		return -EINVAL;
-	}
-	tx_ch_attr = (struct ioctl_hyper_dmabuf_tx_ch_setup *)data;
-
-	ret = bknd_ops->init_tx_ch(tx_ch_attr->remote_domain);
-
-	return ret;
-}
-
-static int hyper_dmabuf_rx_ch_setup_ioctl(struct file *filp, void *data)
-{
-	struct ioctl_hyper_dmabuf_rx_ch_setup *rx_ch_attr;
-	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
-	int ret = 0;
-
-	if (!data) {
-		dev_err(hy_drv_priv->dev, "user data is NULL\n");
-		return -EINVAL;
-	}
-
-	rx_ch_attr = (struct ioctl_hyper_dmabuf_rx_ch_setup *)data;
-
-	ret = bknd_ops->init_rx_ch(rx_ch_attr->source_domain);
-
-	return ret;
-}
-
-static int send_export_msg(struct exported_sgt_info *exported,
-			   struct pages_info *pg_info)
-{
-	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
-	struct hyper_dmabuf_req *req;
-	int op[MAX_NUMBER_OF_OPERANDS] = {0};
-	int ret, i;
-
-	/* now create request for importer via ring */
-	op[0] = exported->hid.id;
-
-	for (i = 0; i < 3; i++)
-		op[i+1] = exported->hid.rng_key[i];
-
-	if (pg_info) {
-		op[4] = pg_info->nents;
-		op[5] = pg_info->frst_ofst;
-		op[6] = pg_info->last_len;
-		op[7] = bknd_ops->share_pages(pg_info->pgs, exported->rdomid,
-					 pg_info->nents, &exported->refs_info);
-		if (op[7] < 0) {
-			dev_err(hy_drv_priv->dev, "pages sharing failed\n");
-			return op[7];
-		}
-	}
-
-	op[8] = exported->sz_priv;
-
-	/* driver/application specific private info */
-	memcpy(&op[9], exported->priv, op[8]);
-
-	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
-
-	if (!req)
-		return -ENOMEM;
-
-	/* composing a message to the importer */
-	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT, &op[0]);
-
-	ret = bknd_ops->send_req(exported->rdomid, req, true);
-
-	kfree(req);
-
-	return ret;
-}
-
-/* Fast-path export routine used when the same buffer was already
- * exported. It skips the normal export process and just updates the
- * private data on both VMs (importer and exporter).
- *
- * Returns '1' if re-export is needed, '0' on success, or a negative
- * errno if something goes wrong.
- */
-static int fastpath_export(hyper_dmabuf_id_t hid, int sz_priv, char *priv)
-{
-	int reexport = 1;
-	int ret = 0;
-	struct exported_sgt_info *exported;
-
-	exported = hyper_dmabuf_find_exported(hid);
-
-	if (!exported)
-		return reexport;
-
-	if (!exported->valid)
-		return reexport;
-
-	/*
-	 * Check if unexport is already scheduled for this buffer and,
-	 * if so, try to cancel it. If cancelling fails, the buffer
-	 * needs to be re-exported once again.
-	 */
-	if (exported->unexport_sched) {
-		if (!cancel_delayed_work_sync(&exported->unexport))
-			return reexport;
-
-		exported->unexport_sched = false;
-	}
-
-	/* if the size of the private data has changed,
-	 * reallocate the private data buffer with the new size
-	 */
-	if (sz_priv != exported->sz_priv) {
-		kfree(exported->priv);
-
-		/* truncating size */
-		if (sz_priv > MAX_SIZE_PRIV_DATA)
-			exported->sz_priv = MAX_SIZE_PRIV_DATA;
-		else
-			exported->sz_priv = sz_priv;
-
-		exported->priv = kcalloc(1, exported->sz_priv,
-					 GFP_KERNEL);
-
-		if (!exported->priv) {
-			hyper_dmabuf_remove_exported(exported->hid);
-			hyper_dmabuf_cleanup_sgt_info(exported, true);
-			kfree(exported);
-			return -ENOMEM;
-		}
-	}
-
-	/* update private data in sgt_info with new ones */
-	ret = copy_from_user(exported->priv, priv, exported->sz_priv);
-	if (ret) {
-		dev_err(hy_drv_priv->dev,
-			"Failed to load a new private data\n");
-		ret = -EINVAL;
-	} else {
-		/* send an export msg for updating priv in importer */
-		ret = send_export_msg(exported, NULL);
-
-		if (ret < 0) {
-			dev_err(hy_drv_priv->dev,
-				"Failed to send a new private data\n");
-			ret = -EBUSY;
-		}
-	}
-
-	return ret;
-}
-
-static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
-{
-	struct ioctl_hyper_dmabuf_export_remote *export_remote_attr =
-			(struct ioctl_hyper_dmabuf_export_remote *)data;
-	struct dma_buf *dma_buf;
-	struct dma_buf_attachment *attachment;
-	struct sg_table *sgt;
-	struct pages_info *pg_info;
-	struct exported_sgt_info *exported;
-	hyper_dmabuf_id_t hid;
-	int ret = 0;
-
-	if (hy_drv_priv->domid == export_remote_attr->remote_domain) {
-		dev_err(hy_drv_priv->dev,
-			"exporting to the same VM is not permitted\n");
-		return -EINVAL;
-	}
-
-	dma_buf = dma_buf_get(export_remote_attr->dmabuf_fd);
-
-	if (IS_ERR(dma_buf)) {
-		dev_err(hy_drv_priv->dev, "Cannot get dma buf\n");
-		return PTR_ERR(dma_buf);
-	}
-
-	/* check if this specific buffer was already exported to the
-	 * same domain; if so, and its sgt_info is valid, reuse the
-	 * hyper_dmabuf_id of the pre-exported sgt_info
-	 */
-	hid = hyper_dmabuf_find_hid_exported(dma_buf,
-					     export_remote_attr->remote_domain);
-
-	if (hid.id != -1) {
-		ret = fastpath_export(hid, export_remote_attr->sz_priv,
-				      export_remote_attr->priv);
-
-		/* return if fastpath_export succeeds or
-		 * gets some fatal error
-		 */
-		if (ret <= 0) {
-			dma_buf_put(dma_buf);
-			export_remote_attr->hid = hid;
-			return ret;
-		}
-	}
-
-	attachment = dma_buf_attach(dma_buf, hy_drv_priv->dev);
-	if (IS_ERR(attachment)) {
-		dev_err(hy_drv_priv->dev, "cannot get attachment\n");
-		ret = PTR_ERR(attachment);
-		goto fail_attach;
-	}
-
-	sgt = dma_buf_map_attachment(attachment, DMA_BIDIRECTIONAL);
-
-	if (IS_ERR(sgt)) {
-		dev_err(hy_drv_priv->dev, "cannot map attachment\n");
-		ret = PTR_ERR(sgt);
-		goto fail_map_attachment;
-	}
-
-	exported = kcalloc(1, sizeof(*exported), GFP_KERNEL);
-
-	if (!exported) {
-		ret = -ENOMEM;
-		goto fail_sgt_info_creation;
-	}
-
-	/* possible truncation */
-	if (export_remote_attr->sz_priv > MAX_SIZE_PRIV_DATA)
-		exported->sz_priv = MAX_SIZE_PRIV_DATA;
-	else
-		exported->sz_priv = export_remote_attr->sz_priv;
-
-	/* creating buffer for private data of buffer */
-	if (exported->sz_priv != 0) {
-		exported->priv = kcalloc(1, exported->sz_priv, GFP_KERNEL);
-
-		if (!exported->priv) {
-			ret = -ENOMEM;
-			goto fail_priv_creation;
-		}
-	} else {
-		dev_err(hy_drv_priv->dev, "size is 0\n");
-	}
-
-	exported->hid = hyper_dmabuf_get_hid();
-
-	/* no more exported dmabuf allowed */
-	if (exported->hid.id == -1) {
-		dev_err(hy_drv_priv->dev,
-			"exceeds allowed number of dmabuf to be exported\n");
-		ret = -ENOMEM;
-		goto fail_map_active_sgts;
-	}
-
-	exported->rdomid = export_remote_attr->remote_domain;
-	exported->dma_buf = dma_buf;
-	exported->valid = true;
-
-	exported->active_sgts = kmalloc(sizeof(struct sgt_list), GFP_KERNEL);
-	if (!exported->active_sgts) {
-		ret = -ENOMEM;
-		goto fail_map_active_sgts;
-	}
-
-	exported->active_attached = kmalloc(sizeof(struct attachment_list),
-					    GFP_KERNEL);
-	if (!exported->active_attached) {
-		ret = -ENOMEM;
-		goto fail_map_active_attached;
-	}
-
-	exported->va_kmapped = kmalloc(sizeof(struct kmap_vaddr_list),
-				       GFP_KERNEL);
-	if (!exported->va_kmapped) {
-		ret = -ENOMEM;
-		goto fail_map_va_kmapped;
-	}
-
-	exported->va_vmapped = kmalloc(sizeof(struct vmap_vaddr_list),
-				       GFP_KERNEL);
-	if (!exported->va_vmapped) {
-		ret = -ENOMEM;
-		goto fail_map_va_vmapped;
-	}
-
-	exported->active_sgts->sgt = sgt;
-	exported->active_attached->attach = attachment;
-	exported->va_kmapped->vaddr = NULL;
-	exported->va_vmapped->vaddr = NULL;
-
-	/* initialize list of sgt, attachment and vaddr for dmabuf sync
-	 * via shadow dma-buf
-	 */
-	INIT_LIST_HEAD(&exported->active_sgts->list);
-	INIT_LIST_HEAD(&exported->active_attached->list);
-	INIT_LIST_HEAD(&exported->va_kmapped->list);
-	INIT_LIST_HEAD(&exported->va_vmapped->list);
-
-	/* copy private data to sgt_info */
-	ret = copy_from_user(exported->priv, export_remote_attr->priv,
-			     exported->sz_priv);
-
-	if (ret) {
-		dev_err(hy_drv_priv->dev,
-			"failed to load private data\n");
-		ret = -EINVAL;
-		goto fail_export;
-	}
-
-	pg_info = hyper_dmabuf_ext_pgs(sgt);
-	if (!pg_info) {
-		dev_err(hy_drv_priv->dev,
-			"failed to construct pg_info\n");
-		ret = -ENOMEM;
-		goto fail_export;
-	}
-
-	exported->nents = pg_info->nents;
-
-	/* now register it to export list */
-	hyper_dmabuf_register_exported(exported);
-
-	export_remote_attr->hid = exported->hid;
-
-	ret = send_export_msg(exported, pg_info);
-
-	if (ret < 0) {
-		dev_err(hy_drv_priv->dev,
-			"failed to send out the export request\n");
-		goto fail_send_request;
-	}
-
-	/* free pg_info */
-	kfree(pg_info->pgs);
-	kfree(pg_info);
-
-	exported->filp = filp;
-
-	return ret;
-
-/* Clean-up if error occurs */
-
-fail_send_request:
-	hyper_dmabuf_remove_exported(exported->hid);
-
-	/* free pg_info */
-	kfree(pg_info->pgs);
-	kfree(pg_info);
-
-fail_export:
-	kfree(exported->va_vmapped);
-
-fail_map_va_vmapped:
-	kfree(exported->va_kmapped);
-
-fail_map_va_kmapped:
-	kfree(exported->active_attached);
-
-fail_map_active_attached:
-	kfree(exported->active_sgts);
-
-fail_map_active_sgts:
-	kfree(exported->priv);
-
-fail_priv_creation:
-	kfree(exported);
-
-fail_sgt_info_creation:
-	dma_buf_unmap_attachment(attachment, sgt,
-				 DMA_BIDIRECTIONAL);
-
-fail_map_attachment:
-	dma_buf_detach(dma_buf, attachment);
-
-fail_attach:
-	dma_buf_put(dma_buf);
-
-	return ret;
-}
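-
-/* Illustrative exporter-side usage of the ioctl above (a sketch only;
- * field names come from struct ioctl_hyper_dmabuf_export_remote, and
- * fd is assumed to be an open handle to /dev/hyper_dmabuf):
- *
- *	struct ioctl_hyper_dmabuf_export_remote arg = {0};
- *
- *	arg.dmabuf_fd = dmabuf_fd;		// local dma-buf to share
- *	arg.remote_domain = importer_domid;	// importing VM
- *	arg.sz_priv = sizeof(meta);		// optional app metadata
- *	arg.priv = (char *)&meta;
- *	ioctl(fd, IOCTL_HYPER_DMABUF_EXPORT_REMOTE, &arg);
- *	// on success, arg.hid identifies the buffer across VMs
- */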
-
-static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
-{
-	struct ioctl_hyper_dmabuf_export_fd *export_fd_attr =
-			(struct ioctl_hyper_dmabuf_export_fd *)data;
-	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
-	struct imported_sgt_info *imported;
-	struct hyper_dmabuf_req *req;
-	struct page **data_pgs;
-	int op[4];
-	int i;
-	int ret = 0;
-
-	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
-
-	/* look for dmabuf for the id */
-	imported = hyper_dmabuf_find_imported(export_fd_attr->hid);
-
-	/* can't find sgt from the table */
-	if (!imported) {
-		dev_err(hy_drv_priv->dev, "can't find the entry\n");
-		return -ENOENT;
-	}
-
-	mutex_lock(&hy_drv_priv->lock);
-
-	imported->importers++;
-
-	/* send notification for export_fd to exporter */
-	op[0] = imported->hid.id;
-
-	for (i = 0; i < 3; i++)
-		op[i+1] = imported->hid.rng_key[i];
-
-	dev_dbg(hy_drv_priv->dev, "Export FD of buffer {id:%d key:%d %d %d}\n",
-		imported->hid.id, imported->hid.rng_key[0],
-		imported->hid.rng_key[1], imported->hid.rng_key[2]);
-
-	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
-
-	if (!req) {
-		mutex_unlock(&hy_drv_priv->lock);
-		return -ENOMEM;
-	}
-
-	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD, &op[0]);
-
-	ret = bknd_ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req, true);
-
-	if (ret < 0) {
-		/* in case of a timeout the other end will eventually
-		 * receive the request, so we need to undo it here
-		 */
-		hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD_FAILED,
-					&op[0]);
-		bknd_ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req, false);
-		kfree(req);
-		dev_err(hy_drv_priv->dev,
-			"Failed to create sgt or notify exporter\n");
-		imported->importers--;
-		mutex_unlock(&hy_drv_priv->lock);
-		return ret;
-	}
-
-	kfree(req);
-
-	if (ret == HYPER_DMABUF_REQ_ERROR) {
-		dev_err(hy_drv_priv->dev,
-			"Buffer invalid {id:%d key:%d %d %d}, cannot import\n",
-			imported->hid.id, imported->hid.rng_key[0],
-			imported->hid.rng_key[1], imported->hid.rng_key[2]);
-
-		imported->importers--;
-		mutex_unlock(&hy_drv_priv->lock);
-		return -EINVAL;
-	}
-
-	ret = 0;
-
-	dev_dbg(hy_drv_priv->dev,
-		"Found buffer gref %d off %d\n",
-		imported->ref_handle, imported->frst_ofst);
-
-	dev_dbg(hy_drv_priv->dev,
-		"last len %d nents %d domain %d\n",
-		imported->last_len, imported->nents,
-		HYPER_DMABUF_DOM_ID(imported->hid));
-
-	if (!imported->sgt) {
-		dev_dbg(hy_drv_priv->dev,
-			"buffer {id:%d key:%d %d %d} pages not mapped yet\n",
-			imported->hid.id, imported->hid.rng_key[0],
-			imported->hid.rng_key[1], imported->hid.rng_key[2]);
-
-		data_pgs = bknd_ops->map_shared_pages(imported->ref_handle,
-					HYPER_DMABUF_DOM_ID(imported->hid),
-					imported->nents,
-					&imported->refs_info);
-
-		if (!data_pgs) {
-			dev_err(hy_drv_priv->dev,
-				"can't map pages hid {id:%d key:%d %d %d}\n",
-				imported->hid.id, imported->hid.rng_key[0],
-				imported->hid.rng_key[1],
-				imported->hid.rng_key[2]);
-
-			imported->importers--;
-
-			req = kcalloc(1, sizeof(*req), GFP_KERNEL);
-
-			if (!req) {
-				mutex_unlock(&hy_drv_priv->lock);
-				return -ENOMEM;
-			}
-
-			hyper_dmabuf_create_req(req,
-						HYPER_DMABUF_EXPORT_FD_FAILED,
-						&op[0]);
-			bknd_ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req,
-							  false);
-			kfree(req);
-			mutex_unlock(&hy_drv_priv->lock);
-			return -EINVAL;
-		}
-
-		imported->sgt = hyper_dmabuf_create_sgt(data_pgs,
-							imported->frst_ofst,
-							imported->last_len,
-							imported->nents);
-
-	}
-
-	export_fd_attr->fd = hyper_dmabuf_export_fd(imported,
-						    export_fd_attr->flags);
-
-	if (export_fd_attr->fd < 0) {
-		/* fail to get fd */
-		ret = export_fd_attr->fd;
-	}
-
-	mutex_unlock(&hy_drv_priv->lock);
-
-	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
-	return ret;
-}
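-
-/* Illustrative importer-side usage (a sketch only; the hid is assumed
- * to have been communicated out-of-band, e.g. via the import event,
- * and the flags field is assumed to take dma-buf fd flags):
- *
- *	struct ioctl_hyper_dmabuf_export_fd arg = {0};
- *
- *	arg.hid = hid;
- *	arg.flags = O_CLOEXEC;
- *	ioctl(fd, IOCTL_HYPER_DMABUF_EXPORT_FD, &arg);
- *	// arg.fd is now a local dma-buf fd backed by the shared pages
- */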
-
-/* unexport dmabuf from the database and send an unexport request to
- * the source domain to unmap it.
- */
-static void delayed_unexport(struct work_struct *work)
-{
-	struct hyper_dmabuf_req *req;
-	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
-	struct exported_sgt_info *exported =
-		container_of(work, struct exported_sgt_info, unexport.work);
-	int op[4];
-	int i, ret;
-
-	if (!exported)
-		return;
-
-	dev_dbg(hy_drv_priv->dev,
-		"Marking buffer {id:%d key:%d %d %d} as invalid\n",
-		exported->hid.id, exported->hid.rng_key[0],
-		exported->hid.rng_key[1], exported->hid.rng_key[2]);
-
-	/* no longer valid */
-	exported->valid = false;
-
-	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
-
-	if (!req)
-		return;
-
-	op[0] = exported->hid.id;
-
-	for (i = 0; i < 3; i++)
-		op[i+1] = exported->hid.rng_key[i];
-
-	hyper_dmabuf_create_req(req, HYPER_DMABUF_NOTIFY_UNEXPORT, &op[0]);
-
-	/* Now send unexport request to remote domain, marking
-	 * that buffer should not be used anymore
-	 */
-	ret = bknd_ops->send_req(exported->rdomid, req, true);
-	if (ret < 0) {
-		dev_err(hy_drv_priv->dev,
-			"unexport message for buffer {id:%d key:%d %d %d} failed\n",
-			exported->hid.id, exported->hid.rng_key[0],
-			exported->hid.rng_key[1], exported->hid.rng_key[2]);
-	}
-
-	kfree(req);
-	exported->unexport_sched = false;
-
-	/* Clean up immediately if the buffer has never been exported
-	 * by the importer (so no SGT was constructed on the importer
-	 * side). Otherwise it is cleaned up later in remote sync when
-	 * the final release op is called (the importer does this only
-	 * when there is no consumer of locally exported FDs).
-	 */
-	if (exported->active == 0) {
-		dev_dbg(hy_drv_priv->dev,
-			"claning up buffer {id:%d key:%d %d %d} completly\n",
-			exported->hid.id, exported->hid.rng_key[0],
-			exported->hid.rng_key[1], exported->hid.rng_key[2]);
-
-		hyper_dmabuf_cleanup_sgt_info(exported, false);
-		hyper_dmabuf_remove_exported(exported->hid);
-
-		/* register hyper_dmabuf_id to the list for reuse */
-		hyper_dmabuf_store_hid(exported->hid);
-
-		if (exported->sz_priv > 0 && exported->priv)
-			kfree(exported->priv);
-
-		kfree(exported);
-	}
-}
-
-/* Schedule unexport of dmabuf.
- */
-int hyper_dmabuf_unexport_ioctl(struct file *filp, void *data)
-{
-	struct ioctl_hyper_dmabuf_unexport *unexport_attr =
-			(struct ioctl_hyper_dmabuf_unexport *)data;
-	struct exported_sgt_info *exported;
-
-	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
-
-	/* find dmabuf in export list */
-	exported = hyper_dmabuf_find_exported(unexport_attr->hid);
-
-	dev_dbg(hy_drv_priv->dev,
-		"scheduling unexport of buffer {id:%d key:%d %d %d}\n",
-		unexport_attr->hid.id, unexport_attr->hid.rng_key[0],
-		unexport_attr->hid.rng_key[1], unexport_attr->hid.rng_key[2]);
-
-	/* failed to find corresponding entry in export list */
-	if (exported == NULL) {
-		unexport_attr->status = -ENOENT;
-		return -ENOENT;
-	}
-
-	if (exported->unexport_sched)
-		return 0;
-
-	exported->unexport_sched = true;
-	INIT_DELAYED_WORK(&exported->unexport, delayed_unexport);
-	schedule_delayed_work(&exported->unexport,
-			      msecs_to_jiffies(unexport_attr->delay_ms));
-
-	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
-	return 0;
-}
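-
-/* Illustrative usage (a sketch only): schedule unexport of a buffer
- * 100 ms from now, giving in-flight importers time to finish:
- *
- *	struct ioctl_hyper_dmabuf_unexport arg = {0};
- *
- *	arg.hid = hid;
- *	arg.delay_ms = 100;
- *	ioctl(fd, IOCTL_HYPER_DMABUF_UNEXPORT, &arg);
- */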
-
-static int hyper_dmabuf_query_ioctl(struct file *filp, void *data)
-{
-	struct ioctl_hyper_dmabuf_query *query_attr =
-			(struct ioctl_hyper_dmabuf_query *)data;
-	struct exported_sgt_info *exported = NULL;
-	struct imported_sgt_info *imported = NULL;
-	int ret = 0;
-
-	if (HYPER_DMABUF_DOM_ID(query_attr->hid) == hy_drv_priv->domid) {
-		/* query for exported dmabuf */
-		exported = hyper_dmabuf_find_exported(query_attr->hid);
-		if (exported) {
-			ret = hyper_dmabuf_query_exported(exported,
-							  query_attr->item,
-							  &query_attr->info);
-		} else {
-			dev_err(hy_drv_priv->dev,
-				"hid {id:%d key:%d %d %d} not in exp list\n",
-				query_attr->hid.id,
-				query_attr->hid.rng_key[0],
-				query_attr->hid.rng_key[1],
-				query_attr->hid.rng_key[2]);
-			return -ENOENT;
-		}
-	} else {
-		/* query for imported dmabuf */
-		imported = hyper_dmabuf_find_imported(query_attr->hid);
-		if (imported) {
-			ret = hyper_dmabuf_query_imported(imported,
-							  query_attr->item,
-							  &query_attr->info);
-		} else {
-			dev_err(hy_drv_priv->dev,
-				"hid {id:%d key:%d %d %d} not in imp list\n",
-				query_attr->hid.id,
-				query_attr->hid.rng_key[0],
-				query_attr->hid.rng_key[1],
-				query_attr->hid.rng_key[2]);
-			return -ENOENT;
-		}
-	}
-
-	return ret;
-}
-
-const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_TX_CH_SETUP,
-			       hyper_dmabuf_tx_ch_setup_ioctl, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_RX_CH_SETUP,
-			       hyper_dmabuf_rx_ch_setup_ioctl, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_REMOTE,
-			       hyper_dmabuf_export_remote_ioctl, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_FD,
-			       hyper_dmabuf_export_fd_ioctl, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_UNEXPORT,
-			       hyper_dmabuf_unexport_ioctl, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_QUERY,
-			       hyper_dmabuf_query_ioctl, 0),
-};
-
-long hyper_dmabuf_ioctl(struct file *filp,
-			unsigned int cmd, unsigned long param)
-{
-	const struct hyper_dmabuf_ioctl_desc *ioctl = NULL;
-	unsigned int nr = _IOC_NR(cmd);
-	int ret;
-	hyper_dmabuf_ioctl_t func;
-	char *kdata;
-
-	if (nr >= ARRAY_SIZE(hyper_dmabuf_ioctls)) {
-		dev_err(hy_drv_priv->dev, "invalid ioctl\n");
-		return -EINVAL;
-	}
-
-	ioctl = &hyper_dmabuf_ioctls[nr];
-
-	func = ioctl->func;
-
-	if (unlikely(!func)) {
-		dev_err(hy_drv_priv->dev, "no function\n");
-		return -EINVAL;
-	}
-
-	kdata = kmalloc(_IOC_SIZE(cmd), GFP_KERNEL);
-	if (!kdata)
-		return -ENOMEM;
-
-	if (copy_from_user(kdata, (void __user *)param,
-			   _IOC_SIZE(cmd)) != 0) {
-		dev_err(hy_drv_priv->dev,
-			"failed to copy from user arguments\n");
-		ret = -EFAULT;
-		goto ioctl_error;
-	}
-
-	ret = func(filp, kdata);
-
-	if (copy_to_user((void __user *)param, kdata,
-			 _IOC_SIZE(cmd)) != 0) {
-		dev_err(hy_drv_priv->dev,
-			"failed to copy to user arguments\n");
-		ret = -EFAULT;
-		goto ioctl_error;
-	}
-
-ioctl_error:
-	kfree(kdata);
-
-	return ret;
-}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
deleted file mode 100644
index 5991a87..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
+++ /dev/null
@@ -1,50 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_IOCTL_H__
-#define __HYPER_DMABUF_IOCTL_H__
-
-typedef int (*hyper_dmabuf_ioctl_t)(struct file *filp, void *data);
-
-struct hyper_dmabuf_ioctl_desc {
-	unsigned int cmd;
-	int flags;
-	hyper_dmabuf_ioctl_t func;
-	const char *name;
-};
-
-#define HYPER_DMABUF_IOCTL_DEF(ioctl, _func, _flags)	\
-	[_IOC_NR(ioctl)] = {				\
-			.cmd = ioctl,			\
-			.func = _func,			\
-			.flags = _flags,		\
-			.name = #ioctl			\
-	}
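-
-/* For reference, HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_QUERY,
- * hyper_dmabuf_query_ioctl, 0) expands to the designated initializer
- *
- *	[_IOC_NR(IOCTL_HYPER_DMABUF_QUERY)] = {
- *		.cmd = IOCTL_HYPER_DMABUF_QUERY,
- *		.func = hyper_dmabuf_query_ioctl,
- *		.flags = 0,
- *		.name = "IOCTL_HYPER_DMABUF_QUERY"
- *	}
- *
- * which is why the descriptor table can be indexed directly by
- * _IOC_NR(cmd) in hyper_dmabuf_ioctl().
- */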
-
-long hyper_dmabuf_ioctl(struct file *filp,
-			unsigned int cmd, unsigned long param);
-
-int hyper_dmabuf_unexport_ioctl(struct file *filp, void *data);
-
-#endif /* __HYPER_DMABUF_IOCTL_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
deleted file mode 100644
index bba6d1d..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
+++ /dev/null
@@ -1,293 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/slab.h>
-#include <linux/cdev.h>
-#include <linux/hashtable.h>
-#include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_list.h"
-#include "hyper_dmabuf_id.h"
-#include "hyper_dmabuf_event.h"
-
-DECLARE_HASHTABLE(hyper_dmabuf_hash_imported, MAX_ENTRY_IMPORTED);
-DECLARE_HASHTABLE(hyper_dmabuf_hash_exported, MAX_ENTRY_EXPORTED);
-
-#ifdef CONFIG_HYPER_DMABUF_SYSFS
-static ssize_t hyper_dmabuf_imported_show(struct device *drv,
-					  struct device_attribute *attr,
-					  char *buf)
-{
-	struct list_entry_imported *info_entry;
-	int bkt;
-	ssize_t count = 0;
-	size_t total = 0;
-
-	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node) {
-		hyper_dmabuf_id_t hid = info_entry->imported->hid;
-		int nents = info_entry->imported->nents;
-		bool valid = info_entry->imported->valid;
-		int num_importers = info_entry->imported->importers;
-
-		total += nents;
-		count += scnprintf(buf + count, PAGE_SIZE - count,
-				"hid:{%d %d %d %d}, nent:%d, v:%c, numi:%d\n",
-				hid.id, hid.rng_key[0], hid.rng_key[1],
-				hid.rng_key[2], nents, (valid ? 't' : 'f'),
-				num_importers);
-	}
-	count += scnprintf(buf + count, PAGE_SIZE - count,
-			   "total nents: %lu\n", total);
-
-	return count;
-}
-
-static ssize_t hyper_dmabuf_exported_show(struct device *drv,
-					  struct device_attribute *attr,
-					  char *buf)
-{
-	struct list_entry_exported *info_entry;
-	int bkt;
-	ssize_t count = 0;
-	size_t total = 0;
-
-	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node) {
-		hyper_dmabuf_id_t hid = info_entry->exported->hid;
-		int nents = info_entry->exported->nents;
-		bool valid = info_entry->exported->valid;
-		int importer_exported = info_entry->exported->active;
-
-		total += nents;
-		count += scnprintf(buf + count, PAGE_SIZE - count,
-				   "hid:{%d %d %d %d}, nent:%d, v:%c, ie:%d\n",
-				   hid.id, hid.rng_key[0], hid.rng_key[1],
-				   hid.rng_key[2], nents, (valid ? 't' : 'f'),
-				   importer_exported);
-	}
-	count += scnprintf(buf + count, PAGE_SIZE - count,
-			   "total nents: %lu\n", total);
-
-	return count;
-}
-
-static DEVICE_ATTR(imported, 0400, hyper_dmabuf_imported_show, NULL);
-static DEVICE_ATTR(exported, 0400, hyper_dmabuf_exported_show, NULL);
-
-int hyper_dmabuf_register_sysfs(struct device *dev)
-{
-	int err;
-
-	err = device_create_file(dev, &dev_attr_imported);
-	if (err < 0)
-		goto err1;
-	err = device_create_file(dev, &dev_attr_exported);
-	if (err < 0)
-		goto err2;
-
-	return 0;
-err2:
-	device_remove_file(dev, &dev_attr_imported);
-err1:
-	return err;
-}
-
-int hyper_dmabuf_unregister_sysfs(struct device *dev)
-{
-	device_remove_file(dev, &dev_attr_imported);
-	device_remove_file(dev, &dev_attr_exported);
-	return 0;
-}
-
-#endif
-
-int hyper_dmabuf_table_init(void)
-{
-	hash_init(hyper_dmabuf_hash_imported);
-	hash_init(hyper_dmabuf_hash_exported);
-	return 0;
-}
-
-int hyper_dmabuf_table_destroy(void)
-{
-	/* TODO: cleanup hyper_dmabuf_hash_imported
-	 * and hyper_dmabuf_hash_exported
-	 */
-	return 0;
-}
-
-int hyper_dmabuf_register_exported(struct exported_sgt_info *exported)
-{
-	struct list_entry_exported *info_entry;
-
-	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
-
-	if (!info_entry)
-		return -ENOMEM;
-
-	info_entry->exported = exported;
-
-	hash_add(hyper_dmabuf_hash_exported, &info_entry->node,
-		 info_entry->exported->hid.id);
-
-	return 0;
-}
-
-int hyper_dmabuf_register_imported(struct imported_sgt_info *imported)
-{
-	struct list_entry_imported *info_entry;
-
-	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
-
-	if (!info_entry)
-		return -ENOMEM;
-
-	info_entry->imported = imported;
-
-	hash_add(hyper_dmabuf_hash_imported, &info_entry->node,
-		 info_entry->imported->hid.id);
-
-	return 0;
-}
-
-struct exported_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid)
-{
-	struct list_entry_exported *info_entry;
-	int bkt;
-
-	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
-		/* checking hid.id first */
-		if (info_entry->exported->hid.id == hid.id) {
-			/* then key is compared */
-			if (hyper_dmabuf_hid_keycomp(info_entry->exported->hid,
-						    hid))
-				return info_entry->exported;
-
-			/* if the keys don't match, the given HID is
-			 * invalid, so return NULL
-			 */
-			break;
-		}
-
-	return NULL;
-}
-
-/* search for a pre-exported sgt and return its hid if it exists */
-hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf,
-						 int domid)
-{
-	struct list_entry_exported *info_entry;
-	hyper_dmabuf_id_t hid = {-1, {0, 0, 0} };
-	int bkt;
-
-	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
-		if (info_entry->exported->dma_buf == dmabuf &&
-		    info_entry->exported->rdomid == domid)
-			return info_entry->exported->hid;
-
-	return hid;
-}
-
-struct imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_id_t hid)
-{
-	struct list_entry_imported *info_entry;
-	int bkt;
-
-	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
-		/* checking hid.id first */
-		if (info_entry->imported->hid.id == hid.id) {
-			/* then key is compared */
-			if (hyper_dmabuf_hid_keycomp(info_entry->imported->hid,
-						    hid))
-				return info_entry->imported;
-			/* if the keys don't match, the given HID is
-			 * invalid, so return NULL
-			 */
-			break;
-		}
-
-	return NULL;
-}
-
-int hyper_dmabuf_remove_exported(hyper_dmabuf_id_t hid)
-{
-	struct list_entry_exported *info_entry;
-	int bkt;
-
-	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
-		/* checking hid.id first */
-		if (info_entry->exported->hid.id == hid.id) {
-			/* then key is compared */
-			if (hyper_dmabuf_hid_keycomp(info_entry->exported->hid,
-						    hid)) {
-				hash_del(&info_entry->node);
-				kfree(info_entry);
-				return 0;
-			}
-
-			break;
-		}
-
-	return -ENOENT;
-}
-
-int hyper_dmabuf_remove_imported(hyper_dmabuf_id_t hid)
-{
-	struct list_entry_imported *info_entry;
-	int bkt;
-
-	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
-		/* checking hid.id first */
-		if (info_entry->imported->hid.id == hid.id) {
-			/* then key is compared */
-			if (hyper_dmabuf_hid_keycomp(info_entry->imported->hid,
-						    hid)) {
-				hash_del(&info_entry->node);
-				kfree(info_entry);
-				return 0;
-			}
-
-			break;
-		}
-
-	return -ENOENT;
-}
-
-void hyper_dmabuf_foreach_exported(
-	void (*func)(struct exported_sgt_info *, void *attr),
-	void *attr)
-{
-	struct list_entry_exported *info_entry;
-	struct hlist_node *tmp;
-	int bkt;
-
-	hash_for_each_safe(hyper_dmabuf_hash_exported, bkt, tmp,
-			info_entry, node) {
-		func(info_entry->exported, attr);
-	}
-}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
deleted file mode 100644
index f7102f5..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
+++ /dev/null
@@ -1,71 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_LIST_H__
-#define __HYPER_DMABUF_LIST_H__
-
-#include "hyper_dmabuf_struct.h"
-
-/* number of bits to be used for exported dmabufs hash table */
-#define MAX_ENTRY_EXPORTED 7
-/* number of bits to be used for imported dmabufs hash table */
-#define MAX_ENTRY_IMPORTED 7
-
-struct list_entry_exported {
-	struct exported_sgt_info *exported;
-	struct hlist_node node;
-};
-
-struct list_entry_imported {
-	struct imported_sgt_info *imported;
-	struct hlist_node node;
-};
-
-int hyper_dmabuf_table_init(void);
-
-int hyper_dmabuf_table_destroy(void);
-
-int hyper_dmabuf_register_exported(struct exported_sgt_info *info);
-
-/* search for a pre-exported sgt and return its hid if it exists */
-hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf,
-						 int domid);
-
-int hyper_dmabuf_register_imported(struct imported_sgt_info *info);
-
-struct exported_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid);
-
-struct imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_id_t hid);
-
-int hyper_dmabuf_remove_exported(hyper_dmabuf_id_t hid);
-
-int hyper_dmabuf_remove_imported(hyper_dmabuf_id_t hid);
-
-void hyper_dmabuf_foreach_exported(void (*func)(struct exported_sgt_info *,
-				   void *attr), void *attr);
-
-int hyper_dmabuf_register_sysfs(struct device *dev);
-int hyper_dmabuf_unregister_sysfs(struct device *dev);
-
-#endif /* __HYPER_DMABUF_LIST_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
deleted file mode 100644
index afc1fd6e..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ /dev/null
@@ -1,414 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/slab.h>
-#include <linux/workqueue.h>
-#include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_msg.h"
-#include "hyper_dmabuf_remote_sync.h"
-#include "hyper_dmabuf_event.h"
-#include "hyper_dmabuf_list.h"
-
-struct cmd_process {
-	struct work_struct work;
-	struct hyper_dmabuf_req *rq;
-	int domid;
-};
-
-void hyper_dmabuf_create_req(struct hyper_dmabuf_req *req,
-			     enum hyper_dmabuf_command cmd, int *op)
-{
-	int i;
-
-	req->stat = HYPER_DMABUF_REQ_NOT_RESPONDED;
-	req->cmd = cmd;
-
-	switch (cmd) {
-	/* as exporter, commands to importer */
-	case HYPER_DMABUF_EXPORT:
-		/* exporting pages for dmabuf */
-		/* command : HYPER_DMABUF_EXPORT,
-		 * op0~op3 : hyper_dmabuf_id
-		 * op4 : number of pages to be shared
-		 * op5 : offset of data in the first page
-		 * op6 : length of data in the last page
-		 * op7 : top-level reference number for shared pages
-		 * op8 : size of private data (from op9)
-		 * op9 ~ : Driver-specific private data
-		 *	   (e.g. graphic buffer's meta info)
-		 */
-
-		memcpy(&req->op[0], &op[0], 9 * sizeof(int) + op[8]);
-		break;
-
-	case HYPER_DMABUF_NOTIFY_UNEXPORT:
-		/* destroy sg_list for hyper_dmabuf_id on remote side */
-		/* command : HYPER_DMABUF_NOTIFY_UNEXPORT,
-		 * op0~op3 : hyper_dmabuf_id_t hid
-		 */
-
-		for (i = 0; i < 4; i++)
-			req->op[i] = op[i];
-		break;
-
-	case HYPER_DMABUF_EXPORT_FD:
-	case HYPER_DMABUF_EXPORT_FD_FAILED:
-		/* dmabuf fd is being created on the importer side, or
-		 * importing failed
-		 *
-		 * command : HYPER_DMABUF_EXPORT_FD or
-		 *	     HYPER_DMABUF_EXPORT_FD_FAILED,
-		 * op0~op3 : hyper_dmabuf_id
-		 */
-
-		for (i = 0; i < 4; i++)
-			req->op[i] = op[i];
-		break;
-
-	case HYPER_DMABUF_OPS_TO_REMOTE:
-		/* notifying dmabuf map/unmap to importer (probably not needed)
-		 * for dmabuf synchronization
-		 */
-		break;
-
-	case HYPER_DMABUF_OPS_TO_SOURCE:
-		/* notifying dmabuf map/unmap to exporter; a map makes
-		 * the driver do shadow mapping or unmapping for
-		 * synchronization with the original exporter (e.g. i915)
-		 *
-		 * command : DMABUF_OPS_TO_SOURCE.
-		 * op0~3 : hyper_dmabuf_id
-		 * op4 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
-		 */
-		for (i = 0; i < 5; i++)
-			req->op[i] = op[i];
-		break;
-
-	default:
-		/* no command found */
-		return;
-	}
-}
-
-static void cmd_process_work(struct work_struct *work)
-{
-	struct imported_sgt_info *imported;
-	struct cmd_process *proc = container_of(work,
-						struct cmd_process, work);
-	struct hyper_dmabuf_req *req;
-	int domid;
-	int i;
-
-	req = proc->rq;
-	domid = proc->domid;
-
-	switch (req->cmd) {
-	case HYPER_DMABUF_EXPORT:
-		/* exporting pages for dmabuf */
-		/* command : HYPER_DMABUF_EXPORT,
-		 * op0~op3 : hyper_dmabuf_id
-		 * op4 : number of pages to be shared
-		 * op5 : offset of data in the first page
-		 * op6 : length of data in the last page
-		 * op7 : top-level reference number for shared pages
-		 * op8 : size of private data (from op9)
-		 * op9 ~ : Driver-specific private data
-		 *         (e.g. graphic buffer's meta info)
-		 */
-
-		/* if nents == 0, this message only synchronizes priv
-		 * data of an existing imported_sgt_info, so a new one
-		 * is not created
-		 */
-		if (req->op[4] == 0) {
-			hyper_dmabuf_id_t exist = {req->op[0],
-						   {req->op[1], req->op[2],
-						   req->op[3] } };
-
-			imported = hyper_dmabuf_find_imported(exist);
-
-			if (!imported) {
-				dev_err(hy_drv_priv->dev,
-					"Can't find imported sgt_info\n");
-				break;
-			}
-
-			/* if size of new private data is different,
-			 * we reallocate it.
-			 */
-			if (imported->sz_priv != req->op[8]) {
-				kfree(imported->priv);
-				imported->sz_priv = req->op[8];
-				imported->priv = kcalloc(1, req->op[8],
-							 GFP_KERNEL);
-				if (!imported->priv) {
-					/* set it invalid */
-					imported->valid = 0;
-					break;
-				}
-			}
-
-			/* updating priv data */
-			memcpy(imported->priv, &req->op[9], req->op[8]);
-
-#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
-			/* generating import event */
-			hyper_dmabuf_import_event(imported->hid);
-#endif
-
-			break;
-		}
-
-		imported = kcalloc(1, sizeof(*imported), GFP_KERNEL);
-
-		if (!imported)
-			break;
-
-		imported->sz_priv = req->op[8];
-		imported->priv = kcalloc(1, req->op[8], GFP_KERNEL);
-
-		if (!imported->priv) {
-			kfree(imported);
-			break;
-		}
-
-		imported->hid.id = req->op[0];
-
-		for (i = 0; i < 3; i++)
-			imported->hid.rng_key[i] = req->op[i+1];
-
-		imported->nents = req->op[4];
-		imported->frst_ofst = req->op[5];
-		imported->last_len = req->op[6];
-		imported->ref_handle = req->op[7];
-
-		dev_dbg(hy_drv_priv->dev, "DMABUF was exported\n");
-		dev_dbg(hy_drv_priv->dev, "\thid{id:%d key:%d %d %d}\n",
-			req->op[0], req->op[1], req->op[2],
-			req->op[3]);
-		dev_dbg(hy_drv_priv->dev, "\tnents %d\n", req->op[4]);
-		dev_dbg(hy_drv_priv->dev, "\tfirst offset %d\n", req->op[5]);
-		dev_dbg(hy_drv_priv->dev, "\tlast len %d\n", req->op[6]);
-		dev_dbg(hy_drv_priv->dev, "\tgrefid %d\n", req->op[7]);
-
-		memcpy(imported->priv, &req->op[9], req->op[8]);
-
-		imported->valid = true;
-		hyper_dmabuf_register_imported(imported);
-
-#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
-		/* generating import event */
-		hyper_dmabuf_import_event(imported->hid);
-#endif
-
-		break;
-
-	case HYPER_DMABUF_OPS_TO_REMOTE:
-		/* notifying dmabuf map/unmap to importer
-		 * (probably not needed) for dmabuf synchronization
-		 */
-		break;
-
-	default:
-		/* shouldn't get here */
-		break;
-	}
-
-	kfree(req);
-	kfree(proc);
-}
-
-int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
-{
-	struct cmd_process *proc;
-	struct hyper_dmabuf_req *temp_req;
-	struct imported_sgt_info *imported;
-	struct exported_sgt_info *exported;
-	hyper_dmabuf_id_t hid;
-	int ret;
-
-	if (!req) {
-		dev_err(hy_drv_priv->dev, "request is NULL\n");
-		return -EINVAL;
-	}
-
-	hid.id = req->op[0];
-	hid.rng_key[0] = req->op[1];
-	hid.rng_key[1] = req->op[2];
-	hid.rng_key[2] = req->op[3];
-
-	if ((req->cmd < HYPER_DMABUF_EXPORT) ||
-		(req->cmd > HYPER_DMABUF_OPS_TO_SOURCE)) {
-		dev_err(hy_drv_priv->dev, "invalid command\n");
-		return -EINVAL;
-	}
-
-	req->stat = HYPER_DMABUF_REQ_PROCESSED;
-
-	/* HYPER_DMABUF_NOTIFY_UNEXPORT requires immediate
-	 * follow-up so it can't be processed in the workqueue
-	 */
-	if (req->cmd == HYPER_DMABUF_NOTIFY_UNEXPORT) {
-		/* destroy sg_list for hyper_dmabuf_id on remote side */
-		/* command : HYPER_DMABUF_NOTIFY_UNEXPORT,
-		 * op0~3 : hyper_dmabuf_id
-		 */
-		dev_dbg(hy_drv_priv->dev,
-			"processing HYPER_DMABUF_NOTIFY_UNEXPORT\n");
-
-		imported = hyper_dmabuf_find_imported(hid);
-
-		if (imported) {
-			/* if anything is still using dma_buf */
-			if (imported->importers) {
-				/* Buffer is still in use, just mark that
-				 * it should not be allowed to export its fd
-				 * anymore.
-				 */
-				imported->valid = false;
-			} else {
-				/* No one is using buffer, remove it from
-				 * imported list
-				 */
-				hyper_dmabuf_remove_imported(hid);
-				kfree(imported);
-			}
-		} else {
-			req->stat = HYPER_DMABUF_REQ_ERROR;
-		}
-
-		return req->cmd;
-	}
-
-	/* dma buf remote synchronization */
-	if (req->cmd == HYPER_DMABUF_OPS_TO_SOURCE) {
-		/* notifying dmabuf map/unmap to exporter; a map
-		 * makes the driver do shadow mapping
-		 * or unmapping for synchronization with the original
-		 * exporter (e.g. i915)
-		 *
-		 * command : DMABUF_OPS_TO_SOURCE.
-		 * op0~3 : hyper_dmabuf_id
-		 * op4 : enum hyper_dmabuf_ops {....}
-		 */
-		dev_dbg(hy_drv_priv->dev,
-			"%s: HYPER_DMABUF_OPS_TO_SOURCE\n", __func__);
-
-		ret = hyper_dmabuf_remote_sync(hid, req->op[4]);
-
-		if (ret)
-			req->stat = HYPER_DMABUF_REQ_ERROR;
-		else
-			req->stat = HYPER_DMABUF_REQ_PROCESSED;
-
-		return req->cmd;
-	}
-
-	/* synchronous dma_buf_fd export */
-	if (req->cmd == HYPER_DMABUF_EXPORT_FD) {
-		/* find a corresponding SGT for the id */
-		dev_dbg(hy_drv_priv->dev,
-			"HYPER_DMABUF_EXPORT_FD for {id:%d key:%d %d %d}\n",
-			hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
-
-		exported = hyper_dmabuf_find_exported(hid);
-
-		if (!exported) {
-			dev_err(hy_drv_priv->dev,
-				"buffer {id:%d key:%d %d %d} not found\n",
-				hid.id, hid.rng_key[0], hid.rng_key[1],
-				hid.rng_key[2]);
-
-			req->stat = HYPER_DMABUF_REQ_ERROR;
-		} else if (!exported->valid) {
-			dev_dbg(hy_drv_priv->dev,
-				"Buffer no longer valid {id:%d key:%d %d %d}\n",
-				hid.id, hid.rng_key[0], hid.rng_key[1],
-				hid.rng_key[2]);
-
-			req->stat = HYPER_DMABUF_REQ_ERROR;
-		} else {
-			dev_dbg(hy_drv_priv->dev,
-				"Buffer still valid {id:%d key:%d %d %d}\n",
-				hid.id, hid.rng_key[0], hid.rng_key[1],
-				hid.rng_key[2]);
-
-			exported->active++;
-			req->stat = HYPER_DMABUF_REQ_PROCESSED;
-		}
-		return req->cmd;
-	}
-
-	if (req->cmd == HYPER_DMABUF_EXPORT_FD_FAILED) {
-		dev_dbg(hy_drv_priv->dev,
-			"HYPER_DMABUF_EXPORT_FD_FAILED for {id:%d key:%d %d %d}\n",
-			hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
-
-		exported = hyper_dmabuf_find_exported(hid);
-
-		if (!exported) {
-			dev_err(hy_drv_priv->dev,
-				"buffer {id:%d key:%d %d %d} not found\n",
-				hid.id, hid.rng_key[0], hid.rng_key[1],
-				hid.rng_key[2]);
-
-			req->stat = HYPER_DMABUF_REQ_ERROR;
-		} else {
-			exported->active--;
-			req->stat = HYPER_DMABUF_REQ_PROCESSED;
-		}
-		return req->cmd;
-	}
-
-	dev_dbg(hy_drv_priv->dev,
-		"%s: putting request to workqueue\n", __func__);
-	temp_req = kmalloc(sizeof(*temp_req), GFP_KERNEL);
-
-	if (!temp_req)
-		return -ENOMEM;
-
-	memcpy(temp_req, req, sizeof(*temp_req));
-
-	proc = kcalloc(1, sizeof(struct cmd_process), GFP_KERNEL);
-
-	if (!proc) {
-		kfree(temp_req);
-		return -ENOMEM;
-	}
-
-	proc->rq = temp_req;
-	proc->domid = domid;
-
-	INIT_WORK(&(proc->work), cmd_process_work);
-
-	queue_work(hy_drv_priv->work_queue, &(proc->work));
-
-	return req->cmd;
-}
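To make the operand layout documented in hyper_dmabuf_create_req concrete, a
hypothetical exporter-side caller would pack the op array like this before
handing it over (hid, npages, first_offset, last_len, top_level_gref, priv,
priv_size and req are assumed local variables, not names from this file):

	int op[MAX_NUMBER_OF_OPERANDS];

	op[0] = hid.id;
	op[1] = hid.rng_key[0];
	op[2] = hid.rng_key[1];
	op[3] = hid.rng_key[2];
	op[4] = npages;          /* number of pages to be shared */
	op[5] = first_offset;    /* offset of data in the first page */
	op[6] = last_len;        /* length of data in the last page */
	op[7] = top_level_gref;  /* top-level reference for shared pages */
	op[8] = priv_size;       /* size of private data from op9 on */
	memcpy(&op[9], priv, priv_size);

	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT, op);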
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
deleted file mode 100644
index 9c8a76b..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
+++ /dev/null
@@ -1,87 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_MSG_H__
-#define __HYPER_DMABUF_MSG_H__
-
-#define MAX_NUMBER_OF_OPERANDS 64
-
-struct hyper_dmabuf_req {
-	unsigned int req_id;
-	unsigned int stat;
-	unsigned int cmd;
-	unsigned int op[MAX_NUMBER_OF_OPERANDS];
-};
-
-struct hyper_dmabuf_resp {
-	unsigned int resp_id;
-	unsigned int stat;
-	unsigned int cmd;
-	unsigned int op[MAX_NUMBER_OF_OPERANDS];
-};
-
-enum hyper_dmabuf_command {
-	HYPER_DMABUF_EXPORT = 0x10,
-	HYPER_DMABUF_EXPORT_FD,
-	HYPER_DMABUF_EXPORT_FD_FAILED,
-	HYPER_DMABUF_NOTIFY_UNEXPORT,
-	HYPER_DMABUF_OPS_TO_REMOTE,
-	HYPER_DMABUF_OPS_TO_SOURCE,
-};
-
-enum hyper_dmabuf_ops {
-	HYPER_DMABUF_OPS_ATTACH = 0x1000,
-	HYPER_DMABUF_OPS_DETACH,
-	HYPER_DMABUF_OPS_MAP,
-	HYPER_DMABUF_OPS_UNMAP,
-	HYPER_DMABUF_OPS_RELEASE,
-	HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS,
-	HYPER_DMABUF_OPS_END_CPU_ACCESS,
-	HYPER_DMABUF_OPS_KMAP_ATOMIC,
-	HYPER_DMABUF_OPS_KUNMAP_ATOMIC,
-	HYPER_DMABUF_OPS_KMAP,
-	HYPER_DMABUF_OPS_KUNMAP,
-	HYPER_DMABUF_OPS_MMAP,
-	HYPER_DMABUF_OPS_VMAP,
-	HYPER_DMABUF_OPS_VUNMAP,
-};
-
-enum hyper_dmabuf_req_feedback {
-	HYPER_DMABUF_REQ_PROCESSED = 0x100,
-	HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP,
-	HYPER_DMABUF_REQ_ERROR,
-	HYPER_DMABUF_REQ_NOT_RESPONDED
-};
-
-/* create a request packet with given command and operands */
-void hyper_dmabuf_create_req(struct hyper_dmabuf_req *req,
-				 enum hyper_dmabuf_command command,
-				 int *operands);
-
-/* parse incoming request packet (or response) and take
- * appropriate actions for those
- */
-int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req);
-
-#endif // __HYPER_DMABUF_MSG_H__
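Note that with MAX_NUMBER_OF_OPERANDS = 64 each request or response is a fixed
67 u32s (3 header words + 64 operands), i.e. 268 bytes on the ring. A
compile-time sanity check one could add in an init function (this check is not
in the original code):

	/* hypothetical build-time check, e.g. in the driver init path */
	BUILD_BUG_ON(sizeof(struct hyper_dmabuf_req) != 67 * sizeof(unsigned int));
	BUILD_BUG_ON(sizeof(struct hyper_dmabuf_resp) != 67 * sizeof(unsigned int));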
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
deleted file mode 100644
index e85f619..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
+++ /dev/null
@@ -1,413 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/slab.h>
-#include <linux/dma-buf.h>
-#include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_struct.h"
-#include "hyper_dmabuf_ops.h"
-#include "hyper_dmabuf_sgl_proc.h"
-#include "hyper_dmabuf_id.h"
-#include "hyper_dmabuf_msg.h"
-#include "hyper_dmabuf_list.h"
-
-#define WAIT_AFTER_SYNC_REQ 0
-#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
-
-static int dmabuf_refcount(struct dma_buf *dma_buf)
-{
-	if ((dma_buf != NULL) && (dma_buf->file != NULL))
-		return file_count(dma_buf->file);
-
-	return -EINVAL;
-}
-
-static int sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
-{
-	struct hyper_dmabuf_req *req;
-	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
-	int op[5];
-	int i;
-	int ret;
-
-	op[0] = hid.id;
-
-	for (i = 0; i < 3; i++)
-		op[i+1] = hid.rng_key[i];
-
-	op[4] = dmabuf_ops;
-
-	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
-
-	if (!req)
-		return -ENOMEM;
-
-	hyper_dmabuf_create_req(req, HYPER_DMABUF_OPS_TO_SOURCE, &op[0]);
-
-	/* send request and wait for a response */
-	ret = bknd_ops->send_req(HYPER_DMABUF_DOM_ID(hid), req,
-				 WAIT_AFTER_SYNC_REQ);
-
-	if (ret < 0) {
-		dev_dbg(hy_drv_priv->dev,
-			"dmabuf sync request failed:%d\n", req->op[4]);
-	}
-
-	kfree(req);
-
-	return ret;
-}
-
-static int hyper_dmabuf_ops_attach(struct dma_buf *dmabuf,
-				   struct device *dev,
-				   struct dma_buf_attachment *attach)
-{
-	struct imported_sgt_info *imported;
-	int ret;
-
-	if (!attach->dmabuf->priv)
-		return -EINVAL;
-
-	imported = (struct imported_sgt_info *)attach->dmabuf->priv;
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_ATTACH);
-
-	return ret;
-}
-
-static void hyper_dmabuf_ops_detach(struct dma_buf *dmabuf,
-				    struct dma_buf_attachment *attach)
-{
-	struct imported_sgt_info *imported;
-	int ret;
-
-	if (!attach->dmabuf->priv)
-		return;
-
-	imported = (struct imported_sgt_info *)attach->dmabuf->priv;
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_DETACH);
-}
-
-static struct sg_table *hyper_dmabuf_ops_map(
-				struct dma_buf_attachment *attachment,
-				enum dma_data_direction dir)
-{
-	struct sg_table *st;
-	struct imported_sgt_info *imported;
-	struct pages_info *pg_info;
-	int ret;
-
-	if (!attachment->dmabuf->priv)
-		return NULL;
-
-	imported = (struct imported_sgt_info *)attachment->dmabuf->priv;
-
-	/* extract pages from sgt */
-	pg_info = hyper_dmabuf_ext_pgs(imported->sgt);
-
-	if (!pg_info)
-		return NULL;
-
-	/* create a new sg_table with extracted pages */
-	st = hyper_dmabuf_create_sgt(pg_info->pgs, pg_info->frst_ofst,
-				     pg_info->last_len, pg_info->nents);
-	if (!st)
-		goto err_free_sg;
-
-	if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir))
-		goto err_free_sg;
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_MAP);
-
-	kfree(pg_info->pgs);
-	kfree(pg_info);
-
-	return st;
-
-err_free_sg:
-	if (st) {
-		sg_free_table(st);
-		kfree(st);
-	}
-
-	kfree(pg_info->pgs);
-	kfree(pg_info);
-
-	return NULL;
-}
-
-static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
-				   struct sg_table *sg,
-				   enum dma_data_direction dir)
-{
-	struct imported_sgt_info *imported;
-	int ret;
-
-	if (!attachment->dmabuf->priv)
-		return;
-
-	imported = (struct imported_sgt_info *)attachment->dmabuf->priv;
-
-	dma_unmap_sg(attachment->dev, sg->sgl, sg->nents, dir);
-
-	sg_free_table(sg);
-	kfree(sg);
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_UNMAP);
-}
-
-static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
-{
-	struct imported_sgt_info *imported;
-	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
-	int ret;
-	int finish;
-
-	if (!dma_buf->priv)
-		return;
-
-	imported = (struct imported_sgt_info *)dma_buf->priv;
-
-	if (!dmabuf_refcount(imported->dma_buf))
-		imported->dma_buf = NULL;
-
-	imported->importers--;
-
-	if (imported->importers == 0) {
-		bknd_ops->unmap_shared_pages(&imported->refs_info,
-					     imported->nents);
-
-		if (imported->sgt) {
-			sg_free_table(imported->sgt);
-			kfree(imported->sgt);
-			imported->sgt = NULL;
-		}
-	}
-
-	finish = imported && !imported->valid &&
-		 !imported->importers;
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_RELEASE);
-
-	/*
-	 * Check if buffer is still valid and if not remove it
-	 * from imported list. That has to be done after sending
-	 * sync request
-	 */
-	if (finish) {
-		hyper_dmabuf_remove_imported(imported->hid);
-		kfree(imported);
-	}
-}
-
-static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf,
-					     enum dma_data_direction dir)
-{
-	struct imported_sgt_info *imported;
-	int ret;
-
-	if (!dmabuf->priv)
-		return -EINVAL;
-
-	imported = (struct imported_sgt_info *)dmabuf->priv;
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
-
-	return ret;
-}
-
-static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf,
-					   enum dma_data_direction dir)
-{
-	struct imported_sgt_info *imported;
-	int ret;
-
-	if (!dmabuf->priv)
-		return -EINVAL;
-
-	imported = (struct imported_sgt_info *)dmabuf->priv;
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_END_CPU_ACCESS);
-
-	return ret;
-}
-
-static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf,
-					  unsigned long pgnum)
-{
-	struct imported_sgt_info *imported;
-	int ret;
-
-	if (!dmabuf->priv)
-		return NULL;
-
-	imported = (struct imported_sgt_info *)dmabuf->priv;
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KMAP_ATOMIC);
-
-	/* TODO: NULL for now. Need to return the addr of mapped region */
-	return NULL;
-}
-
-static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf,
-					   unsigned long pgnum, void *vaddr)
-{
-	struct imported_sgt_info *imported;
-	int ret;
-
-	if (!dmabuf->priv)
-		return;
-
-	imported = (struct imported_sgt_info *)dmabuf->priv;
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
-}
-
-static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
-{
-	struct imported_sgt_info *imported;
-	int ret;
-
-	if (!dmabuf->priv)
-		return NULL;
-
-	imported = (struct imported_sgt_info *)dmabuf->priv;
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KMAP);
-
-	/* TODO: NULL for now. Need to return the address of the mapped region */
-	return NULL;
-}
-
-static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum,
-				    void *vaddr)
-{
-	struct imported_sgt_info *imported;
-	int ret;
-
-	if (!dmabuf->priv)
-		return;
-
-	imported = (struct imported_sgt_info *)dmabuf->priv;
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KUNMAP);
-}
-
-static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf,
-				 struct vm_area_struct *vma)
-{
-	struct imported_sgt_info *imported;
-	int ret;
-
-	if (!dmabuf->priv)
-		return -EINVAL;
-
-	imported = (struct imported_sgt_info *)dmabuf->priv;
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_MMAP);
-
-	return ret;
-}
-
-static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
-{
-	struct imported_sgt_info *imported;
-	int ret;
-
-	if (!dmabuf->priv)
-		return NULL;
-
-	imported = (struct imported_sgt_info *)dmabuf->priv;
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_VMAP);
-
-	return NULL;
-}
-
-static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
-{
-	struct imported_sgt_info *imported;
-	int ret;
-
-	if (!dmabuf->priv)
-		return;
-
-	imported = (struct imported_sgt_info *)dmabuf->priv;
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_VUNMAP);
-}
-
-static const struct dma_buf_ops hyper_dmabuf_ops = {
-	.attach = hyper_dmabuf_ops_attach,
-	.detach = hyper_dmabuf_ops_detach,
-	.map_dma_buf = hyper_dmabuf_ops_map,
-	.unmap_dma_buf = hyper_dmabuf_ops_unmap,
-	.release = hyper_dmabuf_ops_release,
-	.begin_cpu_access = (void *)hyper_dmabuf_ops_begin_cpu_access,
-	.end_cpu_access = (void *)hyper_dmabuf_ops_end_cpu_access,
-	.map_atomic = hyper_dmabuf_ops_kmap_atomic,
-	.unmap_atomic = hyper_dmabuf_ops_kunmap_atomic,
-	.map = hyper_dmabuf_ops_kmap,
-	.unmap = hyper_dmabuf_ops_kunmap,
-	.mmap = hyper_dmabuf_ops_mmap,
-	.vmap = hyper_dmabuf_ops_vmap,
-	.vunmap = hyper_dmabuf_ops_vunmap,
-};
-
-/* exporting dmabuf as fd */
-int hyper_dmabuf_export_fd(struct imported_sgt_info *imported, int flags)
-{
-	int fd = -1;
-
-	/* call hyper_dmabuf_export_dmabuf and create
-	 * and bind a handle for it then release
-	 */
-	hyper_dmabuf_export_dma_buf(imported);
-
-	if (imported->dma_buf)
-		fd = dma_buf_fd(imported->dma_buf, flags);
-
-	return fd;
-}
-
-void hyper_dmabuf_export_dma_buf(struct imported_sgt_info *imported)
-{
-	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
-
-	exp_info.ops = &hyper_dmabuf_ops;
-
-	/* multiple of PAGE_SIZE, not considering offset */
-	exp_info.size = imported->sgt->nents * PAGE_SIZE;
-	exp_info.flags = 0; /* not sure about flags */
-	exp_info.priv = imported;
-
-	imported->dma_buf = dma_buf_export(&exp_info);
-}
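Since exp_info.size above is rounded up to whole pages, the exact byte size of
an imported buffer can still be derived from its page metadata; a sketch of
that arithmetic (the same formula the HYPER_DMABUF_SIZE macro in
hyper_dmabuf_query.c encodes):

	/* exact bytes = whole pages minus the unused head of the first
	 * page and the unused tail of the last page
	 */
	size_t exact_size = imported->nents * PAGE_SIZE
			  - imported->frst_ofst
			  - (PAGE_SIZE - imported->last_len);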
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h
deleted file mode 100644
index c5505a4..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h
+++ /dev/null
@@ -1,32 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_OPS_H__
-#define __HYPER_DMABUF_OPS_H__
-
-int hyper_dmabuf_export_fd(struct imported_sgt_info *imported, int flags);
-
-void hyper_dmabuf_export_dma_buf(struct imported_sgt_info *imported);
-
-#endif /* __HYPER_DMABUF_OPS_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c
deleted file mode 100644
index 1f2f56b..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c
+++ /dev/null
@@ -1,172 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/dma-buf.h>
-#include <linux/uaccess.h>
-#include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_struct.h"
-#include "hyper_dmabuf_id.h"
-
-#define HYPER_DMABUF_SIZE(nents, first_offset, last_len) \
-	((nents)*PAGE_SIZE - (first_offset) - PAGE_SIZE + (last_len))
-
-int hyper_dmabuf_query_exported(struct exported_sgt_info *exported,
-				int query, unsigned long *info)
-{
-	switch (query) {
-	case HYPER_DMABUF_QUERY_TYPE:
-		*info = EXPORTED;
-		break;
-
-	/* exporting domain of this specific dmabuf */
-	case HYPER_DMABUF_QUERY_EXPORTER:
-		*info = HYPER_DMABUF_DOM_ID(exported->hid);
-		break;
-
-	/* importing domain of this specific dmabuf */
-	case HYPER_DMABUF_QUERY_IMPORTER:
-		*info = exported->rdomid;
-		break;
-
-	/* size of dmabuf in bytes */
-	case HYPER_DMABUF_QUERY_SIZE:
-		*info = exported->dma_buf->size;
-		break;
-
-	/* whether the buffer is used by importer */
-	case HYPER_DMABUF_QUERY_BUSY:
-		*info = (exported->active > 0);
-		break;
-
-	/* whether the buffer is unexported */
-	case HYPER_DMABUF_QUERY_UNEXPORTED:
-		*info = !exported->valid;
-		break;
-
-	/* whether the buffer is scheduled to be unexported */
-	case HYPER_DMABUF_QUERY_DELAYED_UNEXPORTED:
-		*info = exported->unexport_sched;
-		break;
-
-	/* size of private info attached to buffer */
-	case HYPER_DMABUF_QUERY_PRIV_INFO_SIZE:
-		*info = exported->sz_priv;
-		break;
-
-	/* copy private info attached to buffer */
-	case HYPER_DMABUF_QUERY_PRIV_INFO:
-		if (exported->sz_priv > 0) {
-			int n;
-
-			n = copy_to_user((void __user *) *info,
-					exported->priv,
-					exported->sz_priv);
-			if (n != 0)
-				return -EINVAL;
-		}
-		break;
-
-	default:
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-
-int hyper_dmabuf_query_imported(struct imported_sgt_info *imported,
-				int query, unsigned long *info)
-{
-	switch (query) {
-	case HYPER_DMABUF_QUERY_TYPE:
-		*info = IMPORTED;
-		break;
-
-	/* exporting domain of this specific dmabuf */
-	case HYPER_DMABUF_QUERY_EXPORTER:
-		*info = HYPER_DMABUF_DOM_ID(imported->hid);
-		break;
-
-	/* importing domain of this specific dmabuf */
-	case HYPER_DMABUF_QUERY_IMPORTER:
-		*info = hy_drv_priv->domid;
-		break;
-
-	/* size of dmabuf in bytes */
-	case HYPER_DMABUF_QUERY_SIZE:
-		if (imported->dma_buf) {
-			/* if local dma_buf is created (if it's
-			 * ever mapped), retrieve it directly
-			 * from struct dma_buf *
-			 */
-			*info = imported->dma_buf->size;
-		} else {
-			/* calculate it from given nents, frst_ofst
-			 * and last_len
-			 */
-			*info = HYPER_DMABUF_SIZE(imported->nents,
-						  imported->frst_ofst,
-						  imported->last_len);
-		}
-		break;
-
-	/* whether the buffer is used or not */
-	case HYPER_DMABUF_QUERY_BUSY:
-		/* checks if it's used by importer */
-		*info = (imported->importers > 0);
-		break;
-
-	/* whether the buffer is unexported */
-	case HYPER_DMABUF_QUERY_UNEXPORTED:
-		*info = !imported->valid;
-		break;
-
-	/* size of private info attached to buffer */
-	case HYPER_DMABUF_QUERY_PRIV_INFO_SIZE:
-		*info = imported->sz_priv;
-		break;
-
-	/* copy private info attached to buffer */
-	case HYPER_DMABUF_QUERY_PRIV_INFO:
-		if (imported->sz_priv > 0) {
-			int n;
-
-			n = copy_to_user((void __user *)*info,
-					imported->priv,
-					imported->sz_priv);
-			if (n != 0)
-				return -EINVAL;
-		}
-		break;
-
-	default:
-		return -EINVAL;
-	}
-
-	return 0;
-}
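For HYPER_DMABUF_QUERY_PRIV_INFO the info argument doubles as a user-space
pointer, so a hypothetical ioctl-side caller (user_buf is an assumed __user
buffer, not a name from this file) would look roughly like:

	unsigned long info = (unsigned long)user_buf;
	int ret;

	ret = hyper_dmabuf_query_imported(imported,
					  HYPER_DMABUF_QUERY_PRIV_INFO, &info);
	if (ret)
		return ret;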
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
deleted file mode 100644
index 65ae738..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
+++ /dev/null
@@ -1,10 +0,0 @@
-#ifndef __HYPER_DMABUF_QUERY_H__
-#define __HYPER_DMABUF_QUERY_H__
-
-int hyper_dmabuf_query_imported(struct imported_sgt_info *imported,
-				int query, unsigned long *info);
-
-int hyper_dmabuf_query_exported(struct exported_sgt_info *exported,
-				int query, unsigned long *info);
-
-#endif // __HYPER_DMABUF_QUERY_H__
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
deleted file mode 100644
index a82fd7b..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
+++ /dev/null
@@ -1,322 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/slab.h>
-#include <linux/dma-buf.h>
-#include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_struct.h"
-#include "hyper_dmabuf_list.h"
-#include "hyper_dmabuf_msg.h"
-#include "hyper_dmabuf_id.h"
-#include "hyper_dmabuf_sgl_proc.h"
-
-/* Whenever the importer performs dma operations on the remote
- * domain, a notification is sent to the exporter so that the
- * exporter issues an equivalent dma operation on the original
- * dma buf for indirect synchronization via shadow operations.
- *
- * All pointers and references (e.g. struct sg_table *,
- * struct dma_buf_attachment) created via these operations on
- * the exporter's side are kept in per-type stacks (implemented
- * as circular linked lists) so that they can be re-referenced
- * later when unmapping operations are invoked to free them.
- *
- * The very first element at the bottom of each stack is what
- * was created when the initial export was issued, so it must
- * not be modified or released by this function.
- */
-int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
-{
-	struct exported_sgt_info *exported;
-	struct sgt_list *sgtl;
-	struct attachment_list *attachl;
-	struct kmap_vaddr_list *va_kmapl;
-	struct vmap_vaddr_list *va_vmapl;
-	int ret;
-
-	/* find the corresponding SGT for the id */
-	exported = hyper_dmabuf_find_exported(hid);
-
-	if (!exported) {
-		dev_err(hy_drv_priv->dev,
-			"dmabuf remote sync::can't find exported list\n");
-		return -ENOENT;
-	}
-
-	switch (ops) {
-	case HYPER_DMABUF_OPS_ATTACH:
-		attachl = kcalloc(1, sizeof(*attachl), GFP_KERNEL);
-
-		if (!attachl)
-			return -ENOMEM;
-
-		attachl->attach = dma_buf_attach(exported->dma_buf,
-						 hy_drv_priv->dev);
-
-		if (!attachl->attach) {
-			kfree(attachl);
-			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_ATTACH\n");
-			return -ENOMEM;
-		}
-
-		list_add(&attachl->list, &exported->active_attached->list);
-		break;
-
-	case HYPER_DMABUF_OPS_DETACH:
-		if (list_empty(&exported->active_attached->list)) {
-			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_DETACH\n");
-			dev_err(hy_drv_priv->dev,
-				"no more dmabuf attachment left to be detached\n");
-			return -EFAULT;
-		}
-
-		attachl = list_first_entry(&exported->active_attached->list,
-					   struct attachment_list, list);
-
-		dma_buf_detach(exported->dma_buf, attachl->attach);
-		list_del(&attachl->list);
-		kfree(attachl);
-		break;
-
-	case HYPER_DMABUF_OPS_MAP:
-		if (list_empty(&exported->active_attached->list)) {
-			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_MAP\n");
-			dev_err(hy_drv_priv->dev,
-				"no more dmabuf attachment left to be mapped\n");
-			return -EFAULT;
-		}
-
-		attachl = list_first_entry(&exported->active_attached->list,
-					   struct attachment_list, list);
-
-		sgtl = kcalloc(1, sizeof(*sgtl), GFP_KERNEL);
-
-		if (!sgtl)
-			return -ENOMEM;
-
-		sgtl->sgt = dma_buf_map_attachment(attachl->attach,
-						   DMA_BIDIRECTIONAL);
-		if (!sgtl->sgt) {
-			kfree(sgtl);
-			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_MAP\n");
-			return -ENOMEM;
-		}
-		list_add(&sgtl->list, &exported->active_sgts->list);
-		break;
-
-	case HYPER_DMABUF_OPS_UNMAP:
-		if (list_empty(&exported->active_sgts->list) ||
-		    list_empty(&exported->active_attached->list)) {
-			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_UNMAP\n");
-			dev_err(hy_drv_priv->dev,
-				"no SGT or attach left to be unmapped\n");
-			return -EFAULT;
-		}
-
-		attachl = list_first_entry(&exported->active_attached->list,
-					   struct attachment_list, list);
-		sgtl = list_first_entry(&exported->active_sgts->list,
-					struct sgt_list, list);
-
-		dma_buf_unmap_attachment(attachl->attach, sgtl->sgt,
-					 DMA_BIDIRECTIONAL);
-		list_del(&sgtl->list);
-		kfree(sgtl);
-		break;
-
-	case HYPER_DMABUF_OPS_RELEASE:
-		dev_dbg(hy_drv_priv->dev,
-			"id:%d key:%d %d %d} released, ref left: %d\n",
-			 exported->hid.id, exported->hid.rng_key[0],
-			 exported->hid.rng_key[1], exported->hid.rng_key[2],
-			 exported->active - 1);
-
-		exported->active--;
-
-		/* If there are still importers just break, if no then
-		 * continue with final cleanup
-		 */
-		if (exported->active)
-			break;
-
-		/* Importer just released buffer fd, check if there is
-		 * any other importer still using it.
-		 * If not and buffer was unexported, clean up shared
-		 * data and remove that buffer.
-		 */
-		dev_dbg(hy_drv_priv->dev,
-			"Buffer {id:%d key:%d %d %d} final released\n",
-			exported->hid.id, exported->hid.rng_key[0],
-			exported->hid.rng_key[1], exported->hid.rng_key[2]);
-
-		if (!exported->valid && !exported->active &&
-		    !exported->unexport_sched) {
-			hyper_dmabuf_cleanup_sgt_info(exported, false);
-			hyper_dmabuf_remove_exported(hid);
-			kfree(exported);
-			/* store hyper_dmabuf_id in the list for reuse */
-			hyper_dmabuf_store_hid(hid);
-		}
-
-		break;
-
-	case HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS:
-		ret = dma_buf_begin_cpu_access(exported->dma_buf,
-					       DMA_BIDIRECTIONAL);
-		if (ret) {
-			dev_err(hy_drv_priv->dev,
-				"HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS\n");
-			return ret;
-		}
-		break;
-
-	case HYPER_DMABUF_OPS_END_CPU_ACCESS:
-		ret = dma_buf_end_cpu_access(exported->dma_buf,
-					     DMA_BIDIRECTIONAL);
-		if (ret) {
-			dev_err(hy_drv_priv->dev,
-				"HYPER_DMABUF_OPS_END_CPU_ACCESS\n");
-			return ret;
-		}
-		break;
-
-	case HYPER_DMABUF_OPS_KMAP_ATOMIC:
-	case HYPER_DMABUF_OPS_KMAP:
-		va_kmapl = kcalloc(1, sizeof(*va_kmapl), GFP_KERNEL);
-		if (!va_kmapl)
-			return -ENOMEM;
-
-		/* dummy kmapping of 1 page */
-		if (ops == HYPER_DMABUF_OPS_KMAP_ATOMIC)
-			va_kmapl->vaddr = dma_buf_kmap_atomic(
-						exported->dma_buf, 1);
-		else
-			va_kmapl->vaddr = dma_buf_kmap(
-						exported->dma_buf, 1);
-
-		if (!va_kmapl->vaddr) {
-			kfree(va_kmapl);
-			dev_err(hy_drv_priv->dev,
-				"HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
-			return -ENOMEM;
-		}
-		list_add(&va_kmapl->list, &exported->va_kmapped->list);
-		break;
-
-	case HYPER_DMABUF_OPS_KUNMAP_ATOMIC:
-	case HYPER_DMABUF_OPS_KUNMAP:
-		if (list_empty(&exported->va_kmapped->list)) {
-			dev_err(hy_drv_priv->dev,
-				"HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
-			dev_err(hy_drv_priv->dev,
-				"no more dmabuf VA to be freed\n");
-			return -EFAULT;
-		}
-
-		va_kmapl = list_first_entry(&exported->va_kmapped->list,
-					    struct kmap_vaddr_list, list);
-		if (!va_kmapl->vaddr) {
-			dev_err(hy_drv_priv->dev,
-				"HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
-			return PTR_ERR(va_kmapl->vaddr);
-		}
-
-		/* unmapping 1 page */
-		if (ops == HYPER_DMABUF_OPS_KUNMAP_ATOMIC)
-			dma_buf_kunmap_atomic(exported->dma_buf,
-					      1, va_kmapl->vaddr);
-		else
-			dma_buf_kunmap(exported->dma_buf,
-				       1, va_kmapl->vaddr);
-
-		list_del(&va_kmapl->list);
-		kfree(va_kmapl);
-		break;
-
-	case HYPER_DMABUF_OPS_MMAP:
-		/* currently not supported: looking for a way to create
-		 * a dummy vma
-		 */
-		dev_warn(hy_drv_priv->dev,
-			 "remote sync::sychronized mmap is not supported\n");
-		break;
-
-	case HYPER_DMABUF_OPS_VMAP:
-		va_vmapl = kcalloc(1, sizeof(*va_vmapl), GFP_KERNEL);
-
-		if (!va_vmapl)
-			return -ENOMEM;
-
-		/* dummy vmapping */
-		va_vmapl->vaddr = dma_buf_vmap(exported->dma_buf);
-
-		if (!va_vmapl->vaddr) {
-			kfree(va_vmapl);
-			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_VMAP\n");
-			return -ENOMEM;
-		}
-		list_add(&va_vmapl->list, &exported->va_vmapped->list);
-		break;
-
-	case HYPER_DMABUF_OPS_VUNMAP:
-		if (list_empty(&exported->va_vmapped->list)) {
-			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_VUNMAP\n");
-			dev_err(hy_drv_priv->dev,
-				"no more dmabuf VA to be freed\n");
-			return -EFAULT;
-		}
-		va_vmapl = list_first_entry(&exported->va_vmapped->list,
-					struct vmap_vaddr_list, list);
-		if (!va_vmapl || va_vmapl->vaddr == NULL) {
-			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_VUNMAP\n");
-			return -EFAULT;
-		}
-
-		dma_buf_vunmap(exported->dma_buf, va_vmapl->vaddr);
-
-		list_del(&va_vmapl->list);
-		kfree(va_vmapl);
-		break;
-
-	default:
-		/* program should not get here */
-		break;
-	}
-
-	return 0;
-}
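The net effect is that every importer-side dma-buf call is mirrored one-to-one
on the exporter. A conceptual sketch of the sequencing (not literal driver
code; in practice these calls arrive via HYPER_DMABUF_OPS_TO_SOURCE messages):

	/* importer calls dma_buf_attach() + dma_buf_map_attachment();
	 * the exporter replays them as shadow operations:
	 */
	hyper_dmabuf_remote_sync(hid, HYPER_DMABUF_OPS_ATTACH);
	hyper_dmabuf_remote_sync(hid, HYPER_DMABUF_OPS_MAP);

	/* ... and on teardown the stacks are unwound in reverse: */
	hyper_dmabuf_remote_sync(hid, HYPER_DMABUF_OPS_UNMAP);
	hyper_dmabuf_remote_sync(hid, HYPER_DMABUF_OPS_DETACH);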
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h
deleted file mode 100644
index 36638928..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h
+++ /dev/null
@@ -1,30 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_REMOTE_SYNC_H__
-#define __HYPER_DMABUF_REMOTE_SYNC_H__
-
-int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops);
-
-#endif // __HYPER_DMABUF_REMOTE_SYNC_H__
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
deleted file mode 100644
index d15eb17..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
+++ /dev/null
@@ -1,255 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/slab.h>
-#include <linux/dma-buf.h>
-#include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_struct.h"
-#include "hyper_dmabuf_sgl_proc.h"
-
-#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
-
-/* return the total number of pages referenced by an sgt,
- * for pre-calculating the number of pages behind it
- */
-static int get_num_pgs(struct sg_table *sgt)
-{
-	struct scatterlist *sgl;
-	int length, i;
-	/* at least one page */
-	int num_pages = 1;
-
-	sgl = sgt->sgl;
-
-	length = sgl->length - PAGE_SIZE + sgl->offset;
-
-	/* round-up */
-	num_pages += ((length + PAGE_SIZE - 1)/PAGE_SIZE);
-
-	for (i = 1; i < sgt->nents; i++) {
-		sgl = sg_next(sgl);
-
-		/* round-up */
-		num_pages += ((sgl->length + PAGE_SIZE - 1) /
-			     PAGE_SIZE);
-	}
-
-	return num_pages;
-}
-
-/* extract pages directly from struct sg_table */
-struct pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
-{
-	struct pages_info *pg_info;
-	int i, j, k;
-	int length;
-	struct scatterlist *sgl;
-
-	pg_info = kmalloc(sizeof(*pg_info), GFP_KERNEL);
-	if (!pg_info)
-		return NULL;
-
-	pg_info->pgs = kmalloc_array(get_num_pgs(sgt),
-				     sizeof(struct page *),
-				     GFP_KERNEL);
-
-	if (!pg_info->pgs) {
-		kfree(pg_info);
-		return NULL;
-	}
-
-	sgl = sgt->sgl;
-
-	pg_info->nents = 1;
-	pg_info->frst_ofst = sgl->offset;
-	pg_info->pgs[0] = sg_page(sgl);
-	length = sgl->length - PAGE_SIZE + sgl->offset;
-	i = 1;
-
-	while (length > 0) {
-		pg_info->pgs[i] = nth_page(sg_page(sgl), i);
-		length -= PAGE_SIZE;
-		pg_info->nents++;
-		i++;
-	}
-
-	for (j = 1; j < sgt->nents; j++) {
-		sgl = sg_next(sgl);
-		pg_info->pgs[i++] = sg_page(sgl);
-		length = sgl->length - PAGE_SIZE;
-		pg_info->nents++;
-		k = 1;
-
-		while (length > 0) {
-			pg_info->pgs[i++] = nth_page(sg_page(sgl), k++);
-			length -= PAGE_SIZE;
-			pg_info->nents++;
-		}
-	}
-
-	/*
-	 * length at this point will be 0 or negative, so the last
-	 * page size is calculated by adding it to PAGE_SIZE
-	 */
-	pg_info->last_len = PAGE_SIZE + length;
-
-	return pg_info;
-}
-
-/* create sg_table with given pages and other parameters */
-struct sg_table *hyper_dmabuf_create_sgt(struct page **pgs,
-					 int frst_ofst, int last_len,
-					 int nents)
-{
-	struct sg_table *sgt;
-	struct scatterlist *sgl;
-	int i, ret;
-
-	sgt = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
-	if (!sgt)
-		return NULL;
-
-	ret = sg_alloc_table(sgt, nents, GFP_KERNEL);
-	if (ret) {
-		/* sg_alloc_table cleans up after itself on failure,
-		 * so only the table struct needs to be freed
-		 */
-		kfree(sgt);
-		return NULL;
-	}
-
-	sgl = sgt->sgl;
-
-	sg_set_page(sgl, pgs[0], PAGE_SIZE-frst_ofst, frst_ofst);
-
-	for (i = 1; i < nents-1; i++) {
-		sgl = sg_next(sgl);
-		sg_set_page(sgl, pgs[i], PAGE_SIZE, 0);
-	}
-
-	if (nents > 1) { /* more than one page */
-		sgl = sg_next(sgl);
-		sg_set_page(sgl, pgs[i], last_len, 0);
-	}
-
-	return sgt;
-}
-
-int hyper_dmabuf_cleanup_sgt_info(struct exported_sgt_info *exported,
-				  int force)
-{
-	struct sgt_list *sgtl;
-	struct attachment_list *attachl;
-	struct kmap_vaddr_list *va_kmapl;
-	struct vmap_vaddr_list *va_vmapl;
-	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
-
-	if (!exported) {
-		dev_err(hy_drv_priv->dev, "invalid hyper_dmabuf_id\n");
-		return -EINVAL;
-	}
-
-	/* if force != 1, sgt_info can be released only if
-	 * there is no activity on the exported dma-buf on the
-	 * importer side.
-	 */
-	if (!force &&
-	    exported->active) {
-		dev_warn(hy_drv_priv->dev,
-			 "dma-buf is used by importer\n");
-
-		return -EPERM;
-	}
-
-	/* force == 1 is not recommended */
-	while (!list_empty(&exported->va_kmapped->list)) {
-		va_kmapl = list_first_entry(&exported->va_kmapped->list,
-					    struct kmap_vaddr_list, list);
-
-		dma_buf_kunmap(exported->dma_buf, 1, va_kmapl->vaddr);
-		list_del(&va_kmapl->list);
-		kfree(va_kmapl);
-	}
-
-	while (!list_empty(&exported->va_vmapped->list)) {
-		va_vmapl = list_first_entry(&exported->va_vmapped->list,
-					    struct vmap_vaddr_list, list);
-
-		dma_buf_vunmap(exported->dma_buf, va_vmapl->vaddr);
-		list_del(&va_vmapl->list);
-		kfree(va_vmapl);
-	}
-
-	while (!list_empty(&exported->active_sgts->list)) {
-		attachl = list_first_entry(&exported->active_attached->list,
-					   struct attachment_list, list);
-
-		sgtl = list_first_entry(&exported->active_sgts->list,
-					struct sgt_list, list);
-
-		dma_buf_unmap_attachment(attachl->attach, sgtl->sgt,
-					 DMA_BIDIRECTIONAL);
-		list_del(&sgtl->list);
-		kfree(sgtl);
-	}
-
-	while (!list_empty(&exported->active_attached->list)) {
-		attachl = list_first_entry(&exported->active_attached->list,
-					   struct attachment_list, list);
-
-		dma_buf_detach(exported->dma_buf, attachl->attach);
-		list_del(&attachl->list);
-		kfree(attachl);
-	}
-
-	/* Start cleanup of buffer in reverse order to exporting */
-	bknd_ops->unshare_pages(&exported->refs_info, exported->nents);
-
-	/* unmap dma-buf */
-	dma_buf_unmap_attachment(exported->active_attached->attach,
-				 exported->active_sgts->sgt,
-				 DMA_BIDIRECTIONAL);
-
-	/* detatch dma-buf */
-	dma_buf_detach(exported->dma_buf, exported->active_attached->attach);
-
-	/* close connection to dma-buf completely */
-	dma_buf_put(exported->dma_buf);
-	exported->dma_buf = NULL;
-
-	kfree(exported->active_sgts);
-	kfree(exported->active_attached);
-	kfree(exported->va_kmapped);
-	kfree(exported->va_vmapped);
-	kfree(exported->priv);
-
-	return 0;
-}
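Taken together, hyper_dmabuf_ext_pgs and hyper_dmabuf_create_sgt allow a round
trip from an existing sg_table to a flat page array and back. A hedged usage
sketch (orig_sgt is an assumed input inside a hypothetical function):

	struct pages_info *pg_info;
	struct sg_table *rebuilt;

	pg_info = hyper_dmabuf_ext_pgs(orig_sgt);
	if (!pg_info)
		return -ENOMEM;

	rebuilt = hyper_dmabuf_create_sgt(pg_info->pgs, pg_info->frst_ofst,
					  pg_info->last_len, pg_info->nents);

	kfree(pg_info->pgs);
	kfree(pg_info);

	if (!rebuilt)
		return -ENOMEM;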
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
deleted file mode 100644
index 869d982..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
+++ /dev/null
@@ -1,41 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_IMP_H__
-#define __HYPER_DMABUF_IMP_H__
-
-/* extract pages directly from struct sg_table */
-struct pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt);
-
-/* create sg_table with given pages and other parameters */
-struct sg_table *hyper_dmabuf_create_sgt(struct page **pgs,
-					 int frst_ofst, int last_len,
-					 int nents);
-
-int hyper_dmabuf_cleanup_sgt_info(struct exported_sgt_info *exported,
-				  int force);
-
-void hyper_dmabuf_free_sgt(struct sg_table *sgt);
-
-#endif /* __HYPER_DMABUF_IMP_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
deleted file mode 100644
index a11f804..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
+++ /dev/null
@@ -1,141 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_STRUCT_H__
-#define __HYPER_DMABUF_STRUCT_H__
-
-/* stack of mapped sgts */
-struct sgt_list {
-	struct sg_table *sgt;
-	struct list_head list;
-};
-
-/* stack of attachments */
-struct attachment_list {
-	struct dma_buf_attachment *attach;
-	struct list_head list;
-};
-
-/* stack of vaddr mapped via kmap */
-struct kmap_vaddr_list {
-	void *vaddr;
-	struct list_head list;
-};
-
-/* stack of vaddr mapped via vmap */
-struct vmap_vaddr_list {
-	void *vaddr;
-	struct list_head list;
-};
-
-/* Exporter builds pages_info before sharing pages */
-struct pages_info {
-	int frst_ofst;
-	int last_len;
-	int nents;
-	struct page **pgs;
-};
-
-
-/* Exporter stores references to sgt in a hash table
- * Exporter keeps these references for synchronization
- * and tracking purposes
- */
-struct exported_sgt_info {
-	hyper_dmabuf_id_t hid;
-
-	/* VM ID of importer */
-	int rdomid;
-
-	struct dma_buf *dma_buf;
-	int nents;
-
-	/* list for tracking activities on dma_buf */
-	struct sgt_list *active_sgts;
-	struct attachment_list *active_attached;
-	struct kmap_vaddr_list *va_kmapped;
-	struct vmap_vaddr_list *va_vmapped;
-
-	/* set to 0 when unexported. Importer doesn't
-	 * do a new mapping of buffer if valid == false
-	 */
-	bool valid;
-
-	/* active == true if the buffer is actively used
-	 * (mapped) by importer
-	 */
-	int active;
-
-	/* hypervisor specific reference data for shared pages */
-	void *refs_info;
-
-	struct delayed_work unexport;
-	bool unexport_sched;
-
-	/* list for file pointers associated with all user space
-	 * application that have exported this same buffer to
-	 * another VM. This needs to be tracked to know whether
-	 * the buffer can be completely freed.
-	 */
-	struct file *filp;
-
-	/* size of private */
-	size_t sz_priv;
-
-	/* private data associated with the exported buffer */
-	char *priv;
-};
-
-/* imported_sgt_info contains information about an imported DMA_BUF.
- * This info is kept in the IMPORT list and asynchronously retrieved
- * and used to map the DMA_BUF on the importer VM's side upon an
- * export fd ioctl request from user-space
- */
-
-struct imported_sgt_info {
-	hyper_dmabuf_id_t hid; /* unique id for shared dmabuf imported */
-
-	/* hypervisor-specific handle to pages */
-	int ref_handle;
-
-	/* offset and size info of DMA_BUF */
-	int frst_ofst;
-	int last_len;
-	int nents;
-
-	struct dma_buf *dma_buf;
-	struct sg_table *sgt;
-
-	void *refs_info;
-	bool valid;
-	int importers;
-
-	/* size of private */
-	size_t sz_priv;
-
-	/* private data associated with the exported buffer */
-	char *priv;
-};
-
-#endif /* __HYPER_DMABUF_STRUCT_H__ */
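The four tracking lists in exported_sgt_info are used with their first element
as a dummy head (see hyper_dmabuf_remote_sync above, which list_adds onto
&exported->active_sgts->list). A sketch of how the export path presumably sets
two of them up (va_kmapped/va_vmapped follow the same pattern; error unwinding
is omitted in this sketch):

	exported->active_sgts = kmalloc(sizeof(struct sgt_list), GFP_KERNEL);
	exported->active_attached = kmalloc(sizeof(struct attachment_list),
					    GFP_KERNEL);
	if (!exported->active_sgts || !exported->active_attached)
		return -ENOMEM;

	INIT_LIST_HEAD(&exported->active_sgts->list);
	INIT_LIST_HEAD(&exported->active_attached->list);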
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
deleted file mode 100644
index 4a073ce..0000000
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ /dev/null
@@ -1,941 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/errno.h>
-#include <linux/slab.h>
-#include <linux/workqueue.h>
-#include <linux/delay.h>
-#include <xen/grant_table.h>
-#include <xen/events.h>
-#include <xen/xenbus.h>
-#include <asm/xen/page.h>
-#include "hyper_dmabuf_xen_comm.h"
-#include "hyper_dmabuf_xen_comm_list.h"
-#include "../hyper_dmabuf_drv.h"
-
-static int export_req_id;
-
-struct hyper_dmabuf_req req_pending = {0};
-
-static void xen_get_domid_delayed(struct work_struct *unused);
-static void xen_init_comm_env_delayed(struct work_struct *unused);
-
-static DECLARE_DELAYED_WORK(get_vm_id_work, xen_get_domid_delayed);
-static DECLARE_DELAYED_WORK(xen_init_comm_env_work, xen_init_comm_env_delayed);
-
-/* Creates an entry in xenstore that will keep details of all
- * exporter rings created by this domain
- */
-static int xen_comm_setup_data_dir(void)
-{
-	char buf[255];
-
-	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf",
-		hy_drv_priv->domid);
-
-	return xenbus_mkdir(XBT_NIL, buf, "");
-}
-
-/* Removes the xenstore entry with exporter ring details.
- * Other domains that have connected to any of the exporter rings
- * created by this domain will be notified about the removal of
- * this entry and will treat that as a signal to clean up the
- * importer rings created for this domain
- */
-static int xen_comm_destroy_data_dir(void)
-{
-	char buf[255];
-
-	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf",
-		hy_drv_priv->domid);
-
-	return xenbus_rm(XBT_NIL, buf, "");
-}
-
-/* Adds xenstore entries with details of the exporter ring created
- * for the given remote domain. It requires a special daemon running
- * in dom0 to make sure that the given remote domain will have the
- * right permissions to access that data.
- */
-static int xen_comm_expose_ring_details(int domid, int rdomid,
-					int gref, int port)
-{
-	char buf[255];
-	int ret;
-
-	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d",
-		domid, rdomid);
-
-	ret = xenbus_printf(XBT_NIL, buf, "grefid", "%d", gref);
-
-	if (ret) {
-		dev_err(hy_drv_priv->dev,
-			"Failed to write xenbus entry %s: %d\n",
-			buf, ret);
-
-		return ret;
-	}
-
-	ret = xenbus_printf(XBT_NIL, buf, "port", "%d", port);
-
-	if (ret) {
-		dev_err(hy_drv_priv->dev,
-			"Failed to write xenbus entry %s: %d\n",
-			buf, ret);
-
-		return ret;
-	}
-
-	return 0;
-}
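
Taken together, xen_comm_expose_ring_details() and the query helper below
define the xenstore layout used for ring discovery; the exporter writes
these nodes and the prospective importer reads them:

	/local/domain/<exporter-domid>/data/hyper_dmabuf/<importer-domid>/grefid
	/local/domain/<exporter-domid>/data/hyper_dmabuf/<importer-domid>/port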
-
-/*
- * Queries details of ring exposed by remote domain.
- */
-static int xen_comm_get_ring_details(int domid, int rdomid,
-				     int *grefid, int *port)
-{
-	char buf[255];
-	int ret;
-
-	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d",
-		rdomid, domid);
-
-	ret = xenbus_scanf(XBT_NIL, buf, "grefid", "%d", grefid);
-
-	if (ret <= 0) {
-		dev_err(hy_drv_priv->dev,
-			"Failed to read xenbus entry %s: %d\n",
-			buf, ret);
-
-		return ret;
-	}
-
-	ret = xenbus_scanf(XBT_NIL, buf, "port", "%d", port);
-
-	if (ret <= 0) {
-		dev_err(hy_drv_priv->dev,
-			"Failed to read xenbus entry %s: %d\n",
-			buf, ret);
-
-		return ret;
-	}
-
-	return 0;
-}
-
-static void xen_get_domid_delayed(struct work_struct *unused)
-{
-	struct xenbus_transaction xbt;
-	int domid, ret;
-
-	/* schedule another attempt if the driver is still running
-	 * and xenstore has not been initialized yet
-	 */
-	if (likely(xenstored_ready == 0)) {
-		dev_dbg(hy_drv_priv->dev,
-			"Xenstore is not ready yet. Will retry in 500ms\n");
-		schedule_delayed_work(&get_vm_id_work, msecs_to_jiffies(500));
-	} else {
-		xenbus_transaction_start(&xbt);
-
-		ret = xenbus_scanf(xbt, "domid", "", "%d", &domid);
-
-		if (ret <= 0)
-			domid = -1;
-
-		xenbus_transaction_end(xbt, 0);
-
-		/* try again since -1 is an invalid domain id
-		 * (but only if the driver is still running)
-		 */
-		if (unlikely(domid == -1)) {
-			dev_dbg(hy_drv_priv->dev,
-				"domid==-1 is invalid. Will retry it in 500ms\n");
-			schedule_delayed_work(&get_vm_id_work,
-					      msecs_to_jiffies(500));
-		} else {
-			dev_info(hy_drv_priv->dev,
-				 "Successfully retrieved domid from Xenstore:%d\n",
-				 domid);
-			hy_drv_priv->domid = domid;
-		}
-	}
-}
-
-int xen_be_get_domid(void)
-{
-	struct xenbus_transaction xbt;
-	int domid;
-
-	if (unlikely(xenstored_ready == 0)) {
-		xen_get_domid_delayed(NULL);
-		return -1;
-	}
-
-	xenbus_transaction_start(&xbt);
-
-	if (!xenbus_scanf(xbt, "domid", "", "%d", &domid))
-		domid = -1;
-
-	xenbus_transaction_end(xbt, 0);
-
-	return domid;
-}
-
-static int xen_comm_next_req_id(void)
-{
-	export_req_id++;
-	return export_req_id;
-}
-
-/* For now cache the latest rings as global variables. TODO: keep them in a list */
-static irqreturn_t front_ring_isr(int irq, void *info);
-static irqreturn_t back_ring_isr(int irq, void *info);
-
-/* Callback function that will be called on any change of the xenbus
- * path being watched. Used for detecting creation/destruction of a
- * remote domain's exporter ring.
- *
- * When a remote domain's exporter ring is detected, an importer ring
- * on this domain will be created.
- *
- * When destruction of a remote domain's exporter ring is detected,
- * this domain's importer ring will be cleaned up.
- *
- * Destruction can be caused by the remote domain unloading the module
- * or by its crash/forced shutdown.
- */
-static void remote_dom_exporter_watch_cb(struct xenbus_watch *watch,
-					 const char *path, const char *token)
-{
-	int rdom, ret;
-	uint32_t grefid, port;
-	struct xen_comm_rx_ring_info *ring_info;
-
-	/* Check which domain has changed its exporter rings */
-	ret = sscanf(watch->node, "/local/domain/%d/", &rdom);
-	if (ret <= 0)
-		return;
-
-	/* Check if we already have an importer ring created for the
-	 * given remote domain
-	 */
-	ring_info = xen_comm_find_rx_ring(rdom);
-
-	/* Try to query the remote domain's exporter ring details. If
-	 * that fails and we have an importer ring, the remote domain
-	 * has cleaned up its exporter ring, so our importer ring is
-	 * no longer useful.
-	 *
-	 * If querying the details succeeds and we don't have an importer
-	 * ring, the remote domain has set one up for us and we should
-	 * connect to it.
-	 */
-
-	ret = xen_comm_get_ring_details(xen_be_get_domid(),
-					rdom, &grefid, &port);
-
-	if (ring_info && ret != 0) {
-		dev_info(hy_drv_priv->dev,
-			 "Remote exporter closed, cleaning up importer\n");
-		xen_be_cleanup_rx_rbuf(rdom);
-	} else if (!ring_info && ret == 0) {
-		dev_info(hy_drv_priv->dev,
-			 "Registering importer\n");
-		xen_be_init_rx_rbuf(rdom);
-	}
-}
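
As an example of the path handling above: a watch event for
/local/domain/3/data/hyper_dmabuf/1/port (hypothetical domids) makes the
sscanf() yield rdom = 3, i.e. domain 3 changed the exporter ring it
publishes for this domain, and the subsequent query of its grefid/port
entries decides whether the local importer ring is created or torn down.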
-
-/* exporter needs to generate info for page sharing */
-int xen_be_init_tx_rbuf(int domid)
-{
-	struct xen_comm_tx_ring_info *ring_info;
-	struct xen_comm_sring *sring;
-	struct evtchn_alloc_unbound alloc_unbound;
-	struct evtchn_close close;
-
-	void *shared_ring;
-	int ret;
-
-	/* check if there's any existing tx channel in the table */
-	ring_info = xen_comm_find_tx_ring(domid);
-
-	if (ring_info) {
-		dev_info(hy_drv_priv->dev,
-			 "tx ring ch to domid = %d already exists\ngref = %d, port = %d\n",
-		ring_info->rdomain, ring_info->gref_ring, ring_info->port);
-		return 0;
-	}
-
-	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
-
-	if (!ring_info)
-		return -ENOMEM;
-
-	/* from exporter to importer */
-	shared_ring = (void *)__get_free_pages(GFP_KERNEL, 1);
-	if (!shared_ring) {
-		kfree(ring_info);
-		return -ENOMEM;
-	}
-
-	sring = (struct xen_comm_sring *) shared_ring;
-
-	SHARED_RING_INIT(sring);
-
-	FRONT_RING_INIT(&(ring_info->ring_front), sring, PAGE_SIZE);
-
-	ring_info->gref_ring = gnttab_grant_foreign_access(domid,
-						virt_to_mfn(shared_ring),
-						0);
-	if (ring_info->gref_ring < 0) {
-		/* failed to get a gref; free the ring page as well */
-		free_pages((unsigned long)shared_ring, 1);
-		kfree(ring_info);
-		return -EFAULT;
-	}
-
-	alloc_unbound.dom = DOMID_SELF;
-	alloc_unbound.remote_dom = domid;
-	ret = HYPERVISOR_event_channel_op(EVTCHNOP_alloc_unbound,
-					  &alloc_unbound);
-	if (ret) {
-		dev_err(hy_drv_priv->dev,
-			"Cannot allocate event channel\n");
-		gnttab_end_foreign_access(ring_info->gref_ring, 0,
-					  (unsigned long)shared_ring);
-		kfree(ring_info);
-		return -EIO;
-	}
-
-	/* setting up interrupt */
-	ret = bind_evtchn_to_irqhandler(alloc_unbound.port,
-					front_ring_isr, 0,
-					NULL, (void *) ring_info);
-
-	if (ret < 0) {
-		dev_err(hy_drv_priv->dev,
-			"Failed to setup event channel\n");
-		close.port = alloc_unbound.port;
-		HYPERVISOR_event_channel_op(EVTCHNOP_close, &close);
-		gnttab_end_foreign_access(ring_info->gref_ring, 0,
-					  (unsigned long)shared_ring);
-		kfree(ring_info);
-		return -EIO;
-	}
-
-	ring_info->rdomain = domid;
-	ring_info->irq = ret;
-	ring_info->port = alloc_unbound.port;
-
-	mutex_init(&ring_info->lock);
-
-	dev_dbg(hy_drv_priv->dev,
-		"%s: allocated eventchannel gref %d  port: %d  irq: %d\n",
-		__func__,
-		ring_info->gref_ring,
-		ring_info->port,
-		ring_info->irq);
-
-	ret = xen_comm_add_tx_ring(ring_info);
-
-	ret = xen_comm_expose_ring_details(xen_be_get_domid(),
-					   domid,
-					   ring_info->gref_ring,
-					   ring_info->port);
-
-	/* Register watch for remote domain exporter ring.
-	 * When remote domain will setup its exporter ring,
-	 * we will automatically connect our importer ring to it.
-	 */
-	ring_info->watch.callback = remote_dom_exporter_watch_cb;
-	ring_info->watch.node = kmalloc(255, GFP_KERNEL);
-
-	if (!ring_info->watch.node) {
-		kfree(ring_info);
-		return -ENOMEM;
-	}
-
-	sprintf((char *)ring_info->watch.node,
-		"/local/domain/%d/data/hyper_dmabuf/%d/port",
-		domid, xen_be_get_domid());
-
-	register_xenbus_watch(&ring_info->watch);
-
-	return ret;
-}
-
-/* cleans up exporter ring created for given remote domain */
-void xen_be_cleanup_tx_rbuf(int domid)
-{
-	struct xen_comm_tx_ring_info *ring_info;
-	struct xen_comm_rx_ring_info *rx_ring_info;
-
-	/* check whether we have an exporter ring for the given rdomain at all */
-	ring_info = xen_comm_find_tx_ring(domid);
-
-	if (!ring_info)
-		return;
-
-	xen_comm_remove_tx_ring(domid);
-
-	unregister_xenbus_watch(&ring_info->watch);
-	kfree(ring_info->watch.node);
-
-	/* No need to close the event channel explicitly;
-	 * unbind_from_irqhandler() below takes care of that
-	 */
-	unbind_from_irqhandler(ring_info->irq, (void *) ring_info);
-
-	/* No need to free the sring page here; gnttab_end_foreign_access()
-	 * will free it once the other side has ended its access
-	 */
-	gnttab_end_foreign_access(ring_info->gref_ring, 0,
-				  (unsigned long) ring_info->ring_front.sring);
-
-	kfree(ring_info);
-
-	rx_ring_info = xen_comm_find_rx_ring(domid);
-	if (!rx_ring_info)
-		return;
-
-	BACK_RING_INIT(&(rx_ring_info->ring_back),
-		       rx_ring_info->ring_back.sring,
-		       PAGE_SIZE);
-}
-
-/* importer needs to know about shared page and port numbers for
- * ring buffer and event channel
- */
-int xen_be_init_rx_rbuf(int domid)
-{
-	struct xen_comm_rx_ring_info *ring_info;
-	struct xen_comm_sring *sring;
-
-	struct page *shared_ring;
-
-	struct gnttab_map_grant_ref *map_ops;
-
-	int ret;
-	int rx_gref, rx_port;
-
-	/* check if there's existing rx ring channel */
-	ring_info = xen_comm_find_rx_ring(domid);
-
-	if (ring_info) {
-		dev_info(hy_drv_priv->dev,
-			 "rx ring ch from domid = %d already exists\n",
-			 ring_info->sdomain);
-
-		return 0;
-	}
-
-	ret = xen_comm_get_ring_details(xen_be_get_domid(), domid,
-					&rx_gref, &rx_port);
-
-	if (ret) {
-		dev_err(hy_drv_priv->dev,
-			"Domain %d has not created exporter ring for current domain\n",
-			domid);
-
-		return ret;
-	}
-
-	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
-
-	if (!ring_info)
-		return -ENOMEM;
-
-	ring_info->sdomain = domid;
-	ring_info->evtchn = rx_port;
-
-	map_ops = kmalloc(sizeof(*map_ops), GFP_KERNEL);
-
-	if (!map_ops) {
-		ret = -ENOMEM;
-		goto fail_no_map_ops;
-	}
-
-	if (gnttab_alloc_pages(1, &shared_ring)) {
-		ret = -ENOMEM;
-		goto fail_others;
-	}
-
-	gnttab_set_map_op(&map_ops[0],
-			  (unsigned long)pfn_to_kaddr(
-					page_to_pfn(shared_ring)),
-			  GNTMAP_host_map, rx_gref, domid);
-
-	gnttab_set_unmap_op(&ring_info->unmap_op,
-			    (unsigned long)pfn_to_kaddr(
-					page_to_pfn(shared_ring)),
-			    GNTMAP_host_map, -1);
-
-	ret = gnttab_map_refs(map_ops, NULL, &shared_ring, 1);
-	if (ret < 0) {
-		dev_err(hy_drv_priv->dev, "Cannot map ring\n");
-		ret = -EFAULT;
-		goto fail_others;
-	}
-
-	if (map_ops[0].status) {
-		dev_err(hy_drv_priv->dev, "Ring mapping failed\n");
-		ret = -EFAULT;
-		goto fail_others;
-	} else {
-		ring_info->unmap_op.handle = map_ops[0].handle;
-	}
-
-	kfree(map_ops);
-
-	sring = (struct xen_comm_sring *)pfn_to_kaddr(page_to_pfn(shared_ring));
-
-	BACK_RING_INIT(&ring_info->ring_back, sring, PAGE_SIZE);
-
-	ret = bind_interdomain_evtchn_to_irq(domid, rx_port);
-
-	if (ret < 0) {
-		ret = -EIO;
-		goto fail_others;
-	}
-
-	ring_info->irq = ret;
-
-	dev_dbg(hy_drv_priv->dev,
-		"%s: bound to eventchannel port: %d  irq: %d\n", __func__,
-		rx_port,
-		ring_info->irq);
-
-	ret = xen_comm_add_rx_ring(ring_info);
-
-	/* Set up communication channel in the opposite direction */
-	if (!xen_comm_find_tx_ring(domid))
-		ret = xen_be_init_tx_rbuf(domid);
-
-	ret = request_irq(ring_info->irq,
-			  back_ring_isr, 0,
-			  NULL, (void *)ring_info);
-
-	return ret;
-
-fail_others:
-	kfree(map_ops);
-
-fail_no_map_ops:
-	kfree(ring_info);
-
-	return ret;
-}
-
-/* cleans up the importer ring created for the given source domain */
-void xen_be_cleanup_rx_rbuf(int domid)
-{
-	struct xen_comm_rx_ring_info *ring_info;
-	struct xen_comm_tx_ring_info *tx_ring_info;
-	struct page *shared_ring;
-
-	/* check if we have importer ring created for given sdomain */
-	ring_info = xen_comm_find_rx_ring(domid);
-
-	if (!ring_info)
-		return;
-
-	xen_comm_remove_rx_ring(domid);
-
-	/* no need to close the event channel; unbind_from_irqhandler() does it */
-	unbind_from_irqhandler(ring_info->irq, (void *)ring_info);
-
-	/* unmapping shared ring page */
-	shared_ring = virt_to_page(ring_info->ring_back.sring);
-	gnttab_unmap_refs(&ring_info->unmap_op, NULL, &shared_ring, 1);
-	gnttab_free_pages(1, &shared_ring);
-
-	kfree(ring_info);
-
-	tx_ring_info = xen_comm_find_tx_ring(domid);
-	if (!tx_ring_info)
-		return;
-
-	SHARED_RING_INIT(tx_ring_info->ring_front.sring);
-	FRONT_RING_INIT(&(tx_ring_info->ring_front),
-			tx_ring_info->ring_front.sring,
-			PAGE_SIZE);
-}
-
-#ifdef CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD
-
-static void xen_rx_ch_add_delayed(struct work_struct *unused);
-
-static DECLARE_DELAYED_WORK(xen_rx_ch_auto_add_work, xen_rx_ch_add_delayed);
-
-#define DOMID_SCAN_START	1	/*  domid = 1 */
-#define DOMID_SCAN_END		10	/* domid = 10 */
-
-static void xen_rx_ch_add_delayed(struct work_struct *unused)
-{
-	int ret;
-	char buf[128];
-	int i, dummy;
-
-	dev_dbg(hy_drv_priv->dev,
-		"Scanning for new tx channels coming from other domains\n");
-
-	/* check other domains and schedule another work if driver
-	 * is still running and backend is valid
-	 */
-	if (hy_drv_priv &&
-	    hy_drv_priv->initialized) {
-		for (i = DOMID_SCAN_START; i < DOMID_SCAN_END + 1; i++) {
-			if (i == hy_drv_priv->domid)
-				continue;
-
-			sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d",
-				i, hy_drv_priv->domid);
-
-			ret = xenbus_scanf(XBT_NIL, buf, "port", "%d", &dummy);
-
-			if (ret > 0) {
-				if (xen_comm_find_rx_ring(i) != NULL)
-					continue;
-
-				ret = xen_be_init_rx_rbuf(i);
-
-				if (!ret)
-					dev_info(hy_drv_priv->dev,
-						 "Done rx ch init for VM %d\n",
-						 i);
-			}
-		}
-
-		/* check every 10 seconds */
-		schedule_delayed_work(&xen_rx_ch_auto_add_work,
-				      msecs_to_jiffies(10000));
-	}
-}
-
-#endif /* CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD */
-
-static void xen_init_comm_env_delayed(struct work_struct *unused)
-{
-	int ret;
-
-	/* schedule another work item if the driver is still running
-	 * and xenstore hasn't been initialized or the domid hasn't
-	 * been correctly retrieved.
-	 */
-	if (likely(xenstored_ready == 0 ||
-	    hy_drv_priv->domid == -1)) {
-		dev_dbg(hy_drv_priv->dev,
-			"Xenstore not ready. Will retry in 500ms\n");
-		schedule_delayed_work(&xen_init_comm_env_work,
-				      msecs_to_jiffies(500));
-	} else {
-		ret = xen_comm_setup_data_dir();
-		if (ret < 0) {
-			dev_err(hy_drv_priv->dev,
-				"Failed to create data dir in Xenstore\n");
-		} else {
-			dev_info(hy_drv_priv->dev,
-				"Successfully finished comm env init\n");
-			hy_drv_priv->initialized = true;
-
-#ifdef CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD
-			xen_rx_ch_add_delayed(NULL);
-#endif /* CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD */
-		}
-	}
-}
-
-int xen_be_init_comm_env(void)
-{
-	int ret;
-
-	xen_comm_ring_table_init();
-
-	if (unlikely(xenstored_ready == 0 ||
-	    hy_drv_priv->domid == -1)) {
-		xen_init_comm_env_delayed(NULL);
-		return -1;
-	}
-
-	ret = xen_comm_setup_data_dir();
-	if (ret < 0) {
-		dev_err(hy_drv_priv->dev,
-			"Failed to create data dir in Xenstore\n");
-	} else {
-		dev_info(hy_drv_priv->dev,
-			"Successfully finished comm env initialization\n");
-
-		hy_drv_priv->initialized = true;
-	}
-
-	return ret;
-}
-
-/* cleans up all tx/rx rings */
-static void xen_be_cleanup_all_rbufs(void)
-{
-	xen_comm_foreach_tx_ring(xen_be_cleanup_tx_rbuf);
-	xen_comm_foreach_rx_ring(xen_be_cleanup_rx_rbuf);
-}
-
-void xen_be_destroy_comm(void)
-{
-	xen_be_cleanup_all_rbufs();
-	xen_comm_destroy_data_dir();
-}
-
-int xen_be_send_req(int domid, struct hyper_dmabuf_req *req,
-			      int wait)
-{
-	struct xen_comm_front_ring *ring;
-	struct hyper_dmabuf_req *new_req;
-	struct xen_comm_tx_ring_info *ring_info;
-	int notify;
-
-	struct timeval tv_start, tv_end;
-	struct timeval tv_diff;
-
-	int timeout = 1000;
-
-	/* find a ring info for the channel */
-	ring_info = xen_comm_find_tx_ring(domid);
-	if (!ring_info) {
-		dev_err(hy_drv_priv->dev,
-			"Can't find ring info for the channel\n");
-		return -ENOENT;
-	}
-
-	ring = &ring_info->ring_front;
-
-	do_gettimeofday(&tv_start);
-
-	while (RING_FULL(ring)) {
-		dev_dbg(hy_drv_priv->dev, "RING_FULL\n");
-
-		if (timeout == 0) {
-			dev_err(hy_drv_priv->dev,
-				"Timeout while waiting for an entry in the ring\n");
-			return -EIO;
-		}
-		usleep_range(100, 120);
-		timeout--;
-	}
-
-	timeout = 1000;
-
-	mutex_lock(&ring_info->lock);
-
-	new_req = RING_GET_REQUEST(ring, ring->req_prod_pvt);
-	if (!new_req) {
-		mutex_unlock(&ring_info->lock);
-		dev_err(hy_drv_priv->dev,
-			"NULL REQUEST\n");
-		return -EIO;
-	}
-
-	req->req_id = xen_comm_next_req_id();
-
-	/* update req_pending with current request */
-	memcpy(&req_pending, req, sizeof(req_pending));
-
-	/* pass current request to the ring */
-	memcpy(new_req, req, sizeof(*new_req));
-
-	ring->req_prod_pvt++;
-
-	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(ring, notify);
-	if (notify)
-		notify_remote_via_irq(ring_info->irq);
-
-	if (wait) {
-		while (timeout--) {
-			if (req_pending.stat !=
-			    HYPER_DMABUF_REQ_NOT_RESPONDED)
-				break;
-			usleep_range(100, 120);
-		}
-
-		if (timeout < 0) {
-			mutex_unlock(&ring_info->lock);
-			dev_err(hy_drv_priv->dev,
-				"request timed-out\n");
-			return -EBUSY;
-		}
-
-		do_gettimeofday(&tv_end);
-
-		/* check the round-trip duration of the request,
-		 * for debugging
-		 */
-		if (tv_end.tv_usec >= tv_start.tv_usec) {
-			tv_diff.tv_sec = tv_end.tv_sec-tv_start.tv_sec;
-			tv_diff.tv_usec = tv_end.tv_usec-tv_start.tv_usec;
-		} else {
-			tv_diff.tv_sec = tv_end.tv_sec-tv_start.tv_sec-1;
-			tv_diff.tv_usec = tv_end.tv_usec+1000000-
-					  tv_start.tv_usec;
-		}
-
-		if (tv_diff.tv_sec != 0 || tv_diff.tv_usec > 16000)
-			dev_dbg(hy_drv_priv->dev,
-				"send_req:time diff: %ld sec, %ld usec\n",
-				tv_diff.tv_sec, tv_diff.tv_usec);
-	}
-
-	mutex_unlock(&ring_info->lock);
-
-	return 0;
-}
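
A minimal caller sketch for the synchronous path (the command id below is
an assumption for illustration; req_id is filled in by this layer, and the
call blocks for up to roughly 100ms waiting for the response):

	struct hyper_dmabuf_req req = {0};
	int ret;

	req.cmd = HYPER_DMABUF_EXPORT;	/* assumed command id */
	req.stat = HYPER_DMABUF_REQ_NOT_RESPONDED;

	ret = xen_be_send_req(importer_domid, &req, 1);
	if (ret)
		dev_err(hy_drv_priv->dev, "request failed: %d\n", ret);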
-
-/* ISR for handling requests */
-static irqreturn_t back_ring_isr(int irq, void *info)
-{
-	RING_IDX rc, rp;
-	struct hyper_dmabuf_req req;
-	struct hyper_dmabuf_resp resp;
-
-	int notify, more_to_do;
-	int ret;
-
-	struct xen_comm_rx_ring_info *ring_info;
-	struct xen_comm_back_ring *ring;
-
-	ring_info = (struct xen_comm_rx_ring_info *)info;
-	ring = &ring_info->ring_back;
-
-	dev_dbg(hy_drv_priv->dev, "%s\n", __func__);
-
-	do {
-		rc = ring->req_cons;
-		rp = ring->sring->req_prod;
-		more_to_do = 0;
-		while (rc != rp) {
-			if (RING_REQUEST_CONS_OVERFLOW(ring, rc))
-				break;
-
-			memcpy(&req, RING_GET_REQUEST(ring, rc), sizeof(req));
-			ring->req_cons = ++rc;
-
-			ret = hyper_dmabuf_msg_parse(ring_info->sdomain, &req);
-
-			if (ret > 0) {
-				/* preparing a response for the request and
-				 * send it to the requester
-				 */
-				memcpy(&resp, &req, sizeof(resp));
-				memcpy(RING_GET_RESPONSE(ring,
-							 ring->rsp_prod_pvt),
-							 &resp, sizeof(resp));
-				ring->rsp_prod_pvt++;
-
-				dev_dbg(hy_drv_priv->dev,
-					"responding to exporter for req:%d\n",
-					resp.resp_id);
-
-				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(ring,
-								     notify);
-
-				if (notify)
-					notify_remote_via_irq(ring_info->irq);
-			}
-
-			RING_FINAL_CHECK_FOR_REQUESTS(ring, more_to_do);
-		}
-	} while (more_to_do);
-
-	return IRQ_HANDLED;
-}
-
-/* ISR for handling responses */
-static irqreturn_t front_ring_isr(int irq, void *info)
-{
-	/* the front ring only cares about responses from the back */
-	struct hyper_dmabuf_resp *resp;
-	RING_IDX i, rp;
-	int more_to_do, ret;
-
-	struct xen_comm_tx_ring_info *ring_info;
-	struct xen_comm_front_ring *ring;
-
-	ring_info = (struct xen_comm_tx_ring_info *)info;
-	ring = &ring_info->ring_front;
-
-	dev_dbg(hy_drv_priv->dev, "%s\n", __func__);
-
-	do {
-		more_to_do = 0;
-		rp = ring->sring->rsp_prod;
-		for (i = ring->rsp_cons; i != rp; i++) {
-			resp = RING_GET_RESPONSE(ring, i);
-
-			/* update pending request's status with what is
-			 * in the response
-			 */
-
-			dev_dbg(hy_drv_priv->dev,
-				"getting response from importer\n");
-
-			if (req_pending.req_id == resp->resp_id)
-				req_pending.stat = resp->stat;
-
-			if (resp->stat == HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP) {
-				/* parsing response */
-				ret = hyper_dmabuf_msg_parse(ring_info->rdomain,
-					(struct hyper_dmabuf_req *)resp);
-
-				if (ret < 0) {
-					dev_err(hy_drv_priv->dev,
-						"err while parsing resp\n");
-				}
-			} else if (resp->stat == HYPER_DMABUF_REQ_PROCESSED) {
-				/* for debugging dma_buf remote synch */
-				dev_dbg(hy_drv_priv->dev,
-					"original request = 0x%x\n", resp->cmd);
-				dev_dbg(hy_drv_priv->dev,
-					"got HYPER_DMABUF_REQ_PROCESSED\n");
-			} else if (resp->stat == HYPER_DMABUF_REQ_ERROR) {
-				/* for debugging dma_buf remote synch */
-				dev_dbg(hy_drv_priv->dev,
-					"original request = 0x%x\n", resp->cmd);
-				dev_dbg(hy_drv_priv->dev,
-					"got HYPER_DMABUF_REQ_ERROR\n");
-			}
-		}
-
-		ring->rsp_cons = i;
-
-		if (i != ring->req_prod_pvt)
-			RING_FINAL_CHECK_FOR_RESPONSES(ring, more_to_do);
-		else
-			ring->sring->rsp_event = i+1;
-
-	} while (more_to_do);
-
-	return IRQ_HANDLED;
-}
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
deleted file mode 100644
index 70a2b70..0000000
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
+++ /dev/null
@@ -1,78 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_XEN_COMM_H__
-#define __HYPER_DMABUF_XEN_COMM_H__
-
-#include "xen/interface/io/ring.h"
-#include "xen/xenbus.h"
-#include "../hyper_dmabuf_msg.h"
-
-extern int xenstored_ready;
-
-DEFINE_RING_TYPES(xen_comm, struct hyper_dmabuf_req, struct hyper_dmabuf_resp);
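
For context, DEFINE_RING_TYPES() is the standard Xen ring-buffer macro from
xen/interface/io/ring.h: the line above generates struct xen_comm_sring
(the shared-page layout) together with the struct xen_comm_front_ring and
struct xen_comm_back_ring types used by the two ring-info structures below,
with hyper_dmabuf_req/hyper_dmabuf_resp as the slot types.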
-
-struct xen_comm_tx_ring_info {
-	struct xen_comm_front_ring ring_front;
-	int rdomain;
-	int gref_ring;
-	int irq;
-	int port;
-	struct mutex lock;
-	struct xenbus_watch watch;
-};
-
-struct xen_comm_rx_ring_info {
-	int sdomain;
-	int irq;
-	int evtchn;
-	struct xen_comm_back_ring ring_back;
-	struct gnttab_unmap_grant_ref unmap_op;
-};
-
-int xen_be_get_domid(void);
-
-int xen_be_init_comm_env(void);
-
-/* exporter needs to generate info for page sharing */
-int xen_be_init_tx_rbuf(int domid);
-
-/* importer needs to know about shared page and port numbers
- * for ring buffer and event channel
- */
-int xen_be_init_rx_rbuf(int domid);
-
-/* cleans up exporter ring created for given domain */
-void xen_be_cleanup_tx_rbuf(int domid);
-
-/* cleans up importer ring created for given domain */
-void xen_be_cleanup_rx_rbuf(int domid);
-
-void xen_be_destroy_comm(void);
-
-/* send request to the remote domain */
-int xen_be_send_req(int domid, struct hyper_dmabuf_req *req,
-		    int wait);
-
-#endif /* __HYPER_DMABUF_XEN_COMM_H__ */
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
deleted file mode 100644
index 15023db..0000000
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
+++ /dev/null
@@ -1,158 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/slab.h>
-#include <linux/cdev.h>
-#include <linux/hashtable.h>
-#include <xen/grant_table.h>
-#include "../hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_xen_comm.h"
-#include "hyper_dmabuf_xen_comm_list.h"
-
-DECLARE_HASHTABLE(xen_comm_tx_ring_hash, MAX_ENTRY_TX_RING);
-DECLARE_HASHTABLE(xen_comm_rx_ring_hash, MAX_ENTRY_RX_RING);
-
-void xen_comm_ring_table_init(void)
-{
-	hash_init(xen_comm_rx_ring_hash);
-	hash_init(xen_comm_tx_ring_hash);
-}
-
-int xen_comm_add_tx_ring(struct xen_comm_tx_ring_info *ring_info)
-{
-	struct xen_comm_tx_ring_info_entry *info_entry;
-
-	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
-
-	if (!info_entry)
-		return -ENOMEM;
-
-	info_entry->info = ring_info;
-
-	hash_add(xen_comm_tx_ring_hash, &info_entry->node,
-		info_entry->info->rdomain);
-
-	return 0;
-}
-
-int xen_comm_add_rx_ring(struct xen_comm_rx_ring_info *ring_info)
-{
-	struct xen_comm_rx_ring_info_entry *info_entry;
-
-	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
-
-	if (!info_entry)
-		return -ENOMEM;
-
-	info_entry->info = ring_info;
-
-	hash_add(xen_comm_rx_ring_hash, &info_entry->node,
-		info_entry->info->sdomain);
-
-	return 0;
-}
-
-struct xen_comm_tx_ring_info *xen_comm_find_tx_ring(int domid)
-{
-	struct xen_comm_tx_ring_info_entry *info_entry;
-	int bkt;
-
-	hash_for_each(xen_comm_tx_ring_hash, bkt, info_entry, node)
-		if (info_entry->info->rdomain == domid)
-			return info_entry->info;
-
-	return NULL;
-}
-
-struct xen_comm_rx_ring_info *xen_comm_find_rx_ring(int domid)
-{
-	struct xen_comm_rx_ring_info_entry *info_entry;
-	int bkt;
-
-	hash_for_each(xen_comm_rx_ring_hash, bkt, info_entry, node)
-		if (info_entry->info->sdomain == domid)
-			return info_entry->info;
-
-	return NULL;
-}
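
Since entries are hashed by domid, the linear hash_for_each() walks above
could equally use hash_for_each_possible(), which only visits the bucket
the key hashes to; a sketch for the tx lookup:

	struct xen_comm_tx_ring_info_entry *info_entry;

	hash_for_each_possible(xen_comm_tx_ring_hash, info_entry,
			       node, domid)
		if (info_entry->info->rdomain == domid)
			return info_entry->info;

	return NULL;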
-
-int xen_comm_remove_tx_ring(int domid)
-{
-	struct xen_comm_tx_ring_info_entry *info_entry;
-	int bkt;
-
-	hash_for_each(xen_comm_tx_ring_hash, bkt, info_entry, node)
-		if (info_entry->info->rdomain == domid) {
-			hash_del(&info_entry->node);
-			kfree(info_entry);
-			return 0;
-		}
-
-	return -ENOENT;
-}
-
-int xen_comm_remove_rx_ring(int domid)
-{
-	struct xen_comm_rx_ring_info_entry *info_entry;
-	int bkt;
-
-	hash_for_each(xen_comm_rx_ring_hash, bkt, info_entry, node)
-		if (info_entry->info->sdomain == domid) {
-			hash_del(&info_entry->node);
-			kfree(info_entry);
-			return 0;
-		}
-
-	return -ENOENT;
-}
-
-void xen_comm_foreach_tx_ring(void (*func)(int domid))
-{
-	struct xen_comm_tx_ring_info_entry *info_entry;
-	struct hlist_node *tmp;
-	int bkt;
-
-	hash_for_each_safe(xen_comm_tx_ring_hash, bkt, tmp,
-			   info_entry, node) {
-		func(info_entry->info->rdomain);
-	}
-}
-
-void xen_comm_foreach_rx_ring(void (*func)(int domid))
-{
-	struct xen_comm_rx_ring_info_entry *info_entry;
-	struct hlist_node *tmp;
-	int bkt;
-
-	hash_for_each_safe(xen_comm_rx_ring_hash, bkt, tmp,
-			   info_entry, node) {
-		func(info_entry->info->sdomain);
-	}
-}
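
These iterators are what the Xen backend uses for teardown in
hyper_dmabuf_xen_comm.c:

	xen_comm_foreach_tx_ring(xen_be_cleanup_tx_rbuf);
	xen_comm_foreach_rx_ring(xen_be_cleanup_rx_rbuf);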
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
deleted file mode 100644
index 8502fe7..0000000
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
+++ /dev/null
@@ -1,67 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_XEN_COMM_LIST_H__
-#define __HYPER_DMABUF_XEN_COMM_LIST_H__
-
-/* number of bits to be used for the tx (exporter) ring hash table */
-#define MAX_ENTRY_TX_RING 7
-/* number of bits to be used for the rx (importer) ring hash table */
-#define MAX_ENTRY_RX_RING 7
-
-struct xen_comm_tx_ring_info_entry {
-	struct xen_comm_tx_ring_info *info;
-	struct hlist_node node;
-};
-
-struct xen_comm_rx_ring_info_entry {
-	struct xen_comm_rx_ring_info *info;
-	struct hlist_node node;
-};
-
-void xen_comm_ring_table_init(void);
-
-int xen_comm_add_tx_ring(struct xen_comm_tx_ring_info *ring_info);
-
-int xen_comm_add_rx_ring(struct xen_comm_rx_ring_info *ring_info);
-
-int xen_comm_remove_tx_ring(int domid);
-
-int xen_comm_remove_rx_ring(int domid);
-
-struct xen_comm_tx_ring_info *xen_comm_find_tx_ring(int domid);
-
-struct xen_comm_rx_ring_info *xen_comm_find_rx_ring(int domid);
-
-/* iterates over all exporter rings and calls provided
- * function for each of them
- */
-void xen_comm_foreach_tx_ring(void (*func)(int domid));
-
-/* iterates over all importer rings and calls provided
- * function for each of them
- */
-void xen_comm_foreach_rx_ring(void (*func)(int domid));
-
-#endif /* __HYPER_DMABUF_XEN_COMM_LIST_H__ */
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
deleted file mode 100644
index 14ed3bc..0000000
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
+++ /dev/null
@@ -1,46 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include "../hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_xen_comm.h"
-#include "hyper_dmabuf_xen_shm.h"
-
-struct hyper_dmabuf_bknd_ops xen_bknd_ops = {
-	.init = NULL, /* not needed for xen */
-	.cleanup = NULL, /* not needed for xen */
-	.get_vm_id = xen_be_get_domid,
-	.share_pages = xen_be_share_pages,
-	.unshare_pages = xen_be_unshare_pages,
-	.map_shared_pages = (void *)xen_be_map_shared_pages,
-	.unmap_shared_pages = xen_be_unmap_shared_pages,
-	.init_comm_env = xen_be_init_comm_env,
-	.destroy_comm = xen_be_destroy_comm,
-	.init_rx_ch = xen_be_init_rx_rbuf,
-	.init_tx_ch = xen_be_init_tx_rbuf,
-	.send_req = xen_be_send_req,
-};
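
The core driver is expected to select this table at init time when it
detects it is running on Xen; a sketch of the hookup (the bknd_ops field
name is an assumption based on this struct's type name):

	/* in the core driver's probe/init path, on Xen */
	hy_drv_priv->bknd_ops = &xen_bknd_ops;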
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
deleted file mode 100644
index a4902b7..0000000
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
+++ /dev/null
@@ -1,53 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_XEN_DRV_H__
-#define __HYPER_DMABUF_XEN_DRV_H__
-#include <xen/interface/grant_table.h>
-
-extern struct hyper_dmabuf_bknd_ops xen_bknd_ops;
-
-/* Main purpose of this structure is to keep
- * all references created or acquired for sharing
- * pages with another domain, so that they can be
- * freed later when unsharing.
- */
-struct xen_shared_pages_info {
-	/* top level refid */
-	grant_ref_t lvl3_gref;
-
-	/* page of top level addressing, it contains refids of 2nd lvl pages */
-	grant_ref_t *lvl3_table;
-
-	/* table of 2nd level pages, that contains refids to data pages */
-	grant_ref_t *lvl2_table;
-
-	/* unmap ops for mapped pages */
-	struct gnttab_unmap_grant_ref *unmap_ops;
-
-	/* data pages to be unmapped */
-	struct page **data_pages;
-};
-
-#endif /* __HYPER_DMABUF_XEN_DRV_H__ */
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
deleted file mode 100644
index c6a15f1..0000000
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
+++ /dev/null
@@ -1,525 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/slab.h>
-#include <xen/grant_table.h>
-#include <asm/xen/page.h>
-#include "hyper_dmabuf_xen_drv.h"
-#include "../hyper_dmabuf_drv.h"
-
-#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
-
-/*
- * Creates a 2-level page directory structure for referencing shared pages.
- * The top level ("lvl3") page is a single page that contains up to 1024
- * refids that point to the 2nd level ("lvl2") pages.
- *
- * Each 2nd level page contains up to 1024 refids that point to shared
- * data pages.
- *
- * There will always be one top level page and number of 2nd level pages
- * depends on number of shared data pages.
- *
- *      3rd level page                2nd level pages            Data pages
- * +-------------------------+   ┌>+--------------------+ ┌>+------------+
- * |2nd level page 0 refid   |---┘ |Data page 0 refid   |-┘ |Data page 0 |
- * |2nd level page 1 refid   |---┐ |Data page 1 refid   |-┐ +------------+
- * |           ...           |   | |     ....           | |
- * |2nd level page 1023 refid|-┐ | |Data page 1023 refid| └>+------------+
- * +-------------------------+ | | +--------------------+   |Data page 1 |
- *                             | |                          +------------+
- *                             | └>+--------------------+
- *                             |   |Data page 1024 refid|
- *                             |   |Data page 1025 refid|
- *                             |   |       ...          |
- *                             |   |Data page 2047 refid|
- *                             |   +--------------------+
- *                             |
- *                             |        .....
- *                             └-->+-----------------------+
- *                                 |Data page 1047552 refid|
- *                                 |Data page 1047553 refid|
- *                                 |       ...             |
- *                                 |Data page 1048575 refid|
- *                                 +-----------------------+
- *
- * Using such a 2-level structure it is possible to reference up to 4GB of
- * shared data with a single refid pointing to the top level page.
- *
- * Returns refid of top level page.
- */
-int xen_be_share_pages(struct page **pages, int domid, int nents,
-		       void **refs_info)
-{
-	grant_ref_t lvl3_gref;
-	grant_ref_t *lvl2_table;
-	grant_ref_t *lvl3_table;
-
-	/*
-	 * Calculate the number of pages needed for 2nd level addressing:
-	 */
-	int n_lvl2_grefs = (nents/REFS_PER_PAGE +
-			   ((nents % REFS_PER_PAGE) ? 1 : 0));
-
-	struct xen_shared_pages_info *sh_pages_info;
-	int i;
-
-	/* note: __get_free_pages() takes an allocation order, not a
-	 * page count
-	 */
-	lvl3_table = (grant_ref_t *)__get_free_pages(GFP_KERNEL, 0);
-	lvl2_table = (grant_ref_t *)__get_free_pages(GFP_KERNEL,
-				get_order(n_lvl2_grefs * PAGE_SIZE));
-
-	sh_pages_info = kmalloc(sizeof(*sh_pages_info), GFP_KERNEL);
-
-	if (!sh_pages_info) {
-		free_pages((unsigned long)lvl2_table,
-			   get_order(n_lvl2_grefs * PAGE_SIZE));
-		free_pages((unsigned long)lvl3_table, 0);
-		return -ENOMEM;
-	}
-
-	*refs_info = (void *)sh_pages_info;
-
-	/* share data pages in readonly mode for security */
-	for (i = 0; i < nents; i++) {
-		lvl2_table[i] = gnttab_grant_foreign_access(domid,
-					pfn_to_mfn(page_to_pfn(pages[i])),
-					true /* read only */);
-		if (lvl2_table[i] == -ENOSPC) {
-			dev_err(hy_drv_priv->dev,
-				"No more space left in grant table\n");
-
-			/* Unshare all already shared pages for lvl2 */
-			while (i--) {
-				gnttab_end_foreign_access_ref(lvl2_table[i], 0);
-				gnttab_free_grant_reference(lvl2_table[i]);
-			}
-			goto err_cleanup;
-		}
-	}
-
-	/* Share 2nd level addressing pages in readonly mode */
-	for (i = 0; i < n_lvl2_grefs; i++) {
-		lvl3_table[i] = gnttab_grant_foreign_access(domid,
-					virt_to_mfn(
-					(unsigned long)lvl2_table+i*PAGE_SIZE),
-					true);
-
-		if (lvl3_table[i] == -ENOSPC) {
-			dev_err(hy_drv_priv->dev,
-				"No more space left in grant table\n");
-
-			/* Unshare all already shared pages for lvl3 */
-			while (i--) {
-				gnttab_end_foreign_access_ref(lvl3_table[i], 1);
-				gnttab_free_grant_reference(lvl3_table[i]);
-			}
-
-			/* Unshare all pages for lvl2 */
-			while (nents--) {
-				gnttab_end_foreign_access_ref(
-							lvl2_table[nents], 0);
-				gnttab_free_grant_reference(lvl2_table[nents]);
-			}
-
-			goto err_cleanup;
-		}
-	}
-
-	/* Share lvl3_table in readonly mode */
-	lvl3_gref = gnttab_grant_foreign_access(domid,
-			virt_to_mfn((unsigned long)lvl3_table),
-			true);
-
-	if (lvl3_gref == -ENOSPC) {
-		dev_err(hy_drv_priv->dev,
-			"No more space left in grant table\n");
-
-		/* Unshare all pages for lvl3 */
-		while (i--) {
-			gnttab_end_foreign_access_ref(lvl3_table[i], 1);
-			gnttab_free_grant_reference(lvl3_table[i]);
-		}
-
-		/* Unshare all pages for lvl2 */
-		while (nents--) {
-			gnttab_end_foreign_access_ref(lvl2_table[nents], 0);
-			gnttab_free_grant_reference(lvl2_table[nents]);
-		}
-
-		goto err_cleanup;
-	}
-
-	/* Store lvl3_table page to be freed later */
-	sh_pages_info->lvl3_table = lvl3_table;
-
-	/* Store lvl2_table pages to be freed later */
-	sh_pages_info->lvl2_table = lvl2_table;
-
-	/* Store exported pages refid to be unshared later */
-	sh_pages_info->lvl3_gref = lvl3_gref;
-
-	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
-	return lvl3_gref;
-
-err_cleanup:
-	free_pages((unsigned long)lvl2_table,
-		   get_order(n_lvl2_grefs * PAGE_SIZE));
-	free_pages((unsigned long)lvl3_table, 0);
-
-	return -ENOSPC;
-}
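
To put numbers on the addressing scheme documented above: with 4 KiB pages
and 4-byte grant references, REFS_PER_PAGE = 4096 / 4 = 1024. Sharing a
16 MiB buffer (nents = 4096) therefore needs n_lvl2_grefs = 4096 / 1024 = 4
second-level pages, and a single top-level refid can address at most
1024 * 1024 pages = 4GB, matching the comment.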
-
-int xen_be_unshare_pages(void **refs_info, int nents)
-{
-	struct xen_shared_pages_info *sh_pages_info;
-	int n_lvl2_grefs = (nents/REFS_PER_PAGE +
-			    ((nents % REFS_PER_PAGE) ? 1 : 0));
-	int i;
-
-	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
-	sh_pages_info = (struct xen_shared_pages_info *)(*refs_info);
-
-	if (sh_pages_info->lvl3_table == NULL ||
-	    sh_pages_info->lvl2_table ==  NULL ||
-	    sh_pages_info->lvl3_gref == -1) {
-		dev_warn(hy_drv_priv->dev,
-			 "gref table for hyper_dmabuf already cleaned up\n");
-		return 0;
-	}
-
-	/* End foreign access for data pages, but do not free them */
-	for (i = 0; i < nents; i++) {
-		if (gnttab_query_foreign_access(sh_pages_info->lvl2_table[i]))
-			dev_warn(hy_drv_priv->dev, "refid not shared !!\n");
-
-		gnttab_end_foreign_access_ref(sh_pages_info->lvl2_table[i], 0);
-		gnttab_free_grant_reference(sh_pages_info->lvl2_table[i]);
-	}
-
-	/* End foreign access for 2nd level addressing pages */
-	for (i = 0; i < n_lvl2_grefs; i++) {
-		if (gnttab_query_foreign_access(sh_pages_info->lvl3_table[i]))
-			dev_warn(hy_drv_priv->dev, "refid not shared !!\n");
-
-		if (!gnttab_end_foreign_access_ref(
-					sh_pages_info->lvl3_table[i], 1))
-			dev_warn(hy_drv_priv->dev, "refid still in use!!!\n");
-
-		gnttab_free_grant_reference(sh_pages_info->lvl3_table[i]);
-	}
-
-	/* End foreign access for top level addressing page */
-	if (gnttab_query_foreign_access(sh_pages_info->lvl3_gref))
-		dev_warn(hy_drv_priv->dev, "gref not shared !!\n");
-
-	gnttab_end_foreign_access_ref(sh_pages_info->lvl3_gref, 1);
-	gnttab_free_grant_reference(sh_pages_info->lvl3_gref);
-
-	/* free all pages used for 2-level addressing */
-	free_pages((unsigned long)sh_pages_info->lvl2_table,
-		   get_order(n_lvl2_grefs * PAGE_SIZE));
-	free_pages((unsigned long)sh_pages_info->lvl3_table, 0);
-
-	sh_pages_info->lvl3_gref = -1;
-	sh_pages_info->lvl2_table = NULL;
-	sh_pages_info->lvl3_table = NULL;
-	kfree(sh_pages_info);
-	sh_pages_info = NULL;
-
-	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
-	return 0;
-}
-
-/* Maps the provided top level ref id and then returns an array of
- * pages containing the data refs.
- */
-struct page **xen_be_map_shared_pages(unsigned long lvl3_gref, int domid,
-				      int nents, void **refs_info)
-{
-	struct page *lvl3_table_page;
-	struct page **lvl2_table_pages;
-	struct page **data_pages;
-	struct xen_shared_pages_info *sh_pages_info;
-
-	grant_ref_t *lvl3_table;
-	grant_ref_t *lvl2_table;
-
-	struct gnttab_map_grant_ref lvl3_map_ops;
-	struct gnttab_unmap_grant_ref lvl3_unmap_ops;
-
-	struct gnttab_map_grant_ref *lvl2_map_ops;
-	struct gnttab_unmap_grant_ref *lvl2_unmap_ops;
-
-	struct gnttab_map_grant_ref *data_map_ops;
-	struct gnttab_unmap_grant_ref *data_unmap_ops;
-
-	/* # of grefs in the last page of the lvl2 table */
-	int nents_last = (nents - 1) % REFS_PER_PAGE + 1;
-	int n_lvl2_grefs = DIV_ROUND_UP(nents, REFS_PER_PAGE);
-	int i, j, k;
-
-	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
-
-	sh_pages_info = kmalloc(sizeof(*sh_pages_info), GFP_KERNEL);
-	if (!sh_pages_info)
-		return NULL;
-
-	*refs_info = (void *)sh_pages_info;
-
-	lvl2_table_pages = kcalloc(n_lvl2_grefs, sizeof(struct page *),
-				   GFP_KERNEL);
-
-	data_pages = kcalloc(nents, sizeof(struct page *), GFP_KERNEL);
-
-	lvl2_map_ops = kcalloc(n_lvl2_grefs, sizeof(*lvl2_map_ops),
-			       GFP_KERNEL);
-
-	lvl2_unmap_ops = kcalloc(n_lvl2_grefs, sizeof(*lvl2_unmap_ops),
-				 GFP_KERNEL);
-
-	data_map_ops = kcalloc(nents, sizeof(*data_map_ops), GFP_KERNEL);
-	data_unmap_ops = kcalloc(nents, sizeof(*data_unmap_ops), GFP_KERNEL);
-
-	/* Map top level addressing page */
-	if (gnttab_alloc_pages(1, &lvl3_table_page)) {
-		dev_err(hy_drv_priv->dev, "Cannot allocate pages\n");
-		return NULL;
-	}
-
-	lvl3_table = (grant_ref_t *)pfn_to_kaddr(page_to_pfn(lvl3_table_page));
-
-	gnttab_set_map_op(&lvl3_map_ops, (unsigned long)lvl3_table,
-			  GNTMAP_host_map | GNTMAP_readonly,
-			  (grant_ref_t)lvl3_gref, domid);
-
-	gnttab_set_unmap_op(&lvl3_unmap_ops, (unsigned long)lvl3_table,
-			    GNTMAP_host_map | GNTMAP_readonly, -1);
-
-	if (gnttab_map_refs(&lvl3_map_ops, NULL, &lvl3_table_page, 1)) {
-		dev_err(hy_drv_priv->dev,
-			"HYPERVISOR map grant ref failed");
-		return NULL;
-	}
-
-	if (lvl3_map_ops.status) {
-		dev_err(hy_drv_priv->dev,
-			"HYPERVISOR map grant ref failed status = %d",
-			lvl3_map_ops.status);
-
-		goto error_cleanup_lvl3;
-	} else {
-		lvl3_unmap_ops.handle = lvl3_map_ops.handle;
-	}
-
-	/* Map all second level pages */
-	if (gnttab_alloc_pages(n_lvl2_grefs, lvl2_table_pages)) {
-		dev_err(hy_drv_priv->dev, "Cannot allocate pages\n");
-		goto error_cleanup_lvl3;
-	}
-
-	for (i = 0; i < n_lvl2_grefs; i++) {
-		lvl2_table = (grant_ref_t *)pfn_to_kaddr(
-					page_to_pfn(lvl2_table_pages[i]));
-		gnttab_set_map_op(&lvl2_map_ops[i],
-				  (unsigned long)lvl2_table, GNTMAP_host_map |
-				  GNTMAP_readonly,
-				  lvl3_table[i], domid);
-		gnttab_set_unmap_op(&lvl2_unmap_ops[i],
-				    (unsigned long)lvl2_table, GNTMAP_host_map |
-				    GNTMAP_readonly, -1);
-	}
-
-	/* Unmap top level page, as it won't be needed any longer */
-	if (gnttab_unmap_refs(&lvl3_unmap_ops, NULL,
-			      &lvl3_table_page, 1)) {
-		dev_err(hy_drv_priv->dev,
-			"xen: cannot unmap top level page\n");
-		return NULL;
-	}
-
-	/* Mark that page was unmapped */
-	lvl3_unmap_ops.handle = -1;
-
-	if (gnttab_map_refs(lvl2_map_ops, NULL,
-			    lvl2_table_pages, n_lvl2_grefs)) {
-		dev_err(hy_drv_priv->dev,
-			"HYPERVISOR map grant ref failed");
-		return NULL;
-	}
-
-	/* Checks if pages were mapped correctly */
-	for (i = 0; i < n_lvl2_grefs; i++) {
-		if (lvl2_map_ops[i].status) {
-			dev_err(hy_drv_priv->dev,
-				"HYPERVISOR map grant ref failed status = %d",
-				lvl2_map_ops[i].status);
-			goto error_cleanup_lvl2;
-		} else {
-			lvl2_unmap_ops[i].handle = lvl2_map_ops[i].handle;
-		}
-	}
-
-	if (gnttab_alloc_pages(nents, data_pages)) {
-		dev_err(hy_drv_priv->dev,
-			"Cannot allocate pages\n");
-		goto error_cleanup_lvl2;
-	}
-
-	k = 0;
-
-	for (i = 0; i < n_lvl2_grefs - 1; i++) {
-		lvl2_table = pfn_to_kaddr(page_to_pfn(lvl2_table_pages[i]));
-		for (j = 0; j < REFS_PER_PAGE; j++) {
-			gnttab_set_map_op(&data_map_ops[k],
-				(unsigned long)pfn_to_kaddr(
-						page_to_pfn(data_pages[k])),
-				GNTMAP_host_map | GNTMAP_readonly,
-				lvl2_table[j], domid);
-
-			gnttab_set_unmap_op(&data_unmap_ops[k],
-				(unsigned long)pfn_to_kaddr(
-						page_to_pfn(data_pages[k])),
-				GNTMAP_host_map | GNTMAP_readonly, -1);
-			k++;
-		}
-	}
-
-	/* for grefs in the last lvl2 table page */
-	lvl2_table = pfn_to_kaddr(page_to_pfn(
-				lvl2_table_pages[n_lvl2_grefs - 1]));
-
-	for (j = 0; j < nents_last; j++) {
-		gnttab_set_map_op(&data_map_ops[k],
-			(unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
-			GNTMAP_host_map | GNTMAP_readonly,
-			lvl2_table[j], domid);
-
-		gnttab_set_unmap_op(&data_unmap_ops[k],
-			(unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
-			GNTMAP_host_map | GNTMAP_readonly, -1);
-		k++;
-	}
-
-	if (gnttab_map_refs(data_map_ops, NULL,
-			    data_pages, nents)) {
-		dev_err(hy_drv_priv->dev,
-			"HYPERVISOR map grant ref failed\n");
-		return NULL;
-	}
-
-	/* unmapping lvl2 table pages */
-	if (gnttab_unmap_refs(lvl2_unmap_ops,
-			      NULL, lvl2_table_pages,
-			      n_lvl2_grefs)) {
-		dev_err(hy_drv_priv->dev,
-			"Cannot unmap 2nd level refs\n");
-		return NULL;
-	}
-
-	/* Mark that pages were unmapped */
-	for (i = 0; i < n_lvl2_grefs; i++)
-		lvl2_unmap_ops[i].handle = -1;
-
-	for (i = 0; i < nents; i++) {
-		if (data_map_ops[i].status) {
-			dev_err(hy_drv_priv->dev,
-				"HYPERVISOR map grant ref failed status = %d\n",
-				data_map_ops[i].status);
-			goto error_cleanup_data;
-		} else {
-			data_unmap_ops[i].handle = data_map_ops[i].handle;
-		}
-	}
-
-	/* store these references for unmapping in the future */
-	sh_pages_info->unmap_ops = data_unmap_ops;
-	sh_pages_info->data_pages = data_pages;
-
-	gnttab_free_pages(1, &lvl3_table_page);
-	gnttab_free_pages(n_lvl2_grefs, lvl2_table_pages);
-	kfree(lvl2_table_pages);
-	kfree(lvl2_map_ops);
-	kfree(lvl2_unmap_ops);
-	kfree(data_map_ops);
-
-	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
-	return data_pages;
-
-error_cleanup_data:
-	gnttab_unmap_refs(data_unmap_ops, NULL, data_pages,
-			  nents);
-
-	gnttab_free_pages(nents, data_pages);
-
-error_cleanup_lvl2:
-	if (lvl2_unmap_ops[0].handle != -1)
-		gnttab_unmap_refs(lvl2_unmap_ops, NULL,
-				  lvl2_table_pages, n_lvl2_grefs);
-	gnttab_free_pages(n_lvl2_grefs, lvl2_table_pages);
-
-error_cleanup_lvl3:
-	if (lvl3_unmap_ops.handle != -1)
-		gnttab_unmap_refs(&lvl3_unmap_ops, NULL,
-				  &lvl3_table_page, 1);
-	gnttab_free_pages(1, &lvl3_table_page);
-
-	kfree(lvl2_table_pages);
-	kfree(lvl2_map_ops);
-	kfree(lvl2_unmap_ops);
-	kfree(data_map_ops);
-
-	return NULL;
-}
-
-int xen_be_unmap_shared_pages(void **refs_info, int nents)
-{
-	struct xen_shared_pages_info *sh_pages_info;
-
-	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
-
-	sh_pages_info = (struct xen_shared_pages_info *)(*refs_info);
-
-	if (sh_pages_info->unmap_ops == NULL ||
-	    sh_pages_info->data_pages == NULL) {
-		dev_warn(hy_drv_priv->dev,
-			 "pages already cleaned up or buffer not imported yet\n");
-		return 0;
-	}
-
-	if (gnttab_unmap_refs(sh_pages_info->unmap_ops, NULL,
-			      sh_pages_info->data_pages, nents)) {
-		dev_err(hy_drv_priv->dev, "Cannot unmap data pages\n");
-		return -EFAULT;
-	}
-
-	gnttab_free_pages(nents, sh_pages_info->data_pages);
-
-	kfree(sh_pages_info->data_pages);
-	kfree(sh_pages_info->unmap_ops);
-	sh_pages_info->unmap_ops = NULL;
-	sh_pages_info->data_pages = NULL;
-	kfree(sh_pages_info);
-	sh_pages_info = NULL;
-
-	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
-	return 0;
-}
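
The four entry points in this file pair up across the two sides of a
sharing session; a sketch with error handling omitted (pages, nents and
the domids are supplied by the caller):

	void *refs_info;
	int gref;

	/* exporter domain */
	gref = xen_be_share_pages(pages, importer_domid, nents, &refs_info);
	/* ... buffer is in use by the importer ... */
	xen_be_unshare_pages(&refs_info, nents);

	/* importer domain (refs_info is per-side state) */
	pages = xen_be_map_shared_pages(gref, exporter_domid, nents,
					&refs_info);
	/* ... local dma_buf is reconstructed on top of pages ... */
	xen_be_unmap_shared_pages(&refs_info, nents);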
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
deleted file mode 100644
index d5236b5..0000000
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
+++ /dev/null
@@ -1,46 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_XEN_SHM_H__
-#define __HYPER_DMABUF_XEN_SHM_H__
-
-/* Collects all reference numbers for the 2nd level shared pages,
- * creates a table with those in the top level shared page, then
- * returns the reference number for that top level table.
- */
-int xen_be_share_pages(struct page **pages, int domid, int nents,
-		    void **refs_info);
-
-int xen_be_unshare_pages(void **refs_info, int nents);
-
-/* Maps the provided top level ref id and then returns an array of
- * pages containing the data refs.
- */
-struct page **xen_be_map_shared_pages(unsigned long lvl3_gref, int domid,
-				      int nents,
-				      void **refs_info);
-
-int xen_be_unmap_shared_pages(void **refs_info, int nents);
-
-#endif /* __HYPER_DMABUF_XEN_SHM_H__ */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 160+ messages in thread

* [RFC PATCH 60/60] hyper_dmabuf: move hyper_dmabuf to under drivers/dma-buf/
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 19:30 ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:30 UTC (permalink / raw)
  To: linux-kernel; +Cc: xen-devel, mateuszx.potrola, dri-devel, dongwon.kim

This driver's ultimate goal is to expand the boundary of data
sharing via DMA-BUF across different OSes running on the same
hardware, regardless of which hypervisor is currently used for
the OS virtualization. So it makes more sense to have its
implementation under drivers/dma-buf.

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/dma-buf/hyper_dmabuf/Kconfig               |  42 +
 drivers/dma-buf/hyper_dmabuf/Makefile              |  49 ++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c    | 408 +++++++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h    | 118 +++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.c  | 122 +++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.h  |  38 +
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.c     | 133 +++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h     |  51 ++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c  | 786 +++++++++++++++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.h  |  50 ++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c   | 293 +++++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.h   |  71 ++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c    | 414 +++++++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h    |  87 ++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c    | 413 +++++++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.h    |  32 +
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.c  | 172 ++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.h  |  10 +
 .../hyper_dmabuf/hyper_dmabuf_remote_sync.c        | 322 +++++++
 .../hyper_dmabuf/hyper_dmabuf_remote_sync.h        |  30 +
 .../dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c   | 255 ++++++
 .../dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.h   |  41 +
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h | 141 +++
 .../xen-backend/hyper_dmabuf_xen_comm.c            | 941 +++++++++++++++++++++
 .../xen-backend/hyper_dmabuf_xen_comm.h            |  78 ++
 .../xen-backend/hyper_dmabuf_xen_comm_list.c       | 158 ++++
 .../xen-backend/hyper_dmabuf_xen_comm_list.h       |  67 ++
 .../xen-backend/hyper_dmabuf_xen_drv.c             |  46 +
 .../xen-backend/hyper_dmabuf_xen_drv.h             |  53 ++
 .../xen-backend/hyper_dmabuf_xen_shm.c             | 525 ++++++++++++
 .../xen-backend/hyper_dmabuf_xen_shm.h             |  46 +
 drivers/xen/Kconfig                                |   2 +-
 drivers/xen/Makefile                               |   2 +-
 drivers/xen/hyper_dmabuf/Kconfig                   |  42 -
 drivers/xen/hyper_dmabuf/Makefile                  |  49 --
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        | 408 ---------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        | 118 ---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c      | 122 ---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_event.h      |  38 -
 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c         | 133 ---
 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h         |  51 --
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 786 -----------------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h      |  50 --
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c       | 293 -------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h       |  71 --
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        | 414 ---------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h        |  87 --
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c        | 413 ---------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h        |  32 -
 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c      | 172 ----
 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h      |  10 -
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c    | 322 -------
 .../xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h    |  30 -
 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c   | 255 ------
 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h   |  41 -
 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h     | 141 ---
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   | 941 ---------------------
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h   |  78 --
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c  | 158 ----
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h  |  67 --
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c    |  46 -
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h    |  53 --
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c    | 525 ------------
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h    |  46 -
 64 files changed, 5994 insertions(+), 5994 deletions(-)
 create mode 100644 drivers/dma-buf/hyper_dmabuf/Kconfig
 create mode 100644 drivers/dma-buf/hyper_dmabuf/Makefile
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm_list.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm_list.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_drv.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_drv.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_shm.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_shm.h
 delete mode 100644 drivers/xen/hyper_dmabuf/Kconfig
 delete mode 100644 drivers/xen/hyper_dmabuf/Makefile
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_event.h
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
 delete mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
 delete mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
 delete mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
 delete mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
 delete mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
 delete mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
 delete mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
 delete mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
 delete mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h

diff --git a/drivers/dma-buf/hyper_dmabuf/Kconfig b/drivers/dma-buf/hyper_dmabuf/Kconfig
new file mode 100644
index 0000000..5efcd44
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/Kconfig
@@ -0,0 +1,42 @@
+menu "hyper_dmabuf options"
+
+config HYPER_DMABUF
+	tristate "Enable hyper_dmabuf driver"
+	default y
+
+config HYPER_DMABUF_XEN
+	bool "Configure hyper_dmabuf for XEN hypervisor"
+	default y
+	depends on HYPER_DMABUF
+	help
+	  Configure the hyper_dmabuf driver for the Xen hypervisor.
+
+config HYPER_DMABUF_SYSFS
+	bool "Enable sysfs information about hyper DMA buffers"
+	default y
+	depends on HYPER_DMABUF
+	help
+	  Expose information about buffers imported and exported by the
+	  hyper_dmabuf driver through sysfs.
+
+config HYPER_DMABUF_EVENT_GEN
+	bool "Enable event-generation and polling operation"
+	default n
+	depends on HYPER_DMABUF
+	help
+	  With this config enabled, the hyper_dmabuf driver on the importer
+	  side generates an event and queues it up in the event list whenever
+	  a new shared DMA-BUF becomes available. Events in the list can be
+	  retrieved with a read operation.
+
+config HYPER_DMABUF_XEN_AUTO_RX_CH_ADD
+	bool "Enable automatic rx-ch add at a 10 second interval"
+	default y
+	depends on HYPER_DMABUF && HYPER_DMABUF_XEN
+	help
+	  If enabled, the driver reads a xenstore node every 10 seconds
+	  to check whether any tx comm channel has been configured by
+	  another domain, and automatically initializes a matching rx
+	  comm channel for every existing tx comm channel.
+
+endmenu
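
As a usage note for HYPER_DMABUF_EVENT_GEN above, an importer-side
consumer could look like the hypothetical sketch below. The event
header layout (event_type, hid, size) mirrors what
hyper_dmabuf_event_read() in hyper_dmabuf_drv.c copies to userspace
and is assumed to be exposed through the UAPI header; the reader must
run as root, since the driver restricts reads to CAP_DAC_OVERRIDE.

  /* Hypothetical event consumer; struct hyper_dmabuf_event_hdr is
   * assumed to be defined in the UAPI header (xen/hyper_dmabuf.h).
   */
  #include <poll.h>
  #include <stdio.h>
  #include <unistd.h>
  #include <xen/hyper_dmabuf.h>

  static void wait_for_new_import(int fd)
  {
  	struct pollfd pfd = { .fd = fd, .events = POLLIN };
  	char buf[4096];
  	ssize_t n;

  	/* block until the driver queues at least one event */
  	if (poll(&pfd, 1, -1) <= 0)
  		return;

  	/* a read drains one or more whole events: header + priv data */
  	n = read(fd, buf, sizeof(buf));
  	if (n >= (ssize_t)sizeof(struct hyper_dmabuf_event_hdr)) {
  		struct hyper_dmabuf_event_hdr *hdr = (void *)buf;

  		printf("new import: hid %d, %d bytes of priv data\n",
  		       hdr->hid.id, hdr->size);
  	}
  }
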
diff --git a/drivers/dma-buf/hyper_dmabuf/Makefile b/drivers/dma-buf/hyper_dmabuf/Makefile
new file mode 100644
index 0000000..cce8e69
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/Makefile
@@ -0,0 +1,49 @@
+TARGET_MODULE:=hyper_dmabuf
+
+PLATFORM:=XEN
+
+# If we are running under the kernel build system
+ifneq ($(KERNELRELEASE),)
+	$(TARGET_MODULE)-objs := hyper_dmabuf_drv.o \
+                                 hyper_dmabuf_ioctl.o \
+                                 hyper_dmabuf_list.o \
+				 hyper_dmabuf_sgl_proc.o \
+				 hyper_dmabuf_ops.o \
+				 hyper_dmabuf_msg.o \
+				 hyper_dmabuf_id.o \
+				 hyper_dmabuf_remote_sync.o \
+				 hyper_dmabuf_query.o
+
+ifeq ($(CONFIG_HYPER_DMABUF_EVENT_GEN), y)
+	$(TARGET_MODULE)-objs += hyper_dmabuf_event.o
+endif
+
+ifeq ($(CONFIG_HYPER_DMABUF_XEN), y)
+	$(TARGET_MODULE)-objs += xen-backend/hyper_dmabuf_xen_comm.o \
+				 xen-backend/hyper_dmabuf_xen_comm_list.o \
+				 xen-backend/hyper_dmabuf_xen_shm.o \
+				 xen-backend/hyper_dmabuf_xen_drv.o
+endif
+
+obj-$(CONFIG_HYPER_DMABUF) := $(TARGET_MODULE).o
+
+# If we are running without the kernel build system
+else
+BUILDSYSTEM_DIR?=../../../
+PWD:=$(shell pwd)
+
+all:
+# run kernel build system to build the module
+	$(MAKE) -C $(BUILDSYSTEM_DIR) M=$(PWD) modules
+
+clean:
+# run kernel build system to clean up in the current directory
+	$(MAKE) -C $(BUILDSYSTEM_DIR) M=$(PWD) clean
+
+load:
+	insmod ./$(TARGET_MODULE).ko
+
+unload:
+	rmmod ./$(TARGET_MODULE).ko
+
+endif
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c
new file mode 100644
index 0000000..498b06c
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -0,0 +1,408 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/miscdevice.h>
+#include <linux/workqueue.h>
+#include <linux/slab.h>
+#include <linux/device.h>
+#include <linux/uaccess.h>
+#include <linux/poll.h>
+#include <linux/dma-buf.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_ioctl.h"
+#include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_id.h"
+#include "hyper_dmabuf_event.h"
+
+#ifdef CONFIG_HYPER_DMABUF_XEN
+#include "xen-backend/hyper_dmabuf_xen_drv.h"
+#endif
+
+MODULE_LICENSE("GPL and additional rights");
+MODULE_AUTHOR("Intel Corporation");
+
+struct hyper_dmabuf_private *hy_drv_priv;
+
+static void force_free(struct exported_sgt_info *exported,
+		       void *attr)
+{
+	struct ioctl_hyper_dmabuf_unexport unexport_attr;
+	struct file *filp = (struct file *)attr;
+
+	if (!filp || !exported)
+		return;
+
+	if (exported->filp == filp) {
+		dev_dbg(hy_drv_priv->dev,
+			"Forcefully releasing buffer {id:%d key:%d %d %d}\n",
+			 exported->hid.id, exported->hid.rng_key[0],
+			 exported->hid.rng_key[1], exported->hid.rng_key[2]);
+
+		unexport_attr.hid = exported->hid;
+		unexport_attr.delay_ms = 0;
+
+		hyper_dmabuf_unexport_ioctl(filp, &unexport_attr);
+	}
+}
+
+static int hyper_dmabuf_open(struct inode *inode, struct file *filp)
+{
+	int ret = 0;
+
+	/* Do not allow exclusive open */
+	if (filp->f_flags & O_EXCL)
+		return -EBUSY;
+
+	return ret;
+}
+
+static int hyper_dmabuf_release(struct inode *inode, struct file *filp)
+{
+	hyper_dmabuf_foreach_exported(force_free, filp);
+
+	return 0;
+}
+
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
+
+static unsigned int hyper_dmabuf_event_poll(struct file *filp,
+				     struct poll_table_struct *wait)
+{
+	poll_wait(filp, &hy_drv_priv->event_wait, wait);
+
+	if (!list_empty(&hy_drv_priv->event_list))
+		return POLLIN | POLLRDNORM;
+
+	return 0;
+}
+
+static ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
+		size_t count, loff_t *offset)
+{
+	int ret;
+
+	/* only root can read events */
+	if (!capable(CAP_DAC_OVERRIDE)) {
+		dev_err(hy_drv_priv->dev,
+			"Only root can read events\n");
+		return -EPERM;
+	}
+
+	/* make sure user buffer can be written */
+	if (!access_ok(VERIFY_WRITE, buffer, count)) {
+		dev_err(hy_drv_priv->dev,
+			"User buffer can't be written.\n");
+		return -EINVAL;
+	}
+
+	ret = mutex_lock_interruptible(&hy_drv_priv->event_read_lock);
+	if (ret)
+		return ret;
+
+	while (1) {
+		struct hyper_dmabuf_event *e = NULL;
+
+		spin_lock_irq(&hy_drv_priv->event_lock);
+		if (!list_empty(&hy_drv_priv->event_list)) {
+			e = list_first_entry(&hy_drv_priv->event_list,
+					struct hyper_dmabuf_event, link);
+			list_del(&e->link);
+		}
+		spin_unlock_irq(&hy_drv_priv->event_lock);
+
+		if (!e) {
+			if (ret)
+				break;
+
+			if (filp->f_flags & O_NONBLOCK) {
+				ret = -EAGAIN;
+				break;
+			}
+
+			mutex_unlock(&hy_drv_priv->event_read_lock);
+			ret = wait_event_interruptible(hy_drv_priv->event_wait,
+				  !list_empty(&hy_drv_priv->event_list));
+
+			if (ret == 0)
+				ret = mutex_lock_interruptible(
+					&hy_drv_priv->event_read_lock);
+
+			if (ret)
+				return ret;
+		} else {
+			unsigned int length = (sizeof(e->event_data.hdr) +
+						      e->event_data.hdr.size);
+
+			if (length > count - ret) {
+put_back_event:
+				spin_lock_irq(&hy_drv_priv->event_lock);
+				list_add(&e->link, &hy_drv_priv->event_list);
+				spin_unlock_irq(&hy_drv_priv->event_lock);
+				break;
+			}
+
+			if (copy_to_user(buffer + ret, &e->event_data.hdr,
+					 sizeof(e->event_data.hdr))) {
+				if (ret == 0)
+					ret = -EFAULT;
+
+				goto put_back_event;
+			}
+
+			ret += sizeof(e->event_data.hdr);
+
+			if (copy_to_user(buffer + ret, e->event_data.data,
+					 e->event_data.hdr.size)) {
+				/* error while copying void *data */
+
+				struct hyper_dmabuf_event_hdr dummy_hdr = {0};
+
+				ret -= sizeof(e->event_data.hdr);
+
+				/* nullifying hdr of the event in user buffer */
+				if (copy_to_user(buffer + ret, &dummy_hdr,
+						 sizeof(dummy_hdr))) {
+					dev_err(hy_drv_priv->dev,
+						"failed to nullify invalid hdr already in userspace\n");
+				}
+
+				ret = -EFAULT;
+
+				goto put_back_event;
+			}
+
+			ret += e->event_data.hdr.size;
+			hy_drv_priv->pending--;
+			kfree(e);
+		}
+	}
+
+	mutex_unlock(&hy_drv_priv->event_read_lock);
+
+	return ret;
+}
+
+#endif
+
+static const struct file_operations hyper_dmabuf_driver_fops = {
+	.owner = THIS_MODULE,
+	.open = hyper_dmabuf_open,
+	.release = hyper_dmabuf_release,
+
+/* poll and read interfaces are needed only for event-polling */
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
+	.read = hyper_dmabuf_event_read,
+	.poll = hyper_dmabuf_event_poll,
+#endif
+
+	.unlocked_ioctl = hyper_dmabuf_ioctl,
+};
+
+static struct miscdevice hyper_dmabuf_miscdev = {
+	.minor = MISC_DYNAMIC_MINOR,
+	.name = "hyper_dmabuf",
+	.fops = &hyper_dmabuf_driver_fops,
+};
+
+static int register_device(void)
+{
+	int ret = 0;
+
+	ret = misc_register(&hyper_dmabuf_miscdev);
+
+	if (ret) {
+		printk(KERN_ERR "hyper_dmabuf: driver can't be registered\n");
+		return ret;
+	}
+
+	hy_drv_priv->dev = hyper_dmabuf_miscdev.this_device;
+
+	/* TODO: Check if there is a different way to initialize dma mask */
+	dma_coerce_mask_and_coherent(hy_drv_priv->dev, DMA_BIT_MASK(64));
+
+	return ret;
+}
+
+static void unregister_device(void)
+{
+	dev_info(hy_drv_priv->dev,
+		"hyper_dmabuf: unregister_device() is called\n");
+
+	misc_deregister(&hyper_dmabuf_miscdev);
+}
+
+static int __init hyper_dmabuf_drv_init(void)
+{
+	int ret = 0;
+
+	printk(KERN_NOTICE "hyper_dmabuf: initialization started\n");
+
+	hy_drv_priv = kcalloc(1, sizeof(struct hyper_dmabuf_private),
+			      GFP_KERNEL);
+
+	if (!hy_drv_priv)
+		return -ENOMEM;
+
+	ret = register_device();
+	if (ret < 0) {
+		kfree(hy_drv_priv);
+		return ret;
+	}
+
+/* currently only supports XEN hypervisor */
+#ifdef CONFIG_HYPER_DMABUF_XEN
+	hy_drv_priv->bknd_ops = &xen_bknd_ops;
+#else
+	hy_drv_priv->bknd_ops = NULL;
+	printk(KERN_ERR "hyper_dmabuf drv currently supports XEN only.\n");
+#endif
+
+	if (hy_drv_priv->bknd_ops == NULL) {
+		printk(KERN_ERR "hyper_dmabuf: no backend found\n");
+		unregister_device();
+		kfree(hy_drv_priv);
+		return -ENODEV;
+	}
+
+	mutex_init(&hy_drv_priv->lock);
+
+	mutex_lock(&hy_drv_priv->lock);
+
+	hy_drv_priv->initialized = false;
+
+	dev_info(hy_drv_priv->dev,
+		 "initializing database for imported/exported dmabufs\n");
+
+	hy_drv_priv->work_queue = create_workqueue("hyper_dmabuf_wqueue");
+
+	ret = hyper_dmabuf_table_init();
+	if (ret < 0) {
+		dev_err(hy_drv_priv->dev,
+			"failed to init table for exported/imported entries\n");
+		mutex_unlock(&hy_drv_priv->lock);
+		unregister_device();
+		kfree(hy_drv_priv);
+		return ret;
+	}
+
+#ifdef CONFIG_HYPER_DMABUF_SYSFS
+	ret = hyper_dmabuf_register_sysfs(hy_drv_priv->dev);
+	if (ret < 0) {
+		dev_err(hy_drv_priv->dev,
+			"failed to initialize sysfs\n");
+		mutex_unlock(&hy_drv_priv->lock);
+		unregister_device();
+		kfree(hy_drv_priv);
+		return ret;
+	}
+#endif
+
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
+	mutex_init(&hy_drv_priv->event_read_lock);
+	spin_lock_init(&hy_drv_priv->event_lock);
+
+	/* Initialize event queue */
+	INIT_LIST_HEAD(&hy_drv_priv->event_list);
+	init_waitqueue_head(&hy_drv_priv->event_wait);
+
+	/* resetting number of pending events */
+	hy_drv_priv->pending = 0;
+#endif
+
+	if (hy_drv_priv->bknd_ops->init) {
+		ret = hy_drv_priv->bknd_ops->init();
+
+		if (ret < 0) {
+			dev_err(hy_drv_priv->dev,
+				"failed to initialize backend\n");
+			mutex_unlock(&hy_drv_priv->lock);
+			unregister_device();
+			kfree(hy_drv_priv);
+			return ret;
+		}
+	}
+
+	hy_drv_priv->domid = hy_drv_priv->bknd_ops->get_vm_id();
+
+	ret = hy_drv_priv->bknd_ops->init_comm_env();
+	if (ret < 0) {
+		dev_err(hy_drv_priv->dev,
+			"failed to initialize comm-env\n");
+	} else {
+		hy_drv_priv->initialized = true;
+	}
+
+	mutex_unlock(&hy_drv_priv->lock);
+
+	dev_info(hy_drv_priv->dev,
+		"Finishing up initialization of hyper_dmabuf drv\n");
+
+	/* interrupt for comm should be registered here: */
+	return ret;
+}
+
+static void hyper_dmabuf_drv_exit(void)
+{
+#ifdef CONFIG_HYPER_DMABUF_SYSFS
+	hyper_dmabuf_unregister_sysfs(hy_drv_priv->dev);
+#endif
+
+	mutex_lock(&hy_drv_priv->lock);
+
+	/* hash tables for export/import entries and ring_infos */
+	hyper_dmabuf_table_destroy();
+
+	hy_drv_priv->bknd_ops->destroy_comm();
+
+	if (hy_drv_priv->bknd_ops->cleanup)
+		hy_drv_priv->bknd_ops->cleanup();
+
+	/* destroy workqueue */
+	if (hy_drv_priv->work_queue)
+		destroy_workqueue(hy_drv_priv->work_queue);
+
+	/* destroy id_queue */
+	if (hy_drv_priv->id_queue)
+		hyper_dmabuf_free_hid_list();
+
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
+	/* clean up event queue */
+	hyper_dmabuf_events_release();
+#endif
+
+	mutex_unlock(&hy_drv_priv->lock);
+
+	dev_info(hy_drv_priv->dev,
+		 "hyper_dmabuf driver: Exiting\n");
+
+	unregister_device();
+
+	kfree(hy_drv_priv);
+}
+
+module_init(hyper_dmabuf_drv_init);
+module_exit(hyper_dmabuf_drv_exit);
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h
new file mode 100644
index 0000000..c2bb3ce
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -0,0 +1,118 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
+#define __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
+
+#include <linux/device.h>
+#include <xen/hyper_dmabuf.h>
+
+struct hyper_dmabuf_req;
+
+struct hyper_dmabuf_event {
+	struct hyper_dmabuf_event_data event_data;
+	struct list_head link;
+};
+
+struct hyper_dmabuf_private {
+	struct device *dev;
+
+	/* VM(domain) id of current VM instance */
+	int domid;
+
+	/* workqueue dedicated to hyper_dmabuf driver */
+	struct workqueue_struct *work_queue;
+
+	/* list of reusable hyper_dmabuf_ids */
+	struct list_reusable_id *id_queue;
+
+	/* backend ops - hypervisor specific */
+	struct hyper_dmabuf_bknd_ops *bknd_ops;
+
+	/* device global lock */
+	/* TODO: might need a lock per resource (e.g. EXPORT LIST) */
+	struct mutex lock;
+
+	/* flag that shows whether backend is initialized */
+	bool initialized;
+
+	wait_queue_head_t event_wait;
+	struct list_head event_list;
+
+	spinlock_t event_lock;
+	struct mutex event_read_lock;
+
+	/* # of pending events */
+	int pending;
+};
+
+struct list_reusable_id {
+	hyper_dmabuf_id_t hid;
+	struct list_head list;
+};
+
+struct hyper_dmabuf_bknd_ops {
+	/* backend initialization routine (optional) */
+	int (*init)(void);
+
+	/* backend cleanup routine (optional) */
+	int (*cleanup)(void);
+
+	/* retrieving id of current virtual machine */
+	int (*get_vm_id)(void);
+
+	/* get pages shared via hypervisor-specific method */
+	int (*share_pages)(struct page **, int, int, void **);
+
+	/* make shared pages unshared via hypervisor specific method */
+	int (*unshare_pages)(void **, int);
+
+	/* map remotely shared pages on importer's side via
+	 * hypervisor-specific method
+	 */
+	struct page ** (*map_shared_pages)(unsigned long, int, int, void **);
+
+	/* unmap and free shared pages on importer's side via
+	 * hypervisor-specific method
+	 */
+	int (*unmap_shared_pages)(void **, int);
+
+	/* initialize communication environment */
+	int (*init_comm_env)(void);
+
+	void (*destroy_comm)(void);
+
+	/* upstream ch setup (receiving and responding) */
+	int (*init_rx_ch)(int);
+
+	/* downstream ch setup (transmitting and parsing responses) */
+	int (*init_tx_ch)(int);
+
+	int (*send_req)(int, struct hyper_dmabuf_req *, int);
+};
+
+/* exporting global drv private info */
+extern struct hyper_dmabuf_private *hy_drv_priv;
+
+#endif /* __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__ */
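
The hyper_dmabuf_bknd_ops table above is the complete hypervisor
abstraction: porting the driver to another hypervisor amounts to
filling in these callbacks and pointing hy_drv_priv->bknd_ops at the
new table in hyper_dmabuf_drv_init(). A minimal skeleton is sketched
below; every stub_* name is invented for illustration, only the ops
layout comes from this header.

  /* hypothetical backend skeleton; only struct hyper_dmabuf_bknd_ops
   * is taken from hyper_dmabuf_drv.h, all stub_* functions are invented
   */
  #include <linux/errno.h>
  #include <linux/mm_types.h>
  #include "hyper_dmabuf_drv.h"

  static int stub_get_vm_id(void)
  {
  	return 0;	/* would ask the hypervisor for the local VM id */
  }

  static int stub_share_pages(struct page **pages, int domid, int nents,
  			    void **refs_info)
  {
  	return -EOPNOTSUPP;	/* would grant nents pages to domid */
  }

  static int stub_unshare_pages(void **refs_info, int nents)
  {
  	return -EOPNOTSUPP;	/* would revoke the grants made above */
  }

  static struct page **stub_map_shared_pages(unsigned long ref, int domid,
  					   int nents, void **refs_info)
  {
  	return NULL;	/* would map the remote pages locally */
  }

  static int stub_unmap_shared_pages(void **refs_info, int nents)
  {
  	return -EOPNOTSUPP;
  }

  static int stub_init_comm_env(void)
  {
  	return 0;	/* would set up the inter-VM message transport */
  }

  static void stub_destroy_comm(void)
  {
  }

  static int stub_init_rx_ch(int domid)
  {
  	return 0;
  }

  static int stub_init_tx_ch(int domid)
  {
  	return 0;
  }

  static int stub_send_req(int domid, struct hyper_dmabuf_req *req,
  			 int wait)
  {
  	return -EOPNOTSUPP;	/* would post req on the tx channel */
  }

  struct hyper_dmabuf_bknd_ops stub_bknd_ops = {
  	/* .init and .cleanup are optional and left NULL here */
  	.get_vm_id = stub_get_vm_id,
  	.share_pages = stub_share_pages,
  	.unshare_pages = stub_unshare_pages,
  	.map_shared_pages = stub_map_shared_pages,
  	.unmap_shared_pages = stub_unmap_shared_pages,
  	.init_comm_env = stub_init_comm_env,
  	.destroy_comm = stub_destroy_comm,
  	.init_rx_ch = stub_init_rx_ch,
  	.init_tx_ch = stub_init_tx_ch,
  	.send_req = stub_send_req,
  };
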
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.c
new file mode 100644
index 0000000..392ea99
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.c
@@ -0,0 +1,122 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_event.h"
+
+static void send_event(struct hyper_dmabuf_event *e)
+{
+	struct hyper_dmabuf_event *oldest;
+	unsigned long irqflags;
+
+	spin_lock_irqsave(&hy_drv_priv->event_lock, irqflags);
+
+	/* if the current number of pending events hits the maximum
+	 * allowed, remove the oldest event in the list to make room
+	 */
+	if (hy_drv_priv->pending >= MAX_DEPTH_EVENT_QUEUE) {
+		oldest = list_first_entry(&hy_drv_priv->event_list,
+				struct hyper_dmabuf_event, link);
+		list_del(&oldest->link);
+		hy_drv_priv->pending--;
+		kfree(oldest);
+	}
+
+	list_add_tail(&e->link,
+		      &hy_drv_priv->event_list);
+
+	hy_drv_priv->pending++;
+
+	wake_up_interruptible(&hy_drv_priv->event_wait);
+
+	spin_unlock_irqrestore(&hy_drv_priv->event_lock, irqflags);
+}
+
+void hyper_dmabuf_events_release(void)
+{
+	struct hyper_dmabuf_event *e, *et;
+	unsigned long irqflags;
+
+	spin_lock_irqsave(&hy_drv_priv->event_lock, irqflags);
+
+	list_for_each_entry_safe(e, et, &hy_drv_priv->event_list,
+				 link) {
+		list_del(&e->link);
+		kfree(e);
+		hy_drv_priv->pending--;
+	}
+
+	if (hy_drv_priv->pending) {
+		dev_err(hy_drv_priv->dev,
+			"possible leak on event_list\n");
+	}
+
+	spin_unlock_irqrestore(&hy_drv_priv->event_lock, irqflags);
+}
+
+int hyper_dmabuf_import_event(hyper_dmabuf_id_t hid)
+{
+	struct hyper_dmabuf_event *e;
+	struct imported_sgt_info *imported;
+
+	imported = hyper_dmabuf_find_imported(hid);
+
+	if (!imported) {
+		dev_err(hy_drv_priv->dev,
+			"can't find imported_sgt_info in the list\n");
+		return -EINVAL;
+	}
+
+	e = kzalloc(sizeof(*e), GFP_KERNEL);
+
+	if (!e)
+		return -ENOMEM;
+
+	e->event_data.hdr.event_type = HYPER_DMABUF_NEW_IMPORT;
+	e->event_data.hdr.hid = hid;
+	e->event_data.data = (void *)imported->priv;
+	e->event_data.hdr.size = imported->sz_priv;
+
+	send_event(e);
+
+	dev_dbg(hy_drv_priv->dev,
+		"event number = %d\n", hy_drv_priv->pending);
+
+	dev_dbg(hy_drv_priv->dev,
+		"generating events for {%d, %d, %d, %d}\n",
+		imported->hid.id, imported->hid.rng_key[0],
+		imported->hid.rng_key[1], imported->hid.rng_key[2]);
+
+	return 0;
+}
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.h
new file mode 100644
index 0000000..50db04f
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.h
@@ -0,0 +1,38 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_EVENT_H__
+#define __HYPER_DMABUF_EVENT_H__
+
+#define MAX_DEPTH_EVENT_QUEUE 32
+
+enum hyper_dmabuf_event_type {
+	HYPER_DMABUF_NEW_IMPORT = 0x10000,
+};
+
+void hyper_dmabuf_events_release(void);
+
+int hyper_dmabuf_import_event(hyper_dmabuf_id_t hid);
+
+#endif /* __HYPER_DMABUF_EVENT_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.c
new file mode 100644
index 0000000..e67b84a
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.c
@@ -0,0 +1,133 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/random.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_id.h"
+
+void hyper_dmabuf_store_hid(hyper_dmabuf_id_t hid)
+{
+	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
+	struct list_reusable_id *new_reusable;
+
+	new_reusable = kmalloc(sizeof(*new_reusable), GFP_KERNEL);
+
+	if (!new_reusable)
+		return;
+
+	new_reusable->hid = hid;
+
+	list_add(&new_reusable->list, &reusable_head->list);
+}
+
+static hyper_dmabuf_id_t get_reusable_hid(void)
+{
+	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
+	hyper_dmabuf_id_t hid = {-1, {0, 0, 0} };
+
+	/* check if there is a reusable id */
+	if (!list_empty(&reusable_head->list)) {
+		reusable_head = list_first_entry(&reusable_head->list,
+						 struct list_reusable_id,
+						 list);
+
+		list_del(&reusable_head->list);
+		hid = reusable_head->hid;
+		kfree(reusable_head);
+	}
+
+	return hid;
+}
+
+void hyper_dmabuf_free_hid_list(void)
+{
+	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
+	struct list_reusable_id *temp_head;
+
+	if (reusable_head) {
+		/* free all reusable ids remaining in the list */
+		while (!list_empty(&reusable_head->list)) {
+			temp_head = list_first_entry(&reusable_head->list,
+						     struct list_reusable_id,
+						     list);
+			list_del(&temp_head->list);
+			kfree(temp_head);
+		}
+
+		/* freeing head */
+		kfree(reusable_head);
+	}
+}
+
+hyper_dmabuf_id_t hyper_dmabuf_get_hid(void)
+{
+	static int count;
+	hyper_dmabuf_id_t hid;
+	struct list_reusable_id *reusable_head;
+
+	/* first call to hyper_dmabuf_get_hid */
+	if (count == 0) {
+		reusable_head = kmalloc(sizeof(*reusable_head), GFP_KERNEL);
+
+		if (!reusable_head)
+			return (hyper_dmabuf_id_t){-1, {0, 0, 0} };
+
+		/* list head carries an invalid id */
+		reusable_head->hid.id = -1;
+		INIT_LIST_HEAD(&reusable_head->list);
+		hy_drv_priv->id_queue = reusable_head;
+	}
+
+	hid = get_reusable_hid();
+
+	/* create a new hid only if there is nothing in the reusable id
+	 * queue and count is less than the maximum allowed
+	 */
+	if (hid.id == -1 && count < HYPER_DMABUF_ID_MAX)
+		hid.id = HYPER_DMABUF_ID_CREATE(hy_drv_priv->domid, count++);
+
+	/* random data embedded in the id for security */
+	get_random_bytes(&hid.rng_key[0], sizeof(hid.rng_key));
+
+	return hid;
+}
+
+bool hyper_dmabuf_hid_keycomp(hyper_dmabuf_id_t hid1, hyper_dmabuf_id_t hid2)
+{
+	int i;
+
+	/* compare keys */
+	for (i = 0; i < 3; i++) {
+		if (hid1.rng_key[i] != hid2.rng_key[i])
+			return false;
+	}
+
+	return true;
+}
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h
new file mode 100644
index 0000000..ed690f3
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h
@@ -0,0 +1,51 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_ID_H__
+#define __HYPER_DMABUF_ID_H__
+
+#define HYPER_DMABUF_ID_CREATE(domid, cnt) \
+	((((domid) & 0xFF) << 24) | ((cnt) & 0xFFFFFF))
+
+#define HYPER_DMABUF_DOM_ID(hid) \
+	(((hid.id) >> 24) & 0xFF)
+
+/* currently maximum number of buffers shared
+ * at any given moment is limited to 1000
+ */
+#define HYPER_DMABUF_ID_MAX 1000
+
+/* adding freed hid to the reusable list */
+void hyper_dmabuf_store_hid(hyper_dmabuf_id_t hid);
+
+/* freeing the reusable list */
+void hyper_dmabuf_free_hid_list(void);
+
+/* getting a hid available to use. */
+hyper_dmabuf_id_t hyper_dmabuf_get_hid(void);
+
+/* comparing the random keys of two hids */
+bool hyper_dmabuf_hid_keycomp(hyper_dmabuf_id_t hid1, hyper_dmabuf_id_t hid2);
+
+#endif /* __HYPER_DMABUF_ID_H__ */
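
As a worked example of the encoding above (illustrative only, not
part of the patch): domain 3 creating its buffer number 5 yields the
id 0x03000005, and the top byte recovers the domain id again.

  /* illustrative check of the hid bit layout; the macro is copied
   * from hyper_dmabuf_id.h and the expected values are worked out by
   * hand
   */
  #include <assert.h>

  #define HYPER_DMABUF_ID_CREATE(domid, cnt) \
  	((((domid) & 0xFF) << 24) | ((cnt) & 0xFFFFFF))

  int main(void)
  {
  	int id = HYPER_DMABUF_ID_CREATE(3, 5);

  	/* domid lives in bits 31..24, the counter below */
  	assert(id == 0x03000005);

  	/* same extraction HYPER_DMABUF_DOM_ID() performs on hid.id */
  	assert(((id >> 24) & 0xFF) == 3);

  	return 0;
  }
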
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c
new file mode 100644
index 0000000..ca6edf2
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -0,0 +1,786 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include <linux/dma-buf.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_id.h"
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_ioctl.h"
+#include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_msg.h"
+#include "hyper_dmabuf_sgl_proc.h"
+#include "hyper_dmabuf_ops.h"
+#include "hyper_dmabuf_query.h"
+
+static int hyper_dmabuf_tx_ch_setup_ioctl(struct file *filp, void *data)
+{
+	struct ioctl_hyper_dmabuf_tx_ch_setup *tx_ch_attr;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
+	int ret = 0;
+
+	if (!data) {
+		dev_err(hy_drv_priv->dev, "user data is NULL\n");
+		return -EINVAL;
+	}
+	tx_ch_attr = (struct ioctl_hyper_dmabuf_tx_ch_setup *)data;
+
+	ret = bknd_ops->init_tx_ch(tx_ch_attr->remote_domain);
+
+	return ret;
+}
+
+static int hyper_dmabuf_rx_ch_setup_ioctl(struct file *filp, void *data)
+{
+	struct ioctl_hyper_dmabuf_rx_ch_setup *rx_ch_attr;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
+	int ret = 0;
+
+	if (!data) {
+		dev_err(hy_drv_priv->dev, "user data is NULL\n");
+		return -EINVAL;
+	}
+
+	rx_ch_attr = (struct ioctl_hyper_dmabuf_rx_ch_setup *)data;
+
+	ret = bknd_ops->init_rx_ch(rx_ch_attr->source_domain);
+
+	return ret;
+}
+
+static int send_export_msg(struct exported_sgt_info *exported,
+			   struct pages_info *pg_info)
+{
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
+	struct hyper_dmabuf_req *req;
+	int op[MAX_NUMBER_OF_OPERANDS] = {0};
+	int ret, i;
+
+	/* now create request for importer via ring */
+	op[0] = exported->hid.id;
+
+	for (i = 0; i < 3; i++)
+		op[i+1] = exported->hid.rng_key[i];
+
+	if (pg_info) {
+		op[4] = pg_info->nents;
+		op[5] = pg_info->frst_ofst;
+		op[6] = pg_info->last_len;
+		op[7] = bknd_ops->share_pages(pg_info->pgs, exported->rdomid,
+					 pg_info->nents, &exported->refs_info);
+		if (op[7] < 0) {
+			dev_err(hy_drv_priv->dev, "pages sharing failed\n");
+			return op[7];
+		}
+	}
+
+	op[8] = exported->sz_priv;
+
+	/* driver/application specific private info */
+	memcpy(&op[9], exported->priv, op[8]);
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+
+	if (!req)
+		return -ENOMEM;
+
+	/* composing a message to the importer */
+	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT, &op[0]);
+
+	ret = bknd_ops->send_req(exported->rdomid, req, true);
+
+	kfree(req);
+
+	return ret;
+}
+
+/* Fast-path exporting routine for the case where the same buffer was
+ * already exported. Here we skip the normal exporting process and just
+ * update the private data on both VMs (importer and exporter).
+ *
+ * Returns 1 if re-export is needed, 0 on success, or a negative kernel
+ * error code if something goes wrong.
+ */
+static int fastpath_export(hyper_dmabuf_id_t hid, int sz_priv, char *priv)
+{
+	int reexport = 1;
+	int ret = 0;
+	struct exported_sgt_info *exported;
+
+	exported = hyper_dmabuf_find_exported(hid);
+
+	if (!exported)
+		return reexport;
+
+	if (exported->valid == false)
+		return reexport;
+
+	/*
+	 * Check if unexport is already scheduled for that buffer;
+	 * if so, try to cancel it. If cancelling fails, the buffer
+	 * needs to be re-exported once again.
+	 */
+	if (exported->unexport_sched) {
+		if (!cancel_delayed_work_sync(&exported->unexport))
+			return reexport;
+
+		exported->unexport_sched = false;
+	}
+
+	/* if the size of the private data has changed, reallocate
+	 * the space for the private data with the new size
+	 */
+	if (sz_priv != exported->sz_priv) {
+		kfree(exported->priv);
+
+		/* truncating size */
+		if (sz_priv > MAX_SIZE_PRIV_DATA)
+			exported->sz_priv = MAX_SIZE_PRIV_DATA;
+		else
+			exported->sz_priv = sz_priv;
+
+		exported->priv = kcalloc(1, exported->sz_priv,
+					 GFP_KERNEL);
+
+		if (!exported->priv) {
+			hyper_dmabuf_remove_exported(exported->hid);
+			hyper_dmabuf_cleanup_sgt_info(exported, true);
+			kfree(exported);
+			return -ENOMEM;
+		}
+	}
+
+	/* update private data in sgt_info with new ones */
+	ret = copy_from_user(exported->priv, priv, exported->sz_priv);
+	if (ret) {
+		dev_err(hy_drv_priv->dev,
+			"Failed to load a new private data\n");
+		ret = -EINVAL;
+	} else {
+		/* send an export msg for updating priv in importer */
+		ret = send_export_msg(exported, NULL);
+
+		if (ret < 0) {
+			dev_err(hy_drv_priv->dev,
+				"Failed to send a new private data\n");
+			ret = -EBUSY;
+		}
+	}
+
+	return ret;
+}
+
+static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
+{
+	struct ioctl_hyper_dmabuf_export_remote *export_remote_attr =
+			(struct ioctl_hyper_dmabuf_export_remote *)data;
+	struct dma_buf *dma_buf;
+	struct dma_buf_attachment *attachment;
+	struct sg_table *sgt;
+	struct pages_info *pg_info;
+	struct exported_sgt_info *exported;
+	hyper_dmabuf_id_t hid;
+	int ret = 0;
+
+	if (hy_drv_priv->domid == export_remote_attr->remote_domain) {
+		dev_err(hy_drv_priv->dev,
+			"exporting to the same VM is not permitted\n");
+		return -EINVAL;
+	}
+
+	dma_buf = dma_buf_get(export_remote_attr->dmabuf_fd);
+
+	if (IS_ERR(dma_buf)) {
+		dev_err(hy_drv_priv->dev, "Cannot get dma buf\n");
+		return PTR_ERR(dma_buf);
+	}
+
+	/* we check if this specific attachment was already exported
+	 * to the same domain and if yes and it's valid sgt_info,
+	 * it returns hyper_dmabuf_id of pre-exported sgt_info
+	 */
+	hid = hyper_dmabuf_find_hid_exported(dma_buf,
+					     export_remote_attr->remote_domain);
+
+	if (hid.id != -1) {
+		ret = fastpath_export(hid, export_remote_attr->sz_priv,
+				      export_remote_attr->priv);
+
+		/* return if fastpath_export succeeds or
+		 * gets some fatal error
+		 */
+		if (ret <= 0) {
+			dma_buf_put(dma_buf);
+			export_remote_attr->hid = hid;
+			return ret;
+		}
+	}
+
+	attachment = dma_buf_attach(dma_buf, hy_drv_priv->dev);
+	if (IS_ERR(attachment)) {
+		dev_err(hy_drv_priv->dev, "cannot get attachment\n");
+		ret = PTR_ERR(attachment);
+		goto fail_attach;
+	}
+
+	sgt = dma_buf_map_attachment(attachment, DMA_BIDIRECTIONAL);
+
+	if (IS_ERR(sgt)) {
+		dev_err(hy_drv_priv->dev, "cannot map attachment\n");
+		ret = PTR_ERR(sgt);
+		goto fail_map_attachment;
+	}
+
+	exported = kcalloc(1, sizeof(*exported), GFP_KERNEL);
+
+	if (!exported) {
+		ret = -ENOMEM;
+		goto fail_sgt_info_creation;
+	}
+
+	/* possible truncation */
+	if (export_remote_attr->sz_priv > MAX_SIZE_PRIV_DATA)
+		exported->sz_priv = MAX_SIZE_PRIV_DATA;
+	else
+		exported->sz_priv = export_remote_attr->sz_priv;
+
+	/* creating buffer for private data of buffer */
+	if (exported->sz_priv != 0) {
+		exported->priv = kcalloc(1, exported->sz_priv, GFP_KERNEL);
+
+		if (!exported->priv) {
+			ret = -ENOMEM;
+			goto fail_priv_creation;
+		}
+	} else {
+		dev_err(hy_drv_priv->dev, "private data size is 0\n");
+	}
+
+	exported->hid = hyper_dmabuf_get_hid();
+
+	/* no more exported dmabuf allowed */
+	if (exported->hid.id == -1) {
+		dev_err(hy_drv_priv->dev,
+			"exceeds allowed number of dmabuf to be exported\n");
+		ret = -ENOMEM;
+		goto fail_sgt_info_creation;
+	}
+
+	exported->rdomid = export_remote_attr->remote_domain;
+	exported->dma_buf = dma_buf;
+	exported->valid = true;
+
+	exported->active_sgts = kmalloc(sizeof(struct sgt_list), GFP_KERNEL);
+	if (!exported->active_sgts) {
+		ret = -ENOMEM;
+		goto fail_map_active_sgts;
+	}
+
+	exported->active_attached = kmalloc(sizeof(struct attachment_list),
+					    GFP_KERNEL);
+	if (!exported->active_attached) {
+		ret = -ENOMEM;
+		goto fail_map_active_attached;
+	}
+
+	exported->va_kmapped = kmalloc(sizeof(struct kmap_vaddr_list),
+				       GFP_KERNEL);
+	if (!exported->va_kmapped) {
+		ret = -ENOMEM;
+		goto fail_map_va_kmapped;
+	}
+
+	exported->va_vmapped = kmalloc(sizeof(struct vmap_vaddr_list),
+				       GFP_KERNEL);
+	if (!exported->va_vmapped) {
+		ret = -ENOMEM;
+		goto fail_map_va_vmapped;
+	}
+
+	exported->active_sgts->sgt = sgt;
+	exported->active_attached->attach = attachment;
+	exported->va_kmapped->vaddr = NULL;
+	exported->va_vmapped->vaddr = NULL;
+
+	/* initialize list of sgt, attachment and vaddr for dmabuf sync
+	 * via shadow dma-buf
+	 */
+	INIT_LIST_HEAD(&exported->active_sgts->list);
+	INIT_LIST_HEAD(&exported->active_attached->list);
+	INIT_LIST_HEAD(&exported->va_kmapped->list);
+	INIT_LIST_HEAD(&exported->va_vmapped->list);
+
+	/* copy private data to sgt_info */
+	ret = copy_from_user(exported->priv, export_remote_attr->priv,
+			     exported->sz_priv);
+
+	if (ret) {
+		dev_err(hy_drv_priv->dev,
+			"failed to load private data\n");
+		ret = -EINVAL;
+		goto fail_export;
+	}
+
+	pg_info = hyper_dmabuf_ext_pgs(sgt);
+	if (!pg_info) {
+		dev_err(hy_drv_priv->dev,
+			"failed to construct pg_info\n");
+		ret = -ENOMEM;
+		goto fail_export;
+	}
+
+	exported->nents = pg_info->nents;
+
+	/* now register it to export list */
+	hyper_dmabuf_register_exported(exported);
+
+	export_remote_attr->hid = exported->hid;
+
+	ret = send_export_msg(exported, pg_info);
+
+	if (ret < 0) {
+		dev_err(hy_drv_priv->dev,
+			"failed to send out the export request\n");
+		goto fail_send_request;
+	}
+
+	/* free pg_info */
+	kfree(pg_info->pgs);
+	kfree(pg_info);
+
+	exported->filp = filp;
+
+	return ret;
+
+/* Clean-up if error occurs */
+
+fail_send_request:
+	hyper_dmabuf_remove_exported(exported->hid);
+
+	/* free pg_info */
+	kfree(pg_info->pgs);
+	kfree(pg_info);
+
+fail_export:
+	kfree(exported->va_vmapped);
+
+fail_map_va_vmapped:
+	kfree(exported->va_kmapped);
+
+fail_map_va_kmapped:
+	kfree(exported->active_attached);
+
+fail_map_active_attached:
+	kfree(exported->active_sgts);
+	kfree(exported->priv);
+
+fail_priv_creation:
+	kfree(exported);
+
+fail_map_active_sgts:
+fail_sgt_info_creation:
+	dma_buf_unmap_attachment(attachment, sgt,
+				 DMA_BIDIRECTIONAL);
+
+fail_map_attachment:
+	dma_buf_detach(dma_buf, attachment);
+
+fail_attach:
+	dma_buf_put(dma_buf);
+
+	return ret;
+}
+
+static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
+{
+	struct ioctl_hyper_dmabuf_export_fd *export_fd_attr =
+			(struct ioctl_hyper_dmabuf_export_fd *)data;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
+	struct imported_sgt_info *imported;
+	struct hyper_dmabuf_req *req;
+	struct page **data_pgs;
+	int op[4];
+	int i;
+	int ret = 0;
+
+	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
+
+	/* look for dmabuf for the id */
+	imported = hyper_dmabuf_find_imported(export_fd_attr->hid);
+
+	/* can't find sgt from the table */
+	if (!imported) {
+		dev_err(hy_drv_priv->dev, "can't find the entry\n");
+		return -ENOENT;
+	}
+
+	mutex_lock(&hy_drv_priv->lock);
+
+	imported->importers++;
+
+	/* send notification for export_fd to exporter */
+	op[0] = imported->hid.id;
+
+	for (i = 0; i < 3; i++)
+		op[i+1] = imported->hid.rng_key[i];
+
+	dev_dbg(hy_drv_priv->dev, "Export FD of buffer {id:%d key:%d %d %d}\n",
+		imported->hid.id, imported->hid.rng_key[0],
+		imported->hid.rng_key[1], imported->hid.rng_key[2]);
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+
+	if (!req) {
+		mutex_unlock(&hy_drv_priv->lock);
+		return -ENOMEM;
+	}
+
+	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD, &op[0]);
+
+	ret = bknd_ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req, true);
+
+	if (ret < 0) {
+		/* in case of a timeout the other end may still eventually
+		 * receive the request, so we need to undo it here
+		 */
+		hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD_FAILED,
+					&op[0]);
+		bknd_ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req, false);
+		kfree(req);
+		dev_err(hy_drv_priv->dev,
+			"Failed to create sgt or notify exporter\n");
+		imported->importers--;
+		mutex_unlock(&hy_drv_priv->lock);
+		return ret;
+	}
+
+	kfree(req);
+
+	if (ret == HYPER_DMABUF_REQ_ERROR) {
+		dev_err(hy_drv_priv->dev,
+			"Buffer invalid {id:%d key:%d %d %d}, cannot import\n",
+			imported->hid.id, imported->hid.rng_key[0],
+			imported->hid.rng_key[1], imported->hid.rng_key[2]);
+
+		imported->importers--;
+		mutex_unlock(&hy_drv_priv->lock);
+		return -EINVAL;
+	}
+
+	ret = 0;
+
+	dev_dbg(hy_drv_priv->dev,
+		"Found buffer gref %d off %d\n",
+		imported->ref_handle, imported->frst_ofst);
+
+	dev_dbg(hy_drv_priv->dev,
+		"last len %d nents %d domain %d\n",
+		imported->last_len, imported->nents,
+		HYPER_DMABUF_DOM_ID(imported->hid));
+
+	if (!imported->sgt) {
+		dev_dbg(hy_drv_priv->dev,
+			"buffer {id:%d key:%d %d %d} pages not mapped yet\n",
+			imported->hid.id, imported->hid.rng_key[0],
+			imported->hid.rng_key[1], imported->hid.rng_key[2]);
+
+		data_pgs = bknd_ops->map_shared_pages(imported->ref_handle,
+					HYPER_DMABUF_DOM_ID(imported->hid),
+					imported->nents,
+					&imported->refs_info);
+
+		if (!data_pgs) {
+			dev_err(hy_drv_priv->dev,
+				"can't map pages hid {id:%d key:%d %d %d}\n",
+				imported->hid.id, imported->hid.rng_key[0],
+				imported->hid.rng_key[1],
+				imported->hid.rng_key[2]);
+
+			imported->importers--;
+
+			req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+
+			if (!req) {
+				mutex_unlock(&hy_drv_priv->lock);
+				return -ENOMEM;
+			}
+
+			hyper_dmabuf_create_req(req,
+						HYPER_DMABUF_EXPORT_FD_FAILED,
+						&op[0]);
+			bknd_ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req,
+							  false);
+			kfree(req);
+			mutex_unlock(&hy_drv_priv->lock);
+			return -EINVAL;
+		}
+
+		imported->sgt = hyper_dmabuf_create_sgt(data_pgs,
+							imported->frst_ofst,
+							imported->last_len,
+							imported->nents);
+
+	}
+
+	export_fd_attr->fd = hyper_dmabuf_export_fd(imported,
+						    export_fd_attr->flags);
+
+	if (export_fd_attr->fd < 0) {
+		/* fail to get fd */
+		ret = export_fd_attr->fd;
+	}
+
+	mutex_unlock(&hy_drv_priv->lock);
+
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
+	return ret;
+}
+
+/* unexport dmabuf from the database and send a request to the source
+ * domain to unmap it.
+ */
+static void delayed_unexport(struct work_struct *work)
+{
+	struct hyper_dmabuf_req *req;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
+	struct exported_sgt_info *exported =
+		container_of(work, struct exported_sgt_info, unexport.work);
+	int op[4];
+	int i, ret;
+
+	if (!exported)
+		return;
+
+	dev_dbg(hy_drv_priv->dev,
+		"Marking buffer {id:%d key:%d %d %d} as invalid\n",
+		exported->hid.id, exported->hid.rng_key[0],
+		exported->hid.rng_key[1], exported->hid.rng_key[2]);
+
+	/* no longer valid */
+	exported->valid = false;
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+
+	if (!req)
+		return;
+
+	op[0] = exported->hid.id;
+
+	for (i = 0; i < 3; i++)
+		op[i+1] = exported->hid.rng_key[i];
+
+	hyper_dmabuf_create_req(req, HYPER_DMABUF_NOTIFY_UNEXPORT, &op[0]);
+
+	/* Now send unexport request to remote domain, marking
+	 * that buffer should not be used anymore
+	 */
+	ret = bknd_ops->send_req(exported->rdomid, req, true);
+	if (ret < 0) {
+		dev_err(hy_drv_priv->dev,
+			"unexport message for buffer {id:%d key:%d %d %d} failed\n",
+			exported->hid.id, exported->hid.rng_key[0],
+			exported->hid.rng_key[1], exported->hid.rng_key[2]);
+	}
+
+	kfree(req);
+	exported->unexport_sched = false;
+
+	/* Immediately clean up if the buffer has never been re-exported
+	 * by the importer (so no SGT was constructed on the importer
+	 * side). Otherwise clean it up later in remote sync, when the
+	 * final release op is called (the importer does this only when
+	 * there is no consumer left for locally exported FDs).
+	 */
+	if (exported->active == 0) {
+		dev_dbg(hy_drv_priv->dev,
+			"cleaning up buffer {id:%d key:%d %d %d} completely\n",
+			exported->hid.id, exported->hid.rng_key[0],
+			exported->hid.rng_key[1], exported->hid.rng_key[2]);
+
+		hyper_dmabuf_cleanup_sgt_info(exported, false);
+		hyper_dmabuf_remove_exported(exported->hid);
+
+		/* register hyper_dmabuf_id to the list for reuse */
+		hyper_dmabuf_store_hid(exported->hid);
+
+		if (exported->sz_priv > 0)
+			kfree(exported->priv);
+
+		kfree(exported);
+	}
+}
+
+/* Schedule unexport of dmabuf.
+ */
+int hyper_dmabuf_unexport_ioctl(struct file *filp, void *data)
+{
+	struct ioctl_hyper_dmabuf_unexport *unexport_attr =
+			(struct ioctl_hyper_dmabuf_unexport *)data;
+	struct exported_sgt_info *exported;
+
+	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
+
+	/* find dmabuf in export list */
+	exported = hyper_dmabuf_find_exported(unexport_attr->hid);
+
+	dev_dbg(hy_drv_priv->dev,
+		"scheduling unexport of buffer {id:%d key:%d %d %d}\n",
+		unexport_attr->hid.id, unexport_attr->hid.rng_key[0],
+		unexport_attr->hid.rng_key[1], unexport_attr->hid.rng_key[2]);
+
+	/* failed to find corresponding entry in export list */
+	if (exported == NULL) {
+		unexport_attr->status = -ENOENT;
+		return -ENOENT;
+	}
+
+	if (exported->unexport_sched)
+		return 0;
+
+	exported->unexport_sched = true;
+	INIT_DELAYED_WORK(&exported->unexport, delayed_unexport);
+	schedule_delayed_work(&exported->unexport,
+			      msecs_to_jiffies(unexport_attr->delay_ms));
+
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
+	return 0;
+}
+
+static int hyper_dmabuf_query_ioctl(struct file *filp, void *data)
+{
+	struct ioctl_hyper_dmabuf_query *query_attr =
+			(struct ioctl_hyper_dmabuf_query *)data;
+	struct exported_sgt_info *exported = NULL;
+	struct imported_sgt_info *imported = NULL;
+	int ret = 0;
+
+	if (HYPER_DMABUF_DOM_ID(query_attr->hid) == hy_drv_priv->domid) {
+		/* query for exported dmabuf */
+		exported = hyper_dmabuf_find_exported(query_attr->hid);
+		if (exported) {
+			ret = hyper_dmabuf_query_exported(exported,
+							  query_attr->item,
+							  &query_attr->info);
+		} else {
+			dev_err(hy_drv_priv->dev,
+				"hid {id:%d key:%d %d %d} not in exp list\n",
+				query_attr->hid.id,
+				query_attr->hid.rng_key[0],
+				query_attr->hid.rng_key[1],
+				query_attr->hid.rng_key[2]);
+			return -ENOENT;
+		}
+	} else {
+		/* query for imported dmabuf */
+		imported = hyper_dmabuf_find_imported(query_attr->hid);
+		if (imported) {
+			ret = hyper_dmabuf_query_imported(imported,
+							  query_attr->item,
+							  &query_attr->info);
+		} else {
+			dev_err(hy_drv_priv->dev,
+				"hid {id:%d key:%d %d %d} not in imp list\n",
+				query_attr->hid.id,
+				query_attr->hid.rng_key[0],
+				query_attr->hid.rng_key[1],
+				query_attr->hid.rng_key[2]);
+			return -ENOENT;
+		}
+	}
+
+	return ret;
+}
+
+const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_TX_CH_SETUP,
+			       hyper_dmabuf_tx_ch_setup_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_RX_CH_SETUP,
+			       hyper_dmabuf_rx_ch_setup_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_REMOTE,
+			       hyper_dmabuf_export_remote_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_FD,
+			       hyper_dmabuf_export_fd_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_UNEXPORT,
+			       hyper_dmabuf_unexport_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_QUERY,
+			       hyper_dmabuf_query_ioctl, 0),
+};
+
+long hyper_dmabuf_ioctl(struct file *filp,
+			unsigned int cmd, unsigned long param)
+{
+	const struct hyper_dmabuf_ioctl_desc *ioctl = NULL;
+	unsigned int nr = _IOC_NR(cmd);
+	int ret;
+	hyper_dmabuf_ioctl_t func;
+	char *kdata;
+
+	if (nr >= ARRAY_SIZE(hyper_dmabuf_ioctls)) {
+		dev_err(hy_drv_priv->dev, "invalid ioctl\n");
+		return -EINVAL;
+	}
+
+	ioctl = &hyper_dmabuf_ioctls[nr];
+
+	func = ioctl->func;
+
+	if (unlikely(!func)) {
+		dev_err(hy_drv_priv->dev, "no function\n");
+		return -EINVAL;
+	}
+
+	kdata = kmalloc(_IOC_SIZE(cmd), GFP_KERNEL);
+	if (!kdata)
+		return -ENOMEM;
+
+	if (copy_from_user(kdata, (void __user *)param,
+			   _IOC_SIZE(cmd)) != 0) {
+		dev_err(hy_drv_priv->dev,
+			"failed to copy from user arguments\n");
+		ret = -EFAULT;
+		goto ioctl_error;
+	}
+
+	ret = func(filp, kdata);
+
+	if (copy_to_user((void __user *)param, kdata,
+			 _IOC_SIZE(cmd)) != 0) {
+		dev_err(hy_drv_priv->dev,
+			"failed to copy to user arguments\n");
+		ret = -EFAULT;
+		goto ioctl_error;
+	}
+
+ioctl_error:
+	kfree(kdata);
+
+	return ret;
+}
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.h
new file mode 100644
index 0000000..5991a87
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.h
@@ -0,0 +1,50 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_IOCTL_H__
+#define __HYPER_DMABUF_IOCTL_H__
+
+typedef int (*hyper_dmabuf_ioctl_t)(struct file *filp, void *data);
+
+struct hyper_dmabuf_ioctl_desc {
+	unsigned int cmd;
+	int flags;
+	hyper_dmabuf_ioctl_t func;
+	const char *name;
+};
+
+#define HYPER_DMABUF_IOCTL_DEF(ioctl, _func, _flags)	\
+	[_IOC_NR(ioctl)] = {				\
+			.cmd = ioctl,			\
+			.func = _func,			\
+			.flags = _flags,		\
+			.name = #ioctl			\
+	}
+
+long hyper_dmabuf_ioctl(struct file *filp,
+			unsigned int cmd, unsigned long param);
+
+int hyper_dmabuf_unexport_ioctl(struct file *filp, void *data);
+
+#endif //__HYPER_DMABUF_IOCTL_H__
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c
new file mode 100644
index 0000000..bba6d1d
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c
@@ -0,0 +1,293 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/cdev.h>
+#include <linux/hashtable.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_id.h"
+#include "hyper_dmabuf_event.h"
+
+DECLARE_HASHTABLE(hyper_dmabuf_hash_imported, MAX_ENTRY_IMPORTED);
+DECLARE_HASHTABLE(hyper_dmabuf_hash_exported, MAX_ENTRY_EXPORTED);
+
+#ifdef CONFIG_HYPER_DMABUF_SYSFS
+static ssize_t hyper_dmabuf_imported_show(struct device *drv,
+					  struct device_attribute *attr,
+					  char *buf)
+{
+	struct list_entry_imported *info_entry;
+	int bkt;
+	ssize_t count = 0;
+	size_t total = 0;
+
+	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node) {
+		hyper_dmabuf_id_t hid = info_entry->imported->hid;
+		int nents = info_entry->imported->nents;
+		bool valid = info_entry->imported->valid;
+		int num_importers = info_entry->imported->importers;
+
+		total += nents;
+		count += scnprintf(buf + count, PAGE_SIZE - count,
+				"hid:{%d %d %d %d}, nent:%d, v:%c, numi:%d\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1],
+				hid.rng_key[2], nents, (valid ? 't' : 'f'),
+				num_importers);
+	}
+	count += scnprintf(buf + count, PAGE_SIZE - count,
+			   "total nents: %lu\n", total);
+
+	return count;
+}
+
+static ssize_t hyper_dmabuf_exported_show(struct device *drv,
+					  struct device_attribute *attr,
+					  char *buf)
+{
+	struct list_entry_exported *info_entry;
+	int bkt;
+	ssize_t count = 0;
+	size_t total = 0;
+
+	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node) {
+		hyper_dmabuf_id_t hid = info_entry->exported->hid;
+		int nents = info_entry->exported->nents;
+		bool valid = info_entry->exported->valid;
+		int importer_exported = info_entry->exported->active;
+
+		total += nents;
+		count += scnprintf(buf + count, PAGE_SIZE - count,
+				   "hid:{%d %d %d %d}, nent:%d, v:%c, ie:%d\n",
+				   hid.id, hid.rng_key[0], hid.rng_key[1],
+				   hid.rng_key[2], nents, (valid ? 't' : 'f'),
+				   importer_exported);
+	}
+	count += scnprintf(buf + count, PAGE_SIZE - count,
+			   "total nents: %lu\n", total);
+
+	return count;
+}
+
+static DEVICE_ATTR(imported, 0400, hyper_dmabuf_imported_show, NULL);
+static DEVICE_ATTR(exported, 0400, hyper_dmabuf_exported_show, NULL);
+
+int hyper_dmabuf_register_sysfs(struct device *dev)
+{
+	int err;
+
+	err = device_create_file(dev, &dev_attr_imported);
+	if (err < 0)
+		goto err1;
+	err = device_create_file(dev, &dev_attr_exported);
+	if (err < 0)
+		goto err2;
+
+	return 0;
+err2:
+	device_remove_file(dev, &dev_attr_imported);
+err1:
+	return err;
+}
+
+int hyper_dmabuf_unregister_sysfs(struct device *dev)
+{
+	device_remove_file(dev, &dev_attr_imported);
+	device_remove_file(dev, &dev_attr_exported);
+	return 0;
+}
+
+#endif
+
+int hyper_dmabuf_table_init(void)
+{
+	hash_init(hyper_dmabuf_hash_imported);
+	hash_init(hyper_dmabuf_hash_exported);
+	return 0;
+}
+
+int hyper_dmabuf_table_destroy(void)
+{
+	/* TODO: cleanup hyper_dmabuf_hash_imported
+	 * and hyper_dmabuf_hash_exported
+	 */
+	return 0;
+}
+
+int hyper_dmabuf_register_exported(struct exported_sgt_info *exported)
+{
+	struct list_entry_exported *info_entry;
+
+	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
+
+	if (!info_entry)
+		return -ENOMEM;
+
+	info_entry->exported = exported;
+
+	hash_add(hyper_dmabuf_hash_exported, &info_entry->node,
+		 info_entry->exported->hid.id);
+
+	return 0;
+}
+
+int hyper_dmabuf_register_imported(struct imported_sgt_info *imported)
+{
+	struct list_entry_imported *info_entry;
+
+	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
+
+	if (!info_entry)
+		return -ENOMEM;
+
+	info_entry->imported = imported;
+
+	hash_add(hyper_dmabuf_hash_imported, &info_entry->node,
+		 info_entry->imported->hid.id);
+
+	return 0;
+}
+
+struct exported_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid)
+{
+	struct list_entry_exported *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
+		/* checking hid.id first */
+		if (info_entry->exported->hid.id == hid.id) {
+			/* then key is compared */
+			if (hyper_dmabuf_hid_keycomp(info_entry->exported->hid,
+						    hid))
+				return info_entry->exported;
+
+			/* if key is unmatched, given HID is invalid,
+			 * so returning NULL
+			 */
+			break;
+		}
+
+	return NULL;
+}
+
+/* search for a pre-exported sgt and return its hid if it exists */
+hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf,
+						 int domid)
+{
+	struct list_entry_exported *info_entry;
+	hyper_dmabuf_id_t hid = {-1, {0, 0, 0} };
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
+		if (info_entry->exported->dma_buf == dmabuf &&
+		    info_entry->exported->rdomid == domid)
+			return info_entry->exported->hid;
+
+	return hid;
+}
+
+struct imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_id_t hid)
+{
+	struct list_entry_imported *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
+		/* checking hid.id first */
+		if (info_entry->imported->hid.id == hid.id) {
+			/* then key is compared */
+			if (hyper_dmabuf_hid_keycomp(info_entry->imported->hid,
+						    hid))
+				return info_entry->imported;
+			/* if key is unmatched, given HID is invalid,
+			 * so returning NULL
+			 */
+			break;
+		}
+
+	return NULL;
+}
+
+int hyper_dmabuf_remove_exported(hyper_dmabuf_id_t hid)
+{
+	struct list_entry_exported *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
+		/* checking hid.id first */
+		if (info_entry->exported->hid.id == hid.id) {
+			/* then key is compared */
+			if (hyper_dmabuf_hid_keycomp(info_entry->exported->hid,
+						    hid)) {
+				hash_del(&info_entry->node);
+				kfree(info_entry);
+				return 0;
+			}
+
+			break;
+		}
+
+	return -ENOENT;
+}
+
+int hyper_dmabuf_remove_imported(hyper_dmabuf_id_t hid)
+{
+	struct list_entry_imported *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
+		/* checking hid.id first */
+		if (info_entry->imported->hid.id == hid.id) {
+			/* then key is compared */
+			if (hyper_dmabuf_hid_keycomp(info_entry->imported->hid,
+						    hid)) {
+				hash_del(&info_entry->node);
+				kfree(info_entry);
+				return 0;
+			}
+
+			break;
+		}
+
+	return -ENOENT;
+}
+
+void hyper_dmabuf_foreach_exported(
+	void (*func)(struct exported_sgt_info *, void *attr),
+	void *attr)
+{
+	struct list_entry_exported *info_entry;
+	struct hlist_node *tmp;
+	int bkt;
+
+	hash_for_each_safe(hyper_dmabuf_hash_exported, bkt, tmp,
+			info_entry, node) {
+		func(info_entry->exported, attr);
+	}
+}
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.h
new file mode 100644
index 0000000..f7102f5
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.h
@@ -0,0 +1,71 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_LIST_H__
+#define __HYPER_DMABUF_LIST_H__
+
+#include "hyper_dmabuf_struct.h"
+
+/* number of bits to be used for exported dmabufs hash table */
+#define MAX_ENTRY_EXPORTED 7
+/* number of bits to be used for imported dmabufs hash table */
+#define MAX_ENTRY_IMPORTED 7
+
+struct list_entry_exported {
+	struct exported_sgt_info *exported;
+	struct hlist_node node;
+};
+
+struct list_entry_imported {
+	struct imported_sgt_info *imported;
+	struct hlist_node node;
+};
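+
+/* Lookups are a two-step check: hash buckets are keyed by hid.id alone
+ * and hyper_dmabuf_hid_keycomp() (from hyper_dmabuf_id.h) then verifies
+ * the random key. A minimal sketch of what that comparison presumably
+ * looks like, assuming the 3-word rng_key in hyper_dmabuf_id_t:
+ *
+ *	static inline bool hyper_dmabuf_hid_keycomp(hyper_dmabuf_id_t a,
+ *						    hyper_dmabuf_id_t b)
+ *	{
+ *		return a.rng_key[0] == b.rng_key[0] &&
+ *		       a.rng_key[1] == b.rng_key[1] &&
+ *		       a.rng_key[2] == b.rng_key[2];
+ *	}
+ */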
+
+int hyper_dmabuf_table_init(void);
+
+int hyper_dmabuf_table_destroy(void);
+
+int hyper_dmabuf_register_exported(struct exported_sgt_info *info);
+
+/* search for a pre-exported sgt and return its hid if it exists */
+hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf,
+						 int domid);
+
+int hyper_dmabuf_register_imported(struct imported_sgt_info *info);
+
+struct exported_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid);
+
+struct imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_id_t hid);
+
+int hyper_dmabuf_remove_exported(hyper_dmabuf_id_t hid);
+
+int hyper_dmabuf_remove_imported(hyper_dmabuf_id_t hid);
+
+void hyper_dmabuf_foreach_exported(void (*func)(struct exported_sgt_info *,
+				   void *attr), void *attr);
+
+int hyper_dmabuf_register_sysfs(struct device *dev);
+int hyper_dmabuf_unregister_sysfs(struct device *dev);
+
+#endif /* __HYPER_DMABUF_LIST_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c
new file mode 100644
index 0000000..afc1fd6e
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -0,0 +1,414 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/workqueue.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_msg.h"
+#include "hyper_dmabuf_remote_sync.h"
+#include "hyper_dmabuf_event.h"
+#include "hyper_dmabuf_list.h"
+
+struct cmd_process {
+	struct work_struct work;
+	struct hyper_dmabuf_req *rq;
+	int domid;
+};
+
+void hyper_dmabuf_create_req(struct hyper_dmabuf_req *req,
+			     enum hyper_dmabuf_command cmd, int *op)
+{
+	int i;
+
+	req->stat = HYPER_DMABUF_REQ_NOT_RESPONDED;
+	req->cmd = cmd;
+
+	switch (cmd) {
+	/* as exporter, commands to importer */
+	case HYPER_DMABUF_EXPORT:
+		/* exporting pages for dmabuf */
+		/* command : HYPER_DMABUF_EXPORT,
+		 * op0~op3 : hyper_dmabuf_id
+		 * op4 : number of pages to be shared
+		 * op5 : offset of data in the first page
+		 * op6 : length of data in the last page
+		 * op7 : top-level reference number for shared pages
+		 * op8 : size of private data (from op9)
+		 * op9 ~ : Driver-specific private data
+		 *	   (e.g. graphic buffer's meta info)
+		 */
+
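+		/* a single copy covers the 9 fixed operands plus
+		 * op[8] bytes of private data starting at op9
+		 */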
+		memcpy(&req->op[0], &op[0], 9 * sizeof(int) + op[8]);
+		break;
+
+	case HYPER_DMABUF_NOTIFY_UNEXPORT:
+		/* destroy sg_list for hyper_dmabuf_id on remote side */
+		/* command : HYPER_DMABUF_NOTIFY_UNEXPORT,
+		 * op0~op3 : hyper_dmabuf_id_t hid
+		 */
+
+		for (i = 0; i < 4; i++)
+			req->op[i] = op[i];
+		break;
+
+	case HYPER_DMABUF_EXPORT_FD:
+	case HYPER_DMABUF_EXPORT_FD_FAILED:
+		/* dmabuf fd is being created on imported side or importing
+		 * failed
+		 *
+		 * command : HYPER_DMABUF_EXPORT_FD or
+		 *	     HYPER_DMABUF_EXPORT_FD_FAILED,
+		 * op0~op3 : hyper_dmabuf_id
+		 */
+
+		for (i = 0; i < 4; i++)
+			req->op[i] = op[i];
+		break;
+
+	case HYPER_DMABUF_OPS_TO_REMOTE:
+		/* notifying dmabuf map/unmap to importer (probably not needed)
+		 * for dmabuf synchronization
+		 */
+		break;
+
+	case HYPER_DMABUF_OPS_TO_SOURCE:
+		/* notifying dmabuf map/unmap to exporter, map will make
+		 * the driver do shadow mapping or unmapping for
+		 * synchronization with the original exporter (e.g. i915)
+		 *
+		 * command : HYPER_DMABUF_OPS_TO_SOURCE,
+		 * op0~3 : hyper_dmabuf_id
+		 * op4 : enum hyper_dmabuf_ops
+		 */
+		for (i = 0; i < 5; i++)
+			req->op[i] = op[i];
+		break;
+
+	default:
+		/* no command found */
+		return;
+	}
+}
+
+static void cmd_process_work(struct work_struct *work)
+{
+	struct imported_sgt_info *imported;
+	struct cmd_process *proc = container_of(work,
+						struct cmd_process, work);
+	struct hyper_dmabuf_req *req;
+	int domid;
+	int i;
+
+	req = proc->rq;
+	domid = proc->domid;
+
+	switch (req->cmd) {
+	case HYPER_DMABUF_EXPORT:
+		/* exporting pages for dmabuf */
+		/* command : HYPER_DMABUF_EXPORT,
+		 * op0~op3 : hyper_dmabuf_id
+		 * op4 : number of pages to be shared
+		 * op5 : offset of data in the first page
+		 * op6 : length of data in the last page
+		 * op7 : top-level reference number for shared pages
+		 * op8 : size of private data (from op9)
+		 * op9 ~ : Driver-specific private data
+		 *         (e.g. graphic buffer's meta info)
+		 */
+
+		/* if nents == 0, this is a message only for synchronizing
+		 * priv data of an existing imported_sgt_info, so do not
+		 * create a new one
+		 */
+		if (req->op[4] == 0) {
+			hyper_dmabuf_id_t exist = {req->op[0],
+						   {req->op[1], req->op[2],
+						   req->op[3] } };
+
+			imported = hyper_dmabuf_find_imported(exist);
+
+			if (!imported) {
+				dev_err(hy_drv_priv->dev,
+					"Can't find imported sgt_info\n");
+				break;
+			}
+
+			/* if size of new private data is different,
+			 * we reallocate it.
+			 */
+			if (imported->sz_priv != req->op[8]) {
+				kfree(imported->priv);
+				imported->sz_priv = req->op[8];
+				imported->priv = kcalloc(1, req->op[8],
+							 GFP_KERNEL);
+				if (!imported->priv) {
+					/* set it invalid */
+					imported->valid = 0;
+					break;
+				}
+			}
+
+			/* updating priv data */
+			memcpy(imported->priv, &req->op[9], req->op[8]);
+
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
+			/* generating import event */
+			hyper_dmabuf_import_event(imported->hid);
+#endif
+
+			break;
+		}
+
+		imported = kcalloc(1, sizeof(*imported), GFP_KERNEL);
+
+		if (!imported)
+			break;
+
+		imported->sz_priv = req->op[8];
+		imported->priv = kcalloc(1, req->op[8], GFP_KERNEL);
+
+		if (!imported->priv) {
+			kfree(imported);
+			break;
+		}
+
+		imported->hid.id = req->op[0];
+
+		for (i = 0; i < 3; i++)
+			imported->hid.rng_key[i] = req->op[i+1];
+
+		imported->nents = req->op[4];
+		imported->frst_ofst = req->op[5];
+		imported->last_len = req->op[6];
+		imported->ref_handle = req->op[7];
+
+		dev_dbg(hy_drv_priv->dev, "DMABUF was exported\n");
+		dev_dbg(hy_drv_priv->dev, "\thid{id:%d key:%d %d %d}\n",
+			req->op[0], req->op[1], req->op[2],
+			req->op[3]);
+		dev_dbg(hy_drv_priv->dev, "\tnents %d\n", req->op[4]);
+		dev_dbg(hy_drv_priv->dev, "\tfirst offset %d\n", req->op[5]);
+		dev_dbg(hy_drv_priv->dev, "\tlast len %d\n", req->op[6]);
+		dev_dbg(hy_drv_priv->dev, "\tgrefid %d\n", req->op[7]);
+
+		memcpy(imported->priv, &req->op[9], req->op[8]);
+
+		imported->valid = true;
+		hyper_dmabuf_register_imported(imported);
+
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
+		/* generating import event */
+		hyper_dmabuf_import_event(imported->hid);
+#endif
+
+		break;
+
+	case HYPER_DMABUF_OPS_TO_REMOTE:
+		/* notifying dmabuf map/unmap to importer
+		 * (probably not needed) for dmabuf synchronization
+		 */
+		break;
+
+	default:
+		/* shouldn't get here */
+		break;
+	}
+
+	kfree(req);
+	kfree(proc);
+}
+
+int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
+{
+	struct cmd_process *proc;
+	struct hyper_dmabuf_req *temp_req;
+	struct imported_sgt_info *imported;
+	struct exported_sgt_info *exported;
+	hyper_dmabuf_id_t hid;
+	int ret;
+
+	if (!req) {
+		dev_err(hy_drv_priv->dev, "request is NULL\n");
+		return -EINVAL;
+	}
+
+	hid.id = req->op[0];
+	hid.rng_key[0] = req->op[1];
+	hid.rng_key[1] = req->op[2];
+	hid.rng_key[2] = req->op[3];
+
+	if ((req->cmd < HYPER_DMABUF_EXPORT) ||
+	    (req->cmd > HYPER_DMABUF_OPS_TO_SOURCE)) {
+		dev_err(hy_drv_priv->dev, "invalid command\n");
+		return -EINVAL;
+	}
+
+	req->stat = HYPER_DMABUF_REQ_PROCESSED;
+
+	/* HYPER_DMABUF_NOTIFY_UNEXPORT requires an immediate
+	 * follow-up, so it can't be processed in the workqueue
+	 */
+	if (req->cmd == HYPER_DMABUF_NOTIFY_UNEXPORT) {
+		/* destroy sg_list for hyper_dmabuf_id on remote side */
+		/* command : HYPER_DMABUF_NOTIFY_UNEXPORT,
+		 * op0~3 : hyper_dmabuf_id
+		 */
+		dev_dbg(hy_drv_priv->dev,
+			"processing HYPER_DMABUF_NOTIFY_UNEXPORT\n");
+
+		imported = hyper_dmabuf_find_imported(hid);
+
+		if (imported) {
+			/* if anything is still using dma_buf */
+			if (imported->importers) {
+			/* Buffer is still in use, just mark that
+				 * it should not be allowed to export its fd
+				 * anymore.
+				 */
+				imported->valid = false;
+			} else {
+				/* No one is using buffer, remove it from
+				 * imported list
+				 */
+				hyper_dmabuf_remove_imported(hid);
+				kfree(imported);
+			}
+		} else {
+			req->stat = HYPER_DMABUF_REQ_ERROR;
+		}
+
+		return req->cmd;
+	}
+
+	/* dma buf remote synchronization */
+	if (req->cmd == HYPER_DMABUF_OPS_TO_SOURCE) {
+		/* notifying dmabuf map/unmap to exporter, map will
+		 * make the driver do shadow mapping
+		 * or unmapping for synchronization with the original
+		 * exporter (e.g. i915)
+		 *
+		 * command : HYPER_DMABUF_OPS_TO_SOURCE,
+		 * op0~3 : hyper_dmabuf_id
+		 * op4 : enum hyper_dmabuf_ops {....}
+		 */
+		dev_dbg(hy_drv_priv->dev,
+			"%s: HYPER_DMABUF_OPS_TO_SOURCE\n", __func__);
+
+		ret = hyper_dmabuf_remote_sync(hid, req->op[4]);
+
+		if (ret)
+			req->stat = HYPER_DMABUF_REQ_ERROR;
+		else
+			req->stat = HYPER_DMABUF_REQ_PROCESSED;
+
+		return req->cmd;
+	}
+
+	/* synchronous dma_buf_fd export */
+	if (req->cmd == HYPER_DMABUF_EXPORT_FD) {
+		/* find a corresponding SGT for the id */
+		dev_dbg(hy_drv_priv->dev,
+			"HYPER_DMABUF_EXPORT_FD for {id:%d key:%d %d %d}\n",
+			hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
+
+		exported = hyper_dmabuf_find_exported(hid);
+
+		if (!exported) {
+			dev_err(hy_drv_priv->dev,
+				"buffer {id:%d key:%d %d %d} not found\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1],
+				hid.rng_key[2]);
+
+			req->stat = HYPER_DMABUF_REQ_ERROR;
+		} else if (!exported->valid) {
+			dev_dbg(hy_drv_priv->dev,
+				"Buffer no longer valid {id:%d key:%d %d %d}\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1],
+				hid.rng_key[2]);
+
+			req->stat = HYPER_DMABUF_REQ_ERROR;
+		} else {
+			dev_dbg(hy_drv_priv->dev,
+				"Buffer still valid {id:%d key:%d %d %d}\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1],
+				hid.rng_key[2]);
+
+			exported->active++;
+			req->stat = HYPER_DMABUF_REQ_PROCESSED;
+		}
+		return req->cmd;
+	}
+
+	if (req->cmd == HYPER_DMABUF_EXPORT_FD_FAILED) {
+		dev_dbg(hy_drv_priv->dev,
+			"HYPER_DMABUF_EXPORT_FD_FAILED for {id:%d key:%d %d %d}\n",
+			hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
+
+		exported = hyper_dmabuf_find_exported(hid);
+
+		if (!exported) {
+			dev_err(hy_drv_priv->dev,
+				"buffer {id:%d key:%d %d %d} not found\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1],
+				hid.rng_key[2]);
+
+			req->stat = HYPER_DMABUF_REQ_ERROR;
+		} else {
+			exported->active--;
+			req->stat = HYPER_DMABUF_REQ_PROCESSED;
+		}
+		return req->cmd;
+	}
+
+	dev_dbg(hy_drv_priv->dev,
+		"%s: putting request to workqueue\n", __func__);
+	temp_req = kmalloc(sizeof(*temp_req), GFP_KERNEL);
+
+	if (!temp_req)
+		return -ENOMEM;
+
+	memcpy(temp_req, req, sizeof(*temp_req));
+
+	proc = kcalloc(1, sizeof(struct cmd_process), GFP_KERNEL);
+
+	if (!proc) {
+		kfree(temp_req);
+		return -ENOMEM;
+	}
+
+	proc->rq = temp_req;
+	proc->domid = domid;
+
+	INIT_WORK(&(proc->work), cmd_process_work);
+
+	queue_work(hy_drv_priv->work_queue, &(proc->work));
+
+	return req->cmd;
+}
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h
new file mode 100644
index 0000000..9c8a76b
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -0,0 +1,87 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_MSG_H__
+#define __HYPER_DMABUF_MSG_H__
+
+#define MAX_NUMBER_OF_OPERANDS 64
+
+struct hyper_dmabuf_req {
+	unsigned int req_id;
+	unsigned int stat;
+	unsigned int cmd;
+	unsigned int op[MAX_NUMBER_OF_OPERANDS];
+};
+
+struct hyper_dmabuf_resp {
+	unsigned int resp_id;
+	unsigned int stat;
+	unsigned int cmd;
+	unsigned int op[MAX_NUMBER_OF_OPERANDS];
+};
+
+enum hyper_dmabuf_command {
+	HYPER_DMABUF_EXPORT = 0x10,
+	HYPER_DMABUF_EXPORT_FD,
+	HYPER_DMABUF_EXPORT_FD_FAILED,
+	HYPER_DMABUF_NOTIFY_UNEXPORT,
+	HYPER_DMABUF_OPS_TO_REMOTE,
+	HYPER_DMABUF_OPS_TO_SOURCE,
+};
+
+enum hyper_dmabuf_ops {
+	HYPER_DMABUF_OPS_ATTACH = 0x1000,
+	HYPER_DMABUF_OPS_DETACH,
+	HYPER_DMABUF_OPS_MAP,
+	HYPER_DMABUF_OPS_UNMAP,
+	HYPER_DMABUF_OPS_RELEASE,
+	HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS,
+	HYPER_DMABUF_OPS_END_CPU_ACCESS,
+	HYPER_DMABUF_OPS_KMAP_ATOMIC,
+	HYPER_DMABUF_OPS_KUNMAP_ATOMIC,
+	HYPER_DMABUF_OPS_KMAP,
+	HYPER_DMABUF_OPS_KUNMAP,
+	HYPER_DMABUF_OPS_MMAP,
+	HYPER_DMABUF_OPS_VMAP,
+	HYPER_DMABUF_OPS_VUNMAP,
+};
+
+enum hyper_dmabuf_req_feedback {
+	HYPER_DMABUF_REQ_PROCESSED = 0x100,
+	HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP,
+	HYPER_DMABUF_REQ_ERROR,
+	HYPER_DMABUF_REQ_NOT_RESPONDED
+};
+
+/* create a request packet with given command and operands */
+void hyper_dmabuf_create_req(struct hyper_dmabuf_req *req,
+				 enum hyper_dmabuf_command command,
+				 int *operands);
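+
+/* A sketch of how a caller packs operands, here for
+ * HYPER_DMABUF_OPS_TO_SOURCE (layout as documented in
+ * hyper_dmabuf_create_req(); the op values are illustrative only):
+ *
+ *	int op[5] = { hid.id, hid.rng_key[0], hid.rng_key[1],
+ *		      hid.rng_key[2], HYPER_DMABUF_OPS_MAP };
+ *	hyper_dmabuf_create_req(req, HYPER_DMABUF_OPS_TO_SOURCE, op);
+ */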
+
+/* parse an incoming request packet (or response) and take
+ * the appropriate action for it
+ */
+int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req);
+
+#endif // __HYPER_DMABUF_MSG_H__
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c
new file mode 100644
index 0000000..e85f619
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c
@@ -0,0 +1,413 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/dma-buf.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_ops.h"
+#include "hyper_dmabuf_sgl_proc.h"
+#include "hyper_dmabuf_id.h"
+#include "hyper_dmabuf_msg.h"
+#include "hyper_dmabuf_list.h"
+
+#define WAIT_AFTER_SYNC_REQ 0
+#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
+
+static int dmabuf_refcount(struct dma_buf *dma_buf)
+{
+	if ((dma_buf != NULL) && (dma_buf->file != NULL))
+		return file_count(dma_buf->file);
+
+	return -EINVAL;
+}
+
+static int sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
+{
+	struct hyper_dmabuf_req *req;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
+	int op[5];
+	int i;
+	int ret;
+
+	op[0] = hid.id;
+
+	for (i = 0; i < 3; i++)
+		op[i+1] = hid.rng_key[i];
+
+	op[4] = dmabuf_ops;
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+
+	if (!req)
+		return -ENOMEM;
+
+	hyper_dmabuf_create_req(req, HYPER_DMABUF_OPS_TO_SOURCE, &op[0]);
+
+	/* send request and wait for a response */
+	ret = bknd_ops->send_req(HYPER_DMABUF_DOM_ID(hid), req,
+				 WAIT_AFTER_SYNC_REQ);
+
+	if (ret < 0) {
+		dev_dbg(hy_drv_priv->dev,
+			"dmabuf sync request failed:%d\n", req->op[4]);
+	}
+
+	kfree(req);
+
+	return ret;
+}
+
+static int hyper_dmabuf_ops_attach(struct dma_buf *dmabuf,
+				   struct device *dev,
+				   struct dma_buf_attachment *attach)
+{
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!attach->dmabuf->priv)
+		return -EINVAL;
+
+	imported = (struct imported_sgt_info *)attach->dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_ATTACH);
+
+	return ret;
+}
+
+static void hyper_dmabuf_ops_detach(struct dma_buf *dmabuf,
+				    struct dma_buf_attachment *attach)
+{
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!attach->dmabuf->priv)
+		return;
+
+	imported = (struct imported_sgt_info *)attach->dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_DETACH);
+}
+
+static struct sg_table *hyper_dmabuf_ops_map(
+				struct dma_buf_attachment *attachment,
+				enum dma_data_direction dir)
+{
+	struct sg_table *st;
+	struct imported_sgt_info *imported;
+	struct pages_info *pg_info;
+	int ret;
+
+	if (!attachment->dmabuf->priv)
+		return NULL;
+
+	imported = (struct imported_sgt_info *)attachment->dmabuf->priv;
+
+	/* extract pages from sgt */
+	pg_info = hyper_dmabuf_ext_pgs(imported->sgt);
+
+	if (!pg_info)
+		return NULL;
+
+	/* create a new sg_table with extracted pages */
+	st = hyper_dmabuf_create_sgt(pg_info->pgs, pg_info->frst_ofst,
+				     pg_info->last_len, pg_info->nents);
+	if (!st)
+		goto err_free_sg;
+
+	if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir))
+		goto err_free_sg;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_MAP);
+
+	kfree(pg_info->pgs);
+	kfree(pg_info);
+
+	return st;
+
+err_free_sg:
+	if (st) {
+		sg_free_table(st);
+		kfree(st);
+	}
+
+	kfree(pg_info->pgs);
+	kfree(pg_info);
+
+	return NULL;
+}
+
+static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
+				   struct sg_table *sg,
+				   enum dma_data_direction dir)
+{
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!attachment->dmabuf->priv)
+		return;
+
+	imported = (struct imported_sgt_info *)attachment->dmabuf->priv;
+
+	dma_unmap_sg(attachment->dev, sg->sgl, sg->nents, dir);
+
+	sg_free_table(sg);
+	kfree(sg);
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_UNMAP);
+}
+
+static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
+{
+	struct imported_sgt_info *imported;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
+	int ret;
+	int finish;
+
+	if (!dma_buf->priv)
+		return;
+
+	imported = (struct imported_sgt_info *)dma_buf->priv;
+
+	if (!dmabuf_refcount(imported->dma_buf))
+		imported->dma_buf = NULL;
+
+	imported->importers--;
+
+	if (imported->importers == 0) {
+		bknd_ops->unmap_shared_pages(&imported->refs_info,
+					     imported->nents);
+
+		if (imported->sgt) {
+			sg_free_table(imported->sgt);
+			kfree(imported->sgt);
+			imported->sgt = NULL;
+		}
+	}
+
+	finish = imported && !imported->valid &&
+		 !imported->importers;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_RELEASE);
+
+	/*
+	 * Check if buffer is still valid and if not remove it
+	 * from imported list. That has to be done after sending
+	 * sync request
+	 */
+	if (finish) {
+		hyper_dmabuf_remove_imported(imported->hid);
+		kfree(imported);
+	}
+}
+
+static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf,
+					     enum dma_data_direction dir)
+{
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!dmabuf->priv)
+		return -EINVAL;
+
+	imported = (struct imported_sgt_info *)dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
+
+	return ret;
+}
+
+static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf,
+					   enum dma_data_direction dir)
+{
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!dmabuf->priv)
+		return -EINVAL;
+
+	imported = (struct imported_sgt_info *)dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_END_CPU_ACCESS);
+
+	return ret;
+}
+
+static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf,
+					  unsigned long pgnum)
+{
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!dmabuf->priv)
+		return NULL;
+
+	imported = (struct imported_sgt_info *)dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KMAP_ATOMIC);
+
+	/* TODO: NULL for now. Need to return the addr of mapped region */
+	return NULL;
+}
+
+static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf,
+					   unsigned long pgnum, void *vaddr)
+{
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!dmabuf->priv)
+		return;
+
+	imported = (struct imported_sgt_info *)dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
+}
+
+static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
+{
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!dmabuf->priv)
+		return NULL;
+
+	imported = (struct imported_sgt_info *)dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KMAP);
+
+	/* TODO: NULL for now. Need to return the address of the mapped region */
+	return NULL;
+}
+
+static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum,
+				    void *vaddr)
+{
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!dmabuf->priv)
+		return;
+
+	imported = (struct imported_sgt_info *)dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KUNMAP);
+}
+
+static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf,
+				 struct vm_area_struct *vma)
+{
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!dmabuf->priv)
+		return -EINVAL;
+
+	imported = (struct imported_sgt_info *)dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_MMAP);
+
+	return ret;
+}
+
+static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
+{
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!dmabuf->priv)
+		return NULL;
+
+	imported = (struct imported_sgt_info *)dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_VMAP);
+
+	return NULL;
+}
+
+static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
+{
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!dmabuf->priv)
+		return;
+
+	imported = (struct imported_sgt_info *)dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_VUNMAP);
+}
+
+static const struct dma_buf_ops hyper_dmabuf_ops = {
+	.attach = hyper_dmabuf_ops_attach,
+	.detach = hyper_dmabuf_ops_detach,
+	.map_dma_buf = hyper_dmabuf_ops_map,
+	.unmap_dma_buf = hyper_dmabuf_ops_unmap,
+	.release = hyper_dmabuf_ops_release,
+	.begin_cpu_access = hyper_dmabuf_ops_begin_cpu_access,
+	.end_cpu_access = hyper_dmabuf_ops_end_cpu_access,
+	.map_atomic = hyper_dmabuf_ops_kmap_atomic,
+	.unmap_atomic = hyper_dmabuf_ops_kunmap_atomic,
+	.map = hyper_dmabuf_ops_kmap,
+	.unmap = hyper_dmabuf_ops_kunmap,
+	.mmap = hyper_dmabuf_ops_mmap,
+	.vmap = hyper_dmabuf_ops_vmap,
+	.vunmap = hyper_dmabuf_ops_vunmap,
+};
+
+/* exporting dmabuf as fd */
+int hyper_dmabuf_export_fd(struct imported_sgt_info *imported, int flags)
+{
+	int fd = -1;
+
+	/* call hyper_dmabuf_export_dmabuf and create
+	 * and bind a handle for it then release
+	 */
+	hyper_dmabuf_export_dma_buf(imported);
+
+	if (imported->dma_buf)
+		fd = dma_buf_fd(imported->dma_buf, flags);
+
+	return fd;
+}
+
+void hyper_dmabuf_export_dma_buf(struct imported_sgt_info *imported)
+{
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+
+	exp_info.ops = &hyper_dmabuf_ops;
+
+	/* multiple of PAGE_SIZE, not considering offset */
+	exp_info.size = imported->sgt->nents * PAGE_SIZE;
+	exp_info.flags = /* not sure about flag */ 0;
+	exp_info.priv = imported;
+
+	imported->dma_buf = dma_buf_export(&exp_info);
+}
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.h
new file mode 100644
index 0000000..c5505a4
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.h
@@ -0,0 +1,32 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_OPS_H__
+#define __HYPER_DMABUF_OPS_H__
+
+int hyper_dmabuf_export_fd(struct imported_sgt_info *imported, int flags);
+
+void hyper_dmabuf_export_dma_buf(struct imported_sgt_info *imported);
+
+#endif /* __HYPER_DMABUF_OPS_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.c
new file mode 100644
index 0000000..1f2f56b
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.c
@@ -0,0 +1,172 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/dma-buf.h>
+#include <linux/uaccess.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_id.h"
+
+#define HYPER_DMABUF_SIZE(nents, first_offset, last_len) \
+	((nents)*PAGE_SIZE - (first_offset) - PAGE_SIZE + (last_len))
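+
+/* For example (assuming 4 KiB pages): nents = 3, first_offset = 0x800
+ * and last_len = 0x400 give 0x3000 - 0x800 - 0x1000 + 0x400 = 0x1c00,
+ * i.e. (PAGE_SIZE - first_offset) + one full middle page + last_len.
+ */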
+
+int hyper_dmabuf_query_exported(struct exported_sgt_info *exported,
+				int query, unsigned long *info)
+{
+	switch (query) {
+	case HYPER_DMABUF_QUERY_TYPE:
+		*info = EXPORTED;
+		break;
+
+	/* exporting domain of this specific dmabuf */
+	case HYPER_DMABUF_QUERY_EXPORTER:
+		*info = HYPER_DMABUF_DOM_ID(exported->hid);
+		break;
+
+	/* importing domain of this specific dmabuf */
+	case HYPER_DMABUF_QUERY_IMPORTER:
+		*info = exported->rdomid;
+		break;
+
+	/* size of dmabuf in bytes */
+	case HYPER_DMABUF_QUERY_SIZE:
+		*info = exported->dma_buf->size;
+		break;
+
+	/* whether the buffer is used by importer */
+	case HYPER_DMABUF_QUERY_BUSY:
+		*info = (exported->active > 0);
+		break;
+
+	/* whether the buffer is unexported */
+	case HYPER_DMABUF_QUERY_UNEXPORTED:
+		*info = !exported->valid;
+		break;
+
+	/* whether the buffer is scheduled to be unexported */
+	case HYPER_DMABUF_QUERY_DELAYED_UNEXPORTED:
+		*info = exported->unexport_sched;
+		break;
+
+	/* size of private info attached to buffer */
+	case HYPER_DMABUF_QUERY_PRIV_INFO_SIZE:
+		*info = exported->sz_priv;
+		break;
+
+	/* copy private info attached to buffer */
+	case HYPER_DMABUF_QUERY_PRIV_INFO:
+		if (exported->sz_priv > 0) {
+			int n;
+
+			n = copy_to_user((void __user *) *info,
+					exported->priv,
+					exported->sz_priv);
+			if (n != 0)
+				return -EINVAL;
+		}
+		break;
+
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
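+
+/* Usage sketch (hypothetical userspace caller; names taken from this
+ * series' uapi header): for HYPER_DMABUF_QUERY_PRIV_INFO the info field
+ * carries a user-space pointer in, rather than a value out, so the
+ * caller must first learn the size and point info at a large enough
+ * buffer:
+ *
+ *	struct ioctl_hyper_dmabuf_query q = { .hid = hid };
+ *	q.item = HYPER_DMABUF_QUERY_PRIV_INFO_SIZE;
+ *	ioctl(fd, IOCTL_HYPER_DMABUF_QUERY, &q);    // q.info = priv size
+ *	q.item = HYPER_DMABUF_QUERY_PRIV_INFO;
+ *	q.info = (unsigned long)buf;                // buf >= that size
+ *	ioctl(fd, IOCTL_HYPER_DMABUF_QUERY, &q);    // priv copied to buf
+ */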
+
+
+int hyper_dmabuf_query_imported(struct imported_sgt_info *imported,
+				int query, unsigned long *info)
+{
+	switch (query) {
+	case HYPER_DMABUF_QUERY_TYPE:
+		*info = IMPORTED;
+		break;
+
+	/* exporting domain of this specific dmabuf */
+	case HYPER_DMABUF_QUERY_EXPORTER:
+		*info = HYPER_DMABUF_DOM_ID(imported->hid);
+		break;
+
+	/* importing domain of this specific dmabuf */
+	case HYPER_DMABUF_QUERY_IMPORTER:
+		*info = hy_drv_priv->domid;
+		break;
+
+	/* size of dmabuf in bytes */
+	case HYPER_DMABUF_QUERY_SIZE:
+		if (imported->dma_buf) {
+			/* if local dma_buf is created (if it's
+			 * ever mapped), retrieve it directly
+			 * from struct dma_buf *
+			 */
+			*info = imported->dma_buf->size;
+		} else {
+			/* calculate it from the given nents, frst_ofst
+			 * and last_len
+			 */
+			*info = HYPER_DMABUF_SIZE(imported->nents,
+						  imported->frst_ofst,
+						  imported->last_len);
+		}
+		break;
+
+	/* whether the buffer is used or not */
+	case HYPER_DMABUF_QUERY_BUSY:
+		/* checks if it's used by importer */
+		*info = (imported->importers > 0);
+		break;
+
+	/* whether the buffer is unexported */
+	case HYPER_DMABUF_QUERY_UNEXPORTED:
+		*info = !imported->valid;
+		break;
+
+	/* size of private info attached to buffer */
+	case HYPER_DMABUF_QUERY_PRIV_INFO_SIZE:
+		*info = imported->sz_priv;
+		break;
+
+	/* copy private info attached to buffer */
+	case HYPER_DMABUF_QUERY_PRIV_INFO:
+		if (imported->sz_priv > 0) {
+			int n;
+
+			n = copy_to_user((void __user *)*info,
+					imported->priv,
+					imported->sz_priv);
+			if (n != 0)
+				return -EINVAL;
+		}
+		break;
+
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.h
new file mode 100644
index 0000000..65ae738
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.h
@@ -0,0 +1,10 @@
+#ifndef __HYPER_DMABUF_QUERY_H__
+#define __HYPER_DMABUF_QUERY_H__
+
+int hyper_dmabuf_query_imported(struct imported_sgt_info *imported,
+				int query, unsigned long *info);
+
+int hyper_dmabuf_query_exported(struct exported_sgt_info *exported,
+				int query, unsigned long *info);
+
+#endif // __HYPER_DMABUF_QUERY_H__
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.c
new file mode 100644
index 0000000..a82fd7b
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -0,0 +1,322 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/dma-buf.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_msg.h"
+#include "hyper_dmabuf_id.h"
+#include "hyper_dmabuf_sgl_proc.h"
+
+/* Whenever the importer performs a DMA operation in the remote
+ * domain, a notification is sent to the exporter so that the
+ * exporter issues the equivalent DMA operation on the original
+ * dma-buf, for indirect synchronization via shadow operations.
+ *
+ * All pointers and references (e.g. struct sg_table *,
+ * struct dma_buf_attachment) created by these operations on the
+ * exporter's side are kept in per-type stacks (implemented as
+ * circular linked lists) so that they can be re-referenced later,
+ * when unmapping operations are invoked to free them.
+ *
+ * The very first element at the bottom of each stack is the one
+ * created when the initial export was issued, so it must not be
+ * modified or released by this function.
+ */
+int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
+{
+	struct exported_sgt_info *exported;
+	struct sgt_list *sgtl;
+	struct attachment_list *attachl;
+	struct kmap_vaddr_list *va_kmapl;
+	struct vmap_vaddr_list *va_vmapl;
+	int ret;
+
+	/* find a corresponding SGT for the id */
+	exported = hyper_dmabuf_find_exported(hid);
+
+	if (!exported) {
+		dev_err(hy_drv_priv->dev,
+			"dmabuf remote sync::can't find exported list\n");
+		return -ENOENT;
+	}
+
+	switch (ops) {
+	case HYPER_DMABUF_OPS_ATTACH:
+		attachl = kcalloc(1, sizeof(*attachl), GFP_KERNEL);
+
+		if (!attachl)
+			return -ENOMEM;
+
+		attachl->attach = dma_buf_attach(exported->dma_buf,
+						 hy_drv_priv->dev);
+
+		if (!attachl->attach) {
+			kfree(attachl);
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_ATTACH\n");
+			return -ENOMEM;
+		}
+
+		list_add(&attachl->list, &exported->active_attached->list);
+		break;
+
+	case HYPER_DMABUF_OPS_DETACH:
+		if (list_empty(&exported->active_attached->list)) {
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_DETACH\n");
+			dev_err(hy_drv_priv->dev,
+				"no more dmabuf attachment left to be detached\n");
+			return -EFAULT;
+		}
+
+		attachl = list_first_entry(&exported->active_attached->list,
+					   struct attachment_list, list);
+
+		dma_buf_detach(exported->dma_buf, attachl->attach);
+		list_del(&attachl->list);
+		kfree(attachl);
+		break;
+
+	case HYPER_DMABUF_OPS_MAP:
+		if (list_empty(&exported->active_attached->list)) {
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_MAP\n");
+			dev_err(hy_drv_priv->dev,
+				"no more dmabuf attachment left to be mapped\n");
+			return -EFAULT;
+		}
+
+		attachl = list_first_entry(&exported->active_attached->list,
+					   struct attachment_list, list);
+
+		sgtl = kcalloc(1, sizeof(*sgtl), GFP_KERNEL);
+
+		if (!sgtl)
+			return -ENOMEM;
+
+		sgtl->sgt = dma_buf_map_attachment(attachl->attach,
+						   DMA_BIDIRECTIONAL);
+		if (!sgtl->sgt) {
+			kfree(sgtl);
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_MAP\n");
+			return -ENOMEM;
+		}
+		list_add(&sgtl->list, &exported->active_sgts->list);
+		break;
+
+	case HYPER_DMABUF_OPS_UNMAP:
+		if (list_empty(&exported->active_sgts->list) ||
+		    list_empty(&exported->active_attached->list)) {
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_UNMAP\n");
+			dev_err(hy_drv_priv->dev,
+				"no SGT or attach left to be unmapped\n");
+			return -EFAULT;
+		}
+
+		attachl = list_first_entry(&exported->active_attached->list,
+					   struct attachment_list, list);
+		sgtl = list_first_entry(&exported->active_sgts->list,
+					struct sgt_list, list);
+
+		dma_buf_unmap_attachment(attachl->attach, sgtl->sgt,
+					 DMA_BIDIRECTIONAL);
+		list_del(&sgtl->list);
+		kfree(sgtl);
+		break;
+
+	case HYPER_DMABUF_OPS_RELEASE:
+		dev_dbg(hy_drv_priv->dev,
+			"id:%d key:%d %d %d} released, ref left: %d\n",
+			 exported->hid.id, exported->hid.rng_key[0],
+			 exported->hid.rng_key[1], exported->hid.rng_key[2],
+			 exported->active - 1);
+
+		exported->active--;
+
+		/* If there are still importers just break, if no then
+		 * continue with final cleanup
+		 */
+		if (exported->active)
+			break;
+
+		/* Importer just released buffer fd, check if there is
+		 * any other importer still using it.
+		 * If not and buffer was unexported, clean up shared
+		 * data and remove that buffer.
+		 */
+		dev_dbg(hy_drv_priv->dev,
+			"Buffer {id:%d key:%d %d %d} final released\n",
+			exported->hid.id, exported->hid.rng_key[0],
+			exported->hid.rng_key[1], exported->hid.rng_key[2]);
+
+		if (!exported->valid && !exported->active &&
+		    !exported->unexport_sched) {
+			hyper_dmabuf_cleanup_sgt_info(exported, false);
+			hyper_dmabuf_remove_exported(hid);
+			kfree(exported);
+			/* store hyper_dmabuf_id in the list for reuse */
+			hyper_dmabuf_store_hid(hid);
+		}
+
+		break;
+
+	case HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS:
+		ret = dma_buf_begin_cpu_access(exported->dma_buf,
+					       DMA_BIDIRECTIONAL);
+		if (ret) {
+			dev_err(hy_drv_priv->dev,
+				"HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS\n");
+			return ret;
+		}
+		break;
+
+	case HYPER_DMABUF_OPS_END_CPU_ACCESS:
+		ret = dma_buf_end_cpu_access(exported->dma_buf,
+					     DMA_BIDIRECTIONAL);
+		if (ret) {
+			dev_err(hy_drv_priv->dev,
+				"HYPER_DMABUF_OPS_END_CPU_ACCESS\n");
+			return ret;
+		}
+		break;
+
+	case HYPER_DMABUF_OPS_KMAP_ATOMIC:
+	case HYPER_DMABUF_OPS_KMAP:
+		va_kmapl = kcalloc(1, sizeof(*va_kmapl), GFP_KERNEL);
+		if (!va_kmapl)
+			return -ENOMEM;
+
+		/* dummy kmapping of 1 page */
+		if (ops == HYPER_DMABUF_OPS_KMAP_ATOMIC)
+			va_kmapl->vaddr = dma_buf_kmap_atomic(
+						exported->dma_buf, 1);
+		else
+			va_kmapl->vaddr = dma_buf_kmap(
+						exported->dma_buf, 1);
+
+		if (!va_kmapl->vaddr) {
+			kfree(va_kmapl);
+			dev_err(hy_drv_priv->dev,
+				"HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
+			return -ENOMEM;
+		}
+		list_add(&va_kmapl->list, &exported->va_kmapped->list);
+		break;
+
+	case HYPER_DMABUF_OPS_KUNMAP_ATOMIC:
+	case HYPER_DMABUF_OPS_KUNMAP:
+		if (list_empty(&exported->va_kmapped->list)) {
+			dev_err(hy_drv_priv->dev,
+				"HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
+			dev_err(hy_drv_priv->dev,
+				"no more dmabuf VA to be freed\n");
+			return -EFAULT;
+		}
+
+		va_kmapl = list_first_entry(&exported->va_kmapped->list,
+					    struct kmap_vaddr_list, list);
+		if (!va_kmapl->vaddr) {
+			dev_err(hy_drv_priv->dev,
+				"HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
+			return PTR_ERR(va_kmapl->vaddr);
+		}
+
+		/* unmapping 1 page */
+		if (ops == HYPER_DMABUF_OPS_KUNMAP_ATOMIC)
+			dma_buf_kunmap_atomic(exported->dma_buf,
+					      1, va_kmapl->vaddr);
+		else
+			dma_buf_kunmap(exported->dma_buf,
+				       1, va_kmapl->vaddr);
+
+		list_del(&va_kmapl->list);
+		kfree(va_kmapl);
+		break;
+
+	case HYPER_DMABUF_OPS_MMAP:
+		/* currently not supported: looking for a way to create
+		 * a dummy vma
+		 */
+		dev_warn(hy_drv_priv->dev,
+			 "remote sync::sychronized mmap is not supported\n");
+		break;
+
+	case HYPER_DMABUF_OPS_VMAP:
+		va_vmapl = kcalloc(1, sizeof(*va_vmapl), GFP_KERNEL);
+
+		if (!va_vmapl)
+			return -ENOMEM;
+
+		/* dummy vmapping */
+		va_vmapl->vaddr = dma_buf_vmap(exported->dma_buf);
+
+		if (!va_vmapl->vaddr) {
+			kfree(va_vmapl);
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_VMAP\n");
+			return -ENOMEM;
+		}
+		list_add(&va_vmapl->list, &exported->va_vmapped->list);
+		break;
+
+	case HYPER_DMABUF_OPS_VUNMAP:
+		if (list_empty(&exported->va_vmapped->list)) {
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_VUNMAP\n");
+			dev_err(hy_drv_priv->dev,
+				"no more dmabuf VA to be freed\n");
+			return -EFAULT;
+		}
+		va_vmapl = list_first_entry(&exported->va_vmapped->list,
+					struct vmap_vaddr_list, list);
+		if (!va_vmapl || va_vmapl->vaddr == NULL) {
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_VUNMAP\n");
+			return -EFAULT;
+		}
+
+		dma_buf_vunmap(exported->dma_buf, va_vmapl->vaddr);
+
+		list_del(&va_vmapl->list);
+		kfree(va_vmapl);
+		break;
+
+	default:
+		/* program should not get here */
+		break;
+	}
+
+	return 0;
+}
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.h
new file mode 100644
index 0000000..36638928
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.h
@@ -0,0 +1,30 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_REMOTE_SYNC_H__
+#define __HYPER_DMABUF_REMOTE_SYNC_H__
+
+int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops);
+
+#endif /* __HYPER_DMABUF_REMOTE_SYNC_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
new file mode 100644
index 0000000..d15eb17
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
@@ -0,0 +1,255 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/dma-buf.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_sgl_proc.h"
+
+/* return the total number of pages referenced by the given sgt,
+ * for pre-calculating how many pages sit behind it
+ */
+static int get_num_pgs(struct sg_table *sgt)
+{
+	struct scatterlist *sgl;
+	int length, i;
+	/* at least one page */
+	int num_pages = 1;
+
+	sgl = sgt->sgl;
+
+	length = sgl->length - PAGE_SIZE + sgl->offset;
+
+	/* round-up */
+	num_pages += ((length + PAGE_SIZE - 1)/PAGE_SIZE);
+
+	for (i = 1; i < sgt->nents; i++) {
+		sgl = sg_next(sgl);
+
+		/* round-up */
+		num_pages += ((sgl->length + PAGE_SIZE - 1) /
+			     PAGE_SIZE);
+	}
+
+	return num_pages;
+}
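+
+/* Worked example for the count above, assuming 4KiB pages: a single
+ * sgt entry with offset = 0x800 and length = 0x2000 covers bytes
+ * 0x800..0x27ff, i.e. 3 pages.  The code computes
+ * 1 + roundup(0x2000 - 0x1000 + 0x800, 0x1000)/0x1000 = 1 + 2 = 3,
+ * which matches.
+ */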
+
+/* extract pages directly from struct sg_table */
+struct pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
+{
+	struct pages_info *pg_info;
+	int i, j, k;
+	int length;
+	struct scatterlist *sgl;
+
+	pg_info = kmalloc(sizeof(*pg_info), GFP_KERNEL);
+	if (!pg_info)
+		return NULL;
+
+	pg_info->pgs = kmalloc_array(get_num_pgs(sgt),
+				     sizeof(struct page *),
+				     GFP_KERNEL);
+
+	if (!pg_info->pgs) {
+		kfree(pg_info);
+		return NULL;
+	}
+
+	sgl = sgt->sgl;
+
+	pg_info->nents = 1;
+	pg_info->frst_ofst = sgl->offset;
+	pg_info->pgs[0] = sg_page(sgl);
+	length = sgl->length - PAGE_SIZE + sgl->offset;
+	i = 1;
+
+	while (length > 0) {
+		pg_info->pgs[i] = nth_page(sg_page(sgl), i);
+		length -= PAGE_SIZE;
+		pg_info->nents++;
+		i++;
+	}
+
+	for (j = 1; j < sgt->nents; j++) {
+		sgl = sg_next(sgl);
+		pg_info->pgs[i++] = sg_page(sgl);
+		length = sgl->length - PAGE_SIZE;
+		pg_info->nents++;
+		k = 1;
+
+		while (length > 0) {
+			pg_info->pgs[i++] = nth_page(sg_page(sgl), k++);
+			length -= PAGE_SIZE;
+			pg_info->nents++;
+		}
+	}
+
+	/*
+	 * length at this point will be 0 or negative, so the size of
+	 * the last page is just PAGE_SIZE plus that remainder
+	 */
+	pg_info->last_len = PAGE_SIZE + length;
+
+	return pg_info;
+}
+
+/* create sg_table with given pages and other parameters */
+struct sg_table *hyper_dmabuf_create_sgt(struct page **pgs,
+					 int frst_ofst, int last_len,
+					 int nents)
+{
+	struct sg_table *sgt;
+	struct scatterlist *sgl;
+	int i, ret;
+
+	sgt = kmalloc(sizeof(*sgt), GFP_KERNEL);
+	if (!sgt)
+		return NULL;
+
+	ret = sg_alloc_table(sgt, nents, GFP_KERNEL);
+	if (ret) {
+		/* sg_alloc_table() cleans up after itself on failure,
+		 * so only the table struct needs freeing here
+		 */
+		kfree(sgt);
+		return NULL;
+	}
+
+	sgl = sgt->sgl;
+
+	sg_set_page(sgl, pgs[0], PAGE_SIZE-frst_ofst, frst_ofst);
+
+	for (i = 1; i < nents-1; i++) {
+		sgl = sg_next(sgl);
+		sg_set_page(sgl, pgs[i], PAGE_SIZE, 0);
+	}
+
+	/* the last page, if there is more than one page */
+	if (nents > 1) {
+		sgl = sg_next(sgl);
+		sg_set_page(sgl, pgs[i], last_len, 0);
+	}
+
+	return sgt;
+}
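+
+/* Usage sketch: hyper_dmabuf_ext_pgs() and hyper_dmabuf_create_sgt()
+ * are intended as inverses; an exporter-side sgt (here assumed to come
+ * from dma_buf_map_attachment()) can be flattened to pages_info and
+ * rebuilt on the other side:
+ *
+ *	struct pages_info *pg_info = hyper_dmabuf_ext_pgs(sgt);
+ *
+ *	if (pg_info)
+ *		rebuilt = hyper_dmabuf_create_sgt(pg_info->pgs,
+ *						  pg_info->frst_ofst,
+ *						  pg_info->last_len,
+ *						  pg_info->nents);
+ */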
+
+int hyper_dmabuf_cleanup_sgt_info(struct exported_sgt_info *exported,
+				  int force)
+{
+	struct sgt_list *sgtl;
+	struct attachment_list *attachl;
+	struct kmap_vaddr_list *va_kmapl;
+	struct vmap_vaddr_list *va_vmapl;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
+
+	if (!exported) {
+		dev_err(hy_drv_priv->dev, "invalid hyper_dmabuf_id\n");
+		return -EINVAL;
+	}
+
+	/* if force != 1, sgt_info can be released only if
+	 * there's no activity on exported dma-buf on importer
+	 * side.
+	 */
+	if (!force &&
+	    exported->active) {
+		dev_warn(hy_drv_priv->dev,
+			 "dma-buf is used by importer\n");
+
+		return -EPERM;
+	}
+
+	/* force == 1 is not recommended */
+	while (!list_empty(&exported->va_kmapped->list)) {
+		va_kmapl = list_first_entry(&exported->va_kmapped->list,
+					    struct kmap_vaddr_list, list);
+
+		dma_buf_kunmap(exported->dma_buf, 1, va_kmapl->vaddr);
+		list_del(&va_kmapl->list);
+		kfree(va_kmapl);
+	}
+
+	while (!list_empty(&exported->va_vmapped->list)) {
+		va_vmapl = list_first_entry(&exported->va_vmapped->list,
+					    struct vmap_vaddr_list, list);
+
+		dma_buf_vunmap(exported->dma_buf, va_vmapl->vaddr);
+		list_del(&va_vmapl->list);
+		kfree(va_vmapl);
+	}
+
+	while (!list_empty(&exported->active_sgts->list)) {
+		attachl = list_first_entry(&exported->active_attached->list,
+					   struct attachment_list, list);
+
+		sgtl = list_first_entry(&exported->active_sgts->list,
+					struct sgt_list, list);
+
+		dma_buf_unmap_attachment(attachl->attach, sgtl->sgt,
+					 DMA_BIDIRECTIONAL);
+		list_del(&sgtl->list);
+		kfree(sgtl);
+	}
+
+	while (!list_empty(&exported->active_attached->list)) {
+		attachl = list_first_entry(&exported->active_attached->list,
+					   struct attachment_list, list);
+
+		dma_buf_detach(exported->dma_buf, attachl->attach);
+		list_del(&attachl->list);
+		kfree(attachl);
+	}
+
+	/* Start cleanup of buffer in reverse order to exporting */
+	bknd_ops->unshare_pages(&exported->refs_info, exported->nents);
+
+	/* unmap dma-buf */
+	dma_buf_unmap_attachment(exported->active_attached->attach,
+				 exported->active_sgts->sgt,
+				 DMA_BIDIRECTIONAL);
+
+	/* detach dma-buf */
+	dma_buf_detach(exported->dma_buf, exported->active_attached->attach);
+
+	/* close connection to dma-buf completely */
+	dma_buf_put(exported->dma_buf);
+	exported->dma_buf = NULL;
+
+	kfree(exported->active_sgts);
+	kfree(exported->active_attached);
+	kfree(exported->va_kmapped);
+	kfree(exported->va_vmapped);
+	kfree(exported->priv);
+
+	return 0;
+}
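+
+/* Teardown order used above, mirroring the export path in reverse:
+ *
+ *	1. release leftover kmap/vmap CPU mappings
+ *	2. dma_buf_unmap_attachment() every tracked sgt
+ *	3. dma_buf_detach() every tracked attachment
+ *	4. bknd_ops->unshare_pages() to revoke hypervisor page grants
+ *	5. unmap/detach the primary attachment pair
+ *	6. dma_buf_put() to drop the reference taken at export time
+ */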
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
new file mode 100644
index 0000000..869d982
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_SGL_PROC_H__
+#define __HYPER_DMABUF_SGL_PROC_H__
+
+/* extract pages directly from struct sg_table */
+struct pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt);
+
+/* create sg_table with given pages and other parameters */
+struct sg_table *hyper_dmabuf_create_sgt(struct page **pgs,
+					 int frst_ofst, int last_len,
+					 int nents);
+
+int hyper_dmabuf_cleanup_sgt_info(struct exported_sgt_info *exported,
+				  int force);
+
+void hyper_dmabuf_free_sgt(struct sg_table *sgt);
+
+#endif /* __HYPER_DMABUF_SGL_PROC_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h
new file mode 100644
index 0000000..a11f804
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -0,0 +1,141 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_STRUCT_H__
+#define __HYPER_DMABUF_STRUCT_H__
+
+/* stack of mapped sgts */
+struct sgt_list {
+	struct sg_table *sgt;
+	struct list_head list;
+};
+
+/* stack of attachments */
+struct attachment_list {
+	struct dma_buf_attachment *attach;
+	struct list_head list;
+};
+
+/* stack of vaddr mapped via kmap */
+struct kmap_vaddr_list {
+	void *vaddr;
+	struct list_head list;
+};
+
+/* stack of vaddr mapped via vmap */
+struct vmap_vaddr_list {
+	void *vaddr;
+	struct list_head list;
+};
+
+/* Exporter builds pages_info before sharing pages */
+struct pages_info {
+	int frst_ofst;
+	int last_len;
+	int nents;
+	struct page **pgs;
+};
+
+/* Exporter stores references to the sgt in a hash table.
+ * The exporter keeps these references for synchronization
+ * and tracking purposes.
+ */
+struct exported_sgt_info {
+	hyper_dmabuf_id_t hid;
+
+	/* VM ID of importer */
+	int rdomid;
+
+	struct dma_buf *dma_buf;
+	int nents;
+
+	/* list for tracking activities on dma_buf */
+	struct sgt_list *active_sgts;
+	struct attachment_list *active_attached;
+	struct kmap_vaddr_list *va_kmapped;
+	struct vmap_vaddr_list *va_vmapped;
+
+	/* set to false when unexported. Importer doesn't
+	 * do a new mapping of the buffer if valid == false
+	 */
+	bool valid;
+
+	/* active == true if the buffer is actively used
+	 * (mapped) by importer
+	 */
+	int active;
+
+	/* hypervisor specific reference data for shared pages */
+	void *refs_info;
+
+	struct delayed_work unexport;
+	bool unexport_sched;
+
+	/* list of file pointers associated with all user space
+	 * applications that have exported this same buffer to
+	 * another VM. This needs to be tracked to know whether
+	 * the buffer can be completely freed.
+	 */
+	struct file *filp;
+
+	/* size of private */
+	size_t sz_priv;
+
+	/* private data associated with the exported buffer */
+	char *priv;
+};
+
+/* imported_sgt_info contains information about an imported DMA_BUF.
+ * This info is kept in the IMPORT list and asynchronously retrieved
+ * and used to map the DMA_BUF on the importer VM's side upon an
+ * export fd ioctl request from user-space.
+ */
+
+struct imported_sgt_info {
+	hyper_dmabuf_id_t hid; /* unique id of the imported dmabuf */
+
+	/* hypervisor-specific handle to pages */
+	int ref_handle;
+
+	/* offset and size info of DMA_BUF */
+	int frst_ofst;
+	int last_len;
+	int nents;
+
+	struct dma_buf *dma_buf;
+	struct sg_table *sgt;
+
+	void *refs_info;
+	bool valid;
+	int importers;
+
+	/* size of private */
+	size_t sz_priv;
+
+	/* private data associated with the imported buffer */
+	char *priv;
+};
+
+#endif /* __HYPER_DMABUF_STRUCT_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm.c b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm.c
new file mode 100644
index 0000000..4a073ce
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm.c
@@ -0,0 +1,941 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/workqueue.h>
+#include <linux/delay.h>
+#include <xen/grant_table.h>
+#include <xen/events.h>
+#include <xen/xenbus.h>
+#include <asm/xen/page.h>
+#include "hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_xen_comm_list.h"
+#include "../hyper_dmabuf_drv.h"
+
+static int export_req_id;
+
+struct hyper_dmabuf_req req_pending = {0};
+
+static void xen_get_domid_delayed(struct work_struct *unused);
+static void xen_init_comm_env_delayed(struct work_struct *unused);
+
+static DECLARE_DELAYED_WORK(get_vm_id_work, xen_get_domid_delayed);
+static DECLARE_DELAYED_WORK(xen_init_comm_env_work, xen_init_comm_env_delayed);
+
+/* Creates an entry in xenstore that keeps details of all
+ * exporter rings created by this domain
+ */
+static int xen_comm_setup_data_dir(void)
+{
+	char buf[255];
+
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf",
+		hy_drv_priv->domid);
+
+	return xenbus_mkdir(XBT_NIL, buf, "");
+}
+
+/* Removes the entry with exporter ring details from xenstore.
+ * Other domains that have connected to any of the exporter rings
+ * created by this domain will be notified about removal of
+ * this entry and will treat that as a signal to clean up the
+ * importer rings created for this domain
+ */
+static int xen_comm_destroy_data_dir(void)
+{
+	char buf[255];
+
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf",
+		hy_drv_priv->domid);
+
+	return xenbus_rm(XBT_NIL, buf, "");
+}
+
+/* Adds xenstore entries with details of an exporter ring created
+ * for the given remote domain. It requires a special daemon running
+ * in dom0 to make sure that the remote domain gets the right
+ * permissions to access that data.
+ */
+static int xen_comm_expose_ring_details(int domid, int rdomid,
+					int gref, int port)
+{
+	char buf[255];
+	int ret;
+
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d",
+		domid, rdomid);
+
+	ret = xenbus_printf(XBT_NIL, buf, "grefid", "%d", gref);
+
+	if (ret) {
+		dev_err(hy_drv_priv->dev,
+			"Failed to write xenbus entry %s: %d\n",
+			buf, ret);
+
+		return ret;
+	}
+
+	ret = xenbus_printf(XBT_NIL, buf, "port", "%d", port);
+
+	if (ret) {
+		dev_err(hy_drv_priv->dev,
+			"Failed to write xenbus entry %s: %d\n",
+			buf, ret);
+
+		return ret;
+	}
+
+	return 0;
+}
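+
+/* Resulting xenstore layout (paths follow the sprintf format above).
+ * For example, domain 1 exposing a ring for domain 2 produces:
+ *
+ *	/local/domain/1/data/hyper_dmabuf/2/grefid = "<gref>"
+ *	/local/domain/1/data/hyper_dmabuf/2/port   = "<port>"
+ *
+ * which xen_comm_get_ring_details() on domain 2 reads back with the
+ * domid/rdomid arguments swapped.
+ */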
+
+/*
+ * Queries details of ring exposed by remote domain.
+ */
+static int xen_comm_get_ring_details(int domid, int rdomid,
+				     int *grefid, int *port)
+{
+	char buf[255];
+	int ret;
+
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d",
+		rdomid, domid);
+
+	ret = xenbus_scanf(XBT_NIL, buf, "grefid", "%d", grefid);
+
+	if (ret <= 0) {
+		dev_err(hy_drv_priv->dev,
+			"Failed to read xenbus entry %s: %d\n",
+			buf, ret);
+
+		return ret;
+	}
+
+	ret = xenbus_scanf(XBT_NIL, buf, "port", "%d", port);
+
+	if (ret <= 0) {
+		dev_err(hy_drv_priv->dev,
+			"Failed to read xenbus entry %s: %d\n",
+			buf, ret);
+
+		return ret;
+	}
+
+	/* both reads succeeded if we got here */
+	return 0;
+}
+
+static void xen_get_domid_delayed(struct work_struct *unused)
+{
+	struct xenbus_transaction xbt;
+	int domid, ret;
+
+	/* schedule another attempt if the driver is still running
+	 * and xenstore has not been initialized yet
+	 */
+	if (likely(xenstored_ready == 0)) {
+		dev_dbg(hy_drv_priv->dev,
+			"Xenstore is not ready yet. Will retry in 500ms\n");
+		schedule_delayed_work(&get_vm_id_work, msecs_to_jiffies(500));
+	} else {
+		xenbus_transaction_start(&xbt);
+
+		ret = xenbus_scanf(xbt, "domid", "", "%d", &domid);
+
+		if (ret <= 0)
+			domid = -1;
+
+		xenbus_transaction_end(xbt, 0);
+
+		/* try again since -1 is an invalid id for domain
+		 * (but only if driver is still running)
+		 */
+		if (unlikely(domid == -1)) {
+			dev_dbg(hy_drv_priv->dev,
+				"domid==-1 is invalid. Will retry in 500ms\n");
+			schedule_delayed_work(&get_vm_id_work,
+					      msecs_to_jiffies(500));
+		} else {
+			dev_info(hy_drv_priv->dev,
+				 "Successfully retrieved domid from Xenstore:%d\n",
+				 domid);
+			hy_drv_priv->domid = domid;
+		}
+	}
+}
+
+int xen_be_get_domid(void)
+{
+	struct xenbus_transaction xbt;
+	int domid;
+
+	if (unlikely(xenstored_ready == 0)) {
+		xen_get_domid_delayed(NULL);
+		return -1;
+	}
+
+	xenbus_transaction_start(&xbt);
+
+	if (!xenbus_scanf(xbt, "domid", "", "%d", &domid))
+		domid = -1;
+
+	xenbus_transaction_end(xbt, 0);
+
+	return domid;
+}
+
+static int xen_comm_next_req_id(void)
+{
+	export_req_id++;
+	return export_req_id;
+}
+
+/* For now cache the latest rings as global variables.
+ * TODO: keep them in a list
+ */
+static irqreturn_t front_ring_isr(int irq, void *info);
+static irqreturn_t back_ring_isr(int irq, void *info);
+
+/* Callback function called on any change of the xenbus path
+ * being watched. Used for detecting creation/destruction of a
+ * remote domain's exporter ring.
+ *
+ * When a remote domain's exporter ring is detected, an importer
+ * ring is created on this domain.
+ *
+ * When destruction of a remote domain's exporter ring is detected,
+ * this domain's importer ring is cleaned up.
+ *
+ * Destruction can be caused by the remote domain unloading the
+ * module or by its crash/forced shutdown.
+ */
+static void remote_dom_exporter_watch_cb(struct xenbus_watch *watch,
+					 const char *path, const char *token)
+{
+	int rdom, ret;
+	uint32_t grefid, port;
+	struct xen_comm_rx_ring_info *ring_info;
+
+	/* Check which domain has changed its exporter rings */
+	ret = sscanf(watch->node, "/local/domain/%d/", &rdom);
+	if (ret <= 0)
+		return;
+
+	/* Check if we have importer ring for given remote domain already
+	 * created
+	 */
+	ring_info = xen_comm_find_rx_ring(rdom);
+
+	/* Try to query remote domain exporter ring details - if
+	 * that fails and we already have an importer ring, the remote
+	 * domain has cleaned up its exporter ring, so our importer ring
+	 * is no longer useful.
+	 *
+	 * If querying details succeeds and we don't have an importer
+	 * ring, the remote domain has set one up for us and we should
+	 * connect to it.
+	 */
+
+	ret = xen_comm_get_ring_details(xen_be_get_domid(),
+					rdom, &grefid, &port);
+
+	if (ring_info && ret != 0) {
+		dev_info(hy_drv_priv->dev,
+			 "Remote exporter closed, cleaning up importer\n");
+		xen_be_cleanup_rx_rbuf(rdom);
+	} else if (!ring_info && ret == 0) {
+		dev_info(hy_drv_priv->dev,
+			 "Registering importer\n");
+		xen_be_init_rx_rbuf(rdom);
+	}
+}
+
+/* exporter needs to generate info for page sharing */
+int xen_be_init_tx_rbuf(int domid)
+{
+	struct xen_comm_tx_ring_info *ring_info;
+	struct xen_comm_sring *sring;
+	struct evtchn_alloc_unbound alloc_unbound;
+	struct evtchn_close close;
+
+	void *shared_ring;
+	int ret;
+
+	/* check if there's any existing tx channel in the table */
+	ring_info = xen_comm_find_tx_ring(domid);
+
+	if (ring_info) {
+		dev_info(hy_drv_priv->dev,
+			 "tx ring ch to domid = %d already exists\ngref = %d, port = %d\n",
+			 ring_info->rdomain, ring_info->gref_ring, ring_info->port);
+		return 0;
+	}
+
+	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
+
+	if (!ring_info)
+		return -ENOMEM;
+
+	/* from exporter to importer */
+	shared_ring = (void *)__get_free_pages(GFP_KERNEL, 1);
+	if (!shared_ring) {
+		kfree(ring_info);
+		return -ENOMEM;
+	}
+
+	sring = (struct xen_comm_sring *) shared_ring;
+
+	SHARED_RING_INIT(sring);
+
+	FRONT_RING_INIT(&(ring_info->ring_front), sring, PAGE_SIZE);
+
+	ring_info->gref_ring = gnttab_grant_foreign_access(domid,
+						virt_to_mfn(shared_ring),
+						0);
+	if (ring_info->gref_ring < 0) {
+		/* failed to get a gref; free the ring page too */
+		free_pages((unsigned long)shared_ring, 1);
+		kfree(ring_info);
+		return -EFAULT;
+	}
+
+	alloc_unbound.dom = DOMID_SELF;
+	alloc_unbound.remote_dom = domid;
+	ret = HYPERVISOR_event_channel_op(EVTCHNOP_alloc_unbound,
+					  &alloc_unbound);
+	if (ret) {
+		dev_err(hy_drv_priv->dev,
+			"Cannot allocate event channel\n");
+		kfree(ring_info);
+		return -EIO;
+	}
+
+	/* setting up interrupt */
+	ret = bind_evtchn_to_irqhandler(alloc_unbound.port,
+					front_ring_isr, 0,
+					NULL, (void *) ring_info);
+
+	if (ret < 0) {
+		dev_err(hy_drv_priv->dev,
+			"Failed to setup event channel\n");
+		close.port = alloc_unbound.port;
+		HYPERVISOR_event_channel_op(EVTCHNOP_close, &close);
+		gnttab_end_foreign_access(ring_info->gref_ring, 0,
+					  (unsigned long)shared_ring);
+		kfree(ring_info);
+		return -EIO;
+	}
+
+	ring_info->rdomain = domid;
+	ring_info->irq = ret;
+	ring_info->port = alloc_unbound.port;
+
+	mutex_init(&ring_info->lock);
+
+	dev_dbg(hy_drv_priv->dev,
+		"%s: allocated eventchannel gref %d  port: %d  irq: %d\n",
+		__func__,
+		ring_info->gref_ring,
+		ring_info->port,
+		ring_info->irq);
+
+	ret = xen_comm_add_tx_ring(ring_info);
+
+	ret = xen_comm_expose_ring_details(xen_be_get_domid(),
+					   domid,
+					   ring_info->gref_ring,
+					   ring_info->port);
+
+	/* Register watch for remote domain exporter ring.
+	 * When remote domain will setup its exporter ring,
+	 * we will automatically connect our importer ring to it.
+	 */
+	ring_info->watch.callback = remote_dom_exporter_watch_cb;
+	ring_info->watch.node = kmalloc(255, GFP_KERNEL);
+
+	if (!ring_info->watch.node) {
+		kfree(ring_info);
+		return -ENOMEM;
+	}
+
+	sprintf((char *)ring_info->watch.node,
+		"/local/domain/%d/data/hyper_dmabuf/%d/port",
+		domid, xen_be_get_domid());
+
+	register_xenbus_watch(&ring_info->watch);
+
+	return ret;
+}
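+
+/* Bring-up sketch for a channel pair between domains A and B, derived
+ * from the flow above and the watch callback:
+ *
+ *	A: xen_be_init_tx_rbuf(B)  - grant ring page, alloc event channel,
+ *	                             publish grefid/port, watch B's entry
+ *	B: watch fires             - remote_dom_exporter_watch_cb()
+ *	B: xen_be_init_rx_rbuf(A)  - map A's ring, bind event channel
+ *	B: xen_be_init_tx_rbuf(A)  - create the reverse direction, which
+ *	                             A's watch then picks up in turn
+ */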
+
+/* cleans up exporter ring created for given remote domain */
+void xen_be_cleanup_tx_rbuf(int domid)
+{
+	struct xen_comm_tx_ring_info *ring_info;
+	struct xen_comm_rx_ring_info *rx_ring_info;
+
+	/* check if we have an exporter ring for the given rdomain at all */
+	ring_info = xen_comm_find_tx_ring(domid);
+
+	if (!ring_info)
+		return;
+
+	xen_comm_remove_tx_ring(domid);
+
+	unregister_xenbus_watch(&ring_info->watch);
+	kfree(ring_info->watch.node);
+
+	/* No need to close the communication channel; it is closed by
+	 * unbind_from_irqhandler()
+	 */
+	unbind_from_irqhandler(ring_info->irq, (void *) ring_info);
+
+	/* No need to free the sring page; it is freed by
+	 * gnttab_end_foreign_access() when the other side ends its access
+	 */
+	gnttab_end_foreign_access(ring_info->gref_ring, 0,
+				  (unsigned long) ring_info->ring_front.sring);
+
+	kfree(ring_info);
+
+	rx_ring_info = xen_comm_find_rx_ring(domid);
+	if (!rx_ring_info)
+		return;
+
+	BACK_RING_INIT(&(rx_ring_info->ring_back),
+		       rx_ring_info->ring_back.sring,
+		       PAGE_SIZE);
+}
+
+/* importer needs to know about shared page and port numbers for
+ * ring buffer and event channel
+ */
+int xen_be_init_rx_rbuf(int domid)
+{
+	struct xen_comm_rx_ring_info *ring_info;
+	struct xen_comm_sring *sring;
+
+	struct page *shared_ring;
+
+	struct gnttab_map_grant_ref *map_ops;
+
+	int ret;
+	int rx_gref, rx_port;
+
+	/* check if there's existing rx ring channel */
+	ring_info = xen_comm_find_rx_ring(domid);
+
+	if (ring_info) {
+		dev_info(hy_drv_priv->dev,
+			 "rx ring ch from domid = %d already exists\n",
+			 ring_info->sdomain);
+
+		return 0;
+	}
+
+	ret = xen_comm_get_ring_details(xen_be_get_domid(), domid,
+					&rx_gref, &rx_port);
+
+	if (ret) {
+		dev_err(hy_drv_priv->dev,
+			"Domain %d has not created exporter ring for current domain\n",
+			domid);
+
+		return ret;
+	}
+
+	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
+
+	if (!ring_info)
+		return -ENOMEM;
+
+	ring_info->sdomain = domid;
+	ring_info->evtchn = rx_port;
+
+	map_ops = kmalloc(sizeof(*map_ops), GFP_KERNEL);
+
+	if (!map_ops) {
+		ret = -ENOMEM;
+		goto fail_no_map_ops;
+	}
+
+	if (gnttab_alloc_pages(1, &shared_ring)) {
+		ret = -ENOMEM;
+		goto fail_others;
+	}
+
+	gnttab_set_map_op(&map_ops[0],
+			  (unsigned long)pfn_to_kaddr(
+					page_to_pfn(shared_ring)),
+			  GNTMAP_host_map, rx_gref, domid);
+
+	gnttab_set_unmap_op(&ring_info->unmap_op,
+			    (unsigned long)pfn_to_kaddr(
+					page_to_pfn(shared_ring)),
+			    GNTMAP_host_map, -1);
+
+	ret = gnttab_map_refs(map_ops, NULL, &shared_ring, 1);
+	if (ret < 0) {
+		dev_err(hy_drv_priv->dev, "Cannot map ring\n");
+		ret = -EFAULT;
+		goto fail_others;
+	}
+
+	if (map_ops[0].status) {
+		dev_err(hy_drv_priv->dev, "Ring mapping failed\n");
+		ret = -EFAULT;
+		goto fail_others;
+	} else {
+		ring_info->unmap_op.handle = map_ops[0].handle;
+	}
+
+	kfree(map_ops);
+
+	sring = (struct xen_comm_sring *)pfn_to_kaddr(page_to_pfn(shared_ring));
+
+	BACK_RING_INIT(&ring_info->ring_back, sring, PAGE_SIZE);
+
+	ret = bind_interdomain_evtchn_to_irq(domid, rx_port);
+
+	if (ret < 0) {
+		ret = -EIO;
+		goto fail_others;
+	}
+
+	ring_info->irq = ret;
+
+	dev_dbg(hy_drv_priv->dev,
+		"%s: bound to eventchannel port: %d  irq: %d\n", __func__,
+		rx_port,
+		ring_info->irq);
+
+	ret = xen_comm_add_rx_ring(ring_info);
+
+	/* Set up communication channel in the opposite direction */
+	if (!xen_comm_find_tx_ring(domid))
+		ret = xen_be_init_tx_rbuf(domid);
+
+	ret = request_irq(ring_info->irq,
+			  back_ring_isr, 0,
+			  NULL, (void *)ring_info);
+
+	return ret;
+
+fail_others:
+	kfree(map_ops);
+
+fail_no_map_ops:
+	kfree(ring_info);
+
+	return ret;
+}
+
+/* cleans up importer ring created for given source domain */
+void xen_be_cleanup_rx_rbuf(int domid)
+{
+	struct xen_comm_rx_ring_info *ring_info;
+	struct xen_comm_tx_ring_info *tx_ring_info;
+	struct page *shared_ring;
+
+	/* check if we have importer ring created for given sdomain */
+	ring_info = xen_comm_find_rx_ring(domid);
+
+	if (!ring_info)
+		return;
+
+	xen_comm_remove_rx_ring(domid);
+
+	/* no need to close event channel, will be done by that function */
+	unbind_from_irqhandler(ring_info->irq, (void *)ring_info);
+
+	/* unmapping shared ring page */
+	shared_ring = virt_to_page(ring_info->ring_back.sring);
+	gnttab_unmap_refs(&ring_info->unmap_op, NULL, &shared_ring, 1);
+	gnttab_free_pages(1, &shared_ring);
+
+	kfree(ring_info);
+
+	tx_ring_info = xen_comm_find_tx_ring(domid);
+	if (!tx_ring_info)
+		return;
+
+	SHARED_RING_INIT(tx_ring_info->ring_front.sring);
+	FRONT_RING_INIT(&(tx_ring_info->ring_front),
+			tx_ring_info->ring_front.sring,
+			PAGE_SIZE);
+}
+
+#ifdef CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD
+
+static void xen_rx_ch_add_delayed(struct work_struct *unused);
+
+static DECLARE_DELAYED_WORK(xen_rx_ch_auto_add_work, xen_rx_ch_add_delayed);
+
+#define DOMID_SCAN_START	1	/*  domid = 1 */
+#define DOMID_SCAN_END		10	/* domid = 10 */
+
+static void xen_rx_ch_add_delayed(struct work_struct *unused)
+{
+	int ret;
+	char buf[128];
+	int i, dummy;
+
+	dev_dbg(hy_drv_priv->dev,
+		"Scanning for new tx channels coming from other domains\n");
+
+	/* check other domains and schedule another work if driver
+	 * is still running and backend is valid
+	 */
+	if (hy_drv_priv &&
+	    hy_drv_priv->initialized) {
+		for (i = DOMID_SCAN_START; i < DOMID_SCAN_END + 1; i++) {
+			if (i == hy_drv_priv->domid)
+				continue;
+
+			sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d",
+				i, hy_drv_priv->domid);
+
+			ret = xenbus_scanf(XBT_NIL, buf, "port", "%d", &dummy);
+
+			if (ret > 0) {
+				if (xen_comm_find_rx_ring(i) != NULL)
+					continue;
+
+				ret = xen_be_init_rx_rbuf(i);
+
+				if (!ret)
+					dev_info(hy_drv_priv->dev,
+						 "Done rx ch init for VM %d\n",
+						 i);
+			}
+		}
+
+		/* check every 10 seconds */
+		schedule_delayed_work(&xen_rx_ch_auto_add_work,
+				      msecs_to_jiffies(10000));
+	}
+}
+
+#endif /* CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD */
+
+static void xen_init_comm_env_delayed(struct work_struct *unused)
+{
+	int ret;
+
+	/* scheduling another work if driver is still running
+	 * and xenstore hasn't been initialized or dom_id hasn't
+	 * been correctly retrieved.
+	 */
+	if (likely(xenstored_ready == 0 ||
+	    hy_drv_priv->domid == -1)) {
+		dev_dbg(hy_drv_priv->dev,
+			"Xenstore is not ready yet. Will retry in 500ms\n");
+		schedule_delayed_work(&xen_init_comm_env_work,
+				      msecs_to_jiffies(500));
+	} else {
+		ret = xen_comm_setup_data_dir();
+		if (ret < 0) {
+			dev_err(hy_drv_priv->dev,
+				"Failed to create data dir in Xenstore\n");
+		} else {
+			dev_info(hy_drv_priv->dev,
+				"Successfully finished comm env init\n");
+			hy_drv_priv->initialized = true;
+
+#ifdef CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD
+			xen_rx_ch_add_delayed(NULL);
+#endif /* CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD */
+		}
+	}
+}
+
+int xen_be_init_comm_env(void)
+{
+	int ret;
+
+	xen_comm_ring_table_init();
+
+	if (unlikely(xenstored_ready == 0 ||
+	    hy_drv_priv->domid == -1)) {
+		xen_init_comm_env_delayed(NULL);
+		return -1;
+	}
+
+	ret = xen_comm_setup_data_dir();
+	if (ret < 0) {
+		dev_err(hy_drv_priv->dev,
+			"Failed to create data dir in Xenstore\n");
+	} else {
+		dev_info(hy_drv_priv->dev,
+			"Successfully finished comm env initialization\n");
+
+		hy_drv_priv->initialized = true;
+	}
+
+	return ret;
+}
+
+/* cleans up all tx/rx rings */
+static void xen_be_cleanup_all_rbufs(void)
+{
+	xen_comm_foreach_tx_ring(xen_be_cleanup_tx_rbuf);
+	xen_comm_foreach_rx_ring(xen_be_cleanup_rx_rbuf);
+}
+
+void xen_be_destroy_comm(void)
+{
+	xen_be_cleanup_all_rbufs();
+	xen_comm_destroy_data_dir();
+}
+
+int xen_be_send_req(int domid, struct hyper_dmabuf_req *req,
+			      int wait)
+{
+	struct xen_comm_front_ring *ring;
+	struct hyper_dmabuf_req *new_req;
+	struct xen_comm_tx_ring_info *ring_info;
+	int notify;
+
+	struct timeval tv_start, tv_end;
+	struct timeval tv_diff;
+
+	int timeout = 1000;
+
+	/* find a ring info for the channel */
+	ring_info = xen_comm_find_tx_ring(domid);
+	if (!ring_info) {
+		dev_err(hy_drv_priv->dev,
+			"Can't find ring info for the channel\n");
+		return -ENOENT;
+	}
+
+	ring = &ring_info->ring_front;
+
+	do_gettimeofday(&tv_start);
+
+	while (RING_FULL(ring)) {
+		dev_dbg(hy_drv_priv->dev, "RING_FULL\n");
+
+		if (timeout == 0) {
+			dev_err(hy_drv_priv->dev,
+				"Timeout while waiting for an entry in the ring\n");
+			return -EIO;
+		}
+		usleep_range(100, 120);
+		timeout--;
+	}
+
+	timeout = 1000;
+
+	mutex_lock(&ring_info->lock);
+
+	new_req = RING_GET_REQUEST(ring, ring->req_prod_pvt);
+	if (!new_req) {
+		mutex_unlock(&ring_info->lock);
+		dev_err(hy_drv_priv->dev,
+			"NULL REQUEST\n");
+		return -EIO;
+	}
+
+	req->req_id = xen_comm_next_req_id();
+
+	/* update req_pending with current request */
+	memcpy(&req_pending, req, sizeof(req_pending));
+
+	/* pass current request to the ring */
+	memcpy(new_req, req, sizeof(*new_req));
+
+	ring->req_prod_pvt++;
+
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(ring, notify);
+	if (notify)
+		notify_remote_via_irq(ring_info->irq);
+
+	if (wait) {
+		while (timeout--) {
+			if (req_pending.stat !=
+			    HYPER_DMABUF_REQ_NOT_RESPONDED)
+				break;
+			usleep_range(100, 120);
+		}
+
+		if (timeout < 0) {
+			mutex_unlock(&ring_info->lock);
+			dev_err(hy_drv_priv->dev,
+				"request timed-out\n");
+			return -EBUSY;
+		}
+
+		do_gettimeofday(&tv_end);
+
+		/* checking time duration for round-trip of a request
+		 * for debugging
+		 */
+		if (tv_end.tv_usec >= tv_start.tv_usec) {
+			tv_diff.tv_sec = tv_end.tv_sec-tv_start.tv_sec;
+			tv_diff.tv_usec = tv_end.tv_usec-tv_start.tv_usec;
+		} else {
+			tv_diff.tv_sec = tv_end.tv_sec-tv_start.tv_sec-1;
+			tv_diff.tv_usec = tv_end.tv_usec+1000000-
+					  tv_start.tv_usec;
+		}
+
+		if (tv_diff.tv_sec != 0 && tv_diff.tv_usec > 16000)
+			dev_dbg(hy_drv_priv->dev,
+				"send_req:time diff: %ld sec, %ld usec\n",
+				tv_diff.tv_sec, tv_diff.tv_usec);
+	}
+
+	mutex_unlock(&ring_info->lock);
+
+	return 0;
+}
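+
+/* Worked example for the tv_diff computation above: with
+ * tv_start = {5, 900000} and tv_end = {7, 100000}, tv_end.tv_usec is
+ * smaller than tv_start.tv_usec, so the else branch yields
+ * tv_diff = {7 - 5 - 1, 100000 + 1000000 - 900000} = {1, 200000},
+ * i.e. a 1.2s round-trip.
+ */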
+
+/* ISR for handling request */
+static irqreturn_t back_ring_isr(int irq, void *info)
+{
+	RING_IDX rc, rp;
+	struct hyper_dmabuf_req req;
+	struct hyper_dmabuf_resp resp;
+
+	int notify, more_to_do;
+	int ret;
+
+	struct xen_comm_rx_ring_info *ring_info;
+	struct xen_comm_back_ring *ring;
+
+	ring_info = (struct xen_comm_rx_ring_info *)info;
+	ring = &ring_info->ring_back;
+
+	dev_dbg(hy_drv_priv->dev, "%s\n", __func__);
+
+	do {
+		rc = ring->req_cons;
+		rp = ring->sring->req_prod;
+		more_to_do = 0;
+		while (rc != rp) {
+			if (RING_REQUEST_CONS_OVERFLOW(ring, rc))
+				break;
+
+			memcpy(&req, RING_GET_REQUEST(ring, rc), sizeof(req));
+			ring->req_cons = ++rc;
+
+			ret = hyper_dmabuf_msg_parse(ring_info->sdomain, &req);
+
+			if (ret > 0) {
+				/* preparing a response for the request and
+				 * send it to the requester
+				 */
+				memcpy(&resp, &req, sizeof(resp));
+				memcpy(RING_GET_RESPONSE(ring,
+							 ring->rsp_prod_pvt),
+							 &resp, sizeof(resp));
+				ring->rsp_prod_pvt++;
+
+				dev_dbg(hy_drv_priv->dev,
+					"responding to exporter for req:%d\n",
+					resp.resp_id);
+
+				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(ring,
+								     notify);
+
+				if (notify)
+					notify_remote_via_irq(ring_info->irq);
+			}
+
+			RING_FINAL_CHECK_FOR_REQUESTS(ring, more_to_do);
+		}
+	} while (more_to_do);
+
+	return IRQ_HANDLED;
+}
+
+/* ISR for handling responses */
+static irqreturn_t front_ring_isr(int irq, void *info)
+{
+	/* front ring only cares about responses from the back */
+	struct hyper_dmabuf_resp *resp;
+	RING_IDX i, rp;
+	int more_to_do, ret;
+
+	struct xen_comm_tx_ring_info *ring_info;
+	struct xen_comm_front_ring *ring;
+
+	ring_info = (struct xen_comm_tx_ring_info *)info;
+	ring = &ring_info->ring_front;
+
+	dev_dbg(hy_drv_priv->dev, "%s\n", __func__);
+
+	do {
+		more_to_do = 0;
+		rp = ring->sring->rsp_prod;
+		for (i = ring->rsp_cons; i != rp; i++) {
+			resp = RING_GET_RESPONSE(ring, i);
+
+			/* update pending request's status with what is
+			 * in the response
+			 */
+
+			dev_dbg(hy_drv_priv->dev,
+				"getting response from importer\n");
+
+			if (req_pending.req_id == resp->resp_id)
+				req_pending.stat = resp->stat;
+
+			if (resp->stat == HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP) {
+				/* parsing response */
+				ret = hyper_dmabuf_msg_parse(ring_info->rdomain,
+					(struct hyper_dmabuf_req *)resp);
+
+				if (ret < 0) {
+					dev_err(hy_drv_priv->dev,
+						"err while parsing resp\n");
+				}
+			} else if (resp->stat == HYPER_DMABUF_REQ_PROCESSED) {
+				/* for debugging dma_buf remote synch */
+				dev_dbg(hy_drv_priv->dev,
+					"original request = 0x%x\n", resp->cmd);
+				dev_dbg(hy_drv_priv->dev,
+					"got HYPER_DMABUF_REQ_PROCESSED\n");
+			} else if (resp->stat == HYPER_DMABUF_REQ_ERROR) {
+				/* for debugging dma_buf remote synch */
+				dev_dbg(hy_drv_priv->dev,
+					"original request = 0x%x\n", resp->cmd);
+				dev_dbg(hy_drv_priv->dev,
+					"got HYPER_DMABUF_REQ_ERROR\n");
+			}
+		}
+
+		ring->rsp_cons = i;
+
+		if (i != ring->req_prod_pvt)
+			RING_FINAL_CHECK_FOR_RESPONSES(ring, more_to_do);
+		else
+			ring->sring->rsp_event = i+1;
+
+	} while (more_to_do);
+
+	return IRQ_HANDLED;
+}
diff --git a/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm.h b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm.h
new file mode 100644
index 0000000..70a2b70
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm.h
@@ -0,0 +1,78 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_XEN_COMM_H__
+#define __HYPER_DMABUF_XEN_COMM_H__
+
+#include "xen/interface/io/ring.h"
+#include "xen/xenbus.h"
+#include "../hyper_dmabuf_msg.h"
+
+extern int xenstored_ready;
+
+DEFINE_RING_TYPES(xen_comm, struct hyper_dmabuf_req, struct hyper_dmabuf_resp);
+
+struct xen_comm_tx_ring_info {
+	struct xen_comm_front_ring ring_front;
+	int rdomain;
+	int gref_ring;
+	int irq;
+	int port;
+	struct mutex lock;
+	struct xenbus_watch watch;
+};
+
+struct xen_comm_rx_ring_info {
+	int sdomain;
+	int irq;
+	int evtchn;
+	struct xen_comm_back_ring ring_back;
+	struct gnttab_unmap_grant_ref unmap_op;
+};
+
+int xen_be_get_domid(void);
+
+int xen_be_init_comm_env(void);
+
+/* exporter needs to generate info for page sharing */
+int xen_be_init_tx_rbuf(int domid);
+
+/* importer needs to know about shared page and port numbers
+ * for ring buffer and event channel
+ */
+int xen_be_init_rx_rbuf(int domid);
+
+/* cleans up exporter ring created for given domain */
+void xen_be_cleanup_tx_rbuf(int domid);
+
+/* cleans up importer ring created for given domain */
+void xen_be_cleanup_rx_rbuf(int domid);
+
+void xen_be_destroy_comm(void);
+
+/* send request to the remote domain */
+int xen_be_send_req(int domid, struct hyper_dmabuf_req *req,
+		    int wait);
+
+#endif /* __HYPER_DMABUF_XEN_COMM_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm_list.c b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm_list.c
new file mode 100644
index 0000000..15023db
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm_list.c
@@ -0,0 +1,158 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/cdev.h>
+#include <linux/hashtable.h>
+#include <xen/grant_table.h>
+#include "../hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_xen_comm_list.h"
+
+DECLARE_HASHTABLE(xen_comm_tx_ring_hash, MAX_ENTRY_TX_RING);
+DECLARE_HASHTABLE(xen_comm_rx_ring_hash, MAX_ENTRY_RX_RING);
+
+void xen_comm_ring_table_init(void)
+{
+	hash_init(xen_comm_rx_ring_hash);
+	hash_init(xen_comm_tx_ring_hash);
+}
+
+int xen_comm_add_tx_ring(struct xen_comm_tx_ring_info *ring_info)
+{
+	struct xen_comm_tx_ring_info_entry *info_entry;
+
+	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
+
+	if (!info_entry)
+		return -ENOMEM;
+
+	info_entry->info = ring_info;
+
+	hash_add(xen_comm_tx_ring_hash, &info_entry->node,
+		info_entry->info->rdomain);
+
+	return 0;
+}
+
+int xen_comm_add_rx_ring(struct xen_comm_rx_ring_info *ring_info)
+{
+	struct xen_comm_rx_ring_info_entry *info_entry;
+
+	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
+
+	if (!info_entry)
+		return -ENOMEM;
+
+	info_entry->info = ring_info;
+
+	hash_add(xen_comm_rx_ring_hash, &info_entry->node,
+		info_entry->info->sdomain);
+
+	return 0;
+}
+
+struct xen_comm_tx_ring_info *xen_comm_find_tx_ring(int domid)
+{
+	struct xen_comm_tx_ring_info_entry *info_entry;
+	int bkt;
+
+	hash_for_each(xen_comm_tx_ring_hash, bkt, info_entry, node)
+		if (info_entry->info->rdomain == domid)
+			return info_entry->info;
+
+	return NULL;
+}
+
+struct xen_comm_rx_ring_info *xen_comm_find_rx_ring(int domid)
+{
+	struct xen_comm_rx_ring_info_entry *info_entry;
+	int bkt;
+
+	hash_for_each(xen_comm_rx_ring_hash, bkt, info_entry, node)
+		if (info_entry->info->sdomain == domid)
+			return info_entry->info;
+
+	return NULL;
+}
+
+int xen_comm_remove_tx_ring(int domid)
+{
+	struct xen_comm_tx_ring_info_entry *info_entry;
+	int bkt;
+
+	hash_for_each(xen_comm_tx_ring_hash, bkt, info_entry, node)
+		if (info_entry->info->rdomain == domid) {
+			hash_del(&info_entry->node);
+			kfree(info_entry);
+			return 0;
+		}
+
+	return -ENOENT;
+}
+
+int xen_comm_remove_rx_ring(int domid)
+{
+	struct xen_comm_rx_ring_info_entry *info_entry;
+	int bkt;
+
+	hash_for_each(xen_comm_rx_ring_hash, bkt, info_entry, node)
+		if (info_entry->info->sdomain == domid) {
+			hash_del(&info_entry->node);
+			kfree(info_entry);
+			return 0;
+		}
+
+	return -ENOENT;
+}
+
+void xen_comm_foreach_tx_ring(void (*func)(int domid))
+{
+	struct xen_comm_tx_ring_info_entry *info_entry;
+	struct hlist_node *tmp;
+	int bkt;
+
+	hash_for_each_safe(xen_comm_tx_ring_hash, bkt, tmp,
+			   info_entry, node) {
+		func(info_entry->info->rdomain);
+	}
+}
+
+void xen_comm_foreach_rx_ring(void (*func)(int domid))
+{
+	struct xen_comm_rx_ring_info_entry *info_entry;
+	struct hlist_node *tmp;
+	int bkt;
+
+	hash_for_each_safe(xen_comm_rx_ring_hash, bkt, tmp,
+			   info_entry, node) {
+		func(info_entry->info->sdomain);
+	}
+}
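+
+/* Usage sketch for the tables above: rings are keyed by the peer domain
+ * id, and the tx and rx directions are tracked independently:
+ *
+ *	xen_comm_add_tx_ring(tx_info);	// keyed by tx_info->rdomain
+ *	xen_comm_add_rx_ring(rx_info);	// keyed by rx_info->sdomain
+ *	tx = xen_comm_find_tx_ring(3);	// tx ring to domain 3, or NULL
+ *	rx = xen_comm_find_rx_ring(3);	// rx ring from domain 3, or NULL
+ */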
diff --git a/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm_list.h b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm_list.h
new file mode 100644
index 0000000..8502fe7
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_comm_list.h
@@ -0,0 +1,67 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_XEN_COMM_LIST_H__
+#define __HYPER_DMABUF_XEN_COMM_LIST_H__
+
+/* number of bits to be used for the tx (exporter) ring hash table */
+#define MAX_ENTRY_TX_RING 7
+/* number of bits to be used for the rx (importer) ring hash table */
+#define MAX_ENTRY_RX_RING 7
+
+struct xen_comm_tx_ring_info_entry {
+	struct xen_comm_tx_ring_info *info;
+	struct hlist_node node;
+};
+
+struct xen_comm_rx_ring_info_entry {
+	struct xen_comm_rx_ring_info *info;
+	struct hlist_node node;
+};
+
+void xen_comm_ring_table_init(void);
+
+int xen_comm_add_tx_ring(struct xen_comm_tx_ring_info *ring_info);
+
+int xen_comm_add_rx_ring(struct xen_comm_rx_ring_info *ring_info);
+
+int xen_comm_remove_tx_ring(int domid);
+
+int xen_comm_remove_rx_ring(int domid);
+
+struct xen_comm_tx_ring_info *xen_comm_find_tx_ring(int domid);
+
+struct xen_comm_rx_ring_info *xen_comm_find_rx_ring(int domid);
+
+/* iterates over all exporter rings and calls provided
+ * function for each of them
+ */
+void xen_comm_foreach_tx_ring(void (*func)(int domid));
+
+/* iterates over all importer rings and calls provided
+ * function for each of them
+ */
+void xen_comm_foreach_rx_ring(void (*func)(int domid));
+
+#endif /* __HYPER_DMABUF_XEN_COMM_LIST_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_drv.c b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_drv.c
new file mode 100644
index 0000000..14ed3bc
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_drv.c
@@ -0,0 +1,46 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include "../hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_xen_shm.h"
+
+struct hyper_dmabuf_bknd_ops xen_bknd_ops = {
+	.init = NULL, /* not needed for xen */
+	.cleanup = NULL, /* not needed for xen */
+	.get_vm_id = xen_be_get_domid,
+	.share_pages = xen_be_share_pages,
+	.unshare_pages = xen_be_unshare_pages,
+	.map_shared_pages = (void *)xen_be_map_shared_pages,
+	.unmap_shared_pages = xen_be_unmap_shared_pages,
+	.init_comm_env = xen_be_init_comm_env,
+	.destroy_comm = xen_be_destroy_comm,
+	.init_rx_ch = xen_be_init_rx_rbuf,
+	.init_tx_ch = xen_be_init_tx_rbuf,
+	.send_req = xen_be_send_req,
+};
diff --git a/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_drv.h b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_drv.h
new file mode 100644
index 0000000..a4902b7
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_drv.h
@@ -0,0 +1,53 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_XEN_DRV_H__
+#define __HYPER_DMABUF_XEN_DRV_H__
+#include <xen/interface/grant_table.h>
+
+extern struct hyper_dmabuf_bknd_ops xen_bknd_ops;
+
+/* This structure keeps all references created or acquired
+ * for sharing pages with another domain, so that they can
+ * be freed later when unsharing.
+ */
+struct xen_shared_pages_info {
+	/* top level refid */
+	grant_ref_t lvl3_gref;
+
+	/* page of top level addressing, it contains refids of 2nd lvl pages */
+	grant_ref_t *lvl3_table;
+
+	/* table of 2nd level pages, that contains refids to data pages */
+	grant_ref_t *lvl2_table;
+
+	/* unmap ops for mapped pages */
+	struct gnttab_unmap_grant_ref *unmap_ops;
+
+	/* data pages to be unmapped */
+	struct page **data_pages;
+};
+
+#endif /* __HYPER_DMABUF_XEN_DRV_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_shm.c b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_shm.c
new file mode 100644
index 0000000..c6a15f1
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_shm.c
@@ -0,0 +1,525 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/slab.h>
+#include <xen/grant_table.h>
+#include <asm/xen/page.h>
+#include "hyper_dmabuf_xen_drv.h"
+#include "../hyper_dmabuf_drv.h"
+
+#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
+
+/*
+ * Creates 2 level page directory structure for referencing shared pages.
+ * Top level page is a single page that contains up to 1024 refids that
+ * point to 2nd level pages.
+ *
+ * Each 2nd level page contains up to 1024 refids that point to shared
+ * data pages.
+ *
+ * There will always be one top level page; the number of 2nd level
+ * pages depends on the number of shared data pages.
+ *
+ *      3rd level page                2nd level pages            Data pages
+ * +-------------------------+   ┌>+--------------------+ ┌>+------------+
+ * |2nd level page 0 refid   |---┘ |Data page 0 refid   |-┘ |Data page 0 |
+ * |2nd level page 1 refid   |---┐ |Data page 1 refid   |-┐ +------------+
+ * |           ...           |   | |     ....           | |
+ * |2nd level page 1023 refid|-┐ | |Data page 1023 refid| └>+------------+
+ * +-------------------------+ | | +--------------------+   |Data page 1 |
+ *                             | |                          +------------+
+ *                             | └>+--------------------+
+ *                             |   |Data page 1024 refid|
+ *                             |   |Data page 1025 refid|
+ *                             |   |       ...          |
+ *                             |   |Data page 2047 refid|
+ *                             |   +--------------------+
+ *                             |
+ *                             |        .....
+ *                             └-->+-----------------------+
+ *                                 |Data page 1047552 refid|
+ *                                 |Data page 1047553 refid|
+ *                                 |       ...             |
+ *                                 |Data page 1048575 refid|
+ *                                 +-----------------------+
+ *
+ * Using such a 2-level structure it is possible to reference up to 4GB
+ * of shared data using a single refid pointing to the top level page.
+ *
+ * Returns refid of top level page.
+ */
+int xen_be_share_pages(struct page **pages, int domid, int nents,
+		       void **refs_info)
+{
+	grant_ref_t lvl3_gref;
+	grant_ref_t *lvl2_table;
+	grant_ref_t *lvl3_table;
+
+	/*
+	 * Calculate number of pages needed for 2nd level addressing:
+	 */
+	int n_lvl2_grefs = (nents/REFS_PER_PAGE +
+			   ((nents % REFS_PER_PAGE) ? 1 : 0));
+
+	struct xen_shared_pages_info *sh_pages_info;
+	int i;
+
+	lvl3_table = (grant_ref_t *)__get_free_pages(GFP_KERNEL, 1);
+	/* the second argument is an allocation order, not a page count */
+	lvl2_table = (grant_ref_t *)__get_free_pages(GFP_KERNEL,
+					get_order(n_lvl2_grefs * PAGE_SIZE));
+	sh_pages_info = kmalloc(sizeof(*sh_pages_info), GFP_KERNEL);
+
+	if (!lvl3_table || !lvl2_table || !sh_pages_info) {
+		kfree(sh_pages_info);
+		free_pages((unsigned long)lvl2_table,
+			   get_order(n_lvl2_grefs * PAGE_SIZE));
+		free_pages((unsigned long)lvl3_table, 1);
+		return -ENOMEM;
+	}
+
+	*refs_info = (void *)sh_pages_info;
+
+	/* share data pages in read-only mode for security */
+	for (i = 0; i < nents; i++) {
+		lvl2_table[i] = gnttab_grant_foreign_access(domid,
+					pfn_to_mfn(page_to_pfn(pages[i])),
+					true /* read only */);
+		if (lvl2_table[i] == -ENOSPC) {
+			dev_err(hy_drv_priv->dev,
+				"No more space left in grant table\n");
+
+			/* Unshare all already shared pages for lvl2 */
+			while (i--) {
+				gnttab_end_foreign_access_ref(lvl2_table[i], 0);
+				gnttab_free_grant_reference(lvl2_table[i]);
+			}
+			goto err_cleanup;
+		}
+	}
+
+	/* Share 2nd-level addressing pages in read-only mode */
+	for (i = 0; i < n_lvl2_grefs; i++) {
+		lvl3_table[i] = gnttab_grant_foreign_access(domid,
+					virt_to_mfn(
+					(unsigned long)lvl2_table+i*PAGE_SIZE),
+					true);
+
+		if (lvl3_table[i] == -ENOSPC) {
+			dev_err(hy_drv_priv->dev,
+				"No more space left in grant table\n");
+
+			/* Unshare all already shared pages for lvl3 */
+			while (i--) {
+				gnttab_end_foreign_access_ref(lvl3_table[i], 1);
+				gnttab_free_grant_reference(lvl3_table[i]);
+			}
+
+			/* Unshare all pages for lvl2 */
+			while (nents--) {
+				gnttab_end_foreign_access_ref(
+							lvl2_table[nents], 0);
+				gnttab_free_grant_reference(lvl2_table[nents]);
+			}
+
+			goto err_cleanup;
+		}
+	}
+
+	/* Share lvl3_table in read-only mode */
+	lvl3_gref = gnttab_grant_foreign_access(domid,
+			virt_to_mfn((unsigned long)lvl3_table),
+			true);
+
+	if (lvl3_gref == -ENOSPC) {
+		dev_err(hy_drv_priv->dev,
+			"No more space left in grant table\n");
+
+		/* Unshare all pages for lvl3 */
+		while (i--) {
+			gnttab_end_foreign_access_ref(lvl3_table[i], 1);
+			gnttab_free_grant_reference(lvl3_table[i]);
+		}
+
+		/* Unshare all pages for lvl2 */
+		while (nents--) {
+			gnttab_end_foreign_access_ref(lvl2_table[nents], 0);
+			gnttab_free_grant_reference(lvl2_table[nents]);
+		}
+
+		goto err_cleanup;
+	}
+
+	/* Store lvl3_table page to be freed later */
+	sh_pages_info->lvl3_table = lvl3_table;
+
+	/* Store lvl2_table pages to be freed later */
+	sh_pages_info->lvl2_table = lvl2_table;
+
+	/* Store the top-level refid of the exported pages to unshare later */
+	sh_pages_info->lvl3_gref = lvl3_gref;
+
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
+	return lvl3_gref;
+
+err_cleanup:
+	free_pages((unsigned long)lvl2_table,
+		   get_order(n_lvl2_grefs * PAGE_SIZE));
+	free_pages((unsigned long)lvl3_table, 1);
+
+	return -ENOSPC;
+}
+
+int xen_be_unshare_pages(void **refs_info, int nents)
+{
+	struct xen_shared_pages_info *sh_pages_info;
+	int n_lvl2_grefs = (nents/REFS_PER_PAGE +
+			    ((nents % REFS_PER_PAGE) ? 1 : 0));
+	int i;
+
+	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
+	sh_pages_info = (struct xen_shared_pages_info *)(*refs_info);
+
+	if (sh_pages_info->lvl3_table == NULL ||
+	    sh_pages_info->lvl2_table == NULL ||
+	    sh_pages_info->lvl3_gref == -1) {
+		dev_warn(hy_drv_priv->dev,
+			 "gref table for hyper_dmabuf already cleaned up\n");
+		return 0;
+	}
+
+	/* End foreign access for data pages, but do not free them */
+	for (i = 0; i < nents; i++) {
+		if (gnttab_query_foreign_access(sh_pages_info->lvl2_table[i]))
+			dev_warn(hy_drv_priv->dev, "refid not shared !!\n");
+
+		gnttab_end_foreign_access_ref(sh_pages_info->lvl2_table[i], 0);
+		gnttab_free_grant_reference(sh_pages_info->lvl2_table[i]);
+	}
+
+	/* End foreign access for 2nd level addressing pages */
+	for (i = 0; i < n_lvl2_grefs; i++) {
+		if (gnttab_query_foreign_access(sh_pages_info->lvl3_table[i]))
+			dev_warn(hy_drv_priv->dev, "refid not shared !!\n");
+
+		if (!gnttab_end_foreign_access_ref(
+					sh_pages_info->lvl3_table[i], 1))
+			dev_warn(hy_drv_priv->dev, "refid still in use!!!\n");
+
+		gnttab_free_grant_reference(sh_pages_info->lvl3_table[i]);
+	}
+
+	/* End foreign access for top level addressing page */
+	if (gnttab_query_foreign_access(sh_pages_info->lvl3_gref))
+		dev_warn(hy_drv_priv->dev, "gref not shared !!\n");
+
+	gnttab_end_foreign_access_ref(sh_pages_info->lvl3_gref, 1);
+	gnttab_free_grant_reference(sh_pages_info->lvl3_gref);
+
+	/* free all pages used for 2-level addressing */
+	free_pages((unsigned long)sh_pages_info->lvl2_table,
+		   get_order(n_lvl2_grefs * PAGE_SIZE));
+	free_pages((unsigned long)sh_pages_info->lvl3_table, 1);
+
+	sh_pages_info->lvl3_gref = -1;
+	sh_pages_info->lvl2_table = NULL;
+	sh_pages_info->lvl3_table = NULL;
+	kfree(sh_pages_info);
+	sh_pages_info = NULL;
+
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
+	return 0;
+}
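+
+/*
+ * Usage sketch (illustrative only, not part of the build): how an
+ * exporter-side caller might pair the two routines above.  'pages',
+ * 'importer_domid' and 'nents' are hypothetical placeholders for
+ * state owned by the caller.
+ *
+ *	void *refs_info;
+ *	int lvl3_gref;
+ *
+ *	lvl3_gref = xen_be_share_pages(pages, importer_domid, nents,
+ *				       &refs_info);
+ *	if (lvl3_gref < 0)
+ *		return lvl3_gref;
+ *
+ *	(send lvl3_gref to the importer over the comm channel)
+ *
+ *	(once the importer is done with the buffer)
+ *	xen_be_unshare_pages(&refs_info, nents);
+ */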
+
+/* Maps the provided top-level refid, walks the 2nd-level tables it
+ * references and returns the array of mapped data pages (or NULL on
+ * failure).
+ */
+struct page **xen_be_map_shared_pages(unsigned long lvl3_gref, int domid,
+				      int nents, void **refs_info)
+{
+	struct page *lvl3_table_page;
+	struct page **lvl2_table_pages;
+	struct page **data_pages;
+	struct xen_shared_pages_info *sh_pages_info;
+
+	grant_ref_t *lvl3_table;
+	grant_ref_t *lvl2_table;
+
+	struct gnttab_map_grant_ref lvl3_map_ops;
+	struct gnttab_unmap_grant_ref lvl3_unmap_ops;
+
+	struct gnttab_map_grant_ref *lvl2_map_ops;
+	struct gnttab_unmap_grant_ref *lvl2_unmap_ops;
+
+	struct gnttab_map_grant_ref *data_map_ops;
+	struct gnttab_unmap_grant_ref *data_unmap_ops;
+
+	/* # of grefs in the last page of the lvl2 table (1..REFS_PER_PAGE) */
+	int nents_last = (nents - 1) % REFS_PER_PAGE + 1;
+
+	/*
+	 * Total number of lvl2 pages, equivalent to
+	 * DIV_ROUND_UP(nents, REFS_PER_PAGE) -- see the worked example below.
+	 */
+	int n_lvl2_grefs = (nents / REFS_PER_PAGE) +
+			   ((nents_last > 0) ? 1 : 0) -
+			   (nents_last == REFS_PER_PAGE);
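+	/*
+	 * Worked example (illustrative): with 4 KB pages REFS_PER_PAGE is
+	 * 1024, so nents = 2047 yields nents_last = 1023 and
+	 * n_lvl2_grefs = 2, while nents = 2048 yields nents_last = 1024
+	 * and n_lvl2_grefs = 2 as well.
+	 */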
+	int i, j, k;
+
+	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
+
+	sh_pages_info = kmalloc(sizeof(*sh_pages_info), GFP_KERNEL);
+	*refs_info = (void *)sh_pages_info;
+
+	lvl2_table_pages = kcalloc(n_lvl2_grefs, sizeof(struct page *),
+				   GFP_KERNEL);
+
+	data_pages = kcalloc(nents, sizeof(struct page *), GFP_KERNEL);
+
+	lvl2_map_ops = kcalloc(n_lvl2_grefs, sizeof(*lvl2_map_ops),
+			       GFP_KERNEL);
+
+	lvl2_unmap_ops = kcalloc(n_lvl2_grefs, sizeof(*lvl2_unmap_ops),
+				 GFP_KERNEL);
+
+	data_map_ops = kcalloc(nents, sizeof(*data_map_ops), GFP_KERNEL);
+	data_unmap_ops = kcalloc(nents, sizeof(*data_unmap_ops), GFP_KERNEL);
+
+	/* bail out if any allocation failed; kfree(NULL) is a no-op */
+	if (!sh_pages_info || !lvl2_table_pages || !data_pages ||
+	    !lvl2_map_ops || !lvl2_unmap_ops ||
+	    !data_map_ops || !data_unmap_ops) {
+		kfree(sh_pages_info);
+		kfree(lvl2_table_pages);
+		kfree(data_pages);
+		kfree(lvl2_map_ops);
+		kfree(lvl2_unmap_ops);
+		kfree(data_map_ops);
+		kfree(data_unmap_ops);
+		*refs_info = NULL;
+		return NULL;
+	}
+
+	/* Map top level addressing page */
+	if (gnttab_alloc_pages(1, &lvl3_table_page)) {
+		dev_err(hy_drv_priv->dev, "Cannot allocate pages\n");
+		return NULL;
+	}
+
+	lvl3_table = (grant_ref_t *)pfn_to_kaddr(page_to_pfn(lvl3_table_page));
+
+	gnttab_set_map_op(&lvl3_map_ops, (unsigned long)lvl3_table,
+			  GNTMAP_host_map | GNTMAP_readonly,
+			  (grant_ref_t)lvl3_gref, domid);
+
+	gnttab_set_unmap_op(&lvl3_unmap_ops, (unsigned long)lvl3_table,
+			    GNTMAP_host_map | GNTMAP_readonly, -1);
+
+	if (gnttab_map_refs(&lvl3_map_ops, NULL, &lvl3_table_page, 1)) {
+		dev_err(hy_drv_priv->dev,
+			"HYPERVISOR map grant ref failed");
+		return NULL;
+	}
+
+	if (lvl3_map_ops.status) {
+		dev_err(hy_drv_priv->dev,
+			"HYPERVISOR map grant ref failed status = %d",
+			lvl3_map_ops.status);
+
+		goto error_cleanup_lvl3;
+	} else {
+		lvl3_unmap_ops.handle = lvl3_map_ops.handle;
+	}
+
+	/* Map all second level pages */
+	if (gnttab_alloc_pages(n_lvl2_grefs, lvl2_table_pages)) {
+		dev_err(hy_drv_priv->dev, "Cannot allocate pages\n");
+		goto error_cleanup_lvl3;
+	}
+
+	for (i = 0; i < n_lvl2_grefs; i++) {
+		lvl2_table = (grant_ref_t *)pfn_to_kaddr(
+					page_to_pfn(lvl2_table_pages[i]));
+		gnttab_set_map_op(&lvl2_map_ops[i],
+				  (unsigned long)lvl2_table, GNTMAP_host_map |
+				  GNTMAP_readonly,
+				  lvl3_table[i], domid);
+		gnttab_set_unmap_op(&lvl2_unmap_ops[i],
+				    (unsigned long)lvl2_table, GNTMAP_host_map |
+				    GNTMAP_readonly, -1);
+	}
+
+	/* Unmap top level page, as it won't be needed any longer */
+	if (gnttab_unmap_refs(&lvl3_unmap_ops, NULL,
+			      &lvl3_table_page, 1)) {
+		dev_err(hy_drv_priv->dev,
+			"xen: cannot unmap top level page\n");
+		return NULL;
+	}
+
+	/* Mark that page was unmapped */
+	lvl3_unmap_ops.handle = -1;
+
+	if (gnttab_map_refs(lvl2_map_ops, NULL,
+			    lvl2_table_pages, n_lvl2_grefs)) {
+		dev_err(hy_drv_priv->dev,
+			"HYPERVISOR map grant ref failed");
+		return NULL;
+	}
+
+	/* Check that the lvl2 pages were mapped correctly */
+	for (i = 0; i < n_lvl2_grefs; i++) {
+		if (lvl2_map_ops[i].status) {
+			dev_err(hy_drv_priv->dev,
+				"HYPERVISOR map grant ref failed status = %d",
+				lvl2_map_ops[i].status);
+			goto error_cleanup_lvl2;
+		} else {
+			lvl2_unmap_ops[i].handle = lvl2_map_ops[i].handle;
+		}
+	}
+
+	if (gnttab_alloc_pages(nents, data_pages)) {
+		dev_err(hy_drv_priv->dev,
+			"Cannot allocate pages\n");
+		goto error_cleanup_lvl2;
+	}
+
+	k = 0;
+
+	for (i = 0; i < n_lvl2_grefs - 1; i++) {
+		lvl2_table = pfn_to_kaddr(page_to_pfn(lvl2_table_pages[i]));
+		for (j = 0; j < REFS_PER_PAGE; j++) {
+			gnttab_set_map_op(&data_map_ops[k],
+				(unsigned long)pfn_to_kaddr(
+						page_to_pfn(data_pages[k])),
+				GNTMAP_host_map | GNTMAP_readonly,
+				lvl2_table[j], domid);
+
+			gnttab_set_unmap_op(&data_unmap_ops[k],
+				(unsigned long)pfn_to_kaddr(
+						page_to_pfn(data_pages[k])),
+				GNTMAP_host_map | GNTMAP_readonly, -1);
+			k++;
+		}
+	}
+
+	/* for grefs in the last lvl2 table page */
+	lvl2_table = pfn_to_kaddr(page_to_pfn(
+				lvl2_table_pages[n_lvl2_grefs - 1]));
+
+	for (j = 0; j < nents_last; j++) {
+		gnttab_set_map_op(&data_map_ops[k],
+			(unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
+			GNTMAP_host_map | GNTMAP_readonly,
+			lvl2_table[j], domid);
+
+		gnttab_set_unmap_op(&data_unmap_ops[k],
+			(unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
+			GNTMAP_host_map | GNTMAP_readonly, -1);
+		k++;
+	}
+
+	if (gnttab_map_refs(data_map_ops, NULL,
+			    data_pages, nents)) {
+		dev_err(hy_drv_priv->dev,
+			"HYPERVISOR map grant ref failed\n");
+		return NULL;
+	}
+
+	/* unmapping lvl2 table pages */
+	if (gnttab_unmap_refs(lvl2_unmap_ops,
+			      NULL, lvl2_table_pages,
+			      n_lvl2_grefs)) {
+		dev_err(hy_drv_priv->dev,
+			"Cannot unmap 2nd level refs\n");
+		return NULL;
+	}
+
+	/* Mark that pages were unmapped */
+	for (i = 0; i < n_lvl2_grefs; i++)
+		lvl2_unmap_ops[i].handle = -1;
+
+	for (i = 0; i < nents; i++) {
+		if (data_map_ops[i].status) {
+			dev_err(hy_drv_priv->dev,
+				"HYPERVISOR map grant ref failed status = %d\n",
+				data_map_ops[i].status);
+			goto error_cleanup_data;
+		} else {
+			data_unmap_ops[i].handle = data_map_ops[i].handle;
+		}
+	}
+
+	/* store these references for unmapping in the future */
+	sh_pages_info->unmap_ops = data_unmap_ops;
+	sh_pages_info->data_pages = data_pages;
+
+	gnttab_free_pages(1, &lvl3_table_page);
+	gnttab_free_pages(n_lvl2_grefs, lvl2_table_pages);
+	kfree(lvl2_table_pages);
+	kfree(lvl2_map_ops);
+	kfree(lvl2_unmap_ops);
+	kfree(data_map_ops);
+
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
+	return data_pages;
+
+error_cleanup_data:
+	gnttab_unmap_refs(data_unmap_ops, NULL, data_pages,
+			  nents);
+
+	gnttab_free_pages(nents, data_pages);
+
+error_cleanup_lvl2:
+	if (lvl2_unmap_ops[0].handle != -1)
+		gnttab_unmap_refs(lvl2_unmap_ops, NULL,
+				  lvl2_table_pages, n_lvl2_grefs);
+	gnttab_free_pages(n_lvl2_grefs, lvl2_table_pages);
+
+error_cleanup_lvl3:
+	if (lvl3_unmap_ops.handle != -1)
+		gnttab_unmap_refs(&lvl3_unmap_ops, NULL,
+				  &lvl3_table_page, 1);
+	gnttab_free_pages(1, &lvl3_table_page);
+
+	kfree(lvl2_table_pages);
+	kfree(lvl2_map_ops);
+	kfree(lvl2_unmap_ops);
+	kfree(data_map_ops);
+
+	return NULL;
+}
+
+int xen_be_unmap_shared_pages(void **refs_info, int nents)
+{
+	struct xen_shared_pages_info *sh_pages_info;
+
+	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
+
+	sh_pages_info = (struct xen_shared_pages_info *)(*refs_info);
+
+	if (sh_pages_info->unmap_ops == NULL ||
+	    sh_pages_info->data_pages == NULL) {
+		dev_warn(hy_drv_priv->dev,
+			 "pages already cleaned up or buffer not imported yet\n");
+		return 0;
+	}
+
+	if (gnttab_unmap_refs(sh_pages_info->unmap_ops, NULL,
+			      sh_pages_info->data_pages, nents)) {
+		dev_err(hy_drv_priv->dev, "Cannot unmap data pages\n");
+		return -EFAULT;
+	}
+
+	gnttab_free_pages(nents, sh_pages_info->data_pages);
+
+	kfree(sh_pages_info->data_pages);
+	kfree(sh_pages_info->unmap_ops);
+	sh_pages_info->unmap_ops = NULL;
+	sh_pages_info->data_pages = NULL;
+	kfree(sh_pages_info);
+	sh_pages_info = NULL;
+
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
+	return 0;
+}
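+
+/*
+ * Usage sketch (illustrative only, not part of the build): how an
+ * importer-side caller might pair the two routines above, given the
+ * lvl3_gref and nents received from the exporter (both placeholders
+ * here).
+ *
+ *	void *refs_info;
+ *	struct page **pages;
+ *
+ *	pages = xen_be_map_shared_pages(lvl3_gref, exporter_domid,
+ *					nents, &refs_info);
+ *	if (!pages)
+ *		return -EFAULT;
+ *
+ *	(build the sg_table for the dma-buf from 'pages')
+ *
+ *	xen_be_unmap_shared_pages(&refs_info, nents);
+ */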
diff --git a/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_shm.h b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_shm.h
new file mode 100644
index 0000000..d5236b5
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/xen-backend/hyper_dmabuf_xen_shm.h
@@ -0,0 +1,46 @@
+/*
+ * Copyright © 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_XEN_SHM_H__
+#define __HYPER_DMABUF_XEN_SHM_H__
+
+/* Collects the refids of all 2nd-level shared pages, writes them into
+ * a table in the top-level shared page, and returns the refid of that
+ * top-level table.
+ */
+int xen_be_share_pages(struct page **pages, int domid, int nents,
+		       void **refs_info);
+
+int xen_be_unshare_pages(void **refs_info, int nents);
+
+/* Maps the provided top-level refid and returns the array of mapped
+ * data pages.
+ */
+struct page **xen_be_map_shared_pages(unsigned long lvl3_gref, int domid,
+				      int nents,
+				      void **refs_info);
+
+int xen_be_unmap_shared_pages(void **refs_info, int nents);
+
+#endif /* __HYPER_DMABUF_XEN_SHM_H__ */
diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index b59b0e3..6aa302d 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -321,6 +321,6 @@ config XEN_SYMS
 config XEN_HAVE_VPMU
        bool
 
-source "drivers/xen/hyper_dmabuf/Kconfig"
+source "drivers/dma-buf/hyper_dmabuf/Kconfig"
 
 endmenu
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index a6e253a..ede7082 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -4,7 +4,7 @@ obj-$(CONFIG_X86)			+= fallback.o
 obj-y	+= grant-table.o features.o balloon.o manage.o preempt.o time.o
 obj-y	+= events/
 obj-y	+= xenbus/
-obj-y	+= hyper_dmabuf/
+obj-y	+= ../dma-buf/hyper_dmabuf/
 
 nostackp := $(call cc-option, -fno-stack-protector)
 CFLAGS_features.o			:= $(nostackp)
diff --git a/drivers/xen/hyper_dmabuf/Kconfig b/drivers/xen/hyper_dmabuf/Kconfig
deleted file mode 100644
index 5efcd44..0000000
--- a/drivers/xen/hyper_dmabuf/Kconfig
+++ /dev/null
@@ -1,42 +0,0 @@
-menu "hyper_dmabuf options"
-
-config HYPER_DMABUF
-	tristate "Enables hyper dmabuf driver"
-	default y
-
-config HYPER_DMABUF_XEN
-	bool "Configure hyper_dmabuf for XEN hypervisor"
-	default y
-	depends on HYPER_DMABUF
-	help
-	  Configuring hyper_dmabuf driver for XEN hypervisor
-
-config HYPER_DMABUF_SYSFS
-	bool "Enable sysfs information about hyper DMA buffers"
-	default y
-	depends on HYPER_DMABUF
-	help
-	  Expose information about imported and exported buffers using
-	  hyper_dmabuf driver
-
-config HYPER_DMABUF_EVENT_GEN
-	bool "Enable event-generation and polling operation"
-	default n
-	depends on HYPER_DMABUF
-	help
-	  With this config enabled, hyper_dmabuf driver on the importer side
-	  generates events and queue those up in the event list whenever a new
-	  shared DMA-BUF is available. Events in the list can be retrieved by
-	  read operation.
-
-config HYPER_DMABUF_XEN_AUTO_RX_CH_ADD
-	bool "Enable automatic rx-ch add with 10 secs interval"
-	default y
-	depends on HYPER_DMABUF && HYPER_DMABUF_XEN
-	help
-	  If enabled, driver reads a node in xenstore every 10 seconds
-	  to check whether there is any tx comm ch configured by another
-	  domain then initialize matched rx comm ch automatically for any
-	  existing tx comm chs.
-
-endmenu
diff --git a/drivers/xen/hyper_dmabuf/Makefile b/drivers/xen/hyper_dmabuf/Makefile
deleted file mode 100644
index a113bfc..0000000
--- a/drivers/xen/hyper_dmabuf/Makefile
+++ /dev/null
@@ -1,49 +0,0 @@
-TARGET_MODULE:=hyper_dmabuf
-
-PLATFORM:=XEN
-
-# If we running by kernel building system
-ifneq ($(KERNELRELEASE),)
-	$(TARGET_MODULE)-objs := hyper_dmabuf_drv.o \
-                                 hyper_dmabuf_ioctl.o \
-                                 hyper_dmabuf_list.o \
-				 hyper_dmabuf_sgl_proc.o \
-				 hyper_dmabuf_ops.o \
-				 hyper_dmabuf_msg.o \
-				 hyper_dmabuf_id.o \
-				 hyper_dmabuf_remote_sync.o \
-				 hyper_dmabuf_query.o \
-
-ifeq ($(CONFIG_HYPER_DMABUF_EVENT_GEN), y)
-	$(TARGET_MODULE)-objs += hyper_dmabuf_event.o
-endif
-
-ifeq ($(CONFIG_HYPER_DMABUF_XEN), y)
-	$(TARGET_MODULE)-objs += xen/hyper_dmabuf_xen_comm.o \
-				 xen/hyper_dmabuf_xen_comm_list.o \
-				 xen/hyper_dmabuf_xen_shm.o \
-				 xen/hyper_dmabuf_xen_drv.o
-endif
-
-obj-$(CONFIG_HYPER_DMABUF) := $(TARGET_MODULE).o
-
-# If we are running without kernel build system
-else
-BUILDSYSTEM_DIR?=../../../
-PWD:=$(shell pwd)
-
-all :
-# run kernel build system to make module
-$(MAKE) -C $(BUILDSYSTEM_DIR) M=$(PWD) modules
-
-clean:
-# run kernel build system to cleanup in current directory
-$(MAKE) -C $(BUILDSYSTEM_DIR) M=$(PWD) clean
-
-load:
-	insmod ./$(TARGET_MODULE).ko
-
-unload:
-	rmmod ./$(TARGET_MODULE).ko
-
-endif
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
deleted file mode 100644
index eead4c0..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
+++ /dev/null
@@ -1,408 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/init.h>
-#include <linux/module.h>
-#include <linux/miscdevice.h>
-#include <linux/workqueue.h>
-#include <linux/slab.h>
-#include <linux/device.h>
-#include <linux/uaccess.h>
-#include <linux/poll.h>
-#include <linux/dma-buf.h>
-#include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_ioctl.h"
-#include "hyper_dmabuf_list.h"
-#include "hyper_dmabuf_id.h"
-#include "hyper_dmabuf_event.h"
-
-#ifdef CONFIG_HYPER_DMABUF_XEN
-#include "xen/hyper_dmabuf_xen_drv.h"
-#endif
-
-MODULE_LICENSE("GPL and additional rights");
-MODULE_AUTHOR("Intel Corporation");
-
-struct hyper_dmabuf_private *hy_drv_priv;
-
-static void force_free(struct exported_sgt_info *exported,
-		       void *attr)
-{
-	struct ioctl_hyper_dmabuf_unexport unexport_attr;
-	struct file *filp = (struct file *)attr;
-
-	if (!filp || !exported)
-		return;
-
-	if (exported->filp == filp) {
-		dev_dbg(hy_drv_priv->dev,
-			"Forcefully releasing buffer {id:%d key:%d %d %d}\n",
-			 exported->hid.id, exported->hid.rng_key[0],
-			 exported->hid.rng_key[1], exported->hid.rng_key[2]);
-
-		unexport_attr.hid = exported->hid;
-		unexport_attr.delay_ms = 0;
-
-		hyper_dmabuf_unexport_ioctl(filp, &unexport_attr);
-	}
-}
-
-static int hyper_dmabuf_open(struct inode *inode, struct file *filp)
-{
-	int ret = 0;
-
-	/* Do not allow exclusive open */
-	if (filp->f_flags & O_EXCL)
-		return -EBUSY;
-
-	return ret;
-}
-
-static int hyper_dmabuf_release(struct inode *inode, struct file *filp)
-{
-	hyper_dmabuf_foreach_exported(force_free, filp);
-
-	return 0;
-}
-
-#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
-
-static unsigned int hyper_dmabuf_event_poll(struct file *filp,
-				     struct poll_table_struct *wait)
-{
-	poll_wait(filp, &hy_drv_priv->event_wait, wait);
-
-	if (!list_empty(&hy_drv_priv->event_list))
-		return POLLIN | POLLRDNORM;
-
-	return 0;
-}
-
-static ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
-		size_t count, loff_t *offset)
-{
-	int ret;
-
-	/* only root can read events */
-	if (!capable(CAP_DAC_OVERRIDE)) {
-		dev_err(hy_drv_priv->dev,
-			"Only root can read events\n");
-		return -EPERM;
-	}
-
-	/* make sure user buffer can be written */
-	if (!access_ok(VERIFY_WRITE, buffer, count)) {
-		dev_err(hy_drv_priv->dev,
-			"User buffer can't be written.\n");
-		return -EINVAL;
-	}
-
-	ret = mutex_lock_interruptible(&hy_drv_priv->event_read_lock);
-	if (ret)
-		return ret;
-
-	while (1) {
-		struct hyper_dmabuf_event *e = NULL;
-
-		spin_lock_irq(&hy_drv_priv->event_lock);
-		if (!list_empty(&hy_drv_priv->event_list)) {
-			e = list_first_entry(&hy_drv_priv->event_list,
-					struct hyper_dmabuf_event, link);
-			list_del(&e->link);
-		}
-		spin_unlock_irq(&hy_drv_priv->event_lock);
-
-		if (!e) {
-			if (ret)
-				break;
-
-			if (filp->f_flags & O_NONBLOCK) {
-				ret = -EAGAIN;
-				break;
-			}
-
-			mutex_unlock(&hy_drv_priv->event_read_lock);
-			ret = wait_event_interruptible(hy_drv_priv->event_wait,
-				  !list_empty(&hy_drv_priv->event_list));
-
-			if (ret == 0)
-				ret = mutex_lock_interruptible(
-					&hy_drv_priv->event_read_lock);
-
-			if (ret)
-				return ret;
-		} else {
-			unsigned int length = (sizeof(e->event_data.hdr) +
-						      e->event_data.hdr.size);
-
-			if (length > count - ret) {
-put_back_event:
-				spin_lock_irq(&hy_drv_priv->event_lock);
-				list_add(&e->link, &hy_drv_priv->event_list);
-				spin_unlock_irq(&hy_drv_priv->event_lock);
-				break;
-			}
-
-			if (copy_to_user(buffer + ret, &e->event_data.hdr,
-					 sizeof(e->event_data.hdr))) {
-				if (ret == 0)
-					ret = -EFAULT;
-
-				goto put_back_event;
-			}
-
-			ret += sizeof(e->event_data.hdr);
-
-			if (copy_to_user(buffer + ret, e->event_data.data,
-					 e->event_data.hdr.size)) {
-				/* error while copying void *data */
-
-				struct hyper_dmabuf_event_hdr dummy_hdr = {0};
-
-				ret -= sizeof(e->event_data.hdr);
-
-				/* nullifying hdr of the event in user buffer */
-				if (copy_to_user(buffer + ret, &dummy_hdr,
-						 sizeof(dummy_hdr))) {
-					dev_err(hy_drv_priv->dev,
-						"failed to nullify invalid hdr already in userspace\n");
-				}
-
-				ret = -EFAULT;
-
-				goto put_back_event;
-			}
-
-			ret += e->event_data.hdr.size;
-			hy_drv_priv->pending--;
-			kfree(e);
-		}
-	}
-
-	mutex_unlock(&hy_drv_priv->event_read_lock);
-
-	return ret;
-}
-
-#endif
-
-static const struct file_operations hyper_dmabuf_driver_fops = {
-	.owner = THIS_MODULE,
-	.open = hyper_dmabuf_open,
-	.release = hyper_dmabuf_release,
-
-/* poll and read interfaces are needed only for event-polling */
-#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
-	.read = hyper_dmabuf_event_read,
-	.poll = hyper_dmabuf_event_poll,
-#endif
-
-	.unlocked_ioctl = hyper_dmabuf_ioctl,
-};
-
-static struct miscdevice hyper_dmabuf_miscdev = {
-	.minor = MISC_DYNAMIC_MINOR,
-	.name = "hyper_dmabuf",
-	.fops = &hyper_dmabuf_driver_fops,
-};
-
-static int register_device(void)
-{
-	int ret = 0;
-
-	ret = misc_register(&hyper_dmabuf_miscdev);
-
-	if (ret) {
-		printk(KERN_ERR "hyper_dmabuf: driver can't be registered\n");
-		return ret;
-	}
-
-	hy_drv_priv->dev = hyper_dmabuf_miscdev.this_device;
-
-	/* TODO: Check if there is a different way to initialize dma mask */
-	dma_coerce_mask_and_coherent(hy_drv_priv->dev, DMA_BIT_MASK(64));
-
-	return ret;
-}
-
-static void unregister_device(void)
-{
-	dev_info(hy_drv_priv->dev,
-		"hyper_dmabuf: unregister_device() is called\n");
-
-	misc_deregister(&hyper_dmabuf_miscdev);
-}
-
-static int __init hyper_dmabuf_drv_init(void)
-{
-	int ret = 0;
-
-	printk(KERN_NOTICE "hyper_dmabuf_starting: Initialization started\n");
-
-	hy_drv_priv = kcalloc(1, sizeof(struct hyper_dmabuf_private),
-			      GFP_KERNEL);
-
-	if (!hy_drv_priv)
-		return -ENOMEM;
-
-	ret = register_device();
-	if (ret < 0) {
-		kfree(hy_drv_priv);
-		return ret;
-	}
-
-/* currently only supports XEN hypervisor */
-#ifdef CONFIG_HYPER_DMABUF_XEN
-	hy_drv_priv->bknd_ops = &xen_bknd_ops;
-#else
-	hy_drv_priv->bknd_ops = NULL;
-	printk(KERN_ERR "hyper_dmabuf drv currently supports XEN only.\n");
-#endif
-
-	if (hy_drv_priv->bknd_ops == NULL) {
-		printk(KERN_ERR "Hyper_dmabuf: no backend found\n");
-		kfree(hy_drv_priv);
-		return -1;
-	}
-
-	mutex_init(&hy_drv_priv->lock);
-
-	mutex_lock(&hy_drv_priv->lock);
-
-	hy_drv_priv->initialized = false;
-
-	dev_info(hy_drv_priv->dev,
-		 "initializing database for imported/exported dmabufs\n");
-
-	hy_drv_priv->work_queue = create_workqueue("hyper_dmabuf_wqueue");
-
-	ret = hyper_dmabuf_table_init();
-	if (ret < 0) {
-		dev_err(hy_drv_priv->dev,
-			"fail to init table for exported/imported entries\n");
-		mutex_unlock(&hy_drv_priv->lock);
-		kfree(hy_drv_priv);
-		return ret;
-	}
-
-#ifdef CONFIG_HYPER_DMABUF_SYSFS
-	ret = hyper_dmabuf_register_sysfs(hy_drv_priv->dev);
-	if (ret < 0) {
-		dev_err(hy_drv_priv->dev,
-			"failed to initialize sysfs\n");
-		mutex_unlock(&hy_drv_priv->lock);
-		kfree(hy_drv_priv);
-		return ret;
-	}
-#endif
-
-#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
-	mutex_init(&hy_drv_priv->event_read_lock);
-	spin_lock_init(&hy_drv_priv->event_lock);
-
-	/* Initialize event queue */
-	INIT_LIST_HEAD(&hy_drv_priv->event_list);
-	init_waitqueue_head(&hy_drv_priv->event_wait);
-
-	/* resetting number of pending events */
-	hy_drv_priv->pending = 0;
-#endif
-
-	if (hy_drv_priv->bknd_ops->init) {
-		ret = hy_drv_priv->bknd_ops->init();
-
-		if (ret < 0) {
-			dev_dbg(hy_drv_priv->dev,
-				"failed to initialize backend.\n");
-			mutex_unlock(&hy_drv_priv->lock);
-			kfree(hy_drv_priv);
-			return ret;
-		}
-	}
-
-	hy_drv_priv->domid = hy_drv_priv->bknd_ops->get_vm_id();
-
-	ret = hy_drv_priv->bknd_ops->init_comm_env();
-	if (ret < 0) {
-		dev_dbg(hy_drv_priv->dev,
-			"failed to initialize comm-env.\n");
-	} else {
-		hy_drv_priv->initialized = true;
-	}
-
-	mutex_unlock(&hy_drv_priv->lock);
-
-	dev_info(hy_drv_priv->dev,
-		"Finishing up initialization of hyper_dmabuf drv\n");
-
-	/* interrupt for comm should be registered here: */
-	return ret;
-}
-
-static void hyper_dmabuf_drv_exit(void)
-{
-#ifdef CONFIG_HYPER_DMABUF_SYSFS
-	hyper_dmabuf_unregister_sysfs(hy_drv_priv->dev);
-#endif
-
-	mutex_lock(&hy_drv_priv->lock);
-
-	/* hash tables for export/import entries and ring_infos */
-	hyper_dmabuf_table_destroy();
-
-	hy_drv_priv->bknd_ops->destroy_comm();
-
-	if (hy_drv_priv->bknd_ops->cleanup) {
-		hy_drv_priv->bknd_ops->cleanup();
-	};
-
-	/* destroy workqueue */
-	if (hy_drv_priv->work_queue)
-		destroy_workqueue(hy_drv_priv->work_queue);
-
-	/* destroy id_queue */
-	if (hy_drv_priv->id_queue)
-		hyper_dmabuf_free_hid_list();
-
-#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
-	/* clean up event queue */
-	hyper_dmabuf_events_release();
-#endif
-
-	mutex_unlock(&hy_drv_priv->lock);
-
-	dev_info(hy_drv_priv->dev,
-		 "hyper_dmabuf driver: Exiting\n");
-
-	kfree(hy_drv_priv);
-
-	unregister_device();
-}
-
-module_init(hyper_dmabuf_drv_init);
-module_exit(hyper_dmabuf_drv_exit);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
deleted file mode 100644
index c2bb3ce..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
+++ /dev/null
@@ -1,118 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
-#define __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
-
-#include <linux/device.h>
-#include <xen/hyper_dmabuf.h>
-
-struct hyper_dmabuf_req;
-
-struct hyper_dmabuf_event {
-	struct hyper_dmabuf_event_data event_data;
-	struct list_head link;
-};
-
-struct hyper_dmabuf_private {
-	struct device *dev;
-
-	/* VM(domain) id of current VM instance */
-	int domid;
-
-	/* workqueue dedicated to hyper_dmabuf driver */
-	struct workqueue_struct *work_queue;
-
-	/* list of reusable hyper_dmabuf_ids */
-	struct list_reusable_id *id_queue;
-
-	/* backend ops - hypervisor specific */
-	struct hyper_dmabuf_bknd_ops *bknd_ops;
-
-	/* device global lock */
-	/* TODO: might need a lock per resource (e.g. EXPORT LIST) */
-	struct mutex lock;
-
-	/* flag that shows whether backend is initialized */
-	bool initialized;
-
-	wait_queue_head_t event_wait;
-	struct list_head event_list;
-
-	spinlock_t event_lock;
-	struct mutex event_read_lock;
-
-	/* # of pending events */
-	int pending;
-};
-
-struct list_reusable_id {
-	hyper_dmabuf_id_t hid;
-	struct list_head list;
-};
-
-struct hyper_dmabuf_bknd_ops {
-	/* backend initialization routine (optional) */
-	int (*init)(void);
-
-	/* backend cleanup routine (optional) */
-	int (*cleanup)(void);
-
-	/* retreiving id of current virtual machine */
-	int (*get_vm_id)(void);
-
-	/* get pages shared via hypervisor-specific method */
-	int (*share_pages)(struct page **, int, int, void **);
-
-	/* make shared pages unshared via hypervisor specific method */
-	int (*unshare_pages)(void **, int);
-
-	/* map remotely shared pages on importer's side via
-	 * hypervisor-specific method
-	 */
-	struct page ** (*map_shared_pages)(unsigned long, int, int, void **);
-
-	/* unmap and free shared pages on importer's side via
-	 * hypervisor-specific method
-	 */
-	int (*unmap_shared_pages)(void **, int);
-
-	/* initialize communication environment */
-	int (*init_comm_env)(void);
-
-	void (*destroy_comm)(void);
-
-	/* upstream ch setup (receiving and responding) */
-	int (*init_rx_ch)(int);
-
-	/* downstream ch setup (transmitting and parsing responses) */
-	int (*init_tx_ch)(int);
-
-	int (*send_req)(int, struct hyper_dmabuf_req *, int);
-};
-
-/* exporting global drv private info */
-extern struct hyper_dmabuf_private *hy_drv_priv;
-
-#endif /* __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
deleted file mode 100644
index 392ea99..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.c
+++ /dev/null
@@ -1,122 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/slab.h>
-#include <linux/module.h>
-#include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_struct.h"
-#include "hyper_dmabuf_list.h"
-#include "hyper_dmabuf_event.h"
-
-static void send_event(struct hyper_dmabuf_event *e)
-{
-	struct hyper_dmabuf_event *oldest;
-	unsigned long irqflags;
-
-	spin_lock_irqsave(&hy_drv_priv->event_lock, irqflags);
-
-	/* check current number of event then if it hits the max num allowed
-	 * then remove the oldest event in the list
-	 */
-	if (hy_drv_priv->pending > MAX_DEPTH_EVENT_QUEUE - 1) {
-		oldest = list_first_entry(&hy_drv_priv->event_list,
-				struct hyper_dmabuf_event, link);
-		list_del(&oldest->link);
-		hy_drv_priv->pending--;
-		kfree(oldest);
-	}
-
-	list_add_tail(&e->link,
-		      &hy_drv_priv->event_list);
-
-	hy_drv_priv->pending++;
-
-	wake_up_interruptible(&hy_drv_priv->event_wait);
-
-	spin_unlock_irqrestore(&hy_drv_priv->event_lock, irqflags);
-}
-
-void hyper_dmabuf_events_release(void)
-{
-	struct hyper_dmabuf_event *e, *et;
-	unsigned long irqflags;
-
-	spin_lock_irqsave(&hy_drv_priv->event_lock, irqflags);
-
-	list_for_each_entry_safe(e, et, &hy_drv_priv->event_list,
-				 link) {
-		list_del(&e->link);
-		kfree(e);
-		hy_drv_priv->pending--;
-	}
-
-	if (hy_drv_priv->pending) {
-		dev_err(hy_drv_priv->dev,
-			"possible leak on event_list\n");
-	}
-
-	spin_unlock_irqrestore(&hy_drv_priv->event_lock, irqflags);
-}
-
-int hyper_dmabuf_import_event(hyper_dmabuf_id_t hid)
-{
-	struct hyper_dmabuf_event *e;
-	struct imported_sgt_info *imported;
-
-	imported = hyper_dmabuf_find_imported(hid);
-
-	if (!imported) {
-		dev_err(hy_drv_priv->dev,
-			"can't find imported_sgt_info in the list\n");
-		return -EINVAL;
-	}
-
-	e = kzalloc(sizeof(*e), GFP_KERNEL);
-
-	if (!e)
-		return -ENOMEM;
-
-	e->event_data.hdr.event_type = HYPER_DMABUF_NEW_IMPORT;
-	e->event_data.hdr.hid = hid;
-	e->event_data.data = (void *)imported->priv;
-	e->event_data.hdr.size = imported->sz_priv;
-
-	send_event(e);
-
-	dev_dbg(hy_drv_priv->dev,
-		"event number = %d :", hy_drv_priv->pending);
-
-	dev_dbg(hy_drv_priv->dev,
-		"generating events for {%d, %d, %d, %d}\n",
-		imported->hid.id, imported->hid.rng_key[0],
-		imported->hid.rng_key[1], imported->hid.rng_key[2]);
-
-	return 0;
-}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.h
deleted file mode 100644
index 50db04f..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_event.h
+++ /dev/null
@@ -1,38 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_EVENT_H__
-#define __HYPER_DMABUF_EVENT_H__
-
-#define MAX_DEPTH_EVENT_QUEUE 32
-
-enum hyper_dmabuf_event_type {
-	HYPER_DMABUF_NEW_IMPORT = 0x10000,
-};
-
-void hyper_dmabuf_events_release(void);
-
-int hyper_dmabuf_import_event(hyper_dmabuf_id_t hid);
-
-#endif /* __HYPER_DMABUF_EVENT_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
deleted file mode 100644
index e67b84a..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.c
+++ /dev/null
@@ -1,133 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/list.h>
-#include <linux/slab.h>
-#include <linux/random.h>
-#include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_id.h"
-
-void hyper_dmabuf_store_hid(hyper_dmabuf_id_t hid)
-{
-	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
-	struct list_reusable_id *new_reusable;
-
-	new_reusable = kmalloc(sizeof(*new_reusable), GFP_KERNEL);
-
-	if (!new_reusable)
-		return;
-
-	new_reusable->hid = hid;
-
-	list_add(&new_reusable->list, &reusable_head->list);
-}
-
-static hyper_dmabuf_id_t get_reusable_hid(void)
-{
-	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
-	hyper_dmabuf_id_t hid = {-1, {0, 0, 0} };
-
-	/* check there is reusable id */
-	if (!list_empty(&reusable_head->list)) {
-		reusable_head = list_first_entry(&reusable_head->list,
-						 struct list_reusable_id,
-						 list);
-
-		list_del(&reusable_head->list);
-		hid = reusable_head->hid;
-		kfree(reusable_head);
-	}
-
-	return hid;
-}
-
-void hyper_dmabuf_free_hid_list(void)
-{
-	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
-	struct list_reusable_id *temp_head;
-
-	if (reusable_head) {
-		/* freeing mem space all reusable ids in the stack */
-		while (!list_empty(&reusable_head->list)) {
-			temp_head = list_first_entry(&reusable_head->list,
-						     struct list_reusable_id,
-						     list);
-			list_del(&temp_head->list);
-			kfree(temp_head);
-		}
-
-		/* freeing head */
-		kfree(reusable_head);
-	}
-}
-
-hyper_dmabuf_id_t hyper_dmabuf_get_hid(void)
-{
-	static int count;
-	hyper_dmabuf_id_t hid;
-	struct list_reusable_id *reusable_head;
-
-	/* first call to hyper_dmabuf_get_id */
-	if (count == 0) {
-		reusable_head = kmalloc(sizeof(*reusable_head), GFP_KERNEL);
-
-		if (!reusable_head)
-			return (hyper_dmabuf_id_t){-1, {0, 0, 0} };
-
-		/* list head has an invalid count */
-		reusable_head->hid.id = -1;
-		INIT_LIST_HEAD(&reusable_head->list);
-		hy_drv_priv->id_queue = reusable_head;
-	}
-
-	hid = get_reusable_hid();
-
-	/*creating a new H-ID only if nothing in the reusable id queue
-	 * and count is less than maximum allowed
-	 */
-	if (hid.id == -1 && count < HYPER_DMABUF_ID_MAX)
-		hid.id = HYPER_DMABUF_ID_CREATE(hy_drv_priv->domid, count++);
-
-	/* random data embedded in the id for security */
-	get_random_bytes(&hid.rng_key[0], 12);
-
-	return hid;
-}
-
-bool hyper_dmabuf_hid_keycomp(hyper_dmabuf_id_t hid1, hyper_dmabuf_id_t hid2)
-{
-	int i;
-
-	/* compare keys */
-	for (i = 0; i < 3; i++) {
-		if (hid1.rng_key[i] != hid2.rng_key[i])
-			return false;
-	}
-
-	return true;
-}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
deleted file mode 100644
index ed690f3..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_id.h
+++ /dev/null
@@ -1,51 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_ID_H__
-#define __HYPER_DMABUF_ID_H__
-
-#define HYPER_DMABUF_ID_CREATE(domid, cnt) \
-	((((domid) & 0xFF) << 24) | ((cnt) & 0xFFFFFF))
-
-#define HYPER_DMABUF_DOM_ID(hid) \
-	(((hid.id) >> 24) & 0xFF)
-
-/* currently maximum number of buffers shared
- * at any given moment is limited to 1000
- */
-#define HYPER_DMABUF_ID_MAX 1000
-
-/* adding freed hid to the reusable list */
-void hyper_dmabuf_store_hid(hyper_dmabuf_id_t hid);
-
-/* freeing the reusasble list */
-void hyper_dmabuf_free_hid_list(void);
-
-/* getting a hid available to use. */
-hyper_dmabuf_id_t hyper_dmabuf_get_hid(void);
-
-/* comparing two different hid */
-bool hyper_dmabuf_hid_keycomp(hyper_dmabuf_id_t hid1, hyper_dmabuf_id_t hid2);
-
-#endif /*__HYPER_DMABUF_ID_H*/
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
deleted file mode 100644
index ca6edf2..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ /dev/null
@@ -1,786 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/slab.h>
-#include <linux/uaccess.h>
-#include <linux/dma-buf.h>
-#include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_id.h"
-#include "hyper_dmabuf_struct.h"
-#include "hyper_dmabuf_ioctl.h"
-#include "hyper_dmabuf_list.h"
-#include "hyper_dmabuf_msg.h"
-#include "hyper_dmabuf_sgl_proc.h"
-#include "hyper_dmabuf_ops.h"
-#include "hyper_dmabuf_query.h"
-
-static int hyper_dmabuf_tx_ch_setup_ioctl(struct file *filp, void *data)
-{
-	struct ioctl_hyper_dmabuf_tx_ch_setup *tx_ch_attr;
-	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
-	int ret = 0;
-
-	if (!data) {
-		dev_err(hy_drv_priv->dev, "user data is NULL\n");
-		return -EINVAL;
-	}
-	tx_ch_attr = (struct ioctl_hyper_dmabuf_tx_ch_setup *)data;
-
-	ret = bknd_ops->init_tx_ch(tx_ch_attr->remote_domain);
-
-	return ret;
-}
-
-static int hyper_dmabuf_rx_ch_setup_ioctl(struct file *filp, void *data)
-{
-	struct ioctl_hyper_dmabuf_rx_ch_setup *rx_ch_attr;
-	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
-	int ret = 0;
-
-	if (!data) {
-		dev_err(hy_drv_priv->dev, "user data is NULL\n");
-		return -EINVAL;
-	}
-
-	rx_ch_attr = (struct ioctl_hyper_dmabuf_rx_ch_setup *)data;
-
-	ret = bknd_ops->init_rx_ch(rx_ch_attr->source_domain);
-
-	return ret;
-}
-
-static int send_export_msg(struct exported_sgt_info *exported,
-			   struct pages_info *pg_info)
-{
-	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
-	struct hyper_dmabuf_req *req;
-	int op[MAX_NUMBER_OF_OPERANDS] = {0};
-	int ret, i;
-
-	/* now create request for importer via ring */
-	op[0] = exported->hid.id;
-
-	for (i = 0; i < 3; i++)
-		op[i+1] = exported->hid.rng_key[i];
-
-	if (pg_info) {
-		op[4] = pg_info->nents;
-		op[5] = pg_info->frst_ofst;
-		op[6] = pg_info->last_len;
-		op[7] = bknd_ops->share_pages(pg_info->pgs, exported->rdomid,
-					 pg_info->nents, &exported->refs_info);
-		if (op[7] < 0) {
-			dev_err(hy_drv_priv->dev, "pages sharing failed\n");
-			return op[7];
-		}
-	}
-
-	op[8] = exported->sz_priv;
-
-	/* driver/application specific private info */
-	memcpy(&op[9], exported->priv, op[8]);
-
-	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
-
-	if (!req)
-		return -ENOMEM;
-
-	/* composing a message to the importer */
-	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT, &op[0]);
-
-	ret = bknd_ops->send_req(exported->rdomid, req, true);
-
-	kfree(req);
-
-	return ret;
-}
-
-/* Fast path exporting routine in case same buffer is already exported.
- * In this function, we skip normal exporting process and just update
- * private data on both VMs (importer and exporter)
- *
- * return '1' if reexport is needed, return '0' if succeeds, return
- * Kernel error code if something goes wrong
- */
-static int fastpath_export(hyper_dmabuf_id_t hid, int sz_priv, char *priv)
-{
-	int reexport = 1;
-	int ret = 0;
-	struct exported_sgt_info *exported;
-
-	exported = hyper_dmabuf_find_exported(hid);
-
-	if (!exported)
-		return reexport;
-
-	if (exported->valid == false)
-		return reexport;
-
-	/*
-	 * Check if unexport is already scheduled for that buffer,
-	 * if so try to cancel it. If that will fail, buffer needs
-	 * to be reexport once again.
-	 */
-	if (exported->unexport_sched) {
-		if (!cancel_delayed_work_sync(&exported->unexport))
-			return reexport;
-
-		exported->unexport_sched = false;
-	}
-
-	/* if there's any change in size of private data.
-	 * we reallocate space for private data with new size
-	 */
-	if (sz_priv != exported->sz_priv) {
-		kfree(exported->priv);
-
-		/* truncating size */
-		if (sz_priv > MAX_SIZE_PRIV_DATA)
-			exported->sz_priv = MAX_SIZE_PRIV_DATA;
-		else
-			exported->sz_priv = sz_priv;
-
-		exported->priv = kcalloc(1, exported->sz_priv,
-					 GFP_KERNEL);
-
-		if (!exported->priv) {
-			hyper_dmabuf_remove_exported(exported->hid);
-			hyper_dmabuf_cleanup_sgt_info(exported, true);
-			kfree(exported);
-			return -ENOMEM;
-		}
-	}
-
-	/* update private data in sgt_info with new ones */
-	ret = copy_from_user(exported->priv, priv, exported->sz_priv);
-	if (ret) {
-		dev_err(hy_drv_priv->dev,
-			"Failed to load a new private data\n");
-		ret = -EINVAL;
-	} else {
-		/* send an export msg for updating priv in importer */
-		ret = send_export_msg(exported, NULL);
-
-		if (ret < 0) {
-			dev_err(hy_drv_priv->dev,
-				"Failed to send a new private data\n");
-			ret = -EBUSY;
-		}
-	}
-
-	return ret;
-}
-
-static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
-{
-	struct ioctl_hyper_dmabuf_export_remote *export_remote_attr =
-			(struct ioctl_hyper_dmabuf_export_remote *)data;
-	struct dma_buf *dma_buf;
-	struct dma_buf_attachment *attachment;
-	struct sg_table *sgt;
-	struct pages_info *pg_info;
-	struct exported_sgt_info *exported;
-	hyper_dmabuf_id_t hid;
-	int ret = 0;
-
-	if (hy_drv_priv->domid == export_remote_attr->remote_domain) {
-		dev_err(hy_drv_priv->dev,
-			"exporting to the same VM is not permitted\n");
-		return -EINVAL;
-	}
-
-	dma_buf = dma_buf_get(export_remote_attr->dmabuf_fd);
-
-	if (IS_ERR(dma_buf)) {
-		dev_err(hy_drv_priv->dev, "Cannot get dma buf\n");
-		return PTR_ERR(dma_buf);
-	}
-
-	/* we check if this specific attachment was already exported
-	 * to the same domain and if yes and it's valid sgt_info,
-	 * it returns hyper_dmabuf_id of pre-exported sgt_info
-	 */
-	hid = hyper_dmabuf_find_hid_exported(dma_buf,
-					     export_remote_attr->remote_domain);
-
-	if (hid.id != -1) {
-		ret = fastpath_export(hid, export_remote_attr->sz_priv,
-				      export_remote_attr->priv);
-
-		/* return if fastpath_export succeeds or
-		 * gets some fatal error
-		 */
-		if (ret <= 0) {
-			dma_buf_put(dma_buf);
-			export_remote_attr->hid = hid;
-			return ret;
-		}
-	}
-
-	attachment = dma_buf_attach(dma_buf, hy_drv_priv->dev);
-	if (IS_ERR(attachment)) {
-		dev_err(hy_drv_priv->dev, "cannot get attachment\n");
-		ret = PTR_ERR(attachment);
-		goto fail_attach;
-	}
-
-	sgt = dma_buf_map_attachment(attachment, DMA_BIDIRECTIONAL);
-
-	if (IS_ERR(sgt)) {
-		dev_err(hy_drv_priv->dev, "cannot map attachment\n");
-		ret = PTR_ERR(sgt);
-		goto fail_map_attachment;
-	}
-
-	exported = kcalloc(1, sizeof(*exported), GFP_KERNEL);
-
-	if (!exported) {
-		ret = -ENOMEM;
-		goto fail_sgt_info_creation;
-	}
-
-	/* possible truncation */
-	if (export_remote_attr->sz_priv > MAX_SIZE_PRIV_DATA)
-		exported->sz_priv = MAX_SIZE_PRIV_DATA;
-	else
-		exported->sz_priv = export_remote_attr->sz_priv;
-
-	/* creating buffer for private data of buffer */
-	if (exported->sz_priv != 0) {
-		exported->priv = kcalloc(1, exported->sz_priv, GFP_KERNEL);
-
-		if (!exported->priv) {
-			ret = -ENOMEM;
-			goto fail_priv_creation;
-		}
-	} else {
-		dev_err(hy_drv_priv->dev, "size is 0\n");
-	}
-
-	exported->hid = hyper_dmabuf_get_hid();
-
-	/* no more exported dmabuf allowed */
-	if (exported->hid.id == -1) {
-		dev_err(hy_drv_priv->dev,
-			"exceeds allowed number of dmabuf to be exported\n");
-		ret = -ENOMEM;
-		goto fail_sgt_info_creation;
-	}
-
-	exported->rdomid = export_remote_attr->remote_domain;
-	exported->dma_buf = dma_buf;
-	exported->valid = true;
-
-	exported->active_sgts = kmalloc(sizeof(struct sgt_list), GFP_KERNEL);
-	if (!exported->active_sgts) {
-		ret = -ENOMEM;
-		goto fail_map_active_sgts;
-	}
-
-	exported->active_attached = kmalloc(sizeof(struct attachment_list),
-					    GFP_KERNEL);
-	if (!exported->active_attached) {
-		ret = -ENOMEM;
-		goto fail_map_active_attached;
-	}
-
-	exported->va_kmapped = kmalloc(sizeof(struct kmap_vaddr_list),
-				       GFP_KERNEL);
-	if (!exported->va_kmapped) {
-		ret = -ENOMEM;
-		goto fail_map_va_kmapped;
-	}
-
-	exported->va_vmapped = kmalloc(sizeof(struct vmap_vaddr_list),
-				       GFP_KERNEL);
-	if (!exported->va_vmapped) {
-		ret = -ENOMEM;
-		goto fail_map_va_vmapped;
-	}
-
-	exported->active_sgts->sgt = sgt;
-	exported->active_attached->attach = attachment;
-	exported->va_kmapped->vaddr = NULL;
-	exported->va_vmapped->vaddr = NULL;
-
-	/* initialize list of sgt, attachment and vaddr for dmabuf sync
-	 * via shadow dma-buf
-	 */
-	INIT_LIST_HEAD(&exported->active_sgts->list);
-	INIT_LIST_HEAD(&exported->active_attached->list);
-	INIT_LIST_HEAD(&exported->va_kmapped->list);
-	INIT_LIST_HEAD(&exported->va_vmapped->list);
-
-	/* copy private data to sgt_info */
-	ret = copy_from_user(exported->priv, export_remote_attr->priv,
-			     exported->sz_priv);
-
-	if (ret) {
-		dev_err(hy_drv_priv->dev,
-			"failed to load private data\n");
-		ret = -EINVAL;
-		goto fail_export;
-	}
-
-	pg_info = hyper_dmabuf_ext_pgs(sgt);
-	if (!pg_info) {
-		dev_err(hy_drv_priv->dev,
-			"failed to construct pg_info\n");
-		ret = -ENOMEM;
-		goto fail_export;
-	}
-
-	exported->nents = pg_info->nents;
-
-	/* now register it to export list */
-	hyper_dmabuf_register_exported(exported);
-
-	export_remote_attr->hid = exported->hid;
-
-	ret = send_export_msg(exported, pg_info);
-
-	if (ret < 0) {
-		dev_err(hy_drv_priv->dev,
-			"failed to send out the export request\n");
-		goto fail_send_request;
-	}
-
-	/* free pg_info */
-	kfree(pg_info->pgs);
-	kfree(pg_info);
-
-	exported->filp = filp;
-
-	return ret;
-
-/* Clean-up if error occurs */
-
-fail_send_request:
-	hyper_dmabuf_remove_exported(exported->hid);
-
-	/* free pg_info */
-	kfree(pg_info->pgs);
-	kfree(pg_info);
-
-fail_export:
-	kfree(exported->va_vmapped);
-
-fail_map_va_vmapped:
-	kfree(exported->va_kmapped);
-
-fail_map_va_kmapped:
-	kfree(exported->active_attached);
-
-fail_map_active_attached:
-	kfree(exported->active_sgts);
-	kfree(exported->priv);
-
-fail_priv_creation:
-	kfree(exported);
-
-fail_map_active_sgts:
-fail_sgt_info_creation:
-	dma_buf_unmap_attachment(attachment, sgt,
-				 DMA_BIDIRECTIONAL);
-
-fail_map_attachment:
-	dma_buf_detach(dma_buf, attachment);
-
-fail_attach:
-	dma_buf_put(dma_buf);
-
-	return ret;
-}
-
-static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
-{
-	struct ioctl_hyper_dmabuf_export_fd *export_fd_attr =
-			(struct ioctl_hyper_dmabuf_export_fd *)data;
-	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
-	struct imported_sgt_info *imported;
-	struct hyper_dmabuf_req *req;
-	struct page **data_pgs;
-	int op[4];
-	int i;
-	int ret = 0;
-
-	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
-
-	/* look for dmabuf for the id */
-	imported = hyper_dmabuf_find_imported(export_fd_attr->hid);
-
-	/* can't find sgt from the table */
-	if (!imported) {
-		dev_err(hy_drv_priv->dev, "can't find the entry\n");
-		return -ENOENT;
-	}
-
-	mutex_lock(&hy_drv_priv->lock);
-
-	imported->importers++;
-
-	/* send notification for export_fd to exporter */
-	op[0] = imported->hid.id;
-
-	for (i = 0; i < 3; i++)
-		op[i+1] = imported->hid.rng_key[i];
-
-	dev_dbg(hy_drv_priv->dev, "Export FD of buffer {id:%d key:%d %d %d}\n",
-		imported->hid.id, imported->hid.rng_key[0],
-		imported->hid.rng_key[1], imported->hid.rng_key[2]);
-
-	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
-
-	if (!req) {
-		mutex_unlock(&hy_drv_priv->lock);
-		return -ENOMEM;
-	}
-
-	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD, &op[0]);
-
-	ret = bknd_ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req, true);
-
-	if (ret < 0) {
-		/* in case of a timeout the other end will eventually
-		 * receive the request, so we need to undo it here
-		 */
-		hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD_FAILED,
-					&op[0]);
-		bknd_ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req, false);
-		kfree(req);
-		dev_err(hy_drv_priv->dev,
-			"Failed to create sgt or notify exporter\n");
-		imported->importers--;
-		mutex_unlock(&hy_drv_priv->lock);
-		return ret;
-	}
-
-	kfree(req);
-
-	if (ret == HYPER_DMABUF_REQ_ERROR) {
-		dev_err(hy_drv_priv->dev,
-			"Buffer invalid {id:%d key:%d %d %d}, cannot import\n",
-			imported->hid.id, imported->hid.rng_key[0],
-			imported->hid.rng_key[1], imported->hid.rng_key[2]);
-
-		imported->importers--;
-		mutex_unlock(&hy_drv_priv->lock);
-		return -EINVAL;
-	}
-
-	ret = 0;
-
-	dev_dbg(hy_drv_priv->dev,
-		"Found buffer gref %d off %d\n",
-		imported->ref_handle, imported->frst_ofst);
-
-	dev_dbg(hy_drv_priv->dev,
-		"last len %d nents %d domain %d\n",
-		imported->last_len, imported->nents,
-		HYPER_DMABUF_DOM_ID(imported->hid));
-
-	if (!imported->sgt) {
-		dev_dbg(hy_drv_priv->dev,
-			"buffer {id:%d key:%d %d %d} pages not mapped yet\n",
-			imported->hid.id, imported->hid.rng_key[0],
-			imported->hid.rng_key[1], imported->hid.rng_key[2]);
-
-		data_pgs = bknd_ops->map_shared_pages(imported->ref_handle,
-					HYPER_DMABUF_DOM_ID(imported->hid),
-					imported->nents,
-					&imported->refs_info);
-
-		if (!data_pgs) {
-			dev_err(hy_drv_priv->dev,
-				"can't map pages hid {id:%d key:%d %d %d}\n",
-				imported->hid.id, imported->hid.rng_key[0],
-				imported->hid.rng_key[1],
-				imported->hid.rng_key[2]);
-
-			imported->importers--;
-
-			req = kcalloc(1, sizeof(*req), GFP_KERNEL);
-
-			if (!req) {
-				mutex_unlock(&hy_drv_priv->lock);
-				return -ENOMEM;
-			}
-
-			hyper_dmabuf_create_req(req,
-						HYPER_DMABUF_EXPORT_FD_FAILED,
-						&op[0]);
-			bknd_ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req,
-							  false);
-			kfree(req);
-			mutex_unlock(&hy_drv_priv->lock);
-			return -EINVAL;
-		}
-
-		imported->sgt = hyper_dmabuf_create_sgt(data_pgs,
-							imported->frst_ofst,
-							imported->last_len,
-							imported->nents);
-
-	}
-
-	export_fd_attr->fd = hyper_dmabuf_export_fd(imported,
-						    export_fd_attr->flags);
-
-	if (export_fd_attr->fd < 0) {
-		/* fail to get fd */
-		ret = export_fd_attr->fd;
-	}
-
-	mutex_unlock(&hy_drv_priv->lock);
-
-	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
-	return ret;
-}
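
On the importing side the same handle is traded back for a local fd; a
sketch under the same uapi assumptions (hid arrives over some
out-of-band channel):

	/* hypothetical importer-side usage */
	int import_buf(int hyfd, hyper_dmabuf_id_t hid)
	{
		struct ioctl_hyper_dmabuf_export_fd arg;

		memset(&arg, 0, sizeof(arg));
		arg.hid = hid;		/* received from the exporting VM */
		arg.flags = O_RDWR;	/* passed through to dma_buf_fd() */

		if (ioctl(hyfd, IOCTL_HYPER_DMABUF_EXPORT_FD, &arg))
			return -1;

		return arg.fd;		/* local fd backed by shared pages */
	}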
-
-/* unexport dmabuf: remove it from the database and send an unexport
- * request to the source domain so it can unmap it.
- */
-static void delayed_unexport(struct work_struct *work)
-{
-	struct hyper_dmabuf_req *req;
-	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
-	struct exported_sgt_info *exported =
-		container_of(work, struct exported_sgt_info, unexport.work);
-	int op[4];
-	int i, ret;
-
-	if (!exported)
-		return;
-
-	dev_dbg(hy_drv_priv->dev,
-		"Marking buffer {id:%d key:%d %d %d} as invalid\n",
-		exported->hid.id, exported->hid.rng_key[0],
-		exported->hid.rng_key[1], exported->hid.rng_key[2]);
-
-	/* no longer valid */
-	exported->valid = false;
-
-	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
-
-	if (!req)
-		return;
-
-	op[0] = exported->hid.id;
-
-	for (i = 0; i < 3; i++)
-		op[i+1] = exported->hid.rng_key[i];
-
-	hyper_dmabuf_create_req(req, HYPER_DMABUF_NOTIFY_UNEXPORT, &op[0]);
-
-	/* Now send unexport request to remote domain, marking
-	 * that buffer should not be used anymore
-	 */
-	ret = bknd_ops->send_req(exported->rdomid, req, true);
-	if (ret < 0) {
-		dev_err(hy_drv_priv->dev,
-			"unexport message for buffer {id:%d key:%d %d %d} failed\n",
-			exported->hid.id, exported->hid.rng_key[0],
-			exported->hid.rng_key[1], exported->hid.rng_key[2]);
-	}
-
-	kfree(req);
-	exported->unexport_sched = false;
-
-	/* Clean up immediately if the buffer was never exported as
-	 * an fd on the importer side (so no SGT was constructed
-	 * there). Otherwise clean-up is deferred to remote sync,
-	 * when the final release op arrives (the importer sends it
-	 * only once no consumer of its locally exported FDs is left)
-	 */
-	if (exported->active == 0) {
-		dev_dbg(hy_drv_priv->dev,
-			"claning up buffer {id:%d key:%d %d %d} completly\n",
-			exported->hid.id, exported->hid.rng_key[0],
-			exported->hid.rng_key[1], exported->hid.rng_key[2]);
-
-		hyper_dmabuf_cleanup_sgt_info(exported, false);
-		hyper_dmabuf_remove_exported(exported->hid);
-
-		/* register hyper_dmabuf_id to the list for reuse */
-		hyper_dmabuf_store_hid(exported->hid);
-
-		if (exported->sz_priv > 0 && exported->priv)
-			kfree(exported->priv);
-
-		kfree(exported);
-	}
-}
-
-/* Schedule unexport of dmabuf.
- */
-int hyper_dmabuf_unexport_ioctl(struct file *filp, void *data)
-{
-	struct ioctl_hyper_dmabuf_unexport *unexport_attr =
-			(struct ioctl_hyper_dmabuf_unexport *)data;
-	struct exported_sgt_info *exported;
-
-	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
-
-	/* find dmabuf in export list */
-	exported = hyper_dmabuf_find_exported(unexport_attr->hid);
-
-	dev_dbg(hy_drv_priv->dev,
-		"scheduling unexport of buffer {id:%d key:%d %d %d}\n",
-		unexport_attr->hid.id, unexport_attr->hid.rng_key[0],
-		unexport_attr->hid.rng_key[1], unexport_attr->hid.rng_key[2]);
-
-	/* failed to find corresponding entry in export list */
-	if (exported == NULL) {
-		unexport_attr->status = -ENOENT;
-		return -ENOENT;
-	}
-
-	if (exported->unexport_sched)
-		return 0;
-
-	exported->unexport_sched = true;
-	INIT_DELAYED_WORK(&exported->unexport, delayed_unexport);
-	schedule_delayed_work(&exported->unexport,
-			      msecs_to_jiffies(unexport_attr->delay_ms));
-
-	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
-	return 0;
-}
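
Unexport is asymmetric: userspace only schedules it here, and
delayed_unexport() above actually runs delay_ms later. A sketch, again
with assumed uapi field names:

	/* hypothetical: schedule unexport with a 100 ms grace period */
	int unexport_buf(int hyfd, hyper_dmabuf_id_t hid)
	{
		struct ioctl_hyper_dmabuf_unexport arg;

		memset(&arg, 0, sizeof(arg));
		arg.hid = hid;
		arg.delay_ms = 100;	/* before NOTIFY_UNEXPORT is sent */

		return ioctl(hyfd, IOCTL_HYPER_DMABUF_UNEXPORT, &arg);
	}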
-
-static int hyper_dmabuf_query_ioctl(struct file *filp, void *data)
-{
-	struct ioctl_hyper_dmabuf_query *query_attr =
-			(struct ioctl_hyper_dmabuf_query *)data;
-	struct exported_sgt_info *exported = NULL;
-	struct imported_sgt_info *imported = NULL;
-	int ret = 0;
-
-	if (HYPER_DMABUF_DOM_ID(query_attr->hid) == hy_drv_priv->domid) {
-		/* query for exported dmabuf */
-		exported = hyper_dmabuf_find_exported(query_attr->hid);
-		if (exported) {
-			ret = hyper_dmabuf_query_exported(exported,
-							  query_attr->item,
-							  &query_attr->info);
-		} else {
-			dev_err(hy_drv_priv->dev,
-				"hid {id:%d key:%d %d %d} not in exp list\n",
-				query_attr->hid.id,
-				query_attr->hid.rng_key[0],
-				query_attr->hid.rng_key[1],
-				query_attr->hid.rng_key[2]);
-			return -ENOENT;
-		}
-	} else {
-		/* query for imported dmabuf */
-		imported = hyper_dmabuf_find_imported(query_attr->hid);
-		if (imported) {
-			ret = hyper_dmabuf_query_imported(imported,
-							  query_attr->item,
-							  &query_attr->info);
-		} else {
-			dev_err(hy_drv_priv->dev,
-				"hid {id:%d key:%d %d %d} not in imp list\n",
-				query_attr->hid.id,
-				query_attr->hid.rng_key[0],
-				query_attr->hid.rng_key[1],
-				query_attr->hid.rng_key[2]);
-			return -ENOENT;
-		}
-	}
-
-	return ret;
-}
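
The query ioctl routes on the domain encoded in the hid: the local
domain id selects the exported list, anything else the imported one. A
sketch with assumed field names:

	/* hypothetical: ask for a buffer's size in bytes */
	long query_size(int hyfd, hyper_dmabuf_id_t hid)
	{
		struct ioctl_hyper_dmabuf_query arg;

		memset(&arg, 0, sizeof(arg));
		arg.hid = hid;
		arg.item = HYPER_DMABUF_QUERY_SIZE;

		if (ioctl(hyfd, IOCTL_HYPER_DMABUF_QUERY, &arg))
			return -1;

		return (long)arg.info;
	}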
-
-const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_TX_CH_SETUP,
-			       hyper_dmabuf_tx_ch_setup_ioctl, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_RX_CH_SETUP,
-			       hyper_dmabuf_rx_ch_setup_ioctl, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_REMOTE,
-			       hyper_dmabuf_export_remote_ioctl, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_FD,
-			       hyper_dmabuf_export_fd_ioctl, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_UNEXPORT,
-			       hyper_dmabuf_unexport_ioctl, 0),
-	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_QUERY,
-			       hyper_dmabuf_query_ioctl, 0),
-};
-
-long hyper_dmabuf_ioctl(struct file *filp,
-			unsigned int cmd, unsigned long param)
-{
-	const struct hyper_dmabuf_ioctl_desc *ioctl = NULL;
-	unsigned int nr = _IOC_NR(cmd);
-	int ret;
-	hyper_dmabuf_ioctl_t func;
-	char *kdata;
-
-	if (nr >= ARRAY_SIZE(hyper_dmabuf_ioctls)) {
-		dev_err(hy_drv_priv->dev, "invalid ioctl\n");
-		return -EINVAL;
-	}
-
-	ioctl = &hyper_dmabuf_ioctls[nr];
-
-	func = ioctl->func;
-
-	if (unlikely(!func)) {
-		dev_err(hy_drv_priv->dev, "no function\n");
-		return -EINVAL;
-	}
-
-	kdata = kmalloc(_IOC_SIZE(cmd), GFP_KERNEL);
-	if (!kdata)
-		return -ENOMEM;
-
-	if (copy_from_user(kdata, (void __user *)param,
-			   _IOC_SIZE(cmd)) != 0) {
-		dev_err(hy_drv_priv->dev,
-			"failed to copy from user arguments\n");
-		ret = -EFAULT;
-		goto ioctl_error;
-	}
-
-	ret = func(filp, kdata);
-
-	if (copy_to_user((void __user *)param, kdata,
-			 _IOC_SIZE(cmd)) != 0) {
-		dev_err(hy_drv_priv->dev,
-			"failed to copy to user arguments\n");
-		ret = -EFAULT;
-		goto ioctl_error;
-	}
-
-ioctl_error:
-	kfree(kdata);
-
-	return ret;
-}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
deleted file mode 100644
index 5991a87..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.h
+++ /dev/null
@@ -1,50 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_IOCTL_H__
-#define __HYPER_DMABUF_IOCTL_H__
-
-typedef int (*hyper_dmabuf_ioctl_t)(struct file *filp, void *data);
-
-struct hyper_dmabuf_ioctl_desc {
-	unsigned int cmd;
-	int flags;
-	hyper_dmabuf_ioctl_t func;
-	const char *name;
-};
-
-#define HYPER_DMABUF_IOCTL_DEF(ioctl, _func, _flags)	\
-	[_IOC_NR(ioctl)] = {				\
-			.cmd = ioctl,			\
-			.func = _func,			\
-			.flags = _flags,		\
-			.name = #ioctl			\
-	}
-
-long hyper_dmabuf_ioctl(struct file *filp,
-			unsigned int cmd, unsigned long param);
-
-int hyper_dmabuf_unexport_ioctl(struct file *filp, void *data);
-
-#endif //__HYPER_DMABUF_IOCTL_H__
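
HYPER_DMABUF_IOCTL_DEF is a designated array initializer keyed by the
ioctl's _IOC_NR, which is what lets hyper_dmabuf_ioctl() dispatch by a
direct table index. One entry expands roughly to:

	/* sketch of what one table entry expands to */
	[_IOC_NR(IOCTL_HYPER_DMABUF_QUERY)] = {
		.cmd   = IOCTL_HYPER_DMABUF_QUERY,
		.func  = hyper_dmabuf_query_ioctl,
		.flags = 0,
		.name  = "IOCTL_HYPER_DMABUF_QUERY"
	},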
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
deleted file mode 100644
index bba6d1d..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
+++ /dev/null
@@ -1,293 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/slab.h>
-#include <linux/cdev.h>
-#include <linux/hashtable.h>
-#include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_list.h"
-#include "hyper_dmabuf_id.h"
-#include "hyper_dmabuf_event.h"
-
-DECLARE_HASHTABLE(hyper_dmabuf_hash_imported, MAX_ENTRY_IMPORTED);
-DECLARE_HASHTABLE(hyper_dmabuf_hash_exported, MAX_ENTRY_EXPORTED);
-
-#ifdef CONFIG_HYPER_DMABUF_SYSFS
-static ssize_t hyper_dmabuf_imported_show(struct device *drv,
-					  struct device_attribute *attr,
-					  char *buf)
-{
-	struct list_entry_imported *info_entry;
-	int bkt;
-	ssize_t count = 0;
-	size_t total = 0;
-
-	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node) {
-		hyper_dmabuf_id_t hid = info_entry->imported->hid;
-		int nents = info_entry->imported->nents;
-		bool valid = info_entry->imported->valid;
-		int num_importers = info_entry->imported->importers;
-
-		total += nents;
-		count += scnprintf(buf + count, PAGE_SIZE - count,
-				"hid:{%d %d %d %d}, nent:%d, v:%c, numi:%d\n",
-				hid.id, hid.rng_key[0], hid.rng_key[1],
-				hid.rng_key[2], nents, (valid ? 't' : 'f'),
-				num_importers);
-	}
-	count += scnprintf(buf + count, PAGE_SIZE - count,
-			   "total nents: %lu\n", total);
-
-	return count;
-}
-
-static ssize_t hyper_dmabuf_exported_show(struct device *drv,
-					  struct device_attribute *attr,
-					  char *buf)
-{
-	struct list_entry_exported *info_entry;
-	int bkt;
-	ssize_t count = 0;
-	size_t total = 0;
-
-	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node) {
-		hyper_dmabuf_id_t hid = info_entry->exported->hid;
-		int nents = info_entry->exported->nents;
-		bool valid = info_entry->exported->valid;
-		int importer_exported = info_entry->exported->active;
-
-		total += nents;
-		count += scnprintf(buf + count, PAGE_SIZE - count,
-				   "hid:{%d %d %d %d}, nent:%d, v:%c, ie:%d\n",
-				   hid.id, hid.rng_key[0], hid.rng_key[1],
-				   hid.rng_key[2], nents, (valid ? 't' : 'f'),
-				   importer_exported);
-	}
-	count += scnprintf(buf + count, PAGE_SIZE - count,
-			   "total nents: %lu\n", total);
-
-	return count;
-}
-
-static DEVICE_ATTR(imported, 0400, hyper_dmabuf_imported_show, NULL);
-static DEVICE_ATTR(exported, 0400, hyper_dmabuf_exported_show, NULL);
-
-int hyper_dmabuf_register_sysfs(struct device *dev)
-{
-	int err;
-
-	err = device_create_file(dev, &dev_attr_imported);
-	if (err < 0)
-		goto err1;
-	err = device_create_file(dev, &dev_attr_exported);
-	if (err < 0)
-		goto err2;
-
-	return 0;
-err2:
-	device_remove_file(dev, &dev_attr_imported);
-err1:
-	return err;
-}
-
-int hyper_dmabuf_unregister_sysfs(struct device *dev)
-{
-	device_remove_file(dev, &dev_attr_imported);
-	device_remove_file(dev, &dev_attr_exported);
-	return 0;
-}
-
-#endif
-
-int hyper_dmabuf_table_init(void)
-{
-	hash_init(hyper_dmabuf_hash_imported);
-	hash_init(hyper_dmabuf_hash_exported);
-	return 0;
-}
-
-int hyper_dmabuf_table_destroy(void)
-{
-	/* TODO: cleanup hyper_dmabuf_hash_imported
-	 * and hyper_dmabuf_hash_exported
-	 */
-	return 0;
-}
-
-int hyper_dmabuf_register_exported(struct exported_sgt_info *exported)
-{
-	struct list_entry_exported *info_entry;
-
-	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
-
-	if (!info_entry)
-		return -ENOMEM;
-
-	info_entry->exported = exported;
-
-	hash_add(hyper_dmabuf_hash_exported, &info_entry->node,
-		 info_entry->exported->hid.id);
-
-	return 0;
-}
-
-int hyper_dmabuf_register_imported(struct imported_sgt_info *imported)
-{
-	struct list_entry_imported *info_entry;
-
-	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
-
-	if (!info_entry)
-		return -ENOMEM;
-
-	info_entry->imported = imported;
-
-	hash_add(hyper_dmabuf_hash_imported, &info_entry->node,
-		 info_entry->imported->hid.id);
-
-	return 0;
-}
-
-struct exported_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid)
-{
-	struct list_entry_exported *info_entry;
-	int bkt;
-
-	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
-		/* checking hid.id first */
-		if (info_entry->exported->hid.id == hid.id) {
-			/* then key is compared */
-			if (hyper_dmabuf_hid_keycomp(info_entry->exported->hid,
-						    hid))
-				return info_entry->exported;
-
-			/* if key is unmatched, given HID is invalid,
-			 * so returning NULL
-			 */
-			break;
-		}
-
-	return NULL;
-}
-
-/* search for a pre-exported sgt and return its id if it exists */
-hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf,
-						 int domid)
-{
-	struct list_entry_exported *info_entry;
-	hyper_dmabuf_id_t hid = {-1, {0, 0, 0} };
-	int bkt;
-
-	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
-		if (info_entry->exported->dma_buf == dmabuf &&
-		    info_entry->exported->rdomid == domid)
-			return info_entry->exported->hid;
-
-	return hid;
-}
-
-struct imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_id_t hid)
-{
-	struct list_entry_imported *info_entry;
-	int bkt;
-
-	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
-		/* checking hid.id first */
-		if (info_entry->imported->hid.id == hid.id) {
-			/* then key is compared */
-			if (hyper_dmabuf_hid_keycomp(info_entry->imported->hid,
-						    hid))
-				return info_entry->imported;
-			/* if key is unmatched, given HID is invalid,
-			 * so returning NULL
-			 */
-			break;
-		}
-
-	return NULL;
-}
-
-int hyper_dmabuf_remove_exported(hyper_dmabuf_id_t hid)
-{
-	struct list_entry_exported *info_entry;
-	int bkt;
-
-	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
-		/* checking hid.id first */
-		if (info_entry->exported->hid.id == hid.id) {
-			/* then key is compared */
-			if (hyper_dmabuf_hid_keycomp(info_entry->exported->hid,
-						    hid)) {
-				hash_del(&info_entry->node);
-				kfree(info_entry);
-				return 0;
-			}
-
-			break;
-		}
-
-	return -ENOENT;
-}
-
-int hyper_dmabuf_remove_imported(hyper_dmabuf_id_t hid)
-{
-	struct list_entry_imported *info_entry;
-	int bkt;
-
-	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
-		/* checking hid.id first */
-		if (info_entry->imported->hid.id == hid.id) {
-			/* then key is compared */
-			if (hyper_dmabuf_hid_keycomp(info_entry->imported->hid,
-						    hid)) {
-				hash_del(&info_entry->node);
-				kfree(info_entry);
-				return 0;
-			}
-
-			break;
-		}
-
-	return -ENOENT;
-}
-
-void hyper_dmabuf_foreach_exported(
-	void (*func)(struct exported_sgt_info *, void *attr),
-	void *attr)
-{
-	struct list_entry_exported *info_entry;
-	struct hlist_node *tmp;
-	int bkt;
-
-	hash_for_each_safe(hyper_dmabuf_hash_exported, bkt, tmp,
-			info_entry, node) {
-		func(info_entry->exported, attr);
-	}
-}
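
The lookup scheme used throughout this file buckets entries by hid.id
(7 bits of hash, 128 buckets) and then verifies the 96-bit random key.
A self-contained sketch of the same bucket-then-verify pattern:

	/* minimal sketch of the lookup pattern above */
	#include <linux/hashtable.h>
	#include <linux/string.h>

	DECLARE_HASHTABLE(tbl, 7);	/* 2^7 = 128 buckets */

	struct entry {
		int id;
		int key[3];
		struct hlist_node node;
	};

	static struct entry *find(int id, int *key)
	{
		struct entry *e;
		int bkt;

		hash_for_each(tbl, bkt, e, node)
			if (e->id == id)
				/* id matched; the random key decides */
				return memcmp(e->key, key,
					      sizeof(e->key)) ? NULL : e;
		return NULL;
	}

Note that hash_for_each() walks every bucket; hash_for_each_possible(tbl,
e, node, id) would confine the walk to the bucket hash_add() filled.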
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
deleted file mode 100644
index f7102f5..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
+++ /dev/null
@@ -1,71 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_LIST_H__
-#define __HYPER_DMABUF_LIST_H__
-
-#include "hyper_dmabuf_struct.h"
-
-/* number of bits to be used for exported dmabufs hash table */
-#define MAX_ENTRY_EXPORTED 7
-/* number of bits to be used for imported dmabufs hash table */
-#define MAX_ENTRY_IMPORTED 7
-
-struct list_entry_exported {
-	struct exported_sgt_info *exported;
-	struct hlist_node node;
-};
-
-struct list_entry_imported {
-	struct imported_sgt_info *imported;
-	struct hlist_node node;
-};
-
-int hyper_dmabuf_table_init(void);
-
-int hyper_dmabuf_table_destroy(void);
-
-int hyper_dmabuf_register_exported(struct exported_sgt_info *info);
-
-/* search for a pre-exported sgt and return its id if it exists */
-hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf,
-						 int domid);
-
-int hyper_dmabuf_register_imported(struct imported_sgt_info *info);
-
-struct exported_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid);
-
-struct imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_id_t hid);
-
-int hyper_dmabuf_remove_exported(hyper_dmabuf_id_t hid);
-
-int hyper_dmabuf_remove_imported(hyper_dmabuf_id_t hid);
-
-void hyper_dmabuf_foreach_exported(void (*func)(struct exported_sgt_info *,
-				   void *attr), void *attr);
-
-int hyper_dmabuf_register_sysfs(struct device *dev);
-int hyper_dmabuf_unregister_sysfs(struct device *dev);
-
-#endif /* __HYPER_DMABUF_LIST_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
deleted file mode 100644
index afc1fd6e..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
+++ /dev/null
@@ -1,414 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/slab.h>
-#include <linux/workqueue.h>
-#include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_msg.h"
-#include "hyper_dmabuf_remote_sync.h"
-#include "hyper_dmabuf_event.h"
-#include "hyper_dmabuf_list.h"
-
-struct cmd_process {
-	struct work_struct work;
-	struct hyper_dmabuf_req *rq;
-	int domid;
-};
-
-void hyper_dmabuf_create_req(struct hyper_dmabuf_req *req,
-			     enum hyper_dmabuf_command cmd, int *op)
-{
-	int i;
-
-	req->stat = HYPER_DMABUF_REQ_NOT_RESPONDED;
-	req->cmd = cmd;
-
-	switch (cmd) {
-	/* as exporter, commands to importer */
-	case HYPER_DMABUF_EXPORT:
-		/* exporting pages for dmabuf */
-		/* command : HYPER_DMABUF_EXPORT,
-		 * op0~op3 : hyper_dmabuf_id
-		 * op4 : number of pages to be shared
-		 * op5 : offset of data in the first page
-		 * op6 : length of data in the last page
-		 * op7 : top-level reference number for shared pages
-		 * op8 : size of private data (from op9)
-		 * op9 ~ : Driver-specific private data
-		 *	   (e.g. graphic buffer's meta info)
-		 */
-
-		memcpy(&req->op[0], &op[0], 9 * sizeof(int) + op[8]);
-		break;
-
-	case HYPER_DMABUF_NOTIFY_UNEXPORT:
-		/* destroy sg_list for hyper_dmabuf_id on remote side */
-		/* command : HYPER_DMABUF_NOTIFY_UNEXPORT,
-		 * op0~op3 : hyper_dmabuf_id_t hid
-		 */
-
-		for (i = 0; i < 4; i++)
-			req->op[i] = op[i];
-		break;
-
-	case HYPER_DMABUF_EXPORT_FD:
-	case HYPER_DMABUF_EXPORT_FD_FAILED:
-		/* dmabuf fd is being created on imported side or importing
-		 * failed
-		 *
-		 * command : HYPER_DMABUF_EXPORT_FD or
-		 *	     HYPER_DMABUF_EXPORT_FD_FAILED,
-		 * op0~op3 : hyper_dmabuf_id
-		 */
-
-		for (i = 0; i < 4; i++)
-			req->op[i] = op[i];
-		break;
-
-	case HYPER_DMABUF_OPS_TO_REMOTE:
-		/* notifying dmabuf map/unmap to importer (probably not needed)
-		 * for dmabuf synchronization
-		 */
-		break;
-
-	case HYPER_DMABUF_OPS_TO_SOURCE:
-		/* notifying dmabuf map/unmap to exporter, map will make
-		 * the driver to do shadow mapping or unmapping for
-		 * synchronization with original exporter (e.g. i915)
-		 *
-		 * command : DMABUF_OPS_TO_SOURCE.
-		 * op0~3 : hyper_dmabuf_id
-		 * op4 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
-		 */
-		for (i = 0; i < 5; i++)
-			req->op[i] = op[i];
-		break;
-
-	default:
-		/* no command found */
-		return;
-	}
-}
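
So an EXPORT message carries the hid in op0..op3, the page layout in
op4..op7, and inline private data from op9 on. Building one would look
like this (a sketch; assumes sz_priv fits into the operands left after
op8):

	/* sketch: operands for a HYPER_DMABUF_EXPORT request */
	int op[MAX_NUMBER_OF_OPERANDS];

	op[0] = hid.id;
	op[1] = hid.rng_key[0];
	op[2] = hid.rng_key[1];
	op[3] = hid.rng_key[2];
	op[4] = nents;		/* number of shared pages */
	op[5] = frst_ofst;	/* data offset in the first page */
	op[6] = last_len;	/* data length in the last page */
	op[7] = top_level_gref;	/* backend ref for the shared pages */
	op[8] = sz_priv;	/* bytes of private data that follow */
	memcpy(&op[9], priv, sz_priv);

	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT, op);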
-
-static void cmd_process_work(struct work_struct *work)
-{
-	struct imported_sgt_info *imported;
-	struct cmd_process *proc = container_of(work,
-						struct cmd_process, work);
-	struct hyper_dmabuf_req *req;
-	int domid;
-	int i;
-
-	req = proc->rq;
-	domid = proc->domid;
-
-	switch (req->cmd) {
-	case HYPER_DMABUF_EXPORT:
-		/* exporting pages for dmabuf */
-		/* command : HYPER_DMABUF_EXPORT,
-		 * op0~op3 : hyper_dmabuf_id
-		 * op4 : number of pages to be shared
-		 * op5 : offset of data in the first page
-		 * op6 : length of data in the last page
-		 * op7 : top-level reference number for shared pages
-		 * op8 : size of private data (from op9)
-		 * op9 ~ : Driver-specific private data
-		 *         (e.g. graphic buffer's meta info)
-		 */
-
-		/* nents == 0 means this message only synchronizes the
-		 * priv data of an existing imported_sgt_info, so no
-		 * new entry is created
-		 */
-		if (req->op[4] == 0) {
-			hyper_dmabuf_id_t exist = {req->op[0],
-						   {req->op[1], req->op[2],
-						   req->op[3] } };
-
-			imported = hyper_dmabuf_find_imported(exist);
-
-			if (!imported) {
-				dev_err(hy_drv_priv->dev,
-					"Can't find imported sgt_info\n");
-				break;
-			}
-
-			/* if size of new private data is different,
-			 * we reallocate it.
-			 */
-			if (imported->sz_priv != req->op[8]) {
-				kfree(imported->priv);
-				imported->sz_priv = req->op[8];
-				imported->priv = kcalloc(1, req->op[8],
-							 GFP_KERNEL);
-				if (!imported->priv) {
-					/* set it invalid */
-					imported->valid = 0;
-					break;
-				}
-			}
-
-			/* updating priv data */
-			memcpy(imported->priv, &req->op[9], req->op[8]);
-
-#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
-			/* generating import event */
-			hyper_dmabuf_import_event(imported->hid);
-#endif
-
-			break;
-		}
-
-		imported = kcalloc(1, sizeof(*imported), GFP_KERNEL);
-
-		if (!imported)
-			break;
-
-		imported->sz_priv = req->op[8];
-		imported->priv = kcalloc(1, req->op[8], GFP_KERNEL);
-
-		if (!imported->priv) {
-			kfree(imported);
-			break;
-		}
-
-		imported->hid.id = req->op[0];
-
-		for (i = 0; i < 3; i++)
-			imported->hid.rng_key[i] = req->op[i+1];
-
-		imported->nents = req->op[4];
-		imported->frst_ofst = req->op[5];
-		imported->last_len = req->op[6];
-		imported->ref_handle = req->op[7];
-
-		dev_dbg(hy_drv_priv->dev, "DMABUF was exported\n");
-		dev_dbg(hy_drv_priv->dev, "\thid{id:%d key:%d %d %d}\n",
-			req->op[0], req->op[1], req->op[2],
-			req->op[3]);
-		dev_dbg(hy_drv_priv->dev, "\tnents %d\n", req->op[4]);
-		dev_dbg(hy_drv_priv->dev, "\tfirst offset %d\n", req->op[5]);
-		dev_dbg(hy_drv_priv->dev, "\tlast len %d\n", req->op[6]);
-		dev_dbg(hy_drv_priv->dev, "\tgrefid %d\n", req->op[7]);
-
-		memcpy(imported->priv, &req->op[9], req->op[8]);
-
-		imported->valid = true;
-		hyper_dmabuf_register_imported(imported);
-
-#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
-		/* generating import event */
-		hyper_dmabuf_import_event(imported->hid);
-#endif
-
-		break;
-
-	case HYPER_DMABUF_OPS_TO_REMOTE:
-		/* notifying dmabuf map/unmap to importer
-		 * (probably not needed) for dmabuf synchronization
-		 */
-		break;
-
-	default:
-		/* shouldn't get here */
-		break;
-	}
-
-	kfree(req);
-	kfree(proc);
-}
-
-int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
-{
-	struct cmd_process *proc;
-	struct hyper_dmabuf_req *temp_req;
-	struct imported_sgt_info *imported;
-	struct exported_sgt_info *exported;
-	hyper_dmabuf_id_t hid;
-	int ret;
-
-	if (!req) {
-		dev_err(hy_drv_priv->dev, "request is NULL\n");
-		return -EINVAL;
-	}
-
-	hid.id = req->op[0];
-	hid.rng_key[0] = req->op[1];
-	hid.rng_key[1] = req->op[2];
-	hid.rng_key[2] = req->op[3];
-
-	if ((req->cmd < HYPER_DMABUF_EXPORT) ||
-		(req->cmd > HYPER_DMABUF_OPS_TO_SOURCE)) {
-		dev_err(hy_drv_priv->dev, "invalid command\n");
-		return -EINVAL;
-	}
-
-	req->stat = HYPER_DMABUF_REQ_PROCESSED;
-
-	/* HYPER_DMABUF_NOTIFY_UNEXPORT requires an immediate
-	 * follow-up so it can't be processed in the workqueue
-	 */
-	if (req->cmd == HYPER_DMABUF_NOTIFY_UNEXPORT) {
-		/* destroy sg_list for hyper_dmabuf_id on remote side */
-		/* command : HYPER_DMABUF_NOTIFY_UNEXPORT,
-		 * op0~3 : hyper_dmabuf_id
-		 */
-		dev_dbg(hy_drv_priv->dev,
-			"processing HYPER_DMABUF_NOTIFY_UNEXPORT\n");
-
-		imported = hyper_dmabuf_find_imported(hid);
-
-		if (imported) {
-			/* if anything is still using dma_buf */
-			if (imported->importers) {
-				/* Buffer is still in use, just mark that
-				 * it should not be allowed to export its fd
-				 * anymore.
-				 */
-				imported->valid = false;
-			} else {
-				/* No one is using buffer, remove it from
-				 * imported list
-				 */
-				hyper_dmabuf_remove_imported(hid);
-				kfree(imported);
-			}
-		} else {
-			req->stat = HYPER_DMABUF_REQ_ERROR;
-		}
-
-		return req->cmd;
-	}
-
-	/* dma buf remote synchronization */
-	if (req->cmd == HYPER_DMABUF_OPS_TO_SOURCE) {
-		/* notifying dmabuf map/unmap to exporter, map will
-		 * make the driver to do shadow mapping
-		 * or unmapping for synchronization with original
-		 * exporter (e.g. i915)
-		 *
-		 * command : DMABUF_OPS_TO_SOURCE.
-		 * op0~3 : hyper_dmabuf_id
-		 * op4 : enum hyper_dmabuf_ops {....}
-		 */
-		dev_dbg(hy_drv_priv->dev,
-			"%s: HYPER_DMABUF_OPS_TO_SOURCE\n", __func__);
-
-		ret = hyper_dmabuf_remote_sync(hid, req->op[4]);
-
-		if (ret)
-			req->stat = HYPER_DMABUF_REQ_ERROR;
-		else
-			req->stat = HYPER_DMABUF_REQ_PROCESSED;
-
-		return req->cmd;
-	}
-
-	/* synchronous dma_buf_fd export */
-	if (req->cmd == HYPER_DMABUF_EXPORT_FD) {
-		/* find a corresponding SGT for the id */
-		dev_dbg(hy_drv_priv->dev,
-			"HYPER_DMABUF_EXPORT_FD for {id:%d key:%d %d %d}\n",
-			hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
-
-		exported = hyper_dmabuf_find_exported(hid);
-
-		if (!exported) {
-			dev_err(hy_drv_priv->dev,
-				"buffer {id:%d key:%d %d %d} not found\n",
-				hid.id, hid.rng_key[0], hid.rng_key[1],
-				hid.rng_key[2]);
-
-			req->stat = HYPER_DMABUF_REQ_ERROR;
-		} else if (!exported->valid) {
-			dev_dbg(hy_drv_priv->dev,
-				"Buffer no longer valid {id:%d key:%d %d %d}\n",
-				hid.id, hid.rng_key[0], hid.rng_key[1],
-				hid.rng_key[2]);
-
-			req->stat = HYPER_DMABUF_REQ_ERROR;
-		} else {
-			dev_dbg(hy_drv_priv->dev,
-				"Buffer still valid {id:%d key:%d %d %d}\n",
-				hid.id, hid.rng_key[0], hid.rng_key[1],
-				hid.rng_key[2]);
-
-			exported->active++;
-			req->stat = HYPER_DMABUF_REQ_PROCESSED;
-		}
-		return req->cmd;
-	}
-
-	if (req->cmd == HYPER_DMABUF_EXPORT_FD_FAILED) {
-		dev_dbg(hy_drv_priv->dev,
-			"HYPER_DMABUF_EXPORT_FD_FAILED for {id:%d key:%d %d %d}\n",
-			hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
-
-		exported = hyper_dmabuf_find_exported(hid);
-
-		if (!exported) {
-			dev_err(hy_drv_priv->dev,
-				"buffer {id:%d key:%d %d %d} not found\n",
-				hid.id, hid.rng_key[0], hid.rng_key[1],
-				hid.rng_key[2]);
-
-			req->stat = HYPER_DMABUF_REQ_ERROR;
-		} else {
-			exported->active--;
-			req->stat = HYPER_DMABUF_REQ_PROCESSED;
-		}
-		return req->cmd;
-	}
-
-	dev_dbg(hy_drv_priv->dev,
-		"%s: putting request to workqueue\n", __func__);
-	temp_req = kmalloc(sizeof(*temp_req), GFP_KERNEL);
-
-	if (!temp_req)
-		return -ENOMEM;
-
-	memcpy(temp_req, req, sizeof(*temp_req));
-
-	proc = kcalloc(1, sizeof(struct cmd_process), GFP_KERNEL);
-
-	if (!proc) {
-		kfree(temp_req);
-		return -ENOMEM;
-	}
-
-	proc->rq = temp_req;
-	proc->domid = domid;
-
-	INIT_WORK(&(proc->work), cmd_process_work);
-
-	queue_work(hy_drv_priv->work_queue, &(proc->work));
-
-	return req->cmd;
-}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
deleted file mode 100644
index 9c8a76b..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
+++ /dev/null
@@ -1,87 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_MSG_H__
-#define __HYPER_DMABUF_MSG_H__
-
-#define MAX_NUMBER_OF_OPERANDS 64
-
-struct hyper_dmabuf_req {
-	unsigned int req_id;
-	unsigned int stat;
-	unsigned int cmd;
-	unsigned int op[MAX_NUMBER_OF_OPERANDS];
-};
-
-struct hyper_dmabuf_resp {
-	unsigned int resp_id;
-	unsigned int stat;
-	unsigned int cmd;
-	unsigned int op[MAX_NUMBER_OF_OPERANDS];
-};
-
-enum hyper_dmabuf_command {
-	HYPER_DMABUF_EXPORT = 0x10,
-	HYPER_DMABUF_EXPORT_FD,
-	HYPER_DMABUF_EXPORT_FD_FAILED,
-	HYPER_DMABUF_NOTIFY_UNEXPORT,
-	HYPER_DMABUF_OPS_TO_REMOTE,
-	HYPER_DMABUF_OPS_TO_SOURCE,
-};
-
-enum hyper_dmabuf_ops {
-	HYPER_DMABUF_OPS_ATTACH = 0x1000,
-	HYPER_DMABUF_OPS_DETACH,
-	HYPER_DMABUF_OPS_MAP,
-	HYPER_DMABUF_OPS_UNMAP,
-	HYPER_DMABUF_OPS_RELEASE,
-	HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS,
-	HYPER_DMABUF_OPS_END_CPU_ACCESS,
-	HYPER_DMABUF_OPS_KMAP_ATOMIC,
-	HYPER_DMABUF_OPS_KUNMAP_ATOMIC,
-	HYPER_DMABUF_OPS_KMAP,
-	HYPER_DMABUF_OPS_KUNMAP,
-	HYPER_DMABUF_OPS_MMAP,
-	HYPER_DMABUF_OPS_VMAP,
-	HYPER_DMABUF_OPS_VUNMAP,
-};
-
-enum hyper_dmabuf_req_feedback {
-	HYPER_DMABUF_REQ_PROCESSED = 0x100,
-	HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP,
-	HYPER_DMABUF_REQ_ERROR,
-	HYPER_DMABUF_REQ_NOT_RESPONDED
-};
-
-/* create a request packet with given command and operands */
-void hyper_dmabuf_create_req(struct hyper_dmabuf_req *req,
-				 enum hyper_dmabuf_command command,
-				 int *operands);
-
-/* parse incoming request packet (or response) and take
- * appropriate actions for those
- */
-int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req);
-
-#endif // __HYPER_DMABUF_MSG_H__
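
Every request is a fixed-size blob: 3 header words plus 64 operand
words, i.e. 268 bytes with 4-byte unsigned ints, no matter how much of
op[] a given command uses. A compile-time check of that assumption
could read:

	/* sanity sketch: fixed wire size of a request
	 * (BUILD_BUG_ON must sit in function scope, e.g. driver init)
	 */
	BUILD_BUG_ON(sizeof(struct hyper_dmabuf_req) !=
		     (3 + MAX_NUMBER_OF_OPERANDS) * sizeof(unsigned int));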
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
deleted file mode 100644
index e85f619..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.c
+++ /dev/null
@@ -1,413 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/slab.h>
-#include <linux/dma-buf.h>
-#include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_struct.h"
-#include "hyper_dmabuf_ops.h"
-#include "hyper_dmabuf_sgl_proc.h"
-#include "hyper_dmabuf_id.h"
-#include "hyper_dmabuf_msg.h"
-#include "hyper_dmabuf_list.h"
-
-#define WAIT_AFTER_SYNC_REQ 0
-#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
-
-static int dmabuf_refcount(struct dma_buf *dma_buf)
-{
-	if ((dma_buf != NULL) && (dma_buf->file != NULL))
-		return file_count(dma_buf->file);
-
-	return -EINVAL;
-}
-
-static int sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
-{
-	struct hyper_dmabuf_req *req;
-	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
-	int op[5];
-	int i;
-	int ret;
-
-	op[0] = hid.id;
-
-	for (i = 0; i < 3; i++)
-		op[i+1] = hid.rng_key[i];
-
-	op[4] = dmabuf_ops;
-
-	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
-
-	if (!req)
-		return -ENOMEM;
-
-	hyper_dmabuf_create_req(req, HYPER_DMABUF_OPS_TO_SOURCE, &op[0]);
-
-	/* send request and wait for a response */
-	ret = bknd_ops->send_req(HYPER_DMABUF_DOM_ID(hid), req,
-				 WAIT_AFTER_SYNC_REQ);
-
-	if (ret < 0) {
-		dev_dbg(hy_drv_priv->dev,
-			"dmabuf sync request failed:%d\n", req->op[4]);
-	}
-
-	kfree(req);
-
-	return ret;
-}
-
-static int hyper_dmabuf_ops_attach(struct dma_buf *dmabuf,
-				   struct device *dev,
-				   struct dma_buf_attachment *attach)
-{
-	struct imported_sgt_info *imported;
-	int ret;
-
-	if (!attach->dmabuf->priv)
-		return -EINVAL;
-
-	imported = (struct imported_sgt_info *)attach->dmabuf->priv;
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_ATTACH);
-
-	return ret;
-}
-
-static void hyper_dmabuf_ops_detach(struct dma_buf *dmabuf,
-				    struct dma_buf_attachment *attach)
-{
-	struct imported_sgt_info *imported;
-	int ret;
-
-	if (!attach->dmabuf->priv)
-		return;
-
-	imported = (struct imported_sgt_info *)attach->dmabuf->priv;
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_DETACH);
-}
-
-static struct sg_table *hyper_dmabuf_ops_map(
-				struct dma_buf_attachment *attachment,
-				enum dma_data_direction dir)
-{
-	struct sg_table *st;
-	struct imported_sgt_info *imported;
-	struct pages_info *pg_info;
-	int ret;
-
-	if (!attachment->dmabuf->priv)
-		return NULL;
-
-	imported = (struct imported_sgt_info *)attachment->dmabuf->priv;
-
-	/* extract pages from sgt */
-	pg_info = hyper_dmabuf_ext_pgs(imported->sgt);
-
-	if (!pg_info)
-		return NULL;
-
-	/* create a new sg_table with extracted pages */
-	st = hyper_dmabuf_create_sgt(pg_info->pgs, pg_info->frst_ofst,
-				     pg_info->last_len, pg_info->nents);
-	if (!st)
-		goto err_free_sg;
-
-	if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir))
-		goto err_free_sg;
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_MAP);
-
-	kfree(pg_info->pgs);
-	kfree(pg_info);
-
-	return st;
-
-err_free_sg:
-	if (st) {
-		sg_free_table(st);
-		kfree(st);
-	}
-
-	kfree(pg_info->pgs);
-	kfree(pg_info);
-
-	return NULL;
-}
-
-static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
-				   struct sg_table *sg,
-				   enum dma_data_direction dir)
-{
-	struct imported_sgt_info *imported;
-	int ret;
-
-	if (!attachment->dmabuf->priv)
-		return;
-
-	imported = (struct imported_sgt_info *)attachment->dmabuf->priv;
-
-	dma_unmap_sg(attachment->dev, sg->sgl, sg->nents, dir);
-
-	sg_free_table(sg);
-	kfree(sg);
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_UNMAP);
-}
-
-static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
-{
-	struct imported_sgt_info *imported;
-	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
-	int ret;
-	int finish;
-
-	if (!dma_buf->priv)
-		return;
-
-	imported = (struct imported_sgt_info *)dma_buf->priv;
-
-	if (!dmabuf_refcount(imported->dma_buf))
-		imported->dma_buf = NULL;
-
-	imported->importers--;
-
-	if (imported->importers == 0) {
-		bknd_ops->unmap_shared_pages(&imported->refs_info,
-					     imported->nents);
-
-		if (imported->sgt) {
-			sg_free_table(imported->sgt);
-			kfree(imported->sgt);
-			imported->sgt = NULL;
-		}
-	}
-
-	finish = imported && !imported->valid &&
-		 !imported->importers;
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_RELEASE);
-
-	/*
-	 * Check if buffer is still valid and if not remove it
-	 * from imported list. That has to be done after sending
-	 * sync request
-	 */
-	if (finish) {
-		hyper_dmabuf_remove_imported(imported->hid);
-		kfree(imported);
-	}
-}
-
-static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf,
-					     enum dma_data_direction dir)
-{
-	struct imported_sgt_info *imported;
-	int ret;
-
-	if (!dmabuf->priv)
-		return -EINVAL;
-
-	imported = (struct imported_sgt_info *)dmabuf->priv;
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
-
-	return ret;
-}
-
-static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf,
-					   enum dma_data_direction dir)
-{
-	struct imported_sgt_info *imported;
-	int ret;
-
-	if (!dmabuf->priv)
-		return -EINVAL;
-
-	imported = (struct imported_sgt_info *)dmabuf->priv;
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_END_CPU_ACCESS);
-
-	return ret;
-}
-
-static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf,
-					  unsigned long pgnum)
-{
-	struct imported_sgt_info *imported;
-	int ret;
-
-	if (!dmabuf->priv)
-		return NULL;
-
-	imported = (struct imported_sgt_info *)dmabuf->priv;
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KMAP_ATOMIC);
-
-	/* TODO: NULL for now. Need to return the addr of mapped region */
-	return NULL;
-}
-
-static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf,
-					   unsigned long pgnum, void *vaddr)
-{
-	struct imported_sgt_info *imported;
-	int ret;
-
-	if (!dmabuf->priv)
-		return;
-
-	imported = (struct imported_sgt_info *)dmabuf->priv;
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
-}
-
-static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
-{
-	struct imported_sgt_info *imported;
-	int ret;
-
-	if (!dmabuf->priv)
-		return NULL;
-
-	imported = (struct imported_sgt_info *)dmabuf->priv;
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KMAP);
-
-	/* TODO: NULL for now. Need to return the addr of mapped region */
-	return NULL;
-}
-
-static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum,
-				    void *vaddr)
-{
-	struct imported_sgt_info *imported;
-	int ret;
-
-	if (!dmabuf->priv)
-		return;
-
-	imported = (struct imported_sgt_info *)dmabuf->priv;
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KUNMAP);
-}
-
-static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf,
-				 struct vm_area_struct *vma)
-{
-	struct imported_sgt_info *imported;
-	int ret;
-
-	if (!dmabuf->priv)
-		return -EINVAL;
-
-	imported = (struct imported_sgt_info *)dmabuf->priv;
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_MMAP);
-
-	return ret;
-}
-
-static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
-{
-	struct imported_sgt_info *imported;
-	int ret;
-
-	if (!dmabuf->priv)
-		return NULL;
-
-	imported = (struct imported_sgt_info *)dmabuf->priv;
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_VMAP);
-
-	return NULL;
-}
-
-static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
-{
-	struct imported_sgt_info *imported;
-	int ret;
-
-	if (!dmabuf->priv)
-		return;
-
-	imported = (struct imported_sgt_info *)dmabuf->priv;
-
-	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_VUNMAP);
-}
-
-static const struct dma_buf_ops hyper_dmabuf_ops = {
-	.attach = hyper_dmabuf_ops_attach,
-	.detach = hyper_dmabuf_ops_detach,
-	.map_dma_buf = hyper_dmabuf_ops_map,
-	.unmap_dma_buf = hyper_dmabuf_ops_unmap,
-	.release = hyper_dmabuf_ops_release,
-	.begin_cpu_access = hyper_dmabuf_ops_begin_cpu_access,
-	.end_cpu_access = hyper_dmabuf_ops_end_cpu_access,
-	.map_atomic = hyper_dmabuf_ops_kmap_atomic,
-	.unmap_atomic = hyper_dmabuf_ops_kunmap_atomic,
-	.map = hyper_dmabuf_ops_kmap,
-	.unmap = hyper_dmabuf_ops_kunmap,
-	.mmap = hyper_dmabuf_ops_mmap,
-	.vmap = hyper_dmabuf_ops_vmap,
-	.vunmap = hyper_dmabuf_ops_vunmap,
-};
-
-/* exporting dmabuf as fd */
-int hyper_dmabuf_export_fd(struct imported_sgt_info *imported, int flags)
-{
-	int fd = -1;
-
-	/* call hyper_dmabuf_export_dma_buf to create the shadow
-	 * dma_buf, then bind an fd handle to it
-	 */
-	hyper_dmabuf_export_dma_buf(imported);
-
-	if (imported->dma_buf)
-		fd = dma_buf_fd(imported->dma_buf, flags);
-
-	return fd;
-}
-
-void hyper_dmabuf_export_dma_buf(struct imported_sgt_info *imported)
-{
-	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
-
-	exp_info.ops = &hyper_dmabuf_ops;
-
-	/* multiple of PAGE_SIZE, not considering offset */
-	exp_info.size = imported->sgt->nents * PAGE_SIZE;
-	exp_info.flags = /* not sure about flag */ 0;
-	exp_info.priv = imported;
-
-	imported->dma_buf = dma_buf_export(&exp_info);
-}
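
exp_info.size above is rounded up to whole pages. The exact payload
size, from the layout fields the importer already tracks, would be (a
sketch, not what this patch does):

	/* exact byte size of the shared data, cf. the page-rounded
	 * value used above
	 */
	size_t exact = (size_t)imported->nents * PAGE_SIZE
		       - imported->frst_ofst
		       - (PAGE_SIZE - imported->last_len);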
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h
deleted file mode 100644
index c5505a4..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ops.h
+++ /dev/null
@@ -1,32 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_OPS_H__
-#define __HYPER_DMABUF_OPS_H__
-
-int hyper_dmabuf_export_fd(struct imported_sgt_info *imported, int flags);
-
-void hyper_dmabuf_export_dma_buf(struct imported_sgt_info *imported);
-
-#endif /* __HYPER_DMABUF_IMP_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c
deleted file mode 100644
index 1f2f56b..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.c
+++ /dev/null
@@ -1,172 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/dma-buf.h>
-#include <linux/uaccess.h>
-#include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_struct.h"
-#include "hyper_dmabuf_id.h"
-
-#define HYPER_DMABUF_SIZE(nents, first_offset, last_len) \
-	((nents)*PAGE_SIZE - (first_offset) - PAGE_SIZE + (last_len))
-
-int hyper_dmabuf_query_exported(struct exported_sgt_info *exported,
-				int query, unsigned long *info)
-{
-	switch (query) {
-	case HYPER_DMABUF_QUERY_TYPE:
-		*info = EXPORTED;
-		break;
-
-	/* exporting domain of this specific dmabuf */
-	case HYPER_DMABUF_QUERY_EXPORTER:
-		*info = HYPER_DMABUF_DOM_ID(exported->hid);
-		break;
-
-	/* importing domain of this specific dmabuf */
-	case HYPER_DMABUF_QUERY_IMPORTER:
-		*info = exported->rdomid;
-		break;
-
-	/* size of dmabuf in bytes */
-	case HYPER_DMABUF_QUERY_SIZE:
-		*info = exported->dma_buf->size;
-		break;
-
-	/* whether the buffer is used by importer */
-	case HYPER_DMABUF_QUERY_BUSY:
-		*info = (exported->active > 0);
-		break;
-
-	/* whether the buffer is unexported */
-	case HYPER_DMABUF_QUERY_UNEXPORTED:
-		*info = !exported->valid;
-		break;
-
-	/* whether the buffer is scheduled to be unexported */
-	case HYPER_DMABUF_QUERY_DELAYED_UNEXPORTED:
-		*info = exported->unexport_sched;
-		break;
-
-	/* size of private info attached to buffer */
-	case HYPER_DMABUF_QUERY_PRIV_INFO_SIZE:
-		*info = exported->sz_priv;
-		break;
-
-	/* copy private info attached to buffer */
-	case HYPER_DMABUF_QUERY_PRIV_INFO:
-		if (exported->sz_priv > 0) {
-			int n;
-
-			n = copy_to_user((void __user *) *info,
-					exported->priv,
-					exported->sz_priv);
-			if (n != 0)
-				return -EINVAL;
-		}
-		break;
-
-	default:
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-
-int hyper_dmabuf_query_imported(struct imported_sgt_info *imported,
-				int query, unsigned long *info)
-{
-	switch (query) {
-	case HYPER_DMABUF_QUERY_TYPE:
-		*info = IMPORTED;
-		break;
-
-	/* exporting domain of this specific dmabuf */
-	case HYPER_DMABUF_QUERY_EXPORTER:
-		*info = HYPER_DMABUF_DOM_ID(imported->hid);
-		break;
-
-	/* importing domain of this specific dmabuf */
-	case HYPER_DMABUF_QUERY_IMPORTER:
-		*info = hy_drv_priv->domid;
-		break;
-
-	/* size of dmabuf in bytes */
-	case HYPER_DMABUF_QUERY_SIZE:
-		if (imported->dma_buf) {
-			/* if local dma_buf is created (if it's
-			 * ever mapped), retrieve it directly
-			 * from struct dma_buf *
-			 */
-			*info = imported->dma_buf->size;
-		} else {
-			/* calculate it from given nents, frst_ofst
-			 * and last_len
-			 */
-			*info = HYPER_DMABUF_SIZE(imported->nents,
-						  imported->frst_ofst,
-						  imported->last_len);
-		}
-		break;
-
-	/* whether the buffer is used or not */
-	case HYPER_DMABUF_QUERY_BUSY:
-		/* checks if it's used by importer */
-		*info = (imported->importers > 0);
-		break;
-
-	/* whether the buffer is unexported */
-	case HYPER_DMABUF_QUERY_UNEXPORTED:
-		*info = !imported->valid;
-		break;
-
-	/* size of private info attached to buffer */
-	case HYPER_DMABUF_QUERY_PRIV_INFO_SIZE:
-		*info = imported->sz_priv;
-		break;
-
-	/* copy private info attached to buffer */
-	case HYPER_DMABUF_QUERY_PRIV_INFO:
-		if (imported->sz_priv > 0) {
-			int n;
-
-			n = copy_to_user((void __user *)*info,
-					imported->priv,
-					imported->sz_priv);
-			if (n != 0)
-				return -EINVAL;
-		}
-		break;
-
-	default:
-		return -EINVAL;
-	}
-
-	return 0;
-}
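
As a worked example of the size macro above, a buffer spanning three
4 KiB pages with a 512-byte offset into the first page and 100 valid
bytes in the last one comes out to:

	/* HYPER_DMABUF_SIZE(3, 512, 100)
	 *   = 3 * 4096 - 512 - 4096 + 100
	 *   = 7780 bytes (assuming 4 KiB pages)
	 */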
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
deleted file mode 100644
index 65ae738..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
+++ /dev/null
@@ -1,10 +0,0 @@
-#ifndef __HYPER_DMABUF_QUERY_H__
-#define __HYPER_DMABUF_QUERY_H__
-
-int hyper_dmabuf_query_imported(struct imported_sgt_info *imported,
-				int query, unsigned long *info);
-
-int hyper_dmabuf_query_exported(struct exported_sgt_info *exported,
-				int query, unsigned long *info);
-
-#endif // __HYPER_DMABUF_QUERY_H__
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
deleted file mode 100644
index a82fd7b..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.c
+++ /dev/null
@@ -1,322 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/slab.h>
-#include <linux/dma-buf.h>
-#include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_struct.h"
-#include "hyper_dmabuf_list.h"
-#include "hyper_dmabuf_msg.h"
-#include "hyper_dmabuf_id.h"
-#include "hyper_dmabuf_sgl_proc.h"
-
-/* Whenever importer does dma operations from remote domain,
- * a notification is sent to the exporter so that exporter
- * issues equivalent dma operation on the original dma buf
- * for indirect synchronization via shadow operations.
- *
- * All ptrs and references (e.g struct sg_table*,
- * struct dma_buf_attachment) created via these operations on
- * exporter's side are kept in stack (implemented as circular
- * linked-lists) separately so that those can be re-referenced
- * later when unmapping operations are invoked to free those.
- *
- * The very first element at the bottom of each stack is the
- * one created when the buffer was initially exported, so it
- * must not be modified or released by this function.
- */
-int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
-{
-	struct exported_sgt_info *exported;
-	struct sgt_list *sgtl;
-	struct attachment_list *attachl;
-	struct kmap_vaddr_list *va_kmapl;
-	struct vmap_vaddr_list *va_vmapl;
-	int ret;
-
-	/* find the corresponding SGT for the id */
-	exported = hyper_dmabuf_find_exported(hid);
-
-	if (!exported) {
-		dev_err(hy_drv_priv->dev,
-			"dmabuf remote sync::can't find exported list\n");
-		return -ENOENT;
-	}
-
-	switch (ops) {
-	case HYPER_DMABUF_OPS_ATTACH:
-		attachl = kcalloc(1, sizeof(*attachl), GFP_KERNEL);
-
-		if (!attachl)
-			return -ENOMEM;
-
-		attachl->attach = dma_buf_attach(exported->dma_buf,
-						 hy_drv_priv->dev);
-
-		if (IS_ERR(attachl->attach)) {
-			kfree(attachl);
-			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_ATTACH\n");
-			return -ENOMEM;
-		}
-
-		list_add(&attachl->list, &exported->active_attached->list);
-		break;
-
-	case HYPER_DMABUF_OPS_DETACH:
-		if (list_empty(&exported->active_attached->list)) {
-			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_DETACH\n");
-			dev_err(hy_drv_priv->dev,
-				"no more dmabuf attachment left to be detached\n");
-			return -EFAULT;
-		}
-
-		attachl = list_first_entry(&exported->active_attached->list,
-					   struct attachment_list, list);
-
-		dma_buf_detach(exported->dma_buf, attachl->attach);
-		list_del(&attachl->list);
-		kfree(attachl);
-		break;
-
-	case HYPER_DMABUF_OPS_MAP:
-		if (list_empty(&exported->active_attached->list)) {
-			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_MAP\n");
-			dev_err(hy_drv_priv->dev,
-				"no more dmabuf attachment left to be mapped\n");
-			return -EFAULT;
-		}
-
-		attachl = list_first_entry(&exported->active_attached->list,
-					   struct attachment_list, list);
-
-		sgtl = kcalloc(1, sizeof(*sgtl), GFP_KERNEL);
-
-		if (!sgtl)
-			return -ENOMEM;
-
-		sgtl->sgt = dma_buf_map_attachment(attachl->attach,
-						   DMA_BIDIRECTIONAL);
-		if (IS_ERR(sgtl->sgt)) {
-			kfree(sgtl);
-			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_MAP\n");
-			return -ENOMEM;
-		}
-		list_add(&sgtl->list, &exported->active_sgts->list);
-		break;
-
-	case HYPER_DMABUF_OPS_UNMAP:
-		if (list_empty(&exported->active_sgts->list) ||
-		    list_empty(&exported->active_attached->list)) {
-			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_UNMAP\n");
-			dev_err(hy_drv_priv->dev,
-				"no SGT or attach left to be unmapped\n");
-			return -EFAULT;
-		}
-
-		attachl = list_first_entry(&exported->active_attached->list,
-					   struct attachment_list, list);
-		sgtl = list_first_entry(&exported->active_sgts->list,
-					struct sgt_list, list);
-
-		dma_buf_unmap_attachment(attachl->attach, sgtl->sgt,
-					 DMA_BIDIRECTIONAL);
-		list_del(&sgtl->list);
-		kfree(sgtl);
-		break;
-
-	case HYPER_DMABUF_OPS_RELEASE:
-		dev_dbg(hy_drv_priv->dev,
-			"Buffer {id:%d key:%d %d %d} released, ref left: %d\n",
-			 exported->hid.id, exported->hid.rng_key[0],
-			 exported->hid.rng_key[1], exported->hid.rng_key[2],
-			 exported->active - 1);
-
-		exported->active--;
-
-		/* If there are still importers, just break; if not,
-		 * continue with the final cleanup
-		 */
-		if (exported->active)
-			break;
-
-		/* Importer just released buffer fd, check if there is
-		 * any other importer still using it.
-		 * If not and buffer was unexported, clean up shared
-		 * data and remove that buffer.
-		 */
-		dev_dbg(hy_drv_priv->dev,
-			"Buffer {id:%d key:%d %d %d} final released\n",
-			exported->hid.id, exported->hid.rng_key[0],
-			exported->hid.rng_key[1], exported->hid.rng_key[2]);
-
-		if (!exported->valid && !exported->active &&
-		    !exported->unexport_sched) {
-			hyper_dmabuf_cleanup_sgt_info(exported, false);
-			hyper_dmabuf_remove_exported(hid);
-			kfree(exported);
-			/* store hyper_dmabuf_id in the list for reuse */
-			hyper_dmabuf_store_hid(hid);
-		}
-
-		break;
-
-	case HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS:
-		ret = dma_buf_begin_cpu_access(exported->dma_buf,
-					       DMA_BIDIRECTIONAL);
-		if (ret) {
-			dev_err(hy_drv_priv->dev,
-				"HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS\n");
-			return ret;
-		}
-		break;
-
-	case HYPER_DMABUF_OPS_END_CPU_ACCESS:
-		ret = dma_buf_end_cpu_access(exported->dma_buf,
-					     DMA_BIDIRECTIONAL);
-		if (ret) {
-			dev_err(hy_drv_priv->dev,
-				"HYPER_DMABUF_OPS_END_CPU_ACCESS\n");
-			return ret;
-		}
-		break;
-
-	case HYPER_DMABUF_OPS_KMAP_ATOMIC:
-	case HYPER_DMABUF_OPS_KMAP:
-		va_kmapl = kcalloc(1, sizeof(*va_kmapl), GFP_KERNEL);
-		if (!va_kmapl)
-			return -ENOMEM;
-
-		/* dummy kmapping of 1 page */
-		if (ops == HYPER_DMABUF_OPS_KMAP_ATOMIC)
-			va_kmapl->vaddr = dma_buf_kmap_atomic(
-						exported->dma_buf, 1);
-		else
-			va_kmapl->vaddr = dma_buf_kmap(
-						exported->dma_buf, 1);
-
-		if (!va_kmapl->vaddr) {
-			kfree(va_kmapl);
-			dev_err(hy_drv_priv->dev,
-				"HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
-			return -ENOMEM;
-		}
-		list_add(&va_kmapl->list, &exported->va_kmapped->list);
-		break;
-
-	case HYPER_DMABUF_OPS_KUNMAP_ATOMIC:
-	case HYPER_DMABUF_OPS_KUNMAP:
-		if (list_empty(&exported->va_kmapped->list)) {
-			dev_err(hy_drv_priv->dev,
-				"HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
-			dev_err(hy_drv_priv->dev,
-				"no more dmabuf VA to be freed\n");
-			return -EFAULT;
-		}
-
-		va_kmapl = list_first_entry(&exported->va_kmapped->list,
-					    struct kmap_vaddr_list, list);
-		if (!va_kmapl->vaddr) {
-			dev_err(hy_drv_priv->dev,
-				"HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
-			return -EFAULT;
-		}
-
-		/* unmapping 1 page */
-		if (ops == HYPER_DMABUF_OPS_KUNMAP_ATOMIC)
-			dma_buf_kunmap_atomic(exported->dma_buf,
-					      1, va_kmapl->vaddr);
-		else
-			dma_buf_kunmap(exported->dma_buf,
-				       1, va_kmapl->vaddr);
-
-		list_del(&va_kmapl->list);
-		kfree(va_kmapl);
-		break;
-
-	case HYPER_DMABUF_OPS_MMAP:
-		/* currently not supported: looking for a way to create
-		 * a dummy vma
-		 */
-		dev_warn(hy_drv_priv->dev,
-			 "remote sync::synchronized mmap is not supported\n");
-		break;
-
-	case HYPER_DMABUF_OPS_VMAP:
-		va_vmapl = kcalloc(1, sizeof(*va_vmapl), GFP_KERNEL);
-
-		if (!va_vmapl)
-			return -ENOMEM;
-
-		/* dummy vmapping */
-		va_vmapl->vaddr = dma_buf_vmap(exported->dma_buf);
-
-		if (!va_vmapl->vaddr) {
-			kfree(va_vmapl);
-			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_VMAP\n");
-			return -ENOMEM;
-		}
-		list_add(&va_vmapl->list, &exported->va_vmapped->list);
-		break;
-
-	case HYPER_DMABUF_OPS_VUNMAP:
-		if (list_empty(&exported->va_vmapped->list)) {
-			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_VUNMAP\n");
-			dev_err(hy_drv_priv->dev,
-				"no more dmabuf VA to be freed\n");
-			return -EFAULT;
-		}
-		va_vmapl = list_first_entry(&exported->va_vmapped->list,
-					struct vmap_vaddr_list, list);
-		if (!va_vmapl || va_vmapl->vaddr == NULL) {
-			dev_err(hy_drv_priv->dev,
-				"remote sync::HYPER_DMABUF_OPS_VUNMAP\n");
-			return -EFAULT;
-		}
-
-		dma_buf_vunmap(exported->dma_buf, va_vmapl->vaddr);
-
-		list_del(&va_vmapl->list);
-		kfree(va_vmapl);
-		break;
-
-	default:
-		/* program should not get here */
-		break;
-	}
-
-	return 0;
-}
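
For context, the importer side drives this function indirectly: each
shadowed dma_buf op is packed into a message and sent to the exporting
domain, whose message handler ends up calling hyper_dmabuf_remote_sync()
above. A minimal importer-side sketch, assuming the
hyper_dmabuf_create_req() helper and the HYPER_DMABUF_OPS_TO_SOURCE
command from the message layer in this series, plus a
HYPER_DMABUF_DOM_ID() macro to recover the exporter's domid (all of
these names are assumptions, not confirmed by this hunk):

static int sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
{
	struct hyper_dmabuf_req *req;
	int op[5];
	int i, ret;

	op[0] = hid.id;
	for (i = 0; i < 3; i++)
		op[i + 1] = hid.rng_key[i];
	op[4] = dmabuf_ops;

	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
	if (!req)
		return -ENOMEM;

	/* pack the op into a request (assumed msg-layer helper) */
	hyper_dmabuf_create_req(req, HYPER_DMABUF_OPS_TO_SOURCE, &op[0]);

	/* send to the exporting domain and wait for the response */
	ret = hy_drv_priv->bknd_ops->send_req(HYPER_DMABUF_DOM_ID(hid),
					      req, true);
	kfree(req);
	return ret;
}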
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h
deleted file mode 100644
index 36638928..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_remote_sync.h
+++ /dev/null
@@ -1,30 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_REMOTE_SYNC_H__
-#define __HYPER_DMABUF_REMOTE_SYNC_H__
-
-int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops);
-
-#endif // __HYPER_DMABUF_REMOTE_SYNC_H__
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
deleted file mode 100644
index d15eb17..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
+++ /dev/null
@@ -1,255 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/slab.h>
-#include <linux/dma-buf.h>
-#include "hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_struct.h"
-#include "hyper_dmabuf_sgl_proc.h"
-
-#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
-
-/* return the total number of pages referenced by an sgt,
- * used to pre-calculate the number of pages behind a given sgt
- */
-static int get_num_pgs(struct sg_table *sgt)
-{
-	struct scatterlist *sgl;
-	int length, i;
-	/* at least one page */
-	int num_pages = 1;
-
-	sgl = sgt->sgl;
-
-	length = sgl->length - PAGE_SIZE + sgl->offset;
-
-	/* round-up */
-	num_pages += ((length + PAGE_SIZE - 1)/PAGE_SIZE);
-
-	for (i = 1; i < sgt->nents; i++) {
-		sgl = sg_next(sgl);
-
-		/* round-up */
-		num_pages += ((sgl->length + PAGE_SIZE - 1) /
-			     PAGE_SIZE);
-	}
-
-	return num_pages;
-}
-
-/* extract pages directly from struct sg_table */
-struct pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
-{
-	struct pages_info *pg_info;
-	int i, j, k;
-	int length;
-	struct scatterlist *sgl;
-
-	pg_info = kmalloc(sizeof(*pg_info), GFP_KERNEL);
-	if (!pg_info)
-		return NULL;
-
-	pg_info->pgs = kmalloc_array(get_num_pgs(sgt),
-				     sizeof(struct page *),
-				     GFP_KERNEL);
-
-	if (!pg_info->pgs) {
-		kfree(pg_info);
-		return NULL;
-	}
-
-	sgl = sgt->sgl;
-
-	pg_info->nents = 1;
-	pg_info->frst_ofst = sgl->offset;
-	pg_info->pgs[0] = sg_page(sgl);
-	length = sgl->length - PAGE_SIZE + sgl->offset;
-	i = 1;
-
-	while (length > 0) {
-		pg_info->pgs[i] = nth_page(sg_page(sgl), i);
-		length -= PAGE_SIZE;
-		pg_info->nents++;
-		i++;
-	}
-
-	for (j = 1; j < sgt->nents; j++) {
-		sgl = sg_next(sgl);
-		pg_info->pgs[i++] = sg_page(sgl);
-		length = sgl->length - PAGE_SIZE;
-		pg_info->nents++;
-		k = 1;
-
-		while (length > 0) {
-			pg_info->pgs[i++] = nth_page(sg_page(sgl), k++);
-			length -= PAGE_SIZE;
-			pg_info->nents++;
-		}
-	}
-
-	/*
-	 * length at this point is 0 or negative, so the size of
-	 * the last page is simply PAGE_SIZE plus that remainder
-	 */
-	pg_info->last_len = PAGE_SIZE + length;
-
-	return pg_info;
-}
-
-/* create sg_table with given pages and other parameters */
-struct sg_table *hyper_dmabuf_create_sgt(struct page **pgs,
-					 int frst_ofst, int last_len,
-					 int nents)
-{
-	struct sg_table *sgt;
-	struct scatterlist *sgl;
-	int i, ret;
-
-	sgt = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
-	if (!sgt)
-		return NULL;
-
-	ret = sg_alloc_table(sgt, nents, GFP_KERNEL);
-	if (ret) {
-		sg_free_table(sgt);
-		kfree(sgt);
-		return NULL;
-	}
-
-	sgl = sgt->sgl;
-
-	sg_set_page(sgl, pgs[0], PAGE_SIZE-frst_ofst, frst_ofst);
-
-	for (i = 1; i < nents-1; i++) {
-		sgl = sg_next(sgl);
-		sg_set_page(sgl, pgs[i], PAGE_SIZE, 0);
-	}
-
-	/* more than one page */
-	if (nents > 1) {
-		sgl = sg_next(sgl);
-		sg_set_page(sgl, pgs[i], last_len, 0);
-	}
-
-	return sgt;
-}
-
-int hyper_dmabuf_cleanup_sgt_info(struct exported_sgt_info *exported,
-				  int force)
-{
-	struct sgt_list *sgtl;
-	struct attachment_list *attachl;
-	struct kmap_vaddr_list *va_kmapl;
-	struct vmap_vaddr_list *va_vmapl;
-	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
-
-	if (!exported) {
-		dev_err(hy_drv_priv->dev, "invalid hyper_dmabuf_id\n");
-		return -EINVAL;
-	}
-
-	/* if force != 1, sgt_info can be released only if
-	 * there is no activity on the exported dma-buf on the
-	 * importer's side.
-	 */
-	if (!force &&
-	    exported->active) {
-		dev_warn(hy_drv_priv->dev,
-			 "dma-buf is used by importer\n");
-
-		return -EPERM;
-	}
-
-	/* force == 1 is not recommended */
-	while (!list_empty(&exported->va_kmapped->list)) {
-		va_kmapl = list_first_entry(&exported->va_kmapped->list,
-					    struct kmap_vaddr_list, list);
-
-		dma_buf_kunmap(exported->dma_buf, 1, va_kmapl->vaddr);
-		list_del(&va_kmapl->list);
-		kfree(va_kmapl);
-	}
-
-	while (!list_empty(&exported->va_vmapped->list)) {
-		va_vmapl = list_first_entry(&exported->va_vmapped->list,
-					    struct vmap_vaddr_list, list);
-
-		dma_buf_vunmap(exported->dma_buf, va_vmapl->vaddr);
-		list_del(&va_vmapl->list);
-		kfree(va_vmapl);
-	}
-
-	while (!list_empty(&exported->active_sgts->list)) {
-		attachl = list_first_entry(&exported->active_attached->list,
-					   struct attachment_list, list);
-
-		sgtl = list_first_entry(&exported->active_sgts->list,
-					struct sgt_list, list);
-
-		dma_buf_unmap_attachment(attachl->attach, sgtl->sgt,
-					 DMA_BIDIRECTIONAL);
-		list_del(&sgtl->list);
-		kfree(sgtl);
-	}
-
-	while (!list_empty(&exported->active_attached->list)) {
-		attachl = list_first_entry(&exported->active_attached->list,
-					   struct attachment_list, list);
-
-		dma_buf_detach(exported->dma_buf, attachl->attach);
-		list_del(&attachl->list);
-		kfree(attachl);
-	}
-
-	/* Start cleanup of buffer in reverse order to exporting */
-	bknd_ops->unshare_pages(&exported->refs_info, exported->nents);
-
-	/* unmap dma-buf */
-	dma_buf_unmap_attachment(exported->active_attached->attach,
-				 exported->active_sgts->sgt,
-				 DMA_BIDIRECTIONAL);
-
-	/* detach dma-buf */
-	dma_buf_detach(exported->dma_buf, exported->active_attached->attach);
-
-	/* close connection to dma-buf completely */
-	dma_buf_put(exported->dma_buf);
-	exported->dma_buf = NULL;
-
-	kfree(exported->active_sgts);
-	kfree(exported->active_attached);
-	kfree(exported->va_kmapped);
-	kfree(exported->va_vmapped);
-	kfree(exported->priv);
-
-	return 0;
-}
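
Two notes on the helpers above. First, the page-count math: for the
first entry, length collapses to sgl->length - PAGE_SIZE + sgl->offset,
i.e. the bytes that spill past the first page; rounding that up and
adding the initial page gives the span. For example, with 4 KiB pages,
offset 0x800 and length 0x2000 the entry covers bytes 0x800 to 0x27ff,
so 1 + DIV_ROUND_UP(0x1800, 0x1000) = 3 pages. Second,
hyper_dmabuf_ext_pgs() and hyper_dmabuf_create_sgt() are inverses of
each other; a minimal sketch of the round trip (illustrative only, not
part of the driver):

static struct sg_table *clone_sgt(struct sg_table *orig)
{
	struct pages_info *pg_info;
	struct sg_table *sgt;

	/* flatten the sgt into a page array + offset/length metadata */
	pg_info = hyper_dmabuf_ext_pgs(orig);
	if (!pg_info)
		return NULL;

	/* rebuild an equivalent sgt from that metadata */
	sgt = hyper_dmabuf_create_sgt(pg_info->pgs, pg_info->frst_ofst,
				      pg_info->last_len, pg_info->nents);

	kfree(pg_info->pgs);
	kfree(pg_info);
	return sgt;
}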
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
deleted file mode 100644
index 869d982..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
+++ /dev/null
@@ -1,41 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_IMP_H__
-#define __HYPER_DMABUF_IMP_H__
-
-/* extract pages directly from struct sg_table */
-struct pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt);
-
-/* create sg_table with given pages and other parameters */
-struct sg_table *hyper_dmabuf_create_sgt(struct page **pgs,
-					 int frst_ofst, int last_len,
-					 int nents);
-
-int hyper_dmabuf_cleanup_sgt_info(struct exported_sgt_info *exported,
-				  int force);
-
-void hyper_dmabuf_free_sgt(struct sg_table *sgt);
-
-#endif /* __HYPER_DMABUF_IMP_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
deleted file mode 100644
index a11f804..0000000
--- a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
+++ /dev/null
@@ -1,141 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_STRUCT_H__
-#define __HYPER_DMABUF_STRUCT_H__
-
-/* stack of mapped sgts */
-struct sgt_list {
-	struct sg_table *sgt;
-	struct list_head list;
-};
-
-/* stack of attachments */
-struct attachment_list {
-	struct dma_buf_attachment *attach;
-	struct list_head list;
-};
-
-/* stack of vaddr mapped via kmap */
-struct kmap_vaddr_list {
-	void *vaddr;
-	struct list_head list;
-};
-
-/* stack of vaddr mapped via vmap */
-struct vmap_vaddr_list {
-	void *vaddr;
-	struct list_head list;
-};
-
-/* Exporter builds pages_info before sharing pages */
-struct pages_info {
-	int frst_ofst;
-	int last_len;
-	int nents;
-	struct page **pgs;
-};
-
-
-/* Exporter stores references to sgt in a hash table
- * Exporter keeps these references for synchronization
- * and tracking purposes
- */
-struct exported_sgt_info {
-	hyper_dmabuf_id_t hid;
-
-	/* VM ID of importer */
-	int rdomid;
-
-	struct dma_buf *dma_buf;
-	int nents;
-
-	/* list for tracking activities on dma_buf */
-	struct sgt_list *active_sgts;
-	struct attachment_list *active_attached;
-	struct kmap_vaddr_list *va_kmapped;
-	struct vmap_vaddr_list *va_vmapped;
-
-	/* set to 0 when unexported. Importer doesn't
-	 * do a new mapping of buffer if valid == false
-	 */
-	bool valid;
-
-	/* active is a count of importers actively using
-	 * (mapping) the buffer; nonzero while it is in use
-	 */
-	int active;
-
-	/* hypervisor specific reference data for shared pages */
-	void *refs_info;
-
-	struct delayed_work unexport;
-	bool unexport_sched;
-
-	/* list of file pointers associated with all user space
-	 * applications that have exported this same buffer to
-	 * another VM. This needs to be tracked to know whether
-	 * the buffer can be completely freed.
-	 */
-	struct file *filp;
-
-	/* size of private */
-	size_t sz_priv;
-
-	/* private data associated with the exported buffer */
-	char *priv;
-};
-
-/* imported_sgt_info contains information about an imported DMA_BUF.
- * This info is kept in the IMPORT list and asynchronously retrieved
- * and used to map the DMA_BUF on the importer VM's side upon an
- * export fd ioctl request from user space.
- */
-
-struct imported_sgt_info {
-	hyper_dmabuf_id_t hid; /* unique id for shared dmabuf imported */
-
-	/* hypervisor-specific handle to pages */
-	int ref_handle;
-
-	/* offset and size info of DMA_BUF */
-	int frst_ofst;
-	int last_len;
-	int nents;
-
-	struct dma_buf *dma_buf;
-	struct sg_table *sgt;
-
-	void *refs_info;
-	bool valid;
-	int importers;
-
-	/* size of private */
-	size_t sz_priv;
-
-	/* private data associated with the exported buffer */
-	char *priv;
-};
-
-#endif /* __HYPER_DMABUF_STRUCT_H__ */
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
deleted file mode 100644
index 4a073ce..0000000
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
+++ /dev/null
@@ -1,941 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/errno.h>
-#include <linux/slab.h>
-#include <linux/workqueue.h>
-#include <linux/delay.h>
-#include <xen/grant_table.h>
-#include <xen/events.h>
-#include <xen/xenbus.h>
-#include <asm/xen/page.h>
-#include "hyper_dmabuf_xen_comm.h"
-#include "hyper_dmabuf_xen_comm_list.h"
-#include "../hyper_dmabuf_drv.h"
-
-static int export_req_id;
-
-struct hyper_dmabuf_req req_pending = {0};
-
-static void xen_get_domid_delayed(struct work_struct *unused);
-static void xen_init_comm_env_delayed(struct work_struct *unused);
-
-static DECLARE_DELAYED_WORK(get_vm_id_work, xen_get_domid_delayed);
-static DECLARE_DELAYED_WORK(xen_init_comm_env_work, xen_init_comm_env_delayed);
-
-/* Creates an entry in xenstore that will keep details of all
- * exporter rings created by this domain
- */
-static int xen_comm_setup_data_dir(void)
-{
-	char buf[255];
-
-	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf",
-		hy_drv_priv->domid);
-
-	return xenbus_mkdir(XBT_NIL, buf, "");
-}
-
-/* Removes the exporter ring details entry from xenstore.
- * Other domains that have connected to any of the exporter rings
- * created by this domain will be notified about the removal of
- * this entry and will treat it as a signal to clean up the
- * importer rings they created for this domain.
- */
-static int xen_comm_destroy_data_dir(void)
-{
-	char buf[255];
-
-	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf",
-		hy_drv_priv->domid);
-
-	return xenbus_rm(XBT_NIL, buf, "");
-}
-
-/* Adds xenstore entries with details of the exporter ring created
- * for the given remote domain. A special daemon running in dom0 is
- * required to make sure that the remote domain gets the right
- * permissions to access that data.
- */
-static int xen_comm_expose_ring_details(int domid, int rdomid,
-					int gref, int port)
-{
-	char buf[255];
-	int ret;
-
-	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d",
-		domid, rdomid);
-
-	ret = xenbus_printf(XBT_NIL, buf, "grefid", "%d", gref);
-
-	if (ret) {
-		dev_err(hy_drv_priv->dev,
-			"Failed to write xenbus entry %s: %d\n",
-			buf, ret);
-
-		return ret;
-	}
-
-	ret = xenbus_printf(XBT_NIL, buf, "port", "%d", port);
-
-	if (ret) {
-		dev_err(hy_drv_priv->dev,
-			"Failed to write xenbus entry %s: %d\n",
-			buf, ret);
-
-		return ret;
-	}
-
-	return 0;
-}
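/* Illustrative layout, not part of this patch: with domid = 1
 * exporting to rdomid = 2, the two writes above produce
 *
 *   /local/domain/1/data/hyper_dmabuf/2/grefid = "<gref>"
 *   /local/domain/1/data/hyper_dmabuf/2/port   = "<port>"
 *
 * which is exactly what xen_comm_get_ring_details() reads back
 * on the importer's side.
 */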
-
-/*
- * Queries details of ring exposed by remote domain.
- */
-static int xen_comm_get_ring_details(int domid, int rdomid,
-				     int *grefid, int *port)
-{
-	char buf[255];
-	int ret;
-
-	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d",
-		rdomid, domid);
-
-	ret = xenbus_scanf(XBT_NIL, buf, "grefid", "%d", grefid);
-
-	if (ret <= 0) {
-		dev_err(hy_drv_priv->dev,
-			"Failed to read xenbus entry %s: %d\n",
-			buf, ret);
-
-		return ret;
-	}
-
-	ret = xenbus_scanf(XBT_NIL, buf, "port", "%d", port);
-
-	if (ret <= 0) {
-		dev_err(hy_drv_priv->dev,
-			"Failed to read xenbus entry %s: %d\n",
-			buf, ret);
-
-		return ret;
-	}
-
-	return 0;
-}
-
-static void xen_get_domid_delayed(struct work_struct *unused)
-{
-	struct xenbus_transaction xbt;
-	int domid, ret;
-
-	/* schedule another attempt if the driver is still running
-	 * and xenstore has not been initialized yet
-	 */
-	if (likely(xenstored_ready == 0)) {
-		dev_dbg(hy_drv_priv->dev,
-			"Xenstore is not ready yet. Will retry in 500ms\n");
-		schedule_delayed_work(&get_vm_id_work, msecs_to_jiffies(500));
-	} else {
-		xenbus_transaction_start(&xbt);
-
-		ret = xenbus_scanf(xbt, "domid", "", "%d", &domid);
-
-		if (ret <= 0)
-			domid = -1;
-
-		xenbus_transaction_end(xbt, 0);
-
-		/* -1 is an invalid domain id, so try again
-		 * (but only if the driver is still running)
-		 */
-		if (unlikely(domid == -1)) {
-			dev_dbg(hy_drv_priv->dev,
-				"domid==-1 is invalid. Will retry it in 500ms\n");
-			schedule_delayed_work(&get_vm_id_work,
-					      msecs_to_jiffies(500));
-		} else {
-			dev_info(hy_drv_priv->dev,
-				 "Successfully retrieved domid from Xenstore:%d\n",
-				 domid);
-			hy_drv_priv->domid = domid;
-		}
-	}
-}
-
-int xen_be_get_domid(void)
-{
-	struct xenbus_transaction xbt;
-	int domid;
-
-	if (unlikely(xenstored_ready == 0)) {
-		xen_get_domid_delayed(NULL);
-		return -1;
-	}
-
-	xenbus_transaction_start(&xbt);
-
-	if (!xenbus_scanf(xbt, "domid", "", "%d", &domid))
-		domid = -1;
-
-	xenbus_transaction_end(xbt, 0);
-
-	return domid;
-}
-
-static int xen_comm_next_req_id(void)
-{
-	export_req_id++;
-	return export_req_id;
-}
-
-/* For now cache the latest rings as global variables. TODO: keep them in a list */
-static irqreturn_t front_ring_isr(int irq, void *info);
-static irqreturn_t back_ring_isr(int irq, void *info);
-
-/* Callback function invoked on any change of the watched xenbus
- * path. Used for detecting creation/destruction of a remote
- * domain's exporter ring.
- *
- * When a remote domain's exporter ring is detected, an importer
- * ring is created on this domain.
- *
- * When destruction of a remote domain's exporter ring is detected,
- * this domain's importer ring is cleaned up.
- *
- * Destruction can be caused by the remote domain unloading the
- * module, or by its crash or forced shutdown.
- */
-static void remote_dom_exporter_watch_cb(struct xenbus_watch *watch,
-					 const char *path, const char *token)
-{
-	int rdom, ret;
-	uint32_t grefid, port;
-	struct xen_comm_rx_ring_info *ring_info;
-
-	/* Check which domain has changed its exporter rings */
-	ret = sscanf(watch->node, "/local/domain/%d/", &rdom);
-	if (ret <= 0)
-		return;
-
-	/* Check if we have importer ring for given remote domain already
-	 * created
-	 */
-	ring_info = xen_comm_find_rx_ring(rdom);
-
-	/* Try to query the remote domain's exporter ring details. If
-	 * that fails and we have an importer ring, the remote domain
-	 * has cleaned up its exporter ring, so our importer ring is
-	 * no longer useful.
-	 *
-	 * If the query succeeds and we don't have an importer ring,
-	 * the remote domain has set one up for us and we should
-	 * connect to it.
-	 */
-
-	ret = xen_comm_get_ring_details(xen_be_get_domid(),
-					rdom, &grefid, &port);
-
-	if (ring_info && ret != 0) {
-		dev_info(hy_drv_priv->dev,
-			 "Remote exporter closed, cleaning up importer\n");
-		xen_be_cleanup_rx_rbuf(rdom);
-	} else if (!ring_info && ret == 0) {
-		dev_info(hy_drv_priv->dev,
-			 "Registering importer\n");
-		xen_be_init_rx_rbuf(rdom);
-	}
-}
-
-/* exporter needs to generate info for page sharing */
-int xen_be_init_tx_rbuf(int domid)
-{
-	struct xen_comm_tx_ring_info *ring_info;
-	struct xen_comm_sring *sring;
-	struct evtchn_alloc_unbound alloc_unbound;
-	struct evtchn_close close;
-
-	void *shared_ring;
-	int ret;
-
-	/* check if there's any existing tx channel in the table */
-	ring_info = xen_comm_find_tx_ring(domid);
-
-	if (ring_info) {
-		dev_info(hy_drv_priv->dev,
-			 "tx ring ch to domid = %d already exists\ngref = %d, port = %d\n",
-			 ring_info->rdomain, ring_info->gref_ring,
-			 ring_info->port);
-		return 0;
-	}
-
-	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
-
-	if (!ring_info)
-		return -ENOMEM;
-
-	/* from exporter to importer */
-	shared_ring = (void *)__get_free_pages(GFP_KERNEL, 1);
-	if (!shared_ring) {
-		kfree(ring_info);
-		return -ENOMEM;
-	}
-
-	sring = (struct xen_comm_sring *) shared_ring;
-
-	SHARED_RING_INIT(sring);
-
-	FRONT_RING_INIT(&(ring_info->ring_front), sring, PAGE_SIZE);
-
-	ring_info->gref_ring = gnttab_grant_foreign_access(domid,
-						virt_to_mfn(shared_ring),
-						0);
-	if (ring_info->gref_ring < 0) {
-		/* fail to get gref */
-		kfree(ring_info);
-		return -EFAULT;
-	}
-
-	alloc_unbound.dom = DOMID_SELF;
-	alloc_unbound.remote_dom = domid;
-	ret = HYPERVISOR_event_channel_op(EVTCHNOP_alloc_unbound,
-					  &alloc_unbound);
-	if (ret) {
-		dev_err(hy_drv_priv->dev,
-			"Cannot allocate event channel\n");
-		kfree(ring_info);
-		return -EIO;
-	}
-
-	/* setting up interrupt */
-	ret = bind_evtchn_to_irqhandler(alloc_unbound.port,
-					front_ring_isr, 0,
-					NULL, (void *) ring_info);
-
-	if (ret < 0) {
-		dev_err(hy_drv_priv->dev,
-			"Failed to setup event channel\n");
-		close.port = alloc_unbound.port;
-		HYPERVISOR_event_channel_op(EVTCHNOP_close, &close);
-		gnttab_end_foreign_access(ring_info->gref_ring, 0,
-					virt_to_mfn(shared_ring));
-		kfree(ring_info);
-		return -EIO;
-	}
-
-	ring_info->rdomain = domid;
-	ring_info->irq = ret;
-	ring_info->port = alloc_unbound.port;
-
-	mutex_init(&ring_info->lock);
-
-	dev_dbg(hy_drv_priv->dev,
-		"%s: allocated eventchannel gref %d  port: %d  irq: %d\n",
-		__func__,
-		ring_info->gref_ring,
-		ring_info->port,
-		ring_info->irq);
-
-	ret = xen_comm_add_tx_ring(ring_info);
-
-	ret = xen_comm_expose_ring_details(xen_be_get_domid(),
-					   domid,
-					   ring_info->gref_ring,
-					   ring_info->port);
-
-	/* Register watch for remote domain exporter ring.
-	 * When remote domain will setup its exporter ring,
-	 * we will automatically connect our importer ring to it.
-	 */
-	ring_info->watch.callback = remote_dom_exporter_watch_cb;
-	ring_info->watch.node = kmalloc(255, GFP_KERNEL);
-
-	if (!ring_info->watch.node) {
-		kfree(ring_info);
-		return -ENOMEM;
-	}
-
-	sprintf((char *)ring_info->watch.node,
-		"/local/domain/%d/data/hyper_dmabuf/%d/port",
-		domid, xen_be_get_domid());
-
-	register_xenbus_watch(&ring_info->watch);
-
-	return ret;
-}
-
-/* cleans up exporter ring created for given remote domain */
-void xen_be_cleanup_tx_rbuf(int domid)
-{
-	struct xen_comm_tx_ring_info *ring_info;
-	struct xen_comm_rx_ring_info *rx_ring_info;
-
-	/* check if we at all have exporter ring for given rdomain */
-	ring_info = xen_comm_find_tx_ring(domid);
-
-	if (!ring_info)
-		return;
-
-	xen_comm_remove_tx_ring(domid);
-
-	unregister_xenbus_watch(&ring_info->watch);
-	kfree(ring_info->watch.node);
-
-	/* No need to close the communication channel; it is done by
-	 * unbind_from_irqhandler()
-	 */
-	unbind_from_irqhandler(ring_info->irq, (void *) ring_info);
-
-	/* No need to free the sring page; gnttab_end_foreign_access()
-	 * frees it once the other side ends its access
-	 */
-	gnttab_end_foreign_access(ring_info->gref_ring, 0,
-				  (unsigned long) ring_info->ring_front.sring);
-
-	kfree(ring_info);
-
-	rx_ring_info = xen_comm_find_rx_ring(domid);
-	if (!rx_ring_info)
-		return;
-
-	BACK_RING_INIT(&(rx_ring_info->ring_back),
-		       rx_ring_info->ring_back.sring,
-		       PAGE_SIZE);
-}
-
-/* importer needs to know about shared page and port numbers for
- * ring buffer and event channel
- */
-int xen_be_init_rx_rbuf(int domid)
-{
-	struct xen_comm_rx_ring_info *ring_info;
-	struct xen_comm_sring *sring;
-
-	struct page *shared_ring;
-
-	struct gnttab_map_grant_ref *map_ops;
-
-	int ret;
-	int rx_gref, rx_port;
-
-	/* check if there's existing rx ring channel */
-	ring_info = xen_comm_find_rx_ring(domid);
-
-	if (ring_info) {
-		dev_info(hy_drv_priv->dev,
-			 "rx ring ch from domid = %d already exist\n",
-			 ring_info->sdomain);
-
-		return 0;
-	}
-
-	ret = xen_comm_get_ring_details(xen_be_get_domid(), domid,
-					&rx_gref, &rx_port);
-
-	if (ret) {
-		dev_err(hy_drv_priv->dev,
-			"Domain %d has not created exporter ring for current domain\n",
-			domid);
-
-		return ret;
-	}
-
-	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
-
-	if (!ring_info)
-		return -ENOMEM;
-
-	ring_info->sdomain = domid;
-	ring_info->evtchn = rx_port;
-
-	map_ops = kmalloc(sizeof(*map_ops), GFP_KERNEL);
-
-	if (!map_ops) {
-		ret = -ENOMEM;
-		goto fail_no_map_ops;
-	}
-
-	if (gnttab_alloc_pages(1, &shared_ring)) {
-		ret = -ENOMEM;
-		goto fail_others;
-	}
-
-	gnttab_set_map_op(&map_ops[0],
-			  (unsigned long)pfn_to_kaddr(
-					page_to_pfn(shared_ring)),
-			  GNTMAP_host_map, rx_gref, domid);
-
-	gnttab_set_unmap_op(&ring_info->unmap_op,
-			    (unsigned long)pfn_to_kaddr(
-					page_to_pfn(shared_ring)),
-			    GNTMAP_host_map, -1);
-
-	ret = gnttab_map_refs(map_ops, NULL, &shared_ring, 1);
-	if (ret < 0) {
-		dev_err(hy_drv_priv->dev, "Cannot map ring\n");
-		ret = -EFAULT;
-		goto fail_others;
-	}
-
-	if (map_ops[0].status) {
-		dev_err(hy_drv_priv->dev, "Ring mapping failed\n");
-		ret = -EFAULT;
-		goto fail_others;
-	} else {
-		ring_info->unmap_op.handle = map_ops[0].handle;
-	}
-
-	kfree(map_ops);
-
-	sring = (struct xen_comm_sring *)pfn_to_kaddr(page_to_pfn(shared_ring));
-
-	BACK_RING_INIT(&ring_info->ring_back, sring, PAGE_SIZE);
-
-	ret = bind_interdomain_evtchn_to_irq(domid, rx_port);
-
-	if (ret < 0) {
-		ret = -EIO;
-		goto fail_others;
-	}
-
-	ring_info->irq = ret;
-
-	dev_dbg(hy_drv_priv->dev,
-		"%s: bound to eventchannel port: %d  irq: %d\n", __func__,
-		rx_port,
-		ring_info->irq);
-
-	ret = xen_comm_add_rx_ring(ring_info);
-
-	/* Set up the communication channel in the opposite direction */
-	if (!xen_comm_find_tx_ring(domid))
-		ret = xen_be_init_tx_rbuf(domid);
-
-	ret = request_irq(ring_info->irq,
-			  back_ring_isr, 0,
-			  NULL, (void *)ring_info);
-
-	return ret;
-
-fail_others:
-	kfree(map_ops);
-
-fail_no_map_ops:
-	kfree(ring_info);
-
-	return ret;
-}
-
-/* cleans up importer ring created for given source domain */
-void xen_be_cleanup_rx_rbuf(int domid)
-{
-	struct xen_comm_rx_ring_info *ring_info;
-	struct xen_comm_tx_ring_info *tx_ring_info;
-	struct page *shared_ring;
-
-	/* check if we have importer ring created for given sdomain */
-	ring_info = xen_comm_find_rx_ring(domid);
-
-	if (!ring_info)
-		return;
-
-	xen_comm_remove_rx_ring(domid);
-
-	/* no need to close event channel, will be done by that function */
-	unbind_from_irqhandler(ring_info->irq, (void *)ring_info);
-
-	/* unmapping shared ring page */
-	shared_ring = virt_to_page(ring_info->ring_back.sring);
-	gnttab_unmap_refs(&ring_info->unmap_op, NULL, &shared_ring, 1);
-	gnttab_free_pages(1, &shared_ring);
-
-	kfree(ring_info);
-
-	tx_ring_info = xen_comm_find_tx_ring(domid);
-	if (!tx_ring_info)
-		return;
-
-	SHARED_RING_INIT(tx_ring_info->ring_front.sring);
-	FRONT_RING_INIT(&(tx_ring_info->ring_front),
-			tx_ring_info->ring_front.sring,
-			PAGE_SIZE);
-}
-
-#ifdef CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD
-
-static void xen_rx_ch_add_delayed(struct work_struct *unused);
-
-static DECLARE_DELAYED_WORK(xen_rx_ch_auto_add_work, xen_rx_ch_add_delayed);
-
-#define DOMID_SCAN_START	1	/*  domid = 1 */
-#define DOMID_SCAN_END		10	/* domid = 10 */
-
-static void xen_rx_ch_add_delayed(struct work_struct *unused)
-{
-	int ret;
-	char buf[128];
-	int i, dummy;
-
-	dev_dbg(hy_drv_priv->dev,
-		"Scanning new tx channel coming from another domain\n");
-
-	/* check other domains and schedule another work if driver
-	 * is still running and backend is valid
-	 */
-	if (hy_drv_priv &&
-	    hy_drv_priv->initialized) {
-		for (i = DOMID_SCAN_START; i < DOMID_SCAN_END + 1; i++) {
-			if (i == hy_drv_priv->domid)
-				continue;
-
-			sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d",
-				i, hy_drv_priv->domid);
-
-			ret = xenbus_scanf(XBT_NIL, buf, "port", "%d", &dummy);
-
-			if (ret > 0) {
-				if (xen_comm_find_rx_ring(i) != NULL)
-					continue;
-
-				ret = xen_be_init_rx_rbuf(i);
-
-				if (!ret)
-					dev_info(hy_drv_priv->dev,
-						 "Done rx ch init for VM %d\n",
-						 i);
-			}
-		}
-
-		/* check every 10 seconds */
-		schedule_delayed_work(&xen_rx_ch_auto_add_work,
-				      msecs_to_jiffies(10000));
-	}
-}
-
-#endif /* CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD */
-
-void xen_init_comm_env_delayed(struct work_struct *unused)
-{
-	int ret;
-
-	/* schedule another work item if the driver is still running
-	 * and xenstore hasn't been initialized or the domid hasn't
-	 * been correctly retrieved yet
-	 */
-	if (likely(xenstored_ready == 0 ||
-	    hy_drv_priv->domid == -1)) {
-		dev_dbg(hy_drv_priv->dev,
-			"Xenstore is not ready yet. Will retry in 500ms\n");
-		schedule_delayed_work(&xen_init_comm_env_work,
-				      msecs_to_jiffies(500));
-	} else {
-		ret = xen_comm_setup_data_dir();
-		if (ret < 0) {
-			dev_err(hy_drv_priv->dev,
-				"Failed to create data dir in Xenstore\n");
-		} else {
-			dev_info(hy_drv_priv->dev,
-				"Successfully finished comm env init\n");
-			hy_drv_priv->initialized = true;
-
-#ifdef CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD
-			xen_rx_ch_add_delayed(NULL);
-#endif /* CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD */
-		}
-	}
-}
-
-int xen_be_init_comm_env(void)
-{
-	int ret;
-
-	xen_comm_ring_table_init();
-
-	if (unlikely(xenstored_ready == 0 ||
-	    hy_drv_priv->domid == -1)) {
-		xen_init_comm_env_delayed(NULL);
-		return -1;
-	}
-
-	ret = xen_comm_setup_data_dir();
-	if (ret < 0) {
-		dev_err(hy_drv_priv->dev,
-			"Failed to create data dir in Xenstore\n");
-	} else {
-		dev_info(hy_drv_priv->dev,
-			"Successfully finished comm env initialization\n");
-
-		hy_drv_priv->initialized = true;
-	}
-
-	return ret;
-}
-
-/* cleans up all tx/rx rings */
-static void xen_be_cleanup_all_rbufs(void)
-{
-	xen_comm_foreach_tx_ring(xen_be_cleanup_tx_rbuf);
-	xen_comm_foreach_rx_ring(xen_be_cleanup_rx_rbuf);
-}
-
-void xen_be_destroy_comm(void)
-{
-	xen_be_cleanup_all_rbufs();
-	xen_comm_destroy_data_dir();
-}
-
-int xen_be_send_req(int domid, struct hyper_dmabuf_req *req,
-			      int wait)
-{
-	struct xen_comm_front_ring *ring;
-	struct hyper_dmabuf_req *new_req;
-	struct xen_comm_tx_ring_info *ring_info;
-	int notify;
-
-	struct timeval tv_start, tv_end;
-	struct timeval tv_diff;
-
-	int timeout = 1000;
-
-	/* find a ring info for the channel */
-	ring_info = xen_comm_find_tx_ring(domid);
-	if (!ring_info) {
-		dev_err(hy_drv_priv->dev,
-			"Can't find ring info for the channel\n");
-		return -ENOENT;
-	}
-
-
-	ring = &ring_info->ring_front;
-
-	do_gettimeofday(&tv_start);
-
-	while (RING_FULL(ring)) {
-		dev_dbg(hy_drv_priv->dev, "RING_FULL\n");
-
-		if (timeout == 0) {
-			dev_err(hy_drv_priv->dev,
-				"Timeout while waiting for an entry in the ring\n");
-			return -EIO;
-		}
-		usleep_range(100, 120);
-		timeout--;
-	}
-
-	timeout = 1000;
-
-	mutex_lock(&ring_info->lock);
-
-	new_req = RING_GET_REQUEST(ring, ring->req_prod_pvt);
-	if (!new_req) {
-		mutex_unlock(&ring_info->lock);
-		dev_err(hy_drv_priv->dev,
-			"NULL REQUEST\n");
-		return -EIO;
-	}
-
-	req->req_id = xen_comm_next_req_id();
-
-	/* update req_pending with current request */
-	memcpy(&req_pending, req, sizeof(req_pending));
-
-	/* pass current request to the ring */
-	memcpy(new_req, req, sizeof(*new_req));
-
-	ring->req_prod_pvt++;
-
-	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(ring, notify);
-	if (notify)
-		notify_remote_via_irq(ring_info->irq);
-
-	if (wait) {
-		while (timeout--) {
-			if (req_pending.stat !=
-			    HYPER_DMABUF_REQ_NOT_RESPONDED)
-				break;
-			usleep_range(100, 120);
-		}
-
-		if (timeout < 0) {
-			mutex_unlock(&ring_info->lock);
-			dev_err(hy_drv_priv->dev,
-				"request timed-out\n");
-			return -EBUSY;
-		}
-
-		do_gettimeofday(&tv_end);
-
-		/* checking time duration for round-trip of a request
-		 * for debugging
-		 */
-		if (tv_end.tv_usec >= tv_start.tv_usec) {
-			tv_diff.tv_sec = tv_end.tv_sec-tv_start.tv_sec;
-			tv_diff.tv_usec = tv_end.tv_usec-tv_start.tv_usec;
-		} else {
-			tv_diff.tv_sec = tv_end.tv_sec-tv_start.tv_sec-1;
-			tv_diff.tv_usec = tv_end.tv_usec+1000000-
-					  tv_start.tv_usec;
-		}
-
-		if (tv_diff.tv_sec != 0 || tv_diff.tv_usec > 16000)
-			dev_dbg(hy_drv_priv->dev,
-				"send_req:time diff: %ld sec, %ld usec\n",
-				tv_diff.tv_sec, tv_diff.tv_usec);
-	}
-
-	mutex_unlock(&ring_info->lock);
-
-	return 0;
-}
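/* Illustrative caller, not part of this patch: the message layer is
 * assumed to fill in the request (cmd, operands) before calling here,
 * e.g.
 *
 *	ret = xen_be_send_req(domid, &req, true);
 *	if (ret)
 *		dev_err(hy_drv_priv->dev,
 *			"request to domain %d failed: %d\n", domid, ret);
 *
 * With wait == true the call blocks (up to roughly 100 ms) until the
 * response ISR updates req_pending.stat.
 */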
-
-/* ISR for handling request */
-static irqreturn_t back_ring_isr(int irq, void *info)
-{
-	RING_IDX rc, rp;
-	struct hyper_dmabuf_req req;
-	struct hyper_dmabuf_resp resp;
-
-	int notify, more_to_do;
-	int ret;
-
-	struct xen_comm_rx_ring_info *ring_info;
-	struct xen_comm_back_ring *ring;
-
-	ring_info = (struct xen_comm_rx_ring_info *)info;
-	ring = &ring_info->ring_back;
-
-	dev_dbg(hy_drv_priv->dev, "%s\n", __func__);
-
-	do {
-		rc = ring->req_cons;
-		rp = ring->sring->req_prod;
-		more_to_do = 0;
-		while (rc != rp) {
-			if (RING_REQUEST_CONS_OVERFLOW(ring, rc))
-				break;
-
-			memcpy(&req, RING_GET_REQUEST(ring, rc), sizeof(req));
-			ring->req_cons = ++rc;
-
-			ret = hyper_dmabuf_msg_parse(ring_info->sdomain, &req);
-
-			if (ret > 0) {
-				/* preparing a response for the request and
-				 * send it to the requester
-				 */
-				memcpy(&resp, &req, sizeof(resp));
-				memcpy(RING_GET_RESPONSE(ring,
-							 ring->rsp_prod_pvt),
-							 &resp, sizeof(resp));
-				ring->rsp_prod_pvt++;
-
-				dev_dbg(hy_drv_priv->dev,
-					"responding to exporter for req:%d\n",
-					resp.resp_id);
-
-				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(ring,
-								     notify);
-
-				if (notify)
-					notify_remote_via_irq(ring_info->irq);
-			}
-
-			RING_FINAL_CHECK_FOR_REQUESTS(ring, more_to_do);
-		}
-	} while (more_to_do);
-
-	return IRQ_HANDLED;
-}
-
-/* ISR for handling responses */
-static irqreturn_t front_ring_isr(int irq, void *info)
-{
-	/* front ring only cares about responses from back */
-	struct hyper_dmabuf_resp *resp;
-	RING_IDX i, rp;
-	int more_to_do, ret;
-
-	struct xen_comm_tx_ring_info *ring_info;
-	struct xen_comm_front_ring *ring;
-
-	ring_info = (struct xen_comm_tx_ring_info *)info;
-	ring = &ring_info->ring_front;
-
-	dev_dbg(hy_drv_priv->dev, "%s\n", __func__);
-
-	do {
-		more_to_do = 0;
-		rp = ring->sring->rsp_prod;
-		for (i = ring->rsp_cons; i != rp; i++) {
-			resp = RING_GET_RESPONSE(ring, i);
-
-			/* update pending request's status with what is
-			 * in the response
-			 */
-
-			dev_dbg(hy_drv_priv->dev,
-				"getting response from importer\n");
-
-			if (req_pending.req_id == resp->resp_id)
-				req_pending.stat = resp->stat;
-
-			if (resp->stat == HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP) {
-				/* parsing response */
-				ret = hyper_dmabuf_msg_parse(ring_info->rdomain,
-					(struct hyper_dmabuf_req *)resp);
-
-				if (ret < 0) {
-					dev_err(hy_drv_priv->dev,
-						"err while parsing resp\n");
-				}
-			} else if (resp->stat == HYPER_DMABUF_REQ_PROCESSED) {
-				/* for debugging dma_buf remote synch */
-				dev_dbg(hy_drv_priv->dev,
-					"original request = 0x%x\n", resp->cmd);
-				dev_dbg(hy_drv_priv->dev,
-					"got HYPER_DMABUF_REQ_PROCESSED\n");
-			} else if (resp->stat == HYPER_DMABUF_REQ_ERROR) {
-				/* for debugging dma_buf remote synch */
-				dev_dbg(hy_drv_priv->dev,
-					"original request = 0x%x\n", resp->cmd);
-				dev_dbg(hy_drv_priv->dev,
-					"got HYPER_DMABUF_REQ_ERROR\n");
-			}
-		}
-
-		ring->rsp_cons = i;
-
-		if (i != ring->req_prod_pvt)
-			RING_FINAL_CHECK_FOR_RESPONSES(ring, more_to_do);
-		else
-			ring->sring->rsp_event = i+1;
-
-	} while (more_to_do);
-
-	return IRQ_HANDLED;
-}
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
deleted file mode 100644
index 70a2b70..0000000
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
+++ /dev/null
@@ -1,78 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_XEN_COMM_H__
-#define __HYPER_DMABUF_XEN_COMM_H__
-
-#include "xen/interface/io/ring.h"
-#include "xen/xenbus.h"
-#include "../hyper_dmabuf_msg.h"
-
-extern int xenstored_ready;
-
-DEFINE_RING_TYPES(xen_comm, struct hyper_dmabuf_req, struct hyper_dmabuf_resp);
-
-struct xen_comm_tx_ring_info {
-	struct xen_comm_front_ring ring_front;
-	int rdomain;
-	int gref_ring;
-	int irq;
-	int port;
-	struct mutex lock;
-	struct xenbus_watch watch;
-};
-
-struct xen_comm_rx_ring_info {
-	int sdomain;
-	int irq;
-	int evtchn;
-	struct xen_comm_back_ring ring_back;
-	struct gnttab_unmap_grant_ref unmap_op;
-};
-
-int xen_be_get_domid(void);
-
-int xen_be_init_comm_env(void);
-
-/* exporter needs to generate info for page sharing */
-int xen_be_init_tx_rbuf(int domid);
-
-/* importer needs to know about shared page and port numbers
- * for ring buffer and event channel
- */
-int xen_be_init_rx_rbuf(int domid);
-
-/* cleans up exporter ring created for given domain */
-void xen_be_cleanup_tx_rbuf(int domid);
-
-/* cleans up importer ring created for given domain */
-void xen_be_cleanup_rx_rbuf(int domid);
-
-void xen_be_destroy_comm(void);
-
-/* send request to the remote domain */
-int xen_be_send_req(int domid, struct hyper_dmabuf_req *req,
-		    int wait);
-
-#endif /* __HYPER_DMABUF_XEN_COMM_H__ */
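
For context, the DEFINE_RING_TYPES(xen_comm, ...) invocation above is
what generates the struct xen_comm_sring, struct xen_comm_front_ring
and struct xen_comm_back_ring types used by the two ring-info
structures. A rough sketch of the generated shapes (simplified from
xen/interface/io/ring.h; the real macros add sizing helpers and keep
the ring array filling the shared page):

struct xen_comm_sring {			/* one page, shared between domains */
	RING_IDX req_prod, req_event;	/* request producer/event indices */
	RING_IDX rsp_prod, rsp_event;	/* response producer/event indices */
	uint8_t pad[48];
	union {
		struct hyper_dmabuf_req req;
		struct hyper_dmabuf_resp rsp;
	} ring[1];			/* sized to fill the page at init */
};

struct xen_comm_front_ring {		/* exporter's private view */
	RING_IDX req_prod_pvt;
	RING_IDX rsp_cons;
	unsigned int nr_ents;
	struct xen_comm_sring *sring;
};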
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
deleted file mode 100644
index 15023db..0000000
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
+++ /dev/null
@@ -1,158 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/slab.h>
-#include <linux/cdev.h>
-#include <linux/hashtable.h>
-#include <xen/grant_table.h>
-#include "../hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_xen_comm.h"
-#include "hyper_dmabuf_xen_comm_list.h"
-
-DECLARE_HASHTABLE(xen_comm_tx_ring_hash, MAX_ENTRY_TX_RING);
-DECLARE_HASHTABLE(xen_comm_rx_ring_hash, MAX_ENTRY_RX_RING);
-
-void xen_comm_ring_table_init(void)
-{
-	hash_init(xen_comm_rx_ring_hash);
-	hash_init(xen_comm_tx_ring_hash);
-}
-
-int xen_comm_add_tx_ring(struct xen_comm_tx_ring_info *ring_info)
-{
-	struct xen_comm_tx_ring_info_entry *info_entry;
-
-	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
-
-	if (!info_entry)
-		return -ENOMEM;
-
-	info_entry->info = ring_info;
-
-	hash_add(xen_comm_tx_ring_hash, &info_entry->node,
-		info_entry->info->rdomain);
-
-	return 0;
-}
-
-int xen_comm_add_rx_ring(struct xen_comm_rx_ring_info *ring_info)
-{
-	struct xen_comm_rx_ring_info_entry *info_entry;
-
-	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
-
-	if (!info_entry)
-		return -ENOMEM;
-
-	info_entry->info = ring_info;
-
-	hash_add(xen_comm_rx_ring_hash, &info_entry->node,
-		info_entry->info->sdomain);
-
-	return 0;
-}
-
-struct xen_comm_tx_ring_info *xen_comm_find_tx_ring(int domid)
-{
-	struct xen_comm_tx_ring_info_entry *info_entry;
-	int bkt;
-
-	hash_for_each(xen_comm_tx_ring_hash, bkt, info_entry, node)
-		if (info_entry->info->rdomain == domid)
-			return info_entry->info;
-
-	return NULL;
-}
-
-struct xen_comm_rx_ring_info *xen_comm_find_rx_ring(int domid)
-{
-	struct xen_comm_rx_ring_info_entry *info_entry;
-	int bkt;
-
-	hash_for_each(xen_comm_rx_ring_hash, bkt, info_entry, node)
-		if (info_entry->info->sdomain == domid)
-			return info_entry->info;
-
-	return NULL;
-}
-
-int xen_comm_remove_tx_ring(int domid)
-{
-	struct xen_comm_tx_ring_info_entry *info_entry;
-	int bkt;
-
-	hash_for_each(xen_comm_tx_ring_hash, bkt, info_entry, node)
-		if (info_entry->info->rdomain == domid) {
-			hash_del(&info_entry->node);
-			kfree(info_entry);
-			return 0;
-		}
-
-	return -ENOENT;
-}
-
-int xen_comm_remove_rx_ring(int domid)
-{
-	struct xen_comm_rx_ring_info_entry *info_entry;
-	int bkt;
-
-	hash_for_each(xen_comm_rx_ring_hash, bkt, info_entry, node)
-		if (info_entry->info->sdomain == domid) {
-			hash_del(&info_entry->node);
-			kfree(info_entry);
-			return 0;
-		}
-
-	return -ENOENT;
-}
-
-void xen_comm_foreach_tx_ring(void (*func)(int domid))
-{
-	struct xen_comm_tx_ring_info_entry *info_entry;
-	struct hlist_node *tmp;
-	int bkt;
-
-	hash_for_each_safe(xen_comm_tx_ring_hash, bkt, tmp,
-			   info_entry, node) {
-		func(info_entry->info->rdomain);
-	}
-}
-
-void xen_comm_foreach_rx_ring(void (*func)(int domid))
-{
-	struct xen_comm_rx_ring_info_entry *info_entry;
-	struct hlist_node *tmp;
-	int bkt;
-
-	hash_for_each_safe(xen_comm_rx_ring_hash, bkt, tmp,
-			   info_entry, node) {
-		func(info_entry->info->sdomain);
-	}
-}
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
deleted file mode 100644
index 8502fe7..0000000
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
+++ /dev/null
@@ -1,67 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_XEN_COMM_LIST_H__
-#define __HYPER_DMABUF_XEN_COMM_LIST_H__
-
-/* number of bits to be used for exported dmabufs hash table */
-#define MAX_ENTRY_TX_RING 7
-/* number of bits to be used for imported dmabufs hash table */
-#define MAX_ENTRY_RX_RING 7
-
-struct xen_comm_tx_ring_info_entry {
-	struct xen_comm_tx_ring_info *info;
-	struct hlist_node node;
-};
-
-struct xen_comm_rx_ring_info_entry {
-	struct xen_comm_rx_ring_info *info;
-	struct hlist_node node;
-};
-
-void xen_comm_ring_table_init(void);
-
-int xen_comm_add_tx_ring(struct xen_comm_tx_ring_info *ring_info);
-
-int xen_comm_add_rx_ring(struct xen_comm_rx_ring_info *ring_info);
-
-int xen_comm_remove_tx_ring(int domid);
-
-int xen_comm_remove_rx_ring(int domid);
-
-struct xen_comm_tx_ring_info *xen_comm_find_tx_ring(int domid);
-
-struct xen_comm_rx_ring_info *xen_comm_find_rx_ring(int domid);
-
-/* iterates over all exporter rings and calls provided
- * function for each of them
- */
-void xen_comm_foreach_tx_ring(void (*func)(int domid));
-
-/* iterates over all importer rings and calls provided
- * function for each of them
- */
-void xen_comm_foreach_rx_ring(void (*func)(int domid));
-
-#endif // __HYPER_DMABUF_XEN_COMM_LIST_H__
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
deleted file mode 100644
index 14ed3bc..0000000
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.c
+++ /dev/null
@@ -1,46 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include "../hyper_dmabuf_drv.h"
-#include "hyper_dmabuf_xen_comm.h"
-#include "hyper_dmabuf_xen_shm.h"
-
-struct hyper_dmabuf_bknd_ops xen_bknd_ops = {
-	.init = NULL, /* not needed for xen */
-	.cleanup = NULL, /* not needed for xen */
-	.get_vm_id = xen_be_get_domid,
-	.share_pages = xen_be_share_pages,
-	.unshare_pages = xen_be_unshare_pages,
-	.map_shared_pages = (void *)xen_be_map_shared_pages,
-	.unmap_shared_pages = xen_be_unmap_shared_pages,
-	.init_comm_env = xen_be_init_comm_env,
-	.destroy_comm = xen_be_destroy_comm,
-	.init_rx_ch = xen_be_init_rx_rbuf,
-	.init_tx_ch = xen_be_init_tx_rbuf,
-	.send_req = xen_be_send_req,
-};
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
deleted file mode 100644
index a4902b7..0000000
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_drv.h
+++ /dev/null
@@ -1,53 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_XEN_DRV_H__
-#define __HYPER_DMABUF_XEN_DRV_H__
-#include <xen/interface/grant_table.h>
-
-extern struct hyper_dmabuf_bknd_ops xen_bknd_ops;
-
-/* Main purpose of this structure is to keep
- * all references created or acquired for sharing
- * pages with another domain for freeing those later
- * when unsharing.
- */
-struct xen_shared_pages_info {
-	/* top level refid */
-	grant_ref_t lvl3_gref;
-
-	/* page of top level addressing, it contains refids of 2nd lvl pages */
-	grant_ref_t *lvl3_table;
-
-	/* table of 2nd level pages, that contains refids to data pages */
-	grant_ref_t *lvl2_table;
-
-	/* unmap ops for mapped pages */
-	struct gnttab_unmap_grant_ref *unmap_ops;
-
-	/* data pages to be unmapped */
-	struct page **data_pages;
-};
-
-#endif // __HYPER_DMABUF_XEN_COMM_H__
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
deleted file mode 100644
index c6a15f1..0000000
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.c
+++ /dev/null
@@ -1,525 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- *    Dongwon Kim <dongwon.kim@intel.com>
- *    Mateusz Polrola <mateuszx.potrola@intel.com>
- *
- */
-
-#include <linux/slab.h>
-#include <xen/grant_table.h>
-#include <asm/xen/page.h>
-#include "hyper_dmabuf_xen_drv.h"
-#include "../hyper_dmabuf_drv.h"
-
-#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
-
-/*
- * Creates 2 level page directory structure for referencing shared pages.
- * Top level page is a single page that contains up to 1024 refids that
- * point to 2nd level pages.
- *
- * Each 2nd level page contains up to 1024 refids that point to shared
- * data pages.
- *
- * There will always be one top level page and number of 2nd level pages
- * depends on number of shared data pages.
- *
- *      3rd level page                2nd level pages            Data pages
- * +-------------------------+   ┌>+--------------------+ ┌>+------------+
- * |2nd level page 0 refid   |---┘ |Data page 0 refid   |-┘ |Data page 0 |
- * |2nd level page 1 refid   |---┐ |Data page 1 refid   |-┐ +------------+
- * |           ...           |   | |     ....           | |
- * |2nd level page 1023 refid|-┐ | |Data page 1023 refid| └>+------------+
- * +-------------------------+ | | +--------------------+   |Data page 1 |
- *                             | |                          +------------+
- *                             | └>+--------------------+
- *                             |   |Data page 1024 refid|
- *                             |   |Data page 1025 refid|
- *                             |   |       ...          |
- *                             |   |Data page 2047 refid|
- *                             |   +--------------------+
- *                             |
- *                             |        .....
- *                             └-->+-----------------------+
- *                                 |Data page 1047552 refid|
- *                                 |Data page 1047553 refid|
- *                                 |       ...             |
- *                                 |Data page 1048575 refid|
- *                                 +-----------------------+
- *
- * Using such 2 level structure it is possible to reference up to 4GB of
- * shared data using single refid pointing to top level page.
- *
- * Returns refid of top level page.
- */
-int xen_be_share_pages(struct page **pages, int domid, int nents,
-		       void **refs_info)
-{
-	grant_ref_t lvl3_gref;
-	grant_ref_t *lvl2_table;
-	grant_ref_t *lvl3_table;
-
-	/*
-	 * Calculate number of pages needed for 2nd level addresing:
-	 */
-	int n_lvl2_grefs = (nents/REFS_PER_PAGE +
-			   ((nents % REFS_PER_PAGE) ? 1 : 0));
-
-	struct xen_shared_pages_info *sh_pages_info;
-	int i;
-
-	lvl3_table = (grant_ref_t *)__get_free_pages(GFP_KERNEL, 1);
-	lvl2_table = (grant_ref_t *)__get_free_pages(GFP_KERNEL, n_lvl2_grefs);
-
-	sh_pages_info = kmalloc(sizeof(*sh_pages_info), GFP_KERNEL);
-
-	if (!sh_pages_info)
-		return -ENOMEM;
-
-	*refs_info = (void *)sh_pages_info;
-
-	/* share data pages in readonly mode for security */
-	for (i = 0; i < nents; i++) {
-		lvl2_table[i] = gnttab_grant_foreign_access(domid,
-					pfn_to_mfn(page_to_pfn(pages[i])),
-					true /* read only */);
-		if (lvl2_table[i] == -ENOSPC) {
-			dev_err(hy_drv_priv->dev,
-				"No more space left in grant table\n");
-
-			/* Unshare all already shared pages for lvl2 */
-			while (i--) {
-				gnttab_end_foreign_access_ref(lvl2_table[i], 0);
-				gnttab_free_grant_reference(lvl2_table[i]);
-			}
-			goto err_cleanup;
-		}
-	}
-
-	/* Share 2nd level addressing pages in readonly mode*/
-	for (i = 0; i < n_lvl2_grefs; i++) {
-		lvl3_table[i] = gnttab_grant_foreign_access(domid,
-					virt_to_mfn(
-					(unsigned long)lvl2_table+i*PAGE_SIZE),
-					true);
-
-		if (lvl3_table[i] == -ENOSPC) {
-			dev_err(hy_drv_priv->dev,
-				"No more space left in grant table\n");
-
-			/* Unshare all already shared pages for lvl3 */
-			while (i--) {
-				gnttab_end_foreign_access_ref(lvl3_table[i], 1);
-				gnttab_free_grant_reference(lvl3_table[i]);
-			}
-
-			/* Unshare all pages for lvl2 */
-			while (nents--) {
-				gnttab_end_foreign_access_ref(
-							lvl2_table[nents], 0);
-				gnttab_free_grant_reference(lvl2_table[nents]);
-			}
-
-			goto err_cleanup;
-		}
-	}
-
-	/* Share lvl3_table in readonly mode*/
-	lvl3_gref = gnttab_grant_foreign_access(domid,
-			virt_to_mfn((unsigned long)lvl3_table),
-			true);
-
-	if (lvl3_gref == -ENOSPC) {
-		dev_err(hy_drv_priv->dev,
-			"No more space left in grant table\n");
-
-		/* Unshare all pages for lvl3 */
-		while (i--) {
-			gnttab_end_foreign_access_ref(lvl3_table[i], 1);
-			gnttab_free_grant_reference(lvl3_table[i]);
-		}
-
-		/* Unshare all pages for lvl2 */
-		while (nents--) {
-			gnttab_end_foreign_access_ref(lvl2_table[nents], 0);
-			gnttab_free_grant_reference(lvl2_table[nents]);
-		}
-
-		goto err_cleanup;
-	}
-
-	/* Store lvl3_table page to be freed later */
-	sh_pages_info->lvl3_table = lvl3_table;
-
-	/* Store lvl2_table pages to be freed later */
-	sh_pages_info->lvl2_table = lvl2_table;
-
-
-	/* Store exported pages refid to be unshared later */
-	sh_pages_info->lvl3_gref = lvl3_gref;
-
-	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
-	return lvl3_gref;
-
-err_cleanup:
-	free_pages((unsigned long)lvl2_table, n_lvl2_grefs);
-	free_pages((unsigned long)lvl3_table, 1);
-
-	return -ENOSPC;
-}
-
-int xen_be_unshare_pages(void **refs_info, int nents)
-{
-	struct xen_shared_pages_info *sh_pages_info;
-	int n_lvl2_grefs = (nents/REFS_PER_PAGE +
-			    ((nents % REFS_PER_PAGE) ? 1 : 0));
-	int i;
-
-	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
-	sh_pages_info = (struct xen_shared_pages_info *)(*refs_info);
-
-	if (sh_pages_info->lvl3_table == NULL ||
-	    sh_pages_info->lvl2_table ==  NULL ||
-	    sh_pages_info->lvl3_gref == -1) {
-		dev_warn(hy_drv_priv->dev,
-			 "gref table for hyper_dmabuf already cleaned up\n");
-		return 0;
-	}
-
-	/* End foreign access for data pages, but do not free them */
-	for (i = 0; i < nents; i++) {
-		if (gnttab_query_foreign_access(sh_pages_info->lvl2_table[i]))
-			dev_warn(hy_drv_priv->dev, "refid not shared !!\n");
-
-		gnttab_end_foreign_access_ref(sh_pages_info->lvl2_table[i], 0);
-		gnttab_free_grant_reference(sh_pages_info->lvl2_table[i]);
-	}
-
-	/* End foreign access for 2nd level addressing pages */
-	for (i = 0; i < n_lvl2_grefs; i++) {
-		if (gnttab_query_foreign_access(sh_pages_info->lvl3_table[i]))
-			dev_warn(hy_drv_priv->dev, "refid not shared !!\n");
-
-		if (!gnttab_end_foreign_access_ref(
-					sh_pages_info->lvl3_table[i], 1))
-			dev_warn(hy_drv_priv->dev, "refid still in use!!!\n");
-
-		gnttab_free_grant_reference(sh_pages_info->lvl3_table[i]);
-	}
-
-	/* End foreign access for top level addressing page */
-	if (gnttab_query_foreign_access(sh_pages_info->lvl3_gref))
-		dev_warn(hy_drv_priv->dev, "gref not shared !!\n");
-
-	gnttab_end_foreign_access_ref(sh_pages_info->lvl3_gref, 1);
-	gnttab_free_grant_reference(sh_pages_info->lvl3_gref);
-
-	/* freeing all pages used for 2 level addressing */
-	free_pages((unsigned long)sh_pages_info->lvl2_table, n_lvl2_grefs);
-	free_pages((unsigned long)sh_pages_info->lvl3_table, 1);
-
-	sh_pages_info->lvl3_gref = -1;
-	sh_pages_info->lvl2_table = NULL;
-	sh_pages_info->lvl3_table = NULL;
-	kfree(sh_pages_info);
-	sh_pages_info = NULL;
-
-	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
-	return 0;
-}
-
-/* Maps provided top level ref id and then return array of pages
- * containing data refs.
- */
-struct page **xen_be_map_shared_pages(unsigned long lvl3_gref, int domid,
-				      int nents, void **refs_info)
-{
-	struct page *lvl3_table_page;
-	struct page **lvl2_table_pages;
-	struct page **data_pages;
-	struct xen_shared_pages_info *sh_pages_info;
-
-	grant_ref_t *lvl3_table;
-	grant_ref_t *lvl2_table;
-
-	struct gnttab_map_grant_ref lvl3_map_ops;
-	struct gnttab_unmap_grant_ref lvl3_unmap_ops;
-
-	struct gnttab_map_grant_ref *lvl2_map_ops;
-	struct gnttab_unmap_grant_ref *lvl2_unmap_ops;
-
-	struct gnttab_map_grant_ref *data_map_ops;
-	struct gnttab_unmap_grant_ref *data_unmap_ops;
-
-	/* # of grefs in the last page of lvl2 table */
-	int nents_last = (nents - 1) % REFS_PER_PAGE + 1;
-	int n_lvl2_grefs = (nents / REFS_PER_PAGE) +
-			   ((nents_last > 0) ? 1 : 0) -
-			   (nents_last == REFS_PER_PAGE);
-	int i, j, k;
-
-	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
-
-	sh_pages_info = kmalloc(sizeof(*sh_pages_info), GFP_KERNEL);
-	*refs_info = (void *) sh_pages_info;
-
-	lvl2_table_pages = kcalloc(n_lvl2_grefs, sizeof(struct page *),
-				   GFP_KERNEL);
-
-	data_pages = kcalloc(nents, sizeof(struct page *), GFP_KERNEL);
-
-	lvl2_map_ops = kcalloc(n_lvl2_grefs, sizeof(*lvl2_map_ops),
-			       GFP_KERNEL);
-
-	lvl2_unmap_ops = kcalloc(n_lvl2_grefs, sizeof(*lvl2_unmap_ops),
-				 GFP_KERNEL);
-
-	data_map_ops = kcalloc(nents, sizeof(*data_map_ops), GFP_KERNEL);
-	data_unmap_ops = kcalloc(nents, sizeof(*data_unmap_ops), GFP_KERNEL);
-
-	/* Map top level addressing page */
-	if (gnttab_alloc_pages(1, &lvl3_table_page)) {
-		dev_err(hy_drv_priv->dev, "Cannot allocate pages\n");
-		return NULL;
-	}
-
-	lvl3_table = (grant_ref_t *)pfn_to_kaddr(page_to_pfn(lvl3_table_page));
-
-	gnttab_set_map_op(&lvl3_map_ops, (unsigned long)lvl3_table,
-			  GNTMAP_host_map | GNTMAP_readonly,
-			  (grant_ref_t)lvl3_gref, domid);
-
-	gnttab_set_unmap_op(&lvl3_unmap_ops, (unsigned long)lvl3_table,
-			    GNTMAP_host_map | GNTMAP_readonly, -1);
-
-	if (gnttab_map_refs(&lvl3_map_ops, NULL, &lvl3_table_page, 1)) {
-		dev_err(hy_drv_priv->dev,
-			"HYPERVISOR map grant ref failed");
-		return NULL;
-	}
-
-	if (lvl3_map_ops.status) {
-		dev_err(hy_drv_priv->dev,
-			"HYPERVISOR map grant ref failed status = %d",
-			lvl3_map_ops.status);
-
-		goto error_cleanup_lvl3;
-	} else {
-		lvl3_unmap_ops.handle = lvl3_map_ops.handle;
-	}
-
-	/* Map all second level pages */
-	if (gnttab_alloc_pages(n_lvl2_grefs, lvl2_table_pages)) {
-		dev_err(hy_drv_priv->dev, "Cannot allocate pages\n");
-		goto error_cleanup_lvl3;
-	}
-
-	for (i = 0; i < n_lvl2_grefs; i++) {
-		lvl2_table = (grant_ref_t *)pfn_to_kaddr(
-					page_to_pfn(lvl2_table_pages[i]));
-		gnttab_set_map_op(&lvl2_map_ops[i],
-				  (unsigned long)lvl2_table, GNTMAP_host_map |
-				  GNTMAP_readonly,
-				  lvl3_table[i], domid);
-		gnttab_set_unmap_op(&lvl2_unmap_ops[i],
-				    (unsigned long)lvl2_table, GNTMAP_host_map |
-				    GNTMAP_readonly, -1);
-	}
-
-	/* Unmap top level page, as it won't be needed any longer */
-	if (gnttab_unmap_refs(&lvl3_unmap_ops, NULL,
-			      &lvl3_table_page, 1)) {
-		dev_err(hy_drv_priv->dev,
-			"xen: cannot unmap top level page\n");
-		return NULL;
-	}
-
-	/* Mark that page was unmapped */
-	lvl3_unmap_ops.handle = -1;
-
-	if (gnttab_map_refs(lvl2_map_ops, NULL,
-			    lvl2_table_pages, n_lvl2_grefs)) {
-		dev_err(hy_drv_priv->dev,
-			"HYPERVISOR map grant ref failed");
-		return NULL;
-	}
-
-	/* Checks if pages were mapped correctly */
-	for (i = 0; i < n_lvl2_grefs; i++) {
-		if (lvl2_map_ops[i].status) {
-			dev_err(hy_drv_priv->dev,
-				"HYPERVISOR map grant ref failed status = %d",
-				lvl2_map_ops[i].status);
-			goto error_cleanup_lvl2;
-		} else {
-			lvl2_unmap_ops[i].handle = lvl2_map_ops[i].handle;
-		}
-	}
-
-	if (gnttab_alloc_pages(nents, data_pages)) {
-		dev_err(hy_drv_priv->dev,
-			"Cannot allocate pages\n");
-		goto error_cleanup_lvl2;
-	}
-
-	k = 0;
-
-	for (i = 0; i < n_lvl2_grefs - 1; i++) {
-		lvl2_table = pfn_to_kaddr(page_to_pfn(lvl2_table_pages[i]));
-		for (j = 0; j < REFS_PER_PAGE; j++) {
-			gnttab_set_map_op(&data_map_ops[k],
-				(unsigned long)pfn_to_kaddr(
-						page_to_pfn(data_pages[k])),
-				GNTMAP_host_map | GNTMAP_readonly,
-				lvl2_table[j], domid);
-
-			gnttab_set_unmap_op(&data_unmap_ops[k],
-				(unsigned long)pfn_to_kaddr(
-						page_to_pfn(data_pages[k])),
-				GNTMAP_host_map | GNTMAP_readonly, -1);
-			k++;
-		}
-	}
-
-	/* for grefs in the last lvl2 table page */
-	lvl2_table = pfn_to_kaddr(page_to_pfn(
-				lvl2_table_pages[n_lvl2_grefs - 1]));
-
-	for (j = 0; j < nents_last; j++) {
-		gnttab_set_map_op(&data_map_ops[k],
-			(unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
-			GNTMAP_host_map | GNTMAP_readonly,
-			lvl2_table[j], domid);
-
-		gnttab_set_unmap_op(&data_unmap_ops[k],
-			(unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
-			GNTMAP_host_map | GNTMAP_readonly, -1);
-		k++;
-	}
-
-	if (gnttab_map_refs(data_map_ops, NULL,
-			    data_pages, nents)) {
-		dev_err(hy_drv_priv->dev,
-			"HYPERVISOR map grant ref failed\n");
-		return NULL;
-	}
-
-	/* unmapping lvl2 table pages */
-	if (gnttab_unmap_refs(lvl2_unmap_ops,
-			      NULL, lvl2_table_pages,
-			      n_lvl2_grefs)) {
-		dev_err(hy_drv_priv->dev,
-			"Cannot unmap 2nd level refs\n");
-		return NULL;
-	}
-
-	/* Mark that pages were unmapped */
-	for (i = 0; i < n_lvl2_grefs; i++)
-		lvl2_unmap_ops[i].handle = -1;
-
-	for (i = 0; i < nents; i++) {
-		if (data_map_ops[i].status) {
-			dev_err(hy_drv_priv->dev,
-				"HYPERVISOR map grant ref failed status = %d\n",
-				data_map_ops[i].status);
-			goto error_cleanup_data;
-		} else {
-			data_unmap_ops[i].handle = data_map_ops[i].handle;
-		}
-	}
-
-	/* store these references for unmapping in the future */
-	sh_pages_info->unmap_ops = data_unmap_ops;
-	sh_pages_info->data_pages = data_pages;
-
-	gnttab_free_pages(1, &lvl3_table_page);
-	gnttab_free_pages(n_lvl2_grefs, lvl2_table_pages);
-	kfree(lvl2_table_pages);
-	kfree(lvl2_map_ops);
-	kfree(lvl2_unmap_ops);
-	kfree(data_map_ops);
-
-	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
-	return data_pages;
-
-error_cleanup_data:
-	gnttab_unmap_refs(data_unmap_ops, NULL, data_pages,
-			  nents);
-
-	gnttab_free_pages(nents, data_pages);
-
-error_cleanup_lvl2:
-	if (lvl2_unmap_ops[0].handle != -1)
-		gnttab_unmap_refs(lvl2_unmap_ops, NULL,
-				  lvl2_table_pages, n_lvl2_grefs);
-	gnttab_free_pages(n_lvl2_grefs, lvl2_table_pages);
-
-error_cleanup_lvl3:
-	if (lvl3_unmap_ops.handle != -1)
-		gnttab_unmap_refs(&lvl3_unmap_ops, NULL,
-				  &lvl3_table_page, 1);
-	gnttab_free_pages(1, &lvl3_table_page);
-
-	kfree(lvl2_table_pages);
-	kfree(lvl2_map_ops);
-	kfree(lvl2_unmap_ops);
-	kfree(data_map_ops);
-
-
-	return NULL;
-}
-
-int xen_be_unmap_shared_pages(void **refs_info, int nents)
-{
-	struct xen_shared_pages_info *sh_pages_info;
-
-	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
-
-	sh_pages_info = (struct xen_shared_pages_info *)(*refs_info);
-
-	if (sh_pages_info->unmap_ops == NULL ||
-	    sh_pages_info->data_pages == NULL) {
-		dev_warn(hy_drv_priv->dev,
-			 "pages already cleaned up or buffer not imported yet\n");
-		return 0;
-	}
-
-	if (gnttab_unmap_refs(sh_pages_info->unmap_ops, NULL,
-			      sh_pages_info->data_pages, nents)) {
-		dev_err(hy_drv_priv->dev, "Cannot unmap data pages\n");
-		return -EFAULT;
-	}
-
-	gnttab_free_pages(nents, sh_pages_info->data_pages);
-
-	kfree(sh_pages_info->data_pages);
-	kfree(sh_pages_info->unmap_ops);
-	sh_pages_info->unmap_ops = NULL;
-	sh_pages_info->data_pages = NULL;
-	kfree(sh_pages_info);
-	sh_pages_info = NULL;
-
-	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
-	return 0;
-}
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
deleted file mode 100644
index d5236b5..0000000
--- a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_shm.h
+++ /dev/null
@@ -1,46 +0,0 @@
-/*
- * Copyright © 2017 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- */
-
-#ifndef __HYPER_DMABUF_XEN_SHM_H__
-#define __HYPER_DMABUF_XEN_SHM_H__
-
-/* This collects all reference numbers for 2nd level shared pages and
- * create a table with those in 1st level shared pages then return reference
- * numbers for this top level table.
- */
-int xen_be_share_pages(struct page **pages, int domid, int nents,
-		    void **refs_info);
-
-int xen_be_unshare_pages(void **refs_info, int nents);
-
-/* Maps provided top level ref id and then return array of pages containing
- * data refs.
- */
-struct page **xen_be_map_shared_pages(unsigned long lvl3_gref, int domid,
-				      int nents,
-				      void **refs_info);
-
-int xen_be_unmap_shared_pages(void **refs_info, int nents);
-
-#endif /* __HYPER_DMABUF_XEN_SHM_H__ */
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 160+ messages in thread

* Re: [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv
  2017-12-19 19:29 ` Dongwon Kim
@ 2017-12-19 23:27   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 23:27 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, Potrola, MateuszX

I forgot to include this brief information about this patch series.

This patch series contains the implementation of a new device driver,
hyper_dmabuf, which provides a method for DMA-BUF sharing across
different OSes running on the same virtual OS platform powered by
a hypervisor.

Detailed information about this driver is described in a high-level doc
added by the second patch of the series.

[RFC PATCH 02/60] hyper_dmabuf: added a doc for hyper_dmabuf sharing

I am attaching 'Overview' section here as a summary.

------------------------------------------------------------------------------
Section 1. Overview
------------------------------------------------------------------------------

The Hyper_DMABUF driver is a Linux device driver running on multiple Virtual
Machines (VMs), which expands the DMA-BUF sharing capability to VM
environments where multiple different OS instances need to share the same
physical data without copying it across VMs.

To share a DMA_BUF across VMs, an instance of the Hyper_DMABUF driver on the
exporting VM (the so-called “exporter”) imports a local DMA_BUF from the
original producer of the buffer, then re-exports it, with a unique ID
(hyper_dmabuf_id) for the buffer, to the importing VM (the so-called
“importer”).

Another instance of the Hyper_DMABUF driver on the importer registers the
hyper_dmabuf_id, together with reference information for the shared physical
pages associated with the DMA_BUF, in its database when the export happens.

The actual mapping of the DMA_BUF on the importer’s side is done by the
Hyper_DMABUF driver when user space issues the IOCTL command to access the
shared DMA_BUF. The Hyper_DMABUF driver works as both the importing and the
exporting driver as-is; that is, no special configuration is required.
Consequently, only a single module per VM is needed to enable cross-VM
DMA_BUF exchange.

------------------------------------------------------------------------------
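
To make the flow above concrete, here is a minimal user-space sketch of the
two ends. Everything in it, the device node path, the ioctl numbers and the
request layouts, is an assumption invented for illustration; it is not the
driver's actual uapi:

/* Hypothetical sketch of the export/import flow described above. The
 * device node, the ioctl numbers and the struct layouts are invented. */
#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>

struct hdma_export { int dmabuf_fd; int remote_domid; int64_t hid; };
struct hdma_import { int64_t hid; int dmabuf_fd; };

#define HDMA_IOC_EXPORT _IOWR('H', 0, struct hdma_export)	/* assumed */
#define HDMA_IOC_IMPORT _IOWR('H', 1, struct hdma_import)	/* assumed */

/* Exporter VM: hand a local DMA-BUF fd to the driver, which shares the
 * backing pages with the remote domain and returns a hyper_dmabuf_id. */
static int64_t export_buf(int dmabuf_fd, int remote_domid)
{
	int hd = open("/dev/hyper_dmabuf", O_RDWR);	/* assumed node */
	struct hdma_export req = { dmabuf_fd, remote_domid, 0 };

	if (hd < 0 || ioctl(hd, HDMA_IOC_EXPORT, &req) < 0)
		return -1;
	return req.hid;	/* delivered to the importer over any channel */
}

/* Importer VM: exchange the id for a locally reconstructed DMA-BUF fd;
 * the driver maps the shared pages at this point. */
static int import_buf(int64_t hid)
{
	int hd = open("/dev/hyper_dmabuf", O_RDWR);	/* assumed node */
	struct hdma_import req = { hid, -1 };

	if (hd < 0 || ioctl(hd, HDMA_IOC_IMPORT, &req) < 0)
		return -1;
	return req.dmabuf_fd;	/* usable with gpu/video drivers as usual */
}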

There is a git repository at github.com where this series of patches is
integrated into the Linux kernel tree, based on the commit:

        commit ae64f9bd1d3621b5e60d7363bc20afb46aede215
        Author: Linus Torvalds <torvalds@linux-foundation.org>
        Date:   Sun Dec 3 11:01:47 2017 -0500

            Linux 4.15-rc2

https://github.com/downor/linux_hyper_dmabuf.git hyper_dmabuf_integration_v3

^ permalink raw reply	[flat|nested] 160+ messages in thread

* Re: [Xen-devel] [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv
  2017-12-19 23:27   ` Dongwon Kim
  (?)
  (?)
@ 2017-12-20  8:17   ` Juergen Gross
  2018-01-10 23:21     ` Dongwon Kim
  2018-01-10 23:21       ` Dongwon Kim
  -1 siblings, 2 replies; 160+ messages in thread
From: Juergen Gross @ 2017-12-20  8:17 UTC (permalink / raw)
  To: Dongwon Kim, linux-kernel; +Cc: xen-devel, Potrola, MateuszX, dri-devel

On 20/12/17 00:27, Dongwon Kim wrote:
> I forgot to include this brief information about this patch series.
> 
> This patch series contains the implementation of a new device driver,
> hyper_dmabuf, which provides a method for DMA-BUF sharing across
> different OSes running on the same virtual OS platform powered by
> a hypervisor.

Some general remarks regarding this series:

You are starting the whole driver in drivers/xen/ and in the last patch
you move it over to drivers/dma-buf/. Why don't you use drivers/dma-buf/
from the beginning? The same applies to e.g. patch 22 changing the
license. Please make it easier for the reviewers by not letting us
review the development history of your work.

Please run ./scripts/checkpatch.pl on each patch and correct the issues it
reports. At first glance I've seen several style problems, which I won't
comment on until the next round.

Please add the maintainers as Cc:, not only the related mailing lists.
As you seem to aim to support hypervisors other than Xen, you might want
to add virtualization@lists.linux-foundation.org as well.


Juergen

^ permalink raw reply	[flat|nested] 160+ messages in thread

* Re: [Xen-devel] [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv
  2017-12-19 23:27   ` Dongwon Kim
                     ` (3 preceding siblings ...)
  (?)
@ 2017-12-20  8:38   ` Oleksandr Andrushchenko
  2018-01-10 23:14     ` Dongwon Kim
  2018-01-10 23:14     ` [Xen-devel] " Dongwon Kim
  -1 siblings, 2 replies; 160+ messages in thread
From: Oleksandr Andrushchenko @ 2017-12-20  8:38 UTC (permalink / raw)
  To: Dongwon Kim, linux-kernel; +Cc: xen-devel, Potrola, MateuszX, dri-devel


On 12/20/2017 01:27 AM, Dongwon Kim wrote:
> This patch series contains the implementation of a new device driver,
> hyper_dmabuf, which provides a method for DMA-BUF sharing across
> different OSes running on the same virtual OS platform powered by
> a hypervisor.
This is very interesting, at least in the context of embedded systems.
Could you please share the use-cases for this work and, if possible,
the sources of the test applications, if any?

Thank you,
Oleksandr

^ permalink raw reply	[flat|nested] 160+ messages in thread

* Re: [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv
  2017-12-19 23:27   ` Dongwon Kim
@ 2017-12-20  9:59     ` Daniel Vetter
  -1 siblings, 0 replies; 160+ messages in thread
From: Daniel Vetter @ 2017-12-20  9:59 UTC (permalink / raw)
  To: Dongwon Kim
  Cc: linux-kernel, xen-devel, Potrola, MateuszX, dri-devel,
	Intel Graphics Development, intel-gvt-dev

On Tue, Dec 19, 2017 at 03:27:31PM -0800, Dongwon Kim wrote:
> I forgot to include this brief information about this patch series.
> 
> This patch series contains the implementation of a new device driver,
> hyper_dmabuf, which provides a method for DMA-BUF sharing across
> different OSes running on the same virtual OS platform powered by
> a hypervisor.
> 
> Detailed information about this driver is described in a high-level doc
> added by the second patch of the series.
> 
> [RFC PATCH 02/60] hyper_dmabuf: added a doc for hyper_dmabuf sharing
> 
> I am attaching 'Overview' section here as a summary.
> 
> ------------------------------------------------------------------------------
> Section 1. Overview
> ------------------------------------------------------------------------------
> 
> The Hyper_DMABUF driver is a Linux device driver running on multiple Virtual
> Machines (VMs), which expands the DMA-BUF sharing capability to VM
> environments where multiple different OS instances need to share the same
> physical data without copying it across VMs.
> 
> To share a DMA_BUF across VMs, an instance of the Hyper_DMABUF driver on the
> exporting VM (the so-called “exporter”) imports a local DMA_BUF from the
> original producer of the buffer, then re-exports it, with a unique ID
> (hyper_dmabuf_id) for the buffer, to the importing VM (the so-called
> “importer”).
> 
> Another instance of the Hyper_DMABUF driver on the importer registers the
> hyper_dmabuf_id, together with reference information for the shared physical
> pages associated with the DMA_BUF, in its database when the export happens.
> 
> The actual mapping of the DMA_BUF on the importer’s side is done by the
> Hyper_DMABUF driver when user space issues the IOCTL command to access the
> shared DMA_BUF. The Hyper_DMABUF driver works as both the importing and the
> exporting driver as-is; that is, no special configuration is required.
> Consequently, only a single module per VM is needed to enable cross-VM
> DMA_BUF exchange.

So I know that most dma-buf implementations (especially lots of importers
in drivers/gpu) break this, but fundamentally only the original exporter
is allowed to know about the underlying pages. There are various scenarios
where a dma-buf isn't backed by anything like a struct page.

So your first step of noodling the underlying struct page out from the
dma-buf is kinda breaking the abstraction, and I think it's not a good
idea to have that. Especially not for sharing across VMs.

I think a better design would be if hyper-dmabuf were the dma-buf
exporter in both of the VMs, and you'd import it everywhere you want to in
some gpu/video/whatever driver in the VMs. That way hyper-dmabuf is always
in control of the pages, and a lot of the troublesome forwarding you
currently need to do disappears.
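
A very rough sketch of that inversion, purely illustrative:
hyper_dmabuf_alloc() and struct hyper_dmabuf_dev are invented names here,
and only the two dma-buf calls are the normal in-kernel import path
(error handling elided):

#include <linux/dma-buf.h>

/* Hypothetical: hyper-dmabuf allocates and owns the backing pages in
 * every VM and is the only dma_buf exporter. */
static struct sg_table *import_in_vm(struct hyper_dmabuf_dev *hdev,
				     struct device *dev, size_t size)
{
	struct dma_buf *buf = hyper_dmabuf_alloc(hdev, size);	/* invented */
	struct dma_buf_attachment *att;

	/* the gpu/video driver in the VM is then just a normal importer
	 * and never touches the underlying pages directly */
	att = dma_buf_attach(buf, dev);
	return dma_buf_map_attachment(att, DMA_BIDIRECTIONAL);
}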

2nd thing: This seems very much related to what's happening around gvt and
allowing at least the host (in a kvm based VM environment) to be able to
access some of the dma-buf (or well, framebuffers in general) that the
client is using. Adding some mailing lists for that.
-Daniel

> 
> ------------------------------------------------------------------------------
> 
> There is a git repository at github.com where this series of patches are all
> integrated in Linux kernel tree based on the commit:
> 
>         commit ae64f9bd1d3621b5e60d7363bc20afb46aede215
>         Author: Linus Torvalds <torvalds@linux-foundation.org>
>         Date:   Sun Dec 3 11:01:47 2017 -0500
> 
>             Linux 4.15-rc2
> 
> https://github.com/downor/linux_hyper_dmabuf.git hyper_dmabuf_integration_v3
> 
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 160+ messages in thread

* Re: [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv
  2017-12-20  9:59     ` Daniel Vetter
@ 2017-12-26 18:19       ` Matt Roper
  -1 siblings, 0 replies; 160+ messages in thread
From: Matt Roper @ 2017-12-26 18:19 UTC (permalink / raw)
  To: Dongwon Kim, linux-kernel, xen-devel, Potrola, MateuszX,
	dri-devel, Intel Graphics Development, intel-gvt-dev

On Wed, Dec 20, 2017 at 10:59:57AM +0100, Daniel Vetter wrote:
> On Tue, Dec 19, 2017 at 03:27:31PM -0800, Dongwon Kim wrote:
> > I forgot to include this brief information about this patch series.
> > 
> > This patch series contains the implementation of a new device driver,
> > hyper_dmabuf, which provides a method for DMA-BUF sharing across
> > different OSes running on the same virtual OS platform powered by
> > a hypervisor.
> > 
> > Detailed information about this driver is described in a high-level doc
> > added by the second patch of the series.
> > 
> > [RFC PATCH 02/60] hyper_dmabuf: added a doc for hyper_dmabuf sharing
> > 
> > I am attaching 'Overview' section here as a summary.
> > 
> > ------------------------------------------------------------------------------
> > Section 1. Overview
> > ------------------------------------------------------------------------------
> > 
> > The Hyper_DMABUF driver is a Linux device driver running on multiple Virtual
> > Machines (VMs), which expands the DMA-BUF sharing capability to VM
> > environments where multiple different OS instances need to share the same
> > physical data without copying it across VMs.
> > 
> > To share a DMA_BUF across VMs, an instance of the Hyper_DMABUF driver on the
> > exporting VM (the so-called “exporter”) imports a local DMA_BUF from the
> > original producer of the buffer, then re-exports it, with a unique ID
> > (hyper_dmabuf_id) for the buffer, to the importing VM (the so-called
> > “importer”).
> > 
> > Another instance of the Hyper_DMABUF driver on the importer registers the
> > hyper_dmabuf_id, together with reference information for the shared physical
> > pages associated with the DMA_BUF, in its database when the export happens.
> > 
> > The actual mapping of the DMA_BUF on the importer’s side is done by the
> > Hyper_DMABUF driver when user space issues the IOCTL command to access the
> > shared DMA_BUF. The Hyper_DMABUF driver works as both the importing and the
> > exporting driver as-is; that is, no special configuration is required.
> > Consequently, only a single module per VM is needed to enable cross-VM
> > DMA_BUF exchange.
> 
> So I know that most dma-buf implementations (especially lots of importers
> in drivers/gpu) break this, but fundamentally only the original exporter
> is allowed to know about the underlying pages. There's various scenarios
> where a dma-buf isn't backed by anything like a struct page.
> 
> So your first step of noodling the underlying struct page out from the
> dma-buf is kinda breaking the abstraction, and I think it's not a good
> idea to have that. Especially not for sharing across VMs.
> 
> I think a better design would be if hyper-dmabuf would be the dma-buf
> exporter in both of the VMs, and you'd import it everywhere you want to in
> some gpu/video/whatever driver in the VMs. That way hyper-dmabuf is always
> in control of the pages, and a lot of the troubling forwarding you
> currently need to do disappears.

I think one of the main driving use cases here is for a "local" graphics
compositor inside the VM to accept client buffers from unmodified
applications and then pass those buffers along to a "global" compositor
running in the service domain.  This would allow the global compositor
to composite applications running in different virtual machines (and
possibly running under different operating systems).

If we require that hyper-dmabuf always be the exporter, that complicates
things a little bit since a buffer allocated via regular interfaces (GEM
ioctls or whatever) wouldn't be directly transferable to the global
compositor.  For graphics use cases like this, we could probably hide a
lot of the details by modifying/replacing the EGL implementation that
handles the details of buffer allocation.  However if we have
applications that are themselves just passing along externally-allocated
buffers (e.g., images from a camera device), we'd probably need to
modify those applications and/or the drivers they get their content
from.
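
To make that forwarding step concrete, and reusing the hypothetical
HDMA_IOC_EXPORT interface sketched earlier in the thread (none of these
names are a real uapi), the local compositor's side could be as small as
this:

/* Sketch: re-export a client's externally-allocated dma-buf (e.g. a
 * camera frame) toward the global compositor in the service domain. */
static int64_t forward_client_buffer(int hyper_fd, int client_dmabuf_fd,
				     int service_domid)
{
	struct hdma_export req = { client_dmabuf_fd, service_domid, 0 };

	/* no pixel copy: only page references cross the VM boundary */
	if (ioctl(hyper_fd, HDMA_IOC_EXPORT, &req) < 0)
		return -1;
	return req.hid;	/* id the global compositor can then import */
}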


Matt

> 
> 2nd thing: This seems very much related to what's happening around gvt and
> allowing at least the host (in a kvm based VM environment) to be able to
> access some of the dma-buf (or well, framebuffers in general) that the
> client is using. Adding some mailing lists for that.
> -Daniel
> 
> > 
> > ------------------------------------------------------------------------------
> > 
> > There is a git repository at github.com where this series of patches are all
> > integrated in Linux kernel tree based on the commit:
> > 
> >         commit ae64f9bd1d3621b5e60d7363bc20afb46aede215
> >         Author: Linus Torvalds <torvalds@linux-foundation.org>
> >         Date:   Sun Dec 3 11:01:47 2017 -0500
> > 
> >             Linux 4.15-rc2
> > 
> > https://github.com/downor/linux_hyper_dmabuf.git hyper_dmabuf_integration_v3
> > 
> > _______________________________________________
> > dri-devel mailing list
> > dri-devel@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/dri-devel
> 
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

-- 
Matt Roper
Graphics Software Engineer
IoTG Platform Enabling & Development
Intel Corporation
(916) 356-2795

^ permalink raw reply	[flat|nested] 160+ messages in thread

* Re: [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv
  2017-12-26 18:19       ` Matt Roper
@ 2017-12-29 13:03         ` Tomeu Vizoso
  -1 siblings, 0 replies; 160+ messages in thread
From: Tomeu Vizoso @ 2017-12-29 13:03 UTC (permalink / raw)
  To: Matt Roper
  Cc: Dongwon Kim, linux-kernel, xen-devel, Potrola, MateuszX,
	dri-devel, Intel Graphics Development, intel-gvt-dev

On 26 December 2017 at 19:19, Matt Roper <matthew.d.roper@intel.com> wrote:
> On Wed, Dec 20, 2017 at 10:59:57AM +0100, Daniel Vetter wrote:
>> On Tue, Dec 19, 2017 at 03:27:31PM -0800, Dongwon Kim wrote:
>> > I forgot to include this brief information about this patch series.
>> >
>> > This patch series contains the implementation of a new device driver,
>> > hyper_dmabuf, which provides a method for DMA-BUF sharing across
>> > different OSes running on the same virtual OS platform powered by
>> > a hypervisor.
>> >
>> > Detailed information about this driver is described in a high-level doc
>> > added by the second patch of the series.
>> >
>> > [RFC PATCH 02/60] hyper_dmabuf: added a doc for hyper_dmabuf sharing
>> >
>> > I am attaching 'Overview' section here as a summary.
>> >
>> > ------------------------------------------------------------------------------
>> > Section 1. Overview
>> > ------------------------------------------------------------------------------
>> >
>> > Hyper_DMABUF driver is a Linux device driver running on multiple Virtual
>> > Machines (VMs), which expands DMA-BUF sharing capability to the VM environment
>> > where multiple different OS instances need to share the same physical data
>> > without data-copy across VMs.
>> >
>> > To share a DMA_BUF across VMs, an instance of the Hyper_DMABUF drv on the
>> > exporting VM (the so-called “exporter”) imports a local DMA_BUF from the
>> > original producer of the buffer, then re-exports it with a unique ID,
>> > hyper_dmabuf_id, to the importing VM (the so-called “importer”).
>> >
>> > Another instance of the Hyper_DMABUF driver on importer registers
>> > a hyper_dmabuf_id together with reference information for the shared physical
>> > pages associated with the DMA_BUF to its database when the export happens.
>> >
>> > The actual mapping of the DMA_BUF on the importer’s side is done by
>> > the Hyper_DMABUF driver when user space issues the IOCTL command to access
>> > the shared DMA_BUF. The Hyper_DMABUF driver works as both an importing and
>> > exporting driver as is, that is, no special configuration is required.
>> > Consequently, only a single module per VM is needed to enable cross-VM DMA_BUF
>> > exchange.
>>
>> So I know that most dma-buf implementations (especially lots of importers
>> in drivers/gpu) break this, but fundamentally only the original exporter
>> is allowed to know about the underlying pages. There's various scenarios
>> where a dma-buf isn't backed by anything like a struct page.
>>
>> So your first step of noodling the underlying struct page out from the
>> dma-buf is kinda breaking the abstraction, and I think it's not a good
>> idea to have that. Especially not for sharing across VMs.
>>
>> I think a better design would be if hyper-dmabuf would be the dma-buf
>> exporter in both of the VMs, and you'd import it everywhere you want to in
>> some gpu/video/whatever driver in the VMs. That way hyper-dmabuf is always
>> in control of the pages, and a lot of the troubling forwarding you
>> currently need to do disappears.
>
> I think one of the main driving use cases here is for a "local" graphics
> compositor inside the VM to accept client buffers from unmodified
> applications and then pass those buffers along to a "global" compositor
> running in the service domain.  This would allow the global compositor
> to composite applications running in different virtual machines (and
> possibly running under different operating systems).
>
> If we require that hyper-dmabuf always be the exporter, that complicates
> things a little bit since a buffer allocated via regular interfaces (GEM
> ioctls or whatever) wouldn't be directly transferrable to the global
> compositor.  For graphics use cases like this, we could probably hide a
> lot of the details by modifying/replacing the EGL implementation that
> handles the details of buffer allocation.  However if we have
> applications that are themselves just passing along externally-allocated
> buffers (e.g., images from a camera device), we'd probably need to
> modify those applications and/or the drivers they get their content
> from.

There are also non-GPU-rendering clients that pass SHM buffers to the
compositor.

For now, a Wayland proxy in the guest copies the client-provided buffers
to virtio-gpu resources at the appropriate times, and those also need to
be copied once more to host memory. It would be great to reduce the
number of copies this implies.
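
As a rough illustration of the two copies involved (the transfer helper
below is a hypothetical wrapper around what would be the virtio-gpu
transfer-to-host ioctl, not the proxy's actual code):

  /* Hypothetical sketch of the double copy described above.  The
   * transfer helper is an illustrative stand-in for a wrapper around
   * DRM_IOCTL_VIRTGPU_TRANSFER_TO_HOST, not real proxy code. */
  #include <stddef.h>
  #include <string.h>

  void virtio_gpu_transfer_to_host(int res_handle, size_t offset, size_t size);

  void forward_shm_buffer(const void *shm_data, size_t size,
                          void *guest_res_map, int res_handle)
  {
          /* copy 1: client's wl_shm buffer -> guest virtio-gpu resource */
          memcpy(guest_res_map, shm_data, size);

          /* copy 2: guest resource backing pages -> host-side memory */
          virtio_gpu_transfer_to_host(res_handle, 0, size);
  }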

For more on this effort:

https://patchwork.kernel.org/patch/10134603/

Regards,

Tomeu

>
> Matt
>
>>
>> 2nd thing: This seems very much related to what's happening around gvt and
>> allowing at least the host (in a kvm based VM environment) to be able to
>> access some of the dma-buf (or well, framebuffers in general) that the
>> client is using. Adding some mailing lists for that.
>> -Daniel
>>
>> >
>> > ------------------------------------------------------------------------------
>> >
>> > There is a git repository at github.com where this series of patches are all
>> > integrated in Linux kernel tree based on the commit:
>> >
>> >         commit ae64f9bd1d3621b5e60d7363bc20afb46aede215
>> >         Author: Linus Torvalds <torvalds@linux-foundation.org>
>> >         Date:   Sun Dec 3 11:01:47 2017 -0500
>> >
>> >             Linux 4.15-rc2
>> >
>> > https://github.com/downor/linux_hyper_dmabuf.git hyper_dmabuf_integration_v3
>> >
>>
>> --
>> Daniel Vetter
>> Software Engineer, Intel Corporation
>> http://blog.ffwll.ch
>
> --
> Matt Roper
> Graphics Software Engineer
> IoTG Platform Enabling & Development
> Intel Corporation
> (916) 356-2795

^ permalink raw reply	[flat|nested] 160+ messages in thread

* Re: [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv
  2017-12-20  9:59     ` Daniel Vetter
@ 2018-01-10 23:13     ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2018-01-10 23:13 UTC (permalink / raw)
  To: linux-kernel, xen-devel, Potrola, MateuszX, dri-devel,
	Intel Graphics Development, intel-gvt-dev

On Wed, Dec 20, 2017 at 10:59:57AM +0100, Daniel Vetter wrote:
> On Tue, Dec 19, 2017 at 03:27:31PM -0800, Dongwon Kim wrote:
> > I forgot to include this brief information about this patch series.
> > 
> > This patch series contains the implementation of a new device driver,
> > hyper_dmabuf, which provides a method for DMA-BUF sharing across
> > different OSes running on the same virtual OS platform powered by
> > a hypervisor.
> > 
> > Detailed information about this driver is described in a high-level doc
> > added by the second patch of the series.
> > 
> > [RFC PATCH 02/60] hyper_dmabuf: added a doc for hyper_dmabuf sharing
> > 
> > I am attaching 'Overview' section here as a summary.
> > 
> > ------------------------------------------------------------------------------
> > Section 1. Overview
> > ------------------------------------------------------------------------------
> > 
> > Hyper_DMABUF driver is a Linux device driver running on multiple Virtual
> > Machines (VMs), which expands DMA-BUF sharing capability to the VM environment
> > where multiple different OS instances need to share the same physical data
> > without data-copy across VMs.
> > 
> > To share a DMA_BUF across VMs, an instance of the Hyper_DMABUF drv on the
> > exporting VM (the so-called “exporter”) imports a local DMA_BUF from the
> > original producer of the buffer, then re-exports it with a unique ID,
> > hyper_dmabuf_id, to the importing VM (the so-called “importer”).
> > 
> > Another instance of the Hyper_DMABUF driver on importer registers
> > a hyper_dmabuf_id together with reference information for the shared physical
> > pages associated with the DMA_BUF to its database when the export happens.
> > 
> > The actual mapping of the DMA_BUF on the importer’s side is done by
> > the Hyper_DMABUF driver when user space issues the IOCTL command to access
> > the shared DMA_BUF. The Hyper_DMABUF driver works as both an importing and
> > exporting driver as is, that is, no special configuration is required.
> > Consequently, only a single module per VM is needed to enable cross-VM DMA_BUF
> > exchange.
> 
> So I know that most dma-buf implementations (especially lots of importers
> in drivers/gpu) break this, but fundamentally only the original exporter
> is allowed to know about the underlying pages. There's various scenarios
> where a dma-buf isn't backed by anything like a struct page.
> 
> So your first step of noodling the underlying struct page out from the
> dma-buf is kinda breaking the abstraction, and I think it's not a good
> idea to have that. Especially not for sharing across VMs.
> 
> I think a better design would be if hyper-dmabuf would be the dma-buf
> exporter in both of the VMs, and you'd import it everywhere you want to in
> some gpu/video/whatever driver in the VMs. That way hyper-dmabuf is always
> in control of the pages, and a lot of the troubling forwarding you
> currently need to do disappears.

That could be another way to implement dma-buf sharing; however, it would
break the flexibility and transparency that this driver has now. With the
suggested method, there would be two different types of dma-buf in the
general usage model: one is a local dma-buf, a traditional dma-buf that can
be shared only within the same OS instance, and the other is a cross-VM
sharable dma-buf created by the hyper_dmabuf driver.

The problem with this approach is that an application needs to know in
advance whether the contents will be shared across VMs before deciding what
type of dma-buf to create. Otherwise, the application would always have to
use hyper_dmabuf as the exporter for any contents that could possibly be
shared in the future, which I think would require a significant amount of
application changes and would also add an unnecessary dependency on the
hyper_dmabuf driver.
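
A tiny sketch of the decision an application would be forced to make up
front under that model (both allocation helpers below are placeholder
names, not real API):

  /* Hypothetical illustration only -- both helpers are placeholders. */
  int hyper_dmabuf_alloc_and_export(void);   /* special cross-VM path */
  int gem_alloc_local_dmabuf(void);          /* normal local path */

  int alloc_buffer(int may_be_shared_across_vms)
  {
          /* the choice must be made before it is known whether the
           * buffer will ever actually cross a VM boundary */
          if (may_be_shared_across_vms)
                  return hyper_dmabuf_alloc_and_export();
          return gem_alloc_local_dmabuf();
  }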

> 
> 2nd thing: This seems very much related to what's happening around gvt and
> allowing at least the host (in a kvm based VM environment) to be able to
> access some of the dma-buf (or well, framebuffers in general) that the
> client is using. Adding some mailing lists for that.

I think you are talking about exposing a framebuffer to another domain via
GTT memory sharing. And yes, one of the primary use cases for hyper_dmabuf
is to share a framebuffer or other graphics object across VMs, but it is
designed to do so in a more general way, using the existing dma-buf
framework. Also, we wanted to make this feature available for virtually any
sharable contents that can currently be shared via dma-buf locally.
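
For reference, the local dma-buf export this design builds on is just the
existing PRIME path; a minimal libdrm sketch (error handling trimmed):

  #include <stdint.h>
  #include <xf86drm.h>    /* libdrm: drmPrimeHandleToFD() */

  /* Export an existing GEM handle as a dma-buf fd via the standard
   * PRIME ioctl; hyper_dmabuf would then import such an fd, keeping
   * cross-VM sharing within the existing dma-buf framework. */
  int export_gem_as_dmabuf(int drm_fd, uint32_t gem_handle)
  {
          int dmabuf_fd = -1;

          if (drmPrimeHandleToFD(drm_fd, gem_handle, DRM_CLOEXEC, &dmabuf_fd))
                  return -1;
          return dmabuf_fd;
  }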

> -Daniel
> 
> > 
> > ------------------------------------------------------------------------------
> > 
> > There is a git repository at github.com where this series of patches are all
> > integrated in Linux kernel tree based on the commit:
> > 
> >         commit ae64f9bd1d3621b5e60d7363bc20afb46aede215
> >         Author: Linus Torvalds <torvalds@linux-foundation.org>
> >         Date:   Sun Dec 3 11:01:47 2017 -0500
> > 
> >             Linux 4.15-rc2
> > 
> > https://github.com/downor/linux_hyper_dmabuf.git hyper_dmabuf_integration_v3
> > 
> 
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 160+ messages in thread

* Re: [Xen-devel] [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv
  2017-12-20  8:38   ` [Xen-devel] " Oleksandr Andrushchenko
  2018-01-10 23:14     ` Dongwon Kim
@ 2018-01-10 23:14     ` Dongwon Kim
  1 sibling, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2018-01-10 23:14 UTC (permalink / raw)
  To: Oleksandr Andrushchenko
  Cc: linux-kernel, xen-devel, Potrola, MateuszX, dri-devel

Yes, I will post a test application.
Thanks

On Wed, Dec 20, 2017 at 10:38:08AM +0200, Oleksandr Andrushchenko wrote:
> 
> On 12/20/2017 01:27 AM, Dongwon Kim wrote:
> >This patch series contains the implementation of a new device driver,
> >hyper_dmabuf, which provides a method for DMA-BUF sharing across
> >different OSes running on the same virtual OS platform powered by
> >a hypervisor.
> This is very interesting, at least in the context of embedded systems.
> Could you please share use-cases for this work and, if possible,
> sources of the test applications if any.
> 
> Thank you,
> Oleksandr

^ permalink raw reply	[flat|nested] 160+ messages in thread

* Re: [Xen-devel] [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv
  2017-12-20  8:17   ` [Xen-devel] " Juergen Gross
@ 2018-01-10 23:21       ` Dongwon Kim
  2018-01-10 23:21       ` Dongwon Kim
  1 sibling, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2018-01-10 23:21 UTC (permalink / raw)
  To: Juergen Gross; +Cc: linux-kernel, xen-devel, Potrola, MateuszX, dri-devel

On Wed, Dec 20, 2017 at 09:17:07AM +0100, Juergen Gross wrote:
> On 20/12/17 00:27, Dongwon Kim wrote:
> > I forgot to include this brief information about this patch series.
> > 
> > This patch series contains the implementation of a new device driver,
> > hyper_dmabuf, which provides a method for DMA-BUF sharing across
> > different OSes running on the same virtual OS platform powered by
> > a hypervisor.
> 
> Some general remarks regarding this series:
> 
> You are starting the whole driver in drivers/xen/ and in the last patch
> you move it over to drivers/dma-buf/. Why don't you use drivers/dma-buf/
> from the beginning? The same applies to e.g. patch 22 changing the
> license. Please make it easier for the reviewers by not letting us
> review the development history of your work.

Yeah, I tried to clean up our development history, but because of
dependencies among the patches, I couldn't make those things clear in the
first place.

I will try to clean things up further.

> 
> Please run ./scripts/checkpatch.pl on each patch and correct the issues
> it is reporting. At the first glance I've seen several style problems
> which I won't comment until the next round.

Hmm.. I ran the script only on the final version and tried to fix all the
issues after that. If it's required for individual patches, I will clean
up every patch once again.

> 
> Please add the maintainers as Cc:, not only the related mailing lists.
> As you seem to aim supporting other hypervisors than Xen you might want
> to add virtualization@lists.linux-foundation.org as well.

Ok, thanks!

> 
> 
> Juergen

^ permalink raw reply	[flat|nested] 160+ messages in thread

* Re: [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv
  2017-12-19 19:29 ` Dongwon Kim
@ 2018-02-15  1:34   ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2018-02-15  1:34 UTC (permalink / raw)
  To: linux-kernel; +Cc: dri-devel, xen-devel, Potrola, MateuszX

Abandoning this series as a new version was submitted for the review

"[RFC PATCH v2 0/9] hyper_dmabuf: Hyper_DMABUF driver"

On Tue, Dec 19, 2017 at 11:29:17AM -0800, Kim, Dongwon wrote:
> Upload of intial version of hyper_DMABUF driver enabling
> DMA_BUF exchange between two different VMs in virtualized
> platform based on hypervisor such as KVM or XEN.
> 
> Hyper_DMABUF drv's primary role is to import a DMA_BUF
> from originator then re-export it to another Linux VM
> so that it can be mapped and accessed by it.
> 
> The functionality of this driver highly depends on
> Hypervisor's native page sharing mechanism and inter-VM
> communication support.
> 
> This driver has two layers, one is main hyper_DMABUF
> framework for scatter-gather list management that handles
> actual import and export of DMA_BUF. Lower layer is about
> actual memory sharing and communication between two VMs,
> which is hypervisor-specific interface.
> 
> This driver is initially designed to enable DMA_BUF
> sharing across VMs in Xen environment, so currently working
> with Xen only.
> 
> This also adds Kernel configuration for hyper_DMABUF drv
> under Device Drivers->Xen driver support->hyper_dmabuf
> options.
> 
> To give some brief information about each source file,
> 
> hyper_dmabuf/hyper_dmabuf_conf.h
> : configuration info
> 
> hyper_dmabuf/hyper_dmabuf_drv.c
> : driver interface and initialization
> 
> hyper_dmabuf/hyper_dmabuf_imp.c
> : scatter-gather list generation and management. DMA_BUF
> ops for DMA_BUF reconstructed from hyper_DMABUF
> 
> hyper_dmabuf/hyper_dmabuf_ioctl.c
> : IOCTLs calls for export/import and comm channel creation
> unexport.
> 
> hyper_dmabuf/hyper_dmabuf_list.c
> : Database (linked-list) for exported and imported
> hyper_DMABUF
> 
> hyper_dmabuf/hyper_dmabuf_msg.c
> : creation and management of messages between exporter and
> importer
> 
> hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
> : comm ch management and ISRs for incoming messages.
> 
> hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
> : Database (linked-list) for keeping information about
> existing comm channels among VMs
> 
> Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
> Signed-off-by: Mateusz Polrola <mateuszx.potrola@intel.com>
> ---
>  drivers/xen/Kconfig                                |   2 +
>  drivers/xen/Makefile                               |   1 +
>  drivers/xen/hyper_dmabuf/Kconfig                   |  14 +
>  drivers/xen/hyper_dmabuf/Makefile                  |  34 +
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h       |   2 +
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        |  54 ++
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        | 101 +++
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c        | 852 +++++++++++++++++++++
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h        |  31 +
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 462 +++++++++++
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c       | 119 +++
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h       |  40 +
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        | 212 +++++
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h        |  45 ++
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h      |  16 +
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h     |  70 ++
>  .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   | 328 ++++++++
>  .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h   |  62 ++
>  .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c  | 106 +++
>  .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h  |  35 +
>  20 files changed, 2586 insertions(+)
>  create mode 100644 drivers/xen/hyper_dmabuf/Kconfig
>  create mode 100644 drivers/xen/hyper_dmabuf/Makefile
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
>  create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
>  create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
>  create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
>  create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
> 
> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
> index d8dd546..b59b0e3 100644
> --- a/drivers/xen/Kconfig
> +++ b/drivers/xen/Kconfig
> @@ -321,4 +321,6 @@ config XEN_SYMS
>  config XEN_HAVE_VPMU
>         bool
>  
> +source "drivers/xen/hyper_dmabuf/Kconfig"
> +
>  endmenu
> diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
> index 451e833..a6e253a 100644
> --- a/drivers/xen/Makefile
> +++ b/drivers/xen/Makefile
> @@ -4,6 +4,7 @@ obj-$(CONFIG_X86)			+= fallback.o
>  obj-y	+= grant-table.o features.o balloon.o manage.o preempt.o time.o
>  obj-y	+= events/
>  obj-y	+= xenbus/
> +obj-y	+= hyper_dmabuf/
>  
>  nostackp := $(call cc-option, -fno-stack-protector)
>  CFLAGS_features.o			:= $(nostackp)
> diff --git a/drivers/xen/hyper_dmabuf/Kconfig b/drivers/xen/hyper_dmabuf/Kconfig
> new file mode 100644
> index 0000000..75e1f96
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/Kconfig
> @@ -0,0 +1,14 @@
> +menu "hyper_dmabuf options"
> +
> +config HYPER_DMABUF
> +	tristate "Enable hyper_dmabuf driver"
> +	default y
> +
> +config HYPER_DMABUF_XEN
> +	bool "Configure hyper_dmabuf for XEN hypervisor"
> +	default y
> +	depends on HYPER_DMABUF
> +	help
> +	  Configures the hyper_dmabuf driver to use Xen's grant tables
> +	  and event channels for page sharing and inter-VM communication.
> +
> +endmenu
> diff --git a/drivers/xen/hyper_dmabuf/Makefile b/drivers/xen/hyper_dmabuf/Makefile
> new file mode 100644
> index 0000000..0be7445
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/Makefile
> @@ -0,0 +1,34 @@
> +TARGET_MODULE:=hyper_dmabuf
> +
> +# If we are running under the kernel build system
> +ifneq ($(KERNELRELEASE),)
> +	$(TARGET_MODULE)-objs := hyper_dmabuf_drv.o \
> +                                 hyper_dmabuf_ioctl.o \
> +                                 hyper_dmabuf_list.o \
> +				 hyper_dmabuf_imp.o \
> +				 hyper_dmabuf_msg.o \
> +				 xen/hyper_dmabuf_xen_comm.o \
> +				 xen/hyper_dmabuf_xen_comm_list.o
> +
> +obj-$(CONFIG_HYPER_DMABUF) := $(TARGET_MODULE).o
> +
> +# If we are invoked outside the kernel build system
> +else
> +BUILDSYSTEM_DIR?=../../../
> +PWD:=$(shell pwd)
> +
> +all:
> +# run kernel build system to make module (recipe lines must be tab-indented)
> +	$(MAKE) -C $(BUILDSYSTEM_DIR) M=$(PWD) modules
> +
> +clean:
> +# run kernel build system to clean up in current directory
> +	$(MAKE) -C $(BUILDSYSTEM_DIR) M=$(PWD) clean
> +
> +load:
> +	insmod ./$(TARGET_MODULE).ko
> +
> +unload:
> +	rmmod ./$(TARGET_MODULE).ko
> +
> +endif
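
For reference, the out-of-tree path above is expected to be driven roughly
like this (BUILDSYSTEM_DIR must point at a configured kernel tree; the
../../../ default only works from within the kernel source):

	make BUILDSYSTEM_DIR=/lib/modules/$(uname -r)/build
	make load      # insmod ./hyper_dmabuf.ko
	make unload    # rmmod ./hyper_dmabuf.ko
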
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
> new file mode 100644
> index 0000000..3d9b2d6
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
> @@ -0,0 +1,2 @@
> +#define CURRENT_TARGET XEN
> +#define INTER_DOMAIN_DMABUF_SYNCHRONIZATION
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
> new file mode 100644
> index 0000000..0698327
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
> @@ -0,0 +1,54 @@
> +#include <linux/init.h>       /* module_init, module_exit */
> +#include <linux/module.h> /* version info, MODULE_LICENSE, MODULE_AUTHOR, printk() */
> +#include "hyper_dmabuf_conf.h"
> +#include "hyper_dmabuf_list.h"
> +#include "xen/hyper_dmabuf_xen_comm_list.h"
> +
> +MODULE_LICENSE("Dual BSD/GPL");
> +MODULE_AUTHOR("IOTG-PED, INTEL");
> +
> +int register_device(void);
> +void unregister_device(void); /* matches the definition in hyper_dmabuf_ioctl.c */
> +
> +/*===============================================================================================*/
> +static int hyper_dmabuf_drv_init(void)
> +{
> +	int ret = 0;
> +
> +	printk(KERN_NOTICE "hyper_dmabuf: initialization started\n");
> +
> +	ret = register_device();
> +	if (ret < 0) {
> +		return -EINVAL;
> +	}
> +
> +	printk(KERN_NOTICE "initializing database for imported/exported dmabufs\n");
> +
> +	ret = hyper_dmabuf_table_init();
> +	if (ret < 0) {
> +		return -EINVAL;
> +	}
> +
> +	ret = hyper_dmabuf_ring_table_init();
> +	if (ret < 0) {
> +		return -EINVAL;
> +	}
> +
> +	/* interrupt for comm should be registered here: */
> +	return ret;
> +}
> +
> +/*-----------------------------------------------------------------------------------------------*/
> +static void hyper_dmabuf_drv_exit(void)
> +{
> +	/* tear down hash tables for export/import entries and ring_infos */
> +	hyper_dmabuf_table_destroy();
> +	hyper_dmabuf_ring_table_destroy();
> +
> +	printk(KERN_NOTICE "hyper_dmabuf: exiting\n");
> +	unregister_device();
> +}
> +/*===============================================================================================*/
> +
> +module_init(hyper_dmabuf_drv_init);
> +module_exit(hyper_dmabuf_drv_exit);
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
> new file mode 100644
> index 0000000..2dad9a6
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
> @@ -0,0 +1,101 @@
> +#ifndef __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
> +#define __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
> +
> +typedef int (*hyper_dmabuf_ioctl_t)(void *data);
> +
> +struct hyper_dmabuf_ioctl_desc {
> +	unsigned int cmd;
> +	int flags;
> +	hyper_dmabuf_ioctl_t func;
> +	const char *name;
> +};
> +
> +#define HYPER_DMABUF_IOCTL_DEF(ioctl, _func, _flags) 	\
> +	[_IOC_NR(ioctl)] = {				\
> +			.cmd = ioctl,			\
> +			.func = _func,			\
> +			.flags = _flags,		\
> +			.name = #ioctl			\
> +	}
> +
> +#define IOCTL_HYPER_DMABUF_EXPORTER_RING_SETUP \
> +_IOC(_IOC_NONE, 'G', 0, sizeof(struct ioctl_hyper_dmabuf_exporter_ring_setup))
> +struct ioctl_hyper_dmabuf_exporter_ring_setup {
> +	/* IN parameters */
> +	/* Remote domain id */
> +	uint32_t remote_domain;
> +	/* OUT parameters: assigned by the driver, copied back to userspace after initialization */
> +	grant_ref_t ring_refid;
> +	uint32_t port;
> +};
> +
> +#define IOCTL_HYPER_DMABUF_IMPORTER_RING_SETUP \
> +_IOC(_IOC_NONE, 'G', 1, sizeof(struct ioctl_hyper_dmabuf_importer_ring_setup))
> +struct ioctl_hyper_dmabuf_importer_ring_setup {
> +	/* IN parameters */
> +	/* Source domain id */
> +	uint32_t source_domain;
> +	/* Ring shared page refid */
> +	grant_ref_t ring_refid;
> +	/* Port number */
> +	uint32_t port;
> +};
> +
> +#define IOCTL_HYPER_DMABUF_EXPORT_REMOTE \
> +_IOC(_IOC_NONE, 'G', 2, sizeof(struct ioctl_hyper_dmabuf_export_remote))
> +struct ioctl_hyper_dmabuf_export_remote {
> +	/* IN parameters */
> +	/* DMA buf fd to be exported */
> +	uint32_t dmabuf_fd;
> +	/* Domain id to which buffer should be exported */
> +	uint32_t remote_domain;
> +	/* OUT parameters */
> +	/* exported dma buf id */
> +	uint32_t hyper_dmabuf_id;
> +	/* IN: driver/application specific private data, forwarded to the importer */
> +	uint32_t private[4];
> +};
> +
> +#define IOCTL_HYPER_DMABUF_EXPORT_FD \
> +_IOC(_IOC_NONE, 'G', 3, sizeof(struct ioctl_hyper_dmabuf_export_fd))
> +struct ioctl_hyper_dmabuf_export_fd {
> +	/* IN parameters */
> +	/* hyper dmabuf id to be imported */
> +	uint32_t hyper_dmabuf_id;
> +	/* flags */
> +	uint32_t flags;
> +	/* OUT parameters */
> +	/* exported dma buf fd */
> +	uint32_t fd;
> +};
> +
> +#define IOCTL_HYPER_DMABUF_DESTROY \
> +_IOC(_IOC_NONE, 'G', 4, sizeof(struct ioctl_hyper_dmabuf_destroy))
> +struct ioctl_hyper_dmabuf_destroy {
> +	/* IN parameters */
> +	/* hyper dmabuf id to be destroyed */
> +	uint32_t hyper_dmabuf_id;
> +	/* OUT parameters */
> +	/* Status of request */
> +	uint32_t status;
> +};
> +
> +#define IOCTL_HYPER_DMABUF_QUERY \
> +_IOC(_IOC_NONE, 'G', 5, sizeof(struct ioctl_hyper_dmabuf_query))
> +struct ioctl_hyper_dmabuf_query {
> +	/* IN parameters */
> +	/* hyper dmabuf id to be queried */
> +	uint32_t hyper_dmabuf_id;
> +	/* item to be queried */
> +	uint32_t item;
> +	/* OUT parameters */
> +	/* Value of queried item */
> +	uint32_t info;
> +};
> +
> +#define IOCTL_HYPER_DMABUF_REMOTE_EXPORTER_RING_SETUP \
> +_IOC(_IOC_NONE, 'G', 6, sizeof(struct ioctl_hyper_dmabuf_remote_exporter_ring_setup))
> +struct ioctl_hyper_dmabuf_remote_exporter_ring_setup {
> +	/* IN parameters */
> +	uint32_t rdomain; /* id of remote domain where exporter's ring needs to be set up */
> +	uint32_t info;
> +};
> +
> +#endif //__LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
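
To make the ABI above concrete, a minimal userspace sketch of the exporter
side might look like this (assuming the device node shows up as
/dev/xen/hyper_dmabuf, per the miscdevice name registered in
hyper_dmabuf_ioctl.c; error handling elided):

	#include <stdint.h>
	#include <fcntl.h>
	#include <sys/ioctl.h>

	int export_to_domain(int dmabuf_fd, uint32_t remote_dom)
	{
		int hd = open("/dev/xen/hyper_dmabuf", O_RDWR);
		struct ioctl_hyper_dmabuf_exporter_ring_setup ring = {
			.remote_domain = remote_dom,
		};
		struct ioctl_hyper_dmabuf_export_remote exp = {
			.dmabuf_fd = dmabuf_fd,
			.remote_domain = remote_dom,
		};

		/* set up the comm ring first, then export the buffer */
		ioctl(hd, IOCTL_HYPER_DMABUF_EXPORTER_RING_SETUP, &ring);
		ioctl(hd, IOCTL_HYPER_DMABUF_EXPORT_REMOTE, &exp);

		/* the id is handed to the importing domain out of band */
		return exp.hyper_dmabuf_id;
	}
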
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
> new file mode 100644
> index 0000000..faa5c1b
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
> @@ -0,0 +1,852 @@
> +#include <linux/kernel.h>
> +#include <linux/errno.h>
> +#include <linux/slab.h>
> +#include <linux/module.h>
> +#include <linux/dma-buf.h>
> +#include <xen/grant_table.h>
> +#include <asm/xen/page.h>
> +#include "hyper_dmabuf_struct.h"
> +#include "hyper_dmabuf_imp.h"
> +#include "xen/hyper_dmabuf_xen_comm.h"
> +#include "hyper_dmabuf_msg.h"
> +
> +#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
> +
> +/* return total number of pages referenced by a sgt
> + * for pre-calculation of # of pages behind a given sgt
> + */
> +static int hyper_dmabuf_get_num_pgs(struct sg_table *sgt)
> +{
> +	struct scatterlist *sgl;
> +	int length, i;
> +	/* at least one page */
> +	int num_pages = 1;
> +
> +	sgl = sgt->sgl;
> +
> +	length = sgl->length - PAGE_SIZE + sgl->offset;
> +	num_pages += ((length + PAGE_SIZE - 1)/PAGE_SIZE); /* round-up */
> +
> +	for (i = 1; i < sgt->nents; i++) {
> +		sgl = sg_next(sgl);
> +		num_pages += ((sgl->length + PAGE_SIZE - 1) / PAGE_SIZE); /* round-up */
> +	}
> +
> +	return num_pages;
> +}
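
Worked example for the arithmetic above: a single-entry sgt whose segment
starts at offset 0x800 and is 0x1800 bytes long gives
length = 0x1800 - 0x1000 + 0x800 = 0x1000, which rounds up to 1, so
num_pages = 1 + 1 = 2 -- matching the fact that the segment ends at 0x2000
and straddles exactly two 4 KiB pages.
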
> +
> +/* extract pages directly from struct sg_table */
> +struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
> +{
> +	struct hyper_dmabuf_pages_info *pinfo;
> +	int i, j, k;
> +	int length;
> +	struct scatterlist *sgl;
> +
> +	pinfo = kmalloc(sizeof(*pinfo), GFP_KERNEL);
> +	if (pinfo == NULL)
> +		return NULL;
> +
> +	pinfo->pages = kmalloc(sizeof(struct page *)*hyper_dmabuf_get_num_pgs(sgt), GFP_KERNEL);
> +	if (pinfo->pages == NULL) {
> +		kfree(pinfo); /* don't leak the container on failure */
> +		return NULL;
> +	}
> +
> +	sgl = sgt->sgl;
> +
> +	pinfo->nents = 1;
> +	pinfo->frst_ofst = sgl->offset;
> +	pinfo->pages[0] = sg_page(sgl);
> +	length = sgl->length - PAGE_SIZE + sgl->offset;
> +	i=1;
> +
> +	while (length > 0) {
> +		pinfo->pages[i] = nth_page(sg_page(sgl), i);
> +		length -= PAGE_SIZE;
> +		pinfo->nents++;
> +		i++;
> +	}
> +
> +	for (j = 1; j < sgt->nents; j++) {
> +		sgl = sg_next(sgl);
> +		pinfo->pages[i++] = sg_page(sgl);
> +		length = sgl->length - PAGE_SIZE;
> +		pinfo->nents++;
> +
> +		/* k indexes pages within this segment; i is the global output index */
> +		for (k = 1; length > 0; k++, i++) {
> +			pinfo->pages[i] = nth_page(sg_page(sgl), k);
> +			length -= PAGE_SIZE;
> +			pinfo->nents++;
> +		}
> +	}
> +
> +	/*
> +	 * length at this point will be 0 or negative,
> +	 * so to calculate the last page size just add it to PAGE_SIZE
> +	 */
> +	pinfo->last_len = PAGE_SIZE + length;
> +
> +	return pinfo;
> +}
> +
> +/* create sg_table with given pages and other parameters */
> +struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
> +				int frst_ofst, int last_len, int nents)
> +{
> +	struct sg_table *sgt;
> +	struct scatterlist *sgl;
> +	int i, ret;
> +
> +	sgt = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
> +	if (sgt == NULL) {
> +		return NULL;
> +	}
> +
> +	ret = sg_alloc_table(sgt, nents, GFP_KERNEL);
> +	if (ret) {
> +		kfree(sgt);
> +		return NULL;
> +	}
> +
> +	sgl = sgt->sgl;
> +
> +	sg_set_page(sgl, pages[0], PAGE_SIZE-frst_ofst, frst_ofst);
> +
> +	for (i=1; i<nents-1; i++) {
> +		sgl = sg_next(sgl);
> +		sg_set_page(sgl, pages[i], PAGE_SIZE, 0);
> +	}
> +
> +	if (nents > 1) { /* more than one page: set the tail page with last_len */
> +		sgl = sg_next(sgl);
> +		sg_set_page(sgl, pages[i], last_len, 0);
> +	}
> +
> +	return sgt;
> +}
> +
> +/*
> + * Creates 2 level page directory structure for referencing shared pages.
> + * Top level page is a single page that contains up to 1024 refids that
> + * point to 2nd level pages.
> + * Each 2nd level page contains up to 1024 refids that point to shared
> + * data pages.
> + * There will always be one top level page and number of 2nd level pages
> + * depends on number of shared data pages.
> + *
> + *      Top level page                2nd level pages            Data pages
> + * +-------------------------+   ┌>+--------------------+ ┌--->+------------+
> + * |2nd level page 0 refid   |---┘ |Data page 0 refid   |-┘    |Data page 0 |
> + * |2nd level page 1 refid   |---┐ |Data page 1 refid   |-┐    +------------+
> + * |           ...           |   | |     ....           | |
> + * |2nd level page 1023 refid|-┐ | |Data page 1023 refid| └--->+------------+
> + * +-------------------------+ | | +--------------------+      |Data page 1 |
> + *                             | |                             +------------+
> + *                             | └>+--------------------+
> + *                             |   |Data page 1024 refid|
> + *                             |   |Data page 1025 refid|
> + *                             |   |       ...          |
> + *                             |   |Data page 2047 refid|
> + *                             |   +--------------------+
> + *                             |
> + *                             |        .....
> + *                             └-->+-----------------------+
> + *                                 |Data page 1047552 refid|
> + *                                 |Data page 1047553 refid|
> + *                                 |       ...             |
> + *                                 |Data page 1048575 refid|-->+------------------+
> + *                                 +-----------------------+   |Data page 1048575 |
> + *                                                             +------------------+
> + *
> + * Using such 2 level structure it is possible to reference up to 4GB of
> + * shared data using single refid pointing to top level page.
> + *
> + * Returns refid of top level page.
> + */
> +grant_ref_t hyper_dmabuf_create_addressing_tables(grant_ref_t *data_refs, int nents, int rdomain,
> +						  struct hyper_dmabuf_shared_pages_info *shared_pages_info)
> +{
> +	/*
> +	 * Calculate number of pages needed for 2nd level addressing:
> +	 */
> +	int n_2nd_level_pages = (nents/REFS_PER_PAGE + ((nents % REFS_PER_PAGE) ? 1 : 0)); /* round up */
> +	int i;
> +	unsigned long gref_page_start;
> +	grant_ref_t *tmp_page;
> +	grant_ref_t top_level_ref;
> +	grant_ref_t * addr_refs;
> +	addr_refs = kcalloc(n_2nd_level_pages, sizeof(grant_ref_t), GFP_KERNEL);
> +
> +	/* zero-filled so unused ref slots read as 0 (the cleanup path relies on this);
> +	 * note __get_free_pages() takes an allocation order, not a page count */
> +	gref_page_start = __get_free_pages(GFP_KERNEL | __GFP_ZERO,
> +					   get_order(n_2nd_level_pages * PAGE_SIZE));
> +	tmp_page = (grant_ref_t *)gref_page_start;
> +
> +	/* Store 2nd level pages to be freed later */
> +	shared_pages_info->addr_pages = tmp_page;
> +
> +	/* Share 2nd level addressing pages in read-only mode */
> +	for (i = 0; i < n_2nd_level_pages; i++) {
> +		addr_refs[i] = gnttab_grant_foreign_access(rdomain, virt_to_mfn((unsigned long)tmp_page + i*PAGE_SIZE), 1);
> +	}
> +
> +	/*
> +	 * fill second level pages with data refs
> +	 */
> +	for (i = 0; i < nents; i++) {
> +		tmp_page[i] = data_refs[i];
> +	}
> +
> +
> +	/* allocate a single zeroed top level page (order 0; one page holds up to 1024 refids) */
> +	gref_page_start = __get_free_pages(GFP_KERNEL | __GFP_ZERO, 0);
> +	tmp_page = (grant_ref_t *)gref_page_start;
> +
> +	/* Store top level page to be freed later */
> +	shared_pages_info->top_level_page = tmp_page;
> +
> +	/*
> +	 * fill top level page with reference numbers of second level pages refs.
> +	 */
> +	for (i=0; i< n_2nd_level_pages; i++) {
> +		tmp_page[i] =  addr_refs[i];
> +	}
> +
> +	/* Share top level addressing page in read-only mode */
> +	top_level_ref = gnttab_grant_foreign_access(rdomain, virt_to_mfn((unsigned long)tmp_page), 1);
> +
> +	kfree(addr_refs);
> +
> +	return top_level_ref;
> +}
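
As a sanity check on the diagram above: with 4 KiB pages and 4-byte grant
references, REFS_PER_PAGE = 4096 / 4 = 1024, so the single top-level page can
reference up to 1024 second-level pages, each of which references up to 1024
data pages; 1024 * 1024 * 4 KiB = 4 GiB of shareable data behind one
top-level refid.
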
> +
> +/*
> + * Maps provided top level ref id and then return array of pages containing data refs.
> + */
> +struct page** hyper_dmabuf_get_data_refs(grant_ref_t top_level_ref, int domid, int nents,
> +					 struct hyper_dmabuf_shared_pages_info *shared_pages_info)
> +{
> +	struct page *top_level_page;
> +	struct page **level2_pages;
> +
> +	grant_ref_t *top_level_refs;
> +
> +	struct gnttab_map_grant_ref top_level_map_ops;
> +	struct gnttab_unmap_grant_ref top_level_unmap_ops;
> +
> +	struct gnttab_map_grant_ref *map_ops;
> +	struct gnttab_unmap_grant_ref *unmap_ops;
> +
> +	unsigned long addr;
> +	int n_level2_refs = 0;
> +	int i;
> +
> +	n_level2_refs = (nents / REFS_PER_PAGE) + ((nents % REFS_PER_PAGE) ? 1 : 0);
> +
> +	level2_pages = kcalloc(n_level2_refs, sizeof(struct page *), GFP_KERNEL);
> +
> +	map_ops = kcalloc(n_level2_refs, sizeof(map_ops[0]), GFP_KERNEL);
> +	unmap_ops = kcalloc(n_level2_refs, sizeof(unmap_ops[0]), GFP_KERNEL);
> +
> +	/* Map top level addressing page */
> +	if (gnttab_alloc_pages(1, &top_level_page)) {
> +		printk("Cannot allocate pages\n");
> +		return NULL;
> +	}
> +
> +	addr = (unsigned long)pfn_to_kaddr(page_to_pfn(top_level_page));
> +	gnttab_set_map_op(&top_level_map_ops, addr, GNTMAP_host_map | GNTMAP_readonly, top_level_ref, domid);
> +	gnttab_set_unmap_op(&top_level_unmap_ops, addr, GNTMAP_host_map | GNTMAP_readonly, -1);
> +
> +	if (gnttab_map_refs(&top_level_map_ops, NULL, &top_level_page, 1)) {
> +		printk("\nxen: dom0: HYPERVISOR map grant ref failed");
> +		return NULL;
> +	}
> +
> +	if (top_level_map_ops.status) {
> +		printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d",
> +				top_level_map_ops.status);
> +		return NULL;
> +	} else {
> +		top_level_unmap_ops.handle = top_level_map_ops.handle;
> +	}
> +
> +	/* Parse contents of top level addressing page to find how many second level pages are there */
> +	top_level_refs = pfn_to_kaddr(page_to_pfn(top_level_page));
> +
> +	/* Map all second level pages */
> +	if (gnttab_alloc_pages(n_level2_refs, level2_pages)) {
> +		printk("Cannot allocate pages\n");
> +		return NULL;
> +	}
> +
> +	for (i = 0; i < n_level2_refs; i++) {
> +		addr = (unsigned long)pfn_to_kaddr(page_to_pfn(level2_pages[i]));
> +		gnttab_set_map_op(&map_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, top_level_refs[i], domid);
> +		gnttab_set_unmap_op(&unmap_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, -1);
> +	}
> +
> +	if (gnttab_map_refs(map_ops, NULL, level2_pages, n_level2_refs)) {
> +		printk("\nxen: dom0: HYPERVISOR map grant ref failed");
> +		return NULL;
> +	}
> +
> +	/* Check that each 2nd level page was mapped successfully */
> +	for (i = 0; i < n_level2_refs; i++) {
> +		if (map_ops[i].status) {
> +			printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d",
> +					map_ops[i].status);
> +			return NULL;
> +		} else {
> +			unmap_ops[i].handle = map_ops[i].handle;
> +		}
> +	}
> +
> +	/* Unmap top level page, as it won't be needed any longer */
> +	if (gnttab_unmap_refs(&top_level_unmap_ops, NULL, &top_level_page, 1)) {
> +		printk("\nxen: cannot unmap top level page\n");
> +		return NULL;
> +	}
> +
> +	gnttab_free_pages(1, &top_level_page);
> +	kfree(map_ops);
> +	shared_pages_info->unmap_ops = unmap_ops;
> +
> +	return level2_pages;
> +}
> +
> +
> +/* This collects all reference numbers for 2nd level shared pages and creates a table
> + * of those in 1st level shared pages, then returns the reference number of that top
> + * level page. */
> +grant_ref_t hyper_dmabuf_create_gref_table(struct page **pages, int rdomain, int nents,
> +					   struct hyper_dmabuf_shared_pages_info *shared_pages_info)
> +{
> +	int i = 0;
> +	grant_ref_t *data_refs;
> +	grant_ref_t top_level_ref;
> +
> +	/* allocate temp array for refs of shared data pages */
> +	data_refs = kcalloc(nents, sizeof(grant_ref_t), GFP_KERNEL);
> +
> +	/* share data pages in read-write mode */
> +	for (i = 0; i < nents; i++) {
> +		data_refs[i] = gnttab_grant_foreign_access(rdomain, pfn_to_mfn(page_to_pfn(pages[i])), 0);
> +	}
> +
> +	/* create additional shared pages with 2 level addressing of data pages */
> +	top_level_ref = hyper_dmabuf_create_addressing_tables(data_refs, nents, rdomain,
> +							      shared_pages_info);
> +
> +	/* Store exported pages refid to be unshared later */
> +	shared_pages_info->data_refs = data_refs;
> +	shared_pages_info->top_level_ref = top_level_ref;
> +
> +	return top_level_ref;
> +}
> +
> +int hyper_dmabuf_cleanup_gref_table(struct hyper_dmabuf_sgt_info *sgt_info) {
> +	uint32_t i = 0;
> +	struct hyper_dmabuf_shared_pages_info *shared_pages_info = &sgt_info->shared_pages_info;
> +
> +	grant_ref_t *ref = shared_pages_info->top_level_page;
> +	int n_2nd_level_pages = (sgt_info->sgt->nents/REFS_PER_PAGE + ((sgt_info->sgt->nents % REFS_PER_PAGE) ? 1: 0));/* rounding */
> +
> +
> +	if (shared_pages_info->data_refs == NULL ||
> +	    shared_pages_info->addr_pages ==  NULL ||
> +	    shared_pages_info->top_level_page == NULL ||
> +	    shared_pages_info->top_level_ref == -1) {
> +		printk("gref table for hyper_dmabuf already cleaned up\n");
> +		return 0;
> +	}
> +
> +	/* End foreign access for 2nd level addressing pages
> +	 * (bounds check first so we never read past the table) */
> +	while (i < n_2nd_level_pages && ref[i] != 0) {
> +		if (gnttab_query_foreign_access(ref[i])) {
> +			printk("refid not shared!!\n");
> +		}
> +		if (!gnttab_end_foreign_access_ref(ref[i], 1)) {
> +			printk("refid still in use!!\n");
> +		}
> +		i++;
> +	}
> +	free_pages((unsigned long)shared_pages_info->addr_pages,
> +		   get_order(n_2nd_level_pages * PAGE_SIZE));
> +
> +	/* End foreign access for top level addressing page */
> +	if (gnttab_query_foreign_access(shared_pages_info->top_level_ref)) {
> +		printk("refid not shared!!\n");
> +	}
> +	if (!gnttab_end_foreign_access_ref(shared_pages_info->top_level_ref, 1)) {
> +		printk("refid still in use!!\n");
> +	}
> +	free_pages((unsigned long)shared_pages_info->top_level_page, 0);
> +
> +	/* End foreign access for data pages, but do not free them */
> +	for (i = 0; i < sgt_info->sgt->nents; i++) {
> +		if (gnttab_query_foreign_access(shared_pages_info->data_refs[i])) {
> +			printk("refid not shared !!\n");
> +		}
> +		gnttab_end_foreign_access_ref(shared_pages_info->data_refs[i], 0);
> +	}
> +
> +	kfree(shared_pages_info->data_refs);
> +
> +	shared_pages_info->data_refs = NULL;
> +	shared_pages_info->addr_pages = NULL;
> +	shared_pages_info->top_level_page = NULL;
> +	shared_pages_info->top_level_ref = -1;
> +
> +	return 0;
> +}
> +
> +int hyper_dmabuf_cleanup_imported_pages(struct hyper_dmabuf_imported_sgt_info *sgt_info) {
> +	struct hyper_dmabuf_shared_pages_info *shared_pages_info = &sgt_info->shared_pages_info;
> +
> +	if(shared_pages_info->unmap_ops == NULL || shared_pages_info->data_pages == NULL) {
> +		printk("Imported pages already cleaned up or buffer was not imported yet\n");
> +		return 0;
> +	}
> +
> +	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL, shared_pages_info->data_pages, sgt_info->nents) ) {
> +		printk("Cannot unmap data pages\n");
> +		return -EINVAL;
> +	}
> +
> +	gnttab_free_pages(sgt_info->nents, shared_pages_info->data_pages);
> +	kfree(shared_pages_info->data_pages);
> +	kfree(shared_pages_info->unmap_ops);
> +	shared_pages_info->unmap_ops = NULL;
> +	shared_pages_info->data_pages = NULL;
> +
> +	return 0;
> +}
> +
> +/* map and construct sg_lists from reference numbers */
> +struct sg_table* hyper_dmabuf_map_pages(grant_ref_t top_level_gref, int frst_ofst, int last_len, int nents, int sdomain,
> +					struct hyper_dmabuf_shared_pages_info *shared_pages_info)
> +{
> +	struct sg_table *st;
> +	struct page **pages;
> +	struct gnttab_map_grant_ref *ops;
> +	struct gnttab_unmap_grant_ref *unmap_ops;
> +	unsigned long addr;
> +	grant_ref_t *refs;
> +	int i;
> +	int n_level2_refs = (nents / REFS_PER_PAGE) + ((nents % REFS_PER_PAGE) ? 1 : 0);
> +
> +	/* Get data refids */
> +	struct page** refid_pages = hyper_dmabuf_get_data_refs(top_level_gref, sdomain, nents,
> +							       shared_pages_info);
> +	if (refid_pages == NULL)
> +		return NULL;
> +
> +	pages = kcalloc(nents, sizeof(struct page *), GFP_KERNEL);
> +	if (pages == NULL) {
> +		return NULL;
> +	}
> +
> +	/* allocate new pages that are mapped to shared pages via grant-table */
> +	if (gnttab_alloc_pages(nents, pages)) {
> +		printk("Cannot allocate pages\n");
> +		return NULL;
> +	}
> +
> +	ops = kcalloc(nents, sizeof(struct gnttab_map_grant_ref), GFP_KERNEL);
> +	unmap_ops = kcalloc(nents, sizeof(struct gnttab_unmap_grant_ref), GFP_KERNEL);
> +
> +	for (i=0; i<nents; i++) {
> +		addr = (unsigned long)pfn_to_kaddr(page_to_pfn(pages[i]));
> +		refs = pfn_to_kaddr(page_to_pfn(refid_pages[i / REFS_PER_PAGE]));
> +		gnttab_set_map_op(&ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, refs[i % REFS_PER_PAGE], sdomain);
> +		gnttab_set_unmap_op(&unmap_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, -1);
> +	}
> +
> +	if (gnttab_map_refs(ops, NULL, pages, nents)) {
> +		printk("\nxen: dom0: HYPERVISOR map grant ref failed\n");
> +		return NULL;
> +	}
> +
> +	for (i=0; i<nents; i++) {
> +		if (ops[i].status) {
> +			printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d\n",
> +				ops[i].status);
> +			return NULL;
> +		} else {
> +			unmap_ops[i].handle = ops[i].handle;
> +		}
> +	}
> +
> +	st = hyper_dmabuf_create_sgt(pages, frst_ofst, last_len, nents);
> +
> +	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL, refid_pages, n_level2_refs) ) {
> +		printk("Cannot unmap 2nd level refs\n");
> +		return NULL;
> +	}
> +
> +	gnttab_free_pages(n_level2_refs, refid_pages);
> +	kfree(refid_pages);
> +
> +	kfree(shared_pages_info->unmap_ops);
> +	shared_pages_info->unmap_ops = unmap_ops;
> +	shared_pages_info->data_pages = pages;
> +	kfree(ops);
> +
> +	return st;
> +}
> +
> +static int hyper_dmabuf_sync_request_and_wait(int id, int ops)
> +{
> +	struct hyper_dmabuf_ring_rq *req;
> +	int operands[2];
> +	int ret;
> +
> +	operands[0] = id;
> +	operands[1] = ops;
> +
> +	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
> +	if (!req)
> +		return -ENOMEM;
> +
> +	hyper_dmabuf_create_request(req, HYPER_DMABUF_OPS_TO_SOURCE, &operands[0]);
> +
> +	/* send request */
> +	ret = hyper_dmabuf_send_request(id, req);
> +
> +	/* TODO: wait until it gets response.. or can we just move on? */
> +
> +	kfree(req);
> +
> +	return ret;
> +}
> +
> +static int hyper_dmabuf_ops_attach(struct dma_buf* dmabuf, struct device* dev,
> +			struct dma_buf_attachment *attach)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!attach->dmabuf->priv)
> +		return -EINVAL;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_ATTACH);
> +
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +
> +	return ret;
> +}
> +
> +static void hyper_dmabuf_ops_detach(struct dma_buf* dmabuf, struct dma_buf_attachment *attach)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!attach->dmabuf->priv)
> +		return;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_DETACH);
> +
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +}
> +
> +static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachment,
> +						enum dma_data_direction dir)
> +{
> +	struct sg_table *st;
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	struct hyper_dmabuf_pages_info *page_info;
> +	int ret;
> +
> +	if (!attachment->dmabuf->priv)
> +		return NULL;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attachment->dmabuf->priv;
> +
> +	/* extract pages from sgt */
> +	page_info = hyper_dmabuf_ext_pgs(sgt_info->sgt);
> +	if (page_info == NULL)
> +		return NULL;
> +
> +	/* create a new sg_table with extracted pages */
> +	st = hyper_dmabuf_create_sgt(page_info->pages, page_info->frst_ofst,
> +				page_info->last_len, page_info->nents);
> +	if (st == NULL)
> +		return NULL; /* nothing mapped yet, so nothing to free */
> +
> +	if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir))
> +		goto err_free_sg;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_MAP);
> +
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +
> +	return st;
> +
> +err_free_sg:
> +	sg_free_table(st);
> +	kfree(st);
> +	return NULL;
> +}
> +
> +static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
> +						struct sg_table *sg,
> +						enum dma_data_direction dir)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!attachment->dmabuf->priv)
> +		return;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attachment->dmabuf->priv;
> +
> +	dma_unmap_sg(attachment->dev, sg->sgl, sg->nents, dir);
> +
> +	sg_free_table(sg);
> +	kfree(sg);
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_UNMAP);
> +
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +}
> +
> +static void hyper_dmabuf_ops_release(struct dma_buf *dmabuf)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_RELEASE);
> +
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +}
> +
> +static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return -EINVAL;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +
> +	return ret;
> +}
> +
> +static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return -EINVAL;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_END_CPU_ACCESS);
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +
> +	return 0;
> +}
> +
> +static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return NULL;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_KMAP_ATOMIC);
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +
> +	return NULL; /* for now NULL.. need to return the address of mapped region */
> +}
> +
> +static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum, void *vaddr)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +}
> +
> +static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return NULL;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_KMAP);
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +
> +	return NULL; /* for now NULL.. need to return the address of mapped region */
> +}
> +
> +static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum, void *vaddr)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_KUNMAP);
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +}
> +
> +static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return -EINVAL;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_MMAP);
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +
> +	return ret;
> +}
> +
> +static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return NULL;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_VMAP);
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +
> +	return NULL;
> +}
> +
> +static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_VUNMAP);
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +}
> +
> +static const struct dma_buf_ops hyper_dmabuf_ops = {
> +		.attach = hyper_dmabuf_ops_attach,
> +		.detach = hyper_dmabuf_ops_detach,
> +		.map_dma_buf = hyper_dmabuf_ops_map,
> +		.unmap_dma_buf = hyper_dmabuf_ops_unmap,
> +		.release = hyper_dmabuf_ops_release,
> +		.begin_cpu_access = hyper_dmabuf_ops_begin_cpu_access,
> +		.end_cpu_access = hyper_dmabuf_ops_end_cpu_access,
> +		.map_atomic = hyper_dmabuf_ops_kmap_atomic,
> +		.unmap_atomic = hyper_dmabuf_ops_kunmap_atomic,
> +		.map = hyper_dmabuf_ops_kmap,
> +		.unmap = hyper_dmabuf_ops_kunmap,
> +		.mmap = hyper_dmabuf_ops_mmap,
> +		.vmap = hyper_dmabuf_ops_vmap,
> +		.vunmap = hyper_dmabuf_ops_vunmap,
> +};
> +
> +/* exporting dmabuf as fd */
> +int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags)
> +{
> +	int fd;
> +
> +	struct dma_buf* dmabuf;
> +
> +	/* create a dma_buf for the imported buffer, then create
> +	 * and bind an fd to hand back to userspace */
> +	dmabuf = hyper_dmabuf_export_dma_buf(dinfo);
> +	if (IS_ERR(dmabuf))
> +		return PTR_ERR(dmabuf);
> +
> +	fd = dma_buf_fd(dmabuf, flags);
> +
> +	return fd;
> +}
> +
> +struct dma_buf* hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo)
> +{
> +	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
> +
> +	exp_info.ops = &hyper_dmabuf_ops;
> +	exp_info.size = dinfo->sgt->nents * PAGE_SIZE; /* multiple of PAGE_SIZE, not considering offset */
> +	exp_info.flags = /* not sure about flag */0;
> +	exp_info.priv = dinfo;
> +
> +	return dma_buf_export(&exp_info);
> +};
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
> new file mode 100644
> index 0000000..003c158
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
> @@ -0,0 +1,31 @@
> +#ifndef __HYPER_DMABUF_IMP_H__
> +#define __HYPER_DMABUF_IMP_H__
> +
> +#include "hyper_dmabuf_struct.h"
> +
> +/* extract pages directly from struct sg_table */
> +struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt);
> +
> +/* create sg_table with given pages and other parameters */
> +struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
> +                                int frst_ofst, int last_len, int nents);
> +
> +grant_ref_t hyper_dmabuf_create_gref_table(struct page **pages, int rdomain, int nents,
> +					   struct hyper_dmabuf_shared_pages_info *shared_pages_info);
> +
> +int hyper_dmabuf_cleanup_gref_table(struct hyper_dmabuf_sgt_info *sgt_info);
> +
> +int hyper_dmabuf_cleanup_imported_pages(struct hyper_dmabuf_imported_sgt_info *sgt_info);
> +
> +/* map first level tables that contains reference numbers for actual shared pages */
> +grant_ref_t *hyper_dmabuf_map_gref_table(grant_ref_t *gref_table, int n_pages_table);
> +
> +/* map and construct sg_lists from reference numbers */
> +struct sg_table* hyper_dmabuf_map_pages(grant_ref_t gref, int frst_ofst, int last_len, int nents, int sdomain,
> +					struct hyper_dmabuf_shared_pages_info *shared_pages_info);
> +
> +int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags);
> +
> +struct dma_buf* hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo);
> +
> +#endif /* __HYPER_DMABUF_IMP_H__ */
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
> new file mode 100644
> index 0000000..5e50908
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
> @@ -0,0 +1,462 @@
> +#include <linux/kernel.h>
> +#include <linux/errno.h>
> +#include <linux/module.h>
> +#include <linux/slab.h>
> +#include <linux/miscdevice.h>
> +#include <linux/uaccess.h>
> +#include <linux/dma-buf.h>
> +#include <linux/delay.h>
> +#include "hyper_dmabuf_struct.h"
> +#include "hyper_dmabuf_imp.h"
> +#include "hyper_dmabuf_list.h"
> +#include "hyper_dmabuf_drv.h"
> +#include "hyper_dmabuf_query.h"
> +#include "xen/hyper_dmabuf_xen_comm.h"
> +#include "hyper_dmabuf_msg.h"
> +
> +static struct hyper_dmabuf_private {
> +	struct device *device;
> +} hyper_dmabuf_private;
> +
> +static uint32_t hyper_dmabuf_id_gen(void) {
> +	/* TODO: add proper implementation */
> +	static uint32_t id = 0;
> +	static int32_t domid = -1;
> +	if (domid == -1) {
> +		domid = hyper_dmabuf_get_domid();
> +	}
> +	return HYPER_DMABUF_ID_IMPORTER(domid, id++);
> +}
> +
> +static int hyper_dmabuf_exporter_ring_setup(void *data)
> +{
> +	struct ioctl_hyper_dmabuf_exporter_ring_setup *ring_attr;
> +	int ret = 0;
> +
> +	if (!data) {
> +		printk("user data is NULL\n");
> +		return -EINVAL;
> +	}
> +	ring_attr = (struct ioctl_hyper_dmabuf_exporter_ring_setup *)data;
> +
> +	ret = hyper_dmabuf_exporter_ringbuf_init(ring_attr->remote_domain,
> +						&ring_attr->ring_refid,
> +						&ring_attr->port);
> +
> +	return ret;
> +}
> +
> +static int hyper_dmabuf_importer_ring_setup(void *data)
> +{
> +	struct ioctl_hyper_dmabuf_importer_ring_setup *setup_imp_ring_attr;
> +	int ret = 0;
> +
> +	if (!data) {
> +		printk("user data is NULL\n");
> +		return -EINVAL;
> +	}
> +
> +	setup_imp_ring_attr = (struct ioctl_hyper_dmabuf_importer_ring_setup *)data;
> +
> +	/* user need to provide a port number and ref # for the page used as ring buffer */
> +	ret = hyper_dmabuf_importer_ringbuf_init(setup_imp_ring_attr->source_domain,
> +						 setup_imp_ring_attr->ring_refid,
> +						 setup_imp_ring_attr->port);
> +
> +	return ret;
> +}
> +
> +static int hyper_dmabuf_export_remote(void *data)
> +{
> +	struct ioctl_hyper_dmabuf_export_remote *export_remote_attr;
> +	struct dma_buf *dma_buf;
> +	struct dma_buf_attachment *attachment;
> +	struct sg_table *sgt;
> +	struct hyper_dmabuf_pages_info *page_info;
> +	struct hyper_dmabuf_sgt_info *sgt_info;
> +	struct hyper_dmabuf_ring_rq *req;
> +	int operands[9];
> +	int ret = 0;
> +
> +	if (!data) {
> +		printk("user data is NULL\n");
> +		return -EINVAL;
> +	}
> +
> +	export_remote_attr = (struct ioctl_hyper_dmabuf_export_remote *)data;
> +
> +	dma_buf = dma_buf_get(export_remote_attr->dmabuf_fd);
> +	if (IS_ERR(dma_buf)) {
> +		printk("Cannot get dma buf\n");
> +		return PTR_ERR(dma_buf);
> +	}
> +
> +	attachment = dma_buf_attach(dma_buf, hyper_dmabuf_private.device);
> +	if (IS_ERR(attachment)) {
> +		printk("Cannot get attachment\n");
> +		dma_buf_put(dma_buf); /* drop the ref taken by dma_buf_get() */
> +		return PTR_ERR(attachment);
> +	}
> +
> +	/* we check if this specific attachment was already exported
> +	 * to the same domain and if yes, it returns hyper_dmabuf_id
> +	 * of pre-exported sgt */
> +	ret = hyper_dmabuf_find_id(attachment, export_remote_attr->remote_domain);
> +	if (ret != -1) {
> +		dma_buf_detach(dma_buf, attachment);
> +		dma_buf_put(dma_buf);
> +		export_remote_attr->hyper_dmabuf_id = ret;
> +		return 0;
> +	}
> +	/* reset ret: -1 from the lookup above just means "not exported yet",
> +	 * not an error this ioctl should report to userspace */
> +	ret = 0;
> +
> +	sgt = dma_buf_map_attachment(attachment, DMA_BIDIRECTIONAL);
> +	if (IS_ERR(sgt)) {
> +		dma_buf_detach(dma_buf, attachment);
> +		dma_buf_put(dma_buf);
> +		return PTR_ERR(sgt);
> +	}
> +
> +	sgt_info = kmalloc(sizeof(*sgt_info), GFP_KERNEL);
> +
> +	sgt_info->hyper_dmabuf_id = hyper_dmabuf_id_gen();
> +	/* TODO: We might need to consider using port number on event channel? */
> +	sgt_info->hyper_dmabuf_rdomain = export_remote_attr->remote_domain;
> +	sgt_info->sgt = sgt;
> +	sgt_info->attachment = attachment;
> +	sgt_info->dma_buf = dma_buf;
> +
> +	page_info = hyper_dmabuf_ext_pgs(sgt);
> +	if (page_info == NULL)
> +		goto fail_export;
> +
> +	/* now register it to export list */
> +	hyper_dmabuf_register_exported(sgt_info);
> +
> +	page_info->hyper_dmabuf_rdomain = sgt_info->hyper_dmabuf_rdomain;
> +	page_info->hyper_dmabuf_id = sgt_info->hyper_dmabuf_id; /* may not be needed */
> +
> +	export_remote_attr->hyper_dmabuf_id = sgt_info->hyper_dmabuf_id;
> +
> +	/* now create table of grefs for shared pages and */
> +
> +	/* now create request for importer via ring */
> +	operands[0] = page_info->hyper_dmabuf_id;
> +	operands[1] = page_info->nents;
> +	operands[2] = page_info->frst_ofst;
> +	operands[3] = page_info->last_len;
> +	operands[4] = hyper_dmabuf_create_gref_table(page_info->pages, export_remote_attr->remote_domain,
> +						page_info->nents, &sgt_info->shared_pages_info);
> +	/* driver/application specific private info, 16 bytes (4 x uint32_t) */
> +	operands[5] = export_remote_attr->private[0];
> +	operands[6] = export_remote_attr->private[1];
> +	operands[7] = export_remote_attr->private[2];
> +	operands[8] = export_remote_attr->private[3];
> +
> +	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
> +	if (!req)
> +		goto fail_send_request;
> +
> +	/* composing a message to the importer */
> +	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT, &operands[0]);
> +	if(hyper_dmabuf_send_request(export_remote_attr->remote_domain, req))
> +		goto fail_send_request;
> +
> +	/* free msg */
> +	kfree(req);
> +	/* free page_info */
> +	kfree(page_info);
> +
> +	return ret;
> +
> +fail_send_request:
> +	kfree(req);
> +	hyper_dmabuf_remove_exported(sgt_info->hyper_dmabuf_id);
> +
> +fail_export:
> +	dma_buf_unmap_attachment(sgt_info->attachment, sgt_info->sgt, DMA_BIDIRECTIONAL);
> +	dma_buf_detach(sgt_info->dma_buf, sgt_info->attachment);
> +	dma_buf_put(sgt_info->dma_buf);
> +
> +	return -EINVAL;
> +}
> +
> +static int hyper_dmabuf_export_fd_ioctl(void *data)
> +{
> +	struct ioctl_hyper_dmabuf_export_fd *export_fd_attr;
> +	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
> +	int ret = 0;
> +
> +	if (!data) {
> +		printk("user data is NULL\n");
> +		return -EINVAL;
> +	}
> +
> +	export_fd_attr = (struct ioctl_hyper_dmabuf_export_fd *)data;
> +
> +	/* look for dmabuf for the id */
> +	imported_sgt_info = hyper_dmabuf_find_imported(export_fd_attr->hyper_dmabuf_id);
> +	if (imported_sgt_info == NULL) /* can't find sgt in the table */
> +		return -EINVAL;
> +
> +	printk("%s Found buffer gref %d  off %d last len %d nents %d domain %d\n", __func__,
> +		imported_sgt_info->gref, imported_sgt_info->frst_ofst,
> +		imported_sgt_info->last_len, imported_sgt_info->nents,
> +		HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported_sgt_info->hyper_dmabuf_id));
> +
> +	imported_sgt_info->sgt = hyper_dmabuf_map_pages(imported_sgt_info->gref,
> +						imported_sgt_info->frst_ofst,
> +						imported_sgt_info->last_len,
> +						imported_sgt_info->nents,
> +						HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported_sgt_info->hyper_dmabuf_id),
> +						&imported_sgt_info->shared_pages_info);
> +
> +	if (!imported_sgt_info->sgt) {
> +		return -EINVAL;
> +	}
> +
> +	ret = hyper_dmabuf_export_fd(imported_sgt_info, export_fd_attr->flags);
> +	if (ret < 0) /* failed to create an fd */
> +		return ret;
> +
> +	export_fd_attr->fd = ret;
> +
> +	return 0;
> +}
> +
> +/* remove dmabuf from the database and send a request to the source domain
> + * to unmap it */
> +static int hyper_dmabuf_destroy(void *data)
> +{
> +	struct ioctl_hyper_dmabuf_destroy *destroy_attr;
> +	struct hyper_dmabuf_sgt_info *sgt_info;
> +	struct hyper_dmabuf_ring_rq *req;
> +	int ret;
> +
> +	if (!data) {
> +		printk("user data is NULL\n");
> +		return -EINVAL;
> +	}
> +
> +	destroy_attr = (struct ioctl_hyper_dmabuf_destroy *)data;
> +
> +	/* find dmabuf in export list */
> +	sgt_info = hyper_dmabuf_find_exported(destroy_attr->hyper_dmabuf_id);
> +	if (sgt_info == NULL) { /* failed to find corresponding entry in export list */
> +		destroy_attr->status = -EINVAL;
> +		return -EFAULT;
> +	}
> +
> +	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
> +	if (!req)
> +		return -ENOMEM;
> +
> +	hyper_dmabuf_create_request(req, HYPER_DMABUF_DESTROY, &destroy_attr->hyper_dmabuf_id);
> +
> +	/* now send destroy request to the remote domain;
> +	 * currently assumes there's only one importer */
> +	ret = hyper_dmabuf_send_request(sgt_info->hyper_dmabuf_rdomain, req);
> +	if (ret < 0) {
> +		kfree(req);
> +		return -EFAULT;
> +	}
> +
> +	/* free msg */
> +	kfree(req);
> +	destroy_attr->status = ret;
> +
> +	/* The rest of the cleanup happens when the importer frees its buffer;
> +	 * the current implementation assumes there is only one importer.
> +	 */
> +
> +	return ret;
> +}
> +
> +static int hyper_dmabuf_query(void *data)
> +{
> +	struct ioctl_hyper_dmabuf_query *query_attr;
> +	struct hyper_dmabuf_sgt_info *sgt_info;
> +	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
> +	int ret = 0;
> +
> +	if (!data) {
> +		printk("user data is NULL\n");
> +		return -EINVAL;
> +	}
> +
> +	query_attr = (struct ioctl_hyper_dmabuf_query *)data;
> +
> +	sgt_info = hyper_dmabuf_find_exported(query_attr->hyper_dmabuf_id);
> +	imported_sgt_info = hyper_dmabuf_find_imported(query_attr->hyper_dmabuf_id);
> +
> +	/* if the dmabuf can't be found in either list, return */
> +	if (!sgt_info && !imported_sgt_info) {
> +		printk("can't find entry anywhere\n");
> +		return -EINVAL;
> +	}
> +
> +	/* not considering the case where a dmabuf is found on both queues
> +	 * in one domain */
> +	switch (query_attr->item)
> +	{
> +		case DMABUF_QUERY_TYPE_LIST:
> +			if (sgt_info) {
> +				query_attr->info = EXPORTED;
> +			} else {
> +				query_attr->info = IMPORTED;
> +			}
> +			break;
> +
> +		/* exporting domain of this specific dmabuf*/
> +		case DMABUF_QUERY_EXPORTER:
> +			if (sgt_info) {
> +				query_attr->info = 0xFFFFFFFF; /* myself */
> +			} else {
> +				query_attr->info = (HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported_sgt_info->hyper_dmabuf_id));
> +			}
> +			break;
> +
> +		/* importing domain of this specific dmabuf */
> +		case DMABUF_QUERY_IMPORTER:
> +			if (sgt_info) {
> +				query_attr->info = sgt_info->hyper_dmabuf_rdomain;
> +			} else {
> +#if 0 /* TODO: a global variable, current_domain does not exist yet*/
> +				query_attr->info = current_domain;
> +#endif
> +			}
> +			break;
> +
> +		/* size of dmabuf in byte */
> +		case DMABUF_QUERY_SIZE:
> +			if (sgt_info) {
> +#if 0 /* TODO: hyper_dmabuf_buf_size is not implemented yet */
> +				query_attr->info = hyper_dmabuf_buf_size(sgt_info->sgt);
> +#endif
> +			} else {
> +				query_attr->info = imported_sgt_info->nents * PAGE_SIZE -
> +						   imported_sgt_info->frst_ofst - PAGE_SIZE +
> +						   imported_sgt_info->last_len;
> +			}
> +			break;
> +	}
> +
> +	return ret;
> +}
> +
> +static int hyper_dmabuf_remote_exporter_ring_setup(void *data)
> +{
> +	struct ioctl_hyper_dmabuf_remote_exporter_ring_setup *remote_exporter_ring_setup;
> +	struct hyper_dmabuf_ring_rq *req;
> +
> +	if (!data) {
> +		printk("user data is NULL\n");
> +		return -EINVAL;
> +	}
> +
> +	remote_exporter_ring_setup = (struct ioctl_hyper_dmabuf_remote_exporter_ring_setup *)data;
> +
> +	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
> +	if (!req)
> +		return -ENOMEM;
> +
> +	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORTER_RING_SETUP, NULL);
> +
> +	/* requesting remote domain to set-up exporter's ring */
> +	if(hyper_dmabuf_send_request(remote_exporter_ring_setup->rdomain, req) < 0) {
> +		kfree(req);
> +		return -EINVAL;
> +	}
> +
> +	kfree(req);
> +	return 0;
> +}
> +
> +static const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
> +	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORTER_RING_SETUP, hyper_dmabuf_exporter_ring_setup, 0),
> +	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_IMPORTER_RING_SETUP, hyper_dmabuf_importer_ring_setup, 0),
> +	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_REMOTE, hyper_dmabuf_export_remote, 0),
> +	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_FD, hyper_dmabuf_export_fd_ioctl, 0),
> +	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_DESTROY, hyper_dmabuf_destroy, 0),
> +	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_QUERY, hyper_dmabuf_query, 0),
> +	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_REMOTE_EXPORTER_RING_SETUP, hyper_dmabuf_remote_exporter_ring_setup, 0),
> +};
> +
> +static long hyper_dmabuf_ioctl(struct file *filp,
> +			unsigned int cmd, unsigned long param)
> +{
> +	const struct hyper_dmabuf_ioctl_desc *ioctl = NULL;
> +	unsigned int nr = _IOC_NR(cmd);
> +	int ret = -EINVAL;
> +	hyper_dmabuf_ioctl_t func;
> +	char *kdata;
> +
> +	/* nr comes straight from userspace; bound it before indexing the table */
> +	if (nr >= ARRAY_SIZE(hyper_dmabuf_ioctls)) {
> +		printk("invalid ioctl\n");
> +		return -EINVAL;
> +	}
> +
> +	ioctl = &hyper_dmabuf_ioctls[nr];
> +
> +	func = ioctl->func;
> +
> +	if (unlikely(!func)) {
> +		printk("no function\n");
> +		return -EINVAL;
> +	}
> +
> +	kdata = kmalloc(_IOC_SIZE(cmd), GFP_KERNEL);
> +	if (!kdata) {
> +		printk("no memory\n");
> +		return -ENOMEM;
> +	}
> +
> +	if (copy_from_user(kdata, (void __user *)param, _IOC_SIZE(cmd)) != 0) {
> +		printk("failed to copy from user arguments\n");
> +		kfree(kdata);
> +		return -EFAULT;
> +	}
> +
> +	ret = func(kdata);
> +
> +	if (copy_to_user((void __user *)param, kdata, _IOC_SIZE(cmd)) != 0) {
> +		printk("failed to copy to user arguments\n");
> +		kfree(kdata);
> +		return -EFAULT;
> +	}
> +
> +	kfree(kdata);
> +
> +	return ret;
> +}
> +
> +struct device_info {
> +	int curr_domain;
> +};
> +
> +/*===============================================================================================*/
> +static const struct file_operations hyper_dmabuf_driver_fops =
> +{
> +	.owner = THIS_MODULE,
> +	.unlocked_ioctl = hyper_dmabuf_ioctl,
> +};
> +
> +static struct miscdevice hyper_dmabuf_miscdev = {
> +	.minor = MISC_DYNAMIC_MINOR,
> +	.name = "xen/hyper_dmabuf",
> +	.fops = &hyper_dmabuf_driver_fops,
> +};
> +
> +static const char device_name[] = "hyper_dmabuf";
> +
> +/*===============================================================================================*/
> +int register_device(void)
> +{
> +	int result = 0;
> +
> +	result = misc_register(&hyper_dmabuf_miscdev);
> +
> +	if (result != 0) {
> +		printk(KERN_WARNING "hyper_dmabuf: driver can't be registered\n");
> +		return result;
> +	}
> +
> +	hyper_dmabuf_private.device = hyper_dmabuf_miscdev.this_device;
> +
> +	/* TODO: Check if there is a different way to initialize dma mask nicely */
> +	dma_coerce_mask_and_coherent(hyper_dmabuf_private.device, 0xFFFFFFFF);
> +
> +	/* TODO find a way to provide parameters for below function or move that to ioctl */
> +/*	err = bind_interdomain_evtchn_to_irqhandler(rdomain, evtchn,
> +				src_sink_isr, PORT_NUM, "remote_domain", &info);
> +	if (err < 0) {
> +		printk("hyper_dmabuf: can't register interrupt handlers\n");
> +		return -EFAULT;
> +	}
> +
> +	info.irq = err;
> +*/
> +	return result;
> +}
> +
> +/*-----------------------------------------------------------------------------------------------*/
> +void unregister_device(void)
> +{
> +	printk(KERN_NOTICE "hyper_dmabuf: unregister_device() is called\n");
> +	misc_deregister(&hyper_dmabuf_miscdev);
> +}
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
> new file mode 100644
> index 0000000..77a7e65
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
> @@ -0,0 +1,119 @@
> +#include <linux/kernel.h>
> +#include <linux/errno.h>
> +#include <linux/module.h>
> +#include <linux/slab.h>
> +#include <linux/cdev.h>
> +#include <asm/uaccess.h>
> +#include <linux/hashtable.h>
> +#include <linux/dma-buf.h>
> +#include "hyper_dmabuf_list.h"
> +
> +DECLARE_HASHTABLE(hyper_dmabuf_hash_imported, MAX_ENTRY_IMPORTED);
> +DECLARE_HASHTABLE(hyper_dmabuf_hash_exported, MAX_ENTRY_EXPORTED);
> +
> +int hyper_dmabuf_table_init(void)
> +{
> +	hash_init(hyper_dmabuf_hash_imported);
> +	hash_init(hyper_dmabuf_hash_exported);
> +	return 0;
> +}
> +
> +int hyper_dmabuf_table_destroy(void)
> +{
> +	/* TODO: cleanup hyper_dmabuf_hash_imported and hyper_dmabuf_hash_exported */
> +	return 0;
> +}
> +
> +int hyper_dmabuf_register_exported(struct hyper_dmabuf_sgt_info *info)
> +{
> +	struct hyper_dmabuf_info_entry_exported *info_entry;
> +
> +	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
> +	if (!info_entry)
> +		return -ENOMEM;
> +
> +	info_entry->info = info;
> +
> +	hash_add(hyper_dmabuf_hash_exported, &info_entry->node,
> +		info_entry->info->hyper_dmabuf_id);
> +
> +	return 0;
> +}
> +
> +int hyper_dmabuf_register_imported(struct hyper_dmabuf_imported_sgt_info* info)
> +{
> +	struct hyper_dmabuf_info_entry_imported *info_entry;
> +
> +	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
> +	if (!info_entry)
> +		return -ENOMEM;
> +
> +	info_entry->info = info;
> +
> +	hash_add(hyper_dmabuf_hash_imported, &info_entry->node,
> +		info_entry->info->hyper_dmabuf_id);
> +
> +	return 0;
> +}
> +
> +struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(int id)
> +{
> +	struct hyper_dmabuf_info_entry_exported *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
> +		if(info_entry->info->hyper_dmabuf_id == id)
> +			return info_entry->info;
> +
> +	return NULL;
> +}
> +
> +/* search for pre-exported sgt and return its id if it exists */
> +int hyper_dmabuf_find_id(struct dma_buf_attachment *attach, int domid)
> +{
> +	struct hyper_dmabuf_info_entry_exported *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
> +		if(info_entry->info->attachment == attach &&
> +			info_entry->info->hyper_dmabuf_rdomain == domid)
> +			return info_entry->info->hyper_dmabuf_id;
> +
> +	return -1;
> +}
> +
> +struct hyper_dmabuf_imported_sgt_info *hyper_dmabuf_find_imported(int id)
> +{
> +	struct hyper_dmabuf_info_entry_imported *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
> +		if(info_entry->info->hyper_dmabuf_id == id)
> +			return info_entry->info;
> +
> +	return NULL;
> +}
> +
> +int hyper_dmabuf_remove_exported(int id)
> +{
> +	struct hyper_dmabuf_info_entry_exported *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
> +		if(info_entry->info->hyper_dmabuf_id == id) {
> +			hash_del(&info_entry->node);
> +			return 0;
> +		}
> +
> +	return -1;
> +}
> +
> +int hyper_dmabuf_remove_imported(int id)
> +{
> +	struct hyper_dmabuf_info_entry_imported *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
> +		if(info_entry->info->hyper_dmabuf_id == id) {
> +			hash_del(&info_entry->node);
> +			return 0;
> +		}
> +
> +	return -1;
> +}
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
> new file mode 100644
> index 0000000..869cd9a
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
> @@ -0,0 +1,40 @@
> +#ifndef __HYPER_DMABUF_LIST_H__
> +#define __HYPER_DMABUF_LIST_H__
> +
> +#include "hyper_dmabuf_struct.h"
> +
> +/* number of bits to be used for exported dmabufs hash table */
> +#define MAX_ENTRY_EXPORTED 7
> +/* number of bits to be used for imported dmabufs hash table */
> +#define MAX_ENTRY_IMPORTED 7
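
(A 7-bit hash size here means each table has 2^7 = 128 buckets.)
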
> +
> +struct hyper_dmabuf_info_entry_exported {
> +	struct hyper_dmabuf_sgt_info *info;
> +	struct hlist_node node;
> +};
> +
> +struct hyper_dmabuf_info_entry_imported {
> +	struct hyper_dmabuf_imported_sgt_info *info;
> +	struct hlist_node node;
> +};
> +
> +int hyper_dmabuf_table_init(void);
> +
> +int hyper_dmabuf_table_destroy(void);
> +
> +int hyper_dmabuf_register_exported(struct hyper_dmabuf_sgt_info *info);
> +
> +/* search for pre-exported sgt and return its id if it exists */
> +int hyper_dmabuf_find_id(struct dma_buf_attachment *attach, int domid);
> +
> +int hyper_dmabuf_register_imported(struct hyper_dmabuf_imported_sgt_info* info);
> +
> +struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(int id);
> +
> +struct hyper_dmabuf_imported_sgt_info *hyper_dmabuf_find_imported(int id);
> +
> +int hyper_dmabuf_remove_exported(int id);
> +
> +int hyper_dmabuf_remove_imported(int id);
> +
> +#endif // __HYPER_DMABUF_LIST_H__
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
> new file mode 100644
> index 0000000..3237e50
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
> @@ -0,0 +1,212 @@
> +#include <linux/kernel.h>
> +#include <linux/errno.h>
> +#include <linux/module.h>
> +#include <linux/slab.h>
> +#include <linux/dma-buf.h>
> +#include "hyper_dmabuf_imp.h"
> +//#include "hyper_dmabuf_remote_sync.h"
> +#include "xen/hyper_dmabuf_xen_comm.h"
> +#include "hyper_dmabuf_msg.h"
> +#include "hyper_dmabuf_list.h"
> +
> +void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
> +				        enum hyper_dmabuf_command command, int *operands)
> +{
> +	int i;
> +
> +	request->request_id = hyper_dmabuf_next_req_id_export();
> +	request->status = HYPER_DMABUF_REQ_NOT_RESPONDED;
> +	request->command = command;
> +
> +	switch (command) {
> +	/* as exporter, commands to importer */
> +	case HYPER_DMABUF_EXPORT:
> +		/* exporting pages for dmabuf */
> +		/* command : HYPER_DMABUF_EXPORT,
> +		 * operands0 : hyper_dmabuf_id
> +		 * operands1 : number of pages to be shared
> +		 * operands2 : offset of data in the first page
> +		 * operands3 : length of data in the last page
> +		 * operands4 : top-level reference number for shared pages
> +		 * operands5~8 : Driver-specific private data (e.g. graphic buffer's meta info)
> +		 */
> +		for (i = 0; i < 9; i++)
> +			request->operands[i] = operands[i];
> +		break;
> +
> +	case HYPER_DMABUF_DESTROY:
> +		/* destroy sg_list for hyper_dmabuf_id on remote side */
> +		/* command : DMABUF_DESTROY,
> +		 * operands0 : hyper_dmabuf_id
> +		 */
> +		request->operands[0] = operands[0];
> +		break;
> +
> +	case HYPER_DMABUF_OPS_TO_REMOTE:
> +		/* notifying dmabuf map/unmap to importer (probably not needed) */
> +		/* for dmabuf synchronization */
> +		break;
> +
> +	/* as importer, command to exporter */
> +	case HYPER_DMABUF_OPS_TO_SOURCE:
> +		/* notifying dmabuf map/unmap to exporter; map makes the driver do shadow mapping
> +		 * or unmapping for synchronization with the original exporter (e.g. i915) */
> +		/* command : DMABUF_OPS_TO_SOURCE.
> +		 * operands0 : hyper_dmabuf_id
> +		 * operands1 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
> +		 */
> +		for (i = 0; i < 2; i++)
> +			request->operands[i] = operands[i];
> +		break;
> +
> +	/* requesting the other side to setup another ring channel for reverse direction */
> +	case HYPER_DMABUF_EXPORTER_RING_SETUP:
> +		/* command : HYPER_DMABUF_EXPORTER_RING_SETUP */
> +		/* no operands needed */
> +		break;
> +
> +	default:
> +		/* no command found */
> +		return;
> +	}
> +}
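For reference, an exporter-side caller would pair this with
hyper_dmabuf_send_request() roughly as below; the operand values and the
surrounding names are illustrative only, and error handling is elided:

	struct hyper_dmabuf_ring_rq *req;
	int operands[9] = { id, nents, frst_ofst, last_len, top_level_ref,
			    priv[0], priv[1], priv[2], priv[3] };

	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT, &operands[0]);
	if (hyper_dmabuf_send_request(rdomain, req))
		printk("export request could not be sent\n");
	kfree(req);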
> +
> +int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
> +{
> +	int i, ret;
> +	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
> +	struct hyper_dmabuf_sgt_info *sgt_info;
> +
> +	/* make sure req is not NULL (may not be needed) */
> +	if (!req) {
> +		return -EINVAL;
> +	}
> +
> +	req->status = HYPER_DMABUF_REQ_PROCESSED;
> +
> +	switch (req->command) {
> +	case HYPER_DMABUF_EXPORT:
> +		/* exporting pages for dmabuf */
> +		/* command : HYPER_DMABUF_EXPORT,
> +		 * operands0 : hyper_dmabuf_id
> +		 * operands1 : number of pages to be shared
> +		 * operands2 : offset of data in the first page
> +		 * operands3 : length of data in the last page
> +		 * operands4 : top-level reference number for shared pages
> +		 * operands5~8 : Driver-specific private data (e.g. graphic buffer's meta info)
> +		 */
> +		imported_sgt_info = kcalloc(1, sizeof(*imported_sgt_info), GFP_KERNEL);
> +		if (!imported_sgt_info)
> +			return -ENOMEM;
> +
> +		imported_sgt_info->hyper_dmabuf_id = req->operands[0];
> +		imported_sgt_info->frst_ofst = req->operands[2];
> +		imported_sgt_info->last_len = req->operands[3];
> +		imported_sgt_info->nents = req->operands[1];
> +		imported_sgt_info->gref = req->operands[4];
> +
> +		printk("DMABUF was exported\n");
> +		printk("\thyper_dmabuf_id %d\n", req->operands[0]);
> +		printk("\tnents %d\n", req->operands[1]);
> +		printk("\tfirst offset %d\n", req->operands[2]);
> +		printk("\tlast len %d\n", req->operands[3]);
> +		printk("\tgrefid %d\n", req->operands[4]);
> +
> +		for (i = 0; i < 4; i++)
> +			imported_sgt_info->private[i] = req->operands[5+i];
> +
> +		hyper_dmabuf_register_imported(imported_sgt_info);
> +		break;
> +
> +	case HYPER_DMABUF_DESTROY:
> +		/* destroy sg_list for hyper_dmabuf_id on remote side */
> +		/* command : DMABUF_DESTROY,
> +		 * operands0 : hyper_dmabuf_id
> +		 */
> +
> +		imported_sgt_info =
> +			hyper_dmabuf_find_imported(req->operands[0]);
> +
> +		if (imported_sgt_info) {
> +			hyper_dmabuf_cleanup_imported_pages(imported_sgt_info);
> +
> +			hyper_dmabuf_remove_imported(req->operands[0]);
> +
> +			/* TODO: cleanup sgt on importer side etc */
> +		}
> +
> +		/* Notify exporter that the buffer is freed and that it can clean it up */
> +		req->status = HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP;
> +		req->command = HYPER_DMABUF_DESTROY_FINISH;
> +
> +#if 0 /* function is not implemented yet */
> +
> +		ret = hyper_dmabuf_destroy_sgt(req->hyper_dmabuf_id);
> +#endif
> +		break;
> +
> +	case HYPER_DMABUF_DESTROY_FINISH:
> +		/* destroy sg_list for hyper_dmabuf_id on local side */
> +		/* command : DMABUF_DESTROY_FINISH,
> +		 * operands0 : hyper_dmabuf_id
> +		 */
> +
> +		/* TODO: this should be done in a workqueue, once acks have been received from all importers that the buffer is no longer used */
> +		sgt_info =
> +			hyper_dmabuf_find_exported(req->operands[0]);
> +
> +		if (sgt_info) {
> +			hyper_dmabuf_cleanup_gref_table(sgt_info);
> +
> +			/* unmap dmabuf */
> +			dma_buf_unmap_attachment(sgt_info->attachment, sgt_info->sgt, DMA_BIDIRECTIONAL);
> +			dma_buf_detach(sgt_info->dma_buf, sgt_info->attachment);
> +			dma_buf_put(sgt_info->dma_buf);
> +
> +			/* TODO: Rest of cleanup, sgt cleanup etc */
> +		}
> +
> +		break;
> +
> +	case HYPER_DMABUF_OPS_TO_REMOTE:
> +		/* notifying dmabuf map/unmap to importer (probably not needed) */
> +		/* for dmabuf synchronization */
> +		break;
> +
> +	/* as importer, command to exporter */
> +	case HYPER_DMABUF_OPS_TO_SOURCE:
> +		/* notifying dmabuf map/unmap to exporter; map makes the driver do shadow mapping
> +		 * or unmapping for synchronization with the original exporter (e.g. i915) */
> +		/* command : DMABUF_OPS_TO_SOURCE.
> +		 * operands0 : hyper_dmabuf_id
> +		 * operands1 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
> +		 */
> +		break;
> +
> +	/* requesting the other side to setup another ring channel for reverse direction */
> +	case HYPER_DMABUF_EXPORTER_RING_SETUP:
> +		/* command: HYPER_DMABUF_EXPORTER_RING_SETUP
> +		 * no operands needed */
> +		ret = hyper_dmabuf_exporter_ringbuf_init(domid, &req->operands[0], &req->operands[1]);
> +		if (ret < 0) {
> +			req->status = HYPER_DMABUF_REQ_ERROR;
> +			return -EINVAL;
> +		}
> +
> +		req->status = HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP;
> +		req->command = HYPER_DMABUF_IMPORTER_RING_SETUP;
> +		break;
> +
> +	case HYPER_DMABUF_IMPORTER_RING_SETUP:
> +		/* command: HYPER_DMABUF_IMPORTER_RING_SETUP */
> +		/* no operands needed */
> +		ret = hyper_dmabuf_importer_ringbuf_init(domid, req->operands[0], req->operands[1]);
> +		if (ret < 0)
> +			return -EINVAL;
> +
> +		break;
> +
> +	default:
> +		/* no matched command, nothing to do.. just return error */
> +		return -EINVAL;
> +	}
> +
> +	return req->command;
> +}
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
> new file mode 100644
> index 0000000..44bfb70
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
> @@ -0,0 +1,45 @@
> +#ifndef __HYPER_DMABUF_MSG_H__
> +#define __HYPER_DMABUF_MSG_H__
> +
> +enum hyper_dmabuf_command {
> +	HYPER_DMABUF_EXPORT = 0x10,
> +	HYPER_DMABUF_DESTROY,
> +	HYPER_DMABUF_DESTROY_FINISH,
> +	HYPER_DMABUF_OPS_TO_REMOTE,
> +	HYPER_DMABUF_OPS_TO_SOURCE,
> +	HYPER_DMABUF_EXPORTER_RING_SETUP, /* requesting remote domain to set up exporter's ring */
> +	HYPER_DMABUF_IMPORTER_RING_SETUP, /* requesting remote domain to set up importer's ring */
> +};
> +
> +enum hyper_dmabuf_ops {
> +	HYPER_DMABUF_OPS_ATTACH = 0x1000,
> +	HYPER_DMABUF_OPS_DETACH,
> +	HYPER_DMABUF_OPS_MAP,
> +	HYPER_DMABUF_OPS_UNMAP,
> +	HYPER_DMABUF_OPS_RELEASE,
> +	HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS,
> +	HYPER_DMABUF_OPS_END_CPU_ACCESS,
> +	HYPER_DMABUF_OPS_KMAP_ATOMIC,
> +	HYPER_DMABUF_OPS_KUNMAP_ATOMIC,
> +	HYPER_DMABUF_OPS_KMAP,
> +	HYPER_DMABUF_OPS_KUNMAP,
> +	HYPER_DMABUF_OPS_MMAP,
> +	HYPER_DMABUF_OPS_VMAP,
> +	HYPER_DMABUF_OPS_VUNMAP,
> +};
> +
> +enum hyper_dmabuf_req_feedback {
> +	HYPER_DMABUF_REQ_PROCESSED = 0x100,
> +	HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP,
> +	HYPER_DMABUF_REQ_ERROR,
> +	HYPER_DMABUF_REQ_NOT_RESPONDED
> +};
> +
> +/* create a request packet with given command and operands */
> +void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
> +                                        enum hyper_dmabuf_command command, int *operands);
> +
> +/* parse incoming request packet (or response) and take appropriate actions for those */
> +int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req);
> +
> +#endif // __HYPER_DMABUF_MSG_H__
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
> new file mode 100644
> index 0000000..a577167
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
> @@ -0,0 +1,16 @@
> +#ifndef __HYPER_DMABUF_QUERY_H__
> +#define __HYPER_DMABUF_QUERY_H__
> +
> +enum hyper_dmabuf_query {
> +	DMABUF_QUERY_TYPE_LIST = 0x10,
> +	DMABUF_QUERY_EXPORTER,
> +	DMABUF_QUERY_IMPORTER,
> +	DMABUF_QUERY_SIZE
> +};
> +
> +enum hyper_dmabuf_status {
> +	EXPORTED = 0x01,
> +	IMPORTED
> +};
> +
> +#endif /* __HYPER_DMABUF_QUERY_H__ */
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
> new file mode 100644
> index 0000000..c8a2f4d
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
> @@ -0,0 +1,70 @@
> +#ifndef __HYPER_DMABUF_STRUCT_H__
> +#define __HYPER_DMABUF_STRUCT_H__
> +
> +#include <xen/interface/grant_table.h>
> +
> +/* Importer combine source domain id with given hyper_dmabuf_id
> + * to make it unique in case there are multiple exporters */
> +
> +#define HYPER_DMABUF_ID_IMPORTER(sdomain, id) \
> +	((((sdomain) & 0xFF) << 24) | ((id) & 0xFFFFFF))
> +
> +#define HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(id) \
> +	(((id) >> 24) & 0xFF)
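As a worked example of the packing: a buffer with local id 5 exported from
domain 3 becomes HYPER_DMABUF_ID_IMPORTER(3, 5) = (3 << 24) | 5 = 0x03000005
on the importer side, and HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(0x03000005)
recovers 3; uniqueness holds as long as the exporter's id fits in 24 bits.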
> +
> +/* each grant_ref_t is 4 bytes, so one 4KB page holds 1024 refids; with up
> + * to 4 pages of refids a block can reference 4096 pages, i.e. we can share
> + * 4KB*4096 = 16MB of buffer (needs to be increased for large buffer
> + * use-cases such as 4K frame buffer) */
> +#define MAX_ALLOWED_NUM_PAGES_FOR_GREF_NUM_ARRAYS 4
> +
> +struct hyper_dmabuf_shared_pages_info {
> +	grant_ref_t *data_refs;	/* table with shared buffer pages refid */
> +	grant_ref_t *addr_pages; /* pages of 2nd level addressing */
> +	grant_ref_t *top_level_page; /* page of top level addressing, it contains refids of 2nd level pages */
> +	grant_ref_t top_level_ref; /* top level refid */
> +	struct gnttab_unmap_grant_ref* unmap_ops; /* unmap ops for mapped pages */
> +	struct page **data_pages; /* data pages to be unmapped */
> +};
> +
> +/* Exporter builds pages_info before sharing pages */
> +struct hyper_dmabuf_pages_info {
> +	int hyper_dmabuf_id; /* unique id to reference dmabuf in source domain */
> +	int hyper_dmabuf_rdomain; /* currently considering just one remote domain accessing it */
> +	int frst_ofst; /* offset of data in the first page */
> +	int last_len; /* length of data in the last page */
> +	int nents; /* # of pages */
> +	struct page **pages; /* data pages extracted from the sgt, to be shared */
> +};
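To make frst_ofst/last_len concrete: a 10000-byte buffer starting at offset
100 into its first page spans nents = 3 pages, with frst_ofst = 100 (so
4096 - 100 = 3996 bytes of data in page 0), a full 4096 bytes in page 1, and
last_len = 10000 - 3996 - 4096 = 1908 bytes in page 2.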
> +
> +/* Both importer and exporter use this structure to point to sg lists
> + *
> + * Exporter stores references to the sgt in a hash table
> + * Exporter keeps these references for synchronization and tracking purposes
> + *
> + * Importer uses this structure when exporting to other drivers in the same domain */
> +struct hyper_dmabuf_sgt_info {
> +	int hyper_dmabuf_id; /* unique id to reference dmabuf in remote domain */
> +	int hyper_dmabuf_rdomain; /* domain importing this sgt */
> +	struct sg_table *sgt; /* pointer to sgt */
> +	struct dma_buf *dma_buf; /* needed to store this for freeing it later */
> +	struct dma_buf_attachment *attachment; /* needed to store this for freeing this later */
> +	struct hyper_dmabuf_shared_pages_info shared_pages_info;
> +	int private[4]; /* device specific info (e.g. image's meta info?) */
> +};
> +
> +/* Importer stores references (before mapping) to shared pages
> + * Importer stores these references in the table and maps them in
> + * its own memory map once userspace asks for a reference to the buffer */
> +struct hyper_dmabuf_imported_sgt_info {
> +	int hyper_dmabuf_id; /* unique id to reference dmabuf (HYPER_DMABUF_ID_IMPORTER(source domain id, exporter's hyper_dmabuf_id)) */
> +	int frst_ofst;	/* start offset in shared page #1 */
> +	int last_len;	/* length of data in the last shared page */
> +	int nents;	/* number of pages to be shared */
> +	grant_ref_t gref; /* reference number of top level addressing page of shared pages */
> +	struct sg_table *sgt; /* sgt pointer after importing buffer */
> +	struct hyper_dmabuf_shared_pages_info shared_pages_info;
> +	int private[4]; /* device specific info (e.g. image's meta info?) */
> +};
> +
> +#endif /* __HYPER_DMABUF_STRUCT_H__ */
> diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
> new file mode 100644
> index 0000000..22f2ef0
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
> @@ -0,0 +1,328 @@
> +#include <linux/kernel.h>
> +#include <linux/errno.h>
> +#include <linux/module.h>
> +#include <linux/slab.h>
> +#include <linux/workqueue.h>
> +#include <xen/grant_table.h>
> +#include <xen/events.h>
> +#include <xen/xenbus.h>
> +#include <asm/xen/page.h>
> +#include "hyper_dmabuf_xen_comm.h"
> +#include "hyper_dmabuf_xen_comm_list.h"
> +#include "../hyper_dmabuf_imp.h"
> +#include "../hyper_dmabuf_list.h"
> +#include "../hyper_dmabuf_msg.h"
> +
> +static int export_req_id;
> +static int import_req_id;
> +
> +int32_t hyper_dmabuf_get_domid(void)
> +{
> +	struct xenbus_transaction xbt;
> +	int32_t domid;
> +
> +	xenbus_transaction_start(&xbt);
> +
> +	if (!xenbus_scanf(xbt, "domid", "", "%d", &domid)) {
> +		domid = -1;
> +	}
> +	xenbus_transaction_end(xbt, 0);
> +
> +	return domid;
> +}
> +
> +int hyper_dmabuf_next_req_id_export(void)
> +{
> +	export_req_id++;
> +	return export_req_id;
> +}
> +
> +int hyper_dmabuf_next_req_id_import(void)
> +{
> +	import_req_id++;
> +	return import_req_id;
> +}
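These counters are bumped from both ioctl and message-handling paths with no
locking, so ids could in principle be handed out twice. If that ever matters,
an atomic counter is a drop-in replacement; a sketch, not part of the patch:

	static atomic_t export_req_id = ATOMIC_INIT(0);

	int hyper_dmabuf_next_req_id_export(void)
	{
		return atomic_inc_return(&export_req_id);
	}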
> +
> +/* For now cache latest rings as global variables. TODO: keep them in a list */
> +static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *dev_id);
> +static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *dev_id);
> +
> +/* exporter needs to generate info for page sharing */
> +int hyper_dmabuf_exporter_ringbuf_init(int rdomain, grant_ref_t *refid, int *port)
> +{
> +	struct hyper_dmabuf_ring_info_export *ring_info;
> +	struct hyper_dmabuf_sring *sring;
> +	struct evtchn_alloc_unbound alloc_unbound;
> +	struct evtchn_close close;
> +
> +	void *shared_ring;
> +	int ret;
> +
> +	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
> +	if (!ring_info)
> +		return -ENOMEM;
> +
> +	/* from exporter to importer */
> +	shared_ring = (void *)__get_free_pages(GFP_KERNEL, 1);
> +	if (!shared_ring) {
> +		kfree(ring_info);
> +		return -ENOMEM;
> +	}
> +
> +	sring = (struct hyper_dmabuf_sring *) shared_ring;
> +
> +	SHARED_RING_INIT(sring);
> +
> +	FRONT_RING_INIT(&(ring_info->ring_front), sring, PAGE_SIZE);
> +
> +	ring_info->gref_ring = gnttab_grant_foreign_access(rdomain,
> +							virt_to_mfn(shared_ring), 0);
> +	if (ring_info->gref_ring < 0) {
> +		return -EINVAL; /* fail to get gref */
> +	}
> +
> +	alloc_unbound.dom = DOMID_SELF;
> +	alloc_unbound.remote_dom = rdomain;
> +	ret = HYPERVISOR_event_channel_op(EVTCHNOP_alloc_unbound, &alloc_unbound);
> +	if (ret != 0) {
> +		printk("Cannot allocate event channel\n");
> +		return -EINVAL;
> +	}
> +
> +	/* setting up interrupt */
> +	ret = bind_evtchn_to_irqhandler(alloc_unbound.port,
> +					hyper_dmabuf_front_ring_isr, 0,
> +					NULL, (void*) ring_info);
> +
> +	if (ret < 0) {
> +		printk("Failed to setup event channel\n");
> +		close.port = alloc_unbound.port;
> +		HYPERVISOR_event_channel_op(EVTCHNOP_close, &close);
> +		gnttab_end_foreign_access(ring_info->gref_ring, 0, virt_to_mfn(shared_ring));
> +		return -EINVAL;
> +	}
> +
> +	ring_info->rdomain = rdomain;
> +	ring_info->irq = ret;
> +	ring_info->port = alloc_unbound.port;
> +
> +	/* store refid and port numbers for userspace's use */
> +	*refid = ring_info->gref_ring;
> +	*port = ring_info->port;
> +
> +	printk("%s: allocated eventchannel gref %d  port: %d  irq: %d\n", __func__,
> +		ring_info->gref_ring,
> +		ring_info->port,
> +		ring_info->irq);
> +
> +	/* register ring info */
> +	ret = hyper_dmabuf_register_exporter_ring(ring_info);
> +
> +	return ret;
> +}
> +
> +/* importer needs to know about shared page and port numbers for ring buffer and event channel */
> +int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port)
> +{
> +	struct hyper_dmabuf_ring_info_import *ring_info;
> +	struct hyper_dmabuf_sring *sring;
> +
> +	struct page *shared_ring;
> +
> +	struct gnttab_map_grant_ref *ops;
> +	struct gnttab_unmap_grant_ref *unmap_ops;
> +	int ret;
> +
> +	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
> +	if (!ring_info)
> +		return -ENOMEM;
> +
> +	ring_info->sdomain = sdomain;
> +	ring_info->evtchn = port;
> +
> +	ops = kmalloc(sizeof(*ops), GFP_KERNEL);
> +	unmap_ops = kmalloc(sizeof(*unmap_ops), GFP_KERNEL);
> +	if (!ops || !unmap_ops)
> +		return -ENOMEM;
> +
> +	if (gnttab_alloc_pages(1, &shared_ring)) {
> +		return -EINVAL;
> +	}
> +
> +	gnttab_set_map_op(&ops[0], (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
> +			GNTMAP_host_map, gref, sdomain);
> +
> +	ret = gnttab_map_refs(ops, NULL, &shared_ring, 1);
> +	if (ret < 0) {
> +		printk("Cannot map ring\n");
> +		return -EINVAL;
> +	}
> +
> +	if (ops[0].status) {
> +		printk("Ring mapping failed\n");
> +		return -EINVAL;
> +	}
> +
> +	sring = (struct hyper_dmabuf_sring*) pfn_to_kaddr(page_to_pfn(shared_ring));
> +
> +	BACK_RING_INIT(&ring_info->ring_back, sring, PAGE_SIZE);
> +
> +	ret = bind_interdomain_evtchn_to_irqhandler(sdomain, port, hyper_dmabuf_back_ring_isr, 0,
> +						    NULL, (void*)ring_info);
> +	if (ret < 0) {
> +		return -EINVAL;
> +	}
> +
> +	ring_info->irq = ret;
> +
> +	printk("%s: bound to eventchannel port: %d  irq: %d\n", __func__,
> +		port,
> +		ring_info->irq);
> +
> +	ret = hyper_dmabuf_register_importer_ring(ring_info);
> +
> +	return ret;
> +}
> +
> +int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req)
> +{
> +	struct hyper_dmabuf_front_ring *ring;
> +	struct hyper_dmabuf_ring_rq *new_req;
> +	struct hyper_dmabuf_ring_info_export *ring_info;
> +	int notify;
> +
> +	/* find a ring info for the channel */
> +	ring_info = hyper_dmabuf_find_exporter_ring(domain);
> +	if (!ring_info) {
> +		printk("Can't find ring info for the channel\n");
> +		return -EINVAL;
> +	}
> +
> +	ring = &ring_info->ring_front;
> +
> +	if (RING_FULL(ring))
> +		return -EBUSY;
> +
> +	new_req = RING_GET_REQUEST(ring, ring->req_prod_pvt);
> +	if (!new_req) {
> +		printk("NULL REQUEST\n");
> +		return -EIO;
> +	}
> +
> +	memcpy(new_req, req, sizeof(*new_req));
> +
> +	ring->req_prod_pvt++;
> +
> +	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(ring, notify);
> +	if (notify) {
> +		notify_remote_via_irq(ring_info->irq);
> +	}
> +
> +	return 0;
> +}
> +
> +/* called by interrupt (WORKQUEUE) */
> +int hyper_dmabuf_send_response(struct hyper_dmabuf_ring_rp *response, int domain)
> +{
> +	/* as an importer and as an exporter */
> +	return 0;
> +}
> +
> +/* ISR for request from exporter (as an importer) */
> +static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *dev_id)
> +{
> +	RING_IDX rc, rp;
> +	struct hyper_dmabuf_ring_rq request;
> +	struct hyper_dmabuf_ring_rp response;
> +	int notify, more_to_do = 0;
> +	int ret;
> +//	struct hyper_dmabuf_work *work;
> +
> +	struct hyper_dmabuf_ring_info_import *ring_info = (struct hyper_dmabuf_ring_info_import *)dev_id;
> +	struct hyper_dmabuf_back_ring *ring;
> +
> +	ring = &ring_info->ring_back;
> +
> +	do {
> +		rc = ring->req_cons;
> +		rp = ring->sring->req_prod;
> +
> +		while (rc != rp) {
> +			if (RING_REQUEST_CONS_OVERFLOW(ring, rc))
> +				break;
> +
> +			memcpy(&request, RING_GET_REQUEST(ring, rc), sizeof(request));
> +			printk("Got request\n");
> +			ring->req_cons = ++rc;
> +
> +			/* TODO: probably better to queue requests on a linked list and
> +			 * let a task in a workqueue process them, because we do not
> +			 * want to stay in the ISR for long.
> +			 */
> +			ret = hyper_dmabuf_msg_parse(ring_info->sdomain, &request);
> +
> +			if (ret > 0) {
> +				/* build response */
> +				memcpy(&response, &request, sizeof(response));
> +
> +				/* we send back the modified request as a response; we might need only the request */
> +				memcpy(RING_GET_RESPONSE(ring, ring->rsp_prod_pvt), &response, sizeof(response));
> +				ring->rsp_prod_pvt++;
> +
> +				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(ring, notify);
> +
> +				if (notify) {
> +					printk("Notyfing\n");
> +					notify_remote_via_irq(ring_info->irq);
> +				}
> +			}
> +
> +			RING_FINAL_CHECK_FOR_REQUESTS(ring, more_to_do);
> +			printk("Final check for requests %d\n", more_to_do);
> +		}
> +	} while (more_to_do);
> +
> +	return IRQ_HANDLED;
> +}
> +
> +/* ISR for responses from importer */
> +static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *dev_id)
> +{
> +	/* front ring only cares about responses from the back */
> +	struct hyper_dmabuf_ring_rp *response;
> +	RING_IDX i, rp;
> +	int more_to_do, ret;
> +
> +	struct hyper_dmabuf_ring_info_export *ring_info = (struct hyper_dmabuf_ring_info_export *)dev_id;
> +	struct hyper_dmabuf_front_ring *ring;
> +	ring = &ring_info->ring_front;
> +
> +	do {
> +		more_to_do = 0;
> +		rp = ring->sring->rsp_prod;
> +		for (i = ring->rsp_cons; i != rp; i++) {
> +			unsigned long id;
> +
> +			response = RING_GET_RESPONSE(ring, i);
> +			id = response->response_id;
> +
> +			if (response->status == HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP) {
> +				/* parsing response */
> +				ret = hyper_dmabuf_msg_parse(ring_info->rdomain, (struct hyper_dmabuf_ring_rq*)response);
> +
> +				if (ret < 0) {
> +					printk("getting error while parsing response\n");
> +				}
> +			} else if (response->status == HYPER_DMABUF_REQ_ERROR) {
> +				printk("remote domain %d couldn't process request %d\n", ring_info->rdomain, response->command);
> +			}
> +
> +		}
> +
> +		ring->rsp_cons = i;
> +
> +		if (i != ring->req_prod_pvt) {
> +			RING_FINAL_CHECK_FOR_RESPONSES(ring, more_to_do);
> +			printk("more to do %d\n", more_to_do);
> +		} else {
> +			ring->sring->rsp_event = i+1;
> +		}
> +	} while (more_to_do);
> +
> +	return IRQ_HANDLED;
> +}
> diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
> new file mode 100644
> index 0000000..2754917
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
> @@ -0,0 +1,62 @@
> +#ifndef __HYPER_DMABUF_XEN_COMM_H__
> +#define __HYPER_DMABUF_XEN_COMM_H__
> +
> +#include "xen/interface/io/ring.h"
> +
> +#define MAX_NUMBER_OF_OPERANDS 9
> +
> +struct hyper_dmabuf_ring_rq {
> +	unsigned int request_id;
> +	unsigned int status;
> +	unsigned int command;
> +	unsigned int operands[MAX_NUMBER_OF_OPERANDS];
> +};
> +
> +struct hyper_dmabuf_ring_rp {
> +	unsigned int response_id;
> +	unsigned int status;
> +	unsigned int command;
> +	unsigned int operands[MAX_NUMBER_OF_OPERANDS];
> +};
> +
> +DEFINE_RING_TYPES(hyper_dmabuf, struct hyper_dmabuf_ring_rq, struct hyper_dmabuf_ring_rp);
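DEFINE_RING_TYPES() is the standard Xen ring-buffer macro from
xen/interface/io/ring.h; with the request/response structs above it generates
the three types used throughout this file, roughly:

	struct hyper_dmabuf_sring;      /* shared page: req/rsp slots plus producer/consumer indices */
	struct hyper_dmabuf_front_ring; /* exporter's private view of the shared ring */
	struct hyper_dmabuf_back_ring;  /* importer's private view of the shared ring */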
> +
> +struct hyper_dmabuf_ring_info_export {
> +	struct hyper_dmabuf_front_ring ring_front;
> +	int rdomain;
> +	int gref_ring;
> +	int irq;
> +	int port;
> +};
> +
> +struct hyper_dmabuf_ring_info_import {
> +	int sdomain;
> +	int irq;
> +	int evtchn;
> +	struct hyper_dmabuf_back_ring ring_back;
> +};
> +
> +//struct hyper_dmabuf_work {
> +//	struct hyper_dmabuf_ring_rq request;
> +//	struct work_struct msg_parse;
> +//};
> +
> +int32_t hyper_dmabuf_get_domid(void);
> +
> +int hyper_dmabuf_next_req_id_export(void);
> +
> +int hyper_dmabuf_next_req_id_import(void);
> +
> +/* exporter needs to generate info for page sharing */
> +int hyper_dmabuf_exporter_ringbuf_init(int rdomain, grant_ref_t *gref, int *port);
> +
> +/* importer needs to know about shared page and port numbers for ring buffer and event channel */
> +int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port);
> +
> +/* send request to the remote domain */
> +int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req);
> +
> +/* called by interrupt (WORKQUEUE) */
> +int hyper_dmabuf_send_response(struct hyper_dmabuf_ring_rp *response, int domain);
> +
> +#endif // __HYPER_DMABUF_XEN_COMM_H__
> diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
> new file mode 100644
> index 0000000..15c9d29
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
> @@ -0,0 +1,106 @@
> +#include <linux/kernel.h>
> +#include <linux/errno.h>
> +#include <linux/module.h>
> +#include <linux/slab.h>
> +#include <linux/cdev.h>
> +#include <asm/uaccess.h>
> +#include <linux/hashtable.h>
> +#include <xen/grant_table.h>
> +#include "hyper_dmabuf_xen_comm.h"
> +#include "hyper_dmabuf_xen_comm_list.h"
> +
> +DECLARE_HASHTABLE(hyper_dmabuf_hash_importer_ring, MAX_ENTRY_IMPORT_RING);
> +DECLARE_HASHTABLE(hyper_dmabuf_hash_exporter_ring, MAX_ENTRY_EXPORT_RING);
> +
> +int hyper_dmabuf_ring_table_init(void)
> +{
> +	hash_init(hyper_dmabuf_hash_importer_ring);
> +	hash_init(hyper_dmabuf_hash_exporter_ring);
> +	return 0;
> +}
> +
> +int hyper_dmabuf_ring_table_destroy(void)
> +{
> +	/* TODO: cleanup tables*/
> +	return 0;
> +}
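The cleanup TODO above maps naturally onto hash_for_each_safe(), which
tolerates deletion while iterating; a minimal sketch that frees only the
list entries (tearing down the rings themselves is a separate job):

	int hyper_dmabuf_ring_table_destroy(void)
	{
		struct hyper_dmabuf_exporter_ring_info *info_entry;
		struct hlist_node *tmp;
		int bkt;

		hash_for_each_safe(hyper_dmabuf_hash_exporter_ring, bkt, tmp,
				   info_entry, node) {
			hash_del(&info_entry->node);
			kfree(info_entry);
		}
		/* ... and the same loop for hyper_dmabuf_hash_importer_ring ... */
		return 0;
	}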
> +
> +int hyper_dmabuf_register_exporter_ring(struct hyper_dmabuf_ring_info_export *ring_info)
> +{
> +	struct hyper_dmabuf_exporter_ring_info *info_entry;
> +
> +	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
> +	if (!info_entry)
> +		return -ENOMEM;
> +
> +	info_entry->info = ring_info;
> +
> +	hash_add(hyper_dmabuf_hash_exporter_ring, &info_entry->node,
> +		info_entry->info->rdomain);
> +
> +	return 0;
> +}
> +
> +int hyper_dmabuf_register_importer_ring(struct hyper_dmabuf_ring_info_import *ring_info)
> +{
> +	struct hyper_dmabuf_importer_ring_info *info_entry;
> +
> +	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
> +	if (!info_entry)
> +		return -ENOMEM;
> +
> +	info_entry->info = ring_info;
> +
> +	hash_add(hyper_dmabuf_hash_importer_ring, &info_entry->node,
> +		info_entry->info->sdomain);
> +
> +	return 0;
> +}
> +
> +struct hyper_dmabuf_ring_info_export *hyper_dmabuf_find_exporter_ring(int domid)
> +{
> +	struct hyper_dmabuf_exporter_ring_info *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_exporter_ring, bkt, info_entry, node)
> +		if (info_entry->info->rdomain == domid)
> +			return info_entry->info;
> +
> +	return NULL;
> +}
> +
> +struct hyper_dmabuf_ring_info_import *hyper_dmabuf_find_importer_ring(int domid)
> +{
> +	struct hyper_dmabuf_importer_ring_info *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_importer_ring, bkt, info_entry, node)
> +		if (info_entry->info->sdomain == domid)
> +			return info_entry->info;
> +
> +	return NULL;
> +}
> +
> +int hyper_dmabuf_remove_exporter_ring(int domid)
> +{
> +	struct hyper_dmabuf_exporter_ring_info *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_exporter_ring, bkt, info_entry, node)
> +		if (info_entry->info->rdomain == domid) {
> +			hash_del(&info_entry->node);
> +			kfree(info_entry);
> +			return 0;
> +		}
> +
> +	return -1;
> +}
> +
> +int hyper_dmabuf_remove_importer_ring(int domid)
> +{
> +	struct hyper_dmabuf_importer_ring_info *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_importer_ring, bkt, info_entry, node)
> +		if (info_entry->info->sdomain == domid) {
> +			hash_del(&info_entry->node);
> +			kfree(info_entry);
> +			return 0;
> +		}
> +
> +	return -1;
> +}
> diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
> new file mode 100644
> index 0000000..5929f99
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
> @@ -0,0 +1,35 @@
> +#ifndef __HYPER_DMABUF_XEN_COMM_LIST_H__
> +#define __HYPER_DMABUF_XEN_COMM_LIST_H__
> +
> +/* number of bits to be used for exported dmabufs hash table */
> +#define MAX_ENTRY_EXPORT_RING 7
> +/* number of bits to be used for imported dmabufs hash table */
> +#define MAX_ENTRY_IMPORT_RING 7
> +
> +struct hyper_dmabuf_exporter_ring_info {
> +	struct hyper_dmabuf_ring_info_export *info;
> +	struct hlist_node node;
> +};
> +
> +struct hyper_dmabuf_importer_ring_info {
> +	struct hyper_dmabuf_ring_info_import *info;
> +	struct hlist_node node;
> +};
> +
> +int hyper_dmabuf_ring_table_init(void);
> +
> +int hyper_dmabuf_ring_table_destroy(void);
> +
> +int hyper_dmabuf_register_exporter_ring(struct hyper_dmabuf_ring_info_export *ring_info);
> +
> +int hyper_dmabuf_register_importer_ring(struct hyper_dmabuf_ring_info_import *ring_info);
> +
> +struct hyper_dmabuf_ring_info_export *hyper_dmabuf_find_exporter_ring(int domid);
> +
> +struct hyper_dmabuf_ring_info_import *hyper_dmabuf_find_importer_ring(int domid);
> +
> +int hyper_dmabuf_remove_exporter_ring(int domid);
> +
> +int hyper_dmabuf_remove_importer_ring(int domid);
> +
> +#endif // __HYPER_DMABUF_XEN_COMM_LIST_H__
> -- 
> 2.7.4
> 
> 

^ permalink raw reply	[flat|nested] 160+ messages in thread

* Re: [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv
@ 2018-02-15  1:34   ` Dongwon Kim
  0 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2018-02-15  1:34 UTC (permalink / raw)
  To: linux-kernel; +Cc: xen-devel, Potrola, MateuszX, dri-devel

Abandoning this series as a new version was submitted for review

"[RFC PATCH v2 0/9] hyper_dmabuf: Hyper_DMABUF driver"

On Tue, Dec 19, 2017 at 11:29:17AM -0800, Kim, Dongwon wrote:
> Upload of intial version of hyper_DMABUF driver enabling
> DMA_BUF exchange between two different VMs in virtualized
> platform based on hypervisor such as KVM or XEN.
> 
> Hyper_DMABUF drv's primary role is to import a DMA_BUF
> from originator then re-export it to another Linux VM
> so that it can be mapped and accessed by it.
> 
> The functionality of this driver highly depends on
> Hypervisor's native page sharing mechanism and inter-VM
> communication support.
> 
> This driver has two layers, one is main hyper_DMABUF
> framework for scatter-gather list management that handles
> actual import and export of DMA_BUF. Lower layer is about
> actual memory sharing and communication between two VMs,
> which is hypervisor-specific interface.
> 
> This driver is initially designed to enable DMA_BUF
> sharing across VMs in Xen environment, so currently working
> with Xen only.
> 
> This also adds Kernel configuration for hyper_DMABUF drv
> under Device Drivers->Xen driver support->hyper_dmabuf
> options.
> 
> To give some brief information about each source file,
> 
> hyper_dmabuf/hyper_dmabuf_conf.h
> : configuration info
> 
> hyper_dmabuf/hyper_dmabuf_drv.c
> : driver interface and initialization
> 
> hyper_dmabuf/hyper_dmabuf_imp.c
> : scatter-gather list generation and management. DMA_BUF
> ops for DMA_BUF reconstructed from hyper_DMABUF
> 
> hyper_dmabuf/hyper_dmabuf_ioctl.c
> : IOCTLs calls for export/import and comm channel creation
> unexport.
> 
> hyper_dmabuf/hyper_dmabuf_list.c
> : Database (linked-list) for exported and imported
> hyper_DMABUF
> 
> hyper_dmabuf/hyper_dmabuf_msg.c
> : creation and management of messages between exporter and
> importer
> 
> hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
> : comm ch management and ISRs for incoming messages.
> 
> hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
> : Database (linked-list) for keeping information about
> existing comm channels among VMs
> 
> Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
> Signed-off-by: Mateusz Polrola <mateuszx.potrola@intel.com>
> ---
>  drivers/xen/Kconfig                                |   2 +
>  drivers/xen/Makefile                               |   1 +
>  drivers/xen/hyper_dmabuf/Kconfig                   |  14 +
>  drivers/xen/hyper_dmabuf/Makefile                  |  34 +
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h       |   2 +
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        |  54 ++
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        | 101 +++
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c        | 852 +++++++++++++++++++++
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h        |  31 +
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 462 +++++++++++
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c       | 119 +++
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h       |  40 +
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        | 212 +++++
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h        |  45 ++
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h      |  16 +
>  drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h     |  70 ++
>  .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   | 328 ++++++++
>  .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h   |  62 ++
>  .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c  | 106 +++
>  .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h  |  35 +
>  20 files changed, 2586 insertions(+)
>  create mode 100644 drivers/xen/hyper_dmabuf/Kconfig
>  create mode 100644 drivers/xen/hyper_dmabuf/Makefile
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
>  create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
>  create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
>  create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
>  create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
>  create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
> 
> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
> index d8dd546..b59b0e3 100644
> --- a/drivers/xen/Kconfig
> +++ b/drivers/xen/Kconfig
> @@ -321,4 +321,6 @@ config XEN_SYMS
>  config XEN_HAVE_VPMU
>         bool
>  
> +source "drivers/xen/hyper_dmabuf/Kconfig"
> +
>  endmenu
> diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
> index 451e833..a6e253a 100644
> --- a/drivers/xen/Makefile
> +++ b/drivers/xen/Makefile
> @@ -4,6 +4,7 @@ obj-$(CONFIG_X86)			+= fallback.o
>  obj-y	+= grant-table.o features.o balloon.o manage.o preempt.o time.o
>  obj-y	+= events/
>  obj-y	+= xenbus/
> +obj-y	+= hyper_dmabuf/
>  
>  nostackp := $(call cc-option, -fno-stack-protector)
>  CFLAGS_features.o			:= $(nostackp)
> diff --git a/drivers/xen/hyper_dmabuf/Kconfig b/drivers/xen/hyper_dmabuf/Kconfig
> new file mode 100644
> index 0000000..75e1f96
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/Kconfig
> @@ -0,0 +1,14 @@
> +menu "hyper_dmabuf options"
> +
> +config HYPER_DMABUF
> +	tristate "Enables hyper dmabuf driver"
> +	default y
> +
> +config HYPER_DMABUF_XEN
> +	bool "Configure hyper_dmabuf for XEN hypervisor"
> +	default y
> +	depends on HYPER_DMABUF
> +	help
> +	  Configuring hyper_dmabuf driver for XEN hypervisor
> +
> +endmenu
> diff --git a/drivers/xen/hyper_dmabuf/Makefile b/drivers/xen/hyper_dmabuf/Makefile
> new file mode 100644
> index 0000000..0be7445
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/Makefile
> @@ -0,0 +1,34 @@
> +TARGET_MODULE:=hyper_dmabuf
> +
> +# If we are running via the kernel build system
> +ifneq ($(KERNELRELEASE),)
> +	$(TARGET_MODULE)-objs := hyper_dmabuf_drv.o \
> +                                 hyper_dmabuf_ioctl.o \
> +                                 hyper_dmabuf_list.o \
> +				 hyper_dmabuf_imp.o \
> +				 hyper_dmabuf_msg.o \
> +				 xen/hyper_dmabuf_xen_comm.o \
> +				 xen/hyper_dmabuf_xen_comm_list.o
> +
> +obj-$(CONFIG_HYPER_DMABUF) := $(TARGET_MODULE).o
> +
> +# If we are running without kernel build system
> +else
> +BUILDSYSTEM_DIR?=../../../
> +PWD:=$(shell pwd)
> +
> +all:
> +# run kernel build system to make module
> +	$(MAKE) -C $(BUILDSYSTEM_DIR) M=$(PWD) modules
> +
> +clean:
> +# run kernel build system to cleanup in current directory
> +	$(MAKE) -C $(BUILDSYSTEM_DIR) M=$(PWD) clean
> +
> +load:
> +	insmod ./$(TARGET_MODULE).ko
> +
> +unload:
> +	rmmod ./$(TARGET_MODULE).ko
> +
> +endif
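With the out-of-tree branch above, building and loading against the running
kernel should come down to the following, assuming kernel headers are
installed (the paths are an assumption, not part of the patch):

	make BUILDSYSTEM_DIR=/lib/modules/$(uname -r)/build
	sudo make load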
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
> new file mode 100644
> index 0000000..3d9b2d6
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
> @@ -0,0 +1,2 @@
> +#define CURRENT_TARGET XEN
> +#define INTER_DOMAIN_DMABUF_SYNCHRONIZATION
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
> new file mode 100644
> index 0000000..0698327
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
> @@ -0,0 +1,54 @@
> +#include <linux/init.h>       /* module_init, module_exit */
> +#include <linux/module.h> /* version info, MODULE_LICENSE, MODULE_AUTHOR, printk() */
> +#include "hyper_dmabuf_conf.h"
> +#include "hyper_dmabuf_list.h"
> +#include "xen/hyper_dmabuf_xen_comm_list.h"
> +
> +MODULE_LICENSE("Dual BSD/GPL");
> +MODULE_AUTHOR("IOTG-PED, INTEL");
> +
> +int register_device(void);
> +int unregister_device(void);
> +
> +/*===============================================================================================*/
> +static int hyper_dmabuf_drv_init(void)
> +{
> +	int ret = 0;
> +
> +	printk( KERN_NOTICE "hyper_dmabuf_starting: Initialization started" );
> +
> +	ret = register_device();
> +	if (ret < 0) {
> +		return -EINVAL;
> +	}
> +
> +	printk( KERN_NOTICE "initializing database for imported/exported dmabufs\n");
> +
> +	ret = hyper_dmabuf_table_init();
> +	if (ret < 0) {
> +		return -EINVAL;
> +	}
> +
> +	ret = hyper_dmabuf_ring_table_init();
> +	if (ret < 0) {
> +		return -EINVAL;
> +	}
> +
> +	/* interrupt for comm should be registered here: */
> +	return ret;
> +}
> +
> +/*-----------------------------------------------------------------------------------------------*/
> +static void hyper_dmabuf_drv_exit(void)
> +{
> +	/* hash tables for export/import entries and ring_infos */
> +	hyper_dmabuf_table_destroy();
> +	hyper_dmabuf_ring_table_destroy();
> +
> +	printk( KERN_NOTICE "dma_buf-src_sink model: Exiting" );
> +	unregister_device();
> +}
> +/*===============================================================================================*/
> +
> +module_init(hyper_dmabuf_drv_init);
> +module_exit(hyper_dmabuf_drv_exit);
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
> new file mode 100644
> index 0000000..2dad9a6
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
> @@ -0,0 +1,101 @@
> +#ifndef __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
> +#define __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
> +
> +typedef int (*hyper_dmabuf_ioctl_t)(void *data);
> +
> +struct hyper_dmabuf_ioctl_desc {
> +	unsigned int cmd;
> +	int flags;
> +	hyper_dmabuf_ioctl_t func;
> +	const char *name;
> +};
> +
> +#define HYPER_DMABUF_IOCTL_DEF(ioctl, _func, _flags) 	\
> +	[_IOC_NR(ioctl)] = {				\
> +			.cmd = ioctl,			\
> +			.func = _func,			\
> +			.flags = _flags,		\
> +			.name = #ioctl			\
> +	}
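Presumably hyper_dmabuf_ioctl.c instantiates the descriptor table along these
lines (a sketch; the handler names are assumptions derived from the ioctl
names below), so that the dispatcher can index it with _IOC_NR(cmd) and call
->func():

	static const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
		HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORTER_RING_SETUP,
				       hyper_dmabuf_exporter_ring_setup, 0),
		HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_REMOTE,
				       hyper_dmabuf_export_remote, 0),
	};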
> +
> +#define IOCTL_HYPER_DMABUF_EXPORTER_RING_SETUP \
> +_IOC(_IOC_NONE, 'G', 0, sizeof(struct ioctl_hyper_dmabuf_exporter_ring_setup))
> +struct ioctl_hyper_dmabuf_exporter_ring_setup {
> +	/* IN parameters */
> +	/* Remote domain id */
> +	uint32_t remote_domain;
> +	grant_ref_t ring_refid; /* assigned by driver, copied to userspace after initialization */
> +	uint32_t port; /* assigned by driver, copied to userspace after initialization */
> +};
> +
> +#define IOCTL_HYPER_DMABUF_IMPORTER_RING_SETUP \
> +_IOC(_IOC_NONE, 'G', 1, sizeof(struct ioctl_hyper_dmabuf_importer_ring_setup))
> +struct ioctl_hyper_dmabuf_importer_ring_setup {
> +	/* IN parameters */
> +	/* Source domain id */
> +	uint32_t source_domain;
> +	/* Ring shared page refid */
> +	grant_ref_t ring_refid;
> +	/* Port number */
> +	uint32_t port;
> +};
> +
> +#define IOCTL_HYPER_DMABUF_EXPORT_REMOTE \
> +_IOC(_IOC_NONE, 'G', 2, sizeof(struct ioctl_hyper_dmabuf_export_remote))
> +struct ioctl_hyper_dmabuf_export_remote {
> +	/* IN parameters */
> +	/* DMA buf fd to be exported */
> +	uint32_t dmabuf_fd;
> +	/* Domain id to which buffer should be exported */
> +	uint32_t remote_domain;
> +	/* exported dma buf id */
> +	uint32_t hyper_dmabuf_id;
> +	uint32_t private[4];
> +};
> +
> +#define IOCTL_HYPER_DMABUF_EXPORT_FD \
> +_IOC(_IOC_NONE, 'G', 3, sizeof(struct ioctl_hyper_dmabuf_export_fd))
> +struct ioctl_hyper_dmabuf_export_fd {
> +	/* IN parameters */
> +	/* hyper dmabuf id to be imported */
> +	uint32_t hyper_dmabuf_id;
> +	/* flags */
> +	uint32_t flags;
> +	/* OUT parameters */
> +	/* exported dma buf fd */
> +	uint32_t fd;
> +};
> +
> +#define IOCTL_HYPER_DMABUF_DESTROY \
> +_IOC(_IOC_NONE, 'G', 4, sizeof(struct ioctl_hyper_dmabuf_destroy))
> +struct ioctl_hyper_dmabuf_destroy {
> +	/* IN parameters */
> +	/* hyper dmabuf id to be destroyed */
> +	uint32_t hyper_dmabuf_id;
> +	/* OUT parameters */
> +	/* Status of request */
> +	uint32_t status;
> +};
> +
> +#define IOCTL_HYPER_DMABUF_QUERY \
> +_IOC(_IOC_NONE, 'G', 5, sizeof(struct ioctl_hyper_dmabuf_query))
> +struct ioctl_hyper_dmabuf_query {
> +	/* in parameters */
> +	/* hyper dmabuf id to be queried */
> +	uint32_t hyper_dmabuf_id;
> +	/* item to be queried */
> +	uint32_t item;
> +	/* OUT parameters */
> +	/* Value of queried item */
> +	uint32_t info;
> +};
> +
> +#define IOCTL_HYPER_DMABUF_REMOTE_EXPORTER_RING_SETUP \
> +_IOC(_IOC_NONE, 'G', 6, sizeof(struct ioctl_hyper_dmabuf_remote_exporter_ring_setup))
> +struct ioctl_hyper_dmabuf_remote_exporter_ring_setup {
> +	/* in parameters */
> +	uint32_t rdomain; /* id of remote domain where exporter's ring need to be setup */
> +	uint32_t info;
> +};
> +
> +#endif //__LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
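From userspace the intended flow is: set up a ring to the peer domain, then
export a dmabuf fd to it. A hypothetical snippet (the device node name and
domain id are assumptions, and error checks are elided):

	int fd = open("/dev/hyper_dmabuf", O_RDWR);
	struct ioctl_hyper_dmabuf_exporter_ring_setup ring = { .remote_domain = 1 };
	struct ioctl_hyper_dmabuf_export_remote exp = {
		.dmabuf_fd = dmabuf_fd,	/* fd obtained from the exporting driver */
		.remote_domain = 1,
	};

	ioctl(fd, IOCTL_HYPER_DMABUF_EXPORTER_RING_SETUP, &ring);
	ioctl(fd, IOCTL_HYPER_DMABUF_EXPORT_REMOTE, &exp);
	/* exp.hyper_dmabuf_id now identifies the buffer for the importing domain */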
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
> new file mode 100644
> index 0000000..faa5c1b
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
> @@ -0,0 +1,852 @@
> +#include <linux/kernel.h>
> +#include <linux/errno.h>
> +#include <linux/slab.h>
> +#include <linux/module.h>
> +#include <linux/dma-buf.h>
> +#include <xen/grant_table.h>
> +#include <asm/xen/page.h>
> +#include "hyper_dmabuf_struct.h"
> +#include "hyper_dmabuf_imp.h"
> +#include "xen/hyper_dmabuf_xen_comm.h"
> +#include "hyper_dmabuf_msg.h"
> +
> +#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
> +
> +/* return total number of pages referenced by a sgt
> + * for pre-calculation of the number of pages behind a given sgt
> + */
> +static int hyper_dmabuf_get_num_pgs(struct sg_table *sgt)
> +{
> +	struct scatterlist *sgl;
> +	int length, i;
> +	/* at least one page */
> +	int num_pages = 1;
> +
> +	sgl = sgt->sgl;
> +
> +	length = sgl->length - PAGE_SIZE + sgl->offset;
> +	num_pages += ((length + PAGE_SIZE - 1)/PAGE_SIZE); /* round-up */
> +
> +	for (i = 1; i < sgt->nents; i++) {
> +		sgl = sg_next(sgl);
> +		num_pages += ((sgl->length + PAGE_SIZE - 1) / PAGE_SIZE); /* round-up */
> +	}
> +
> +	return num_pages;
> +}
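Walking the arithmetic above through a two-entry sgt: entry 0 with offset 100
and length 5000 gives length = 5000 - 4096 + 100 = 1004, i.e. one page beyond
the first (num_pages = 2); entry 1 with length 6000 adds
(6000 + 4095)/4096 = 2 more, so the sgt is backed by 4 pages in total.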
> +
> +/* extract pages directly from struct sg_table */
> +struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
> +{
> +	struct hyper_dmabuf_pages_info *pinfo;
> +	int i, j;
> +	int length;
> +	struct scatterlist *sgl;
> +
> +	pinfo = kmalloc(sizeof(*pinfo), GFP_KERNEL);
> +	if (pinfo == NULL)
> +		return NULL;
> +
> +	pinfo->pages = kmalloc_array(hyper_dmabuf_get_num_pgs(sgt), sizeof(struct page *), GFP_KERNEL);
> +	if (pinfo->pages == NULL) {
> +		kfree(pinfo);
> +		return NULL;
> +	}
> +
> +	sgl = sgt->sgl;
> +
> +	pinfo->nents = 1;
> +	pinfo->frst_ofst = sgl->offset;
> +	pinfo->pages[0] = sg_page(sgl);
> +	length = sgl->length - PAGE_SIZE + sgl->offset;
> +	i = 1;
> +
> +	while (length > 0) {
> +		pinfo->pages[i] = nth_page(sg_page(sgl), i);
> +		length -= PAGE_SIZE;
> +		pinfo->nents++;
> +		i++;
> +	}
> +
> +	for (j = 1; j < sgt->nents; j++) {
> +		int k = 1;
> +
> +		sgl = sg_next(sgl);
> +		pinfo->pages[i++] = sg_page(sgl);
> +		length = sgl->length - PAGE_SIZE;
> +		pinfo->nents++;
> +
> +		while (length > 0) {
> +			/* nth_page offset must be relative to this sg entry's first page */
> +			pinfo->pages[i] = nth_page(sg_page(sgl), k);
> +			length -= PAGE_SIZE;
> +			pinfo->nents++;
> +			i++;
> +			k++;
> +		}
> +	}
> +
> +	/*
> +	 * length at that point will be 0 or negative,
> +	 * so to calculate last page size just add it to PAGE_SIZE
> +	 */
> +	pinfo->last_len = PAGE_SIZE + length;
> +
> +	return pinfo;
> +}
> +
> +/* create sg_table with given pages and other parameters */
> +struct sg_table *hyper_dmabuf_create_sgt(struct page **pages,
> +				int frst_ofst, int last_len, int nents)
> +{
> +	struct sg_table *sgt;
> +	struct scatterlist *sgl;
> +	int i, ret;
> +
> +	sgt = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
> +	if (sgt == NULL) {
> +		return NULL;
> +	}
> +
> +	ret = sg_alloc_table(sgt, nents, GFP_KERNEL);
> +	if (ret) {
> +		kfree(sgt);
> +		return NULL;
> +	}
> +
> +	sgl = sgt->sgl;
> +
> +	sg_set_page(sgl, pages[0], PAGE_SIZE-frst_ofst, frst_ofst);
> +
> +	for (i = 1; i < nents-1; i++) {
> +		sgl = sg_next(sgl);
> +		sg_set_page(sgl, pages[i], PAGE_SIZE, 0);
> +	}
> +
> +	if (nents > 1) /* more than one page */ {
> +		sgl = sg_next(sgl);
> +		sg_set_page(sgl, pages[i], last_len, 0);
> +	}
> +
> +	return sgt;
> +}
> +
> +/*
> + * Creates 2 level page directory structure for referencing shared pages.
> + * Top level page is a single page that contains up to 1024 refids that
> + * point to 2nd level pages.
> + * Each 2nd level page contains up to 1024 refids that point to shared
> + * data pages.
> + * There will always be one top level page and number of 2nd level pages
> + * depends on number of shared data pages.
> + *
> + *      Top level page                2nd level pages            Data pages
> + * +-------------------------+   ┌>+--------------------+ ┌--->+------------+
> + * |2nd level page 0 refid   |---┘ |Data page 0 refid   |-┘    |Data page 0 |
> + * |2nd level page 1 refid   |---┐ |Data page 1 refid   |-┐    +------------+
> + * |           ...           |   | |     ....           | |
> + * |2nd level page 1023 refid|-┐ | |Data page 1023 refid| └--->+------------+
> + * +-------------------------+ | | +--------------------+      |Data page 1 |
> + *                             | |                             +------------+
> + *                             | └>+--------------------+
> + *                             |   |Data page 1024 refid|
> + *                             |   |Data page 1025 refid|
> + *                             |   |       ...          |
> + *                             |   |Data page 2047 refid|
> + *                             |   +--------------------+
> + *                             |
> + *                             |        .....
> + *                             └-->+-----------------------+
> + *                                 |Data page 1047552 refid|
> + *                                 |Data page 1047553 refid|
> + *                                 |       ...             |
> + *                                 |Data page 1048575 refid|-->+------------------+
> + *                                 +-----------------------+   |Data page 1048575 |
> + *                                                             +------------------+
> + *
> + * Using such 2 level structure it is possible to reference up to 4GB of
> + * shared data using single refid pointing to top level page.
> + *
> + * Returns refid of top level page.
> + */
> +grant_ref_t hyper_dmabuf_create_addressing_tables(grant_ref_t *data_refs, int nents, int rdomain,
> +						  struct hyper_dmabuf_shared_pages_info *shared_pages_info)
> +{
> +	/*
> +	 * Calculate number of pages needed for 2nd level addressing:
> +	 */
> +	int n_2nd_level_pages = (nents/REFS_PER_PAGE + ((nents % REFS_PER_PAGE) ? 1 : 0)); /* round up */
> +	int i;
> +	unsigned long gref_page_start;
> +	grant_ref_t *tmp_page;
> +	grant_ref_t top_level_ref;
> +	grant_ref_t *addr_refs;
> +	addr_refs = kcalloc(n_2nd_level_pages, sizeof(grant_ref_t), GFP_KERNEL);
> +
> +	/* 2nd argument of __get_free_pages() is an order, not a page count */
> +	gref_page_start = __get_free_pages(GFP_KERNEL | __GFP_ZERO,
> +					   get_order(n_2nd_level_pages * PAGE_SIZE));
> +	tmp_page = (grant_ref_t *)gref_page_start;
> +
> +	/* Store 2nd level pages to be freed later */
> +	shared_pages_info->addr_pages = tmp_page;
> +
> +	/* Share 2nd level addressing pages in readonly mode */
> +	for (i = 0; i < n_2nd_level_pages; i++) {
> +		addr_refs[i] = gnttab_grant_foreign_access(rdomain, virt_to_mfn((unsigned long)tmp_page + i*PAGE_SIZE), 1);
> +	}
> +
> +	/*
> +	 * fill second level pages with data refs
> +	 */
> +	for (i = 0; i < nents; i++) {
> +		tmp_page[i] = data_refs[i];
> +	}
> +
> +
> +	/* allocate top level page (a single page, so order 0) */
> +	gref_page_start = __get_free_pages(GFP_KERNEL | __GFP_ZERO, 0);
> +	tmp_page = (grant_ref_t *)gref_page_start;
> +
> +	/* Store top level page to be freed later */
> +	shared_pages_info->top_level_page = tmp_page;
> +
> +	/*
> +	 * fill top level page with reference numbers of second level pages refs.
> +	 */
> +	for (i = 0; i < n_2nd_level_pages; i++) {
> +		tmp_page[i] = addr_refs[i];
> +	}
> +
> +	/* Share top level addressing page in readonly mode */
> +	top_level_ref = gnttab_grant_foreign_access(rdomain, virt_to_mfn((unsigned long)tmp_page), 1);
> +
> +	kfree(addr_refs);
> +
> +	return top_level_ref;
> +}
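The capacity math behind the diagram: REFS_PER_PAGE = 4096/4 = 1024, so the
top level page can reference 1024 second level pages, each of which
references 1024 data pages; 1024 * 1024 * 4KB = 4GB is reachable from one top
level refid, and a buffer of nents pages needs ceil(nents/1024) second level
pages, exactly the n_2nd_level_pages computed above.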
> +
> +/*
> + * Maps the provided top level ref id and returns an array of pages containing data refs.
> + */
> +struct page **hyper_dmabuf_get_data_refs(grant_ref_t top_level_ref, int domid, int nents,
> +					 struct hyper_dmabuf_shared_pages_info *shared_pages_info)
> +{
> +	struct page *top_level_page;
> +	struct page **level2_pages;
> +
> +	grant_ref_t *top_level_refs;
> +
> +	struct gnttab_map_grant_ref top_level_map_ops;
> +	struct gnttab_unmap_grant_ref top_level_unmap_ops;
> +
> +	struct gnttab_map_grant_ref *map_ops;
> +	struct gnttab_unmap_grant_ref *unmap_ops;
> +
> +	unsigned long addr;
> +	int n_level2_refs = 0;
> +	int i;
> +
> +	n_level2_refs = (nents / REFS_PER_PAGE) + ((nents % REFS_PER_PAGE) ? 1 : 0);
> +
> +	level2_pages = kcalloc(n_level2_refs, sizeof(struct page *), GFP_KERNEL);
> +
> +	map_ops = kcalloc(REFS_PER_PAGE, sizeof(map_ops[0]), GFP_KERNEL);
> +	unmap_ops = kcalloc(REFS_PER_PAGE, sizeof(unmap_ops[0]), GFP_KERNEL);
> +
> +	/* Map top level addressing page */
> +	if (gnttab_alloc_pages(1, &top_level_page)) {
> +		printk("Cannot allocate pages\n");
> +		return NULL;
> +	}
> +
> +	addr = (unsigned long)pfn_to_kaddr(page_to_pfn(top_level_page));
> +	gnttab_set_map_op(&top_level_map_ops, addr, GNTMAP_host_map | GNTMAP_readonly, top_level_ref, domid);
> +	gnttab_set_unmap_op(&top_level_unmap_ops, addr, GNTMAP_host_map | GNTMAP_readonly, -1);
> +
> +	if (gnttab_map_refs(&top_level_map_ops, NULL, &top_level_page, 1)) {
> +		printk("\nxen: dom0: HYPERVISOR map grant ref failed");
> +		return NULL;
> +	}
> +
> +	if (top_level_map_ops.status) {
> +		printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d",
> +				top_level_map_ops.status);
> +		return NULL;
> +	} else {
> +		top_level_unmap_ops.handle = top_level_map_ops.handle;
> +	}
> +
> +	/* Parse contents of top level addressing page to find how many second level pages are there */
> +	top_level_refs = pfn_to_kaddr(page_to_pfn(top_level_page));
> +
> +	/* Map all second level pages */
> +	if (gnttab_alloc_pages(n_level2_refs, level2_pages)) {
> +		printk("Cannot allocate pages\n");
> +		return NULL;
> +	}
> +
> +	for (i = 0; i < n_level2_refs; i++) {
> +		addr = (unsigned long)pfn_to_kaddr(page_to_pfn(level2_pages[i]));
> +		gnttab_set_map_op(&map_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, top_level_refs[i], domid);
> +		gnttab_set_unmap_op(&unmap_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, -1);
> +	}
> +
> +	if (gnttab_map_refs(map_ops, NULL, level2_pages, n_level2_refs)) {
> +		printk("\nxen: dom0: HYPERVISOR map grant ref failed");
> +		return NULL;
> +	}
> +
> +	/* Check that pages were mapped correctly, and at the same time calculate the total number of data refids */
> +	for (i = 0; i < n_level2_refs; i++) {
> +		if (map_ops[i].status) {
> +			printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d",
> +					map_ops[i].status);
> +			return NULL;
> +		} else {
> +			unmap_ops[i].handle = map_ops[i].handle;
> +		}
> +	}
> +
> +	/* Unmap top level page, as it won't be needed any longer */
> +	if (gnttab_unmap_refs(&top_level_unmap_ops, NULL, &top_level_page, 1)) {
> +		printk("\xen: cannot unmap top level page\n");
> +		return NULL;
> +	}
> +
> +	gnttab_free_pages(1, &top_level_page);
> +	kfree(map_ops);
> +	shared_pages_info->unmap_ops = unmap_ops;
> +
> +	return level2_pages;
> +}
> +
> +
> +/* This collects all reference numbers for 2nd level shared pages and creates a table
> + * of those in the 1st level shared page, then returns the reference number of this
> + * top level table. */
> +grant_ref_t hyper_dmabuf_create_gref_table(struct page **pages, int rdomain, int nents,
> +					   struct hyper_dmabuf_shared_pages_info *shared_pages_info)
> +{
> +	int i = 0;
> +	grant_ref_t *data_refs;
> +	grant_ref_t top_level_ref;
> +
> +	/* allocate temp array for refs of shared data pages */
> +	data_refs = kcalloc(nents, sizeof(grant_ref_t), GFP_KERNEL);
> +
> +	/* share data pages in rw mode */
> +	for (i = 0; i < nents; i++) {
> +		data_refs[i] = gnttab_grant_foreign_access(rdomain, pfn_to_mfn(page_to_pfn(pages[i])), 0);
> +	}
> +
> +	/* create additional shared pages with 2 level addressing of data pages */
> +	top_level_ref = hyper_dmabuf_create_addressing_tables(data_refs, nents, rdomain,
> +							      shared_pages_info);
> +
> +	/* Store exported pages refid to be unshared later */
> +	shared_pages_info->data_refs = data_refs;
> +	shared_pages_info->top_level_ref = top_level_ref;
> +
> +	return top_level_ref;
> +}
> +
> +int hyper_dmabuf_cleanup_gref_table(struct hyper_dmabuf_sgt_info *sgt_info)
> +{
> +	uint32_t i = 0;
> +	struct hyper_dmabuf_shared_pages_info *shared_pages_info = &sgt_info->shared_pages_info;
> +
> +	grant_ref_t *ref = shared_pages_info->top_level_page;
> +	int n_2nd_level_pages = (sgt_info->sgt->nents/REFS_PER_PAGE + ((sgt_info->sgt->nents % REFS_PER_PAGE) ? 1: 0));/* rounding */
> +
> +
> +	if (shared_pages_info->data_refs == NULL ||
> +	    shared_pages_info->addr_pages ==  NULL ||
> +	    shared_pages_info->top_level_page == NULL ||
> +	    shared_pages_info->top_level_ref == -1) {
> +		printk("gref table for hyper_dmabuf already cleaned up\n");
> +		return 0;
> +	}
> +
> +	/* End foreign access for 2nd level addressing pages */
> +	while (i < n_2nd_level_pages && ref[i] != 0) {
> +		if (gnttab_query_foreign_access(ref[i])) {
> +			printk("refid not shared !!\n");
> +		}
> +		if (!gnttab_end_foreign_access_ref(ref[i], 1)) {
> +			printk("refid still in use!!!\n");
> +		}
> +		i++;
> +	}
> +	free_pages((unsigned long)shared_pages_info->addr_pages,
> +		   get_order(n_2nd_level_pages * PAGE_SIZE));
> +
> +	/* End foreign access for top level addressing page */
> +	if (gnttab_query_foreign_access(shared_pages_info->top_level_ref)) {
> +		printk("refid not shared !!\n");
> +	}
> +	if (!gnttab_end_foreign_access_ref(shared_pages_info->top_level_ref, 1)) {
> +		printk("refid still in use!!!\n");
> +	}
> +	free_pages((unsigned long)shared_pages_info->top_level_page, 0);
> +
> +	/* End foreign access for data pages, but do not free them */
> +	for (i = 0; i < sgt_info->sgt->nents; i++) {
> +		if (gnttab_query_foreign_access(shared_pages_info->data_refs[i])) {
> +			printk("refid not shared !!\n");
> +		}
> +		gnttab_end_foreign_access_ref(shared_pages_info->data_refs[i], 0);
> +	}
> +
> +	kfree(shared_pages_info->data_refs);
> +
> +	shared_pages_info->data_refs = NULL;
> +	shared_pages_info->addr_pages = NULL;
> +	shared_pages_info->top_level_page = NULL;
> +	shared_pages_info->top_level_ref = -1;
> +
> +	return 0;
> +}
> +
> +int hyper_dmabuf_cleanup_imported_pages(struct hyper_dmabuf_imported_sgt_info *sgt_info)
> +{
> +	struct hyper_dmabuf_shared_pages_info *shared_pages_info = &sgt_info->shared_pages_info;
> +
> +	if (shared_pages_info->unmap_ops == NULL || shared_pages_info->data_pages == NULL) {
> +		printk("Imported pages already cleaned up or buffer was not imported yet\n");
> +		return 0;
> +	}
> +
> +	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL, shared_pages_info->data_pages, sgt_info->nents) ) {
> +		printk("Cannot unmap data pages\n");
> +		return -EINVAL;
> +	}
> +
> +	gnttab_free_pages(sgt_info->nents, shared_pages_info->data_pages);
> +	kfree(shared_pages_info->data_pages);
> +	kfree(shared_pages_info->unmap_ops);
> +	shared_pages_info->unmap_ops = NULL;
> +	shared_pages_info->data_pages = NULL;
> +
> +	return 0;
> +}
> +
> +/* map and construct sg_lists from reference numbers */
> +struct sg_table* hyper_dmabuf_map_pages(grant_ref_t top_level_gref, int frst_ofst, int last_len, int nents, int sdomain,
> +					struct hyper_dmabuf_shared_pages_info *shared_pages_info)
> +{
> +	struct sg_table *st;
> +	struct page **pages;
> +	struct gnttab_map_grant_ref *ops;
> +	struct gnttab_unmap_grant_ref *unmap_ops;
> +	unsigned long addr;
> +	grant_ref_t *refs;
> +	int i;
> +	int n_level2_refs = DIV_ROUND_UP(nents, REFS_PER_PAGE);
> +
> +	/* Get data refids */
> +	struct page** refid_pages = hyper_dmabuf_get_data_refs(top_level_gref, sdomain, nents,
> +							       shared_pages_info);
> +
> +	pages = kcalloc(nents, sizeof(struct page *), GFP_KERNEL);
> +	if (pages == NULL) {
> +		return NULL;
> +	}
> +
> +	/* allocate new pages that are mapped to shared pages via grant-table */
> +	if (gnttab_alloc_pages(nents, pages)) {
> +		printk("Cannot allocate pages\n");
> +		return NULL;
> +	}
> +
> +	ops = kcalloc(nents, sizeof(struct gnttab_map_grant_ref), GFP_KERNEL);
> +	unmap_ops = kcalloc(nents, sizeof(struct gnttab_unmap_grant_ref), GFP_KERNEL);
> +
> +	for (i = 0; i < nents; i++) {
> +		addr = (unsigned long)pfn_to_kaddr(page_to_pfn(pages[i]));
> +		refs = pfn_to_kaddr(page_to_pfn(refid_pages[i / REFS_PER_PAGE]));
> +		gnttab_set_map_op(&ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, refs[i % REFS_PER_PAGE], sdomain);
> +		gnttab_set_unmap_op(&unmap_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, -1);
> +	}
> +
> +	if (gnttab_map_refs(ops, NULL, pages, nents)) {
> +		printk("\nxen: dom0: HYPERVISOR map grant ref failed\n");
> +		return NULL;
> +	}
> +
> +	for (i = 0; i < nents; i++) {
> +		if (ops[i].status) {
> +			printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d\n",
> +				ops[i].status);
> +			return NULL;
> +		} else {
> +			unmap_ops[i].handle = ops[i].handle;
> +		}
> +	}
> +
> +	st = hyper_dmabuf_create_sgt(pages, frst_ofst, last_len, nents);
> +
> +	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL, refid_pages, n_level2_refs) ) {
> +		printk("Cannot unmap 2nd level refs\n");
> +		return NULL;
> +	}
> +
> +	gnttab_free_pages(n_level2_refs, refid_pages);
> +	kfree(refid_pages);
> +
> +	kfree(shared_pages_info->unmap_ops);
> +	shared_pages_info->unmap_ops = unmap_ops;
> +	shared_pages_info->data_pages = pages;
> +	kfree(ops);
> +
> +	return st;
> +}
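
For reference, frst_ofst/last_len bound the real payload inside the mapped
pages: the byte count is (nents - 1) * PAGE_SIZE - frst_ofst + last_len, the
same arithmetic the DMABUF_QUERY_SIZE handler uses later in this patch. For
example, nents = 3, frst_ofst = 512, last_len = 100 gives
2 * 4096 - 512 + 100 = 7780 bytes.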
> +
> +static inline int hyper_dmabuf_sync_request_and_wait(int id, int ops)
> +{
> +	struct hyper_dmabuf_ring_rq *req;
> +	int operands[2];
> +	int ret;
> +
> +	operands[0] = id;
> +	operands[1] = ops;
> +
> +	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
> +
> +	hyper_dmabuf_create_request(req, HYPER_DMABUF_OPS_TO_SOURCE, &operands[0]);
> +
> +	/* send request */
> +	ret = hyper_dmabuf_send_request(id, req);
> +
> +	/* TODO: wait until it gets response.. or can we just move on? */
> +
> +	kfree(req);
> +
> +	return ret;
> +}
> +
> +static int hyper_dmabuf_ops_attach(struct dma_buf* dmabuf, struct device* dev,
> +			struct dma_buf_attachment *attach)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!attach->dmabuf->priv)
> +		return -EINVAL;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_ATTACH);
> +
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +
> +	return ret;
> +}
> +
> +static void hyper_dmabuf_ops_detach(struct dma_buf* dmabuf, struct dma_buf_attachment *attach)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!attach->dmabuf->priv)
> +		return;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_DETACH);
> +
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +}
> +
> +static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachment,
> +						enum dma_data_direction dir)
> +{
> +	struct sg_table *st;
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	struct hyper_dmabuf_pages_info *page_info;
> +	int ret;
> +
> +	if (!attachment->dmabuf->priv)
> +		return NULL;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attachment->dmabuf->priv;
> +
> +	/* extract pages from sgt */
> +	page_info = hyper_dmabuf_ext_pgs(sgt_info->sgt);
> +
> +	/* create a new sg_table with extracted pages */
> +	st = hyper_dmabuf_create_sgt(page_info->pages, page_info->frst_ofst,
> +				page_info->last_len, page_info->nents);
> +	if (st == NULL)
> +		return NULL;
> +
> +	if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir)) {
> +		goto err_free_sg;
> +	}
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_MAP);
> +
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +
> +	return st;
> +
> +err_free_sg:
> +	sg_free_table(st);
> +	kfree(st);
> +	return NULL;
> +}
> +
> +static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
> +						struct sg_table *sg,
> +						enum dma_data_direction dir)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!attachment->dmabuf->priv)
> +		return;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attachment->dmabuf->priv;
> +
> +	dma_unmap_sg(attachment->dev, sg->sgl, sg->nents, dir);
> +
> +	sg_free_table(sg);
> +	kfree(sg);
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_UNMAP);
> +
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +}
> +
> +static void hyper_dmabuf_ops_release(struct dma_buf *dmabuf)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_RELEASE);
> +
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +}
> +
> +static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return -EINVAL;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +
> +	return ret;
> +}
> +
> +static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return -EINVAL;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_END_CPU_ACCESS);
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +
> +	return ret;
> +}
> +
> +static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return NULL;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_KMAP_ATOMIC);
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +
> +	return NULL; /* for now NULL.. need to return the address of mapped region */
> +}
> +
> +static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum, void *vaddr)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +}
> +
> +static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return NULL;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_KMAP);
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +
> +	return NULL; /* for now NULL.. need to return the address of mapped region */
> +}
> +
> +static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum, void *vaddr)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_KUNMAP);
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +}
> +
> +static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return -EINVAL;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_MMAP);
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +
> +	return ret;
> +}
> +
> +static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return NULL;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_VMAP);
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +
> +	return NULL;
> +}
> +
> +static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_VUNMAP);
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +}
> +
> +static const struct dma_buf_ops hyper_dmabuf_ops = {
> +		.attach = hyper_dmabuf_ops_attach,
> +		.detach = hyper_dmabuf_ops_detach,
> +		.map_dma_buf = hyper_dmabuf_ops_map,
> +		.unmap_dma_buf = hyper_dmabuf_ops_unmap,
> +		.release = hyper_dmabuf_ops_release,
> +		.begin_cpu_access = hyper_dmabuf_ops_begin_cpu_access,
> +		.end_cpu_access = hyper_dmabuf_ops_end_cpu_access,
> +		.map_atomic = hyper_dmabuf_ops_kmap_atomic,
> +		.unmap_atomic = hyper_dmabuf_ops_kunmap_atomic,
> +		.map = hyper_dmabuf_ops_kmap,
> +		.unmap = hyper_dmabuf_ops_kunmap,
> +		.mmap = hyper_dmabuf_ops_mmap,
> +		.vmap = hyper_dmabuf_ops_vmap,
> +		.vunmap = hyper_dmabuf_ops_vunmap,
> +};
> +
> +/* exporting dmabuf as fd */
> +int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags)
> +{
> +	int fd;
> +	struct dma_buf *dmabuf;
> +
> +	/* call hyper_dmabuf_export_dma_buf to create a dma_buf for this
> +	 * buffer, then bind a file descriptor to it */
> +	dmabuf = hyper_dmabuf_export_dma_buf(dinfo);
> +	if (IS_ERR(dmabuf))
> +		return PTR_ERR(dmabuf);
> +
> +	fd = dma_buf_fd(dmabuf, flags);
> +
> +	return fd;
> +}
> +
> +struct dma_buf* hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo)
> +{
> +	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
> +
> +	exp_info.ops = &hyper_dmabuf_ops;
> +	exp_info.size = dinfo->sgt->nents * PAGE_SIZE; /* multiple of PAGE_SIZE, not considering offset */
> +	exp_info.flags = 0; /* not sure about flags */
> +	exp_info.priv = dinfo;
> +
> +	return dma_buf_export(&exp_info);
> +}
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
> new file mode 100644
> index 0000000..003c158
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
> @@ -0,0 +1,31 @@
> +#ifndef __HYPER_DMABUF_IMP_H__
> +#define __HYPER_DMABUF_IMP_H__
> +
> +#include "hyper_dmabuf_struct.h"
> +
> +/* extract pages directly from struct sg_table */
> +struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt);
> +
> +/* create sg_table with given pages and other parameters */
> +struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
> +                                int frst_ofst, int last_len, int nents);
> +
> +grant_ref_t hyper_dmabuf_create_gref_table(struct page **pages, int rdomain, int nents,
> +					   struct hyper_dmabuf_shared_pages_info *shared_pages_info);
> +
> +int hyper_dmabuf_cleanup_gref_table(struct hyper_dmabuf_sgt_info *sgt_info);
> +
> +int hyper_dmabuf_cleanup_imported_pages(struct hyper_dmabuf_imported_sgt_info *sgt_info);
> +
> +/* map first level tables that contains reference numbers for actual shared pages */
> +grant_ref_t *hyper_dmabuf_map_gref_table(grant_ref_t *gref_table, int n_pages_table);
> +
> +/* map and construct sg_lists from reference numbers */
> +struct sg_table* hyper_dmabuf_map_pages(grant_ref_t gref, int frst_ofst, int last_len, int nents, int sdomain,
> +					struct hyper_dmabuf_shared_pages_info *shared_pages_info);
> +
> +int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags);
> +
> +struct dma_buf* hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo);
> +
> +#endif /* __HYPER_DMABUF_IMP_H__ */
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
> new file mode 100644
> index 0000000..5e50908
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
> @@ -0,0 +1,462 @@
> +#include <linux/kernel.h>
> +#include <linux/errno.h>
> +#include <linux/module.h>
> +#include <linux/slab.h>
> +#include <linux/miscdevice.h>
> +#include <linux/uaccess.h>
> +#include <linux/dma-buf.h>
> +#include <linux/delay.h>
> +#include "hyper_dmabuf_struct.h"
> +#include "hyper_dmabuf_imp.h"
> +#include "hyper_dmabuf_list.h"
> +#include "hyper_dmabuf_drv.h"
> +#include "hyper_dmabuf_query.h"
> +#include "xen/hyper_dmabuf_xen_comm.h"
> +#include "hyper_dmabuf_msg.h"
> +
> +static struct hyper_dmabuf_private {
> +	struct device *device;
> +} hyper_dmabuf_private;
> +
> +static uint32_t hyper_dmabuf_id_gen(void)
> +{
> +	/* TODO: add proper implementation */
> +	static uint32_t id = 0;
> +	static int32_t domid = -1;
> +
> +	if (domid == -1) {
> +		domid = hyper_dmabuf_get_domid();
> +	}
> +	return HYPER_DMABUF_ID_IMPORTER(domid, id++);
> +}
> +
> +static int hyper_dmabuf_exporter_ring_setup(void *data)
> +{
> +	struct ioctl_hyper_dmabuf_exporter_ring_setup *ring_attr;
> +	int ret = 0;
> +
> +	if (!data) {
> +		printk("user data is NULL\n");
> +		return -EINVAL;
> +	}
> +	ring_attr = (struct ioctl_hyper_dmabuf_exporter_ring_setup *)data;
> +
> +	ret = hyper_dmabuf_exporter_ringbuf_init(ring_attr->remote_domain,
> +						&ring_attr->ring_refid,
> +						&ring_attr->port);
> +
> +	return ret;
> +}
> +
> +static int hyper_dmabuf_importer_ring_setup(void *data)
> +{
> +	struct ioctl_hyper_dmabuf_importer_ring_setup *setup_imp_ring_attr;
> +	int ret = 0;
> +
> +	if (!data) {
> +		printk("user data is NULL\n");
> +		return -EINVAL;
> +	}
> +
> +	setup_imp_ring_attr = (struct ioctl_hyper_dmabuf_importer_ring_setup *)data;
> +
> +	/* user needs to provide the port number and the grant ref of the page used as the ring buffer */
> +	ret = hyper_dmabuf_importer_ringbuf_init(setup_imp_ring_attr->source_domain,
> +						 setup_imp_ring_attr->ring_refid,
> +						 setup_imp_ring_attr->port);
> +
> +	return ret;
> +}
> +
> +static int hyper_dmabuf_export_remote(void *data)
> +{
> +	struct ioctl_hyper_dmabuf_export_remote *export_remote_attr;
> +	struct dma_buf *dma_buf;
> +	struct dma_buf_attachment *attachment;
> +	struct sg_table *sgt;
> +	struct hyper_dmabuf_pages_info *page_info;
> +	struct hyper_dmabuf_sgt_info *sgt_info;
> +	struct hyper_dmabuf_ring_rq *req;
> +	int operands[9];
> +	int ret = 0;
> +
> +	if (!data) {
> +		printk("user data is NULL\n");
> +		return -EINVAL;
> +	}
> +
> +	export_remote_attr = (struct ioctl_hyper_dmabuf_export_remote *)data;
> +
> +	dma_buf = dma_buf_get(export_remote_attr->dmabuf_fd);
> +	if (IS_ERR(dma_buf)) {
> +		printk("Cannot get dma buf\n");
> +		return PTR_ERR(dma_buf);
> +	}
> +
> +	attachment = dma_buf_attach(dma_buf, hyper_dmabuf_private.device);
> +	if (IS_ERR(attachment)) {
> +		printk("Cannot get attachment\n");
> +		dma_buf_put(dma_buf);
> +		return PTR_ERR(attachment);
> +	}
> +
> +	/* we check if this specific attachment was already exported
> +	 * to the same domain and if yes, it returns hyper_dmabuf_id
> +	 * of pre-exported sgt */
> +	ret = hyper_dmabuf_find_id(attachment, export_remote_attr->remote_domain);
> +	if (ret != -1) {
> +		dma_buf_detach(dma_buf, attachment);
> +		dma_buf_put(dma_buf);
> +		export_remote_attr->hyper_dmabuf_id = ret;
> +		return 0;
> +	}
> +	/* Clear ret, as a non-zero value here would make the whole ioctl
> +	 * return failure to userspace, which is not true */
> +	ret = 0;
> +
> +	sgt = dma_buf_map_attachment(attachment, DMA_BIDIRECTIONAL);
> +
> +	sgt_info = kmalloc(sizeof(*sgt_info), GFP_KERNEL);
> +
> +	sgt_info->hyper_dmabuf_id = hyper_dmabuf_id_gen();
> +	/* TODO: We might need to consider using port number on event channel? */
> +	sgt_info->hyper_dmabuf_rdomain = export_remote_attr->remote_domain;
> +	sgt_info->sgt = sgt;
> +	sgt_info->attachment = attachment;
> +	sgt_info->dma_buf = dma_buf;
> +
> +	page_info = hyper_dmabuf_ext_pgs(sgt);
> +	if (page_info == NULL)
> +		goto fail_export;
> +
> +	/* now register it to export list */
> +	hyper_dmabuf_register_exported(sgt_info);
> +
> +	page_info->hyper_dmabuf_rdomain = sgt_info->hyper_dmabuf_rdomain;
> +	page_info->hyper_dmabuf_id = sgt_info->hyper_dmabuf_id; /* may not be needed */
> +
> +	export_remote_attr->hyper_dmabuf_id = sgt_info->hyper_dmabuf_id;
> +
> +	/* now create a table of grefs for the shared pages */
> +
> +	/* now create request for importer via ring */
> +	operands[0] = page_info->hyper_dmabuf_id;
> +	operands[1] = page_info->nents;
> +	operands[2] = page_info->frst_ofst;
> +	operands[3] = page_info->last_len;
> +	operands[4] = hyper_dmabuf_create_gref_table(page_info->pages, export_remote_attr->remote_domain,
> +						page_info->nents, &sgt_info->shared_pages_info);
> +	/* driver/application specific private info, max 32 bytes */
> +	operands[5] = export_remote_attr->private[0];
> +	operands[6] = export_remote_attr->private[1];
> +	operands[7] = export_remote_attr->private[2];
> +	operands[8] = export_remote_attr->private[3];
> +
> +	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
> +
> +	/* composing a message to the importer */
> +	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT, &operands[0]);
> +	if (hyper_dmabuf_send_request(export_remote_attr->remote_domain, req))
> +		goto fail_send_request;
> +
> +	/* free msg */
> +	kfree(req);
> +	/* free page_info */
> +	kfree(page_info);
> +
> +	return ret;
> +
> +fail_send_request:
> +	kfree(req);
> +	hyper_dmabuf_remove_exported(sgt_info->hyper_dmabuf_id);
> +
> +fail_export:
> +	dma_buf_unmap_attachment(sgt_info->attachment, sgt_info->sgt, DMA_BIDIRECTIONAL);
> +	dma_buf_detach(sgt_info->dma_buf, sgt_info->attachment);
> +	dma_buf_put(sgt_info->dma_buf);
> +
> +	return -EINVAL;
> +}
> +
> +static int hyper_dmabuf_export_fd_ioctl(void *data)
> +{
> +	struct ioctl_hyper_dmabuf_export_fd *export_fd_attr;
> +	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
> +	int ret = 0;
> +
> +	if (!data) {
> +		printk("user data is NULL\n");
> +		return -1;
> +	}
> +
> +	export_fd_attr = (struct ioctl_hyper_dmabuf_export_fd *)data;
> +
> +	/* look for dmabuf for the id */
> +	imported_sgt_info = hyper_dmabuf_find_imported(export_fd_attr->hyper_dmabuf_id);
> +	if (imported_sgt_info == NULL) /* can't find sgt from the table */
> +		return -1;
> +
> +	printk("%s Found buffer gref %d  off %d last len %d nents %d domain %d\n", __func__,
> +		imported_sgt_info->gref, imported_sgt_info->frst_ofst,
> +		imported_sgt_info->last_len, imported_sgt_info->nents,
> +		HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported_sgt_info->hyper_dmabuf_id));
> +
> +	imported_sgt_info->sgt = hyper_dmabuf_map_pages(imported_sgt_info->gref,
> +						imported_sgt_info->frst_ofst,
> +						imported_sgt_info->last_len,
> +						imported_sgt_info->nents,
> +						HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported_sgt_info->hyper_dmabuf_id),
> +						&imported_sgt_info->shared_pages_info);
> +
> +	if (!imported_sgt_info->sgt) {
> +		return -1;
> +	}
> +
> +	export_fd_attr->fd = hyper_dmabuf_export_fd(imported_sgt_info, export_fd_attr->flags);
> +	if (export_fd_attr->fd < 0) {
> +		ret = export_fd_attr->fd;
> +	}
> +
> +	return ret;
> +}
> +
> +/* remove dmabuf from the database and send a request to the source domain
> + * to unmap it */
> +static int hyper_dmabuf_destroy(void *data)
> +{
> +	struct ioctl_hyper_dmabuf_destroy *destroy_attr;
> +	struct hyper_dmabuf_sgt_info *sgt_info;
> +	struct hyper_dmabuf_ring_rq *req;
> +	int ret;
> +
> +	if (!data) {
> +		printk("user data is NULL\n");
> +		return -EINVAL;
> +	}
> +
> +	destroy_attr = (struct ioctl_hyper_dmabuf_destroy *)data;
> +
> +	/* find dmabuf in export list */
> +	sgt_info = hyper_dmabuf_find_exported(destroy_attr->hyper_dmabuf_id);
> +	if (sgt_info == NULL) { /* failed to find corresponding entry in export list */
> +		destroy_attr->status = -EINVAL;
> +		return -EFAULT;
> +	}
> +
> +	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
> +
> +	hyper_dmabuf_create_request(req, HYPER_DMABUF_DESTROY, &destroy_attr->hyper_dmabuf_id);
> +
> +	/* now send destroy request to the remote domain,
> +	 * currently assuming only one importer exists */
> +	ret = hyper_dmabuf_send_request(sgt_info->hyper_dmabuf_rdomain, req);
> +	if (ret < 0) {
> +		kfree(req);
> +		return -EFAULT;
> +	}
> +
> +	/* free msg */
> +	kfree(req);
> +	destroy_attr->status = ret;
> +
> +	/* Rest of cleanup will follow when the importer frees its buffer;
> +	 * current implementation assumes that there is only one importer
> +	 */
> +
> +	return ret;
> +}
> +
> +static int hyper_dmabuf_query(void *data)
> +{
> +	struct ioctl_hyper_dmabuf_query *query_attr;
> +	struct hyper_dmabuf_sgt_info *sgt_info;
> +	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
> +	int ret = 0;
> +
> +	if (!data) {
> +		printk("user data is NULL\n");
> +		return -EINVAL;
> +	}
> +
> +	query_attr = (struct ioctl_hyper_dmabuf_query *)data;
> +
> +	sgt_info = hyper_dmabuf_find_exported(query_attr->hyper_dmabuf_id);
> +	imported_sgt_info = hyper_dmabuf_find_imported(query_attr->hyper_dmabuf_id);
> +
> +	/* if dmabuf can't be found in either list, return */
> +	if (!sgt_info && !imported_sgt_info) {
> +		printk("can't find entry anywhere\n");
> +		return -EINVAL;
> +	}
> +
> +	/* not considering the case where a dmabuf is found on both queues
> +	 * in one domain */
> +	switch (query_attr->item) {
> +		case DMABUF_QUERY_TYPE_LIST:
> +			if (sgt_info) {
> +				query_attr->info = EXPORTED;
> +			} else {
> +				query_attr->info = IMPORTED;
> +			}
> +			break;
> +
> +		/* exporting domain of this specific dmabuf*/
> +		case DMABUF_QUERY_EXPORTER:
> +			if (sgt_info) {
> +				query_attr->info = 0xFFFFFFFF; /* myself */
> +			} else {
> +				query_attr->info = (HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported_sgt_info->hyper_dmabuf_id));
> +			}
> +			break;
> +
> +		/* importing domain of this specific dmabuf */
> +		case DMABUF_QUERY_IMPORTER:
> +			if (sgt_info) {
> +				query_attr->info = sgt_info->hyper_dmabuf_rdomain;
> +			} else {
> +#if 0 /* TODO: a global variable, current_domain does not exist yet*/
> +				query_attr->info = current_domain;
> +#endif
> +			}
> +			break;
> +
> +		/* size of dmabuf in byte */
> +		case DMABUF_QUERY_SIZE:
> +			if (sgt_info) {
> +#if 0 /* TODO: hyper_dmabuf_buf_size is not implemented yet */
> +				query_attr->info = hyper_dmabuf_buf_size(sgt_info->sgt);
> +#endif
> +			} else {
> +				query_attr->info = imported_sgt_info->nents * PAGE_SIZE -
> +						   imported_sgt_info->frst_ofst - PAGE_SIZE +
> +						   imported_sgt_info->last_len;
> +			}
> +			break;
> +
> +		default:
> +			printk("unknown query item\n");
> +			ret = -EINVAL;
> +			break;
> +	}
> +
> +	return ret;
> +}
> +
> +static int hyper_dmabuf_remote_exporter_ring_setup(void *data)
> +{
> +	struct ioctl_hyper_dmabuf_remote_exporter_ring_setup *remote_exporter_ring_setup;
> +	struct hyper_dmabuf_ring_rq *req;
> +
> +	if (!data) {
> +		printk("user data is NULL\n");
> +		return -EINVAL;
> +	}
> +
> +	remote_exporter_ring_setup = (struct ioctl_hyper_dmabuf_remote_exporter_ring_setup *)data;
> +
> +	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
> +	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORTER_RING_SETUP, NULL);
> +
> +	/* requesting remote domain to set-up exporter's ring */
> +	if (hyper_dmabuf_send_request(remote_exporter_ring_setup->rdomain, req) < 0) {
> +		kfree(req);
> +		return -EINVAL;
> +	}
> +
> +	kfree(req);
> +	return 0;
> +}
> +
> +static const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
> +	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORTER_RING_SETUP, hyper_dmabuf_exporter_ring_setup, 0),
> +	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_IMPORTER_RING_SETUP, hyper_dmabuf_importer_ring_setup, 0),
> +	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_REMOTE, hyper_dmabuf_export_remote, 0),
> +	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_FD, hyper_dmabuf_export_fd_ioctl, 0),
> +	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_DESTROY, hyper_dmabuf_destroy, 0),
> +	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_QUERY, hyper_dmabuf_query, 0),
> +	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_REMOTE_EXPORTER_RING_SETUP, hyper_dmabuf_remote_exporter_ring_setup, 0),
> +};
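
For context, the intended userspace flow for these ioctls looks roughly like
the sketch below (the uapi struct and ioctl macro come from
hyper_dmabuf_drv.h, which is not quoted here; the field names are the ones
the handlers above dereference, and the device node follows from the
miscdevice name registered below):

	int fd = open("/dev/xen/hyper_dmabuf", O_RDWR);
	struct ioctl_hyper_dmabuf_export_remote msg = {0};

	msg.dmabuf_fd = dmabuf_fd;	/* local dma_buf to share */
	msg.remote_domain = 1;		/* importing domain id */

	if (ioctl(fd, IOCTL_HYPER_DMABUF_EXPORT_REMOTE, &msg) == 0)
		printf("exported as hyper_dmabuf_id %d\n", msg.hyper_dmabuf_id);

The importing domain would then pass the same id to
IOCTL_HYPER_DMABUF_EXPORT_FD to get a local dma_buf fd.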
> +
> +static long hyper_dmabuf_ioctl(struct file *filp,
> +			unsigned int cmd, unsigned long param)
> +{
> +	const struct hyper_dmabuf_ioctl_desc *ioctl = NULL;
> +	unsigned int nr = _IOC_NR(cmd);
> +	int ret = -EINVAL;
> +	hyper_dmabuf_ioctl_t func;
> +	char *kdata;
> +
> +	if (nr >= ARRAY_SIZE(hyper_dmabuf_ioctls)) {
> +		printk("invalid ioctl\n");
> +		return -EINVAL;
> +	}
> +
> +	ioctl = &hyper_dmabuf_ioctls[nr];
> +
> +	func = ioctl->func;
> +
> +	if (unlikely(!func)) {
> +		printk("no function\n");
> +		return -EINVAL;
> +	}
> +
> +	kdata = kmalloc(_IOC_SIZE(cmd), GFP_KERNEL);
> +	if (!kdata) {
> +		printk("no memory\n");
> +		return -ENOMEM;
> +	}
> +
> +	if (copy_from_user(kdata, (void __user *)param, _IOC_SIZE(cmd)) != 0) {
> +		printk("failed to copy from user arguments\n");
> +		kfree(kdata);
> +		return -EFAULT;
> +	}
> +
> +	ret = func(kdata);
> +
> +	if (copy_to_user((void __user *)param, kdata, _IOC_SIZE(cmd)) != 0) {
> +		printk("failed to copy to user arguments\n");
> +		kfree(kdata);
> +		return -EFAULT;
> +	}
> +
> +	kfree(kdata);
> +
> +	return ret;
> +}
> +
> +struct device_info {
> +	int curr_domain;
> +};
> +
> +/*===============================================================================================*/
> +static const struct file_operations hyper_dmabuf_driver_fops = {
> +	.owner = THIS_MODULE,
> +	.unlocked_ioctl = hyper_dmabuf_ioctl,
> +};
> +
> +static struct miscdevice hyper_dmabuf_miscdev = {
> +	.minor = MISC_DYNAMIC_MINOR,
> +	.name = "xen/hyper_dmabuf",
> +	.fops = &hyper_dmabuf_driver_fops,
> +};
> +
> +static const char device_name[] = "hyper_dmabuf";
> +
> +/*===============================================================================================*/
> +int register_device(void)
> +{
> +	int result = 0;
> +
> +	result = misc_register(&hyper_dmabuf_miscdev);
> +
> +	if (result != 0) {
> +		printk(KERN_WARNING "hyper_dmabuf: driver can't be registered\n");
> +		return result;
> +	}
> +
> +	hyper_dmabuf_private.device = hyper_dmabuf_miscdev.this_device;
> +
> +	/* TODO: Check if there is a different way to initialize dma mask nicely */
> +	dma_coerce_mask_and_coherent(hyper_dmabuf_private.device, 0xFFFFFFFF);
> +
> +	/* TODO find a way to provide parameters for below function or move that to ioctl */
> +/*	err = bind_interdomain_evtchn_to_irqhandler(rdomain, evtchn,
> +				src_sink_isr, PORT_NUM, "remote_domain", &info);
> +	if (err < 0) {
> +		printk("hyper_dmabuf: can't register interrupt handlers\n");
> +		return -EFAULT;
> +	}
> +
> +	info.irq = err;
> +*/
> +	return result;
> +}
> +
> +/*-----------------------------------------------------------------------------------------------*/
> +void unregister_device(void)
> +{
> +	printk(KERN_NOTICE "hyper_dmabuf: unregister_device() is called\n");
> +	misc_deregister(&hyper_dmabuf_miscdev);
> +}
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
> new file mode 100644
> index 0000000..77a7e65
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
> @@ -0,0 +1,119 @@
> +#include <linux/kernel.h>
> +#include <linux/errno.h>
> +#include <linux/module.h>
> +#include <linux/slab.h>
> +#include <linux/cdev.h>
> +#include <asm/uaccess.h>
> +#include <linux/hashtable.h>
> +#include <linux/dma-buf.h>
> +#include "hyper_dmabuf_list.h"
> +
> +DECLARE_HASHTABLE(hyper_dmabuf_hash_imported, MAX_ENTRY_IMPORTED);
> +DECLARE_HASHTABLE(hyper_dmabuf_hash_exported, MAX_ENTRY_EXPORTED);
> +
> +int hyper_dmabuf_table_init(void)
> +{
> +	hash_init(hyper_dmabuf_hash_imported);
> +	hash_init(hyper_dmabuf_hash_exported);
> +	return 0;
> +}
> +
> +int hyper_dmabuf_table_destroy(void)
> +{
> +	/* TODO: cleanup hyper_dmabuf_hash_imported and hyper_dmabuf_hash_exported */
> +	return 0;
> +}
> +
> +int hyper_dmabuf_register_exported(struct hyper_dmabuf_sgt_info *info)
> +{
> +	struct hyper_dmabuf_info_entry_exported *info_entry;
> +
> +	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
> +
> +	info_entry->info = info;
> +
> +	hash_add(hyper_dmabuf_hash_exported, &info_entry->node,
> +		info_entry->info->hyper_dmabuf_id);
> +
> +	return 0;
> +}
> +
> +int hyper_dmabuf_register_imported(struct hyper_dmabuf_imported_sgt_info* info)
> +{
> +	struct hyper_dmabuf_info_entry_imported *info_entry;
> +
> +	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
> +
> +	info_entry->info = info;
> +
> +	hash_add(hyper_dmabuf_hash_imported, &info_entry->node,
> +		info_entry->info->hyper_dmabuf_id);
> +
> +	return 0;
> +}
> +
> +struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(int id)
> +{
> +	struct hyper_dmabuf_info_entry_exported *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
> +		if (info_entry->info->hyper_dmabuf_id == id)
> +			return info_entry->info;
> +
> +	return NULL;
> +}
> +
> +/* search for a pre-exported sgt and return its id if it exists */
> +int hyper_dmabuf_find_id(struct dma_buf_attachment *attach, int domid)
> +{
> +	struct hyper_dmabuf_info_entry_exported *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
> +		if (info_entry->info->attachment == attach &&
> +		    info_entry->info->hyper_dmabuf_rdomain == domid)
> +			return info_entry->info->hyper_dmabuf_id;
> +
> +	return -1;
> +}
> +
> +struct hyper_dmabuf_imported_sgt_info *hyper_dmabuf_find_imported(int id)
> +{
> +	struct hyper_dmabuf_info_entry_imported *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
> +		if (info_entry->info->hyper_dmabuf_id == id)
> +			return info_entry->info;
> +
> +	return NULL;
> +}
> +
> +int hyper_dmabuf_remove_exported(int id)
> +{
> +	struct hyper_dmabuf_info_entry_exported *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
> +		if (info_entry->info->hyper_dmabuf_id == id) {
> +			hash_del(&info_entry->node);
> +			return 0;
> +		}
> +
> +	return -1;
> +}
> +
> +int hyper_dmabuf_remove_imported(int id)
> +{
> +	struct hyper_dmabuf_info_entry_imported *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
> +		if (info_entry->info->hyper_dmabuf_id == id) {
> +			hash_del(&info_entry->node);
> +			return 0;
> +		}
> +
> +	return -1;
> +}
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
> new file mode 100644
> index 0000000..869cd9a
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
> @@ -0,0 +1,40 @@
> +#ifndef __HYPER_DMABUF_LIST_H__
> +#define __HYPER_DMABUF_LIST_H__
> +
> +#include "hyper_dmabuf_struct.h"
> +
> +/* number of bits to be used for exported dmabufs hash table */
> +#define MAX_ENTRY_EXPORTED 7
> +/* number of bits to be used for imported dmabufs hash table */
> +#define MAX_ENTRY_IMPORTED 7
> +
> +struct hyper_dmabuf_info_entry_exported {
> +	struct hyper_dmabuf_sgt_info *info;
> +	struct hlist_node node;
> +};
> +
> +struct hyper_dmabuf_info_entry_imported {
> +	struct hyper_dmabuf_imported_sgt_info *info;
> +	struct hlist_node node;
> +};
> +
> +int hyper_dmabuf_table_init(void);
> +
> +int hyper_dmabuf_table_destroy(void);
> +
> +int hyper_dmabuf_register_exported(struct hyper_dmabuf_sgt_info *info);
> +
> +/* search for a pre-exported sgt and return its id if it exists */
> +int hyper_dmabuf_find_id(struct dma_buf_attachment *attach, int domid);
> +
> +int hyper_dmabuf_register_imported(struct hyper_dmabuf_imported_sgt_info* info);
> +
> +struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(int id);
> +
> +struct hyper_dmabuf_imported_sgt_info *hyper_dmabuf_find_imported(int id);
> +
> +int hyper_dmabuf_remove_exported(int id);
> +
> +int hyper_dmabuf_remove_imported(int id);
> +
> +#endif /* __HYPER_DMABUF_LIST_H__ */
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
> new file mode 100644
> index 0000000..3237e50
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
> @@ -0,0 +1,212 @@
> +#include <linux/kernel.h>
> +#include <linux/errno.h>
> +#include <linux/module.h>
> +#include <linux/slab.h>
> +#include <linux/dma-buf.h>
> +#include "hyper_dmabuf_imp.h"
> +//#include "hyper_dmabuf_remote_sync.h"
> +#include "xen/hyper_dmabuf_xen_comm.h"
> +#include "hyper_dmabuf_msg.h"
> +#include "hyper_dmabuf_list.h"
> +
> +void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
> +				        enum hyper_dmabuf_command command, int *operands)
> +{
> +	int i;
> +
> +	request->request_id = hyper_dmabuf_next_req_id_export();
> +	request->status = HYPER_DMABUF_REQ_NOT_RESPONDED;
> +	request->command = command;
> +
> +	switch (command) {
> +	/* as exporter, commands to importer */
> +	case HYPER_DMABUF_EXPORT:
> +		/* exporting pages for dmabuf */
> +		/* command : HYPER_DMABUF_EXPORT,
> +		 * operands0 : hyper_dmabuf_id
> +		 * operands1 : number of pages to be shared
> +		 * operands2 : offset of data in the first page
> +		 * operands3 : length of data in the last page
> +		 * operands4 : top-level reference number for shared pages
> +		 * operands5~8 : Driver-specific private data (e.g. graphic buffer's meta info)
> +		 */
> +		for (i = 0; i < 9; i++)
> +			request->operands[i] = operands[i];
> +		break;
> +
> +	case HYPER_DMABUF_DESTROY:
> +		/* destroy sg_list for hyper_dmabuf_id on remote side */
> +		/* command : DMABUF_DESTROY,
> +		 * operands0 : hyper_dmabuf_id
> +		 */
> +		request->operands[0] = operands[0];
> +		break;
> +
> +	case HYPER_DMABUF_OPS_TO_REMOTE:
> +		/* notifying dmabuf map/unmap to importer (probably not needed) */
> +		/* for dmabuf synchronization */
> +		break;
> +
> +	/* as importer, command to exporter */
> +	case HYPER_DMABUF_OPS_TO_SOURCE:
> +		/* notifying dmabuf map/unmap to exporter; a map or unmap request makes
> +		 * the driver do shadow mapping or unmapping for synchronization with
> +		 * the original exporter (e.g. i915) */
> +		/* command : DMABUF_OPS_TO_SOURCE.
> +		 * operands0 : hyper_dmabuf_id
> +		 * operands1 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
> +		 */
> +		for (i = 0; i < 2; i++)
> +			request->operands[i] = operands[i];
> +		break;
> +
> +	/* requesting the other side to setup another ring channel for reverse direction */
> +	case HYPER_DMABUF_EXPORTER_RING_SETUP:
> +		/* command : HYPER_DMABUF_EXPORTER_RING_SETUP */
> +		/* no operands needed */
> +		break;
> +
> +	default:
> +		/* no command found */
> +		return;
> +	}
> +}
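
Tying the EXPORT operand layout above back to hyper_dmabuf_export_remote()
earlier in this patch, a concrete request is composed like this (the variable
names are placeholders):

	int operands[9] = {
		hid,		/* [0] hyper_dmabuf_id */
		nents,		/* [1] number of shared pages */
		frst_ofst,	/* [2] offset of data in the first page */
		last_len,	/* [3] length of data in the last page */
		top_level_gref,	/* [4] top-level grant ref */
		p0, p1, p2, p3,	/* [5..8] private data */
	};

	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT, operands);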
> +
> +int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
> +{
> +	int i, ret;
> +	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
> +	struct hyper_dmabuf_sgt_info *sgt_info;
> +
> +	/* make sure req is not NULL (may not be needed) */
> +	if (!req) {
> +		return -EINVAL;
> +	}
> +
> +	req->status = HYPER_DMABUF_REQ_PROCESSED;
> +
> +	switch (req->command) {
> +	case HYPER_DMABUF_EXPORT:
> +		/* exporting pages for dmabuf */
> +		/* command : HYPER_DMABUF_EXPORT,
> +		 * operands0 : hyper_dmabuf_id
> +		 * operands1 : number of pages to be shared
> +		 * operands2 : offset of data in the first page
> +		 * operands3 : length of data in the last page
> +		 * operands4 : top-level reference number for shared pages
> +		 * operands5~8 : Driver-specific private data (e.g. graphic buffer's meta info)
> +		 */
> +		imported_sgt_info = kcalloc(1, sizeof(*imported_sgt_info),
> +					    GFP_KERNEL);
> +		imported_sgt_info->hyper_dmabuf_id = req->operands[0];
> +		imported_sgt_info->frst_ofst = req->operands[2];
> +		imported_sgt_info->last_len = req->operands[3];
> +		imported_sgt_info->nents = req->operands[1];
> +		imported_sgt_info->gref = req->operands[4];
> +
> +		printk("DMABUF was exported\n");
> +		printk("\thyper_dmabuf_id %d\n", req->operands[0]);
> +		printk("\tnents %d\n", req->operands[1]);
> +		printk("\tfirst offset %d\n", req->operands[2]);
> +		printk("\tlast len %d\n", req->operands[3]);
> +		printk("\tgrefid %d\n", req->operands[4]);
> +
> +		for (i = 0; i < 4; i++)
> +			imported_sgt_info->private[i] = req->operands[5+i];
> +
> +		hyper_dmabuf_register_imported(imported_sgt_info);
> +		break;
> +
> +	case HYPER_DMABUF_DESTROY:
> +		/* destroy sg_list for hyper_dmabuf_id on remote side */
> +		/* command : DMABUF_DESTROY,
> +		 * operands0 : hyper_dmabuf_id
> +		 */
> +
> +		imported_sgt_info =
> +			hyper_dmabuf_find_imported(req->operands[0]);
> +
> +		if (imported_sgt_info) {
> +			hyper_dmabuf_cleanup_imported_pages(imported_sgt_info);
> +
> +			hyper_dmabuf_remove_imported(req->operands[0]);
> +
> +			/* TODO: cleanup sgt on importer side etc */
> +		}
> +
> +		/* Notify exporter that the buffer is freed and it can clean it up */
> +		req->status = HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP;
> +		req->command = HYPER_DMABUF_DESTROY_FINISH;
> +
> +#if 0 /* function is not implemented yet */
> +
> +		ret = hyper_dmabuf_destroy_sgt(req->hyper_dmabuf_id);
> +#endif
> +		break;
> +
> +	case HYPER_DMABUF_DESTROY_FINISH:
> +		/* destroy sg_list for hyper_dmabuf_id on local side */
> +		/* command : DMABUF_DESTROY_FINISH,
> +		 * operands0 : hyper_dmabuf_id
> +		 */
> +
> +		/* TODO: that should be done on workqueue, when received ack from all importers that buffer is no longer used */
> +		sgt_info =
> +			hyper_dmabuf_find_exported(req->operands[0]);
> +
> +		if (sgt_info) {
> +			hyper_dmabuf_cleanup_gref_table(sgt_info);
> +
> +			/* unmap dmabuf */
> +			dma_buf_unmap_attachment(sgt_info->attachment, sgt_info->sgt, DMA_BIDIRECTIONAL);
> +			dma_buf_detach(sgt_info->dma_buf, sgt_info->attachment);
> +			dma_buf_put(sgt_info->dma_buf);
> +
> +			/* TODO: Rest of cleanup, sgt cleanup etc */
> +		}
> +
> +		break;
> +
> +	case HYPER_DMABUF_OPS_TO_REMOTE:
> +		/* notifying dmabuf map/unmap to importer (probably not needed) */
> +		/* for dmabuf synchronization */
> +		break;
> +
> +	/* as importer, command to exporter */
> +	case HYPER_DMABUF_OPS_TO_SOURCE:
> +		/* notifying dmabuf map/unmap to exporter; a map or unmap request makes
> +		 * the driver do shadow mapping or unmapping for synchronization with
> +		 * the original exporter (e.g. i915) */
> +		/* command : DMABUF_OPS_TO_SOURCE.
> +		 * operands0 : hyper_dmabuf_id
> +		 * operands1 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
> +		 */
> +		break;
> +
> +	/* requesting the other side to setup another ring channel for reverse direction */
> +	case HYPER_DMABUF_EXPORTER_RING_SETUP:
> +		/* command: HYPER_DMABUF_EXPORTER_RING_SETUP
> +		 * no operands needed */
> +		ret = hyper_dmabuf_exporter_ringbuf_init(domid, &req->operands[0], &req->operands[1]);
> +		if (ret < 0) {
> +			req->status = HYPER_DMABUF_REQ_ERROR;
> +			return -EINVAL;
> +		}
> +
> +		req->status = HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP;
> +		req->command = HYPER_DMABUF_IMPORTER_RING_SETUP;
> +		break;
> +
> +	case HYPER_DMABUF_IMPORTER_RING_SETUP:
> +		/* command: HYPER_DMABUF_IMPORTER_RING_SETUP */
> +		/* no operands needed */
> +		ret = hyper_dmabuf_importer_ringbuf_init(domid, req->operands[0], req->operands[1]);
> +		if (ret < 0)
> +			return -EINVAL;
> +
> +		break;
> +
> +	default:
> +		/* no matched command, nothing to do.. just return error */
> +		return -EINVAL;
> +	}
> +
> +	return req->command;
> +}
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
> new file mode 100644
> index 0000000..44bfb70
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
> @@ -0,0 +1,45 @@
> +#ifndef __HYPER_DMABUF_MSG_H__
> +#define __HYPER_DMABUF_MSG_H__
> +
> +enum hyper_dmabuf_command {
> +	HYPER_DMABUF_EXPORT = 0x10,
> +	HYPER_DMABUF_DESTROY,
> +	HYPER_DMABUF_DESTROY_FINISH,
> +	HYPER_DMABUF_OPS_TO_REMOTE,
> +	HYPER_DMABUF_OPS_TO_SOURCE,
> +	HYPER_DMABUF_EXPORTER_RING_SETUP, /* requesting remote domain to set up exporter's ring */
> +	HYPER_DMABUF_IMPORTER_RING_SETUP, /* requesting remote domain to set up importer's ring */
> +};
> +
> +enum hyper_dmabuf_ops {
> +	HYPER_DMABUF_OPS_ATTACH = 0x1000,
> +	HYPER_DMABUF_OPS_DETACH,
> +	HYPER_DMABUF_OPS_MAP,
> +	HYPER_DMABUF_OPS_UNMAP,
> +	HYPER_DMABUF_OPS_RELEASE,
> +	HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS,
> +	HYPER_DMABUF_OPS_END_CPU_ACCESS,
> +	HYPER_DMABUF_OPS_KMAP_ATOMIC,
> +	HYPER_DMABUF_OPS_KUNMAP_ATOMIC,
> +	HYPER_DMABUF_OPS_KMAP,
> +	HYPER_DMABUF_OPS_KUNMAP,
> +	HYPER_DMABUF_OPS_MMAP,
> +	HYPER_DMABUF_OPS_VMAP,
> +	HYPER_DMABUF_OPS_VUNMAP,
> +};
> +
> +enum hyper_dmabuf_req_feedback {
> +	HYPER_DMABUF_REQ_PROCESSED = 0x100,
> +	HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP,
> +	HYPER_DMABUF_REQ_ERROR,
> +	HYPER_DMABUF_REQ_NOT_RESPONDED
> +};
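> +
> +/* Status lifecycle, as used elsewhere in this patch: the sender sets
> + * REQ_NOT_RESPONDED when creating a request; the receiver's msg_parse()
> + * overwrites it with REQ_PROCESSED, or with REQ_NEEDS_FOLLOW_UP when it
> + * turns the message into a follow-up command (DESTROY -> DESTROY_FINISH,
> + * EXPORTER_RING_SETUP -> IMPORTER_RING_SETUP), or with REQ_ERROR on
> + * failure; the front-ring ISR then dispatches on this status. */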
> +
> +/* create a request packet with given command and operands */
> +void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
> +                                        enum hyper_dmabuf_command command, int *operands);
> +
> +/* parse incoming request packet (or response) and take appropriate actions for those */
> +int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req);
> +
> +#endif /* __HYPER_DMABUF_MSG_H__ */
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
> new file mode 100644
> index 0000000..a577167
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
> @@ -0,0 +1,16 @@
> +#ifndef __HYPER_DMABUF_QUERY_H__
> +#define __HYPER_DMABUF_QUERY_H__
> +
> +enum hyper_dmabuf_query {
> +	DMABUF_QUERY_TYPE_LIST = 0x10,
> +	DMABUF_QUERY_EXPORTER,
> +	DMABUF_QUERY_IMPORTER,
> +	DMABUF_QUERY_SIZE
> +};
> +
> +enum hyper_dmabuf_status {
> +	EXPORTED = 0x01,
> +	IMPORTED
> +};
> +
> +#endif /* __HYPER_DMABUF_QUERY_H__ */
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
> new file mode 100644
> index 0000000..c8a2f4d
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
> @@ -0,0 +1,70 @@
> +#ifndef __HYPER_DMABUF_STRUCT_H__
> +#define __HYPER_DMABUF_STRUCT_H__
> +
> +#include <xen/interface/grant_table.h>
> +
> +/* Importer combines the source domain id with the given hyper_dmabuf_id
> + * to make it unique in case there are multiple exporters */
> +
> +#define HYPER_DMABUF_ID_IMPORTER(sdomain, id) \
> +	((((sdomain) & 0xFF) << 24) | ((id) & 0xFFFFFF))
> +
> +#define HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(id) \
> +	(((id) >> 24) & 0xFF)
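> +
> +/* e.g. source domain 3, local id 5:
> + *   HYPER_DMABUF_ID_IMPORTER(3, 5) == 0x03000005 and
> + *   HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(0x03000005) == 3 */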
> +
> +/* each grant_ref_t is 4 bytes, so in total 4096 grant_ref_t fit in
> + * these ref pages, meaning we can share 4KB*4096 = 16MB of buffer
> + * (needs to be increased for large buffer use-cases such as a 4K
> + * frame buffer) */
> +#define MAX_ALLOWED_NUM_PAGES_FOR_GREF_NUM_ARRAYS 4
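> +
> +/* worked example with 4KB pages: one page holds 4096 / 4 = 1024 refs,
> + * 4 ref pages address 4 * 1024 = 4096 data pages, and
> + * 4096 * 4KB = 16MB of shareable buffer */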
> +
> +struct hyper_dmabuf_shared_pages_info {
> +	grant_ref_t *data_refs;	/* table with shared buffer pages refid */
> +	grant_ref_t *addr_pages; /* pages of 2nd level addressing */
> +	grant_ref_t *top_level_page; /* page of top level addressing, it contains refids of 2nd level pages */
> +	grant_ref_t top_level_ref; /* top level refid */
> +	struct gnttab_unmap_grant_ref* unmap_ops; /* unmap ops for mapped pages */
> +	struct page **data_pages; /* data pages to be unmapped */
> +};
> +
> +/* Exporter builds pages_info before sharing pages */
> +struct hyper_dmabuf_pages_info {
> +	int hyper_dmabuf_id; /* unique id to reference dmabuf in source domain */
> +	int hyper_dmabuf_rdomain; /* currently considering just one remote domain accessing it */
> +	int frst_ofst; /* offset of data in the first page */
> +	int last_len; /* length of data in the last page */
> +	int nents; /* # of pages */
> +	struct page **pages; /* the data pages themselves, extracted from the sgt */
> +};
> +
> +/* Both importer and exporter use this structure to point to sg lists
> + *
> + * Exporter stores references to the sgt in a hash table
> + * Exporter keeps these references for synchronization and tracking purposes
> + *
> + * Importer uses this structure when exporting to other drivers in the same domain */
> +struct hyper_dmabuf_sgt_info {
> +	int hyper_dmabuf_id; /* unique id to reference dmabuf in remote domain */
> +	int hyper_dmabuf_rdomain; /* domain importing this sgt */
> +	struct sg_table *sgt; /* pointer to sgt */
> +	struct dma_buf *dma_buf; /* needed to store this for freeing it later */
> +	struct dma_buf_attachment *attachment; /* needed to store this for freeing this later */
> +	struct hyper_dmabuf_shared_pages_info shared_pages_info;
> +	int private[4]; /* device specific info (e.g. image's meta info?) */
> +};
> +
> +/* Importer stores references (before mapping) to shared pages
> + * Importer keeps these references in the table and maps them in
> + * its own memory map once userspace asks for a reference to the buffer */
> +struct hyper_dmabuf_imported_sgt_info {
> +	int hyper_dmabuf_id; /* unique id to reference dmabuf: HYPER_DMABUF_ID_IMPORTER(source domain id, exporter's hyper_dmabuf_id) */
> +	int frst_ofst;	/* start offset in shared page #1 */
> +	int last_len;	/* length of data in the last shared page */
> +	int nents;	/* number of pages to be shared */
> +	grant_ref_t gref; /* reference number of top level addressing page of shared pages */
> +	struct sg_table *sgt; /* sgt pointer after importing buffer */
> +	struct hyper_dmabuf_shared_pages_info shared_pages_info;
> +	int private[4]; /* device specific info (e.g. image's meta info?) */
> +};
> +
> +#endif /* __HYPER_DMABUF_STRUCT_H__ */
> diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
> new file mode 100644
> index 0000000..22f2ef0
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
> @@ -0,0 +1,328 @@
> +#include <linux/kernel.h>
> +#include <linux/errno.h>
> +#include <linux/module.h>
> +#include <linux/slab.h>
> +#include <linux/workqueue.h>
> +#include <xen/grant_table.h>
> +#include <xen/events.h>
> +#include <xen/xenbus.h>
> +#include <asm/xen/page.h>
> +#include "hyper_dmabuf_xen_comm.h"
> +#include "hyper_dmabuf_xen_comm_list.h"
> +#include "../hyper_dmabuf_imp.h"
> +#include "../hyper_dmabuf_list.h"
> +#include "../hyper_dmabuf_msg.h"
> +
> +static int export_req_id = 0;
> +static int import_req_id = 0;
> +
> +int32_t hyper_dmabuf_get_domid(void)
> +{
> +	struct xenbus_transaction xbt;
> +	int32_t domid;
> +
> +	xenbus_transaction_start(&xbt);
> +
> +	if (!xenbus_scanf(xbt, "domid", "", "%d", &domid)) {
> +		domid = -1;
> +	}
> +	xenbus_transaction_end(xbt, 0);
> +
> +	return domid;
> +}
> +
> +int hyper_dmabuf_next_req_id_export(void)
> +{
> +	export_req_id++;
> +	return export_req_id;
> +}
> +
> +int hyper_dmabuf_next_req_id_import(void)
> +{
> +	import_req_id++;
> +	return import_req_id;
> +}
> +
> +/* For now cache the latest rings as global variables. TODO: keep them in a list */
> +static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *dev_id);
> +static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *dev_id);
> +
> +/* exporter needs to generate info for page sharing */
> +int hyper_dmabuf_exporter_ringbuf_init(int rdomain, grant_ref_t *refid, int *port)
> +{
> +	struct hyper_dmabuf_ring_info_export *ring_info;
> +	struct hyper_dmabuf_sring *sring;
> +	struct evtchn_alloc_unbound alloc_unbound;
> +	struct evtchn_close close;
> +
> +	void *shared_ring;
> +	int ret;
> +
> +	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
> +
> +	/* from exporter to importer */
> +	shared_ring = (void *)__get_free_pages(GFP_KERNEL, 1);
> +	if (!shared_ring) {
> +		return -ENOMEM;
> +	}
> +
> +	sring = (struct hyper_dmabuf_sring *) shared_ring;
> +
> +	SHARED_RING_INIT(sring);
> +
> +	FRONT_RING_INIT(&(ring_info->ring_front), sring, PAGE_SIZE);
> +
> +	ring_info->gref_ring = gnttab_grant_foreign_access(rdomain,
> +							virt_to_mfn(shared_ring), 0);
> +	if (ring_info->gref_ring < 0) {
> +		return -EINVAL; /* fail to get gref */
> +	}
> +
> +	alloc_unbound.dom = DOMID_SELF;
> +	alloc_unbound.remote_dom = rdomain;
> +	ret = HYPERVISOR_event_channel_op(EVTCHNOP_alloc_unbound, &alloc_unbound);
> +	if (ret != 0) {
> +		printk("Cannot allocate event channel\n");
> +		return -EINVAL;
> +	}
> +
> +	/* setting up interrupt */
> +	ret = bind_evtchn_to_irqhandler(alloc_unbound.port,
> +					hyper_dmabuf_front_ring_isr, 0,
> +					NULL, (void*) ring_info);
> +
> +	if (ret < 0) {
> +		printk("Failed to setup event channel\n");
> +		close.port = alloc_unbound.port;
> +		HYPERVISOR_event_channel_op(EVTCHNOP_close, &close);
> +		gnttab_end_foreign_access(ring_info->gref_ring, 0, (unsigned long)shared_ring);
> +		return -EINVAL;
> +	}
> +
> +	ring_info->rdomain = rdomain;
> +	ring_info->irq = ret;
> +	ring_info->port = alloc_unbound.port;
> +
> +	/* store refid and port numbers for userspace's use */
> +	*refid = ring_info->gref_ring;
> +	*port = ring_info->port;
> +
> +	printk("%s: allocated eventchannel gref %d  port: %d  irq: %d\n", __func__,
> +		ring_info->gref_ring,
> +		ring_info->port,
> +		ring_info->irq);
> +
> +	/* register ring info */
> +	ret = hyper_dmabuf_register_exporter_ring(ring_info);
> +
> +	return ret;
> +}
> +
> +/* importer needs to know about shared page and port numbers for ring buffer and event channel */
> +int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port)
> +{
> +	struct hyper_dmabuf_ring_info_import *ring_info;
> +	struct hyper_dmabuf_sring *sring;
> +
> +	struct page *shared_ring;
> +
> +	struct gnttab_map_grant_ref *ops;
> +	struct gnttab_unmap_grant_ref *unmap_ops;
> +	int ret;
> +
> +	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
> +
> +	ring_info->sdomain = sdomain;
> +	ring_info->evtchn = port;
> +
> +	ops = kmalloc(sizeof(*ops), GFP_KERNEL);
> +	unmap_ops = kmalloc(sizeof(*unmap_ops), GFP_KERNEL);
> +
> +	if (gnttab_alloc_pages(1, &shared_ring)) {
> +		return -EINVAL;
> +	}
> +
> +	gnttab_set_map_op(&ops[0], (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
> +			GNTMAP_host_map, gref, sdomain);
> +
> +	ret = gnttab_map_refs(ops, NULL, &shared_ring, 1);
> +	if (ret < 0) {
> +		printk("Cannot map ring\n");
> +		return -EINVAL;
> +	}
> +
> +	if (ops[0].status) {
> +		printk("Ring mapping failed\n");
> +		return -EINVAL;
> +	}
> +
> +	sring = (struct hyper_dmabuf_sring*) pfn_to_kaddr(page_to_pfn(shared_ring));
> +
> +	BACK_RING_INIT(&ring_info->ring_back, sring, PAGE_SIZE);
> +
> +	ret = bind_interdomain_evtchn_to_irqhandler(sdomain, port, hyper_dmabuf_back_ring_isr, 0,
> +						    NULL, (void*)ring_info);
> +	if (ret < 0) {
> +		return -EINVAL;
> +	}
> +
> +	ring_info->irq = ret;
> +
> +	printk("%s: bound to eventchannel port: %d  irq: %d\n", __func__,
> +		port,
> +		ring_info->irq);
> +
> +	ret = hyper_dmabuf_register_importer_ring(ring_info);
> +
> +	return ret;
> +}
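
Putting the two init paths together, the expected bring-up order (as driven
by the ring-setup ioctls earlier in this patch; A and B are placeholder
domain ids) is:

	/* domain A, exporter side */
	grant_ref_t refid;
	int port;
	hyper_dmabuf_exporter_ringbuf_init(B, &refid, &port);

	/* refid and port travel to domain B out of band, e.g. via
	 * userspace and IOCTL_HYPER_DMABUF_IMPORTER_RING_SETUP */

	/* domain B, importer side */
	hyper_dmabuf_importer_ringbuf_init(A, refid, port);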
> +
> +int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req)
> +{
> +	struct hyper_dmabuf_front_ring *ring;
> +	struct hyper_dmabuf_ring_rq *new_req;
> +	struct hyper_dmabuf_ring_info_export *ring_info;
> +	int notify;
> +
> +	/* find a ring info for the channel */
> +	ring_info = hyper_dmabuf_find_exporter_ring(domain);
> +	if (!ring_info) {
> +		printk("Can't find ring info for the channel\n");
> +		return -EINVAL;
> +	}
> +
> +	ring = &ring_info->ring_front;
> +
> +	if (RING_FULL(ring))
> +		return -EBUSY;
> +
> +	new_req = RING_GET_REQUEST(ring, ring->req_prod_pvt);
> +	if (!new_req) {
> +		printk("NULL REQUEST\n");
> +		return -EIO;
> +	}
> +
> +	memcpy(new_req, req, sizeof(*new_req));
> +
> +	ring->req_prod_pvt++;
> +
> +	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(ring, notify);
> +	if (notify) {
> +		notify_remote_via_irq(ring_info->irq);
> +	}
> +
> +	return 0;
> +}
> +
> +/* called by interrupt (WORKQUEUE) */
> +int hyper_dmabuf_send_response(struct hyper_dmabuf_ring_rp* response, int domain)
> +{
> +	/* as an importer and as an exporter */
> +	return 0;
> +}
> +
> +/* ISR for request from exporter (as an importer) */
> +static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *dev_id)
> +{
> +	RING_IDX rc, rp;
> +	struct hyper_dmabuf_ring_rq request;
> +	struct hyper_dmabuf_ring_rp response;
> +	int notify, more_to_do = 0;
> +	int ret;
> +//	struct hyper_dmabuf_work *work;
> +
> +	struct hyper_dmabuf_ring_info_import *ring_info = (struct hyper_dmabuf_ring_info_import *)dev_id;
> +	struct hyper_dmabuf_back_ring *ring;
> +
> +	ring = &ring_info->ring_back;
> +
> +	do {
> +		rc = ring->req_cons;
> +		rp = ring->sring->req_prod;
> +
> +		while (rc != rp) {
> +			if (RING_REQUEST_CONS_OVERFLOW(ring, rc))
> +				break;
> +
> +			memcpy(&request, RING_GET_REQUEST(ring, rc), sizeof(request));
> +			printk("Got request\n");
> +			ring->req_cons = ++rc;
> +
> +			/* TODO: probably using a linked list for multiple requests and letting
> +			 * a task in a workqueue process those is a better idea, because
> +			 * we do not want to stay in the ISR for long.
> +			 */
> +			ret = hyper_dmabuf_msg_parse(ring_info->sdomain, &request);
> +
> +			if (ret > 0) {
> +				/* build response */
> +				memcpy(&response, &request, sizeof(response));
> +
> +				/* we sent back modified request as a response.. we might just need to have request only..*/
> +				memcpy(RING_GET_RESPONSE(ring, ring->rsp_prod_pvt), &response, sizeof(response));
> +				ring->rsp_prod_pvt++;
> +
> +				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(ring, notify);
> +
> +				if (notify) {
> +					printk("Notyfing\n");
> +					notify_remote_via_irq(ring_info->irq);
> +				}
> +			}
> +
> +			RING_FINAL_CHECK_FOR_REQUESTS(ring, more_to_do);
> +			printk("Final check for requests %d\n", more_to_do);
> +		}
> +	} while (more_to_do);
> +
> +	return IRQ_HANDLED;
> +}
> +
> +/* ISR for responses from importer */
> +static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *dev_id)
> +{
> +	/* the front ring only cares about responses from the back ring */
> +	struct hyper_dmabuf_ring_rp *response;
> +	RING_IDX i, rp;
> +	int more_to_do, ret;
> +
> +	struct hyper_dmabuf_ring_info_export *ring_info = (struct hyper_dmabuf_ring_info_export *)dev_id;
> +	struct hyper_dmabuf_front_ring *ring;
> +	ring = &ring_info->ring_front;
> +
> +	do {
> +		more_to_do = 0;
> +		rp = ring->sring->rsp_prod;
> +		for (i = ring->rsp_cons; i != rp; i++) {
> +			unsigned long id;
> +
> +			response = RING_GET_RESPONSE(ring, i);
> +			id = response->response_id;
> +
> +			if (response->status == HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP) {
> +				/* parsing response */
> +				ret = hyper_dmabuf_msg_parse(ring_info->rdomain, (struct hyper_dmabuf_ring_rq*)response);
> +
> +				if (ret < 0) {
> +					printk("getting error while parsing response\n");
> +				}
> +			} else if (response->status == HYPER_DMABUF_REQ_ERROR) {
> +				printk("remote domain %d couldn't process request %d\n", ring_info->rdomain, response->command);
> +			}
> +
> +		}
> +
> +		ring->rsp_cons = i;
> +
> +		if (i != ring->req_prod_pvt) {
> +			RING_FINAL_CHECK_FOR_RESPONSES(ring, more_to_do);
> +			printk("more to do %d\n", more_to_do);
> +		} else {
> +			ring->sring->rsp_event = i+1;
> +		}
> +	} while (more_to_do);
> +
> +	return IRQ_HANDLED;
> +}
> diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
> new file mode 100644
> index 0000000..2754917
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
> @@ -0,0 +1,62 @@
> +#ifndef __HYPER_DMABUF_XEN_COMM_H__
> +#define __HYPER_DMABUF_XEN_COMM_H__
> +
> +#include "xen/interface/io/ring.h"
> +
> +#define MAX_NUMBER_OF_OPERANDS 9
> +
> +struct hyper_dmabuf_ring_rq {
> +	unsigned int request_id;
> +	unsigned int status;
> +	unsigned int command;
> +	unsigned int operands[MAX_NUMBER_OF_OPERANDS];
> +};
> +
> +struct hyper_dmabuf_ring_rp {
> +	unsigned int response_id;
> +	unsigned int status;
> +	unsigned int command;
> +	unsigned int operands[MAX_NUMBER_OF_OPERANDS];
> +};
> +
> +DEFINE_RING_TYPES(hyper_dmabuf, struct hyper_dmabuf_ring_rq, struct hyper_dmabuf_ring_rp);
> +
> +struct hyper_dmabuf_ring_info_export {
> +	struct hyper_dmabuf_front_ring ring_front;
> +	int rdomain;
> +	int gref_ring;
> +	int irq;
> +	int port;
> +};
> +
> +struct hyper_dmabuf_ring_info_import {
> +	int sdomain;
> +	int irq;
> +	int evtchn;
> +	struct hyper_dmabuf_back_ring ring_back;
> +};
> +
> +int32_t hyper_dmabuf_get_domid(void);
> +
> +int hyper_dmabuf_next_req_id_export(void);
> +
> +int hyper_dmabuf_next_req_id_import(void);
> +
> +/* exporter needs to generate info for page sharing */
> +int hyper_dmabuf_exporter_ringbuf_init(int rdomain, grant_ref_t *gref, int *port);
> +
> +/* importer needs to know the shared page ref and the port number for the ring buffer and event channel */
> +int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port);
> +
> +/* send request to the remote domain */
> +int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req);
> +
> +/* called by interrupt (WORKQUEUE) */
> +int hyper_dmabuf_send_response(struct hyper_dmabuf_ring_rp* response, int domain);
> +
> +#endif // __HYPER_DMABUF_XEN_COMM_H__
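
A sizing note: with 4 KiB pages each ring slot is a 48-byte union of the
request and response structs (12 x u32), and the generic ring macros round
the usable slot count down to a power of two, so one shared page holds 64
in-flight messages. A standalone model of that arithmetic (the 64-byte
header size is an assumption based on the generic ring.h layout):

#include <stdio.h>

#define PAGE_SZ 4096
#define MAX_NUMBER_OF_OPERANDS 9

struct rq { unsigned int request_id, status, command,
	    operands[MAX_NUMBER_OF_OPERANDS]; };
struct rp { unsigned int response_id, status, command,
	    operands[MAX_NUMBER_OF_OPERANDS]; };
union slot { struct rq req; struct rp rsp; };

int main(void)
{
	unsigned int raw = (PAGE_SZ - 64) / sizeof(union slot);
	unsigned int n = 1;

	while (n * 2 <= raw)	/* round down to a power of two */
		n *= 2;
	printf("slot %zu bytes, %u raw, %u usable entries\n",
	       sizeof(union slot), raw, n);
	return 0;
}
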
> diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
> new file mode 100644
> index 0000000..15c9d29
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
> @@ -0,0 +1,106 @@
> +#include <linux/kernel.h>
> +#include <linux/errno.h>
> +#include <linux/module.h>
> +#include <linux/slab.h>
> +#include <linux/cdev.h>
> +#include <asm/uaccess.h>
> +#include <linux/hashtable.h>
> +#include <xen/grant_table.h>
> +#include "hyper_dmabuf_xen_comm.h"
> +#include "hyper_dmabuf_xen_comm_list.h"
> +
> +DECLARE_HASHTABLE(hyper_dmabuf_hash_importer_ring, MAX_ENTRY_IMPORT_RING);
> +DECLARE_HASHTABLE(hyper_dmabuf_hash_exporter_ring, MAX_ENTRY_EXPORT_RING);
> +
> +int hyper_dmabuf_ring_table_init(void)
> +{
> +	hash_init(hyper_dmabuf_hash_importer_ring);
> +	hash_init(hyper_dmabuf_hash_exporter_ring);
> +	return 0;
> +}
> +
> +int hyper_dmabuf_ring_table_destroy(void)
> +{
> +	/* TODO: cleanup tables*/
> +	return 0;
> +}
> +
> +int hyper_dmabuf_register_exporter_ring(struct hyper_dmabuf_ring_info_export *ring_info)
> +{
> +	struct hyper_dmabuf_exporter_ring_info *info_entry;
> +
> +	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
> +	if (!info_entry)
> +		return -ENOMEM;
> +
> +	info_entry->info = ring_info;
> +
> +	hash_add(hyper_dmabuf_hash_exporter_ring, &info_entry->node,
> +		info_entry->info->rdomain);
> +
> +	return 0;
> +}
> +
> +int hyper_dmabuf_register_importer_ring(struct hyper_dmabuf_ring_info_import *ring_info)
> +{
> +	struct hyper_dmabuf_importer_ring_info *info_entry;
> +
> +	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
> +	if (!info_entry)
> +		return -ENOMEM;
> +
> +	info_entry->info = ring_info;
> +
> +	hash_add(hyper_dmabuf_hash_importer_ring, &info_entry->node,
> +		info_entry->info->sdomain);
> +
> +	return 0;
> +}
> +
> +struct hyper_dmabuf_ring_info_export *hyper_dmabuf_find_exporter_ring(int domid)
> +{
> +	struct hyper_dmabuf_exporter_ring_info *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_exporter_ring, bkt, info_entry, node)
> +		if (info_entry->info->rdomain == domid)
> +			return info_entry->info;
> +
> +	return NULL;
> +}
> +
> +struct hyper_dmabuf_ring_info_import *hyper_dmabuf_find_importer_ring(int domid)
> +{
> +	struct hyper_dmabuf_importer_ring_info *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_importer_ring, bkt, info_entry, node)
> +		if (info_entry->info->sdomain == domid)
> +			return info_entry->info;
> +
> +	return NULL;
> +}
> +
> +int hyper_dmabuf_remove_exporter_ring(int domid)
> +{
> +	struct hyper_dmabuf_exporter_ring_info *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_exporter_ring, bkt, info_entry, node)
> +		if (info_entry->info->rdomain == domid) {
> +			hash_del(&info_entry->node);
> +			kfree(info_entry);
> +			return 0;
> +		}
> +
> +	return -ENOENT;
> +}
> +
> +int hyper_dmabuf_remove_importer_ring(int domid)
> +{
> +	struct hyper_dmabuf_importer_ring_info *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_importer_ring, bkt, info_entry, node)
> +		if (info_entry->info->sdomain == domid) {
> +			hash_del(&info_entry->node);
> +			kfree(info_entry);
> +			return 0;
> +		}
> +
> +	return -ENOENT;
> +}
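
For reference, the intended call pattern around channel teardown might look
like the sketch below. The teardown path is not part of this file, so this
is an assumption (including that ring_info was the dev_id bound to the irq),
and note that these tables are not protected by any lock yet, so concurrent
ioctls would race:

static void teardown_exporter_ring(int rdomain)
{
	struct hyper_dmabuf_ring_info_export *ring_info;

	ring_info = hyper_dmabuf_find_exporter_ring(rdomain);
	if (!ring_info)
		return;

	unbind_from_irqhandler(ring_info->irq, ring_info);
	hyper_dmabuf_remove_exporter_ring(rdomain);
	kfree(ring_info);	/* the registry only stored the pointer */
}
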
> diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
> new file mode 100644
> index 0000000..5929f99
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
> @@ -0,0 +1,35 @@
> +#ifndef __HYPER_DMABUF_XEN_COMM_LIST_H__
> +#define __HYPER_DMABUF_XEN_COMM_LIST_H__
> +
> +/* number of bits to be used for exported dmabufs hash table */
> +#define MAX_ENTRY_EXPORT_RING 7
> +/* number of bits to be used for imported dmabufs hash table */
> +#define MAX_ENTRY_IMPORT_RING 7
> +
> +struct hyper_dmabuf_exporter_ring_info {
> +        struct hyper_dmabuf_ring_info_export *info;
> +        struct hlist_node node;
> +};
> +
> +struct hyper_dmabuf_importer_ring_info {
> +        struct hyper_dmabuf_ring_info_import *info;
> +        struct hlist_node node;
> +};
> +
> +int hyper_dmabuf_ring_table_init(void);
> +
> +int hyper_dmabuf_ring_table_destroy(void);
> +
> +int hyper_dmabuf_register_exporter_ring(struct hyper_dmabuf_ring_info_export *ring_info);
> +
> +int hyper_dmabuf_register_importer_ring(struct hyper_dmabuf_ring_info_import *ring_info);
> +
> +struct hyper_dmabuf_ring_info_export *hyper_dmabuf_find_exporter_ring(int domid);
> +
> +struct hyper_dmabuf_ring_info_import *hyper_dmabuf_find_importer_ring(int domid);
> +
> +int hyper_dmabuf_remove_exporter_ring(int domid);
> +
> +int hyper_dmabuf_remove_importer_ring(int domid);
> +
> +#endif // __HYPER_DMABUF_XEN_COMM_LIST_H__
> -- 
> 2.7.4
> 
> 


* Re: [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv
  2017-12-19 19:29 ` Dongwon Kim
@ 2018-02-15  1:34 ` Dongwon Kim
  -1 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2018-02-15  1:34 UTC (permalink / raw)
  To: linux-kernel; +Cc: xen-devel, Potrola, MateuszX, dri-devel

Abandoning this series as a new version was submitted for review:

"[RFC PATCH v2 0/9] hyper_dmabuf: Hyper_DMABUF driver"

On Tue, Dec 19, 2017 at 11:29:17AM -0800, Kim, Dongwon wrote:
> [...]
> 
> diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
> index d8dd546..b59b0e3 100644
> --- a/drivers/xen/Kconfig
> +++ b/drivers/xen/Kconfig
> @@ -321,4 +321,6 @@ config XEN_SYMS
>  config XEN_HAVE_VPMU
>         bool
>  
> +source "drivers/xen/hyper_dmabuf/Kconfig"
> +
>  endmenu
> diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
> index 451e833..a6e253a 100644
> --- a/drivers/xen/Makefile
> +++ b/drivers/xen/Makefile
> @@ -4,6 +4,7 @@ obj-$(CONFIG_X86)			+= fallback.o
>  obj-y	+= grant-table.o features.o balloon.o manage.o preempt.o time.o
>  obj-y	+= events/
>  obj-y	+= xenbus/
> +obj-y	+= hyper_dmabuf/
>  
>  nostackp := $(call cc-option, -fno-stack-protector)
>  CFLAGS_features.o			:= $(nostackp)
> diff --git a/drivers/xen/hyper_dmabuf/Kconfig b/drivers/xen/hyper_dmabuf/Kconfig
> new file mode 100644
> index 0000000..75e1f96
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/Kconfig
> @@ -0,0 +1,14 @@
> +menu "hyper_dmabuf options"
> +
> +config HYPER_DMABUF
> +	tristate "Enables hyper dmabuf driver"
> +	default y
> +
> +config HYPER_DMABUF_XEN
> +	bool "Configure hyper_dmabuf for XEN hypervisor"
> +	default y
> +	depends on HYPER_DMABUF
> +	help
> +	  Configuring hyper_dmabuf driver for XEN hypervisor
> +
> +endmenu
> diff --git a/drivers/xen/hyper_dmabuf/Makefile b/drivers/xen/hyper_dmabuf/Makefile
> new file mode 100644
> index 0000000..0be7445
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/Makefile
> @@ -0,0 +1,34 @@
> +TARGET_MODULE:=hyper_dmabuf
> +
> +# If we are running via the kernel build system
> +ifneq ($(KERNELRELEASE),)
> +	$(TARGET_MODULE)-objs := hyper_dmabuf_drv.o \
> +                                 hyper_dmabuf_ioctl.o \
> +                                 hyper_dmabuf_list.o \
> +				 hyper_dmabuf_imp.o \
> +				 hyper_dmabuf_msg.o \
> +				 xen/hyper_dmabuf_xen_comm.o \
> +				 xen/hyper_dmabuf_xen_comm_list.o
> +
> +obj-$(CONFIG_HYPER_DMABUF) := $(TARGET_MODULE).o
> +
> +# If we are running without the kernel build system
> +else
> +BUILDSYSTEM_DIR?=../../../
> +PWD:=$(shell pwd)
> +
> +all:
> +# run the kernel build system to build the module
> +	$(MAKE) -C $(BUILDSYSTEM_DIR) M=$(PWD) modules
> +
> +clean:
> +# run the kernel build system to clean up the current directory
> +	$(MAKE) -C $(BUILDSYSTEM_DIR) M=$(PWD) clean
> +
> +load:
> +	insmod ./$(TARGET_MODULE).ko
> +
> +unload:
> +	rmmod ./$(TARGET_MODULE).ko
> +
> +endif
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
> new file mode 100644
> index 0000000..3d9b2d6
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
> @@ -0,0 +1,2 @@
> +#define CURRENT_TARGET XEN
> +#define INTER_DOMAIN_DMABUF_SYNCHRONIZATION
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
> new file mode 100644
> index 0000000..0698327
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
> @@ -0,0 +1,54 @@
> +#include <linux/init.h>       /* module_init, module_exit */
> +#include <linux/module.h> /* version info, MODULE_LICENSE, MODULE_AUTHOR, printk() */
> +#include "hyper_dmabuf_conf.h"
> +#include "hyper_dmabuf_list.h"
> +#include "xen/hyper_dmabuf_xen_comm_list.h"
> +
> +MODULE_LICENSE("Dual BSD/GPL");
> +MODULE_AUTHOR("IOTG-PED, INTEL");
> +
> +int register_device(void);
> +int unregister_device(void);
> +
> +/*===============================================================================================*/
> +static int hyper_dmabuf_drv_init(void)
> +{
> +	int ret = 0;
> +
> +	printk( KERN_NOTICE "hyper_dmabuf_starting: Initialization started" );
> +
> +	ret = register_device();
> +	if (ret < 0) {
> +		return -EINVAL;
> +	}
> +
> +	printk( KERN_NOTICE "initializing database for imported/exported dmabufs\n");
> +
> +	ret = hyper_dmabuf_table_init();
> +	if (ret < 0) {
> +		return -EINVAL;
> +	}
> +
> +	ret = hyper_dmabuf_ring_table_init();
> +	if (ret < 0) {
> +		return -EINVAL;
> +	}
> +
> +	/* interrupt for comm should be registered here: */
> +	return ret;
> +}
> +
> +/*-----------------------------------------------------------------------------------------------*/
> +static void hyper_dmabuf_drv_exit(void)
> +{
> +	/* destroy hash tables for export/import entries and ring_infos */
> +	hyper_dmabuf_table_destroy();
> +	hyper_dmabuf_ring_table_destroy();
> +
> +	printk(KERN_NOTICE "hyper_dmabuf: exiting\n");
> +	unregister_device();
> +}
> +/*===============================================================================================*/
> +
> +module_init(hyper_dmabuf_drv_init);
> +module_exit(hyper_dmabuf_drv_exit);
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
> new file mode 100644
> index 0000000..2dad9a6
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
> @@ -0,0 +1,101 @@
> +#ifndef __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
> +#define __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
> +
> +typedef int (*hyper_dmabuf_ioctl_t)(void *data);
> +
> +struct hyper_dmabuf_ioctl_desc {
> +	unsigned int cmd;
> +	int flags;
> +	hyper_dmabuf_ioctl_t func;
> +	const char *name;
> +};
> +
> +#define HYPER_DMABUF_IOCTL_DEF(ioctl, _func, _flags) 	\
> +	[_IOC_NR(ioctl)] = {				\
> +			.cmd = ioctl,			\
> +			.func = _func,			\
> +			.flags = _flags,		\
> +			.name = #ioctl			\
> +	}
> +
> +#define IOCTL_HYPER_DMABUF_EXPORTER_RING_SETUP \
> +_IOC(_IOC_NONE, 'G', 0, sizeof(struct ioctl_hyper_dmabuf_exporter_ring_setup))
> +struct ioctl_hyper_dmabuf_exporter_ring_setup {
> +	/* IN parameters */
> +	/* Remote domain id */
> +	uint32_t remote_domain;
> +	/* OUT parameters, assigned by the driver and copied back to userspace */
> +	grant_ref_t ring_refid;
> +	uint32_t port;
> +};
> +
> +#define IOCTL_HYPER_DMABUF_IMPORTER_RING_SETUP \
> +_IOC(_IOC_NONE, 'G', 1, sizeof(struct ioctl_hyper_dmabuf_importer_ring_setup))
> +struct ioctl_hyper_dmabuf_importer_ring_setup {
> +	/* IN parameters */
> +	/* Source domain id */
> +	uint32_t source_domain;
> +	/* Ring shared page refid */
> +	grant_ref_t ring_refid;
> +	/* Port number */
> +	uint32_t port;
> +};
> +
> +#define IOCTL_HYPER_DMABUF_EXPORT_REMOTE \
> +_IOC(_IOC_NONE, 'G', 2, sizeof(struct ioctl_hyper_dmabuf_export_remote))
> +struct ioctl_hyper_dmabuf_export_remote {
> +	/* IN parameters */
> +	/* DMA buf fd to be exported */
> +	uint32_t dmabuf_fd;
> +	/* Domain id to which buffer should be exported */
> +	uint32_t remote_domain;
> +	/* exported dma buf id */
> +	uint32_t hyper_dmabuf_id;
> +	uint32_t private[4];
> +};
> +
> +#define IOCTL_HYPER_DMABUF_EXPORT_FD \
> +_IOC(_IOC_NONE, 'G', 3, sizeof(struct ioctl_hyper_dmabuf_export_fd))
> +struct ioctl_hyper_dmabuf_export_fd {
> +	/* IN parameters */
> +	/* hyper dmabuf id to be imported */
> +	uint32_t hyper_dmabuf_id;
> +	/* flags */
> +	uint32_t flags;
> +	/* OUT parameters */
> +	/* exported dma buf fd */
> +	uint32_t fd;
> +};
> +
> +#define IOCTL_HYPER_DMABUF_DESTROY \
> +_IOC(_IOC_NONE, 'G', 4, sizeof(struct ioctl_hyper_dmabuf_destroy))
> +struct ioctl_hyper_dmabuf_destroy {
> +	/* IN parameters */
> +	/* hyper dmabuf id to be destroyed */
> +	uint32_t hyper_dmabuf_id;
> +	/* OUT parameters */
> +	/* Status of request */
> +	uint32_t status;
> +};
> +
> +#define IOCTL_HYPER_DMABUF_QUERY \
> +_IOC(_IOC_NONE, 'G', 5, sizeof(struct ioctl_hyper_dmabuf_query))
> +struct ioctl_hyper_dmabuf_query {
> +	/* IN parameters */
> +	/* hyper dmabuf id to be queried */
> +	uint32_t hyper_dmabuf_id;
> +	/* item to be queried */
> +	uint32_t item;
> +	/* OUT parameters */
> +	/* Value of queried item */
> +	uint32_t info;
> +};
> +
> +#define IOCTL_HYPER_DMABUF_REMOTE_EXPORTER_RING_SETUP \
> +_IOC(_IOC_NONE, 'G', 6, sizeof(struct ioctl_hyper_dmabuf_remote_exporter_ring_setup))
> +struct ioctl_hyper_dmabuf_remote_exporter_ring_setup {
> +	/* IN parameters */
> +	/* id of the remote domain where the exporter's ring needs to be set up */
> +	uint32_t rdomain;
> +	uint32_t info;
> +};
> +
> +#endif //__LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
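
To illustrate the intended flow, a userspace exporter would drive these
ioctls roughly as below. The device node name /dev/hyper_dmabuf is an
assumption (register_device() lives in hyper_dmabuf_drv.c and is not shown
here), as is the availability of this header and grant_ref_t to userspace:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include "hyper_dmabuf_drv.h"

int export_to_domain(int dmabuf_fd, int remote_domid)
{
	struct ioctl_hyper_dmabuf_exporter_ring_setup ring = { 0 };
	struct ioctl_hyper_dmabuf_export_remote exp = { 0 };
	int fd = open("/dev/hyper_dmabuf", O_RDWR);	/* node name assumed */
	int ret = -1;

	if (fd < 0)
		return -1;

	/* 1. set up the comm channel; the driver fills in ring_refid and
	 * port, which must reach the importer domain out-of-band */
	ring.remote_domain = remote_domid;
	if (ioctl(fd, IOCTL_HYPER_DMABUF_EXPORTER_RING_SETUP, &ring) < 0)
		goto out;

	/* 2. share the buffer; the importer redeems hyper_dmabuf_id via
	 * IOCTL_HYPER_DMABUF_EXPORT_FD on its side */
	exp.dmabuf_fd = dmabuf_fd;
	exp.remote_domain = remote_domid;
	if (ioctl(fd, IOCTL_HYPER_DMABUF_EXPORT_REMOTE, &exp) < 0)
		goto out;

	printf("shared as id %u (gref %u, port %u)\n",
	       exp.hyper_dmabuf_id, ring.ring_refid, ring.port);
	ret = 0;
out:
	close(fd);
	return ret;
}
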
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
> new file mode 100644
> index 0000000..faa5c1b
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
> @@ -0,0 +1,852 @@
> +#include <linux/kernel.h>
> +#include <linux/errno.h>
> +#include <linux/slab.h>
> +#include <linux/module.h>
> +#include <linux/dma-buf.h>
> +#include <xen/grant_table.h>
> +#include <asm/xen/page.h>
> +#include "hyper_dmabuf_struct.h"
> +#include "hyper_dmabuf_imp.h"
> +#include "xen/hyper_dmabuf_xen_comm.h"
> +#include "hyper_dmabuf_msg.h"
> +
> +#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
> +
> +/* return the total number of pages referenced by an sgt,
> + * for pre-calculating the # of pages behind a given sgt
> + */
> +static int hyper_dmabuf_get_num_pgs(struct sg_table *sgt)
> +{
> +	struct scatterlist *sgl;
> +	int length, i;
> +	/* at least one page */
> +	int num_pages = 1;
> +
> +	sgl = sgt->sgl;
> +
> +	length = sgl->length - PAGE_SIZE + sgl->offset;
> +	num_pages += ((length + PAGE_SIZE - 1)/PAGE_SIZE); /* round-up */
> +
> +	for (i = 1; i < sgt->nents; i++) {
> +		sgl = sg_next(sgl);
> +		num_pages += ((sgl->length + PAGE_SIZE - 1) / PAGE_SIZE); /* round-up */
> +	}
> +
> +	return num_pages;
> +}
> +
> +/* extract pages directly from struct sg_table */
> +struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
> +{
> +	struct hyper_dmabuf_pages_info *pinfo;
> +	int i, j;
> +	int length;
> +	struct scatterlist *sgl;
> +
> +	pinfo = kmalloc(sizeof(*pinfo), GFP_KERNEL);
> +	if (pinfo == NULL)
> +		return NULL;
> +
> +	pinfo->pages = kmalloc(sizeof(struct page *)*hyper_dmabuf_get_num_pgs(sgt), GFP_KERNEL);
> +	if (pinfo->pages == NULL) {
> +		kfree(pinfo);
> +		return NULL;
> +	}
> +
> +	sgl = sgt->sgl;
> +
> +	pinfo->nents = 1;
> +	pinfo->frst_ofst = sgl->offset;
> +	pinfo->pages[0] = sg_page(sgl);
> +	length = sgl->length - PAGE_SIZE + sgl->offset;
> +	i=1;
> +
> +	while (length > 0) {
> +		pinfo->pages[i] = nth_page(sg_page(sgl), i);
> +		length -= PAGE_SIZE;
> +		pinfo->nents++;
> +		i++;
> +	}
> +
> +	for (j = 1; j < sgt->nents; j++) {
> +		sgl = sg_next(sgl);
> +		pinfo->pages[i++] = sg_page(sgl);
> +		length = sgl->length - PAGE_SIZE;
> +		pinfo->nents++;
> +
> +		while (length > 0) {
> +			pinfo->pages[i] = nth_page(sg_page(sgl), i);
> +			length -= PAGE_SIZE;
> +			pinfo->nents++;
> +			i++;
> +		}
> +	}
> +
> +	/*
> +	 * length at this point will be 0 or negative,
> +	 * so to calculate the last page's size just add it to PAGE_SIZE
> +	 */
> +	pinfo->last_len = PAGE_SIZE + length;
> +
> +	return pinfo;
> +}
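
To make the offset/length bookkeeping above concrete: a single 10000-byte
segment starting at page offset 100 spans 3 pages, with frst_ofst = 100 and
last_len = 1908 (10000 - 3996 - 4096). A standalone check of that
arithmetic, modelling only the first-segment walk:

#include <stdio.h>

#define PAGE_SZ 4096

static void model(int len, int ofst)
{
	int nents = 1;
	int length = len - PAGE_SZ + ofst;	/* bytes beyond the first page */

	while (length > 0) {
		length -= PAGE_SZ;
		nents++;
	}
	printf("len=%d ofst=%d -> nents=%d last_len=%d\n",
	       len, ofst, nents, PAGE_SZ + length);
}

int main(void)
{
	model(10000, 100);	/* expect nents=3 last_len=1908 */
	model(PAGE_SZ, 0);	/* expect nents=1 last_len=4096 */
	return 0;
}
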
> +
> +/* create sg_table with given pages and other parameters */
> +struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
> +				int frst_ofst, int last_len, int nents)
> +{
> +	struct sg_table *sgt;
> +	struct scatterlist *sgl;
> +	int i, ret;
> +
> +	sgt = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
> +	if (sgt == NULL) {
> +		return NULL;
> +	}
> +
> +	ret = sg_alloc_table(sgt, nents, GFP_KERNEL);
> +	if (ret) {
> +		kfree(sgt);
> +		return NULL;
> +	}
> +
> +	sgl = sgt->sgl;
> +
> +	sg_set_page(sgl, pages[0], PAGE_SIZE-frst_ofst, frst_ofst);
> +
> +	for (i=1; i<nents-1; i++) {
> +		sgl = sg_next(sgl);
> +		sg_set_page(sgl, pages[i], PAGE_SIZE, 0);
> +	}
> +
> +	if (nents > 1) /* more than one page */ {
> +		sgl = sg_next(sgl);
> +		sg_set_page(sgl, pages[i], last_len, 0);
> +	}
> +
> +	return sgt;
> +}
> +
> +/*
> + * Creates 2 level page directory structure for referencing shared pages.
> + * Top level page is a single page that contains up to 1024 refids that
> + * point to 2nd level pages.
> + * Each 2nd level page contains up to 1024 refids that point to shared
> + * data pages.
> + * There will always be one top level page and number of 2nd level pages
> + * depends on number of shared data pages.
> + *
> + *      Top level page                2nd level pages            Data pages
> + * +-------------------------+   ┌>+--------------------+ ┌--->+------------+
> + * |2nd level page 0 refid   |---┘ |Data page 0 refid   |-┘    |Data page 0 |
> + * |2nd level page 1 refid   |---┐ |Data page 1 refid   |-┐    +------------+
> + * |           ...           |   | |     ....           | |
> + * |2nd level page 1023 refid|-┐ | |Data page 1023 refid| └--->+------------+
> + * +-------------------------+ | | +--------------------+      |Data page 1 |
> + *                             | |                             +------------+
> + *                             | └>+--------------------+
> + *                             |   |Data page 1024 refid|
> + *                             |   |Data page 1025 refid|
> + *                             |   |       ...          |
> + *                             |   |Data page 2047 refid|
> + *                             |   +--------------------+
> + *                             |
> + *                             |        .....
> + *                             └-->+-----------------------+
> + *                                 |Data page 1047552 refid|
> + *                                 |Data page 1047553 refid|
> + *                                 |       ...             |
> + *                                 |Data page 1048575 refid|-->+------------------+
> + *                                 +-----------------------+   |Data page 1048575 |
> + *                                                             +------------------+
> + *
> + * Using such a 2-level structure it is possible to reference up to 4GB of
> + * shared data using a single refid pointing to the top level page.
> + *
> + * Returns refid of top level page.
> + */
> +grant_ref_t hyper_dmabuf_create_addressing_tables(grant_ref_t *data_refs, int nents, int rdomain,
> +						  struct hyper_dmabuf_shared_pages_info *shared_pages_info)
> +{
> +	/*
> +	 * Calculate number of pages needed for 2nd level addressing:
> +	 */
> +	int n_2nd_level_pages = DIV_ROUND_UP(nents, REFS_PER_PAGE);
> +	int i;
> +	unsigned long gref_page_start;
> +	grant_ref_t *tmp_page;
> +	grant_ref_t top_level_ref;
> +	grant_ref_t * addr_refs;
> +	addr_refs = kcalloc(n_2nd_level_pages, sizeof(grant_ref_t), GFP_KERNEL);
> +
> +	/* __get_free_pages() takes an allocation order, not a page count;
> +	 * __GFP_ZERO guarantees unused refid slots read back as 0 */
> +	gref_page_start = __get_free_pages(GFP_KERNEL | __GFP_ZERO,
> +					   get_order(n_2nd_level_pages * PAGE_SIZE));
> +	tmp_page = (grant_ref_t *)gref_page_start;
> +
> +	/* Store 2nd level pages to be freed later */
> +	shared_pages_info->addr_pages = tmp_page;
> +
> +	/* Share 2nd level addressing pages in readonly mode*/
> +	for (i=0; i< n_2nd_level_pages; i++) {
> +		addr_refs[i] = gnttab_grant_foreign_access(rdomain, virt_to_mfn((unsigned long)tmp_page+i*PAGE_SIZE ), 1);
> +	}
> +
> +	/*
> +	 * fill second level pages with data refs
> +	 */
> +	for (i = 0; i < nents; i++) {
> +		tmp_page[i] = data_refs[i];
> +	}
> +
> +
> +	/* allocate top level page */
> +	gref_page_start = __get_free_pages(GFP_KERNEL | __GFP_ZERO, 0); /* order 0: a single page suffices */
> +	tmp_page = (grant_ref_t *)gref_page_start;
> +
> +	/* Store top level page to be freed later */
> +	shared_pages_info->top_level_page = tmp_page;
> +
> +	/*
> +	 * fill top level page with reference numbers of second level pages refs.
> +	 */
> +	for (i=0; i< n_2nd_level_pages; i++) {
> +		tmp_page[i] =  addr_refs[i];
> +	}
> +
> +	/* Share top level addressing page in readonly mode*/
> +	top_level_ref = gnttab_grant_foreign_access(rdomain, virt_to_mfn((unsigned long)tmp_page), 1);
> +
> +	kfree(addr_refs);
> +
> +	return top_level_ref;
> +}
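
The 4GB figure from the comment block above follows directly from the
constants: with 4 KiB pages and 4-byte grant refs, REFS_PER_PAGE is 1024, so
one top level page covers 1024 second level pages, each of which covers 1024
data pages. A standalone check:

#include <stdio.h>

#define PAGE_SZ 4096ULL
#define REFS_PER_PAGE (PAGE_SZ / 4)	/* sizeof(grant_ref_t) == 4 */

int main(void)
{
	unsigned long long max_pages = REFS_PER_PAGE * REFS_PER_PAGE;

	printf("max %llu data pages = %llu GiB\n",
	       max_pages, (max_pages * PAGE_SZ) >> 30);
	return 0;
}
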
> +
> +/*
> + * Maps provided top level ref id and then return array of pages containing data refs.
> + */
> +struct page** hyper_dmabuf_get_data_refs(grant_ref_t top_level_ref, int domid, int nents,
> +					 struct hyper_dmabuf_shared_pages_info *shared_pages_info)
> +{
> +	struct page *top_level_page;
> +	struct page **level2_pages;
> +
> +	grant_ref_t *top_level_refs;
> +
> +	struct gnttab_map_grant_ref top_level_map_ops;
> +	struct gnttab_unmap_grant_ref top_level_unmap_ops;
> +
> +	struct gnttab_map_grant_ref *map_ops;
> +	struct gnttab_unmap_grant_ref *unmap_ops;
> +
> +	unsigned long addr;
> +	int n_level2_refs = 0;
> +	int i;
> +
> +	n_level2_refs = DIV_ROUND_UP(nents, REFS_PER_PAGE);
> +
> +	level2_pages = kcalloc(n_level2_refs, sizeof(struct page *), GFP_KERNEL);
> +
> +	/* sized for the 2nd level pages actually used */
> +	map_ops = kcalloc(n_level2_refs, sizeof(map_ops[0]), GFP_KERNEL);
> +	unmap_ops = kcalloc(n_level2_refs, sizeof(unmap_ops[0]), GFP_KERNEL);
> +
> +	/* Map top level addressing page */
> +	if (gnttab_alloc_pages(1, &top_level_page)) {
> +		printk("Cannot allocate pages\n");
> +		return NULL;
> +	}
> +
> +	addr = (unsigned long)pfn_to_kaddr(page_to_pfn(top_level_page));
> +	gnttab_set_map_op(&top_level_map_ops, addr, GNTMAP_host_map | GNTMAP_readonly, top_level_ref, domid);
> +	gnttab_set_unmap_op(&top_level_unmap_ops, addr, GNTMAP_host_map | GNTMAP_readonly, -1);
> +
> +	if (gnttab_map_refs(&top_level_map_ops, NULL, &top_level_page, 1)) {
> +		printk("\nxen: dom0: HYPERVISOR map grant ref failed");
> +		return NULL;
> +	}
> +
> +	if (top_level_map_ops.status) {
> +		printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d",
> +				top_level_map_ops.status);
> +		return NULL;
> +	} else {
> +		top_level_unmap_ops.handle = top_level_map_ops.handle;
> +	}
> +
> +	/* Parse contents of the top level addressing page to find how many second level pages there are */
> +	top_level_refs = pfn_to_kaddr(page_to_pfn(top_level_page));
> +
> +	/* Map all second level pages */
> +	if (gnttab_alloc_pages(n_level2_refs, level2_pages)) {
> +		printk("Cannot allocate pages\n");
> +		return NULL;
> +	}
> +
> +	for (i = 0; i < n_level2_refs; i++) {
> +		addr = (unsigned long)pfn_to_kaddr(page_to_pfn(level2_pages[i]));
> +		gnttab_set_map_op(&map_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, top_level_refs[i], domid);
> +		gnttab_set_unmap_op(&unmap_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, -1);
> +	}
> +
> +	if (gnttab_map_refs(map_ops, NULL, level2_pages, n_level2_refs)) {
> +		printk("\nxen: dom0: HYPERVISOR map grant ref failed");
> +		return NULL;
> +	}
> +
> +	/* Check that all 2nd level pages were mapped correctly */
> +	for (i = 0; i < n_level2_refs; i++) {
> +		if (map_ops[i].status) {
> +			printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d",
> +					map_ops[i].status);
> +			return NULL;
> +		} else {
> +			unmap_ops[i].handle = map_ops[i].handle;
> +		}
> +	}
> +
> +	/* Unmap top level page, as it won't be needed any longer */
> +	if (gnttab_unmap_refs(&top_level_unmap_ops, NULL, &top_level_page, 1)) {
> +		printk("\xen: cannot unmap top level page\n");
> +		return NULL;
> +	}
> +
> +	gnttab_free_pages(1, &top_level_page);
> +	kfree(map_ops);
> +	shared_pages_info->unmap_ops = unmap_ops;
> +
> +	return level2_pages;
> +}
> +
> +
> +/* This collects the reference numbers of all shared data pages into 2nd
> + * level pages, indexes those from a single 1st level (top level) page and
> + * returns the reference number of that top level page. */
> +grant_ref_t hyper_dmabuf_create_gref_table(struct page **pages, int rdomain, int nents,
> +					   struct hyper_dmabuf_shared_pages_info *shared_pages_info)
> +{
> +	int i = 0;
> +	grant_ref_t *data_refs;
> +	grant_ref_t top_level_ref;
> +
> +	/* allocate temp array for refs of shared data pages */
> +	data_refs = kcalloc(nents, sizeof(grant_ref_t), GFP_KERNEL);
> +
> +	/* share data pages in rw mode*/
> +	for (i=0; i<nents; i++) {
> +		data_refs[i] = gnttab_grant_foreign_access(rdomain, pfn_to_mfn(page_to_pfn(pages[i])), 0);
> +	}
> +
> +	/* create additional shared pages with 2 level addressing of data pages */
> +	top_level_ref = hyper_dmabuf_create_addressing_tables(data_refs, nents, rdomain,
> +							      shared_pages_info);
> +
> +	/* Store exported pages refid to be unshared later */
> +	shared_pages_info->data_refs = data_refs;
> +	shared_pages_info->top_level_ref = top_level_ref;
> +
> +	return top_level_ref;
> +}
> +
> +int hyper_dmabuf_cleanup_gref_table(struct hyper_dmabuf_sgt_info *sgt_info) {
> +	uint32_t i = 0;
> +	struct hyper_dmabuf_shared_pages_info *shared_pages_info = &sgt_info->shared_pages_info;
> +
> +	grant_ref_t *ref = shared_pages_info->top_level_page;
> +	int n_2nd_level_pages = DIV_ROUND_UP(sgt_info->sgt->nents, REFS_PER_PAGE);
> +
> +	if (shared_pages_info->data_refs == NULL ||
> +	    shared_pages_info->addr_pages ==  NULL ||
> +	    shared_pages_info->top_level_page == NULL ||
> +	    shared_pages_info->top_level_ref == -1) {
> +		printk("gref table for hyper_dmabuf already cleaned up\n");
> +		return 0;
> +	}
> +
> +	/* End foreign access for 2nd level addressing pages */
> +	while (i < n_2nd_level_pages && ref[i] != 0) {
> +		if (gnttab_query_foreign_access(ref[i])) {
> +			printk("refid not shared !!\n");
> +		}
> +		if (!gnttab_end_foreign_access_ref(ref[i], 1)) {
> +			printk("refid still in use!!!\n");
> +		}
> +		i++;
> +	}
> +	free_pages((unsigned long)shared_pages_info->addr_pages, get_order(n_2nd_level_pages * PAGE_SIZE));
> +
> +	/* End foreign access for top level addressing page */
> +	if (gnttab_query_foreign_access(shared_pages_info->top_level_ref)) {
> +		printk("refid not shared !!\n");
> +	}
> +	if (!gnttab_end_foreign_access_ref(shared_pages_info->top_level_ref, 1)) {
> +		printk("refid still in use!!!\n");
> +	}
> +	free_pages((unsigned long)shared_pages_info->top_level_page, 0);
> +
> +	/* End foreign access for data pages, but do not free them */
> +	for (i = 0; i < sgt_info->sgt->nents; i++) {
> +		if (gnttab_query_foreign_access(shared_pages_info->data_refs[i])) {
> +			printk("refid not shared !!\n");
> +		}
> +		gnttab_end_foreign_access_ref(shared_pages_info->data_refs[i], 0);
> +	}
> +
> +	kfree(shared_pages_info->data_refs);
> +
> +	shared_pages_info->data_refs = NULL;
> +	shared_pages_info->addr_pages = NULL;
> +	shared_pages_info->top_level_page = NULL;
> +	shared_pages_info->top_level_ref = -1;
> +
> +	return 0;
> +}
> +
> +int hyper_dmabuf_cleanup_imported_pages(struct hyper_dmabuf_imported_sgt_info *sgt_info) {
> +	struct hyper_dmabuf_shared_pages_info *shared_pages_info = &sgt_info->shared_pages_info;
> +
> +	if (shared_pages_info->unmap_ops == NULL || shared_pages_info->data_pages == NULL) {
> +		printk("Imported pages already cleaned up or buffer was not imported yet\n");
> +		return 0;
> +	}
> +
> +	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL, shared_pages_info->data_pages, sgt_info->nents) ) {
> +		printk("Cannot unmap data pages\n");
> +		return -EINVAL;
> +	}
> +
> +	gnttab_free_pages(sgt_info->nents, shared_pages_info->data_pages);
> +	kfree(shared_pages_info->data_pages);
> +	kfree(shared_pages_info->unmap_ops);
> +	shared_pages_info->unmap_ops = NULL;
> +	shared_pages_info->data_pages = NULL;
> +
> +	return 0;
> +}
> +
> +/* map and construct sg_lists from reference numbers */
> +struct sg_table* hyper_dmabuf_map_pages(grant_ref_t top_level_gref, int frst_ofst, int last_len, int nents, int sdomain,
> +					struct hyper_dmabuf_shared_pages_info *shared_pages_info)
> +{
> +	struct sg_table *st;
> +	struct page **pages;
> +	struct gnttab_map_grant_ref *ops;
> +	struct gnttab_unmap_grant_ref *unmap_ops;
> +	unsigned long addr;
> +	grant_ref_t *refs;
> +	int i;
> +	int n_level2_refs = DIV_ROUND_UP(nents, REFS_PER_PAGE);
> +
> +	/* Get data refids */
> +	struct page** refid_pages = hyper_dmabuf_get_data_refs(top_level_gref, sdomain, nents,
> +							       shared_pages_info);
> +
> +	pages = kcalloc(nents, sizeof(struct page *), GFP_KERNEL);
> +	if (pages == NULL) {
> +		return NULL;
> +	}
> +
> +	/* allocate new pages that are mapped to shared pages via grant-table */
> +	if (gnttab_alloc_pages(nents, pages)) {
> +		printk("Cannot allocate pages\n");
> +		return NULL;
> +	}
> +
> +	ops = kcalloc(nents, sizeof(struct gnttab_map_grant_ref), GFP_KERNEL);
> +	unmap_ops = kcalloc(nents, sizeof(struct gnttab_unmap_grant_ref), GFP_KERNEL);
> +
> +	for (i=0; i<nents; i++) {
> +		addr = (unsigned long)pfn_to_kaddr(page_to_pfn(pages[i]));
> +		refs = pfn_to_kaddr(page_to_pfn(refid_pages[i / REFS_PER_PAGE]));
> +		gnttab_set_map_op(&ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, refs[i % REFS_PER_PAGE], sdomain);
> +		gnttab_set_unmap_op(&unmap_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, -1);
> +	}
> +
> +	if (gnttab_map_refs(ops, NULL, pages, nents)) {
> +		printk("\nxen: dom0: HYPERVISOR map grant ref failed\n");
> +		return NULL;
> +	}
> +
> +	for (i=0; i<nents; i++) {
> +		if (ops[i].status) {
> +			printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d\n",
> +				ops[i].status);
> +			return NULL;
> +		} else {
> +			unmap_ops[i].handle = ops[i].handle;
> +		}
> +	}
> +
> +	st = hyper_dmabuf_create_sgt(pages, frst_ofst, last_len, nents);
> +
> +	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL, refid_pages, n_level2_refs) ) {
> +		printk("Cannot unmap 2nd level refs\n");
> +		return NULL;
> +	}
> +
> +	gnttab_free_pages(n_level2_refs, refid_pages);
> +	kfree(refid_pages);
> +
> +	kfree(shared_pages_info->unmap_ops);
> +	shared_pages_info->unmap_ops = unmap_ops;
> +	shared_pages_info->data_pages = pages;
> +	kfree(ops);
> +
> +	return st;
> +}
> +
> +static inline int hyper_dmabuf_sync_request_and_wait(int id, int ops)
> +{
> +	struct hyper_dmabuf_ring_rq *req;
> +	int operands[2];
> +	int ret;
> +
> +	operands[0] = id;
> +	operands[1] = ops;
> +
> +	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
> +	if (!req)
> +		return -ENOMEM;
> +
> +	hyper_dmabuf_create_request(req, HYPER_DMABUF_OPS_TO_SOURCE, &operands[0]);
> +
> +	/* send request */
> +	ret = hyper_dmabuf_send_request(id, req);
> +
> +	/* TODO: wait until it gets response.. or can we just move on? */
> +
> +	kfree(req);
> +
> +	return ret;
> +}
> +
> +static int hyper_dmabuf_ops_attach(struct dma_buf* dmabuf, struct device* dev,
> +			struct dma_buf_attachment *attach)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!attach->dmabuf->priv)
> +		return -EINVAL;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_ATTACH);
> +
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +
> +	return ret;
> +}
> +
> +static void hyper_dmabuf_ops_detach(struct dma_buf* dmabuf, struct dma_buf_attachment *attach)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!attach->dmabuf->priv)
> +		return;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_DETACH);
> +
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +}
> +
> +static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachment,
> +						enum dma_data_direction dir)
> +{
> +	struct sg_table *st;
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	struct hyper_dmabuf_pages_info *page_info;
> +	int ret;
> +
> +	if (!attachment->dmabuf->priv)
> +		return NULL;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attachment->dmabuf->priv;
> +
> +	/* extract pages from sgt */
> +	page_info = hyper_dmabuf_ext_pgs(sgt_info->sgt);
> +
> +	/* create a new sg_table with extracted pages */
> +	st = hyper_dmabuf_create_sgt(page_info->pages, page_info->frst_ofst,
> +				page_info->last_len, page_info->nents);
> +	if (st == NULL)
> +		goto err_free_sg;
> +
> +	if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir)) {
> +		goto err_free_sg;
> +	}
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_MAP);
> +
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +
> +	return st;
> +
> +err_free_sg:
> +	if (st) {
> +		sg_free_table(st);
> +		kfree(st);
> +	}
> +	return NULL;
> +}
> +
> +static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
> +						struct sg_table *sg,
> +						enum dma_data_direction dir)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!attachment->dmabuf->priv)
> +		return;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attachment->dmabuf->priv;
> +
> +	dma_unmap_sg(attachment->dev, sg->sgl, sg->nents, dir);
> +
> +	sg_free_table(sg);
> +	kfree(sg);
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_UNMAP);
> +
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +}
> +
> +static void hyper_dmabuf_ops_release(struct dma_buf *dmabuf)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_RELEASE);
> +
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +}
> +
> +static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return -EINVAL;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +
> +	return ret;
> +}
> +
> +static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return -EINVAL;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_END_CPU_ACCESS);
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +
> +	return 0;
> +}
> +
> +static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return NULL;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_KMAP_ATOMIC);
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +
> +	return NULL; /* for now NULL.. need to return the address of mapped region */
> +}
> +
> +static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum, void *vaddr)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +}
> +
> +static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return NULL;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_KMAP);
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +
> +	return NULL; /* for now NULL.. need to return the address of mapped region */
> +}
> +
> +static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum, void *vaddr)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_KUNMAP);
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +}
> +
> +static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return -EINVAL;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_MMAP);
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +
> +	return ret;
> +}
> +
> +static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return NULL;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_VMAP);
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +
> +	return NULL;
> +}
> +
> +static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
> +{
> +	struct hyper_dmabuf_imported_sgt_info *sgt_info;
> +	int ret;
> +
> +	if (!dmabuf->priv)
> +		return;
> +
> +	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
> +
> +	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
> +						HYPER_DMABUF_OPS_VUNMAP);
> +	if (ret < 0) {
> +		printk("send dmabuf sync request failed\n");
> +	}
> +}
> +
> +static const struct dma_buf_ops hyper_dmabuf_ops = {
> +		.attach = hyper_dmabuf_ops_attach,
> +		.detach = hyper_dmabuf_ops_detach,
> +		.map_dma_buf = hyper_dmabuf_ops_map,
> +		.unmap_dma_buf = hyper_dmabuf_ops_unmap,
> +		.release = hyper_dmabuf_ops_release,
> +		.begin_cpu_access = hyper_dmabuf_ops_begin_cpu_access,
> +		.end_cpu_access = hyper_dmabuf_ops_end_cpu_access,
> +		.map_atomic = hyper_dmabuf_ops_kmap_atomic,
> +		.unmap_atomic = hyper_dmabuf_ops_kunmap_atomic,
> +		.map = hyper_dmabuf_ops_kmap,
> +		.unmap = hyper_dmabuf_ops_kunmap,
> +		.mmap = hyper_dmabuf_ops_mmap,
> +		.vmap = hyper_dmabuf_ops_vmap,
> +		.vunmap = hyper_dmabuf_ops_vunmap,
> +};
> +
> +/* exporting dmabuf as fd */
> +int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags)
> +{
> +	int fd;
> +
> +	struct dma_buf* dmabuf;
> +
> +	/* call hyper_dmabuf_export_dma_buf() to create the dma_buf,
> +	 * then install an fd for it */
> +	dmabuf = hyper_dmabuf_export_dma_buf(dinfo);
> +
> +	fd = dma_buf_fd(dmabuf, flags);
> +
> +	return fd;
> +}
> +
> +struct dma_buf* hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo)
> +{
> +	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
> +
> +	exp_info.ops = &hyper_dmabuf_ops;
> +	exp_info.size = dinfo->sgt->nents * PAGE_SIZE; /* multiple of PAGE_SIZE, not considering offset */
> +	exp_info.flags = /* not sure about flag */0;
> +	exp_info.priv = dinfo;
> +
> +	return dma_buf_export(&exp_info);
> +};
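
On the importing VM, a consumer driver then treats the fd produced above
like any other dma-buf; the standard attach/map sequence is what fans out
into the hyper_dmabuf_ops_* callbacks. A hedged sketch of such a consumer
(consume_hyper_dmabuf and dev are placeholders, not part of this patch):

static int consume_hyper_dmabuf(struct device *dev, int fd)
{
	struct dma_buf *buf;
	struct dma_buf_attachment *att;
	struct sg_table *sgt;

	buf = dma_buf_get(fd);		/* fd from IOCTL_HYPER_DMABUF_EXPORT_FD */
	if (IS_ERR(buf))
		return PTR_ERR(buf);

	att = dma_buf_attach(buf, dev);	/* -> hyper_dmabuf_ops_attach() */
	if (IS_ERR(att)) {
		dma_buf_put(buf);
		return PTR_ERR(att);
	}

	sgt = dma_buf_map_attachment(att, DMA_BIDIRECTIONAL);	/* -> ops_map() */
	if (IS_ERR(sgt)) {
		dma_buf_detach(buf, att);
		dma_buf_put(buf);
		return PTR_ERR(sgt);
	}

	/* ... program hardware with sgt ... */

	dma_buf_unmap_attachment(att, sgt, DMA_BIDIRECTIONAL);
	dma_buf_detach(buf, att);
	dma_buf_put(buf);
	return 0;
}
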
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
> new file mode 100644
> index 0000000..003c158
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
> @@ -0,0 +1,31 @@
> +#ifndef __HYPER_DMABUF_IMP_H__
> +#define __HYPER_DMABUF_IMP_H__
> +
> +#include "hyper_dmabuf_struct.h"
> +
> +/* extract pages directly from struct sg_table */
> +struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt);
> +
> +/* create sg_table with given pages and other parameters */
> +struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
> +                                int frst_ofst, int last_len, int nents);
> +
> +grant_ref_t hyper_dmabuf_create_gref_table(struct page **pages, int rdomain, int nents,
> +					   struct hyper_dmabuf_shared_pages_info *shared_pages_info);
> +
> +int hyper_dmabuf_cleanup_gref_table(struct hyper_dmabuf_sgt_info *sgt_info);
> +
> +int hyper_dmabuf_cleanup_imported_pages(struct hyper_dmabuf_imported_sgt_info *sgt_info);
> +
> +/* map first level tables that contains reference numbers for actual shared pages */
> +grant_ref_t *hyper_dmabuf_map_gref_table(grant_ref_t *gref_table, int n_pages_table);
> +
> +/* map and construct sg_lists from reference numbers */
> +struct sg_table* hyper_dmabuf_map_pages(grant_ref_t gref, int frst_ofst, int last_len, int nents, int sdomain,
> +					struct hyper_dmabuf_shared_pages_info *shared_pages_info);
> +
> +int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags);
> +
> +struct dma_buf* hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo);
> +
> +#endif /* __HYPER_DMABUF_IMP_H__ */
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
> new file mode 100644
> index 0000000..5e50908
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
> @@ -0,0 +1,462 @@
> +#include <linux/kernel.h>
> +#include <linux/errno.h>
> +#include <linux/module.h>
> +#include <linux/slab.h>
> +#include <linux/miscdevice.h>
> +#include <linux/uaccess.h>
> +#include <linux/dma-buf.h>
> +#include <linux/delay.h>
> +#include "hyper_dmabuf_struct.h"
> +#include "hyper_dmabuf_imp.h"
> +#include "hyper_dmabuf_list.h"
> +#include "hyper_dmabuf_drv.h"
> +#include "hyper_dmabuf_query.h"
> +#include "xen/hyper_dmabuf_xen_comm.h"
> +#include "hyper_dmabuf_msg.h"
> +
> +struct hyper_dmabuf_private {
> +	struct device *device;
> +} hyper_dmabuf_private;
> +
> +static uint32_t hyper_dmabuf_id_gen(void) {
> +	/* TODO: add proper implementation */
> +	static uint32_t id = 0;
> +	static int32_t domid = -1;
> +	if (domid == -1) {
> +		domid = hyper_dmabuf_get_domid();
> +	}
> +	return HYPER_DMABUF_ID_IMPORTER(domid, id++);
> +}
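
HYPER_DMABUF_ID_IMPORTER() and the matching GET_SDOMAIN_ID() accessor used
throughout this file come from a header not shown in this hunk; a plausible
encoding - purely an assumption for illustration - packs the source domid
into the top byte of the 32-bit id:

/* hypothetical layout, not taken from this patch */
#define HYPER_DMABUF_ID_IMPORTER(domid, cnt) \
	((((domid) & 0xFF) << 24) | ((cnt) & 0xFFFFFF))
#define HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(hid) \
	(((hid) >> 24) & 0xFF)
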
> +
> +static int hyper_dmabuf_exporter_ring_setup(void *data)
> +{
> +	struct ioctl_hyper_dmabuf_exporter_ring_setup *ring_attr;
> +	int ret = 0;
> +
> +	if (!data) {
> +		printk("user data is NULL\n");
> +		return -EINVAL;
> +	}
> +	ring_attr = (struct ioctl_hyper_dmabuf_exporter_ring_setup *)data;
> +
> +	ret = hyper_dmabuf_exporter_ringbuf_init(ring_attr->remote_domain,
> +						&ring_attr->ring_refid,
> +						&ring_attr->port);
> +
> +	return ret;
> +}
> +
> +static int hyper_dmabuf_importer_ring_setup(void *data)
> +{
> +	struct ioctl_hyper_dmabuf_importer_ring_setup *setup_imp_ring_attr;
> +	int ret = 0;
> +
> +	if (!data) {
> +		printk("user data is NULL\n");
> +		return -EINVAL;
> +	}
> +
> +	setup_imp_ring_attr = (struct ioctl_hyper_dmabuf_importer_ring_setup *)data;
> +
> +	/* user need to provide a port number and ref # for the page used as ring buffer */
> +	ret = hyper_dmabuf_importer_ringbuf_init(setup_imp_ring_attr->source_domain,
> +						 setup_imp_ring_attr->ring_refid,
> +						 setup_imp_ring_attr->port);
> +
> +	return ret;
> +}
> +
> +static int hyper_dmabuf_export_remote(void *data)
> +{
> +	struct ioctl_hyper_dmabuf_export_remote *export_remote_attr;
> +	struct dma_buf *dma_buf;
> +	struct dma_buf_attachment *attachment;
> +	struct sg_table *sgt;
> +	struct hyper_dmabuf_pages_info *page_info;
> +	struct hyper_dmabuf_sgt_info *sgt_info;
> +	struct hyper_dmabuf_ring_rq *req;
> +	int operands[9];
> +	int ret = 0;
> +
> +	if (!data) {
> +		printk("user data is NULL\n");
> +		return -EINVAL;
> +	}
> +
> +	export_remote_attr = (struct ioctl_hyper_dmabuf_export_remote *)data;
> +
> +	dma_buf = dma_buf_get(export_remote_attr->dmabuf_fd);
> +	if (IS_ERR(dma_buf)) { /* dma_buf_get() returns ERR_PTR, never NULL */
> +		printk("Cannot get dma buf\n");
> +		return PTR_ERR(dma_buf);
> +	}
> +
> +	attachment = dma_buf_attach(dma_buf, hyper_dmabuf_private.device);
> +	if (IS_ERR(attachment)) {
> +		printk("Cannot get attachment\n");
> +		dma_buf_put(dma_buf);
> +		return PTR_ERR(attachment);
> +	}
> +
> +	/* we check if this specific attachment was already exported
> +	 * to the same domain and if yes, it returns hyper_dmabuf_id
> +	 * of pre-exported sgt */
> +	ret = hyper_dmabuf_find_id(attachment, export_remote_attr->remote_domain);
> +	if (ret != -1) {
> +		dma_buf_detach(dma_buf, attachment);
> +		dma_buf_put(dma_buf);
> +		export_remote_attr->hyper_dmabuf_id = ret;
> +		return 0;
> +	}
> +	/* clear ret; otherwise the -1 left over from the lookup would make the whole ioctl report failure */
> +	ret = 0;
> +
> +	sgt = dma_buf_map_attachment(attachment, DMA_BIDIRECTIONAL);
> +
> +	sgt_info = kmalloc(sizeof(*sgt_info), GFP_KERNEL);
> +	if (!sgt_info) {
> +		dma_buf_unmap_attachment(attachment, sgt, DMA_BIDIRECTIONAL);
> +		dma_buf_detach(dma_buf, attachment);
> +		dma_buf_put(dma_buf);
> +		return -ENOMEM;
> +	}
> +
> +	sgt_info->hyper_dmabuf_id = hyper_dmabuf_id_gen();
> +	/* TODO: We might need to consider using port number on event channel? */
> +	sgt_info->hyper_dmabuf_rdomain = export_remote_attr->remote_domain;
> +	sgt_info->sgt = sgt;
> +	sgt_info->attachment = attachment;
> +	sgt_info->dma_buf = dma_buf;
> +
> +	page_info = hyper_dmabuf_ext_pgs(sgt);
> +	if (page_info == NULL)
> +		goto fail_export;
> +
> +	/* now register it to export list */
> +	hyper_dmabuf_register_exported(sgt_info);
> +
> +	page_info->hyper_dmabuf_rdomain = sgt_info->hyper_dmabuf_rdomain;
> +	page_info->hyper_dmabuf_id = sgt_info->hyper_dmabuf_id; /* may not be needed */
> +
> +	export_remote_attr->hyper_dmabuf_id = sgt_info->hyper_dmabuf_id;
> +
> +	/* now create table of grefs for shared pages and */
> +
> +	/* now create request for importer via ring */
> +	operands[0] = page_info->hyper_dmabuf_id;
> +	operands[1] = page_info->nents;
> +	operands[2] = page_info->frst_ofst;
> +	operands[3] = page_info->last_len;
> +	operands[4] = hyper_dmabuf_create_gref_table(page_info->pages, export_remote_attr->remote_domain,
> +						page_info->nents, &sgt_info->shared_pages_info);
> +	/* driver/application specific private info, 4 x 32-bit words (16 bytes) */
> +	operands[5] = export_remote_attr->private[0];
> +	operands[6] = export_remote_attr->private[1];
> +	operands[7] = export_remote_attr->private[2];
> +	operands[8] = export_remote_attr->private[3];
> +
> +	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
> +	if (!req)
> +		goto fail_send_request; /* kfree(NULL) below is a no-op */
> +
> +	/* composing a message to the importer */
> +	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT, &operands[0]);
> +	if (hyper_dmabuf_send_request(export_remote_attr->remote_domain, req))
> +		goto fail_send_request;
> +
> +	/* free msg */
> +	kfree(req);
> +	/* free page_info; the page pointers are no longer needed once the
> +	 * gref table has been built */
> +	kfree(page_info->pages);
> +	kfree(page_info);
> +
> +	return ret;
> +
> +fail_send_request:
> +	kfree(req);
> +	kfree(page_info->pages);
> +	kfree(page_info);
> +	hyper_dmabuf_remove_exported(sgt_info->hyper_dmabuf_id);
> +
> +fail_export:
> +	dma_buf_unmap_attachment(sgt_info->attachment, sgt_info->sgt, DMA_BIDIRECTIONAL);
> +	dma_buf_detach(sgt_info->dma_buf, sgt_info->attachment);
> +	dma_buf_put(sgt_info->dma_buf);
> +	kfree(sgt_info);
> +
> +	return -EINVAL;
> +}
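
A hypothetical exporter-side sketch of driving the function above (assumes
the ring is already up, dmabuf_fd is a valid dma-buf fd from e.g. a GPU
driver, and send_id_to_importer() is an out-of-band transport of the
caller's choosing; the device node follows the miscdevice name registered
later in this file):

	int hfd = open("/dev/xen/hyper_dmabuf", O_RDWR);
	struct ioctl_hyper_dmabuf_export_remote exp = {
		.dmabuf_fd = dmabuf_fd,
		.remote_domain = 1,	/* example target domid */
	};

	if (hfd >= 0 && ioctl(hfd, IOCTL_HYPER_DMABUF_EXPORT_REMOTE, &exp) == 0)
		/* exp.hyper_dmabuf_id now identifies the buffer across VMs */
		send_id_to_importer(exp.hyper_dmabuf_id);
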
> +
> +static int hyper_dmabuf_export_fd_ioctl(void *data)
> +{
> +	struct ioctl_hyper_dmabuf_export_fd *export_fd_attr;
> +	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
> +	int ret = 0;
> +
> +	if (!data) {
> +		printk("user data is NULL\n");
> +		return -EINVAL;
> +	}
> +
> +	export_fd_attr = (struct ioctl_hyper_dmabuf_export_fd *)data;
> +
> +	/* look for dmabuf for the id */
> +	imported_sgt_info = hyper_dmabuf_find_imported(export_fd_attr->hyper_dmabuf_id);
> +	if (imported_sgt_info == NULL) /* can't find the sgt in the table */
> +		return -ENOENT;
> +
> +	printk("%s Found buffer gref %d  off %d last len %d nents %d domain %d\n", __func__,
> +		imported_sgt_info->gref, imported_sgt_info->frst_ofst,
> +		imported_sgt_info->last_len, imported_sgt_info->nents,
> +		HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported_sgt_info->hyper_dmabuf_id));
> +
> +	imported_sgt_info->sgt = hyper_dmabuf_map_pages(imported_sgt_info->gref,
> +						imported_sgt_info->frst_ofst,
> +						imported_sgt_info->last_len,
> +						imported_sgt_info->nents,
> +						HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported_sgt_info->hyper_dmabuf_id),
> +						&imported_sgt_info->shared_pages_info);
> +
> +	if (!imported_sgt_info->sgt) {
> +		return -EINVAL;
> +	}
> +
> +	ret = hyper_dmabuf_export_fd(imported_sgt_info, export_fd_attr->flags);
> +	if (ret < 0) /* failed to create an fd for the imported buffer */
> +		return ret;
> +
> +	export_fd_attr->fd = ret;
> +
> +	return 0;
> +}
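
And the matching importer-side sketch: once the id has arrived, EXPORT_FD
maps the shared pages and hands back a local dma-buf fd (id_received, hfd
and use_buffer() are hypothetical placeholders):

	struct ioctl_hyper_dmabuf_export_fd imp = {
		.hyper_dmabuf_id = id_received,
	};

	if (ioctl(hfd, IOCTL_HYPER_DMABUF_EXPORT_FD, &imp) == 0)
		/* imp.fd is a regular dma-buf fd in this domain */
		use_buffer(imp.fd);
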
> +
> +/* remove dmabuf from the database and send a request to the source
> + * domain to unmap it */
> +static int hyper_dmabuf_destroy(void *data)
> +{
> +	struct ioctl_hyper_dmabuf_destroy *destroy_attr;
> +	struct hyper_dmabuf_sgt_info *sgt_info;
> +	struct hyper_dmabuf_ring_rq *req;
> +	int ret;
> +
> +	if (!data) {
> +		printk("user data is NULL\n");
> +		return -EINVAL;
> +	}
> +
> +	destroy_attr = (struct ioctl_hyper_dmabuf_destroy *)data;
> +
> +	/* find dmabuf in export list */
> +	sgt_info = hyper_dmabuf_find_exported(destroy_attr->hyper_dmabuf_id);
> +	if (sgt_info == NULL) { /* no corresponding entry in the export list */
> +		destroy_attr->status = -EINVAL;
> +		return -ENOENT;
> +	}
> +
> +	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
> +	if (!req)
> +		return -ENOMEM;
> +
> +	hyper_dmabuf_create_request(req, HYPER_DMABUF_DESTROY, &destroy_attr->hyper_dmabuf_id);
> +
> +	/* now send a destroy request to the remote domain; the current
> +	 * implementation assumes there is only one importer */
> +	ret = hyper_dmabuf_send_request(sgt_info->hyper_dmabuf_rdomain, req);
> +	if (ret < 0) {
> +		kfree(req);
> +		return -EFAULT;
> +	}
> +
> +	/* free msg */
> +	kfree(req);
> +	destroy_attr->status = ret;
> +
> +	/* The rest of the cleanup will follow once the importer frees its
> +	 * buffer; the current implementation assumes there is only one
> +	 * importer.
> +	 */
> +
> +	return ret;
> +}
> +
> +static int hyper_dmabuf_query(void *data)
> +{
> +	struct ioctl_hyper_dmabuf_query *query_attr;
> +	struct hyper_dmabuf_sgt_info *sgt_info;
> +	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
> +	int ret = 0;
> +
> +	if (!data) {
> +		printk("user data is NULL\n");
> +		return -EINVAL;
> +	}
> +
> +	query_attr = (struct ioctl_hyper_dmabuf_query *)data;
> +
> +	sgt_info = hyper_dmabuf_find_exported(query_attr->hyper_dmabuf_id);
> +	imported_sgt_info = hyper_dmabuf_find_imported(query_attr->hyper_dmabuf_id);
> +
> +	/* if the dmabuf can't be found in either list, return */
> +	if (!sgt_info && !imported_sgt_info) {
> +		printk("can't find entry anywhere\n");
> +		return -ENOENT;
> +	}
> +
> +	/* not considering the case where a dmabuf is found on both queues
> +	 * in one domain */
> +	switch (query_attr->item)
> +	{
> +		case DMABUF_QUERY_TYPE_LIST:
> +			if (sgt_info) {
> +				query_attr->info = EXPORTED;
> +			} else {
> +				query_attr->info = IMPORTED;
> +			}
> +			break;
> +
> +		/* exporting domain of this specific dmabuf*/
> +		case DMABUF_QUERY_EXPORTER:
> +			if (sgt_info) {
> +				query_attr->info = 0xFFFFFFFF; /* myself */
> +			} else {
> +				query_attr->info = (HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported_sgt_info->hyper_dmabuf_id));
> +			}
> +			break;
> +
> +		/* importing domain of this specific dmabuf */
> +		case DMABUF_QUERY_IMPORTER:
> +			if (sgt_info) {
> +				query_attr->info = sgt_info->hyper_dmabuf_rdomain;
> +			} else {
> +#if 0 /* TODO: a global variable, current_domain does not exist yet*/
> +				query_attr->info = current_domain;
> +#endif
> +			}
> +			break;
> +
> +		/* size of dmabuf in byte */
> +		case DMABUF_QUERY_SIZE:
> +			if (sgt_info) {
> +#if 0 /* TODO: hyper_dmabuf_buf_size is not implemented yet */
> +				query_attr->info = hyper_dmabuf_buf_size(sgt_info->sgt);
> +#endif
> +			} else {
> +				query_attr->info = imported_sgt_info->nents * PAGE_SIZE -
> +						   imported_sgt_info->frst_ofst - PAGE_SIZE +
> +						   imported_sgt_info->last_len;
> +			}
> +			break;
> +	}
> +
> +	return ret;
> +}
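
As a worked example of the size computed in the DMABUF_QUERY_SIZE branch:
with nents = 3, frst_ofst = 512 and last_len = 100 the formula gives
3 * 4096 - 512 - 4096 + 100 = 7780 bytes, which matches 3584 bytes in the
first page plus one full 4096-byte middle page plus 100 bytes in the last.
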
> +
> +static int hyper_dmabuf_remote_exporter_ring_setup(void *data)
> +{
> +	struct ioctl_hyper_dmabuf_remote_exporter_ring_setup *remote_exporter_ring_setup;
> +	struct hyper_dmabuf_ring_rq *req;
> +
> +	remote_exporter_ring_setup = (struct ioctl_hyper_dmabuf_remote_exporter_ring_setup *)data;
> +
> +	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
> +	if (!req)
> +		return -ENOMEM;
> +
> +	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORTER_RING_SETUP, NULL);
> +
> +	/* requesting the remote domain to set up the exporter's ring */
> +	if (hyper_dmabuf_send_request(remote_exporter_ring_setup->rdomain, req) < 0) {
> +		kfree(req);
> +		return -EINVAL;
> +	}
> +
> +	kfree(req);
> +	return 0;
> +}
> +
> +static const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
> +	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORTER_RING_SETUP, hyper_dmabuf_exporter_ring_setup, 0),
> +	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_IMPORTER_RING_SETUP, hyper_dmabuf_importer_ring_setup, 0),
> +	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_REMOTE, hyper_dmabuf_export_remote, 0),
> +	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_FD, hyper_dmabuf_export_fd_ioctl, 0),
> +	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_DESTROY, hyper_dmabuf_destroy, 0),
> +	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_QUERY, hyper_dmabuf_query, 0),
> +	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_REMOTE_EXPORTER_RING_SETUP, hyper_dmabuf_remote_exporter_ring_setup, 0),
> +};
> +
> +static long hyper_dmabuf_ioctl(struct file *filp,
> +			unsigned int cmd, unsigned long param)
> +{
> +	const struct hyper_dmabuf_ioctl_desc *ioctl = NULL;
> +	unsigned int nr = _IOC_NR(cmd);
> +	int ret = -EINVAL;
> +	hyper_dmabuf_ioctl_t func;
> +	char *kdata;
> +
> +	if (nr >= ARRAY_SIZE(hyper_dmabuf_ioctls)) {
> +		printk("invalid ioctl\n");
> +		return -EINVAL;
> +	}
> +
> +	ioctl = &hyper_dmabuf_ioctls[nr];
> +
> +	func = ioctl->func;
> +
> +	if (unlikely(!func)) {
> +		printk("no function\n");
> +		return -EINVAL;
> +	}
> +
> +	kdata = kmalloc(_IOC_SIZE(cmd), GFP_KERNEL);
> +	if (!kdata) {
> +		printk("no memory\n");
> +		return -ENOMEM;
> +	}
> +
> +	if (copy_from_user(kdata, (void __user *)param, _IOC_SIZE(cmd)) != 0) {
> +		printk("failed to copy from user arguments\n");
> +		kfree(kdata);
> +		return -EFAULT;
> +	}
> +
> +	ret = func(kdata);
> +
> +	if (copy_to_user((void __user *)param, kdata, _IOC_SIZE(cmd)) != 0) {
> +		printk("failed to copy to user arguments\n");
> +		kfree(kdata);
> +		return -EFAULT;
> +	}
> +
> +	kfree(kdata);
> +
> +	return ret;
> +}
> +
> +struct device_info {
> +	int curr_domain;
> +};
> +
> +/*===============================================================================================*/
> +static const struct file_operations hyper_dmabuf_driver_fops = {
> +	.owner = THIS_MODULE,
> +	.unlocked_ioctl = hyper_dmabuf_ioctl,
> +};
> +
> +static struct miscdevice hyper_dmabuf_miscdev = {
> +	.minor = MISC_DYNAMIC_MINOR,
> +	.name = "xen/hyper_dmabuf",
> +	.fops = &hyper_dmabuf_driver_fops,
> +};
> +
> +static const char device_name[] = "hyper_dmabuf";
> +
> +/*===============================================================================================*/
> +int register_device(void)
> +{
> +	int result = 0;
> +
> +	result = misc_register(&hyper_dmabuf_miscdev);
> +
> +	if (result != 0) {
> +		printk(KERN_WARNING "hyper_dmabuf: driver can't be registered\n");
> +		return result;
> +	}
> +
> +	hyper_dmabuf_private.device = hyper_dmabuf_miscdev.this_device;
> +
> +	/* TODO: Check if there is a different way to initialize dma mask nicely */
> +	dma_coerce_mask_and_coherent(hyper_dmabuf_private.device, 0xFFFFFFFF);
> +
> +	/* TODO find a way to provide parameters for below function or move that to ioctl */
> +/*	err = bind_interdomain_evtchn_to_irqhandler(rdomain, evtchn,
> +				src_sink_isr, PORT_NUM, "remote_domain", &info);
> +	if (err < 0) {
> +		printk("hyper_dmabuf: can't register interrupt handlers\n");
> +		return -EFAULT;
> +	}
> +
> +	info.irq = err;
> +*/
> +	return result;
> +}
> +
> +/*-----------------------------------------------------------------------------------------------*/
> +void unregister_device(void)
> +{
> +	printk(KERN_NOTICE "hyper_dmabuf: unregister_device() is called\n");
> +	misc_deregister(&hyper_dmabuf_miscdev);
> +}
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
> new file mode 100644
> index 0000000..77a7e65
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
> @@ -0,0 +1,119 @@
> +#include <linux/kernel.h>
> +#include <linux/errno.h>
> +#include <linux/module.h>
> +#include <linux/slab.h>
> +#include <linux/cdev.h>
> +#include <asm/uaccess.h>
> +#include <linux/hashtable.h>
> +#include <linux/dma-buf.h>
> +#include "hyper_dmabuf_list.h"
> +
> +DECLARE_HASHTABLE(hyper_dmabuf_hash_imported, MAX_ENTRY_IMPORTED);
> +DECLARE_HASHTABLE(hyper_dmabuf_hash_exported, MAX_ENTRY_EXPORTED);
> +
> +int hyper_dmabuf_table_init(void)
> +{
> +	hash_init(hyper_dmabuf_hash_imported);
> +	hash_init(hyper_dmabuf_hash_exported);
> +	return 0;
> +}
> +
> +int hyper_dmabuf_table_destroy(void)
> +{
> +	/* TODO: cleanup hyper_dmabuf_hash_imported and hyper_dmabuf_hash_exported */
> +	return 0;
> +}
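
A minimal sketch of what the TODO above could look like, assuming entries
can simply be unlinked and freed at teardown (hash_for_each_safe() allows
deletion while iterating; the imported table needs the same loop):

	int hyper_dmabuf_table_destroy(void)
	{
		struct hyper_dmabuf_info_entry_exported *entry;
		struct hlist_node *tmp;
		int bkt;

		hash_for_each_safe(hyper_dmabuf_hash_exported, bkt, tmp,
				   entry, node) {
			hash_del(&entry->node);
			kfree(entry);
		}

		/* ... repeat for hyper_dmabuf_hash_imported ... */
		return 0;
	}
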
> +
> +int hyper_dmabuf_register_exported(struct hyper_dmabuf_sgt_info *info)
> +{
> +	struct hyper_dmabuf_info_entry_exported *info_entry;
> +
> +	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
> +	if (!info_entry)
> +		return -ENOMEM;
> +
> +	info_entry->info = info;
> +
> +	hash_add(hyper_dmabuf_hash_exported, &info_entry->node,
> +		info_entry->info->hyper_dmabuf_id);
> +
> +	return 0;
> +}
> +
> +int hyper_dmabuf_register_imported(struct hyper_dmabuf_imported_sgt_info* info)
> +{
> +	struct hyper_dmabuf_info_entry_imported *info_entry;
> +
> +	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
> +	if (!info_entry)
> +		return -ENOMEM;
> +
> +	info_entry->info = info;
> +
> +	hash_add(hyper_dmabuf_hash_imported, &info_entry->node,
> +		info_entry->info->hyper_dmabuf_id);
> +
> +	return 0;
> +}
> +
> +struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(int id)
> +{
> +	struct hyper_dmabuf_info_entry_exported *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
> +		if (info_entry->info->hyper_dmabuf_id == id)
> +			return info_entry->info;
> +
> +	return NULL;
> +}
> +
> +/* search for a pre-exported sgt and return its id if it exists */
> +int hyper_dmabuf_find_id(struct dma_buf_attachment *attach, int domid)
> +{
> +	struct hyper_dmabuf_info_entry_exported *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
> +		if (info_entry->info->attachment == attach &&
> +		    info_entry->info->hyper_dmabuf_rdomain == domid)
> +			return info_entry->info->hyper_dmabuf_id;
> +
> +	return -1;
> +}
> +
> +struct hyper_dmabuf_imported_sgt_info *hyper_dmabuf_find_imported(int id)
> +{
> +	struct hyper_dmabuf_info_entry_imported *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
> +		if (info_entry->info->hyper_dmabuf_id == id)
> +			return info_entry->info;
> +
> +	return NULL;
> +}
> +
> +int hyper_dmabuf_remove_exported(int id)
> +{
> +	struct hyper_dmabuf_info_entry_exported *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
> +		if (info_entry->info->hyper_dmabuf_id == id) {
> +			hash_del(&info_entry->node);
> +			return 0;
> +		}
> +
> +	return -1;
> +}
> +
> +int hyper_dmabuf_remove_imported(int id)
> +{
> +	struct hyper_dmabuf_info_entry_imported *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
> +		if (info_entry->info->hyper_dmabuf_id == id) {
> +			hash_del(&info_entry->node);
> +			return 0;
> +		}
> +
> +	return -1;
> +}
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
> new file mode 100644
> index 0000000..869cd9a
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
> @@ -0,0 +1,40 @@
> +#ifndef __HYPER_DMABUF_LIST_H__
> +#define __HYPER_DMABUF_LIST_H__
> +
> +#include "hyper_dmabuf_struct.h"
> +
> +/* number of bits to be used for exported dmabufs hash table */
> +#define MAX_ENTRY_EXPORTED 7
> +/* number of bits to be used for imported dmabufs hash table */
> +#define MAX_ENTRY_IMPORTED 7
> +
> +struct hyper_dmabuf_info_entry_exported {
> +	struct hyper_dmabuf_sgt_info *info;
> +	struct hlist_node node;
> +};
> +
> +struct hyper_dmabuf_info_entry_imported {
> +	struct hyper_dmabuf_imported_sgt_info *info;
> +	struct hlist_node node;
> +};
> +
> +int hyper_dmabuf_table_init(void);
> +
> +int hyper_dmabuf_table_destroy(void);
> +
> +int hyper_dmabuf_register_exported(struct hyper_dmabuf_sgt_info *info);
> +
> +/* search for a pre-exported sgt and return its id if it exists */
> +int hyper_dmabuf_find_id(struct dma_buf_attachment *attach, int domid);
> +
> +int hyper_dmabuf_register_imported(struct hyper_dmabuf_imported_sgt_info* info);
> +
> +struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(int id);
> +
> +struct hyper_dmabuf_imported_sgt_info *hyper_dmabuf_find_imported(int id);
> +
> +int hyper_dmabuf_remove_exported(int id);
> +
> +int hyper_dmabuf_remove_imported(int id);
> +
> +#endif // __HYPER_DMABUF_LIST_H__
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
> new file mode 100644
> index 0000000..3237e50
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
> @@ -0,0 +1,212 @@
> +#include <linux/kernel.h>
> +#include <linux/errno.h>
> +#include <linux/module.h>
> +#include <linux/slab.h>
> +#include <linux/dma-buf.h>
> +#include "hyper_dmabuf_imp.h"
> +//#include "hyper_dmabuf_remote_sync.h"
> +#include "xen/hyper_dmabuf_xen_comm.h"
> +#include "hyper_dmabuf_msg.h"
> +#include "hyper_dmabuf_list.h"
> +
> +void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
> +				        enum hyper_dmabuf_command command, int *operands)
> +{
> +	int i;
> +
> +	request->request_id = hyper_dmabuf_next_req_id_export();
> +	request->status = HYPER_DMABUF_REQ_NOT_RESPONDED;
> +	request->command = command;
> +
> +	switch (command) {
> +	/* as exporter, commands to importer */
> +	case HYPER_DMABUF_EXPORT:
> +		/* exporting pages for dmabuf */
> +		/* command : HYPER_DMABUF_EXPORT,
> +		 * operands0 : hyper_dmabuf_id
> +		 * operands1 : number of pages to be shared
> +		 * operands2 : offset of data in the first page
> +		 * operands3 : length of data in the last page
> +		 * operands4 : top-level reference number for shared pages
> +		 * operands5~8 : Driver-specific private data (e.g. graphic buffer's meta info)
> +		 */
> +		for (i = 0; i < 8; i++)
> +			request->operands[i] = operands[i];
> +		break;
> +
> +	case HYPER_DMABUF_DESTROY:
> +		/* destroy sg_list for hyper_dmabuf_id on remote side */
> +		/* command : DMABUF_DESTROY,
> +		 * operands0 : hyper_dmabuf_id
> +		 */
> +		request->operands[0] = operands[0];
> +		break;
> +
> +	case HYPER_DMABUF_OPS_TO_REMOTE:
> +		/* notifying dmabuf map/unmap to importer (probably not needed) */
> +		/* for dmabuf synchronization */
> +		break;
> +
> +	/* as importer, command to exporter */
> +	case HYPER_DMABUF_OPS_TO_SOURCE:
> +		/* notifying dmabuf map/unmap to the exporter; a map makes the
> +		 * driver do shadow mapping (and an unmap undoes it) for
> +		 * synchronization with the original exporter (e.g. i915) */
> +		/* command : DMABUF_OPS_TO_SOURCE.
> +		 * operands0 : hyper_dmabuf_id
> +		 * operands1 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
> +		 */
> +		for (i = 0; i < 2; i++)
> +			request->operands[i] = operands[i];
> +		break;
> +
> +	/* requesting the other side to setup another ring channel for reverse direction */
> +	case HYPER_DMABUF_EXPORTER_RING_SETUP:
> +		/* command : HYPER_DMABUF_EXPORTER_RING_SETUP */
> +		/* no operands needed */
> +		break;
> +
> +	default:
> +		/* no command found */
> +		return;
> +	}
> +}
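
For example, the destroy path in hyper_dmabuf_ioctl.c composes and sends a
one-operand request along these lines (a stack-allocated request works too,
since hyper_dmabuf_send_request() copies it into the ring):

	struct hyper_dmabuf_ring_rq req;
	int id = hyper_dmabuf_id;	/* buffer to drop on the remote side */

	hyper_dmabuf_create_request(&req, HYPER_DMABUF_DESTROY, &id);
	hyper_dmabuf_send_request(remote_domid, &req);
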
> +
> +int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
> +{
> +	uint32_t i;
> +	int ret;
> +	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
> +	struct hyper_dmabuf_sgt_info *sgt_info;
> +
> +	/* make sure req is not NULL (may not be needed) */
> +	if (!req) {
> +		return -EINVAL;
> +	}
> +
> +	req->status = HYPER_DMABUF_REQ_PROCESSED;
> +
> +	switch (req->command) {
> +	case HYPER_DMABUF_EXPORT:
> +		/* exporting pages for dmabuf */
> +		/* command : HYPER_DMABUF_EXPORT,
> +		 * operands0 : hyper_dmabuf_id
> +		 * operands1 : number of pages to be shared
> +		 * operands2 : offset of data in the first page
> +		 * operands3 : length of data in the last page
> +		 * operands4 : top-level reference number for shared pages
> +		 * operands5~8 : Driver-specific private data (e.g. graphic buffer's meta info)
> +		 */
> +		imported_sgt_info = kcalloc(1, sizeof(*imported_sgt_info), GFP_KERNEL);
> +		if (!imported_sgt_info)
> +			return -ENOMEM;
> +		imported_sgt_info->hyper_dmabuf_id = req->operands[0];
> +		imported_sgt_info->frst_ofst = req->operands[2];
> +		imported_sgt_info->last_len = req->operands[3];
> +		imported_sgt_info->nents = req->operands[1];
> +		imported_sgt_info->gref = req->operands[4];
> +
> +		printk("DMABUF was exported\n");
> +		printk("\thyper_dmabuf_id %d\n", req->operands[0]);
> +		printk("\tnents %d\n", req->operands[1]);
> +		printk("\tfirst offset %d\n", req->operands[2]);
> +		printk("\tlast len %d\n", req->operands[3]);
> +		printk("\tgrefid %d\n", req->operands[4]);
> +
> +		for (i = 0; i < 4; i++)
> +			imported_sgt_info->private[i] = req->operands[5+i];
> +
> +		hyper_dmabuf_register_imported(imported_sgt_info);
> +		break;
> +
> +	case HYPER_DMABUF_DESTROY:
> +		/* destroy sg_list for hyper_dmabuf_id on remote side */
> +		/* command : DMABUF_DESTROY,
> +		 * operands0 : hyper_dmabuf_id
> +		 */
> +
> +		imported_sgt_info =
> +			hyper_dmabuf_find_imported(req->operands[0]);
> +
> +		if (imported_sgt_info) {
> +			hyper_dmabuf_cleanup_imported_pages(imported_sgt_info);
> +
> +			hyper_dmabuf_remove_imported(req->operands[0]);
> +
> +			/* TODO: cleanup sgt on importer side etc */
> +		}
> +
> +		/* notify the exporter that the buffer has been freed so it
> +		 * can clean it up */
> +		req->status = HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP;
> +		req->command = HYPER_DMABUF_DESTROY_FINISH;
> +
> +#if 0 /* function is not implemented yet */
> +
> +		ret = hyper_dmabuf_destroy_sgt(req->hyper_dmabuf_id);
> +#endif
> +		break;
> +
> +	case HYPER_DMABUF_DESTROY_FINISH:
> +		/* destroy sg_list for hyper_dmabuf_id on local side */
> +		/* command : DMABUF_DESTROY_FINISH,
> +		 * operands0 : hyper_dmabuf_id
> +		 */
> +
> +		/* TODO: this should be done on a workqueue, once acks have been
> +		 * received from all importers that the buffer is no longer used */
> +		sgt_info =
> +			hyper_dmabuf_find_exported(req->operands[0]);
> +
> +		if (sgt_info) {
> +			hyper_dmabuf_cleanup_gref_table(sgt_info);
> +
> +			/* unmap dmabuf */
> +			dma_buf_unmap_attachment(sgt_info->attachment, sgt_info->sgt, DMA_BIDIRECTIONAL);
> +			dma_buf_detach(sgt_info->dma_buf, sgt_info->attachment);
> +			dma_buf_put(sgt_info->dma_buf);
> +
> +			/* TODO: Rest of cleanup, sgt cleanup etc */
> +		}
> +
> +		break;
> +
> +	case HYPER_DMABUF_OPS_TO_REMOTE:
> +		/* notifying dmabuf map/unmap to importer (probably not needed) */
> +		/* for dmabuf synchronization */
> +		break;
> +
> +	/* as importer, command to exporter */
> +	case HYPER_DMABUF_OPS_TO_SOURCE:
> +		/* notifying dmabuf map/unmap to the exporter; a map makes the
> +		 * driver do shadow mapping (and an unmap undoes it) for
> +		 * synchronization with the original exporter (e.g. i915) */
> +		/* command : DMABUF_OPS_TO_SOURCE.
> +		 * operands0 : hyper_dmabuf_id
> +		 * operands1 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
> +		 */
> +		break;
> +
> +	/* requesting the other side to setup another ring channel for reverse direction */
> +	case HYPER_DMABUF_EXPORTER_RING_SETUP:
> +		/* command: HYPER_DMABUF_EXPORTER_RING_SETUP
> +		 * no operands needed */
> +		ret = hyper_dmabuf_exporter_ringbuf_init(domid, &req->operands[0], &req->operands[1]);
> +		if (ret < 0) {
> +			req->status = HYPER_DMABUF_REQ_ERROR;
> +			return -EINVAL;
> +		}
> +
> +		req->status = HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP;
> +		req->command = HYPER_DMABUF_IMPORTER_RING_SETUP;
> +		break;
> +
> +	case HYPER_DMABUF_IMPORTER_RING_SETUP:
> +		/* command: HYPER_DMABUF_IMPORTER_RING_SETUP */
> +		/* no operands needed */
> +		ret = hyper_dmabuf_importer_ringbuf_init(domid, req->operands[0], req->operands[1]);
> +		if (ret < 0)
> +			return -EINVAL;
> +
> +		break;
> +
> +	default:
> +		/* no matching command; nothing to do, just return an error */
> +		return -EINVAL;
> +	}
> +
> +	return req->command;
> +}
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
> new file mode 100644
> index 0000000..44bfb70
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
> @@ -0,0 +1,45 @@
> +#ifndef __HYPER_DMABUF_MSG_H__
> +#define __HYPER_DMABUF_MSG_H__
> +
> +enum hyper_dmabuf_command {
> +	HYPER_DMABUF_EXPORT = 0x10,
> +	HYPER_DMABUF_DESTROY,
> +	HYPER_DMABUF_DESTROY_FINISH,
> +	HYPER_DMABUF_OPS_TO_REMOTE,
> +	HYPER_DMABUF_OPS_TO_SOURCE,
> +	HYPER_DMABUF_EXPORTER_RING_SETUP, /* requesting remote domain to set up exporter's ring */
> +	HYPER_DMABUF_IMPORTER_RING_SETUP, /* requesting remote domain to set up importer's ring */
> +};
> +
> +enum hyper_dmabuf_ops {
> +	HYPER_DMABUF_OPS_ATTACH = 0x1000,
> +	HYPER_DMABUF_OPS_DETACH,
> +	HYPER_DMABUF_OPS_MAP,
> +	HYPER_DMABUF_OPS_UNMAP,
> +	HYPER_DMABUF_OPS_RELEASE,
> +	HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS,
> +	HYPER_DMABUF_OPS_END_CPU_ACCESS,
> +	HYPER_DMABUF_OPS_KMAP_ATOMIC,
> +	HYPER_DMABUF_OPS_KUNMAP_ATOMIC,
> +	HYPER_DMABUF_OPS_KMAP,
> +	HYPER_DMABUF_OPS_KUNMAP,
> +	HYPER_DMABUF_OPS_MMAP,
> +	HYPER_DMABUF_OPS_VMAP,
> +	HYPER_DMABUF_OPS_VUNMAP,
> +};
> +
> +enum hyper_dmabuf_req_feedback {
> +	HYPER_DMABUF_REQ_PROCESSED = 0x100,
> +	HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP,
> +	HYPER_DMABUF_REQ_ERROR,
> +	HYPER_DMABUF_REQ_NOT_RESPONDED
> +};
> +
> +/* create a request packet with given command and operands */
> +void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
> +                                        enum hyper_dmabuf_command command, int *operands);
> +
> +/* parse incoming request packet (or response) and take appropriate actions for those */
> +int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req);
> +
> +#endif // __HYPER_DMABUF_MSG_H__
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
> new file mode 100644
> index 0000000..a577167
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
> @@ -0,0 +1,16 @@
> +#ifndef __HYPER_DMABUF_QUERY_H__
> +#define __HYPER_DMABUF_QUERY_H__
> +
> +enum hyper_dmabuf_query {
> +	DMABUF_QUERY_TYPE_LIST = 0x10,
> +	DMABUF_QUERY_EXPORTER,
> +	DMABUF_QUERY_IMPORTER,
> +	DMABUF_QUERY_SIZE
> +};
> +
> +enum hyper_dmabuf_status {
> +	EXPORTED = 0x01,
> +	IMPORTED
> +};
> +
> +#endif /* __HYPER_DMABUF_QUERY_H__ */
> diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
> new file mode 100644
> index 0000000..c8a2f4d
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
> @@ -0,0 +1,70 @@
> +#ifndef __HYPER_DMABUF_STRUCT_H__
> +#define __HYPER_DMABUF_STRUCT_H__
> +
> +#include <xen/interface/grant_table.h>
> +
> +/* The importer combines the source domain id with the given
> + * hyper_dmabuf_id to make it unique in case there are multiple exporters */
> +
> +#define HYPER_DMABUF_ID_IMPORTER(sdomain, id) \
> +	((((sdomain) & 0xFF) << 24) | ((id) & 0xFFFFFF))
> +
> +#define HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(id) \
> +	(((id) >> 24) & 0xFF)
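
As a worked example of the packing: HYPER_DMABUF_ID_IMPORTER(3, 0x42)
yields ((3 & 0xFF) << 24) | (0x42 & 0xFFFFFF) = 0x03000042, and
HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(0x03000042) recovers 3. This caps
domain ids at 8 bits and per-exporter buffer ids at 24 bits.
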
> +
> +/* each grant_ref_t is 4 bytes, so one 4KB page holds 1024 of them; with
> + * up to 4 pages of reference arrays we can address 4096 shared pages,
> + * i.e. 4KB * 4096 = 16MB of buffer (needs to be increased for large
> + * buffer use-cases such as a 4K frame buffer) */
> +#define MAX_ALLOWED_NUM_PAGES_FOR_GREF_NUM_ARRAYS 4
> +
> +struct hyper_dmabuf_shared_pages_info {
> +	grant_ref_t *data_refs;	/* table with shared buffer pages refid */
> +	grant_ref_t *addr_pages; /* pages of 2nd level addressing */
> +	grant_ref_t *top_level_page; /* page of top level addressing, it contains refids of 2nd level pages */
> +	grant_ref_t top_level_ref; /* top level refid */
> +	struct gnttab_unmap_grant_ref* unmap_ops; /* unmap ops for mapped pages */
> +	struct page **data_pages; /* data pages to be unmapped */
> +};
> +
> +/* Exporter builds pages_info before sharing pages */
> +struct hyper_dmabuf_pages_info {
> +	int hyper_dmabuf_id; /* unique id to reference dmabuf in source domain */
> +	int hyper_dmabuf_rdomain; /* currently considering just one remote domain accessing it */
> +	int frst_ofst; /* offset of data in the first page */
> +	int last_len; /* length of data in the last page */
> +	int nents; /* # of pages */
> +	struct page **pages; /* pages that contain reference numbers of shared pages */
> +};
> +
> +/* Both importer and exporter use this structure to point to sg lists
> + *
> + * The exporter stores references to the sgt in a hash table and keeps
> + * them for synchronization and tracking purposes.
> + *
> + * The importer uses this structure when exporting the buffer to other
> + * drivers in the same domain */
> +struct hyper_dmabuf_sgt_info {
> +	int hyper_dmabuf_id; /* unique id to reference dmabuf in remote domain */
> +	int hyper_dmabuf_rdomain; /* domain importing this sgt */
> +	struct sg_table *sgt; /* pointer to sgt */
> +	struct dma_buf *dma_buf; /* needed to store this for freeing it later */
> +	struct dma_buf_attachment *attachment; /* needed to store this for freeing this later */
> +	struct hyper_dmabuf_shared_pages_info shared_pages_info;
> +	int private[4]; /* device specific info (e.g. image's meta info?) */
> +};
> +
> +/* The importer stores references to the shared pages (before mapping)
> + * in this table and maps them into its own address space once userspace
> + * asks for a reference to the buffer */
> +struct hyper_dmabuf_imported_sgt_info {
> +	int hyper_dmabuf_id; /* unique id to reference dmabuf (HYPER_DMABUF_ID_IMPORTER(source domain id, exporter's hyper_dmabuf_id)) */
> +	int frst_ofst;	/* start offset in shared page #1 */
> +	int last_len;	/* length of data in the last shared page */
> +	int nents;	/* number of pages to be shared */
> +	grant_ref_t gref; /* reference number of top level addressing page of shared pages */
> +	struct sg_table *sgt; /* sgt pointer after importing buffer */
> +	struct hyper_dmabuf_shared_pages_info shared_pages_info;
> +	int private[4]; /* device specific info (e.g. image's meta info?) */
> +};
> +
> +#endif /* __HYPER_DMABUF_STRUCT_H__ */
> diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
> new file mode 100644
> index 0000000..22f2ef0
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
> @@ -0,0 +1,328 @@
> +#include <linux/kernel.h>
> +#include <linux/errno.h>
> +#include <linux/module.h>
> +#include <linux/slab.h>
> +#include <linux/workqueue.h>
> +#include <xen/grant_table.h>
> +#include <xen/events.h>
> +#include <xen/xenbus.h>
> +#include <asm/xen/page.h>
> +#include "hyper_dmabuf_xen_comm.h"
> +#include "hyper_dmabuf_xen_comm_list.h"
> +#include "../hyper_dmabuf_imp.h"
> +#include "../hyper_dmabuf_list.h"
> +#include "../hyper_dmabuf_msg.h"
> +
> +static int export_req_id = 0;
> +static int import_req_id = 0;
> +
> +int32_t hyper_dmabuf_get_domid(void)
> +{
> +	struct xenbus_transaction xbt;
> +	int32_t domid;
> +
> +	xenbus_transaction_start(&xbt);
> +
> +	if (!xenbus_scanf(xbt, "domid", "", "%d", &domid))
> +		domid = -1;
> +
> +	xenbus_transaction_end(xbt, 0);
> +
> +	return domid;
> +}
> +
> +int hyper_dmabuf_next_req_id_export(void)
> +{
> +	export_req_id++;
> +	return export_req_id;
> +}
> +
> +int hyper_dmabuf_next_req_id_import(void)
> +{
> +	import_req_id++;
> +	return import_req_id;
> +}
> +
> +/* For now we cache the latest rings as global variables. TODO: keep them in a list */
> +static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *dev_id);
> +static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *dev_id);
> +
> +/* the exporter needs to generate info for page sharing */
> +int hyper_dmabuf_exporter_ringbuf_init(int rdomain, grant_ref_t *refid, int *port)
> +{
> +	struct hyper_dmabuf_ring_info_export *ring_info;
> +	struct hyper_dmabuf_sring *sring;
> +	struct evtchn_alloc_unbound alloc_unbound;
> +	struct evtchn_close close;
> +
> +	void *shared_ring;
> +	int ret;
> +
> +	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
> +	if (!ring_info)
> +		return -ENOMEM;
> +
> +	/* from exporter to importer */
> +	shared_ring = (void *)__get_free_pages(GFP_KERNEL, 1);
> +	if (!shared_ring) {
> +		kfree(ring_info);
> +		return -ENOMEM;
> +	}
> +
> +	sring = (struct hyper_dmabuf_sring *) shared_ring;
> +
> +	SHARED_RING_INIT(sring);
> +
> +	FRONT_RING_INIT(&(ring_info->ring_front), sring, PAGE_SIZE);
> +
> +	ring_info->gref_ring = gnttab_grant_foreign_access(rdomain,
> +							virt_to_mfn(shared_ring), 0);
> +	if (ring_info->gref_ring < 0) {
> +		/* failed to get a gref */
> +		free_pages((unsigned long)shared_ring, 1);
> +		kfree(ring_info);
> +		return -EINVAL;
> +	}
> +
> +	alloc_unbound.dom = DOMID_SELF;
> +	alloc_unbound.remote_dom = rdomain;
> +	ret = HYPERVISOR_event_channel_op(EVTCHNOP_alloc_unbound, &alloc_unbound);
> +	if (ret != 0) {
> +		printk("Cannot allocate event channel\n");
> +		return -EINVAL;
> +	}
> +
> +	/* setting up interrupt */
> +	ret = bind_evtchn_to_irqhandler(alloc_unbound.port,
> +					hyper_dmabuf_front_ring_isr, 0,
> +					NULL, (void*) ring_info);
> +
> +	if (ret < 0) {
> +		printk("Failed to setup event channel\n");
> +		close.port = alloc_unbound.port;
> +		HYPERVISOR_event_channel_op(EVTCHNOP_close, &close);
> +		gnttab_end_foreign_access(ring_info->gref_ring, 0, virt_to_mfn(shared_ring));
> +		return -EINVAL;
> +	}
> +
> +	ring_info->rdomain = rdomain;
> +	ring_info->irq = ret;
> +	ring_info->port = alloc_unbound.port;
> +
> +	/* store refid and port numbers for userspace's use */
> +	*refid = ring_info->gref_ring;
> +	*port = ring_info->port;
> +
> +	printk("%s: allocated eventchannel gref %d  port: %d  irq: %d\n", __func__,
> +		ring_info->gref_ring,
> +		ring_info->port,
> +		ring_info->irq);
> +
> +	/* register ring info */
> +	ret = hyper_dmabuf_register_exporter_ring(ring_info);
> +
> +	return ret;
> +}
> +
> +/* importer needs to know about shared page and port numbers for ring buffer and event channel */
> +int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port)
> +{
> +	struct hyper_dmabuf_ring_info_import *ring_info;
> +	struct hyper_dmabuf_sring *sring;
> +
> +	struct page *shared_ring;
> +
> +	struct gnttab_map_grant_ref *ops;
> +	struct gnttab_unmap_grant_ref *unmap_ops;
> +	int ret;
> +
> +	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
> +	if (!ring_info)
> +		return -ENOMEM;
> +
> +	ring_info->sdomain = sdomain;
> +	ring_info->evtchn = port;
> +
> +	ops = kmalloc(sizeof(*ops), GFP_KERNEL);
> +	unmap_ops = kmalloc(sizeof(*unmap_ops), GFP_KERNEL);
> +
> +	if (gnttab_alloc_pages(1, &shared_ring)) {
> +		kfree(ring_info);
> +		kfree(ops);
> +		kfree(unmap_ops);
> +		return -EINVAL;
> +	}
> +
> +	gnttab_set_map_op(&ops[0], (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
> +			GNTMAP_host_map, gref, sdomain);
> +
> +	ret = gnttab_map_refs(ops, NULL, &shared_ring, 1);
> +	if (ret < 0) {
> +		printk("Cannot map ring\n");
> +		return -EINVAL;
> +	}
> +
> +	if (ops[0].status) {
> +		printk("Ring mapping failed\n");
> +		return -EINVAL;
> +	}
> +
> +	sring = (struct hyper_dmabuf_sring*) pfn_to_kaddr(page_to_pfn(shared_ring));
> +
> +	BACK_RING_INIT(&ring_info->ring_back, sring, PAGE_SIZE);
> +
> +	ret = bind_interdomain_evtchn_to_irqhandler(sdomain, port, hyper_dmabuf_back_ring_isr, 0,
> +						    NULL, (void*)ring_info);
> +	if (ret < 0) {
> +		return -EINVAL;
> +	}
> +
> +	ring_info->irq = ret;
> +
> +	printk("%s: bound to eventchannel port: %d  irq: %d\n", __func__,
> +		port,
> +		ring_info->irq);
> +
> +	ret = hyper_dmabuf_register_importer_ring(ring_info);
> +
> +	return ret;
> +}
> +
> +int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req)
> +{
> +	struct hyper_dmabuf_front_ring *ring;
> +	struct hyper_dmabuf_ring_rq *new_req;
> +	struct hyper_dmabuf_ring_info_export *ring_info;
> +	int notify;
> +
> +	/* find a ring info for the channel */
> +	ring_info = hyper_dmabuf_find_exporter_ring(domain);
> +	if (!ring_info) {
> +		printk("Can't find ring info for the channel\n");
> +		return -EINVAL;
> +	}
> +
> +	ring = &ring_info->ring_front;
> +
> +	if (RING_FULL(ring))
> +		return -EBUSY;
> +
> +	new_req = RING_GET_REQUEST(ring, ring->req_prod_pvt);
> +	if (!new_req) {
> +		printk("NULL REQUEST\n");
> +		return -EIO;
> +	}
> +
> +	memcpy(new_req, req, sizeof(*new_req));
> +
> +	ring->req_prod_pvt++;
> +
> +	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(ring, notify);
> +	if (notify) {
> +		notify_remote_via_irq(ring_info->irq);
> +	}
> +
> +	return 0;
> +}
> +
> +/* called by interrupt (WORKQUEUE) */
> +int hyper_dmabuf_send_response(struct hyper_dmabuf_ring_rp* response, int domain)
> +{
> +	/* as an importer and as an exporter */
> +	return 0;
> +}
> +
> +/* ISR for request from exporter (as an importer) */
> +static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *dev_id)
> +{
> +	RING_IDX rc, rp;
> +	struct hyper_dmabuf_ring_rq request;
> +	struct hyper_dmabuf_ring_rp response;
> +	int notify, more_to_do;
> +	int ret;
> +//	struct hyper_dmabuf_work *work;
> +
> +	struct hyper_dmabuf_ring_info_import *ring_info = (struct hyper_dmabuf_ring_info_import *)dev_id;
> +	struct hyper_dmabuf_back_ring *ring;
> +
> +	ring = &ring_info->ring_back;
> +
> +	do {
> +		more_to_do = 0;
> +		rc = ring->req_cons;
> +		rp = ring->sring->req_prod;
> +
> +		while (rc != rp) {
> +			if (RING_REQUEST_CONS_OVERFLOW(ring, rc))
> +				break;
> +
> +			memcpy(&request, RING_GET_REQUEST(ring, rc), sizeof(request));
> +			printk("Got request\n");
> +			ring->req_cons = ++rc;
> +
> +			/* TODO: it is probably better to queue requests on a
> +			 * linked list and let a task in a workqueue process
> +			 * them, because we do not want to stay in the ISR for
> +			 * long.
> +			 */
> +			ret = hyper_dmabuf_msg_parse(ring_info->sdomain, &request);
> +
> +			if (ret > 0) {
> +				/* build response */
> +				memcpy(&response, &request, sizeof(response));
> +
> +				/* we send back the modified request as a response;
> +				 * we might just need the request only */
> +				memcpy(RING_GET_RESPONSE(ring, ring->rsp_prod_pvt), &response, sizeof(response));
> +				ring->rsp_prod_pvt++;
> +
> +				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(ring, notify);
> +
> +				if (notify) {
> +					printk("Notyfing\n");
> +					notify_remote_via_irq(ring_info->irq);
> +				}
> +			}
> +
> +			RING_FINAL_CHECK_FOR_REQUESTS(ring, more_to_do);
> +			printk("Final check for requests %d\n", more_to_do);
> +		}
> +	} while (more_to_do);
> +
> +	return IRQ_HANDLED;
> +}
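
A minimal sketch of the deferral suggested by the TODO above, reviving the
hyper_dmabuf_work idea commented out in hyper_dmabuf_xen_comm.h (assumes
requests can be copied and parsed outside the ISR; responses would then
also have to be pushed from the work item rather than from the handler):

	struct hyper_dmabuf_work {
		struct hyper_dmabuf_ring_rq req;
		int domid;
		struct work_struct work;
	};

	static void hyper_dmabuf_msg_work(struct work_struct *w)
	{
		struct hyper_dmabuf_work *hw =
			container_of(w, struct hyper_dmabuf_work, work);

		hyper_dmabuf_msg_parse(hw->domid, &hw->req);
		kfree(hw);
	}

	/* in the ISR, instead of parsing in place: */
	struct hyper_dmabuf_work *hw = kmalloc(sizeof(*hw), GFP_ATOMIC);

	if (hw) {
		memcpy(&hw->req, &request, sizeof(request));
		hw->domid = ring_info->sdomain;
		INIT_WORK(&hw->work, hyper_dmabuf_msg_work);
		schedule_work(&hw->work);
	}
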
> +
> +/* ISR for responses from importer */
> +static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *dev_id)
> +{
> +	/* front ring only care about response from back */
> +	struct hyper_dmabuf_ring_rp *response;
> +	RING_IDX i, rp;
> +	int more_to_do, ret;
> +
> +	struct hyper_dmabuf_ring_info_export *ring_info = (struct hyper_dmabuf_ring_info_export *)dev_id;
> +	struct hyper_dmabuf_front_ring *ring;
> +	ring = &ring_info->ring_front;
> +
> +	do {
> +		more_to_do = 0;
> +		rp = ring->sring->rsp_prod;
> +		for (i = ring->rsp_cons; i != rp; i++) {
> +			unsigned long id;
> +
> +			response = RING_GET_RESPONSE(ring, i);
> +			id = response->response_id;
> +
> +			if (response->status == HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP) {
> +				/* parsing response */
> +				ret = hyper_dmabuf_msg_parse(ring_info->rdomain, (struct hyper_dmabuf_ring_rq*)response);
> +
> +				if (ret < 0) {
> +					printk("getting error while parsing response\n");
> +				}
> +			} else if (response->status == HYPER_DMABUF_REQ_ERROR) {
> +				printk("remote domain %d couldn't process request %d\n", ring_info->rdomain, response->command);
> +			}
> +
> +		}
> +
> +		ring->rsp_cons = i;
> +
> +		if (i != ring->req_prod_pvt) {
> +			RING_FINAL_CHECK_FOR_RESPONSES(ring, more_to_do);
> +			printk("more to do %d\n", more_to_do);
> +		} else {
> +			ring->sring->rsp_event = i+1;
> +		}
> +	} while (more_to_do);
> +
> +	return IRQ_HANDLED;
> +}
> diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
> new file mode 100644
> index 0000000..2754917
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
> @@ -0,0 +1,62 @@
> +#ifndef __HYPER_DMABUF_XEN_COMM_H__
> +#define __HYPER_DMABUF_XEN_COMM_H__
> +
> +#include "xen/interface/io/ring.h"
> +
> +#define MAX_NUMBER_OF_OPERANDS 9
> +
> +struct hyper_dmabuf_ring_rq {
> +	unsigned int request_id;
> +	unsigned int status;
> +	unsigned int command;
> +	unsigned int operands[MAX_NUMBER_OF_OPERANDS];
> +};
> +
> +struct hyper_dmabuf_ring_rp {
> +	unsigned int response_id;
> +	unsigned int status;
> +	unsigned int command;
> +	unsigned int operands[MAX_NUMBER_OF_OPERANDS];
> +};
> +
> +DEFINE_RING_TYPES(hyper_dmabuf, struct hyper_dmabuf_ring_rq, struct hyper_dmabuf_ring_rp);
> +
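
DEFINE_RING_TYPES() comes from xen/interface/io/ring.h and expands to the
three types used throughout this series:

	struct hyper_dmabuf_sring;	/* shared-page ring layout */
	struct hyper_dmabuf_front_ring;	/* producer side, ring_info_export */
	struct hyper_dmabuf_back_ring;	/* consumer side, ring_info_import */
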
> +struct hyper_dmabuf_ring_info_export {
> +	struct hyper_dmabuf_front_ring ring_front;
> +	int rdomain;
> +	int gref_ring;
> +	int irq;
> +	int port;
> +};
> +
> +struct hyper_dmabuf_ring_info_import {
> +	int sdomain;
> +	int irq;
> +	int evtchn;
> +	struct hyper_dmabuf_back_ring ring_back;
> +};
> +
> +//struct hyper_dmabuf_work {
> +//	hyper_dmabuf_ring_rq requrest;
> +//	struct work_struct msg_parse;
> +//};
> +
> +int32_t hyper_dmabuf_get_domid(void);
> +
> +int hyper_dmabuf_next_req_id_export(void);
> +
> +int hyper_dmabuf_next_req_id_import(void);
> +
> +/* the exporter needs to generate info for page sharing */
> +int hyper_dmabuf_exporter_ringbuf_init(int rdomain, grant_ref_t *gref, int *port);
> +
> +/* importer needs to know about shared page and port numbers for ring buffer and event channel */
> +int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port);
> +
> +/* send request to the remote domain */
> +int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req);
> +
> +/* called by interrupt (WORKQUEUE) */
> +int hyper_dmabuf_send_response(struct hyper_dmabuf_ring_rp* response, int domain);
> +
> +#endif // __HYPER_DMABUF_XEN_COMM_H__
> diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
> new file mode 100644
> index 0000000..15c9d29
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
> @@ -0,0 +1,106 @@
> +#include <linux/kernel.h>
> +#include <linux/errno.h>
> +#include <linux/module.h>
> +#include <linux/slab.h>
> +#include <linux/cdev.h>
> +#include <asm/uaccess.h>
> +#include <linux/hashtable.h>
> +#include <xen/grant_table.h>
> +#include "hyper_dmabuf_xen_comm.h"
> +#include "hyper_dmabuf_xen_comm_list.h"
> +
> +DECLARE_HASHTABLE(hyper_dmabuf_hash_importer_ring, MAX_ENTRY_IMPORT_RING);
> +DECLARE_HASHTABLE(hyper_dmabuf_hash_exporter_ring, MAX_ENTRY_EXPORT_RING);
> +
> +int hyper_dmabuf_ring_table_init(void)
> +{
> +	hash_init(hyper_dmabuf_hash_importer_ring);
> +	hash_init(hyper_dmabuf_hash_exporter_ring);
> +	return 0;
> +}
> +
> +int hyper_dmabuf_ring_table_destroy(void)
> +{
> +	/* TODO: cleanup tables*/
> +	return 0;
> +}
> +
> +int hyper_dmabuf_register_exporter_ring(struct hyper_dmabuf_ring_info_export *ring_info)
> +{
> +	struct hyper_dmabuf_exporter_ring_info *info_entry;
> +
> +	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
> +	if (!info_entry)
> +		return -ENOMEM;
> +
> +	info_entry->info = ring_info;
> +
> +	hash_add(hyper_dmabuf_hash_exporter_ring, &info_entry->node,
> +		info_entry->info->rdomain);
> +
> +	return 0;
> +}
> +
> +int hyper_dmabuf_register_importer_ring(struct hyper_dmabuf_ring_info_import *ring_info)
> +{
> +	struct hyper_dmabuf_importer_ring_info *info_entry;
> +
> +	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
> +	if (!info_entry)
> +		return -ENOMEM;
> +
> +	info_entry->info = ring_info;
> +
> +	hash_add(hyper_dmabuf_hash_importer_ring, &info_entry->node,
> +		info_entry->info->sdomain);
> +
> +	return 0;
> +}
> +
> +struct hyper_dmabuf_ring_info_export *hyper_dmabuf_find_exporter_ring(int domid)
> +{
> +	struct hyper_dmabuf_exporter_ring_info *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_exporter_ring, bkt, info_entry, node)
> +		if (info_entry->info->rdomain == domid)
> +			return info_entry->info;
> +
> +	return NULL;
> +}
> +
> +struct hyper_dmabuf_ring_info_import *hyper_dmabuf_find_importer_ring(int domid)
> +{
> +	struct hyper_dmabuf_importer_ring_info *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_importer_ring, bkt, info_entry, node)
> +		if (info_entry->info->sdomain == domid)
> +			return info_entry->info;
> +
> +	return NULL;
> +}
> +
> +int hyper_dmabuf_remove_exporter_ring(int domid)
> +{
> +	struct hyper_dmabuf_exporter_ring_info *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_exporter_ring, bkt, info_entry, node)
> +		if (info_entry->info->rdomain == domid) {
> +			hash_del(&info_entry->node);
> +			return 0;
> +		}
> +
> +	return -1;
> +}
> +
> +int hyper_dmabuf_remove_importer_ring(int domid)
> +{
> +	struct hyper_dmabuf_importer_ring_info *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_importer_ring, bkt, info_entry, node)
> +		if (info_entry->info->sdomain == domid) {
> +			hash_del(&info_entry->node);
> +			return 0;
> +		}
> +
> +	return -1;
> +}
> diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
> new file mode 100644
> index 0000000..5929f99
> --- /dev/null
> +++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
> @@ -0,0 +1,35 @@
> +#ifndef __HYPER_DMABUF_XEN_COMM_LIST_H__
> +#define __HYPER_DMABUF_XEN_COMM_LIST_H__
> +
> +/* number of bits to be used for the exporter ring hash table */
> +#define MAX_ENTRY_EXPORT_RING 7
> +/* number of bits to be used for the importer ring hash table */
> +#define MAX_ENTRY_IMPORT_RING 7
> +
> +struct hyper_dmabuf_exporter_ring_info {
> +	struct hyper_dmabuf_ring_info_export *info;
> +	struct hlist_node node;
> +};
> +
> +struct hyper_dmabuf_importer_ring_info {
> +	struct hyper_dmabuf_ring_info_import *info;
> +	struct hlist_node node;
> +};
> +
> +int hyper_dmabuf_ring_table_init(void);
> +
> +int hyper_dmabuf_ring_table_destroy(void);
> +
> +int hyper_dmabuf_register_exporter_ring(struct hyper_dmabuf_ring_info_export *ring_info);
> +
> +int hyper_dmabuf_register_importer_ring(struct hyper_dmabuf_ring_info_import *ring_info);
> +
> +struct hyper_dmabuf_ring_info_export *hyper_dmabuf_find_exporter_ring(int domid);
> +
> +struct hyper_dmabuf_ring_info_import *hyper_dmabuf_find_importer_ring(int domid);
> +
> +int hyper_dmabuf_remove_exporter_ring(int domid);
> +
> +int hyper_dmabuf_remove_importer_ring(int domid);
> +
> +#endif // __HYPER_DMABUF_XEN_COMM_LIST_H__
> -- 
> 2.7.4
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 160+ messages in thread

* [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv
@ 2017-12-19 19:29 Dongwon Kim
  0 siblings, 0 replies; 160+ messages in thread
From: Dongwon Kim @ 2017-12-19 19:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: xen-devel, mateuszx.potrola, dri-devel, dongwon.kim

Upload of intial version of hyper_DMABUF driver enabling
DMA_BUF exchange between two different VMs in virtualized
platform based on hypervisor such as KVM or XEN.

Hyper_DMABUF drv's primary role is to import a DMA_BUF
from originator then re-export it to another Linux VM
so that it can be mapped and accessed by it.

The functionality of this driver highly depends on
Hypervisor's native page sharing mechanism and inter-VM
communication support.

This driver has two layers, one is main hyper_DMABUF
framework for scatter-gather list management that handles
actual import and export of DMA_BUF. Lower layer is about
actual memory sharing and communication between two VMs,
which is hypervisor-specific interface.

This driver is initially designed to enable DMA_BUF
sharing across VMs in Xen environment, so currently working
with Xen only.

This also adds Kernel configuration for hyper_DMABUF drv
under Device Drivers->Xen driver support->hyper_dmabuf
options.

To give some brief information about each source file,

hyper_dmabuf/hyper_dmabuf_conf.h
: configuration info

hyper_dmabuf/hyper_dmabuf_drv.c
: driver interface and initialization

hyper_dmabuf/hyper_dmabuf_imp.c
: scatter-gather list generation and management. DMA_BUF
ops for DMA_BUF reconstructed from hyper_DMABUF

hyper_dmabuf/hyper_dmabuf_ioctl.c
: IOCTLs calls for export/import and comm channel creation
unexport.

hyper_dmabuf/hyper_dmabuf_list.c
: Database (linked-list) for exported and imported
hyper_DMABUF

hyper_dmabuf/hyper_dmabuf_msg.c
: creation and management of messages between exporter and
importer

hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
: comm ch management and ISRs for incoming messages.

hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
: Database (linked-list) for keeping information about
existing comm channels among VMs

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
Signed-off-by: Mateusz Polrola <mateuszx.potrola@intel.com>
---
 drivers/xen/Kconfig                                |   2 +
 drivers/xen/Makefile                               |   1 +
 drivers/xen/hyper_dmabuf/Kconfig                   |  14 +
 drivers/xen/hyper_dmabuf/Makefile                  |  34 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h       |   2 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c        |  54 ++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h        | 101 +++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c        | 852 +++++++++++++++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h        |  31 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c      | 462 +++++++++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c       | 119 +++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h       |  40 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c        | 212 +++++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h        |  45 ++
 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h      |  16 +
 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h     |  70 ++
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c   | 328 ++++++++
 .../xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h   |  62 ++
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c  | 106 +++
 .../hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h  |  35 +
 20 files changed, 2586 insertions(+)
 create mode 100644 drivers/xen/hyper_dmabuf/Kconfig
 create mode 100644 drivers/xen/hyper_dmabuf/Makefile
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
 create mode 100644 drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
 create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
 create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
 create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
 create mode 100644 drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h

diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index d8dd546..b59b0e3 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -321,4 +321,6 @@ config XEN_SYMS
 config XEN_HAVE_VPMU
        bool
 
+source "drivers/xen/hyper_dmabuf/Kconfig"
+
 endmenu
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index 451e833..a6e253a 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -4,6 +4,7 @@ obj-$(CONFIG_X86)			+= fallback.o
 obj-y	+= grant-table.o features.o balloon.o manage.o preempt.o time.o
 obj-y	+= events/
 obj-y	+= xenbus/
+obj-y	+= hyper_dmabuf/
 
 nostackp := $(call cc-option, -fno-stack-protector)
 CFLAGS_features.o			:= $(nostackp)
diff --git a/drivers/xen/hyper_dmabuf/Kconfig b/drivers/xen/hyper_dmabuf/Kconfig
new file mode 100644
index 0000000..75e1f96
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/Kconfig
@@ -0,0 +1,14 @@
+menu "hyper_dmabuf options"
+
+config HYPER_DMABUF
+	tristate "Enables hyper dmabuf driver"
+	default y
+
+config HYPER_DMABUF_XEN
+	bool "Configure hyper_dmabuf for XEN hypervisor"
+	default y
+	depends on HYPER_DMABUF
+	help
+	  Configuring hyper_dmabuf driver for XEN hypervisor
+
+endmenu
diff --git a/drivers/xen/hyper_dmabuf/Makefile b/drivers/xen/hyper_dmabuf/Makefile
new file mode 100644
index 0000000..0be7445
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/Makefile
@@ -0,0 +1,34 @@
+TARGET_MODULE:=hyper_dmabuf
+
+# If we running by kernel building system
+ifneq ($(KERNELRELEASE),)
+	$(TARGET_MODULE)-objs := hyper_dmabuf_drv.o \
+                                 hyper_dmabuf_ioctl.o \
+                                 hyper_dmabuf_list.o \
+				 hyper_dmabuf_imp.o \
+				 hyper_dmabuf_msg.o \
+				 xen/hyper_dmabuf_xen_comm.o \
+				 xen/hyper_dmabuf_xen_comm_list.o
+
+obj-$(CONFIG_HYPER_DMABUF) := $(TARGET_MODULE).o
+
+# If we are running without kernel build system
+else
+BUILDSYSTEM_DIR?=../../../
+PWD:=$(shell pwd)
+
+all :
+# run kernel build system to make module
+$(MAKE) -C $(BUILDSYSTEM_DIR) M=$(PWD) modules
+
+clean:
+# run kernel build system to cleanup in current directory
+$(MAKE) -C $(BUILDSYSTEM_DIR) M=$(PWD) clean
+
+load:
+	insmod ./$(TARGET_MODULE).ko
+
+unload:
+	rmmod ./$(TARGET_MODULE).ko
+
+endif
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
new file mode 100644
index 0000000..3d9b2d6
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_conf.h
@@ -0,0 +1,2 @@
+#define CURRENT_TARGET XEN
+#define INTER_DOMAIN_DMABUF_SYNCHRONIZATION
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
new file mode 100644
index 0000000..0698327
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -0,0 +1,54 @@
+#include <linux/init.h>       /* module_init, module_exit */
+#include <linux/module.h> /* version info, MODULE_LICENSE, MODULE_AUTHOR, printk() */
+#include "hyper_dmabuf_conf.h"
+#include "hyper_dmabuf_list.h"
+#include "xen/hyper_dmabuf_xen_comm_list.h"
+
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_AUTHOR("IOTG-PED, INTEL");
+
+int register_device(void);
+int unregister_device(void);
+
+/*===============================================================================================*/
+static int hyper_dmabuf_drv_init(void)
+{
+	int ret = 0;
+
+	printk( KERN_NOTICE "hyper_dmabuf_starting: Initialization started" );
+
+	ret = register_device();
+	if (ret < 0) {
+		return -EINVAL;
+	}
+
+	printk( KERN_NOTICE "initializing database for imported/exported dmabufs\n");
+
+	ret = hyper_dmabuf_table_init();
+	if (ret < 0) {
+		return -EINVAL;
+	}
+
+	ret = hyper_dmabuf_ring_table_init();
+	if (ret < 0) {
+		return -EINVAL;
+	}
+
+	/* interrupt for comm should be registered here: */
+	return ret;
+}
+
+/*-----------------------------------------------------------------------------------------------*/
+static void hyper_dmabuf_drv_exit(void)
+{
+	/* hash tables for export/import entries and ring_infos */
+	hyper_dmabuf_table_destroy();
+	hyper_dmabuf_ring_table_init();
+
+	printk( KERN_NOTICE "dma_buf-src_sink model: Exiting" );
+	unregister_device();
+}
+/*===============================================================================================*/
+
+module_init(hyper_dmabuf_drv_init);
+module_exit(hyper_dmabuf_drv_exit);
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
new file mode 100644
index 0000000..2dad9a6
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -0,0 +1,101 @@
+#ifndef __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
+#define __LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
+
+typedef int (*hyper_dmabuf_ioctl_t)(void *data);
+
+struct hyper_dmabuf_ioctl_desc {
+	unsigned int cmd;
+	int flags;
+	hyper_dmabuf_ioctl_t func;
+	const char *name;
+};
+
+#define HYPER_DMABUF_IOCTL_DEF(ioctl, _func, _flags) 	\
+	[_IOC_NR(ioctl)] = {				\
+			.cmd = ioctl,			\
+			.func = _func,			\
+			.flags = _flags,		\
+			.name = #ioctl			\
+	}
+
+#define IOCTL_HYPER_DMABUF_EXPORTER_RING_SETUP \
+_IOC(_IOC_NONE, 'G', 0, sizeof(struct ioctl_hyper_dmabuf_exporter_ring_setup))
+struct ioctl_hyper_dmabuf_exporter_ring_setup {
+	/* IN parameters */
+	/* Remote domain id */
+	uint32_t remote_domain;
+	grant_ref_t ring_refid; /* assigned by driver, copied to userspace after initialization */
+	uint32_t port; /* assigned by driver, copied to userspace after initialization */
+};
+
+#define IOCTL_HYPER_DMABUF_IMPORTER_RING_SETUP \
+_IOC(_IOC_NONE, 'G', 1, sizeof(struct ioctl_hyper_dmabuf_importer_ring_setup))
+struct ioctl_hyper_dmabuf_importer_ring_setup {
+	/* IN parameters */
+	/* Source domain id */
+	uint32_t source_domain;
+	/* Ring shared page refid */
+	grant_ref_t ring_refid;
+	/* Port number */
+	uint32_t port;
+};
+
+#define IOCTL_HYPER_DMABUF_EXPORT_REMOTE \
+_IOC(_IOC_NONE, 'G', 2, sizeof(struct ioctl_hyper_dmabuf_export_remote))
+struct ioctl_hyper_dmabuf_export_remote {
+	/* IN parameters */
+	/* DMA buf fd to be exported */
+	uint32_t dmabuf_fd;
+	/* Domain id to which buffer should be exported */
+	uint32_t remote_domain;
+	/* exported dma buf id */
+	uint32_t hyper_dmabuf_id;
+	uint32_t private[4];
+};
+
+#define IOCTL_HYPER_DMABUF_EXPORT_FD \
+_IOC(_IOC_NONE, 'G', 3, sizeof(struct ioctl_hyper_dmabuf_export_fd))
+struct ioctl_hyper_dmabuf_export_fd {
+	/* IN parameters */
+	/* hyper dmabuf id to be imported */
+	uint32_t hyper_dmabuf_id;
+	/* flags */
+	uint32_t flags;
+	/* OUT parameters */
+	/* exported dma buf fd */
+	uint32_t fd;
+};
+
+#define IOCTL_HYPER_DMABUF_DESTROY \
+_IOC(_IOC_NONE, 'G', 4, sizeof(struct ioctl_hyper_dmabuf_destroy))
+struct ioctl_hyper_dmabuf_destroy {
+	/* IN parameters */
+	/* hyper dmabuf id to be destroyed */
+	uint32_t hyper_dmabuf_id;
+	/* OUT parameters */
+	/* Status of request */
+	uint32_t status;
+};
+
+#define IOCTL_HYPER_DMABUF_QUERY \
+_IOC(_IOC_NONE, 'G', 5, sizeof(struct ioctl_hyper_dmabuf_query))
+struct ioctl_hyper_dmabuf_query {
+	/* in parameters */
+	/* hyper dmabuf id to be queried */
+	uint32_t hyper_dmabuf_id;
+	/* item to be queried */
+	uint32_t item;
+	/* OUT parameters */
+	/* Value of queried item */
+	uint32_t info;
+};
+
+#define IOCTL_HYPER_DMABUF_REMOTE_EXPORTER_RING_SETUP \
+_IOC(_IOC_NONE, 'G', 6, sizeof(struct ioctl_hyper_dmabuf_remote_exporter_ring_setup))
+struct ioctl_hyper_dmabuf_remote_exporter_ring_setup {
+	/* IN parameters */
+	uint32_t rdomain; /* id of remote domain where exporter's ring needs to be set up */
+	uint32_t info;
+};
+
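+/*
+ * Illustrative userspace flow (a sketch, not part of this patch; assumes
+ * "fd" is an open handle to the /dev/xen/hyper_dmabuf misc device, "buf_fd"
+ * is an existing dma_buf fd, and error handling is omitted):
+ *
+ *	struct ioctl_hyper_dmabuf_exporter_ring_setup ring = { .remote_domain = 1 };
+ *	struct ioctl_hyper_dmabuf_export_remote exp = { 0 };
+ *
+ *	ioctl(fd, IOCTL_HYPER_DMABUF_EXPORTER_RING_SETUP, &ring);
+ *	exp.dmabuf_fd = buf_fd;
+ *	exp.remote_domain = 1;
+ *	ioctl(fd, IOCTL_HYPER_DMABUF_EXPORT_REMOTE, &exp);
+ *	(exp.hyper_dmabuf_id now identifies the buffer for the importing domain)
+ */
+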
+#endif //__LINUX_PUBLIC_HYPER_DMABUF_DRV_H__
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
new file mode 100644
index 0000000..faa5c1b
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.c
@@ -0,0 +1,852 @@
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/dma-buf.h>
+#include <xen/grant_table.h>
+#include <asm/xen/page.h>
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_imp.h"
+#include "xen/hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_msg.h"
+
+#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
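+/* with 4KB pages and 4-byte grant refs this evaluates to 1024 refs per page */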
+
+/* return total number of pages referenced by a sgt
+ * for pre-calculation of # of pages behind a given sgt
+ */
+static int hyper_dmabuf_get_num_pgs(struct sg_table *sgt)
+{
+	struct scatterlist *sgl;
+	int length, i;
+	/* at least one page */
+	int num_pages = 1;
+
+	sgl = sgt->sgl;
+
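+	/* bytes of the first entry that spill past its first page */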
+	length = sgl->length - PAGE_SIZE + sgl->offset;
+	num_pages += ((length + PAGE_SIZE - 1)/PAGE_SIZE); /* round-up */
+
+	for (i = 1; i < sgt->nents; i++) {
+		sgl = sg_next(sgl);
+		num_pages += ((sgl->length + PAGE_SIZE - 1) / PAGE_SIZE); /* round-up */
+	}
+
+	return num_pages;
+}
+
+/* extract pages directly from struct sg_table */
+struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
+{
+	struct hyper_dmabuf_pages_info *pinfo;
+	int i, j, k;
+	int length;
+	struct scatterlist *sgl;
+
+	pinfo = kmalloc(sizeof(*pinfo), GFP_KERNEL);
+	if (pinfo == NULL)
+		return NULL;
+
+	pinfo->pages = kmalloc(sizeof(struct page *) * hyper_dmabuf_get_num_pgs(sgt), GFP_KERNEL);
+	if (pinfo->pages == NULL) {
+		kfree(pinfo);
+		return NULL;
+	}
+
+	sgl = sgt->sgl;
+
+	pinfo->nents = 1;
+	pinfo->frst_ofst = sgl->offset;
+	pinfo->pages[0] = sg_page(sgl);
+	length = sgl->length - PAGE_SIZE + sgl->offset;
+	i = 1;
+
+	while (length > 0) {
+		pinfo->pages[i] = nth_page(sg_page(sgl), i);
+		length -= PAGE_SIZE;
+		pinfo->nents++;
+		i++;
+	}
+
+	for (j = 1; j < sgt->nents; j++) {
+		sgl = sg_next(sgl);
+		pinfo->pages[i++] = sg_page(sgl);
+		length = sgl->length - PAGE_SIZE;
+		pinfo->nents++;
+
+		/* index pages within this sg entry, not with the global counter */
+		for (k = 1; length > 0; k++) {
+			pinfo->pages[i] = nth_page(sg_page(sgl), k);
+			length -= PAGE_SIZE;
+			pinfo->nents++;
+			i++;
+		}
+	}
+
+	/*
+	 * length at this point will be 0 or negative,
+	 * so to calculate last page size just add it to PAGE_SIZE
+	 */
+	pinfo->last_len = PAGE_SIZE + length;
+
+	return pinfo;
+}
+
+/* create sg_table with given pages and other parameters */
+struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
+				int frst_ofst, int last_len, int nents)
+{
+	struct sg_table *sgt;
+	struct scatterlist *sgl;
+	int i, ret;
+
+	sgt = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
+	if (sgt == NULL) {
+		return NULL;
+	}
+
+	ret = sg_alloc_table(sgt, nents, GFP_KERNEL);
+	if (ret) {
+		kfree(sgt);
+		return NULL;
+	}
+
+	sgl = sgt->sgl;
+
+	sg_set_page(sgl, pages[0], PAGE_SIZE-frst_ofst, frst_ofst);
+
+	for (i=1; i<nents-1; i++) {
+		sgl = sg_next(sgl);
+		sg_set_page(sgl, pages[i], PAGE_SIZE, 0);
+	}
+
+	if (nents > 1) /* more than one page */ {
+		sgl = sg_next(sgl);
+		sg_set_page(sgl, pages[i], last_len, 0);
+	}
+
+	return sgt;
+}
+
+/*
+ * Creates 2 level page directory structure for referencing shared pages.
+ * Top level page is a single page that contains up to 1024 refids that
+ * point to 2nd level pages.
+ * Each 2nd level page contains up to 1024 refids that point to shared
+ * data pages.
+ * There will always be one top level page and number of 2nd level pages
+ * depends on number of shared data pages.
+ *
+ *      Top level page                2nd level pages            Data pages
+ * +-------------------------+   ┌>+--------------------+ ┌--->+------------+
+ * |2nd level page 0 refid   |---┘ |Data page 0 refid   |-┘    |Data page 0 |
+ * |2nd level page 1 refid   |---┐ |Data page 1 refid   |-┐    +------------+
+ * |           ...           |   | |     ....           | |
+ * |2nd level page 1023 refid|-┐ | |Data page 1023 refid| └--->+------------+
+ * +-------------------------+ | | +--------------------+      |Data page 1 |
+ *                             | |                             +------------+
+ *                             | └>+--------------------+
+ *                             |   |Data page 1024 refid|
+ *                             |   |Data page 1025 refid|
+ *                             |   |       ...          |
+ *                             |   |Data page 2047 refid|
+ *                             |   +--------------------+
+ *                             |
+ *                             |        .....
+ *                             └-->+-----------------------+
+ *                                 |Data page 1047552 refid|
+ *                                 |Data page 1047553 refid|
+ *                                 |       ...             |
+ *                                 |Data page 1048575 refid|-->+------------------+
+ *                                 +-----------------------+   |Data page 1048575 |
+ *                                                             +------------------+
+ *
+ * Using such 2 level structure it is possible to reference up to 4GB of
+ * shared data using single refid pointing to top level page.
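+ * (assuming 4KB pages and 4-byte grant refs: 1024 * 1024 * 4KB = 4GB).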
+ *
+ * Returns refid of top level page.
+ */
+grant_ref_t hyper_dmabuf_create_addressing_tables(grant_ref_t *data_refs, int nents, int rdomain,
+						  struct hyper_dmabuf_shared_pages_info *shared_pages_info)
+{
+	/*
+	 * Calculate number of pages needed for 2nd level addressing:
+	 */
+	int n_2nd_level_pages = DIV_ROUND_UP(nents, REFS_PER_PAGE);
+	int i;
+	unsigned long gref_page_start;
+	grant_ref_t *tmp_page;
+	grant_ref_t top_level_ref;
+	grant_ref_t *addr_refs;
+
+	addr_refs = kcalloc(n_2nd_level_pages, sizeof(grant_ref_t), GFP_KERNEL);
+
+	/* the second argument of __get_free_pages() is an order, not a page
+	 * count; zero the pages so unused refid slots stay 0 for cleanup */
+	gref_page_start = __get_free_pages(GFP_KERNEL | __GFP_ZERO,
+					   get_order(n_2nd_level_pages * PAGE_SIZE));
+	tmp_page = (grant_ref_t *)gref_page_start;
+
+	/* Store 2nd level pages to be freed later */
+	shared_pages_info->addr_pages = tmp_page;
+
+	/* Share 2nd level addressing pages in readonly mode */
+	for (i = 0; i < n_2nd_level_pages; i++) {
+		addr_refs[i] = gnttab_grant_foreign_access(rdomain,
+				virt_to_mfn((unsigned long)tmp_page + i * PAGE_SIZE), 1);
+	}
+
+	/*
+	 * fill second level pages with data refs
+	 */
+	for (i = 0; i < nents; i++) {
+		tmp_page[i] = data_refs[i];
+	}
+
+
+	/* allocate top level page (order 0: one page holds up to 1024 refids) */
+	gref_page_start = __get_free_pages(GFP_KERNEL | __GFP_ZERO, 0);
+	tmp_page = (grant_ref_t *)gref_page_start;
+
+	/* Store top level page to be freed later */
+	shared_pages_info->top_level_page = tmp_page;
+
+	/*
+	 * fill top level page with the refids of the second level pages.
+	 */
+	for (i = 0; i < n_2nd_level_pages; i++) {
+		tmp_page[i] = addr_refs[i];
+	}
+
+	/* Share top level addressing page in readonly mode */
+	top_level_ref = gnttab_grant_foreign_access(rdomain, virt_to_mfn((unsigned long)tmp_page), 1);
+
+	kfree(addr_refs);
+
+	return top_level_ref;
+}
+
+/*
+ * Maps the provided top level refid and returns an array of pages containing the data refs.
+ */
+struct page** hyper_dmabuf_get_data_refs(grant_ref_t top_level_ref, int domid, int nents,
+					 struct hyper_dmabuf_shared_pages_info *shared_pages_info)
+{
+	struct page *top_level_page;
+	struct page **level2_pages;
+
+	grant_ref_t *top_level_refs;
+
+	struct gnttab_map_grant_ref top_level_map_ops;
+	struct gnttab_unmap_grant_ref top_level_unmap_ops;
+
+	struct gnttab_map_grant_ref *map_ops;
+	struct gnttab_unmap_grant_ref *unmap_ops;
+
+	unsigned long addr;
+	int n_level2_refs = 0;
+	int i;
+
+	n_level2_refs = DIV_ROUND_UP(nents, REFS_PER_PAGE);
+
+	level2_pages = kcalloc(n_level2_refs, sizeof(struct page *), GFP_KERNEL);
+
+	map_ops = kcalloc(REFS_PER_PAGE, sizeof(map_ops[0]), GFP_KERNEL);
+	unmap_ops = kcalloc(REFS_PER_PAGE, sizeof(unmap_ops[0]), GFP_KERNEL);
+
+	/* Map top level addressing page */
+	if (gnttab_alloc_pages(1, &top_level_page)) {
+		printk("Cannot allocate pages\n");
+		return NULL;
+	}
+
+	addr = (unsigned long)pfn_to_kaddr(page_to_pfn(top_level_page));
+	gnttab_set_map_op(&top_level_map_ops, addr, GNTMAP_host_map | GNTMAP_readonly, top_level_ref, domid);
+	gnttab_set_unmap_op(&top_level_unmap_ops, addr, GNTMAP_host_map | GNTMAP_readonly, -1);
+
+	if (gnttab_map_refs(&top_level_map_ops, NULL, &top_level_page, 1)) {
+		printk("\nxen: dom0: HYPERVISOR map grant ref failed");
+		return NULL;
+	}
+
+	if (top_level_map_ops.status) {
+		printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d",
+				top_level_map_ops.status);
+		return NULL;
+	} else {
+		top_level_unmap_ops.handle = top_level_map_ops.handle;
+	}
+
+	/* Parse contents of the top level addressing page to find how many second level pages there are */
+	top_level_refs = pfn_to_kaddr(page_to_pfn(top_level_page));
+
+	/* Map all second level pages */
+	if (gnttab_alloc_pages(n_level2_refs, level2_pages)) {
+		printk("Cannot allocate pages\n");
+		return NULL;
+	}
+
+	for (i = 0; i < n_level2_refs; i++) {
+		addr = (unsigned long)pfn_to_kaddr(page_to_pfn(level2_pages[i]));
+		gnttab_set_map_op(&map_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, top_level_refs[i], domid);
+		gnttab_set_unmap_op(&unmap_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, -1);
+	}
+
+	if (gnttab_map_refs(map_ops, NULL, level2_pages, n_level2_refs)) {
+		printk("\nxen: dom0: HYPERVISOR map grant ref failed");
+		return NULL;
+	}
+
+	/* Check that pages were mapped correctly and store the unmap handles */
+	for (i = 0; i < n_level2_refs; i++) {
+		if (map_ops[i].status) {
+			printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d",
+					map_ops[i].status);
+			return NULL;
+		} else {
+			unmap_ops[i].handle = map_ops[i].handle;
+		}
+	}
+
+	/* Unmap top level page, as it won't be needed any longer */
+	if (gnttab_unmap_refs(&top_level_unmap_ops, NULL, &top_level_page, 1)) {
+		printk("\xen: cannot unmap top level page\n");
+		return NULL;
+	}
+
+	gnttab_free_pages(1, &top_level_page);
+	kfree(map_ops);
+	shared_pages_info->unmap_ops = unmap_ops;
+
+	return level2_pages;
+}
+
+
+/* This collects all refids for the 2nd level shared pages, creates a table
+ * with those in the 1st level shared page and returns the refid of that top
+ * level table. */
+grant_ref_t hyper_dmabuf_create_gref_table(struct page **pages, int rdomain, int nents,
+					   struct hyper_dmabuf_shared_pages_info *shared_pages_info)
+{
+	int i = 0;
+	grant_ref_t *data_refs;
+	grant_ref_t top_level_ref;
+
+	/* allocate temp array for refs of shared data pages */
+	data_refs = kcalloc(nents, sizeof(grant_ref_t), GFP_KERNEL);
+
+	/* share data pages in rw mode */
+	for (i = 0; i < nents; i++) {
+		data_refs[i] = gnttab_grant_foreign_access(rdomain,
+				pfn_to_mfn(page_to_pfn(pages[i])), 0);
+	}
+
+	/* create additional shared pages with 2 level addressing of data pages */
+	top_level_ref = hyper_dmabuf_create_addressing_tables(data_refs, nents, rdomain,
+							      shared_pages_info);
+
+	/* Store exported pages refid to be unshared later */
+	shared_pages_info->data_refs = data_refs;
+	shared_pages_info->top_level_ref = top_level_ref;
+
+	return top_level_ref;
+}
+
+int hyper_dmabuf_cleanup_gref_table(struct hyper_dmabuf_sgt_info *sgt_info)
+{
+	uint32_t i = 0;
+	struct hyper_dmabuf_shared_pages_info *shared_pages_info = &sgt_info->shared_pages_info;
+
+	grant_ref_t *ref = shared_pages_info->top_level_page;
+	int n_2nd_level_pages = DIV_ROUND_UP(sgt_info->sgt->nents, REFS_PER_PAGE);
+
+	if (shared_pages_info->data_refs == NULL ||
+	    shared_pages_info->addr_pages ==  NULL ||
+	    shared_pages_info->top_level_page == NULL ||
+	    shared_pages_info->top_level_ref == -1) {
+		printk("gref table for hyper_dmabuf already cleaned up\n");
+		return 0;
+	}
+
+	/* End foreign access for 2nd level addressing pages */
+	while (i < n_2nd_level_pages && ref[i] != 0) {
+		if (gnttab_query_foreign_access(ref[i])) {
+			printk("refid not shared!!\n");
+		}
+		if (!gnttab_end_foreign_access_ref(ref[i], 1)) {
+			printk("refid still in use!!\n");
+		}
+		i++;
+	}
+	free_pages((unsigned long)shared_pages_info->addr_pages,
+		   get_order(n_2nd_level_pages * PAGE_SIZE));
+
+	/* End foreign access for top level addressing page */
+	if (gnttab_query_foreign_access(shared_pages_info->top_level_ref)) {
+		printk("refid not shared!!\n");
+	}
+	if (!gnttab_end_foreign_access_ref(shared_pages_info->top_level_ref, 1)) {
+		printk("refid still in use!!\n");
+	}
+	free_pages((unsigned long)shared_pages_info->top_level_page, 0);
+
+	/* End foreign access for data pages, but do not free them */
+	for (i = 0; i < sgt_info->sgt->nents; i++) {
+		if (gnttab_query_foreign_access(shared_pages_info->data_refs[i])) {
+			printk("refid not shared !!\n");
+		}
+		gnttab_end_foreign_access_ref(shared_pages_info->data_refs[i], 0);
+	}
+
+	kfree(shared_pages_info->data_refs);
+
+	shared_pages_info->data_refs = NULL;
+	shared_pages_info->addr_pages = NULL;
+	shared_pages_info->top_level_page = NULL;
+	shared_pages_info->top_level_ref = -1;
+
+	return 0;
+}
+
+int hyper_dmabuf_cleanup_imported_pages(struct hyper_dmabuf_imported_sgt_info *sgt_info)
+{
+	struct hyper_dmabuf_shared_pages_info *shared_pages_info = &sgt_info->shared_pages_info;
+
+	if (shared_pages_info->unmap_ops == NULL || shared_pages_info->data_pages == NULL) {
+		printk("Imported pages already cleaned up or buffer was not imported yet\n");
+		return 0;
+	}
+
+	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL,
+			      shared_pages_info->data_pages, sgt_info->nents)) {
+		printk("Cannot unmap data pages\n");
+		return -EINVAL;
+	}
+
+	gnttab_free_pages(sgt_info->nents, shared_pages_info->data_pages);
+	kfree(shared_pages_info->data_pages);
+	kfree(shared_pages_info->unmap_ops);
+	shared_pages_info->unmap_ops = NULL;
+	shared_pages_info->data_pages = NULL;
+
+	return 0;
+}
+
+/* map and construct sg_lists from reference numbers */
+struct sg_table* hyper_dmabuf_map_pages(grant_ref_t top_level_gref, int frst_ofst, int last_len, int nents, int sdomain,
+					struct hyper_dmabuf_shared_pages_info *shared_pages_info)
+{
+	struct sg_table *st;
+	struct page **pages;
+	struct gnttab_map_grant_ref *ops;
+	struct gnttab_unmap_grant_ref *unmap_ops;
+	unsigned long addr;
+	grant_ref_t *refs;
+	int i;
+	int n_level2_refs = DIV_ROUND_UP(nents, REFS_PER_PAGE);
+
+	/* Get data refids */
+	struct page **refid_pages = hyper_dmabuf_get_data_refs(top_level_gref, sdomain, nents,
+							       shared_pages_info);
+
+	if (refid_pages == NULL)
+		return NULL;
+
+	pages = kcalloc(nents, sizeof(struct page *), GFP_KERNEL);
+	if (pages == NULL) {
+		return NULL;
+	}
+
+	/* allocate new pages that are mapped to shared pages via grant-table */
+	if (gnttab_alloc_pages(nents, pages)) {
+		printk("Cannot allocate pages\n");
+		return NULL;
+	}
+
+	ops = kcalloc(nents, sizeof(struct gnttab_map_grant_ref), GFP_KERNEL);
+	unmap_ops = kcalloc(nents, sizeof(struct gnttab_unmap_grant_ref), GFP_KERNEL);
+
+	for (i=0; i<nents; i++) {
+		addr = (unsigned long)pfn_to_kaddr(page_to_pfn(pages[i]));
+		refs = pfn_to_kaddr(page_to_pfn(refid_pages[i / REFS_PER_PAGE]));
+		gnttab_set_map_op(&ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, refs[i % REFS_PER_PAGE], sdomain);
+		gnttab_set_unmap_op(&unmap_ops[i], addr, GNTMAP_host_map | GNTMAP_readonly, -1);
+	}
+
+	if (gnttab_map_refs(ops, NULL, pages, nents)) {
+		printk("\nxen: dom0: HYPERVISOR map grant ref failed\n");
+		return NULL;
+	}
+
+	for (i = 0; i < nents; i++) {
+		if (ops[i].status) {
+			printk("\nxen: dom0: HYPERVISOR map grant ref failed status = %d\n",
+				ops[i].status);
+			return NULL;
+		} else {
+			unmap_ops[i].handle = ops[i].handle;
+		}
+	}
+
+	st = hyper_dmabuf_create_sgt(pages, frst_ofst, last_len, nents);
+
+	if (gnttab_unmap_refs(shared_pages_info->unmap_ops, NULL, refid_pages, n_level2_refs)) {
+		printk("Cannot unmap 2nd level refs\n");
+		return NULL;
+	}
+
+	gnttab_free_pages(n_level2_refs, refid_pages);
+	kfree(refid_pages);
+
+	kfree(shared_pages_info->unmap_ops);
+	shared_pages_info->unmap_ops = unmap_ops;
+	shared_pages_info->data_pages = pages;
+	kfree(ops);
+
+	return st;
+}
+
+inline int hyper_dmabuf_sync_request_and_wait(int id, int ops)
+{
+	struct hyper_dmabuf_ring_rq *req;
+	int operands[2];
+	int ret;
+
+	operands[0] = id;
+	operands[1] = ops;
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+
+	hyper_dmabuf_create_request(req, HYPER_DMABUF_OPS_TO_SOURCE, &operands[0]);
+
+	/* send request */
+	ret = hyper_dmabuf_send_request(id, req);
+
+	/* TODO: wait until it gets response.. or can we just move on? */
+
+	kfree(req);
+
+	return ret;
+}
+
+static int hyper_dmabuf_ops_attach(struct dma_buf* dmabuf, struct device* dev,
+			struct dma_buf_attachment *attach)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!attach->dmabuf->priv)
+		return -EINVAL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_ATTACH);
+
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+
+	return ret;
+}
+
+static void hyper_dmabuf_ops_detach(struct dma_buf* dmabuf, struct dma_buf_attachment *attach)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!attach->dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attach->dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_DETACH);
+
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+}
+
+static struct sg_table* hyper_dmabuf_ops_map(struct dma_buf_attachment *attachment,
+						enum dma_data_direction dir)
+{
+	struct sg_table *st;
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	struct hyper_dmabuf_pages_info *page_info;
+	int ret;
+
+	if (!attachment->dmabuf->priv)
+		return NULL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attachment->dmabuf->priv;
+
+	/* extract pages from sgt */
+	page_info = hyper_dmabuf_ext_pgs(sgt_info->sgt);
+
+	/* create a new sg_table with extracted pages */
+	st = hyper_dmabuf_create_sgt(page_info->pages, page_info->frst_ofst,
+				page_info->last_len, page_info->nents);
+	if (st == NULL)
+		goto err_free_sg;
+
+	if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir)) {
+		goto err_free_sg;
+	}
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_MAP);
+
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+
+	return st;
+
+err_free_sg:
+	sg_free_table(st);
+	kfree(st);
+	return NULL;
+}
+
+static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
+						struct sg_table *sg,
+						enum dma_data_direction dir)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!attachment->dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)attachment->dmabuf->priv;
+
+	dma_unmap_sg(attachment->dev, sg->sgl, sg->nents, dir);
+
+	sg_free_table(sg);
+	kfree(sg);
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_UNMAP);
+
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+}
+
+static void hyper_dmabuf_ops_release(struct dma_buf *dmabuf)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_RELEASE);
+
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+}
+
+static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return -EINVAL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+
+	return ret;
+}
+
+static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction dir)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return -EINVAL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_END_CPU_ACCESS);
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+
+	return 0;
+}
+
+static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return NULL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_KMAP_ATOMIC);
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+
+	return NULL; /* for now NULL; need to return the address of the mapped region */
+}
+
+static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf, unsigned long pgnum, void *vaddr)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+}
+
+static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return NULL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_KMAP);
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+
+	return NULL; /* for now NULL; need to return the address of the mapped region */
+}
+
+static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum, void *vaddr)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_KUNMAP);
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+}
+
+static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return -EINVAL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_MMAP);
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+
+	return ret;
+}
+
+static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return NULL;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_VMAP);
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+
+	return NULL;
+}
+
+static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
+{
+	struct hyper_dmabuf_imported_sgt_info *sgt_info;
+	int ret;
+
+	if (!dmabuf->priv)
+		return;
+
+	sgt_info = (struct hyper_dmabuf_imported_sgt_info *)dmabuf->priv;
+
+	ret = hyper_dmabuf_sync_request_and_wait(HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(sgt_info->hyper_dmabuf_id),
+						HYPER_DMABUF_OPS_VUNMAP);
+	if (ret < 0) {
+		printk("send dmabuf sync request failed\n");
+	}
+}
+
+static const struct dma_buf_ops hyper_dmabuf_ops = {
+		.attach = hyper_dmabuf_ops_attach,
+		.detach = hyper_dmabuf_ops_detach,
+		.map_dma_buf = hyper_dmabuf_ops_map,
+		.unmap_dma_buf = hyper_dmabuf_ops_unmap,
+		.release = hyper_dmabuf_ops_release,
+		.begin_cpu_access = hyper_dmabuf_ops_begin_cpu_access,
+		.end_cpu_access = hyper_dmabuf_ops_end_cpu_access,
+		.map_atomic = hyper_dmabuf_ops_kmap_atomic,
+		.unmap_atomic = hyper_dmabuf_ops_kunmap_atomic,
+		.map = hyper_dmabuf_ops_kmap,
+		.unmap = hyper_dmabuf_ops_kunmap,
+		.mmap = hyper_dmabuf_ops_mmap,
+		.vmap = hyper_dmabuf_ops_vmap,
+		.vunmap = hyper_dmabuf_ops_vunmap,
+};
+
+/* exporting dmabuf as fd */
+int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags)
+{
+	int fd;
+
+	struct dma_buf* dmabuf;
+
+	/* call hyper_dmabuf_export_dma_buf to create a dma_buf, then bind
+	 * a file descriptor to it */
+
+	dmabuf = hyper_dmabuf_export_dma_buf(dinfo);
+
+	fd = dma_buf_fd(dmabuf, flags);
+
+	return fd;
+}
+
+struct dma_buf* hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo)
+{
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+
+	exp_info.ops = &hyper_dmabuf_ops;
+	exp_info.size = dinfo->sgt->nents * PAGE_SIZE; /* multiple of PAGE_SIZE, not considering offset */
+	exp_info.flags = 0; /* no special flags needed for now */
+	exp_info.priv = dinfo;
+
+	return dma_buf_export(&exp_info);
+};
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
new file mode 100644
index 0000000..003c158
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_imp.h
@@ -0,0 +1,31 @@
+#ifndef __HYPER_DMABUF_IMP_H__
+#define __HYPER_DMABUF_IMP_H__
+
+#include "hyper_dmabuf_struct.h"
+
+/* extract pages directly from struct sg_table */
+struct hyper_dmabuf_pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt);
+
+/* create sg_table with given pages and other parameters */
+struct sg_table* hyper_dmabuf_create_sgt(struct page **pages,
+                                int frst_ofst, int last_len, int nents);
+
+grant_ref_t hyper_dmabuf_create_gref_table(struct page **pages, int rdomain, int nents,
+					   struct hyper_dmabuf_shared_pages_info *shared_pages_info);
+
+int hyper_dmabuf_cleanup_gref_table(struct hyper_dmabuf_sgt_info *sgt_info);
+
+int hyper_dmabuf_cleanup_imported_pages(struct hyper_dmabuf_imported_sgt_info *sgt_info);
+
+/* map first level tables that contains reference numbers for actual shared pages */
+grant_ref_t *hyper_dmabuf_map_gref_table(grant_ref_t *gref_table, int n_pages_table);
+
+/* map and construct sg_lists from reference numbers */
+struct sg_table* hyper_dmabuf_map_pages(grant_ref_t gref, int frst_ofst, int last_len, int nents, int sdomain,
+					struct hyper_dmabuf_shared_pages_info *shared_pages_info);
+
+int hyper_dmabuf_export_fd(struct hyper_dmabuf_imported_sgt_info *dinfo, int flags);
+
+struct dma_buf* hyper_dmabuf_export_dma_buf(struct hyper_dmabuf_imported_sgt_info *dinfo);
+
+#endif /* __HYPER_DMABUF_IMP_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
new file mode 100644
index 0000000..5e50908
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -0,0 +1,462 @@
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/miscdevice.h>
+#include <linux/uaccess.h>
+#include <linux/dma-buf.h>
+#include <linux/delay.h>
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_imp.h"
+#include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_query.h"
+#include "xen/hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_msg.h"
+
+static struct hyper_dmabuf_private {
+	struct device *device;
+} hyper_dmabuf_private;
+
+static uint32_t hyper_dmabuf_id_gen(void)
+{
+	/* TODO: add proper implementation */
+	static uint32_t id = 0;
+	static int32_t domid = -1;
+
+	if (domid == -1)
+		domid = hyper_dmabuf_get_domid();
+
+	return HYPER_DMABUF_ID_IMPORTER(domid, id++);
+}
+
+static int hyper_dmabuf_exporter_ring_setup(void *data)
+{
+	struct ioctl_hyper_dmabuf_exporter_ring_setup *ring_attr;
+	int ret = 0;
+
+	if (!data) {
+		printk("user data is NULL\n");
+		return -EINVAL;
+	}
+	ring_attr = (struct ioctl_hyper_dmabuf_exporter_ring_setup *)data;
+
+	ret = hyper_dmabuf_exporter_ringbuf_init(ring_attr->remote_domain,
+						&ring_attr->ring_refid,
+						&ring_attr->port);
+
+	return ret;
+}
+
+static int hyper_dmabuf_importer_ring_setup(void *data)
+{
+	struct ioctl_hyper_dmabuf_importer_ring_setup *setup_imp_ring_attr;
+	int ret = 0;
+
+	if (!data) {
+		printk("user data is NULL\n");
+		return -EINVAL;
+	}
+
+	setup_imp_ring_attr = (struct ioctl_hyper_dmabuf_importer_ring_setup *)data;
+
+	/* user needs to provide a port number and a refid for the page used as ring buffer */
+	ret = hyper_dmabuf_importer_ringbuf_init(setup_imp_ring_attr->source_domain,
+						 setup_imp_ring_attr->ring_refid,
+						 setup_imp_ring_attr->port);
+
+	return ret;
+}
+
+static int hyper_dmabuf_export_remote(void *data)
+{
+	struct ioctl_hyper_dmabuf_export_remote *export_remote_attr;
+	struct dma_buf *dma_buf;
+	struct dma_buf_attachment *attachment;
+	struct sg_table *sgt;
+	struct hyper_dmabuf_pages_info *page_info;
+	struct hyper_dmabuf_sgt_info *sgt_info;
+	struct hyper_dmabuf_ring_rq *req;
+	int operands[9];
+	int ret = 0;
+
+	if (!data) {
+		printk("user data is NULL\n");
+		return -EINVAL;
+	}
+
+	export_remote_attr = (struct ioctl_hyper_dmabuf_export_remote *)data;
+
+	dma_buf = dma_buf_get(export_remote_attr->dmabuf_fd);
+	if (IS_ERR(dma_buf)) {
+		printk("Cannot get dma buf\n");
+		return PTR_ERR(dma_buf);
+	}
+
+	attachment = dma_buf_attach(dma_buf, hyper_dmabuf_private.device);
+	if (IS_ERR(attachment)) {
+		printk("Cannot get attachment\n");
+		dma_buf_put(dma_buf);
+		return PTR_ERR(attachment);
+	}
+
+	/* check if this specific attachment was already exported
+	 * to the same domain; if so, return the hyper_dmabuf_id
+	 * of the pre-exported sgt */
+	ret = hyper_dmabuf_find_id(attachment, export_remote_attr->remote_domain);
+	if (ret != -1) {
+		dma_buf_detach(dma_buf, attachment);
+		dma_buf_put(dma_buf);
+		export_remote_attr->hyper_dmabuf_id = ret;
+		return 0;
+	}
+	/* Clear ret, as leaving it set would make the whole ioctl report failure to userspace, which is not the case */
+	ret = 0;
+
+	sgt = dma_buf_map_attachment(attachment, DMA_BIDIRECTIONAL);
+
+	sgt_info = kmalloc(sizeof(*sgt_info), GFP_KERNEL);
+
+	sgt_info->hyper_dmabuf_id = hyper_dmabuf_id_gen();
+	/* TODO: We might need to consider using port number on event channel? */
+	sgt_info->hyper_dmabuf_rdomain = export_remote_attr->remote_domain;
+	sgt_info->sgt = sgt;
+	sgt_info->attachment = attachment;
+	sgt_info->dma_buf = dma_buf;
+
+	page_info = hyper_dmabuf_ext_pgs(sgt);
+	if (page_info == NULL)
+		goto fail_export;
+
+	/* now register it to export list */
+	hyper_dmabuf_register_exported(sgt_info);
+
+	page_info->hyper_dmabuf_rdomain = sgt_info->hyper_dmabuf_rdomain;
+	page_info->hyper_dmabuf_id = sgt_info->hyper_dmabuf_id; /* may not be needed */
+
+	export_remote_attr->hyper_dmabuf_id = sgt_info->hyper_dmabuf_id;
+
+	/* now create the table of grefs for the shared pages */
+
+	/* now create request for importer via ring */
+	operands[0] = page_info->hyper_dmabuf_id;
+	operands[1] = page_info->nents;
+	operands[2] = page_info->frst_ofst;
+	operands[3] = page_info->last_len;
+	operands[4] = hyper_dmabuf_create_gref_table(page_info->pages, export_remote_attr->remote_domain,
+						page_info->nents, &sgt_info->shared_pages_info);
+	/* driver/application specific private info, max 32 bytes */
+	operands[5] = export_remote_attr->private[0];
+	operands[6] = export_remote_attr->private[1];
+	operands[7] = export_remote_attr->private[2];
+	operands[8] = export_remote_attr->private[3];
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+
+	/* composing a message to the importer */
+	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORT, &operands[0]);
+	if (hyper_dmabuf_send_request(export_remote_attr->remote_domain, req))
+		goto fail_send_request;
+
+	/* free msg */
+	kfree(req);
+	/* free page_info */
+	kfree(page_info);
+
+	return ret;
+
+fail_send_request:
+	kfree(req);
+	hyper_dmabuf_remove_exported(sgt_info->hyper_dmabuf_id);
+
+fail_export:
+	dma_buf_unmap_attachment(sgt_info->attachment, sgt_info->sgt, DMA_BIDIRECTIONAL);
+	dma_buf_detach(sgt_info->dma_buf, sgt_info->attachment);
+	dma_buf_put(sgt_info->dma_buf);
+
+	return -EINVAL;
+}
+
+static int hyper_dmabuf_export_fd_ioctl(void *data)
+{
+	struct ioctl_hyper_dmabuf_export_fd *export_fd_attr;
+	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
+	int ret = 0;
+
+	if (!data) {
+		printk("user data is NULL\n");
+		return -EINVAL;
+	}
+
+	export_fd_attr = (struct ioctl_hyper_dmabuf_export_fd *)data;
+
+	/* look for dmabuf for the id */
+	imported_sgt_info = hyper_dmabuf_find_imported(export_fd_attr->hyper_dmabuf_id);
+	if (imported_sgt_info == NULL) /* can't find sgt in the table */
+		return -EINVAL;
+
+	printk("%s Found buffer gref %d  off %d last len %d nents %d domain %d\n", __func__,
+		imported_sgt_info->gref, imported_sgt_info->frst_ofst,
+		imported_sgt_info->last_len, imported_sgt_info->nents,
+		HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported_sgt_info->hyper_dmabuf_id));
+
+	imported_sgt_info->sgt = hyper_dmabuf_map_pages(imported_sgt_info->gref,
+						imported_sgt_info->frst_ofst,
+						imported_sgt_info->last_len,
+						imported_sgt_info->nents,
+						HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported_sgt_info->hyper_dmabuf_id),
+						&imported_sgt_info->shared_pages_info);
+
+	if (!imported_sgt_info->sgt) {
+		return -EINVAL;
+	}
+
+	ret = hyper_dmabuf_export_fd(imported_sgt_info, export_fd_attr->flags);
+	if (ret < 0)
+		return ret;
+
+	export_fd_attr->fd = ret;
+
+	return 0;
+}
+
+/* remove dmabuf from the database and send a request to the importing domain
+ * to unmap it */
+static int hyper_dmabuf_destroy(void *data)
+{
+	struct ioctl_hyper_dmabuf_destroy *destroy_attr;
+	struct hyper_dmabuf_sgt_info *sgt_info;
+	struct hyper_dmabuf_ring_rq *req;
+	int ret;
+
+	if (!data) {
+		printk("user data is NULL\n");
+		return -EINVAL;
+	}
+
+	destroy_attr = (struct ioctl_hyper_dmabuf_destroy *)data;
+
+	/* find dmabuf in export list */
+	sgt_info = hyper_dmabuf_find_exported(destroy_attr->hyper_dmabuf_id);
+	if (sgt_info == NULL) { /* failed to find corresponding entry in export list */
+		destroy_attr->status = -EINVAL;
+		return -EFAULT;
+	}
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+
+	hyper_dmabuf_create_request(req, HYPER_DMABUF_DESTROY, &destroy_attr->hyper_dmabuf_id);
+
+	/* now send destroy request to remote domain,
+	 * currently assuming only one importer exists */
+	ret = hyper_dmabuf_send_request(sgt_info->hyper_dmabuf_rdomain, req);
+	if (ret < 0) {
+		kfree(req);
+		return -EFAULT;
+	}
+
+	/* free msg */
+	kfree(req);
+	destroy_attr->status = ret;
+
+	/* Rest of the cleanup will follow when the importer frees its buffer;
+	 * the current implementation assumes there is only one importer
+	 */
+
+	return ret;
+}
+
+static int hyper_dmabuf_query(void *data)
+{
+	struct ioctl_hyper_dmabuf_query *query_attr;
+	struct hyper_dmabuf_sgt_info *sgt_info;
+	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
+	int ret = 0;
+
+	if (!data) {
+		printk("user data is NULL\n");
+		return -EINVAL;
+	}
+
+	query_attr = (struct ioctl_hyper_dmabuf_query *)data;
+
+	sgt_info = hyper_dmabuf_find_exported(query_attr->hyper_dmabuf_id);
+	imported_sgt_info = hyper_dmabuf_find_imported(query_attr->hyper_dmabuf_id);
+
+	/* if dmabuf can't be found in either list, return */
+	if (!sgt_info && !imported_sgt_info) {
+		printk("can't find entry anywhere\n");
+		return -EINVAL;
+	}
+
+	/* not considering the case where a dmabuf is found on both queues
+	 * in one domain */
+	switch (query_attr->item)
+	{
+		case DMABUF_QUERY_TYPE_LIST:
+			if (sgt_info) {
+				query_attr->info = EXPORTED;
+			} else {
+				query_attr->info = IMPORTED;
+			}
+			break;
+
+		/* exporting domain of this specific dmabuf*/
+		case DMABUF_QUERY_EXPORTER:
+			if (sgt_info) {
+				query_attr->info = 0xFFFFFFFF; /* myself */
+			} else {
+				query_attr->info = (HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(imported_sgt_info->hyper_dmabuf_id));
+			}
+			break;
+
+		/* importing domain of this specific dmabuf */
+		case DMABUF_QUERY_IMPORTER:
+			if (sgt_info) {
+				query_attr->info = sgt_info->hyper_dmabuf_rdomain;
+			} else {
+#if 0 /* TODO: a global variable, current_domain does not exist yet*/
+				query_attr->info = current_domain;
+#endif
+			}
+			break;
+
+		/* size of dmabuf in byte */
+		case DMABUF_QUERY_SIZE:
+			if (sgt_info) {
+#if 0 /* TODO: hyper_dmabuf_buf_size is not implemented yet */
+				query_attr->info = hyper_dmabuf_buf_size(sgt_info->sgt);
+#endif
+			} else {
+				query_attr->info = imported_sgt_info->nents * PAGE_SIZE -
+						   imported_sgt_info->frst_ofst - PAGE_SIZE +
+						   imported_sgt_info->last_len;
+			}
+			break;
+	}
+
+	return ret;
+}
+
+static int hyper_dmabuf_remote_exporter_ring_setup(void *data)
+{
+	struct ioctl_hyper_dmabuf_remote_exporter_ring_setup *remote_exporter_ring_setup;
+	struct hyper_dmabuf_ring_rq *req;
+
+	if (!data) {
+		printk("user data is NULL\n");
+		return -EINVAL;
+	}
+
+	remote_exporter_ring_setup = (struct ioctl_hyper_dmabuf_remote_exporter_ring_setup *)data;
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+	hyper_dmabuf_create_request(req, HYPER_DMABUF_EXPORTER_RING_SETUP, NULL);
+
+	/* requesting remote domain to set-up exporter's ring */
+	if (hyper_dmabuf_send_request(remote_exporter_ring_setup->rdomain, req) < 0) {
+		kfree(req);
+		return -EINVAL;
+	}
+
+	kfree(req);
+	return 0;
+}
+
+static const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORTER_RING_SETUP, hyper_dmabuf_exporter_ring_setup, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_IMPORTER_RING_SETUP, hyper_dmabuf_importer_ring_setup, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_REMOTE, hyper_dmabuf_export_remote, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_FD, hyper_dmabuf_export_fd_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_DESTROY, hyper_dmabuf_destroy, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_QUERY, hyper_dmabuf_query, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_REMOTE_EXPORTER_RING_SETUP, hyper_dmabuf_remote_exporter_ring_setup, 0),
+};
+
+static long hyper_dmabuf_ioctl(struct file *filp,
+			unsigned int cmd, unsigned long param)
+{
+	const struct hyper_dmabuf_ioctl_desc *ioctl = NULL;
+	unsigned int nr = _IOC_NR(cmd);
+	int ret = -EINVAL;
+	hyper_dmabuf_ioctl_t func;
+	char *kdata;
+
+	/* reject ioctl numbers outside the table */
+	if (nr >= ARRAY_SIZE(hyper_dmabuf_ioctls))
+		return -EINVAL;
+
+	ioctl = &hyper_dmabuf_ioctls[nr];
+
+	func = ioctl->func;
+
+	if (unlikely(!func)) {
+		printk("no function\n");
+		return -EINVAL;
+	}
+
+	kdata = kmalloc(_IOC_SIZE(cmd), GFP_KERNEL);
+	if (!kdata) {
+		printk("no memory\n");
+		return -ENOMEM;
+	}
+
+	if (copy_from_user(kdata, (void __user *)param, _IOC_SIZE(cmd)) != 0) {
+		printk("failed to copy from user arguments\n");
+		kfree(kdata);
+		return -EFAULT;
+	}
+
+	ret = func(kdata);
+
+	if (copy_to_user((void __user *)param, kdata, _IOC_SIZE(cmd)) != 0) {
+		printk("failed to copy to user arguments\n");
+		kfree(kdata);
+		return -EFAULT;
+	}
+
+	kfree(kdata);
+
+	return ret;
+}
+
+struct device_info {
+	int curr_domain;
+};
+
+/*===============================================================================================*/
+static const struct file_operations hyper_dmabuf_driver_fops = {
+	.owner = THIS_MODULE,
+	.unlocked_ioctl = hyper_dmabuf_ioctl,
+};
+
+static struct miscdevice hyper_dmabuf_miscdev = {
+	.minor = MISC_DYNAMIC_MINOR,
+	.name = "xen/hyper_dmabuf",
+	.fops = &hyper_dmabuf_driver_fops,
+};
+
+static const char device_name[] = "hyper_dmabuf";
+
+/*===============================================================================================*/
+int register_device(void)
+{
+	int result = 0;
+
+	result = misc_register(&hyper_dmabuf_miscdev);
+
+	if (result != 0) {
+		printk(KERN_WARNING "hyper_dmabuf: driver can't be registered\n");
+		return result;
+	}
+
+	hyper_dmabuf_private.device = hyper_dmabuf_miscdev.this_device;
+
+	/* TODO: Check if there is a different way to initialize dma mask nicely */
+	dma_coerce_mask_and_coherent(hyper_dmabuf_private.device, DMA_BIT_MASK(32));
+
+	/* TODO find a way to provide parameters for below function or move that to ioctl */
+/*	err = bind_interdomain_evtchn_to_irqhandler(rdomain, evtchn,
+				src_sink_isr, PORT_NUM, "remote_domain", &info);
+	if (err < 0) {
+		printk("hyper_dmabuf: can't register interrupt handlers\n");
+		return -EFAULT;
+	}
+
+	info.irq = err;
+*/
+	return result;
+}
+
+/*-----------------------------------------------------------------------------------------------*/
+void unregister_device(void)
+{
+	printk(KERN_NOTICE "hyper_dmabuf: unregister_device() is called\n");
+	misc_deregister(&hyper_dmabuf_miscdev);
+}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
new file mode 100644
index 0000000..77a7e65
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.c
@@ -0,0 +1,119 @@
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/cdev.h>
+#include <asm/uaccess.h>
+#include <linux/hashtable.h>
+#include <linux/dma-buf.h>
+#include "hyper_dmabuf_list.h"
+
+DECLARE_HASHTABLE(hyper_dmabuf_hash_imported, MAX_ENTRY_IMPORTED);
+DECLARE_HASHTABLE(hyper_dmabuf_hash_exported, MAX_ENTRY_EXPORTED);
+
+int hyper_dmabuf_table_init()
+{
+	hash_init(hyper_dmabuf_hash_imported);
+	hash_init(hyper_dmabuf_hash_exported);
+	return 0;
+}
+
+int hyper_dmabuf_table_destroy()
+{
+	/* TODO: cleanup hyper_dmabuf_hash_imported and hyper_dmabuf_hash_exported */
+	return 0;
+}
+
+int hyper_dmabuf_register_exported(struct hyper_dmabuf_sgt_info *info)
+{
+	struct hyper_dmabuf_info_entry_exported *info_entry;
+
+	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
+
+	info_entry->info = info;
+
+	hash_add(hyper_dmabuf_hash_exported, &info_entry->node,
+		info_entry->info->hyper_dmabuf_id);
+
+	return 0;
+}
+
+int hyper_dmabuf_register_imported(struct hyper_dmabuf_imported_sgt_info* info)
+{
+	struct hyper_dmabuf_info_entry_imported *info_entry;
+
+	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
+
+	info_entry->info = info;
+
+	hash_add(hyper_dmabuf_hash_imported, &info_entry->node,
+		info_entry->info->hyper_dmabuf_id);
+
+	return 0;
+}
+
+struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(int id)
+{
+	struct hyper_dmabuf_info_entry_exported *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
+		if (info_entry->info->hyper_dmabuf_id == id)
+			return info_entry->info;
+
+	return NULL;
+}
+
+/* search for a pre-exported sgt and return its id if it exists */
+int hyper_dmabuf_find_id(struct dma_buf_attachment *attach, int domid)
+{
+	struct hyper_dmabuf_info_entry_exported *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
+		if (info_entry->info->attachment == attach &&
+		    info_entry->info->hyper_dmabuf_rdomain == domid)
+			return info_entry->info->hyper_dmabuf_id;
+
+	return -1;
+}
+
+struct hyper_dmabuf_imported_sgt_info *hyper_dmabuf_find_imported(int id)
+{
+	struct hyper_dmabuf_info_entry_imported *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
+		if (info_entry->info->hyper_dmabuf_id == id)
+			return info_entry->info;
+
+	return NULL;
+}
+
+int hyper_dmabuf_remove_exported(int id)
+{
+	struct hyper_dmabuf_info_entry_exported *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
+		if (info_entry->info->hyper_dmabuf_id == id) {
+			hash_del(&info_entry->node);
+			return 0;
+		}
+
+	return -1;
+}
+
+int hyper_dmabuf_remove_imported(int id)
+{
+	struct hyper_dmabuf_info_entry_imported *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
+		if (info_entry->info->hyper_dmabuf_id == id) {
+			hash_del(&info_entry->node);
+			return 0;
+		}
+
+	return -1;
+}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
new file mode 100644
index 0000000..869cd9a
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_list.h
@@ -0,0 +1,40 @@
+#ifndef __HYPER_DMABUF_LIST_H__
+#define __HYPER_DMABUF_LIST_H__
+
+#include "hyper_dmabuf_struct.h"
+
+/* number of bits to be used for exported dmabufs hash table */
+#define MAX_ENTRY_EXPORTED 7
+/* number of bits to be used for imported dmabufs hash table */
+#define MAX_ENTRY_IMPORTED 7
+
+struct hyper_dmabuf_info_entry_exported {
+	struct hyper_dmabuf_sgt_info *info;
+	struct hlist_node node;
+};
+
+struct hyper_dmabuf_info_entry_imported {
+	struct hyper_dmabuf_imported_sgt_info *info;
+	struct hlist_node node;
+};
+
+int hyper_dmabuf_table_init(void);
+
+int hyper_dmabuf_table_destroy(void);
+
+int hyper_dmabuf_register_exported(struct hyper_dmabuf_sgt_info *info);
+
+/* search for a pre-exported sgt and return its id if it exists */
+int hyper_dmabuf_find_id(struct dma_buf_attachment *attach, int domid);
+
+int hyper_dmabuf_register_imported(struct hyper_dmabuf_imported_sgt_info* info);
+
+struct hyper_dmabuf_sgt_info *hyper_dmabuf_find_exported(int id);
+
+struct hyper_dmabuf_imported_sgt_info *hyper_dmabuf_find_imported(int id);
+
+int hyper_dmabuf_remove_exported(int id);
+
+int hyper_dmabuf_remove_imported(int id);
+
+#endif // __HYPER_DMABUF_LIST_H__
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
new file mode 100644
index 0000000..3237e50
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -0,0 +1,212 @@
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/dma-buf.h>
+#include "hyper_dmabuf_imp.h"
+//#include "hyper_dmabuf_remote_sync.h"
+#include "xen/hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_msg.h"
+#include "hyper_dmabuf_list.h"
+
+void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
+				        enum hyper_dmabuf_command command, int *operands)
+{
+	int i;
+
+	request->request_id = hyper_dmabuf_next_req_id_export();
+	request->status = HYPER_DMABUF_REQ_NOT_RESPONDED;
+	request->command = command;
+
+	switch(command) {
+	/* as exporter, commands to importer */
+	case HYPER_DMABUF_EXPORT:
+		/* exporting pages for dmabuf */
+		/* command : HYPER_DMABUF_EXPORT,
+		 * operands0 : hyper_dmabuf_id
+		 * operands1 : number of pages to be shared
+		 * operands2 : offset of data in the first page
+		 * operands3 : length of data in the last page
+		 * operands4 : top-level reference number for shared pages
+		 * operands5~8 : Driver-specific private data (e.g. graphic buffer's meta info)
+		 */
+		for (i = 0; i < 9; i++)
+			request->operands[i] = operands[i];
+		break;
+
+	case HYPER_DMABUF_DESTROY:
+		/* destroy sg_list for hyper_dmabuf_id on remote side */
+		/* command : DMABUF_DESTROY,
+		 * operands0 : hyper_dmabuf_id
+		 */
+		request->operands[0] = operands[0];
+		break;
+
+	case HYPER_DMABUF_OPS_TO_REMOTE:
+		/* notifying dmabuf map/unmap to importer (probably not needed) */
+		/* for dmabuf synchronization */
+		break;
+
+	/* as importer, command to exporter */
+	case HYPER_DMABUF_OPS_TO_SOURCE:
+		/* notifying dmabuf map/unmap to the exporter; a map makes the driver do shadow mapping
+		 * and an unmap undoes it, for synchronization with the original exporter (e.g. i915) */
+		/* command : DMABUF_OPS_TO_SOURCE.
+		 * operands0 : hyper_dmabuf_id
+		 * operands1 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
+		 */
+		for (i = 0; i < 2; i++)
+			request->operands[i] = operands[i];
+		break;
+
+	/* requesting the other side to setup another ring channel for reverse direction */
+	case HYPER_DMABUF_EXPORTER_RING_SETUP:
+		/* command : HYPER_DMABUF_EXPORTER_RING_SETUP */
+		/* no operands needed */
+		break;
+
+	default:
+		/* no command found */
+		return;
+	}
+}
+
+int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req)
+{
+	int i, ret;
+	struct hyper_dmabuf_imported_sgt_info *imported_sgt_info;
+	struct hyper_dmabuf_sgt_info *sgt_info;
+
+	/* make sure req is not NULL (may not be needed) */
+	if (!req) {
+		return -EINVAL;
+	}
+
+	req->status = HYPER_DMABUF_REQ_PROCESSED;
+
+	switch (req->command) {
+	case HYPER_DMABUF_EXPORT:
+		/* exporting pages for dmabuf */
+		/* command : HYPER_DMABUF_EXPORT,
+		 * operands0 : hyper_dmabuf_id
+		 * operands1 : number of pages to be shared
+		 * operands2 : offset of data in the first page
+		 * operands3 : length of data in the last page
+		 * operands4 : top-level reference number for shared pages
+		 * operands5~8 : Driver-specific private data (e.g. graphic buffer's meta info)
+		 */
+		imported_sgt_info = kcalloc(1, sizeof(*imported_sgt_info), GFP_KERNEL);
+		if (!imported_sgt_info)
+			return -ENOMEM;
+
+		imported_sgt_info->hyper_dmabuf_id = req->operands[0];
+		imported_sgt_info->frst_ofst = req->operands[2];
+		imported_sgt_info->last_len = req->operands[3];
+		imported_sgt_info->nents = req->operands[1];
+		imported_sgt_info->gref = req->operands[4];
+
+		printk("DMABUF was exported\n");
+		printk("\thyper_dmabuf_id %d\n", req->operands[0]);
+		printk("\tnents %d\n", req->operands[1]);
+		printk("\tfirst offset %d\n", req->operands[2]);
+		printk("\tlast len %d\n", req->operands[3]);
+		printk("\tgrefid %d\n", req->operands[4]);
+
+		for (i = 0; i < 4; i++)
+			imported_sgt_info->private[i] = req->operands[5+i];
+
+		hyper_dmabuf_register_imported(imported_sgt_info);
+		break;
+
+	case HYPER_DMABUF_DESTROY:
+		/* destroy sg_list for hyper_dmabuf_id on remote side */
+		/* command : DMABUF_DESTROY,
+		 * operands0 : hyper_dmabuf_id
+		 */
+
+		imported_sgt_info =
+			hyper_dmabuf_find_imported(req->operands[0]);
+
+		if (imported_sgt_info) {
+			hyper_dmabuf_cleanup_imported_pages(imported_sgt_info);
+
+			hyper_dmabuf_remove_imported(req->operands[0]);
+
+			/* TODO: cleanup sgt on importer side etc */
+		}
+
+		/* Notify the exporter that the buffer is freed and it can clean it up */
+		req->status = HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP;
+		req->command = HYPER_DMABUF_DESTROY_FINISH;
+
+#if 0 /* function is not implemented yet */
+
+		ret = hyper_dmabuf_destroy_sgt(req->hyper_dmabuf_id);
+#endif
+		break;
+
+	case HYPER_DMABUF_DESTROY_FINISH:
+		/* destroy sg_list for hyper_dmabuf_id on local side */
+		/* command : DMABUF_DESTROY_FINISH,
+		 * operands0 : hyper_dmabuf_id
+		 */
+
+		/* TODO: this should be done on a workqueue, once acks are received from all importers that the buffer is no longer used */
+		sgt_info =
+			hyper_dmabuf_find_exported(req->operands[0]);
+
+		if (sgt_info) {
+			hyper_dmabuf_cleanup_gref_table(sgt_info);
+
+			/* unmap dmabuf */
+			dma_buf_unmap_attachment(sgt_info->attachment, sgt_info->sgt, DMA_BIDIRECTIONAL);
+			dma_buf_detach(sgt_info->dma_buf, sgt_info->attachment);
+			dma_buf_put(sgt_info->dma_buf);
+
+			/* TODO: Rest of cleanup, sgt cleanup etc */
+		}
+
+		break;
+
+	case HYPER_DMABUF_OPS_TO_REMOTE:
+		/* notifying dmabuf map/unmap to importer (probably not needed) */
+		/* for dmabuf synchronization */
+		break;
+
+	/* as importer, command to exporter */
+	case HYPER_DMABUF_OPS_TO_SOURCE:
+		/* notifying dmabuf map/unmap to the exporter; a map makes the driver do shadow mapping
+		 * and an unmap undoes it, for synchronization with the original exporter (e.g. i915) */
+		/* command : DMABUF_OPS_TO_SOURCE.
+		 * operands0 : hyper_dmabuf_id
+		 * operands1 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
+		 */
+		break;
+
+	/* requesting the other side to setup another ring channel for reverse direction */
+	case HYPER_DMABUF_EXPORTER_RING_SETUP:
+		/* command: HYPER_DMABUF_EXPORTER_RING_SETUP
+		 * no operands needed */
+		ret = hyper_dmabuf_exporter_ringbuf_init(domid, &req->operands[0], &req->operands[1]);
+		if (ret < 0) {
+			req->status = HYPER_DMABUF_REQ_ERROR;
+			return -EINVAL;
+		}
+
+		req->status = HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP;
+		req->command = HYPER_DMABUF_IMPORTER_RING_SETUP;
+		break;
+
+	case HYPER_DMABUF_IMPORTER_RING_SETUP:
+		/* command: HYPER_DMABUF_IMPORTER_RING_SETUP */
+		/* no operands needed */
+		ret = hyper_dmabuf_importer_ringbuf_init(domid, req->operands[0], req->operands[1]);
+		if (ret < 0)
+			return -EINVAL;
+
+		break;
+
+	default:
+		/* no matched command, nothing to do.. just return error */
+		return -EINVAL;
+	}
+
+	return req->command;
+}
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
new file mode 100644
index 0000000..44bfb70
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -0,0 +1,45 @@
+#ifndef __HYPER_DMABUF_MSG_H__
+#define __HYPER_DMABUF_MSG_H__
+
+enum hyper_dmabuf_command {
+	HYPER_DMABUF_EXPORT = 0x10,
+	HYPER_DMABUF_DESTROY,
+	HYPER_DMABUF_DESTROY_FINISH,
+	HYPER_DMABUF_OPS_TO_REMOTE,
+	HYPER_DMABUF_OPS_TO_SOURCE,
+	HYPER_DMABUF_EXPORTER_RING_SETUP, /* requesting remote domain to set up exporter's ring */
+	HYPER_DMABUF_IMPORTER_RING_SETUP, /* requesting remote domain to set up importer's ring */
+};
+
+enum hyper_dmabuf_ops {
+	HYPER_DMABUF_OPS_ATTACH = 0x1000,
+	HYPER_DMABUF_OPS_DETACH,
+	HYPER_DMABUF_OPS_MAP,
+	HYPER_DMABUF_OPS_UNMAP,
+	HYPER_DMABUF_OPS_RELEASE,
+	HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS,
+	HYPER_DMABUF_OPS_END_CPU_ACCESS,
+	HYPER_DMABUF_OPS_KMAP_ATOMIC,
+	HYPER_DMABUF_OPS_KUNMAP_ATOMIC,
+	HYPER_DMABUF_OPS_KMAP,
+	HYPER_DMABUF_OPS_KUNMAP,
+	HYPER_DMABUF_OPS_MMAP,
+	HYPER_DMABUF_OPS_VMAP,
+	HYPER_DMABUF_OPS_VUNMAP,
+};
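+
+/* these mirror the dma_buf_ops callbacks one to one; the importer forwards
+ * each local dma_buf callback to the exporting domain for synchronization */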
+
+enum hyper_dmabuf_req_feedback {
+	HYPER_DMABUF_REQ_PROCESSED = 0x100,
+	HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP,
+	HYPER_DMABUF_REQ_ERROR,
+	HYPER_DMABUF_REQ_NOT_RESPONDED
+};
+
+/* create a request packet with given command and operands */
+void hyper_dmabuf_create_request(struct hyper_dmabuf_ring_rq *request,
+                                        enum hyper_dmabuf_command command, int *operands);
+
+/* parse incoming request packet (or response) and take appropriate actions for those */
+int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_ring_rq *req);
+
+#endif // __HYPER_DMABUF_MSG_H__
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
new file mode 100644
index 0000000..a577167
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_query.h
@@ -0,0 +1,16 @@
+#ifndef __HYPER_DMABUF_QUERY_H__
+#define __HYPER_DMABUF_QUERY_H__
+
+enum hyper_dmabuf_query {
+	DMABUF_QUERY_TYPE_LIST = 0x10,
+	DMABUF_QUERY_EXPORTER,
+	DMABUF_QUERY_IMPORTER,
+	DMABUF_QUERY_SIZE
+};
+
+enum hyper_dmabuf_status {
+	EXPORTED = 0x01,
+	IMPORTED
+};
+
+#endif /* __HYPER_DMABUF_QUERY_H__ */
diff --git a/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
new file mode 100644
index 0000000..c8a2f4d
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -0,0 +1,70 @@
+#ifndef __HYPER_DMABUF_STRUCT_H__
+#define __HYPER_DMABUF_STRUCT_H__
+
+#include <xen/interface/grant_table.h>
+
+/* The importer combines the source domain id with the given hyper_dmabuf_id
+ * to make it unique in case there are multiple exporters */
+
+#define HYPER_DMABUF_ID_IMPORTER(sdomain, id) \
+	((((sdomain) & 0xFF) << 24) | ((id) & 0xFFFFFF))
+
+#define HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(id) \
+	(((id) >> 24) & 0xFF)
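+
+/* e.g. HYPER_DMABUF_ID_IMPORTER(3, 5) == 0x03000005: the source domain id
+ * occupies the top byte of the combined id */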
+
+/* each grant_ref_t is 4 bytes, so in total 4096 grant_ref_t fit
+ * in this block, meaning we can share 4KB * 4096 = 16MB of buffer
+ * (needs to be increased for large buffer use-cases such as a 4K
+ * frame buffer) */
+#define MAX_ALLOWED_NUM_PAGES_FOR_GREF_NUM_ARRAYS 4
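+/* 4 pages * 1024 refs/page = 4096 refs, assuming 4KB pages */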
+
+struct hyper_dmabuf_shared_pages_info {
+	grant_ref_t *data_refs;	/* table of refids for the shared buffer pages */
+	grant_ref_t *addr_pages; /* pages of 2nd-level addressing */
+	grant_ref_t *top_level_page; /* page of top-level addressing; contains refids of the 2nd-level pages */
+	grant_ref_t top_level_ref; /* top-level refid */
+	struct gnttab_unmap_grant_ref *unmap_ops; /* unmap ops for mapped pages */
+	struct page **data_pages; /* data pages to be unmapped */
+};
+
+/* Exporter builds pages_info before sharing pages */
+struct hyper_dmabuf_pages_info {
+	int hyper_dmabuf_id; /* unique id to reference dmabuf in source domain */
+	int hyper_dmabuf_rdomain; /* currently considering just one remote domain accessing it */
+	int frst_ofst; /* offset of data in the first page */
+	int last_len; /* length of data in the last page */
+	int nents; /* # of pages */
+	struct page **pages; /* pages that contain reference numbers of shared pages */
+};
+
+/* Both importer and exporter use this structure to point to sg lists
+ *
+ * The exporter stores references to the sgt in a hash table; it keeps
+ * these references for synchronization and tracking purposes.
+ *
+ * The importer uses this structure when exporting to other drivers in
+ * the same domain */
+struct hyper_dmabuf_sgt_info {
+	int hyper_dmabuf_id; /* unique id to reference dmabuf in remote domain */
+	int hyper_dmabuf_rdomain; /* domain importing this sgt */
+	struct sg_table *sgt; /* pointer to sgt */
+	struct dma_buf *dma_buf; /* needed to store this for freeing it later */
+	struct dma_buf_attachment *attachment; /* needed to store this for freeing it later */
+	struct hyper_dmabuf_shared_pages_info shared_pages_info;
+	int private[4]; /* device specific info (e.g. image's meta info?) */
+};
+
+/* The importer stores references to the shared pages (before mapping)
+ * in this table and maps them into its own memory map once userspace
+ * asks for a reference to the buffer */
+struct hyper_dmabuf_imported_sgt_info {
+	int hyper_dmabuf_id; /* unique id to reference dmabuf (HYPER_DMABUF_ID_IMPORTER(source domain id, exporter's hyper_dmabuf_id)) */
+	int frst_ofst;	/* start offset in shared page #1 */
+	int last_len;	/* length of data in the last shared page */
+	int nents;	/* number of pages to be shared */
+	grant_ref_t gref; /* refid of the top-level addressing page of the shared pages */
+	struct sg_table *sgt; /* sgt pointer after importing buffer */
+	struct hyper_dmabuf_shared_pages_info shared_pages_info;
+	int private[4]; /* device specific info (e.g. image's meta info?) */
+};
+
+#endif /* __HYPER_DMABUF_STRUCT_H__ */
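
The composite id built by the macros above is easy to verify by hand; a
short worked example with illustrative values:

	/* source domain 3, exporter-side id 0x123 */
	int id = HYPER_DMABUF_ID_IMPORTER(3, 0x123);
	/* id == 0x03000123: top byte is the domain, low 24 bits the id */

	int sdom = HYPER_DMABUF_ID_IMPORTER_GET_SDOMAIN_ID(id);
	/* sdom == 3 */

Note that the domain field truncates to 8 bits and the id field wraps past
0xFFFFFF, which may be worth a comment in the header.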
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
new file mode 100644
index 0000000..22f2ef0
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.c
@@ -0,0 +1,328 @@
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/workqueue.h>
+#include <xen/grant_table.h>
+#include <xen/events.h>
+#include <xen/xenbus.h>
+#include <asm/xen/page.h>
+#include "hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_xen_comm_list.h"
+#include "../hyper_dmabuf_imp.h"
+#include "../hyper_dmabuf_list.h"
+#include "../hyper_dmabuf_msg.h"
+
+static int export_req_id;
+static int import_req_id;
+
+int32_t hyper_dmabuf_get_domid(void)
+{
+	struct xenbus_transaction xbt;
+	int32_t domid;
+
+	xenbus_transaction_start(&xbt);
+
+	/* xenbus_scanf returns the number of matched items or -errno */
+	if (xenbus_scanf(xbt, "domid", "", "%d", &domid) != 1)
+		domid = -1;
+
+	xenbus_transaction_end(xbt, 0);
+
+	return domid;
+}
+
+int hyper_dmabuf_next_req_id_export(void)
+{
+	export_req_id++;
+	return export_req_id;
+}
+
+int hyper_dmabuf_next_req_id_import(void)
+{
+	import_req_id++;
+	return import_req_id;
+}
+
+/* For now, cache the latest rings as global variables. TODO: keep them in a list */
+static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *dev_id);
+static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *dev_id);
+
+/* exporter needs to generate info for page sharing */
+int hyper_dmabuf_exporter_ringbuf_init(int rdomain, grant_ref_t *refid, int *port)
+{
+	struct hyper_dmabuf_ring_info_export *ring_info;
+	struct hyper_dmabuf_sring *sring;
+	struct evtchn_alloc_unbound alloc_unbound;
+	struct evtchn_close close;
+
+	void *shared_ring;
+	int ret;
+
+	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
+	if (!ring_info)
+		return -ENOMEM;
+
+	/* from exporter to importer */
+	shared_ring = (void *)__get_free_pages(GFP_KERNEL, 1);
+	if (!shared_ring) {
+		kfree(ring_info);
+		return -ENOMEM;
+	}
+
+	sring = (struct hyper_dmabuf_sring *) shared_ring;
+
+	SHARED_RING_INIT(sring);
+
+	FRONT_RING_INIT(&(ring_info->ring_front), sring, PAGE_SIZE);
+
+	ring_info->gref_ring = gnttab_grant_foreign_access(rdomain,
+							virt_to_mfn(shared_ring), 0);
+	if (ring_info->gref_ring < 0) {
+		/* failed to get gref */
+		free_pages((unsigned long)shared_ring, 1);
+		kfree(ring_info);
+		return -EINVAL;
+	}
+
+	alloc_unbound.dom = DOMID_SELF;
+	alloc_unbound.remote_dom = rdomain;
+	ret = HYPERVISOR_event_channel_op(EVTCHNOP_alloc_unbound, &alloc_unbound);
+	if (ret != 0) {
+		printk(KERN_ERR "Cannot allocate event channel\n");
+		gnttab_end_foreign_access(ring_info->gref_ring, 0,
+					  (unsigned long)shared_ring);
+		kfree(ring_info);
+		return -EINVAL;
+	}
+
+	/* set up the interrupt */
+	ret = bind_evtchn_to_irqhandler(alloc_unbound.port,
+					hyper_dmabuf_front_ring_isr, 0,
+					NULL, (void *)ring_info);
+
+	if (ret < 0) {
+		printk(KERN_ERR "Failed to set up event channel\n");
+		close.port = alloc_unbound.port;
+		HYPERVISOR_event_channel_op(EVTCHNOP_close, &close);
+		gnttab_end_foreign_access(ring_info->gref_ring, 0,
+					  (unsigned long)shared_ring);
+		kfree(ring_info);
+		return -EINVAL;
+	}
+
+	ring_info->rdomain = rdomain;
+	ring_info->irq = ret;
+	ring_info->port = alloc_unbound.port;
+
+	/* store refid and port numbers for userspace's use */
+	*refid = ring_info->gref_ring;
+	*port = ring_info->port;
+
+	printk("%s: allocated eventchannel gref %d  port: %d  irq: %d\n", __func__,
+		ring_info->gref_ring,
+		ring_info->port,
+		ring_info->irq);
+
+	/* register ring info */
+	ret = hyper_dmabuf_register_exporter_ring(ring_info);
+
+	return ret;
+}
+
+/* importer needs to know about shared page and port numbers for ring buffer and event channel */
+int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port)
+{
+	struct hyper_dmabuf_ring_info_import *ring_info;
+	struct hyper_dmabuf_sring *sring;
+
+	struct page *shared_ring;
+
+	struct gnttab_map_grant_ref *ops;
+	struct gnttab_unmap_grant_ref *unmap_ops;
+	int ret;
+
+	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
+	if (!ring_info)
+		return -ENOMEM;
+
+	ring_info->sdomain = sdomain;
+	ring_info->evtchn = port;
+
+	ops = kmalloc(sizeof(*ops), GFP_KERNEL);
+	unmap_ops = kmalloc(sizeof(*unmap_ops), GFP_KERNEL);
+	if (!ops || !unmap_ops) {
+		kfree(ops);
+		kfree(unmap_ops);
+		kfree(ring_info);
+		return -ENOMEM;
+	}
+
+	if (gnttab_alloc_pages(1, &shared_ring)) {
+		kfree(ops);
+		kfree(unmap_ops);
+		kfree(ring_info);
+		return -EINVAL;
+	}
+
+	gnttab_set_map_op(&ops[0], (unsigned long)pfn_to_kaddr(page_to_pfn(shared_ring)),
+			GNTMAP_host_map, gref, sdomain);
+
+	ret = gnttab_map_refs(ops, NULL, &shared_ring, 1);
+	if (ret < 0) {
+		printk(KERN_ERR "Cannot map ring\n");
+		return -EINVAL;
+	}
+
+	if (ops[0].status) {
+		printk(KERN_ERR "Ring mapping failed\n");
+		return -EINVAL;
+	}
+
+	sring = (struct hyper_dmabuf_sring*) pfn_to_kaddr(page_to_pfn(shared_ring));
+
+	BACK_RING_INIT(&ring_info->ring_back, sring, PAGE_SIZE);
+
+	ret = bind_interdomain_evtchn_to_irqhandler(sdomain, port,
+						    hyper_dmabuf_back_ring_isr, 0,
+						    NULL, (void *)ring_info);
+	if (ret < 0)
+		return -EINVAL;
+
+	ring_info->irq = ret;
+
+	printk("%s: bound to eventchannel port: %d  irq: %d\n", __func__,
+		port,
+		ring_info->irq);
+
+	ret = hyper_dmabuf_register_importer_ring(ring_info);
+
+	return ret;
+}
+
+int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req)
+{
+	struct hyper_dmabuf_front_ring *ring;
+	struct hyper_dmabuf_ring_rq *new_req;
+	struct hyper_dmabuf_ring_info_export *ring_info;
+	int notify;
+
+	/* find a ring info for the channel */
+	ring_info = hyper_dmabuf_find_exporter_ring(domain);
+	if (!ring_info) {
+		printk(KERN_ERR "Can't find ring info for the channel\n");
+		return -EINVAL;
+	}
+
+	ring = &ring_info->ring_front;
+
+	if (RING_FULL(ring))
+		return -EBUSY;
+
+	new_req = RING_GET_REQUEST(ring, ring->req_prod_pvt);
+	if (!new_req) {
+		printk(KERN_ERR "NULL REQUEST\n");
+		return -EIO;
+	}
+
+	memcpy(new_req, req, sizeof(*new_req));
+
+	ring->req_prod_pvt++;
+
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(ring, notify);
+	if (notify)
+		notify_remote_via_irq(ring_info->irq);
+
+	return 0;
+}
+
+/* called by interrupt (WORKQUEUE) */
+int hyper_dmabuf_send_response(struct hyper_dmabuf_ring_rp *response, int domain)
+{
+	/* as an importer and as an exporter; not implemented yet */
+	return 0;
+}
+
+/* ISR for request from exporter (as an importer) */
+static irqreturn_t hyper_dmabuf_back_ring_isr(int irq, void *dev_id)
+{
+	RING_IDX rc, rp;
+	struct hyper_dmabuf_ring_rq request;
+	struct hyper_dmabuf_ring_rp response;
+	int notify, more_to_do = 0;
+	int ret;
+
+	struct hyper_dmabuf_ring_info_import *ring_info = (struct hyper_dmabuf_ring_info_import *)dev_id;
+	struct hyper_dmabuf_back_ring *ring;
+
+	ring = &ring_info->ring_back;
+
+	do {
+		rc = ring->req_cons;
+		rp = ring->sring->req_prod;
+
+		while (rc != rp) {
+			if (RING_REQUEST_CONS_OVERFLOW(ring, rc))
+				break;
+
+			memcpy(&request, RING_GET_REQUEST(ring, rc), sizeof(request));
+			printk(KERN_DEBUG "Got request\n");
+			ring->req_cons = ++rc;
+
+			/* TODO: it is probably better to queue multiple requests
+			 * on a linked list and let a workqueue task process them,
+			 * because we do not want to stay in the ISR for long.
+			 */
+			ret = hyper_dmabuf_msg_parse(ring_info->sdomain, &request);
+
+			if (ret > 0) {
+				/* build response */
+				memcpy(&response, &request, sizeof(response));
+
+				/* we send the modified request back as the response;
+				 * we may need only the request here.. */
+				memcpy(RING_GET_RESPONSE(ring, ring->rsp_prod_pvt), &response, sizeof(response));
+				ring->rsp_prod_pvt++;
+
+				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(ring, notify);
+
+				if (notify) {
+					printk(KERN_DEBUG "Notifying\n");
+					notify_remote_via_irq(ring_info->irq);
+				}
+			}
+
+			RING_FINAL_CHECK_FOR_REQUESTS(ring, more_to_do);
+			printk(KERN_DEBUG "Final check for requests %d\n", more_to_do);
+		}
+	} while (more_to_do);
+
+	return IRQ_HANDLED;
+}
+
+/* ISR for responses from importer */
+static irqreturn_t hyper_dmabuf_front_ring_isr(int irq, void *dev_id)
+{
+	/* the front ring only cares about responses from the back */
+	struct hyper_dmabuf_ring_rp *response;
+	RING_IDX i, rp;
+	int more_to_do, ret;
+
+	struct hyper_dmabuf_ring_info_export *ring_info = (struct hyper_dmabuf_ring_info_export *)dev_id;
+	struct hyper_dmabuf_front_ring *ring;
+	ring = &ring_info->ring_front;
+
+	do {
+		more_to_do = 0;
+		rp = ring->sring->rsp_prod;
+		for (i = ring->rsp_cons; i != rp; i++) {
+			unsigned long id;
+
+			response = RING_GET_RESPONSE(ring, i);
+			id = response->response_id;
+
+			if (response->status == HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP) {
+				/* parsing response */
+				ret = hyper_dmabuf_msg_parse(ring_info->rdomain, (struct hyper_dmabuf_ring_rq*)response);
+
+				if (ret < 0)
+					printk(KERN_ERR "error while parsing response\n");
+			} else if (response->status == HYPER_DMABUF_REQ_ERROR) {
+				printk(KERN_ERR "remote domain %d couldn't process request %d\n",
+				       ring_info->rdomain, response->command);
+			}
+
+		}
+
+		ring->rsp_cons = i;
+
+		if (i != ring->req_prod_pvt) {
+			RING_FINAL_CHECK_FOR_RESPONSES(ring, more_to_do);
+			printk("more to do %d\n", more_to_do);
+		} else {
+			ring->sring->rsp_event = i+1;
+		}
+	} while (more_to_do);
+
+	return IRQ_HANDLED;
+}
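
To make the pairing explicit: the two *_ringbuf_init() functions above are
the two halves of one handshake. A rough sketch of the intended bring-up,
assuming the grant refid and event-channel port travel out-of-band (the
exporter stores them for userspace, as noted in the code) and with domA/domB
as illustrative domain ids:

	/* in exporting domain A, targeting domain B */
	grant_ref_t refid;
	int port;
	hyper_dmabuf_exporter_ringbuf_init(domB, &refid, &port);
	/* refid and port are handed to domain B via userspace */

	/* in importing domain B, with the values received from A */
	hyper_dmabuf_importer_ringbuf_init(domA, refid, port);
	/* from here on, A's hyper_dmabuf_send_request() lands in
	 * B's hyper_dmabuf_back_ring_isr() */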
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
new file mode 100644
index 0000000..2754917
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm.h
@@ -0,0 +1,62 @@
+#ifndef __HYPER_DMABUF_XEN_COMM_H__
+#define __HYPER_DMABUF_XEN_COMM_H__
+
+#include "xen/interface/io/ring.h"
+
+#define MAX_NUMBER_OF_OPERANDS 9
+
+struct hyper_dmabuf_ring_rq {
+	unsigned int request_id;
+	unsigned int status;
+	unsigned int command;
+	unsigned int operands[MAX_NUMBER_OF_OPERANDS];
+};
+
+struct hyper_dmabuf_ring_rp {
+	unsigned int response_id;
+	unsigned int status;
+	unsigned int command;
+	unsigned int operands[MAX_NUMBER_OF_OPERANDS];
+};
+
+DEFINE_RING_TYPES(hyper_dmabuf, struct hyper_dmabuf_ring_rq, struct hyper_dmabuf_ring_rp);
+
+struct hyper_dmabuf_ring_info_export {
+	struct hyper_dmabuf_front_ring ring_front;
+	int rdomain;
+	int gref_ring;
+	int irq;
+	int port;
+};
+
+struct hyper_dmabuf_ring_info_import {
+	int sdomain;
+	int irq;
+	int evtchn;
+	struct hyper_dmabuf_back_ring ring_back;
+};
+
+/* TODO: workqueue-based message handling (see the ISR TODO in
+ * hyper_dmabuf_xen_comm.c):
+ * struct hyper_dmabuf_work {
+ *	struct hyper_dmabuf_ring_rq request;
+ *	struct work_struct msg_parse;
+ * };
+ */
+
+int32_t hyper_dmabuf_get_domid(void);
+
+int hyper_dmabuf_next_req_id_export(void);
+
+int hyper_dmabuf_next_req_id_import(void);
+
+/* exporter needs to generate info for page sharing */
+int hyper_dmabuf_exporter_ringbuf_init(int rdomain, grant_ref_t *gref, int *port);
+
+/* importer needs to know about shared page and port numbers for ring buffer and event channel */
+int hyper_dmabuf_importer_ringbuf_init(int sdomain, grant_ref_t gref, int port);
+
+/* send request to the remote domain */
+int hyper_dmabuf_send_request(int domain, struct hyper_dmabuf_ring_rq *req);
+
+/* called by interrupt (WORKQUEUE) */
+int hyper_dmabuf_send_response(struct hyper_dmabuf_ring_rp *response, int domain);
+
+#endif /* __HYPER_DMABUF_XEN_COMM_H__ */
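
A note on the generated types, since the header does not spell it out:
DEFINE_RING_TYPES(hyper_dmabuf, ...) from Xen's ring.h expands to, among
other things, the three types used above:

	/* generated by DEFINE_RING_TYPES(hyper_dmabuf, ...) */
	struct hyper_dmabuf_sring;	/* layout of the shared ring page */
	struct hyper_dmabuf_front_ring;	/* request producer / response consumer */
	struct hyper_dmabuf_back_ring;	/* request consumer / response producer */

which is why ring_info_export embeds a front ring (the exporter produces
requests) while ring_info_import embeds a back ring.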
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
new file mode 100644
index 0000000..15c9d29
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.c
@@ -0,0 +1,106 @@
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/cdev.h>
+#include <asm/uaccess.h>
+#include <linux/hashtable.h>
+#include <xen/grant_table.h>
+#include "hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_xen_comm_list.h"
+
+DECLARE_HASHTABLE(hyper_dmabuf_hash_importer_ring, MAX_ENTRY_IMPORT_RING);
+DECLARE_HASHTABLE(hyper_dmabuf_hash_exporter_ring, MAX_ENTRY_EXPORT_RING);
+
+int hyper_dmabuf_ring_table_init(void)
+{
+	hash_init(hyper_dmabuf_hash_importer_ring);
+	hash_init(hyper_dmabuf_hash_exporter_ring);
+	return 0;
+}
+
+int hyper_dmabuf_ring_table_destroy(void)
+{
+	/* TODO: cleanup tables*/
+	return 0;
+}
+
+int hyper_dmabuf_register_exporter_ring(struct hyper_dmabuf_ring_info_export *ring_info)
+{
+	struct hyper_dmabuf_exporter_ring_info *info_entry;
+
+	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
+	if (!info_entry)
+		return -ENOMEM;
+
+	info_entry->info = ring_info;
+
+	hash_add(hyper_dmabuf_hash_exporter_ring, &info_entry->node,
+		 info_entry->info->rdomain);
+
+	return 0;
+}
+
+int hyper_dmabuf_register_importer_ring(struct hyper_dmabuf_ring_info_import *ring_info)
+{
+	struct hyper_dmabuf_importer_ring_info *info_entry;
+
+	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
+	if (!info_entry)
+		return -ENOMEM;
+
+	info_entry->info = ring_info;
+
+	hash_add(hyper_dmabuf_hash_importer_ring, &info_entry->node,
+		 info_entry->info->sdomain);
+
+	return 0;
+}
+
+struct hyper_dmabuf_ring_info_export *hyper_dmabuf_find_exporter_ring(int domid)
+{
+	struct hyper_dmabuf_exporter_ring_info *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_exporter_ring, bkt, info_entry, node)
+		if (info_entry->info->rdomain == domid)
+			return info_entry->info;
+
+	return NULL;
+}
+
+struct hyper_dmabuf_ring_info_import *hyper_dmabuf_find_importer_ring(int domid)
+{
+	struct hyper_dmabuf_importer_ring_info *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_importer_ring, bkt, info_entry, node)
+		if (info_entry->info->sdomain == domid)
+			return info_entry->info;
+
+	return NULL;
+}
+
+int hyper_dmabuf_remove_exporter_ring(int domid)
+{
+	struct hyper_dmabuf_exporter_ring_info *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_exporter_ring, bkt, info_entry, node)
+		if (info_entry->info->rdomain == domid) {
+			hash_del(&info_entry->node);
+			kfree(info_entry);
+			return 0;
+		}
+
+	return -ENOENT;
+}
+
+int hyper_dmabuf_remove_importer_ring(int domid)
+{
+	struct hyper_dmabuf_importer_ring_info *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_importer_ring, bkt, info_entry, node)
+		if (info_entry->info->sdomain == domid) {
+			hash_del(&info_entry->node);
+			kfree(info_entry);
+			return 0;
+		}
+
+	return -ENOENT;
+}
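
Taken together, the registry above keeps at most one matching exporter ring
and one importer ring per remote domain id. A minimal sketch of the intended
lifecycle (hypothetical caller, error handling elided):

	hyper_dmabuf_ring_table_init();
	hyper_dmabuf_register_exporter_ring(ring_info); /* keyed by rdomain */

	/* later, on the send path */
	struct hyper_dmabuf_ring_info_export *info =
		hyper_dmabuf_find_exporter_ring(rdomain);

	/* on teardown of the channel to rdomain */
	hyper_dmabuf_remove_exporter_ring(rdomain);
	hyper_dmabuf_ring_table_destroy();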
diff --git a/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
new file mode 100644
index 0000000..5929f99
--- /dev/null
+++ b/drivers/xen/hyper_dmabuf/xen/hyper_dmabuf_xen_comm_list.h
@@ -0,0 +1,35 @@
+#ifndef __HYPER_DMABUF_XEN_COMM_LIST_H__
+#define __HYPER_DMABUF_XEN_COMM_LIST_H__
+
+/* number of bits to be used for exported dmabufs hash table */
+#define MAX_ENTRY_EXPORT_RING 7
+/* number of bits to be used for imported dmabufs hash table */
+#define MAX_ENTRY_IMPORT_RING 7
+
+struct hyper_dmabuf_exporter_ring_info {
+	struct hyper_dmabuf_ring_info_export *info;
+	struct hlist_node node;
+};
+
+struct hyper_dmabuf_importer_ring_info {
+	struct hyper_dmabuf_ring_info_import *info;
+	struct hlist_node node;
+};
+
+int hyper_dmabuf_ring_table_init(void);
+
+int hyper_dmabuf_ring_table_destroy(void);
+
+int hyper_dmabuf_register_exporter_ring(struct hyper_dmabuf_ring_info_export *ring_info);
+
+int hyper_dmabuf_register_importer_ring(struct hyper_dmabuf_ring_info_import *ring_info);
+
+struct hyper_dmabuf_ring_info_export *hyper_dmabuf_find_exporter_ring(int domid);
+
+struct hyper_dmabuf_ring_info_import *hyper_dmabuf_find_importer_ring(int domid);
+
+int hyper_dmabuf_remove_exporter_ring(int domid);
+
+int hyper_dmabuf_remove_importer_ring(int domid);
+
+#endif /* __HYPER_DMABUF_XEN_COMM_LIST_H__ */
-- 
2.7.4



Thread overview: 160+ messages
2017-12-19 19:29 [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 02/60] hyper_dmabuf: added a doc for hyper_dmabuf sharing Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 03/60] hyper_dmabuf: re-use dma_buf previously exported if exist Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 04/60] hyper_dmabuf: new index, k for pointing a right n-th page Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 05/60] hyper_dmabuf: skip creating a comm ch if exist for the VM Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 06/60] hyper_dmabuf: map shared pages only once when importing Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 07/60] hyper_dmabuf: message parsing done via workqueue Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 08/60] hyper_dmabuf: automatic comm channel initialization using xenstore Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 09/60] hyper_dmabuf: indirect DMA_BUF synchronization via shadowing Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 10/60] hyper_dmabuf: make sure to free memory to prevent leak Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 11/60] hyper_dmabuf: check stack before unmapping/detaching shadow DMA_BUF Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 12/60] hyper_dmabuf: two different unexporting mechanisms Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 13/60] hyper_dmabuf: postponing cleanup of hyper_DMABUF Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 14/60] hyper_dmabuf: clean-up process based on file->f_count Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 15/60] hyper_dmabuf: reusing previously released hyper_dmabuf_id Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 16/60] hyper_dmabuf: define hypervisor specific backend API Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 17/60] hyper_dmabuf: use dynamic debug macros for logging Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 18/60] hyper_dmabuf: reset comm channel when one end has disconnected Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 19/60] hyper_dmabuf: fix the case with sharing a buffer with 2 pages Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 20/60] hyper_dmabuf: optimized loop with less condition check Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 21/60] hyper_dmabuf: exposing drv information using sysfs Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 22/60] hyper_dmabuf: configure license Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 23/60] hyper_dmabuf: use CONFIG_HYPER_DMABUF_XEN instead of CONFIG_XEN Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 24/60] hyper_dmabuf: waits for resp only if WAIT_AFTER_SYNC_REQ == 1 Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 25/60] hyper_dmabuf: introduced delayed unexport Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 26/60] hyper_dmabuf: add mutexes to prevent several race conditions Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 27/60] hyper_dmabuf: use proper error codes Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 28/60] hyper_dmabuf: address several synchronization issues Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 29/60] hyper_dmabuf: make sure to release allocated buffers when exiting Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 30/60] hyper_dmabuf: free already mapped pages when error happens Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 31/60] hyper_dmabuf: built-in compilation option Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 32/60] hyper_dmabuf: make all shared pages read-only Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 33/60] hyper_dmabuf: error checking on the result of dma_buf_map_attachment Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 34/60] hyper_dmabuf: extend DMA bitmask to 64-bits Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 35/60] hyper_dmabuf: 128bit hyper_dmabuf_id with random keys Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 36/60] hyper_dmabuf: error handling when share_pages fails Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 37/60] hyper_dmabuf: implementation of query ioctl Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 38/60] hyper_dmabuf: preventing self exporting of dma_buf Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 39/60] hyper_dmabuf: correcting DMA-BUF clean-up order Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 40/60] hyper_dmabuf: do not use 'private' as field name Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 41/60] hyper_dmabuf: re-organize driver source Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 42/60] hyper_dmabuf: always generate a new random keys Dongwon Kim
2017-12-19 19:29 ` [RFC PATCH 43/60] hyper_dmabuf: fixes on memory leaks in various places Dongwon Kim
2017-12-19 19:30 ` [RFC PATCH 44/60] hyper_dmabuf: proper handling of sgt_info->priv Dongwon Kim
2017-12-19 19:30 ` [RFC PATCH 45/60] hyper_dmabuf: adding poll/read for event generation Dongwon Kim
2017-12-19 19:30 ` [RFC PATCH 46/60] hyper_dmabuf: delay auto initialization of comm_env Dongwon Kim
2017-12-19 19:30 ` [RFC PATCH 47/60] hyper_dmabuf: fix issues with event-polling Dongwon Kim
2017-12-19 19:30 ` [RFC PATCH 48/60] hyper_dmabuf: add query items for buffer private info Dongwon Kim
2017-12-19 19:30 ` [RFC PATCH 49/60] hyper_dmabuf: general clean-up and fixes Dongwon Kim
2017-12-19 19:30 ` [RFC PATCH 50/60] hyper_dmabuf: fix styling err and warns caught by checkpatch.pl Dongwon Kim
2017-12-19 19:30 ` [RFC PATCH 51/60] hyper_dmabuf: missing mutex_unlock and move spinlock Dongwon Kim
2017-12-19 19:30 ` [RFC PATCH 52/60] hyper_dmabuf: remove prefix 'hyper_dmabuf' from static func and backend APIs Dongwon Kim
2017-12-19 19:30 ` [RFC PATCH 53/60] hyper_dmabuf: define fastpath_export for exporting existing buffer Dongwon Kim
2017-12-19 19:30 ` [RFC PATCH 54/60] hyper_dmabuf: 'backend_ops' reduced to 'bknd_ops' and 'ops' to 'bknd_ops' Dongwon Kim
2017-12-19 19:30 ` [RFC PATCH 55/60] hyper_dmabuf: fixed wrong send_req call Dongwon Kim
2017-12-19 19:30 ` [RFC PATCH 56/60] hyper_dmabuf: add initialization and cleanup to bknd_ops Dongwon Kim
2017-12-19 19:30 ` [RFC PATCH 57/60] hyper_dmabuf: change type of ref to shared pages to unsigned long Dongwon Kim
2017-12-19 19:30 ` [RFC PATCH 58/60] hyper_dmabuf: move device node out of /dev/xen/ Dongwon Kim
2017-12-19 19:30 ` [RFC PATCH 59/60] hyper_dmabuf: freeing hy_drv_priv when drv init fails (v2) Dongwon Kim
2017-12-19 19:30 ` [RFC PATCH 60/60] hyper_dmabuf: move hyper_dmabuf to under drivers/dma-buf/ Dongwon Kim
2017-12-19 23:27 ` [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv Dongwon Kim
2017-12-20  8:17   ` [Xen-devel] " Juergen Gross
2018-01-10 23:21     ` Dongwon Kim
2017-12-20  8:38   ` [Xen-devel] " Oleksandr Andrushchenko
2018-01-10 23:14     ` Dongwon Kim
2017-12-20  9:59   ` Daniel Vetter
2017-12-26 18:19     ` Matt Roper
2017-12-29 13:03       ` Tomeu Vizoso
2018-01-10 23:13     ` Dongwon Kim
2018-02-15  1:34 ` Dongwon Kim
