* [RFC PATCH v2 0/9] hyper_dmabuf: Hyper_DMABUF driver
From: Dongwon Kim @ 2018-02-14  1:49 UTC
  To: linux-kernel, linaro-mm-sig, xen-devel
  Cc: dri-devel, dongwon.kim, mateuszx.potrola, sumit.semwal

This patch series contains the implementation of a new device driver,
the hyper_DMABUF driver, which provides a way to expand the boundary of
Linux DMA-BUF sharing across different VM instances on a multi-OS
platform enabled by a hypervisor (e.g. Xen).

This v2 series is essentially a refactored version of the old series
starting with "[RFC PATCH 01/60] hyper_dmabuf: initial working version
of hyper_dmabuf drv".

Implementation details of this driver are described in the reference
guide added by the second patch, "[RFC PATCH v2 2/9] hyper_dmabuf:
architecture specification and reference guide".

Attaching the 'Overview' section here as a quick summary.

------------------------------------------------------------------------------
Section 1. Overview
------------------------------------------------------------------------------

The Hyper_DMABUF driver is a Linux device driver running on multiple
Virtual Machines (VMs), which expands DMA-BUF sharing capability to VM
environments where multiple different OS instances need to share the
same physical data without copying it across VMs.

To share a DMA_BUF across VMs, an instance of the Hyper_DMABUF driver
on the exporting VM (the so-called "exporter") imports a local DMA_BUF
from the original producer of the buffer, then re-exports it to the
importing VM (the so-called "importer") with a unique ID,
hyper_dmabuf_id, for the buffer.

Another instance of the Hyper_DMABUF driver on the importer registers
the hyper_dmabuf_id, together with reference information for the shared
physical pages associated with the DMA_BUF, in its database when the
export happens.

The actual mapping of the DMA_BUF on the importer's side is done by
the Hyper_DMABUF driver when user space issues the IOCTL command to
access the shared DMA_BUF. The Hyper_DMABUF driver works as both an
importing and an exporting driver as-is, that is, no special
configuration is required. Consequently, only a single module per VM
is needed to enable cross-VM DMA_BUF exchange.
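
As a quick illustration, below is a minimal sketch of the expected
user-space flow, based on the IOCTLs and structures this series defines
in include/uapi/linux/hyper_dmabuf.h (exact ioctl encodings and any
additional fields are in that header; comm channels are assumed to be
already set up via IOCTL_HYPER_DMABUF_TX/RX_CH_SETUP, dmabuf_fd,
importer_vm and hid_from_exporter are placeholders for values the
application already has, and error handling is omitted):

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/hyper_dmabuf.h>

    /* on the exporting VM ("exporter") */
    int hyfd = open("/dev/hyper_dmabuf", O_RDWR);

    struct ioctl_hyper_dmabuf_export_remote exp = {
            .dmabuf_fd = dmabuf_fd,        /* local DMA_BUF fd from the producer */
            .remote_domain = importer_vm,  /* VM id of the importer */
    };
    ioctl(hyfd, IOCTL_HYPER_DMABUF_EXPORT_REMOTE, &exp);
    /* exp.hid now holds the hyper_dmabuf_id to pass to the importer */

    /* on the importing VM ("importer"), after receiving the hid out of band */
    struct ioctl_hyper_dmabuf_export_fd imp = {
            .hid = hid_from_exporter,
            .flags = O_RDWR,
    };
    ioctl(hyfd, IOCTL_HYPER_DMABUF_EXPORT_FD, &imp);
    /* imp.fd is a local DMA_BUF fd backed by the shared pages */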

------------------------------------------------------------------------------

There is a git repository at github.com where this series of patches is
integrated into a Linux kernel tree based on the commit:

        commit ae64f9bd1d3621b5e60d7363bc20afb46aede215
        Author: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
        Date:   Sun Dec 3 11:01:47 2017 -0500

            Linux 4.15-rc2

https://github.com/downor/linux_hyper_dmabuf.git hyper_dmabuf_integration_v4

Dongwon Kim, Mateusz Polrola (9):
  hyper_dmabuf: initial upload of hyper_dmabuf drv core framework
  hyper_dmabuf: architecture specification and reference guide
  MAINTAINERS: adding Hyper_DMABUF driver section in MAINTAINERS
  hyper_dmabuf: user private data attached to hyper_DMABUF
  hyper_dmabuf: hyper_DMABUF synchronization across VM
  hyper_dmabuf: query ioctl for retrieving various hyper_DMABUF info
  hyper_dmabuf: event-polling mechanism for detecting a new hyper_DMABUF
  hyper_dmabuf: threaded interrupt in Xen-backend
  hyper_dmabuf: default backend for XEN hypervisor

 Documentation/hyper-dmabuf-sharing.txt             | 734 ++++++++++++++++
 MAINTAINERS                                        |  11 +
 drivers/dma-buf/Kconfig                            |   2 +
 drivers/dma-buf/Makefile                           |   1 +
 drivers/dma-buf/hyper_dmabuf/Kconfig               |  50 ++
 drivers/dma-buf/hyper_dmabuf/Makefile              |  44 +
 .../backends/xen/hyper_dmabuf_xen_comm.c           | 944 +++++++++++++++++++++
 .../backends/xen/hyper_dmabuf_xen_comm.h           |  78 ++
 .../backends/xen/hyper_dmabuf_xen_comm_list.c      | 158 ++++
 .../backends/xen/hyper_dmabuf_xen_comm_list.h      |  67 ++
 .../backends/xen/hyper_dmabuf_xen_drv.c            |  46 +
 .../backends/xen/hyper_dmabuf_xen_drv.h            |  53 ++
 .../backends/xen/hyper_dmabuf_xen_shm.c            | 525 ++++++++++++
 .../backends/xen/hyper_dmabuf_xen_shm.h            |  46 +
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c    | 410 +++++++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h    | 122 +++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.c  | 122 +++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.h  |  38 +
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.c     | 135 +++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h     |  53 ++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c  | 794 +++++++++++++++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.h  |  52 ++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c   | 295 +++++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.h   |  73 ++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c    | 416 +++++++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h    |  89 ++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c    | 415 +++++++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.h    |  34 +
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.c  | 174 ++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.h  |  36 +
 .../hyper_dmabuf/hyper_dmabuf_remote_sync.c        | 324 +++++++
 .../hyper_dmabuf/hyper_dmabuf_remote_sync.h        |  32 +
 .../dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c   | 257 ++++++
 .../dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.h   |  43 +
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h | 143 ++++
 include/uapi/linux/hyper_dmabuf.h                  | 134 +++
 36 files changed, 6950 insertions(+)
 create mode 100644 Documentation/hyper-dmabuf-sharing.txt
 create mode 100644 drivers/dma-buf/hyper_dmabuf/Kconfig
 create mode 100644 drivers/dma-buf/hyper_dmabuf/Makefile
 create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm_list.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm_list.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_drv.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_drv.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_shm.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_shm.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h
 create mode 100644 include/uapi/linux/hyper_dmabuf.h

-- 
2.16.1


* [RFC PATCH v2 1/9] hyper_dmabuf: initial upload of hyper_dmabuf drv core framework
From: Dongwon Kim @ 2018-02-14  1:50 UTC
  To: linux-kernel, linaro-mm-sig, xen-devel
  Cc: dri-devel, dongwon.kim, mateuszx.potrola, sumit.semwal

Upload of the initial version of the core framework of the hyper_DMABUF
driver, enabling DMA_BUF exchange between two different VMs on a
virtualized platform based on a hypervisor such as Xen.

The hyper_DMABUF driver's primary role is to import a DMA_BUF from the
originator, then re-export it to another Linux VM so that it can be
mapped and accessed there.

This driver has two layers. One is the so-called "core framework",
which contains the driver interface and the core functions handling
export/import of new hyper_DMABUFs and their maintenance. This part of
the driver is independent of the hypervisor, so it works as-is with any
hypervisor.

The other layer is called the "hypervisor backend". This layer is the
interface between the "core framework" and the actual hypervisor,
handling memory sharing and communication. Unlike the "core framework",
every hypervisor needs its own backend interface, designed around its
native mechanisms for memory sharing and inter-VM communication.
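
A backend, in other words, is just an instance of struct
hyper_dmabuf_bknd_ops (defined in hyper_dmabuf_drv.h below) that the
core picks up at init time. As a sketch, a Xen backend would look
roughly like this (the xen_be_* names are placeholders; the real
implementation comes with the backend patch later in this series):

    static struct hyper_dmabuf_bknd_ops xen_bknd_ops = {
            .init = xen_be_init,                    /* optional */
            .cleanup = xen_be_cleanup,              /* optional */
            .get_vm_id = xen_be_get_domid,
            .share_pages = xen_be_share_pages,
            .unshare_pages = xen_be_unshare_pages,
            .map_shared_pages = xen_be_map_shared_pages,
            .unmap_shared_pages = xen_be_unmap_shared_pages,
            .init_comm_env = xen_be_init_comm_env,
            .destroy_comm = xen_be_destroy_comm,
            .init_rx_ch = xen_be_init_rx_ch,
            .init_tx_ch = xen_be_init_tx_ch,
            .send_req = xen_be_send_req,
    };

    /* the core framework then uses it during module init: */
    hy_drv_priv->bknd_ops = &xen_bknd_ops;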

This patch contains the first part, the "core framework", which
consists of 7 source files and 11 header files. Brief descriptions of
the source files are given below:

hyper_dmabuf_drv.c

- Linux driver interface and initialization/cleaning-up routines

hyper_dmabuf_ioctl.c

- IOCTL calls for export/import of DMA-BUFs and for creation and
  destruction of the comm channels.

hyper_dmabuf_sgl_proc.c

- Provides methods for managing DMA-BUFs for exporting and importing.
  For exporting: extraction of pages, sharing of pages via procedures
  in the "Backend", and notification of the importing VM. For
  importing, all operations related to the reconstruction of a DMA-BUF
  (with shared pages) on the importer's side are defined. (A sketch of
  the page-layout bookkeeping follows below.)
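
The page bookkeeping for an exported buffer reduces to a page array
plus three numbers describing how the data sits in it. A sketch of that
layout (struct pages_info lives in hyper_dmabuf_struct.h; the field
names below are the ones send_export_msg() uses, the types are
assumptions):

    struct pages_info {
            int frst_ofst;      /* offset of the data in the first page */
            int last_len;       /* valid bytes in the last page */
            int nents;          /* number of shared pages */
            struct page **pgs;  /* pages backing the buffer */
    };

For example, with 4 KiB pages, a 10000-byte buffer starting 512 bytes
into its first page spans nents = 3 pages, with frst_ofst = 512 and
last_len = 2320 (512 + 10000 - 2 * 4096).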

hyper_dmabuf_ops.c

- Standard DMA-BUF operations for hyper_DMABUFs reconstructed on the
  importer's side.

hyper_dmabuf_list.c

- Lists for storing exported and imported hyper_DMABUFs, used to keep
  track of remote usage of the hyper_DMABUFs currently being shared.

hyper_dmabuf_msg.c

- Defines the messages exchanged between VMs (exporter and importer)
  and the function calls for sending them and for parsing them when
  received (see the example below).
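
For example, the EXPORT message is composed as a fixed-size operand
array (layout as used by send_export_msg() in this patch; 'ref' stands
for whatever hypervisor-specific reference the backend's share_pages()
returned):

    int op[MAX_NUMBER_OF_OPERANDS] = {0};

    op[0] = exported->hid.id;          /* hyper_dmabuf id */
    op[1] = exported->hid.rng_key[0];  /* 96-bit random key, 3 words */
    op[2] = exported->hid.rng_key[1];
    op[3] = exported->hid.rng_key[2];
    op[4] = pg_info->nents;            /* page layout of the buffer */
    op[5] = pg_info->frst_ofst;
    op[6] = pg_info->last_len;
    op[7] = ref;

    hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT, &op[0]);
    bknd_ops->send_req(exported->rdomid, req, true);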

hyper_dmabuf_id.c

- Contains methods to generate and manage a "hyper_DMABUF id" for each
  hyper_DMABUF being exported. This is the global handle for a
  hyper_DMABUF, which another VM needs to know to import it (see the
  worked example below).
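
The id itself packs the exporter's domain id and a running count (see
HYPER_DMABUF_ID_CREATE in hyper_dmabuf_id.h), plus a 96-bit random key
used to validate the handle. A quick worked example:

    /* HYPER_DMABUF_ID_CREATE(domid, cnt):
     *   (((domid) & 0xFF) << 24) | ((cnt) & 0xFFFFFF)
     */
    hid.id = HYPER_DMABUF_ID_CREATE(2, 5);  /* == 0x02000005 */

    /* HYPER_DMABUF_DOM_ID(hid) recovers the domain id, 2 here;
     * hid.rng_key[0..2] are filled with get_random_bytes()
     */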

hyper_dmabuf_struct.h

- Contains the data structures for imported and exported hyper_DMABUFs

include/uapi/linux/hyper_dmabuf.h

- Contains definitions of the data types and structures referenced by
  user applications to interact with the driver

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
Signed-off-by: Mateusz Polrola <mateuszx.potrola@intel.com>
---
 drivers/dma-buf/Kconfig                            |   2 +
 drivers/dma-buf/Makefile                           |   1 +
 drivers/dma-buf/hyper_dmabuf/Kconfig               |  23 +
 drivers/dma-buf/hyper_dmabuf/Makefile              |  34 ++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c    | 254 ++++++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h    | 111 ++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.c     | 135 +++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h     |  53 ++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c  | 672 +++++++++++++++++++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.h  |  52 ++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c   | 294 +++++++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.h   |  73 +++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c    | 320 ++++++++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h    |  87 +++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c    | 264 ++++++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.h    |  34 ++
 .../dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c   | 256 ++++++++
 .../dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.h   |  43 ++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h | 131 ++++
 include/uapi/linux/hyper_dmabuf.h                  |  87 +++
 20 files changed, 2926 insertions(+)
 create mode 100644 drivers/dma-buf/hyper_dmabuf/Kconfig
 create mode 100644 drivers/dma-buf/hyper_dmabuf/Makefile
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h
 create mode 100644 include/uapi/linux/hyper_dmabuf.h

diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
index ed3b785bae37..09ccac1768e3 100644
--- a/drivers/dma-buf/Kconfig
+++ b/drivers/dma-buf/Kconfig
@@ -30,4 +30,6 @@ config SW_SYNC
 	  WARNING: improper use of this can result in deadlocking kernel
 	  drivers from userspace. Intended for test and debug only.
 
+source "drivers/dma-buf/hyper_dmabuf/Kconfig"
+
 endmenu
diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
index c33bf8863147..445749babb19 100644
--- a/drivers/dma-buf/Makefile
+++ b/drivers/dma-buf/Makefile
@@ -1,3 +1,4 @@
 obj-y := dma-buf.o dma-fence.o dma-fence-array.o reservation.o seqno-fence.o
 obj-$(CONFIG_SYNC_FILE)		+= sync_file.o
 obj-$(CONFIG_SW_SYNC)		+= sw_sync.o sync_debug.o
+obj-$(CONFIG_HYPER_DMABUF)      += hyper_dmabuf/
diff --git a/drivers/dma-buf/hyper_dmabuf/Kconfig b/drivers/dma-buf/hyper_dmabuf/Kconfig
new file mode 100644
index 000000000000..5ebf516d65eb
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/Kconfig
@@ -0,0 +1,23 @@
+menu "HYPER_DMABUF"
+
+config HYPER_DMABUF
+	tristate "Enables hyper dmabuf driver"
+	default y
+	help
+	  This option enables Hyper_DMABUF driver.
+
+	  This driver works as an abstraction layer that exports and imports
+	  DMA_BUFs to/from another virtual OS running on the same HW platform
+	  powered by a hypervisor.
+
+config HYPER_DMABUF_SYSFS
+	bool "Enable sysfs information about hyper DMA buffers"
+	default y
+	depends on HYPER_DMABUF
+	help
+	  Expose run-time information about currently imported and exported
+	  buffers registered in the EXPORT and IMPORT lists of the
+	  Hyper_DMABUF driver.
+
+	  The location of sysfs is under "...."
+
+endmenu
diff --git a/drivers/dma-buf/hyper_dmabuf/Makefile b/drivers/dma-buf/hyper_dmabuf/Makefile
new file mode 100644
index 000000000000..3908522b396a
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/Makefile
@@ -0,0 +1,34 @@
+TARGET_MODULE:=hyper_dmabuf
+
+# If we are running under the kernel build system
+ifneq ($(KERNELRELEASE),)
+	$(TARGET_MODULE)-objs := hyper_dmabuf_drv.o \
+                                 hyper_dmabuf_ioctl.o \
+                                 hyper_dmabuf_list.o \
+				 hyper_dmabuf_sgl_proc.o \
+				 hyper_dmabuf_ops.o \
+				 hyper_dmabuf_msg.o \
+				 hyper_dmabuf_id.o \
+
+obj-$(CONFIG_HYPER_DMABUF) := $(TARGET_MODULE).o
+
+# If we are running without the kernel build system
+else
+BUILDSYSTEM_DIR?=../../../
+PWD:=$(shell pwd)
+
+# run the kernel build system to build the module
+all:
+	$(MAKE) -C $(BUILDSYSTEM_DIR) M=$(PWD) modules
+
+# run the kernel build system to clean up the current directory
+clean:
+	$(MAKE) -C $(BUILDSYSTEM_DIR) M=$(PWD) clean
+
+load:
+	insmod ./$(TARGET_MODULE).ko
+
+unload:
+	rmmod ./$(TARGET_MODULE).ko
+
+endif
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c
new file mode 100644
index 000000000000..18c1cd735ea2
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -0,0 +1,254 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * SPDX-License-Identifier: (MIT OR GPL-2.0)
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/miscdevice.h>
+#include <linux/workqueue.h>
+#include <linux/slab.h>
+#include <linux/device.h>
+#include <linux/uaccess.h>
+#include <linux/poll.h>
+#include <linux/dma-buf.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_ioctl.h"
+#include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_id.h"
+
+MODULE_LICENSE("GPL and additional rights");
+MODULE_AUTHOR("Intel Corporation");
+
+struct hyper_dmabuf_private *hy_drv_priv;
+
+static void force_free(struct exported_sgt_info *exported,
+		       void *attr)
+{
+	struct ioctl_hyper_dmabuf_unexport unexport_attr;
+	struct file *filp = (struct file *)attr;
+
+	if (!filp || !exported)
+		return;
+
+	if (exported->filp == filp) {
+		dev_dbg(hy_drv_priv->dev,
+			"Forcefully releasing buffer {id:%d key:%d %d %d}\n",
+			 exported->hid.id, exported->hid.rng_key[0],
+			 exported->hid.rng_key[1], exported->hid.rng_key[2]);
+
+		unexport_attr.hid = exported->hid;
+		unexport_attr.delay_ms = 0;
+
+		hyper_dmabuf_unexport_ioctl(filp, &unexport_attr);
+	}
+}
+
+static int hyper_dmabuf_open(struct inode *inode, struct file *filp)
+{
+	int ret = 0;
+
+	/* Do not allow exclusive open */
+	if (filp->f_flags & O_EXCL)
+		return -EBUSY;
+
+	return ret;
+}
+
+static int hyper_dmabuf_release(struct inode *inode, struct file *filp)
+{
+	hyper_dmabuf_foreach_exported(force_free, filp);
+
+	return 0;
+}
+
+static const struct file_operations hyper_dmabuf_driver_fops = {
+	.owner = THIS_MODULE,
+	.open = hyper_dmabuf_open,
+	.release = hyper_dmabuf_release,
+	.unlocked_ioctl = hyper_dmabuf_ioctl,
+};
+
+static struct miscdevice hyper_dmabuf_miscdev = {
+	.minor = MISC_DYNAMIC_MINOR,
+	.name = "hyper_dmabuf",
+	.fops = &hyper_dmabuf_driver_fops,
+};
+
+static int register_device(void)
+{
+	int ret = 0;
+
+	ret = misc_register(&hyper_dmabuf_miscdev);
+
+	if (ret) {
+		pr_err("hyper_dmabuf: driver can't be registered\n");
+		return ret;
+	}
+
+	hy_drv_priv->dev = hyper_dmabuf_miscdev.this_device;
+
+	/* TODO: Check if there is a different way to initialize dma mask */
+	dma_coerce_mask_and_coherent(hy_drv_priv->dev, DMA_BIT_MASK(64));
+
+	return ret;
+}
+
+static void unregister_device(void)
+{
+	dev_info(hy_drv_priv->dev,
+		"hyper_dmabuf: %s is called\n", __func__);
+
+	misc_deregister(&hyper_dmabuf_miscdev);
+}
+
+static int __init hyper_dmabuf_drv_init(void)
+{
+	int ret = 0;
+
+	pr_notice("hyper_dmabuf: initialization started\n");
+
+	hy_drv_priv = kcalloc(1, sizeof(struct hyper_dmabuf_private),
+			      GFP_KERNEL);
+
+	if (!hy_drv_priv)
+		return -ENOMEM;
+
+	ret = register_device();
+	if (ret < 0) {
+		kfree(hy_drv_priv);
+		return ret;
+	}
+
+	/* backend ops are hypervisor-specific and get wired in by a
+	 * backend patch later in this series; bail out if none is set
+	 */
+	hy_drv_priv->bknd_ops = NULL;
+
+	if (hy_drv_priv->bknd_ops == NULL) {
+		pr_err("Hyper_dmabuf: no backend found\n");
+		unregister_device();
+		kfree(hy_drv_priv);
+		return -ENODEV;
+	}
+
+	mutex_init(&hy_drv_priv->lock);
+
+	mutex_lock(&hy_drv_priv->lock);
+
+	hy_drv_priv->initialized = false;
+
+	dev_info(hy_drv_priv->dev,
+		 "initializing database for imported/exported dmabufs\n");
+
+	hy_drv_priv->work_queue = create_workqueue("hyper_dmabuf_wqueue");
+
+	ret = hyper_dmabuf_table_init();
+	if (ret < 0) {
+		dev_err(hy_drv_priv->dev,
+			"fail to init table for exported/imported entries\n");
+		mutex_unlock(&hy_drv_priv->lock);
+		kfree(hy_drv_priv);
+		return ret;
+	}
+
+#ifdef CONFIG_HYPER_DMABUF_SYSFS
+	ret = hyper_dmabuf_register_sysfs(hy_drv_priv->dev);
+	if (ret < 0) {
+		dev_err(hy_drv_priv->dev,
+			"failed to initialize sysfs\n");
+		mutex_unlock(&hy_drv_priv->lock);
+		kfree(hy_drv_priv);
+		return ret;
+	}
+#endif
+
+	if (hy_drv_priv->bknd_ops->init) {
+		ret = hy_drv_priv->bknd_ops->init();
+
+		if (ret < 0) {
+			dev_dbg(hy_drv_priv->dev,
+				"failed to initialize backend.\n");
+			mutex_unlock(&hy_drv_priv->lock);
+			kfree(hy_drv_priv);
+			return ret;
+		}
+	}
+
+	hy_drv_priv->domid = hy_drv_priv->bknd_ops->get_vm_id();
+
+	ret = hy_drv_priv->bknd_ops->init_comm_env();
+	if (ret < 0) {
+		dev_dbg(hy_drv_priv->dev,
+			"failed to initialize comm-env.\n");
+	} else {
+		hy_drv_priv->initialized = true;
+	}
+
+	mutex_unlock(&hy_drv_priv->lock);
+
+	dev_info(hy_drv_priv->dev,
+		"Finishing up initialization of hyper_dmabuf drv\n");
+
+	/* interrupt for comm should be registered here: */
+	return ret;
+}
+
+static void hyper_dmabuf_drv_exit(void)
+{
+#ifdef CONFIG_HYPER_DMABUF_SYSFS
+	hyper_dmabuf_unregister_sysfs(hy_drv_priv->dev);
+#endif
+
+	mutex_lock(&hy_drv_priv->lock);
+
+	/* hash tables for export/import entries and ring_infos */
+	hyper_dmabuf_table_destroy();
+
+	hy_drv_priv->bknd_ops->destroy_comm();
+
+	if (hy_drv_priv->bknd_ops->cleanup)
+		hy_drv_priv->bknd_ops->cleanup();
+
+	/* destroy workqueue */
+	if (hy_drv_priv->work_queue)
+		destroy_workqueue(hy_drv_priv->work_queue);
+
+	/* destroy id_queue */
+	if (hy_drv_priv->id_queue)
+		hyper_dmabuf_free_hid_list();
+
+	mutex_unlock(&hy_drv_priv->lock);
+
+	dev_info(hy_drv_priv->dev,
+		 "hyper_dmabuf driver: Exiting\n");
+
+	/* deregister the device before freeing the private data it uses */
+	unregister_device();
+
+	kfree(hy_drv_priv);
+}
+
+module_init(hyper_dmabuf_drv_init);
+module_exit(hyper_dmabuf_drv_exit);
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h
new file mode 100644
index 000000000000..46119d762430
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -0,0 +1,111 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * SPDX-License-Identifier: (MIT OR GPL-2.0)
+ *
+ */
+
+#ifndef __LINUX_HYPER_DMABUF_DRV_H__
+#define __LINUX_HYPER_DMABUF_DRV_H__
+
+#include <linux/device.h>
+#include <linux/hyper_dmabuf.h>
+
+struct hyper_dmabuf_req;
+
+struct hyper_dmabuf_private {
+	struct device *dev;
+
+	/* VM(domain) id of current VM instance */
+	int domid;
+
+	/* workqueue dedicated to hyper_dmabuf driver */
+	struct workqueue_struct *work_queue;
+
+	/* list of reusable hyper_dmabuf_ids */
+	struct list_reusable_id *id_queue;
+
+	/* backend ops - hypervisor specific */
+	struct hyper_dmabuf_bknd_ops *bknd_ops;
+
+	/* device global lock */
+	/* TODO: might need a lock per resource (e.g. EXPORT LIST) */
+	struct mutex lock;
+
+	/* flag that shows whether backend is initialized */
+	bool initialized;
+
+	/* # of pending events */
+	int pending;
+};
+
+struct list_reusable_id {
+	hyper_dmabuf_id_t hid;
+	struct list_head list;
+};
+
+struct hyper_dmabuf_bknd_ops {
+	/* backend initialization routine (optional) */
+	int (*init)(void);
+
+	/* backend cleanup routine (optional) */
+	int (*cleanup)(void);
+
+	/* retrieving the id of the current virtual machine */
+	int (*get_vm_id)(void);
+
+	/* get pages shared via hypervisor-specific method */
+	int (*share_pages)(struct page **pages, int vm_id,
+			   int nents, void **refs_info);
+
+	/* make shared pages unshared via hypervisor specific method */
+	int (*unshare_pages)(void **refs_info, int nents);
+
+	/* map remotely shared pages on importer's side via
+	 * hypervisor-specific method
+	 */
+	struct page ** (*map_shared_pages)(unsigned long ref, int vm_id,
+					   int nents, void **refs_info);
+
+	/* unmap and free shared pages on importer's side via
+	 * hypervisor-specific method
+	 */
+	int (*unmap_shared_pages)(void **refs_info, int nents);
+
+	/* initialize communication environment */
+	int (*init_comm_env)(void);
+
+	void (*destroy_comm)(void);
+
+	/* upstream ch setup (receiving and responding) */
+	int (*init_rx_ch)(int vm_id);
+
+	/* downstream ch setup (transmitting and parsing responses) */
+	int (*init_tx_ch)(int vm_id);
+
+	int (*send_req)(int vm_id, struct hyper_dmabuf_req *req, int wait);
+};
+
+/* exporting global drv private info */
+extern struct hyper_dmabuf_private *hy_drv_priv;
+
+#endif /* __LINUX_HYPER_DMABUF_DRV_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.c
new file mode 100644
index 000000000000..f2e994a4957d
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.c
@@ -0,0 +1,135 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * SPDX-License-Identifier: (MIT OR GPL-2.0)
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/random.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_id.h"
+
+void hyper_dmabuf_store_hid(hyper_dmabuf_id_t hid)
+{
+	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
+	struct list_reusable_id *new_reusable;
+
+	new_reusable = kmalloc(sizeof(*new_reusable), GFP_KERNEL);
+
+	if (!new_reusable)
+		return;
+
+	new_reusable->hid = hid;
+
+	list_add(&new_reusable->list, &reusable_head->list);
+}
+
+static hyper_dmabuf_id_t get_reusable_hid(void)
+{
+	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
+	hyper_dmabuf_id_t hid = {-1, {0, 0, 0} };
+
+	/* check there is reusable id */
+	if (!list_empty(&reusable_head->list)) {
+		reusable_head = list_first_entry(&reusable_head->list,
+						 struct list_reusable_id,
+						 list);
+
+		list_del(&reusable_head->list);
+		hid = reusable_head->hid;
+		kfree(reusable_head);
+	}
+
+	return hid;
+}
+
+void hyper_dmabuf_free_hid_list(void)
+{
+	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
+	struct list_reusable_id *temp_head;
+
+	if (reusable_head) {
+		/* free mem for all reusable ids in the list */
+		while (!list_empty(&reusable_head->list)) {
+			temp_head = list_first_entry(&reusable_head->list,
+						     struct list_reusable_id,
+						     list);
+			list_del(&temp_head->list);
+			kfree(temp_head);
+		}
+
+		/* freeing head */
+		kfree(reusable_head);
+	}
+}
+
+hyper_dmabuf_id_t hyper_dmabuf_get_hid(void)
+{
+	static int count;
+	hyper_dmabuf_id_t hid;
+	struct list_reusable_id *reusable_head;
+
+	/* first call to hyper_dmabuf_get_hid */
+	if (count == 0) {
+		reusable_head = kmalloc(sizeof(*reusable_head), GFP_KERNEL);
+
+		if (!reusable_head)
+			return (hyper_dmabuf_id_t){-1, {0, 0, 0} };
+
+		/* list head has an invalid count */
+		reusable_head->hid.id = -1;
+		INIT_LIST_HEAD(&reusable_head->list);
+		hy_drv_priv->id_queue = reusable_head;
+	}
+
+	hid = get_reusable_hid();
+
+	/* create a new HID only if there is nothing in the reusable
+	 * id queue and count is less than the maximum allowed
+	 */
+	if (hid.id == -1 && count < HYPER_DMABUF_ID_MAX)
+		hid.id = HYPER_DMABUF_ID_CREATE(hy_drv_priv->domid, count++);
+
+	/* random data embedded in the id for security */
+	get_random_bytes(&hid.rng_key[0], 12);
+
+	return hid;
+}
+
+bool hyper_dmabuf_hid_keycomp(hyper_dmabuf_id_t hid1, hyper_dmabuf_id_t hid2)
+{
+	int i;
+
+	/* compare keys */
+	for (i = 0; i < 3; i++) {
+		if (hid1.rng_key[i] != hid2.rng_key[i])
+			return false;
+	}
+
+	return true;
+}
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h
new file mode 100644
index 000000000000..11f530e2c8f6
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h
@@ -0,0 +1,53 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * SPDX-License-Identifier: (MIT OR GPL-2.0)
+ *
+ */
+
+#ifndef __HYPER_DMABUF_ID_H__
+#define __HYPER_DMABUF_ID_H__
+
+#define HYPER_DMABUF_ID_CREATE(domid, cnt) \
+	((((domid) & 0xFF) << 24) | ((cnt) & 0xFFFFFF))
+
+#define HYPER_DMABUF_DOM_ID(hid) \
+	(((hid.id) >> 24) & 0xFF)
+
+/* currently maximum number of buffers shared
+ * at any given moment is limited to 1000
+ */
+#define HYPER_DMABUF_ID_MAX 1000
+
+/* adding freed hid to the reusable list */
+void hyper_dmabuf_store_hid(hyper_dmabuf_id_t hid);
+
+/* freeing the reusable list */
+void hyper_dmabuf_free_hid_list(void);
+
+/* getting a hid available to use. */
+hyper_dmabuf_id_t hyper_dmabuf_get_hid(void);
+
+/* comparing two different hid */
+bool hyper_dmabuf_hid_keycomp(hyper_dmabuf_id_t hid1, hyper_dmabuf_id_t hid2);
+
+#endif /* __HYPER_DMABUF_ID_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c
new file mode 100644
index 000000000000..020a5590a254
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -0,0 +1,672 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * SPDX-License-Identifier: (MIT OR GPL-2.0)
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include <linux/dma-buf.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_id.h"
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_ioctl.h"
+#include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_msg.h"
+#include "hyper_dmabuf_sgl_proc.h"
+#include "hyper_dmabuf_ops.h"
+
+static int hyper_dmabuf_tx_ch_setup_ioctl(struct file *filp, void *data)
+{
+	struct ioctl_hyper_dmabuf_tx_ch_setup *tx_ch_attr;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
+	int ret = 0;
+
+	if (!data) {
+		dev_err(hy_drv_priv->dev, "user data is NULL\n");
+		return -EINVAL;
+	}
+	tx_ch_attr = (struct ioctl_hyper_dmabuf_tx_ch_setup *)data;
+
+	ret = bknd_ops->init_tx_ch(tx_ch_attr->remote_domain);
+
+	return ret;
+}
+
+static int hyper_dmabuf_rx_ch_setup_ioctl(struct file *filp, void *data)
+{
+	struct ioctl_hyper_dmabuf_rx_ch_setup *rx_ch_attr;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
+	int ret = 0;
+
+	if (!data) {
+		dev_err(hy_drv_priv->dev, "user data is NULL\n");
+		return -EINVAL;
+	}
+
+	rx_ch_attr = (struct ioctl_hyper_dmabuf_rx_ch_setup *)data;
+
+	ret = bknd_ops->init_rx_ch(rx_ch_attr->source_domain);
+
+	return ret;
+}
+
+static int send_export_msg(struct exported_sgt_info *exported,
+			   struct pages_info *pg_info)
+{
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
+	struct hyper_dmabuf_req *req;
+	int op[MAX_NUMBER_OF_OPERANDS] = {0};
+	int ret, i;
+
+	/* now create request for importer via ring */
+	op[0] = exported->hid.id;
+
+	for (i = 0; i < 3; i++)
+		op[i+1] = exported->hid.rng_key[i];
+
+	if (pg_info) {
+		op[4] = pg_info->nents;
+		op[5] = pg_info->frst_ofst;
+		op[6] = pg_info->last_len;
+		op[7] = bknd_ops->share_pages(pg_info->pgs, exported->rdomid,
+					 pg_info->nents, &exported->refs_info);
+		if (op[7] < 0) {
+			dev_err(hy_drv_priv->dev, "pages sharing failed\n");
+			return op[7];
+		}
+	}
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+
+	if (!req)
+		return -ENOMEM;
+
+	/* composing a message to the importer */
+	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT, &op[0]);
+
+	ret = bknd_ops->send_req(exported->rdomid, req, true);
+
+	kfree(req);
+
+	return ret;
+}
+
+/* Fast-path exporting routine for the case where the same buffer is
+ * already exported.
+ *
+ * If the buffer is still valid and exists in the EXPORT LIST, it returns 0
+ * so that the remaining normal export process can be skipped.
+ *
+ * If "unexport" is scheduled for the buffer, it cancels it since the buffer
+ * is being re-exported.
+ *
+ * Returns '1' if re-export is needed, '0' on success, or a kernel error
+ * code if something goes wrong.
+ */
+static int fastpath_export(hyper_dmabuf_id_t hid)
+{
+	int reexport = 1;
+	int ret = 0;
+	struct exported_sgt_info *exported;
+
+	exported = hyper_dmabuf_find_exported(hid);
+
+	if (!exported)
+		return reexport;
+
+	if (exported->valid == false)
+		return reexport;
+
+	/*
+	 * Check if unexport is already scheduled for that buffer;
+	 * if so, try to cancel it. If that fails, the buffer needs
+	 * to be re-exported once again.
+	 */
+	if (exported->unexport_sched) {
+		if (!cancel_delayed_work_sync(&exported->unexport))
+			return reexport;
+
+		exported->unexport_sched = false;
+	}
+
+	return ret;
+}
+
+static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
+{
+	struct ioctl_hyper_dmabuf_export_remote *export_remote_attr =
+			(struct ioctl_hyper_dmabuf_export_remote *)data;
+	struct dma_buf *dma_buf;
+	struct dma_buf_attachment *attachment;
+	struct sg_table *sgt;
+	struct pages_info *pg_info;
+	struct exported_sgt_info *exported;
+	hyper_dmabuf_id_t hid;
+	int ret = 0;
+
+	if (hy_drv_priv->domid == export_remote_attr->remote_domain) {
+		dev_err(hy_drv_priv->dev,
+			"exporting to the same VM is not permitted\n");
+		return -EINVAL;
+	}
+
+	dma_buf = dma_buf_get(export_remote_attr->dmabuf_fd);
+
+	if (IS_ERR(dma_buf)) {
+		dev_err(hy_drv_priv->dev, "Cannot get dma buf\n");
+		return PTR_ERR(dma_buf);
+	}
+
+	/* Check whether this specific attachment was already exported
+	 * to the same domain; if so, and its sgt_info is still valid,
+	 * return the hyper_dmabuf_id of the pre-exported sgt_info
+	 */
+	hid = hyper_dmabuf_find_hid_exported(dma_buf,
+					     export_remote_attr->remote_domain);
+
+	if (hid.id != -1) {
+		ret = fastpath_export(hid);
+
+		/* return if fastpath_export succeeds or
+		 * gets some fatal error
+		 */
+		if (ret <= 0) {
+			dma_buf_put(dma_buf);
+			export_remote_attr->hid = hid;
+			return ret;
+		}
+	}
+
+	attachment = dma_buf_attach(dma_buf, hy_drv_priv->dev);
+	if (IS_ERR(attachment)) {
+		dev_err(hy_drv_priv->dev, "cannot get attachment\n");
+		ret = PTR_ERR(attachment);
+		goto fail_attach;
+	}
+
+	sgt = dma_buf_map_attachment(attachment, DMA_BIDIRECTIONAL);
+
+	if (IS_ERR(sgt)) {
+		dev_err(hy_drv_priv->dev, "cannot map attachment\n");
+		ret = PTR_ERR(sgt);
+		goto fail_map_attachment;
+	}
+
+	exported = kcalloc(1, sizeof(*exported), GFP_KERNEL);
+
+	if (!exported) {
+		ret = -ENOMEM;
+		goto fail_sgt_info_creation;
+	}
+
+	exported->hid = hyper_dmabuf_get_hid();
+
+	/* no more exported dmabuf allowed */
+	if (exported->hid.id == -1) {
+		dev_err(hy_drv_priv->dev,
+			"exceeds allowed number of dmabuf to be exported\n");
+		ret = -ENOMEM;
+		goto fail_sgt_info_creation;
+	}
+
+	exported->rdomid = export_remote_attr->remote_domain;
+	exported->dma_buf = dma_buf;
+	exported->valid = true;
+
+	exported->active_sgts = kmalloc(sizeof(struct sgt_list), GFP_KERNEL);
+	if (!exported->active_sgts) {
+		ret = -ENOMEM;
+		goto fail_map_active_sgts;
+	}
+
+	exported->active_attached = kmalloc(sizeof(struct attachment_list),
+					    GFP_KERNEL);
+	if (!exported->active_attached) {
+		ret = -ENOMEM;
+		goto fail_map_active_attached;
+	}
+
+	exported->va_kmapped = kmalloc(sizeof(struct kmap_vaddr_list),
+				       GFP_KERNEL);
+	if (!exported->va_kmapped) {
+		ret = -ENOMEM;
+		goto fail_map_va_kmapped;
+	}
+
+	exported->va_vmapped = kmalloc(sizeof(struct vmap_vaddr_list),
+				       GFP_KERNEL);
+	if (!exported->va_vmapped) {
+		ret = -ENOMEM;
+		goto fail_map_va_vmapped;
+	}
+
+	exported->active_sgts->sgt = sgt;
+	exported->active_attached->attach = attachment;
+	exported->va_kmapped->vaddr = NULL;
+	exported->va_vmapped->vaddr = NULL;
+
+	/* initialize list of sgt, attachment and vaddr for dmabuf sync
+	 * via shadow dma-buf
+	 */
+	INIT_LIST_HEAD(&exported->active_sgts->list);
+	INIT_LIST_HEAD(&exported->active_attached->list);
+	INIT_LIST_HEAD(&exported->va_kmapped->list);
+	INIT_LIST_HEAD(&exported->va_vmapped->list);
+
+	if (ret) {
+		dev_err(hy_drv_priv->dev,
+			"failed to load private data\n");
+		ret = -EINVAL;
+		goto fail_export;
+	}
+
+	pg_info = hyper_dmabuf_ext_pgs(sgt);
+	if (!pg_info) {
+		dev_err(hy_drv_priv->dev,
+			"failed to construct pg_info\n");
+		ret = -ENOMEM;
+		goto fail_export;
+	}
+
+	exported->nents = pg_info->nents;
+
+	/* now register it to export list */
+	hyper_dmabuf_register_exported(exported);
+
+	export_remote_attr->hid = exported->hid;
+
+	ret = send_export_msg(exported, pg_info);
+
+	if (ret < 0) {
+		dev_err(hy_drv_priv->dev,
+			"failed to send out the export request\n");
+		goto fail_send_request;
+	}
+
+	/* free pg_info */
+	kfree(pg_info->pgs);
+	kfree(pg_info);
+
+	exported->filp = filp;
+
+	return ret;
+
+/* Clean-up if error occurs */
+
+fail_send_request:
+	hyper_dmabuf_remove_exported(exported->hid);
+
+	/* free pg_info */
+	kfree(pg_info->pgs);
+	kfree(pg_info);
+
+fail_export:
+	kfree(exported->va_vmapped);
+
+fail_map_va_vmapped:
+	kfree(exported->va_kmapped);
+
+fail_map_va_kmapped:
+	kfree(exported->active_attached);
+
+fail_map_active_attached:
+	kfree(exported->active_sgts);
+	kfree(exported);
+
+fail_map_active_sgts:
+fail_sgt_info_creation:
+	dma_buf_unmap_attachment(attachment, sgt,
+				 DMA_BIDIRECTIONAL);
+
+fail_map_attachment:
+	dma_buf_detach(dma_buf, attachment);
+
+fail_attach:
+	dma_buf_put(dma_buf);
+
+	return ret;
+}
+
+static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
+{
+	struct ioctl_hyper_dmabuf_export_fd *export_fd_attr =
+			(struct ioctl_hyper_dmabuf_export_fd *)data;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
+	struct imported_sgt_info *imported;
+	struct hyper_dmabuf_req *req;
+	struct page **data_pgs;
+	int op[4];
+	int i;
+	int ret = 0;
+
+	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
+
+	/* look for dmabuf for the id */
+	imported = hyper_dmabuf_find_imported(export_fd_attr->hid);
+
+	/* can't find sgt from the table */
+	if (!imported) {
+		dev_err(hy_drv_priv->dev, "can't find the entry\n");
+		return -ENOENT;
+	}
+
+	mutex_lock(&hy_drv_priv->lock);
+
+	imported->importers++;
+
+	/* send notification for export_fd to exporter */
+	op[0] = imported->hid.id;
+
+	for (i = 0; i < 3; i++)
+		op[i+1] = imported->hid.rng_key[i];
+
+	dev_dbg(hy_drv_priv->dev, "Export FD of buffer {id:%d key:%d %d %d}\n",
+		imported->hid.id, imported->hid.rng_key[0],
+		imported->hid.rng_key[1], imported->hid.rng_key[2]);
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+
+	if (!req) {
+		mutex_unlock(&hy_drv_priv->lock);
+		return -ENOMEM;
+	}
+
+	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD, &op[0]);
+
+	ret = bknd_ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req, true);
+
+	if (ret < 0) {
+		/* in case of a timeout the other end will eventually
+		 * receive the request, so we need to undo it
+		 */
+		hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD_FAILED,
+					&op[0]);
+		bknd_ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid),
+				   req, false);
+		kfree(req);
+		dev_err(hy_drv_priv->dev,
+			"Failed to create sgt or notify exporter\n");
+		imported->importers--;
+		mutex_unlock(&hy_drv_priv->lock);
+		return ret;
+	}
+
+	kfree(req);
+
+	if (ret == HYPER_DMABUF_REQ_ERROR) {
+		dev_err(hy_drv_priv->dev,
+			"Buffer invalid {id:%d key:%d %d %d}, cannot import\n",
+			imported->hid.id, imported->hid.rng_key[0],
+			imported->hid.rng_key[1], imported->hid.rng_key[2]);
+
+		imported->importers--;
+		mutex_unlock(&hy_drv_priv->lock);
+		return -EINVAL;
+	}
+
+	ret = 0;
+
+	dev_dbg(hy_drv_priv->dev,
+		"Found buffer gref %d off %d\n",
+		imported->ref_handle, imported->frst_ofst);
+
+	dev_dbg(hy_drv_priv->dev,
+		"last len %d nents %d domain %d\n",
+		imported->last_len, imported->nents,
+		HYPER_DMABUF_DOM_ID(imported->hid));
+
+	if (!imported->sgt) {
+		dev_dbg(hy_drv_priv->dev,
+			"buffer {id:%d key:%d %d %d} pages not mapped yet\n",
+			imported->hid.id, imported->hid.rng_key[0],
+			imported->hid.rng_key[1], imported->hid.rng_key[2]);
+
+		data_pgs = bknd_ops->map_shared_pages(imported->ref_handle,
+					HYPER_DMABUF_DOM_ID(imported->hid),
+					imported->nents,
+					&imported->refs_info);
+
+		if (!data_pgs) {
+			dev_err(hy_drv_priv->dev,
+				"can't map pages hid {id:%d key:%d %d %d}\n",
+				imported->hid.id, imported->hid.rng_key[0],
+				imported->hid.rng_key[1],
+				imported->hid.rng_key[2]);
+
+			imported->importers--;
+
+			req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+
+			if (!req) {
+				mutex_unlock(&hy_drv_priv->lock);
+				return -ENOMEM;
+			}
+
+			hyper_dmabuf_create_req(req,
+						HYPER_DMABUF_EXPORT_FD_FAILED,
+						&op[0]);
+
+			bknd_ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid),
+					   req, false);
+			kfree(req);
+			mutex_unlock(&hy_drv_priv->lock);
+			return -EINVAL;
+		}
+
+		imported->sgt = hyper_dmabuf_create_sgt(data_pgs,
+							imported->frst_ofst,
+							imported->last_len,
+							imported->nents);
+
+	}
+
+	export_fd_attr->fd = hyper_dmabuf_export_fd(imported,
+						    export_fd_attr->flags);
+
+	if (export_fd_attr->fd < 0) {
+		/* fail to get fd */
+		ret = export_fd_attr->fd;
+	}
+
+	mutex_unlock(&hy_drv_priv->lock);
+
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
+	return ret;
+}
+
+/* unexport dmabuf from the database and send a request to the source
+ * domain to unmap it.
+ */
+static void delayed_unexport(struct work_struct *work)
+{
+	struct hyper_dmabuf_req *req;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
+	struct exported_sgt_info *exported =
+		container_of(work, struct exported_sgt_info, unexport.work);
+	int op[4];
+	int i, ret;
+
+	if (!exported)
+		return;
+
+	dev_dbg(hy_drv_priv->dev,
+		"Marking buffer {id:%d key:%d %d %d} as invalid\n",
+		exported->hid.id, exported->hid.rng_key[0],
+		exported->hid.rng_key[1], exported->hid.rng_key[2]);
+
+	/* no longer valid */
+	exported->valid = false;
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+
+	if (!req)
+		return;
+
+	op[0] = exported->hid.id;
+
+	for (i = 0; i < 3; i++)
+		op[i+1] = exported->hid.rng_key[i];
+
+	hyper_dmabuf_create_req(req, HYPER_DMABUF_NOTIFY_UNEXPORT, &op[0]);
+
+	/* Now send unexport request to remote domain, marking
+	 * that buffer should not be used anymore
+	 */
+	ret = bknd_ops->send_req(exported->rdomid, req, true);
+	if (ret < 0) {
+		dev_err(hy_drv_priv->dev,
+			"unexport message for buffer {id:%d key:%d %d %d} failed\n",
+			exported->hid.id, exported->hid.rng_key[0],
+			exported->hid.rng_key[1], exported->hid.rng_key[2]);
+	}
+
+	kfree(req);
+	exported->unexport_sched = false;
+
+	/* Clean up immediately if the buffer has never been exported by
+	 * the importer (so no SGT is constructed on the importer's side).
+	 * Otherwise it is cleaned up later in remote sync when the final
+	 * release op is called (the importer does this only when there is
+	 * no consumer of locally exported FDs).
+	 */
+	if (exported->active == 0) {
+		dev_dbg(hy_drv_priv->dev,
+			"claning up buffer {id:%d key:%d %d %d} completly\n",
+			exported->hid.id, exported->hid.rng_key[0],
+			exported->hid.rng_key[1], exported->hid.rng_key[2]);
+
+		hyper_dmabuf_cleanup_sgt_info(exported, false);
+		hyper_dmabuf_remove_exported(exported->hid);
+
+		/* register hyper_dmabuf_id to the list for reuse */
+		hyper_dmabuf_store_hid(exported->hid);
+
+		kfree(exported);
+	}
+}
+
+/* Schedule unexport of dmabuf.
+ */
+int hyper_dmabuf_unexport_ioctl(struct file *filp, void *data)
+{
+	struct ioctl_hyper_dmabuf_unexport *unexport_attr =
+			(struct ioctl_hyper_dmabuf_unexport *)data;
+	struct exported_sgt_info *exported;
+
+	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
+
+	/* find dmabuf in export list */
+	exported = hyper_dmabuf_find_exported(unexport_attr->hid);
+
+	dev_dbg(hy_drv_priv->dev,
+		"scheduling unexport of buffer {id:%d key:%d %d %d}\n",
+		unexport_attr->hid.id, unexport_attr->hid.rng_key[0],
+		unexport_attr->hid.rng_key[1], unexport_attr->hid.rng_key[2]);
+
+	/* failed to find corresponding entry in export list */
+	if (exported == NULL) {
+		unexport_attr->status = -ENOENT;
+		return -ENOENT;
+	}
+
+	if (exported->unexport_sched)
+		return 0;
+
+	exported->unexport_sched = true;
+	INIT_DELAYED_WORK(&exported->unexport, delayed_unexport);
+	schedule_delayed_work(&exported->unexport,
+			      msecs_to_jiffies(unexport_attr->delay_ms));
+
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
+	return 0;
+}
+
+const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_TX_CH_SETUP,
+			       hyper_dmabuf_tx_ch_setup_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_RX_CH_SETUP,
+			       hyper_dmabuf_rx_ch_setup_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_REMOTE,
+			       hyper_dmabuf_export_remote_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_FD,
+			       hyper_dmabuf_export_fd_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_UNEXPORT,
+			       hyper_dmabuf_unexport_ioctl, 0),
+};
+
+long hyper_dmabuf_ioctl(struct file *filp,
+			unsigned int cmd, unsigned long param)
+{
+	const struct hyper_dmabuf_ioctl_desc *ioctl = NULL;
+	unsigned int nr = _IOC_NR(cmd);
+	int ret;
+	hyper_dmabuf_ioctl_t func;
+	char *kdata;
+
+	if (nr >= ARRAY_SIZE(hyper_dmabuf_ioctls)) {
+		dev_err(hy_drv_priv->dev, "invalid ioctl\n");
+		return -EINVAL;
+	}
+
+	ioctl = &hyper_dmabuf_ioctls[nr];
+
+	func = ioctl->func;
+
+	if (unlikely(!func)) {
+		dev_err(hy_drv_priv->dev, "no function\n");
+		return -EINVAL;
+	}
+
+	kdata = kmalloc(_IOC_SIZE(cmd), GFP_KERNEL);
+	if (!kdata)
+		return -ENOMEM;
+
+	if (copy_from_user(kdata, (void __user *)param,
+			   _IOC_SIZE(cmd)) != 0) {
+		dev_err(hy_drv_priv->dev,
+			"failed to copy from user arguments\n");
+		ret = -EFAULT;
+		goto ioctl_error;
+	}
+
+	ret = func(filp, kdata);
+
+	if (copy_to_user((void __user *)param, kdata,
+			 _IOC_SIZE(cmd)) != 0) {
+		dev_err(hy_drv_priv->dev,
+			"failed to copy to user arguments\n");
+		ret = -EFAULT;
+		goto ioctl_error;
+	}
+
+ioctl_error:
+	kfree(kdata);
+
+	return ret;
+}
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.h
new file mode 100644
index 000000000000..d8090900ffa2
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.h
@@ -0,0 +1,52 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * SPDX-License-Identifier: (MIT OR GPL-2.0)
+ *
+ */
+
+#ifndef __HYPER_DMABUF_IOCTL_H__
+#define __HYPER_DMABUF_IOCTL_H__
+
+typedef int (*hyper_dmabuf_ioctl_t)(struct file *filp, void *data);
+
+struct hyper_dmabuf_ioctl_desc {
+	unsigned int cmd;
+	int flags;
+	hyper_dmabuf_ioctl_t func;
+	const char *name;
+};
+
+#define HYPER_DMABUF_IOCTL_DEF(ioctl, _func, _flags)	\
+	[_IOC_NR(ioctl)] = {				\
+			.cmd = ioctl,			\
+			.func = _func,			\
+			.flags = _flags,		\
+			.name = #ioctl			\
+	}
+
+long hyper_dmabuf_ioctl(struct file *filp,
+			unsigned int cmd, unsigned long param);
+
+int hyper_dmabuf_unexport_ioctl(struct file *filp, void *data);
+
+#endif /* __HYPER_DMABUF_IOCTL_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c
new file mode 100644
index 000000000000..f2f65a8ec47f
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c
@@ -0,0 +1,294 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * SPDX-License-Identifier: (MIT OR GPL-2.0)
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/cdev.h>
+#include <linux/hashtable.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_id.h"
+
+DECLARE_HASHTABLE(hyper_dmabuf_hash_imported, MAX_ENTRY_IMPORTED);
+DECLARE_HASHTABLE(hyper_dmabuf_hash_exported, MAX_ENTRY_EXPORTED);
+
+#ifdef CONFIG_HYPER_DMABUF_SYSFS
+static ssize_t hyper_dmabuf_imported_show(struct device *drv,
+					  struct device_attribute *attr,
+					  char *buf)
+{
+	struct list_entry_imported *info_entry;
+	int bkt;
+	ssize_t count = 0;
+	size_t total = 0;
+
+	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node) {
+		hyper_dmabuf_id_t hid = info_entry->imported->hid;
+		int nents = info_entry->imported->nents;
+		bool valid = info_entry->imported->valid;
+		int num_importers = info_entry->imported->importers;
+
+		total += nents;
+		count += scnprintf(buf + count, PAGE_SIZE - count,
+				"hid:{%d %d %d %d}, nent:%d, v:%c, numi:%d\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1],
+				hid.rng_key[2], nents, (valid ? 't' : 'f'),
+				num_importers);
+	}
+	count += scnprintf(buf + count, PAGE_SIZE - count,
+			   "total nents: %zu\n", total);
+
+	return count;
+}
+
+static ssize_t hyper_dmabuf_exported_show(struct device *drv,
+					  struct device_attribute *attr,
+					  char *buf)
+{
+	struct list_entry_exported *info_entry;
+	int bkt;
+	ssize_t count = 0;
+	size_t total = 0;
+
+	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node) {
+		hyper_dmabuf_id_t hid = info_entry->exported->hid;
+		int nents = info_entry->exported->nents;
+		bool valid = info_entry->exported->valid;
+		int importer_exported = info_entry->exported->active;
+
+		total += nents;
+		count += scnprintf(buf + count, PAGE_SIZE - count,
+				   "hid:{%d %d %d %d}, nent:%d, v:%c, ie:%d\n",
+				   hid.id, hid.rng_key[0], hid.rng_key[1],
+				   hid.rng_key[2], nents, (valid ? 't' : 'f'),
+				   importer_exported);
+	}
+	count += scnprintf(buf + count, PAGE_SIZE - count,
+			   "total nents: %zu\n", total);
+
+	return count;
+}
+
+static DEVICE_ATTR(imported, 0400, hyper_dmabuf_imported_show, NULL);
+static DEVICE_ATTR(exported, 0400, hyper_dmabuf_exported_show, NULL);
+
+int hyper_dmabuf_register_sysfs(struct device *dev)
+{
+	int err;
+
+	err = device_create_file(dev, &dev_attr_imported);
+	if (err < 0)
+		goto err1;
+	err = device_create_file(dev, &dev_attr_exported);
+	if (err < 0)
+		goto err2;
+
+	return 0;
+err2:
+	device_remove_file(dev, &dev_attr_imported);
+err1:
+	return err;
+}
+
+int hyper_dmabuf_unregister_sysfs(struct device *dev)
+{
+	device_remove_file(dev, &dev_attr_imported);
+	device_remove_file(dev, &dev_attr_exported);
+	return 0;
+}
+
+#endif
+
+int hyper_dmabuf_table_init(void)
+{
+	hash_init(hyper_dmabuf_hash_imported);
+	hash_init(hyper_dmabuf_hash_exported);
+	return 0;
+}
+
+int hyper_dmabuf_table_destroy(void)
+{
+	/* TODO: cleanup hyper_dmabuf_hash_imported
+	 * and hyper_dmabuf_hash_exported
+	 */
+	return 0;
+}
+
+int hyper_dmabuf_register_exported(struct exported_sgt_info *exported)
+{
+	struct list_entry_exported *info_entry;
+
+	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
+
+	if (!info_entry)
+		return -ENOMEM;
+
+	info_entry->exported = exported;
+
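+	/* hid.id alone is used as the hash key here; the random keys
+	 * in hid are verified separately at lookup time
+	 */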
+	hash_add(hyper_dmabuf_hash_exported, &info_entry->node,
+		 info_entry->exported->hid.id);
+
+	return 0;
+}
+
+int hyper_dmabuf_register_imported(struct imported_sgt_info *imported)
+{
+	struct list_entry_imported *info_entry;
+
+	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
+
+	if (!info_entry)
+		return -ENOMEM;
+
+	info_entry->imported = imported;
+
+	hash_add(hyper_dmabuf_hash_imported, &info_entry->node,
+		 info_entry->imported->hid.id);
+
+	return 0;
+}
+
+struct exported_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid)
+{
+	struct list_entry_exported *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
+		/* checking hid.id first */
+		if (info_entry->exported->hid.id == hid.id) {
+			/* then key is compared */
+			if (hyper_dmabuf_hid_keycomp(info_entry->exported->hid,
+						    hid))
+				return info_entry->exported;
+
+			/* if key is unmatched, given HID is invalid,
+			 * so returning NULL
+			 */
+			break;
+		}
+
+	return NULL;
+}
+
+/* search for pre-exported sgt and return its id if it exists */
+hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf,
+						 int domid)
+{
+	struct list_entry_exported *info_entry;
+	hyper_dmabuf_id_t hid = {-1, {0, 0, 0} };
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
+		if (info_entry->exported->dma_buf == dmabuf &&
+		    info_entry->exported->rdomid == domid)
+			return info_entry->exported->hid;
+
+	return hid;
+}
+
+struct imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_id_t hid)
+{
+	struct list_entry_imported *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
+		/* checking hid.id first */
+		if (info_entry->imported->hid.id == hid.id) {
+			/* then key is compared */
+			if (hyper_dmabuf_hid_keycomp(info_entry->imported->hid,
+						    hid))
+				return info_entry->imported;
+			/* if key is unmatched, given HID is invalid,
+			 * so returning NULL
+			 */
+			break;
+		}
+
+	return NULL;
+}
+
+int hyper_dmabuf_remove_exported(hyper_dmabuf_id_t hid)
+{
+	struct list_entry_exported *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
+		/* checking hid.id first */
+		if (info_entry->exported->hid.id == hid.id) {
+			/* then key is compared */
+			if (hyper_dmabuf_hid_keycomp(info_entry->exported->hid,
+						    hid)) {
+				hash_del(&info_entry->node);
+				kfree(info_entry);
+				return 0;
+			}
+
+			break;
+		}
+
+	return -ENOENT;
+}
+
+int hyper_dmabuf_remove_imported(hyper_dmabuf_id_t hid)
+{
+	struct list_entry_imported *info_entry;
+	int bkt;
+
+	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
+		/* checking hid.id first */
+		if (info_entry->imported->hid.id == hid.id) {
+			/* then key is compared */
+			if (hyper_dmabuf_hid_keycomp(info_entry->imported->hid,
+						    hid)) {
+				hash_del(&info_entry->node);
+				kfree(info_entry);
+				return 0;
+			}
+
+			break;
+		}
+
+	return -ENOENT;
+}
+
+void hyper_dmabuf_foreach_exported(
+	void (*func)(struct exported_sgt_info *, void *attr),
+	void *attr)
+{
+	struct list_entry_exported *info_entry;
+	struct hlist_node *tmp;
+	int bkt;
+
+	hash_for_each_safe(hyper_dmabuf_hash_exported, bkt, tmp,
+			info_entry, node) {
+		func(info_entry->exported, attr);
+	}
+}
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.h
new file mode 100644
index 000000000000..3c6a23ef80c6
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.h
@@ -0,0 +1,73 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * SPDX-License-Identifier: (MIT OR GPL-2.0)
+ *
+ */
+
+#ifndef __HYPER_DMABUF_LIST_H__
+#define __HYPER_DMABUF_LIST_H__
+
+#include "hyper_dmabuf_struct.h"
+
+/* number of bits to be used for exported dmabufs hash table */
+#define MAX_ENTRY_EXPORTED 7
+/* number of bits to be used for imported dmabufs hash table */
+#define MAX_ENTRY_IMPORTED 7
+
+struct list_entry_exported {
+	struct exported_sgt_info *exported;
+	struct hlist_node node;
+};
+
+struct list_entry_imported {
+	struct imported_sgt_info *imported;
+	struct hlist_node node;
+};
+
+int hyper_dmabuf_table_init(void);
+
+int hyper_dmabuf_table_destroy(void);
+
+int hyper_dmabuf_register_exported(struct exported_sgt_info *info);
+
+/* search for pre-exported sgt and return its id if it exists */
+hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf,
+						 int domid);
+
+int hyper_dmabuf_register_imported(struct imported_sgt_info *info);
+
+struct exported_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid);
+
+struct imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_id_t hid);
+
+int hyper_dmabuf_remove_exported(hyper_dmabuf_id_t hid);
+
+int hyper_dmabuf_remove_imported(hyper_dmabuf_id_t hid);
+
+void hyper_dmabuf_foreach_exported(void (*func)(struct exported_sgt_info *,
+				   void *attr), void *attr);
+
+int hyper_dmabuf_register_sysfs(struct device *dev);
+int hyper_dmabuf_unregister_sysfs(struct device *dev);
+
+#endif /* __HYPER_DMABUF_LIST_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c
new file mode 100644
index 000000000000..129b2ff2af2b
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -0,0 +1,320 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * SPDX-License-Identifier: (MIT OR GPL-2.0)
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/workqueue.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_msg.h"
+#include "hyper_dmabuf_list.h"
+
+struct cmd_process {
+	struct work_struct work;
+	struct hyper_dmabuf_req *rq;
+	int domid;
+};
+
+void hyper_dmabuf_create_req(struct hyper_dmabuf_req *req,
+			     enum hyper_dmabuf_command cmd, int *op)
+{
+	int i;
+
+	req->stat = HYPER_DMABUF_REQ_NOT_RESPONDED;
+	req->cmd = cmd;
+
+	switch (cmd) {
+	/* as exporter, commands to importer */
+	case HYPER_DMABUF_EXPORT:
+		/* exporting pages for dmabuf */
+		/* command : HYPER_DMABUF_EXPORT,
+		 * op0~op3 : hyper_dmabuf_id
+		 * op4 : number of pages to be shared
+		 * op5 : offset of data in the first page
+		 * op6 : length of data in the last page
+		 * op7 : top-level reference number for shared pages
+		 */
+
+		memcpy(&req->op[0], &op[0], 8 * sizeof(int));
+		break;
+
+	case HYPER_DMABUF_NOTIFY_UNEXPORT:
+		/* destroy sg_list for hyper_dmabuf_id on remote side */
+		/* command : HYPER_DMABUF_NOTIFY_UNEXPORT,
+		 * op0~op3 : hyper_dmabuf_id_t hid
+		 */
+
+		for (i = 0; i < 4; i++)
+			req->op[i] = op[i];
+		break;
+
+	case HYPER_DMABUF_EXPORT_FD:
+	case HYPER_DMABUF_EXPORT_FD_FAILED:
+		/* dmabuf fd is being created on imported side or importing
+		 * failed
+		 *
+		 * command : HYPER_DMABUF_EXPORT_FD or
+		 *	     HYPER_DMABUF_EXPORT_FD_FAILED,
+		 * op0~op3 : hyper_dmabuf_id
+		 */
+
+		for (i = 0; i < 4; i++)
+			req->op[i] = op[i];
+		break;
+
+	default:
+		/* no command found */
+		return;
+	}
+}
+
+static void cmd_process_work(struct work_struct *work)
+{
+	struct imported_sgt_info *imported;
+	struct cmd_process *proc = container_of(work,
+						struct cmd_process, work);
+	struct hyper_dmabuf_req *req;
+	int domid;
+	int i;
+
+	req = proc->rq;
+	domid = proc->domid;
+
+	switch (req->cmd) {
+	case HYPER_DMABUF_EXPORT:
+		/* exporting pages for dmabuf */
+		/* command : HYPER_DMABUF_EXPORT,
+		 * op0~op3 : hyper_dmabuf_id
+		 * op4 : number of pages to be shared
+		 * op5 : offset of data in the first page
+		 * op6 : length of data in the last page
+		 * op7 : top-level reference number for shared pages
+		 */
+
+		/* if nents == 0, this is a message meant only for
+		 * synchronizing priv data of an existing
+		 * imported_sgt_info, so no new entry is created
+		 */
+		if (req->op[4] == 0) {
+			hyper_dmabuf_id_t exist = {req->op[0],
+						   {req->op[1], req->op[2],
+						   req->op[3] } };
+
+			imported = hyper_dmabuf_find_imported(exist);
+
+			if (!imported) {
+				dev_err(hy_drv_priv->dev,
+					"Can't find imported sgt_info\n");
+				break;
+			}
+
+			break;
+		}
+
+		imported = kzalloc(sizeof(*imported), GFP_KERNEL);
+
+		if (!imported)
+			break;
+
+		imported->hid.id = req->op[0];
+
+		for (i = 0; i < 3; i++)
+			imported->hid.rng_key[i] = req->op[i+1];
+
+		imported->nents = req->op[4];
+		imported->frst_ofst = req->op[5];
+		imported->last_len = req->op[6];
+		imported->ref_handle = req->op[7];
+
+		dev_dbg(hy_drv_priv->dev, "DMABUF was exported\n");
+		dev_dbg(hy_drv_priv->dev, "\thid{id:%d key:%d %d %d}\n",
+			req->op[0], req->op[1], req->op[2],
+			req->op[3]);
+		dev_dbg(hy_drv_priv->dev, "\tnents %d\n", req->op[4]);
+		dev_dbg(hy_drv_priv->dev, "\tfirst offset %d\n", req->op[5]);
+		dev_dbg(hy_drv_priv->dev, "\tlast len %d\n", req->op[6]);
+		dev_dbg(hy_drv_priv->dev, "\tgrefid %d\n", req->op[7]);
+
+		imported->valid = true;
+		hyper_dmabuf_register_imported(imported);
+
+		break;
+
+	default:
+		/* shouldn't get here */
+		break;
+	}
+
+	kfree(req);
+	kfree(proc);
+}
+
+int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
+{
+	struct cmd_process *proc;
+	struct hyper_dmabuf_req *temp_req;
+	struct imported_sgt_info *imported;
+	struct exported_sgt_info *exported;
+	hyper_dmabuf_id_t hid;
+
+	if (!req) {
+		dev_err(hy_drv_priv->dev, "request is NULL\n");
+		return -EINVAL;
+	}
+
+	hid.id = req->op[0];
+	hid.rng_key[0] = req->op[1];
+	hid.rng_key[1] = req->op[2];
+	hid.rng_key[2] = req->op[3];
+
+	if ((req->cmd < HYPER_DMABUF_EXPORT) ||
+		(req->cmd > HYPER_DMABUF_NOTIFY_UNEXPORT)) {
+		dev_err(hy_drv_priv->dev, "invalid command\n");
+		return -EINVAL;
+	}
+
+	req->stat = HYPER_DMABUF_REQ_PROCESSED;
+
+	/* HYPER_DMABUF_NOTIFY_UNEXPORT requires an immediate
+	 * follow-up, so it can't be processed in the workqueue
+	 */
+	if (req->cmd == HYPER_DMABUF_NOTIFY_UNEXPORT) {
+		/* destroy sg_list for hyper_dmabuf_id on remote side */
+		/* command : HYPER_DMABUF_NOTIFY_UNEXPORT,
+		 * op0~3 : hyper_dmabuf_id
+		 */
+		dev_dbg(hy_drv_priv->dev,
+			"processing HYPER_DMABUF_NOTIFY_UNEXPORT\n");
+
+		imported = hyper_dmabuf_find_imported(hid);
+
+		if (imported) {
+			/* if anything is still using dma_buf */
+			if (imported->importers) {
+				/* Buffer is still in  use, just mark that
+				 * it should not be allowed to export its fd
+				 * anymore.
+				 */
+				imported->valid = false;
+			} else {
+				/* No one is using buffer, remove it from
+				 * imported list
+				 */
+				hyper_dmabuf_remove_imported(hid);
+				kfree(imported);
+			}
+		} else {
+			req->stat = HYPER_DMABUF_REQ_ERROR;
+		}
+
+		return req->cmd;
+	}
+
+	/* synchronous dma_buf_fd export */
+	if (req->cmd == HYPER_DMABUF_EXPORT_FD) {
+		/* find a corresponding SGT for the id */
+		dev_dbg(hy_drv_priv->dev,
+			"HYPER_DMABUF_EXPORT_FD for {id:%d key:%d %d %d}\n",
+			hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
+
+		exported = hyper_dmabuf_find_exported(hid);
+
+		if (!exported) {
+			dev_err(hy_drv_priv->dev,
+				"buffer {id:%d key:%d %d %d} not found\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1],
+				hid.rng_key[2]);
+
+			req->stat = HYPER_DMABUF_REQ_ERROR;
+		} else if (!exported->valid) {
+			dev_dbg(hy_drv_priv->dev,
+				"Buffer no longer valid {id:%d key:%d %d %d}\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1],
+				hid.rng_key[2]);
+
+			req->stat = HYPER_DMABUF_REQ_ERROR;
+		} else {
+			dev_dbg(hy_drv_priv->dev,
+				"Buffer still valid {id:%d key:%d %d %d}\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1],
+				hid.rng_key[2]);
+
+			exported->active++;
+			req->stat = HYPER_DMABUF_REQ_PROCESSED;
+		}
+		return req->cmd;
+	}
+
+	if (req->cmd == HYPER_DMABUF_EXPORT_FD_FAILED) {
+		dev_dbg(hy_drv_priv->dev,
+			"HYPER_DMABUF_EXPORT_FD_FAILED for {id:%d key:%d %d %d}\n",
+			hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
+
+		exported = hyper_dmabuf_find_exported(hid);
+
+		if (!exported) {
+			dev_err(hy_drv_priv->dev,
+				"buffer {id:%d key:%d %d %d} not found\n",
+				hid.id, hid.rng_key[0], hid.rng_key[1],
+				hid.rng_key[2]);
+
+			req->stat = HYPER_DMABUF_REQ_ERROR;
+		} else {
+			exported->active--;
+			req->stat = HYPER_DMABUF_REQ_PROCESSED;
+		}
+		return req->cmd;
+	}
+
+	dev_dbg(hy_drv_priv->dev,
+		"%s: putting request to workqueue\n", __func__);
+	temp_req = kmalloc(sizeof(*temp_req), GFP_KERNEL);
+
+	if (!temp_req)
+		return -ENOMEM;
+
+	memcpy(temp_req, req, sizeof(*temp_req));
+
+	proc = kzalloc(sizeof(*proc), GFP_KERNEL);
+
+	if (!proc) {
+		kfree(temp_req);
+		return -ENOMEM;
+	}
+
+	proc->rq = temp_req;
+	proc->domid = domid;
+
+	INIT_WORK(&(proc->work), cmd_process_work);
+
+	queue_work(hy_drv_priv->work_queue, &(proc->work));
+
+	return req->cmd;
+}
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h
new file mode 100644
index 000000000000..59f1528e9b1e
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -0,0 +1,87 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * SPDX-License-Identifier: (MIT OR GPL-2.0)
+ *
+ */
+
+#ifndef __HYPER_DMABUF_MSG_H__
+#define __HYPER_DMABUF_MSG_H__
+
+#define MAX_NUMBER_OF_OPERANDS 8
+
+struct hyper_dmabuf_req {
+	unsigned int req_id;
+	unsigned int stat;
+	unsigned int cmd;
+	unsigned int op[MAX_NUMBER_OF_OPERANDS];
+};
+
+struct hyper_dmabuf_resp {
+	unsigned int resp_id;
+	unsigned int stat;
+	unsigned int cmd;
+	unsigned int op[MAX_NUMBER_OF_OPERANDS];
+};
+
+enum hyper_dmabuf_command {
+	HYPER_DMABUF_EXPORT = 0x10,
+	HYPER_DMABUF_EXPORT_FD,
+	HYPER_DMABUF_EXPORT_FD_FAILED,
+	HYPER_DMABUF_NOTIFY_UNEXPORT,
+};
+
+enum hyper_dmabuf_ops {
+	HYPER_DMABUF_OPS_ATTACH = 0x1000,
+	HYPER_DMABUF_OPS_DETACH,
+	HYPER_DMABUF_OPS_MAP,
+	HYPER_DMABUF_OPS_UNMAP,
+	HYPER_DMABUF_OPS_RELEASE,
+	HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS,
+	HYPER_DMABUF_OPS_END_CPU_ACCESS,
+	HYPER_DMABUF_OPS_KMAP_ATOMIC,
+	HYPER_DMABUF_OPS_KUNMAP_ATOMIC,
+	HYPER_DMABUF_OPS_KMAP,
+	HYPER_DMABUF_OPS_KUNMAP,
+	HYPER_DMABUF_OPS_MMAP,
+	HYPER_DMABUF_OPS_VMAP,
+	HYPER_DMABUF_OPS_VUNMAP,
+};
+
+enum hyper_dmabuf_req_feedback {
+	HYPER_DMABUF_REQ_PROCESSED = 0x100,
+	HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP,
+	HYPER_DMABUF_REQ_ERROR,
+	HYPER_DMABUF_REQ_NOT_RESPONDED
+};
+
+/* create a request packet with given command and operands */
+void hyper_dmabuf_create_req(struct hyper_dmabuf_req *req,
+				 enum hyper_dmabuf_command command,
+				 int *operands);
+
+/* parse incoming request packet (or response) and take
+ * appropriate actions for those
+ */
+int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req);
+
+#endif /* __HYPER_DMABUF_MSG_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c
new file mode 100644
index 000000000000..b4d3c2caad73
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c
@@ -0,0 +1,264 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * SPDX-License-Identifier: (MIT OR GPL-2.0)
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/dma-buf.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_ops.h"
+#include "hyper_dmabuf_sgl_proc.h"
+#include "hyper_dmabuf_id.h"
+#include "hyper_dmabuf_msg.h"
+#include "hyper_dmabuf_list.h"
+
+#define WAIT_AFTER_SYNC_REQ 0
+#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
+
+static int dmabuf_refcount(struct dma_buf *dma_buf)
+{
+	if ((dma_buf != NULL) && (dma_buf->file != NULL))
+		return file_count(dma_buf->file);
+
+	return -EINVAL;
+}
+
+static int hyper_dmabuf_ops_attach(struct dma_buf *dmabuf,
+				   struct device *dev,
+				   struct dma_buf_attachment *attach)
+{
+	return 0;
+}
+
+static void hyper_dmabuf_ops_detach(struct dma_buf *dmabuf,
+				    struct dma_buf_attachment *attach)
+{
+}
+
+static struct sg_table *hyper_dmabuf_ops_map(
+				struct dma_buf_attachment *attachment,
+				enum dma_data_direction dir)
+{
+	struct sg_table *st;
+	struct imported_sgt_info *imported;
+	struct pages_info *pg_info;
+
+	if (!attachment->dmabuf->priv)
+		return NULL;
+
+	imported = (struct imported_sgt_info *)attachment->dmabuf->priv;
+
+	/* extract pages from sgt */
+	pg_info = hyper_dmabuf_ext_pgs(imported->sgt);
+
+	if (!pg_info)
+		return NULL;
+
+	/* create a new sg_table with extracted pages */
+	st = hyper_dmabuf_create_sgt(pg_info->pgs, pg_info->frst_ofst,
+				     pg_info->last_len, pg_info->nents);
+	if (!st)
+		goto err_free_sg;
+
+	if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir))
+		goto err_free_sg;
+
+	kfree(pg_info->pgs);
+	kfree(pg_info);
+
+	return st;
+
+err_free_sg:
+	if (st) {
+		sg_free_table(st);
+		kfree(st);
+	}
+
+	kfree(pg_info->pgs);
+	kfree(pg_info);
+
+	return NULL;
+}
+
+static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
+				   struct sg_table *sg,
+				   enum dma_data_direction dir)
+{
+	struct imported_sgt_info *imported;
+
+	if (!attachment->dmabuf->priv)
+		return;
+
+	imported = (struct imported_sgt_info *)attachment->dmabuf->priv;
+
+	dma_unmap_sg(attachment->dev, sg->sgl, sg->nents, dir);
+
+	sg_free_table(sg);
+	kfree(sg);
+}
+
+static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
+{
+	struct imported_sgt_info *imported;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
+	int finish;
+
+	if (!dma_buf->priv)
+		return;
+
+	imported = (struct imported_sgt_info *)dma_buf->priv;
+
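+	/* drop the cached dma_buf pointer once the last local
+	 * file reference to it is gone
+	 */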
+	if (!dmabuf_refcount(imported->dma_buf))
+		imported->dma_buf = NULL;
+
+	imported->importers--;
+
+	if (imported->importers == 0) {
+		bknd_ops->unmap_shared_pages(&imported->refs_info,
+					     imported->nents);
+
+		if (imported->sgt) {
+			sg_free_table(imported->sgt);
+			kfree(imported->sgt);
+			imported->sgt = NULL;
+		}
+	}
+
+	finish = !imported->valid && !imported->importers;
+
+	/*
+	 * Check if buffer is still valid and if not remove it
+	 * from imported list. That has to be done after sending
+	 * sync request
+	 */
+	if (finish) {
+		hyper_dmabuf_remove_imported(imported->hid);
+		kfree(imported);
+	}
+}
+
+static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf,
+					     enum dma_data_direction dir)
+{
+	return 0;
+}
+
+static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf,
+					   enum dma_data_direction dir)
+{
+	return 0;
+}
+
+static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf,
+					  unsigned long pgnum)
+{
+	/* TODO: NULL for now. Need to return the addr of mapped region */
+	return NULL;
+}
+
+static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf,
+					   unsigned long pgnum, void *vaddr)
+{
+}
+
+static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
+{
+	/* for now NULL.. need to return the address of mapped region */
+	return NULL;
+}
+
+static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum,
+				    void *vaddr)
+{
+}
+
+static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf,
+				 struct vm_area_struct *vma)
+{
+	return 0;
+}
+
+static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
+{
+	return NULL;
+}
+
+static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
+{
+}
+
+static const struct dma_buf_ops hyper_dmabuf_ops = {
+	.attach = hyper_dmabuf_ops_attach,
+	.detach = hyper_dmabuf_ops_detach,
+	.map_dma_buf = hyper_dmabuf_ops_map,
+	.unmap_dma_buf = hyper_dmabuf_ops_unmap,
+	.release = hyper_dmabuf_ops_release,
+	.begin_cpu_access = hyper_dmabuf_ops_begin_cpu_access,
+	.end_cpu_access = hyper_dmabuf_ops_end_cpu_access,
+	.map_atomic = hyper_dmabuf_ops_kmap_atomic,
+	.unmap_atomic = hyper_dmabuf_ops_kunmap_atomic,
+	.map = hyper_dmabuf_ops_kmap,
+	.unmap = hyper_dmabuf_ops_kunmap,
+	.mmap = hyper_dmabuf_ops_mmap,
+	.vmap = hyper_dmabuf_ops_vmap,
+	.vunmap = hyper_dmabuf_ops_vunmap,
+};
+
+/* exporting dmabuf as fd */
+int hyper_dmabuf_export_fd(struct imported_sgt_info *imported, int flags)
+{
+	int fd = -1;
+
+	/* call hyper_dmabuf_export_dmabuf and create
+	 * and bind a handle for it then release
+	 */
+	hyper_dmabuf_export_dma_buf(imported);
+
+	if (imported->dma_buf)
+		fd = dma_buf_fd(imported->dma_buf, flags);
+
+	return fd;
+}
+
+void hyper_dmabuf_export_dma_buf(struct imported_sgt_info *imported)
+{
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+
+	exp_info.ops = &hyper_dmabuf_ops;
+
+	/* multiple of PAGE_SIZE, not considering offset */
+	exp_info.size = imported->sgt->nents * PAGE_SIZE;
+	/* TODO: check whether any dma-buf flags are needed here */
+	exp_info.flags = 0;
+	exp_info.priv = imported;
+
+	imported->dma_buf = dma_buf_export(&exp_info);
+}
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.h
new file mode 100644
index 000000000000..b30367f2836b
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.h
@@ -0,0 +1,34 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * SPDX-License-Identifier: (MIT OR GPL-2.0)
+ *
+ */
+
+#ifndef __HYPER_DMABUF_OPS_H__
+#define __HYPER_DMABUF_OPS_H__
+
+int hyper_dmabuf_export_fd(struct imported_sgt_info *imported, int flags);
+
+void hyper_dmabuf_export_dma_buf(struct imported_sgt_info *imported);
+
+#endif /* __HYPER_DMABUF_OPS_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
new file mode 100644
index 000000000000..d92ae13d8a30
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
@@ -0,0 +1,256 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * SPDX-License-Identifier: (MIT OR GPL-2.0)
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/dma-buf.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_sgl_proc.h"
+
+#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
+
+/* return the total number of pages referenced by an sgt,
+ * used to pre-calculate the size of the page array
+ */
+static int get_num_pgs(struct sg_table *sgt)
+{
+	struct scatterlist *sgl;
+	int length, i;
+	/* at least one page */
+	int num_pages = 1;
+
+	sgl = sgt->sgl;
+
+	length = sgl->length - PAGE_SIZE + sgl->offset;
+
+	/* round-up */
+	num_pages += ((length + PAGE_SIZE - 1)/PAGE_SIZE);
+
+	for (i = 1; i < sgt->nents; i++) {
+		sgl = sg_next(sgl);
+
+		/* round-up */
+		num_pages += ((sgl->length + PAGE_SIZE - 1) /
+			     PAGE_SIZE);
+	}
+
+	return num_pages;
+}
+
+/* extract pages directly from struct sg_table */
+struct pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
+{
+	struct pages_info *pg_info;
+	int i, j, k;
+	int length;
+	struct scatterlist *sgl;
+
+	pg_info = kmalloc(sizeof(*pg_info), GFP_KERNEL);
+	if (!pg_info)
+		return NULL;
+
+	pg_info->pgs = kmalloc_array(get_num_pgs(sgt),
+				     sizeof(struct page *),
+				     GFP_KERNEL);
+
+	if (!pg_info->pgs) {
+		kfree(pg_info);
+		return NULL;
+	}
+
+	sgl = sgt->sgl;
+
+	pg_info->nents = 1;
+	pg_info->frst_ofst = sgl->offset;
+	pg_info->pgs[0] = sg_page(sgl);
+	length = sgl->length - PAGE_SIZE + sgl->offset;
+	i = 1;
+
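+	/* the first scatterlist entry may span multiple pages;
+	 * walk the remaining bytes one PAGE_SIZE at a time
+	 */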
+	while (length > 0) {
+		pg_info->pgs[i] = nth_page(sg_page(sgl), i);
+		length -= PAGE_SIZE;
+		pg_info->nents++;
+		i++;
+	}
+
+	for (j = 1; j < sgt->nents; j++) {
+		sgl = sg_next(sgl);
+		pg_info->pgs[i++] = sg_page(sgl);
+		length = sgl->length - PAGE_SIZE;
+		pg_info->nents++;
+		k = 1;
+
+		while (length > 0) {
+			pg_info->pgs[i++] = nth_page(sg_page(sgl), k++);
+			length -= PAGE_SIZE;
+			pg_info->nents++;
+		}
+	}
+
+	/*
+	 * length at this point will be 0 or negative, so the size
+	 * of the last page is calculated by adding it to PAGE_SIZE
+	 */
+	pg_info->last_len = PAGE_SIZE + length;
+
+	return pg_info;
+}
+
+/* create sg_table with given pages and other parameters */
+struct sg_table *hyper_dmabuf_create_sgt(struct page **pgs,
+					 int frst_ofst, int last_len,
+					 int nents)
+{
+	struct sg_table *sgt;
+	struct scatterlist *sgl;
+	int i, ret;
+
+	sgt = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
+	if (!sgt)
+		return NULL;
+
+	ret = sg_alloc_table(sgt, nents, GFP_KERNEL);
+	if (ret) {
+		/* sg_alloc_table cleans up after itself on failure */
+		kfree(sgt);
+		return NULL;
+	}
+
+	sgl = sgt->sgl;
+
+	sg_set_page(sgl, pgs[0], PAGE_SIZE-frst_ofst, frst_ofst);
+
+	for (i = 1; i < nents-1; i++) {
+		sgl = sg_next(sgl);
+		sg_set_page(sgl, pgs[i], PAGE_SIZE, 0);
+	}
+
+	/* the last entry may hold less than a full page of data */
+	if (nents > 1) {
+		sgl = sg_next(sgl);
+		sg_set_page(sgl, pgs[i], last_len, 0);
+	}
+
+	return sgt;
+}
+
+int hyper_dmabuf_cleanup_sgt_info(struct exported_sgt_info *exported,
+				  int force)
+{
+	struct sgt_list *sgtl;
+	struct attachment_list *attachl;
+	struct kmap_vaddr_list *va_kmapl;
+	struct vmap_vaddr_list *va_vmapl;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
+
+	if (!exported) {
+		dev_err(hy_drv_priv->dev, "invalid hyper_dmabuf_id\n");
+		return -EINVAL;
+	}
+
+	/* if force != 1, sgt_info can be released only if
+	 * there's no activity on exported dma-buf on importer
+	 * side.
+	 */
+	if (!force &&
+	    exported->active) {
+		dev_warn(hy_drv_priv->dev,
+			 "dma-buf is used by importer\n");
+
+		return -EPERM;
+	}
+
+	/* force == 1 is not recommended */
+	while (!list_empty(&exported->va_kmapped->list)) {
+		va_kmapl = list_first_entry(&exported->va_kmapped->list,
+					    struct kmap_vaddr_list, list);
+
+		dma_buf_kunmap(exported->dma_buf, 1, va_kmapl->vaddr);
+		list_del(&va_kmapl->list);
+		kfree(va_kmapl);
+	}
+
+	while (!list_empty(&exported->va_vmapped->list)) {
+		va_vmapl = list_first_entry(&exported->va_vmapped->list,
+					    struct vmap_vaddr_list, list);
+
+		dma_buf_vunmap(exported->dma_buf, va_vmapl->vaddr);
+		list_del(&va_vmapl->list);
+		kfree(va_vmapl);
+	}
+
+	while (!list_empty(&exported->active_sgts->list)) {
+		attachl = list_first_entry(&exported->active_attached->list,
+					   struct attachment_list, list);
+
+		sgtl = list_first_entry(&exported->active_sgts->list,
+					struct sgt_list, list);
+
+		dma_buf_unmap_attachment(attachl->attach, sgtl->sgt,
+					 DMA_BIDIRECTIONAL);
+		list_del(&sgtl->list);
+		kfree(sgtl);
+	}
+
+	while (!list_empty(&exported->active_attached->list)) {
+		attachl = list_first_entry(&exported->active_attached->list,
+					   struct attachment_list, list);
+
+		dma_buf_detach(exported->dma_buf, attachl->attach);
+		list_del(&attachl->list);
+		kfree(attachl);
+	}
+
+	/* Start cleanup of buffer in reverse order to exporting */
+	bknd_ops->unshare_pages(&exported->refs_info, exported->nents);
+
+	/* unmap dma-buf */
+	dma_buf_unmap_attachment(exported->active_attached->attach,
+				 exported->active_sgts->sgt,
+				 DMA_BIDIRECTIONAL);
+
+	/* detach dma-buf */
+	dma_buf_detach(exported->dma_buf, exported->active_attached->attach);
+
+	/* close connection to dma-buf completely */
+	dma_buf_put(exported->dma_buf);
+	exported->dma_buf = NULL;
+
+	kfree(exported->active_sgts);
+	kfree(exported->active_attached);
+	kfree(exported->va_kmapped);
+	kfree(exported->va_vmapped);
+
+	return 0;
+}
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
new file mode 100644
index 000000000000..8dbc9c3dfda4
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
@@ -0,0 +1,43 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * SPDX-License-Identifier: (MIT OR GPL-2.0)
+ *
+ */
+
+#ifndef __HYPER_DMABUF_SGL_PROC_H__
+#define __HYPER_DMABUF_SGL_PROC_H__
+
+/* extract pages directly from struct sg_table */
+struct pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt);
+
+/* create sg_table with given pages and other parameters */
+struct sg_table *hyper_dmabuf_create_sgt(struct page **pgs,
+					 int frst_ofst, int last_len,
+					 int nents);
+
+int hyper_dmabuf_cleanup_sgt_info(struct exported_sgt_info *exported,
+				  int force);
+
+void hyper_dmabuf_free_sgt(struct sg_table *sgt);
+
+#endif /* __HYPER_DMABUF_SGL_PROC_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h
new file mode 100644
index 000000000000..144e3821fbc2
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -0,0 +1,131 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * SPDX-License-Identifier: (MIT OR GPL-2.0)
+ *
+ */
+
+#ifndef __HYPER_DMABUF_STRUCT_H__
+#define __HYPER_DMABUF_STRUCT_H__
+
+/* stack of mapped sgts */
+struct sgt_list {
+	struct sg_table *sgt;
+	struct list_head list;
+};
+
+/* stack of attachments */
+struct attachment_list {
+	struct dma_buf_attachment *attach;
+	struct list_head list;
+};
+
+/* stack of vaddr mapped via kmap */
+struct kmap_vaddr_list {
+	void *vaddr;
+	struct list_head list;
+};
+
+/* stack of vaddr mapped via vmap */
+struct vmap_vaddr_list {
+	void *vaddr;
+	struct list_head list;
+};
+
+/* Exporter builds pages_info before sharing pages */
+struct pages_info {
+	int frst_ofst;
+	int last_len;
+	int nents;
+	struct page **pgs;
+};
+
+/* Exporter stores references to the sgt in a hash table.
+ * Exporter keeps these references for synchronization
+ * and tracking purposes
+ */
+struct exported_sgt_info {
+	hyper_dmabuf_id_t hid;
+
+	/* VM ID of importer */
+	int rdomid;
+
+	struct dma_buf *dma_buf;
+	int nents;
+
+	/* list for tracking activities on dma_buf */
+	struct sgt_list *active_sgts;
+	struct attachment_list *active_attached;
+	struct kmap_vaddr_list *va_kmapped;
+	struct vmap_vaddr_list *va_vmapped;
+
+	/* set to false when unexported. Importer doesn't
+	 * do a new mapping of the buffer if valid == false
+	 */
+	bool valid;
+
+	/* active == true if the buffer is actively used
+	 * (mapped) by importer
+	 */
+	int active;
+
+	/* hypervisor specific reference data for shared pages */
+	void *refs_info;
+
+	struct delayed_work unexport;
+	bool unexport_sched;
+
+	/* list for file pointers associated with all user space
+	 * application that have exported this same buffer to
+	 * another VM. This needs to be tracked to know whether
+	 * the buffer can be completely freed.
+	 */
+	struct file *filp;
+};
+
+/* imported_sgt_info contains information about an imported DMA_BUF.
+ * This info is kept in the IMPORT list and asynchronously retrieved
+ * and used to map the DMA_BUF on the importer VM's side upon an
+ * export fd ioctl request from user-space
+ */
+
+struct imported_sgt_info {
+	hyper_dmabuf_id_t hid; /* unique id for shared dmabuf imported */
+
+	/* hypervisor-specific handle to pages */
+	int ref_handle;
+
+	/* offset and size info of DMA_BUF */
+	int frst_ofst;
+	int last_len;
+	int nents;
+
+	struct dma_buf *dma_buf;
+	struct sg_table *sgt;
+
+	void *refs_info;
+	bool valid;
+	int importers;
+};
+
+#endif /* __HYPER_DMABUF_STRUCT_H__ */
diff --git a/include/uapi/linux/hyper_dmabuf.h b/include/uapi/linux/hyper_dmabuf.h
new file mode 100644
index 000000000000..caaae2da9d4d
--- /dev/null
+++ b/include/uapi/linux/hyper_dmabuf.h
@@ -0,0 +1,87 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __LINUX_PUBLIC_HYPER_DMABUF_H__
+#define __LINUX_PUBLIC_HYPER_DMABUF_H__
+
+typedef struct {
+	int id;
+	int rng_key[3]; /* 12-byte long random number */
+} hyper_dmabuf_id_t;
+
+#define IOCTL_HYPER_DMABUF_TX_CH_SETUP \
+_IOC(_IOC_NONE, 'G', 0, sizeof(struct ioctl_hyper_dmabuf_tx_ch_setup))
+struct ioctl_hyper_dmabuf_tx_ch_setup {
+	/* IN parameters */
+	/* Remote domain id */
+	int remote_domain;
+};
+
+#define IOCTL_HYPER_DMABUF_RX_CH_SETUP \
+_IOC(_IOC_NONE, 'G', 1, sizeof(struct ioctl_hyper_dmabuf_rx_ch_setup))
+struct ioctl_hyper_dmabuf_rx_ch_setup {
+	/* IN parameters */
+	/* Source domain id */
+	int source_domain;
+};
+
+#define IOCTL_HYPER_DMABUF_EXPORT_REMOTE \
+_IOC(_IOC_NONE, 'G', 2, sizeof(struct ioctl_hyper_dmabuf_export_remote))
+struct ioctl_hyper_dmabuf_export_remote {
+	/* IN parameters */
+	/* DMA buf fd to be exported */
+	int dmabuf_fd;
+	/* Domain id to which buffer should be exported */
+	int remote_domain;
+	/* exported dma buf id */
+	hyper_dmabuf_id_t hid;
+};
+
+#define IOCTL_HYPER_DMABUF_EXPORT_FD \
+_IOC(_IOC_NONE, 'G', 3, sizeof(struct ioctl_hyper_dmabuf_export_fd))
+struct ioctl_hyper_dmabuf_export_fd {
+	/* IN parameters */
+	/* hyper dmabuf id to be imported */
+	hyper_dmabuf_id_t hid;
+	/* flags */
+	int flags;
+	/* OUT parameters */
+	/* exported dma buf fd */
+	int fd;
+};
+
+#define IOCTL_HYPER_DMABUF_UNEXPORT \
+_IOC(_IOC_NONE, 'G', 4, sizeof(struct ioctl_hyper_dmabuf_unexport))
+struct ioctl_hyper_dmabuf_unexport {
+	/* IN parameters */
+	/* hyper dmabuf id to be unexported */
+	hyper_dmabuf_id_t hid;
+	/* delay in ms by which unexport processing will be postponed */
+	int delay_ms;
+	/* OUT parameters */
+	/* Status of request */
+	int status;
+};
+
+#endif /* __LINUX_PUBLIC_HYPER_DMABUF_H__ */
-- 
2.16.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [RFC PATCH v2 2/9] hyper_dmabuf: architecture specification and reference guide
  2018-02-14  1:49 [RFC PATCH v2 0/9] hyper_dmabuf: Hyper_DMABUF driver Dongwon Kim
  2018-02-14  1:50 ` [RFC PATCH v2 1/9] hyper_dmabuf: initial upload of hyper_dmabuf drv core framework Dongwon Kim
@ 2018-02-14  1:50 ` Dongwon Kim
  2018-02-23 16:15   ` [Xen-devel] " Roger Pau Monné
  2018-04-10  9:52   ` [RFC, v2, " Oleksandr Andrushchenko
  2018-02-14  1:50 ` [RFC PATCH v2 3/9] MAINTAINERS: adding Hyper_DMABUF driver section in MAINTAINERS Dongwon Kim
                   ` (7 subsequent siblings)
  9 siblings, 2 replies; 21+ messages in thread
From: Dongwon Kim @ 2018-02-14  1:50 UTC (permalink / raw)
  To: linux-kernel, linaro-mm-sig, xen-devel
  Cc: dri-devel, dongwon.kim, mateuszx.potrola, sumit.semwal

Reference document for hyper_DMABUF driver

Documentation/hyper-dmabuf-sharing.txt

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 Documentation/hyper-dmabuf-sharing.txt | 734 +++++++++++++++++++++++++++++++++
 1 file changed, 734 insertions(+)
 create mode 100644 Documentation/hyper-dmabuf-sharing.txt

diff --git a/Documentation/hyper-dmabuf-sharing.txt b/Documentation/hyper-dmabuf-sharing.txt
new file mode 100644
index 000000000000..928e411931e3
--- /dev/null
+++ b/Documentation/hyper-dmabuf-sharing.txt
@@ -0,0 +1,734 @@
+Linux Hyper DMABUF Driver
+
+------------------------------------------------------------------------------
+Section 1. Overview
+------------------------------------------------------------------------------
+
+Hyper_DMABUF driver is a Linux device driver running on multiple Virtual
+Machines (VMs), which expands DMA-BUF sharing capability to the VM environment
+where multiple different OS instances need to share the same physical data
+without data copies across VMs.
+
+To share a DMA_BUF across VMs, an instance of the Hyper_DMABUF drv on the
+exporting VM (so called, “exporter”) imports a local DMA_BUF from the original
+producer of the buffer, then re-exports it with a unique ID, hyper_dmabuf_id,
+for the buffer to the importing VM (so called, “importer”).
+
+Another instance of the Hyper_DMABUF driver on importer registers
+a hyper_dmabuf_id together with reference information for the shared physical
+pages associated with the DMA_BUF to its database when the export happens.
+
+The actual mapping of the DMA_BUF on the importer’s side is done by
+the Hyper_DMABUF driver when user space issues the IOCTL command to access
+the shared DMA_BUF. The Hyper_DMABUF driver works as both an importing and
+exporting driver as is, that is, no special configuration is required.
+Consequently, only a single module per VM is needed to enable cross-VM DMA_BUF
+exchange.
+
+------------------------------------------------------------------------------
+Section 2. Architecture
+------------------------------------------------------------------------------
+
+1. Hyper_DMABUF ID
+
+hyper_dmabuf_id is a global handle for shared DMA BUFs, which is compatible
+across VMs. It is a key used by the importer to retrieve information about
+shared Kernel pages behind the DMA_BUF structure from the IMPORT list. When
+a DMA_BUF is exported to another domain, its hyper_dmabuf_id and META data
+are also kept in the EXPORT list by the exporter for further synchronization
+of control over the DMA_BUF.
+
+hyper_dmabuf_id is “targeted”, meaning it is valid only in exporting (owner of
+the buffer) and importing VMs, where the corresponding hyper_dmabuf_id is
+stored in their database (EXPORT and IMPORT lists).
+
+A user-space application specifies the targeted VM id in the user parameter
+when it calls the IOCTL command to export shared DMA_BUF to another VM.
+
+hyper_dmabuf_id_t is a data type for hyper_dmabuf_id. It is defined as a
+16-byte data structure containing id and rng_key[3] as its elements.
+
+typedef struct {
+        int id;
+        int rng_key[3]; /* 12-byte long random number */
+} hyper_dmabuf_id_t;
+
+The first element in the hyper_dmabuf_id structure, int id, is combined data of
+a count number generated by the driver running on the exporter and
+the exporter’s ID. The VM’s ID is a one-byte value located at the field’s
+MSB in int id. The remaining three bytes in int id are reserved for a count
+number.
+
+However, there is a limit related to this count number, which is 1000.
+Therefore, only a little more than a byte starting from the LSB is actually
+used for storing this count number.
+
+#define HYPER_DMABUF_ID_CREATE(domid, id) \
+        ((((domid) & 0xFF) << 24) | ((id) & 0xFFFFFF))
+
+This limit on the count number directly means the maximum number of DMA BUFs
+that can be shared simultaneously by one VM. The second element of
+hyper_dmabuf_id, that is int rng_key[3], is an array of three integers. These
+numbers are generated by Linux’s native random number generation mechanism.
+This field is added to enhance the security of the Hyper DMABUF driver by
+maximizing the entropy of hyper_dmabuf_id (that is, preventing it from being
+guessed by a security attacker).
+
+Once DMA_BUF is no longer shared, the hyper_dmabuf_id associated with
+the DMA_BUF is released, but the count number in hyper_dmabuf_id is saved in
+the ID list for reuse. However, random keys stored in int rng_key[3] are not
+reused. Instead, those keys are always filled with freshly generated random
+keys for security.
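+
+Below is a minimal illustrative sketch (not part of the driver sources) of
+how such an ID could be composed inside the exporter, assuming the kernel’s
+get_random_bytes() is used as the source of the random keys;
+example_make_hid is a hypothetical helper introduced only for illustration:
+
+#include <linux/random.h>
+
+static hyper_dmabuf_id_t example_make_hid(int domid, int count)
+{
+        hyper_dmabuf_id_t hid;
+
+        /* one-byte domain ID in the MSB, count in the low 24 bits */
+        hid.id = HYPER_DMABUF_ID_CREATE(domid, count);
+
+        /* random keys are freshly generated for every new sharing */
+        get_random_bytes(&hid.rng_key[0], 3 * sizeof(int));
+
+        return hid;
+}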
+
+2. IOCTLs
+
+a. IOCTL_HYPER_DMABUF_TX_CH_SETUP
+
+This type of IOCTL is used for initialization of a one-directional transmit
+communication channel with a remote domain.
+
+The user space argument for this type of IOCTL is defined as:
+
+struct ioctl_hyper_dmabuf_tx_ch_setup {
+    /* IN parameters */
+    /* Remote domain id */
+    int remote_domain;
+};
+
+b. IOCTL_HYPER_DMABUF_RX_CH_SETUP
+
+This type of IOCTL is used for initialization of a one-directional receive
+communication channel with a remote domain.
+
+The user space argument for this type of IOCTL is defined as:
+
+struct ioctl_hyper_dmabuf_rx_ch_setup {
+    /* IN parameters */
+    /* Source domain id */
+    int source_domain;
+};
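+
+As an illustration, a user-space process could set up both channels with its
+peer domain as in the following sketch. The device node path
+/dev/hyper_dmabuf and the helper name example_setup_channels are assumptions
+made only for this example:
+
+#include <fcntl.h>
+#include <unistd.h>
+#include <sys/ioctl.h>
+#include <linux/hyper_dmabuf.h>
+
+int example_setup_channels(int peer_domid)
+{
+        struct ioctl_hyper_dmabuf_tx_ch_setup tx = {
+                .remote_domain = peer_domid,
+        };
+        struct ioctl_hyper_dmabuf_rx_ch_setup rx = {
+                .source_domain = peer_domid,
+        };
+        int fd = open("/dev/hyper_dmabuf", O_RDWR);
+
+        if (fd < 0)
+                return -1;
+
+        if (ioctl(fd, IOCTL_HYPER_DMABUF_TX_CH_SETUP, &tx) < 0 ||
+            ioctl(fd, IOCTL_HYPER_DMABUF_RX_CH_SETUP, &rx) < 0) {
+                close(fd);
+                return -1;
+        }
+
+        return fd;
+}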
+
+c. IOCTL_HYPER_DMABUF_EXPORT_REMOTE
+
+This type of IOCTL is used to export a DMA BUF to another VM. When a user
+space application makes this call, the driver extracts the kernel pages
+associated with the DMA_BUF, then shares them with the importing VM.
+
+All reference information for these shared pages and the hyper_dmabuf_id is
+created, then passed to the importing domain through a communication
+channel for synchronous registration. In the meantime, the hyper_dmabuf_id
+for the shared DMA_BUF is also returned to the user-space application.
+
+This IOCTL can accept a reference to “user-defined” data as well as a FD
+for the DMA BUF. This private data is then attached to the DMA BUF and
+exported together with it.
+
+More details regarding this private data can be found in the chapter on
+“Hyper_DMABUF Private Data”.
+
+The user space argument for this type of IOCTL is defined as:
+
+struct ioctl_hyper_dmabuf_export_remote {
+    /* IN parameters */
+    /* DMA buf fd to be exported */
+    int dmabuf_fd;
+    /* Domain id to which buffer should be exported */
+    int remote_domain;
+    /* exported dma buf id */
+    hyper_dmabuf_id_t hid;
+    /* size of private data */
+    int sz_priv;
+    /* ptr to the private data for Hyper_DMABUF */
+    char *priv;
+};
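+
+A sketch of this call from user space follows; hdma_fd is assumed to be the
+driver node opened as in the earlier example, and example_export is a
+hypothetical helper. How the returned ID reaches the importer is left
+entirely to the application:
+
+hyper_dmabuf_id_t example_export(int hdma_fd, int dmabuf_fd,
+                                 int remote_domid)
+{
+        struct ioctl_hyper_dmabuf_export_remote exp = {
+                .dmabuf_fd = dmabuf_fd,   /* fd from the local producer */
+                .remote_domain = remote_domid,
+                .sz_priv = 0,             /* no private data here */
+                .priv = NULL,
+        };
+
+        ioctl(hdma_fd, IOCTL_HYPER_DMABUF_EXPORT_REMOTE, &exp);
+
+        /* exp.hid now holds the ID the importer will use */
+        return exp.hid;
+}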
+
+d. IOCTL_HYPER_DMABUF_EXPORT_FD
+
+The importing VM uses this IOCTL to import and re-export a shared DMA_BUF
+locally to the end-consumer using the standard Linux DMA_BUF framework.
+Upon IOCTL call, the Hyper_DMABUF driver finds the reference information
+of the shared DMA_BUF with the given hyper_dmabuf_id, then maps all shared
+pages in its own Kernel space. The driver then constructs a scatter-gather
+list with those mapped pages and creates a brand-new DMA_BUF with the list,
+which is eventually exported with a file descriptor to the local consumer.
+
+The user space argument for this type of IOCTL is defined as:
+
+struct ioctl_hyper_dmabuf_export_fd {
+    /* IN parameters */
+    /* hyper dmabuf id to be imported */
+    hyper_dmabuf_id_t hid;
+    /* flags */
+    int flags;
+    /* OUT parameters */
+    /* exported dma buf fd */
+    int fd;
+};
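+
+On the importing VM, a matching sketch (hid received over an
+application-defined channel, hdma_fd the opened driver node, and
+example_import a hypothetical helper) could look like:
+
+int example_import(int hdma_fd, hyper_dmabuf_id_t hid)
+{
+        struct ioctl_hyper_dmabuf_export_fd imp = {
+                .hid = hid,
+                .flags = O_CLOEXEC,   /* applied to the new fd */
+        };
+
+        if (ioctl(hdma_fd, IOCTL_HYPER_DMABUF_EXPORT_FD, &imp) < 0)
+                return -1;
+
+        /* imp.fd is now a regular local dma-buf fd */
+        return imp.fd;
+}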
+
+e. IOCTL_HYPER_DMABUF_UNEXPORT
+
+This type of IOCTL is used when it is necessary to terminate the current
+sharing of a DMA_BUF. When called, the driver first checks if there are any
+consumers actively using the DMA_BUF. Then, it unexports it if it is not
+mapped or used by any consumers. Otherwise, it postpones unexporting, but
+makes the buffer invalid to prevent any further import of the same DMA_BUF.
+DMA_BUF is completely unexported after the last consumer releases it.
+
+“Unexport” means removing all reference information about the DMA_BUF from the
+LISTs and making all pages private again.
+
+The user space argument for this type of IOCTL is defined as:
+
+struct ioctl_hyper_dmabuf_unexport {
+    /* IN parameters */
+    /* hyper dmabuf id to be unexported */
+    hyper_dmabuf_id_t hid;
+    /* delay in ms by which unexport processing will be postponed */
+    int delay_ms;
+    /* OUT parameters */
+    /* Status of request */
+    int status;
+};
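+
+A sketch of a delayed unexport request, under the same assumptions as the
+previous examples (example_unexport is a hypothetical helper):
+
+int example_unexport(int hdma_fd, hyper_dmabuf_id_t hid)
+{
+        struct ioctl_hyper_dmabuf_unexport unexp = {
+                .hid = hid,
+                .delay_ms = 1000,   /* let active consumers release first */
+        };
+
+        ioctl(hdma_fd, IOCTL_HYPER_DMABUF_UNEXPORT, &unexp);
+
+        /* unexp.status reports how the request was handled */
+        return unexp.status;
+}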
+
+f. IOCTL_HYPER_DMABUF_QUERY
+
+This IOCTL is used to retrieve specific information about a DMA_BUF that
+is being shared.
+
+The user space argument for this type of IOCTL is defined as:
+
+struct ioctl_hyper_dmabuf_query {
+    /* IN parameters */
+    /* hyper dmabuf id to be queried */
+    hyper_dmabuf_id_t hid;
+    /* item to be queried */
+    int item;
+    /* OUT parameters */
+    /* output of query */
+    /* info can be either value or reference */
+    unsigned long info;
+};
+
+<Available Queries>
+
+HYPER_DMABUF_QUERY_TYPE
+ - Return the type of DMA_BUF from the current domain, Exported or Imported.
+
+HYPER_DMABUF_QUERY_EXPORTER
+ - Return the exporting domain’s ID of a shared DMA_BUF.
+
+HYPER_DMABUF_QUERY_IMPORTER
+ - Return the importing domain’s ID of a shared DMA_BUF.
+
+HYPER_DMABUF_QUERY_SIZE
+ - Return the size of a shared DMA_BUF in bytes.
+
+HYPER_DMABUF_QUERY_BUSY
+ - Return ‘true’ if a shared DMA_BUF is currently used
+   (mapped by the end-consumer).
+
+HYPER_DMABUF_QUERY_UNEXPORTED
+ - Return ‘true’ if a shared DMA_BUF is not valid anymore
+   (so it does not allow a new consumer to map it).
+
+HYPER_DMABUF_QUERY_DELAYED_UNEXPORTED
+ - Return ‘true’ if a shared DMA_BUF is scheduled to be unexported
+   (but is still valid) within a fixed time.
+
+HYPER_DMABUF_QUERY_PRIV_INFO
+ - Return ‘private’ data attached to shared DMA_BUF to the user space.
+   ‘unsigned long info’ is the user space pointer for the buffer, where
+   private data will be copied to.
+
+HYPER_DMABUF_QUERY_PRIV_INFO_SIZE
+ - Return the size of the private data attached to the shared DMA_BUF.
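+
+A sketch of retrieving the private data through this interface, with error
+handling mostly omitted; hdma_fd and hid are as in the earlier examples, buf
+is assumed to be at least MAX_SIZE_PRIV_DATA bytes, and example_get_priv is
+a hypothetical helper:
+
+int example_get_priv(int hdma_fd, hyper_dmabuf_id_t hid, char *buf)
+{
+        struct ioctl_hyper_dmabuf_query q = { .hid = hid };
+
+        q.item = HYPER_DMABUF_QUERY_PRIV_INFO_SIZE;
+        if (ioctl(hdma_fd, IOCTL_HYPER_DMABUF_QUERY, &q) < 0)
+                return -1;
+        /* q.info now holds the private data size in bytes */
+
+        q.item = HYPER_DMABUF_QUERY_PRIV_INFO;
+        q.info = (unsigned long)buf;   /* user buffer to copy into */
+        return ioctl(hdma_fd, IOCTL_HYPER_DMABUF_QUERY, &q);
+}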
+
+3. Event Polling
+
+Event-polling can be enabled optionally by selecting the Kernel config option,
+Enable event-generation and polling operation under xen/hypervisor in Kernel’s
+menuconfig. The event-polling mechanism includes the generation of
+an import-event, adding it to the event-queue and providing a notification to
+the application so that it can retrieve the event data from the queue.
+
+For this mechanism, “Poll” and “Read” operations are added to the Hyper_DMABUF
+driver. A user application that polls the driver goes into a sleep state until
+there is a new event added to the queue. An application uses “Read” to retrieve
+event data from the event queue. Event data contains the hyper_dmabuf_id and
+the private data of the buffer that has been received by the importer.
+
+For more information on private data, refer to Section 3.5.
+Using this method, it is possible to lower the risk of the hyper_dmabuf_id and
+other sensitive information about the shared buffer (for example, meta-data
+for shared images) being leaked while being transferred to the importer,
+because all of this data is shared as “private info” at the driver level.
+However, note that the importer needs a way to identify the correct DMA_BUF
+when multiple Hyper_DMABUFs are being shared simultaneously. For example, in a
+surface-sharing use-case, the surface name or surface ID of a specific
+rendering surface needs to be sent to the importer in advance, before it is
+exported.
+
+Each event data given to the user-space consists of a header and the private
+information of the buffer. The data type is defined as follows:
+
+struct hyper_dmabuf_event_hdr {
+        int event_type; /* one type only for now - new import */
+        hyper_dmabuf_id_t hid; /* hyper_dmabuf_id of specific hyper_dmabuf */
+        int size; /* size of data */
+};
+
+struct hyper_dmabuf_event_data {
+        struct hyper_dmabuf_event_hdr hdr;
+        void *data; /* private data */
+};
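+
+A sketch of the polling consumer (this assumes “Read” returns the header
+immediately followed by the private data, and reuses the hypothetical ‘dev’
+fd from above):
+
+#include <poll.h>
+#include <unistd.h>
+
+struct pollfd pfd = { .fd = dev, .events = POLLIN };
+char ev[sizeof(struct hyper_dmabuf_event_hdr) + MAX_SIZE_PRIV_DATA];
+
+while (poll(&pfd, 1, -1) > 0) {
+    struct hyper_dmabuf_event_hdr *hdr = (struct hyper_dmabuf_event_hdr *)ev;
+
+    if (read(dev, ev, sizeof(ev)) < (ssize_t)sizeof(*hdr))
+        continue;
+    /* hdr->hid identifies the newly imported buffer; hdr->size bytes
+     * of private data follow the header in ‘ev’
+     */
+}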
+
+4. Hyper_DMABUF Private Data
+
+Each Hyper_DMABUF can come with private data, the size of which can be up to
+MAX_SIZE_PRIV_DATA (currently 192 bytes). This private data is just a chunk of
+plain data attached to every Hyper_DMABUF. It is guaranteed to be synchronized
+across VMs, exporter and importer. This private data does not have any specific
+structure defined at the driver level, so any “user-defined” format or
+structure can be used. In addition, there is no dedicated use-case for this
+data. It can be used virtually for any purpose. For example, it can be used to
+share meta-data such as dimension and color formats for shared images in
+a surface sharing model. Another example is sharing protected media content,
+where this private data can carry content-protection flags for the streamed
+media to the importer.
+
+Private data is initially generated when a buffer is exported for the first
+time. Then, it is updated whenever the same buffer is re-exported. During the
+re-exporting process, the Hyper_DMABUF driver only updates private data on
+both sides with new data from user-space since the same buffer already exists
+on both the IMPORT LIST and EXPORT LIST.
+
+There are two different ways to retrieve this private data from user-space.
+The first way is to use “Read” on the Hyper_DMABUF driver. “Read” returns the
+data of events containing private data of the buffer. The second way is to
+make a query to Hyper_DMABUF. There are two query items,
+HYPER_DMABUF_QUERY_PRIV_INFO and HYPER_DMABUF_QUERY_PRIV_INFO_SIZE available
+for retrieving private data and its size.
+
+5. Scatter-Gather List Table (SGT) Management
+
+SGT management is the core part of the Hyper_DMABUF driver that manages an
+SGT, a representation of the group of kernel pages associated with a DMA_BUF.
+This block includes four different sub-blocks:
+
+a. Hyper_DMABUF_id Manager
+
+This ID manager is responsible for generating a hyper_dmabuf_id for an
+exported DMA_BUF. When an ID is requested, the ID Manager first checks if
+there are any reusable IDs left in the list and returns one of those,
+if available. Otherwise, it generates the next sequential ID and returns it
+to the caller.
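+
+The reuse-then-count policy can be illustrated with a short sketch
+(illustrative only, not the driver’s actual code; a matching release path
+would add freed IDs back onto the reusable list):
+
+#include <linux/list.h>
+#include <linux/slab.h>
+
+static int id_count;
+static LIST_HEAD(reusable_ids);
+
+struct reusable_id {
+    int id;
+    struct list_head node;
+};
+
+static int get_id(void)
+{
+    struct reusable_id *r =
+        list_first_entry_or_null(&reusable_ids, struct reusable_id, node);
+
+    if (r) {
+        int id = r->id;
+
+        list_del(&r->node);
+        kfree(r);
+        return id;        /* recycle a previously released id */
+    }
+    return id_count++;    /* otherwise hand out the next count */
+}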
+
+b. SGT Creator
+
+The SGT (struct sg_table) contains information about the DMA_BUF such as
+references to all kernel pages for the buffer and their connections. The
+SGT Creator creates a new SGT on the importer side with pages shared by
+the hypervisor.
+
+c. Kernel Page Extractor
+
+The Page Extractor extracts pages from a given SGT before those pages
+are shared.
+
+d. List Manager Interface
+
+The SGT manager also interacts with the export and import list managers. It
+sends out information (for example, hyper_dmabuf_id, reference, and
+DMA_BUF information) about the exported or imported DMA_BUFs to the
+list manager. Also, on IOCTL request, it asks the list manager to find
+and return the information for a corresponding DMA_BUF in the list.
+
+6. DMA-BUF Interface
+
+The DMA-BUF interface provides standard methods to manage DMA_BUFs
+reconstructed by the Hyper_DMABUF driver from shared pages. All of the
+relevant operations are listed in struct dma_buf_ops. These operations
+are standard DMA_BUF operations and therefore follow the standard DMA-BUF
+protocol.
+
+Each DMA_BUF operation communicates with the exporter at the end of the
+routine for “indirect DMA_BUF synchronization”.
+
+7. Export/Import List Management
+
+Whenever a DMA_BUF is shared and exported, its information is added to the
+database (EXPORT-list) on the exporting VM. Similarly, information about an
+imported DMA_BUF is added to the importing database (IMPORT list) on the
+importing VM, when the export happens.
+
+All of the entries in the lists are needed to manage the exported/imported
+DMA_BUF more efficiently. Both lists are implemented as Linux hash tables.
+The key to the list is hyper_dmabuf_id and the output is the information of
+the DMA_BUF. The List Manager manages all requests from other blocks and
+transactions within lists to ensure that all entries are up-to-date and
+that the list structure is consistent.
+
+The List Manager provides basic functionality, such as:
+
+- Adding to the List
+- Removal from the List
+- Finding information about a DMA_BUF, given the hyper_dmabuf_id
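+
+Because both lists are keyed by hyper_dmabuf_id, these operations reduce to
+the standard kernel hashtable helpers. A sketch with hypothetical names:
+
+#include <linux/hashtable.h>
+
+static DEFINE_HASHTABLE(export_list, 7);
+
+struct export_entry {
+    hyper_dmabuf_id_t hid;
+    void *info;              /* DMA_BUF information for this entry */
+    struct hlist_node node;
+};
+
+/* add:    hash_add(export_list, &e->node, e->hid.id);
+ * find:   hash_for_each_possible(export_list, e, node, id)
+ *                 if (e->hid.id == id) return e->info;
+ * remove: hash_del(&e->node);
+ */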
+
+8. Page Sharing by Hypercalls
+
+The Hyper_DMABUF driver assumes that there is a native page-by-page memory
+sharing mechanism available on the hypervisor. Referencing a group of pages
+that are being shared is what the driver expects from “backend” APIs or the
+hypervisor itself.
+
+For example, the Xen backend integrated in the current code base utilizes
+Xen’s grant-table interface for sharing the underlying kernel pages
+(struct page *).
+
+More details about the grant-table interface can be found at the following
+locations:
+
+https://wiki.xen.org/wiki/Grant_Table
+https://xenbits.xen.org/docs/4.6-testing/misc/grant-tables.txt
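+
+For instance, granting another domain access to one kernel page through the
+grant-table API can look like this (a sketch; error handling omitted and the
+function name is hypothetical):
+
+#include <linux/mm.h>
+#include <xen/grant_table.h>
+#include <asm/xen/page.h>
+
+int share_one_page(int importer_domid, struct page *pg)
+{
+    /* returns a grant reference (>= 0) the importing domain can map,
+     * or a negative error code
+     */
+    return gnttab_grant_foreign_access(importer_domid,
+                                       virt_to_mfn(page_address(pg)), 0);
+}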
+
+9. Message Handling
+
+The exporter and importer can each create a message that consists of an opcode
+(command) and operands (parameters) and send it to each other.
+
+The message format is defined as:
+
+struct hyper_dmabuf_req {
+        unsigned int req_id; /* Sequence number. Used for RING BUF
+                                synchronization */
+        unsigned int stat; /* Status. Response from the receiver. */
+        unsigned int cmd;  /* Opcode */
+        unsigned int op[MAX_NUMBER_OF_OPERANDS]; /* Operands */
+};
+
+The following table gives the list of opcodes:
+
+<Opcodes in Message to Exporter/Importer>
+
+HYPER_DMABUF_EXPORT (exporter --> importer)
+ - Export a DMA_BUF to the importer. The importer registers the corresponding
+   DMA_BUF in its IMPORT LIST when the message is received.
+
+HYPER_DMABUF_EXPORT_FD (importer --> exporter)
+ - Locally exported as FD. The importer sends out this command to the exporter
+   to notify that the buffer is now locally exported (mapped and used).
+
+HYPER_DMABUF_EXPORT_FD_FAILED (importer --> exporter)
+ - Failed while exporting locally. The importer sends out this command to the
+   exporter to notify the exporter that the EXPORT_FD failed.
+
+HYPER_DMABUF_NOTIFY_UNEXPORT (exporter --> importer)
+ - Termination of sharing. The exporter notifies the importer that the DMA_BUF
+   has been unexported.
+
+HYPER_DMABUF_OPS_TO_REMOTE (exporter --> importer)
+ - Not implemented yet.
+
+HYPER_DMABUF_OPS_TO_SOURCE (importer --> exporter)
+ - DMA_BUF ops sent to the exporter, for DMA_BUF upstream synchronization.
+   Note: implemented, but done asynchronously due to performance concerns.
+
+The following table shows the list of operands for each opcode.
+
+<Operands in Message to Exporter/Importer>
+
+- HYPER_DMABUF_EXPORT
+
+op0 to op3 – hyper_dmabuf_id
+op4 – number of pages to be shared
+op5 – offset of data in the first page
+op6 – length of data in the last page
+op7 – reference number for the group of shared pages
+op8 – size of private data
+op9 to (op9+op8)  – private data
+
+- HYPER_DMABUF_EXPORT_FD
+
+op0 to op3 – hyper_dmabuf_id
+
+- HYPER_DMABUF_EXPORT_FD_FAILED
+
+op0 to op3 – hyper_dmabuf_id
+
+- HYPER_DMABUF_NOTIFY_UNEXPORT
+
+op0 to op3 – hyper_dmabuf_id
+
+- HYPER_DMABUF_OPS_TO_REMOTE (Not implemented)
+
+- HYPER_DMABUF_OPS_TO_SOURCE
+
+op0 to op3 – hyper_dmabuf_id
+op4 – type of DMA_BUF operation
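+
+As an example, a sketch of building an EXPORT_FD message from a
+hyper_dmabuf_id ‘hid’ (req_id and stat are filled in by the messaging
+layer):
+
+struct hyper_dmabuf_req req = { .cmd = HYPER_DMABUF_EXPORT_FD };
+
+req.op[0] = hid.id;            /* op0 to op3 carry the hyper_dmabuf_id */
+req.op[1] = hid.rng_key[0];
+req.op[2] = hid.rng_key[1];
+req.op[3] = hid.rng_key[2];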
+
+10. Inter-VM (Domain) Communication
+
+Two different types of inter-domain communication channels are required,
+one in kernel space and the other in user space. The communication channel
+in user space is for transmitting or receiving the hyper_dmabuf_id. Since
+there is no specific security (for example, encryption) involved in the
+generation of a global id at the driver level, it is highly recommended that
+the customer’s user application set up a very secure channel for exchanging
+hyper_dmabuf_id between VMs.
+
+The communication channel in kernel space is required for exchanging messages
+from the “message management” block between two VMs. In the current reference
+backend for the Xen hypervisor, Xen’s ring-buffer and event-channel mechanisms
+are used for message exchange between the importer and the exporter.
+
+11. What is required in the hypervisor
+
+Memory sharing and message communication between VMs
+
+------------------------------------------------------------------------------
+Section 3. Hyper DMABUF Sharing Flow
+------------------------------------------------------------------------------
+
+1. Exporting
+
+To export a DMA_BUF to another VM, user space has to call an IOCTL
+(IOCTL_HYPER_DMABUF_EXPORT_REMOTE) with a file descriptor for the buffer given
+by the original exporter. The Hyper_DMABUF driver maps a DMA_BUF locally, then
+issues a hyper_dmabuf_id and SGT for the DMA_BUF, which is registered to the
+EXPORT list. Then, all pages for the SGT are extracted and each individual
+page is shared via a hypervisor-specific memory sharing mechanism
+(for example, in Xen this is grant-table).
+
+One important requirement on this memory sharing method is that it needs to
+create a single integer value that represents the list of pages, which can
+then be used by the importer for retrieving the group of shared pages. For
+this, the “Backend” in the reference driver utilizes a multi-level
+addressing mechanism.
+
+Once the integer reference to the list of pages is created, the exporter
+builds the “export” command and sends it to the importer, then notifies the
+importer.
+
+2. Importing
+
+The import process is divided into two phases. One is the registration
+of the DMA_BUF from the exporter. The other is the actual mapping of the
+buffer before accessing the data in it. The former (termed “Registration”)
+happens on an export event (that is, the export command with an interrupt)
+from the exporter.
+
+The latter (termed “Mapping”) is done asynchronously when the driver gets the
+IOCTL call from user space. When the importer gets an interrupt from the
+exporter, it checks the command in the receiving queue and if it is an
+“export” command, the registration process is started. It first finds
+hyper_dmabuf_id and the integer reference for the shared pages, then stores
+all of that information together with the “domain id” of the exporting domain
+in the IMPORT LIST.
+
+In the case where “event-polling” is enabled (Kernel Config - Enable event-
+generation and polling operation), a “new sharing available” event is
+generated right after the reference info for the new shared DMA_BUF is
+registered to the IMPORT LIST. This event is added to the event-queue.
+
+The user process that polls Hyper_DMABUF driver wakes up when this event-queue
+is not empty and is able to read back event data from the queue using the
+driver’s “Read” function. Once the user-application calls EXPORT_FD IOCTL with
+the proper parameters including hyper_dmabuf_id, the Hyper_DMABUF driver
+retrieves information about the matched DMA_BUF from the IMPORT LIST. Then, it
+maps all pages shared (referenced by the integer reference) in its kernel
+space and creates its own DMA_BUF referencing the same shared pages. After
+this, it exports this new DMA_BUF to other drivers with a file descriptor.
+The DMA_BUF can then be used in the same way a local DMA_BUF is.
+
+3. Indirect Synchronization of DMA_BUF
+
+Synchronization of a DMA_BUF within a single OS is automatically achieved
+because all of importer’s DMA_BUF operations are done using functions defined
+on the exporter’s side, which means there is one central place that has full
+control over the DMA_BUF. In other words, any primary activities such as
+attaching/detaching and mapping/un-mapping are all captured by the exporter,
+meaning that the exporter knows basic information such as who is using the
+DMA_BUF and how it is being used. This, however, is not applicable if this
+sharing is done beyond a single OS because kernel space (where the exporter’s
+DMA_BUF operations reside) is simply not visible to the importing VM.
+
+Therefore, “indirect synchronization” was introduced as an alternative solution,
+which is now implemented in the Hyper_DMABUF driver. This technique makes
+the exporter create a shadow DMA_BUF when the end-consumer of the buffer maps
+the DMA_BUF, then duplicates any DMA_BUF operations performed on
+the importer’s side. Through this “indirect synchronization”, the exporter is
+able to virtually track all activities done by the consumer (mostly reference
+counting) as if they were done in the exporter’s local system.
+
+------------------------------------------------------------------------------
+Section 4. Hypervisor Backend Interface
+------------------------------------------------------------------------------
+
+The Hyper_DMABUF driver has a standard “Backend” structure that contains
+mappings to various functions designed for a specific Hypervisor. Most of
+these API functions should provide a low-level implementation of communication
+and memory sharing capability that utilize a Hypervisor’s native mechanisms.
+
+struct hyper_dmabuf_backend_ops {
+        /* retrieving id of current virtual machine */
+        int (*get_vm_id)(void);
+        /* get pages shared via hypervisor-specific method */
+        int (*share_pages)(struct page **, int, int, void **);
+        /* make shared pages unshared via hypervisor specific method */
+        int (*unshare_pages)(void **, int);
+        /* map remotely shared pages on importer's side via
+         *  hypervisor-specific method
+         */
+        struct page ** (*map_shared_pages)(int, int, int, void **);
+        /* unmap and free shared pages on importer's side via
+         *  hypervisor-specific method
+         */
+        int (*unmap_shared_pages)(void **, int);
+        /* initialize communication environment */
+        int (*init_comm_env)(void);
+        /* destroy communication channel */
+        void (*destroy_comm)(void);
+        /* upstream ch setup (receiving and responding) */
+        int (*init_rx_ch)(int);
+        /* downstream ch setup (transmitting and parsing responses) */
+        int (*init_tx_ch)(int);
+        /* send msg via communication ch */
+        int (*send_req)(int, struct hyper_dmabuf_req *, int);
+};
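+
+A new hypervisor can be supported by filling this structure with its own
+implementations. A skeleton (all my_* names are hypothetical):
+
+static struct hyper_dmabuf_backend_ops my_bknd_ops = {
+    .get_vm_id          = my_get_vm_id,
+    .share_pages        = my_share_pages,
+    .unshare_pages      = my_unshare_pages,
+    .map_shared_pages   = my_map_shared_pages,
+    .unmap_shared_pages = my_unmap_shared_pages,
+    .init_comm_env      = my_init_comm_env,
+    .destroy_comm       = my_destroy_comm,
+    .init_rx_ch         = my_init_rx_ch,
+    .init_tx_ch         = my_init_tx_ch,
+    .send_req           = my_send_req,
+};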
+
+<Hypervisor-specific Backend Structure>
+
+1. get_vm_id
+
+	Returns the VM (domain) ID
+
+	Input:
+
+		None
+
+	Output:
+
+		-ID of the current domain
+
+2. share_pages
+
+	Get pages shared via hypervisor-specific method and return one reference
+	ID that represents the complete list of shared pages
+
+	Input:
+
+		-Array of pages
+		-ID of importing VM
+		-Number of pages
+		-Hypervisor-specific representation of reference info for the
+		 shared pages
+
+	Output:
+
+		-Hypervisor-specific integer value that represents all of
+		 the shared pages
+
+3. unshare_pages
+
+	Stop sharing pages
+
+	Input:
+
+		-Hypervisor-specific representation of reference info for the
+		 shared pages
+		-Number of shared pages
+
+	Output:
+
+		-0 (success) or one of the standard kernel error codes
+
+4. map_shared_pages
+
+	Map shared pages locally using a hypervisor-specific method
+
+	Input:
+
+		-Reference number that represents all of the shared pages
+		-ID of the exporting VM
+		-Number of pages
+		-Hypervisor-specific reference information
+
+	Output:
+
+		-An array of shared pages (struct page**)
+
+5. unmap_shared_pages
+
+	Unmap shared pages
+
+	Input:
+
+		-Hypervisor-specific representation of reference info for the shared pages
+
+	Output:
+
+		-0 (success) or one of the standard kernel error codes
+
+6. init_comm_env
+
+	Setup infrastructure needed for communication channel
+
+	Input:
+
+		None
+
+	Output:
+
+		None
+
+7. destroy_comm
+
+	Cleanup everything done via init_comm_env
+
+	Input:
+
+		None
+
+	Output:
+
+		None
+
+8. init_rx_ch
+
+	Configure receive channel
+
+	Input:
+
+		-ID of VM on the other side of the channel
+
+	Output:
+
+		-0 (success) or one of the standard kernel error codes
+
+9. init_tx_ch
+
+	Configure transmit channel
+
+	Input:
+
+		-ID of VM on the other side of the channel
+
+	Output:
+
+		-0 (success) or one of the standard kernel error codes
+
+10. send_req
+
+	Send message to other VM
+
+	Input:
+
+		-ID of VM that receives the message
+		-Message
+
+	Output:
+
+		-0 (success) or one of the standard kernel error codes
+
+-------------------------------------------------------------------------------
+-------------------------------------------------------------------------------
-- 
2.16.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [RFC PATCH v2 3/9] MAINTAINERS: adding Hyper_DMABUF driver section in MAINTAINERS
  2018-02-14  1:49 [RFC PATCH v2 0/9] hyper_dmabuf: Hyper_DMABUF driver Dongwon Kim
  2018-02-14  1:50 ` [RFC PATCH v2 1/9] hyper_dmabuf: initial upload of hyper_dmabuf drv core framework Dongwon Kim
  2018-02-14  1:50 ` [RFC PATCH v2 2/9] hyper_dmabuf: architecture specification and reference guide Dongwon Kim
@ 2018-02-14  1:50 ` Dongwon Kim
  2018-02-14  1:50 ` [RFC PATCH v2 4/9] hyper_dmabuf: user private data attached to hyper_DMABUF Dongwon Kim
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Dongwon Kim @ 2018-02-14  1:50 UTC (permalink / raw)
  To: linux-kernel, linaro-mm-sig, xen-devel
  Cc: dri-devel, dongwon.kim, mateuszx.potrola, sumit.semwal

Hyper_DMABUF DRIVER
M:      Dongwon Kim <dongwon.kim@intel.com>
M:      Mateusz Polrola <mateuszx.potrola@intel.com>
L:      linux-kernel@vger.kernel.org
L:      xen-devel@lists.xenproject.org
S:      Maintained
F:      drivers/dma-buf/hyper_dmabuf*
F:      include/uapi/linux/hyper_dmabuf.h
F:      Documentation/hyper-dmabuf-sharing.txt
T:      https://github.com/downor/linux_hyper_dmabuf/

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 MAINTAINERS | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index d4fdcb12616c..155f7f839201 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -6468,6 +6468,17 @@ S:	Maintained
 F:	mm/memory-failure.c
 F:	mm/hwpoison-inject.c
 
+Hyper_DMABUF DRIVER
+M:	Dongwon Kim <dongwon.kim@intel.com>
+M:	Mateusz Polrola <mateuszx.potrola@intel.com>
+L:	linux-kernel@vger.kernel.org
+L:	xen-devel@lists.xenproject.org
+S:	Maintained
+F:	drivers/dma-buf/hyper_dmabuf*
+F:	include/uapi/linux/hyper_dmabuf.h
+F:	Documentation/hyper-dmabuf-sharing.txt
+T:	https://github.com/downor/linux_hyper_dmabuf/
+
 Hyper-V CORE AND DRIVERS
 M:	"K. Y. Srinivasan" <kys@microsoft.com>
 M:	Haiyang Zhang <haiyangz@microsoft.com>
-- 
2.16.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [RFC PATCH v2 4/9] hyper_dmabuf: user private data attached to hyper_DMABUF
  2018-02-14  1:49 [RFC PATCH v2 0/9] hyper_dmabuf: Hyper_DMABUF driver Dongwon Kim
                   ` (2 preceding siblings ...)
  2018-02-14  1:50 ` [RFC PATCH v2 3/9] MAINTAINERS: adding Hyper_DMABUF driver section in MAINTAINERS Dongwon Kim
@ 2018-02-14  1:50 ` Dongwon Kim
  2018-04-10  9:59   ` [RFC, v2, " Oleksandr Andrushchenko
  2018-02-14  1:50 ` [RFC PATCH v2 5/9] hyper_dmabuf: default backend for XEN hypervisor Dongwon Kim
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 21+ messages in thread
From: Dongwon Kim @ 2018-02-14  1:50 UTC (permalink / raw)
  To: linux-kernel, linaro-mm-sig, xen-devel
  Cc: dri-devel, dongwon.kim, mateuszx.potrola, sumit.semwal

Define a private data (e.g. meta data for the buffer) attached to
each hyper_DMABUF structure. This data is provided by userspace via the
export_remote IOCTL and its size can be up to 192 bytes.
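
A user-space sketch of attaching private data on export (the dmabuf_fd
field name is an assumption; the other names come from
include/uapi/linux/hyper_dmabuf.h in this series):

    struct ioctl_hyper_dmabuf_export_remote arg = {0};
    char meta[] = "width=1920,height=1080"; /* example meta data only */

    arg.dmabuf_fd = fd;            /* local dma-buf to share (assumed name) */
    arg.remote_domain = domid;
    arg.sz_priv = sizeof(meta);    /* truncated at MAX_SIZE_PRIV_DATA (192) */
    arg.priv = meta;
    ioctl(dev, IOCTL_HYPER_DMABUF_EXPORT_REMOTE, &arg);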

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
Signed-off-by: Mateusz Polrola <mateuszx.potrola@intel.com>
---
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c  | 83 ++++++++++++++++++++--
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c    | 36 +++++++++-
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h    |  2 +-
 .../dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c   |  1 +
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h | 12 ++++
 include/uapi/linux/hyper_dmabuf.h                  |  4 ++
 6 files changed, 132 insertions(+), 6 deletions(-)

diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 020a5590a254..168ccf98f710 100644
--- a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -103,6 +103,11 @@ static int send_export_msg(struct exported_sgt_info *exported,
 		}
 	}
 
+	op[8] = exported->sz_priv;
+
+	/* driver/application specific private info */
+	memcpy(&op[9], exported->priv, op[8]);
+
 	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
 
 	if (!req)
@@ -120,8 +125,9 @@ static int send_export_msg(struct exported_sgt_info *exported,
 
 /* Fast path exporting routine in case same buffer is already exported.
  *
- * If same buffer is still valid and exist in EXPORT LIST it returns 0 so
- * that remaining normal export process can be skipped.
+ * If the same buffer is still valid and exists in EXPORT LIST, it only
+ * updates user-private data for the buffer and returns 0 so that it can
+ * skip the normal export process.
  *
  * If "unexport" is scheduled for the buffer, it cancels it since the buffer
  * is being re-exported.
@@ -129,7 +135,7 @@ static int send_export_msg(struct exported_sgt_info *exported,
  * return '1' if reexport is needed, return '0' if succeeds, return
  * Kernel error code if something goes wrong
  */
-static int fastpath_export(hyper_dmabuf_id_t hid)
+static int fastpath_export(hyper_dmabuf_id_t hid, int sz_priv, char *priv)
 {
 	int reexport = 1;
 	int ret = 0;
@@ -155,6 +161,46 @@ static int fastpath_export(hyper_dmabuf_id_t hid)
 		exported->unexport_sched = false;
 	}
 
+	/* if there's any change in size of private data,
+	 * we reallocate space for private data with new size
+	 */
+	if (sz_priv != exported->sz_priv) {
+		kfree(exported->priv);
+
+		/* truncating size */
+		if (sz_priv > MAX_SIZE_PRIV_DATA)
+			exported->sz_priv = MAX_SIZE_PRIV_DATA;
+		else
+			exported->sz_priv = sz_priv;
+
+		exported->priv = kcalloc(1, exported->sz_priv,
+					 GFP_KERNEL);
+
+		if (!exported->priv) {
+			hyper_dmabuf_remove_exported(exported->hid);
+			hyper_dmabuf_cleanup_sgt_info(exported, true);
+			kfree(exported);
+			return -ENOMEM;
+		}
+	}
+
+	/* update private data in sgt_info with new ones */
+	ret = copy_from_user(exported->priv, priv, exported->sz_priv);
+	if (ret) {
+		dev_err(hy_drv_priv->dev,
+			"Failed to load a new private data\n");
+		ret = -EINVAL;
+	} else {
+		/* send an export msg for updating priv in importer */
+		ret = send_export_msg(exported, NULL);
+
+		if (ret < 0) {
+			dev_err(hy_drv_priv->dev,
+				"Failed to send a new private data\n");
+			ret = -EBUSY;
+		}
+	}
+
 	return ret;
 }
 
@@ -191,7 +237,8 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 					     export_remote_attr->remote_domain);
 
 	if (hid.id != -1) {
-		ret = fastpath_export(hid);
+		ret = fastpath_export(hid, export_remote_attr->sz_priv,
+				      export_remote_attr->priv);
 
 		/* return if fastpath_export succeeds or
 		 * gets some fatal error
@@ -225,6 +272,24 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 		goto fail_sgt_info_creation;
 	}
 
+	/* possible truncation */
+	if (export_remote_attr->sz_priv > MAX_SIZE_PRIV_DATA)
+		exported->sz_priv = MAX_SIZE_PRIV_DATA;
+	else
+		exported->sz_priv = export_remote_attr->sz_priv;
+
+	/* creating buffer for private data of buffer */
+	if (exported->sz_priv != 0) {
+		exported->priv = kcalloc(1, exported->sz_priv, GFP_KERNEL);
+
+		if (!exported->priv) {
+			ret = -ENOMEM;
+			goto fail_priv_creation;
+		}
+	} else {
+		dev_err(hy_drv_priv->dev, "size is 0\n");
+	}
+
 	exported->hid = hyper_dmabuf_get_hid();
 
 	/* no more exported dmabuf allowed */
@@ -279,6 +344,10 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 	INIT_LIST_HEAD(&exported->va_kmapped->list);
 	INIT_LIST_HEAD(&exported->va_vmapped->list);
 
+	/* copy private data to sgt_info */
+	ret = copy_from_user(exported->priv, export_remote_attr->priv,
+			     exported->sz_priv);
+
 	if (ret) {
 		dev_err(hy_drv_priv->dev,
 			"failed to load private data\n");
@@ -337,6 +406,9 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
 
 fail_map_active_attached:
 	kfree(exported->active_sgts);
+	kfree(exported->priv);
+
+fail_priv_creation:
 	kfree(exported);
 
 fail_map_active_sgts:
@@ -567,6 +639,9 @@ static void delayed_unexport(struct work_struct *work)
 		/* register hyper_dmabuf_id to the list for reuse */
 		hyper_dmabuf_store_hid(exported->hid);
 
+		if (exported->sz_priv > 0 && exported->priv)
+			kfree(exported->priv);
+
 		kfree(exported);
 	}
 }
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c
index 129b2ff2af2b..7176fa8fb139 100644
--- a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -60,9 +60,12 @@ void hyper_dmabuf_create_req(struct hyper_dmabuf_req *req,
 		 * op5 : offset of data in the first page
 		 * op6 : length of data in the last page
 		 * op7 : top-level reference number for shared pages
+		 * op8 : size of private data (from op9)
+		 * op9 ~ : Driver-specific private data
+		 *	   (e.g. graphic buffer's meta info)
 		 */
 
-		memcpy(&req->op[0], &op[0], 8 * sizeof(int) + op[8]);
+		memcpy(&req->op[0], &op[0], 9 * sizeof(int) + op[8]);
 		break;
 
 	case HYPER_DMABUF_NOTIFY_UNEXPORT:
@@ -116,6 +119,9 @@ static void cmd_process_work(struct work_struct *work)
 		 * op5 : offset of data in the first page
 		 * op6 : length of data in the last page
 		 * op7 : top-level reference number for shared pages
+		 * op8 : size of private data (from op9)
+		 * op9 ~ : Driver-specific private data
+		 *         (e.g. graphic buffer's meta info)
 		 */
 
 		/* if nents == 0, it means it is a message only for
@@ -135,6 +141,24 @@ static void cmd_process_work(struct work_struct *work)
 				break;
 			}
 
+			/* if size of new private data is different,
+			 * we reallocate it.
+			 */
+			if (imported->sz_priv != req->op[8]) {
+				kfree(imported->priv);
+				imported->sz_priv = req->op[8];
+				imported->priv = kcalloc(1, req->op[8],
+							 GFP_KERNEL);
+				if (!imported->priv) {
+					/* set it invalid */
+					imported->valid = 0;
+					break;
+				}
+			}
+
+			/* updating priv data */
+			memcpy(imported->priv, &req->op[9], req->op[8]);
+
 			break;
 		}
 
@@ -143,6 +167,14 @@ static void cmd_process_work(struct work_struct *work)
 		if (!imported)
 			break;
 
+		imported->sz_priv = req->op[8];
+		imported->priv = kcalloc(1, req->op[8], GFP_KERNEL);
+
+		if (!imported->priv) {
+			kfree(imported);
+			break;
+		}
+
 		imported->hid.id = req->op[0];
 
 		for (i = 0; i < 3; i++)
@@ -162,6 +194,8 @@ static void cmd_process_work(struct work_struct *work)
 		dev_dbg(hy_drv_priv->dev, "\tlast len %d\n", req->op[6]);
 		dev_dbg(hy_drv_priv->dev, "\tgrefid %d\n", req->op[7]);
 
+		memcpy(imported->priv, &req->op[9], req->op[8]);
+
 		imported->valid = true;
 		hyper_dmabuf_register_imported(imported);
 
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h
index 59f1528e9b1e..63a39d068d69 100644
--- a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -27,7 +27,7 @@
 #ifndef __HYPER_DMABUF_MSG_H__
 #define __HYPER_DMABUF_MSG_H__
 
-#define MAX_NUMBER_OF_OPERANDS 8
+#define MAX_NUMBER_OF_OPERANDS 64
 
 struct hyper_dmabuf_req {
 	unsigned int req_id;
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
index d92ae13d8a30..9032f89e0cd0 100644
--- a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
@@ -251,6 +251,7 @@ int hyper_dmabuf_cleanup_sgt_info(struct exported_sgt_info *exported,
 	kfree(exported->active_attached);
 	kfree(exported->va_kmapped);
 	kfree(exported->va_vmapped);
+	kfree(exported->priv);
 
 	return 0;
 }
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h
index 144e3821fbc2..a1220bbf8d0c 100644
--- a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h
@@ -101,6 +101,12 @@ struct exported_sgt_info {
 	 * the buffer can be completely freed.
 	 */
 	struct file *filp;
+
+	/* size of private */
+	size_t sz_priv;
+
+	/* private data associated with the exported buffer */
+	char *priv;
 };
 
 /* imported_sgt_info contains information about imported DMA_BUF
@@ -126,6 +132,12 @@ struct imported_sgt_info {
 	void *refs_info;
 	bool valid;
 	int importers;
+
+	/* size of private */
+	size_t sz_priv;
+
+	/* private data associated with the exported buffer */
+	char *priv;
 };
 
 #endif /* __HYPER_DMABUF_STRUCT_H__ */
diff --git a/include/uapi/linux/hyper_dmabuf.h b/include/uapi/linux/hyper_dmabuf.h
index caaae2da9d4d..36794a4af811 100644
--- a/include/uapi/linux/hyper_dmabuf.h
+++ b/include/uapi/linux/hyper_dmabuf.h
@@ -25,6 +25,8 @@
 #ifndef __LINUX_PUBLIC_HYPER_DMABUF_H__
 #define __LINUX_PUBLIC_HYPER_DMABUF_H__
 
+#define MAX_SIZE_PRIV_DATA 192
+
 typedef struct {
 	int id;
 	int rng_key[3]; /* 12bytes long random number */
@@ -56,6 +58,8 @@ struct ioctl_hyper_dmabuf_export_remote {
 	int remote_domain;
 	/* exported dma buf id */
 	hyper_dmabuf_id_t hid;
+	int sz_priv;
+	char *priv;
 };
 
 #define IOCTL_HYPER_DMABUF_EXPORT_FD \
-- 
2.16.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [RFC PATCH v2 5/9] hyper_dmabuf: default backend for XEN hypervisor
  2018-02-14  1:49 [RFC PATCH v2 0/9] hyper_dmabuf: Hyper_DMABUF driver Dongwon Kim
                   ` (3 preceding siblings ...)
  2018-02-14  1:50 ` [RFC PATCH v2 4/9] hyper_dmabuf: user private data attached to hyper_DMABUF Dongwon Kim
@ 2018-02-14  1:50 ` Dongwon Kim
  2018-04-10  9:27   ` [RFC,v2,5/9] " Oleksandr Andrushchenko
  2018-02-14  1:50 ` [RFC PATCH v2 6/9] hyper_dmabuf: hyper_DMABUF synchronization across VM Dongwon Kim
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 21+ messages in thread
From: Dongwon Kim @ 2018-02-14  1:50 UTC (permalink / raw)
  To: linux-kernel, linaro-mm-sig, xen-devel
  Cc: dri-devel, dongwon.kim, mateuszx.potrola, sumit.semwal

From: "Matuesz Polrola" <mateuszx.potrola@intel.com>

The default backend for XEN hypervisor. This backend contains actual
implementation of individual methods defined in "struct hyper_dmabuf_bknd_ops"
defined as:

struct hyper_dmabuf_bknd_ops {
        /* backend initialization routine (optional) */
        int (*init)(void);

        /* backend cleanup routine (optional) */
        int (*cleanup)(void);

        /* retrieving id of current virtual machine */
        int (*get_vm_id)(void);

        /* get pages shared via hypervisor-specific method */
        int (*share_pages)(struct page **, int, int, void **);

        /* make shared pages unshared via hypervisor specific method */
        int (*unshare_pages)(void **, int);

        /* map remotely shared pages on importer's side via
         * hypervisor-specific method
         */
        struct page ** (*map_shared_pages)(unsigned long, int, int, void **);

        /* unmap and free shared pages on importer's side via
         * hypervisor-specific method
         */
        int (*unmap_shared_pages)(void **, int);

        /* initialize communication environment */
        int (*init_comm_env)(void);

        void (*destroy_comm)(void);

        /* upstream ch setup (receiving and responding) */
        int (*init_rx_ch)(int);

        /* downstream ch setup (transmitting and parsing responses) */
        int (*init_tx_ch)(int);

        int (*send_req)(int, struct hyper_dmabuf_req *, int);
};

The first two methods are for any extra initialization or cleanup possibly
required for the current hypervisor (optional). The third method
(.get_vm_id) provides a way to get the current VM's id, which will later be
used to identify the source VM of a shared hyper_DMABUF.

All other methods are related to either memory sharing or inter-VM
communication, which are the minimum requirements for the hyper_DMABUF
driver. (A brief description of the role of each method is embedded as a
comment in the definition of the structure above and in the header file.)

Actual implementation of each of these methods specific to XEN is under
backends/xen/. Their mappings are done as follows:

struct hyper_dmabuf_bknd_ops xen_bknd_ops = {
        .init = NULL, /* not needed for xen */
        .cleanup = NULL, /* not needed for xen */
        .get_vm_id = xen_be_get_domid,
        .share_pages = xen_be_share_pages,
        .unshare_pages = xen_be_unshare_pages,
        .map_shared_pages = (void *)xen_be_map_shared_pages,
        .unmap_shared_pages = xen_be_unmap_shared_pages,
        .init_comm_env = xen_be_init_comm_env,
        .destroy_comm = xen_be_destroy_comm,
        .init_rx_ch = xen_be_init_rx_rbuf,
        .init_tx_ch = xen_be_init_tx_rbuf,
        .send_req = xen_be_send_req,
};

A section for Hypervisor Backend has been added to

"Documentation/hyper-dmabuf-sharing.txt" accordingly

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
Signed-off-by: Mateusz Polrola <mateuszx.potrola@intel.com>
---
 drivers/dma-buf/hyper_dmabuf/Kconfig               |   7 +
 drivers/dma-buf/hyper_dmabuf/Makefile              |   7 +
 .../backends/xen/hyper_dmabuf_xen_comm.c           | 941 +++++++++++++++++++++
 .../backends/xen/hyper_dmabuf_xen_comm.h           |  78 ++
 .../backends/xen/hyper_dmabuf_xen_comm_list.c      | 158 ++++
 .../backends/xen/hyper_dmabuf_xen_comm_list.h      |  67 ++
 .../backends/xen/hyper_dmabuf_xen_drv.c            |  46 +
 .../backends/xen/hyper_dmabuf_xen_drv.h            |  53 ++
 .../backends/xen/hyper_dmabuf_xen_shm.c            | 525 ++++++++++++
 .../backends/xen/hyper_dmabuf_xen_shm.h            |  46 +
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c    |  10 +
 11 files changed, 1938 insertions(+)
 create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm_list.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm_list.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_drv.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_drv.h
 create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_shm.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_shm.h

diff --git a/drivers/dma-buf/hyper_dmabuf/Kconfig b/drivers/dma-buf/hyper_dmabuf/Kconfig
index 5ebf516d65eb..68f3d6ce2c1f 100644
--- a/drivers/dma-buf/hyper_dmabuf/Kconfig
+++ b/drivers/dma-buf/hyper_dmabuf/Kconfig
@@ -20,4 +20,11 @@ config HYPER_DMABUF_SYSFS
 
 	  The location of sysfs is under "...."
 
+config HYPER_DMABUF_XEN
+        bool "Configure hyper_dmabuf for XEN hypervisor"
+        default y
+        depends on HYPER_DMABUF && XEN && XENFS
+        help
+          Enables the Hyper_DMABUF backend for the XEN hypervisor
+
 endmenu
diff --git a/drivers/dma-buf/hyper_dmabuf/Makefile b/drivers/dma-buf/hyper_dmabuf/Makefile
index 3908522b396a..b9ab4eeca6f2 100644
--- a/drivers/dma-buf/hyper_dmabuf/Makefile
+++ b/drivers/dma-buf/hyper_dmabuf/Makefile
@@ -10,6 +10,13 @@ ifneq ($(KERNELRELEASE),)
 				 hyper_dmabuf_msg.o \
 				 hyper_dmabuf_id.o \
 
+ifeq ($(CONFIG_HYPER_DMABUF_XEN), y)
+	$(TARGET_MODULE)-objs += backends/xen/hyper_dmabuf_xen_comm.o \
+				 backends/xen/hyper_dmabuf_xen_comm_list.o \
+				 backends/xen/hyper_dmabuf_xen_shm.o \
+				 backends/xen/hyper_dmabuf_xen_drv.o
+endif
+
 obj-$(CONFIG_HYPER_DMABUF) := $(TARGET_MODULE).o
 
 # If we are running without kernel build system
diff --git a/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.c b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.c
new file mode 100644
index 000000000000..30bc4b6304ac
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.c
@@ -0,0 +1,941 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/workqueue.h>
+#include <linux/delay.h>
+#include <xen/grant_table.h>
+#include <xen/events.h>
+#include <xen/xenbus.h>
+#include <asm/xen/page.h>
+#include "hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_xen_comm_list.h"
+#include "../../hyper_dmabuf_drv.h"
+
+static int export_req_id;
+
+struct hyper_dmabuf_req req_pending = {0};
+
+static void xen_get_domid_delayed(struct work_struct *unused);
+static void xen_init_comm_env_delayed(struct work_struct *unused);
+
+static DECLARE_DELAYED_WORK(get_vm_id_work, xen_get_domid_delayed);
+static DECLARE_DELAYED_WORK(xen_init_comm_env_work, xen_init_comm_env_delayed);
+
+/* Creates entry in xen store that will keep details of all
+ * exporter rings created by this domain
+ */
+static int xen_comm_setup_data_dir(void)
+{
+	char buf[255];
+
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf",
+		hy_drv_priv->domid);
+
+	return xenbus_mkdir(XBT_NIL, buf, "");
+}
+
+/* Removes entry from xenstore with exporter ring details.
+ * Other domains that have connected to any of the exporter rings
+ * created by this domain will be notified about removal of
+ * this entry and will treat that as a signal to clean up importer
+ * rings created for this domain
+ */
+static int xen_comm_destroy_data_dir(void)
+{
+	char buf[255];
+
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf",
+		hy_drv_priv->domid);
+
+	return xenbus_rm(XBT_NIL, buf, "");
+}
+
+/* Adds xenstore entries with details of exporter ring created
+ * for given remote domain. It requires special daemon running
+ * in dom0 to make sure that given remote domain will have right
+ * permissions to access that data.
+ */
+static int xen_comm_expose_ring_details(int domid, int rdomid,
+					int gref, int port)
+{
+	char buf[255];
+	int ret;
+
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d",
+		domid, rdomid);
+
+	ret = xenbus_printf(XBT_NIL, buf, "grefid", "%d", gref);
+
+	if (ret) {
+		dev_err(hy_drv_priv->dev,
+			"Failed to write xenbus entry %s: %d\n",
+			buf, ret);
+
+		return ret;
+	}
+
+	ret = xenbus_printf(XBT_NIL, buf, "port", "%d", port);
+
+	if (ret) {
+		dev_err(hy_drv_priv->dev,
+			"Failed to write xenbus entry %s: %d\n",
+			buf, ret);
+
+		return ret;
+	}
+
+	return 0;
+}
+
+/*
+ * Queries details of ring exposed by remote domain.
+ */
+static int xen_comm_get_ring_details(int domid, int rdomid,
+				     int *grefid, int *port)
+{
+	char buf[255];
+	int ret;
+
+	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d",
+		rdomid, domid);
+
+	ret = xenbus_scanf(XBT_NIL, buf, "grefid", "%d", grefid);
+
+	if (ret <= 0) {
+		dev_err(hy_drv_priv->dev,
+			"Failed to read xenbus entry %s: %d\n",
+			buf, ret);
+
+		return ret;
+	}
+
+	ret = xenbus_scanf(XBT_NIL, buf, "port", "%d", port);
+
+	if (ret <= 0) {
+		dev_err(hy_drv_priv->dev,
+			"Failed to read xenbus entry %s: %d\n",
+			buf, ret);
+
+		return ret;
+	}
+
+	return (ret <= 0 ? 1 : 0);
+}
+
+static void xen_get_domid_delayed(struct work_struct *unused)
+{
+	struct xenbus_transaction xbt;
+	int domid, ret;
+
+	/* scheduling another if driver is still running
+	 * and xenstore has not been initialized
+	 */
+	if (likely(xenstored_ready == 0)) {
+		dev_dbg(hy_drv_priv->dev,
+			"Xenstore is not ready yet. Will retry in 500ms\n");
+		schedule_delayed_work(&get_vm_id_work, msecs_to_jiffies(500));
+	} else {
+		xenbus_transaction_start(&xbt);
+
+		ret = xenbus_scanf(xbt, "domid", "", "%d", &domid);
+
+		if (ret <= 0)
+			domid = -1;
+
+		xenbus_transaction_end(xbt, 0);
+
+		/* try again since -1 is an invalid id for domain
+		 * (but only if driver is still running)
+		 */
+		if (unlikely(domid == -1)) {
+			dev_dbg(hy_drv_priv->dev,
+				"domid==-1 is invalid. Will retry it in 500ms\n");
+			schedule_delayed_work(&get_vm_id_work,
+					      msecs_to_jiffies(500));
+		} else {
+			dev_info(hy_drv_priv->dev,
+				 "Successfully retrieved domid from Xenstore:%d\n",
+				 domid);
+			hy_drv_priv->domid = domid;
+		}
+	}
+}
+
+int xen_be_get_domid(void)
+{
+	struct xenbus_transaction xbt;
+	int domid;
+
+	if (unlikely(xenstored_ready == 0)) {
+		xen_get_domid_delayed(NULL);
+		return -1;
+	}
+
+	xenbus_transaction_start(&xbt);
+
+	if (!xenbus_scanf(xbt, "domid", "", "%d", &domid))
+		domid = -1;
+
+	xenbus_transaction_end(xbt, 0);
+
+	return domid;
+}
+
+static int xen_comm_next_req_id(void)
+{
+	export_req_id++;
+	return export_req_id;
+}
+
+/* For now cache latest rings as global variables. TODO: keep them in a list */
+static irqreturn_t front_ring_isr(int irq, void *info);
+static irqreturn_t back_ring_isr(int irq, void *info);
+
+/* Callback function that will be called on any change of xenbus path
+ * being watched. Used for detecting creation/destruction of remote
+ * domain exporter ring.
+ *
+ * When a remote domain's exporter ring is detected, an importer ring
+ * on this domain will be created.
+ *
+ * When destruction of the remote domain's exporter ring is detected,
+ * this domain's importer ring will be cleaned up.
+ *
+ * Destruction can be caused by the remote domain unloading the module,
+ * or by its crash/forced shutdown.
+ */
+static void remote_dom_exporter_watch_cb(struct xenbus_watch *watch,
+					 const char *path, const char *token)
+{
+	int rdom, ret;
+	uint32_t grefid, port;
+	struct xen_comm_rx_ring_info *ring_info;
+
+	/* Check which domain has changed its exporter rings */
+	ret = sscanf(watch->node, "/local/domain/%d/", &rdom);
+	if (ret <= 0)
+		return;
+
+	/* Check if we have importer ring for given remote domain already
+	 * created
+	 */
+	ring_info = xen_comm_find_rx_ring(rdom);
+
+	/* Try to query remote domain exporter ring details - if
+	 * that fails and we have an importer ring, it means the remote
+	 * domain has cleaned up its exporter ring, so our importer ring
+	 * is no longer useful.
+	 *
+	 * If querying details succeeds and we don't have an importer ring,
+	 * it means that the remote domain has set one up for us and we
+	 * should connect to it.
+	 */
+
+	ret = xen_comm_get_ring_details(xen_be_get_domid(),
+					rdom, &grefid, &port);
+
+	if (ring_info && ret != 0) {
+		dev_info(hy_drv_priv->dev,
+			 "Remote exporter closed, cleaninup importer\n");
+		xen_be_cleanup_rx_rbuf(rdom);
+	} else if (!ring_info && ret == 0) {
+		dev_info(hy_drv_priv->dev,
+			 "Registering importer\n");
+		xen_be_init_rx_rbuf(rdom);
+	}
+}
+
+/* exporter needs to generate info for page sharing */
+int xen_be_init_tx_rbuf(int domid)
+{
+	struct xen_comm_tx_ring_info *ring_info;
+	struct xen_comm_sring *sring;
+	struct evtchn_alloc_unbound alloc_unbound;
+	struct evtchn_close close;
+
+	void *shared_ring;
+	int ret;
+
+	/* check if there's any existing tx channel in the table */
+	ring_info = xen_comm_find_tx_ring(domid);
+
+	if (ring_info) {
+		dev_info(hy_drv_priv->dev,
+			 "tx ring ch to domid = %d already exist\ngref = %d, port = %d\n",
+		ring_info->rdomain, ring_info->gref_ring, ring_info->port);
+		return 0;
+	}
+
+	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
+
+	if (!ring_info)
+		return -ENOMEM;
+
+	/* from exporter to importer */
+	shared_ring = (void *)__get_free_pages(GFP_KERNEL, 1);
+	if (shared_ring == 0) {
+		kfree(ring_info);
+		return -ENOMEM;
+	}
+
+	sring = (struct xen_comm_sring *) shared_ring;
+
+	SHARED_RING_INIT(sring);
+
+	FRONT_RING_INIT(&(ring_info->ring_front), sring, PAGE_SIZE);
+
+	ring_info->gref_ring = gnttab_grant_foreign_access(domid,
+						virt_to_mfn(shared_ring),
+						0);
+	if (ring_info->gref_ring < 0) {
+		/* fail to get gref */
+		kfree(ring_info);
+		return -EFAULT;
+	}
+
+	alloc_unbound.dom = DOMID_SELF;
+	alloc_unbound.remote_dom = domid;
+	ret = HYPERVISOR_event_channel_op(EVTCHNOP_alloc_unbound,
+					  &alloc_unbound);
+	if (ret) {
+		dev_err(hy_drv_priv->dev,
+			"Cannot allocate event channel\n");
+		kfree(ring_info);
+		return -EIO;
+	}
+
+	/* setting up interrupt */
+	ret = bind_evtchn_to_irqhandler(alloc_unbound.port,
+					front_ring_isr, 0,
+					NULL, (void *) ring_info);
+
+	if (ret < 0) {
+		dev_err(hy_drv_priv->dev,
+			"Failed to setup event channel\n");
+		close.port = alloc_unbound.port;
+		HYPERVISOR_event_channel_op(EVTCHNOP_close, &close);
+		gnttab_end_foreign_access(ring_info->gref_ring, 0,
+					virt_to_mfn(shared_ring));
+		kfree(ring_info);
+		return -EIO;
+	}
+
+	ring_info->rdomain = domid;
+	ring_info->irq = ret;
+	ring_info->port = alloc_unbound.port;
+
+	mutex_init(&ring_info->lock);
+
+	dev_dbg(hy_drv_priv->dev,
+		"%s: allocated eventchannel gref %d  port: %d  irq: %d\n",
+		__func__,
+		ring_info->gref_ring,
+		ring_info->port,
+		ring_info->irq);
+
+	ret = xen_comm_add_tx_ring(ring_info);
+
+	ret = xen_comm_expose_ring_details(xen_be_get_domid(),
+					   domid,
+					   ring_info->gref_ring,
+					   ring_info->port);
+
+	/* Register watch for remote domain exporter ring.
+	 * When the remote domain sets up its exporter ring,
+	 * we will automatically connect our importer ring to it.
+	 */
+	ring_info->watch.callback = remote_dom_exporter_watch_cb;
+	ring_info->watch.node = kmalloc(255, GFP_KERNEL);
+
+	if (!ring_info->watch.node) {
+		kfree(ring_info);
+		return -ENOMEM;
+	}
+
+	sprintf((char *)ring_info->watch.node,
+		"/local/domain/%d/data/hyper_dmabuf/%d/port",
+		domid, xen_be_get_domid());
+
+	register_xenbus_watch(&ring_info->watch);
+
+	return ret;
+}
+
+/* cleans up exporter ring created for given remote domain */
+void xen_be_cleanup_tx_rbuf(int domid)
+{
+	struct xen_comm_tx_ring_info *ring_info;
+	struct xen_comm_rx_ring_info *rx_ring_info;
+
+	/* check if we at all have exporter ring for given rdomain */
+	ring_info = xen_comm_find_tx_ring(domid);
+
+	if (!ring_info)
+		return;
+
+	xen_comm_remove_tx_ring(domid);
+
+	unregister_xenbus_watch(&ring_info->watch);
+	kfree(ring_info->watch.node);
+
+	/* No need to close communication channel, will be done by
+	 * this function
+	 */
+	unbind_from_irqhandler(ring_info->irq, (void *) ring_info);
+
+	/* No need to free sring page, will be freed by this function
+	 * when other side will end its access
+	 */
+	gnttab_end_foreign_access(ring_info->gref_ring, 0,
+				  (unsigned long) ring_info->ring_front.sring);
+
+	kfree(ring_info);
+
+	rx_ring_info = xen_comm_find_rx_ring(domid);
+	if (!rx_ring_info)
+		return;
+
+	BACK_RING_INIT(&(rx_ring_info->ring_back),
+		       rx_ring_info->ring_back.sring,
+		       PAGE_SIZE);
+}
+
+/* importer needs to know about shared page and port numbers for
+ * ring buffer and event channel
+ */
+int xen_be_init_rx_rbuf(int domid)
+{
+	struct xen_comm_rx_ring_info *ring_info;
+	struct xen_comm_sring *sring;
+
+	struct page *shared_ring;
+
+	struct gnttab_map_grant_ref *map_ops;
+
+	int ret;
+	int rx_gref, rx_port;
+
+	/* check if there's existing rx ring channel */
+	ring_info = xen_comm_find_rx_ring(domid);
+
+	if (ring_info) {
+		dev_info(hy_drv_priv->dev,
+			 "rx ring ch from domid = %d already exist\n",
+			 ring_info->sdomain);
+
+		return 0;
+	}
+
+	ret = xen_comm_get_ring_details(xen_be_get_domid(), domid,
+					&rx_gref, &rx_port);
+
+	if (ret) {
+		dev_err(hy_drv_priv->dev,
+			"Domain %d has not created exporter ring for current domain\n",
+			domid);
+
+		return ret;
+	}
+
+	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
+
+	if (!ring_info)
+		return -ENOMEM;
+
+	ring_info->sdomain = domid;
+	ring_info->evtchn = rx_port;
+
+	map_ops = kmalloc(sizeof(*map_ops), GFP_KERNEL);
+
+	if (!map_ops) {
+		ret = -ENOMEM;
+		goto fail_no_map_ops;
+	}
+
+	if (gnttab_alloc_pages(1, &shared_ring)) {
+		ret = -ENOMEM;
+		goto fail_others;
+	}
+
+	gnttab_set_map_op(&map_ops[0],
+			  (unsigned long)pfn_to_kaddr(
+					page_to_pfn(shared_ring)),
+			  GNTMAP_host_map, rx_gref, domid);
+
+	gnttab_set_unmap_op(&ring_info->unmap_op,
+			    (unsigned long)pfn_to_kaddr(
+					page_to_pfn(shared_ring)),
+			    GNTMAP_host_map, -1);
+
+	ret = gnttab_map_refs(map_ops, NULL, &shared_ring, 1);
+	if (ret < 0) {
+		dev_err(hy_drv_priv->dev, "Cannot map ring\n");
+		ret = -EFAULT;
+		goto fail_others;
+	}
+
+	if (map_ops[0].status) {
+		dev_err(hy_drv_priv->dev, "Ring mapping failed\n");
+		ret = -EFAULT;
+		goto fail_others;
+	} else {
+		ring_info->unmap_op.handle = map_ops[0].handle;
+	}
+
+	kfree(map_ops);
+
+	sring = (struct xen_comm_sring *)pfn_to_kaddr(page_to_pfn(shared_ring));
+
+	BACK_RING_INIT(&ring_info->ring_back, sring, PAGE_SIZE);
+
+	ret = bind_interdomain_evtchn_to_irq(domid, rx_port);
+
+	if (ret < 0) {
+		ret = -EIO;
+		goto fail_others;
+	}
+
+	ring_info->irq = ret;
+
+	dev_dbg(hy_drv_priv->dev,
+		"%s: bound to eventchannel port: %d  irq: %d\n", __func__,
+		rx_port,
+		ring_info->irq);
+
+	ret = xen_comm_add_rx_ring(ring_info);
+
+	/* Set up communication channel in the opposite direction */
+	if (!xen_comm_find_tx_ring(domid))
+		ret = xen_be_init_tx_rbuf(domid);
+
+	ret = request_irq(ring_info->irq,
+			  back_ring_isr, 0,
+			  NULL, (void *)ring_info);
+
+	return ret;
+
+fail_others:
+	kfree(map_ops);
+
+fail_no_map_ops:
+	kfree(ring_info);
+
+	return ret;
+}
+
+/* cleans up importer ring created for given source domain */
+void xen_be_cleanup_rx_rbuf(int domid)
+{
+	struct xen_comm_rx_ring_info *ring_info;
+	struct xen_comm_tx_ring_info *tx_ring_info;
+	struct page *shared_ring;
+
+	/* check if we have importer ring created for given sdomain */
+	ring_info = xen_comm_find_rx_ring(domid);
+
+	if (!ring_info)
+		return;
+
+	xen_comm_remove_rx_ring(domid);
+
+	/* no need to close event channel, will be done by that function */
+	unbind_from_irqhandler(ring_info->irq, (void *)ring_info);
+
+	/* unmapping shared ring page */
+	shared_ring = virt_to_page(ring_info->ring_back.sring);
+	gnttab_unmap_refs(&ring_info->unmap_op, NULL, &shared_ring, 1);
+	gnttab_free_pages(1, &shared_ring);
+
+	kfree(ring_info);
+
+	tx_ring_info = xen_comm_find_tx_ring(domid);
+	if (!tx_ring_info)
+		return;
+
+	SHARED_RING_INIT(tx_ring_info->ring_front.sring);
+	FRONT_RING_INIT(&(tx_ring_info->ring_front),
+			tx_ring_info->ring_front.sring,
+			PAGE_SIZE);
+}
+
+#ifdef CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD
+
+static void xen_rx_ch_add_delayed(struct work_struct *unused);
+
+static DECLARE_DELAYED_WORK(xen_rx_ch_auto_add_work, xen_rx_ch_add_delayed);
+
+#define DOMID_SCAN_START	1	/*  domid = 1 */
+#define DOMID_SCAN_END		10	/* domid = 10 */
+
+static void xen_rx_ch_add_delayed(struct work_struct *unused)
+{
+	int ret;
+	char buf[128];
+	int i, dummy;
+
+	dev_dbg(hy_drv_priv->dev,
+		"Scanning new tx channel comming from another domain\n");
+
+	/* check other domains and schedule another work if driver
+	 * is still running and backend is valid
+	 */
+	if (hy_drv_priv &&
+	    hy_drv_priv->initialized) {
+		for (i = DOMID_SCAN_START; i < DOMID_SCAN_END + 1; i++) {
+			if (i == hy_drv_priv->domid)
+				continue;
+
+			sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d",
+				i, hy_drv_priv->domid);
+
+			ret = xenbus_scanf(XBT_NIL, buf, "port", "%d", &dummy);
+
+			if (ret > 0) {
+				if (xen_comm_find_rx_ring(i) != NULL)
+					continue;
+
+				ret = xen_be_init_rx_rbuf(i);
+
+				if (!ret)
+					dev_info(hy_drv_priv->dev,
+						 "Done rx ch init for VM %d\n",
+						 i);
+			}
+		}
+
+		/* check every 10 seconds */
+		schedule_delayed_work(&xen_rx_ch_auto_add_work,
+				      msecs_to_jiffies(10000));
+	}
+}
+
+#endif /* CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD */
+
+void xen_init_comm_env_delayed(struct work_struct *unused)
+{
+	int ret;
+
+	/* scheduling another work if driver is still running
+	 * and xenstore hasn't been initialized or dom_id hasn't
+	 * been correctly retrieved.
+	 */
+	if (likely(xenstored_ready == 0 ||
+	    hy_drv_priv->domid == -1)) {
+		dev_dbg(hy_drv_priv->dev,
+			"Xenstore not ready Will re-try in 500ms\n");
+		schedule_delayed_work(&xen_init_comm_env_work,
+				      msecs_to_jiffies(500));
+	} else {
+		ret = xen_comm_setup_data_dir();
+		if (ret < 0) {
+			dev_err(hy_drv_priv->dev,
+				"Failed to create data dir in Xenstore\n");
+		} else {
+			dev_info(hy_drv_priv->dev,
+				"Successfully finished comm env init\n");
+			hy_drv_priv->initialized = true;
+
+#ifdef CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD
+			xen_rx_ch_add_delayed(NULL);
+#endif /* CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD */
+		}
+	}
+}
+
+int xen_be_init_comm_env(void)
+{
+	int ret;
+
+	xen_comm_ring_table_init();
+
+	if (unlikely(xenstored_ready == 0 ||
+	    hy_drv_priv->domid == -1)) {
+		xen_init_comm_env_delayed(NULL);
+		return -1;
+	}
+
+	ret = xen_comm_setup_data_dir();
+	if (ret < 0) {
+		dev_err(hy_drv_priv->dev,
+			"Failed to create data dir in Xenstore\n");
+	} else {
+		dev_info(hy_drv_priv->dev,
+			"Successfully finished comm env initialization\n");
+
+		hy_drv_priv->initialized = true;
+	}
+
+	return ret;
+}
+
+/* cleans up all tx/rx rings */
+static void xen_be_cleanup_all_rbufs(void)
+{
+	xen_comm_foreach_tx_ring(xen_be_cleanup_tx_rbuf);
+	xen_comm_foreach_rx_ring(xen_be_cleanup_rx_rbuf);
+}
+
+void xen_be_destroy_comm(void)
+{
+	xen_be_cleanup_all_rbufs();
+	xen_comm_destroy_data_dir();
+}
+
+int xen_be_send_req(int domid, struct hyper_dmabuf_req *req,
+			      int wait)
+{
+	struct xen_comm_front_ring *ring;
+	struct hyper_dmabuf_req *new_req;
+	struct xen_comm_tx_ring_info *ring_info;
+	int notify;
+
+	struct timeval tv_start, tv_end;
+	struct timeval tv_diff;
+
+	int timeout = 1000;
+
+	/* find a ring info for the channel */
+	ring_info = xen_comm_find_tx_ring(domid);
+	if (!ring_info) {
+		dev_err(hy_drv_priv->dev,
+			"Can't find ring info for the channel\n");
+		return -ENOENT;
+	}
+
+
+	ring = &ring_info->ring_front;
+
+	do_gettimeofday(&tv_start);
+
+	while (RING_FULL(ring)) {
+		dev_dbg(hy_drv_priv->dev, "RING_FULL\n");
+
+		if (timeout == 0) {
+			dev_err(hy_drv_priv->dev,
+				"Timeout while waiting for an entry in the ring\n");
+			return -EIO;
+		}
+		usleep_range(100, 120);
+		timeout--;
+	}
+
+	timeout = 1000;
+
+	mutex_lock(&ring_info->lock);
+
+	new_req = RING_GET_REQUEST(ring, ring->req_prod_pvt);
+	if (!new_req) {
+		mutex_unlock(&ring_info->lock);
+		dev_err(hy_drv_priv->dev,
+			"NULL REQUEST\n");
+		return -EIO;
+	}
+
+	req->req_id = xen_comm_next_req_id();
+
+	/* update req_pending with current request */
+	memcpy(&req_pending, req, sizeof(req_pending));
+
+	/* pass current request to the ring */
+	memcpy(new_req, req, sizeof(*new_req));
+
+	ring->req_prod_pvt++;
+
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(ring, notify);
+	if (notify)
+		notify_remote_via_irq(ring_info->irq);
+
+	if (wait) {
+		while (timeout--) {
+			if (req_pending.stat !=
+			    HYPER_DMABUF_REQ_NOT_RESPONDED)
+				break;
+			usleep_range(100, 120);
+		}
+
+		if (timeout < 0) {
+			mutex_unlock(&ring_info->lock);
+			dev_err(hy_drv_priv->dev,
+				"request timed-out\n");
+			return -EBUSY;
+		}
+
+		do_gettimeofday(&tv_end);
+
+		/* checking time duration for round-trip of a request
+		 * for debugging
+		 */
+		if (tv_end.tv_usec >= tv_start.tv_usec) {
+			tv_diff.tv_sec = tv_end.tv_sec-tv_start.tv_sec;
+			tv_diff.tv_usec = tv_end.tv_usec-tv_start.tv_usec;
+		} else {
+			tv_diff.tv_sec = tv_end.tv_sec-tv_start.tv_sec-1;
+			tv_diff.tv_usec = tv_end.tv_usec+1000000-
+					  tv_start.tv_usec;
+		}
+
+		if (tv_diff.tv_sec != 0 || tv_diff.tv_usec > 16000)
+			dev_dbg(hy_drv_priv->dev,
+				"send_req:time diff: %ld sec, %ld usec\n",
+				tv_diff.tv_sec, tv_diff.tv_usec);
+	}
+
+	mutex_unlock(&ring_info->lock);
+
+	return 0;
+}
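
(Not part of the patch: a minimal caller sketch, with the hid/op[]
packing assumed from the message code elsewhere in this series, showing
how a request is sent synchronously through xen_be_send_req():)

	struct hyper_dmabuf_req *req;
	int op[5] = { hid.id, hid.rng_key[0], hid.rng_key[1],
		      hid.rng_key[2], 0 };
	int ret;

	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
	if (!req)
		return -ENOMEM;

	hyper_dmabuf_create_req(req, HYPER_DMABUF_NOTIFY_UNEXPORT, &op[0]);

	/* wait == 1: poll (with timeout) until the remote side responds */
	ret = xen_be_send_req(domid, req, 1);
	kfree(req);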
+
+/* ISR for handling request */
+static irqreturn_t back_ring_isr(int irq, void *info)
+{
+	RING_IDX rc, rp;
+	struct hyper_dmabuf_req req;
+	struct hyper_dmabuf_resp resp;
+
+	int notify, more_to_do;
+	int ret;
+
+	struct xen_comm_rx_ring_info *ring_info;
+	struct xen_comm_back_ring *ring;
+
+	ring_info = (struct xen_comm_rx_ring_info *)info;
+	ring = &ring_info->ring_back;
+
+	dev_dbg(hy_drv_priv->dev, "%s\n", __func__);
+
+	do {
+		rc = ring->req_cons;
+		rp = ring->sring->req_prod;
+		more_to_do = 0;
+		while (rc != rp) {
+			if (RING_REQUEST_CONS_OVERFLOW(ring, rc))
+				break;
+
+			memcpy(&req, RING_GET_REQUEST(ring, rc), sizeof(req));
+			ring->req_cons = ++rc;
+
+			ret = hyper_dmabuf_msg_parse(ring_info->sdomain, &req);
+
+			if (ret > 0) {
+				/* preparing a response for the request and
+				 * send it to the requester
+				 */
+				memcpy(&resp, &req, sizeof(resp));
+				memcpy(RING_GET_RESPONSE(ring,
+							 ring->rsp_prod_pvt),
+							 &resp, sizeof(resp));
+				ring->rsp_prod_pvt++;
+
+				dev_dbg(hy_drv_priv->dev,
+					"responding to exporter for req:%d\n",
+					resp.resp_id);
+
+				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(ring,
+								     notify);
+
+				if (notify)
+					notify_remote_via_irq(ring_info->irq);
+			}
+
+			RING_FINAL_CHECK_FOR_REQUESTS(ring, more_to_do);
+		}
+	} while (more_to_do);
+
+	return IRQ_HANDLED;
+}
+
+/* ISR for handling responses */
+static irqreturn_t front_ring_isr(int irq, void *info)
+{
+	/* the front ring only cares about responses from the backend */
+	struct hyper_dmabuf_resp *resp;
+	RING_IDX i, rp;
+	int more_to_do, ret;
+
+	struct xen_comm_tx_ring_info *ring_info;
+	struct xen_comm_front_ring *ring;
+
+	ring_info = (struct xen_comm_tx_ring_info *)info;
+	ring = &ring_info->ring_front;
+
+	dev_dbg(hy_drv_priv->dev, "%s\n", __func__);
+
+	do {
+		more_to_do = 0;
+		rp = ring->sring->rsp_prod;
+		for (i = ring->rsp_cons; i != rp; i++) {
+			resp = RING_GET_RESPONSE(ring, i);
+
+			/* update pending request's status with what is
+			 * in the response
+			 */
+
+			dev_dbg(hy_drv_priv->dev,
+				"getting response from importer\n");
+
+			if (req_pending.req_id == resp->resp_id)
+				req_pending.stat = resp->stat;
+
+			if (resp->stat == HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP) {
+				/* parsing response */
+				ret = hyper_dmabuf_msg_parse(ring_info->rdomain,
+					(struct hyper_dmabuf_req *)resp);
+
+				if (ret < 0) {
+					dev_err(hy_drv_priv->dev,
+						"err while parsing resp\n");
+				}
+			} else if (resp->stat == HYPER_DMABUF_REQ_PROCESSED) {
+				/* for debugging dma_buf remote synch */
+				dev_dbg(hy_drv_priv->dev,
+					"original request = 0x%x\n", resp->cmd);
+				dev_dbg(hy_drv_priv->dev,
+					"got HYPER_DMABUF_REQ_PROCESSED\n");
+			} else if (resp->stat == HYPER_DMABUF_REQ_ERROR) {
+				/* for debugging dma_buf remote synch */
+				dev_dbg(hy_drv_priv->dev,
+					"original request = 0x%x\n", resp->cmd);
+				dev_dbg(hy_drv_priv->dev,
+					"got HYPER_DMABUF_REQ_ERROR\n");
+			}
+		}
+
+		ring->rsp_cons = i;
+
+		if (i != ring->req_prod_pvt)
+			RING_FINAL_CHECK_FOR_RESPONSES(ring, more_to_do);
+		else
+			ring->sring->rsp_event = i+1;
+
+	} while (more_to_do);
+
+	return IRQ_HANDLED;
+}
diff --git a/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.h b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.h
new file mode 100644
index 000000000000..c0d3139ace59
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.h
@@ -0,0 +1,78 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_XEN_COMM_H__
+#define __HYPER_DMABUF_XEN_COMM_H__
+
+#include "xen/interface/io/ring.h"
+#include "xen/xenbus.h"
+#include "../../hyper_dmabuf_msg.h"
+
+extern int xenstored_ready;
+
+DEFINE_RING_TYPES(xen_comm, struct hyper_dmabuf_req, struct hyper_dmabuf_resp);
+
+struct xen_comm_tx_ring_info {
+	struct xen_comm_front_ring ring_front;
+	int rdomain;
+	int gref_ring;
+	int irq;
+	int port;
+	struct mutex lock;
+	struct xenbus_watch watch;
+};
+
+struct xen_comm_rx_ring_info {
+	int sdomain;
+	int irq;
+	int evtchn;
+	struct xen_comm_back_ring ring_back;
+	struct gnttab_unmap_grant_ref unmap_op;
+};
+
+int xen_be_get_domid(void);
+
+int xen_be_init_comm_env(void);
+
+/* exporter needs to generate info for page sharing */
+int xen_be_init_tx_rbuf(int domid);
+
+/* importer needs to know the shared page and port numbers
+ * for the ring buffer and event channel
+ */
+int xen_be_init_rx_rbuf(int domid);
+
+/* cleans up exporter ring created for given domain */
+void xen_be_cleanup_tx_rbuf(int domid);
+
+/* cleans up importer ring created for given domain */
+void xen_be_cleanup_rx_rbuf(int domid);
+
+void xen_be_destroy_comm(void);
+
+/* send request to the remote domain */
+int xen_be_send_req(int domid, struct hyper_dmabuf_req *req,
+		    int wait);
+
+#endif /* __HYPER_DMABUF_XEN_COMM_H__ */
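
(An aside, not part of the patch: DEFINE_RING_TYPES() above generates
struct xen_comm_sring plus the xen_comm_front_ring/xen_comm_back_ring
types used here. A minimal sketch of how the TX (front) side of such a
ring is typically brought up with the standard macros from
xen/interface/io/ring.h:)

	struct xen_comm_sring *sring;
	struct xen_comm_front_ring front;

	/* a single shared page backs the ring */
	sring = (struct xen_comm_sring *)__get_free_page(GFP_KERNEL);
	if (!sring)
		return -ENOMEM;

	SHARED_RING_INIT(sring);	/* reset producer/consumer indices */
	FRONT_RING_INIT(&front, sring, PAGE_SIZE);

	/* the page is then granted to the remote domain and its gref and
	 * event channel port are published via xenstore
	 */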
diff --git a/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm_list.c b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm_list.c
new file mode 100644
index 000000000000..5a8e9d9b737f
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm_list.c
@@ -0,0 +1,158 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/cdev.h>
+#include <linux/hashtable.h>
+#include <xen/grant_table.h>
+#include "../../hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_xen_comm_list.h"
+
+DECLARE_HASHTABLE(xen_comm_tx_ring_hash, MAX_ENTRY_TX_RING);
+DECLARE_HASHTABLE(xen_comm_rx_ring_hash, MAX_ENTRY_RX_RING);
+
+void xen_comm_ring_table_init(void)
+{
+	hash_init(xen_comm_rx_ring_hash);
+	hash_init(xen_comm_tx_ring_hash);
+}
+
+int xen_comm_add_tx_ring(struct xen_comm_tx_ring_info *ring_info)
+{
+	struct xen_comm_tx_ring_info_entry *info_entry;
+
+	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
+
+	if (!info_entry)
+		return -ENOMEM;
+
+	info_entry->info = ring_info;
+
+	hash_add(xen_comm_tx_ring_hash, &info_entry->node,
+		info_entry->info->rdomain);
+
+	return 0;
+}
+
+int xen_comm_add_rx_ring(struct xen_comm_rx_ring_info *ring_info)
+{
+	struct xen_comm_rx_ring_info_entry *info_entry;
+
+	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
+
+	if (!info_entry)
+		return -ENOMEM;
+
+	info_entry->info = ring_info;
+
+	hash_add(xen_comm_rx_ring_hash, &info_entry->node,
+		info_entry->info->sdomain);
+
+	return 0;
+}
+
+struct xen_comm_tx_ring_info *xen_comm_find_tx_ring(int domid)
+{
+	struct xen_comm_tx_ring_info_entry *info_entry;
+	int bkt;
+
+	hash_for_each(xen_comm_tx_ring_hash, bkt, info_entry, node)
+		if (info_entry->info->rdomain == domid)
+			return info_entry->info;
+
+	return NULL;
+}
+
+struct xen_comm_rx_ring_info *xen_comm_find_rx_ring(int domid)
+{
+	struct xen_comm_rx_ring_info_entry *info_entry;
+	int bkt;
+
+	hash_for_each(xen_comm_rx_ring_hash, bkt, info_entry, node)
+		if (info_entry->info->sdomain == domid)
+			return info_entry->info;
+
+	return NULL;
+}
+
+int xen_comm_remove_tx_ring(int domid)
+{
+	struct xen_comm_tx_ring_info_entry *info_entry;
+	int bkt;
+
+	hash_for_each(xen_comm_tx_ring_hash, bkt, info_entry, node)
+		if (info_entry->info->rdomain == domid) {
+			hash_del(&info_entry->node);
+			kfree(info_entry);
+			return 0;
+		}
+
+	return -ENOENT;
+}
+
+int xen_comm_remove_rx_ring(int domid)
+{
+	struct xen_comm_rx_ring_info_entry *info_entry;
+	int bkt;
+
+	hash_for_each(xen_comm_rx_ring_hash, bkt, info_entry, node)
+		if (info_entry->info->sdomain == domid) {
+			hash_del(&info_entry->node);
+			kfree(info_entry);
+			return 0;
+		}
+
+	return -ENOENT;
+}
+
+void xen_comm_foreach_tx_ring(void (*func)(int domid))
+{
+	struct xen_comm_tx_ring_info_entry *info_entry;
+	struct hlist_node *tmp;
+	int bkt;
+
+	hash_for_each_safe(xen_comm_tx_ring_hash, bkt, tmp,
+			   info_entry, node) {
+		func(info_entry->info->rdomain);
+	}
+}
+
+void xen_comm_foreach_rx_ring(void (*func)(int domid))
+{
+	struct xen_comm_rx_ring_info_entry *info_entry;
+	struct hlist_node *tmp;
+	int bkt;
+
+	hash_for_each_safe(xen_comm_rx_ring_hash, bkt, tmp,
+			   info_entry, node) {
+		func(info_entry->info->sdomain);
+	}
+}
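
(A short usage sketch, not part of the patch: the send path combines the
helpers above the same way xen_be_send_req() does earlier in this patch:)

	struct xen_comm_tx_ring_info *ring_info;

	/* after xen_be_init_tx_rbuf(domid) registered the channel ... */
	ring_info = xen_comm_find_tx_ring(domid);
	if (!ring_info)
		return -ENOENT;	/* no channel to that domain yet */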
diff --git a/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm_list.h b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm_list.h
new file mode 100644
index 000000000000..8d4b52bd41b0
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm_list.h
@@ -0,0 +1,67 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_XEN_COMM_LIST_H__
+#define __HYPER_DMABUF_XEN_COMM_LIST_H__
+
+/* number of bits to be used for exported dmabufs hash table */
+#define MAX_ENTRY_TX_RING 7
+/* number of bits to be used for imported dmabufs hash table */
+#define MAX_ENTRY_RX_RING 7
+
+struct xen_comm_tx_ring_info_entry {
+	struct xen_comm_tx_ring_info *info;
+	struct hlist_node node;
+};
+
+struct xen_comm_rx_ring_info_entry {
+	struct xen_comm_rx_ring_info *info;
+	struct hlist_node node;
+};
+
+void xen_comm_ring_table_init(void);
+
+int xen_comm_add_tx_ring(struct xen_comm_tx_ring_info *ring_info);
+
+int xen_comm_add_rx_ring(struct xen_comm_rx_ring_info *ring_info);
+
+int xen_comm_remove_tx_ring(int domid);
+
+int xen_comm_remove_rx_ring(int domid);
+
+struct xen_comm_tx_ring_info *xen_comm_find_tx_ring(int domid);
+
+struct xen_comm_rx_ring_info *xen_comm_find_rx_ring(int domid);
+
+/* iterates over all exporter rings and calls provided
+ * function for each of them
+ */
+void xen_comm_foreach_tx_ring(void (*func)(int domid));
+
+/* iterates over all importer rings and calls provided
+ * function for each of them
+ */
+void xen_comm_foreach_rx_ring(void (*func)(int domid));
+
+#endif /* __HYPER_DMABUF_XEN_COMM_LIST_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_drv.c b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_drv.c
new file mode 100644
index 000000000000..8122dc15b4cb
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_drv.c
@@ -0,0 +1,46 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include "../../hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_xen_comm.h"
+#include "hyper_dmabuf_xen_shm.h"
+
+struct hyper_dmabuf_bknd_ops xen_bknd_ops = {
+	.init = NULL, /* not needed for xen */
+	.cleanup = NULL, /* not needed for xen */
+	.get_vm_id = xen_be_get_domid,
+	.share_pages = xen_be_share_pages,
+	.unshare_pages = xen_be_unshare_pages,
+	.map_shared_pages = (void *)xen_be_map_shared_pages,
+	.unmap_shared_pages = xen_be_unmap_shared_pages,
+	.init_comm_env = xen_be_init_comm_env,
+	.destroy_comm = xen_be_destroy_comm,
+	.init_rx_ch = xen_be_init_rx_rbuf,
+	.init_tx_ch = xen_be_init_tx_rbuf,
+	.send_req = xen_be_send_req,
+};
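
(Not part of the patch: a sketch of how the hypervisor-agnostic core is
expected to dispatch through this ops table once hy_drv_priv->bknd_ops
is pointed at xen_bknd_ops, as done in hyper_dmabuf_drv.c below:)

	struct hyper_dmabuf_bknd_ops *ops = hy_drv_priv->bknd_ops;
	int ret;

	ret = ops->init_comm_env();	/* resolves to xen_be_init_comm_env() */
	if (ret < 0)
		return ret;

	/* exporter side: share the buffer's pages with the remote VM */
	ret = ops->share_pages(pages, remote_domid, nents, &refs_info);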
diff --git a/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_drv.h b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_drv.h
new file mode 100644
index 000000000000..c97dc1c5d042
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_drv.h
@@ -0,0 +1,53 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_XEN_DRV_H__
+#define __HYPER_DMABUF_XEN_DRV_H__
+#include <xen/interface/grant_table.h>
+
+extern struct hyper_dmabuf_bknd_ops xen_bknd_ops;
+
+/* Main purpose of this structure is to keep
+ * all references created or acquired for sharing
+ * pages with another domain for freeing those later
+ * when unsharing.
+ */
+struct xen_shared_pages_info {
+	/* top level refid */
+	grant_ref_t lvl3_gref;
+
+	/* page of top level addressing, it contains refids of 2nd lvl pages */
+	grant_ref_t *lvl3_table;
+
+	/* table of 2nd level pages, that contains refids to data pages */
+	grant_ref_t *lvl2_table;
+
+	/* unmap ops for mapped pages */
+	struct gnttab_unmap_grant_ref *unmap_ops;
+
+	/* data pages to be unmapped */
+	struct page **data_pages;
+};
+
+#endif /* __HYPER_DMABUF_XEN_DRV_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_shm.c b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_shm.c
new file mode 100644
index 000000000000..b2dcef34e10f
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_shm.c
@@ -0,0 +1,525 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/slab.h>
+#include <xen/grant_table.h>
+#include <asm/xen/page.h>
+#include "hyper_dmabuf_xen_drv.h"
+#include "../../hyper_dmabuf_drv.h"
+
+#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
+
+/*
+ * Creates 2 level page directory structure for referencing shared pages.
+ * Top level page is a single page that contains up to 1024 refids that
+ * point to 2nd level pages.
+ *
+ * Each 2nd level page contains up to 1024 refids that point to shared
+ * data pages.
+ *
+ * There will always be one top level page and number of 2nd level pages
+ * depends on number of shared data pages.
+ *
+ *      3rd level page                2nd level pages            Data pages
+ * +-------------------------+   ┌>+--------------------+ ┌>+------------+
+ * |2nd level page 0 refid   |---┘ |Data page 0 refid   |-┘ |Data page 0 |
+ * |2nd level page 1 refid   |---┐ |Data page 1 refid   |-┐ +------------+
+ * |           ...           |   | |     ....           | |
+ * |2nd level page 1023 refid|-┐ | |Data page 1023 refid| └>+------------+
+ * +-------------------------+ | | +--------------------+   |Data page 1 |
+ *                             | |                          +------------+
+ *                             | └>+--------------------+
+ *                             |   |Data page 1024 refid|
+ *                             |   |Data page 1025 refid|
+ *                             |   |       ...          |
+ *                             |   |Data page 2047 refid|
+ *                             |   +--------------------+
+ *                             |
+ *                             |        .....
+ *                             └-->+-----------------------+
+ *                                 |Data page 1047552 refid|
+ *                                 |Data page 1047553 refid|
+ *                                 |       ...             |
+ *                                 |Data page 1048575 refid|
+ *                                 +-----------------------+
+ *
+ * Using such a 2-level structure it is possible to reference up to 4GB of
+ * shared data using a single refid pointing to the top level page.
+ *
+ * Returns refid of top level page.
+ */
+int xen_be_share_pages(struct page **pages, int domid, int nents,
+		       void **refs_info)
+{
+	grant_ref_t lvl3_gref;
+	grant_ref_t *lvl2_table;
+	grant_ref_t *lvl3_table;
+
+	/*
+	 * Calculate number of pages needed for 2nd level addressing:
+	 */
+	int n_lvl2_grefs = (nents/REFS_PER_PAGE +
+			   ((nents % REFS_PER_PAGE) ? 1 : 0));
+
+	struct xen_shared_pages_info *sh_pages_info;
+	int i;
+
+	lvl3_table = (grant_ref_t *)__get_free_pages(GFP_KERNEL, 1);
+
+	/* __get_free_pages() takes an allocation order, not a page count */
+	lvl2_table = (grant_ref_t *)__get_free_pages(GFP_KERNEL,
+				get_order(n_lvl2_grefs * PAGE_SIZE));
+
+	sh_pages_info = kmalloc(sizeof(*sh_pages_info), GFP_KERNEL);
+
+	if (!lvl3_table || !lvl2_table || !sh_pages_info) {
+		kfree(sh_pages_info);
+		free_pages((unsigned long)lvl2_table,
+			   get_order(n_lvl2_grefs * PAGE_SIZE));
+		free_pages((unsigned long)lvl3_table, 1);
+		return -ENOMEM;
+	}
+
+	*refs_info = (void *)sh_pages_info;
+
+	/* share data pages in readonly mode for security */
+	for (i = 0; i < nents; i++) {
+		lvl2_table[i] = gnttab_grant_foreign_access(domid,
+					pfn_to_mfn(page_to_pfn(pages[i])),
+					true /* read only */);
+		if (lvl2_table[i] == -ENOSPC) {
+			dev_err(hy_drv_priv->dev,
+				"No more space left in grant table\n");
+
+			/* Unshare all already shared pages for lvl2 */
+			while (i--) {
+				gnttab_end_foreign_access_ref(lvl2_table[i], 0);
+				gnttab_free_grant_reference(lvl2_table[i]);
+			}
+			goto err_cleanup;
+		}
+	}
+
+	/* Share 2nd level addressing pages in readonly mode */
+	for (i = 0; i < n_lvl2_grefs; i++) {
+		lvl3_table[i] = gnttab_grant_foreign_access(domid,
+					virt_to_mfn(
+					(unsigned long)lvl2_table+i*PAGE_SIZE),
+					true);
+
+		if (lvl3_table[i] == -ENOSPC) {
+			dev_err(hy_drv_priv->dev,
+				"No more space left in grant table\n");
+
+			/* Unshare all already shared pages for lvl3 */
+			while (i--) {
+				gnttab_end_foreign_access_ref(lvl3_table[i], 1);
+				gnttab_free_grant_reference(lvl3_table[i]);
+			}
+
+			/* Unshare all pages for lvl2 */
+			while (nents--) {
+				gnttab_end_foreign_access_ref(
+							lvl2_table[nents], 0);
+				gnttab_free_grant_reference(lvl2_table[nents]);
+			}
+
+			goto err_cleanup;
+		}
+	}
+
+	/* Share lvl3_table in readonly mode */
+	lvl3_gref = gnttab_grant_foreign_access(domid,
+			virt_to_mfn((unsigned long)lvl3_table),
+			true);
+
+	if (lvl3_gref == -ENOSPC) {
+		dev_err(hy_drv_priv->dev,
+			"No more space left in grant table\n");
+
+		/* Unshare all pages for lvl3 */
+		while (i--) {
+			gnttab_end_foreign_access_ref(lvl3_table[i], 1);
+			gnttab_free_grant_reference(lvl3_table[i]);
+		}
+
+		/* Unshare all pages for lvl2 */
+		while (nents--) {
+			gnttab_end_foreign_access_ref(lvl2_table[nents], 0);
+			gnttab_free_grant_reference(lvl2_table[nents]);
+		}
+
+		goto err_cleanup;
+	}
+
+	/* Store lvl3_table page to be freed later */
+	sh_pages_info->lvl3_table = lvl3_table;
+
+	/* Store lvl2_table pages to be freed later */
+	sh_pages_info->lvl2_table = lvl2_table;
+
+	/* Store exported pages refid to be unshared later */
+	sh_pages_info->lvl3_gref = lvl3_gref;
+
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
+	return lvl3_gref;
+
+err_cleanup:
+	free_pages((unsigned long)lvl2_table,
+		   get_order(n_lvl2_grefs * PAGE_SIZE));
+	free_pages((unsigned long)lvl3_table, 1);
+
+	return -ENOSPC;
+}
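
(A worked example of the sizing above, assuming 4 KiB pages and a 4-byte
grant_ref_t so REFS_PER_PAGE == 1024: sharing a 16 MiB buffer gives
nents == 4096 data pages, hence n_lvl2_grefs == 4096/1024 == 4 second
level pages, whose four refids occupy just the start of the single top
level page.)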
+
+int xen_be_unshare_pages(void **refs_info, int nents)
+{
+	struct xen_shared_pages_info *sh_pages_info;
+	int n_lvl2_grefs = (nents/REFS_PER_PAGE +
+			    ((nents % REFS_PER_PAGE) ? 1 : 0));
+	int i;
+
+	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
+	sh_pages_info = (struct xen_shared_pages_info *)(*refs_info);
+
+	if (sh_pages_info->lvl3_table == NULL ||
+	    sh_pages_info->lvl2_table ==  NULL ||
+	    sh_pages_info->lvl3_gref == -1) {
+		dev_warn(hy_drv_priv->dev,
+			 "gref table for hyper_dmabuf already cleaned up\n");
+		return 0;
+	}
+
+	/* End foreign access for data pages, but do not free them */
+	for (i = 0; i < nents; i++) {
+		if (gnttab_query_foreign_access(sh_pages_info->lvl2_table[i]))
+			dev_warn(hy_drv_priv->dev, "refid not shared !!\n");
+
+		gnttab_end_foreign_access_ref(sh_pages_info->lvl2_table[i], 0);
+		gnttab_free_grant_reference(sh_pages_info->lvl2_table[i]);
+	}
+
+	/* End foreign access for 2nd level addressing pages */
+	for (i = 0; i < n_lvl2_grefs; i++) {
+		if (gnttab_query_foreign_access(sh_pages_info->lvl3_table[i]))
+			dev_warn(hy_drv_priv->dev, "refid not shared !!\n");
+
+		if (!gnttab_end_foreign_access_ref(
+					sh_pages_info->lvl3_table[i], 1))
+			dev_warn(hy_drv_priv->dev, "refid still in use!!!\n");
+
+		gnttab_free_grant_reference(sh_pages_info->lvl3_table[i]);
+	}
+
+	/* End foreign access for top level addressing page */
+	if (gnttab_query_foreign_access(sh_pages_info->lvl3_gref))
+		dev_warn(hy_drv_priv->dev, "gref not shared !!\n");
+
+	gnttab_end_foreign_access_ref(sh_pages_info->lvl3_gref, 1);
+	gnttab_free_grant_reference(sh_pages_info->lvl3_gref);
+
+	/* freeing all pages used for 2 level addressing */
+	free_pages((unsigned long)sh_pages_info->lvl2_table,
+		   get_order(n_lvl2_grefs * PAGE_SIZE));
+	free_pages((unsigned long)sh_pages_info->lvl3_table, 1);
+
+	sh_pages_info->lvl3_gref = -1;
+	sh_pages_info->lvl2_table = NULL;
+	sh_pages_info->lvl3_table = NULL;
+	kfree(sh_pages_info);
+	sh_pages_info = NULL;
+
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
+	return 0;
+}
+
+/* Maps provided top level ref id and then returns an array of pages
+ * containing data refs.
+ */
+struct page **xen_be_map_shared_pages(unsigned long lvl3_gref, int domid,
+				      int nents, void **refs_info)
+{
+	struct page *lvl3_table_page;
+	struct page **lvl2_table_pages;
+	struct page **data_pages;
+	struct xen_shared_pages_info *sh_pages_info;
+
+	grant_ref_t *lvl3_table;
+	grant_ref_t *lvl2_table;
+
+	struct gnttab_map_grant_ref lvl3_map_ops;
+	struct gnttab_unmap_grant_ref lvl3_unmap_ops;
+
+	struct gnttab_map_grant_ref *lvl2_map_ops;
+	struct gnttab_unmap_grant_ref *lvl2_unmap_ops;
+
+	struct gnttab_map_grant_ref *data_map_ops;
+	struct gnttab_unmap_grant_ref *data_unmap_ops;
+
+	/* # of grefs in the last page of lvl2 table */
+	int nents_last = (nents - 1) % REFS_PER_PAGE + 1;
+	int n_lvl2_grefs = (nents / REFS_PER_PAGE) +
+			   ((nents_last > 0) ? 1 : 0) -
+			   (nents_last == REFS_PER_PAGE);
+	int i, j, k;
+
+	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
+
+	sh_pages_info = kmalloc(sizeof(*sh_pages_info), GFP_KERNEL);
+
+	lvl2_table_pages = kcalloc(n_lvl2_grefs, sizeof(struct page *),
+				   GFP_KERNEL);
+
+	data_pages = kcalloc(nents, sizeof(struct page *), GFP_KERNEL);
+
+	lvl2_map_ops = kcalloc(n_lvl2_grefs, sizeof(*lvl2_map_ops),
+			       GFP_KERNEL);
+
+	lvl2_unmap_ops = kcalloc(n_lvl2_grefs, sizeof(*lvl2_unmap_ops),
+				 GFP_KERNEL);
+
+	data_map_ops = kcalloc(nents, sizeof(*data_map_ops), GFP_KERNEL);
+	data_unmap_ops = kcalloc(nents, sizeof(*data_unmap_ops), GFP_KERNEL);
+
+	/* bail out if any of the allocations above failed */
+	if (!sh_pages_info || !lvl2_table_pages || !data_pages ||
+	    !lvl2_map_ops || !lvl2_unmap_ops || !data_map_ops ||
+	    !data_unmap_ops) {
+		kfree(sh_pages_info);
+		kfree(lvl2_table_pages);
+		kfree(data_pages);
+		kfree(lvl2_map_ops);
+		kfree(lvl2_unmap_ops);
+		kfree(data_map_ops);
+		kfree(data_unmap_ops);
+		return NULL;
+	}
+
+	*refs_info = (void *)sh_pages_info;
+
+	/* Map top level addressing page */
+	if (gnttab_alloc_pages(1, &lvl3_table_page)) {
+		dev_err(hy_drv_priv->dev, "Cannot allocate pages\n");
+		return NULL;
+	}
+
+	lvl3_table = (grant_ref_t *)pfn_to_kaddr(page_to_pfn(lvl3_table_page));
+
+	gnttab_set_map_op(&lvl3_map_ops, (unsigned long)lvl3_table,
+			  GNTMAP_host_map | GNTMAP_readonly,
+			  (grant_ref_t)lvl3_gref, domid);
+
+	gnttab_set_unmap_op(&lvl3_unmap_ops, (unsigned long)lvl3_table,
+			    GNTMAP_host_map | GNTMAP_readonly, -1);
+
+	if (gnttab_map_refs(&lvl3_map_ops, NULL, &lvl3_table_page, 1)) {
+		dev_err(hy_drv_priv->dev,
+			"HYPERVISOR map grant ref failed");
+		return NULL;
+	}
+
+	if (lvl3_map_ops.status) {
+		dev_err(hy_drv_priv->dev,
+			"HYPERVISOR map grant ref failed status = %d",
+			lvl3_map_ops.status);
+
+		goto error_cleanup_lvl3;
+	} else {
+		lvl3_unmap_ops.handle = lvl3_map_ops.handle;
+	}
+
+	/* Map all second level pages */
+	if (gnttab_alloc_pages(n_lvl2_grefs, lvl2_table_pages)) {
+		dev_err(hy_drv_priv->dev, "Cannot allocate pages\n");
+		goto error_cleanup_lvl3;
+	}
+
+	for (i = 0; i < n_lvl2_grefs; i++) {
+		lvl2_table = (grant_ref_t *)pfn_to_kaddr(
+					page_to_pfn(lvl2_table_pages[i]));
+		gnttab_set_map_op(&lvl2_map_ops[i],
+				  (unsigned long)lvl2_table, GNTMAP_host_map |
+				  GNTMAP_readonly,
+				  lvl3_table[i], domid);
+		gnttab_set_unmap_op(&lvl2_unmap_ops[i],
+				    (unsigned long)lvl2_table, GNTMAP_host_map |
+				    GNTMAP_readonly, -1);
+	}
+
+	/* Unmap top level page, as it won't be needed any longer */
+	if (gnttab_unmap_refs(&lvl3_unmap_ops, NULL,
+			      &lvl3_table_page, 1)) {
+		dev_err(hy_drv_priv->dev,
+			"xen: cannot unmap top level page\n");
+		return NULL;
+	}
+
+	/* Mark that page was unmapped */
+	lvl3_unmap_ops.handle = -1;
+
+	if (gnttab_map_refs(lvl2_map_ops, NULL,
+			    lvl2_table_pages, n_lvl2_grefs)) {
+		dev_err(hy_drv_priv->dev,
+			"HYPERVISOR map grant ref failed");
+		return NULL;
+	}
+
+	/* Checks if pages were mapped correctly */
+	for (i = 0; i < n_lvl2_grefs; i++) {
+		if (lvl2_map_ops[i].status) {
+			dev_err(hy_drv_priv->dev,
+				"HYPERVISOR map grant ref failed status = %d",
+				lvl2_map_ops[i].status);
+			goto error_cleanup_lvl2;
+		} else {
+			lvl2_unmap_ops[i].handle = lvl2_map_ops[i].handle;
+		}
+	}
+
+	if (gnttab_alloc_pages(nents, data_pages)) {
+		dev_err(hy_drv_priv->dev,
+			"Cannot allocate pages\n");
+		goto error_cleanup_lvl2;
+	}
+
+	k = 0;
+
+	for (i = 0; i < n_lvl2_grefs - 1; i++) {
+		lvl2_table = pfn_to_kaddr(page_to_pfn(lvl2_table_pages[i]));
+		for (j = 0; j < REFS_PER_PAGE; j++) {
+			gnttab_set_map_op(&data_map_ops[k],
+				(unsigned long)pfn_to_kaddr(
+						page_to_pfn(data_pages[k])),
+				GNTMAP_host_map | GNTMAP_readonly,
+				lvl2_table[j], domid);
+
+			gnttab_set_unmap_op(&data_unmap_ops[k],
+				(unsigned long)pfn_to_kaddr(
+						page_to_pfn(data_pages[k])),
+				GNTMAP_host_map | GNTMAP_readonly, -1);
+			k++;
+		}
+	}
+
+	/* for grefs in the last lvl2 table page */
+	lvl2_table = pfn_to_kaddr(page_to_pfn(
+				lvl2_table_pages[n_lvl2_grefs - 1]));
+
+	for (j = 0; j < nents_last; j++) {
+		gnttab_set_map_op(&data_map_ops[k],
+			(unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
+			GNTMAP_host_map | GNTMAP_readonly,
+			lvl2_table[j], domid);
+
+		gnttab_set_unmap_op(&data_unmap_ops[k],
+			(unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
+			GNTMAP_host_map | GNTMAP_readonly, -1);
+		k++;
+	}
+
+	if (gnttab_map_refs(data_map_ops, NULL,
+			    data_pages, nents)) {
+		dev_err(hy_drv_priv->dev,
+			"HYPERVISOR map grant ref failed\n");
+		return NULL;
+	}
+
+	/* unmapping lvl2 table pages */
+	if (gnttab_unmap_refs(lvl2_unmap_ops,
+			      NULL, lvl2_table_pages,
+			      n_lvl2_grefs)) {
+		dev_err(hy_drv_priv->dev,
+			"Cannot unmap 2nd level refs\n");
+		return NULL;
+	}
+
+	/* Mark that pages were unmapped */
+	for (i = 0; i < n_lvl2_grefs; i++)
+		lvl2_unmap_ops[i].handle = -1;
+
+	for (i = 0; i < nents; i++) {
+		if (data_map_ops[i].status) {
+			dev_err(hy_drv_priv->dev,
+				"HYPERVISOR map grant ref failed status = %d\n",
+				data_map_ops[i].status);
+			goto error_cleanup_data;
+		} else {
+			data_unmap_ops[i].handle = data_map_ops[i].handle;
+		}
+	}
+
+	/* store these references for unmapping in the future */
+	sh_pages_info->unmap_ops = data_unmap_ops;
+	sh_pages_info->data_pages = data_pages;
+
+	gnttab_free_pages(1, &lvl3_table_page);
+	gnttab_free_pages(n_lvl2_grefs, lvl2_table_pages);
+	kfree(lvl2_table_pages);
+	kfree(lvl2_map_ops);
+	kfree(lvl2_unmap_ops);
+	kfree(data_map_ops);
+
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
+	return data_pages;
+
+error_cleanup_data:
+	gnttab_unmap_refs(data_unmap_ops, NULL, data_pages,
+			  nents);
+
+	gnttab_free_pages(nents, data_pages);
+
+error_cleanup_lvl2:
+	if (lvl2_unmap_ops[0].handle != -1)
+		gnttab_unmap_refs(lvl2_unmap_ops, NULL,
+				  lvl2_table_pages, n_lvl2_grefs);
+	gnttab_free_pages(n_lvl2_grefs, lvl2_table_pages);
+
+error_cleanup_lvl3:
+	if (lvl3_unmap_ops.handle != -1)
+		gnttab_unmap_refs(&lvl3_unmap_ops, NULL,
+				  &lvl3_table_page, 1);
+	gnttab_free_pages(1, &lvl3_table_page);
+
+	kfree(lvl2_table_pages);
+	kfree(lvl2_map_ops);
+	kfree(lvl2_unmap_ops);
+	kfree(data_map_ops);
+
+
+	return NULL;
+}
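
(An example of the last-page accounting above, again with
REFS_PER_PAGE == 1024: nents == 2050 gives
nents_last == (2050 - 1) % 1024 + 1 == 2 and
n_lvl2_grefs == 2 + 1 - 0 == 3, i.e. two full second level pages plus a
third holding the final two refs; the exact multiple nents == 2048 gives
nents_last == 1024 and n_lvl2_grefs == 2 + 1 - 1 == 2.)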
+
+int xen_be_unmap_shared_pages(void **refs_info, int nents)
+{
+	struct xen_shared_pages_info *sh_pages_info;
+
+	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
+
+	sh_pages_info = (struct xen_shared_pages_info *)(*refs_info);
+
+	if (sh_pages_info->unmap_ops == NULL ||
+	    sh_pages_info->data_pages == NULL) {
+		dev_warn(hy_drv_priv->dev,
+			 "pages already cleaned up or buffer not imported yet\n");
+		return 0;
+	}
+
+	if (gnttab_unmap_refs(sh_pages_info->unmap_ops, NULL,
+			      sh_pages_info->data_pages, nents)) {
+		dev_err(hy_drv_priv->dev, "Cannot unmap data pages\n");
+		return -EFAULT;
+	}
+
+	gnttab_free_pages(nents, sh_pages_info->data_pages);
+
+	kfree(sh_pages_info->data_pages);
+	kfree(sh_pages_info->unmap_ops);
+	sh_pages_info->unmap_ops = NULL;
+	sh_pages_info->data_pages = NULL;
+	kfree(sh_pages_info);
+	sh_pages_info = NULL;
+
+	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
+	return 0;
+}
diff --git a/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_shm.h b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_shm.h
new file mode 100644
index 000000000000..c39f241351f8
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_shm.h
@@ -0,0 +1,46 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_XEN_SHM_H__
+#define __HYPER_DMABUF_XEN_SHM_H__
+
+/* This collects all reference numbers for the 2nd level shared pages,
+ * creates a table with those in the 1st level shared page, and then
+ * returns the reference number for this top level table.
+ */
+int xen_be_share_pages(struct page **pages, int domid, int nents,
+		    void **refs_info);
+
+int xen_be_unshare_pages(void **refs_info, int nents);
+
+/* Maps provided top level ref id and then returns an array of pages
+ * containing data refs.
+ */
+struct page **xen_be_map_shared_pages(unsigned long lvl3_gref, int domid,
+				      int nents,
+				      void **refs_info);
+
+int xen_be_unmap_shared_pages(void **refs_info, int nents);
+
+#endif /* __HYPER_DMABUF_XEN_SHM_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c
index 18c1cd735ea2..3320f9dcc769 100644
--- a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -42,6 +42,10 @@
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_id.h"
 
+#ifdef CONFIG_HYPER_DMABUF_XEN
+#include "backends/xen/hyper_dmabuf_xen_drv.h"
+#endif
+
 MODULE_LICENSE("GPL and additional rights");
 MODULE_AUTHOR("Intel Corporation");
 
@@ -145,7 +149,13 @@ static int __init hyper_dmabuf_drv_init(void)
 		return ret;
 	}
 
+/* currently only supports XEN hypervisor */
+#ifdef CONFIG_HYPER_DMABUF_XEN
+	hy_drv_priv->bknd_ops = &xen_bknd_ops;
+#else
 	hy_drv_priv->bknd_ops = NULL;
+	pr_err("hyper_dmabuf drv currently supports XEN only.\n");
+#endif
 
 	if (hy_drv_priv->bknd_ops == NULL) {
 		pr_err("Hyper_dmabuf: no backend found\n");
-- 
2.16.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [RFC PATCH v2 6/9] hyper_dmabuf: hyper_DMABUF synchronization across VM
  2018-02-14  1:49 [RFC PATCH v2 0/9] hyper_dmabuf: Hyper_DMABUF driver Dongwon Kim
                   ` (4 preceding siblings ...)
  2018-02-14  1:50 ` [RFC PATCH v2 5/9] hyper_dmabuf: default backend for XEN hypervisor Dongwon Kim
@ 2018-02-14  1:50 ` Dongwon Kim
  2018-02-14  1:50 ` [RFC PATCH v2 7/9] hyper_dmabuf: query ioctl for retreiving various hyper_DMABUF info Dongwon Kim
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Dongwon Kim @ 2018-02-14  1:50 UTC (permalink / raw)
  To: linux-kernel, linaro-mm-sig, xen-devel
  Cc: dri-devel, dongwon.kim, mateuszx.potrola, sumit.semwal

All hyper_DMABUF operations (hyper_dmabuf_ops.c) now send a message to
the exporting VM for synchronization between the two VMs. For this,
every mapping done by the importer makes the exporter perform a shadow
mapping of the original DMA-BUF. All subsequent DMA-BUF operations
(attach, detach, map/unmap and so on) are then mimicked on this shadowed
DMA-BUF for tracking and synchronization purposes (e.g. incrementing and
decrementing a reference count to check the buffer's status).
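
A sketch of the resulting flow for a single importer-side map call
(function names as used in this patch):

    importer VM                         exporter VM
    -----------                         -----------
    hyper_dmabuf_ops_map()
      sync_request(hid, HYPER_DMABUF_OPS_MAP)
        bknd_ops->send_req()  ------>   hyper_dmabuf_msg_parse()
                                          hyper_dmabuf_remote_sync(hid, OPS_MAP)
                                            dma_buf_map_attachment()  (shadow map)
              <------  HYPER_DMABUF_REQ_PROCESSED / _ERROR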

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
Signed-off-by: Mateusz Polrola <mateuszx.potrola@intel.com>
---
 drivers/dma-buf/hyper_dmabuf/Makefile              |   1 +
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c    |  53 +++-
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h    |   2 +
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c    | 157 +++++++++-
 .../hyper_dmabuf/hyper_dmabuf_remote_sync.c        | 324 +++++++++++++++++++++
 .../hyper_dmabuf/hyper_dmabuf_remote_sync.h        |  32 ++
 6 files changed, 565 insertions(+), 4 deletions(-)
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.h

diff --git a/drivers/dma-buf/hyper_dmabuf/Makefile b/drivers/dma-buf/hyper_dmabuf/Makefile
index b9ab4eeca6f2..702696f29215 100644
--- a/drivers/dma-buf/hyper_dmabuf/Makefile
+++ b/drivers/dma-buf/hyper_dmabuf/Makefile
@@ -9,6 +9,7 @@ ifneq ($(KERNELRELEASE),)
 				 hyper_dmabuf_ops.o \
 				 hyper_dmabuf_msg.o \
 				 hyper_dmabuf_id.o \
+				 hyper_dmabuf_remote_sync.o \
 
 ifeq ($(CONFIG_HYPER_DMABUF_XEN), y)
 	$(TARGET_MODULE)-objs += backends/xen/hyper_dmabuf_xen_comm.o \
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c
index 7176fa8fb139..1592d5cfaa52 100644
--- a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -34,6 +34,7 @@
 #include <linux/workqueue.h>
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_msg.h"
+#include "hyper_dmabuf_remote_sync.h"
 #include "hyper_dmabuf_list.h"
 
 struct cmd_process {
@@ -92,6 +93,25 @@ void hyper_dmabuf_create_req(struct hyper_dmabuf_req *req,
 			req->op[i] = op[i];
 		break;
 
+	case HYPER_DMABUF_OPS_TO_REMOTE:
+		/* notifying dmabuf map/unmap to importer (probably not needed)
+		 * for dmabuf synchronization
+		 */
+		break;
+
+	case HYPER_DMABUF_OPS_TO_SOURCE:
+		/* notifying dmabuf map/unmap to exporter; map will make
+		 * the driver do shadow mapping or unmapping for
+		 * synchronization with the original exporter (e.g. i915)
+		 *
+		 * command : DMABUF_OPS_TO_SOURCE.
+		 * op0~3 : hyper_dmabuf_id
+		 * op4 : map(=1)/unmap(=2)/attach(=3)/detach(=4)
+		 */
+		for (i = 0; i < 5; i++)
+			req->op[i] = op[i];
+		break;
+
 	default:
 		/* no command found */
 		return;
@@ -201,6 +221,12 @@ static void cmd_process_work(struct work_struct *work)
 
 		break;
 
+	case HYPER_DMABUF_OPS_TO_REMOTE:
+		/* notifying dmabuf map/unmap to importer
+		 * (probably not needed) for dmabuf synchronization
+		 */
+		break;
+
 	default:
 		/* shouldn't get here */
 		break;
@@ -217,6 +243,7 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 	struct imported_sgt_info *imported;
 	struct exported_sgt_info *exported;
 	hyper_dmabuf_id_t hid;
+	int ret;
 
 	if (!req) {
 		dev_err(hy_drv_priv->dev, "request is NULL\n");
@@ -229,7 +256,7 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 	hid.rng_key[2] = req->op[3];
 
 	if ((req->cmd < HYPER_DMABUF_EXPORT) ||
-		(req->cmd > HYPER_DMABUF_NOTIFY_UNEXPORT)) {
+		(req->cmd > HYPER_DMABUF_OPS_TO_SOURCE)) {
 		dev_err(hy_drv_priv->dev, "invalid command\n");
 		return -EINVAL;
 	}
@@ -271,6 +298,30 @@ int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
 		return req->cmd;
 	}
 
+	/* dma buf remote synchronization */
+	if (req->cmd == HYPER_DMABUF_OPS_TO_SOURCE) {
+		/* notifying dmabuf map/unmap to exporter; map will
+		 * make the driver do shadow mapping
+		 * or unmapping for synchronization with the original
+		 * exporter (e.g. i915)
+		 *
+		 * command : DMABUF_OPS_TO_SOURCE.
+		 * op0~3 : hyper_dmabuf_id
+		 * op4 : enum hyper_dmabuf_ops {....}
+		 */
+		dev_dbg(hy_drv_priv->dev,
+			"%s: HYPER_DMABUF_OPS_TO_SOURCE\n", __func__);
+
+		ret = hyper_dmabuf_remote_sync(hid, req->op[4]);
+
+		if (ret)
+			req->stat = HYPER_DMABUF_REQ_ERROR;
+		else
+			req->stat = HYPER_DMABUF_REQ_PROCESSED;
+
+		return req->cmd;
+	}
+
 	/* synchronous dma_buf_fd export */
 	if (req->cmd == HYPER_DMABUF_EXPORT_FD) {
 		/* find a corresponding SGT for the id */
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h
index 63a39d068d69..82d2900d3077 100644
--- a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h
@@ -48,6 +48,8 @@ enum hyper_dmabuf_command {
 	HYPER_DMABUF_EXPORT_FD,
 	HYPER_DMABUF_EXPORT_FD_FAILED,
 	HYPER_DMABUF_NOTIFY_UNEXPORT,
+	HYPER_DMABUF_OPS_TO_REMOTE,
+	HYPER_DMABUF_OPS_TO_SOURCE,
 };
 
 enum hyper_dmabuf_ops {
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c
index b4d3c2caad73..02d42c099ad9 100644
--- a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c
@@ -51,16 +51,71 @@ static int dmabuf_refcount(struct dma_buf *dma_buf)
 	return -EINVAL;
 }
 
+static int sync_request(hyper_dmabuf_id_t hid, int dmabuf_ops)
+{
+	struct hyper_dmabuf_req *req;
+	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
+	int op[5];
+	int i;
+	int ret;
+
+	op[0] = hid.id;
+
+	for (i = 0; i < 3; i++)
+		op[i+1] = hid.rng_key[i];
+
+	op[4] = dmabuf_ops;
+
+	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
+
+	if (!req)
+		return -ENOMEM;
+
+	hyper_dmabuf_create_req(req, HYPER_DMABUF_OPS_TO_SOURCE, &op[0]);
+
+	/* send request and wait for a response */
+	ret = bknd_ops->send_req(HYPER_DMABUF_DOM_ID(hid), req,
+				 WAIT_AFTER_SYNC_REQ);
+
+	if (ret < 0) {
+		dev_dbg(hy_drv_priv->dev,
+			"dmabuf sync request failed:%d\n", req->op[4]);
+	}
+
+	kfree(req);
+
+	return ret;
+}
+
 static int hyper_dmabuf_ops_attach(struct dma_buf *dmabuf,
 				   struct device *dev,
 				   struct dma_buf_attachment *attach)
 {
-	return 0;
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!attach->dmabuf->priv)
+		return -EINVAL;
+
+	imported = (struct imported_sgt_info *)attach->dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_ATTACH);
+
+	return ret;
 }
 
 static void hyper_dmabuf_ops_detach(struct dma_buf *dmabuf,
 				    struct dma_buf_attachment *attach)
 {
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!attach->dmabuf->priv)
+		return;
+
+	imported = (struct imported_sgt_info *)attach->dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_DETACH);
 }
 
 static struct sg_table *hyper_dmabuf_ops_map(
@@ -70,6 +125,7 @@ static struct sg_table *hyper_dmabuf_ops_map(
 	struct sg_table *st;
 	struct imported_sgt_info *imported;
 	struct pages_info *pg_info;
+	int ret;
 
 	if (!attachment->dmabuf->priv)
 		return NULL;
@@ -91,6 +147,8 @@ static struct sg_table *hyper_dmabuf_ops_map(
 	if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir))
 		goto err_free_sg;
 
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_MAP);
+
 	kfree(pg_info->pgs);
 	kfree(pg_info);
 
@@ -113,6 +171,7 @@ static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
 				   enum dma_data_direction dir)
 {
 	struct imported_sgt_info *imported;
+	int ret;
 
 	if (!attachment->dmabuf->priv)
 		return;
@@ -123,12 +182,15 @@ static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
 
 	sg_free_table(sg);
 	kfree(sg);
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_UNMAP);
 }
 
 static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 {
 	struct imported_sgt_info *imported;
 	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
+	int ret;
 	int finish;
 
 	if (!dma_buf->priv)
@@ -155,6 +217,8 @@ static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 	finish = imported && !imported->valid &&
 		 !imported->importers;
 
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_RELEASE);
+
 	/*
 	 * Check if buffer is still valid and if not remove it
 	 * from imported list. That has to be done after sending
@@ -169,18 +233,48 @@ static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
 static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf,
 					     enum dma_data_direction dir)
 {
-	return 0;
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!dmabuf->priv)
+		return -EINVAL;
+
+	imported = (struct imported_sgt_info *)dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS);
+
+	return ret;
 }
 
 static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf,
 					   enum dma_data_direction dir)
 {
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!dmabuf->priv)
+		return -EINVAL;
+
+	imported = (struct imported_sgt_info *)dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_END_CPU_ACCESS);
+
 	return 0;
 }
 
 static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf,
 					  unsigned long pgnum)
 {
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!dmabuf->priv)
+		return NULL;
+
+	imported = (struct imported_sgt_info *)dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KMAP_ATOMIC);
+
 	/* TODO: NULL for now. Need to return the addr of mapped region */
 	return NULL;
 }
@@ -188,10 +282,29 @@ static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf,
 static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf,
 					   unsigned long pgnum, void *vaddr)
 {
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!dmabuf->priv)
+		return;
+
+	imported = (struct imported_sgt_info *)dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KUNMAP_ATOMIC);
 }
 
 static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
 {
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!dmabuf->priv)
+		return NULL;
+
+	imported = (struct imported_sgt_info *)dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KMAP);
+
+	/* TODO: NULL for now. Need to return the addr of the mapped region */
 	return NULL;
 }
@@ -199,21 +312,59 @@ static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
 static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum,
 				    void *vaddr)
 {
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!dmabuf->priv)
+		return;
+
+	imported = (struct imported_sgt_info *)dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_KUNMAP);
 }
 
 static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf,
 				 struct vm_area_struct *vma)
 {
-	return 0;
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!dmabuf->priv)
+		return -EINVAL;
+
+	imported = (struct imported_sgt_info *)dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_MMAP);
+
+	return ret;
 }
 
 static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
 {
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!dmabuf->priv)
+		return NULL;
+
+	imported = (struct imported_sgt_info *)dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_VMAP);
+
 	return NULL;
 }
 
 static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
 {
+	struct imported_sgt_info *imported;
+	int ret;
+
+	if (!dmabuf->priv)
+		return;
+
+	imported = (struct imported_sgt_info *)dmabuf->priv;
+
+	ret = sync_request(imported->hid, HYPER_DMABUF_OPS_VUNMAP);
 }
 
 static const struct dma_buf_ops hyper_dmabuf_ops = {
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.c
new file mode 100644
index 000000000000..55c2c1828859
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.c
@@ -0,0 +1,324 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * SPDX-License-Identifier: (MIT OR GPL-2.0)
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/dma-buf.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_msg.h"
+#include "hyper_dmabuf_id.h"
+#include "hyper_dmabuf_sgl_proc.h"
+
+/* Whenever importer does dma operations from remote domain,
+ * a notification is sent to the exporter so that exporter
+ * issues equivalent dma operation on the original dma buf
+ * for indirect synchronization via shadow operations.
+ *
+ * All ptrs and references (e.g struct sg_table*,
+ * struct dma_buf_attachment) created via these operations on
+ * exporter's side are kept in stack (implemented as circular
+ * linked-lists) separately so that those can be re-referenced
+ * later when unmapping operations are invoked to free those.
+ *
+ * The very first element at the bottom of each stack is
+ * what was created when the initial export was issued, so it
+ * should not be modified or released by this function.
+ */
+int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops)
+{
+	struct exported_sgt_info *exported;
+	struct sgt_list *sgtl;
+	struct attachment_list *attachl;
+	struct kmap_vaddr_list *va_kmapl;
+	struct vmap_vaddr_list *va_vmapl;
+	int ret;
+
+	/* find a corresponding SGT for the id */
+	exported = hyper_dmabuf_find_exported(hid);
+
+	if (!exported) {
+		dev_err(hy_drv_priv->dev,
+			"dmabuf remote sync::can't find exported list\n");
+		return -ENOENT;
+	}
+
+	switch (ops) {
+	case HYPER_DMABUF_OPS_ATTACH:
+		attachl = kcalloc(1, sizeof(*attachl), GFP_KERNEL);
+
+		if (!attachl)
+			return -ENOMEM;
+
+		attachl->attach = dma_buf_attach(exported->dma_buf,
+						 hy_drv_priv->dev);
+
+		/* dma_buf_attach() returns ERR_PTR() on failure, not NULL */
+		if (IS_ERR(attachl->attach)) {
+			ret = PTR_ERR(attachl->attach);
+			kfree(attachl);
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_ATTACH\n");
+			return ret;
+		}
+
+		list_add(&attachl->list, &exported->active_attached->list);
+		break;
+
+	case HYPER_DMABUF_OPS_DETACH:
+		if (list_empty(&exported->active_attached->list)) {
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_DETACH\n");
+			dev_err(hy_drv_priv->dev,
+				"no more dmabuf attachment left to be detached\n");
+			return -EFAULT;
+		}
+
+		attachl = list_first_entry(&exported->active_attached->list,
+					   struct attachment_list, list);
+
+		dma_buf_detach(exported->dma_buf, attachl->attach);
+		list_del(&attachl->list);
+		kfree(attachl);
+		break;
+
+	case HYPER_DMABUF_OPS_MAP:
+		if (list_empty(&exported->active_attached->list)) {
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_MAP\n");
+			dev_err(hy_drv_priv->dev,
+				"no more dmabuf attachment left to be mapped\n");
+			return -EFAULT;
+		}
+
+		attachl = list_first_entry(&exported->active_attached->list,
+					   struct attachment_list, list);
+
+		sgtl = kcalloc(1, sizeof(*sgtl), GFP_KERNEL);
+
+		if (!sgtl)
+			return -ENOMEM;
+
+		sgtl->sgt = dma_buf_map_attachment(attachl->attach,
+						   DMA_BIDIRECTIONAL);
+		/* dma_buf_map_attachment() returns ERR_PTR() on failure */
+		if (IS_ERR(sgtl->sgt)) {
+			ret = PTR_ERR(sgtl->sgt);
+			kfree(sgtl);
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_MAP\n");
+			return ret;
+		}
+		list_add(&sgtl->list, &exported->active_sgts->list);
+		break;
+
+	case HYPER_DMABUF_OPS_UNMAP:
+		if (list_empty(&exported->active_sgts->list) ||
+		    list_empty(&exported->active_attached->list)) {
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_UNMAP\n");
+			dev_err(hy_drv_priv->dev,
+				"no SGT or attach left to be unmapped\n");
+			return -EFAULT;
+		}
+
+		attachl = list_first_entry(&exported->active_attached->list,
+					   struct attachment_list, list);
+		sgtl = list_first_entry(&exported->active_sgts->list,
+					struct sgt_list, list);
+
+		dma_buf_unmap_attachment(attachl->attach, sgtl->sgt,
+					 DMA_BIDIRECTIONAL);
+		list_del(&sgtl->list);
+		kfree(sgtl);
+		break;
+
+	case HYPER_DMABUF_OPS_RELEASE:
+		dev_dbg(hy_drv_priv->dev,
+			"{id:%d key:%d %d %d} released, ref left: %d\n",
+			 exported->hid.id, exported->hid.rng_key[0],
+			 exported->hid.rng_key[1], exported->hid.rng_key[2],
+			 exported->active - 1);
+
+		exported->active--;
+
+		/* If there are still importers, just break; if not,
+		 * continue with the final cleanup
+		 */
+		if (exported->active)
+			break;
+
+		/* Importer just released buffer fd, check if there is
+		 * any other importer still using it.
+		 * If not and buffer was unexported, clean up shared
+		 * data and remove that buffer.
+		 */
+		dev_dbg(hy_drv_priv->dev,
+			"Buffer {id:%d key:%d %d %d} final released\n",
+			exported->hid.id, exported->hid.rng_key[0],
+			exported->hid.rng_key[1], exported->hid.rng_key[2]);
+
+		if (!exported->valid && !exported->active &&
+		    !exported->unexport_sched) {
+			hyper_dmabuf_cleanup_sgt_info(exported, false);
+			hyper_dmabuf_remove_exported(hid);
+			kfree(exported);
+			/* store hyper_dmabuf_id in the list for reuse */
+			hyper_dmabuf_store_hid(hid);
+		}
+
+		break;
+
+	case HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS:
+		ret = dma_buf_begin_cpu_access(exported->dma_buf,
+					       DMA_BIDIRECTIONAL);
+		if (ret) {
+			dev_err(hy_drv_priv->dev,
+				"HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS\n");
+			return ret;
+		}
+		break;
+
+	case HYPER_DMABUF_OPS_END_CPU_ACCESS:
+		ret = dma_buf_end_cpu_access(exported->dma_buf,
+					     DMA_BIDIRECTIONAL);
+		if (ret) {
+			dev_err(hy_drv_priv->dev,
+				"HYPER_DMABUF_OPS_END_CPU_ACCESS\n");
+			return ret;
+		}
+		break;
+
+	case HYPER_DMABUF_OPS_KMAP_ATOMIC:
+	case HYPER_DMABUF_OPS_KMAP:
+		va_kmapl = kcalloc(1, sizeof(*va_kmapl), GFP_KERNEL);
+		if (!va_kmapl)
+			return -ENOMEM;
+
+		/* dummy kmapping of 1 page */
+		if (ops == HYPER_DMABUF_OPS_KMAP_ATOMIC)
+			va_kmapl->vaddr = dma_buf_kmap_atomic(
+						exported->dma_buf, 1);
+		else
+			va_kmapl->vaddr = dma_buf_kmap(
+						exported->dma_buf, 1);
+
+		if (!va_kmapl->vaddr) {
+			kfree(va_kmapl);
+			dev_err(hy_drv_priv->dev,
+				"HYPER_DMABUF_OPS_KMAP(_ATOMIC)\n");
+			return -ENOMEM;
+		}
+		list_add(&va_kmapl->list, &exported->va_kmapped->list);
+		break;
+
+	case HYPER_DMABUF_OPS_KUNMAP_ATOMIC:
+	case HYPER_DMABUF_OPS_KUNMAP:
+		if (list_empty(&exported->va_kmapped->list)) {
+			dev_err(hy_drv_priv->dev,
+				"HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
+			dev_err(hy_drv_priv->dev,
+				"no more dmabuf VA to be freed\n");
+			return -EFAULT;
+		}
+
+		va_kmapl = list_first_entry(&exported->va_kmapped->list,
+					    struct kmap_vaddr_list, list);
+		if (!va_kmapl->vaddr) {
+			dev_err(hy_drv_priv->dev,
+				"HYPER_DMABUF_OPS_KUNMAP(_ATOMIC)\n");
+			return -EINVAL;
+		}
+
+		/* unmapping 1 page */
+		if (ops == HYPER_DMABUF_OPS_KUNMAP_ATOMIC)
+			dma_buf_kunmap_atomic(exported->dma_buf,
+					      1, va_kmapl->vaddr);
+		else
+			dma_buf_kunmap(exported->dma_buf,
+				       1, va_kmapl->vaddr);
+
+		list_del(&va_kmapl->list);
+		kfree(va_kmapl);
+		break;
+
+	case HYPER_DMABUF_OPS_MMAP:
+		/* currently not supported: looking for a way to create
+		 * a dummy vma
+		 */
+		dev_warn(hy_drv_priv->dev,
+			 "remote sync::synchronized mmap is not supported\n");
+		break;
+
+	case HYPER_DMABUF_OPS_VMAP:
+		va_vmapl = kcalloc(1, sizeof(*va_vmapl), GFP_KERNEL);
+
+		if (!va_vmapl)
+			return -ENOMEM;
+
+		/* dummy vmapping */
+		va_vmapl->vaddr = dma_buf_vmap(exported->dma_buf);
+
+		if (!va_vmapl->vaddr) {
+			kfree(va_vmapl);
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_VMAP\n");
+			return -ENOMEM;
+		}
+		list_add(&va_vmapl->list, &exported->va_vmapped->list);
+		break;
+
+	case HYPER_DMABUF_OPS_VUNMAP:
+		if (list_empty(&exported->va_vmapped->list)) {
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_VUNMAP\n");
+			dev_err(hy_drv_priv->dev,
+				"no more dmabuf VA to be freed\n");
+			return -EFAULT;
+		}
+		va_vmapl = list_first_entry(&exported->va_vmapped->list,
+					struct vmap_vaddr_list, list);
+		if (!va_vmapl || va_vmapl->vaddr == NULL) {
+			dev_err(hy_drv_priv->dev,
+				"remote sync::HYPER_DMABUF_OPS_VUNMAP\n");
+			return -EFAULT;
+		}
+
+		dma_buf_vunmap(exported->dma_buf, va_vmapl->vaddr);
+
+		list_del(&va_vmapl->list);
+		kfree(va_vmapl);
+		break;
+
+	default:
+		/* program should not get here */
+		break;
+	}
+
+	return 0;
+}
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.h
new file mode 100644
index 000000000000..a659c83c8f27
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.h
@@ -0,0 +1,32 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * SPDX-License-Identifier: (MIT OR GPL-2.0)
+ *
+ */
+
+#ifndef __HYPER_DMABUF_REMOTE_SYNC_H__
+#define __HYPER_DMABUF_REMOTE_SYNC_H__
+
+int hyper_dmabuf_remote_sync(hyper_dmabuf_id_t hid, int ops);
+
+#endif // __HYPER_DMABUF_REMOTE_SYNC_H__
-- 
2.16.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [RFC PATCH v2 7/9] hyper_dmabuf: query ioctl for retreiving various hyper_DMABUF info
  2018-02-14  1:49 [RFC PATCH v2 0/9] hyper_dmabuf: Hyper_DMABUF driver Dongwon Kim
                   ` (5 preceding siblings ...)
  2018-02-14  1:50 ` [RFC PATCH v2 6/9] hyper_dmabuf: hyper_DMABUF synchronization across VM Dongwon Kim
@ 2018-02-14  1:50 ` Dongwon Kim
  2018-02-14  1:50 ` [RFC PATCH v2 8/9] hyper_dmabuf: event-polling mechanism for detecting a new hyper_DMABUF Dongwon Kim
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Dongwon Kim @ 2018-02-14  1:50 UTC (permalink / raw)
  To: linux-kernel, linaro-mm-sig, xen-devel
  Cc: dri-devel, dongwon.kim, mateuszx.potrola, sumit.semwal

Add a new ioctl, "IOCTL_HYPER_DMABUF_QUERY", for userspace to
retrieve various information about a hyper_DMABUF currently being
shared across VMs.

Supported query items are as follows:

enum hyper_dmabuf_query {
        HYPER_DMABUF_QUERY_TYPE = 0x10,
        HYPER_DMABUF_QUERY_EXPORTER,
        HYPER_DMABUF_QUERY_IMPORTER,
        HYPER_DMABUF_QUERY_SIZE,
        HYPER_DMABUF_QUERY_BUSY,
        HYPER_DMABUF_QUERY_UNEXPORTED,
        HYPER_DMABUF_QUERY_DELAYED_UNEXPORTED,
        HYPER_DMABUF_QUERY_PRIV_INFO_SIZE,
        HYPER_DMABUF_QUERY_PRIV_INFO,
};

A query IOCTL call with each of the query items above returns:

HYPER_DMABUF_QUERY_TYPE - type of hyper_DMABUF (EXPORTED or IMPORTED)
from the current VM's perspective.

HYPER_DMABUF_QUERY_EXPORTER - ID of the exporting VM

HYPER_DMABUF_QUERY_IMPORTER - ID of the importing VM

HYPER_DMABUF_QUERY_SIZE - size of the shared buffer in bytes

HYPER_DMABUF_QUERY_BUSY - true if the hyper_DMABUF is being actively used
(e.g. attached and mapped by the end-consumer)

HYPER_DMABUF_QUERY_UNEXPORTED - true if the hyper_DMABUF has been
unexported on the exporting VM's side.

HYPER_DMABUF_QUERY_DELAYED_UNEXPORTED - true if the hyper_DMABUF is
scheduled to be unexported (still valid but will be unexported soon)

HYPER_DMABUF_QUERY_PRIV_INFO_SIZE - size of the private information
(given by the user application on the exporter's side) attached to the
hyper_DMABUF

HYPER_DMABUF_QUERY_PRIV_INFO - private information attached to the
hyper_DMABUF
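
As an illustration, a minimal userspace sketch of the two-step private
data query is shown below. This is only a sketch: how the device fd and
the hyper_dmabuf_id are obtained beforehand is assumed, not defined by
this patch.

#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/hyper_dmabuf.h>

int read_priv_info(int fd, hyper_dmabuf_id_t hid)
{
	struct ioctl_hyper_dmabuf_query query = {0};
	char *buf;

	/* step 1: ask for the size of the attached private data */
	query.hid = hid;
	query.item = HYPER_DMABUF_QUERY_PRIV_INFO_SIZE;
	if (ioctl(fd, IOCTL_HYPER_DMABUF_QUERY, &query) < 0)
		return -1;

	if (query.info == 0)
		return 0;

	buf = malloc(query.info);
	if (!buf)
		return -1;

	/* step 2: for PRIV_INFO, 'info' carries a userspace pointer on
	 * input and the driver copies the private data into it
	 */
	query.item = HYPER_DMABUF_QUERY_PRIV_INFO;
	query.info = (unsigned long)buf;
	if (ioctl(fd, IOCTL_HYPER_DMABUF_QUERY, &query) < 0) {
		free(buf);
		return -1;
	}

	/* ... consume buf ... */
	free(buf);
	return 0;
}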

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
Signed-off-by: Mateusz Polrola <mateuszx.potrola@intel.com>
---
 drivers/dma-buf/hyper_dmabuf/Makefile             |   1 +
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c |  49 +++++-
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.c | 174 ++++++++++++++++++++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.h |  36 +++++
 include/uapi/linux/hyper_dmabuf.h                 |  32 ++++
 5 files changed, 291 insertions(+), 1 deletion(-)
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.h

diff --git a/drivers/dma-buf/hyper_dmabuf/Makefile b/drivers/dma-buf/hyper_dmabuf/Makefile
index 702696f29215..578a669a0d3e 100644
--- a/drivers/dma-buf/hyper_dmabuf/Makefile
+++ b/drivers/dma-buf/hyper_dmabuf/Makefile
@@ -10,6 +10,7 @@ ifneq ($(KERNELRELEASE),)
 				 hyper_dmabuf_msg.o \
 				 hyper_dmabuf_id.o \
 				 hyper_dmabuf_remote_sync.o \
+				 hyper_dmabuf_query.o \
 
 ifeq ($(CONFIG_HYPER_DMABUF_XEN), y)
 	$(TARGET_MODULE)-objs += backends/xen/hyper_dmabuf_xen_comm.o \
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c
index 168ccf98f710..e90e59cd0568 100644
--- a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c
@@ -41,6 +41,7 @@
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_sgl_proc.h"
 #include "hyper_dmabuf_ops.h"
+#include "hyper_dmabuf_query.h"
 
 static int hyper_dmabuf_tx_ch_setup_ioctl(struct file *filp, void *data)
 {
@@ -543,7 +544,6 @@ static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
 			hyper_dmabuf_create_req(req,
 						HYPER_DMABUF_EXPORT_FD_FAILED,
 						&op[0]);
-
 			bknd_ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid),
 					   req, false);
 			kfree(req);
@@ -682,6 +682,51 @@ int hyper_dmabuf_unexport_ioctl(struct file *filp, void *data)
 	return 0;
 }
 
+static int hyper_dmabuf_query_ioctl(struct file *filp, void *data)
+{
+	struct ioctl_hyper_dmabuf_query *query_attr =
+			(struct ioctl_hyper_dmabuf_query *)data;
+	struct exported_sgt_info *exported = NULL;
+	struct imported_sgt_info *imported = NULL;
+	int ret = 0;
+
+	if (HYPER_DMABUF_DOM_ID(query_attr->hid) == hy_drv_priv->domid) {
+		/* query for exported dmabuf */
+		exported = hyper_dmabuf_find_exported(query_attr->hid);
+		if (exported) {
+			ret = hyper_dmabuf_query_exported(exported,
+							  query_attr->item,
+							  &query_attr->info);
+		} else {
+			dev_err(hy_drv_priv->dev,
+				"hid {id:%d key:%d %d %d} not in exp list\n",
+				query_attr->hid.id,
+				query_attr->hid.rng_key[0],
+				query_attr->hid.rng_key[1],
+				query_attr->hid.rng_key[2]);
+			return -ENOENT;
+		}
+	} else {
+		/* query for imported dmabuf */
+		imported = hyper_dmabuf_find_imported(query_attr->hid);
+		if (imported) {
+			ret = hyper_dmabuf_query_imported(imported,
+							  query_attr->item,
+							  &query_attr->info);
+		} else {
+			dev_err(hy_drv_priv->dev,
+				"hid {id:%d key:%d %d %d} not in imp list\n",
+				query_attr->hid.id,
+				query_attr->hid.rng_key[0],
+				query_attr->hid.rng_key[1],
+				query_attr->hid.rng_key[2]);
+			return -ENOENT;
+		}
+	}
+
+	return ret;
+}
+
 const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_TX_CH_SETUP,
 			       hyper_dmabuf_tx_ch_setup_ioctl, 0),
@@ -693,6 +738,8 @@ const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
 			       hyper_dmabuf_export_fd_ioctl, 0),
 	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_UNEXPORT,
 			       hyper_dmabuf_unexport_ioctl, 0),
+	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_QUERY,
+			       hyper_dmabuf_query_ioctl, 0),
 };
 
 long hyper_dmabuf_ioctl(struct file *filp,
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.c
new file mode 100644
index 000000000000..edf92318d4cd
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.c
@@ -0,0 +1,174 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * SPDX-License-Identifier: (MIT OR GPL-2.0)
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/dma-buf.h>
+#include <linux/uaccess.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_id.h"
+
+#define HYPER_DMABUF_SIZE(nents, first_offset, last_len) \
+	((nents)*PAGE_SIZE - (first_offset) - PAGE_SIZE + (last_len))
+
+int hyper_dmabuf_query_exported(struct exported_sgt_info *exported,
+				int query, unsigned long *info)
+{
+	switch (query) {
+	case HYPER_DMABUF_QUERY_TYPE:
+		*info = EXPORTED;
+		break;
+
+	/* exporting domain of this specific dmabuf */
+	case HYPER_DMABUF_QUERY_EXPORTER:
+		*info = HYPER_DMABUF_DOM_ID(exported->hid);
+		break;
+
+	/* importing domain of this specific dmabuf */
+	case HYPER_DMABUF_QUERY_IMPORTER:
+		*info = exported->rdomid;
+		break;
+
+	/* size of dmabuf in byte */
+	case HYPER_DMABUF_QUERY_SIZE:
+		*info = exported->dma_buf->size;
+		break;
+
+	/* whether the buffer is used by importer */
+	case HYPER_DMABUF_QUERY_BUSY:
+		*info = (exported->active > 0);
+		break;
+
+	/* whether the buffer is unexported */
+	case HYPER_DMABUF_QUERY_UNEXPORTED:
+		*info = !exported->valid;
+		break;
+
+	/* whether the buffer is scheduled to be unexported */
+	case HYPER_DMABUF_QUERY_DELAYED_UNEXPORTED:
+		*info = exported->unexport_sched;
+		break;
+
+	/* size of private info attached to buffer */
+	case HYPER_DMABUF_QUERY_PRIV_INFO_SIZE:
+		*info = exported->sz_priv;
+		break;
+
+	/* copy private info attached to buffer */
+	case HYPER_DMABUF_QUERY_PRIV_INFO:
+		if (exported->sz_priv > 0) {
+			int n;
+
+			n = copy_to_user((void __user *) *info,
+					exported->priv,
+					exported->sz_priv);
+			if (n != 0)
+				return -EFAULT;
+		}
+		break;
+
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+
+int hyper_dmabuf_query_imported(struct imported_sgt_info *imported,
+				int query, unsigned long *info)
+{
+	switch (query) {
+	case HYPER_DMABUF_QUERY_TYPE:
+		*info = IMPORTED;
+		break;
+
+	/* exporting domain of this specific dmabuf */
+	case HYPER_DMABUF_QUERY_EXPORTER:
+		*info = HYPER_DMABUF_DOM_ID(imported->hid);
+		break;
+
+	/* importing domain of this specific dmabuf */
+	case HYPER_DMABUF_QUERY_IMPORTER:
+		*info = hy_drv_priv->domid;
+		break;
+
+	/* size of dmabuf in byte */
+	case HYPER_DMABUF_QUERY_SIZE:
+		if (imported->dma_buf) {
+			/* if a local dma_buf has been created
+			 * (i.e. it has ever been mapped), retrieve
+			 * the size directly from struct dma_buf
+			 */
+			*info = imported->dma_buf->size;
+		} else {
+			/* calculate it from the given nents,
+			 * frst_ofst and last_len
+			 */
+			*info = HYPER_DMABUF_SIZE(imported->nents,
+						  imported->frst_ofst,
+						  imported->last_len);
+		}
+		break;
+
+	/* whether the buffer is used or not */
+	case HYPER_DMABUF_QUERY_BUSY:
+		/* checks if it's used by importer */
+		*info = (imported->importers > 0);
+		break;
+
+	/* whether the buffer is unexported */
+	case HYPER_DMABUF_QUERY_UNEXPORTED:
+		*info = !imported->valid;
+		break;
+
+	/* size of private info attached to buffer */
+	case HYPER_DMABUF_QUERY_PRIV_INFO_SIZE:
+		*info = imported->sz_priv;
+		break;
+
+	/* copy private info attached to buffer */
+	case HYPER_DMABUF_QUERY_PRIV_INFO:
+		if (imported->sz_priv > 0) {
+			int n;
+
+			n = copy_to_user((void __user *)*info,
+					imported->priv,
+					imported->sz_priv);
+			if (n != 0)
+				return -EFAULT;
+		}
+		break;
+
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.h
new file mode 100644
index 000000000000..b9687db7e7d5
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.h
@@ -0,0 +1,36 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * SPDX-License-Identifier: (MIT OR GPL-2.0)
+ *
+ */
+
+#ifndef __HYPER_DMABUF_QUERY_H__
+#define __HYPER_DMABUF_QUERY_H__
+
+int hyper_dmabuf_query_imported(struct imported_sgt_info *imported,
+				int query, unsigned long *info);
+
+int hyper_dmabuf_query_exported(struct exported_sgt_info *exported,
+				int query, unsigned long *info);
+
+#endif // __HYPER_DMABUF_QUERY_H__
diff --git a/include/uapi/linux/hyper_dmabuf.h b/include/uapi/linux/hyper_dmabuf.h
index 36794a4af811..4f8e8ac0375b 100644
--- a/include/uapi/linux/hyper_dmabuf.h
+++ b/include/uapi/linux/hyper_dmabuf.h
@@ -88,4 +88,36 @@ struct ioctl_hyper_dmabuf_unexport {
 	int status;
 };
 
+#define IOCTL_HYPER_DMABUF_QUERY \
+_IOC(_IOC_NONE, 'G', 5, sizeof(struct ioctl_hyper_dmabuf_query))
+struct ioctl_hyper_dmabuf_query {
+	/* IN parameters */
+	/* hyper dmabuf id to be queried */
+	hyper_dmabuf_id_t hid;
+	/* item to be queried */
+	int item;
+	/* OUT parameters */
+	/* Value of queried item */
+	unsigned long info;
+};
+
+/* DMABUF query */
+
+enum hyper_dmabuf_query {
+	HYPER_DMABUF_QUERY_TYPE = 0x10,
+	HYPER_DMABUF_QUERY_EXPORTER,
+	HYPER_DMABUF_QUERY_IMPORTER,
+	HYPER_DMABUF_QUERY_SIZE,
+	HYPER_DMABUF_QUERY_BUSY,
+	HYPER_DMABUF_QUERY_UNEXPORTED,
+	HYPER_DMABUF_QUERY_DELAYED_UNEXPORTED,
+	HYPER_DMABUF_QUERY_PRIV_INFO_SIZE,
+	HYPER_DMABUF_QUERY_PRIV_INFO,
+};
+
+enum hyper_dmabuf_status {
+	EXPORTED = 0x01,
+	IMPORTED,
+};
+
 #endif //__LINUX_PUBLIC_HYPER_DMABUF_H__
-- 
2.16.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [RFC PATCH v2 8/9] hyper_dmabuf: event-polling mechanism for detecting a new hyper_DMABUF
  2018-02-14  1:49 [RFC PATCH v2 0/9] hyper_dmabuf: Hyper_DMABUF driver Dongwon Kim
                   ` (6 preceding siblings ...)
  2018-02-14  1:50 ` [RFC PATCH v2 7/9] hyper_dmabuf: query ioctl for retreiving various hyper_DMABUF info Dongwon Kim
@ 2018-02-14  1:50 ` Dongwon Kim
  2018-02-14  1:50 ` [RFC PATCH v2 9/9] hyper_dmabuf: threaded interrupt in Xen-backend Dongwon Kim
  2018-02-19 17:01 ` [RFC PATCH v2 0/9] hyper_dmabuf: Hyper_DMABUF driver Daniel Vetter
  9 siblings, 0 replies; 21+ messages in thread
From: Dongwon Kim @ 2018-02-14  1:50 UTC (permalink / raw)
  To: linux-kernel, linaro-mm-sig, xen-devel
  Cc: dri-devel, dongwon.kim, mateuszx.potrola, sumit.semwal

Add a new method, based on polling, for an importing VM to learn
about a new hyper_DMABUF exported to it.

For this, userspace can now poll the device node to check if there is
a new event, which is generated whenever a new hyper_DMABUF becomes
available in the importing VM (i.e. has just been exported).

A poll function call was added to the device driver interface for this
new functionality. Event generation was also implemented in all other
relevant parts of the driver.

This "event-polling" mechanism is an optional feature and can be
enabled by setting a kernel config option, "HYPER_DMABUF_EVENT_GEN".
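
A minimal userspace sketch of the polling flow follows (a sketch only;
note that, per the read handler in this patch, the reader must have
CAP_DAC_OVERRIDE, and how the device fd is opened is assumed):

#include <poll.h>
#include <stdio.h>
#include <unistd.h>
#include <linux/hyper_dmabuf.h>

int wait_for_import(int fd)
{
	struct pollfd pfd = { .fd = fd, .events = POLLIN };
	char buf[4096];
	struct hyper_dmabuf_event_hdr *hdr;
	ssize_t len;

	/* block until the driver queues at least one event */
	if (poll(&pfd, 1, -1) <= 0)
		return -1;

	/* each event is a hyper_dmabuf_event_hdr followed by
	 * hdr->size bytes of private data
	 */
	len = read(fd, buf, sizeof(buf));
	if (len < (ssize_t)sizeof(*hdr))
		return -1;

	hdr = (struct hyper_dmabuf_event_hdr *)buf;
	printf("new hyper_DMABUF {id:%d}, %d bytes of priv data\n",
	       hdr->hid.id, hdr->size);
	return 0;
}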

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
Signed-off-by: Mateusz Polrola <mateuszx.potrola@intel.com>
---
 drivers/dma-buf/hyper_dmabuf/Kconfig              |  20 +++
 drivers/dma-buf/hyper_dmabuf/Makefile             |   1 +
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c   | 146 ++++++++++++++++++++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h   |  11 ++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.c | 122 ++++++++++++++++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.h |  38 ++++++
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c  |   1 +
 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c   |  11 ++
 include/uapi/linux/hyper_dmabuf.h                 |  11 ++
 9 files changed, 361 insertions(+)
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.c
 create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.h

diff --git a/drivers/dma-buf/hyper_dmabuf/Kconfig b/drivers/dma-buf/hyper_dmabuf/Kconfig
index 68f3d6ce2c1f..92510731af25 100644
--- a/drivers/dma-buf/hyper_dmabuf/Kconfig
+++ b/drivers/dma-buf/hyper_dmabuf/Kconfig
@@ -20,6 +20,16 @@ config HYPER_DMABUF_SYSFS
 
 	  The location of sysfs is under "...."
 
+config HYPER_DMABUF_EVENT_GEN
+        bool "Enable event-generation and polling operation"
+        default n
+        depends on HYPER_DMABUF
+        help
+          With this config enabled, the hyper_dmabuf driver on the importer
+          side generates events and queues them up in the event list whenever
+          a new shared DMA-BUF is available. Events in the list can be
+          retrieved by the read operation.
+
 config HYPER_DMABUF_XEN
         bool "Configure hyper_dmabuf for XEN hypervisor"
         default y
@@ -27,4 +37,14 @@ config HYPER_DMABUF_XEN
         help
           Enabling Hyper_DMABUF Backend for XEN hypervisor
 
+config HYPER_DMABUF_XEN_AUTO_RX_CH_ADD
+        bool "Enable automatic rx-ch add with 10 secs interval"
+        default y
+        depends on HYPER_DMABUF && HYPER_DMABUF_XEN
+        help
+          If enabled, the driver reads a node in xenstore every 10 seconds
+          to check whether any tx comm ch has been configured by another
+          domain, then automatically initializes a matching rx comm ch for
+          any existing tx comm chs.
+
 endmenu
diff --git a/drivers/dma-buf/hyper_dmabuf/Makefile b/drivers/dma-buf/hyper_dmabuf/Makefile
index 578a669a0d3e..f573dd5c4054 100644
--- a/drivers/dma-buf/hyper_dmabuf/Makefile
+++ b/drivers/dma-buf/hyper_dmabuf/Makefile
@@ -11,6 +11,7 @@ ifneq ($(KERNELRELEASE),)
 				 hyper_dmabuf_id.o \
 				 hyper_dmabuf_remote_sync.o \
 				 hyper_dmabuf_query.o \
+				 hyper_dmabuf_event.o \
 
 ifeq ($(CONFIG_HYPER_DMABUF_XEN), y)
 	$(TARGET_MODULE)-objs += backends/xen/hyper_dmabuf_xen_comm.o \
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c
index 3320f9dcc769..087f091ccae9 100644
--- a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c
@@ -41,6 +41,7 @@
 #include "hyper_dmabuf_ioctl.h"
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_id.h"
+#include "hyper_dmabuf_event.h"
 
 #ifdef CONFIG_HYPER_DMABUF_XEN
 #include "backends/xen/hyper_dmabuf_xen_drv.h"
@@ -91,10 +92,138 @@ static int hyper_dmabuf_release(struct inode *inode, struct file *filp)
 	return 0;
 }
 
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
+
+static unsigned int hyper_dmabuf_event_poll(struct file *filp,
+				     struct poll_table_struct *wait)
+{
+	poll_wait(filp, &hy_drv_priv->event_wait, wait);
+
+	if (!list_empty(&hy_drv_priv->event_list))
+		return POLLIN | POLLRDNORM;
+
+	return 0;
+}
+
+static ssize_t hyper_dmabuf_event_read(struct file *filp, char __user *buffer,
+		size_t count, loff_t *offset)
+{
+	int ret;
+
+	/* only root can read events */
+	if (!capable(CAP_DAC_OVERRIDE)) {
+		dev_err(hy_drv_priv->dev,
+			"Only root can read events\n");
+		return -EPERM;
+	}
+
+	/* make sure user buffer can be written */
+	if (!access_ok(VERIFY_WRITE, buffer, count)) {
+		dev_err(hy_drv_priv->dev,
+			"User buffer can't be written.\n");
+		return -EINVAL;
+	}
+
+	ret = mutex_lock_interruptible(&hy_drv_priv->event_read_lock);
+	if (ret)
+		return ret;
+
+	while (1) {
+		struct hyper_dmabuf_event *e = NULL;
+
+		spin_lock_irq(&hy_drv_priv->event_lock);
+		if (!list_empty(&hy_drv_priv->event_list)) {
+			e = list_first_entry(&hy_drv_priv->event_list,
+					struct hyper_dmabuf_event, link);
+			list_del(&e->link);
+		}
+		spin_unlock_irq(&hy_drv_priv->event_lock);
+
+		if (!e) {
+			if (ret)
+				break;
+
+			if (filp->f_flags & O_NONBLOCK) {
+				ret = -EAGAIN;
+				break;
+			}
+
+			mutex_unlock(&hy_drv_priv->event_read_lock);
+			ret = wait_event_interruptible(hy_drv_priv->event_wait,
+				  !list_empty(&hy_drv_priv->event_list));
+
+			if (ret == 0)
+				ret = mutex_lock_interruptible(
+					&hy_drv_priv->event_read_lock);
+
+			if (ret)
+				return ret;
+		} else {
+			unsigned int length = (sizeof(e->event_data.hdr) +
+						      e->event_data.hdr.size);
+
+			if (length > count - ret) {
+put_back_event:
+				spin_lock_irq(&hy_drv_priv->event_lock);
+				list_add(&e->link, &hy_drv_priv->event_list);
+				spin_unlock_irq(&hy_drv_priv->event_lock);
+				break;
+			}
+
+			if (copy_to_user(buffer + ret, &e->event_data.hdr,
+					 sizeof(e->event_data.hdr))) {
+				if (ret == 0)
+					ret = -EFAULT;
+
+				goto put_back_event;
+			}
+
+			ret += sizeof(e->event_data.hdr);
+
+			if (copy_to_user(buffer + ret, e->event_data.data,
+					 e->event_data.hdr.size)) {
+				/* error while copying void *data */
+
+				struct hyper_dmabuf_event_hdr dummy_hdr = {0};
+
+				ret -= sizeof(e->event_data.hdr);
+
+				/* nullifying hdr of the event in user buffer */
+				if (copy_to_user(buffer + ret, &dummy_hdr,
+						 sizeof(dummy_hdr))) {
+					dev_err(hy_drv_priv->dev,
+						"failed to nullify invalid hdr already in userspace\n");
+				}
+
+				ret = -EFAULT;
+
+				goto put_back_event;
+			}
+
+			ret += e->event_data.hdr.size;
+			hy_drv_priv->pending--;
+			kfree(e);
+		}
+	}
+
+	mutex_unlock(&hy_drv_priv->event_read_lock);
+
+	return ret;
+}
+
+#endif
+
 static const struct file_operations hyper_dmabuf_driver_fops = {
 	.owner = THIS_MODULE,
 	.open = hyper_dmabuf_open,
 	.release = hyper_dmabuf_release,
+
+/* poll and read interfaces are needed only for event-polling */
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
+	.read = hyper_dmabuf_event_read,
+	.poll = hyper_dmabuf_event_poll,
+#endif
+
 	.unlocked_ioctl = hyper_dmabuf_ioctl,
 };
 
@@ -194,6 +323,18 @@ static int __init hyper_dmabuf_drv_init(void)
 	}
 #endif
 
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
+	mutex_init(&hy_drv_priv->event_read_lock);
+	spin_lock_init(&hy_drv_priv->event_lock);
+
+	/* Initialize event queue */
+	INIT_LIST_HEAD(&hy_drv_priv->event_list);
+	init_waitqueue_head(&hy_drv_priv->event_wait);
+
+	/* resetting number of pending events */
+	hy_drv_priv->pending = 0;
+#endif
+
 	if (hy_drv_priv->bknd_ops->init) {
 		ret = hy_drv_priv->bknd_ops->init();
 
@@ -250,6 +391,11 @@ static void hyper_dmabuf_drv_exit(void)
 	if (hy_drv_priv->id_queue)
 		hyper_dmabuf_free_hid_list();
 
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
+	/* clean up event queue */
+	hyper_dmabuf_events_release();
+#endif
+
 	mutex_unlock(&hy_drv_priv->lock);
 
 	dev_info(hy_drv_priv->dev,
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h
index 46119d762430..282a507b33bc 100644
--- a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h
@@ -32,6 +32,11 @@
 
 struct hyper_dmabuf_req;
 
+struct hyper_dmabuf_event {
+	struct hyper_dmabuf_event_data event_data;
+	struct list_head link;
+};
+
 struct hyper_dmabuf_private {
 	struct device *dev;
 
@@ -54,6 +59,12 @@ struct hyper_dmabuf_private {
 	/* flag that shows whether backend is initialized */
 	bool initialized;
 
+	wait_queue_head_t event_wait;
+	struct list_head event_list;
+
+	spinlock_t event_lock;
+	struct mutex event_read_lock;
+
 	/* # of pending events */
 	int pending;
 };
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.c
new file mode 100644
index 000000000000..942a1bb78755
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.c
@@ -0,0 +1,122 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Dongwon Kim <dongwon.kim@intel.com>
+ *    Mateusz Polrola <mateuszx.potrola@intel.com>
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include "hyper_dmabuf_drv.h"
+#include "hyper_dmabuf_struct.h"
+#include "hyper_dmabuf_list.h"
+#include "hyper_dmabuf_event.h"
+
+static void send_event(struct hyper_dmabuf_event *e)
+{
+	struct hyper_dmabuf_event *oldest;
+	unsigned long irqflags;
+
+	spin_lock_irqsave(&hy_drv_priv->event_lock, irqflags);
+
+	/* check the current number of events and, if it has hit the
+	 * max allowed, remove the oldest event in the list
+	 */
+	if (hy_drv_priv->pending > MAX_DEPTH_EVENT_QUEUE - 1) {
+		oldest = list_first_entry(&hy_drv_priv->event_list,
+				struct hyper_dmabuf_event, link);
+		list_del(&oldest->link);
+		hy_drv_priv->pending--;
+		kfree(oldest);
+	}
+
+	list_add_tail(&e->link,
+		      &hy_drv_priv->event_list);
+
+	hy_drv_priv->pending++;
+
+	wake_up_interruptible(&hy_drv_priv->event_wait);
+
+	spin_unlock_irqrestore(&hy_drv_priv->event_lock, irqflags);
+}
+
+void hyper_dmabuf_events_release(void)
+{
+	struct hyper_dmabuf_event *e, *et;
+	unsigned long irqflags;
+
+	spin_lock_irqsave(&hy_drv_priv->event_lock, irqflags);
+
+	list_for_each_entry_safe(e, et, &hy_drv_priv->event_list,
+				 link) {
+		list_del(&e->link);
+		kfree(e);
+		hy_drv_priv->pending--;
+	}
+
+	if (hy_drv_priv->pending) {
+		dev_err(hy_drv_priv->dev,
+			"possible leak on event_list\n");
+	}
+
+	spin_unlock_irqrestore(&hy_drv_priv->event_lock, irqflags);
+}
+
+int hyper_dmabuf_import_event(hyper_dmabuf_id_t hid)
+{
+	struct hyper_dmabuf_event *e;
+	struct imported_sgt_info *imported;
+
+	imported = hyper_dmabuf_find_imported(hid);
+
+	if (!imported) {
+		dev_err(hy_drv_priv->dev,
+			"can't find imported_sgt_info in the list\n");
+		return -EINVAL;
+	}
+
+	e = kzalloc(sizeof(*e), GFP_KERNEL);
+
+	if (!e)
+		return -ENOMEM;
+
+	e->event_data.hdr.event_type = HYPER_DMABUF_NEW_IMPORT;
+	e->event_data.hdr.hid = hid;
+	e->event_data.data = (void *)imported->priv;
+	e->event_data.hdr.size = imported->sz_priv;
+
+	send_event(e);
+
+	dev_dbg(hy_drv_priv->dev,
+		"event number = %d\n", hy_drv_priv->pending);
+
+	dev_dbg(hy_drv_priv->dev,
+		"generating events for {%d, %d, %d, %d}\n",
+		imported->hid.id, imported->hid.rng_key[0],
+		imported->hid.rng_key[1], imported->hid.rng_key[2]);
+
+	return 0;
+}
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.h
new file mode 100644
index 000000000000..8f61198e623c
--- /dev/null
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.h
@@ -0,0 +1,38 @@
+/*
+ * Copyright © 2018 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __HYPER_DMABUF_EVENT_H__
+#define __HYPER_DMABUF_EVENT_H__
+
+#define MAX_DEPTH_EVENT_QUEUE 32
+
+enum hyper_dmabuf_event_type {
+	HYPER_DMABUF_NEW_IMPORT = 0x10000,
+};
+
+void hyper_dmabuf_events_release(void);
+
+int hyper_dmabuf_import_event(hyper_dmabuf_id_t hid);
+
+#endif /* __HYPER_DMABUF_EVENT_H__ */
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c
index f2f65a8ec47f..30c3af65fcde 100644
--- a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c
@@ -36,6 +36,7 @@
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_list.h"
 #include "hyper_dmabuf_id.h"
+#include "hyper_dmabuf_event.h"
 
 DECLARE_HASHTABLE(hyper_dmabuf_hash_imported, MAX_ENTRY_IMPORTED);
 DECLARE_HASHTABLE(hyper_dmabuf_hash_exported, MAX_ENTRY_EXPORTED);
diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c
index 1592d5cfaa52..8f2cf7ea827d 100644
--- a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c
+++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c
@@ -35,6 +35,7 @@
 #include "hyper_dmabuf_drv.h"
 #include "hyper_dmabuf_msg.h"
 #include "hyper_dmabuf_remote_sync.h"
+#include "hyper_dmabuf_event.h"
 #include "hyper_dmabuf_list.h"
 
 struct cmd_process {
@@ -179,6 +180,11 @@ static void cmd_process_work(struct work_struct *work)
 			/* updating priv data */
 			memcpy(imported->priv, &req->op[9], req->op[8]);
 
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
+			/* generating import event */
+			hyper_dmabuf_import_event(imported->hid);
+#endif
+
 			break;
 		}
 
@@ -219,6 +225,11 @@ static void cmd_process_work(struct work_struct *work)
 		imported->valid = true;
 		hyper_dmabuf_register_imported(imported);
 
+#ifdef CONFIG_HYPER_DMABUF_EVENT_GEN
+		/* generating import event */
+		hyper_dmabuf_import_event(imported->hid);
+#endif
+
 		break;
 
 	case HYPER_DMABUF_OPS_TO_REMOTE:
diff --git a/include/uapi/linux/hyper_dmabuf.h b/include/uapi/linux/hyper_dmabuf.h
index 4f8e8ac0375b..dd73db9bf37d 100644
--- a/include/uapi/linux/hyper_dmabuf.h
+++ b/include/uapi/linux/hyper_dmabuf.h
@@ -32,6 +32,17 @@ typedef struct {
 	int rng_key[3]; /* 12bytes long random number */
 } hyper_dmabuf_id_t;
 
+struct hyper_dmabuf_event_hdr {
+	int event_type; /* one type only for now - new import */
+	hyper_dmabuf_id_t hid; /* hyper_dmabuf_id of specific hyper_dmabuf */
+	int size; /* size of data */
+};
+
+struct hyper_dmabuf_event_data {
+	struct hyper_dmabuf_event_hdr hdr;
+	void *data; /* private data */
+};
+
 #define IOCTL_HYPER_DMABUF_TX_CH_SETUP \
 _IOC(_IOC_NONE, 'G', 0, sizeof(struct ioctl_hyper_dmabuf_tx_ch_setup))
 struct ioctl_hyper_dmabuf_tx_ch_setup {
-- 
2.16.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [RFC PATCH v2 9/9] hyper_dmabuf: threaded interrupt in Xen-backend
  2018-02-14  1:49 [RFC PATCH v2 0/9] hyper_dmabuf: Hyper_DMABUF driver Dongwon Kim
                   ` (7 preceding siblings ...)
  2018-02-14  1:50 ` [RFC PATCH v2 8/9] hyper_dmabuf: event-polling mechanism for detecting a new hyper_DMABUF Dongwon Kim
@ 2018-02-14  1:50 ` Dongwon Kim
  2018-04-10 10:04   ` [RFC,v2,9/9] " Oleksandr Andrushchenko
  2018-02-19 17:01 ` [RFC PATCH v2 0/9] hyper_dmabuf: Hyper_DMABUF driver Daniel Vetter
  9 siblings, 1 reply; 21+ messages in thread
From: Dongwon Kim @ 2018-02-14  1:50 UTC (permalink / raw)
  To: linux-kernel, linaro-mm-sig, xen-devel
  Cc: dri-devel, dongwon.kim, mateuszx.potrola, sumit.semwal

Use a threaded interrupt instead of a regular one because most of the
ISR is time-consuming and possibly sleeps
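
For reference, a minimal sketch of the pattern adopted here (a generic
example with made-up names, not code from this patch):

#include <linux/interrupt.h>

/* thread_fn runs in process context, so it may sleep */
static irqreturn_t my_thread_fn(int irq, void *dev_id)
{
	/* ... time-consuming work, may take mutexes ... */
	return IRQ_HANDLED;
}

static int my_setup_irq(unsigned int irq, void *dev_id)
{
	/* NULL hard handler + IRQF_ONESHOT: the default hard handler
	 * just wakes the thread and the irq line stays masked until
	 * the thread function has finished
	 */
	return request_threaded_irq(irq, NULL, my_thread_fn,
				    IRQF_ONESHOT, "my-dev", dev_id);
}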

Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 .../hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.c b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.c
index 30bc4b6304ac..65af5ddfb2d7 100644
--- a/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.c
+++ b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.c
@@ -332,11 +332,14 @@ int xen_be_init_tx_rbuf(int domid)
 	}
 
 	/* setting up interrupt */
-	ret = bind_evtchn_to_irqhandler(alloc_unbound.port,
-					front_ring_isr, 0,
-					NULL, (void *) ring_info);
+	ring_info->irq = bind_evtchn_to_irq(alloc_unbound.port);
 
-	if (ret < 0) {
+	ret = request_threaded_irq(ring_info->irq,
+				   NULL,
+				   front_ring_isr,
+				   IRQF_ONESHOT, NULL, ring_info);
+
+	if (ret != 0) {
 		dev_err(hy_drv_priv->dev,
 			"Failed to setup event channel\n");
 		close.port = alloc_unbound.port;
@@ -348,7 +351,6 @@ int xen_be_init_tx_rbuf(int domid)
 	}
 
 	ring_info->rdomain = domid;
-	ring_info->irq = ret;
 	ring_info->port = alloc_unbound.port;
 
 	mutex_init(&ring_info->lock);
@@ -535,9 +537,10 @@ int xen_be_init_rx_rbuf(int domid)
 	if (!xen_comm_find_tx_ring(domid))
 		ret = xen_be_init_tx_rbuf(domid);
 
-	ret = request_irq(ring_info->irq,
-			  back_ring_isr, 0,
-			  NULL, (void *)ring_info);
+	ret = request_threaded_irq(ring_info->irq,
+				   NULL,
+				   back_ring_isr, IRQF_ONESHOT,
+				   NULL, (void *)ring_info);
 
 	return ret;
 
-- 
2.16.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [RFC PATCH v2 0/9] hyper_dmabuf: Hyper_DMABUF driver
  2018-02-14  1:49 [RFC PATCH v2 0/9] hyper_dmabuf: Hyper_DMABUF driver Dongwon Kim
                   ` (8 preceding siblings ...)
  2018-02-14  1:50 ` [RFC PATCH v2 9/9] hyper_dmabuf: threaded interrupt in Xen-backend Dongwon Kim
@ 2018-02-19 17:01 ` Daniel Vetter
  2018-02-21 20:18   ` Dongwon Kim
  9 siblings, 1 reply; 21+ messages in thread
From: Daniel Vetter @ 2018-02-19 17:01 UTC (permalink / raw)
  To: Dongwon Kim
  Cc: linux-kernel, linaro-mm-sig, xen-devel, dri-devel, mateuszx.potrola

On Tue, Feb 13, 2018 at 05:49:59PM -0800, Dongwon Kim wrote:
> This patch series contains the implementation of a new device driver,
> hyper_DMABUF driver, which provides a way to expand the boundary of
> Linux DMA-BUF sharing to across different VM instances in Multi-OS platform
> enabled by a Hypervisor (e.g. XEN)
> 
> This version 2 series is basically refactored version of old series starting
> with "[RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf
> drv"
> 
> Implementation details of this driver are described in the reference guide
> added by the second patch, "[RFC PATCH v2 2/5] hyper_dmabuf: architecture
> specification and reference guide".
> 
> Attaching 'Overview' section here as a quick summary.
> 
> ------------------------------------------------------------------------------
> Section 1. Overview
> ------------------------------------------------------------------------------
> 
> Hyper_DMABUF driver is a Linux device driver running on multiple Virtual
> achines (VMs), which expands DMA-BUF sharing capability to the VM environment
> where multiple different OS instances need to share same physical data without
> data-copy across VMs.
> 
> To share a DMA_BUF across VMs, an instance of the Hyper_DMABUF drv on the
> exporting VM (so called, “exporter”) imports a local DMA_BUF from the original
> producer of the buffer, then re-exports it with an unique ID, hyper_dmabuf_id
> for the buffer to the importing VM (so called, “importer”).
> 
> Another instance of the Hyper_DMABUF driver on importer registers
> a hyper_dmabuf_id together with reference information for the shared physical
> pages associated with the DMA_BUF to its database when the export happens.
> 
> The actual mapping of the DMA_BUF on the importer’s side is done by
> the Hyper_DMABUF driver when user space issues the IOCTL command to access
> the shared DMA_BUF. The Hyper_DMABUF driver works as both an importing and
> exporting driver as is, that is, no special configuration is required.
> Consequently, only a single module per VM is needed to enable cross-VM DMA_BUF
> exchange.
> 
> ------------------------------------------------------------------------------
> 
> There is a git repository at github.com where this series of patches are all
> integrated in Linux kernel tree based on the commit:
> 
>         commit ae64f9bd1d3621b5e60d7363bc20afb46aede215
>         Author: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
>         Date:   Sun Dec 3 11:01:47 2018 -0500
> 
>             Linux 4.15-rc2
> 
> https://github.com/downor/linux_hyper_dmabuf.git hyper_dmabuf_integration_v4

Since you place this under drivers/dma-buf I'm assuming you want to
maintain this as part of the core dma-buf support, and not as some
Xen-specific thing. Given that, usual graphics folks rules apply:

Where's the userspace for this (must be open source)? What exactly is the
use-case you're trying to solve by sharing dma-bufs in this fashion?

Iirc my feedback on v1 was why exactly you really need to be able to
import a normal dma-buf into a hyper-dmabuf, instead of allocating them
directly in the hyper-dmabuf driver. Which would _massively_ simplify your
design, since you don't need to marshall all the attach and map business
around (since the hypervisor would be in control of the dma-buf, not a
guest OS). Also, all this marshalling leaves me with the impression that
the guest that exports the dma-buf could take down the importer. That
kinda nukes all the separation guarantees that vms provide.

Or you just stuff this somewhere deeply hidden within Xen where gpu folks
can't find it :-)
-Daniel

> 
> Dongwon Kim, Mateusz Polrola (9):
>   hyper_dmabuf: initial upload of hyper_dmabuf drv core framework
>   hyper_dmabuf: architecture specification and reference guide
>   MAINTAINERS: adding Hyper_DMABUF driver section in MAINTAINERS
>   hyper_dmabuf: user private data attached to hyper_DMABUF
>   hyper_dmabuf: hyper_DMABUF synchronization across VM
>   hyper_dmabuf: query ioctl for retreiving various hyper_DMABUF info
>   hyper_dmabuf: event-polling mechanism for detecting a new hyper_DMABUF
>   hyper_dmabuf: threaded interrupt in Xen-backend
>   hyper_dmabuf: default backend for XEN hypervisor
> 
>  Documentation/hyper-dmabuf-sharing.txt             | 734 ++++++++++++++++
>  MAINTAINERS                                        |  11 +
>  drivers/dma-buf/Kconfig                            |   2 +
>  drivers/dma-buf/Makefile                           |   1 +
>  drivers/dma-buf/hyper_dmabuf/Kconfig               |  50 ++
>  drivers/dma-buf/hyper_dmabuf/Makefile              |  44 +
>  .../backends/xen/hyper_dmabuf_xen_comm.c           | 944 +++++++++++++++++++++
>  .../backends/xen/hyper_dmabuf_xen_comm.h           |  78 ++
>  .../backends/xen/hyper_dmabuf_xen_comm_list.c      | 158 ++++
>  .../backends/xen/hyper_dmabuf_xen_comm_list.h      |  67 ++
>  .../backends/xen/hyper_dmabuf_xen_drv.c            |  46 +
>  .../backends/xen/hyper_dmabuf_xen_drv.h            |  53 ++
>  .../backends/xen/hyper_dmabuf_xen_shm.c            | 525 ++++++++++++
>  .../backends/xen/hyper_dmabuf_xen_shm.h            |  46 +
>  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c    | 410 +++++++++
>  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h    | 122 +++
>  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.c  | 122 +++
>  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.h  |  38 +
>  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.c     | 135 +++
>  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h     |  53 ++
>  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c  | 794 +++++++++++++++++
>  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.h  |  52 ++
>  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c   | 295 +++++++
>  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.h   |  73 ++
>  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c    | 416 +++++++++
>  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h    |  89 ++
>  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c    | 415 +++++++++
>  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.h    |  34 +
>  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.c  | 174 ++++
>  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.h  |  36 +
>  .../hyper_dmabuf/hyper_dmabuf_remote_sync.c        | 324 +++++++
>  .../hyper_dmabuf/hyper_dmabuf_remote_sync.h        |  32 +
>  .../dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c   | 257 ++++++
>  .../dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.h   |  43 +
>  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h | 143 ++++
>  include/uapi/linux/hyper_dmabuf.h                  | 134 +++
>  36 files changed, 6950 insertions(+)
>  create mode 100644 Documentation/hyper-dmabuf-sharing.txt
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/Kconfig
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/Makefile
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.c
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.h
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm_list.c
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm_list.h
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_drv.c
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_drv.h
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_shm.c
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_shm.h
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.c
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.h
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.c
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.h
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.h
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.h
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.c
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.h
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.c
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.h
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
>  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h
>  create mode 100644 include/uapi/linux/hyper_dmabuf.h
> 
> -- 
> 2.16.1
> 
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC PATCH v2 0/9] hyper_dmabuf: Hyper_DMABUF driver
  2018-02-19 17:01 ` [RFC PATCH v2 0/9] hyper_dmabuf: Hyper_DMABUF driver Daniel Vetter
@ 2018-02-21 20:18   ` Dongwon Kim
  0 siblings, 0 replies; 21+ messages in thread
From: Dongwon Kim @ 2018-02-21 20:18 UTC (permalink / raw)
  To: linux-kernel, linaro-mm-sig, xen-devel, dri-devel, mateuszx.potrola

On Mon, Feb 19, 2018 at 06:01:29PM +0100, Daniel Vetter wrote:
> On Tue, Feb 13, 2018 at 05:49:59PM -0800, Dongwon Kim wrote:
> > This patch series contains the implementation of a new device driver,
> > hyper_DMABUF driver, which provides a way to expand the boundary of
> > Linux DMA-BUF sharing to across different VM instances in Multi-OS platform
> > enabled by a Hypervisor (e.g. XEN)
> > 
> > This version 2 series is basically refactored version of old series starting
> > with "[RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf
> > drv"
> > 
> > Implementation details of this driver are described in the reference guide
> > added by the second patch, "[RFC PATCH v2 2/5] hyper_dmabuf: architecture
> > specification and reference guide".
> > 
> > Attaching 'Overview' section here as a quick summary.
> > 
> > ------------------------------------------------------------------------------
> > Section 1. Overview
> > ------------------------------------------------------------------------------
> > 
> > Hyper_DMABUF driver is a Linux device driver running on multiple Virtual
> > achines (VMs), which expands DMA-BUF sharing capability to the VM environment
> > where multiple different OS instances need to share same physical data without
> > data-copy across VMs.
> > 
> > To share a DMA_BUF across VMs, an instance of the Hyper_DMABUF drv on the
> > exporting VM (so called, “exporter”) imports a local DMA_BUF from the original
> > producer of the buffer, then re-exports it with an unique ID, hyper_dmabuf_id
> > for the buffer to the importing VM (so called, “importer”).
> > 
> > Another instance of the Hyper_DMABUF driver on importer registers
> > a hyper_dmabuf_id together with reference information for the shared physical
> > pages associated with the DMA_BUF to its database when the export happens.
> > 
> > The actual mapping of the DMA_BUF on the importer’s side is done by
> > the Hyper_DMABUF driver when user space issues the IOCTL command to access
> > the shared DMA_BUF. The Hyper_DMABUF driver works as both an importing and
> > exporting driver as is, that is, no special configuration is required.
> > Consequently, only a single module per VM is needed to enable cross-VM DMA_BUF
> > exchange.
> > 
> > ------------------------------------------------------------------------------
> > 
> > There is a git repository at github.com where this series of patches are all
> > integrated in Linux kernel tree based on the commit:
> > 
> >         commit ae64f9bd1d3621b5e60d7363bc20afb46aede215
> >         Author: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
> >         Date:   Sun Dec 3 11:01:47 2018 -0500
> > 
> >             Linux 4.15-rc2
> > 
> > https://github.com/downor/linux_hyper_dmabuf.git hyper_dmabuf_integration_v4
> 
> Since you place this under drivers/dma-buf I'm assuming you want to
> maintain this as part of the core dma-buf support, and not as some
> Xen-specific thing. Given that, usual graphics folks rules apply:

I moved it inside drivers/dma-buf because half of the design is not
hypervisor-specific and it is possible that we will add more backends
for additional hypervisors.

> 
> Where's the userspace for this (must be open source)? What exactly is the
> use-case you're trying to solve by sharing dma-bufs in this fashion?

Automotive use cases are actually using this feature now, where each VM
has its own display and wants to share the same rendering content with
the others. It is a platform based on Xen and Intel hardware and I don't
think all of the SW stack is open-sourced. I do have a test application
to verify this, which I think I can make public.

> 
> Iirc my feedback on v1 was why exactly you really need to be able to
> import a normal dma-buf into a hyper-dmabuf, instead of allocating them
> directly in the hyper-dmabuf driver. Which would _massively_ simplify your
> design, since you don't need to marshall all the attach and map business
> around (since the hypervisor would be in control of the dma-buf, not a
> guest OS). 

I am sorry but I don't quite understand which side you are talking about
when you said "import a normal dma-buf". The hyper_dmabuf driver running
on the exporting VM actually imports the normal dma-buf (e.g. the one
from i915), then gets the underlying pages shared and passes all the
references to those pages to the importing VM. On the importing VM, the
hyper_dmabuf driver is supposed to create a dma-buf (is this the part
you are talking about?) with those shared pages and export it using the
normal dma-buf framework. Attach and map functions have to be defined in
this case because hyper_dmabuf will be the original exporter in the
importing VM.
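
To put the flow above in userspace terms, a rough sketch (the
export-side ioctl and struct field names below are placeholders I'm
using for illustration; only IOCTL_HYPER_DMABUF_EXPORT_FD is taken
verbatim from this series):

	/* exporting VM: hand a local dma-buf (e.g. one from i915)
	 * to hyper_dmabuf; placeholder names
	 */
	exp.dmabuf_fd = local_dmabuf_fd;
	exp.remote_domain = importer_domid;
	ioctl(fd, IOCTL_HYPER_DMABUF_EXPORT_REMOTE, &exp);
	/* exp.hid now identifies the buffer across VMs */

	/* importing VM: turn the received hid back into a local
	 * dma-buf fd; hyper_dmabuf is the exporter of this new
	 * dma-buf, which is why it defines attach/map
	 */
	imp.hid = hid;
	ioctl(fd, IOCTL_HYPER_DMABUF_EXPORT_FD, &imp);
	/* imp.fd is a normal dma-buf fd usable by local drivers */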

I will try to contact you in IRC if more clarification is required.

Also, as far as I remember, you suggested making this driver work as the
exporter on both sides. If your comment above is in line with your
previous feedback, I actually replied to your initial comment. I am not
sure if you had a chance to look at it; it would be great if you could
review it and comment if my answer was not enough.

> Also, all this marshalling leaves me with the impression that
> the guest that exports the dma-buf could take down the importer. That
> kinda nukes all the separation guarantees that vms provide.

I understand the importance of separation; however, sharing physical
memory at the kernel level breaks this guarantee anyway, regardless of
the implementation.

> 
> Or you just stuff this somewhere deeply hidden within Xen where gpu folks
> can't find it :-)

Grant-table (the memory-sharing mechanism in Xen) has its own permission
control for shared pages; however, at least in the graphics use-case,
this is not fully guaranteed once those pages are mapped in the GTT.

> -Daniel

> 
> > 
> > Dongwon Kim, Mateusz Polrola (9):
> >   hyper_dmabuf: initial upload of hyper_dmabuf drv core framework
> >   hyper_dmabuf: architecture specification and reference guide
> >   MAINTAINERS: adding Hyper_DMABUF driver section in MAINTAINERS
> >   hyper_dmabuf: user private data attached to hyper_DMABUF
> >   hyper_dmabuf: hyper_DMABUF synchronization across VM
> >   hyper_dmabuf: query ioctl for retreiving various hyper_DMABUF info
> >   hyper_dmabuf: event-polling mechanism for detecting a new hyper_DMABUF
> >   hyper_dmabuf: threaded interrupt in Xen-backend
> >   hyper_dmabuf: default backend for XEN hypervisor
> > 
> >  Documentation/hyper-dmabuf-sharing.txt             | 734 ++++++++++++++++
> >  MAINTAINERS                                        |  11 +
> >  drivers/dma-buf/Kconfig                            |   2 +
> >  drivers/dma-buf/Makefile                           |   1 +
> >  drivers/dma-buf/hyper_dmabuf/Kconfig               |  50 ++
> >  drivers/dma-buf/hyper_dmabuf/Makefile              |  44 +
> >  .../backends/xen/hyper_dmabuf_xen_comm.c           | 944 +++++++++++++++++++++
> >  .../backends/xen/hyper_dmabuf_xen_comm.h           |  78 ++
> >  .../backends/xen/hyper_dmabuf_xen_comm_list.c      | 158 ++++
> >  .../backends/xen/hyper_dmabuf_xen_comm_list.h      |  67 ++
> >  .../backends/xen/hyper_dmabuf_xen_drv.c            |  46 +
> >  .../backends/xen/hyper_dmabuf_xen_drv.h            |  53 ++
> >  .../backends/xen/hyper_dmabuf_xen_shm.c            | 525 ++++++++++++
> >  .../backends/xen/hyper_dmabuf_xen_shm.h            |  46 +
> >  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c    | 410 +++++++++
> >  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h    | 122 +++
> >  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.c  | 122 +++
> >  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.h  |  38 +
> >  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.c     | 135 +++
> >  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h     |  53 ++
> >  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c  | 794 +++++++++++++++++
> >  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.h  |  52 ++
> >  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c   | 295 +++++++
> >  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.h   |  73 ++
> >  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c    | 416 +++++++++
> >  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h    |  89 ++
> >  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c    | 415 +++++++++
> >  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.h    |  34 +
> >  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.c  | 174 ++++
> >  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.h  |  36 +
> >  .../hyper_dmabuf/hyper_dmabuf_remote_sync.c        | 324 +++++++
> >  .../hyper_dmabuf/hyper_dmabuf_remote_sync.h        |  32 +
> >  .../dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c   | 257 ++++++
> >  .../dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.h   |  43 +
> >  drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h | 143 ++++
> >  include/uapi/linux/hyper_dmabuf.h                  | 134 +++
> >  36 files changed, 6950 insertions(+)
> >  create mode 100644 Documentation/hyper-dmabuf-sharing.txt
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/Kconfig
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/Makefile
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.c
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.h
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm_list.c
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm_list.h
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_drv.c
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_drv.h
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_shm.c
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_shm.h
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.c
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_event.h
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.c
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.h
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.h
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.h
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.c
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_query.h
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.c
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_remote_sync.h
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
> >  create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h
> >  create mode 100644 include/uapi/linux/hyper_dmabuf.h
> > 
> > -- 
> > 2.16.1
> > 
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

* Re: [Xen-devel] [RFC PATCH v2 2/9] hyper_dmabuf: architecture specification and reference guide
  2018-02-14  1:50 ` [RFC PATCH v2 2/9] hyper_dmabuf: architecture specification and reference guide Dongwon Kim
@ 2018-02-23 16:15   ` Roger Pau Monné
  2018-02-23 19:02     ` Dongwon Kim
  2018-04-10  9:52   ` [RFC, v2, " Oleksandr Andrushchenko
  1 sibling, 1 reply; 21+ messages in thread
From: Roger Pau Monné @ 2018-02-23 16:15 UTC (permalink / raw)
  To: Dongwon Kim
  Cc: linux-kernel, linaro-mm-sig, xen-devel, sumit.semwal, dri-devel,
	mateuszx.potrola

On Tue, Feb 13, 2018 at 05:50:01PM -0800, Dongwon Kim wrote:
> Reference document for hyper_DMABUF driver
> 
> Documentation/hyper-dmabuf-sharing.txt

This should likely be patch 1 in order for reviewers to have the
appropriate context.

> 
> Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
> ---
>  Documentation/hyper-dmabuf-sharing.txt | 734 +++++++++++++++++++++++++++++++++
>  1 file changed, 734 insertions(+)
>  create mode 100644 Documentation/hyper-dmabuf-sharing.txt
> 
> diff --git a/Documentation/hyper-dmabuf-sharing.txt b/Documentation/hyper-dmabuf-sharing.txt
> new file mode 100644
> index 000000000000..928e411931e3
> --- /dev/null
> +++ b/Documentation/hyper-dmabuf-sharing.txt
> @@ -0,0 +1,734 @@
> +Linux Hyper DMABUF Driver
> +
> +------------------------------------------------------------------------------
> +Section 1. Overview
> +------------------------------------------------------------------------------
> +
> +Hyper_DMABUF driver is a Linux device driver running on multiple Virtual
> +Machines (VMs), which expands DMA-BUF sharing capability to the VM environment
> +where multiple different OS instances need to share same physical data without
> +data-copy across VMs.
> +
> +To share a DMA_BUF across VMs, an instance of the Hyper_DMABUF drv on the
> +exporting VM (so called, “exporter”) imports a local DMA_BUF from the original
> +producer of the buffer,

The usage of export and import in the above sentence makes it almost
impossible to understand.

> then re-exports it with an unique ID, hyper_dmabuf_id
> +for the buffer to the importing VM (so called, “importer”).

And this is even worse.

Maybe it would help to have some kind of flow diagram of all this
import/export operations, but please read below.

> +
> +Another instance of the Hyper_DMABUF driver on importer registers
> +a hyper_dmabuf_id together with reference information for the shared physical
> +pages associated with the DMA_BUF to its database when the export happens.
> +
> +The actual mapping of the DMA_BUF on the importer’s side is done by
> +the Hyper_DMABUF driver when user space issues the IOCTL command to access
> +the shared DMA_BUF. The Hyper_DMABUF driver works as both an importing and
> +exporting driver as is, that is, no special configuration is required.
> +Consequently, only a single module per VM is needed to enable cross-VM DMA_BUF
> +exchange.

IMHO I need a more generic view of the problem you are trying to solve
in the overview section. I've read the full overview, and I still have
no idea why you need all this.

I think the overview should contain at least:

1. A description of the problem you are trying to solve.
2. A high level description of the proposed solution.
3. How the proposed solution deals with the problem described in 1.

This overview is not useful for people that don't know which problem
you are trying to solve, like myself.

Thanks, Roger.

* Re: [Xen-devel] [RFC PATCH v2 2/9] hyper_dmabuf: architecture specification and reference guide
  2018-02-23 16:15   ` [Xen-devel] " Roger Pau Monné
@ 2018-02-23 19:02     ` Dongwon Kim
  0 siblings, 0 replies; 21+ messages in thread
From: Dongwon Kim @ 2018-02-23 19:02 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: linux-kernel, linaro-mm-sig, xen-devel, sumit.semwal, dri-devel,
	mateuszx.potrola

Thanks for your comments, Roger.
I will try to polish this doc and resubmit.
(I put some comments below as well.)

On Fri, Feb 23, 2018 at 04:15:00PM +0000, Roger Pau Monné wrote:
> On Tue, Feb 13, 2018 at 05:50:01PM -0800, Dongwon Kim wrote:
> > Reference document for hyper_DMABUF driver
> > 
> > Documentation/hyper-dmabuf-sharing.txt
> 
> This should likely be patch 1 in order for reviewers to have the
> appropriate context.
> 
> > 
> > Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
> > ---
> >  Documentation/hyper-dmabuf-sharing.txt | 734 +++++++++++++++++++++++++++++++++
> >  1 file changed, 734 insertions(+)
> >  create mode 100644 Documentation/hyper-dmabuf-sharing.txt
> > 
> > diff --git a/Documentation/hyper-dmabuf-sharing.txt b/Documentation/hyper-dmabuf-sharing.txt
> > new file mode 100644
> > index 000000000000..928e411931e3
> > --- /dev/null
> > +++ b/Documentation/hyper-dmabuf-sharing.txt
> > @@ -0,0 +1,734 @@
> > +Linux Hyper DMABUF Driver
> > +
> > +------------------------------------------------------------------------------
> > +Section 1. Overview
> > +------------------------------------------------------------------------------
> > +
> > +Hyper_DMABUF driver is a Linux device driver running on multiple Virtual
> > +Machines (VMs), which expands DMA-BUF sharing capability to the VM environment
> > +where multiple different OS instances need to share same physical data without
> > +data-copy across VMs.
> > +
> > +To share a DMA_BUF across VMs, an instance of the Hyper_DMABUF drv on the
> > +exporting VM (so called, “exporter”) imports a local DMA_BUF from the original
> > +producer of the buffer,
> 
> The usage of export and import in the above sentence makes it almost
> impossible to understand.

OK, it does look confusing. I think the problem is that those words
are used for both the local and the cross-VM cases. I will try to
clarify them.

> 
> > then re-exports it with an unique ID, hyper_dmabuf_id
> > +for the buffer to the importing VM (so called, “importer”).
> 
> And this is even worse.
> 
> Maybe it would help to have some kind of flow diagram of all this
> import/export operations, but please read below.

I will add a diagram here.
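Probably something along these lines (simplified):

      VM-A (exporter)                      VM-B (importer)
  +----------------------+            +----------------------+
  | producer (e.g. i915) |            |     consumer app     |
  |     |  local dma-buf |            |     ^  dma-buf fd    |
  |     v                |            |     |                |
  |  hyper_dmabuf drv    | ---------> |  hyper_dmabuf drv    |
  +----------------------+            +----------------------+
             |    hyper_dmabuf_id + page refs    |
             +------ hypervisor (e.g. Xen) ------+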

> 
> > +
> > +Another instance of the Hyper_DMABUF driver on importer registers
> > +a hyper_dmabuf_id together with reference information for the shared physical
> > +pages associated with the DMA_BUF to its database when the export happens.
> > +
> > +The actual mapping of the DMA_BUF on the importer’s side is done by
> > +the Hyper_DMABUF driver when user space issues the IOCTL command to access
> > +the shared DMA_BUF. The Hyper_DMABUF driver works as both an importing and
> > +exporting driver as is, that is, no special configuration is required.
> > +Consequently, only a single module per VM is needed to enable cross-VM DMA_BUF
> > +exchange.
> 
> IMHO I need a more generic view of the problem you are trying to solve
> in the overview section. I've read the full overview, and I still have
> no idea why you need all this.

I will add some more paragraphs here (and possibly diagrams) to give a
more generic view of this driver.

> 
> I think the overview should contain at least:
> 
> 1. A description of the problem you are trying to solve.
> 2. A high level description of the proposed solution.
> 3. How the proposed solution deals with the problem described in 1.
> 
> This overview is not useful for people that don't know which problem
> you are trying to solve, like myself.

Thanks again.

> 
> Thanks, Roger.

* Re: [RFC, v2, 1/9] hyper_dmabuf: initial upload of hyper_dmabuf drv core framework
  2018-02-14  1:50 ` [RFC PATCH v2 1/9] hyper_dmabuf: initial upload of hyper_dmabuf drv core framework Dongwon Kim
@ 2018-04-10  8:53   ` Oleksandr Andrushchenko
  2018-04-10 10:47     ` [Xen-devel] " Julien Grall
  0 siblings, 1 reply; 21+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-10  8:53 UTC (permalink / raw)
  To: Dongwon Kim, linux-kernel, linaro-mm-sig, xen-devel
  Cc: dri-devel, mateuszx.potrola

On 02/14/2018 03:50 AM, Dongwon Kim wrote:
> Upload of the initial version of the core framework of the hyper_DMABUF
> driver, enabling DMA_BUF exchange between two different VMs in a
> virtualized platform based on a hypervisor such as Xen.
>
> The hyper_DMABUF driver's primary role is to import a DMA_BUF from the
> originator and then re-export it to another Linux VM so that it can be
> mapped and accessed there.
>
> This driver has two layers. One is the so-called "core framework", which
> contains the driver interface and the core functions handling export/import
> of a hyper_DMABUF and its maintenance. This part of the driver is
> hypervisor-independent, so it works as is with any hypervisor.
>
> The other layer is the "hypervisor backend". This layer is the interface
> between the "core framework" and the actual hypervisor, handling memory
> sharing and communication. Unlike the "core framework", every hypervisor
> needs its own backend interface, designed around its native mechanisms for
> memory sharing and inter-VM communication.
>
> This patch contains the first part, the "core framework", which consists of
> 7 source files and 11 header files. Brief descriptions of these source
> files are given below:
>
> hyper_dmabuf_drv.c
>
> - Linux driver interface and initialization/cleaning-up routines
>
> hyper_dmabuf_ioctl.c
>
> - IOCTL calls for export/import of DMA-BUFs and for creation and
>    destruction of the comm channel.
>
> hyper_dmabuf_sgl_proc.c
>
> - Provides methods for managing DMA-BUFs for exporting and importing. For
>    exporting, these cover extraction of pages, sharing of pages via
>    procedures in the "Backend", and notification of the importing VM. For
>    importing, all operations related to the reconstruction of a DMA-BUF
>    (with shared pages) on the importer's side are defined.
>
> hyper_dmabuf_ops.c
>
> - Standard DMA-BUF operations for hyper_DMABUF reconstructed on
>    importer's side.
>
> hyper_dmabuf_list.c
>
> - Lists for storing exported and imported hyper_DMABUFs, to keep track of
>    remote usage of the hyper_DMABUFs currently being shared.
>
> hyper_dmabuf_msg.c
>
> - Defines the messages exchanged between VMs (exporter and importer) and
>    the function calls for sending them and parsing them when received.
>
> hyper_dmabuf_id.c
>
> - Contains methods to generate and manage the "hyper_DMABUF id" of each
>    hyper_DMABUF being exported. It is a global handle for a hyper_DMABUF,
>    which another VM needs to know in order to import it.
>
> hyper_dmabuf_struct.h
>
> - Contains the data structures for imported and exported hyper_DMABUFs
>
> include/uapi/linux/hyper_dmabuf.h
>
> - Contains definitions of the data types and structures referenced by user
>    applications to interact with the driver
>
> Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
> Signed-off-by: Mateusz Polrola <mateuszx.potrola@intel.com>
> ---
>   drivers/dma-buf/Kconfig                            |   2 +
>   drivers/dma-buf/Makefile                           |   1 +
>   drivers/dma-buf/hyper_dmabuf/Kconfig               |  23 +
>   drivers/dma-buf/hyper_dmabuf/Makefile              |  34 ++
>   drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c    | 254 ++++++++
>   drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h    | 111 ++++
>   drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.c     | 135 +++++
>   drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h     |  53 ++
>   drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c  | 672 +++++++++++++++++++++
>   drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.h  |  52 ++
>   drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c   | 294 +++++++++
>   drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.h   |  73 +++
>   drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c    | 320 ++++++++++
>   drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h    |  87 +++
>   drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c    | 264 ++++++++
>   drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.h    |  34 ++
>   .../dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c   | 256 ++++++++
>   .../dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.h   |  43 ++
>   drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h | 131 ++++
>   include/uapi/linux/hyper_dmabuf.h                  |  87 +++
>   20 files changed, 2926 insertions(+)
>   create mode 100644 drivers/dma-buf/hyper_dmabuf/Kconfig
>   create mode 100644 drivers/dma-buf/hyper_dmabuf/Makefile
>   create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c
>   create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h
>   create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.c
>   create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h
>   create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c
>   create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.h
>   create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c
>   create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.h
>   create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c
>   create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h
>   create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c
>   create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.h
>   create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
>   create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
>   create mode 100644 drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h
>   create mode 100644 include/uapi/linux/hyper_dmabuf.h
>
> diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
> index ed3b785bae37..09ccac1768e3 100644
> --- a/drivers/dma-buf/Kconfig
> +++ b/drivers/dma-buf/Kconfig
> @@ -30,4 +30,6 @@ config SW_SYNC
>   	  WARNING: improper use of this can result in deadlocking kernel
>   	  drivers from userspace. Intended for test and debug only.
>   
> +source "drivers/dma-buf/hyper_dmabuf/Kconfig"
> +
>   endmenu
> diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
> index c33bf8863147..445749babb19 100644
> --- a/drivers/dma-buf/Makefile
> +++ b/drivers/dma-buf/Makefile
> @@ -1,3 +1,4 @@
>   obj-y := dma-buf.o dma-fence.o dma-fence-array.o reservation.o seqno-fence.o
>   obj-$(CONFIG_SYNC_FILE)		+= sync_file.o
>   obj-$(CONFIG_SW_SYNC)		+= sw_sync.o sync_debug.o
> +obj-$(CONFIG_HYPER_DMABUF)      += ./hyper_dmabuf/
> diff --git a/drivers/dma-buf/hyper_dmabuf/Kconfig b/drivers/dma-buf/hyper_dmabuf/Kconfig
> new file mode 100644
> index 000000000000..5ebf516d65eb
> --- /dev/null
> +++ b/drivers/dma-buf/hyper_dmabuf/Kconfig
> @@ -0,0 +1,23 @@
> +menu "HYPER_DMABUF"
> +
> +config HYPER_DMABUF
> +	tristate "Enables hyper dmabuf driver"
> +	default y
Not sure you want this enabled by default
> +	help
> +	  This option enables Hyper_DMABUF driver.
> +
> +	  This driver works as abstraction layer that export and import
> +	  DMA_BUF from/to another virtual OS running on the same HW platform
> +	  powered by a hypervisor
> +
> +config HYPER_DMABUF_SYSFS
> +	bool "Enable sysfs information about hyper DMA buffers"
> +	default y
Ditto
> +	depends on HYPER_DMABUF
> +	help
> +	  Expose run-time information about currently imported and exported buffers
> +	  registered in EXPORT and IMPORT list in Hyper_DMABUF driver.
> +
> +	  The location of sysfs is under "...."
> +
> +endmenu
> diff --git a/drivers/dma-buf/hyper_dmabuf/Makefile b/drivers/dma-buf/hyper_dmabuf/Makefile
> new file mode 100644
> index 000000000000..3908522b396a
> --- /dev/null
> +++ b/drivers/dma-buf/hyper_dmabuf/Makefile
> @@ -0,0 +1,34 @@
> +TARGET_MODULE:=hyper_dmabuf
> +
> +# If we running by kernel building system
> +ifneq ($(KERNELRELEASE),)
Not sure why you need this
> +	$(TARGET_MODULE)-objs := hyper_dmabuf_drv.o \
> +                                 hyper_dmabuf_ioctl.o \
> +                                 hyper_dmabuf_list.o \
> +				 hyper_dmabuf_sgl_proc.o \
> +				 hyper_dmabuf_ops.o \
> +				 hyper_dmabuf_msg.o \
> +				 hyper_dmabuf_id.o \
> +
> +obj-$(CONFIG_HYPER_DMABUF) := $(TARGET_MODULE).o
> +
> +# If we are running without kernel build system
Ditto
> +else
> +BUILDSYSTEM_DIR?=../../../
> +PWD:=$(shell pwd)
> +
> +all :
> +# run kernel build system to make module
> +$(MAKE) -C $(BUILDSYSTEM_DIR) M=$(PWD) modules
> +
> +clean:
> +# run kernel build system to cleanup in current directory
> +$(MAKE) -C $(BUILDSYSTEM_DIR) M=$(PWD) clean
> +
> +load:
> +	insmod ./$(TARGET_MODULE).ko
> +
> +unload:
> +	rmmod ./$(TARGET_MODULE).ko
> +
This seems to be some helper code you use during development;
it needs to be removed.
> +endif
> diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c
> new file mode 100644
> index 000000000000..18c1cd735ea2
> --- /dev/null
> +++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c
> @@ -0,0 +1,254 @@
> +/*
> + * Copyright © 2018 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + * SPDX-License-Identifier: (MIT OR GPL-2.0)
> + *
> + * Authors:
> + *    Dongwon Kim <dongwon.kim@intel.com>
> + *    Mateusz Polrola <mateuszx.potrola@intel.com>
> + *
> + */
> +
> +#include <linux/init.h>
> +#include <linux/module.h>
> +#include <linux/miscdevice.h>
> +#include <linux/workqueue.h>
> +#include <linux/slab.h>
> +#include <linux/device.h>
> +#include <linux/uaccess.h>
> +#include <linux/poll.h>
> +#include <linux/dma-buf.h>
> +#include "hyper_dmabuf_drv.h"
> +#include "hyper_dmabuf_ioctl.h"
> +#include "hyper_dmabuf_list.h"
> +#include "hyper_dmabuf_id.h"
> +
> +MODULE_LICENSE("GPL and additional rights");
> +MODULE_AUTHOR("Intel Corporation");
> +
> +struct hyper_dmabuf_private *hy_drv_priv;
Instead of using a global symbol here, you might want to first
register the misc device and then use devm_kzalloc to allocate
your private data.
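
Roughly (untested sketch):

	ret = misc_register(&hyper_dmabuf_miscdev);
	if (ret)
		return ret;

	hy_drv_priv = devm_kzalloc(hyper_dmabuf_miscdev.this_device,
				   sizeof(*hy_drv_priv), GFP_KERNEL);

so the private data gets freed automatically with the device.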
> +
> +static void force_free(struct exported_sgt_info *exported,
> +		       void *attr)
> +{
> +	struct ioctl_hyper_dmabuf_unexport unexport_attr;
> +	struct file *filp = (struct file *)attr;
> +
> +	if (!filp || !exported)
> +		return;
> +
> +	if (exported->filp == filp) {
> +		dev_dbg(hy_drv_priv->dev,
> +			"Forcefully releasing buffer {id:%d key:%d %d %d}\n",
> +			 exported->hid.id, exported->hid.rng_key[0],
> +			 exported->hid.rng_key[1], exported->hid.rng_key[2]);
> +
> +		unexport_attr.hid = exported->hid;
> +		unexport_attr.delay_ms = 0;
> +
> +		hyper_dmabuf_unexport_ioctl(filp, &unexport_attr);
> +	}
> +}
> +
> +static int hyper_dmabuf_open(struct inode *inode, struct file *filp)
> +{
> +	int ret = 0;
> +
> +	/* Do not allow exclusive open */
> +	if (filp->f_flags & O_EXCL)
> +		return -EBUSY;
> +
> +	return ret;
> +}
> +
> +static int hyper_dmabuf_release(struct inode *inode, struct file *filp)
> +{
> +	hyper_dmabuf_foreach_exported(force_free, filp);
> +
> +	return 0;
> +}
> +
> +static const struct file_operations hyper_dmabuf_driver_fops = {
> +	.owner = THIS_MODULE,
> +	.open = hyper_dmabuf_open,
> +	.release = hyper_dmabuf_release,
> +	.unlocked_ioctl = hyper_dmabuf_ioctl,
> +};
> +
> +static struct miscdevice hyper_dmabuf_miscdev = {
> +	.minor = MISC_DYNAMIC_MINOR,
> +	.name = "hyper_dmabuf",
Can this string be a constant used throughout the driver?
> +	.fops = &hyper_dmabuf_driver_fops,
> +};
> +
> +static int register_device(void)
> +{
> +	int ret = 0;
> +
> +	ret = misc_register(&hyper_dmabuf_miscdev);
> +
> +	if (ret) {
> +		pr_err("hyper_dmabuf: driver can't be registered\n");
> +		return ret;
> +	}
> +
> +	hy_drv_priv->dev = hyper_dmabuf_miscdev.this_device;
> +
> +	/* TODO: Check if there is a different way to initialize dma mask */
> +	dma_coerce_mask_and_coherent(hy_drv_priv->dev, DMA_BIT_MASK(64));
> +
> +	return ret;
> +}
> +
> +static void unregister_device(void)
> +{
> +	dev_info(hy_drv_priv->dev,
> +		"hyper_dmabuf: %s is called\n", __func__);
> +
> +	misc_deregister(&hyper_dmabuf_miscdev);
> +}
> +
> +static int __init hyper_dmabuf_drv_init(void)
> +{
> +	int ret = 0;
> +
> +	pr_notice("hyper_dmabuf_starting: Initialization started\n");
> +
> +	hy_drv_priv = kcalloc(1, sizeof(struct hyper_dmabuf_private),
> +			      GFP_KERNEL);
> +
> +	if (!hy_drv_priv)
> +		return -ENOMEM;
> +
> +	ret = register_device();
> +	if (ret < 0) {
> +		kfree(hy_drv_priv);
> +		return ret;
> +	}
> +
> +	hy_drv_priv->bknd_ops = NULL;
> +
> +	if (hy_drv_priv->bknd_ops == NULL) {
> +		pr_err("Hyper_dmabuf: no backend found\n");
> +		kfree(hy_drv_priv);
> +		return -1;
> +	}
> +
> +	mutex_init(&hy_drv_priv->lock);
> +
> +	mutex_lock(&hy_drv_priv->lock);
Why do you need to immediately lock here?
> +
> +	hy_drv_priv->initialized = false;
kcalloc allocates zeroed memory, so you might rely on that fact
> +
> +	dev_info(hy_drv_priv->dev,
> +		 "initializing database for imported/exported dmabufs\n");
> +
> +	hy_drv_priv->work_queue = create_workqueue("hyper_dmabuf_wqueue");
> +
> +	ret = hyper_dmabuf_table_init();
> +	if (ret < 0) {
> +		dev_err(hy_drv_priv->dev,
> +			"fail to init table for exported/imported entries\n");
> +		mutex_unlock(&hy_drv_priv->lock);
> +		kfree(hy_drv_priv);
> +		return ret;
> +	}
> +
> +#ifdef CONFIG_HYPER_DMABUF_SYSFS
> +	ret = hyper_dmabuf_register_sysfs(hy_drv_priv->dev);
> +	if (ret < 0) {
> +		dev_err(hy_drv_priv->dev,
> +			"failed to initialize sysfs\n");
> +		mutex_unlock(&hy_drv_priv->lock);
> +		kfree(hy_drv_priv);
> +		return ret;
> +	}
> +#endif
> +
> +	if (hy_drv_priv->bknd_ops->init) {
> +		ret = hy_drv_priv->bknd_ops->init();
> +
> +		if (ret < 0) {
> +			dev_dbg(hy_drv_priv->dev,
> +				"failed to initialize backend.\n");
> +			mutex_unlock(&hy_drv_priv->lock);
> +			kfree(hy_drv_priv);
unregister sysfs?
> +			return ret;
> +		}
> +	}
> +
> +	hy_drv_priv->domid = hy_drv_priv->bknd_ops->get_vm_id();
> +
This seems to be a bit inconsistent, e.g. domid vs vm_id
> +	ret = hy_drv_priv->bknd_ops->init_comm_env();
> +	if (ret < 0) {
> +		dev_dbg(hy_drv_priv->dev,
> +			"failed to initialize comm-env.\n");
bknd_ops->cleanup?
> +	} else {
> +		hy_drv_priv->initialized = true;
> +	}
> +
> +	mutex_unlock(&hy_drv_priv->lock);
> +
> +	dev_info(hy_drv_priv->dev,
> +		"Finishing up initialization of hyper_dmabuf drv\n");
> +
> +	/* interrupt for comm should be registered here: */
> +	return ret;
> +}
> +
> +static void hyper_dmabuf_drv_exit(void)
__exit?
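I.e.:

	static void __exit hyper_dmabuf_drv_exit(void)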
> +{
> +#ifdef CONFIG_HYPER_DMABUF_SYSFS
> +	hyper_dmabuf_unregister_sysfs(hy_drv_priv->dev);
> +#endif
> +
> +	mutex_lock(&hy_drv_priv->lock);
> +
> +	/* hash tables for export/import entries and ring_infos */
> +	hyper_dmabuf_table_destroy();
> +
> +	hy_drv_priv->bknd_ops->destroy_comm();
> +
> +	if (hy_drv_priv->bknd_ops->cleanup) {
> +		hy_drv_priv->bknd_ops->cleanup();
> +	};
> +
> +	/* destroy workqueue */
> +	if (hy_drv_priv->work_queue)
> +		destroy_workqueue(hy_drv_priv->work_queue);
> +
> +	/* destroy id_queue */
> +	if (hy_drv_priv->id_queue)
> +		hyper_dmabuf_free_hid_list();
> +
> +	mutex_unlock(&hy_drv_priv->lock);
> +
> +	dev_info(hy_drv_priv->dev,
> +		 "hyper_dmabuf driver: Exiting\n");
> +
> +	kfree(hy_drv_priv);
> +
> +	unregister_device();
> +}
> +
> +module_init(hyper_dmabuf_drv_init);
> +module_exit(hyper_dmabuf_drv_exit);
> diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h
> new file mode 100644
> index 000000000000..46119d762430
> --- /dev/null
> +++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.h
> @@ -0,0 +1,111 @@
> +/*
> + * Copyright © 2018 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + * SPDX-License-Identifier: (MIT OR GPL-2.0)
> + *
> + */
> +
> +#ifndef __LINUX_HYPER_DMABUF_DRV_H__
> +#define __LINUX_HYPER_DMABUF_DRV_H__
> +
> +#include <linux/device.h>
> +#include <linux/hyper_dmabuf.h>
> +
> +struct hyper_dmabuf_req;
> +
> +struct hyper_dmabuf_private {
> +	struct device *dev;
> +
> +	/* VM(domain) id of current VM instance */
> +	int domid;
> +
> +	/* workqueue dedicated to hyper_dmabuf driver */
> +	struct workqueue_struct *work_queue;
> +
> +	/* list of reusable hyper_dmabuf_ids */
> +	struct list_reusable_id *id_queue;
> +
> +	/* backend ops - hypervisor specific */
> +	struct hyper_dmabuf_bknd_ops *bknd_ops;
> +
> +	/* device global lock */
> +	/* TODO: might need a lock per resource (e.g. EXPORT LIST) */
> +	struct mutex lock;
> +
> +	/* flag that shows whether backend is initialized */
> +	bool initialized;
> +
> +	/* # of pending events */
> +	int pending;
> +};
> +
> +struct list_reusable_id {
> +	hyper_dmabuf_id_t hid;
> +	struct list_head list;
> +};
> +
> +struct hyper_dmabuf_bknd_ops {
> +	/* backend initialization routine (optional) */
> +	int (*init)(void);
> +
> +	/* backend cleanup routine (optional) */
> +	int (*cleanup)(void);
> +
> +	/* retrieving id of current virtual machine */
> +	int (*get_vm_id)(void);
> +
> +	/* get pages shared via hypervisor-specific method */
> +	int (*share_pages)(struct page **pages, int vm_id,
> +			   int nents, void **refs_info);
> +
> +	/* make shared pages unshared via hypervisor specific method */
> +	int (*unshare_pages)(void **refs_info, int nents);
> +
> +	/* map remotely shared pages on importer's side via
> +	 * hypervisor-specific method
> +	 */
> +	struct page ** (*map_shared_pages)(unsigned long ref, int vm_id,
> +					   int nents, void **refs_info);
> +
> +	/* unmap and free shared pages on importer's side via
> +	 * hypervisor-specific method
> +	 */
> +	int (*unmap_shared_pages)(void **refs_info, int nents);
> +
> +	/* initialize communication environment */
> +	int (*init_comm_env)(void);
> +
> +	void (*destroy_comm)(void);
> +
> +	/* upstream ch setup (receiving and responding) */
> +	int (*init_rx_ch)(int vm_id);
> +
> +	/* downstream ch setup (transmitting and parsing responses) */
> +	int (*init_tx_ch)(int vm_id);
> +
> +	int (*send_req)(int vm_id, struct hyper_dmabuf_req *req, int wait);
> +};
> +
> +/* exporting global drv private info */
> +extern struct hyper_dmabuf_private *hy_drv_priv;
> +
> +#endif /* __LINUX_HYPER_DMABUF_DRV_H__ */
> diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.c
> new file mode 100644
> index 000000000000..f2e994a4957d
> --- /dev/null
> +++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.c
> @@ -0,0 +1,135 @@
> +/*
> + * Copyright © 2018 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + * SPDX-License-Identifier: (MIT OR GPL-2.0)
> + *
> + * Authors:
> + *    Dongwon Kim <dongwon.kim@intel.com>
> + *    Mateusz Polrola <mateuszx.potrola@intel.com>
> + *
> + */
> +
> +#include <linux/list.h>
> +#include <linux/slab.h>
> +#include <linux/random.h>
> +#include "hyper_dmabuf_drv.h"
> +#include "hyper_dmabuf_id.h"
> +
Common notes:
- I think even if hy_drv_priv is global, you shouldn't touch it
directly; pass it as a function parameter instead.
- Don't you need to protect the reusable list with a lock?
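E.g. (sketch; "hid_lock" would be a new lock in hyper_dmabuf_private):

	spin_lock(&hy_drv_priv->hid_lock);
	list_add(&new_reusable->list, &reusable_head->list);
	spin_unlock(&hy_drv_priv->hid_lock);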
> +void hyper_dmabuf_store_hid(hyper_dmabuf_id_t hid)
> +{
> +	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
> +	struct list_reusable_id *new_reusable;
> +
> +	new_reusable = kmalloc(sizeof(*new_reusable), GFP_KERNEL);
> +
> +	if (!new_reusable)
> +		return;
> +
> +	new_reusable->hid = hid;
> +
> +	list_add(&new_reusable->list, &reusable_head->list);
> +}
> +
> +static hyper_dmabuf_id_t get_reusable_hid(void)
> +{
> +	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
> +	hyper_dmabuf_id_t hid = {-1, {0, 0, 0} };
> +
> +	/* check there is reusable id */
> +	if (!list_empty(&reusable_head->list)) {
> +		reusable_head = list_first_entry(&reusable_head->list,
> +						 struct list_reusable_id,
> +						 list);
> +
> +		list_del(&reusable_head->list);
> +		hid = reusable_head->hid;
> +		kfree(reusable_head);
> +	}
> +
> +	return hid;
> +}
> +
> +void hyper_dmabuf_free_hid_list(void)
> +{
> +	struct list_reusable_id *reusable_head = hy_drv_priv->id_queue;
> +	struct list_reusable_id *temp_head;
> +
> +	if (reusable_head) {
> +		/* freeing mem space all reusable ids in the stack */
> +		while (!list_empty(&reusable_head->list)) {
> +			temp_head = list_first_entry(&reusable_head->list,
> +						     struct list_reusable_id,
> +						     list);
> +			list_del(&temp_head->list);
> +			kfree(temp_head);
> +		}
> +
> +		/* freeing head */
> +		kfree(reusable_head);
> +	}
> +}
> +
> +hyper_dmabuf_id_t hyper_dmabuf_get_hid(void)
> +{
> +	static int count;
could you please explicitly initialize this?
> +	hyper_dmabuf_id_t hid;
> +	struct list_reusable_id *reusable_head;
> +
> +	/* first call to hyper_dmabuf_get_id */
> +	if (count == 0) {
> +		reusable_head = kmalloc(sizeof(*reusable_head), GFP_KERNEL);
> +
> +		if (!reusable_head)
> +			return (hyper_dmabuf_id_t){-1, {0, 0, 0} };
> +
> +		/* list head has an invalid count */
> +		reusable_head->hid.id = -1;
> +		INIT_LIST_HEAD(&reusable_head->list);
> +		hy_drv_priv->id_queue = reusable_head;
> +	}
> +
> +	hid = get_reusable_hid();
> +
> +	/*creating a new H-ID only if nothing in the reusable id queue
start the comment from a new line
> +	 * and count is less than maximum allowed
> +	 */
> +	if (hid.id == -1 && count < HYPER_DMABUF_ID_MAX)
> +		hid.id = HYPER_DMABUF_ID_CREATE(hy_drv_priv->domid, count++);
> +
> +	/* random data embedded in the id for security */
> +	get_random_bytes(&hid.rng_key[0], 12);
can magic 12 be a defined constant?
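E.g., assuming rng_key stays an int[3]:

	get_random_bytes(&hid.rng_key[0], sizeof(hid.rng_key));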
> +
> +	return hid;
> +}
> +
> +bool hyper_dmabuf_hid_keycomp(hyper_dmabuf_id_t hid1, hyper_dmabuf_id_t hid2)
> +{
> +	int i;
> +
> +	/* compare keys */
> +	for (i = 0; i < 3; i++) {
can magic 3 be defined as a constant please?
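E.g.:

	for (i = 0; i < ARRAY_SIZE(hid1.rng_key); i++) {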
> +		if (hid1.rng_key[i] != hid2.rng_key[i])
> +			return false;
> +	}
> +
> +	return true;
> +}
> diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h
> new file mode 100644
> index 000000000000..11f530e2c8f6
> --- /dev/null
> +++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h
> @@ -0,0 +1,53 @@
> +/*
> + * Copyright © 2018 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + * SPDX-License-Identifier: (MIT OR GPL-2.0)
> + *
> + */
> +
> +#ifndef __HYPER_DMABUF_ID_H__
> +#define __HYPER_DMABUF_ID_H__
> +
> +#define HYPER_DMABUF_ID_CREATE(domid, cnt) \
> +	((((domid) & 0xFF) << 24) | ((cnt) & 0xFFFFFF))
I would define hyper_dmabuf_id_t.id as a union or 2 separate
fields to avoid this magic.
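Sketch only (bitfield order/endianness would need care):

	typedef struct {
		union {
			int id;
			struct {
				unsigned int cnt:24;
				unsigned int domid:8;
			};
		};
		int rng_key[3];
	} hyper_dmabuf_id_t;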
> +
> +#define HYPER_DMABUF_DOM_ID(hid) \
> +	(((hid.id) >> 24) & 0xFF)
> +
> +/* currently maximum number of buffers shared
> + * at any given moment is limited to 1000
> + */
> +#define HYPER_DMABUF_ID_MAX 1000
Why 1000? Is it an arbitrary limit, or is it dictated by some use cases/experiments?
> +
> +/* adding freed hid to the reusable list */
> +void hyper_dmabuf_store_hid(hyper_dmabuf_id_t hid);
> +
> +/* freeing the reusable list */
> +void hyper_dmabuf_free_hid_list(void);
> +
> +/* getting a hid available to use. */
> +hyper_dmabuf_id_t hyper_dmabuf_get_hid(void);
> +
> +/* comparing two different hid */
> +bool hyper_dmabuf_hid_keycomp(hyper_dmabuf_id_t hid1, hyper_dmabuf_id_t hid2);
> +
> +#endif /*__HYPER_DMABUF_ID_H*/
> diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c
> new file mode 100644
> index 000000000000..020a5590a254
> --- /dev/null
> +++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c
> @@ -0,0 +1,672 @@
> +/*
> + * Copyright © 2018 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + * SPDX-License-Identifier: (MIT OR GPL-2.0)
> + *
> + * Authors:
> + *    Dongwon Kim <dongwon.kim@intel.com>
> + *    Mateusz Polrola <mateuszx.potrola@intel.com>
> + *
> + */
> +
> +#include <linux/kernel.h>
> +#include <linux/errno.h>
> +#include <linux/slab.h>
> +#include <linux/uaccess.h>
> +#include <linux/dma-buf.h>
> +#include "hyper_dmabuf_drv.h"
> +#include "hyper_dmabuf_id.h"
> +#include "hyper_dmabuf_struct.h"
> +#include "hyper_dmabuf_ioctl.h"
> +#include "hyper_dmabuf_list.h"
> +#include "hyper_dmabuf_msg.h"
> +#include "hyper_dmabuf_sgl_proc.h"
> +#include "hyper_dmabuf_ops.h"
> +
Here and below: please do not touch global hy_drv_priv
> +static int hyper_dmabuf_tx_ch_setup_ioctl(struct file *filp, void *data)
> +{
> +	struct ioctl_hyper_dmabuf_tx_ch_setup *tx_ch_attr;
> +	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
> +	int ret = 0;
> +
> +	if (!data) {
> +		dev_err(hy_drv_priv->dev, "user data is NULL\n");
> +		return -EINVAL;
> +	}
> +	tx_ch_attr = (struct ioctl_hyper_dmabuf_tx_ch_setup *)data;
> +
> +	ret = bknd_ops->init_tx_ch(tx_ch_attr->remote_domain);
> +
> +	return ret;
> +}
> +
> +static int hyper_dmabuf_rx_ch_setup_ioctl(struct file *filp, void *data)
> +{
> +	struct ioctl_hyper_dmabuf_rx_ch_setup *rx_ch_attr;
> +	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
> +	int ret = 0;
> +
> +	if (!data) {
> +		dev_err(hy_drv_priv->dev, "user data is NULL\n");
> +		return -EINVAL;
> +	}
> +
> +	rx_ch_attr = (struct ioctl_hyper_dmabuf_rx_ch_setup *)data;
> +
> +	ret = bknd_ops->init_rx_ch(rx_ch_attr->source_domain);
> +
> +	return ret;
> +}
> +
> +static int send_export_msg(struct exported_sgt_info *exported,
> +			   struct pages_info *pg_info)
> +{
> +	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
> +	struct hyper_dmabuf_req *req;
> +	int op[MAX_NUMBER_OF_OPERANDS] = {0};
> +	int ret, i;
> +
> +	/* now create request for importer via ring */
> +	op[0] = exported->hid.id;
> +
> +	for (i = 0; i < 3; i++)
> +		op[i+1] = exported->hid.rng_key[i];
> +
> +	if (pg_info) {
heh, can we have well-defined structures for requests/responses,
so we don't have to juggle all these magic indices?
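Something like this, instead of the bare op[] array (sketch only):

	struct hyper_dmabuf_export_msg {
		hyper_dmabuf_id_t hid;
		int nents;
		int frst_ofst;
		int last_len;
		int refs;	/* backend-specific sharing handle */
	};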
> +		op[4] = pg_info->nents;
> +		op[5] = pg_info->frst_ofst;
> +		op[6] = pg_info->last_len;
> +		op[7] = bknd_ops->share_pages(pg_info->pgs, exported->rdomid,
> +					 pg_info->nents, &exported->refs_info);
ret?
> +		if (op[7] < 0) {
> +			dev_err(hy_drv_priv->dev, "pages sharing failed\n");
> +			return op[7];
> +		}
> +	}
> +
> +	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
> +
> +	if (!req)
> +		return -ENOMEM;
> +
> +	/* composing a message to the importer */
> +	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT, &op[0]);
> +
> +	ret = bknd_ops->send_req(exported->rdomid, req, true);
can we allocate req on the stack instead of using kcalloc?
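E.g., assuming struct hyper_dmabuf_req is small enough for the stack:

	struct hyper_dmabuf_req req = {};

	hyper_dmabuf_create_req(&req, HYPER_DMABUF_EXPORT, &op[0]);
	ret = bknd_ops->send_req(exported->rdomid, &req, true);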
> +
> +	kfree(req);
> +
> +	return ret;
> +}
> +
> +/* Fast path exporting routine in case same buffer is already exported.
> + *
> + * If same buffer is still valid and exist in EXPORT LIST it returns 0 so
> + * that remaining normal export process can be skipped.
> + *
> + * If "unexport" is scheduled for the buffer, it cancels it since the buffer
> + * is being re-exported.
> + *
> + * return '1' if reexport is needed, return '0' if succeeds, return
> + * Kernel error code if something goes wrong
> + */
> +static int fastpath_export(hyper_dmabuf_id_t hid)
> +{
> +	int reexport = 1;
> +	int ret = 0;
why do you need these two variables?
> +	struct exported_sgt_info *exported;
> +
> +	exported = hyper_dmabuf_find_exported(hid);
> +
> +	if (!exported)
> +		return reexport;
> +
> +	if (exported->valid == false)
> +		return reexport;
> +
> +	/*
> +	 * Check if unexport is already scheduled for that buffer,
> +	 * if so try to cancel it. If that will fail, buffer needs
> +	 * to be reexport once again.
> +	 */
> +	if (exported->unexport_sched) {
> +		if (!cancel_delayed_work_sync(&exported->unexport))
> +			return reexport;
> +
> +		exported->unexport_sched = false;
> +	}
> +
> +	return ret;
> +}
> +
> +static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
> +{
> +	struct ioctl_hyper_dmabuf_export_remote *export_remote_attr =
> +			(struct ioctl_hyper_dmabuf_export_remote *)data;
> +	struct dma_buf *dma_buf;
> +	struct dma_buf_attachment *attachment;
> +	struct sg_table *sgt;
> +	struct pages_info *pg_info;
> +	struct exported_sgt_info *exported;
> +	hyper_dmabuf_id_t hid;
> +	int ret = 0;
> +
> +	if (hy_drv_priv->domid == export_remote_attr->remote_domain) {
> +		dev_err(hy_drv_priv->dev,
> +			"exporting to the same VM is not permitted\n");
> +		return -EINVAL;
> +	}
> +
> +	dma_buf = dma_buf_get(export_remote_attr->dmabuf_fd);
> +
> +	if (IS_ERR(dma_buf)) {
> +		dev_err(hy_drv_priv->dev, "Cannot get dma buf\n");
> +		return PTR_ERR(dma_buf);
> +	}
> +
> +	/* we check if this specific attachment was already exported
> +	 * to the same domain and if yes and it's valid sgt_info,
> +	 * it returns hyper_dmabuf_id of pre-exported sgt_info
> +	 */
> +	hid = hyper_dmabuf_find_hid_exported(dma_buf,
> +					     export_remote_attr->remote_domain);
> +
> +	if (hid.id != -1) {
> +		ret = fastpath_export(hid);
> +
> +		/* return if fastpath_export succeeds or
> +		 * gets some fatal error
> +		 */
> +		if (ret <= 0) {
> +			dma_buf_put(dma_buf);
> +			export_remote_attr->hid = hid;
> +			return ret;
> +		}
> +	}
> +
> +	attachment = dma_buf_attach(dma_buf, hy_drv_priv->dev);
> +	if (IS_ERR(attachment)) {
> +		dev_err(hy_drv_priv->dev, "cannot get attachment\n");
> +		ret = PTR_ERR(attachment);
Here and below: if you got the dma-buf via the fastpath, don't you
need to release/handle it on the error path here? E.g. the fastpath
may have canceled the unexport work for this buffer.

> +		goto fail_attach;
> +	}
> +
> +	sgt = dma_buf_map_attachment(attachment, DMA_BIDIRECTIONAL);
> +
> +	if (IS_ERR(sgt)) {
> +		dev_err(hy_drv_priv->dev, "cannot map attachment\n");
> +		ret = PTR_ERR(sgt);
> +		goto fail_map_attachment;
> +	}
> +
> +	exported = kcalloc(1, sizeof(*exported), GFP_KERNEL);
> +
> +	if (!exported) {
> +		ret = -ENOMEM;
> +		goto fail_sgt_info_creation;
> +	}
> +
> +	exported->hid = hyper_dmabuf_get_hid();
> +
> +	/* no more exported dmabuf allowed */
> +	if (exported->hid.id == -1) {
> +		dev_err(hy_drv_priv->dev,
> +			"exceeds allowed number of dmabuf to be exported\n");
> +		ret = -ENOMEM;
> +		goto fail_sgt_info_creation;
> +	}
> +
> +	exported->rdomid = export_remote_attr->remote_domain;
> +	exported->dma_buf = dma_buf;
> +	exported->valid = true;
> +
> +	exported->active_sgts = kmalloc(sizeof(struct sgt_list), GFP_KERNEL);
> +	if (!exported->active_sgts) {
> +		ret = -ENOMEM;
> +		goto fail_map_active_sgts;
> +	}
> +
> +	exported->active_attached = kmalloc(sizeof(struct attachment_list),
> +					    GFP_KERNEL);
> +	if (!exported->active_attached) {
> +		ret = -ENOMEM;
> +		goto fail_map_active_attached;
> +	}
> +
> +	exported->va_kmapped = kmalloc(sizeof(struct kmap_vaddr_list),
> +				       GFP_KERNEL);
> +	if (!exported->va_kmapped) {
> +		ret = -ENOMEM;
> +		goto fail_map_va_kmapped;
> +	}
> +
> +	exported->va_vmapped = kmalloc(sizeof(struct vmap_vaddr_list),
> +				       GFP_KERNEL);
> +	if (!exported->va_vmapped) {
> +		ret = -ENOMEM;
> +		goto fail_map_va_vmapped;
> +	}
> +
> +	exported->active_sgts->sgt = sgt;
> +	exported->active_attached->attach = attachment;
> +	exported->va_kmapped->vaddr = NULL;
> +	exported->va_vmapped->vaddr = NULL;
> +
> +	/* initialize list of sgt, attachment and vaddr for dmabuf sync
> +	 * via shadow dma-buf
> +	 */
> +	INIT_LIST_HEAD(&exported->active_sgts->list);
> +	INIT_LIST_HEAD(&exported->active_attached->list);
> +	INIT_LIST_HEAD(&exported->va_kmapped->list);
> +	INIT_LIST_HEAD(&exported->va_vmapped->list);
> +
> +	if (ret) {
> +		dev_err(hy_drv_priv->dev,
> +			"failed to load private data\n");
> +		ret = -EINVAL;
> +		goto fail_export;
> +	}
> +
> +	pg_info = hyper_dmabuf_ext_pgs(sgt);
> +	if (!pg_info) {
> +		dev_err(hy_drv_priv->dev,
> +			"failed to construct pg_info\n");
> +		ret = -ENOMEM;
> +		goto fail_export;
> +	}
> +
> +	exported->nents = pg_info->nents;
> +
> +	/* now register it to export list */
> +	hyper_dmabuf_register_exported(exported);
> +
> +	export_remote_attr->hid = exported->hid;
> +
> +	ret = send_export_msg(exported, pg_info);
> +
> +	if (ret < 0) {
> +		dev_err(hy_drv_priv->dev,
> +			"failed to send out the export request\n");
> +		goto fail_send_request;
> +	}
> +
> +	/* free pg_info */
> +	kfree(pg_info->pgs);
> +	kfree(pg_info);
> +
> +	exported->filp = filp;
> +
> +	return ret;
> +
> +/* Clean-up if error occurs */
> +
> +fail_send_request:
> +	hyper_dmabuf_remove_exported(exported->hid);
> +
> +	/* free pg_info */
> +	kfree(pg_info->pgs);
> +	kfree(pg_info);
> +
> +fail_export:
> +	kfree(exported->va_vmapped);
> +
> +fail_map_va_vmapped:
> +	kfree(exported->va_kmapped);
> +
> +fail_map_va_kmapped:
> +	kfree(exported->active_attached);
> +
> +fail_map_active_attached:
> +	kfree(exported->active_sgts);
> +	kfree(exported);
> +
> +fail_map_active_sgts:
> +fail_sgt_info_creation:
> +	dma_buf_unmap_attachment(attachment, sgt,
> +				 DMA_BIDIRECTIONAL);
> +
> +fail_map_attachment:
> +	dma_buf_detach(dma_buf, attachment);
> +
> +fail_attach:
> +	dma_buf_put(dma_buf);
> +
> +	return ret;
> +}
> +
> +static int hyper_dmabuf_export_fd_ioctl(struct file *filp, void *data)
> +{
> +	struct ioctl_hyper_dmabuf_export_fd *export_fd_attr =
> +			(struct ioctl_hyper_dmabuf_export_fd *)data;
> +	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
> +	struct imported_sgt_info *imported;
> +	struct hyper_dmabuf_req *req;
> +	struct page **data_pgs;
> +	int op[4];
don't you have hyper_dmabuf_id_t for that?
> +	int i;
> +	int ret = 0;
> +
> +	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
> +
> +	/* look for dmabuf for the id */
> +	imported = hyper_dmabuf_find_imported(export_fd_attr->hid);
> +
> +	/* can't find sgt from the table */
> +	if (!imported) {
> +		dev_err(hy_drv_priv->dev, "can't find the entry\n");
> +		return -ENOENT;
> +	}
> +
> +	mutex_lock(&hy_drv_priv->lock);
> +
> +	imported->importers++;
> +
> +	/* send notification for export_fd to exporter */
> +	op[0] = imported->hid.id;
> +
> +	for (i = 0; i < 3; i++)
> +		op[i+1] = imported->hid.rng_key[i];
> +
> +	dev_dbg(hy_drv_priv->dev, "Export FD of buffer {id:%d key:%d %d %d}\n",
> +		imported->hid.id, imported->hid.rng_key[0],
> +		imported->hid.rng_key[1], imported->hid.rng_key[2]);
> +
> +	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
can req be allocated on the stack here as well?
> +
> +	if (!req) {
> +		mutex_unlock(&hy_drv_priv->lock);
> +		return -ENOMEM;
> +	}
> +
> +	hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD, &op[0]);
> +
> +	ret = bknd_ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid), req, true);
> +
> +	if (ret < 0) {
> +		/* in case of timeout other end eventually will receive request,
> +		 * so we need to undo it
> +		 */
And what if there is a race: what if the corresponding response comes
in just as you delete the buffer?
> +		hyper_dmabuf_create_req(req, HYPER_DMABUF_EXPORT_FD_FAILED,
> +					&op[0]);
> +		bknd_ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid),
> +				   req, false);
> +		kfree(req);
> +		dev_err(hy_drv_priv->dev,
> +			"Failed to create sgt or notify exporter\n");
> +		imported->importers--;
> +		mutex_unlock(&hy_drv_priv->lock);
> +		return ret;
> +	}
> +
> +	kfree(req);
> +
> +	if (ret == HYPER_DMABUF_REQ_ERROR) {
> +		dev_err(hy_drv_priv->dev,
> +			"Buffer invalid {id:%d key:%d %d %d}, cannot import\n",
> +			imported->hid.id, imported->hid.rng_key[0],
> +			imported->hid.rng_key[1], imported->hid.rng_key[2]);
> +
> +		imported->importers--;
> +		mutex_unlock(&hy_drv_priv->lock);
> +		return -EINVAL;
> +	}
> +
> +	ret = 0;
> +
> +	dev_dbg(hy_drv_priv->dev,
> +		"Found buffer gref %d off %d\n",
> +		imported->ref_handle, imported->frst_ofst);
> +
> +	dev_dbg(hy_drv_priv->dev,
> +		"last len %d nents %d domain %d\n",
> +		imported->last_len, imported->nents,
> +		HYPER_DMABUF_DOM_ID(imported->hid));
> +
> +	if (!imported->sgt) {
> +		dev_dbg(hy_drv_priv->dev,
> +			"buffer {id:%d key:%d %d %d} pages not mapped yet\n",
> +			imported->hid.id, imported->hid.rng_key[0],
> +			imported->hid.rng_key[1], imported->hid.rng_key[2]);
> +
> +		data_pgs = bknd_ops->map_shared_pages(imported->ref_handle,
> +					HYPER_DMABUF_DOM_ID(imported->hid),
> +					imported->nents,
> +					&imported->refs_info);
> +
> +		if (!data_pgs) {
> +			dev_err(hy_drv_priv->dev,
> +				"can't map pages hid {id:%d key:%d %d %d}\n",
> +				imported->hid.id, imported->hid.rng_key[0],
> +				imported->hid.rng_key[1],
> +				imported->hid.rng_key[2]);
> +
> +			imported->importers--;
> +
> +			req = kcalloc(1, sizeof(*req), GFP_KERNEL);
> +
> +			if (!req) {
> +				mutex_unlock(&hy_drv_priv->lock);
> +				return -ENOMEM;
> +			}
> +
> +			hyper_dmabuf_create_req(req,
> +						HYPER_DMABUF_EXPORT_FD_FAILED,
> +						&op[0]);
> +
> +			bknd_ops->send_req(HYPER_DMABUF_DOM_ID(imported->hid),
> +					   req, false);
> +			kfree(req);
> +			mutex_unlock(&hy_drv_priv->lock);
> +			return -EINVAL;
> +		}
> +
> +		imported->sgt = hyper_dmabuf_create_sgt(data_pgs,
> +							imported->frst_ofst,
> +							imported->last_len,
> +							imported->nents);
> +
> +	}
> +
> +	export_fd_attr->fd = hyper_dmabuf_export_fd(imported,
> +						    export_fd_attr->flags);
> +
> +	if (export_fd_attr->fd < 0) {
> +		/* fail to get fd */
> +		ret = export_fd_attr->fd;
why don't you send HYPER_DMABUF_EXPORT_FD_FAILED in this case?
> +	}
> +
> +	mutex_unlock(&hy_drv_priv->lock);
> +
> +	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
> +	return ret;
> +}
> +
> +/* unexport dmabuf from the database and send int req to the source domain
> + * to unmap it.
> + */
> +static void delayed_unexport(struct work_struct *work)
> +{
> +	struct hyper_dmabuf_req *req;
> +	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
> +	struct exported_sgt_info *exported =
> +		container_of(work, struct exported_sgt_info, unexport.work);
> +	int op[4];
use the struct defined for this
> +	int i, ret;
> +
> +	if (!exported)
> +		return;
> +
> +	dev_dbg(hy_drv_priv->dev,
> +		"Marking buffer {id:%d key:%d %d %d} as invalid\n",
> +		exported->hid.id, exported->hid.rng_key[0],
> +		exported->hid.rng_key[1], exported->hid.rng_key[2]);
> +
> +	/* no longer valid */
> +	exported->valid = false;
> +
> +	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
> +
> +	if (!req)
will we leak the buffer because we return here?
> +		return;
> +
> +	op[0] = exported->hid.id;
> +
> +	for (i = 0; i < 3; i++)
> +		op[i+1] = exported->hid.rng_key[i];
> +
> +	hyper_dmabuf_create_req(req, HYPER_DMABUF_NOTIFY_UNEXPORT, &op[0]);
> +
> +	/* Now send unexport request to remote domain, marking
> +	 * that buffer should not be used anymore
> +	 */
> +	ret = bknd_ops->send_req(exported->rdomid, req, true);
> +	if (ret < 0) {
> +		dev_err(hy_drv_priv->dev,
> +			"unexport message for buffer {id:%d key:%d %d %d} failed\n",
> +			exported->hid.id, exported->hid.rng_key[0],
> +			exported->hid.rng_key[1], exported->hid.rng_key[2]);
> +	}
> +
> +	kfree(req);
> +	exported->unexport_sched = false;
> +
> +	/* Immediately clean up if it has never been exported by
> +	 * the importer (so no SGT is constructed on the importer side).
> +	 * Otherwise clean it up later in remote sync when the final
> +	 * release op is called (the importer does this only when
> +	 * there's no consumer of locally exported FDs)
> +	 */
> +	if (exported->active == 0) {
> +		dev_dbg(hy_drv_priv->dev,
> +			"cleaning up buffer {id:%d key:%d %d %d} completely\n",
> +			exported->hid.id, exported->hid.rng_key[0],
> +			exported->hid.rng_key[1], exported->hid.rng_key[2]);
> +
> +		hyper_dmabuf_cleanup_sgt_info(exported, false);
> +		hyper_dmabuf_remove_exported(exported->hid);
> +
> +		/* register hyper_dmabuf_id to the list for reuse */
> +		hyper_dmabuf_store_hid(exported->hid);
> +
> +		kfree(exported);
> +	}
> +}
> +
> +/* Schedule unexport of dmabuf.
> + */
> +int hyper_dmabuf_unexport_ioctl(struct file *filp, void *data)
> +{
> +	struct ioctl_hyper_dmabuf_unexport *unexport_attr =
> +			(struct ioctl_hyper_dmabuf_unexport *)data;
> +	struct exported_sgt_info *exported;
> +
> +	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
> +
> +	/* find dmabuf in export list */
> +	exported = hyper_dmabuf_find_exported(unexport_attr->hid);
> +
> +	dev_dbg(hy_drv_priv->dev,
> +		"scheduling unexport of buffer {id:%d key:%d %d %d}\n",
> +		unexport_attr->hid.id, unexport_attr->hid.rng_key[0],
> +		unexport_attr->hid.rng_key[1], unexport_attr->hid.rng_key[2]);
> +
> +	/* failed to find corresponding entry in export list */
> +	if (exported == NULL) {
> +		unexport_attr->status = -ENOENT;
> +		return -ENOENT;
> +	}
> +
> +	if (exported->unexport_sched)
> +		return 0;
> +
> +	exported->unexport_sched = true;
> +	INIT_DELAYED_WORK(&exported->unexport, delayed_unexport);
why can't you just wait for the buffer to be unexported?
> +	schedule_delayed_work(&exported->unexport,
> +			      msecs_to_jiffies(unexport_attr->delay_ms));
> +
> +	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
> +	return 0;
> +}
> +
> +const struct hyper_dmabuf_ioctl_desc hyper_dmabuf_ioctls[] = {
> +	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_TX_CH_SETUP,
> +			       hyper_dmabuf_tx_ch_setup_ioctl, 0),
> +	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_RX_CH_SETUP,
> +			       hyper_dmabuf_rx_ch_setup_ioctl, 0),
> +	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_REMOTE,
> +			       hyper_dmabuf_export_remote_ioctl, 0),
> +	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_EXPORT_FD,
> +			       hyper_dmabuf_export_fd_ioctl, 0),
> +	HYPER_DMABUF_IOCTL_DEF(IOCTL_HYPER_DMABUF_UNEXPORT,
> +			       hyper_dmabuf_unexport_ioctl, 0),
> +};
> +
> +long hyper_dmabuf_ioctl(struct file *filp,
> +			unsigned int cmd, unsigned long param)
> +{
> +	const struct hyper_dmabuf_ioctl_desc *ioctl = NULL;
> +	unsigned int nr = _IOC_NR(cmd);
> +	int ret;
> +	hyper_dmabuf_ioctl_t func;
> +	char *kdata;
> +
> +	if (nr >= ARRAY_SIZE(hyper_dmabuf_ioctls)) {
> +		dev_err(hy_drv_priv->dev, "invalid ioctl\n");
> +		return -EINVAL;
> +	}
> +
> +	ioctl = &hyper_dmabuf_ioctls[nr];
> +
> +	func = ioctl->func;
> +
> +	if (unlikely(!func)) {
> +		dev_err(hy_drv_priv->dev, "no function\n");
> +		return -EINVAL;
> +	}
> +
> +	kdata = kmalloc(_IOC_SIZE(cmd), GFP_KERNEL);
> +	if (!kdata)
> +		return -ENOMEM;
> +
> +	if (copy_from_user(kdata, (void __user *)param,
> +			   _IOC_SIZE(cmd)) != 0) {
> +		dev_err(hy_drv_priv->dev,
> +			"failed to copy from user arguments\n");
> +		ret = -EFAULT;
> +		goto ioctl_error;
> +	}
> +
> +	ret = func(filp, kdata);
> +
> +	if (copy_to_user((void __user *)param, kdata,
> +			 _IOC_SIZE(cmd)) != 0) {
> +		dev_err(hy_drv_priv->dev,
> +			"failed to copy to user arguments\n");
> +		ret = -EFAULT;
> +		goto ioctl_error;
> +	}
> +
> +ioctl_error:
> +	kfree(kdata);
> +
> +	return ret;
> +}
> diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.h
> new file mode 100644
> index 000000000000..d8090900ffa2
> --- /dev/null
> +++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.h
> @@ -0,0 +1,52 @@
> +/*
> + * Copyright © 2018 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + * SPDX-License-Identifier: (MIT OR GPL-2.0)
> + *
> + */
> +
> +#ifndef __HYPER_DMABUF_IOCTL_H__
> +#define __HYPER_DMABUF_IOCTL_H__
> +
> +typedef int (*hyper_dmabuf_ioctl_t)(struct file *filp, void *data);
> +
> +struct hyper_dmabuf_ioctl_desc {
> +	unsigned int cmd;
> +	int flags;
> +	hyper_dmabuf_ioctl_t func;
> +	const char *name;
> +};
> +
> +#define HYPER_DMABUF_IOCTL_DEF(ioctl, _func, _flags)	\
> +	[_IOC_NR(ioctl)] = {				\
> +			.cmd = ioctl,			\
> +			.func = _func,			\
> +			.flags = _flags,		\
> +			.name = #ioctl			\
> +	}
> +
> +long hyper_dmabuf_ioctl(struct file *filp,
> +			unsigned int cmd, unsigned long param);
> +
> +int hyper_dmabuf_unexport_ioctl(struct file *filp, void *data);
> +
> +#endif //__HYPER_DMABUF_IOCTL_H__
> diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c
> new file mode 100644
> index 000000000000..f2f65a8ec47f
> --- /dev/null
> +++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.c
> @@ -0,0 +1,294 @@
> +/*
> + * Copyright © 2018 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + * SPDX-License-Identifier: (MIT OR GPL-2.0)
> + *
> + * Authors:
> + *    Dongwon Kim <dongwon.kim@intel.com>
> + *    Mateusz Polrola <mateuszx.potrola@intel.com>
> + *
> + */
> +
> +#include <linux/kernel.h>
> +#include <linux/errno.h>
> +#include <linux/slab.h>
> +#include <linux/cdev.h>
> +#include <linux/hashtable.h>
> +#include "hyper_dmabuf_drv.h"
> +#include "hyper_dmabuf_list.h"
> +#include "hyper_dmabuf_id.h"
> +
> +DECLARE_HASHTABLE(hyper_dmabuf_hash_imported, MAX_ENTRY_IMPORTED);
> +DECLARE_HASHTABLE(hyper_dmabuf_hash_exported, MAX_ENTRY_EXPORTED);
> +
> +#ifdef CONFIG_HYPER_DMABUF_SYSFS
> +static ssize_t hyper_dmabuf_imported_show(struct device *drv,
> +					  struct device_attribute *attr,
> +					  char *buf)
> +{
> +	struct list_entry_imported *info_entry;
> +	int bkt;
> +	ssize_t count = 0;
> +	size_t total = 0;
> +
> +	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node) {
> +		hyper_dmabuf_id_t hid = info_entry->imported->hid;
> +		int nents = info_entry->imported->nents;
> +		bool valid = info_entry->imported->valid;
> +		int num_importers = info_entry->imported->importers;
> +
> +		total += nents;
> +		count += scnprintf(buf + count, PAGE_SIZE - count,
> +				"hid:{%d %d %d %d}, nent:%d, v:%c, numi:%d\n",
> +				hid.id, hid.rng_key[0], hid.rng_key[1],
> +				hid.rng_key[2], nents, (valid ? 't' : 'f'),
> +				num_importers);
> +	}
> +	count += scnprintf(buf + count, PAGE_SIZE - count,
> +			   "total nents: %lu\n", total);
> +
> +	return count;
> +}
> +
> +static ssize_t hyper_dmabuf_exported_show(struct device *drv,
> +					  struct device_attribute *attr,
> +					  char *buf)
> +{
> +	struct list_entry_exported *info_entry;
> +	int bkt;
> +	ssize_t count = 0;
> +	size_t total = 0;
> +
> +	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node) {
> +		hyper_dmabuf_id_t hid = info_entry->exported->hid;
> +		int nents = info_entry->exported->nents;
> +		bool valid = info_entry->exported->valid;
> +		int importer_exported = info_entry->exported->active;
> +
> +		total += nents;
> +		count += scnprintf(buf + count, PAGE_SIZE - count,
> +				   "hid:{%d %d %d %d}, nent:%d, v:%c, ie:%d\n",
> +				   hid.id, hid.rng_key[0], hid.rng_key[1],
> +				   hid.rng_key[2], nents, (valid ? 't' : 'f'),
> +				   importer_exported);
> +	}
> +	count += scnprintf(buf + count, PAGE_SIZE - count,
> +			   "total nents: %lu\n", total);
> +
> +	return count;
> +}
> +
> +static DEVICE_ATTR(imported, 0400, hyper_dmabuf_imported_show, NULL);
> +static DEVICE_ATTR(exported, 0400, hyper_dmabuf_exported_show, NULL);
> +
> +int hyper_dmabuf_register_sysfs(struct device *dev)
> +{
> +	int err;
> +
> +	err = device_create_file(dev, &dev_attr_imported);
> +	if (err < 0)
> +		goto err1;
> +	err = device_create_file(dev, &dev_attr_exported);
> +	if (err < 0)
> +		goto err2;
> +
> +	return 0;
> +err2:
> +	device_remove_file(dev, &dev_attr_imported);
> +err1:
> +	return -1;
> +}
> +
> +int hyper_dmabuf_unregister_sysfs(struct device *dev)
> +{
> +	device_remove_file(dev, &dev_attr_imported);
> +	device_remove_file(dev, &dev_attr_exported);
> +	return 0;
> +}
> +
> +#endif
> +
> +int hyper_dmabuf_table_init(void)
> +{
> +	hash_init(hyper_dmabuf_hash_imported);
> +	hash_init(hyper_dmabuf_hash_exported);
> +	return 0;
> +}
> +
> +int hyper_dmabuf_table_destroy(void)
> +{
> +	/* TODO: cleanup hyper_dmabuf_hash_imported
> +	 * and hyper_dmabuf_hash_exported
> +	 */
> +	return 0;
> +}
> +
> +int hyper_dmabuf_register_exported(struct exported_sgt_info *exported)
> +{
> +	struct list_entry_exported *info_entry;
> +
> +	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
> +
> +	if (!info_entry)
> +		return -ENOMEM;
> +
> +	info_entry->exported = exported;
> +
> +	hash_add(hyper_dmabuf_hash_exported, &info_entry->node,
> +		 info_entry->exported->hid.id);
> +
> +	return 0;
> +}
> +
> +int hyper_dmabuf_register_imported(struct imported_sgt_info *imported)
> +{
> +	struct list_entry_imported *info_entry;
> +
> +	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
> +
> +	if (!info_entry)
> +		return -ENOMEM;
> +
> +	info_entry->imported = imported;
> +
> +	hash_add(hyper_dmabuf_hash_imported, &info_entry->node,
> +		 info_entry->imported->hid.id);
> +
> +	return 0;
> +}
> +
> +struct exported_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid)
> +{
> +	struct list_entry_exported *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
> +		/* checking hid.id first */
> +		if (info_entry->exported->hid.id == hid.id) {
> +			/* then key is compared */
> +			if (hyper_dmabuf_hid_keycomp(info_entry->exported->hid,
> +						    hid))
> +				return info_entry->exported;
> +
> +			/* if key is unmatched, given HID is invalid,
> +			 * so returning NULL
> +			 */
> +			break;
> +		}
> +
> +	return NULL;
> +}
> +
> +/* search for pre-exported sgt and return its id if it exists */
> +hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf,
> +						 int domid)
> +{
> +	struct list_entry_exported *info_entry;
> +	hyper_dmabuf_id_t hid = {-1, {0, 0, 0} };
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
> +		if (info_entry->exported->dma_buf == dmabuf &&
> +		    info_entry->exported->rdomid == domid)
> +			return info_entry->exported->hid;
> +
> +	return hid;
> +}
> +
> +struct imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_id_t hid)
> +{
> +	struct list_entry_imported *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
> +		/* checking hid.id first */
> +		if (info_entry->imported->hid.id == hid.id) {
> +			/* then key is compared */
> +			if (hyper_dmabuf_hid_keycomp(info_entry->imported->hid,
> +						    hid))
> +				return info_entry->imported;
> +			/* if key is unmatched, given HID is invalid,
> +			 * so returning NULL
> +			 */
> +			break;
> +		}
> +
> +	return NULL;
> +}
> +
> +int hyper_dmabuf_remove_exported(hyper_dmabuf_id_t hid)
> +{
> +	struct list_entry_exported *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_exported, bkt, info_entry, node)
> +		/* checking hid.id first */
> +		if (info_entry->exported->hid.id == hid.id) {
> +			/* then key is compared */
> +			if (hyper_dmabuf_hid_keycomp(info_entry->exported->hid,
> +						    hid)) {
> +				hash_del(&info_entry->node);
> +				kfree(info_entry);
> +				return 0;
> +			}
> +
> +			break;
> +		}
> +
> +	return -ENOENT;
> +}
> +
> +int hyper_dmabuf_remove_imported(hyper_dmabuf_id_t hid)
> +{
> +	struct list_entry_imported *info_entry;
> +	int bkt;
> +
> +	hash_for_each(hyper_dmabuf_hash_imported, bkt, info_entry, node)
> +		/* checking hid.id first */
> +		if (info_entry->imported->hid.id == hid.id) {
> +			/* then key is compared */
> +			if (hyper_dmabuf_hid_keycomp(info_entry->imported->hid,
> +						    hid)) {
> +				hash_del(&info_entry->node);
> +				kfree(info_entry);
> +				return 0;
> +			}
> +
> +			break;
> +		}
> +
> +	return -ENOENT;
> +}
> +
> +void hyper_dmabuf_foreach_exported(
> +	void (*func)(struct exported_sgt_info *, void *attr),
> +	void *attr)
> +{
> +	struct list_entry_exported *info_entry;
> +	struct hlist_node *tmp;
> +	int bkt;
> +
> +	hash_for_each_safe(hyper_dmabuf_hash_exported, bkt, tmp,
> +			info_entry, node) {
> +		func(info_entry->exported, attr);
> +	}
> +}
> diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.h
> new file mode 100644
> index 000000000000..3c6a23ef80c6
> --- /dev/null
> +++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_list.h
> @@ -0,0 +1,73 @@
> +/*
> + * Copyright © 2018 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + * SPDX-License-Identifier: (MIT OR GPL-2.0)
> + *
> + */
> +
> +#ifndef __HYPER_DMABUF_LIST_H__
> +#define __HYPER_DMABUF_LIST_H__
> +
> +#include "hyper_dmabuf_struct.h"
> +
> +/* number of bits to be used for exported dmabufs hash table */
> +#define MAX_ENTRY_EXPORTED 7
> +/* number of bits to be used for imported dmabufs hash table */
> +#define MAX_ENTRY_IMPORTED 7
> +
> +struct list_entry_exported {
> +	struct exported_sgt_info *exported;
> +	struct hlist_node node;
> +};
> +
> +struct list_entry_imported {
> +	struct imported_sgt_info *imported;
> +	struct hlist_node node;
> +};
> +
> +int hyper_dmabuf_table_init(void);
> +
> +int hyper_dmabuf_table_destroy(void);
> +
> +int hyper_dmabuf_register_exported(struct exported_sgt_info *info);
> +
> +/* search for pre-exported sgt and return its id if it exists */
> +hyper_dmabuf_id_t hyper_dmabuf_find_hid_exported(struct dma_buf *dmabuf,
> +						 int domid);
> +
> +int hyper_dmabuf_register_imported(struct imported_sgt_info *info);
> +
> +struct exported_sgt_info *hyper_dmabuf_find_exported(hyper_dmabuf_id_t hid);
> +
> +struct imported_sgt_info *hyper_dmabuf_find_imported(hyper_dmabuf_id_t hid);
> +
> +int hyper_dmabuf_remove_exported(hyper_dmabuf_id_t hid);
> +
> +int hyper_dmabuf_remove_imported(hyper_dmabuf_id_t hid);
> +
> +void hyper_dmabuf_foreach_exported(void (*func)(struct exported_sgt_info *,
> +				   void *attr), void *attr);
> +
> +int hyper_dmabuf_register_sysfs(struct device *dev);
> +int hyper_dmabuf_unregister_sysfs(struct device *dev);
> +
> +#endif /* __HYPER_DMABUF_LIST_H__ */
> diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c
> new file mode 100644
> index 000000000000..129b2ff2af2b
> --- /dev/null
> +++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c
> @@ -0,0 +1,320 @@
> +/*
> + * Copyright © 2018 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + * SPDX-License-Identifier: (MIT OR GPL-2.0)
> + *
> + * Authors:
> + *    Dongwon Kim <dongwon.kim@intel.com>
> + *    Mateusz Polrola <mateuszx.potrola@intel.com>
> + *
> + */
> +
> +#include <linux/kernel.h>
> +#include <linux/errno.h>
> +#include <linux/slab.h>
> +#include <linux/workqueue.h>
> +#include "hyper_dmabuf_drv.h"
> +#include "hyper_dmabuf_msg.h"
> +#include "hyper_dmabuf_list.h"
> +
> +struct cmd_process {
> +	struct work_struct work;
> +	struct hyper_dmabuf_req *rq;
> +	int domid;
> +};
> +
> +void hyper_dmabuf_create_req(struct hyper_dmabuf_req *req,
> +			     enum hyper_dmabuf_command cmd, int *op)
Can we have structures for all the types of requests/responses
defined in some protocol header file, so we avoid the hardcoding?
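E.g. a minimal sketch of what I mean (the header name and struct names
are mine, just to illustrate):

	/* hypothetical hyper_dmabuf_proto.h, shared by both sides */
	struct hyper_dmabuf_export_req {
		hyper_dmabuf_id_t hid;	/* replaces op0..op3 */
		unsigned int nents;	/* op4 */
		unsigned int frst_ofst;	/* op5 */
		unsigned int last_len;	/* op6 */
		unsigned int grefid;	/* op7 */
	};

	struct hyper_dmabuf_unexport_req {
		hyper_dmabuf_id_t hid;
	};

This way both ends share a single definition and the opN indices go
away.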
> +{
> +	int i;
> +
> +	req->stat = HYPER_DMABUF_REQ_NOT_RESPONDED;
> +	req->cmd = cmd;
> +
> +	switch (cmd) {
> +	/* as exporter, commands to importer */
> +	case HYPER_DMABUF_EXPORT:
> +		/* exporting pages for dmabuf */
> +		/* command : HYPER_DMABUF_EXPORT,
> +		 * op0~op3 : hyper_dmabuf_id
> +		 * op4 : number of pages to be shared
> +		 * op5 : offset of data in the first page
> +		 * op6 : length of data in the last page
> +		 * op7 : top-level reference number for shared pages
> +		 */
> +
> +		memcpy(&req->op[0], &op[0], 8 * sizeof(int) + op[8]);
> +		break;
> +
> +	case HYPER_DMABUF_NOTIFY_UNEXPORT:
> +		/* destroy sg_list for hyper_dmabuf_id on remote side */
> +		/* command : HYPER_DMABUF_NOTIFY_UNEXPORT,
> +		 * op0~op3 : hyper_dmabuf_id_t hid
> +		 */
> +
> +		for (i = 0; i < 4; i++)
> +			req->op[i] = op[i];
> +		break;
> +
> +	case HYPER_DMABUF_EXPORT_FD:
> +	case HYPER_DMABUF_EXPORT_FD_FAILED:
> +		/* dmabuf fd is being created on the importer side, or
> +		 * importing failed
> +		 *
> +		 * command : HYPER_DMABUF_EXPORT_FD or
> +		 *	     HYPER_DMABUF_EXPORT_FD_FAILED,
> +		 * op0~op3 : hyper_dmabuf_id
> +		 */
> +
> +		for (i = 0; i < 4; i++)
> +			req->op[i] = op[i];
> +		break;
> +
> +	default:
> +		/* no command found */
> +		return;
> +	}
> +}
> +
> +static void cmd_process_work(struct work_struct *work)
> +{
> +	struct imported_sgt_info *imported;
> +	struct cmd_process *proc = container_of(work,
> +						struct cmd_process, work);
> +	struct hyper_dmabuf_req *req;
> +	int domid;
> +	int i;
> +
> +	req = proc->rq;
> +	domid = proc->domid;
> +
> +	switch (req->cmd) {
> +	case HYPER_DMABUF_EXPORT:
> +		/* exporting pages for dmabuf */
> +		/* command : HYPER_DMABUF_EXPORT,
> +		 * op0~op3 : hyper_dmabuf_id
> +		 * op4 : number of pages to be shared
> +		 * op5 : offset of data in the first page
> +		 * op6 : length of data in the last page
> +		 * op7 : top-level reference number for shared pages
> +		 */
> +
> +		/* if nents == 0, this message is only for priv
> +		 * synchronization of an existing imported_sgt_info,
> +		 * so a new one is not created
> +		 */
> +		if (req->op[4] == 0) {
> +			hyper_dmabuf_id_t exist = {req->op[0],
> +						   {req->op[1], req->op[2],
> +						   req->op[3] } };
> +
> +			imported = hyper_dmabuf_find_imported(exist);
> +
> +			if (!imported) {
> +				dev_err(hy_drv_priv->dev,
> +					"Can't find imported sgt_info\n");
> +				break;
> +			}
> +
> +			break;
> +		}
> +
> +		imported = kcalloc(1, sizeof(*imported), GFP_KERNEL);
> +
> +		if (!imported)
> +			break;
> +
> +		imported->hid.id = req->op[0];
> +
> +		for (i = 0; i < 3; i++)
> +			imported->hid.rng_key[i] = req->op[i+1];
> +
> +		imported->nents = req->op[4];
> +		imported->frst_ofst = req->op[5];
> +		imported->last_len = req->op[6];
> +		imported->ref_handle = req->op[7];
> +
> +		dev_dbg(hy_drv_priv->dev, "DMABUF was exported\n");
> +		dev_dbg(hy_drv_priv->dev, "\thid{id:%d key:%d %d %d}\n",
> +			req->op[0], req->op[1], req->op[2],
> +			req->op[3]);
> +		dev_dbg(hy_drv_priv->dev, "\tnents %d\n", req->op[4]);
> +		dev_dbg(hy_drv_priv->dev, "\tfirst offset %d\n", req->op[5]);
> +		dev_dbg(hy_drv_priv->dev, "\tlast len %d\n", req->op[6]);
> +		dev_dbg(hy_drv_priv->dev, "\tgrefid %d\n", req->op[7]);
> +
Heh, and what if you have to insert something at index 1, for example?
You'll end up changing all the hardcoded indices...
Please have the protocol and its constants, structures etc. defined
in one shared header.
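If you do keep the flat op[] array, then at least name the indices,
e.g. (sketch, the names are mine):

	enum hyper_dmabuf_export_op_idx {
		HDB_OP_HID_ID,		/* 0 */
		HDB_OP_HID_KEY0,	/* 1 */
		HDB_OP_HID_KEY1,	/* 2 */
		HDB_OP_HID_KEY2,	/* 3 */
		HDB_OP_NENTS,		/* 4 */
		HDB_OP_FRST_OFST,	/* 5 */
		HDB_OP_LAST_LEN,	/* 6 */
		HDB_OP_GREFID,		/* 7 */
	};

so inserting a field means touching one enum instead of every
hardcoded index.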
> +		imported->valid = true;
> +		hyper_dmabuf_register_imported(imported);
> +
> +		break;
> +
> +	default:
> +		/* shouldn't get here */
> +		break;
> +	}
> +
> +	kfree(req);
> +	kfree(proc);
> +}
> +
> +int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req)
This seems to be a hyper_dmabuf_msg_*handle* rather than parse...
> +{
> +	struct cmd_process *proc;
> +	struct hyper_dmabuf_req *temp_req;
> +	struct imported_sgt_info *imported;
> +	struct exported_sgt_info *exported;
> +	hyper_dmabuf_id_t hid;
> +
> +	if (!req) {
> +		dev_err(hy_drv_priv->dev, "request is NULL\n");
> +		return -EINVAL;
> +	}
> +
> +	hid.id = req->op[0];
> +	hid.rng_key[0] = req->op[1];
> +	hid.rng_key[1] = req->op[2];
> +	hid.rng_key[2] = req->op[3];
> +
> +	if ((req->cmd < HYPER_DMABUF_EXPORT) ||
> +		(req->cmd > HYPER_DMABUF_NOTIFY_UNEXPORT)) {
> +		dev_err(hy_drv_priv->dev, "invalid command\n");
> +		return -EINVAL;
> +	}
> +
> +	req->stat = HYPER_DMABUF_REQ_PROCESSED;
> +
> +	/* HYPER_DMABUF_DESTROY requires immediate
> +	 * follow up so can't be processed in workqueue
> +	 */
> +	if (req->cmd == HYPER_DMABUF_NOTIFY_UNEXPORT) {
> +		/* destroy sg_list for hyper_dmabuf_id on remote side */
> +		/* command : HYPER_DMABUF_NOTIFY_UNEXPORT,
> +		 * op0~3 : hyper_dmabuf_id
> +		 */
> +		dev_dbg(hy_drv_priv->dev,
> +			"processing HYPER_DMABUF_NOTIFY_UNEXPORT\n");
> +
> +		imported = hyper_dmabuf_find_imported(hid);
> +
> +		if (imported) {
> +			/* if anything is still using dma_buf */
> +			if (imported->importers) {
> +			/* Buffer is still in use, just mark that
> +				 * it should not be allowed to export its fd
> +				 * anymore.
> +				 */
> +				imported->valid = false;
> +			} else {
> +				/* No one is using buffer, remove it from
> +				 * imported list
> +				 */
> +				hyper_dmabuf_remove_imported(hid);
> +				kfree(imported);
> +			}
> +		} else {
> +			req->stat = HYPER_DMABUF_REQ_ERROR;
> +		}
> +
> +		return req->cmd;
> +	}
> +
> +	/* synchronous dma_buf_fd export */
> +	if (req->cmd == HYPER_DMABUF_EXPORT_FD) {
> +		/* find a corresponding SGT for the id */
> +		dev_dbg(hy_drv_priv->dev,
> +			"HYPER_DMABUF_EXPORT_FD for {id:%d key:%d %d %d}\n",
> +			hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
> +
> +		exported = hyper_dmabuf_find_exported(hid);
> +
> +		if (!exported) {
> +			dev_err(hy_drv_priv->dev,
> +				"buffer {id:%d key:%d %d %d} not found\n",
> +				hid.id, hid.rng_key[0], hid.rng_key[1],
> +				hid.rng_key[2]);
> +
> +			req->stat = HYPER_DMABUF_REQ_ERROR;
> +		} else if (!exported->valid) {
> +			dev_dbg(hy_drv_priv->dev,
> +				"Buffer no longer valid {id:%d key:%d %d %d}\n",
> +				hid.id, hid.rng_key[0], hid.rng_key[1],
> +				hid.rng_key[2]);
> +
> +			req->stat = HYPER_DMABUF_REQ_ERROR;
> +		} else {
> +			dev_dbg(hy_drv_priv->dev,
> +				"Buffer still valid {id:%d key:%d %d %d}\n",
> +				hid.id, hid.rng_key[0], hid.rng_key[1],
> +				hid.rng_key[2]);
> +
> +			exported->active++;
> +			req->stat = HYPER_DMABUF_REQ_PROCESSED;
> +		}
> +		return req->cmd;
> +	}
> +
> +	if (req->cmd == HYPER_DMABUF_EXPORT_FD_FAILED) {
> +		dev_dbg(hy_drv_priv->dev,
> +			"HYPER_DMABUF_EXPORT_FD_FAILED for {id:%d key:%d %d %d}\n",
> +			hid.id, hid.rng_key[0], hid.rng_key[1], hid.rng_key[2]);
> +
> +		exported = hyper_dmabuf_find_exported(hid);
> +
> +		if (!exported) {
> +			dev_err(hy_drv_priv->dev,
> +				"buffer {id:%d key:%d %d %d} not found\n",
> +				hid.id, hid.rng_key[0], hid.rng_key[1],
> +				hid.rng_key[2]);
> +
> +			req->stat = HYPER_DMABUF_REQ_ERROR;
> +		} else {
> +			exported->active--;
> +			req->stat = HYPER_DMABUF_REQ_PROCESSED;
> +		}
> +		return req->cmd;
> +	}
> +
> +	dev_dbg(hy_drv_priv->dev,
> +		"%s: putting request to workqueue\n", __func__);
> +	temp_req = kmalloc(sizeof(*temp_req), GFP_KERNEL);
> +
> +	if (!temp_req)
> +		return -ENOMEM;
> +
> +	memcpy(temp_req, req, sizeof(*temp_req));
> +
> +	proc = kcalloc(1, sizeof(struct cmd_process), GFP_KERNEL);
> +
> +	if (!proc) {
> +		kfree(temp_req);
> +		return -ENOMEM;
> +	}
> +
> +	proc->rq = temp_req;
> +	proc->domid = domid;
> +
> +	INIT_WORK(&(proc->work), cmd_process_work);
Why do you need to be so asynchronous and schedule a work item for
processing rather than handling the request right here?
> +
> +	queue_work(hy_drv_priv->work_queue, &(proc->work));
> +
> +	return req->cmd;
> +}
> diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h
> new file mode 100644
> index 000000000000..59f1528e9b1e
> --- /dev/null
> +++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h
> @@ -0,0 +1,87 @@
> +/*
> + * Copyright © 2018 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + * SPDX-License-Identifier: (MIT OR GPL-2.0)
> + *
> + */
> +
> +#ifndef __HYPER_DMABUF_MSG_H__
> +#define __HYPER_DMABUF_MSG_H__
> +
> +#define MAX_NUMBER_OF_OPERANDS 8
> +
> +struct hyper_dmabuf_req {
> +	unsigned int req_id;
> +	unsigned int stat;
> +	unsigned int cmd;
> +	unsigned int op[MAX_NUMBER_OF_OPERANDS];
> +};
> +
> +struct hyper_dmabuf_resp {
> +	unsigned int resp_id;
> +	unsigned int stat;
> +	unsigned int cmd;
> +	unsigned int op[MAX_NUMBER_OF_OPERANDS];
> +};
> +
The structures above are 11 * sizeof(int) == 44 bytes in size.
Can these be aligned to 64, for example? From the Xen POV these will be
sent over the shared ring, which is PAGE_SIZE bytes, so 4096 / 44
leaves the entries poorly aligned...
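E.g. a sketch, assuming the extra padding on the wire is acceptable:

	struct hyper_dmabuf_req {
		unsigned int req_id;
		unsigned int stat;
		unsigned int cmd;
		unsigned int op[MAX_NUMBER_OF_OPERANDS];
	} __aligned(64);	/* 44 -> 64 bytes, 64 requests per 4 KiB page */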
> +enum hyper_dmabuf_command {
> +	HYPER_DMABUF_EXPORT = 0x10,
> +	HYPER_DMABUF_EXPORT_FD,
> +	HYPER_DMABUF_EXPORT_FD_FAILED,
> +	HYPER_DMABUF_NOTIFY_UNEXPORT,
> +};
> +
> +enum hyper_dmabuf_ops {
> +	HYPER_DMABUF_OPS_ATTACH = 0x1000,
> +	HYPER_DMABUF_OPS_DETACH,
> +	HYPER_DMABUF_OPS_MAP,
> +	HYPER_DMABUF_OPS_UNMAP,
> +	HYPER_DMABUF_OPS_RELEASE,
> +	HYPER_DMABUF_OPS_BEGIN_CPU_ACCESS,
> +	HYPER_DMABUF_OPS_END_CPU_ACCESS,
> +	HYPER_DMABUF_OPS_KMAP_ATOMIC,
> +	HYPER_DMABUF_OPS_KUNMAP_ATOMIC,
> +	HYPER_DMABUF_OPS_KMAP,
> +	HYPER_DMABUF_OPS_KUNMAP,
> +	HYPER_DMABUF_OPS_MMAP,
> +	HYPER_DMABUF_OPS_VMAP,
> +	HYPER_DMABUF_OPS_VUNMAP,
> +};
> +
> +enum hyper_dmabuf_req_feedback {
This seems to be a status rather than feedback
> +	HYPER_DMABUF_REQ_PROCESSED = 0x100,
> +	HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP,
> +	HYPER_DMABUF_REQ_ERROR,
> +	HYPER_DMABUF_REQ_NOT_RESPONDED
> +};
> +
> +/* create a request packet with given command and operands */
> +void hyper_dmabuf_create_req(struct hyper_dmabuf_req *req,
> +				 enum hyper_dmabuf_command command,
> +				 int *operands);
> +
> +/* parse incoming request packet (or response) and take
> + * appropriate actions for those
> + */
> +int hyper_dmabuf_msg_parse(int domid, struct hyper_dmabuf_req *req);
> +
> +#endif // __HYPER_DMABUF_MSG_H__
> diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c
> new file mode 100644
> index 000000000000..b4d3c2caad73
> --- /dev/null
> +++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.c
> @@ -0,0 +1,264 @@
> +/*
> + * Copyright © 2018 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + * SPDX-License-Identifier: (MIT OR GPL-2.0)
> + *
> + * Authors:
> + *    Dongwon Kim <dongwon.kim@intel.com>
> + *    Mateusz Polrola <mateuszx.potrola@intel.com>
> + *
> + */
> +
> +#include <linux/kernel.h>
> +#include <linux/errno.h>
> +#include <linux/slab.h>
> +#include <linux/dma-buf.h>
> +#include "hyper_dmabuf_drv.h"
> +#include "hyper_dmabuf_struct.h"
> +#include "hyper_dmabuf_ops.h"
> +#include "hyper_dmabuf_sgl_proc.h"
> +#include "hyper_dmabuf_id.h"
> +#include "hyper_dmabuf_msg.h"
> +#include "hyper_dmabuf_list.h"
> +
> +#define WAIT_AFTER_SYNC_REQ 0
> +#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
> +
> +static int dmabuf_refcount(struct dma_buf *dma_buf)
> +{
> +	if ((dma_buf != NULL) && (dma_buf->file != NULL))
> +		return file_count(dma_buf->file);
> +
> +	return -EINVAL;
> +}
> +
> +static int hyper_dmabuf_ops_attach(struct dma_buf *dmabuf,
> +				   struct device *dev,
> +				   struct dma_buf_attachment *attach)
> +{
> +	return 0;
> +}
> +
> +static void hyper_dmabuf_ops_detach(struct dma_buf *dmabuf,
> +				    struct dma_buf_attachment *attach)
> +{
> +}
> +
> +static struct sg_table *hyper_dmabuf_ops_map(
> +				struct dma_buf_attachment *attachment,
> +				enum dma_data_direction dir)
> +{
> +	struct sg_table *st;
> +	struct imported_sgt_info *imported;
> +	struct pages_info *pg_info;
> +
> +	if (!attachment->dmabuf->priv)
> +		return NULL;
> +
> +	imported = (struct imported_sgt_info *)attachment->dmabuf->priv;
> +
> +	/* extract pages from sgt */
> +	pg_info = hyper_dmabuf_ext_pgs(imported->sgt);
> +
> +	if (!pg_info)
> +		return NULL;
> +
> +	/* create a new sg_table with extracted pages */
> +	st = hyper_dmabuf_create_sgt(pg_info->pgs, pg_info->frst_ofst,
> +				     pg_info->last_len, pg_info->nents);
> +	if (!st)
> +		goto err_free_sg;
> +
> +	if (!dma_map_sg(attachment->dev, st->sgl, st->nents, dir))
> +		goto err_free_sg;
> +
> +	kfree(pg_info->pgs);
> +	kfree(pg_info);
> +
> +	return st;
> +
> +err_free_sg:
> +	if (st) {
> +		sg_free_table(st);
> +		kfree(st);
> +	}
> +
> +	kfree(pg_info->pgs);
> +	kfree(pg_info);
> +
> +	return NULL;
> +}
> +
> +static void hyper_dmabuf_ops_unmap(struct dma_buf_attachment *attachment,
> +				   struct sg_table *sg,
> +				   enum dma_data_direction dir)
> +{
> +	struct imported_sgt_info *imported;
> +
> +	if (!attachment->dmabuf->priv)
> +		return;
> +
> +	imported = (struct imported_sgt_info *)attachment->dmabuf->priv;
> +
> +	dma_unmap_sg(attachment->dev, sg->sgl, sg->nents, dir);
> +
> +	sg_free_table(sg);
> +	kfree(sg);
> +}
> +
> +static void hyper_dmabuf_ops_release(struct dma_buf *dma_buf)
> +{
> +	struct imported_sgt_info *imported;
> +	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
> +	int finish;
> +
> +	if (!dma_buf->priv)
> +		return;
> +
> +	imported = (struct imported_sgt_info *)dma_buf->priv;
> +
> +	if (!dmabuf_refcount(imported->dma_buf))
> +		imported->dma_buf = NULL;
> +
> +	imported->importers--;
> +
> +	if (imported->importers == 0) {
> +		bknd_ops->unmap_shared_pages(&imported->refs_info,
> +					     imported->nents);
> +
> +		if (imported->sgt) {
> +			sg_free_table(imported->sgt);
> +			kfree(imported->sgt);
> +			imported->sgt = NULL;
> +		}
> +	}
> +
> +	finish = imported && !imported->valid &&
> +		 !imported->importers;
> +
> +	/*
> +	 * Check if buffer is still valid and if not remove it
> +	 * from imported list. That has to be done after sending
> +	 * sync request
> +	 */
> +	if (finish) {
> +		hyper_dmabuf_remove_imported(imported->hid);
> +		kfree(imported);
> +	}
> +}
> +
> +static int hyper_dmabuf_ops_begin_cpu_access(struct dma_buf *dmabuf,
> +					     enum dma_data_direction dir)
> +{
> +	return 0;
> +}
> +
> +static int hyper_dmabuf_ops_end_cpu_access(struct dma_buf *dmabuf,
> +					   enum dma_data_direction dir)
> +{
> +	return 0;
> +}
> +
> +static void *hyper_dmabuf_ops_kmap_atomic(struct dma_buf *dmabuf,
> +					  unsigned long pgnum)
> +{
> +	/* TODO: NULL for now. Need to return the addr of mapped region */
> +	return NULL;
> +}
> +
> +static void hyper_dmabuf_ops_kunmap_atomic(struct dma_buf *dmabuf,
> +					   unsigned long pgnum, void *vaddr)
> +{
> +}
> +
> +static void *hyper_dmabuf_ops_kmap(struct dma_buf *dmabuf, unsigned long pgnum)
> +{
> +	/* for now NULL.. need to return the address of mapped region */
> +	return NULL;
> +}
> +
> +static void hyper_dmabuf_ops_kunmap(struct dma_buf *dmabuf, unsigned long pgnum,
> +				    void *vaddr)
> +{
> +}
> +
> +static int hyper_dmabuf_ops_mmap(struct dma_buf *dmabuf,
> +				 struct vm_area_struct *vma)
> +{
> +	return 0;
> +}
> +
> +static void *hyper_dmabuf_ops_vmap(struct dma_buf *dmabuf)
> +{
> +	return NULL;
> +}
> +
> +static void hyper_dmabuf_ops_vunmap(struct dma_buf *dmabuf, void *vaddr)
> +{
> +}
> +
> +static const struct dma_buf_ops hyper_dmabuf_ops = {
> +	.attach = hyper_dmabuf_ops_attach,
> +	.detach = hyper_dmabuf_ops_detach,
> +	.map_dma_buf = hyper_dmabuf_ops_map,
> +	.unmap_dma_buf = hyper_dmabuf_ops_unmap,
> +	.release = hyper_dmabuf_ops_release,
> +	.begin_cpu_access = (void *)hyper_dmabuf_ops_begin_cpu_access,
> +	.end_cpu_access = (void *)hyper_dmabuf_ops_end_cpu_access,
> +	.map_atomic = hyper_dmabuf_ops_kmap_atomic,
> +	.unmap_atomic = hyper_dmabuf_ops_kunmap_atomic,
> +	.map = hyper_dmabuf_ops_kmap,
> +	.unmap = hyper_dmabuf_ops_kunmap,
> +	.mmap = hyper_dmabuf_ops_mmap,
> +	.vmap = hyper_dmabuf_ops_vmap,
> +	.vunmap = hyper_dmabuf_ops_vunmap,
> +};
> +
> +/* exporting dmabuf as fd */
> +int hyper_dmabuf_export_fd(struct imported_sgt_info *imported, int flags)
> +{
> +	int fd = -1;
> +
> +	/* call hyper_dmabuf_export_dmabuf and create
> +	 * and bind a handle for it then release
> +	 */
> +	hyper_dmabuf_export_dma_buf(imported);
> +
> +	if (imported->dma_buf)
> +		fd = dma_buf_fd(imported->dma_buf, flags);
> +
> +	return fd;
> +}
> +
> +void hyper_dmabuf_export_dma_buf(struct imported_sgt_info *imported)
> +{
> +	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
> +
> +	exp_info.ops = &hyper_dmabuf_ops;
> +
> +	/* multiple of PAGE_SIZE, not considering offset */
> +	exp_info.size = imported->sgt->nents * PAGE_SIZE;
Here and below: PAGE_SIZE can differ across VMs
> +	exp_info.flags = /* not sure about flag */ 0;
> +	exp_info.priv = imported;
> +
> +	imported->dma_buf = dma_buf_export(&exp_info);
> +}
> diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.h
> new file mode 100644
> index 000000000000..b30367f2836b
> --- /dev/null
> +++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ops.h
> @@ -0,0 +1,34 @@
> +/*
> + * Copyright © 2018 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + * SPDX-License-Identifier: (MIT OR GPL-2.0)
> + *
> + */
> +
> +#ifndef __HYPER_DMABUF_OPS_H__
> +#define __HYPER_DMABUF_OPS_H__
> +
> +int hyper_dmabuf_export_fd(struct imported_sgt_info *imported, int flags);
> +
> +void hyper_dmabuf_export_dma_buf(struct imported_sgt_info *imported);
> +
> +#endif /* __HYPER_DMABUF_OPS_H__ */
> diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
> new file mode 100644
> index 000000000000..d92ae13d8a30
> --- /dev/null
> +++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
> @@ -0,0 +1,256 @@
> +/*
> + * Copyright © 2018 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + * SPDX-License-Identifier: (MIT OR GPL-2.0)
> + *
> + * Authors:
> + *    Dongwon Kim <dongwon.kim@intel.com>
> + *    Mateusz Polrola <mateuszx.potrola@intel.com>
> + *
> + */
> +
> +#include <linux/kernel.h>
> +#include <linux/errno.h>
> +#include <linux/slab.h>
> +#include <linux/dma-buf.h>
> +#include "hyper_dmabuf_drv.h"
> +#include "hyper_dmabuf_struct.h"
> +#include "hyper_dmabuf_sgl_proc.h"
> +
> +#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
> +
> +/* return total number of pages referenced by a sgt
> + * for pre-calculation of # of pages behind a given sgt
> + */
> +static int get_num_pgs(struct sg_table *sgt)
> +{
> +	struct scatterlist *sgl;
> +	int length, i;
> +	/* at least one page */
> +	int num_pages = 1;
> +
> +	sgl = sgt->sgl;
> +
> +	length = sgl->length - PAGE_SIZE + sgl->offset;
> +
> +	/* round-up */
> +	num_pages += ((length + PAGE_SIZE - 1)/PAGE_SIZE);
Use DIV_ROUND_UP(length, PAGE_SIZE) here instead of open-coding the round-up.
> +
> +	for (i = 1; i < sgt->nents; i++) {
> +		sgl = sg_next(sgl);
> +
> +		/* round-up */
> +		num_pages += ((sgl->length + PAGE_SIZE - 1) /
> +			     PAGE_SIZE); /* round-up */
Ditto
> +	}
> +
> +	return num_pages;
> +}
> +
> +/* extract pages directly from struct sg_table */
> +struct pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt)
> +{
> +	struct pages_info *pg_info;
> +	int i, j, k;
> +	int length;
> +	struct scatterlist *sgl;
> +
> +	pg_info = kmalloc(sizeof(*pg_info), GFP_KERNEL);
> +	if (!pg_info)
> +		return NULL;
> +
> +	pg_info->pgs = kmalloc_array(get_num_pgs(sgt),
> +				     sizeof(struct page *),
> +				     GFP_KERNEL);
> +
> +	if (!pg_info->pgs) {
> +		kfree(pg_info);
> +		return NULL;
> +	}
> +
> +	sgl = sgt->sgl;
> +
> +	pg_info->nents = 1;
> +	pg_info->frst_ofst = sgl->offset;
> +	pg_info->pgs[0] = sg_page(sgl);
> +	length = sgl->length - PAGE_SIZE + sgl->offset;
> +	i = 1;
> +
> +	while (length > 0) {
> +		pg_info->pgs[i] = nth_page(sg_page(sgl), i);
> +		length -= PAGE_SIZE;
> +		pg_info->nents++;
> +		i++;
> +	}
> +
> +	for (j = 1; j < sgt->nents; j++) {
> +		sgl = sg_next(sgl);
> +		pg_info->pgs[i++] = sg_page(sgl);
> +		length = sgl->length - PAGE_SIZE;
> +		pg_info->nents++;
> +		k = 1;
> +
> +		while (length > 0) {
> +			pg_info->pgs[i++] = nth_page(sg_page(sgl), k++);
> +			length -= PAGE_SIZE;
> +			pg_info->nents++;
> +		}
> +	}
> +
> +	/*
> +	 * length at that point will be 0 or negative,
> +	 * so to calculate last page size just add it to PAGE_SIZE
> +	 */
> +	pg_info->last_len = PAGE_SIZE + length;
> +
> +	return pg_info;
> +}
> +
> +/* create sg_table with given pages and other parameters */
> +struct sg_table *hyper_dmabuf_create_sgt(struct page **pgs,
> +					 int frst_ofst, int last_len,
> +					 int nents)
> +{
> +	struct sg_table *sgt;
> +	struct scatterlist *sgl;
> +	int i, ret;
> +
> +	sgt = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
> +	if (!sgt)
> +		return NULL;
> +
> +	ret = sg_alloc_table(sgt, nents, GFP_KERNEL);
> +	if (ret) {
> +		if (sgt) {
> +			sg_free_table(sgt);
> +			kfree(sgt);
> +		}
> +
> +		return NULL;
> +	}
> +
> +	sgl = sgt->sgl;
> +
> +	sg_set_page(sgl, pgs[0], PAGE_SIZE-frst_ofst, frst_ofst);
> +
> +	for (i = 1; i < nents-1; i++) {
> +		sgl = sg_next(sgl);
> +		sg_set_page(sgl, pgs[i], PAGE_SIZE, 0);
> +	}
> +
> +	if (nents > 1) /* more than one page */ {
> +		sgl = sg_next(sgl);
> +		sg_set_page(sgl, pgs[i], last_len, 0);
> +	}
> +
> +	return sgt;
> +}
> +
> +int hyper_dmabuf_cleanup_sgt_info(struct exported_sgt_info *exported,
> +				  int force)
> +{
> +	struct sgt_list *sgtl;
> +	struct attachment_list *attachl;
> +	struct kmap_vaddr_list *va_kmapl;
> +	struct vmap_vaddr_list *va_vmapl;
> +	struct hyper_dmabuf_bknd_ops *bknd_ops = hy_drv_priv->bknd_ops;
> +
> +	if (!exported) {
> +		dev_err(hy_drv_priv->dev, "invalid hyper_dmabuf_id\n");
> +		return -EINVAL;
> +	}
> +
> +	/* if force != 1, sgt_info can be released only if
> +	 * there's no activity on exported dma-buf on importer
> +	 * side.
> +	 */
> +	if (!force &&
> +	    exported->active) {
> +		dev_warn(hy_drv_priv->dev,
> +			 "dma-buf is used by importer\n");
> +
> +		return -EPERM;
> +	}
> +
> +	/* force == 1 is not recommended */
> +	while (!list_empty(&exported->va_kmapped->list)) {
> +		va_kmapl = list_first_entry(&exported->va_kmapped->list,
> +					    struct kmap_vaddr_list, list);
> +
> +		dma_buf_kunmap(exported->dma_buf, 1, va_kmapl->vaddr);
> +		list_del(&va_kmapl->list);
> +		kfree(va_kmapl);
> +	}
> +
> +	while (!list_empty(&exported->va_vmapped->list)) {
> +		va_vmapl = list_first_entry(&exported->va_vmapped->list,
> +					    struct vmap_vaddr_list, list);
> +
> +		dma_buf_vunmap(exported->dma_buf, va_vmapl->vaddr);
> +		list_del(&va_vmapl->list);
> +		kfree(va_vmapl);
> +	}
> +
> +	while (!list_empty(&exported->active_sgts->list)) {
> +		attachl = list_first_entry(&exported->active_attached->list,
> +					   struct attachment_list, list);
> +
> +		sgtl = list_first_entry(&exported->active_sgts->list,
> +					struct sgt_list, list);
> +
> +		dma_buf_unmap_attachment(attachl->attach, sgtl->sgt,
> +					 DMA_BIDIRECTIONAL);
> +		list_del(&sgtl->list);
> +		kfree(sgtl);
> +	}
> +
> +	while (!list_empty(&exported->active_sgts->list)) {
> +		attachl = list_first_entry(&exported->active_attached->list,
> +					   struct attachment_list, list);
> +
> +		dma_buf_detach(exported->dma_buf, attachl->attach);
> +		list_del(&attachl->list);
> +		kfree(attachl);
> +	}
> +
> +	/* Start cleanup of buffer in reverse order to exporting */
> +	bknd_ops->unshare_pages(&exported->refs_info, exported->nents);
Is the above synchronous, or can it be delayed?
> +
> +	/* unmap dma-buf */
> +	dma_buf_unmap_attachment(exported->active_attached->attach,
> +				 exported->active_sgts->sgt,
> +				 DMA_BIDIRECTIONAL);
If the above is asynchronous then this can cause trouble, as we
would be unmapping pages that are still shared
> +
> +	/* detach dma-buf */
> +	dma_buf_detach(exported->dma_buf, exported->active_attached->attach);
> +
> +	/* close connection to dma-buf completely */
> +	dma_buf_put(exported->dma_buf);
> +	exported->dma_buf = NULL;
> +
> +	kfree(exported->active_sgts);
> +	kfree(exported->active_attached);
> +	kfree(exported->va_kmapped);
> +	kfree(exported->va_vmapped);
> +
> +	return 0;
> +}
> diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
> new file mode 100644
> index 000000000000..8dbc9c3dfda4
> --- /dev/null
> +++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.h
> @@ -0,0 +1,43 @@
> +/*
> + * Copyright © 2018 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + * SPDX-License-Identifier: (MIT OR GPL-2.0)
> + *
> + */
> +
> +#ifndef __HYPER_DMABUF_IMP_H__
> +#define __HYPER_DMABUF_IMP_H__
> +
> +/* extract pages directly from struct sg_table */
> +struct pages_info *hyper_dmabuf_ext_pgs(struct sg_table *sgt);
> +
> +/* create sg_table with given pages and other parameters */
> +struct sg_table *hyper_dmabuf_create_sgt(struct page **pgs,
> +					 int frst_ofst, int last_len,
> +					 int nents);
> +
> +int hyper_dmabuf_cleanup_sgt_info(struct exported_sgt_info *exported,
> +				  int force);
> +
> +void hyper_dmabuf_free_sgt(struct sg_table *sgt);
> +
> +#endif /* __HYPER_DMABUF_IMP_H__ */
> diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h
> new file mode 100644
> index 000000000000..144e3821fbc2
> --- /dev/null
> +++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h
> @@ -0,0 +1,131 @@
> +/*
> + * Copyright © 2018 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + * SPDX-License-Identifier: (MIT OR GPL-2.0)
> + *
> + */
> +
> +#ifndef __HYPER_DMABUF_STRUCT_H__
> +#define __HYPER_DMABUF_STRUCT_H__
> +
> +/* stack of mapped sgts */
> +struct sgt_list {
> +	struct sg_table *sgt;
> +	struct list_head list;
> +};
> +
> +/* stack of attachments */
> +struct attachment_list {
> +	struct dma_buf_attachment *attach;
> +	struct list_head list;
> +};
> +
> +/* stack of vaddr mapped via kmap */
> +struct kmap_vaddr_list {
> +	void *vaddr;
> +	struct list_head list;
> +};
> +
> +/* stack of vaddr mapped via vmap */
> +struct vmap_vaddr_list {
> +	void *vaddr;
> +	struct list_head list;
> +};
> +
> +/* Exporter builds pages_info before sharing pages */
> +struct pages_info {
> +	int frst_ofst;
> +	int last_len;
> +	int nents;
> +	struct page **pgs;
> +};
> +
> +
> +/* Exporter stores references to sgt in a hash table
> + * Exporter keeps these references for synchronization
> + * and tracking purposes
> + */
> +struct exported_sgt_info {
> +	hyper_dmabuf_id_t hid;
> +
> +	/* VM ID of importer */
> +	int rdomid;
> +
> +	struct dma_buf *dma_buf;
> +	int nents;
> +
> +	/* list for tracking activities on dma_buf */
> +	struct sgt_list *active_sgts;
> +	struct attachment_list *active_attached;
> +	struct kmap_vaddr_list *va_kmapped;
> +	struct vmap_vaddr_list *va_vmapped;
> +
> +	/* set to 0 when unexported. Importer doesn't
> +	 * do a new mapping of buffer if valid == false
> +	 */
> +	bool valid;
> +
> +	/* active == true if the buffer is actively used
> +	 * (mapped) by importer
> +	 */
> +	int active;
> +
> +	/* hypervisor specific reference data for shared pages */
> +	void *refs_info;
> +
> +	struct delayed_work unexport;
> +	bool unexport_sched;
> +
> +	/* list for file pointers associated with all user space
> +	 * application that have exported this same buffer to
> +	 * another VM. This needs to be tracked to know whether
> +	 * the buffer can be completely freed.
> +	 */
> +	struct file *filp;
> +};
> +
> +/* imported_sgt_info contains information about imported DMA_BUF
> + * this info is kept in IMPORT list and asynchorously retrieved and
> + * used to map DMA_BUF on importer VM's side upon export fd ioctl
> + * request from user-space
> + */
> +
> +struct imported_sgt_info {
> +	hyper_dmabuf_id_t hid; /* unique id for shared dmabuf imported */
> +
> +	/* hypervisor-specific handle to pages */
> +	int ref_handle;
> +
> +	/* offset and size info of DMA_BUF */
> +	int frst_ofst;
> +	int last_len;
> +	int nents;
> +
> +	struct dma_buf *dma_buf;
> +	struct sg_table *sgt;
> +
> +	void *refs_info;
> +	bool valid;
> +	int importers;
> +};
> +
> +#endif /* __HYPER_DMABUF_STRUCT_H__ */
> diff --git a/include/uapi/linux/hyper_dmabuf.h b/include/uapi/linux/hyper_dmabuf.h
> new file mode 100644
> index 000000000000..caaae2da9d4d
> --- /dev/null
> +++ b/include/uapi/linux/hyper_dmabuf.h
> @@ -0,0 +1,87 @@
> +/*
> + * Copyright © 2018 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + */
> +
> +#ifndef __LINUX_PUBLIC_HYPER_DMABUF_H__
> +#define __LINUX_PUBLIC_HYPER_DMABUF_H__
> +
> +typedef struct {
> +	int id;
Can this be defined as a union? You seem to store both a count and a
vm_id in this field.
> +	int rng_key[3]; /* 12-byte random number */
> +} hyper_dmabuf_id_t;
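Regarding the union question above, something along these lines perhaps
(untested sketch; the alternative field names are made up for
illustration):

	typedef struct {
		union {
			int id;     /* exporter-assigned buffer id */
			int count;  /* or the count, when used that way */
			int vm_id;  /* or the source VM id */
		};
		int rng_key[3]; /* 12-byte random number */
	} hyper_dmabuf_id_t;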
> +
> +#define IOCTL_HYPER_DMABUF_TX_CH_SETUP \
> +_IOC(_IOC_NONE, 'G', 0, sizeof(struct ioctl_hyper_dmabuf_tx_ch_setup))
> +struct ioctl_hyper_dmabuf_tx_ch_setup {
> +	/* IN parameters */
> +	/* Remote domain id */
> +	int remote_domain;
> +};
> +
> +#define IOCTL_HYPER_DMABUF_RX_CH_SETUP \
> +_IOC(_IOC_NONE, 'G', 1, sizeof(struct ioctl_hyper_dmabuf_rx_ch_setup))
> +struct ioctl_hyper_dmabuf_rx_ch_setup {
> +	/* IN parameters */
> +	/* Source domain id */
> +	int source_domain;
> +};
> +
> +#define IOCTL_HYPER_DMABUF_EXPORT_REMOTE \
> +_IOC(_IOC_NONE, 'G', 2, sizeof(struct ioctl_hyper_dmabuf_export_remote))
> +struct ioctl_hyper_dmabuf_export_remote {
> +	/* IN parameters */
> +	/* DMA buf fd to be exported */
> +	int dmabuf_fd;
> +	/* Domain id to which buffer should be exported */
> +	int remote_domain;
> +	/* exported dma buf id */
> +	hyper_dmabuf_id_t hid;
> +};
> +
> +#define IOCTL_HYPER_DMABUF_EXPORT_FD \
> +_IOC(_IOC_NONE, 'G', 3, sizeof(struct ioctl_hyper_dmabuf_export_fd))
> +struct ioctl_hyper_dmabuf_export_fd {
> +	/* IN parameters */
> +	/* hyper dmabuf id to be imported */
> +	hyper_dmabuf_id_t hid;
> +	/* flags */
> +	int flags;
> +	/* OUT parameters */
> +	/* exported dma buf fd */
> +	int fd;
> +};
> +
> +#define IOCTL_HYPER_DMABUF_UNEXPORT \
> +_IOC(_IOC_NONE, 'G', 4, sizeof(struct ioctl_hyper_dmabuf_unexport))
> +struct ioctl_hyper_dmabuf_unexport {
> +	/* IN parameters */
> +	/* hyper dmabuf id to be unexported */
> +	hyper_dmabuf_id_t hid;
> +	/* delay in ms by which unexport processing will be postponed */
> +	int delay_ms;
> +	/* OUT parameters */
> +	/* Status of request */
> +	int status;
> +};
> +
> +#endif //__LINUX_PUBLIC_HYPER_DMABUF_H__
>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC,v2,5/9] hyper_dmabuf: default backend for XEN hypervisor
  2018-02-14  1:50 ` [RFC PATCH v2 5/9] hyper_dmabuf: default backend for XEN hypervisor Dongwon Kim
@ 2018-04-10  9:27   ` Oleksandr Andrushchenko
  0 siblings, 0 replies; 21+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-10  9:27 UTC (permalink / raw)
  To: Dongwon Kim, linux-kernel, linaro-mm-sig, xen-devel
  Cc: dri-devel, mateuszx.potrola

On 02/14/2018 03:50 AM, Dongwon Kim wrote:
> From: "Mateusz Polrola" <mateuszx.potrola@intel.com>
>
> The default backend for XEN hypervisor. This backend contains actual
> implementation of individual methods defined in "struct hyper_dmabuf_bknd_ops"
> defined as:
>
> struct hyper_dmabuf_bknd_ops {
>          /* backend initialization routine (optional) */
>          int (*init)(void);
>
>          /* backend cleanup routine (optional) */
>          int (*cleanup)(void);
>
>          /* retreiving id of current virtual machine */
>          int (*get_vm_id)(void);
>
>          /* get pages shared via hypervisor-specific method */
>          int (*share_pages)(struct page **, int, int, void **);
>
>          /* make shared pages unshared via hypervisor specific method */
>          int (*unshare_pages)(void **, int);
>
>          /* map remotely shared pages on importer's side via
>           * hypervisor-specific method
>           */
>          struct page ** (*map_shared_pages)(unsigned long, int, int, void **);
>
>          /* unmap and free shared pages on importer's side via
>           * hypervisor-specific method
>           */
>          int (*unmap_shared_pages)(void **, int);
>
>          /* initialize communication environment */
>          int (*init_comm_env)(void);
>
>          void (*destroy_comm)(void);
>
>          /* upstream ch setup (receiving and responding) */
>          int (*init_rx_ch)(int);
>
>          /* downstream ch setup (transmitting and parsing responses) */
>          int (*init_tx_ch)(int);
>
>          int (*send_req)(int, struct hyper_dmabuf_req *, int);
> };
>
> The first two methods are for extra initialization or cleanup possibly
> required for the current hypervisor (both optional). The third method
> (.get_vm_id) provides a way to get the current VM's id, which is later
> used to identify the source VM of a shared hyper_DMABUF.
>
> All other methods are related to either memory sharing or inter-VM
> communication, which are the minimum requirements for the hyper_DMABUF
> driver. (A brief description of the role of each method is embedded as
> a comment in the definition of the structure above and in the header file.)
>
> The actual implementation of each of these methods, specific to XEN, is
> under backends/xen/. Their mappings are done as follows:
>
> struct hyper_dmabuf_bknd_ops xen_bknd_ops = {
>          .init = NULL, /* not needed for xen */
>          .cleanup = NULL, /* not needed for xen */
>          .get_vm_id = xen_be_get_domid,
>          .share_pages = xen_be_share_pages,
>          .unshare_pages = xen_be_unshare_pages,
>          .map_shared_pages = (void *)xen_be_map_shared_pages,
>          .unmap_shared_pages = xen_be_unmap_shared_pages,
>          .init_comm_env = xen_be_init_comm_env,
>          .destroy_comm = xen_be_destroy_comm,
>          .init_rx_ch = xen_be_init_rx_rbuf,
>          .init_tx_ch = xen_be_init_tx_rbuf,
>          .send_req = xen_be_send_req,
> };
>
> A section for Hypervisor Backend has been added to
>
> "Documentation/hyper-dmabuf-sharing.txt" accordingly
>
> Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
> Signed-off-by: Mateusz Polrola <mateuszx.potrola@intel.com>
> ---
>   drivers/dma-buf/hyper_dmabuf/Kconfig               |   7 +
>   drivers/dma-buf/hyper_dmabuf/Makefile              |   7 +
>   .../backends/xen/hyper_dmabuf_xen_comm.c           | 941 +++++++++++++++++++++
>   .../backends/xen/hyper_dmabuf_xen_comm.h           |  78 ++
>   .../backends/xen/hyper_dmabuf_xen_comm_list.c      | 158 ++++
>   .../backends/xen/hyper_dmabuf_xen_comm_list.h      |  67 ++
>   .../backends/xen/hyper_dmabuf_xen_drv.c            |  46 +
>   .../backends/xen/hyper_dmabuf_xen_drv.h            |  53 ++
>   .../backends/xen/hyper_dmabuf_xen_shm.c            | 525 ++++++++++++
>   .../backends/xen/hyper_dmabuf_xen_shm.h            |  46 +
>   drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c    |  10 +
>   11 files changed, 1938 insertions(+)
>   create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.c
>   create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.h
>   create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm_list.c
>   create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm_list.h
>   create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_drv.c
>   create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_drv.h
>   create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_shm.c
>   create mode 100644 drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_shm.h
>
> diff --git a/drivers/dma-buf/hyper_dmabuf/Kconfig b/drivers/dma-buf/hyper_dmabuf/Kconfig
> index 5ebf516d65eb..68f3d6ce2c1f 100644
> --- a/drivers/dma-buf/hyper_dmabuf/Kconfig
> +++ b/drivers/dma-buf/hyper_dmabuf/Kconfig
> @@ -20,4 +20,11 @@ config HYPER_DMABUF_SYSFS
>   
>   	  The location of sysfs is under "...."
>   
> +config HYPER_DMABUF_XEN
> +        bool "Configure hyper_dmabuf for XEN hypervisor"
> +        default y
default n? (a new, optional driver should not be enabled by default)
> +        depends on HYPER_DMABUF && XEN && XENFS
> +        help
> +          Enabling Hyper_DMABUF Backend for XEN hypervisor
> +
>   endmenu
> diff --git a/drivers/dma-buf/hyper_dmabuf/Makefile b/drivers/dma-buf/hyper_dmabuf/Makefile
> index 3908522b396a..b9ab4eeca6f2 100644
> --- a/drivers/dma-buf/hyper_dmabuf/Makefile
> +++ b/drivers/dma-buf/hyper_dmabuf/Makefile
> @@ -10,6 +10,13 @@ ifneq ($(KERNELRELEASE),)
>   				 hyper_dmabuf_msg.o \
>   				 hyper_dmabuf_id.o \
>   
> +ifeq ($(CONFIG_HYPER_DMABUF_XEN), y)
> +	$(TARGET_MODULE)-objs += backends/xen/hyper_dmabuf_xen_comm.o \
> +				 backends/xen/hyper_dmabuf_xen_comm_list.o \
> +				 backends/xen/hyper_dmabuf_xen_shm.o \
> +				 backends/xen/hyper_dmabuf_xen_drv.o
> +endif
> +
>   obj-$(CONFIG_HYPER_DMABUF) := $(TARGET_MODULE).o
>   
>   # If we are running without kernel build system
> diff --git a/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.c b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.c
> new file mode 100644
> index 000000000000..30bc4b6304ac
> --- /dev/null
> +++ b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.c
> @@ -0,0 +1,941 @@
> +/*
> + * Copyright © 2018 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + * Authors:
> + *    Dongwon Kim <dongwon.kim@intel.com>
> + *    Mateusz Polrola <mateuszx.potrola@intel.com>
> + *
> + */
> +
> +#include <linux/errno.h>
> +#include <linux/slab.h>
> +#include <linux/workqueue.h>
> +#include <linux/delay.h>
> +#include <xen/grant_table.h>
> +#include <xen/events.h>
> +#include <xen/xenbus.h>
> +#include <asm/xen/page.h>
> +#include "hyper_dmabuf_xen_comm.h"
> +#include "hyper_dmabuf_xen_comm_list.h"
> +#include "../../hyper_dmabuf_drv.h"
> +
> +static int export_req_id;
can we avoid this?
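If a global counter has to stay, it should at least be atomic; better
yet, move it into the ring info so each channel has its own sequence.
A possible sketch (untested):

	static atomic_t export_req_id = ATOMIC_INIT(0);

	static int xen_comm_next_req_id(void)
	{
		return atomic_inc_return(&export_req_id);
	}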
> +
> +struct hyper_dmabuf_req req_pending = {0};
> +
> +static void xen_get_domid_delayed(struct work_struct *unused);
> +static void xen_init_comm_env_delayed(struct work_struct *unused);
> +
> +static DECLARE_DELAYED_WORK(get_vm_id_work, xen_get_domid_delayed);
> +static DECLARE_DELAYED_WORK(xen_init_comm_env_work, xen_init_comm_env_delayed);
> +
> +/* Creates entry in xen store that will keep details of all
> + * exporter rings created by this domain
> + */
> +static int xen_comm_setup_data_dir(void)
> +{
> +	char buf[255];
> +
> +	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf",
> +		hy_drv_priv->domid);
Here and below: please have a string constant for that
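E.g. (the name is just a suggestion), and please use snprintf with the
buffer size while at it:

	#define HYPER_DMABUF_XS_DATA_DIR "/local/domain/%d/data/hyper_dmabuf"

	snprintf(buf, sizeof(buf), HYPER_DMABUF_XS_DATA_DIR,
		 hy_drv_priv->domid);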
> +
> +	return xenbus_mkdir(XBT_NIL, buf, "");
Please think of updating XenBus with a transaction, not XBT_NIL
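The usual pattern looks roughly like this (sketch, error paths trimmed):

	struct xenbus_transaction xbt;
	int ret;

again:
	ret = xenbus_transaction_start(&xbt);
	if (ret)
		return ret;

	ret = xenbus_mkdir(xbt, buf, "");
	if (ret) {
		xenbus_transaction_end(xbt, 1); /* abort */
		return ret;
	}

	ret = xenbus_transaction_end(xbt, 0); /* commit */
	if (ret == -EAGAIN)
		goto again; /* raced with another writer, retry */

	return ret;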
> +}
> +
> +/* Removes entry from xenstore with exporter ring details.
> + * Other domains that have connected to any of the exporter rings
> + * created by this domain will be notified about the removal of
> + * this entry and will treat that as a signal to clean up the
> + * importer rings created for this domain.
> + */
> +static int xen_comm_destroy_data_dir(void)
> +{
> +	char buf[255];
> +
> +	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf",
> +		hy_drv_priv->domid);
> +
> +	return xenbus_rm(XBT_NIL, buf, "");
> +}
> +
> +/* Adds xenstore entries with details of the exporter ring created
> + * for a given remote domain. It requires a special daemon running
what is this special daemon?
> + * in dom0 to make sure that the given remote domain will have the
> + * right permissions to access that data.
> + */
> +static int xen_comm_expose_ring_details(int domid, int rdomid,
> +					int gref, int port)
> +{
> +	char buf[255];
> +	int ret;
> +
> +	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d",
> +		domid, rdomid);
> +
> +	ret = xenbus_printf(XBT_NIL, buf, "grefid", "%d", gref);
> +
> +	if (ret) {
> +		dev_err(hy_drv_priv->dev,
Please do not touch global hy_drv_priv directly
> +			"Failed to write xenbus entry %s: %d\n",
> +			buf, ret);
> +
> +		return ret;
> +	}
> +
> +	ret = xenbus_printf(XBT_NIL, buf, "port", "%d", port);
> +
> +	if (ret) {
> +		dev_err(hy_drv_priv->dev,
> +			"Failed to write xenbus entry %s: %d\n",
> +			buf, ret);
> +
> +		return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +/*
> + * Queries details of ring exposed by remote domain.
> + */
> +static int xen_comm_get_ring_details(int domid, int rdomid,
> +				     int *grefid, int *port)
> +{
> +	char buf[255];
> +	int ret;
> +
> +	sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d",
> +		rdomid, domid);
> +
> +	ret = xenbus_scanf(XBT_NIL, buf, "grefid", "%d", grefid);
You'll have a race condition here as you are not using transactions,
so you might read partial data from XenBus
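I.e. wrap both reads in one transaction so they observe a consistent
snapshot, e.g. (sketch):

	struct xenbus_transaction xbt;

	ret = xenbus_transaction_start(&xbt);
	if (ret)
		return ret;
	ret = xenbus_scanf(xbt, buf, "grefid", "%d", grefid);
	if (ret > 0)
		ret = xenbus_scanf(xbt, buf, "port", "%d", port);
	xenbus_transaction_end(xbt, 0);

	return ret > 0 ? 0 : -EINVAL;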
> +
> +	if (ret <= 0) {
> +		dev_err(hy_drv_priv->dev,
> +			"Failed to read xenbus entry %s: %d\n",
> +			buf, ret);
> +
> +		return ret;
> +	}
> +
> +	ret = xenbus_scanf(XBT_NIL, buf, "port", "%d", port);
Ditto
> +
> +	if (ret <= 0) {
> +		dev_err(hy_drv_priv->dev,
> +			"Failed to read xenbus entry %s: %d\n",
> +			buf, ret);
> +
> +		return ret;
> +	}
> +
> +	return (ret <= 0 ? 1 : 0);
> +}
> +
> +static void xen_get_domid_delayed(struct work_struct *unused)
> +{
> +	struct xenbus_transaction xbt;
> +	int domid, ret;
> +
> +	/* schedule another attempt if the driver is still running
> +	 * and xenstore has not been initialized
> +	 */
Please think of using XenBus drivers for this (struct xenbus_driver)
It might add some complexity in the backend (by dynamically registering/
unregistering XenBus driver), but will also let you run such code as
you have here synchronously, e.g. see struct xenbus_driver.otherend_changed.
This way you'll be able to implement XenBus state machine as other Xen
front/back drivers do.
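Roughly this shape (sketch only, not a drop-in; ids table and probe
callback are hypothetical):

	static void hyper_dmabuf_otherend_changed(struct xenbus_device *dev,
						  enum xenbus_state state)
	{
		switch (state) {
		case XenbusStateInitialised:
			/* peer has published its ring details */
			break;
		case XenbusStateClosed:
			/* peer went away, clean up importer ring */
			break;
		default:
			break;
		}
	}

	static struct xenbus_driver hyper_dmabuf_xenbus_drv = {
		.ids = hyper_dmabuf_ids,
		.probe = hyper_dmabuf_xenbus_probe,
		.otherend_changed = hyper_dmabuf_otherend_changed,
	};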
> +	if (likely(xenstored_ready == 0)) {
> +		dev_dbg(hy_drv_priv->dev,
> +			"Xenstore is not ready yet. Will retry in 500ms\n");
> +		schedule_delayed_work(&get_vm_id_work, msecs_to_jiffies(500));
> +	} else {
> +		xenbus_transaction_start(&xbt);
> +
so, for consistency, please use transactions everywhere
> +		ret = xenbus_scanf(xbt, "domid", "", "%d", &domid);
> +
> +		if (ret <= 0)
> +			domid = -1;
> +
> +		xenbus_transaction_end(xbt, 0);
> +
> +		/* try again since -1 is an invalid id for domain
> +		 * (but only if driver is still running)
> +		 */
> +		if (unlikely(domid == -1)) {
> +			dev_dbg(hy_drv_priv->dev,
> +				"domid==-1 is invalid. Will retry it in 500ms\n");
> +			schedule_delayed_work(&get_vm_id_work,
> +					      msecs_to_jiffies(500));
This doesn't seem to be the right design: you end up polling for
values and keeping this worker around.
> +		} else {
> +			dev_info(hy_drv_priv->dev,
> +				 "Successfully retrieved domid from Xenstore:%d\n",
> +				 domid);
> +			hy_drv_priv->domid = domid;
> +		}
> +	}
> +}
> +
> +int xen_be_get_domid(void)
> +{
> +	struct xenbus_transaction xbt;
> +	int domid;
> +
> +	if (unlikely(xenstored_ready == 0)) {
> +		xen_get_domid_delayed(NULL);
> +		return -1;
> +	}
> +
> +	xenbus_transaction_start(&xbt);
> +
> +	if (!xenbus_scanf(xbt, "domid", "", "%d", &domid))
> +		domid = -1;
> +
> +	xenbus_transaction_end(xbt, 0);
> +
> +	return domid;
> +}
> +
> +static int xen_comm_next_req_id(void)
> +{
> +	export_req_id++;
> +	return export_req_id;
> +}
> +
> +/* For now cache the latest rings as global variables. TODO: keep them in a list */
> +static irqreturn_t front_ring_isr(int irq, void *info);
> +static irqreturn_t back_ring_isr(int irq, void *info);
> +
> +/* Callback function that will be called on any change of the xenbus
> + * path being watched. Used for detecting creation/destruction of the
> + * remote domain's exporter ring.
If you implement xenbus_driver.otherend_changed and
corresponding state machine this might not be needed
> + *
> + * When the remote domain's exporter ring is detected, an importer
> + * ring on this domain will be created.
> + *
> + * When destruction of the remote domain's exporter ring is detected,
> + * this domain's importer ring will be cleaned up.
> + *
> + * Destruction can be caused by the remote domain unloading the module,
> + * or by its crash/forced shutdown.
> + */
> +static void remote_dom_exporter_watch_cb(struct xenbus_watch *watch,
> +					 const char *path, const char *token)
> +{
> +	int rdom, ret;
> +	uint32_t grefid, port;
> +	struct xen_comm_rx_ring_info *ring_info;
> +
> +	/* Check which domain has changed its exporter rings */
> +	ret = sscanf(watch->node, "/local/domain/%d/", &rdom);
> +	if (ret <= 0)
> +		return;
> +
> +	/* Check if we have importer ring for given remote domain already
> +	 * created
> +	 */
> +	ring_info = xen_comm_find_rx_ring(rdom);
> +
> +	/* Try to query the remote domain's exporter ring details - if
> +	 * that fails and we have an importer ring, it means the remote
> +	 * domain has cleaned up its exporter ring, so our importer ring
> +	 * is no longer useful.
> +	 *
> +	 * If querying the details succeeds and we don't have an importer
> +	 * ring, it means that the remote domain has set it up for us and
> +	 * we should connect to it.
> +	 */
> +
> +	ret = xen_comm_get_ring_details(xen_be_get_domid(),
> +					rdom, &grefid, &port);
> +
> +	if (ring_info && ret != 0) {
> +		dev_info(hy_drv_priv->dev,
> +			 "Remote exporter closed, cleaning up importer\n");
> +		xen_be_cleanup_rx_rbuf(rdom);
> +	} else if (!ring_info && ret == 0) {
> +		dev_info(hy_drv_priv->dev,
> +			 "Registering importer\n");
> +		xen_be_init_rx_rbuf(rdom);
> +	}
> +}
> +
> +/* exporter needs to generate info for page sharing */
> +int xen_be_init_tx_rbuf(int domid)
> +{
> +	struct xen_comm_tx_ring_info *ring_info;
> +	struct xen_comm_sring *sring;
> +	struct evtchn_alloc_unbound alloc_unbound;
> +	struct evtchn_close close;
> +
> +	void *shared_ring;
> +	int ret;
> +
> +	/* check if there's any existing tx channel in the table */
> +	ring_info = xen_comm_find_tx_ring(domid);
> +
> +	if (ring_info) {
> +		dev_info(hy_drv_priv->dev,
> +			 "tx ring ch to domid = %d already exists\ngref = %d, port = %d\n",
> +		ring_info->rdomain, ring_info->gref_ring, ring_info->port);
> +		return 0;
> +	}
> +
> +	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
> +
> +	if (!ring_info)
> +		return -ENOMEM;
> +
> +	/* from exporter to importer */
> +	shared_ring = (void *)__get_free_pages(GFP_KERNEL, 1);
> +	if (shared_ring == 0) {
> +		kfree(ring_info);
> +		return -ENOMEM;
> +	}
> +
> +	sring = (struct xen_comm_sring *) shared_ring;
> +
> +	SHARED_RING_INIT(sring);
> +
> +	FRONT_RING_INIT(&(ring_info->ring_front), sring, PAGE_SIZE);
> +
> +	ring_info->gref_ring = gnttab_grant_foreign_access(domid,
> +						virt_to_mfn(shared_ring),
> +						0);
> +	if (ring_info->gref_ring < 0) {
> +		/* fail to get gref */
> +		kfree(ring_info);
> +		return -EFAULT;
> +	}
> +
> +	alloc_unbound.dom = DOMID_SELF;
> +	alloc_unbound.remote_dom = domid;
> +	ret = HYPERVISOR_event_channel_op(EVTCHNOP_alloc_unbound,
> +					  &alloc_unbound);
Please do not open-code: xenbus_alloc_evtchn
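I.e. something like this (assuming you end up with a struct
xenbus_device; "xbdev" here is hypothetical):

	int port;

	ret = xenbus_alloc_evtchn(xbdev, &port);
	if (ret)
		goto fail;

	ret = bind_evtchn_to_irqhandler(port, front_ring_isr, 0,
					NULL, ring_info);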
> +	if (ret) {
> +		dev_err(hy_drv_priv->dev,
> +			"Cannot allocate event channel\n");
> +		kfree(ring_info);
> +		return -EIO;
> +	}
> +
> +	/* setting up interrupt */
> +	ret = bind_evtchn_to_irqhandler(alloc_unbound.port,
> +					front_ring_isr, 0,
> +					NULL, (void *) ring_info);
> +
> +	if (ret < 0) {
> +		dev_err(hy_drv_priv->dev,
> +			"Failed to setup event channel\n");
> +		close.port = alloc_unbound.port;
> +		HYPERVISOR_event_channel_op(EVTCHNOP_close, &close);
Please do not open-code: xenbus_free_evtchn
> +		gnttab_end_foreign_access(ring_info->gref_ring, 0,
> +					virt_to_mfn(shared_ring));
> +		kfree(ring_info);
> +		return -EIO;
> +	}
> +
> +	ring_info->rdomain = domid;
> +	ring_info->irq = ret;
> +	ring_info->port = alloc_unbound.port;
> +
> +	mutex_init(&ring_info->lock);
> +
> +	dev_dbg(hy_drv_priv->dev,
> +		"%s: allocated eventchannel gref %d  port: %d  irq: %d\n",
> +		__func__,
> +		ring_info->gref_ring,
> +		ring_info->port,
> +		ring_info->irq);
> +
> +	ret = xen_comm_add_tx_ring(ring_info);
> +
And what if we fail?
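E.g. something along these lines (sketch):

	ret = xen_comm_add_tx_ring(ring_info);
	if (ret) {
		unbind_from_irqhandler(ring_info->irq, ring_info);
		gnttab_end_foreign_access(ring_info->gref_ring, 0,
					  (unsigned long)shared_ring);
		kfree(ring_info);
		return ret;
	}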
> +	ret = xen_comm_expose_ring_details(xen_be_get_domid(),
> +					   domid,
> +					   ring_info->gref_ring,
> +					   ring_info->port);
> +
> +	/* Register watch for remote domain exporter ring.
> +	 * When remote domain will setup its exporter ring,
> +	 * we will automatically connect our importer ring to it.
> +	 */
> +	ring_info->watch.callback = remote_dom_exporter_watch_cb;
> +	ring_info->watch.node = kmalloc(255, GFP_KERNEL);
> +
> +	if (!ring_info->watch.node) {
> +		kfree(ring_info);
> +		return -ENOMEM;
> +	}
> +
> +	sprintf((char *)ring_info->watch.node,
> +		"/local/domain/%d/data/hyper_dmabuf/%d/port",
> +		domid, xen_be_get_domid());
> +
> +	register_xenbus_watch(&ring_info->watch);
> +
> +	return ret;
> +}
> +
> +/* cleans up exporter ring created for given remote domain */
> +void xen_be_cleanup_tx_rbuf(int domid)
> +{
> +	struct xen_comm_tx_ring_info *ring_info;
> +	struct xen_comm_rx_ring_info *rx_ring_info;
> +
> +	/* check if we have an exporter ring for the given rdomain at all */
> +	ring_info = xen_comm_find_tx_ring(domid);
> +
> +	if (!ring_info)
> +		return;
> +
> +	xen_comm_remove_tx_ring(domid);
> +
> +	unregister_xenbus_watch(&ring_info->watch);
> +	kfree(ring_info->watch.node);
> +
> +	/* No need to close communication channel, will be done by
> +	 * this function
> +	 */
> +	unbind_from_irqhandler(ring_info->irq, (void *) ring_info);
> +
> +	/* No need to free sring page, will be freed by this function
> +	 * when other side will end its access
> +	 */
> +	gnttab_end_foreign_access(ring_info->gref_ring, 0,
> +				  (unsigned long) ring_info->ring_front.sring);
> +
> +	kfree(ring_info);
> +
> +	rx_ring_info = xen_comm_find_rx_ring(domid);
> +	if (!rx_ring_info)
> +		return;
> +
> +	BACK_RING_INIT(&(rx_ring_info->ring_back),
> +		       rx_ring_info->ring_back.sring,
> +		       PAGE_SIZE);
why init on cleanup?
> +}
> +
> +/* importer needs to know about shared page and port numbers for
> + * ring buffer and event channel
> + */
> +int xen_be_init_rx_rbuf(int domid)
> +{
> +	struct xen_comm_rx_ring_info *ring_info;
> +	struct xen_comm_sring *sring;
> +
> +	struct page *shared_ring;
> +
> +	struct gnttab_map_grant_ref *map_ops;
> +
> +	int ret;
> +	int rx_gref, rx_port;
> +
> +	/* check if there's existing rx ring channel */
> +	ring_info = xen_comm_find_rx_ring(domid);
> +
> +	if (ring_info) {
> +		dev_info(hy_drv_priv->dev,
> +			 "rx ring ch from domid = %d already exists\n",
> +			 ring_info->sdomain);
> +
> +		return 0;
> +	}
> +
> +	ret = xen_comm_get_ring_details(xen_be_get_domid(), domid,
> +					&rx_gref, &rx_port);
> +
> +	if (ret) {
> +		dev_err(hy_drv_priv->dev,
> +			"Domain %d has not created exporter ring for current domain\n",
> +			domid);
> +
> +		return ret;
> +	}
> +
> +	ring_info = kmalloc(sizeof(*ring_info), GFP_KERNEL);
> +
> +	if (!ring_info)
> +		return -ENOMEM;
> +
> +	ring_info->sdomain = domid;
> +	ring_info->evtchn = rx_port;
> +
> +	map_ops = kmalloc(sizeof(*map_ops), GFP_KERNEL);
> +
> +	if (!map_ops) {
> +		ret = -ENOMEM;
> +		goto fail_no_map_ops;
> +	}
> +
> +	if (gnttab_alloc_pages(1, &shared_ring)) {
> +		ret = -ENOMEM;
> +		goto fail_others;
> +	}
> +
Please see xenbus_grant_ring
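On this (mapping) side the matching helper would be
xenbus_map_ring_valloc(), e.g. (sketch; "xbdev" again a hypothetical
struct xenbus_device):

	void *ring_addr;

	ret = xenbus_map_ring_valloc(xbdev, &rx_gref, 1, &ring_addr);
	if (ret)
		goto fail_others;
	sring = ring_addr;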
> +	gnttab_set_map_op(&map_ops[0],
> +			  (unsigned long)pfn_to_kaddr(
> +					page_to_pfn(shared_ring)),
> +			  GNTMAP_host_map, rx_gref, domid);
> +
> +	gnttab_set_unmap_op(&ring_info->unmap_op,
> +			    (unsigned long)pfn_to_kaddr(
> +					page_to_pfn(shared_ring)),
> +			    GNTMAP_host_map, -1);
> +
> +	ret = gnttab_map_refs(map_ops, NULL, &shared_ring, 1);
> +	if (ret < 0) {
> +		dev_err(hy_drv_priv->dev, "Cannot map ring\n");
> +		ret = -EFAULT;
> +		goto fail_others;
> +	}
> +
> +	if (map_ops[0].status) {
> +		dev_err(hy_drv_priv->dev, "Ring mapping failed\n");
> +		ret = -EFAULT;
> +		goto fail_others;
> +	} else {
> +		ring_info->unmap_op.handle = map_ops[0].handle;
> +	}
> +
> +	kfree(map_ops);
> +
> +	sring = (struct xen_comm_sring *)pfn_to_kaddr(page_to_pfn(shared_ring));
> +
> +	BACK_RING_INIT(&ring_info->ring_back, sring, PAGE_SIZE);
> +
> +	ret = bind_interdomain_evtchn_to_irq(domid, rx_port);
> +
> +	if (ret < 0) {
> +		ret = -EIO;
> +		goto fail_others;
> +	}
> +
> +	ring_info->irq = ret;
> +
> +	dev_dbg(hy_drv_priv->dev,
> +		"%s: bound to eventchannel port: %d  irq: %d\n", __func__,
> +		rx_port,
> +		ring_info->irq);
> +
> +	ret = xen_comm_add_rx_ring(ring_info);
> +
> +	/* Set up communication channel in the opposite direction */
> +	if (!xen_comm_find_tx_ring(domid))
> +		ret = xen_be_init_tx_rbuf(domid);
> +
> +	ret = request_irq(ring_info->irq,
> +			  back_ring_isr, 0,
> +			  NULL, (void *)ring_info);
> +
> +	return ret;
> +
> +fail_others:
> +	kfree(map_ops);
> +
> +fail_no_map_ops:
> +	kfree(ring_info);
> +
> +	return ret;
> +}
> +
> +/* cleans up importer ring created for given source domain */
> +void xen_be_cleanup_rx_rbuf(int domid)
> +{
> +	struct xen_comm_rx_ring_info *ring_info;
> +	struct xen_comm_tx_ring_info *tx_ring_info;
> +	struct page *shared_ring;
> +
> +	/* check if we have importer ring created for given sdomain */
> +	ring_info = xen_comm_find_rx_ring(domid);
> +
> +	if (!ring_info)
> +		return;
> +
> +	xen_comm_remove_rx_ring(domid);
> +
> +	/* no need to close event channel, will be done by that function */
> +	unbind_from_irqhandler(ring_info->irq, (void *)ring_info);
> +
> +	/* unmapping shared ring page */
> +	shared_ring = virt_to_page(ring_info->ring_back.sring);
> +	gnttab_unmap_refs(&ring_info->unmap_op, NULL, &shared_ring, 1);
> +	gnttab_free_pages(1, &shared_ring);
> +
> +	kfree(ring_info);
> +
> +	tx_ring_info = xen_comm_find_tx_ring(domid);
> +	if (!tx_ring_info)
> +		return;
> +
> +	SHARED_RING_INIT(tx_ring_info->ring_front.sring);
> +	FRONT_RING_INIT(&(tx_ring_info->ring_front),
> +			tx_ring_info->ring_front.sring,
> +			PAGE_SIZE);
> +}
> +
> +#ifdef CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD
> +
> +static void xen_rx_ch_add_delayed(struct work_struct *unused);
> +
> +static DECLARE_DELAYED_WORK(xen_rx_ch_auto_add_work, xen_rx_ch_add_delayed);
> +
> +#define DOMID_SCAN_START	1	/*  domid = 1 */
> +#define DOMID_SCAN_END		10	/* domid = 10 */
> +
> +static void xen_rx_ch_add_delayed(struct work_struct *unused)
> +{
> +	int ret;
> +	char buf[128];
> +	int i, dummy;
> +
> +	dev_dbg(hy_drv_priv->dev,
> +		"Scanning for a new tx channel coming from another domain\n");
This should be synchronous IMO, no scanners
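I.e. register a watch on the peers' hyper_dmabuf nodes and react in the
callback, instead of rescanning every 10 seconds. Sketch (the node path
should be narrowed down per peer, and rx_ch_changed_cb is hypothetical):

	static struct xenbus_watch rx_ch_watch = {
		.node = "/local/domain",
		.callback = rx_ch_changed_cb,
	};

	register_xenbus_watch(&rx_ch_watch);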
> +
> +	/* check other domains and schedule another work if driver
> +	 * is still running and backend is valid
> +	 */
> +	if (hy_drv_priv &&
> +	    hy_drv_priv->initialized) {
> +		for (i = DOMID_SCAN_START; i < DOMID_SCAN_END + 1; i++) {
> +			if (i == hy_drv_priv->domid)
> +				continue;
> +
> +			sprintf(buf, "/local/domain/%d/data/hyper_dmabuf/%d",
> +				i, hy_drv_priv->domid);
> +
> +			ret = xenbus_scanf(XBT_NIL, buf, "port", "%d", &dummy);
> +
> +			if (ret > 0) {
> +				if (xen_comm_find_rx_ring(i) != NULL)
> +					continue;
> +
> +				ret = xen_be_init_rx_rbuf(i);
> +
> +				if (!ret)
> +					dev_info(hy_drv_priv->dev,
> +						 "Done rx ch init for VM %d\n",
> +						 i);
> +			}
> +		}
> +
> +		/* check every 10 seconds */
> +		schedule_delayed_work(&xen_rx_ch_auto_add_work,
> +				      msecs_to_jiffies(10000));
> +	}
> +}
> +
> +#endif /* CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD */
> +
> +void xen_init_comm_env_delayed(struct work_struct *unused)
> +{
> +	int ret;
> +
> +	/* schedule another work item if the driver is still running
> +	 * and xenstore hasn't been initialized or dom_id hasn't
> +	 * been correctly retrieved.
> +	 */
> +	if (likely(xenstored_ready == 0 ||
> +	    hy_drv_priv->domid == -1)) {
> +		dev_dbg(hy_drv_priv->dev,
> +			"Xenstore not ready. Will retry in 500ms\n");
> +		schedule_delayed_work(&xen_init_comm_env_work,
> +				      msecs_to_jiffies(500));
> +	} else {
> +		ret = xen_comm_setup_data_dir();
> +		if (ret < 0) {
> +			dev_err(hy_drv_priv->dev,
> +				"Failed to create data dir in Xenstore\n");
> +		} else {
> +			dev_info(hy_drv_priv->dev,
> +				"Successfully finished comm env init\n");
> +			hy_drv_priv->initialized = true;
> +
> +#ifdef CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD
> +			xen_rx_ch_add_delayed(NULL);
> +#endif /* CONFIG_HYPER_DMABUF_XEN_AUTO_RX_CH_ADD */
> +		}
> +	}
> +}
> +
> +int xen_be_init_comm_env(void)
> +{
> +	int ret;
> +
> +	xen_comm_ring_table_init();
> +
> +	if (unlikely(xenstored_ready == 0 ||
> +	    hy_drv_priv->domid == -1)) {
> +		xen_init_comm_env_delayed(NULL);
> +		return -1;
> +	}
> +
> +	ret = xen_comm_setup_data_dir();
> +	if (ret < 0) {
> +		dev_err(hy_drv_priv->dev,
> +			"Failed to create data dir in Xenstore\n");
> +	} else {
> +		dev_info(hy_drv_priv->dev,
> +			"Successfully finished comm env initialization\n");
> +
> +		hy_drv_priv->initialized = true;
> +	}
> +
> +	return ret;
> +}
> +
> +/* cleans up all tx/rx rings */
> +static void xen_be_cleanup_all_rbufs(void)
> +{
> +	xen_comm_foreach_tx_ring(xen_be_cleanup_tx_rbuf);
> +	xen_comm_foreach_rx_ring(xen_be_cleanup_rx_rbuf);
> +}
> +
> +void xen_be_destroy_comm(void)
> +{
> +	xen_be_cleanup_all_rbufs();
> +	xen_comm_destroy_data_dir();
> +}
> +
> +int xen_be_send_req(int domid, struct hyper_dmabuf_req *req,
> +			      int wait)
> +{
> +	struct xen_comm_front_ring *ring;
> +	struct hyper_dmabuf_req *new_req;
> +	struct xen_comm_tx_ring_info *ring_info;
> +	int notify;
> +
> +	struct timeval tv_start, tv_end;
> +	struct timeval tv_diff;
> +
> +	int timeout = 1000;
> +
> +	/* find a ring info for the channel */
> +	ring_info = xen_comm_find_tx_ring(domid);
> +	if (!ring_info) {
> +		dev_err(hy_drv_priv->dev,
> +			"Can't find ring info for the channel\n");
> +		return -ENOENT;
> +	}
> +
> +
> +	ring = &ring_info->ring_front;
> +
> +	do_gettimeofday(&tv_start);
> +
> +	while (RING_FULL(ring)) {
> +		dev_dbg(hy_drv_priv->dev, "RING_FULL\n");
> +
> +		if (timeout == 0) {
> +			dev_err(hy_drv_priv->dev,
> +				"Timeout while waiting for an entry in the ring\n");
> +			return -EIO;
> +		}
> +		usleep_range(100, 120);
> +		timeout--;
> +	}
Heh - a busy-wait loop with usleep_range and a retry counter; please
use a proper wait primitive instead.
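E.g. a waitqueue woken from the response ISR would avoid the polling,
along these lines (sketch; ring_info->wq is a hypothetical
wait_queue_head_t):

	if (!wait_event_timeout(ring_info->wq, !RING_FULL(ring),
				msecs_to_jiffies(100)))
		return -EIO;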
> +
> +	timeout = 1000;
> +
> +	mutex_lock(&ring_info->lock);
> +
> +	new_req = RING_GET_REQUEST(ring, ring->req_prod_pvt);
> +	if (!new_req) {
> +		mutex_unlock(&ring_info->lock);
> +		dev_err(hy_drv_priv->dev,
> +			"NULL REQUEST\n");
> +		return -EIO;
> +	}
> +
> +	req->req_id = xen_comm_next_req_id();
> +
> +	/* update req_pending with current request */
> +	memcpy(&req_pending, req, sizeof(req_pending));
> +
> +	/* pass current request to the ring */
> +	memcpy(new_req, req, sizeof(*new_req));
> +
> +	ring->req_prod_pvt++;
> +
> +	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(ring, notify);
> +	if (notify)
> +		notify_remote_via_irq(ring_info->irq);
> +
> +	if (wait) {
> +		while (timeout--) {
> +			if (req_pending.stat !=
> +			    HYPER_DMABUF_REQ_NOT_RESPONDED)
> +				break;
> +			usleep_range(100, 120);
> +		}
> +
> +		if (timeout < 0) {
> +			mutex_unlock(&ring_info->lock);
> +			dev_err(hy_drv_priv->dev,
> +				"request timed-out\n");
> +			return -EBUSY;
> +		}
> +
> +		mutex_unlock(&ring_info->lock);
> +		do_gettimeofday(&tv_end);
> +
> +		/* checking time duration for round-trip of a request
> +		 * for debugging
> +		 */
put it under a debug #ifdef then?
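And ktime is nicer than do_gettimeofday for this. Sketch (assuming
"start = ktime_get()" replaces do_gettimeofday(&tv_start) above; the
DEBUG guard is just illustrative):

#ifdef DEBUG
	s64 delta_us = ktime_us_delta(ktime_get(), start);

	if (delta_us > 16000)
		dev_dbg(hy_drv_priv->dev,
			"send_req round-trip: %lld us\n", delta_us);
#endif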
> +		if (tv_end.tv_usec >= tv_start.tv_usec) {
> +			tv_diff.tv_sec = tv_end.tv_sec-tv_start.tv_sec;
> +			tv_diff.tv_usec = tv_end.tv_usec-tv_start.tv_usec;
> +		} else {
> +			tv_diff.tv_sec = tv_end.tv_sec-tv_start.tv_sec-1;
> +			tv_diff.tv_usec = tv_end.tv_usec+1000000-
> +					  tv_start.tv_usec;
> +		}
> +
> +		if (tv_diff.tv_sec != 0 && tv_diff.tv_usec > 16000)
> +			dev_dbg(hy_drv_priv->dev,
> +				"send_req:time diff: %ld sec, %ld usec\n",
> +				tv_diff.tv_sec, tv_diff.tv_usec);
> +	}
> +
> +	mutex_unlock(&ring_info->lock);
> +
> +	return 0;
> +}
> +
> +/* ISR for handling request */
> +static irqreturn_t back_ring_isr(int irq, void *info)
> +{
> +	RING_IDX rc, rp;
> +	struct hyper_dmabuf_req req;
> +	struct hyper_dmabuf_resp resp;
> +
> +	int notify, more_to_do;
> +	int ret;
> +
> +	struct xen_comm_rx_ring_info *ring_info;
> +	struct xen_comm_back_ring *ring;
> +
> +	ring_info = (struct xen_comm_rx_ring_info *)info;
> +	ring = &ring_info->ring_back;
> +
> +	dev_dbg(hy_drv_priv->dev, "%s\n", __func__);
> +
> +	do {
> +		rc = ring->req_cons;
> +		rp = ring->sring->req_prod;
> +		more_to_do = 0;
> +		while (rc != rp) {
> +			if (RING_REQUEST_CONS_OVERFLOW(ring, rc))
> +				break;
> +
> +			memcpy(&req, RING_GET_REQUEST(ring, rc), sizeof(req));
> +			ring->req_cons = ++rc;
> +
> +			ret = hyper_dmabuf_msg_parse(ring_info->sdomain, &req);
> +
> +			if (ret > 0) {
> +				/* preparing a response for the request and
> +				 * send it to the requester
> +				 */
> +				memcpy(&resp, &req, sizeof(resp));
> +				memcpy(RING_GET_RESPONSE(ring,
> +							 ring->rsp_prod_pvt),
> +							 &resp, sizeof(resp));
> +				ring->rsp_prod_pvt++;
> +
> +				dev_dbg(hy_drv_priv->dev,
> +					"responding to exporter for req:%d\n",
> +					resp.resp_id);
> +
> +				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(ring,
> +								     notify);
> +
> +				if (notify)
> +					notify_remote_via_irq(ring_info->irq);
> +			}
> +
> +			RING_FINAL_CHECK_FOR_REQUESTS(ring, more_to_do);
> +		}
> +	} while (more_to_do);
> +
> +	return IRQ_HANDLED;
> +}
> +
> +/* ISR for handling responses */
> +static irqreturn_t front_ring_isr(int irq, void *info)
> +{
> +	/* front ring only care about response from back */
> +	struct hyper_dmabuf_resp *resp;
> +	RING_IDX i, rp;
> +	int more_to_do, ret;
> +
> +	struct xen_comm_tx_ring_info *ring_info;
> +	struct xen_comm_front_ring *ring;
> +
> +	ring_info = (struct xen_comm_tx_ring_info *)info;
> +	ring = &ring_info->ring_front;
> +
> +	dev_dbg(hy_drv_priv->dev, "%s\n", __func__);
> +
> +	do {
> +		more_to_do = 0;
> +		rp = ring->sring->rsp_prod;
> +		for (i = ring->rsp_cons; i != rp; i++) {
> +			resp = RING_GET_RESPONSE(ring, i);
> +
> +			/* update pending request's status with what is
> +			 * in the response
> +			 */
> +
> +			dev_dbg(hy_drv_priv->dev,
> +				"getting response from importer\n");
> +
> +			if (req_pending.req_id == resp->resp_id)
> +				req_pending.stat = resp->stat;
> +
> +			if (resp->stat == HYPER_DMABUF_REQ_NEEDS_FOLLOW_UP) {
> +				/* parsing response */
> +				ret = hyper_dmabuf_msg_parse(ring_info->rdomain,
> +					(struct hyper_dmabuf_req *)resp);
> +
> +				if (ret < 0) {
> +					dev_err(hy_drv_priv->dev,
> +						"err while parsing resp\n");
> +				}
> +			} else if (resp->stat == HYPER_DMABUF_REQ_PROCESSED) {
> +				/* for debugging dma_buf remote synch */
> +				dev_dbg(hy_drv_priv->dev,
> +					"original request = 0x%x\n", resp->cmd);
> +				dev_dbg(hy_drv_priv->dev,
> +					"got HYPER_DMABUF_REQ_PROCESSED\n");
> +			} else if (resp->stat == HYPER_DMABUF_REQ_ERROR) {
> +				/* for debugging dma_buf remote synch */
> +				dev_dbg(hy_drv_priv->dev,
> +					"original request = 0x%x\n", resp->cmd);
> +				dev_dbg(hy_drv_priv->dev,
> +					"got HYPER_DMABUF_REQ_ERROR\n");
> +			}
> +		}
> +
> +		ring->rsp_cons = i;
> +
> +		if (i != ring->req_prod_pvt)
> +			RING_FINAL_CHECK_FOR_RESPONSES(ring, more_to_do);
> +		else
> +			ring->sring->rsp_event = i+1;
> +
> +	} while (more_to_do);
> +
> +	return IRQ_HANDLED;
> +}
> diff --git a/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.h b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.h
> new file mode 100644
> index 000000000000..c0d3139ace59
> --- /dev/null
> +++ b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.h
> @@ -0,0 +1,78 @@
> +/*
> + * Copyright © 2018 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + */
> +
> +#ifndef __HYPER_DMABUF_XEN_COMM_H__
> +#define __HYPER_DMABUF_XEN_COMM_H__
> +
> +#include "xen/interface/io/ring.h"
> +#include "xen/xenbus.h"
> +#include "../../hyper_dmabuf_msg.h"
> +
> +extern int xenstored_ready;
> +
> +DEFINE_RING_TYPES(xen_comm, struct hyper_dmabuf_req, struct hyper_dmabuf_resp);
> +
> +struct xen_comm_tx_ring_info {
> +	struct xen_comm_front_ring ring_front;
> +	int rdomain;
> +	int gref_ring;
> +	int irq;
> +	int port;
> +	struct mutex lock;
> +	struct xenbus_watch watch;
> +};
> +
> +struct xen_comm_rx_ring_info {
> +	int sdomain;
> +	int irq;
> +	int evtchn;
> +	struct xen_comm_back_ring ring_back;
> +	struct gnttab_unmap_grant_ref unmap_op;
> +};
> +
> +int xen_be_get_domid(void);
> +
> +int xen_be_init_comm_env(void);
> +
> +/* exporter needs to generate info for page sharing */
> +int xen_be_init_tx_rbuf(int domid);
> +
> +/* importer needs to know about shared page and port numbers
> + * for ring buffer and event channel
> + */
> +int xen_be_init_rx_rbuf(int domid);
> +
> +/* cleans up exporter ring created for given domain */
> +void xen_be_cleanup_tx_rbuf(int domid);
> +
> +/* cleans up importer ring created for given domain */
> +void xen_be_cleanup_rx_rbuf(int domid);
> +
> +void xen_be_destroy_comm(void);
> +
> +/* send request to the remote domain */
> +int xen_be_send_req(int domid, struct hyper_dmabuf_req *req,
> +		    int wait);
> +
> +#endif /* __HYPER_DMABUF_XEN_COMM_H__ */
> diff --git a/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm_list.c b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm_list.c
> new file mode 100644
> index 000000000000..5a8e9d9b737f
> --- /dev/null
> +++ b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm_list.c
> @@ -0,0 +1,158 @@
> +/*
> + * Copyright © 2018 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + * Authors:
> + *    Dongwon Kim <dongwon.kim@intel.com>
> + *    Mateusz Polrola <mateuszx.potrola@intel.com>
> + *
> + */
> +
> +#include <linux/kernel.h>
> +#include <linux/errno.h>
> +#include <linux/slab.h>
> +#include <linux/cdev.h>
> +#include <linux/hashtable.h>
> +#include <xen/grant_table.h>
> +#include "../../hyper_dmabuf_drv.h"
> +#include "hyper_dmabuf_xen_comm.h"
> +#include "hyper_dmabuf_xen_comm_list.h"
> +
> +DECLARE_HASHTABLE(xen_comm_tx_ring_hash, MAX_ENTRY_TX_RING);
> +DECLARE_HASHTABLE(xen_comm_rx_ring_hash, MAX_ENTRY_RX_RING);
> +
> +void xen_comm_ring_table_init(void)
> +{
> +	hash_init(xen_comm_rx_ring_hash);
> +	hash_init(xen_comm_tx_ring_hash);
> +}
> +
> +int xen_comm_add_tx_ring(struct xen_comm_tx_ring_info *ring_info)
> +{
> +	struct xen_comm_tx_ring_info_entry *info_entry;
> +
> +	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
> +
> +	if (!info_entry)
> +		return -ENOMEM;
> +
> +	info_entry->info = ring_info;
> +
> +	hash_add(xen_comm_tx_ring_hash, &info_entry->node,
> +		info_entry->info->rdomain);
> +
> +	return 0;
> +}
> +
> +int xen_comm_add_rx_ring(struct xen_comm_rx_ring_info *ring_info)
> +{
> +	struct xen_comm_rx_ring_info_entry *info_entry;
> +
> +	info_entry = kmalloc(sizeof(*info_entry), GFP_KERNEL);
> +
> +	if (!info_entry)
> +		return -ENOMEM;
> +
> +	info_entry->info = ring_info;
> +
> +	hash_add(xen_comm_rx_ring_hash, &info_entry->node,
> +		info_entry->info->sdomain);
> +
> +	return 0;
> +}
> +
> +struct xen_comm_tx_ring_info *xen_comm_find_tx_ring(int domid)
> +{
> +	struct xen_comm_tx_ring_info_entry *info_entry;
> +	int bkt;
> +
> +	hash_for_each(xen_comm_tx_ring_hash, bkt, info_entry, node)
> +		if (info_entry->info->rdomain == domid)
> +			return info_entry->info;
> +
> +	return NULL;
> +}
> +
> +struct xen_comm_rx_ring_info *xen_comm_find_rx_ring(int domid)
> +{
> +	struct xen_comm_rx_ring_info_entry *info_entry;
> +	int bkt;
> +
> +	hash_for_each(xen_comm_rx_ring_hash, bkt, info_entry, node)
> +		if (info_entry->info->sdomain == domid)
> +			return info_entry->info;
> +
> +	return NULL;
> +}
> +
> +int xen_comm_remove_tx_ring(int domid)
> +{
> +	struct xen_comm_tx_ring_info_entry *info_entry;
> +	int bkt;
> +
> +	hash_for_each(xen_comm_tx_ring_hash, bkt, info_entry, node)
> +		if (info_entry->info->rdomain == domid) {
> +			hash_del(&info_entry->node);
> +			kfree(info_entry);
> +			return 0;
> +		}
> +
> +	return -ENOENT;
> +}
> +
> +int xen_comm_remove_rx_ring(int domid)
> +{
> +	struct xen_comm_rx_ring_info_entry *info_entry;
> +	int bkt;
> +
> +	hash_for_each(xen_comm_rx_ring_hash, bkt, info_entry, node)
> +		if (info_entry->info->sdomain == domid) {
> +			hash_del(&info_entry->node);
> +			kfree(info_entry);
> +			return 0;
> +		}
> +
> +	return -ENOENT;
> +}
> +
> +void xen_comm_foreach_tx_ring(void (*func)(int domid))
> +{
> +	struct xen_comm_tx_ring_info_entry *info_entry;
> +	struct hlist_node *tmp;
> +	int bkt;
> +
> +	hash_for_each_safe(xen_comm_tx_ring_hash, bkt, tmp,
> +			   info_entry, node) {
> +		func(info_entry->info->rdomain);
> +	}
> +}
> +
> +void xen_comm_foreach_rx_ring(void (*func)(int domid))
> +{
> +	struct xen_comm_rx_ring_info_entry *info_entry;
> +	struct hlist_node *tmp;
> +	int bkt;
> +
> +	hash_for_each_safe(xen_comm_rx_ring_hash, bkt, tmp,
> +			   info_entry, node) {
> +		func(info_entry->info->sdomain);
> +	}
> +}
> diff --git a/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm_list.h b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm_list.h
> new file mode 100644
> index 000000000000..8d4b52bd41b0
> --- /dev/null
> +++ b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm_list.h
> @@ -0,0 +1,67 @@
> +/*
> + * Copyright © 2018 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + */
> +
> +#ifndef __HYPER_DMABUF_XEN_COMM_LIST_H__
> +#define __HYPER_DMABUF_XEN_COMM_LIST_H__
> +
> +/* number of bits to be used for exported dmabufs hash table */
> +#define MAX_ENTRY_TX_RING 7
> +/* number of bits to be used for imported dmabufs hash table */
> +#define MAX_ENTRY_RX_RING 7
> +
> +struct xen_comm_tx_ring_info_entry {
> +	struct xen_comm_tx_ring_info *info;
> +	struct hlist_node node;
> +};
> +
> +struct xen_comm_rx_ring_info_entry {
> +	struct xen_comm_rx_ring_info *info;
> +	struct hlist_node node;
> +};
> +
> +void xen_comm_ring_table_init(void);
> +
> +int xen_comm_add_tx_ring(struct xen_comm_tx_ring_info *ring_info);
> +
> +int xen_comm_add_rx_ring(struct xen_comm_rx_ring_info *ring_info);
> +
> +int xen_comm_remove_tx_ring(int domid);
> +
> +int xen_comm_remove_rx_ring(int domid);
> +
> +struct xen_comm_tx_ring_info *xen_comm_find_tx_ring(int domid);
> +
> +struct xen_comm_rx_ring_info *xen_comm_find_rx_ring(int domid);
> +
> +/* iterates over all exporter rings and calls provided
> + * function for each of them
> + */
> +void xen_comm_foreach_tx_ring(void (*func)(int domid));
> +
> +/* iterates over all importer rings and calls provided
> + * function for each of them
> + */
> +void xen_comm_foreach_rx_ring(void (*func)(int domid));
> +
> +#endif // __HYPER_DMABUF_XEN_COMM_LIST_H__
> diff --git a/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_drv.c b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_drv.c
> new file mode 100644
> index 000000000000..8122dc15b4cb
> --- /dev/null
> +++ b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_drv.c
> @@ -0,0 +1,46 @@
> +/*
> + * Copyright © 2018 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + * Authors:
> + *    Dongwon Kim <dongwon.kim@intel.com>
> + *    Mateusz Polrola <mateuszx.potrola@intel.com>
> + *
> + */
> +
> +#include "../../hyper_dmabuf_drv.h"
> +#include "hyper_dmabuf_xen_comm.h"
> +#include "hyper_dmabuf_xen_shm.h"
> +
> +struct hyper_dmabuf_bknd_ops xen_bknd_ops = {
> +	.init = NULL, /* not needed for xen */
> +	.cleanup = NULL, /* not needed for xen */
> +	.get_vm_id = xen_be_get_domid,
> +	.share_pages = xen_be_share_pages,
> +	.unshare_pages = xen_be_unshare_pages,
> +	.map_shared_pages = (void *)xen_be_map_shared_pages,
> +	.unmap_shared_pages = xen_be_unmap_shared_pages,
> +	.init_comm_env = xen_be_init_comm_env,
> +	.destroy_comm = xen_be_destroy_comm,
> +	.init_rx_ch = xen_be_init_rx_rbuf,
> +	.init_tx_ch = xen_be_init_tx_rbuf,
> +	.send_req = xen_be_send_req,
> +};
> diff --git a/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_drv.h b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_drv.h
> new file mode 100644
> index 000000000000..c97dc1c5d042
> --- /dev/null
> +++ b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_drv.h
> @@ -0,0 +1,53 @@
> +/*
> + * Copyright © 2018 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + */
> +
> +#ifndef __HYPER_DMABUF_XEN_DRV_H__
> +#define __HYPER_DMABUF_XEN_DRV_H__
> +#include <xen/interface/grant_table.h>
> +
> +extern struct hyper_dmabuf_bknd_ops xen_bknd_ops;
> +
> +/* Main purpose of this structure is to keep
> + * all references created or acquired for sharing
> + * pages with another domain for freeing those later
> + * when unsharing.
> + */
> +struct xen_shared_pages_info {
> +	/* top level refid */
> +	grant_ref_t lvl3_gref;
> +
> +	/* page of top level addressing, it contains refids of 2nd lvl pages */
> +	grant_ref_t *lvl3_table;
> +
> +	/* table of 2nd level pages, that contains refids to data pages */
> +	grant_ref_t *lvl2_table;
> +
> +	/* unmap ops for mapped pages */
> +	struct gnttab_unmap_grant_ref *unmap_ops;
> +
> +	/* data pages to be unmapped */
> +	struct page **data_pages;
> +};
> +
> +#endif // __HYPER_DMABUF_XEN_COMM_H__
> diff --git a/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_shm.c b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_shm.c
> new file mode 100644
> index 000000000000..b2dcef34e10f
> --- /dev/null
> +++ b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_shm.c
> @@ -0,0 +1,525 @@
> +/*
> + * Copyright © 2018 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + * Authors:
> + *    Dongwon Kim <dongwon.kim@intel.com>
> + *    Mateusz Polrola <mateuszx.potrola@intel.com>
> + *
> + */
> +
> +#include <linux/slab.h>
> +#include <xen/grant_table.h>
> +#include <asm/xen/page.h>
> +#include "hyper_dmabuf_xen_drv.h"
> +#include "../../hyper_dmabuf_drv.h"
> +
> +#define REFS_PER_PAGE (PAGE_SIZE/sizeof(grant_ref_t))
> +
> +/*
> + * Creates 2 level page directory structure for referencing shared pages.
> + * Top level page is a single page that contains up to 1024 refids that
> + * point to 2nd level pages.
> + *
> + * Each 2nd level page contains up to 1024 refids that point to shared
> + * data pages.
> + *
> + * There will always be one top level page and number of 2nd level pages
> + * depends on number of shared data pages.
> + *
> + *      3rd level page                2nd level pages            Data pages
> + * +-------------------------+   ┌>+--------------------+ ┌>+------------+
> + * |2nd level page 0 refid   |---┘ |Data page 0 refid   |-┘ |Data page 0 |
> + * |2nd level page 1 refid   |---┐ |Data page 1 refid   |-┐ +------------+
> + * |           ...           |   | |     ....           | |
> + * |2nd level page 1023 refid|-┐ | |Data page 1023 refid| └>+------------+
> + * +-------------------------+ | | +--------------------+   |Data page 1 |
> + *                             | |                          +------------+
> + *                             | └>+--------------------+
> + *                             |   |Data page 1024 refid|
> + *                             |   |Data page 1025 refid|
> + *                             |   |       ...          |
> + *                             |   |Data page 2047 refid|
> + *                             |   +--------------------+
> + *                             |
> + *                             |        .....
> + *                             └-->+-----------------------+
> + *                                 |Data page 1047552 refid|
> + *                                 |Data page 1047553 refid|
> + *                                 |       ...             |
> + *                                 |Data page 1048575 refid|
> + *                                 +-----------------------+
> + *
> + * Using such 2 level structure it is possible to reference up to 4GB of
> + * shared data using single refid pointing to top level page.
> + *
> + * Returns refid of top level page.
> + */
This seems to be over-engineered, IMO
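(For reference, the arithmetic behind the 4GB figure above: with 4 KiB
pages and a 4-byte grant_ref_t, REFS_PER_PAGE = 4096 / 4 = 1024, so one
top-level page addresses 1024 * 1024 = 1048576 data pages, i.e. 4 GiB.)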
> +int xen_be_share_pages(struct page **pages, int domid, int nents,
> +		       void **refs_info)
> +{
> +	grant_ref_t lvl3_gref;
> +	grant_ref_t *lvl2_table;
> +	grant_ref_t *lvl3_table;
> +
> +	/*
> +	 * Calculate number of pages needed for 2nd level addressing:
> +	 */
> +	int n_lvl2_grefs = (nents/REFS_PER_PAGE +
> +			   ((nents % REFS_PER_PAGE) ? 1 : 0));
> +
> +	struct xen_shared_pages_info *sh_pages_info;
> +	int i;
> +
> +	lvl3_table = (grant_ref_t *)__get_free_pages(GFP_KERNEL, 1);
> +	lvl2_table = (grant_ref_t *)__get_free_pages(GFP_KERNEL, n_lvl2_grefs);
> +
> +	sh_pages_info = kmalloc(sizeof(*sh_pages_info), GFP_KERNEL);
> +
> +	if (!sh_pages_info)
> +		return -ENOMEM;
> +
> +	*refs_info = (void *)sh_pages_info;
> +
> +	/* share data pages in readonly mode for security */
> +	for (i = 0; i < nents; i++) {
> +		lvl2_table[i] = gnttab_grant_foreign_access(domid,
> +					pfn_to_mfn(page_to_pfn(pages[i])),
> +					true /* read only */);
> +		if (lvl2_table[i] == -ENOSPC) {
> +			dev_err(hy_drv_priv->dev,
> +				"No more space left in grant table\n");
> +
> +			/* Unshare all already shared pages for lvl2 */
> +			while (i--) {
> +				gnttab_end_foreign_access_ref(lvl2_table[i], 0);
> +				gnttab_free_grant_reference(lvl2_table[i]);
> +			}
> +			goto err_cleanup;
> +		}
> +	}
> +
> +	/* Share 2nd level addressing pages in readonly mode */
> +	for (i = 0; i < n_lvl2_grefs; i++) {
> +		lvl3_table[i] = gnttab_grant_foreign_access(domid,
> +					virt_to_mfn(
> +					(unsigned long)lvl2_table+i*PAGE_SIZE),
> +					true);
> +
> +		if (lvl3_table[i] == -ENOSPC) {
> +			dev_err(hy_drv_priv->dev,
> +				"No more space left in grant table\n");
> +
> +			/* Unshare all already shared pages for lvl3 */
> +			while (i--) {
> +				gnttab_end_foreign_access_ref(lvl3_table[i], 1);
> +				gnttab_free_grant_reference(lvl3_table[i]);
> +			}
> +
> +			/* Unshare all pages for lvl2 */
> +			while (nents--) {
> +				gnttab_end_foreign_access_ref(
> +							lvl2_table[nents], 0);
> +				gnttab_free_grant_reference(lvl2_table[nents]);
> +			}
> +
> +			goto err_cleanup;
> +		}
> +	}
> +
> +	/* Share lvl3_table in readonly mode */
> +	lvl3_gref = gnttab_grant_foreign_access(domid,
> +			virt_to_mfn((unsigned long)lvl3_table),
> +			true);
> +
> +	if (lvl3_gref == -ENOSPC) {
> +		dev_err(hy_drv_priv->dev,
> +			"No more space left in grant table\n");
> +
> +		/* Unshare all pages for lvl3 */
> +		while (i--) {
> +			gnttab_end_foreign_access_ref(lvl3_table[i], 1);
> +			gnttab_free_grant_reference(lvl3_table[i]);
> +		}
> +
> +		/* Unshare all pages for lvl2 */
> +		while (nents--) {
> +			gnttab_end_foreign_access_ref(lvl2_table[nents], 0);
> +			gnttab_free_grant_reference(lvl2_table[nents]);
> +		}
> +
> +		goto err_cleanup;
> +	}
> +
> +	/* Store lvl3_table page to be freed later */
> +	sh_pages_info->lvl3_table = lvl3_table;
> +
> +	/* Store lvl2_table pages to be freed later */
> +	sh_pages_info->lvl2_table = lvl2_table;
> +
> +
> +	/* Store exported pages refid to be unshared later */
> +	sh_pages_info->lvl3_gref = lvl3_gref;
> +
> +	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
> +	return lvl3_gref;
> +
> +err_cleanup:
> +	free_pages((unsigned long)lvl2_table, n_lvl2_grefs);
> +	free_pages((unsigned long)lvl3_table, 1);
> +
> +	return -ENOSPC;
> +}
> +
> +int xen_be_unshare_pages(void **refs_info, int nents)
> +{
> +	struct xen_shared_pages_info *sh_pages_info;
> +	int n_lvl2_grefs = (nents/REFS_PER_PAGE +
> +			    ((nents % REFS_PER_PAGE) ? 1 : 0));
> +	int i;
> +
> +	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
> +	sh_pages_info = (struct xen_shared_pages_info *)(*refs_info);
> +
> +	if (sh_pages_info->lvl3_table == NULL ||
> +	    sh_pages_info->lvl2_table == NULL ||
> +	    sh_pages_info->lvl3_gref == -1) {
> +		dev_warn(hy_drv_priv->dev,
> +			 "gref table for hyper_dmabuf already cleaned up\n");
> +		return 0;
> +	}
> +
> +	/* End foreign access for data pages, but do not free them */
> +	for (i = 0; i < nents; i++) {
> +		if (gnttab_query_foreign_access(sh_pages_info->lvl2_table[i]))
> +			dev_warn(hy_drv_priv->dev, "refid not shared !!\n");
> +
> +		gnttab_end_foreign_access_ref(sh_pages_info->lvl2_table[i], 0);
> +		gnttab_free_grant_reference(sh_pages_info->lvl2_table[i]);
> +	}
> +
> +	/* End foreign access for 2nd level addressing pages */
> +	for (i = 0; i < n_lvl2_grefs; i++) {
> +		if (gnttab_query_foreign_access(sh_pages_info->lvl3_table[i]))
> +			dev_warn(hy_drv_priv->dev, "refid not shared !!\n");
> +
> +		if (!gnttab_end_foreign_access_ref(
> +					sh_pages_info->lvl3_table[i], 1))
> +			dev_warn(hy_drv_priv->dev, "refid still in use!!!\n");
> +
> +		gnttab_free_grant_reference(sh_pages_info->lvl3_table[i]);
> +	}
> +
> +	/* End foreign access for top level addressing page */
> +	if (gnttab_query_foreign_access(sh_pages_info->lvl3_gref))
> +		dev_warn(hy_drv_priv->dev, "gref not shared !!\n");
> +
> +	gnttab_end_foreign_access_ref(sh_pages_info->lvl3_gref, 1);
> +	gnttab_free_grant_reference(sh_pages_info->lvl3_gref);
> +
> +	/* freeing all pages used for 2 level addressing */
> +	free_pages((unsigned long)sh_pages_info->lvl2_table, n_lvl2_grefs);
> +	free_pages((unsigned long)sh_pages_info->lvl3_table, 1);
> +
> +	sh_pages_info->lvl3_gref = -1;
> +	sh_pages_info->lvl2_table = NULL;
> +	sh_pages_info->lvl3_table = NULL;
> +	kfree(sh_pages_info);
> +	sh_pages_info = NULL;
> +
> +	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
> +	return 0;
> +}
> +
> +/* Maps the provided top level ref id and then returns an array of pages
> + * containing data refs.
> + */
> +struct page **xen_be_map_shared_pages(unsigned long lvl3_gref, int domid,
> +				      int nents, void **refs_info)
> +{
> +	struct page *lvl3_table_page;
> +	struct page **lvl2_table_pages;
> +	struct page **data_pages;
> +	struct xen_shared_pages_info *sh_pages_info;
> +
> +	grant_ref_t *lvl3_table;
> +	grant_ref_t *lvl2_table;
> +
> +	struct gnttab_map_grant_ref lvl3_map_ops;
> +	struct gnttab_unmap_grant_ref lvl3_unmap_ops;
> +
> +	struct gnttab_map_grant_ref *lvl2_map_ops;
> +	struct gnttab_unmap_grant_ref *lvl2_unmap_ops;
> +
> +	struct gnttab_map_grant_ref *data_map_ops;
> +	struct gnttab_unmap_grant_ref *data_unmap_ops;
> +
> +	/* # of grefs in the last page of lvl2 table */
> +	int nents_last = (nents - 1) % REFS_PER_PAGE + 1;
> +	int n_lvl2_grefs = (nents / REFS_PER_PAGE) +
> +			   ((nents_last > 0) ? 1 : 0) -
> +			   (nents_last == REFS_PER_PAGE);
> +	int i, j, k;
> +
> +	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
> +
> +	sh_pages_info = kmalloc(sizeof(*sh_pages_info), GFP_KERNEL);
> +	*refs_info = (void *) sh_pages_info;
> +
> +	lvl2_table_pages = kcalloc(n_lvl2_grefs, sizeof(struct page *),
> +				   GFP_KERNEL);
> +
> +	data_pages = kcalloc(nents, sizeof(struct page *), GFP_KERNEL);
> +
> +	lvl2_map_ops = kcalloc(n_lvl2_grefs, sizeof(*lvl2_map_ops),
> +			       GFP_KERNEL);
> +
> +	lvl2_unmap_ops = kcalloc(n_lvl2_grefs, sizeof(*lvl2_unmap_ops),
> +				 GFP_KERNEL);
> +
> +	data_map_ops = kcalloc(nents, sizeof(*data_map_ops), GFP_KERNEL);
> +	data_unmap_ops = kcalloc(nents, sizeof(*data_unmap_ops), GFP_KERNEL);
> +
> +	/* Map top level addressing page */
> +	if (gnttab_alloc_pages(1, &lvl3_table_page)) {
> +		dev_err(hy_drv_priv->dev, "Cannot allocate pages\n");
> +		return NULL;
> +	}
> +
> +	lvl3_table = (grant_ref_t *)pfn_to_kaddr(page_to_pfn(lvl3_table_page));
> +
> +	gnttab_set_map_op(&lvl3_map_ops, (unsigned long)lvl3_table,
> +			  GNTMAP_host_map | GNTMAP_readonly,
> +			  (grant_ref_t)lvl3_gref, domid);
> +
> +	gnttab_set_unmap_op(&lvl3_unmap_ops, (unsigned long)lvl3_table,
> +			    GNTMAP_host_map | GNTMAP_readonly, -1);
> +
> +	if (gnttab_map_refs(&lvl3_map_ops, NULL, &lvl3_table_page, 1)) {
> +		dev_err(hy_drv_priv->dev,
> +			"HYPERVISOR map grant ref failed");
> +		return NULL;
> +	}
> +
> +	if (lvl3_map_ops.status) {
> +		dev_err(hy_drv_priv->dev,
> +			"HYPERVISOR map grant ref failed status = %d",
> +			lvl3_map_ops.status);
> +
> +		goto error_cleanup_lvl3;
> +	} else {
> +		lvl3_unmap_ops.handle = lvl3_map_ops.handle;
> +	}
> +
> +	/* Map all second level pages */
> +	if (gnttab_alloc_pages(n_lvl2_grefs, lvl2_table_pages)) {
> +		dev_err(hy_drv_priv->dev, "Cannot allocate pages\n");
> +		goto error_cleanup_lvl3;
> +	}
> +
> +	for (i = 0; i < n_lvl2_grefs; i++) {
> +		lvl2_table = (grant_ref_t *)pfn_to_kaddr(
> +					page_to_pfn(lvl2_table_pages[i]));
> +		gnttab_set_map_op(&lvl2_map_ops[i],
> +				  (unsigned long)lvl2_table, GNTMAP_host_map |
> +				  GNTMAP_readonly,
> +				  lvl3_table[i], domid);
> +		gnttab_set_unmap_op(&lvl2_unmap_ops[i],
> +				    (unsigned long)lvl2_table, GNTMAP_host_map |
> +				    GNTMAP_readonly, -1);
> +	}
> +
> +	/* Unmap top level page, as it won't be needed any longer */
> +	if (gnttab_unmap_refs(&lvl3_unmap_ops, NULL,
> +			      &lvl3_table_page, 1)) {
> +		dev_err(hy_drv_priv->dev,
> +			"xen: cannot unmap top level page\n");
> +		return NULL;
> +	}
> +
> +	/* Mark that page was unmapped */
> +	lvl3_unmap_ops.handle = -1;
> +
> +	if (gnttab_map_refs(lvl2_map_ops, NULL,
> +			    lvl2_table_pages, n_lvl2_grefs)) {
> +		dev_err(hy_drv_priv->dev,
> +			"HYPERVISOR map grant ref failed");
> +		return NULL;
> +	}
> +
> +	/* Checks if pages were mapped correctly */
> +	for (i = 0; i < n_lvl2_grefs; i++) {
> +		if (lvl2_map_ops[i].status) {
> +			dev_err(hy_drv_priv->dev,
> +				"HYPERVISOR map grant ref failed status = %d",
> +				lvl2_map_ops[i].status);
> +			goto error_cleanup_lvl2;
> +		} else {
> +			lvl2_unmap_ops[i].handle = lvl2_map_ops[i].handle;
> +		}
> +	}
> +
> +	if (gnttab_alloc_pages(nents, data_pages)) {
> +		dev_err(hy_drv_priv->dev,
> +			"Cannot allocate pages\n");
> +		goto error_cleanup_lvl2;
> +	}
> +
> +	k = 0;
> +
> +	for (i = 0; i < n_lvl2_grefs - 1; i++) {
> +		lvl2_table = pfn_to_kaddr(page_to_pfn(lvl2_table_pages[i]));
> +		for (j = 0; j < REFS_PER_PAGE; j++) {
> +			gnttab_set_map_op(&data_map_ops[k],
> +				(unsigned long)pfn_to_kaddr(
> +						page_to_pfn(data_pages[k])),
> +				GNTMAP_host_map | GNTMAP_readonly,
> +				lvl2_table[j], domid);
> +
> +			gnttab_set_unmap_op(&data_unmap_ops[k],
> +				(unsigned long)pfn_to_kaddr(
> +						page_to_pfn(data_pages[k])),
> +				GNTMAP_host_map | GNTMAP_readonly, -1);
> +			k++;
> +		}
> +	}
> +
> +	/* for grefs in the last lvl2 table page */
> +	lvl2_table = pfn_to_kaddr(page_to_pfn(
> +				lvl2_table_pages[n_lvl2_grefs - 1]));
> +
> +	for (j = 0; j < nents_last; j++) {
> +		gnttab_set_map_op(&data_map_ops[k],
> +			(unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
> +			GNTMAP_host_map | GNTMAP_readonly,
> +			lvl2_table[j], domid);
> +
> +		gnttab_set_unmap_op(&data_unmap_ops[k],
> +			(unsigned long)pfn_to_kaddr(page_to_pfn(data_pages[k])),
> +			GNTMAP_host_map | GNTMAP_readonly, -1);
> +		k++;
> +	}
> +
> +	if (gnttab_map_refs(data_map_ops, NULL,
> +			    data_pages, nents)) {
> +		dev_err(hy_drv_priv->dev,
> +			"HYPERVISOR map grant ref failed\n");
> +		return NULL;
> +	}
> +
> +	/* unmapping lvl2 table pages */
> +	if (gnttab_unmap_refs(lvl2_unmap_ops,
> +			      NULL, lvl2_table_pages,
> +			      n_lvl2_grefs)) {
> +		dev_err(hy_drv_priv->dev,
> +			"Cannot unmap 2nd level refs\n");
> +		return NULL;
> +	}
> +
> +	/* Mark that pages were unmapped */
> +	for (i = 0; i < n_lvl2_grefs; i++)
> +		lvl2_unmap_ops[i].handle = -1;
> +
> +	for (i = 0; i < nents; i++) {
> +		if (data_map_ops[i].status) {
> +			dev_err(hy_drv_priv->dev,
> +				"HYPERVISOR map grant ref failed status = %d\n",
> +				data_map_ops[i].status);
> +			goto error_cleanup_data;
> +		} else {
> +			data_unmap_ops[i].handle = data_map_ops[i].handle;
> +		}
> +	}
> +
> +	/* store these references for unmapping in the future */
> +	sh_pages_info->unmap_ops = data_unmap_ops;
> +	sh_pages_info->data_pages = data_pages;
> +
> +	gnttab_free_pages(1, &lvl3_table_page);
> +	gnttab_free_pages(n_lvl2_grefs, lvl2_table_pages);
> +	kfree(lvl2_table_pages);
> +	kfree(lvl2_map_ops);
> +	kfree(lvl2_unmap_ops);
> +	kfree(data_map_ops);
> +
> +	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
> +	return data_pages;
> +
> +error_cleanup_data:
> +	gnttab_unmap_refs(data_unmap_ops, NULL, data_pages,
> +			  nents);
> +
> +	gnttab_free_pages(nents, data_pages);
> +
> +error_cleanup_lvl2:
> +	if (lvl2_unmap_ops[0].handle != -1)
> +		gnttab_unmap_refs(lvl2_unmap_ops, NULL,
> +				  lvl2_table_pages, n_lvl2_grefs);
> +	gnttab_free_pages(n_lvl2_grefs, lvl2_table_pages);
> +
> +error_cleanup_lvl3:
> +	if (lvl3_unmap_ops.handle != -1)
> +		gnttab_unmap_refs(&lvl3_unmap_ops, NULL,
> +				  &lvl3_table_page, 1);
> +	gnttab_free_pages(1, &lvl3_table_page);
> +
> +	kfree(lvl2_table_pages);
> +	kfree(lvl2_map_ops);
> +	kfree(lvl2_unmap_ops);
> +	kfree(data_map_ops);
> +
> +
> +	return NULL;
> +}
> +
> +int xen_be_unmap_shared_pages(void **refs_info, int nents)
> +{
> +	struct xen_shared_pages_info *sh_pages_info;
> +
> +	dev_dbg(hy_drv_priv->dev, "%s entry\n", __func__);
> +
> +	sh_pages_info = (struct xen_shared_pages_info *)(*refs_info);
> +
> +	if (sh_pages_info->unmap_ops == NULL ||
> +	    sh_pages_info->data_pages == NULL) {
> +		dev_warn(hy_drv_priv->dev,
> +			 "pages already cleaned up or buffer not imported yet\n");
> +		return 0;
> +	}
> +
> +	if (gnttab_unmap_refs(sh_pages_info->unmap_ops, NULL,
> +			      sh_pages_info->data_pages, nents)) {
> +		dev_err(hy_drv_priv->dev, "Cannot unmap data pages\n");
> +		return -EFAULT;
> +	}
> +
> +	gnttab_free_pages(nents, sh_pages_info->data_pages);
> +
> +	kfree(sh_pages_info->data_pages);
> +	kfree(sh_pages_info->unmap_ops);
> +	sh_pages_info->unmap_ops = NULL;
> +	sh_pages_info->data_pages = NULL;
> +	kfree(sh_pages_info);
> +	sh_pages_info = NULL;
> +
> +	dev_dbg(hy_drv_priv->dev, "%s exit\n", __func__);
> +	return 0;
> +}
> diff --git a/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_shm.h b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_shm.h
> new file mode 100644
> index 000000000000..c39f241351f8
> --- /dev/null
> +++ b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_shm.h
> @@ -0,0 +1,46 @@
> +/*
> + * Copyright © 2018 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice (including the next
> + * paragraph) shall be included in all copies or substantial portions of the
> + * Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + *
> + */
> +
> +#ifndef __HYPER_DMABUF_XEN_SHM_H__
> +#define __HYPER_DMABUF_XEN_SHM_H__
> +
> +/* Collects all reference numbers for 2nd level shared pages and
> + * creates a table with those in the 1st level shared page, then returns
> + * the reference number for this top level table.
> + */
> +int xen_be_share_pages(struct page **pages, int domid, int nents,
> +		    void **refs_info);
> +
> +int xen_be_unshare_pages(void **refs_info, int nents);
> +
> +/* Maps the provided top level ref id and then returns an array of pages
> + * containing data refs.
> + */
> +struct page **xen_be_map_shared_pages(unsigned long lvl3_gref, int domid,
> +				      int nents,
> +				      void **refs_info);
> +
> +int xen_be_unmap_shared_pages(void **refs_info, int nents);
> +
> +#endif /* __HYPER_DMABUF_XEN_SHM_H__ */
> diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c
> index 18c1cd735ea2..3320f9dcc769 100644
> --- a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c
> +++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_drv.c
> @@ -42,6 +42,10 @@
>   #include "hyper_dmabuf_list.h"
>   #include "hyper_dmabuf_id.h"
>   
> +#ifdef CONFIG_HYPER_DMABUF_XEN
> +#include "backends/xen/hyper_dmabuf_xen_drv.h"
> +#endif
> +
>   MODULE_LICENSE("GPL and additional rights");
>   MODULE_AUTHOR("Intel Corporation");
>   
> @@ -145,7 +149,13 @@ static int __init hyper_dmabuf_drv_init(void)
>   		return ret;
>   	}
>   
> +/* currently only supports XEN hypervisor */
> +#ifdef CONFIG_HYPER_DMABUF_XEN
> +	hy_drv_priv->bknd_ops = &xen_bknd_ops;
> +#else
>   	hy_drv_priv->bknd_ops = NULL;
> +	pr_err("hyper_dmabuf drv currently supports XEN only.\n");
> +#endif
>   
>   	if (hy_drv_priv->bknd_ops == NULL) {
>   		pr_err("Hyper_dmabuf: no backend found\n");
>


* Re: [RFC, v2, 2/9] hyper_dmabuf: architecture specification and reference guide
  2018-02-14  1:50 ` [RFC PATCH v2 2/9] hyper_dmabuf: architecture specification and reference guide Dongwon Kim
  2018-02-23 16:15   ` [Xen-devel] " Roger Pau Monné
@ 2018-04-10  9:52   ` Oleksandr Andrushchenko
  1 sibling, 0 replies; 21+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-10  9:52 UTC (permalink / raw)
  To: Dongwon Kim, linux-kernel, linaro-mm-sig, xen-devel
  Cc: dri-devel, mateuszx.potrola

Sorry for top-posting

Can we have all this go into some header file which will not only
describe the structures/commands/responses/etc, but will also allow
drivers to use those directly without defining the same one more time
in the code? For example, this is how it is done in Xen [1]. This way,
you can keep documentation and the protocol implementation in sync
easily.


On 02/14/2018 03:50 AM, Dongwon Kim wrote:
> Reference document for hyper_DMABUF driver
>
> Documentation/hyper-dmabuf-sharing.txt
>
> Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
> ---
>   Documentation/hyper-dmabuf-sharing.txt | 734 +++++++++++++++++++++++++++++++++
>   1 file changed, 734 insertions(+)
>   create mode 100644 Documentation/hyper-dmabuf-sharing.txt
>
> diff --git a/Documentation/hyper-dmabuf-sharing.txt b/Documentation/hyper-dmabuf-sharing.txt
> new file mode 100644
> index 000000000000..928e411931e3
> --- /dev/null
> +++ b/Documentation/hyper-dmabuf-sharing.txt
> @@ -0,0 +1,734 @@
> +Linux Hyper DMABUF Driver
> +
> +------------------------------------------------------------------------------
> +Section 1. Overview
> +------------------------------------------------------------------------------
> +
> +Hyper_DMABUF driver is a Linux device driver running on multiple Virtual
> +Machines (VMs), which expands DMA-BUF sharing capability to the VM environment
> +where multiple different OS instances need to share same physical data without
> +data-copy across VMs.
> +
> +To share a DMA_BUF across VMs, an instance of the Hyper_DMABUF drv on the
> +exporting VM (so called, “exporter”) imports a local DMA_BUF from the original
> +producer of the buffer, then re-exports it with an unique ID, hyper_dmabuf_id
> +for the buffer to the importing VM (so called, “importer”).
> +
> +Another instance of the Hyper_DMABUF driver on importer registers
> +a hyper_dmabuf_id together with reference information for the shared physical
> +pages associated with the DMA_BUF to its database when the export happens.
> +
> +The actual mapping of the DMA_BUF on the importer’s side is done by
> +the Hyper_DMABUF driver when user space issues the IOCTL command to access
> +the shared DMA_BUF. The Hyper_DMABUF driver works as both an importing and
> +exporting driver as is, that is, no special configuration is required.
> +Consequently, only a single module per VM is needed to enable cross-VM DMA_BUF
> +exchange.
> +
> +------------------------------------------------------------------------------
> +Section 2. Architecture
> +------------------------------------------------------------------------------
> +
> +1. Hyper_DMABUF ID
> +
> +hyper_dmabuf_id is a global handle for shared DMA BUFs, which is compatible
> +across VMs. It is a key used by the importer to retrieve information about
> +shared Kernel pages behind the DMA_BUF structure from the IMPORT list. When
> +a DMA_BUF is exported to another domain, its hyper_dmabuf_id and META data
> +are also kept in the EXPORT list by the exporter for further synchronization
> +of control over the DMA_BUF.
> +
> +hyper_dmabuf_id is “targeted”, meaning it is valid only in exporting (owner of
> +the buffer) and importing VMs, where the corresponding hyper_dmabuf_id is
> +stored in their database (EXPORT and IMPORT lists).
> +
> +A user-space application specifies the targeted VM id in the user parameter
> +when it calls the IOCTL command to export shared DMA_BUF to another VM.
> +
> +hyper_dmabuf_id_t is the data type for hyper_dmabuf_id. It is defined as
> +a 16-byte structure containing id and rng_key[3] as its elements.
> +
> +typedef struct {
> +        int id;
> +        int rng_key[3]; /* 12bytes long random number */
> +} hyper_dmabuf_id_t;
> +
> +The first element in the hyper_dmabuf_id structure, int id, is combined
> +data of a count number generated by the driver running on the exporter
> +and the exporter’s ID. The VM’s ID is a one-byte value located at the
> +MSB of int id. The remaining three bytes in int id are reserved for a count
> +number.
> +
> +However, there is a limit related to this count number, which is 1000.
> +Therefore, only a little more than one byte starting from the LSB is actually
> +for storing this count number.
> +
> +#define HYPER_DMABUF_ID_CREATE(domid, id) \
> +        ((((domid) & 0xFF) << 24) | ((id) & 0xFFFFFF))
> +
> +This limit on the count number directly means the maximum number of DMA BUFs
> +that can be shared simultaneously by one VM. The second element of
> +hyper_dmabuf_id, that is int rng_key[3], is an array of three integers. These
> +numbers are generated by Linux’s native random number generation mechanism.
> +This field is added to enhance the security of the Hyper DMABUF driver by
> +maximizing the entropy of hyper_dmabuf_id (that is, preventing it from being
> +guessed by a security attacker).
> +
> +Once DMA_BUF is no longer shared, the hyper_dmabuf_id associated with
> +the DMA_BUF is released, but the count number in hyper_dmabuf_id is saved in
> +the ID list for reuse. However, random keys stored in int rng_key[3] are not
> +reused. Instead, those keys are always filled with freshly generated random
> +keys for security.
> +
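To make the bit layout concrete: mirroring HYPER_DMABUF_ID_CREATE above,
the two components could be pulled back out like this (these inverse
macros are illustrative only; they are not part of the driver):

#define HYPER_DMABUF_DOM_ID(id)  (((id) >> 24) & 0xFF)   /* exporter VM */
#define HYPER_DMABUF_CNT(id)     ((id) & 0xFFFFFF)       /* count number */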
> +2. IOCTLs
> +
> +a. IOCTL_HYPER_DMABUF_TX_CH_SETUP
> +
> +This type of IOCTL is used for initialization of a one-directional transmit
> +communication channel with a remote domain.
> +
> +The user space argument for this type of IOCTL is defined as:
> +
> +struct ioctl_hyper_dmabuf_tx_ch_setup {
> +    /* IN parameters */
> +    /* Remote domain id */
> +    int remote_domain;
> +};
> +
> +b. IOCTL_HYPER_DMABUF_RX_CH_SETUP
> +
> +This type of IOCTL is used for initialization of a one-directional receive
> +communication channel with a remote domain.
> +
> +The user space argument for this type of IOCTL is defined as:
> +
> +struct ioctl_hyper_dmabuf_rx_ch_setup {
> +    /* IN parameters */
> +    /* Source domain id */
> +    int source_domain;
> +};
> +
> +c. IOCTL_HYPER_DMABUF_EXPORT_REMOTE
> +
> +This type of IOCTL is used to export a DMA BUF to another VM. When a user
> +space application makes this call to the driver, it extracts Kernel pages
> +associated with the DMA_BUF, then makes those shared with the importing VM.
> +
> +All reference information for these shared pages and the hyper_dmabuf_id is
> +created, then passed to the importing domain through a communications
> +channel for synchronous registration. In the meantime, the hyper_dmabuf_id
> +for the shared DMA_BUF is also returned to user-space application.
> +
> +This IOCTL can accept a reference to “user-defined” data as well as a FD
> +for the DMA BUF. This private data is then attached to the DMA BUF and
> +exported together with it.
> +
> +More details regarding this private data can be found in the chapter on
> +“Hyper_DMABUF Private Data”.
> +
> +The user space argument for this type of IOCTL is defined as:
> +
> +struct ioctl_hyper_dmabuf_export_remote {
> +    /* IN parameters */
> +    /* DMA buf fd to be exported */
> +    int dmabuf_fd;
> +    /* Domain id to which buffer should be exported */
> +    int remote_domain;
> +    /* exported dma buf id */
> +    hyper_dmabuf_id_t hid;
> +    /* size of private data */
> +    int sz_priv;
> +    /* ptr to the private data for Hyper_DMABUF */
> +    char *priv;
> +};
> +
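A minimal user-space sketch of this call could look as follows; the
device node name and the target domain id are assumptions here, not
taken from the text:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/hyper_dmabuf.h>

int export_buf(int dmabuf_fd, char *meta, int meta_sz,
               hyper_dmabuf_id_t *hid_out)
{
	struct ioctl_hyper_dmabuf_export_remote arg = {
		.dmabuf_fd     = dmabuf_fd,
		.remote_domain = 2,           /* assumed importer VM */
		.sz_priv       = meta_sz,
		.priv          = meta,
	};
	int fd = open("/dev/hyper_dmabuf", O_RDWR);  /* assumed node */
	int ret;

	if (fd < 0)
		return -1;
	ret = ioctl(fd, IOCTL_HYPER_DMABUF_EXPORT_REMOTE, &arg);
	if (!ret)
		*hid_out = arg.hid;  /* id to hand to the importing VM */
	close(fd);
	return ret;
}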
> +d. IOCTL_HYPER_DMABUF_EXPORT_FD
> +
> +The importing VM uses this IOCTL to import and re-export a shared DMA_BUF
> +locally to the end-consumer using the standard Linux DMA_BUF framework.
> +Upon IOCTL call, the Hyper_DMABUF driver finds the reference information
> +of the shared DMA_BUF with the given hyper_dmabuf_id, then maps all shared
> +pages in its own Kernel space. The driver then constructs a scatter-gather
> +list with those mapped pages and creates a brand-new DMA_BUF with the list,
> +which is eventually exported with a file descriptor to the local consumer.
> +
> +The user space argument for this type of IOCTL is defined as:
> +
> +struct ioctl_hyper_dmabuf_export_fd {
> +    /* IN parameters */
> +    /* hyper dmabuf id to be imported */
> +    int hyper_dmabuf_id;
> +    /* flags */
> +    int flags;
> +    /* OUT parameters */
> +    /* exported dma buf fd */
> +    int fd;
> +};
> +
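Continuing the sketch above on the importing side (same assumed device
node, with the id received over some out-of-band channel):

int import_buf(int fd, int hid)
{
	struct ioctl_hyper_dmabuf_export_fd arg = {
		.hyper_dmabuf_id = hid,  /* int id, as the struct defines it */
		.flags           = 0,
	};

	if (ioctl(fd, IOCTL_HYPER_DMABUF_EXPORT_FD, &arg) < 0)
		return -1;
	return arg.fd;  /* an ordinary local DMA_BUF fd from here on */
}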
> +e. IOCTL_HYPER_DMABUF_UNEXPORT
> +
> +This type of IOCTL is used when it is necessary to terminate the current
> +sharing of a DMA_BUF. When called, the driver first checks if there are any
> +consumers actively using the DMA_BUF. Then, it unexports it if it is not
> +mapped or used by any consumers. Otherwise, it postpones unexporting, but
> +makes the buffer invalid to prevent any further import of the same DMA_BUF.
> +DMA_BUF is completely unexported after the last consumer releases it.
> +
> +”Unexport” means removing all reference information about the DMA_BUF from the
> +LISTs and making all pages private again.
> +
> +The user space argument for this type of IOCTL is defined as:
> +
> +struct ioctl_hyper_dmabuf_unexport {
> +    /* IN parameters */
> +    /* hyper dmabuf id to be unexported */
> +    int hyper_dmabuf_id;
> +    /* delay in ms by which unexport processing will be postponed */
> +    int delay_ms;
> +    /* OUT parameters */
> +    /* Status of request */
> +    int status;
> +};
> +
> +f. IOCTL_HYPER_DMABUF_QUERY
> +
> +This IOCTL is used to retrieve specific information about a DMA_BUF that
> +is being shared.
> +
> +The user space argument for this type of IOCTL is defined as:
> +
> +struct ioctl_hyper_dmabuf_query {
> +    /* in parameters */
> +    /* hyper dmabuf id to be queried */
> +    int hyper_dmabuf_id;
> +    /* item to be queried */
> +    int item;
> +    /* OUT parameters */
> +    /* output of query */
> +    /* info can be either value or reference */
> +    unsigned long info;
> +};
> +
> +<Available Queries>
> +
> +HYPER_DMABUF_QUERY_TYPE
> + - Return the type of DMA_BUF from the current domain, Exported or Imported.
> +
> +HYPER_DMABUF_QUERY_EXPORTER
> + - Return the exporting domain’s ID of a shared DMA_BUF.
> +
> +HYPER_DMABUF_QUERY_IMPORTER
> + - Return the importing domain’s ID of a shared DMA_BUF.
> +
> +HYPER_DMABUF_QUERY_SIZE
> + - Return the size of a shared DMA_BUF in bytes.
> +
> +HYPER_DMABUF_QUERY_BUSY
> + - Return ‘true’ if a shared DMA_BUF is currently used
> +   (mapped by the end-consumer).
> +
> +HYPER_DMABUF_QUERY_UNEXPORTED
> + - Return ‘true’ if a shared DMA_BUF is not valid anymore
> +   (so it does not allow a new consumer to map it).
> +
> +HYPER_DMABUF_QUERY_DELAYED_UNEXPORTED
> + - Return ‘true’ if a shared DMA_BUF is scheduled to be unexported
> +   (but is still valid) within a fixed time.
> +
> +HYPER_DMABUF_QUERY_PRIV_INFO
> + - Return ‘private’ data attached to shared DMA_BUF to the user space.
> +   ‘unsigned long info’ is the user space pointer for the buffer, where
> +   private data will be copied to.
> +
> +HYPER_DMABUF_QUERY_PRIV_INFO_SIZE
> + - Return the size of the private data attached to the shared DMA_BUF.
> +
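The two PRIV_INFO queries are naturally used as a pair; a sketch (error
handling omitted, fd as in the earlier sketches):

struct ioctl_hyper_dmabuf_query q = {
	.hyper_dmabuf_id = hid,
	.item = HYPER_DMABUF_QUERY_PRIV_INFO_SIZE,
};

ioctl(fd, IOCTL_HYPER_DMABUF_QUERY, &q);  /* q.info now holds the size */

char *buf = malloc(q.info);

q.item = HYPER_DMABUF_QUERY_PRIV_INFO;
q.info = (unsigned long)buf;              /* user pointer to copy into */
ioctl(fd, IOCTL_HYPER_DMABUF_QUERY, &q);  /* private data lands in buf */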
> +3. Event Polling
> +
> +Event-polling can be enabled optionally by selecting the Kernel config option,
> +Enable event-generation and polling operation under xen/hypervisor in Kernel’s
> +menuconfig. The event-polling mechanism includes the generation of
> +an import-event, adding it to the event-queue and providing a notification to
> +the application so that it can retrieve the event data from the queue.
> +
> +For this mechanism, “Poll” and “Read” operations are added to the Hyper_DMABUF
> +driver. A user application that polls the driver goes into a sleep state until
> +there is a new event added to the queue. An application uses “Read” to retrieve
> +event data from the event queue. Event data contains the hyper_dmabuf_id and
> +the private data of the buffer that has been received by the importer.
> +
> +For more information on private data, refer to the chapter on
> +“Hyper_DMABUF Private Data”.
> +Using this method, it is possible to lower the risk of the hyper_dmabuf_id and
> +other sensitive information about the shared buffer (for example, meta-data
> +for shared images) being leaked while being transferred to the importer because
> +all of this data is shared as “private info” at the driver level. However,
> +please note there should be a way for the importer to find the correct DMA_BUF
> +in this case when there are multiple Hyper_DMABUFs being shared simultaneously.
> +For example, the surface name or the surface ID of a specific rendering surface
> +needs to be sent to the importer in advance before it is exported in a surface-
> +sharing use-case.
> +
> +Each event data given to the user-space consists of a header and the private
> +information of the buffer. The data type is defined as follows:
> +
> +struct hyper_dmabuf_event_hdr {
> +        int event_type; /* one type only for now - new import */
> +        hyper_dmabuf_id_t hid; /* hyper_dmabuf_id of specific hyper_dmabuf */
> +        int size; /* size of data */
> +};
> +
> +struct hyper_dmabuf_event_data {
> +        struct hyper_dmabuf_event_hdr hdr;
> +        void *data; /* private data */
> +};
> +
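A user-space event loop built on these operations might look like the
sketch below; the exact layout returned by “Read” (header immediately
followed by the private data) is an assumption, as is handle_import():

#include <poll.h>
#include <unistd.h>

struct pollfd pfd = { .fd = fd, .events = POLLIN };
struct {
	struct hyper_dmabuf_event_hdr hdr;
	char priv[MAX_SIZE_PRIV_DATA];  /* 192 bytes, see below */
} ev;

while (poll(&pfd, 1, -1) > 0) {
	/* sleeps in poll() until an import event is queued */
	if (read(fd, &ev, sizeof(ev)) < (ssize_t)sizeof(ev.hdr))
		continue;
	handle_import(ev.hdr.hid, ev.priv, ev.hdr.size);
}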
> +4. Hyper_DMABUF Private Data
> +
> +Each Hyper_DMABUF can come with private data, the size of which can be up to
> +MAX_SIZE_PRIV_DATA (currently 192 bytes). This private data is just a chunk of
> +plain data attached to every Hyper_DMABUF. It is guaranteed to be synchronized
> +across VMs, exporter and importer. This private data does not have any specific
> +structure defined at the driver level, so any “user-defined” format or
> +structure can be used. In addition, there is no dedicated use-case for this
> +data. It can be used virtually for any purpose. For example, it can be used to
> +share meta-data such as dimension and color formats for shared images in
> +a surface sharing model. Another example is when we share protected media
> +contents.
> +
> +This private data can be used to transfer flags related to content protection
> +information on streamed media to the importer.
> +
> +Private data is initially generated when a buffer is exported for the first
> +time. Then, it is updated whenever the same buffer is re-exported. During the
> +re-exporting process, the Hyper_DMABUF driver only updates private data on
> +both sides with new data from user-space since the same buffer already exists
> +on both the IMPORT LIST and EXPORT LIST.
> +
> +There are two different ways to retrieve this private data from user-space.
> +The first way is to use “Read” on the Hyper_DMABUF driver. “Read” returns the
> +data of events containing private data of the buffer. The second way is to
> +make a query to Hyper_DMABUF. There are two query items,
> +HYPER_DMABUF_QUERY_PRIV_INFO and HYPER_DMABUF_QUERY_PRIV_INFO_SIZE available
> +for retrieving private data and its size.
> +
> +5. Scatter-Gather List Table (SGT) Management
> +
> +SGT management is the core part of the Hyper_DMABUF driver that manages an
> +SGT, a representation of the group of kernel pages associated with a DMA_BUF.
> +This block includes four different sub-blocks:
> +
> +a. Hyper_DMABUF_id Manager
> +
> +This ID manager is responsible for generating a hyper_dmabuf_id for an
> +exported DMA_BUF. When an ID is requested, the ID Manager first checks if
> +there are any reusable IDs left in the list and returns one of those,
> +if available. Otherwise, it creates the next count number and returns it
> +to the caller.
> +
> +b. SGT Creator
> +
> +The SGT (struct sg_table) contains information about the DMA_BUF such as
> +references to all kernel pages for the buffer and their connections. The
> +SGT Creator creates a new SGT on the importer side with pages shared by
> +the hypervisor.
> +
> +c. Kernel Page Extractor
> +
> +The Page Extractor extracts pages from a given SGT before those pages
> +are shared.
> +
> +d. List Manager Interface
> +
> +The SGT manager also interacts with export and import list managers. It
> +sends out information (for example, hyper_dmabuf_id, reference, and
> +DMA_BUF information) about the exported or imported DMA_BUFs to the
> +list manager. Also, on IOCTL request, it asks the list manager to find
> +and return the information for a corresponding DMA_BUF in the list.
> +
> +6. DMA-BUF Interface
> +
> +The DMA-BUF interface provides standard methods to manage DMA_BUFs
> +reconstructed by the Hyper_DMABUF driver from shared pages. All of the
> +relevant operations are listed in struct dma_buf_ops. These operations
> +are standard DMA_BUF operations, therefore they follow standard DMA BUF
> +protocols.
> +
> +Each DMA_BUF operation communicates with the exporter at the end of the
> +routine for “indirect DMA_BUF synchronization”.
> +
> +7. Export/Import List Management
> +
> +Whenever a DMA_BUF is shared and exported, its information is added to the
> +database (EXPORT-list) on the exporting VM. Similarly, information about an
> +imported DMA_BUF is added to the importing database (IMPORT list) on the
> +importing VM, when the export happens.
> +
> +All of the entries in the lists are needed to manage the exported/imported
> +DMA_BUF more efficiently. Both lists are implemented as Linux hash tables.
> +The key to the list is hyper_dmabuf_id and the output is the information of
> +the DMA_BUF. The List Manager manages all requests from other blocks and
> +transactions within lists to ensure that all entries are up-to-date and
> +that the list structure is consistent.
> +
> +The List Manager provides basic functionality, such as:
> +
> +- Adding to the List
> +- Removal from the List
> +- Finding information about a DMA_BUF, given the hyper_dmabuf_id
> +
> +8. Page Sharing by Hypercalls
> +
> +The Hyper_DMABUF driver assumes that there is a native page-by-page memory
> +sharing mechanism available on the hypervisor. Referencing a group of pages
> +that are being shared is what the driver expects from “backend” APIs or the
> +hypervisor itself.
> +
> +For example, the Xen backend integrated in the current code base utilizes Xen’s
> +grant-table interface for sharing the underlying kernel pages (struct *page).
> +
> +More details about the grant-table interface can be found at the following
> +locations:
> +
> +https://wiki.xen.org/wiki/Grant_Table
> +https://xenbits.xen.org/docs/4.6-testing/misc/grant-tables.txt
> +
> +9. Message Handling
> +
> +The exporter and importer can each create a message that consists of an opcode
> +(command) and operands (parameters) and send it to each other.
> +
> +The message format is defined as:
> +
> +struct hyper_dmabuf_req {
> +        unsigned int req_id; /* Sequence number. Used for RING BUF
> +                                synchronization */
> +        unsigned int stat; /* Status. Response from receiver. */
> +        unsigned int cmd;  /* Opcode */
> +        unsigned int op[MAX_NUMBER_OF_OPERANDS]; /* Operands */
> +};
> +
> +The following table gives the list of opcodes:
> +
> +<Opcodes in Message to Exporter/Importer>
> +
> +HYPER_DMABUF_EXPORT (exporter --> importer)
> + - Export a DMA_BUF to the importer. The importer registers the corresponding
> +   DMA_BUF in its IMPORT LIST when the message is received.
> +
> +HYPER_DMABUF_EXPORT_FD (importer --> exporter)
> + - Locally exported as FD. The importer sends out this command to the exporter
> +   to notify that the buffer is now locally exported (mapped and used).
> +
> +HYPER_DMABUF_EXPORT_FD_FAILED (importer --> exporter)
> + - Failed while exporting locally. The importer sends out this command to the
> +   exporter to notify the exporter that the EXPORT_FD failed.
> +
> +HYPER_DMABUF_NOTIFY_UNEXPORT (exporter --> importer)
> + - Termination of sharing. The exporter notifies the importer that the DMA_BUF
> +   has been unexported.
> +
> +HYPER_DMABUF_OPS_TO_REMOTE (importer --> exporter)
> + - Not implemented yet.
> +
> +HYPER_DMABUF_OPS_TO_SOURCE (exporter --> importer)
> + - DMA_BUF ops to the exporter, for DMA_BUF upstream synchronization.
> +   Note: Implemented but it is done asynchronously due to performance issues.
> +
> +The following table shows the list of operands for each opcode.
> +
> +<Operands in Message to Exporter/Importer>
> +
> +- HYPER_DMABUF_EXPORT
> +
> +op0 to op3 – hyper_dmabuf_id
> +op4 – number of pages to be shared
> +op5 – offset of data in the first page
> +op6 – length of data in the last page
> +op7 – reference number for the group of shared pages
> +op8 – size of private data
> +op9 to (op9+op8)  – private data
> +
> +- HYPER_DMABUF_EXPORT_FD
> +
> +op0 to op3 – hyper_dmabuf_id
> +
> +- HYPER_DMABUF_EXPORT_FD_FAILED
> +
> +op0 to op3 – hyper_dmabuf_id
> +
> +- HYPER_DMABUF_NOTIFY_UNEXPORT
> +
> +op0 to op3 – hyper_dmabuf_id
> +
> +- HYPER_DMABUF_OPS_TO_REMOTE(Not implemented)
> +
> +- HYPER_DMABUF_OPS_TO_SOURCE
> +
> +op0 to op3 – hyper_dmabuf_id
> +op4 – type of DMA_BUF operation
> +
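Put together, an EXPORT request would be packed roughly like this (a
sketch based on the operand table above; nents, frst_ofst, last_len,
top_level_gref, sz_priv and priv are assumed locals):

struct hyper_dmabuf_req req = { .cmd = HYPER_DMABUF_EXPORT };

memcpy(&req.op[0], &hid, sizeof(hid));  /* op0..op3: hyper_dmabuf_id  */
req.op[4] = nents;                      /* number of shared pages     */
req.op[5] = frst_ofst;                  /* data offset in first page  */
req.op[6] = last_len;                   /* data length in last page   */
req.op[7] = top_level_gref;             /* ref for the page group     */
req.op[8] = sz_priv;                    /* size of private data       */
memcpy(&req.op[9], priv, sz_priv);      /* the private data itself    */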
> +10. Inter VM (Domain) Communication
> +
> +Two different types of inter-domain communication channels are required,
> +one in kernel space and the other in user space. The communication channel
> +in user space is for transmitting or receiving the hyper_dmabuf_id. Since
> +there is no specific security (for example, encryption) involved in the
> +generation of a global id at the driver level, it is highly recommended that
> +the customer’s user application set up a very secure channel for exchanging
> +hyper_dmabuf_id between VMs.
> +
> +The communication channel in kernel space is required for exchanging messages
> +from “message management” block between two VMs. In the current reference
> +backend for Xen hypervisor, Xen ring-buffer and event-channel mechanisms are
> +used for message exchange between importer and exporter.
> +
> +11. What is required in the hypervisor
> +
> +Memory sharing and message communication between VMs
> +
> +------------------------------------------------------------------------------
> +Section 3. Hyper DMABUF Sharing Flow
> +------------------------------------------------------------------------------
> +
> +1. Exporting
> +
> +To export a DMA_BUF to another VM, user space has to call an IOCTL
> +(IOCTL_HYPER_DMABUF_EXPORT_REMOTE) with a file descriptor for the buffer given
> +by the original exporter. The Hyper_DMABUF driver maps a DMA_BUF locally, then
> +issues a hyper_dmabuf_id and SGT for the DMA_BUF, which is registered to the
> +EXPORT list. Then, all pages for the SGT are extracted and each individual
> +page is shared via a hypervisor-specific memory sharing mechanism
> +(for example, in Xen this is grant-table).
> +
> +One important requirement on this memory sharing method is that it needs to
> +create a single integer value that represents the list of pages, which can
> +then be used by the importer for retrieving the group of shared pages.  For
> +this, the “Backend” in the reference driver utilizes the multiple level
> +addressing mechanism.
> +
> +Once the integer reference to the list of pages is created, the exporter
> +builds the “export” command and sends it to the importer, then notifies the
> +importer.
> +
> +2. Importing
> +
> +The Import process is divided into two sections. One is the registration
> +of DMA_BUF from the exporter. The other is the actual mapping of the buffer
> +before accessing the data in the buffer. The former (termed “Registration”)
> +happens on an export event (that is, the export command with an interrupt)
> +in the exporter.
> +
> +The latter (termed “Mapping”) is done asynchronously when the driver gets the
> +IOCTL call from user space. When the importer gets an interrupt from the
> +exporter, it checks the command in the receiving queue and if it is an
> +“export” command, the registration process is started. It first finds
> +hyper_dmabuf_id and the integer reference for the shared pages, then stores
> +all of that information together with the “domain id” of the exporting domain
> +in the IMPORT LIST.
> +
> +In the case where “event-polling” is enabled (Kernel Config - Enable event-
> +generation and polling operation), a “new sharing available” event is
> +generated right after the reference info for the new shared DMA_BUF is
> +registered to the IMPORT LIST. This event is added to the event-queue.
> +
> +The user process that polls Hyper_DMABUF driver wakes up when this event-queue
> +is not empty and is able to read back event data from the queue using the
> +driver’s “Read” function. Once the user-application calls EXPORT_FD IOCTL with
> +the proper parameters including hyper_dmabuf_id, the Hyper_DMABUF driver
> +retrieves information about the matched DMA_BUF from the IMPORT LIST. Then, it
> +maps all pages shared (referenced by the integer reference) in its kernel
> +space and creates its own DMA_BUF referencing the same shared pages. After
> +this, it exports this new DMA_BUF to the other drivers with a file descriptor.
> +DMA_BUF can then be used just in the same way a local DMA_BUF is.
> +
> +3. Indirect Synchronization of DMA_BUF
> +
> +Synchronization of a DMA_BUF within a single OS is automatically achieved
> +because all of importer’s DMA_BUF operations are done using functions defined
> +on the exporter’s side, which means there is one central place that has full
> +control over the DMA_BUF. In other words, any primary activities such as
> +attaching/detaching and mapping/un-mapping are all captured by the exporter,
> +meaning that the exporter knows basic information such as who is using the
> +DMA_BUF and how it is being used. This, however, is not applicable if this
> +sharing is done beyond a single OS because kernel space (where the exporter’s
> +DMA_BUF operations reside) is simply not visible to the importing VM.
> +
> +Therefore, “indirect synchronization” was introduced as an alternative solution,
> +which is now implemented in the Hyper_DMABUF driver. This technique makes
> +the exporter create a shadow DMA_BUF when the end-consumer of the buffer maps
> +the DMA_BUF, then duplicates any DMA_BUF operations performed on
> +the importer’s side. Through this “indirect synchronization”, the exporter is
> +able to virtually track all activities done by the consumer (mostly reference
> +counter) as if those are done in exporter’s local system.
> +
> +------------------------------------------------------------------------------
> +Section 4. Hypervisor Backend Interface
> +------------------------------------------------------------------------------
> +
> +The Hyper_DMABUF driver has a standard “Backend” structure that contains
> +mappings to various functions designed for a specific Hypervisor. Most of
> +these API functions should provide a low-level implementation of communication
> +and memory sharing capability that utilize a Hypervisor’s native mechanisms.
> +
> +struct hyper_dmabuf_backend_ops {
> +        /* retrieving id of current virtual machine */
> +        int (*get_vm_id)(void);
> +        /* get pages shared via hypervisor-specific method */
> +        int (*share_pages)(struct page **, int, int, void **);
> +        /* make shared pages unshared via hypervisor specific method */
> +        int (*unshare_pages)(void **, int);
> +        /* map remotely shared pages on importer's side via
> +         *  hypervisor-specific method
> +         */
> +        struct page ** (*map_shared_pages)(int, int, int, void **);
> +        /* unmap and free shared pages on importer's side via
> +         *  hypervisor-specific method
> +         */
> +        int (*unmap_shared_pages)(void **, int);
> +        /* initialize communication environment */
> +        int (*init_comm_env)(void);
> +        /* destroy communication channel */
> +        void (*destroy_comm)(void);
> +        /* upstream ch setup (receiving and responding) */
> +        int (*init_rx_ch)(int);
> +        /* downstream ch setup (transmitting and parsing responses) */
> +        int (*init_tx_ch)(int);
> +        /* send msg via communication ch */
> +        int (*send_req)(int, struct hyper_dmabuf_req *, int);
> +};
> +
> +<Hypervisor-specific Backend Structure>
> +
> +1. get_vm_id
> +
> +	Returns the VM (domain) ID
> +
> +	Input:
> +
> +		None
> +
> +	Output:
> +
> +		-ID of the current domain
> +
> +2. share_pages
> +
> +	Get pages shared via hypervisor-specific method and return one reference
> +	ID that represents the complete list of shared pages
> +
> +	Input:
> +
> +		-Array of pages
> +		-ID of importing VM
> +		-Number of pages
> +		-Hypervisor specific Representation of reference info of shared
> +		 pages
> +
> +	Output:
> +
> +		-Hypervisor specific integer value that represents all of
> +		 the shared pages
> +
> +3. unshare_pages
> +
> +	Stop sharing pages
> +
> +	Input:
> +
> +		-Hypervisor specific Representation of reference info of shared
> +		 pages
> +		-Number of shared pages
> +
> +	Output:
> +
> +		0
> +
> +4. map_shared_pages
> +
> +	Map shared pages locally using a hypervisor-specific method
> +
> +	Input:
> +
> +		-Reference number that represents all of shared pages
> +		-ID of exporting VM, Number of pages
> +		-Reference information for any purpose
> +
> +	Output:
> +
> +		-An array of shared pages (struct page**)
> +
> +5. unmap_shared_pages
> +
> +	Unmap shared pages
> +
> +	Input:
> +
> +		-Hypervisor specific Representation of reference info of shared pages
> +
> +	Output:
> +
> +		-0 (successful) or one of Standard Kernel errors
> +
> +6. init_comm_env
> +
> +	Setup infrastructure needed for communication channel
> +
> +	Input:
> +
> +		None
> +
> +	Output:
> +
> +		None
> +
> +7. destroy_comm
> +
> +	Cleanup everything done via init_comm_env
> +
> +	Input:
> +
> +		None
> +
> +	Output:
> +
> +		None
> +
> +8. init_rx_ch
> +
> +	Configure receive channel
> +
> +	Input:
> +
> +		-ID of VM on the other side of the channel
> +
> +	Output:
> +
> +		-0 (successful) or one of Standard Kernel errors
> +
> +9. init_tx_ch
> +
> +	Configure transmit channel
> +
> +	Input:
> +
> +		-ID of VM on the other side of the channel
> +
> +	Output:
> +
> +		-0 (success) or one of Standard Kernel errors
> +
> +10. send_req
> +
> +	Send message to other VM
> +
> +	Input:
> +
> +		-ID of VM that receives the message
> +		-Message
> +
> +	Output:
> +
> +		-0 (success) or one of Standard Kernel errors
> +
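A backend then boils down to a single instance of this structure; a
hypothetical skeleton (the my_* functions stand in for a backend's own
implementations, in the same way the Xen backend in this series
provides xen_bknd_ops):

static struct hyper_dmabuf_backend_ops my_bknd_ops = {
	.get_vm_id          = my_get_vm_id,
	.share_pages        = my_share_pages,
	.unshare_pages      = my_unshare_pages,
	.map_shared_pages   = my_map_shared_pages,
	.unmap_shared_pages = my_unmap_shared_pages,
	.init_comm_env      = my_init_comm_env,
	.destroy_comm       = my_destroy_comm,
	.init_rx_ch         = my_init_rx_ch,
	.init_tx_ch         = my_init_tx_ch,
	.send_req           = my_send_req,
};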
> +-------------------------------------------------------------------------------
> +-------------------------------------------------------------------------------
>

[1] 
https://elixir.bootlin.com/linux/v4.16.1/source/include/xen/interface/io/kbdif.h


* Re: [RFC, v2, 4/9] hyper_dmabuf: user private data attached to hyper_DMABUF
  2018-02-14  1:50 ` [RFC PATCH v2 4/9] hyper_dmabuf: user private data attached to hyper_DMABUF Dongwon Kim
@ 2018-04-10  9:59   ` Oleksandr Andrushchenko
  0 siblings, 0 replies; 21+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-10  9:59 UTC (permalink / raw)
  To: Dongwon Kim, linux-kernel, linaro-mm-sig, xen-devel
  Cc: dri-devel, mateuszx.potrola

On 02/14/2018 03:50 AM, Dongwon Kim wrote:
> Define a private data (e.g. meta data for the buffer) attached to
> each hyper_DMABUF structure. This data is provided by userapace via
> export_remote IOCTL and its size can be up to 192 bytes.
>
> Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
> Signed-off-by: Mateusz Polrola <mateuszx.potrola@intel.com>
> ---
>   drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c  | 83 ++++++++++++++++++++--
>   drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c    | 36 +++++++++-
>   drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h    |  2 +-
>   .../dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c   |  1 +
>   drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h | 12 ++++
>   include/uapi/linux/hyper_dmabuf.h                  |  4 ++
>   6 files changed, 132 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c
> index 020a5590a254..168ccf98f710 100644
> --- a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c
> +++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_ioctl.c
> @@ -103,6 +103,11 @@ static int send_export_msg(struct exported_sgt_info *exported,
>   		}
>   	}
>   
> +	op[8] = exported->sz_priv;
> +
> +	/* driver/application specific private info */
> +	memcpy(&op[9], exported->priv, op[8]);
> +
>   	req = kcalloc(1, sizeof(*req), GFP_KERNEL);
>   
>   	if (!req)
> @@ -120,8 +125,9 @@ static int send_export_msg(struct exported_sgt_info *exported,
>   
>   /* Fast path exporting routine in case same buffer is already exported.
>    *
> - * If same buffer is still valid and exist in EXPORT LIST it returns 0 so
> - * that remaining normal export process can be skipped.
> + * If the same buffer is still valid and exists in EXPORT LIST, it only
> + * updates user-private data for the buffer and returns 0 so that it can
> + * skip the normal export process.
>    *
>    * If "unexport" is scheduled for the buffer, it cancels it since the buffer
>    * is being re-exported.
> @@ -129,7 +135,7 @@ static int send_export_msg(struct exported_sgt_info *exported,
>    * return '1' if reexport is needed, return '0' if succeeds, return
>    * Kernel error code if something goes wrong
>    */
> -static int fastpath_export(hyper_dmabuf_id_t hid)
> +static int fastpath_export(hyper_dmabuf_id_t hid, int sz_priv, char *priv)
>   {
>   	int reexport = 1;
>   	int ret = 0;
> @@ -155,6 +161,46 @@ static int fastpath_export(hyper_dmabuf_id_t hid)
>   		exported->unexport_sched = false;
>   	}
>   
> +	/* if there's any change in the size of the private data,
> +	 * we reallocate space for private data with new size
> +	 */
> +	if (sz_priv != exported->sz_priv) {
> +		kfree(exported->priv);
> +
> +		/* truncating size */
> +		if (sz_priv > MAX_SIZE_PRIV_DATA)
> +			exported->sz_priv = MAX_SIZE_PRIV_DATA;
> +		else
> +			exported->sz_priv = sz_priv;
> +
> +		exported->priv = kcalloc(1, exported->sz_priv,
> +					 GFP_KERNEL);
> +
> +		if (!exported->priv) {
> +			hyper_dmabuf_remove_exported(exported->hid);
> +			hyper_dmabuf_cleanup_sgt_info(exported, true);
> +			kfree(exported);
> +			return -ENOMEM;
> +		}
> +	}
> +
> +	/* update private data in sgt_info with new ones */
> +	ret = copy_from_user(exported->priv, priv, exported->sz_priv);
> +	if (ret) {
> +		dev_err(hy_drv_priv->dev,
> +			"Failed to load a new private data\n");
> +		ret = -EINVAL;
> +	} else {
> +		/* send an export msg for updating priv in importer */
> +		ret = send_export_msg(exported, NULL);
> +
> +		if (ret < 0) {
> +			dev_err(hy_drv_priv->dev,
> +				"Failed to send a new private data\n");
> +			ret = -EBUSY;
> +		}
> +	}
> +
>   	return ret;
>   }
>   
> @@ -191,7 +237,8 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
>   					     export_remote_attr->remote_domain);
>   
>   	if (hid.id != -1) {
> -		ret = fastpath_export(hid);
> +		ret = fastpath_export(hid, export_remote_attr->sz_priv,
> +				      export_remote_attr->priv);
>   
>   		/* return if fastpath_export succeeds or
>   		 * gets some fatal error
> @@ -225,6 +272,24 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
>   		goto fail_sgt_info_creation;
>   	}
>   
> +	/* possible truncation */
> +	if (export_remote_attr->sz_priv > MAX_SIZE_PRIV_DATA)
> +		exported->sz_priv = MAX_SIZE_PRIV_DATA;
> +	else
> +		exported->sz_priv = export_remote_attr->sz_priv;
> +
> +	/* creating buffer for private data of buffer */
> +	if (exported->sz_priv != 0) {
> +		exported->priv = kcalloc(1, exported->sz_priv, GFP_KERNEL);
> +
> +		if (!exported->priv) {
> +			ret = -ENOMEM;
> +			goto fail_priv_creation;
> +		}
> +	} else {
> +		dev_err(hy_drv_priv->dev, "size is 0\n");
> +	}
> +
>   	exported->hid = hyper_dmabuf_get_hid();
>   
>   	/* no more exported dmabuf allowed */
> @@ -279,6 +344,10 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
>   	INIT_LIST_HEAD(&exported->va_kmapped->list);
>   	INIT_LIST_HEAD(&exported->va_vmapped->list);
>   
> +	/* copy private data to sgt_info */
> +	ret = copy_from_user(exported->priv, export_remote_attr->priv,
> +			     exported->sz_priv);
> +
>   	if (ret) {
>   		dev_err(hy_drv_priv->dev,
>   			"failed to load private data\n");
> @@ -337,6 +406,9 @@ static int hyper_dmabuf_export_remote_ioctl(struct file *filp, void *data)
>   
>   fail_map_active_attached:
>   	kfree(exported->active_sgts);
> +	kfree(exported->priv);
> +
> +fail_priv_creation:
>   	kfree(exported);
>   
>   fail_map_active_sgts:
> @@ -567,6 +639,9 @@ static void delayed_unexport(struct work_struct *work)
>   		/* register hyper_dmabuf_id to the list for reuse */
>   		hyper_dmabuf_store_hid(exported->hid);
>   
> +		if (exported->sz_priv > 0 && exported->priv)
> +			kfree(exported->priv);
> +
>   		kfree(exported);
>   	}
>   }
> diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c
> index 129b2ff2af2b..7176fa8fb139 100644
> --- a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c
> +++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.c
> @@ -60,9 +60,12 @@ void hyper_dmabuf_create_req(struct hyper_dmabuf_req *req,
>   		 * op5 : offset of data in the first page
>   		 * op6 : length of data in the last page
>   		 * op7 : top-level reference number for shared pages
> +		 * op8 : size of private data (from op9)
> +		 * op9 ~ : Driver-specific private data
> +		 *	   (e.g. graphic buffer's meta info)
>   		 */
>   
> -		memcpy(&req->op[0], &op[0], 8 * sizeof(int) + op[8]);
> +		memcpy(&req->op[0], &op[0], 9 * sizeof(int) + op[8]);
>   		break;
>   
>   	case HYPER_DMABUF_NOTIFY_UNEXPORT:
> @@ -116,6 +119,9 @@ static void cmd_process_work(struct work_struct *work)
>   		 * op5 : offset of data in the first page
>   		 * op6 : length of data in the last page
>   		 * op7 : top-level reference number for shared pages
> +		 * op8 : size of private data (from op9)
> +		 * op9 ~ : Driver-specific private data
> +		 *         (e.g. graphic buffer's meta info)
>   		 */
>   
>   		/* if nents == 0, it means it is a message only for
> @@ -135,6 +141,24 @@ static void cmd_process_work(struct work_struct *work)
>   				break;
>   			}
>   
> +			/* if size of new private data is different,
> +			 * we reallocate it.
> +			 */
> +			if (imported->sz_priv != req->op[8]) {
> +				kfree(imported->priv);
> +				imported->sz_priv = req->op[8];
> +				imported->priv = kcalloc(1, req->op[8],
> +							 GFP_KERNEL);
> +				if (!imported->priv) {
> +					/* set it invalid */
> +					imported->valid = 0;
> +					break;
> +				}
> +			}
> +
> +			/* updating priv data */
> +			memcpy(imported->priv, &req->op[9], req->op[8]);
> +
>   			break;
>   		}
>   
> @@ -143,6 +167,14 @@ static void cmd_process_work(struct work_struct *work)
>   		if (!imported)
>   			break;
>   
> +		imported->sz_priv = req->op[8];
> +		imported->priv = kcalloc(1, req->op[8], GFP_KERNEL);
BTW, there is plenty of code here using kcalloc with 1 element.
Why not simply kzalloc?
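i.e. simply:

	imported->priv = kzalloc(req->op[8], GFP_KERNEL);

kcalloc(1, n, ...) and kzalloc(n, ...) are equivalent for a single
element, and the latter states the intent directly.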
> +
> +		if (!imported->priv) {
> +			kfree(imported);
> +			break;
> +		}
> +
>   		imported->hid.id = req->op[0];
>   
>   		for (i = 0; i < 3; i++)
> @@ -162,6 +194,8 @@ static void cmd_process_work(struct work_struct *work)
>   		dev_dbg(hy_drv_priv->dev, "\tlast len %d\n", req->op[6]);
>   		dev_dbg(hy_drv_priv->dev, "\tgrefid %d\n", req->op[7]);
>   
> +		memcpy(imported->priv, &req->op[9], req->op[8]);
> +
>   		imported->valid = true;
>   		hyper_dmabuf_register_imported(imported);
>   
> diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h
> index 59f1528e9b1e..63a39d068d69 100644
> --- a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h
> +++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_msg.h
> @@ -27,7 +27,7 @@
>   #ifndef __HYPER_DMABUF_MSG_H__
>   #define __HYPER_DMABUF_MSG_H__
>   
> -#define MAX_NUMBER_OF_OPERANDS 8
> +#define MAX_NUMBER_OF_OPERANDS 64
>   
So now the req/resp below become (64 + 3) ints long, i.e. 268 bytes each,
so a 4096-byte ring page fits only 4096 / 268 = ~15 of them...

>   struct hyper_dmabuf_req {
>   	unsigned int req_id;
> diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
> index d92ae13d8a30..9032f89e0cd0 100644
> --- a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
> +++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_sgl_proc.c
> @@ -251,6 +251,7 @@ int hyper_dmabuf_cleanup_sgt_info(struct exported_sgt_info *exported,
>   	kfree(exported->active_attached);
>   	kfree(exported->va_kmapped);
>   	kfree(exported->va_vmapped);
> +	kfree(exported->priv);
>   
>   	return 0;
>   }
> diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h
> index 144e3821fbc2..a1220bbf8d0c 100644
> --- a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h
> +++ b/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_struct.h
> @@ -101,6 +101,12 @@ struct exported_sgt_info {
>   	 * the buffer can be completely freed.
>   	 */
>   	struct file *filp;
> +
> +	/* size of private */
> +	size_t sz_priv;
> +
> +	/* private data associated with the exported buffer */
> +	char *priv;
>   };
>   
>   /* imported_sgt_info contains information about imported DMA_BUF
> @@ -126,6 +132,12 @@ struct imported_sgt_info {
>   	void *refs_info;
>   	bool valid;
>   	int importers;
> +
> +	/* size of private */
> +	size_t sz_priv;
> +
> +	/* private data associated with the exported buffer */
> +	char *priv;
>   };
>   
>   #endif /* __HYPER_DMABUF_STRUCT_H__ */
> diff --git a/include/uapi/linux/hyper_dmabuf.h b/include/uapi/linux/hyper_dmabuf.h
> index caaae2da9d4d..36794a4af811 100644
> --- a/include/uapi/linux/hyper_dmabuf.h
> +++ b/include/uapi/linux/hyper_dmabuf.h
> @@ -25,6 +25,8 @@
>   #ifndef __LINUX_PUBLIC_HYPER_DMABUF_H__
>   #define __LINUX_PUBLIC_HYPER_DMABUF_H__
>   
> +#define MAX_SIZE_PRIV_DATA 192
> +
>   typedef struct {
>   	int id;
>   	int rng_key[3]; /* 12bytes long random number */
> @@ -56,6 +58,8 @@ struct ioctl_hyper_dmabuf_export_remote {
>   	int remote_domain;
>   	/* exported dma buf id */
>   	hyper_dmabuf_id_t hid;
> +	int sz_priv;
> +	char *priv;
>   };
>   
>   #define IOCTL_HYPER_DMABUF_EXPORT_FD \
>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC,v2,9/9] hyper_dmabuf: threaded interrupt in Xen-backend
  2018-02-14  1:50 ` [RFC PATCH v2 9/9] hyper_dmabuf: threaded interrupt in Xen-backend Dongwon Kim
@ 2018-04-10 10:04   ` Oleksandr Andrushchenko
  0 siblings, 0 replies; 21+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-10 10:04 UTC (permalink / raw)
  To: Dongwon Kim, linux-kernel, linaro-mm-sig, xen-devel
  Cc: dri-devel, mateuszx.potrola

On 02/14/2018 03:50 AM, Dongwon Kim wrote:
> Use a threaded interrupt instead of a regular one, because most of the
> ISR is not time-critical and may sleep.
>
> Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
> ---
>   .../hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.c | 19 +++++++++++--------
>   1 file changed, 11 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.c b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.c
> index 30bc4b6304ac..65af5ddfb2d7 100644
> --- a/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.c
> +++ b/drivers/dma-buf/hyper_dmabuf/backends/xen/hyper_dmabuf_xen_comm.c
> @@ -332,11 +332,14 @@ int xen_be_init_tx_rbuf(int domid)
>   	}
>   
>   	/* setting up interrupt */
> -	ret = bind_evtchn_to_irqhandler(alloc_unbound.port,
> -					front_ring_isr, 0,
> -					NULL, (void *) ring_info);
> +	ring_info->irq = bind_evtchn_to_irq(alloc_unbound.port);
>   
> -	if (ret < 0) {
> +	ret = request_threaded_irq(ring_info->irq,
> +				   NULL,
> +				   front_ring_isr,
> +				   IRQF_ONESHOT, NULL, ring_info);
> +
Why don't you use a threaded IRQ from the beginning, instead of
converting to it only here in patch #9?
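Not that the threaded form is wrong — with a NULL primary handler and
IRQF_ONESHOT the core keeps the event channel masked until the thread
handler returns, so it is free to sleep. Roughly (sketch; the handler
body here is hypothetical):

	static irqreturn_t front_ring_isr(int irq, void *info)
	{
		struct xen_comm_tx_ring_info *ring_info = info;

		/* thread context: sleeping locks and GFP_KERNEL
		 * allocations are safe here
		 */
		mutex_lock(&ring_info->lock);
		/* ... consume responses from the shared ring ... */
		mutex_unlock(&ring_info->lock);

		return IRQ_HANDLED;
	}

So nothing prevents registering it this way in the patch that first
adds the IRQ.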
> +	if (ret != 0) {
>   		dev_err(hy_drv_priv->dev,
>   			"Failed to setup event channel\n");
>   		close.port = alloc_unbound.port;
> @@ -348,7 +351,6 @@ int xen_be_init_tx_rbuf(int domid)
>   	}
>   
>   	ring_info->rdomain = domid;
> -	ring_info->irq = ret;
>   	ring_info->port = alloc_unbound.port;
>   
>   	mutex_init(&ring_info->lock);
> @@ -535,9 +537,10 @@ int xen_be_init_rx_rbuf(int domid)
>   	if (!xen_comm_find_tx_ring(domid))
>   		ret = xen_be_init_tx_rbuf(domid);
>   
> -	ret = request_irq(ring_info->irq,
> -			  back_ring_isr, 0,
> -			  NULL, (void *)ring_info);
> +	ret = request_threaded_irq(ring_info->irq,
> +				   NULL,
> +				   back_ring_isr, IRQF_ONESHOT,
> +				   NULL, (void *)ring_info);
>   
Ditto
>   	return ret;
>   
>
>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [Xen-devel] [RFC, v2, 1/9] hyper_dmabuf: initial upload of hyper_dmabuf drv core framework
  2018-04-10  8:53   ` [RFC, v2, " Oleksandr Andrushchenko
@ 2018-04-10 10:47     ` Julien Grall
  2018-04-10 11:04       ` Oleksandr Andrushchenko
  0 siblings, 1 reply; 21+ messages in thread
From: Julien Grall @ 2018-04-10 10:47 UTC (permalink / raw)
  To: Oleksandr Andrushchenko, Dongwon Kim, linux-kernel,
	linaro-mm-sig, xen-devel
  Cc: mateuszx.potrola, dri-devel

Hi,

On 04/10/2018 09:53 AM, Oleksandr Andrushchenko wrote:
> On 02/14/2018 03:50 AM, Dongwon Kim wrote:
>> diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h 

[...]

>> +#ifndef __HYPER_DMABUF_ID_H__
>> +#define __HYPER_DMABUF_ID_H__
>> +
>> +#define HYPER_DMABUF_ID_CREATE(domid, cnt) \
>> +    ((((domid) & 0xFF) << 24) | ((cnt) & 0xFFFFFF))
> I would define hyper_dmabuf_id_t.id as a union or two separate
> fields to avoid this magic.

I am not sure a union would be right here, because its layout will
differ between big and little endian. So will that value be passed
to another guest?
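If it is, note that the explicit shifts in the existing macro are
endian-independent, while a union overlay is not. Two plain fields,
packed only at the wire boundary, would avoid both the magic and the
layout concern — e.g. (sketch; the helper name is made up):

	typedef struct {
		int domid;
		int cnt;
		int rng_key[3];	/* 12-byte random key */
	} hyper_dmabuf_id_t;

	static inline unsigned int hyper_dmabuf_id_to_wire(hyper_dmabuf_id_t hid)
	{
		return ((unsigned int)(hid.domid & 0xFF) << 24) |
		       ((unsigned int)hid.cnt & 0xFFFFFF);
	}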

Cheers,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [Xen-devel] [RFC, v2, 1/9] hyper_dmabuf: initial upload of hyper_dmabuf drv core framework
  2018-04-10 10:47     ` [Xen-devel] " Julien Grall
@ 2018-04-10 11:04       ` Oleksandr Andrushchenko
  0 siblings, 0 replies; 21+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-10 11:04 UTC (permalink / raw)
  To: Julien Grall, Dongwon Kim, linux-kernel, linaro-mm-sig, xen-devel
  Cc: mateuszx.potrola, dri-devel

On 04/10/2018 01:47 PM, Julien Grall wrote:
> Hi,
>
> On 04/10/2018 09:53 AM, Oleksandr Andrushchenko wrote:
>> On 02/14/2018 03:50 AM, Dongwon Kim wrote:
>>> diff --git a/drivers/dma-buf/hyper_dmabuf/hyper_dmabuf_id.h 
>
> [...]
>
>>> +#ifndef __HYPER_DMABUF_ID_H__
>>> +#define __HYPER_DMABUF_ID_H__
>>> +
>>> +#define HYPER_DMABUF_ID_CREATE(domid, cnt) \
>>> +    ((((domid) & 0xFF) << 24) | ((cnt) & 0xFFFFFF))
>> I would define hyper_dmabuf_id_t.id as a union or two separate
>> fields to avoid this magic.
>
> I am not sure a union would be right here, because its layout will
> differ between big and little endian.
Agree
> So will that value be passed to another guest?
As per my understanding yes, with the HYPER_DMABUF_EXPORT request
>
> Cheers,
>

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread

Thread overview: 21+ messages
2018-02-14  1:49 [RFC PATCH v2 0/9] hyper_dmabuf: Hyper_DMABUF driver Dongwon Kim
2018-02-14  1:50 ` [RFC PATCH v2 1/9] hyper_dmabuf: initial upload of hyper_dmabuf drv core framework Dongwon Kim
2018-04-10  8:53   ` [RFC, v2, " Oleksandr Andrushchenko
2018-04-10 10:47     ` [Xen-devel] " Julien Grall
2018-04-10 11:04       ` Oleksandr Andrushchenko
2018-02-14  1:50 ` [RFC PATCH v2 2/9] hyper_dmabuf: architecture specification and reference guide Dongwon Kim
2018-02-23 16:15   ` [Xen-devel] " Roger Pau Monné
2018-02-23 19:02     ` Dongwon Kim
2018-04-10  9:52   ` [RFC, v2, " Oleksandr Andrushchenko
2018-02-14  1:50 ` [RFC PATCH v2 3/9] MAINTAINERS: adding Hyper_DMABUF driver section in MAINTAINERS Dongwon Kim
2018-02-14  1:50 ` [RFC PATCH v2 4/9] hyper_dmabuf: user private data attached to hyper_DMABUF Dongwon Kim
2018-04-10  9:59   ` [RFC, v2, " Oleksandr Andrushchenko
2018-02-14  1:50 ` [RFC PATCH v2 5/9] hyper_dmabuf: default backend for XEN hypervisor Dongwon Kim
2018-04-10  9:27   ` [RFC,v2,5/9] " Oleksandr Andrushchenko
2018-02-14  1:50 ` [RFC PATCH v2 6/9] hyper_dmabuf: hyper_DMABUF synchronization across VM Dongwon Kim
2018-02-14  1:50 ` [RFC PATCH v2 7/9] hyper_dmabuf: query ioctl for retreiving various hyper_DMABUF info Dongwon Kim
2018-02-14  1:50 ` [RFC PATCH v2 8/9] hyper_dmabuf: event-polling mechanism for detecting a new hyper_DMABUF Dongwon Kim
2018-02-14  1:50 ` [RFC PATCH v2 9/9] hyper_dmabuf: threaded interrupt in Xen-backend Dongwon Kim
2018-04-10 10:04   ` [RFC,v2,9/9] " Oleksandr Andrushchenko
2018-02-19 17:01 ` [RFC PATCH v2 0/9] hyper_dmabuf: Hyper_DMABUF driver Daniel Vetter
2018-02-21 20:18   ` Dongwon Kim
