[PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
From: Oleksandr Andrushchenko @ 2018-03-29 13:19 UTC
  To: xen-devel, linux-kernel, dri-devel, airlied, daniel.vetter,
	seanpaul, gustavo, jgross, boris.ostrovsky, konrad.wilk
  Cc: andr2000, Oleksandr Andrushchenko

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Hello!

When using the Xen PV DRM frontend driver, the backend has to copy the
contents of the display buffers (filled by the frontend's user-space)
into buffers allocated on the backend side. Given the size of display
buffers and the frame rate, this can result in significant, unneeded
data bus traffic and performance loss.

This helper driver enables zero-copy use-cases for the Xen
para-virtualized frontend display driver by providing a DRM/KMS helper
driver that runs on the backend side. It uses the PRIME buffer API to
share the frontend's buffers with physical device drivers on the
backend side:

 - a dumb buffer created on the backend side can be shared with the
   Xen PV frontend driver, so the frontend writes directly into the
   backend domain's memory (into the buffer exported from the DRM/KMS
   driver of a physical display device)
 - a dumb buffer allocated by the frontend can be imported into the
   physical device's DRM/KMS driver, again avoiding any copying

For this purpose a number of IOCTLs are introduced:
 - DRM_XEN_ZCOPY_DUMB_FROM_REFS
   creates a DRM dumb buffer from grant references provided
   by the frontend
 - DRM_XEN_ZCOPY_DUMB_TO_REFS
   grants references to a dumb/display buffer's memory provided
   by the backend
 - DRM_XEN_ZCOPY_DUMB_WAIT_FREE
   blocks until the dumb buffer with the provided wait handle
   is freed

With this helper driver I was able to drop CPU usage from 17% to 3%
on a Renesas R-Car M3 board.

This was tested with Renesas' Wayland-KMS, with the backend running as DRM master.

Thank you,
Oleksandr

Oleksandr Andrushchenko (1):
  drm/xen-zcopy: Add Xen zero-copy helper DRM driver

 Documentation/gpu/drivers.rst               |   1 +
 Documentation/gpu/xen-zcopy.rst             |  32 +
 drivers/gpu/drm/xen/Kconfig                 |  25 +
 drivers/gpu/drm/xen/Makefile                |   5 +
 drivers/gpu/drm/xen/xen_drm_zcopy.c         | 880 ++++++++++++++++++++++++++++
 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c | 154 +++++
 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h |  38 ++
 include/uapi/drm/xen_zcopy_drm.h            | 129 ++++
 8 files changed, 1264 insertions(+)
 create mode 100644 Documentation/gpu/xen-zcopy.rst
 create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy.c
 create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c
 create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h
 create mode 100644 include/uapi/drm/xen_zcopy_drm.h

-- 
2.16.2


[PATCH 1/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
From: Oleksandr Andrushchenko @ 2018-03-29 13:19 UTC
  To: xen-devel, linux-kernel, dri-devel, airlied, daniel.vetter,
	seanpaul, gustavo, jgross, boris.ostrovsky, konrad.wilk
  Cc: andr2000, Oleksandr Andrushchenko

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Introduce a Xen zero-copy helper DRM driver and add its user-space API:
1. DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
This creates a DRM dumb buffer from grant references provided by the
frontend. The intended usage is as follows (a user-space sketch is
given after the steps):
  - Frontend
    - creates a dumb/display buffer and allocates memory
    - grants foreign access to the buffer pages
    - passes granted references to the backend
  - Backend
    - issues DRM_XEN_ZCOPY_DUMB_FROM_REFS ioctl to map
      granted references and create a dumb buffer
    - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
    - requests real HW driver/consumer to import the PRIME buffer with
      DRM_IOCTL_PRIME_FD_TO_HANDLE
    - uses handle returned by the real HW driver
  - at the end:
    - closes the real HW driver's handle with DRM_IOCTL_GEM_CLOSE
    - closes the zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
    - closes the file descriptor of the exported buffer
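
A minimal user-space sketch of the backend steps above (illustrative
only: error handling is omitted, zcopy_fd/hw_fd are assumed to be
already opened DRM nodes, and grefs[] plus the dumb geometry are
assumed to come from the frontend via the display protocol):

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>
#include <xf86drm.h>
#include <drm/xen_zcopy_drm.h>

/* Map the frontend's grants into a zero-copy dumb buffer and hand it
 * over to the real HW driver via PRIME. */
static int backend_import_front_buffer(int zcopy_fd, int hw_fd,
		uint32_t otherend_id, uint32_t *grefs, uint32_t num_grefs,
		uint32_t width, uint32_t height, uint32_t bpp,
		uint32_t *hw_handle, uint32_t *wait_handle)
{
	struct drm_xen_zcopy_dumb_from_refs req = {
		.num_grefs   = num_grefs,
		.grefs       = grefs,
		.otherend_id = otherend_id,
		.dumb        = { .width = width, .height = height, .bpp = bpp },
	};
	int prime_fd, ret;

	/* map granted references and create a dumb buffer */
	ret = drmIoctl(zcopy_fd, DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS, &req);
	if (ret)
		return ret;
	*wait_handle = req.wait_handle;

	/* handle -> fd in the zero-copy driver, fd -> handle in the HW one */
	ret = drmPrimeHandleToFD(zcopy_fd, req.dumb.handle, DRM_CLOEXEC,
			&prime_fd);
	if (!ret) {
		ret = drmPrimeFDToHandle(hw_fd, prime_fd, hw_handle);
		close(prime_fd);
	}
	return ret;
}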

2. DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
This grants references to a dumb/display buffer's memory provided by the
backend. The intended usage is as follows (a user-space sketch is given
after the steps):
  - Frontend
    - requests backend to allocate dumb/display buffer and grant references
      to its pages
  - Backend
    - requests real HW driver to create a dumb with DRM_IOCTL_MODE_CREATE_DUMB
    - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
    - requests zero-copy driver to import the PRIME buffer with
      DRM_IOCTL_PRIME_FD_TO_HANDLE
    - issues DRM_XEN_ZCOPY_DUMB_TO_REFS ioctl to
      grant references to the buffer's memory.
    - passes grant references to the frontend
  - at the end:
    - closes the zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
    - closes the real HW driver's handle with DRM_IOCTL_GEM_CLOSE
    - closes the file descriptor of the imported buffer
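
A matching user-space sketch for this flow (same assumptions as above,
reusing its includes; grefs[] must provide room for num_grefs entries,
one per buffer page):

/* Allocate a display buffer in the real HW driver, import it into the
 * zero-copy driver via PRIME and grant the frontend access to it. */
static int backend_export_to_front(int zcopy_fd, int hw_fd,
		uint32_t otherend_id, uint32_t width, uint32_t height,
		uint32_t bpp, uint32_t *grefs, uint32_t num_grefs,
		uint32_t *zcopy_handle)
{
	struct drm_mode_create_dumb dumb = {
		.width = width, .height = height, .bpp = bpp,
	};
	struct drm_xen_zcopy_dumb_to_refs req;
	int prime_fd, ret;

	/* the display buffer itself lives in the real HW driver */
	ret = drmIoctl(hw_fd, DRM_IOCTL_MODE_CREATE_DUMB, &dumb);
	if (ret)
		return ret;

	/* share it with the zero-copy driver via PRIME */
	ret = drmPrimeHandleToFD(hw_fd, dumb.handle, DRM_CLOEXEC, &prime_fd);
	if (ret)
		return ret;
	ret = drmPrimeFDToHandle(zcopy_fd, prime_fd, zcopy_handle);
	close(prime_fd);
	if (ret)
		return ret;

	/* grant references to the buffer pages for the frontend */
	req = (struct drm_xen_zcopy_dumb_to_refs) {
		.num_grefs   = num_grefs,
		.grefs       = grefs,
		.otherend_id = otherend_id,
		.handle      = *zcopy_handle,
	};
	return drmIoctl(zcopy_fd, DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS, &req);
}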

Implement GEM/IOCTL handling depending on driver mode of operation:
- if GEM is created from grant references, then prepare to create
  a dumb from mapped pages
- if GEM grant references are about to be provided for the
  imported PRIME buffer, then prepare for granting references
  and providing those to user-space

Implement handling of display buffers from the point of view of the
backend/frontend interaction:
- when importing a buffer from the frontend:
  - allocate/free xen ballooned pages via Xen balloon driver
    or by manually allocating a DMA buffer
  - if DMA buffer is used, then increase/decrease its pages
    reservation accordingly
  - map/unmap foreign pages to the ballooned pages
- when exporting a buffer to the frontend:
  - grant references for the pages of the imported PRIME buffer
  - pass the grants back to user-space, so those can be shared
    with the frontend

Add an option to allocate DMA buffers as backing storage while
importing a frontend's buffer into the host's memory: for those
use-cases where the exported PRIME buffer will be used by a device
expecting CMA buffers only, it is possible to map the frontend's pages
onto a contiguous buffer, e.g. one allocated via the DMA API.
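
For example, a backend kernel serving such a CMA-only display device
could be configured with (illustrative .config fragment, the exact
values depend on the actual setup):

CONFIG_DRM_XEN=y
CONFIG_DRM_XEN_ZCOPY=y
CONFIG_DRM_XEN_ZCOPY_CMA=y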

Implement synchronous buffer deletion: for buffers created from the
frontend's grant references, synchronization between backend and
frontend is needed on buffer deletion, as the frontend expects us to
unmap these references after the XENDISPL_OP_DBUF_DESTROY response.
For that reason introduce the DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL:
it blocks until the dumb buffer with the provided wait handle is freed.
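
A user-space sketch of this synchronization (same assumptions as the
sketches above; the 3000 ms timeout is purely illustrative):

/* Drop the backend's zero-copy handle and wait until the buffer is
 * actually freed, so the grant references are known to be unmapped
 * before replying to the frontend. An ENOENT error from the wait
 * ioctl means the buffer is already gone. */
static int backend_wait_buffer_freed(int zcopy_fd, uint32_t zcopy_handle,
		uint32_t wait_handle)
{
	struct drm_gem_close gem_close = { .handle = zcopy_handle };
	struct drm_xen_zcopy_dumb_wait_free wait = {
		.wait_handle = wait_handle,
		.wait_to_ms  = 3000,
	};

	drmIoctl(zcopy_fd, DRM_IOCTL_GEM_CLOSE, &gem_close);
	return drmIoctl(zcopy_fd, DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE, &wait);
}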

The rationale behind implementing our own wait handle:
  - the dumb buffer handle cannot be used: when the PRIME buffer gets
    exported there are at least two handles, one for the backend and
    another one for the importing application, so when the backend
    closes its handle while the other application still holds the
    buffer there is no way for the backend to tell which buffer to
    wait for when calling xen_ioctl_wait_free
  - flink cannot be used either, as it is gone by the time the DRM
    core calls .gem_free_object_unlocked

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
 Documentation/gpu/drivers.rst               |   1 +
 Documentation/gpu/xen-zcopy.rst             |  32 +
 drivers/gpu/drm/xen/Kconfig                 |  25 +
 drivers/gpu/drm/xen/Makefile                |   5 +
 drivers/gpu/drm/xen/xen_drm_zcopy.c         | 880 ++++++++++++++++++++++++++++
 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c | 154 +++++
 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h |  38 ++
 include/uapi/drm/xen_zcopy_drm.h            | 129 ++++
 8 files changed, 1264 insertions(+)
 create mode 100644 Documentation/gpu/xen-zcopy.rst
 create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy.c
 create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c
 create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h
 create mode 100644 include/uapi/drm/xen_zcopy_drm.h

diff --git a/Documentation/gpu/drivers.rst b/Documentation/gpu/drivers.rst
index d3ab6abae838..900ff1c3d3f1 100644
--- a/Documentation/gpu/drivers.rst
+++ b/Documentation/gpu/drivers.rst
@@ -13,6 +13,7 @@ GPU Driver Documentation
    vc4
    bridge/dw-hdmi
    xen-front
+   xen-zcopy
 
 .. only::  subproject and html
 
diff --git a/Documentation/gpu/xen-zcopy.rst b/Documentation/gpu/xen-zcopy.rst
new file mode 100644
index 000000000000..28d3942af2b8
--- /dev/null
+++ b/Documentation/gpu/xen-zcopy.rst
@@ -0,0 +1,32 @@
+===============================
+Xen zero-copy helper DRM driver
+===============================
+
+This helper driver enables zero-copy use-cases when using the Xen
+para-virtualized frontend display driver:
+
+ - a dumb buffer created on the backend side can be shared with the
+   Xen PV frontend driver, so the frontend writes directly into the
+   backend domain's memory (into the buffer exported from the DRM/KMS
+   driver of a physical display device)
+ - a dumb buffer allocated by the frontend can be imported into the
+   physical device's DRM/KMS driver, again avoiding any copying of
+   the display data
+
+DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL
+==================================
+
+.. kernel-doc:: include/uapi/drm/xen_zcopy_drm.h
+   :doc: DRM_XEN_ZCOPY_DUMB_FROM_REFS
+
+DRM_XEN_ZCOPY_DUMB_TO_REFS IOCTL
+================================
+
+.. kernel-doc:: include/uapi/drm/xen_zcopy_drm.h
+   :doc: DRM_XEN_ZCOPY_DUMB_TO_REFS
+
+DRM_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL
+==================================
+
+.. kernel-doc:: include/uapi/drm/xen_zcopy_drm.h
+   :doc: DRM_XEN_ZCOPY_DUMB_WAIT_FREE
diff --git a/drivers/gpu/drm/xen/Kconfig b/drivers/gpu/drm/xen/Kconfig
index 4f4abc91f3b6..31eedb410829 100644
--- a/drivers/gpu/drm/xen/Kconfig
+++ b/drivers/gpu/drm/xen/Kconfig
@@ -5,6 +5,10 @@ config DRM_XEN
 	  Choose this option if you want to enable DRM support
 	  for Xen.
 
+choice
+	prompt "Xen DRM drivers selection"
+	depends on DRM_XEN
+
 config DRM_XEN_FRONTEND
 	tristate "Para-virtualized frontend driver for Xen guest OS"
 	depends on DRM_XEN
@@ -28,3 +32,24 @@ config DRM_XEN_FRONTEND_CMA
 	  contiguous buffers.
 	  Note: in this mode driver cannot use buffers allocated
 	  by the backend.
+
+config DRM_XEN_ZCOPY
+	tristate "Zero copy helper DRM driver for Xen"
+	depends on DRM_XEN
+	depends on DRM
+	select DRM_KMS_HELPER
+	help
+	  Choose this option if you want to enable a zero copy
+	  helper DRM driver for Xen. This is implemented by mapping
+	  foreign display buffer pages into the current domain and
+	  exporting a dumb buffer via the PRIME interface. This allows
+	  driver domains to use buffers of unprivileged guests without
+	  additional memory copying.
+
+config DRM_XEN_ZCOPY_CMA
+	bool "Use CMA to allocate buffers"
+	depends on DRM_XEN_ZCOPY
+	help
+	  Use CMA to allocate display buffers.
+
+endchoice
diff --git a/drivers/gpu/drm/xen/Makefile b/drivers/gpu/drm/xen/Makefile
index 352730dc6c13..832daea761a9 100644
--- a/drivers/gpu/drm/xen/Makefile
+++ b/drivers/gpu/drm/xen/Makefile
@@ -14,3 +14,8 @@ else
 endif
 
 obj-$(CONFIG_DRM_XEN_FRONTEND) += drm_xen_front.o
+
+drm_xen_zcopy-objs := xen_drm_zcopy.o \
+		      xen_drm_zcopy_balloon.o
+
+obj-$(CONFIG_DRM_XEN_ZCOPY) += drm_xen_zcopy.o
diff --git a/drivers/gpu/drm/xen/xen_drm_zcopy.c b/drivers/gpu/drm/xen/xen_drm_zcopy.c
new file mode 100644
index 000000000000..c2fa4fcf1bf6
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_zcopy.c
@@ -0,0 +1,880 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+
+/*
+ *  Xen zero-copy helper DRM device
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
+ */
+
+#include <drm/drmP.h>
+#include <drm/drm_gem.h>
+
+#include <linux/dma-buf.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+
+#include <xen/grant_table.h>
+#include <asm/xen/page.h>
+
+#include <drm/xen_zcopy_drm.h>
+
+#include "xen_drm_zcopy_balloon.h"
+
+struct xen_gem_object {
+	struct drm_gem_object base;
+	uint32_t dumb_handle;
+
+	int otherend_id;
+
+	uint32_t num_pages;
+	grant_ref_t *grefs;
+	/* these are the pages from Xen balloon for allocated Xen GEM object */
+	struct page **pages;
+
+	struct xen_drm_zcopy_balloon balloon;
+
+	/* this will be set if we have imported a PRIME buffer */
+	struct sg_table *sgt;
+	/* map grant handles */
+	grant_handle_t *map_handles;
+	/*
+	 * these are used for synchronous object deletion, e.g.
+	 * when user-space wants to know that the grefs are unmapped
+	 */
+	struct kref refcount;
+	int wait_handle;
+};
+
+struct xen_wait_obj {
+	struct list_head list;
+	struct xen_gem_object *xen_obj;
+	struct completion completion;
+};
+
+struct xen_drv_info {
+	struct drm_device *drm_dev;
+
+	/*
+	 * For buffers, created from front's grant references, synchronization
+	 * between backend and frontend is needed on buffer deletion as front
+	 * expects us to unmap these references after XENDISPL_OP_DBUF_DESTROY
+	 * response. This means that when calling DRM_XEN_ZCOPY_DUMB_WAIT_FREE
+	 * ioctl user-space has to provide some unique handle to identify
+	 * the buffer. For that reason we use IDR to allocate a unique value.
+	 * The rationale behind implementing wait handle as IDR:
+	 * - dumb buffer handle cannot be used as when the PRIME buffer
+	 *   gets exported there are at least 2 handles: one is for the
+	 *   backend and another one for the importing application,
+	 *   so when backend closes its handle and the other application still
+	 *   holds the buffer, then there is no way for the backend to tell
+	 *   which buffer we want to wait for while calling xen_ioctl_wait_free
+	 * - flink cannot be used as well as it is gone when DRM core
+	 *   calls .gem_free_object_unlocked
+	 * - sync_file can be used, but it seems to be an overhead to use it
+	 *   only to get a unique "handle"
+	 */
+	struct list_head wait_obj_list;
+	struct idr idr;
+	spinlock_t idr_lock;
+	spinlock_t wait_list_lock;
+};
+
+static inline struct xen_gem_object *to_xen_gem_obj(
+		struct drm_gem_object *gem_obj)
+{
+	return container_of(gem_obj, struct xen_gem_object, base);
+}
+
+static struct xen_wait_obj *wait_obj_new(struct xen_drv_info *drv_info,
+		struct xen_gem_object *xen_obj)
+{
+	struct xen_wait_obj *wait_obj;
+
+	wait_obj = kzalloc(sizeof(*wait_obj), GFP_KERNEL);
+	if (!wait_obj)
+		return ERR_PTR(-ENOMEM);
+
+	init_completion(&wait_obj->completion);
+	wait_obj->xen_obj = xen_obj;
+
+	spin_lock(&drv_info->wait_list_lock);
+	list_add(&wait_obj->list, &drv_info->wait_obj_list);
+	spin_unlock(&drv_info->wait_list_lock);
+
+	return wait_obj;
+}
+
+static void wait_obj_free(struct xen_drv_info *drv_info,
+		struct xen_wait_obj *wait_obj)
+{
+	struct xen_wait_obj *cur_wait_obj, *q;
+
+	spin_lock(&drv_info->wait_list_lock);
+	list_for_each_entry_safe(cur_wait_obj, q,
+			&drv_info->wait_obj_list, list)
+		if (cur_wait_obj == wait_obj) {
+			list_del(&wait_obj->list);
+			kfree(wait_obj);
+			break;
+		}
+	spin_unlock(&drv_info->wait_list_lock);
+}
+
+static void wait_obj_check_pending(struct xen_drv_info *drv_info)
+{
+	/*
+	 * It is intended to be called from .last_close when
+	 * This is intended to be called from .lastclose, when no pending
+	 * wait objects should be on the list; make sure we don't miss
+	 * a bug if this is not the case.
+	WARN(!list_empty(&drv_info->wait_obj_list),
+			"Removing with pending wait objects!\n");
+}
+
+static int wait_obj_wait(struct xen_wait_obj *wait_obj,
+		uint32_t wait_to_ms)
+{
+	if (wait_for_completion_timeout(&wait_obj->completion,
+			msecs_to_jiffies(wait_to_ms)) <= 0)
+		return -ETIMEDOUT;
+
+	return 0;
+}
+
+static void wait_obj_signal(struct xen_drv_info *drv_info,
+		struct xen_gem_object *xen_obj)
+{
+	struct xen_wait_obj *wait_obj, *q;
+
+	spin_lock(&drv_info->wait_list_lock);
+	list_for_each_entry_safe(wait_obj, q, &drv_info->wait_obj_list, list)
+		if (wait_obj->xen_obj == xen_obj) {
+			DRM_DEBUG("Found xen_obj in the wait list, wake\n");
+			complete_all(&wait_obj->completion);
+		}
+	spin_unlock(&drv_info->wait_list_lock);
+}
+
+static int wait_obj_handle_new(struct xen_drv_info *drv_info,
+		struct xen_gem_object *xen_obj)
+{
+	int ret;
+
+	idr_preload(GFP_KERNEL);
+	spin_lock(&drv_info->idr_lock);
+	ret = idr_alloc(&drv_info->idr, xen_obj, 1, 0, GFP_NOWAIT);
+	spin_unlock(&drv_info->idr_lock);
+	idr_preload_end();
+	return ret;
+}
+
+static void wait_obj_handle_free(struct xen_drv_info *drv_info,
+		struct xen_gem_object *xen_obj)
+{
+	spin_lock(&drv_info->idr_lock);
+	idr_remove(&drv_info->idr, xen_obj->wait_handle);
+	spin_unlock(&drv_info->idr_lock);
+}
+
+static struct xen_gem_object *get_obj_by_wait_handle(
+		struct xen_drv_info *drv_info, int wait_handle)
+{
+	struct xen_gem_object *xen_obj;
+
+	spin_lock(&drv_info->idr_lock);
+	/* check if xen_obj still exists */
+	xen_obj = idr_find(&drv_info->idr, wait_handle);
+	if (xen_obj)
+		kref_get(&xen_obj->refcount);
+	spin_unlock(&drv_info->idr_lock);
+	return xen_obj;
+}
+
+#define xen_page_to_vaddr(page) \
+	((phys_addr_t)pfn_to_kaddr(page_to_xen_pfn(page)))
+
+static int from_refs_unmap(struct device *dev,
+		struct xen_gem_object *xen_obj)
+{
+	struct gnttab_unmap_grant_ref *unmap_ops;
+	int i, ret;
+
+	if (!xen_obj->pages || !xen_obj->map_handles)
+		return 0;
+
+	unmap_ops = kcalloc(xen_obj->num_pages, sizeof(*unmap_ops), GFP_KERNEL);
+	if (!unmap_ops)
+		return -ENOMEM;
+
+	for (i = 0; i < xen_obj->num_pages; i++) {
+		phys_addr_t addr;
+
+		/*
+		 * When unmapping the grant entry for access by host CPUs:
+		 * if <host_addr> or <dev_bus_addr> is zero, that
+		 * field is ignored. If non-zero, they must refer to
+		 * a device/host mapping that is tracked by <handle>
+		 */
+		addr = xen_page_to_vaddr(xen_obj->pages[i]);
+		gnttab_set_unmap_op(&unmap_ops[i], addr,
+#if defined(CONFIG_X86)
+			GNTMAP_host_map | GNTMAP_device_map,
+#else
+			GNTMAP_host_map,
+#endif
+			xen_obj->map_handles[i]);
+		unmap_ops[i].dev_bus_addr = __pfn_to_phys(__pfn_to_mfn(
+				page_to_pfn(xen_obj->pages[i])));
+	}
+
+	ret = gnttab_unmap_refs(unmap_ops, NULL, xen_obj->pages,
+			xen_obj->num_pages);
+	/*
+	 * Even if we didn't unmap properly - continue to rescue whatever
+	 * resources we can.
+	 */
+	if (ret)
+		DRM_ERROR("Failed to unmap grant references, ret %d", ret);
+
+	for (i = 0; i < xen_obj->num_pages; i++) {
+		if (unlikely(unmap_ops[i].status != GNTST_okay))
+			DRM_ERROR("Failed to unmap page %d with ref %d: %d\n",
+					i, xen_obj->grefs[i],
+					unmap_ops[i].status);
+	}
+
+	xen_drm_zcopy_ballooned_pages_free(dev, &xen_obj->balloon,
+			xen_obj->num_pages, xen_obj->pages);
+
+	kfree(xen_obj->pages);
+	xen_obj->pages = NULL;
+	kfree(xen_obj->map_handles);
+	xen_obj->map_handles = NULL;
+	kfree(unmap_ops);
+	kfree(xen_obj->grefs);
+	xen_obj->grefs = NULL;
+	return ret;
+}
+
+static int from_refs_map(struct device *dev, struct xen_gem_object *xen_obj)
+{
+	struct gnttab_map_grant_ref *map_ops = NULL;
+	int ret, i;
+
+	if (xen_obj->pages) {
+		DRM_ERROR("Mapping already mapped pages?\n");
+		return -EINVAL;
+	}
+
+	xen_obj->pages = kcalloc(xen_obj->num_pages, sizeof(*xen_obj->pages),
+			GFP_KERNEL);
+	if (!xen_obj->pages) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	xen_obj->map_handles = kcalloc(xen_obj->num_pages,
+			sizeof(*xen_obj->map_handles), GFP_KERNEL);
+	if (!xen_obj->map_handles) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	map_ops = kcalloc(xen_obj->num_pages, sizeof(*map_ops), GFP_KERNEL);
+	if (!map_ops) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	ret = xen_drm_zcopy_ballooned_pages_alloc(dev, &xen_obj->balloon,
+			xen_obj->num_pages, xen_obj->pages);
+	if (ret < 0) {
+		DRM_ERROR("Cannot allocate %d ballooned pages: %d\n",
+				xen_obj->num_pages, ret);
+		goto fail;
+	}
+
+	for (i = 0; i < xen_obj->num_pages; i++) {
+		phys_addr_t addr;
+
+		addr = xen_page_to_vaddr(xen_obj->pages[i]);
+		gnttab_set_map_op(&map_ops[i], addr,
+#if defined(CONFIG_X86)
+			GNTMAP_host_map | GNTMAP_device_map,
+#else
+			GNTMAP_host_map,
+#endif
+			xen_obj->grefs[i], xen_obj->otherend_id);
+	}
+	ret = gnttab_map_refs(map_ops, NULL, xen_obj->pages,
+			xen_obj->num_pages);
+
+	/* save handles even if error, so we can unmap */
+	for (i = 0; i < xen_obj->num_pages; i++) {
+		xen_obj->map_handles[i] = map_ops[i].handle;
+		if (unlikely(map_ops[i].status != GNTST_okay))
+			DRM_ERROR("Failed to map page %d with ref %d: %d\n",
+				i, xen_obj->grefs[i], map_ops[i].status);
+	}
+
+	if (ret) {
+		DRM_ERROR("Failed to map grant references, ret %d", ret);
+		from_refs_unmap(dev, xen_obj);
+		goto fail;
+	}
+
+	kfree(map_ops);
+	return 0;
+
+fail:
+	kfree(xen_obj->pages);
+	xen_obj->pages = NULL;
+	kfree(xen_obj->map_handles);
+	xen_obj->map_handles = NULL;
+	kfree(map_ops);
+	return ret;
+
+}
+
+static void to_refs_end_foreign_access(struct xen_gem_object *xen_obj)
+{
+	int i;
+
+	if (xen_obj->grefs)
+		for (i = 0; i < xen_obj->num_pages; i++)
+			if (xen_obj->grefs[i] != GRANT_INVALID_REF)
+				gnttab_end_foreign_access(xen_obj->grefs[i],
+						0, 0UL);
+
+	kfree(xen_obj->grefs);
+	xen_obj->grefs = NULL;
+	xen_obj->sgt = NULL;
+}
+
+static int to_refs_grant_foreign_access(struct xen_gem_object *xen_obj)
+{
+	grant_ref_t priv_gref_head;
+	int ret, j, cur_ref, num_pages;
+	struct sg_page_iter sg_iter;
+
+	ret = gnttab_alloc_grant_references(xen_obj->num_pages,
+			&priv_gref_head);
+	if (ret < 0) {
+		DRM_ERROR("Cannot allocate grant references\n");
+		return ret;
+	}
+
+	j = 0;
+	num_pages = xen_obj->num_pages;
+	for_each_sg_page(xen_obj->sgt->sgl, &sg_iter, xen_obj->sgt->nents, 0) {
+		struct page *page;
+
+		page = sg_page_iter_page(&sg_iter);
+		cur_ref = gnttab_claim_grant_reference(&priv_gref_head);
+		if (cur_ref < 0)
+			return cur_ref;
+
+		gnttab_grant_foreign_access_ref(cur_ref,
+				xen_obj->otherend_id, xen_page_to_gfn(page), 0);
+		xen_obj->grefs[j++] = cur_ref;
+		num_pages--;
+	}
+
+	WARN_ON(num_pages != 0);
+
+	gnttab_free_grant_references(priv_gref_head);
+	return 0;
+}
+
+static int gem_create_with_handle(struct xen_gem_object *xen_obj,
+		struct drm_file *filp, struct drm_device *dev, int size)
+{
+	struct drm_gem_object *gem_obj;
+	int ret;
+
+	drm_gem_private_object_init(dev, &xen_obj->base, size);
+	gem_obj = &xen_obj->base;
+	ret = drm_gem_handle_create(filp, gem_obj, &xen_obj->dumb_handle);
+	/* drop reference from allocate - handle holds it now. */
+	drm_gem_object_put_unlocked(gem_obj);
+	return ret;
+}
+
+static int gem_create_obj(struct xen_gem_object *xen_obj,
+		struct drm_device *dev, struct drm_file *filp, int size)
+{
+	struct drm_gem_object *gem_obj;
+	int ret;
+
+	ret = gem_create_with_handle(xen_obj, filp, dev, size);
+	if (ret < 0)
+		goto fail;
+
+	gem_obj = drm_gem_object_lookup(filp, xen_obj->dumb_handle);
+	if (!gem_obj) {
+		DRM_ERROR("Lookup for handle %d failed\n",
+				xen_obj->dumb_handle);
+		ret = -EINVAL;
+		goto fail_destroy;
+	}
+
+	drm_gem_object_put_unlocked(gem_obj);
+	return 0;
+
+fail_destroy:
+	drm_gem_dumb_destroy(filp, dev, xen_obj->dumb_handle);
+fail:
+	DRM_ERROR("Failed to create dumb buffer: %d\n", ret);
+	xen_obj->dumb_handle = 0;
+	return ret;
+}
+
+static int gem_init_obj(struct xen_gem_object *xen_obj,
+		struct drm_device *dev, int size)
+{
+	struct drm_gem_object *gem_obj = &xen_obj->base;
+	int ret;
+
+	ret = drm_gem_object_init(dev, gem_obj, size);
+	if (ret < 0)
+		return ret;
+
+	ret = drm_gem_create_mmap_offset(gem_obj);
+	if (ret < 0) {
+		drm_gem_object_release(gem_obj);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void obj_release(struct kref *kref)
+{
+	struct xen_gem_object *xen_obj =
+			container_of(kref, struct xen_gem_object, refcount);
+	struct xen_drv_info *drv_info = xen_obj->base.dev->dev_private;
+
+	wait_obj_signal(drv_info, xen_obj);
+	kfree(xen_obj);
+}
+
+static void gem_free_object_unlocked(struct drm_gem_object *gem_obj)
+{
+	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
+	struct xen_drv_info *drv_info = gem_obj->dev->dev_private;
+
+	DRM_DEBUG("Freeing dumb with handle %d\n", xen_obj->dumb_handle);
+	if (xen_obj->grefs) {
+		if (xen_obj->sgt) {
+			drm_prime_gem_destroy(&xen_obj->base, xen_obj->sgt);
+			to_refs_end_foreign_access(xen_obj);
+		} else
+			from_refs_unmap(gem_obj->dev->dev, xen_obj);
+	}
+
+	drm_gem_object_release(gem_obj);
+
+	wait_obj_handle_free(drv_info, xen_obj);
+	kref_put(&xen_obj->refcount, obj_release);
+}
+
+static struct sg_table *gem_prime_get_sg_table(
+		struct drm_gem_object *gem_obj)
+{
+	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
+	struct sg_table *sgt = NULL;
+
+	if (unlikely(!xen_obj->pages))
+		return NULL;
+
+	sgt = drm_prime_pages_to_sg(xen_obj->pages, xen_obj->num_pages);
+
+	if (unlikely(!sgt))
+		DRM_ERROR("Failed to export sgt\n");
+	else
+		DRM_DEBUG("Exporting %scontiguous buffer nents %d\n",
+				sgt->nents == 1 ? "" : "non-", sgt->nents);
+	return sgt;
+}
+
+struct drm_gem_object *gem_prime_import_sg_table(struct drm_device *dev,
+		struct dma_buf_attachment *attach, struct sg_table *sgt)
+{
+	struct xen_gem_object *xen_obj;
+	int ret;
+
+	xen_obj = kzalloc(sizeof(*xen_obj), GFP_KERNEL);
+	if (!xen_obj)
+		return ERR_PTR(-ENOMEM);
+
+	ret = gem_init_obj(xen_obj, dev, attach->dmabuf->size);
+	if (ret < 0)
+		goto fail;
+
+	kref_init(&xen_obj->refcount);
+	xen_obj->sgt = sgt;
+	xen_obj->num_pages = DIV_ROUND_UP(attach->dmabuf->size, PAGE_SIZE);
+
+	DRM_DEBUG("Imported buffer of size %zu with nents %u\n",
+			attach->dmabuf->size, sgt->nents);
+	return &xen_obj->base;
+
+fail:
+	kfree(xen_obj);
+	return ERR_PTR(ret);
+}
+
+static int do_ioctl_from_refs(struct drm_device *dev,
+		struct drm_xen_zcopy_dumb_from_refs *req,
+		struct drm_file *filp)
+{
+	struct xen_drv_info *drv_info = dev->dev_private;
+	struct xen_gem_object *xen_obj;
+	int ret;
+
+	xen_obj = kzalloc(sizeof(*xen_obj), GFP_KERNEL);
+	if (!xen_obj)
+		return -ENOMEM;
+
+	kref_init(&xen_obj->refcount);
+	xen_obj->num_pages = req->num_grefs;
+	xen_obj->otherend_id = req->otherend_id;
+	xen_obj->grefs = kcalloc(xen_obj->num_pages,
+			sizeof(grant_ref_t), GFP_KERNEL);
+	if (!xen_obj->grefs) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	if (copy_from_user(xen_obj->grefs, req->grefs,
+			xen_obj->num_pages * sizeof(grant_ref_t))) {
+		ret = -EINVAL;
+		goto fail;
+	}
+
+	ret = from_refs_map(dev->dev, xen_obj);
+	if (ret < 0)
+		goto fail;
+
+	ret = gem_create_obj(xen_obj, dev, filp,
+			round_up(req->dumb.size, PAGE_SIZE));
+	if (ret < 0)
+		goto fail;
+
+	req->dumb.handle = xen_obj->dumb_handle;
+
+	/*
+	 * Get user-visible handle for this GEM object.
+	 * the wait object is not allocated at the moment,
+	 * but if need be it will be allocated at the time of
+	 * DRM_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL
+	 */
+	ret = wait_obj_handle_new(drv_info, xen_obj);
+	if (ret < 0)
+		goto fail;
+
+	req->wait_handle = ret;
+	xen_obj->wait_handle = ret;
+	return 0;
+
+fail:
+	kfree(xen_obj->grefs);
+	xen_obj->grefs = NULL;
+	kfree(xen_obj);
+	return ret;
+}
+
+static int ioctl_from_refs(struct drm_device *dev,
+		void *data, struct drm_file *filp)
+{
+	struct drm_xen_zcopy_dumb_from_refs *req =
+			(struct drm_xen_zcopy_dumb_from_refs *)data;
+	struct drm_mode_create_dumb *args = &req->dumb;
+	uint32_t cpp, stride, size;
+
+	if (!req->num_grefs || !req->grefs)
+		return -EINVAL;
+
+	if (!args->width || !args->height || !args->bpp)
+		return -EINVAL;
+
+	cpp = DIV_ROUND_UP(args->bpp, 8);
+	if (!cpp || cpp > 0xffffffffU / args->width)
+		return -EINVAL;
+
+	stride = cpp * args->width;
+	if (args->height > 0xffffffffU / stride)
+		return -EINVAL;
+
+	size = args->height * stride;
+	if (PAGE_ALIGN(size) == 0)
+		return -EINVAL;
+
+	args->pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
+	args->size = args->pitch * args->height;
+	args->handle = 0;
+	if (req->num_grefs < DIV_ROUND_UP(args->size, PAGE_SIZE)) {
+		DRM_ERROR("Provided %d pages, need %d\n", req->num_grefs,
+				(int)DIV_ROUND_UP(args->size, PAGE_SIZE));
+		return -EINVAL;
+	}
+
+	return do_ioctl_from_refs(dev, req, filp);
+}
+
+static int ioctl_to_refs(struct drm_device *dev,
+		void *data, struct drm_file *filp)
+{
+	struct xen_gem_object *xen_obj;
+	struct drm_gem_object *gem_obj;
+	struct drm_xen_zcopy_dumb_to_refs *req =
+			(struct drm_xen_zcopy_dumb_to_refs *)data;
+	int ret;
+
+	if (!req->num_grefs || !req->grefs)
+		return -EINVAL;
+
+	gem_obj = drm_gem_object_lookup(filp, req->handle);
+	if (!gem_obj) {
+		DRM_ERROR("Lookup for handle %d failed\n", req->handle);
+		return -EINVAL;
+	}
+
+	drm_gem_object_put_unlocked(gem_obj);
+	xen_obj = to_xen_gem_obj(gem_obj);
+
+	if (xen_obj->num_pages != req->num_grefs) {
+		DRM_ERROR("Provided %d pages, need %d\n", req->num_grefs,
+				xen_obj->num_pages);
+		return -EINVAL;
+	}
+
+	xen_obj->otherend_id = req->otherend_id;
+	xen_obj->grefs = kcalloc(xen_obj->num_pages,
+			sizeof(grant_ref_t), GFP_KERNEL);
+	if (!xen_obj->grefs) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	ret = to_refs_grant_foreign_access(xen_obj);
+	if (ret < 0)
+		goto fail;
+
+	if (copy_to_user(req->grefs, xen_obj->grefs,
+			xen_obj->num_pages * sizeof(grant_ref_t))) {
+		ret = -EINVAL;
+		goto fail;
+	}
+
+	return 0;
+
+fail:
+	to_refs_end_foreign_access(xen_obj);
+	return ret;
+}
+
+static int ioctl_wait_free(struct drm_device *dev,
+		void *data, struct drm_file *file_priv)
+{
+	struct drm_xen_zcopy_dumb_wait_free *req =
+			(struct drm_xen_zcopy_dumb_wait_free *)data;
+	struct xen_drv_info *drv_info = dev->dev_private;
+	struct xen_gem_object *xen_obj;
+	struct xen_wait_obj *wait_obj;
+	int wait_handle, ret;
+
+	wait_handle = req->wait_handle;
+	/*
+	 * try to find the wait handle: if not found means that
+	 * either the handle has already been freed or wrong
+	 */
+	xen_obj = get_obj_by_wait_handle(drv_info, wait_handle);
+	if (!xen_obj)
+		return -ENOENT;
+
+	/*
+	 * xen_obj still exists and is reference count locked by us now, so
+	 * prepare to wait: allocate wait object and add it to the wait list,
+	 * so we can find it on release
+	 */
+	wait_obj = wait_obj_new(drv_info, xen_obj);
+	/* put our reference and wait for xen_obj release to fire */
+	kref_put(&xen_obj->refcount, obj_release);
+	ret = PTR_ERR_OR_ZERO(wait_obj);
+	if (ret < 0) {
+		DRM_ERROR("Failed to setup wait object, ret %d\n", ret);
+		return ret;
+	}
+
+	ret = wait_obj_wait(wait_obj, req->wait_to_ms);
+	wait_obj_free(drv_info, wait_obj);
+	return ret;
+}
+
+static void lastclose(struct drm_device *dev)
+{
+	struct xen_drv_info *drv_info = dev->dev_private;
+
+	wait_obj_check_pending(drv_info);
+}
+
+static const struct drm_ioctl_desc xen_drm_ioctls[] = {
+	DRM_IOCTL_DEF_DRV(XEN_ZCOPY_DUMB_FROM_REFS,
+		ioctl_from_refs,
+		DRM_AUTH | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF_DRV(XEN_ZCOPY_DUMB_TO_REFS,
+		ioctl_to_refs,
+		DRM_AUTH | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF_DRV(XEN_ZCOPY_DUMB_WAIT_FREE,
+		ioctl_wait_free,
+		DRM_AUTH | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+};
+
+static const struct file_operations xen_drm_fops = {
+	.owner          = THIS_MODULE,
+	.open           = drm_open,
+	.release        = drm_release,
+	.unlocked_ioctl = drm_ioctl,
+};
+
+static struct drm_driver xen_drm_driver = {
+	.driver_features           = DRIVER_GEM | DRIVER_PRIME,
+	.lastclose                 = lastclose,
+	.prime_handle_to_fd        = drm_gem_prime_handle_to_fd,
+	.gem_prime_export          = drm_gem_prime_export,
+	.gem_prime_get_sg_table    = gem_prime_get_sg_table,
+	.prime_fd_to_handle        = drm_gem_prime_fd_to_handle,
+	.gem_prime_import          = drm_gem_prime_import,
+	.gem_prime_import_sg_table = gem_prime_import_sg_table,
+	.gem_free_object_unlocked  = gem_free_object_unlocked,
+	.fops                      = &xen_drm_fops,
+	.ioctls                    = xen_drm_ioctls,
+	.num_ioctls                = ARRAY_SIZE(xen_drm_ioctls),
+	.name                      = XENDRM_ZCOPY_DRIVER_NAME,
+	.desc                      = "Xen PV DRM zero copy",
+	.date                      = "20180221",
+	.major                     = 1,
+	.minor                     = 0,
+};
+
+static int xen_drm_drv_remove(struct platform_device *pdev)
+{
+	struct xen_drv_info *drv_info = platform_get_drvdata(pdev);
+
+	if (drv_info && drv_info->drm_dev) {
+		drm_dev_unregister(drv_info->drm_dev);
+		drm_dev_unref(drv_info->drm_dev);
+		idr_destroy(&drv_info->idr);
+	}
+	return 0;
+}
+
+static int xen_drm_drv_probe(struct platform_device *pdev)
+{
+	struct xen_drv_info *drv_info;
+	int ret;
+
+	DRM_INFO("Creating %s\n", xen_drm_driver.desc);
+	drv_info = kzalloc(sizeof(*drv_info), GFP_KERNEL);
+	if (!drv_info)
+		return -ENOMEM;
+
+	idr_init(&drv_info->idr);
+	spin_lock_init(&drv_info->idr_lock);
+	spin_lock_init(&drv_info->wait_list_lock);
+	INIT_LIST_HEAD(&drv_info->wait_obj_list);
+
+	/*
+	 * The device is not spawned from a device tree, so arch_setup_dma_ops
+	 * is not called, thus leaving the device with dummy DMA ops.
+	 * This makes the device return error on PRIME buffer import, which
+	 * is not correct: to fix this call of_dma_configure() with a NULL
+	 * node to set default DMA ops.
+	 */
+	of_dma_configure(&pdev->dev, NULL);
+
+	drv_info->drm_dev = drm_dev_alloc(&xen_drm_driver, &pdev->dev);
+	if (!drv_info->drm_dev)
+		return -ENOMEM;
+
+	ret = drm_dev_register(drv_info->drm_dev, 0);
+	if (ret < 0)
+		goto fail;
+
+	drv_info->drm_dev->dev_private = drv_info;
+	platform_set_drvdata(pdev, drv_info);
+
+	DRM_INFO("Initialized %s %d.%d.%d %s on minor %d\n",
+			xen_drm_driver.name, xen_drm_driver.major,
+			xen_drm_driver.minor, xen_drm_driver.patchlevel,
+			xen_drm_driver.date, drv_info->drm_dev->primary->index);
+	return 0;
+
+fail:
+	drm_dev_unref(drv_info->drm_dev);
+	kfree(drv_info);
+	return ret;
+}
+
+static struct platform_driver zcopy_platform_drv_info = {
+	.probe		= xen_drm_drv_probe,
+	.remove		= xen_drm_drv_remove,
+	.driver		= {
+		.name	= XENDRM_ZCOPY_DRIVER_NAME,
+	},
+};
+
+struct platform_device_info zcopy_dev_info = {
+	.name = XENDRM_ZCOPY_DRIVER_NAME,
+	.id = 0,
+	.num_res = 0,
+	.dma_mask = DMA_BIT_MASK(32),
+};
+
+static struct platform_device *xen_pdev;
+
+static int __init xen_drv_init(void)
+{
+	int ret;
+
+	/* At the moment we only support case with XEN_PAGE_SIZE == PAGE_SIZE */
+	if (XEN_PAGE_SIZE != PAGE_SIZE) {
+		DRM_ERROR(XENDRM_ZCOPY_DRIVER_NAME ": different kernel and Xen page sizes are not supported: XEN_PAGE_SIZE (%lu) != PAGE_SIZE (%lu)\n",
+				XEN_PAGE_SIZE, PAGE_SIZE);
+		return -ENODEV;
+	}
+
+	if (!xen_domain())
+		return -ENODEV;
+
+	xen_pdev = platform_device_register_full(&zcopy_dev_info);
+	if (!xen_pdev) {
+		DRM_ERROR("Failed to register " XENDRM_ZCOPY_DRIVER_NAME " device\n");
+		return -ENODEV;
+	}
+
+	ret = platform_driver_register(&zcopy_platform_drv_info);
+	if (ret != 0) {
+		DRM_ERROR("Failed to register " XENDRM_ZCOPY_DRIVER_NAME " driver: %d\n", ret);
+		platform_device_unregister(xen_pdev);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void __exit xen_drv_fini(void)
+{
+	if (xen_pdev)
+		platform_device_unregister(xen_pdev);
+	platform_driver_unregister(&zcopy_platform_drv_info);
+}
+
+module_init(xen_drv_init);
+module_exit(xen_drv_fini);
+
+MODULE_DESCRIPTION("Xen zero-copy helper DRM device");
+MODULE_LICENSE("GPL");
diff --git a/drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c b/drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c
new file mode 100644
index 000000000000..2679233b9f84
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c
@@ -0,0 +1,154 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+
+/*
+ *  Xen zero-copy helper DRM device
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
+ */
+
+#include <drm/drmP.h>
+
+#if defined(CONFIG_DRM_XEN_ZCOPY_CMA)
+#include <asm/xen/hypercall.h>
+#include <xen/interface/memory.h>
+#include <xen/page.h>
+#else
+#include <xen/balloon.h>
+#endif
+
+#include "xen_drm_zcopy_balloon.h"
+
+#if defined(CONFIG_DRM_XEN_ZCOPY_CMA)
+int xen_drm_zcopy_ballooned_pages_alloc(struct device *dev,
+		struct xen_drm_zcopy_balloon *obj, int num_pages,
+		struct page **pages)
+{
+	xen_pfn_t *frame_list;
+	size_t size;
+	int i, ret;
+	dma_addr_t dev_addr, cpu_addr;
+	void *vaddr = NULL;
+	struct xen_memory_reservation reservation = {
+		.address_bits = 0,
+		.extent_order = 0,
+		.domid        = DOMID_SELF
+	};
+
+	size = num_pages * PAGE_SIZE;
+	DRM_DEBUG("Ballooning out %d pages, size %zu\n", num_pages, size);
+	frame_list = kcalloc(num_pages, sizeof(*frame_list), GFP_KERNEL);
+	if (!frame_list)
+		return -ENOMEM;
+
+	vaddr = dma_alloc_wc(dev, size, &dev_addr, GFP_KERNEL | __GFP_NOWARN);
+	if (!vaddr) {
+		DRM_ERROR("Failed to allocate DMA buffer with size %zu\n",
+				size);
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	cpu_addr = dev_addr;
+	for (i = 0; i < num_pages; i++) {
+		pages[i] = pfn_to_page(__phys_to_pfn(cpu_addr));
+		/*
+		 * XENMEM_populate_physmap requires a PFN based on Xen
+		 * granularity.
+		 */
+		frame_list[i] = page_to_xen_pfn(pages[i]);
+		cpu_addr += PAGE_SIZE;
+	}
+
+	set_xen_guest_handle(reservation.extent_start, frame_list);
+	reservation.nr_extents = num_pages;
+	/* rc will hold number of pages processed */
+	ret = HYPERVISOR_memory_op(XENMEM_decrease_reservation, &reservation);
+	if (ret <= 0) {
+		DRM_ERROR("Failed to balloon out %d pages (%d), retrying\n",
+				num_pages, ret);
+		WARN_ON(ret != num_pages);
+		ret = -EFAULT;
+		goto fail;
+	}
+
+	obj->vaddr = vaddr;
+	obj->dev_bus_addr = dev_addr;
+	kfree(frame_list);
+	return 0;
+
+fail:
+	if (vaddr)
+		dma_free_wc(dev, size, vaddr, dev_addr);
+
+	kfree(frame_list);
+	return ret;
+}
+
+void xen_drm_zcopy_ballooned_pages_free(struct device *dev,
+		struct xen_drm_zcopy_balloon *obj, int num_pages,
+		struct page **pages)
+{
+	xen_pfn_t *frame_list;
+	int i, ret;
+	size_t size;
+	struct xen_memory_reservation reservation = {
+		.address_bits = 0,
+		.extent_order = 0,
+		.domid        = DOMID_SELF
+	};
+
+	if (!pages)
+		return;
+
+	if (!obj->vaddr)
+		return;
+
+	frame_list = kcalloc(num_pages, sizeof(*frame_list), GFP_KERNEL);
+	if (!frame_list) {
+		DRM_ERROR("Failed to balloon in %d pages\n", num_pages);
+		return;
+	}
+
+	DRM_DEBUG("Ballooning in %d pages\n", num_pages);
+	size = num_pages * PAGE_SIZE;
+	for (i = 0; i < num_pages; i++) {
+		/*
+		 * XENMEM_populate_physmap requires a PFN based on Xen
+		 * granularity.
+		 */
+		frame_list[i] = page_to_xen_pfn(pages[i]);
+	}
+
+	set_xen_guest_handle(reservation.extent_start, frame_list);
+	reservation.nr_extents = num_pages;
+	/* rc will hold number of pages processed */
+	ret = HYPERVISOR_memory_op(XENMEM_populate_physmap, &reservation);
+	if (ret <= 0) {
+		DRM_ERROR("Failed to balloon in %d pages\n", num_pages);
+		WARN_ON(ret != num_pages);
+	}
+
+	if (obj->vaddr)
+		dma_free_wc(dev, size, obj->vaddr, obj->dev_bus_addr);
+
+	obj->vaddr = NULL;
+	obj->dev_bus_addr = 0;
+	kfree(frame_list);
+}
+#else
+int xen_drm_zcopy_ballooned_pages_alloc(struct device *dev,
+		struct xen_drm_zcopy_balloon *obj, int num_pages,
+		struct page **pages)
+{
+	return alloc_xenballooned_pages(num_pages, pages);
+}
+
+void xen_drm_zcopy_ballooned_pages_free(struct device *dev,
+		struct xen_drm_zcopy_balloon *obj, int num_pages,
+		struct page **pages)
+{
+	free_xenballooned_pages(num_pages, pages);
+}
+#endif /* defined(CONFIG_DRM_XEN_ZCOPY_CMA) */
diff --git a/drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h b/drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h
new file mode 100644
index 000000000000..1151f17f9339
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+
+/*
+ *  Xen zero-copy helper DRM device
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
+ */
+
+#ifndef __XEN_DRM_ZCOPY_BALLOON_H_
+#define __XEN_DRM_ZCOPY_BALLOON_H_
+
+#include <linux/types.h>
+
+#ifndef GRANT_INVALID_REF
+/*
+ * Note on usage of grant reference 0 as invalid grant reference:
+ * grant reference 0 is valid, but never exposed to a PV driver,
+ * because of the fact it is already in use/reserved by the PV console.
+ */
+#define GRANT_INVALID_REF	0
+#endif
+
+struct xen_drm_zcopy_balloon {
+	void *vaddr;
+	dma_addr_t dev_bus_addr;
+};
+
+int xen_drm_zcopy_ballooned_pages_alloc(struct device *dev,
+		struct xen_drm_zcopy_balloon *obj, int num_pages,
+		struct page **pages);
+
+void xen_drm_zcopy_ballooned_pages_free(struct device *dev,
+		struct xen_drm_zcopy_balloon *obj, int num_pages,
+		struct page **pages);
+
+#endif /* __XEN_DRM_ZCOPY_BALLOON_H_ */
diff --git a/include/uapi/drm/xen_zcopy_drm.h b/include/uapi/drm/xen_zcopy_drm.h
new file mode 100644
index 000000000000..8767cfbf0350
--- /dev/null
+++ b/include/uapi/drm/xen_zcopy_drm.h
@@ -0,0 +1,129 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+
+/*
+ *  Xen zero-copy helper DRM device
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
+ */
+#ifndef __XEN_ZCOPY_DRM_H
+#define __XEN_ZCOPY_DRM_H
+
+#include "drm.h"
+
+#if defined(__cplusplus)
+extern "C" {
+#endif
+
+#define XENDRM_ZCOPY_DRIVER_NAME	"xen_drm_zcopy"
+
+/**
+ * DOC: DRM_XEN_ZCOPY_DUMB_FROM_REFS
+ *
+ * This will create a DRM dumb buffer from grant references provided
+ * by the frontend:
+ *
+ * - Frontend
+ *
+ *  - creates a dumb/display buffer and allocates memory.
+ *  - grants foreign access to the buffer pages
+ *  - passes granted references to the backend
+ *
+ * - Backend
+ *
+ *  - issues DRM_XEN_ZCOPY_DUMB_FROM_REFS ioctl to map
+ *    granted references and create a dumb buffer.
+ *  - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
+ *  - requests real HW driver to import the PRIME buffer with
+ *    DRM_IOCTL_PRIME_FD_TO_HANDLE
+ *  - uses handle returned by the real HW driver
+ *
+ *  At the end:
+ *
+ *   - closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
+ *   - closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
+ *   - closes file descriptor of the exported buffer
+ *   - may wait for the object to be actually freed via wait_handle
+ *     and DRM_XEN_ZCOPY_DUMB_WAIT_FREE
+ */
+#define DRM_XEN_ZCOPY_DUMB_FROM_REFS	0x00
+
+struct drm_xen_zcopy_dumb_from_refs {
+	uint32_t num_grefs;
+	/* user-space uses uint32_t instead of grant_ref_t for mapping */
+	uint32_t *grefs;
+	uint64_t otherend_id;
+	struct drm_mode_create_dumb dumb;
+	uint32_t wait_handle;
+};
+
+/**
+ * DOC: DRM_XEN_ZCOPY_DUMB_TO_REFS
+ *
+ * This will grant references to a dumb/display buffer's memory provided by the
+ * backend:
+ *
+ * - Frontend
+ *
+ *  - requests backend to allocate dumb/display buffer and grant references
+ *    to its pages
+ *
+ * - Backend
+ *
+ *  - requests real HW driver to create a dumb with DRM_IOCTL_MODE_CREATE_DUMB
+ *  - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
+ *  - requests zero-copy driver to import the PRIME buffer with
+ *    DRM_IOCTL_PRIME_FD_TO_HANDLE
+ *  - issues DRM_XEN_ZCOPY_DUMB_TO_REFS ioctl to grant references to the
+ *    buffer's memory.
+ *  - passes grant references to the frontend
+ *
+ *  At the end:
+ *
+ *   - closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
+ *   - closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
+ *   - closes file descriptor of the imported buffer
+ */
+#define DRM_XEN_ZCOPY_DUMB_TO_REFS	0x01
+
+struct drm_xen_zcopy_dumb_to_refs {
+	uint32_t num_grefs;
+	/* user-space uses uint32_t instead of grant_ref_t for mapping */
+	uint32_t *grefs;
+	uint64_t otherend_id;
+	uint32_t handle;
+};
+
+/**
+ * DOC: DRM_XEN_ZCOPY_DUMB_WAIT_FREE
+ *
+ * This will block until the dumb buffer with the provided wait handle is
+ * freed: this is needed for synchronization between frontend and backend
+ * when the frontend provides grant references of the buffer via the
+ * DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL, as those must be released before the
+ * backend replies with the XENDISPL_OP_DBUF_DESTROY response.
+ * wait_handle must be the same value returned by the
+ * DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL.
+ */
+#define DRM_XEN_ZCOPY_DUMB_WAIT_FREE	0x02
+
+struct drm_xen_zcopy_dumb_wait_free {
+	uint32_t wait_handle;
+	uint32_t wait_to_ms;
+};
+
+#define DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS DRM_IOWR(DRM_COMMAND_BASE + \
+	DRM_XEN_ZCOPY_DUMB_FROM_REFS, struct drm_xen_zcopy_dumb_from_refs)
+
+#define DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS DRM_IOWR(DRM_COMMAND_BASE + \
+	DRM_XEN_ZCOPY_DUMB_TO_REFS, struct drm_xen_zcopy_dumb_to_refs)
+
+#define DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE DRM_IOWR(DRM_COMMAND_BASE + \
+	DRM_XEN_ZCOPY_DUMB_WAIT_FREE, struct drm_xen_zcopy_dumb_wait_free)
+
+#if defined(__cplusplus)
+}
+#endif
+
+#endif /* __XEN_ZCOPY_DRM_H*/
-- 
2.16.2

^ permalink raw reply related	[flat|nested] 131+ messages in thread

* [PATCH 1/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
@ 2018-03-29 13:19   ` Oleksandr Andrushchenko
  0 siblings, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-03-29 13:19 UTC (permalink / raw)
  To: xen-devel, linux-kernel, dri-devel, airlied, daniel.vetter,
	seanpaul, gustavo, jgross, boris.ostrovsky, konrad.wilk
  Cc: andr2000, Oleksandr Andrushchenko

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Introduce Xen zero-copy helper DRM driver, add user-space API of the driver:
1. DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
This will create a DRM dumb buffer from grant references provided
by the frontend. The intended usage is:
  - Frontend
    - creates a dumb/display buffer and allocates memory
    - grants foreign access to the buffer pages
    - passes granted references to the backend
  - Backend
    - issues DRM_XEN_ZCOPY_DUMB_FROM_REFS ioctl to map
      granted references and create a dumb buffer
    - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
    - requests real HW driver/consumer to import the PRIME buffer with
      DRM_IOCTL_PRIME_FD_TO_HANDLE
    - uses handle returned by the real HW driver
  - at the end:
    o closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
    o closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
    o closes file descriptor of the exported buffer

2. DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
This will grant references to a dumb/display buffer's memory provided by the
backend. The intended usage is:
  - Frontend
    - requests backend to allocate dumb/display buffer and grant references
      to its pages
  - Backend
    - requests real HW driver to create a dumb with DRM_IOCTL_MODE_CREATE_DUMB
    - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
    - requests zero-copy driver to import the PRIME buffer with
      DRM_IOCTL_PRIME_FD_TO_HANDLE
    - issues DRM_XEN_ZCOPY_DUMB_TO_REFS ioctl to
      grant references to the buffer's memory.
    - passes grant references to the frontend
 - at the end:
    - closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
    - closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
    - closes file descriptor of the imported buffer

Implement GEM/IOCTL handling depending on driver mode of operation:
- if GEM is created from grant references, then prepare to create
  a dumb from mapped pages
- if GEM grant references are about to be provided for the
  imported PRIME buffer, then prepare for granting references
  and providing those to user-space

Implement handling of display buffers from backend to/from front
interaction point ov view:
- when importing a buffer from the frontend:
  - allocate/free xen ballooned pages via Xen balloon driver
    or by manually allocating a DMA buffer
  - if DMA buffer is used, then increase/decrease its pages
    reservation accordingly
  - map/unmap foreign pages to the ballooned pages
- when exporting a buffer to the frontend:
  - grant references for the pages of the imported PRIME buffer
  - pass the grants back to user-space, so those can be shared
    with the frontend

Add an option to allocate DMA buffers as backing storage while
importing a frontend's buffer into host's memory:
for those use-cases when exported PRIME buffer will be used by
a device expecting CMA buffers only, it is possible to map
frontend's pages onto contiguous buffer, e.g. allocated via
DMA API.

Implement synchronous buffer deletion: for buffers, created from front's
grant references, synchronization between backend and frontend is needed
on buffer deletion as front expects us to unmap these references after
XENDISPL_OP_DBUF_DESTROY response.
For that reason introduce DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL:
this will block until dumb buffer, with the wait handle provided,
be freed.

The rationale behind implementing own wait handle:
  - dumb buffer handle cannot be used as when the PRIME buffer
    gets exported there are at least 2 handles: one is for the
    backend and another one for the importing application,
    so when backend closes its handle and the other application still
    holds the buffer then there is no way for the backend to tell
    which buffer we want to wait for while calling xen_ioctl_wait_free
  - flink cannot be used as well as it is gone when DRM core
    calls .gem_free_object_unlocked

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
 Documentation/gpu/drivers.rst               |   1 +
 Documentation/gpu/xen-zcopy.rst             |  32 +
 drivers/gpu/drm/xen/Kconfig                 |  25 +
 drivers/gpu/drm/xen/Makefile                |   5 +
 drivers/gpu/drm/xen/xen_drm_zcopy.c         | 880 ++++++++++++++++++++++++++++
 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c | 154 +++++
 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h |  38 ++
 include/uapi/drm/xen_zcopy_drm.h            | 129 ++++
 8 files changed, 1264 insertions(+)
 create mode 100644 Documentation/gpu/xen-zcopy.rst
 create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy.c
 create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c
 create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h
 create mode 100644 include/uapi/drm/xen_zcopy_drm.h

diff --git a/Documentation/gpu/drivers.rst b/Documentation/gpu/drivers.rst
index d3ab6abae838..900ff1c3d3f1 100644
--- a/Documentation/gpu/drivers.rst
+++ b/Documentation/gpu/drivers.rst
@@ -13,6 +13,7 @@ GPU Driver Documentation
    vc4
    bridge/dw-hdmi
    xen-front
+   xen-zcopy
 
 .. only::  subproject and html
 
diff --git a/Documentation/gpu/xen-zcopy.rst b/Documentation/gpu/xen-zcopy.rst
new file mode 100644
index 000000000000..28d3942af2b8
--- /dev/null
+++ b/Documentation/gpu/xen-zcopy.rst
@@ -0,0 +1,32 @@
+===============================
+Xen zero-copy helper DRM driver
+===============================
+
+This helper driver allows implementing zero-copying use-cases
+when using Xen para-virtualized frontend display driver:
+
+ - a dumb buffer created on backend's side can be shared
+   with the Xen PV frontend driver, so it directly writes
+   into backend's domain memory (into the buffer exported from
+   DRM/KMS driver of a physical display device)
+ - a dumb buffer allocated by the frontend can be imported
+   into physical device DRM/KMS driver, thus allowing to
+   achieve no copying as well
+
+DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL
+==================================
+
+.. kernel-doc:: include/uapi/drm/xen_zcopy_drm.h
+   :doc: DRM_XEN_ZCOPY_DUMB_FROM_REFS
+
+DRM_XEN_ZCOPY_DUMB_TO_REFS IOCTL
+================================
+
+.. kernel-doc:: include/uapi/drm/xen_zcopy_drm.h
+   :doc: DRM_XEN_ZCOPY_DUMB_TO_REFS
+
+DRM_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL
+==================================
+
+.. kernel-doc:: include/uapi/drm/xen_zcopy_drm.h
+   :doc: DRM_XEN_ZCOPY_DUMB_WAIT_FREE
diff --git a/drivers/gpu/drm/xen/Kconfig b/drivers/gpu/drm/xen/Kconfig
index 4f4abc91f3b6..31eedb410829 100644
--- a/drivers/gpu/drm/xen/Kconfig
+++ b/drivers/gpu/drm/xen/Kconfig
@@ -5,6 +5,10 @@ config DRM_XEN
 	  Choose this option if you want to enable DRM support
 	  for Xen.
 
+choice
+	prompt "Xen DRM drivers selection"
+	depends on DRM_XEN
+
 config DRM_XEN_FRONTEND
 	tristate "Para-virtualized frontend driver for Xen guest OS"
 	depends on DRM_XEN
@@ -28,3 +32,24 @@ config DRM_XEN_FRONTEND_CMA
 	  contiguous buffers.
 	  Note: in this mode driver cannot use buffers allocated
 	  by the backend.
+
+config DRM_XEN_ZCOPY
+	tristate "Zero copy helper DRM driver for Xen"
+	depends on DRM_XEN
+	depends on DRM
+	select DRM_KMS_HELPER
+	help
+	  Choose this option if you want to enable a zero copy
+	  helper DRM driver for Xen. This is implemented via mapping
+	  of foreign display buffer pages into current domain and
+	  exporting a dumb via PRIME interface. This allows
+	  driver domains to use buffers of unpriveledged guests without
+	  additional memory copying.
+
+config DRM_XEN_ZCOPY_CMA
+	bool "Use CMA to allocate buffers"
+	depends on DRM_XEN_ZCOPY
+	help
+	  Use CMA to allocate display buffers.
+
+endchoice
diff --git a/drivers/gpu/drm/xen/Makefile b/drivers/gpu/drm/xen/Makefile
index 352730dc6c13..832daea761a9 100644
--- a/drivers/gpu/drm/xen/Makefile
+++ b/drivers/gpu/drm/xen/Makefile
@@ -14,3 +14,8 @@ else
 endif
 
 obj-$(CONFIG_DRM_XEN_FRONTEND) += drm_xen_front.o
+
+drm_xen_zcopy-objs := xen_drm_zcopy.o \
+		      xen_drm_zcopy_balloon.o
+
+obj-$(CONFIG_DRM_XEN_ZCOPY) += drm_xen_zcopy.o
diff --git a/drivers/gpu/drm/xen/xen_drm_zcopy.c b/drivers/gpu/drm/xen/xen_drm_zcopy.c
new file mode 100644
index 000000000000..c2fa4fcf1bf6
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_zcopy.c
@@ -0,0 +1,880 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+
+/*
+ *  Xen zero-copy helper DRM device
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
+ */
+
+#include <drm/drmP.h>
+#include <drm/drm_gem.h>
+
+#include <linux/dma-buf.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+
+#include <xen/grant_table.h>
+#include <asm/xen/page.h>
+
+#include <drm/xen_zcopy_drm.h>
+
+#include "xen_drm_zcopy_balloon.h"
+
+struct xen_gem_object {
+	struct drm_gem_object base;
+	uint32_t dumb_handle;
+
+	int otherend_id;
+
+	uint32_t num_pages;
+	grant_ref_t *grefs;
+	/* these are the pages from Xen balloon for allocated Xen GEM object */
+	struct page **pages;
+
+	struct xen_drm_zcopy_balloon balloon;
+
+	/* this will be set if we have imported a PRIME buffer */
+	struct sg_table *sgt;
+	/* map grant handles */
+	grant_handle_t *map_handles;
+	/*
+	 * these are used for synchronous object deletion, e.g.
+	 * when user-space wants to know that the grefs are unmapped
+	 */
+	struct kref refcount;
+	int wait_handle;
+};
+
+struct xen_wait_obj {
+	struct list_head list;
+	struct xen_gem_object *xen_obj;
+	struct completion completion;
+};
+
+struct xen_drv_info {
+	struct drm_device *drm_dev;
+
+	/*
+	 * For buffers created from front's grant references, synchronization
+	 * between backend and frontend is needed on buffer deletion as front
+	 * expects us to unmap these references after XENDISPL_OP_DBUF_DESTROY
+	 * response. This means that when calling DRM_XEN_ZCOPY_DUMB_WAIT_FREE
+	 * ioctl user-space has to provide some unique handle identifying
+	 * the buffer. For that reason we use IDR to allocate a unique value.
+	 * The rationale behind implementing the wait handle as an IDR:
+	 * - dumb buffer handle cannot be used as when the PRIME buffer
+	 *   gets exported there are at least 2 handles: one is for the
+	 *   backend and another one for the importing application,
+	 *   so when backend closes its handle and the other application still
+	 *   holds the buffer then there is no way for the backend to tell
+	 *   which buffer we want to wait for while calling xen_ioctl_wait_free
+	 * - flink cannot be used either, as it is gone when DRM core
+	 *   calls .gem_free_object_unlocked
+	 * - sync_file can be used, but it seems to be an overhead to use it
+	 *   only to get a unique "handle"
+	 */
+	struct list_head wait_obj_list;
+	struct idr idr;
+	spinlock_t idr_lock;
+	spinlock_t wait_list_lock;
+};
+
+static inline struct xen_gem_object *to_xen_gem_obj(
+		struct drm_gem_object *gem_obj)
+{
+	return container_of(gem_obj, struct xen_gem_object, base);
+}
+
+static struct xen_wait_obj *wait_obj_new(struct xen_drv_info *drv_info,
+		struct xen_gem_object *xen_obj)
+{
+	struct xen_wait_obj *wait_obj;
+
+	wait_obj = kzalloc(sizeof(*wait_obj), GFP_KERNEL);
+	if (!wait_obj)
+		return ERR_PTR(-ENOMEM);
+
+	init_completion(&wait_obj->completion);
+	wait_obj->xen_obj = xen_obj;
+
+	spin_lock(&drv_info->wait_list_lock);
+	list_add(&wait_obj->list, &drv_info->wait_obj_list);
+	spin_unlock(&drv_info->wait_list_lock);
+
+	return wait_obj;
+}
+
+static void wait_obj_free(struct xen_drv_info *drv_info,
+		struct xen_wait_obj *wait_obj)
+{
+	struct xen_wait_obj *cur_wait_obj, *q;
+
+	spin_lock(&drv_info->wait_list_lock);
+	list_for_each_entry_safe(cur_wait_obj, q,
+			&drv_info->wait_obj_list, list)
+		if (cur_wait_obj == wait_obj) {
+			list_del(&wait_obj->list);
+			kfree(wait_obj);
+			break;
+		}
+	spin_unlock(&drv_info->wait_list_lock);
+}
+
+static void wait_obj_check_pending(struct xen_drv_info *drv_info)
+{
+	/*
+	 * This is intended to be called from .lastclose when no pending
+	 * wait objects should be on the list.
+	 * Make sure we don't miss a bug if this is not the case.
+	 */
+	WARN(!list_empty(&drv_info->wait_obj_list),
+			"Removing with pending wait objects!\n");
+}
+
+static int wait_obj_wait(struct xen_wait_obj *wait_obj,
+		uint32_t wait_to_ms)
+{
+	if (wait_for_completion_timeout(&wait_obj->completion,
+			msecs_to_jiffies(wait_to_ms)) <= 0)
+		return -ETIMEDOUT;
+
+	return 0;
+}
+
+static void wait_obj_signal(struct xen_drv_info *drv_info,
+		struct xen_gem_object *xen_obj)
+{
+	struct xen_wait_obj *wait_obj, *q;
+
+	spin_lock(&drv_info->wait_list_lock);
+	list_for_each_entry_safe(wait_obj, q, &drv_info->wait_obj_list, list)
+		if (wait_obj->xen_obj == xen_obj) {
+			DRM_DEBUG("Found xen_obj in the wait list, wake\n");
+			complete_all(&wait_obj->completion);
+		}
+	spin_unlock(&drv_info->wait_list_lock);
+}
+
+static int wait_obj_handle_new(struct xen_drv_info *drv_info,
+		struct xen_gem_object *xen_obj)
+{
+	int ret;
+
+	idr_preload(GFP_KERNEL);
+	spin_lock(&drv_info->idr_lock);
+	ret = idr_alloc(&drv_info->idr, xen_obj, 1, 0, GFP_NOWAIT);
+	spin_unlock(&drv_info->idr_lock);
+	idr_preload_end();
+	return ret;
+}
+
+static void wait_obj_handle_free(struct xen_drv_info *drv_info,
+		struct xen_gem_object *xen_obj)
+{
+	spin_lock(&drv_info->idr_lock);
+	idr_remove(&drv_info->idr, xen_obj->wait_handle);
+	spin_unlock(&drv_info->idr_lock);
+}
+
+static struct xen_gem_object *get_obj_by_wait_handle(
+		struct xen_drv_info *drv_info, int wait_handle)
+{
+	struct xen_gem_object *xen_obj;
+
+	spin_lock(&drv_info->idr_lock);
+	/* check if xen_obj still exists */
+	xen_obj = idr_find(&drv_info->idr, wait_handle);
+	if (xen_obj)
+		kref_get(&xen_obj->refcount);
+	spin_unlock(&drv_info->idr_lock);
+	return xen_obj;
+}
+
+#define xen_page_to_vaddr(page) \
+	((phys_addr_t)pfn_to_kaddr(page_to_xen_pfn(page)))
+
+static int from_refs_unmap(struct device *dev,
+		struct xen_gem_object *xen_obj)
+{
+	struct gnttab_unmap_grant_ref *unmap_ops;
+	int i, ret;
+
+	if (!xen_obj->pages || !xen_obj->map_handles)
+		return 0;
+
+	unmap_ops = kcalloc(xen_obj->num_pages, sizeof(*unmap_ops), GFP_KERNEL);
+	if (!unmap_ops)
+		return -ENOMEM;
+
+	for (i = 0; i < xen_obj->num_pages; i++) {
+		phys_addr_t addr;
+
+		/*
+		 * When unmapping the grant entry for access by host CPUs:
+		 * if <host_addr> or <dev_bus_addr> is zero, that
+		 * field is ignored. If non-zero, they must refer to
+		 * a device/host mapping that is tracked by <handle>
+		 */
+		addr = xen_page_to_vaddr(xen_obj->pages[i]);
+		gnttab_set_unmap_op(&unmap_ops[i], addr,
+#if defined(CONFIG_X86)
+			GNTMAP_host_map | GNTMAP_device_map,
+#else
+			GNTMAP_host_map,
+#endif
+			xen_obj->map_handles[i]);
+		unmap_ops[i].dev_bus_addr = __pfn_to_phys(__pfn_to_mfn(
+				page_to_pfn(xen_obj->pages[i])));
+	}
+
+	ret = gnttab_unmap_refs(unmap_ops, NULL, xen_obj->pages,
+			xen_obj->num_pages);
+	/*
+	 * Even if we didn't unmap properly - continue to rescue whatever
+	 * resources we can.
+	 */
+	if (ret)
+		DRM_ERROR("Failed to unmap grant references, ret %d", ret);
+
+	for (i = 0; i < xen_obj->num_pages; i++) {
+		if (unlikely(unmap_ops[i].status != GNTST_okay))
+			DRM_ERROR("Failed to unmap page %d with ref %d: %d\n",
+					i, xen_obj->grefs[i],
+					unmap_ops[i].status);
+	}
+
+	xen_drm_zcopy_ballooned_pages_free(dev, &xen_obj->balloon,
+			xen_obj->num_pages, xen_obj->pages);
+
+	kfree(xen_obj->pages);
+	xen_obj->pages = NULL;
+	kfree(xen_obj->map_handles);
+	xen_obj->map_handles = NULL;
+	kfree(unmap_ops);
+	kfree(xen_obj->grefs);
+	xen_obj->grefs = NULL;
+	return ret;
+}
+
+static int from_refs_map(struct device *dev, struct xen_gem_object *xen_obj)
+{
+	struct gnttab_map_grant_ref *map_ops = NULL;
+	int ret, i;
+
+	if (xen_obj->pages) {
+		DRM_ERROR("Mapping already mapped pages?\n");
+		return -EINVAL;
+	}
+
+	xen_obj->pages = kcalloc(xen_obj->num_pages, sizeof(*xen_obj->pages),
+			GFP_KERNEL);
+	if (!xen_obj->pages) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	xen_obj->map_handles = kcalloc(xen_obj->num_pages,
+			sizeof(*xen_obj->map_handles), GFP_KERNEL);
+	if (!xen_obj->map_handles) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	map_ops = kcalloc(xen_obj->num_pages, sizeof(*map_ops), GFP_KERNEL);
+	if (!map_ops) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	ret = xen_drm_zcopy_ballooned_pages_alloc(dev, &xen_obj->balloon,
+			xen_obj->num_pages, xen_obj->pages);
+	if (ret < 0) {
+		DRM_ERROR("Cannot allocate %d ballooned pages: %d\n",
+				xen_obj->num_pages, ret);
+		goto fail;
+	}
+
+	for (i = 0; i < xen_obj->num_pages; i++) {
+		phys_addr_t addr;
+
+		addr = xen_page_to_vaddr(xen_obj->pages[i]);
+		gnttab_set_map_op(&map_ops[i], addr,
+#if defined(CONFIG_X86)
+			GNTMAP_host_map | GNTMAP_device_map,
+#else
+			GNTMAP_host_map,
+#endif
+			xen_obj->grefs[i], xen_obj->otherend_id);
+	}
+	ret = gnttab_map_refs(map_ops, NULL, xen_obj->pages,
+			xen_obj->num_pages);
+
+	/* save handles even if error, so we can unmap */
+	for (i = 0; i < xen_obj->num_pages; i++) {
+		xen_obj->map_handles[i] = map_ops[i].handle;
+		if (unlikely(map_ops[i].status != GNTST_okay))
+			DRM_ERROR("Failed to map page %d with ref %d: %d\n",
+				i, xen_obj->grefs[i], map_ops[i].status);
+	}
+
+	if (ret) {
+		DRM_ERROR("Failed to map grant references, ret %d", ret);
+		from_refs_unmap(dev, xen_obj);
+		goto fail;
+	}
+
+	kfree(map_ops);
+	return 0;
+
+fail:
+	kfree(xen_obj->pages);
+	xen_obj->pages = NULL;
+	kfree(xen_obj->map_handles);
+	xen_obj->map_handles = NULL;
+	kfree(map_ops);
+	return ret;
+
+}
+
+static void to_refs_end_foreign_access(struct xen_gem_object *xen_obj)
+{
+	int i;
+
+	if (xen_obj->grefs)
+		for (i = 0; i < xen_obj->num_pages; i++)
+			if (xen_obj->grefs[i] != GRANT_INVALID_REF)
+				gnttab_end_foreign_access(xen_obj->grefs[i],
+						0, 0UL);
+
+	kfree(xen_obj->grefs);
+	xen_obj->grefs = NULL;
+	xen_obj->sgt = NULL;
+}
+
+static int to_refs_grant_foreign_access(struct xen_gem_object *xen_obj)
+{
+	grant_ref_t priv_gref_head;
+	int ret, j, cur_ref, num_pages;
+	struct sg_page_iter sg_iter;
+
+	ret = gnttab_alloc_grant_references(xen_obj->num_pages,
+			&priv_gref_head);
+	if (ret < 0) {
+		DRM_ERROR("Cannot allocate grant references\n");
+		return ret;
+	}
+
+	j = 0;
+	num_pages = xen_obj->num_pages;
+	for_each_sg_page(xen_obj->sgt->sgl, &sg_iter, xen_obj->sgt->nents, 0) {
+		struct page *page;
+
+		page = sg_page_iter_page(&sg_iter);
+		cur_ref = gnttab_claim_grant_reference(&priv_gref_head);
+		if (cur_ref < 0)
+			return cur_ref;
+
+		gnttab_grant_foreign_access_ref(cur_ref,
+				xen_obj->otherend_id, xen_page_to_gfn(page), 0);
+		xen_obj->grefs[j++] = cur_ref;
+		num_pages--;
+	}
+
+	WARN_ON(num_pages != 0);
+
+	gnttab_free_grant_references(priv_gref_head);
+	return 0;
+}
+
+static int gem_create_with_handle(struct xen_gem_object *xen_obj,
+		struct drm_file *filp, struct drm_device *dev, int size)
+{
+	struct drm_gem_object *gem_obj;
+	int ret;
+
+	drm_gem_private_object_init(dev, &xen_obj->base, size);
+	gem_obj = &xen_obj->base;
+	ret = drm_gem_handle_create(filp, gem_obj, &xen_obj->dumb_handle);
+	/* drop reference from allocate - handle holds it now. */
+	drm_gem_object_put_unlocked(gem_obj);
+	return ret;
+}
+
+static int gem_create_obj(struct xen_gem_object *xen_obj,
+		struct drm_device *dev, struct drm_file *filp, int size)
+{
+	struct drm_gem_object *gem_obj;
+	int ret;
+
+	ret = gem_create_with_handle(xen_obj, filp, dev, size);
+	if (ret < 0)
+		goto fail;
+
+	gem_obj = drm_gem_object_lookup(filp, xen_obj->dumb_handle);
+	if (!gem_obj) {
+		DRM_ERROR("Lookup for handle %d failed\n",
+				xen_obj->dumb_handle);
+		ret = -EINVAL;
+		goto fail_destroy;
+	}
+
+	drm_gem_object_put_unlocked(gem_obj);
+	return 0;
+
+fail_destroy:
+	drm_gem_dumb_destroy(filp, dev, xen_obj->dumb_handle);
+fail:
+	DRM_ERROR("Failed to create dumb buffer: %d\n", ret);
+	xen_obj->dumb_handle = 0;
+	return ret;
+}
+
+static int gem_init_obj(struct xen_gem_object *xen_obj,
+		struct drm_device *dev, int size)
+{
+	struct drm_gem_object *gem_obj = &xen_obj->base;
+	int ret;
+
+	ret = drm_gem_object_init(dev, gem_obj, size);
+	if (ret < 0)
+		return ret;
+
+	ret = drm_gem_create_mmap_offset(gem_obj);
+	if (ret < 0) {
+		drm_gem_object_release(gem_obj);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void obj_release(struct kref *kref)
+{
+	struct xen_gem_object *xen_obj =
+			container_of(kref, struct xen_gem_object, refcount);
+	struct xen_drv_info *drv_info = xen_obj->base.dev->dev_private;
+
+	wait_obj_signal(drv_info, xen_obj);
+	kfree(xen_obj);
+}
+
+static void gem_free_object_unlocked(struct drm_gem_object *gem_obj)
+{
+	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
+	struct xen_drv_info *drv_info = gem_obj->dev->dev_private;
+
+	DRM_DEBUG("Freeing dumb with handle %d\n", xen_obj->dumb_handle);
+	if (xen_obj->grefs) {
+		if (xen_obj->sgt) {
+			drm_prime_gem_destroy(&xen_obj->base, xen_obj->sgt);
+			to_refs_end_foreign_access(xen_obj);
+		} else
+			from_refs_unmap(gem_obj->dev->dev, xen_obj);
+	}
+
+	drm_gem_object_release(gem_obj);
+
+	wait_obj_handle_free(drv_info, xen_obj);
+	kref_put(&xen_obj->refcount, obj_release);
+}
+
+static struct sg_table *gem_prime_get_sg_table(
+		struct drm_gem_object *gem_obj)
+{
+	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
+	struct sg_table *sgt = NULL;
+
+	if (unlikely(!xen_obj->pages))
+		return NULL;
+
+	sgt = drm_prime_pages_to_sg(xen_obj->pages, xen_obj->num_pages);
+
+	if (unlikely(!sgt))
+		DRM_ERROR("Failed to export sgt\n");
+	else
+		DRM_DEBUG("Exporting %scontiguous buffer nents %d\n",
+				sgt->nents == 1 ? "" : "non-", sgt->nents);
+	return sgt;
+}
+
+struct drm_gem_object *gem_prime_import_sg_table(struct drm_device *dev,
+		struct dma_buf_attachment *attach, struct sg_table *sgt)
+{
+	struct xen_gem_object *xen_obj;
+	int ret;
+
+	xen_obj = kzalloc(sizeof(*xen_obj), GFP_KERNEL);
+	if (!xen_obj)
+		return ERR_PTR(-ENOMEM);
+
+	ret = gem_init_obj(xen_obj, dev, attach->dmabuf->size);
+	if (ret < 0)
+		goto fail;
+
+	kref_init(&xen_obj->refcount);
+	xen_obj->sgt = sgt;
+	xen_obj->num_pages = DIV_ROUND_UP(attach->dmabuf->size, PAGE_SIZE);
+
+	DRM_DEBUG("Imported buffer of size %zu with nents %u\n",
+			attach->dmabuf->size, sgt->nents);
+	return &xen_obj->base;
+
+fail:
+	kfree(xen_obj);
+	return ERR_PTR(ret);
+}
+
+static int do_ioctl_from_refs(struct drm_device *dev,
+		struct drm_xen_zcopy_dumb_from_refs *req,
+		struct drm_file *filp)
+{
+	struct xen_drv_info *drv_info = dev->dev_private;
+	struct xen_gem_object *xen_obj;
+	int ret;
+
+	xen_obj = kzalloc(sizeof(*xen_obj), GFP_KERNEL);
+	if (!xen_obj)
+		return -ENOMEM;
+
+	kref_init(&xen_obj->refcount);
+	xen_obj->num_pages = req->num_grefs;
+	xen_obj->otherend_id = req->otherend_id;
+	xen_obj->grefs = kcalloc(xen_obj->num_pages,
+			sizeof(grant_ref_t), GFP_KERNEL);
+	if (!xen_obj->grefs) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	if (copy_from_user(xen_obj->grefs, req->grefs,
+			xen_obj->num_pages * sizeof(grant_ref_t))) {
+		ret = -EINVAL;
+		goto fail;
+	}
+
+	ret = from_refs_map(dev->dev, xen_obj);
+	if (ret < 0)
+		goto fail;
+
+	ret = gem_create_obj(xen_obj, dev, filp,
+			round_up(req->dumb.size, PAGE_SIZE));
+	if (ret < 0)
+		goto fail;
+
+	req->dumb.handle = xen_obj->dumb_handle;
+
+	/*
+	 * Get user-visible handle for this GEM object.
+	 * The wait object is not allocated at this point; if need be,
+	 * it will be allocated at the time of the
+	 * DRM_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL.
+	 */
+	ret = wait_obj_handle_new(drv_info, xen_obj);
+	if (ret < 0)
+		goto fail;
+
+	req->wait_handle = ret;
+	xen_obj->wait_handle = ret;
+	return 0;
+
+fail:
+	kfree(xen_obj->grefs);
+	xen_obj->grefs = NULL;
+	kfree(xen_obj);
+	return ret;
+}
+
+static int ioctl_from_refs(struct drm_device *dev,
+		void *data, struct drm_file *filp)
+{
+	struct drm_xen_zcopy_dumb_from_refs *req =
+			(struct drm_xen_zcopy_dumb_from_refs *)data;
+	struct drm_mode_create_dumb *args = &req->dumb;
+	uint32_t cpp, stride, size;
+
+	if (!req->num_grefs || !req->grefs)
+		return -EINVAL;
+
+	if (!args->width || !args->height || !args->bpp)
+		return -EINVAL;
+
+	cpp = DIV_ROUND_UP(args->bpp, 8);
+	if (!cpp || cpp > 0xffffffffU / args->width)
+		return -EINVAL;
+
+	stride = cpp * args->width;
+	if (args->height > 0xffffffffU / stride)
+		return -EINVAL;
+
+	size = args->height * stride;
+	if (PAGE_ALIGN(size) == 0)
+		return -EINVAL;
+
+	args->pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
+	args->size = args->pitch * args->height;
+	args->handle = 0;
+	if (req->num_grefs < DIV_ROUND_UP(args->size, PAGE_SIZE)) {
+		DRM_ERROR("Provided %d pages, need %d\n", req->num_grefs,
+				(int)DIV_ROUND_UP(args->size, PAGE_SIZE));
+		return -EINVAL;
+	}
+
+	return do_ioctl_from_refs(dev, req, filp);
+}
+
+static int ioctl_to_refs(struct drm_device *dev,
+		void *data, struct drm_file *filp)
+{
+	struct xen_gem_object *xen_obj;
+	struct drm_gem_object *gem_obj;
+	struct drm_xen_zcopy_dumb_to_refs *req =
+			(struct drm_xen_zcopy_dumb_to_refs *)data;
+	int ret;
+
+	if (!req->num_grefs || !req->grefs)
+		return -EINVAL;
+
+	gem_obj = drm_gem_object_lookup(filp, req->handle);
+	if (!gem_obj) {
+		DRM_ERROR("Lookup for handle %d failed\n", req->handle);
+		return -EINVAL;
+	}
+
+	drm_gem_object_put_unlocked(gem_obj);
+	xen_obj = to_xen_gem_obj(gem_obj);
+
+	if (xen_obj->num_pages != req->num_grefs) {
+		DRM_ERROR("Provided %d pages, need %d\n", req->num_grefs,
+				xen_obj->num_pages);
+		return -EINVAL;
+	}
+
+	xen_obj->otherend_id = req->otherend_id;
+	xen_obj->grefs = kcalloc(xen_obj->num_pages,
+			sizeof(grant_ref_t), GFP_KERNEL);
+	if (!xen_obj->grefs) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	ret = to_refs_grant_foreign_access(xen_obj);
+	if (ret < 0)
+		goto fail;
+
+	if (copy_to_user(req->grefs, xen_obj->grefs,
+			xen_obj->num_pages * sizeof(grant_ref_t))) {
+		ret = -EINVAL;
+		goto fail;
+	}
+
+	return 0;
+
+fail:
+	to_refs_end_foreign_access(xen_obj);
+	return ret;
+}
+
+static int ioctl_wait_free(struct drm_device *dev,
+		void *data, struct drm_file *file_priv)
+{
+	struct drm_xen_zcopy_dumb_wait_free *req =
+			(struct drm_xen_zcopy_dumb_wait_free *)data;
+	struct xen_drv_info *drv_info = dev->dev_private;
+	struct xen_gem_object *xen_obj;
+	struct xen_wait_obj *wait_obj;
+	int wait_handle, ret;
+
+	wait_handle = req->wait_handle;
+	/*
+	 * Try to find the wait handle: if it is not found, then either
+	 * the handle has already been freed or it is invalid.
+	 */
+	xen_obj = get_obj_by_wait_handle(drv_info, wait_handle);
+	if (!xen_obj)
+		return -ENOENT;
+
+	/*
+	 * xen_obj still exists and is reference count locked by us now, so
+	 * prepare to wait: allocate wait object and add it to the wait list,
+	 * so we can find it on release
+	 */
+	wait_obj = wait_obj_new(drv_info, xen_obj);
+	/* put our reference and wait for xen_obj release to fire */
+	kref_put(&xen_obj->refcount, obj_release);
+	ret = PTR_ERR_OR_ZERO(wait_obj);
+	if (ret < 0) {
+		DRM_ERROR("Failed to setup wait object, ret %d\n", ret);
+		return ret;
+	}
+
+	ret = wait_obj_wait(wait_obj, req->wait_to_ms);
+	wait_obj_free(drv_info, wait_obj);
+	return ret;
+}
+
+static void lastclose(struct drm_device *dev)
+{
+	struct xen_drv_info *drv_info = dev->dev_private;
+
+	wait_obj_check_pending(drv_info);
+}
+
+static const struct drm_ioctl_desc xen_drm_ioctls[] = {
+	DRM_IOCTL_DEF_DRV(XEN_ZCOPY_DUMB_FROM_REFS,
+		ioctl_from_refs,
+		DRM_AUTH | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF_DRV(XEN_ZCOPY_DUMB_TO_REFS,
+		ioctl_to_refs,
+		DRM_AUTH | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF_DRV(XEN_ZCOPY_DUMB_WAIT_FREE,
+		ioctl_wait_free,
+		DRM_AUTH | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+};
+
+static const struct file_operations xen_drm_fops = {
+	.owner          = THIS_MODULE,
+	.open           = drm_open,
+	.release        = drm_release,
+	.unlocked_ioctl = drm_ioctl,
+};
+
+static struct drm_driver xen_drm_driver = {
+	.driver_features           = DRIVER_GEM | DRIVER_PRIME,
+	.lastclose                 = lastclose,
+	.prime_handle_to_fd        = drm_gem_prime_handle_to_fd,
+	.gem_prime_export          = drm_gem_prime_export,
+	.gem_prime_get_sg_table    = gem_prime_get_sg_table,
+	.prime_fd_to_handle        = drm_gem_prime_fd_to_handle,
+	.gem_prime_import          = drm_gem_prime_import,
+	.gem_prime_import_sg_table = gem_prime_import_sg_table,
+	.gem_free_object_unlocked  = gem_free_object_unlocked,
+	.fops                      = &xen_drm_fops,
+	.ioctls                    = xen_drm_ioctls,
+	.num_ioctls                = ARRAY_SIZE(xen_drm_ioctls),
+	.name                      = XENDRM_ZCOPY_DRIVER_NAME,
+	.desc                      = "Xen PV DRM zero copy",
+	.date                      = "20180221",
+	.major                     = 1,
+	.minor                     = 0,
+};
+
+static int xen_drm_drv_remove(struct platform_device *pdev)
+{
+	struct xen_drv_info *drv_info = platform_get_drvdata(pdev);
+
+	if (drv_info && drv_info->drm_dev) {
+		drm_dev_unregister(drv_info->drm_dev);
+		drm_dev_unref(drv_info->drm_dev);
+		idr_destroy(&drv_info->idr);
+	}
+	return 0;
+}
+
+static int xen_drm_drv_probe(struct platform_device *pdev)
+{
+	struct xen_drv_info *drv_info;
+	int ret;
+
+	DRM_INFO("Creating %s\n", xen_drm_driver.desc);
+	drv_info = kzalloc(sizeof(*drv_info), GFP_KERNEL);
+	if (!drv_info)
+		return -ENOMEM;
+
+	idr_init(&drv_info->idr);
+	spin_lock_init(&drv_info->idr_lock);
+	spin_lock_init(&drv_info->wait_list_lock);
+	INIT_LIST_HEAD(&drv_info->wait_obj_list);
+
+	/*
+	 * The device is not spawned from a device tree, so arch_setup_dma_ops
+	 * is not called, thus leaving the device with dummy DMA ops.
+	 * This makes the device return error on PRIME buffer import, which
+	 * is not correct: to fix this call of_dma_configure() with a NULL
+	 * node to set default DMA ops.
+	 */
+	of_dma_configure(&pdev->dev, NULL);
+
+	drv_info->drm_dev = drm_dev_alloc(&xen_drm_driver, &pdev->dev);
+	if (IS_ERR(drv_info->drm_dev))
+		return PTR_ERR(drv_info->drm_dev);
+
+	ret = drm_dev_register(drv_info->drm_dev, 0);
+	if (ret < 0)
+		goto fail;
+
+	drv_info->drm_dev->dev_private = drv_info;
+	platform_set_drvdata(pdev, drv_info);
+
+	DRM_INFO("Initialized %s %d.%d.%d %s on minor %d\n",
+			xen_drm_driver.name, xen_drm_driver.major,
+			xen_drm_driver.minor, xen_drm_driver.patchlevel,
+			xen_drm_driver.date, drv_info->drm_dev->primary->index);
+	return 0;
+
+fail:
+	drm_dev_unref(drv_info->drm_dev);
+	kfree(drv_info);
+	return ret;
+}
+
+static struct platform_driver zcopy_platform_drv_info = {
+	.probe		= xen_drm_drv_probe,
+	.remove		= xen_drm_drv_remove,
+	.driver		= {
+		.name	= XENDRM_ZCOPY_DRIVER_NAME,
+	},
+};
+
+struct platform_device_info zcopy_dev_info = {
+	.name = XENDRM_ZCOPY_DRIVER_NAME,
+	.id = 0,
+	.num_res = 0,
+	.dma_mask = DMA_BIT_MASK(32),
+};
+
+static struct platform_device *xen_pdev;
+
+static int __init xen_drv_init(void)
+{
+	int ret;
+
+	/* At the moment we only support case with XEN_PAGE_SIZE == PAGE_SIZE */
+	if (XEN_PAGE_SIZE != PAGE_SIZE) {
+		DRM_ERROR(XENDRM_ZCOPY_DRIVER_NAME ": different kernel and Xen page sizes are not supported: XEN_PAGE_SIZE (%lu) != PAGE_SIZE (%lu)\n",
+				XEN_PAGE_SIZE, PAGE_SIZE);
+		return -ENODEV;
+	}
+
+	if (!xen_domain())
+		return -ENODEV;
+
+	xen_pdev = platform_device_register_full(&zcopy_dev_info);
+	if (!xen_pdev) {
+		DRM_ERROR("Failed to register " XENDRM_ZCOPY_DRIVER_NAME " device\n");
+		return -ENODEV;
+	}
+
+	ret = platform_driver_register(&zcopy_platform_drv_info);
+	if (ret != 0) {
+		DRM_ERROR("Failed to register " XENDRM_ZCOPY_DRIVER_NAME " driver: %d\n", ret);
+		platform_device_unregister(xen_pdev);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void __exit xen_drv_fini(void)
+{
+	if (xen_pdev)
+		platform_device_unregister(xen_pdev);
+	platform_driver_unregister(&zcopy_platform_drv_info);
+}
+
+module_init(xen_drv_init);
+module_exit(xen_drv_fini);
+
+MODULE_DESCRIPTION("Xen zero-copy helper DRM device");
+MODULE_LICENSE("GPL");
diff --git a/drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c b/drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c
new file mode 100644
index 000000000000..2679233b9f84
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c
@@ -0,0 +1,154 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+
+/*
+ *  Xen zero-copy helper DRM device
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
+ */
+
+#include <drm/drmP.h>
+
+#if defined(CONFIG_DRM_XEN_ZCOPY_CMA)
+#include <asm/xen/hypercall.h>
+#include <xen/interface/memory.h>
+#include <xen/page.h>
+#else
+#include <xen/balloon.h>
+#endif
+
+#include "xen_drm_zcopy_balloon.h"
+
+#if defined(CONFIG_DRM_XEN_ZCOPY_CMA)
+int xen_drm_zcopy_ballooned_pages_alloc(struct device *dev,
+		struct xen_drm_zcopy_balloon *obj, int num_pages,
+		struct page **pages)
+{
+	xen_pfn_t *frame_list;
+	size_t size;
+	int i, ret;
+	dma_addr_t dev_addr, cpu_addr;
+	void *vaddr = NULL;
+	struct xen_memory_reservation reservation = {
+		.address_bits = 0,
+		.extent_order = 0,
+		.domid        = DOMID_SELF
+	};
+
+	size = num_pages * PAGE_SIZE;
+	DRM_DEBUG("Ballooning out %d pages, size %zu\n", num_pages, size);
+	frame_list = kcalloc(num_pages, sizeof(*frame_list), GFP_KERNEL);
+	if (!frame_list)
+		return -ENOMEM;
+
+	vaddr = dma_alloc_wc(dev, size, &dev_addr, GFP_KERNEL | __GFP_NOWARN);
+	if (!vaddr) {
+		DRM_ERROR("Failed to allocate DMA buffer with size %zu\n",
+				size);
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	cpu_addr = dev_addr;
+	for (i = 0; i < num_pages; i++) {
+		pages[i] = pfn_to_page(__phys_to_pfn(cpu_addr));
+		/*
+		 * XENMEM_populate_physmap requires a PFN based on Xen
+		 * granularity.
+		 */
+		frame_list[i] = page_to_xen_pfn(pages[i]);
+		cpu_addr += PAGE_SIZE;
+	}
+
+	set_xen_guest_handle(reservation.extent_start, frame_list);
+	reservation.nr_extents = num_pages;
+	/* ret will hold the number of pages processed */
+	ret = HYPERVISOR_memory_op(XENMEM_decrease_reservation, &reservation);
+	if (ret <= 0) {
+		DRM_ERROR("Failed to balloon out %d pages (%d)\n",
+				num_pages, ret);
+		WARN_ON(ret != num_pages);
+		ret = -EFAULT;
+		goto fail;
+	}
+
+	obj->vaddr = vaddr;
+	obj->dev_bus_addr = dev_addr;
+	kfree(frame_list);
+	return 0;
+
+fail:
+	if (vaddr)
+		dma_free_wc(dev, size, vaddr, dev_addr);
+
+	kfree(frame_list);
+	return ret;
+}
+
+void xen_drm_zcopy_ballooned_pages_free(struct device *dev,
+		struct xen_drm_zcopy_balloon *obj, int num_pages,
+		struct page **pages)
+{
+	xen_pfn_t *frame_list;
+	int i, ret;
+	size_t size;
+	struct xen_memory_reservation reservation = {
+		.address_bits = 0,
+		.extent_order = 0,
+		.domid        = DOMID_SELF
+	};
+
+	if (!pages)
+		return;
+
+	if (!obj->vaddr)
+		return;
+
+	frame_list = kcalloc(num_pages, sizeof(*frame_list), GFP_KERNEL);
+	if (!frame_list) {
+		DRM_ERROR("Failed to balloon in %d pages\n", num_pages);
+		return;
+	}
+
+	DRM_DEBUG("Ballooning in %d pages\n", num_pages);
+	size = num_pages * PAGE_SIZE;
+	for (i = 0; i < num_pages; i++) {
+		/*
+		 * XENMEM_populate_physmap requires a PFN based on Xen
+		 * granularity.
+		 */
+		frame_list[i] = page_to_xen_pfn(pages[i]);
+	}
+
+	set_xen_guest_handle(reservation.extent_start, frame_list);
+	reservation.nr_extents = num_pages;
+	/* ret will hold the number of pages processed */
+	ret = HYPERVISOR_memory_op(XENMEM_populate_physmap, &reservation);
+	if (ret <= 0) {
+		DRM_ERROR("Failed to balloon in %d pages\n", num_pages);
+		WARN_ON(ret != num_pages);
+	}
+
+	if (obj->vaddr)
+		dma_free_wc(dev, size, obj->vaddr, obj->dev_bus_addr);
+
+	obj->vaddr = NULL;
+	obj->dev_bus_addr = 0;
+	kfree(frame_list);
+}
+#else
+int xen_drm_zcopy_ballooned_pages_alloc(struct device *dev,
+		struct xen_drm_zcopy_balloon *obj, int num_pages,
+		struct page **pages)
+{
+	return alloc_xenballooned_pages(num_pages, pages);
+}
+
+void xen_drm_zcopy_ballooned_pages_free(struct device *dev,
+		struct xen_drm_zcopy_balloon *obj, int num_pages,
+		struct page **pages)
+{
+	free_xenballooned_pages(num_pages, pages);
+}
+#endif /* defined(CONFIG_DRM_XEN_ZCOPY_CMA) */
diff --git a/drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h b/drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h
new file mode 100644
index 000000000000..1151f17f9339
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+
+/*
+ *  Xen zero-copy helper DRM device
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
+ */
+
+#ifndef __XEN_DRM_ZCOPY_BALLOON_H_
+#define __XEN_DRM_ZCOPY_BALLOON_H_
+
+#include <linux/types.h>
+
+#ifndef GRANT_INVALID_REF
+/*
+ * Note on usage of grant reference 0 as invalid grant reference:
+ * grant reference 0 is valid, but never exposed to a PV driver,
+ * because it is already in use/reserved by the PV console.
+ */
+#define GRANT_INVALID_REF	0
+#endif
+
+struct xen_drm_zcopy_balloon {
+	void *vaddr;
+	dma_addr_t dev_bus_addr;
+};
+
+int xen_drm_zcopy_ballooned_pages_alloc(struct device *dev,
+		struct xen_drm_zcopy_balloon *obj, int num_pages,
+		struct page **pages);
+
+void xen_drm_zcopy_ballooned_pages_free(struct device *dev,
+		struct xen_drm_zcopy_balloon *obj, int num_pages,
+		struct page **pages);
+
+#endif /* __XEN_DRM_ZCOPY_BALLOON_H_ */
diff --git a/include/uapi/drm/xen_zcopy_drm.h b/include/uapi/drm/xen_zcopy_drm.h
new file mode 100644
index 000000000000..8767cfbf0350
--- /dev/null
+++ b/include/uapi/drm/xen_zcopy_drm.h
@@ -0,0 +1,129 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+
+/*
+ *  Xen zero-copy helper DRM device
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
+ */
+#ifndef __XEN_ZCOPY_DRM_H
+#define __XEN_ZCOPY_DRM_H
+
+#include "drm.h"
+
+#if defined(__cplusplus)
+extern "C" {
+#endif
+
+#define XENDRM_ZCOPY_DRIVER_NAME	"xen_drm_zcopy"
+
+/**
+ * DOC: DRM_XEN_ZCOPY_DUMB_FROM_REFS
+ *
+ * This will create a DRM dumb buffer from grant references provided
+ * by the frontend:
+ *
+ * - Frontend
+ *
+ *  - creates a dumb/display buffer and allocates memory.
+ *  - grants foreign access to the buffer pages
+ *  - passes granted references to the backend
+ *
+ * - Backend
+ *
+ *  - issues DRM_XEN_ZCOPY_DUMB_FROM_REFS ioctl to map
+ *    granted references and create a dumb buffer.
+ *  - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
+ *  - requests real HW driver to import the PRIME buffer with
+ *    DRM_IOCTL_PRIME_FD_TO_HANDLE
+ *  - uses handle returned by the real HW driver
+ *
+ *  At the end:
+ *
+ *   - closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
+ *   - closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
+ *   - closes file descriptor of the exported buffer
+ *   - may wait for the object to be actually freed via wait_handle
+ *     and DRM_XEN_ZCOPY_DUMB_WAIT_FREE
+ */
+#define DRM_XEN_ZCOPY_DUMB_FROM_REFS	0x00
+
+struct drm_xen_zcopy_dumb_from_refs {
+	uint32_t num_grefs;
+	/* user-space uses uint32_t instead of grant_ref_t for mapping */
+	uint32_t *grefs;
+	uint64_t otherend_id;
+	struct drm_mode_create_dumb dumb;
+	uint32_t wait_handle;
+};
+
+/**
+ * DOC: DRM_XEN_ZCOPY_DUMB_TO_REFS
+ *
+ * This will grant references to a dumb/display buffer's memory provided by the
+ * backend:
+ *
+ * - Frontend
+ *
+ *  - requests backend to allocate dumb/display buffer and grant references
+ *    to its pages
+ *
+ * - Backend
+ *
+ *  - requests real HW driver to create a dumb with DRM_IOCTL_MODE_CREATE_DUMB
+ *  - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
+ *  - requests zero-copy driver to import the PRIME buffer with
+ *    DRM_IOCTL_PRIME_FD_TO_HANDLE
+ *  - issues DRM_XEN_ZCOPY_DUMB_TO_REFS ioctl to grant references to the
+ *    buffer's memory.
+ *  - passes grant references to the frontend
+ *
+ *  At the end:
+ *
+ *   - closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
+ *   - closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
+ *   - closes file descriptor of the imported buffer
+ */
+#define DRM_XEN_ZCOPY_DUMB_TO_REFS	0x01
+
+struct drm_xen_zcopy_dumb_to_refs {
+	uint32_t num_grefs;
+	/* user-space uses uint32_t instead of grant_ref_t for mapping */
+	uint32_t *grefs;
+	uint64_t otherend_id;
+	uint32_t handle;
+};
+
+/**
+ * DOC: DRM_XEN_ZCOPY_DUMB_WAIT_FREE
+ *
+ * This will block until the dumb buffer with the provided wait handle is
+ * freed: this is needed for synchronization between frontend and backend in
+ * case the frontend provides grant references of the buffer via the
+ * DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL, as those must be released before the
+ * backend replies with the XENDISPL_OP_DBUF_DESTROY response.
+ * wait_handle must be the same value returned while calling
+ * DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL.
+ */
+#define DRM_XEN_ZCOPY_DUMB_WAIT_FREE	0x02
+
+struct drm_xen_zcopy_dumb_wait_free {
+	uint32_t wait_handle;
+	uint32_t wait_to_ms;
+};
+
+#define DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS DRM_IOWR(DRM_COMMAND_BASE + \
+	DRM_XEN_ZCOPY_DUMB_FROM_REFS, struct drm_xen_zcopy_dumb_from_refs)
+
+#define DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS DRM_IOWR(DRM_COMMAND_BASE + \
+	DRM_XEN_ZCOPY_DUMB_TO_REFS, struct drm_xen_zcopy_dumb_to_refs)
+
+#define DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE DRM_IOWR(DRM_COMMAND_BASE + \
+	DRM_XEN_ZCOPY_DUMB_WAIT_FREE, struct drm_xen_zcopy_dumb_wait_free)
+
+#if defined(__cplusplus)
+}
+#endif
+
+#endif /* __XEN_ZCOPY_DRM_H*/
-- 
2.16.2

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply related	[flat|nested] 131+ messages in thread

* [PATCH 1/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-03-29 13:19 ` Oleksandr Andrushchenko
  (?)
@ 2018-03-29 13:19 ` Oleksandr Andrushchenko
  -1 siblings, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-03-29 13:19 UTC (permalink / raw)
  To: xen-devel, linux-kernel, dri-devel, airlied, daniel.vetter,
	seanpaul, gustavo, jgross, boris.ostrovsky, konrad.wilk
  Cc: andr2000, Oleksandr Andrushchenko

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Introduce Xen zero-copy helper DRM driver, add its user-space API:
1. DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
This will create a DRM dumb buffer from grant references provided
by the frontend. The intended usage is:
  - Frontend
    - creates a dumb/display buffer and allocates memory
    - grants foreign access to the buffer pages
    - passes granted references to the backend
  - Backend
    - issues DRM_XEN_ZCOPY_DUMB_FROM_REFS ioctl to map
      granted references and create a dumb buffer
    - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
    - requests real HW driver/consumer to import the PRIME buffer with
      DRM_IOCTL_PRIME_FD_TO_HANDLE
    - uses handle returned by the real HW driver
  - at the end (see the user-space sketch below):
    - closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
    - closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
    - closes file descriptor of the exported buffer
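
A minimal user-space sketch of this flow (illustrative only, not part
of this patch: error handling is trimmed, the function name is made up
and the grefs array, otherend_id and the zero-copy device fd are
assumed to be obtained beforehand; libdrm's drmIoctl() and the UAPI
header added by this patch are used):

#include <fcntl.h>
#include <stdint.h>
#include <xf86drm.h>
#include <drm/xen_zcopy_drm.h>

/* Create a dumb buffer backed by the frontend's grant references and
 * export it as a PRIME fd, so the real HW driver can import it.
 */
int zcopy_export_front_buffer(int zcopy_fd, uint32_t *grefs,
			      uint32_t num_grefs, uint64_t otherend_id,
			      uint32_t width, uint32_t height, uint32_t bpp,
			      uint32_t *wait_handle)
{
	struct drm_xen_zcopy_dumb_from_refs from_refs = {
		.num_grefs = num_grefs, /* must cover the whole dumb size */
		.grefs = grefs,
		.otherend_id = otherend_id,
		.dumb = {
			.width = width,
			.height = height,
			.bpp = bpp,
		},
	};
	struct drm_prime_handle prime = { .flags = DRM_CLOEXEC };
	int ret;

	/* map the grant references and create a dumb buffer of them */
	ret = drmIoctl(zcopy_fd, DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS,
		       &from_refs);
	if (ret)
		return ret;

	/* remember the wait handle for DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE */
	*wait_handle = from_refs.wait_handle;

	/* convert the dumb handle to a PRIME fd, to be imported with
	 * DRM_IOCTL_PRIME_FD_TO_HANDLE by the real HW driver
	 */
	prime.handle = from_refs.dumb.handle;
	ret = drmIoctl(zcopy_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &prime);
	return ret ? ret : prime.fd;
}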

2. DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
This will grant references to a dumb/display buffer's memory provided by the
backend. The intended usage is:
  - Frontend
    - requests backend to allocate dumb/display buffer and grant references
      to its pages
  - Backend
    - requests real HW driver to create a dumb with DRM_IOCTL_MODE_CREATE_DUMB
    - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
    - requests zero-copy driver to import the PRIME buffer with
      DRM_IOCTL_PRIME_FD_TO_HANDLE
    - issues DRM_XEN_ZCOPY_DUMB_TO_REFS ioctl to
      grant references to the buffer's memory.
    - passes grant references to the frontend
  - at the end (see the sketch below):
    - closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
    - closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
    - closes file descriptor of the imported buffer
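
A user-space sketch for this direction (again illustrative only: the
function name is made up, error handling is trimmed and num_grefs is
expected to match the number of pages backing the HW driver's buffer):

#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <drm/xen_zcopy_drm.h>

/* Import a dumb buffer created by the real HW driver into the zero-copy
 * driver and grant the frontend access to its pages.
 */
int zcopy_grant_hw_buffer(int hw_fd, int zcopy_fd, uint32_t hw_handle,
			  uint32_t *grefs, uint32_t num_grefs,
			  uint64_t otherend_id)
{
	struct drm_prime_handle prime = {
		.handle = hw_handle,
		.flags = DRM_CLOEXEC,
	};
	struct drm_xen_zcopy_dumb_to_refs to_refs;
	int ret;

	/* export the HW driver's dumb buffer as a PRIME fd... */
	ret = drmIoctl(hw_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &prime);
	if (ret)
		return ret;

	/* ...and import it into the zero-copy driver: prime.handle now
	 * holds the zero-copy driver's handle of the buffer
	 */
	ret = drmIoctl(zcopy_fd, DRM_IOCTL_PRIME_FD_TO_HANDLE, &prime);
	if (ret)
		return ret;

	/* grant references to the imported buffer's pages; the grefs
	 * array is filled in by the driver and can then be passed on
	 * to the frontend (closing prime.fd is omitted here)
	 */
	memset(&to_refs, 0, sizeof(to_refs));
	to_refs.num_grefs = num_grefs;
	to_refs.grefs = grefs;
	to_refs.otherend_id = otherend_id;
	to_refs.handle = prime.handle;
	return drmIoctl(zcopy_fd, DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS, &to_refs);
}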

Implement GEM/IOCTL handling depending on driver mode of operation:
- if GEM is created from grant references, then prepare to create
  a dumb from mapped pages
- if GEM grant references are about to be provided for the
  imported PRIME buffer, then prepare for granting references
  and providing those to user-space

Implement handling of display buffers from the point of view of the
backend/frontend interaction:
- when importing a buffer from the frontend:
  - allocate/free Xen ballooned pages via the Xen balloon driver
    or by manually allocating a DMA buffer
  - if DMA buffer is used, then increase/decrease its pages
    reservation accordingly
  - map/unmap foreign pages to the ballooned pages
- when exporting a buffer to the frontend:
  - grant references for the pages of the imported PRIME buffer
  - pass the grants back to user-space, so those can be shared
    with the frontend

Add an option to allocate DMA buffers as backing storage while
importing a frontend's buffer into host's memory:
for those use-cases when the exported PRIME buffer will be used by
a device expecting CMA buffers only, it is possible to map the
frontend's pages onto a contiguous buffer, e.g. one allocated via
the DMA API.
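
For example, a backend kernel configuration could enable the CMA-based
mode with a fragment like below (illustrative; generic DRM support has
to be enabled as usual):

CONFIG_DRM_XEN=y
CONFIG_DRM_XEN_ZCOPY=m
CONFIG_DRM_XEN_ZCOPY_CMA=y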

Implement synchronous buffer deletion: for buffers created from the
front's grant references, synchronization between backend and frontend
is needed on buffer deletion, as the front expects us to unmap these
references after the XENDISPL_OP_DBUF_DESTROY response.
For that reason, introduce the DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL:
this will block until the dumb buffer with the provided wait handle
is freed (see the sketch below).
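
A user-space sketch of the intended use (illustrative only: the
function name is made up, the timeout value is arbitrary and error
handling is trimmed):

#include <stdint.h>
#include <xf86drm.h>
#include <drm/xen_zcopy_drm.h>

/* Close the zero-copy dumb buffer and block until its grant references
 * are actually unmapped, so the backend can safely reply with the
 * XENDISPL_OP_DBUF_DESTROY response to the frontend.
 */
int zcopy_close_and_wait(int zcopy_fd, uint32_t zcopy_handle,
			 uint32_t wait_handle)
{
	struct drm_gem_close gem_close = { .handle = zcopy_handle };
	struct drm_xen_zcopy_dumb_wait_free wait_free = {
		.wait_handle = wait_handle,
		.wait_to_ms = 3000, /* arbitrary timeout */
	};
	int ret;

	/* drop the local handle; the object may still be held elsewhere */
	ret = drmIoctl(zcopy_fd, DRM_IOCTL_GEM_CLOSE, &gem_close);
	if (ret)
		return ret;

	/* block until the object is freed or the timeout expires */
	return drmIoctl(zcopy_fd, DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE,
			&wait_free);
}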

The rationale behind implementing a dedicated wait handle:
  - dumb buffer handle cannot be used as when the PRIME buffer
    gets exported there are at least 2 handles: one is for the
    backend and another one for the importing application,
    so when backend closes its handle and the other application still
    holds the buffer then there is no way for the backend to tell
    which buffer we want to wait for while calling xen_ioctl_wait_free
  - flink cannot be used either, as it is gone when DRM core
    calls .gem_free_object_unlocked

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
 Documentation/gpu/drivers.rst               |   1 +
 Documentation/gpu/xen-zcopy.rst             |  32 +
 drivers/gpu/drm/xen/Kconfig                 |  25 +
 drivers/gpu/drm/xen/Makefile                |   5 +
 drivers/gpu/drm/xen/xen_drm_zcopy.c         | 880 ++++++++++++++++++++++++++++
 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c | 154 +++++
 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h |  38 ++
 include/uapi/drm/xen_zcopy_drm.h            | 129 ++++
 8 files changed, 1264 insertions(+)
 create mode 100644 Documentation/gpu/xen-zcopy.rst
 create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy.c
 create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c
 create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h
 create mode 100644 include/uapi/drm/xen_zcopy_drm.h

diff --git a/Documentation/gpu/drivers.rst b/Documentation/gpu/drivers.rst
index d3ab6abae838..900ff1c3d3f1 100644
--- a/Documentation/gpu/drivers.rst
+++ b/Documentation/gpu/drivers.rst
@@ -13,6 +13,7 @@ GPU Driver Documentation
    vc4
    bridge/dw-hdmi
    xen-front
+   xen-zcopy
 
 .. only::  subproject and html
 
diff --git a/Documentation/gpu/xen-zcopy.rst b/Documentation/gpu/xen-zcopy.rst
new file mode 100644
index 000000000000..28d3942af2b8
--- /dev/null
+++ b/Documentation/gpu/xen-zcopy.rst
@@ -0,0 +1,32 @@
+===============================
+Xen zero-copy helper DRM driver
+===============================
+
+This helper driver allows implementing zero-copying use-cases
+when using the Xen para-virtualized frontend display driver:
+
+ - a dumb buffer created on backend's side can be shared
+   with the Xen PV frontend driver, so it directly writes
+   into backend's domain memory (into the buffer exported from
+   DRM/KMS driver of a physical display device)
+ - a dumb buffer allocated by the frontend can be imported
+   into a physical device's DRM/KMS driver, thus avoiding
+   copying on that path as well
+
+DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL
+==================================
+
+.. kernel-doc:: include/uapi/drm/xen_zcopy_drm.h
+   :doc: DRM_XEN_ZCOPY_DUMB_FROM_REFS
+
+DRM_XEN_ZCOPY_DUMB_TO_REFS IOCTL
+================================
+
+.. kernel-doc:: include/uapi/drm/xen_zcopy_drm.h
+   :doc: DRM_XEN_ZCOPY_DUMB_TO_REFS
+
+DRM_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL
+==================================
+
+.. kernel-doc:: include/uapi/drm/xen_zcopy_drm.h
+   :doc: DRM_XEN_ZCOPY_DUMB_WAIT_FREE
diff --git a/drivers/gpu/drm/xen/Kconfig b/drivers/gpu/drm/xen/Kconfig
index 4f4abc91f3b6..31eedb410829 100644
--- a/drivers/gpu/drm/xen/Kconfig
+++ b/drivers/gpu/drm/xen/Kconfig
@@ -5,6 +5,10 @@ config DRM_XEN
 	  Choose this option if you want to enable DRM support
 	  for Xen.
 
+choice
+	prompt "Xen DRM drivers selection"
+	depends on DRM_XEN
+
 config DRM_XEN_FRONTEND
 	tristate "Para-virtualized frontend driver for Xen guest OS"
 	depends on DRM_XEN
@@ -28,3 +32,24 @@ config DRM_XEN_FRONTEND_CMA
 	  contiguous buffers.
 	  Note: in this mode driver cannot use buffers allocated
 	  by the backend.
+
+config DRM_XEN_ZCOPY
+	tristate "Zero copy helper DRM driver for Xen"
+	depends on DRM_XEN
+	depends on DRM
+	select DRM_KMS_HELPER
+	help
+	  Choose this option if you want to enable a zero copy
+	  helper DRM driver for Xen. This is implemented by mapping
+	  foreign display buffer pages into the current domain and
+	  exporting a dumb buffer via the PRIME interface. This allows
+	  driver domains to use buffers of unprivileged guests without
+	  additional memory copying.
+
+config DRM_XEN_ZCOPY_CMA
+	bool "Use CMA to allocate buffers"
+	depends on DRM_XEN_ZCOPY
+	help
+	  Use CMA to allocate display buffers.
+
+endchoice
diff --git a/drivers/gpu/drm/xen/Makefile b/drivers/gpu/drm/xen/Makefile
index 352730dc6c13..832daea761a9 100644
--- a/drivers/gpu/drm/xen/Makefile
+++ b/drivers/gpu/drm/xen/Makefile
@@ -14,3 +14,8 @@ else
 endif
 
 obj-$(CONFIG_DRM_XEN_FRONTEND) += drm_xen_front.o
+
+drm_xen_zcopy-objs := xen_drm_zcopy.o \
+		      xen_drm_zcopy_balloon.o
+
+obj-$(CONFIG_DRM_XEN_ZCOPY) += drm_xen_zcopy.o
diff --git a/drivers/gpu/drm/xen/xen_drm_zcopy.c b/drivers/gpu/drm/xen/xen_drm_zcopy.c
new file mode 100644
index 000000000000..c2fa4fcf1bf6
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_zcopy.c
@@ -0,0 +1,880 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+
+/*
+ *  Xen zero-copy helper DRM device
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
+ */
+
+#include <drm/drmP.h>
+#include <drm/drm_gem.h>
+
+#include <linux/dma-buf.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+
+#include <xen/grant_table.h>
+#include <asm/xen/page.h>
+
+#include <drm/xen_zcopy_drm.h>
+
+#include "xen_drm_zcopy_balloon.h"
+
+struct xen_gem_object {
+	struct drm_gem_object base;
+	uint32_t dumb_handle;
+
+	int otherend_id;
+
+	uint32_t num_pages;
+	grant_ref_t *grefs;
+	/* these are the pages from Xen balloon for allocated Xen GEM object */
+	struct page **pages;
+
+	struct xen_drm_zcopy_balloon balloon;
+
+	/* this will be set if we have imported a PRIME buffer */
+	struct sg_table *sgt;
+	/* map grant handles */
+	grant_handle_t *map_handles;
+	/*
+	 * these are used for synchronous object deletion, e.g.
+	 * when user-space wants to know that the grefs are unmapped
+	 */
+	struct kref refcount;
+	int wait_handle;
+};
+
+struct xen_wait_obj {
+	struct list_head list;
+	struct xen_gem_object *xen_obj;
+	struct completion completion;
+};
+
+struct xen_drv_info {
+	struct drm_device *drm_dev;
+
+	/*
+	 * For buffers created from front's grant references, synchronization
+	 * between backend and frontend is needed on buffer deletion as front
+	 * expects us to unmap these references after XENDISPL_OP_DBUF_DESTROY
+	 * response. This means that when calling DRM_XEN_ZCOPY_DUMB_WAIT_FREE
+	 * ioctl user-space has to provide some unique handle identifying
+	 * the buffer. For that reason we use IDR to allocate a unique value.
+	 * The rationale behind implementing the wait handle as an IDR:
+	 * - dumb buffer handle cannot be used as when the PRIME buffer
+	 *   gets exported there are at least 2 handles: one is for the
+	 *   backend and another one for the importing application,
+	 *   so when backend closes its handle and the other application still
+	 *   holds the buffer then there is no way for the backend to tell
+	 *   which buffer we want to wait for while calling xen_ioctl_wait_free
+	 * - flink cannot be used either, as it is gone when DRM core
+	 *   calls .gem_free_object_unlocked
+	 * - sync_file can be used, but it seems to be an overhead to use it
+	 *   only to get a unique "handle"
+	 */
+	struct list_head wait_obj_list;
+	struct idr idr;
+	spinlock_t idr_lock;
+	spinlock_t wait_list_lock;
+};
+
+static inline struct xen_gem_object *to_xen_gem_obj(
+		struct drm_gem_object *gem_obj)
+{
+	return container_of(gem_obj, struct xen_gem_object, base);
+}
+
+static struct xen_wait_obj *wait_obj_new(struct xen_drv_info *drv_info,
+		struct xen_gem_object *xen_obj)
+{
+	struct xen_wait_obj *wait_obj;
+
+	wait_obj = kzalloc(sizeof(*wait_obj), GFP_KERNEL);
+	if (!wait_obj)
+		return ERR_PTR(-ENOMEM);
+
+	init_completion(&wait_obj->completion);
+	wait_obj->xen_obj = xen_obj;
+
+	spin_lock(&drv_info->wait_list_lock);
+	list_add(&wait_obj->list, &drv_info->wait_obj_list);
+	spin_unlock(&drv_info->wait_list_lock);
+
+	return wait_obj;
+}
+
+static void wait_obj_free(struct xen_drv_info *drv_info,
+		struct xen_wait_obj *wait_obj)
+{
+	struct xen_wait_obj *cur_wait_obj, *q;
+
+	spin_lock(&drv_info->wait_list_lock);
+	list_for_each_entry_safe(cur_wait_obj, q,
+			&drv_info->wait_obj_list, list)
+		if (cur_wait_obj == wait_obj) {
+			list_del(&wait_obj->list);
+			kfree(wait_obj);
+			break;
+		}
+	spin_unlock(&drv_info->wait_list_lock);
+}
+
+static void wait_obj_check_pending(struct xen_drv_info *drv_info)
+{
+	/*
+	 * This is intended to be called from .lastclose when no pending
+	 * wait objects should be on the list.
+	 * Make sure we don't miss a bug if this is not the case.
+	 */
+	WARN(!list_empty(&drv_info->wait_obj_list),
+			"Removing with pending wait objects!\n");
+}
+
+static int wait_obj_wait(struct xen_wait_obj *wait_obj,
+		uint32_t wait_to_ms)
+{
+	if (wait_for_completion_timeout(&wait_obj->completion,
+			msecs_to_jiffies(wait_to_ms)) <= 0)
+		return -ETIMEDOUT;
+
+	return 0;
+}
+
+static void wait_obj_signal(struct xen_drv_info *drv_info,
+		struct xen_gem_object *xen_obj)
+{
+	struct xen_wait_obj *wait_obj, *q;
+
+	spin_lock(&drv_info->wait_list_lock);
+	list_for_each_entry_safe(wait_obj, q, &drv_info->wait_obj_list, list)
+		if (wait_obj->xen_obj == xen_obj) {
+			DRM_DEBUG("Found xen_obj in the wait list, wake\n");
+			complete_all(&wait_obj->completion);
+		}
+	spin_unlock(&drv_info->wait_list_lock);
+}
+
+static int wait_obj_handle_new(struct xen_drv_info *drv_info,
+		struct xen_gem_object *xen_obj)
+{
+	int ret;
+
+	idr_preload(GFP_KERNEL);
+	spin_lock(&drv_info->idr_lock);
+	ret = idr_alloc(&drv_info->idr, xen_obj, 1, 0, GFP_NOWAIT);
+	spin_unlock(&drv_info->idr_lock);
+	idr_preload_end();
+	return ret;
+}
+
+static void wait_obj_handle_free(struct xen_drv_info *drv_info,
+		struct xen_gem_object *xen_obj)
+{
+	spin_lock(&drv_info->idr_lock);
+	idr_remove(&drv_info->idr, xen_obj->wait_handle);
+	spin_unlock(&drv_info->idr_lock);
+}
+
+static struct xen_gem_object *get_obj_by_wait_handle(
+		struct xen_drv_info *drv_info, int wait_handle)
+{
+	struct xen_gem_object *xen_obj;
+
+	spin_lock(&drv_info->idr_lock);
+	/* check if xen_obj still exists */
+	xen_obj = idr_find(&drv_info->idr, wait_handle);
+	if (xen_obj)
+		kref_get(&xen_obj->refcount);
+	spin_unlock(&drv_info->idr_lock);
+	return xen_obj;
+}
+
+#define xen_page_to_vaddr(page) \
+	((phys_addr_t)pfn_to_kaddr(page_to_xen_pfn(page)))
+
+static int from_refs_unmap(struct device *dev,
+		struct xen_gem_object *xen_obj)
+{
+	struct gnttab_unmap_grant_ref *unmap_ops;
+	int i, ret;
+
+	if (!xen_obj->pages || !xen_obj->map_handles)
+		return 0;
+
+	unmap_ops = kcalloc(xen_obj->num_pages, sizeof(*unmap_ops), GFP_KERNEL);
+	if (!unmap_ops)
+		return -ENOMEM;
+
+	for (i = 0; i < xen_obj->num_pages; i++) {
+		phys_addr_t addr;
+
+		/*
+		 * When unmapping the grant entry for access by host CPUs:
+		 * if <host_addr> or <dev_bus_addr> is zero, that
+		 * field is ignored. If non-zero, they must refer to
+		 * a device/host mapping that is tracked by <handle>
+		 */
+		addr = xen_page_to_vaddr(xen_obj->pages[i]);
+		gnttab_set_unmap_op(&unmap_ops[i], addr,
+#if defined(CONFIG_X86)
+			GNTMAP_host_map | GNTMAP_device_map,
+#else
+			GNTMAP_host_map,
+#endif
+			xen_obj->map_handles[i]);
+		unmap_ops[i].dev_bus_addr = __pfn_to_phys(__pfn_to_mfn(
+				page_to_pfn(xen_obj->pages[i])));
+	}
+
+	ret = gnttab_unmap_refs(unmap_ops, NULL, xen_obj->pages,
+			xen_obj->num_pages);
+	/*
+	 * Even if we didn't unmap properly - continue to rescue whatever
+	 * resources we can.
+	 */
+	if (ret)
+		DRM_ERROR("Failed to unmap grant references, ret %d", ret);
+
+	for (i = 0; i < xen_obj->num_pages; i++) {
+		if (unlikely(unmap_ops[i].status != GNTST_okay))
+			DRM_ERROR("Failed to unmap page %d with ref %d: %d\n",
+					i, xen_obj->grefs[i],
+					unmap_ops[i].status);
+	}
+
+	xen_drm_zcopy_ballooned_pages_free(dev, &xen_obj->balloon,
+			xen_obj->num_pages, xen_obj->pages);
+
+	kfree(xen_obj->pages);
+	xen_obj->pages = NULL;
+	kfree(xen_obj->map_handles);
+	xen_obj->map_handles = NULL;
+	kfree(unmap_ops);
+	kfree(xen_obj->grefs);
+	xen_obj->grefs = NULL;
+	return ret;
+}
+
+static int from_refs_map(struct device *dev, struct xen_gem_object *xen_obj)
+{
+	struct gnttab_map_grant_ref *map_ops = NULL;
+	int ret, i;
+
+	if (xen_obj->pages) {
+		DRM_ERROR("Mapping already mapped pages?\n");
+		return -EINVAL;
+	}
+
+	xen_obj->pages = kcalloc(xen_obj->num_pages, sizeof(*xen_obj->pages),
+			GFP_KERNEL);
+	if (!xen_obj->pages) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	xen_obj->map_handles = kcalloc(xen_obj->num_pages,
+			sizeof(*xen_obj->map_handles), GFP_KERNEL);
+	if (!xen_obj->map_handles) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	map_ops = kcalloc(xen_obj->num_pages, sizeof(*map_ops), GFP_KERNEL);
+	if (!map_ops) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	ret = xen_drm_zcopy_ballooned_pages_alloc(dev, &xen_obj->balloon,
+			xen_obj->num_pages, xen_obj->pages);
+	if (ret < 0) {
+		DRM_ERROR("Cannot allocate %d ballooned pages: %d\n",
+				xen_obj->num_pages, ret);
+		goto fail;
+	}
+
+	for (i = 0; i < xen_obj->num_pages; i++) {
+		phys_addr_t addr;
+
+		addr = xen_page_to_vaddr(xen_obj->pages[i]);
+		gnttab_set_map_op(&map_ops[i], addr,
+#if defined(CONFIG_X86)
+			GNTMAP_host_map | GNTMAP_device_map,
+#else
+			GNTMAP_host_map,
+#endif
+			xen_obj->grefs[i], xen_obj->otherend_id);
+	}
+	ret = gnttab_map_refs(map_ops, NULL, xen_obj->pages,
+			xen_obj->num_pages);
+
+	/* save handles even if error, so we can unmap */
+	for (i = 0; i < xen_obj->num_pages; i++) {
+		xen_obj->map_handles[i] = map_ops[i].handle;
+		if (unlikely(map_ops[i].status != GNTST_okay))
+			DRM_ERROR("Failed to map page %d with ref %d: %d\n",
+				i, xen_obj->grefs[i], map_ops[i].status);
+	}
+
+	if (ret) {
+		DRM_ERROR("Failed to map grant references, ret %d", ret);
+		from_refs_unmap(dev, xen_obj);
+		goto fail;
+	}
+
+	kfree(map_ops);
+	return 0;
+
+fail:
+	kfree(xen_obj->pages);
+	xen_obj->pages = NULL;
+	kfree(xen_obj->map_handles);
+	xen_obj->map_handles = NULL;
+	kfree(map_ops);
+	return ret;
+
+}
+
+static void to_refs_end_foreign_access(struct xen_gem_object *xen_obj)
+{
+	int i;
+
+	if (xen_obj->grefs)
+		for (i = 0; i < xen_obj->num_pages; i++)
+			if (xen_obj->grefs[i] != GRANT_INVALID_REF)
+				gnttab_end_foreign_access(xen_obj->grefs[i],
+						0, 0UL);
+
+	kfree(xen_obj->grefs);
+	xen_obj->grefs = NULL;
+	xen_obj->sgt = NULL;
+}
+
+static int to_refs_grant_foreign_access(struct xen_gem_object *xen_obj)
+{
+	grant_ref_t priv_gref_head;
+	int ret, j, cur_ref, num_pages;
+	struct sg_page_iter sg_iter;
+
+	ret = gnttab_alloc_grant_references(xen_obj->num_pages,
+			&priv_gref_head);
+	if (ret < 0) {
+		DRM_ERROR("Cannot allocate grant references\n");
+		return ret;
+	}
+
+	j = 0;
+	num_pages = xen_obj->num_pages;
+	for_each_sg_page(xen_obj->sgt->sgl, &sg_iter, xen_obj->sgt->nents, 0) {
+		struct page *page;
+
+		page = sg_page_iter_page(&sg_iter);
+		cur_ref = gnttab_claim_grant_reference(&priv_gref_head);
+		if (cur_ref < 0)
+			return cur_ref;
+
+		gnttab_grant_foreign_access_ref(cur_ref,
+				xen_obj->otherend_id, xen_page_to_gfn(page), 0);
+		xen_obj->grefs[j++] = cur_ref;
+		num_pages--;
+	}
+
+	WARN_ON(num_pages != 0);
+
+	gnttab_free_grant_references(priv_gref_head);
+	return 0;
+}
+
+static int gem_create_with_handle(struct xen_gem_object *xen_obj,
+		struct drm_file *filp, struct drm_device *dev, int size)
+{
+	struct drm_gem_object *gem_obj;
+	int ret;
+
+	drm_gem_private_object_init(dev, &xen_obj->base, size);
+	gem_obj = &xen_obj->base;
+	ret = drm_gem_handle_create(filp, gem_obj, &xen_obj->dumb_handle);
+	/* drop reference from allocate - handle holds it now. */
+	drm_gem_object_put_unlocked(gem_obj);
+	return ret;
+}
+
+static int gem_create_obj(struct xen_gem_object *xen_obj,
+		struct drm_device *dev, struct drm_file *filp, int size)
+{
+	struct drm_gem_object *gem_obj;
+	int ret;
+
+	ret = gem_create_with_handle(xen_obj, filp, dev, size);
+	if (ret < 0)
+		goto fail;
+
+	gem_obj = drm_gem_object_lookup(filp, xen_obj->dumb_handle);
+	if (!gem_obj) {
+		DRM_ERROR("Lookup for handle %d failed\n",
+				xen_obj->dumb_handle);
+		ret = -EINVAL;
+		goto fail_destroy;
+	}
+
+	drm_gem_object_put_unlocked(gem_obj);
+	return 0;
+
+fail_destroy:
+	drm_gem_dumb_destroy(filp, dev, xen_obj->dumb_handle);
+fail:
+	DRM_ERROR("Failed to create dumb buffer: %d\n", ret);
+	xen_obj->dumb_handle = 0;
+	return ret;
+}
+
+static int gem_init_obj(struct xen_gem_object *xen_obj,
+		struct drm_device *dev, int size)
+{
+	struct drm_gem_object *gem_obj = &xen_obj->base;
+	int ret;
+
+	ret = drm_gem_object_init(dev, gem_obj, size);
+	if (ret < 0)
+		return ret;
+
+	ret = drm_gem_create_mmap_offset(gem_obj);
+	if (ret < 0) {
+		drm_gem_object_release(gem_obj);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void obj_release(struct kref *kref)
+{
+	struct xen_gem_object *xen_obj =
+			container_of(kref, struct xen_gem_object, refcount);
+	struct xen_drv_info *drv_info = xen_obj->base.dev->dev_private;
+
+	wait_obj_signal(drv_info, xen_obj);
+	kfree(xen_obj);
+}
+
+static void gem_free_object_unlocked(struct drm_gem_object *gem_obj)
+{
+	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
+	struct xen_drv_info *drv_info = gem_obj->dev->dev_private;
+
+	DRM_DEBUG("Freeing dumb with handle %d\n", xen_obj->dumb_handle);
+	if (xen_obj->grefs) {
+		if (xen_obj->sgt) {
+			drm_prime_gem_destroy(&xen_obj->base, xen_obj->sgt);
+			to_refs_end_foreign_access(xen_obj);
+		} else
+			from_refs_unmap(gem_obj->dev->dev, xen_obj);
+	}
+
+	drm_gem_object_release(gem_obj);
+
+	wait_obj_handle_free(drv_info, xen_obj);
+	kref_put(&xen_obj->refcount, obj_release);
+}
+
+static struct sg_table *gem_prime_get_sg_table(
+		struct drm_gem_object *gem_obj)
+{
+	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
+	struct sg_table *sgt = NULL;
+
+	if (unlikely(!xen_obj->pages))
+		return NULL;
+
+	sgt = drm_prime_pages_to_sg(xen_obj->pages, xen_obj->num_pages);
+
+	if (unlikely(!sgt))
+		DRM_ERROR("Failed to export sgt\n");
+	else
+		DRM_DEBUG("Exporting %scontiguous buffer nents %d\n",
+				sgt->nents == 1 ? "" : "non-", sgt->nents);
+	return sgt;
+}
+
+struct drm_gem_object *gem_prime_import_sg_table(struct drm_device *dev,
+		struct dma_buf_attachment *attach, struct sg_table *sgt)
+{
+	struct xen_gem_object *xen_obj;
+	int ret;
+
+	xen_obj = kzalloc(sizeof(*xen_obj), GFP_KERNEL);
+	if (!xen_obj)
+		return ERR_PTR(-ENOMEM);
+
+	ret = gem_init_obj(xen_obj, dev, attach->dmabuf->size);
+	if (ret < 0)
+		goto fail;
+
+	kref_init(&xen_obj->refcount);
+	xen_obj->sgt = sgt;
+	xen_obj->num_pages = DIV_ROUND_UP(attach->dmabuf->size, PAGE_SIZE);
+
+	DRM_DEBUG("Imported buffer of size %zu with nents %u\n",
+			attach->dmabuf->size, sgt->nents);
+	return &xen_obj->base;
+
+fail:
+	kfree(xen_obj);
+	return ERR_PTR(ret);
+}
+
+static int do_ioctl_from_refs(struct drm_device *dev,
+		struct drm_xen_zcopy_dumb_from_refs *req,
+		struct drm_file *filp)
+{
+	struct xen_drv_info *drv_info = dev->dev_private;
+	struct xen_gem_object *xen_obj;
+	int ret;
+
+	xen_obj = kzalloc(sizeof(*xen_obj), GFP_KERNEL);
+	if (!xen_obj)
+		return -ENOMEM;
+
+	kref_init(&xen_obj->refcount);
+	xen_obj->num_pages = req->num_grefs;
+	xen_obj->otherend_id = req->otherend_id;
+	xen_obj->grefs = kcalloc(xen_obj->num_pages,
+			sizeof(grant_ref_t), GFP_KERNEL);
+	if (!xen_obj->grefs) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	if (copy_from_user(xen_obj->grefs, req->grefs,
+			xen_obj->num_pages * sizeof(grant_ref_t))) {
+		ret = -EINVAL;
+		goto fail;
+	}
+
+	ret = from_refs_map(dev->dev, xen_obj);
+	if (ret < 0)
+		goto fail;
+
+	ret = gem_create_obj(xen_obj, dev, filp,
+			round_up(req->dumb.size, PAGE_SIZE));
+	if (ret < 0)
+		goto fail;
+
+	req->dumb.handle = xen_obj->dumb_handle;
+
+	/*
+	 * Also return a wait handle to user-space: the wait object itself
+	 * is not allocated at this point, but will be allocated if/when the
+	 * DRM_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL is issued
+	 */
+	ret = wait_obj_handle_new(drv_info, xen_obj);
+	if (ret < 0)
+		goto fail;
+
+	req->wait_handle = ret;
+	xen_obj->wait_handle = ret;
+	return 0;
+
+fail:
+	kfree(xen_obj->grefs);
+	xen_obj->grefs = NULL;
+	kfree(xen_obj);
+	return ret;
+}
+
+static int ioctl_from_refs(struct drm_device *dev,
+		void *data, struct drm_file *filp)
+{
+	struct drm_xen_zcopy_dumb_from_refs *req =
+			(struct drm_xen_zcopy_dumb_from_refs *)data;
+	struct drm_mode_create_dumb *args = &req->dumb;
+	uint32_t cpp, stride, size;
+
+	if (!req->num_grefs || !req->grefs)
+		return -EINVAL;
+
+	if (!args->width || !args->height || !args->bpp)
+		return -EINVAL;
+
+	cpp = DIV_ROUND_UP(args->bpp, 8);
+	if (!cpp || cpp > 0xffffffffU / args->width)
+		return -EINVAL;
+
+	stride = cpp * args->width;
+	if (args->height > 0xffffffffU / stride)
+		return -EINVAL;
+
+	size = args->height * stride;
+	if (PAGE_ALIGN(size) == 0)
+		return -EINVAL;
+
+	args->pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
+	args->size = args->pitch * args->height;
+	args->handle = 0;
+	if (req->num_grefs < DIV_ROUND_UP(args->size, PAGE_SIZE)) {
+		DRM_ERROR("Provided %d pages, need %d\n", req->num_grefs,
+				(int)DIV_ROUND_UP(args->size, PAGE_SIZE));
+		return -EINVAL;
+	}
+
+	return do_ioctl_from_refs(dev, req, filp);
+}
+
+static int ioctl_to_refs(struct drm_device *dev,
+		void *data, struct drm_file *filp)
+{
+	struct xen_gem_object *xen_obj;
+	struct drm_gem_object *gem_obj;
+	struct drm_xen_zcopy_dumb_to_refs *req =
+			(struct drm_xen_zcopy_dumb_to_refs *)data;
+	int ret;
+
+	if (!req->num_grefs || !req->grefs)
+		return -EINVAL;
+
+	gem_obj = drm_gem_object_lookup(filp, req->handle);
+	if (!gem_obj) {
+		DRM_ERROR("Lookup for handle %d failed\n", req->handle);
+		return -EINVAL;
+	}
+
+	drm_gem_object_put_unlocked(gem_obj);
+	xen_obj = to_xen_gem_obj(gem_obj);
+
+	if (xen_obj->num_pages != req->num_grefs) {
+		DRM_ERROR("Provided %d pages, need %d\n", req->num_grefs,
+				xen_obj->num_pages);
+		return -EINVAL;
+	}
+
+	xen_obj->otherend_id = req->otherend_id;
+	xen_obj->grefs = kcalloc(xen_obj->num_pages,
+			sizeof(grant_ref_t), GFP_KERNEL);
+	if (!xen_obj->grefs) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	ret = to_refs_grant_foreign_access(xen_obj);
+	if (ret < 0)
+		goto fail;
+
+	if (copy_to_user(req->grefs, xen_obj->grefs,
+			xen_obj->num_pages * sizeof(grant_ref_t))) {
+		ret = -EINVAL;
+		goto fail;
+	}
+
+	return 0;
+
+fail:
+	to_refs_end_foreign_access(xen_obj);
+	return ret;
+}
+
+static int ioctl_wait_free(struct drm_device *dev,
+		void *data, struct drm_file *file_priv)
+{
+	struct drm_xen_zcopy_dumb_wait_free *req =
+			(struct drm_xen_zcopy_dumb_wait_free *)data;
+	struct xen_drv_info *drv_info = dev->dev_private;
+	struct xen_gem_object *xen_obj;
+	struct xen_wait_obj *wait_obj;
+	int wait_handle, ret;
+
+	wait_handle = req->wait_handle;
+	/*
+	 * try to find the wait handle: if it is not found, then either the
+	 * handle has already been freed or it is wrong
+	 */
+	xen_obj = get_obj_by_wait_handle(drv_info, wait_handle);
+	if (!xen_obj)
+		return -ENOENT;
+
+	/*
+	 * xen_obj still exists and we now hold a reference to it, so
+	 * prepare to wait: allocate a wait object and add it to the wait
+	 * list, so we can find it on release
+	 */
+	wait_obj = wait_obj_new(drv_info, xen_obj);
+	/* put our reference and wait for xen_obj release to fire */
+	kref_put(&xen_obj->refcount, obj_release);
+	ret = PTR_ERR_OR_ZERO(wait_obj);
+	if (ret < 0) {
+		DRM_ERROR("Failed to setup wait object, ret %d\n", ret);
+		return ret;
+	}
+
+	ret = wait_obj_wait(wait_obj, req->wait_to_ms);
+	wait_obj_free(drv_info, wait_obj);
+	return ret;
+}
+
+static void lastclose(struct drm_device *dev)
+{
+	struct xen_drv_info *drv_info = dev->dev_private;
+
+	wait_obj_check_pending(drv_info);
+}
+
+static const struct drm_ioctl_desc xen_drm_ioctls[] = {
+	DRM_IOCTL_DEF_DRV(XEN_ZCOPY_DUMB_FROM_REFS,
+		ioctl_from_refs,
+		DRM_AUTH | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF_DRV(XEN_ZCOPY_DUMB_TO_REFS,
+		ioctl_to_refs,
+		DRM_AUTH | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+	DRM_IOCTL_DEF_DRV(XEN_ZCOPY_DUMB_WAIT_FREE,
+		ioctl_wait_free,
+		DRM_AUTH | DRM_CONTROL_ALLOW | DRM_UNLOCKED),
+};
+
+static const struct file_operations xen_drm_fops = {
+	.owner          = THIS_MODULE,
+	.open           = drm_open,
+	.release        = drm_release,
+	.unlocked_ioctl = drm_ioctl,
+};
+
+static struct drm_driver xen_drm_driver = {
+	.driver_features           = DRIVER_GEM | DRIVER_PRIME,
+	.lastclose                 = lastclose,
+	.prime_handle_to_fd        = drm_gem_prime_handle_to_fd,
+	.gem_prime_export          = drm_gem_prime_export,
+	.gem_prime_get_sg_table    = gem_prime_get_sg_table,
+	.prime_fd_to_handle        = drm_gem_prime_fd_to_handle,
+	.gem_prime_import          = drm_gem_prime_import,
+	.gem_prime_import_sg_table = gem_prime_import_sg_table,
+	.gem_free_object_unlocked  = gem_free_object_unlocked,
+	.fops                      = &xen_drm_fops,
+	.ioctls                    = xen_drm_ioctls,
+	.num_ioctls                = ARRAY_SIZE(xen_drm_ioctls),
+	.name                      = XENDRM_ZCOPY_DRIVER_NAME,
+	.desc                      = "Xen PV DRM zero copy",
+	.date                      = "20180221",
+	.major                     = 1,
+	.minor                     = 0,
+};
+
+static int xen_drm_drv_remove(struct platform_device *pdev)
+{
+	struct xen_drv_info *drv_info = platform_get_drvdata(pdev);
+
+	if (drv_info && drv_info->drm_dev) {
+		drm_dev_unregister(drv_info->drm_dev);
+		drm_dev_unref(drv_info->drm_dev);
+		idr_destroy(&drv_info->idr);
+	}
+	return 0;
+}
+
+static int xen_drm_drv_probe(struct platform_device *pdev)
+{
+	struct xen_drv_info *drv_info;
+	int ret;
+
+	DRM_INFO("Creating %s\n", xen_drm_driver.desc);
+	drv_info = kzalloc(sizeof(*drv_info), GFP_KERNEL);
+	if (!drv_info)
+		return -ENOMEM;
+
+	idr_init(&drv_info->idr);
+	spin_lock_init(&drv_info->idr_lock);
+	spin_lock_init(&drv_info->wait_list_lock);
+	INIT_LIST_HEAD(&drv_info->wait_obj_list);
+
+	/*
+	 * The device is not spawned from a device tree, so arch_setup_dma_ops
+	 * is not called, thus leaving the device with dummy DMA ops.
+	 * This makes the device return an error on PRIME buffer import, which
+	 * is not correct: to fix this, call of_dma_configure() with a NULL
+	 * node to set the default DMA ops.
+	 */
+	of_dma_configure(&pdev->dev, NULL);
+
+	drv_info->drm_dev = drm_dev_alloc(&xen_drm_driver, &pdev->dev);
+	if (IS_ERR(drv_info->drm_dev)) {
+		ret = PTR_ERR(drv_info->drm_dev);
+		kfree(drv_info);
+		return ret;
+	}
+
+	/* Set dev_private before registering: IOCTLs may arrive right away. */
+	drv_info->drm_dev->dev_private = drv_info;
+
+	ret = drm_dev_register(drv_info->drm_dev, 0);
+	if (ret < 0)
+		goto fail;
+
+	platform_set_drvdata(pdev, drv_info);
+
+	DRM_INFO("Initialized %s %d.%d.%d %s on minor %d\n",
+			xen_drm_driver.name, xen_drm_driver.major,
+			xen_drm_driver.minor, xen_drm_driver.patchlevel,
+			xen_drm_driver.date, drv_info->drm_dev->primary->index);
+	return 0;
+
+fail:
+	drm_dev_unref(drv_info->drm_dev);
+	kfree(drv_info);
+	return ret;
+}
+
+static struct platform_driver zcopy_platform_drv_info = {
+	.probe		= xen_drm_drv_probe,
+	.remove		= xen_drm_drv_remove,
+	.driver		= {
+		.name	= XENDRM_ZCOPY_DRIVER_NAME,
+	},
+};
+
+static struct platform_device_info zcopy_dev_info = {
+	.name = XENDRM_ZCOPY_DRIVER_NAME,
+	.id = 0,
+	.num_res = 0,
+	.dma_mask = DMA_BIT_MASK(32),
+};
+
+static struct platform_device *xen_pdev;
+
+static int __init xen_drv_init(void)
+{
+	int ret;
+
+	/* At the moment we only support the case where XEN_PAGE_SIZE == PAGE_SIZE */
+	if (XEN_PAGE_SIZE != PAGE_SIZE) {
+		DRM_ERROR(XENDRM_ZCOPY_DRIVER_NAME ": different kernel and Xen page sizes are not supported: XEN_PAGE_SIZE (%lu) != PAGE_SIZE (%lu)\n",
+				XEN_PAGE_SIZE, PAGE_SIZE);
+		return -ENODEV;
+	}
+
+	if (!xen_domain())
+		return -ENODEV;
+
+	xen_pdev = platform_device_register_full(&zcopy_dev_info);
+	if (IS_ERR(xen_pdev)) {
+		DRM_ERROR("Failed to register " XENDRM_ZCOPY_DRIVER_NAME " device\n");
+		return PTR_ERR(xen_pdev);
+	}
+
+	ret = platform_driver_register(&zcopy_platform_drv_info);
+	if (ret != 0) {
+		DRM_ERROR("Failed to register " XENDRM_ZCOPY_DRIVER_NAME " driver: %d\n", ret);
+		platform_device_unregister(xen_pdev);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void __exit xen_drv_fini(void)
+{
+	if (xen_pdev)
+		platform_device_unregister(xen_pdev);
+	platform_driver_unregister(&zcopy_platform_drv_info);
+}
+
+module_init(xen_drv_init);
+module_exit(xen_drv_fini);
+
+MODULE_DESCRIPTION("Xen zero-copy helper DRM device");
+MODULE_LICENSE("GPL");
diff --git a/drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c b/drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c
new file mode 100644
index 000000000000..2679233b9f84
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c
@@ -0,0 +1,154 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+
+/*
+ *  Xen zero-copy helper DRM device
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
+ */
+
+#include <drm/drmP.h>
+
+#if defined(CONFIG_DRM_XEN_ZCOPY_CMA)
+#include <asm/xen/hypercall.h>
+#include <xen/interface/memory.h>
+#include <xen/page.h>
+#else
+#include <xen/balloon.h>
+#endif
+
+#include "xen_drm_zcopy_balloon.h"
+
+#if defined(CONFIG_DRM_XEN_ZCOPY_CMA)
+int xen_drm_zcopy_ballooned_pages_alloc(struct device *dev,
+		struct xen_drm_zcopy_balloon *obj, int num_pages,
+		struct page **pages)
+{
+	xen_pfn_t *frame_list;
+	size_t size;
+	int i, ret;
+	dma_addr_t dev_addr, cpu_addr;
+	void *vaddr = NULL;
+	struct xen_memory_reservation reservation = {
+		.address_bits = 0,
+		.extent_order = 0,
+		.domid        = DOMID_SELF
+	};
+
+	size = num_pages * PAGE_SIZE;
+	DRM_DEBUG("Ballooning out %d pages, size %zu\n", num_pages, size);
+	frame_list = kcalloc(num_pages, sizeof(*frame_list), GFP_KERNEL);
+	if (!frame_list)
+		return -ENOMEM;
+
+	vaddr = dma_alloc_wc(dev, size, &dev_addr, GFP_KERNEL | __GFP_NOWARN);
+	if (!vaddr) {
+		DRM_ERROR("Failed to allocate DMA buffer with size %zu\n",
+				size);
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	cpu_addr = dev_addr;
+	for (i = 0; i < num_pages; i++) {
+		pages[i] = pfn_to_page(__phys_to_pfn(cpu_addr));
+		/*
+		 * XENMEM_populate_physmap requires a PFN based on Xen
+		 * granularity.
+		 */
+		frame_list[i] = page_to_xen_pfn(pages[i]);
+		cpu_addr += PAGE_SIZE;
+	}
+
+	set_xen_guest_handle(reservation.extent_start, frame_list);
+	reservation.nr_extents = num_pages;
+	/* ret will hold the number of pages processed */
+	ret = HYPERVISOR_memory_op(XENMEM_decrease_reservation, &reservation);
+	if (ret <= 0) {
+		DRM_ERROR("Failed to balloon out %d pages (%d)\n",
+				num_pages, ret);
+		WARN_ON(ret != num_pages);
+		ret = -EFAULT;
+		goto fail;
+	}
+
+	obj->vaddr = vaddr;
+	obj->dev_bus_addr = dev_addr;
+	kfree(frame_list);
+	return 0;
+
+fail:
+	if (vaddr)
+		dma_free_wc(dev, size, vaddr, dev_addr);
+
+	kfree(frame_list);
+	return ret;
+}
+
+void xen_drm_zcopy_ballooned_pages_free(struct device *dev,
+		struct xen_drm_zcopy_balloon *obj, int num_pages,
+		struct page **pages)
+{
+	xen_pfn_t *frame_list;
+	int i, ret;
+	size_t size;
+	struct xen_memory_reservation reservation = {
+		.address_bits = 0,
+		.extent_order = 0,
+		.domid        = DOMID_SELF
+	};
+
+	if (!pages)
+		return;
+
+	if (!obj->vaddr)
+		return;
+
+	frame_list = kcalloc(num_pages, sizeof(*frame_list), GFP_KERNEL);
+	if (!frame_list) {
+		DRM_ERROR("Failed to balloon in %d pages\n", num_pages);
+		return;
+	}
+
+	DRM_DEBUG("Ballooning in %d pages\n", num_pages);
+	size = num_pages * PAGE_SIZE;
+	for (i = 0; i < num_pages; i++) {
+		/*
+		 * XENMEM_populate_physmap requires a PFN based on Xen
+		 * granularity.
+		 */
+		frame_list[i] = page_to_xen_pfn(pages[i]);
+	}
+
+	set_xen_guest_handle(reservation.extent_start, frame_list);
+	reservation.nr_extents = num_pages;
+	/* ret will hold the number of pages processed */
+	ret = HYPERVISOR_memory_op(XENMEM_populate_physmap, &reservation);
+	if (ret <= 0) {
+		DRM_ERROR("Failed to balloon in %d pages\n", num_pages);
+		WARN_ON(ret != num_pages);
+	}
+
+	if (obj->vaddr)
+		dma_free_wc(dev, size, obj->vaddr, obj->dev_bus_addr);
+
+	obj->vaddr = NULL;
+	obj->dev_bus_addr = 0;
+	kfree(frame_list);
+}
+#else
+int xen_drm_zcopy_ballooned_pages_alloc(struct device *dev,
+		struct xen_drm_zcopy_balloon *obj, int num_pages,
+		struct page **pages)
+{
+	return alloc_xenballooned_pages(num_pages, pages);
+}
+
+void xen_drm_zcopy_ballooned_pages_free(struct device *dev,
+		struct xen_drm_zcopy_balloon *obj, int num_pages,
+		struct page **pages)
+{
+	free_xenballooned_pages(num_pages, pages);
+}
+#endif /* defined(CONFIG_DRM_XEN_ZCOPY_CMA) */
diff --git a/drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h b/drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h
new file mode 100644
index 000000000000..1151f17f9339
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+
+/*
+ *  Xen zero-copy helper DRM device
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
+ */
+
+#ifndef __XEN_DRM_ZCOPY_BALLOON_H_
+#define __XEN_DRM_ZCOPY_BALLOON_H_
+
+#include <linux/types.h>
+
+#ifndef GRANT_INVALID_REF
+/*
+ * Note on usage of grant reference 0 as invalid grant reference:
+ * grant reference 0 is valid, but never exposed to a PV driver,
+ * because it is already in use/reserved by the PV console.
+ */
+#define GRANT_INVALID_REF	0
+#endif
+
+struct xen_drm_zcopy_balloon {
+	void *vaddr;
+	dma_addr_t dev_bus_addr;
+};
+
+int xen_drm_zcopy_ballooned_pages_alloc(struct device *dev,
+		struct xen_drm_zcopy_balloon *obj, int num_pages,
+		struct page **pages);
+
+void xen_drm_zcopy_ballooned_pages_free(struct device *dev,
+		struct xen_drm_zcopy_balloon *obj, int num_pages,
+		struct page **pages);
+
+#endif /* __XEN_DRM_ZCOPY_BALLOON_H_ */
diff --git a/include/uapi/drm/xen_zcopy_drm.h b/include/uapi/drm/xen_zcopy_drm.h
new file mode 100644
index 000000000000..8767cfbf0350
--- /dev/null
+++ b/include/uapi/drm/xen_zcopy_drm.h
@@ -0,0 +1,129 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+
+/*
+ *  Xen zero-copy helper DRM device
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
+ */
+#ifndef __XEN_ZCOPY_DRM_H
+#define __XEN_ZCOPY_DRM_H
+
+#include "drm.h"
+
+#if defined(__cplusplus)
+extern "C" {
+#endif
+
+#define XENDRM_ZCOPY_DRIVER_NAME	"xen_drm_zcopy"
+
+/**
+ * DOC: DRM_XEN_ZCOPY_DUMB_FROM_REFS
+ *
+ * This will create a DRM dumb buffer from grant references provided
+ * by the frontend:
+ *
+ * - Frontend
+ *
+ *  - creates a dumb/display buffer and allocates memory.
+ *  - grants foreign access to the buffer pages
+ *  - passes granted references to the backend
+ *
+ * - Backend
+ *
+ *  - issues DRM_XEN_ZCOPY_DUMB_FROM_REFS ioctl to map
+ *    granted references and create a dumb buffer.
+ *  - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
+ *  - requests real HW driver to import the PRIME buffer with
+ *    DRM_IOCTL_PRIME_FD_TO_HANDLE
+ *  - uses handle returned by the real HW driver
+ *
+ *  At the end:
+ *
+ *   - closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
+ *   - closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
+ *   - closes file descriptor of the exported buffer
+ *   - may wait for the object to be actually freed via wait_handle
+ *     and DRM_XEN_ZCOPY_DUMB_WAIT_FREE
+ */
+#define DRM_XEN_ZCOPY_DUMB_FROM_REFS	0x00
+
+struct drm_xen_zcopy_dumb_from_refs {
+	uint32_t num_grefs;
+	/* user-space uses uint32_t instead of grant_ref_t for mapping */
+	uint32_t *grefs;
+	uint64_t otherend_id;
+	struct drm_mode_create_dumb dumb;
+	uint32_t wait_handle;
+};
+
+/**
+ * DOC: DRM_XEN_ZCOPY_DUMB_TO_REFS
+ *
+ * This will grant references to a dumb/display buffer's memory provided by the
+ * backend:
+ *
+ * - Frontend
+ *
+ *  - requests backend to allocate dumb/display buffer and grant references
+ *    to its pages
+ *
+ * - Backend
+ *
+ *  - requests real HW driver to create a dumb with DRM_IOCTL_MODE_CREATE_DUMB
+ *  - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
+ *  - requests zero-copy driver to import the PRIME buffer with
+ *    DRM_IOCTL_PRIME_FD_TO_HANDLE
+ *  - issues DRM_XEN_ZCOPY_DUMB_TO_REFS ioctl to grant references to the
+ *    buffer's memory.
+ *  - passes grant references to the frontend
+ *
+ *  At the end:
+ *
+ *   - closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
+ *   - closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
+ *   - closes file descriptor of the imported buffer
+ */
+#define DRM_XEN_ZCOPY_DUMB_TO_REFS	0x01
+
+struct drm_xen_zcopy_dumb_to_refs {
+	uint32_t num_grefs;
+	/* user-space uses uint32_t instead of grant_ref_t for mapping */
+	uint32_t *grefs;
+	uint64_t otherend_id;
+	uint32_t handle;
+};
+
+/**
+ * DOC: DRM_XEN_ZCOPY_DUMB_WAIT_FREE
+ *
+ * This will block until the dumb buffer with the provided wait handle is freed:
+ * this is needed for synchronization between the frontend and the backend in
+ * case the frontend provides grant references of the buffer via the
+ * DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL, as these must be released before the
+ * backend replies with the XENDISPL_OP_DBUF_DESTROY response.
+ * wait_handle must be the same value returned by the
+ * DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL.
+ */
+#define DRM_XEN_ZCOPY_DUMB_WAIT_FREE	0x02
+
+struct drm_xen_zcopy_dumb_wait_free {
+	uint32_t wait_handle;
+	uint32_t wait_to_ms;
+};
+
+#define DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS DRM_IOWR(DRM_COMMAND_BASE + \
+	DRM_XEN_ZCOPY_DUMB_FROM_REFS, struct drm_xen_zcopy_dumb_from_refs)
+
+#define DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS DRM_IOWR(DRM_COMMAND_BASE + \
+	DRM_XEN_ZCOPY_DUMB_TO_REFS, struct drm_xen_zcopy_dumb_to_refs)
+
+#define DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE DRM_IOWR(DRM_COMMAND_BASE + \
+	DRM_XEN_ZCOPY_DUMB_WAIT_FREE, struct drm_xen_zcopy_dumb_wait_free)
+
+#if defined(__cplusplus)
+}
+#endif
+
+#endif /* __XEN_ZCOPY_DRM_H*/
-- 
2.16.2
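
To make the flows described in the DOC comments of xen_zcopy_drm.h above a
bit more tangible, below is a minimal user-space sketch of the
DRM_XEN_ZCOPY_DUMB_FROM_REFS path and of the final wait. It is illustrative
only and not part of the patch: error handling is trimmed, the grant
reference array and the two already opened DRM file descriptors (zcopy and
real HW device) are assumed to exist, and the UAPI header is assumed to be
installed as <drm/xen_zcopy_drm.h>.

/* Illustrative sketch only, not part of the patch. */
#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>

#include <drm/drm.h>
#include <drm/xen_zcopy_drm.h>

static int import_front_buffer(int zcopy_fd, int hw_fd,
			       uint32_t *grefs, uint32_t num_grefs,
			       uint32_t width, uint32_t height, uint32_t bpp,
			       uint32_t otherend_id, uint32_t *zcopy_handle,
			       uint32_t *hw_handle, uint32_t *wait_handle)
{
	struct drm_xen_zcopy_dumb_from_refs from_refs = {
		.num_grefs = num_grefs,
		.grefs = grefs,
		.otherend_id = otherend_id,
		.dumb = { .width = width, .height = height, .bpp = bpp },
	};
	struct drm_prime_handle to_fd = { .flags = DRM_CLOEXEC };
	struct drm_prime_handle to_handle = { 0 };
	int ret;

	/* 1. Turn the frontend's grant references into a zcopy dumb buffer. */
	ret = ioctl(zcopy_fd, DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS, &from_refs);
	if (ret)
		return ret;
	*zcopy_handle = from_refs.dumb.handle;
	*wait_handle = from_refs.wait_handle;

	/* 2. Export the zcopy dumb buffer as a PRIME file descriptor. */
	to_fd.handle = from_refs.dumb.handle;
	ret = ioctl(zcopy_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &to_fd);
	if (ret)
		return ret;

	/* 3. Import that file descriptor into the real HW DRM driver. */
	to_handle.fd = to_fd.fd;
	ret = ioctl(hw_fd, DRM_IOCTL_PRIME_FD_TO_HANDLE, &to_handle);
	close(to_fd.fd);
	if (ret)
		return ret;

	*hw_handle = to_handle.handle;
	return 0;
}

/*
 * At tear-down, after both handles have been closed with
 * DRM_IOCTL_GEM_CLOSE, the backend may block until the buffer is really
 * gone before replying to the frontend's XENDISPL_OP_DBUF_DESTROY request.
 */
static int wait_buffer_freed(int zcopy_fd, uint32_t wait_handle,
			     uint32_t timeout_ms)
{
	struct drm_xen_zcopy_dumb_wait_free wait = {
		.wait_handle = wait_handle,
		.wait_to_ms = timeout_ms,
	};

	return ioctl(zcopy_fd, DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE, &wait);
}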


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 131+ messages in thread

* Re: [PATCH 1/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-03-29 13:19   ` Oleksandr Andrushchenko
@ 2018-04-03  9:47     ` Daniel Vetter
  -1 siblings, 0 replies; 131+ messages in thread
From: Daniel Vetter @ 2018-04-03  9:47 UTC (permalink / raw)
  To: Oleksandr Andrushchenko
  Cc: xen-devel, linux-kernel, dri-devel, airlied, daniel.vetter,
	seanpaul, gustavo, jgross, boris.ostrovsky, konrad.wilk,
	Oleksandr Andrushchenko

On Thu, Mar 29, 2018 at 04:19:31PM +0300, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> +static int to_refs_grant_foreign_access(struct xen_gem_object *xen_obj)
> +{
> +	grant_ref_t priv_gref_head;
> +	int ret, j, cur_ref, num_pages;
> +	struct sg_page_iter sg_iter;
> +
> +	ret = gnttab_alloc_grant_references(xen_obj->num_pages,
> +			&priv_gref_head);
> +	if (ret < 0) {
> +		DRM_ERROR("Cannot allocate grant references\n");
> +		return ret;
> +	}
> +
> +	j = 0;
> +	num_pages = xen_obj->num_pages;
> +	for_each_sg_page(xen_obj->sgt->sgl, &sg_iter, xen_obj->sgt->nents, 0) {
> +		struct page *page;
> +
> +		page = sg_page_iter_page(&sg_iter);

Quick drive-by: You can't assume that an sgt is struct page backed.

And you probably want to check this at import/attach time.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
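
A rough sketch of such a check, not part of the posted patch, could walk
the imported table before the driver stores it; note that relying on
sg_page() being NULL is itself only a heuristic, as some exporters may
leave stale values there:

/*
 * Rough sketch only (assumes the driver structure from the patch above):
 * reject imported sg tables which are not backed by struct page, so that
 * sg_page_iter_page() is never called on them later on.
 */
static bool sgt_is_page_backed(struct sg_table *sgt)
{
	struct scatterlist *sg;
	int i;

	for_each_sg(sgt->sgl, sg, sgt->nents, i)
		if (!sg_page(sg))
			return false;

	return true;
}

gem_prime_import_sg_table() could then bail out with ERR_PTR(-EINVAL) when
the check fails, before allocating the xen_gem_object.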

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 1/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-03  9:47     ` Daniel Vetter
@ 2018-04-06 11:25       ` Oleksandr Andrushchenko
  -1 siblings, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-06 11:25 UTC (permalink / raw)
  To: xen-devel, linux-kernel, dri-devel, airlied, daniel.vetter,
	seanpaul, gustavo, jgross, boris.ostrovsky, konrad.wilk,
	Oleksandr Andrushchenko

On 04/03/2018 12:47 PM, Daniel Vetter wrote:
> On Thu, Mar 29, 2018 at 04:19:31PM +0300, Oleksandr Andrushchenko wrote:
>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>> +static int to_refs_grant_foreign_access(struct xen_gem_object *xen_obj)
>> +{
>> +	grant_ref_t priv_gref_head;
>> +	int ret, j, cur_ref, num_pages;
>> +	struct sg_page_iter sg_iter;
>> +
>> +	ret = gnttab_alloc_grant_references(xen_obj->num_pages,
>> +			&priv_gref_head);
>> +	if (ret < 0) {
>> +		DRM_ERROR("Cannot allocate grant references\n");
>> +		return ret;
>> +	}
>> +
>> +	j = 0;
>> +	num_pages = xen_obj->num_pages;
>> +	for_each_sg_page(xen_obj->sgt->sgl, &sg_iter, xen_obj->sgt->nents, 0) {
>> +		struct page *page;
>> +
>> +		page = sg_page_iter_page(&sg_iter);
> Quick drive-by: You can't assume that an sgt is struct page backed.
Do you mean that someone could give me an sgt which has never
seen sg_assign_page for its entries?
What are the other use-cases where that can happen?
> And you probably want to check this at import/attach time.
The check you mean is to make sure that when I call
page = sg_page_iter_page(&sg_iter);
I actually get a valid page?
> -Daniel
Thank you,
Oleksandr

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 1/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-06 11:25       ` Oleksandr Andrushchenko
@ 2018-04-09  8:27         ` Daniel Vetter
  -1 siblings, 0 replies; 131+ messages in thread
From: Daniel Vetter @ 2018-04-09  8:27 UTC (permalink / raw)
  To: Oleksandr Andrushchenko
  Cc: xen-devel, linux-kernel, dri-devel, airlied, daniel.vetter,
	seanpaul, gustavo, jgross, boris.ostrovsky, konrad.wilk,
	Oleksandr Andrushchenko

On Fri, Apr 06, 2018 at 02:25:08PM +0300, Oleksandr Andrushchenko wrote:
> On 04/03/2018 12:47 PM, Daniel Vetter wrote:
> > On Thu, Mar 29, 2018 at 04:19:31PM +0300, Oleksandr Andrushchenko wrote:
> > > From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> > > +static int to_refs_grant_foreign_access(struct xen_gem_object *xen_obj)
> > > +{
> > > +	grant_ref_t priv_gref_head;
> > > +	int ret, j, cur_ref, num_pages;
> > > +	struct sg_page_iter sg_iter;
> > > +
> > > +	ret = gnttab_alloc_grant_references(xen_obj->num_pages,
> > > +			&priv_gref_head);
> > > +	if (ret < 0) {
> > > +		DRM_ERROR("Cannot allocate grant references\n");
> > > +		return ret;
> > > +	}
> > > +
> > > +	j = 0;
> > > +	num_pages = xen_obj->num_pages;
> > > +	for_each_sg_page(xen_obj->sgt->sgl, &sg_iter, xen_obj->sgt->nents, 0) {
> > > +		struct page *page;
> > > +
> > > +		page = sg_page_iter_page(&sg_iter);
> > Quick drive-by: You can't assume that an sgt is struct page backed.
> Do you mean that someone could give me sgt which never
> seen sg_assign_page for its entries?

Yes.

> What are the other use-cases for that to happen?

Sharing VRAM or other resources which are not backed by a struct page. See
Christian König's recent work to accomplish just that for amdgpu.

> > And you probably want to check this at import/attach time.
> The check you mean is to make sure that when I call
> page = sg_page_iter_page(&sg_iter);
> I have to make sure that I get a valid page?

Yup.

> > -Daniel
> Thank you,
> Oleksandr

Cheers, Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-03-29 13:19 ` Oleksandr Andrushchenko
                   ` (2 preceding siblings ...)
  (?)
@ 2018-04-16 14:33 ` Oleksandr Andrushchenko
  2018-04-16 19:29   ` Dongwon Kim
  2018-04-16 19:29     ` Dongwon Kim
  -1 siblings, 2 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-16 14:33 UTC (permalink / raw)
  To: xen-devel, linux-kernel, dri-devel, airlied, daniel.vetter,
	seanpaul, gustavo, jgross, boris.ostrovsky, konrad.wilk
  Cc: Oleksandr Andrushchenko, Dongwon Kim, Potrola, MateuszX,
	Matt Roper, Artem Mygaiev

Hello, all!

After discussing the xen-zcopy and hyper-dmabuf [1] approaches it seems
that xen-zcopy can be made not to depend on the DRM core any more and
become dma-buf centric (which it in fact is).

The DRM code was mostly there for dma-buf's FD import/export with the
DRM PRIME UAPI and with DRM use-cases in mind, but it turns out that if
the proposed 2 IOCTLs (DRM_XEN_ZCOPY_DUMB_FROM_REFS and
DRM_XEN_ZCOPY_DUMB_TO_REFS) are extended to also provide a file
descriptor of the corresponding dma-buf, then the PRIME stuff in the
driver is not needed anymore.

That being said, xen-zcopy can safely be detached from DRM and moved
from drivers/gpu/drm/xen into drivers/xen/dma-buf-backend(?).

This driver then becomes a universal way to turn any shared buffer
between Dom0/DomD and DomU(s) into a dma-buf, e.g. one can create a
dma-buf from any grant references or represent a dma-buf as grant
references for export.

This way the driver can be used not only for DRM use-cases, but also
for other use-cases which may require zero copying between domains.

For example, the use-cases we are about to work on in the near future
will use V4L, e.g. we plan to support cameras, codecs etc., and all of
these will benefit greatly from zero copying. Potentially, even
block/net devices may benefit, but this needs some evaluation.

I would love to hear comments from the authors of hyper-dmabuf and the
Xen community, as well as dri-devel and other interested parties.


Thank you,

Oleksandr


On 03/29/2018 04:19 PM, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>
> Hello!
>
> When using Xen PV DRM frontend driver then on backend side one will need
> to do copying of display buffers' contents (filled by the
> frontend's user-space) into buffers allocated at the backend side.
> Taking into account the size of display buffers and frames per seconds
> it may result in unneeded huge data bus occupation and performance loss.
>
> This helper driver allows implementing zero-copying use-cases
> when using Xen para-virtualized frontend display driver by
> implementing a DRM/KMS helper driver running on backend's side.
> It utilizes PRIME buffers API to share frontend's buffers with
> physical device drivers on backend's side:
>
>   - a dumb buffer created on backend's side can be shared
>     with the Xen PV frontend driver, so it directly writes
>     into backend's domain memory (into the buffer exported from
>     DRM/KMS driver of a physical display device)
>   - a dumb buffer allocated by the frontend can be imported
>     into physical device DRM/KMS driver, thus allowing to
>     achieve no copying as well
>
> For that reason number of IOCTLs are introduced:
>   -  DRM_XEN_ZCOPY_DUMB_FROM_REFS
>      This will create a DRM dumb buffer from grant references provided
>      by the frontend
>   - DRM_XEN_ZCOPY_DUMB_TO_REFS
>     This will grant references to a dumb/display buffer's memory provided
>     by the backend
>   - DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>     This will block until the dumb buffer with the wait handle provided
>     be freed
>
> With this helper driver I was able to drop CPU usage from 17% to 3%
> on Renesas R-Car M3 board.
>
> This was tested with Renesas' Wayland-KMS and backend running as DRM master.
>
> Thank you,
> Oleksandr
>
> Oleksandr Andrushchenko (1):
>    drm/xen-zcopy: Add Xen zero-copy helper DRM driver
>
>   Documentation/gpu/drivers.rst               |   1 +
>   Documentation/gpu/xen-zcopy.rst             |  32 +
>   drivers/gpu/drm/xen/Kconfig                 |  25 +
>   drivers/gpu/drm/xen/Makefile                |   5 +
>   drivers/gpu/drm/xen/xen_drm_zcopy.c         | 880 ++++++++++++++++++++++++++++
>   drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c | 154 +++++
>   drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h |  38 ++
>   include/uapi/drm/xen_zcopy_drm.h            | 129 ++++
>   8 files changed, 1264 insertions(+)
>   create mode 100644 Documentation/gpu/xen-zcopy.rst
>   create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy.c
>   create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c
>   create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h
>   create mode 100644 include/uapi/drm/xen_zcopy_drm.h
>
[1] 
https://lists.xenproject.org/archives/html/xen-devel/2018-02/msg01202.html
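
To make the proposal a bit more concrete, one possible shape for a dma-buf
centric "from refs" call, returning a dma-buf file descriptor instead of a
DRM dumb handle, might be the following. This is purely illustrative,
nothing here is part of a posted patch, and the names are made up:

/* Illustrative only: a possible UAPI for drivers/xen/dma-buf-backend(?). */
struct xen_dmabuf_from_refs {
	uint32_t num_grefs;
	uint32_t flags;
	uint64_t grefs;        /* user pointer to an array of grant references */
	uint64_t otherend_id;  /* domain id of the buffer owner */
	uint64_t size;         /* buffer size in bytes */
	int32_t  fd;           /* out: dma-buf file descriptor */
	uint32_t wait_handle;  /* out: handle for the "wait until freed" call */
};

Passing the grant reference array as a 64-bit user pointer would also avoid
the 32/64-bit compat issue of the current uint32_t *grefs field.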

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-16 14:33 ` [PATCH 0/1] " Oleksandr Andrushchenko
@ 2018-04-16 19:29     ` Dongwon Kim
  2018-04-16 19:29     ` Dongwon Kim
  1 sibling, 0 replies; 131+ messages in thread
From: Dongwon Kim @ 2018-04-16 19:29 UTC (permalink / raw)
  To: Oleksandr Andrushchenko
  Cc: xen-devel, linux-kernel, dri-devel, airlied, daniel.vetter,
	seanpaul, gustavo, jgross, boris.ostrovsky, konrad.wilk,
	Oleksandr Andrushchenko, Potrola, MateuszX, Matt Roper,
	Artem Mygaiev

Yeah, I definitely agree on the idea of expanding the use case to the
general domain where dmabuf sharing is used. However, what you are
targeting with the proposed changes is identical to the core design of
hyper_dmabuf.

On top of these basic functionalities, hyper_dmabuf has driver-level
inter-domain communication, which is needed for dma-buf remote tracking
(no fence forwarding though), event triggering and event handling, extra
meta data exchange and a hyper_dmabuf_id that represents grefs
(grefs are shared implicitly on the driver level).

Also it is designed with a frontend (common core framework) + backend
(hypervisor-specific communication and memory sharing) structure for
portability. We just can't limit this feature to Xen because we want to
use the same uapis not only for Xen but also for other applicable
hypervisors, like ACRN.

So I am wondering if we can start with this hyper_dmabuf, then modify it
for your use-case if needed, and polish and fix any glitches if we want
to use this for all general dma-buf use-cases.

Also, I still have one unresolved question regarding the export/import
flow in both hyper_dmabuf and xen-zcopy.

@danvet: Would this flow (guest1->import existing dmabuf->share underlying
pages->guest2->map shared pages->create/export dmabuf) be acceptable now?

Regards,
DW
 
On Mon, Apr 16, 2018 at 05:33:46PM +0300, Oleksandr Andrushchenko wrote:
> Hello, all!
> 
> After discussing xen-zcopy and hyper-dmabuf [1] approaches
> 
> it seems that xen-zcopy can be made not depend on DRM core any more
> 
> and be dma-buf centric (which it in fact is).
> 
> The DRM code was mostly there for dma-buf's FD import/export
> 
> with DRM PRIME UAPI and with DRM use-cases in mind, but it comes out that if
> 
> the proposed 2 IOCTLs (DRM_XEN_ZCOPY_DUMB_FROM_REFS and
> DRM_XEN_ZCOPY_DUMB_TO_REFS)
> 
> are extended to also provide a file descriptor of the corresponding dma-buf,
> then
> 
> PRIME stuff in the driver is not needed anymore.
> 
> That being said, xen-zcopy can safely be detached from DRM and moved from
> 
> drivers/gpu/drm/xen into drivers/xen/dma-buf-backend(?).
> 
> This driver then becomes a universal way to turn any shared buffer between
> Dom0/DomD
> 
> and DomU(s) into a dma-buf, e.g. one can create a dma-buf from any grant
> references
> 
> or represent a dma-buf as grant-references for export.
> 
> This way the driver can be used not only for DRM use-cases, but also for
> other
> 
> use-cases which may require zero copying between domains.
> 
> For example, the use-cases we are about to work in the nearest future will
> use
> 
> V4L, e.g. we plan to support cameras, codecs etc. and all these will benefit
> 
> from zero copying much. Potentially, even block/net devices may benefit,
> 
> but this needs some evaluation.
> 
> 
> I would love to hear comments for authors of the hyper-dmabuf
> 
> and Xen community, as well as DRI-Devel and other interested parties.
> 
> 
> Thank you,
> 
> Oleksandr
> 
> 
> On 03/29/2018 04:19 PM, Oleksandr Andrushchenko wrote:
> >From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> >
> >Hello!
> >
> >When using Xen PV DRM frontend driver then on backend side one will need
> >to do copying of display buffers' contents (filled by the
> >frontend's user-space) into buffers allocated at the backend side.
> >Taking into account the size of display buffers and frames per seconds
> >it may result in unneeded huge data bus occupation and performance loss.
> >
> >This helper driver allows implementing zero-copying use-cases
> >when using Xen para-virtualized frontend display driver by
> >implementing a DRM/KMS helper driver running on backend's side.
> >It utilizes PRIME buffers API to share frontend's buffers with
> >physical device drivers on backend's side:
> >
> >  - a dumb buffer created on backend's side can be shared
> >    with the Xen PV frontend driver, so it directly writes
> >    into backend's domain memory (into the buffer exported from
> >    DRM/KMS driver of a physical display device)
> >  - a dumb buffer allocated by the frontend can be imported
> >    into physical device DRM/KMS driver, thus allowing to
> >    achieve no copying as well
> >
> >For that reason number of IOCTLs are introduced:
> >  -  DRM_XEN_ZCOPY_DUMB_FROM_REFS
> >     This will create a DRM dumb buffer from grant references provided
> >     by the frontend
> >  - DRM_XEN_ZCOPY_DUMB_TO_REFS
> >    This will grant references to a dumb/display buffer's memory provided
> >    by the backend
> >  - DRM_XEN_ZCOPY_DUMB_WAIT_FREE
> >    This will block until the dumb buffer with the wait handle provided
> >    be freed
> >
> >With this helper driver I was able to drop CPU usage from 17% to 3%
> >on Renesas R-Car M3 board.
> >
> >This was tested with Renesas' Wayland-KMS and backend running as DRM master.
> >
> >Thank you,
> >Oleksandr
> >
> >Oleksandr Andrushchenko (1):
> >   drm/xen-zcopy: Add Xen zero-copy helper DRM driver
> >
> >  Documentation/gpu/drivers.rst               |   1 +
> >  Documentation/gpu/xen-zcopy.rst             |  32 +
> >  drivers/gpu/drm/xen/Kconfig                 |  25 +
> >  drivers/gpu/drm/xen/Makefile                |   5 +
> >  drivers/gpu/drm/xen/xen_drm_zcopy.c         | 880 ++++++++++++++++++++++++++++
> >  drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c | 154 +++++
> >  drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h |  38 ++
> >  include/uapi/drm/xen_zcopy_drm.h            | 129 ++++
> >  8 files changed, 1264 insertions(+)
> >  create mode 100644 Documentation/gpu/xen-zcopy.rst
> >  create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy.c
> >  create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c
> >  create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h
> >  create mode 100644 include/uapi/drm/xen_zcopy_drm.h
> >
> [1]
> https://lists.xenproject.org/archives/html/xen-devel/2018-02/msg01202.html
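
For comparison purposes only, the frontend/backend split described here
boils down to hiding the transport behind a small set of operations; a
generic sketch (this is not the actual hyper_dmabuf interface) could look
like:

/*
 * Generic illustration of the frontend/backend split, not the actual
 * hyper_dmabuf API: only the backend knows how pages are shared and
 * mapped on a given hypervisor, while the common frontend deals with
 * dma-buf objects, ids and messaging.
 */
struct xdmabuf_backend_ops {
	int (*share_pages)(struct page **pages, int nents, int otherend_id,
			   void **refs_info);
	int (*unshare_pages)(void **refs_info, int nents);
	struct page **(*map_shared_pages)(unsigned long ref, int otherend_id,
					  int nents, void **refs_info);
	int (*unmap_shared_pages)(void **refs_info, int nents);
	int (*send_req)(int otherend_id, void *req, int wait);
};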

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-16 14:33 ` [PATCH 0/1] " Oleksandr Andrushchenko
@ 2018-04-16 19:29   ` Dongwon Kim
  2018-04-16 19:29     ` Dongwon Kim
  1 sibling, 0 replies; 131+ messages in thread
From: Dongwon Kim @ 2018-04-16 19:29 UTC (permalink / raw)
  To: Oleksandr Andrushchenko
  Cc: jgross, Artem Mygaiev, airlied, gustavo, Oleksandr Andrushchenko,
	linux-kernel, dri-devel, seanpaul, Potrola, MateuszX,
	daniel.vetter, xen-devel, boris.ostrovsky, Matt Roper

Yeah, I definitely agree on the idea of expanding the use case to the
general domain where dmabuf sharing is used. However, what you are
targeting with the proposed changes is identical to the core design of
hyper_dmabuf.

On top of these basic functionalities, hyper_dmabuf has driver-level
inter-domain communication, which is needed for dma-buf remote tracking
(no fence forwarding though), event triggering and event handling, extra
metadata exchange, and a hyper_dmabuf_id that represents the grefs
(grefs are shared implicitly at the driver level).
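
To make that difference concrete, here is a rough sketch of the two
interface styles being compared (purely illustrative; neither of these
is the actual hyper_dmabuf or xen-zcopy UAPI, the names are made up):

/* Illustrative only: an opaque-id handle vs. an explicit gref list. */
#include <stdint.h>

/* hyper_dmabuf style: user space only sees an opaque id, the grant
 * references stay inside the driver and are exchanged over its own
 * inter-domain communication channel */
struct example_hyper_dmabuf_handle {
	uint64_t id;
};

/* xen-zcopy style: the grant references themselves are the interface */
struct example_gref_list {
	uint32_t otherend_domid;	/* domain the buffer is shared with */
	uint32_t num_refs;		/* number of entries in refs[] */
	uint32_t refs[];		/* one grant reference per page */
};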

Also, it is designed with a frontend (common core framework) + backend
(hypervisor-specific communication and memory sharing) structure for
portability. We just can't limit this feature to Xen, because we want to
use the same uapis not only for Xen but also for other applicable
hypervisors, like ACRN.

So I am wondering whether we can start with this hyper_dmabuf, then
modify it for your use-case if needed, and polish and fix any glitches
if we want to use this for all general dma-buf use-cases.

Also, I still have one unresolved question regarding the export/import flow
in both of hyper_dmabuf and xen-zcopy.

@danvet: Would this flow (guest1->import existing dmabuf->share underlying
pages->guest2->map shared pages->create/export dmabuf) be acceptable now?

Regards,
DW
 
On Mon, Apr 16, 2018 at 05:33:46PM +0300, Oleksandr Andrushchenko wrote:
> Hello, all!
> 
> After discussing xen-zcopy and hyper-dmabuf [1] approaches
> 
> it seems that xen-zcopy can be made not depend on DRM core any more
> 
> and be dma-buf centric (which it in fact is).
> 
> The DRM code was mostly there for dma-buf's FD import/export
> 
> with DRM PRIME UAPI and with DRM use-cases in mind, but it comes out that if
> 
> the proposed 2 IOCTLs (DRM_XEN_ZCOPY_DUMB_FROM_REFS and
> DRM_XEN_ZCOPY_DUMB_TO_REFS)
> 
> are extended to also provide a file descriptor of the corresponding dma-buf,
> then
> 
> PRIME stuff in the driver is not needed anymore.
> 
> That being said, xen-zcopy can safely be detached from DRM and moved from
> 
> drivers/gpu/drm/xen into drivers/xen/dma-buf-backend(?).
> 
> This driver then becomes a universal way to turn any shared buffer between
> Dom0/DomD
> 
> and DomU(s) into a dma-buf, e.g. one can create a dma-buf from any grant
> references
> 
> or represent a dma-buf as grant-references for export.
> 
> This way the driver can be used not only for DRM use-cases, but also for
> other
> 
> use-cases which may require zero copying between domains.
> 
> For example, the use-cases we are about to work in the nearest future will
> use
> 
> V4L, e.g. we plan to support cameras, codecs etc. and all these will benefit
> 
> from zero copying much. Potentially, even block/net devices may benefit,
> 
> but this needs some evaluation.
> 
> 
> I would love to hear comments for authors of the hyper-dmabuf
> 
> and Xen community, as well as DRI-Devel and other interested parties.
> 
> 
> Thank you,
> 
> Oleksandr
> 
> 
> On 03/29/2018 04:19 PM, Oleksandr Andrushchenko wrote:
> >From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> >
> >Hello!
> >
> >When using Xen PV DRM frontend driver then on backend side one will need
> >to do copying of display buffers' contents (filled by the
> >frontend's user-space) into buffers allocated at the backend side.
> >Taking into account the size of display buffers and frames per seconds
> >it may result in unneeded huge data bus occupation and performance loss.
> >
> >This helper driver allows implementing zero-copying use-cases
> >when using Xen para-virtualized frontend display driver by
> >implementing a DRM/KMS helper driver running on backend's side.
> >It utilizes PRIME buffers API to share frontend's buffers with
> >physical device drivers on backend's side:
> >
> >  - a dumb buffer created on backend's side can be shared
> >    with the Xen PV frontend driver, so it directly writes
> >    into backend's domain memory (into the buffer exported from
> >    DRM/KMS driver of a physical display device)
> >  - a dumb buffer allocated by the frontend can be imported
> >    into physical device DRM/KMS driver, thus allowing to
> >    achieve no copying as well
> >
> >For that reason number of IOCTLs are introduced:
> >  -  DRM_XEN_ZCOPY_DUMB_FROM_REFS
> >     This will create a DRM dumb buffer from grant references provided
> >     by the frontend
> >  - DRM_XEN_ZCOPY_DUMB_TO_REFS
> >    This will grant references to a dumb/display buffer's memory provided
> >    by the backend
> >  - DRM_XEN_ZCOPY_DUMB_WAIT_FREE
> >    This will block until the dumb buffer with the wait handle provided
> >    be freed
> >
> >With this helper driver I was able to drop CPU usage from 17% to 3%
> >on Renesas R-Car M3 board.
> >
> >This was tested with Renesas' Wayland-KMS and backend running as DRM master.
> >
> >Thank you,
> >Oleksandr
> >
> >Oleksandr Andrushchenko (1):
> >   drm/xen-zcopy: Add Xen zero-copy helper DRM driver
> >
> >  Documentation/gpu/drivers.rst               |   1 +
> >  Documentation/gpu/xen-zcopy.rst             |  32 +
> >  drivers/gpu/drm/xen/Kconfig                 |  25 +
> >  drivers/gpu/drm/xen/Makefile                |   5 +
> >  drivers/gpu/drm/xen/xen_drm_zcopy.c         | 880 ++++++++++++++++++++++++++++
> >  drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c | 154 +++++
> >  drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h |  38 ++
> >  include/uapi/drm/xen_zcopy_drm.h            | 129 ++++
> >  8 files changed, 1264 insertions(+)
> >  create mode 100644 Documentation/gpu/xen-zcopy.rst
> >  create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy.c
> >  create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c
> >  create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h
> >  create mode 100644 include/uapi/drm/xen_zcopy_drm.h
> >
> [1]
> https://lists.xenproject.org/archives/html/xen-devel/2018-02/msg01202.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-16 19:29     ` Dongwon Kim
@ 2018-04-17  7:59       ` Daniel Vetter
  -1 siblings, 0 replies; 131+ messages in thread
From: Daniel Vetter @ 2018-04-17  7:59 UTC (permalink / raw)
  To: Dongwon Kim
  Cc: Oleksandr Andrushchenko, jgross, Artem Mygaiev, konrad.wilk,
	airlied, Oleksandr Andrushchenko, linux-kernel, dri-devel,
	Potrola, MateuszX, daniel.vetter, xen-devel, boris.ostrovsky

On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
> Yeah, I definitely agree on the idea of expanding the use case to the 
> general domain where dmabuf sharing is used. However, what you are
> targetting with proposed changes is identical to the core design of
> hyper_dmabuf.
> 
> On top of this basic functionalities, hyper_dmabuf has driver level
> inter-domain communication, that is needed for dma-buf remote tracking
> (no fence forwarding though), event triggering and event handling, extra
> meta data exchange and hyper_dmabuf_id that represents grefs
> (grefs are shared implicitly on driver level)

This really isn't a positive design aspect of hyperdmabuf imo. The core
code in xen-zcopy (ignoring the ioctl side, which will be cleaned up) is
very simple & clean.

If there's a clear need later on we can extend that. But for now xen-zcopy
seems to cover the basic use-case needs, so gets the job done.

> Also it is designed with frontend (common core framework) + backend
> (hyper visor specific comm and memory sharing) structure for portability.
> We just can't limit this feature to Xen because we want to use the same
> uapis not only for Xen but also other applicable hypervisor, like ACORN.

See the discussion around udmabuf and the needs for kvm. I think trying to
make an ioctl/uapi that works for multiple hypervisors is misguided - it
likely won't work.

On top of that the 2nd hypervisor you're aiming to support is ACRN. That's
not even upstream yet, nor have I seen any patches proposing to land linux
support for ACRN. Since it's not upstream, it doesn't really matter for
upstream consideration. I'm doubting that ACRN will use the same grant
references as xen, so the same uapi won't work on ACRN as on Xen anyway.

> So I am wondering we can start with this hyper_dmabuf then modify it for
> your use-case if needed and polish and fix any glitches if we want to 
> to use this for all general dma-buf usecases.

Imo xen-zcopy is a much more reasonable starting point for upstream, which
can then be extended (if really proven to be necessary).

> Also, I still have one unresolved question regarding the export/import flow
> in both of hyper_dmabuf and xen-zcopy.
> 
> @danvet: Would this flow (guest1->import existing dmabuf->share underlying
> pages->guest2->map shared pages->create/export dmabuf) be acceptable now?

I think if you just look at the pages, and make sure you handle the
sg_page == NULL case it's ok-ish. It's not great, but mostly it should
work. The real trouble with hyperdmabuf was the forwarding of all these
calls, instead of just passing around a list of grant references.
-Daniel
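
A minimal sketch of the kind of page walk meant above, assuming a
backend that imports a dma-buf and grants its pages to another domain
(the function and parameter names are invented for illustration, and
the teardown is shortened):

/* Illustrative sketch only -- not taken from xen-zcopy or hyper_dmabuf. */
#include <linux/device.h>
#include <linux/dma-buf.h>
#include <linux/dma-direction.h>
#include <linux/err.h>
#include <linux/scatterlist.h>
#include <xen/grant_table.h>
#include <xen/page.h>

static int example_grant_dmabuf_pages(struct device *dev, int fd,
				       domid_t otherend, grant_ref_t *refs,
				       unsigned int max_refs)
{
	struct dma_buf *dmabuf;
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;
	struct scatterlist *sg;
	unsigned int i, n = 0;
	int ret = 0, gref;

	dmabuf = dma_buf_get(fd);
	if (IS_ERR(dmabuf))
		return PTR_ERR(dmabuf);

	attach = dma_buf_attach(dmabuf, dev);
	if (IS_ERR(attach)) {
		ret = PTR_ERR(attach);
		goto put;
	}

	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		ret = PTR_ERR(sgt);
		goto detach;
	}

	for_each_sg(sgt->sgl, sg, sgt->nents, i) {
		struct page *page = sg_page(sg);

		/* the sg_page == NULL case: the exporter has no struct
		 * page backing (e.g. carved-out VRAM), so this scheme
		 * cannot work and we have to bail out */
		if (!page) {
			ret = -EINVAL;
			break;
		}

		/* grant one page per entry; a real version must walk
		 * every page of the entry, as sg->length may span more
		 * than one page */
		if (n >= max_refs) {
			ret = -ENOSPC;
			break;
		}
		gref = gnttab_grant_foreign_access(otherend,
						   xen_page_to_gfn(page), 0);
		if (gref < 0) {
			ret = gref;
			break;
		}
		refs[n++] = gref;
	}

	/* a real driver keeps the attachment, mapping and dma_buf
	 * reference alive while the grants are in use; everything is
	 * torn down here only to keep the sketch short */
	dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
detach:
	dma_buf_detach(dmabuf, attach);
put:
	dma_buf_put(dmabuf);
	return ret ? ret : n;
}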

> 
> Regards,
> DW
>  
> On Mon, Apr 16, 2018 at 05:33:46PM +0300, Oleksandr Andrushchenko wrote:
> > Hello, all!
> > 
> > After discussing xen-zcopy and hyper-dmabuf [1] approaches
> > 
> > it seems that xen-zcopy can be made not depend on DRM core any more
> > 
> > and be dma-buf centric (which it in fact is).
> > 
> > The DRM code was mostly there for dma-buf's FD import/export
> > 
> > with DRM PRIME UAPI and with DRM use-cases in mind, but it comes out that if
> > 
> > the proposed 2 IOCTLs (DRM_XEN_ZCOPY_DUMB_FROM_REFS and
> > DRM_XEN_ZCOPY_DUMB_TO_REFS)
> > 
> > are extended to also provide a file descriptor of the corresponding dma-buf,
> > then
> > 
> > PRIME stuff in the driver is not needed anymore.
> > 
> > That being said, xen-zcopy can safely be detached from DRM and moved from
> > 
> > drivers/gpu/drm/xen into drivers/xen/dma-buf-backend(?).
> > 
> > This driver then becomes a universal way to turn any shared buffer between
> > Dom0/DomD
> > 
> > and DomU(s) into a dma-buf, e.g. one can create a dma-buf from any grant
> > references
> > 
> > or represent a dma-buf as grant-references for export.
> > 
> > This way the driver can be used not only for DRM use-cases, but also for
> > other
> > 
> > use-cases which may require zero copying between domains.
> > 
> > For example, the use-cases we are about to work in the nearest future will
> > use
> > 
> > V4L, e.g. we plan to support cameras, codecs etc. and all these will benefit
> > 
> > from zero copying much. Potentially, even block/net devices may benefit,
> > 
> > but this needs some evaluation.
> > 
> > 
> > I would love to hear comments for authors of the hyper-dmabuf
> > 
> > and Xen community, as well as DRI-Devel and other interested parties.
> > 
> > 
> > Thank you,
> > 
> > Oleksandr
> > 
> > 
> > On 03/29/2018 04:19 PM, Oleksandr Andrushchenko wrote:
> > >From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> > >
> > >Hello!
> > >
> > >When using Xen PV DRM frontend driver then on backend side one will need
> > >to do copying of display buffers' contents (filled by the
> > >frontend's user-space) into buffers allocated at the backend side.
> > >Taking into account the size of display buffers and frames per seconds
> > >it may result in unneeded huge data bus occupation and performance loss.
> > >
> > >This helper driver allows implementing zero-copying use-cases
> > >when using Xen para-virtualized frontend display driver by
> > >implementing a DRM/KMS helper driver running on backend's side.
> > >It utilizes PRIME buffers API to share frontend's buffers with
> > >physical device drivers on backend's side:
> > >
> > >  - a dumb buffer created on backend's side can be shared
> > >    with the Xen PV frontend driver, so it directly writes
> > >    into backend's domain memory (into the buffer exported from
> > >    DRM/KMS driver of a physical display device)
> > >  - a dumb buffer allocated by the frontend can be imported
> > >    into physical device DRM/KMS driver, thus allowing to
> > >    achieve no copying as well
> > >
> > >For that reason number of IOCTLs are introduced:
> > >  -  DRM_XEN_ZCOPY_DUMB_FROM_REFS
> > >     This will create a DRM dumb buffer from grant references provided
> > >     by the frontend
> > >  - DRM_XEN_ZCOPY_DUMB_TO_REFS
> > >    This will grant references to a dumb/display buffer's memory provided
> > >    by the backend
> > >  - DRM_XEN_ZCOPY_DUMB_WAIT_FREE
> > >    This will block until the dumb buffer with the wait handle provided
> > >    be freed
> > >
> > >With this helper driver I was able to drop CPU usage from 17% to 3%
> > >on Renesas R-Car M3 board.
> > >
> > >This was tested with Renesas' Wayland-KMS and backend running as DRM master.
> > >
> > >Thank you,
> > >Oleksandr
> > >
> > >Oleksandr Andrushchenko (1):
> > >   drm/xen-zcopy: Add Xen zero-copy helper DRM driver
> > >
> > >  Documentation/gpu/drivers.rst               |   1 +
> > >  Documentation/gpu/xen-zcopy.rst             |  32 +
> > >  drivers/gpu/drm/xen/Kconfig                 |  25 +
> > >  drivers/gpu/drm/xen/Makefile                |   5 +
> > >  drivers/gpu/drm/xen/xen_drm_zcopy.c         | 880 ++++++++++++++++++++++++++++
> > >  drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c | 154 +++++
> > >  drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h |  38 ++
> > >  include/uapi/drm/xen_zcopy_drm.h            | 129 ++++
> > >  8 files changed, 1264 insertions(+)
> > >  create mode 100644 Documentation/gpu/xen-zcopy.rst
> > >  create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy.c
> > >  create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c
> > >  create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h
> > >  create mode 100644 include/uapi/drm/xen_zcopy_drm.h
> > >
> > [1]
> > https://lists.xenproject.org/archives/html/xen-devel/2018-02/msg01202.html
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-17  7:59       ` Daniel Vetter
@ 2018-04-17  8:19         ` Oleksandr Andrushchenko
  -1 siblings, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-17  8:19 UTC (permalink / raw)
  To: Dongwon Kim, jgross, Artem Mygaiev, konrad.wilk, airlied,
	Oleksandr Andrushchenko, linux-kernel, dri-devel, Potrola,
	MateuszX, daniel.vetter, xen-devel, boris.ostrovsky

On 04/17/2018 10:59 AM, Daniel Vetter wrote:
> On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
>> Yeah, I definitely agree on the idea of expanding the use case to the
>> general domain where dmabuf sharing is used. However, what you are
>> targetting with proposed changes is identical to the core design of
>> hyper_dmabuf.
>>
>> On top of this basic functionalities, hyper_dmabuf has driver level
>> inter-domain communication, that is needed for dma-buf remote tracking
>> (no fence forwarding though), event triggering and event handling, extra
>> meta data exchange and hyper_dmabuf_id that represents grefs
>> (grefs are shared implicitly on driver level)
> This really isn't a positive design aspect of hyperdmabuf imo. The core
> code in xen-zcopy (ignoring the ioctl side, which will be cleaned up) is
> very simple & clean.
>
> If there's a clear need later on we can extend that. But for now xen-zcopy
> seems to cover the basic use-case needs, so gets the job done.
Now that we have decided to remove the DRM PRIME code from the zcopy
driver, I think we can extend the existing Xen drivers instead of
introducing a new one:
 - gntdev [1], [2] - to handle export/import of dma-bufs to/from grefs
   (rough sketch below)
 - balloon [3] - to allow allocating CMA buffers
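
To make the gntdev part a bit more concrete, a hypothetical extension
(this is not part of the current include/uapi/xen/gntdev.h [1]; the
structure layout and ioctl number are made up for the sake of
discussion) could look roughly like:

/* Hypothetical gntdev extension, names and numbers invented */
#include <linux/types.h>
#include <linux/ioctl.h>

struct ioctl_gntdev_dmabuf_exp_from_refs {
	/* IN: domain that granted the pages */
	__u32 domid;
	/* IN: number of grant references in refs[] */
	__u32 count;
	/* OUT: file descriptor of the exported dma-buf */
	__u32 fd;
	/* IN: reserved, must be zero */
	__u32 reserved;
	/* IN: grant references of the buffer pages */
	__u32 refs[];
};

#define IOCTL_GNTDEV_DMABUF_EXP_FROM_REFS \
	_IOC(_IOC_NONE, 'G', 9, \
	     sizeof(struct ioctl_gntdev_dmabuf_exp_from_refs))

A matching import counterpart would cover the opposite direction, and
the balloon side would only need a way to allocate the backing pages
from CMA before they are granted.
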
>> Also it is designed with frontend (common core framework) + backend
>> (hyper visor specific comm and memory sharing) structure for portability.
>> We just can't limit this feature to Xen because we want to use the same
>> uapis not only for Xen but also other applicable hypervisor, like ACORN.
> See the discussion around udmabuf and the needs for kvm. I think trying to
> make an ioctl/uapi that works for multiple hypervisors is misguided - it
> likely won't work.
>
> On top of that the 2nd hypervisor you're aiming to support is ACRN. That's
> not even upstream yet, nor have I seen any patches proposing to land linux
> support for ACRN. Since it's not upstream, it doesn't really matter for
> upstream consideration. I'm doubting that ACRN will use the same grant
> references as xen, so the same uapi won't work on ACRN as on Xen anyway.
>
>> So I am wondering we can start with this hyper_dmabuf then modify it for
>> your use-case if needed and polish and fix any glitches if we want to
>> to use this for all general dma-buf usecases.
> Imo xen-zcopy is a much more reasonable starting point for upstream, which
> can then be extended (if really proven to be necessary).
>
>> Also, I still have one unresolved question regarding the export/import flow
>> in both of hyper_dmabuf and xen-zcopy.
>>
>> @danvet: Would this flow (guest1->import existing dmabuf->share underlying
>> pages->guest2->map shared pages->create/export dmabuf) be acceptable now?
> I think if you just look at the pages, and make sure you handle the
> sg_page == NULL case it's ok-ish. It's not great, but mostly it should
> work. The real trouble with hyperdmabuf was the forwarding of all these
> calls, instead of just passing around a list of grant references.
> -Daniel
>
>> Regards,
>> DW
>>   
>> On Mon, Apr 16, 2018 at 05:33:46PM +0300, Oleksandr Andrushchenko wrote:
>>> Hello, all!
>>>
>>> After discussing xen-zcopy and hyper-dmabuf [1] approaches
Even more context for the discussion can be found at [4], so the Xen
community can catch up.
>>> it seems that xen-zcopy can be made not depend on DRM core any more
>>>
>>> and be dma-buf centric (which it in fact is).
>>>
>>> The DRM code was mostly there for dma-buf's FD import/export
>>>
>>> with DRM PRIME UAPI and with DRM use-cases in mind, but it comes out that if
>>>
>>> the proposed 2 IOCTLs (DRM_XEN_ZCOPY_DUMB_FROM_REFS and
>>> DRM_XEN_ZCOPY_DUMB_TO_REFS)
>>>
>>> are extended to also provide a file descriptor of the corresponding dma-buf,
>>> then
>>>
>>> PRIME stuff in the driver is not needed anymore.
>>>
>>> That being said, xen-zcopy can safely be detached from DRM and moved from
>>>
>>> drivers/gpu/drm/xen into drivers/xen/dma-buf-backend(?).
>>>
>>> This driver then becomes a universal way to turn any shared buffer between
>>> Dom0/DomD
>>>
>>> and DomU(s) into a dma-buf, e.g. one can create a dma-buf from any grant
>>> references
>>>
>>> or represent a dma-buf as grant-references for export.
>>>
>>> This way the driver can be used not only for DRM use-cases, but also for
>>> other
>>>
>>> use-cases which may require zero copying between domains.
>>>
>>> For example, the use-cases we are about to work in the nearest future will
>>> use
>>>
>>> V4L, e.g. we plan to support cameras, codecs etc. and all these will benefit
>>>
>>> from zero copying much. Potentially, even block/net devices may benefit,
>>>
>>> but this needs some evaluation.
>>>
>>>
>>> I would love to hear comments for authors of the hyper-dmabuf
>>>
>>> and Xen community, as well as DRI-Devel and other interested parties.
>>>
>>>
>>> Thank you,
>>>
>>> Oleksandr
>>>
>>>
>>> On 03/29/2018 04:19 PM, Oleksandr Andrushchenko wrote:
>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>
>>>> Hello!
>>>>
>>>> When using Xen PV DRM frontend driver then on backend side one will need
>>>> to do copying of display buffers' contents (filled by the
>>>> frontend's user-space) into buffers allocated at the backend side.
>>>> Taking into account the size of display buffers and frames per seconds
>>>> it may result in unneeded huge data bus occupation and performance loss.
>>>>
>>>> This helper driver allows implementing zero-copying use-cases
>>>> when using Xen para-virtualized frontend display driver by
>>>> implementing a DRM/KMS helper driver running on backend's side.
>>>> It utilizes PRIME buffers API to share frontend's buffers with
>>>> physical device drivers on backend's side:
>>>>
>>>>   - a dumb buffer created on backend's side can be shared
>>>>     with the Xen PV frontend driver, so it directly writes
>>>>     into backend's domain memory (into the buffer exported from
>>>>     DRM/KMS driver of a physical display device)
>>>>   - a dumb buffer allocated by the frontend can be imported
>>>>     into physical device DRM/KMS driver, thus allowing to
>>>>     achieve no copying as well
>>>>
>>>> For that reason number of IOCTLs are introduced:
>>>>   -  DRM_XEN_ZCOPY_DUMB_FROM_REFS
>>>>      This will create a DRM dumb buffer from grant references provided
>>>>      by the frontend
>>>>   - DRM_XEN_ZCOPY_DUMB_TO_REFS
>>>>     This will grant references to a dumb/display buffer's memory provided
>>>>     by the backend
>>>>   - DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>     This will block until the dumb buffer with the wait handle provided
>>>>     be freed
>>>>
>>>> With this helper driver I was able to drop CPU usage from 17% to 3%
>>>> on Renesas R-Car M3 board.
>>>>
>>>> This was tested with Renesas' Wayland-KMS and backend running as DRM master.
>>>>
>>>> Thank you,
>>>> Oleksandr
>>>>
>>>> Oleksandr Andrushchenko (1):
>>>>    drm/xen-zcopy: Add Xen zero-copy helper DRM driver
>>>>
>>>>   Documentation/gpu/drivers.rst               |   1 +
>>>>   Documentation/gpu/xen-zcopy.rst             |  32 +
>>>>   drivers/gpu/drm/xen/Kconfig                 |  25 +
>>>>   drivers/gpu/drm/xen/Makefile                |   5 +
>>>>   drivers/gpu/drm/xen/xen_drm_zcopy.c         | 880 ++++++++++++++++++++++++++++
>>>>   drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c | 154 +++++
>>>>   drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h |  38 ++
>>>>   include/uapi/drm/xen_zcopy_drm.h            | 129 ++++
>>>>   8 files changed, 1264 insertions(+)
>>>>   create mode 100644 Documentation/gpu/xen-zcopy.rst
>>>>   create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy.c
>>>>   create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c
>>>>   create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h
>>>>   create mode 100644 include/uapi/drm/xen_zcopy_drm.h
>>>>
>>> [1]
>>> https://lists.xenproject.org/archives/html/xen-devel/2018-02/msg01202.html
>> _______________________________________________
>> dri-devel mailing list
>> dri-devel@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
[1] https://elixir.bootlin.com/linux/v4.17-rc1/source/include/uapi/xen/gntdev.h
[2] https://elixir.bootlin.com/linux/v4.17-rc1/source/drivers/xen/gntdev.c
[3] https://elixir.bootlin.com/linux/v4.17-rc1/source/drivers/xen/balloon.c
[4] https://lkml.org/lkml/2018/4/16/355

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-17  7:59       ` Daniel Vetter
                         ` (3 preceding siblings ...)
  (?)
@ 2018-04-17 20:57       ` Dongwon Kim
  2018-04-18  6:38         ` Oleksandr Andrushchenko
  2018-04-18  6:38         ` Oleksandr Andrushchenko
  -1 siblings, 2 replies; 131+ messages in thread
From: Dongwon Kim @ 2018-04-17 20:57 UTC (permalink / raw)
  To: Oleksandr Andrushchenko, jgross, Artem Mygaiev, konrad.wilk,
	airlied, Oleksandr Andrushchenko, linux-kernel, dri-devel,
	Potrola, MateuszX, daniel.vetter, xen-devel, boris.ostrovsky

On Tue, Apr 17, 2018 at 09:59:28AM +0200, Daniel Vetter wrote:
> On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
> > Yeah, I definitely agree on the idea of expanding the use case to the 
> > general domain where dmabuf sharing is used. However, what you are
> > targetting with proposed changes is identical to the core design of
> > hyper_dmabuf.
> > 
> > On top of this basic functionalities, hyper_dmabuf has driver level
> > inter-domain communication, that is needed for dma-buf remote tracking
> > (no fence forwarding though), event triggering and event handling, extra
> > meta data exchange and hyper_dmabuf_id that represents grefs
> > (grefs are shared implicitly on driver level)
> 
> This really isn't a positive design aspect of hyperdmabuf imo. The core
> code in xen-zcopy (ignoring the ioctl side, which will be cleaned up) is
> very simple & clean.
> 
> If there's a clear need later on we can extend that. But for now xen-zcopy
> seems to cover the basic use-case needs, so gets the job done.
> 
> > Also it is designed with frontend (common core framework) + backend
> > (hyper visor specific comm and memory sharing) structure for portability.
> > We just can't limit this feature to Xen because we want to use the same
> > uapis not only for Xen but also other applicable hypervisor, like ACORN.
> 
> See the discussion around udmabuf and the needs for kvm. I think trying to
> make an ioctl/uapi that works for multiple hypervisors is misguided - it
> likely won't work.
> 
> On top of that the 2nd hypervisor you're aiming to support is ACRN. That's
> not even upstream yet, nor have I seen any patches proposing to land linux
> support for ACRN. Since it's not upstream, it doesn't really matter for
> upstream consideration. I'm doubting that ACRN will use the same grant
> references as xen, so the same uapi won't work on ACRN as on Xen anyway.

Yeah, ACRN doesn't have a grant table; only Xen supports it. But that is why
hyper_dmabuf has been architected around the concept of a backend.
If you look at the backend structure, you will find that a backend is just a
set of standard function calls, as shown here:

struct hyper_dmabuf_bknd_ops {
        /* backend initialization routine (optional) */
        int (*init)(void);

        /* backend cleanup routine (optional) */
        int (*cleanup)(void);

        /* retrieving id of current virtual machine */
        int (*get_vm_id)(void);

        /* get pages shared via hypervisor-specific method */
        int (*share_pages)(struct page **pages, int vm_id,
                           int nents, void **refs_info);

        /* make shared pages unshared via hypervisor specific method */
        int (*unshare_pages)(void **refs_info, int nents);

        /* map remotely shared pages on importer's side via
         * hypervisor-specific method
         */
        struct page ** (*map_shared_pages)(unsigned long ref, int vm_id,
                                           int nents, void **refs_info);

        /* unmap and free shared pages on importer's side via
         * hypervisor-specific method
         */
        int (*unmap_shared_pages)(void **refs_info, int nents);

        /* initialize communication environment */
        int (*init_comm_env)(void);

        void (*destroy_comm)(void);

        /* upstream ch setup (receiving and responding) */
        int (*init_rx_ch)(int vm_id);

        /* downstream ch setup (transmitting and parsing responses) */
        int (*init_tx_ch)(int vm_id);

        int (*send_req)(int vm_id, struct hyper_dmabuf_req *req, int wait);
};

All of these can be mapped to any hypervisor-specific implementation.
We designed the backend implementation for Xen using grant tables, Xen events
and ring-buffer communication. For ACRN, we have another backend that uses
virtio for both memory sharing and communication.

We tried to define this backend structure to be general enough (and it can be
modified or extended to support more cases) so that it fits other hypervisors.
The only requirements/expectations on the hypervisor are page-level memory
sharing and inter-domain communication, which I think are standard features of
any modern hypervisor.
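
To make this concrete, here is a minimal sketch of what a Xen flavour of
these ops could look like on top of the stock grant-table API. This is
illustrative only, not the actual hyper_dmabuf Xen backend: error handling
is omitted and only two of the ops are shown.

#include <linux/slab.h>
#include <xen/grant_table.h>
#include <xen/page.h>

/* Illustrative sketch only: share/unshare pages via Xen grant references. */
static int xen_be_share_pages(struct page **pages, int vm_id,
                              int nents, void **refs_info)
{
        grant_ref_t *refs;
        int i;

        refs = kcalloc(nents, sizeof(*refs), GFP_KERNEL);
        if (!refs)
                return -ENOMEM;

        /* one writable grant reference per page for the importing domain */
        for (i = 0; i < nents; i++)
                refs[i] = gnttab_grant_foreign_access(vm_id,
                                xen_page_to_gfn(pages[i]), 0 /* read-write */);

        *refs_info = refs;
        return 0;
}

static int xen_be_unshare_pages(void **refs_info, int nents)
{
        grant_ref_t *refs = *refs_info;
        int i;

        for (i = 0; i < nents; i++)
                gnttab_end_foreign_access(refs[i], 0, 0UL);

        kfree(refs);
        *refs_info = NULL;
        return 0;
}

static const struct hyper_dmabuf_bknd_ops xen_bknd_ops = {
        .share_pages   = xen_be_share_pages,
        .unshare_pages = xen_be_unshare_pages,
        /* .map_shared_pages, .init_comm_env, ... would follow the same idea */
};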

And please review the common UAPIs that hyper_dmabuf and xen-zcopy support.
They are very general. One takes an FD (dma-buf) and gets the underlying pages
shared; the other generates a dma-buf from a global handle (a secure handle
hiding the grefs behind it). On top of this, hyper_dmabuf has "unshare" and
"query", which are also useful in any case.

So I don't see why we wouldn't want to try to make these standard across most
hypervisors instead of limiting them to a certain hypervisor like Xen. A
frontend-backend structure is optimal for this, I think.

> 
> > So I am wondering we can start with this hyper_dmabuf then modify it for
> > your use-case if needed and polish and fix any glitches if we want to 
> > to use this for all general dma-buf usecases.
> 
> Imo xen-zcopy is a much more reasonable starting point for upstream, which
> can then be extended (if really proven to be necessary).
> 
> > Also, I still have one unresolved question regarding the export/import flow
> > in both of hyper_dmabuf and xen-zcopy.
> > 
> > @danvet: Would this flow (guest1->import existing dmabuf->share underlying
> > pages->guest2->map shared pages->create/export dmabuf) be acceptable now?
> 
> I think if you just look at the pages, and make sure you handle the
> sg_page == NULL case it's ok-ish. It's not great, but mostly it should
> work. The real trouble with hyperdmabuf was the forwarding of all these
> calls, instead of just passing around a list of grant references.

I talked to danvet about this a little bit.

I think there was some misunderstanding about this "forwarding". The export
and import flows in hyper_dmabuf are basically the same as xen-zcopy's. I
think what caused the confusion is that the importing domain notifies the
exporting domain about dma-buf operations (attach, map, detach and release)
so that the exporting domain can track the usage of the dma-buf on the
importing side.

I designed this for some basic tracking. We may not need to notify for every
single activity, but if none of them is there, the exporting domain can't
determine whether it is OK to unshare the buffer, and the originator (like
i915) could free the object even while it's still being accessed in the
importing domain.
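
Just to illustrate the idea (this is not the actual hyper_dmabuf code; the
opcode, the request field and the bookkeeping struct below are invented for
the example), the importer-side dma-buf release hook could notify the
exporter via the send_req/unmap_shared_pages ops shown above:

#include <linux/types.h>

/* Invented bookkeeping for this sketch; the real driver keeps more state. */
struct imported_buf {
        int exporter_vm_id;     /* VM that shared the pages */
        int nents;              /* number of shared pages */
        void *refs_info;        /* opaque data from map_shared_pages() */
        u64 hid;                /* hyper_dmabuf_id of the buffer */
};

/* Called from the importer's dma_buf_ops.release hook. */
static void notify_exporter_on_release(struct hyper_dmabuf_bknd_ops *ops,
                                       struct imported_buf *buf)
{
        struct hyper_dmabuf_req req = {
                .opcode = HYPER_DMABUF_OPS_RELEASE,     /* hypothetical opcode */
                .hid    = buf->hid,                     /* hypothetical field */
        };

        /* tell the exporting domain this importer no longer uses the buffer */
        ops->send_req(buf->exporter_vm_id, &req, 0 /* don't wait */);

        /* drop the local mappings set up via map_shared_pages() */
        ops->unmap_shared_pages(&buf->refs_info, buf->nents);
}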

Anyway, I really hope we can have enough discussion and resolve all concerns
before nailing this down.

> -Daniel
> 
> > 
> > Regards,
> > DW
> >  
> > On Mon, Apr 16, 2018 at 05:33:46PM +0300, Oleksandr Andrushchenko wrote:
> > > Hello, all!
> > > 
> > > After discussing xen-zcopy and hyper-dmabuf [1] approaches
> > > 
> > > it seems that xen-zcopy can be made not depend on DRM core any more
> > > 
> > > and be dma-buf centric (which it in fact is).
> > > 
> > > The DRM code was mostly there for dma-buf's FD import/export
> > > 
> > > with DRM PRIME UAPI and with DRM use-cases in mind, but it comes out that if
> > > 
> > > the proposed 2 IOCTLs (DRM_XEN_ZCOPY_DUMB_FROM_REFS and
> > > DRM_XEN_ZCOPY_DUMB_TO_REFS)
> > > 
> > > are extended to also provide a file descriptor of the corresponding dma-buf,
> > > then
> > > 
> > > PRIME stuff in the driver is not needed anymore.
> > > 
> > > That being said, xen-zcopy can safely be detached from DRM and moved from
> > > 
> > > drivers/gpu/drm/xen into drivers/xen/dma-buf-backend(?).
> > > 
> > > This driver then becomes a universal way to turn any shared buffer between
> > > Dom0/DomD
> > > 
> > > and DomU(s) into a dma-buf, e.g. one can create a dma-buf from any grant
> > > references
> > > 
> > > or represent a dma-buf as grant-references for export.
> > > 
> > > This way the driver can be used not only for DRM use-cases, but also for
> > > other
> > > 
> > > use-cases which may require zero copying between domains.
> > > 
> > > For example, the use-cases we are about to work in the nearest future will
> > > use
> > > 
> > > V4L, e.g. we plan to support cameras, codecs etc. and all these will benefit
> > > 
> > > from zero copying much. Potentially, even block/net devices may benefit,
> > > 
> > > but this needs some evaluation.
> > > 
> > > 
> > > I would love to hear comments for authors of the hyper-dmabuf
> > > 
> > > and Xen community, as well as DRI-Devel and other interested parties.
> > > 
> > > 
> > > Thank you,
> > > 
> > > Oleksandr
> > > 
> > > 
> > > On 03/29/2018 04:19 PM, Oleksandr Andrushchenko wrote:
> > > >From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> > > >
> > > >Hello!
> > > >
> > > >When using Xen PV DRM frontend driver then on backend side one will need
> > > >to do copying of display buffers' contents (filled by the
> > > >frontend's user-space) into buffers allocated at the backend side.
> > > >Taking into account the size of display buffers and frames per seconds
> > > >it may result in unneeded huge data bus occupation and performance loss.
> > > >
> > > >This helper driver allows implementing zero-copying use-cases
> > > >when using Xen para-virtualized frontend display driver by
> > > >implementing a DRM/KMS helper driver running on backend's side.
> > > >It utilizes PRIME buffers API to share frontend's buffers with
> > > >physical device drivers on backend's side:
> > > >
> > > >  - a dumb buffer created on backend's side can be shared
> > > >    with the Xen PV frontend driver, so it directly writes
> > > >    into backend's domain memory (into the buffer exported from
> > > >    DRM/KMS driver of a physical display device)
> > > >  - a dumb buffer allocated by the frontend can be imported
> > > >    into physical device DRM/KMS driver, thus allowing to
> > > >    achieve no copying as well
> > > >
> > > >For that reason number of IOCTLs are introduced:
> > > >  -  DRM_XEN_ZCOPY_DUMB_FROM_REFS
> > > >     This will create a DRM dumb buffer from grant references provided
> > > >     by the frontend
> > > >  - DRM_XEN_ZCOPY_DUMB_TO_REFS
> > > >    This will grant references to a dumb/display buffer's memory provided
> > > >    by the backend
> > > >  - DRM_XEN_ZCOPY_DUMB_WAIT_FREE
> > > >    This will block until the dumb buffer with the wait handle provided
> > > >    be freed
> > > >
> > > >With this helper driver I was able to drop CPU usage from 17% to 3%
> > > >on Renesas R-Car M3 board.
> > > >
> > > >This was tested with Renesas' Wayland-KMS and backend running as DRM master.
> > > >
> > > >Thank you,
> > > >Oleksandr
> > > >
> > > >Oleksandr Andrushchenko (1):
> > > >   drm/xen-zcopy: Add Xen zero-copy helper DRM driver
> > > >
> > > >  Documentation/gpu/drivers.rst               |   1 +
> > > >  Documentation/gpu/xen-zcopy.rst             |  32 +
> > > >  drivers/gpu/drm/xen/Kconfig                 |  25 +
> > > >  drivers/gpu/drm/xen/Makefile                |   5 +
> > > >  drivers/gpu/drm/xen/xen_drm_zcopy.c         | 880 ++++++++++++++++++++++++++++
> > > >  drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c | 154 +++++
> > > >  drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h |  38 ++
> > > >  include/uapi/drm/xen_zcopy_drm.h            | 129 ++++
> > > >  8 files changed, 1264 insertions(+)
> > > >  create mode 100644 Documentation/gpu/xen-zcopy.rst
> > > >  create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy.c
> > > >  create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c
> > > >  create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h
> > > >  create mode 100644 include/uapi/drm/xen_zcopy_drm.h
> > > >
> > > [1]
> > > https://lists.xenproject.org/archives/html/xen-devel/2018-02/msg01202.html
> > _______________________________________________
> > dri-devel mailing list
> > dri-devel@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/dri-devel
> 
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-17 20:57       ` Dongwon Kim
@ 2018-04-18  6:38         ` Oleksandr Andrushchenko
  2018-04-18  7:35           ` Roger Pau Monné
                             ` (3 more replies)
  2018-04-18  6:38         ` Oleksandr Andrushchenko
  1 sibling, 4 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-18  6:38 UTC (permalink / raw)
  To: Dongwon Kim, Oleksandr_Andrushchenko, jgross, Artem Mygaiev,
	konrad.wilk, airlied, linux-kernel, dri-devel, Potrola, MateuszX,
	daniel.vetter, xen-devel, boris.ostrovsky, Matt Roper

On 04/17/2018 11:57 PM, Dongwon Kim wrote:
> On Tue, Apr 17, 2018 at 09:59:28AM +0200, Daniel Vetter wrote:
>> On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
>>> Yeah, I definitely agree on the idea of expanding the use case to the
>>> general domain where dmabuf sharing is used. However, what you are
>>> targetting with proposed changes is identical to the core design of
>>> hyper_dmabuf.
>>>
>>> On top of this basic functionalities, hyper_dmabuf has driver level
>>> inter-domain communication, that is needed for dma-buf remote tracking
>>> (no fence forwarding though), event triggering and event handling, extra
>>> meta data exchange and hyper_dmabuf_id that represents grefs
>>> (grefs are shared implicitly on driver level)
>> This really isn't a positive design aspect of hyperdmabuf imo. The core
>> code in xen-zcopy (ignoring the ioctl side, which will be cleaned up) is
>> very simple & clean.
>>
>> If there's a clear need later on we can extend that. But for now xen-zcopy
>> seems to cover the basic use-case needs, so gets the job done.
>>
>>> Also it is designed with frontend (common core framework) + backend
>>> (hyper visor specific comm and memory sharing) structure for portability.
>>> We just can't limit this feature to Xen because we want to use the same
>>> uapis not only for Xen but also other applicable hypervisor, like ACORN.
>> See the discussion around udmabuf and the needs for kvm. I think trying to
>> make an ioctl/uapi that works for multiple hypervisors is misguided - it
>> likely won't work.
>>
>> On top of that the 2nd hypervisor you're aiming to support is ACRN. That's
>> not even upstream yet, nor have I seen any patches proposing to land linux
>> support for ACRN. Since it's not upstream, it doesn't really matter for
>> upstream consideration. I'm doubting that ACRN will use the same grant
>> references as xen, so the same uapi won't work on ACRN as on Xen anyway.
> Yeah, ACRN doesn't have grant-table. Only Xen supports it. But that is why
> hyper_dmabuf has been architectured with the concept of backend.
> If you look at the structure of backend, you will find that
> backend is just a set of standard function calls as shown here:
>
> struct hyper_dmabuf_bknd_ops {
>          /* backend initialization routine (optional) */
>          int (*init)(void);
>
>          /* backend cleanup routine (optional) */
>          int (*cleanup)(void);
>
>          /* retreiving id of current virtual machine */
>          int (*get_vm_id)(void);
>
>          /* get pages shared via hypervisor-specific method */
>          int (*share_pages)(struct page **pages, int vm_id,
>                             int nents, void **refs_info);
>
>          /* make shared pages unshared via hypervisor specific method */
>          int (*unshare_pages)(void **refs_info, int nents);
>
>          /* map remotely shared pages on importer's side via
>           * hypervisor-specific method
>           */
>          struct page ** (*map_shared_pages)(unsigned long ref, int vm_id,
>                                             int nents, void **refs_info);
>
>          /* unmap and free shared pages on importer's side via
>           * hypervisor-specific method
>           */
>          int (*unmap_shared_pages)(void **refs_info, int nents);
>
>          /* initialize communication environment */
>          int (*init_comm_env)(void);
>
>          void (*destroy_comm)(void);
>
>          /* upstream ch setup (receiving and responding) */
>          int (*init_rx_ch)(int vm_id);
>
>          /* downstream ch setup (transmitting and parsing responses) */
>          int (*init_tx_ch)(int vm_id);
>
>          int (*send_req)(int vm_id, struct hyper_dmabuf_req *req, int wait);
> };
>
> All of these can be mapped with any hypervisor specific implementation.
> We designed backend implementation for Xen using grant-table, Xen event
> and ring buffer communication. For ACRN, we have another backend using Virt-IO
> for both memory sharing and communication.
>
> We tried to define this structure of backend to make it general enough (or
> it can be even modified or extended to support more cases.) so that it can
> fit to other hypervisor cases. Only requirements/expectation on the hypervisor
> are page-level memory sharing and inter-domain communication, which I think
> are standard features of modern hypervisor.
>
> And please review common UAPIs that hyper_dmabuf and xen-zcopy supports. They
> are very general. One is getting FD (dmabuf) and get those shared. The other
> is generating dmabuf from global handle (secure handle hiding gref behind it).
> On top of this, hyper_dmabuf has "unshare" and "query" which are also useful
> for any cases.
>
> So I don't know why we wouldn't want to try to make these standard in most of
> hypervisor cases instead of limiting it to certain hypervisor like Xen.
> Frontend-backend structre is optimal for this I think.
>
>>> So I am wondering we can start with this hyper_dmabuf then modify it for
>>> your use-case if needed and polish and fix any glitches if we want to
>>> to use this for all general dma-buf usecases.
>> Imo xen-zcopy is a much more reasonable starting point for upstream, which
>> can then be extended (if really proven to be necessary).
>>
>>> Also, I still have one unresolved question regarding the export/import flow
>>> in both of hyper_dmabuf and xen-zcopy.
>>>
>>> @danvet: Would this flow (guest1->import existing dmabuf->share underlying
>>> pages->guest2->map shared pages->create/export dmabuf) be acceptable now?
>> I think if you just look at the pages, and make sure you handle the
>> sg_page == NULL case it's ok-ish. It's not great, but mostly it should
>> work. The real trouble with hyperdmabuf was the forwarding of all these
>> calls, instead of just passing around a list of grant references.
> I talked to danvet about this litte bit.
>
> I think there was some misunderstanding on this "forwarding". Exporting
> and importing flow in hyper_dmabuf are basically same as xen-zcopy's. I think
> what made confusion was that importing domain notifies exporting domain when
> there are dmabuf operations (like attach, mapping, detach and release) so that
> exporting domain can track the usage of dmabuf on the importing domain.
>
> I designed this for some basic tracking. We may not need to notify for every
> different activity but if none of them is there, exporting domain can't
> determine if it is ok to unshare the buffer or the originator (like i915)
> can free the object even if it's being accessed in importing domain.
>
> Anyway I really hope we can have enough discussion and resolve all concerns
> before nailing it down.
Let me explain how this works in the para-virtual display use-case
with xen-zcopy.

1. There are 4 components in the system:
   - displif protocol [1]
   - xen-front - para-virtual DRM driver running in DomU (Guest) VM
   - backend - user-space application running in Dom0
   - xen-zcopy - DRM (as of now) helper driver running in Dom0

2. All the communication between domains happens between xen-front and the
backend, so it is possible to implement the para-virtual display use-case
without xen-zcopy at all (this is why it is a helper driver), but in that
case memory copying occurs (this is out of scope for this discussion).

3. To better understand the security implications, let's see what use-cases
we have:

3.1 xen-front exports its dma-buf (dumb) to the backend

In this case there are no security issues at all, as Dom0 (backend side)
will use DomU's pages (xen-front side) and Dom0 is a trusted domain, so
we assume it won't hurt DomU. Even if DomU dies, nothing bad happens to Dom0.
If DomU misbehaves it can only write to its own pages shared with Dom0, but
still cannot go beyond that, e.g. it can't access Dom0's memory.

3.2 Backend exports dma-buf to xen-front

In this case Dom0 pages are shared with DomU. As before, DomU can only write
to these pages, not to any other page of Dom0, so it can still be considered
safe. But the following must be considered (highlighted in xen-front's kernel
documentation):
  - If the guest domain dies, then pages/grants received from the backend
    cannot be claimed back - think of it as memory lost to Dom0 (it won't be
    used for any other guest)
  - A misbehaving guest may send too many requests to the backend, exhausting
    its grant references and memory (consider this from a security POV). As
    the backend runs in the trusted domain, we also assume that it is trusted
    as well, e.g. it must take measures to prevent DDoS attacks.

4. xen-front/backend/xen-zcopy synchronization

4.1. As I already said in 2), all the inter-VM communication happens between
xen-front and the backend; xen-zcopy is NOT involved in that.
When xen-front wants to destroy a display buffer (dumb/dma-buf) it issues a
XENDISPL_OP_DBUF_DESTROY command (the opposite of XENDISPL_OP_DBUF_CREATE).
This call is synchronous, so xen-front expects the backend to have freed the
buffer pages on return.

4.2. The backend, on XENDISPL_OP_DBUF_DESTROY:
   - closes all dumb handles/fd's of the buffer according to [3]
   - issues the DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL to xen-zcopy to make
     sure the buffer is freed (think of it as waiting for the dma-buf->release
     callback)
   - replies to xen-front that the buffer can be destroyed.
This way deletion of the buffer happens synchronously on both Dom0 and DomU
sides. In case DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE returns with a time-out
error (BTW, the wait time is a parameter of this IOCTL), Xen will defer grant
reference removal and retry later until those are free.

Hope this helps understand how buffers are synchronously deleted in the
xen-zcopy case with a single protocol command.
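
For reference, a user-space sketch of step 4.2 could look like the snippet
below (it assumes the xen_zcopy_drm.h UAPI header from this series; the
struct/field names used for the wait-free ioctl argument - wait_handle,
wait_to_ms - and the timeout value are assumptions for the example, not
necessarily the final UAPI):

#include <stdint.h>
#include <xf86drm.h>
#include <drm/xen_zcopy_drm.h>

static int backend_on_dbuf_destroy(int zcopy_fd, int drm_fd,
                                   uint32_t dumb_handle, uint32_t wait_handle)
{
        struct drm_mode_destroy_dumb destroy = { .handle = dumb_handle };
        struct drm_xen_zcopy_dumb_wait_free wait = {
                .wait_handle = wait_handle,     /* obtained at _FROM_REFS time */
                .wait_to_ms  = 3000,            /* the wait time parameter */
        };

        /* 1. close all local dumb handles/fd's of the buffer, see [3] */
        drmIoctl(drm_fd, DRM_IOCTL_MODE_DESTROY_DUMB, &destroy);

        /* 2. block until dma-buf->release, i.e. the buffer is really freed */
        if (drmIoctl(zcopy_fd, DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE, &wait))
                return -1;      /* e.g. time-out: gref removal is deferred */

        /* 3. only now reply to xen-front that XENDISPL_OP_DBUF_DESTROY is done */
        return 0;
}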

I think the above logic can also be re-used by the hyper-dmabuf driver with
some additional work:

1. xen-zcopy can be split into 2 parts and extended:
1.1. The Xen gntdev driver [4], [5], to allow creating a dma-buf from grefs
and vice versa, and to implement a "wait" ioctl (wait for dma-buf->release):
currently these are DRM_XEN_ZCOPY_DUMB_FROM_REFS, DRM_XEN_ZCOPY_DUMB_TO_REFS
and DRM_XEN_ZCOPY_DUMB_WAIT_FREE (a rough sketch of what such a gntdev
extension could look like is given right after this list).
1.2. The Xen balloon driver [6], to allow allocating contiguous buffers (not
needed by the current hyper-dmabuf, but a must for xen-zcopy use-cases).

2. Then hyper-dmabuf uses the Xen gntdev driver for Xen-specific dma-buf
alloc/free/wait.

3. hyper-dmabuf uses its own protocol between VMs to communicate buffer
creation/deletion and whatever else is needed (fences?).
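
To give a feeling of what item 1.1 might mean for the gntdev UAPI [5], here
is a purely hypothetical sketch: none of these names exist in the current
header, and the layout and ioctl number are invented for illustration only.

#include <linux/types.h>
#include <linux/ioctl.h>

/* Hypothetical addition to include/uapi/xen/gntdev.h [5] */
struct ioctl_gntdev_dmabuf_exp_from_refs {
        /* IN: flags, e.g. request a physically contiguous buffer */
        __u32 flags;
        /* IN: number of grant references that follow */
        __u32 count;
        /* OUT: file descriptor of the exported dma-buf */
        __u32 fd;
        /* OUT: handle to pass to the "wait for release" ioctl */
        __u32 wait_handle;
        /* IN: grant references of the buffer's pages */
        __u32 refs[1];
};

#define IOCTL_GNTDEV_DMABUF_EXP_FROM_REFS \
        _IOC(_IOC_NONE, 'G', 9, \
             sizeof(struct ioctl_gntdev_dmabuf_exp_from_refs))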

To the Xen community: please think of a dma-buf here as a buffer
representation mechanism, e.g. at the end of the day it's just a set of
pages.

Thank you,
Oleksandr
>> -Daniel
>>
>>> Regards,
>>> DW
>>>   
>>> On Mon, Apr 16, 2018 at 05:33:46PM +0300, Oleksandr Andrushchenko wrote:
>>>> Hello, all!
>>>>
>>>> After discussing xen-zcopy and hyper-dmabuf [1] approaches
>>>>
>>>> it seems that xen-zcopy can be made not depend on DRM core any more
>>>>
>>>> and be dma-buf centric (which it in fact is).
>>>>
>>>> The DRM code was mostly there for dma-buf's FD import/export
>>>>
>>>> with DRM PRIME UAPI and with DRM use-cases in mind, but it comes out that if
>>>>
>>>> the proposed 2 IOCTLs (DRM_XEN_ZCOPY_DUMB_FROM_REFS and
>>>> DRM_XEN_ZCOPY_DUMB_TO_REFS)
>>>>
>>>> are extended to also provide a file descriptor of the corresponding dma-buf,
>>>> then
>>>>
>>>> PRIME stuff in the driver is not needed anymore.
>>>>
>>>> That being said, xen-zcopy can safely be detached from DRM and moved from
>>>>
>>>> drivers/gpu/drm/xen into drivers/xen/dma-buf-backend(?).
>>>>
>>>> This driver then becomes a universal way to turn any shared buffer between
>>>> Dom0/DomD
>>>>
>>>> and DomU(s) into a dma-buf, e.g. one can create a dma-buf from any grant
>>>> references
>>>>
>>>> or represent a dma-buf as grant-references for export.
>>>>
>>>> This way the driver can be used not only for DRM use-cases, but also for
>>>> other
>>>>
>>>> use-cases which may require zero copying between domains.
>>>>
>>>> For example, the use-cases we are about to work in the nearest future will
>>>> use
>>>>
>>>> V4L, e.g. we plan to support cameras, codecs etc. and all these will benefit
>>>>
>>>> from zero copying much. Potentially, even block/net devices may benefit,
>>>>
>>>> but this needs some evaluation.
>>>>
>>>>
>>>> I would love to hear comments for authors of the hyper-dmabuf
>>>>
>>>> and Xen community, as well as DRI-Devel and other interested parties.
>>>>
>>>>
>>>> Thank you,
>>>>
>>>> Oleksandr
>>>>
>>>>
>>>> On 03/29/2018 04:19 PM, Oleksandr Andrushchenko wrote:
>>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>>
>>>>> Hello!
>>>>>
>>>>> When using Xen PV DRM frontend driver then on backend side one will need
>>>>> to do copying of display buffers' contents (filled by the
>>>>> frontend's user-space) into buffers allocated at the backend side.
>>>>> Taking into account the size of display buffers and frames per seconds
>>>>> it may result in unneeded huge data bus occupation and performance loss.
>>>>>
>>>>> This helper driver allows implementing zero-copying use-cases
>>>>> when using Xen para-virtualized frontend display driver by
>>>>> implementing a DRM/KMS helper driver running on backend's side.
>>>>> It utilizes PRIME buffers API to share frontend's buffers with
>>>>> physical device drivers on backend's side:
>>>>>
>>>>>   - a dumb buffer created on backend's side can be shared
>>>>>     with the Xen PV frontend driver, so it directly writes
>>>>>     into backend's domain memory (into the buffer exported from
>>>>>     DRM/KMS driver of a physical display device)
>>>>>   - a dumb buffer allocated by the frontend can be imported
>>>>>     into physical device DRM/KMS driver, thus allowing to
>>>>>     achieve no copying as well
>>>>>
>>>>> For that reason number of IOCTLs are introduced:
>>>>>   -  DRM_XEN_ZCOPY_DUMB_FROM_REFS
>>>>>      This will create a DRM dumb buffer from grant references provided
>>>>>      by the frontend
>>>>>   - DRM_XEN_ZCOPY_DUMB_TO_REFS
>>>>>     This will grant references to a dumb/display buffer's memory provided
>>>>>     by the backend
>>>>>   - DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>>     This will block until the dumb buffer with the wait handle provided
>>>>>     be freed
>>>>>
>>>>> With this helper driver I was able to drop CPU usage from 17% to 3%
>>>>> on Renesas R-Car M3 board.
>>>>>
>>>>> This was tested with Renesas' Wayland-KMS and backend running as DRM master.
>>>>>
>>>>> Thank you,
>>>>> Oleksandr
>>>>>
>>>>> Oleksandr Andrushchenko (1):
>>>>>    drm/xen-zcopy: Add Xen zero-copy helper DRM driver
>>>>>
>>>>>   Documentation/gpu/drivers.rst               |   1 +
>>>>>   Documentation/gpu/xen-zcopy.rst             |  32 +
>>>>>   drivers/gpu/drm/xen/Kconfig                 |  25 +
>>>>>   drivers/gpu/drm/xen/Makefile                |   5 +
>>>>>   drivers/gpu/drm/xen/xen_drm_zcopy.c         | 880 ++++++++++++++++++++++++++++
>>>>>   drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c | 154 +++++
>>>>>   drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h |  38 ++
>>>>>   include/uapi/drm/xen_zcopy_drm.h            | 129 ++++
>>>>>   8 files changed, 1264 insertions(+)
>>>>>   create mode 100644 Documentation/gpu/xen-zcopy.rst
>>>>>   create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy.c
>>>>>   create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c
>>>>>   create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h
>>>>>   create mode 100644 include/uapi/drm/xen_zcopy_drm.h
>>>>>
>>>> [1]
>>>> https://lists.xenproject.org/archives/html/xen-devel/2018-02/msg01202.html
>>> _______________________________________________
>>> dri-devel mailing list
>>> dri-devel@lists.freedesktop.org
>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>> -- 
>> Daniel Vetter
>> Software Engineer, Intel Corporation
>> http://blog.ffwll.ch

[1] 
https://elixir.bootlin.com/linux/v4.17-rc1/source/include/xen/interface/io/displif.h
[2] 
https://elixir.bootlin.com/linux/v4.17-rc1/source/include/xen/interface/io/displif.h#L539
[3] 
https://elixir.bootlin.com/linux/v4.17-rc1/source/drivers/gpu/drm/drm_prime.c#L39
[4] https://elixir.bootlin.com/linux/v4.17-rc1/source/drivers/xen/gntdev.c
[5] 
https://elixir.bootlin.com/linux/v4.17-rc1/source/include/uapi/xen/gntdev.h
[6] https://elixir.bootlin.com/linux/v4.17-rc1/source/drivers/xen/balloon.c

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-17 20:57       ` Dongwon Kim
  2018-04-18  6:38         ` Oleksandr Andrushchenko
@ 2018-04-18  6:38         ` Oleksandr Andrushchenko
  1 sibling, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-18  6:38 UTC (permalink / raw)
  To: Dongwon Kim, Oleksandr_Andrushchenko, jgross, Artem Mygaiev,
	konrad.wilk, airlied, linux-kernel, dri-devel, Potrola, MateuszX,
	daniel.vetter, xen-devel, boris.ostrovsky, Matt Roper

On 04/17/2018 11:57 PM, Dongwon Kim wrote:
> On Tue, Apr 17, 2018 at 09:59:28AM +0200, Daniel Vetter wrote:
>> On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
>>> Yeah, I definitely agree on the idea of expanding the use case to the
>>> general domain where dmabuf sharing is used. However, what you are
>>> targetting with proposed changes is identical to the core design of
>>> hyper_dmabuf.
>>>
>>> On top of this basic functionalities, hyper_dmabuf has driver level
>>> inter-domain communication, that is needed for dma-buf remote tracking
>>> (no fence forwarding though), event triggering and event handling, extra
>>> meta data exchange and hyper_dmabuf_id that represents grefs
>>> (grefs are shared implicitly on driver level)
>> This really isn't a positive design aspect of hyperdmabuf imo. The core
>> code in xen-zcopy (ignoring the ioctl side, which will be cleaned up) is
>> very simple & clean.
>>
>> If there's a clear need later on we can extend that. But for now xen-zcopy
>> seems to cover the basic use-case needs, so gets the job done.
>>
>>> Also it is designed with frontend (common core framework) + backend
>>> (hyper visor specific comm and memory sharing) structure for portability.
>>> We just can't limit this feature to Xen because we want to use the same
>>> uapis not only for Xen but also other applicable hypervisor, like ACORN.
>> See the discussion around udmabuf and the needs for kvm. I think trying to
>> make an ioctl/uapi that works for multiple hypervisors is misguided - it
>> likely won't work.
>>
>> On top of that the 2nd hypervisor you're aiming to support is ACRN. That's
>> not even upstream yet, nor have I seen any patches proposing to land linux
>> support for ACRN. Since it's not upstream, it doesn't really matter for
>> upstream consideration. I'm doubting that ACRN will use the same grant
>> references as xen, so the same uapi won't work on ACRN as on Xen anyway.
> Yeah, ACRN doesn't have grant-table. Only Xen supports it. But that is why
> hyper_dmabuf has been architectured with the concept of backend.
> If you look at the structure of backend, you will find that
> backend is just a set of standard function calls as shown here:
>
> struct hyper_dmabuf_bknd_ops {
>          /* backend initialization routine (optional) */
>          int (*init)(void);
>
>          /* backend cleanup routine (optional) */
>          int (*cleanup)(void);
>
>          /* retreiving id of current virtual machine */
>          int (*get_vm_id)(void);
>
>          /* get pages shared via hypervisor-specific method */
>          int (*share_pages)(struct page **pages, int vm_id,
>                             int nents, void **refs_info);
>
>          /* make shared pages unshared via hypervisor specific method */
>          int (*unshare_pages)(void **refs_info, int nents);
>
>          /* map remotely shared pages on importer's side via
>           * hypervisor-specific method
>           */
>          struct page ** (*map_shared_pages)(unsigned long ref, int vm_id,
>                                             int nents, void **refs_info);
>
>          /* unmap and free shared pages on importer's side via
>           * hypervisor-specific method
>           */
>          int (*unmap_shared_pages)(void **refs_info, int nents);
>
>          /* initialize communication environment */
>          int (*init_comm_env)(void);
>
>          void (*destroy_comm)(void);
>
>          /* upstream ch setup (receiving and responding) */
>          int (*init_rx_ch)(int vm_id);
>
>          /* downstream ch setup (transmitting and parsing responses) */
>          int (*init_tx_ch)(int vm_id);
>
>          int (*send_req)(int vm_id, struct hyper_dmabuf_req *req, int wait);
> };
>
> All of these can be mapped with any hypervisor specific implementation.
> We designed backend implementation for Xen using grant-table, Xen event
> and ring buffer communication. For ACRN, we have another backend using Virt-IO
> for both memory sharing and communication.
>
> We tried to define this structure of backend to make it general enough (or
> it can be even modified or extended to support more cases.) so that it can
> fit to other hypervisor cases. Only requirements/expectation on the hypervisor
> are page-level memory sharing and inter-domain communication, which I think
> are standard features of modern hypervisor.
>
> And please review common UAPIs that hyper_dmabuf and xen-zcopy supports. They
> are very general. One is getting FD (dmabuf) and get those shared. The other
> is generating dmabuf from global handle (secure handle hiding gref behind it).
> On top of this, hyper_dmabuf has "unshare" and "query" which are also useful
> for any cases.
>
> So I don't know why we wouldn't want to try to make these standard in most of
> hypervisor cases instead of limiting it to certain hypervisor like Xen.
> Frontend-backend structre is optimal for this I think.
>
>>> So I am wondering we can start with this hyper_dmabuf then modify it for
>>> your use-case if needed and polish and fix any glitches if we want to
>>> to use this for all general dma-buf usecases.
>> Imo xen-zcopy is a much more reasonable starting point for upstream, which
>> can then be extended (if really proven to be necessary).
>>
>>> Also, I still have one unresolved question regarding the export/import flow
>>> in both of hyper_dmabuf and xen-zcopy.
>>>
>>> @danvet: Would this flow (guest1->import existing dmabuf->share underlying
>>> pages->guest2->map shared pages->create/export dmabuf) be acceptable now?
>> I think if you just look at the pages, and make sure you handle the
>> sg_page == NULL case it's ok-ish. It's not great, but mostly it should
>> work. The real trouble with hyperdmabuf was the forwarding of all these
>> calls, instead of just passing around a list of grant references.
> I talked to danvet about this litte bit.
>
> I think there was some misunderstanding on this "forwarding". Exporting
> and importing flow in hyper_dmabuf are basically same as xen-zcopy's. I think
> what made confusion was that importing domain notifies exporting domain when
> there are dmabuf operations (like attach, mapping, detach and release) so that
> exporting domain can track the usage of dmabuf on the importing domain.
>
> I designed this for some basic tracking. We may not need to notify for every
> different activity but if none of them is there, exporting domain can't
> determine if it is ok to unshare the buffer or the originator (like i915)
> can free the object even if it's being accessed in importing domain.
>
> Anyway I really hope we can have enough discussion and resolve all concerns
> before nailing it down.
Let me explain how this works in the para-virtual display use-case
with xen-zcopy.

1. There are 4 components in the system:
   - displif protocol [1]
   - xen-front - para-virtual DRM driver running in DomU (Guest) VM
   - backend - user-space application running in Dom0
   - xen-zcopy - DRM (as of now) helper driver running in Dom0

2. All the communication between domains happens between xen-front and the
backend, so it is possible to implement the para-virtual display use-case
without xen-zcopy at all (this is why it is a helper driver), but in that
case memory copying occurs (this is out of scope for this discussion).

3. To better understand the security implications, let's see what use-cases
we have:

3.1 xen-front exports its dma-buf (dumb) to the backend

In this case there are no security issues at all, as Dom0 (backend side)
will use DomU's pages (xen-front side) and Dom0 is a trusted domain, so
we assume it won't hurt DomU. Even if DomU dies, nothing bad happens to Dom0.
If DomU misbehaves it can only write to its own pages shared with Dom0, but
still cannot go beyond that, e.g. it can't access Dom0's memory.

3.2 Backend exports dma-buf to xen-front

In this case Dom0 pages are shared with DomU. As before, DomU can only write
to these pages, not to any other page of Dom0, so it can still be considered
safe. But the following must be considered (highlighted in xen-front's kernel
documentation):
  - If the guest domain dies, then pages/grants received from the backend
    cannot be claimed back - think of it as memory lost to Dom0 (it won't be
    used for any other guest)
  - A misbehaving guest may send too many requests to the backend, exhausting
    its grant references and memory (consider this from a security POV). As
    the backend runs in the trusted domain, we also assume that it is trusted
    as well, e.g. it must take measures to prevent DDoS attacks.

4. xen-front/backend/xen-zcopy synchronization

4.1. As I already said in 2), all the inter-VM communication happens between
xen-front and the backend; xen-zcopy is NOT involved in that.
When xen-front wants to destroy a display buffer (dumb/dma-buf) it issues a
XENDISPL_OP_DBUF_DESTROY command (the opposite of XENDISPL_OP_DBUF_CREATE).
This call is synchronous, so xen-front expects the backend to have freed the
buffer pages on return.

4.2. The backend, on XENDISPL_OP_DBUF_DESTROY:
   - closes all dumb handles/fd's of the buffer according to [3]
   - issues the DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL to xen-zcopy to make
     sure the buffer is freed (think of it as waiting for the dma-buf->release
     callback)
   - replies to xen-front that the buffer can be destroyed.
This way deletion of the buffer happens synchronously on both Dom0 and DomU
sides. In case DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE returns with a time-out
error (BTW, the wait time is a parameter of this IOCTL), Xen will defer grant
reference removal and retry later until those are free.

Hope this helps understand how buffers are synchronously deleted in the
xen-zcopy case with a single protocol command.

I think the above logic can also be re-used by the hyper-dmabuf driver with
some additional work:

1. xen-zcopy can be split into 2 parts and extended:
1.1. The Xen gntdev driver [4], [5], to allow creating a dma-buf from grefs
and vice versa, and to implement a "wait" ioctl (wait for dma-buf->release):
currently these are DRM_XEN_ZCOPY_DUMB_FROM_REFS, DRM_XEN_ZCOPY_DUMB_TO_REFS
and DRM_XEN_ZCOPY_DUMB_WAIT_FREE.
1.2. The Xen balloon driver [6], to allow allocating contiguous buffers (not
needed by the current hyper-dmabuf, but a must for xen-zcopy use-cases).

2. Then hyper-dmabuf uses the Xen gntdev driver for Xen-specific dma-buf
alloc/free/wait.

3. hyper-dmabuf uses its own protocol between VMs to communicate buffer
creation/deletion and whatever else is needed (fences?).

To the Xen community: please think of a dma-buf here as a buffer
representation mechanism, e.g. at the end of the day it's just a set of
pages.

Thank you,
Oleksandr
>> -Daniel
>>
>>> Regards,
>>> DW
>>>   
>>> On Mon, Apr 16, 2018 at 05:33:46PM +0300, Oleksandr Andrushchenko wrote:
>>>> Hello, all!
>>>>
>>>> After discussing xen-zcopy and hyper-dmabuf [1] approaches
>>>>
>>>> it seems that xen-zcopy can be made not depend on DRM core any more
>>>>
>>>> and be dma-buf centric (which it in fact is).
>>>>
>>>> The DRM code was mostly there for dma-buf's FD import/export
>>>>
>>>> with DRM PRIME UAPI and with DRM use-cases in mind, but it comes out that if
>>>>
>>>> the proposed 2 IOCTLs (DRM_XEN_ZCOPY_DUMB_FROM_REFS and
>>>> DRM_XEN_ZCOPY_DUMB_TO_REFS)
>>>>
>>>> are extended to also provide a file descriptor of the corresponding dma-buf,
>>>> then
>>>>
>>>> PRIME stuff in the driver is not needed anymore.
>>>>
>>>> That being said, xen-zcopy can safely be detached from DRM and moved from
>>>>
>>>> drivers/gpu/drm/xen into drivers/xen/dma-buf-backend(?).
>>>>
>>>> This driver then becomes a universal way to turn any shared buffer between
>>>> Dom0/DomD
>>>>
>>>> and DomU(s) into a dma-buf, e.g. one can create a dma-buf from any grant
>>>> references
>>>>
>>>> or represent a dma-buf as grant-references for export.
>>>>
>>>> This way the driver can be used not only for DRM use-cases, but also for
>>>> other
>>>>
>>>> use-cases which may require zero copying between domains.
>>>>
>>>> For example, the use-cases we are about to work in the nearest future will
>>>> use
>>>>
>>>> V4L, e.g. we plan to support cameras, codecs etc. and all these will benefit
>>>>
>>>> from zero copying much. Potentially, even block/net devices may benefit,
>>>>
>>>> but this needs some evaluation.
>>>>
>>>>
>>>> I would love to hear comments for authors of the hyper-dmabuf
>>>>
>>>> and Xen community, as well as DRI-Devel and other interested parties.
>>>>
>>>>
>>>> Thank you,
>>>>
>>>> Oleksandr
>>>>
>>>>
>>>> On 03/29/2018 04:19 PM, Oleksandr Andrushchenko wrote:
>>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>>
>>>>> Hello!
>>>>>
>>>>> When using Xen PV DRM frontend driver then on backend side one will need
>>>>> to do copying of display buffers' contents (filled by the
>>>>> frontend's user-space) into buffers allocated at the backend side.
>>>>> Taking into account the size of display buffers and frames per seconds
>>>>> it may result in unneeded huge data bus occupation and performance loss.
>>>>>
>>>>> This helper driver allows implementing zero-copying use-cases
>>>>> when using Xen para-virtualized frontend display driver by
>>>>> implementing a DRM/KMS helper driver running on backend's side.
>>>>> It utilizes PRIME buffers API to share frontend's buffers with
>>>>> physical device drivers on backend's side:
>>>>>
>>>>>   - a dumb buffer created on backend's side can be shared
>>>>>     with the Xen PV frontend driver, so it directly writes
>>>>>     into backend's domain memory (into the buffer exported from
>>>>>     DRM/KMS driver of a physical display device)
>>>>>   - a dumb buffer allocated by the frontend can be imported
>>>>>     into physical device DRM/KMS driver, thus allowing to
>>>>>     achieve no copying as well
>>>>>
>>>>> For that reason number of IOCTLs are introduced:
>>>>>   -  DRM_XEN_ZCOPY_DUMB_FROM_REFS
>>>>>      This will create a DRM dumb buffer from grant references provided
>>>>>      by the frontend
>>>>>   - DRM_XEN_ZCOPY_DUMB_TO_REFS
>>>>>     This will grant references to a dumb/display buffer's memory provided
>>>>>     by the backend
>>>>>   - DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>>     This will block until the dumb buffer with the wait handle provided
>>>>>     be freed
>>>>>
>>>>> With this helper driver I was able to drop CPU usage from 17% to 3%
>>>>> on Renesas R-Car M3 board.
>>>>>
>>>>> This was tested with Renesas' Wayland-KMS and backend running as DRM master.
>>>>>
>>>>> Thank you,
>>>>> Oleksandr
>>>>>
>>>>> Oleksandr Andrushchenko (1):
>>>>>    drm/xen-zcopy: Add Xen zero-copy helper DRM driver
>>>>>
>>>>>   Documentation/gpu/drivers.rst               |   1 +
>>>>>   Documentation/gpu/xen-zcopy.rst             |  32 +
>>>>>   drivers/gpu/drm/xen/Kconfig                 |  25 +
>>>>>   drivers/gpu/drm/xen/Makefile                |   5 +
>>>>>   drivers/gpu/drm/xen/xen_drm_zcopy.c         | 880 ++++++++++++++++++++++++++++
>>>>>   drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c | 154 +++++
>>>>>   drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h |  38 ++
>>>>>   include/uapi/drm/xen_zcopy_drm.h            | 129 ++++
>>>>>   8 files changed, 1264 insertions(+)
>>>>>   create mode 100644 Documentation/gpu/xen-zcopy.rst
>>>>>   create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy.c
>>>>>   create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c
>>>>>   create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h
>>>>>   create mode 100644 include/uapi/drm/xen_zcopy_drm.h
>>>>>
>>>> [1]
>>>> https://lists.xenproject.org/archives/html/xen-devel/2018-02/msg01202.html
>>> _______________________________________________
>>> dri-devel mailing list
>>> dri-devel@lists.freedesktop.org
>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>> -- 
>> Daniel Vetter
>> Software Engineer, Intel Corporation
>> http://blog.ffwll.ch

[1] 
https://elixir.bootlin.com/linux/v4.17-rc1/source/include/xen/interface/io/displif.h
[2] 
https://elixir.bootlin.com/linux/v4.17-rc1/source/include/xen/interface/io/displif.h#L539
[3] 
https://elixir.bootlin.com/linux/v4.17-rc1/source/drivers/gpu/drm/drm_prime.c#L39
[4] https://elixir.bootlin.com/linux/v4.17-rc1/source/drivers/xen/gntdev.c
[5] 
https://elixir.bootlin.com/linux/v4.17-rc1/source/include/uapi/xen/gntdev.h
[6] https://elixir.bootlin.com/linux/v4.17-rc1/source/drivers/xen/balloon.c

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-18  6:38         ` Oleksandr Andrushchenko
@ 2018-04-18  7:35             ` Roger Pau Monné
  2018-04-18  7:35             ` Roger Pau Monné
                               ` (2 subsequent siblings)
  3 siblings, 0 replies; 131+ messages in thread
From: Roger Pau Monné @ 2018-04-18  7:35 UTC (permalink / raw)
  To: Oleksandr Andrushchenko
  Cc: Dongwon Kim, Oleksandr_Andrushchenko, jgross, Artem Mygaiev,
	konrad.wilk, airlied, linux-kernel, dri-devel, Potrola, MateuszX,
	daniel.vetter, xen-devel, boris.ostrovsky, Matt Roper

On Wed, Apr 18, 2018 at 09:38:39AM +0300, Oleksandr Andrushchenko wrote:
> On 04/17/2018 11:57 PM, Dongwon Kim wrote:
> > On Tue, Apr 17, 2018 at 09:59:28AM +0200, Daniel Vetter wrote:
> > > On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
> 3.2 Backend exports dma-buf to xen-front
> 
> In this case Dom0 pages are shared with DomU. As before, DomU can only write
> to these pages, not any other page from Dom0, so it can be still considered
> safe.
> But, the following must be considered (highlighted in xen-front's Kernel
> documentation):
>  - If guest domain dies then pages/grants received from the backend cannot
>    be claimed back - think of it as memory lost to Dom0 (won't be used for
> any
>    other guest)
>  - Misbehaving guest may send too many requests to the backend exhausting
>    its grant references and memory (consider this from security POV). As the
>    backend runs in the trusted domain we also assume that it is trusted as
> well,
>    e.g. must take measures to prevent DDoS attacks.

I cannot parse the above sentence:

"As the backend runs in the trusted domain we also assume that it is
trusted as well, e.g. must take measures to prevent DDoS attacks."

What's the relation between being trusted and protecting from DoS
attacks?

In any case, all? PV protocols are implemented with the frontend
sharing pages to the backend, and I think there's a reason why this
model is used, and it should continue to be used.

Having to add logic in the backend to prevent such attacks means
that:

 - We need more code in the backend, which increases complexity and
   chances of bugs.
 - Such code/logic could be wrong, thus allowing DoS.

> 4. xen-front/backend/xen-zcopy synchronization
> 
> 4.1. As I already said in 2) all the inter VM communication happens between
> xen-front and the backend, xen-zcopy is NOT involved in that.
> When xen-front wants to destroy a display buffer (dumb/dma-buf) it issues a
> XENDISPL_OP_DBUF_DESTROY command (opposite to XENDISPL_OP_DBUF_CREATE).
> This call is synchronous, so xen-front expects that backend does free the
> buffer pages on return.
> 
> 4.2. Backend, on XENDISPL_OP_DBUF_DESTROY:
>   - closes all dumb handles/fd's of the buffer according to [3]
>   - issues DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL to xen-zcopy to make
> sure
>     the buffer is freed (think of it as it waits for dma-buf->release
> callback)

So this zcopy thing keeps some kind of track of the memory usage? Why
can't the user-space backend keep track of the buffer usage?

>   - replies to xen-front that the buffer can be destroyed.
> This way deletion of the buffer happens synchronously on both Dom0 and DomU
> sides. In case if DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE returns with time-out
> error
> (BTW, wait time is a parameter of this IOCTL), Xen will defer grant
> reference
> removal and will retry later until those are free.
> 
> Hope this helps understand how buffers are synchronously deleted in case
> of xen-zcopy with a single protocol command.
> 
> I think the above logic can also be re-used by the hyper-dmabuf driver with
> some additional work:
> 
> 1. xen-zcopy can be split into 2 parts and extend:
> 1.1. Xen gntdev driver [4], [5] to allow creating dma-buf from grefs and
> vise versa,

I don't know much about the dma-buf implementation in Linux, but
gntdev is a user-space device, and AFAICT user-space applications
don't have any notion of dma buffers. How are such buffers useful for
user-space? Why can't this just be called memory?

Also, (with my FreeBSD maintainer hat) how is this going to translate
to other OSes? So far the operations performed by the gntdev device
are mostly OS-agnostic because they just map/unmap memory, and in fact
they are implemented by Linux and FreeBSD.

> implement "wait" ioctl (wait for dma-buf->release): currently these are
> DRM_XEN_ZCOPY_DUMB_FROM_REFS, DRM_XEN_ZCOPY_DUMB_TO_REFS and
> DRM_XEN_ZCOPY_DUMB_WAIT_FREE
> 1.2. Xen balloon driver [6] to allow allocating contiguous buffers (not
> needed
> by current hyper-dmabuf, but is a must for xen-zcopy use-cases)

I think this needs clarifying. In which memory space do you need those
regions to be contiguous?

Do they need to be contiguous in host physical memory, or guest
physical memory?

If it's in guest memory space, isn't there any generic interface that
you can use?

If it's in host physical memory space, why do you need this buffer to
be contiguous in host physical memory space? The IOMMU should hide all
this.

Thanks, Roger.

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-18  7:35             ` Roger Pau Monné
@ 2018-04-18  8:01               ` Oleksandr Andrushchenko
  -1 siblings, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-18  8:01 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Dongwon Kim, Oleksandr_Andrushchenko, jgross, Artem Mygaiev,
	konrad.wilk, airlied, linux-kernel, dri-devel, Potrola, MateuszX,
	daniel.vetter, xen-devel, boris.ostrovsky, Matt Roper

On 04/18/2018 10:35 AM, Roger Pau Monné wrote:
> On Wed, Apr 18, 2018 at 09:38:39AM +0300, Oleksandr Andrushchenko wrote:
>> On 04/17/2018 11:57 PM, Dongwon Kim wrote:
>>> On Tue, Apr 17, 2018 at 09:59:28AM +0200, Daniel Vetter wrote:
>>>> On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
>> 3.2 Backend exports dma-buf to xen-front
>>
>> In this case Dom0 pages are shared with DomU. As before, DomU can only write
>> to these pages, not any other page from Dom0, so it can be still considered
>> safe.
>> But, the following must be considered (highlighted in xen-front's Kernel
>> documentation):
>>   - If guest domain dies then pages/grants received from the backend cannot
>>     be claimed back - think of it as memory lost to Dom0 (won't be used for
>> any
>>     other guest)
>>   - Misbehaving guest may send too many requests to the backend exhausting
>>     its grant references and memory (consider this from security POV). As the
>>     backend runs in the trusted domain we also assume that it is trusted as
>> well,
>>     e.g. must take measures to prevent DDoS attacks.
> I cannot parse the above sentence:
>
> "As the backend runs in the trusted domain we also assume that it is
> trusted as well, e.g. must take measures to prevent DDoS attacks."
>
> What's the relation between being trusted and protecting from DoS
> attacks?
I mean that we trust the backend to prevent Dom0
from crashing in case DomU's frontend misbehaves, e.g.
if the frontend sends too many memory requests etc.
> In any case, all? PV protocols are implemented with the frontend
> sharing pages to the backend, and I think there's a reason why this
> model is used, and it should continue to be used.
This is the first use-case above. But there are real-world
use-cases (embedded, in my case) when physically contiguous memory
needs to be shared; one of the possible ways to achieve this is
to share contiguous memory from Dom0 to DomU (the second use-case above).
> Having to add logic in the backend to prevent such attacks means
> that:
>
>   - We need more code in the backend, which increases complexity and
>     chances of bugs.
>   - Such code/logic could be wrong, thus allowing DoS.
You can live without this code at all, but then it is up to the
backend, which may take Dom0 down because of DomU's frontend doing evil
things.
>> 4. xen-front/backend/xen-zcopy synchronization
>>
>> 4.1. As I already said in 2) all the inter VM communication happens between
>> xen-front and the backend, xen-zcopy is NOT involved in that.
>> When xen-front wants to destroy a display buffer (dumb/dma-buf) it issues a
>> XENDISPL_OP_DBUF_DESTROY command (opposite to XENDISPL_OP_DBUF_CREATE).
>> This call is synchronous, so xen-front expects that backend does free the
>> buffer pages on return.
>>
>> 4.2. Backend, on XENDISPL_OP_DBUF_DESTROY:
>>    - closes all dumb handles/fd's of the buffer according to [3]
>>    - issues DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL to xen-zcopy to make
>> sure
>>      the buffer is freed (think of it as it waits for dma-buf->release
>> callback)
> So this zcopy thing keeps some kind of track of the memory usage? Why
> can't the user-space backend keep track of the buffer usage?
Because there is no dma-buf UAPI which allows tracking the buffer life cycle
(e.g. waiting until dma-buf's .release callback is called).
>>    - replies to xen-front that the buffer can be destroyed.
>> This way deletion of the buffer happens synchronously on both Dom0 and DomU
>> sides. In case if DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE returns with time-out
>> error
>> (BTW, wait time is a parameter of this IOCTL), Xen will defer grant
>> reference
>> removal and will retry later until those are free.
>>
>> Hope this helps understand how buffers are synchronously deleted in case
>> of xen-zcopy with a single protocol command.
>>
>> I think the above logic can also be re-used by the hyper-dmabuf driver with
>> some additional work:
>>
>> 1. xen-zcopy can be split into 2 parts and extend:
>> 1.1. Xen gntdev driver [4], [5] to allow creating dma-buf from grefs and
>> vise versa,
> I don't know much about the dma-buf implementation in Linux, but
> gntdev is a user-space device, and AFAICT user-space applications
> don't have any notion of dma buffers. How are such buffers useful for
> user-space? Why can't this just be called memory?
A dma-buf is seen by user-space as a file descriptor and you can then
pass it to different drivers. For example, you can share a buffer
used by a display driver for scanout with a GPU, to compose a picture
into it:
1. User-space (US) allocates a display buffer from the display driver
2. US asks the display driver to export the dma-buf which backs that buffer;
US gets the buffer's fd: dma_buf_fd
3. US asks the GPU driver to import a buffer and provides it with dma_buf_fd
4. The GPU renders its contents into the display buffer (dma_buf_fd)

Finally, this is indeed some memory, but a bit more [1]
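
E.g. with the existing DRM UAPI steps 2-3 look roughly like this (a sketch
only, error handling omitted and the device nodes are just an example):

#include <stdint.h>
#include <fcntl.h>
#include <xf86drm.h>  /* drmPrimeHandleToFD()/drmPrimeFDToHandle(), libdrm */

/* buf_handle is the GEM/dumb handle of the display buffer from step 1 */
static int share_scanout_buffer(uint32_t buf_handle)
{
	int display_fd = open("/dev/dri/card0", O_RDWR); /* display driver */
	int gpu_fd = open("/dev/dri/card1", O_RDWR);     /* GPU driver */
	int dma_buf_fd;
	uint32_t gpu_handle;

	/* step 2: export the buffer's backing storage as a dma-buf fd */
	drmPrimeHandleToFD(display_fd, buf_handle, DRM_CLOEXEC, &dma_buf_fd);

	/* step 3: import the very same pages into the GPU driver */
	drmPrimeFDToHandle(gpu_fd, dma_buf_fd, &gpu_handle);

	/* step 4: the GPU renders into gpu_handle, which is backed by the
	 * memory the display controller scans out from */
	return dma_buf_fd;
}
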
>
> Also, (with my FreeBSD maintainer hat) how is this going to translate
> to other OSes? So far the operations performed by the gntdev device
> are mostly OS-agnostic because this just map/unmap memory, and in fact
> they are implemented by Linux and FreeBSD.
At the moment I can only see a Linux implementation and it seems
to be perfectly ok, as we do not change Xen's APIs etc. and only
use the existing ones (remember, we only extend the gntdev/balloon
drivers; all the changes are in the Linux kernel).
As a second option, we could leave the gntdev/balloon drivers untouched
and have the re-worked xen-zcopy driver as a separate entity,
say drivers/xen/dma-buf.
>> implement "wait" ioctl (wait for dma-buf->release): currently these are
>> DRM_XEN_ZCOPY_DUMB_FROM_REFS, DRM_XEN_ZCOPY_DUMB_TO_REFS and
>> DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>> 1.2. Xen balloon driver [6] to allow allocating contiguous buffers (not
>> needed
>> by current hyper-dmabuf, but is a must for xen-zcopy use-cases)
> I think this needs clarifying. In which memory space do you need those
> regions to be contiguous?
Use-case: Dom0 has a HW driver which only works with contig memory
and I want DomU to be able to directly write into that memory, thus
implementing zero copying
>
> Do they need to be contiguous in host physical memory, or guest
> physical memory?
Host
>
> If it's in guest memory space, isn't there any generic interface that
> you can use?
>
> If it's in host physical memory space, why do you need this buffer to
> be contiguous in host physical memory space? The IOMMU should hide all
> this.
There are drivers/HW which can only work with contiguous memory, and even if
it is backed by an IOMMU it still has to be contiguous in IPA space (the real
device doesn't know that it is actually IPA contiguous, not PA).
> Thanks, Roger.
Thank you,
Oleksandr

[1] https://01.org/linuxgraphics/gfx-docs/drm/driver-api/dma-buf.html

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-18  8:01               ` Oleksandr Andrushchenko
@ 2018-04-18 10:10                 ` Roger Pau Monné
  -1 siblings, 0 replies; 131+ messages in thread
From: Roger Pau Monné @ 2018-04-18 10:10 UTC (permalink / raw)
  To: Oleksandr Andrushchenko
  Cc: Dongwon Kim, Oleksandr_Andrushchenko, jgross, Artem Mygaiev,
	konrad.wilk, airlied, linux-kernel, dri-devel, Potrola, MateuszX,
	daniel.vetter, xen-devel, boris.ostrovsky, Matt Roper

On Wed, Apr 18, 2018 at 11:01:12AM +0300, Oleksandr Andrushchenko wrote:
> On 04/18/2018 10:35 AM, Roger Pau Monné wrote:
> > On Wed, Apr 18, 2018 at 09:38:39AM +0300, Oleksandr Andrushchenko wrote:
> > > On 04/17/2018 11:57 PM, Dongwon Kim wrote:
> > > > On Tue, Apr 17, 2018 at 09:59:28AM +0200, Daniel Vetter wrote:
> > > > > On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
> > > 3.2 Backend exports dma-buf to xen-front
> > > 
> > > In this case Dom0 pages are shared with DomU. As before, DomU can only write
> > > to these pages, not any other page from Dom0, so it can be still considered
> > > safe.
> > > But, the following must be considered (highlighted in xen-front's Kernel
> > > documentation):
> > >   - If guest domain dies then pages/grants received from the backend cannot
> > >     be claimed back - think of it as memory lost to Dom0 (won't be used for
> > > any
> > >     other guest)
> > >   - Misbehaving guest may send too many requests to the backend exhausting
> > >     its grant references and memory (consider this from security POV). As the
> > >     backend runs in the trusted domain we also assume that it is trusted as
> > > well,
> > >     e.g. must take measures to prevent DDoS attacks.
> > I cannot parse the above sentence:
> > 
> > "As the backend runs in the trusted domain we also assume that it is
> > trusted as well, e.g. must take measures to prevent DDoS attacks."
> > 
> > What's the relation between being trusted and protecting from DoS
> > attacks?
> I mean that we trust the backend that it can prevent Dom0
> from crashing in case DomU's frontend misbehaves, e.g.
> if the frontend sends too many memory requests etc.
> > In any case, all? PV protocols are implemented with the frontend
> > sharing pages to the backend, and I think there's a reason why this
> > model is used, and it should continue to be used.
> This is the first use-case above. But there are real-world
> use-cases (embedded in my case) when physically contiguous memory
> needs to be shared, one of the possible ways to achieve this is
> to share contiguous memory from Dom0 to DomU (the second use-case above)
> > Having to add logic in the backend to prevent such attacks means
> > that:
> > 
> >   - We need more code in the backend, which increases complexity and
> >     chances of bugs.
> >   - Such code/logic could be wrong, thus allowing DoS.
> You can live without this code at all, but this is then up to
> backend which may make Dom0 down because of DomU's frontend doing evil
> things

IMO we should design protocols that do not allow such attacks instead
of having to defend against them.

> > > 4. xen-front/backend/xen-zcopy synchronization
> > > 
> > > 4.1. As I already said in 2) all the inter VM communication happens between
> > > xen-front and the backend, xen-zcopy is NOT involved in that.
> > > When xen-front wants to destroy a display buffer (dumb/dma-buf) it issues a
> > > XENDISPL_OP_DBUF_DESTROY command (opposite to XENDISPL_OP_DBUF_CREATE).
> > > This call is synchronous, so xen-front expects that backend does free the
> > > buffer pages on return.
> > > 
> > > 4.2. Backend, on XENDISPL_OP_DBUF_DESTROY:
> > >    - closes all dumb handles/fd's of the buffer according to [3]
> > >    - issues DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL to xen-zcopy to make
> > > sure
> > >      the buffer is freed (think of it as it waits for dma-buf->release
> > > callback)
> > So this zcopy thing keeps some kind of track of the memory usage? Why
> > can't the user-space backend keep track of the buffer usage?
> Because there is no dma-buf UAPI which allows to track the buffer life cycle
> (e.g. wait until dma-buf's .release callback is called)
> > >    - replies to xen-front that the buffer can be destroyed.
> > > This way deletion of the buffer happens synchronously on both Dom0 and DomU
> > > sides. In case if DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE returns with time-out
> > > error
> > > (BTW, wait time is a parameter of this IOCTL), Xen will defer grant
> > > reference
> > > removal and will retry later until those are free.
> > > 
> > > Hope this helps understand how buffers are synchronously deleted in case
> > > of xen-zcopy with a single protocol command.
> > > 
> > > I think the above logic can also be re-used by the hyper-dmabuf driver with
> > > some additional work:
> > > 
> > > 1. xen-zcopy can be split into 2 parts and extend:
> > > 1.1. Xen gntdev driver [4], [5] to allow creating dma-buf from grefs and
> > > vise versa,
> > I don't know much about the dma-buf implementation in Linux, but
> > gntdev is a user-space device, and AFAICT user-space applications
> > don't have any notion of dma buffers. How are such buffers useful for
> > user-space? Why can't this just be called memory?
> A dma-buf is seen by user-space as a file descriptor and you can
> pass it to different drivers then. For example, you can share a buffer
> used by a display driver for scanout with a GPU, to compose a picture
> into it:
> 1. User-space (US) allocates a display buffer from display driver
> 2. US asks display driver to export the dma-buf which backs up that buffer,
> US gets buffer's fd: dma_buf_fd
> 3. US asks GPU driver to import a buffer and provides it with dma_buf_fd
> 4. GPU renders contents into display buffer (dma_buf_fd)

After speaking with Oleksandr on IRC, I think the main usage of the
gntdev extension is to:

1. Create a dma-buf from a set of grant references.
2. Share dma-buf and get a list of grant references.

I think this set of operations could be broken into:

1.1 Map grant references into user-space using the gntdev.
1.2 Create a dma-buf out of a set of user-space virtual addresses.

2.1 Map a dma-buf into user-space.
2.2 Get grefs out of the user-space addresses where the dma-buf is
    mapped.

So it seems like what's actually missing is a way to:

 - Create a dma-buf from a list of user-space virtual addresses.
 - Allow to map a dma-buf into user-space, so it can then be used with
   the gntdev.

I think this is generic enough that it could be implemented by a
device not tied to Xen. AFAICT the hyper-dmabuf guys also wanted
something similar to this.
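
FWIW, 1.1 above is already doable with the current gntdev UAPI; here is a
minimal sketch for a single page (error handling omitted) - the only missing
piece is the step that turns such a mapping into a dma-buf:

#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <xen/gntdev.h>   /* IOCTL_GNTDEV_MAP_GRANT_REF and friends */

/* Map a single grant reference 'gref' from domain 'domid' into user space. */
static void *map_one_gref(uint32_t domid, uint32_t gref)
{
	int fd = open("/dev/xen/gntdev", O_RDWR);
	struct ioctl_gntdev_map_grant_ref map = {
		.count = 1,
		.refs[0] = { .domid = domid, .ref = gref },
	};

	if (fd < 0 || ioctl(fd, IOCTL_GNTDEV_MAP_GRANT_REF, &map))
		return NULL;

	/* map.index is the offset to pass to mmap() for this mapping */
	return mmap(NULL, getpagesize(), PROT_READ | PROT_WRITE,
		    MAP_SHARED, fd, map.index);
}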

> Finally, this is indeed some memory, but a bit more [1]
> > 
> > Also, (with my FreeBSD maintainer hat) how is this going to translate
> > to other OSes? So far the operations performed by the gntdev device
> > are mostly OS-agnostic because this just map/unmap memory, and in fact
> > they are implemented by Linux and FreeBSD.
> At the moment I can only see Linux implementation and it seems
> to be perfectly ok as we do not change Xen's APIs etc. and only
> use the existing ones (remember, we only extend gntdev/balloon
> drivers, all the changes in the Linux kernel)
> As the second note I can also think that we do not extend gntdev/balloon
> drivers and have re-worked xen-zcopy driver be a separate entity,
> say drivers/xen/dma-buf
> > > implement "wait" ioctl (wait for dma-buf->release): currently these are
> > > DRM_XEN_ZCOPY_DUMB_FROM_REFS, DRM_XEN_ZCOPY_DUMB_TO_REFS and
> > > DRM_XEN_ZCOPY_DUMB_WAIT_FREE
> > > 1.2. Xen balloon driver [6] to allow allocating contiguous buffers (not
> > > needed
> > > by current hyper-dmabuf, but is a must for xen-zcopy use-cases)
> > I think this needs clarifying. In which memory space do you need those
> > regions to be contiguous?
> Use-case: Dom0 has a HW driver which only works with contig memory
> and I want DomU to be able to directly write into that memory, thus
> implementing zero copying
> > 
> > Do they need to be contiguous in host physical memory, or guest
> > physical memory?
> Host
> > 
> > If it's in guest memory space, isn't there any generic interface that
> > you can use?
> > 
> > If it's in host physical memory space, why do you need this buffer to
> > be contiguous in host physical memory space? The IOMMU should hide all
> > this.
> There are drivers/HW which can only work with contig memory and
> if it is backed by an IOMMU then still it has to be contig in IPA
> space (real device doesn't know that it is actually IPA contig, not PA)

What's IPA contig?

Thanks, Roger.

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
@ 2018-04-18 10:10                 ` Roger Pau Monné
  0 siblings, 0 replies; 131+ messages in thread
From: Roger Pau Monné @ 2018-04-18 10:10 UTC (permalink / raw)
  To: Oleksandr Andrushchenko
  Cc: Dongwon Kim, Oleksandr_Andrushchenko, jgross, Artem Mygaiev,
	konrad.wilk, airlied, linux-kernel, dri-devel, Potrola, MateuszX,
	daniel.vetter, xen-devel, boris.ostrovsky, Matt Roper

On Wed, Apr 18, 2018 at 11:01:12AM +0300, Oleksandr Andrushchenko wrote:
> On 04/18/2018 10:35 AM, Roger Pau Monné wrote:
> > On Wed, Apr 18, 2018 at 09:38:39AM +0300, Oleksandr Andrushchenko wrote:
> > > On 04/17/2018 11:57 PM, Dongwon Kim wrote:
> > > > On Tue, Apr 17, 2018 at 09:59:28AM +0200, Daniel Vetter wrote:
> > > > > On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
> > > 3.2 Backend exports dma-buf to xen-front
> > > 
> > > In this case Dom0 pages are shared with DomU. As before, DomU can only write
> > > to these pages, not any other page from Dom0, so it can be still considered
> > > safe.
> > > But, the following must be considered (highlighted in xen-front's Kernel
> > > documentation):
> > >   - If guest domain dies then pages/grants received from the backend cannot
> > >     be claimed back - think of it as memory lost to Dom0 (won't be used for
> > > any
> > >     other guest)
> > >   - Misbehaving guest may send too many requests to the backend exhausting
> > >     its grant references and memory (consider this from security POV). As the
> > >     backend runs in the trusted domain we also assume that it is trusted as
> > > well,
> > >     e.g. must take measures to prevent DDoS attacks.
> > I cannot parse the above sentence:
> > 
> > "As the backend runs in the trusted domain we also assume that it is
> > trusted as well, e.g. must take measures to prevent DDoS attacks."
> > 
> > What's the relation between being trusted and protecting from DoS
> > attacks?
> I mean that we trust the backend that it can prevent Dom0
> from crashing in case DomU's frontend misbehaves, e.g.
> if the frontend sends too many memory requests etc.
> > In any case, all? PV protocols are implemented with the frontend
> > sharing pages to the backend, and I think there's a reason why this
> > model is used, and it should continue to be used.
> This is the first use-case above. But there are real-world
> use-cases (embedded in my case) when physically contiguous memory
> needs to be shared, one of the possible ways to achieve this is
> to share contiguous memory from Dom0 to DomU (the second use-case above)
> > Having to add logic in the backend to prevent such attacks means
> > that:
> > 
> >   - We need more code in the backend, which increases complexity and
> >     chances of bugs.
> >   - Such code/logic could be wrong, thus allowing DoS.
> You can live without this code at all, but this is then up to
> backend which may make Dom0 down because of DomU's frontend doing evil
> things

IMO we should design protocols that do not allow such attacks instead
of having to defend against them.

> > > 4. xen-front/backend/xen-zcopy synchronization
> > > 
> > > 4.1. As I already said in 2) all the inter VM communication happens between
> > > xen-front and the backend, xen-zcopy is NOT involved in that.
> > > When xen-front wants to destroy a display buffer (dumb/dma-buf) it issues a
> > > XENDISPL_OP_DBUF_DESTROY command (opposite to XENDISPL_OP_DBUF_CREATE).
> > > This call is synchronous, so xen-front expects that backend does free the
> > > buffer pages on return.
> > > 
> > > 4.2. Backend, on XENDISPL_OP_DBUF_DESTROY:
> > >    - closes all dumb handles/fd's of the buffer according to [3]
> > >    - issues DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL to xen-zcopy to make
> > > sure
> > >      the buffer is freed (think of it as it waits for dma-buf->release
> > > callback)
> > So this zcopy thing keeps some kind of track of the memory usage? Why
> > can't the user-space backend keep track of the buffer usage?
> Because there is no dma-buf UAPI which allows to track the buffer life cycle
> (e.g. wait until dma-buf's .release callback is called)
> > >    - replies to xen-front that the buffer can be destroyed.
> > > This way deletion of the buffer happens synchronously on both Dom0 and DomU
> > > sides. In case if DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE returns with time-out
> > > error
> > > (BTW, wait time is a parameter of this IOCTL), Xen will defer grant
> > > reference
> > > removal and will retry later until those are free.
> > > 
> > > Hope this helps understand how buffers are synchronously deleted in case
> > > of xen-zcopy with a single protocol command.
> > > 
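For illustration, here is a minimal user-space sketch of the backend-side
destroy path described above. The struct layout, field names and command
number below are assumptions made for the example only, not the
authoritative UAPI of the proposed xen-zcopy driver; error handling is
trimmed:

/*
 * Hedged sketch of the backend handling XENDISPL_OP_DBUF_DESTROY:
 * close local handles, then block in DUMB_WAIT_FREE until the
 * dma-buf ->release callback has fired (or the time-out expires).
 */
#include <errno.h>
#include <stdint.h>
#include <xf86drm.h>

struct xen_zcopy_dumb_wait_free {      /* assumed layout */
        uint32_t wait_handle;          /* handle obtained at FROM_REFS time */
        uint32_t wait_to_ms;           /* wait time-out, in milliseconds */
};

#define DRM_XEN_ZCOPY_DUMB_WAIT_FREE 0x02   /* assumed command number */
#define DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE \
        DRM_IOW(DRM_COMMAND_BASE + DRM_XEN_ZCOPY_DUMB_WAIT_FREE, \
                struct xen_zcopy_dumb_wait_free)

static int backend_on_dbuf_destroy(int zcopy_fd, uint32_t gem_handle,
                                   uint32_t wait_handle)
{
        struct drm_gem_close close_req = { .handle = gem_handle };
        struct xen_zcopy_dumb_wait_free wait = {
                .wait_handle = wait_handle,
                .wait_to_ms  = 3000,
        };

        /* Drop all local handles/fds referencing the buffer. */
        drmIoctl(zcopy_fd, DRM_IOCTL_GEM_CLOSE, &close_req);

        /* Block until the buffer is really freed (or time out). */
        if (drmIoctl(zcopy_fd, DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE, &wait))
                return -errno;  /* on time-out grant removal is deferred */

        /* Only now reply to xen-front that the buffer can be destroyed. */
        return 0;
}
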
> > > I think the above logic can also be re-used by the hyper-dmabuf driver with
> > > some additional work:
> > > 
> > > 1. xen-zcopy can be split into 2 parts and extend:
> > > 1.1. Xen gntdev driver [4], [5] to allow creating dma-buf from grefs and
> > > vise versa,
> > I don't know much about the dma-buf implementation in Linux, but
> > gntdev is a user-space device, and AFAICT user-space applications
> > don't have any notion of dma buffers. How are such buffers useful for
> > user-space? Why can't this just be called memory?
> A dma-buf is seen by user-space as a file descriptor and you can
> pass it to different drivers then. For example, you can share a buffer
> used by a display driver for scanout with a GPU, to compose a picture
> into it:
> 1. User-space (US) allocates a display buffer from display driver
> 2. US asks display driver to export the dma-buf which backs up that buffer,
> US gets buffer's fd: dma_buf_fd
> 3. US asks GPU driver to import a buffer and provides it with dma_buf_fd
> 4. GPU renders contents into display buffer (dma_buf_fd)
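
As a concrete illustration of the four steps above, a minimal sketch using
the standard dumb-buffer and PRIME UAPI; the device paths and buffer
dimensions are placeholders and error handling is omitted:

/* Share a display dumb buffer with a GPU through PRIME fds. */
#include <fcntl.h>
#include <stdint.h>
#include <xf86drm.h>

int main(void)
{
        int disp_fd = open("/dev/dri/card0", O_RDWR); /* display driver */
        int gpu_fd  = open("/dev/dri/card1", O_RDWR); /* GPU driver */

        /* 1. Allocate a display (dumb) buffer from the display driver. */
        struct drm_mode_create_dumb create = {
                .width = 1920, .height = 1080, .bpp = 32,
        };
        drmIoctl(disp_fd, DRM_IOCTL_MODE_CREATE_DUMB, &create);

        /* 2. Export the dma-buf backing that buffer: dma_buf_fd. */
        int dma_buf_fd;
        drmPrimeHandleToFD(disp_fd, create.handle,
                           DRM_CLOEXEC | DRM_RDWR, &dma_buf_fd);

        /* 3. Import the buffer into the GPU driver via dma_buf_fd. */
        uint32_t gpu_handle;
        drmPrimeFDToHandle(gpu_fd, dma_buf_fd, &gpu_handle);

        /* 4. The GPU now renders into the very pages the display
         *    driver scans out from - no copying involved. */
        return 0;
}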

After speaking with Oleksandr on IRC, I think the main usage of the
gntdev extension is to:

1. Create a dma-buf from a set of grant references.
2. Share dma-buf and get a list of grant references.

I think this set of operations could be broken into:

1.1 Map grant references into user-space using the gntdev.
1.2 Create a dma-buf out of a set of user-space virtual addresses.

2.1 Map a dma-buf into user-space.
2.2 Get grefs out of the user-space addresses where the dma-buf is
    mapped.

So it seems like what's actually missing is a way to:

 - Create a dma-buf from a list of user-space virtual addresses.
 - Allow to map a dma-buf into user-space, so it can then be used with
   the gntdev.

I think this is generic enough that it could be implemented by a
device not tied to Xen. AFAICT the hyper_dma guys also wanted
something similar to this.
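
To make that split a bit more concrete: step 1.1 is already possible with
the existing gntdev UAPI, while step 1.2 would need a new ioctl. A sketch
follows; the gntdev part uses the real <xen/gntdev.h> interface, whereas
the dma-buf-from-user-VA struct at the end is purely hypothetical and only
illustrates what the missing piece could look like (error handling
omitted):

#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <xen/gntdev.h>

#define PAGE_SIZE 4096

/* 1.1: map a set of grant references into user-space via gntdev. */
static void *map_grefs(int gnt_fd, uint32_t domid,
                       const uint32_t *refs, unsigned int count)
{
        struct ioctl_gntdev_map_grant_ref *op;
        unsigned int i;
        void *vaddr;

        op = calloc(1, sizeof(*op) + (count - 1) * sizeof(op->refs[0]));
        op->count = count;
        for (i = 0; i < count; i++) {
                op->refs[i].domid = domid;
                op->refs[i].ref = refs[i];
        }
        ioctl(gnt_fd, IOCTL_GNTDEV_MAP_GRANT_REF, op);

        /* Materialize the mapping in our address space. */
        vaddr = mmap(NULL, (size_t)count * PAGE_SIZE,
                     PROT_READ | PROT_WRITE, MAP_SHARED, gnt_fd, op->index);
        free(op);
        return vaddr;
}

/* 1.2: hypothetical "dma-buf from user VA" ioctl - no such UAPI exists,
 * this only sketches the missing building block discussed above. */
struct dmabuf_from_uaddr {
        uint64_t uaddr; /* start of the range returned by map_grefs()  */
        uint64_t size;  /* length in bytes (count * PAGE_SIZE)         */
        int32_t  fd;    /* out: exported dma-buf file descriptor       */
};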

> Finally, this is indeed some memory, but a bit more [1]
> > 
> > Also, (with my FreeBSD maintainer hat) how is this going to translate
> > to other OSes? So far the operations performed by the gntdev device
> > are mostly OS-agnostic because this just map/unmap memory, and in fact
> > they are implemented by Linux and FreeBSD.
> At the moment I can only see Linux implementation and it seems
> to be perfectly ok as we do not change Xen's APIs etc. and only
> use the existing ones (remember, we only extend gntdev/balloon
> drivers, all the changes in the Linux kernel)
> As the second note I can also think that we do not extend gntdev/balloon
> drivers and have re-worked xen-zcopy driver be a separate entity,
> say drivers/xen/dma-buf
> > > implement "wait" ioctl (wait for dma-buf->release): currently these are
> > > DRM_XEN_ZCOPY_DUMB_FROM_REFS, DRM_XEN_ZCOPY_DUMB_TO_REFS and
> > > DRM_XEN_ZCOPY_DUMB_WAIT_FREE
> > > 1.2. Xen balloon driver [6] to allow allocating contiguous buffers (not
> > > needed
> > > by current hyper-dmabuf, but is a must for xen-zcopy use-cases)
> > I think this needs clarifying. In which memory space do you need those
> > regions to be contiguous?
> Use-case: Dom0 has a HW driver which only works with contig memory
> and I want DomU to be able to directly write into that memory, thus
> implementing zero copying
> > 
> > Do they need to be contiguous in host physical memory, or guest
> > physical memory?
> Host
> > 
> > If it's in guest memory space, isn't there any generic interface that
> > you can use?
> > 
> > If it's in host physical memory space, why do you need this buffer to
> > be contiguous in host physical memory space? The IOMMU should hide all
> > this.
> There are drivers/HW which can only work with contig memory and
> if it is backed by an IOMMU then still it has to be contig in IPA
> space (real device doesn't know that it is actually IPA contig, not PA)

What's IPA contig?

Thanks, Roger.

^ permalink raw reply	[flat|nested] 131+ messages in thread

* RE: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-18 10:10                 ` Roger Pau Monné
  (?)
@ 2018-04-18 10:18                 ` Paul Durrant
  2018-04-18 10:21                     ` Oleksandr Andrushchenko
                                     ` (3 more replies)
  -1 siblings, 4 replies; 131+ messages in thread
From: Paul Durrant @ 2018-04-18 10:18 UTC (permalink / raw)
  To: Roger Pau Monne, Oleksandr Andrushchenko
  Cc: jgross, Artem Mygaiev, Dongwon Kim, airlied,
	Oleksandr_Andrushchenko, linux-kernel, dri-devel, Potrola,
	MateuszX, xen-devel, daniel.vetter, boris.ostrovsky, Matt Roper

> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf
> Of Roger Pau Monné
> Sent: 18 April 2018 11:11
> To: Oleksandr Andrushchenko <andr2000@gmail.com>
> Cc: jgross@suse.com; Artem Mygaiev <Artem_Mygaiev@epam.com>;
> Dongwon Kim <dongwon.kim@intel.com>; airlied@linux.ie;
> Oleksandr_Andrushchenko@epam.com; linux-kernel@vger.kernel.org; dri-
> devel@lists.freedesktop.org; Potrola, MateuszX
> <mateuszx.potrola@intel.com>; xen-devel@lists.xenproject.org;
> daniel.vetter@intel.com; boris.ostrovsky@oracle.com; Matt Roper
> <matthew.d.roper@intel.com>
> Subject: Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy
> helper DRM driver
> 
> On Wed, Apr 18, 2018 at 11:01:12AM +0300, Oleksandr Andrushchenko
> wrote:
> > On 04/18/2018 10:35 AM, Roger Pau Monné wrote:
> > > On Wed, Apr 18, 2018 at 09:38:39AM +0300, Oleksandr Andrushchenko
> wrote:
> > > > On 04/17/2018 11:57 PM, Dongwon Kim wrote:
> > > > > On Tue, Apr 17, 2018 at 09:59:28AM +0200, Daniel Vetter wrote:
> > > > > > On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
> > > > 3.2 Backend exports dma-buf to xen-front
> > > >
> > > > In this case Dom0 pages are shared with DomU. As before, DomU can
> only write
> > > > to these pages, not any other page from Dom0, so it can be still
> considered
> > > > safe.
> > > > But, the following must be considered (highlighted in xen-front's Kernel
> > > > documentation):
> > > >   - If guest domain dies then pages/grants received from the backend
> cannot
> > > >     be claimed back - think of it as memory lost to Dom0 (won't be used
> for
> > > > any
> > > >     other guest)
> > > >   - Misbehaving guest may send too many requests to the backend
> exhausting
> > > >     its grant references and memory (consider this from security POV).
> As the
> > > >     backend runs in the trusted domain we also assume that it is trusted
> as
> > > > well,
> > > >     e.g. must take measures to prevent DDoS attacks.
> > > I cannot parse the above sentence:
> > >
> > > "As the backend runs in the trusted domain we also assume that it is
> > > trusted as well, e.g. must take measures to prevent DDoS attacks."
> > >
> > > What's the relation between being trusted and protecting from DoS
> > > attacks?
> > I mean that we trust the backend that it can prevent Dom0
> > from crashing in case DomU's frontend misbehaves, e.g.
> > if the frontend sends too many memory requests etc.
> > > In any case, all? PV protocols are implemented with the frontend
> > > sharing pages to the backend, and I think there's a reason why this
> > > model is used, and it should continue to be used.
> > This is the first use-case above. But there are real-world
> > use-cases (embedded in my case) when physically contiguous memory
> > needs to be shared, one of the possible ways to achieve this is
> > to share contiguous memory from Dom0 to DomU (the second use-case
> above)
> > > Having to add logic in the backend to prevent such attacks means
> > > that:
> > >
> > >   - We need more code in the backend, which increases complexity and
> > >     chances of bugs.
> > >   - Such code/logic could be wrong, thus allowing DoS.
> > You can live without this code at all, but this is then up to
> > backend which may make Dom0 down because of DomU's frontend doing
> evil
> > things
> 
> IMO we should design protocols that do not allow such attacks instead
> of having to defend against them.
> 
> > > > 4. xen-front/backend/xen-zcopy synchronization
> > > >
> > > > 4.1. As I already said in 2) all the inter VM communication happens
> between
> > > > xen-front and the backend, xen-zcopy is NOT involved in that.
> > > > When xen-front wants to destroy a display buffer (dumb/dma-buf) it
> issues a
> > > > XENDISPL_OP_DBUF_DESTROY command (opposite to
> XENDISPL_OP_DBUF_CREATE).
> > > > This call is synchronous, so xen-front expects that backend does free
> the
> > > > buffer pages on return.
> > > >
> > > > 4.2. Backend, on XENDISPL_OP_DBUF_DESTROY:
> > > >    - closes all dumb handles/fd's of the buffer according to [3]
> > > >    - issues DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL to xen-
> zcopy to make
> > > > sure
> > > >      the buffer is freed (think of it as it waits for dma-buf->release
> > > > callback)
> > > So this zcopy thing keeps some kind of track of the memory usage? Why
> > > can't the user-space backend keep track of the buffer usage?
> > Because there is no dma-buf UAPI which allows to track the buffer life cycle
> > (e.g. wait until dma-buf's .release callback is called)
> > > >    - replies to xen-front that the buffer can be destroyed.
> > > > This way deletion of the buffer happens synchronously on both Dom0
> and DomU
> > > > sides. In case if DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE returns
> with time-out
> > > > error
> > > > (BTW, wait time is a parameter of this IOCTL), Xen will defer grant
> > > > reference
> > > > removal and will retry later until those are free.
> > > >
> > > > Hope this helps understand how buffers are synchronously deleted in
> case
> > > > of xen-zcopy with a single protocol command.
> > > >
> > > > I think the above logic can also be re-used by the hyper-dmabuf driver
> with
> > > > some additional work:
> > > >
> > > > 1. xen-zcopy can be split into 2 parts and extend:
> > > > 1.1. Xen gntdev driver [4], [5] to allow creating dma-buf from grefs and
> > > > vise versa,
> > > I don't know much about the dma-buf implementation in Linux, but
> > > gntdev is a user-space device, and AFAICT user-space applications
> > > don't have any notion of dma buffers. How are such buffers useful for
> > > user-space? Why can't this just be called memory?
> > A dma-buf is seen by user-space as a file descriptor and you can
> > pass it to different drivers then. For example, you can share a buffer
> > used by a display driver for scanout with a GPU, to compose a picture
> > into it:
> > 1. User-space (US) allocates a display buffer from display driver
> > 2. US asks display driver to export the dma-buf which backs up that buffer,
> > US gets buffer's fd: dma_buf_fd
> > 3. US asks GPU driver to import a buffer and provides it with dma_buf_fd
> > 4. GPU renders contents into display buffer (dma_buf_fd)
> 
> After speaking with Oleksandr on IRC, I think the main usage of the
> gntdev extension is to:
> 
> 1. Create a dma-buf from a set of grant references.
> 2. Share dma-buf and get a list of grant references.
> 
> I think this set of operations could be broken into:
> 
> 1.1 Map grant references into user-space using the gntdev.
> 1.2 Create a dma-buf out of a set of user-space virtual addresses.
> 
> 2.1 Map a dma-buf into user-space.
> 2.2 Get grefs out of the user-space addresses where the dma-buf is
>     mapped.
> 
> So it seems like what's actually missing is a way to:
> 
>  - Create a dma-buf from a list of user-space virtual addresses.
>  - Allow to map a dma-buf into user-space, so it can then be used with
>    the gntdev.
> 
> I think this is generic enough that it could be implemented by a
> device not tied to Xen. AFAICT the hyper_dma guys also wanted
> something similar to this.
> 
> > Finally, this is indeed some memory, but a bit more [1]
> > >
> > > Also, (with my FreeBSD maintainer hat) how is this going to translate
> > > to other OSes? So far the operations performed by the gntdev device
> > > are mostly OS-agnostic because this just map/unmap memory, and in fact
> > > they are implemented by Linux and FreeBSD.
> > At the moment I can only see Linux implementation and it seems
> > to be perfectly ok as we do not change Xen's APIs etc. and only
> > use the existing ones (remember, we only extend gntdev/balloon
> > drivers, all the changes in the Linux kernel)
> > As the second note I can also think that we do not extend gntdev/balloon
> > drivers and have re-worked xen-zcopy driver be a separate entity,
> > say drivers/xen/dma-buf
> > > > implement "wait" ioctl (wait for dma-buf->release): currently these are
> > > > DRM_XEN_ZCOPY_DUMB_FROM_REFS,
> DRM_XEN_ZCOPY_DUMB_TO_REFS and
> > > > DRM_XEN_ZCOPY_DUMB_WAIT_FREE
> > > > 1.2. Xen balloon driver [6] to allow allocating contiguous buffers (not
> > > > needed
> > > > by current hyper-dmabuf, but is a must for xen-zcopy use-cases)
> > > I think this needs clarifying. In which memory space do you need those
> > > regions to be contiguous?
> > Use-case: Dom0 has a HW driver which only works with contig memory
> > and I want DomU to be able to directly write into that memory, thus
> > implementing zero copying
> > >
> > > Do they need to be contiguous in host physical memory, or guest
> > > physical memory?
> > Host
> > >
> > > If it's in guest memory space, isn't there any generic interface that
> > > you can use?
> > >
> > > If it's in host physical memory space, why do you need this buffer to
> > > be contiguous in host physical memory space? The IOMMU should hide
> all
> > > this.
> > There are drivers/HW which can only work with contig memory and
> > if it is backed by an IOMMU then still it has to be contig in IPA
> > space (real device doesn't know that it is actually IPA contig, not PA)
> 
> What's IPA contig?

I assume 'IPA' means 'IOMMU Physical Address'. I wonder whether this means what I've termed 'Bus Address' elsewhere?

  Paul

> 
> Thanks, Roger.
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xenproject.org
> https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-18 10:18                 ` Paul Durrant
@ 2018-04-18 10:21                     ` Oleksandr Andrushchenko
  2018-04-18 10:21                   ` Oleksandr Andrushchenko
                                       ` (2 subsequent siblings)
  3 siblings, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-18 10:21 UTC (permalink / raw)
  To: Paul Durrant, Roger Pau Monne
  Cc: jgross, Artem Mygaiev, Dongwon Kim, airlied,
	Oleksandr_Andrushchenko, linux-kernel, dri-devel, Potrola,
	MateuszX, xen-devel, daniel.vetter, boris.ostrovsky, Matt Roper

On 04/18/2018 01:18 PM, Paul Durrant wrote:
>> -----Original Message-----
>> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf
>> Of Roger Pau Monné
>> Sent: 18 April 2018 11:11
>> To: Oleksandr Andrushchenko <andr2000@gmail.com>
>> Cc: jgross@suse.com; Artem Mygaiev <Artem_Mygaiev@epam.com>;
>> Dongwon Kim <dongwon.kim@intel.com>; airlied@linux.ie;
>> Oleksandr_Andrushchenko@epam.com; linux-kernel@vger.kernel.org; dri-
>> devel@lists.freedesktop.org; Potrola, MateuszX
>> <mateuszx.potrola@intel.com>; xen-devel@lists.xenproject.org;
>> daniel.vetter@intel.com; boris.ostrovsky@oracle.com; Matt Roper
>> <matthew.d.roper@intel.com>
>> Subject: Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy
>> helper DRM driver
>>
>> On Wed, Apr 18, 2018 at 11:01:12AM +0300, Oleksandr Andrushchenko
>> wrote:
>>> On 04/18/2018 10:35 AM, Roger Pau Monné wrote:
>>>> On Wed, Apr 18, 2018 at 09:38:39AM +0300, Oleksandr Andrushchenko
>> wrote:
>>>>> On 04/17/2018 11:57 PM, Dongwon Kim wrote:
>>>>>> On Tue, Apr 17, 2018 at 09:59:28AM +0200, Daniel Vetter wrote:
>>>>>>> On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
>>>>> 3.2 Backend exports dma-buf to xen-front
>>>>>
>>>>> In this case Dom0 pages are shared with DomU. As before, DomU can
>> only write
>>>>> to these pages, not any other page from Dom0, so it can be still
>> considered
>>>>> safe.
>>>>> But, the following must be considered (highlighted in xen-front's Kernel
>>>>> documentation):
>>>>>    - If guest domain dies then pages/grants received from the backend
>> cannot
>>>>>      be claimed back - think of it as memory lost to Dom0 (won't be used
>> for
>>>>> any
>>>>>      other guest)
>>>>>    - Misbehaving guest may send too many requests to the backend
>> exhausting
>>>>>      its grant references and memory (consider this from security POV).
>> As the
>>>>>      backend runs in the trusted domain we also assume that it is trusted
>> as
>>>>> well,
>>>>>      e.g. must take measures to prevent DDoS attacks.
>>>> I cannot parse the above sentence:
>>>>
>>>> "As the backend runs in the trusted domain we also assume that it is
>>>> trusted as well, e.g. must take measures to prevent DDoS attacks."
>>>>
>>>> What's the relation between being trusted and protecting from DoS
>>>> attacks?
>>> I mean that we trust the backend that it can prevent Dom0
>>> from crashing in case DomU's frontend misbehaves, e.g.
>>> if the frontend sends too many memory requests etc.
>>>> In any case, all? PV protocols are implemented with the frontend
>>>> sharing pages to the backend, and I think there's a reason why this
>>>> model is used, and it should continue to be used.
>>> This is the first use-case above. But there are real-world
>>> use-cases (embedded in my case) when physically contiguous memory
>>> needs to be shared, one of the possible ways to achieve this is
>>> to share contiguous memory from Dom0 to DomU (the second use-case
>> above)
>>>> Having to add logic in the backend to prevent such attacks means
>>>> that:
>>>>
>>>>    - We need more code in the backend, which increases complexity and
>>>>      chances of bugs.
>>>>    - Such code/logic could be wrong, thus allowing DoS.
>>> You can live without this code at all, but this is then up to
>>> backend which may make Dom0 down because of DomU's frontend doing
>> evil
>>> things
>> IMO we should design protocols that do not allow such attacks instead
>> of having to defend against them.
>>
>>>>> 4. xen-front/backend/xen-zcopy synchronization
>>>>>
>>>>> 4.1. As I already said in 2) all the inter VM communication happens
>> between
>>>>> xen-front and the backend, xen-zcopy is NOT involved in that.
>>>>> When xen-front wants to destroy a display buffer (dumb/dma-buf) it
>> issues a
>>>>> XENDISPL_OP_DBUF_DESTROY command (opposite to
>> XENDISPL_OP_DBUF_CREATE).
>>>>> This call is synchronous, so xen-front expects that backend does free
>> the
>>>>> buffer pages on return.
>>>>>
>>>>> 4.2. Backend, on XENDISPL_OP_DBUF_DESTROY:
>>>>>     - closes all dumb handles/fd's of the buffer according to [3]
>>>>>     - issues DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL to xen-
>> zcopy to make
>>>>> sure
>>>>>       the buffer is freed (think of it as it waits for dma-buf->release
>>>>> callback)
>>>> So this zcopy thing keeps some kind of track of the memory usage? Why
>>>> can't the user-space backend keep track of the buffer usage?
>>> Because there is no dma-buf UAPI which allows to track the buffer life cycle
>>> (e.g. wait until dma-buf's .release callback is called)
>>>>>     - replies to xen-front that the buffer can be destroyed.
>>>>> This way deletion of the buffer happens synchronously on both Dom0
>> and DomU
>>>>> sides. In case if DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE returns
>> with time-out
>>>>> error
>>>>> (BTW, wait time is a parameter of this IOCTL), Xen will defer grant
>>>>> reference
>>>>> removal and will retry later until those are free.
>>>>>
>>>>> Hope this helps understand how buffers are synchronously deleted in
>> case
>>>>> of xen-zcopy with a single protocol command.
>>>>>
>>>>> I think the above logic can also be re-used by the hyper-dmabuf driver
>> with
>>>>> some additional work:
>>>>>
>>>>> 1. xen-zcopy can be split into 2 parts and extend:
>>>>> 1.1. Xen gntdev driver [4], [5] to allow creating dma-buf from grefs and
>>>>> vise versa,
>>>> I don't know much about the dma-buf implementation in Linux, but
>>>> gntdev is a user-space device, and AFAICT user-space applications
>>>> don't have any notion of dma buffers. How are such buffers useful for
>>>> user-space? Why can't this just be called memory?
>>> A dma-buf is seen by user-space as a file descriptor and you can
>>> pass it to different drivers then. For example, you can share a buffer
>>> used by a display driver for scanout with a GPU, to compose a picture
>>> into it:
>>> 1. User-space (US) allocates a display buffer from display driver
>>> 2. US asks display driver to export the dma-buf which backs up that buffer,
>>> US gets buffer's fd: dma_buf_fd
>>> 3. US asks GPU driver to import a buffer and provides it with dma_buf_fd
>>> 4. GPU renders contents into display buffer (dma_buf_fd)
>> After speaking with Oleksandr on IRC, I think the main usage of the
>> gntdev extension is to:
>>
>> 1. Create a dma-buf from a set of grant references.
>> 2. Share dma-buf and get a list of grant references.
>>
>> I think this set of operations could be broken into:
>>
>> 1.1 Map grant references into user-space using the gntdev.
>> 1.2 Create a dma-buf out of a set of user-space virtual addresses.
>>
>> 2.1 Map a dma-buf into user-space.
>> 2.2 Get grefs out of the user-space addresses where the dma-buf is
>>      mapped.
>>
>> So it seems like what's actually missing is a way to:
>>
>>   - Create a dma-buf from a list of user-space virtual addresses.
>>   - Allow to map a dma-buf into user-space, so it can then be used with
>>     the gntdev.
>>
>> I think this is generic enough that it could be implemented by a
>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>> something similar to this.
>>
>>> Finally, this is indeed some memory, but a bit more [1]
>>>> Also, (with my FreeBSD maintainer hat) how is this going to translate
>>>> to other OSes? So far the operations performed by the gntdev device
>>>> are mostly OS-agnostic because this just map/unmap memory, and in fact
>>>> they are implemented by Linux and FreeBSD.
>>> At the moment I can only see Linux implementation and it seems
>>> to be perfectly ok as we do not change Xen's APIs etc. and only
>>> use the existing ones (remember, we only extend gntdev/balloon
>>> drivers, all the changes in the Linux kernel)
>>> As the second note I can also think that we do not extend gntdev/balloon
>>> drivers and have re-worked xen-zcopy driver be a separate entity,
>>> say drivers/xen/dma-buf
>>>>> implement "wait" ioctl (wait for dma-buf->release): currently these are
>>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS,
>> DRM_XEN_ZCOPY_DUMB_TO_REFS and
>>>>> DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>> 1.2. Xen balloon driver [6] to allow allocating contiguous buffers (not
>>>>> needed
>>>>> by current hyper-dmabuf, but is a must for xen-zcopy use-cases)
>>>> I think this needs clarifying. In which memory space do you need those
>>>> regions to be contiguous?
>>> Use-case: Dom0 has a HW driver which only works with contig memory
>>> and I want DomU to be able to directly write into that memory, thus
>>> implementing zero copying
>>>> Do they need to be contiguous in host physical memory, or guest
>>>> physical memory?
>>> Host
>>>> If it's in guest memory space, isn't there any generic interface that
>>>> you can use?
>>>>
>>>> If it's in host physical memory space, why do you need this buffer to
>>>> be contiguous in host physical memory space? The IOMMU should hide
>> all
>>>> this.
>>> There are drivers/HW which can only work with contig memory and
>>> if it is backed by an IOMMU then still it has to be contig in IPA
>>> space (real device doesn't know that it is actually IPA contig, not PA)
>> What's IPA contig?
> I assume 'IPA' means 'IOMMU Physical Address'. I wonder whether this means what I've termed 'Bus Address' elsewhere?
Sorry for not being clear here: I mean that the device sees a contiguous
range of Intermediate Physical Addresses (IPA).
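
For illustration of why contiguity in the address space the device actually
sees is what matters here, a minimal Dom0-side kernel sketch, assuming some
backend device 'dev' without scatter-gather support (this is generic
DMA-API usage, not part of the proposed drivers):

#include <linux/dma-mapping.h>
#include <linux/errno.h>

static void *buf_cpu;      /* CPU virtual address of the buffer       */
static dma_addr_t buf_dma; /* address as seen by the device (IPA/bus) */

static int alloc_contig_buffer(struct device *dev, size_t size)
{
        /* dma_alloc_coherent() returns memory that is contiguous in the
         * device's view; whether it is also contiguous in host physical
         * memory is hidden by the IOMMU, if one is present. */
        buf_cpu = dma_alloc_coherent(dev, size, &buf_dma, GFP_KERNEL);
        return buf_cpu ? 0 : -ENOMEM;
}

static void free_contig_buffer(struct device *dev, size_t size)
{
        dma_free_coherent(dev, size, buf_cpu, buf_dma);
}
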
>    Paul
>
>> Thanks, Roger.
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xenproject.org
>> https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-18 10:18                 ` Paul Durrant
  2018-04-18 10:21                     ` Oleksandr Andrushchenko
@ 2018-04-18 10:21                   ` Oleksandr Andrushchenko
  2018-04-18 10:39                     ` Oleksandr Andrushchenko
  2018-04-18 10:39                   ` Oleksandr Andrushchenko
  3 siblings, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-18 10:21 UTC (permalink / raw)
  To: Paul Durrant, Roger Pau Monne
  Cc: jgross, Artem Mygaiev, Dongwon Kim, Oleksandr_Andrushchenko,
	airlied, linux-kernel, dri-devel, Potrola, MateuszX,
	daniel.vetter, xen-devel, boris.ostrovsky, Matt Roper

On 04/18/2018 01:18 PM, Paul Durrant wrote:
>> -----Original Message-----
>> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf
>> Of Roger Pau Monné
>> Sent: 18 April 2018 11:11
>> To: Oleksandr Andrushchenko <andr2000@gmail.com>
>> Cc: jgross@suse.com; Artem Mygaiev <Artem_Mygaiev@epam.com>;
>> Dongwon Kim <dongwon.kim@intel.com>; airlied@linux.ie;
>> Oleksandr_Andrushchenko@epam.com; linux-kernel@vger.kernel.org; dri-
>> devel@lists.freedesktop.org; Potrola, MateuszX
>> <mateuszx.potrola@intel.com>; xen-devel@lists.xenproject.org;
>> daniel.vetter@intel.com; boris.ostrovsky@oracle.com; Matt Roper
>> <matthew.d.roper@intel.com>
>> Subject: Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy
>> helper DRM driver
>>
>> On Wed, Apr 18, 2018 at 11:01:12AM +0300, Oleksandr Andrushchenko
>> wrote:
>>> On 04/18/2018 10:35 AM, Roger Pau Monné wrote:
>>>> On Wed, Apr 18, 2018 at 09:38:39AM +0300, Oleksandr Andrushchenko
>> wrote:
>>>>> On 04/17/2018 11:57 PM, Dongwon Kim wrote:
>>>>>> On Tue, Apr 17, 2018 at 09:59:28AM +0200, Daniel Vetter wrote:
>>>>>>> On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
>>>>> 3.2 Backend exports dma-buf to xen-front
>>>>>
>>>>> In this case Dom0 pages are shared with DomU. As before, DomU can
>> only write
>>>>> to these pages, not any other page from Dom0, so it can be still
>> considered
>>>>> safe.
>>>>> But, the following must be considered (highlighted in xen-front's Kernel
>>>>> documentation):
>>>>>    - If guest domain dies then pages/grants received from the backend
>> cannot
>>>>>      be claimed back - think of it as memory lost to Dom0 (won't be used
>> for
>>>>> any
>>>>>      other guest)
>>>>>    - Misbehaving guest may send too many requests to the backend
>> exhausting
>>>>>      its grant references and memory (consider this from security POV).
>> As the
>>>>>      backend runs in the trusted domain we also assume that it is trusted
>> as
>>>>> well,
>>>>>      e.g. must take measures to prevent DDoS attacks.
>>>> I cannot parse the above sentence:
>>>>
>>>> "As the backend runs in the trusted domain we also assume that it is
>>>> trusted as well, e.g. must take measures to prevent DDoS attacks."
>>>>
>>>> What's the relation between being trusted and protecting from DoS
>>>> attacks?
>>> I mean that we trust the backend that it can prevent Dom0
>>> from crashing in case DomU's frontend misbehaves, e.g.
>>> if the frontend sends too many memory requests etc.
>>>> In any case, all? PV protocols are implemented with the frontend
>>>> sharing pages to the backend, and I think there's a reason why this
>>>> model is used, and it should continue to be used.
>>> This is the first use-case above. But there are real-world
>>> use-cases (embedded in my case) when physically contiguous memory
>>> needs to be shared, one of the possible ways to achieve this is
>>> to share contiguous memory from Dom0 to DomU (the second use-case
>> above)
>>>> Having to add logic in the backend to prevent such attacks means
>>>> that:
>>>>
>>>>    - We need more code in the backend, which increases complexity and
>>>>      chances of bugs.
>>>>    - Such code/logic could be wrong, thus allowing DoS.
>>> You can live without this code at all, but this is then up to
>>> backend which may make Dom0 down because of DomU's frontend doing
>> evil
>>> things
>> IMO we should design protocols that do not allow such attacks instead
>> of having to defend against them.
>>
>>>>> 4. xen-front/backend/xen-zcopy synchronization
>>>>>
>>>>> 4.1. As I already said in 2) all the inter VM communication happens
>> between
>>>>> xen-front and the backend, xen-zcopy is NOT involved in that.
>>>>> When xen-front wants to destroy a display buffer (dumb/dma-buf) it
>> issues a
>>>>> XENDISPL_OP_DBUF_DESTROY command (opposite to
>> XENDISPL_OP_DBUF_CREATE).
>>>>> This call is synchronous, so xen-front expects that backend does free
>> the
>>>>> buffer pages on return.
>>>>>
>>>>> 4.2. Backend, on XENDISPL_OP_DBUF_DESTROY:
>>>>>     - closes all dumb handles/fd's of the buffer according to [3]
>>>>>     - issues DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL to xen-
>> zcopy to make
>>>>> sure
>>>>>       the buffer is freed (think of it as it waits for dma-buf->release
>>>>> callback)
>>>> So this zcopy thing keeps some kind of track of the memory usage? Why
>>>> can't the user-space backend keep track of the buffer usage?
>>> Because there is no dma-buf UAPI which allows to track the buffer life cycle
>>> (e.g. wait until dma-buf's .release callback is called)
>>>>>     - replies to xen-front that the buffer can be destroyed.
>>>>> This way deletion of the buffer happens synchronously on both Dom0
>> and DomU
>>>>> sides. In case if DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE returns
>> with time-out
>>>>> error
>>>>> (BTW, wait time is a parameter of this IOCTL), Xen will defer grant
>>>>> reference
>>>>> removal and will retry later until those are free.
>>>>>
>>>>> Hope this helps understand how buffers are synchronously deleted in
>> case
>>>>> of xen-zcopy with a single protocol command.
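
To make 4.2 a bit more concrete, a rough user-space sketch of what the
backend could do on XENDISPL_OP_DBUF_DESTROY is below. Error handling is
stripped, and the wait_handle/wait_to_ms field names are only my assumption
of the WAIT_FREE IOCTL's argument layout (please see the UAPI header in the
patch for the actual one):

#include <errno.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <drm/xen_zcopy_drm.h>	/* UAPI header from the patch */

static int backend_on_dbuf_destroy(int zcopy_fd, int dumb_buf_fd,
				   uint32_t wait_handle)
{
	/* Assumed argument layout of the WAIT_FREE IOCTL. */
	struct drm_xen_zcopy_dumb_wait_free wait = {
		.wait_handle = wait_handle,	/* handle got at buffer creation */
		.wait_to_ms = 3000,		/* wait time is an IOCTL parameter */
	};

	/* Close all dumb handles/fd's of the buffer according to [3]. */
	close(dumb_buf_fd);

	/* Block until dma-buf's .release callback has run or time-out hits;
	 * on time-out Xen defers grant reference removal and retries later.
	 */
	if (ioctl(zcopy_fd, DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE, &wait) < 0)
		return -errno;

	/* Only now reply to xen-front that the buffer can be destroyed. */
	return 0;
}
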
>>>>>
>>>>> I think the above logic can also be re-used by the hyper-dmabuf driver
>> with
>>>>> some additional work:
>>>>>
>>>>> 1. xen-zcopy can be split into 2 parts and extend:
>>>>> 1.1. Xen gntdev driver [4], [5] to allow creating dma-buf from grefs and
>>>>> vise versa,
>>>> I don't know much about the dma-buf implementation in Linux, but
>>>> gntdev is a user-space device, and AFAICT user-space applications
>>>> don't have any notion of dma buffers. How are such buffers useful for
>>>> user-space? Why can't this just be called memory?
>>> A dma-buf is seen by user-space as a file descriptor and you can
>>> pass it to different drivers then. For example, you can share a buffer
>>> used by a display driver for scanout with a GPU, to compose a picture
>>> into it:
>>> 1. User-space (US) allocates a display buffer from display driver
>>> 2. US asks display driver to export the dma-buf which backs up that buffer,
>>> US gets buffer's fd: dma_buf_fd
>>> 3. US asks GPU driver to import a buffer and provides it with dma_buf_fd
>>> 4. GPU renders contents into display buffer (dma_buf_fd)
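
For reference, steps 1-4 above look roughly like this from user-space when
both ends are DRM drivers; this is only a sketch with made-up device paths
and buffer size, and no error handling:

#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <xf86drm.h>

int share_scanout_buffer(void)
{
	int disp = open("/dev/dri/card0", O_RDWR | O_CLOEXEC); /* display */
	int gpu = open("/dev/dri/card1", O_RDWR | O_CLOEXEC);  /* GPU */

	/* 1. US allocates a display (dumb) buffer from the display driver. */
	struct drm_mode_create_dumb create = {
		.width = 1920, .height = 1080, .bpp = 32,
	};
	ioctl(disp, DRM_IOCTL_MODE_CREATE_DUMB, &create);

	/* 2. US asks the display driver to export the dma-buf which backs
	 * up that buffer and gets the buffer's fd: dma_buf_fd.
	 */
	int dma_buf_fd;
	drmPrimeHandleToFD(disp, create.handle, DRM_CLOEXEC, &dma_buf_fd);

	/* 3. US asks the GPU driver to import the buffer via dma_buf_fd. */
	uint32_t gpu_handle;
	drmPrimeFDToHandle(gpu, dma_buf_fd, &gpu_handle);

	/* 4. The GPU can now render into the pages the display scans out. */
	return 0;
}
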
>> After speaking with Oleksandr on IRC, I think the main usage of the
>> gntdev extension is to:
>>
>> 1. Create a dma-buf from a set of grant references.
>> 2. Share dma-buf and get a list of grant references.
>>
>> I think this set of operations could be broken into:
>>
>> 1.1 Map grant references into user-space using the gntdev.
>> 1.2 Create a dma-buf out of a set of user-space virtual addresses.
>>
>> 2.1 Map a dma-buf into user-space.
>> 2.2 Get grefs out of the user-space addresses where the dma-buf is
>>      mapped.
>>
>> So it seems like what's actually missing is a way to:
>>
>>   - Create a dma-buf from a list of user-space virtual addresses.
>>   - Allow to map a dma-buf into user-space, so it can then be used with
>>     the gntdev.
>>
>> I think this is generic enough that it could be implemented by a
>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>> something similar to this.
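
Just to visualize the kind of gntdev extension discussed here, below is a
purely hypothetical sketch of the two operations as UAPI structures; nothing
like this exists in gntdev today and the names are made up:

#include <stdint.h>

/* 1. Create a dma-buf from a set of grant references. */
struct gntdev_dmabuf_export_from_refs {
	uint32_t domid;		/* domain which granted the pages */
	uint32_t count;		/* number of grant references below */
	int32_t fd;		/* out: exported dma-buf file descriptor */
	uint32_t refs[];	/* grant references backing the buffer */
};

/* 2. Share a dma-buf and get a list of grant references back. */
struct gntdev_dmabuf_import_to_refs {
	int32_t fd;		/* dma-buf to be granted to another domain */
	uint32_t domid;		/* domain the grants are issued for */
	uint32_t count;		/* number of pages expected in the buffer */
	uint32_t refs[];	/* out: grant references for the buffer pages */
};
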
>>
>>> Finally, this is indeed some memory, but a bit more [1]
>>>> Also, (with my FreeBSD maintainer hat) how is this going to translate
>>>> to other OSes? So far the operations performed by the gntdev device
>>>> are mostly OS-agnostic because this just map/unmap memory, and in fact
>>>> they are implemented by Linux and FreeBSD.
>>> At the moment I can only see Linux implementation and it seems
>>> to be perfectly ok as we do not change Xen's APIs etc. and only
>>> use the existing ones (remember, we only extend gntdev/balloon
>>> drivers, all the changes in the Linux kernel)
>>> As the second note I can also think that we do not extend gntdev/balloon
>>> drivers and have re-worked xen-zcopy driver be a separate entity,
>>> say drivers/xen/dma-buf
>>>>> implement "wait" ioctl (wait for dma-buf->release): currently these are
>>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS,
>> DRM_XEN_ZCOPY_DUMB_TO_REFS and
>>>>> DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>> 1.2. Xen balloon driver [6] to allow allocating contiguous buffers (not
>>>>> needed
>>>>> by current hyper-dmabuf, but is a must for xen-zcopy use-cases)
>>>> I think this needs clarifying. In which memory space do you need those
>>>> regions to be contiguous?
>>> Use-case: Dom0 has a HW driver which only works with contig memory
>>> and I want DomU to be able to directly write into that memory, thus
>>> implementing zero copying
>>>> Do they need to be contiguous in host physical memory, or guest
>>>> physical memory?
>>> Host
>>>> If it's in guest memory space, isn't there any generic interface that
>>>> you can use?
>>>>
>>>> If it's in host physical memory space, why do you need this buffer to
>>>> be contiguous in host physical memory space? The IOMMU should hide
>> all
>>>> this.
>>> There are drivers/HW which can only work with contig memory and
>>> if it is backed by an IOMMU then still it has to be contig in IPA
>>> space (real device doesn't know that it is actually IPA contig, not PA)
>> What's IPA contig?
> I assume 'IPA' means 'IOMMU Physical Address'. I wonder whether this means what I've termed 'Bus Address' elsewhere?
Sorry for not being clear here: I mean that the device sees a contiguous
range of Intermediate Physical Addresses (IPAs).
>    Paul
>
>> Thanks, Roger.
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xenproject.org
>> https://lists.xenproject.org/mailman/listinfo/xen-devel



* RE: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-18 10:21                     ` Oleksandr Andrushchenko
  (?)
@ 2018-04-18 10:23                     ` Paul Durrant
  2018-04-18 10:31                       ` Oleksandr Andrushchenko
  2018-04-18 10:31                       ` Oleksandr Andrushchenko
  -1 siblings, 2 replies; 131+ messages in thread
From: Paul Durrant @ 2018-04-18 10:23 UTC (permalink / raw)
  To: 'Oleksandr Andrushchenko', Roger Pau Monne
  Cc: jgross, Artem Mygaiev, Dongwon Kim, airlied,
	Oleksandr_Andrushchenko, linux-kernel, dri-devel, Potrola,
	MateuszX, xen-devel, daniel.vetter, boris.ostrovsky, Matt Roper

> -----Original Message-----
> From: Oleksandr Andrushchenko [mailto:andr2000@gmail.com]
> Sent: 18 April 2018 11:21
> To: Paul Durrant <Paul.Durrant@citrix.com>; Roger Pau Monne
> <roger.pau@citrix.com>
> Cc: jgross@suse.com; Artem Mygaiev <Artem_Mygaiev@epam.com>;
> Dongwon Kim <dongwon.kim@intel.com>; airlied@linux.ie;
> Oleksandr_Andrushchenko@epam.com; linux-kernel@vger.kernel.org; dri-
> devel@lists.freedesktop.org; Potrola, MateuszX
> <mateuszx.potrola@intel.com>; xen-devel@lists.xenproject.org;
> daniel.vetter@intel.com; boris.ostrovsky@oracle.com; Matt Roper
> <matthew.d.roper@intel.com>
> Subject: Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy
> helper DRM driver
> 
> On 04/18/2018 01:18 PM, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On
> Behalf
> >> Of Roger Pau Monné
> >> Sent: 18 April 2018 11:11
> >> To: Oleksandr Andrushchenko <andr2000@gmail.com>
> >> Cc: jgross@suse.com; Artem Mygaiev <Artem_Mygaiev@epam.com>;
> >> Dongwon Kim <dongwon.kim@intel.com>; airlied@linux.ie;
> >> Oleksandr_Andrushchenko@epam.com; linux-kernel@vger.kernel.org;
> dri-
> >> devel@lists.freedesktop.org; Potrola, MateuszX
> >> <mateuszx.potrola@intel.com>; xen-devel@lists.xenproject.org;
> >> daniel.vetter@intel.com; boris.ostrovsky@oracle.com; Matt Roper
> >> <matthew.d.roper@intel.com>
> >> Subject: Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy
> >> helper DRM driver
> >>
> >> On Wed, Apr 18, 2018 at 11:01:12AM +0300, Oleksandr Andrushchenko
> >> wrote:
> >>> On 04/18/2018 10:35 AM, Roger Pau Monné wrote:
> >>>> On Wed, Apr 18, 2018 at 09:38:39AM +0300, Oleksandr Andrushchenko
> >> wrote:
> >>>>> On 04/17/2018 11:57 PM, Dongwon Kim wrote:
> >>>>>> On Tue, Apr 17, 2018 at 09:59:28AM +0200, Daniel Vetter wrote:
> >>>>>>> On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
> >>>>> 3.2 Backend exports dma-buf to xen-front
> >>>>>
> >>>>> In this case Dom0 pages are shared with DomU. As before, DomU can
> >> only write
> >>>>> to these pages, not any other page from Dom0, so it can be still
> >> considered
> >>>>> safe.
> >>>>> But, the following must be considered (highlighted in xen-front's
> Kernel
> >>>>> documentation):
> >>>>>    - If guest domain dies then pages/grants received from the backend
> >> cannot
> >>>>>      be claimed back - think of it as memory lost to Dom0 (won't be used
> >> for
> >>>>> any
> >>>>>      other guest)
> >>>>>    - Misbehaving guest may send too many requests to the backend
> >> exhausting
> >>>>>      its grant references and memory (consider this from security POV).
> >> As the
> >>>>>      backend runs in the trusted domain we also assume that it is
> trusted
> >> as
> >>>>> well,
> >>>>>      e.g. must take measures to prevent DDoS attacks.
> >>>> I cannot parse the above sentence:
> >>>>
> >>>> "As the backend runs in the trusted domain we also assume that it is
> >>>> trusted as well, e.g. must take measures to prevent DDoS attacks."
> >>>>
> >>>> What's the relation between being trusted and protecting from DoS
> >>>> attacks?
> >>> I mean that we trust the backend that it can prevent Dom0
> >>> from crashing in case DomU's frontend misbehaves, e.g.
> >>> if the frontend sends too many memory requests etc.
> >>>> In any case, all? PV protocols are implemented with the frontend
> >>>> sharing pages to the backend, and I think there's a reason why this
> >>>> model is used, and it should continue to be used.
> >>> This is the first use-case above. But there are real-world
> >>> use-cases (embedded in my case) when physically contiguous memory
> >>> needs to be shared, one of the possible ways to achieve this is
> >>> to share contiguous memory from Dom0 to DomU (the second use-case
> >> above)
> >>>> Having to add logic in the backend to prevent such attacks means
> >>>> that:
> >>>>
> >>>>    - We need more code in the backend, which increases complexity and
> >>>>      chances of bugs.
> >>>>    - Such code/logic could be wrong, thus allowing DoS.
> >>> You can live without this code at all, but this is then up to
> >>> backend which may make Dom0 down because of DomU's frontend
> doing
> >> evil
> >>> things
> >> IMO we should design protocols that do not allow such attacks instead
> >> of having to defend against them.
> >>
> >>>>> 4. xen-front/backend/xen-zcopy synchronization
> >>>>>
> >>>>> 4.1. As I already said in 2) all the inter VM communication happens
> >> between
> >>>>> xen-front and the backend, xen-zcopy is NOT involved in that.
> >>>>> When xen-front wants to destroy a display buffer (dumb/dma-buf) it
> >> issues a
> >>>>> XENDISPL_OP_DBUF_DESTROY command (opposite to
> >> XENDISPL_OP_DBUF_CREATE).
> >>>>> This call is synchronous, so xen-front expects that backend does free
> >> the
> >>>>> buffer pages on return.
> >>>>>
> >>>>> 4.2. Backend, on XENDISPL_OP_DBUF_DESTROY:
> >>>>>     - closes all dumb handles/fd's of the buffer according to [3]
> >>>>>     - issues DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL to xen-
> >> zcopy to make
> >>>>> sure
> >>>>>       the buffer is freed (think of it as it waits for dma-buf->release
> >>>>> callback)
> >>>> So this zcopy thing keeps some kind of track of the memory usage?
> Why
> >>>> can't the user-space backend keep track of the buffer usage?
> >>> Because there is no dma-buf UAPI which allows to track the buffer life
> cycle
> >>> (e.g. wait until dma-buf's .release callback is called)
> >>>>>     - replies to xen-front that the buffer can be destroyed.
> >>>>> This way deletion of the buffer happens synchronously on both Dom0
> >> and DomU
> >>>>> sides. In case if DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE returns
> >> with time-out
> >>>>> error
> >>>>> (BTW, wait time is a parameter of this IOCTL), Xen will defer grant
> >>>>> reference
> >>>>> removal and will retry later until those are free.
> >>>>>
> >>>>> Hope this helps understand how buffers are synchronously deleted in
> >> case
> >>>>> of xen-zcopy with a single protocol command.
> >>>>>
> >>>>> I think the above logic can also be re-used by the hyper-dmabuf driver
> >> with
> >>>>> some additional work:
> >>>>>
> >>>>> 1. xen-zcopy can be split into 2 parts and extend:
> >>>>> 1.1. Xen gntdev driver [4], [5] to allow creating dma-buf from grefs
> and
> >>>>> vise versa,
> >>>> I don't know much about the dma-buf implementation in Linux, but
> >>>> gntdev is a user-space device, and AFAICT user-space applications
> >>>> don't have any notion of dma buffers. How are such buffers useful for
> >>>> user-space? Why can't this just be called memory?
> >>> A dma-buf is seen by user-space as a file descriptor and you can
> >>> pass it to different drivers then. For example, you can share a buffer
> >>> used by a display driver for scanout with a GPU, to compose a picture
> >>> into it:
> >>> 1. User-space (US) allocates a display buffer from display driver
> >>> 2. US asks display driver to export the dma-buf which backs up that
> buffer,
> >>> US gets buffer's fd: dma_buf_fd
> >>> 3. US asks GPU driver to import a buffer and provides it with
> dma_buf_fd
> >>> 4. GPU renders contents into display buffer (dma_buf_fd)
> >> After speaking with Oleksandr on IRC, I think the main usage of the
> >> gntdev extension is to:
> >>
> >> 1. Create a dma-buf from a set of grant references.
> >> 2. Share dma-buf and get a list of grant references.
> >>
> >> I think this set of operations could be broken into:
> >>
> >> 1.1 Map grant references into user-space using the gntdev.
> >> 1.2 Create a dma-buf out of a set of user-space virtual addresses.
> >>
> >> 2.1 Map a dma-buf into user-space.
> >> 2.2 Get grefs out of the user-space addresses where the dma-buf is
> >>      mapped.
> >>
> >> So it seems like what's actually missing is a way to:
> >>
> >>   - Create a dma-buf from a list of user-space virtual addresses.
> >>   - Allow to map a dma-buf into user-space, so it can then be used with
> >>     the gntdev.
> >>
> >> I think this is generic enough that it could be implemented by a
> >> device not tied to Xen. AFAICT the hyper_dma guys also wanted
> >> something similar to this.
> >>
> >>> Finally, this is indeed some memory, but a bit more [1]
> >>>> Also, (with my FreeBSD maintainer hat) how is this going to translate
> >>>> to other OSes? So far the operations performed by the gntdev device
> >>>> are mostly OS-agnostic because this just map/unmap memory, and in
> fact
> >>>> they are implemented by Linux and FreeBSD.
> >>> At the moment I can only see Linux implementation and it seems
> >>> to be perfectly ok as we do not change Xen's APIs etc. and only
> >>> use the existing ones (remember, we only extend gntdev/balloon
> >>> drivers, all the changes in the Linux kernel)
> >>> As the second note I can also think that we do not extend
> gntdev/balloon
> >>> drivers and have re-worked xen-zcopy driver be a separate entity,
> >>> say drivers/xen/dma-buf
> >>>>> implement "wait" ioctl (wait for dma-buf->release): currently these
> are
> >>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS,
> >> DRM_XEN_ZCOPY_DUMB_TO_REFS and
> >>>>> DRM_XEN_ZCOPY_DUMB_WAIT_FREE
> >>>>> 1.2. Xen balloon driver [6] to allow allocating contiguous buffers (not
> >>>>> needed
> >>>>> by current hyper-dmabuf, but is a must for xen-zcopy use-cases)
> >>>> I think this needs clarifying. In which memory space do you need those
> >>>> regions to be contiguous?
> >>> Use-case: Dom0 has a HW driver which only works with contig memory
> >>> and I want DomU to be able to directly write into that memory, thus
> >>> implementing zero copying
> >>>> Do they need to be contiguous in host physical memory, or guest
> >>>> physical memory?
> >>> Host
> >>>> If it's in guest memory space, isn't there any generic interface that
> >>>> you can use?
> >>>>
> >>>> If it's in host physical memory space, why do you need this buffer to
> >>>> be contiguous in host physical memory space? The IOMMU should hide
> >> all
> >>>> this.
> >>> There are drivers/HW which can only work with contig memory and
> >>> if it is backed by an IOMMU then still it has to be contig in IPA
> >>> space (real device doesn't know that it is actually IPA contig, not PA)
> >> What's IPA contig?
> > I assume 'IPA' means 'IOMMU Physical Address'. I wonder whether this
> means what I've termed 'Bus Address' elsewhere?
> sorry for not being clear here: I mean that the device sees contiguous
> range of
> Intermediate Phys Addresses

Still not clear (to me at least) what that means. Are you talking about the address space used by the device? If so, that is essentially virtual address space translated by the IOMMU, and we have generally termed this 'bus address space'.

  Paul

> >    Paul
> >
> >> Thanks, Roger.
> >>
> >> _______________________________________________
> >> Xen-devel mailing list
> >> Xen-devel@lists.xenproject.org
> >> https://lists.xenproject.org/mailman/listinfo/xen-devel


* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-18 10:21                     ` Oleksandr Andrushchenko
  (?)
  (?)
@ 2018-04-18 10:23                     ` Paul Durrant
  -1 siblings, 0 replies; 131+ messages in thread
From: Paul Durrant @ 2018-04-18 10:23 UTC (permalink / raw)
  To: 'Oleksandr Andrushchenko', Roger Pau Monne
  Cc: jgross, Artem Mygaiev, Dongwon Kim, Oleksandr_Andrushchenko,
	airlied, linux-kernel, dri-devel, Potrola, MateuszX,
	daniel.vetter, xen-devel, boris.ostrovsky, Matt Roper

> -----Original Message-----
> From: Oleksandr Andrushchenko [mailto:andr2000@gmail.com]
> Sent: 18 April 2018 11:21
> To: Paul Durrant <Paul.Durrant@citrix.com>; Roger Pau Monne
> <roger.pau@citrix.com>
> Cc: jgross@suse.com; Artem Mygaiev <Artem_Mygaiev@epam.com>;
> Dongwon Kim <dongwon.kim@intel.com>; airlied@linux.ie;
> Oleksandr_Andrushchenko@epam.com; linux-kernel@vger.kernel.org; dri-
> devel@lists.freedesktop.org; Potrola, MateuszX
> <mateuszx.potrola@intel.com>; xen-devel@lists.xenproject.org;
> daniel.vetter@intel.com; boris.ostrovsky@oracle.com; Matt Roper
> <matthew.d.roper@intel.com>
> Subject: Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy
> helper DRM driver
> 
> On 04/18/2018 01:18 PM, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On
> Behalf
> >> Of Roger Pau Monné
> >> Sent: 18 April 2018 11:11
> >> To: Oleksandr Andrushchenko <andr2000@gmail.com>
> >> Cc: jgross@suse.com; Artem Mygaiev <Artem_Mygaiev@epam.com>;
> >> Dongwon Kim <dongwon.kim@intel.com>; airlied@linux.ie;
> >> Oleksandr_Andrushchenko@epam.com; linux-kernel@vger.kernel.org;
> dri-
> >> devel@lists.freedesktop.org; Potrola, MateuszX
> >> <mateuszx.potrola@intel.com>; xen-devel@lists.xenproject.org;
> >> daniel.vetter@intel.com; boris.ostrovsky@oracle.com; Matt Roper
> >> <matthew.d.roper@intel.com>
> >> Subject: Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy
> >> helper DRM driver
> >>
> >> On Wed, Apr 18, 2018 at 11:01:12AM +0300, Oleksandr Andrushchenko
> >> wrote:
> >>> On 04/18/2018 10:35 AM, Roger Pau Monné wrote:
> >>>> On Wed, Apr 18, 2018 at 09:38:39AM +0300, Oleksandr Andrushchenko
> >> wrote:
> >>>>> On 04/17/2018 11:57 PM, Dongwon Kim wrote:
> >>>>>> On Tue, Apr 17, 2018 at 09:59:28AM +0200, Daniel Vetter wrote:
> >>>>>>> On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
> >>>>> 3.2 Backend exports dma-buf to xen-front
> >>>>>
> >>>>> In this case Dom0 pages are shared with DomU. As before, DomU can
> >> only write
> >>>>> to these pages, not any other page from Dom0, so it can be still
> >> considered
> >>>>> safe.
> >>>>> But, the following must be considered (highlighted in xen-front's
> Kernel
> >>>>> documentation):
> >>>>>    - If guest domain dies then pages/grants received from the backend
> >> cannot
> >>>>>      be claimed back - think of it as memory lost to Dom0 (won't be used
> >> for
> >>>>> any
> >>>>>      other guest)
> >>>>>    - Misbehaving guest may send too many requests to the backend
> >> exhausting
> >>>>>      its grant references and memory (consider this from security POV).
> >> As the
> >>>>>      backend runs in the trusted domain we also assume that it is
> trusted
> >> as
> >>>>> well,
> >>>>>      e.g. must take measures to prevent DDoS attacks.
> >>>> I cannot parse the above sentence:
> >>>>
> >>>> "As the backend runs in the trusted domain we also assume that it is
> >>>> trusted as well, e.g. must take measures to prevent DDoS attacks."
> >>>>
> >>>> What's the relation between being trusted and protecting from DoS
> >>>> attacks?
> >>> I mean that we trust the backend that it can prevent Dom0
> >>> from crashing in case DomU's frontend misbehaves, e.g.
> >>> if the frontend sends too many memory requests etc.
> >>>> In any case, all? PV protocols are implemented with the frontend
> >>>> sharing pages to the backend, and I think there's a reason why this
> >>>> model is used, and it should continue to be used.
> >>> This is the first use-case above. But there are real-world
> >>> use-cases (embedded in my case) when physically contiguous memory
> >>> needs to be shared, one of the possible ways to achieve this is
> >>> to share contiguous memory from Dom0 to DomU (the second use-case
> >> above)
> >>>> Having to add logic in the backend to prevent such attacks means
> >>>> that:
> >>>>
> >>>>    - We need more code in the backend, which increases complexity and
> >>>>      chances of bugs.
> >>>>    - Such code/logic could be wrong, thus allowing DoS.
> >>> You can live without this code at all, but this is then up to
> >>> backend which may make Dom0 down because of DomU's frontend
> doing
> >> evil
> >>> things
> >> IMO we should design protocols that do not allow such attacks instead
> >> of having to defend against them.
> >>
> >>>>> 4. xen-front/backend/xen-zcopy synchronization
> >>>>>
> >>>>> 4.1. As I already said in 2) all the inter VM communication happens
> >> between
> >>>>> xen-front and the backend, xen-zcopy is NOT involved in that.
> >>>>> When xen-front wants to destroy a display buffer (dumb/dma-buf) it
> >> issues a
> >>>>> XENDISPL_OP_DBUF_DESTROY command (opposite to
> >> XENDISPL_OP_DBUF_CREATE).
> >>>>> This call is synchronous, so xen-front expects that backend does free
> >> the
> >>>>> buffer pages on return.
> >>>>>
> >>>>> 4.2. Backend, on XENDISPL_OP_DBUF_DESTROY:
> >>>>>     - closes all dumb handles/fd's of the buffer according to [3]
> >>>>>     - issues DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL to xen-
> >> zcopy to make
> >>>>> sure
> >>>>>       the buffer is freed (think of it as it waits for dma-buf->release
> >>>>> callback)
> >>>> So this zcopy thing keeps some kind of track of the memory usage?
> Why
> >>>> can't the user-space backend keep track of the buffer usage?
> >>> Because there is no dma-buf UAPI which allows to track the buffer life
> cycle
> >>> (e.g. wait until dma-buf's .release callback is called)
> >>>>>     - replies to xen-front that the buffer can be destroyed.
> >>>>> This way deletion of the buffer happens synchronously on both Dom0
> >> and DomU
> >>>>> sides. In case if DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE returns
> >> with time-out
> >>>>> error
> >>>>> (BTW, wait time is a parameter of this IOCTL), Xen will defer grant
> >>>>> reference
> >>>>> removal and will retry later until those are free.
> >>>>>
> >>>>> Hope this helps understand how buffers are synchronously deleted in
> >> case
> >>>>> of xen-zcopy with a single protocol command.
> >>>>>
> >>>>> I think the above logic can also be re-used by the hyper-dmabuf driver
> >> with
> >>>>> some additional work:
> >>>>>
> >>>>> 1. xen-zcopy can be split into 2 parts and extend:
> >>>>> 1.1. Xen gntdev driver [4], [5] to allow creating dma-buf from grefs
> and
> >>>>> vise versa,
> >>>> I don't know much about the dma-buf implementation in Linux, but
> >>>> gntdev is a user-space device, and AFAICT user-space applications
> >>>> don't have any notion of dma buffers. How are such buffers useful for
> >>>> user-space? Why can't this just be called memory?
> >>> A dma-buf is seen by user-space as a file descriptor and you can
> >>> pass it to different drivers then. For example, you can share a buffer
> >>> used by a display driver for scanout with a GPU, to compose a picture
> >>> into it:
> >>> 1. User-space (US) allocates a display buffer from display driver
> >>> 2. US asks display driver to export the dma-buf which backs up that
> buffer,
> >>> US gets buffer's fd: dma_buf_fd
> >>> 3. US asks GPU driver to import a buffer and provides it with
> dma_buf_fd
> >>> 4. GPU renders contents into display buffer (dma_buf_fd)
> >> After speaking with Oleksandr on IRC, I think the main usage of the
> >> gntdev extension is to:
> >>
> >> 1. Create a dma-buf from a set of grant references.
> >> 2. Share dma-buf and get a list of grant references.
> >>
> >> I think this set of operations could be broken into:
> >>
> >> 1.1 Map grant references into user-space using the gntdev.
> >> 1.2 Create a dma-buf out of a set of user-space virtual addresses.
> >>
> >> 2.1 Map a dma-buf into user-space.
> >> 2.2 Get grefs out of the user-space addresses where the dma-buf is
> >>      mapped.
> >>
> >> So it seems like what's actually missing is a way to:
> >>
> >>   - Create a dma-buf from a list of user-space virtual addresses.
> >>   - Allow to map a dma-buf into user-space, so it can then be used with
> >>     the gntdev.
> >>
> >> I think this is generic enough that it could be implemented by a
> >> device not tied to Xen. AFAICT the hyper_dma guys also wanted
> >> something similar to this.
> >>
> >>> Finally, this is indeed some memory, but a bit more [1]
> >>>> Also, (with my FreeBSD maintainer hat) how is this going to translate
> >>>> to other OSes? So far the operations performed by the gntdev device
> >>>> are mostly OS-agnostic because this just map/unmap memory, and in
> fact
> >>>> they are implemented by Linux and FreeBSD.
> >>> At the moment I can only see Linux implementation and it seems
> >>> to be perfectly ok as we do not change Xen's APIs etc. and only
> >>> use the existing ones (remember, we only extend gntdev/balloon
> >>> drivers, all the changes in the Linux kernel)
> >>> As the second note I can also think that we do not extend
> gntdev/balloon
> >>> drivers and have re-worked xen-zcopy driver be a separate entity,
> >>> say drivers/xen/dma-buf
> >>>>> implement "wait" ioctl (wait for dma-buf->release): currently these
> are
> >>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS,
> >> DRM_XEN_ZCOPY_DUMB_TO_REFS and
> >>>>> DRM_XEN_ZCOPY_DUMB_WAIT_FREE
> >>>>> 1.2. Xen balloon driver [6] to allow allocating contiguous buffers (not
> >>>>> needed
> >>>>> by current hyper-dmabuf, but is a must for xen-zcopy use-cases)
> >>>> I think this needs clarifying. In which memory space do you need those
> >>>> regions to be contiguous?
> >>> Use-case: Dom0 has a HW driver which only works with contig memory
> >>> and I want DomU to be able to directly write into that memory, thus
> >>> implementing zero copying
> >>>> Do they need to be contiguous in host physical memory, or guest
> >>>> physical memory?
> >>> Host
> >>>> If it's in guest memory space, isn't there any generic interface that
> >>>> you can use?
> >>>>
> >>>> If it's in host physical memory space, why do you need this buffer to
> >>>> be contiguous in host physical memory space? The IOMMU should hide
> >> all
> >>>> this.
> >>> There are drivers/HW which can only work with contig memory and
> >>> if it is backed by an IOMMU then still it has to be contig in IPA
> >>> space (real device doesn't know that it is actually IPA contig, not PA)
> >> What's IPA contig?
> > I assume 'IPA' means 'IOMMU Physical Address'. I wonder whether this
> means what I've termed 'Bus Address' elsewhere?
> sorry for not being clear here: I mean that the device sees contiguous
> range of
> Intermediate Phys Addresses

Still not clear (to me at least) what that means. Are you talking about the address space used by the device? If so, that is essentially virtual address space translated by the IOMMU, and we have generally termed this 'bus address space'.

  Paul

> >    Paul
> >
> >> Thanks, Roger.
> >>
> >> _______________________________________________
> >> Xen-devel mailing list
> >> Xen-devel@lists.xenproject.org
> >> https://lists.xenproject.org/mailman/listinfo/xen-devel


* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-18 10:23                     ` Paul Durrant
@ 2018-04-18 10:31                       ` Oleksandr Andrushchenko
  2018-04-18 10:31                       ` Oleksandr Andrushchenko
  1 sibling, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-18 10:31 UTC (permalink / raw)
  To: Paul Durrant, 'Oleksandr Andrushchenko', Roger Pau Monne
  Cc: jgross, Artem Mygaiev, Dongwon Kim, airlied, linux-kernel,
	dri-devel, Potrola, MateuszX, xen-devel, daniel.vetter,
	boris.ostrovsky, Matt Roper

On 04/18/2018 01:23 PM, Paul Durrant wrote:
>> -----Original Message-----
>> From: Oleksandr Andrushchenko [mailto:andr2000@gmail.com]
>> Sent: 18 April 2018 11:21
>> To: Paul Durrant <Paul.Durrant@citrix.com>; Roger Pau Monne
>> <roger.pau@citrix.com>
>> Cc: jgross@suse.com; Artem Mygaiev <Artem_Mygaiev@epam.com>;
>> Dongwon Kim <dongwon.kim@intel.com>; airlied@linux.ie;
>> Oleksandr_Andrushchenko@epam.com; linux-kernel@vger.kernel.org; dri-
>> devel@lists.freedesktop.org; Potrola, MateuszX
>> <mateuszx.potrola@intel.com>; xen-devel@lists.xenproject.org;
>> daniel.vetter@intel.com; boris.ostrovsky@oracle.com; Matt Roper
>> <matthew.d.roper@intel.com>
>> Subject: Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy
>> helper DRM driver
>>
>> On 04/18/2018 01:18 PM, Paul Durrant wrote:
>>>> -----Original Message-----
>>>> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On
>> Behalf
>>>> Of Roger Pau Monné
>>>> Sent: 18 April 2018 11:11
>>>> To: Oleksandr Andrushchenko <andr2000@gmail.com>
>>>> Cc: jgross@suse.com; Artem Mygaiev <Artem_Mygaiev@epam.com>;
>>>> Dongwon Kim <dongwon.kim@intel.com>; airlied@linux.ie;
>>>> Oleksandr_Andrushchenko@epam.com; linux-kernel@vger.kernel.org;
>> dri-
>>>> devel@lists.freedesktop.org; Potrola, MateuszX
>>>> <mateuszx.potrola@intel.com>; xen-devel@lists.xenproject.org;
>>>> daniel.vetter@intel.com; boris.ostrovsky@oracle.com; Matt Roper
>>>> <matthew.d.roper@intel.com>
>>>> Subject: Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy
>>>> helper DRM driver
>>>>
>>>> On Wed, Apr 18, 2018 at 11:01:12AM +0300, Oleksandr Andrushchenko
>>>> wrote:
>>>>> On 04/18/2018 10:35 AM, Roger Pau Monné wrote:
>>>>>> On Wed, Apr 18, 2018 at 09:38:39AM +0300, Oleksandr Andrushchenko
>>>> wrote:
>>>>>>> On 04/17/2018 11:57 PM, Dongwon Kim wrote:
>>>>>>>> On Tue, Apr 17, 2018 at 09:59:28AM +0200, Daniel Vetter wrote:
>>>>>>>>> On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
>>>>>>> 3.2 Backend exports dma-buf to xen-front
>>>>>>>
>>>>>>> In this case Dom0 pages are shared with DomU. As before, DomU can
>>>> only write
>>>>>>> to these pages, not any other page from Dom0, so it can be still
>>>> considered
>>>>>>> safe.
>>>>>>> But, the following must be considered (highlighted in xen-front's
>> Kernel
>>>>>>> documentation):
>>>>>>>     - If guest domain dies then pages/grants received from the backend
>>>> cannot
>>>>>>>       be claimed back - think of it as memory lost to Dom0 (won't be used
>>>> for
>>>>>>> any
>>>>>>>       other guest)
>>>>>>>     - Misbehaving guest may send too many requests to the backend
>>>> exhausting
>>>>>>>       its grant references and memory (consider this from security POV).
>>>> As the
>>>>>>>       backend runs in the trusted domain we also assume that it is
>> trusted
>>>> as
>>>>>>> well,
>>>>>>>       e.g. must take measures to prevent DDoS attacks.
>>>>>> I cannot parse the above sentence:
>>>>>>
>>>>>> "As the backend runs in the trusted domain we also assume that it is
>>>>>> trusted as well, e.g. must take measures to prevent DDoS attacks."
>>>>>>
>>>>>> What's the relation between being trusted and protecting from DoS
>>>>>> attacks?
>>>>> I mean that we trust the backend that it can prevent Dom0
>>>>> from crashing in case DomU's frontend misbehaves, e.g.
>>>>> if the frontend sends too many memory requests etc.
>>>>>> In any case, all? PV protocols are implemented with the frontend
>>>>>> sharing pages to the backend, and I think there's a reason why this
>>>>>> model is used, and it should continue to be used.
>>>>> This is the first use-case above. But there are real-world
>>>>> use-cases (embedded in my case) when physically contiguous memory
>>>>> needs to be shared, one of the possible ways to achieve this is
>>>>> to share contiguous memory from Dom0 to DomU (the second use-case
>>>> above)
>>>>>> Having to add logic in the backend to prevent such attacks means
>>>>>> that:
>>>>>>
>>>>>>     - We need more code in the backend, which increases complexity and
>>>>>>       chances of bugs.
>>>>>>     - Such code/logic could be wrong, thus allowing DoS.
>>>>> You can live without this code at all, but this is then up to
>>>>> backend which may make Dom0 down because of DomU's frontend
>> doing
>>>> evil
>>>>> things
>>>> IMO we should design protocols that do not allow such attacks instead
>>>> of having to defend against them.
>>>>
>>>>>>> 4. xen-front/backend/xen-zcopy synchronization
>>>>>>>
>>>>>>> 4.1. As I already said in 2) all the inter VM communication happens
>>>> between
>>>>>>> xen-front and the backend, xen-zcopy is NOT involved in that.
>>>>>>> When xen-front wants to destroy a display buffer (dumb/dma-buf) it
>>>> issues a
>>>>>>> XENDISPL_OP_DBUF_DESTROY command (opposite to
>>>> XENDISPL_OP_DBUF_CREATE).
>>>>>>> This call is synchronous, so xen-front expects that backend does free
>>>> the
>>>>>>> buffer pages on return.
>>>>>>>
>>>>>>> 4.2. Backend, on XENDISPL_OP_DBUF_DESTROY:
>>>>>>>      - closes all dumb handles/fd's of the buffer according to [3]
>>>>>>>      - issues DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL to xen-
>>>> zcopy to make
>>>>>>> sure
>>>>>>>        the buffer is freed (think of it as it waits for dma-buf->release
>>>>>>> callback)
>>>>>> So this zcopy thing keeps some kind of track of the memory usage?
>> Why
>>>>>> can't the user-space backend keep track of the buffer usage?
>>>>> Because there is no dma-buf UAPI which allows to track the buffer life
>> cycle
>>>>> (e.g. wait until dma-buf's .release callback is called)
>>>>>>>      - replies to xen-front that the buffer can be destroyed.
>>>>>>> This way deletion of the buffer happens synchronously on both Dom0
>>>> and DomU
>>>>>>> sides. In case if DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE returns
>>>> with time-out
>>>>>>> error
>>>>>>> (BTW, wait time is a parameter of this IOCTL), Xen will defer grant
>>>>>>> reference
>>>>>>> removal and will retry later until those are free.
>>>>>>>
>>>>>>> Hope this helps understand how buffers are synchronously deleted in
>>>> case
>>>>>>> of xen-zcopy with a single protocol command.
>>>>>>>
>>>>>>> I think the above logic can also be re-used by the hyper-dmabuf driver
>>>> with
>>>>>>> some additional work:
>>>>>>>
>>>>>>> 1. xen-zcopy can be split into 2 parts and extend:
>>>>>>> 1.1. Xen gntdev driver [4], [5] to allow creating dma-buf from grefs
>> and
>>>>>>> vise versa,
>>>>>> I don't know much about the dma-buf implementation in Linux, but
>>>>>> gntdev is a user-space device, and AFAICT user-space applications
>>>>>> don't have any notion of dma buffers. How are such buffers useful for
>>>>>> user-space? Why can't this just be called memory?
>>>>> A dma-buf is seen by user-space as a file descriptor and you can
>>>>> pass it to different drivers then. For example, you can share a buffer
>>>>> used by a display driver for scanout with a GPU, to compose a picture
>>>>> into it:
>>>>> 1. User-space (US) allocates a display buffer from display driver
>>>>> 2. US asks display driver to export the dma-buf which backs up that
>> buffer,
>>>>> US gets buffer's fd: dma_buf_fd
>>>>> 3. US asks GPU driver to import a buffer and provides it with
>> dma_buf_fd
>>>>> 4. GPU renders contents into display buffer (dma_buf_fd)
>>>> After speaking with Oleksandr on IRC, I think the main usage of the
>>>> gntdev extension is to:
>>>>
>>>> 1. Create a dma-buf from a set of grant references.
>>>> 2. Share dma-buf and get a list of grant references.
>>>>
>>>> I think this set of operations could be broken into:
>>>>
>>>> 1.1 Map grant references into user-space using the gntdev.
>>>> 1.2 Create a dma-buf out of a set of user-space virtual addresses.
>>>>
>>>> 2.1 Map a dma-buf into user-space.
>>>> 2.2 Get grefs out of the user-space addresses where the dma-buf is
>>>>       mapped.
>>>>
>>>> So it seems like what's actually missing is a way to:
>>>>
>>>>    - Create a dma-buf from a list of user-space virtual addresses.
>>>>    - Allow to map a dma-buf into user-space, so it can then be used with
>>>>      the gntdev.
>>>>
>>>> I think this is generic enough that it could be implemented by a
>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>> something similar to this.
>>>>
>>>>> Finally, this is indeed some memory, but a bit more [1]
>>>>>> Also, (with my FreeBSD maintainer hat) how is this going to translate
>>>>>> to other OSes? So far the operations performed by the gntdev device
>>>>>> are mostly OS-agnostic because this just map/unmap memory, and in
>> fact
>>>>>> they are implemented by Linux and FreeBSD.
>>>>> At the moment I can only see Linux implementation and it seems
>>>>> to be perfectly ok as we do not change Xen's APIs etc. and only
>>>>> use the existing ones (remember, we only extend gntdev/balloon
>>>>> drivers, all the changes in the Linux kernel)
>>>>> As the second note I can also think that we do not extend
>> gntdev/balloon
>>>>> drivers and have re-worked xen-zcopy driver be a separate entity,
>>>>> say drivers/xen/dma-buf
>>>>>>> implement "wait" ioctl (wait for dma-buf->release): currently these
>> are
>>>>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS,
>>>> DRM_XEN_ZCOPY_DUMB_TO_REFS and
>>>>>>> DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>>>> 1.2. Xen balloon driver [6] to allow allocating contiguous buffers (not
>>>>>>> needed
>>>>>>> by current hyper-dmabuf, but is a must for xen-zcopy use-cases)
>>>>>> I think this needs clarifying. In which memory space do you need those
>>>>>> regions to be contiguous?
>>>>> Use-case: Dom0 has a HW driver which only works with contig memory
>>>>> and I want DomU to be able to directly write into that memory, thus
>>>>> implementing zero copying
>>>>>> Do they need to be contiguous in host physical memory, or guest
>>>>>> physical memory?
>>>>> Host
>>>>>> If it's in guest memory space, isn't there any generic interface that
>>>>>> you can use?
>>>>>>
>>>>>> If it's in host physical memory space, why do you need this buffer to
>>>>>> be contiguous in host physical memory space? The IOMMU should hide
>>>> all
>>>>>> this.
>>>>> There are drivers/HW which can only work with contig memory and
>>>>> if it is backed by an IOMMU then still it has to be contig in IPA
>>>>> space (real device doesn't know that it is actually IPA contig, not PA)
>>>> What's IPA contig?
>>> I assume 'IPA' means 'IOMMU Physical Address'. I wonder whether this
>> means what I've termed 'Bus Address' elsewhere?
>> sorry for not being clear here: I mean that the device sees contiguous
>> range of
>> Intermediate Phys Addresses
> Still not clear (to me at least) what that means. Are you talking about the address space used by the device? If so, that is essentially virtual address space translated by the IOMMU and we have general termed this 'bus address space'.
Yes, this is it: the address space used by the device, i.e. what you term 'bus address space'.
>    Paul
>
>>>     Paul
>>>
>>>> Thanks, Roger.
>>>>
>>>> _______________________________________________
>>>> Xen-devel mailing list
>>>> Xen-devel@lists.xenproject.org
>>>> https://lists.xenproject.org/mailman/listinfo/xen-devel


* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-18 10:23                     ` Paul Durrant
  2018-04-18 10:31                       ` Oleksandr Andrushchenko
@ 2018-04-18 10:31                       ` Oleksandr Andrushchenko
  1 sibling, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-18 10:31 UTC (permalink / raw)
  To: Paul Durrant, 'Oleksandr Andrushchenko', Roger Pau Monne
  Cc: jgross, Artem Mygaiev, Dongwon Kim, airlied, linux-kernel,
	dri-devel, Potrola, MateuszX, daniel.vetter, xen-devel,
	boris.ostrovsky, Matt Roper

On 04/18/2018 01:23 PM, Paul Durrant wrote:
>> -----Original Message-----
>> From: Oleksandr Andrushchenko [mailto:andr2000@gmail.com]
>> Sent: 18 April 2018 11:21
>> To: Paul Durrant <Paul.Durrant@citrix.com>; Roger Pau Monne
>> <roger.pau@citrix.com>
>> Cc: jgross@suse.com; Artem Mygaiev <Artem_Mygaiev@epam.com>;
>> Dongwon Kim <dongwon.kim@intel.com>; airlied@linux.ie;
>> Oleksandr_Andrushchenko@epam.com; linux-kernel@vger.kernel.org; dri-
>> devel@lists.freedesktop.org; Potrola, MateuszX
>> <mateuszx.potrola@intel.com>; xen-devel@lists.xenproject.org;
>> daniel.vetter@intel.com; boris.ostrovsky@oracle.com; Matt Roper
>> <matthew.d.roper@intel.com>
>> Subject: Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy
>> helper DRM driver
>>
>> On 04/18/2018 01:18 PM, Paul Durrant wrote:
>>>> -----Original Message-----
>>>> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On
>> Behalf
>>>> Of Roger Pau Monné
>>>> Sent: 18 April 2018 11:11
>>>> To: Oleksandr Andrushchenko <andr2000@gmail.com>
>>>> Cc: jgross@suse.com; Artem Mygaiev <Artem_Mygaiev@epam.com>;
>>>> Dongwon Kim <dongwon.kim@intel.com>; airlied@linux.ie;
>>>> Oleksandr_Andrushchenko@epam.com; linux-kernel@vger.kernel.org;
>> dri-
>>>> devel@lists.freedesktop.org; Potrola, MateuszX
>>>> <mateuszx.potrola@intel.com>; xen-devel@lists.xenproject.org;
>>>> daniel.vetter@intel.com; boris.ostrovsky@oracle.com; Matt Roper
>>>> <matthew.d.roper@intel.com>
>>>> Subject: Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy
>>>> helper DRM driver
>>>>
>>>> On Wed, Apr 18, 2018 at 11:01:12AM +0300, Oleksandr Andrushchenko
>>>> wrote:
>>>>> On 04/18/2018 10:35 AM, Roger Pau Monné wrote:
>>>>>> On Wed, Apr 18, 2018 at 09:38:39AM +0300, Oleksandr Andrushchenko
>>>> wrote:
>>>>>>> On 04/17/2018 11:57 PM, Dongwon Kim wrote:
>>>>>>>> On Tue, Apr 17, 2018 at 09:59:28AM +0200, Daniel Vetter wrote:
>>>>>>>>> On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
>>>>>>> 3.2 Backend exports dma-buf to xen-front
>>>>>>>
>>>>>>> In this case Dom0 pages are shared with DomU. As before, DomU can
>>>> only write
>>>>>>> to these pages, not any other page from Dom0, so it can be still
>>>> considered
>>>>>>> safe.
>>>>>>> But, the following must be considered (highlighted in xen-front's
>> Kernel
>>>>>>> documentation):
>>>>>>>     - If guest domain dies then pages/grants received from the backend
>>>> cannot
>>>>>>>       be claimed back - think of it as memory lost to Dom0 (won't be used
>>>> for
>>>>>>> any
>>>>>>>       other guest)
>>>>>>>     - Misbehaving guest may send too many requests to the backend
>>>> exhausting
>>>>>>>       its grant references and memory (consider this from security POV).
>>>> As the
>>>>>>>       backend runs in the trusted domain we also assume that it is
>> trusted
>>>> as
>>>>>>> well,
>>>>>>>       e.g. must take measures to prevent DDoS attacks.
>>>>>> I cannot parse the above sentence:
>>>>>>
>>>>>> "As the backend runs in the trusted domain we also assume that it is
>>>>>> trusted as well, e.g. must take measures to prevent DDoS attacks."
>>>>>>
>>>>>> What's the relation between being trusted and protecting from DoS
>>>>>> attacks?
>>>>> I mean that we trust the backend that it can prevent Dom0
>>>>> from crashing in case DomU's frontend misbehaves, e.g.
>>>>> if the frontend sends too many memory requests etc.
>>>>>> In any case, all? PV protocols are implemented with the frontend
>>>>>> sharing pages to the backend, and I think there's a reason why this
>>>>>> model is used, and it should continue to be used.
>>>>> This is the first use-case above. But there are real-world
>>>>> use-cases (embedded in my case) when physically contiguous memory
>>>>> needs to be shared, one of the possible ways to achieve this is
>>>>> to share contiguous memory from Dom0 to DomU (the second use-case
>>>> above)
>>>>>> Having to add logic in the backend to prevent such attacks means
>>>>>> that:
>>>>>>
>>>>>>     - We need more code in the backend, which increases complexity and
>>>>>>       chances of bugs.
>>>>>>     - Such code/logic could be wrong, thus allowing DoS.
>>>>> You can live without this code at all, but this is then up to
>>>>> backend which may make Dom0 down because of DomU's frontend
>> doing
>>>> evil
>>>>> things
>>>> IMO we should design protocols that do not allow such attacks instead
>>>> of having to defend against them.
>>>>
>>>>>>> 4. xen-front/backend/xen-zcopy synchronization
>>>>>>>
>>>>>>> 4.1. As I already said in 2) all the inter VM communication happens
>>>> between
>>>>>>> xen-front and the backend, xen-zcopy is NOT involved in that.
>>>>>>> When xen-front wants to destroy a display buffer (dumb/dma-buf) it
>>>> issues a
>>>>>>> XENDISPL_OP_DBUF_DESTROY command (opposite to
>>>> XENDISPL_OP_DBUF_CREATE).
>>>>>>> This call is synchronous, so xen-front expects that backend does free
>>>> the
>>>>>>> buffer pages on return.
>>>>>>>
>>>>>>> 4.2. Backend, on XENDISPL_OP_DBUF_DESTROY:
>>>>>>>      - closes all dumb handles/fd's of the buffer according to [3]
>>>>>>>      - issues DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL to xen-
>>>> zcopy to make
>>>>>>> sure
>>>>>>>        the buffer is freed (think of it as it waits for dma-buf->release
>>>>>>> callback)
>>>>>> So this zcopy thing keeps some kind of track of the memory usage?
>> Why
>>>>>> can't the user-space backend keep track of the buffer usage?
>>>>> Because there is no dma-buf UAPI which allows to track the buffer life
>> cycle
>>>>> (e.g. wait until dma-buf's .release callback is called)
>>>>>>>      - replies to xen-front that the buffer can be destroyed.
>>>>>>> This way deletion of the buffer happens synchronously on both Dom0
>>>> and DomU
>>>>>>> sides. In case if DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE returns
>>>> with time-out
>>>>>>> error
>>>>>>> (BTW, wait time is a parameter of this IOCTL), Xen will defer grant
>>>>>>> reference
>>>>>>> removal and will retry later until those are free.
>>>>>>>
>>>>>>> Hope this helps understand how buffers are synchronously deleted in
>>>> case
>>>>>>> of xen-zcopy with a single protocol command.
>>>>>>>
>>>>>>> I think the above logic can also be re-used by the hyper-dmabuf driver
>>>> with
>>>>>>> some additional work:
>>>>>>>
>>>>>>> 1. xen-zcopy can be split into 2 parts and extend:
>>>>>>> 1.1. Xen gntdev driver [4], [5] to allow creating dma-buf from grefs
>> and
>>>>>>> vise versa,
>>>>>> I don't know much about the dma-buf implementation in Linux, but
>>>>>> gntdev is a user-space device, and AFAICT user-space applications
>>>>>> don't have any notion of dma buffers. How are such buffers useful for
>>>>>> user-space? Why can't this just be called memory?
>>>>> A dma-buf is seen by user-space as a file descriptor and you can
>>>>> pass it to different drivers then. For example, you can share a buffer
>>>>> used by a display driver for scanout with a GPU, to compose a picture
>>>>> into it:
>>>>> 1. User-space (US) allocates a display buffer from display driver
>>>>> 2. US asks display driver to export the dma-buf which backs up that
>> buffer,
>>>>> US gets buffer's fd: dma_buf_fd
>>>>> 3. US asks GPU driver to import a buffer and provides it with
>> dma_buf_fd
>>>>> 4. GPU renders contents into display buffer (dma_buf_fd)
>>>> After speaking with Oleksandr on IRC, I think the main usage of the
>>>> gntdev extension is to:
>>>>
>>>> 1. Create a dma-buf from a set of grant references.
>>>> 2. Share dma-buf and get a list of grant references.
>>>>
>>>> I think this set of operations could be broken into:
>>>>
>>>> 1.1 Map grant references into user-space using the gntdev.
>>>> 1.2 Create a dma-buf out of a set of user-space virtual addresses.
>>>>
>>>> 2.1 Map a dma-buf into user-space.
>>>> 2.2 Get grefs out of the user-space addresses where the dma-buf is
>>>>       mapped.
>>>>
>>>> So it seems like what's actually missing is a way to:
>>>>
>>>>    - Create a dma-buf from a list of user-space virtual addresses.
>>>>    - Allow to map a dma-buf into user-space, so it can then be used with
>>>>      the gntdev.
>>>>
>>>> I think this is generic enough that it could be implemented by a
>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>> something similar to this.
>>>>
>>>>> Finally, this is indeed some memory, but a bit more [1]
>>>>>> Also, (with my FreeBSD maintainer hat) how is this going to translate
>>>>>> to other OSes? So far the operations performed by the gntdev device
>>>>>> are mostly OS-agnostic because this just map/unmap memory, and in
>> fact
>>>>>> they are implemented by Linux and FreeBSD.
>>>>> At the moment I can only see Linux implementation and it seems
>>>>> to be perfectly ok as we do not change Xen's APIs etc. and only
>>>>> use the existing ones (remember, we only extend gntdev/balloon
>>>>> drivers, all the changes in the Linux kernel)
>>>>> As the second note I can also think that we do not extend
>> gntdev/balloon
>>>>> drivers and have re-worked xen-zcopy driver be a separate entity,
>>>>> say drivers/xen/dma-buf
>>>>>>> implement "wait" ioctl (wait for dma-buf->release): currently these
>> are
>>>>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS,
>>>> DRM_XEN_ZCOPY_DUMB_TO_REFS and
>>>>>>> DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>>>> 1.2. Xen balloon driver [6] to allow allocating contiguous buffers (not
>>>>>>> needed
>>>>>>> by current hyper-dmabuf, but is a must for xen-zcopy use-cases)
>>>>>> I think this needs clarifying. In which memory space do you need those
>>>>>> regions to be contiguous?
>>>>> Use-case: Dom0 has a HW driver which only works with contig memory
>>>>> and I want DomU to be able to directly write into that memory, thus
>>>>> implementing zero copying
>>>>>> Do they need to be contiguous in host physical memory, or guest
>>>>>> physical memory?
>>>>> Host
>>>>>> If it's in guest memory space, isn't there any generic interface that
>>>>>> you can use?
>>>>>>
>>>>>> If it's in host physical memory space, why do you need this buffer to
>>>>>> be contiguous in host physical memory space? The IOMMU should hide
>>>> all
>>>>>> this.
>>>>> There are drivers/HW which can only work with contig memory and
>>>>> if it is backed by an IOMMU then still it has to be contig in IPA
>>>>> space (real device doesn't know that it is actually IPA contig, not PA)
>>>> What's IPA contig?
>>> I assume 'IPA' means 'IOMMU Physical Address'. I wonder whether this
>> means what I've termed 'Bus Address' elsewhere?
>> sorry for not being clear here: I mean that the device sees contiguous
>> range of
>> Intermediate Phys Addresses
> Still not clear (to me at least) what that means. Are you talking about the address space used by the device? If so, that is essentially virtual address space translated by the IOMMU and we have general termed this 'bus address space'.
Yes, this is it: the address space used by the device, i.e. what you term 'bus address space'.
>    Paul
>
>>>     Paul
>>>
>>>> Thanks, Roger.
>>>>
>>>> _______________________________________________
>>>> Xen-devel mailing list
>>>> Xen-devel@lists.xenproject.org
>>>> https://lists.xenproject.org/mailman/listinfo/xen-devel



* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-18 10:18                 ` Paul Durrant
@ 2018-04-18 10:39                     ` Oleksandr Andrushchenko
  2018-04-18 10:21                   ` Oleksandr Andrushchenko
                                       ` (2 subsequent siblings)
  3 siblings, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-18 10:39 UTC (permalink / raw)
  To: Paul Durrant, Roger Pau Monne
  Cc: jgross, Artem Mygaiev, Dongwon Kim, airlied,
	Oleksandr_Andrushchenko, linux-kernel, dri-devel, Potrola,
	MateuszX, xen-devel, daniel.vetter, boris.ostrovsky, Matt Roper

On 04/18/2018 01:18 PM, Paul Durrant wrote:
>> -----Original Message-----
>> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf
>> Of Roger Pau Monné
>> Sent: 18 April 2018 11:11
>> To: Oleksandr Andrushchenko <andr2000@gmail.com>
>> Cc: jgross@suse.com; Artem Mygaiev <Artem_Mygaiev@epam.com>;
>> Dongwon Kim <dongwon.kim@intel.com>; airlied@linux.ie;
>> Oleksandr_Andrushchenko@epam.com; linux-kernel@vger.kernel.org; dri-
>> devel@lists.freedesktop.org; Potrola, MateuszX
>> <mateuszx.potrola@intel.com>; xen-devel@lists.xenproject.org;
>> daniel.vetter@intel.com; boris.ostrovsky@oracle.com; Matt Roper
>> <matthew.d.roper@intel.com>
>> Subject: Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy
>> helper DRM driver
>>
>> On Wed, Apr 18, 2018 at 11:01:12AM +0300, Oleksandr Andrushchenko
>> wrote:
>>> On 04/18/2018 10:35 AM, Roger Pau Monné wrote:
>>>> On Wed, Apr 18, 2018 at 09:38:39AM +0300, Oleksandr Andrushchenko
>> wrote:
>>>>> On 04/17/2018 11:57 PM, Dongwon Kim wrote:
>>>>>> On Tue, Apr 17, 2018 at 09:59:28AM +0200, Daniel Vetter wrote:
>>>>>>> On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
>>>>> 3.2 Backend exports dma-buf to xen-front
>>>>>
>>>>> In this case Dom0 pages are shared with DomU. As before, DomU can
>> only write
>>>>> to these pages, not any other page from Dom0, so it can be still
>> considered
>>>>> safe.
>>>>> But, the following must be considered (highlighted in xen-front's Kernel
>>>>> documentation):
>>>>>    - If guest domain dies then pages/grants received from the backend
>> cannot
>>>>>      be claimed back - think of it as memory lost to Dom0 (won't be used
>> for
>>>>> any
>>>>>      other guest)
>>>>>    - Misbehaving guest may send too many requests to the backend
>> exhausting
>>>>>      its grant references and memory (consider this from security POV).
>> As the
>>>>>      backend runs in the trusted domain we also assume that it is trusted
>> as
>>>>> well,
>>>>>      e.g. must take measures to prevent DDoS attacks.
>>>> I cannot parse the above sentence:
>>>>
>>>> "As the backend runs in the trusted domain we also assume that it is
>>>> trusted as well, e.g. must take measures to prevent DDoS attacks."
>>>>
>>>> What's the relation between being trusted and protecting from DoS
>>>> attacks?
>>> I mean that we trust the backend that it can prevent Dom0
>>> from crashing in case DomU's frontend misbehaves, e.g.
>>> if the frontend sends too many memory requests etc.
>>>> In any case, all? PV protocols are implemented with the frontend
>>>> sharing pages to the backend, and I think there's a reason why this
>>>> model is used, and it should continue to be used.
>>> This is the first use-case above. But there are real-world
>>> use-cases (embedded in my case) when physically contiguous memory
>>> needs to be shared, one of the possible ways to achieve this is
>>> to share contiguous memory from Dom0 to DomU (the second use-case
>> above)
>>>> Having to add logic in the backend to prevent such attacks means
>>>> that:
>>>>
>>>>    - We need more code in the backend, which increases complexity and
>>>>      chances of bugs.
>>>>    - Such code/logic could be wrong, thus allowing DoS.
>>> You can live without this code at all, but this is then up to
>>> backend which may make Dom0 down because of DomU's frontend doing
>> evil
>>> things
>> IMO we should design protocols that do not allow such attacks instead
>> of having to defend against them.
>>
>>>>> 4. xen-front/backend/xen-zcopy synchronization
>>>>>
>>>>> 4.1. As I already said in 2) all the inter VM communication happens
>> between
>>>>> xen-front and the backend, xen-zcopy is NOT involved in that.
>>>>> When xen-front wants to destroy a display buffer (dumb/dma-buf) it
>> issues a
>>>>> XENDISPL_OP_DBUF_DESTROY command (opposite to
>> XENDISPL_OP_DBUF_CREATE).
>>>>> This call is synchronous, so xen-front expects that backend does free
>> the
>>>>> buffer pages on return.
>>>>>
>>>>> 4.2. Backend, on XENDISPL_OP_DBUF_DESTROY:
>>>>>     - closes all dumb handles/fd's of the buffer according to [3]
>>>>>     - issues DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL to xen-
>> zcopy to make
>>>>> sure
>>>>>       the buffer is freed (think of it as it waits for dma-buf->release
>>>>> callback)
>>>> So this zcopy thing keeps some kind of track of the memory usage? Why
>>>> can't the user-space backend keep track of the buffer usage?
>>> Because there is no dma-buf UAPI which allows to track the buffer life cycle
>>> (e.g. wait until dma-buf's .release callback is called)
>>>>>     - replies to xen-front that the buffer can be destroyed.
>>>>> This way deletion of the buffer happens synchronously on both Dom0
>> and DomU
>>>>> sides. In case if DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE returns
>> with time-out
>>>>> error
>>>>> (BTW, wait time is a parameter of this IOCTL), Xen will defer grant
>>>>> reference
>>>>> removal and will retry later until those are free.
>>>>>
>>>>> Hope this helps understand how buffers are synchronously deleted in
>> case
>>>>> of xen-zcopy with a single protocol command.
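For illustration only, the backend-side destroy sequence described above
could look roughly like this in user space. The GEM close part is standard
DRM UAPI; the wait structure's field names below are assumed from the
description in this thread, not copied from the patch:

#include <stdint.h>
#include <xf86drm.h>
#include <drm/xen_zcopy_drm.h>      /* UAPI header added by this patch */

/* Sketch of the backend's XENDISPL_OP_DBUF_DESTROY handling. */
static void backend_destroy_dbuf(int zcopy_fd, uint32_t gem_handle,
                                 uint32_t wait_handle)
{
    /* Close all local dumb/GEM handles of the buffer first. */
    struct drm_gem_close close_req = { .handle = gem_handle };
    drmIoctl(zcopy_fd, DRM_IOCTL_GEM_CLOSE, &close_req);

    /* Block until xen-zcopy sees the dma-buf's .release callback.
     * 'wait_handle' was returned by DUMB_FROM_REFS earlier; the timeout
     * is the IOCTL parameter mentioned above (field names assumed). */
    struct drm_xen_zcopy_dumb_wait_free wait = {
        .wait_handle = wait_handle,
        .wait_to_ms  = 3000,
    };
    drmIoctl(zcopy_fd, DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE, &wait);

    /* Only now reply to xen-front that the buffer can be destroyed. */
}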
>>>>>
>>>>> I think the above logic can also be re-used by the hyper-dmabuf driver
>> with
>>>>> some additional work:
>>>>>
>>>>> 1. xen-zcopy can be split into 2 parts and extend:
>>>>> 1.1. Xen gntdev driver [4], [5] to allow creating dma-buf from grefs and
>>>>> vise versa,
>>>> I don't know much about the dma-buf implementation in Linux, but
>>>> gntdev is a user-space device, and AFAICT user-space applications
>>>> don't have any notion of dma buffers. How are such buffers useful for
>>>> user-space? Why can't this just be called memory?
>>> A dma-buf is seen by user-space as a file descriptor and you can
>>> pass it to different drivers then. For example, you can share a buffer
>>> used by a display driver for scanout with a GPU, to compose a picture
>>> into it:
>>> 1. User-space (US) allocates a display buffer from display driver
>>> 2. US asks display driver to export the dma-buf which backs up that buffer,
>>> US gets buffer's fd: dma_buf_fd
>>> 3. US asks GPU driver to import a buffer and provides it with dma_buf_fd
>>> 4. GPU renders contents into display buffer (dma_buf_fd)
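To make the quoted four-step flow above concrete, here is a minimal
user-space sketch using libdrm's dumb-buffer and PRIME calls. Device paths,
buffer geometry and the lack of error handling are illustrative only:

#include <fcntl.h>
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static int share_scanout_buffer(void)
{
    int disp_fd = open("/dev/dri/card0", O_RDWR); /* display driver */
    int gpu_fd  = open("/dev/dri/card1", O_RDWR); /* GPU/render driver */

    /* 1. Allocate a display (dumb) buffer from the display driver. */
    struct drm_mode_create_dumb creq = {
        .width = 1920, .height = 1080, .bpp = 32,
    };
    drmIoctl(disp_fd, DRM_IOCTL_MODE_CREATE_DUMB, &creq);

    /* 2. Export the dma-buf backing that buffer: get dma_buf_fd. */
    int dma_buf_fd;
    drmPrimeHandleToFD(disp_fd, creq.handle, DRM_CLOEXEC, &dma_buf_fd);

    /* 3. Import the buffer into the GPU driver through its PRIME fd. */
    uint32_t gpu_handle;
    drmPrimeFDToHandle(gpu_fd, dma_buf_fd, &gpu_handle);

    /* 4. The GPU can now render straight into the display buffer
     *    (gpu_handle) with no copies involved. */
    return dma_buf_fd;
}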
>> After speaking with Oleksandr on IRC, I think the main usage of the
>> gntdev extension is to:
>>
>> 1. Create a dma-buf from a set of grant references.
>> 2. Share dma-buf and get a list of grant references.
>>
>> I think this set of operations could be broken into:
>>
>> 1.1 Map grant references into user-space using the gntdev.
>> 1.2 Create a dma-buf out of a set of user-space virtual addresses.
>>
>> 2.1 Map a dma-buf into user-space.
>> 2.2 Get grefs out of the user-space addresses where the dma-buf is
>>      mapped.
>>
>> So it seems like what's actually missing is a way to:
>>
>>   - Create a dma-buf from a list of user-space virtual addresses.
>>   - Allow to map a dma-buf into user-space, so it can then be used with
>>     the gntdev.
>>
>> I think this is generic enough that it could be implemented by a
>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>> something similar to this.
Ok, so just to summarize: xen-zcopy/hyper-dmabuf as they are now are a
no-go from your POV? Instead, we have to implement all that VA <-> dma-buf
machinery in a device-X driver and have that device-X driver live outside
drivers/xen, as it is not a Xen-specific driver?
>>> Finally, this is indeed some memory, but a bit more [1]
>>>> Also, (with my FreeBSD maintainer hat) how is this going to translate
>>>> to other OSes? So far the operations performed by the gntdev device
>>>> are mostly OS-agnostic because this just map/unmap memory, and in fact
>>>> they are implemented by Linux and FreeBSD.
>>> At the moment I can only see Linux implementation and it seems
>>> to be perfectly ok as we do not change Xen's APIs etc. and only
>>> use the existing ones (remember, we only extend gntdev/balloon
>>> drivers, all the changes in the Linux kernel)
>>> As the second note I can also think that we do not extend gntdev/balloon
>>> drivers and have re-worked xen-zcopy driver be a separate entity,
>>> say drivers/xen/dma-buf
>>>>> implement "wait" ioctl (wait for dma-buf->release): currently these are
>>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS,
>> DRM_XEN_ZCOPY_DUMB_TO_REFS and
>>>>> DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>> 1.2. Xen balloon driver [6] to allow allocating contiguous buffers (not
>>>>> needed
>>>>> by current hyper-dmabuf, but is a must for xen-zcopy use-cases)
>>>> I think this needs clarifying. In which memory space do you need those
>>>> regions to be contiguous?
>>> Use-case: Dom0 has a HW driver which only works with contig memory
>>> and I want DomU to be able to directly write into that memory, thus
>>> implementing zero copying
>>>> Do they need to be contiguous in host physical memory, or guest
>>>> physical memory?
>>> Host
>>>> If it's in guest memory space, isn't there any generic interface that
>>>> you can use?
>>>>
>>>> If it's in host physical memory space, why do you need this buffer to
>>>> be contiguous in host physical memory space? The IOMMU should hide
>> all
>>>> this.
>>> There are drivers/HW which can only work with contig memory and
>>> if it is backed by an IOMMU then still it has to be contig in IPA
>>> space (real device doesn't know that it is actually IPA contig, not PA)
>> What's IPA contig?
> I assume 'IPA' means 'IOMMU Physical Address'. I wonder whether this means what I've termed 'Bus Address' elsewhere?
>
>    Paul
>
>> Thanks, Roger.
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xenproject.org
>> https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-18 10:39                     ` Oleksandr Andrushchenko
  (?)
  (?)
@ 2018-04-18 10:55                     ` Roger Pau Monné
  2018-04-18 12:42                       ` Oleksandr Andrushchenko
                                         ` (3 more replies)
  -1 siblings, 4 replies; 131+ messages in thread
From: Roger Pau Monné @ 2018-04-18 10:55 UTC (permalink / raw)
  To: Oleksandr Andrushchenko
  Cc: Paul Durrant, jgross, Artem Mygaiev, Dongwon Kim, airlied,
	Oleksandr_Andrushchenko, linux-kernel, dri-devel, Potrola,
	MateuszX, xen-devel, daniel.vetter, boris.ostrovsky, Matt Roper

On Wed, Apr 18, 2018 at 01:39:35PM +0300, Oleksandr Andrushchenko wrote:
> On 04/18/2018 01:18 PM, Paul Durrant wrote:
> > > -----Original Message-----
> > > From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf
> > > Of Roger Pau Monné
> > > Sent: 18 April 2018 11:11
> > > To: Oleksandr Andrushchenko <andr2000@gmail.com>
> > > Cc: jgross@suse.com; Artem Mygaiev <Artem_Mygaiev@epam.com>;
> > > Dongwon Kim <dongwon.kim@intel.com>; airlied@linux.ie;
> > > Oleksandr_Andrushchenko@epam.com; linux-kernel@vger.kernel.org; dri-
> > > devel@lists.freedesktop.org; Potrola, MateuszX
> > > <mateuszx.potrola@intel.com>; xen-devel@lists.xenproject.org;
> > > daniel.vetter@intel.com; boris.ostrovsky@oracle.com; Matt Roper
> > > <matthew.d.roper@intel.com>
> > > Subject: Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy
> > > helper DRM driver
> > > 
> > > On Wed, Apr 18, 2018 at 11:01:12AM +0300, Oleksandr Andrushchenko
> > > wrote:
> > > > On 04/18/2018 10:35 AM, Roger Pau Monné wrote:
> > > After speaking with Oleksandr on IRC, I think the main usage of the
> > > gntdev extension is to:
> > > 
> > > 1. Create a dma-buf from a set of grant references.
> > > 2. Share dma-buf and get a list of grant references.
> > > 
> > > I think this set of operations could be broken into:
> > > 
> > > 1.1 Map grant references into user-space using the gntdev.
> > > 1.2 Create a dma-buf out of a set of user-space virtual addresses.
> > > 
> > > 2.1 Map a dma-buf into user-space.
> > > 2.2 Get grefs out of the user-space addresses where the dma-buf is
> > >      mapped.
> > > 
> > > So it seems like what's actually missing is a way to:
> > > 
> > >   - Create a dma-buf from a list of user-space virtual addresses.
> > >   - Allow to map a dma-buf into user-space, so it can then be used with
> > >     the gntdev.
> > > 
> > > I think this is generic enough that it could be implemented by a
> > > device not tied to Xen. AFAICT the hyper_dma guys also wanted
> > > something similar to this.
> Ok, so just to summarize, xen-zcopy/hyper-dmabuf as they are now,
> are no go from your POV?

My opinion is that there seems to be a more generic way to implement
this, and thus I would prefer that one.

> Instead, we have to make all that fancy stuff
> with VAs <-> device-X and have that device-X driver live out of drivers/xen
> as it is not a Xen specific driver?

That would be my preference if feasible, simply because it can be
reused by other use-cases that need to create dma-bufs in user-space.
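
Purely to illustrate the kind of interface being discussed here (nothing
like this exists today; every name below is hypothetical):

/* Hypothetical UAPI sketch for a generic, non-Xen "dma-buf from user
 * virtual addresses" device; none of these names exist in the kernel. */
#include <linux/ioctl.h>
#include <linux/types.h>

struct dmabuf_from_va_req {
    __u64 vaddr;     /* page-aligned user virtual address */
    __u64 size;      /* length in bytes */
    __u32 flags;
    __s32 dmabuf_fd; /* out: exported dma-buf file descriptor */
};

#define DMABUF_DEV_IOCTL_FROM_VA _IOWR('D', 0x01, struct dmabuf_from_va_req)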

In any case, I only learned about dma-bufs this morning, so there might be
things that I'm missing.

Roger.

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-18 10:55                     ` [Xen-devel] " Roger Pau Monné
@ 2018-04-18 12:42                       ` Oleksandr Andrushchenko
  2018-04-18 16:01                           ` Dongwon Kim
  2018-04-18 16:01                         ` Dongwon Kim
  2018-04-18 12:42                       ` Oleksandr Andrushchenko
                                         ` (2 subsequent siblings)
  3 siblings, 2 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-18 12:42 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Paul Durrant, jgross, Artem Mygaiev, Dongwon Kim, airlied,
	Oleksandr_Andrushchenko, linux-kernel, dri-devel, Potrola,
	MateuszX, xen-devel, daniel.vetter, boris.ostrovsky, Matt Roper

On 04/18/2018 01:55 PM, Roger Pau Monné wrote:
> On Wed, Apr 18, 2018 at 01:39:35PM +0300, Oleksandr Andrushchenko wrote:
>> On 04/18/2018 01:18 PM, Paul Durrant wrote:
>>>> -----Original Message-----
>>>> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf
>>>> Of Roger Pau Monné
>>>> Sent: 18 April 2018 11:11
>>>> To: Oleksandr Andrushchenko <andr2000@gmail.com>
>>>> Cc: jgross@suse.com; Artem Mygaiev <Artem_Mygaiev@epam.com>;
>>>> Dongwon Kim <dongwon.kim@intel.com>; airlied@linux.ie;
>>>> Oleksandr_Andrushchenko@epam.com; linux-kernel@vger.kernel.org; dri-
>>>> devel@lists.freedesktop.org; Potrola, MateuszX
>>>> <mateuszx.potrola@intel.com>; xen-devel@lists.xenproject.org;
>>>> daniel.vetter@intel.com; boris.ostrovsky@oracle.com; Matt Roper
>>>> <matthew.d.roper@intel.com>
>>>> Subject: Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy
>>>> helper DRM driver
>>>>
>>>> On Wed, Apr 18, 2018 at 11:01:12AM +0300, Oleksandr Andrushchenko
>>>> wrote:
>>>>> On 04/18/2018 10:35 AM, Roger Pau Monné wrote:
>>>> After speaking with Oleksandr on IRC, I think the main usage of the
>>>> gntdev extension is to:
>>>>
>>>> 1. Create a dma-buf from a set of grant references.
>>>> 2. Share dma-buf and get a list of grant references.
>>>>
>>>> I think this set of operations could be broken into:
>>>>
>>>> 1.1 Map grant references into user-space using the gntdev.
>>>> 1.2 Create a dma-buf out of a set of user-space virtual addresses.
>>>>
>>>> 2.1 Map a dma-buf into user-space.
>>>> 2.2 Get grefs out of the user-space addresses where the dma-buf is
>>>>       mapped.
>>>>
>>>> So it seems like what's actually missing is a way to:
>>>>
>>>>    - Create a dma-buf from a list of user-space virtual addresses.
>>>>    - Allow to map a dma-buf into user-space, so it can then be used with
>>>>      the gntdev.
>>>>
>>>> I think this is generic enough that it could be implemented by a
>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>> something similar to this.
>> Ok, so just to summarize, xen-zcopy/hyper-dmabuf as they are now,
>> are no go from your POV?
> My opinion is that there seems to be a more generic way to implement
> this, and thus I would prefer that one.
>
>> Instead, we have to make all that fancy stuff
>> with VAs <-> device-X and have that device-X driver live out of drivers/xen
>> as it is not a Xen specific driver?
> That would be my preference if feasible, simply because it can be
> reused by other use-cases that need to create dma-bufs in user-space.
There is a use-case I have: a display unit on my target has a DMA
controller which can't do scatter-gather, i.e. it only accepts a
single starting address for the buffer.
In order to create a dma-buf from grefs in this case,
I allocate memory with dma_alloc_xxx, then balloon out the pages backing
that buffer and finally map the grefs onto this DMA buffer.
This way I can hand the shared buffer to the display unit, as its bus
addresses are contiguous.
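A rough sketch of that flow, assuming the coherent allocation is backed by
pages in the kernel's linear map and leaving out the balloon-out step and
all error handling (an illustration of the idea, not code from the driver):

#include <linux/dma-mapping.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <xen/grant_table.h>

/* Map 'count' grant refs from 'otherend' onto a bus-contiguous buffer so
 * a DMA controller without scatter-gather support can use it. */
static int map_grefs_onto_contig_buf(struct device *dev, domid_t otherend,
                                     grant_ref_t *grefs, unsigned int count)
{
    size_t size = count * PAGE_SIZE;
    struct gnttab_map_grant_ref *map_ops;
    struct page **pages;
    dma_addr_t dma_addr;
    void *vaddr;
    unsigned int i;

    /* Single bus-contiguous allocation suitable for the display unit. */
    vaddr = dma_alloc_coherent(dev, size, &dma_addr, GFP_KERNEL);

    map_ops = kcalloc(count, sizeof(*map_ops), GFP_KERNEL);
    pages = kcalloc(count, sizeof(*pages), GFP_KERNEL);

    for (i = 0; i < count; i++) {
        /* Simplification: assumes the coherent memory has struct pages
         * in the linear map; the real driver must also balloon out these
         * frames first so the grant mappings can take their place. */
        pages[i] = virt_to_page(vaddr + i * PAGE_SIZE);
        gnttab_set_map_op(&map_ops[i],
                          (unsigned long)vaddr + i * PAGE_SIZE,
                          GNTMAP_host_map, grefs[i], otherend);
    }

    /* Map the frontend's grants onto the pages backing the DMA buffer. */
    return gnttab_map_refs(map_ops, NULL, pages, count);
}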

With the proposed solution (gntdev + device-X) I won't be able to achieve
this, as I have no control over where the gntdev/balloon drivers get their
pages from (what is more, those pages can easily be outside the DMA address
space of the display unit).

Thus, even if implemented, I can't use this approach.
>
> In any case I just knew about dma-bufs this morning, there might be
> things that I'm missing.
>
> Roger.

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-18 12:42                       ` Oleksandr Andrushchenko
@ 2018-04-18 16:01                           ` Dongwon Kim
  2018-04-18 16:01                         ` Dongwon Kim
  1 sibling, 0 replies; 131+ messages in thread
From: Dongwon Kim @ 2018-04-18 16:01 UTC (permalink / raw)
  To: Oleksandr Andrushchenko
  Cc: Roger Pau Monné,
	Paul Durrant, jgross, Artem Mygaiev, airlied,
	Oleksandr_Andrushchenko, linux-kernel, dri-devel, Potrola,
	MateuszX, xen-devel, daniel.vetter, boris.ostrovsky, Matt Roper

On Wed, Apr 18, 2018 at 03:42:29PM +0300, Oleksandr Andrushchenko wrote:
> On 04/18/2018 01:55 PM, Roger Pau Monné wrote:
> >On Wed, Apr 18, 2018 at 01:39:35PM +0300, Oleksandr Andrushchenko wrote:
> >>On 04/18/2018 01:18 PM, Paul Durrant wrote:
> >>>>-----Original Message-----
> >>>>From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf
> >>>>Of Roger Pau Monné
> >>>>Sent: 18 April 2018 11:11
> >>>>To: Oleksandr Andrushchenko <andr2000@gmail.com>
> >>>>Cc: jgross@suse.com; Artem Mygaiev <Artem_Mygaiev@epam.com>;
> >>>>Dongwon Kim <dongwon.kim@intel.com>; airlied@linux.ie;
> >>>>Oleksandr_Andrushchenko@epam.com; linux-kernel@vger.kernel.org; dri-
> >>>>devel@lists.freedesktop.org; Potrola, MateuszX
> >>>><mateuszx.potrola@intel.com>; xen-devel@lists.xenproject.org;
> >>>>daniel.vetter@intel.com; boris.ostrovsky@oracle.com; Matt Roper
> >>>><matthew.d.roper@intel.com>
> >>>>Subject: Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy
> >>>>helper DRM driver
> >>>>
> >>>>On Wed, Apr 18, 2018 at 11:01:12AM +0300, Oleksandr Andrushchenko
> >>>>wrote:
> >>>>>On 04/18/2018 10:35 AM, Roger Pau Monné wrote:
> >>>>After speaking with Oleksandr on IRC, I think the main usage of the
> >>>>gntdev extension is to:
> >>>>
> >>>>1. Create a dma-buf from a set of grant references.
> >>>>2. Share dma-buf and get a list of grant references.
> >>>>
> >>>>I think this set of operations could be broken into:
> >>>>
> >>>>1.1 Map grant references into user-space using the gntdev.
> >>>>1.2 Create a dma-buf out of a set of user-space virtual addresses.
> >>>>
> >>>>2.1 Map a dma-buf into user-space.
> >>>>2.2 Get grefs out of the user-space addresses where the dma-buf is
> >>>>      mapped.
> >>>>
> >>>>So it seems like what's actually missing is a way to:
> >>>>
> >>>>   - Create a dma-buf from a list of user-space virtual addresses.
> >>>>   - Allow to map a dma-buf into user-space, so it can then be used with
> >>>>     the gntdev.
> >>>>
> >>>>I think this is generic enough that it could be implemented by a
> >>>>device not tied to Xen. AFAICT the hyper_dma guys also wanted
> >>>>something similar to this.
> >>Ok, so just to summarize, xen-zcopy/hyper-dmabuf as they are now,
> >>are no go from your POV?

FYI,

our use-case is "surface sharing" or "graphics object sharing", where a client
application in one guest renders and exports its render target (e.g. an EGL
surface) as a dma-buf. This dma-buf is then exported via the hyper_dmabuf
driver to another guest/host where a compositor is running. The importing
domain creates a dma-buf from the shared reference; it is then imported as an
EGL image that can later be used as a texture object via the EGL API. Mapping
the dma-buf into user space (or vice versa) might be possible by modifying the
user-space drivers/applications, but from our perspective it is an unnecessary
extra step. Also, we want to keep all objects at the kernel level.
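
A minimal sketch of that import path on the compositor side, assuming an
already-received dma-buf fd plus its width/height/stride metadata (this is
generic EGL_EXT_image_dma_buf_import usage, not actual hyper_dmabuf code):

#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <drm_fourcc.h>

/* Turn an imported dma-buf fd into a GLES texture. */
static GLuint texture_from_dmabuf(EGLDisplay dpy, int dmabuf_fd,
                                  int width, int height, int stride)
{
    PFNEGLCREATEIMAGEKHRPROC create_image =
        (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
    PFNGLEGLIMAGETARGETTEXTURE2DOESPROC image_target_texture =
        (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
            eglGetProcAddress("glEGLImageTargetTexture2DOES");

    const EGLint attrs[] = {
        EGL_WIDTH, width,
        EGL_HEIGHT, height,
        EGL_LINUX_DRM_FOURCC_EXT, DRM_FORMAT_ARGB8888,
        EGL_DMA_BUF_PLANE0_FD_EXT, dmabuf_fd,
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
        EGL_DMA_BUF_PLANE0_PITCH_EXT, stride,
        EGL_NONE
    };

    /* Wrap the dma-buf in an EGLImage... */
    EGLImageKHR image = create_image(dpy, EGL_NO_CONTEXT,
                                     EGL_LINUX_DMA_BUF_EXT, NULL, attrs);

    /* ...and bind it as the storage of a texture the compositor samples. */
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    image_target_texture(GL_TEXTURE_2D, (GLeglImageOES)image);
    return tex;
}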

> >My opinion is that there seems to be a more generic way to implement
> >this, and thus I would prefer that one.
> >
> >>Instead, we have to make all that fancy stuff
> >>with VAs <-> device-X and have that device-X driver live out of drivers/xen
> >>as it is not a Xen specific driver?
> >That would be my preference if feasible, simply because it can be
> >reused by other use-cases that need to create dma-bufs in user-space.
> There is a use-case I have: a display unit on my target has a DMA
> controller which can't do scatter-gather, e.g. it only expects a
> single starting address of the buffer.
> In order to create a dma-buf from grefs in this case
> I allocate memory with dma_alloc_xxx and then balloon pages of the
> buffer and finally map grefs onto this DMA buffer.
> This way I can give this shared buffer to the display unit as its bus
> addresses are contiguous.
> 
> With the proposed solution (gntdev + device-X) I won't be able to achieve
> this,
> as I have no control over from where gntdev/balloon drivers get the pages
> (even more, those can easily be out of DMA address space of the display
> unit).
> 
> Thus, even if implemented, I can't use this approach.
> >
> >In any case I just knew about dma-bufs this morning, there might be
> >things that I'm missing.
> >
> >Roger.
> 

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
@ 2018-04-18 16:01                           ` Dongwon Kim
  0 siblings, 0 replies; 131+ messages in thread
From: Dongwon Kim @ 2018-04-18 16:01 UTC (permalink / raw)
  To: Oleksandr Andrushchenko
  Cc: jgross, Artem Mygaiev, Oleksandr_Andrushchenko, airlied,
	linux-kernel, dri-devel, Paul Durrant, Potrola, MateuszX,
	daniel.vetter, xen-devel, boris.ostrovsky, Roger Pau Monné

On Wed, Apr 18, 2018 at 03:42:29PM +0300, Oleksandr Andrushchenko wrote:
> On 04/18/2018 01:55 PM, Roger Pau Monné wrote:
> >On Wed, Apr 18, 2018 at 01:39:35PM +0300, Oleksandr Andrushchenko wrote:
> >>On 04/18/2018 01:18 PM, Paul Durrant wrote:
> >>>>-----Original Message-----
> >>>>From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf
> >>>>Of Roger Pau Monné
> >>>>Sent: 18 April 2018 11:11
> >>>>To: Oleksandr Andrushchenko <andr2000@gmail.com>
> >>>>Cc: jgross@suse.com; Artem Mygaiev <Artem_Mygaiev@epam.com>;
> >>>>Dongwon Kim <dongwon.kim@intel.com>; airlied@linux.ie;
> >>>>Oleksandr_Andrushchenko@epam.com; linux-kernel@vger.kernel.org; dri-
> >>>>devel@lists.freedesktop.org; Potrola, MateuszX
> >>>><mateuszx.potrola@intel.com>; xen-devel@lists.xenproject.org;
> >>>>daniel.vetter@intel.com; boris.ostrovsky@oracle.com; Matt Roper
> >>>><matthew.d.roper@intel.com>
> >>>>Subject: Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy
> >>>>helper DRM driver
> >>>>
> >>>>On Wed, Apr 18, 2018 at 11:01:12AM +0300, Oleksandr Andrushchenko
> >>>>wrote:
> >>>>>On 04/18/2018 10:35 AM, Roger Pau Monné wrote:
> >>>>After speaking with Oleksandr on IRC, I think the main usage of the
> >>>>gntdev extension is to:
> >>>>
> >>>>1. Create a dma-buf from a set of grant references.
> >>>>2. Share dma-buf and get a list of grant references.
> >>>>
> >>>>I think this set of operations could be broken into:
> >>>>
> >>>>1.1 Map grant references into user-space using the gntdev.
> >>>>1.2 Create a dma-buf out of a set of user-space virtual addresses.
> >>>>
> >>>>2.1 Map a dma-buf into user-space.
> >>>>2.2 Get grefs out of the user-space addresses where the dma-buf is
> >>>>      mapped.
> >>>>
> >>>>So it seems like what's actually missing is a way to:
> >>>>
> >>>>   - Create a dma-buf from a list of user-space virtual addresses.
> >>>>   - Allow to map a dma-buf into user-space, so it can then be used with
> >>>>     the gntdev.
> >>>>
> >>>>I think this is generic enough that it could be implemented by a
> >>>>device not tied to Xen. AFAICT the hyper_dma guys also wanted
> >>>>something similar to this.
> >>Ok, so just to summarize, xen-zcopy/hyper-dmabuf as they are now,
> >>are no go from your POV?

FYI,

our use-case is "surface sharing" or "graphic obj sharing" where a client
application in one guest renders and export this render target(e.g. EGL surface)
as dma-buf. This dma-buf is then exported to another guest/host via hyper_dmabuf
drv where a compositor is running. This importing domain creates a dmabuf with
shared reference then it is imported as EGL image that later can be used as
texture object via EGL api. Mapping dmabuf to the userspace or vice versa
might be possible with modifying user space drivers/applications but it is an
unnecessary extra step from our perspective. Also, we want to keep all objects
in the kernel level.

> >My opinion is that there seems to be a more generic way to implement
> >this, and thus I would prefer that one.
> >
> >>Instead, we have to make all that fancy stuff
> >>with VAs <-> device-X and have that device-X driver live out of drivers/xen
> >>as it is not a Xen specific driver?
> >That would be my preference if feasible, simply because it can be
> >reused by other use-cases that need to create dma-bufs in user-space.
> There is a use-case I have: a display unit on my target has a DMA
> controller which can't do scatter-gather, e.g. it only expects a
> single starting address of the buffer.
> In order to create a dma-buf from grefs in this case
> I allocate memory with dma_alloc_xxx and then balloon pages of the
> buffer and finally map grefs onto this DMA buffer.
> This way I can give this shared buffer to the display unit as its bus
> addresses are contiguous.
> 
> With the proposed solution (gntdev + device-X) I won't be able to achieve
> this,
> as I have no control over from where gntdev/balloon drivers get the pages
> (even more, those can easily be out of DMA address space of the display
> unit).
> 
> Thus, even if implemented, I can't use this approach.
> >
> >In any case I just knew about dma-bufs this morning, there might be
> >things that I'm missing.
> >
> >Roger.
> 
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-18 12:42                       ` Oleksandr Andrushchenko
  2018-04-18 16:01                           ` Dongwon Kim
@ 2018-04-18 16:01                         ` Dongwon Kim
  1 sibling, 0 replies; 131+ messages in thread
From: Dongwon Kim @ 2018-04-18 16:01 UTC (permalink / raw)
  To: Oleksandr Andrushchenko
  Cc: jgross, Artem Mygaiev, Oleksandr_Andrushchenko, airlied,
	linux-kernel, dri-devel, Paul Durrant, Potrola, MateuszX,
	daniel.vetter, xen-devel, boris.ostrovsky, Matt Roper,
	Roger Pau Monné

On Wed, Apr 18, 2018 at 03:42:29PM +0300, Oleksandr Andrushchenko wrote:
> On 04/18/2018 01:55 PM, Roger Pau Monné wrote:
> >On Wed, Apr 18, 2018 at 01:39:35PM +0300, Oleksandr Andrushchenko wrote:
> >>On 04/18/2018 01:18 PM, Paul Durrant wrote:
> >>>>-----Original Message-----
> >>>>From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf
> >>>>Of Roger Pau Monné
> >>>>Sent: 18 April 2018 11:11
> >>>>To: Oleksandr Andrushchenko <andr2000@gmail.com>
> >>>>Cc: jgross@suse.com; Artem Mygaiev <Artem_Mygaiev@epam.com>;
> >>>>Dongwon Kim <dongwon.kim@intel.com>; airlied@linux.ie;
> >>>>Oleksandr_Andrushchenko@epam.com; linux-kernel@vger.kernel.org; dri-
> >>>>devel@lists.freedesktop.org; Potrola, MateuszX
> >>>><mateuszx.potrola@intel.com>; xen-devel@lists.xenproject.org;
> >>>>daniel.vetter@intel.com; boris.ostrovsky@oracle.com; Matt Roper
> >>>><matthew.d.roper@intel.com>
> >>>>Subject: Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy
> >>>>helper DRM driver
> >>>>
> >>>>On Wed, Apr 18, 2018 at 11:01:12AM +0300, Oleksandr Andrushchenko
> >>>>wrote:
> >>>>>On 04/18/2018 10:35 AM, Roger Pau Monné wrote:
> >>>>After speaking with Oleksandr on IRC, I think the main usage of the
> >>>>gntdev extension is to:
> >>>>
> >>>>1. Create a dma-buf from a set of grant references.
> >>>>2. Share dma-buf and get a list of grant references.
> >>>>
> >>>>I think this set of operations could be broken into:
> >>>>
> >>>>1.1 Map grant references into user-space using the gntdev.
> >>>>1.2 Create a dma-buf out of a set of user-space virtual addresses.
> >>>>
> >>>>2.1 Map a dma-buf into user-space.
> >>>>2.2 Get grefs out of the user-space addresses where the dma-buf is
> >>>>      mapped.
> >>>>
> >>>>So it seems like what's actually missing is a way to:
> >>>>
> >>>>   - Create a dma-buf from a list of user-space virtual addresses.
> >>>>   - Allow to map a dma-buf into user-space, so it can then be used with
> >>>>     the gntdev.
> >>>>
> >>>>I think this is generic enough that it could be implemented by a
> >>>>device not tied to Xen. AFAICT the hyper_dma guys also wanted
> >>>>something similar to this.
> >>Ok, so just to summarize, xen-zcopy/hyper-dmabuf as they are now,
> >>are no go from your POV?

FYI,

our use-case is "surface sharing" or "graphic obj sharing" where a client
application in one guest renders and export this render target(e.g. EGL surface)
as dma-buf. This dma-buf is then exported to another guest/host via hyper_dmabuf
drv where a compositor is running. This importing domain creates a dmabuf with
shared reference then it is imported as EGL image that later can be used as
texture object via EGL api. Mapping dmabuf to the userspace or vice versa
might be possible with modifying user space drivers/applications but it is an
unnecessary extra step from our perspective. Also, we want to keep all objects
in the kernel level.
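
For context, the importer side described above is the standard EGL dma-buf
import path. A rough compositor-side sketch, assuming a single-plane ARGB8888
buffer whose fd was already produced from the hyper_dmabuf id, and that the
EGL/GLES extension entry points are available (normally resolved via
eglGetProcAddress):

#define EGL_EGLEXT_PROTOTYPES
#define GL_GLEXT_PROTOTYPES
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <drm_fourcc.h>

/* Compositor side: wrap an already-imported dma-buf fd into a GL texture. */
static GLuint texture_from_dmabuf(EGLDisplay dpy, int fd,
                                  EGLint width, EGLint height, EGLint stride)
{
    const EGLint attrs[] = {
        EGL_WIDTH, width,
        EGL_HEIGHT, height,
        EGL_LINUX_DRM_FOURCC_EXT, DRM_FORMAT_ARGB8888,
        EGL_DMA_BUF_PLANE0_FD_EXT, fd,
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
        EGL_DMA_BUF_PLANE0_PITCH_EXT, stride,
        EGL_NONE
    };
    EGLImageKHR image;
    GLuint tex;

    image = eglCreateImageKHR(dpy, EGL_NO_CONTEXT, EGL_LINUX_DMA_BUF_EXT,
                              (EGLClientBuffer)NULL, attrs);
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* Bind the EGLImage (and thus the shared buffer) as texture storage. */
    glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, image);
    return tex;
}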

> >My opinion is that there seems to be a more generic way to implement
> >this, and thus I would prefer that one.
> >
> >>Instead, we have to make all that fancy stuff
> >>with VAs <-> device-X and have that device-X driver live out of drivers/xen
> >>as it is not a Xen specific driver?
> >That would be my preference if feasible, simply because it can be
> >reused by other use-cases that need to create dma-bufs in user-space.
> There is a use-case I have: a display unit on my target has a DMA
> controller which can't do scatter-gather, e.g. it only expects a
> single starting address of the buffer.
> In order to create a dma-buf from grefs in this case
> I allocate memory with dma_alloc_xxx and then balloon pages of the
> buffer and finally map grefs onto this DMA buffer.
> This way I can give this shared buffer to the display unit as its bus
> addresses are contiguous.
> 
> With the proposed solution (gntdev + device-X) I won't be able to achieve
> this,
> as I have no control over from where gntdev/balloon drivers get the pages
> (even more, those can easily be out of DMA address space of the display
> unit).
> 
> Thus, even if implemented, I can't use this approach.
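
(To make the above concrete: a very rough kernel-side sketch of that idea
using the in-kernel grant-table API. It glosses over the ballooning step and
PV/HVM address handling, so treat it as an illustration of the approach, not
the actual xen-zcopy code; dev, grefs and otherend_id are assumed to come from
the driver context.)

#include <linux/dma-mapping.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <xen/grant_table.h>
#include <xen/page.h>

/*
 * Back a bus-contiguous buffer with the frontend's grants so that a
 * scatter-gather-less display DMA engine can consume it directly.
 */
static int zcopy_map_contig(struct device *dev, grant_ref_t *grefs,
                            unsigned int nr_pages, domid_t otherend_id,
                            dma_addr_t *dma_addr)
{
    size_t size = nr_pages * XEN_PAGE_SIZE;
    struct gnttab_map_grant_ref *map_ops;
    struct page **pages;
    unsigned char *vaddr;
    unsigned int i;
    int ret;

    /* Physically/bus-contiguous backing memory for the display unit. */
    vaddr = dma_alloc_coherent(dev, size, dma_addr, GFP_KERNEL);
    map_ops = kcalloc(nr_pages, sizeof(*map_ops), GFP_KERNEL);
    pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
    if (!vaddr || !map_ops || !pages)
        return -ENOMEM;

    /* In the real driver the buffer's pages are ballooned out here. */

    for (i = 0; i < nr_pages; i++) {
        unsigned char *p = vaddr + i * XEN_PAGE_SIZE;

        pages[i] = virt_to_page(p);
        gnttab_set_map_op(&map_ops[i], (unsigned long)p,
                          GNTMAP_host_map, grefs[i], otherend_id);
    }

    /* Map the frontend's grants onto the contiguous buffer's frames. */
    ret = gnttab_map_refs(map_ops, NULL, pages, nr_pages);

    kfree(map_ops);
    kfree(pages);
    return ret;
}
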
> >
> >In any case I just knew about dma-bufs this morning, there might be
> >things that I'm missing.
> >
> >Roger.
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-18  6:38         ` Oleksandr Andrushchenko
@ 2018-04-18 17:01             ` Dongwon Kim
  2018-04-18  7:35             ` Roger Pau Monné
                               ` (2 subsequent siblings)
  3 siblings, 0 replies; 131+ messages in thread
From: Dongwon Kim @ 2018-04-18 17:01 UTC (permalink / raw)
  To: Oleksandr Andrushchenko
  Cc: Oleksandr_Andrushchenko, jgross, Artem Mygaiev, konrad.wilk,
	airlied, linux-kernel, dri-devel, Potrola, MateuszX,
	daniel.vetter, xen-devel, boris.ostrovsky, Matt Roper

On Wed, Apr 18, 2018 at 09:38:39AM +0300, Oleksandr Andrushchenko wrote:
> On 04/17/2018 11:57 PM, Dongwon Kim wrote:
> >On Tue, Apr 17, 2018 at 09:59:28AM +0200, Daniel Vetter wrote:
> >>On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
> >>>Yeah, I definitely agree on the idea of expanding the use case to the
> >>>general domain where dmabuf sharing is used. However, what you are
> >>>targetting with proposed changes is identical to the core design of
> >>>hyper_dmabuf.
> >>>
> >>>On top of this basic functionalities, hyper_dmabuf has driver level
> >>>inter-domain communication, that is needed for dma-buf remote tracking
> >>>(no fence forwarding though), event triggering and event handling, extra
> >>>meta data exchange and hyper_dmabuf_id that represents grefs
> >>>(grefs are shared implicitly on driver level)
> >>This really isn't a positive design aspect of hyperdmabuf imo. The core
> >>code in xen-zcopy (ignoring the ioctl side, which will be cleaned up) is
> >>very simple & clean.
> >>
> >>If there's a clear need later on we can extend that. But for now xen-zcopy
> >>seems to cover the basic use-case needs, so gets the job done.
> >>
> >>>Also it is designed with frontend (common core framework) + backend
> >>>(hyper visor specific comm and memory sharing) structure for portability.
> >>>We just can't limit this feature to Xen because we want to use the same
> >>>uapis not only for Xen but also other applicable hypervisor, like ACORN.
> >>See the discussion around udmabuf and the needs for kvm. I think trying to
> >>make an ioctl/uapi that works for multiple hypervisors is misguided - it
> >>likely won't work.
> >>
> >>On top of that the 2nd hypervisor you're aiming to support is ACRN. That's
> >>not even upstream yet, nor have I seen any patches proposing to land linux
> >>support for ACRN. Since it's not upstream, it doesn't really matter for
> >>upstream consideration. I'm doubting that ACRN will use the same grant
> >>references as xen, so the same uapi won't work on ACRN as on Xen anyway.
> >Yeah, ACRN doesn't have grant-table. Only Xen supports it. But that is why
> >hyper_dmabuf has been architectured with the concept of backend.
> >If you look at the structure of backend, you will find that
> >backend is just a set of standard function calls as shown here:
> >
> >struct hyper_dmabuf_bknd_ops {
> >         /* backend initialization routine (optional) */
> >         int (*init)(void);
> >
> >         /* backend cleanup routine (optional) */
> >         int (*cleanup)(void);
> >
> >         /* retreiving id of current virtual machine */
> >         int (*get_vm_id)(void);
> >
> >         /* get pages shared via hypervisor-specific method */
> >         int (*share_pages)(struct page **pages, int vm_id,
> >                            int nents, void **refs_info);
> >
> >         /* make shared pages unshared via hypervisor specific method */
> >         int (*unshare_pages)(void **refs_info, int nents);
> >
> >         /* map remotely shared pages on importer's side via
> >          * hypervisor-specific method
> >          */
> >         struct page ** (*map_shared_pages)(unsigned long ref, int vm_id,
> >                                            int nents, void **refs_info);
> >
> >         /* unmap and free shared pages on importer's side via
> >          * hypervisor-specific method
> >          */
> >         int (*unmap_shared_pages)(void **refs_info, int nents);
> >
> >         /* initialize communication environment */
> >         int (*init_comm_env)(void);
> >
> >         void (*destroy_comm)(void);
> >
> >         /* upstream ch setup (receiving and responding) */
> >         int (*init_rx_ch)(int vm_id);
> >
> >         /* downstream ch setup (transmitting and parsing responses) */
> >         int (*init_tx_ch)(int vm_id);
> >
> >         int (*send_req)(int vm_id, struct hyper_dmabuf_req *req, int wait);
> >};
> >
> >All of these can be mapped with any hypervisor specific implementation.
> >We designed backend implementation for Xen using grant-table, Xen event
> >and ring buffer communication. For ACRN, we have another backend using Virt-IO
> >for both memory sharing and communication.
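
(As an illustration of how a Xen backend could fill in one of the callbacks
above -- purely a sketch with invented names, not the actual hyper_dmabuf
code:)

#include <linux/slab.h>
#include <xen/grant_table.h>
#include <xen/page.h>

/*
 * Sketch of a Xen flavour of the share_pages() callback: grant each page
 * of the buffer to the importing domain and hand the grant references
 * back through refs_info. The real backend batches this and keeps more
 * state around for later revocation.
 */
static int xen_share_pages(struct page **pages, int vm_id,
                           int nents, void **refs_info)
{
    grant_ref_t *refs;
    int i, r;

    refs = kcalloc(nents, sizeof(*refs), GFP_KERNEL);
    if (!refs)
        return -ENOMEM;

    for (i = 0; i < nents; i++) {
        /* Read/write grant of this frame to the importing domain. */
        r = gnttab_grant_foreign_access(vm_id, xen_page_to_gfn(pages[i]), 0);
        if (r < 0) {
            kfree(refs);
            return r;
        }
        refs[i] = r;
    }

    *refs_info = refs;
    return 0;
}
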
> >
> >We tried to define this structure of backend to make it general enough (or
> >it can be even modified or extended to support more cases.) so that it can
> >fit to other hypervisor cases. Only requirements/expectation on the hypervisor
> >are page-level memory sharing and inter-domain communication, which I think
> >are standard features of modern hypervisor.
> >
> >And please review common UAPIs that hyper_dmabuf and xen-zcopy supports. They
> >are very general. One is getting FD (dmabuf) and get those shared. The other
> >is generating dmabuf from global handle (secure handle hiding gref behind it).
> >On top of this, hyper_dmabuf has "unshare" and "query" which are also useful
> >for any cases.
> >
> >So I don't know why we wouldn't want to try to make these standard in most of
> >hypervisor cases instead of limiting it to certain hypervisor like Xen.
> >Frontend-backend structre is optimal for this I think.
> >
> >>>So I am wondering we can start with this hyper_dmabuf then modify it for
> >>>your use-case if needed and polish and fix any glitches if we want to
> >>>to use this for all general dma-buf usecases.
> >>Imo xen-zcopy is a much more reasonable starting point for upstream, which
> >>can then be extended (if really proven to be necessary).
> >>
> >>>Also, I still have one unresolved question regarding the export/import flow
> >>>in both of hyper_dmabuf and xen-zcopy.
> >>>
> >>>@danvet: Would this flow (guest1->import existing dmabuf->share underlying
> >>>pages->guest2->map shared pages->create/export dmabuf) be acceptable now?
> >>I think if you just look at the pages, and make sure you handle the
> >>sg_page == NULL case it's ok-ish. It's not great, but mostly it should
> >>work. The real trouble with hyperdmabuf was the forwarding of all these
> >>calls, instead of just passing around a list of grant references.
> >I talked to danvet about this litte bit.
> >
> >I think there was some misunderstanding on this "forwarding". Exporting
> >and importing flow in hyper_dmabuf are basically same as xen-zcopy's. I think
> >what made confusion was that importing domain notifies exporting domain when
> >there are dmabuf operations (like attach, mapping, detach and release) so that
> >exporting domain can track the usage of dmabuf on the importing domain.
> >
> >I designed this for some basic tracking. We may not need to notify for every
> >different activity but if none of them is there, exporting domain can't
> >determine if it is ok to unshare the buffer or the originator (like i915)
> >can free the object even if it's being accessed in importing domain.
> >
> >Anyway I really hope we can have enough discussion and resolve all concerns
> >before nailing it down.
> Let me explain how this works in case of para-virtual display
> use-case with xen-zcopy.
> 
> 1. There are 4 components in the system:
>   - displif protocol [1]
>   - xen-front - para-virtual DRM driver running in DomU (Guest) VM
>   - backend - user-space application running in Dom0
>   - xen-zcopy - DRM (as of now) helper driver running in Dom0
> 
> 2. All the communication between domains happens between xen-front and the
> backend, so it is possible to implement para-virtual display use-case
> without xen-zcopy at all (this is why it is a helper driver), but in this
> case
> memory copying occurs (this is out of scope for this discussion).
> 
> 3. To better understand security issues let's see what use-cases we have:
> 
> 3.1 xen-front exports its dma-buf (dumb) to the backend
> 
> In this case there are no security issues at all as Dom0 (backend side)
> will use DomU's pages (xen-front side) and Dom0 is a trusted domain, so
> we assume it won't hurt DomU. Even if DomU dies nothing bad happens to Dom0.
> If DomU misbehaves it can only write to its own pages shared with Dom0, but
> still
> cannot go beyond that, e.g. it can't access Dom0's memory.
> 
> 3.2 Backend exports dma-buf to xen-front
> 
> In this case Dom0 pages are shared with DomU. As before, DomU can only write
> to these pages, not any other page from Dom0, so it can be still considered
> safe.
> But, the following must be considered (highlighted in xen-front's Kernel
> documentation):
>  - If guest domain dies then pages/grants received from the backend cannot
>    be claimed back - think of it as memory lost to Dom0 (won't be used for
> any
>    other guest)
>  - Misbehaving guest may send too many requests to the backend exhausting
>    its grant references and memory (consider this from security POV). As the
>    backend runs in the trusted domain we also assume that it is trusted as
> well,
>    e.g. must take measures to prevent DDoS attacks.
> 

There is another security issue that this driver itself can cause. Using the
grant reference as-is is not very safe, because it is easy to guess (probably
by simply counting), and any attacker running in the same importing domain can
use such references to map the shared pages and access the data. This is why we
implemented "hyper_dmabuf_id", which contains a 96-bit random number to make it
almost impossible to guess. All grant references for pages are shared at the
driver level. This is another reason for having inter-VM communication.
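
Just to illustrate the guessability point, a minimal sketch of such an id; the
actual hyper_dmabuf layout may differ:

#include <linux/random.h>

/* Illustrative only -- the real hyper_dmabuf_id layout may differ. */
struct hyper_dmabuf_id_sketch {
    int vm_id;          /* exporting VM */
    u32 rng_key[3];     /* 96 bits of randomness hiding the grefs */
};

static void hyper_id_generate(struct hyper_dmabuf_id_sketch *id, int vm_id)
{
    id->vm_id = vm_id;
    get_random_bytes(id->rng_key, sizeof(id->rng_key));
}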

> 4. xen-front/backend/xen-zcopy synchronization
> 
> 4.1. As I already said in 2) all the inter VM communication happens between
> xen-front and the backend, xen-zcopy is NOT involved in that.

Yeah, understood, but this is also my point. Both hyper_dmabuf and xen-zcopy
are drivers that expand dmabuf sharing to the inter-VM level. Then shouldn't
such a driver itself provide some way to synchronize between the two VMs? I
think the assumption here is that the Xen PV display interface and the backend
(running in user space) are always used together with xen-zcopy, but what if a
user space application just wants to use xen-zcopy separately? Since it exposes
ioctls, this is possible unless you add some dependency configuration there.

> When xen-front wants to destroy a display buffer (dumb/dma-buf) it issues a
> XENDISPL_OP_DBUF_DESTROY command (opposite to XENDISPL_OP_DBUF_CREATE).
> This call is synchronous, so xen-front expects that backend does free the
> buffer pages on return.

Does it mean the importing domain (dom0, assuming we do domU -> dom0 dmabuf
exporting) makes a destroy request to the exporting VM? But isn't it up to
domU to make such a decision, since it is the owner of the buffer?

And what about the other way around? For example, what happens if the
originator of the buffer (like i915) decides to free the object behind the
dmabuf? Would i915 or the exporting side of xen-zcopy know whether dom0
currently uses the dmabuf or not?

And again, I think this tracking should be handled in the driver itself,
implicitly and without any userspace involvement, if we want this dmabuf
sharing to exist as a generic feature.

> 
> 4.2. Backend, on XENDISPL_OP_DBUF_DESTROY:
>   - closes all dumb handles/fd's of the buffer according to [3]
>   - issues DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL to xen-zcopy to make
> sure
>     the buffer is freed (think of it as it waits for dma-buf->release
> callback)
>   - replies to xen-front that the buffer can be destroyed.
> This way deletion of the buffer happens synchronously on both Dom0 and DomU
> sides. In case if DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE returns with time-out
> error
> (BTW, wait time is a parameter of this IOCTL), Xen will defer grant
> reference
> removal and will retry later until those are free.
> 
> Hope this helps understand how buffers are synchronously deleted in case
> of xen-zcopy with a single protocol command.
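
To make 4.2 above concrete, the backend-side destroy path could look roughly
like this user-space sketch. The WAIT_FREE argument layout below is only an
assumption based on the cover letter (wait handle plus timeout); the real
definitions live in the proposed include/uapi/drm/xen_zcopy_drm.h:

#include <stdint.h>
#include <xf86drm.h>

/*
 * Placeholder definitions: the real request code and argument struct
 * come from the proposed include/uapi/drm/xen_zcopy_drm.h; the fields
 * below only mirror what the cover letter describes (handle + timeout).
 */
#define XEN_ZCOPY_DUMB_WAIT_FREE_SKETCH 0
struct zcopy_wait_free_sketch {
    uint32_t wait_handle;   /* handle obtained when the buffer was created */
    uint32_t wait_to_ms;    /* how long to wait for dma-buf->release */
};

/* Backend reaction to XENDISPL_OP_DBUF_DESTROY, as described in 4.2. */
static int backend_destroy_dbuf(int drm_fd, int zcopy_fd,
                                uint32_t gem_handle, uint32_t wait_handle)
{
    struct drm_gem_close close_args = { .handle = gem_handle };
    struct zcopy_wait_free_sketch wait = {
        .wait_handle = wait_handle,
        .wait_to_ms = 3000,
    };

    /* Drop the last local reference to the dumb buffer... */
    drmIoctl(drm_fd, DRM_IOCTL_GEM_CLOSE, &close_args);

    /* ...then block until dma-buf->release has actually run (or time out). */
    return drmIoctl(zcopy_fd, XEN_ZCOPY_DUMB_WAIT_FREE_SKETCH, &wait);
}
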
> 
> I think the above logic can also be re-used by the hyper-dmabuf driver with
> some additional work:
> 
> 1. xen-zcopy can be split into 2 parts and extend:
> 1.1. Xen gntdev driver [4], [5] to allow creating dma-buf from grefs and
> vise versa,
> implement "wait" ioctl (wait for dma-buf->release): currently these are
> DRM_XEN_ZCOPY_DUMB_FROM_REFS, DRM_XEN_ZCOPY_DUMB_TO_REFS and
> DRM_XEN_ZCOPY_DUMB_WAIT_FREE
> 1.2. Xen balloon driver [6] to allow allocating contiguous buffers (not
> needed
> by current hyper-dmabuf, but is a must for xen-zcopy use-cases)

Not sure how to match our use case to xen-zcopy's case, but we don't do
alloc/free all the time. Also, dom0 won't make any freeing request to domU,
since it doesn't own the buffer. It only follows the dmabuf protocol
(attach/detach/release), which is tracked by domU (the exporting VM). And for
destruction of the sharing we have a separate IOCTL, which revokes the grant
references "IF" there are no drivers attached to the dmabuf in dom0.
Otherwise, it defers destruction of the sharing until it gets the final
dmabuf release message from dom0.

Also, in our use case (although we didn't intend it to work this way) it ends
up reusing the same 3~4 buffers repeatedly. This is because DRM in domU (which
renders) doesn't allocate more objects for EGL images while there are already
free, previously used objects in the list. And we actually don't do full-path
exporting (extracting pages -> grant references -> getting those shared) all
the time. If the same dmabuf has already been exported, we just update the
private message and notify dom0 (this is the reason for the hash tables
keeping exported and imported dmabufs).
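
A condensed sketch of that "reuse if already exported" bookkeeping with a
kernel hashtable (names invented; the real driver keeps more per-entry state):

#include <linux/dma-buf.h>
#include <linux/hashtable.h>
#include <linux/slab.h>

/* Exported-buffers cache, keyed by the dma_buf pointer (names invented). */
static DEFINE_HASHTABLE(exported_bufs, 7);

struct exported_entry {
    struct dma_buf *dmabuf;
    u32 export_count;           /* how many times it has been re-exported */
    struct hlist_node node;
};

static struct exported_entry *find_exported(struct dma_buf *dmabuf)
{
    struct exported_entry *e;

    hash_for_each_possible(exported_bufs, e, node, (unsigned long)dmabuf)
        if (e->dmabuf == dmabuf)
            return e;   /* already shared: just update/notify dom0 */
    return NULL;
}

static void remember_exported(struct exported_entry *e)
{
    hash_add(exported_bufs, &e->node, (unsigned long)e->dmabuf);
}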

> 
> 2. Then hyper-dmabuf uses Xen gntdev driver for Xen specific dma-buf
> alloc/free/wait
> 
> 3. hyper-dmabuf uses its own protocol between VMs to communicate buffer
> creation/deletion and whatever else is needed (fences?).
> 
> To Xen community: please think of dma-buf here as of a buffer representation
> mechanism,
> e.g. at the end of the day it's just a set of pages.
> 
> Thank you,
> Oleksandr
> >>-Daniel
> >>
> >>>Regards,
> >>>DW
> >>>On Mon, Apr 16, 2018 at 05:33:46PM +0300, Oleksandr Andrushchenko wrote:
> >>>>Hello, all!
> >>>>
> >>>>After discussing xen-zcopy and hyper-dmabuf [1] approaches
> >>>>
> >>>>it seems that xen-zcopy can be made not depend on DRM core any more
> >>>>
> >>>>and be dma-buf centric (which it in fact is).
> >>>>
> >>>>The DRM code was mostly there for dma-buf's FD import/export
> >>>>
> >>>>with DRM PRIME UAPI and with DRM use-cases in mind, but it comes out that if
> >>>>
> >>>>the proposed 2 IOCTLs (DRM_XEN_ZCOPY_DUMB_FROM_REFS and
> >>>>DRM_XEN_ZCOPY_DUMB_TO_REFS)
> >>>>
> >>>>are extended to also provide a file descriptor of the corresponding dma-buf,
> >>>>then
> >>>>
> >>>>PRIME stuff in the driver is not needed anymore.
> >>>>
> >>>>That being said, xen-zcopy can safely be detached from DRM and moved from
> >>>>
> >>>>drivers/gpu/drm/xen into drivers/xen/dma-buf-backend(?).
> >>>>
> >>>>This driver then becomes a universal way to turn any shared buffer between
> >>>>Dom0/DomD
> >>>>
> >>>>and DomU(s) into a dma-buf, e.g. one can create a dma-buf from any grant
> >>>>references
> >>>>
> >>>>or represent a dma-buf as grant-references for export.
> >>>>
> >>>>This way the driver can be used not only for DRM use-cases, but also for
> >>>>other
> >>>>
> >>>>use-cases which may require zero copying between domains.
> >>>>
> >>>>For example, the use-cases we are about to work in the nearest future will
> >>>>use
> >>>>
> >>>>V4L, e.g. we plan to support cameras, codecs etc. and all these will benefit
> >>>>
> >>>>from zero copying much. Potentially, even block/net devices may benefit,
> >>>>
> >>>>but this needs some evaluation.
> >>>>
> >>>>
> >>>>I would love to hear comments for authors of the hyper-dmabuf
> >>>>
> >>>>and Xen community, as well as DRI-Devel and other interested parties.
> >>>>
> >>>>
> >>>>Thank you,
> >>>>
> >>>>Oleksandr
> >>>>
> >>>>
> >>>>On 03/29/2018 04:19 PM, Oleksandr Andrushchenko wrote:
> >>>>>From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> >>>>>
> >>>>>Hello!
> >>>>>
> >>>>>When using Xen PV DRM frontend driver then on backend side one will need
> >>>>>to do copying of display buffers' contents (filled by the
> >>>>>frontend's user-space) into buffers allocated at the backend side.
> >>>>>Taking into account the size of display buffers and frames per seconds
> >>>>>it may result in unneeded huge data bus occupation and performance loss.
> >>>>>
> >>>>>This helper driver allows implementing zero-copying use-cases
> >>>>>when using Xen para-virtualized frontend display driver by
> >>>>>implementing a DRM/KMS helper driver running on backend's side.
> >>>>>It utilizes PRIME buffers API to share frontend's buffers with
> >>>>>physical device drivers on backend's side:
> >>>>>
> >>>>>  - a dumb buffer created on backend's side can be shared
> >>>>>    with the Xen PV frontend driver, so it directly writes
> >>>>>    into backend's domain memory (into the buffer exported from
> >>>>>    DRM/KMS driver of a physical display device)
> >>>>>  - a dumb buffer allocated by the frontend can be imported
> >>>>>    into physical device DRM/KMS driver, thus allowing to
> >>>>>    achieve no copying as well
> >>>>>
> >>>>>For that reason number of IOCTLs are introduced:
> >>>>>  -  DRM_XEN_ZCOPY_DUMB_FROM_REFS
> >>>>>     This will create a DRM dumb buffer from grant references provided
> >>>>>     by the frontend
> >>>>>  - DRM_XEN_ZCOPY_DUMB_TO_REFS
> >>>>>    This will grant references to a dumb/display buffer's memory provided
> >>>>>    by the backend
> >>>>>  - DRM_XEN_ZCOPY_DUMB_WAIT_FREE
> >>>>>    This will block until the dumb buffer with the wait handle provided
> >>>>>    be freed
> >>>>>
> >>>>>With this helper driver I was able to drop CPU usage from 17% to 3%
> >>>>>on Renesas R-Car M3 board.
> >>>>>
> >>>>>This was tested with Renesas' Wayland-KMS and backend running as DRM master.
> >>>>>
> >>>>>Thank you,
> >>>>>Oleksandr
> >>>>>
> >>>>>Oleksandr Andrushchenko (1):
> >>>>>   drm/xen-zcopy: Add Xen zero-copy helper DRM driver
> >>>>>
> >>>>>  Documentation/gpu/drivers.rst               |   1 +
> >>>>>  Documentation/gpu/xen-zcopy.rst             |  32 +
> >>>>>  drivers/gpu/drm/xen/Kconfig                 |  25 +
> >>>>>  drivers/gpu/drm/xen/Makefile                |   5 +
> >>>>>  drivers/gpu/drm/xen/xen_drm_zcopy.c         | 880 ++++++++++++++++++++++++++++
> >>>>>  drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c | 154 +++++
> >>>>>  drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h |  38 ++
> >>>>>  include/uapi/drm/xen_zcopy_drm.h            | 129 ++++
> >>>>>  8 files changed, 1264 insertions(+)
> >>>>>  create mode 100644 Documentation/gpu/xen-zcopy.rst
> >>>>>  create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy.c
> >>>>>  create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c
> >>>>>  create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h
> >>>>>  create mode 100644 include/uapi/drm/xen_zcopy_drm.h
> >>>>>
> >>>>[1]
> >>>>https://lists.xenproject.org/archives/html/xen-devel/2018-02/msg01202.html
> >>>_______________________________________________
> >>>dri-devel mailing list
> >>>dri-devel@lists.freedesktop.org
> >>>https://lists.freedesktop.org/mailman/listinfo/dri-devel
> >>-- 
> >>Daniel Vetter
> >>Software Engineer, Intel Corporation
> >>http://blog.ffwll.ch
> 
> [1] https://elixir.bootlin.com/linux/v4.17-rc1/source/include/xen/interface/io/displif.h
> [2] https://elixir.bootlin.com/linux/v4.17-rc1/source/include/xen/interface/io/displif.h#L539
> [3] https://elixir.bootlin.com/linux/v4.17-rc1/source/drivers/gpu/drm/drm_prime.c#L39
> [4] https://elixir.bootlin.com/linux/v4.17-rc1/source/drivers/xen/gntdev.c
> [5]
> https://elixir.bootlin.com/linux/v4.17-rc1/source/include/uapi/xen/gntdev.h
> [6] https://elixir.bootlin.com/linux/v4.17-rc1/source/drivers/xen/balloon.c

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-18  6:38         ` Oleksandr Andrushchenko
  2018-04-18  7:35           ` Roger Pau Monné
  2018-04-18  7:35             ` Roger Pau Monné
@ 2018-04-18 17:01           ` Dongwon Kim
  2018-04-18 17:01             ` Dongwon Kim
  3 siblings, 0 replies; 131+ messages in thread
From: Dongwon Kim @ 2018-04-18 17:01 UTC (permalink / raw)
  To: Oleksandr Andrushchenko
  Cc: jgross, Artem Mygaiev, airlied, Oleksandr_Andrushchenko,
	linux-kernel, dri-devel, Potrola, MateuszX, xen-devel,
	daniel.vetter, boris.ostrovsky, Matt Roper

On Wed, Apr 18, 2018 at 09:38:39AM +0300, Oleksandr Andrushchenko wrote:
> On 04/17/2018 11:57 PM, Dongwon Kim wrote:
> >On Tue, Apr 17, 2018 at 09:59:28AM +0200, Daniel Vetter wrote:
> >>On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
> >>>Yeah, I definitely agree on the idea of expanding the use case to the
> >>>general domain where dmabuf sharing is used. However, what you are
> >>>targetting with proposed changes is identical to the core design of
> >>>hyper_dmabuf.
> >>>
> >>>On top of this basic functionalities, hyper_dmabuf has driver level
> >>>inter-domain communication, that is needed for dma-buf remote tracking
> >>>(no fence forwarding though), event triggering and event handling, extra
> >>>meta data exchange and hyper_dmabuf_id that represents grefs
> >>>(grefs are shared implicitly on driver level)
> >>This really isn't a positive design aspect of hyperdmabuf imo. The core
> >>code in xen-zcopy (ignoring the ioctl side, which will be cleaned up) is
> >>very simple & clean.
> >>
> >>If there's a clear need later on we can extend that. But for now xen-zcopy
> >>seems to cover the basic use-case needs, so gets the job done.
> >>
> >>>Also it is designed with frontend (common core framework) + backend
> >>>(hyper visor specific comm and memory sharing) structure for portability.
> >>>We just can't limit this feature to Xen because we want to use the same
> >>>uapis not only for Xen but also other applicable hypervisor, like ACORN.
> >>See the discussion around udmabuf and the needs for kvm. I think trying to
> >>make an ioctl/uapi that works for multiple hypervisors is misguided - it
> >>likely won't work.
> >>
> >>On top of that the 2nd hypervisor you're aiming to support is ACRN. That's
> >>not even upstream yet, nor have I seen any patches proposing to land linux
> >>support for ACRN. Since it's not upstream, it doesn't really matter for
> >>upstream consideration. I'm doubting that ACRN will use the same grant
> >>references as xen, so the same uapi won't work on ACRN as on Xen anyway.
> >Yeah, ACRN doesn't have grant-table. Only Xen supports it. But that is why
> >hyper_dmabuf has been architectured with the concept of backend.
> >If you look at the structure of backend, you will find that
> >backend is just a set of standard function calls as shown here:
> >
> >struct hyper_dmabuf_bknd_ops {
> >         /* backend initialization routine (optional) */
> >         int (*init)(void);
> >
> >         /* backend cleanup routine (optional) */
> >         int (*cleanup)(void);
> >
> >         /* retreiving id of current virtual machine */
> >         int (*get_vm_id)(void);
> >
> >         /* get pages shared via hypervisor-specific method */
> >         int (*share_pages)(struct page **pages, int vm_id,
> >                            int nents, void **refs_info);
> >
> >         /* make shared pages unshared via hypervisor specific method */
> >         int (*unshare_pages)(void **refs_info, int nents);
> >
> >         /* map remotely shared pages on importer's side via
> >          * hypervisor-specific method
> >          */
> >         struct page ** (*map_shared_pages)(unsigned long ref, int vm_id,
> >                                            int nents, void **refs_info);
> >
> >         /* unmap and free shared pages on importer's side via
> >          * hypervisor-specific method
> >          */
> >         int (*unmap_shared_pages)(void **refs_info, int nents);
> >
> >         /* initialize communication environment */
> >         int (*init_comm_env)(void);
> >
> >         void (*destroy_comm)(void);
> >
> >         /* upstream ch setup (receiving and responding) */
> >         int (*init_rx_ch)(int vm_id);
> >
> >         /* downstream ch setup (transmitting and parsing responses) */
> >         int (*init_tx_ch)(int vm_id);
> >
> >         int (*send_req)(int vm_id, struct hyper_dmabuf_req *req, int wait);
> >};
> >
> >All of these can be mapped with any hypervisor specific implementation.
> >We designed backend implementation for Xen using grant-table, Xen event
> >and ring buffer communication. For ACRN, we have another backend using Virt-IO
> >for both memory sharing and communication.
> >
> >We tried to define this structure of backend to make it general enough (or
> >it can be even modified or extended to support more cases.) so that it can
> >fit to other hypervisor cases. Only requirements/expectation on the hypervisor
> >are page-level memory sharing and inter-domain communication, which I think
> >are standard features of modern hypervisor.
> >
> >And please review common UAPIs that hyper_dmabuf and xen-zcopy supports. They
> >are very general. One is getting FD (dmabuf) and get those shared. The other
> >is generating dmabuf from global handle (secure handle hiding gref behind it).
> >On top of this, hyper_dmabuf has "unshare" and "query" which are also useful
> >for any cases.
> >
> >So I don't know why we wouldn't want to try to make these standard in most of
> >hypervisor cases instead of limiting it to certain hypervisor like Xen.
> >Frontend-backend structre is optimal for this I think.
> >
> >>>So I am wondering we can start with this hyper_dmabuf then modify it for
> >>>your use-case if needed and polish and fix any glitches if we want to
> >>>to use this for all general dma-buf usecases.
> >>Imo xen-zcopy is a much more reasonable starting point for upstream, which
> >>can then be extended (if really proven to be necessary).
> >>
> >>>Also, I still have one unresolved question regarding the export/import flow
> >>>in both of hyper_dmabuf and xen-zcopy.
> >>>
> >>>@danvet: Would this flow (guest1->import existing dmabuf->share underlying
> >>>pages->guest2->map shared pages->create/export dmabuf) be acceptable now?
> >>I think if you just look at the pages, and make sure you handle the
> >>sg_page == NULL case it's ok-ish. It's not great, but mostly it should
> >>work. The real trouble with hyperdmabuf was the forwarding of all these
> >>calls, instead of just passing around a list of grant references.
> >I talked to danvet about this litte bit.
> >
> >I think there was some misunderstanding on this "forwarding". Exporting
> >and importing flow in hyper_dmabuf are basically same as xen-zcopy's. I think
> >what made confusion was that importing domain notifies exporting domain when
> >there are dmabuf operations (like attach, mapping, detach and release) so that
> >exporting domain can track the usage of dmabuf on the importing domain.
> >
> >I designed this for some basic tracking. We may not need to notify for every
> >different activity but if none of them is there, exporting domain can't
> >determine if it is ok to unshare the buffer or the originator (like i915)
> >can free the object even if it's being accessed in importing domain.
> >
> >Anyway I really hope we can have enough discussion and resolve all concerns
> >before nailing it down.
> Let me explain how this works in case of para-virtual display
> use-case with xen-zcopy.
> 
> 1. There are 4 components in the system:
>   - displif protocol [1]
>   - xen-front - para-virtual DRM driver running in DomU (Guest) VM
>   - backend - user-space application running in Dom0
>   - xen-zcopy - DRM (as of now) helper driver running in Dom0
> 
> 2. All the communication between domains happens between xen-front and the
> backend, so it is possible to implement para-virtual display use-case
> without xen-zcopy at all (this is why it is a helper driver), but in this
> case
> memory copying occurs (this is out of scope for this discussion).
> 
> 3. To better understand security issues let's see what use-cases we have:
> 
> 3.1 xen-front exports its dma-buf (dumb) to the backend
> 
> In this case there are no security issues at all as Dom0 (backend side)
> will use DomU's pages (xen-front side) and Dom0 is a trusted domain, so
> we assume it won't hurt DomU. Even if DomU dies nothing bad happens to Dom0.
> If DomU misbehaves it can only write to its own pages shared with Dom0, but
> still
> cannot go beyond that, e.g. it can't access Dom0's memory.
> 
> 3.2 Backend exports dma-buf to xen-front
> 
> In this case Dom0 pages are shared with DomU. As before, DomU can only write
> to these pages, not any other page from Dom0, so it can be still considered
> safe.
> But, the following must be considered (highlighted in xen-front's Kernel
> documentation):
>  - If guest domain dies then pages/grants received from the backend cannot
>    be claimed back - think of it as memory lost to Dom0 (won't be used for
> any
>    other guest)
>  - Misbehaving guest may send too many requests to the backend exhausting
>    its grant references and memory (consider this from security POV). As the
>    backend runs in the trusted domain we also assume that it is trusted as
> well,
>    e.g. must take measures to prevent DDoS attacks.
> 

There is another security issue that this driver itself can cause. Using the
grant reference as-is is not very safe, because it is easy to guess (probably
by simply counting), and any attacker running in the same importing domain can
use such references to map the shared pages and access the data. This is why we
implemented "hyper_dmabuf_id", which contains a 96-bit random number to make it
almost impossible to guess. All grant references for pages are shared at the
driver level. This is another reason for having inter-VM communication.

> 4. xen-front/backend/xen-zcopy synchronization
> 
> 4.1. As I already said in 2) all the inter VM communication happens between
> xen-front and the backend, xen-zcopy is NOT involved in that.

Yeah, understood, but this is also my point. Both hyper_dmabuf and xen-zcopy
are drivers that expand dmabuf sharing to the inter-VM level. Then shouldn't
such a driver itself provide some way to synchronize between the two VMs? I
think the assumption here is that the Xen PV display interface and the backend
(running in user space) are always used together with xen-zcopy, but what if a
user space application just wants to use xen-zcopy separately? Since it exposes
ioctls, this is possible unless you add some dependency configuration there.

> When xen-front wants to destroy a display buffer (dumb/dma-buf) it issues a
> XENDISPL_OP_DBUF_DESTROY command (opposite to XENDISPL_OP_DBUF_CREATE).
> This call is synchronous, so xen-front expects that backend does free the
> buffer pages on return.

Does it mean the importing domain (dom0, assuming we do domU -> dom0 dmabuf
exporting) makes a destroy request to the exporting VM? But isn't it up to
domU to make such a decision, since it is the owner of the buffer?

And what about the other way around? For example, what happens if the
originator of the buffer (like i915) decides to free the object behind the
dmabuf? Would i915 or the exporting side of xen-zcopy know whether dom0
currently uses the dmabuf or not?

And again, I think this tracking should be handled in the driver itself,
implicitly and without any userspace involvement, if we want this dmabuf
sharing to exist as a generic feature.

> 
> 4.2. Backend, on XENDISPL_OP_DBUF_DESTROY:
>   - closes all dumb handles/fd's of the buffer according to [3]
>   - issues DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL to xen-zcopy to make
> sure
>     the buffer is freed (think of it as it waits for dma-buf->release
> callback)
>   - replies to xen-front that the buffer can be destroyed.
> This way deletion of the buffer happens synchronously on both Dom0 and DomU
> sides. In case if DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE returns with time-out
> error
> (BTW, wait time is a parameter of this IOCTL), Xen will defer grant
> reference
> removal and will retry later until those are free.
> 
> Hope this helps understand how buffers are synchronously deleted in case
> of xen-zcopy with a single protocol command.
> 
> I think the above logic can also be re-used by the hyper-dmabuf driver with
> some additional work:
> 
> 1. xen-zcopy can be split into 2 parts and extend:
> 1.1. Xen gntdev driver [4], [5] to allow creating dma-buf from grefs and
> vise versa,
> implement "wait" ioctl (wait for dma-buf->release): currently these are
> DRM_XEN_ZCOPY_DUMB_FROM_REFS, DRM_XEN_ZCOPY_DUMB_TO_REFS and
> DRM_XEN_ZCOPY_DUMB_WAIT_FREE
> 1.2. Xen balloon driver [6] to allow allocating contiguous buffers (not
> needed
> by current hyper-dmabuf, but is a must for xen-zcopy use-cases)

Not sure how to match our use case to xen-zcopy's case, but we don't do
alloc/free all the time. Also, dom0 won't make any freeing request to domU,
since it doesn't own the buffer. It only follows the dmabuf protocol
(attach/detach/release), which is tracked by domU (the exporting VM). And for
destruction of the sharing we have a separate IOCTL, which revokes the grant
references "IF" there are no drivers attached to the dmabuf in dom0.
Otherwise, it defers destruction of the sharing until it gets the final
dmabuf release message from dom0.

Also, in our use case (although we didn't intend it to work this way) it ends
up reusing the same 3~4 buffers repeatedly. This is because DRM in domU (which
renders) doesn't allocate more objects for EGL images while there are already
free, previously used objects in the list. And we actually don't do full-path
exporting (extracting pages -> grant references -> getting those shared) all
the time. If the same dmabuf has already been exported, we just update the
private message and notify dom0 (this is the reason for the hash tables
keeping exported and imported dmabufs).

> 
> 2. Then hyper-dmabuf uses Xen gntdev driver for Xen specific dma-buf
> alloc/free/wait
> 
> 3. hyper-dmabuf uses its own protocol between VMs to communicate buffer
> creation/deletion and whatever else is needed (fences?).
> 
> To Xen community: please think of dma-buf here as of a buffer representation
> mechanism,
> e.g. at the end of the day it's just a set of pages.
> 
> Thank you,
> Oleksandr
> >>-Daniel
> >>
> >>>Regards,
> >>>DW
> >>>On Mon, Apr 16, 2018 at 05:33:46PM +0300, Oleksandr Andrushchenko wrote:
> >>>>Hello, all!
> >>>>
> >>>>After discussing xen-zcopy and hyper-dmabuf [1] approaches
> >>>>
> >>>>it seems that xen-zcopy can be made not depend on DRM core any more
> >>>>
> >>>>and be dma-buf centric (which it in fact is).
> >>>>
> >>>>The DRM code was mostly there for dma-buf's FD import/export
> >>>>
> >>>>with DRM PRIME UAPI and with DRM use-cases in mind, but it comes out that if
> >>>>
> >>>>the proposed 2 IOCTLs (DRM_XEN_ZCOPY_DUMB_FROM_REFS and
> >>>>DRM_XEN_ZCOPY_DUMB_TO_REFS)
> >>>>
> >>>>are extended to also provide a file descriptor of the corresponding dma-buf,
> >>>>then
> >>>>
> >>>>PRIME stuff in the driver is not needed anymore.
> >>>>
> >>>>That being said, xen-zcopy can safely be detached from DRM and moved from
> >>>>
> >>>>drivers/gpu/drm/xen into drivers/xen/dma-buf-backend(?).
> >>>>
> >>>>This driver then becomes a universal way to turn any shared buffer between
> >>>>Dom0/DomD
> >>>>
> >>>>and DomU(s) into a dma-buf, e.g. one can create a dma-buf from any grant
> >>>>references
> >>>>
> >>>>or represent a dma-buf as grant-references for export.
> >>>>
> >>>>This way the driver can be used not only for DRM use-cases, but also for
> >>>>other
> >>>>
> >>>>use-cases which may require zero copying between domains.
> >>>>
> >>>>For example, the use-cases we are about to work in the nearest future will
> >>>>use
> >>>>
> >>>>V4L, e.g. we plan to support cameras, codecs etc. and all these will benefit
> >>>>
> >>>>from zero copying much. Potentially, even block/net devices may benefit,
> >>>>
> >>>>but this needs some evaluation.
> >>>>
> >>>>
> >>>>I would love to hear comments for authors of the hyper-dmabuf
> >>>>
> >>>>and Xen community, as well as DRI-Devel and other interested parties.
> >>>>
> >>>>
> >>>>Thank you,
> >>>>
> >>>>Oleksandr
> >>>>
> >>>>
> >>>>On 03/29/2018 04:19 PM, Oleksandr Andrushchenko wrote:
> >>>>>From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> >>>>>
> >>>>>Hello!
> >>>>>
> >>>>>When using Xen PV DRM frontend driver then on backend side one will need
> >>>>>to do copying of display buffers' contents (filled by the
> >>>>>frontend's user-space) into buffers allocated at the backend side.
> >>>>>Taking into account the size of display buffers and frames per seconds
> >>>>>it may result in unneeded huge data bus occupation and performance loss.
> >>>>>
> >>>>>This helper driver allows implementing zero-copying use-cases
> >>>>>when using Xen para-virtualized frontend display driver by
> >>>>>implementing a DRM/KMS helper driver running on backend's side.
> >>>>>It utilizes PRIME buffers API to share frontend's buffers with
> >>>>>physical device drivers on backend's side:
> >>>>>
> >>>>>  - a dumb buffer created on backend's side can be shared
> >>>>>    with the Xen PV frontend driver, so it directly writes
> >>>>>    into backend's domain memory (into the buffer exported from
> >>>>>    DRM/KMS driver of a physical display device)
> >>>>>  - a dumb buffer allocated by the frontend can be imported
> >>>>>    into physical device DRM/KMS driver, thus allowing to
> >>>>>    achieve no copying as well
> >>>>>
> >>>>>For that reason number of IOCTLs are introduced:
> >>>>>  -  DRM_XEN_ZCOPY_DUMB_FROM_REFS
> >>>>>     This will create a DRM dumb buffer from grant references provided
> >>>>>     by the frontend
> >>>>>  - DRM_XEN_ZCOPY_DUMB_TO_REFS
> >>>>>    This will grant references to a dumb/display buffer's memory provided
> >>>>>    by the backend
> >>>>>  - DRM_XEN_ZCOPY_DUMB_WAIT_FREE
> >>>>>    This will block until the dumb buffer with the wait handle provided
> >>>>>    be freed
> >>>>>
> >>>>>With this helper driver I was able to drop CPU usage from 17% to 3%
> >>>>>on Renesas R-Car M3 board.
> >>>>>
> >>>>>This was tested with Renesas' Wayland-KMS and backend running as DRM master.
> >>>>>
> >>>>>Thank you,
> >>>>>Oleksandr
> >>>>>
> >>>>>Oleksandr Andrushchenko (1):
> >>>>>   drm/xen-zcopy: Add Xen zero-copy helper DRM driver
> >>>>>
> >>>>>  Documentation/gpu/drivers.rst               |   1 +
> >>>>>  Documentation/gpu/xen-zcopy.rst             |  32 +
> >>>>>  drivers/gpu/drm/xen/Kconfig                 |  25 +
> >>>>>  drivers/gpu/drm/xen/Makefile                |   5 +
> >>>>>  drivers/gpu/drm/xen/xen_drm_zcopy.c         | 880 ++++++++++++++++++++++++++++
> >>>>>  drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c | 154 +++++
> >>>>>  drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h |  38 ++
> >>>>>  include/uapi/drm/xen_zcopy_drm.h            | 129 ++++
> >>>>>  8 files changed, 1264 insertions(+)
> >>>>>  create mode 100644 Documentation/gpu/xen-zcopy.rst
> >>>>>  create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy.c
> >>>>>  create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c
> >>>>>  create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h
> >>>>>  create mode 100644 include/uapi/drm/xen_zcopy_drm.h
> >>>>>
> >>>>[1]
> >>>>https://lists.xenproject.org/archives/html/xen-devel/2018-02/msg01202.html
> >>>_______________________________________________
> >>>dri-devel mailing list
> >>>dri-devel@lists.freedesktop.org
> >>>https://lists.freedesktop.org/mailman/listinfo/dri-devel
> >>-- 
> >>Daniel Vetter
> >>Software Engineer, Intel Corporation
> >>http://blog.ffwll.ch
> 
> [1] https://elixir.bootlin.com/linux/v4.17-rc1/source/include/xen/interface/io/displif.h
> [2] https://elixir.bootlin.com/linux/v4.17-rc1/source/include/xen/interface/io/displif.h#L539
> [3] https://elixir.bootlin.com/linux/v4.17-rc1/source/drivers/gpu/drm/drm_prime.c#L39
> [4] https://elixir.bootlin.com/linux/v4.17-rc1/source/drivers/xen/gntdev.c
> [5]
> https://elixir.bootlin.com/linux/v4.17-rc1/source/include/uapi/xen/gntdev.h
> [6] https://elixir.bootlin.com/linux/v4.17-rc1/source/drivers/xen/balloon.c

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-18 17:01             ` Dongwon Kim
@ 2018-04-19  8:14               ` Oleksandr Andrushchenko
  -1 siblings, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-19  8:14 UTC (permalink / raw)
  To: Dongwon Kim
  Cc: Oleksandr_Andrushchenko, jgross, Artem Mygaiev, konrad.wilk,
	airlied, linux-kernel, dri-devel, Potrola, MateuszX,
	daniel.vetter, xen-devel, boris.ostrovsky, Matt Roper

On 04/18/2018 08:01 PM, Dongwon Kim wrote:
> On Wed, Apr 18, 2018 at 09:38:39AM +0300, Oleksandr Andrushchenko wrote:
>> On 04/17/2018 11:57 PM, Dongwon Kim wrote:
>>> On Tue, Apr 17, 2018 at 09:59:28AM +0200, Daniel Vetter wrote:
>>>> On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
>>>>> Yeah, I definitely agree on the idea of expanding the use case to the
>>>>> general domain where dmabuf sharing is used. However, what you are
>>>>> targetting with proposed changes is identical to the core design of
>>>>> hyper_dmabuf.
>>>>>
>>>>> On top of this basic functionalities, hyper_dmabuf has driver level
>>>>> inter-domain communication, that is needed for dma-buf remote tracking
>>>>> (no fence forwarding though), event triggering and event handling, extra
>>>>> meta data exchange and hyper_dmabuf_id that represents grefs
>>>>> (grefs are shared implicitly on driver level)
>>>> This really isn't a positive design aspect of hyperdmabuf imo. The core
>>>> code in xen-zcopy (ignoring the ioctl side, which will be cleaned up) is
>>>> very simple & clean.
>>>>
>>>> If there's a clear need later on we can extend that. But for now xen-zcopy
>>>> seems to cover the basic use-case needs, so gets the job done.
>>>>
>>>>> Also it is designed with frontend (common core framework) + backend
>>>>> (hyper visor specific comm and memory sharing) structure for portability.
>>>>> We just can't limit this feature to Xen because we want to use the same
>>>>> uapis not only for Xen but also other applicable hypervisor, like ACORN.
>>>> See the discussion around udmabuf and the needs for kvm. I think trying to
>>>> make an ioctl/uapi that works for multiple hypervisors is misguided - it
>>>> likely won't work.
>>>>
>>>> On top of that the 2nd hypervisor you're aiming to support is ACRN. That's
>>>> not even upstream yet, nor have I seen any patches proposing to land linux
>>>> support for ACRN. Since it's not upstream, it doesn't really matter for
>>>> upstream consideration. I'm doubting that ACRN will use the same grant
>>>> references as xen, so the same uapi won't work on ACRN as on Xen anyway.
>>> Yeah, ACRN doesn't have grant-table. Only Xen supports it. But that is why
>>> hyper_dmabuf has been architected with the concept of backend.
>>> If you look at the structure of backend, you will find that
>>> backend is just a set of standard function calls as shown here:
>>>
>>> struct hyper_dmabuf_bknd_ops {
>>>          /* backend initialization routine (optional) */
>>>          int (*init)(void);
>>>
>>>          /* backend cleanup routine (optional) */
>>>          int (*cleanup)(void);
>>>
>>>          /* retrieving id of current virtual machine */
>>>          int (*get_vm_id)(void);
>>>
>>>          /* get pages shared via hypervisor-specific method */
>>>          int (*share_pages)(struct page **pages, int vm_id,
>>>                             int nents, void **refs_info);
>>>
>>>          /* make shared pages unshared via hypervisor specific method */
>>>          int (*unshare_pages)(void **refs_info, int nents);
>>>
>>>          /* map remotely shared pages on importer's side via
>>>           * hypervisor-specific method
>>>           */
>>>          struct page ** (*map_shared_pages)(unsigned long ref, int vm_id,
>>>                                             int nents, void **refs_info);
>>>
>>>          /* unmap and free shared pages on importer's side via
>>>           * hypervisor-specific method
>>>           */
>>>          int (*unmap_shared_pages)(void **refs_info, int nents);
>>>
>>>          /* initialize communication environment */
>>>          int (*init_comm_env)(void);
>>>
>>>          void (*destroy_comm)(void);
>>>
>>>          /* upstream ch setup (receiving and responding) */
>>>          int (*init_rx_ch)(int vm_id);
>>>
>>>          /* downstream ch setup (transmitting and parsing responses) */
>>>          int (*init_tx_ch)(int vm_id);
>>>
>>>          int (*send_req)(int vm_id, struct hyper_dmabuf_req *req, int wait);
>>> };
>>>
>>> All of these can be mapped with any hypervisor specific implementation.
>>> We designed backend implementation for Xen using grant-table, Xen event
>>> and ring buffer communication. For ACRN, we have another backend using Virt-IO
>>> for both memory sharing and communication.
>>>
>>> We tried to define this structure of backend to make it general enough (or
>>> it can be even modified or extended to support more cases.) so that it can
>>> fit to other hypervisor cases. Only requirements/expectation on the hypervisor
>>> are page-level memory sharing and inter-domain communication, which I think
>>> are standard features of modern hypervisor.
>>>
>>> And please review common UAPIs that hyper_dmabuf and xen-zcopy supports. They
>>> are very general. One is getting FD (dmabuf) and get those shared. The other
>>> is generating dmabuf from global handle (secure handle hiding gref behind it).
>>> On top of this, hyper_dmabuf has "unshare" and "query" which are also useful
>>> for any cases.
>>>
>>> So I don't know why we wouldn't want to try to make these standard in most of
>>> hypervisor cases instead of limiting it to certain hypervisor like Xen.
>>> Frontend-backend structure is optimal for this I think.
>>>
>>>>> So I am wondering we can start with this hyper_dmabuf then modify it for
>>>>> your use-case if needed and polish and fix any glitches if we want to
>>>>> to use this for all general dma-buf usecases.
>>>> Imo xen-zcopy is a much more reasonable starting point for upstream, which
>>>> can then be extended (if really proven to be necessary).
>>>>
>>>>> Also, I still have one unresolved question regarding the export/import flow
>>>>> in both of hyper_dmabuf and xen-zcopy.
>>>>>
>>>>> @danvet: Would this flow (guest1->import existing dmabuf->share underlying
>>>>> pages->guest2->map shared pages->create/export dmabuf) be acceptable now?
>>>> I think if you just look at the pages, and make sure you handle the
>>>> sg_page == NULL case it's ok-ish. It's not great, but mostly it should
>>>> work. The real trouble with hyperdmabuf was the forwarding of all these
>>>> calls, instead of just passing around a list of grant references.
>>> I talked to danvet about this little bit.
>>>
>>> I think there was some misunderstanding on this "forwarding". Exporting
>>> and importing flow in hyper_dmabuf are basically same as xen-zcopy's. I think
>>> what made confusion was that importing domain notifies exporting domain when
>>> there are dmabuf operations (like attach, mapping, detach and release) so that
>>> exporting domain can track the usage of dmabuf on the importing domain.
>>>
>>> I designed this for some basic tracking. We may not need to notify for every
>>> different activity but if none of them is there, exporting domain can't
>>> determine if it is ok to unshare the buffer or the originator (like i915)
>>> can free the object even if it's being accessed in importing domain.
>>>
>>> Anyway I really hope we can have enough discussion and resolve all concerns
>>> before nailing it down.
>> Let me explain how this works in case of para-virtual display
>> use-case with xen-zcopy.
>>
>> 1. There are 4 components in the system:
>>    - displif protocol [1]
>>    - xen-front - para-virtual DRM driver running in DomU (Guest) VM
>>    - backend - user-space application running in Dom0
>>    - xen-zcopy - DRM (as of now) helper driver running in Dom0
>>
>> 2. All the communication between domains happens between xen-front and the
>> backend, so it is possible to implement para-virtual display use-case
>> without xen-zcopy at all (this is why it is a helper driver), but in this
>> case
>> memory copying occurs (this is out of scope for this discussion).
>>
>> 3. To better understand security issues let's see what use-cases we have:
>>
>> 3.1 xen-front exports its dma-buf (dumb) to the backend
>>
>> In this case there are no security issues at all as Dom0 (backend side)
>> will use DomU's pages (xen-front side) and Dom0 is a trusted domain, so
>> we assume it won't hurt DomU. Even if DomU dies nothing bad happens to Dom0.
>> If DomU misbehaves it can only write to its own pages shared with Dom0, but
>> still
>> cannot go beyond that, e.g. it can't access Dom0's memory.
>>
>> 3.2 Backend exports dma-buf to xen-front
>>
>> In this case Dom0 pages are shared with DomU. As before, DomU can only write
>> to these pages, not any other page from Dom0, so it can be still considered
>> safe.
>> But, the following must be considered (highlighted in xen-front's Kernel
>> documentation):
>>   - If guest domain dies then pages/grants received from the backend cannot
>>     be claimed back - think of it as memory lost to Dom0 (won't be used for
>> any
>>     other guest)
>>   - Misbehaving guest may send too many requests to the backend exhausting
>>     its grant references and memory (consider this from security POV). As the
>>     backend runs in the trusted domain we also assume that it is trusted as
>> well,
>>     e.g. must take measures to prevent DDoS attacks.
>>
> There is another security issue that this driver itself can cause. Using the
> grant-reference as is is not very safe because it's easy to guess (counting
> number probably) and any attackers running on the same importing domain can
> use these references to map shared pages and access the data. This is why we
> implemented "hyper_dmabuf_id" that contains 96 bit random number to make it
> almost impossible to guess.
Yes, there is something to think about here in general, although it is not
related to dma-buf/zcopy as such. This is a question to the Xen community as to
what they see as the right approach (a small sketch of the random-id idea is
below, just for reference).
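Just to make the discussion concrete, a hedged sketch of such a hard-to-guess
identifier - illustrative names only, not the actual hyper_dmabuf structures:

#include <linux/random.h>
#include <linux/types.h>

struct remote_buf_id {
        u32 vm_id;              /* exporting domain */
        u32 rng_key[3];         /* 96 random bits, hard to guess */
};

static void remote_buf_id_init(struct remote_buf_id *id, u32 vm_id)
{
        id->vm_id = vm_id;
        get_random_bytes(id->rng_key, sizeof(id->rng_key));
}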
>   All grant references for pages are shared in the
> driver level. This is another reason for having inter-VM comm.
>
>> 4. xen-front/backend/xen-zcopy synchronization
>>
>> 4.1. As I already said in 2) all the inter VM communication happens between
>> xen-front and the backend, xen-zcopy is NOT involved in that.
> Yeah, understood but this is also my point. Both hyper_dmabuf and xen-zcopy
> is a driver that expands dmabuf sharing to inter-VM level. Then shouldn't this
> driver itself provide some way to synchronize between two VMs?
No, because xen-zcopy is a *helper* driver, not more.
>   I think the
> assumption behind this is that Xen PV display interface and backend (running
> on the userspace) are used together with xen-zcopy
The backend may or may not use xen-zcopy - it depends on whether you need
zero copy; it is not a must for the backend
> but what if an user space
> just want to use xen-zcopy separately? Since it exposes ioctls, this is
> possible unless you add some dependency configuration there.
It is possible: any backend (user-space application) can use xen-zcopy
directly via its IOCTLs (see the sketch below).
Even more, one can extend it to provide a kernel-side API
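A hedged user-space sketch of that, assuming the uapi from this patch; the
struct layout below is only my guess from the cover letter (a wait handle plus
a timeout) and needs the patch's xen_zcopy_drm.h header to actually build:

#include <stdint.h>
#include <sys/ioctl.h>
#include <xf86drm.h>
/* #include <drm/xen_zcopy_drm.h>  - the real uapi header from the patch */

struct xen_zcopy_wait_free {            /* assumed layout, for illustration */
        uint32_t wait_handle;           /* handle returned at buffer creation */
        uint32_t wait_to_ms;            /* how long to block */
};

static int wait_buffer_freed(int zcopy_fd, uint32_t wait_handle)
{
        struct xen_zcopy_wait_free req = {
                .wait_handle = wait_handle,
                .wait_to_ms  = 3000,
        };

        /* blocks until dma-buf ->release or until the timeout expires */
        return drmIoctl(zcopy_fd, DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE, &req);
}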
>
>> When xen-front wants to destroy a display buffer (dumb/dma-buf) it issues a
>> XENDISPL_OP_DBUF_DESTROY command (opposite to XENDISPL_OP_DBUF_CREATE).
>> This call is synchronous, so xen-front expects that backend does free the
>> buffer pages on return.
> Does it mean importing domain (dom0 assuming we do domU -> dom0 dmabuf
> exporting) makes a destroy request to the exporting VM?
No, the requester is always DomU, so the "destroy buffer" request
will always come from DomU
>   But isn't it
> the domU to make such decision since it's the owner of buffer.
See above
>
> And what about the other way around? For example, what happens if the
> originator of buffer (like i915) decides to free the object behind dmabuf?
For that reason there is ref-counting on the dma-buf: if i915 decides to free
it, the backend (in my case) still holds a reference to the buffer, thus not
allowing it to disappear (see the sketch below). Basically, it is the backend
which creates the dma-buf from refs and owns it.
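A minimal illustration of that point using the plain dma-buf core API (nothing
xen-zcopy specific here):

#include <linux/dma-buf.h>
#include <linux/err.h>

/* importer side: taking our own reference keeps the buffer alive */
static struct dma_buf *backend_hold_buf(int fd)
{
        struct dma_buf *buf = dma_buf_get(fd);  /* +1 reference */

        if (IS_ERR(buf))
                return buf;
        /*
         * Even if the original exporter drops all of its own references
         * now, the object stays around until we call dma_buf_put(buf).
         */
        return buf;
}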
> Would i915 or exporting side of xen-zcopy know whether dom0 currently
> uses the dmabuf or not?
Why do you need to know this (probably I don't understand the use-case)?
I may be stating the obvious here, but if the ref-count of the dma-buf is not
zero, then it still exists and is in use, right?
>
> And again, I think this tracking should be handled in the driver itself
> implicitly without any userspace involvement if we want to this dmabuf
> sharing exist as a generic feature.
Why not let the Linux dma-buf framework do that for you? (see the sketch below)
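That is, the exporter just registers its ops when creating the dma-buf and the
framework invokes ->release once the last reference is gone - a minimal sketch
using the generic dma-buf API (not xen-zcopy code):

#include <linux/dma-buf.h>
#include <linux/fcntl.h>

static struct dma_buf *export_with_tracking(void *driver_priv, size_t size,
                                            const struct dma_buf_ops *ops)
{
        DEFINE_DMA_BUF_EXPORT_INFO(exp_info);

        exp_info.ops   = ops;           /* ops->release is the "freed" hook */
        exp_info.size  = size;
        exp_info.flags = O_RDWR;
        exp_info.priv  = driver_priv;

        /*
         * The dma-buf core ref-counts the buffer and calls ops->release
         * when the last reference (fd, importer, ...) is dropped.
         */
        return dma_buf_export(&exp_info);
}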
>
>> 4.2. Backend, on XENDISPL_OP_DBUF_DESTROY:
>>    - closes all dumb handles/fd's of the buffer according to [3]
>>    - issues DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL to xen-zcopy to make
>> sure
>>      the buffer is freed (think of it as it waits for dma-buf->release
>> callback)
>>    - replies to xen-front that the buffer can be destroyed.
>> This way deletion of the buffer happens synchronously on both Dom0 and DomU
>> sides. In case if DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE returns with time-out
>> error
>> (BTW, wait time is a parameter of this IOCTL), Xen will defer grant
>> reference
>> removal and will retry later until those are free.
>>
>> Hope this helps understand how buffers are synchronously deleted in case
>> of xen-zcopy with a single protocol command.
>>
>> I think the above logic can also be re-used by the hyper-dmabuf driver with
>> some additional work:
>>
>> 1. xen-zcopy can be split into 2 parts and extend:
>> 1.1. Xen gntdev driver [4], [5] to allow creating dma-buf from grefs and
>> vice versa,
>> implement "wait" ioctl (wait for dma-buf->release): currently these are
>> DRM_XEN_ZCOPY_DUMB_FROM_REFS, DRM_XEN_ZCOPY_DUMB_TO_REFS and
>> DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>> 1.2. Xen balloon driver [6] to allow allocating contiguous buffers (not
>> needed
>> by current hyper-dmabuf, but is a must for xen-zcopy use-cases)
> Not sure how to match our use case to xen-zcopy's case but we don't do alloc
> /free all the time.
We also don't
>   Also, dom0 won't make any freeing request to domU since it
> doesn't own the buffer. It only follows dmabuf protocol as such attach/detach
> /release,
Similar here
>   which are tracked by domU (exporting VM). And for destruction of
> sharing, we have separate IOCTL for that, which revoke grant references "IF"
> there is no drivers attached to the dmabuf in dom0. Otherwise, it schedules
> destruction of sharing until it gets final dmabuf release message from dom0.
We block instead, with a 3 sec timeout + some other logic
(out of scope here); see the sketch below
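A hedged sketch of that "block until dma-buf release, with a timeout" logic
(illustrative names, not the actual xen-zcopy code): the release callback
signals a completion which the WAIT_FREE path waits on.

#include <linux/completion.h>
#include <linux/dma-buf.h>
#include <linux/errno.h>
#include <linux/jiffies.h>

struct zcopy_buf {
        struct completion released;
        /* ... */
};

static void zcopy_dmabuf_release(struct dma_buf *buf)
{
        struct zcopy_buf *zbuf = buf->priv;

        complete_all(&zbuf->released);  /* wakes the WAIT_FREE waiter */
}

static int zcopy_wait_free(struct zcopy_buf *zbuf, unsigned int to_ms)
{
        if (!wait_for_completion_timeout(&zbuf->released,
                                         msecs_to_jiffies(to_ms)))
                return -ETIMEDOUT;      /* grant ref removal is deferred/retried */
        return 0;
}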
>
> Also, in our usecase, (although we didn't intend to do so) it ends up using
> 3~4 buffers repeately.
2-3 in our use-cases
> This is because DRM in domU (that renders) doesn't
> allocate more object for EGL image since there is always free objects used
> before exist in the list. And we actually don't do full-path exporting
> (extracting pages -> grant-references -> get those shared) all the time.
> If the same dmabuf is exported already, we just update private message then
> notifies dom0 (reason for hash tables for keeping exported and importer
> dmabufs).
In my case these 2-3 buffers are allocated at start and not freed
until the end - they are used as frame buffers which are constantly
flipped. So, in my case there is not much profit in caching, which only
adds unneeded complexity (in my use-case, of course).
If those 3-4 buffers you allocate are the only buffers used, you may
also try going without caching, but this depends on your use-case

>> 2. Then hyper-dmabuf uses Xen gntdev driver for Xen specific dma-buf
>> alloc/free/wait
>>
>> 3. hyper-dmabuf uses its own protocol between VMs to communicate buffer
>> creation/deletion and whatever else is needed (fences?).
>>
>> To Xen community: please think of dma-buf here as of a buffer representation
>> mechanism,
>> e.g. at the end of the day it's just a set of pages.
>>
>> Thank you,
>> Oleksandr
>>>> -Daniel
>>>>
>>>>> Regards,
>>>>> DW
>>>>> On Mon, Apr 16, 2018 at 05:33:46PM +0300, Oleksandr Andrushchenko wrote:
>>>>>> Hello, all!
>>>>>>
>>>>>> After discussing xen-zcopy and hyper-dmabuf [1] approaches
>>>>>>
>>>>>> it seems that xen-zcopy can be made not depend on DRM core any more
>>>>>>
>>>>>> and be dma-buf centric (which it in fact is).
>>>>>>
>>>>>> The DRM code was mostly there for dma-buf's FD import/export
>>>>>>
>>>>>> with DRM PRIME UAPI and with DRM use-cases in mind, but it comes out that if
>>>>>>
>>>>>> the proposed 2 IOCTLs (DRM_XEN_ZCOPY_DUMB_FROM_REFS and
>>>>>> DRM_XEN_ZCOPY_DUMB_TO_REFS)
>>>>>>
>>>>>> are extended to also provide a file descriptor of the corresponding dma-buf,
>>>>>> then
>>>>>>
>>>>>> PRIME stuff in the driver is not needed anymore.
>>>>>>
>>>>>> That being said, xen-zcopy can safely be detached from DRM and moved from
>>>>>>
>>>>>> drivers/gpu/drm/xen into drivers/xen/dma-buf-backend(?).
>>>>>>
>>>>>> This driver then becomes a universal way to turn any shared buffer between
>>>>>> Dom0/DomD
>>>>>>
>>>>>> and DomU(s) into a dma-buf, e.g. one can create a dma-buf from any grant
>>>>>> references
>>>>>>
>>>>>> or represent a dma-buf as grant-references for export.
>>>>>>
>>>>>> This way the driver can be used not only for DRM use-cases, but also for
>>>>>> other
>>>>>>
>>>>>> use-cases which may require zero copying between domains.
>>>>>>
>>>>>> For example, the use-cases we are about to work in the nearest future will
>>>>>> use
>>>>>>
>>>>>> V4L, e.g. we plan to support cameras, codecs etc. and all these will benefit
>>>>>>
>>>>>> from zero copying much. Potentially, even block/net devices may benefit,
>>>>>> but this needs some evaluation.
>>>>>>
>>>>>>
>>>>>> I would love to hear comments for authors of the hyper-dmabuf
>>>>>>
>>>>>> and Xen community, as well as DRI-Devel and other interested parties.
>>>>>>
>>>>>>
>>>>>> Thank you,
>>>>>>
>>>>>> Oleksandr
>>>>>>
>>>>>>
>>>>>> On 03/29/2018 04:19 PM, Oleksandr Andrushchenko wrote:
>>>>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>>>>
>>>>>>> Hello!
>>>>>>>
>>>>>>> When using Xen PV DRM frontend driver then on backend side one will need
>>>>>>> to do copying of display buffers' contents (filled by the
>>>>>>> frontend's user-space) into buffers allocated at the backend side.
>>>>>>> Taking into account the size of display buffers and frames per seconds
>>>>>>> it may result in unneeded huge data bus occupation and performance loss.
>>>>>>>
>>>>>>> This helper driver allows implementing zero-copying use-cases
>>>>>>> when using Xen para-virtualized frontend display driver by
>>>>>>> implementing a DRM/KMS helper driver running on backend's side.
>>>>>>> It utilizes PRIME buffers API to share frontend's buffers with
>>>>>>> physical device drivers on backend's side:
>>>>>>>
>>>>>>>   - a dumb buffer created on backend's side can be shared
>>>>>>>     with the Xen PV frontend driver, so it directly writes
>>>>>>>     into backend's domain memory (into the buffer exported from
>>>>>>>     DRM/KMS driver of a physical display device)
>>>>>>>   - a dumb buffer allocated by the frontend can be imported
>>>>>>>     into physical device DRM/KMS driver, thus allowing to
>>>>>>>     achieve no copying as well
>>>>>>>
>>>>>>> For that reason number of IOCTLs are introduced:
>>>>>>>   -  DRM_XEN_ZCOPY_DUMB_FROM_REFS
>>>>>>>      This will create a DRM dumb buffer from grant references provided
>>>>>>>      by the frontend
>>>>>>>   - DRM_XEN_ZCOPY_DUMB_TO_REFS
>>>>>>>     This will grant references to a dumb/display buffer's memory provided
>>>>>>>     by the backend
>>>>>>>   - DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>>>>     This will block until the dumb buffer with the wait handle provided
>>>>>>>     be freed
>>>>>>>
>>>>>>> With this helper driver I was able to drop CPU usage from 17% to 3%
>>>>>>> on Renesas R-Car M3 board.
>>>>>>>
>>>>>>> This was tested with Renesas' Wayland-KMS and backend running as DRM master.
>>>>>>>
>>>>>>> Thank you,
>>>>>>> Oleksandr
>>>>>>>
>>>>>>> Oleksandr Andrushchenko (1):
>>>>>>>    drm/xen-zcopy: Add Xen zero-copy helper DRM driver
>>>>>>>
>>>>>>>   Documentation/gpu/drivers.rst               |   1 +
>>>>>>>   Documentation/gpu/xen-zcopy.rst             |  32 +
>>>>>>>   drivers/gpu/drm/xen/Kconfig                 |  25 +
>>>>>>>   drivers/gpu/drm/xen/Makefile                |   5 +
>>>>>>>   drivers/gpu/drm/xen/xen_drm_zcopy.c         | 880 ++++++++++++++++++++++++++++
>>>>>>>   drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c | 154 +++++
>>>>>>>   drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h |  38 ++
>>>>>>>   include/uapi/drm/xen_zcopy_drm.h            | 129 ++++
>>>>>>>   8 files changed, 1264 insertions(+)
>>>>>>>   create mode 100644 Documentation/gpu/xen-zcopy.rst
>>>>>>>   create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy.c
>>>>>>>   create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c
>>>>>>>   create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h
>>>>>>>   create mode 100644 include/uapi/drm/xen_zcopy_drm.h
>>>>>>>
>>>>>> [1]
>>>>>> https://lists.xenproject.org/archives/html/xen-devel/2018-02/msg01202.html
>>>>> _______________________________________________
>>>>> dri-devel mailing list
>>>>> dri-devel@lists.freedesktop.org
>>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>>>> -- 
>>>> Daniel Vetter
>>>> Software Engineer, Intel Corporation
>>>> http://blog.ffwll.ch
>> [1] https://elixir.bootlin.com/linux/v4.17-rc1/source/include/xen/interface/io/displif.h
>> [2] https://elixir.bootlin.com/linux/v4.17-rc1/source/include/xen/interface/io/displif.h#L539
>> [3] https://elixir.bootlin.com/linux/v4.17-rc1/source/drivers/gpu/drm/drm_prime.c#L39
>> [4] https://elixir.bootlin.com/linux/v4.17-rc1/source/drivers/xen/gntdev.c
>> [5]
>> https://elixir.bootlin.com/linux/v4.17-rc1/source/include/uapi/xen/gntdev.h
>> [6] https://elixir.bootlin.com/linux/v4.17-rc1/source/drivers/xen/balloon.c

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-18 17:01             ` Dongwon Kim
  (?)
@ 2018-04-19  8:14             ` Oleksandr Andrushchenko
  -1 siblings, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-19  8:14 UTC (permalink / raw)
  To: Dongwon Kim
  Cc: jgross, Artem Mygaiev, airlied, Oleksandr_Andrushchenko,
	linux-kernel, dri-devel, Potrola, MateuszX, xen-devel,
	daniel.vetter, boris.ostrovsky, Matt Roper

On 04/18/2018 08:01 PM, Dongwon Kim wrote:
> On Wed, Apr 18, 2018 at 09:38:39AM +0300, Oleksandr Andrushchenko wrote:
>> On 04/17/2018 11:57 PM, Dongwon Kim wrote:
>>> On Tue, Apr 17, 2018 at 09:59:28AM +0200, Daniel Vetter wrote:
>>>> On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
>>>>> Yeah, I definitely agree on the idea of expanding the use case to the
>>>>> general domain where dmabuf sharing is used. However, what you are
>>>>> targetting with proposed changes is identical to the core design of
>>>>> hyper_dmabuf.
>>>>>
>>>>> On top of this basic functionalities, hyper_dmabuf has driver level
>>>>> inter-domain communication, that is needed for dma-buf remote tracking
>>>>> (no fence forwarding though), event triggering and event handling, extra
>>>>> meta data exchange and hyper_dmabuf_id that represents grefs
>>>>> (grefs are shared implicitly on driver level)
>>>> This really isn't a positive design aspect of hyperdmabuf imo. The core
>>>> code in xen-zcopy (ignoring the ioctl side, which will be cleaned up) is
>>>> very simple & clean.
>>>>
>>>> If there's a clear need later on we can extend that. But for now xen-zcopy
>>>> seems to cover the basic use-case needs, so gets the job done.
>>>>
>>>>> Also it is designed with frontend (common core framework) + backend
>>>>> (hyper visor specific comm and memory sharing) structure for portability.
>>>>> We just can't limit this feature to Xen because we want to use the same
>>>>> uapis not only for Xen but also other applicable hypervisor, like ACORN.
>>>> See the discussion around udmabuf and the needs for kvm. I think trying to
>>>> make an ioctl/uapi that works for multiple hypervisors is misguided - it
>>>> likely won't work.
>>>>
>>>> On top of that the 2nd hypervisor you're aiming to support is ACRN. That's
>>>> not even upstream yet, nor have I seen any patches proposing to land linux
>>>> support for ACRN. Since it's not upstream, it doesn't really matter for
>>>> upstream consideration. I'm doubting that ACRN will use the same grant
>>>> references as xen, so the same uapi won't work on ACRN as on Xen anyway.
>>> Yeah, ACRN doesn't have grant-table. Only Xen supports it. But that is why
>>> hyper_dmabuf has been architected with the concept of backend.
>>> If you look at the structure of backend, you will find that
>>> backend is just a set of standard function calls as shown here:
>>>
>>> struct hyper_dmabuf_bknd_ops {
>>>          /* backend initialization routine (optional) */
>>>          int (*init)(void);
>>>
>>>          /* backend cleanup routine (optional) */
>>>          int (*cleanup)(void);
>>>
>>>          /* retrieving id of current virtual machine */
>>>          int (*get_vm_id)(void);
>>>
>>>          /* get pages shared via hypervisor-specific method */
>>>          int (*share_pages)(struct page **pages, int vm_id,
>>>                             int nents, void **refs_info);
>>>
>>>          /* make shared pages unshared via hypervisor specific method */
>>>          int (*unshare_pages)(void **refs_info, int nents);
>>>
>>>          /* map remotely shared pages on importer's side via
>>>           * hypervisor-specific method
>>>           */
>>>          struct page ** (*map_shared_pages)(unsigned long ref, int vm_id,
>>>                                             int nents, void **refs_info);
>>>
>>>          /* unmap and free shared pages on importer's side via
>>>           * hypervisor-specific method
>>>           */
>>>          int (*unmap_shared_pages)(void **refs_info, int nents);
>>>
>>>          /* initialize communication environment */
>>>          int (*init_comm_env)(void);
>>>
>>>          void (*destroy_comm)(void);
>>>
>>>          /* upstream ch setup (receiving and responding) */
>>>          int (*init_rx_ch)(int vm_id);
>>>
>>>          /* downstream ch setup (transmitting and parsing responses) */
>>>          int (*init_tx_ch)(int vm_id);
>>>
>>>          int (*send_req)(int vm_id, struct hyper_dmabuf_req *req, int wait);
>>> };
>>>
>>> All of these can be mapped with any hypervisor specific implementation.
>>> We designed backend implementation for Xen using grant-table, Xen event
>>> and ring buffer communication. For ACRN, we have another backend using Virt-IO
>>> for both memory sharing and communication.
>>>
>>> We tried to define this structure of backend to make it general enough (or
>>> it can be even modified or extended to support more cases.) so that it can
>>> fit to other hypervisor cases. Only requirements/expectation on the hypervisor
>>> are page-level memory sharing and inter-domain communication, which I think
>>> are standard features of modern hypervisor.
>>>
>>> And please review common UAPIs that hyper_dmabuf and xen-zcopy supports. They
>>> are very general. One is getting FD (dmabuf) and get those shared. The other
>>> is generating dmabuf from global handle (secure handle hiding gref behind it).
>>> On top of this, hyper_dmabuf has "unshare" and "query" which are also useful
>>> for any cases.
>>>
>>> So I don't know why we wouldn't want to try to make these standard in most of
>>> hypervisor cases instead of limiting it to certain hypervisor like Xen.
>>> Frontend-backend structure is optimal for this I think.
>>>
>>>>> So I am wondering we can start with this hyper_dmabuf then modify it for
>>>>> your use-case if needed and polish and fix any glitches if we want to
>>>>> to use this for all general dma-buf usecases.
>>>> Imo xen-zcopy is a much more reasonable starting point for upstream, which
>>>> can then be extended (if really proven to be necessary).
>>>>
>>>>> Also, I still have one unresolved question regarding the export/import flow
>>>>> in both of hyper_dmabuf and xen-zcopy.
>>>>>
>>>>> @danvet: Would this flow (guest1->import existing dmabuf->share underlying
>>>>> pages->guest2->map shared pages->create/export dmabuf) be acceptable now?
>>>> I think if you just look at the pages, and make sure you handle the
>>>> sg_page == NULL case it's ok-ish. It's not great, but mostly it should
>>>> work. The real trouble with hyperdmabuf was the forwarding of all these
>>>> calls, instead of just passing around a list of grant references.
>>> I talked to danvet about this litte bit.
>>>
>>> I think there was some misunderstanding on this "forwarding". Exporting
>>> and importing flow in hyper_dmabuf are basically same as xen-zcopy's. I think
>>> what made confusion was that importing domain notifies exporting domain when
>>> there are dmabuf operations (like attach, mapping, detach and release) so that
>>> exporting domain can track the usage of dmabuf on the importing domain.
>>>
>>> I designed this for some basic tracking. We may not need to notify for every
>>> different activity but if none of them is there, exporting domain can't
>>> determine if it is ok to unshare the buffer or the originator (like i915)
>>> can free the object even if it's being accessed in importing domain.
>>>
>>> Anyway I really hope we can have enough discussion and resolve all concerns
>>> before nailing it down.
>> Let me explain how this works in case of para-virtual display
>> use-case with xen-zcopy.
>>
>> 1. There are 4 components in the system:
>>    - displif protocol [1]
>>    - xen-front - para-virtual DRM driver running in DomU (Guest) VM
>>    - backend - user-space application running in Dom0
>>    - xen-zcopy - DRM (as of now) helper driver running in Dom0
>>
>> 2. All the communication between domains happens between xen-front and the
>> backend, so it is possible to implement para-virtual display use-case
>> without xen-zcopy at all (this is why it is a helper driver), but in this
>> case
>> memory copying occurs (this is out of scope for this discussion).
>>
>> 3. To better understand security issues let's see what use-cases we have:
>>
>> 3.1 xen-front exports its dma-buf (dumb) to the backend
>>
>> In this case there are no security issues at all as Dom0 (backend side)
>> will use DomU's pages (xen-front side) and Dom0 is a trusted domain, so
>> we assume it won't hurt DomU. Even if DomU dies nothing bad happens to Dom0.
>> If DomU misbehaves it can only write to its own pages shared with Dom0, but
>> still
>> cannot go beyond that, e.g. it can't access Dom0's memory.
>>
>> 3.2 Backend exports dma-buf to xen-front
>>
>> In this case Dom0 pages are shared with DomU. As before, DomU can only write
>> to these pages, not any other page from Dom0, so it can be still considered
>> safe.
>> But, the following must be considered (highlighted in xen-front's Kernel
>> documentation):
>>   - If guest domain dies then pages/grants received from the backend cannot
>>     be claimed back - think of it as memory lost to Dom0 (won't be used for
>> any
>>     other guest)
>>   - Misbehaving guest may send too many requests to the backend exhausting
>>     its grant references and memory (consider this from security POV). As the
>>     backend runs in the trusted domain we also assume that it is trusted as
>> well,
>>     e.g. must take measures to prevent DDoS attacks.
>>
> There is another security issue that this driver itself can cause. Using the
> grant-reference as is is not very safe because it's easy to guess (counting
> number probably) and any attackers running on the same importing domain can
> use these references to map shared pages and access the data. This is why we
> implemented "hyper_dmabuf_id" that contains 96 bit random number to make it
> almost impossible to guess.
Yes, there is something to think about in general here, not related
to dma-buf/zcopy. This is a question to the Xen community as to what they
see as the right approach here.
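Just to illustrate the idea of a hard-to-guess handle: something along these lines would be enough on the kernel side. The struct layout below is an assumption made for illustration, not the actual hyper_dmabuf_id definition:

#include <linux/random.h>
#include <linux/types.h>

/* Sketch: pair an exporter-local id with 96 bits of randomness so that
 * a handle cannot be guessed by simply counting.
 */
struct shared_buf_handle {
	int id;			/* exporter-local counter */
	u32 rng_key[3];		/* 96-bit random component */
};

static void shared_buf_handle_init(struct shared_buf_handle *h, int id)
{
	h->id = id;
	get_random_bytes(h->rng_key, sizeof(h->rng_key));
}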
>   All grant references for pages are shared in the
> driver level. This is another reason for having inter-VM comm.
>
>> 4. xen-front/backend/xen-zcopy synchronization
>>
>> 4.1. As I already said in 2) all the inter VM communication happens between
>> xen-front and the backend, xen-zcopy is NOT involved in that.
> Yeah, understood but this is also my point. Both hyper_dmabuf and xen-zcopy
> is a driver that expands dmabuf sharing to inter-VM level. Then shouldn't this
> driver itself provide some way to synchronize between two VMs?
No, because xen-zcopy is a *helper* driver, nothing more.
>   I think the
> assumption behind this is that Xen PV display interface and backend (running
> on the userspace) are used together with xen-zcopy
The backend may or may not use xen-zcopy - it depends on whether you need
zero copy or not, e.g. it is not a must for the backend
> but what if an user space
> just want to use xen-zcopy separately? Since it exposes ioctls, this is
> possible unless you add some dependency configuration there.
It is possible, any backend (user-space application) can use xen-zcopy.
Even more, one can extend it to provide a kernel-side API
>
>> When xen-front wants to destroy a display buffer (dumb/dma-buf) it issues a
>> XENDISPL_OP_DBUF_DESTROY command (opposite to XENDISPL_OP_DBUF_CREATE).
>> This call is synchronous, so xen-front expects that backend does free the
>> buffer pages on return.
> Does it mean importing domain (dom0 assuming we do domU -> dom0 dmabuf
> exporting) makes a destory request to the exporting VM?
No, the requester is always DomU, so the "destroy buffer" request
will always come from DomU
>   But isn't it
> the domU to make such decision since it's the owner of buffer.
See above
>
> And what about the other way around? For example, what happens if the
> originator of buffer (like i915) decides to free the object behind dmabuf?
For that reason there is ref-counting for the dma-buf, e.g.
if i915 decides to free it then the backend (in my case) still holds
the buffer, thus not allowing it to disappear. Basically, it is
the backend which creates the dma-buf from refs and owns it.
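For reference, this is plain dma-buf core behaviour; a minimal sketch of how the importing side pins the buffer using the standard dma-buf API (nothing xen-zcopy specific, function names here are illustrative):

#include <linux/dma-buf.h>
#include <linux/err.h>

/* Sketch: the importer takes its own reference on the dma-buf, so the
 * exporter's object cannot disappear while the import is alive; the
 * exporter's release callback only fires on the last dma_buf_put().
 */
static struct dma_buf *import_pin(int fd)
{
	/* returns ERR_PTR() on failure, otherwise a buffer we now hold */
	return dma_buf_get(fd);		/* +1 reference */
}

static void import_unpin(struct dma_buf *buf)
{
	dma_buf_put(buf);		/* -1 reference */
}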
> Would i915 or exporting side of xen-zcopy know whether dom0 currently
> uses the dmabuf or not?
Why do you need to know this (probably I don't understand the use-case)?
I may be stating the obvious here, but if the ref-count of the dma-buf is
not zero then it still exists and is in use?
>
> And again, I think this tracking should be handled in the driver itself
> implicitly without any userspace involvement if we want to this dmabuf
> sharing exist as a generic feature.
Why not let the Linux dma-buf framework do that for you?
>
>> 4.2. Backend, on XENDISPL_OP_DBUF_DESTROY:
>>    - closes all dumb handles/fd's of the buffer according to [3]
>>    - issues DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL to xen-zcopy to make
>> sure
>>      the buffer is freed (think of it as it waits for dma-buf->release
>> callback)
>>    - replies to xen-front that the buffer can be destroyed.
>> This way deletion of the buffer happens synchronously on both Dom0 and DomU
>> sides. In case if DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE returns with time-out
>> error
>> (BTW, wait time is a parameter of this IOCTL), Xen will defer grant
>> reference
>> removal and will retry later until those are free.
>>
>> Hope this helps understand how buffers are synchronously deleted in case
>> of xen-zcopy with a single protocol command.
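As an illustration of the "wait for dma-buf->release" step above, the kernel side can be as simple as blocking on a per-buffer completion with a caller-supplied timeout. This is a sketch with made-up names, not the actual xen-zcopy code:

#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/jiffies.h>

/* Sketch: the driver completes 'released' from its dma-buf release
 * callback; the WAIT_FREE path just blocks on it with a timeout.
 */
struct zcopy_wait_obj {
	struct completion released;
};

static void zcopy_wait_obj_init(struct zcopy_wait_obj *w)
{
	init_completion(&w->released);
}

static int zcopy_wait_for_release(struct zcopy_wait_obj *w,
				  unsigned int timeout_ms)
{
	if (!wait_for_completion_timeout(&w->released,
					 msecs_to_jiffies(timeout_ms)))
		return -ETIMEDOUT;
	return 0;
}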
>>
>> I think the above logic can also be re-used by the hyper-dmabuf driver with
>> some additional work:
>>
>> 1. xen-zcopy can be split into 2 parts and extend:
>> 1.1. Xen gntdev driver [4], [5] to allow creating dma-buf from grefs and
>> vise versa,
>> implement "wait" ioctl (wait for dma-buf->release): currently these are
>> DRM_XEN_ZCOPY_DUMB_FROM_REFS, DRM_XEN_ZCOPY_DUMB_TO_REFS and
>> DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>> 1.2. Xen balloon driver [6] to allow allocating contiguous buffers (not
>> needed
>> by current hyper-dmabuf, but is a must for xen-zcopy use-cases)
> Not sure how to match our use case to xen-zcopy's case but we don't do alloc
> /free all the time.
We also don't
>   Also, dom0 won't make any freeing request to domU since it
> doesn't own the buffer. It only follows dmabuf protocol as such attach/detach
> /release,
Similar here
>   which are tracked by domU (exporting VM). And for destruction of
> sharing, we have separate IOCTL for that, which revoke grant references "IF"
> there is no drivers attached to the dmabuf in dom0. Otherwise, it schedules
> destruction of sharing until it gets final dmabuf release message from dom0.
We block instead with 3sec timeout + some other logic
(out of context now)
>
> Also, in our usecase, (although we didn't intend to do so) it ends up using
> 3~4 buffers repeately.
2-3 in our use-cases
> This is because DRM in domU (that renders) doesn't
> allocate more object for EGL image since there is always free objects used
> before exist in the list. And we actually don't do full-path exporting
> (extracting pages -> grant-references -> get those shared) all the time.
> If the same dmabuf is exported already, we just update private message then
> notifies dom0 (reason for hash tables for keeping exported and importer
> dmabufs).
In my case these 2-3 buffers are allocated at start and not freed
until the end - they are used as frame buffers which are constantly
flipped. So, in my case there is not much profit in trying to cache,
which only adds unneeded complexity (in my use-case, of course).
If those 3-4 buffers you allocate are the only buffers used, you may
also try going without caching, but this depends on your use-case

>> 2. Then hyper-dmabuf uses Xen gntdev driver for Xen specific dma-buf
>> alloc/free/wait
>>
>> 3. hyper-dmabuf uses its own protocol between VMs to communicate buffer
>> creation/deletion and whatever else is needed (fences?).
>>
>> To Xen community: please think of dma-buf here as of a buffer representation
>> mechanism,
>> e.g. at the end of the day it's just a set of pages.
>>
>> Thank you,
>> Oleksandr
>>>> -Daniel
>>>>
>>>>> Regards,
>>>>> DW
>>>>> On Mon, Apr 16, 2018 at 05:33:46PM +0300, Oleksandr Andrushchenko wrote:
>>>>>> Hello, all!
>>>>>>
>>>>>> After discussing xen-zcopy and hyper-dmabuf [1] approaches
>>>>>>
>>>>>> it seems that xen-zcopy can be made not depend on DRM core any more
>>>>>>
>>>>>> and be dma-buf centric (which it in fact is).
>>>>>>
>>>>>> The DRM code was mostly there for dma-buf's FD import/export
>>>>>>
>>>>>> with DRM PRIME UAPI and with DRM use-cases in mind, but it comes out that if
>>>>>>
>>>>>> the proposed 2 IOCTLs (DRM_XEN_ZCOPY_DUMB_FROM_REFS and
>>>>>> DRM_XEN_ZCOPY_DUMB_TO_REFS)
>>>>>>
>>>>>> are extended to also provide a file descriptor of the corresponding dma-buf,
>>>>>> then
>>>>>>
>>>>>> PRIME stuff in the driver is not needed anymore.
>>>>>>
>>>>>> That being said, xen-zcopy can safely be detached from DRM and moved from
>>>>>>
>>>>>> drivers/gpu/drm/xen into drivers/xen/dma-buf-backend(?).
>>>>>>
>>>>>> This driver then becomes a universal way to turn any shared buffer between
>>>>>> Dom0/DomD
>>>>>>
>>>>>> and DomU(s) into a dma-buf, e.g. one can create a dma-buf from any grant
>>>>>> references
>>>>>>
>>>>>> or represent a dma-buf as grant-references for export.
>>>>>>
>>>>>> This way the driver can be used not only for DRM use-cases, but also for
>>>>>> other
>>>>>>
>>>>>> use-cases which may require zero copying between domains.
>>>>>>
>>>>>> For example, the use-cases we are about to work in the nearest future will
>>>>>> use
>>>>>>
>>>>>> V4L, e.g. we plan to support cameras, codecs etc. and all these will benefit
>>>>>>
>>>>>> from zero copying much. Potentially, even block/net devices may benefit,
>>>>>> but this needs some evaluation.
>>>>>>
>>>>>>
>>>>>> I would love to hear comments for authors of the hyper-dmabuf
>>>>>>
>>>>>> and Xen community, as well as DRI-Devel and other interested parties.
>>>>>>
>>>>>>
>>>>>> Thank you,
>>>>>>
>>>>>> Oleksandr
>>>>>>
>>>>>>
>>>>>> On 03/29/2018 04:19 PM, Oleksandr Andrushchenko wrote:
>>>>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>>>>
>>>>>>> Hello!
>>>>>>>
>>>>>>> When using Xen PV DRM frontend driver then on backend side one will need
>>>>>>> to do copying of display buffers' contents (filled by the
>>>>>>> frontend's user-space) into buffers allocated at the backend side.
>>>>>>> Taking into account the size of display buffers and frames per seconds
>>>>>>> it may result in unneeded huge data bus occupation and performance loss.
>>>>>>>
>>>>>>> This helper driver allows implementing zero-copying use-cases
>>>>>>> when using Xen para-virtualized frontend display driver by
>>>>>>> implementing a DRM/KMS helper driver running on backend's side.
>>>>>>> It utilizes PRIME buffers API to share frontend's buffers with
>>>>>>> physical device drivers on backend's side:
>>>>>>>
>>>>>>>   - a dumb buffer created on backend's side can be shared
>>>>>>>     with the Xen PV frontend driver, so it directly writes
>>>>>>>     into backend's domain memory (into the buffer exported from
>>>>>>>     DRM/KMS driver of a physical display device)
>>>>>>>   - a dumb buffer allocated by the frontend can be imported
>>>>>>>     into physical device DRM/KMS driver, thus allowing to
>>>>>>>     achieve no copying as well
>>>>>>>
>>>>>>> For that reason number of IOCTLs are introduced:
>>>>>>>   -  DRM_XEN_ZCOPY_DUMB_FROM_REFS
>>>>>>>      This will create a DRM dumb buffer from grant references provided
>>>>>>>      by the frontend
>>>>>>>   - DRM_XEN_ZCOPY_DUMB_TO_REFS
>>>>>>>     This will grant references to a dumb/display buffer's memory provided
>>>>>>>     by the backend
>>>>>>>   - DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>>>>     This will block until the dumb buffer with the wait handle provided
>>>>>>>     be freed
>>>>>>>
>>>>>>> With this helper driver I was able to drop CPU usage from 17% to 3%
>>>>>>> on Renesas R-Car M3 board.
>>>>>>>
>>>>>>> This was tested with Renesas' Wayland-KMS and backend running as DRM master.
>>>>>>>
>>>>>>> Thank you,
>>>>>>> Oleksandr
>>>>>>>
>>>>>>> Oleksandr Andrushchenko (1):
>>>>>>>    drm/xen-zcopy: Add Xen zero-copy helper DRM driver
>>>>>>>
>>>>>>>   Documentation/gpu/drivers.rst               |   1 +
>>>>>>>   Documentation/gpu/xen-zcopy.rst             |  32 +
>>>>>>>   drivers/gpu/drm/xen/Kconfig                 |  25 +
>>>>>>>   drivers/gpu/drm/xen/Makefile                |   5 +
>>>>>>>   drivers/gpu/drm/xen/xen_drm_zcopy.c         | 880 ++++++++++++++++++++++++++++
>>>>>>>   drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c | 154 +++++
>>>>>>>   drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h |  38 ++
>>>>>>>   include/uapi/drm/xen_zcopy_drm.h            | 129 ++++
>>>>>>>   8 files changed, 1264 insertions(+)
>>>>>>>   create mode 100644 Documentation/gpu/xen-zcopy.rst
>>>>>>>   create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy.c
>>>>>>>   create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c
>>>>>>>   create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h
>>>>>>>   create mode 100644 include/uapi/drm/xen_zcopy_drm.h
>>>>>>>
>>>>>> [1]
>>>>>> https://lists.xenproject.org/archives/html/xen-devel/2018-02/msg01202.html
>>>>> _______________________________________________
>>>>> dri-devel mailing list
>>>>> dri-devel@lists.freedesktop.org
>>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>>>> -- 
>>>> Daniel Vetter
>>>> Software Engineer, Intel Corporation
>>>> http://blog.ffwll.ch
>> [1] https://elixir.bootlin.com/linux/v4.17-rc1/source/include/xen/interface/io/displif.h
>> [2] https://elixir.bootlin.com/linux/v4.17-rc1/source/include/xen/interface/io/displif.h#L539
>> [3] https://elixir.bootlin.com/linux/v4.17-rc1/source/drivers/gpu/drm/drm_prime.c#L39
>> [4] https://elixir.bootlin.com/linux/v4.17-rc1/source/drivers/xen/gntdev.c
>> [5]
>> https://elixir.bootlin.com/linux/v4.17-rc1/source/include/uapi/xen/gntdev.h
>> [6] https://elixir.bootlin.com/linux/v4.17-rc1/source/drivers/xen/balloon.c


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-18 16:01                           ` Dongwon Kim
@ 2018-04-19  8:19                             ` Oleksandr Andrushchenko
  -1 siblings, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-19  8:19 UTC (permalink / raw)
  To: Dongwon Kim
  Cc: Roger Pau Monné,
	Paul Durrant, jgross, Artem Mygaiev, airlied,
	Oleksandr_Andrushchenko, linux-kernel, dri-devel, Potrola,
	MateuszX, xen-devel, daniel.vetter, boris.ostrovsky, Matt Roper

On 04/18/2018 07:01 PM, Dongwon Kim wrote:
> On Wed, Apr 18, 2018 at 03:42:29PM +0300, Oleksandr Andrushchenko wrote:
>> On 04/18/2018 01:55 PM, Roger Pau Monné wrote:
>>> On Wed, Apr 18, 2018 at 01:39:35PM +0300, Oleksandr Andrushchenko wrote:
>>>> On 04/18/2018 01:18 PM, Paul Durrant wrote:
>>>>>> -----Original Message-----
>>>>>> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf
>>>>>> Of Roger Pau Monné
>>>>>> Sent: 18 April 2018 11:11
>>>>>> To: Oleksandr Andrushchenko <andr2000@gmail.com>
>>>>>> Cc: jgross@suse.com; Artem Mygaiev <Artem_Mygaiev@epam.com>;
>>>>>> Dongwon Kim <dongwon.kim@intel.com>; airlied@linux.ie;
>>>>>> Oleksandr_Andrushchenko@epam.com; linux-kernel@vger.kernel.org; dri-
>>>>>> devel@lists.freedesktop.org; Potrola, MateuszX
>>>>>> <mateuszx.potrola@intel.com>; xen-devel@lists.xenproject.org;
>>>>>> daniel.vetter@intel.com; boris.ostrovsky@oracle.com; Matt Roper
>>>>>> <matthew.d.roper@intel.com>
>>>>>> Subject: Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy
>>>>>> helper DRM driver
>>>>>>
>>>>>> On Wed, Apr 18, 2018 at 11:01:12AM +0300, Oleksandr Andrushchenko
>>>>>> wrote:
>>>>>>> On 04/18/2018 10:35 AM, Roger Pau Monné wrote:
>>>>>> After speaking with Oleksandr on IRC, I think the main usage of the
>>>>>> gntdev extension is to:
>>>>>>
>>>>>> 1. Create a dma-buf from a set of grant references.
>>>>>> 2. Share dma-buf and get a list of grant references.
>>>>>>
>>>>>> I think this set of operations could be broken into:
>>>>>>
>>>>>> 1.1 Map grant references into user-space using the gntdev.
>>>>>> 1.2 Create a dma-buf out of a set of user-space virtual addresses.
>>>>>>
>>>>>> 2.1 Map a dma-buf into user-space.
>>>>>> 2.2 Get grefs out of the user-space addresses where the dma-buf is
>>>>>>       mapped.
>>>>>>
>>>>>> So it seems like what's actually missing is a way to:
>>>>>>
>>>>>>    - Create a dma-buf from a list of user-space virtual addresses.
>>>>>>    - Allow to map a dma-buf into user-space, so it can then be used with
>>>>>>      the gntdev.
>>>>>>
>>>>>> I think this is generic enough that it could be implemented by a
>>>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>>>> something similar to this.
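For reference, step 1.1 of the breakdown above already exists today through the gntdev UAPI; roughly as in the user-space sketch below (error handling is mostly omitted, and the device node and header paths may differ per setup):

#include <fcntl.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <xen/gntdev.h>

/* Sketch: map a single foreign grant reference into user space via
 * gntdev (step 1.1); steps 1.2/2.x are the part that is missing.
 */
static void *map_one_gref(int domid, uint32_t gref, size_t page_size)
{
	struct ioctl_gntdev_map_grant_ref op = { 0 };
	int fd = open("/dev/xen/gntdev", O_RDWR);

	if (fd < 0)
		return NULL;

	op.count = 1;
	op.refs[0].domid = domid;
	op.refs[0].ref = gref;
	if (ioctl(fd, IOCTL_GNTDEV_MAP_GRANT_REF, &op))
		return NULL;

	return mmap(NULL, page_size, PROT_READ | PROT_WRITE,
		    MAP_SHARED, fd, op.index);
}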
>>>> Ok, so just to summarize, xen-zcopy/hyper-dmabuf as they are now,
>>>> are no go from your POV?
> FYI,
>
> our use-case is "surface sharing" or "graphic obj sharing" where a client
> application in one guest renders and export this render target(e.g. EGL surface)
> as dma-buf. This dma-buf is then exported to another guest/host via hyper_dmabuf
> drv where a compositor is running. This importing domain creates a dmabuf with
> shared reference then it is imported as EGL image that later can be used as
> texture object via EGL api.

>   Mapping dmabuf to the userspace or vice versa
> might be possible with modifying user space drivers/applications but it is an
> unnecessary extra step from our perspective.
+1. I also feel like if it is implemented in the kernel space it
will be *much* easier than inventing workarounds with
gntdev, user-space and a helper dma-buf driver (which obviously can be
implemented). Of course, this approach is easier for Xen as we do not
touch its kernel code ;)
But there is a demand for changes as the number of embedded/multimedia use-cases
is constantly growing and we have to react.
> Also, we want to keep all objects
> in the kernel level.
>
>>> My opinion is that there seems to be a more generic way to implement
>>> this, and thus I would prefer that one.
>>>
>>>> Instead, we have to make all that fancy stuff
>>>> with VAs <-> device-X and have that device-X driver live out of drivers/xen
>>>> as it is not a Xen specific driver?
>>> That would be my preference if feasible, simply because it can be
>>> reused by other use-cases that need to create dma-bufs in user-space.
>> There is a use-case I have: a display unit on my target has a DMA
>> controller which can't do scatter-gather, e.g. it only expects a
>> single starting address of the buffer.
>> In order to create a dma-buf from grefs in this case
>> I allocate memory with dma_alloc_xxx and then balloon pages of the
>> buffer and finally map grefs onto this DMA buffer.
>> This way I can give this shared buffer to the display unit as its bus
>> addresses are contiguous.
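For clarity, the allocation part of that trick is simply the following sketch; the subsequent ballooning out of the frames and mapping of the foreign grants onto them is the Xen-specific part and is omitted here:

#include <linux/dma-mapping.h>

/* Sketch: get a bus-contiguous backing buffer that a scatter-gather
 * incapable display DMA can consume from a single start address;
 * grant references from the other domain are then mapped over the
 * pages backing this allocation.
 */
static void *alloc_contiguous_backing(struct device *dev, size_t size,
				      dma_addr_t *bus_addr)
{
	return dma_alloc_coherent(dev, size, bus_addr, GFP_KERNEL);
}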
>>
>> With the proposed solution (gntdev + device-X) I won't be able to achieve
>> this,
>> as I have no control over from where gntdev/balloon drivers get the pages
>> (even more, those can easily be out of DMA address space of the display
>> unit).
>>
>> Thus, even if implemented, I can't use this approach.
>>> In any case I just knew about dma-bufs this morning, there might be
>>> things that I'm missing.
>>>
>>> Roger.

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
@ 2018-04-19  8:19                             ` Oleksandr Andrushchenko
  0 siblings, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-19  8:19 UTC (permalink / raw)
  To: Dongwon Kim
  Cc: jgross, Artem Mygaiev, Oleksandr_Andrushchenko, airlied,
	linux-kernel, dri-devel, Paul Durrant, Potrola, MateuszX,
	daniel.vetter, xen-devel, boris.ostrovsky, Roger Pau Monné

On 04/18/2018 07:01 PM, Dongwon Kim wrote:
> On Wed, Apr 18, 2018 at 03:42:29PM +0300, Oleksandr Andrushchenko wrote:
>> On 04/18/2018 01:55 PM, Roger Pau Monné wrote:
>>> On Wed, Apr 18, 2018 at 01:39:35PM +0300, Oleksandr Andrushchenko wrote:
>>>> On 04/18/2018 01:18 PM, Paul Durrant wrote:
>>>>>> -----Original Message-----
>>>>>> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf
>>>>>> Of Roger Pau Monné
>>>>>> Sent: 18 April 2018 11:11
>>>>>> To: Oleksandr Andrushchenko <andr2000@gmail.com>
>>>>>> Cc: jgross@suse.com; Artem Mygaiev <Artem_Mygaiev@epam.com>;
>>>>>> Dongwon Kim <dongwon.kim@intel.com>; airlied@linux.ie;
>>>>>> Oleksandr_Andrushchenko@epam.com; linux-kernel@vger.kernel.org; dri-
>>>>>> devel@lists.freedesktop.org; Potrola, MateuszX
>>>>>> <mateuszx.potrola@intel.com>; xen-devel@lists.xenproject.org;
>>>>>> daniel.vetter@intel.com; boris.ostrovsky@oracle.com; Matt Roper
>>>>>> <matthew.d.roper@intel.com>
>>>>>> Subject: Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy
>>>>>> helper DRM driver
>>>>>>
>>>>>> On Wed, Apr 18, 2018 at 11:01:12AM +0300, Oleksandr Andrushchenko
>>>>>> wrote:
>>>>>>> On 04/18/2018 10:35 AM, Roger Pau Monné wrote:
>>>>>> After speaking with Oleksandr on IRC, I think the main usage of the
>>>>>> gntdev extension is to:
>>>>>>
>>>>>> 1. Create a dma-buf from a set of grant references.
>>>>>> 2. Share dma-buf and get a list of grant references.
>>>>>>
>>>>>> I think this set of operations could be broken into:
>>>>>>
>>>>>> 1.1 Map grant references into user-space using the gntdev.
>>>>>> 1.2 Create a dma-buf out of a set of user-space virtual addresses.
>>>>>>
>>>>>> 2.1 Map a dma-buf into user-space.
>>>>>> 2.2 Get grefs out of the user-space addresses where the dma-buf is
>>>>>>       mapped.
>>>>>>
>>>>>> So it seems like what's actually missing is a way to:
>>>>>>
>>>>>>    - Create a dma-buf from a list of user-space virtual addresses.
>>>>>>    - Allow to map a dma-buf into user-space, so it can then be used with
>>>>>>      the gntdev.
>>>>>>
>>>>>> I think this is generic enough that it could be implemented by a
>>>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>>>> something similar to this.
>>>> Ok, so just to summarize, xen-zcopy/hyper-dmabuf as they are now,
>>>> are no go from your POV?
> FYI,
>
> our use-case is "surface sharing" or "graphic obj sharing" where a client
> application in one guest renders and export this render target(e.g. EGL surface)
> as dma-buf. This dma-buf is then exported to another guest/host via hyper_dmabuf
> drv where a compositor is running. This importing domain creates a dmabuf with
> shared reference then it is imported as EGL image that later can be used as
> texture object via EGL api.

>   Mapping dmabuf to the userspace or vice versa
> might be possible with modifying user space drivers/applications but it is an
> unnecessary extra step from our perspective.
+1. I also feel like if it is implemented in the kernel space it
will be *much* easier than inventing workarounds with
gntdev, user-space and a helper dma-buf driver (which obviously can be
implemented). Of course, this approach is easier for Xen as we do not
touch its kernel code ;)
But there is a demand for changes as the number of embedded/multimedia use-cases
is constantly growing and we have to react.
> Also, we want to keep all objects
> in the kernel level.
>
>>> My opinion is that there seems to be a more generic way to implement
>>> this, and thus I would prefer that one.
>>>
>>>> Instead, we have to make all that fancy stuff
>>>> with VAs <-> device-X and have that device-X driver live out of drivers/xen
>>>> as it is not a Xen specific driver?
>>> That would be my preference if feasible, simply because it can be
>>> reused by other use-cases that need to create dma-bufs in user-space.
>> There is a use-case I have: a display unit on my target has a DMA
>> controller which can't do scatter-gather, e.g. it only expects a
>> single starting address of the buffer.
>> In order to create a dma-buf from grefs in this case
>> I allocate memory with dma_alloc_xxx and then balloon pages of the
>> buffer and finally map grefs onto this DMA buffer.
>> This way I can give this shared buffer to the display unit as its bus
>> addresses are contiguous.
>>
>> With the proposed solution (gntdev + device-X) I won't be able to achieve
>> this,
>> as I have no control over from where gntdev/balloon drivers get the pages
>> (even more, those can easily be out of DMA address space of the display
>> unit).
>>
>> Thus, even if implemented, I can't use this approach.
>>> In any case I just knew about dma-bufs this morning, there might be
>>> things that I'm missing.
>>>
>>> Roger.

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-18 16:01                           ` Dongwon Kim
  (?)
@ 2018-04-19  8:19                           ` Oleksandr Andrushchenko
  -1 siblings, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-19  8:19 UTC (permalink / raw)
  To: Dongwon Kim
  Cc: jgross, Artem Mygaiev, Oleksandr_Andrushchenko, airlied,
	linux-kernel, dri-devel, Paul Durrant, Potrola, MateuszX,
	daniel.vetter, xen-devel, boris.ostrovsky, Matt Roper,
	Roger Pau Monné

On 04/18/2018 07:01 PM, Dongwon Kim wrote:
> On Wed, Apr 18, 2018 at 03:42:29PM +0300, Oleksandr Andrushchenko wrote:
>> On 04/18/2018 01:55 PM, Roger Pau Monné wrote:
>>> On Wed, Apr 18, 2018 at 01:39:35PM +0300, Oleksandr Andrushchenko wrote:
>>>> On 04/18/2018 01:18 PM, Paul Durrant wrote:
>>>>>> -----Original Message-----
>>>>>> From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf
>>>>>> Of Roger Pau Monné
>>>>>> Sent: 18 April 2018 11:11
>>>>>> To: Oleksandr Andrushchenko <andr2000@gmail.com>
>>>>>> Cc: jgross@suse.com; Artem Mygaiev <Artem_Mygaiev@epam.com>;
>>>>>> Dongwon Kim <dongwon.kim@intel.com>; airlied@linux.ie;
>>>>>> Oleksandr_Andrushchenko@epam.com; linux-kernel@vger.kernel.org; dri-
>>>>>> devel@lists.freedesktop.org; Potrola, MateuszX
>>>>>> <mateuszx.potrola@intel.com>; xen-devel@lists.xenproject.org;
>>>>>> daniel.vetter@intel.com; boris.ostrovsky@oracle.com; Matt Roper
>>>>>> <matthew.d.roper@intel.com>
>>>>>> Subject: Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy
>>>>>> helper DRM driver
>>>>>>
>>>>>> On Wed, Apr 18, 2018 at 11:01:12AM +0300, Oleksandr Andrushchenko
>>>>>> wrote:
>>>>>>> On 04/18/2018 10:35 AM, Roger Pau Monné wrote:
>>>>>> After speaking with Oleksandr on IRC, I think the main usage of the
>>>>>> gntdev extension is to:
>>>>>>
>>>>>> 1. Create a dma-buf from a set of grant references.
>>>>>> 2. Share dma-buf and get a list of grant references.
>>>>>>
>>>>>> I think this set of operations could be broken into:
>>>>>>
>>>>>> 1.1 Map grant references into user-space using the gntdev.
>>>>>> 1.2 Create a dma-buf out of a set of user-space virtual addresses.
>>>>>>
>>>>>> 2.1 Map a dma-buf into user-space.
>>>>>> 2.2 Get grefs out of the user-space addresses where the dma-buf is
>>>>>>       mapped.
>>>>>>
>>>>>> So it seems like what's actually missing is a way to:
>>>>>>
>>>>>>    - Create a dma-buf from a list of user-space virtual addresses.
>>>>>>    - Allow to map a dma-buf into user-space, so it can then be used with
>>>>>>      the gntdev.
>>>>>>
>>>>>> I think this is generic enough that it could be implemented by a
>>>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>>>> something similar to this.
>>>> Ok, so just to summarize, xen-zcopy/hyper-dmabuf as they are now,
>>>> are no go from your POV?
> FYI,
>
> our use-case is "surface sharing" or "graphic obj sharing" where a client
> application in one guest renders and export this render target(e.g. EGL surface)
> as dma-buf. This dma-buf is then exported to another guest/host via hyper_dmabuf
> drv where a compositor is running. This importing domain creates a dmabuf with
> shared reference then it is imported as EGL image that later can be used as
> texture object via EGL api.

>   Mapping dmabuf to the userspace or vice versa
> might be possible with modifying user space drivers/applications but it is an
> unnecessary extra step from our perspective.
+1. I also feel like if it is implemented in the kernel space it
will be *much* easier than inventing workarounds with
gntdev, user-space and a helper dma-buf driver (which obviously can be
implemented). Of course, this approach is easier for Xen as we do not
touch its kernel code ;)
But there is a demand for changes as the number of embedded/multimedia use-cases
is constantly growing and we have to react.
> Also, we want to keep all objects
> in the kernel level.
>
>>> My opinion is that there seems to be a more generic way to implement
>>> this, and thus I would prefer that one.
>>>
>>>> Instead, we have to make all that fancy stuff
>>>> with VAs <-> device-X and have that device-X driver live out of drivers/xen
>>>> as it is not a Xen specific driver?
>>> That would be my preference if feasible, simply because it can be
>>> reused by other use-cases that need to create dma-bufs in user-space.
>> There is a use-case I have: a display unit on my target has a DMA
>> controller which can't do scatter-gather, e.g. it only expects a
>> single starting address of the buffer.
>> In order to create a dma-buf from grefs in this case
>> I allocate memory with dma_alloc_xxx and then balloon pages of the
>> buffer and finally map grefs onto this DMA buffer.
>> This way I can give this shared buffer to the display unit as its bus
>> addresses are contiguous.
>>
>> With the proposed solution (gntdev + device-X) I won't be able to achieve
>> this,
>> as I have no control over from where gntdev/balloon drivers get the pages
>> (even more, those can easily be out of DMA address space of the display
>> unit).
>>
>> Thus, even if implemented, I can't use this approach.
>>> In any case I just knew about dma-bufs this morning, there might be
>>> things that I'm missing.
>>>
>>> Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-19  8:14               ` Oleksandr Andrushchenko
@ 2018-04-19 17:55                 ` Dongwon Kim
  -1 siblings, 0 replies; 131+ messages in thread
From: Dongwon Kim @ 2018-04-19 17:55 UTC (permalink / raw)
  To: Oleksandr Andrushchenko
  Cc: Oleksandr_Andrushchenko, jgross, Artem Mygaiev, konrad.wilk,
	airlied, linux-kernel, dri-devel, Potrola, MateuszX,
	daniel.vetter, xen-devel, boris.ostrovsky, Matt Roper

On Thu, Apr 19, 2018 at 11:14:02AM +0300, Oleksandr Andrushchenko wrote:
> On 04/18/2018 08:01 PM, Dongwon Kim wrote:
> >On Wed, Apr 18, 2018 at 09:38:39AM +0300, Oleksandr Andrushchenko wrote:
> >>On 04/17/2018 11:57 PM, Dongwon Kim wrote:
> >>>On Tue, Apr 17, 2018 at 09:59:28AM +0200, Daniel Vetter wrote:
> >>>>On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
> >>>>>Yeah, I definitely agree on the idea of expanding the use case to the
> >>>>>general domain where dmabuf sharing is used. However, what you are
> >>>>>targetting with proposed changes is identical to the core design of
> >>>>>hyper_dmabuf.
> >>>>>
> >>>>>On top of this basic functionalities, hyper_dmabuf has driver level
> >>>>>inter-domain communication, that is needed for dma-buf remote tracking
> >>>>>(no fence forwarding though), event triggering and event handling, extra
> >>>>>meta data exchange and hyper_dmabuf_id that represents grefs
> >>>>>(grefs are shared implicitly on driver level)
> >>>>This really isn't a positive design aspect of hyperdmabuf imo. The core
> >>>>code in xen-zcopy (ignoring the ioctl side, which will be cleaned up) is
> >>>>very simple & clean.
> >>>>
> >>>>If there's a clear need later on we can extend that. But for now xen-zcopy
> >>>>seems to cover the basic use-case needs, so gets the job done.
> >>>>
> >>>>>Also it is designed with frontend (common core framework) + backend
> >>>>>(hyper visor specific comm and memory sharing) structure for portability.
> >>>>>We just can't limit this feature to Xen because we want to use the same
> >>>>>uapis not only for Xen but also other applicable hypervisor, like ACORN.
> >>>>See the discussion around udmabuf and the needs for kvm. I think trying to
> >>>>make an ioctl/uapi that works for multiple hypervisors is misguided - it
> >>>>likely won't work.
> >>>>
> >>>>On top of that the 2nd hypervisor you're aiming to support is ACRN. That's
> >>>>not even upstream yet, nor have I seen any patches proposing to land linux
> >>>>support for ACRN. Since it's not upstream, it doesn't really matter for
> >>>>upstream consideration. I'm doubting that ACRN will use the same grant
> >>>>references as xen, so the same uapi won't work on ACRN as on Xen anyway.
> >>>Yeah, ACRN doesn't have grant-table. Only Xen supports it. But that is why
> >>>hyper_dmabuf has been architectured with the concept of backend.
> >>>If you look at the structure of backend, you will find that
> >>>backend is just a set of standard function calls as shown here:
> >>>
> >>>struct hyper_dmabuf_bknd_ops {
> >>>         /* backend initialization routine (optional) */
> >>>         int (*init)(void);
> >>>
> >>>         /* backend cleanup routine (optional) */
> >>>         int (*cleanup)(void);
> >>>
> >>>         /* retreiving id of current virtual machine */
> >>>         int (*get_vm_id)(void);
> >>>
> >>>         /* get pages shared via hypervisor-specific method */
> >>>         int (*share_pages)(struct page **pages, int vm_id,
> >>>                            int nents, void **refs_info);
> >>>
> >>>         /* make shared pages unshared via hypervisor specific method */
> >>>         int (*unshare_pages)(void **refs_info, int nents);
> >>>
> >>>         /* map remotely shared pages on importer's side via
> >>>          * hypervisor-specific method
> >>>          */
> >>>         struct page ** (*map_shared_pages)(unsigned long ref, int vm_id,
> >>>                                            int nents, void **refs_info);
> >>>
> >>>         /* unmap and free shared pages on importer's side via
> >>>          * hypervisor-specific method
> >>>          */
> >>>         int (*unmap_shared_pages)(void **refs_info, int nents);
> >>>
> >>>         /* initialize communication environment */
> >>>         int (*init_comm_env)(void);
> >>>
> >>>         void (*destroy_comm)(void);
> >>>
> >>>         /* upstream ch setup (receiving and responding) */
> >>>         int (*init_rx_ch)(int vm_id);
> >>>
> >>>         /* downstream ch setup (transmitting and parsing responses) */
> >>>         int (*init_tx_ch)(int vm_id);
> >>>
> >>>         int (*send_req)(int vm_id, struct hyper_dmabuf_req *req, int wait);
> >>>};
> >>>
> >>>All of these can be mapped with any hypervisor specific implementation.
> >>>We designed backend implementation for Xen using grant-table, Xen event
> >>>and ring buffer communication. For ACRN, we have another backend using Virt-IO
> >>>for both memory sharing and communication.
> >>>
> >>>We tried to define this structure of backend to make it general enough (or
> >>>it can be even modified or extended to support more cases.) so that it can
> >>>fit to other hypervisor cases. Only requirements/expectation on the hypervisor
> >>>are page-level memory sharing and inter-domain communication, which I think
> >>>are standard features of modern hypervisor.
> >>>
> >>>And please review common UAPIs that hyper_dmabuf and xen-zcopy supports. They
> >>>are very general. One is getting FD (dmabuf) and get those shared. The other
> >>>is generating dmabuf from global handle (secure handle hiding gref behind it).
> >>>On top of this, hyper_dmabuf has "unshare" and "query" which are also useful
> >>>for any cases.
> >>>
> >>>So I don't know why we wouldn't want to try to make these standard in most of
> >>>hypervisor cases instead of limiting it to certain hypervisor like Xen.
> >>>Frontend-backend structre is optimal for this I think.
> >>>
> >>>>>So I am wondering we can start with this hyper_dmabuf then modify it for
> >>>>>your use-case if needed and polish and fix any glitches if we want to
> >>>>>to use this for all general dma-buf usecases.
> >>>>Imo xen-zcopy is a much more reasonable starting point for upstream, which
> >>>>can then be extended (if really proven to be necessary).
> >>>>
> >>>>>Also, I still have one unresolved question regarding the export/import flow
> >>>>>in both of hyper_dmabuf and xen-zcopy.
> >>>>>
> >>>>>@danvet: Would this flow (guest1->import existing dmabuf->share underlying
> >>>>>pages->guest2->map shared pages->create/export dmabuf) be acceptable now?
> >>>>I think if you just look at the pages, and make sure you handle the
> >>>>sg_page == NULL case it's ok-ish. It's not great, but mostly it should
> >>>>work. The real trouble with hyperdmabuf was the forwarding of all these
> >>>>calls, instead of just passing around a list of grant references.
> >>>I talked to danvet about this litte bit.
> >>>
> >>>I think there was some misunderstanding on this "forwarding". Exporting
> >>>and importing flow in hyper_dmabuf are basically same as xen-zcopy's. I think
> >>>what made confusion was that importing domain notifies exporting domain when
> >>>there are dmabuf operations (like attach, mapping, detach and release) so that
> >>>exporting domain can track the usage of dmabuf on the importing domain.
> >>>
> >>>I designed this for some basic tracking. We may not need to notify for every
> >>>different activity but if none of them is there, exporting domain can't
> >>>determine if it is ok to unshare the buffer or the originator (like i915)
> >>>can free the object even if it's being accessed in importing domain.
> >>>
> >>>Anyway I really hope we can have enough discussion and resolve all concerns
> >>>before nailing it down.
> >>Let me explain how this works in case of para-virtual display
> >>use-case with xen-zcopy.
> >>
> >>1. There are 4 components in the system:
> >>   - displif protocol [1]
> >>   - xen-front - para-virtual DRM driver running in DomU (Guest) VM
> >>   - backend - user-space application running in Dom0
> >>   - xen-zcopy - DRM (as of now) helper driver running in Dom0
> >>
> >>2. All the communication between domains happens between xen-front and the
> >>backend, so it is possible to implement para-virtual display use-case
> >>without xen-zcopy at all (this is why it is a helper driver), but in this
> >>case
> >>memory copying occurs (this is out of scope for this discussion).
> >>
> >>3. To better understand security issues let's see what use-cases we have:
> >>
> >>3.1 xen-front exports its dma-buf (dumb) to the backend
> >>
> >>In this case there are no security issues at all as Dom0 (backend side)
> >>will use DomU's pages (xen-front side) and Dom0 is a trusted domain, so
> >>we assume it won't hurt DomU. Even if DomU dies nothing bad happens to Dom0.
> >>If DomU misbehaves it can only write to its own pages shared with Dom0, but
> >>still
> >>cannot go beyond that, e.g. it can't access Dom0's memory.
> >>
> >>3.2 Backend exports dma-buf to xen-front
> >>
> >>In this case Dom0 pages are shared with DomU. As before, DomU can only write
> >>to these pages, not any other page from Dom0, so it can be still considered
> >>safe.
> >>But, the following must be considered (highlighted in xen-front's Kernel
> >>documentation):
> >>  - If guest domain dies then pages/grants received from the backend cannot
> >>    be claimed back - think of it as memory lost to Dom0 (won't be used for
> >>any
> >>    other guest)
> >>  - Misbehaving guest may send too many requests to the backend exhausting
> >>    its grant references and memory (consider this from security POV). As the
> >>    backend runs in the trusted domain we also assume that it is trusted as
> >>well,
> >>    e.g. must take measures to prevent DDoS attacks.
> >>
> >There is another security issue that this driver itself can cause. Using the
> >grant-reference as is is not very safe because it's easy to guess (counting
> >number probably) and any attackers running on the same importing domain can
> >use these references to map shared pages and access the data. This is why we
> >implemented "hyper_dmabuf_id" that contains 96 bit random number to make it
> >almost impossible to guess.
> Yes, there is something to think about in general, not related
> to dma-buf/zcopy. This is a question to Xen community what they
> see as the right approach here.

IMO, this secure global handle should be taken into consideration because
grefs are just plain references generated by in-kernel functions, and
how securely they are delivered and used is up to whatever driver exposes
them via uapi. And a proper way to protect these, and also to prevent any
"guessed" references from being used, may be to exchange and keep them at
the kernel level.

> >  All grant references for pages are shared in the
> >driver level. This is another reason for having inter-VM comm.
> >
> >>4. xen-front/backend/xen-zcopy synchronization
> >>
> >>4.1. As I already said in 2) all the inter VM communication happens between
> >>xen-front and the backend, xen-zcopy is NOT involved in that.
> >Yeah, understood but this is also my point. Both hyper_dmabuf and xen-zcopy
> >is a driver that expands dmabuf sharing to inter-VM level. Then shouldn't this
> >driver itself provide some way to synchronize between two VMs?
> No, because xen-zcopy is a *helper* driver, not more.
> >  I think the
> >assumption behind this is that Xen PV display interface and backend (running
> >on the userspace) are used together with xen-zcopy
> Backend may use xen-zcopy or may not - it depends if you need
> zero copy or not, e.g. it is not a must for the backend
> >but what if an user space
> >just want to use xen-zcopy separately? Since it exposes ioctls, this is
> >possible unless you add some dependency configuration there.
> It is possible, any backend (user-space application) can use xen-zcopy
> Even more, one can extend it to provide kernel side API
> >
> >>When xen-front wants to destroy a display buffer (dumb/dma-buf) it issues a
> >>XENDISPL_OP_DBUF_DESTROY command (opposite to XENDISPL_OP_DBUF_CREATE).
> >>This call is synchronous, so xen-front expects that backend does free the
> >>buffer pages on return.
> >Does it mean importing domain (dom0 assuming we do domU -> dom0 dmabuf
> >exporting) makes a destory request to the exporting VM?
> No, the requester is always DomU, so "destroy buffer" request
> will always come from DomU
> >  But isn't it
> >the domU to make such decision since it's the owner of buffer.
> See above
> >
> >And what about the other way around? For example, what happens if the
> >originator of buffer (like i915) decides to free the object behind dmabuf?
> For that reason there is ref-counting for dma-buf, e.g.
> if i915 decides to free then the backend (in my case) still holds
> the buffer, thus not allowing it do disappear. Basically, this is
> the backend which creates dma-buf from refs and owns it.

Ok, I got it. So xen-zcopy stays as the importer, holding one ref
on the dmabuf from i915. I see no problem here then. But actually my concern
is more about what happens between domains (below).

> >Would i915 or exporting side of xen-zcopy know whether dom0 currently
> >uses the dmabuf or not?
> Why do you need this to know (probably I don't understand the use-case).
> I could be obvious here, but if ref-count of the dma-buf is not zero
> it is still exists and used?
> >
> >And again, I think this tracking should be handled in the driver itself
> >implicitly without any userspace involvement if we want to this dmabuf
> >sharing exist as a generic feature.
> Why not allow dma-buf Linux framework do that for you?

Yes, between hyper_dmabuf/xen-zcopy and i915 (domU) and between the
end-consumer and hyper_dmabuf/xen-zcopy (dom0), the standard dma-buf
protocols work. What I am referring to is more about what happens between
domains. Let's say you want to tear down sharing from domU: how does it
know it is safe to do so? Does the wait ioctl handle these remote
activities? Possibly this can be done by the backend in your scheme, but
again, this means there is a dependency, and it is another reason xen-zcopy
may not be used safely in general dmabuf sharing cases. I think this
part should be guaranteed inside the driver that does the export/import
of the dmabuf to/from the other domain.
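To make the kind of driver-level notification meant here concrete, it boils down to a small message such as the one sketched below. Names and layout are purely illustrative and are not the actual hyper_dmabuf ABI:

#include <linux/types.h>

/* Sketch: the sort of message the importing domain would send so the
 * exporting domain knows when the buffer is really released and the
 * sharing can be torn down safely.
 */
enum remote_buf_cmd {
	REMOTE_BUF_ATTACH,
	REMOTE_BUF_DETACH,
	REMOTE_BUF_RELEASE,	/* importer dropped its last reference */
};

struct remote_buf_msg {
	__u32 cmd;		/* enum remote_buf_cmd */
	__u64 buf_id;		/* shared-buffer handle */
};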

> >
> >>4.2. Backend, on XENDISPL_OP_DBUF_DESTROY:
> >>   - closes all dumb handles/fd's of the buffer according to [3]
> >>   - issues DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL to xen-zcopy to make
> >>sure
> >>     the buffer is freed (think of it as it waits for dma-buf->release
> >>callback)
> >>   - replies to xen-front that the buffer can be destroyed.
> >>This way deletion of the buffer happens synchronously on both Dom0 and DomU
> >>sides. In case if DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE returns with time-out
> >>error
> >>(BTW, wait time is a parameter of this IOCTL), Xen will defer grant
> >>reference
> >>removal and will retry later until those are free.
> >>
> >>Hope this helps understand how buffers are synchronously deleted in case
> >>of xen-zcopy with a single protocol command.
> >>
> >>I think the above logic can also be re-used by the hyper-dmabuf driver with
> >>some additional work:
> >>
> >>1. xen-zcopy can be split into 2 parts and extend:
> >>1.1. Xen gntdev driver [4], [5] to allow creating dma-buf from grefs and
> >>vise versa,
> >>implement "wait" ioctl (wait for dma-buf->release): currently these are
> >>DRM_XEN_ZCOPY_DUMB_FROM_REFS, DRM_XEN_ZCOPY_DUMB_TO_REFS and
> >>DRM_XEN_ZCOPY_DUMB_WAIT_FREE
> >>1.2. Xen balloon driver [6] to allow allocating contiguous buffers (not
> >>needed
> >>by current hyper-dmabuf, but is a must for xen-zcopy use-cases)
> >Not sure how to match our use case to xen-zcopy's case but we don't do alloc
> >/free all the time.
> We also don't
> >  Also, dom0 won't make any freeing request to domU since it
> >doesn't own the buffer. It only follows dmabuf protocol as such attach/detach
> >/release,
> Similar here
> >  which are tracked by domU (exporting VM). And for destruction of
> >sharing, we have separate IOCTL for that, which revoke grant references "IF"
> >there is no drivers attached to the dmabuf in dom0. Otherwise, it schedules
> >destruction of sharing until it gets final dmabuf release message from dom0.
> We block instead with 3sec timeout + some other logic
> (out of context now)

Does it mean it is not triggered by some kind of release signal from dom0?

> >
> >Also, in our usecase, (although we didn't intend to do so) it ends up using
> >3~4 buffers repeately.
> 2-3 in our use-cases
> >This is because DRM in domU (that renders) doesn't
> >allocate more object for EGL image since there is always free objects used
> >before exist in the list. And we actually don't do full-path exporting
> >(extracting pages -> grant-references -> get those shared) all the time.
> >If the same dmabuf is exported already, we just update private message then
> >notifies dom0 (reason for hash tables for keeping exported and importer
> >dmabufs).
> In my case these 2-3 buffers are allocated at start and not freed
> until the end - these are used as frame buffers which are constantly
> flipped. So, in my case there is no much profit in trying to cache
> which adds unneeded complexity (in my use-case, of course).
> If those 3-4 buffers you allocate are the only buffers used you may
> also try going without caching, but this depends on your use-case

I have a question about your use case. Does the display manager on dom0
import 2~3 dedicated dmabufs from domU during initialization and then use
only those, or does domU keep exporting buffers to dom0 at every swap,
which in the normal situation just happen to be the same ones used before?

This is just my thought and might be limited to our use case, but wouldn't
the latter be more natural and flexible, where the client app in domU
renders to its own object and then exports it to the compositor running on
dom0 at every swap, while hyper_dmabuf/xen-zcopy handles a way
to do this efficiently without any duplication of sharing? With this,
userspace doesn't even have to know about initial preparation or the
protocol it needs to follow to share buffers between the two domains
(e.g. preallocation of sharable objects)

> 
> >>2. Then hyper-dmabuf uses Xen gntdev driver for Xen specific dma-buf
> >>alloc/free/wait
> >>
> >>3. hyper-dmabuf uses its own protocol between VMs to communicate buffer
> >>creation/deletion and whatever else is needed (fences?).
> >>
> >>To Xen community: please think of dma-buf here as of a buffer representation
> >>mechanism,
> >>e.g. at the end of the day it's just a set of pages.
> >>
> >>Thank you,
> >>Oleksandr
> >>>>-Daniel
> >>>>
> >>>>>Regards,
> >>>>>DW
> >>>>>On Mon, Apr 16, 2018 at 05:33:46PM +0300, Oleksandr Andrushchenko wrote:
> >>>>>>Hello, all!
> >>>>>>
> >>>>>>After discussing xen-zcopy and hyper-dmabuf [1] approaches
> >>>>>>
> >>>>>>it seems that xen-zcopy can be made not depend on DRM core any more
> >>>>>>
> >>>>>>and be dma-buf centric (which it in fact is).
> >>>>>>
> >>>>>>The DRM code was mostly there for dma-buf's FD import/export
> >>>>>>
> >>>>>>with DRM PRIME UAPI and with DRM use-cases in mind, but it comes out that if
> >>>>>>
> >>>>>>the proposed 2 IOCTLs (DRM_XEN_ZCOPY_DUMB_FROM_REFS and
> >>>>>>DRM_XEN_ZCOPY_DUMB_TO_REFS)
> >>>>>>
> >>>>>>are extended to also provide a file descriptor of the corresponding dma-buf,
> >>>>>>then
> >>>>>>
> >>>>>>PRIME stuff in the driver is not needed anymore.
> >>>>>>
> >>>>>>That being said, xen-zcopy can safely be detached from DRM and moved from
> >>>>>>
> >>>>>>drivers/gpu/drm/xen into drivers/xen/dma-buf-backend(?).
> >>>>>>
> >>>>>>This driver then becomes a universal way to turn any shared buffer between
> >>>>>>Dom0/DomD
> >>>>>>
> >>>>>>and DomU(s) into a dma-buf, e.g. one can create a dma-buf from any grant
> >>>>>>references
> >>>>>>
> >>>>>>or represent a dma-buf as grant-references for export.
> >>>>>>
> >>>>>>This way the driver can be used not only for DRM use-cases, but also for
> >>>>>>other
> >>>>>>
> >>>>>>use-cases which may require zero copying between domains.
> >>>>>>
> >>>>>>For example, the use-cases we are about to work in the nearest future will
> >>>>>>use
> >>>>>>
> >>>>>>V4L, e.g. we plan to support cameras, codecs etc. and all these will benefit
> >>>>>>
> >>>>>>from zero copying much. Potentially, even block/net devices may benefit,
> >>>>>>but this needs some evaluation.
> >>>>>>
> >>>>>>
> >>>>>>I would love to hear comments for authors of the hyper-dmabuf
> >>>>>>
> >>>>>>and Xen community, as well as DRI-Devel and other interested parties.
> >>>>>>
> >>>>>>
> >>>>>>Thank you,
> >>>>>>
> >>>>>>Oleksandr
> >>>>>>
> >>>>>>
> >>>>>>On 03/29/2018 04:19 PM, Oleksandr Andrushchenko wrote:
* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
@ 2018-04-19 17:55                 ` Dongwon Kim
  0 siblings, 0 replies; 131+ messages in thread
From: Dongwon Kim @ 2018-04-19 17:55 UTC (permalink / raw)
  To: Oleksandr Andrushchenko
  Cc: jgross, Artem Mygaiev, konrad.wilk, airlied,
	Oleksandr_Andrushchenko, linux-kernel, dri-devel, Potrola,
	MateuszX, xen-devel, daniel.vetter, boris.ostrovsky

On Thu, Apr 19, 2018 at 11:14:02AM +0300, Oleksandr Andrushchenko wrote:
> On 04/18/2018 08:01 PM, Dongwon Kim wrote:
> >On Wed, Apr 18, 2018 at 09:38:39AM +0300, Oleksandr Andrushchenko wrote:
> >>On 04/17/2018 11:57 PM, Dongwon Kim wrote:
> >>>On Tue, Apr 17, 2018 at 09:59:28AM +0200, Daniel Vetter wrote:
> >>>>On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
> >>>>>Yeah, I definitely agree on the idea of expanding the use case to the
> >>>>>general domain where dmabuf sharing is used. However, what you are
> >>>>>targetting with proposed changes is identical to the core design of
> >>>>>hyper_dmabuf.
> >>>>>
> >>>>>On top of this basic functionalities, hyper_dmabuf has driver level
> >>>>>inter-domain communication, that is needed for dma-buf remote tracking
> >>>>>(no fence forwarding though), event triggering and event handling, extra
> >>>>>meta data exchange and hyper_dmabuf_id that represents grefs
> >>>>>(grefs are shared implicitly on driver level)
> >>>>This really isn't a positive design aspect of hyperdmabuf imo. The core
> >>>>code in xen-zcopy (ignoring the ioctl side, which will be cleaned up) is
> >>>>very simple & clean.
> >>>>
> >>>>If there's a clear need later on we can extend that. But for now xen-zcopy
> >>>>seems to cover the basic use-case needs, so gets the job done.
> >>>>
> >>>>>Also it is designed with frontend (common core framework) + backend
> >>>>>(hyper visor specific comm and memory sharing) structure for portability.
> >>>>>We just can't limit this feature to Xen because we want to use the same
> >>>>>uapis not only for Xen but also other applicable hypervisor, like ACORN.
> >>>>See the discussion around udmabuf and the needs for kvm. I think trying to
> >>>>make an ioctl/uapi that works for multiple hypervisors is misguided - it
> >>>>likely won't work.
> >>>>
> >>>>On top of that the 2nd hypervisor you're aiming to support is ACRN. That's
> >>>>not even upstream yet, nor have I seen any patches proposing to land linux
> >>>>support for ACRN. Since it's not upstream, it doesn't really matter for
> >>>>upstream consideration. I'm doubting that ACRN will use the same grant
> >>>>references as xen, so the same uapi won't work on ACRN as on Xen anyway.
> >>>Yeah, ACRN doesn't have grant-table. Only Xen supports it. But that is why
> >>>hyper_dmabuf has been architectured with the concept of backend.
> >>>If you look at the structure of backend, you will find that
> >>>backend is just a set of standard function calls as shown here:
> >>>
> >>>struct hyper_dmabuf_bknd_ops {
> >>>         /* backend initialization routine (optional) */
> >>>         int (*init)(void);
> >>>
> >>>         /* backend cleanup routine (optional) */
> >>>         int (*cleanup)(void);
> >>>
> >>>         /* retreiving id of current virtual machine */
> >>>         int (*get_vm_id)(void);
> >>>
> >>>         /* get pages shared via hypervisor-specific method */
> >>>         int (*share_pages)(struct page **pages, int vm_id,
> >>>                            int nents, void **refs_info);
> >>>
> >>>         /* make shared pages unshared via hypervisor specific method */
> >>>         int (*unshare_pages)(void **refs_info, int nents);
> >>>
> >>>         /* map remotely shared pages on importer's side via
> >>>          * hypervisor-specific method
> >>>          */
> >>>         struct page ** (*map_shared_pages)(unsigned long ref, int vm_id,
> >>>                                            int nents, void **refs_info);
> >>>
> >>>         /* unmap and free shared pages on importer's side via
> >>>          * hypervisor-specific method
> >>>          */
> >>>         int (*unmap_shared_pages)(void **refs_info, int nents);
> >>>
> >>>         /* initialize communication environment */
> >>>         int (*init_comm_env)(void);
> >>>
> >>>         void (*destroy_comm)(void);
> >>>
> >>>         /* upstream ch setup (receiving and responding) */
> >>>         int (*init_rx_ch)(int vm_id);
> >>>
> >>>         /* downstream ch setup (transmitting and parsing responses) */
> >>>         int (*init_tx_ch)(int vm_id);
> >>>
> >>>         int (*send_req)(int vm_id, struct hyper_dmabuf_req *req, int wait);
> >>>};
> >>>
> >>>All of these can be mapped with any hypervisor specific implementation.
> >>>We designed backend implementation for Xen using grant-table, Xen event
> >>>and ring buffer communication. For ACRN, we have another backend using Virt-IO
> >>>for both memory sharing and communication.
> >>>
> >>>We tried to define this structure of backend to make it general enough (or
> >>>it can be even modified or extended to support more cases.) so that it can
> >>>fit to other hypervisor cases. Only requirements/expectation on the hypervisor
> >>>are page-level memory sharing and inter-domain communication, which I think
> >>>are standard features of modern hypervisor.
> >>>
> >>>And please review common UAPIs that hyper_dmabuf and xen-zcopy supports. They
> >>>are very general. One is getting FD (dmabuf) and get those shared. The other
> >>>is generating dmabuf from global handle (secure handle hiding gref behind it).
> >>>On top of this, hyper_dmabuf has "unshare" and "query" which are also useful
> >>>for any cases.
> >>>
> >>>So I don't know why we wouldn't want to try to make these standard in most of
> >>>hypervisor cases instead of limiting it to certain hypervisor like Xen.
> >>>Frontend-backend structre is optimal for this I think.
> >>>
> >>>>>So I am wondering we can start with this hyper_dmabuf then modify it for
> >>>>>your use-case if needed and polish and fix any glitches if we want to
> >>>>>to use this for all general dma-buf usecases.
> >>>>Imo xen-zcopy is a much more reasonable starting point for upstream, which
> >>>>can then be extended (if really proven to be necessary).
> >>>>
> >>>>>Also, I still have one unresolved question regarding the export/import flow
> >>>>>in both of hyper_dmabuf and xen-zcopy.
> >>>>>
> >>>>>@danvet: Would this flow (guest1->import existing dmabuf->share underlying
> >>>>>pages->guest2->map shared pages->create/export dmabuf) be acceptable now?
> >>>>I think if you just look at the pages, and make sure you handle the
> >>>>sg_page == NULL case it's ok-ish. It's not great, but mostly it should
> >>>>work. The real trouble with hyperdmabuf was the forwarding of all these
> >>>>calls, instead of just passing around a list of grant references.
> >>>I talked to danvet about this litte bit.
> >>>
> >>>I think there was some misunderstanding on this "forwarding". Exporting
> >>>and importing flow in hyper_dmabuf are basically same as xen-zcopy's. I think
> >>>what made confusion was that importing domain notifies exporting domain when
> >>>there are dmabuf operations (like attach, mapping, detach and release) so that
> >>>exporting domain can track the usage of dmabuf on the importing domain.
> >>>
> >>>I designed this for some basic tracking. We may not need to notify for every
> >>>different activity but if none of them is there, exporting domain can't
> >>>determine if it is ok to unshare the buffer or the originator (like i915)
> >>>can free the object even if it's being accessed in importing domain.
> >>>
> >>>Anyway I really hope we can have enough discussion and resolve all concerns
> >>>before nailing it down.
> >>Let me explain how this works in case of para-virtual display
> >>use-case with xen-zcopy.
> >>
> >>1. There are 4 components in the system:
> >>   - displif protocol [1]
> >>   - xen-front - para-virtual DRM driver running in DomU (Guest) VM
> >>   - backend - user-space application running in Dom0
> >>   - xen-zcopy - DRM (as of now) helper driver running in Dom0
> >>
> >>2. All the communication between domains happens between xen-front and the
> >>backend, so it is possible to implement para-virtual display use-case
> >>without xen-zcopy at all (this is why it is a helper driver), but in this
> >>case
> >>memory copying occurs (this is out of scope for this discussion).
> >>
> >>3. To better understand security issues let's see what use-cases we have:
> >>
> >>3.1 xen-front exports its dma-buf (dumb) to the backend
> >>
> >>In this case there are no security issues at all as Dom0 (backend side)
> >>will use DomU's pages (xen-front side) and Dom0 is a trusted domain, so
> >>we assume it won't hurt DomU. Even if DomU dies nothing bad happens to Dom0.
> >>If DomU misbehaves it can only write to its own pages shared with Dom0, but
> >>still
> >>cannot go beyond that, e.g. it can't access Dom0's memory.
> >>
> >>3.2 Backend exports dma-buf to xen-front
> >>
> >>In this case Dom0 pages are shared with DomU. As before, DomU can only write
> >>to these pages, not any other page from Dom0, so it can be still considered
> >>safe.
> >>But, the following must be considered (highlighted in xen-front's Kernel
> >>documentation):
> >>  - If guest domain dies then pages/grants received from the backend cannot
> >>    be claimed back - think of it as memory lost to Dom0 (won't be used for
> >>any
> >>    other guest)
> >>  - Misbehaving guest may send too many requests to the backend exhausting
> >>    its grant references and memory (consider this from security POV). As the
> >>    backend runs in the trusted domain we also assume that it is trusted as
> >>well,
> >>    e.g. must take measures to prevent DDoS attacks.
> >>
> >There is another security issue that this driver itself can cause. Using the
> >grant-reference as is is not very safe because it's easy to guess (counting
> >number probably) and any attackers running on the same importing domain can
> >use these references to map shared pages and access the data. This is why we
> >implemented "hyper_dmabuf_id" that contains 96 bit random number to make it
> >almost impossible to guess.
> Yes, there is something to think about in general, not related
> to dma-buf/zcopy. This is a question to Xen community what they
> see as the right approach here.

IMO, this secure global handle should be taken into consideration: grefs are
just plain references generated by in-kernel functions, and how securely they
are delivered and used is up to whatever driver exposes them via uapi. A
proper way to protect them, and to prevent any "guessed" references from
being used, may be to exchange and keep them at the kernel level.
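
To make the idea concrete, here is a rough sketch of such a handle. This is a
simplified illustration only - the struct layout and function names are
assumptions and do not match the actual hyper_dmabuf code - the point is just
that the handle carries a random key on top of a counter-style id:

#include <linux/random.h>
#include <linux/types.h>

/* Illustrative only: not the real hyper_dmabuf uapi. */
struct hyper_dmabuf_id {
        int id;                 /* exporting VM id + export counter */
        u32 rng_key[3];         /* 96-bit random key */
};

static struct hyper_dmabuf_id hyper_dmabuf_id_gen(int vm_id, int count)
{
        struct hyper_dmabuf_id hid;

        /* Easy-to-guess part: who exported it and in which order. */
        hid.id = (vm_id << 24) | (count & 0xFFFFFF);
        /* Hard-to-guess part: 96 bits of randomness. */
        get_random_bytes(hid.rng_key, sizeof(hid.rng_key));

        return hid;
}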

> >  All grant references for pages are shared in the
> >driver level. This is another reason for having inter-VM comm.
> >
> >>4. xen-front/backend/xen-zcopy synchronization
> >>
> >>4.1. As I already said in 2) all the inter VM communication happens between
> >>xen-front and the backend, xen-zcopy is NOT involved in that.
> >Yeah, understood but this is also my point. Both hyper_dmabuf and xen-zcopy
> >is a driver that expands dmabuf sharing to inter-VM level. Then shouldn't this
> >driver itself provide some way to synchronize between two VMs?
> No, because xen-zcopy is a *helper* driver, not more.
> >  I think the
> >assumption behind this is that Xen PV display interface and backend (running
> >on the userspace) are used together with xen-zcopy
> Backend may use xen-zcopy or may not - it depends if you need
> zero copy or not, e.g. it is not a must for the backend
> >but what if an user space
> >just want to use xen-zcopy separately? Since it exposes ioctls, this is
> >possible unless you add some dependency configuration there.
> It is possible, any backend (user-space application) can use xen-zcopy
> Even more, one can extend it to provide kernel side API
> >
> >>When xen-front wants to destroy a display buffer (dumb/dma-buf) it issues a
> >>XENDISPL_OP_DBUF_DESTROY command (opposite to XENDISPL_OP_DBUF_CREATE).
> >>This call is synchronous, so xen-front expects that backend does free the
> >>buffer pages on return.
> >Does it mean importing domain (dom0 assuming we do domU -> dom0 dmabuf
> >exporting) makes a destory request to the exporting VM?
> No, the requester is always DomU, so "destroy buffer" request
> will always come from DomU
> >  But isn't it
> >the domU to make such decision since it's the owner of buffer.
> See above
> >
> >And what about the other way around? For example, what happens if the
> >originator of buffer (like i915) decides to free the object behind dmabuf?
> For that reason there is ref-counting for dma-buf, e.g.
> if i915 decides to free, then the backend (in my case) still holds
> the buffer, thus not allowing it to disappear. Basically, it is
> the backend which creates the dma-buf from refs and owns it.

OK, I got it. So xen-zcopy stays as an importer, holding one ref on the
dma-buf from i915. I see no problem here then. But my actual concern is
more about what happens between domains (below).

> >Would i915 or exporting side of xen-zcopy know whether dom0 currently
> >uses the dmabuf or not?
> Why do you need to know this (probably I don't understand the use-case)?
> I could be missing something obvious here, but if the ref-count of the
> dma-buf is not zero, it still exists and is in use?
> >
> >And again, I think this tracking should be handled in the driver itself
> >implicitly without any userspace involvement if we want to this dmabuf
> >sharing exist as a generic feature.
> Why not allow dma-buf Linux framework do that for you?

Yes, between hyper_dmabuf/xen-zcopy and i915 (domU), and between the
end consumer and hyper_dmabuf/xen-zcopy (dom0), the standard dma-buf
protocols work. What I am referring to is what happens between domains.
Let's say you want to tear down sharing from domU: how does it know that
this is safe? Does the wait ioctl handle these remote activities? Possibly
this can be done by the backend in your scheme, but again, that means there
is a dependency, which is another reason xen-zcopy may not be used safely
in general dma-buf sharing cases. I think this part should be guaranteed
inside the driver that exports/imports dma-bufs to/from the other domain.
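
For my own clarity, this is how I read the WAIT_FREE path from the flow you
describe below. A sketch only: the struct and field names are assumptions
(the real definitions live in the proposed include/uapi/drm/xen_zcopy_drm.h);
what is real from the description is that the ioctl takes a wait handle plus
a timeout and blocks until dma-buf ->release fires:

#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/xen_zcopy_drm.h>  /* proposed uapi header from this series */

/* Backend side, while handling XENDISPL_OP_DBUF_DESTROY from xen-front. */
static int backend_wait_buffer_freed(int zcopy_fd, uint32_t wait_handle)
{
        struct drm_xen_zcopy_dumb_wait_free req = {
                .wait_handle = wait_handle,     /* assumed field name */
                .wait_to_ms = 3000,             /* assumed field name, example timeout */
        };

        /* All dumb handles/fds of the buffer are closed before this call;
         * the ioctl blocks until dma-buf ->release fires or the timeout
         * expires, and only then does the backend reply to xen-front.
         */
        return ioctl(zcopy_fd, DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE, &req);
}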

> >
> >>4.2. Backend, on XENDISPL_OP_DBUF_DESTROY:
> >>   - closes all dumb handles/fd's of the buffer according to [3]
> >>   - issues DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL to xen-zcopy to make
> >>sure
> >>     the buffer is freed (think of it as it waits for dma-buf->release
> >>callback)
> >>   - replies to xen-front that the buffer can be destroyed.
> >>This way deletion of the buffer happens synchronously on both Dom0 and DomU
> >>sides. In case if DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE returns with time-out
> >>error
> >>(BTW, wait time is a parameter of this IOCTL), Xen will defer grant
> >>reference
> >>removal and will retry later until those are free.
> >>
> >>Hope this helps understand how buffers are synchronously deleted in case
> >>of xen-zcopy with a single protocol command.
> >>
> >>I think the above logic can also be re-used by the hyper-dmabuf driver with
> >>some additional work:
> >>
> >>1. xen-zcopy can be split into 2 parts and extend:
> >>1.1. Xen gntdev driver [4], [5] to allow creating dma-buf from grefs and
> >>vise versa,
> >>implement "wait" ioctl (wait for dma-buf->release): currently these are
> >>DRM_XEN_ZCOPY_DUMB_FROM_REFS, DRM_XEN_ZCOPY_DUMB_TO_REFS and
> >>DRM_XEN_ZCOPY_DUMB_WAIT_FREE
> >>1.2. Xen balloon driver [6] to allow allocating contiguous buffers (not
> >>needed
> >>by current hyper-dmabuf, but is a must for xen-zcopy use-cases)
> >Not sure how to match our use case to xen-zcopy's case but we don't do alloc
> >/free all the time.
> We also don't
> >  Also, dom0 won't make any freeing request to domU since it
> >doesn't own the buffer. It only follows dmabuf protocol as such attach/detach
> >/release,
> Similar here
> >  which are tracked by domU (exporting VM). And for destruction of
> >sharing, we have separate IOCTL for that, which revoke grant references "IF"
> >there is no drivers attached to the dmabuf in dom0. Otherwise, it schedules
> >destruction of sharing until it gets final dmabuf release message from dom0.
> We block instead with 3sec timeout + some other logic
> (out of context now)

Does that mean it is not driven by some kind of release signal from dom0?

> >
> >Also, in our usecase, (although we didn't intend to do so) it ends up using
> >3~4 buffers repeately.
> 2-3 in our use-cases
> >This is because DRM in domU (that renders) doesn't
> >allocate more object for EGL image since there is always free objects used
> >before exist in the list. And we actually don't do full-path exporting
> >(extracting pages -> grant-references -> get those shared) all the time.
> >If the same dmabuf is exported already, we just update private message then
> >notifies dom0 (reason for hash tables for keeping exported and importer
> >dmabufs).
> In my case these 2-3 buffers are allocated at start and not freed
> until the end - these are used as frame buffers which are constantly
> flipped. So, in my case there is not much profit in trying to cache,
> which adds unneeded complexity (in my use-case, of course).
> If those 3-4 buffers you allocate are the only buffers used, you may
> also try going without caching, but this depends on your use-case.

I have a question about your use case. Does the display manager on dom0
import 2~3 dedicated dma-bufs from domU during initialization and then use
only those, or does domU keep exporting buffers to dom0 at every swap, with
those buffers just happening to be the same ones used before in the normal
situation?

This is just my thought and might be limited to our use case, but wouldn't
the latter be more natural and flexible? The client app in domU renders to
its own object and then exports it to the compositor running on dom0 at
every swap, while hyper_dmabuf/xen-zcopy handles this efficiently without
any duplication of sharing. With this, user space doesn't even have to know
about any initial preparation or protocol it needs to follow to share
buffers between two domains (e.g. preallocation of shareable objects).
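
A minimal sketch of what I mean by exporting at every swap without any
duplication of sharing. The names and the fixed-size cache are made up for
illustration (they do not match the actual hyper_dmabuf code, which uses hash
tables); the idea is just a per-dma-buf cache in the sharing driver, so only
the first export pays the full price:

#include <linux/dma-buf.h>
#include <linux/types.h>

#define MAX_EXPORTED    16

static struct {
        struct dma_buf *dmabuf;
        struct hyper_dmabuf_id hid;     /* see the handle sketch earlier */
        bool in_use;
} exported_cache[MAX_EXPORTED];

/* Full export path (extract pages -> grant refs -> notify the importer),
 * implemented elsewhere in the sharing driver and elided here.
 */
extern struct hyper_dmabuf_id do_full_export(struct dma_buf *dmabuf);

/* Called on every swap: re-exporting an already shared dma-buf is cheap. */
static struct hyper_dmabuf_id export_or_reuse(struct dma_buf *dmabuf)
{
        int i, free_slot = -1;

        for (i = 0; i < MAX_EXPORTED; i++) {
                if (exported_cache[i].in_use &&
                    exported_cache[i].dmabuf == dmabuf)
                        return exported_cache[i].hid;
                if (!exported_cache[i].in_use)
                        free_slot = i;
        }

        if (free_slot < 0)
                return do_full_export(dmabuf);  /* cache full: skip caching */

        exported_cache[free_slot].dmabuf = dmabuf;
        exported_cache[free_slot].hid = do_full_export(dmabuf);
        exported_cache[free_slot].in_use = true;
        return exported_cache[free_slot].hid;
}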

> 
> >>2. Then hyper-dmabuf uses Xen gntdev driver for Xen specific dma-buf
> >>alloc/free/wait
> >>
> >>3. hyper-dmabuf uses its own protocol between VMs to communicate buffer
> >>creation/deletion and whatever else is needed (fences?).
> >>
> >>To Xen community: please think of dma-buf here as of a buffer representation
> >>mechanism,
> >>e.g. at the end of the day it's just a set of pages.
> >>
> >>Thank you,
> >>Oleksandr
> >>>>-Daniel
> >>>>
> >>>>>Regards,
> >>>>>DW
> >>>>>On Mon, Apr 16, 2018 at 05:33:46PM +0300, Oleksandr Andrushchenko wrote:
> >>>>>>Hello, all!
> >>>>>>
> >>>>>>After discussing xen-zcopy and hyper-dmabuf [1] approaches
> >>>>>>
> >>>>>>it seems that xen-zcopy can be made not depend on DRM core any more
> >>>>>>
> >>>>>>and be dma-buf centric (which it in fact is).
> >>>>>>
> >>>>>>The DRM code was mostly there for dma-buf's FD import/export
> >>>>>>
> >>>>>>with DRM PRIME UAPI and with DRM use-cases in mind, but it comes out that if
> >>>>>>
> >>>>>>the proposed 2 IOCTLs (DRM_XEN_ZCOPY_DUMB_FROM_REFS and
> >>>>>>DRM_XEN_ZCOPY_DUMB_TO_REFS)
> >>>>>>
> >>>>>>are extended to also provide a file descriptor of the corresponding dma-buf,
> >>>>>>then
> >>>>>>
> >>>>>>PRIME stuff in the driver is not needed anymore.
> >>>>>>
> >>>>>>That being said, xen-zcopy can safely be detached from DRM and moved from
> >>>>>>
> >>>>>>drivers/gpu/drm/xen into drivers/xen/dma-buf-backend(?).
> >>>>>>
> >>>>>>This driver then becomes a universal way to turn any shared buffer between
> >>>>>>Dom0/DomD
> >>>>>>
> >>>>>>and DomU(s) into a dma-buf, e.g. one can create a dma-buf from any grant
> >>>>>>references
> >>>>>>
> >>>>>>or represent a dma-buf as grant-references for export.
> >>>>>>
> >>>>>>This way the driver can be used not only for DRM use-cases, but also for
> >>>>>>other
> >>>>>>
> >>>>>>use-cases which may require zero copying between domains.
> >>>>>>
> >>>>>>For example, the use-cases we are about to work in the nearest future will
> >>>>>>use
> >>>>>>
> >>>>>>V4L, e.g. we plan to support cameras, codecs etc. and all these will benefit
> >>>>>>
> >>>>>>from zero copying much. Potentially, even block/net devices may benefit,
> >>>>>>but this needs some evaluation.
> >>>>>>
> >>>>>>
> >>>>>>I would love to hear comments for authors of the hyper-dmabuf
> >>>>>>
> >>>>>>and Xen community, as well as DRI-Devel and other interested parties.
> >>>>>>
> >>>>>>
> >>>>>>Thank you,
> >>>>>>
> >>>>>>Oleksandr
> >>>>>>
> >>>>>>
> >>>>>>On 03/29/2018 04:19 PM, Oleksandr Andrushchenko wrote:
> >>>>>>>From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> >>>>>>>
> >>>>>>>Hello!
> >>>>>>>
> >>>>>>>When using Xen PV DRM frontend driver then on backend side one will need
> >>>>>>>to do copying of display buffers' contents (filled by the
> >>>>>>>frontend's user-space) into buffers allocated at the backend side.
> >>>>>>>Taking into account the size of display buffers and frames per seconds
> >>>>>>>it may result in unneeded huge data bus occupation and performance loss.
> >>>>>>>
> >>>>>>>This helper driver allows implementing zero-copying use-cases
> >>>>>>>when using Xen para-virtualized frontend display driver by
> >>>>>>>implementing a DRM/KMS helper driver running on backend's side.
> >>>>>>>It utilizes PRIME buffers API to share frontend's buffers with
> >>>>>>>physical device drivers on backend's side:
> >>>>>>>
> >>>>>>>  - a dumb buffer created on backend's side can be shared
> >>>>>>>    with the Xen PV frontend driver, so it directly writes
> >>>>>>>    into backend's domain memory (into the buffer exported from
> >>>>>>>    DRM/KMS driver of a physical display device)
> >>>>>>>  - a dumb buffer allocated by the frontend can be imported
> >>>>>>>    into physical device DRM/KMS driver, thus allowing to
> >>>>>>>    achieve no copying as well
> >>>>>>>
> >>>>>>>For that reason number of IOCTLs are introduced:
> >>>>>>>  -  DRM_XEN_ZCOPY_DUMB_FROM_REFS
> >>>>>>>     This will create a DRM dumb buffer from grant references provided
> >>>>>>>     by the frontend
> >>>>>>>  - DRM_XEN_ZCOPY_DUMB_TO_REFS
> >>>>>>>    This will grant references to a dumb/display buffer's memory provided
> >>>>>>>    by the backend
> >>>>>>>  - DRM_XEN_ZCOPY_DUMB_WAIT_FREE
> >>>>>>>    This will block until the dumb buffer with the wait handle provided
> >>>>>>>    be freed
> >>>>>>>
> >>>>>>>With this helper driver I was able to drop CPU usage from 17% to 3%
> >>>>>>>on Renesas R-Car M3 board.
> >>>>>>>
> >>>>>>>This was tested with Renesas' Wayland-KMS and backend running as DRM master.
> >>>>>>>
> >>>>>>>Thank you,
> >>>>>>>Oleksandr
> >>>>>>>
> >>>>>>>Oleksandr Andrushchenko (1):
> >>>>>>>   drm/xen-zcopy: Add Xen zero-copy helper DRM driver
> >>>>>>>
> >>>>>>>  Documentation/gpu/drivers.rst               |   1 +
> >>>>>>>  Documentation/gpu/xen-zcopy.rst             |  32 +
> >>>>>>>  drivers/gpu/drm/xen/Kconfig                 |  25 +
> >>>>>>>  drivers/gpu/drm/xen/Makefile                |   5 +
> >>>>>>>  drivers/gpu/drm/xen/xen_drm_zcopy.c         | 880 ++++++++++++++++++++++++++++
> >>>>>>>  drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c | 154 +++++
> >>>>>>>  drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h |  38 ++
> >>>>>>>  include/uapi/drm/xen_zcopy_drm.h            | 129 ++++
> >>>>>>>  8 files changed, 1264 insertions(+)
> >>>>>>>  create mode 100644 Documentation/gpu/xen-zcopy.rst
> >>>>>>>  create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy.c
> >>>>>>>  create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c
> >>>>>>>  create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h
> >>>>>>>  create mode 100644 include/uapi/drm/xen_zcopy_drm.h
> >>>>>>>
> >>>>>>[1]
> >>>>>>https://lists.xenproject.org/archives/html/xen-devel/2018-02/msg01202.html
> >>>>>_______________________________________________
> >>>>>dri-devel mailing list
> >>>>>dri-devel@lists.freedesktop.org
> >>>>>https://lists.freedesktop.org/mailman/listinfo/dri-devel
> >>>>-- 
> >>>>Daniel Vetter
> >>>>Software Engineer, Intel Corporation
> >>>>http://blog.ffwll.ch
> >>[1] https://elixir.bootlin.com/linux/v4.17-rc1/source/include/xen/interface/io/displif.h
> >>[2] https://elixir.bootlin.com/linux/v4.17-rc1/source/include/xen/interface/io/displif.h#L539
> >>[3] https://elixir.bootlin.com/linux/v4.17-rc1/source/drivers/gpu/drm/drm_prime.c#L39
> >>[4] https://elixir.bootlin.com/linux/v4.17-rc1/source/drivers/xen/gntdev.c
> >>[5]
> >>https://elixir.bootlin.com/linux/v4.17-rc1/source/include/uapi/xen/gntdev.h
> >>[6] https://elixir.bootlin.com/linux/v4.17-rc1/source/drivers/xen/balloon.c
> 
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-18 10:10                 ` Roger Pau Monné
                                   ` (2 preceding siblings ...)
  (?)
@ 2018-04-20  7:19                 ` Daniel Vetter
  2018-04-20 11:25                   ` Oleksandr Andrushchenko
  2018-04-20 11:25                     ` Oleksandr Andrushchenko
  -1 siblings, 2 replies; 131+ messages in thread
From: Daniel Vetter @ 2018-04-20  7:19 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Oleksandr Andrushchenko, jgross, Artem Mygaiev, Dongwon Kim,
	konrad.wilk, airlied, Oleksandr_Andrushchenko, linux-kernel,
	dri-devel, Potrola, MateuszX, xen-devel, daniel.vetter,
	boris.ostrovsky

On Wed, Apr 18, 2018 at 11:10:58AM +0100, Roger Pau Monné wrote:
> On Wed, Apr 18, 2018 at 11:01:12AM +0300, Oleksandr Andrushchenko wrote:
> > On 04/18/2018 10:35 AM, Roger Pau Monné wrote:
> > > On Wed, Apr 18, 2018 at 09:38:39AM +0300, Oleksandr Andrushchenko wrote:
> > > > On 04/17/2018 11:57 PM, Dongwon Kim wrote:
> > > > > On Tue, Apr 17, 2018 at 09:59:28AM +0200, Daniel Vetter wrote:
> > > > > > On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
> > > > 3.2 Backend exports dma-buf to xen-front
> > > > 
> > > > In this case Dom0 pages are shared with DomU. As before, DomU can only write
> > > > to these pages, not any other page from Dom0, so it can be still considered
> > > > safe.
> > > > But, the following must be considered (highlighted in xen-front's Kernel
> > > > documentation):
> > > >   - If guest domain dies then pages/grants received from the backend cannot
> > > >     be claimed back - think of it as memory lost to Dom0 (won't be used for
> > > > any
> > > >     other guest)
> > > >   - Misbehaving guest may send too many requests to the backend exhausting
> > > >     its grant references and memory (consider this from security POV). As the
> > > >     backend runs in the trusted domain we also assume that it is trusted as
> > > > well,
> > > >     e.g. must take measures to prevent DDoS attacks.
> > > I cannot parse the above sentence:
> > > 
> > > "As the backend runs in the trusted domain we also assume that it is
> > > trusted as well, e.g. must take measures to prevent DDoS attacks."
> > > 
> > > What's the relation between being trusted and protecting from DoS
> > > attacks?
> > I mean that we trust the backend that it can prevent Dom0
> > from crashing in case DomU's frontend misbehaves, e.g.
> > if the frontend sends too many memory requests etc.
> > > In any case, all? PV protocols are implemented with the frontend
> > > sharing pages to the backend, and I think there's a reason why this
> > > model is used, and it should continue to be used.
> > This is the first use-case above. But there are real-world
> > use-cases (embedded in my case) when physically contiguous memory
> > needs to be shared, one of the possible ways to achieve this is
> > to share contiguous memory from Dom0 to DomU (the second use-case above)
> > > Having to add logic in the backend to prevent such attacks means
> > > that:
> > > 
> > >   - We need more code in the backend, which increases complexity and
> > >     chances of bugs.
> > >   - Such code/logic could be wrong, thus allowing DoS.
> > You can live without this code at all, but this is then up to
> > backend which may make Dom0 down because of DomU's frontend doing evil
> > things
> 
> IMO we should design protocols that do not allow such attacks instead
> of having to defend against them.
> 
> > > > 4. xen-front/backend/xen-zcopy synchronization
> > > > 
> > > > 4.1. As I already said in 2) all the inter VM communication happens between
> > > > xen-front and the backend, xen-zcopy is NOT involved in that.
> > > > When xen-front wants to destroy a display buffer (dumb/dma-buf) it issues a
> > > > XENDISPL_OP_DBUF_DESTROY command (opposite to XENDISPL_OP_DBUF_CREATE).
> > > > This call is synchronous, so xen-front expects that backend does free the
> > > > buffer pages on return.
> > > > 
> > > > 4.2. Backend, on XENDISPL_OP_DBUF_DESTROY:
> > > >    - closes all dumb handles/fd's of the buffer according to [3]
> > > >    - issues DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL to xen-zcopy to make
> > > > sure
> > > >      the buffer is freed (think of it as it waits for dma-buf->release
> > > > callback)
> > > So this zcopy thing keeps some kind of track of the memory usage? Why
> > > can't the user-space backend keep track of the buffer usage?
> > Because there is no dma-buf UAPI which allows to track the buffer life cycle
> > (e.g. wait until dma-buf's .release callback is called)
> > > >    - replies to xen-front that the buffer can be destroyed.
> > > > This way deletion of the buffer happens synchronously on both Dom0 and DomU
> > > > sides. In case if DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE returns with time-out
> > > > error
> > > > (BTW, wait time is a parameter of this IOCTL), Xen will defer grant
> > > > reference
> > > > removal and will retry later until those are free.
> > > > 
> > > > Hope this helps understand how buffers are synchronously deleted in case
> > > > of xen-zcopy with a single protocol command.
> > > > 
> > > > I think the above logic can also be re-used by the hyper-dmabuf driver with
> > > > some additional work:
> > > > 
> > > > 1. xen-zcopy can be split into 2 parts and extend:
> > > > 1.1. Xen gntdev driver [4], [5] to allow creating dma-buf from grefs and
> > > > vise versa,
> > > I don't know much about the dma-buf implementation in Linux, but
> > > gntdev is a user-space device, and AFAICT user-space applications
> > > don't have any notion of dma buffers. How are such buffers useful for
> > > user-space? Why can't this just be called memory?
> > A dma-buf is seen by user-space as a file descriptor and you can
> > pass it to different drivers then. For example, you can share a buffer
> > used by a display driver for scanout with a GPU, to compose a picture
> > into it:
> > 1. User-space (US) allocates a display buffer from display driver
> > 2. US asks display driver to export the dma-buf which backs up that buffer,
> > US gets buffer's fd: dma_buf_fd
> > 3. US asks GPU driver to import a buffer and provides it with dma_buf_fd
> > 4. GPU renders contents into display buffer (dma_buf_fd)
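
(In user-space terms the four steps above are just the standard PRIME
export/import pair; a minimal libdrm sketch, error handling omitted and both
device nodes assumed to be already open:)

#include <stdint.h>
#include <xf86drm.h>

/* display_fd/gpu_fd: already opened DRM device nodes; display_handle: the
 * GEM/dumb handle of the scanout buffer allocated from the display driver.
 */
static int share_scanout_with_gpu(int display_fd, int gpu_fd,
                                  uint32_t display_handle)
{
        int dma_buf_fd;
        uint32_t gpu_handle;

        /* Step 2: export the display buffer as a dma-buf fd. */
        if (drmPrimeHandleToFD(display_fd, display_handle, DRM_CLOEXEC,
                               &dma_buf_fd))
                return -1;

        /* Step 3: import the same buffer into the GPU driver. */
        if (drmPrimeFDToHandle(gpu_fd, dma_buf_fd, &gpu_handle))
                return -1;

        /* Step 4: the GPU can now render into gpu_handle, which is backed
         * by exactly the same pages as the display buffer.
         */
        return 0;
}
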
> 
> After speaking with Oleksandr on IRC, I think the main usage of the
> gntdev extension is to:
> 
> 1. Create a dma-buf from a set of grant references.
> 2. Share dma-buf and get a list of grant references.
> 
> I think this set of operations could be broken into:
> 
> 1.1 Map grant references into user-space using the gntdev.
> 1.2 Create a dma-buf out of a set of user-space virtual addresses.
> 
> 2.1 Map a dma-buf into user-space.
> 2.2 Get grefs out of the user-space addresses where the dma-buf is
>     mapped.
> 
> So it seems like what's actually missing is a way to:
> 
>  - Create a dma-buf from a list of user-space virtual addresses.
>  - Allow to map a dma-buf into user-space, so it can then be used with
>    the gntdev.
> 
> I think this is generic enough that it could be implemented by a
> device not tied to Xen. AFAICT the hyper_dma guys also wanted
> something similar to this.
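
As a point of reference, 1.1 is already covered by the existing gntdev
UAPI; a rough user-space sketch (error handling trimmed) is below. It is
1.2, turning those mapped VAs into a dma-buf, that has no interface today:

#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <xen/gntdev.h>   /* IOCTL_GNTDEV_MAP_GRANT_REF */

static void *map_grefs(int gntdev_fd, uint32_t domid,
                       const uint32_t *refs, uint32_t count)
{
        size_t sz = sizeof(struct ioctl_gntdev_map_grant_ref) +
                    (count - 1) * sizeof(struct ioctl_gntdev_grant_ref);
        struct ioctl_gntdev_map_grant_ref *map = calloc(1, sz);
        long page_size = sysconf(_SC_PAGESIZE);
        void *addr;
        uint32_t i;

        if (!map)
                return NULL;

        map->count = count;
        for (i = 0; i < count; i++) {
                map->refs[i].domid = domid;
                map->refs[i].ref   = refs[i];
        }

        /* Ask gntdev to set up the mapping; map->index is the mmap offset. */
        if (ioctl(gntdev_fd, IOCTL_GNTDEV_MAP_GRANT_REF, map)) {
                free(map);
                return NULL;
        }

        addr = mmap(NULL, (size_t)count * page_size, PROT_READ | PROT_WRITE,
                    MAP_SHARED, gntdev_fd, map->index);
        free(map);
        return addr == MAP_FAILED ? NULL : addr;
}
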

You can't just wrap random userspace memory into a dma-buf. We've just had
this discussion with kvm/qemu folks, who proposed just that, and after a
bit of discussion they'll now try to have a driver which just wraps a
memfd into a dma-buf.

Yes i915 and amdgpu and a few other drivers do have facilities to wrap
userspace memory into a gpu buffer object. But we don't allow those to be
exported to other drivers, because the core mm magic needed to make this
all work is way too tricky, even within the context of just 1 driver. And
dma-buf does not have the required callbacks and semantics to make it
work.
-Daniel

> 
> > Finally, this is indeed some memory, but a bit more [1]
> > > 
> > > Also, (with my FreeBSD maintainer hat) how is this going to translate
> > > to other OSes? So far the operations performed by the gntdev device
> > > are mostly OS-agnostic because this just map/unmap memory, and in fact
> > > they are implemented by Linux and FreeBSD.
> > At the moment I can only see a Linux implementation and it seems
> > to be perfectly ok, as we do not change Xen's APIs etc. and only
> > use the existing ones (remember, we only extend the gntdev/balloon
> > drivers; all the changes are in the Linux kernel)
> > As the second note I can also think that we do not extend gntdev/balloon
> > drivers and have re-worked xen-zcopy driver be a separate entity,
> > say drivers/xen/dma-buf
> > > > implement "wait" ioctl (wait for dma-buf->release): currently these are
> > > > DRM_XEN_ZCOPY_DUMB_FROM_REFS, DRM_XEN_ZCOPY_DUMB_TO_REFS and
> > > > DRM_XEN_ZCOPY_DUMB_WAIT_FREE
> > > > 1.2. Xen balloon driver [6] to allow allocating contiguous buffers (not
> > > > needed
> > > > by current hyper-dmabuf, but is a must for xen-zcopy use-cases)
> > > I think this needs clarifying. In which memory space do you need those
> > > regions to be contiguous?
> > Use-case: Dom0 has a HW driver which only works with contig memory
> > and I want DomU to be able to directly write into that memory, thus
> > implementing zero copying
> > > 
> > > Do they need to be contiguous in host physical memory, or guest
> > > physical memory?
> > Host
> > > 
> > > If it's in guest memory space, isn't there any generic interface that
> > > you can use?
> > > 
> > > If it's in host physical memory space, why do you need this buffer to
> > > be contiguous in host physical memory space? The IOMMU should hide all
> > > this.
> > There are drivers/HW which can only work with contig memory and
> > if it is backed by an IOMMU then still it has to be contig in IPA
> > space (real device doesn't know that it is actually IPA contig, not PA)
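
(For context, this is the usual DMA API contract a Dom0 driver relies on:
the allocation in the snippet below is contiguous from the device's point
of view, i.e. in bus/IPA address space when an IOMMU is present. This only
illustrates the hardware requirement, not how the shared buffer itself
would be allocated.)

/* Kernel-side illustration of the requirement above: dma_alloc_coherent()
 * hands the device a buffer that is contiguous in its DMA (bus/IPA) address
 * space, whether or not an IOMMU sits in front of the device. */
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/gfp.h>

static void *alloc_contig_scanout(struct device *dev, size_t size,
                                  dma_addr_t *dma_addr)
{
        return dma_alloc_coherent(dev, size, dma_addr, GFP_KERNEL);
}
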
> 
> What's IPA contig?
> 
> Thanks, Roger.
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-18 10:55                     ` [Xen-devel] " Roger Pau Monné
@ 2018-04-20  7:22                         ` Daniel Vetter
  2018-04-18 12:42                       ` Oleksandr Andrushchenko
                                           ` (2 subsequent siblings)
  3 siblings, 0 replies; 131+ messages in thread
From: Daniel Vetter @ 2018-04-20  7:22 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Oleksandr Andrushchenko, jgross, Artem Mygaiev, Dongwon Kim,
	Oleksandr_Andrushchenko, airlied, linux-kernel, dri-devel,
	Paul Durrant, Potrola, MateuszX, daniel.vetter, xen-devel,
	boris.ostrovsky

On Wed, Apr 18, 2018 at 11:55:26AM +0100, Roger Pau Monné wrote:
> On Wed, Apr 18, 2018 at 01:39:35PM +0300, Oleksandr Andrushchenko wrote:
> > On 04/18/2018 01:18 PM, Paul Durrant wrote:
> > > > -----Original Message-----
> > > > From: Xen-devel [mailto:xen-devel-bounces@lists.xenproject.org] On Behalf
> > > > Of Roger Pau Monné
> > > > Sent: 18 April 2018 11:11
> > > > To: Oleksandr Andrushchenko <andr2000@gmail.com>
> > > > Cc: jgross@suse.com; Artem Mygaiev <Artem_Mygaiev@epam.com>;
> > > > Dongwon Kim <dongwon.kim@intel.com>; airlied@linux.ie;
> > > > Oleksandr_Andrushchenko@epam.com; linux-kernel@vger.kernel.org; dri-
> > > > devel@lists.freedesktop.org; Potrola, MateuszX
> > > > <mateuszx.potrola@intel.com>; xen-devel@lists.xenproject.org;
> > > > daniel.vetter@intel.com; boris.ostrovsky@oracle.com; Matt Roper
> > > > <matthew.d.roper@intel.com>
> > > > Subject: Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy
> > > > helper DRM driver
> > > > 
> > > > On Wed, Apr 18, 2018 at 11:01:12AM +0300, Oleksandr Andrushchenko
> > > > wrote:
> > > > > On 04/18/2018 10:35 AM, Roger Pau Monné wrote:
> > > > After speaking with Oleksandr on IRC, I think the main usage of the
> > > > gntdev extension is to:
> > > > 
> > > > 1. Create a dma-buf from a set of grant references.
> > > > 2. Share dma-buf and get a list of grant references.
> > > > 
> > > > I think this set of operations could be broken into:
> > > > 
> > > > 1.1 Map grant references into user-space using the gntdev.
> > > > 1.2 Create a dma-buf out of a set of user-space virtual addresses.
> > > > 
> > > > 2.1 Map a dma-buf into user-space.
> > > > 2.2 Get grefs out of the user-space addresses where the dma-buf is
> > > >      mapped.
> > > > 
> > > > So it seems like what's actually missing is a way to:
> > > > 
> > > >   - Create a dma-buf from a list of user-space virtual addresses.
> > > >   - Allow to map a dma-buf into user-space, so it can then be used with
> > > >     the gntdev.
> > > > 
> > > > I think this is generic enough that it could be implemented by a
> > > > device not tied to Xen. AFAICT the hyper_dma guys also wanted
> > > > something similar to this.
> > Ok, so just to summarize, xen-zcopy/hyper-dmabuf as they are now,
> > are no go from your POV?
> 
> My opinion is that there seems to be a more generic way to implement
> this, and thus I would prefer that one.
> 
> > Instead, we have to make all that fancy stuff
> > with VAs <-> device-X and have that device-X driver live out of drivers/xen
> > as it is not a Xen specific driver?
> 
> That would be my preference if feasible, simply because it can be
> reused by other use-cases that need to create dma-bufs in user-space.
> 
> In any case I just knew about dma-bufs this morning, there might be
> things that I'm missing.

See my other reply: I really don't want a generic userspace memory ->
dma-buf thing; it doesn't work. Having a xen-specific grant references <->
dma-buf conversion thing (and I'm honestly still not sure whether the
dma-buf -> grant references direction is a good idea) seems much more
reasonable to me.

IMO the simplest way forward would be to start out with a grant refs ->
dma-buf ioctl, and see how far that gets us.
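
To make that concrete, such a starting point could be a single new gntdev
ioctl along the lines of the sketch below; all names, numbers and the
layout are purely illustrative, nothing here is an existing UAPI:

/* Hypothetical UAPI sketch only: a "grant refs -> dma-buf" ioctl for gntdev.
 * None of these names or numbers exist in the kernel; they are made up to
 * show the shape of the interface. */
#include <linux/ioctl.h>
#include <linux/types.h>

struct ioctl_gntdev_dmabuf_from_refs {
        __u32 count;       /* IN: number of grant references */
        __u32 domid;       /* IN: domain the grants belong to */
        __s32 dmabuf_fd;   /* OUT: fd of the newly created dma-buf */
        __u32 reserved;
        __u32 refs[1];     /* IN: 'count' grant references follow */
};

#define IOCTL_GNTDEV_DMABUF_FROM_REFS \
        _IOC(_IOC_NONE, 'G', 12, sizeof(struct ioctl_gntdev_dmabuf_from_refs))

The reverse direction (dma-buf -> grant refs) and a wait/release primitive
could then be added on top if they turn out to be needed.
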
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-20  7:19                 ` [Xen-devel] " Daniel Vetter
@ 2018-04-20 11:25                     ` Oleksandr Andrushchenko
  2018-04-20 11:25                     ` Oleksandr Andrushchenko
  1 sibling, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-20 11:25 UTC (permalink / raw)
  To: Roger Pau Monné,
	jgross, Artem Mygaiev, Dongwon Kim, konrad.wilk, airlied,
	Oleksandr_Andrushchenko, linux-kernel, dri-devel, Potrola,
	MateuszX, xen-devel, daniel.vetter, boris.ostrovsky

On 04/20/2018 10:19 AM, Daniel Vetter wrote:
> On Wed, Apr 18, 2018 at 11:10:58AM +0100, Roger Pau Monné wrote:
>> On Wed, Apr 18, 2018 at 11:01:12AM +0300, Oleksandr Andrushchenko wrote:
>>> On 04/18/2018 10:35 AM, Roger Pau Monné wrote:
>>>> On Wed, Apr 18, 2018 at 09:38:39AM +0300, Oleksandr Andrushchenko wrote:
>>>>> On 04/17/2018 11:57 PM, Dongwon Kim wrote:
>>>>>> On Tue, Apr 17, 2018 at 09:59:28AM +0200, Daniel Vetter wrote:
>>>>>>> On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
>>>>> 3.2 Backend exports dma-buf to xen-front
>>>>>
>>>>> In this case Dom0 pages are shared with DomU. As before, DomU can only write
>>>>> to these pages, not any other page from Dom0, so it can be still considered
>>>>> safe.
>>>>> But, the following must be considered (highlighted in xen-front's Kernel
>>>>> documentation):
>>>>>    - If guest domain dies then pages/grants received from the backend cannot
>>>>>      be claimed back - think of it as memory lost to Dom0 (won't be used for
>>>>> any
>>>>>      other guest)
>>>>>    - Misbehaving guest may send too many requests to the backend exhausting
>>>>>      its grant references and memory (consider this from security POV). As the
>>>>>      backend runs in the trusted domain we also assume that it is trusted as
>>>>> well,
>>>>>      e.g. must take measures to prevent DDoS attacks.
>>>> I cannot parse the above sentence:
>>>>
>>>> "As the backend runs in the trusted domain we also assume that it is
>>>> trusted as well, e.g. must take measures to prevent DDoS attacks."
>>>>
>>>> What's the relation between being trusted and protecting from DoS
>>>> attacks?
>>> I mean that we trust the backend that it can prevent Dom0
>>> from crashing in case DomU's frontend misbehaves, e.g.
>>> if the frontend sends too many memory requests etc.
>>>> In any case, all? PV protocols are implemented with the frontend
>>>> sharing pages to the backend, and I think there's a reason why this
>>>> model is used, and it should continue to be used.
>>> This is the first use-case above. But there are real-world
>>> use-cases (embedded in my case) when physically contiguous memory
>>> needs to be shared, one of the possible ways to achieve this is
>>> to share contiguous memory from Dom0 to DomU (the second use-case above)
>>>> Having to add logic in the backend to prevent such attacks means
>>>> that:
>>>>
>>>>    - We need more code in the backend, which increases complexity and
>>>>      chances of bugs.
>>>>    - Such code/logic could be wrong, thus allowing DoS.
>>> You can live without this code at all, but it is then up to the
>>> backend, which may bring Dom0 down because of DomU's frontend doing evil
>>> things
>> IMO we should design protocols that do not allow such attacks instead
>> of having to defend against them.
>>
>>>>> 4. xen-front/backend/xen-zcopy synchronization
>>>>>
>>>>> 4.1. As I already said in 2) all the inter VM communication happens between
>>>>> xen-front and the backend, xen-zcopy is NOT involved in that.
>>>>> When xen-front wants to destroy a display buffer (dumb/dma-buf) it issues a
>>>>> XENDISPL_OP_DBUF_DESTROY command (opposite to XENDISPL_OP_DBUF_CREATE).
>>>>> This call is synchronous, so xen-front expects that backend does free the
>>>>> buffer pages on return.
>>>>>
>>>>> 4.2. Backend, on XENDISPL_OP_DBUF_DESTROY:
>>>>>     - closes all dumb handles/fd's of the buffer according to [3]
>>>>>     - issues DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL to xen-zcopy to make
>>>>> sure
>>>>>       the buffer is freed (think of it as it waits for dma-buf->release
>>>>> callback)
>>>> So this zcopy thing keeps some kind of track of the memory usage? Why
>>>> can't the user-space backend keep track of the buffer usage?
>>> Because there is no dma-buf UAPI which allows to track the buffer life cycle
>>> (e.g. wait until dma-buf's .release callback is called)
>>>>>     - replies to xen-front that the buffer can be destroyed.
>>>>> This way deletion of the buffer happens synchronously on both Dom0 and DomU
>>>>> sides. In case if DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE returns with time-out
>>>>> error
>>>>> (BTW, wait time is a parameter of this IOCTL), Xen will defer grant
>>>>> reference
>>>>> removal and will retry later until those are free.
>>>>>
>>>>> Hope this helps understand how buffers are synchronously deleted in case
>>>>> of xen-zcopy with a single protocol command.
>>>>>
>>>>> I think the above logic can also be re-used by the hyper-dmabuf driver with
>>>>> some additional work:
>>>>>
>>>>> 1. xen-zcopy can be split into 2 parts and extend:
>>>>> 1.1. Xen gntdev driver [4], [5] to allow creating dma-buf from grefs and
>>>>> vise versa,
>>>> I don't know much about the dma-buf implementation in Linux, but
>>>> gntdev is a user-space device, and AFAICT user-space applications
>>>> don't have any notion of dma buffers. How are such buffers useful for
>>>> user-space? Why can't this just be called memory?
>>> A dma-buf is seen by user-space as a file descriptor and you can
>>> pass it to different drivers then. For example, you can share a buffer
>>> used by a display driver for scanout with a GPU, to compose a picture
>>> into it:
>>> 1. User-space (US) allocates a display buffer from display driver
>>> 2. US asks display driver to export the dma-buf which backs up that buffer,
>>> US gets buffer's fd: dma_buf_fd
>>> 3. US asks GPU driver to import a buffer and provides it with dma_buf_fd
>>> 4. GPU renders contents into display buffer (dma_buf_fd)
>> After speaking with Oleksandr on IRC, I think the main usage of the
>> gntdev extension is to:
>>
>> 1. Create a dma-buf from a set of grant references.
>> 2. Share dma-buf and get a list of grant references.
>>
>> I think this set of operations could be broken into:
>>
>> 1.1 Map grant references into user-space using the gntdev.
>> 1.2 Create a dma-buf out of a set of user-space virtual addresses.
>>
>> 2.1 Map a dma-buf into user-space.
>> 2.2 Get grefs out of the user-space addresses where the dma-buf is
>>      mapped.
>>
>> So it seems like what's actually missing is a way to:
>>
>>   - Create a dma-buf from a list of user-space virtual addresses.
>>   - Allow to map a dma-buf into user-space, so it can then be used with
>>     the gntdev.
>>
>> I think this is generic enough that it could be implemented by a
>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>> something similar to this.
> You can't just wrap random userspace memory into a dma-buf. We've just had
> this discussion with kvm/qemu folks, who proposed just that, and after a
> bit of discussion they'll now try to have a driver which just wraps a
> memfd into a dma-buf.
So, we have to decide whether we introduce a new driver
(say, under drivers/xen/xen-dma-buf) or extend the existing
gntdev/balloon drivers to support dma-buf use-cases.

Can anybody from Xen community express their preference here?

And I hope that there is no objection to having it all in the kernel,
without going to user-space with VAs and back (the device-X driver).
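
Either way, from user-space the flow would look roughly the same and only
the device node changes; a sketch, where both device paths and the
grefs_to_dmabuf() helper are hypothetical:

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>
#include <xf86drm.h>   /* drmPrimeFDToHandle() */

/* Hypothetical helper wrapping the (not yet existing) grant refs -> dma-buf
 * ioctl of whichever driver ends up providing it. */
int grefs_to_dmabuf(int xen_fd, const uint32_t *refs, uint32_t count,
                    uint32_t domid);

static int import_guest_buffer(int drm_fd, const uint32_t *refs,
                               uint32_t count, uint32_t domid,
                               uint32_t *handle)
{
        /* The placement decision only changes which node is opened;
         * both paths here are illustrative. */
        int xen_fd = open("/dev/xen/dma-buf", O_RDWR | O_CLOEXEC);
        int dmabuf_fd, ret;

        if (xen_fd < 0)
                xen_fd = open("/dev/xen/gntdev", O_RDWR | O_CLOEXEC);
        if (xen_fd < 0)
                return -1;

        dmabuf_fd = grefs_to_dmabuf(xen_fd, refs, count, domid);
        if (dmabuf_fd < 0) {
                close(xen_fd);
                return -1;
        }

        /* From here on it is the standard PRIME import into the local
         * DRM driver, exactly as with any other dma-buf; fd lifetime
         * rules would be defined by the new driver. */
        ret = drmPrimeFDToHandle(drm_fd, dmabuf_fd, handle);
        close(dmabuf_fd);
        close(xen_fd);
        return ret;
}
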
>
> Yes i915 and amdgpu and a few other drivers do have facilities to wrap
> userspace memory into a gpu buffer object. But we don't allow those to be
> exported to other drivers, because the core mm magic needed to make this
> all work is way too tricky, even within the context of just 1 driver. And
> dma-buf does not have the required callbacks and semantics to make it
> work.
> -Daniel
>
>>> Finally, this is indeed some memory, but a bit more [1]
>>>> Also, (with my FreeBSD maintainer hat) how is this going to translate
>>>> to other OSes? So far the operations performed by the gntdev device
>>>> are mostly OS-agnostic because this just map/unmap memory, and in fact
>>>> they are implemented by Linux and FreeBSD.
>>> At the moment I can only see a Linux implementation and it seems
>>> to be perfectly ok, as we do not change Xen's APIs etc. and only
>>> use the existing ones (remember, we only extend the gntdev/balloon
>>> drivers; all the changes are in the Linux kernel)
>>> As the second note I can also think that we do not extend gntdev/balloon
>>> drivers and have re-worked xen-zcopy driver be a separate entity,
>>> say drivers/xen/dma-buf
>>>>> implement "wait" ioctl (wait for dma-buf->release): currently these are
>>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS, DRM_XEN_ZCOPY_DUMB_TO_REFS and
>>>>> DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>> 1.2. Xen balloon driver [6] to allow allocating contiguous buffers (not
>>>>> needed
>>>>> by current hyper-dmabuf, but is a must for xen-zcopy use-cases)
>>>> I think this needs clarifying. In which memory space do you need those
>>>> regions to be contiguous?
>>> Use-case: Dom0 has a HW driver which only works with contig memory
>>> and I want DomU to be able to directly write into that memory, thus
>>> implementing zero copying
>>>> Do they need to be contiguous in host physical memory, or guest
>>>> physical memory?
>>> Host
>>>> If it's in guest memory space, isn't there any generic interface that
>>>> you can use?
>>>>
>>>> If it's in host physical memory space, why do you need this buffer to
>>>> be contiguous in host physical memory space? The IOMMU should hide all
>>>> this.
>>> There are drivers/HW which can only work with contig memory and
>>> if it is backed by an IOMMU then still it has to be contig in IPA
>>> space (real device doesn't know that it is actually IPA contig, not PA)
>> What's IPA contig?
>>
>> Thanks, Roger.
>> _______________________________________________
>> dri-devel mailing list
>> dri-devel@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-20 11:25                     ` Oleksandr Andrushchenko
  (?)
@ 2018-04-23 11:52                     ` Wei Liu
  2018-04-23 12:10                         ` Oleksandr Andrushchenko
  2018-04-23 12:10                       ` Oleksandr Andrushchenko
  -1 siblings, 2 replies; 131+ messages in thread
From: Wei Liu @ 2018-04-23 11:52 UTC (permalink / raw)
  To: Oleksandr Andrushchenko
  Cc: Roger Pau Monné,
	jgross, Artem Mygaiev, Dongwon Kim, konrad.wilk, airlied,
	Oleksandr_Andrushchenko, linux-kernel, dri-devel, Potrola,
	MateuszX, xen-devel, daniel.vetter, boris.ostrovsky, Wei Liu

On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko wrote:
> > >     the gntdev.
> > > 
> > > I think this is generic enough that it could be implemented by a
> > > device not tied to Xen. AFAICT the hyper_dma guys also wanted
> > > something similar to this.
> > You can't just wrap random userspace memory into a dma-buf. We've just had
> > this discussion with kvm/qemu folks, who proposed just that, and after a
> > bit of discussion they'll now try to have a driver which just wraps a
> > memfd into a dma-buf.
> So, we have to decide either we introduce a new driver
> (say, under drivers/xen/xen-dma-buf) or extend the existing
> gntdev/balloon to support dma-buf use-cases.
> 
> Can anybody from Xen community express their preference here?
> 

Oleksandr talked to me on IRC about this, he said a few IOCTLs need to
be added to either existing drivers or a new driver.

I went through this thread twice and skimmed through the relevant
documents, but I couldn't see any obvious pros and cons for either
approach. So I don't really have an opinion on this.

But, assuming if implemented in existing drivers, those IOCTLs need to
be added to different drivers, which means userspace program needs to
write more code and get more handles, it would be slightly better to
implement a new driver from that perspective.

Wei.

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-23 11:52                     ` Wei Liu
@ 2018-04-23 12:10                         ` Oleksandr Andrushchenko
  2018-04-23 12:10                       ` Oleksandr Andrushchenko
  1 sibling, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-23 12:10 UTC (permalink / raw)
  To: Wei Liu
  Cc: Roger Pau Monné,
	jgross, Artem Mygaiev, Dongwon Kim, konrad.wilk, airlied,
	Oleksandr_Andrushchenko, linux-kernel, dri-devel, Potrola,
	MateuszX, xen-devel, daniel.vetter, boris.ostrovsky

On 04/23/2018 02:52 PM, Wei Liu wrote:
> On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko wrote:
>>>>      the gntdev.
>>>>
>>>> I think this is generic enough that it could be implemented by a
>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>> something similar to this.
>>> You can't just wrap random userspace memory into a dma-buf. We've just had
>>> this discussion with kvm/qemu folks, who proposed just that, and after a
>>> bit of discussion they'll now try to have a driver which just wraps a
>>> memfd into a dma-buf.
>> So, we have to decide either we introduce a new driver
>> (say, under drivers/xen/xen-dma-buf) or extend the existing
>> gntdev/balloon to support dma-buf use-cases.
>>
>> Can anybody from Xen community express their preference here?
>>
> Oleksandr talked to me on IRC about this, he said a few IOCTLs need to
> be added to either existing drivers or a new driver.
>
> I went through this thread twice and skimmed through the relevant
> documents, but I couldn't see any obvious pros and cons for either
> approach. So I don't really have an opinion on this.
>
> But, assuming if implemented in existing drivers, those IOCTLs need to
> be added to different drivers, which means userspace program needs to
> write more code and get more handles, it would be slightly better to
> implement a new driver from that perspective.
If gntdev/balloon extension is still considered:

All the IOCTLs will be in gntdev driver (in current xen-zcopy terminology):
  - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
  - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
  - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE

Balloon driver extension, which is needed for contiguous/DMA
buffers, will be to provide new *kernel API*, no UAPI is needed.
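
Roughly, what I mean by a new kernel API is something like the below (a
sketch only: the helper names are made up, only dma_alloc_coherent() and
dma_free_coherent() are existing kernel APIs, and the real implementation
would also have to balloon pages in/out):

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/gfp.h>

struct xen_contig_buf {
        void            *vaddr;
        dma_addr_t      dma_addr;
        size_t          size;
};

/* Hypothetical balloon-provided helper: hand out physically contiguous,
 * DMA-able memory that can then be granted to the other domain. */
static int xen_balloon_alloc_contig(struct device *dev, size_t size,
                                    struct xen_contig_buf *buf)
{
        buf->vaddr = dma_alloc_coherent(dev, size, &buf->dma_addr, GFP_KERNEL);
        if (!buf->vaddr)
                return -ENOMEM;
        buf->size = size;
        return 0;
}

static void xen_balloon_free_contig(struct device *dev,
                                    struct xen_contig_buf *buf)
{
        dma_free_coherent(dev, buf->size, buf->vaddr, buf->dma_addr);
}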

> Wei.
Thank you,
Oleksandr

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-23 12:10                         ` Oleksandr Andrushchenko
  (?)
  (?)
@ 2018-04-23 22:41                         ` Boris Ostrovsky
  2018-04-24  5:43                           ` Oleksandr Andrushchenko
  2018-04-24  5:43                             ` Oleksandr Andrushchenko
  -1 siblings, 2 replies; 131+ messages in thread
From: Boris Ostrovsky @ 2018-04-23 22:41 UTC (permalink / raw)
  To: Oleksandr Andrushchenko, Wei Liu
  Cc: Roger Pau Monné,
	jgross, Artem Mygaiev, Dongwon Kim, konrad.wilk, airlied,
	Oleksandr_Andrushchenko, linux-kernel, dri-devel, Potrola,
	MateuszX, xen-devel, daniel.vetter

On 04/23/2018 08:10 AM, Oleksandr Andrushchenko wrote:
> On 04/23/2018 02:52 PM, Wei Liu wrote:
>> On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko wrote:
>>>>>      the gntdev.
>>>>>
>>>>> I think this is generic enough that it could be implemented by a
>>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>>> something similar to this.
>>>> You can't just wrap random userspace memory into a dma-buf. We've
>>>> just had
>>>> this discussion with kvm/qemu folks, who proposed just that, and
>>>> after a
>>>> bit of discussion they'll now try to have a driver which just wraps a
>>>> memfd into a dma-buf.
>>> So, we have to decide either we introduce a new driver
>>> (say, under drivers/xen/xen-dma-buf) or extend the existing
>>> gntdev/balloon to support dma-buf use-cases.
>>>
>>> Can anybody from Xen community express their preference here?
>>>
>> Oleksandr talked to me on IRC about this, he said a few IOCTLs need to
>> be added to either existing drivers or a new driver.
>>
>> I went through this thread twice and skimmed through the relevant
>> documents, but I couldn't see any obvious pros and cons for either
>> approach. So I don't really have an opinion on this.
>>
>> But, assuming if implemented in existing drivers, those IOCTLs need to
>> be added to different drivers, which means userspace program needs to
>> write more code and get more handles, it would be slightly better to
>> implement a new driver from that perspective.
> If gntdev/balloon extension is still considered:
>
> All the IOCTLs will be in gntdev driver (in current xen-zcopy
> terminology):
>  - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>  - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>  - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
>
> Balloon driver extension, which is needed for contiguous/DMA
> buffers, will be to provide new *kernel API*, no UAPI is needed.
>


So I am obviously a bit late to this thread, but why do you need to add
new ioctls to gntdev and balloon? Doesn't this driver manage to do what
you want without any extensions?

-boris

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-23 22:41                         ` [Xen-devel] " Boris Ostrovsky
@ 2018-04-24  5:43                             ` Oleksandr Andrushchenko
  2018-04-24  5:43                             ` Oleksandr Andrushchenko
  1 sibling, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-24  5:43 UTC (permalink / raw)
  To: Boris Ostrovsky, Wei Liu
  Cc: Roger Pau Monné,
	jgross, Artem Mygaiev, Dongwon Kim, konrad.wilk, airlied,
	Oleksandr_Andrushchenko, linux-kernel, dri-devel, Potrola,
	MateuszX, xen-devel, daniel.vetter

On 04/24/2018 01:41 AM, Boris Ostrovsky wrote:
> On 04/23/2018 08:10 AM, Oleksandr Andrushchenko wrote:
>> On 04/23/2018 02:52 PM, Wei Liu wrote:
>>> On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko wrote:
>>>>>>       the gntdev.
>>>>>>
>>>>>> I think this is generic enough that it could be implemented by a
>>>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>>>> something similar to this.
>>>>> You can't just wrap random userspace memory into a dma-buf. We've
>>>>> just had
>>>>> this discussion with kvm/qemu folks, who proposed just that, and
>>>>> after a
>>>>> bit of discussion they'll now try to have a driver which just wraps a
>>>>> memfd into a dma-buf.
>>>> So, we have to decide either we introduce a new driver
>>>> (say, under drivers/xen/xen-dma-buf) or extend the existing
>>>> gntdev/balloon to support dma-buf use-cases.
>>>>
>>>> Can anybody from Xen community express their preference here?
>>>>
>>> Oleksandr talked to me on IRC about this, he said a few IOCTLs need to
>>> be added to either existing drivers or a new driver.
>>>
>>> I went through this thread twice and skimmed through the relevant
>>> documents, but I couldn't see any obvious pros and cons for either
>>> approach. So I don't really have an opinion on this.
>>>
>>> But, assuming if implemented in existing drivers, those IOCTLs need to
>>> be added to different drivers, which means userspace program needs to
>>> write more code and get more handles, it would be slightly better to
>>> implement a new driver from that perspective.
>> If gntdev/balloon extension is still considered:
>>
>> All the IOCTLs will be in gntdev driver (in current xen-zcopy
>> terminology):
>>   - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>   - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>   - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
>>
>> Balloon driver extension, which is needed for contiguous/DMA
>> buffers, will be to provide new *kernel API*, no UAPI is needed.
>>
>
> So I am obviously a bit late to this thread, but why do you need to add
> new ioctls to gntdev and balloon? Doesn't this driver manage to do what
> you want without any extensions?
1. I only (may) need to add IOCTLs to gntdev
2. balloon driver needs to be extended, so it can allocate
contiguous (DMA) memory, not IOCTLs/UAPI here, all lives
in the kernel.
3. The reason I need to extend gnttab with new IOCTLs is to
provide new functionality to create a dma-buf from grant references
and to produce grant references for a dma-buf. This is what I have as UAPI
description for xen-zcopy driver:

1. DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
This will create a DRM dumb buffer from grant references provided
by the frontend. The intended usage is:
   - Frontend
     - creates a dumb/display buffer and allocates memory
     - grants foreign access to the buffer pages
     - passes granted references to the backend
   - Backend
     - issues DRM_XEN_ZCOPY_DUMB_FROM_REFS ioctl to map
       granted references and create a dumb buffer
     - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
     - requests real HW driver/consumer to import the PRIME buffer with
       DRM_IOCTL_PRIME_FD_TO_HANDLE
     - uses handle returned by the real HW driver
   - at the end:
     o closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
     o closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
     o closes file descriptor of the exported buffer
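
To make the backend side of this flow a bit more concrete, here is a rough
user-space sketch. Only the standard PRIME/GEM ioctls are spelled out; the
DUMB_FROM_REFS call itself takes its argument struct from xen_zcopy_drm.h
in the patch, so it is assumed to have already produced zcopy_handle:

#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <drm/drm.h>

/* zcopy_fd/hw_fd: open DRM nodes of xen-zcopy and of the real HW driver.
 * zcopy_handle: GEM handle returned by DRM_XEN_ZCOPY_DUMB_FROM_REFS. */
static int import_into_hw_driver(int zcopy_fd, int hw_fd,
                                 uint32_t zcopy_handle, uint32_t *hw_handle)
{
        struct drm_prime_handle exp = {
                .handle = zcopy_handle,
                .flags = DRM_CLOEXEC | DRM_RDWR,
        };
        struct drm_prime_handle imp = { 0 };
        int ret;

        /* handle -> fd on the zero-copy driver (PRIME export) */
        ret = ioctl(zcopy_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &exp);
        if (ret)
                return ret;

        /* fd -> handle on the real HW driver (PRIME import) */
        imp.fd = exp.fd;
        ret = ioctl(hw_fd, DRM_IOCTL_PRIME_FD_TO_HANDLE, &imp);
        if (!ret)
                *hw_handle = imp.handle;

        close(exp.fd);  /* the fd is not needed once imported */
        return ret;
}

/* "At the end": drop both handles with DRM_IOCTL_GEM_CLOSE. */
static void drop_handles(int zcopy_fd, uint32_t zcopy_handle,
                         int hw_fd, uint32_t hw_handle)
{
        struct drm_gem_close req = { .handle = hw_handle };

        ioctl(hw_fd, DRM_IOCTL_GEM_CLOSE, &req);
        req.handle = zcopy_handle;
        ioctl(zcopy_fd, DRM_IOCTL_GEM_CLOSE, &req);
}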

2. DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
This will grant references to a dumb/display buffer's memory provided by the
backend. The intended usage is:
   - Frontend
     - requests backend to allocate dumb/display buffer and grant references
       to its pages
   - Backend
     - requests real HW driver to create a dumb with 
DRM_IOCTL_MODE_CREATE_DUMB
     - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
     - requests zero-copy driver to import the PRIME buffer with
       DRM_IOCTL_PRIME_FD_TO_HANDLE
     - issues DRM_XEN_ZCOPY_DUMB_TO_REFS ioctl to
       grant references to the buffer's memory.
     - passes grant references to the frontend
  - at the end:
     - closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
     - closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
     - closes file descriptor of the imported buffer
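
And the corresponding backend-side sketch for this direction (again, only
the standard dumb/PRIME ioctls are shown; the DUMB_TO_REFS call is left as
a comment since its argument struct comes from xen_zcopy_drm.h):

#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <drm/drm.h>

static int create_hw_dumb_for_frontend(int hw_fd, int zcopy_fd,
                                       uint32_t width, uint32_t height,
                                       uint32_t *zcopy_handle)
{
        struct drm_mode_create_dumb dumb = {
                .width = width, .height = height, .bpp = 32,
        };
        struct drm_prime_handle exp = { .flags = DRM_CLOEXEC | DRM_RDWR };
        struct drm_prime_handle imp = { 0 };
        int ret;

        /* real HW driver allocates the dumb/display buffer */
        ret = ioctl(hw_fd, DRM_IOCTL_MODE_CREATE_DUMB, &dumb);
        if (ret)
                return ret;

        /* handle -> fd on the HW driver (PRIME export) */
        exp.handle = dumb.handle;
        ret = ioctl(hw_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &exp);
        if (ret)
                return ret;

        /* fd -> handle on the zero-copy driver (PRIME import) */
        imp.fd = exp.fd;
        ret = ioctl(zcopy_fd, DRM_IOCTL_PRIME_FD_TO_HANDLE, &imp);
        close(exp.fd);
        if (ret)
                return ret;
        *zcopy_handle = imp.handle;

        /* DRM_XEN_ZCOPY_DUMB_TO_REFS on zcopy_fd with *zcopy_handle then
         * produces the grant references that are passed to the frontend. */
        return 0;
}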

3. DRM_XEN_ZCOPY_DUMB_WAIT_FREE
This will block until the dumb buffer with the wait handle provided be 
freed:
this is needed for synchronization between frontend and backend in case
frontend provides grant references of the buffer via
DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL and which must be released before
backend replies with XENDISPL_OP_DBUF_DESTROY response.
wait_handle must be the same value returned while calling
DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL.
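
For completeness, the backend's waiting side could be as simple as the
following (the struct layout here is only my illustration, the real one is
defined in xen_zcopy_drm.h from the patch):

#include <stdint.h>
#include <sys/ioctl.h>
/* DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE comes from the patch's xen_zcopy_drm.h */

static int wait_for_buffer_free(int zcopy_fd, uint32_t wait_handle)
{
        struct {
                uint32_t wait_handle;   /* value returned by DUMB_FROM_REFS */
                uint32_t wait_to_ms;    /* assumed timeout field */
        } wait = { .wait_handle = wait_handle, .wait_to_ms = 3000 };

        /* Blocks until the buffer built from the frontend's grant refs is
         * really freed; only then XENDISPL_OP_DBUF_DESTROY can be sent. */
        return ioctl(zcopy_fd, DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE, &wait);
}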

So, as you can see the above functionality is not covered by the 
existing UAPI
of the gntdev driver.
Now, if we change dumb -> dma-buf and remove DRM code (which is only a 
wrapper
here on top of dma-buf) we get new driver for dma-buf for Xen.

This is why I have 2 options here: either create a dedicated driver for this
(e.g. re-work xen-zcopy to be DRM independent and put it under
drivers/xen/xen-dma-buf, for example) or extend the existing gntdev driver
with the above UAPI + make changes to the balloon driver to provide kernel
API for DMA buffer allocations.

So, this is where I need to understand Xen community's preference, so the
implementation is not questioned later on.

> -boris
Thank you,
Oleksandr

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-24  5:43                             ` Oleksandr Andrushchenko
  (?)
@ 2018-04-24  7:51                             ` Juergen Gross
  2018-04-24  8:07                               ` Oleksandr Andrushchenko
  2018-04-24  8:07                                 ` Oleksandr Andrushchenko
  -1 siblings, 2 replies; 131+ messages in thread
From: Juergen Gross @ 2018-04-24  7:51 UTC (permalink / raw)
  To: Oleksandr Andrushchenko, Boris Ostrovsky, Wei Liu
  Cc: Roger Pau Monné,
	Artem Mygaiev, Dongwon Kim, konrad.wilk, airlied,
	Oleksandr_Andrushchenko, linux-kernel, dri-devel, Potrola,
	MateuszX, xen-devel, daniel.vetter

On 24/04/18 07:43, Oleksandr Andrushchenko wrote:
> On 04/24/2018 01:41 AM, Boris Ostrovsky wrote:
>> On 04/23/2018 08:10 AM, Oleksandr Andrushchenko wrote:
>>> On 04/23/2018 02:52 PM, Wei Liu wrote:
>>>> On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko
>>>> wrote:
>>>>>>>       the gntdev.
>>>>>>>
>>>>>>> I think this is generic enough that it could be implemented by a
>>>>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>>>>> something similar to this.
>>>>>> You can't just wrap random userspace memory into a dma-buf. We've
>>>>>> just had
>>>>>> this discussion with kvm/qemu folks, who proposed just that, and
>>>>>> after a
>>>>>> bit of discussion they'll now try to have a driver which just wraps a
>>>>>> memfd into a dma-buf.
>>>>> So, we have to decide either we introduce a new driver
>>>>> (say, under drivers/xen/xen-dma-buf) or extend the existing
>>>>> gntdev/balloon to support dma-buf use-cases.
>>>>>
>>>>> Can anybody from Xen community express their preference here?
>>>>>
>>>> Oleksandr talked to me on IRC about this, he said a few IOCTLs need to
>>>> be added to either existing drivers or a new driver.
>>>>
>>>> I went through this thread twice and skimmed through the relevant
>>>> documents, but I couldn't see any obvious pros and cons for either
>>>> approach. So I don't really have an opinion on this.
>>>>
>>>> But, assuming if implemented in existing drivers, those IOCTLs need to
>>>> be added to different drivers, which means userspace program needs to
>>>> write more code and get more handles, it would be slightly better to
>>>> implement a new driver from that perspective.
>>> If gntdev/balloon extension is still considered:
>>>
>>> All the IOCTLs will be in gntdev driver (in current xen-zcopy
>>> terminology):
>>>   - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>>   - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>>   - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
>>>
>>> Balloon driver extension, which is needed for contiguous/DMA
>>> buffers, will be to provide new *kernel API*, no UAPI is needed.
>>>
>>
>> So I am obviously a bit late to this thread, but why do you need to add
>> new ioctls to gntdev and balloon? Doesn't this driver manage to do what
>> you want without any extensions?
> 1. I only (may) need to add IOCTLs to gntdev
> 2. balloon driver needs to be extended, so it can allocate
> contiguous (DMA) memory, not IOCTLs/UAPI here, all lives
> in the kernel.
> 3. The reason I need to extend gnttab with new IOCTLs is to
> provide new functionality to create a dma-buf from grant references
> and to produce grant references for a dma-buf. This is what I have as UAPI
> description for xen-zcopy driver:
> 
> 1. DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
> This will create a DRM dumb buffer from grant references provided
> by the frontend. The intended usage is:
>   - Frontend
>     - creates a dumb/display buffer and allocates memory
>     - grants foreign access to the buffer pages
>     - passes granted references to the backend
>   - Backend
>     - issues DRM_XEN_ZCOPY_DUMB_FROM_REFS ioctl to map
>       granted references and create a dumb buffer
>     - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
>     - requests real HW driver/consumer to import the PRIME buffer with
>       DRM_IOCTL_PRIME_FD_TO_HANDLE
>     - uses handle returned by the real HW driver
>   - at the end:
>     o closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>     o closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>     o closes file descriptor of the exported buffer
> 
> 2. DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
> This will grant references to a dumb/display buffer's memory provided by
> the
> backend. The intended usage is:
>   - Frontend
>     - requests backend to allocate dumb/display buffer and grant references
>       to its pages
>   - Backend
>     - requests real HW driver to create a dumb with
> DRM_IOCTL_MODE_CREATE_DUMB
>     - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
>     - requests zero-copy driver to import the PRIME buffer with
>       DRM_IOCTL_PRIME_FD_TO_HANDLE
>     - issues DRM_XEN_ZCOPY_DUMB_TO_REFS ioctl to
>       grant references to the buffer's memory.
>     - passes grant references to the frontend
>  - at the end:
>     - closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>     - closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>     - closes file descriptor of the imported buffer
> 
> 3. DRM_XEN_ZCOPY_DUMB_WAIT_FREE
> This will block until the dumb buffer with the wait handle provided be
> freed:
> this is needed for synchronization between frontend and backend in case
> frontend provides grant references of the buffer via
> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL and which must be released before
> backend replies with XENDISPL_OP_DBUF_DESTROY response.
> wait_handle must be the same value returned while calling
> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL.
> 
> So, as you can see the above functionality is not covered by the
> existing UAPI
> of the gntdev driver.
> Now, if we change dumb -> dma-buf and remove DRM code (which is only a
> wrapper
> here on top of dma-buf) we get new driver for dma-buf for Xen.
> 
> This is why I have 2 options here: either create a dedicated driver for
> this
> (e.g. re-work xen-zcopy to be DRM independent and put it under
> drivers/xen/xen-dma-buf, for example) or extend the existing gntdev driver
> with the above UAPI + make changes to the balloon driver to provide kernel
> API for DMA buffer allocations.

Which user component would use the new ioctls?

I'm asking because I'm not very fond of adding more linux specific
functions to libgnttab which are not related to a specific Xen version,
but to a kernel version.

So doing this in a separate driver seems to be the better option in
this regard.


Juergen

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-24  7:51                             ` Juergen Gross
@ 2018-04-24  8:07                                 ` Oleksandr Andrushchenko
  2018-04-24  8:07                                 ` Oleksandr Andrushchenko
  1 sibling, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-24  8:07 UTC (permalink / raw)
  To: Juergen Gross, Boris Ostrovsky, Wei Liu
  Cc: Roger Pau Monné,
	Artem Mygaiev, Dongwon Kim, konrad.wilk, airlied,
	Oleksandr_Andrushchenko, linux-kernel, dri-devel, Potrola,
	MateuszX, xen-devel, daniel.vetter

On 04/24/2018 10:51 AM, Juergen Gross wrote:
> On 24/04/18 07:43, Oleksandr Andrushchenko wrote:
>> On 04/24/2018 01:41 AM, Boris Ostrovsky wrote:
>>> On 04/23/2018 08:10 AM, Oleksandr Andrushchenko wrote:
>>>> On 04/23/2018 02:52 PM, Wei Liu wrote:
>>>>> On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko
>>>>> wrote:
>>>>>>>>        the gntdev.
>>>>>>>>
>>>>>>>> I think this is generic enough that it could be implemented by a
>>>>>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>>>>>> something similar to this.
>>>>>>> You can't just wrap random userspace memory into a dma-buf. We've
>>>>>>> just had
>>>>>>> this discussion with kvm/qemu folks, who proposed just that, and
>>>>>>> after a
>>>>>>> bit of discussion they'll now try to have a driver which just wraps a
>>>>>>> memfd into a dma-buf.
>>>>>> So, we have to decide either we introduce a new driver
>>>>>> (say, under drivers/xen/xen-dma-buf) or extend the existing
>>>>>> gntdev/balloon to support dma-buf use-cases.
>>>>>>
>>>>>> Can anybody from Xen community express their preference here?
>>>>>>
>>>>> Oleksandr talked to me on IRC about this, he said a few IOCTLs need to
>>>>> be added to either existing drivers or a new driver.
>>>>>
>>>>> I went through this thread twice and skimmed through the relevant
>>>>> documents, but I couldn't see any obvious pros and cons for either
>>>>> approach. So I don't really have an opinion on this.
>>>>>
>>>>> But, assuming if implemented in existing drivers, those IOCTLs need to
>>>>> be added to different drivers, which means userspace program needs to
>>>>> write more code and get more handles, it would be slightly better to
>>>>> implement a new driver from that perspective.
>>>> If gntdev/balloon extension is still considered:
>>>>
>>>> All the IOCTLs will be in gntdev driver (in current xen-zcopy
>>>> terminology):
>>>>    - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>>>    - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>>>    - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>
>>>> Balloon driver extension, which is needed for contiguous/DMA
>>>> buffers, will be to provide new *kernel API*, no UAPI is needed.
>>>>
>>> So I am obviously a bit late to this thread, but why do you need to add
>>> new ioctls to gntdev and balloon? Doesn't this driver manage to do what
>>> you want without any extensions?
>> 1. I only (may) need to add IOCTLs to gntdev
>> 2. balloon driver needs to be extended, so it can allocate
>> contiguous (DMA) memory, not IOCTLs/UAPI here, all lives
>> in the kernel.
>> 3. The reason I need to extend gnttab with new IOCTLs is to
>> provide new functionality to create a dma-buf from grant references
>> and to produce grant references for a dma-buf. This is what I have as UAPI
>> description for xen-zcopy driver:
>>
>> 1. DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>> This will create a DRM dumb buffer from grant references provided
>> by the frontend. The intended usage is:
>>    - Frontend
>>      - creates a dumb/display buffer and allocates memory
>>      - grants foreign access to the buffer pages
>>      - passes granted references to the backend
>>    - Backend
>>      - issues DRM_XEN_ZCOPY_DUMB_FROM_REFS ioctl to map
>>        granted references and create a dumb buffer
>>      - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
>>      - requests real HW driver/consumer to import the PRIME buffer with
>>        DRM_IOCTL_PRIME_FD_TO_HANDLE
>>      - uses handle returned by the real HW driver
>>    - at the end:
>>      o closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>      o closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>      o closes file descriptor of the exported buffer
>>
>> 2. DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>> This will grant references to a dumb/display buffer's memory provided by
>> the
>> backend. The intended usage is:
>>    - Frontend
>>      - requests backend to allocate dumb/display buffer and grant references
>>        to its pages
>>    - Backend
>>      - requests real HW driver to create a dumb with
>> DRM_IOCTL_MODE_CREATE_DUMB
>>      - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
>>      - requests zero-copy driver to import the PRIME buffer with
>>        DRM_IOCTL_PRIME_FD_TO_HANDLE
>>      - issues DRM_XEN_ZCOPY_DUMB_TO_REFS ioctl to
>>        grant references to the buffer's memory.
>>      - passes grant references to the frontend
>>   - at the end:
>>      - closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>      - closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>      - closes file descriptor of the imported buffer
>>
>> 3. DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>> This will block until the dumb buffer with the wait handle provided be
>> freed:
>> this is needed for synchronization between frontend and backend in case
>> frontend provides grant references of the buffer via
>> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL and which must be released before
>> backend replies with XENDISPL_OP_DBUF_DESTROY response.
>> wait_handle must be the same value returned while calling
>> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL.
>>
>> So, as you can see the above functionality is not covered by the
>> existing UAPI
>> of the gntdev driver.
>> Now, if we change dumb -> dma-buf and remove DRM code (which is only a
>> wrapper
>> here on top of dma-buf) we get new driver for dma-buf for Xen.
>>
>> This is why I have 2 options here: either create a dedicated driver for
>> this
>> (e.g. re-work xen-zcopy to be DRM independent and put it under
>> drivers/xen/xen-dma-buf, for example) or extend the existing gntdev driver
>> with the above UAPI + make changes to the balloon driver to provide kernel
>> API for DMA buffer allocations.
> Which user component would use the new ioctls?
It is currently used by the display backend [1] and will
probably be used by the hyper-dmabuf frontend/backend
(Dongwon from Intel can provide more info on this).
>
> I'm asking because I'm not very fond of adding more linux specific
> functions to libgnttab which are not related to a specific Xen version,
> but to a kernel version.
Hm, I was not thinking about adding this UAPI to libgnttab.
It seems it can be used directly, without wrappers, from user-space.
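
I.e. something as simple as the below would do (the ioctl name in the
comment is just a placeholder for whatever UAPI we agree on):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
        int fd = open("/dev/xen/gntdev", O_RDWR | O_CLOEXEC);

        if (fd < 0)
                return 1;
        /* e.g. ioctl(fd, IOCTL_GNTDEV_DMABUF_EXP_FROM_REFS, &args) would
         * return a dma-buf fd usable with any importer -- no libgnttab
         * wrapper involved. */
        close(fd);
        return 0;
}
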
>
> So doing this in a separate driver seems to be the better option in
> this regard.
Well, from a maintenance POV it is easier for me to have it all in
a separate driver, as all dma-buf related functionality will
reside in one place. This also means that no changes to existing
drivers will be needed (if it is ok to have the ballooning in/out
code for DMA buffers (allocated with dma_alloc_xxx) outside the
balloon driver).
>
> Juergen
[1] https://github.com/xen-troops/displ_be

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
@ 2018-04-24  8:07                                 ` Oleksandr Andrushchenko
  0 siblings, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-24  8:07 UTC (permalink / raw)
  To: Juergen Gross, Boris Ostrovsky, Wei Liu
  Cc: Artem Mygaiev, Dongwon Kim, Oleksandr_Andrushchenko, airlied,
	konrad.wilk, linux-kernel, dri-devel, Potrola, MateuszX,
	daniel.vetter, xen-devel, Roger Pau Monné

On 04/24/2018 10:51 AM, Juergen Gross wrote:
> On 24/04/18 07:43, Oleksandr Andrushchenko wrote:
>> On 04/24/2018 01:41 AM, Boris Ostrovsky wrote:
>>> On 04/23/2018 08:10 AM, Oleksandr Andrushchenko wrote:
>>>> On 04/23/2018 02:52 PM, Wei Liu wrote:
>>>>> On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko
>>>>> wrote:
>>>>>>>>        the gntdev.
>>>>>>>>
>>>>>>>> I think this is generic enough that it could be implemented by a
>>>>>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>>>>>> something similar to this.
>>>>>>> You can't just wrap random userspace memory into a dma-buf. We've
>>>>>>> just had
>>>>>>> this discussion with kvm/qemu folks, who proposed just that, and
>>>>>>> after a
>>>>>>> bit of discussion they'll now try to have a driver which just wraps a
>>>>>>> memfd into a dma-buf.
>>>>>> So, we have to decide either we introduce a new driver
>>>>>> (say, under drivers/xen/xen-dma-buf) or extend the existing
>>>>>> gntdev/balloon to support dma-buf use-cases.
>>>>>>
>>>>>> Can anybody from Xen community express their preference here?
>>>>>>
>>>>> Oleksandr talked to me on IRC about this, he said a few IOCTLs need to
>>>>> be added to either existing drivers or a new driver.
>>>>>
>>>>> I went through this thread twice and skimmed through the relevant
>>>>> documents, but I couldn't see any obvious pros and cons for either
>>>>> approach. So I don't really have an opinion on this.
>>>>>
>>>>> But, assuming if implemented in existing drivers, those IOCTLs need to
>>>>> be added to different drivers, which means userspace program needs to
>>>>> write more code and get more handles, it would be slightly better to
>>>>> implement a new driver from that perspective.
>>>> If gntdev/balloon extension is still considered:
>>>>
>>>> All the IOCTLs will be in gntdev driver (in current xen-zcopy
>>>> terminology):
>>>>    - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>>>    - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>>>    - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>
>>>> Balloon driver extension, which is needed for contiguous/DMA
>>>> buffers, will be to provide new *kernel API*, no UAPI is needed.
>>>>
>>> So I am obviously a bit late to this thread, but why do you need to add
>>> new ioctls to gntdev and balloon? Doesn't this driver manage to do what
>>> you want without any extensions?
>> 1. I only (may) need to add IOCTLs to gntdev
>> 2. balloon driver needs to be extended, so it can allocate
>> contiguous (DMA) memory, not IOCTLs/UAPI here, all lives
>> in the kernel.
>> 3. The reason I need to extend gnttab with new IOCTLs is to
>> provide new functionality to create a dma-buf from grant references
>> and to produce grant references for a dma-buf. This is what I have as UAPI
>> description for xen-zcopy driver:
>>
>> 1. DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>> This will create a DRM dumb buffer from grant references provided
>> by the frontend. The intended usage is:
>>    - Frontend
>>      - creates a dumb/display buffer and allocates memory
>>      - grants foreign access to the buffer pages
>>      - passes granted references to the backend
>>    - Backend
>>      - issues DRM_XEN_ZCOPY_DUMB_FROM_REFS ioctl to map
>>        granted references and create a dumb buffer
>>      - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
>>      - requests real HW driver/consumer to import the PRIME buffer with
>>        DRM_IOCTL_PRIME_FD_TO_HANDLE
>>      - uses handle returned by the real HW driver
>>    - at the end:
>>      o closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>      o closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>      o closes file descriptor of the exported buffer
>>
>> 2. DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>> This will grant references to a dumb/display buffer's memory provided by
>> the
>> backend. The intended usage is:
>>    - Frontend
>>      - requests backend to allocate dumb/display buffer and grant references
>>        to its pages
>>    - Backend
>>      - requests real HW driver to create a dumb with
>> DRM_IOCTL_MODE_CREATE_DUMB
>>      - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
>>      - requests zero-copy driver to import the PRIME buffer with
>>        DRM_IOCTL_PRIME_FD_TO_HANDLE
>>      - issues DRM_XEN_ZCOPY_DUMB_TO_REFS ioctl to
>>        grant references to the buffer's memory.
>>      - passes grant references to the frontend
>>   - at the end:
>>      - closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>      - closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>      - closes file descriptor of the imported buffer
>>
>> 3. DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>> This will block until the dumb buffer associated with the provided wait
>> handle is freed: this is needed for synchronization between frontend and
>> backend in case the frontend provides grant references of the buffer via
>> the DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL, and these references must be
>> released before the backend replies with the XENDISPL_OP_DBUF_DESTROY
>> response.
>> wait_handle must be the same value returned by the
>> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL.
>>
>> So, as you can see, the above functionality is not covered by the
>> existing UAPI of the gntdev driver.
>> Now, if we change dumb -> dma-buf and remove the DRM code (which is only
>> a wrapper here on top of dma-buf), we get a new dma-buf driver for Xen.
>>
>> This is why I have 2 options here: either create a dedicated driver for
>> this
>> (e.g. re-work xen-zcopy to be DRM independent and put it under
>> drivers/xen/xen-dma-buf, for example) or extend the existing gntdev driver
>> with the above UAPI + make changes to the balloon driver to provide kernel
>> API for DMA buffer allocations.
> Which user component would use the new ioctls?
It is currently used by the display backend [1] and will
probably be used by the hyper-dmabuf frontend/backend
(Dongwon from Intel can provide more info on this).
>
> I'm asking because I'm not very fond of adding more linux specific
> functions to libgnttab which are not related to a specific Xen version,
> but to a kernel version.
Hm, I was not thinking of adding this UAPI to libgnttab.
It seems it can be used directly from user-space without wrappers.
>
> So doing this in a separate driver seems to be the better option in
> this regard.
Well, from a maintenance POV it is easier for me to have it all in
a separate driver, as all dma-buf related functionality will
reside in one place. This also means that no changes to existing
drivers will be needed (if it is ok to have the ballooning in/out
code for DMA buffers (allocated with dma_alloc_xxx) outside the
balloon driver).
>
> Juergen
[1] https://github.com/xen-troops/displ_be
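
The mirror-image DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS flow quoted above (buffer
allocated by the backend's real HW driver, references granted back to the
frontend) could then look something like this rough sketch. Again the
xen_zcopy ioctl argument fields are assumptions for illustration only; the
dumb and PRIME ioctls are the standard DRM ones, and teardown (GEM_CLOSE on
both handles, closing the fd) mirrors the previous example.

#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <drm/xen_zcopy_drm.h>   /* header added by the proposed patch */

static int export_backend_buffer(int hw_fd, int zcopy_fd,
                                 uint32_t width, uint32_t height,
                                 uint32_t *grefs, uint32_t num_grefs,
                                 uint16_t otherend_id)
{
    struct drm_mode_create_dumb create = {
        .width = width, .height = height, .bpp = 32,
    };
    struct drm_prime_handle prime = { .flags = DRM_CLOEXEC };
    struct drm_xen_zcopy_dumb_to_refs to_refs;  /* field names assumed */
    int ret;

    /* 1. The real HW driver allocates the dumb/display buffer. */
    ret = drmIoctl(hw_fd, DRM_IOCTL_MODE_CREATE_DUMB, &create);
    if (ret)
        return ret;

    /* 2. Export it as a PRIME fd ... */
    prime.handle = create.handle;
    ret = drmIoctl(hw_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &prime);
    if (ret)
        return ret;

    /* 3. ... and import that fd into the zero-copy driver. */
    ret = drmIoctl(zcopy_fd, DRM_IOCTL_PRIME_FD_TO_HANDLE, &prime);
    if (ret)
        return ret;

    /* 4. Ask the zero-copy driver to grant references to the buffer pages. */
    memset(&to_refs, 0, sizeof(to_refs));
    to_refs.handle = prime.handle;              /* zcopy-side GEM handle */
    to_refs.otherend_id = otherend_id;
    to_refs.num_grefs = num_grefs;
    to_refs.grefs = (uint64_t)(uintptr_t)grefs; /* filled in by the ioctl */
    ret = drmIoctl(zcopy_fd, DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS, &to_refs);
    if (ret)
        return ret;

    /* grefs[] can now be passed to the frontend via the displif protocol. */
    return 0;
}
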
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-24  7:51                             ` Juergen Gross
@ 2018-04-24  8:07                               ` Oleksandr Andrushchenko
  2018-04-24  8:07                                 ` Oleksandr Andrushchenko
  1 sibling, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-24  8:07 UTC (permalink / raw)
  To: Juergen Gross, Boris Ostrovsky, Wei Liu
  Cc: Artem Mygaiev, Dongwon Kim, Oleksandr_Andrushchenko, airlied,
	linux-kernel, dri-devel, Potrola, MateuszX, daniel.vetter,
	xen-devel, Roger Pau Monné

On 04/24/2018 10:51 AM, Juergen Gross wrote:
> On 24/04/18 07:43, Oleksandr Andrushchenko wrote:
>> On 04/24/2018 01:41 AM, Boris Ostrovsky wrote:
>>> On 04/23/2018 08:10 AM, Oleksandr Andrushchenko wrote:
>>>> On 04/23/2018 02:52 PM, Wei Liu wrote:
>>>>> On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko
>>>>> wrote:
>>>>>>>>        the gntdev.
>>>>>>>>
>>>>>>>> I think this is generic enough that it could be implemented by a
>>>>>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>>>>>> something similar to this.
>>>>>>> You can't just wrap random userspace memory into a dma-buf. We've
>>>>>>> just had
>>>>>>> this discussion with kvm/qemu folks, who proposed just that, and
>>>>>>> after a
>>>>>>> bit of discussion they'll now try to have a driver which just wraps a
>>>>>>> memfd into a dma-buf.
>>>>>> So, we have to decide either we introduce a new driver
>>>>>> (say, under drivers/xen/xen-dma-buf) or extend the existing
>>>>>> gntdev/balloon to support dma-buf use-cases.
>>>>>>
>>>>>> Can anybody from Xen community express their preference here?
>>>>>>
>>>>> Oleksandr talked to me on IRC about this, he said a few IOCTLs need to
>>>>> be added to either existing drivers or a new driver.
>>>>>
>>>>> I went through this thread twice and skimmed through the relevant
>>>>> documents, but I couldn't see any obvious pros and cons for either
>>>>> approach. So I don't really have an opinion on this.
>>>>>
>>>>> But, assuming if implemented in existing drivers, those IOCTLs need to
>>>>> be added to different drivers, which means userspace program needs to
>>>>> write more code and get more handles, it would be slightly better to
>>>>> implement a new driver from that perspective.
>>>> If gntdev/balloon extension is still considered:
>>>>
>>>> All the IOCTLs will be in gntdev driver (in current xen-zcopy
>>>> terminology):
>>>>    - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>>>    - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>>>    - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>
>>>> Balloon driver extension, which is needed for contiguous/DMA
>>>> buffers, will be to provide new *kernel API*, no UAPI is needed.
>>>>
>>> So I am obviously a bit late to this thread, but why do you need to add
>>> new ioctls to gntdev and balloon? Doesn't this driver manage to do what
>>> you want without any extensions?
>> 1. I only (may) need to add IOCTLs to gntdev
>> 2. balloon driver needs to be extended, so it can allocate
>> contiguous (DMA) memory, not IOCTLs/UAPI here, all lives
>> in the kernel.
>> 3. The reason I need to extend gnttab with new IOCTLs is to
>> provide new functionality to create a dma-buf from grant references
>> and to produce grant references for a dma-buf. This is what I have as UAPI
>> description for xen-zcopy driver:
>>
>> 1. DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>> This will create a DRM dumb buffer from grant references provided
>> by the frontend. The intended usage is:
>>    - Frontend
>>      - creates a dumb/display buffer and allocates memory
>>      - grants foreign access to the buffer pages
>>      - passes granted references to the backend
>>    - Backend
>>      - issues DRM_XEN_ZCOPY_DUMB_FROM_REFS ioctl to map
>>        granted references and create a dumb buffer
>>      - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
>>      - requests real HW driver/consumer to import the PRIME buffer with
>>        DRM_IOCTL_PRIME_FD_TO_HANDLE
>>      - uses handle returned by the real HW driver
>>    - at the end:
>>      o closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>      o closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>      o closes file descriptor of the exported buffer
>>
>> 2. DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>> This will grant references to a dumb/display buffer's memory provided by
>> the
>> backend. The intended usage is:
>>    - Frontend
>>      - requests backend to allocate dumb/display buffer and grant references
>>        to its pages
>>    - Backend
>>      - requests real HW driver to create a dumb with
>> DRM_IOCTL_MODE_CREATE_DUMB
>>      - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
>>      - requests zero-copy driver to import the PRIME buffer with
>>        DRM_IOCTL_PRIME_FD_TO_HANDLE
>>      - issues DRM_XEN_ZCOPY_DUMB_TO_REFS ioctl to
>>        grant references to the buffer's memory.
>>      - passes grant references to the frontend
>>   - at the end:
>>      - closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>      - closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>      - closes file descriptor of the imported buffer
>>
>> 3. DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>> This will block until the dumb buffer associated with the provided wait
>> handle is freed: this is needed for synchronization between frontend and
>> backend in case the frontend provides grant references of the buffer via
>> the DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL, and these references must be
>> released before the backend replies with the XENDISPL_OP_DBUF_DESTROY
>> response.
>> wait_handle must be the same value returned by the
>> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL.
>>
>> So, as you can see, the above functionality is not covered by the
>> existing UAPI of the gntdev driver.
>> Now, if we change dumb -> dma-buf and remove the DRM code (which is only
>> a wrapper here on top of dma-buf), we get a new dma-buf driver for Xen.
>>
>> This is why I have 2 options here: either create a dedicated driver for
>> this
>> (e.g. re-work xen-zcopy to be DRM independent and put it under
>> drivers/xen/xen-dma-buf, for example) or extend the existing gntdev driver
>> with the above UAPI + make changes to the balloon driver to provide kernel
>> API for DMA buffer allocations.
> Which user component would use the new ioctls?
It is currently used by the display backend [1] and will
probably be used by the hyper-dmabuf frontend/backend
(Dongwon from Intel can provide more info on this).
>
> I'm asking because I'm not very fond of adding more linux specific
> functions to libgnttab which are not related to a specific Xen version,
> but to a kernel version.
Hm, I was not thinking of adding this UAPI to libgnttab.
It seems it can be used directly from user-space without wrappers.
>
> So doing this in a separate driver seems to be the better option in
> this regard.
Well, from a maintenance POV it is easier for me to have it all in
a separate driver, as all dma-buf related functionality will
reside in one place. This also means that no changes to existing
drivers will be needed (if it is ok to have the ballooning in/out
code for DMA buffers (allocated with dma_alloc_xxx) outside the
balloon driver).
>
> Juergen
[1] https://github.com/xen-troops/displ_be
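
The DRM_XEN_ZCOPY_DUMB_WAIT_FREE synchronization described above could be
used by the backend roughly as follows when it handles
XENDISPL_OP_DBUF_DESTROY. This is a sketch only: the wait ioctl argument
field name is an assumption, and wait_handle is whatever value the
DRM_XEN_ZCOPY_DUMB_FROM_REFS call returned earlier.

#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <drm/xen_zcopy_drm.h>   /* header added by the proposed patch */

static int on_dbuf_destroy(int zcopy_fd, uint32_t zcopy_handle,
                           uint32_t wait_handle)
{
    struct drm_gem_close gem_close;
    struct drm_xen_zcopy_dumb_wait_free wait;   /* field name assumed */
    int ret;

    /* Drop the local GEM handle of the buffer built from the grant refs. */
    memset(&gem_close, 0, sizeof(gem_close));
    gem_close.handle = zcopy_handle;
    drmIoctl(zcopy_fd, DRM_IOCTL_GEM_CLOSE, &gem_close);

    /*
     * Block until the buffer is really freed (all importers are done),
     * so the grant references are no longer in use on the backend side.
     */
    memset(&wait, 0, sizeof(wait));
    wait.wait_handle = wait_handle;
    ret = drmIoctl(zcopy_fd, DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE, &wait);

    /* Only now is it safe to reply with XENDISPL_OP_DBUF_DESTROY. */
    return ret;
}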

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-24  8:07                                 ` Oleksandr Andrushchenko
  (?)
@ 2018-04-24  8:40                                 ` Juergen Gross
  2018-04-24  9:03                                     ` Oleksandr Andrushchenko
  2018-04-24  9:03                                   ` Oleksandr Andrushchenko
  -1 siblings, 2 replies; 131+ messages in thread
From: Juergen Gross @ 2018-04-24  8:40 UTC (permalink / raw)
  To: Oleksandr Andrushchenko, Boris Ostrovsky, Wei Liu
  Cc: Roger Pau Monné,
	Artem Mygaiev, Dongwon Kim, konrad.wilk, airlied,
	Oleksandr_Andrushchenko, linux-kernel, dri-devel, Potrola,
	MateuszX, xen-devel, daniel.vetter

On 24/04/18 10:07, Oleksandr Andrushchenko wrote:
> On 04/24/2018 10:51 AM, Juergen Gross wrote:
>> On 24/04/18 07:43, Oleksandr Andrushchenko wrote:
>>> On 04/24/2018 01:41 AM, Boris Ostrovsky wrote:
>>>> On 04/23/2018 08:10 AM, Oleksandr Andrushchenko wrote:
>>>>> On 04/23/2018 02:52 PM, Wei Liu wrote:
>>>>>> On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko
>>>>>> wrote:
>>>>>>>>>        the gntdev.
>>>>>>>>>
>>>>>>>>> I think this is generic enough that it could be implemented by a
>>>>>>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>>>>>>> something similar to this.
>>>>>>>> You can't just wrap random userspace memory into a dma-buf. We've
>>>>>>>> just had
>>>>>>>> this discussion with kvm/qemu folks, who proposed just that, and
>>>>>>>> after a
>>>>>>>> bit of discussion they'll now try to have a driver which just
>>>>>>>> wraps a
>>>>>>>> memfd into a dma-buf.
>>>>>>> So, we have to decide either we introduce a new driver
>>>>>>> (say, under drivers/xen/xen-dma-buf) or extend the existing
>>>>>>> gntdev/balloon to support dma-buf use-cases.
>>>>>>>
>>>>>>> Can anybody from Xen community express their preference here?
>>>>>>>
>>>>>> Oleksandr talked to me on IRC about this, he said a few IOCTLs
>>>>>> need to
>>>>>> be added to either existing drivers or a new driver.
>>>>>>
>>>>>> I went through this thread twice and skimmed through the relevant
>>>>>> documents, but I couldn't see any obvious pros and cons for either
>>>>>> approach. So I don't really have an opinion on this.
>>>>>>
>>>>>> But, assuming if implemented in existing drivers, those IOCTLs
>>>>>> need to
>>>>>> be added to different drivers, which means userspace program needs to
>>>>>> write more code and get more handles, it would be slightly better to
>>>>>> implement a new driver from that perspective.
>>>>> If gntdev/balloon extension is still considered:
>>>>>
>>>>> All the IOCTLs will be in gntdev driver (in current xen-zcopy
>>>>> terminology):
>>>>>    - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>>>>    - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>>>>    - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>>
>>>>> Balloon driver extension, which is needed for contiguous/DMA
>>>>> buffers, will be to provide new *kernel API*, no UAPI is needed.
>>>>>
>>>> So I am obviously a bit late to this thread, but why do you need to add
>>>> new ioctls to gntdev and balloon? Doesn't this driver manage to do what
>>>> you want without any extensions?
>>> 1. I only (may) need to add IOCTLs to gntdev
>>> 2. balloon driver needs to be extended, so it can allocate
>>> contiguous (DMA) memory, not IOCTLs/UAPI here, all lives
>>> in the kernel.
>>> 3. The reason I need to extend gnttab with new IOCTLs is to
>>> provide new functionality to create a dma-buf from grant references
>>> and to produce grant references for a dma-buf. This is what I have as
>>> UAPI
>>> description for xen-zcopy driver:
>>>
>>> 1. DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>> This will create a DRM dumb buffer from grant references provided
>>> by the frontend. The intended usage is:
>>>    - Frontend
>>>      - creates a dumb/display buffer and allocates memory
>>>      - grants foreign access to the buffer pages
>>>      - passes granted references to the backend
>>>    - Backend
>>>      - issues DRM_XEN_ZCOPY_DUMB_FROM_REFS ioctl to map
>>>        granted references and create a dumb buffer
>>>      - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
>>>      - requests real HW driver/consumer to import the PRIME buffer with
>>>        DRM_IOCTL_PRIME_FD_TO_HANDLE
>>>      - uses handle returned by the real HW driver
>>>    - at the end:
>>>      o closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>>      o closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>>      o closes file descriptor of the exported buffer
>>>
>>> 2. DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>> This will grant references to a dumb/display buffer's memory provided by
>>> the
>>> backend. The intended usage is:
>>>    - Frontend
>>>      - requests backend to allocate dumb/display buffer and grant
>>> references
>>>        to its pages
>>>    - Backend
>>>      - requests real HW driver to create a dumb with
>>> DRM_IOCTL_MODE_CREATE_DUMB
>>>      - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
>>>      - requests zero-copy driver to import the PRIME buffer with
>>>        DRM_IOCTL_PRIME_FD_TO_HANDLE
>>>      - issues DRM_XEN_ZCOPY_DUMB_TO_REFS ioctl to
>>>        grant references to the buffer's memory.
>>>      - passes grant references to the frontend
>>>   - at the end:
>>>      - closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>>      - closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>>      - closes file descriptor of the imported buffer
>>>
>>> 3. DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>>> This will block until the dumb buffer associated with the provided wait
>>> handle is freed: this is needed for synchronization between frontend and
>>> backend in case the frontend provides grant references of the buffer via
>>> the DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL, and these references must be
>>> released before the backend replies with the XENDISPL_OP_DBUF_DESTROY
>>> response.
>>> wait_handle must be the same value returned by the
>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL.
>>>
>>> So, as you can see, the above functionality is not covered by the
>>> existing UAPI of the gntdev driver.
>>> Now, if we change dumb -> dma-buf and remove the DRM code (which is only
>>> a wrapper here on top of dma-buf), we get a new dma-buf driver for Xen.
>>>
>>> This is why I have 2 options here: either create a dedicated driver for
>>> this
>>> (e.g. re-work xen-zcopy to be DRM independent and put it under
>>> drivers/xen/xen-dma-buf, for example) or extend the existing gntdev
>>> driver
>>> with the above UAPI + make changes to the balloon driver to provide
>>> kernel
>>> API for DMA buffer allocations.
>> Which user component would use the new ioctls?
> It is currently used by the display backend [1] and will
> probably be used by the hyper-dmabuf frontend/backend
> (Dongwon from Intel can provide more info on this).
>>
>> I'm asking because I'm not very fond of adding more linux specific
>> functions to libgnttab which are not related to a specific Xen version,
>> but to a kernel version.
> Hm, I was not thinking about this UAPI to be added to libgnttab.
> It seems it can be used directly w/o wrappers in user-space

Would this program use libgnttab in parallel? If yes how would the two
usage paths be combined (same applies to the separate driver, btw)? The
gntdev driver manages resources per file descriptor and libgnttab is
hiding the file descriptor it is using for a connection. Or would the
user program use only the new driver for communicating with the gntdev
driver? In this case it might be an option to extend the gntdev driver
to present a new device (e.g. "gntdmadev") for that purpose.

>>
>> So doing this in a separate driver seems to be the better option in
>> this regard.
> Well, from maintenance POV it is easier for me to have it all in
> a separate driver as all dma-buf related functionality will
> reside at one place. This also means that no changes to existing
> drivers will be needed (if it is ok to have ballooning in/out
> code for DMA buffers (allocated with dma_alloc_xxx) not in the balloon
> driver)

I think in the end this really depends on how the complete solution
will look. gntdev is a special wrapper for the gnttab driver.
In case the new dma-buf driver needs to use parts of gntdev I'd rather
have a new driver above gnttab ("gntuser"?) used by gntdev and dma-buf.


Juergen
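
To make the gntdev alternative being discussed here a little more concrete,
a gntdev-based version of the same three operations might end up looking
something like the sketch below. None of these structures or ioctl numbers
exist at this point; they simply restate the xen-zcopy UAPI in a
DRM-independent form, following the _IOC(_IOC_NONE, 'G', ...) pattern the
existing gntdev ioctls already use.

#include <linux/types.h>
#include <linux/ioctl.h>

struct ioctl_gntdev_dmabuf_exp_from_refs {
    __u32 domid;        /* domain owning the grant references          */
    __u32 count;        /* number of grant references in @refs         */
    __u64 refs;         /* userspace pointer to the grant_ref_t array  */
    __u32 fd;           /* out: dma-buf file descriptor                */
    __u32 wait_handle;  /* out: handle for the "wait until freed" call */
};

struct ioctl_gntdev_dmabuf_imp_to_refs {
    __u32 fd;           /* dma-buf to grant the other domain access to */
    __u32 domid;        /* domain the references are granted to        */
    __u32 count;        /* number of entries in @refs                  */
    __u64 refs;         /* out: userspace pointer receiving the refs   */
};

struct ioctl_gntdev_dmabuf_exp_wait_released {
    __u32 wait_handle;  /* value returned by the exp_from_refs call    */
    __u32 wait_to_ms;   /* timeout in milliseconds                     */
};

/* Hypothetical numbers, chosen only to not clash with existing gntdev ones. */
#define IOCTL_GNTDEV_DMABUF_EXP_FROM_REFS \
    _IOC(_IOC_NONE, 'G', 9, sizeof(struct ioctl_gntdev_dmabuf_exp_from_refs))
#define IOCTL_GNTDEV_DMABUF_IMP_TO_REFS \
    _IOC(_IOC_NONE, 'G', 10, sizeof(struct ioctl_gntdev_dmabuf_imp_to_refs))
#define IOCTL_GNTDEV_DMABUF_EXP_WAIT_RELEASED \
    _IOC(_IOC_NONE, 'G', 11, sizeof(struct ioctl_gntdev_dmabuf_exp_wait_released))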

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-24  8:07                                 ` Oleksandr Andrushchenko
  (?)
  (?)
@ 2018-04-24  8:40                                 ` Juergen Gross
  -1 siblings, 0 replies; 131+ messages in thread
From: Juergen Gross @ 2018-04-24  8:40 UTC (permalink / raw)
  To: Oleksandr Andrushchenko, Boris Ostrovsky, Wei Liu
  Cc: Artem Mygaiev, Dongwon Kim, Oleksandr_Andrushchenko, airlied,
	linux-kernel, dri-devel, Potrola, MateuszX, daniel.vetter,
	xen-devel, Roger Pau Monné

On 24/04/18 10:07, Oleksandr Andrushchenko wrote:
> On 04/24/2018 10:51 AM, Juergen Gross wrote:
>> On 24/04/18 07:43, Oleksandr Andrushchenko wrote:
>>> On 04/24/2018 01:41 AM, Boris Ostrovsky wrote:
>>>> On 04/23/2018 08:10 AM, Oleksandr Andrushchenko wrote:
>>>>> On 04/23/2018 02:52 PM, Wei Liu wrote:
>>>>>> On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko
>>>>>> wrote:
>>>>>>>>>        the gntdev.
>>>>>>>>>
>>>>>>>>> I think this is generic enough that it could be implemented by a
>>>>>>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>>>>>>> something similar to this.
>>>>>>>> You can't just wrap random userspace memory into a dma-buf. We've
>>>>>>>> just had
>>>>>>>> this discussion with kvm/qemu folks, who proposed just that, and
>>>>>>>> after a
>>>>>>>> bit of discussion they'll now try to have a driver which just
>>>>>>>> wraps a
>>>>>>>> memfd into a dma-buf.
>>>>>>> So, we have to decide either we introduce a new driver
>>>>>>> (say, under drivers/xen/xen-dma-buf) or extend the existing
>>>>>>> gntdev/balloon to support dma-buf use-cases.
>>>>>>>
>>>>>>> Can anybody from Xen community express their preference here?
>>>>>>>
>>>>>> Oleksandr talked to me on IRC about this, he said a few IOCTLs
>>>>>> need to
>>>>>> be added to either existing drivers or a new driver.
>>>>>>
>>>>>> I went through this thread twice and skimmed through the relevant
>>>>>> documents, but I couldn't see any obvious pros and cons for either
>>>>>> approach. So I don't really have an opinion on this.
>>>>>>
>>>>>> But, assuming if implemented in existing drivers, those IOCTLs
>>>>>> need to
>>>>>> be added to different drivers, which means userspace program needs to
>>>>>> write more code and get more handles, it would be slightly better to
>>>>>> implement a new driver from that perspective.
>>>>> If gntdev/balloon extension is still considered:
>>>>>
>>>>> All the IOCTLs will be in gntdev driver (in current xen-zcopy
>>>>> terminology):
>>>>>    - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>>>>    - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>>>>    - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>>
>>>>> Balloon driver extension, which is needed for contiguous/DMA
>>>>> buffers, will be to provide new *kernel API*, no UAPI is needed.
>>>>>
>>>> So I am obviously a bit late to this thread, but why do you need to add
>>>> new ioctls to gntdev and balloon? Doesn't this driver manage to do what
>>>> you want without any extensions?
>>> 1. I only (may) need to add IOCTLs to gntdev
>>> 2. balloon driver needs to be extended, so it can allocate
>>> contiguous (DMA) memory, not IOCTLs/UAPI here, all lives
>>> in the kernel.
>>> 3. The reason I need to extend gnttab with new IOCTLs is to
>>> provide new functionality to create a dma-buf from grant references
>>> and to produce grant references for a dma-buf. This is what I have as
>>> UAPI
>>> description for xen-zcopy driver:
>>>
>>> 1. DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>> This will create a DRM dumb buffer from grant references provided
>>> by the frontend. The intended usage is:
>>>    - Frontend
>>>      - creates a dumb/display buffer and allocates memory
>>>      - grants foreign access to the buffer pages
>>>      - passes granted references to the backend
>>>    - Backend
>>>      - issues DRM_XEN_ZCOPY_DUMB_FROM_REFS ioctl to map
>>>        granted references and create a dumb buffer
>>>      - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
>>>      - requests real HW driver/consumer to import the PRIME buffer with
>>>        DRM_IOCTL_PRIME_FD_TO_HANDLE
>>>      - uses handle returned by the real HW driver
>>>    - at the end:
>>>      o closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>>      o closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>>      o closes file descriptor of the exported buffer
>>>
>>> 2. DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>> This will grant references to a dumb/display buffer's memory provided by
>>> the
>>> backend. The intended usage is:
>>>    - Frontend
>>>      - requests backend to allocate dumb/display buffer and grant
>>> references
>>>        to its pages
>>>    - Backend
>>>      - requests real HW driver to create a dumb with
>>> DRM_IOCTL_MODE_CREATE_DUMB
>>>      - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
>>>      - requests zero-copy driver to import the PRIME buffer with
>>>        DRM_IOCTL_PRIME_FD_TO_HANDLE
>>>      - issues DRM_XEN_ZCOPY_DUMB_TO_REFS ioctl to
>>>        grant references to the buffer's memory.
>>>      - passes grant references to the frontend
>>>   - at the end:
>>>      - closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>>      - closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>>      - closes file descriptor of the imported buffer
>>>
>>> 3. DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>>> This will block until the dumb buffer associated with the provided wait
>>> handle is freed: this is needed for synchronization between frontend and
>>> backend in case the frontend provides grant references of the buffer via
>>> the DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL, and these references must be
>>> released before the backend replies with the XENDISPL_OP_DBUF_DESTROY
>>> response.
>>> wait_handle must be the same value returned by the
>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL.
>>>
>>> So, as you can see, the above functionality is not covered by the
>>> existing UAPI of the gntdev driver.
>>> Now, if we change dumb -> dma-buf and remove the DRM code (which is only
>>> a wrapper here on top of dma-buf), we get a new dma-buf driver for Xen.
>>>
>>> This is why I have 2 options here: either create a dedicated driver for
>>> this
>>> (e.g. re-work xen-zcopy to be DRM independent and put it under
>>> drivers/xen/xen-dma-buf, for example) or extend the existing gntdev
>>> driver
>>> with the above UAPI + make changes to the balloon driver to provide
>>> kernel
>>> API for DMA buffer allocations.
>> Which user component would use the new ioctls?
> It is currently used by the display backend [1] and will
> probably be used by the hyper-dmabuf frontend/backend
> (Dongwon from Intel can provide more info on this).
>>
>> I'm asking because I'm not very fond of adding more linux specific
>> functions to libgnttab which are not related to a specific Xen version,
>> but to a kernel version.
> Hm, I was not thinking about this UAPI to be added to libgnttab.
> It seems it can be used directly w/o wrappers in user-space

Would this program use libgnttab in parallel? If yes how would the two
usage paths be combined (same applies to the separate driver, btw)? The
gntdev driver manages resources per file descriptor and libgnttab is
hiding the file descriptor it is using for a connection. Or would the
user program use only the new driver for communicating with the gntdev
driver? In this case it might be an option to extend the gntdev driver
to present a new device (e.g. "gntdmadev") for that purpose.

>>
>> So doing this in a separate driver seems to be the better option in
>> this regard.
> Well, from maintenance POV it is easier for me to have it all in
> a separate driver as all dma-buf related functionality will
> reside at one place. This also means that no changes to existing
> drivers will be needed (if it is ok to have ballooning in/out
> code for DMA buffers (allocated with dma_alloc_xxx) not in the balloon
> driver)

I think in the end this really depends on how the complete solution
will look. gntdev is a special wrapper for the gnttab driver.
In case the new dma-buf driver needs to use parts of gntdev I'd rather
have a new driver above gnttab ("gntuser"?) used by gntdev and dma-buf.


Juergen
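
On the balloon side of this discussion (a kernel-internal way to hand out
contiguous, DMA-able memory for such buffers), the interface being talked
about could be as small as the sketch below. The helper and structure names
are made up purely for illustration, and the actual ballooning in/out of the
underlying frames is deliberately left out; only the dma_alloc_coherent()
and dma_free_coherent() calls are existing kernel API.

#include <linux/dma-mapping.h>

struct xen_dma_buffer {
    void *vaddr;
    dma_addr_t dma_addr;
    size_t size;
};

/* Hypothetical helper: allocate a contiguous, DMA-able backing buffer. */
static int xen_alloc_dma_backed_buffer(struct device *dev, size_t size,
                                        struct xen_dma_buffer *buf)
{
    buf->vaddr = dma_alloc_coherent(dev, size, &buf->dma_addr, GFP_KERNEL);
    if (!buf->vaddr)
        return -ENOMEM;
    buf->size = size;
    /*
     * A real implementation would also balloon out the corresponding
     * frames and prepare them for granting here; that part is omitted.
     */
    return 0;
}

static void xen_free_dma_backed_buffer(struct device *dev,
                                       struct xen_dma_buffer *buf)
{
    dma_free_coherent(dev, buf->size, buf->vaddr, buf->dma_addr);
}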

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-24  8:40                                 ` Juergen Gross
@ 2018-04-24  9:03                                     ` Oleksandr Andrushchenko
  2018-04-24  9:03                                   ` Oleksandr Andrushchenko
  1 sibling, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-24  9:03 UTC (permalink / raw)
  To: Juergen Gross, Boris Ostrovsky, Wei Liu
  Cc: Roger Pau Monné,
	Artem Mygaiev, Dongwon Kim, konrad.wilk, airlied,
	Oleksandr_Andrushchenko, linux-kernel, dri-devel, Potrola,
	MateuszX, xen-devel, daniel.vetter

On 04/24/2018 11:40 AM, Juergen Gross wrote:
> On 24/04/18 10:07, Oleksandr Andrushchenko wrote:
>> On 04/24/2018 10:51 AM, Juergen Gross wrote:
>>> On 24/04/18 07:43, Oleksandr Andrushchenko wrote:
>>>> On 04/24/2018 01:41 AM, Boris Ostrovsky wrote:
>>>>> On 04/23/2018 08:10 AM, Oleksandr Andrushchenko wrote:
>>>>>> On 04/23/2018 02:52 PM, Wei Liu wrote:
>>>>>>> On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko
>>>>>>> wrote:
>>>>>>>>>>         the gntdev.
>>>>>>>>>>
>>>>>>>>>> I think this is generic enough that it could be implemented by a
>>>>>>>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>>>>>>>> something similar to this.
>>>>>>>>> You can't just wrap random userspace memory into a dma-buf. We've
>>>>>>>>> just had
>>>>>>>>> this discussion with kvm/qemu folks, who proposed just that, and
>>>>>>>>> after a
>>>>>>>>> bit of discussion they'll now try to have a driver which just
>>>>>>>>> wraps a
>>>>>>>>> memfd into a dma-buf.
>>>>>>>> So, we have to decide either we introduce a new driver
>>>>>>>> (say, under drivers/xen/xen-dma-buf) or extend the existing
>>>>>>>> gntdev/balloon to support dma-buf use-cases.
>>>>>>>>
>>>>>>>> Can anybody from Xen community express their preference here?
>>>>>>>>
>>>>>>> Oleksandr talked to me on IRC about this, he said a few IOCTLs
>>>>>>> need to
>>>>>>> be added to either existing drivers or a new driver.
>>>>>>>
>>>>>>> I went through this thread twice and skimmed through the relevant
>>>>>>> documents, but I couldn't see any obvious pros and cons for either
>>>>>>> approach. So I don't really have an opinion on this.
>>>>>>>
>>>>>>> But, assuming if implemented in existing drivers, those IOCTLs
>>>>>>> need to
>>>>>>> be added to different drivers, which means userspace program needs to
>>>>>>> write more code and get more handles, it would be slightly better to
>>>>>>> implement a new driver from that perspective.
>>>>>> If gntdev/balloon extension is still considered:
>>>>>>
>>>>>> All the IOCTLs will be in gntdev driver (in current xen-zcopy
>>>>>> terminology):
>>>>>>     - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>>>>>     - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>>>>>     - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>>>
>>>>>> Balloon driver extension, which is needed for contiguous/DMA
>>>>>> buffers, will be to provide new *kernel API*, no UAPI is needed.
>>>>>>
>>>>> So I am obviously a bit late to this thread, but why do you need to add
>>>>> new ioctls to gntdev and balloon? Doesn't this driver manage to do what
>>>>> you want without any extensions?
>>>> 1. I only (may) need to add IOCTLs to gntdev
>>>> 2. balloon driver needs to be extended, so it can allocate
>>>> contiguous (DMA) memory, not IOCTLs/UAPI here, all lives
>>>> in the kernel.
>>>> 3. The reason I need to extend gnttab with new IOCTLs is to
>>>> provide new functionality to create a dma-buf from grant references
>>>> and to produce grant references for a dma-buf. This is what I have as
>>>> UAPI
>>>> description for xen-zcopy driver:
>>>>
>>>> 1. DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>>> This will create a DRM dumb buffer from grant references provided
>>>> by the frontend. The intended usage is:
>>>>     - Frontend
>>>>       - creates a dumb/display buffer and allocates memory
>>>>       - grants foreign access to the buffer pages
>>>>       - passes granted references to the backend
>>>>     - Backend
>>>>       - issues DRM_XEN_ZCOPY_DUMB_FROM_REFS ioctl to map
>>>>         granted references and create a dumb buffer
>>>>       - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
>>>>       - requests real HW driver/consumer to import the PRIME buffer with
>>>>         DRM_IOCTL_PRIME_FD_TO_HANDLE
>>>>       - uses handle returned by the real HW driver
>>>>     - at the end:
>>>>       o closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>       o closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>       o closes file descriptor of the exported buffer
>>>>
>>>> 2. DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>>> This will grant references to a dumb/display buffer's memory provided by
>>>> the
>>>> backend. The intended usage is:
>>>>     - Frontend
>>>>       - requests backend to allocate dumb/display buffer and grant
>>>> references
>>>>         to its pages
>>>>     - Backend
>>>>       - requests real HW driver to create a dumb with
>>>> DRM_IOCTL_MODE_CREATE_DUMB
>>>>       - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
>>>>       - requests zero-copy driver to import the PRIME buffer with
>>>>         DRM_IOCTL_PRIME_FD_TO_HANDLE
>>>>       - issues DRM_XEN_ZCOPY_DUMB_TO_REFS ioctl to
>>>>         grant references to the buffer's memory.
>>>>       - passes grant references to the frontend
>>>>    - at the end:
>>>>       - closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>       - closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>       - closes file descriptor of the imported buffer
>>>>
>>>> 3. DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>>>> This will block until the dumb buffer associated with the provided wait
>>>> handle is freed: this is needed for synchronization between frontend and
>>>> backend in case the frontend provides grant references of the buffer via
>>>> the DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL, and these references must be
>>>> released before the backend replies with the XENDISPL_OP_DBUF_DESTROY
>>>> response.
>>>> wait_handle must be the same value returned by the
>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL.
>>>>
>>>> So, as you can see, the above functionality is not covered by the
>>>> existing UAPI of the gntdev driver.
>>>> Now, if we change dumb -> dma-buf and remove the DRM code (which is only
>>>> a wrapper here on top of dma-buf), we get a new dma-buf driver for Xen.
>>>>
>>>> This is why I have 2 options here: either create a dedicated driver for
>>>> this
>>>> (e.g. re-work xen-zcopy to be DRM independent and put it under
>>>> drivers/xen/xen-dma-buf, for example) or extend the existing gntdev
>>>> driver
>>>> with the above UAPI + make changes to the balloon driver to provide
>>>> kernel
>>>> API for DMA buffer allocations.
>>> Which user component would use the new ioctls?
>> It is currently used by the display backend [1] and will
>> probably be used by the hyper-dmabuf frontend/backend
>> (Dongwon from Intel can provide more info on this).
>>> I'm asking because I'm not very fond of adding more linux specific
>>> functions to libgnttab which are not related to a specific Xen version,
>>> but to a kernel version.
>> Hm, I was not thinking about this UAPI to be added to libgnttab.
>> It seems it can be used directly w/o wrappers in user-space
> Would this program use libgnttab in parallel?
In the case of the display backend - yes: for shared rings and for
extracting grefs from the displif protocol it uses gntdev via a
helper library [1].
>   If yes how would the two
> usage paths be combined (same applies to the separate driver, btw)? The
> gntdev driver manages resources per file descriptor and libgnttab is
> hiding the file descriptor it is using for a connection.
Ah, at the moment the UAPI was not used in parallel, as there were
2 drivers for that: gntdev + xen-zcopy with different UAPIs.
But now, if we extend gntdev with the new API, then you are right:
either libgnttab needs to be extended or that new part of the
gntdev UAPI needs to be open-coded by the backend.
>   Or would the
> user program use only the new driver for communicating with the gntdev
> driver? In this case it might be an option to extend the gntdev driver
> to present a new device (e.g. "gntdmadev") for that purpose.
No, it seems that libgnttab and this new driver's UAPI will be used
in parallel
>>> So doing this in a separate driver seems to be the better option in
>>> this regard.
>> Well, from maintenance POV it is easier for me to have it all in
>> a separate driver as all dma-buf related functionality will
>> reside at one place. This also means that no changes to existing
>> drivers will be needed (if it is ok to have ballooning in/out
>> code for DMA buffers (allocated with dma_alloc_xxx) not in the balloon
>> driver)
> I think in the end this really depends on how the complete solution
> will look like. gntdev is a special wrapper for the gnttab driver.
> In case the new dma-buf driver needs to use parts of gntdev I'd rather
> have a new driver above gnttab ("gntuser"?) used by gntdev and dma-buf.
The new driver doesn't use gntdev's existing API, but extends it,
e.g. by adding new ways to export/import grefs for a dma-buf and to
manage the dma-buf's kernel ops. Thus gntdev, which already provides
a UAPI, seems to be a good candidate for such an extension.
>
> Juergen
[1] https://github.com/xen-troops/libxenbe
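
As an illustration of the two paths being used in parallel, the backend could
keep using the existing libxengnttab interface for its shared rings while
opening the gntdev node itself for the proposed dma-buf ioctls. In the sketch
below only the xengnttab_* calls are existing API; the dma-buf ioctl named in
the comment is the hypothetical extension discussed in this thread.

#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <xengnttab.h>

static int backend_open_paths(uint32_t otherend_id, uint32_t ring_ref)
{
    /* Path 1: shared ring mapped via the existing libgnttab interface. */
    xengnttab_handle *xgt = xengnttab_open(NULL, 0);
    void *ring = xgt ? xengnttab_map_grant_ref(xgt, otherend_id, ring_ref,
                                               PROT_READ | PROT_WRITE) : NULL;

    /* Path 2: a separate gntdev fd for the proposed dma-buf ioctls. */
    int gntdev_fd = open("/dev/xen/gntdev", O_RDWR | O_CLOEXEC);
    /* ... ioctl(gntdev_fd, IOCTL_GNTDEV_DMABUF_EXP_FROM_REFS, ...) ... */

    /*
     * Note the point raised above: the two paths do not share state, as
     * gntdev tracks its resources per file descriptor while libgnttab
     * hides the descriptor it opened internally.
     */
    return (ring && gntdev_fd >= 0) ? 0 : -1;
}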

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
@ 2018-04-24  9:03                                     ` Oleksandr Andrushchenko
  0 siblings, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-24  9:03 UTC (permalink / raw)
  To: Juergen Gross, Boris Ostrovsky, Wei Liu
  Cc: Artem Mygaiev, Dongwon Kim, Oleksandr_Andrushchenko, airlied,
	konrad.wilk, linux-kernel, dri-devel, Potrola, MateuszX,
	daniel.vetter, xen-devel, Roger Pau Monné

On 04/24/2018 11:40 AM, Juergen Gross wrote:
> On 24/04/18 10:07, Oleksandr Andrushchenko wrote:
>> On 04/24/2018 10:51 AM, Juergen Gross wrote:
>>> On 24/04/18 07:43, Oleksandr Andrushchenko wrote:
>>>> On 04/24/2018 01:41 AM, Boris Ostrovsky wrote:
>>>>> On 04/23/2018 08:10 AM, Oleksandr Andrushchenko wrote:
>>>>>> On 04/23/2018 02:52 PM, Wei Liu wrote:
>>>>>>> On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko
>>>>>>> wrote:
>>>>>>>>>>         the gntdev.
>>>>>>>>>>
>>>>>>>>>> I think this is generic enough that it could be implemented by a
>>>>>>>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>>>>>>>> something similar to this.
>>>>>>>>> You can't just wrap random userspace memory into a dma-buf. We've
>>>>>>>>> just had
>>>>>>>>> this discussion with kvm/qemu folks, who proposed just that, and
>>>>>>>>> after a
>>>>>>>>> bit of discussion they'll now try to have a driver which just
>>>>>>>>> wraps a
>>>>>>>>> memfd into a dma-buf.
>>>>>>>> So, we have to decide either we introduce a new driver
>>>>>>>> (say, under drivers/xen/xen-dma-buf) or extend the existing
>>>>>>>> gntdev/balloon to support dma-buf use-cases.
>>>>>>>>
>>>>>>>> Can anybody from Xen community express their preference here?
>>>>>>>>
>>>>>>> Oleksandr talked to me on IRC about this, he said a few IOCTLs
>>>>>>> need to
>>>>>>> be added to either existing drivers or a new driver.
>>>>>>>
>>>>>>> I went through this thread twice and skimmed through the relevant
>>>>>>> documents, but I couldn't see any obvious pros and cons for either
>>>>>>> approach. So I don't really have an opinion on this.
>>>>>>>
>>>>>>> But, assuming if implemented in existing drivers, those IOCTLs
>>>>>>> need to
>>>>>>> be added to different drivers, which means userspace program needs to
>>>>>>> write more code and get more handles, it would be slightly better to
>>>>>>> implement a new driver from that perspective.
>>>>>> If gntdev/balloon extension is still considered:
>>>>>>
>>>>>> All the IOCTLs will be in gntdev driver (in current xen-zcopy
>>>>>> terminology):
>>>>>>     - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>>>>>     - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>>>>>     - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>>>
>>>>>> Balloon driver extension, which is needed for contiguous/DMA
>>>>>> buffers, will be to provide new *kernel API*, no UAPI is needed.
>>>>>>
>>>>> So I am obviously a bit late to this thread, but why do you need to add
>>>>> new ioctls to gntdev and balloon? Doesn't this driver manage to do what
>>>>> you want without any extensions?
>>>> 1. I only (may) need to add IOCTLs to gntdev
>>>> 2. balloon driver needs to be extended, so it can allocate
>>>> contiguous (DMA) memory, not IOCTLs/UAPI here, all lives
>>>> in the kernel.
>>>> 3. The reason I need to extend gnttab with new IOCTLs is to
>>>> provide new functionality to create a dma-buf from grant references
>>>> and to produce grant references for a dma-buf. This is what I have as
>>>> UAPI
>>>> description for xen-zcopy driver:
>>>>
>>>> 1. DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>>> This will create a DRM dumb buffer from grant references provided
>>>> by the frontend. The intended usage is:
>>>>     - Frontend
>>>>       - creates a dumb/display buffer and allocates memory
>>>>       - grants foreign access to the buffer pages
>>>>       - passes granted references to the backend
>>>>     - Backend
>>>>       - issues DRM_XEN_ZCOPY_DUMB_FROM_REFS ioctl to map
>>>>         granted references and create a dumb buffer
>>>>       - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
>>>>       - requests real HW driver/consumer to import the PRIME buffer with
>>>>         DRM_IOCTL_PRIME_FD_TO_HANDLE
>>>>       - uses handle returned by the real HW driver
>>>>     - at the end:
>>>>       o closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>       o closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>       o closes file descriptor of the exported buffer
>>>>
>>>> 2. DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>>> This will grant references to a dumb/display buffer's memory provided by
>>>> the
>>>> backend. The intended usage is:
>>>>     - Frontend
>>>>       - requests backend to allocate dumb/display buffer and grant
>>>> references
>>>>         to its pages
>>>>     - Backend
>>>>       - requests real HW driver to create a dumb with
>>>> DRM_IOCTL_MODE_CREATE_DUMB
>>>>       - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
>>>>       - requests zero-copy driver to import the PRIME buffer with
>>>>         DRM_IOCTL_PRIME_FD_TO_HANDLE
>>>>       - issues DRM_XEN_ZCOPY_DUMB_TO_REFS ioctl to
>>>>         grant references to the buffer's memory.
>>>>       - passes grant references to the frontend
>>>>    - at the end:
>>>>       - closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>       - closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>       - closes file descriptor of the imported buffer
>>>>
>>>> 3. DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>>>> This will block until the dumb buffer associated with the provided wait
>>>> handle is freed: this is needed for synchronization between frontend and
>>>> backend in case the frontend provides grant references of the buffer via
>>>> the DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL, and these references must be
>>>> released before the backend replies with the XENDISPL_OP_DBUF_DESTROY
>>>> response.
>>>> wait_handle must be the same value returned by the
>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL.
>>>>
>>>> So, as you can see, the above functionality is not covered by the
>>>> existing UAPI of the gntdev driver.
>>>> Now, if we change dumb -> dma-buf and remove the DRM code (which is only
>>>> a wrapper here on top of dma-buf), we get a new dma-buf driver for Xen.
>>>>
>>>> This is why I have 2 options here: either create a dedicated driver for
>>>> this
>>>> (e.g. re-work xen-zcopy to be DRM independent and put it under
>>>> drivers/xen/xen-dma-buf, for example) or extend the existing gntdev
>>>> driver
>>>> with the above UAPI + make changes to the balloon driver to provide
>>>> kernel
>>>> API for DMA buffer allocations.
>>> Which user component would use the new ioctls?
>> It is currently used by the display backend [1] and will
>> probably be used by the hyper-dmabuf frontend/backend
>> (Dongwon from Intel can provide more info on this).
>>> I'm asking because I'm not very fond of adding more linux specific
>>> functions to libgnttab which are not related to a specific Xen version,
>>> but to a kernel version.
>> Hm, I was not thinking about this UAPI to be added to libgnttab.
>> It seems it can be used directly w/o wrappers in user-space
> Would this program use libgnttab in parallel?
In the case of the display backend - yes: for shared rings and for
extracting grefs from the displif protocol it uses gntdev via a
helper library [1].
>   If yes how would the two
> usage paths be combined (same applies to the separate driver, btw)? The
> gntdev driver manages resources per file descriptor and libgnttab is
> hiding the file descriptor it is using for a connection.
Ah, at the moment the UAPI was not used in parallel, as there were
2 drivers for that: gntdev + xen-zcopy with different UAPIs.
But now, if we extend gntdev with the new API, then you are right:
either libgnttab needs to be extended or that new part of the
gntdev UAPI needs to be open-coded by the backend.
>   Or would the
> user program use only the new driver for communicating with the gntdev
> driver? In this case it might be an option to extend the gntdev driver
> to present a new device (e.g. "gntdmadev") for that purpose.
No, it seems that libgnttab and this new driver's UAPI will be used
in parallel
>>> So doing this in a separate driver seems to be the better option in
>>> this regard.
>> Well, from maintenance POV it is easier for me to have it all in
>> a separate driver as all dma-buf related functionality will
>> reside at one place. This also means that no changes to existing
>> drivers will be needed (if it is ok to have ballooning in/out
>> code for DMA buffers (allocated with dma_alloc_xxx) not in the balloon
>> driver)
> I think in the end this really depends on how the complete solution
> will look like. gntdev is a special wrapper for the gnttab driver.
> In case the new dma-buf driver needs to use parts of gntdev I'd rather
> have a new driver above gnttab ("gntuser"?) used by gntdev and dma-buf.
The new driver doesn't use gntdev's existing API, but extends it,
e.g. by adding new ways to export/import grefs for a dma-buf and to
manage the dma-buf's kernel ops. Thus gntdev, which already provides
a UAPI, seems to be a good candidate for such an extension.
>
> Juergen
[1] https://github.com/xen-troops/libxenbe
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-24  8:40                                 ` Juergen Gross
  2018-04-24  9:03                                     ` Oleksandr Andrushchenko
@ 2018-04-24  9:03                                   ` Oleksandr Andrushchenko
  1 sibling, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-24  9:03 UTC (permalink / raw)
  To: Juergen Gross, Boris Ostrovsky, Wei Liu
  Cc: Artem Mygaiev, Dongwon Kim, Oleksandr_Andrushchenko, airlied,
	linux-kernel, dri-devel, Potrola, MateuszX, daniel.vetter,
	xen-devel, Roger Pau Monné

On 04/24/2018 11:40 AM, Juergen Gross wrote:
> On 24/04/18 10:07, Oleksandr Andrushchenko wrote:
>> On 04/24/2018 10:51 AM, Juergen Gross wrote:
>>> On 24/04/18 07:43, Oleksandr Andrushchenko wrote:
>>>> On 04/24/2018 01:41 AM, Boris Ostrovsky wrote:
>>>>> On 04/23/2018 08:10 AM, Oleksandr Andrushchenko wrote:
>>>>>> On 04/23/2018 02:52 PM, Wei Liu wrote:
>>>>>>> On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko
>>>>>>> wrote:
>>>>>>>>>>         the gntdev.
>>>>>>>>>>
>>>>>>>>>> I think this is generic enough that it could be implemented by a
>>>>>>>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>>>>>>>> something similar to this.
>>>>>>>>> You can't just wrap random userspace memory into a dma-buf. We've
>>>>>>>>> just had
>>>>>>>>> this discussion with kvm/qemu folks, who proposed just that, and
>>>>>>>>> after a
>>>>>>>>> bit of discussion they'll now try to have a driver which just
>>>>>>>>> wraps a
>>>>>>>>> memfd into a dma-buf.
>>>>>>>> So, we have to decide either we introduce a new driver
>>>>>>>> (say, under drivers/xen/xen-dma-buf) or extend the existing
>>>>>>>> gntdev/balloon to support dma-buf use-cases.
>>>>>>>>
>>>>>>>> Can anybody from Xen community express their preference here?
>>>>>>>>
>>>>>>> Oleksandr talked to me on IRC about this, he said a few IOCTLs
>>>>>>> need to
>>>>>>> be added to either existing drivers or a new driver.
>>>>>>>
>>>>>>> I went through this thread twice and skimmed through the relevant
>>>>>>> documents, but I couldn't see any obvious pros and cons for either
>>>>>>> approach. So I don't really have an opinion on this.
>>>>>>>
>>>>>>> But, assuming if implemented in existing drivers, those IOCTLs
>>>>>>> need to
>>>>>>> be added to different drivers, which means userspace program needs to
>>>>>>> write more code and get more handles, it would be slightly better to
>>>>>>> implement a new driver from that perspective.
>>>>>> If gntdev/balloon extension is still considered:
>>>>>>
>>>>>> All the IOCTLs will be in gntdev driver (in current xen-zcopy
>>>>>> terminology):
>>>>>>     - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>>>>>     - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>>>>>     - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>>>
>>>>>> Balloon driver extension, which is needed for contiguous/DMA
>>>>>> buffers, will be to provide new *kernel API*, no UAPI is needed.
>>>>>>
>>>>> So I am obviously a bit late to this thread, but why do you need to add
>>>>> new ioctls to gntdev and balloon? Doesn't this driver manage to do what
>>>>> you want without any extensions?
>>>> 1. I only (may) need to add IOCTLs to gntdev
>>>> 2. balloon driver needs to be extended, so it can allocate
>>>> contiguous (DMA) memory, not IOCTLs/UAPI here, all lives
>>>> in the kernel.
>>>> 3. The reason I need to extend gnttab with new IOCTLs is to
>>>> provide new functionality to create a dma-buf from grant references
>>>> and to produce grant references for a dma-buf. This is what I have as
>>>> UAPI
>>>> description for xen-zcopy driver:
>>>>
>>>> 1. DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>>> This will create a DRM dumb buffer from grant references provided
>>>> by the frontend. The intended usage is:
>>>>     - Frontend
>>>>       - creates a dumb/display buffer and allocates memory
>>>>       - grants foreign access to the buffer pages
>>>>       - passes granted references to the backend
>>>>     - Backend
>>>>       - issues DRM_XEN_ZCOPY_DUMB_FROM_REFS ioctl to map
>>>>         granted references and create a dumb buffer
>>>>       - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
>>>>       - requests real HW driver/consumer to import the PRIME buffer with
>>>>         DRM_IOCTL_PRIME_FD_TO_HANDLE
>>>>       - uses handle returned by the real HW driver
>>>>     - at the end:
>>>>       o closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>       o closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>       o closes file descriptor of the exported buffer
>>>>
>>>> 2. DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>>> This will grant references to a dumb/display buffer's memory provided by
>>>> the
>>>> backend. The intended usage is:
>>>>     - Frontend
>>>>       - requests backend to allocate dumb/display buffer and grant
>>>> references
>>>>         to its pages
>>>>     - Backend
>>>>       - requests real HW driver to create a dumb with
>>>> DRM_IOCTL_MODE_CREATE_DUMB
>>>>       - requests handle to fd conversion via DRM_IOCTL_PRIME_HANDLE_TO_FD
>>>>       - requests zero-copy driver to import the PRIME buffer with
>>>>         DRM_IOCTL_PRIME_FD_TO_HANDLE
>>>>       - issues DRM_XEN_ZCOPY_DUMB_TO_REFS ioctl to
>>>>         grant references to the buffer's memory.
>>>>       - passes grant references to the frontend
>>>>    - at the end:
>>>>       - closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>       - closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>       - closes file descriptor of the imported buffer
>>>>
>>>> 3. DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>>>> This will block until the dumb buffer with the wait handle provided be
>>>> freed:
>>>> this is needed for synchronization between frontend and backend in case
>>>> frontend provides grant references of the buffer via
>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL and which must be released before
>>>> backend replies with XENDISPL_OP_DBUF_DESTROY response.
>>>> wait_handle must be the same value returned while calling
>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL.
>>>>
>>>> So, as you can see the above functionality is not covered by the
>>>> existing UAPI
>>>> of the gntdev driver.
>>>> Now, if we change dumb -> dma-buf and remove DRM code (which is only a
>>>> wrapper
>>>> here on top of dma-buf) we get a new driver for dma-buf for Xen.
>>>>
>>>> This is why I have 2 options here: either create a dedicated driver for
>>>> this
>>>> (e.g. re-work xen-zcopy to be DRM independent and put it under
>>>> drivers/xen/xen-dma-buf, for example) or extend the existing gntdev
>>>> driver
>>>> with the above UAPI + make changes to the balloon driver to provide
>>>> kernel
>>>> API for DMA buffer allocations.
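
To make the DRM_XEN_ZCOPY_DUMB_FROM_REFS flow described above more
concrete, a minimal backend-side sketch could look like the following.
The zcopy request layout and ioctl number here are assumptions made up
for illustration only (the real definitions live in the proposed
include/uapi/drm/xen_zcopy_drm.h); the PRIME ioctls are standard DRM UAPI.

#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/drm.h>

/* Hypothetical request layout, for illustration only. */
struct zcopy_dumb_from_refs {
	uint32_t num_grefs;          /* in: number of grant references */
	uint32_t otherend_id;        /* in: frontend domain id */
	uint64_t grefs;              /* in: pointer to grant references */
	uint32_t width, height, bpp; /* in: buffer geometry */
	uint32_t handle;             /* out: GEM handle in the zcopy driver */
	uint32_t wait_handle;        /* out: for DUMB_WAIT_FREE later */
};
#define DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS \
	DRM_IOWR(DRM_COMMAND_BASE + 0, struct zcopy_dumb_from_refs)

/* Map frontend grefs into a dumb buffer and hand it to the HW driver.
 * NB: error unwinding omitted for brevity. */
static int backend_import(int zcopy_fd, int hw_fd, uint32_t domid,
			  uint32_t *grefs, uint32_t num_grefs,
			  uint32_t w, uint32_t h, uint32_t bpp,
			  uint32_t *zcopy_handle, uint32_t *hw_handle,
			  uint32_t *wait_handle, int *dmabuf_fd)
{
	struct zcopy_dumb_from_refs req = {
		.num_grefs   = num_grefs,
		.otherend_id = domid,
		.grefs       = (uintptr_t)grefs,
		.width = w, .height = h, .bpp = bpp,
	};
	struct drm_prime_handle prime = { .flags = DRM_CLOEXEC };

	/* 1. Create a dumb buffer backed by the frontend's grant refs. */
	if (ioctl(zcopy_fd, DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS, &req) < 0)
		return -1;

	/* 2. Export it from the zcopy driver as a PRIME (dma-buf) fd. */
	prime.handle = req.handle;
	if (ioctl(zcopy_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &prime) < 0)
		return -1;

	/* 3. Import the dma-buf into the real HW DRM driver. */
	if (ioctl(hw_fd, DRM_IOCTL_PRIME_FD_TO_HANDLE, &prime) < 0)
		return -1;

	*zcopy_handle = req.handle;
	*hw_handle    = prime.handle; /* used for scanout/composition */
	*wait_handle  = req.wait_handle;
	*dmabuf_fd    = prime.fd;
	return 0;
}

At buffer destruction time the backend closes both GEM handles with
DRM_IOCTL_GEM_CLOSE, closes the dma-buf fd and, before replying to
XENDISPL_OP_DBUF_DESTROY, blocks on DRM_XEN_ZCOPY_DUMB_WAIT_FREE with
the wait_handle returned above. The DRM_XEN_ZCOPY_DUMB_TO_REFS direction
is symmetric: the buffer is created in the real HW driver with
DRM_IOCTL_MODE_CREATE_DUMB, exported with DRM_IOCTL_PRIME_HANDLE_TO_FD,
imported into the zcopy driver with DRM_IOCTL_PRIME_FD_TO_HANDLE and
only then granted to the frontend.
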
>>> Which user component would use the new ioctls?
>> It is currently used by the display backend [1] and will
>> probably be used by the hyper-dmabuf frontend/backend
>> (Dongwon from Intel can provide more info on this).
>>> I'm asking because I'm not very fond of adding more linux specific
>>> functions to libgnttab which are not related to a specific Xen version,
>>> but to a kernel version.
>> Hm, I was not thinking about this UAPI to be added to libgnttab.
>> It seems it can be used directly w/o wrappers in user-space
> Would this program use libgnttab in parallel?
In case of the display backend - yes: for the shared rings and for
extracting grefs from the displif protocol it uses gntdev via a
helper library [1]
>   If yes how would the two
> usage paths be combined (same applies to the separate driver, btw)? The
> gntdev driver manages resources per file descriptor and libgnttab is
> hiding the file descriptor it is using for a connection.
Ah, at the moment the UAPI was not used in parallel, as there were
2 drivers for that: gntdev + xen-zcopy with different UAPIs.
But now, if we extend gntdev with the new API, then you are right:
either libgnttab needs to be extended or that new part of the
gntdev UAPI needs to be open-coded by the backend
>   Or would the
> user program use only the new driver for communicating with the gntdev
> driver? In this case it might be an option to extend the gntdev driver
> to present a new device (e.g. "gntdmadev") for that purpose.
No, it seems that libgnttab and this new driver's UAPI will be used
in parallel
>>> So doing this in a separate driver seems to be the better option in
>>> this regard.
>> Well, from maintenance POV it is easier for me to have it all in
>> a separate driver as all dma-buf related functionality will
>> reside at one place. This also means that no changes to existing
>> drivers will be needed (if it is ok to have ballooning in/out
>> code for DMA buffers (allocated with dma_alloc_xxx) not in the balloon
>> driver)
> I think in the end this really depends on how the complete solution
> will look like. gntdev is a special wrapper for the gnttab driver.
> In case the new dma-buf driver needs to use parts of gntdev I'd rather
> have a new driver above gnttab ("gntuser"?) used by gntdev and dma-buf.
The new driver doesn't use gntdev's existing API, but extends it,
e.g. by adding new ways to export/import grefs for a dma-buf and
to manage the dma-buf's kernel ops. Thus gntdev, which already
provides a UAPI, seems to be a good candidate for such an extension
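
As a purely illustrative sketch (none of these structures, names or
numbers exist in the current gntdev UAPI, and the final interface would
be whatever survives review), such an extension could add a pair of
ioctls along these lines:

#include <linux/types.h>
#include <linux/ioctl.h>

/* Hypothetical: turn grant references from @domid into a dma-buf fd. */
struct ioctl_gntdev_dmabuf_exp_from_refs {
	__u32 domid;          /* in: domain owning the grants */
	__u32 count;          /* in: number of grant references */
	__u32 fd;             /* out: dma-buf file descriptor */
	__u32 release_handle; /* out: handle to wait for buffer release */
	__u32 refs[];         /* in: grant references */
};
#define IOCTL_GNTDEV_DMABUF_EXP_FROM_REFS \
	_IOC(_IOC_NONE, 'G', 9, \
	     sizeof(struct ioctl_gntdev_dmabuf_exp_from_refs))

/* Hypothetical: grant @domid access to the pages backing dma-buf @fd. */
struct ioctl_gntdev_dmabuf_imp_to_refs {
	__u32 fd;             /* in: dma-buf file descriptor */
	__u32 domid;          /* in: domain to grant access to */
	__u32 count;          /* in: number of pages/references */
	__u32 refs[];         /* out: grant references */
};
#define IOCTL_GNTDEV_DMABUF_IMP_TO_REFS \
	_IOC(_IOC_NONE, 'G', 10, \
	     sizeof(struct ioctl_gntdev_dmabuf_imp_to_refs))

The backend would then issue these against /dev/xen/gntdev and pass the
resulting dma-buf fd to the consumer (e.g. the real HW DRM driver via
DRM_IOCTL_PRIME_FD_TO_HANDLE), with the balloon-side DMA allocation
staying a kernel-internal API as noted above.
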
>
> Juergen
[1] https://github.com/xen-troops/libxenbe

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-24  9:03                                     ` Oleksandr Andrushchenko
  (?)
@ 2018-04-24  9:08                                     ` Juergen Gross
  2018-04-24  9:13                                       ` Oleksandr Andrushchenko
                                                         ` (3 more replies)
  -1 siblings, 4 replies; 131+ messages in thread
From: Juergen Gross @ 2018-04-24  9:08 UTC (permalink / raw)
  To: Oleksandr Andrushchenko, Boris Ostrovsky, Wei Liu
  Cc: Roger Pau Monné,
	Artem Mygaiev, Dongwon Kim, konrad.wilk, airlied,
	Oleksandr_Andrushchenko, linux-kernel, dri-devel, Potrola,
	MateuszX, xen-devel, daniel.vetter

On 24/04/18 11:03, Oleksandr Andrushchenko wrote:
> On 04/24/2018 11:40 AM, Juergen Gross wrote:
>> On 24/04/18 10:07, Oleksandr Andrushchenko wrote:
>>> On 04/24/2018 10:51 AM, Juergen Gross wrote:
>>>> On 24/04/18 07:43, Oleksandr Andrushchenko wrote:
>>>>> On 04/24/2018 01:41 AM, Boris Ostrovsky wrote:
>>>>>> On 04/23/2018 08:10 AM, Oleksandr Andrushchenko wrote:
>>>>>>> On 04/23/2018 02:52 PM, Wei Liu wrote:
>>>>>>>> On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko
>>>>>>>> wrote:
>>>>>>>>>>>         the gntdev.
>>>>>>>>>>>
>>>>>>>>>>> I think this is generic enough that it could be implemented by a
>>>>>>>>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>>>>>>>>> something similar to this.
>>>>>>>>>> You can't just wrap random userspace memory into a dma-buf. We've
>>>>>>>>>> just had
>>>>>>>>>> this discussion with kvm/qemu folks, who proposed just that, and
>>>>>>>>>> after a
>>>>>>>>>> bit of discussion they'll now try to have a driver which just
>>>>>>>>>> wraps a
>>>>>>>>>> memfd into a dma-buf.
>>>>>>>>> So, we have to decide either we introduce a new driver
>>>>>>>>> (say, under drivers/xen/xen-dma-buf) or extend the existing
>>>>>>>>> gntdev/balloon to support dma-buf use-cases.
>>>>>>>>>
>>>>>>>>> Can anybody from Xen community express their preference here?
>>>>>>>>>
>>>>>>>> Oleksandr talked to me on IRC about this, he said a few IOCTLs
>>>>>>>> need to
>>>>>>>> be added to either existing drivers or a new driver.
>>>>>>>>
>>>>>>>> I went through this thread twice and skimmed through the relevant
>>>>>>>> documents, but I couldn't see any obvious pros and cons for either
>>>>>>>> approach. So I don't really have an opinion on this.
>>>>>>>>
>>>>>>>> But, assuming if implemented in existing drivers, those IOCTLs
>>>>>>>> need to
>>>>>>>> be added to different drivers, which means userspace program
>>>>>>>> needs to
>>>>>>>> write more code and get more handles, it would be slightly
>>>>>>>> better to
>>>>>>>> implement a new driver from that perspective.
>>>>>>> If gntdev/balloon extension is still considered:
>>>>>>>
>>>>>>> All the IOCTLs will be in gntdev driver (in current xen-zcopy
>>>>>>> terminology):
>>>>>>>     - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>>>>>>     - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>>>>>>     - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>>>>
>>>>>>> Balloon driver extension, which is needed for contiguous/DMA
>>>>>>> buffers, will be to provide new *kernel API*, no UAPI is needed.
>>>>>>>
>>>>>> So I am obviously a bit late to this thread, but why do you need
>>>>>> to add
>>>>>> new ioctls to gntdev and balloon? Doesn't this driver manage to do
>>>>>> what
>>>>>> you want without any extensions?
>>>>> 1. I only (may) need to add IOCTLs to gntdev
>>>>> 2. balloon driver needs to be extended, so it can allocate
>>>>> contiguous (DMA) memory, not IOCTLs/UAPI here, all lives
>>>>> in the kernel.
>>>>> 3. The reason I need to extend gnttab with new IOCTLs is to
>>>>> provide new functionality to create a dma-buf from grant references
>>>>> and to produce grant references for a dma-buf. This is what I have as
>>>>> UAPI
>>>>> description for xen-zcopy driver:
>>>>>
>>>>> 1. DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>>>> This will create a DRM dumb buffer from grant references provided
>>>>> by the frontend. The intended usage is:
>>>>>     - Frontend
>>>>>       - creates a dumb/display buffer and allocates memory
>>>>>       - grants foreign access to the buffer pages
>>>>>       - passes granted references to the backend
>>>>>     - Backend
>>>>>       - issues DRM_XEN_ZCOPY_DUMB_FROM_REFS ioctl to map
>>>>>         granted references and create a dumb buffer
>>>>>       - requests handle to fd conversion via
>>>>> DRM_IOCTL_PRIME_HANDLE_TO_FD
>>>>>       - requests real HW driver/consumer to import the PRIME buffer
>>>>> with
>>>>>         DRM_IOCTL_PRIME_FD_TO_HANDLE
>>>>>       - uses handle returned by the real HW driver
>>>>>     - at the end:
>>>>>       o closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>>       o closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>>       o closes file descriptor of the exported buffer
>>>>>
>>>>> 2. DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>>>> This will grant references to a dumb/display buffer's memory
>>>>> provided by
>>>>> the
>>>>> backend. The intended usage is:
>>>>>     - Frontend
>>>>>       - requests backend to allocate dumb/display buffer and grant
>>>>> references
>>>>>         to its pages
>>>>>     - Backend
>>>>>       - requests real HW driver to create a dumb with
>>>>> DRM_IOCTL_MODE_CREATE_DUMB
>>>>>       - requests handle to fd conversion via
>>>>> DRM_IOCTL_PRIME_HANDLE_TO_FD
>>>>>       - requests zero-copy driver to import the PRIME buffer with
>>>>>         DRM_IOCTL_PRIME_FD_TO_HANDLE
>>>>>       - issues DRM_XEN_ZCOPY_DUMB_TO_REFS ioctl to
>>>>>         grant references to the buffer's memory.
>>>>>       - passes grant references to the frontend
>>>>>    - at the end:
>>>>>       - closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>>       - closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>>       - closes file descriptor of the imported buffer
>>>>>
>>>>> 3. DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>> This will block until the dumb buffer with the wait handle provided be
>>>>> freed:
>>>>> this is needed for synchronization between frontend and backend in
>>>>> case
>>>>> frontend provides grant references of the buffer via
>>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL and which must be released before
>>>>> backend replies with XENDISPL_OP_DBUF_DESTROY response.
>>>>> wait_handle must be the same value returned while calling
>>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL.
>>>>>
>>>>> So, as you can see the above functionality is not covered by the
>>>>> existing UAPI
>>>>> of the gntdev driver.
>>>>> Now, if we change dumb -> dma-buf and remove DRM code (which is only a
>>>>> wrapper
>>>>> here on top of dma-buf) we get new driver for dma-buf for Xen.
>>>>>
>>>>> This is why I have 2 options here: either create a dedicated driver
>>>>> for
>>>>> this
>>>>> (e.g. re-work xen-zcopy to be DRM independent and put it under
>>>>> drivers/xen/xen-dma-buf, for example) or extend the existing gntdev
>>>>> driver
>>>>> with the above UAPI + make changes to the balloon driver to provide
>>>>> kernel
>>>>> API for DMA buffer allocations.
>>>> Which user component would use the new ioctls?
>>> It is currently used by the display backend [1] and will
>>> probably be used by the hyper-dmabuf frontend/backend
>>> (Dongwon from Intel can provide more info on this).
>>>> I'm asking because I'm not very fond of adding more linux specific
>>>> functions to libgnttab which are not related to a specific Xen version,
>>>> but to a kernel version.
>>> Hm, I was not thinking about this UAPI to be added to libgnttab.
>>> It seems it can be used directly w/o wrappers in user-space
>> Would this program use libgnttab in parallel?
> In case of the display backend - yes, for shared rings,
> extracting grefs from displif protocol it uses gntdev via
> helper library [1]
>>   If yes how would the two
>> usage paths be combined (same applies to the separate driver, btw)? The
>> gntdev driver manages resources per file descriptor and libgnttab is
>> hiding the file descriptor it is using for a connection.
> Ah, at the moment the UAPI was not used in parallel as there were
> 2 drivers for that: gntdev + xen-zcopy with different UAPIs.
> But now, if we extend gntdev with the new API then you are right:
> either libgnttab needs to be extended or that new part of the
> gntdev UAPI needs to be open-coded by the backend
>>   Or would the
>> user program use only the new driver for communicating with the gntdev
>> driver? In this case it might be an option to extend the gntdev driver
>> to present a new device (e.g. "gntdmadev") for that purpose.
> No, it seems that libgnttab and this new driver's UAPI will be used
> in parallel
>>>> So doing this in a separate driver seems to be the better option in
>>>> this regard.
>>> Well, from maintenance POV it is easier for me to have it all in
>>> a separate driver as all dma-buf related functionality will
>>> reside at one place. This also means that no changes to existing
>>> drivers will be needed (if it is ok to have ballooning in/out
>>> code for DMA buffers (allocated with dma_alloc_xxx) not in the balloon
>>> driver)
>> I think in the end this really depends on how the complete solution
>> will look like. gntdev is a special wrapper for the gnttab driver.
>> In case the new dma-buf driver needs to use parts of gntdev I'd rather
>> have a new driver above gnttab ("gntuser"?) used by gntdev and dma-buf.
> The new driver doesn't use gntdev's existing API, but extends it,
> e.g. by adding new ways to export/import grefs for a dma-buf and
> manage dma-buf's kernel ops. Thus, gntdev, which already provides
> UAPI, seems to be a good candidate for such an extension

So this would mean you need a modification of libgnttab, right? This is
something the Xen tools maintainers need to decide. In case they don't
object, extending the gntdev driver would be the natural thing to do.


Juergen

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-24  9:08                                     ` Juergen Gross
@ 2018-04-24  9:13                                       ` Oleksandr Andrushchenko
  2018-04-24  9:13                                       ` Oleksandr Andrushchenko
                                                         ` (2 subsequent siblings)
  3 siblings, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-24  9:13 UTC (permalink / raw)
  To: Juergen Gross, Oleksandr Andrushchenko, Boris Ostrovsky, Wei Liu
  Cc: Roger Pau Monné,
	Artem Mygaiev, Dongwon Kim, konrad.wilk, airlied, linux-kernel,
	dri-devel, Potrola, MateuszX, xen-devel, daniel.vetter,
	ian.jackson

On 04/24/2018 12:08 PM, Juergen Gross wrote:
> On 24/04/18 11:03, Oleksandr Andrushchenko wrote:
>> On 04/24/2018 11:40 AM, Juergen Gross wrote:
>>> On 24/04/18 10:07, Oleksandr Andrushchenko wrote:
>>>> On 04/24/2018 10:51 AM, Juergen Gross wrote:
>>>>> On 24/04/18 07:43, Oleksandr Andrushchenko wrote:
>>>>>> On 04/24/2018 01:41 AM, Boris Ostrovsky wrote:
>>>>>>> On 04/23/2018 08:10 AM, Oleksandr Andrushchenko wrote:
>>>>>>>> On 04/23/2018 02:52 PM, Wei Liu wrote:
>>>>>>>>> On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko
>>>>>>>>> wrote:
>>>>>>>>>>>>          the gntdev.
>>>>>>>>>>>>
>>>>>>>>>>>> I think this is generic enough that it could be implemented by a
>>>>>>>>>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>>>>>>>>>> something similar to this.
>>>>>>>>>>> You can't just wrap random userspace memory into a dma-buf. We've
>>>>>>>>>>> just had
>>>>>>>>>>> this discussion with kvm/qemu folks, who proposed just that, and
>>>>>>>>>>> after a
>>>>>>>>>>> bit of discussion they'll now try to have a driver which just
>>>>>>>>>>> wraps a
>>>>>>>>>>> memfd into a dma-buf.
>>>>>>>>>> So, we have to decide either we introduce a new driver
>>>>>>>>>> (say, under drivers/xen/xen-dma-buf) or extend the existing
>>>>>>>>>> gntdev/balloon to support dma-buf use-cases.
>>>>>>>>>>
>>>>>>>>>> Can anybody from Xen community express their preference here?
>>>>>>>>>>
>>>>>>>>> Oleksandr talked to me on IRC about this, he said a few IOCTLs
>>>>>>>>> need to
>>>>>>>>> be added to either existing drivers or a new driver.
>>>>>>>>>
>>>>>>>>> I went through this thread twice and skimmed through the relevant
>>>>>>>>> documents, but I couldn't see any obvious pros and cons for either
>>>>>>>>> approach. So I don't really have an opinion on this.
>>>>>>>>>
>>>>>>>>> But, assuming if implemented in existing drivers, those IOCTLs
>>>>>>>>> need to
>>>>>>>>> be added to different drivers, which means userspace program
>>>>>>>>> needs to
>>>>>>>>> write more code and get more handles, it would be slightly
>>>>>>>>> better to
>>>>>>>>> implement a new driver from that perspective.
>>>>>>>> If gntdev/balloon extension is still considered:
>>>>>>>>
>>>>>>>> All the IOCTLs will be in gntdev driver (in current xen-zcopy
>>>>>>>> terminology):
>>>>>>>>      - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>>>>>>>      - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>>>>>>>      - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>>>>>
>>>>>>>> Balloon driver extension, which is needed for contiguous/DMA
>>>>>>>> buffers, will be to provide new *kernel API*, no UAPI is needed.
>>>>>>>>
>>>>>>> So I am obviously a bit late to this thread, but why do you need
>>>>>>> to add
>>>>>>> new ioctls to gntdev and balloon? Doesn't this driver manage to do
>>>>>>> what
>>>>>>> you want without any extensions?
>>>>>> 1. I only (may) need to add IOCTLs to gntdev
>>>>>> 2. balloon driver needs to be extended, so it can allocate
>>>>>> contiguous (DMA) memory, not IOCTLs/UAPI here, all lives
>>>>>> in the kernel.
>>>>>> 3. The reason I need to extend gnttab with new IOCTLs is to
>>>>>> provide new functionality to create a dma-buf from grant references
>>>>>> and to produce grant references for a dma-buf. This is what I have as
>>>>>> UAPI
>>>>>> description for xen-zcopy driver:
>>>>>>
>>>>>> 1. DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>>>>> This will create a DRM dumb buffer from grant references provided
>>>>>> by the frontend. The intended usage is:
>>>>>>      - Frontend
>>>>>>        - creates a dumb/display buffer and allocates memory
>>>>>>        - grants foreign access to the buffer pages
>>>>>>        - passes granted references to the backend
>>>>>>      - Backend
>>>>>>        - issues DRM_XEN_ZCOPY_DUMB_FROM_REFS ioctl to map
>>>>>>          granted references and create a dumb buffer
>>>>>>        - requests handle to fd conversion via
>>>>>> DRM_IOCTL_PRIME_HANDLE_TO_FD
>>>>>>        - requests real HW driver/consumer to import the PRIME buffer
>>>>>> with
>>>>>>          DRM_IOCTL_PRIME_FD_TO_HANDLE
>>>>>>        - uses handle returned by the real HW driver
>>>>>>      - at the end:
>>>>>>        o closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>>>        o closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>>>        o closes file descriptor of the exported buffer
>>>>>>
>>>>>> 2. DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>>>>> This will grant references to a dumb/display buffer's memory
>>>>>> provided by
>>>>>> the
>>>>>> backend. The intended usage is:
>>>>>>      - Frontend
>>>>>>        - requests backend to allocate dumb/display buffer and grant
>>>>>> references
>>>>>>          to its pages
>>>>>>      - Backend
>>>>>>        - requests real HW driver to create a dumb with
>>>>>> DRM_IOCTL_MODE_CREATE_DUMB
>>>>>>        - requests handle to fd conversion via
>>>>>> DRM_IOCTL_PRIME_HANDLE_TO_FD
>>>>>>        - requests zero-copy driver to import the PRIME buffer with
>>>>>>          DRM_IOCTL_PRIME_FD_TO_HANDLE
>>>>>>        - issues DRM_XEN_ZCOPY_DUMB_TO_REFS ioctl to
>>>>>>          grant references to the buffer's memory.
>>>>>>        - passes grant references to the frontend
>>>>>>     - at the end:
>>>>>>        - closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>>>        - closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>>>        - closes file descriptor of the imported buffer
>>>>>>
>>>>>> 3. DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>>> This will block until the dumb buffer with the wait handle provided be
>>>>>> freed:
>>>>>> this is needed for synchronization between frontend and backend in
>>>>>> case
>>>>>> frontend provides grant references of the buffer via
>>>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL and which must be released before
>>>>>> backend replies with XENDISPL_OP_DBUF_DESTROY response.
>>>>>> wait_handle must be the same value returned while calling
>>>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL.
>>>>>>
>>>>>> So, as you can see the above functionality is not covered by the
>>>>>> existing UAPI
>>>>>> of the gntdev driver.
>>>>>> Now, if we change dumb -> dma-buf and remove DRM code (which is only a
>>>>>> wrapper
>>>>>> here on top of dma-buf) we get new driver for dma-buf for Xen.
>>>>>>
>>>>>> This is why I have 2 options here: either create a dedicated driver
>>>>>> for
>>>>>> this
>>>>>> (e.g. re-work xen-zcopy to be DRM independent and put it under
>>>>>> drivers/xen/xen-dma-buf, for example) or extend the existing gntdev
>>>>>> driver
>>>>>> with the above UAPI + make changes to the balloon driver to provide
>>>>>> kernel
>>>>>> API for DMA buffer allocations.
>>>>> Which user component would use the new ioctls?
>>>> It is currently used by the display backend [1] and will
>>>> probably be used by the hyper-dmabuf frontend/backend
>>>> (Dongwon from Intel can provide more info on this).
>>>>> I'm asking because I'm not very fond of adding more linux specific
>>>>> functions to libgnttab which are not related to a specific Xen version,
>>>>> but to a kernel version.
>>>> Hm, I was not thinking about this UAPI to be added to libgnttab.
>>>> It seems it can be used directly w/o wrappers in user-space
>>> Would this program use libgnttab in parallel?
>> In case of the display backend - yes, for shared rings,
>> extracting grefs from displif protocol it uses gntdev via
>> helper library [1]
>>>    If yes how would the two
>>> usage paths be combined (same applies to the separate driver, btw)? The
>>> gntdev driver manages resources per file descriptor and libgnttab is
>>> hiding the file descriptor it is using for a connection.
>> Ah, at the moment the UAPI was not used in parallel as there were
>> 2 drivers for that: gntdev + xen-zcopy with different UAPIs.
>> But now, if we extend gntdev with the new API then you are right:
>> either libgnttab needs to be extended or that new part of the
>> gntdev UAPI needs to be open-coded by the backend
>>>    Or would the
>>> user program use only the new driver for communicating with the gntdev
>>> driver? In this case it might be an option to extend the gntdev driver
>>> to present a new device (e.g. "gntdmadev") for that purpose.
>> No, it seems that libgnttab and this new driver's UAPI will be used
>> in parallel
>>>>> So doing this in a separate driver seems to be the better option in
>>>>> this regard.
>>>> Well, from maintenance POV it is easier for me to have it all in
>>>> a separate driver as all dma-buf related functionality will
>>>> reside at one place. This also means that no changes to existing
>>>> drivers will be needed (if it is ok to have ballooning in/out
>>>> code for DMA buffers (allocated with dma_alloc_xxx) not in the balloon
>>>> driver)
>>> I think in the end this really depends on how the complete solution
>>> will look like. gntdev is a special wrapper for the gnttab driver.
>>> In case the new dma-buf driver needs to use parts of gntdev I'd rather
>>> have a new driver above gnttab ("gntuser"?) used by gntdev and dma-buf.
>> The new driver doesn't use gntdev's existing API, but extends it,
>> e.g. by adding new ways to export/import grefs for a dma-buf and
>> manage dma-buf's kernel ops. Thus, gntdev, which already provides
>> UAPI, seems to be a good candidate for such an extension
> So this would mean you need a modification of libgnttab, right? This is
> something the Xen tools maintainers need to decide. In case they don't
> object extending the gntdev driver would be the natural thing to do.
Wei is already in the thread, adding Ian
>
> Juergen

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-24  9:08                                     ` Juergen Gross
  2018-04-24  9:13                                       ` Oleksandr Andrushchenko
  2018-04-24  9:13                                       ` Oleksandr Andrushchenko
@ 2018-04-24 10:01                                       ` Wei Liu
  2018-04-24 10:14                                         ` Oleksandr Andrushchenko
  2018-04-24 10:14                                         ` [Xen-devel] " Oleksandr Andrushchenko
  2018-04-24 10:01                                       ` Wei Liu
  3 siblings, 2 replies; 131+ messages in thread
From: Wei Liu @ 2018-04-24 10:01 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Oleksandr Andrushchenko, Boris Ostrovsky, Wei Liu,
	Roger Pau Monné,
	Artem Mygaiev, Dongwon Kim, konrad.wilk, airlied,
	Oleksandr_Andrushchenko, linux-kernel, dri-devel, Potrola,
	MateuszX, xen-devel, daniel.vetter, Ian Jackson

On Tue, Apr 24, 2018 at 11:08:41AM +0200, Juergen Gross wrote:
> On 24/04/18 11:03, Oleksandr Andrushchenko wrote:
> > On 04/24/2018 11:40 AM, Juergen Gross wrote:
> >> On 24/04/18 10:07, Oleksandr Andrushchenko wrote:
> >>> On 04/24/2018 10:51 AM, Juergen Gross wrote:
> >>>> On 24/04/18 07:43, Oleksandr Andrushchenko wrote:
> >>>>> On 04/24/2018 01:41 AM, Boris Ostrovsky wrote:
> >>>>>> On 04/23/2018 08:10 AM, Oleksandr Andrushchenko wrote:
> >>>>>>> On 04/23/2018 02:52 PM, Wei Liu wrote:
> >>>>>>>> On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko
> >>>>>>>> wrote:
> >>>>>>>>>>>         the gntdev.
> >>>>>>>>>>>
> >>>>>>>>>>> I think this is generic enough that it could be implemented by a
> >>>>>>>>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
> >>>>>>>>>>> something similar to this.
> >>>>>>>>>> You can't just wrap random userspace memory into a dma-buf. We've
> >>>>>>>>>> just had
> >>>>>>>>>> this discussion with kvm/qemu folks, who proposed just that, and
> >>>>>>>>>> after a
> >>>>>>>>>> bit of discussion they'll now try to have a driver which just
> >>>>>>>>>> wraps a
> >>>>>>>>>> memfd into a dma-buf.
> >>>>>>>>> So, we have to decide either we introduce a new driver
> >>>>>>>>> (say, under drivers/xen/xen-dma-buf) or extend the existing
> >>>>>>>>> gntdev/balloon to support dma-buf use-cases.
> >>>>>>>>>
> >>>>>>>>> Can anybody from Xen community express their preference here?
> >>>>>>>>>
> >>>>>>>> Oleksandr talked to me on IRC about this, he said a few IOCTLs
> >>>>>>>> need to
> >>>>>>>> be added to either existing drivers or a new driver.
> >>>>>>>>
> >>>>>>>> I went through this thread twice and skimmed through the relevant
> >>>>>>>> documents, but I couldn't see any obvious pros and cons for either
> >>>>>>>> approach. So I don't really have an opinion on this.
> >>>>>>>>
> >>>>>>>> But, assuming if implemented in existing drivers, those IOCTLs
> >>>>>>>> need to
> >>>>>>>> be added to different drivers, which means userspace program
> >>>>>>>> needs to
> >>>>>>>> write more code and get more handles, it would be slightly
> >>>>>>>> better to
> >>>>>>>> implement a new driver from that perspective.
> >>>>>>> If gntdev/balloon extension is still considered:
> >>>>>>>
> >>>>>>> All the IOCTLs will be in gntdev driver (in current xen-zcopy
> >>>>>>> terminology):
> >>>>>>>     - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
> >>>>>>>     - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
> >>>>>>>     - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
> >>>>>>>
> >>>>>>> Balloon driver extension, which is needed for contiguous/DMA
> >>>>>>> buffers, will be to provide new *kernel API*, no UAPI is needed.
> >>>>>>>
> >>>>>> So I am obviously a bit late to this thread, but why do you need
> >>>>>> to add
> >>>>>> new ioctls to gntdev and balloon? Doesn't this driver manage to do
> >>>>>> what
> >>>>>> you want without any extensions?
> >>>>> 1. I only (may) need to add IOCTLs to gntdev
> >>>>> 2. balloon driver needs to be extended, so it can allocate
> >>>>> contiguous (DMA) memory, not IOCTLs/UAPI here, all lives
> >>>>> in the kernel.
> >>>>> 3. The reason I need to extend gnttab with new IOCTLs is to
> >>>>> provide new functionality to create a dma-buf from grant references
> >>>>> and to produce grant references for a dma-buf. This is what I have as
> >>>>> UAPI
> >>>>> description for xen-zcopy driver:
> >>>>>
> >>>>> 1. DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
> >>>>> This will create a DRM dumb buffer from grant references provided
> >>>>> by the frontend. The intended usage is:
> >>>>>     - Frontend
> >>>>>       - creates a dumb/display buffer and allocates memory
> >>>>>       - grants foreign access to the buffer pages
> >>>>>       - passes granted references to the backend
> >>>>>     - Backend
> >>>>>       - issues DRM_XEN_ZCOPY_DUMB_FROM_REFS ioctl to map
> >>>>>         granted references and create a dumb buffer
> >>>>>       - requests handle to fd conversion via
> >>>>> DRM_IOCTL_PRIME_HANDLE_TO_FD
> >>>>>       - requests real HW driver/consumer to import the PRIME buffer
> >>>>> with
> >>>>>         DRM_IOCTL_PRIME_FD_TO_HANDLE
> >>>>>       - uses handle returned by the real HW driver
> >>>>>     - at the end:
> >>>>>       o closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
> >>>>>       o closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
> >>>>>       o closes file descriptor of the exported buffer
> >>>>>
> >>>>> 2. DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
> >>>>> This will grant references to a dumb/display buffer's memory
> >>>>> provided by
> >>>>> the
> >>>>> backend. The intended usage is:
> >>>>>     - Frontend
> >>>>>       - requests backend to allocate dumb/display buffer and grant
> >>>>> references
> >>>>>         to its pages
> >>>>>     - Backend
> >>>>>       - requests real HW driver to create a dumb with
> >>>>> DRM_IOCTL_MODE_CREATE_DUMB
> >>>>>       - requests handle to fd conversion via
> >>>>> DRM_IOCTL_PRIME_HANDLE_TO_FD
> >>>>>       - requests zero-copy driver to import the PRIME buffer with
> >>>>>         DRM_IOCTL_PRIME_FD_TO_HANDLE
> >>>>>       - issues DRM_XEN_ZCOPY_DUMB_TO_REFS ioctl to
> >>>>>         grant references to the buffer's memory.
> >>>>>       - passes grant references to the frontend
> >>>>>    - at the end:
> >>>>>       - closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
> >>>>>       - closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
> >>>>>       - closes file descriptor of the imported buffer
> >>>>>
> >>>>> 3. DRM_XEN_ZCOPY_DUMB_WAIT_FREE
> >>>>> This will block until the dumb buffer with the wait handle provided be
> >>>>> freed:
> >>>>> this is needed for synchronization between frontend and backend in
> >>>>> case
> >>>>> frontend provides grant references of the buffer via
> >>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL and which must be released before
> >>>>> backend replies with XENDISPL_OP_DBUF_DESTROY response.
> >>>>> wait_handle must be the same value returned while calling
> >>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL.
> >>>>>
> >>>>> So, as you can see the above functionality is not covered by the
> >>>>> existing UAPI
> >>>>> of the gntdev driver.
> >>>>> Now, if we change dumb -> dma-buf and remove DRM code (which is only a
> >>>>> wrapper
> >>>>> here on top of dma-buf) we get new driver for dma-buf for Xen.
> >>>>>
> >>>>> This is why I have 2 options here: either create a dedicated driver
> >>>>> for
> >>>>> this
> >>>>> (e.g. re-work xen-zcopy to be DRM independent and put it under
> >>>>> drivers/xen/xen-dma-buf, for example) or extend the existing gntdev
> >>>>> driver
> >>>>> with the above UAPI + make changes to the balloon driver to provide
> >>>>> kernel
> >>>>> API for DMA buffer allocations.
> >>>> Which user component would use the new ioctls?
> >>> It is currently used by the display backend [1] and will
> >>> probably be used by the hyper-dmabuf frontend/backend
> >>> (Dongwon from Intel can provide more info on this).
> >>>> I'm asking because I'm not very fond of adding more linux specific
> >>>> functions to libgnttab which are not related to a specific Xen version,
> >>>> but to a kernel version.
> >>> Hm, I was not thinking about this UAPI to be added to libgnttab.
> >>> It seems it can be used directly w/o wrappers in user-space
> >> Would this program use libgnttab in parallel?
> > In case of the display backend - yes, for shared rings,
> > extracting grefs from displif protocol it uses gntdev via
> > helper library [1]
> >>   If yes how would the two
> >> usage paths be combined (same applies to the separate driver, btw)? The
> >> gntdev driver manages resources per file descriptor and libgnttab is
> >> hiding the file descriptor it is using for a connection.
> > Ah, at the moment the UAPI was not used in parallel as there were
> > 2 drivers for that: gntdev + xen-zcopy with different UAPIs.
> > But now, if we extend gntdev with the new API then you are right:
> > either libgnttab needs to be extended or that new part of the
> > gntdev UAPI needs to be open-coded by the backend
> >>   Or would the
> >> user program use only the new driver for communicating with the gntdev
> >> driver? In this case it might be an option to extend the gntdev driver
> >> to present a new device (e.g. "gntdmadev") for that purpose.
> > No, it seems that libgnttab and this new driver's UAPI will be used
> > in parallel
> >>>> So doing this in a separate driver seems to be the better option in
> >>>> this regard.
> >>> Well, from maintenance POV it is easier for me to have it all in
> >>> a separate driver as all dma-buf related functionality will
> >>> reside at one place. This also means that no changes to existing
> >>> drivers will be needed (if it is ok to have ballooning in/out
> >>> code for DMA buffers (allocated with dma_alloc_xxx) not in the balloon
> >>> driver)
> >> I think in the end this really depends on how the complete solution
> >> will look like. gntdev is a special wrapper for the gnttab driver.
> >> In case the new dma-buf driver needs to use parts of gntdev I'd rather
> >> have a new driver above gnttab ("gntuser"?) used by gntdev and dma-buf.
> > The new driver doesn't use gntdev's existing API, but extends it,
> > e.g. by adding new ways to export/import grefs for a dma-buf and
> > manage dma-buf's kernel ops. Thus, gntdev, which already provides
> > UAPI, seems to be a good candidate for such an extension
> 
> So this would mean you need a modification of libgnttab, right? This is
> something the Xen tools maintainers need to decide. In case they don't
> object extending the gntdev driver would be the natural thing to do.
> 

That should be fine. Most of what libgnttab does is to wrap existing
kernel interfaces and expose them sensibly to user space programs. If the gnttab
device is extended, libgnttab should be extended accordingly. If a new
device is created, a new library should be added. Either way there will
be new toolstack code involved, which is not a problem in general.
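
For illustration, assuming a new gntdev ioctl along the lines discussed
earlier in the thread (the structure, ioctl number and wrapper name
below are all hypothetical), the corresponding libgnttab-style helper
could be as thin as:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

/* Hypothetical gntdev extension, repeated here only to keep the sketch
 * self-contained. */
struct ioctl_gntdev_dmabuf_exp_from_refs {
	uint32_t domid;          /* in */
	uint32_t count;          /* in */
	uint32_t fd;             /* out: dma-buf fd */
	uint32_t release_handle; /* out */
	uint32_t refs[];         /* in: count grant references */
};
#define IOCTL_GNTDEV_DMABUF_EXP_FROM_REFS \
	_IOC(_IOC_NONE, 'G', 9, \
	     sizeof(struct ioctl_gntdev_dmabuf_exp_from_refs))

/* Hypothetical toolstack helper: wrap the ioctl, hide the raw layout. */
int xengnttab_dmabuf_exp_from_refs(int gntdev_fd, uint32_t domid,
				   uint32_t count, const uint32_t *refs,
				   uint32_t *dmabuf_fd,
				   uint32_t *release_handle)
{
	struct ioctl_gntdev_dmabuf_exp_from_refs *req;
	size_t sz = sizeof(*req) + count * sizeof(uint32_t);
	int rc;

	req = calloc(1, sz);
	if (!req)
		return -1;

	req->domid = domid;
	req->count = count;
	memcpy(req->refs, refs, count * sizeof(uint32_t));

	rc = ioctl(gntdev_fd, IOCTL_GNTDEV_DMABUF_EXP_FROM_REFS, req);
	if (!rc) {
		*dmabuf_fd = req->fd;
		*release_handle = req->release_handle;
	}
	free(req);
	return rc;
}

In the real library the gntdev file descriptor would of course stay
hidden behind the existing xengnttab handle rather than being passed in
directly; the fd parameter here just keeps the sketch self-contained.
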

Wei.

> 
> Juergen

^ permalink raw reply	[flat|nested] 131+ messages in thread

> >>>>> This will grant references to a dumb/display buffer's memory
> >>>>> provided by
> >>>>> the
> >>>>> backend. The intended usage is:
> >>>>>     - Frontend
> >>>>>       - requests backend to allocate dumb/display buffer and grant
> >>>>> references
> >>>>>         to its pages
> >>>>>     - Backend
> >>>>>       - requests real HW driver to create a dumb with
> >>>>> DRM_IOCTL_MODE_CREATE_DUMB
> >>>>>       - requests handle to fd conversion via
> >>>>> DRM_IOCTL_PRIME_HANDLE_TO_FD
> >>>>>       - requests zero-copy driver to import the PRIME buffer with
> >>>>>         DRM_IOCTL_PRIME_FD_TO_HANDLE
> >>>>>       - issues DRM_XEN_ZCOPY_DUMB_TO_REFS ioctl to
> >>>>>         grant references to the buffer's memory.
> >>>>>       - passes grant references to the frontend
> >>>>>    - at the end:
> >>>>>       - closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
> >>>>>       - closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
> >>>>>       - closes file descriptor of the imported buffer
> >>>>>
> >>>>> 3. DRM_XEN_ZCOPY_DUMB_WAIT_FREE
> >>>>> This will block until the dumb buffer with the wait handle provided be
> >>>>> freed:
> >>>>> this is needed for synchronization between frontend and backend in
> >>>>> case
> >>>>> frontend provides grant references of the buffer via
> >>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL and which must be released before
> >>>>> backend replies with XENDISPL_OP_DBUF_DESTROY response.
> >>>>> wait_handle must be the same value returned while calling
> >>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL.
> >>>>>
> >>>>> So, as you can see the above functionality is not covered by the
> >>>>> existing UAPI
> >>>>> of the gntdev driver.
> >>>>> Now, if we change dumb -> dma-buf and remove DRM code (which is only a
> >>>>> wrapper
> >>>>> here on top of dma-buf) we get new driver for dma-buf for Xen.
> >>>>>
> >>>>> This is why I have 2 options here: either create a dedicated driver
> >>>>> for
> >>>>> this
> >>>>> (e.g. re-work xen-zcopy to be DRM independent and put it under
> >>>>> drivers/xen/xen-dma-buf, for example) or extend the existing gntdev
> >>>>> driver
> >>>>> with the above UAPI + make changes to the balloon driver to provide
> >>>>> kernel
> >>>>> API for DMA buffer allocations.
> >>>> Which user component would use the new ioctls?
> >>> It is currently used by the display backend [1] and will
> >>> probably be used by the hyper-dmabuf frontend/backend
> >>> (Dongwon from Intel can provide more info on this).
> >>>> I'm asking because I'm not very fond of adding more linux specific
> >>>> functions to libgnttab which are not related to a specific Xen version,
> >>>> but to a kernel version.
> >>> Hm, I was not thinking about this UAPI to be added to libgnttab.
> >>> It seems it can be used directly w/o wrappers in user-space
> >> Would this program use libgnttab in parallel?
> > In case of the display backend - yes, for shared rings,
> > extracting grefs from displif protocol it uses gntdev via
> > helper library [1]
> >>   If yes how would the two
> >> usage paths be combined (same applies to the separate driver, btw)? The
> >> gntdev driver manages resources per file descriptor and libgnttab is
> >> hiding the file descriptor it is using for a connection.
> > Ah, at the moment the UAPI was not used in parallel as there were
> > 2 drivers for that: gntdev + xen-zcopy with different UAPIs.
> > But now, if we extend gntdev with the new API then you are right:
> > either libgnttab needs to be extended or that new part of the
> > gntdev UAPI needs to be open-coded by the backend
> >>   Or would the
> >> user program use only the new driver for communicating with the gntdev
> >> driver? In this case it might be an option to extend the gntdev driver
> >> to present a new device (e.g. "gntdmadev") for that purpose.
> > No, it seems that libgnttab and this new driver's UAPI will be used
> > in parallel
> >>>> So doing this in a separate driver seems to be the better option in
> >>>> this regard.
> >>> Well, from maintenance POV it is easier for me to have it all in
> >>> a separate driver as all dma-buf related functionality will
> >>> reside at one place. This also means that no changes to existing
> >>> drivers will be needed (if it is ok to have ballooning in/out
> >>> code for DMA buffers (allocated with dma_alloc_xxx) not in the balloon
> >>> driver)
> >> I think in the end this really depends on how the complete solution
> >> will look like. gntdev is a special wrapper for the gnttab driver.
> >> In case the new dma-buf driver needs to use parts of gntdev I'd rather
> >> have a new driver above gnttab ("gntuser"?) used by gntdev and dma-buf.
> > The new driver doesn't use gntdev's existing API, but extends it,
> > e.g. by adding new ways to export/import grefs for a dma-buf and
> > manage dma-buf's kernel ops. Thus, gntdev, which already provides
> > UAPI, seems to be a good candidate for such an extension
> 
> So this would mean you need a modification of libgnttab, right? This is
> something the Xen tools maintainers need to decide. In case they don't
> object extending the gntdev driver would be the natural thing to do.
> 

That should be fine. Most of what libgnttab does is wrap existing kernel
interfaces and expose them sensibly to user-space programs. If the gnttab
device is extended, libgnttab should be extended accordingly. If a new
device is created, a new library should be added. Either way there will
be new toolstack code involved, which is not a problem in general.

Wei.

> 
> Juergen

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-24 10:01                                       ` [Xen-devel] " Wei Liu
  2018-04-24 10:14                                         ` Oleksandr Andrushchenko
@ 2018-04-24 10:14                                         ` Oleksandr Andrushchenko
  2018-04-24 10:24                                           ` Juergen Gross
  2018-04-24 10:24                                           ` [Xen-devel] " Juergen Gross
  1 sibling, 2 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-24 10:14 UTC (permalink / raw)
  To: Wei Liu, Juergen Gross, Dongwon Kim
  Cc: Oleksandr Andrushchenko, Boris Ostrovsky, Roger Pau Monné,
	Artem Mygaiev, konrad.wilk, airlied, linux-kernel, dri-devel,
	Potrola, MateuszX, xen-devel, daniel.vetter, Ian Jackson

On 04/24/2018 01:01 PM, Wei Liu wrote:
> On Tue, Apr 24, 2018 at 11:08:41AM +0200, Juergen Gross wrote:
>> On 24/04/18 11:03, Oleksandr Andrushchenko wrote:
>>> On 04/24/2018 11:40 AM, Juergen Gross wrote:
>>>> On 24/04/18 10:07, Oleksandr Andrushchenko wrote:
>>>>> On 04/24/2018 10:51 AM, Juergen Gross wrote:
>>>>>> On 24/04/18 07:43, Oleksandr Andrushchenko wrote:
>>>>>>> On 04/24/2018 01:41 AM, Boris Ostrovsky wrote:
>>>>>>>> On 04/23/2018 08:10 AM, Oleksandr Andrushchenko wrote:
>>>>>>>>> On 04/23/2018 02:52 PM, Wei Liu wrote:
>>>>>>>>>> On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko
>>>>>>>>>> wrote:
>>>>>>>>>>>>>          the gntdev.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I think this is generic enough that it could be implemented by a
>>>>>>>>>>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>>>>>>>>>>> something similar to this.
>>>>>>>>>>>> You can't just wrap random userspace memory into a dma-buf. We've
>>>>>>>>>>>> just had
>>>>>>>>>>>> this discussion with kvm/qemu folks, who proposed just that, and
>>>>>>>>>>>> after a
>>>>>>>>>>>> bit of discussion they'll now try to have a driver which just
>>>>>>>>>>>> wraps a
>>>>>>>>>>>> memfd into a dma-buf.
>>>>>>>>>>> So, we have to decide either we introduce a new driver
>>>>>>>>>>> (say, under drivers/xen/xen-dma-buf) or extend the existing
>>>>>>>>>>> gntdev/balloon to support dma-buf use-cases.
>>>>>>>>>>>
>>>>>>>>>>> Can anybody from Xen community express their preference here?
>>>>>>>>>>>
>>>>>>>>>> Oleksandr talked to me on IRC about this, he said a few IOCTLs
>>>>>>>>>> need to
>>>>>>>>>> be added to either existing drivers or a new driver.
>>>>>>>>>>
>>>>>>>>>> I went through this thread twice and skimmed through the relevant
>>>>>>>>>> documents, but I couldn't see any obvious pros and cons for either
>>>>>>>>>> approach. So I don't really have an opinion on this.
>>>>>>>>>>
>>>>>>>>>> But, assuming if implemented in existing drivers, those IOCTLs
>>>>>>>>>> need to
>>>>>>>>>> be added to different drivers, which means userspace program
>>>>>>>>>> needs to
>>>>>>>>>> write more code and get more handles, it would be slightly
>>>>>>>>>> better to
>>>>>>>>>> implement a new driver from that perspective.
>>>>>>>>> If gntdev/balloon extension is still considered:
>>>>>>>>>
>>>>>>>>> All the IOCTLs will be in gntdev driver (in current xen-zcopy
>>>>>>>>> terminology):
>>>>>>>>>      - DRM_ICOTL_XEN_ZCOPY_DUMB_FROM_REFS
>>>>>>>>>      - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>>>>>>>>      - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>>>>>>
>>>>>>>>> Balloon driver extension, which is needed for contiguous/DMA
>>>>>>>>> buffers, will be to provide new *kernel API*, no UAPI is needed.
>>>>>>>>>
>>>>>>>> So I am obviously a bit late to this thread, but why do you need
>>>>>>>> to add
>>>>>>>> new ioctls to gntdev and balloon? Doesn't this driver manage to do
>>>>>>>> what
>>>>>>>> you want without any extensions?
>>>>>>> 1. I only (may) need to add IOCTLs to gntdev
>>>>>>> 2. balloon driver needs to be extended, so it can allocate
>>>>>>> contiguous (DMA) memory, not IOCTLs/UAPI here, all lives
>>>>>>> in the kernel.
>>>>>>> 3. The reason I need to extend gnttab with new IOCTLs is to
>>>>>>> provide new functionality to create a dma-buf from grant references
>>>>>>> and to produce grant references for a dma-buf. This is what I have as
>>>>>>> UAPI
>>>>>>> description for xen-zcopy driver:
>>>>>>>
>>>>>>> 1. DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>>>>>> This will create a DRM dumb buffer from grant references provided
>>>>>>> by the frontend. The intended usage is:
>>>>>>>      - Frontend
>>>>>>>        - creates a dumb/display buffer and allocates memory
>>>>>>>        - grants foreign access to the buffer pages
>>>>>>>        - passes granted references to the backend
>>>>>>>      - Backend
>>>>>>>        - issues DRM_XEN_ZCOPY_DUMB_FROM_REFS ioctl to map
>>>>>>>          granted references and create a dumb buffer
>>>>>>>        - requests handle to fd conversion via
>>>>>>> DRM_IOCTL_PRIME_HANDLE_TO_FD
>>>>>>>        - requests real HW driver/consumer to import the PRIME buffer
>>>>>>> with
>>>>>>>          DRM_IOCTL_PRIME_FD_TO_HANDLE
>>>>>>>        - uses handle returned by the real HW driver
>>>>>>>      - at the end:
>>>>>>>        o closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>>>>        o closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>>>>        o closes file descriptor of the exported buffer
>>>>>>>
>>>>>>> 2. DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>>>>>> This will grant references to a dumb/display buffer's memory
>>>>>>> provided by
>>>>>>> the
>>>>>>> backend. The intended usage is:
>>>>>>>      - Frontend
>>>>>>>        - requests backend to allocate dumb/display buffer and grant
>>>>>>> references
>>>>>>>          to its pages
>>>>>>>      - Backend
>>>>>>>        - requests real HW driver to create a dumb with
>>>>>>> DRM_IOCTL_MODE_CREATE_DUMB
>>>>>>>        - requests handle to fd conversion via
>>>>>>> DRM_IOCTL_PRIME_HANDLE_TO_FD
>>>>>>>        - requests zero-copy driver to import the PRIME buffer with
>>>>>>>          DRM_IOCTL_PRIME_FD_TO_HANDLE
>>>>>>>        - issues DRM_XEN_ZCOPY_DUMB_TO_REFS ioctl to
>>>>>>>          grant references to the buffer's memory.
>>>>>>>        - passes grant references to the frontend
>>>>>>>     - at the end:
>>>>>>>        - closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>>>>        - closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>>>>        - closes file descriptor of the imported buffer
>>>>>>>
>>>>>>> 3. DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>>>> This will block until the dumb buffer with the wait handle provided be
>>>>>>> freed:
>>>>>>> this is needed for synchronization between frontend and backend in
>>>>>>> case
>>>>>>> frontend provides grant references of the buffer via
>>>>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL and which must be released before
>>>>>>> backend replies with XENDISPL_OP_DBUF_DESTROY response.
>>>>>>> wait_handle must be the same value returned while calling
>>>>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL.
>>>>>>>
>>>>>>> So, as you can see the above functionality is not covered by the
>>>>>>> existing UAPI
>>>>>>> of the gntdev driver.
>>>>>>> Now, if we change dumb -> dma-buf and remove DRM code (which is only a
>>>>>>> wrapper
>>>>>>> here on top of dma-buf) we get new driver for dma-buf for Xen.
>>>>>>>
>>>>>>> This is why I have 2 options here: either create a dedicated driver
>>>>>>> for
>>>>>>> this
>>>>>>> (e.g. re-work xen-zcopy to be DRM independent and put it under
>>>>>>> drivers/xen/xen-dma-buf, for example) or extend the existing gntdev
>>>>>>> driver
>>>>>>> with the above UAPI + make changes to the balloon driver to provide
>>>>>>> kernel
>>>>>>> API for DMA buffer allocations.
>>>>>> Which user component would use the new ioctls?
>>>>> It is currently used by the display backend [1] and will
>>>>> probably be used by the hyper-dmabuf frontend/backend
>>>>> (Dongwon from Intel can provide more info on this).
>>>>>> I'm asking because I'm not very fond of adding more linux specific
>>>>>> functions to libgnttab which are not related to a specific Xen version,
>>>>>> but to a kernel version.
>>>>> Hm, I was not thinking about this UAPI to be added to libgnttab.
>>>>> It seems it can be used directly w/o wrappers in user-space
>>>> Would this program use libgnttab in parallel?
>>> In case of the display backend - yes, for shared rings,
>>> extracting grefs from displif protocol it uses gntdev via
>>> helper library [1]
>>>>    If yes how would the two
>>>> usage paths be combined (same applies to the separate driver, btw)? The
>>>> gntdev driver manages resources per file descriptor and libgnttab is
>>>> hiding the file descriptor it is using for a connection.
>>> Ah, at the moment the UAPI was not used in parallel as there were
>>> 2 drivers for that: gntdev + xen-zcopy with different UAPIs.
>>> But now, if we extend gntdev with the new API then you are right:
>>> either libgnttab needs to be extended or that new part of the
>>> gntdev UAPI needs to be open-coded by the backend
>>>>    Or would the
>>>> user program use only the new driver for communicating with the gntdev
>>>> driver? In this case it might be an option to extend the gntdev driver
>>>> to present a new device (e.g. "gntdmadev") for that purpose.
>>> No, it seems that libgnttab and this new driver's UAPI will be used
>>> in parallel
>>>>>> So doing this in a separate driver seems to be the better option in
>>>>>> this regard.
>>>>> Well, from maintenance POV it is easier for me to have it all in
>>>>> a separate driver as all dma-buf related functionality will
>>>>> reside at one place. This also means that no changes to existing
>>>>> drivers will be needed (if it is ok to have ballooning in/out
>>>>> code for DMA buffers (allocated with dma_alloc_xxx) not in the balloon
>>>>> driver)
>>>> I think in the end this really depends on how the complete solution
>>>> will look like. gntdev is a special wrapper for the gnttab driver.
>>>> In case the new dma-buf driver needs to use parts of gntdev I'd rather
>>>> have a new driver above gnttab ("gntuser"?) used by gntdev and dma-buf.
>>> The new driver doesn't use gntdev's existing API, but extends it,
>>> e.g. by adding new ways to export/import grefs for a dma-buf and
>>> manage dma-buf's kernel ops. Thus, gntdev, which already provides
>>> UAPI, seems to be a good candidate for such an extension
>> So this would mean you need a modification of libgnttab, right? This is
>> something the Xen tools maintainers need to decide. In case they don't
>> object extending the gntdev driver would be the natural thing to do.
>>
> That should be fine. Most of what libgnttab does is wrap existing kernel
> interfaces and expose them sensibly to user-space programs. If the gnttab
> device is extended, libgnttab should be extended accordingly. If a new
> device is created, a new library should be added. Either way there will
> be new toolstack code involved, which is not a problem in general.
Great, so finally I see the following approach to have generic
dma-buf use-cases support for Xen (which can be used for many purposes,
e.g. GPU/DRM buffer sharing, V4L, hyper-dmabuf etc.):

1. Extend Linux gntdev driver to support 3 new IOCTLs discussed previously
2. Extend libgnttab to provide UAPI for those - Linux only as dma-buf
is a Linux thing
3. Extend kernel API of the Linux balloon driver to allow dma_alloc_xxx way
of memory allocations

If the above looks ok, then I can start prototyping, so we can discuss
implementation details
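
To make that discussion more concrete, the gntdev part (item 1) could look
roughly like the sketch below; the ioctl numbers, names and structure
layouts are only placeholders for now, not a proposed ABI:

    /* Sketch of a possible gntdev UAPI extension, placeholders only. */
    #include <linux/ioctl.h>
    #include <linux/types.h>

    /* Export @count grant references from @domid as a dma-buf fd. */
    struct ioctl_gntdev_dmabuf_exp_from_refs {
            __u32 domid;
            __u32 flags;
            __u32 count;
            __u32 fd;         /* out: dma-buf file descriptor */
            __u32 refs[1];    /* in: @count grant references */
    };
    #define IOCTL_GNTDEV_DMABUF_EXP_FROM_REFS \
            _IOC(_IOC_NONE, 'G', 9, \
                 sizeof(struct ioctl_gntdev_dmabuf_exp_from_refs))

    /* Grant @count references to the pages backing the dma-buf @fd. */
    struct ioctl_gntdev_dmabuf_imp_to_refs {
            __u32 fd;         /* in: dma-buf file descriptor */
            __u32 domid;
            __u32 count;
            __u32 refs[1];    /* out: @count grant references */
    };
    #define IOCTL_GNTDEV_DMABUF_IMP_TO_REFS \
            _IOC(_IOC_NONE, 'G', 10, \
                 sizeof(struct ioctl_gntdev_dmabuf_imp_to_refs))

    /* Block until the exported dma-buf @fd is released by its users. */
    struct ioctl_gntdev_dmabuf_exp_wait_released {
            __u32 fd;
            __u32 wait_to_ms;
    };
    #define IOCTL_GNTDEV_DMABUF_EXP_WAIT_RELEASED \
            _IOC(_IOC_NONE, 'G', 11, \
                 sizeof(struct ioctl_gntdev_dmabuf_exp_wait_released))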

Dongwon - could you please comment on whether all this fits your use-cases
(I do believe it does)?
> Wei.
>
>> Juergen
Thank you,
Oleksandr

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-24 10:01                                       ` [Xen-devel] " Wei Liu
@ 2018-04-24 10:14                                         ` Oleksandr Andrushchenko
  2018-04-24 10:14                                         ` [Xen-devel] " Oleksandr Andrushchenko
  1 sibling, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-24 10:14 UTC (permalink / raw)
  To: Wei Liu, Juergen Gross, Dongwon Kim
  Cc: Artem Mygaiev, Oleksandr Andrushchenko, Ian Jackson,
	linux-kernel, dri-devel, airlied, Potrola, MateuszX,
	daniel.vetter, xen-devel, Boris Ostrovsky, Roger Pau Monné

On 04/24/2018 01:01 PM, Wei Liu wrote:
> On Tue, Apr 24, 2018 at 11:08:41AM +0200, Juergen Gross wrote:
>> On 24/04/18 11:03, Oleksandr Andrushchenko wrote:
>>> On 04/24/2018 11:40 AM, Juergen Gross wrote:
>>>> On 24/04/18 10:07, Oleksandr Andrushchenko wrote:
>>>>> On 04/24/2018 10:51 AM, Juergen Gross wrote:
>>>>>> On 24/04/18 07:43, Oleksandr Andrushchenko wrote:
>>>>>>> On 04/24/2018 01:41 AM, Boris Ostrovsky wrote:
>>>>>>>> On 04/23/2018 08:10 AM, Oleksandr Andrushchenko wrote:
>>>>>>>>> On 04/23/2018 02:52 PM, Wei Liu wrote:
>>>>>>>>>> On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko
>>>>>>>>>> wrote:
>>>>>>>>>>>>>          the gntdev.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I think this is generic enough that it could be implemented by a
>>>>>>>>>>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>>>>>>>>>>> something similar to this.
>>>>>>>>>>>> You can't just wrap random userspace memory into a dma-buf. We've
>>>>>>>>>>>> just had
>>>>>>>>>>>> this discussion with kvm/qemu folks, who proposed just that, and
>>>>>>>>>>>> after a
>>>>>>>>>>>> bit of discussion they'll now try to have a driver which just
>>>>>>>>>>>> wraps a
>>>>>>>>>>>> memfd into a dma-buf.
>>>>>>>>>>> So, we have to decide either we introduce a new driver
>>>>>>>>>>> (say, under drivers/xen/xen-dma-buf) or extend the existing
>>>>>>>>>>> gntdev/balloon to support dma-buf use-cases.
>>>>>>>>>>>
>>>>>>>>>>> Can anybody from Xen community express their preference here?
>>>>>>>>>>>
>>>>>>>>>> Oleksandr talked to me on IRC about this, he said a few IOCTLs
>>>>>>>>>> need to
>>>>>>>>>> be added to either existing drivers or a new driver.
>>>>>>>>>>
>>>>>>>>>> I went through this thread twice and skimmed through the relevant
>>>>>>>>>> documents, but I couldn't see any obvious pros and cons for either
>>>>>>>>>> approach. So I don't really have an opinion on this.
>>>>>>>>>>
>>>>>>>>>> But, assuming if implemented in existing drivers, those IOCTLs
>>>>>>>>>> need to
>>>>>>>>>> be added to different drivers, which means userspace program
>>>>>>>>>> needs to
>>>>>>>>>> write more code and get more handles, it would be slightly
>>>>>>>>>> better to
>>>>>>>>>> implement a new driver from that perspective.
>>>>>>>>> If gntdev/balloon extension is still considered:
>>>>>>>>>
>>>>>>>>> All the IOCTLs will be in gntdev driver (in current xen-zcopy
>>>>>>>>> terminology):
>>>>>>>>>      - DRM_ICOTL_XEN_ZCOPY_DUMB_FROM_REFS
>>>>>>>>>      - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>>>>>>>>      - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>>>>>>
>>>>>>>>> Balloon driver extension, which is needed for contiguous/DMA
>>>>>>>>> buffers, will be to provide new *kernel API*, no UAPI is needed.
>>>>>>>>>
>>>>>>>> So I am obviously a bit late to this thread, but why do you need
>>>>>>>> to add
>>>>>>>> new ioctls to gntdev and balloon? Doesn't this driver manage to do
>>>>>>>> what
>>>>>>>> you want without any extensions?
>>>>>>> 1. I only (may) need to add IOCTLs to gntdev
>>>>>>> 2. balloon driver needs to be extended, so it can allocate
>>>>>>> contiguous (DMA) memory, not IOCTLs/UAPI here, all lives
>>>>>>> in the kernel.
>>>>>>> 3. The reason I need to extend gnttab with new IOCTLs is to
>>>>>>> provide new functionality to create a dma-buf from grant references
>>>>>>> and to produce grant references for a dma-buf. This is what I have as
>>>>>>> UAPI
>>>>>>> description for xen-zcopy driver:
>>>>>>>
>>>>>>> 1. DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>>>>>> This will create a DRM dumb buffer from grant references provided
>>>>>>> by the frontend. The intended usage is:
>>>>>>>      - Frontend
>>>>>>>        - creates a dumb/display buffer and allocates memory
>>>>>>>        - grants foreign access to the buffer pages
>>>>>>>        - passes granted references to the backend
>>>>>>>      - Backend
>>>>>>>        - issues DRM_XEN_ZCOPY_DUMB_FROM_REFS ioctl to map
>>>>>>>          granted references and create a dumb buffer
>>>>>>>        - requests handle to fd conversion via
>>>>>>> DRM_IOCTL_PRIME_HANDLE_TO_FD
>>>>>>>        - requests real HW driver/consumer to import the PRIME buffer
>>>>>>> with
>>>>>>>          DRM_IOCTL_PRIME_FD_TO_HANDLE
>>>>>>>        - uses handle returned by the real HW driver
>>>>>>>      - at the end:
>>>>>>>        o closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>>>>        o closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>>>>        o closes file descriptor of the exported buffer
>>>>>>>
>>>>>>> 2. DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>>>>>> This will grant references to a dumb/display buffer's memory
>>>>>>> provided by
>>>>>>> the
>>>>>>> backend. The intended usage is:
>>>>>>>      - Frontend
>>>>>>>        - requests backend to allocate dumb/display buffer and grant
>>>>>>> references
>>>>>>>          to its pages
>>>>>>>      - Backend
>>>>>>>        - requests real HW driver to create a dumb with
>>>>>>> DRM_IOCTL_MODE_CREATE_DUMB
>>>>>>>        - requests handle to fd conversion via
>>>>>>> DRM_IOCTL_PRIME_HANDLE_TO_FD
>>>>>>>        - requests zero-copy driver to import the PRIME buffer with
>>>>>>>          DRM_IOCTL_PRIME_FD_TO_HANDLE
>>>>>>>        - issues DRM_XEN_ZCOPY_DUMB_TO_REFS ioctl to
>>>>>>>          grant references to the buffer's memory.
>>>>>>>        - passes grant references to the frontend
>>>>>>>     - at the end:
>>>>>>>        - closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>>>>        - closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>>>>        - closes file descriptor of the imported buffer
>>>>>>>
>>>>>>> 3. DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>>>> This will block until the dumb buffer with the wait handle provided be
>>>>>>> freed:
>>>>>>> this is needed for synchronization between frontend and backend in
>>>>>>> case
>>>>>>> frontend provides grant references of the buffer via
>>>>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL and which must be released before
>>>>>>> backend replies with XENDISPL_OP_DBUF_DESTROY response.
>>>>>>> wait_handle must be the same value returned while calling
>>>>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL.
>>>>>>>
>>>>>>> So, as you can see the above functionality is not covered by the
>>>>>>> existing UAPI
>>>>>>> of the gntdev driver.
>>>>>>> Now, if we change dumb -> dma-buf and remove DRM code (which is only a
>>>>>>> wrapper
>>>>>>> here on top of dma-buf) we get new driver for dma-buf for Xen.
>>>>>>>
>>>>>>> This is why I have 2 options here: either create a dedicated driver
>>>>>>> for
>>>>>>> this
>>>>>>> (e.g. re-work xen-zcopy to be DRM independent and put it under
>>>>>>> drivers/xen/xen-dma-buf, for example) or extend the existing gntdev
>>>>>>> driver
>>>>>>> with the above UAPI + make changes to the balloon driver to provide
>>>>>>> kernel
>>>>>>> API for DMA buffer allocations.
>>>>>> Which user component would use the new ioctls?
>>>>> It is currently used by the display backend [1] and will
>>>>> probably be used by the hyper-dmabuf frontend/backend
>>>>> (Dongwon from Intel can provide more info on this).
>>>>>> I'm asking because I'm not very fond of adding more linux specific
>>>>>> functions to libgnttab which are not related to a specific Xen version,
>>>>>> but to a kernel version.
>>>>> Hm, I was not thinking about this UAPI to be added to libgnttab.
>>>>> It seems it can be used directly w/o wrappers in user-space
>>>> Would this program use libgnttab in parallel?
>>> In case of the display backend - yes, for shared rings,
>>> extracting grefs from displif protocol it uses gntdev via
>>> helper library [1]
>>>>    If yes how would the two
>>>> usage paths be combined (same applies to the separate driver, btw)? The
>>>> gntdev driver manages resources per file descriptor and libgnttab is
>>>> hiding the file descriptor it is using for a connection.
>>> Ah, at the moment the UAPI was not used in parallel as there were
>>> 2 drivers for that: gntdev + xen-zcopy with different UAPIs.
>>> But now, if we extend gntdev with the new API then you are right:
>>> either libgnttab needs to be extended or that new part of the
>>> gntdev UAPI needs to be open-coded by the backend
>>>>    Or would the
>>>> user program use only the new driver for communicating with the gntdev
>>>> driver? In this case it might be an option to extend the gntdev driver
>>>> to present a new device (e.g. "gntdmadev") for that purpose.
>>> No, it seems that libgnttab and this new driver's UAPI will be used
>>> in parallel
>>>>>> So doing this in a separate driver seems to be the better option in
>>>>>> this regard.
>>>>> Well, from maintenance POV it is easier for me to have it all in
>>>>> a separate driver as all dma-buf related functionality will
>>>>> reside at one place. This also means that no changes to existing
>>>>> drivers will be needed (if it is ok to have ballooning in/out
>>>>> code for DMA buffers (allocated with dma_alloc_xxx) not in the balloon
>>>>> driver)
>>>> I think in the end this really depends on how the complete solution
>>>> will look like. gntdev is a special wrapper for the gnttab driver.
>>>> In case the new dma-buf driver needs to use parts of gntdev I'd rather
>>>> have a new driver above gnttab ("gntuser"?) used by gntdev and dma-buf.
>>> The new driver doesn't use gntdev's existing API, but extends it,
>>> e.g. by adding new ways to export/import grefs for a dma-buf and
>>> manage dma-buf's kernel ops. Thus, gntdev, which already provides
>>> UAPI, seems to be a good candidate for such an extension
>> So this would mean you need a modification of libgnttab, right? This is
>> something the Xen tools maintainers need to decide. In case they don't
>> object extending the gntdev driver would be the natural thing to do.
>>
> That should be fine. Most of what libgnttab does is wrap existing kernel
> interfaces and expose them sensibly to user-space programs. If the gnttab
> device is extended, libgnttab should be extended accordingly. If a new
> device is created, a new library should be added. Either way there will
> be new toolstack code involved, which is not a problem in general.
Great, so finally I see the following approach to have generic
dma-buf use-cases support for Xen (which can be used for many purposes,
e.g. GPU/DRM buffer sharing, V4L, hyper-dmabuf etc.):

1. Extend Linux gntdev driver to support 3 new IOCTLs discussed previously
2. Extend libgnttab to provide UAPI for those - Linux only as dma-buf
is a Linux thing
3. Extend kernel API of the Linux balloon driver to allow dma_alloc_xxx way
of memory allocations

If the above looks ok, then I can start prototyping, so we can discuss
implementation details
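
For item 3, the balloon side would only need new in-kernel helpers along
these lines (a sketch with placeholder names, not an existing API):

    /* Sketch only: possible in-kernel helpers, names are placeholders. */
    #include <linux/device.h>
    #include <linux/dma-mapping.h>
    #include <linux/mm_types.h>

    /*
     * Take @nr_pages out of the balloon and back them with a physically
     * contiguous region obtained via dma_alloc_coherent() on @dev, so the
     * resulting buffer can be shared with real HW as a dma-buf.
     */
    int xen_balloon_alloc_dma_pages(struct device *dev,
                                    unsigned int nr_pages,
                                    struct page **pages, void **vaddr,
                                    dma_addr_t *dma_handle);

    /* Return the pages to the balloon and free the DMA region. */
    void xen_balloon_free_dma_pages(struct device *dev,
                                    unsigned int nr_pages,
                                    struct page **pages, void *vaddr,
                                    dma_addr_t dma_handle);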

Dongwon - could you please comment on whether all this fits your use-cases
(I do believe it does)?
> Wei.
>
>> Juergen
Thank you,
Oleksandr

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-24 10:14                                         ` [Xen-devel] " Oleksandr Andrushchenko
  2018-04-24 10:24                                           ` Juergen Gross
@ 2018-04-24 10:24                                           ` Juergen Gross
  1 sibling, 0 replies; 131+ messages in thread
From: Juergen Gross @ 2018-04-24 10:24 UTC (permalink / raw)
  To: Oleksandr Andrushchenko, Wei Liu, Dongwon Kim
  Cc: Oleksandr Andrushchenko, Boris Ostrovsky, Roger Pau Monné,
	Artem Mygaiev, konrad.wilk, airlied, linux-kernel, dri-devel,
	Potrola, MateuszX, xen-devel, daniel.vetter, Ian Jackson

On 24/04/18 12:14, Oleksandr Andrushchenko wrote:
> On 04/24/2018 01:01 PM, Wei Liu wrote:
>> On Tue, Apr 24, 2018 at 11:08:41AM +0200, Juergen Gross wrote:
>>> On 24/04/18 11:03, Oleksandr Andrushchenko wrote:
>>>> On 04/24/2018 11:40 AM, Juergen Gross wrote:
>>>>> On 24/04/18 10:07, Oleksandr Andrushchenko wrote:
>>>>>> On 04/24/2018 10:51 AM, Juergen Gross wrote:
>>>>>>> On 24/04/18 07:43, Oleksandr Andrushchenko wrote:
>>>>>>>> On 04/24/2018 01:41 AM, Boris Ostrovsky wrote:
>>>>>>>>> On 04/23/2018 08:10 AM, Oleksandr Andrushchenko wrote:
>>>>>>>>>> On 04/23/2018 02:52 PM, Wei Liu wrote:
>>>>>>>>>>> On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr
>>>>>>>>>>> Andrushchenko
>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>          the gntdev.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I think this is generic enough that it could be
>>>>>>>>>>>>>> implemented by a
>>>>>>>>>>>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>>>>>>>>>>>> something similar to this.
>>>>>>>>>>>>> You can't just wrap random userspace memory into a dma-buf.
>>>>>>>>>>>>> We've
>>>>>>>>>>>>> just had
>>>>>>>>>>>>> this discussion with kvm/qemu folks, who proposed just
>>>>>>>>>>>>> that, and
>>>>>>>>>>>>> after a
>>>>>>>>>>>>> bit of discussion they'll now try to have a driver which just
>>>>>>>>>>>>> wraps a
>>>>>>>>>>>>> memfd into a dma-buf.
>>>>>>>>>>>> So, we have to decide either we introduce a new driver
>>>>>>>>>>>> (say, under drivers/xen/xen-dma-buf) or extend the existing
>>>>>>>>>>>> gntdev/balloon to support dma-buf use-cases.
>>>>>>>>>>>>
>>>>>>>>>>>> Can anybody from Xen community express their preference here?
>>>>>>>>>>>>
>>>>>>>>>>> Oleksandr talked to me on IRC about this, he said a few IOCTLs
>>>>>>>>>>> need to
>>>>>>>>>>> be added to either existing drivers or a new driver.
>>>>>>>>>>>
>>>>>>>>>>> I went through this thread twice and skimmed through the
>>>>>>>>>>> relevant
>>>>>>>>>>> documents, but I couldn't see any obvious pros and cons for
>>>>>>>>>>> either
>>>>>>>>>>> approach. So I don't really have an opinion on this.
>>>>>>>>>>>
>>>>>>>>>>> But, assuming if implemented in existing drivers, those IOCTLs
>>>>>>>>>>> need to
>>>>>>>>>>> be added to different drivers, which means userspace program
>>>>>>>>>>> needs to
>>>>>>>>>>> write more code and get more handles, it would be slightly
>>>>>>>>>>> better to
>>>>>>>>>>> implement a new driver from that perspective.
>>>>>>>>>> If gntdev/balloon extension is still considered:
>>>>>>>>>>
>>>>>>>>>> All the IOCTLs will be in gntdev driver (in current xen-zcopy
>>>>>>>>>> terminology):
>>>>>>>>>>      - DRM_ICOTL_XEN_ZCOPY_DUMB_FROM_REFS
>>>>>>>>>>      - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>>>>>>>>>      - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>>>>>>>
>>>>>>>>>> Balloon driver extension, which is needed for contiguous/DMA
>>>>>>>>>> buffers, will be to provide new *kernel API*, no UAPI is needed.
>>>>>>>>>>
>>>>>>>>> So I am obviously a bit late to this thread, but why do you need
>>>>>>>>> to add
>>>>>>>>> new ioctls to gntdev and balloon? Doesn't this driver manage to do
>>>>>>>>> what
>>>>>>>>> you want without any extensions?
>>>>>>>> 1. I only (may) need to add IOCTLs to gntdev
>>>>>>>> 2. balloon driver needs to be extended, so it can allocate
>>>>>>>> contiguous (DMA) memory, not IOCTLs/UAPI here, all lives
>>>>>>>> in the kernel.
>>>>>>>> 3. The reason I need to extend gnttab with new IOCTLs is to
>>>>>>>> provide new functionality to create a dma-buf from grant references
>>>>>>>> and to produce grant references for a dma-buf. This is what I
>>>>>>>> have as
>>>>>>>> UAPI
>>>>>>>> description for xen-zcopy driver:
>>>>>>>>
>>>>>>>> 1. DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>>>>>>> This will create a DRM dumb buffer from grant references provided
>>>>>>>> by the frontend. The intended usage is:
>>>>>>>>      - Frontend
>>>>>>>>        - creates a dumb/display buffer and allocates memory
>>>>>>>>        - grants foreign access to the buffer pages
>>>>>>>>        - passes granted references to the backend
>>>>>>>>      - Backend
>>>>>>>>        - issues DRM_XEN_ZCOPY_DUMB_FROM_REFS ioctl to map
>>>>>>>>          granted references and create a dumb buffer
>>>>>>>>        - requests handle to fd conversion via
>>>>>>>> DRM_IOCTL_PRIME_HANDLE_TO_FD
>>>>>>>>        - requests real HW driver/consumer to import the PRIME
>>>>>>>> buffer
>>>>>>>> with
>>>>>>>>          DRM_IOCTL_PRIME_FD_TO_HANDLE
>>>>>>>>        - uses handle returned by the real HW driver
>>>>>>>>      - at the end:
>>>>>>>>        o closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>>>>>        o closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>>>>>        o closes file descriptor of the exported buffer
>>>>>>>>
>>>>>>>> 2. DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>>>>>>> This will grant references to a dumb/display buffer's memory
>>>>>>>> provided by
>>>>>>>> the
>>>>>>>> backend. The intended usage is:
>>>>>>>>      - Frontend
>>>>>>>>        - requests backend to allocate dumb/display buffer and grant
>>>>>>>> references
>>>>>>>>          to its pages
>>>>>>>>      - Backend
>>>>>>>>        - requests real HW driver to create a dumb with
>>>>>>>> DRM_IOCTL_MODE_CREATE_DUMB
>>>>>>>>        - requests handle to fd conversion via
>>>>>>>> DRM_IOCTL_PRIME_HANDLE_TO_FD
>>>>>>>>        - requests zero-copy driver to import the PRIME buffer with
>>>>>>>>          DRM_IOCTL_PRIME_FD_TO_HANDLE
>>>>>>>>        - issues DRM_XEN_ZCOPY_DUMB_TO_REFS ioctl to
>>>>>>>>          grant references to the buffer's memory.
>>>>>>>>        - passes grant references to the frontend
>>>>>>>>     - at the end:
>>>>>>>>        - closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>>>>>        - closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>>>>>        - closes file descriptor of the imported buffer
>>>>>>>>
>>>>>>>> 3. DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>>>>> This will block until the dumb buffer with the wait handle
>>>>>>>> provided be
>>>>>>>> freed:
>>>>>>>> this is needed for synchronization between frontend and backend in
>>>>>>>> case
>>>>>>>> frontend provides grant references of the buffer via
>>>>>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL and which must be released
>>>>>>>> before
>>>>>>>> backend replies with XENDISPL_OP_DBUF_DESTROY response.
>>>>>>>> wait_handle must be the same value returned while calling
>>>>>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL.
>>>>>>>>
>>>>>>>> So, as you can see the above functionality is not covered by the
>>>>>>>> existing UAPI
>>>>>>>> of the gntdev driver.
>>>>>>>> Now, if we change dumb -> dma-buf and remove DRM code (which is
>>>>>>>> only a
>>>>>>>> wrapper
>>>>>>>> here on top of dma-buf) we get new driver for dma-buf for Xen.
>>>>>>>>
>>>>>>>> This is why I have 2 options here: either create a dedicated driver
>>>>>>>> for
>>>>>>>> this
>>>>>>>> (e.g. re-work xen-zcopy to be DRM independent and put it under
>>>>>>>> drivers/xen/xen-dma-buf, for example) or extend the existing gntdev
>>>>>>>> driver
>>>>>>>> with the above UAPI + make changes to the balloon driver to provide
>>>>>>>> kernel
>>>>>>>> API for DMA buffer allocations.
>>>>>>> Which user component would use the new ioctls?
>>>>>> It is currently used by the display backend [1] and will
>>>>>> probably be used by the hyper-dmabuf frontend/backend
>>>>>> (Dongwon from Intel can provide more info on this).
>>>>>>> I'm asking because I'm not very fond of adding more linux specific
>>>>>>> functions to libgnttab which are not related to a specific Xen
>>>>>>> version,
>>>>>>> but to a kernel version.
>>>>>> Hm, I was not thinking about this UAPI to be added to libgnttab.
>>>>>> It seems it can be used directly w/o wrappers in user-space
>>>>> Would this program use libgnttab in parallel?
>>>> In case of the display backend - yes, for shared rings,
>>>> extracting grefs from displif protocol it uses gntdev via
>>>> helper library [1]
>>>>>    If yes how would the two
>>>>> usage paths be combined (same applies to the separate driver, btw)?
>>>>> The
>>>>> gntdev driver manages resources per file descriptor and libgnttab is
>>>>> hiding the file descriptor it is using for a connection.
>>>> Ah, at the moment the UAPI was not used in parallel as there were
>>>> 2 drivers for that: gntdev + xen-zcopy with different UAPIs.
>>>> But now, if we extend gntdev with the new API then you are right:
>>>> either libgnttab needs to be extended or that new part of the
>>>> gntdev UAPI needs to be open-coded by the backend
>>>>>    Or would the
>>>>> user program use only the new driver for communicating with the gntdev
>>>>> driver? In this case it might be an option to extend the gntdev driver
>>>>> to present a new device (e.g. "gntdmadev") for that purpose.
>>>> No, it seems that libgnttab and this new driver's UAPI will be used
>>>> in parallel
>>>>>>> So doing this in a separate driver seems to be the better option in
>>>>>>> this regard.
>>>>>> Well, from maintenance POV it is easier for me to have it all in
>>>>>> a separate driver as all dma-buf related functionality will
>>>>>> reside at one place. This also means that no changes to existing
>>>>>> drivers will be needed (if it is ok to have ballooning in/out
>>>>>> code for DMA buffers (allocated with dma_alloc_xxx) not in the
>>>>>> balloon
>>>>>> driver)
>>>>> I think in the end this really depends on how the complete solution
>>>>> will look like. gntdev is a special wrapper for the gnttab driver.
>>>>> In case the new dma-buf driver needs to use parts of gntdev I'd rather
>>>>> have a new driver above gnttab ("gntuser"?) used by gntdev and
>>>>> dma-buf.
>>>> The new driver doesn't use gntdev's existing API, but extends it,
>>>> e.g. by adding new ways to export/import grefs for a dma-buf and
>>>> manage dma-buf's kernel ops. Thus, gntdev, which already provides
>>>> UAPI, seems to be a good candidate for such an extension
>>> So this would mean you need a modification of libgnttab, right? This is
>>> something the Xen tools maintainers need to decide. In case they don't
>>> object extending the gntdev driver would be the natural thing to do.
>>>
>> That should be fine. Most of what libgnttab does is wrap existing kernel
>> interfaces and expose them sensibly to user-space programs. If the gnttab
>> device is extended, libgnttab should be extended accordingly. If a new
>> device is created, a new library should be added. Either way there will
>> be new toolstack code involved, which is not a problem in general.
> Great, so finally I see the following approach to have generic
> dma-buf use-cases support for Xen (which can be used for many purposes,
> e.g. GPU/DRM buffer sharing, V4L, hyper-dmabuf etc.):
> 
> 1. Extend Linux gntdev driver to support 3 new IOCTLs discussed previously
> 2. Extend libgnttab to provide UAPI for those - Linux only as dma-buf
> is a Linux thing
> 3. Extend kernel API of the Linux balloon driver to allow dma_alloc_xxx way
> of memory allocations
> 
> If the above looks ok, then I can start prototyping, so we can discuss
> implementation details

Fine for me.


Juergen

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-24 10:14                                         ` [Xen-devel] " Oleksandr Andrushchenko
@ 2018-04-24 10:24                                           ` Juergen Gross
  2018-04-24 10:24                                           ` [Xen-devel] " Juergen Gross
  1 sibling, 0 replies; 131+ messages in thread
From: Juergen Gross @ 2018-04-24 10:24 UTC (permalink / raw)
  To: Oleksandr Andrushchenko, Wei Liu, Dongwon Kim
  Cc: Artem Mygaiev, Oleksandr Andrushchenko, Ian Jackson,
	linux-kernel, dri-devel, airlied, Potrola, MateuszX,
	daniel.vetter, xen-devel, Boris Ostrovsky, Roger Pau Monné

On 24/04/18 12:14, Oleksandr Andrushchenko wrote:
> On 04/24/2018 01:01 PM, Wei Liu wrote:
>> On Tue, Apr 24, 2018 at 11:08:41AM +0200, Juergen Gross wrote:
>>> On 24/04/18 11:03, Oleksandr Andrushchenko wrote:
>>>> On 04/24/2018 11:40 AM, Juergen Gross wrote:
>>>>> On 24/04/18 10:07, Oleksandr Andrushchenko wrote:
>>>>>> On 04/24/2018 10:51 AM, Juergen Gross wrote:
>>>>>>> On 24/04/18 07:43, Oleksandr Andrushchenko wrote:
>>>>>>>> On 04/24/2018 01:41 AM, Boris Ostrovsky wrote:
>>>>>>>>> On 04/23/2018 08:10 AM, Oleksandr Andrushchenko wrote:
>>>>>>>>>> On 04/23/2018 02:52 PM, Wei Liu wrote:
>>>>>>>>>>> On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr
>>>>>>>>>>> Andrushchenko
>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>          the gntdev.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I think this is generic enough that it could be
>>>>>>>>>>>>>> implemented by a
>>>>>>>>>>>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>>>>>>>>>>>> something similar to this.
>>>>>>>>>>>>> You can't just wrap random userspace memory into a dma-buf.
>>>>>>>>>>>>> We've
>>>>>>>>>>>>> just had
>>>>>>>>>>>>> this discussion with kvm/qemu folks, who proposed just
>>>>>>>>>>>>> that, and
>>>>>>>>>>>>> after a
>>>>>>>>>>>>> bit of discussion they'll now try to have a driver which just
>>>>>>>>>>>>> wraps a
>>>>>>>>>>>>> memfd into a dma-buf.
>>>>>>>>>>>> So, we have to decide either we introduce a new driver
>>>>>>>>>>>> (say, under drivers/xen/xen-dma-buf) or extend the existing
>>>>>>>>>>>> gntdev/balloon to support dma-buf use-cases.
>>>>>>>>>>>>
>>>>>>>>>>>> Can anybody from Xen community express their preference here?
>>>>>>>>>>>>
>>>>>>>>>>> Oleksandr talked to me on IRC about this, he said a few IOCTLs
>>>>>>>>>>> need to
>>>>>>>>>>> be added to either existing drivers or a new driver.
>>>>>>>>>>>
>>>>>>>>>>> I went through this thread twice and skimmed through the
>>>>>>>>>>> relevant
>>>>>>>>>>> documents, but I couldn't see any obvious pros and cons for
>>>>>>>>>>> either
>>>>>>>>>>> approach. So I don't really have an opinion on this.
>>>>>>>>>>>
>>>>>>>>>>> But, assuming if implemented in existing drivers, those IOCTLs
>>>>>>>>>>> need to
>>>>>>>>>>> be added to different drivers, which means userspace program
>>>>>>>>>>> needs to
>>>>>>>>>>> write more code and get more handles, it would be slightly
>>>>>>>>>>> better to
>>>>>>>>>>> implement a new driver from that perspective.
>>>>>>>>>> If gntdev/balloon extension is still considered:
>>>>>>>>>>
>>>>>>>>>> All the IOCTLs will be in gntdev driver (in current xen-zcopy
>>>>>>>>>> terminology):
>>>>>>>>>>      - DRM_ICOTL_XEN_ZCOPY_DUMB_FROM_REFS
>>>>>>>>>>      - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>>>>>>>>>      - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>>>>>>>
>>>>>>>>>> Balloon driver extension, which is needed for contiguous/DMA
>>>>>>>>>> buffers, will be to provide new *kernel API*, no UAPI is needed.
>>>>>>>>>>
>>>>>>>>> So I am obviously a bit late to this thread, but why do you need
>>>>>>>>> to add
>>>>>>>>> new ioctls to gntdev and balloon? Doesn't this driver manage to do
>>>>>>>>> what
>>>>>>>>> you want without any extensions?
>>>>>>>> 1. I only (may) need to add IOCTLs to gntdev
>>>>>>>> 2. balloon driver needs to be extended, so it can allocate
>>>>>>>> contiguous (DMA) memory, not IOCTLs/UAPI here, all lives
>>>>>>>> in the kernel.
>>>>>>>> 3. The reason I need to extend gnttab with new IOCTLs is to
>>>>>>>> provide new functionality to create a dma-buf from grant references
>>>>>>>> and to produce grant references for a dma-buf. This is what I
>>>>>>>> have as
>>>>>>>> UAPI
>>>>>>>> description for xen-zcopy driver:
>>>>>>>>
>>>>>>>> 1. DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>>>>>>> This will create a DRM dumb buffer from grant references provided
>>>>>>>> by the frontend. The intended usage is:
>>>>>>>>      - Frontend
>>>>>>>>        - creates a dumb/display buffer and allocates memory
>>>>>>>>        - grants foreign access to the buffer pages
>>>>>>>>        - passes granted references to the backend
>>>>>>>>      - Backend
>>>>>>>>        - issues DRM_XEN_ZCOPY_DUMB_FROM_REFS ioctl to map
>>>>>>>>          granted references and create a dumb buffer
>>>>>>>>        - requests handle to fd conversion via
>>>>>>>> DRM_IOCTL_PRIME_HANDLE_TO_FD
>>>>>>>>        - requests real HW driver/consumer to import the PRIME
>>>>>>>> buffer
>>>>>>>> with
>>>>>>>>          DRM_IOCTL_PRIME_FD_TO_HANDLE
>>>>>>>>        - uses handle returned by the real HW driver
>>>>>>>>      - at the end:
>>>>>>>>        o closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>>>>>        o closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>>>>>        o closes file descriptor of the exported buffer
>>>>>>>>
>>>>>>>> 2. DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>>>>>>> This will grant references to a dumb/display buffer's memory
>>>>>>>> provided by
>>>>>>>> the
>>>>>>>> backend. The intended usage is:
>>>>>>>>      - Frontend
>>>>>>>>        - requests backend to allocate dumb/display buffer and grant
>>>>>>>> references
>>>>>>>>          to its pages
>>>>>>>>      - Backend
>>>>>>>>        - requests real HW driver to create a dumb with
>>>>>>>> DRM_IOCTL_MODE_CREATE_DUMB
>>>>>>>>        - requests handle to fd conversion via
>>>>>>>> DRM_IOCTL_PRIME_HANDLE_TO_FD
>>>>>>>>        - requests zero-copy driver to import the PRIME buffer with
>>>>>>>>          DRM_IOCTL_PRIME_FD_TO_HANDLE
>>>>>>>>        - issues DRM_XEN_ZCOPY_DUMB_TO_REFS ioctl to
>>>>>>>>          grant references to the buffer's memory.
>>>>>>>>        - passes grant references to the frontend
>>>>>>>>     - at the end:
>>>>>>>>        - closes zero-copy driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>>>>>        - closes real HW driver's handle with DRM_IOCTL_GEM_CLOSE
>>>>>>>>        - closes file descriptor of the imported buffer
>>>>>>>>
>>>>>>>> 3. DRM_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>>>>> This will block until the dumb buffer with the wait handle
>>>>>>>> provided be
>>>>>>>> freed:
>>>>>>>> this is needed for synchronization between frontend and backend in
>>>>>>>> case
>>>>>>>> frontend provides grant references of the buffer via
>>>>>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL and which must be released
>>>>>>>> before
>>>>>>>> backend replies with XENDISPL_OP_DBUF_DESTROY response.
>>>>>>>> wait_handle must be the same value returned while calling
>>>>>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS IOCTL.
>>>>>>>>
>>>>>>>> So, as you can see the above functionality is not covered by the
>>>>>>>> existing UAPI
>>>>>>>> of the gntdev driver.
>>>>>>>> Now, if we change dumb -> dma-buf and remove DRM code (which is
>>>>>>>> only a
>>>>>>>> wrapper
>>>>>>>> here on top of dma-buf) we get new driver for dma-buf for Xen.
>>>>>>>>
>>>>>>>> This is why I have 2 options here: either create a dedicated driver
>>>>>>>> for
>>>>>>>> this
>>>>>>>> (e.g. re-work xen-zcopy to be DRM independent and put it under
>>>>>>>> drivers/xen/xen-dma-buf, for example) or extend the existing gntdev
>>>>>>>> driver
>>>>>>>> with the above UAPI + make changes to the balloon driver to provide
>>>>>>>> kernel
>>>>>>>> API for DMA buffer allocations.
>>>>>>> Which user component would use the new ioctls?
>>>>>> It is currently used by the display backend [1] and will
>>>>>> probably be used by the hyper-dmabuf frontend/backend
>>>>>> (Dongwon from Intel can provide more info on this).
>>>>>>> I'm asking because I'm not very fond of adding more linux specific
>>>>>>> functions to libgnttab which are not related to a specific Xen
>>>>>>> version,
>>>>>>> but to a kernel version.
>>>>>> Hm, I was not thinking about this UAPI to be added to libgnttab.
>>>>>> It seems it can be used directly w/o wrappers in user-space
>>>>> Would this program use libgnttab in parallel?
>>>> In case of the display backend - yes, for shared rings,
>>>> extracting grefs from displif protocol it uses gntdev via
>>>> helper library [1]
>>>>>    If yes how would the two
>>>>> usage paths be combined (same applies to the separate driver, btw)?
>>>>> The
>>>>> gntdev driver manages resources per file descriptor and libgnttab is
>>>>> hiding the file descriptor it is using for a connection.
>>>> Ah, at the moment the UAPI was not used in parallel as there were
>>>> 2 drivers for that: gntdev + xen-zcopy with different UAPIs.
>>>> But now, if we extend gntdev with the new API then you are right:
>>>> either libgnttab needs to be extended or that new part of the
>>>> gntdev UAPI needs to be open-coded by the backend
>>>>>    Or would the
>>>>> user program use only the new driver for communicating with the gntdev
>>>>> driver? In this case it might be an option to extend the gntdev driver
>>>>> to present a new device (e.g. "gntdmadev") for that purpose.
>>>> No, it seems that libgnttab and this new driver's UAPI will be used
>>>> in parallel
>>>>>>> So doing this in a separate driver seems to be the better option in
>>>>>>> this regard.
>>>>>> Well, from maintenance POV it is easier for me to have it all in
>>>>>> a separate driver as all dma-buf related functionality will
>>>>>> reside at one place. This also means that no changes to existing
>>>>>> drivers will be needed (if it is ok to have ballooning in/out
>>>>>> code for DMA buffers (allocated with dma_alloc_xxx) not in the
>>>>>> balloon
>>>>>> driver)
>>>>> I think in the end this really depends on how the complete solution
>>>>> will look like. gntdev is a special wrapper for the gnttab driver.
>>>>> In case the new dma-buf driver needs to use parts of gntdev I'd rather
>>>>> have a new driver above gnttab ("gntuser"?) used by gntdev and
>>>>> dma-buf.
>>>> The new driver doesn't use gntdev's existing API, but extends it,
>>>> e.g. by adding new ways to export/import grefs for a dma-buf and
>>>> manage dma-buf's kernel ops. Thus, gntdev, which already provides
>>>> UAPI, seems to be a good candidate for such an extension
>>> So this would mean you need a modification of libgnttab, right? This is
>>> something the Xen tools maintainers need to decide. In case they don't
>>> object extending the gntdev driver would be the natural thing to do.
>>>
>> That should be fine. Most of what libgnttab does is wrap existing kernel
>> interfaces and expose them sensibly to user-space programs. If the gnttab
>> device is extended, libgnttab should be extended accordingly. If a new
>> device is created, a new library should be added. Either way there will
>> be new toolstack code involved, which is not a problem in general.
> Great, so finally I see the following approach to have generic
> dma-buf use-cases support for Xen (which can be used for many purposes,
> e.g. GPU/DRM buffer sharing, V4L, hyper-dmabuf etc.):
> 
> 1. Extend Linux gntdev driver to support 3 new IOCTLs discussed previously
> 2. Extend libgnttab to provide UAPI for those - Linux only as dma-buf
> is a Linux thing
> 3. Extend kernel API of the Linux balloon driver to allow dma_alloc_xxx way
> of memory allocations
> 
> If the above looks ok, then I can start prototyping, so we can discuss
> implementation details

Fine for me.


Juergen

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-23 12:10                         ` Oleksandr Andrushchenko
@ 2018-04-24 11:54                           ` Daniel Vetter
  -1 siblings, 0 replies; 131+ messages in thread
From: Daniel Vetter @ 2018-04-24 11:54 UTC (permalink / raw)
  To: Oleksandr Andrushchenko
  Cc: Wei Liu, jgross, Artem Mygaiev, Dongwon Kim, konrad.wilk,
	airlied, Oleksandr_Andrushchenko, linux-kernel, dri-devel,
	Potrola, MateuszX, daniel.vetter, xen-devel, boris.ostrovsky,
	Roger Pau Monné

On Mon, Apr 23, 2018 at 03:10:35PM +0300, Oleksandr Andrushchenko wrote:
> On 04/23/2018 02:52 PM, Wei Liu wrote:
> > On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko wrote:
> > > > >      the gntdev.
> > > > > 
> > > > > I think this is generic enough that it could be implemented by a
> > > > > device not tied to Xen. AFAICT the hyper_dma guys also wanted
> > > > > something similar to this.
> > > > You can't just wrap random userspace memory into a dma-buf. We've just had
> > > > this discussion with kvm/qemu folks, who proposed just that, and after a
> > > > bit of discussion they'll now try to have a driver which just wraps a
> > > > memfd into a dma-buf.
> > > So, we have to decide either we introduce a new driver
> > > (say, under drivers/xen/xen-dma-buf) or extend the existing
> > > gntdev/balloon to support dma-buf use-cases.
> > > 
> > > Can anybody from Xen community express their preference here?
> > > 
> > Oleksandr talked to me on IRC about this, he said a few IOCTLs need to
> > be added to either existing drivers or a new driver.
> > 
> > I went through this thread twice and skimmed through the relevant
> > documents, but I couldn't see any obvious pros and cons for either
> > approach. So I don't really have an opinion on this.
> > 
> > But, assuming if implemented in existing drivers, those IOCTLs need to
> > be added to different drivers, which means userspace program needs to
> > write more code and get more handles, it would be slightly better to
> > implement a new driver from that perspective.
> If gntdev/balloon extension is still considered:
> 
> All the IOCTLs will be in gntdev driver (in current xen-zcopy terminology):
>  - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>  - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>  - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE

s/DUMB/DMA_BUF/ please. This is generic dma-buf, it has nothing to do with
the dumb scanout buffer support in the drm/gfx subsystem. This here can be
used for any zcopy sharing among guests (as long as your endpoints
understand dma-buf, which most relevant drivers do).
-Daniel

> 
> Balloon driver extension, which is needed for contiguous/DMA
> buffers, will be to provide new *kernel API*, no UAPI is needed.
> 
> > Wei.
> Thank you,
> Oleksandr
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
@ 2018-04-24 11:54                           ` Daniel Vetter
  0 siblings, 0 replies; 131+ messages in thread
From: Daniel Vetter @ 2018-04-24 11:54 UTC (permalink / raw)
  To: Oleksandr Andrushchenko
  Cc: jgross, Artem Mygaiev, Wei Liu, Dongwon Kim, konrad.wilk,
	airlied, Oleksandr_Andrushchenko, linux-kernel, dri-devel,
	Potrola, MateuszX, xen-devel, daniel.vetter, boris.ostrovsky,
	Roger Pau Monné

On Mon, Apr 23, 2018 at 03:10:35PM +0300, Oleksandr Andrushchenko wrote:
> On 04/23/2018 02:52 PM, Wei Liu wrote:
> > On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko wrote:
> > > > >      the gntdev.
> > > > > 
> > > > > I think this is generic enough that it could be implemented by a
> > > > > device not tied to Xen. AFAICT the hyper_dma guys also wanted
> > > > > something similar to this.
> > > > You can't just wrap random userspace memory into a dma-buf. We've just had
> > > > this discussion with kvm/qemu folks, who proposed just that, and after a
> > > > bit of discussion they'll now try to have a driver which just wraps a
> > > > memfd into a dma-buf.
> > > So, we have to decide either we introduce a new driver
> > > (say, under drivers/xen/xen-dma-buf) or extend the existing
> > > gntdev/balloon to support dma-buf use-cases.
> > > 
> > > Can anybody from Xen community express their preference here?
> > > 
> > Oleksandr talked to me on IRC about this, he said a few IOCTLs need to
> > be added to either existing drivers or a new driver.
> > 
> > I went through this thread twice and skimmed through the relevant
> > documents, but I couldn't see any obvious pros and cons for either
> > approach. So I don't really have an opinion on this.
> > 
> > But, assuming if implemented in existing drivers, those IOCTLs need to
> > be added to different drivers, which means userspace program needs to
> > write more code and get more handles, it would be slightly better to
> > implement a new driver from that perspective.
> If gntdev/balloon extension is still considered:
> 
> All the IOCTLs will be in gntdev driver (in current xen-zcopy terminology):
>  - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>  - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>  - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE

s/DUMB/DMA_BUF/ please. This is generic dma-buf, it has nothing to do with
the dumb scanout buffer support in the drm/gfx subsystem. This here can be
used for any zcopy sharing among guests (as long as your endpoints
understand dma-buf, which most relevant drivers do).
-Daniel

> 
> Balloon driver extension, which is needed for contiguous/DMA
> buffers, will be to provide new *kernel API*, no UAPI is needed.
> 
> > Wei.
> Thank you,
> Oleksandr
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-23 12:10                         ` Oleksandr Andrushchenko
                                           ` (2 preceding siblings ...)
  (?)
@ 2018-04-24 11:54                         ` Daniel Vetter
  -1 siblings, 0 replies; 131+ messages in thread
From: Daniel Vetter @ 2018-04-24 11:54 UTC (permalink / raw)
  To: Oleksandr Andrushchenko
  Cc: jgross, Artem Mygaiev, Wei Liu, Dongwon Kim, airlied,
	Oleksandr_Andrushchenko, linux-kernel, dri-devel, Potrola,
	MateuszX, xen-devel, daniel.vetter, boris.ostrovsky,
	Roger Pau Monné

On Mon, Apr 23, 2018 at 03:10:35PM +0300, Oleksandr Andrushchenko wrote:
> On 04/23/2018 02:52 PM, Wei Liu wrote:
> > On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko wrote:
> > > > >      the gntdev.
> > > > > 
> > > > > I think this is generic enough that it could be implemented by a
> > > > > device not tied to Xen. AFAICT the hyper_dma guys also wanted
> > > > > something similar to this.
> > > > You can't just wrap random userspace memory into a dma-buf. We've just had
> > > > this discussion with kvm/qemu folks, who proposed just that, and after a
> > > > bit of discussion they'll now try to have a driver which just wraps a
> > > > memfd into a dma-buf.
> > > So, we have to decide either we introduce a new driver
> > > (say, under drivers/xen/xen-dma-buf) or extend the existing
> > > gntdev/balloon to support dma-buf use-cases.
> > > 
> > > Can anybody from Xen community express their preference here?
> > > 
> > Oleksandr talked to me on IRC about this, he said a few IOCTLs need to
> > be added to either existing drivers or a new driver.
> > 
> > I went through this thread twice and skimmed through the relevant
> > documents, but I couldn't see any obvious pros and cons for either
> > approach. So I don't really have an opinion on this.
> > 
> > But, assuming if implemented in existing drivers, those IOCTLs need to
> > be added to different drivers, which means userspace program needs to
> > write more code and get more handles, it would be slightly better to
> > implement a new driver from that perspective.
> If gntdev/balloon extension is still considered:
> 
> All the IOCTLs will be in gntdev driver (in current xen-zcopy terminology):
>  - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>  - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>  - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE

s/DUMB/DMA_BUF/ please. This is generic dma-buf, it has nothing to do with
the dumb scanout buffer support in the drm/gfx subsystem. This here can be
used for any zcopy sharing among guests (as long as your endpoints
understand dma-buf, which most relevant drivers do).
-Daniel

> 
> Balloon driver extension, which is needed for contiguous/DMA
> buffers, will be to provide new *kernel API*, no UAPI is needed.
> 
> > Wei.
> Thank you,
> Oleksandr
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-24 11:54                           ` Daniel Vetter
  (?)
@ 2018-04-24 11:59                           ` Oleksandr Andrushchenko
  2018-04-24 20:35                             ` Dongwon Kim
  2018-04-24 20:35                               ` Dongwon Kim
  -1 siblings, 2 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-24 11:59 UTC (permalink / raw)
  To: Wei Liu, jgross, Artem Mygaiev, Dongwon Kim, konrad.wilk,
	airlied, linux-kernel, dri-devel, Potrola, MateuszX,
	daniel.vetter, xen-devel, boris.ostrovsky, Roger Pau Monné
  Cc: Oleksandr_Andrushchenko

On 04/24/2018 02:54 PM, Daniel Vetter wrote:
> On Mon, Apr 23, 2018 at 03:10:35PM +0300, Oleksandr Andrushchenko wrote:
>> On 04/23/2018 02:52 PM, Wei Liu wrote:
>>> On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko wrote:
>>>>>>       the gntdev.
>>>>>>
>>>>>> I think this is generic enough that it could be implemented by a
>>>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>>>> something similar to this.
>>>>> You can't just wrap random userspace memory into a dma-buf. We've just had
>>>>> this discussion with kvm/qemu folks, who proposed just that, and after a
>>>>> bit of discussion they'll now try to have a driver which just wraps a
>>>>> memfd into a dma-buf.
>>>> So, we have to decide either we introduce a new driver
>>>> (say, under drivers/xen/xen-dma-buf) or extend the existing
>>>> gntdev/balloon to support dma-buf use-cases.
>>>>
>>>> Can anybody from Xen community express their preference here?
>>>>
>>> Oleksandr talked to me on IRC about this, he said a few IOCTLs need to
>>> be added to either existing drivers or a new driver.
>>>
>>> I went through this thread twice and skimmed through the relevant
>>> documents, but I couldn't see any obvious pros and cons for either
>>> approach. So I don't really have an opinion on this.
>>>
>>> But, assuming if implemented in existing drivers, those IOCTLs need to
>>> be added to different drivers, which means userspace program needs to
>>> write more code and get more handles, it would be slightly better to
>>> implement a new driver from that perspective.
>> If gntdev/balloon extension is still considered:
>>
>> All the IOCTLs will be in gntdev driver (in current xen-zcopy terminology):
I was too lazy to change dumb to dma-buf, hence this notice ;)
>>   - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>   - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>   - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
> s/DUMB/DMA_BUF/ please. This is generic dma-buf, it has nothing to do with
> the dumb scanout buffer support in the drm/gfx subsystem. This here can be
> used for any zcopy sharing among guests (as long as your endpoints
> understands dma-buf, which most relevant drivers do).
Of course, please see above
> -Daniel
>
>> Balloon driver extension, which is needed for contiguous/DMA
>> buffers, will be to provide new *kernel API*, no UAPI is needed.
>>
>>> Wei.
>> Thank you,
>> Oleksandr
>> _______________________________________________
>> dri-devel mailing list
>> dri-devel@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-24 11:54                           ` Daniel Vetter
  (?)
  (?)
@ 2018-04-24 11:59                           ` Oleksandr Andrushchenko
  -1 siblings, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-24 11:59 UTC (permalink / raw)
  To: Wei Liu, jgross, Artem Mygaiev, Dongwon Kim, konrad.wilk,
	airlied, linux-kernel, dri-devel, Potrola, MateuszX,
	daniel.vetter, xen-devel, boris.ostrovsky, Roger Pau Monné
  Cc: Oleksandr_Andrushchenko

On 04/24/2018 02:54 PM, Daniel Vetter wrote:
> On Mon, Apr 23, 2018 at 03:10:35PM +0300, Oleksandr Andrushchenko wrote:
>> On 04/23/2018 02:52 PM, Wei Liu wrote:
>>> On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko wrote:
>>>>>>       the gntdev.
>>>>>>
>>>>>> I think this is generic enough that it could be implemented by a
>>>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>>>> something similar to this.
>>>>> You can't just wrap random userspace memory into a dma-buf. We've just had
>>>>> this discussion with kvm/qemu folks, who proposed just that, and after a
>>>>> bit of discussion they'll now try to have a driver which just wraps a
>>>>> memfd into a dma-buf.
>>>> So, we have to decide either we introduce a new driver
>>>> (say, under drivers/xen/xen-dma-buf) or extend the existing
>>>> gntdev/balloon to support dma-buf use-cases.
>>>>
>>>> Can anybody from Xen community express their preference here?
>>>>
>>> Oleksandr talked to me on IRC about this, he said a few IOCTLs need to
>>> be added to either existing drivers or a new driver.
>>>
>>> I went through this thread twice and skimmed through the relevant
>>> documents, but I couldn't see any obvious pros and cons for either
>>> approach. So I don't really have an opinion on this.
>>>
>>> But, assuming if implemented in existing drivers, those IOCTLs need to
>>> be added to different drivers, which means userspace program needs to
>>> write more code and get more handles, it would be slightly better to
>>> implement a new driver from that perspective.
>> If gntdev/balloon extension is still considered:
>>
>> All the IOCTLs will be in gntdev driver (in current xen-zcopy terminology):
I was too lazy to change dumb to dma-buf, hence this notice ;)
>>   - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>   - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>   - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
> s/DUMB/DMA_BUF/ please. This is generic dma-buf, it has nothing to do with
> the dumb scanout buffer support in the drm/gfx subsystem. This here can be
> used for any zcopy sharing among guests (as long as your endpoints
> understands dma-buf, which most relevant drivers do).
Of course, please see above
> -Daniel
>
>> Balloon driver extension, which is needed for contiguous/DMA
>> buffers, will be to provide new *kernel API*, no UAPI is needed.
>>
>>> Wei.
>> Thank you,
>> Oleksandr
>> _______________________________________________
>> dri-devel mailing list
>> dri-devel@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/dri-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-24 11:59                           ` Oleksandr Andrushchenko
@ 2018-04-24 20:35                               ` Dongwon Kim
  2018-04-24 20:35                               ` Dongwon Kim
  1 sibling, 0 replies; 131+ messages in thread
From: Dongwon Kim @ 2018-04-24 20:35 UTC (permalink / raw)
  To: Oleksandr Andrushchenko
  Cc: Wei Liu, jgross, Artem Mygaiev, konrad.wilk, airlied,
	linux-kernel, dri-devel, Potrola, MateuszX, daniel.vetter,
	xen-devel, boris.ostrovsky, Roger Pau Monné,
	Oleksandr_Andrushchenko

Had a meeting with Daniel and talked about bringing out generic
part of hyper-dmabuf to the userspace, which means we most likely
reuse IOCTLs defined in xen-zcopy for our use-case if we follow
his suggestion.

So assuming we use these IOCTLs as they are, several things I would
like you to double-check:

1. returning the gref as is to user space is still unsafe because it
is a constant, easy to guess, and any process that hijacks it can
easily exploit the buffer. So I am wondering if it's possible to keep
dmabuf-to-gref or gref-to-dmabuf in kernel space and add other layers
on top of those in the actual IOCTLs to add some safety. We introduced
a flink-like hyper_dmabuf_id that includes a random number, but many
say even that is still not safe.

2. maybe we could take the hypervisor-independent processing
(e.g. SGT<->page conversion) out of xen-zcopy and put it in a new
helper library.

3. please consider the case where the original DMA-BUF's first offset
and last length are not 0 and PAGE_SIZE respectively. I assume the
current xen-zcopy only supports page-aligned buffers whose size is a
multiple of PAGE_SIZE.

thanks,
DW

On Tue, Apr 24, 2018 at 02:59:39PM +0300, Oleksandr Andrushchenko wrote:
> On 04/24/2018 02:54 PM, Daniel Vetter wrote:
> >On Mon, Apr 23, 2018 at 03:10:35PM +0300, Oleksandr Andrushchenko wrote:
> >>On 04/23/2018 02:52 PM, Wei Liu wrote:
> >>>On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko wrote:
> >>>>>>      the gntdev.
> >>>>>>
> >>>>>>I think this is generic enough that it could be implemented by a
> >>>>>>device not tied to Xen. AFAICT the hyper_dma guys also wanted
> >>>>>>something similar to this.
> >>>>>You can't just wrap random userspace memory into a dma-buf. We've just had
> >>>>>this discussion with kvm/qemu folks, who proposed just that, and after a
> >>>>>bit of discussion they'll now try to have a driver which just wraps a
> >>>>>memfd into a dma-buf.
> >>>>So, we have to decide either we introduce a new driver
> >>>>(say, under drivers/xen/xen-dma-buf) or extend the existing
> >>>>gntdev/balloon to support dma-buf use-cases.
> >>>>
> >>>>Can anybody from Xen community express their preference here?
> >>>>
> >>>Oleksandr talked to me on IRC about this, he said a few IOCTLs need to
> >>>be added to either existing drivers or a new driver.
> >>>
> >>>I went through this thread twice and skimmed through the relevant
> >>>documents, but I couldn't see any obvious pros and cons for either
> >>>approach. So I don't really have an opinion on this.
> >>>
> >>>But, assuming if implemented in existing drivers, those IOCTLs need to
> >>>be added to different drivers, which means userspace program needs to
> >>>write more code and get more handles, it would be slightly better to
> >>>implement a new driver from that perspective.
> >>If gntdev/balloon extension is still considered:
> >>
> >>All the IOCTLs will be in gntdev driver (in current xen-zcopy terminology):
> I was lazy to change dumb to dma-buf, so put this notice ;)
> >>  - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
> >>  - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
> >>  - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
> >s/DUMB/DMA_BUF/ please. This is generic dma-buf, it has nothing to do with
> >the dumb scanout buffer support in the drm/gfx subsystem. This here can be
> >used for any zcopy sharing among guests (as long as your endpoints
> >understands dma-buf, which most relevant drivers do).
> Of course, please see above
> >-Daniel
> >
> >>Balloon driver extension, which is needed for contiguous/DMA
> >>buffers, will be to provide new *kernel API*, no UAPI is needed.
> >>
> >>>Wei.
> >>Thank you,
> >>Oleksandr
> >>_______________________________________________
> >>dri-devel mailing list
> >>dri-devel@lists.freedesktop.org
> >>https://lists.freedesktop.org/mailman/listinfo/dri-devel
> 

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
@ 2018-04-24 20:35                               ` Dongwon Kim
  0 siblings, 0 replies; 131+ messages in thread
From: Dongwon Kim @ 2018-04-24 20:35 UTC (permalink / raw)
  To: Oleksandr Andrushchenko
  Cc: jgross, Artem Mygaiev, Wei Liu, konrad.wilk, airlied,
	Oleksandr_Andrushchenko, linux-kernel, dri-devel, Potrola,
	MateuszX, xen-devel, daniel.vetter, boris.ostrovsky,
	Roger Pau Monné

Had a meeting with Daniel and talked about bringing out generic
part of hyper-dmabuf to the userspace, which means we most likely
reuse IOCTLs defined in xen-zcopy for our use-case if we follow
his suggestion.

So assuming we use these IOCTLs as they are, several things I would
like you to double-check:

1. returning the gref as is to user space is still unsafe because it
is a constant, easy to guess, and any process that hijacks it can
easily exploit the buffer. So I am wondering if it's possible to keep
dmabuf-to-gref or gref-to-dmabuf in kernel space and add other layers
on top of those in the actual IOCTLs to add some safety. We introduced
a flink-like hyper_dmabuf_id that includes a random number, but many
say even that is still not safe.

2. maybe we could take the hypervisor-independent processing
(e.g. SGT<->page conversion) out of xen-zcopy and put it in a new
helper library.

3. please consider the case where the original DMA-BUF's first offset
and last length are not 0 and PAGE_SIZE respectively. I assume the
current xen-zcopy only supports page-aligned buffers whose size is a
multiple of PAGE_SIZE.

thanks,
DW

On Tue, Apr 24, 2018 at 02:59:39PM +0300, Oleksandr Andrushchenko wrote:
> On 04/24/2018 02:54 PM, Daniel Vetter wrote:
> >On Mon, Apr 23, 2018 at 03:10:35PM +0300, Oleksandr Andrushchenko wrote:
> >>On 04/23/2018 02:52 PM, Wei Liu wrote:
> >>>On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko wrote:
> >>>>>>      the gntdev.
> >>>>>>
> >>>>>>I think this is generic enough that it could be implemented by a
> >>>>>>device not tied to Xen. AFAICT the hyper_dma guys also wanted
> >>>>>>something similar to this.
> >>>>>You can't just wrap random userspace memory into a dma-buf. We've just had
> >>>>>this discussion with kvm/qemu folks, who proposed just that, and after a
> >>>>>bit of discussion they'll now try to have a driver which just wraps a
> >>>>>memfd into a dma-buf.
> >>>>So, we have to decide either we introduce a new driver
> >>>>(say, under drivers/xen/xen-dma-buf) or extend the existing
> >>>>gntdev/balloon to support dma-buf use-cases.
> >>>>
> >>>>Can anybody from Xen community express their preference here?
> >>>>
> >>>Oleksandr talked to me on IRC about this, he said a few IOCTLs need to
> >>>be added to either existing drivers or a new driver.
> >>>
> >>>I went through this thread twice and skimmed through the relevant
> >>>documents, but I couldn't see any obvious pros and cons for either
> >>>approach. So I don't really have an opinion on this.
> >>>
> >>>But, assuming if implemented in existing drivers, those IOCTLs need to
> >>>be added to different drivers, which means userspace program needs to
> >>>write more code and get more handles, it would be slightly better to
> >>>implement a new driver from that perspective.
> >>If gntdev/balloon extension is still considered:
> >>
> >>All the IOCTLs will be in gntdev driver (in current xen-zcopy terminology):
> I was lazy to change dumb to dma-buf, so put this notice ;)
> >>  - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
> >>  - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
> >>  - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
> >s/DUMB/DMA_BUF/ please. This is generic dma-buf, it has nothing to do with
> >the dumb scanout buffer support in the drm/gfx subsystem. This here can be
> >used for any zcopy sharing among guests (as long as your endpoints
> >understands dma-buf, which most relevant drivers do).
> Of course, please see above
> >-Daniel
> >
> >>Balloon driver extension, which is needed for contiguous/DMA
> >>buffers, will be to provide new *kernel API*, no UAPI is needed.
> >>
> >>>Wei.
> >>Thank you,
> >>Oleksandr
> >>_______________________________________________
> >>dri-devel mailing list
> >>dri-devel@lists.freedesktop.org
> >>https://lists.freedesktop.org/mailman/listinfo/dri-devel
> 
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-24 11:59                           ` Oleksandr Andrushchenko
@ 2018-04-24 20:35                             ` Dongwon Kim
  2018-04-24 20:35                               ` Dongwon Kim
  1 sibling, 0 replies; 131+ messages in thread
From: Dongwon Kim @ 2018-04-24 20:35 UTC (permalink / raw)
  To: Oleksandr Andrushchenko
  Cc: jgross, Artem Mygaiev, Wei Liu, airlied, Oleksandr_Andrushchenko,
	linux-kernel, dri-devel, Potrola, MateuszX, xen-devel,
	daniel.vetter, boris.ostrovsky, Roger Pau Monné

Had a meeting with Daniel and talked about bringing out generic
part of hyper-dmabuf to the userspace, which means we most likely
reuse IOCTLs defined in xen-zcopy for our use-case if we follow
his suggestion.

So assuming we use these IOCTLs as they are, several things I would
like you to double-check:

1. returning the gref as is to user space is still unsafe because it
is a constant, easy to guess, and any process that hijacks it can
easily exploit the buffer. So I am wondering if it's possible to keep
dmabuf-to-gref or gref-to-dmabuf in kernel space and add other layers
on top of those in the actual IOCTLs to add some safety. We introduced
a flink-like hyper_dmabuf_id that includes a random number, but many
say even that is still not safe.

2. maybe we could take the hypervisor-independent processing
(e.g. SGT<->page conversion) out of xen-zcopy and put it in a new
helper library.

3. please consider the case where the original DMA-BUF's first offset
and last length are not 0 and PAGE_SIZE respectively. I assume the
current xen-zcopy only supports page-aligned buffers whose size is a
multiple of PAGE_SIZE.

thanks,
DW

On Tue, Apr 24, 2018 at 02:59:39PM +0300, Oleksandr Andrushchenko wrote:
> On 04/24/2018 02:54 PM, Daniel Vetter wrote:
> >On Mon, Apr 23, 2018 at 03:10:35PM +0300, Oleksandr Andrushchenko wrote:
> >>On 04/23/2018 02:52 PM, Wei Liu wrote:
> >>>On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko wrote:
> >>>>>>      the gntdev.
> >>>>>>
> >>>>>>I think this is generic enough that it could be implemented by a
> >>>>>>device not tied to Xen. AFAICT the hyper_dma guys also wanted
> >>>>>>something similar to this.
> >>>>>You can't just wrap random userspace memory into a dma-buf. We've just had
> >>>>>this discussion with kvm/qemu folks, who proposed just that, and after a
> >>>>>bit of discussion they'll now try to have a driver which just wraps a
> >>>>>memfd into a dma-buf.
> >>>>So, we have to decide either we introduce a new driver
> >>>>(say, under drivers/xen/xen-dma-buf) or extend the existing
> >>>>gntdev/balloon to support dma-buf use-cases.
> >>>>
> >>>>Can anybody from Xen community express their preference here?
> >>>>
> >>>Oleksandr talked to me on IRC about this, he said a few IOCTLs need to
> >>>be added to either existing drivers or a new driver.
> >>>
> >>>I went through this thread twice and skimmed through the relevant
> >>>documents, but I couldn't see any obvious pros and cons for either
> >>>approach. So I don't really have an opinion on this.
> >>>
> >>>But, assuming if implemented in existing drivers, those IOCTLs need to
> >>>be added to different drivers, which means userspace program needs to
> >>>write more code and get more handles, it would be slightly better to
> >>>implement a new driver from that perspective.
> >>If gntdev/balloon extension is still considered:
> >>
> >>All the IOCTLs will be in gntdev driver (in current xen-zcopy terminology):
> I was lazy to change dumb to dma-buf, so put this notice ;)
> >>  - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
> >>  - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
> >>  - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
> >s/DUMB/DMA_BUF/ please. This is generic dma-buf, it has nothing to do with
> >the dumb scanout buffer support in the drm/gfx subsystem. This here can be
> >used for any zcopy sharing among guests (as long as your endpoints
> >understands dma-buf, which most relevant drivers do).
> Of course, please see above
> >-Daniel
> >
> >>Balloon driver extension, which is needed for contiguous/DMA
> >>buffers, will be to provide new *kernel API*, no UAPI is needed.
> >>
> >>>Wei.
> >>Thank you,
> >>Oleksandr
> >>_______________________________________________
> >>dri-devel mailing list
> >>dri-devel@lists.freedesktop.org
> >>https://lists.freedesktop.org/mailman/listinfo/dri-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-24 20:35                               ` Dongwon Kim
  (?)
@ 2018-04-25  6:07                               ` Oleksandr Andrushchenko
  2018-04-25  6:34                                 ` Daniel Vetter
  2018-04-25  6:34                                   ` Daniel Vetter
  -1 siblings, 2 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-25  6:07 UTC (permalink / raw)
  To: Dongwon Kim
  Cc: Wei Liu, jgross, Artem Mygaiev, konrad.wilk, airlied,
	linux-kernel, dri-devel, Potrola, MateuszX, daniel.vetter,
	xen-devel, boris.ostrovsky, Roger Pau Monné,
	Oleksandr_Andrushchenko

On 04/24/2018 11:35 PM, Dongwon Kim wrote:
> Had a meeting with Daniel and talked about bringing out generic
> part of hyper-dmabuf to the userspace, which means we most likely
> reuse IOCTLs defined in xen-zcopy for our use-case if we follow
> his suggestion.
I will still have a kernel-side API, so backends/frontends implemented
in the kernel can access that functionality as well.
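Something along these lines is what I have in mind for the kernel side -
just a sketch, the names and signatures below are assumptions and not
settled at all:

    /* Hypothetical kernel-side helpers, nothing is agreed yet. */
    #include <linux/dma-buf.h>
    #include <xen/grant_table.h>

    /* Wrap the grant references of domain @domid into a dma-buf that
     * an in-kernel backend/frontend can pass on to a real driver. */
    struct dma_buf *gntdev_dmabuf_from_refs(int domid, grant_ref_t *refs,
                                            unsigned int count);

    /* Grant @domid access to the pages backing @dmabuf, returning the
     * allocated grant references via @refs. */
    int gntdev_dmabuf_to_refs(struct dma_buf *dmabuf, int domid,
                              grant_ref_t *refs, unsigned int count);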
>
> So assuming we use these IOCTLs as they are,
> Several things I would like you to double-check..
>
> 1. returning gref as is to the user space is still unsafe because
> it is a constant, easy to guess and any process that hijacks it can easily
> exploit the buffer. So I am wondering if it's possible to keep dmabuf-to
> -gref or gref-to-dmabuf in kernel space and add other layers on top
> of those in actual IOCTLs to add some safety.. We introduced flink like
> hyper_dmabuf_id including random number but many says even that is still
> not safe.
Yes, it is generally unsafe. But even if we implemented the approach
you have in hyper-dmabuf, or something similar, what stops malicious
software from doing the same with the existing gntdev UAPI? There is no
need to brute-force a new UAPI if a simpler one already exists.
That being said, I'll put security aside at the first stage, but of
course we can start investigating ways to improve it (I assume you
already have use-cases where security issues must be considered, so you
can probably tell more about what has been investigated so far).
>
> 2. maybe we could take hypervisor-independent process (e.g. SGT<->page)
> out of xen-zcopy and put those in a new helper library.
I believe this can be done, but at the first stage I would go without
that helper library, so it becomes clear what can be moved into it later
(I know that you want to run ACRN as well, but can I run it on ARM? ;)

> 3. please consider the case where original DMA-BUF's first offset
> and last length are not 0 and PAGE_SIZE respectively. I assume current
> xen-zcopy only supports page-aligned buffer with PAGE_SIZE x n big.
Hm, what is the use-case for that?
> thanks,
> DW
Thank you,
Oleksandr
> On Tue, Apr 24, 2018 at 02:59:39PM +0300, Oleksandr Andrushchenko wrote:
>> On 04/24/2018 02:54 PM, Daniel Vetter wrote:
>>> On Mon, Apr 23, 2018 at 03:10:35PM +0300, Oleksandr Andrushchenko wrote:
>>>> On 04/23/2018 02:52 PM, Wei Liu wrote:
>>>>> On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko wrote:
>>>>>>>>       the gntdev.
>>>>>>>>
>>>>>>>> I think this is generic enough that it could be implemented by a
>>>>>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>>>>>> something similar to this.
>>>>>>> You can't just wrap random userspace memory into a dma-buf. We've just had
>>>>>>> this discussion with kvm/qemu folks, who proposed just that, and after a
>>>>>>> bit of discussion they'll now try to have a driver which just wraps a
>>>>>>> memfd into a dma-buf.
>>>>>> So, we have to decide either we introduce a new driver
>>>>>> (say, under drivers/xen/xen-dma-buf) or extend the existing
>>>>>> gntdev/balloon to support dma-buf use-cases.
>>>>>>
>>>>>> Can anybody from Xen community express their preference here?
>>>>>>
>>>>> Oleksandr talked to me on IRC about this, he said a few IOCTLs need to
>>>>> be added to either existing drivers or a new driver.
>>>>>
>>>>> I went through this thread twice and skimmed through the relevant
>>>>> documents, but I couldn't see any obvious pros and cons for either
>>>>> approach. So I don't really have an opinion on this.
>>>>>
>>>>> But, assuming if implemented in existing drivers, those IOCTLs need to
>>>>> be added to different drivers, which means userspace program needs to
>>>>> write more code and get more handles, it would be slightly better to
>>>>> implement a new driver from that perspective.
>>>> If gntdev/balloon extension is still considered:
>>>>
>>>> All the IOCTLs will be in gntdev driver (in current xen-zcopy terminology):
>> I was lazy to change dumb to dma-buf, so put this notice ;)
>>>>   - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>>>   - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>>>   - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
>>> s/DUMB/DMA_BUF/ please. This is generic dma-buf, it has nothing to do with
>>> the dumb scanout buffer support in the drm/gfx subsystem. This here can be
>>> used for any zcopy sharing among guests (as long as your endpoints
>>> understands dma-buf, which most relevant drivers do).
>> Of course, please see above
>>> -Daniel
>>>
>>>> Balloon driver extension, which is needed for contiguous/DMA
>>>> buffers, will be to provide new *kernel API*, no UAPI is needed.
>>>>
>>>>> Wei.
>>>> Thank you,
>>>> Oleksandr
>>>> _______________________________________________
>>>> dri-devel mailing list
>>>> dri-devel@lists.freedesktop.org
>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-24 20:35                               ` Dongwon Kim
  (?)
  (?)
@ 2018-04-25  6:07                               ` Oleksandr Andrushchenko
  -1 siblings, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-25  6:07 UTC (permalink / raw)
  To: Dongwon Kim
  Cc: jgross, Artem Mygaiev, Wei Liu, airlied, Oleksandr_Andrushchenko,
	linux-kernel, dri-devel, Potrola, MateuszX, xen-devel,
	daniel.vetter, boris.ostrovsky, Roger Pau Monné

On 04/24/2018 11:35 PM, Dongwon Kim wrote:
> Had a meeting with Daniel and talked about bringing out generic
> part of hyper-dmabuf to the userspace, which means we most likely
> reuse IOCTLs defined in xen-zcopy for our use-case if we follow
> his suggestion.
I will still have a kernel-side API, so backends/frontends implemented
in the kernel can access that functionality as well.
>
> So assuming we use these IOCTLs as they are,
> Several things I would like you to double-check..
>
> 1. returning gref as is to the user space is still unsafe because
> it is a constant, easy to guess and any process that hijacks it can easily
> exploit the buffer. So I am wondering if it's possible to keep dmabuf-to
> -gref or gref-to-dmabuf in kernel space and add other layers on top
> of those in actual IOCTLs to add some safety.. We introduced flink like
> hyper_dmabuf_id including random number but many says even that is still
> not safe.
Yes, it is generally unsafe. But even if we implemented the approach
you have in hyper-dmabuf, or something similar, what stops malicious
software from doing the same with the existing gntdev UAPI? There is no
need to brute-force a new UAPI if a simpler one already exists.
That being said, I'll put security aside at the first stage, but of
course we can start investigating ways to improve it (I assume you
already have use-cases where security issues must be considered, so you
can probably tell more about what has been investigated so far).
>
> 2. maybe we could take hypervisor-independent process (e.g. SGT<->page)
> out of xen-zcopy and put those in a new helper library.
I believe this can be done, but at the first stage I would go without
that helper library, so it becomes clear what can be moved into it later
(I know that you want to run ACRN as well, but can I run it on ARM? ;)

> 3. please consider the case where original DMA-BUF's first offset
> and last length are not 0 and PAGE_SIZE respectively. I assume current
> xen-zcopy only supports page-aligned buffer with PAGE_SIZE x n big.
Hm, what is the use-case for that?
> thanks,
> DW
Thank you,
Oleksandr
> On Tue, Apr 24, 2018 at 02:59:39PM +0300, Oleksandr Andrushchenko wrote:
>> On 04/24/2018 02:54 PM, Daniel Vetter wrote:
>>> On Mon, Apr 23, 2018 at 03:10:35PM +0300, Oleksandr Andrushchenko wrote:
>>>> On 04/23/2018 02:52 PM, Wei Liu wrote:
>>>>> On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko wrote:
>>>>>>>>       the gntdev.
>>>>>>>>
>>>>>>>> I think this is generic enough that it could be implemented by a
>>>>>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>>>>>> something similar to this.
>>>>>>> You can't just wrap random userspace memory into a dma-buf. We've just had
>>>>>>> this discussion with kvm/qemu folks, who proposed just that, and after a
>>>>>>> bit of discussion they'll now try to have a driver which just wraps a
>>>>>>> memfd into a dma-buf.
>>>>>> So, we have to decide either we introduce a new driver
>>>>>> (say, under drivers/xen/xen-dma-buf) or extend the existing
>>>>>> gntdev/balloon to support dma-buf use-cases.
>>>>>>
>>>>>> Can anybody from Xen community express their preference here?
>>>>>>
>>>>> Oleksandr talked to me on IRC about this, he said a few IOCTLs need to
>>>>> be added to either existing drivers or a new driver.
>>>>>
>>>>> I went through this thread twice and skimmed through the relevant
>>>>> documents, but I couldn't see any obvious pros and cons for either
>>>>> approach. So I don't really have an opinion on this.
>>>>>
>>>>> But, assuming if implemented in existing drivers, those IOCTLs need to
>>>>> be added to different drivers, which means userspace program needs to
>>>>> write more code and get more handles, it would be slightly better to
>>>>> implement a new driver from that perspective.
>>>> If gntdev/balloon extension is still considered:
>>>>
>>>> All the IOCTLs will be in gntdev driver (in current xen-zcopy terminology):
>> I was lazy to change dumb to dma-buf, so put this notice ;)
>>>>   - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>>>   - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>>>   - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
>>> s/DUMB/DMA_BUF/ please. This is generic dma-buf, it has nothing to do with
>>> the dumb scanout buffer support in the drm/gfx subsystem. This here can be
>>> used for any zcopy sharing among guests (as long as your endpoints
>>> understands dma-buf, which most relevant drivers do).
>> Of course, please see above
>>> -Daniel
>>>
>>>> Balloon driver extension, which is needed for contiguous/DMA
>>>> buffers, will be to provide new *kernel API*, no UAPI is needed.
>>>>
>>>>> Wei.
>>>> Thank you,
>>>> Oleksandr
>>>> _______________________________________________
>>>> dri-devel mailing list
>>>> dri-devel@lists.freedesktop.org
>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-24 20:35                               ` Dongwon Kim
                                                 ` (2 preceding siblings ...)
  (?)
@ 2018-04-25  6:12                               ` Juergen Gross
  2018-04-30 18:43                                 ` Dongwon Kim
  2018-04-30 18:43                                 ` Dongwon Kim
  -1 siblings, 2 replies; 131+ messages in thread
From: Juergen Gross @ 2018-04-25  6:12 UTC (permalink / raw)
  To: Dongwon Kim, Oleksandr Andrushchenko
  Cc: Wei Liu, Artem Mygaiev, konrad.wilk, airlied, linux-kernel,
	dri-devel, Potrola, MateuszX, daniel.vetter, xen-devel,
	boris.ostrovsky, Roger Pau Monné,
	Oleksandr_Andrushchenko

On 24/04/18 22:35, Dongwon Kim wrote:
> Had a meeting with Daniel and talked about bringing out generic
> part of hyper-dmabuf to the userspace, which means we most likely
> reuse IOCTLs defined in xen-zcopy for our use-case if we follow
> his suggestion.
> 
> So assuming we use these IOCTLs as they are,
> Several things I would like you to double-check..
> 
> 1. returning gref as is to the user space is still unsafe because
> it is a constant, easy to guess and any process that hijacks it can easily
> exploit the buffer. So I am wondering if it's possible to keep dmabuf-to
> -gref or gref-to-dmabuf in kernel space and add other layers on top
> of those in actual IOCTLs to add some safety.. We introduced flink like
> hyper_dmabuf_id including random number but many says even that is still
> not safe.

grefs are usable by root only. When you have root access in dom0 you can
do evil things to all VMs even without using grants. That is in no way
different to root being able to control all other processes on the
system.


Juergen

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-24 20:35                               ` Dongwon Kim
                                                 ` (3 preceding siblings ...)
  (?)
@ 2018-04-25  6:12                               ` Juergen Gross
  -1 siblings, 0 replies; 131+ messages in thread
From: Juergen Gross @ 2018-04-25  6:12 UTC (permalink / raw)
  To: Dongwon Kim, Oleksandr Andrushchenko
  Cc: Artem Mygaiev, Wei Liu, airlied, Oleksandr_Andrushchenko,
	linux-kernel, dri-devel, Potrola, MateuszX, xen-devel,
	daniel.vetter, boris.ostrovsky, Roger Pau Monné

On 24/04/18 22:35, Dongwon Kim wrote:
> Had a meeting with Daniel and talked about bringing out generic
> part of hyper-dmabuf to the userspace, which means we most likely
> reuse IOCTLs defined in xen-zcopy for our use-case if we follow
> his suggestion.
> 
> So assuming we use these IOCTLs as they are,
> Several things I would like you to double-check..
> 
> 1. returning gref as is to the user space is still unsafe because
> it is a constant, easy to guess and any process that hijacks it can easily
> exploit the buffer. So I am wondering if it's possible to keep dmabuf-to
> -gref or gref-to-dmabuf in kernel space and add other layers on top
> of those in actual IOCTLs to add some safety.. We introduced flink like
> hyper_dmabuf_id including random number but many says even that is still
> not safe.

grefs are usable by root only. When you have root access in dom0 you can
do evil things to all VMs even without using grants. That is in no way
different to root being able to control all other processes on the
system.


Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-25  6:07                               ` Oleksandr Andrushchenko
@ 2018-04-25  6:34                                   ` Daniel Vetter
  2018-04-25  6:34                                   ` Daniel Vetter
  1 sibling, 0 replies; 131+ messages in thread
From: Daniel Vetter @ 2018-04-25  6:34 UTC (permalink / raw)
  To: Oleksandr Andrushchenko
  Cc: Dongwon Kim, jgross, Artem Mygaiev, Wei Liu, konrad.wilk,
	airlied, Oleksandr_Andrushchenko, linux-kernel, dri-devel,
	Potrola, MateuszX, xen-devel, daniel.vetter, boris.ostrovsky,
	Roger Pau Monné

On Wed, Apr 25, 2018 at 09:07:07AM +0300, Oleksandr Andrushchenko wrote:
> On 04/24/2018 11:35 PM, Dongwon Kim wrote:
> > Had a meeting with Daniel and talked about bringing out generic
> > part of hyper-dmabuf to the userspace, which means we most likely
> > reuse IOCTLs defined in xen-zcopy for our use-case if we follow
> > his suggestion.
> I will still have kernel side API, so backends/frontends implemented
> in the kernel can access that functionality as well.
> > 
> > So assuming we use these IOCTLs as they are,
> > Several things I would like you to double-check..
> > 
> > 1. returning gref as is to the user space is still unsafe because
> > it is a constant, easy to guess and any process that hijacks it can easily
> > exploit the buffer. So I am wondering if it's possible to keep dmabuf-to
> > -gref or gref-to-dmabuf in kernel space and add other layers on top
> > of those in actual IOCTLs to add some safety.. We introduced flink like
> > hyper_dmabuf_id including random number but many says even that is still
> > not safe.
> Yes, it is generally unsafe. But even if we have implemented
> the approach you have in hyper-dmabuf or similar, what stops
> malicious software from doing the same with the existing gntdev UAPI?
> No need to brute force new UAPI if there is a simpler one.
> That being said, I'll put security aside at the first stage,
> but of course we can start investigating ways to improve
> (I assume you already have use-cases where security issues must
> be considered, so, probably you can tell more on what was investigated
> so far).

Maybe a bit more context here:

So in graphics we have this old flink approach for buffer sharing with
processes, and it's unsafe because way too easy to guess the buffer
handles. And anyone with access to the graphics driver can then import
that buffer object. We switched to file descriptor passing to make sure
only the intended recipient can import a buffer.
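For reference, on the drm side that looks roughly like this from
userspace (minimal libdrm sketch, error handling omitted; how the fd is
actually transported, e.g. SCM_RIGHTS over a unix socket, is up to the
applications):

    #include <stdint.h>
    #include <xf86drm.h>

    /* Exporter: turn a GEM handle into a dma-buf fd and hand it only
     * to the intended recipient. */
    static int export_bo(int drm_fd, uint32_t gem_handle)
    {
        int prime_fd = -1;

        drmPrimeHandleToFD(drm_fd, gem_handle, DRM_CLOEXEC, &prime_fd);
        return prime_fd;
    }

    /* Importer: only a process that actually received the fd can turn
     * it back into a GEM handle. */
    static uint32_t import_bo(int drm_fd, int prime_fd)
    {
        uint32_t gem_handle = 0;

        drmPrimeFDToHandle(drm_fd, prime_fd, &gem_handle);
        return gem_handle;
    }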

So at the vm->vm level it sounds like grefs are safe, because they're only
for a specific other guest (or sets of guests, I'm not sure). That means
security is only within the OS. For that you need to make sure that
unprivileged userspace simply can't ever access a gref. If that doesn't
work out, then I guess we should improve the xen gref stuff to have a more
secure cookie.
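
One way to enforce that (a sketch only, assuming the gref-exposing
ioctl ends up in gntdev; the function name here is made up):

    #include <linux/capability.h>
    #include <linux/errno.h>
    #include <linux/fs.h>

    /* Gate the ioctl that hands out raw grant references so that
     * unprivileged userspace never sees them. */
    static long gntdev_ioctl_dmabuf_to_refs(struct file *filp,
                                            void __user *arg)
    {
        if (!capable(CAP_SYS_ADMIN))
            return -EPERM;

        /* ... convert the dma-buf into grant references here ... */
        return 0;
    }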

> > 2. maybe we could take hypervisor-independent process (e.g. SGT<->page)
> > out of xen-zcopy and put those in a new helper library.
> I believe this can be done, but at the first stage I would go without
> that helper library, so it is clearly seen what can be moved to it later
> (I know that you want to run ACRN as well, but can I run it on ARM? ;)

There are already helpers for walking sgtables and for adding/enumerating
pages. I don't think we need more.
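The kind of thing I mean (a minimal sketch using only the existing
scatterlist helpers):

    #include <linux/scatterlist.h>
    #include <linux/mm.h>

    /* Walk an sg_table and count the pages backing it; the same loop
     * can collect the struct page pointers for granting them out. */
    static unsigned int count_backing_pages(struct sg_table *sgt)
    {
        struct scatterlist *sg;
        unsigned int n = 0;
        int i;

        for_each_sg(sgt->sgl, sg, sgt->nents, i) {
            struct page *page = sg_page(sg);

            (void)page; /* e.g. store it into a pages[] array here */
            n += PAGE_ALIGN(sg->length) >> PAGE_SHIFT;
        }
        return n;
    }

    /* And the reverse direction already exists as well:
     *   sg_alloc_table_from_pages(sgt, pages, nr_pages, 0,
     *                             (unsigned long)nr_pages << PAGE_SHIFT,
     *                             GFP_KERNEL);
     */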

> > 3. please consider the case where original DMA-BUF's first offset
> > and last length are not 0 and PAGE_SIZE respectively. I assume current
> > xen-zcopy only supports page-aligned buffer with PAGE_SIZE x n big.
> Hm, what is the use-case for that?

dma-buf is always page-aligned. That's a hard constraint of the linux
dma-buf interface spec.
-Daniel

> > thanks,
> > DW
> Thank you,
> Oleksandr
> > On Tue, Apr 24, 2018 at 02:59:39PM +0300, Oleksandr Andrushchenko wrote:
> > > On 04/24/2018 02:54 PM, Daniel Vetter wrote:
> > > > On Mon, Apr 23, 2018 at 03:10:35PM +0300, Oleksandr Andrushchenko wrote:
> > > > > On 04/23/2018 02:52 PM, Wei Liu wrote:
> > > > > > On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko wrote:
> > > > > > > > >       the gntdev.
> > > > > > > > > 
> > > > > > > > > I think this is generic enough that it could be implemented by a
> > > > > > > > > device not tied to Xen. AFAICT the hyper_dma guys also wanted
> > > > > > > > > something similar to this.
> > > > > > > > You can't just wrap random userspace memory into a dma-buf. We've just had
> > > > > > > > this discussion with kvm/qemu folks, who proposed just that, and after a
> > > > > > > > bit of discussion they'll now try to have a driver which just wraps a
> > > > > > > > memfd into a dma-buf.
> > > > > > > So, we have to decide either we introduce a new driver
> > > > > > > (say, under drivers/xen/xen-dma-buf) or extend the existing
> > > > > > > gntdev/balloon to support dma-buf use-cases.
> > > > > > > 
> > > > > > > Can anybody from Xen community express their preference here?
> > > > > > > 
> > > > > > Oleksandr talked to me on IRC about this, he said a few IOCTLs need to
> > > > > > be added to either existing drivers or a new driver.
> > > > > > 
> > > > > > I went through this thread twice and skimmed through the relevant
> > > > > > documents, but I couldn't see any obvious pros and cons for either
> > > > > > approach. So I don't really have an opinion on this.
> > > > > > 
> > > > > > But, assuming if implemented in existing drivers, those IOCTLs need to
> > > > > > be added to different drivers, which means userspace program needs to
> > > > > > write more code and get more handles, it would be slightly better to
> > > > > > implement a new driver from that perspective.
> > > > > If gntdev/balloon extension is still considered:
> > > > > 
> > > > > All the IOCTLs will be in gntdev driver (in current xen-zcopy terminology):
> > > I was lazy to change dumb to dma-buf, so put this notice ;)
> > > > >   - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
> > > > >   - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
> > > > >   - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
> > > > s/DUMB/DMA_BUF/ please. This is generic dma-buf, it has nothing to do with
> > > > the dumb scanout buffer support in the drm/gfx subsystem. This here can be
> > > > used for any zcopy sharing among guests (as long as your endpoints
> > > > understands dma-buf, which most relevant drivers do).
> > > Of course, please see above
> > > > -Daniel
> > > > 
> > > > > Balloon driver extension, which is needed for contiguous/DMA
> > > > > buffers, will be to provide new *kernel API*, no UAPI is needed.
> > > > > 
> > > > > > Wei.
> > > > > Thank you,
> > > > > Oleksandr
> > > > > _______________________________________________
> > > > > dri-devel mailing list
> > > > > dri-devel@lists.freedesktop.org
> > > > > https://lists.freedesktop.org/mailman/listinfo/dri-devel
> 
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
@ 2018-04-25  6:34                                   ` Daniel Vetter
  0 siblings, 0 replies; 131+ messages in thread
From: Daniel Vetter @ 2018-04-25  6:34 UTC (permalink / raw)
  To: Oleksandr Andrushchenko
  Cc: jgross, Artem Mygaiev, Wei Liu, Dongwon Kim, konrad.wilk,
	airlied, Oleksandr_Andrushchenko, linux-kernel, dri-devel,
	Potrola, MateuszX, daniel.vetter, xen-devel, boris.ostrovsky,
	Roger Pau Monné

On Wed, Apr 25, 2018 at 09:07:07AM +0300, Oleksandr Andrushchenko wrote:
> On 04/24/2018 11:35 PM, Dongwon Kim wrote:
> > Had a meeting with Daniel and talked about bringing out generic
> > part of hyper-dmabuf to the userspace, which means we most likely
> > reuse IOCTLs defined in xen-zcopy for our use-case if we follow
> > his suggestion.
> I will still have kernel side API, so backends/frontends implemented
> in the kernel can access that functionality as well.
> > 
> > So assuming we use these IOCTLs as they are,
> > Several things I would like you to double-check..
> > 
> > 1. returning gref as is to the user space is still unsafe because
> > it is a constant, easy to guess and any process that hijacks it can easily
> > exploit the buffer. So I am wondering if it's possible to keep dmabuf-to
> > -gref or gref-to-dmabuf in kernel space and add other layers on top
> > of those in actual IOCTLs to add some safety.. We introduced flink like
> > hyper_dmabuf_id including random number but many says even that is still
> > not safe.
> Yes, it is generally unsafe. But even if we have implemented
> the approach you have in hyper-dmabuf or similar, what stops
> malicious software from doing the same with the existing gntdev UAPI?
> No need to brute force new UAPI if there is a simpler one.
> That being said, I'll put security aside at the first stage,
> but of course we can start investigating ways to improve
> (I assume you already have use-cases where security issues must
> be considered, so, probably you can tell more on what was investigated
> so far).

Maybe a bit more context here:

So in graphics we have this old flink approach for buffer sharing with
processes, and it's unsafe because way too easy to guess the buffer
handles. And anyone with access to the graphics driver can then import
that buffer object. We switched to file descriptor passing to make sure
only the intended recipient can import a buffer.

So at the vm->vm level it sounds like grefs are safe, because they're only
for a specific other guest (or sets of guests, I'm not sure). That means
security is only within the OS. For that you need to make sure that
unprivileged userspace simply can't ever access a gref. If that doesn't
work out, then I guess we should improve the xen gref stuff to have a more
secure cookie.

> > 2. maybe we could take hypervisor-independent process (e.g. SGT<->page)
> > out of xen-zcopy and put those in a new helper library.
> I believe this can be done, but at the first stage I would go without
> that helper library, so it is clearly seen what can be moved to it later
> (I know that you want to run ACRN as well, but can I run it on ARM? ;)

There are already helpers for walking sgtables and for adding/enumerating
pages. I don't think we need more.

> > 3. please consider the case where original DMA-BUF's first offset
> > and last length are not 0 and PAGE_SIZE respectively. I assume current
> > xen-zcopy only supports page-aligned buffer with PAGE_SIZE x n big.
> Hm, what is the use-case for that?

dma-buf is always page-aligned. That's a hard constraint of the linux
dma-buf interface spec.
-Daniel

> > thanks,
> > DW
> Thank you,
> Oleksandr
> > On Tue, Apr 24, 2018 at 02:59:39PM +0300, Oleksandr Andrushchenko wrote:
> > > On 04/24/2018 02:54 PM, Daniel Vetter wrote:
> > > > On Mon, Apr 23, 2018 at 03:10:35PM +0300, Oleksandr Andrushchenko wrote:
> > > > > On 04/23/2018 02:52 PM, Wei Liu wrote:
> > > > > > On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko wrote:
> > > > > > > > >       the gntdev.
> > > > > > > > > 
> > > > > > > > > I think this is generic enough that it could be implemented by a
> > > > > > > > > device not tied to Xen. AFAICT the hyper_dma guys also wanted
> > > > > > > > > something similar to this.
> > > > > > > > You can't just wrap random userspace memory into a dma-buf. We've just had
> > > > > > > > this discussion with kvm/qemu folks, who proposed just that, and after a
> > > > > > > > bit of discussion they'll now try to have a driver which just wraps a
> > > > > > > > memfd into a dma-buf.
> > > > > > > So, we have to decide either we introduce a new driver
> > > > > > > (say, under drivers/xen/xen-dma-buf) or extend the existing
> > > > > > > gntdev/balloon to support dma-buf use-cases.
> > > > > > > 
> > > > > > > Can anybody from Xen community express their preference here?
> > > > > > > 
> > > > > > Oleksandr talked to me on IRC about this, he said a few IOCTLs need to
> > > > > > be added to either existing drivers or a new driver.
> > > > > > 
> > > > > > I went through this thread twice and skimmed through the relevant
> > > > > > documents, but I couldn't see any obvious pros and cons for either
> > > > > > approach. So I don't really have an opinion on this.
> > > > > > 
> > > > > > But, assuming if implemented in existing drivers, those IOCTLs need to
> > > > > > be added to different drivers, which means userspace program needs to
> > > > > > write more code and get more handles, it would be slightly better to
> > > > > > implement a new driver from that perspective.
> > > > > If gntdev/balloon extension is still considered:
> > > > > 
> > > > > All the IOCTLs will be in gntdev driver (in current xen-zcopy terminology):
> > > I was lazy to change dumb to dma-buf, so put this notice ;)
> > > > >   - DRM_ICOTL_XEN_ZCOPY_DUMB_FROM_REFS
> > > > >   - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
> > > > >   - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
> > > > s/DUMB/DMA_BUF/ please. This is generic dma-buf, it has nothing to do with
> > > > the dumb scanout buffer support in the drm/gfx subsystem. This here can be
> > > > used for any zcopy sharing among guests (as long as your endpoints
> > > > understands dma-buf, which most relevant drivers do).
> > > Of course, please see above
> > > > -Daniel
> > > > 
> > > > > Balloon driver extension, which is needed for contiguous/DMA
> > > > > buffers, will be to provide new *kernel API*, no UAPI is needed.
> > > > > 
> > > > > > Wei.
> > > > > Thank you,
> > > > > Oleksandr
> > > > > _______________________________________________
> > > > > dri-devel mailing list
> > > > > dri-devel@lists.freedesktop.org
> > > > > https://lists.freedesktop.org/mailman/listinfo/dri-devel
> 
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-25  6:34                                   ` Daniel Vetter
@ 2018-04-25 17:16                                     ` Dongwon Kim
  -1 siblings, 0 replies; 131+ messages in thread
From: Dongwon Kim @ 2018-04-25 17:16 UTC (permalink / raw)
  To: Oleksandr Andrushchenko, jgross, Artem Mygaiev, Wei Liu,
	konrad.wilk, airlied, Oleksandr_Andrushchenko, linux-kernel,
	dri-devel, Potrola, MateuszX, xen-devel, daniel.vetter,
	boris.ostrovsky, Roger Pau Monné

On Wed, Apr 25, 2018 at 08:34:55AM +0200, Daniel Vetter wrote:
> On Wed, Apr 25, 2018 at 09:07:07AM +0300, Oleksandr Andrushchenko wrote:
> > On 04/24/2018 11:35 PM, Dongwon Kim wrote:
> > > Had a meeting with Daniel and talked about bringing out generic
> > > part of hyper-dmabuf to the userspace, which means we most likely
> > > reuse IOCTLs defined in xen-zcopy for our use-case if we follow
> > > his suggestion.
> > I will still have kernel side API, so backends/frontends implemented
> > in the kernel can access that functionality as well.
> > > 
> > > So assuming we use these IOCTLs as they are,
> > > Several things I would like you to double-check..
> > > 
> > > 1. returning gref as is to the user space is still unsafe because
> > > it is a constant, easy to guess and any process that hijacks it can easily
> > > exploit the buffer. So I am wondering if it's possible to keep dmabuf-to
> > > -gref or gref-to-dmabuf in kernel space and add other layers on top
> > > of those in actual IOCTLs to add some safety.. We introduced flink like
> > > hyper_dmabuf_id including random number but many says even that is still
> > > not safe.
> > Yes, it is generally unsafe. But even if we have implemented
> > the approach you have in hyper-dmabuf or similar, what stops
> > malicious software from doing the same with the existing gntdev UAPI?
> > No need to brute force new UAPI if there is a simpler one.
> > That being said, I'll put security aside at the first stage,
> > but of course we can start investigating ways to improve
> > (I assume you already have use-cases where security issues must
> > be considered, so, probably you can tell more on what was investigated
> > so far).

Yeah, although we think we lowered the chance of guessing the right id
by adding a random number to it, the security hole is still there as long
as we use a constant id across VMs. We understood this from the beginning
but couldn't find a better way. So what we proposed is to make sure our
customers understand this and prepare a very secure way to handle this id
in userspace (mattrope however recently proposed a "hyper-pipe" through
which an FD-type id can be converted and exchanged safely, so we are
looking into this now).
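
To give an idea, the id is roughly shaped like this (a simplified sketch,
field names made up for illustration):

    #include <linux/random.h>
    #include <linux/types.h>

    /* A per-VM export counter with random bits appended: harder to guess
     * than a bare counter, but still a constant, guessable cookie. */
    struct xbuf_id {
            u32 count;
            u32 rng[3];
    };

    static struct xbuf_id xbuf_id_new(u32 count)
    {
            struct xbuf_id id = { .count = count };

            get_random_bytes(id.rng, sizeof(id.rng));
            return id;
    }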

And another approach we have proposed is to use event polling, which lets
the privileged userspace app in the importing guest know about a newly
exported DMABUF, so that it can retrieve it from the queue and then
redistribute it to other applications. This method is not very flexible;
however, it is one way to hide the ID from userspace completely.
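
On the importing side that would look something like the sketch below;
the device node, the event layout and redistribute_buffer() are all made
up for illustration:

    #include <fcntl.h>
    #include <poll.h>
    #include <stdint.h>
    #include <unistd.h>

    struct new_export_event { uint32_t dmabuf_id; };  /* made-up layout */

    void redistribute_buffer(uint32_t id);            /* app-specific */

    int main(void)
    {
            int fd = open("/dev/hyper_dmabuf", O_RDONLY); /* made-up node */
            struct pollfd pfd = { .fd = fd, .events = POLLIN };

            /* The privileged importer never takes ids from untrusted apps;
             * it sleeps until the driver queues a "new export" event and
             * then hands the buffer out itself. */
            while (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN)) {
                    struct new_export_event ev;

                    if (read(fd, &ev, sizeof(ev)) == sizeof(ev))
                            redistribute_buffer(ev.dmabuf_id);
            }
            return 0;
    }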

Anyway, yes, we can continue to investigate possible ways to make it
more secure.

> 
> Maybe a bit more context here:
> 
> So in graphics we have this old flink approach for buffer sharing with
> processes, and it's unsafe because way too easy to guess the buffer
> handles. And anyone with access to the graphics driver can then import
> that buffer object. We switched to file descriptor passing to make sure
> only the intended recipient can import a buffer.
> 
> So at the vm->vm level it sounds like grefs are safe, because they're only
> for a specific other guest (or sets of guests, not sure about). That means
> security is only within the OS. For that you need to make sure that
> unpriviledge userspace simply can't ever access a gref. If that doesn't
> work out, then I guess we should improve the xen gref stuff to have a more
> secure cookie.
> 
> > > 2. maybe we could take hypervisor-independent process (e.g. SGT<->page)
> > > out of xen-zcopy and put those in a new helper library.
> > I believe this can be done, but at the first stage I would go without
> > that helper library, so it is clearly seen what can be moved to it later
> > (I know that you want to run ACRN as well, but can I run it on ARM? ;)
> 
> There's already helpers for walking sgtables and adding pages/enumerating
> pages. I don't think we need more.

OK, where would those helpers be located? If we expect to use them with
other hypervisors' drivers, maybe it's better to place them in some
common area?

> 
> > > 3. please consider the case where original DMA-BUF's first offset
> > > and last length are not 0 and PAGE_SIZE respectively. I assume current
> > > xen-zcopy only supports page-aligned buffer with PAGE_SIZE x n big.
> > Hm, what is the use-case for that?

Just a general use-case. I was considering the (possibly corner) case
where sg->offset != 0 or sg->length != PAGE_SIZE. hyper-dmabuf sends this
information (first offset and last length) together with the references
for the pages. So I was wondering if we should do a similar thing in
zcopy, since your goal is now to cover general dma-buf use-cases (however,
danvet mentioned a hard constraint of dma-buf below, so if this can't
happen according to the spec, then we can ignore it).
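
Concretely, the two extra values we carry are just (assuming sgt is the
dma-buf's sg_table):

    #include <linux/scatterlist.h>

    /* The data may start part-way into the first page and stop short of a
     * full page in the last one; these two values describe that. */
    static void get_edges(struct sg_table *sgt,
                          unsigned int *first_offset, unsigned int *last_len)
    {
            struct scatterlist *last = sg_last(sgt->sgl, sgt->nents);

            *first_offset = sgt->sgl->offset;
            *last_len = last->length;
    }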

> 
> dma-buf is always page-aligned. That's a hard constraint of the linux
> dma-buf interface spec.
> -Daniel

Hmm, I am a little bit confused.
So does it mean dmabuf->size is always n*PAGE_SIZE? What if the sgt behind
the dma-buf has an offset other than 0 for the first sgl, or the length of
the last sgl is not PAGE_SIZE? Are you saying this case is not acceptable
for dma-buf?

> 
> > > thanks,
> > > DW
> > Thank you,
> > Oleksandr
> > > On Tue, Apr 24, 2018 at 02:59:39PM +0300, Oleksandr Andrushchenko wrote:
> > > > On 04/24/2018 02:54 PM, Daniel Vetter wrote:
> > > > > On Mon, Apr 23, 2018 at 03:10:35PM +0300, Oleksandr Andrushchenko wrote:
> > > > > > On 04/23/2018 02:52 PM, Wei Liu wrote:
> > > > > > > On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko wrote:
> > > > > > > > > >       the gntdev.
> > > > > > > > > > 
> > > > > > > > > > I think this is generic enough that it could be implemented by a
> > > > > > > > > > device not tied to Xen. AFAICT the hyper_dma guys also wanted
> > > > > > > > > > something similar to this.
> > > > > > > > > You can't just wrap random userspace memory into a dma-buf. We've just had
> > > > > > > > > this discussion with kvm/qemu folks, who proposed just that, and after a
> > > > > > > > > bit of discussion they'll now try to have a driver which just wraps a
> > > > > > > > > memfd into a dma-buf.
> > > > > > > > So, we have to decide either we introduce a new driver
> > > > > > > > (say, under drivers/xen/xen-dma-buf) or extend the existing
> > > > > > > > gntdev/balloon to support dma-buf use-cases.
> > > > > > > > 
> > > > > > > > Can anybody from Xen community express their preference here?
> > > > > > > > 
> > > > > > > Oleksandr talked to me on IRC about this, he said a few IOCTLs need to
> > > > > > > be added to either existing drivers or a new driver.
> > > > > > > 
> > > > > > > I went through this thread twice and skimmed through the relevant
> > > > > > > documents, but I couldn't see any obvious pros and cons for either
> > > > > > > approach. So I don't really have an opinion on this.
> > > > > > > 
> > > > > > > But, assuming if implemented in existing drivers, those IOCTLs need to
> > > > > > > be added to different drivers, which means userspace program needs to
> > > > > > > write more code and get more handles, it would be slightly better to
> > > > > > > implement a new driver from that perspective.
> > > > > > If gntdev/balloon extension is still considered:
> > > > > > 
> > > > > > All the IOCTLs will be in gntdev driver (in current xen-zcopy terminology):
> > > > I was lazy to change dumb to dma-buf, so put this notice ;)
> > > > > >   - DRM_ICOTL_XEN_ZCOPY_DUMB_FROM_REFS
> > > > > >   - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
> > > > > >   - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
> > > > > s/DUMB/DMA_BUF/ please. This is generic dma-buf, it has nothing to do with
> > > > > the dumb scanout buffer support in the drm/gfx subsystem. This here can be
> > > > > used for any zcopy sharing among guests (as long as your endpoints
> > > > > understands dma-buf, which most relevant drivers do).
> > > > Of course, please see above
> > > > > -Daniel
> > > > > 
> > > > > > Balloon driver extension, which is needed for contiguous/DMA
> > > > > > buffers, will be to provide new *kernel API*, no UAPI is needed.
> > > > > > 
> > > > > > > Wei.
> > > > > > Thank you,
> > > > > > Oleksandr
> > > > > > _______________________________________________
> > > > > > dri-devel mailing list
> > > > > > dri-devel@lists.freedesktop.org
> > > > > > https://lists.freedesktop.org/mailman/listinfo/dri-devel
> > 
> > _______________________________________________
> > dri-devel mailing list
> > dri-devel@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/dri-devel
> 
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-25 17:16                                     ` Dongwon Kim
@ 2018-04-27  6:54                                     ` Oleksandr Andrushchenko
  -1 siblings, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-04-27  6:54 UTC (permalink / raw)
  To: Dongwon Kim, jgross, Artem Mygaiev, Wei Liu, konrad.wilk,
	airlied, Oleksandr_Andrushchenko, linux-kernel, dri-devel,
	Potrola, MateuszX, xen-devel, daniel.vetter, boris.ostrovsky,
	Roger Pau Monné

On 04/25/2018 08:16 PM, Dongwon Kim wrote:
> On Wed, Apr 25, 2018 at 08:34:55AM +0200, Daniel Vetter wrote:
>> On Wed, Apr 25, 2018 at 09:07:07AM +0300, Oleksandr Andrushchenko wrote:
>>> On 04/24/2018 11:35 PM, Dongwon Kim wrote:
>>>> Had a meeting with Daniel and talked about bringing out the generic
>>>> part of hyper-dmabuf to userspace, which means we would most likely
>>>> reuse the IOCTLs defined in xen-zcopy for our use-case if we follow
>>>> his suggestion.
>>> I will still have a kernel-side API, so backends/frontends implemented
>>> in the kernel can access that functionality as well.
>>>> So, assuming we use these IOCTLs as they are,
>>>> there are several things I would like you to double-check:
>>>>
>>>> 1. Returning the gref as is to user space is still unsafe because
>>>> it is a constant that is easy to guess, and any process that hijacks it
>>>> can easily exploit the buffer. So I am wondering if it's possible to keep
>>>> the dmabuf-to-gref or gref-to-dmabuf conversion in kernel space and add
>>>> other layers on top of those in the actual IOCTLs to add some safety. We
>>>> introduced a flink-like hyper_dmabuf_id that includes a random number, but
>>>> many say even that is still not safe.
>>> Yes, it is generally unsafe. But even if we implemented the approach you
>>> have in hyper-dmabuf, or something similar, what stops malicious software
>>> from doing the same with the existing gntdev UAPI? There is no need to
>>> brute-force a new UAPI if there is a simpler one.
>>> That being said, I'll put security aside at the first stage, but of course
>>> we can start investigating ways to improve it (I assume you already have
>>> use-cases where security issues must be considered, so you can probably
>>> tell more about what has been investigated so far).
> Yeah, although we think we lowered the chance of guessing the right id
> by adding a random number to it, the security hole is still there as long
> as we use a constant id across VMs. We understood this from the beginning
> but couldn't find a better way. So what we proposed is to make sure our
> customers understand this and prepare a very secure way to handle this id
> in userspace (mattrope, however, recently proposed a "hyper-pipe" through
> which an FD-type id can be converted and exchanged safely. So we are looking
> into this now.)
>
> And another approach we have proposed is to use event polling, which lets
> the privileged user application in the importing guest know about a newly
> exported DMABUF, so that it can retrieve it from the queue and then
> redistribute it to other applications. This method is not very flexible;
> however, it is one way to hide the ID from userspace completely.
>
> Anyway, yes, we can continue to investigate possible ways to make it
> more secure.
Great, if you come up with something then you'll be able
to plumb it in.
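
(For illustration only -- a hypothetical sketch of such a random-keyed id, not
the actual hyper-dmabuf code; the struct and function names below are made up.
The point is that the id stays a bare integer token shared across domains, so
the random salt only raises the guessing cost and does not give fd-like,
kernel-enforced ownership.)

#include <linux/random.h>
#include <linux/types.h>

/* Hypothetical cross-VM buffer id: a plain value, not a capability. */
struct xvm_buf_id {
	u32 exporter_domid;	/* exporting domain (assumed field) */
	u32 serial;		/* per-export counter (assumed field) */
	u32 rng_key[2];		/* random salt: harder to guess, still guessable */
};

static void xvm_buf_id_init(struct xvm_buf_id *id, u32 domid, u32 serial)
{
	id->exporter_domid = domid;
	id->serial = serial;
	/* get_random_bytes() only makes brute forcing more expensive. */
	get_random_bytes(id->rng_key, sizeof(id->rng_key));
}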
>> Maybe a bit more context here:
>>
>> So in graphics we have this old flink approach for buffer sharing with
>> processes, and it's unsafe because it's way too easy to guess the buffer
>> handles. And anyone with access to the graphics driver can then import
>> that buffer object. We switched to file descriptor passing to make sure
>> only the intended recipient can import a buffer.
>>
>> So at the vm->vm level it sounds like grefs are safe, because they're only
>> valid for a specific other guest (or set of guests, I'm not sure about that).
>> That means security is only within the OS. For that you need to make sure
>> that unprivileged userspace simply can't ever access a gref. If that doesn't
>> work out, then I guess we should improve the Xen gref stuff to have a more
>> secure cookie.
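
(A minimal userspace sketch of the fd-passing flow described above, assuming
libdrm and a PRIME-capable driver; error handling and the SCM_RIGHTS socket
code are omitted. This is an illustration, not xen-zcopy code.)

#include <stdint.h>
#include <xf86drm.h>

/* Exporter: turn a GEM handle into a dma-buf fd, to be handed to the
 * intended recipient over a UNIX socket (SCM_RIGHTS). Unlike a flink
 * name, nobody who was not given the fd can import the buffer. */
static int export_prime_fd(int drm_fd, uint32_t gem_handle)
{
	int prime_fd = -1;

	if (drmPrimeHandleToFD(drm_fd, gem_handle, DRM_CLOEXEC, &prime_fd) < 0)
		return -1;
	return prime_fd;
}

/* Importer: only a process that actually received the fd can do this. */
static int import_prime_fd(int drm_fd, int prime_fd, uint32_t *gem_handle)
{
	return drmPrimeFDToHandle(drm_fd, prime_fd, gem_handle);
}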
>>
>>>> 2. Maybe we could take the hypervisor-independent processing (e.g. the
>>>> SGT<->page conversion) out of xen-zcopy and put it in a new helper library.
>>> I believe this can be done, but at the first stage I would go without
>>> that helper library, so that it is clear what can be moved to it later
>>> (I know that you want to run ACRN as well, but can I run it on ARM? ;)
>> There's already helpers for walking sgtables and adding pages/enumerating
>> pages. I don't think we need more.
> OK, where would those helpers be located? If we expect to use these
> with other hypervisor drivers, maybe it's better to place them in some
> common area?
I am not quite sure which of those helpers are really needed, if any.
Let's prototype the thing first and then see what can be
moved to a helper library and where it should live.
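
(For reference, the kind of existing helper meant above -- a minimal sketch
using the generic scatterlist API, nothing Xen- or xen-zcopy-specific.)

#include <linux/printk.h>
#include <linux/scatterlist.h>

/* Enumerate every page backing an sg_table, one PAGE_SIZE step at a time. */
static void count_sgt_pages(struct sg_table *sgt)
{
	struct sg_page_iter piter;
	unsigned int npages = 0;

	for_each_sg_page(sgt->sgl, &piter, sgt->orig_nents, 0) {
		struct page *page = sg_page_iter_page(&piter);

		(void)page;	/* e.g. collect into a pages[] array here */
		npages++;
	}

	pr_info("sg_table is backed by %u pages\n", npages);
}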
>>>> 3. Please consider the case where the original DMA-BUF's first offset
>>>> and last length are not 0 and PAGE_SIZE respectively. I assume the current
>>>> xen-zcopy only supports page-aligned buffers that are PAGE_SIZE x n big.
>>> Hm, what is the use-case for that?
> Just the general use-case. I was just considering the case (it might be a
> corner case) where sg->offset != 0 or sg->length != PAGE_SIZE. Hyper-dmabuf
> sends this information (first offset and last length) together with the
> references for the pages. So I was wondering if we should do a similar thing
> in zcopy, since your goal is now to cover general dma-buf use-cases (however,
> danvet mentioned the hard constraint of dma-buf below, so if this can't
> happen according to the spec, then we can ignore it).
I won't be considering this use-case during prototyping, as
it doesn't seem to have any *real* ground underneath it.
>> dma-buf is always page-aligned. That's a hard constraint of the linux
>> dma-buf interface spec.
>> -Daniel
> Hmm.. I am a little bit confused..
> So does it mean dmabuf->size is always n*PAGE_SIZE? What if the sgt behind the
> dmabuf has an offset other than 0 for the first sgl, or the length of the last
> sgl is not PAGE_SIZE? Are you saying this case is not acceptable for dmabuf?
IMO, yes, see above
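
(To make the page-alignment assumption concrete -- a minimal sketch of building
an sg_table from whole pages with the generic API, so the first offset is 0 and
the total size is n * PAGE_SIZE; names and context are illustrative only.)

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>

/* Cover n_pages complete pages: offset 0, size n_pages * PAGE_SIZE. */
static int build_page_aligned_sgt(struct sg_table *sgt, struct page **pages,
				  unsigned int n_pages)
{
	return sg_alloc_table_from_pages(sgt, pages, n_pages, 0,
					 (unsigned long)n_pages << PAGE_SHIFT,
					 GFP_KERNEL);
}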
>>>> thanks,
>>>> DW
>>> Thank you,
>>> Oleksandr
>>>> On Tue, Apr 24, 2018 at 02:59:39PM +0300, Oleksandr Andrushchenko wrote:
>>>>> On 04/24/2018 02:54 PM, Daniel Vetter wrote:
>>>>>> On Mon, Apr 23, 2018 at 03:10:35PM +0300, Oleksandr Andrushchenko wrote:
>>>>>>> On 04/23/2018 02:52 PM, Wei Liu wrote:
>>>>>>>> On Fri, Apr 20, 2018 at 02:25:20PM +0300, Oleksandr Andrushchenko wrote:
>>>>>>>>>>>        the gntdev.
>>>>>>>>>>>
>>>>>>>>>>> I think this is generic enough that it could be implemented by a
>>>>>>>>>>> device not tied to Xen. AFAICT the hyper_dma guys also wanted
>>>>>>>>>>> something similar to this.
>>>>>>>>>> You can't just wrap random userspace memory into a dma-buf. We've just had
>>>>>>>>>> this discussion with kvm/qemu folks, who proposed just that, and after a
>>>>>>>>>> bit of discussion they'll now try to have a driver which just wraps a
>>>>>>>>>> memfd into a dma-buf.
>>>>>>>>> So, we have to decide whether we introduce a new driver
>>>>>>>>> (say, under drivers/xen/xen-dma-buf) or extend the existing
>>>>>>>>> gntdev/balloon to support dma-buf use-cases.
>>>>>>>>>
>>>>>>>>> Can anybody from Xen community express their preference here?
>>>>>>>>>
>>>>>>>> Oleksandr talked to me on IRC about this; he said a few IOCTLs need to
>>>>>>>> be added to either existing drivers or a new driver.
>>>>>>>>
>>>>>>>> I went through this thread twice and skimmed through the relevant
>>>>>>>> documents, but I couldn't see any obvious pros and cons for either
>>>>>>>> approach. So I don't really have an opinion on this.
>>>>>>>>
>>>>>>>> But, assuming this is implemented in the existing drivers, those IOCTLs
>>>>>>>> would need to be added to different drivers, which means a userspace
>>>>>>>> program would need to write more code and get more handles; from that
>>>>>>>> perspective it would be slightly better to implement a new driver.
>>>>>>> If gntdev/balloon extension is still considered:
>>>>>>>
>>>>>>> All the IOCTLs will be in gntdev driver (in current xen-zcopy terminology):
>>>>> I was too lazy to change dumb to dma-buf, hence this notice ;)
>>>>>>>    - DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS
>>>>>>>    - DRM_IOCTL_XEN_ZCOPY_DUMB_TO_REFS
>>>>>>>    - DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE
>>>>>> s/DUMB/DMA_BUF/ please. This is generic dma-buf, it has nothing to do with
>>>>>> the dumb scanout buffer support in the drm/gfx subsystem. This here can be
>>>>>> used for any zcopy sharing among guests (as long as your endpoints
>>>>>> understand dma-buf, which most relevant drivers do).
>>>>> Of course, please see above
>>>>>> -Daniel
>>>>>>
>>>>>>> The balloon driver extension, which is needed for contiguous/DMA
>>>>>>> buffers, will only provide a new *kernel API*; no UAPI is needed.
>>>>>>>
>>>>>>>> Wei.
>>>>>>> Thank you,
>>>>>>> Oleksandr
>> -- 
>> Daniel Vetter
>> Software Engineer, Intel Corporation
>> http://blog.ffwll.ch



^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-25  6:12                               ` [Xen-devel] " Juergen Gross
@ 2018-04-30 18:43                                 ` Dongwon Kim
  2018-04-30 18:43                                 ` Dongwon Kim
  1 sibling, 0 replies; 131+ messages in thread
From: Dongwon Kim @ 2018-04-30 18:43 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Oleksandr Andrushchenko, Wei Liu, Artem Mygaiev, konrad.wilk,
	airlied, linux-kernel, dri-devel, Potrola, MateuszX,
	daniel.vetter, xen-devel, boris.ostrovsky, Roger Pau Monné,
	Oleksandr_Andrushchenko

On Wed, Apr 25, 2018 at 08:12:08AM +0200, Juergen Gross wrote:
> On 24/04/18 22:35, Dongwon Kim wrote:
> > Had a meeting with Daniel and talked about bringing out the generic
> > part of hyper-dmabuf to userspace, which means we would most likely
> > reuse the IOCTLs defined in xen-zcopy for our use-case if we follow
> > his suggestion.
> > 
> > So, assuming we use these IOCTLs as they are,
> > there are several things I would like you to double-check:
> > 
> > 1. Returning the gref as is to user space is still unsafe because
> > it is a constant that is easy to guess, and any process that hijacks it
> > can easily exploit the buffer. So I am wondering if it's possible to keep
> > the dmabuf-to-gref or gref-to-dmabuf conversion in kernel space and add
> > other layers on top of those in the actual IOCTLs to add some safety. We
> > introduced a flink-like hyper_dmabuf_id that includes a random number, but
> > many say even that is still not safe.
> 
> grefs are usable by root only. When you have root access in dom0 you can
> do evil things to all VMs even without using grants. That is in no way
> different to root being able to control all other processes on the
> system.

I honestly didn't know about this. I believed kernel code could simply map those
pages. However, out of curiosity, how is non-root usage of grefs prevented in the
current design? Is there a privilege check in the grant-table driver, or are the
hypercalls needed for this page mapping only enabled for root at the hypervisor
level?

And this is pretty critical information for any use-case that relies on the grant
table. Is there any place (doc/website) where this is specified/explained?
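
(For context, the usual userspace path for consuming a gref goes through
/dev/xen/gntdev, so it is gated purely by the permissions on that device node,
which is root-only by default. A minimal sketch, assuming the kernel's
xen/gntdev.h UAPI header is installed; error handling trimmed.)

#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <xen/gntdev.h>

static void *map_one_gref(uint32_t remote_domid, uint32_t gref, size_t page_size)
{
	struct ioctl_gntdev_map_grant_ref op;
	void *addr = MAP_FAILED;
	int fd;

	fd = open("/dev/xen/gntdev", O_RDWR | O_CLOEXEC);
	if (fd < 0)
		return NULL;	/* EACCES here is the "root only" gate */

	memset(&op, 0, sizeof(op));
	op.count = 1;
	op.refs[0].domid = remote_domid;
	op.refs[0].ref = gref;

	if (ioctl(fd, IOCTL_GNTDEV_MAP_GRANT_REF, &op) == 0)
		addr = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
			    MAP_SHARED, fd, op.index);

	close(fd);	/* the vma keeps its own reference to the device file */
	return addr == MAP_FAILED ? NULL : addr;
}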

Thanks,
DW


> 
> 
> Juergen

^ permalink raw reply	[flat|nested] 131+ messages in thread

* Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
  2018-04-25  6:12                               ` [Xen-devel] " Juergen Gross
  2018-04-30 18:43                                 ` Dongwon Kim
@ 2018-04-30 18:43                                 ` Dongwon Kim
  1 sibling, 0 replies; 131+ messages in thread
From: Dongwon Kim @ 2018-04-30 18:43 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Artem Mygaiev, Wei Liu, Oleksandr Andrushchenko,
	Oleksandr_Andrushchenko, linux-kernel, dri-devel, airlied,
	Potrola, MateuszX, xen-devel, daniel.vetter, boris.ostrovsky,
	Roger Pau Monné

On Wed, Apr 25, 2018 at 08:12:08AM +0200, Juergen Gross wrote:
> On 24/04/18 22:35, Dongwon Kim wrote:
> > Had a meeting with Daniel and talked about bringing out the generic
> > part of hyper-dmabuf to userspace, which means we would most likely
> > reuse the IOCTLs defined in xen-zcopy for our use-case if we follow
> > his suggestion.
> > 
> > So, assuming we use these IOCTLs as they are,
> > there are several things I would like you to double-check:
> > 
> > 1. Returning the gref as is to user space is still unsafe because
> > it is a constant that is easy to guess, and any process that hijacks it
> > can easily exploit the buffer. So I am wondering if it's possible to keep
> > the dmabuf-to-gref or gref-to-dmabuf conversion in kernel space and add
> > other layers on top of those in the actual IOCTLs to add some safety. We
> > introduced a flink-like hyper_dmabuf_id that includes a random number, but
> > many say even that is still not safe.
> 
> grefs are usable by root only. When you have root access in dom0 you can
> do evil things to all VMs even without using grants. That is in no way
> different to root being able to control all other processes on the
> system.

I honestly didn't know about this. I believed kernel code could simply map those
pages. However, out of curiosity, how is non-root usage of grefs prevented in the
current design? Is there a privilege check in the grant-table driver, or are the
hypercalls needed for this page mapping only enabled for root at the hypervisor
level?

And this is pretty critical information for any use-case that relies on the grant
table. Is there any place (doc/website) where this is specified/explained?

Thanks,
DW


> 
> 
> Juergen


^ permalink raw reply	[flat|nested] 131+ messages in thread

* [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
@ 2018-03-29 13:19 Oleksandr Andrushchenko
  0 siblings, 0 replies; 131+ messages in thread
From: Oleksandr Andrushchenko @ 2018-03-29 13:19 UTC (permalink / raw)
  To: xen-devel, linux-kernel, dri-devel, airlied, daniel.vetter,
	seanpaul, gustavo, jgross, boris.ostrovsky, konrad.wilk
  Cc: andr2000, Oleksandr Andrushchenko

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Hello!

When using Xen PV DRM frontend driver then on backend side one will need
to do copying of display buffers' contents (filled by the
frontend's user-space) into buffers allocated at the backend side.
Taking into account the size of display buffers and frames per seconds
it may result in unneeded huge data bus occupation and performance loss.

This helper driver allows implementing zero-copying use-cases
when using Xen para-virtualized frontend display driver by
implementing a DRM/KMS helper driver running on backend's side.
It utilizes PRIME buffers API to share frontend's buffers with
physical device drivers on backend's side:

 - a dumb buffer created on backend's side can be shared
   with the Xen PV frontend driver, so it directly writes
   into backend's domain memory (into the buffer exported from
   DRM/KMS driver of a physical display device)
 - a dumb buffer allocated by the frontend can be imported
   into physical device DRM/KMS driver, thus allowing to
   achieve no copying as well

For that reason number of IOCTLs are introduced:
 -  DRM_XEN_ZCOPY_DUMB_FROM_REFS
    This will create a DRM dumb buffer from grant references provided
    by the frontend
 - DRM_XEN_ZCOPY_DUMB_TO_REFS
   This will grant references to a dumb/display buffer's memory provided
   by the backend
 - DRM_XEN_ZCOPY_DUMB_WAIT_FREE
   This will block until the dumb buffer with the wait handle provided
   be freed

With this helper driver I was able to drop CPU usage from 17% to 3%
on Renesas R-Car M3 board.

This was tested with Renesas' Wayland-KMS and backend running as DRM master.

Thank you,
Oleksandr

Oleksandr Andrushchenko (1):
  drm/xen-zcopy: Add Xen zero-copy helper DRM driver

 Documentation/gpu/drivers.rst               |   1 +
 Documentation/gpu/xen-zcopy.rst             |  32 +
 drivers/gpu/drm/xen/Kconfig                 |  25 +
 drivers/gpu/drm/xen/Makefile                |   5 +
 drivers/gpu/drm/xen/xen_drm_zcopy.c         | 880 ++++++++++++++++++++++++++++
 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c | 154 +++++
 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h |  38 ++
 include/uapi/drm/xen_zcopy_drm.h            | 129 ++++
 8 files changed, 1264 insertions(+)
 create mode 100644 Documentation/gpu/xen-zcopy.rst
 create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy.c
 create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c
 create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h
 create mode 100644 include/uapi/drm/xen_zcopy_drm.h

-- 
2.16.2



^ permalink raw reply	[flat|nested] 131+ messages in thread

end of thread, other threads:[~2018-04-30 18:44 UTC | newest]

Thread overview: 131+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-03-29 13:19 [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver Oleksandr Andrushchenko
2018-03-29 13:19 ` Oleksandr Andrushchenko
2018-03-29 13:19 ` [PATCH 1/1] " Oleksandr Andrushchenko
2018-03-29 13:19 ` Oleksandr Andrushchenko
2018-03-29 13:19   ` Oleksandr Andrushchenko
2018-04-03  9:47   ` Daniel Vetter
2018-04-03  9:47   ` Daniel Vetter
2018-04-03  9:47     ` Daniel Vetter
2018-04-06 11:25     ` Oleksandr Andrushchenko
2018-04-06 11:25     ` Oleksandr Andrushchenko
2018-04-06 11:25       ` Oleksandr Andrushchenko
2018-04-09  8:27       ` Daniel Vetter
2018-04-09  8:27         ` Daniel Vetter
2018-04-09  8:27       ` Daniel Vetter
2018-04-16 14:33 ` [PATCH 0/1] " Oleksandr Andrushchenko
2018-04-16 19:29   ` Dongwon Kim
2018-04-16 19:29   ` Dongwon Kim
2018-04-16 19:29     ` Dongwon Kim
2018-04-17  7:59     ` Daniel Vetter
2018-04-17  7:59     ` Daniel Vetter
2018-04-17  7:59       ` Daniel Vetter
2018-04-17  8:19       ` Oleksandr Andrushchenko
2018-04-17  8:19       ` Oleksandr Andrushchenko
2018-04-17  8:19         ` Oleksandr Andrushchenko
2018-04-17 20:57       ` Dongwon Kim
2018-04-17 20:57       ` Dongwon Kim
2018-04-18  6:38         ` Oleksandr Andrushchenko
2018-04-18  7:35           ` Roger Pau Monné
2018-04-18  7:35           ` [Xen-devel] " Roger Pau Monné
2018-04-18  7:35             ` Roger Pau Monné
2018-04-18  8:01             ` Oleksandr Andrushchenko
2018-04-18  8:01             ` [Xen-devel] " Oleksandr Andrushchenko
2018-04-18  8:01               ` Oleksandr Andrushchenko
2018-04-18 10:10               ` Roger Pau Monné
2018-04-18 10:10               ` [Xen-devel] " Roger Pau Monné
2018-04-18 10:10                 ` Roger Pau Monné
2018-04-18 10:18                 ` Paul Durrant
2018-04-18 10:21                   ` Oleksandr Andrushchenko
2018-04-18 10:21                     ` Oleksandr Andrushchenko
2018-04-18 10:23                     ` Paul Durrant
2018-04-18 10:31                       ` Oleksandr Andrushchenko
2018-04-18 10:31                       ` Oleksandr Andrushchenko
2018-04-18 10:23                     ` Paul Durrant
2018-04-18 10:21                   ` Oleksandr Andrushchenko
2018-04-18 10:39                   ` [Xen-devel] " Oleksandr Andrushchenko
2018-04-18 10:39                     ` Oleksandr Andrushchenko
2018-04-18 10:55                     ` Roger Pau Monné
2018-04-18 10:55                     ` [Xen-devel] " Roger Pau Monné
2018-04-18 12:42                       ` Oleksandr Andrushchenko
2018-04-18 16:01                         ` Dongwon Kim
2018-04-18 16:01                           ` Dongwon Kim
2018-04-19  8:19                           ` Oleksandr Andrushchenko
2018-04-19  8:19                           ` [Xen-devel] " Oleksandr Andrushchenko
2018-04-19  8:19                             ` Oleksandr Andrushchenko
2018-04-18 16:01                         ` Dongwon Kim
2018-04-18 12:42                       ` Oleksandr Andrushchenko
2018-04-20  7:22                       ` Daniel Vetter
2018-04-20  7:22                       ` [Xen-devel] " Daniel Vetter
2018-04-20  7:22                         ` Daniel Vetter
2018-04-18 10:39                   ` Oleksandr Andrushchenko
2018-04-18 10:18                 ` Paul Durrant
2018-04-20  7:19                 ` [Xen-devel] " Daniel Vetter
2018-04-20 11:25                   ` Oleksandr Andrushchenko
2018-04-20 11:25                   ` [Xen-devel] " Oleksandr Andrushchenko
2018-04-20 11:25                     ` Oleksandr Andrushchenko
2018-04-23 11:52                     ` Wei Liu
2018-04-23 12:10                       ` Oleksandr Andrushchenko
2018-04-23 12:10                         ` Oleksandr Andrushchenko
2018-04-23 22:41                         ` Boris Ostrovsky
2018-04-23 22:41                         ` [Xen-devel] " Boris Ostrovsky
2018-04-24  5:43                           ` Oleksandr Andrushchenko
2018-04-24  5:43                           ` [Xen-devel] " Oleksandr Andrushchenko
2018-04-24  5:43                             ` Oleksandr Andrushchenko
2018-04-24  7:51                             ` Juergen Gross
2018-04-24  8:07                               ` Oleksandr Andrushchenko
2018-04-24  8:07                               ` [Xen-devel] " Oleksandr Andrushchenko
2018-04-24  8:07                                 ` Oleksandr Andrushchenko
2018-04-24  8:40                                 ` Juergen Gross
2018-04-24  9:03                                   ` Oleksandr Andrushchenko
2018-04-24  9:03                                     ` Oleksandr Andrushchenko
2018-04-24  9:08                                     ` Juergen Gross
2018-04-24  9:13                                       ` Oleksandr Andrushchenko
2018-04-24  9:13                                       ` Oleksandr Andrushchenko
2018-04-24 10:01                                       ` [Xen-devel] " Wei Liu
2018-04-24 10:14                                         ` Oleksandr Andrushchenko
2018-04-24 10:14                                         ` [Xen-devel] " Oleksandr Andrushchenko
2018-04-24 10:24                                           ` Juergen Gross
2018-04-24 10:24                                           ` [Xen-devel] " Juergen Gross
2018-04-24 10:01                                       ` Wei Liu
2018-04-24  9:08                                     ` Juergen Gross
2018-04-24  9:03                                   ` Oleksandr Andrushchenko
2018-04-24  8:40                                 ` Juergen Gross
2018-04-24  7:51                             ` Juergen Gross
2018-04-24 11:54                         ` Daniel Vetter
2018-04-24 11:54                         ` [Xen-devel] " Daniel Vetter
2018-04-24 11:54                           ` Daniel Vetter
2018-04-24 11:59                           ` Oleksandr Andrushchenko
2018-04-24 20:35                             ` Dongwon Kim
2018-04-24 20:35                             ` [Xen-devel] " Dongwon Kim
2018-04-24 20:35                               ` Dongwon Kim
2018-04-25  6:07                               ` Oleksandr Andrushchenko
2018-04-25  6:34                                 ` Daniel Vetter
2018-04-25  6:34                                 ` [Xen-devel] " Daniel Vetter
2018-04-25  6:34                                   ` Daniel Vetter
2018-04-25 17:16                                   ` Dongwon Kim
2018-04-25 17:16                                     ` Dongwon Kim
2018-04-27  6:54                                     ` Oleksandr Andrushchenko
2018-04-27  6:54                                     ` [Xen-devel] " Oleksandr Andrushchenko
2018-04-27  6:54                                       ` Oleksandr Andrushchenko
2018-04-25 17:16                                   ` Dongwon Kim
2018-04-25  6:07                               ` Oleksandr Andrushchenko
2018-04-25  6:12                               ` [Xen-devel] " Juergen Gross
2018-04-30 18:43                                 ` Dongwon Kim
2018-04-30 18:43                                 ` Dongwon Kim
2018-04-25  6:12                               ` Juergen Gross
2018-04-24 11:59                           ` Oleksandr Andrushchenko
2018-04-23 12:10                       ` Oleksandr Andrushchenko
2018-04-23 11:52                     ` Wei Liu
2018-04-20  7:19                 ` Daniel Vetter
2018-04-18 17:01           ` Dongwon Kim
2018-04-18 17:01           ` Dongwon Kim
2018-04-18 17:01             ` Dongwon Kim
2018-04-19  8:14             ` Oleksandr Andrushchenko
2018-04-19  8:14             ` Oleksandr Andrushchenko
2018-04-19  8:14               ` Oleksandr Andrushchenko
2018-04-19 17:55               ` Dongwon Kim
2018-04-19 17:55               ` Dongwon Kim
2018-04-19 17:55                 ` Dongwon Kim
2018-04-18  6:38         ` Oleksandr Andrushchenko
2018-04-16 14:33 ` Oleksandr Andrushchenko
  -- strict thread matches above, loose matches on Subject: below --
2018-03-29 13:19 Oleksandr Andrushchenko
