* [PATCH 0/3] virtio-gpu api: memory and resource management
@ 2019-04-10 11:42 Gerd Hoffmann
  2019-04-10 11:42   ` Gerd Hoffmann
                   ` (5 more replies)
  0 siblings, 6 replies; 32+ messages in thread
From: Gerd Hoffmann @ 2019-04-10 11:42 UTC (permalink / raw)
  To: dri-devel, virtio
  Cc: Tomeu Vizoso, Michael S. Tsirkin, Gurchetan Singh, Gerd Hoffmann,
	David Airlie

  Hi folks,

Revisited the old VIRTIO_GPU_CMD_RESOURCE_CREATE_V2 approach and figured
this is maybe not the best way to go forward.  I think it is much more
useful to separate memory and resource management (see patch #2).
VIRTIO_GPU_CMD_RESOURCE_CREATE_V2 is still there (see patch #3).

For now I have the virtio protocol changes only and am looking for comments
on the idea.  Next I will try to add support to the virtio-gpu.ko driver and
qemu.  Support in virglrenderer and the virtio-gpu ioctl interface is
likewise still on the TODO list.

cheers,
  Gerd

Gerd Hoffmann (3):
  virtio-gpu api: comment feature flags
  virtio-gpu api: VIRTIO_GPU_F_MEMORY
  virtio-gpu api: VIRTIO_GPU_F_RESSOURCE_V2

 include/uapi/linux/virtio_gpu.h | 81 ++++++++++++++++++++++++++++++++-
 1 file changed, 79 insertions(+), 2 deletions(-)

-- 
2.18.1

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH 1/3] virtio-gpu api: comment feature flags
  2019-04-10 11:42 [PATCH 0/3] virtio-gpu api: memory and resource management Gerd Hoffmann
@ 2019-04-10 11:42   ` Gerd Hoffmann
  2019-04-10 11:42 ` Gerd Hoffmann
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 32+ messages in thread
From: Gerd Hoffmann @ 2019-04-10 11:42 UTC (permalink / raw)
  To: dri-devel, virtio
  Cc: David Airlie, Michael S. Tsirkin, Marc-André Lureau,
	Tomeu Vizoso, Gurchetan Singh, Gerd Hoffmann, Jason Wang,
	David Airlie, open list:VIRTIO CORE, NET AND BLOCK DRIVERS,
	open list

Add comments to the existing feature flags,
documenting which commands belong to them.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 include/uapi/linux/virtio_gpu.h | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/include/uapi/linux/virtio_gpu.h b/include/uapi/linux/virtio_gpu.h
index 8e88eba1fa7a..0c85914d9369 100644
--- a/include/uapi/linux/virtio_gpu.h
+++ b/include/uapi/linux/virtio_gpu.h
@@ -40,8 +40,16 @@
 
 #include <linux/types.h>
 
-#define VIRTIO_GPU_F_VIRGL 0
-#define VIRTIO_GPU_F_EDID  1
+/*
+ * VIRTIO_GPU_CMD_CTX_*
+ * VIRTIO_GPU_CMD_*_3D
+ */
+#define VIRTIO_GPU_F_VIRGL               0
+
+/*
+ * VIRTIO_GPU_CMD_GET_EDID
+ */
+#define VIRTIO_GPU_F_EDID                1
 
 enum virtio_gpu_ctrl_type {
 	VIRTIO_GPU_UNDEFINED = 0,
-- 
2.18.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread
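
A hypothetical driver-side sketch of how these flags gate command usage,
assuming the standard virtio feature-negotiation helper (this is an
editorial illustration, not part of the patch):

    if (virtio_has_feature(vdev, VIRTIO_GPU_F_EDID)) {
            /* device offered the flag, VIRTIO_GPU_CMD_GET_EDID may be sent */
    }
    if (virtio_has_feature(vdev, VIRTIO_GPU_F_VIRGL)) {
            /* VIRTIO_GPU_CMD_CTX_* and VIRTIO_GPU_CMD_*_3D become usable */
    }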

* [PATCH 2/3] virtio-gpu api: VIRTIO_GPU_F_MEMORY
  2019-04-10 11:42 [PATCH 0/3] virtio-gpu api: memory and resource management Gerd Hoffmann
@ 2019-04-10 11:42   ` Gerd Hoffmann
  2019-04-10 11:42 ` Gerd Hoffmann
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 32+ messages in thread
From: Gerd Hoffmann @ 2019-04-10 11:42 UTC (permalink / raw)
  To: dri-devel, virtio
  Cc: David Airlie, Michael S. Tsirkin, Marc-André Lureau,
	Tomeu Vizoso, Gurchetan Singh, Gerd Hoffmann, Jason Wang,
	David Airlie, open list:VIRTIO CORE, NET AND BLOCK DRIVERS,
	open list

Introduce the concept of memory regions to virtio-gpu.  Initially only
memory regions composed of guest pages are supported (pretty much like
current backing storage for resources).  I expect support for other
memory types will be added later on.

VIRTIO_GPU_CMD_MEMORY_CREATE:
    creates a new memory region.

VIRTIO_GPU_CMD_MEMORY_UNREF:
    destroys a memory region.

VIRTIO_GPU_CMD_RESOURCE_ATTACH_MEMORY:
    Use a memory region as backing storage for the given resource.
    The existing VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING command
    can be used to detach it.

Motivation: This separates storage management from resource management.
It allows memory pooling (Vulkan support will most likely need this), and
it makes things a bit more flexible in general; for example, we can
represent GEM objects as memory regions even when the format is not yet
known (as happens on dma-buf import).

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 include/uapi/linux/virtio_gpu.h | 38 +++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/include/uapi/linux/virtio_gpu.h b/include/uapi/linux/virtio_gpu.h
index 0c85914d9369..732bb16a39f8 100644
--- a/include/uapi/linux/virtio_gpu.h
+++ b/include/uapi/linux/virtio_gpu.h
@@ -51,6 +51,13 @@
  */
 #define VIRTIO_GPU_F_EDID                1
 
+/*
+ * VIRTIO_GPU_CMD_MEMORY_CREATE
+ * VIRTIO_GPU_CMD_MEMORY_UNREF
+ * VIRTIO_GPU_CMD_RESOURCE_ATTACH_MEMORY
+ */
+#define VIRTIO_GPU_F_MEMORY              2
+
 enum virtio_gpu_ctrl_type {
 	VIRTIO_GPU_UNDEFINED = 0,
 
@@ -66,6 +73,9 @@ enum virtio_gpu_ctrl_type {
 	VIRTIO_GPU_CMD_GET_CAPSET_INFO,
 	VIRTIO_GPU_CMD_GET_CAPSET,
 	VIRTIO_GPU_CMD_GET_EDID,
+	VIRTIO_GPU_CMD_MEMORY_CREATE,
+	VIRTIO_GPU_CMD_MEMORY_UNREF,
+	VIRTIO_GPU_CMD_RESOURCE_ATTACH_MEMORY,
 
 	/* 3d commands */
 	VIRTIO_GPU_CMD_CTX_CREATE = 0x0200,
@@ -95,6 +105,7 @@ enum virtio_gpu_ctrl_type {
 	VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID,
 	VIRTIO_GPU_RESP_ERR_INVALID_CONTEXT_ID,
 	VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER,
+	VIRTIO_GPU_RESP_ERR_INVALID_MEMORY_ID,
 };
 
 #define VIRTIO_GPU_FLAG_FENCE (1 << 0)
@@ -187,6 +198,7 @@ struct virtio_gpu_resource_attach_backing {
 	struct virtio_gpu_ctrl_hdr hdr;
 	__le32 resource_id;
 	__le32 nr_entries;
+	/* struct virtio_gpu_mem_entry entries follow here */
 };
 
 /* VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING */
@@ -270,6 +282,32 @@ struct virtio_gpu_cmd_submit {
 	__le32 padding;
 };
 
+/* VIRTIO_GPU_CMD_MEMORY_CREATE */
+struct virtio_gpu_cmd_memory_create {
+	struct virtio_gpu_ctrl_hdr hdr;
+	__le64 size;
+	__le32 memory_id;
+	__le32 nr_entries;
+	__le32 flags;
+	__le32 padding;
+	/* struct virtio_gpu_mem_entry entries follow here */
+};
+
+/* VIRTIO_GPU_CMD_MEMORY_UNREF */
+struct virtio_gpu_cmd_memory_unref {
+	struct virtio_gpu_ctrl_hdr hdr;
+	__le32 memory_id;
+	__le32 padding;
+};
+
+/* VIRTIO_GPU_CMD_RESOURCE_ATTACH_MEMORY */
+struct virtio_gpu_cmd_resource_attach_memory {
+	struct virtio_gpu_ctrl_hdr hdr;
+	__le32 resource_id;
+	__le32 memory_id;
+	__le64 offset[4];
+};
+
 #define VIRTIO_GPU_CAPSET_VIRGL 1
 #define VIRTIO_GPU_CAPSET_VIRGL2 2
 
-- 
2.18.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread
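
A minimal, hypothetical sketch of a guest driver filling the two new
commands to back a resource with a guest-page memory region.  The
queue_ctrl_cmd() helper is a placeholder for the driver's control-virtqueue
submission path, not an existing function:

    static void example_attach_guest_pages(u32 memory_id, u32 resource_id,
                                           struct virtio_gpu_mem_entry *ents,
                                           u32 nents, u64 size)
    {
            struct virtio_gpu_cmd_memory_create mcreate = {
                    .hdr.type   = cpu_to_le32(VIRTIO_GPU_CMD_MEMORY_CREATE),
                    .size       = cpu_to_le64(size),
                    .memory_id  = cpu_to_le32(memory_id),
                    .nr_entries = cpu_to_le32(nents),
            };
            struct virtio_gpu_cmd_resource_attach_memory attach = {
                    .hdr.type    = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_ATTACH_MEMORY),
                    .resource_id = cpu_to_le32(resource_id),
                    .memory_id   = cpu_to_le32(memory_id),
                    .offset      = { cpu_to_le64(0) },  /* plane 0 at offset 0 */
            };

            /* the virtio_gpu_mem_entry array follows the create command */
            queue_ctrl_cmd(&mcreate, sizeof(mcreate), ents, nents * sizeof(*ents));
            queue_ctrl_cmd(&attach, sizeof(attach), NULL, 0);
    }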

* [PATCH 3/3] virtio-gpu api: VIRTIO_GPU_F_RESSOURCE_V2
  2019-04-10 11:42 [PATCH 0/3] virtio-gpu api: memory and resource management Gerd Hoffmann
@ 2019-04-10 11:42   ` Gerd Hoffmann
  2019-04-10 11:42 ` Gerd Hoffmann
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 32+ messages in thread
From: Gerd Hoffmann @ 2019-04-10 11:42 UTC (permalink / raw)
  To: dri-devel, virtio
  Cc: David Airlie, Michael S. Tsirkin, Marc-André Lureau,
	Tomeu Vizoso, Gurchetan Singh, Gerd Hoffmann, Jason Wang,
	David Airlie, open list:VIRTIO CORE, NET AND BLOCK DRIVERS,
	open list

Add a new command, VIRTIO_GPU_CMD_RESOURCE_CREATE_V2, to create resources.
It (a) adds support for planar resources and (b) returns the stride and size
of the resource planes.  The latter will be needed in case we support
mapping host resources into the guest some day.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 include/uapi/linux/virtio_gpu.h | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/include/uapi/linux/virtio_gpu.h b/include/uapi/linux/virtio_gpu.h
index 732bb16a39f8..00010315e500 100644
--- a/include/uapi/linux/virtio_gpu.h
+++ b/include/uapi/linux/virtio_gpu.h
@@ -58,6 +58,11 @@
  */
 #define VIRTIO_GPU_F_MEMORY              2
 
+/*
+ * VIRTIO_GPU_CMD_RESOURCE_CREATE_V2 command
+ */
+#define VIRTIO_GPU_F_RESSOURCE_V2        3
+
 enum virtio_gpu_ctrl_type {
 	VIRTIO_GPU_UNDEFINED = 0,
 
@@ -76,6 +81,7 @@ enum virtio_gpu_ctrl_type {
 	VIRTIO_GPU_CMD_MEMORY_CREATE,
 	VIRTIO_GPU_CMD_MEMORY_UNREF,
 	VIRTIO_GPU_CMD_RESOURCE_ATTACH_MEMORY,
+	VIRTIO_GPU_CMD_RESOURCE_CREATE_V2,
 
 	/* 3d commands */
 	VIRTIO_GPU_CMD_CTX_CREATE = 0x0200,
@@ -97,6 +103,7 @@ enum virtio_gpu_ctrl_type {
 	VIRTIO_GPU_RESP_OK_CAPSET_INFO,
 	VIRTIO_GPU_RESP_OK_CAPSET,
 	VIRTIO_GPU_RESP_OK_EDID,
+	VIRTIO_GPU_RESP_OK_RESOURCE_INFO,
 
 	/* error responses */
 	VIRTIO_GPU_RESP_ERR_UNSPEC = 0x1200,
@@ -308,6 +315,30 @@ struct virtio_gpu_cmd_resource_attach_memory {
 	__le64 offset[4];
 };
 
+/* VIRTIO_GPU_CMD_RESOURCE_CREATE_V2 */
+struct virtio_gpu_cmd_resource_create_v2 {
+	struct virtio_gpu_ctrl_hdr hdr;
+	__le32 resource_id;
+	__le32 format;
+	__le32 width;
+	__le32 height;
+	/* 3d only */
+	__le32 target;
+	__le32 bind;
+	__le32 depth;
+	__le32 array_size;
+	__le32 last_level;
+	__le32 nr_samples;
+	__le32 flags;
+};
+
+/* VIRTIO_GPU_RESP_OK_RESOURCE_INFO */
+struct virtio_gpu_resp_resource_info {
+	struct virtio_gpu_ctrl_hdr hdr;
+	__le32 stride[4];
+	__le32 size[4];
+};
+
 #define VIRTIO_GPU_CAPSET_VIRGL 1
 #define VIRTIO_GPU_CAPSET_VIRGL2 2
 
-- 
2.18.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread
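
A hypothetical sketch of the new command for a plain 2D resource, and of
reading the returned resource info.  send_and_wait() stands in for a
synchronous control-queue round trip and is not an existing helper:

    static int example_create_v2(u32 resource_id, u32 width, u32 height)
    {
            struct virtio_gpu_cmd_resource_create_v2 cmd = {
                    .hdr.type    = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_CREATE_V2),
                    .resource_id = cpu_to_le32(resource_id),
                    .format      = cpu_to_le32(VIRTIO_GPU_FORMAT_B8G8R8A8_UNORM),
                    .width       = cpu_to_le32(width),
                    .height      = cpu_to_le32(height),
                    /* target/bind/depth/... stay zero for a plain 2D resource */
            };
            struct virtio_gpu_resp_resource_info info = {};

            send_and_wait(&cmd, sizeof(cmd), &info, sizeof(info));
            if (le32_to_cpu(info.hdr.type) != VIRTIO_GPU_RESP_OK_RESOURCE_INFO)
                    return -EINVAL;

            /* host-chosen layout of plane 0, needed before sizing backing memory */
            pr_debug("plane 0: stride %u, size %u\n",
                     le32_to_cpu(info.stride[0]), le32_to_cpu(info.size[0]));
            return 0;
    }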

* Re: [PATCH 1/3] virtio-gpu api: comment feature flags
  2019-04-10 11:42   ` Gerd Hoffmann
  (?)
@ 2019-04-11  1:03   ` Gurchetan Singh
  -1 siblings, 0 replies; 32+ messages in thread
From: Gurchetan Singh @ 2019-04-11  1:03 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Tomeu Vizoso, Michael S. Tsirkin, David Airlie, Jason Wang,
	open list, ML dri-devel, Gurchetan Singh, David Airlie, virtio,
	open list:VIRTIO CORE, NET AND BLOCK DRIVERS


On Wed, Apr 10, 2019 at 4:42 AM Gerd Hoffmann <kraxel@redhat.com> wrote:

> Add comments to the existing feature flags,
> documenting which commands belong to them.
>
> Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
>

This patch is:

Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>


> ---
>  include/uapi/linux/virtio_gpu.h | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/include/uapi/linux/virtio_gpu.h
> b/include/uapi/linux/virtio_gpu.h
> index 8e88eba1fa7a..0c85914d9369 100644
> --- a/include/uapi/linux/virtio_gpu.h
> +++ b/include/uapi/linux/virtio_gpu.h
> @@ -40,8 +40,16 @@
>
>  #include <linux/types.h>
>
> -#define VIRTIO_GPU_F_VIRGL 0
> -#define VIRTIO_GPU_F_EDID  1
> +/*
> + * VIRTIO_GPU_CMD_CTX_*
> + * VIRTIO_GPU_CMD_*_3D
> + */
> +#define VIRTIO_GPU_F_VIRGL               0
> +
> +/*
> + * VIRTIO_GPU_CMD_GET_EDID
> + */
> +#define VIRTIO_GPU_F_EDID                1
>
>  enum virtio_gpu_ctrl_type {
>         VIRTIO_GPU_UNDEFINED = 0,
> --
> 2.18.1
>
>


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] virtio-gpu api: VIRTIO_GPU_F_RESSOURCE_V2
  2019-04-10 11:42   ` Gerd Hoffmann
  (?)
@ 2019-04-11  1:06   ` Gurchetan Singh
  2019-04-11  5:03       ` Gerd Hoffmann
  2019-04-11  5:03     ` Gerd Hoffmann
  -1 siblings, 2 replies; 32+ messages in thread
From: Gurchetan Singh @ 2019-04-11  1:06 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Tomeu Vizoso, Michael S. Tsirkin, David Airlie, Jason Wang,
	open list, ML dri-devel, Gurchetan Singh, David Airlie, virtio,
	open list:VIRTIO CORE, NET AND BLOCK DRIVERS


On Wed, Apr 10, 2019 at 4:42 AM Gerd Hoffmann <kraxel@redhat.com> wrote:

> Add new command VIRTIO_GPU_CMD_RESOURCE_CREATE_V2 to create resources.
> It adds (a) support planar resources and (b) returns stride and size of
> the resource planes.  The later will be needed in case we support
> mapping host resources into the guest some day.


Thanks for looking into this!  We definitely need something like this for
sharing buffers between display, gpu and multimedia units.  Is there a
userspace implementation?  Not sure if one is required for kernel API
commits or not.


> Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
> ---
>  include/uapi/linux/virtio_gpu.h | 31 +++++++++++++++++++++++++++++++
>  1 file changed, 31 insertions(+)
>
> diff --git a/include/uapi/linux/virtio_gpu.h
> b/include/uapi/linux/virtio_gpu.h
> index 732bb16a39f8..00010315e500 100644
> --- a/include/uapi/linux/virtio_gpu.h
> +++ b/include/uapi/linux/virtio_gpu.h
> @@ -58,6 +58,11 @@
>   */
>  #define VIRTIO_GPU_F_MEMORY              2
>
> +/*
> + * VIRTIO_GPU_CMD_RESOURCE_CREATE_V2 command
> + */
> +#define VIRTIO_GPU_F_RESSOURCE_V2        3
> +
>  enum virtio_gpu_ctrl_type {
>         VIRTIO_GPU_UNDEFINED = 0,
>
> @@ -76,6 +81,7 @@ enum virtio_gpu_ctrl_type {
>         VIRTIO_GPU_CMD_MEMORY_CREATE,
>         VIRTIO_GPU_CMD_MEMORY_UNREF,
>         VIRTIO_GPU_CMD_RESOURCE_ATTACH_MEMORY,
> +       VIRTIO_GPU_CMD_RESOURCE_CREATE_V2,
>
>         /* 3d commands */
>         VIRTIO_GPU_CMD_CTX_CREATE = 0x0200,
> @@ -97,6 +103,7 @@ enum virtio_gpu_ctrl_type {
>         VIRTIO_GPU_RESP_OK_CAPSET_INFO,
>         VIRTIO_GPU_RESP_OK_CAPSET,
>         VIRTIO_GPU_RESP_OK_EDID,
> +       VIRTIO_GPU_RESP_OK_RESOURCE_INFO,
>
>         /* error responses */
>         VIRTIO_GPU_RESP_ERR_UNSPEC = 0x1200,
> @@ -308,6 +315,30 @@ struct virtio_gpu_cmd_resource_attach_memory {
>         __le64 offset[4];
>  };
>
> +/* VIRTIO_GPU_CMD_RESOURCE_CREATE_V2 */
> +struct virtio_gpu_cmd_resource_create_v2 {
> +       struct virtio_gpu_ctrl_hdr hdr;
> +       __le32 resource_id;
> +       __le32 format;
> +       __le32 width;
> +       __le32 height;
> +       /* 3d only */
> +       __le32 target;
> +       __le32 bind;
> +       __le32 depth;
> +       __le32 array_size;
> +       __le32 last_level;
> +       __le32 nr_samples;
> +       __le32 flags;
> +};


I assume this is always backed by some host side allocation, without any
guest side pages associated with it?

If so, do we want the option for the guest to allocate?  We're basing this
off gbm, which will do the host side allocation for us.  But userspace
libraries could change (maybe something like the proposed Unix Device
Memory Allocator takes off).
The host, in theory, could return the metadata required about an
allocation to virtio-gpu to use guest pages.  With guest allocation we can
be more like Vulkan in that regard (see vkGetImageMemoryRequirements,
followed by vkAllocateMemory), and use udmabuf to share.  But not all hosts
can support that, so maybe having an API for both methods would be useful.



> +/* VIRTIO_GPU_RESP_OK_RESOURCE_INFO */
> +struct virtio_gpu_resp_resource_info {
> +       struct virtio_gpu_ctrl_hdr hdr;
> +       __le32 stride[4];
> +       __le32 size[4];
> +};
>

offsets[4] needed too


> +
>  #define VIRTIO_GPU_CAPSET_VIRGL 1
>  #define VIRTIO_GPU_CAPSET_VIRGL2 2
>
> --
> 2.18.1
>
>


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] virtio-gpu api: VIRTIO_GPU_F_RESSOURCE_V2
  2019-04-11  1:06   ` Gurchetan Singh
@ 2019-04-11  5:03       ` Gerd Hoffmann
  2019-04-11  5:03     ` Gerd Hoffmann
  1 sibling, 0 replies; 32+ messages in thread
From: Gerd Hoffmann @ 2019-04-11  5:03 UTC (permalink / raw)
  To: Gurchetan Singh
  Cc: ML dri-devel, virtio, David Airlie, Michael S. Tsirkin,
	Marc-André Lureau, Tomeu Vizoso, Jason Wang, David Airlie,
	open list:VIRTIO CORE, NET AND BLOCK DRIVERS, open list

> > +/* VIRTIO_GPU_CMD_RESOURCE_CREATE_V2 */
> > +struct virtio_gpu_cmd_resource_create_v2 {
> > +       struct virtio_gpu_ctrl_hdr hdr;
> > +       __le32 resource_id;
> > +       __le32 format;
> > +       __le32 width;
> > +       __le32 height;
> > +       /* 3d only */
> > +       __le32 target;
> > +       __le32 bind;
> > +       __le32 depth;
> > +       __le32 array_size;
> > +       __le32 last_level;
> > +       __le32 nr_samples;
> > +       __le32 flags;
> > +};
> 
> 
> I assume this is always backed by some host side allocation, without any
> guest side pages associated with it?

No.  It is not backed at all yet.  Workflow would be like this:

  (1) VIRTIO_GPU_CMD_RESOURCE_CREATE_V2
  (2) VIRTIO_GPU_CMD_MEMORY_CREATE (see patch 2)
  (3) VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH (see patch 2)

You could also create a larger pool with VIRTIO_GPU_CMD_MEMORY_CREATE,
then go attach multiple resources to it.

> If so, do we want the option for the guest allocate?

Allocation options are handled by VIRTIO_GPU_CMD_MEMORY_CREATE
(initially guest allocated only, i.e. what virtio-gpu supports today,
the plan is to add other allocation types later on).

> > +/* VIRTIO_GPU_RESP_OK_RESOURCE_INFO */
> > +struct virtio_gpu_resp_resource_info {
> > +       struct virtio_gpu_ctrl_hdr hdr;
> > +       __le32 stride[4];
> > +       __le32 size[4];
> > +};
> 
> offsets[4] needed too

That is in VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH ...

cheers,
  Gerd


^ permalink raw reply	[flat|nested] 32+ messages in thread
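
A hypothetical illustration of the pooling case mentioned above: one
guest-page memory region shared by two resources at page-aligned offsets.
memory_create() and attach_memory() are placeholders for the command
submission described in patch 2, and the sizes are made up:

    static void example_pool(u32 pool_id, u32 res_a, u32 res_b,
                             struct virtio_gpu_mem_entry *ents, u32 nents)
    {
            const u64 pool_size = 2 * 1024 * 1024;  /* assumed 2 MiB pool */
            const u64 res_b_off = 1 * 1024 * 1024;  /* second resource 1 MiB in */

            /* one VIRTIO_GPU_CMD_MEMORY_CREATE for the whole pool ... */
            memory_create(pool_id, pool_size, ents, nents);

            /* ... then one VIRTIO_GPU_CMD_RESOURCE_ATTACH_MEMORY per resource */
            attach_memory(res_a, pool_id, 0);
            attach_memory(res_b, pool_id, res_b_off);
    }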

* Re: [PATCH 3/3] virtio-gpu api: VIRTIO_GPU_F_RESSOURCE_V2
  2019-04-11  5:03       ` Gerd Hoffmann
@ 2019-04-12  1:36         ` Gurchetan Singh
  -1 siblings, 0 replies; 32+ messages in thread
From: Gurchetan Singh @ 2019-04-12  1:36 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: ML dri-devel, virtio, David Airlie, Michael S. Tsirkin,
	Marc-André Lureau, Tomeu Vizoso, Jason Wang, David Airlie,
	open list:VIRTIO CORE, NET AND BLOCK DRIVERS, open list

On Wed, Apr 10, 2019 at 10:03 PM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
> > > +/* VIRTIO_GPU_CMD_RESOURCE_CREATE_V2 */
> > > +struct virtio_gpu_cmd_resource_create_v2 {
> > > +       struct virtio_gpu_ctrl_hdr hdr;
> > > +       __le32 resource_id;
> > > +       __le32 format;
> > > +       __le32 width;
> > > +       __le32 height;
> > > +       /* 3d only */
> > > +       __le32 target;
> > > +       __le32 bind;
> > > +       __le32 depth;
> > > +       __le32 array_size;
> > > +       __le32 last_level;
> > > +       __le32 nr_samples;
> > > +       __le32 flags;
> > > +};
> >
> >
> > I assume this is always backed by some host side allocation, without any
> > guest side pages associated with it?
>
> No.  It is not backed at all yet.  Workflow would be like this:
>
>   (1) VIRTIO_GPU_CMD_RESOURCE_CREATE_V2
>   (2) VIRTIO_GPU_CMD_MEMORY_CREATE (see patch 2)
>   (3) VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH (see patch 2)

Thanks for the clarification.

>
> You could also create a larger pool with VIRTIO_GPU_CMD_MEMORY_CREATE,
> then go attach multiple resources to it.
>
> > If so, do we want the option for the guest allocate?
>
> Allocation options are handled by VIRTIO_GPU_CMD_MEMORY_CREATE
> (initially guest allocated only, i.e. what virtio-gpu supports today,
> the plan is to add other allocation types later on).

You want to cover Vulkan, host-allocated dma-bufs, and guest-allocated
dma-bufs with this, correct?  Let me know if it's a non-goal :-)

If so, we might want to distinguish between memory types (kind of like
memoryTypeIndex in Vulkan). [Assuming memory_id is like resource_id]

Here's some host-side APIs Chromium might want to use this with workflow:

1) Vulkan seems the most straightforward

virtio_gpu_cmd_memory_create --> create kernel data structure,
vkAllocateMemory on the host or import guest memory into Vulkan,
depending on the memory type
virtio_gpu_cmd_resource_create_v2 -->  vkCreateImage +
vkGetImageMemoryRequirements on host
virtio_gpu_cmd_resource_attach_memory -->  vkBindImageMemory on host

2) With a guest allocated dma-buf using some new allocation library,

virtio_gpu_cmd_resource_create_v2 --> host returns metadata describing
optimal allocation
virtio_gpu_cmd_memory_create --> allocate guest memory pages since
it's guest memory type
virtio_gpu_cmd_resource_attach_memory --> associate guest pages with
resource in kernel, send iovecs to host for bookkeeping

3) With gbm it's a little trickier,

virtio_gpu_cmd_resource_create_v2 --> gbm_bo_create_with_modifiers,
get metadata in return
virtio_gpu_cmd_memory_create --> create kernel data structure, but
don't allocate pages, nothing on the host
virtio_gpu_cmd_resource_attach_memory --> associate memory structure
with resource in kernel, nothing on the host

Is this what you have in mind?

>
> > > +/* VIRTIO_GPU_RESP_OK_RESOURCE_INFO */
> > > +struct virtio_gpu_resp_resource_info {
> > > +       struct virtio_gpu_ctrl_hdr hdr;
> > > +       __le32 stride[4];
> > > +       __le32 size[4];
> > > +};
> >
> > offsets[4] needed too
>
> That is in VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH ...

I assume the offsets aren't returned by
VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH.  How does the guest know which
offsets in memory will be compatible for sharing with display,
camera, etc.?

Also, do you want to cover the case where the resource is backed by
three separate memory regions (VK_IMAGE_CREATE_DISJOINT_BIT)?  If so,
we might want to make resource_create_v2 more opaque like
VkImageMemoryRequirementsInfo2.

>
> cheers,
>   Gerd
>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] virtio-gpu api: VIRTIO_GPU_F_RESSOURCE_V2
  2019-04-12  1:36         ` Gurchetan Singh
  (?)
  (?)
@ 2019-04-12  5:49         ` Gerd Hoffmann
  2019-04-12 23:34           ` Chia-I Wu
                             ` (2 more replies)
  -1 siblings, 3 replies; 32+ messages in thread
From: Gerd Hoffmann @ 2019-04-12  5:49 UTC (permalink / raw)
  To: Gurchetan Singh
  Cc: ML dri-devel, virtio, David Airlie, Michael S. Tsirkin,
	Marc-André Lureau, Tomeu Vizoso, Jason Wang, David Airlie,
	open list:VIRTIO CORE, NET AND BLOCK DRIVERS, open list

On Thu, Apr 11, 2019 at 06:36:15PM -0700, Gurchetan Singh wrote:
> On Wed, Apr 10, 2019 at 10:03 PM Gerd Hoffmann <kraxel@redhat.com> wrote:
> >
> > > > +/* VIRTIO_GPU_CMD_RESOURCE_CREATE_V2 */
> > > > +struct virtio_gpu_cmd_resource_create_v2 {
> > > > +       struct virtio_gpu_ctrl_hdr hdr;
> > > > +       __le32 resource_id;
> > > > +       __le32 format;
> > > > +       __le32 width;
> > > > +       __le32 height;
> > > > +       /* 3d only */
> > > > +       __le32 target;
> > > > +       __le32 bind;
> > > > +       __le32 depth;
> > > > +       __le32 array_size;
> > > > +       __le32 last_level;
> > > > +       __le32 nr_samples;
> > > > +       __le32 flags;
> > > > +};
> > >
> > >
> > > I assume this is always backed by some host side allocation, without any
> > > guest side pages associated with it?
> >
> > No.  It is not backed at all yet.  Workflow would be like this:
> >
> >   (1) VIRTIO_GPU_CMD_RESOURCE_CREATE_V2
> >   (2) VIRTIO_GPU_CMD_MEMORY_CREATE (see patch 2)
> >   (3) VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH (see patch 2)
> 
> Thanks for the clarification.
> 
> >
> > You could also create a larger pool with VIRTIO_GPU_CMD_MEMORY_CREATE,
> > then go attach multiple resources to it.
> >
> > > If so, do we want the option for the guest allocate?
> >
> > Allocation options are handled by VIRTIO_GPU_CMD_MEMORY_CREATE
> > (initially guest allocated only, i.e. what virtio-gpu supports today,
> > the plan is to add other allocation types later on).
> 
> You want to cover Vulkan, host-allocated dma-bufs, and guest-allocated
> dma-bufs with this, correct?  Let me know if it's a non-goal :-)

Yes, even though it is not clear yet how we are going to handle
host-allocated buffers in the vhost-user case ...
 
> If so, we might want to distinguish between memory types (kind of like
> memoryTypeIndex in Vulkan). [Assuming memory_id is like resource_id]

For the host-allocated buffers we surely want that, yes.
For guest-allocated memory regions it isn't useful I think ...

> 1) Vulkan seems the most straightforward
> 
> virtio_gpu_cmd_memory_create --> create kernel data structure,
> vkAllocateMemory on the host or import guest memory into Vulkan,
> depending on the memory type
> virtio_gpu_cmd_resource_create_v2 -->  vkCreateImage +
> vkGetImageMemoryRequirements on host
> virtio_gpu_cmd_resource_attach_memory -->  vkBindImageMemory on host

Yes.

Note 1: virtio_gpu_cmd_memory_create + virtio_gpu_cmd_resource_create_v2
ordering doesn't matter, so you can issue virtio_gpu_cmd_resource_create_v2
first to figure out stride and size, then adjust the memory size accordingly.

Note 2: The old virtio_gpu_cmd_resource_create variants can be used
too if you don't need the _v2 features.

Note 3: If I understand things correctly it would be valid to create a
memory pool (allocate one big chunk of memory) with vkAllocateMemory,
then bind multiple images at different offsets to it.

> 2) With a guest allocated dma-buf using some new allocation library,
> 
> virtio_gpu_cmd_resource_create_v2 --> host returns metadata describing
> optimal allocation
> virtio_gpu_cmd_memory_create --> allocate guest memory pages since
> it's guest memory type
> virtio_gpu_cmd_resource_attach_memory --> associate guest pages with
> resource in kernel, send iovecs to host for bookkeeping

virtio_gpu_cmd_memory_create sends the iovecs.  Otherwise correct.

> 3) With gbm it's a little trickier,
> 
> virtio_gpu_cmd_resource_create_v2 --> gbm_bo_create_with_modifiers,
> get metadata in return

Only get metadata in return.

> virtio_gpu_cmd_memory_create --> create kernel data structure, but
> don't allocate pages, nothing on the host

Memory allocation happens here.  Probably makes sense to have a
virtio_gpu_cmd_memory_create_host command here, because the parameters
we need are quite different from the guest-allocated case.

Maybe we even need a virtio_gpu_cmd_memory_create_host_for_resource
variant, given that gbm doesn't have raw memory buffers without any
format attached to it.

> > > > +/* VIRTIO_GPU_RESP_OK_RESOURCE_INFO */
> > > > +struct virtio_gpu_resp_resource_info {
> > > > +       struct virtio_gpu_ctrl_hdr hdr;
> > > > +       __le32 stride[4];
> > > > +       __le32 size[4];
> > > > +};
> > >
> > > offsets[4] needed too
> >
> > That is in VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH ...
> 
> I assume the offsets aren't returned by
> VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH.

Yes, they are sent by the guest.

> How does the guest know at which offsets in memory will be compatible
> to share with display, camera, etc?

Is it good enough to align offsets to page boundaries?

> Also, do you want to cover the case where the resource is backed by
> three separate memory regions (VK_IMAGE_CREATE_DISJOINT_BIT)?

Good point.  I guess we should make memory_id in
virtio_gpu_cmd_resource_attach_memory an array then,
so you can specify a different memory region for each plane.

cheers,
  Gerd


^ permalink raw reply	[flat|nested] 32+ messages in thread
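
A hypothetical host-side sketch of Note 3 above, using the Vulkan entry
points named in the thread.  Error handling is omitted, and dev, image_info,
mem_type and offset are assumed to have been set up elsewhere:

    VkMemoryAllocateInfo alloc_info = {
            .sType           = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO,
            .allocationSize  = 64 * 1024 * 1024,  /* assumed 64 MiB pool */
            .memoryTypeIndex = mem_type,
    };
    VkDeviceMemory pool;
    VkImage image;
    VkMemoryRequirements reqs;

    vkAllocateMemory(dev, &alloc_info, NULL, &pool);

    vkCreateImage(dev, &image_info, NULL, &image);
    vkGetImageMemoryRequirements(dev, image, &reqs);

    /* bind into the pool at an offset that satisfies reqs.alignment */
    vkBindImageMemory(dev, image, pool, offset);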

* Re: [PATCH 3/3] virtio-gpu api: VIRTIO_GPU_F_RESSOURCE_V2
  2019-04-12  5:49         ` Gerd Hoffmann
  2019-04-12 23:34           ` Chia-I Wu
@ 2019-04-12 23:34           ` Chia-I Wu
  2019-04-13  0:49             ` Gurchetan Singh
  2 siblings, 0 replies; 32+ messages in thread
From: Chia-I Wu @ 2019-04-12 23:34 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: virtio, Tomeu Vizoso, Michael S. Tsirkin, David Airlie,
	open list, ML dri-devel, Gurchetan Singh, David Airlie,
	open list:VIRTIO CORE, NET AND BLOCK DRIVERS


Hi,

I am still new to virgl, and missed the last round of discussion about
resource_create_v2.

From the discussion below, semantically resource_create_v2 creates a host
resource object _without_ any storage; memory_create creates a host memory
object which provides the storage.  Is that correct?

And this version of memory_create is probably the most special one among
its other potential variants, because it is the only(?) one that imports the
pre-allocated guest pages.

Do we expect these new commands to be supported by OpenGL, which does not
separate resources and memories?

On Thu, Apr 11, 2019 at 10:49 PM Gerd Hoffmann <kraxel@redhat.com> wrote:

> On Thu, Apr 11, 2019 at 06:36:15PM -0700, Gurchetan Singh wrote:
> > On Wed, Apr 10, 2019 at 10:03 PM Gerd Hoffmann <kraxel@redhat.com>
> wrote:
> > >
> > > > > +/* VIRTIO_GPU_CMD_RESOURCE_CREATE_V2 */
> > > > > +struct virtio_gpu_cmd_resource_create_v2 {
> > > > > +       struct virtio_gpu_ctrl_hdr hdr;
> > > > > +       __le32 resource_id;
> > > > > +       __le32 format;
> > > > > +       __le32 width;
> > > > > +       __le32 height;
> > > > > +       /* 3d only */
> > > > > +       __le32 target;
> > > > > +       __le32 bind;
> > > > > +       __le32 depth;
> > > > > +       __le32 array_size;
> > > > > +       __le32 last_level;
> > > > > +       __le32 nr_samples;
> > > > > +       __le32 flags;
> > > > > +};
> > > >
> > > >
> > > > I assume this is always backed by some host side allocation, without
> any
> > > > guest side pages associated with it?
> > >
> > > No.  It is not backed at all yet.  Workflow would be like this:
> > >
> > >   (1) VIRTIO_GPU_CMD_RESOURCE_CREATE_V2
> > >   (2) VIRTIO_GPU_CMD_MEMORY_CREATE (see patch 2)
> > >   (3) VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH (see patch 2)
> >
> > Thanks for the clarification.
> >
> > >
> > > You could also create a larger pool with VIRTIO_GPU_CMD_MEMORY_CREATE,
> > > then go attach multiple resources to it.
> > >
> > > > If so, do we want the option for the guest allocate?
> > >
> > > Allocation options are handled by VIRTIO_GPU_CMD_MEMORY_CREATE
> > > (initially guest allocated only, i.e. what virtio-gpu supports today,
> > > the plan is to add other allocation types later on).
> >
> > You want to cover Vulkan, host-allocated dma-bufs, and guest-allocated
> > dma-bufs with this, correct?  Let me know if it's a non-goal :-)
>
> Yes, even though it is not clear yet how we are going to handle
> host-allocated buffers in the vhost-user case ...

This might be another dumb question, but is this only an issue for the
vhost-user(-gpu) case?  What mechanisms are used to map a host dma-buf into
the guest address space?



>
> > If so, we might want to distinguish between memory types (kind of like
> > memoryTypeIndex in Vulkan). [Assuming memory_id is like resource_id]
>
> For the host-allocated buffers we surely want that, yes.
> For guest-allocated memory regions it isn't useful I think ...
>
Guest-allocated memory regions can be just another memory type.

But one needs to create the resource first to know which memory types can
be attached to it.  I think the metadata needs to be returned with
resource_create_v2.

>
> > 1) Vulkan seems the most straightforward
> >
> > virtio_gpu_cmd_memory_create --> create kernel data structure,
> > vkAllocateMemory on the host or import guest memory into Vulkan,
> > depending on the memory type
> > virtio_gpu_cmd_resource_create_v2 -->  vkCreateImage +
> > vkGetImageMemoryRequirements on host
> > virtio_gpu_cmd_resource_attach_memory -->  vkBindImageMemory on host
>
> Yes.
>
> Note 1: virtio_gpu_cmd_memory_create + virtio_gpu_cmd_resource_create_v2
> ordering doesn't matter, so you can virtio_gpu_cmd_resource_create_v2
> first to figure stride and size, then adjust memory size accordingly.
>
> Note 2: The old virtio_gpu_cmd_resource_create variants can be used
> too if you don't need the _v2 features.
>
> Note 3: If I understand things correctly it would be valid to create a
> memory pool (allocate one big chunk of memory) with vkAllocateMemory,
> then bind multiple images at different offsets to it.
>
> > 2) With a guest allocated dma-buf using some new allocation library,
> >
> > virtio_gpu_cmd_resource_create_v2 --> host returns metadata describing
> > optimal allocation
> > virtio_gpu_cmd_memory_create --> allocate guest memory pages since
> > it's guest memory type
> > virtio_gpu_cmd_resource_attach_memory --> associate guest pages with
> > resource in kernel, send iovecs to host for bookkeeping
>
> virtio_gpu_cmd_memory_create sends the iovecs.  Otherwise correct.
>
> > 3) With gbm it's a little trickier,
> >
> > virtio_gpu_cmd_resource_create_v2 --> gbm_bo_create_with_modifiers,
> > get metadata in return
>
> Only get metadata in return.
>
> > virtio_gpu_cmd_memory_create --> create kernel data structure, but
> > don't allocate pages, nothing on the host
>
> Memory allocation happens here.  Probably makes sense to have a
> virtio_gpu_cmd_memory_create_host command here, because the parameters
> we need are quite different from the guest-allocated case.
>
If we follow Vulkan, we only need the size and the memory type in most
cases.  The current version of memory_create is a special case because it
is an import operation and needs the guest mem_entry.  Perhaps
memory_create (for host) and memory_import_guest (to replace the current
version)?
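
For example (layouts invented just to illustrate the split, not taken
from patch 2):

  struct virtio_gpu_cmd_memory_create {           /* host-allocated */
          struct virtio_gpu_ctrl_hdr hdr;
          __le32 memory_id;
          __le32 memory_type;
          __le64 size;
  };

  struct virtio_gpu_cmd_memory_import_guest {     /* wraps guest pages */
          struct virtio_gpu_ctrl_hdr hdr;
          __le32 memory_id;
          __le32 nr_entries;
          /* nr_entries * struct virtio_gpu_mem_entry follow,
           * as with RESOURCE_ATTACH_BACKING today */
  };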

>
> Maybe we even need a virtio_gpu_cmd_memory_create_host_for_resource
> variant, given that gbm doesn't have raw memory buffers without any
> format attached to it.
>
And the memory will only be attachable to the given (or compatible)
resource, right?

Vulkan is much more explicit than any pre-existing API.  I guess we will
have to add this to cover APIs beyond Vulkan.


>
> > > > > +/* VIRTIO_GPU_RESP_OK_RESOURCE_INFO */
> > > > > +struct virtio_gpu_resp_resource_info {
> > > > > +       struct virtio_gpu_ctrl_hdr hdr;
> > > > > +       __le32 stride[4];
> > > > > +       __le32 size[4];
> > > > > +};
> > > >
> > > > offsets[4] needed too
> > >
> > > That is in VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH ...
> >
> > I assume the offsets aren't returned by
> > VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH.
>
> Yes, they are send by the guest.
>
Gurchetan probably means alignment[4].

>
> > How does the guest know at which offsets in memory will be compatible
> > to share with display, camera, etc?
>
> Is is good enough to align offsets to page boundaries?
>
That should be good enough.  But by returning alignments, we can minimize
the gaps when attaching multiple resources, especially when the resources
are only used by the GPU.


>
> > Also, do you want to cover the case where the resource is backed by
> > three separate memory regions (VK_IMAGE_CREATE_DISJOINT_BIT)?
>
> Good point.  I guess we should make memory_id in
> virtio_gpu_cmd_resource_attach_memory an array then,
> so you can specify a different memory region for each plane.
>
> cheers,
>   Gerd
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] virtio-gpu api: VIRTIO_GPU_F_RESSOURCE_V2
  2019-04-12  5:49         ` Gerd Hoffmann
@ 2019-04-13  0:49             ` Gurchetan Singh
  2019-04-12 23:34           ` Chia-I Wu
  2019-04-13  0:49             ` Gurchetan Singh
  2 siblings, 0 replies; 32+ messages in thread
From: Gurchetan Singh @ 2019-04-13  0:49 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: ML dri-devel, virtio, David Airlie, Michael S. Tsirkin,
	Marc-André Lureau, Tomeu Vizoso, Jason Wang, David Airlie,
	open list:VIRTIO CORE, NET AND BLOCK DRIVERS, open list

On Thu, Apr 11, 2019 at 10:49 PM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
> On Thu, Apr 11, 2019 at 06:36:15PM -0700, Gurchetan Singh wrote:
> > On Wed, Apr 10, 2019 at 10:03 PM Gerd Hoffmann <kraxel@redhat.com> wrote:
> > >
> > > > > +/* VIRTIO_GPU_CMD_RESOURCE_CREATE_V2 */
> > > > > +struct virtio_gpu_cmd_resource_create_v2 {
> > > > > +       struct virtio_gpu_ctrl_hdr hdr;
> > > > > +       __le32 resource_id;
> > > > > +       __le32 format;
> > > > > +       __le32 width;
> > > > > +       __le32 height;
> > > > > +       /* 3d only */
> > > > > +       __le32 target;
> > > > > +       __le32 bind;
> > > > > +       __le32 depth;
> > > > > +       __le32 array_size;
> > > > > +       __le32 last_level;
> > > > > +       __le32 nr_samples;
> > > > > +       __le32 flags;
> > > > > +};
> > > >
> > > >
> > > > I assume this is always backed by some host side allocation, without any
> > > > guest side pages associated with it?
> > >
> > > No.  It is not backed at all yet.  Workflow would be like this:
> > >
> > >   (1) VIRTIO_GPU_CMD_RESOURCE_CREATE_V2
> > >   (2) VIRTIO_GPU_CMD_MEMORY_CREATE (see patch 2)
> > >   (3) VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH (see patch 2)
> >
> > Thanks for the clarification.
> >
> > >
> > > You could also create a larger pool with VIRTIO_GPU_CMD_MEMORY_CREATE,
> > > then go attach multiple resources to it.
> > >
> > > > If so, do we want the option for the guest allocate?
> > >
> > > Allocation options are handled by VIRTIO_GPU_CMD_MEMORY_CREATE
> > > (initially guest allocated only, i.e. what virtio-gpu supports today,
> > > the plan is to add other allocation types later on).
> >
> > You want to cover Vulkan, host-allocated dma-bufs, and guest-allocated
> > dma-bufs with this, correct?  Let me know if it's a non-goal :-)
>
> Yes, even though it is not clear yet how we are going to handle
> host-allocated buffers in the vhost-user case ...

For Vulkan, VK_EXT_external_memory_dma_buf +
VK_EXT_image_drm_format_modifier should do the trick on Linux devices
that support these extensions.

For GL, I don't see any way forward given the current KVM api.

> > If so, we might want to distinguish between memory types (kind of like
> > memoryTypeIndex in Vulkan). [Assuming memory_id is like resource_id]
>
> For the host-allocated buffers we surely want that, yes.
> For guest-allocated memory regions it isn't useful I think ...

Are guest memory regions always write-combined, or can they be (read)
cached?  Is this something we can control in the virtgpu kernel
driver?

>
> > 1) Vulkan seems the most straightforward
> >
> > virtio_gpu_cmd_memory_create --> create kernel data structure,
> > vkAllocateMemory on the host or import guest memory into Vulkan,
> > depending on the memory type
> > virtio_gpu_cmd_resource_create_v2 -->  vkCreateImage +
> > vkGetImageMemoryRequirements on host
> > virtio_gpu_cmd_resource_attach_memory -->  vkBindImageMemory on host
>
> Yes.
>
> Note 1: virtio_gpu_cmd_memory_create + virtio_gpu_cmd_resource_create_v2
> ordering doesn't matter, so you can virtio_gpu_cmd_resource_create_v2
> first to figure stride and size, then adjust memory size accordingly.
>
> Note 2: The old virtio_gpu_cmd_resource_create variants can be used
> too if you don't need the _v2 features.
>
> Note 3: If I understand things correctly it would be valid to create a
> memory pool (allocate one big chunk of memory) with vkAllocateMemory,
> then bind multiple images at different offsets to it.
>
> > 2) With a guest allocated dma-buf using some new allocation library,
> >
> > virtio_gpu_cmd_resource_create_v2 --> host returns metadata describing
> > optimal allocation
> > virtio_gpu_cmd_memory_create --> allocate guest memory pages since
> > it's guest memory type
> > virtio_gpu_cmd_resource_attach_memory --> associate guest pages with
> > resource in kernel, send iovecs to host for bookkeeping
>
> virtio_gpu_cmd_memory_create sends the iovecs.  Otherwise correct.
>
> > 3) With gbm it's a little trickier,
> >
> > virtio_gpu_cmd_resource_create_v2 --> gbm_bo_create_with_modifiers,
> > get metadata in return
>
> Only get metadata in return.

With the current gbm API, metadata is only returned after the
allocation is complete.

We're fine with changing this for minigbm (i.e., having
gbm_get_allocation_metadata(width, height, modifier, *out_metadata)).
Not sure what the plan is for Mesa gbm, or the Unix Device Memory
Allocator.

>
> > virtio_gpu_cmd_memory_create --> create kernel data structure, but
> > don't allocate pages, nothing on the host
>
> Memory allocation happens here.  Probably makes sense to have a
> virtio_gpu_cmd_memory_create_host command here, because the parameters
> we need are quite different from the guest-allocated case.
>
> Maybe we even need a virtio_gpu_cmd_memory_create_host_for_resource
> variant, given that gbm doesn't have raw memory buffers without any
> format attached to it.

We should create VIRTIO_GPU_F_RESOURCE_V2 with what gbm should be in
mind, not what gbm currently is.

>
> > > > > +/* VIRTIO_GPU_RESP_OK_RESOURCE_INFO */
> > > > > +struct virtio_gpu_resp_resource_info {
> > > > > +       struct virtio_gpu_ctrl_hdr hdr;
> > > > > +       __le32 stride[4];
> > > > > +       __le32 size[4];
> > > > > +};
> > > >
> > > > offsets[4] needed too
> > >
> > > That is in VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH ...
> >
> > I assume the offsets aren't returned by
> > VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH.
>
> Yes, they are send by the guest.
>
> > How does the guest know at which offsets in memory will be compatible
> > to share with display, camera, etc?
>
> Is is good enough to align offsets to page boundaries?

Do you mean ((offset - stride * height) <= PAGE_SIZE)?  If so, no.
For example, do the calculation here with a 256 x 256 NV12 buffer:

https://chromium.googlesource.com/chromiumos/platform/minigbm/+/master/rockchip.c#166

Is the offset generally divisible by PAGE_SIZE?  Yes, in the cases I've seen.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] virtio-gpu api: VIRTIO_GPU_F_RESSOURCE_V2
  2019-04-12 23:34           ` Chia-I Wu
@ 2019-04-17  9:57               ` Gerd Hoffmann
  2019-04-17  9:57             ` Gerd Hoffmann
  1 sibling, 0 replies; 32+ messages in thread
From: Gerd Hoffmann @ 2019-04-17  9:57 UTC (permalink / raw)
  To: Chia-I Wu
  Cc: Gurchetan Singh, Tomeu Vizoso, Michael S. Tsirkin, David Airlie,
	Jason Wang, open list, ML dri-devel, open list:VIRTIO CORE,
	NET AND BLOCK DRIVERS, David Airlie, virtio

On Fri, Apr 12, 2019 at 04:34:20PM -0700, Chia-I Wu wrote:
> Hi,
> 
> I am still new to virgl, and missed the last round of discussion about
> resource_create_v2.
> 
> From the discussion below, semantically resource_create_v2 creates a host
> resource object _without_ any storage; memory_create creates a host memory
> object which provides the storage.  Is that correct?

Right now all resource_create_* variants create a resource object with
host storage.  memory_create creates guest storage, and
resource_attach_memory binds things together.  Then you have to transfer
the data.

Hmm, maybe we need a flag indicating that host storage is not needed,
for resources where we want to establish some kind of shared mapping later
on.

> Do we expect these new commands to be supported by OpenGL, which does not
> separate resources and memories?

Well, for opengl you need a 1:1 relationship between memory region and
resource.

> > Yes, even though it is not clear yet how we are going to handle
> > host-allocated buffers in the vhost-user case ...
> 
> This might be another dumb question, but is this only an issue for
> vhost-user(-gpu) case?  What mechanisms are used to map host dma-buf into
> the guest address space?

qemu can change the address space; that includes mmap()ing stuff there.
An external vhost-user process can't do this; it can only read the
address space layout, and read/write from/to guest memory.

> But one needs to create the resource first to know which memory types can
> be attached to it.  I think the metadata needs to be returned with
> resource_create_v2.

There is a resource_info reply for that.

> That should be good enough.  But by returning alignments, we can minimize
> the gaps when attaching multiple resources, especially when the resources
> are only used by GPU.

We can add alignments to the resource_info reply.
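
Something like this (sketch only, the actual field layout may end up
different):

  /* VIRTIO_GPU_RESP_OK_RESOURCE_INFO */
  struct virtio_gpu_resp_resource_info {
          struct virtio_gpu_ctrl_hdr hdr;
          __le32 stride[4];
          __le32 size[4];
          __le32 alignment[4];    /* new: per-plane alignment */
  };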

cheers,
  Gerd


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] virtio-gpu api: VIRTIO_GPU_F_RESSOURCE_V2
  2019-04-17  9:57               ` Gerd Hoffmann
@ 2019-04-17 18:06                 ` Chia-I Wu
  -1 siblings, 0 replies; 32+ messages in thread
From: Chia-I Wu @ 2019-04-17 18:06 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Gurchetan Singh, Tomeu Vizoso, Michael S. Tsirkin, David Airlie,
	Jason Wang, open list, ML dri-devel, open list:VIRTIO CORE,
	NET AND BLOCK DRIVERS, David Airlie, virtio

On Wed, Apr 17, 2019 at 2:57 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
> On Fri, Apr 12, 2019 at 04:34:20PM -0700, Chia-I Wu wrote:
> > Hi,
> >
> > I am still new to virgl, and missed the last round of discussion about
> > resource_create_v2.
> >
> > From the discussion below, semantically resource_create_v2 creates a host
> > resource object _without_ any storage; memory_create creates a host memory
> > object which provides the storage.  Is that correct?
>
> Right now all resource_create_* variants create a resource object with
> host storage.  memory_create creates guest storage, and
> resource_attach_memory binds things together.  Then you have to transfer
> the data.
In Gurchetan's Vulkan example, the host storage allocation happens in
(some variant of) memory_create, not in resource_create_v2.  Maybe
that's what got me confused.

>
> Hmm, maybe we need a flag indicating that host storage is not needed,
> for resources where we want establish some kind of shared mapping later
> on.
This makes sense, to support both Vulkan and non-Vulkan models.

This differs from this patch, but I think a full-fledged resource
should logically have three components

 - a RESOURCE component that has no storage
 - a MEMORY component that provides the storage
 - a BACKING component that is for transfers

resource_attach_backing sets the BACKING component.  BACKING always
uses guest pages and supports only transfers into or out of MEMORY.

resource_attach_memory sets the MEMORY component.  MEMORY can use host
or guest pages, and must always support GPU operations.  When a MEMORY
is mappable in the guest, we can skip BACKING and achieve zero-copy.

resource_create_* can then get a flag to indicate whether only
RESOURCE is created or RESOURCE+MEMORY is created.
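
For example (purely illustrative, the flag name is made up here):

  /* in virtio_gpu_cmd_resource_create_v2.flags */
  #define VIRTIO_GPU_RESOURCE_FLAG_NO_STORAGE (1 << 0)

  /* transfer path:   RESOURCE+MEMORY created on the host, BACKING made
   *                  of guest pages, data moved with TRANSFER_TO_HOST_*
   * zero-copy path:  RESOURCE only, then MEMORY_CREATE for guest-mappable
   *                  memory plus RESOURCE_ATTACH_MEMORY, no BACKING */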


>
> > Do we expect these new commands to be supported by OpenGL, which does not
> > separate resources and memories?
>
> Well, for opengl you need a 1:1 relationship between memory region and
> resource.
>
> > > Yes, even though it is not clear yet how we are going to handle
> > > host-allocated buffers in the vhost-user case ...
> >
> > This might be another dumb question, but is this only an issue for
> > vhost-user(-gpu) case?  What mechanisms are used to map host dma-buf into
> > the guest address space?
>
> qemu can change the address space, that includes mmap()ing stuff there.
> An external vhost-user process can't do this, it can only read the
> address space layout, and read/write from/to guest memory.
I thought the vhost-user process can work with the host-allocated dmabuf
directly.  That is,

  qemu: injects dmabuf pages into guest address space
  vhost-user: work with the dmabuf
  guest: can read/write those pages

>
> > But one needs to create the resource first to know which memory types can
> > be attached to it.  I think the metadata needs to be returned with
> > resource_create_v2.
>
> There is a resource_info reply for that.
>
> > That should be good enough.  But by returning alignments, we can minimize
> > the gaps when attaching multiple resources, especially when the resources
> > are only used by GPU.
>
> We can add alignments to the resource_info reply.
>
> cheers,
>   Gerd
>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] virtio-gpu api: VIRTIO_GPU_F_RESSOURCE_V2
  2019-04-17  9:57               ` Gerd Hoffmann
@ 2019-04-23  0:12                 ` Gurchetan Singh
  -1 siblings, 0 replies; 32+ messages in thread
From: Gurchetan Singh @ 2019-04-23  0:12 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Chia-I Wu, Tomeu Vizoso, Michael S. Tsirkin, David Airlie,
	Jason Wang, open list, ML dri-devel, open list:VIRTIO CORE,
	NET AND BLOCK DRIVERS, David Airlie, virtio

On Wed, Apr 17, 2019 at 2:57 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
> On Fri, Apr 12, 2019 at 04:34:20PM -0700, Chia-I Wu wrote:
> > Hi,
> >
> > I am still new to virgl, and missed the last round of discussion about
> > resource_create_v2.
> >
> > From the discussion below, semantically resource_create_v2 creates a host
> > resource object _without_ any storage; memory_create creates a host memory
> > object which provides the storage.  Is that correct?
>
> Right now all resource_create_* variants create a resource object with
> host storage.  memory_create creates guest storage, and
> resource_attach_memory binds things together.  Then you have to transfer
> the data.
>
> Hmm, maybe we need a flag indicating that host storage is not needed,
> for resources where we want to establish some kind of shared mapping
> later on.
>
> > Do we expect these new commands to be supported by OpenGL, which does not
> > separate resources and memories?
>
> Well, for opengl you need a 1:1 relationship between memory region and
> resource.
>
> > > Yes, even though it is not clear yet how we are going to handle
> > > host-allocated buffers in the vhost-user case ...
> >
> > This might be another dumb question, but is this only an issue for
> > vhost-user(-gpu) case?  What mechanisms are used to map host dma-buf into
> > the guest address space?
>
> qemu can change the address space, that includes mmap()ing stuff there.
> An external vhost-user process can't do this, it can only read the
> address space layout, and read/write from/to guest memory.
>
> > But one needs to create the resource first to know which memory types can
> > be attached to it.  I think the metadata needs to be returned with
> > resource_create_v2.
>
> There is a resource_info reply for that.

The memory type should probably be in resource_create_v2, not in the
reply.  The metadata will be different based on the memory heap it's
allocated from.

Also, not all heaps need to be exposed to the guest kernel.  For
example, device local memory in Vulkan could be un-mappable.  In fact,
for resources that are not host visible we might be better off
sidestepping the kernel altogether and tracking allocation in guest
userspace.

Here is an example of memory types the guest kernel may be interested
in (based on i965):

Type 0 --> DEVICE_LOCAL_BIT | HOST_VISIBLE_BIT | HOST_COHERENT_BIT |
HOST_CACHED_BIT | RENDERER_ALLOCATED (Vulkan)
Type 1 --> HOST_VISIBLE_BIT | HOST_COHERENT_BIT | EXTERNAL_ALLOCATED
(gbm write combine)
Type 2 --> HOST_VISIBLE_BIT | HOST_COHERENT_BIT | GUEST_ALLOCATED
(guest allocated memory, which I assume is also write combine)
Type 3 --> HOST_VISIBLE_BIT | HOST_CACHED | EXTERNAL_ALLOCATED (gbm
cached memory)
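
As a rough illustration of how such types could be encoded -- with the
type selected at resource_create_v2 time as argued above -- here is a
C sketch; all flag and struct names are made up for this example and are
not part of <linux/virtio_gpu.h> or of the proposed patches (only
virtio_gpu_ctrl_hdr exists there):

/*
 * Hypothetical encoding of memory-type properties and a simplified
 * resource_create_v2 that selects one of the advertised types.
 * Illustrative only; not a proposed uapi layout.
 */
#include <linux/types.h>
#include <linux/virtio_gpu.h>

#define VIRTIO_GPU_MEM_DEVICE_LOCAL       (1 << 0)
#define VIRTIO_GPU_MEM_HOST_VISIBLE       (1 << 1)
#define VIRTIO_GPU_MEM_HOST_COHERENT      (1 << 2)
#define VIRTIO_GPU_MEM_HOST_CACHED        (1 << 3)
#define VIRTIO_GPU_MEM_RENDERER_ALLOCATED (1 << 8)
#define VIRTIO_GPU_MEM_EXTERNAL_ALLOCATED (1 << 9)
#define VIRTIO_GPU_MEM_GUEST_ALLOCATED    (1 << 10)

struct virtio_gpu_resource_create_v2_example {
	struct virtio_gpu_ctrl_hdr hdr;
	__le32 resource_id;
	__le32 memory_type;	/* index into the advertised memory types */
	__le32 format;
	__le32 width;
	__le32 height;
	__le32 flags;
};

The i965-style list above would then just be four entries in the
advertised type table, e.g. type 0 = DEVICE_LOCAL | HOST_VISIBLE |
HOST_COHERENT | HOST_CACHED | RENDERER_ALLOCATED.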



>
> > That should be good enough.  But by returning alignments, we can minimize
> > the gaps when attaching multiple resources, especially when the resources
> > are only used by GPU.
>
> We can add alignments to the resource_info reply.
>
> cheers,
>   Gerd
>

^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2019-04-23  0:12 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-04-10 11:42 [PATCH 0/3] virtio-gpu api: memory and resource management Gerd Hoffmann
2019-04-10 11:42 ` [PATCH 1/3] virtio-gpu api: comment feature flags Gerd Hoffmann
2019-04-10 11:42   ` Gerd Hoffmann
2019-04-11  1:03   ` Gurchetan Singh
2019-04-10 11:42 ` Gerd Hoffmann
2019-04-10 11:42 ` [PATCH 2/3] virtio-gpu api: VIRTIO_GPU_F_MEMORY Gerd Hoffmann
2019-04-10 11:42 ` Gerd Hoffmann
2019-04-10 11:42   ` Gerd Hoffmann
2019-04-10 11:42 ` [PATCH 3/3] virtio-gpu api: VIRTIO_GPU_F_RESSOURCE_V2 Gerd Hoffmann
2019-04-10 11:42   ` Gerd Hoffmann
2019-04-11  1:06   ` Gurchetan Singh
2019-04-11  5:03     ` Gerd Hoffmann
2019-04-11  5:03       ` Gerd Hoffmann
2019-04-12  1:36       ` Gurchetan Singh
2019-04-12  1:36         ` Gurchetan Singh
2019-04-12  5:49         ` Gerd Hoffmann
2019-04-12  5:49         ` Gerd Hoffmann
2019-04-12 23:34           ` Chia-I Wu
2019-04-17  9:57             ` Gerd Hoffmann
2019-04-17  9:57               ` Gerd Hoffmann
2019-04-17 18:06               ` Chia-I Wu
2019-04-17 18:06               ` Chia-I Wu
2019-04-17 18:06                 ` Chia-I Wu
2019-04-23  0:12               ` Gurchetan Singh
2019-04-23  0:12                 ` Gurchetan Singh
2019-04-23  0:12               ` Gurchetan Singh
2019-04-17  9:57             ` Gerd Hoffmann
2019-04-12 23:34           ` Chia-I Wu
2019-04-13  0:49           ` Gurchetan Singh
2019-04-13  0:49             ` Gurchetan Singh
2019-04-11  5:03     ` Gerd Hoffmann
2019-04-10 11:42 ` Gerd Hoffmann
