* [PATCH v8 00/11] Support blob memory and venus on qemu
@ 2024-04-18 19:00 Dmitry Osipenko
  2024-04-18 19:00 ` [PATCH v8 01/11] linux-headers: Update to Linux v6.9-rc3 Dmitry Osipenko
                   ` (11 more replies)
  0 siblings, 12 replies; 36+ messages in thread
From: Dmitry Osipenko @ 2024-04-18 19:00 UTC (permalink / raw)
  To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

Hello,

This series enables Vulkan Venus context support on virtio-gpu.

All virglrenderer and almost all Linux kernel prerequisite changes
needed by Venus are already upstream. On the kernel side there is a pending
KVM patchset that fixes mapping of compound pages needed by DRM drivers
using TTM [1]; otherwise, hostmem blob mapping will fail with a KVM error
from QEMU.

[1] https://lore.kernel.org/kvm/20240229025759.1187910-1-stevensd@google.com/

Example QEMU command line that enables Venus for virtio-gpu:

  qemu-system-x86_64 -device virtio-vga-gl,hostmem=4G,blob=true,vulkan=true
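
A more complete invocation also needs a GL-capable display backend and KVM
acceleration on the host. The sketch below is an illustration only; it assumes
a host with working GPU drivers and a guest image that ships the Venus Mesa
driver, and the -machine/-accel/-display options and the guest-side
vulkaninfo check are not part of this series:

  qemu-system-x86_64 \
    -machine q35 -accel kvm -m 4G \
    -display gtk,gl=on \
    -device virtio-vga-gl,hostmem=4G,blob=true,vulkan=true

  # inside the guest, Venus should then show up as a Vulkan device
  vulkaninfo --summary | grep -i venus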


Changes from V7 to V8

- Added support for suspending virtio-gpu command processing and made
  unmapping of the hostmem region asynchronous by blocking/suspending
  command processing until the region is unmapped. Suggested by Akihiko Odaki.

- Fixed building of x86 targets on arm64 using the updated linux-headers.
  Corrected the update script. Thanks to Rob Clark for reporting
  the issue.

- Added new patch that makes registration of virgl capsets dynamic.
  Requested by Antonio Caggiano and Pierre-Eric Pelloux-Prayer.

- The Venus capset is no longer advertised if Vulkan is disabled with
  vulkan=false

Changes from V6 to V7

- Used scripts/update-linux-headers.sh to update QEMU headers based
  on Linux v6.8-rc3, which adds the Venus capset definition to the
  virtio-gpu protocol; requested by Peter Maydell

- Added r-bs that were given to v6 patches. Corrected missing s-o-bs

- Dropped the context_init virtio-gpu device configuration flag from QEMU;
  suggested by Marc-André Lureau

- Added missing error condition checks spotted by Marc-André Lureau
  and Akihiko Odaki, and a few more

- Returned res->mr referencing back to memory_region_init_ram_ptr(), as
  suggested by Akihiko Odaki. Incorporated the fix suggested by Pierre-Eric
  to specify the MR name

- Dropped the virgl_gpu_resource wrapper, cleaned up and simplified
  patch that adds blob-cmd support

- Fixed improper blob resource removal from resource list on resource_unref
  that was spotted by Akihiko Odaki

- Changed the order of the blob patches, as suggested by Akihiko Odaki.
  The cmd_set_scanout_blob support is enabled first

- Factored out the patch that adds resource management support to
  virtio-gpu-gl; requested by Marc-André Lureau

- Simplified and improved the UUID support patch, dropped the hash table
  as we don't need it for now. Moved QemuUUID to virtio_gpu_simple_resource.
  This all was suggested by Akihiko Odaki and Marc-André Lureau

- Dropped console_has_gl() check, suggested by Akihiko Odaki

- Reworked Meson checking of libvirglrenderer features; new features are
  made available based on the virglrenderer pkg-config version instead of
  checking symbols in the header. This should fix the build error with older
  virglrenderer versions reported by Alex Bennée (see the version-check
  example after this list)

- Made enabling of the Venus context configurable via the new virtio-gpu
  device "vulkan=true" flag, as suggested by Marc-André Lureau. The flag is
  disabled by default because it requires the blob and hostmem options to be
  enabled and configured
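
As a quick way to check which virglrenderer the Meson version test will see,
the installed pkg-config version can be queried. This is only a sketch; the
virglrenderer.pc module name is what the library installs, and it assumes
virglrenderer itself was built with its Venus build option enabled:

  # virglrenderer >= 1.0.0 is expected to provide the Venus support used here
  pkg-config --modversion virglrenderer
  pkg-config --atleast-version=1.0.0 virglrenderer && echo "Venus-capable virglrenderer found"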

Changes from V5 to V6

- Move macro configuration under virgl.found() and rename
  HAVE_VIRGL_CONTEXT_CREATE_WITH_FLAGS.

- Handle the case where context_init is disabled.

- Enable context_init by default.

- Move virtio_gpu_virgl_resource_unmap() into
  virgl_cmd_resource_unmap_blob().

- Introduce new struct virgl_gpu_resource to store virgl specific members.

- Remove error handling of g_new0(), because glib will abort() on OOM.

- Set resource uuid as option.

- Implement optional subsection of vmstate_virtio_gpu_resource_uuid_state
  for virtio live migration.

- Use g_int_hash/g_int_equal instead of the default

- Add scanout_blob function for virtio-gpu-virgl

- Resolve the memory leak on virtio-gpu-virgl

- Remove the unstable API flags check because virglrenderer is already 1.0

- Squash the render server flag support into "Initialize Venus"

Changes from V4 (virtio gpu V4) to V5

- Inverted patches 5 and 6 because we should configure
  HAVE_VIRGL_CONTEXT_INIT first.

- Validate owner of memory region to avoid slowing down DMA.

- Use memory_region_init_ram_ptr() instead of
  memory_region_init_ram_device_ptr().

- Adjust the sequence to allocate the GPU resource before virglrenderer
  resource creation

- Add virtio migration handling for uuid.

- Send kernel patch to define VIRTIO_GPU_CAPSET_VENUS.
  https://lore.kernel.org/lkml/20230915105918.3763061-1-ray.huang@amd.com/

- Add a Meson check to make sure the unstable APIs, defined since
  virglrenderer 0.9.0, are available.

Changes from V1 to V2 (virtio gpu V4)

- Remove unused #include "hw/virtio/virtio-iommu.h"

- Add a local function, called virgl_resource_destroy(), that is used
  to release a vgpu resource on error paths and in resource_unref.

- Remove virtio_gpu_virgl_resource_unmap from
  virtio_gpu_cleanup_mapping(),
  since this function won't be called on blob resources and also because
  blob resources are unmapped via virgl_cmd_resource_unmap_blob().

- In virgl_cmd_resource_create_blob(), do proper cleanup in error paths
  and move QTAILQ_INSERT_HEAD(&g->reslist, res, next) after the resource
  has been fully initialized.

- The memory region has a different life-cycle from virtio-gpu resources,
  i.e. it cannot be released synchronously along with the vgpu resource.
  So the "region" field was changed to a pointer that is allocated
  dynamically when the blob is mapped.
  Also, since the pointer can be used to indicate whether the blob
  is mapped, the explicit "mapped" field was removed.

- In virgl_cmd_resource_map_blob(), add a check on the value of
  res->region to prevent it from being called twice on the same resource.

- Add a patch to enable automatic deallocation of memory regions to resolve
  use-after-free memory corruption with a reference.

Antonio Caggiano (3):
  virtio-gpu: Handle resource blob commands
  virtio-gpu: Resource UUID
  virtio-gpu: Support Venus context

Dmitry Osipenko (4):
  linux-headers: Update to Linux v6.9-rc3
  virtio-gpu: Use pkgconfig version to decide which virgl features are
    available
  virtio-gpu: Don't require udmabuf when blobs and virgl are enabled
  virtio-gpu: Support suspension of commands processing

Huang Rui (2):
  virtio-gpu: Support context-init feature with virglrenderer
  virtio-gpu: Add virgl resource management

Pierre-Eric Pelloux-Prayer (1):
  virtio-gpu: Register capsets dynamically

Robert Beckett (1):
  virtio-gpu: Support blob scanout using dmabuf fd

 hw/display/trace-events                       |   1 +
 hw/display/virtio-gpu-base.c                  |   1 +
 hw/display/virtio-gpu-gl.c                    |   5 +
 hw/display/virtio-gpu-rutabaga.c              |   1 +
 hw/display/virtio-gpu-virgl.c                 | 519 ++++++++++++-
 hw/display/virtio-gpu.c                       |  42 +-
 hw/i386/x86.c                                 |   8 -
 include/hw/virtio/virtio-gpu.h                |  21 +
 include/standard-headers/asm-x86/bootparam.h  |  17 +-
 include/standard-headers/asm-x86/kvm_para.h   |   3 +-
 include/standard-headers/asm-x86/setup_data.h |  83 +++
 include/standard-headers/linux/ethtool.h      |  48 ++
 include/standard-headers/linux/fuse.h         |  39 +-
 .../linux/input-event-codes.h                 |   1 +
 include/standard-headers/linux/virtio_gpu.h   |   2 +
 include/standard-headers/linux/virtio_pci.h   |  10 +-
 include/standard-headers/linux/virtio_snd.h   | 154 ++++
 linux-headers/asm-arm64/kvm.h                 |  15 +-
 linux-headers/asm-arm64/sve_context.h         |  11 +
 linux-headers/asm-generic/bitsperlong.h       |   4 +
 linux-headers/asm-loongarch/kvm.h             |   2 -
 linux-headers/asm-mips/kvm.h                  |   2 -
 linux-headers/asm-powerpc/kvm.h               |  45 +-
 linux-headers/asm-riscv/kvm.h                 |   3 +-
 linux-headers/asm-s390/kvm.h                  | 315 +++++++-
 linux-headers/asm-x86/kvm.h                   | 308 +++++++-
 linux-headers/linux/bits.h                    |  15 +
 linux-headers/linux/kvm.h                     | 689 +-----------------
 linux-headers/linux/psp-sev.h                 |  59 ++
 linux-headers/linux/vhost.h                   |   7 +
 meson.build                                   |  10 +-
 scripts/update-linux-headers.sh               |   5 +-
 32 files changed, 1679 insertions(+), 766 deletions(-)
 create mode 100644 include/standard-headers/asm-x86/setup_data.h
 create mode 100644 linux-headers/linux/bits.h

-- 
2.44.0




* [PATCH v8 01/11] linux-headers: Update to Linux v6.9-rc3
  2024-04-18 19:00 [PATCH v8 00/11] Support blob memory and venus on qemu Dmitry Osipenko
@ 2024-04-18 19:00 ` Dmitry Osipenko
  2024-05-10 10:46   ` Alex Bennée
  2024-04-18 19:00 ` [PATCH v8 02/11] virtio-gpu: Use pkgconfig version to decide which virgl features are available Dmitry Osipenko
                   ` (10 subsequent siblings)
  11 siblings, 1 reply; 36+ messages in thread
From: Dmitry Osipenko @ 2024-04-18 19:00 UTC (permalink / raw)
  To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

Update kernel headers to get new VirtIO-GPU capsets, in particular the
Venus capset.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 hw/i386/x86.c                                 |   8 -
 include/standard-headers/asm-x86/bootparam.h  |  17 +-
 include/standard-headers/asm-x86/kvm_para.h   |   3 +-
 include/standard-headers/asm-x86/setup_data.h |  83 +++
 include/standard-headers/linux/ethtool.h      |  48 ++
 include/standard-headers/linux/fuse.h         |  39 +-
 .../linux/input-event-codes.h                 |   1 +
 include/standard-headers/linux/virtio_gpu.h   |   2 +
 include/standard-headers/linux/virtio_pci.h   |  10 +-
 include/standard-headers/linux/virtio_snd.h   | 154 ++++
 linux-headers/asm-arm64/kvm.h                 |  15 +-
 linux-headers/asm-arm64/sve_context.h         |  11 +
 linux-headers/asm-generic/bitsperlong.h       |   4 +
 linux-headers/asm-loongarch/kvm.h             |   2 -
 linux-headers/asm-mips/kvm.h                  |   2 -
 linux-headers/asm-powerpc/kvm.h               |  45 +-
 linux-headers/asm-riscv/kvm.h                 |   3 +-
 linux-headers/asm-s390/kvm.h                  | 315 +++++++-
 linux-headers/asm-x86/kvm.h                   | 308 +++++++-
 linux-headers/linux/bits.h                    |  15 +
 linux-headers/linux/kvm.h                     | 689 +-----------------
 linux-headers/linux/psp-sev.h                 |  59 ++
 linux-headers/linux/vhost.h                   |   7 +
 scripts/update-linux-headers.sh               |   5 +-
 24 files changed, 1106 insertions(+), 739 deletions(-)
 create mode 100644 include/standard-headers/asm-x86/setup_data.h
 create mode 100644 linux-headers/linux/bits.h

diff --git a/hw/i386/x86.c b/hw/i386/x86.c
index ffbda48917fd..84a48019770b 100644
--- a/hw/i386/x86.c
+++ b/hw/i386/x86.c
@@ -679,14 +679,6 @@ DeviceState *ioapic_init_secondary(GSIState *gsi_state)
     return dev;
 }
 
-struct setup_data {
-    uint64_t next;
-    uint32_t type;
-    uint32_t len;
-    uint8_t data[];
-} __attribute__((packed));
-
-
 /*
  * The entry point into the kernel for PVH boot is different from
  * the native entry point.  The PVH entry is defined by the x86/HVM
diff --git a/include/standard-headers/asm-x86/bootparam.h b/include/standard-headers/asm-x86/bootparam.h
index 0b06d2bff1b9..b582a105c087 100644
--- a/include/standard-headers/asm-x86/bootparam.h
+++ b/include/standard-headers/asm-x86/bootparam.h
@@ -2,21 +2,7 @@
 #ifndef _ASM_X86_BOOTPARAM_H
 #define _ASM_X86_BOOTPARAM_H
 
-/* setup_data/setup_indirect types */
-#define SETUP_NONE			0
-#define SETUP_E820_EXT			1
-#define SETUP_DTB			2
-#define SETUP_PCI			3
-#define SETUP_EFI			4
-#define SETUP_APPLE_PROPERTIES		5
-#define SETUP_JAILHOUSE			6
-#define SETUP_CC_BLOB			7
-#define SETUP_IMA			8
-#define SETUP_RNG_SEED			9
-#define SETUP_ENUM_MAX			SETUP_RNG_SEED
-
-#define SETUP_INDIRECT			(1<<31)
-#define SETUP_TYPE_MAX			(SETUP_ENUM_MAX | SETUP_INDIRECT)
+#include "standard-headers/asm-x86/setup_data.h"
 
 /* ram_size flags */
 #define RAMDISK_IMAGE_START_MASK	0x07FF
@@ -38,6 +24,7 @@
 #define XLF_EFI_KEXEC			(1<<4)
 #define XLF_5LEVEL			(1<<5)
 #define XLF_5LEVEL_ENABLED		(1<<6)
+#define XLF_MEM_ENCRYPTION		(1<<7)
 
 
 #endif /* _ASM_X86_BOOTPARAM_H */
diff --git a/include/standard-headers/asm-x86/kvm_para.h b/include/standard-headers/asm-x86/kvm_para.h
index f0235e58a1d3..9a011d20f017 100644
--- a/include/standard-headers/asm-x86/kvm_para.h
+++ b/include/standard-headers/asm-x86/kvm_para.h
@@ -92,7 +92,7 @@ struct kvm_clock_pairing {
 #define KVM_ASYNC_PF_DELIVERY_AS_INT		(1 << 3)
 
 /* MSR_KVM_ASYNC_PF_INT */
-#define KVM_ASYNC_PF_VEC_MASK			GENMASK(7, 0)
+#define KVM_ASYNC_PF_VEC_MASK			__GENMASK(7, 0)
 
 /* MSR_KVM_MIGRATION_CONTROL */
 #define KVM_MIGRATION_READY		(1 << 0)
@@ -142,7 +142,6 @@ struct kvm_vcpu_pv_apf_data {
 	uint32_t token;
 
 	uint8_t pad[56];
-	uint32_t enabled;
 };
 
 #define KVM_PV_EOI_BIT 0
diff --git a/include/standard-headers/asm-x86/setup_data.h b/include/standard-headers/asm-x86/setup_data.h
new file mode 100644
index 000000000000..09355f54c55f
--- /dev/null
+++ b/include/standard-headers/asm-x86/setup_data.h
@@ -0,0 +1,83 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _ASM_X86_SETUP_DATA_H
+#define _ASM_X86_SETUP_DATA_H
+
+/* setup_data/setup_indirect types */
+#define SETUP_NONE			0
+#define SETUP_E820_EXT			1
+#define SETUP_DTB			2
+#define SETUP_PCI			3
+#define SETUP_EFI			4
+#define SETUP_APPLE_PROPERTIES		5
+#define SETUP_JAILHOUSE			6
+#define SETUP_CC_BLOB			7
+#define SETUP_IMA			8
+#define SETUP_RNG_SEED			9
+#define SETUP_ENUM_MAX			SETUP_RNG_SEED
+
+#define SETUP_INDIRECT			(1<<31)
+#define SETUP_TYPE_MAX			(SETUP_ENUM_MAX | SETUP_INDIRECT)
+
+#ifndef __ASSEMBLY__
+
+#include "standard-headers/linux/types.h"
+
+/* extensible setup data list node */
+struct setup_data {
+	uint64_t next;
+	uint32_t type;
+	uint32_t len;
+	uint8_t data[];
+};
+
+/* extensible setup indirect data node */
+struct setup_indirect {
+	uint32_t type;
+	uint32_t reserved;  /* Reserved, must be set to zero. */
+	uint64_t len;
+	uint64_t addr;
+};
+
+/*
+ * The E820 memory region entry of the boot protocol ABI:
+ */
+struct boot_e820_entry {
+	uint64_t addr;
+	uint64_t size;
+	uint32_t type;
+} QEMU_PACKED;
+
+/*
+ * The boot loader is passing platform information via this Jailhouse-specific
+ * setup data structure.
+ */
+struct jailhouse_setup_data {
+	struct {
+		uint16_t	version;
+		uint16_t	compatible_version;
+	} QEMU_PACKED hdr;
+	struct {
+		uint16_t	pm_timer_address;
+		uint16_t	num_cpus;
+		uint64_t	pci_mmconfig_base;
+		uint32_t	tsc_khz;
+		uint32_t	apic_khz;
+		uint8_t	standard_ioapic;
+		uint8_t	cpu_ids[255];
+	} QEMU_PACKED v1;
+	struct {
+		uint32_t	flags;
+	} QEMU_PACKED v2;
+} QEMU_PACKED;
+
+/*
+ * IMA buffer setup data information from the previous kernel during kexec
+ */
+struct ima_setup_data {
+	uint64_t addr;
+	uint64_t size;
+} QEMU_PACKED;
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_X86_SETUP_DATA_H */
diff --git a/include/standard-headers/linux/ethtool.h b/include/standard-headers/linux/ethtool.h
index dfb54eff6f7f..01503784d26f 100644
--- a/include/standard-headers/linux/ethtool.h
+++ b/include/standard-headers/linux/ethtool.h
@@ -2023,6 +2023,53 @@ static inline int ethtool_validate_duplex(uint8_t duplex)
 #define	IPV4_FLOW	0x10	/* hash only */
 #define	IPV6_FLOW	0x11	/* hash only */
 #define	ETHER_FLOW	0x12	/* spec only (ether_spec) */
+
+/* Used for GTP-U IPv4 and IPv6.
+ * The format of GTP packets only includes
+ * elements such as TEID and GTP version.
+ * It is primarily intended for data communication of the UE.
+ */
+#define GTPU_V4_FLOW 0x13	/* hash only */
+#define GTPU_V6_FLOW 0x14	/* hash only */
+
+/* Use for GTP-C IPv4 and v6.
+ * The format of these GTP packets does not include TEID.
+ * Primarily expected to be used for communication
+ * to create sessions for UE data communication,
+ * commonly referred to as CSR (Create Session Request).
+ */
+#define GTPC_V4_FLOW 0x15	/* hash only */
+#define GTPC_V6_FLOW 0x16	/* hash only */
+
+/* Use for GTP-C IPv4 and v6.
+ * Unlike GTPC_V4_FLOW, the format of these GTP packets includes TEID.
+ * After session creation, it becomes this packet.
+ * This is mainly used for requests to realize UE handover.
+ */
+#define GTPC_TEID_V4_FLOW 0x17	/* hash only */
+#define GTPC_TEID_V6_FLOW 0x18	/* hash only */
+
+/* Use for GTP-U and extended headers for the PSC (PDU Session Container).
+ * The format of these GTP packets includes TEID and QFI.
+ * In 5G communication using UPF (User Plane Function),
+ * data communication with this extended header is performed.
+ */
+#define GTPU_EH_V4_FLOW 0x19	/* hash only */
+#define GTPU_EH_V6_FLOW 0x1a	/* hash only */
+
+/* Use for GTP-U IPv4 and v6 PSC (PDU Session Container) extended headers.
+ * This differs from GTPU_EH_V(4|6)_FLOW in that it is distinguished by
+ * UL/DL included in the PSC.
+ * There are differences in the data included based on Downlink/Uplink,
+ * and can be used to distinguish packets.
+ * The functions described so far are useful when you want to
+ * handle communication from the mobile network in UPF, PGW, etc.
+ */
+#define GTPU_UL_V4_FLOW 0x1b	/* hash only */
+#define GTPU_UL_V6_FLOW 0x1c	/* hash only */
+#define GTPU_DL_V4_FLOW 0x1d	/* hash only */
+#define GTPU_DL_V6_FLOW 0x1e	/* hash only */
+
 /* Flag to enable additional fields in struct ethtool_rx_flow_spec */
 #define	FLOW_EXT	0x80000000
 #define	FLOW_MAC_EXT	0x40000000
@@ -2037,6 +2084,7 @@ static inline int ethtool_validate_duplex(uint8_t duplex)
 #define	RXH_IP_DST	(1 << 5)
 #define	RXH_L4_B_0_1	(1 << 6) /* src port in case of TCP/UDP/SCTP */
 #define	RXH_L4_B_2_3	(1 << 7) /* dst port in case of TCP/UDP/SCTP */
+#define	RXH_GTP_TEID	(1 << 8) /* teid in case of GTP */
 #define	RXH_DISCARD	(1 << 31)
 
 #define	RX_CLS_FLOW_DISC	0xffffffffffffffffULL
diff --git a/include/standard-headers/linux/fuse.h b/include/standard-headers/linux/fuse.h
index fc0dcd10aede..bac9dbc49f80 100644
--- a/include/standard-headers/linux/fuse.h
+++ b/include/standard-headers/linux/fuse.h
@@ -211,6 +211,12 @@
  *  7.39
  *  - add FUSE_DIRECT_IO_ALLOW_MMAP
  *  - add FUSE_STATX and related structures
+ *
+ *  7.40
+ *  - add max_stack_depth to fuse_init_out, add FUSE_PASSTHROUGH init flag
+ *  - add backing_id to fuse_open_out, add FOPEN_PASSTHROUGH open flag
+ *  - add FUSE_NO_EXPORT_SUPPORT init flag
+ *  - add FUSE_NOTIFY_RESEND, add FUSE_HAS_RESEND init flag
  */
 
 #ifndef _LINUX_FUSE_H
@@ -242,7 +248,7 @@
 #define FUSE_KERNEL_VERSION 7
 
 /** Minor version number of this interface */
-#define FUSE_KERNEL_MINOR_VERSION 39
+#define FUSE_KERNEL_MINOR_VERSION 40
 
 /** The node ID of the root inode */
 #define FUSE_ROOT_ID 1
@@ -349,6 +355,7 @@ struct fuse_file_lock {
  * FOPEN_STREAM: the file is stream-like (no file position at all)
  * FOPEN_NOFLUSH: don't flush data cache on close (unless FUSE_WRITEBACK_CACHE)
  * FOPEN_PARALLEL_DIRECT_WRITES: Allow concurrent direct writes on the same inode
+ * FOPEN_PASSTHROUGH: passthrough read/write io for this open file
  */
 #define FOPEN_DIRECT_IO		(1 << 0)
 #define FOPEN_KEEP_CACHE	(1 << 1)
@@ -357,6 +364,7 @@ struct fuse_file_lock {
 #define FOPEN_STREAM		(1 << 4)
 #define FOPEN_NOFLUSH		(1 << 5)
 #define FOPEN_PARALLEL_DIRECT_WRITES	(1 << 6)
+#define FOPEN_PASSTHROUGH	(1 << 7)
 
 /**
  * INIT request/reply flags
@@ -406,6 +414,9 @@ struct fuse_file_lock {
  *			symlink and mknod (single group that matches parent)
  * FUSE_HAS_EXPIRE_ONLY: kernel supports expiry-only entry invalidation
  * FUSE_DIRECT_IO_ALLOW_MMAP: allow shared mmap in FOPEN_DIRECT_IO mode.
+ * FUSE_NO_EXPORT_SUPPORT: explicitly disable export support
+ * FUSE_HAS_RESEND: kernel supports resending pending requests, and the high bit
+ *		    of the request ID indicates resend requests
  */
 #define FUSE_ASYNC_READ		(1 << 0)
 #define FUSE_POSIX_LOCKS	(1 << 1)
@@ -445,6 +456,9 @@ struct fuse_file_lock {
 #define FUSE_CREATE_SUPP_GROUP	(1ULL << 34)
 #define FUSE_HAS_EXPIRE_ONLY	(1ULL << 35)
 #define FUSE_DIRECT_IO_ALLOW_MMAP (1ULL << 36)
+#define FUSE_PASSTHROUGH	(1ULL << 37)
+#define FUSE_NO_EXPORT_SUPPORT	(1ULL << 38)
+#define FUSE_HAS_RESEND		(1ULL << 39)
 
 /* Obsolete alias for FUSE_DIRECT_IO_ALLOW_MMAP */
 #define FUSE_DIRECT_IO_RELAX	FUSE_DIRECT_IO_ALLOW_MMAP
@@ -631,6 +645,7 @@ enum fuse_notify_code {
 	FUSE_NOTIFY_STORE = 4,
 	FUSE_NOTIFY_RETRIEVE = 5,
 	FUSE_NOTIFY_DELETE = 6,
+	FUSE_NOTIFY_RESEND = 7,
 	FUSE_NOTIFY_CODE_MAX,
 };
 
@@ -757,7 +772,7 @@ struct fuse_create_in {
 struct fuse_open_out {
 	uint64_t	fh;
 	uint32_t	open_flags;
-	uint32_t	padding;
+	int32_t		backing_id;
 };
 
 struct fuse_release_in {
@@ -873,7 +888,8 @@ struct fuse_init_out {
 	uint16_t	max_pages;
 	uint16_t	map_alignment;
 	uint32_t	flags2;
-	uint32_t	unused[7];
+	uint32_t	max_stack_depth;
+	uint32_t	unused[6];
 };
 
 #define CUSE_INIT_INFO_MAX 4096
@@ -956,6 +972,14 @@ struct fuse_fallocate_in {
 	uint32_t	padding;
 };
 
+/**
+ * FUSE request unique ID flag
+ *
+ * Indicates whether this is a resend request. The receiver should handle this
+ * request accordingly.
+ */
+#define FUSE_UNIQUE_RESEND (1ULL << 63)
+
 struct fuse_in_header {
 	uint32_t	len;
 	uint32_t	opcode;
@@ -1045,9 +1069,18 @@ struct fuse_notify_retrieve_in {
 	uint64_t	dummy4;
 };
 
+struct fuse_backing_map {
+	int32_t		fd;
+	uint32_t	flags;
+	uint64_t	padding;
+};
+
 /* Device ioctls: */
 #define FUSE_DEV_IOC_MAGIC		229
 #define FUSE_DEV_IOC_CLONE		_IOR(FUSE_DEV_IOC_MAGIC, 0, uint32_t)
+#define FUSE_DEV_IOC_BACKING_OPEN	_IOW(FUSE_DEV_IOC_MAGIC, 1, \
+					     struct fuse_backing_map)
+#define FUSE_DEV_IOC_BACKING_CLOSE	_IOW(FUSE_DEV_IOC_MAGIC, 2, uint32_t)
 
 struct fuse_lseek_in {
 	uint64_t	fh;
diff --git a/include/standard-headers/linux/input-event-codes.h b/include/standard-headers/linux/input-event-codes.h
index f6bab08540d8..2221b0c38348 100644
--- a/include/standard-headers/linux/input-event-codes.h
+++ b/include/standard-headers/linux/input-event-codes.h
@@ -602,6 +602,7 @@
 
 #define KEY_ALS_TOGGLE		0x230	/* Ambient light sensor */
 #define KEY_ROTATE_LOCK_TOGGLE	0x231	/* Display rotation lock */
+#define KEY_REFRESH_RATE_TOGGLE	0x232	/* Display refresh rate toggle */
 
 #define KEY_BUTTONCONFIG		0x240	/* AL Button Configuration */
 #define KEY_TASKMANAGER		0x241	/* AL Task/Project Manager */
diff --git a/include/standard-headers/linux/virtio_gpu.h b/include/standard-headers/linux/virtio_gpu.h
index 2da48d3d4c2c..2db643ed8fbf 100644
--- a/include/standard-headers/linux/virtio_gpu.h
+++ b/include/standard-headers/linux/virtio_gpu.h
@@ -309,6 +309,8 @@ struct virtio_gpu_cmd_submit {
 
 #define VIRTIO_GPU_CAPSET_VIRGL 1
 #define VIRTIO_GPU_CAPSET_VIRGL2 2
+/* 3 is reserved for gfxstream */
+#define VIRTIO_GPU_CAPSET_VENUS 4
 
 /* VIRTIO_GPU_CMD_GET_CAPSET_INFO */
 struct virtio_gpu_get_capset_info {
diff --git a/include/standard-headers/linux/virtio_pci.h b/include/standard-headers/linux/virtio_pci.h
index 3e2bc2c97e6e..4010216103e5 100644
--- a/include/standard-headers/linux/virtio_pci.h
+++ b/include/standard-headers/linux/virtio_pci.h
@@ -240,7 +240,7 @@ struct virtio_pci_cfg_cap {
 #define VIRTIO_ADMIN_CMD_LEGACY_DEV_CFG_READ		0x5
 #define VIRTIO_ADMIN_CMD_LEGACY_NOTIFY_INFO		0x6
 
-struct QEMU_PACKED virtio_admin_cmd_hdr {
+struct virtio_admin_cmd_hdr {
 	uint16_t opcode;
 	/*
 	 * 1 - SR-IOV
@@ -252,20 +252,20 @@ struct QEMU_PACKED virtio_admin_cmd_hdr {
 	uint64_t group_member_id;
 };
 
-struct QEMU_PACKED virtio_admin_cmd_status {
+struct virtio_admin_cmd_status {
 	uint16_t status;
 	uint16_t status_qualifier;
 	/* Unused, reserved for future extensions. */
 	uint8_t reserved2[4];
 };
 
-struct QEMU_PACKED virtio_admin_cmd_legacy_wr_data {
+struct virtio_admin_cmd_legacy_wr_data {
 	uint8_t offset; /* Starting offset of the register(s) to write. */
 	uint8_t reserved[7];
 	uint8_t registers[];
 };
 
-struct QEMU_PACKED virtio_admin_cmd_legacy_rd_data {
+struct virtio_admin_cmd_legacy_rd_data {
 	uint8_t offset; /* Starting offset of the register(s) to read. */
 };
 
@@ -275,7 +275,7 @@ struct QEMU_PACKED virtio_admin_cmd_legacy_rd_data {
 
 #define VIRTIO_ADMIN_CMD_MAX_NOTIFY_INFO 4
 
-struct QEMU_PACKED virtio_admin_cmd_notify_info_data {
+struct virtio_admin_cmd_notify_info_data {
 	uint8_t flags; /* 0 = end of list, 1 = owner device, 2 = member device */
 	uint8_t bar; /* BAR of the member or the owner device */
 	uint8_t padding[6];
diff --git a/include/standard-headers/linux/virtio_snd.h b/include/standard-headers/linux/virtio_snd.h
index 1af96b9fc61a..860f12e0a4e1 100644
--- a/include/standard-headers/linux/virtio_snd.h
+++ b/include/standard-headers/linux/virtio_snd.h
@@ -7,6 +7,14 @@
 
 #include "standard-headers/linux/virtio_types.h"
 
+/*******************************************************************************
+ * FEATURE BITS
+ */
+enum {
+	/* device supports control elements */
+	VIRTIO_SND_F_CTLS = 0
+};
+
 /*******************************************************************************
  * CONFIGURATION SPACE
  */
@@ -17,6 +25,8 @@ struct virtio_snd_config {
 	uint32_t streams;
 	/* # of available channel maps */
 	uint32_t chmaps;
+	/* # of available control elements */
+	uint32_t controls;
 };
 
 enum {
@@ -55,6 +65,15 @@ enum {
 	/* channel map control request types */
 	VIRTIO_SND_R_CHMAP_INFO = 0x0200,
 
+	/* control element request types */
+	VIRTIO_SND_R_CTL_INFO = 0x0300,
+	VIRTIO_SND_R_CTL_ENUM_ITEMS,
+	VIRTIO_SND_R_CTL_READ,
+	VIRTIO_SND_R_CTL_WRITE,
+	VIRTIO_SND_R_CTL_TLV_READ,
+	VIRTIO_SND_R_CTL_TLV_WRITE,
+	VIRTIO_SND_R_CTL_TLV_COMMAND,
+
 	/* jack event types */
 	VIRTIO_SND_EVT_JACK_CONNECTED = 0x1000,
 	VIRTIO_SND_EVT_JACK_DISCONNECTED,
@@ -63,6 +82,9 @@ enum {
 	VIRTIO_SND_EVT_PCM_PERIOD_ELAPSED = 0x1100,
 	VIRTIO_SND_EVT_PCM_XRUN,
 
+	/* control element event types */
+	VIRTIO_SND_EVT_CTL_NOTIFY = 0x1200,
+
 	/* common status codes */
 	VIRTIO_SND_S_OK = 0x8000,
 	VIRTIO_SND_S_BAD_MSG,
@@ -331,4 +353,136 @@ struct virtio_snd_chmap_info {
 	uint8_t positions[VIRTIO_SND_CHMAP_MAX_SIZE];
 };
 
+/*******************************************************************************
+ * CONTROL ELEMENTS MESSAGES
+ */
+struct virtio_snd_ctl_hdr {
+	/* VIRTIO_SND_R_CTL_XXX */
+	struct virtio_snd_hdr hdr;
+	/* 0 ... virtio_snd_config::controls - 1 */
+	uint32_t control_id;
+};
+
+/* supported roles for control elements */
+enum {
+	VIRTIO_SND_CTL_ROLE_UNDEFINED = 0,
+	VIRTIO_SND_CTL_ROLE_VOLUME,
+	VIRTIO_SND_CTL_ROLE_MUTE,
+	VIRTIO_SND_CTL_ROLE_GAIN
+};
+
+/* supported value types for control elements */
+enum {
+	VIRTIO_SND_CTL_TYPE_BOOLEAN = 0,
+	VIRTIO_SND_CTL_TYPE_INTEGER,
+	VIRTIO_SND_CTL_TYPE_INTEGER64,
+	VIRTIO_SND_CTL_TYPE_ENUMERATED,
+	VIRTIO_SND_CTL_TYPE_BYTES,
+	VIRTIO_SND_CTL_TYPE_IEC958
+};
+
+/* supported access rights for control elements */
+enum {
+	VIRTIO_SND_CTL_ACCESS_READ = 0,
+	VIRTIO_SND_CTL_ACCESS_WRITE,
+	VIRTIO_SND_CTL_ACCESS_VOLATILE,
+	VIRTIO_SND_CTL_ACCESS_INACTIVE,
+	VIRTIO_SND_CTL_ACCESS_TLV_READ,
+	VIRTIO_SND_CTL_ACCESS_TLV_WRITE,
+	VIRTIO_SND_CTL_ACCESS_TLV_COMMAND
+};
+
+struct virtio_snd_ctl_info {
+	/* common header */
+	struct virtio_snd_info hdr;
+	/* element role (VIRTIO_SND_CTL_ROLE_XXX) */
+	uint32_t role;
+	/* element value type (VIRTIO_SND_CTL_TYPE_XXX) */
+	uint32_t type;
+	/* element access right bit map (1 << VIRTIO_SND_CTL_ACCESS_XXX) */
+	uint32_t access;
+	/* # of members in the element value */
+	uint32_t count;
+	/* index for an element with a non-unique name */
+	uint32_t index;
+	/* name identifier string for the element */
+	uint8_t name[44];
+	/* additional information about the element's value */
+	union {
+		/* VIRTIO_SND_CTL_TYPE_INTEGER */
+		struct {
+			/* minimum supported value */
+			uint32_t min;
+			/* maximum supported value */
+			uint32_t max;
+			/* fixed step size for value (0 = variable size) */
+			uint32_t step;
+		} integer;
+		/* VIRTIO_SND_CTL_TYPE_INTEGER64 */
+		struct {
+			/* minimum supported value */
+			uint64_t min;
+			/* maximum supported value */
+			uint64_t max;
+			/* fixed step size for value (0 = variable size) */
+			uint64_t step;
+		} integer64;
+		/* VIRTIO_SND_CTL_TYPE_ENUMERATED */
+		struct {
+			/* # of options supported for value */
+			uint32_t items;
+		} enumerated;
+	} value;
+};
+
+struct virtio_snd_ctl_enum_item {
+	/* option name */
+	uint8_t item[64];
+};
+
+struct virtio_snd_ctl_iec958 {
+	/* AES/IEC958 channel status bits */
+	uint8_t status[24];
+	/* AES/IEC958 subcode bits */
+	uint8_t subcode[147];
+	/* nothing */
+	uint8_t pad;
+	/* AES/IEC958 subframe bits */
+	uint8_t dig_subframe[4];
+};
+
+struct virtio_snd_ctl_value {
+	union {
+		/* VIRTIO_SND_CTL_TYPE_BOOLEAN|INTEGER value */
+		uint32_t integer[128];
+		/* VIRTIO_SND_CTL_TYPE_INTEGER64 value */
+		uint64_t integer64[64];
+		/* VIRTIO_SND_CTL_TYPE_ENUMERATED value (option indexes) */
+		uint32_t enumerated[128];
+		/* VIRTIO_SND_CTL_TYPE_BYTES value */
+		uint8_t bytes[512];
+		/* VIRTIO_SND_CTL_TYPE_IEC958 value */
+		struct virtio_snd_ctl_iec958 iec958;
+	} value;
+};
+
+/* supported event reason types */
+enum {
+	/* element's value has changed */
+	VIRTIO_SND_CTL_EVT_MASK_VALUE = 0,
+	/* element's information has changed */
+	VIRTIO_SND_CTL_EVT_MASK_INFO,
+	/* element's metadata has changed */
+	VIRTIO_SND_CTL_EVT_MASK_TLV
+};
+
+struct virtio_snd_ctl_event {
+	/* VIRTIO_SND_EVT_CTL_NOTIFY */
+	struct virtio_snd_hdr hdr;
+	/* 0 ... virtio_snd_config::controls - 1 */
+	uint16_t control_id;
+	/* event reason bit map (1 << VIRTIO_SND_CTL_EVT_MASK_XXX) */
+	uint16_t mask;
+};
+
 #endif /* VIRTIO_SND_IF_H */
diff --git a/linux-headers/asm-arm64/kvm.h b/linux-headers/asm-arm64/kvm.h
index c59ea55cd8eb..2af9931ae989 100644
--- a/linux-headers/asm-arm64/kvm.h
+++ b/linux-headers/asm-arm64/kvm.h
@@ -37,9 +37,7 @@
 #include <asm/ptrace.h>
 #include <asm/sve_context.h>
 
-#define __KVM_HAVE_GUEST_DEBUG
 #define __KVM_HAVE_IRQ_LINE
-#define __KVM_HAVE_READONLY_MEM
 #define __KVM_HAVE_VCPU_EVENTS
 
 #define KVM_COALESCED_MMIO_PAGE_OFFSET 1
@@ -76,11 +74,11 @@ struct kvm_regs {
 
 /* KVM_ARM_SET_DEVICE_ADDR ioctl id encoding */
 #define KVM_ARM_DEVICE_TYPE_SHIFT	0
-#define KVM_ARM_DEVICE_TYPE_MASK	GENMASK(KVM_ARM_DEVICE_TYPE_SHIFT + 15, \
-						KVM_ARM_DEVICE_TYPE_SHIFT)
+#define KVM_ARM_DEVICE_TYPE_MASK	__GENMASK(KVM_ARM_DEVICE_TYPE_SHIFT + 15, \
+						  KVM_ARM_DEVICE_TYPE_SHIFT)
 #define KVM_ARM_DEVICE_ID_SHIFT		16
-#define KVM_ARM_DEVICE_ID_MASK		GENMASK(KVM_ARM_DEVICE_ID_SHIFT + 15, \
-						KVM_ARM_DEVICE_ID_SHIFT)
+#define KVM_ARM_DEVICE_ID_MASK		__GENMASK(KVM_ARM_DEVICE_ID_SHIFT + 15, \
+						  KVM_ARM_DEVICE_ID_SHIFT)
 
 /* Supported device IDs */
 #define KVM_ARM_DEVICE_VGIC_V2		0
@@ -162,6 +160,11 @@ struct kvm_sync_regs {
 	__u64 device_irq_level;
 };
 
+/* Bits for run->s.regs.device_irq_level */
+#define KVM_ARM_DEV_EL1_VTIMER		(1 << 0)
+#define KVM_ARM_DEV_EL1_PTIMER		(1 << 1)
+#define KVM_ARM_DEV_PMU			(1 << 2)
+
 /*
  * PMU filter structure. Describe a range of events with a particular
  * action. To be used with KVM_ARM_VCPU_PMU_V3_FILTER.
diff --git a/linux-headers/asm-arm64/sve_context.h b/linux-headers/asm-arm64/sve_context.h
index 1d0e3e1d0950..d1b1ec8cb1f1 100644
--- a/linux-headers/asm-arm64/sve_context.h
+++ b/linux-headers/asm-arm64/sve_context.h
@@ -13,6 +13,17 @@
 
 #define __SVE_VQ_BYTES		16	/* number of bytes per quadword */
 
+/*
+ * Yes, __SVE_VQ_MAX is 512 QUADWORDS.
+ *
+ * To help ensure forward portability, this is much larger than the
+ * current maximum value defined by the SVE architecture.  While arrays
+ * or static allocations can be sized based on this value, watch out!
+ * It will waste a surprisingly large amount of memory.
+ *
+ * Dynamic sizing based on the actual runtime vector length is likely to
+ * be preferable for most purposes.
+ */
 #define __SVE_VQ_MIN		1
 #define __SVE_VQ_MAX		512
 
diff --git a/linux-headers/asm-generic/bitsperlong.h b/linux-headers/asm-generic/bitsperlong.h
index 75f320fa91e5..1fb4f0c9f278 100644
--- a/linux-headers/asm-generic/bitsperlong.h
+++ b/linux-headers/asm-generic/bitsperlong.h
@@ -24,4 +24,8 @@
 #endif
 #endif
 
+#ifndef __BITS_PER_LONG_LONG
+#define __BITS_PER_LONG_LONG 64
+#endif
+
 #endif /* __ASM_GENERIC_BITS_PER_LONG */
diff --git a/linux-headers/asm-loongarch/kvm.h b/linux-headers/asm-loongarch/kvm.h
index 923d0bd38294..109785922cf9 100644
--- a/linux-headers/asm-loongarch/kvm.h
+++ b/linux-headers/asm-loongarch/kvm.h
@@ -14,8 +14,6 @@
  * Some parts derived from the x86 version of this file.
  */
 
-#define __KVM_HAVE_READONLY_MEM
-
 #define KVM_COALESCED_MMIO_PAGE_OFFSET	1
 #define KVM_DIRTY_LOG_PAGE_OFFSET	64
 
diff --git a/linux-headers/asm-mips/kvm.h b/linux-headers/asm-mips/kvm.h
index edcf717c4327..9673dc9cb315 100644
--- a/linux-headers/asm-mips/kvm.h
+++ b/linux-headers/asm-mips/kvm.h
@@ -20,8 +20,6 @@
  * Some parts derived from the x86 version of this file.
  */
 
-#define __KVM_HAVE_READONLY_MEM
-
 #define KVM_COALESCED_MMIO_PAGE_OFFSET 1
 
 /*
diff --git a/linux-headers/asm-powerpc/kvm.h b/linux-headers/asm-powerpc/kvm.h
index 9f18fa090f1f..1691297a766a 100644
--- a/linux-headers/asm-powerpc/kvm.h
+++ b/linux-headers/asm-powerpc/kvm.h
@@ -28,7 +28,6 @@
 #define __KVM_HAVE_PPC_SMT
 #define __KVM_HAVE_IRQCHIP
 #define __KVM_HAVE_IRQ_LINE
-#define __KVM_HAVE_GUEST_DEBUG
 
 /* Not always available, but if it is, this is the correct offset.  */
 #define KVM_COALESCED_MMIO_PAGE_OFFSET 1
@@ -733,4 +732,48 @@ struct kvm_ppc_xive_eq {
 #define KVM_XIVE_TIMA_PAGE_OFFSET	0
 #define KVM_XIVE_ESB_PAGE_OFFSET	4
 
+/* for KVM_PPC_GET_PVINFO */
+
+#define KVM_PPC_PVINFO_FLAGS_EV_IDLE   (1<<0)
+
+struct kvm_ppc_pvinfo {
+	/* out */
+	__u32 flags;
+	__u32 hcall[4];
+	__u8  pad[108];
+};
+
+/* for KVM_PPC_GET_SMMU_INFO */
+#define KVM_PPC_PAGE_SIZES_MAX_SZ	8
+
+struct kvm_ppc_one_page_size {
+	__u32 page_shift;	/* Page shift (or 0) */
+	__u32 pte_enc;		/* Encoding in the HPTE (>>12) */
+};
+
+struct kvm_ppc_one_seg_page_size {
+	__u32 page_shift;	/* Base page shift of segment (or 0) */
+	__u32 slb_enc;		/* SLB encoding for BookS */
+	struct kvm_ppc_one_page_size enc[KVM_PPC_PAGE_SIZES_MAX_SZ];
+};
+
+#define KVM_PPC_PAGE_SIZES_REAL		0x00000001
+#define KVM_PPC_1T_SEGMENTS		0x00000002
+#define KVM_PPC_NO_HASH			0x00000004
+
+struct kvm_ppc_smmu_info {
+	__u64 flags;
+	__u32 slb_size;
+	__u16 data_keys;	/* # storage keys supported for data */
+	__u16 instr_keys;	/* # storage keys supported for instructions */
+	struct kvm_ppc_one_seg_page_size sps[KVM_PPC_PAGE_SIZES_MAX_SZ];
+};
+
+/* for KVM_PPC_RESIZE_HPT_{PREPARE,COMMIT} */
+struct kvm_ppc_resize_hpt {
+	__u64 flags;
+	__u32 shift;
+	__u32 pad;
+};
+
 #endif /* __LINUX_KVM_POWERPC_H */
diff --git a/linux-headers/asm-riscv/kvm.h b/linux-headers/asm-riscv/kvm.h
index 7499e88a947c..b1c503c2959c 100644
--- a/linux-headers/asm-riscv/kvm.h
+++ b/linux-headers/asm-riscv/kvm.h
@@ -16,7 +16,6 @@
 #include <asm/ptrace.h>
 
 #define __KVM_HAVE_IRQ_LINE
-#define __KVM_HAVE_READONLY_MEM
 
 #define KVM_COALESCED_MMIO_PAGE_OFFSET 1
 
@@ -166,6 +165,8 @@ enum KVM_RISCV_ISA_EXT_ID {
 	KVM_RISCV_ISA_EXT_ZVFH,
 	KVM_RISCV_ISA_EXT_ZVFHMIN,
 	KVM_RISCV_ISA_EXT_ZFA,
+	KVM_RISCV_ISA_EXT_ZTSO,
+	KVM_RISCV_ISA_EXT_ZACAS,
 	KVM_RISCV_ISA_EXT_MAX,
 };
 
diff --git a/linux-headers/asm-s390/kvm.h b/linux-headers/asm-s390/kvm.h
index 023a2763a97c..684c4e1205d6 100644
--- a/linux-headers/asm-s390/kvm.h
+++ b/linux-headers/asm-s390/kvm.h
@@ -12,7 +12,320 @@
 #include <linux/types.h>
 
 #define __KVM_S390
-#define __KVM_HAVE_GUEST_DEBUG
+
+struct kvm_s390_skeys {
+	__u64 start_gfn;
+	__u64 count;
+	__u64 skeydata_addr;
+	__u32 flags;
+	__u32 reserved[9];
+};
+
+#define KVM_S390_CMMA_PEEK (1 << 0)
+
+/**
+ * kvm_s390_cmma_log - Used for CMMA migration.
+ *
+ * Used both for input and output.
+ *
+ * @start_gfn: Guest page number to start from.
+ * @count: Size of the result buffer.
+ * @flags: Control operation mode via KVM_S390_CMMA_* flags
+ * @remaining: Used with KVM_S390_GET_CMMA_BITS. Indicates how many dirty
+ *             pages are still remaining.
+ * @mask: Used with KVM_S390_SET_CMMA_BITS. Bitmap of bits to actually set
+ *        in the PGSTE.
+ * @values: Pointer to the values buffer.
+ *
+ * Used in KVM_S390_{G,S}ET_CMMA_BITS ioctls.
+ */
+struct kvm_s390_cmma_log {
+	__u64 start_gfn;
+	__u32 count;
+	__u32 flags;
+	union {
+		__u64 remaining;
+		__u64 mask;
+	};
+	__u64 values;
+};
+
+#define KVM_S390_RESET_POR       1
+#define KVM_S390_RESET_CLEAR     2
+#define KVM_S390_RESET_SUBSYSTEM 4
+#define KVM_S390_RESET_CPU_INIT  8
+#define KVM_S390_RESET_IPL       16
+
+/* for KVM_S390_MEM_OP */
+struct kvm_s390_mem_op {
+	/* in */
+	__u64 gaddr;		/* the guest address */
+	__u64 flags;		/* flags */
+	__u32 size;		/* amount of bytes */
+	__u32 op;		/* type of operation */
+	__u64 buf;		/* buffer in userspace */
+	union {
+		struct {
+			__u8 ar;	/* the access register number */
+			__u8 key;	/* access key, ignored if flag unset */
+			__u8 pad1[6];	/* ignored */
+			__u64 old_addr;	/* ignored if cmpxchg flag unset */
+		};
+		__u32 sida_offset; /* offset into the sida */
+		__u8 reserved[32]; /* ignored */
+	};
+};
+/* types for kvm_s390_mem_op->op */
+#define KVM_S390_MEMOP_LOGICAL_READ	0
+#define KVM_S390_MEMOP_LOGICAL_WRITE	1
+#define KVM_S390_MEMOP_SIDA_READ	2
+#define KVM_S390_MEMOP_SIDA_WRITE	3
+#define KVM_S390_MEMOP_ABSOLUTE_READ	4
+#define KVM_S390_MEMOP_ABSOLUTE_WRITE	5
+#define KVM_S390_MEMOP_ABSOLUTE_CMPXCHG	6
+
+/* flags for kvm_s390_mem_op->flags */
+#define KVM_S390_MEMOP_F_CHECK_ONLY		(1ULL << 0)
+#define KVM_S390_MEMOP_F_INJECT_EXCEPTION	(1ULL << 1)
+#define KVM_S390_MEMOP_F_SKEY_PROTECTION	(1ULL << 2)
+
+/* flags specifying extension support via KVM_CAP_S390_MEM_OP_EXTENSION */
+#define KVM_S390_MEMOP_EXTENSION_CAP_BASE	(1 << 0)
+#define KVM_S390_MEMOP_EXTENSION_CAP_CMPXCHG	(1 << 1)
+
+struct kvm_s390_psw {
+	__u64 mask;
+	__u64 addr;
+};
+
+/* valid values for type in kvm_s390_interrupt */
+#define KVM_S390_SIGP_STOP		0xfffe0000u
+#define KVM_S390_PROGRAM_INT		0xfffe0001u
+#define KVM_S390_SIGP_SET_PREFIX	0xfffe0002u
+#define KVM_S390_RESTART		0xfffe0003u
+#define KVM_S390_INT_PFAULT_INIT	0xfffe0004u
+#define KVM_S390_INT_PFAULT_DONE	0xfffe0005u
+#define KVM_S390_MCHK			0xfffe1000u
+#define KVM_S390_INT_CLOCK_COMP		0xffff1004u
+#define KVM_S390_INT_CPU_TIMER		0xffff1005u
+#define KVM_S390_INT_VIRTIO		0xffff2603u
+#define KVM_S390_INT_SERVICE		0xffff2401u
+#define KVM_S390_INT_EMERGENCY		0xffff1201u
+#define KVM_S390_INT_EXTERNAL_CALL	0xffff1202u
+/* Anything below 0xfffe0000u is taken by INT_IO */
+#define KVM_S390_INT_IO(ai,cssid,ssid,schid)   \
+	(((schid)) |			       \
+	 ((ssid) << 16) |		       \
+	 ((cssid) << 18) |		       \
+	 ((ai) << 26))
+#define KVM_S390_INT_IO_MIN		0x00000000u
+#define KVM_S390_INT_IO_MAX		0xfffdffffu
+#define KVM_S390_INT_IO_AI_MASK		0x04000000u
+
+
+struct kvm_s390_interrupt {
+	__u32 type;
+	__u32 parm;
+	__u64 parm64;
+};
+
+struct kvm_s390_io_info {
+	__u16 subchannel_id;
+	__u16 subchannel_nr;
+	__u32 io_int_parm;
+	__u32 io_int_word;
+};
+
+struct kvm_s390_ext_info {
+	__u32 ext_params;
+	__u32 pad;
+	__u64 ext_params2;
+};
+
+struct kvm_s390_pgm_info {
+	__u64 trans_exc_code;
+	__u64 mon_code;
+	__u64 per_address;
+	__u32 data_exc_code;
+	__u16 code;
+	__u16 mon_class_nr;
+	__u8 per_code;
+	__u8 per_atmid;
+	__u8 exc_access_id;
+	__u8 per_access_id;
+	__u8 op_access_id;
+#define KVM_S390_PGM_FLAGS_ILC_VALID	0x01
+#define KVM_S390_PGM_FLAGS_ILC_0	0x02
+#define KVM_S390_PGM_FLAGS_ILC_1	0x04
+#define KVM_S390_PGM_FLAGS_ILC_MASK	0x06
+#define KVM_S390_PGM_FLAGS_NO_REWIND	0x08
+	__u8 flags;
+	__u8 pad[2];
+};
+
+struct kvm_s390_prefix_info {
+	__u32 address;
+};
+
+struct kvm_s390_extcall_info {
+	__u16 code;
+};
+
+struct kvm_s390_emerg_info {
+	__u16 code;
+};
+
+#define KVM_S390_STOP_FLAG_STORE_STATUS	0x01
+struct kvm_s390_stop_info {
+	__u32 flags;
+};
+
+struct kvm_s390_mchk_info {
+	__u64 cr14;
+	__u64 mcic;
+	__u64 failing_storage_address;
+	__u32 ext_damage_code;
+	__u32 pad;
+	__u8 fixed_logout[16];
+};
+
+struct kvm_s390_irq {
+	__u64 type;
+	union {
+		struct kvm_s390_io_info io;
+		struct kvm_s390_ext_info ext;
+		struct kvm_s390_pgm_info pgm;
+		struct kvm_s390_emerg_info emerg;
+		struct kvm_s390_extcall_info extcall;
+		struct kvm_s390_prefix_info prefix;
+		struct kvm_s390_stop_info stop;
+		struct kvm_s390_mchk_info mchk;
+		char reserved[64];
+	} u;
+};
+
+struct kvm_s390_irq_state {
+	__u64 buf;
+	__u32 flags;        /* will stay unused for compatibility reasons */
+	__u32 len;
+	__u32 reserved[4];  /* will stay unused for compatibility reasons */
+};
+
+struct kvm_s390_ucas_mapping {
+	__u64 user_addr;
+	__u64 vcpu_addr;
+	__u64 length;
+};
+
+struct kvm_s390_pv_sec_parm {
+	__u64 origin;
+	__u64 length;
+};
+
+struct kvm_s390_pv_unp {
+	__u64 addr;
+	__u64 size;
+	__u64 tweak;
+};
+
+enum pv_cmd_dmp_id {
+	KVM_PV_DUMP_INIT,
+	KVM_PV_DUMP_CONFIG_STOR_STATE,
+	KVM_PV_DUMP_COMPLETE,
+	KVM_PV_DUMP_CPU,
+};
+
+struct kvm_s390_pv_dmp {
+	__u64 subcmd;
+	__u64 buff_addr;
+	__u64 buff_len;
+	__u64 gaddr;		/* For dump storage state */
+	__u64 reserved[4];
+};
+
+enum pv_cmd_info_id {
+	KVM_PV_INFO_VM,
+	KVM_PV_INFO_DUMP,
+};
+
+struct kvm_s390_pv_info_dump {
+	__u64 dump_cpu_buffer_len;
+	__u64 dump_config_mem_buffer_per_1m;
+	__u64 dump_config_finalize_len;
+};
+
+struct kvm_s390_pv_info_vm {
+	__u64 inst_calls_list[4];
+	__u64 max_cpus;
+	__u64 max_guests;
+	__u64 max_guest_addr;
+	__u64 feature_indication;
+};
+
+struct kvm_s390_pv_info_header {
+	__u32 id;
+	__u32 len_max;
+	__u32 len_written;
+	__u32 reserved;
+};
+
+struct kvm_s390_pv_info {
+	struct kvm_s390_pv_info_header header;
+	union {
+		struct kvm_s390_pv_info_dump dump;
+		struct kvm_s390_pv_info_vm vm;
+	};
+};
+
+enum pv_cmd_id {
+	KVM_PV_ENABLE,
+	KVM_PV_DISABLE,
+	KVM_PV_SET_SEC_PARMS,
+	KVM_PV_UNPACK,
+	KVM_PV_VERIFY,
+	KVM_PV_PREP_RESET,
+	KVM_PV_UNSHARE_ALL,
+	KVM_PV_INFO,
+	KVM_PV_DUMP,
+	KVM_PV_ASYNC_CLEANUP_PREPARE,
+	KVM_PV_ASYNC_CLEANUP_PERFORM,
+};
+
+struct kvm_pv_cmd {
+	__u32 cmd;	/* Command to be executed */
+	__u16 rc;	/* Ultravisor return code */
+	__u16 rrc;	/* Ultravisor return reason code */
+	__u64 data;	/* Data or address */
+	__u32 flags;    /* flags for future extensions. Must be 0 for now */
+	__u32 reserved[3];
+};
+
+struct kvm_s390_zpci_op {
+	/* in */
+	__u32 fh;               /* target device */
+	__u8  op;               /* operation to perform */
+	__u8  pad[3];
+	union {
+		/* for KVM_S390_ZPCIOP_REG_AEN */
+		struct {
+			__u64 ibv;      /* Guest addr of interrupt bit vector */
+			__u64 sb;       /* Guest addr of summary bit */
+			__u32 flags;
+			__u32 noi;      /* Number of interrupts */
+			__u8 isc;       /* Guest interrupt subclass */
+			__u8 sbo;       /* Offset of guest summary bit vector */
+			__u16 pad;
+		} reg_aen;
+		__u64 reserved[8];
+	} u;
+};
+
+/* types for kvm_s390_zpci_op->op */
+#define KVM_S390_ZPCIOP_REG_AEN                0
+#define KVM_S390_ZPCIOP_DEREG_AEN      1
+
+/* flags for kvm_s390_zpci_op->u.reg_aen.flags */
+#define KVM_S390_ZPCIOP_REGAEN_HOST    (1 << 0)
 
 /* Device control API: s390-specific devices */
 #define KVM_DEV_FLIC_GET_ALL_IRQS	1
diff --git a/linux-headers/asm-x86/kvm.h b/linux-headers/asm-x86/kvm.h
index 003fb745347c..76b3550a0434 100644
--- a/linux-headers/asm-x86/kvm.h
+++ b/linux-headers/asm-x86/kvm.h
@@ -7,6 +7,8 @@
  *
  */
 
+#include <linux/const.h>
+#include <linux/bits.h>
 #include <linux/types.h>
 #include <linux/ioctl.h>
 #include <linux/stddef.h>
@@ -40,7 +42,6 @@
 #define __KVM_HAVE_IRQ_LINE
 #define __KVM_HAVE_MSI
 #define __KVM_HAVE_USER_NMI
-#define __KVM_HAVE_GUEST_DEBUG
 #define __KVM_HAVE_MSIX
 #define __KVM_HAVE_MCE
 #define __KVM_HAVE_PIT_STATE2
@@ -49,7 +50,6 @@
 #define __KVM_HAVE_DEBUGREGS
 #define __KVM_HAVE_XSAVE
 #define __KVM_HAVE_XCRS
-#define __KVM_HAVE_READONLY_MEM
 
 /* Architectural interrupt line count. */
 #define KVM_NR_INTERRUPTS 256
@@ -524,9 +524,301 @@ struct kvm_pmu_event_filter {
 #define KVM_PMU_EVENT_ALLOW 0
 #define KVM_PMU_EVENT_DENY 1
 
-#define KVM_PMU_EVENT_FLAG_MASKED_EVENTS BIT(0)
+#define KVM_PMU_EVENT_FLAG_MASKED_EVENTS _BITUL(0)
 #define KVM_PMU_EVENT_FLAGS_VALID_MASK (KVM_PMU_EVENT_FLAG_MASKED_EVENTS)
 
+/* for KVM_CAP_MCE */
+struct kvm_x86_mce {
+	__u64 status;
+	__u64 addr;
+	__u64 misc;
+	__u64 mcg_status;
+	__u8 bank;
+	__u8 pad1[7];
+	__u64 pad2[3];
+};
+
+/* for KVM_CAP_XEN_HVM */
+#define KVM_XEN_HVM_CONFIG_HYPERCALL_MSR	(1 << 0)
+#define KVM_XEN_HVM_CONFIG_INTERCEPT_HCALL	(1 << 1)
+#define KVM_XEN_HVM_CONFIG_SHARED_INFO		(1 << 2)
+#define KVM_XEN_HVM_CONFIG_RUNSTATE		(1 << 3)
+#define KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL	(1 << 4)
+#define KVM_XEN_HVM_CONFIG_EVTCHN_SEND		(1 << 5)
+#define KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG	(1 << 6)
+#define KVM_XEN_HVM_CONFIG_PVCLOCK_TSC_UNSTABLE	(1 << 7)
+#define KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA	(1 << 8)
+
+struct kvm_xen_hvm_config {
+	__u32 flags;
+	__u32 msr;
+	__u64 blob_addr_32;
+	__u64 blob_addr_64;
+	__u8 blob_size_32;
+	__u8 blob_size_64;
+	__u8 pad2[30];
+};
+
+struct kvm_xen_hvm_attr {
+	__u16 type;
+	__u16 pad[3];
+	union {
+		__u8 long_mode;
+		__u8 vector;
+		__u8 runstate_update_flag;
+		union {
+			__u64 gfn;
+#define KVM_XEN_INVALID_GFN ((__u64)-1)
+			__u64 hva;
+		} shared_info;
+		struct {
+			__u32 send_port;
+			__u32 type; /* EVTCHNSTAT_ipi / EVTCHNSTAT_interdomain */
+			__u32 flags;
+#define KVM_XEN_EVTCHN_DEASSIGN		(1 << 0)
+#define KVM_XEN_EVTCHN_UPDATE		(1 << 1)
+#define KVM_XEN_EVTCHN_RESET		(1 << 2)
+			/*
+			 * Events sent by the guest are either looped back to
+			 * the guest itself (potentially on a different port#)
+			 * or signalled via an eventfd.
+			 */
+			union {
+				struct {
+					__u32 port;
+					__u32 vcpu;
+					__u32 priority;
+				} port;
+				struct {
+					__u32 port; /* Zero for eventfd */
+					__s32 fd;
+				} eventfd;
+				__u32 padding[4];
+			} deliver;
+		} evtchn;
+		__u32 xen_version;
+		__u64 pad[8];
+	} u;
+};
+
+
+/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_SHARED_INFO */
+#define KVM_XEN_ATTR_TYPE_LONG_MODE		0x0
+#define KVM_XEN_ATTR_TYPE_SHARED_INFO		0x1
+#define KVM_XEN_ATTR_TYPE_UPCALL_VECTOR		0x2
+/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_EVTCHN_SEND */
+#define KVM_XEN_ATTR_TYPE_EVTCHN		0x3
+#define KVM_XEN_ATTR_TYPE_XEN_VERSION		0x4
+/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG */
+#define KVM_XEN_ATTR_TYPE_RUNSTATE_UPDATE_FLAG	0x5
+/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA */
+#define KVM_XEN_ATTR_TYPE_SHARED_INFO_HVA	0x6
+
+struct kvm_xen_vcpu_attr {
+	__u16 type;
+	__u16 pad[3];
+	union {
+		__u64 gpa;
+#define KVM_XEN_INVALID_GPA ((__u64)-1)
+		__u64 hva;
+		__u64 pad[8];
+		struct {
+			__u64 state;
+			__u64 state_entry_time;
+			__u64 time_running;
+			__u64 time_runnable;
+			__u64 time_blocked;
+			__u64 time_offline;
+		} runstate;
+		__u32 vcpu_id;
+		struct {
+			__u32 port;
+			__u32 priority;
+			__u64 expires_ns;
+		} timer;
+		__u8 vector;
+	} u;
+};
+
+/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_SHARED_INFO */
+#define KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO	0x0
+#define KVM_XEN_VCPU_ATTR_TYPE_VCPU_TIME_INFO	0x1
+#define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_ADDR	0x2
+#define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_CURRENT	0x3
+#define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_DATA	0x4
+#define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_ADJUST	0x5
+/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_EVTCHN_SEND */
+#define KVM_XEN_VCPU_ATTR_TYPE_VCPU_ID		0x6
+#define KVM_XEN_VCPU_ATTR_TYPE_TIMER		0x7
+#define KVM_XEN_VCPU_ATTR_TYPE_UPCALL_VECTOR	0x8
+/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA */
+#define KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO_HVA	0x9
+
+/* Secure Encrypted Virtualization command */
+enum sev_cmd_id {
+	/* Guest initialization commands */
+	KVM_SEV_INIT = 0,
+	KVM_SEV_ES_INIT,
+	/* Guest launch commands */
+	KVM_SEV_LAUNCH_START,
+	KVM_SEV_LAUNCH_UPDATE_DATA,
+	KVM_SEV_LAUNCH_UPDATE_VMSA,
+	KVM_SEV_LAUNCH_SECRET,
+	KVM_SEV_LAUNCH_MEASURE,
+	KVM_SEV_LAUNCH_FINISH,
+	/* Guest migration commands (outgoing) */
+	KVM_SEV_SEND_START,
+	KVM_SEV_SEND_UPDATE_DATA,
+	KVM_SEV_SEND_UPDATE_VMSA,
+	KVM_SEV_SEND_FINISH,
+	/* Guest migration commands (incoming) */
+	KVM_SEV_RECEIVE_START,
+	KVM_SEV_RECEIVE_UPDATE_DATA,
+	KVM_SEV_RECEIVE_UPDATE_VMSA,
+	KVM_SEV_RECEIVE_FINISH,
+	/* Guest status and debug commands */
+	KVM_SEV_GUEST_STATUS,
+	KVM_SEV_DBG_DECRYPT,
+	KVM_SEV_DBG_ENCRYPT,
+	/* Guest certificates commands */
+	KVM_SEV_CERT_EXPORT,
+	/* Attestation report */
+	KVM_SEV_GET_ATTESTATION_REPORT,
+	/* Guest Migration Extension */
+	KVM_SEV_SEND_CANCEL,
+
+	KVM_SEV_NR_MAX,
+};
+
+struct kvm_sev_cmd {
+	__u32 id;
+	__u32 pad0;
+	__u64 data;
+	__u32 error;
+	__u32 sev_fd;
+};
+
+struct kvm_sev_launch_start {
+	__u32 handle;
+	__u32 policy;
+	__u64 dh_uaddr;
+	__u32 dh_len;
+	__u32 pad0;
+	__u64 session_uaddr;
+	__u32 session_len;
+	__u32 pad1;
+};
+
+struct kvm_sev_launch_update_data {
+	__u64 uaddr;
+	__u32 len;
+	__u32 pad0;
+};
+
+
+struct kvm_sev_launch_secret {
+	__u64 hdr_uaddr;
+	__u32 hdr_len;
+	__u32 pad0;
+	__u64 guest_uaddr;
+	__u32 guest_len;
+	__u32 pad1;
+	__u64 trans_uaddr;
+	__u32 trans_len;
+	__u32 pad2;
+};
+
+struct kvm_sev_launch_measure {
+	__u64 uaddr;
+	__u32 len;
+	__u32 pad0;
+};
+
+struct kvm_sev_guest_status {
+	__u32 handle;
+	__u32 policy;
+	__u32 state;
+};
+
+struct kvm_sev_dbg {
+	__u64 src_uaddr;
+	__u64 dst_uaddr;
+	__u32 len;
+	__u32 pad0;
+};
+
+struct kvm_sev_attestation_report {
+	__u8 mnonce[16];
+	__u64 uaddr;
+	__u32 len;
+	__u32 pad0;
+};
+
+struct kvm_sev_send_start {
+	__u32 policy;
+	__u32 pad0;
+	__u64 pdh_cert_uaddr;
+	__u32 pdh_cert_len;
+	__u32 pad1;
+	__u64 plat_certs_uaddr;
+	__u32 plat_certs_len;
+	__u32 pad2;
+	__u64 amd_certs_uaddr;
+	__u32 amd_certs_len;
+	__u32 pad3;
+	__u64 session_uaddr;
+	__u32 session_len;
+	__u32 pad4;
+};
+
+struct kvm_sev_send_update_data {
+	__u64 hdr_uaddr;
+	__u32 hdr_len;
+	__u32 pad0;
+	__u64 guest_uaddr;
+	__u32 guest_len;
+	__u32 pad1;
+	__u64 trans_uaddr;
+	__u32 trans_len;
+	__u32 pad2;
+};
+
+struct kvm_sev_receive_start {
+	__u32 handle;
+	__u32 policy;
+	__u64 pdh_uaddr;
+	__u32 pdh_len;
+	__u32 pad0;
+	__u64 session_uaddr;
+	__u32 session_len;
+	__u32 pad1;
+};
+
+struct kvm_sev_receive_update_data {
+	__u64 hdr_uaddr;
+	__u32 hdr_len;
+	__u32 pad0;
+	__u64 guest_uaddr;
+	__u32 guest_len;
+	__u32 pad1;
+	__u64 trans_uaddr;
+	__u32 trans_len;
+	__u32 pad2;
+};
+
+#define KVM_X2APIC_API_USE_32BIT_IDS            (1ULL << 0)
+#define KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK  (1ULL << 1)
+
+struct kvm_hyperv_eventfd {
+	__u32 conn_id;
+	__s32 fd;
+	__u32 flags;
+	__u32 padding[3];
+};
+
+#define KVM_HYPERV_CONN_ID_MASK		0x00ffffff
+#define KVM_HYPERV_EVENTFD_DEASSIGN	(1 << 0)
+
 /*
  * Masked event layout.
  * Bits   Description
@@ -547,10 +839,10 @@ struct kvm_pmu_event_filter {
 	((__u64)(!!(exclude)) << 55))
 
 #define KVM_PMU_MASKED_ENTRY_EVENT_SELECT \
-	(GENMASK_ULL(7, 0) | GENMASK_ULL(35, 32))
-#define KVM_PMU_MASKED_ENTRY_UMASK_MASK		(GENMASK_ULL(63, 56))
-#define KVM_PMU_MASKED_ENTRY_UMASK_MATCH	(GENMASK_ULL(15, 8))
-#define KVM_PMU_MASKED_ENTRY_EXCLUDE		(BIT_ULL(55))
+	(__GENMASK_ULL(7, 0) | __GENMASK_ULL(35, 32))
+#define KVM_PMU_MASKED_ENTRY_UMASK_MASK		(__GENMASK_ULL(63, 56))
+#define KVM_PMU_MASKED_ENTRY_UMASK_MATCH	(__GENMASK_ULL(15, 8))
+#define KVM_PMU_MASKED_ENTRY_EXCLUDE		(_BITULL(55))
 #define KVM_PMU_MASKED_ENTRY_UMASK_MASK_SHIFT	(56)
 
 /* for KVM_{GET,SET,HAS}_DEVICE_ATTR */
@@ -558,7 +850,7 @@ struct kvm_pmu_event_filter {
 #define   KVM_VCPU_TSC_OFFSET 0 /* attribute for the TSC offset */
 
 /* x86-specific KVM_EXIT_HYPERCALL flags. */
-#define KVM_EXIT_HYPERCALL_LONG_MODE	BIT(0)
+#define KVM_EXIT_HYPERCALL_LONG_MODE	_BITULL(0)
 
 #define KVM_X86_DEFAULT_VM	0
 #define KVM_X86_SW_PROTECTED_VM	1
diff --git a/linux-headers/linux/bits.h b/linux-headers/linux/bits.h
new file mode 100644
index 000000000000..d9897771be8c
--- /dev/null
+++ b/linux-headers/linux/bits.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/* bits.h: Macros for dealing with bitmasks.  */
+
+#ifndef _LINUX_BITS_H
+#define _LINUX_BITS_H
+
+#define __GENMASK(h, l) \
+        (((~_UL(0)) - (_UL(1) << (l)) + 1) & \
+         (~_UL(0) >> (__BITS_PER_LONG - 1 - (h))))
+
+#define __GENMASK_ULL(h, l) \
+        (((~_ULL(0)) - (_ULL(1) << (l)) + 1) & \
+         (~_ULL(0) >> (__BITS_PER_LONG_LONG - 1 - (h))))
+
+#endif /* _LINUX_BITS_H */
diff --git a/linux-headers/linux/kvm.h b/linux-headers/linux/kvm.h
index 17839229b2ac..038731cdef26 100644
--- a/linux-headers/linux/kvm.h
+++ b/linux-headers/linux/kvm.h
@@ -16,6 +16,11 @@
 
 #define KVM_API_VERSION 12
 
+/*
+ * Backwards-compatible definitions.
+ */
+#define __KVM_HAVE_GUEST_DEBUG
+
 /* for KVM_SET_USER_MEMORY_REGION */
 struct kvm_userspace_memory_region {
 	__u32 slot;
@@ -85,43 +90,6 @@ struct kvm_pit_config {
 
 #define KVM_PIT_SPEAKER_DUMMY     1
 
-struct kvm_s390_skeys {
-	__u64 start_gfn;
-	__u64 count;
-	__u64 skeydata_addr;
-	__u32 flags;
-	__u32 reserved[9];
-};
-
-#define KVM_S390_CMMA_PEEK (1 << 0)
-
-/**
- * kvm_s390_cmma_log - Used for CMMA migration.
- *
- * Used both for input and output.
- *
- * @start_gfn: Guest page number to start from.
- * @count: Size of the result buffer.
- * @flags: Control operation mode via KVM_S390_CMMA_* flags
- * @remaining: Used with KVM_S390_GET_CMMA_BITS. Indicates how many dirty
- *             pages are still remaining.
- * @mask: Used with KVM_S390_SET_CMMA_BITS. Bitmap of bits to actually set
- *        in the PGSTE.
- * @values: Pointer to the values buffer.
- *
- * Used in KVM_S390_{G,S}ET_CMMA_BITS ioctls.
- */
-struct kvm_s390_cmma_log {
-	__u64 start_gfn;
-	__u32 count;
-	__u32 flags;
-	union {
-		__u64 remaining;
-		__u64 mask;
-	};
-	__u64 values;
-};
-
 struct kvm_hyperv_exit {
 #define KVM_EXIT_HYPERV_SYNIC          1
 #define KVM_EXIT_HYPERV_HCALL          2
@@ -313,11 +281,6 @@ struct kvm_run {
 			__u32 ipb;
 		} s390_sieic;
 		/* KVM_EXIT_S390_RESET */
-#define KVM_S390_RESET_POR       1
-#define KVM_S390_RESET_CLEAR     2
-#define KVM_S390_RESET_SUBSYSTEM 4
-#define KVM_S390_RESET_CPU_INIT  8
-#define KVM_S390_RESET_IPL       16
 		__u64 s390_reset_flags;
 		/* KVM_EXIT_S390_UCONTROL */
 		struct {
@@ -532,43 +495,6 @@ struct kvm_translation {
 	__u8  pad[5];
 };
 
-/* for KVM_S390_MEM_OP */
-struct kvm_s390_mem_op {
-	/* in */
-	__u64 gaddr;		/* the guest address */
-	__u64 flags;		/* flags */
-	__u32 size;		/* amount of bytes */
-	__u32 op;		/* type of operation */
-	__u64 buf;		/* buffer in userspace */
-	union {
-		struct {
-			__u8 ar;	/* the access register number */
-			__u8 key;	/* access key, ignored if flag unset */
-			__u8 pad1[6];	/* ignored */
-			__u64 old_addr;	/* ignored if cmpxchg flag unset */
-		};
-		__u32 sida_offset; /* offset into the sida */
-		__u8 reserved[32]; /* ignored */
-	};
-};
-/* types for kvm_s390_mem_op->op */
-#define KVM_S390_MEMOP_LOGICAL_READ	0
-#define KVM_S390_MEMOP_LOGICAL_WRITE	1
-#define KVM_S390_MEMOP_SIDA_READ	2
-#define KVM_S390_MEMOP_SIDA_WRITE	3
-#define KVM_S390_MEMOP_ABSOLUTE_READ	4
-#define KVM_S390_MEMOP_ABSOLUTE_WRITE	5
-#define KVM_S390_MEMOP_ABSOLUTE_CMPXCHG	6
-
-/* flags for kvm_s390_mem_op->flags */
-#define KVM_S390_MEMOP_F_CHECK_ONLY		(1ULL << 0)
-#define KVM_S390_MEMOP_F_INJECT_EXCEPTION	(1ULL << 1)
-#define KVM_S390_MEMOP_F_SKEY_PROTECTION	(1ULL << 2)
-
-/* flags specifying extension support via KVM_CAP_S390_MEM_OP_EXTENSION */
-#define KVM_S390_MEMOP_EXTENSION_CAP_BASE	(1 << 0)
-#define KVM_S390_MEMOP_EXTENSION_CAP_CMPXCHG	(1 << 1)
-
 /* for KVM_INTERRUPT */
 struct kvm_interrupt {
 	/* in */
@@ -633,124 +559,6 @@ struct kvm_mp_state {
 	__u32 mp_state;
 };
 
-struct kvm_s390_psw {
-	__u64 mask;
-	__u64 addr;
-};
-
-/* valid values for type in kvm_s390_interrupt */
-#define KVM_S390_SIGP_STOP		0xfffe0000u
-#define KVM_S390_PROGRAM_INT		0xfffe0001u
-#define KVM_S390_SIGP_SET_PREFIX	0xfffe0002u
-#define KVM_S390_RESTART		0xfffe0003u
-#define KVM_S390_INT_PFAULT_INIT	0xfffe0004u
-#define KVM_S390_INT_PFAULT_DONE	0xfffe0005u
-#define KVM_S390_MCHK			0xfffe1000u
-#define KVM_S390_INT_CLOCK_COMP		0xffff1004u
-#define KVM_S390_INT_CPU_TIMER		0xffff1005u
-#define KVM_S390_INT_VIRTIO		0xffff2603u
-#define KVM_S390_INT_SERVICE		0xffff2401u
-#define KVM_S390_INT_EMERGENCY		0xffff1201u
-#define KVM_S390_INT_EXTERNAL_CALL	0xffff1202u
-/* Anything below 0xfffe0000u is taken by INT_IO */
-#define KVM_S390_INT_IO(ai,cssid,ssid,schid)   \
-	(((schid)) |			       \
-	 ((ssid) << 16) |		       \
-	 ((cssid) << 18) |		       \
-	 ((ai) << 26))
-#define KVM_S390_INT_IO_MIN		0x00000000u
-#define KVM_S390_INT_IO_MAX		0xfffdffffu
-#define KVM_S390_INT_IO_AI_MASK		0x04000000u
-
-
-struct kvm_s390_interrupt {
-	__u32 type;
-	__u32 parm;
-	__u64 parm64;
-};
-
-struct kvm_s390_io_info {
-	__u16 subchannel_id;
-	__u16 subchannel_nr;
-	__u32 io_int_parm;
-	__u32 io_int_word;
-};
-
-struct kvm_s390_ext_info {
-	__u32 ext_params;
-	__u32 pad;
-	__u64 ext_params2;
-};
-
-struct kvm_s390_pgm_info {
-	__u64 trans_exc_code;
-	__u64 mon_code;
-	__u64 per_address;
-	__u32 data_exc_code;
-	__u16 code;
-	__u16 mon_class_nr;
-	__u8 per_code;
-	__u8 per_atmid;
-	__u8 exc_access_id;
-	__u8 per_access_id;
-	__u8 op_access_id;
-#define KVM_S390_PGM_FLAGS_ILC_VALID	0x01
-#define KVM_S390_PGM_FLAGS_ILC_0	0x02
-#define KVM_S390_PGM_FLAGS_ILC_1	0x04
-#define KVM_S390_PGM_FLAGS_ILC_MASK	0x06
-#define KVM_S390_PGM_FLAGS_NO_REWIND	0x08
-	__u8 flags;
-	__u8 pad[2];
-};
-
-struct kvm_s390_prefix_info {
-	__u32 address;
-};
-
-struct kvm_s390_extcall_info {
-	__u16 code;
-};
-
-struct kvm_s390_emerg_info {
-	__u16 code;
-};
-
-#define KVM_S390_STOP_FLAG_STORE_STATUS	0x01
-struct kvm_s390_stop_info {
-	__u32 flags;
-};
-
-struct kvm_s390_mchk_info {
-	__u64 cr14;
-	__u64 mcic;
-	__u64 failing_storage_address;
-	__u32 ext_damage_code;
-	__u32 pad;
-	__u8 fixed_logout[16];
-};
-
-struct kvm_s390_irq {
-	__u64 type;
-	union {
-		struct kvm_s390_io_info io;
-		struct kvm_s390_ext_info ext;
-		struct kvm_s390_pgm_info pgm;
-		struct kvm_s390_emerg_info emerg;
-		struct kvm_s390_extcall_info extcall;
-		struct kvm_s390_prefix_info prefix;
-		struct kvm_s390_stop_info stop;
-		struct kvm_s390_mchk_info mchk;
-		char reserved[64];
-	} u;
-};
-
-struct kvm_s390_irq_state {
-	__u64 buf;
-	__u32 flags;        /* will stay unused for compatibility reasons */
-	__u32 len;
-	__u32 reserved[4];  /* will stay unused for compatibility reasons */
-};
-
 /* for KVM_SET_GUEST_DEBUG */
 
 #define KVM_GUESTDBG_ENABLE		0x00000001
@@ -806,50 +614,6 @@ struct kvm_enable_cap {
 	__u8  pad[64];
 };
 
-/* for KVM_PPC_GET_PVINFO */
-
-#define KVM_PPC_PVINFO_FLAGS_EV_IDLE   (1<<0)
-
-struct kvm_ppc_pvinfo {
-	/* out */
-	__u32 flags;
-	__u32 hcall[4];
-	__u8  pad[108];
-};
-
-/* for KVM_PPC_GET_SMMU_INFO */
-#define KVM_PPC_PAGE_SIZES_MAX_SZ	8
-
-struct kvm_ppc_one_page_size {
-	__u32 page_shift;	/* Page shift (or 0) */
-	__u32 pte_enc;		/* Encoding in the HPTE (>>12) */
-};
-
-struct kvm_ppc_one_seg_page_size {
-	__u32 page_shift;	/* Base page shift of segment (or 0) */
-	__u32 slb_enc;		/* SLB encoding for BookS */
-	struct kvm_ppc_one_page_size enc[KVM_PPC_PAGE_SIZES_MAX_SZ];
-};
-
-#define KVM_PPC_PAGE_SIZES_REAL		0x00000001
-#define KVM_PPC_1T_SEGMENTS		0x00000002
-#define KVM_PPC_NO_HASH			0x00000004
-
-struct kvm_ppc_smmu_info {
-	__u64 flags;
-	__u32 slb_size;
-	__u16 data_keys;	/* # storage keys supported for data */
-	__u16 instr_keys;	/* # storage keys supported for instructions */
-	struct kvm_ppc_one_seg_page_size sps[KVM_PPC_PAGE_SIZES_MAX_SZ];
-};
-
-/* for KVM_PPC_RESIZE_HPT_{PREPARE,COMMIT} */
-struct kvm_ppc_resize_hpt {
-	__u64 flags;
-	__u32 shift;
-	__u32 pad;
-};
-
 #define KVMIO 0xAE
 
 /* machine type bits, to be used as argument to KVM_CREATE_VM */
@@ -919,9 +683,7 @@ struct kvm_ppc_resize_hpt {
 /* Bug in KVM_SET_USER_MEMORY_REGION fixed: */
 #define KVM_CAP_DESTROY_MEMORY_REGION_WORKS 21
 #define KVM_CAP_USER_NMI 22
-#ifdef __KVM_HAVE_GUEST_DEBUG
 #define KVM_CAP_SET_GUEST_DEBUG 23
-#endif
 #ifdef __KVM_HAVE_PIT
 #define KVM_CAP_REINJECT_CONTROL 24
 #endif
@@ -1152,8 +914,6 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_GUEST_MEMFD 234
 #define KVM_CAP_VM_TYPES 235
 
-#ifdef KVM_CAP_IRQ_ROUTING
-
 struct kvm_irq_routing_irqchip {
 	__u32 irqchip;
 	__u32 pin;
@@ -1218,42 +978,6 @@ struct kvm_irq_routing {
 	struct kvm_irq_routing_entry entries[];
 };
 
-#endif
-
-#ifdef KVM_CAP_MCE
-/* x86 MCE */
-struct kvm_x86_mce {
-	__u64 status;
-	__u64 addr;
-	__u64 misc;
-	__u64 mcg_status;
-	__u8 bank;
-	__u8 pad1[7];
-	__u64 pad2[3];
-};
-#endif
-
-#ifdef KVM_CAP_XEN_HVM
-#define KVM_XEN_HVM_CONFIG_HYPERCALL_MSR	(1 << 0)
-#define KVM_XEN_HVM_CONFIG_INTERCEPT_HCALL	(1 << 1)
-#define KVM_XEN_HVM_CONFIG_SHARED_INFO		(1 << 2)
-#define KVM_XEN_HVM_CONFIG_RUNSTATE		(1 << 3)
-#define KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL	(1 << 4)
-#define KVM_XEN_HVM_CONFIG_EVTCHN_SEND		(1 << 5)
-#define KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG	(1 << 6)
-#define KVM_XEN_HVM_CONFIG_PVCLOCK_TSC_UNSTABLE	(1 << 7)
-
-struct kvm_xen_hvm_config {
-	__u32 flags;
-	__u32 msr;
-	__u64 blob_addr_32;
-	__u64 blob_addr_64;
-	__u8 blob_size_32;
-	__u8 blob_size_64;
-	__u8 pad2[30];
-};
-#endif
-
 #define KVM_IRQFD_FLAG_DEASSIGN (1 << 0)
 /*
  * Available with KVM_CAP_IRQFD_RESAMPLE
@@ -1438,11 +1162,6 @@ struct kvm_vfio_spapr_tce {
 					 struct kvm_userspace_memory_region2)
 
 /* enable ucontrol for s390 */
-struct kvm_s390_ucas_mapping {
-	__u64 user_addr;
-	__u64 vcpu_addr;
-	__u64 length;
-};
 #define KVM_S390_UCAS_MAP        _IOW(KVMIO, 0x50, struct kvm_s390_ucas_mapping)
 #define KVM_S390_UCAS_UNMAP      _IOW(KVMIO, 0x51, struct kvm_s390_ucas_mapping)
 #define KVM_S390_VCPU_FAULT	 _IOW(KVMIO, 0x52, unsigned long)
@@ -1637,89 +1356,6 @@ struct kvm_enc_region {
 #define KVM_S390_NORMAL_RESET	_IO(KVMIO,   0xc3)
 #define KVM_S390_CLEAR_RESET	_IO(KVMIO,   0xc4)
 
-struct kvm_s390_pv_sec_parm {
-	__u64 origin;
-	__u64 length;
-};
-
-struct kvm_s390_pv_unp {
-	__u64 addr;
-	__u64 size;
-	__u64 tweak;
-};
-
-enum pv_cmd_dmp_id {
-	KVM_PV_DUMP_INIT,
-	KVM_PV_DUMP_CONFIG_STOR_STATE,
-	KVM_PV_DUMP_COMPLETE,
-	KVM_PV_DUMP_CPU,
-};
-
-struct kvm_s390_pv_dmp {
-	__u64 subcmd;
-	__u64 buff_addr;
-	__u64 buff_len;
-	__u64 gaddr;		/* For dump storage state */
-	__u64 reserved[4];
-};
-
-enum pv_cmd_info_id {
-	KVM_PV_INFO_VM,
-	KVM_PV_INFO_DUMP,
-};
-
-struct kvm_s390_pv_info_dump {
-	__u64 dump_cpu_buffer_len;
-	__u64 dump_config_mem_buffer_per_1m;
-	__u64 dump_config_finalize_len;
-};
-
-struct kvm_s390_pv_info_vm {
-	__u64 inst_calls_list[4];
-	__u64 max_cpus;
-	__u64 max_guests;
-	__u64 max_guest_addr;
-	__u64 feature_indication;
-};
-
-struct kvm_s390_pv_info_header {
-	__u32 id;
-	__u32 len_max;
-	__u32 len_written;
-	__u32 reserved;
-};
-
-struct kvm_s390_pv_info {
-	struct kvm_s390_pv_info_header header;
-	union {
-		struct kvm_s390_pv_info_dump dump;
-		struct kvm_s390_pv_info_vm vm;
-	};
-};
-
-enum pv_cmd_id {
-	KVM_PV_ENABLE,
-	KVM_PV_DISABLE,
-	KVM_PV_SET_SEC_PARMS,
-	KVM_PV_UNPACK,
-	KVM_PV_VERIFY,
-	KVM_PV_PREP_RESET,
-	KVM_PV_UNSHARE_ALL,
-	KVM_PV_INFO,
-	KVM_PV_DUMP,
-	KVM_PV_ASYNC_CLEANUP_PREPARE,
-	KVM_PV_ASYNC_CLEANUP_PERFORM,
-};
-
-struct kvm_pv_cmd {
-	__u32 cmd;	/* Command to be executed */
-	__u16 rc;	/* Ultravisor return code */
-	__u16 rrc;	/* Ultravisor return reason code */
-	__u64 data;	/* Data or address */
-	__u32 flags;    /* flags for future extensions. Must be 0 for now */
-	__u32 reserved[3];
-};
-
 /* Available with KVM_CAP_S390_PROTECTED */
 #define KVM_S390_PV_COMMAND		_IOWR(KVMIO, 0xc5, struct kvm_pv_cmd)
 
@@ -1733,58 +1369,6 @@ struct kvm_pv_cmd {
 #define KVM_XEN_HVM_GET_ATTR	_IOWR(KVMIO, 0xc8, struct kvm_xen_hvm_attr)
 #define KVM_XEN_HVM_SET_ATTR	_IOW(KVMIO,  0xc9, struct kvm_xen_hvm_attr)
 
-struct kvm_xen_hvm_attr {
-	__u16 type;
-	__u16 pad[3];
-	union {
-		__u8 long_mode;
-		__u8 vector;
-		__u8 runstate_update_flag;
-		struct {
-			__u64 gfn;
-#define KVM_XEN_INVALID_GFN ((__u64)-1)
-		} shared_info;
-		struct {
-			__u32 send_port;
-			__u32 type; /* EVTCHNSTAT_ipi / EVTCHNSTAT_interdomain */
-			__u32 flags;
-#define KVM_XEN_EVTCHN_DEASSIGN		(1 << 0)
-#define KVM_XEN_EVTCHN_UPDATE		(1 << 1)
-#define KVM_XEN_EVTCHN_RESET		(1 << 2)
-			/*
-			 * Events sent by the guest are either looped back to
-			 * the guest itself (potentially on a different port#)
-			 * or signalled via an eventfd.
-			 */
-			union {
-				struct {
-					__u32 port;
-					__u32 vcpu;
-					__u32 priority;
-				} port;
-				struct {
-					__u32 port; /* Zero for eventfd */
-					__s32 fd;
-				} eventfd;
-				__u32 padding[4];
-			} deliver;
-		} evtchn;
-		__u32 xen_version;
-		__u64 pad[8];
-	} u;
-};
-
-
-/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_SHARED_INFO */
-#define KVM_XEN_ATTR_TYPE_LONG_MODE		0x0
-#define KVM_XEN_ATTR_TYPE_SHARED_INFO		0x1
-#define KVM_XEN_ATTR_TYPE_UPCALL_VECTOR		0x2
-/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_EVTCHN_SEND */
-#define KVM_XEN_ATTR_TYPE_EVTCHN		0x3
-#define KVM_XEN_ATTR_TYPE_XEN_VERSION		0x4
-/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG */
-#define KVM_XEN_ATTR_TYPE_RUNSTATE_UPDATE_FLAG	0x5
-
 /* Per-vCPU Xen attributes */
 #define KVM_XEN_VCPU_GET_ATTR	_IOWR(KVMIO, 0xca, struct kvm_xen_vcpu_attr)
 #define KVM_XEN_VCPU_SET_ATTR	_IOW(KVMIO,  0xcb, struct kvm_xen_vcpu_attr)
@@ -1795,242 +1379,6 @@ struct kvm_xen_hvm_attr {
 #define KVM_GET_SREGS2             _IOR(KVMIO,  0xcc, struct kvm_sregs2)
 #define KVM_SET_SREGS2             _IOW(KVMIO,  0xcd, struct kvm_sregs2)
 
-struct kvm_xen_vcpu_attr {
-	__u16 type;
-	__u16 pad[3];
-	union {
-		__u64 gpa;
-#define KVM_XEN_INVALID_GPA ((__u64)-1)
-		__u64 pad[8];
-		struct {
-			__u64 state;
-			__u64 state_entry_time;
-			__u64 time_running;
-			__u64 time_runnable;
-			__u64 time_blocked;
-			__u64 time_offline;
-		} runstate;
-		__u32 vcpu_id;
-		struct {
-			__u32 port;
-			__u32 priority;
-			__u64 expires_ns;
-		} timer;
-		__u8 vector;
-	} u;
-};
-
-/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_SHARED_INFO */
-#define KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO	0x0
-#define KVM_XEN_VCPU_ATTR_TYPE_VCPU_TIME_INFO	0x1
-#define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_ADDR	0x2
-#define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_CURRENT	0x3
-#define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_DATA	0x4
-#define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_ADJUST	0x5
-/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_EVTCHN_SEND */
-#define KVM_XEN_VCPU_ATTR_TYPE_VCPU_ID		0x6
-#define KVM_XEN_VCPU_ATTR_TYPE_TIMER		0x7
-#define KVM_XEN_VCPU_ATTR_TYPE_UPCALL_VECTOR	0x8
-
-/* Secure Encrypted Virtualization command */
-enum sev_cmd_id {
-	/* Guest initialization commands */
-	KVM_SEV_INIT = 0,
-	KVM_SEV_ES_INIT,
-	/* Guest launch commands */
-	KVM_SEV_LAUNCH_START,
-	KVM_SEV_LAUNCH_UPDATE_DATA,
-	KVM_SEV_LAUNCH_UPDATE_VMSA,
-	KVM_SEV_LAUNCH_SECRET,
-	KVM_SEV_LAUNCH_MEASURE,
-	KVM_SEV_LAUNCH_FINISH,
-	/* Guest migration commands (outgoing) */
-	KVM_SEV_SEND_START,
-	KVM_SEV_SEND_UPDATE_DATA,
-	KVM_SEV_SEND_UPDATE_VMSA,
-	KVM_SEV_SEND_FINISH,
-	/* Guest migration commands (incoming) */
-	KVM_SEV_RECEIVE_START,
-	KVM_SEV_RECEIVE_UPDATE_DATA,
-	KVM_SEV_RECEIVE_UPDATE_VMSA,
-	KVM_SEV_RECEIVE_FINISH,
-	/* Guest status and debug commands */
-	KVM_SEV_GUEST_STATUS,
-	KVM_SEV_DBG_DECRYPT,
-	KVM_SEV_DBG_ENCRYPT,
-	/* Guest certificates commands */
-	KVM_SEV_CERT_EXPORT,
-	/* Attestation report */
-	KVM_SEV_GET_ATTESTATION_REPORT,
-	/* Guest Migration Extension */
-	KVM_SEV_SEND_CANCEL,
-
-	KVM_SEV_NR_MAX,
-};
-
-struct kvm_sev_cmd {
-	__u32 id;
-	__u64 data;
-	__u32 error;
-	__u32 sev_fd;
-};
-
-struct kvm_sev_launch_start {
-	__u32 handle;
-	__u32 policy;
-	__u64 dh_uaddr;
-	__u32 dh_len;
-	__u64 session_uaddr;
-	__u32 session_len;
-};
-
-struct kvm_sev_launch_update_data {
-	__u64 uaddr;
-	__u32 len;
-};
-
-
-struct kvm_sev_launch_secret {
-	__u64 hdr_uaddr;
-	__u32 hdr_len;
-	__u64 guest_uaddr;
-	__u32 guest_len;
-	__u64 trans_uaddr;
-	__u32 trans_len;
-};
-
-struct kvm_sev_launch_measure {
-	__u64 uaddr;
-	__u32 len;
-};
-
-struct kvm_sev_guest_status {
-	__u32 handle;
-	__u32 policy;
-	__u32 state;
-};
-
-struct kvm_sev_dbg {
-	__u64 src_uaddr;
-	__u64 dst_uaddr;
-	__u32 len;
-};
-
-struct kvm_sev_attestation_report {
-	__u8 mnonce[16];
-	__u64 uaddr;
-	__u32 len;
-};
-
-struct kvm_sev_send_start {
-	__u32 policy;
-	__u64 pdh_cert_uaddr;
-	__u32 pdh_cert_len;
-	__u64 plat_certs_uaddr;
-	__u32 plat_certs_len;
-	__u64 amd_certs_uaddr;
-	__u32 amd_certs_len;
-	__u64 session_uaddr;
-	__u32 session_len;
-};
-
-struct kvm_sev_send_update_data {
-	__u64 hdr_uaddr;
-	__u32 hdr_len;
-	__u64 guest_uaddr;
-	__u32 guest_len;
-	__u64 trans_uaddr;
-	__u32 trans_len;
-};
-
-struct kvm_sev_receive_start {
-	__u32 handle;
-	__u32 policy;
-	__u64 pdh_uaddr;
-	__u32 pdh_len;
-	__u64 session_uaddr;
-	__u32 session_len;
-};
-
-struct kvm_sev_receive_update_data {
-	__u64 hdr_uaddr;
-	__u32 hdr_len;
-	__u64 guest_uaddr;
-	__u32 guest_len;
-	__u64 trans_uaddr;
-	__u32 trans_len;
-};
-
-#define KVM_DEV_ASSIGN_ENABLE_IOMMU	(1 << 0)
-#define KVM_DEV_ASSIGN_PCI_2_3		(1 << 1)
-#define KVM_DEV_ASSIGN_MASK_INTX	(1 << 2)
-
-struct kvm_assigned_pci_dev {
-	__u32 assigned_dev_id;
-	__u32 busnr;
-	__u32 devfn;
-	__u32 flags;
-	__u32 segnr;
-	union {
-		__u32 reserved[11];
-	};
-};
-
-#define KVM_DEV_IRQ_HOST_INTX    (1 << 0)
-#define KVM_DEV_IRQ_HOST_MSI     (1 << 1)
-#define KVM_DEV_IRQ_HOST_MSIX    (1 << 2)
-
-#define KVM_DEV_IRQ_GUEST_INTX   (1 << 8)
-#define KVM_DEV_IRQ_GUEST_MSI    (1 << 9)
-#define KVM_DEV_IRQ_GUEST_MSIX   (1 << 10)
-
-#define KVM_DEV_IRQ_HOST_MASK	 0x00ff
-#define KVM_DEV_IRQ_GUEST_MASK   0xff00
-
-struct kvm_assigned_irq {
-	__u32 assigned_dev_id;
-	__u32 host_irq; /* ignored (legacy field) */
-	__u32 guest_irq;
-	__u32 flags;
-	union {
-		__u32 reserved[12];
-	};
-};
-
-struct kvm_assigned_msix_nr {
-	__u32 assigned_dev_id;
-	__u16 entry_nr;
-	__u16 padding;
-};
-
-#define KVM_MAX_MSIX_PER_DEV		256
-struct kvm_assigned_msix_entry {
-	__u32 assigned_dev_id;
-	__u32 gsi;
-	__u16 entry; /* The index of entry in the MSI-X table */
-	__u16 padding[3];
-};
-
-#define KVM_X2APIC_API_USE_32BIT_IDS            (1ULL << 0)
-#define KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK  (1ULL << 1)
-
-/* Available with KVM_CAP_ARM_USER_IRQ */
-
-/* Bits for run->s.regs.device_irq_level */
-#define KVM_ARM_DEV_EL1_VTIMER		(1 << 0)
-#define KVM_ARM_DEV_EL1_PTIMER		(1 << 1)
-#define KVM_ARM_DEV_PMU			(1 << 2)
-
-struct kvm_hyperv_eventfd {
-	__u32 conn_id;
-	__s32 fd;
-	__u32 flags;
-	__u32 padding[3];
-};
-
-#define KVM_HYPERV_CONN_ID_MASK		0x00ffffff
-#define KVM_HYPERV_EVENTFD_DEASSIGN	(1 << 0)
-
 #define KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE    (1 << 0)
 #define KVM_DIRTY_LOG_INITIALLY_SET            (1 << 1)
 
@@ -2176,33 +1524,6 @@ struct kvm_stats_desc {
 /* Available with KVM_CAP_S390_ZPCI_OP */
 #define KVM_S390_ZPCI_OP         _IOW(KVMIO,  0xd1, struct kvm_s390_zpci_op)
 
-struct kvm_s390_zpci_op {
-	/* in */
-	__u32 fh;               /* target device */
-	__u8  op;               /* operation to perform */
-	__u8  pad[3];
-	union {
-		/* for KVM_S390_ZPCIOP_REG_AEN */
-		struct {
-			__u64 ibv;      /* Guest addr of interrupt bit vector */
-			__u64 sb;       /* Guest addr of summary bit */
-			__u32 flags;
-			__u32 noi;      /* Number of interrupts */
-			__u8 isc;       /* Guest interrupt subclass */
-			__u8 sbo;       /* Offset of guest summary bit vector */
-			__u16 pad;
-		} reg_aen;
-		__u64 reserved[8];
-	} u;
-};
-
-/* types for kvm_s390_zpci_op->op */
-#define KVM_S390_ZPCIOP_REG_AEN                0
-#define KVM_S390_ZPCIOP_DEREG_AEN      1
-
-/* flags for kvm_s390_zpci_op->u.reg_aen.flags */
-#define KVM_S390_ZPCIOP_REGAEN_HOST    (1 << 0)
-
 /* Available with KVM_CAP_MEMORY_ATTRIBUTES */
 #define KVM_SET_MEMORY_ATTRIBUTES              _IOW(KVMIO,  0xd2, struct kvm_memory_attributes)
 
diff --git a/linux-headers/linux/psp-sev.h b/linux-headers/linux/psp-sev.h
index bcb21339ee39..c3046c6bfff5 100644
--- a/linux-headers/linux/psp-sev.h
+++ b/linux-headers/linux/psp-sev.h
@@ -28,6 +28,9 @@ enum {
 	SEV_PEK_CERT_IMPORT,
 	SEV_GET_ID,	/* This command is deprecated, use SEV_GET_ID2 */
 	SEV_GET_ID2,
+	SNP_PLATFORM_STATUS,
+	SNP_COMMIT,
+	SNP_SET_CONFIG,
 
 	SEV_MAX,
 };
@@ -69,6 +72,12 @@ typedef enum {
 	SEV_RET_RESOURCE_LIMIT,
 	SEV_RET_SECURE_DATA_INVALID,
 	SEV_RET_INVALID_KEY = 0x27,
+	SEV_RET_INVALID_PAGE_SIZE,
+	SEV_RET_INVALID_PAGE_STATE,
+	SEV_RET_INVALID_MDATA_ENTRY,
+	SEV_RET_INVALID_PAGE_OWNER,
+	SEV_RET_INVALID_PAGE_AEAD_OFLOW,
+	SEV_RET_RMP_INIT_REQUIRED,
 	SEV_RET_MAX,
 } sev_ret_code;
 
@@ -155,6 +164,56 @@ struct sev_user_data_get_id2 {
 	__u32 length;				/* In/Out */
 } __attribute__((packed));
 
+/**
+ * struct sev_user_data_snp_status - SNP status
+ *
+ * @api_major: API major version
+ * @api_minor: API minor version
+ * @state: current platform state
+ * @is_rmp_initialized: whether RMP is initialized or not
+ * @rsvd: reserved
+ * @build_id: firmware build id for the API version
+ * @mask_chip_id: whether chip id is present in attestation reports or not
+ * @mask_chip_key: whether attestation reports are signed or not
+ * @vlek_en: VLEK (Version Loaded Endorsement Key) hashstick is loaded
+ * @rsvd1: reserved
+ * @guest_count: the number of guest currently managed by the firmware
+ * @current_tcb_version: current TCB version
+ * @reported_tcb_version: reported TCB version
+ */
+struct sev_user_data_snp_status {
+	__u8 api_major;			/* Out */
+	__u8 api_minor;			/* Out */
+	__u8 state;			/* Out */
+	__u8 is_rmp_initialized:1;	/* Out */
+	__u8 rsvd:7;
+	__u32 build_id;			/* Out */
+	__u32 mask_chip_id:1;		/* Out */
+	__u32 mask_chip_key:1;		/* Out */
+	__u32 vlek_en:1;		/* Out */
+	__u32 rsvd1:29;
+	__u32 guest_count;		/* Out */
+	__u64 current_tcb_version;	/* Out */
+	__u64 reported_tcb_version;	/* Out */
+} __attribute__((packed));
+
+/**
+ * struct sev_user_data_snp_config - system wide configuration value for SNP.
+ *
+ * @reported_tcb: the TCB version to report in the guest attestation report.
+ * @mask_chip_id: whether chip id is present in attestation reports or not
+ * @mask_chip_key: whether attestation reports are signed or not
+ * @rsvd: reserved
+ * @rsvd1: reserved
+ */
+struct sev_user_data_snp_config {
+	__u64 reported_tcb  ;   /* In */
+	__u32 mask_chip_id:1;   /* In */
+	__u32 mask_chip_key:1;  /* In */
+	__u32 rsvd:30;          /* In */
+	__u8 rsvd1[52];
+} __attribute__((packed));
+
 /**
  * struct sev_issue_cmd - SEV ioctl parameters
  *
diff --git a/linux-headers/linux/vhost.h b/linux-headers/linux/vhost.h
index 649560c685f1..bea697390613 100644
--- a/linux-headers/linux/vhost.h
+++ b/linux-headers/linux/vhost.h
@@ -227,4 +227,11 @@
  */
 #define VHOST_VDPA_GET_VRING_DESC_GROUP	_IOWR(VHOST_VIRTIO, 0x7F,	\
 					      struct vhost_vring_state)
+
+/* Get the queue size of a specific virtqueue.
+ * userspace set the vring index in vhost_vring_state.index
+ * kernel set the queue size in vhost_vring_state.num
+ */
+#define VHOST_VDPA_GET_VRING_SIZE	_IOWR(VHOST_VIRTIO, 0x80,	\
+					      struct vhost_vring_state)
 #endif
diff --git a/scripts/update-linux-headers.sh b/scripts/update-linux-headers.sh
index a0006eec6fd1..6430881eec50 100755
--- a/scripts/update-linux-headers.sh
+++ b/scripts/update-linux-headers.sh
@@ -62,6 +62,7 @@ cp_portable() {
                                      -e 'linux/kernel' \
                                      -e 'linux/sysinfo' \
                                      -e 'asm-generic/kvm_para' \
+                                     -e 'asm-x86/setup_data.h' \
                                      > /dev/null
     then
         echo "Unexpected #include in input file $f".
@@ -149,9 +150,11 @@ for arch in $ARCHLIST; do
         cp "$tmpdir/include/asm/unistd_x32.h" "$output/linux-headers/asm-x86/"
         cp "$tmpdir/include/asm/unistd_64.h" "$output/linux-headers/asm-x86/"
         cp_portable "$tmpdir/include/asm/kvm_para.h" "$output/include/standard-headers/asm-$arch"
+        cp_portable "$tmpdir/include/asm/setup_data.h" "$output/include/standard-headers/asm-$arch"
         # Remove everything except the macros from bootparam.h avoiding the
         # unnecessary import of several video/ist/etc headers
         sed -e '/__ASSEMBLY__/,/__ASSEMBLY__/d' \
+            -e 's/<asm\/\([^>]*\)>/"standard-headers\/asm-x86\/\1"/' \
                "$tmpdir/include/asm/bootparam.h" > "$tmpdir/bootparam.h"
         cp_portable "$tmpdir/bootparam.h" \
                     "$output/include/standard-headers/asm-$arch"
@@ -165,7 +168,7 @@ rm -rf "$output/linux-headers/linux"
 mkdir -p "$output/linux-headers/linux"
 for header in const.h stddef.h kvm.h vfio.h vfio_ccw.h vfio_zdev.h vhost.h \
               psci.h psp-sev.h userfaultfd.h memfd.h mman.h nvme_ioctl.h \
-              vduse.h iommufd.h; do
+              vduse.h iommufd.h bits.h; do
     cp "$tmpdir/include/linux/$header" "$output/linux-headers/linux"
 done
 
-- 
2.44.0



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v8 02/11] virtio-gpu: Use pkgconfig version to decide which virgl features are available
  2024-04-18 19:00 [PATCH v8 00/11] Support blob memory and venus on qemu Dmitry Osipenko
  2024-04-18 19:00 ` [PATCH v8 01/11] linux-headers: Update to Linux v6.9-rc3 Dmitry Osipenko
@ 2024-04-18 19:00 ` Dmitry Osipenko
  2024-04-18 19:00 ` [PATCH v8 03/11] virtio-gpu: Support context-init feature with virglrenderer Dmitry Osipenko
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 36+ messages in thread
From: Dmitry Osipenko @ 2024-04-18 19:00 UTC (permalink / raw)
  To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

New virglrenderer features were stabilized with the release of v1.0.0.
The presence of symbols in virglrenderer.h doesn't guarantee ABI compatibility
with pre-release development versions of libvirglrenderer. Use the
virglrenderer version to decide reliably which virgl features are available.
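
For context, C code keeps consuming the resulting define exactly as before;
a minimal sketch, assuming the illustrative helper name below (the real
call sites guarded by this define are elsewhere in the tree):

  /* Illustrative helper, not an actual QEMU symbol: the define now simply
   * reflects whether virglrenderer >= 1.0.0 was found at build time. */
  static bool example_have_d3d_info_ext(void)
  {
  #ifdef HAVE_VIRGL_D3D_INFO_EXT
      return true;
  #else
      return false;
  #endif
  }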

Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 meson.build | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/meson.build b/meson.build
index 91a0aa64c640..cafc32521efb 100644
--- a/meson.build
+++ b/meson.build
@@ -2286,11 +2286,8 @@ config_host_data.set('CONFIG_PNG', png.found())
 config_host_data.set('CONFIG_VNC', vnc.found())
 config_host_data.set('CONFIG_VNC_JPEG', jpeg.found())
 config_host_data.set('CONFIG_VNC_SASL', sasl.found())
-if virgl.found()
-  config_host_data.set('HAVE_VIRGL_D3D_INFO_EXT',
-                       cc.has_member('struct virgl_renderer_resource_info_ext', 'd3d_tex2d',
-                                     prefix: '#include <virglrenderer.h>',
-                                     dependencies: virgl))
+if virgl.version().version_compare('>=1.0.0')
+  config_host_data.set('HAVE_VIRGL_D3D_INFO_EXT', 1)
 endif
 config_host_data.set('CONFIG_VIRTFS', have_virtfs)
 config_host_data.set('CONFIG_VTE', vte.found())
-- 
2.44.0



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v8 03/11] virtio-gpu: Support context-init feature with virglrenderer
  2024-04-18 19:00 [PATCH v8 00/11] Support blob memory and venus on qemu Dmitry Osipenko
  2024-04-18 19:00 ` [PATCH v8 01/11] linux-headers: Update to Linux v6.9-rc3 Dmitry Osipenko
  2024-04-18 19:00 ` [PATCH v8 02/11] virtio-gpu: Use pkgconfig version to decide which virgl features are available Dmitry Osipenko
@ 2024-04-18 19:00 ` Dmitry Osipenko
  2024-04-18 19:00 ` [PATCH v8 04/11] virtio-gpu: Don't require udmabuf when blobs and virgl are enabled Dmitry Osipenko
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 36+ messages in thread
From: Dmitry Osipenko @ 2024-04-18 19:00 UTC (permalink / raw)
  To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

From: Huang Rui <ray.huang@amd.com>

Patch "virtio-gpu: CONTEXT_INIT feature" has added the context_init
feature flags. Expose this feature and support creating virglrenderer
context with flags using context_id if libvirglrenderer is new enough.
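
Below is a condensed sketch of the resulting dispatch in the context-create
handler (the full change is in the diff below); per the virtio-gpu spec the
guest is expected to pass the target capset id in the low bits of
context_init, which is forwarded here as the flags argument:

  /* Condensed from virgl_cmd_context_create(); error paths omitted. */
  #ifdef HAVE_VIRGL_CONTEXT_CREATE_WITH_FLAGS
      if (cc.context_init &&
          virtio_gpu_context_init_enabled(g->parent_obj.conf)) {
          virgl_renderer_context_create_with_flags(cc.hdr.ctx_id,
                                                   cc.context_init,
                                                   cc.nlen, cc.debug_name);
          return;
      }
  #endif
      virgl_renderer_context_create(cc.hdr.ctx_id, cc.nlen, cc.debug_name);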

Originally-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Antonio Caggiano <quic_acaggian@quicinc.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 hw/display/virtio-gpu-gl.c    |  4 ++++
 hw/display/virtio-gpu-virgl.c | 20 ++++++++++++++++++--
 meson.build                   |  1 +
 3 files changed, 23 insertions(+), 2 deletions(-)

diff --git a/hw/display/virtio-gpu-gl.c b/hw/display/virtio-gpu-gl.c
index e06be60dfbfc..ba478124e2c2 100644
--- a/hw/display/virtio-gpu-gl.c
+++ b/hw/display/virtio-gpu-gl.c
@@ -127,6 +127,10 @@ static void virtio_gpu_gl_device_realize(DeviceState *qdev, Error **errp)
     VIRTIO_GPU_BASE(g)->virtio_config.num_capsets =
         virtio_gpu_virgl_get_num_capsets(g);
 
+#ifdef HAVE_VIRGL_CONTEXT_CREATE_WITH_FLAGS
+    g->parent_obj.conf.flags |= 1 << VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED;
+#endif
+
     virtio_gpu_device_realize(qdev, errp);
 }
 
diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
index 9f34d0e6619c..ef598d8d23ee 100644
--- a/hw/display/virtio-gpu-virgl.c
+++ b/hw/display/virtio-gpu-virgl.c
@@ -106,8 +106,24 @@ static void virgl_cmd_context_create(VirtIOGPU *g,
     trace_virtio_gpu_cmd_ctx_create(cc.hdr.ctx_id,
                                     cc.debug_name);
 
-    virgl_renderer_context_create(cc.hdr.ctx_id, cc.nlen,
-                                  cc.debug_name);
+    if (cc.context_init) {
+        if (!virtio_gpu_context_init_enabled(g->parent_obj.conf)) {
+            qemu_log_mask(LOG_GUEST_ERROR, "%s: context_init disabled",
+                          __func__);
+            cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
+            return;
+        }
+
+#ifdef HAVE_VIRGL_CONTEXT_CREATE_WITH_FLAGS
+        virgl_renderer_context_create_with_flags(cc.hdr.ctx_id,
+                                                 cc.context_init,
+                                                 cc.nlen,
+                                                 cc.debug_name);
+        return;
+#endif
+    }
+
+    virgl_renderer_context_create(cc.hdr.ctx_id, cc.nlen, cc.debug_name);
 }
 
 static void virgl_cmd_context_destroy(VirtIOGPU *g,
diff --git a/meson.build b/meson.build
index cafc32521efb..d71d33d69b45 100644
--- a/meson.build
+++ b/meson.build
@@ -2288,6 +2288,7 @@ config_host_data.set('CONFIG_VNC_JPEG', jpeg.found())
 config_host_data.set('CONFIG_VNC_SASL', sasl.found())
 if virgl.version().version_compare('>=1.0.0')
   config_host_data.set('HAVE_VIRGL_D3D_INFO_EXT', 1)
+  config_host_data.set('HAVE_VIRGL_CONTEXT_CREATE_WITH_FLAGS', 1)
 endif
 config_host_data.set('CONFIG_VIRTFS', have_virtfs)
 config_host_data.set('CONFIG_VTE', vte.found())
-- 
2.44.0



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v8 04/11] virtio-gpu: Don't require udmabuf when blobs and virgl are enabled
  2024-04-18 19:00 [PATCH v8 00/11] Support blob memory and venus on qemu Dmitry Osipenko
                   ` (2 preceding siblings ...)
  2024-04-18 19:00 ` [PATCH v8 03/11] virtio-gpu: Support context-init feature with virglrenderer Dmitry Osipenko
@ 2024-04-18 19:00 ` Dmitry Osipenko
  2024-04-18 19:00 ` [PATCH v8 05/11] virtio-gpu: Add virgl resource management Dmitry Osipenko
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 36+ messages in thread
From: Dmitry Osipenko @ 2024-04-18 19:00 UTC (permalink / raw)
  To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

Udmabuf usage is mandatory when virgl is disabled and the blobs feature is
enabled in the Qemu machine configuration. If virgl and blobs are both
enabled, then udmabuf becomes optional. Since udmabuf isn't widely supported
by popular Linux distros today, let's relax the udmabuf requirement for
blobs=on,virgl=on. Now full-featured virtio-gpu acceleration is available
to Qemu users without the need to have udmabuf available in the system.
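
For example, with this change a machine started with
"-device virtio-vga-gl,blob=true" is expected to realize on a host that
lacks udmabuf support, while blob=true on the non-virgl devices still
requires rutabaga or udmabuf.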

Reviewed-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Antonio Caggiano <quic_acaggian@quicinc.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 hw/display/virtio-gpu.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index ae831b6b3e3e..dac272ecadb1 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -1472,6 +1472,7 @@ void virtio_gpu_device_realize(DeviceState *qdev, Error **errp)
 
     if (virtio_gpu_blob_enabled(g->parent_obj.conf)) {
         if (!virtio_gpu_rutabaga_enabled(g->parent_obj.conf) &&
+            !virtio_gpu_virgl_enabled(g->parent_obj.conf) &&
             !virtio_gpu_have_udmabuf()) {
             error_setg(errp, "need rutabaga or udmabuf for blob resources");
             return;
-- 
2.44.0



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v8 05/11] virtio-gpu: Add virgl resource management
  2024-04-18 19:00 [PATCH v8 00/11] Support blob memory and venus on qemu Dmitry Osipenko
                   ` (3 preceding siblings ...)
  2024-04-18 19:00 ` [PATCH v8 04/11] virtio-gpu: Don't require udmabuf when blobs and virgl are enabled Dmitry Osipenko
@ 2024-04-18 19:00 ` Dmitry Osipenko
  2024-04-18 19:00 ` [PATCH v8 06/11] virtio-gpu: Support blob scanout using dmabuf fd Dmitry Osipenko
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 36+ messages in thread
From: Dmitry Osipenko @ 2024-04-18 19:00 UTC (permalink / raw)
  To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

From: Huang Rui <ray.huang@amd.com>

In preparation for adding host blob support to virtio-gpu, add virgl
resource management that allows retrieving a resource based on its ID.
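
For reference, the lookup used by the new checks is the existing
virtio_gpu_find_resource() helper from hw/display/virtio-gpu.c, which
roughly amounts to a walk over the per-device resource list:

  struct virtio_gpu_simple_resource *
  virtio_gpu_find_resource(VirtIOGPU *g, uint32_t resource_id)
  {
      struct virtio_gpu_simple_resource *res;

      QTAILQ_FOREACH(res, &g->reslist, next) {
          if (res->resource_id == resource_id) {
              return res;
          }
      }
      return NULL;
  }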

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Antonio Caggiano <quic_acaggian@quicinc.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 hw/display/virtio-gpu-virgl.c | 57 +++++++++++++++++++++++++++++++++++
 1 file changed, 57 insertions(+)

diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
index ef598d8d23ee..04f7a191c41a 100644
--- a/hw/display/virtio-gpu-virgl.c
+++ b/hw/display/virtio-gpu-virgl.c
@@ -35,11 +35,34 @@ static void virgl_cmd_create_resource_2d(VirtIOGPU *g,
 {
     struct virtio_gpu_resource_create_2d c2d;
     struct virgl_renderer_resource_create_args args;
+    struct virtio_gpu_simple_resource *res;
 
     VIRTIO_GPU_FILL_CMD(c2d);
     trace_virtio_gpu_cmd_res_create_2d(c2d.resource_id, c2d.format,
                                        c2d.width, c2d.height);
 
+    if (c2d.resource_id == 0) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource id 0 is not allowed\n",
+                      __func__);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+
+    res = virtio_gpu_find_resource(g, c2d.resource_id);
+    if (res) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already exists %d\n",
+                      __func__, c2d.resource_id);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+
+    res = g_new0(struct virtio_gpu_simple_resource, 1);
+    res->width = c2d.width;
+    res->height = c2d.height;
+    res->format = c2d.format;
+    res->resource_id = c2d.resource_id;
+    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
+
     args.handle = c2d.resource_id;
     args.target = 2;
     args.format = c2d.format;
@@ -59,11 +82,34 @@ static void virgl_cmd_create_resource_3d(VirtIOGPU *g,
 {
     struct virtio_gpu_resource_create_3d c3d;
     struct virgl_renderer_resource_create_args args;
+    struct virtio_gpu_simple_resource *res;
 
     VIRTIO_GPU_FILL_CMD(c3d);
     trace_virtio_gpu_cmd_res_create_3d(c3d.resource_id, c3d.format,
                                        c3d.width, c3d.height, c3d.depth);
 
+    if (c3d.resource_id == 0) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource id 0 is not allowed\n",
+                      __func__);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+
+    res = virtio_gpu_find_resource(g, c3d.resource_id);
+    if (res) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already exists %d\n",
+                      __func__, c3d.resource_id);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+
+    res = g_new0(struct virtio_gpu_simple_resource, 1);
+    res->width = c3d.width;
+    res->height = c3d.height;
+    res->format = c3d.format;
+    res->resource_id = c3d.resource_id;
+    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
+
     args.handle = c3d.resource_id;
     args.target = c3d.target;
     args.format = c3d.format;
@@ -82,12 +128,19 @@ static void virgl_cmd_resource_unref(VirtIOGPU *g,
                                      struct virtio_gpu_ctrl_command *cmd)
 {
     struct virtio_gpu_resource_unref unref;
+    struct virtio_gpu_simple_resource *res;
     struct iovec *res_iovs = NULL;
     int num_iovs = 0;
 
     VIRTIO_GPU_FILL_CMD(unref);
     trace_virtio_gpu_cmd_res_unref(unref.resource_id);
 
+    res = virtio_gpu_find_resource(g, unref.resource_id);
+    if (!res) {
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+
     virgl_renderer_resource_detach_iov(unref.resource_id,
                                        &res_iovs,
                                        &num_iovs);
@@ -95,6 +148,10 @@ static void virgl_cmd_resource_unref(VirtIOGPU *g,
         virtio_gpu_cleanup_mapping_iov(g, res_iovs, num_iovs);
     }
     virgl_renderer_resource_unref(unref.resource_id);
+
+    QTAILQ_REMOVE(&g->reslist, res, next);
+
+    g_free(res);
 }
 
 static void virgl_cmd_context_create(VirtIOGPU *g,
-- 
2.44.0



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v8 06/11] virtio-gpu: Support blob scanout using dmabuf fd
  2024-04-18 19:00 [PATCH v8 00/11] Support blob memory and venus on qemu Dmitry Osipenko
                   ` (4 preceding siblings ...)
  2024-04-18 19:00 ` [PATCH v8 05/11] virtio-gpu: Add virgl resource management Dmitry Osipenko
@ 2024-04-18 19:00 ` Dmitry Osipenko
  2024-04-18 19:00 ` [PATCH v8 07/11] virtio-gpu: Support suspension of commands processing Dmitry Osipenko
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 36+ messages in thread
From: Dmitry Osipenko @ 2024-04-18 19:00 UTC (permalink / raw)
  To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

From: Robert Beckett <bob.beckett@collabora.com>

Support displaying blob resources by handling the SET_SCANOUT_BLOB
command.
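
As a worked example of the framebuffer bounds check done by the new
handler, using hypothetical values (a 64x64 XRGB8888 blob scanned out in
full from offset 0):

  static uint64_t example_fb_end(void)
  {
      uint32_t bytes_pp = 4;              /* 32 bits per pixel */
      uint32_t stride = 64 * bytes_pp;    /* 256 bytes per row */
      uint64_t offset = 0;

      /* 256 * 63 + 4 * 64 = 16384 == 64 * 64 * 4 bytes; this must not
       * exceed res->blob_size for the scanout to be accepted. */
      return offset + (uint64_t)stride * (64 - 1) + bytes_pp * 64;
  }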

Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Antonio Caggiano <quic_acaggian@quicinc.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 hw/display/virtio-gpu-virgl.c  | 109 +++++++++++++++++++++++++++++++++
 hw/display/virtio-gpu.c        |  12 ++--
 include/hw/virtio/virtio-gpu.h |   7 +++
 meson.build                    |   1 +
 4 files changed, 123 insertions(+), 6 deletions(-)

diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
index 04f7a191c41a..c2057b0c2147 100644
--- a/hw/display/virtio-gpu-virgl.c
+++ b/hw/display/virtio-gpu-virgl.c
@@ -17,6 +17,8 @@
 #include "trace.h"
 #include "hw/virtio/virtio.h"
 #include "hw/virtio/virtio-gpu.h"
+#include "hw/virtio/virtio-gpu-bswap.h"
+#include "hw/virtio/virtio-gpu-pixman.h"
 
 #include "ui/egl-helpers.h"
 
@@ -61,6 +63,7 @@ static void virgl_cmd_create_resource_2d(VirtIOGPU *g,
     res->height = c2d.height;
     res->format = c2d.format;
     res->resource_id = c2d.resource_id;
+    res->dmabuf_fd = -1;
     QTAILQ_INSERT_HEAD(&g->reslist, res, next);
 
     args.handle = c2d.resource_id;
@@ -108,6 +111,7 @@ static void virgl_cmd_create_resource_3d(VirtIOGPU *g,
     res->height = c3d.height;
     res->format = c3d.format;
     res->resource_id = c3d.resource_id;
+    res->dmabuf_fd = -1;
     QTAILQ_INSERT_HEAD(&g->reslist, res, next);
 
     args.handle = c3d.resource_id;
@@ -490,6 +494,106 @@ static void virgl_cmd_get_capset(VirtIOGPU *g,
     g_free(resp);
 }
 
+#ifdef HAVE_VIRGL_RESOURCE_BLOB
+static void virgl_cmd_set_scanout_blob(VirtIOGPU *g,
+                                       struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_framebuffer fb = { 0 };
+    struct virgl_renderer_resource_info info;
+    struct virtio_gpu_simple_resource *res;
+    struct virtio_gpu_set_scanout_blob ss;
+    uint64_t fbend;
+
+    VIRTIO_GPU_FILL_CMD(ss);
+    virtio_gpu_scanout_blob_bswap(&ss);
+    trace_virtio_gpu_cmd_set_scanout_blob(ss.scanout_id, ss.resource_id,
+                                          ss.r.width, ss.r.height, ss.r.x,
+                                          ss.r.y);
+
+    if (ss.scanout_id >= g->parent_obj.conf.max_outputs) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: illegal scanout id specified %d",
+                      __func__, ss.scanout_id);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_SCANOUT_ID;
+        return;
+    }
+
+    if (ss.resource_id == 0) {
+        virtio_gpu_disable_scanout(g, ss.scanout_id);
+        return;
+    }
+
+    if (ss.width < 16 ||
+        ss.height < 16 ||
+        ss.r.x + ss.r.width > ss.width ||
+        ss.r.y + ss.r.height > ss.height) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: illegal scanout %d bounds for"
+                      " resource %d, rect (%d,%d)+%d,%d, fb %d %d\n",
+                      __func__, ss.scanout_id, ss.resource_id,
+                      ss.r.x, ss.r.y, ss.r.width, ss.r.height,
+                      ss.width, ss.height);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER;
+        return;
+    }
+
+    res = virtio_gpu_find_resource(g, ss.resource_id);
+    if (!res) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource does not exist %d\n",
+                      __func__, ss.resource_id);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+    if (virgl_renderer_resource_get_info(ss.resource_id, &info)) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource does not have info %d\n",
+                      __func__, ss.resource_id);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+    if (res->dmabuf_fd < 0) {
+        res->dmabuf_fd = info.fd;
+    }
+    if (res->dmabuf_fd < 0) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource not backed by dmabuf %d\n",
+                      __func__, ss.resource_id);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+
+    fb.format = virtio_gpu_get_pixman_format(ss.format);
+    if (!fb.format) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: pixel format not supported %d\n",
+                      __func__, ss.format);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER;
+        return;
+    }
+
+    fb.bytes_pp = DIV_ROUND_UP(PIXMAN_FORMAT_BPP(fb.format), 8);
+    fb.width = ss.width;
+    fb.height = ss.height;
+    fb.stride = ss.strides[0];
+    fb.offset = ss.offsets[0] + ss.r.x * fb.bytes_pp + ss.r.y * fb.stride;
+
+    fbend = fb.offset;
+    fbend += fb.stride * (ss.r.height - 1);
+    fbend += fb.bytes_pp * ss.r.width;
+    if (fbend > res->blob_size) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: fb end out of range\n",
+                      __func__);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER;
+        return;
+    }
+
+    g->parent_obj.enable = 1;
+    if (virtio_gpu_update_dmabuf(g, ss.scanout_id, res, &fb, &ss.r)) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: failed to update dmabuf\n",
+                      __func__);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER;
+        return;
+    }
+
+    virtio_gpu_update_scanout(g, ss.scanout_id, res, &fb, &ss.r);
+}
+#endif /* HAVE_VIRGL_RESOURCE_BLOB */
+
 void virtio_gpu_virgl_process_cmd(VirtIOGPU *g,
                                       struct virtio_gpu_ctrl_command *cmd)
 {
@@ -556,6 +660,11 @@ void virtio_gpu_virgl_process_cmd(VirtIOGPU *g,
     case VIRTIO_GPU_CMD_GET_EDID:
         virtio_gpu_get_edid(g, cmd);
         break;
+#ifdef HAVE_VIRGL_RESOURCE_BLOB
+    case VIRTIO_GPU_CMD_SET_SCANOUT_BLOB:
+        virgl_cmd_set_scanout_blob(g, cmd);
+        break;
+#endif /* HAVE_VIRGL_RESOURCE_BLOB */
     default:
         cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
         break;
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index dac272ecadb1..1e57a53d346c 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -380,7 +380,7 @@ static void virtio_gpu_resource_create_blob(VirtIOGPU *g,
     QTAILQ_INSERT_HEAD(&g->reslist, res, next);
 }
 
-static void virtio_gpu_disable_scanout(VirtIOGPU *g, int scanout_id)
+void virtio_gpu_disable_scanout(VirtIOGPU *g, int scanout_id)
 {
     struct virtio_gpu_scanout *scanout = &g->parent_obj.scanout[scanout_id];
     struct virtio_gpu_simple_resource *res;
@@ -597,11 +597,11 @@ static void virtio_unref_resource(pixman_image_t *image, void *data)
     pixman_image_unref(data);
 }
 
-static void virtio_gpu_update_scanout(VirtIOGPU *g,
-                                      uint32_t scanout_id,
-                                      struct virtio_gpu_simple_resource *res,
-                                      struct virtio_gpu_framebuffer *fb,
-                                      struct virtio_gpu_rect *r)
+void virtio_gpu_update_scanout(VirtIOGPU *g,
+                               uint32_t scanout_id,
+                               struct virtio_gpu_simple_resource *res,
+                               struct virtio_gpu_framebuffer *fb,
+                               struct virtio_gpu_rect *r)
 {
     struct virtio_gpu_simple_resource *ores;
     struct virtio_gpu_scanout *scanout;
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index ed44cdad6b34..44c676c3ca4a 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -329,6 +329,13 @@ int virtio_gpu_update_dmabuf(VirtIOGPU *g,
                              struct virtio_gpu_framebuffer *fb,
                              struct virtio_gpu_rect *r);
 
+void virtio_gpu_update_scanout(VirtIOGPU *g,
+                               uint32_t scanout_id,
+                               struct virtio_gpu_simple_resource *res,
+                               struct virtio_gpu_framebuffer *fb,
+                               struct virtio_gpu_rect *r);
+void virtio_gpu_disable_scanout(VirtIOGPU *g, int scanout_id);
+
 /* virtio-gpu-3d.c */
 void virtio_gpu_virgl_process_cmd(VirtIOGPU *g,
                                   struct virtio_gpu_ctrl_command *cmd);
diff --git a/meson.build b/meson.build
index d71d33d69b45..3ade035e8300 100644
--- a/meson.build
+++ b/meson.build
@@ -2289,6 +2289,7 @@ config_host_data.set('CONFIG_VNC_SASL', sasl.found())
 if virgl.version().version_compare('>=1.0.0')
   config_host_data.set('HAVE_VIRGL_D3D_INFO_EXT', 1)
   config_host_data.set('HAVE_VIRGL_CONTEXT_CREATE_WITH_FLAGS', 1)
+  config_host_data.set('HAVE_VIRGL_RESOURCE_BLOB', 1)
 endif
 config_host_data.set('CONFIG_VIRTFS', have_virtfs)
 config_host_data.set('CONFIG_VTE', vte.found())
-- 
2.44.0



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v8 07/11] virtio-gpu: Support suspension of commands processing
  2024-04-18 19:00 [PATCH v8 00/11] Support blob memory and venus on qemu Dmitry Osipenko
                   ` (5 preceding siblings ...)
  2024-04-18 19:00 ` [PATCH v8 06/11] virtio-gpu: Support blob scanout using dmabuf fd Dmitry Osipenko
@ 2024-04-18 19:00 ` Dmitry Osipenko
  2024-04-19  8:53   ` Akihiko Odaki
  2024-04-18 19:00 ` [PATCH v8 08/11] virtio-gpu: Handle resource blob commands Dmitry Osipenko
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 36+ messages in thread
From: Dmitry Osipenko @ 2024-04-18 19:00 UTC (permalink / raw)
  To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

Add new "suspended" flag to virtio_gpu_ctrl_command telling cmd
processor that it should stop processing commands and retry again
next time until flag is unset.
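
A hypothetical handler sketch (the example_async_done() helper below is
made up) showing the intended use: a command that has to wait for an
asynchronous event marks itself suspended and returns, and
virtio_gpu_process_cmdq() stops without dequeuing it, so the same command
is retried on the next processing pass:

  static void example_process_cmd(VirtIOGPU *g,
                                  struct virtio_gpu_ctrl_command *cmd)
  {
      if (!example_async_done(g)) {   /* hypothetical completion check */
          cmd->suspended = true;      /* cmd stays at the head of g->cmdq */
          return;
      }

      /* handle the command normally once the dependency has completed */
  }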

Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 hw/display/virtio-gpu-gl.c       | 1 +
 hw/display/virtio-gpu-rutabaga.c | 1 +
 hw/display/virtio-gpu-virgl.c    | 3 +++
 hw/display/virtio-gpu.c          | 5 +++++
 include/hw/virtio/virtio-gpu.h   | 1 +
 5 files changed, 11 insertions(+)

diff --git a/hw/display/virtio-gpu-gl.c b/hw/display/virtio-gpu-gl.c
index ba478124e2c2..a8892bcc5346 100644
--- a/hw/display/virtio-gpu-gl.c
+++ b/hw/display/virtio-gpu-gl.c
@@ -79,6 +79,7 @@ static void virtio_gpu_gl_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
         cmd->vq = vq;
         cmd->error = 0;
         cmd->finished = false;
+        cmd->suspended = false;
         QTAILQ_INSERT_TAIL(&g->cmdq, cmd, next);
         cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
     }
diff --git a/hw/display/virtio-gpu-rutabaga.c b/hw/display/virtio-gpu-rutabaga.c
index 17bf701a2163..b6e84d436fb2 100644
--- a/hw/display/virtio-gpu-rutabaga.c
+++ b/hw/display/virtio-gpu-rutabaga.c
@@ -1061,6 +1061,7 @@ static void virtio_gpu_rutabaga_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
         cmd->vq = vq;
         cmd->error = 0;
         cmd->finished = false;
+        cmd->suspended = false;
         QTAILQ_INSERT_TAIL(&g->cmdq, cmd, next);
         cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
     }
diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
index c2057b0c2147..bb9ee1eba9a0 100644
--- a/hw/display/virtio-gpu-virgl.c
+++ b/hw/display/virtio-gpu-virgl.c
@@ -670,6 +670,9 @@ void virtio_gpu_virgl_process_cmd(VirtIOGPU *g,
         break;
     }
 
+    if (cmd->suspended) {
+        return;
+    }
     if (cmd->finished) {
         return;
     }
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 1e57a53d346c..a1bd4d6914c4 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -1054,6 +1054,10 @@ void virtio_gpu_process_cmdq(VirtIOGPU *g)
         /* process command */
         vgc->process_cmd(g, cmd);
 
+        if (cmd->suspended) {
+            break;
+        }
+
         QTAILQ_REMOVE(&g->cmdq, cmd, next);
         if (virtio_gpu_stats_enabled(g->parent_obj.conf)) {
             g->stats.requests++;
@@ -1113,6 +1117,7 @@ static void virtio_gpu_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
         cmd->vq = vq;
         cmd->error = 0;
         cmd->finished = false;
+        cmd->suspended = false;
         QTAILQ_INSERT_TAIL(&g->cmdq, cmd, next);
         cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
     }
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index 44c676c3ca4a..dc24360656ce 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -132,6 +132,7 @@ struct virtio_gpu_ctrl_command {
     struct virtio_gpu_ctrl_hdr cmd_hdr;
     uint32_t error;
     bool finished;
+    bool suspended;
     QTAILQ_ENTRY(virtio_gpu_ctrl_command) next;
 };
 
-- 
2.44.0



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v8 08/11] virtio-gpu: Handle resource blob commands
  2024-04-18 19:00 [PATCH v8 00/11] Support blob memory and venus on qemu Dmitry Osipenko
                   ` (6 preceding siblings ...)
  2024-04-18 19:00 ` [PATCH v8 07/11] virtio-gpu: Support suspension of commands processing Dmitry Osipenko
@ 2024-04-18 19:00 ` Dmitry Osipenko
  2024-04-19  9:18   ` Akihiko Odaki
  2024-04-18 19:00 ` [PATCH v8 09/11] virtio-gpu: Resource UUID Dmitry Osipenko
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 36+ messages in thread
From: Dmitry Osipenko @ 2024-04-18 19:00 UTC (permalink / raw)
  To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

From: Antonio Caggiano <antonio.caggiano@collabora.com>

Support BLOB resource creation, mapping and unmapping by calling the
new stable virglrenderer 0.10 interface. This is only enabled when the
interface is available and the blob config is set, e.g.
-device virtio-vga-gl,blob=true
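
A condensed sketch of the mapping path added below (error handling and the
MemoryRegion lifetime management are omitted): the virglrenderer mapping is
wrapped in a RAM MemoryRegion and placed at the guest-requested offset
inside the device's hostmem region.

  static int example_map_blob(VirtIOGPUBase *b, MemoryRegion *mr,
                              uint32_t resource_id, uint64_t offset)
  {
      void *data;
      uint64_t size;
      int ret;

      ret = virgl_renderer_resource_map(resource_id, &data, &size);
      if (ret) {
          return -ret;
      }

      memory_region_init_ram_ptr(mr, OBJECT(mr), "blob", size, data);
      memory_region_add_subregion(&b->hostmem, offset, mr);
      memory_region_set_enabled(mr, true);

      return 0;
  }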

Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 hw/display/virtio-gpu-virgl.c  | 248 +++++++++++++++++++++++++++++++++
 hw/display/virtio-gpu.c        |   4 +-
 include/hw/virtio/virtio-gpu.h |   4 +
 3 files changed, 255 insertions(+), 1 deletion(-)

diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
index bb9ee1eba9a0..de132b22f554 100644
--- a/hw/display/virtio-gpu-virgl.c
+++ b/hw/display/virtio-gpu-virgl.c
@@ -32,6 +32,102 @@ virgl_get_egl_display(G_GNUC_UNUSED void *cookie)
 }
 #endif
 
+#ifdef HAVE_VIRGL_RESOURCE_BLOB
+struct virtio_gpu_virgl_hostmem_region {
+    MemoryRegion mr;
+    VirtIOGPUBase *b;
+    struct virtio_gpu_simple_resource *res;
+};
+
+static void virtio_gpu_virgl_hostmem_region_free(void *obj)
+{
+    MemoryRegion *mr = MEMORY_REGION(obj);
+    struct virtio_gpu_virgl_hostmem_region *vmr;
+
+    vmr = container_of(mr, struct virtio_gpu_virgl_hostmem_region, mr);
+    vmr->res->async_unmap_in_progress = false;
+    vmr->res->async_unmap_completed = true;
+    vmr->b->renderer_blocked--;
+
+    g_free(vmr);
+}
+
+static int
+virtio_gpu_virgl_map_resource_blob(VirtIOGPU *g,
+                                   struct virtio_gpu_simple_resource *res,
+                                   uint64_t offset)
+{
+    struct virtio_gpu_virgl_hostmem_region *vmr;
+    VirtIOGPUBase *b = VIRTIO_GPU_BASE(g);
+    uint64_t size;
+    void *data;
+    int ret;
+
+    if (!virtio_gpu_hostmem_enabled(b->conf)) {
+        return -EOPNOTSUPP;
+    }
+
+    ret = virgl_renderer_resource_map(res->resource_id, &data, &size);
+    if (ret) {
+        return -ret;
+    }
+
+    vmr = g_new0(struct virtio_gpu_virgl_hostmem_region, 1);
+    MemoryRegion *mr = &vmr->mr;
+    vmr->res = res;
+    vmr->b = b;
+
+    memory_region_init_ram_ptr(mr, OBJECT(mr), "blob", size, data);
+    memory_region_add_subregion(&b->hostmem, offset, mr);
+    memory_region_set_enabled(mr, true);
+
+    /*
+     * Potentially, MR could outlive the resource if MR's reference is held
+     * outside of virtio-gpu. In order to prevent unmapping resource while
+     * MR is alive, and thus, making the data pointer invalid, we will block
+     * virtio-gpu command processing until MR is fully unreferenced and
+     * released.
+     */
+    OBJECT(mr)->free = virtio_gpu_virgl_hostmem_region_free;
+
+    res->mr = mr;
+
+    return 0;
+}
+
+static bool
+virtio_gpu_virgl_unmap_resource_blob(VirtIOGPU *g,
+                                     struct virtio_gpu_simple_resource *res)
+{
+    VirtIOGPUBase *b = VIRTIO_GPU_BASE(g);
+
+    if (!res->async_unmap_in_progress && !res->async_unmap_completed) {
+        /* memory region owns self res->mr object and frees it by itself */
+        MemoryRegion *mr = res->mr;
+        res->mr = NULL;
+
+        res->async_unmap_in_progress = true;
+
+        /* render will be unblocked when MR is freed */
+        b->renderer_blocked++;
+
+        memory_region_set_enabled(mr, false);
+        memory_region_del_subregion(&b->hostmem, mr);
+        object_unparent(OBJECT(mr));
+    }
+
+    if (!res->async_unmap_completed) {
+        return false;
+    }
+
+    virgl_renderer_resource_unmap(res->resource_id);
+    res->async_unmap_completed = false;
+
+    return true;
+
+}
+#endif /* HAVE_VIRGL_RESOURCE_BLOB */
+
 static void virgl_cmd_create_resource_2d(VirtIOGPU *g,
                                          struct virtio_gpu_ctrl_command *cmd)
 {
@@ -145,6 +241,14 @@ static void virgl_cmd_resource_unref(VirtIOGPU *g,
         return;
     }
 
+    if (res->mr || cmd->suspended) {
+        bool unmapped = virtio_gpu_virgl_unmap_resource_blob(g, res);
+        cmd->suspended = !unmapped;
+        if (cmd->suspended) {
+            return;
+        }
+    }
+
     virgl_renderer_resource_detach_iov(unref.resource_id,
                                        &res_iovs,
                                        &num_iovs);
@@ -495,6 +599,141 @@ static void virgl_cmd_get_capset(VirtIOGPU *g,
 }
 
 #ifdef HAVE_VIRGL_RESOURCE_BLOB
+static void virgl_cmd_resource_create_blob(VirtIOGPU *g,
+                                           struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virgl_renderer_resource_create_blob_args virgl_args = { 0 };
+    struct virtio_gpu_resource_create_blob cblob;
+    struct virtio_gpu_simple_resource *res;
+    int ret;
+
+    if (!virtio_gpu_blob_enabled(g->parent_obj.conf)) {
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER;
+        return;
+    }
+
+    VIRTIO_GPU_FILL_CMD(cblob);
+    virtio_gpu_create_blob_bswap(&cblob);
+    trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id, cblob.size);
+
+    if (cblob.resource_id == 0) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource id 0 is not allowed\n",
+                      __func__);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+
+    res = virtio_gpu_find_resource(g, cblob.resource_id);
+    if (res) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already exists %d\n",
+                      __func__, cblob.resource_id);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+
+    res = g_new0(struct virtio_gpu_simple_resource, 1);
+    res->resource_id = cblob.resource_id;
+    res->blob_size = cblob.size;
+    res->dmabuf_fd = -1;
+
+    if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
+        ret = virtio_gpu_create_mapping_iov(g, cblob.nr_entries, sizeof(cblob),
+                                            cmd, &res->addrs,
+                                            &res->iov, &res->iov_cnt);
+        if (!ret) {
+            g_free(res);
+            cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
+            return;
+        }
+    }
+
+    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
+
+    virgl_args.res_handle = cblob.resource_id;
+    virgl_args.ctx_id = cblob.hdr.ctx_id;
+    virgl_args.blob_mem = cblob.blob_mem;
+    virgl_args.blob_id = cblob.blob_id;
+    virgl_args.blob_flags = cblob.blob_flags;
+    virgl_args.size = cblob.size;
+    virgl_args.iovecs = res->iov;
+    virgl_args.num_iovs = res->iov_cnt;
+
+    ret = virgl_renderer_resource_create_blob(&virgl_args);
+    if (ret) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: virgl blob create error: %s\n",
+                      __func__, strerror(-ret));
+        cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
+    }
+}
+
+static void virgl_cmd_resource_map_blob(VirtIOGPU *g,
+                                        struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_resource_map_blob mblob;
+    struct virtio_gpu_simple_resource *res;
+    struct virtio_gpu_resp_map_info resp;
+    int ret;
+
+    VIRTIO_GPU_FILL_CMD(mblob);
+    virtio_gpu_map_blob_bswap(&mblob);
+
+    res = virtio_gpu_find_resource(g, mblob.resource_id);
+    if (!res) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource does not exist %d\n",
+                      __func__, mblob.resource_id);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+
+    if (res->mr) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already mapped %d\n",
+                      __func__, mblob.resource_id);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+
+    ret = virtio_gpu_virgl_map_resource_blob(g, res, mblob.offset);
+    if (ret) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource map error: %s\n",
+                      __func__, strerror(ret));
+        cmd->error = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
+        return;
+    }
+
+    memset(&resp, 0, sizeof(resp));
+    resp.hdr.type = VIRTIO_GPU_RESP_OK_MAP_INFO;
+    virgl_renderer_resource_get_map_info(mblob.resource_id, &resp.map_info);
+    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
+}
+
+static void virgl_cmd_resource_unmap_blob(VirtIOGPU *g,
+                                          struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_resource_unmap_blob ublob;
+    struct virtio_gpu_simple_resource *res;
+
+    VIRTIO_GPU_FILL_CMD(ublob);
+    virtio_gpu_unmap_blob_bswap(&ublob);
+
+    res = virtio_gpu_find_resource(g, ublob.resource_id);
+    if (!res) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource does not exist %d\n",
+                      __func__, ublob.resource_id);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+
+    if (!res->mr && !cmd->suspended) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already unmapped %d\n",
+                      __func__, ublob.resource_id);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+
+    bool unmapped = virtio_gpu_virgl_unmap_resource_blob(g, res);
+    cmd->suspended = !unmapped;
+}
+
 static void virgl_cmd_set_scanout_blob(VirtIOGPU *g,
                                        struct virtio_gpu_ctrl_command *cmd)
 {
@@ -661,6 +900,15 @@ void virtio_gpu_virgl_process_cmd(VirtIOGPU *g,
         virtio_gpu_get_edid(g, cmd);
         break;
 #ifdef HAVE_VIRGL_RESOURCE_BLOB
+    case VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB:
+        virgl_cmd_resource_create_blob(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_RESOURCE_MAP_BLOB:
+        virgl_cmd_resource_map_blob(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_RESOURCE_UNMAP_BLOB:
+        virgl_cmd_resource_unmap_blob(g, cmd);
+        break;
     case VIRTIO_GPU_CMD_SET_SCANOUT_BLOB:
         virgl_cmd_set_scanout_blob(g, cmd);
         break;
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index a1bd4d6914c4..45c1f2006712 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -1483,10 +1483,12 @@ void virtio_gpu_device_realize(DeviceState *qdev, Error **errp)
             return;
         }
 
+#ifndef HAVE_VIRGL_RESOURCE_BLOB
         if (virtio_gpu_virgl_enabled(g->parent_obj.conf)) {
-            error_setg(errp, "blobs and virgl are not compatible (yet)");
+            error_setg(errp, "old virglrenderer, blob resources unsupported");
             return;
         }
+#endif
     }
 
     if (!virtio_gpu_base_device_realize(qdev,
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index dc24360656ce..b9d5e106f3c5 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -61,6 +61,10 @@ struct virtio_gpu_simple_resource {
     int dmabuf_fd;
     uint8_t *remapped;
 
+    MemoryRegion *mr;
+    bool async_unmap_completed;
+    bool async_unmap_in_progress;
+
     QTAILQ_ENTRY(virtio_gpu_simple_resource) next;
 };
 
-- 
2.44.0



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v8 09/11] virtio-gpu: Resource UUID
  2024-04-18 19:00 [PATCH v8 00/11] Support blob memory and venus on qemu Dmitry Osipenko
                   ` (7 preceding siblings ...)
  2024-04-18 19:00 ` [PATCH v8 08/11] virtio-gpu: Handle resource blob commands Dmitry Osipenko
@ 2024-04-18 19:00 ` Dmitry Osipenko
  2024-04-19  9:29   ` Akihiko Odaki
  2024-04-24 12:52   ` Dmitry Osipenko
  2024-04-18 19:00 ` [PATCH v8 10/11] virtio-gpu: Register capsets dynamically Dmitry Osipenko
                   ` (2 subsequent siblings)
  11 siblings, 2 replies; 36+ messages in thread
From: Dmitry Osipenko @ 2024-04-18 19:00 UTC (permalink / raw)
  To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

From: Antonio Caggiano <antonio.caggiano@collabora.com>

Enable the resource UUID feature and implement the resource assign-UUID
command. UUID feature availability is mandatory for the Vulkan Venus context.

UUID is intended for sharing dmabufs between virtio devices on the host. Qemu
doesn't have a second virtio device to share with, thus a simple stub UUID
implementation is enough. A more complete implementation using a global UUID
resource table might become interesting for multi-GPU cases.

Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 hw/display/trace-events        |  1 +
 hw/display/virtio-gpu-base.c   |  1 +
 hw/display/virtio-gpu-virgl.c  | 31 +++++++++++++++++++++++++++++++
 hw/display/virtio-gpu.c        |  5 +++++
 include/hw/virtio/virtio-gpu.h |  3 +++
 5 files changed, 41 insertions(+)

diff --git a/hw/display/trace-events b/hw/display/trace-events
index 2336a0ca1570..54d6894c59f4 100644
--- a/hw/display/trace-events
+++ b/hw/display/trace-events
@@ -41,6 +41,7 @@ virtio_gpu_cmd_res_create_blob(uint32_t res, uint64_t size) "res 0x%x, size %" P
 virtio_gpu_cmd_res_unref(uint32_t res) "res 0x%x"
 virtio_gpu_cmd_res_back_attach(uint32_t res) "res 0x%x"
 virtio_gpu_cmd_res_back_detach(uint32_t res) "res 0x%x"
+virtio_gpu_cmd_res_assign_uuid(uint32_t res) "res 0x%x"
 virtio_gpu_cmd_res_xfer_toh_2d(uint32_t res) "res 0x%x"
 virtio_gpu_cmd_res_xfer_toh_3d(uint32_t res) "res 0x%x"
 virtio_gpu_cmd_res_xfer_fromh_3d(uint32_t res) "res 0x%x"
diff --git a/hw/display/virtio-gpu-base.c b/hw/display/virtio-gpu-base.c
index 4fc7ef8896c1..610926348bd9 100644
--- a/hw/display/virtio-gpu-base.c
+++ b/hw/display/virtio-gpu-base.c
@@ -225,6 +225,7 @@ virtio_gpu_base_get_features(VirtIODevice *vdev, uint64_t features,
     if (virtio_gpu_virgl_enabled(g->conf) ||
         virtio_gpu_rutabaga_enabled(g->conf)) {
         features |= (1 << VIRTIO_GPU_F_VIRGL);
+        features |= (1 << VIRTIO_GPU_F_RESOURCE_UUID);
     }
     if (virtio_gpu_edid_enabled(g->conf)) {
         features |= (1 << VIRTIO_GPU_F_EDID);
diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
index de132b22f554..eee3816b987f 100644
--- a/hw/display/virtio-gpu-virgl.c
+++ b/hw/display/virtio-gpu-virgl.c
@@ -160,6 +160,7 @@ static void virgl_cmd_create_resource_2d(VirtIOGPU *g,
     res->format = c2d.format;
     res->resource_id = c2d.resource_id;
     res->dmabuf_fd = -1;
+    qemu_uuid_generate(&res->uuid);
     QTAILQ_INSERT_HEAD(&g->reslist, res, next);
 
     args.handle = c2d.resource_id;
@@ -208,6 +209,7 @@ static void virgl_cmd_create_resource_3d(VirtIOGPU *g,
     res->format = c3d.format;
     res->resource_id = c3d.resource_id;
     res->dmabuf_fd = -1;
+    qemu_uuid_generate(&res->uuid);
     QTAILQ_INSERT_HEAD(&g->reslist, res, next);
 
     args.handle = c3d.resource_id;
@@ -635,6 +637,7 @@ static void virgl_cmd_resource_create_blob(VirtIOGPU *g,
     res->resource_id = cblob.resource_id;
     res->blob_size = cblob.size;
     res->dmabuf_fd = -1;
+    qemu_uuid_generate(&res->uuid);
 
     if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
         ret = virtio_gpu_create_mapping_iov(g, cblob.nr_entries, sizeof(cblob),
@@ -833,6 +836,31 @@ static void virgl_cmd_set_scanout_blob(VirtIOGPU *g,
 }
 #endif /* HAVE_VIRGL_RESOURCE_BLOB */
 
+static void virgl_cmd_assign_uuid(VirtIOGPU *g,
+                                  struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_resource_assign_uuid assign;
+    struct virtio_gpu_resp_resource_uuid resp;
+    struct virtio_gpu_simple_resource *res;
+
+    VIRTIO_GPU_FILL_CMD(assign);
+    virtio_gpu_bswap_32(&assign, sizeof(assign));
+    trace_virtio_gpu_cmd_res_assign_uuid(assign.resource_id);
+
+    res = virtio_gpu_find_resource(g, assign.resource_id);
+    if (!res) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource does not exist %d\n",
+                      __func__, assign.resource_id);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+
+    memset(&resp, 0, sizeof(resp));
+    resp.hdr.type = VIRTIO_GPU_RESP_OK_RESOURCE_UUID;
+    memcpy(resp.uuid, res->uuid.data, sizeof(resp.uuid));
+    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
+}
+
 void virtio_gpu_virgl_process_cmd(VirtIOGPU *g,
                                       struct virtio_gpu_ctrl_command *cmd)
 {
@@ -887,6 +915,9 @@ void virtio_gpu_virgl_process_cmd(VirtIOGPU *g,
         /* TODO add security */
         virgl_cmd_ctx_detach_resource(g, cmd);
         break;
+    case VIRTIO_GPU_CMD_RESOURCE_ASSIGN_UUID:
+        virgl_cmd_assign_uuid(g, cmd);
+        break;
     case VIRTIO_GPU_CMD_GET_CAPSET_INFO:
         virgl_cmd_get_capset_info(g, cmd);
         break;
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 45c1f2006712..fbf5c0e6b8b7 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -1236,6 +1236,7 @@ static int virtio_gpu_save(QEMUFile *f, void *opaque, size_t size,
         }
         qemu_put_buffer(f, (void *)pixman_image_get_data(res->image),
                         pixman_image_get_stride(res->image) * res->height);
+        qemu_put_buffer(f, res->uuid.data, sizeof(res->uuid.data));
     }
     qemu_put_be32(f, 0); /* end of list */
 
@@ -1333,6 +1334,7 @@ static int virtio_gpu_load(QEMUFile *f, void *opaque, size_t size,
         }
         qemu_get_buffer(f, (void *)pixman_image_get_data(res->image),
                         pixman_image_get_stride(res->image) * res->height);
+        qemu_get_buffer(f, res->uuid.data, sizeof(res->uuid.data));
 
         if (!virtio_gpu_load_restore_mapping(g, res)) {
             pixman_image_unref(res->image);
@@ -1371,6 +1373,7 @@ static int virtio_gpu_blob_save(QEMUFile *f, void *opaque, size_t size,
             qemu_put_be64(f, res->addrs[i]);
             qemu_put_be32(f, res->iov[i].iov_len);
         }
+        qemu_put_buffer(f, res->uuid.data, sizeof(res->uuid.data));
     }
     qemu_put_be32(f, 0); /* end of list */
 
@@ -1405,6 +1408,8 @@ static int virtio_gpu_blob_load(QEMUFile *f, void *opaque, size_t size,
             res->iov[i].iov_len = qemu_get_be32(f);
         }
 
+        qemu_get_buffer(f, res->uuid.data, sizeof(res->uuid.data));
+
         if (!virtio_gpu_load_restore_mapping(g, res)) {
             g_free(res);
             return -EINVAL;
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index b9d5e106f3c5..d2a0d542fbb3 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -19,6 +19,7 @@
 #include "ui/console.h"
 #include "hw/virtio/virtio.h"
 #include "qemu/log.h"
+#include "qemu/uuid.h"
 #include "sysemu/vhost-user-backend.h"
 
 #include "standard-headers/linux/virtio_gpu.h"
@@ -65,6 +66,8 @@ struct virtio_gpu_simple_resource {
     bool async_unmap_completed;
     bool async_unmap_in_progress;
 
+    QemuUUID uuid;
+
     QTAILQ_ENTRY(virtio_gpu_simple_resource) next;
 };
 
-- 
2.44.0



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v8 10/11] virtio-gpu: Register capsets dynamically
  2024-04-18 19:00 [PATCH v8 00/11] Support blob memory and venus on qemu Dmitry Osipenko
                   ` (8 preceding siblings ...)
  2024-04-18 19:00 ` [PATCH v8 09/11] virtio-gpu: Resource UUID Dmitry Osipenko
@ 2024-04-18 19:00 ` Dmitry Osipenko
  2024-04-19  9:35   ` Akihiko Odaki
  2024-04-18 19:00 ` [PATCH v8 11/11] virtio-gpu: Support Venus context Dmitry Osipenko
  2024-04-23  8:30 ` [PATCH v8 00/11] Support blob memory and venus on qemu Alex Bennée
  11 siblings, 1 reply; 36+ messages in thread
From: Dmitry Osipenko @ 2024-04-18 19:00 UTC (permalink / raw)
  To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

From: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>

virtio_gpu_virgl_get_num_capsets() returns "num_capsets", but we can't
assume that capset_index 1 is always VIRGL2 once we support more capsets,
like the Venus and DRM capsets. Register capsets dynamically to avoid that
problem.

Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 hw/display/virtio-gpu-virgl.c  | 37 ++++++++++++++++++++++------------
 include/hw/virtio/virtio-gpu.h |  3 +++
 2 files changed, 27 insertions(+), 13 deletions(-)

diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
index eee3816b987f..c0e1ca3ff339 100644
--- a/hw/display/virtio-gpu-virgl.c
+++ b/hw/display/virtio-gpu-virgl.c
@@ -558,19 +558,12 @@ static void virgl_cmd_get_capset_info(VirtIOGPU *g,
     VIRTIO_GPU_FILL_CMD(info);
 
     memset(&resp, 0, sizeof(resp));
-    if (info.capset_index == 0) {
-        resp.capset_id = VIRTIO_GPU_CAPSET_VIRGL;
-        virgl_renderer_get_cap_set(resp.capset_id,
-                                   &resp.capset_max_version,
-                                   &resp.capset_max_size);
-    } else if (info.capset_index == 1) {
-        resp.capset_id = VIRTIO_GPU_CAPSET_VIRGL2;
+
+    if (info.capset_index < g->num_capsets) {
+        resp.capset_id = g->capset_ids[info.capset_index];
         virgl_renderer_get_cap_set(resp.capset_id,
                                    &resp.capset_max_version,
                                    &resp.capset_max_size);
-    } else {
-        resp.capset_max_version = 0;
-        resp.capset_max_size = 0;
     }
     resp.hdr.type = VIRTIO_GPU_RESP_OK_CAPSET_INFO;
     virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
@@ -1120,12 +1113,30 @@ int virtio_gpu_virgl_init(VirtIOGPU *g)
     return 0;
 }
 
+static void virtio_gpu_virgl_add_capset(VirtIOGPU *g, uint32_t capset_id)
+{
+    g->capset_ids = g_realloc_n(g->capset_ids, g->num_capsets + 1,
+                                sizeof(*g->capset_ids));
+    g->capset_ids[g->num_capsets++] = capset_id;
+}
+
 int virtio_gpu_virgl_get_num_capsets(VirtIOGPU *g)
 {
     uint32_t capset2_max_ver, capset2_max_size;
+
+    if (g->num_capsets) {
+        return g->num_capsets;
+    }
+
+    /* VIRGL is always supported. */
+    virtio_gpu_virgl_add_capset(g, VIRTIO_GPU_CAPSET_VIRGL);
+
     virgl_renderer_get_cap_set(VIRTIO_GPU_CAPSET_VIRGL2,
-                              &capset2_max_ver,
-                              &capset2_max_size);
+                               &capset2_max_ver,
+                               &capset2_max_size);
+    if (capset2_max_ver) {
+        virtio_gpu_virgl_add_capset(g, VIRTIO_GPU_CAPSET_VIRGL2);
+    }
 
-    return capset2_max_ver ? 2 : 1;
+    return g->num_capsets;
 }
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index d2a0d542fbb3..3d7d001a85c5 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -218,6 +218,9 @@ struct VirtIOGPU {
         QTAILQ_HEAD(, VGPUDMABuf) bufs;
         VGPUDMABuf *primary[VIRTIO_GPU_MAX_SCANOUTS];
     } dmabuf;
+
+    uint32_t *capset_ids;
+    uint32_t num_capsets;
 };
 
 struct VirtIOGPUClass {
-- 
2.44.0



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v8 11/11] virtio-gpu: Support Venus context
  2024-04-18 19:00 [PATCH v8 00/11] Support blob memory and venus on qemu Dmitry Osipenko
                   ` (9 preceding siblings ...)
  2024-04-18 19:00 ` [PATCH v8 10/11] virtio-gpu: Register capsets dynamically Dmitry Osipenko
@ 2024-04-18 19:00 ` Dmitry Osipenko
  2024-04-19  9:44   ` Akihiko Odaki
  2024-04-23  8:30 ` [PATCH v8 00/11] Support blob memory and venus on qemu Alex Bennée
  11 siblings, 1 reply; 36+ messages in thread
From: Dmitry Osipenko @ 2024-04-18 19:00 UTC (permalink / raw)
  To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

From: Antonio Caggiano <antonio.caggiano@collabora.com>

Request the Venus context when initializing VirGL, if the vulkan=true flag
is set for the virtio-gpu device.

Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 hw/display/virtio-gpu-virgl.c  | 14 ++++++++++++++
 hw/display/virtio-gpu.c        | 15 +++++++++++++++
 include/hw/virtio/virtio-gpu.h |  3 +++
 meson.build                    |  1 +
 4 files changed, 33 insertions(+)

diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
index c0e1ca3ff339..2eac09370b84 100644
--- a/hw/display/virtio-gpu-virgl.c
+++ b/hw/display/virtio-gpu-virgl.c
@@ -1095,6 +1095,11 @@ int virtio_gpu_virgl_init(VirtIOGPU *g)
         flags |= VIRGL_RENDERER_D3D11_SHARE_TEXTURE;
     }
 #endif
+#ifdef VIRGL_RENDERER_VENUS
+    if (virtio_gpu_venus_enabled(g->parent_obj.conf)) {
+        flags |= VIRGL_RENDERER_VENUS | VIRGL_RENDERER_RENDER_SERVER;
+    }
+#endif
 
     ret = virgl_renderer_init(g, flags, &virtio_gpu_3d_cbs);
     if (ret != 0) {
@@ -1138,5 +1143,14 @@ int virtio_gpu_virgl_get_num_capsets(VirtIOGPU *g)
         virtio_gpu_virgl_add_capset(g, VIRTIO_GPU_CAPSET_VIRGL2);
     }
 
+    if (virtio_gpu_venus_enabled(g->parent_obj.conf)) {
+        virgl_renderer_get_cap_set(VIRTIO_GPU_CAPSET_VENUS,
+                                   &capset2_max_ver,
+                                   &capset2_max_size);
+        if (capset2_max_size) {
+            virtio_gpu_virgl_add_capset(g, VIRTIO_GPU_CAPSET_VENUS);
+        }
+    }
+
     return g->num_capsets;
 }
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index fbf5c0e6b8b7..a811a86dd600 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -1496,6 +1496,19 @@ void virtio_gpu_device_realize(DeviceState *qdev, Error **errp)
 #endif
     }
 
+    if (virtio_gpu_venus_enabled(g->parent_obj.conf)) {
+#ifdef HAVE_VIRGL_VENUS
+        if (!virtio_gpu_blob_enabled(g->parent_obj.conf) ||
+            !virtio_gpu_hostmem_enabled(g->parent_obj.conf)) {
+            error_setg(errp, "venus requires enabled blob and hostmem options");
+            return;
+        }
+#else
+        error_setg(errp, "old virglrenderer, venus unsupported");
+        return;
+#endif
+    }
+
     if (!virtio_gpu_base_device_realize(qdev,
                                         virtio_gpu_handle_ctrl_cb,
                                         virtio_gpu_handle_cursor_cb,
@@ -1672,6 +1685,8 @@ static Property virtio_gpu_properties[] = {
     DEFINE_PROP_BIT("blob", VirtIOGPU, parent_obj.conf.flags,
                     VIRTIO_GPU_FLAG_BLOB_ENABLED, false),
     DEFINE_PROP_SIZE("hostmem", VirtIOGPU, parent_obj.conf.hostmem, 0),
+    DEFINE_PROP_BIT("vulkan", VirtIOGPU, parent_obj.conf.flags,
+                    VIRTIO_GPU_FLAG_VENUS_ENABLED, false),
     DEFINE_PROP_END_OF_LIST(),
 };
 
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index 3d7d001a85c5..87d812972988 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -106,6 +106,7 @@ enum virtio_gpu_base_conf_flags {
     VIRTIO_GPU_FLAG_BLOB_ENABLED,
     VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED,
     VIRTIO_GPU_FLAG_RUTABAGA_ENABLED,
+    VIRTIO_GPU_FLAG_VENUS_ENABLED,
 };
 
 #define virtio_gpu_virgl_enabled(_cfg) \
@@ -124,6 +125,8 @@ enum virtio_gpu_base_conf_flags {
     (_cfg.flags & (1 << VIRTIO_GPU_FLAG_RUTABAGA_ENABLED))
 #define virtio_gpu_hostmem_enabled(_cfg) \
     (_cfg.hostmem > 0)
+#define virtio_gpu_venus_enabled(_cfg) \
+    (_cfg.flags & (1 << VIRTIO_GPU_FLAG_VENUS_ENABLED))
 
 struct virtio_gpu_base_conf {
     uint32_t max_outputs;
diff --git a/meson.build b/meson.build
index 3ade035e8300..30c4eaa43de0 100644
--- a/meson.build
+++ b/meson.build
@@ -2290,6 +2290,7 @@ if virgl.version().version_compare('>=1.0.0')
   config_host_data.set('HAVE_VIRGL_D3D_INFO_EXT', 1)
   config_host_data.set('HAVE_VIRGL_CONTEXT_CREATE_WITH_FLAGS', 1)
   config_host_data.set('HAVE_VIRGL_RESOURCE_BLOB', 1)
+  config_host_data.set('HAVE_VIRGL_VENUS', 1)
 endif
 config_host_data.set('CONFIG_VIRTFS', have_virtfs)
 config_host_data.set('CONFIG_VTE', vte.found())
-- 
2.44.0



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 07/11] virtio-gpu: Support suspension of commands processing
  2024-04-18 19:00 ` [PATCH v8 07/11] virtio-gpu: Support suspension of commands processing Dmitry Osipenko
@ 2024-04-19  8:53   ` Akihiko Odaki
  2024-04-24  9:43     ` Dmitry Osipenko
  0 siblings, 1 reply; 36+ messages in thread
From: Akihiko Odaki @ 2024-04-19  8:53 UTC (permalink / raw)
  To: Dmitry Osipenko, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

On 2024/04/19 4:00, Dmitry Osipenko wrote:
> Add new "suspended" flag to virtio_gpu_ctrl_command telling cmd
> processor that it should stop processing commands and retry again
> next time until flag is unset.
> 
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>

This flag shouldn't be added to virtio_gpu_ctrl_command. suspended is 
just !finished in virtio-gpu.c. Only virtio_gpu_virgl_process_cmd() 
needs the distinction between suspended and !finished, so it is not 
appropriate to add this flag to the common structure.
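
A rough sketch of what I mean (hypothetical signature, untested): pass the
state out of the blob handlers and keep it local to virtio-gpu-virgl.c, e.g.:

    static void virgl_cmd_resource_unmap_blob(VirtIOGPU *g,
                                              struct virtio_gpu_ctrl_command *cmd,
                                              bool *cmd_suspended)
    {
        ...
        *cmd_suspended = !virtio_gpu_virgl_unmap_resource_blob(g, res);
    }

so that virtio_gpu_virgl_process_cmd() can decide from a local bool whether
to leave the command queued, without touching the common structure.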

Regards,
Akihiko Odaki


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 08/11] virtio-gpu: Handle resource blob commands
  2024-04-18 19:00 ` [PATCH v8 08/11] virtio-gpu: Handle resource blob commands Dmitry Osipenko
@ 2024-04-19  9:18   ` Akihiko Odaki
  2024-04-24 10:30     ` Dmitry Osipenko
  0 siblings, 1 reply; 36+ messages in thread
From: Akihiko Odaki @ 2024-04-19  9:18 UTC (permalink / raw)
  To: Dmitry Osipenko, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

On 2024/04/19 4:00, Dmitry Osipenko wrote:
> From: Antonio Caggiano <antonio.caggiano@collabora.com>
> 
> Support BLOB resources creation, mapping and unmapping by calling the
> new stable virglrenderer 0.10 interface. Only enabled when available and
> via the blob config. E.g. -device virtio-vga-gl,blob=true
> 
> Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
> Signed-off-by: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
> Signed-off-by: Huang Rui <ray.huang@amd.com>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> ---
>   hw/display/virtio-gpu-virgl.c  | 248 +++++++++++++++++++++++++++++++++
>   hw/display/virtio-gpu.c        |   4 +-
>   include/hw/virtio/virtio-gpu.h |   4 +
>   3 files changed, 255 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
> index bb9ee1eba9a0..de132b22f554 100644
> --- a/hw/display/virtio-gpu-virgl.c
> +++ b/hw/display/virtio-gpu-virgl.c
> @@ -32,6 +32,102 @@ virgl_get_egl_display(G_GNUC_UNUSED void *cookie)
>   }
>   #endif
>   
> +#ifdef HAVE_VIRGL_RESOURCE_BLOB
> +struct virtio_gpu_virgl_hostmem_region {
> +    MemoryRegion mr;
> +    VirtIOGPUBase *b;
> +    struct virtio_gpu_simple_resource *res;
> +};
> +
> +static void virtio_gpu_virgl_hostmem_region_free(void *obj)
> +{
> +    MemoryRegion *mr = MEMORY_REGION(obj);
> +    struct virtio_gpu_virgl_hostmem_region *vmr;
> +
> +    vmr = container_of(mr, struct virtio_gpu_virgl_hostmem_region, mr);
> +    vmr->res->async_unmap_in_progress = false;
> +    vmr->res->async_unmap_completed = true;
> +    vmr->b->renderer_blocked--;

Resume the command queue processing here.
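
For example, right after the renderer_blocked-- above (untested sketch; a
bottom half may be needed if calling back into the command queue from this
context isn't safe):

    if (!vmr->b->renderer_blocked) {
        virtio_gpu_process_cmdq(VIRTIO_GPU(vmr->b));
    }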

> +
> +    g_free(vmr);
> +}
> +
> +static int
> +virtio_gpu_virgl_map_resource_blob(VirtIOGPU *g,
> +                                   struct virtio_gpu_simple_resource *res,
> +                                   uint64_t offset)
> +{
> +    struct virtio_gpu_virgl_hostmem_region *vmr;
> +    VirtIOGPUBase *b = VIRTIO_GPU_BASE(g);
> +    uint64_t size;
> +    void *data;
> +    int ret;
> +
> +    if (!virtio_gpu_hostmem_enabled(b->conf)) {
> +        return -EOPNOTSUPP;

Log a message here instead of picking an error number.
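
E.g. something roughly like this (the exact wording and errno are just a
suggestion):

    if (!virtio_gpu_hostmem_enabled(b->conf)) {
        qemu_log_mask(LOG_GUEST_ERROR, "%s: hostmem is not enabled\n",
                      __func__);
        return -EOPNOTSUPP;
    }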

> +    }
> +
> +    ret = virgl_renderer_resource_map(res->resource_id, &data, &size);
> +    if (ret) {
> +        return -ret;
> +    }
> +
> +    vmr = g_new0(struct virtio_gpu_virgl_hostmem_region, 1);
> +    MemoryRegion *mr = &vmr->mr;

Mixed declarations are not allowed; see: docs/devel/style.rst
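
I.e. declare it at the top of the function and assign it after the
allocation, roughly:

    MemoryRegion *mr;
    ...
    vmr = g_new0(struct virtio_gpu_virgl_hostmem_region, 1);
    mr = &vmr->mr;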

> +    vmr->res = res;
> +    vmr->b = b;
> +
> +    memory_region_init_ram_ptr(mr, OBJECT(mr), "blob", size, data);
> +    memory_region_add_subregion(&b->hostmem, offset, mr);
> +    memory_region_set_enabled(mr, true);
> +
> +    /*
> +     * Potentially, MR could outlive the resource if MR's reference is held
> +     * outside of virtio-gpu. In order to prevent unmapping resource while
> +     * MR is alive, and thus, making the data pointer invalid, we will block
> +     * virtio-gpu command processing until MR is fully unreferenced and
> +     * released.
> +     */
> +    OBJECT(mr)->free = virtio_gpu_virgl_hostmem_region_free;
> +
> +    res->mr = mr;
> +
> +    return 0;
> +}
> +
> +static bool
> +virtio_gpu_virgl_unmap_resource_blob(VirtIOGPU *g,
> +                                     struct virtio_gpu_simple_resource *res)
> +{
> +    VirtIOGPUBase *b = VIRTIO_GPU_BASE(g);
> +
> +    if (!res->async_unmap_in_progress && !res->async_unmap_completed) {
> +        /* memory region owns self res->mr object and frees it by itself */
> +        MemoryRegion *mr = res->mr;
> +        res->mr = NULL;
> +
> +        res->async_unmap_in_progress = true;
> +
> +        /* render will be unblocked when MR is freed */
> +        b->renderer_blocked++;
> +
> +        memory_region_set_enabled(mr, false);
> +        memory_region_del_subregion(&b->hostmem, mr);
> +        object_unparent(OBJECT(mr));
> +    }
> +
> +    if (!res->async_unmap_completed) {

This check is unnecessary as the command processing is blocked until the 
unmap operation completes.

> +        return false;
> +    }
> +
> +    virgl_renderer_resource_unmap(res->resource_id);
> +    res->async_unmap_completed = false;
> +
> +    return true;
> +
> +}
> +#endif /* HAVE_VIRGL_RESOURCE_BLOB */
> +
>   static void virgl_cmd_create_resource_2d(VirtIOGPU *g,
>                                            struct virtio_gpu_ctrl_command *cmd)
>   {
> @@ -145,6 +241,14 @@ static void virgl_cmd_resource_unref(VirtIOGPU *g,
>           return;
>       }
>   
> +    if (res->mr || cmd->suspended) {
> +        bool unmapped = virtio_gpu_virgl_unmap_resource_blob(g, res);
> +        cmd->suspended = !unmapped;
> +        if (cmd->suspended) {
> +            return;
> +        }
> +    }
> +
>       virgl_renderer_resource_detach_iov(unref.resource_id,
>                                          &res_iovs,
>                                          &num_iovs);
> @@ -495,6 +599,141 @@ static void virgl_cmd_get_capset(VirtIOGPU *g,
>   }
>   
>   #ifdef HAVE_VIRGL_RESOURCE_BLOB
> +static void virgl_cmd_resource_create_blob(VirtIOGPU *g,
> +                                           struct virtio_gpu_ctrl_command *cmd)
> +{
> +    struct virgl_renderer_resource_create_blob_args virgl_args = { 0 };
> +    struct virtio_gpu_resource_create_blob cblob;
> +    struct virtio_gpu_simple_resource *res;
> +    int ret;
> +
> +    if (!virtio_gpu_blob_enabled(g->parent_obj.conf)) {
> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER;
> +        return;
> +    }
> +
> +    VIRTIO_GPU_FILL_CMD(cblob);
> +    virtio_gpu_create_blob_bswap(&cblob);
> +    trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id, cblob.size);
> +
> +    if (cblob.resource_id == 0) {
> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource id 0 is not allowed\n",
> +                      __func__);
> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> +        return;
> +    }
> +
> +    res = virtio_gpu_find_resource(g, cblob.resource_id);
> +    if (res) {
> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already exists %d\n",
> +                      __func__, cblob.resource_id);
> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> +        return;
> +    }
> +
> +    res = g_new0(struct virtio_gpu_simple_resource, 1);
> +    res->resource_id = cblob.resource_id;
> +    res->blob_size = cblob.size;
> +    res->dmabuf_fd = -1;
> +
> +    if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
> +        ret = virtio_gpu_create_mapping_iov(g, cblob.nr_entries, sizeof(cblob),
> +                                            cmd, &res->addrs,
> +                                            &res->iov, &res->iov_cnt);
> +        if (!ret) {
> +            g_free(res);
> +            cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
> +            return;
> +        }
> +    }
> +
> +    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
> +
> +    virgl_args.res_handle = cblob.resource_id;
> +    virgl_args.ctx_id = cblob.hdr.ctx_id;
> +    virgl_args.blob_mem = cblob.blob_mem;
> +    virgl_args.blob_id = cblob.blob_id;
> +    virgl_args.blob_flags = cblob.blob_flags;
> +    virgl_args.size = cblob.size;
> +    virgl_args.iovecs = res->iov;
> +    virgl_args.num_iovs = res->iov_cnt;
> +
> +    ret = virgl_renderer_resource_create_blob(&virgl_args);
> +    if (ret) {
> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: virgl blob create error: %s\n",
> +                      __func__, strerror(-ret));
> +        cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
> +    }
> +}
> +
> +static void virgl_cmd_resource_map_blob(VirtIOGPU *g,
> +                                        struct virtio_gpu_ctrl_command *cmd)
> +{
> +    struct virtio_gpu_resource_map_blob mblob;
> +    struct virtio_gpu_simple_resource *res;
> +    struct virtio_gpu_resp_map_info resp;
> +    int ret;
> +
> +    VIRTIO_GPU_FILL_CMD(mblob);
> +    virtio_gpu_map_blob_bswap(&mblob);
> +
> +    res = virtio_gpu_find_resource(g, mblob.resource_id);
> +    if (!res) {
> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource does not exist %d\n",
> +                      __func__, mblob.resource_id);
> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> +        return;
> +    }
> +
> +    if (res->mr) {
> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already mapped %d\n",
> +                      __func__, mblob.resource_id);
> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> +        return;
> +    }
> +
> +    ret = virtio_gpu_virgl_map_resource_blob(g, res, mblob.offset);
> +    if (ret) {
> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource map error: %s\n",
> +                      __func__, strerror(ret));
> +        cmd->error = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
> +        return;
> +    }
> +
> +    memset(&resp, 0, sizeof(resp));
> +    resp.hdr.type = VIRTIO_GPU_RESP_OK_MAP_INFO;
> +    virgl_renderer_resource_get_map_info(mblob.resource_id, &resp.map_info);
> +    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
> +}
> +
> +static void virgl_cmd_resource_unmap_blob(VirtIOGPU *g,
> +                                          struct virtio_gpu_ctrl_command *cmd)
> +{
> +    struct virtio_gpu_resource_unmap_blob ublob;
> +    struct virtio_gpu_simple_resource *res;
> +
> +    VIRTIO_GPU_FILL_CMD(ublob);
> +    virtio_gpu_unmap_blob_bswap(&ublob);
> +
> +    res = virtio_gpu_find_resource(g, ublob.resource_id);
> +    if (!res) {
> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource does not exist %d\n",
> +                      __func__, ublob.resource_id);
> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> +        return;
> +    }
> +
> +    if (!res->mr && !cmd->suspended) {
> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already unmapped %d\n",
> +                      __func__, ublob.resource_id);
> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> +        return;
> +    }
> +
> +    bool unmapped = virtio_gpu_virgl_unmap_resource_blob(g, res);
> +    cmd->suspended = !unmapped;
> +}
> +
>   static void virgl_cmd_set_scanout_blob(VirtIOGPU *g,
>                                          struct virtio_gpu_ctrl_command *cmd)
>   {
> @@ -661,6 +900,15 @@ void virtio_gpu_virgl_process_cmd(VirtIOGPU *g,
>           virtio_gpu_get_edid(g, cmd);
>           break;
>   #ifdef HAVE_VIRGL_RESOURCE_BLOB
> +    case VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB:
> +        virgl_cmd_resource_create_blob(g, cmd);
> +        break;
> +    case VIRTIO_GPU_CMD_RESOURCE_MAP_BLOB:
> +        virgl_cmd_resource_map_blob(g, cmd);
> +        break;
> +    case VIRTIO_GPU_CMD_RESOURCE_UNMAP_BLOB:
> +        virgl_cmd_resource_unmap_blob(g, cmd);
> +        break;
>       case VIRTIO_GPU_CMD_SET_SCANOUT_BLOB:
>           virgl_cmd_set_scanout_blob(g, cmd);
>           break;
> diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
> index a1bd4d6914c4..45c1f2006712 100644
> --- a/hw/display/virtio-gpu.c
> +++ b/hw/display/virtio-gpu.c
> @@ -1483,10 +1483,12 @@ void virtio_gpu_device_realize(DeviceState *qdev, Error **errp)
>               return;
>           }
>   
> +#ifndef HAVE_VIRGL_RESOURCE_BLOB
>           if (virtio_gpu_virgl_enabled(g->parent_obj.conf)) {
> -            error_setg(errp, "blobs and virgl are not compatible (yet)");
> +            error_setg(errp, "old virglrenderer, blob resources unsupported");
>               return;
>           }
> +#endif
>       }
>   
>       if (!virtio_gpu_base_device_realize(qdev,
> diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
> index dc24360656ce..b9d5e106f3c5 100644
> --- a/include/hw/virtio/virtio-gpu.h
> +++ b/include/hw/virtio/virtio-gpu.h
> @@ -61,6 +61,10 @@ struct virtio_gpu_simple_resource {
>       int dmabuf_fd;
>       uint8_t *remapped;
>   
> +    MemoryRegion *mr;
> +    bool async_unmap_completed;
> +    bool async_unmap_in_progress;
> +

Don't add fields to virtio_gpu_simple_resource but instead create a 
struct that embeds virtio_gpu_simple_resource in virtio-gpu-virgl.c.
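
Something like this, for example (the struct name is just a suggestion):

    struct virtio_gpu_virgl_resource {
        struct virtio_gpu_simple_resource base;

        MemoryRegion *mr;
        bool async_unmap_completed;
        bool async_unmap_in_progress;
    };

and let the virgl paths get back to it from the base pointer with
container_of().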

>       QTAILQ_ENTRY(virtio_gpu_simple_resource) next;
>   };
>   


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 09/11] virtio-gpu: Resource UUID
  2024-04-18 19:00 ` [PATCH v8 09/11] virtio-gpu: Resource UUID Dmitry Osipenko
@ 2024-04-19  9:29   ` Akihiko Odaki
  2024-04-23 17:43     ` Dmitry Osipenko
  2024-04-24 12:52   ` Dmitry Osipenko
  1 sibling, 1 reply; 36+ messages in thread
From: Akihiko Odaki @ 2024-04-19  9:29 UTC (permalink / raw)
  To: Dmitry Osipenko, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

On 2024/04/19 4:00, Dmitry Osipenko wrote:
> From: Antonio Caggiano <antonio.caggiano@collabora.com>
> 
> Enable resource UUID feature and implement command resource assign UUID.
> UUID feature availability is mandatory for Vulkan Venus context.
> 
> UUID is intended for sharing dmabufs between virtio devices on host. Qemu
> doesn't have second virtio device for sharing, thus a simple stub UUID
> implementation is enough. More complete implementation using global UUID
> resource table might become interesting for a multi-gpu cases.

Isn't it possible to add two virtio-gpu devices even now?

A new subsection should also be added for migration compatibility; see: 
docs/devel/migration/main.rst
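
The usual shape is a VMStateDescription with a .needed callback hooked into
the device's subsections; roughly like below (names are hypothetical, and
wiring it into virtio-gpu's hand-rolled save/load handlers is not shown):

    static bool virtio_gpu_resource_uuid_needed(void *opaque)
    {
        VirtIOGPU *g = opaque;

        return virtio_gpu_virgl_enabled(g->parent_obj.conf);
    }

    static const VMStateDescription vmstate_virtio_gpu_resource_uuid = {
        .name = "virtio-gpu/resource-uuid",
        .version_id = 1,
        .minimum_version_id = 1,
        .needed = virtio_gpu_resource_uuid_needed,
        .fields = (const VMStateField[]) {
            /* per-resource UUID data would go here */
            VMSTATE_END_OF_LIST()
        },
    };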


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 10/11] virtio-gpu: Register capsets dynamically
  2024-04-18 19:00 ` [PATCH v8 10/11] virtio-gpu: Register capsets dynamically Dmitry Osipenko
@ 2024-04-19  9:35   ` Akihiko Odaki
  0 siblings, 0 replies; 36+ messages in thread
From: Akihiko Odaki @ 2024-04-19  9:35 UTC (permalink / raw)
  To: Dmitry Osipenko, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

On 2024/04/19 4:00, Dmitry Osipenko wrote:
> From: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
> 
> virtio_gpu_virgl_get_num_capsets will return "num_capsets", but we can't
> assume that capset_index 1 is always VIRGL2 once we'll support more capsets,
> like Venus and DRM capsets. Register capsets dynamically to avoid that problem.
> 
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> ---
>   hw/display/virtio-gpu-virgl.c  | 37 ++++++++++++++++++++++------------
>   include/hw/virtio/virtio-gpu.h |  3 +++
>   2 files changed, 27 insertions(+), 13 deletions(-)
> 
> diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
> index eee3816b987f..c0e1ca3ff339 100644
> --- a/hw/display/virtio-gpu-virgl.c
> +++ b/hw/display/virtio-gpu-virgl.c
> @@ -558,19 +558,12 @@ static void virgl_cmd_get_capset_info(VirtIOGPU *g,
>       VIRTIO_GPU_FILL_CMD(info);
>   
>       memset(&resp, 0, sizeof(resp));
> -    if (info.capset_index == 0) {
> -        resp.capset_id = VIRTIO_GPU_CAPSET_VIRGL;
> -        virgl_renderer_get_cap_set(resp.capset_id,
> -                                   &resp.capset_max_version,
> -                                   &resp.capset_max_size);
> -    } else if (info.capset_index == 1) {
> -        resp.capset_id = VIRTIO_GPU_CAPSET_VIRGL2;
> +
> +    if (info.capset_index < g->num_capsets) {
> +        resp.capset_id = g->capset_ids[info.capset_index];
>           virgl_renderer_get_cap_set(resp.capset_id,
>                                      &resp.capset_max_version,
>                                      &resp.capset_max_size);
> -    } else {
> -        resp.capset_max_version = 0;
> -        resp.capset_max_size = 0;
>       }
>       resp.hdr.type = VIRTIO_GPU_RESP_OK_CAPSET_INFO;
>       virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
> @@ -1120,12 +1113,30 @@ int virtio_gpu_virgl_init(VirtIOGPU *g)
>       return 0;
>   }
>   
> +static void virtio_gpu_virgl_add_capset(VirtIOGPU *g, uint32_t capset_id)
> +{
> +    g->capset_ids = g_realloc_n(g->capset_ids, g->num_capsets + 1,
> +                                sizeof(*g->capset_ids));
> +    g->capset_ids[g->num_capsets++] = capset_id;
> +}
> +
>   int virtio_gpu_virgl_get_num_capsets(VirtIOGPU *g)
>   {
>       uint32_t capset2_max_ver, capset2_max_size;
> +
> +    if (g->num_capsets) {
> +        return g->num_capsets;
> +    }
> +
> +    /* VIRGL is always supported. */
> +    virtio_gpu_virgl_add_capset(g, VIRTIO_GPU_CAPSET_VIRGL);
> +
>       virgl_renderer_get_cap_set(VIRTIO_GPU_CAPSET_VIRGL2,
> -                              &capset2_max_ver,
> -                              &capset2_max_size);
> +                               &capset2_max_ver,
> +                               &capset2_max_size);
> +    if (capset2_max_ver) {
> +        virtio_gpu_virgl_add_capset(g, VIRTIO_GPU_CAPSET_VIRGL2);
> +    }
>   
> -    return capset2_max_ver ? 2 : 1;
> +    return g->num_capsets;
>   }
> diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
> index d2a0d542fbb3..3d7d001a85c5 100644
> --- a/include/hw/virtio/virtio-gpu.h
> +++ b/include/hw/virtio/virtio-gpu.h
> @@ -218,6 +218,9 @@ struct VirtIOGPU {
>           QTAILQ_HEAD(, VGPUDMABuf) bufs;
>           VGPUDMABuf *primary[VIRTIO_GPU_MAX_SCANOUTS];
>       } dmabuf;
> +
> +    uint32_t *capset_ids;
> +    uint32_t num_capsets;

Use GArray.
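
I.e. with capset_ids declared as a GArray *, roughly (untested):

    g->capset_ids = g_array_new(false, false, sizeof(uint32_t));
    ...
    g_array_append_val(g->capset_ids, capset_id);
    ...
    if (info.capset_index < g->capset_ids->len) {
        resp.capset_id = g_array_index(g->capset_ids, uint32_t,
                                       info.capset_index);
    }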


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 11/11] virtio-gpu: Support Venus context
  2024-04-18 19:00 ` [PATCH v8 11/11] virtio-gpu: Support Venus context Dmitry Osipenko
@ 2024-04-19  9:44   ` Akihiko Odaki
  0 siblings, 0 replies; 36+ messages in thread
From: Akihiko Odaki @ 2024-04-19  9:44 UTC (permalink / raw)
  To: Dmitry Osipenko, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

On 2024/04/19 4:00, Dmitry Osipenko wrote:
> From: Antonio Caggiano <antonio.caggiano@collabora.com>
> 
> Request Venus when initializing VirGL and if vulkan=true flag is set for
> virtio-gpu device.
> 
> Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
> Signed-off-by: Huang Rui <ray.huang@amd.com>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> ---
>   hw/display/virtio-gpu-virgl.c  | 14 ++++++++++++++
>   hw/display/virtio-gpu.c        | 15 +++++++++++++++
>   include/hw/virtio/virtio-gpu.h |  3 +++
>   meson.build                    |  1 +
>   4 files changed, 33 insertions(+)
> 
> diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
> index c0e1ca3ff339..2eac09370b84 100644
> --- a/hw/display/virtio-gpu-virgl.c
> +++ b/hw/display/virtio-gpu-virgl.c
> @@ -1095,6 +1095,11 @@ int virtio_gpu_virgl_init(VirtIOGPU *g)
>           flags |= VIRGL_RENDERER_D3D11_SHARE_TEXTURE;
>       }
>   #endif
> +#ifdef VIRGL_RENDERER_VENUS
> +    if (virtio_gpu_venus_enabled(g->parent_obj.conf)) {
> +        flags |= VIRGL_RENDERER_VENUS | VIRGL_RENDERER_RENDER_SERVER;
> +    }
> +#endif
>   
>       ret = virgl_renderer_init(g, flags, &virtio_gpu_3d_cbs);
>       if (ret != 0) {
> @@ -1138,5 +1143,14 @@ int virtio_gpu_virgl_get_num_capsets(VirtIOGPU *g)
>           virtio_gpu_virgl_add_capset(g, VIRTIO_GPU_CAPSET_VIRGL2);
>       }
>   
> +    if (virtio_gpu_venus_enabled(g->parent_obj.conf)) {
> +        virgl_renderer_get_cap_set(VIRTIO_GPU_CAPSET_VENUS,
> +                                   &capset2_max_ver,
> +                                   &capset2_max_size);
> +        if (capset2_max_size) {

Now capset2_max_ver and capset2_max_size are misnomers as they are used 
not only for VIRTIO_GPU_CAPSET_VIRGL2 but also for VIRTIO_GPU_CAPSET_VENUS. 
Just removing the "capset2_" prefix would be fine.
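
I.e. just:

    uint32_t max_ver, max_size;

reused for both the VIRGL2 and the VENUS queries.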

> +            virtio_gpu_virgl_add_capset(g, VIRTIO_GPU_CAPSET_VENUS);
> +        }
> +    }
> +
>       return g->num_capsets;
>   }
> diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
> index fbf5c0e6b8b7..a811a86dd600 100644
> --- a/hw/display/virtio-gpu.c
> +++ b/hw/display/virtio-gpu.c
> @@ -1496,6 +1496,19 @@ void virtio_gpu_device_realize(DeviceState *qdev, Error **errp)
>   #endif
>       }
>   
> +    if (virtio_gpu_venus_enabled(g->parent_obj.conf)) {
> +#ifdef HAVE_VIRGL_VENUS
> +        if (!virtio_gpu_blob_enabled(g->parent_obj.conf) ||
> +            !virtio_gpu_hostmem_enabled(g->parent_obj.conf)) {
> +            error_setg(errp, "venus requires enabled blob and hostmem options");
> +            return;
> +        }
> +#else
> +        error_setg(errp, "old virglrenderer, venus unsupported");
> +        return;
> +#endif
> +    }
> +
>       if (!virtio_gpu_base_device_realize(qdev,
>                                           virtio_gpu_handle_ctrl_cb,
>                                           virtio_gpu_handle_cursor_cb,
> @@ -1672,6 +1685,8 @@ static Property virtio_gpu_properties[] = {
>       DEFINE_PROP_BIT("blob", VirtIOGPU, parent_obj.conf.flags,
>                       VIRTIO_GPU_FLAG_BLOB_ENABLED, false),
>       DEFINE_PROP_SIZE("hostmem", VirtIOGPU, parent_obj.conf.hostmem, 0),
> +    DEFINE_PROP_BIT("vulkan", VirtIOGPU, parent_obj.conf.flags,
> +                    VIRTIO_GPU_FLAG_VENUS_ENABLED, false),

This property shouldn't be added here because it is specific to 
virtio-gpu-gl.
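
I.e. define it in the property list of hw/display/virtio-gpu-gl.c instead,
roughly (the array name is assumed; untested):

    static Property virtio_gpu_gl_properties[] = {
        ...
        DEFINE_PROP_BIT("vulkan", VirtIOGPU, parent_obj.conf.flags,
                        VIRTIO_GPU_FLAG_VENUS_ENABLED, false),
        DEFINE_PROP_END_OF_LIST(),
    };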


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 00/11] Support blob memory and venus on qemu
  2024-04-18 19:00 [PATCH v8 00/11] Support blob memory and venus on qemu Dmitry Osipenko
                   ` (10 preceding siblings ...)
  2024-04-18 19:00 ` [PATCH v8 11/11] virtio-gpu: Support Venus context Dmitry Osipenko
@ 2024-04-23  8:30 ` Alex Bennée
  2024-04-23 17:37   ` Dmitry Osipenko
  11 siblings, 1 reply; 36+ messages in thread
From: Alex Bennée @ 2024-04-23  8:30 UTC (permalink / raw)
  To: Dmitry Osipenko
  Cc: Akihiko Odaki, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, qemu-devel, Gurchetan Singh,
	ernunes, Alyssa Ross, Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang, Manos Pitsidianakis

Dmitry Osipenko <dmitry.osipenko@collabora.com> writes:

> Hello,
>
> This series enables Vulkan Venus context support on virtio-gpu.
>
> All virglrender and almost all Linux kernel prerequisite changes
> needed by Venus are already in upstream. For kernel there is a pending
> KVM patchset that fixes mapping of compound pages needed for DRM drivers
> using TTM [1], othewrwise hostmem blob mapping will fail with a KVM error
> from Qemu.
>
> [1]
> https://lore.kernel.org/kvm/20240229025759.1187910-1-stevensd@google.com/

Following the link for the TTM/KVM patches on the kernel side points at
changes for AMD cards getting NAK'ed, so I'm a little confused about which
parts are needed.

Is this only relevant for ensuring the virtual mappings to the
underlying hardware aren't moved around when KVM is exporting those
pages to the guest?

Our interest is in Xen, which obviously mediates everything through stage
2 mappings from the real PA to the IPA the domains see. However AIUI
all the blob allocation is managed by the GEM/TTM layer of whichever
kernel is responsible for driving the GPU. Does this layer work with
kernel vaddrs or the underlying IPA of the resources? We shouldn't
expect the IPA to change between allocations, should we?

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 00/11] Support blob memory and venus on qemu
  2024-04-23  8:30 ` [PATCH v8 00/11] Support blob memory and venus on qemu Alex Bennée
@ 2024-04-23 17:37   ` Dmitry Osipenko
  0 siblings, 0 replies; 36+ messages in thread
From: Dmitry Osipenko @ 2024-04-23 17:37 UTC (permalink / raw)
  To: Alex Bennée
  Cc: Akihiko Odaki, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, qemu-devel, Gurchetan Singh,
	ernunes, Alyssa Ross, Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang, Manos Pitsidianakis

On 4/23/24 11:30, Alex Bennée wrote:
> Dmitry Osipenko <dmitry.osipenko@collabora.com> writes:
> 
>> Hello,
>>
>> This series enables Vulkan Venus context support on virtio-gpu.
>>
>> All virglrender and almost all Linux kernel prerequisite changes
>> needed by Venus are already in upstream. For kernel there is a pending
>> KVM patchset that fixes mapping of compound pages needed for DRM drivers
>> using TTM [1], othewrwise hostmem blob mapping will fail with a KVM error
>> from Qemu.
>>
>> [1]
>> https://lore.kernel.org/kvm/20240229025759.1187910-1-stevensd@google.com/
> 
> Following the link for the TTM/KVM patches on the kernel side points at
> changes for AMD cards getting NAK'ed so I'm a little confused as to what
> parts are needed.

I wouldn't say that the patches are NAK'ed, it's more that they are having
trouble getting reviewed. Without the KVM patches host blobs don't work,
depending on the host GPU driver and kernel configuration.

It's actually not only TTM drivers that require the KVM changes;
non-TTM GPU drivers that use huge pages may also need them.
You may need a patched KVM for the i915 driver, which doesn't use TTM,
depending on whether transparent huge pages are enabled in the kernel
config.

> Is this only relevant for ensuring the virtual mappings to the
> underlying hardware aren't moved around when KVM is exporting those
> pages to the guest?

Yes, the host GPU driver needs to handle guest-access page faults to keep
the pages in place.

> Our interest is in Xen which obviously mediates everything through stage
> 2 mappings to from the real PA to the IPA the domains see. However AIUI
> all the blob allocation is managed by the GEM/TTM layer of whichever
> kernel is responsible for driving the GPU. Does this layer work with
> kernel vaddr or the underlying IPA of the resources? We shouldn't
> expect the IPA to change between allocations should we?

TTM works with memory pages and moves them around. It may swap out
pages and then relies on working page-fault notifications to swap the
pages back in.

Whether the PA stays fixed, I don't know for sure. Robert Beckett or
somebody from AMD should know better how it works for Xen and may
comment on it.
-- 
Best regards,
Dmitry



^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 09/11] virtio-gpu: Resource UUID
  2024-04-19  9:29   ` Akihiko Odaki
@ 2024-04-23 17:43     ` Dmitry Osipenko
  0 siblings, 0 replies; 36+ messages in thread
From: Dmitry Osipenko @ 2024-04-23 17:43 UTC (permalink / raw)
  To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

On 4/19/24 12:29, Akihiko Odaki wrote:
> On 2024/04/19 4:00, Dmitry Osipenko wrote:
>> From: Antonio Caggiano <antonio.caggiano@collabora.com>
>>
>> Enable resource UUID feature and implement command resource assign UUID.
>> UUID feature availability is mandatory for Vulkan Venus context.
>>
>> UUID is intended for sharing dmabufs between virtio devices on host. Qemu
>> doesn't have second virtio device for sharing, thus a simple stub UUID
>> implementation is enough. More complete implementation using global UUID
>> resource table might become interesting for a multi-gpu cases.
> 
> Isn't it possible to add two virtio-gpu devices even now?

We can add two virtio-gpu devices, but these devices can't interact with
each other efficiently. They won't be able to share host blob resources
without a proper UUID implementation.

> A new subsection should also be added for migration compatibility; see:
> docs/devel/migration/main.rst

Will update the docs, thanks.

-- 
Best regards,
Dmitry



^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 07/11] virtio-gpu: Support suspension of commands processing
  2024-04-19  8:53   ` Akihiko Odaki
@ 2024-04-24  9:43     ` Dmitry Osipenko
  2024-04-27  5:48       ` Akihiko Odaki
  0 siblings, 1 reply; 36+ messages in thread
From: Dmitry Osipenko @ 2024-04-24  9:43 UTC (permalink / raw)
  To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

On 4/19/24 11:53, Akihiko Odaki wrote:
> On 2024/04/19 4:00, Dmitry Osipenko wrote:
>> Add new "suspended" flag to virtio_gpu_ctrl_command telling cmd
>> processor that it should stop processing commands and retry again
>> next time until flag is unset.
>>
>> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> 
> This flag shouldn't be added to virtio_gpu_ctrl_command. suspended is
> just !finished in virtio-gpu.c. Only virtio_gpu_virgl_process_cmd()
> needs the distinction of suspended and !finished so it is not
> appropriate to add this flag to the common structure.

The VIRTIO_GPU_FILL_CMD() macro returns void and this macro is used by
every function processing commands. Changing process_cmd() to return
bool would require changing all those functions. Not worthwhile to
change it, IMO.

The flag reflects the exact command status. The !finished + !suspended
combination means that the command is fenced, i.e. these flags don't
have exactly the same meaning.

I'd keep the flag if there are no better suggestions.
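
For clarity, the processing loop then looks roughly like this (a
simplified sketch of virtio_gpu_process_cmdq(); the real loop has more
bookkeeping):

VirtIOGPUClass *vgc = VIRTIO_GPU_GET_CLASS(g);
struct virtio_gpu_ctrl_command *cmd;

while (!QTAILQ_EMPTY(&g->cmdq)) {
    cmd = QTAILQ_FIRST(&g->cmdq);

    if (g->parent_obj.renderer_blocked) {
        break;
    }

    vgc->process_cmd(g, cmd);

    if (cmd->suspended) {
        /* command asked to be retried: keep it at the head of the
         * queue and stop processing until it gets resumed */
        break;
    }

    QTAILQ_REMOVE(&g->cmdq, cmd, next);
    /* ...fenced commands go to g->fenceq, finished ones are freed... */
}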

-- 
Best regards,
Dmitry




* Re: [PATCH v8 08/11] virtio-gpu: Handle resource blob commands
  2024-04-19  9:18   ` Akihiko Odaki
@ 2024-04-24 10:30     ` Dmitry Osipenko
  2024-04-27  5:52       ` Akihiko Odaki
  0 siblings, 1 reply; 36+ messages in thread
From: Dmitry Osipenko @ 2024-04-24 10:30 UTC (permalink / raw)
  To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

On 4/19/24 12:18, Akihiko Odaki wrote:
>> @@ -61,6 +61,10 @@ struct virtio_gpu_simple_resource {
>>       int dmabuf_fd;
>>       uint8_t *remapped;
>>   +    MemoryRegion *mr;
>> +    bool async_unmap_completed;
>> +    bool async_unmap_in_progress;
>> +
> 
> Don't add fields to virtio_gpu_simple_resource but instead create a
> struct that embeds virtio_gpu_simple_resource in virtio-gpu-virgl.c.

Please give a justification. I'd rather rename
virtio_gpu_simple_resource (s/_simple//). The simple resource already
supports blobs, and the added fields are directly related to the blob.
I don't see why another struct is needed.

-- 
Best regards,
Dmitry




* Re: [PATCH v8 09/11] virtio-gpu: Resource UUID
  2024-04-18 19:00 ` [PATCH v8 09/11] virtio-gpu: Resource UUID Dmitry Osipenko
  2024-04-19  9:29   ` Akihiko Odaki
@ 2024-04-24 12:52   ` Dmitry Osipenko
  1 sibling, 0 replies; 36+ messages in thread
From: Dmitry Osipenko @ 2024-04-24 12:52 UTC (permalink / raw)
  To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

On 4/18/24 22:00, Dmitry Osipenko wrote:
> @@ -1405,6 +1408,8 @@ static int virtio_gpu_blob_load(QEMUFile *f, void *opaque, size_t size,
>              res->iov[i].iov_len = qemu_get_be32(f);
>          }
>  
> +        qemu_get_buffer(f, res->uuid.data, sizeof(res->uuid.data));

Saving/loading the UUID without changing the VM version was a bad idea.
I'll drop it in v9; we don't need to save/load the UUID for virgl anyway.

-- 
Best regards,
Dmitry




* Re: [PATCH v8 07/11] virtio-gpu: Support suspension of commands processing
  2024-04-24  9:43     ` Dmitry Osipenko
@ 2024-04-27  5:48       ` Akihiko Odaki
  2024-05-01 19:02         ` Dmitry Osipenko
  0 siblings, 1 reply; 36+ messages in thread
From: Akihiko Odaki @ 2024-04-27  5:48 UTC (permalink / raw)
  To: Dmitry Osipenko, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

On 2024/04/24 18:43, Dmitry Osipenko wrote:
> On 4/19/24 11:53, Akihiko Odaki wrote:
>> On 2024/04/19 4:00, Dmitry Osipenko wrote:
>>> Add new "suspended" flag to virtio_gpu_ctrl_command telling cmd
>>> processor that it should stop processing commands and retry again
>>> next time until flag is unset.
>>>
>>> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
>>
>> This flag shouldn't be added to virtio_gpu_ctrl_command. suspended is
>> just !finished in virtio-gpu.c. Only virtio_gpu_virgl_process_cmd()
>> needs the distinction of suspended and !finished so it is not
>> appropriate to add this flag to the common structure.
> 
> The VIRTIO_GPU_FILL_CMD() macro returns void and this macro is used by
> every function processing commands. Changing process_cmd() to return
> bool will require to change all those functions. Not worthwhile to
> change it, IMO. >
> The flag reflects the exact command status. The !finished + !suspended
> means that command is fenced, i.e. these flags don't have exactly same
> meaning.

It is not necessary to change the signature of process_cmd(). You can 
just refer to !finished. No need to have the suspended flag.

> 
> I'd keep the flag if there are no better suggestions.
> 



* Re: [PATCH v8 08/11] virtio-gpu: Handle resource blob commands
  2024-04-24 10:30     ` Dmitry Osipenko
@ 2024-04-27  5:52       ` Akihiko Odaki
  2024-05-01 19:20         ` Dmitry Osipenko
  0 siblings, 1 reply; 36+ messages in thread
From: Akihiko Odaki @ 2024-04-27  5:52 UTC (permalink / raw)
  To: Dmitry Osipenko, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

On 2024/04/24 19:30, Dmitry Osipenko wrote:
> On 4/19/24 12:18, Akihiko Odaki wrote:
>>> @@ -61,6 +61,10 @@ struct virtio_gpu_simple_resource {
>>>        int dmabuf_fd;
>>>        uint8_t *remapped;
>>>    +    MemoryRegion *mr;
>>> +    bool async_unmap_completed;
>>> +    bool async_unmap_in_progress;
>>> +
>>
>> Don't add fields to virtio_gpu_simple_resource but instead create a
>> struct that embeds virtio_gpu_simple_resource in virtio-gpu-virgl.c.
> 
> Please give a justification. I'd rather rename
> virtio_gpu_simple_resource s/_simple//. Simple resource already supports
> blob and the added fields are directly related to the blob. Don't see
> why another struct is needed.
> 

Because mapping is only implemented in virtio-gpu-gl, while blob itself
is also implemented in virtio-gpu.
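
Concretely, I mean something like this in virtio-gpu-virgl.c (just a
sketch, the naming is up to you):

/* wrap the common resource with the virgl-only mapping state */
struct virtio_gpu_virgl_resource {
    struct virtio_gpu_simple_resource base;

    MemoryRegion *mr;
};

static struct virtio_gpu_virgl_resource *
to_virgl_resource(struct virtio_gpu_simple_resource *res)
{
    return container_of(res, struct virtio_gpu_virgl_resource, base);
}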



* Re: [PATCH v8 07/11] virtio-gpu: Support suspension of commands processing
  2024-04-27  5:48       ` Akihiko Odaki
@ 2024-05-01 19:02         ` Dmitry Osipenko
  2024-05-05  6:37           ` Akihiko Odaki
  0 siblings, 1 reply; 36+ messages in thread
From: Dmitry Osipenko @ 2024-05-01 19:02 UTC (permalink / raw)
  To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

On 4/27/24 08:48, Akihiko Odaki wrote:
>>
>> The VIRTIO_GPU_FILL_CMD() macro returns void and this macro is used by
>> every function processing commands. Changing process_cmd() to return
>> bool will require to change all those functions. Not worthwhile to
>> change it, IMO. >
>> The flag reflects the exact command status. The !finished + !suspended
>> means that command is fenced, i.e. these flags don't have exactly same
>> meaning.
> 
> It is not necessary to change the signature of process_cmd(). You can
> just refer to !finished. No need to have the suspended flag.

Not sure what you mean. The !finished flag says that the cmd is fenced;
the fenced command is added to the polling list and the fence is checked
periodically by the fence_poll timer, while the next virgl commands are
executed at the same time.

This is completely different from the suspension, where the whole cmd
processing is blocked until the command is resumed.
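
For reference, the fence path is essentially this (simplified from the
virgl fence-poll code; the period and details may differ slightly):

static void virtio_gpu_fence_poll(void *opaque)
{
    VirtIOGPU *g = opaque;

    virgl_renderer_poll();        /* retires completed fences */
    virtio_gpu_process_cmdq(g);   /* keep executing queued commands */

    if (!QTAILQ_EMPTY(&g->cmdq) || !QTAILQ_EMPTY(&g->fenceq)) {
        /* re-arm the timer while work is still pending */
        timer_mod(g->fence_poll,
                  qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) + 10);
    }
}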

-- 
Best regards,
Dmitry




* Re: [PATCH v8 08/11] virtio-gpu: Handle resource blob commands
  2024-04-27  5:52       ` Akihiko Odaki
@ 2024-05-01 19:20         ` Dmitry Osipenko
  2024-05-05  6:47           ` Akihiko Odaki
  0 siblings, 1 reply; 36+ messages in thread
From: Dmitry Osipenko @ 2024-05-01 19:20 UTC (permalink / raw)
  To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

On 4/27/24 08:52, Akihiko Odaki wrote:
> On 2024/04/24 19:30, Dmitry Osipenko wrote:
>> On 4/19/24 12:18, Akihiko Odaki wrote:
>>>> @@ -61,6 +61,10 @@ struct virtio_gpu_simple_resource {
>>>>        int dmabuf_fd;
>>>>        uint8_t *remapped;
>>>>    +    MemoryRegion *mr;
>>>> +    bool async_unmap_completed;
>>>> +    bool async_unmap_in_progress;
>>>> +
>>>
>>> Don't add fields to virtio_gpu_simple_resource but instead create a
>>> struct that embeds virtio_gpu_simple_resource in virtio-gpu-virgl.c.
>>
>> Please give a justification. I'd rather rename
>> virtio_gpu_simple_resource s/_simple//. Simple resource already supports
>> blob and the added fields are directly related to the blob. Don't see
>> why another struct is needed.
>>
> 
> Because mapping is only implemented in virtio-gpu-gl while blob itself
> is implemented also in virtio-gpu.

Rutabaga maps blobs and it should unmap blobs asynchronously as well,
AFAICT.

-- 
Best regards,
Dmitry




* Re: [PATCH v8 07/11] virtio-gpu: Support suspension of commands processing
  2024-05-01 19:02         ` Dmitry Osipenko
@ 2024-05-05  6:37           ` Akihiko Odaki
  2024-05-09 12:39             ` Dmitry Osipenko
  0 siblings, 1 reply; 36+ messages in thread
From: Akihiko Odaki @ 2024-05-05  6:37 UTC (permalink / raw)
  To: Dmitry Osipenko, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

On 2024/05/02 4:02, Dmitry Osipenko wrote:
> On 4/27/24 08:48, Akihiko Odaki wrote:
>>>
>>> The VIRTIO_GPU_FILL_CMD() macro returns void and this macro is used by
>>> every function processing commands. Changing process_cmd() to return
>>> bool will require to change all those functions. Not worthwhile to
>>> change it, IMO. >
>>> The flag reflects the exact command status. The !finished + !suspended
>>> means that command is fenced, i.e. these flags don't have exactly same
>>> meaning.
>>
>> It is not necessary to change the signature of process_cmd(). You can
>> just refer to !finished. No need to have the suspended flag.
> 
> Not sure what you're meaning. The !finished says that cmd is fenced,
> this fenced command is added to the polling list and the fence is
> checked periodically by the fence_poll timer, meanwhile next virgl
> commands are executed in the same time.
> 
> This is completely different from the suspension where whole cmd
> processing is blocked until command is resumed.
> 

!finished means you have not sent a response with 
virtio_gpu_ctrl_response(). Currently such a situation only happens when 
a fence is requested and virtio_gpu_process_cmdq() exploits the fact, 
but we are adding a new case without a fence.

So we need to add code to check if we are fencing or not in 
virtio_gpu_process_cmdq(). This can be achieved by evaluating the 
following expression as done in virtio_gpu_virgl_process_cmd():
cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE
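
I.e. roughly (a sketch of the check in virtio_gpu_process_cmdq()):

if (!cmd->finished &&
    !(cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE)) {
    /* no response was sent and no fence was requested: the command
     * is suspended, so stop and retry it on the next invocation */
    break;
}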



* Re: [PATCH v8 08/11] virtio-gpu: Handle resource blob commands
  2024-05-01 19:20         ` Dmitry Osipenko
@ 2024-05-05  6:47           ` Akihiko Odaki
  2024-05-09 12:29             ` Dmitry Osipenko
  0 siblings, 1 reply; 36+ messages in thread
From: Akihiko Odaki @ 2024-05-05  6:47 UTC (permalink / raw)
  To: Dmitry Osipenko, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

On 2024/05/02 4:20, Dmitry Osipenko wrote:
> On 4/27/24 08:52, Akihiko Odaki wrote:
>> On 2024/04/24 19:30, Dmitry Osipenko wrote:
>>> On 4/19/24 12:18, Akihiko Odaki wrote:
>>>>> @@ -61,6 +61,10 @@ struct virtio_gpu_simple_resource {
>>>>>         int dmabuf_fd;
>>>>>         uint8_t *remapped;
>>>>>     +    MemoryRegion *mr;
>>>>> +    bool async_unmap_completed;
>>>>> +    bool async_unmap_in_progress;
>>>>> +
>>>>
>>>> Don't add fields to virtio_gpu_simple_resource but instead create a
>>>> struct that embeds virtio_gpu_simple_resource in virtio-gpu-virgl.c.
>>>
>>> Please give a justification. I'd rather rename
>>> virtio_gpu_simple_resource s/_simple//. Simple resource already supports
>>> blob and the added fields are directly related to the blob. Don't see
>>> why another struct is needed.
>>>
>>
>> Because mapping is only implemented in virtio-gpu-gl while blob itself
>> is implemented also in virtio-gpu.
> 
> Rutabaga maps blobs and it should unmap blobs asynchronously as well,
> AFAICT.
> 

Right. It makes sense to put mr in struct virtio_gpu_simple_resource in 
preparation for such a situation.

Based on this discussion, I think it is fine to put mr either in struct 
virtio_gpu_simple_resource or a distinct struct. However if you put mr 
in struct virtio_gpu_simple_resource, the logic that manages 
MemoryRegion should also be moved to virtio-gpu.c for consistency.



* Re: [PATCH v8 08/11] virtio-gpu: Handle resource blob commands
  2024-05-05  6:47           ` Akihiko Odaki
@ 2024-05-09 12:29             ` Dmitry Osipenko
  0 siblings, 0 replies; 36+ messages in thread
From: Dmitry Osipenko @ 2024-05-09 12:29 UTC (permalink / raw)
  To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

On 5/5/24 09:47, Akihiko Odaki wrote:
> On 2024/05/02 4:20, Dmitry Osipenko wrote:
>> On 4/27/24 08:52, Akihiko Odaki wrote:
>>> On 2024/04/24 19:30, Dmitry Osipenko wrote:
>>>> On 4/19/24 12:18, Akihiko Odaki wrote:
>>>>>> @@ -61,6 +61,10 @@ struct virtio_gpu_simple_resource {
>>>>>>         int dmabuf_fd;
>>>>>>         uint8_t *remapped;
>>>>>>     +    MemoryRegion *mr;
>>>>>> +    bool async_unmap_completed;
>>>>>> +    bool async_unmap_in_progress;
>>>>>> +
>>>>>
>>>>> Don't add fields to virtio_gpu_simple_resource but instead create a
>>>>> struct that embeds virtio_gpu_simple_resource in virtio-gpu-virgl.c.
>>>>
>>>> Please give a justification. I'd rather rename
>>>> virtio_gpu_simple_resource s/_simple//. Simple resource already
>>>> supports
>>>> blob and the added fields are directly related to the blob. Don't see
>>>> why another struct is needed.
>>>>
>>>
>>> Because mapping is only implemented in virtio-gpu-gl while blob itself
>>> is implemented also in virtio-gpu.
>>
>> Rutabaga maps blobs and it should unmap blobs asynchronously as well,
>> AFAICT.
>>
> 
> Right. It makes sense to put mr in struct virtio_gpu_simple_resource in
> preparation for such a situation.
> 
> Based on this discussion, I think it is fine to put mr either in struct
> virtio_gpu_simple_resource or a distinct struct. However if you put mr
> in struct virtio_gpu_simple_resource, the logic that manages
> MemoryRegion should also be moved to virtio-gpu.c for consistency.

Rutabaga uses static MRs. It will either need a different workaround or
will have to move to dynamic MRs. I'll keep using a distinct struct for
now.

AFAICT, it's a lesser problem for rutabaga because a static MR isn't
subject to the dynamic-MR use-after-free (UAF) problem that virgl has.
On the other hand, rutabaga re-initializes an already-initialized static
MR object on each new mapping; that looks like a bug and it will need to
move to dynamic MRs.
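
To be clear about terminology, by "dynamic MR" I mean roughly the pattern
below, where a region is created per mapping and torn down again; the
hostmem container field and the size/hva/offset variables here are
approximations, not the exact code:

/* map: back a MemoryRegion with the host blob and plug it into the
 * device's hostmem container */
res->mr = g_new0(MemoryRegion, 1);
memory_region_init_ram_ptr(res->mr, OBJECT(g), "blob", size, hva);
memory_region_add_subregion(&g->parent_obj.hostmem, offset, res->mr);

/* unmap: detach the region now, but the MemoryRegion object itself may
 * only be freed after every reference to it is gone -- that's why the
 * unmap completes asynchronously and where the use-after-free risk
 * comes from if the struct is released too early */
memory_region_del_subregion(&g->parent_obj.hostmem, res->mr);
object_unparent(OBJECT(res->mr));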

-- 
Best regards,
Dmitry




* Re: [PATCH v8 07/11] virtio-gpu: Support suspension of commands processing
  2024-05-05  6:37           ` Akihiko Odaki
@ 2024-05-09 12:39             ` Dmitry Osipenko
  2024-05-10 10:56               ` Akihiko Odaki
  0 siblings, 1 reply; 36+ messages in thread
From: Dmitry Osipenko @ 2024-05-09 12:39 UTC (permalink / raw)
  To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

On 5/5/24 09:37, Akihiko Odaki wrote:
> On 2024/05/02 4:02, Dmitry Osipenko wrote:
>> On 4/27/24 08:48, Akihiko Odaki wrote:
>>>>
>>>> The VIRTIO_GPU_FILL_CMD() macro returns void and this macro is used by
>>>> every function processing commands. Changing process_cmd() to return
>>>> bool will require to change all those functions. Not worthwhile to
>>>> change it, IMO. >
>>>> The flag reflects the exact command status. The !finished + !suspended
>>>> means that command is fenced, i.e. these flags don't have exactly same
>>>> meaning.
>>>
>>> It is not necessary to change the signature of process_cmd(). You can
>>> just refer to !finished. No need to have the suspended flag.
>>
>> Not sure what you're meaning. The !finished says that cmd is fenced,
>> this fenced command is added to the polling list and the fence is
>> checked periodically by the fence_poll timer, meanwhile next virgl
>> commands are executed in the same time.
>>
>> This is completely different from the suspension where whole cmd
>> processing is blocked until command is resumed.
>>
> 
> !finished means you have not sent a response with
> virtio_gpu_ctrl_response(). Currently such a situation only happens when
> a fence is requested and virtio_gpu_process_cmdq() exploits the fact,
> but we are adding a new case without a fence.
> 
> So we need to add code to check if we are fencing or not in
> virtio_gpu_process_cmdq(). This can be achieved by evaluating the
> following expression as done in virtio_gpu_virgl_process_cmd():
> cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE

This works, but then I'll add back the res->async_unmap_in_progress
because we need to know whether unmapping has been started.

-- 
Best regards,
Dmitry




* Re: [PATCH v8 01/11] linux-headers: Update to Linux v6.9-rc3
  2024-04-18 19:00 ` [PATCH v8 01/11] linux-headers: Update to Linux v6.9-rc3 Dmitry Osipenko
@ 2024-05-10 10:46   ` Alex Bennée
  2024-05-10 16:23     ` Dmitry Osipenko
  0 siblings, 1 reply; 36+ messages in thread
From: Alex Bennée @ 2024-05-10 10:46 UTC (permalink / raw)
  To: Dmitry Osipenko
  Cc: Akihiko Odaki, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, qemu-devel, Gurchetan Singh,
	ernunes, Alyssa Ross, Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

Dmitry Osipenko <dmitry.osipenko@collabora.com> writes:

> Update kernel headers to get new VirtIO-GPU capsets, in particular the
> Venus capset.
>
> Signed-off-by: Huang Rui <ray.huang@amd.com>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> ---
>  hw/i386/x86.c                                 |   8 -
>  include/standard-headers/asm-x86/bootparam.h  |  17 +-
>  include/standard-headers/asm-x86/kvm_para.h   |   3 +-
>  include/standard-headers/asm-x86/setup_data.h |  83 +++
>  include/standard-headers/linux/ethtool.h      |  48 ++
>  include/standard-headers/linux/fuse.h         |  39 +-
>  .../linux/input-event-codes.h                 |   1 +
>  include/standard-headers/linux/virtio_gpu.h   |   2 +
>  include/standard-headers/linux/virtio_pci.h   |  10 +-
>  include/standard-headers/linux/virtio_snd.h   | 154 ++++
>  linux-headers/asm-arm64/kvm.h                 |  15 +-
>  linux-headers/asm-arm64/sve_context.h         |  11 +
>  linux-headers/asm-generic/bitsperlong.h       |   4 +
>  linux-headers/asm-loongarch/kvm.h             |   2 -
>  linux-headers/asm-mips/kvm.h                  |   2 -
>  linux-headers/asm-powerpc/kvm.h               |  45 +-
>  linux-headers/asm-riscv/kvm.h                 |   3 +-
>  linux-headers/asm-s390/kvm.h                  | 315 +++++++-
>  linux-headers/asm-x86/kvm.h                   | 308 +++++++-
>  linux-headers/linux/bits.h                    |  15 +
>  linux-headers/linux/kvm.h                     | 689 +-----------------
>  linux-headers/linux/psp-sev.h                 |  59 ++
>  linux-headers/linux/vhost.h                   |   7 +
>  scripts/update-linux-headers.sh               |   5 +-
>  24 files changed, 1106 insertions(+), 739 deletions(-)
>  create mode 100644 include/standard-headers/asm-x86/setup_data.h
>  create mode 100644 linux-headers/linux/bits.h
>
> diff --git a/hw/i386/x86.c b/hw/i386/x86.c
> index ffbda48917fd..84a48019770b 100644
> --- a/hw/i386/x86.c
> +++ b/hw/i386/x86.c
> @@ -679,14 +679,6 @@ DeviceState *ioapic_init_secondary(GSIState *gsi_state)
>      return dev;
>  }
>  
> -struct setup_data {
> -    uint64_t next;
> -    uint32_t type;
> -    uint32_t len;
> -    uint8_t data[];
> -} __attribute__((packed));
> -
> -

This isn't part of the header import. I'd rather see:

  - an import of the current header set
  - updates to the script
  - clean-ups and additions

Why are we migrating to using the kernel's non-uapi assembler headers?

<snip>
> --- a/scripts/update-linux-headers.sh
> +++ b/scripts/update-linux-headers.sh
> @@ -62,6 +62,7 @@ cp_portable() {
>                                       -e 'linux/kernel' \
>                                       -e 'linux/sysinfo' \
>                                       -e 'asm-generic/kvm_para' \
> +                                     -e 'asm-x86/setup_data.h' \

Some justification for this?

>                                       > /dev/null
>      then
>          echo "Unexpected #include in input file $f".
> @@ -149,9 +150,11 @@ for arch in $ARCHLIST; do
>          cp "$tmpdir/include/asm/unistd_x32.h" "$output/linux-headers/asm-x86/"
>          cp "$tmpdir/include/asm/unistd_64.h" "$output/linux-headers/asm-x86/"
>          cp_portable "$tmpdir/include/asm/kvm_para.h" "$output/include/standard-headers/asm-$arch"
> +        cp_portable "$tmpdir/include/asm/setup_data.h" "$output/include/standard-headers/asm-$arch"

is there a portable setup_data.h? why is it asm-x86 above?

>          # Remove everything except the macros from bootparam.h avoiding the
>          # unnecessary import of several video/ist/etc headers
>          sed -e '/__ASSEMBLY__/,/__ASSEMBLY__/d' \
> +            -e 's/<asm\/\([^>]*\)>/"standard-headers\/asm-x86\/\1"/' \
>                 "$tmpdir/include/asm/bootparam.h" > "$tmpdir/bootparam.h"
>          cp_portable "$tmpdir/bootparam.h" \
>                      "$output/include/standard-headers/asm-$arch"
> @@ -165,7 +168,7 @@ rm -rf "$output/linux-headers/linux"
>  mkdir -p "$output/linux-headers/linux"
>  for header in const.h stddef.h kvm.h vfio.h vfio_ccw.h vfio_zdev.h vhost.h \
>                psci.h psp-sev.h userfaultfd.h memfd.h mman.h nvme_ioctl.h \
> -              vduse.h iommufd.h; do
> +              vduse.h iommufd.h bits.h; do

What do we need bits for here? 

>      cp "$tmpdir/include/linux/$header" "$output/linux-headers/linux"
>  done

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro



* Re: [PATCH v8 07/11] virtio-gpu: Support suspension of commands processing
  2024-05-09 12:39             ` Dmitry Osipenko
@ 2024-05-10 10:56               ` Akihiko Odaki
  2024-05-10 16:12                 ` Dmitry Osipenko
  0 siblings, 1 reply; 36+ messages in thread
From: Akihiko Odaki @ 2024-05-10 10:56 UTC (permalink / raw)
  To: Dmitry Osipenko, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

On 2024/05/09 21:39, Dmitry Osipenko wrote:
> On 5/5/24 09:37, Akihiko Odaki wrote:
>> On 2024/05/02 4:02, Dmitry Osipenko wrote:
>>> On 4/27/24 08:48, Akihiko Odaki wrote:
>>>>>
>>>>> The VIRTIO_GPU_FILL_CMD() macro returns void and this macro is used by
>>>>> every function processing commands. Changing process_cmd() to return
>>>>> bool will require to change all those functions. Not worthwhile to
>>>>> change it, IMO. >
>>>>> The flag reflects the exact command status. The !finished + !suspended
>>>>> means that command is fenced, i.e. these flags don't have exactly same
>>>>> meaning.
>>>>
>>>> It is not necessary to change the signature of process_cmd(). You can
>>>> just refer to !finished. No need to have the suspended flag.
>>>
>>> Not sure what you're meaning. The !finished says that cmd is fenced,
>>> this fenced command is added to the polling list and the fence is
>>> checked periodically by the fence_poll timer, meanwhile next virgl
>>> commands are executed in the same time.
>>>
>>> This is completely different from the suspension where whole cmd
>>> processing is blocked until command is resumed.
>>>
>>
>> !finished means you have not sent a response with
>> virtio_gpu_ctrl_response(). Currently such a situation only happens when
>> a fence is requested and virtio_gpu_process_cmdq() exploits the fact,
>> but we are adding a new case without a fence.
>>
>> So we need to add code to check if we are fencing or not in
>> virtio_gpu_process_cmdq(). This can be achieved by evaluating the
>> following expression as done in virtio_gpu_virgl_process_cmd():
>> cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE
> 
> This works, but then I'll add back the res->async_unmap_in_progress
> because we need to know whether unmapping has been started.
> 

Isn't the command processing paused when an unmapping operation is in 
progress?



* Re: [PATCH v8 07/11] virtio-gpu: Support suspension of commands processing
  2024-05-10 10:56               ` Akihiko Odaki
@ 2024-05-10 16:12                 ` Dmitry Osipenko
  2024-05-10 16:33                   ` Dmitry Osipenko
  0 siblings, 1 reply; 36+ messages in thread
From: Dmitry Osipenko @ 2024-05-10 16:12 UTC (permalink / raw)
  To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

On 5/10/24 13:56, Akihiko Odaki wrote:
> On 2024/05/09 21:39, Dmitry Osipenko wrote:
>> On 5/5/24 09:37, Akihiko Odaki wrote:
>>> On 2024/05/02 4:02, Dmitry Osipenko wrote:
>>>> On 4/27/24 08:48, Akihiko Odaki wrote:
>>>>>>
>>>>>> The VIRTIO_GPU_FILL_CMD() macro returns void and this macro is
>>>>>> used by
>>>>>> every function processing commands. Changing process_cmd() to return
>>>>>> bool will require to change all those functions. Not worthwhile to
>>>>>> change it, IMO. >
>>>>>> The flag reflects the exact command status. The !finished +
>>>>>> !suspended
>>>>>> means that command is fenced, i.e. these flags don't have exactly
>>>>>> same
>>>>>> meaning.
>>>>>
>>>>> It is not necessary to change the signature of process_cmd(). You can
>>>>> just refer to !finished. No need to have the suspended flag.
>>>>
>>>> Not sure what you're meaning. The !finished says that cmd is fenced,
>>>> this fenced command is added to the polling list and the fence is
>>>> checked periodically by the fence_poll timer, meanwhile next virgl
>>>> commands are executed in the same time.
>>>>
>>>> This is completely different from the suspension where whole cmd
>>>> processing is blocked until command is resumed.
>>>>
>>>
>>> !finished means you have not sent a response with
>>> virtio_gpu_ctrl_response(). Currently such a situation only happens when
>>> a fence is requested and virtio_gpu_process_cmdq() exploits the fact,
>>> but we are adding a new case without a fence.
>>>
>>> So we need to add code to check if we are fencing or not in
>>> virtio_gpu_process_cmdq(). This can be achieved by evaluating the
>>> following expression as done in virtio_gpu_virgl_process_cmd():
>>> cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE
>>
>> This works, but then I'll add back the res->async_unmap_in_progress
>> because we need to know whether unmapping has been started.
>>
> 
> Isn't the command processing paused when an unmapping operation is in
> progress?

virtio_gpu_process_cmdq() continues to be invoked periodically while
the command is suspended for unmapping. It should be the console doing
that; see virtio_gpu_handle_gl_flushed().
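
I.e. roughly (simplified; the real callback may do a bit more):

static void virtio_gpu_handle_gl_flushed(VirtIOGPUBase *b)
{
    VirtIOGPU *g = VIRTIO_GPU(b);

    /* the console's flush callback re-enters command processing, which
     * is why the queue keeps being polled while a command is suspended */
    virtio_gpu_process_cmdq(g);
}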

-- 
Best regards,
Dmitry




* Re: [PATCH v8 01/11] linux-headers: Update to Linux v6.9-rc3
  2024-05-10 10:46   ` Alex Bennée
@ 2024-05-10 16:23     ` Dmitry Osipenko
  0 siblings, 0 replies; 36+ messages in thread
From: Dmitry Osipenko @ 2024-05-10 16:23 UTC (permalink / raw)
  To: Alex Bennée
  Cc: Akihiko Odaki, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, qemu-devel, Gurchetan Singh,
	ernunes, Alyssa Ross, Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

On 5/10/24 13:46, Alex Bennée wrote:
...
>>          cp_portable "$tmpdir/include/asm/kvm_para.h" "$output/include/standard-headers/asm-$arch"
>> +        cp_portable "$tmpdir/include/asm/setup_data.h" "$output/include/standard-headers/asm-$arch"
> 
> is there a portable setup_data.h? why is it asm-x86 above?

Yes, it shouldn't have been asm-x86

...
>>  for header in const.h stddef.h kvm.h vfio.h vfio_ccw.h vfio_zdev.h vhost.h \
>>                psci.h psp-sev.h userfaultfd.h memfd.h mman.h nvme_ioctl.h \
>> -              vduse.h iommufd.h; do
>> +              vduse.h iommufd.h bits.h; do
> 
> What do we need bits for here? 

Some header started to include it

The kernel headers were already updated in Qemu and I dropped this patch
from v9. No need to review it further, thanks!

-- 
Best regards,
Dmitry




* Re: [PATCH v8 07/11] virtio-gpu: Support suspension of commands processing
  2024-05-10 16:12                 ` Dmitry Osipenko
@ 2024-05-10 16:33                   ` Dmitry Osipenko
  0 siblings, 0 replies; 36+ messages in thread
From: Dmitry Osipenko @ 2024-05-10 16:33 UTC (permalink / raw)
  To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
	Philippe Mathieu-Daudé,
	Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Gert Wollny, Alex Bennée
  Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Stefano Stabellini, Christian König,
	Xenia Ragiadakou, Pierre-Eric Pelloux-Prayer, Honglei Huang,
	Julia Zhang, Chen Jiqian, Yiwei Zhang

On 5/10/24 19:12, Dmitry Osipenko wrote:
> On 5/10/24 13:56, Akihiko Odaki wrote:
>> On 2024/05/09 21:39, Dmitry Osipenko wrote:
>>> On 5/5/24 09:37, Akihiko Odaki wrote:
>>>> On 2024/05/02 4:02, Dmitry Osipenko wrote:
>>>>> On 4/27/24 08:48, Akihiko Odaki wrote:
>>>>>>>
>>>>>>> The VIRTIO_GPU_FILL_CMD() macro returns void and this macro is
>>>>>>> used by
>>>>>>> every function processing commands. Changing process_cmd() to return
>>>>>>> bool will require to change all those functions. Not worthwhile to
>>>>>>> change it, IMO. >
>>>>>>> The flag reflects the exact command status. The !finished +
>>>>>>> !suspended
>>>>>>> means that command is fenced, i.e. these flags don't have exactly
>>>>>>> same
>>>>>>> meaning.
>>>>>>
>>>>>> It is not necessary to change the signature of process_cmd(). You can
>>>>>> just refer to !finished. No need to have the suspended flag.
>>>>>
>>>>> Not sure what you're meaning. The !finished says that cmd is fenced,
>>>>> this fenced command is added to the polling list and the fence is
>>>>> checked periodically by the fence_poll timer, meanwhile next virgl
>>>>> commands are executed in the same time.
>>>>>
>>>>> This is completely different from the suspension where whole cmd
>>>>> processing is blocked until command is resumed.
>>>>>
>>>>
>>>> !finished means you have not sent a response with
>>>> virtio_gpu_ctrl_response(). Currently such a situation only happens when
>>>> a fence is requested and virtio_gpu_process_cmdq() exploits the fact,
>>>> but we are adding a new case without a fence.
>>>>
>>>> So we need to add code to check if we are fencing or not in
>>>> virtio_gpu_process_cmdq(). This can be achieved by evaluating the
>>>> following expression as done in virtio_gpu_virgl_process_cmd():
>>>> cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE
>>>
>>> This works, but then I'll add back the res->async_unmap_in_progress
>>> because we need to know whether unmapping has been started.
>>>
>>
>> Isn't the command processing paused when an unmapping operation is in
>> progress?
> 
> virtio_gpu_process_cmdq() continues to be invoked periodically while
> the command is suspended for unmapping. It should be the console doing
> that; see virtio_gpu_handle_gl_flushed().

Though we're now blocking the renderer, and thus
virtio_gpu_process_cmdq() won't do anything while the cmd is paused.
I'll check that nothing else is missed and then won't add
`async_unmap_in_progress` in v11, thanks!

-- 
Best regards,
Dmitry




end of thread

Thread overview: 36+ messages
2024-04-18 19:00 [PATCH v8 00/11] Support blob memory and venus on qemu Dmitry Osipenko
2024-04-18 19:00 ` [PATCH v8 01/11] linux-headers: Update to Linux v6.9-rc3 Dmitry Osipenko
2024-05-10 10:46   ` Alex Bennée
2024-05-10 16:23     ` Dmitry Osipenko
2024-04-18 19:00 ` [PATCH v8 02/11] virtio-gpu: Use pkgconfig version to decide which virgl features are available Dmitry Osipenko
2024-04-18 19:00 ` [PATCH v8 03/11] virtio-gpu: Support context-init feature with virglrenderer Dmitry Osipenko
2024-04-18 19:00 ` [PATCH v8 04/11] virtio-gpu: Don't require udmabuf when blobs and virgl are enabled Dmitry Osipenko
2024-04-18 19:00 ` [PATCH v8 05/11] virtio-gpu: Add virgl resource management Dmitry Osipenko
2024-04-18 19:00 ` [PATCH v8 06/11] virtio-gpu: Support blob scanout using dmabuf fd Dmitry Osipenko
2024-04-18 19:00 ` [PATCH v8 07/11] virtio-gpu: Support suspension of commands processing Dmitry Osipenko
2024-04-19  8:53   ` Akihiko Odaki
2024-04-24  9:43     ` Dmitry Osipenko
2024-04-27  5:48       ` Akihiko Odaki
2024-05-01 19:02         ` Dmitry Osipenko
2024-05-05  6:37           ` Akihiko Odaki
2024-05-09 12:39             ` Dmitry Osipenko
2024-05-10 10:56               ` Akihiko Odaki
2024-05-10 16:12                 ` Dmitry Osipenko
2024-05-10 16:33                   ` Dmitry Osipenko
2024-04-18 19:00 ` [PATCH v8 08/11] virtio-gpu: Handle resource blob commands Dmitry Osipenko
2024-04-19  9:18   ` Akihiko Odaki
2024-04-24 10:30     ` Dmitry Osipenko
2024-04-27  5:52       ` Akihiko Odaki
2024-05-01 19:20         ` Dmitry Osipenko
2024-05-05  6:47           ` Akihiko Odaki
2024-05-09 12:29             ` Dmitry Osipenko
2024-04-18 19:00 ` [PATCH v8 09/11] virtio-gpu: Resource UUID Dmitry Osipenko
2024-04-19  9:29   ` Akihiko Odaki
2024-04-23 17:43     ` Dmitry Osipenko
2024-04-24 12:52   ` Dmitry Osipenko
2024-04-18 19:00 ` [PATCH v8 10/11] virtio-gpu: Register capsets dynamically Dmitry Osipenko
2024-04-19  9:35   ` Akihiko Odaki
2024-04-18 19:00 ` [PATCH v8 11/11] virtio-gpu: Support Venus context Dmitry Osipenko
2024-04-19  9:44   ` Akihiko Odaki
2024-04-23  8:30 ` [PATCH v8 00/11] Support blob memory and venus on qemu Alex Bennée
2024-04-23 17:37   ` Dmitry Osipenko
