* [Intel-xe] [PATCH v5 0/8] Port Xe to use GPUVA and implement NULL VM binds
@ 2023-04-04  1:42 Matthew Brost
  2023-04-04  1:42 ` [Intel-xe] [PATCH v5 1/8] maple_tree: split up MA_STATE() macro Matthew Brost
                   ` (12 more replies)
  0 siblings, 13 replies; 21+ messages in thread
From: Matthew Brost @ 2023-04-04  1:42 UTC (permalink / raw)
  To: intel-xe

GPUVA is common code, written primarily by Danilo, intended to provide a
common place to track GPUVAs (VMAs in Xe) within an address space (VMs
in Xe), to track all the GPUVAs attached to GEMs, and a common way to
implement VM binds / unbinds with MMAP / MUNMAP semantics by creating
operation lists. All of this adds up to a common way to implement VK
sparse bindings.

This series pulls the GPUVA code written by Danilo, plus some small
fixes of my own, into one large patch. Once GPUVA lands upstream, we can
rebase and drop this patch. I believe what lands upstream should be
nearly identical to this patch, at least from an API perspective.

The remaining patches port Xe to GPUVA and add support for NULL VM binds
(writes dropped, reads return zero; VK sparse support). An example of
these semantics is below.

MAP 0x0000-0x8000 to NULL 	- 0x0000-0x8000 writes dropped + read zero
MAP 0x4000-0x5000 to a GEM 	- 0x0000-0x4000, 0x5000-0x8000 writes dropped + read zero; 0x4000-0x5000 mapped to a GEM
UNMAP 0x3000-0x6000		- 0x0000-0x3000, 0x6000-0x8000 writes dropped + read zero
UNMAP 0x0000-0x8000		- Nothing mapped
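
In terms of the GPUVA helpers pulled in by this series, the sequence above
corresponds roughly to the following sketch (illustrative only; mgr, priv and
obj are placeholders, and how the NULL binding is expressed at the Xe uapi
level is not shown here, a NULL GEM object just stands in for it):

        drm_gpuva_sm_map(mgr, priv, 0x0000, 0x8000, NULL, 0);  /* NULL bind            */
        drm_gpuva_sm_map(mgr, priv, 0x4000, 0x1000, obj, 0);   /* map a GEM            */
        drm_gpuva_sm_unmap(mgr, priv, 0x3000, 0x3000);         /* unmap 0x3000-0x6000  */
        drm_gpuva_sm_unmap(mgr, priv, 0x0000, 0x8000);         /* unmap everything     */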

No changes to existing behavior, rather just new functionality.

v2: Fix CI build failure
v3: Export mas_preallocate, add patch to avoid rebinds
v5: Bug fixes, rebase, xe_vma size optimizations

Signed-off-by: Matthew Brost <matthew.brost@intel.com>

Danilo Krummrich (2):
  maple_tree: split up MA_STATE() macro
  drm: manager to keep track of GPUs VA mappings

Matthew Brost (6):
  maple_tree: Export mas_preallocate
  drm/xe: Port Xe to GPUVA
  drm/xe: NULL binding implementation
  drm/xe: Avoid doing rebinds
  drm/xe: Reduce the number of list links in xe_vma
  drm/xe: Optimize size of xe_vma allocation

 Documentation/gpu/drm-mm.rst                |   31 +
 drivers/gpu/drm/Makefile                    |    1 +
 drivers/gpu/drm/drm_debugfs.c               |   56 +
 drivers/gpu/drm/drm_gem.c                   |    3 +
 drivers/gpu/drm/drm_gpuva_mgr.c             | 1890 ++++++++++++++++++
 drivers/gpu/drm/xe/xe_bo.c                  |   10 +-
 drivers/gpu/drm/xe/xe_bo.h                  |    1 +
 drivers/gpu/drm/xe/xe_device.c              |    2 +-
 drivers/gpu/drm/xe/xe_exec.c                |    4 +-
 drivers/gpu/drm/xe/xe_gt_pagefault.c        |   29 +-
 drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c |   14 +-
 drivers/gpu/drm/xe/xe_guc_ct.c              |    6 +-
 drivers/gpu/drm/xe/xe_migrate.c             |    8 +-
 drivers/gpu/drm/xe/xe_pt.c                  |  176 +-
 drivers/gpu/drm/xe/xe_trace.h               |   10 +-
 drivers/gpu/drm/xe/xe_vm.c                  | 1947 +++++++++----------
 drivers/gpu/drm/xe/xe_vm.h                  |   76 +-
 drivers/gpu/drm/xe/xe_vm_madvise.c          |   87 +-
 drivers/gpu/drm/xe/xe_vm_types.h            |  276 ++-
 include/drm/drm_debugfs.h                   |   24 +
 include/drm/drm_drv.h                       |    7 +
 include/drm/drm_gem.h                       |   75 +
 include/drm/drm_gpuva_mgr.h                 |  734 +++++++
 include/linux/maple_tree.h                  |    7 +-
 include/uapi/drm/xe_drm.h                   |    8 +
 lib/maple_tree.c                            |    1 +
 26 files changed, 4203 insertions(+), 1280 deletions(-)
 create mode 100644 drivers/gpu/drm/drm_gpuva_mgr.c
 create mode 100644 include/drm/drm_gpuva_mgr.h

-- 
2.34.1



* [Intel-xe] [PATCH v5 1/8]  maple_tree: split up MA_STATE() macro
  2023-04-04  1:42 [Intel-xe] [PATCH v5 0/8] Port Xe to use GPUVA and implement NULL VM binds Matthew Brost
@ 2023-04-04  1:42 ` Matthew Brost
  2023-04-04  1:42 ` [Intel-xe] [PATCH v5 2/8] maple_tree: Export mas_preallocate Matthew Brost
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Matthew Brost @ 2023-04-04  1:42 UTC (permalink / raw)
  To: intel-xe; +Cc: Danilo Krummrich

From: Danilo Krummrich <dakr@redhat.com>

Split up the MA_STATE() macro such that components using the maple tree
can easily inherit from struct ma_state and build custom tree walk
macros to hide their internals from users.

Example:

struct sample_iterator {
        struct ma_state mas;
        struct sample_mgr *mgr;
};

#define SAMPLE_ITERATOR(name, __mgr, start)                    \
        struct sample_iterator name = {                         \
                .mas = MA_STATE_INIT(&(__mgr)->mt, start, 0),   \
                .mgr = __mgr,                                   \
        }

#define sample_iter_for_each_range(it__, entry__, end__) \
        mas_for_each(&(it__).mas, entry__, end__)

--

struct sample *sample;
SAMPLE_ITERATOR(si, mgr, min);

sample_iter_for_each_range(si, sample, max) {
        frob(mgr, sample);
}

Signed-off-by: Danilo Krummrich <dakr@redhat.com>
---
 include/linux/maple_tree.h | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
index 1fadb5f5978b..87d55334f1c2 100644
--- a/include/linux/maple_tree.h
+++ b/include/linux/maple_tree.h
@@ -423,8 +423,8 @@ struct ma_wr_state {
 #define MA_ERROR(err) \
 		((struct maple_enode *)(((unsigned long)err << 2) | 2UL))
 
-#define MA_STATE(name, mt, first, end)					\
-	struct ma_state name = {					\
+#define MA_STATE_INIT(mt, first, end)					\
+	{								\
 		.tree = mt,						\
 		.index = first,						\
 		.last = end,						\
@@ -435,6 +435,9 @@ struct ma_wr_state {
 		.mas_flags = 0,						\
 	}
 
+#define MA_STATE(name, mt, first, end)					\
+	struct ma_state name = MA_STATE_INIT(mt, first, end)
+
 #define MA_WR_STATE(name, ma_state, wr_entry)				\
 	struct ma_wr_state name = {					\
 		.mas = ma_state,					\
-- 
2.34.1



* [Intel-xe] [PATCH v5 2/8] maple_tree: Export mas_preallocate
  2023-04-04  1:42 [Intel-xe] [PATCH v5 0/8] Port Xe to use GPUVA and implement NULL VM binds Matthew Brost
  2023-04-04  1:42 ` [Intel-xe] [PATCH v5 1/8] maple_tree: split up MA_STATE() macro Matthew Brost
@ 2023-04-04  1:42 ` Matthew Brost
  2023-04-04  1:42 ` [Intel-xe] [PATCH v5 3/8] drm: manager to keep track of GPUs VA mappings Matthew Brost
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Matthew Brost @ 2023-04-04  1:42 UTC (permalink / raw)
  To: intel-xe

The DRM GPUVA implementation needs this function.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 lib/maple_tree.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 9e2735cbc2b4..ae37a167e25d 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -5726,6 +5726,7 @@ int mas_preallocate(struct ma_state *mas, gfp_t gfp)
 	mas_reset(mas);
 	return ret;
 }
+EXPORT_SYMBOL_GPL(mas_preallocate);
 
 /*
  * mas_destroy() - destroy a maple state.
-- 
2.34.1



* [Intel-xe] [PATCH v5 3/8] drm: manager to keep track of GPUs VA mappings
  2023-04-04  1:42 [Intel-xe] [PATCH v5 0/8] Port Xe to use GPUVA and implement NULL VM binds Matthew Brost
  2023-04-04  1:42 ` [Intel-xe] [PATCH v5 1/8] maple_tree: split up MA_STATE() macro Matthew Brost
  2023-04-04  1:42 ` [Intel-xe] [PATCH v5 2/8] maple_tree: Export mas_preallocate Matthew Brost
@ 2023-04-04  1:42 ` Matthew Brost
  2023-04-04  1:42 ` [Intel-xe] [PATCH v5 4/8] drm/xe: Port Xe to GPUVA Matthew Brost
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Matthew Brost @ 2023-04-04  1:42 UTC (permalink / raw)
  To: intel-xe; +Cc: Danilo Krummrich, Dave Airlie

From: Danilo Krummrich <dakr@redhat.com>

Add infrastructure to keep track of GPU virtual address (VA) mappings
with a dedicated VA space manager implementation.

New UAPIs, motivated by the Vulkan sparse memory bindings that graphics
drivers are starting to implement, allow userspace applications to request
multiple and arbitrary GPU VA mappings of buffer objects. The DRM GPU VA
manager is intended to serve the following purposes in this context.

1) Provide infrastructure to track GPU VA allocations and mappings,
   making use of the maple_tree.

2) Generically connect GPU VA mappings to their backing buffers, in
   particular DRM GEM objects.

3) Provide a common implementation to perform more complex mapping
   operations on the GPU VA space. In particular splitting and merging
   of GPU VA mappings, e.g. for intersecting mapping requests or partial
   unmap requests.
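
Roughly, drivers are expected to embed these structures in their own driver
specific ones rather than allocate them separately; a minimal sketch (the
driver structure names are made up, and a struct drm_gpuva_region would be
embedded the same way when regions are used):

        struct driver_gpu_vm {
                struct drm_gpuva_manager mgr;
                /* driver private VM state */
        };

        struct driver_gpu_vma {
                struct drm_gpuva gpuva;
                /* driver private mapping state */
        };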

Suggested-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 Documentation/gpu/drm-mm.rst    |   31 +
 drivers/gpu/drm/Makefile        |    1 +
 drivers/gpu/drm/drm_debugfs.c   |   56 +
 drivers/gpu/drm/drm_gem.c       |    3 +
 drivers/gpu/drm/drm_gpuva_mgr.c | 1890 +++++++++++++++++++++++++++++++
 include/drm/drm_debugfs.h       |   24 +
 include/drm/drm_drv.h           |    7 +
 include/drm/drm_gem.h           |   75 ++
 include/drm/drm_gpuva_mgr.h     |  734 ++++++++++++
 9 files changed, 2821 insertions(+)
 create mode 100644 drivers/gpu/drm/drm_gpuva_mgr.c
 create mode 100644 include/drm/drm_gpuva_mgr.h

diff --git a/Documentation/gpu/drm-mm.rst b/Documentation/gpu/drm-mm.rst
index a79fd3549ff8..fe40ee686f6e 100644
--- a/Documentation/gpu/drm-mm.rst
+++ b/Documentation/gpu/drm-mm.rst
@@ -466,6 +466,37 @@ DRM MM Range Allocator Function References
 .. kernel-doc:: drivers/gpu/drm/drm_mm.c
    :export:
 
+DRM GPU VA Manager
+==================
+
+Overview
+--------
+
+.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
+   :doc: Overview
+
+Split and Merge
+---------------
+
+.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
+   :doc: Split and Merge
+
+Locking
+-------
+
+.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
+   :doc: Locking
+
+
+DRM GPU VA Manager Function References
+--------------------------------------
+
+.. kernel-doc:: include/drm/drm_gpuva_mgr.h
+   :internal:
+
+.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
+   :export:
+
 DRM Buddy Allocator
 ===================
 
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index 66dd2c48944a..ad6267273503 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -46,6 +46,7 @@ drm-y := \
 	drm_vblank.o \
 	drm_vblank_work.o \
 	drm_vma_manager.o \
+	drm_gpuva_mgr.o \
 	drm_writeback.o
 drm-$(CONFIG_DRM_LEGACY) += \
 	drm_agpsupport.o \
diff --git a/drivers/gpu/drm/drm_debugfs.c b/drivers/gpu/drm/drm_debugfs.c
index 4855230ba2c6..8dac98e5b383 100644
--- a/drivers/gpu/drm/drm_debugfs.c
+++ b/drivers/gpu/drm/drm_debugfs.c
@@ -38,6 +38,7 @@
 #include <drm/drm_edid.h>
 #include <drm/drm_file.h>
 #include <drm/drm_gem.h>
+#include <drm/drm_gpuva_mgr.h>
 #include <drm/drm_managed.h>
 
 #include "drm_crtc_internal.h"
@@ -175,6 +176,61 @@ static const struct file_operations drm_debugfs_fops = {
 	.release = single_release,
 };
 
+/**
+ * drm_debugfs_gpuva_info - dump the given DRM GPU VA space
+ * @m: pointer to the &seq_file to write
+ * @mgr: the &drm_gpuva_manager representing the GPU VA space
+ *
+ * Dumps the GPU VA regions and mappings of a given DRM GPU VA manager.
+ *
+ * For each DRM GPU VA space drivers should call this function from their
+ * &drm_info_list's show callback.
+ *
+ * Returns: 0 on success, -ENODEV if the &mgr is not initialized
+ */
+int drm_debugfs_gpuva_info(struct seq_file *m,
+			   struct drm_gpuva_manager *mgr)
+{
+	DRM_GPUVA_ITER(it, mgr, 0);
+	DRM_GPUVA_REGION_ITER(__it, mgr, 0);
+
+	if (!mgr->name)
+		return -ENODEV;
+
+	seq_printf(m, "DRM GPU VA space (%s)\n", mgr->name);
+	seq_puts  (m, "\n");
+	seq_puts  (m, " VA regions  | start              | range              | end                | sparse\n");
+	seq_puts  (m, "------------------------------------------------------------------------------------\n");
+	seq_printf(m, " VA space    | 0x%016llx | 0x%016llx | 0x%016llx |   -\n",
+		   mgr->mm_start, mgr->mm_range, mgr->mm_start + mgr->mm_range);
+	seq_puts  (m, "-----------------------------------------------------------------------------------\n");
+	drm_gpuva_iter_for_each(__it) {
+		struct drm_gpuva_region *reg = __it.reg;
+
+		if (reg == &mgr->kernel_alloc_region) {
+			seq_printf(m, " kernel node | 0x%016llx | 0x%016llx | 0x%016llx |   -\n",
+				   reg->va.addr, reg->va.range, reg->va.addr + reg->va.range);
+			continue;
+		}
+
+		seq_printf(m, "             | 0x%016llx | 0x%016llx | 0x%016llx | %s\n",
+			   reg->va.addr, reg->va.range, reg->va.addr + reg->va.range,
+			   reg->sparse ? "true" : "false");
+	}
+	seq_puts(m, "\n");
+	seq_puts(m, " VAs | start              | range              | end                | object             | object offset\n");
+	seq_puts(m, "-------------------------------------------------------------------------------------------------------------\n");
+	drm_gpuva_iter_for_each(it) {
+		struct drm_gpuva *va = it.va;
+
+		seq_printf(m, "     | 0x%016llx | 0x%016llx | 0x%016llx | 0x%016llx | 0x%016llx\n",
+			   va->va.addr, va->va.range, va->va.addr + va->va.range,
+			   (u64)va->gem.obj, va->gem.offset);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(drm_debugfs_gpuva_info);
 
 /**
  * drm_debugfs_create_files - Initialize a given set of debugfs files for DRM
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index a6208e2c089b..15fe61856190 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -164,6 +164,9 @@ void drm_gem_private_object_init(struct drm_device *dev,
 	if (!obj->resv)
 		obj->resv = &obj->_resv;
 
+	if (drm_core_check_feature(dev, DRIVER_GEM_GPUVA))
+		drm_gem_gpuva_init(obj);
+
 	drm_vma_node_reset(&obj->vma_node);
 	INIT_LIST_HEAD(&obj->lru_node);
 }
diff --git a/drivers/gpu/drm/drm_gpuva_mgr.c b/drivers/gpu/drm/drm_gpuva_mgr.c
new file mode 100644
index 000000000000..42841937f038
--- /dev/null
+++ b/drivers/gpu/drm/drm_gpuva_mgr.c
@@ -0,0 +1,1890 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2022 Red Hat.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ *     Danilo Krummrich <dakr@redhat.com>
+ *
+ */
+
+#include <drm/drm_gem.h>
+#include <drm/drm_gpuva_mgr.h>
+
+/**
+ * DOC: Overview
+ *
+ * The DRM GPU VA Manager, represented by struct drm_gpuva_manager keeps track
+ * of a GPU's virtual address (VA) space and manages the corresponding virtual
+ * mappings represented by &drm_gpuva objects. It also keeps track of the
+ * mapping's backing &drm_gem_object buffers.
+ *
+ * &drm_gem_object buffers maintain a list (and a corresponding list lock) of
+ * &drm_gpuva objects representing all existent GPU VA mappings using this
+ * &drm_gem_object as backing buffer.
+ *
+ * If the &DRM_GPUVA_MANAGER_REGIONS feature is enabled, a GPU VA mapping can
+ * only be created within a previously allocated &drm_gpuva_region, which
+ * represents a reserved portion of the GPU VA space. GPU VA mappings are not
+ * allowed to span over a &drm_gpuva_region's boundary.
+ *
+ * GPU VA regions can also be flagged as sparse, which allows drivers to create
+ * sparse mappings for a whole GPU VA region in order to support Vulkan
+ * 'Sparse Resources'.
+ *
+ * The GPU VA manager internally uses &maple_tree structures to manage the
+ * &drm_gpuva mappings and the &drm_gpuva_regions within a GPU's virtual address
+ * space.
+ *
+ * Besides the GPU VA space regions (&drm_gpuva_region) allocated by a driver
+ * the &drm_gpuva_manager contains a special region representing the portion of
+ * VA space reserved by the kernel. This node is initialized together with the
+ * GPU VA manager instance and removed when the GPU VA manager is destroyed.
+ *
+ * In a typical application drivers would embed struct drm_gpuva_manager,
+ * struct drm_gpuva_region and struct drm_gpuva within their own driver
+ * specific structures; hence, neither the manager nor the &drm_gpuva and
+ * &drm_gpuva_region entries require separate memory allocations.
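+ *
+ * A minimal initialization sketch, assuming the driver embeds the manager in
+ * its own VM structure (all driver names and address constants below are
+ * placeholders, not part of this API)::
+ *
+ *	drm_gpuva_manager_init(&vm->mgr, "example-vm",
+ *			       0, TOTAL_VA_RANGE,
+ *			       KERNEL_VA_START, KERNEL_VA_RANGE,
+ *			       &driver_gpuva_ops, DRM_GPUVA_MANAGER_REGIONS);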
+ */
+
+/**
+ * DOC: Split and Merge
+ *
+ * The DRM GPU VA manager also provides an algorithm implementing splitting and
+ * merging of existent GPU VA mappings with the ones that are requested to be
+ * mapped or unmapped. This feature is required by the Vulkan API to implement
+ * Vulkan 'Sparse Memory Bindings' - drivers' UAPIs often refer to this as
+ * VM BIND.
+ *
+ * Drivers can call drm_gpuva_sm_map() to receive a sequence of callbacks
+ * containing map, unmap and remap operations for a given newly requested
+ * mapping. The sequence of callbacks represents the set of operations to
+ * execute in order to integrate the new mapping cleanly into the current state
+ * of the GPU VA space.
+ *
+ * Depending on how the new GPU VA mapping intersects with the existent mappings
+ * of the GPU VA space the &drm_gpuva_fn_ops callbacks contain an arbitrary
+ * amount of unmap operations, a maximum of two remap operations and a single
+ * map operation. The caller might receive no callback at all if no operation is
+ * required, e.g. if the requested mapping already exists in the exact same way.
+ *
+ * The single map operation, if existent, represents the original map operation
+ * requested by the caller. Please note that this operation might be altered
+ * compared to the original map operation, e.g. because it was merged with
+ * an already existent mapping. Hence, drivers must execute this map operation
+ * instead of the original one passed to drm_gpuva_sm_map().
+ *
+ * &drm_gpuva_op_unmap contains a 'keep' field, which indicates whether the
+ * &drm_gpuva to unmap is physically contiguous with the original mapping
+ * request. Optionally, if 'keep' is set, drivers may keep the actual page table
+ * entries for this &drm_gpuva, adding only the missing page table entries, and
+ * update the &drm_gpuva_manager's view of things accordingly.
+ *
+ * Drivers may do the same optimization, namely delta page table updates, also
+ * for remap operations. This is possible since &drm_gpuva_op_remap consists of
+ * one unmap operation and one or two map operations, such that drivers can
+ * derive the page table update delta accordingly.
+ *
+ * Note that there can't be more than two existent mappings to split up, one at
+ * the beginning and one at the end of the new mapping, hence there is a
+ * maximum of two remap operations.
+ *
+ * Generally, the DRM GPU VA manager never merges mappings across the
+ * boundaries of &drm_gpuva_regions. This is the case since merging between
+ * GPU VA regions would result in unmap and map operations being issued for
+ * both regions involved although the original mapping request was referred to
+ * one specific GPU VA region only. Since the other GPU VA region, the one not
+ * explicitly requested to be altered, might be in use by the GPU, we are not
+ * allowed to issue any map/unmap operations for this region.
+ *
+ * To update the &drm_gpuva_manager's view of the GPU VA space
+ * drm_gpuva_insert() and drm_gpuva_remove() should be used.
+ *
+ * Analogous to drm_gpuva_sm_map() drm_gpuva_sm_unmap() uses &drm_gpuva_fn_ops
+ * to call back into the driver in order to unmap a range of GPU VA space. The
+ * logic behind this function is way simpler though: For all existent mappings
+ * enclosed by the given range unmap operations are created. For mappings which
+ * are only partially located within the given range, remap operations are
+ * created such that those mappings are split up and re-mapped partially.
+ *
+ * The following diagram depicts the basic relationships of existent GPU VA
+ * mappings, a newly requested mapping and the resulting mappings as implemented
+ * by drm_gpuva_sm_map() - it doesn't cover any arbitrary combinations of these.
+ *
+ * 1) Requested mapping is identical, hence noop.
+ *
+ *    ::
+ *
+ *	     0     a     1
+ *	old: |-----------| (bo_offset=n)
+ *
+ *	     0     a     1
+ *	req: |-----------| (bo_offset=n)
+ *
+ *	     0     a     1
+ *	new: |-----------| (bo_offset=n)
+ *
+ *
+ * 2) Requested mapping is identical, except for the BO offset, hence replace
+ *    the mapping.
+ *
+ *    ::
+ *
+ *	     0     a     1
+ *	old: |-----------| (bo_offset=n)
+ *
+ *	     0     a     1
+ *	req: |-----------| (bo_offset=m)
+ *
+ *	     0     a     1
+ *	new: |-----------| (bo_offset=m)
+ *
+ *
+ * 3) Requested mapping is identical, except for the backing BO, hence replace
+ *    the mapping.
+ *
+ *    ::
+ *
+ *	     0     a     1
+ *	old: |-----------| (bo_offset=n)
+ *
+ *	     0     b     1
+ *	req: |-----------| (bo_offset=n)
+ *
+ *	     0     b     1
+ *	new: |-----------| (bo_offset=n)
+ *
+ *
+ * 4) Existent mapping is a left aligned subset of the requested one, hence
+ *    replace the existent one.
+ *
+ *    ::
+ *
+ *	     0  a  1
+ *	old: |-----|       (bo_offset=n)
+ *
+ *	     0     a     2
+ *	req: |-----------| (bo_offset=n)
+ *
+ *	     0     a     2
+ *	new: |-----------| (bo_offset=n)
+ *
+ *    .. note::
+ *       We expect to see the same result for a request with a different BO
+ *       and/or non-contiguous BO offset.
+ *
+ *
+ * 5) Requested mapping's range is a left aligned subset of the existent one,
+ *    but backed by a different BO. Hence, map the requested mapping and split
+ *    the existent one, adjusting its BO offset.
+ *
+ *    ::
+ *
+ *	     0     a     2
+ *	old: |-----------| (bo_offset=n)
+ *
+ *	     0  b  1
+ *	req: |-----|       (bo_offset=n)
+ *
+ *	     0  b  1  a' 2
+ *	new: |-----|-----| (b.bo_offset=n, a.bo_offset=n+1)
+ *
+ *    .. note::
+ *       We expect to see the same result for a request with a different BO
+ *       and/or non-contiguous BO offset.
+ *
+ *
+ * 6) Existent mapping is a superset of the requested mapping, hence noop.
+ *
+ *    ::
+ *
+ *	     0     a     2
+ *	old: |-----------| (bo_offset=n)
+ *
+ *	     0  a  1
+ *	req: |-----|       (bo_offset=n)
+ *
+ *	     0     a     2
+ *	new: |-----------| (bo_offset=n)
+ *
+ *
+ * 7) Requested mapping's range is a right aligned subset of the existent one,
+ *    but backed by a different BO. Hence, map the requested mapping and split
+ *    the existent one, without adjusting the BO offset.
+ *
+ *    ::
+ *
+ *	     0     a     2
+ *	old: |-----------| (bo_offset=n)
+ *
+ *	           1  b  2
+ *	req:       |-----| (bo_offset=m)
+ *
+ *	     0  a  1  b  2
+ *	new: |-----|-----| (a.bo_offset=n,b.bo_offset=m)
+ *
+ *
+ * 8) Existent mapping is a superset of the requested mapping, hence noop.
+ *
+ *    ::
+ *
+ *	      0     a     2
+ *	old: |-----------| (bo_offset=n)
+ *
+ *	           1  a  2
+ *	req:       |-----| (bo_offset=n+1)
+ *
+ *	     0     a     2
+ *	new: |-----------| (bo_offset=n)
+ *
+ *
+ * 9) Existent mapping is overlapped at the end by the requested mapping backed
+ *    by a different BO. Hence, map the requested mapping and split up the
+ *    existent one, without adjusting the BO offset.
+ *
+ *    ::
+ *
+ *	     0     a     2
+ *	old: |-----------|       (bo_offset=n)
+ *
+ *	           1     b     3
+ *	req:       |-----------| (bo_offset=m)
+ *
+ *	     0  a  1     b     3
+ *	new: |-----|-----------| (a.bo_offset=n,b.bo_offset=m)
+ *
+ *
+ * 10) Existent mapping is overlapped by the requested mapping, both having the
+ *     same backing BO with a contiguous offset. Hence, merge both mappings.
+ *
+ *     ::
+ *
+ *	      0     a     2
+ *	 old: |-----------|       (bo_offset=n)
+ *
+ *	            1     a     3
+ *	 req:       |-----------| (bo_offset=n+1)
+ *
+ *	      0        a        3
+ *	 new: |-----------------| (bo_offset=n)
+ *
+ *
+ * 11) Requested mapping's range is a centered subset of the existent one
+ *     having a different backing BO. Hence, map the requested mapping and split
+ *     up the existent one in two mappings, adjusting the BO offset of the right
+ *     one accordingly.
+ *
+ *     ::
+ *
+ *	      0        a        3
+ *	 old: |-----------------| (bo_offset=n)
+ *
+ *	            1  b  2
+ *	 req:       |-----|       (bo_offset=m)
+ *
+ *	      0  a  1  b  2  a' 3
+ *	 new: |-----|-----|-----| (a.bo_offset=n,b.bo_offset=m,a'.bo_offset=n+2)
+ *
+ *
+ * 12) Requested mapping is a contiguous subset of the existent one, hence noop.
+ *
+ *     ::
+ *
+ *	      0        a        3
+ *	 old: |-----------------| (bo_offset=n)
+ *
+ *	            1  a  2
+ *	 req:       |-----|       (bo_offset=n+1)
+ *
+ *	      0        a        3
+ *	 new: |-----------------| (bo_offset=n)
+ *
+ *
+ * 13) Existent mapping is a right aligned subset of the requested one, hence
+ *     replace the existent one.
+ *
+ *     ::
+ *
+ *	            1  a  2
+ *	 old:       |-----| (bo_offset=n+1)
+ *
+ *	      0     a     2
+ *	 req: |-----------| (bo_offset=n)
+ *
+ *	      0     a     2
+ *	 new: |-----------| (bo_offset=n)
+ *
+ *     .. note::
+ *        We expect to see the same result for a request with a different bo
+ *        and/or non-contiguous bo_offset.
+ *
+ *
+ * 14) Existent mapping is a centered subset of the requested one, hence
+ *     replace the existent one.
+ *
+ *     ::
+ *
+ *	            1  a  2
+ *	 old:       |-----| (bo_offset=n+1)
+ *
+ *	      0        a       3
+ *	 req: |----------------| (bo_offset=n)
+ *
+ *	      0        a       3
+ *	 new: |----------------| (bo_offset=n)
+ *
+ *     .. note::
+ *        We expect to see the same result for a request with a different bo
+ *        and/or non-contiguous bo_offset.
+ *
+ *
+ * 15) Existent mapping is overlapped at the beginning by the requested mapping
+ *     backed by a different BO. Hence, map the requested mapping and split up
+ *     the existent one, adjusting its BO offset accordingly.
+ *
+ *     ::
+ *
+ *	            1     a     3
+ *	 old:       |-----------| (bo_offset=n)
+ *
+ *	      0     b     2
+ *	 req: |-----------|       (bo_offset=m)
+ *
+ *	      0     b     2  a' 3
+ *	 new: |-----------|-----| (b.bo_offset=m,a.bo_offset=n+2)
+ *
+ *
+ * 16) Requested mapping fills the gap between two existent mappings all having
+ *     the same backing BO, such that all three have a contiguous BO offset.
+ *     Hence, merge all mappings.
+ *
+ *     ::
+ *
+ *	      0     a     1
+ *	 old: |-----------|                        (bo_offset=n)
+ *
+ *	                             2     a     3
+ *	 old':                       |-----------| (bo_offset=n+2)
+ *
+ *	                 1     a     2
+ *	 req:            |-----------|             (bo_offset=n+1)
+ *
+ *	                       a
+ *	 new: |----------------------------------| (bo_offset=n)
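+ *
+ * To make the callback flow described above concrete, a driver's step
+ * callback might look roughly like the sketch below (the driver type and the
+ * page table helpers are placeholders); it would be hooked up via
+ * &drm_gpuva_fn_ops.sm_map_step and / or &drm_gpuva_fn_ops.sm_unmap_step::
+ *
+ *	static int driver_gpuva_sm_step(struct drm_gpuva_op *op, void *priv)
+ *	{
+ *		struct driver_gpu_vm *vm = priv;
+ *
+ *		switch (op->op) {
+ *		case DRM_GPUVA_OP_MAP:
+ *			return driver_map_pages(vm, op->map.va.addr,
+ *						op->map.va.range,
+ *						op->map.gem.obj,
+ *						op->map.gem.offset);
+ *		case DRM_GPUVA_OP_REMAP:
+ *			return driver_remap_pages(vm, op->remap.prev,
+ *						  op->remap.next,
+ *						  op->remap.unmap);
+ *		case DRM_GPUVA_OP_UNMAP:
+ *			return driver_unmap_pages(vm, op->unmap.va,
+ *						  op->unmap.keep);
+ *		default:
+ *			return -EINVAL;
+ *		}
+ *	}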
+ */
+
+/**
+ * DOC: Locking
+ *
+ * Generally, the GPU VA manager does not take care of locking itself; it is
+ * the driver's responsibility to take care of locking. Drivers might want to
+ * protect the following operations: inserting, removing and iterating
+ * &drm_gpuva and &drm_gpuva_region objects as well as generating all kinds of
+ * operations, such as split / merge or prefetch.
+ *
+ * The GPU VA manager also does not take care of the locking of the backing
+ * &drm_gem_object buffers' GPU VA lists by itself; drivers are responsible for
+ * enforcing mutual exclusion.
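+ *
+ * For instance (a sketch only, not a requirement imposed by this code; 'vm'
+ * and its lock are driver placeholders), a driver could serialize these
+ * operations with a per VM lock::
+ *
+ *	mutex_lock(&vm->lock);
+ *	ret = drm_gpuva_insert(&vm->mgr, va);
+ *	mutex_unlock(&vm->lock);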
+ */
+
+ /*
+  * Maple Tree Locking
+  *
+  * The maple tree's advanced API requires the user of the API to protect
+  * certain tree operations with a lock (either the external or internal tree
+  * lock) for tree internal reasons.
+  *
+  * The actual rules (when to acquire/release the lock) are enforced by lockdep
+  * through the maple tree implementation.
+  *
+  * For this reason the DRM GPUVA manager takes the maple tree's internal
+  * spinlock according to the lockdep enforced rules.
+  *
+  * Please note that this lock is *only* meant to fulfill the maple tree's
+  * requirements and is not intended to protect the DRM GPUVA manager
+  * against concurrent access.
+  *
+  * The following mail thread provides more details on why the maple tree
+  * has this requirement.
+  *
+  * https://lore.kernel.org/lkml/20230217134422.14116-5-dakr@redhat.com/
+  */
+
+static int __drm_gpuva_region_insert(struct drm_gpuva_manager *mgr,
+				     struct drm_gpuva_region *reg);
+static void __drm_gpuva_region_remove(struct drm_gpuva_region *reg);
+
+/**
+ * drm_gpuva_manager_init - initialize a &drm_gpuva_manager
+ * @mgr: pointer to the &drm_gpuva_manager to initialize
+ * @name: the name of the GPU VA space
+ * @start_offset: the start offset of the GPU VA space
+ * @range: the size of the GPU VA space
+ * @reserve_offset: the start of the kernel reserved GPU VA area
+ * @reserve_range: the size of the kernel reserved GPU VA area
+ * @ops: &drm_gpuva_fn_ops called on &drm_gpuva_sm_map / &drm_gpuva_sm_unmap
+ * @flags: the feature flags of the &drm_gpuva_manager
+ *
+ * The &drm_gpuva_manager must be initialized with this function before use.
+ *
+ * Note that @mgr must be cleared to 0 before calling this function. The given
+ * @name is expected to be managed by the surrounding driver structures.
+ */
+void
+drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
+		       const char *name,
+		       u64 start_offset, u64 range,
+		       u64 reserve_offset, u64 reserve_range,
+		       struct drm_gpuva_fn_ops *ops,
+		       enum drm_gpuva_mgr_flags flags)
+{
+	mt_init(&mgr->region_mt);
+	mt_init(&mgr->va_mt);
+
+	mgr->mm_start = start_offset;
+	mgr->mm_range = range;
+
+	mgr->name = name ? name : "unknown";
+	mgr->ops = ops;
+	mgr->flags = flags;
+
+	memset(&mgr->kernel_alloc_region, 0, sizeof(struct drm_gpuva_region));
+
+	if (reserve_offset || reserve_range) {
+		mgr->kernel_alloc_region.va.addr = reserve_offset;
+		mgr->kernel_alloc_region.va.range = reserve_range;
+
+		__drm_gpuva_region_insert(mgr, &mgr->kernel_alloc_region);
+	}
+}
+EXPORT_SYMBOL(drm_gpuva_manager_init);
+
+/**
+ * drm_gpuva_manager_destroy - cleanup a &drm_gpuva_manager
+ * @mgr: pointer to the &drm_gpuva_manager to clean up
+ *
+ * Note that it is a bug to call this function on a manager that still
+ * holds GPU VA mappings.
+ */
+void
+drm_gpuva_manager_destroy(struct drm_gpuva_manager *mgr)
+{
+	mgr->name = NULL;
+	if (mgr->kernel_alloc_region.va.addr ||
+	    mgr->kernel_alloc_region.va.range)
+		__drm_gpuva_region_remove(&mgr->kernel_alloc_region);
+
+	mtree_lock(&mgr->va_mt);
+	WARN(!mtree_empty(&mgr->va_mt),
+	     "GPUVA tree is not empty, potentially leaking memory.");
+	__mt_destroy(&mgr->va_mt);
+	mtree_unlock(&mgr->va_mt);
+
+	mtree_lock(&mgr->region_mt);
+	WARN(!mtree_empty(&mgr->region_mt),
+	     "GPUVA region tree is not empty, potentially leaking memory.");
+	__mt_destroy(&mgr->region_mt);
+	mtree_unlock(&mgr->region_mt);
+}
+EXPORT_SYMBOL(drm_gpuva_manager_destroy);
+
+static inline bool
+drm_gpuva_in_mm_range(struct drm_gpuva_manager *mgr, u64 addr, u64 range)
+{
+	u64 end = addr + range;
+	u64 mm_start = mgr->mm_start;
+	u64 mm_end = mm_start + mgr->mm_range;
+
+	return addr < mm_end && mm_start < end;
+}
+
+static inline bool
+drm_gpuva_in_kernel_region(struct drm_gpuva_manager *mgr, u64 addr, u64 range)
+{
+	u64 end = addr + range;
+	u64 kstart = mgr->kernel_alloc_region.va.addr;
+	u64 kend = kstart + mgr->kernel_alloc_region.va.range;
+
+	return addr < kend && kstart < end;
+}
+
+static struct drm_gpuva_region *
+drm_gpuva_in_region(struct drm_gpuva_manager *mgr, u64 addr, u64 range)
+{
+	DRM_GPUVA_REGION_ITER(it, mgr, addr);
+
+	/* Find the VA region the requested range is strictly enclosed by. */
+	drm_gpuva_iter_for_each_range(it, addr + range) {
+		struct drm_gpuva_region *reg = it.reg;
+
+		if (reg->va.addr <= addr &&
+		    reg->va.addr + reg->va.range >= addr + range &&
+		    reg != &mgr->kernel_alloc_region)
+			return reg;
+	}
+
+	return NULL;
+}
+
+static bool
+drm_gpuva_in_any_region(struct drm_gpuva_manager *mgr, u64 addr, u64 range)
+{
+	return !!drm_gpuva_in_region(mgr, addr, range);
+}
+
+/**
+ * drm_gpuva_iter_remove - removes the iterator's current element
+ * @it: the &drm_gpuva_iterator
+ *
+ * This removes the element the iterator currently points to.
+ */
+void
+drm_gpuva_iter_remove(struct drm_gpuva_iterator *it)
+{
+	mas_lock(&it->mas);
+	mas_erase(&it->mas);
+	mas_unlock(&it->mas);
+}
+EXPORT_SYMBOL(drm_gpuva_iter_remove);
+
+static int
+drm_gpuva_iter_common_replace(struct drm_gpuva_iterator *it,
+			      u64 addr, u64 range, void *entry)
+{
+	struct ma_state *mas = &it->mas;
+	u64 last = addr + range - 1;
+	int ret;
+
+	if (unlikely(addr < mas->index ||
+		     last > mas->last))
+		return -EINVAL;
+
+	mas_lock(mas);
+
+	ret = mas_preallocate(mas, GFP_KERNEL);
+	if (ret)
+		goto err_unlock;
+
+	mas_erase(mas);
+
+	mas->index = addr;
+	mas->last = last;
+	mas_store_prealloc(mas, entry);
+
+	mas_unlock(mas);
+
+	return 0;
+
+err_unlock:
+	mas_unlock(mas);
+	return ret;
+}
+
+/**
+ * drm_gpuva_iter_va_replace - replaces the iterator's current element
+ * @it: the &drm_gpuva_iterator
+ * @va: the &drm_gpuva to insert
+ *
+ * This replaces the element the iterator currently points to.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
+int
+drm_gpuva_iter_va_replace(struct drm_gpuva_iterator *it,
+			  struct drm_gpuva *va)
+{
+	u64 addr = va->va.addr;
+	u64 range = va->va.range;
+
+	return drm_gpuva_iter_common_replace(it, addr, range, va);
+}
+EXPORT_SYMBOL(drm_gpuva_iter_va_replace);
+
+/**
+ * drm_gpuva_iter_region_replace - replaces the iterator's current element
+ * @it: the &drm_gpuva_iterator
+ * @reg: the &drm_gpuva_region to insert
+ *
+ * This replaces the element the iterator currently points to.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
+int
+drm_gpuva_iter_region_replace(struct drm_gpuva_iterator *it,
+			      struct drm_gpuva_region *reg)
+{
+	u64 addr = reg->va.addr;
+	u64 range = reg->va.range;
+
+	return drm_gpuva_iter_common_replace(it, addr, range, reg);
+}
+EXPORT_SYMBOL(drm_gpuva_iter_region_replace);
+
+/**
+ * drm_gpuva_insert - insert a &drm_gpuva
+ * @mgr: the &drm_gpuva_manager to insert the &drm_gpuva in
+ * @va: the &drm_gpuva to insert
+ *
+ * Insert a &drm_gpuva with a given address and range into a
+ * &drm_gpuva_manager.
+ *
+ * It is not allowed to use this function while iterating this GPU VA space,
+ * e.g. via drm_gpuva_iter_for_each(). Please use drm_gpuva_iter_remove(),
+ * drm_gpuva_iter_va_replace() or drm_gpuva_iter_region_replace() instead.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
+int
+drm_gpuva_insert(struct drm_gpuva_manager *mgr,
+		 struct drm_gpuva *va)
+{
+	u64 addr = va->va.addr;
+	u64 range = va->va.range;
+	u64 last = addr + range - 1;
+	MA_STATE(mas, &mgr->va_mt, addr, addr);
+	struct drm_gpuva_region *reg = NULL;
+	int ret;
+
+	if (unlikely(!drm_gpuva_in_mm_range(mgr, addr, range)))
+		return -EINVAL;
+
+	if (unlikely(drm_gpuva_in_kernel_region(mgr, addr, range)))
+		return -EINVAL;
+
+	if (mgr->flags & DRM_GPUVA_MANAGER_REGIONS) {
+		reg = drm_gpuva_in_region(mgr, addr, range);
+		if (unlikely(!reg))
+			return -EINVAL;
+	}
+
+	mas_lock(&mas);
+
+	if (unlikely(mas_walk(&mas))) {
+		ret = -EEXIST;
+		goto err_unlock;
+	}
+
+	if (unlikely(mas.last < last)) {
+		ret = -EEXIST;
+		goto err_unlock;
+	}
+
+	mas.index = addr;
+	mas.last = last;
+	ret = mas_store_gfp(&mas, va, GFP_KERNEL);
+	if (unlikely(ret))
+		goto err_unlock;
+
+	mas_unlock(&mas);
+
+	va->mgr = mgr;
+	va->region = reg;
+
+	return 0;
+
+err_unlock:
+	mas_unlock(&mas);
+	return ret;
+}
+EXPORT_SYMBOL(drm_gpuva_insert);
+
+/**
+ * drm_gpuva_remove - remove a &drm_gpuva
+ * @va: the &drm_gpuva to remove
+ *
+ * This removes the given &va from the underlying tree.
+ *
+ * It is not allowed to use this function while iterating this GPU VA space,
+ * e.g. via drm_gpuva_iter_for_each(). Please use drm_gpuva_iter_remove(),
+ * drm_gpuva_iter_va_replace() or drm_gpuva_iter_region_replace() instead.
+ */
+void
+drm_gpuva_remove(struct drm_gpuva *va)
+{
+	MA_STATE(mas, &va->mgr->va_mt, va->va.addr, 0);
+
+	mas_lock(&mas);
+	mas_erase(&mas);
+	mas_unlock(&mas);
+}
+EXPORT_SYMBOL(drm_gpuva_remove);
+
+/**
+ * drm_gpuva_link - link a &drm_gpuva
+ * @va: the &drm_gpuva to link
+ *
+ * This adds the given &va to the GPU VA list of the &drm_gem_object it is
+ * associated with.
+ *
+ * This function expects the caller to protect the GEM's GPUVA list against
+ * concurrent access.
+ */
+void
+drm_gpuva_link(struct drm_gpuva *va)
+{
+	if (likely(va->gem.obj))
+		list_add_tail(&va->head, &va->gem.obj->gpuva.list);
+}
+EXPORT_SYMBOL(drm_gpuva_link);
+
+/**
+ * drm_gpuva_unlink - unlink a &drm_gpuva
+ * @va: the &drm_gpuva to unlink
+ *
+ * This removes the given &va from the GPU VA list of the &drm_gem_object it is
+ * associated with.
+ *
+ * This function expects the caller to protect the GEM's GPUVA list against
+ * concurrent access.
+ */
+void
+drm_gpuva_unlink(struct drm_gpuva *va)
+{
+	if (likely(va->gem.obj))
+		list_del_init(&va->head);
+}
+EXPORT_SYMBOL(drm_gpuva_unlink);
+
+/**
+ * drm_gpuva_find_first - find the first &drm_gpuva in the given range
+ * @mgr: the &drm_gpuva_manager to search in
+ * @addr: the &drm_gpuva's address
+ * @range: the &drm_gpuva's range
+ *
+ * Returns: the first &drm_gpuva within the given range
+ */
+struct drm_gpuva *
+drm_gpuva_find_first(struct drm_gpuva_manager *mgr,
+		     u64 addr, u64 range)
+{
+	MA_STATE(mas, &mgr->va_mt, addr, 0);
+	struct drm_gpuva *va;
+
+	mas_lock(&mas);
+	va = mas_find(&mas, addr + range - 1);
+	mas_unlock(&mas);
+
+	return va;
+}
+EXPORT_SYMBOL(drm_gpuva_find_first);
+
+/**
+ * drm_gpuva_find - find a &drm_gpuva
+ * @mgr: the &drm_gpuva_manager to search in
+ * @addr: the &drm_gpuva's address
+ * @range: the &drm_gpuva's range
+ *
+ * Returns: the &drm_gpuva at a given @addr and with a given @range
+ */
+struct drm_gpuva *
+drm_gpuva_find(struct drm_gpuva_manager *mgr,
+	       u64 addr, u64 range)
+{
+	struct drm_gpuva *va;
+
+	va = drm_gpuva_find_first(mgr, addr, range);
+	if (!va)
+		goto out;
+
+	if (va->va.addr != addr ||
+	    va->va.range != range)
+		goto out;
+
+	return va;
+
+out:
+	return NULL;
+}
+EXPORT_SYMBOL(drm_gpuva_find);
+
+/**
+ * drm_gpuva_find_prev - find the &drm_gpuva before the given address
+ * @mgr: the &drm_gpuva_manager to search in
+ * @start: the given GPU VA's start address
+ *
+ * Find the adjacent &drm_gpuva before the GPU VA with given @start address.
+ *
+ * Note that if there is any free space between the GPU VA mappings no mapping
+ * is returned.
+ *
+ * Returns: a pointer to the found &drm_gpuva or NULL if none was found
+ */
+struct drm_gpuva *
+drm_gpuva_find_prev(struct drm_gpuva_manager *mgr, u64 start)
+{
+	MA_STATE(mas, &mgr->va_mt, start - 1, 0);
+	struct drm_gpuva *va;
+
+	if (start <= mgr->mm_start ||
+	    start > (mgr->mm_start + mgr->mm_range))
+		return NULL;
+
+	mas_lock(&mas);
+	va = mas_walk(&mas);
+	mas_unlock(&mas);
+
+	return va;
+}
+EXPORT_SYMBOL(drm_gpuva_find_prev);
+
+/**
+ * drm_gpuva_find_next - find the &drm_gpuva after the given address
+ * @mgr: the &drm_gpuva_manager to search in
+ * @end: the given GPU VA's end address
+ *
+ * Find the adjacent &drm_gpuva after the GPU VA with given @end address.
+ *
+ * Note that if there is any free space between the GPU VA mappings no mapping
+ * is returned.
+ *
+ * Returns: a pointer to the found &drm_gpuva or NULL if none was found
+ */
+struct drm_gpuva *
+drm_gpuva_find_next(struct drm_gpuva_manager *mgr, u64 end)
+{
+	MA_STATE(mas, &mgr->va_mt, end, 0);
+	struct drm_gpuva *va;
+
+	if (end < mgr->mm_start ||
+	    end >= (mgr->mm_start + mgr->mm_range))
+		return NULL;
+
+	mas_lock(&mas);
+	va = mas_walk(&mas);
+	mas_unlock(&mas);
+
+	return va;
+}
+EXPORT_SYMBOL(drm_gpuva_find_next);
+
+static int
+__drm_gpuva_region_insert(struct drm_gpuva_manager *mgr,
+			  struct drm_gpuva_region *reg)
+{
+	u64 addr = reg->va.addr;
+	u64 range = reg->va.range;
+	u64 last = addr + range - 1;
+	MA_STATE(mas, &mgr->region_mt, addr, addr);
+	int ret;
+
+	if (unlikely(!drm_gpuva_in_mm_range(mgr, addr, range)))
+		return -EINVAL;
+
+	mas_lock(&mas);
+
+	if (unlikely(mas_walk(&mas))) {
+		ret = -EEXIST;
+		goto err_unlock;
+	}
+
+	if (unlikely(mas.last < last)) {
+		ret = -EEXIST;
+		goto err_unlock;
+	}
+
+	mas.index = addr;
+	mas.last = last;
+	ret = mas_store_gfp(&mas, reg, GFP_KERNEL);
+	if (unlikely(ret))
+		goto err_unlock;
+
+	mas_unlock(&mas);
+
+	reg->mgr = mgr;
+
+	return 0;
+
+err_unlock:
+	mas_unlock(&mas);
+	return ret;
+}
+
+/**
+ * drm_gpuva_region_insert - insert a &drm_gpuva_region
+ * @mgr: the &drm_gpuva_manager to insert the &drm_gpuva_region in
+ * @reg: the &drm_gpuva_region to insert
+ *
+ * Insert a &drm_gpuva_region with a given address and range into a
+ * &drm_gpuva_manager.
+ *
+ * It is not allowed to use this function while iterating this GPU VA space,
+ * e.g. via drm_gpuva_iter_for_each(). Please use drm_gpuva_iter_remove(),
+ * drm_gpuva_iter_va_replace() or drm_gpuva_iter_region_replace() instead.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
+int
+drm_gpuva_region_insert(struct drm_gpuva_manager *mgr,
+			struct drm_gpuva_region *reg)
+{
+	if (unlikely(!(mgr->flags & DRM_GPUVA_MANAGER_REGIONS)))
+		return -EINVAL;
+
+	return __drm_gpuva_region_insert(mgr, reg);
+}
+EXPORT_SYMBOL(drm_gpuva_region_insert);
+
+static void
+__drm_gpuva_region_remove(struct drm_gpuva_region *reg)
+{
+	struct drm_gpuva_manager *mgr = reg->mgr;
+	MA_STATE(mas, &mgr->region_mt, reg->va.addr, 0);
+
+	mas_lock(&mas);
+	mas_erase(&mas);
+	mas_unlock(&mas);
+}
+
+/**
+ * drm_gpuva_region_remove - remove a &drm_gpuva_region
+ * @reg: the &drm_gpuva_region to remove
+ *
+ * This removes the given &reg from the underlying tree.
+ *
+ * It is not allowed to use this function while iterating this GPU VA space,
+ * e.g. via drm_gpuva_iter_for_each(). Please use drm_gpuva_iter_remove(),
+ * drm_gpuva_iter_va_replace() or drm_gpuva_iter_region_replace() instead.
+ */
+void
+drm_gpuva_region_remove(struct drm_gpuva_region *reg)
+{
+	struct drm_gpuva_manager *mgr = reg->mgr;
+
+	if (unlikely(!(mgr->flags & DRM_GPUVA_MANAGER_REGIONS)))
+		return;
+
+	if (unlikely(reg == &mgr->kernel_alloc_region)) {
+		WARN(1, "Can't destroy kernel reserved region.\n");
+		return;
+	}
+
+	if (unlikely(!drm_gpuva_region_empty(reg)))
+		WARN(1, "GPU VA region should be empty on destroy.\n");
+
+	__drm_gpuva_region_remove(reg);
+}
+EXPORT_SYMBOL(drm_gpuva_region_remove);
+
+/**
+ * drm_gpuva_region_empty - indicate whether a &drm_gpuva_region is empty
+ * @reg: the &drm_gpuva_region to check
+ *
+ * Returns: true if the &drm_gpuva_region is empty, false otherwise
+ */
+bool
+drm_gpuva_region_empty(struct drm_gpuva_region *reg)
+{
+	DRM_GPUVA_ITER(it, reg->mgr, reg->va.addr);
+
+	drm_gpuva_iter_for_each_range(it, reg->va.addr + reg->va.range)
+		return false;
+
+	return true;
+}
+EXPORT_SYMBOL(drm_gpuva_region_empty);
+
+/**
+ * drm_gpuva_region_find_first - find the first &drm_gpuva_region in the given
+ * range
+ * @mgr: the &drm_gpuva_manager to search in
+ * @addr: the &drm_gpuva_region's address
+ * @range: the &drm_gpuva_region's range
+ *
+ * Returns: the first &drm_gpuva_region within the given range
+ */
+struct drm_gpuva_region *
+drm_gpuva_region_find_first(struct drm_gpuva_manager *mgr,
+			    u64 addr, u64 range)
+{
+	MA_STATE(mas, &mgr->region_mt, addr, 0);
+	struct drm_gpuva_region *reg;
+
+	mas_lock(&mas);
+	reg = mas_find(&mas, addr + range - 1);
+	mas_unlock(&mas);
+
+	return reg;
+}
+EXPORT_SYMBOL(drm_gpuva_region_find_first);
+
+/**
+ * drm_gpuva_region_find - find a &drm_gpuva_region
+ * @mgr: the &drm_gpuva_manager to search in
+ * @addr: the &drm_gpuva_region's address
+ * @range: the &drm_gpuva_region's range
+ *
+ * Returns: the &drm_gpuva_region at a given @addr and with a given @range
+ */
+struct drm_gpuva_region *
+drm_gpuva_region_find(struct drm_gpuva_manager *mgr,
+		      u64 addr, u64 range)
+{
+	struct drm_gpuva_region *reg;
+
+	reg = drm_gpuva_region_find_first(mgr, addr, range);
+	if (!reg)
+		goto out;
+
+	if (reg->va.addr != addr ||
+	    reg->va.range != range)
+		goto out;
+
+	return reg;
+
+out:
+	return NULL;
+}
+EXPORT_SYMBOL(drm_gpuva_region_find);
+
+static int
+op_map_cb(int (*step)(struct drm_gpuva_op *, void *),
+	  void *priv,
+	  u64 addr, u64 range,
+	  struct drm_gem_object *obj, u64 offset)
+{
+	struct drm_gpuva_op op = {};
+
+	op.op = DRM_GPUVA_OP_MAP;
+	op.map.va.addr = addr;
+	op.map.va.range = range;
+	op.map.gem.obj = obj;
+	op.map.gem.offset = offset;
+
+	return step(&op, priv);
+}
+
+static int
+op_remap_cb(int (*step)(struct drm_gpuva_op *, void *),
+	    void *priv,
+	    struct drm_gpuva_op_map *prev,
+	    struct drm_gpuva_op_map *next,
+	    struct drm_gpuva_op_unmap *unmap)
+{
+	struct drm_gpuva_op op = {};
+	struct drm_gpuva_op_remap *r;
+
+	op.op = DRM_GPUVA_OP_REMAP;
+	r = &op.remap;
+	r->prev = prev;
+	r->next = next;
+	r->unmap = unmap;
+
+	return step(&op, priv);
+}
+
+static int
+op_unmap_cb(int (*step)(struct drm_gpuva_op *, void *),
+	    void *priv,
+	    struct drm_gpuva *va, bool merge)
+{
+	struct drm_gpuva_op op = {};
+
+	op.op = DRM_GPUVA_OP_UNMAP;
+	op.unmap.va = va;
+	op.unmap.keep = merge;
+
+	return step(&op, priv);
+}
+
+static inline bool
+gpuva_should_merge(struct drm_gpuva_manager *mgr, struct drm_gpuva *va)
+{
+	/* Never merge mappings with NULL GEMs. */
+	return mgr->flags & DRM_GPUVA_MANAGER_REGIONS && !!va->gem.obj;
+}
+
+static int
+__drm_gpuva_sm_map(struct drm_gpuva_manager *mgr,
+		   struct drm_gpuva_fn_ops *ops, void *priv,
+		   u64 req_addr, u64 req_range,
+		   struct drm_gem_object *req_obj, u64 req_offset)
+{
+	DRM_GPUVA_ITER(it, mgr, req_addr);
+	int (*step)(struct drm_gpuva_op *, void *);
+	struct drm_gpuva *va, *prev = NULL;
+	u64 req_end = req_addr + req_range;
+	bool skip_pmerge = false, skip_nmerge = false;
+	int ret;
+
+	step = ops->sm_map_step;
+
+	if (unlikely(!drm_gpuva_in_mm_range(mgr, req_addr, req_range)))
+		return -EINVAL;
+
+	if (unlikely(drm_gpuva_in_kernel_region(mgr, req_addr, req_range)))
+		return -EINVAL;
+
+	if ((mgr->flags & DRM_GPUVA_MANAGER_REGIONS) &&
+	    !drm_gpuva_in_any_region(mgr, req_addr, req_range))
+		return -EINVAL;
+
+	drm_gpuva_iter_for_each_range(it, req_end) {
+		struct drm_gpuva *va = it.va;
+		struct drm_gem_object *obj = va->gem.obj;
+		u64 offset = va->gem.offset;
+		u64 addr = va->va.addr;
+		u64 range = va->va.range;
+		u64 end = addr + range;
+		bool merge = gpuva_should_merge(mgr, va);
+
+		/* Generally, we want to skip merging with potential mappings
+		 * left and right of the requested one when we found a
+		 * collision, since merging happens in this loop already.
+		 *
+		 * However, there is one exception when the requested mapping
+		 * spans into a free VM area. If this is the case we might
+		 * still hit the boundary of another mapping before and/or
+		 * after the free VM area.
+		 */
+		skip_pmerge = true;
+		skip_nmerge = true;
+
+		if (addr == req_addr) {
+			merge &= obj == req_obj &&
+				 offset == req_offset;
+
+			if (end == req_end) {
+				if (merge)
+					goto done;
+
+				ret = op_unmap_cb(step, priv, va, false);
+				if (ret)
+					return ret;
+				break;
+			}
+
+			if (end < req_end) {
+				skip_nmerge = false;
+				ret = op_unmap_cb(step, priv, va, merge);
+				if (ret)
+					return ret;
+				goto next;
+			}
+
+			if (end > req_end) {
+				struct drm_gpuva_op_map n = {
+					.va.addr = req_end,
+					.va.range = range - req_range,
+					.gem.obj = obj,
+					.gem.offset = offset + req_range,
+				};
+				struct drm_gpuva_op_unmap u = { .va = va };
+
+				if (merge)
+					goto done;
+
+				ret = op_remap_cb(step, priv, NULL, &n, &u);
+				if (ret)
+					return ret;
+				break;
+			}
+		} else if (addr < req_addr) {
+			u64 ls_range = req_addr - addr;
+			struct drm_gpuva_op_map p = {
+				.va.addr = addr,
+				.va.range = ls_range,
+				.gem.obj = obj,
+				.gem.offset = offset,
+			};
+			struct drm_gpuva_op_unmap u = { .va = va };
+
+			merge &= obj == req_obj &&
+				 offset + ls_range == req_offset;
+
+			if (end == req_end) {
+				if (merge)
+					goto done;
+
+				ret = op_remap_cb(step, priv, &p, NULL, &u);
+				if (ret)
+					return ret;
+				break;
+			}
+
+			if (end < req_end) {
+				u64 new_addr = addr;
+				u64 new_range = req_range + ls_range;
+				u64 new_offset = offset;
+
+				/* We validated that the requested mapping is
+				 * within a single VA region already.
+				 * Since it overlaps the current mapping (which
+				 * can't cross a VA region boundary) we can be
+				 * sure that we're still within the boundaries
+				 * of the same VA region after merging.
+				 */
+				if (merge) {
+					req_offset = new_offset;
+					req_addr = new_addr;
+					req_range = new_range;
+					ret = op_unmap_cb(step, priv, va, true);
+					if (ret)
+						return ret;
+					goto next;
+				}
+
+				ret = op_remap_cb(step, priv, &p, NULL, &u);
+				if (ret)
+					return ret;
+				goto next;
+			}
+
+			if (end > req_end) {
+				struct drm_gpuva_op_map n = {
+					.va.addr = req_end,
+					.va.range = end - req_end,
+					.gem.obj = obj,
+					.gem.offset = offset + ls_range +
+						      req_range,
+				};
+
+				if (merge)
+					goto done;
+
+				ret = op_remap_cb(step, priv, &p, &n, &u);
+				if (ret)
+					return ret;
+				break;
+			}
+		} else if (addr > req_addr) {
+			merge &= obj == req_obj &&
+				 offset == req_offset +
+					   (addr - req_addr);
+
+			if (!prev)
+				skip_pmerge = false;
+
+			if (end == req_end) {
+				ret = op_unmap_cb(step, priv, va, merge);
+				if (ret)
+					return ret;
+				break;
+			}
+
+			if (end < req_end) {
+				skip_nmerge = false;
+				ret = op_unmap_cb(step, priv, va, merge);
+				if (ret)
+					return ret;
+				goto next;
+			}
+
+			if (end > req_end) {
+				struct drm_gpuva_op_map n = {
+					.va.addr = req_end,
+					.va.range = end - req_end,
+					.gem.obj = obj,
+					.gem.offset = offset + req_end - addr,
+				};
+				struct drm_gpuva_op_unmap u = { .va = va };
+				u64 new_end = end;
+				u64 new_range = new_end - req_addr;
+
+				/* We validated that the requested mapping is
+				 * within a single VA region already.
+				 * Since it overlaps the current mapping (which
+				 * can't cross a VA region boundary) we can be
+				 * sure that we're still within the boundaries
+				 * of the same VA region after merging.
+				 */
+				if (merge) {
+					req_end = new_end;
+					req_range = new_range;
+					ret = op_unmap_cb(step, priv, va, true);
+					if (ret)
+						return ret;
+					break;
+				}
+
+				ret = op_remap_cb(step, priv, NULL, &n, &u);
+				if (ret)
+					return ret;
+				break;
+			}
+		}
+next:
+		prev = va;
+	}
+
+	va = skip_pmerge ? NULL : drm_gpuva_find_prev(mgr, req_addr);
+	if (va) {
+		struct drm_gem_object *obj = va->gem.obj;
+		u64 offset = va->gem.offset;
+		u64 addr = va->va.addr;
+		u64 range = va->va.range;
+		u64 new_offset = offset;
+		u64 new_addr = addr;
+		u64 new_range = req_range + range;
+		bool merge = gpuva_should_merge(mgr, va) &&
+			     obj == req_obj &&
+			     offset + range == req_offset;
+
+		if (mgr->flags & DRM_GPUVA_MANAGER_REGIONS)
+			merge &= drm_gpuva_in_any_region(mgr, new_addr,
+							 new_range);
+
+		if (merge) {
+			ret = op_unmap_cb(step, priv, va, true);
+			if (ret)
+				return ret;
+
+			req_offset = new_offset;
+			req_addr = new_addr;
+			req_range = new_range;
+		}
+	}
+
+	va = skip_nmerge ? NULL : drm_gpuva_find_next(mgr, req_end);
+	if (va) {
+		struct drm_gem_object *obj = va->gem.obj;
+		u64 offset = va->gem.offset;
+		u64 addr = va->va.addr;
+		u64 range = va->va.range;
+		u64 end = addr + range;
+		u64 new_range = req_range + range;
+		u64 new_end = end;
+		bool merge = gpuva_should_merge(mgr, va) &&
+			     obj == req_obj &&
+			     offset == req_offset + req_range;
+
+		if (mgr->flags & DRM_GPUVA_MANAGER_REGIONS)
+			merge &= drm_gpuva_in_any_region(mgr, req_addr,
+							 new_range);
+
+		if (merge) {
+			ret = op_unmap_cb(step, priv, va, true);
+			if (ret)
+				return ret;
+
+			req_range = new_range;
+			req_end = new_end;
+		}
+	}
+
+	ret = op_map_cb(step, priv,
+			req_addr, req_range,
+			req_obj, req_offset);
+	if (ret)
+		return ret;
+
+done:
+	return 0;
+}
+
+static int
+__drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr,
+		     struct drm_gpuva_fn_ops *ops, void *priv,
+		     u64 req_addr, u64 req_range)
+{
+	DRM_GPUVA_ITER(it, mgr, req_addr);
+	int (*step)(struct drm_gpuva_op *, void *);
+	u64 req_end = req_addr + req_range;
+	int ret;
+
+	step = ops->sm_unmap_step;
+
+	drm_gpuva_iter_for_each_range(it, req_end) {
+		struct drm_gpuva *va = it.va;
+		struct drm_gpuva_op_map prev = {}, next = {};
+		bool prev_split = false, next_split = false;
+		struct drm_gem_object *obj = va->gem.obj;
+		u64 offset = va->gem.offset;
+		u64 addr = va->va.addr;
+		u64 range = va->va.range;
+		u64 end = addr + range;
+
+		if (addr < req_addr) {
+			prev.va.addr = addr;
+			prev.va.range = req_addr - addr;
+			prev.gem.obj = obj;
+			prev.gem.offset = offset;
+
+			prev_split = true;
+		}
+
+		if (end > req_end) {
+			next.va.addr = req_end;
+			next.va.range = end - req_end;
+			next.gem.obj = obj;
+			next.gem.offset = offset + (req_end - addr);
+
+			next_split = true;
+		}
+
+		if (prev_split || next_split) {
+			struct drm_gpuva_op_unmap unmap = { .va = va };
+
+			ret = op_remap_cb(step, priv, prev_split ? &prev : NULL,
+					  next_split ? &next : NULL, &unmap);
+			if (ret)
+				return ret;
+		} else {
+			ret = op_unmap_cb(step, priv, va, false);
+			if (ret)
+				return ret;
+		}
+	}
+
+	return 0;
+}
+
+/**
+ * drm_gpuva_sm_map - creates the &drm_gpuva_op split/merge steps
+ * @mgr: the &drm_gpuva_manager representing the GPU VA space
+ * @req_addr: the start address of the new mapping
+ * @req_range: the range of the new mapping
+ * @req_obj: the &drm_gem_object to map
+ * @req_offset: the offset within the &drm_gem_object
+ * @priv: pointer to a driver private data structure
+ *
+ * This function iterates the given range of the GPU VA space. It utilizes the
+ * &drm_gpuva_fn_ops to call back into the driver providing the split and merge
+ * steps.
+ *
+ * Drivers may use these callbacks to update the GPU VA space right away within
+ * the callback. In case the driver decides to copy and store the operations for
+ * later processing neither this function nor &drm_gpuva_sm_unmap is allowed to
+ * be called before the &drm_gpuva_manager's view of the GPU VA space was
+ * updated with the previous set of operations. To update the
+ * &drm_gpuva_manager's view of the GPU VA space drm_gpuva_insert(),
+ * drm_gpuva_destroy_locked() and/or drm_gpuva_destroy_unlocked() should be
+ * used.
+ *
+ * A sequence of callbacks can contain map, unmap and remap operations, but
+ * the sequence of callbacks might also be empty if no operation is required,
+ * e.g. if the requested mapping already exists in the exact same way.
+ *
+ * There can be an arbitrary amount of unmap operations, a maximum of two remap
+ * operations and a single map operation. The latter one, if existent,
+ * represents the original map operation requested by the caller. Please note
+ * that the map operation might have been modified, e.g. if it was merged with
+ * an existent mapping.
+ *
+ * Returns: 0 on success or a negative error code
+ */
+int
+drm_gpuva_sm_map(struct drm_gpuva_manager *mgr, void *priv,
+		 u64 req_addr, u64 req_range,
+		 struct drm_gem_object *req_obj, u64 req_offset)
+{
+	if (!mgr->ops || !mgr->ops->sm_map_step)
+		return -EINVAL;
+
+	return __drm_gpuva_sm_map(mgr, mgr->ops, priv,
+				  req_addr, req_range,
+				  req_obj, req_offset);
+}
+EXPORT_SYMBOL(drm_gpuva_sm_map);
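
For the immediate-processing mode described in the kerneldoc above, a driver
wires the step callback into its &drm_gpuva_fn_ops and dispatches on the op
type. The following is a minimal driver-side sketch only: my_vm and the
my_bind_*() helpers are placeholders, while the GPUVA types and calls are the
ones added by this patch.

static int my_sm_map_step(struct drm_gpuva_op *op, void *priv)
{
	struct my_vm *vm = priv;	/* the priv pointer passed to drm_gpuva_sm_map() */

	switch (op->op) {
	case DRM_GPUVA_OP_MAP:
		/* program PTEs for op->map, then drm_gpuva_insert() the new VMA */
		return my_bind_map(vm, &op->map);
	case DRM_GPUVA_OP_REMAP:
		/* op->remap.unmap->va is split; prev/next describe the surviving ends */
		return my_bind_remap(vm, &op->remap);
	case DRM_GPUVA_OP_UNMAP:
		/* tear down op->unmap.va, then drm_gpuva_remove() it */
		return my_bind_unmap(vm, &op->unmap);
	default:
		return -EINVAL;
	}
}

static struct drm_gpuva_fn_ops my_gpuva_ops = {
	.sm_map_step = my_sm_map_step,
	/* .sm_unmap_step would handle the split-only unmap path analogously */
};

The driver's bind path would then call drm_gpuva_sm_map(&vm->mgr, vm, addr,
range, obj, offset), updating the manager's view after each step as required
above.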
+
+/**
+ * drm_gpuva_sm_unmap - creates the &drm_gpuva_ops to split on unmap
+ * @mgr: the &drm_gpuva_manager representing the GPU VA space
+ * @priv: pointer to a driver private data structure
+ * @req_addr: the start address of the range to unmap
+ * @req_range: the range of the mappings to unmap
+ *
+ * This function iterates the given range of the GPU VA space. It utilizes the
+ * &drm_gpuva_fn_ops to call back into the driver providing the operations to
+ * unmap and, if required, split existent mappings.
+ *
+ * Drivers may use these callbacks to update the GPU VA space right away within
+ * the callback. In case the driver decides to copy and store the operations for
+ * later processing neither this function nor &drm_gpuva_sm_map is allowed to be
+ * called before the &drm_gpuva_manager's view of the GPU VA space was updated
+ * with the previous set of operations. To update the &drm_gpuva_manager's view
+ * of the GPU VA space drm_gpuva_insert(), drm_gpuva_destroy_locked() and/or
+ * drm_gpuva_destroy_unlocked() should be used.
+ *
+ * A sequence of callbacks can contain unmap and remap operations, depending on
+ * whether there are actual overlapping mappings to split.
+ *
+ * There can be an arbitrary amount of unmap operations and a maximum of two
+ * remap operations.
+ *
+ * Returns: 0 on success or a negative error code
+ */
+int
+drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr, void *priv,
+		   u64 req_addr, u64 req_range)
+{
+	if (!mgr->ops || !mgr->ops->sm_unmap_step)
+		return -EINVAL;
+
+	return __drm_gpuva_sm_unmap(mgr, mgr->ops, priv,
+				    req_addr, req_range);
+}
+EXPORT_SYMBOL(drm_gpuva_sm_unmap);
+
+static struct drm_gpuva_op *
+gpuva_op_alloc(struct drm_gpuva_manager *mgr)
+{
+	struct drm_gpuva_fn_ops *fn = mgr->ops;
+	struct drm_gpuva_op *op;
+
+	if (fn && fn->op_alloc)
+		op = fn->op_alloc();
+	else
+		op = kzalloc(sizeof(*op), GFP_KERNEL);
+
+	if (unlikely(!op))
+		return NULL;
+
+	return op;
+}
+
+static void
+gpuva_op_free(struct drm_gpuva_manager *mgr,
+	      struct drm_gpuva_op *op)
+{
+	struct drm_gpuva_fn_ops *fn = mgr->ops;
+
+	if (fn && fn->op_free)
+		fn->op_free(op);
+	else
+		kfree(op);
+}
+
+static int drm_gpuva_sm_step(struct drm_gpuva_op *__op, void *priv)
+{
+	struct {
+		struct drm_gpuva_manager *mgr;
+		struct drm_gpuva_ops *ops;
+	} *args = priv;
+	struct drm_gpuva_manager *mgr = args->mgr;
+	struct drm_gpuva_ops *ops = args->ops;
+	struct drm_gpuva_op *op;
+
+	op = gpuva_op_alloc(mgr);
+	if (unlikely(!op))
+		goto err;
+
+	memcpy(op, __op, sizeof(*op));
+
+	if (op->op == DRM_GPUVA_OP_REMAP) {
+		struct drm_gpuva_op_remap *__r = &__op->remap;
+		struct drm_gpuva_op_remap *r = &op->remap;
+
+		r->unmap = kmemdup(__r->unmap, sizeof(*r->unmap),
+				   GFP_KERNEL);
+		if (unlikely(!r->unmap))
+			goto err_free_op;
+
+		if (__r->prev) {
+			r->prev = kmemdup(__r->prev, sizeof(*r->prev),
+					  GFP_KERNEL);
+			if (unlikely(!r->prev))
+				goto err_free_unmap;
+		}
+
+		if (__r->next) {
+			r->next = kmemdup(__r->next, sizeof(*r->next),
+					  GFP_KERNEL);
+			if (unlikely(!r->next))
+				goto err_free_prev;
+		}
+	}
+
+	list_add_tail(&op->entry, &ops->list);
+
+	return 0;
+
+err_free_prev:
+	kfree(op->remap.prev);
+err_free_unmap:
+	kfree(op->remap.unmap);
+err_free_op:
+	gpuva_op_free(mgr, op);
+err:
+	return -ENOMEM;
+}
+
+static struct drm_gpuva_fn_ops gpuva_list_ops = {
+	.sm_map_step = drm_gpuva_sm_step,
+	.sm_unmap_step = drm_gpuva_sm_step,
+};
+
+/**
+ * drm_gpuva_sm_map_ops_create - creates the &drm_gpuva_ops to split and merge
+ * @mgr: the &drm_gpuva_manager representing the GPU VA space
+ * @req_addr: the start address of the new mapping
+ * @req_range: the range of the new mapping
+ * @req_obj: the &drm_gem_object to map
+ * @req_offset: the offset within the &drm_gem_object
+ *
+ * This function creates a list of operations to perform splitting and merging
+ * of existent mapping(s) with the newly requested one.
+ *
+ * The list can be iterated with &drm_gpuva_for_each_op and must be processed
+ * in the given order. It can contain map, unmap and remap operations, but it
+ * also can be empty if no operation is required, e.g. if the requested mapping
+ * already exists in the exact same way.
+ *
+ * There can be an arbitrary amount of unmap operations, a maximum of two remap
+ * operations and a single map operation. The latter one, if existent,
+ * represents the original map operation requested by the caller. Please note
+ * that the map operation might have been modified, e.g. if it was merged with an
+ * existent mapping.
+ *
+ * Note that before calling this function again with another mapping request it
+ * is necessary to update the &drm_gpuva_manager's view of the GPU VA space. The
+ * previously obtained operations must be either processed or abandoned. To
+ * update the &drm_gpuva_manager's view of the GPU VA space drm_gpuva_insert(),
+ * drm_gpuva_destroy_locked() and/or drm_gpuva_destroy_unlocked() should be
+ * used.
+ *
+ * After the caller finished processing the returned &drm_gpuva_ops, they must
+ * be freed with &drm_gpuva_ops_free.
+ *
+ * Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure
+ */
+struct drm_gpuva_ops *
+drm_gpuva_sm_map_ops_create(struct drm_gpuva_manager *mgr,
+			    u64 req_addr, u64 req_range,
+			    struct drm_gem_object *req_obj, u64 req_offset)
+{
+	struct drm_gpuva_ops *ops;
+	struct {
+		struct drm_gpuva_manager *mgr;
+		struct drm_gpuva_ops *ops;
+	} args;
+	int ret;
+
+	ops = kzalloc(sizeof(*ops), GFP_KERNEL);
+	if (unlikely(!ops))
+		return ERR_PTR(-ENOMEM);
+
+	INIT_LIST_HEAD(&ops->list);
+
+	args.mgr = mgr;
+	args.ops = ops;
+
+	ret = __drm_gpuva_sm_map(mgr, &gpuva_list_ops, &args,
+				 req_addr, req_range,
+				 req_obj, req_offset);
+	if (ret) {
+		drm_gpuva_ops_free(mgr, ops);
+		return ERR_PTR(ret);
+	}
+
+	return ops;
+}
+EXPORT_SYMBOL(drm_gpuva_sm_map_ops_create);
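
When the operations are collected for later processing instead, the usual
shape is create, iterate, free; a rough sketch, assuming the manager is
embedded in a driver VM as vm->mgr and my_process_op() is a placeholder:

	struct drm_gpuva_ops *ops;
	struct drm_gpuva_op *op;
	int err = 0;

	ops = drm_gpuva_sm_map_ops_create(&vm->mgr, addr, range, obj, offset);
	if (IS_ERR(ops))
		return PTR_ERR(ops);

	drm_gpuva_for_each_op(op, ops) {
		/* op->op selects the union member: map, remap or unmap */
		err = my_process_op(vm, op);
		if (err)
			break;
	}

	drm_gpuva_ops_free(&vm->mgr, ops);
	return err;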
+
+/**
+ * drm_gpuva_sm_unmap_ops_create - creates the &drm_gpuva_ops to split on unmap
+ * @mgr: the &drm_gpuva_manager representing the GPU VA space
+ * @req_addr: the start address of the range to unmap
+ * @req_range: the range of the mappings to unmap
+ *
+ * This function creates a list of operations to perform unmapping and, if
+ * required, splitting of the mappings overlapping the unmap range.
+ *
+ * The list can be iterated with &drm_gpuva_for_each_op and must be processed
+ * in the given order. It can contain unmap and remap operations, depending on
+ * whether there are actual overlapping mappings to split.
+ *
+ * There can be an arbitrary amount of unmap operations and a maximum of two
+ * remap operations.
+ *
+ * Note that before calling this function again with another range to unmap it
+ * is necessary to update the &drm_gpuva_manager's view of the GPU VA space. The
+ * previously obtained operations must be processed or abandoned. To update the
+ * &drm_gpuva_manager's view of the GPU VA space drm_gpuva_insert(),
+ * drm_gpuva_destroy_locked() and/or drm_gpuva_destroy_unlocked() should be
+ * used.
+ *
+ * After the caller finished processing the returned &drm_gpuva_ops, they must
+ * be freed with &drm_gpuva_ops_free.
+ *
+ * Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure
+ */
+struct drm_gpuva_ops *
+drm_gpuva_sm_unmap_ops_create(struct drm_gpuva_manager *mgr,
+			      u64 req_addr, u64 req_range)
+{
+	struct drm_gpuva_ops *ops;
+	struct {
+		struct drm_gpuva_manager *mgr;
+		struct drm_gpuva_ops *ops;
+	} args;
+	int ret;
+
+	ops = kzalloc(sizeof(*ops), GFP_KERNEL);
+	if (unlikely(!ops))
+		return ERR_PTR(-ENOMEM);
+
+	INIT_LIST_HEAD(&ops->list);
+
+	args.mgr = mgr;
+	args.ops = ops;
+
+	ret = __drm_gpuva_sm_unmap(mgr, &gpuva_list_ops, &args,
+				   req_addr, req_range);
+	if (ret) {
+		drm_gpuva_ops_free(mgr, ops);
+		return ERR_PTR(ret);
+	}
+
+	return ops;
+}
+EXPORT_SYMBOL(drm_gpuva_sm_unmap_ops_create);
+
+/**
+ * drm_gpuva_prefetch_ops_create - creates the &drm_gpuva_ops to prefetch
+ * @mgr: the &drm_gpuva_manager representing the GPU VA space
+ * @addr: the start address of the range to prefetch
+ * @range: the range of the mappings to prefetch
+ *
+ * This function creates a list of operations to perform prefetching.
+ *
+ * The list can be iterated with &drm_gpuva_for_each_op and must be processed
+ * in the given order. It can contain prefetch operations.
+ *
+ * There can be an arbitrary amount of prefetch operations.
+ *
+ * After the caller finished processing the returned &drm_gpuva_ops, they must
+ * be freed with &drm_gpuva_ops_free.
+ *
+ * Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure
+ */
+struct drm_gpuva_ops *
+drm_gpuva_prefetch_ops_create(struct drm_gpuva_manager *mgr,
+			      u64 addr, u64 range)
+{
+	DRM_GPUVA_ITER(it, mgr, addr);
+	struct drm_gpuva_ops *ops;
+	struct drm_gpuva_op *op;
+	int ret;
+
+	ops = kzalloc(sizeof(*ops), GFP_KERNEL);
+	if (!ops)
+		return ERR_PTR(-ENOMEM);
+
+	INIT_LIST_HEAD(&ops->list);
+
+	drm_gpuva_iter_for_each_range(it, addr + range) {
+		op = gpuva_op_alloc(mgr);
+		if (!op) {
+			ret = -ENOMEM;
+			goto err_free_ops;
+		}
+
+		op->op = DRM_GPUVA_OP_PREFETCH;
+		op->prefetch.va = it.va;
+		list_add_tail(&op->entry, &ops->list);
+	}
+
+	return ops;
+
+err_free_ops:
+	drm_gpuva_ops_free(mgr, ops);
+	return ERR_PTR(ret);
+}
+EXPORT_SYMBOL(drm_gpuva_prefetch_ops_create);
+
+/**
+ * drm_gpuva_gem_unmap_ops_create - creates the &drm_gpuva_ops to unmap a GEM
+ * @mgr: the &drm_gpuva_manager representing the GPU VA space
+ * @obj: the &drm_gem_object to unmap
+ *
+ * This function creates a list of operations to perform unmapping for every
+ * GPUVA attached to a GEM.
+ *
+ * The list can be iterated with &drm_gpuva_for_each_op and consists of an
+ * arbitrary number of unmap operations.
+ *
+ * After the caller finished processing the returned &drm_gpuva_ops, they must
+ * be freed with &drm_gpuva_ops_free.
+ *
+ * It is the caller's responsibility to protect the GEM's GPUVA list against
+ * concurrent access.
+ *
+ * Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure
+ */
+struct drm_gpuva_ops *
+drm_gpuva_gem_unmap_ops_create(struct drm_gpuva_manager *mgr,
+			       struct drm_gem_object *obj)
+{
+	struct drm_gpuva_ops *ops;
+	struct drm_gpuva_op *op;
+	struct drm_gpuva *va;
+	int ret;
+
+	ops = kzalloc(sizeof(*ops), GFP_KERNEL);
+	if (!ops)
+		return ERR_PTR(-ENOMEM);
+
+	INIT_LIST_HEAD(&ops->list);
+
+	drm_gem_for_each_gpuva(va, obj) {
+		op = gpuva_op_alloc(mgr);
+		if (!op) {
+			ret = -ENOMEM;
+			goto err_free_ops;
+		}
+
+		op->op = DRM_GPUVA_OP_UNMAP;
+		op->unmap.va = va;
+		list_add_tail(&op->entry, &ops->list);
+	}
+
+	return ops;
+
+err_free_ops:
+	drm_gpuva_ops_free(mgr, ops);
+	return ERR_PTR(ret);
+}
+EXPORT_SYMBOL(drm_gpuva_gem_unmap_ops_create);
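
Since the helper walks the GEM's gpuva list, a caller relying on the list
mutex from drm_gem.h would bracket it roughly as follows (vm->mgr is again a
placeholder embedding):

	struct drm_gpuva_ops *ops;

	drm_gem_gpuva_lock(obj);
	ops = drm_gpuva_gem_unmap_ops_create(&vm->mgr, obj);
	drm_gem_gpuva_unlock(obj);
	if (IS_ERR(ops))
		return PTR_ERR(ops);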
+
+/**
+ * drm_gpuva_ops_free - free the given &drm_gpuva_ops
+ * @mgr: the &drm_gpuva_manager the ops were created for
+ * @ops: the &drm_gpuva_ops to free
+ *
+ * Frees the given &drm_gpuva_ops structure including all the ops associated
+ * with it.
+ */
+void
+drm_gpuva_ops_free(struct drm_gpuva_manager *mgr,
+		   struct drm_gpuva_ops *ops)
+{
+	struct drm_gpuva_op *op, *next;
+
+	drm_gpuva_for_each_op_safe(op, next, ops) {
+		list_del(&op->entry);
+
+		if (op->op == DRM_GPUVA_OP_REMAP) {
+			kfree(op->remap.prev);
+			kfree(op->remap.next);
+			kfree(op->remap.unmap);
+		}
+
+		gpuva_op_free(mgr, op);
+	}
+
+	kfree(ops);
+}
+EXPORT_SYMBOL(drm_gpuva_ops_free);
diff --git a/include/drm/drm_debugfs.h b/include/drm/drm_debugfs.h
index 7616f457ce70..3031fcb96b39 100644
--- a/include/drm/drm_debugfs.h
+++ b/include/drm/drm_debugfs.h
@@ -34,6 +34,22 @@
 
 #include <linux/types.h>
 #include <linux/seq_file.h>
+
+#include <drm/drm_gpuva_mgr.h>
+
+/**
+ * DRM_DEBUGFS_GPUVA_INFO - &drm_info_list entry to dump a GPU VA space
+ * @show: the &drm_info_list's show callback
+ * @data: driver private data
+ *
+ * Drivers should use this macro to define a &drm_info_list entry to provide a
+ * debugfs file for dumping the GPU VA space regions and mappings.
+ *
+ * For each DRM GPU VA space drivers should call drm_debugfs_gpuva_info() from
+ * their @show callback.
+ */
+#define DRM_DEBUGFS_GPUVA_INFO(show, data) {"gpuvas", show, DRIVER_GEM_GPUVA, data}
+
 /**
  * struct drm_info_list - debugfs info list entry
  *
@@ -134,6 +150,8 @@ void drm_debugfs_add_file(struct drm_device *dev, const char *name,
 
 void drm_debugfs_add_files(struct drm_device *dev,
 			   const struct drm_debugfs_info *files, int count);
+int drm_debugfs_gpuva_info(struct seq_file *m,
+			   struct drm_gpuva_manager *mgr);
 #else
 static inline void drm_debugfs_create_files(const struct drm_info_list *files,
 					    int count, struct dentry *root,
@@ -155,6 +173,12 @@ static inline void drm_debugfs_add_files(struct drm_device *dev,
 					 const struct drm_debugfs_info *files,
 					 int count)
 {}
+
+static inline int drm_debugfs_gpuva_info(struct seq_file *m,
+					 struct drm_gpuva_manager *mgr)
+{
+	return 0;
+}
 #endif
 
 #endif /* _DRM_DEBUGFS_H_ */
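
As a rough illustration of the hook above (assuming the usual &drm_info_node
plumbing; the my_* names and the global my_vm are placeholders), a driver
could expose its VA space like this:

static int my_gpuvas_show(struct seq_file *m, void *data)
{
	struct drm_info_node *node = m->private;
	struct drm_gpuva_manager *mgr = node->info_ent->data;

	return drm_debugfs_gpuva_info(m, mgr);
}

static const struct drm_info_list my_debugfs_list[] = {
	DRM_DEBUGFS_GPUVA_INFO(my_gpuvas_show, &my_vm.mgr),
};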
diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
index b419c59c4bef..9e2ec2da0685 100644
--- a/include/drm/drm_drv.h
+++ b/include/drm/drm_drv.h
@@ -105,6 +105,13 @@ enum drm_driver_feature {
 	 */
 	DRIVER_COMPUTE_ACCEL            = BIT(7),
 
+	/**
+	 * @DRIVER_GEM_GPUVA:
+	 *
+	 * Driver supports user defined GPU VA bindings for GEM objects.
+	 */
+	DRIVER_GEM_GPUVA		= BIT(8),
+
 	/* IMPORTANT: Below are all the legacy flags, add new ones above. */
 
 	/**
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 7bd8e2bbbb36..55a5dabf548b 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -36,6 +36,8 @@
 
 #include <linux/kref.h>
 #include <linux/dma-resv.h>
+#include <linux/list.h>
+#include <linux/mutex.h>
 
 #include <drm/drm_vma_manager.h>
 
@@ -347,6 +349,17 @@ struct drm_gem_object {
 	 */
 	struct dma_resv _resv;
 
+	/**
+	 * @gpuva:
+	 *
+	 * Provides the list and list mutex of GPU VAs attached to this
+	 * GEM object.
+	 */
+	struct {
+		struct list_head list;
+		struct mutex mutex;
+	} gpuva;
+
 	/**
 	 * @funcs:
 	 *
@@ -493,4 +506,66 @@ unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru,
 
 int drm_gem_evict(struct drm_gem_object *obj);
 
+/**
+ * drm_gem_gpuva_init - initialize the gpuva list of a GEM object
+ * @obj: the &drm_gem_object
+ *
+ * This initializes the &drm_gem_object's &drm_gpuva list and the mutex
+ * protecting it.
+ *
+ * Calling this function is only necessary for drivers intending to support the
+ * &drm_driver_feature DRIVER_GEM_GPUVA.
+ */
+static inline void drm_gem_gpuva_init(struct drm_gem_object *obj)
+{
+	INIT_LIST_HEAD(&obj->gpuva.list);
+	mutex_init(&obj->gpuva.mutex);
+}
+
+/**
+ * drm_gem_gpuva_lock - lock the GEM's gpuva list mutex
+ * @obj: the &drm_gem_object
+ *
+ * This locks the mutex protecting the &drm_gem_object's &drm_gpuva list.
+ */
+static inline void drm_gem_gpuva_lock(struct drm_gem_object *obj)
+{
+	mutex_lock(&obj->gpuva.mutex);
+}
+
+/**
+ * drm_gem_gpuva_unlock - unlock the GEM's gpuva list mutex
+ * @obj: the &drm_gem_object
+ *
+ * This unlocks the mutex protecting the &drm_gem_object's &drm_gpuva list.
+ */
+static inline void drm_gem_gpuva_unlock(struct drm_gem_object *obj)
+{
+	mutex_unlock(&obj->gpuva.mutex);
+}
+
+/**
+ * drm_gem_for_each_gpuva - iterator to walk over a list of gpuvas
+ * @entry: &drm_gpuva structure to assign to in each iteration step
+ * @obj: the &drm_gem_object the &drm_gpuvas to walk are associated with
+ *
+ * This iterator walks over all &drm_gpuva structures associated with the
+ * &drm_gem_object.
+ */
+#define drm_gem_for_each_gpuva(entry, obj) \
+	list_for_each_entry(entry, &obj->gpuva.list, head)
+
+/**
+ * drm_gem_for_each_gpuva_safe - iterator to safely walk over a list of gpuvas
+ * @entry: &drm_gpuva structure to assign to in each iteration step
+ * @next: &next &drm_gpuva to store the next step
+ * @obj: the &drm_gem_object the &drm_gpuvas to walk are associated with
+ *
+ * This iterator walks over all &drm_gpuva structures associated with the
+ * &drm_gem_object. It is implemented with list_for_each_entry_safe(), hence
+ * it is safe against removal of elements.
+ */
+#define drm_gem_for_each_gpuva_safe(entry, next, obj) \
+	list_for_each_entry_safe(entry, next, &obj->gpuva.list, head)
+
 #endif /* __DRM_GEM_H__ */
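
A minimal sketch of walking the per-GEM list these helpers maintain (purely
illustrative):

	struct drm_gpuva *va;

	drm_gem_gpuva_lock(obj);
	drm_gem_for_each_gpuva(va, obj)
		pr_debug("VA [0x%llx + 0x%llx] maps this GEM at offset 0x%llx\n",
			 va->va.addr, va->va.range, va->gem.offset);
	drm_gem_gpuva_unlock(obj);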
diff --git a/include/drm/drm_gpuva_mgr.h b/include/drm/drm_gpuva_mgr.h
new file mode 100644
index 000000000000..e42506ed7b14
--- /dev/null
+++ b/include/drm/drm_gpuva_mgr.h
@@ -0,0 +1,734 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __DRM_GPUVA_MGR_H__
+#define __DRM_GPUVA_MGR_H__
+
+/*
+ * Copyright (c) 2022 Red Hat.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include <linux/maple_tree.h>
+#include <linux/mm.h>
+#include <linux/rbtree.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+
+struct drm_gpuva_manager;
+struct drm_gpuva_fn_ops;
+struct drm_gpuva_op;
+
+/**
+ * struct drm_gpuva_region - structure to track a portion of GPU VA space
+ *
+ * This structure represents a portion of a GPUs VA space and is associated
+ * with a &drm_gpuva_manager.
+ *
+ * GPU VA mappings, represented by &drm_gpuva objects, are restricted to be
+ * placed within a &drm_gpuva_region.
+ */
+struct drm_gpuva_region {
+	/**
+	 * @mgr: the &drm_gpuva_manager this object is associated with
+	 */
+	struct drm_gpuva_manager *mgr;
+
+	/**
+	 * @va: structure containing the address and range of the &drm_gpuva_region
+	 */
+	struct {
+		/**
+		 * @addr: the start address
+		 */
+		u64 addr;
+
+		/**
+		 * @range: the range
+		 */
+		u64 range;
+	} va;
+
+	/**
+	 * @sparse: indicates whether this region is sparse
+	 */
+	bool sparse;
+};
+
+int drm_gpuva_region_insert(struct drm_gpuva_manager *mgr,
+			    struct drm_gpuva_region *reg);
+void drm_gpuva_region_remove(struct drm_gpuva_region *reg);
+
+bool
+drm_gpuva_region_empty(struct drm_gpuva_region *reg);
+
+struct drm_gpuva_region *
+drm_gpuva_region_find(struct drm_gpuva_manager *mgr,
+		      u64 addr, u64 range);
+struct drm_gpuva_region *
+drm_gpuva_region_find_first(struct drm_gpuva_manager *mgr,
+			    u64 addr, u64 range);
+
+/**
+ * enum drm_gpuva_flags - flags for struct drm_gpuva
+ */
+enum drm_gpuva_flags {
+	/**
+	 * @DRM_GPUVA_EVICTED:
+	 *
+	 * Flag indicating that the &drm_gpuva's backing GEM is evicted.
+	 */
+	DRM_GPUVA_EVICTED = (1 << 0),
+
+	/**
+	 * @DRM_GPUVA_USERBITS: user defined bits
+	 */
+	DRM_GPUVA_USERBITS = (1 << 1),
+};
+
+/**
+ * struct drm_gpuva - structure to track a GPU VA mapping
+ *
+ * This structure represents a GPU VA mapping and is associated with a
+ * &drm_gpuva_manager.
+ *
+ * Typically, this structure is embedded in bigger driver structures.
+ */
+struct drm_gpuva {
+	/**
+	 * @mgr: the &drm_gpuva_manager this object is associated with
+	 */
+	struct drm_gpuva_manager *mgr;
+
+	/**
+	 * @region: the &drm_gpuva_region the &drm_gpuva is mapped in
+	 */
+	struct drm_gpuva_region *region;
+
+	/**
+	 * @head: the &list_head to attach this object to a &drm_gem_object
+	 */
+	struct list_head head;
+
+	/**
+	 * @flags: the &drm_gpuva_flags for this mapping
+	 */
+	enum drm_gpuva_flags flags;
+
+	/**
+	 * @va: structure containing the address and range of the &drm_gpuva
+	 */
+	struct {
+		/**
+		 * @addr: the start address
+		 */
+		u64 addr;
+
+		/**
+		 * @range: the range
+		 */
+		u64 range;
+	} va;
+
+	/**
+	 * @gem: structure containing the &drm_gem_object and its offset
+	 */
+	struct {
+		/**
+		 * @offset: the offset within the &drm_gem_object
+		 */
+		u64 offset;
+
+		/**
+		 * @obj: the mapped &drm_gem_object
+		 */
+		struct drm_gem_object *obj;
+	} gem;
+};
+
+void drm_gpuva_link(struct drm_gpuva *va);
+void drm_gpuva_unlink(struct drm_gpuva *va);
+
+int drm_gpuva_insert(struct drm_gpuva_manager *mgr,
+		     struct drm_gpuva *va);
+void drm_gpuva_remove(struct drm_gpuva *va);
+
+struct drm_gpuva *drm_gpuva_find(struct drm_gpuva_manager *mgr,
+				 u64 addr, u64 range);
+struct drm_gpuva *drm_gpuva_find_first(struct drm_gpuva_manager *mgr,
+				       u64 addr, u64 range);
+struct drm_gpuva *drm_gpuva_find_prev(struct drm_gpuva_manager *mgr, u64 start);
+struct drm_gpuva *drm_gpuva_find_next(struct drm_gpuva_manager *mgr, u64 end);
+
+/**
+ * drm_gpuva_evict - sets whether the backing GEM of this &drm_gpuva is evicted
+ * @va: the &drm_gpuva to set the evict flag for
+ * @evict: indicates whether the &drm_gpuva is evicted
+ */
+static inline void drm_gpuva_evict(struct drm_gpuva *va, bool evict)
+{
+	if (evict)
+		va->flags |= DRM_GPUVA_EVICTED;
+	else
+		va->flags &= ~DRM_GPUVA_EVICTED;
+}
+
+/**
+ * drm_gpuva_evicted - indicates whether the backing BO of this &drm_gpuva
+ * is evicted
+ * @va: the &drm_gpuva to check
+ */
+static inline bool drm_gpuva_evicted(struct drm_gpuva *va)
+{
+	return va->flags & DRM_GPUVA_EVICTED;
+}
+
+/**
+ * enum drm_gpuva_mgr_flags - the feature flags for the &drm_gpuva_manager
+ */
+enum drm_gpuva_mgr_flags {
+	/**
+	 * @DRM_GPUVA_MANAGER_REGIONS:
+	 *
+	 * Enable the &drm_gpuva_manager to separately track &drm_gpuva_regions.
+	 *
+	 * &drm_gpuva_regions represent a reserved portion of VA space drivers
+	 * can create mappings in. If regions are enabled, &drm_gpuvas can only
+	 * be created within an existing &drm_gpuva_region, and merge
+	 * operations never indicate merging over region boundaries.
+	 */
+	DRM_GPUVA_MANAGER_REGIONS = (1 << 0),
+};
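
With DRM_GPUVA_MANAGER_REGIONS enabled, a driver first carves out a region and
only then creates mappings inside it; a minimal sketch, with vm->mgr as a
placeholder embedding:

	struct drm_gpuva_region *reg;
	int err;

	reg = kzalloc(sizeof(*reg), GFP_KERNEL);
	if (!reg)
		return -ENOMEM;

	reg->va.addr = addr;
	reg->va.range = range;
	reg->sparse = true;	/* e.g. a VK sparse reservation */

	err = drm_gpuva_region_insert(&vm->mgr, reg);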
+
+/**
+ * struct drm_gpuva_manager - DRM GPU VA Manager
+ *
+ * The DRM GPU VA Manager keeps track of a GPU's virtual address space by using
+ * &maple_tree structures. Typically, this structure is embedded in bigger
+ * driver structures.
+ *
+ * Drivers can pass addresses and ranges in an arbitrary unit, e.g. bytes or
+ * pages.
+ *
+ * There should be one manager instance per GPU virtual address space.
+ */
+struct drm_gpuva_manager {
+	/**
+	 * @name: the name of the DRM GPU VA space
+	 */
+	const char *name;
+
+	/**
+	 * @mm_start: start of the VA space
+	 */
+	u64 mm_start;
+
+	/**
+	 * @mm_range: length of the VA space
+	 */
+	u64 mm_range;
+
+	/**
+	 * @region_mt: the &maple_tree to track GPU VA regions
+	 */
+	struct maple_tree region_mt;
+
+	/**
+	 * @va_mt: the &maple_tree to track GPU VA mappings
+	 */
+	struct maple_tree va_mt;
+
+	/**
+	 * @kernel_alloc_region:
+	 *
+	 * &drm_gpuva_region representing the address space cutout reserved for
+	 * the kernel
+	 */
+	struct drm_gpuva_region kernel_alloc_region;
+
+	/**
+	 * @ops: &drm_gpuva_fn_ops providing the split/merge steps to drivers
+	 */
+	struct drm_gpuva_fn_ops *ops;
+
+	/**
+	 * @flags: the feature flags of the &drm_gpuva_manager
+	 */
+	enum drm_gpuva_mgr_flags flags;
+};
+
+void drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
+			    const char *name,
+			    u64 start_offset, u64 range,
+			    u64 reserve_offset, u64 reserve_range,
+			    struct drm_gpuva_fn_ops *ops,
+			    enum drm_gpuva_mgr_flags flags);
+void drm_gpuva_manager_destroy(struct drm_gpuva_manager *mgr);
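
As the struct documentation notes, the manager is typically embedded in a
bigger driver structure; a rough initialization sketch, reusing the
my_gpuva_ops placeholder from the earlier sketch (the 48-bit VA size and the
2 MiB kernel cutout are arbitrary example values):

struct my_vm {
	struct drm_gpuva_manager mgr;
	/* driver-specific state */
};

static void my_vm_init(struct my_vm *vm)
{
	drm_gpuva_manager_init(&vm->mgr, "example-vm",
			       0, 1ull << 48,	/* managed VA space */
			       0, SZ_2M,	/* reserved kernel cutout */
			       &my_gpuva_ops,
			       DRM_GPUVA_MANAGER_REGIONS);
}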
+
+/**
+ * struct drm_gpuva_iterator - iterator for walking the internal (maple) tree
+ */
+struct drm_gpuva_iterator {
+	/**
+	 * @mas: the maple tree iterator (maple advanced state)
+	 */
+	struct ma_state mas;
+
+	/**
+	 * @mgr: the &drm_gpuva_manager to iterate
+	 */
+	struct drm_gpuva_manager *mgr;
+
+	union {
+		/**
+		 * @va: the current &drm_gpuva entry
+		 */
+		struct drm_gpuva *va;
+
+		/**
+		 * @reg: the current &drm_gpuva_region entry
+		 */
+		struct drm_gpuva_region *reg;
+
+		/**
+		 * @entry: the current entry
+		 */
+		void *entry;
+	};
+};
+
+void drm_gpuva_iter_remove(struct drm_gpuva_iterator *it);
+int drm_gpuva_iter_va_replace(struct drm_gpuva_iterator *it,
+			      struct drm_gpuva *va);
+int drm_gpuva_iter_region_replace(struct drm_gpuva_iterator *it,
+				  struct drm_gpuva_region *reg);
+
+static inline bool
+drm_gpuva_iter_find(struct drm_gpuva_iterator *it, unsigned long max)
+{
+	mas_lock(&it->mas);
+	it->entry = mas_find(&it->mas, max);
+	mas_unlock(&it->mas);
+
+	return !!it->entry;
+}
+
+/**
+ * DRM_GPUVA_ITER - create an iterator structure to iterate the &drm_gpuva tree
+ * @name: the name of the &drm_gpuva_iterator to create
+ * @mgr__: the &drm_gpuva_manager to iterate
+ * @start: starting offset, the first entry will overlap this
+ */
+#define DRM_GPUVA_ITER(name, mgr__, start)				\
+	struct drm_gpuva_iterator name = {				\
+		.mas = MA_STATE_INIT(&(mgr__)->va_mt, start, 0),	\
+		.mgr = mgr__,						\
+		.va = NULL,						\
+	}
+
+/**
+ * DRM_GPUVA_REGION_ITER - create an iterator structure to iterate the
+ * &drm_gpuva_region tree
+ * @name: the name of the &drm_gpuva_iterator to create
+ * @mgr__: the &drm_gpuva_manager to iterate
+ * @start: starting offset, the first entry will overlap this
+ */
+#define DRM_GPUVA_REGION_ITER(name, mgr__, start)			\
+	struct drm_gpuva_iterator name = {				\
+		.mas = MA_STATE_INIT(&(mgr__)->region_mt, start, 0),	\
+		.mgr = mgr__,						\
+		.reg = NULL,						\
+	}
+
+/**
+ * drm_gpuva_iter_for_each_range - iterator to walk over a range of entries
+ * @it__: &drm_gpuva_iterator structure to assign to in each iteration step
+ * @end__: ending offset, the last entry will start before this (but may overlap)
+ *
+ * This function can be used to iterate both &drm_gpuva objects and
+ * &drm_gpuva_region objects.
+ *
+ * It is safe against the removal of elements using &drm_gpuva_iter_remove,
+ * however it is not safe against the removal of elements using
+ * &drm_gpuva_remove and &drm_gpuva_region_remove.
+ */
+#define drm_gpuva_iter_for_each_range(it__, end__) \
+	while (drm_gpuva_iter_find(&(it__), (end__) - 1))
+
+/**
+ * drm_gpuva_iter_for_each - iterator to walk over all existing entries
+ * @it__: &drm_gpuva_iterator structure to assign to in each iteration step
+ *
+ * This function can be used to iterate both &drm_gpuva objects and
+ * &drm_gpuva_region objects.
+ *
+ * In order to walk over all potentially existing entries, the
+ * &drm_gpuva_iterator must be initialized to start at
+ * &drm_gpuva_manager->mm_start or simply 0.
+ *
+ * It is safe against the removal of elements using &drm_gpuva_iter_remove,
+ * however it is not safe against the removal of elements using
+ * &drm_gpuva_remove and &drm_gpuva_region_remove.
+ */
+#define drm_gpuva_iter_for_each(it__) \
+	drm_gpuva_iter_for_each_range(it__, (it__).mgr->mm_start + (it__).mgr->mm_range)
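
For instance, dumping every mapping tracked by a manager (vm->mgr being a
placeholder embedding) might look like:

	DRM_GPUVA_ITER(it, &vm->mgr, 0);

	drm_gpuva_iter_for_each(it) {
		struct drm_gpuva *va = it.va;

		/* inspect va here, or drop it with drm_gpuva_iter_remove(&it) */
	}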
+
+/**
+ * enum drm_gpuva_op_type - GPU VA operation type
+ *
+ * Operations to alter the GPU VA mappings tracked by the &drm_gpuva_manager.
+ */
+enum drm_gpuva_op_type {
+	/**
+	 * @DRM_GPUVA_OP_MAP: the map op type
+	 */
+	DRM_GPUVA_OP_MAP,
+
+	/**
+	 * @DRM_GPUVA_OP_REMAP: the remap op type
+	 */
+	DRM_GPUVA_OP_REMAP,
+
+	/**
+	 * @DRM_GPUVA_OP_UNMAP: the unmap op type
+	 */
+	DRM_GPUVA_OP_UNMAP,
+
+	/**
+	 * @DRM_GPUVA_OP_PREFETCH: the prefetch op type
+	 */
+	DRM_GPUVA_OP_PREFETCH,
+};
+
+/**
+ * struct drm_gpuva_op_map - GPU VA map operation
+ *
+ * This structure represents a single map operation generated by the
+ * DRM GPU VA manager.
+ */
+struct drm_gpuva_op_map {
+	/**
+	 * @va: structure containing address and range of a map
+	 * operation
+	 */
+	struct {
+		/**
+		 * @addr: the base address of the new mapping
+		 */
+		u64 addr;
+
+		/**
+		 * @range: the range of the new mapping
+		 */
+		u64 range;
+	} va;
+
+	/**
+	 * @gem: structure containing the &drm_gem_object and its offset
+	 */
+	struct {
+		/**
+		 * @offset: the offset within the &drm_gem_object
+		 */
+		u64 offset;
+
+		/**
+		 * @obj: the &drm_gem_object to map
+		 */
+		struct drm_gem_object *obj;
+	} gem;
+};
+
+/**
+ * struct drm_gpuva_op_unmap - GPU VA unmap operation
+ *
+ * This structure represents a single unmap operation generated by the
+ * DRM GPU VA manager.
+ */
+struct drm_gpuva_op_unmap {
+	/**
+	 * @va: the &drm_gpuva to unmap
+	 */
+	struct drm_gpuva *va;
+
+	/**
+	 * @keep:
+	 *
+	 * Indicates whether this &drm_gpuva is physically contiguous with the
+	 * original mapping request.
+	 *
+	 * Optionally, if &keep is set, drivers may keep the actual page table
+	 * mappings for this &drm_gpuva, only adding the missing page table
+	 * entries, and update the &drm_gpuva_manager accordingly.
+	 */
+	bool keep;
+};
+
+/**
+ * struct drm_gpuva_op_remap - GPU VA remap operation
+ *
+ * This represents a single remap operation generated by the DRM GPU VA manager.
+ *
+ * A remap operation is generated when an existing GPU VA mapping is split up
+ * by inserting a new GPU VA mapping or by partially unmapping existent
+ * mapping(s), hence it consists of a maximum of two map and one unmap
+ * operation.
+ *
+ * The @unmap operation takes care of removing the original existing mapping.
+ * @prev is used to remap the preceding part, @next the subsequent part.
+ *
+ * If either a new mapping's start address is aligned with the start address
+ * of the old mapping or the new mapping's end address is aligned with the
+ * end address of the old mapping, either @prev or @next is NULL.
+ *
+ * Note, the reason for a dedicated remap operation, rather than arbitrary
+ * unmap and map operations, is to give drivers the chance of extracting driver
+ * specific data for creating the new mappings from the unmap operation's
+ * &drm_gpuva structure which typically is embedded in larger driver specific
+ * structures.
+ */
+struct drm_gpuva_op_remap {
+	/**
+	 * @prev: the preceding part of a split mapping
+	 */
+	struct drm_gpuva_op_map *prev;
+
+	/**
+	 * @next: the subsequent part of a split mapping
+	 */
+	struct drm_gpuva_op_map *next;
+
+	/**
+	 * @unmap: the unmap operation for the original existing mapping
+	 */
+	struct drm_gpuva_op_unmap *unmap;
+};
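
Consuming a remap op therefore boils down to reusing the GEM backing of the VA
being unmapped for the surviving end(s); roughly, with my_rebind()/my_unbind()
as placeholder driver helpers:

	struct drm_gpuva_op_remap *r = &op->remap;
	struct drm_gpuva *old = r->unmap->va;

	if (r->prev)		/* part in front of the new mapping */
		my_rebind(vm, old, r->prev);
	if (r->next)		/* part behind the new mapping */
		my_rebind(vm, old, r->next);
	my_unbind(vm, old, r->unmap->keep);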
+
+/**
+ * struct drm_gpuva_op_prefetch - GPU VA prefetch operation
+ *
+ * This structure represents a single prefetch operation generated by the
+ * DRM GPU VA manager.
+ */
+struct drm_gpuva_op_prefetch {
+	/**
+	 * @va: the &drm_gpuva to prefetch
+	 */
+	struct drm_gpuva *va;
+};
+
+/**
+ * struct drm_gpuva_op - GPU VA operation
+ *
+ * This structure represents a single generic operation.
+ *
+ * The particular type of the operation is defined by @op.
+ */
+struct drm_gpuva_op {
+	/**
+	 * @entry:
+	 *
+	 * The &list_head used to distribute instances of this struct within
+	 * &drm_gpuva_ops.
+	 */
+	struct list_head entry;
+
+	/**
+	 * @op: the type of the operation
+	 */
+	enum drm_gpuva_op_type op;
+
+	union {
+		/**
+		 * @map: the map operation
+		 */
+		struct drm_gpuva_op_map map;
+
+		/**
+		 * @remap: the remap operation
+		 */
+		struct drm_gpuva_op_remap remap;
+
+		/**
+		 * @unmap: the unmap operation
+		 */
+		struct drm_gpuva_op_unmap unmap;
+
+		/**
+		 * @prefetch: the prefetch operation
+		 */
+		struct drm_gpuva_op_prefetch prefetch;
+	};
+};
+
+/**
+ * struct drm_gpuva_ops - wraps a list of &drm_gpuva_op
+ */
+struct drm_gpuva_ops {
+	/**
+	 * @list: the &list_head
+	 */
+	struct list_head list;
+};
+
+/**
+ * drm_gpuva_for_each_op - iterator to walk over &drm_gpuva_ops
+ * @op: &drm_gpuva_op to assign in each iteration step
+ * @ops: &drm_gpuva_ops to walk
+ *
+ * This iterator walks over all ops within a given list of operations.
+ */
+#define drm_gpuva_for_each_op(op, ops) list_for_each_entry(op, &(ops)->list, entry)
+
+/**
+ * drm_gpuva_for_each_op_safe - iterator to safely walk over &drm_gpuva_ops
+ * @op: &drm_gpuva_op to assign in each iteration step
+ * @next: &next &drm_gpuva_op to store the next step
+ * @ops: &drm_gpuva_ops to walk
+ *
+ * This iterator walks over all ops within a given list of operations. It is
+ * implemented with list_for_each_entry_safe(), hence it is safe against the
+ * removal of elements.
+ */
+#define drm_gpuva_for_each_op_safe(op, next, ops) \
+	list_for_each_entry_safe(op, next, &(ops)->list, entry)
+
+/**
+ * drm_gpuva_for_each_op_from_reverse - iterate backwards from the given point
+ * @op: &drm_gpuva_op to assign in each iteration step
+ * @ops: &drm_gpuva_ops to walk
+ *
+ * This iterator walks over all ops within a given list of operations beginning
+ * from the given operation in reverse order.
+ */
+#define drm_gpuva_for_each_op_from_reverse(op, ops) \
+	list_for_each_entry_from_reverse(op, &(ops)->list, entry)
+
+/**
+ * drm_gpuva_first_op - returns the first &drm_gpuva_op from &drm_gpuva_ops
+ * @ops: the &drm_gpuva_ops to get the first &drm_gpuva_op from
+ */
+#define drm_gpuva_first_op(ops) \
+	list_first_entry(&(ops)->list, struct drm_gpuva_op, entry)
+
+/**
+ * drm_gpuva_last_op - returns the last &drm_gpuva_op from &drm_gpuva_ops
+ * @ops: the &drm_gpuva_ops to get the last &drm_gpuva_op from
+ */
+#define drm_gpuva_last_op(ops) \
+	list_last_entry(&(ops)->list, struct drm_gpuva_op, entry)
+
+/**
+ * drm_gpuva_prev_op - previous &drm_gpuva_op in the list
+ * @op: the current &drm_gpuva_op
+ */
+#define drm_gpuva_prev_op(op) list_prev_entry(op, entry)
+
+/**
+ * drm_gpuva_next_op - next &drm_gpuva_op in the list
+ * @op: the current &drm_gpuva_op
+ */
+#define drm_gpuva_next_op(op) list_next_entry(op, entry)
+
+struct drm_gpuva_ops *
+drm_gpuva_sm_map_ops_create(struct drm_gpuva_manager *mgr,
+			    u64 addr, u64 range,
+			    struct drm_gem_object *obj, u64 offset);
+struct drm_gpuva_ops *
+drm_gpuva_sm_unmap_ops_create(struct drm_gpuva_manager *mgr,
+			      u64 addr, u64 range);
+
+struct drm_gpuva_ops *
+drm_gpuva_prefetch_ops_create(struct drm_gpuva_manager *mgr,
+				 u64 addr, u64 range);
+
+struct drm_gpuva_ops *
+drm_gpuva_gem_unmap_ops_create(struct drm_gpuva_manager *mgr,
+			       struct drm_gem_object *obj);
+
+void drm_gpuva_ops_free(struct drm_gpuva_manager *mgr,
+			struct drm_gpuva_ops *ops);
+
+/**
+ * struct drm_gpuva_fn_ops - callbacks for split/merge steps
+ *
+ * This structure defines the callbacks used by &drm_gpuva_sm_map and
+ * &drm_gpuva_sm_unmap to provide the split/merge steps for map and unmap
+ * operations to drivers.
+ */
+struct drm_gpuva_fn_ops {
+	/**
+	 * @op_alloc: called when the &drm_gpuva_manager allocates
+	 * a struct drm_gpuva_op
+	 *
+	 * Some drivers may want to embed struct drm_gpuva_op into driver
+	 * specific structures. By implementing this callback drivers can
+	 * allocate memory accordingly.
+	 *
+	 * This callback is optional.
+	 */
+	struct drm_gpuva_op *(*op_alloc)(void);
+
+	/**
+	 * @op_free: called when the &drm_gpuva_manager frees a
+	 * struct drm_gpuva_op
+	 *
+	 * Some drivers may want to embed struct drm_gpuva_op into driver
+	 * specific structures. By implementing this callback drivers can
+	 * free the previously allocated memory accordingly.
+	 *
+	 * This callback is optional.
+	 */
+	void (*op_free)(struct drm_gpuva_op *op);
+
+	/**
+	 * @sm_map_step: called from &drm_gpuva_sm_map providing the split and
+	 * merge steps
+	 *
+	 * This callback provides a single split / merge step or, if no split
+	 * and merge is indicated, the original map operation.
+	 *
+	 * The &priv pointer is equal to the one drivers pass to
+	 * &drm_gpuva_sm_map.
+	 */
+	int (*sm_map_step)(struct drm_gpuva_op *op, void *priv);
+
+	/**
+	 * @sm_unmap_step: called from &drm_gpuva_sm_unmap providing the split
+	 * steps
+	 *
+	 * This callback provides a single split step or, if no split is
+	 * indicated, the plain unmap operations of the corresponding unmap
+	 * range originally passed to &drm_gpuva_sm_unmap.
+	 *
+	 * The &priv pointer is equal to the one drivers pass to
+	 * &drm_gpuva_sm_unmap.
+	 */
+	int (*sm_unmap_step)(struct drm_gpuva_op *op, void *priv);
+};
+
+int drm_gpuva_sm_map(struct drm_gpuva_manager *mgr, void *priv,
+		     u64 addr, u64 range,
+		     struct drm_gem_object *obj, u64 offset);
+
+int drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr, void *priv,
+		       u64 addr, u64 range);
+
+#endif /* __DRM_GPUVA_MGR_H__ */
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [Intel-xe] [PATCH v5 4/8] drm/xe: Port Xe to GPUVA
  2023-04-04  1:42 [Intel-xe] [PATCH v5 0/8] Port Xe to use GPUVA and implement NULL VM binds Matthew Brost
                   ` (2 preceding siblings ...)
  2023-04-04  1:42 ` [Intel-xe] [PATCH v5 3/8] drm: manager to keep track of GPUs VA mappings Matthew Brost
@ 2023-04-04  1:42 ` Matthew Brost
  2023-04-21 12:52   ` Thomas Hellström
                     ` (2 more replies)
  2023-04-04  1:42 ` [Intel-xe] [PATCH v5 5/8] drm/xe: NULL binding implementation Matthew Brost
                   ` (8 subsequent siblings)
  12 siblings, 3 replies; 21+ messages in thread
From: Matthew Brost @ 2023-04-04  1:42 UTC (permalink / raw)
  To: intel-xe

Rather than open coding VM binds and VMA tracking, use the GPUVA
library. GPUVA provides a common infrastructure for VM binds to use mmap
/ munmap semantics and support for VK sparse bindings.

The concepts are:

1) xe_vm inherits from drm_gpuva_manager
2) xe_vma inherits from drm_gpuva
3) xe_vma_op inherits from drm_gpuva_op
4) VM bind operations (MAP, UNMAP, PREFETCH, UNMAP_ALL) call into the
GPUVA code to generate a VMA operations list which is parsed, committed,
and executed (see the embedding sketch below).
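
The inheritance in points 1-3 is plain structure embedding plus
container_of(); roughly (only the member name used further down in this diff
is shown):

struct xe_vma {
	struct drm_gpuva gpuva;		/* embedded base object */
	/* Xe specific fields follow */
};

static struct xe_vma *gpuva_to_vma(struct drm_gpuva *gpuva)
{
	return container_of(gpuva, struct xe_vma, gpuva);
}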

v2 (CI): Add break after default in case statement.

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_bo.c                  |   10 +-
 drivers/gpu/drm/xe/xe_device.c              |    2 +-
 drivers/gpu/drm/xe/xe_exec.c                |    2 +-
 drivers/gpu/drm/xe/xe_gt_pagefault.c        |   23 +-
 drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c |   14 +-
 drivers/gpu/drm/xe/xe_guc_ct.c              |    6 +-
 drivers/gpu/drm/xe/xe_migrate.c             |    8 +-
 drivers/gpu/drm/xe/xe_pt.c                  |  106 +-
 drivers/gpu/drm/xe/xe_trace.h               |   10 +-
 drivers/gpu/drm/xe/xe_vm.c                  | 1799 +++++++++----------
 drivers/gpu/drm/xe/xe_vm.h                  |   66 +-
 drivers/gpu/drm/xe/xe_vm_madvise.c          |   87 +-
 drivers/gpu/drm/xe/xe_vm_types.h            |  165 +-
 13 files changed, 1126 insertions(+), 1172 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 5460e6fe3c1f..3a482c61c3ec 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -391,7 +391,8 @@ static int xe_bo_trigger_rebind(struct xe_device *xe, struct xe_bo *bo,
 {
 	struct dma_resv_iter cursor;
 	struct dma_fence *fence;
-	struct xe_vma *vma;
+	struct drm_gpuva *gpuva;
+	struct drm_gem_object *obj = &bo->ttm.base;
 	int ret = 0;
 
 	dma_resv_assert_held(bo->ttm.base.resv);
@@ -404,8 +405,9 @@ static int xe_bo_trigger_rebind(struct xe_device *xe, struct xe_bo *bo,
 		dma_resv_iter_end(&cursor);
 	}
 
-	list_for_each_entry(vma, &bo->vmas, bo_link) {
-		struct xe_vm *vm = vma->vm;
+	drm_gem_for_each_gpuva(gpuva, obj) {
+		struct xe_vma *vma = gpuva_to_vma(gpuva);
+		struct xe_vm *vm = xe_vma_vm(vma);
 
 		trace_xe_vma_evict(vma);
 
@@ -430,10 +432,8 @@ static int xe_bo_trigger_rebind(struct xe_device *xe, struct xe_bo *bo,
 			} else {
 				ret = timeout;
 			}
-
 		} else {
 			bool vm_resv_locked = false;
-			struct xe_vm *vm = vma->vm;
 
 			/*
 			 * We need to put the vma on the vm's rebind_list,
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index a79f934e3d2d..d0d70adedba6 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -130,7 +130,7 @@ static struct drm_driver driver = {
 	.driver_features =
 	    DRIVER_GEM |
 	    DRIVER_RENDER | DRIVER_SYNCOBJ |
-	    DRIVER_SYNCOBJ_TIMELINE,
+	    DRIVER_SYNCOBJ_TIMELINE | DRIVER_GEM_GPUVA,
 	.open = xe_file_open,
 	.postclose = xe_file_close,
 
diff --git a/drivers/gpu/drm/xe/xe_exec.c b/drivers/gpu/drm/xe/xe_exec.c
index ea869f2452ef..214d82bc906b 100644
--- a/drivers/gpu/drm/xe/xe_exec.c
+++ b/drivers/gpu/drm/xe/xe_exec.c
@@ -118,7 +118,7 @@ static int xe_exec_begin(struct xe_engine *e, struct ww_acquire_ctx *ww,
 		if (xe_vma_is_userptr(vma))
 			continue;
 
-		err = xe_bo_validate(vma->bo, vm, false);
+		err = xe_bo_validate(xe_vma_bo(vma), vm, false);
 		if (err) {
 			xe_vm_unlock_dma_resv(vm, tv_onstack, *tv, ww, objs);
 			*tv = NULL;
diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c
index 1677640e1075..f7a066090a13 100644
--- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
@@ -75,9 +75,10 @@ static bool vma_is_valid(struct xe_gt *gt, struct xe_vma *vma)
 		!(BIT(gt->info.id) & vma->usm.gt_invalidated);
 }
 
-static bool vma_matches(struct xe_vma *vma, struct xe_vma *lookup)
+static bool vma_matches(struct xe_vma *vma, u64 page_addr)
 {
-	if (lookup->start > vma->end || lookup->end < vma->start)
+	if (page_addr > xe_vma_end(vma) - 1 ||
+	    page_addr + SZ_4K - 1 < xe_vma_start(vma))
 		return false;
 
 	return true;
@@ -90,16 +91,14 @@ static bool only_needs_bo_lock(struct xe_bo *bo)
 
 static struct xe_vma *lookup_vma(struct xe_vm *vm, u64 page_addr)
 {
-	struct xe_vma *vma = NULL, lookup;
+	struct xe_vma *vma = NULL;
 
-	lookup.start = page_addr;
-	lookup.end = lookup.start + SZ_4K - 1;
 	if (vm->usm.last_fault_vma) {   /* Fast lookup */
-		if (vma_matches(vm->usm.last_fault_vma, &lookup))
+		if (vma_matches(vm->usm.last_fault_vma, page_addr))
 			vma = vm->usm.last_fault_vma;
 	}
 	if (!vma)
-		vma = xe_vm_find_overlapping_vma(vm, &lookup);
+		vma = xe_vm_find_overlapping_vma(vm, page_addr, SZ_4K);
 
 	return vma;
 }
@@ -170,7 +169,7 @@ static int handle_pagefault(struct xe_gt *gt, struct pagefault *pf)
 	}
 
 	/* Lock VM and BOs dma-resv */
-	bo = vma->bo;
+	bo = xe_vma_bo(vma);
 	if (only_needs_bo_lock(bo)) {
 		/* This path ensures the BO's LRU is updated */
 		ret = xe_bo_lock(bo, &ww, xe->info.tile_count, false);
@@ -487,12 +486,8 @@ static struct xe_vma *get_acc_vma(struct xe_vm *vm, struct acc *acc)
 {
 	u64 page_va = acc->va_range_base + (ffs(acc->sub_granularity) - 1) *
 		sub_granularity_in_byte(acc->granularity);
-	struct xe_vma lookup;
-
-	lookup.start = page_va;
-	lookup.end = lookup.start + SZ_4K - 1;
 
-	return xe_vm_find_overlapping_vma(vm, &lookup);
+	return xe_vm_find_overlapping_vma(vm, page_va, SZ_4K);
 }
 
 static int handle_acc(struct xe_gt *gt, struct acc *acc)
@@ -536,7 +531,7 @@ static int handle_acc(struct xe_gt *gt, struct acc *acc)
 		goto unlock_vm;
 
 	/* Lock VM and BOs dma-resv */
-	bo = vma->bo;
+	bo = xe_vma_bo(vma);
 	if (only_needs_bo_lock(bo)) {
 		/* This path ensures the BO's LRU is updated */
 		ret = xe_bo_lock(bo, &ww, xe->info.tile_count, false);
diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
index f279e21300aa..155f37aaf31c 100644
--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
@@ -201,8 +201,8 @@ int xe_gt_tlb_invalidation_vma(struct xe_gt *gt,
 	if (!xe->info.has_range_tlb_invalidation) {
 		action[len++] = MAKE_INVAL_OP(XE_GUC_TLB_INVAL_FULL);
 	} else {
-		u64 start = vma->start;
-		u64 length = vma->end - vma->start + 1;
+		u64 start = xe_vma_start(vma);
+		u64 length = xe_vma_size(vma);
 		u64 align, end;
 
 		if (length < SZ_4K)
@@ -215,12 +215,12 @@ int xe_gt_tlb_invalidation_vma(struct xe_gt *gt,
 		 * address mask covering the required range.
 		 */
 		align = roundup_pow_of_two(length);
-		start = ALIGN_DOWN(vma->start, align);
-		end = ALIGN(vma->start + length, align);
+		start = ALIGN_DOWN(xe_vma_start(vma), align);
+		end = ALIGN(xe_vma_start(vma) + length, align);
 		length = align;
 		while (start + length < end) {
 			length <<= 1;
-			start = ALIGN_DOWN(vma->start, length);
+			start = ALIGN_DOWN(xe_vma_start(vma), length);
 		}
 
 		/*
@@ -229,7 +229,7 @@ int xe_gt_tlb_invalidation_vma(struct xe_gt *gt,
 		 */
 		if (length >= SZ_2M) {
 			length = max_t(u64, SZ_16M, length);
-			start = ALIGN_DOWN(vma->start, length);
+			start = ALIGN_DOWN(xe_vma_start(vma), length);
 		}
 
 		XE_BUG_ON(length < SZ_4K);
@@ -238,7 +238,7 @@ int xe_gt_tlb_invalidation_vma(struct xe_gt *gt,
 		XE_BUG_ON(!IS_ALIGNED(start, length));
 
 		action[len++] = MAKE_INVAL_OP(XE_GUC_TLB_INVAL_PAGE_SELECTIVE);
-		action[len++] = vma->vm->usm.asid;
+		action[len++] = xe_vma_vm(vma)->usm.asid;
 		action[len++] = lower_32_bits(start);
 		action[len++] = upper_32_bits(start);
 		action[len++] = ilog2(length) - ilog2(SZ_4K);
diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c
index 5e00b75d3ca2..e5ed9022a0a2 100644
--- a/drivers/gpu/drm/xe/xe_guc_ct.c
+++ b/drivers/gpu/drm/xe/xe_guc_ct.c
@@ -783,13 +783,13 @@ static int parse_g2h_response(struct xe_guc_ct *ct, u32 *msg, u32 len)
 	if (type == GUC_HXG_TYPE_RESPONSE_FAILURE) {
 		g2h_fence->fail = true;
 		g2h_fence->error =
-			FIELD_GET(GUC_HXG_FAILURE_MSG_0_ERROR, msg[0]);
+			FIELD_GET(GUC_HXG_FAILURE_MSG_0_ERROR, msg[1]);
 		g2h_fence->hint =
-			FIELD_GET(GUC_HXG_FAILURE_MSG_0_HINT, msg[0]);
+			FIELD_GET(GUC_HXG_FAILURE_MSG_0_HINT, msg[1]);
 	} else if (type == GUC_HXG_TYPE_NO_RESPONSE_RETRY) {
 		g2h_fence->retry = true;
 		g2h_fence->reason =
-			FIELD_GET(GUC_HXG_RETRY_MSG_0_REASON, msg[0]);
+			FIELD_GET(GUC_HXG_RETRY_MSG_0_REASON, msg[1]);
 	} else if (g2h_fence->response_buffer) {
 		g2h_fence->response_len = response_len;
 		memcpy(g2h_fence->response_buffer, msg + GUC_CTB_MSG_MIN_LEN,
diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
index e8978440c725..fee4c0028a2f 100644
--- a/drivers/gpu/drm/xe/xe_migrate.c
+++ b/drivers/gpu/drm/xe/xe_migrate.c
@@ -1049,8 +1049,10 @@ xe_migrate_update_pgtables_cpu(struct xe_migrate *m,
 		return ERR_PTR(-ETIME);
 
 	if (wait_vm && !dma_resv_test_signaled(&vm->resv,
-					       DMA_RESV_USAGE_BOOKKEEP))
+					       DMA_RESV_USAGE_BOOKKEEP)) {
+		vm_dbg(&vm->xe->drm, "wait on VM for munmap");
 		return ERR_PTR(-ETIME);
+	}
 
 	if (ops->pre_commit) {
 		err = ops->pre_commit(pt_update);
@@ -1138,7 +1140,8 @@ xe_migrate_update_pgtables(struct xe_migrate *m,
 	u64 addr;
 	int err = 0;
 	bool usm = !eng && xe->info.supports_usm;
-	bool first_munmap_rebind = vma && vma->first_munmap_rebind;
+	bool first_munmap_rebind = vma &&
+		vma->gpuva.flags & XE_VMA_FIRST_REBIND;
 
 	/* Use the CPU if no in syncs and engine is idle */
 	if (no_in_syncs(syncs, num_syncs) && (!eng || xe_engine_is_idle(eng))) {
@@ -1259,6 +1262,7 @@ xe_migrate_update_pgtables(struct xe_migrate *m,
 	 * trigger preempts before moving forward
 	 */
 	if (first_munmap_rebind) {
+		vm_dbg(&vm->xe->drm, "wait on first_munmap_rebind");
 		err = job_add_deps(job, &vm->resv,
 				   DMA_RESV_USAGE_BOOKKEEP);
 		if (err)
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index 6b2943efcdbc..37a1ce6f62a3 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -94,7 +94,7 @@ static dma_addr_t vma_addr(struct xe_vma *vma, u64 offset,
 				&cur);
 		return xe_res_dma(&cur) + offset;
 	} else {
-		return xe_bo_addr(vma->bo, offset, page_size, is_vram);
+		return xe_bo_addr(xe_vma_bo(vma), offset, page_size, is_vram);
 	}
 }
 
@@ -159,7 +159,7 @@ u64 gen8_pte_encode(struct xe_vma *vma, struct xe_bo *bo,
 
 	if (is_vram) {
 		pte |= GEN12_PPGTT_PTE_LM;
-		if (vma && vma->use_atomic_access_pte_bit)
+		if (vma && vma->gpuva.flags & XE_VMA_ATOMIC_PTE_BIT)
 			pte |= GEN12_USM_PPGTT_PTE_AE;
 	}
 
@@ -738,7 +738,7 @@ static int
 xe_pt_stage_bind(struct xe_gt *gt, struct xe_vma *vma,
 		 struct xe_vm_pgtable_update *entries, u32 *num_entries)
 {
-	struct xe_bo *bo = vma->bo;
+	struct xe_bo *bo = xe_vma_bo(vma);
 	bool is_vram = !xe_vma_is_userptr(vma) && bo && xe_bo_is_vram(bo);
 	struct xe_res_cursor curs;
 	struct xe_pt_stage_bind_walk xe_walk = {
@@ -747,22 +747,23 @@ xe_pt_stage_bind(struct xe_gt *gt, struct xe_vma *vma,
 			.shifts = xe_normal_pt_shifts,
 			.max_level = XE_PT_HIGHEST_LEVEL,
 		},
-		.vm = vma->vm,
+		.vm = xe_vma_vm(vma),
 		.gt = gt,
 		.curs = &curs,
-		.va_curs_start = vma->start,
-		.pte_flags = vma->pte_flags,
+		.va_curs_start = xe_vma_start(vma),
+		.pte_flags = xe_vma_read_only(vma) ? PTE_READ_ONLY : 0,
 		.wupd.entries = entries,
-		.needs_64K = (vma->vm->flags & XE_VM_FLAGS_64K) && is_vram,
+		.needs_64K = (xe_vma_vm(vma)->flags & XE_VM_FLAGS_64K) &&
+			is_vram,
 	};
-	struct xe_pt *pt = vma->vm->pt_root[gt->info.id];
+	struct xe_pt *pt = xe_vma_vm(vma)->pt_root[gt->info.id];
 	int ret;
 
 	if (is_vram) {
 		struct xe_gt *bo_gt = xe_bo_to_gt(bo);
 
 		xe_walk.default_pte = GEN12_PPGTT_PTE_LM;
-		if (vma && vma->use_atomic_access_pte_bit)
+		if (vma && vma->gpuva.flags & XE_VMA_ATOMIC_PTE_BIT)
 			xe_walk.default_pte |= GEN12_USM_PPGTT_PTE_AE;
 		xe_walk.dma_offset = bo_gt->mem.vram.io_start -
 			gt_to_xe(gt)->mem.vram.io_start;
@@ -778,17 +779,16 @@ xe_pt_stage_bind(struct xe_gt *gt, struct xe_vma *vma,
 
 	xe_bo_assert_held(bo);
 	if (xe_vma_is_userptr(vma))
-		xe_res_first_sg(vma->userptr.sg, 0, vma->end - vma->start + 1,
-				&curs);
+		xe_res_first_sg(vma->userptr.sg, 0, xe_vma_size(vma), &curs);
 	else if (xe_bo_is_vram(bo) || xe_bo_is_stolen(bo))
-		xe_res_first(bo->ttm.resource, vma->bo_offset,
-			     vma->end - vma->start + 1, &curs);
+		xe_res_first(bo->ttm.resource, xe_vma_bo_offset(vma),
+			     xe_vma_size(vma), &curs);
 	else
-		xe_res_first_sg(xe_bo_get_sg(bo), vma->bo_offset,
-				vma->end - vma->start + 1, &curs);
+		xe_res_first_sg(xe_bo_get_sg(bo), xe_vma_bo_offset(vma),
+				xe_vma_size(vma), &curs);
 
-	ret = drm_pt_walk_range(&pt->drm, pt->level, vma->start, vma->end + 1,
-				&xe_walk.drm);
+	ret = drm_pt_walk_range(&pt->drm, pt->level, xe_vma_start(vma),
+				xe_vma_end(vma), &xe_walk.drm);
 
 	*num_entries = xe_walk.wupd.num_used_entries;
 	return ret;
@@ -923,13 +923,13 @@ bool xe_pt_zap_ptes(struct xe_gt *gt, struct xe_vma *vma)
 		},
 		.gt = gt,
 	};
-	struct xe_pt *pt = vma->vm->pt_root[gt->info.id];
+	struct xe_pt *pt = xe_vma_vm(vma)->pt_root[gt->info.id];
 
 	if (!(vma->gt_present & BIT(gt->info.id)))
 		return false;
 
-	(void)drm_pt_walk_shared(&pt->drm, pt->level, vma->start, vma->end + 1,
-				 &xe_walk.drm);
+	(void)drm_pt_walk_shared(&pt->drm, pt->level, xe_vma_start(vma),
+				 xe_vma_end(vma), &xe_walk.drm);
 
 	return xe_walk.needs_invalidate;
 }
@@ -966,21 +966,21 @@ static void xe_pt_abort_bind(struct xe_vma *vma,
 			continue;
 
 		for (j = 0; j < entries[i].qwords; j++)
-			xe_pt_destroy(entries[i].pt_entries[j].pt, vma->vm->flags, NULL);
+			xe_pt_destroy(entries[i].pt_entries[j].pt, xe_vma_vm(vma)->flags, NULL);
 		kfree(entries[i].pt_entries);
 	}
 }
 
 static void xe_pt_commit_locks_assert(struct xe_vma *vma)
 {
-	struct xe_vm *vm = vma->vm;
+	struct xe_vm *vm = xe_vma_vm(vma);
 
 	lockdep_assert_held(&vm->lock);
 
 	if (xe_vma_is_userptr(vma))
 		lockdep_assert_held_read(&vm->userptr.notifier_lock);
 	else
-		dma_resv_assert_held(vma->bo->ttm.base.resv);
+		dma_resv_assert_held(xe_vma_bo(vma)->ttm.base.resv);
 
 	dma_resv_assert_held(&vm->resv);
 }
@@ -1013,7 +1013,7 @@ static void xe_pt_commit_bind(struct xe_vma *vma,
 
 			if (xe_pt_entry(pt_dir, j_))
 				xe_pt_destroy(xe_pt_entry(pt_dir, j_),
-					      vma->vm->flags, deferred);
+					      xe_vma_vm(vma)->flags, deferred);
 
 			pt_dir->dir.entries[j_] = &newpte->drm;
 		}
@@ -1074,7 +1074,7 @@ static int xe_pt_userptr_inject_eagain(struct xe_vma *vma)
 	static u32 count;
 
 	if (count++ % divisor == divisor - 1) {
-		struct xe_vm *vm = vma->vm;
+		struct xe_vm *vm = xe_vma_vm(vma);
 
 		vma->userptr.divisor = divisor << 1;
 		spin_lock(&vm->userptr.invalidated_lock);
@@ -1117,7 +1117,7 @@ static int xe_pt_userptr_pre_commit(struct xe_migrate_pt_update *pt_update)
 		container_of(pt_update, typeof(*userptr_update), base);
 	struct xe_vma *vma = pt_update->vma;
 	unsigned long notifier_seq = vma->userptr.notifier_seq;
-	struct xe_vm *vm = vma->vm;
+	struct xe_vm *vm = xe_vma_vm(vma);
 
 	userptr_update->locked = false;
 
@@ -1288,20 +1288,20 @@ __xe_pt_bind_vma(struct xe_gt *gt, struct xe_vma *vma, struct xe_engine *e,
 		},
 		.bind = true,
 	};
-	struct xe_vm *vm = vma->vm;
+	struct xe_vm *vm = xe_vma_vm(vma);
 	u32 num_entries;
 	struct dma_fence *fence;
 	struct invalidation_fence *ifence = NULL;
 	int err;
 
 	bind_pt_update.locked = false;
-	xe_bo_assert_held(vma->bo);
+	xe_bo_assert_held(xe_vma_bo(vma));
 	xe_vm_assert_held(vm);
 	XE_BUG_ON(xe_gt_is_media_type(gt));
 
-	vm_dbg(&vma->vm->xe->drm,
+	vm_dbg(&xe_vma_vm(vma)->xe->drm,
 	       "Preparing bind, with range [%llx...%llx) engine %p.\n",
-	       vma->start, vma->end, e);
+	       xe_vma_start(vma), xe_vma_end(vma) - 1, e);
 
 	err = xe_pt_prepare_bind(gt, vma, entries, &num_entries, rebind);
 	if (err)
@@ -1310,23 +1310,28 @@ __xe_pt_bind_vma(struct xe_gt *gt, struct xe_vma *vma, struct xe_engine *e,
 
 	xe_vm_dbg_print_entries(gt_to_xe(gt), entries, num_entries);
 
-	if (rebind && !xe_vm_no_dma_fences(vma->vm)) {
+	if (rebind && !xe_vm_no_dma_fences(xe_vma_vm(vma))) {
 		ifence = kzalloc(sizeof(*ifence), GFP_KERNEL);
 		if (!ifence)
 			return ERR_PTR(-ENOMEM);
 	}
 
 	fence = xe_migrate_update_pgtables(gt->migrate,
-					   vm, vma->bo,
+					   vm, xe_vma_bo(vma),
 					   e ? e : vm->eng[gt->info.id],
 					   entries, num_entries,
 					   syncs, num_syncs,
 					   &bind_pt_update.base);
 	if (!IS_ERR(fence)) {
+		bool last_munmap_rebind = vma->gpuva.flags & XE_VMA_LAST_REBIND;
 		LLIST_HEAD(deferred);
 
+		if (last_munmap_rebind)
+			vm_dbg(&vm->xe->drm, "last_munmap_rebind");
+
 		/* TLB invalidation must be done before signaling rebind */
-		if (rebind && !xe_vm_no_dma_fences(vma->vm)) {
+		if (rebind && !xe_vm_no_dma_fences(xe_vma_vm(vma))) {
 			int err = invalidation_fence_init(gt, ifence, fence,
 							  vma);
 			if (err) {
@@ -1339,12 +1344,12 @@ __xe_pt_bind_vma(struct xe_gt *gt, struct xe_vma *vma, struct xe_engine *e,
 
 		/* add shared fence now for pagetable delayed destroy */
 		dma_resv_add_fence(&vm->resv, fence, !rebind &&
-				   vma->last_munmap_rebind ?
+				   last_munmap_rebind ?
 				   DMA_RESV_USAGE_KERNEL :
 				   DMA_RESV_USAGE_BOOKKEEP);
 
-		if (!xe_vma_is_userptr(vma) && !vma->bo->vm)
-			dma_resv_add_fence(vma->bo->ttm.base.resv, fence,
+		if (!xe_vma_is_userptr(vma) && !xe_vma_bo(vma)->vm)
+			dma_resv_add_fence(xe_vma_bo(vma)->ttm.base.resv, fence,
 					   DMA_RESV_USAGE_BOOKKEEP);
 		xe_pt_commit_bind(vma, entries, num_entries, rebind,
 				  bind_pt_update.locked ? &deferred : NULL);
@@ -1357,8 +1362,7 @@ __xe_pt_bind_vma(struct xe_gt *gt, struct xe_vma *vma, struct xe_engine *e,
 			up_read(&vm->userptr.notifier_lock);
 			xe_bo_put_commit(&deferred);
 		}
-		if (!rebind && vma->last_munmap_rebind &&
-		    xe_vm_in_compute_mode(vm))
+		if (!rebind && last_munmap_rebind && xe_vm_in_compute_mode(vm))
 			queue_work(vm->xe->ordered_wq,
 				   &vm->preempt.rebind_work);
 	} else {
@@ -1506,14 +1510,14 @@ static unsigned int xe_pt_stage_unbind(struct xe_gt *gt, struct xe_vma *vma,
 			.max_level = XE_PT_HIGHEST_LEVEL,
 		},
 		.gt = gt,
-		.modified_start = vma->start,
-		.modified_end = vma->end + 1,
+		.modified_start = xe_vma_start(vma),
+		.modified_end = xe_vma_end(vma),
 		.wupd.entries = entries,
 	};
-	struct xe_pt *pt = vma->vm->pt_root[gt->info.id];
+	struct xe_pt *pt = xe_vma_vm(vma)->pt_root[gt->info.id];
 
-	(void)drm_pt_walk_shared(&pt->drm, pt->level, vma->start, vma->end + 1,
-				 &xe_walk.drm);
+	(void)drm_pt_walk_shared(&pt->drm, pt->level, xe_vma_start(vma),
+				 xe_vma_end(vma), &xe_walk.drm);
 
 	return xe_walk.wupd.num_used_entries;
 }
@@ -1525,7 +1529,7 @@ xe_migrate_clear_pgtable_callback(struct xe_migrate_pt_update *pt_update,
 				  const struct xe_vm_pgtable_update *update)
 {
 	struct xe_vma *vma = pt_update->vma;
-	u64 empty = __xe_pt_empty_pte(gt, vma->vm, update->pt->level);
+	u64 empty = __xe_pt_empty_pte(gt, xe_vma_vm(vma), update->pt->level);
 	int i;
 
 	XE_BUG_ON(xe_gt_is_media_type(gt));
@@ -1563,7 +1567,7 @@ xe_pt_commit_unbind(struct xe_vma *vma,
 			     i++) {
 				if (xe_pt_entry(pt_dir, i))
 					xe_pt_destroy(xe_pt_entry(pt_dir, i),
-						      vma->vm->flags, deferred);
+						      xe_vma_vm(vma)->flags, deferred);
 
 				pt_dir->dir.entries[i] = NULL;
 			}
@@ -1612,19 +1616,19 @@ __xe_pt_unbind_vma(struct xe_gt *gt, struct xe_vma *vma, struct xe_engine *e,
 			.vma = vma,
 		},
 	};
-	struct xe_vm *vm = vma->vm;
+	struct xe_vm *vm = xe_vma_vm(vma);
 	u32 num_entries;
 	struct dma_fence *fence = NULL;
 	struct invalidation_fence *ifence;
 	LLIST_HEAD(deferred);
 
-	xe_bo_assert_held(vma->bo);
+	xe_bo_assert_held(xe_vma_bo(vma));
 	xe_vm_assert_held(vm);
 	XE_BUG_ON(xe_gt_is_media_type(gt));
 
-	vm_dbg(&vma->vm->xe->drm,
+	vm_dbg(&xe_vma_vm(vma)->xe->drm,
 	       "Preparing unbind, with range [%llx...%llx) engine %p.\n",
-	       vma->start, vma->end, e);
+	       xe_vma_start(vma), xe_vma_end(vma) - 1, e);
 
 	num_entries = xe_pt_stage_unbind(gt, vma, entries);
 	XE_BUG_ON(num_entries > ARRAY_SIZE(entries));
@@ -1663,8 +1667,8 @@ __xe_pt_unbind_vma(struct xe_gt *gt, struct xe_vma *vma, struct xe_engine *e,
 				   DMA_RESV_USAGE_BOOKKEEP);
 
 		/* This fence will be installed by caller when doing eviction */
-		if (!xe_vma_is_userptr(vma) && !vma->bo->vm)
-			dma_resv_add_fence(vma->bo->ttm.base.resv, fence,
+		if (!xe_vma_is_userptr(vma) && !xe_vma_bo(vma)->vm)
+			dma_resv_add_fence(xe_vma_bo(vma)->ttm.base.resv, fence,
 					   DMA_RESV_USAGE_BOOKKEEP);
 		xe_pt_commit_unbind(vma, entries, num_entries,
 				    unbind_pt_update.locked ? &deferred : NULL);
diff --git a/drivers/gpu/drm/xe/xe_trace.h b/drivers/gpu/drm/xe/xe_trace.h
index 2f8eb7ebe9a7..12e12673fc91 100644
--- a/drivers/gpu/drm/xe/xe_trace.h
+++ b/drivers/gpu/drm/xe/xe_trace.h
@@ -18,7 +18,7 @@
 #include "xe_gt_types.h"
 #include "xe_guc_engine_types.h"
 #include "xe_sched_job.h"
-#include "xe_vm_types.h"
+#include "xe_vm.h"
 
 DECLARE_EVENT_CLASS(xe_gt_tlb_invalidation_fence,
 		    TP_PROTO(struct xe_gt_tlb_invalidation_fence *fence),
@@ -368,10 +368,10 @@ DECLARE_EVENT_CLASS(xe_vma,
 
 		    TP_fast_assign(
 			   __entry->vma = (unsigned long)vma;
-			   __entry->asid = vma->vm->usm.asid;
-			   __entry->start = vma->start;
-			   __entry->end = vma->end;
-			   __entry->ptr = (u64)vma->userptr.ptr;
+			   __entry->asid = xe_vma_vm(vma)->usm.asid;
+			   __entry->start = xe_vma_start(vma);
+			   __entry->end = xe_vma_end(vma) - 1;
+			   __entry->ptr = xe_vma_userptr(vma);
 			   ),
 
 		    TP_printk("vma=0x%016llx, asid=0x%05x, start=0x%012llx, end=0x%012llx, ptr=0x%012llx,",
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index bdf82d34eb66..fddbe8d5f984 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -25,10 +25,8 @@
 #include "xe_preempt_fence.h"
 #include "xe_pt.h"
 #include "xe_res_cursor.h"
 #include "xe_sync.h"
 #include "xe_trace.h"
-
-#define TEST_VM_ASYNC_OPS_ERROR
 
 /**
  * xe_vma_userptr_check_repin() - Advisory check for repin needed
@@ -51,20 +49,19 @@ int xe_vma_userptr_check_repin(struct xe_vma *vma)
 
 int xe_vma_userptr_pin_pages(struct xe_vma *vma)
 {
-	struct xe_vm *vm = vma->vm;
+	struct xe_vm *vm = xe_vma_vm(vma);
 	struct xe_device *xe = vm->xe;
-	const unsigned long num_pages =
-		(vma->end - vma->start + 1) >> PAGE_SHIFT;
+	const unsigned long num_pages = xe_vma_size(vma) >> PAGE_SHIFT;
 	struct page **pages;
 	bool in_kthread = !current->mm;
 	unsigned long notifier_seq;
 	int pinned, ret, i;
-	bool read_only = vma->pte_flags & PTE_READ_ONLY;
+	bool read_only = xe_vma_read_only(vma);
 
 	lockdep_assert_held(&vm->lock);
 	XE_BUG_ON(!xe_vma_is_userptr(vma));
 retry:
-	if (vma->destroyed)
+	if (vma->gpuva.flags & XE_VMA_DESTROYED)
 		return 0;
 
 	notifier_seq = mmu_interval_read_begin(&vma->userptr.notifier);
@@ -94,7 +91,8 @@ int xe_vma_userptr_pin_pages(struct xe_vma *vma)
 	}
 
 	while (pinned < num_pages) {
-		ret = get_user_pages_fast(vma->userptr.ptr + pinned * PAGE_SIZE,
+		ret = get_user_pages_fast(xe_vma_userptr(vma) +
+					  pinned * PAGE_SIZE,
 					  num_pages - pinned,
 					  read_only ? 0 : FOLL_WRITE,
 					  &pages[pinned]);
@@ -295,7 +293,7 @@ void xe_vm_fence_all_extobjs(struct xe_vm *vm, struct dma_fence *fence,
 	struct xe_vma *vma;
 
 	list_for_each_entry(vma, &vm->extobj.list, extobj.link)
-		dma_resv_add_fence(vma->bo->ttm.base.resv, fence, usage);
+		dma_resv_add_fence(xe_vma_bo(vma)->ttm.base.resv, fence, usage);
 }
 
 static void resume_and_reinstall_preempt_fences(struct xe_vm *vm)
@@ -444,7 +442,7 @@ int xe_vm_lock_dma_resv(struct xe_vm *vm, struct ww_acquire_ctx *ww,
 	INIT_LIST_HEAD(objs);
 	list_for_each_entry(vma, &vm->extobj.list, extobj.link) {
 		tv_bo->num_shared = num_shared;
-		tv_bo->bo = &vma->bo->ttm;
+		tv_bo->bo = &xe_vma_bo(vma)->ttm;
 
 		list_add_tail(&tv_bo->head, objs);
 		tv_bo++;
@@ -459,10 +457,10 @@ int xe_vm_lock_dma_resv(struct xe_vm *vm, struct ww_acquire_ctx *ww,
 	spin_lock(&vm->notifier.list_lock);
 	list_for_each_entry_safe(vma, next, &vm->notifier.rebind_list,
 				 notifier.rebind_link) {
-		xe_bo_assert_held(vma->bo);
+		xe_bo_assert_held(xe_vma_bo(vma));
 
 		list_del_init(&vma->notifier.rebind_link);
-		if (vma->gt_present && !vma->destroyed)
+		if (vma->gt_present && !(vma->gpuva.flags & XE_VMA_DESTROYED))
 			list_move_tail(&vma->rebind_link, &vm->rebind_list);
 	}
 	spin_unlock(&vm->notifier.list_lock);
@@ -583,10 +581,11 @@ static void preempt_rebind_work_func(struct work_struct *w)
 		goto out_unlock;
 
 	list_for_each_entry(vma, &vm->rebind_list, rebind_link) {
-		if (xe_vma_is_userptr(vma) || vma->destroyed)
+		if (xe_vma_is_userptr(vma) ||
+		    vma->gpuva.flags & XE_VMA_DESTROYED)
 			continue;
 
-		err = xe_bo_validate(vma->bo, vm, false);
+		err = xe_bo_validate(xe_vma_bo(vma), vm, false);
 		if (err)
 			goto out_unlock;
 	}
@@ -645,17 +644,12 @@ static void preempt_rebind_work_func(struct work_struct *w)
 	trace_xe_vm_rebind_worker_exit(vm);
 }
 
-struct async_op_fence;
-static int __xe_vm_bind(struct xe_vm *vm, struct xe_vma *vma,
-			struct xe_engine *e, struct xe_sync_entry *syncs,
-			u32 num_syncs, struct async_op_fence *afence);
-
 static bool vma_userptr_invalidate(struct mmu_interval_notifier *mni,
 				   const struct mmu_notifier_range *range,
 				   unsigned long cur_seq)
 {
 	struct xe_vma *vma = container_of(mni, struct xe_vma, userptr.notifier);
-	struct xe_vm *vm = vma->vm;
+	struct xe_vm *vm = xe_vma_vm(vma);
 	struct dma_resv_iter cursor;
 	struct dma_fence *fence;
 	long err;
@@ -679,7 +673,8 @@ static bool vma_userptr_invalidate(struct mmu_interval_notifier *mni,
 	 * Tell exec and rebind worker they need to repin and rebind this
 	 * userptr.
 	 */
-	if (!xe_vm_in_fault_mode(vm) && !vma->destroyed && vma->gt_present) {
+	if (!xe_vm_in_fault_mode(vm) &&
+	    !(vma->gpuva.flags & XE_VMA_DESTROYED) && vma->gt_present) {
 		spin_lock(&vm->userptr.invalidated_lock);
 		list_move_tail(&vma->userptr.invalidate_link,
 			       &vm->userptr.invalidated);
@@ -784,7 +779,8 @@ int xe_vm_userptr_check_repin(struct xe_vm *vm)
 
 static struct dma_fence *
 xe_vm_bind_vma(struct xe_vma *vma, struct xe_engine *e,
-	       struct xe_sync_entry *syncs, u32 num_syncs);
+	       struct xe_sync_entry *syncs, u32 num_syncs,
+	       bool first_op, bool last_op);
 
 struct dma_fence *xe_vm_rebind(struct xe_vm *vm, bool rebind_worker)
 {
@@ -805,7 +801,7 @@ struct dma_fence *xe_vm_rebind(struct xe_vm *vm, bool rebind_worker)
 			trace_xe_vma_rebind_worker(vma);
 		else
 			trace_xe_vma_rebind_exec(vma);
-		fence = xe_vm_bind_vma(vma, NULL, NULL, 0);
+		fence = xe_vm_bind_vma(vma, NULL, NULL, 0, false, false);
 		if (IS_ERR(fence))
 			return fence;
 	}
@@ -833,6 +829,7 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
 		return vma;
 	}
 
+	/* FIXME: Way too many lists, should be able to reduce this */
 	INIT_LIST_HEAD(&vma->rebind_link);
 	INIT_LIST_HEAD(&vma->unbind_link);
 	INIT_LIST_HEAD(&vma->userptr_link);
@@ -840,11 +837,12 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
 	INIT_LIST_HEAD(&vma->notifier.rebind_link);
 	INIT_LIST_HEAD(&vma->extobj.link);
 
-	vma->vm = vm;
-	vma->start = start;
-	vma->end = end;
+	INIT_LIST_HEAD(&vma->gpuva.head);
+	vma->gpuva.mgr = &vm->mgr;
+	vma->gpuva.va.addr = start;
+	vma->gpuva.va.range = end - start + 1;
 	if (read_only)
-		vma->pte_flags = PTE_READ_ONLY;
+		vma->gpuva.flags |= XE_VMA_READ_ONLY;
 
 	if (gt_mask) {
 		vma->gt_mask = gt_mask;
@@ -855,22 +853,24 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
 	}
 
 	if (vm->xe->info.platform == XE_PVC)
-		vma->use_atomic_access_pte_bit = true;
+		vma->gpuva.flags |= XE_VMA_ATOMIC_PTE_BIT;
 
 	if (bo) {
 		xe_bo_assert_held(bo);
-		vma->bo_offset = bo_offset_or_userptr;
-		vma->bo = xe_bo_get(bo);
-		list_add_tail(&vma->bo_link, &bo->vmas);
+
+		drm_gem_object_get(&bo->ttm.base);
+		vma->gpuva.gem.obj = &bo->ttm.base;
+		vma->gpuva.gem.offset = bo_offset_or_userptr;
+		drm_gpuva_link(&vma->gpuva);
 	} else /* userptr */ {
 		u64 size = end - start + 1;
 		int err;
 
-		vma->userptr.ptr = bo_offset_or_userptr;
+		vma->gpuva.gem.offset = bo_offset_or_userptr;
 
 		err = mmu_interval_notifier_insert(&vma->userptr.notifier,
 						   current->mm,
-						   vma->userptr.ptr, size,
+						   xe_vma_userptr(vma), size,
 						   &vma_userptr_notifier_ops);
 		if (err) {
 			kfree(vma);
@@ -888,16 +888,16 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
 static void vm_remove_extobj(struct xe_vma *vma)
 {
 	if (!list_empty(&vma->extobj.link)) {
-		vma->vm->extobj.entries--;
+		xe_vma_vm(vma)->extobj.entries--;
 		list_del_init(&vma->extobj.link);
 	}
 }
 
 static void xe_vma_destroy_late(struct xe_vma *vma)
 {
-	struct xe_vm *vm = vma->vm;
+	struct xe_vm *vm = xe_vma_vm(vma);
 	struct xe_device *xe = vm->xe;
-	bool read_only = vma->pte_flags & PTE_READ_ONLY;
+	bool read_only = xe_vma_read_only(vma);
 
 	if (xe_vma_is_userptr(vma)) {
 		if (vma->userptr.sg) {
@@ -917,7 +917,7 @@ static void xe_vma_destroy_late(struct xe_vma *vma)
 		mmu_interval_notifier_remove(&vma->userptr.notifier);
 		xe_vm_put(vm);
 	} else {
-		xe_bo_put(vma->bo);
+		xe_bo_put(xe_vma_bo(vma));
 	}
 
 	kfree(vma);
@@ -942,21 +942,22 @@ static void vma_destroy_cb(struct dma_fence *fence,
 
 static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence)
 {
-	struct xe_vm *vm = vma->vm;
+	struct xe_vm *vm = xe_vma_vm(vma);
 
 	lockdep_assert_held_write(&vm->lock);
 	XE_BUG_ON(!list_empty(&vma->unbind_link));
 
 	if (xe_vma_is_userptr(vma)) {
-		XE_WARN_ON(!vma->destroyed);
+		XE_WARN_ON(!(vma->gpuva.flags & XE_VMA_DESTROYED));
+
 		spin_lock(&vm->userptr.invalidated_lock);
 		list_del_init(&vma->userptr.invalidate_link);
 		spin_unlock(&vm->userptr.invalidated_lock);
 		list_del(&vma->userptr_link);
 	} else {
-		xe_bo_assert_held(vma->bo);
-		list_del(&vma->bo_link);
-		if (!vma->bo->vm)
+		xe_bo_assert_held(xe_vma_bo(vma));
+		drm_gpuva_unlink(&vma->gpuva);
+		if (!xe_vma_bo(vma)->vm)
 			vm_remove_extobj(vma);
 	}
 
@@ -981,13 +982,13 @@ static void xe_vma_destroy_unlocked(struct xe_vma *vma)
 {
 	struct ttm_validate_buffer tv[2];
 	struct ww_acquire_ctx ww;
-	struct xe_bo *bo = vma->bo;
+	struct xe_bo *bo = xe_vma_bo(vma);
 	LIST_HEAD(objs);
 	LIST_HEAD(dups);
 	int err;
 
 	memset(tv, 0, sizeof(tv));
-	tv[0].bo = xe_vm_ttm_bo(vma->vm);
+	tv[0].bo = xe_vm_ttm_bo(xe_vma_vm(vma));
 	list_add(&tv[0].head, &objs);
 
 	if (bo) {
@@ -1004,77 +1005,61 @@ static void xe_vma_destroy_unlocked(struct xe_vma *vma)
 		xe_bo_put(bo);
 }
 
-static struct xe_vma *to_xe_vma(const struct rb_node *node)
-{
-	BUILD_BUG_ON(offsetof(struct xe_vma, vm_node) != 0);
-	return (struct xe_vma *)node;
-}
-
-static int xe_vma_cmp(const struct xe_vma *a, const struct xe_vma *b)
-{
-	if (a->end < b->start) {
-		return -1;
-	} else if (b->end < a->start) {
-		return 1;
-	} else {
-		return 0;
-	}
-}
-
-static bool xe_vma_less_cb(struct rb_node *a, const struct rb_node *b)
-{
-	return xe_vma_cmp(to_xe_vma(a), to_xe_vma(b)) < 0;
-}
-
-int xe_vma_cmp_vma_cb(const void *key, const struct rb_node *node)
-{
-	struct xe_vma *cmp = to_xe_vma(node);
-	const struct xe_vma *own = key;
-
-	if (own->start > cmp->end)
-		return 1;
-
-	if (own->end < cmp->start)
-		return -1;
-
-	return 0;
-}
-
 struct xe_vma *
-xe_vm_find_overlapping_vma(struct xe_vm *vm, const struct xe_vma *vma)
+xe_vm_find_overlapping_vma(struct xe_vm *vm, u64 start, u64 range)
 {
-	struct rb_node *node;
+	struct drm_gpuva *gpuva;
 
 	if (xe_vm_is_closed(vm))
 		return NULL;
 
-	XE_BUG_ON(vma->end >= vm->size);
+	XE_BUG_ON(start + range > vm->size);
 	lockdep_assert_held(&vm->lock);
 
-	node = rb_find(vma, &vm->vmas, xe_vma_cmp_vma_cb);
+	gpuva = drm_gpuva_find_first(&vm->mgr, start, range);
 
-	return node ? to_xe_vma(node) : NULL;
+	return gpuva ? gpuva_to_vma(gpuva) : NULL;
 }
 
 static void xe_vm_insert_vma(struct xe_vm *vm, struct xe_vma *vma)
 {
-	XE_BUG_ON(vma->vm != vm);
+	int err;
+
+	XE_BUG_ON(xe_vma_vm(vma) != vm);
 	lockdep_assert_held(&vm->lock);
 
-	rb_add(&vma->vm_node, &vm->vmas, xe_vma_less_cb);
+	err = drm_gpuva_insert(&vm->mgr, &vma->gpuva);
+	XE_WARN_ON(err);
 }
 
-static void xe_vm_remove_vma(struct xe_vm *vm, struct xe_vma *vma)
+static void xe_vm_remove_vma(struct xe_vm *vm, struct xe_vma *vma, bool remove)
 {
-	XE_BUG_ON(vma->vm != vm);
+	XE_BUG_ON(xe_vma_vm(vma) != vm);
 	lockdep_assert_held(&vm->lock);
 
-	rb_erase(&vma->vm_node, &vm->vmas);
+	if (remove)
+		drm_gpuva_remove(&vma->gpuva);
 	if (vm->usm.last_fault_vma == vma)
 		vm->usm.last_fault_vma = NULL;
 }
 
-static void async_op_work_func(struct work_struct *w);
+static struct drm_gpuva_op *xe_vm_op_alloc(void)
+{
+	struct xe_vma_op *op;
+
+	op = kzalloc(sizeof(*op), GFP_KERNEL);
+
+	if (unlikely(!op))
+		return NULL;
+
+	return &op->base;
+}
+
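+/*
+ * Allocation hook for the GPUVA manager: each generic drm_gpuva_op it hands
+ * back is embedded in an Xe-specific struct xe_vma_op.
+ */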
+static struct drm_gpuva_fn_ops gpuva_ops = {
+	.op_alloc = xe_vm_op_alloc,
+};
+
+static void xe_vma_op_work_func(struct work_struct *w);
 static void vm_destroy_work_func(struct work_struct *w);
 
 struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
@@ -1094,7 +1079,6 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
 
 	vm->size = 1ull << xe_pt_shift(xe->info.vm_max_level + 1);
 
-	vm->vmas = RB_ROOT;
 	vm->flags = flags;
 
 	init_rwsem(&vm->lock);
@@ -1110,7 +1094,7 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
 	spin_lock_init(&vm->notifier.list_lock);
 
 	INIT_LIST_HEAD(&vm->async_ops.pending);
-	INIT_WORK(&vm->async_ops.work, async_op_work_func);
+	INIT_WORK(&vm->async_ops.work, xe_vma_op_work_func);
 	spin_lock_init(&vm->async_ops.lock);
 
 	INIT_WORK(&vm->destroy_work, vm_destroy_work_func);
@@ -1130,6 +1114,8 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
 	if (err)
 		goto err_put;
 
+	drm_gpuva_manager_init(&vm->mgr, "Xe VM", 0, vm->size, 0, 0,
+			       &gpuva_ops, 0);
 	if (IS_DGFX(xe) && xe->info.vram_flags & XE_VRAM_FLAGS_NEED64K)
 		vm->flags |= XE_VM_FLAGS_64K;
 
@@ -1235,6 +1221,7 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
 			xe_pt_destroy(vm->pt_root[id], vm->flags, NULL);
 	}
 	dma_resv_unlock(&vm->resv);
+	drm_gpuva_manager_destroy(&vm->mgr);
 err_put:
 	dma_resv_fini(&vm->resv);
 	kfree(vm);
@@ -1284,14 +1271,18 @@ static void vm_error_capture(struct xe_vm *vm, int err,
 
 void xe_vm_close_and_put(struct xe_vm *vm)
 {
-	struct rb_root contested = RB_ROOT;
+	struct list_head contested;
 	struct ww_acquire_ctx ww;
 	struct xe_device *xe = vm->xe;
 	struct xe_gt *gt;
+	struct xe_vma *vma, *next_vma;
+	DRM_GPUVA_ITER(it, &vm->mgr, 0);
 	u8 id;
 
 	XE_BUG_ON(vm->preempt.num_engines);
 
+	INIT_LIST_HEAD(&contested);
+
 	vm->size = 0;
 	smp_mb();
 	flush_async_ops(vm);
@@ -1308,24 +1299,25 @@ void xe_vm_close_and_put(struct xe_vm *vm)
 
 	down_write(&vm->lock);
 	xe_vm_lock(vm, &ww, 0, false);
-	while (vm->vmas.rb_node) {
-		struct xe_vma *vma = to_xe_vma(vm->vmas.rb_node);
+	drm_gpuva_iter_for_each(it) {
+		vma = gpuva_to_vma(it.va);
 
 		if (xe_vma_is_userptr(vma)) {
 			down_read(&vm->userptr.notifier_lock);
-			vma->destroyed = true;
+			vma->gpuva.flags |= XE_VMA_DESTROYED;
 			up_read(&vm->userptr.notifier_lock);
 		}
 
-		rb_erase(&vma->vm_node, &vm->vmas);
+		xe_vm_remove_vma(vm, vma, false);
+		drm_gpuva_iter_remove(&it);
 
 		/* easy case, remove from VMA? */
-		if (xe_vma_is_userptr(vma) || vma->bo->vm) {
+		if (xe_vma_is_userptr(vma) || xe_vma_bo(vma)->vm) {
 			xe_vma_destroy(vma, NULL);
 			continue;
 		}
 
-		rb_add(&vma->vm_node, &contested, xe_vma_less_cb);
+		list_add_tail(&vma->unbind_link, &contested);
 	}
 
 	/*
@@ -1348,19 +1340,14 @@ void xe_vm_close_and_put(struct xe_vm *vm)
 	}
 	xe_vm_unlock(vm, &ww);
 
-	if (contested.rb_node) {
-
-		/*
-		 * VM is now dead, cannot re-add nodes to vm->vmas if it's NULL
-		 * Since we hold a refcount to the bo, we can remove and free
-		 * the members safely without locking.
-		 */
-		while (contested.rb_node) {
-			struct xe_vma *vma = to_xe_vma(contested.rb_node);
-
-			rb_erase(&vma->vm_node, &contested);
-			xe_vma_destroy_unlocked(vma);
-		}
+	/*
+	 * VM is now dead, so VMAs cannot be re-added to it.
+	 * Since we hold a refcount to the bo, we can remove and free
+	 * the members safely without locking.
+	 */
+	list_for_each_entry_safe(vma, next_vma, &contested, unbind_link) {
+		list_del_init(&vma->unbind_link);
+		xe_vma_destroy_unlocked(vma);
 	}
 
 	if (vm->async_ops.error_capture.addr)
@@ -1369,6 +1356,8 @@ void xe_vm_close_and_put(struct xe_vm *vm)
 	XE_WARN_ON(!list_empty(&vm->extobj.list));
 	up_write(&vm->lock);
 
+	drm_gpuva_manager_destroy(&vm->mgr);
+
 	mutex_lock(&xe->usm.lock);
 	if (vm->flags & XE_VM_FLAG_FAULT_MODE)
 		xe->usm.num_vm_in_fault_mode--;
@@ -1456,13 +1445,14 @@ u64 xe_vm_pdp4_descriptor(struct xe_vm *vm, struct xe_gt *full_gt)
 
 static struct dma_fence *
 xe_vm_unbind_vma(struct xe_vma *vma, struct xe_engine *e,
-		 struct xe_sync_entry *syncs, u32 num_syncs)
+		 struct xe_sync_entry *syncs, u32 num_syncs,
+		 bool first_op, bool last_op)
 {
 	struct xe_gt *gt;
 	struct dma_fence *fence = NULL;
 	struct dma_fence **fences = NULL;
 	struct dma_fence_array *cf = NULL;
-	struct xe_vm *vm = vma->vm;
+	struct xe_vm *vm = xe_vma_vm(vma);
 	int cur_fence = 0, i;
 	int number_gts = hweight_long(vma->gt_present);
 	int err;
@@ -1483,7 +1473,8 @@ xe_vm_unbind_vma(struct xe_vma *vma, struct xe_engine *e,
 
 		XE_BUG_ON(xe_gt_is_media_type(gt));
 
-		fence = __xe_pt_unbind_vma(gt, vma, e, syncs, num_syncs);
+		fence = __xe_pt_unbind_vma(gt, vma, e, first_op ? syncs : NULL,
+					   first_op ? num_syncs : 0);
 		if (IS_ERR(fence)) {
 			err = PTR_ERR(fence);
 			goto err_fences;
@@ -1509,7 +1500,7 @@ xe_vm_unbind_vma(struct xe_vma *vma, struct xe_engine *e,
 		}
 	}
 
-	for (i = 0; i < num_syncs; i++)
+	for (i = 0; last_op && i < num_syncs; i++)
 		xe_sync_entry_signal(&syncs[i], NULL, cf ? &cf->base : fence);
 
 	return cf ? &cf->base : !fence ? dma_fence_get_stub() : fence;
@@ -1528,13 +1519,14 @@ xe_vm_unbind_vma(struct xe_vma *vma, struct xe_engine *e,
 
 static struct dma_fence *
 xe_vm_bind_vma(struct xe_vma *vma, struct xe_engine *e,
-	       struct xe_sync_entry *syncs, u32 num_syncs)
+	       struct xe_sync_entry *syncs, u32 num_syncs,
+	       bool first_op, bool last_op)
 {
 	struct xe_gt *gt;
 	struct dma_fence *fence;
 	struct dma_fence **fences = NULL;
 	struct dma_fence_array *cf = NULL;
-	struct xe_vm *vm = vma->vm;
+	struct xe_vm *vm = xe_vma_vm(vma);
 	int cur_fence = 0, i;
 	int number_gts = hweight_long(vma->gt_mask);
 	int err;
@@ -1554,7 +1546,8 @@ xe_vm_bind_vma(struct xe_vma *vma, struct xe_engine *e,
 			goto next;
 
 		XE_BUG_ON(xe_gt_is_media_type(gt));
-		fence = __xe_pt_bind_vma(gt, vma, e, syncs, num_syncs,
+		fence = __xe_pt_bind_vma(gt, vma, e, first_op ? syncs : NULL,
+					 first_op ? num_syncs : 0,
 					 vma->gt_present & BIT(id));
 		if (IS_ERR(fence)) {
 			err = PTR_ERR(fence);
@@ -1581,7 +1574,7 @@ xe_vm_bind_vma(struct xe_vma *vma, struct xe_engine *e,
 		}
 	}
 
-	for (i = 0; i < num_syncs; i++)
+	for (i = 0; last_op && i < num_syncs; i++)
 		xe_sync_entry_signal(&syncs[i], NULL, cf ? &cf->base : fence);
 
 	return cf ? &cf->base : fence;
@@ -1680,15 +1673,27 @@ int xe_vm_async_fence_wait_start(struct dma_fence *fence)
 
 static int __xe_vm_bind(struct xe_vm *vm, struct xe_vma *vma,
 			struct xe_engine *e, struct xe_sync_entry *syncs,
-			u32 num_syncs, struct async_op_fence *afence)
+			u32 num_syncs, struct async_op_fence *afence,
+			bool immediate, bool first_op, bool last_op)
 {
 	struct dma_fence *fence;
 
 	xe_vm_assert_held(vm);
 
-	fence = xe_vm_bind_vma(vma, e, syncs, num_syncs);
-	if (IS_ERR(fence))
-		return PTR_ERR(fence);
+	if (immediate) {
+		fence = xe_vm_bind_vma(vma, e, syncs, num_syncs, first_op,
+				       last_op);
+		if (IS_ERR(fence))
+			return PTR_ERR(fence);
+	} else {
+		int i;
+
+		XE_BUG_ON(!xe_vm_in_fault_mode(vm));
+
+		fence = dma_fence_get_stub();
+		for (i = 0; last_op && i < num_syncs; i++)
+			xe_sync_entry_signal(&syncs[i], NULL, fence);
+	}
 	if (afence)
 		add_async_op_fence_cb(vm, fence, afence);
 
@@ -1698,32 +1703,35 @@ static int __xe_vm_bind(struct xe_vm *vm, struct xe_vma *vma,
 
 static int xe_vm_bind(struct xe_vm *vm, struct xe_vma *vma, struct xe_engine *e,
 		      struct xe_bo *bo, struct xe_sync_entry *syncs,
-		      u32 num_syncs, struct async_op_fence *afence)
+		      u32 num_syncs, struct async_op_fence *afence,
+		      bool immediate, bool first_op, bool last_op)
 {
 	int err;
 
 	xe_vm_assert_held(vm);
 	xe_bo_assert_held(bo);
 
-	if (bo) {
+	if (bo && immediate) {
 		err = xe_bo_validate(bo, vm, true);
 		if (err)
 			return err;
 	}
 
-	return __xe_vm_bind(vm, vma, e, syncs, num_syncs, afence);
+	return __xe_vm_bind(vm, vma, e, syncs, num_syncs, afence, immediate,
+			    first_op, last_op);
 }
 
 static int xe_vm_unbind(struct xe_vm *vm, struct xe_vma *vma,
 			struct xe_engine *e, struct xe_sync_entry *syncs,
-			u32 num_syncs, struct async_op_fence *afence)
+			u32 num_syncs, struct async_op_fence *afence,
+			bool first_op, bool last_op)
 {
 	struct dma_fence *fence;
 
 	xe_vm_assert_held(vm);
-	xe_bo_assert_held(vma->bo);
+	xe_bo_assert_held(xe_vma_bo(vma));
 
-	fence = xe_vm_unbind_vma(vma, e, syncs, num_syncs);
+	fence = xe_vm_unbind_vma(vma, e, syncs, num_syncs, first_op, last_op);
 	if (IS_ERR(fence))
 		return PTR_ERR(fence);
 	if (afence)
@@ -1946,26 +1954,27 @@ static const u32 region_to_mem_type[] = {
 static int xe_vm_prefetch(struct xe_vm *vm, struct xe_vma *vma,
 			  struct xe_engine *e, u32 region,
 			  struct xe_sync_entry *syncs, u32 num_syncs,
-			  struct async_op_fence *afence)
+			  struct async_op_fence *afence, bool first_op,
+			  bool last_op)
 {
 	int err;
 
 	XE_BUG_ON(region > ARRAY_SIZE(region_to_mem_type));
 
 	if (!xe_vma_is_userptr(vma)) {
-		err = xe_bo_migrate(vma->bo, region_to_mem_type[region]);
+		err = xe_bo_migrate(xe_vma_bo(vma), region_to_mem_type[region]);
 		if (err)
 			return err;
 	}
 
 	if (vma->gt_mask != (vma->gt_present & ~vma->usm.gt_invalidated)) {
-		return xe_vm_bind(vm, vma, e, vma->bo, syncs, num_syncs,
-				  afence);
+		return xe_vm_bind(vm, vma, e, xe_vma_bo(vma), syncs, num_syncs,
+				  afence, true, first_op, last_op);
 	} else {
 		int i;
 
 		/* Nothing to do, signal fences now */
-		for (i = 0; i < num_syncs; i++)
+		for (i = 0; last_op && i < num_syncs; i++)
 			xe_sync_entry_signal(&syncs[i], NULL,
 					     dma_fence_get_stub());
 		if (afence)
@@ -1976,29 +1985,6 @@ static int xe_vm_prefetch(struct xe_vm *vm, struct xe_vma *vma,
 
 #define VM_BIND_OP(op)	(op & 0xffff)
 
-static int __vm_bind_ioctl(struct xe_vm *vm, struct xe_vma *vma,
-			   struct xe_engine *e, struct xe_bo *bo, u32 op,
-			   u32 region, struct xe_sync_entry *syncs,
-			   u32 num_syncs, struct async_op_fence *afence)
-{
-	switch (VM_BIND_OP(op)) {
-	case XE_VM_BIND_OP_MAP:
-		return xe_vm_bind(vm, vma, e, bo, syncs, num_syncs, afence);
-	case XE_VM_BIND_OP_UNMAP:
-	case XE_VM_BIND_OP_UNMAP_ALL:
-		return xe_vm_unbind(vm, vma, e, syncs, num_syncs, afence);
-	case XE_VM_BIND_OP_MAP_USERPTR:
-		return xe_vm_bind(vm, vma, e, NULL, syncs, num_syncs, afence);
-	case XE_VM_BIND_OP_PREFETCH:
-		return xe_vm_prefetch(vm, vma, e, region, syncs, num_syncs,
-				      afence);
-		break;
-	default:
-		XE_BUG_ON("NOT POSSIBLE");
-		return -EINVAL;
-	}
-}
-
 struct ttm_buffer_object *xe_vm_ttm_bo(struct xe_vm *vm)
 {
 	int idx = vm->flags & XE_VM_FLAG_MIGRATION ?
@@ -2014,834 +2000,816 @@ static void xe_vm_tv_populate(struct xe_vm *vm, struct ttm_validate_buffer *tv)
 	tv->bo = xe_vm_ttm_bo(vm);
 }
 
-static bool is_map_op(u32 op)
+static void vm_set_async_error(struct xe_vm *vm, int err)
 {
-	return VM_BIND_OP(op) == XE_VM_BIND_OP_MAP ||
-		VM_BIND_OP(op) == XE_VM_BIND_OP_MAP_USERPTR;
+	lockdep_assert_held(&vm->lock);
+	vm->async_ops.error = err;
 }
 
-static bool is_unmap_op(u32 op)
+static bool bo_has_vm_references(struct xe_bo *bo, struct xe_vm *vm,
+				 struct xe_vma *ignore)
 {
-	return VM_BIND_OP(op) == XE_VM_BIND_OP_UNMAP ||
-		VM_BIND_OP(op) == XE_VM_BIND_OP_UNMAP_ALL;
+	struct ww_acquire_ctx ww;
+	struct drm_gpuva *gpuva;
+	struct drm_gem_object *obj = &bo->ttm.base;
+	bool ret = false;
+
+	xe_bo_lock(bo, &ww, 0, false);
+	drm_gem_for_each_gpuva(gpuva, obj) {
+		struct xe_vma *vma = gpuva_to_vma(gpuva);
+
+		if (vma != ignore && xe_vma_vm(vma) == vm &&
+		    !(vma->gpuva.flags & XE_VMA_DESTROYED)) {
+			ret = true;
+			break;
+		}
+	}
+	xe_bo_unlock(bo, &ww);
+
+	return ret;
 }
 
-static int vm_bind_ioctl(struct xe_vm *vm, struct xe_vma *vma,
-			 struct xe_engine *e, struct xe_bo *bo,
-			 struct drm_xe_vm_bind_op *bind_op,
-			 struct xe_sync_entry *syncs, u32 num_syncs,
-			 struct async_op_fence *afence)
+static int vm_insert_extobj(struct xe_vm *vm, struct xe_vma *vma)
 {
-	LIST_HEAD(objs);
-	LIST_HEAD(dups);
-	struct ttm_validate_buffer tv_bo, tv_vm;
-	struct ww_acquire_ctx ww;
-	struct xe_bo *vbo;
-	int err, i;
+	struct xe_bo *bo = xe_vma_bo(vma);
 
-	lockdep_assert_held(&vm->lock);
-	XE_BUG_ON(!list_empty(&vma->unbind_link));
+	lockdep_assert_held_write(&vm->lock);
 
-	/* Binds deferred to faults, signal fences now */
-	if (xe_vm_in_fault_mode(vm) && is_map_op(bind_op->op) &&
-	    !(bind_op->op & XE_VM_BIND_FLAG_IMMEDIATE)) {
-		for (i = 0; i < num_syncs; i++)
-			xe_sync_entry_signal(&syncs[i], NULL,
-					     dma_fence_get_stub());
-		if (afence)
-			dma_fence_signal(&afence->fence);
+	if (bo_has_vm_references(bo, vm, vma))
 		return 0;
-	}
 
-	xe_vm_tv_populate(vm, &tv_vm);
-	list_add_tail(&tv_vm.head, &objs);
-	vbo = vma->bo;
-	if (vbo) {
-		/*
-		 * An unbind can drop the last reference to the BO and
-		 * the BO is needed for ttm_eu_backoff_reservation so
-		 * take a reference here.
-		 */
-		xe_bo_get(vbo);
+	list_add(&vma->extobj.link, &vm->extobj.list);
+	vm->extobj.entries++;
 
-		tv_bo.bo = &vbo->ttm;
-		tv_bo.num_shared = 1;
-		list_add(&tv_bo.head, &objs);
-	}
+	return 0;
+}
 
-again:
-	err = ttm_eu_reserve_buffers(&ww, &objs, true, &dups);
-	if (!err) {
-		err = __vm_bind_ioctl(vm, vma, e, bo,
-				      bind_op->op, bind_op->region, syncs,
-				      num_syncs, afence);
-		ttm_eu_backoff_reservation(&ww, &objs);
-		if (err == -EAGAIN && xe_vma_is_userptr(vma)) {
-			lockdep_assert_held_write(&vm->lock);
-			err = xe_vma_userptr_pin_pages(vma);
-			if (!err)
-				goto again;
-		}
+static int __vm_bind_ioctl_lookup_vma(struct xe_vm *vm, struct xe_bo *bo,
+				      u64 addr, u64 range, u32 op)
+{
+	struct xe_device *xe = vm->xe;
+	struct xe_vma *vma;
+	bool async = !!(op & XE_VM_BIND_FLAG_ASYNC);
+
+	lockdep_assert_held(&vm->lock);
+
+	switch (VM_BIND_OP(op)) {
+	case XE_VM_BIND_OP_MAP:
+	case XE_VM_BIND_OP_MAP_USERPTR:
+		vma = xe_vm_find_overlapping_vma(vm, addr, range);
+		if (XE_IOCTL_ERR(xe, vma))
+			return -EBUSY;
+		break;
+	case XE_VM_BIND_OP_UNMAP:
+	case XE_VM_BIND_OP_PREFETCH:
+		vma = xe_vm_find_overlapping_vma(vm, addr, range);
+		if (XE_IOCTL_ERR(xe, !vma) ||
+		    XE_IOCTL_ERR(xe, (xe_vma_start(vma) != addr ||
+				 xe_vma_end(vma) != addr + range) && !async))
+			return -EINVAL;
+		break;
+	case XE_VM_BIND_OP_UNMAP_ALL:
+		if (XE_IOCTL_ERR(xe, list_empty(&bo->ttm.base.gpuva.list)))
+			return -EINVAL;
+		break;
+	default:
+		XE_BUG_ON("NOT POSSIBLE");
+		return -EINVAL;
 	}
-	xe_bo_put(vbo);
 
-	return err;
+	return 0;
 }
 
-struct async_op {
-	struct xe_vma *vma;
-	struct xe_engine *engine;
-	struct xe_bo *bo;
-	struct drm_xe_vm_bind_op bind_op;
-	struct xe_sync_entry *syncs;
-	u32 num_syncs;
-	struct list_head link;
-	struct async_op_fence *fence;
-};
-
-static void async_op_cleanup(struct xe_vm *vm, struct async_op *op)
+static void prep_vma_destroy(struct xe_vm *vm, struct xe_vma *vma,
+			     bool post_commit)
 {
-	while (op->num_syncs--)
-		xe_sync_entry_cleanup(&op->syncs[op->num_syncs]);
-	kfree(op->syncs);
-	xe_bo_put(op->bo);
-	if (op->engine)
-		xe_engine_put(op->engine);
-	xe_vm_put(vm);
-	if (op->fence)
-		dma_fence_put(&op->fence->fence);
-	kfree(op);
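+	/*
+	 * Flag the VMA as destroyed under the userptr notifier lock and, if
+	 * it has already been committed, remove it from the GPUVA tree.
+	 */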
+	down_read(&vm->userptr.notifier_lock);
+	vma->gpuva.flags |= XE_VMA_DESTROYED;
+	up_read(&vm->userptr.notifier_lock);
+	if (post_commit)
+		xe_vm_remove_vma(vm, vma, true);
 }
 
-static struct async_op *next_async_op(struct xe_vm *vm)
+#if IS_ENABLED(CONFIG_DRM_XE_DEBUG_VM)
+static void print_op(struct xe_device *xe, struct drm_gpuva_op *op)
 {
-	return list_first_entry_or_null(&vm->async_ops.pending,
-					struct async_op, link);
-}
+	struct xe_vma *vma;
 
-static void vm_set_async_error(struct xe_vm *vm, int err)
+	switch (op->op) {
+	case DRM_GPUVA_OP_MAP:
+		vm_dbg(&xe->drm, "MAP: addr=0x%016llx, range=0x%016llx",
+		       op->map.va.addr, op->map.va.range);
+		break;
+	case DRM_GPUVA_OP_REMAP:
+		vma = gpuva_to_vma(op->remap.unmap->va);
+		vm_dbg(&xe->drm, "REMAP:UNMAP: addr=0x%016llx, range=0x%016llx, keep=%d",
+		       xe_vma_start(vma), xe_vma_size(vma),
+		       op->remap.unmap->keep ? 1 : 0);
+		if (op->remap.prev)
+			vm_dbg(&xe->drm,
+			       "REMAP:PREV: addr=0x%016llx, range=0x%016llx",
+			       op->remap.prev->va.addr,
+			       op->remap.prev->va.range);
+		if (op->remap.next)
+			vm_dbg(&xe->drm,
+			       "REMAP:NEXT: addr=0x%016llx, range=0x%016llx",
+			       op->remap.next->va.addr,
+			       op->remap.next->va.range);
+		break;
+	case DRM_GPUVA_OP_UNMAP:
+		vma = gpuva_to_vma(op->unmap.va);
+		vm_dbg(&xe->drm, "UNMAP: addr=0x%016llx, range=0x%016llx, keep=%d",
+		       xe_vma_start(vma), xe_vma_size(vma),
+		       op->unmap.keep ? 1 : 0);
+		break;
+	default:
+		XE_BUG_ON("NOT POSSIBLE");
+	}
+}
+#else
+static void print_op(struct xe_device *xe, struct drm_gpuva_op *op)
 {
-	lockdep_assert_held(&vm->lock);
-	vm->async_ops.error = err;
 }
+#endif
 
-static void async_op_work_func(struct work_struct *w)
+/*
+ * Create an operations list from the IOCTL arguments, setting up operation
+ * fields so the parse and commit steps are decoupled from them. This can fail.
+ */
+static struct drm_gpuva_ops *
+vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_bo *bo,
+			 u64 bo_offset_or_userptr, u64 addr, u64 range,
+			 u32 operation, u64 gt_mask, u32 region)
 {
-	struct xe_vm *vm = container_of(w, struct xe_vm, async_ops.work);
-
-	for (;;) {
-		struct async_op *op;
-		int err;
-
-		if (vm->async_ops.error && !xe_vm_is_closed(vm))
-			break;
+	struct drm_gem_object *obj = bo ? &bo->ttm.base : NULL;
+	struct ww_acquire_ctx ww;
+	struct drm_gpuva_ops *ops;
+	struct drm_gpuva_op *__op;
+	struct xe_vma_op *op;
+	int err;
 
-		spin_lock_irq(&vm->async_ops.lock);
-		op = next_async_op(vm);
-		if (op)
-			list_del_init(&op->link);
-		spin_unlock_irq(&vm->async_ops.lock);
+	lockdep_assert_held_write(&vm->lock);
 
-		if (!op)
-			break;
+	vm_dbg(&vm->xe->drm,
+	       "op=%d, addr=0x%016llx, range=0x%016llx, bo_offset_or_userptr=0x%016llx",
+	       VM_BIND_OP(operation), addr, range, bo_offset_or_userptr);
 
-		if (!xe_vm_is_closed(vm)) {
-			bool first, last;
+	switch (VM_BIND_OP(operation)) {
+	case XE_VM_BIND_OP_MAP:
+	case XE_VM_BIND_OP_MAP_USERPTR:
+		ops = drm_gpuva_sm_map_ops_create(&vm->mgr, addr, range,
+						  obj, bo_offset_or_userptr);
+		drm_gpuva_for_each_op(__op, ops) {
+			struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
 
-			down_write(&vm->lock);
-again:
-			first = op->vma->first_munmap_rebind;
-			last = op->vma->last_munmap_rebind;
-#ifdef TEST_VM_ASYNC_OPS_ERROR
-#define FORCE_ASYNC_OP_ERROR	BIT(31)
-			if (!(op->bind_op.op & FORCE_ASYNC_OP_ERROR)) {
-				err = vm_bind_ioctl(vm, op->vma, op->engine,
-						    op->bo, &op->bind_op,
-						    op->syncs, op->num_syncs,
-						    op->fence);
-			} else {
-				err = -ENOMEM;
-				op->bind_op.op &= ~FORCE_ASYNC_OP_ERROR;
-			}
-#else
-			err = vm_bind_ioctl(vm, op->vma, op->engine, op->bo,
-					    &op->bind_op, op->syncs,
-					    op->num_syncs, op->fence);
-#endif
-			/*
-			 * In order for the fencing to work (stall behind
-			 * existing jobs / prevent new jobs from running) all
-			 * the dma-resv slots need to be programmed in a batch
-			 * relative to execs / the rebind worker. The vm->lock
-			 * ensure this.
-			 */
-			if (!err && ((first && VM_BIND_OP(op->bind_op.op) ==
-				      XE_VM_BIND_OP_UNMAP) ||
-				     vm->async_ops.munmap_rebind_inflight)) {
-				if (last) {
-					op->vma->last_munmap_rebind = false;
-					vm->async_ops.munmap_rebind_inflight =
-						false;
-				} else {
-					vm->async_ops.munmap_rebind_inflight =
-						true;
-
-					async_op_cleanup(vm, op);
-
-					spin_lock_irq(&vm->async_ops.lock);
-					op = next_async_op(vm);
-					XE_BUG_ON(!op);
-					list_del_init(&op->link);
-					spin_unlock_irq(&vm->async_ops.lock);
-
-					goto again;
-				}
-			}
-			if (err) {
-				trace_xe_vma_fail(op->vma);
-				drm_warn(&vm->xe->drm, "Async VM op(%d) failed with %d",
-					 VM_BIND_OP(op->bind_op.op),
-					 err);
+			op->gt_mask = gt_mask;
+			op->map.immediate =
+				operation & XE_VM_BIND_FLAG_IMMEDIATE;
+			op->map.read_only =
+				operation & XE_VM_BIND_FLAG_READONLY;
+		}
+		break;
+	case XE_VM_BIND_OP_UNMAP:
+		ops = drm_gpuva_sm_unmap_ops_create(&vm->mgr, addr, range);
+		drm_gpuva_for_each_op(__op, ops) {
+			struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
 
-				spin_lock_irq(&vm->async_ops.lock);
-				list_add(&op->link, &vm->async_ops.pending);
-				spin_unlock_irq(&vm->async_ops.lock);
+			op->gt_mask = gt_mask;
+		}
+		break;
+	case XE_VM_BIND_OP_PREFETCH:
+		ops = drm_gpuva_prefetch_ops_create(&vm->mgr, addr, range);
+		drm_gpuva_for_each_op(__op, ops) {
+			struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
 
-				vm_set_async_error(vm, err);
-				up_write(&vm->lock);
+			op->gt_mask = gt_mask;
+			op->prefetch.region = region;
+		}
+		break;
+	case XE_VM_BIND_OP_UNMAP_ALL:
+		XE_BUG_ON(!bo);
 
-				if (vm->async_ops.error_capture.addr)
-					vm_error_capture(vm, err,
-							 op->bind_op.op,
-							 op->bind_op.addr,
-							 op->bind_op.range);
-				break;
-			}
-			up_write(&vm->lock);
-		} else {
-			trace_xe_vma_flush(op->vma);
+		err = xe_bo_lock(bo, &ww, 0, true);
+		if (err)
+			return ERR_PTR(err);
+		ops = drm_gpuva_gem_unmap_ops_create(&vm->mgr, obj);
+		xe_bo_unlock(bo, &ww);
 
-			if (is_unmap_op(op->bind_op.op)) {
-				down_write(&vm->lock);
-				xe_vma_destroy_unlocked(op->vma);
-				up_write(&vm->lock);
-			}
+		drm_gpuva_for_each_op(__op, ops) {
+			struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
 
-			if (op->fence && !test_bit(DMA_FENCE_FLAG_SIGNALED_BIT,
-						   &op->fence->fence.flags)) {
-				if (!xe_vm_no_dma_fences(vm)) {
-					op->fence->started = true;
-					smp_wmb();
-					wake_up_all(&op->fence->wq);
-				}
-				dma_fence_signal(&op->fence->fence);
-			}
+			op->gt_mask = gt_mask;
 		}
+		break;
+	default:
+		XE_BUG_ON("NOT POSSIBLE");
+		ops = ERR_PTR(-EINVAL);
+	}
 
-		async_op_cleanup(vm, op);
+#ifdef TEST_VM_ASYNC_OPS_ERROR
+	if (operation & FORCE_ASYNC_OP_ERROR) {
+		op = list_first_entry_or_null(&ops->list, struct xe_vma_op,
+					      base.entry);
+		if (op)
+			op->inject_error = true;
 	}
+#endif
+
+	if (!IS_ERR(ops))
+		drm_gpuva_for_each_op(__op, ops)
+			print_op(vm->xe, __op);
+
+	return ops;
 }
 
-static int __vm_bind_ioctl_async(struct xe_vm *vm, struct xe_vma *vma,
-				 struct xe_engine *e, struct xe_bo *bo,
-				 struct drm_xe_vm_bind_op *bind_op,
-				 struct xe_sync_entry *syncs, u32 num_syncs)
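+/*
+ * Create an xe_vma for a GPUVA MAP (or REMAP prev/next) op: BO-backed VMAs
+ * are created under the BO lock, userptr VMAs have their pages pinned, and
+ * external BOs are tracked for preempt fences.
+ */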
+static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
+			      u64 gt_mask, bool read_only)
 {
-	struct async_op *op;
-	bool installed = false;
-	u64 seqno;
-	int i;
+	struct xe_bo *bo = op->gem.obj ? gem_to_xe_bo(op->gem.obj) : NULL;
+	struct xe_vma *vma;
+	struct ww_acquire_ctx ww;
+	int err;
 
-	lockdep_assert_held(&vm->lock);
+	lockdep_assert_held_write(&vm->lock);
 
-	op = kmalloc(sizeof(*op), GFP_KERNEL);
-	if (!op) {
-		return -ENOMEM;
-	}
+	if (bo) {
+		err = xe_bo_lock(bo, &ww, 0, true);
+		if (err)
+			return ERR_PTR(err);
+	}
+	vma = xe_vma_create(vm, bo, op->gem.offset,
+			    op->va.addr, op->va.addr +
+			    op->va.range - 1, read_only,
+			    gt_mask);
+	if (bo)
+		xe_bo_unlock(bo, &ww);
 
-	if (num_syncs) {
-		op->fence = kmalloc(sizeof(*op->fence), GFP_KERNEL);
-		if (!op->fence) {
-			kfree(op);
-			return -ENOMEM;
+	if (xe_vma_is_userptr(vma)) {
+		err = xe_vma_userptr_pin_pages(vma);
+		if (err) {
+			xe_vma_destroy(vma, NULL);
+			return ERR_PTR(err);
 		}
+	} else if (!bo->vm) {
+		vm_insert_extobj(vm, vma);
+		err = add_preempt_fences(vm, bo);
+		if (err) {
+			xe_vma_destroy(vma, NULL);
+			return ERR_PTR(err);
+		}
+	}
+
+	return vma;
+}
+
+/*
+ * Parse the operations list and create any resources needed for the operations
+ * prior to fully committing to them. This step can fail.
+ */
+static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct xe_engine *e,
+				   struct drm_gpuva_ops **ops, int num_ops_list,
+				   struct xe_sync_entry *syncs, u32 num_syncs,
+				   struct list_head *ops_list, bool async)
+{
+	struct xe_vma_op *last_op = NULL;
+	struct list_head *async_list = NULL;
+	struct async_op_fence *fence = NULL;
+	int err, i;
+
+	lockdep_assert_held_write(&vm->lock);
+	XE_BUG_ON(num_ops_list > 1 && !async);
+
+	if (num_syncs && async) {
+		u64 seqno;
+
+		fence = kmalloc(sizeof(*fence), GFP_KERNEL);
+		if (!fence)
+			return -ENOMEM;
 
 		seqno = e ? ++e->bind.fence_seqno : ++vm->async_ops.fence.seqno;
-		dma_fence_init(&op->fence->fence, &async_op_fence_ops,
+		dma_fence_init(&fence->fence, &async_op_fence_ops,
 			       &vm->async_ops.lock, e ? e->bind.fence_ctx :
 			       vm->async_ops.fence.context, seqno);
 
 		if (!xe_vm_no_dma_fences(vm)) {
-			op->fence->vm = vm;
-			op->fence->started = false;
-			init_waitqueue_head(&op->fence->wq);
+			fence->vm = vm;
+			fence->started = false;
+			init_waitqueue_head(&fence->wq);
 		}
-	} else {
-		op->fence = NULL;
 	}
-	op->vma = vma;
-	op->engine = e;
-	op->bo = bo;
-	op->bind_op = *bind_op;
-	op->syncs = syncs;
-	op->num_syncs = num_syncs;
-	INIT_LIST_HEAD(&op->link);
-
-	for (i = 0; i < num_syncs; i++)
-		installed |= xe_sync_entry_signal(&syncs[i], NULL,
-						  &op->fence->fence);
 
-	if (!installed && op->fence)
-		dma_fence_signal(&op->fence->fence);
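+	/*
+	 * Walk every GPUVA op in every list: allocate VMAs for MAP and REMAP
+	 * prev/next ops, record the ranges REMAP/UNMAP touch, and chain the
+	 * ops on ops_list for the commit and execute steps.
+	 */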
+	for (i = 0; i < num_ops_list; ++i) {
+		struct drm_gpuva_ops *__ops = ops[i];
+		struct drm_gpuva_op *__op;
 
-	spin_lock_irq(&vm->async_ops.lock);
-	list_add_tail(&op->link, &vm->async_ops.pending);
-	spin_unlock_irq(&vm->async_ops.lock);
+		drm_gpuva_for_each_op(__op, __ops) {
+			struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
+			bool first = !async_list;
 
-	if (!vm->async_ops.error)
-		queue_work(system_unbound_wq, &vm->async_ops.work);
+			XE_BUG_ON(!first && !async);
 
-	return 0;
-}
+			INIT_LIST_HEAD(&op->link);
+			if (first)
+				async_list = ops_list;
+			list_add_tail(&op->link, async_list);
 
-static int vm_bind_ioctl_async(struct xe_vm *vm, struct xe_vma *vma,
-			       struct xe_engine *e, struct xe_bo *bo,
-			       struct drm_xe_vm_bind_op *bind_op,
-			       struct xe_sync_entry *syncs, u32 num_syncs)
-{
-	struct xe_vma *__vma, *next;
-	struct list_head rebind_list;
-	struct xe_sync_entry *in_syncs = NULL, *out_syncs = NULL;
-	u32 num_in_syncs = 0, num_out_syncs = 0;
-	bool first = true, last;
-	int err;
-	int i;
+			if (first) {
+				op->flags |= XE_VMA_OP_FIRST;
+				op->num_syncs = num_syncs;
+				op->syncs = syncs;
+			}
 
-	lockdep_assert_held(&vm->lock);
+			op->engine = e;
 
-	/* Not a linked list of unbinds + rebinds, easy */
-	if (list_empty(&vma->unbind_link))
-		return __vm_bind_ioctl_async(vm, vma, e, bo, bind_op,
-					     syncs, num_syncs);
+			switch (op->base.op) {
+			case DRM_GPUVA_OP_MAP:
+			{
+				struct xe_vma *vma;
 
-	/*
-	 * Linked list of unbinds + rebinds, decompose syncs into 'in / out'
-	 * passing the 'in' to the first operation and 'out' to the last. Also
-	 * the reference counting is a little tricky, increment the VM / bind
-	 * engine ref count on all but the last operation and increment the BOs
-	 * ref count on each rebind.
-	 */
+				vma = new_vma(vm, &op->base.map,
+					      op->gt_mask, op->map.read_only);
+				if (IS_ERR(vma)) {
+					err = PTR_ERR(vma);
+					goto free_fence;
+				}
 
-	XE_BUG_ON(VM_BIND_OP(bind_op->op) != XE_VM_BIND_OP_UNMAP &&
-		  VM_BIND_OP(bind_op->op) != XE_VM_BIND_OP_UNMAP_ALL &&
-		  VM_BIND_OP(bind_op->op) != XE_VM_BIND_OP_PREFETCH);
+				op->map.vma = vma;
+				break;
+			}
+			case DRM_GPUVA_OP_REMAP:
+				if (op->base.remap.prev) {
+					struct xe_vma *vma;
+					bool read_only =
+						op->base.remap.unmap->va->flags &
+						XE_VMA_READ_ONLY;
+
+					vma = new_vma(vm, op->base.remap.prev,
+						      op->gt_mask, read_only);
+					if (IS_ERR(vma)) {
+						err = PTR_ERR(vma);
+						goto free_fence;
+					}
+
+					op->remap.prev = vma;
+				}
 
-	/* Decompose syncs */
-	if (num_syncs) {
-		in_syncs = kmalloc(sizeof(*in_syncs) * num_syncs, GFP_KERNEL);
-		out_syncs = kmalloc(sizeof(*out_syncs) * num_syncs, GFP_KERNEL);
-		if (!in_syncs || !out_syncs) {
-			err = -ENOMEM;
-			goto out_error;
-		}
+				if (op->base.remap.next) {
+					struct xe_vma *vma;
+					bool read_only =
+						op->base.remap.unmap->va->flags &
+						XE_VMA_READ_ONLY;
 
-		for (i = 0; i < num_syncs; ++i) {
-			bool signal = syncs[i].flags & DRM_XE_SYNC_SIGNAL;
+					vma = new_vma(vm, op->base.remap.next,
+						      op->gt_mask, read_only);
+					if (IS_ERR(vma)) {
+						err = PTR_ERR(vma);
+						goto free_fence;
+					}
 
-			if (signal)
-				out_syncs[num_out_syncs++] = syncs[i];
-			else
-				in_syncs[num_in_syncs++] = syncs[i];
-		}
-	}
+					op->remap.next = vma;
+				}
 
-	/* Do unbinds + move rebinds to new list */
-	INIT_LIST_HEAD(&rebind_list);
-	list_for_each_entry_safe(__vma, next, &vma->unbind_link, unbind_link) {
-		if (__vma->destroyed ||
-		    VM_BIND_OP(bind_op->op) == XE_VM_BIND_OP_PREFETCH) {
-			list_del_init(&__vma->unbind_link);
-			xe_bo_get(bo);
-			err = __vm_bind_ioctl_async(xe_vm_get(vm), __vma,
-						    e ? xe_engine_get(e) : NULL,
-						    bo, bind_op, first ?
-						    in_syncs : NULL,
-						    first ? num_in_syncs : 0);
-			if (err) {
-				xe_bo_put(bo);
-				xe_vm_put(vm);
-				if (e)
-					xe_engine_put(e);
-				goto out_error;
+				/* XXX: Support not doing remaps */
+				op->remap.start =
+					xe_vma_start(gpuva_to_vma(op->base.remap.unmap->va));
+				op->remap.range =
+					xe_vma_size(gpuva_to_vma(op->base.remap.unmap->va));
+				break;
+			case DRM_GPUVA_OP_UNMAP:
+				op->unmap.start =
+					xe_vma_start(gpuva_to_vma(op->base.unmap.va));
+				op->unmap.range =
+					xe_vma_size(gpuva_to_vma(op->base.unmap.va));
+				break;
+			case DRM_GPUVA_OP_PREFETCH:
+				/* Nothing to do */
+				break;
+			default:
+				XE_BUG_ON("NOT POSSIBLE");
 			}
-			in_syncs = NULL;
-			first = false;
-		} else {
-			list_move_tail(&__vma->unbind_link, &rebind_list);
-		}
-	}
-	last = list_empty(&rebind_list);
-	if (!last) {
-		xe_vm_get(vm);
-		if (e)
-			xe_engine_get(e);
-	}
-	err = __vm_bind_ioctl_async(vm, vma, e,
-				    bo, bind_op,
-				    first ? in_syncs :
-				    last ? out_syncs : NULL,
-				    first ? num_in_syncs :
-				    last ? num_out_syncs : 0);
-	if (err) {
-		if (!last) {
-			xe_vm_put(vm);
-			if (e)
-				xe_engine_put(e);
-		}
-		goto out_error;
-	}
-	in_syncs = NULL;
 
-	/* Do rebinds */
-	list_for_each_entry_safe(__vma, next, &rebind_list, unbind_link) {
-		list_del_init(&__vma->unbind_link);
-		last = list_empty(&rebind_list);
-
-		if (xe_vma_is_userptr(__vma)) {
-			bind_op->op = XE_VM_BIND_FLAG_ASYNC |
-				XE_VM_BIND_OP_MAP_USERPTR;
-		} else {
-			bind_op->op = XE_VM_BIND_FLAG_ASYNC |
-				XE_VM_BIND_OP_MAP;
-			xe_bo_get(__vma->bo);
-		}
-
-		if (!last) {
-			xe_vm_get(vm);
-			if (e)
-				xe_engine_get(e);
+			last_op = op;
 		}
 
-		err = __vm_bind_ioctl_async(vm, __vma, e,
-					    __vma->bo, bind_op, last ?
-					    out_syncs : NULL,
-					    last ? num_out_syncs : 0);
-		if (err) {
-			if (!last) {
-				xe_vm_put(vm);
-				if (e)
-					xe_engine_put(e);
-			}
-			goto out_error;
-		}
+		last_op->ops = __ops;
 	}
 
-	kfree(syncs);
-	return 0;
+	XE_BUG_ON(!last_op);	/* FIXME: This is not an error, handle it */
 
-out_error:
-	kfree(in_syncs);
-	kfree(out_syncs);
-	kfree(syncs);
+	last_op->flags |= XE_VMA_OP_LAST;
+	last_op->num_syncs = num_syncs;
+	last_op->syncs = syncs;
+	last_op->fence = fence;
+
+	return 0;
 
+free_fence:
+	kfree(fence);
 	return err;
 }
 
-static bool bo_has_vm_references(struct xe_bo *bo, struct xe_vm *vm,
-				 struct xe_vma *ignore)
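+/*
+ * Commit an operation's VM-visible bookkeeping: insert newly created VMAs
+ * and mark the VMAs a REMAP/UNMAP replaces as destroyed.
+ */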
+static void xe_vma_op_commit(struct xe_vm *vm, struct xe_vma_op *op)
 {
-	struct ww_acquire_ctx ww;
-	struct xe_vma *vma;
-	bool ret = false;
+	lockdep_assert_held_write(&vm->lock);
 
-	xe_bo_lock(bo, &ww, 0, false);
-	list_for_each_entry(vma, &bo->vmas, bo_link) {
-		if (vma != ignore && vma->vm == vm && !vma->destroyed) {
-			ret = true;
-			break;
-		}
+	switch (op->base.op) {
+	case DRM_GPUVA_OP_MAP:
+		xe_vm_insert_vma(vm, op->map.vma);
+		break;
+	case DRM_GPUVA_OP_REMAP:
+		prep_vma_destroy(vm, gpuva_to_vma(op->base.remap.unmap->va),
+				 true);
+		if (op->remap.prev)
+			xe_vm_insert_vma(vm, op->remap.prev);
+		if (op->remap.next)
+			xe_vm_insert_vma(vm, op->remap.next);
+		break;
+	case DRM_GPUVA_OP_UNMAP:
+		prep_vma_destroy(vm, gpuva_to_vma(op->base.unmap.va), true);
+		break;
+	case DRM_GPUVA_OP_PREFETCH:
+		/* Nothing to do */
+		break;
+	default:
+		XE_BUG_ON("NOT POSSIBLE");
 	}
-	xe_bo_unlock(bo, &ww);
-
-	return ret;
 }
 
-static int vm_insert_extobj(struct xe_vm *vm, struct xe_vma *vma)
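+/*
+ * Execute a single VMA operation with the VM (and the VMA's BO, if any)
+ * reserved, retrying the userptr page pin on -EAGAIN.
+ */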
+static int __xe_vma_op_execute(struct xe_vm *vm, struct xe_vma *vma,
+			       struct xe_vma_op *op)
 {
-	struct xe_bo *bo = vma->bo;
+	LIST_HEAD(objs);
+	LIST_HEAD(dups);
+	struct ttm_validate_buffer tv_bo, tv_vm;
+	struct ww_acquire_ctx ww;
+	struct xe_bo *vbo;
+	int err;
 
 	lockdep_assert_held_write(&vm->lock);
 
-	if (bo_has_vm_references(bo, vm, vma))
-		return 0;
+	xe_vm_tv_populate(vm, &tv_vm);
+	list_add_tail(&tv_vm.head, &objs);
+	vbo = xe_vma_bo(vma);
+	if (vbo) {
+		/*
+		 * An unbind can drop the last reference to the BO and
+		 * the BO is needed for ttm_eu_backoff_reservation so
+		 * take a reference here.
+		 */
+		xe_bo_get(vbo);
 
-	list_add(&vma->extobj.link, &vm->extobj.list);
-	vm->extobj.entries++;
+		tv_bo.bo = &vbo->ttm;
+		tv_bo.num_shared = 1;
+		list_add(&tv_bo.head, &objs);
+	}
 
-	return 0;
-}
+again:
+	err = ttm_eu_reserve_buffers(&ww, &objs, true, &dups);
+	if (err) {
+		xe_bo_put(vbo);
+		return err;
+	}
 
-static int __vm_bind_ioctl_lookup_vma(struct xe_vm *vm, struct xe_bo *bo,
-				      u64 addr, u64 range, u32 op)
-{
-	struct xe_device *xe = vm->xe;
-	struct xe_vma *vma, lookup;
-	bool async = !!(op & XE_VM_BIND_FLAG_ASYNC);
+	xe_vm_assert_held(vm);
+	xe_bo_assert_held(xe_vma_bo(vma));
+
+	switch (op->base.op) {
+	case DRM_GPUVA_OP_MAP:
+		err = xe_vm_bind(vm, vma, op->engine, xe_vma_bo(vma),
+				 op->syncs, op->num_syncs, op->fence,
+				 op->map.immediate || !xe_vm_in_fault_mode(vm),
+				 op->flags & XE_VMA_OP_FIRST,
+				 op->flags & XE_VMA_OP_LAST);
+		break;
+	case DRM_GPUVA_OP_REMAP:
+	{
+		bool prev = !!op->remap.prev;
+		bool next = !!op->remap.next;
+
+		if (!op->remap.unmap_done) {
+			vm->async_ops.munmap_rebind_inflight = true;
+			if (prev || next)
+				vma->gpuva.flags |= XE_VMA_FIRST_REBIND;
+			err = xe_vm_unbind(vm, vma, op->engine, op->syncs,
+					   op->num_syncs,
+					   !prev && !next ? op->fence : NULL,
+					   op->flags & XE_VMA_OP_FIRST,
+					   op->flags & XE_VMA_OP_LAST && !prev &&
+					   !next);
+			if (err)
+				break;
+			op->remap.unmap_done = true;
+		}
 
-	lockdep_assert_held(&vm->lock);
+		if (prev) {
+			op->remap.prev->gpuva.flags |= XE_VMA_LAST_REBIND;
+			err = xe_vm_bind(vm, op->remap.prev, op->engine,
+					 xe_vma_bo(op->remap.prev), op->syncs,
+					 op->num_syncs,
+					 !next ? op->fence : NULL, true, false,
+					 op->flags & XE_VMA_OP_LAST && !next);
+			op->remap.prev->gpuva.flags &= ~XE_VMA_LAST_REBIND;
+			if (err)
+				break;
+			op->remap.prev = NULL;
+		}
 
-	lookup.start = addr;
-	lookup.end = addr + range - 1;
+		if (next) {
+			op->remap.next->gpuva.flags |= XE_VMA_LAST_REBIND;
+			err = xe_vm_bind(vm, op->remap.next, op->engine,
+					 xe_vma_bo(op->remap.next),
+					 op->syncs, op->num_syncs,
+					 op->fence, true, false,
+					 op->flags & XE_VMA_OP_LAST);
+			op->remap.next->gpuva.flags &= ~XE_VMA_LAST_REBIND;
+			if (err)
+				break;
+			op->remap.next = NULL;
+		}
+		vm->async_ops.munmap_rebind_inflight = false;
 
-	switch (VM_BIND_OP(op)) {
-	case XE_VM_BIND_OP_MAP:
-	case XE_VM_BIND_OP_MAP_USERPTR:
-		vma = xe_vm_find_overlapping_vma(vm, &lookup);
-		if (XE_IOCTL_ERR(xe, vma))
-			return -EBUSY;
 		break;
-	case XE_VM_BIND_OP_UNMAP:
-	case XE_VM_BIND_OP_PREFETCH:
-		vma = xe_vm_find_overlapping_vma(vm, &lookup);
-		if (XE_IOCTL_ERR(xe, !vma) ||
-		    XE_IOCTL_ERR(xe, (vma->start != addr ||
-				 vma->end != addr + range - 1) && !async))
-			return -EINVAL;
+	}
+	case DRM_GPUVA_OP_UNMAP:
+		err = xe_vm_unbind(vm, vma, op->engine, op->syncs,
+				   op->num_syncs, op->fence,
+				   op->flags & XE_VMA_OP_FIRST,
+				   op->flags & XE_VMA_OP_LAST);
 		break;
-	case XE_VM_BIND_OP_UNMAP_ALL:
+	case DRM_GPUVA_OP_PREFETCH:
+		err = xe_vm_prefetch(vm, vma, op->engine, op->prefetch.region,
+				     op->syncs, op->num_syncs, op->fence,
+				     op->flags & XE_VMA_OP_FIRST,
+				     op->flags & XE_VMA_OP_LAST);
 		break;
 	default:
 		XE_BUG_ON("NOT POSSIBLE");
-		return -EINVAL;
 	}
 
-	return 0;
-}
-
-static void prep_vma_destroy(struct xe_vm *vm, struct xe_vma *vma)
-{
-	down_read(&vm->userptr.notifier_lock);
-	vma->destroyed = true;
-	up_read(&vm->userptr.notifier_lock);
-	xe_vm_remove_vma(vm, vma);
-}
-
-static int prep_replacement_vma(struct xe_vm *vm, struct xe_vma *vma)
-{
-	int err;
-
-	if (vma->bo && !vma->bo->vm) {
-		vm_insert_extobj(vm, vma);
-		err = add_preempt_fences(vm, vma->bo);
-		if (err)
-			return err;
+	ttm_eu_backoff_reservation(&ww, &objs);
+	if (err == -EAGAIN && xe_vma_is_userptr(vma)) {
+		lockdep_assert_held_write(&vm->lock);
+		err = xe_vma_userptr_pin_pages(vma);
+		if (!err)
+			goto again;
 	}
+	xe_bo_put(vbo);
 
-	return 0;
+	if (err)
+		trace_xe_vma_fail(vma);
+
+	return err;
 }
 
-/*
- * Find all overlapping VMAs in lookup range and add to a list in the returned
- * VMA, all of VMAs found will be unbound. Also possibly add 2 new VMAs that
- * need to be bound if first / last VMAs are not fully unbound. This is akin to
- * how munmap works.
- */
-static struct xe_vma *vm_unbind_lookup_vmas(struct xe_vm *vm,
-					    struct xe_vma *lookup)
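+/* Resolve the VMA an operation targets and execute it. */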
+static int xe_vma_op_execute(struct xe_vm *vm, struct xe_vma_op *op)
 {
-	struct xe_vma *vma = xe_vm_find_overlapping_vma(vm, lookup);
-	struct rb_node *node;
-	struct xe_vma *first = vma, *last = vma, *new_first = NULL,
-		      *new_last = NULL, *__vma, *next;
-	int err = 0;
-	bool first_munmap_rebind = false;
+	int ret = 0;
 
-	lockdep_assert_held(&vm->lock);
-	XE_BUG_ON(!vma);
-
-	node = &vma->vm_node;
-	while ((node = rb_next(node))) {
-		if (!xe_vma_cmp_vma_cb(lookup, node)) {
-			__vma = to_xe_vma(node);
-			list_add_tail(&__vma->unbind_link, &vma->unbind_link);
-			last = __vma;
-		} else {
-			break;
-		}
-	}
+	lockdep_assert_held_write(&vm->lock);
 
-	node = &vma->vm_node;
-	while ((node = rb_prev(node))) {
-		if (!xe_vma_cmp_vma_cb(lookup, node)) {
-			__vma = to_xe_vma(node);
-			list_add(&__vma->unbind_link, &vma->unbind_link);
-			first = __vma;
-		} else {
-			break;
-		}
+#ifdef TEST_VM_ASYNC_OPS_ERROR
+	if (op->inject_error) {
+		op->inject_error = false;
+		return -ENOMEM;
 	}
+#endif
 
-	if (first->start != lookup->start) {
-		struct ww_acquire_ctx ww;
+	switch (op->base.op) {
+	case DRM_GPUVA_OP_MAP:
+		ret = __xe_vma_op_execute(vm, op->map.vma, op);
+		break;
+	case DRM_GPUVA_OP_REMAP:
+	{
+		struct xe_vma *vma;
+
+		if (!op->remap.unmap_done)
+			vma = gpuva_to_vma(op->base.remap.unmap->va);
+		else if (op->remap.prev)
+			vma = op->remap.prev;
+		else
+			vma = op->remap.next;
 
-		if (first->bo)
-			err = xe_bo_lock(first->bo, &ww, 0, true);
-		if (err)
-			goto unwind;
-		new_first = xe_vma_create(first->vm, first->bo,
-					  first->bo ? first->bo_offset :
-					  first->userptr.ptr,
-					  first->start,
-					  lookup->start - 1,
-					  (first->pte_flags & PTE_READ_ONLY),
-					  first->gt_mask);
-		if (first->bo)
-			xe_bo_unlock(first->bo, &ww);
-		if (!new_first) {
-			err = -ENOMEM;
-			goto unwind;
-		}
-		if (!first->bo) {
-			err = xe_vma_userptr_pin_pages(new_first);
-			if (err)
-				goto unwind;
-		}
-		err = prep_replacement_vma(vm, new_first);
-		if (err)
-			goto unwind;
+		ret = __xe_vma_op_execute(vm, vma, op);
+		break;
+	}
+	case DRM_GPUVA_OP_UNMAP:
+		ret = __xe_vma_op_execute(vm, gpuva_to_vma(op->base.unmap.va),
+					  op);
+		break;
+	case DRM_GPUVA_OP_PREFETCH:
+		ret = __xe_vma_op_execute(vm,
+					  gpuva_to_vma(op->base.prefetch.va),
+					  op);
+		break;
+	default:
+		XE_BUG_ON("NOT POSSIBLE");
 	}
 
-	if (last->end != lookup->end) {
-		struct ww_acquire_ctx ww;
-		u64 chunk = lookup->end + 1 - last->start;
+	return ret;
+}
 
-		if (last->bo)
-			err = xe_bo_lock(last->bo, &ww, 0, true);
-		if (err)
-			goto unwind;
-		new_last = xe_vma_create(last->vm, last->bo,
-					 last->bo ? last->bo_offset + chunk :
-					 last->userptr.ptr + chunk,
-					 last->start + chunk,
-					 last->end,
-					 (last->pte_flags & PTE_READ_ONLY),
-					 last->gt_mask);
-		if (last->bo)
-			xe_bo_unlock(last->bo, &ww);
-		if (!new_last) {
-			err = -ENOMEM;
-			goto unwind;
-		}
-		if (!last->bo) {
-			err = xe_vma_userptr_pin_pages(new_last);
-			if (err)
-				goto unwind;
-		}
-		err = prep_replacement_vma(vm, new_last);
-		if (err)
-			goto unwind;
-	}
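+/*
+ * Release an operation's resources; the last op in a list also drops the
+ * syncs, engine, fence, and VM references.
+ */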
+static void xe_vma_op_cleanup(struct xe_vm *vm, struct xe_vma_op *op)
+{
+	bool last = op->flags & XE_VMA_OP_LAST;
 
-	prep_vma_destroy(vm, vma);
-	if (list_empty(&vma->unbind_link) && (new_first || new_last))
-		vma->first_munmap_rebind = true;
-	list_for_each_entry(__vma, &vma->unbind_link, unbind_link) {
-		if ((new_first || new_last) && !first_munmap_rebind) {
-			__vma->first_munmap_rebind = true;
-			first_munmap_rebind = true;
-		}
-		prep_vma_destroy(vm, __vma);
-	}
-	if (new_first) {
-		xe_vm_insert_vma(vm, new_first);
-		list_add_tail(&new_first->unbind_link, &vma->unbind_link);
-		if (!new_last)
-			new_first->last_munmap_rebind = true;
+	if (last) {
+		while (op->num_syncs--)
+			xe_sync_entry_cleanup(&op->syncs[op->num_syncs]);
+		kfree(op->syncs);
+		if (op->engine)
+			xe_engine_put(op->engine);
+		if (op->fence)
+			dma_fence_put(&op->fence->fence);
 	}
-	if (new_last) {
-		xe_vm_insert_vma(vm, new_last);
-		list_add_tail(&new_last->unbind_link, &vma->unbind_link);
-		new_last->last_munmap_rebind = true;
+	if (!list_empty(&op->link)) {
+		spin_lock_irq(&vm->async_ops.lock);
+		list_del(&op->link);
+		spin_unlock_irq(&vm->async_ops.lock);
 	}
+	if (op->ops)
+		drm_gpuva_ops_free(&vm->mgr, op->ops);
+	if (last)
+		xe_vm_put(vm);
+}
 
-	return vma;
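+/*
+ * Unwind the changes made while parsing/committing an operation when a
+ * later step fails.
+ */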
+static void xe_vma_op_unwind(struct xe_vm *vm, struct xe_vma_op *op,
+			     bool post_commit)
+{
+	lockdep_assert_held_write(&vm->lock);
+
+	switch (op->base.op) {
+	case DRM_GPUVA_OP_MAP:
+		prep_vma_destroy(vm, op->map.vma, post_commit);
+		xe_vma_destroy(op->map.vma, NULL);
+		break;
+	case DRM_GPUVA_OP_UNMAP:
+	{
+		struct xe_vma *vma = gpuva_to_vma(op->base.unmap.va);
 
-unwind:
-	list_for_each_entry_safe(__vma, next, &vma->unbind_link, unbind_link)
-		list_del_init(&__vma->unbind_link);
-	if (new_last) {
-		prep_vma_destroy(vm, new_last);
-		xe_vma_destroy_unlocked(new_last);
+		down_read(&vm->userptr.notifier_lock);
+		vma->gpuva.flags &= ~XE_VMA_DESTROYED;
+		up_read(&vm->userptr.notifier_lock);
+		if (post_commit)
+			xe_vm_insert_vma(vm, vma);
+		break;
 	}
-	if (new_first) {
-		prep_vma_destroy(vm, new_first);
-		xe_vma_destroy_unlocked(new_first);
+	case DRM_GPUVA_OP_PREFETCH:
+	case DRM_GPUVA_OP_REMAP:
+		/* Nothing to do */
+		break;
+	default:
+		XE_BUG_ON("NOT POSSIBLE");
 	}
+}
 
-	return ERR_PTR(err);
+static struct xe_vma_op *next_vma_op(struct xe_vm *vm)
+{
+	return list_first_entry_or_null(&vm->async_ops.pending,
+					struct xe_vma_op, link);
 }
 
-/*
- * Similar to vm_unbind_lookup_vmas, find all VMAs in lookup range to prefetch
- */
-static struct xe_vma *vm_prefetch_lookup_vmas(struct xe_vm *vm,
-					      struct xe_vma *lookup,
-					      u32 region)
+static void xe_vma_op_work_func(struct work_struct *w)
 {
-	struct xe_vma *vma = xe_vm_find_overlapping_vma(vm, lookup), *__vma,
-		      *next;
-	struct rb_node *node;
+	struct xe_vm *vm = container_of(w, struct xe_vm, async_ops.work);
 
-	if (!xe_vma_is_userptr(vma)) {
-		if (!xe_bo_can_migrate(vma->bo, region_to_mem_type[region]))
-			return ERR_PTR(-EINVAL);
-	}
+	for (;;) {
+		struct xe_vma_op *op;
+		int err;
 
-	node = &vma->vm_node;
-	while ((node = rb_next(node))) {
-		if (!xe_vma_cmp_vma_cb(lookup, node)) {
-			__vma = to_xe_vma(node);
-			if (!xe_vma_is_userptr(__vma)) {
-				if (!xe_bo_can_migrate(__vma->bo, region_to_mem_type[region]))
-					goto flush_list;
-			}
-			list_add_tail(&__vma->unbind_link, &vma->unbind_link);
-		} else {
+		if (vm->async_ops.error && !xe_vm_is_closed(vm))
 			break;
-		}
-	}
 
-	node = &vma->vm_node;
-	while ((node = rb_prev(node))) {
-		if (!xe_vma_cmp_vma_cb(lookup, node)) {
-			__vma = to_xe_vma(node);
-			if (!xe_vma_is_userptr(__vma)) {
-				if (!xe_bo_can_migrate(__vma->bo, region_to_mem_type[region]))
-					goto flush_list;
-			}
-			list_add(&__vma->unbind_link, &vma->unbind_link);
-		} else {
+		spin_lock_irq(&vm->async_ops.lock);
+		op = next_vma_op(vm);
+		spin_unlock_irq(&vm->async_ops.lock);
+
+		if (!op)
 			break;
-		}
-	}
 
-	return vma;
+		if (!xe_vm_is_closed(vm)) {
+			down_write(&vm->lock);
+			err = xe_vma_op_execute(vm, op);
+			if (err) {
+				drm_warn(&vm->xe->drm, "Async VM op(%d) failed with %d",
+					 op->base.op, err);
 
-flush_list:
-	list_for_each_entry_safe(__vma, next, &vma->unbind_link,
-				 unbind_link)
-		list_del_init(&__vma->unbind_link);
+				vm_set_async_error(vm, err);
+				up_write(&vm->lock);
 
-	return ERR_PTR(-EINVAL);
-}
+				if (vm->async_ops.error_capture.addr)
+					vm_error_capture(vm, err, 0, 0, 0);
+				break;
+			}
+			up_write(&vm->lock);
+		} else {
+			struct xe_vma *vma;
 
-static struct xe_vma *vm_unbind_all_lookup_vmas(struct xe_vm *vm,
-						struct xe_bo *bo)
-{
-	struct xe_vma *first = NULL, *vma;
+			switch (op->base.op) {
+			case DRM_GPUVA_OP_REMAP:
+				vma = gpuva_to_vma(op->base.remap.unmap->va);
+				trace_xe_vma_flush(vma);
 
-	lockdep_assert_held(&vm->lock);
-	xe_bo_assert_held(bo);
+				down_write(&vm->lock);
+				xe_vma_destroy_unlocked(vma);
+				up_write(&vm->lock);
+				break;
+			case DRM_GPUVA_OP_UNMAP:
+				vma = gpuva_to_vma(op->base.unmap.va);
+				trace_xe_vma_flush(vma);
 
-	list_for_each_entry(vma, &bo->vmas, bo_link) {
-		if (vma->vm != vm)
-			continue;
+				down_write(&vm->lock);
+				xe_vma_destroy_unlocked(vma);
+				up_write(&vm->lock);
+				break;
+			default:
+				/* Nothing to do */
+				break;
+			}
 
-		prep_vma_destroy(vm, vma);
-		if (!first)
-			first = vma;
-		else
-			list_add_tail(&vma->unbind_link, &first->unbind_link);
-	}
+			if (op->fence && !test_bit(DMA_FENCE_FLAG_SIGNALED_BIT,
+						   &op->fence->fence.flags)) {
+				if (!xe_vm_no_dma_fences(vm)) {
+					op->fence->started = true;
+					smp_wmb();
+					wake_up_all(&op->fence->wq);
+				}
+				dma_fence_signal(&op->fence->fence);
+			}
+		}
 
-	return first;
+		xe_vma_op_cleanup(vm, op);
+	}
 }
 
-static struct xe_vma *vm_bind_ioctl_lookup_vma(struct xe_vm *vm,
-					       struct xe_bo *bo,
-					       u64 bo_offset_or_userptr,
-					       u64 addr, u64 range, u32 op,
-					       u64 gt_mask, u32 region)
+/*
+ * Commit the operations list. This step cannot fail in async mode, but can
+ * fail if the bind operation fails in sync mode.
+ */
+static int vm_bind_ioctl_ops_commit(struct xe_vm *vm,
+				    struct list_head *ops_list, bool async)
 {
-	struct ww_acquire_ctx ww;
-	struct xe_vma *vma, lookup;
-	int err;
-
-	lockdep_assert_held(&vm->lock);
+	struct xe_vma_op *op, *last_op;
+	int err = 0;
 
-	lookup.start = addr;
-	lookup.end = addr + range - 1;
+	lockdep_assert_held_write(&vm->lock);
 
-	switch (VM_BIND_OP(op)) {
-	case XE_VM_BIND_OP_MAP:
-		XE_BUG_ON(!bo);
+	list_for_each_entry(op, ops_list, link) {
+		last_op = op;
+		xe_vma_op_commit(vm, op);
+	}
 
-		err = xe_bo_lock(bo, &ww, 0, true);
+	if (!async) {
+		err = xe_vma_op_execute(vm, last_op);
 		if (err)
-			return ERR_PTR(err);
-		vma = xe_vma_create(vm, bo, bo_offset_or_userptr, addr,
-				    addr + range - 1,
-				    op & XE_VM_BIND_FLAG_READONLY,
-				    gt_mask);
-		xe_bo_unlock(bo, &ww);
-		if (!vma)
-			return ERR_PTR(-ENOMEM);
+			xe_vma_op_unwind(vm, last_op, true);
+		xe_vma_op_cleanup(vm, last_op);
+	} else {
+		int i;
+		bool installed = false;
 
-		xe_vm_insert_vma(vm, vma);
-		if (!bo->vm) {
-			vm_insert_extobj(vm, vma);
-			err = add_preempt_fences(vm, bo);
-			if (err) {
-				prep_vma_destroy(vm, vma);
-				xe_vma_destroy_unlocked(vma);
+		for (i = 0; i < last_op->num_syncs; i++)
+			installed |= xe_sync_entry_signal(&last_op->syncs[i],
+							  NULL,
+							  &last_op->fence->fence);
+		if (!installed && last_op->fence)
+			dma_fence_signal(&last_op->fence->fence);
 
-				return ERR_PTR(err);
-			}
-		}
-		break;
-	case XE_VM_BIND_OP_UNMAP:
-		vma = vm_unbind_lookup_vmas(vm, &lookup);
-		break;
-	case XE_VM_BIND_OP_PREFETCH:
-		vma = vm_prefetch_lookup_vmas(vm, &lookup, region);
-		break;
-	case XE_VM_BIND_OP_UNMAP_ALL:
-		XE_BUG_ON(!bo);
+		spin_lock_irq(&vm->async_ops.lock);
+		list_splice_tail(ops_list, &vm->async_ops.pending);
+		spin_unlock_irq(&vm->async_ops.lock);
 
-		err = xe_bo_lock(bo, &ww, 0, true);
-		if (err)
-			return ERR_PTR(err);
-		vma = vm_unbind_all_lookup_vmas(vm, bo);
-		if (!vma)
-			vma = ERR_PTR(-EINVAL);
-		xe_bo_unlock(bo, &ww);
-		break;
-	case XE_VM_BIND_OP_MAP_USERPTR:
-		XE_BUG_ON(bo);
+		if (!vm->async_ops.error)
+			queue_work(system_unbound_wq, &vm->async_ops.work);
+	}
 
-		vma = xe_vma_create(vm, NULL, bo_offset_or_userptr, addr,
-				    addr + range - 1,
-				    op & XE_VM_BIND_FLAG_READONLY,
-				    gt_mask);
-		if (!vma)
-			return ERR_PTR(-ENOMEM);
+	return err;
+}
 
-		err = xe_vma_userptr_pin_pages(vma);
-		if (err) {
-			prep_vma_destroy(vm, vma);
-			xe_vma_destroy_unlocked(vma);
+/*
+ * Unwind operations list, called after a failure of vm_bind_ioctl_ops_create or
+ * vm_bind_ioctl_ops_parse.
+ */
+static void vm_bind_ioctl_ops_unwind(struct xe_vm *vm,
+				     struct drm_gpuva_ops **ops,
+				     int num_ops_list)
+{
+	int i;
 
-			return ERR_PTR(err);
-		} else {
-			xe_vm_insert_vma(vm, vma);
+	for (i = 0; i < num_ops_list; ++i) {
+		struct drm_gpuva_ops *__ops = ops[i];
+		struct drm_gpuva_op *__op;
+
+		if (!__ops)
+			continue;
+
+		drm_gpuva_for_each_op(__op, __ops) {
+			struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
+
+			xe_vma_op_unwind(vm, op, false);
 		}
-		break;
-	default:
-		XE_BUG_ON("NOT POSSIBLE");
-		vma = ERR_PTR(-EINVAL);
 	}
-
-	return vma;
 }
 
 #ifdef TEST_VM_ASYNC_OPS_ERROR
@@ -2971,15 +2939,16 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 	struct drm_xe_vm_bind *args = data;
 	struct drm_xe_sync __user *syncs_user;
 	struct xe_bo **bos = NULL;
-	struct xe_vma **vmas = NULL;
+	struct drm_gpuva_ops **ops = NULL;
 	struct xe_vm *vm;
 	struct xe_engine *e = NULL;
 	u32 num_syncs;
 	struct xe_sync_entry *syncs = NULL;
 	struct drm_xe_vm_bind_op *bind_ops;
+	LIST_HEAD(ops_list);
 	bool async;
 	int err;
-	int i, j = 0;
+	int i;
 
 	err = vm_bind_ioctl_check_args(xe, args, &bind_ops, &async);
 	if (err)
@@ -3067,8 +3036,8 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 		goto put_engine;
 	}
 
-	vmas = kzalloc(sizeof(*vmas) * args->num_binds, GFP_KERNEL);
-	if (!vmas) {
+	ops = kzalloc(sizeof(*ops) * args->num_binds, GFP_KERNEL);
+	if (!ops) {
 		err = -ENOMEM;
 		goto put_engine;
 	}
@@ -3148,128 +3117,40 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 		u64 gt_mask = bind_ops[i].gt_mask;
 		u32 region = bind_ops[i].region;
 
-		vmas[i] = vm_bind_ioctl_lookup_vma(vm, bos[i], obj_offset,
-						   addr, range, op, gt_mask,
-						   region);
-		if (IS_ERR(vmas[i])) {
-			err = PTR_ERR(vmas[i]);
-			vmas[i] = NULL;
-			goto destroy_vmas;
-		}
-	}
-
-	for (j = 0; j < args->num_binds; ++j) {
-		struct xe_sync_entry *__syncs;
-		u32 __num_syncs = 0;
-		bool first_or_last = j == 0 || j == args->num_binds - 1;
-
-		if (args->num_binds == 1) {
-			__num_syncs = num_syncs;
-			__syncs = syncs;
-		} else if (first_or_last && num_syncs) {
-			bool first = j == 0;
-
-			__syncs = kmalloc(sizeof(*__syncs) * num_syncs,
-					  GFP_KERNEL);
-			if (!__syncs) {
-				err = ENOMEM;
-				break;
-			}
-
-			/* in-syncs on first bind, out-syncs on last bind */
-			for (i = 0; i < num_syncs; ++i) {
-				bool signal = syncs[i].flags &
-					DRM_XE_SYNC_SIGNAL;
-
-				if ((first && !signal) || (!first && signal))
-					__syncs[__num_syncs++] = syncs[i];
-			}
-		} else {
-			__num_syncs = 0;
-			__syncs = NULL;
-		}
-
-		if (async) {
-			bool last = j == args->num_binds - 1;
-
-			/*
-			 * Each pass of async worker drops the ref, take a ref
-			 * here, 1 set of refs taken above
-			 */
-			if (!last) {
-				if (e)
-					xe_engine_get(e);
-				xe_vm_get(vm);
-			}
-
-			err = vm_bind_ioctl_async(vm, vmas[j], e, bos[j],
-						  bind_ops + j, __syncs,
-						  __num_syncs);
-			if (err && !last) {
-				if (e)
-					xe_engine_put(e);
-				xe_vm_put(vm);
-			}
-			if (err)
-				break;
-		} else {
-			XE_BUG_ON(j != 0);	/* Not supported */
-			err = vm_bind_ioctl(vm, vmas[j], e, bos[j],
-					    bind_ops + j, __syncs,
-					    __num_syncs, NULL);
-			break;	/* Needed so cleanup loops work */
+		ops[i] = vm_bind_ioctl_ops_create(vm, bos[i], obj_offset,
+						  addr, range, op, gt_mask,
+						  region);
+		if (IS_ERR(ops[i])) {
+			err = PTR_ERR(ops[i]);
+			ops[i] = NULL;
+			goto unwind_ops;
 		}
 	}
 
-	/* Most of cleanup owned by the async bind worker */
-	if (async && !err) {
-		up_write(&vm->lock);
-		if (args->num_binds > 1)
-			kfree(syncs);
-		goto free_objs;
-	}
+	err = vm_bind_ioctl_ops_parse(vm, e, ops, args->num_binds,
+				      syncs, num_syncs, &ops_list, async);
+	if (err)
+		goto unwind_ops;
 
-destroy_vmas:
-	for (i = j; err && i < args->num_binds; ++i) {
-		u32 op = bind_ops[i].op;
-		struct xe_vma *vma, *next;
+	err = vm_bind_ioctl_ops_commit(vm, &ops_list, async);
+	up_write(&vm->lock);
 
-		if (!vmas[i])
-			break;
+	for (i = 0; i < args->num_binds; ++i)
+		xe_bo_put(bos[i]);
 
-		list_for_each_entry_safe(vma, next, &vma->unbind_link,
-					 unbind_link) {
-			list_del_init(&vma->unbind_link);
-			if (!vma->destroyed) {
-				prep_vma_destroy(vm, vma);
-				xe_vma_destroy_unlocked(vma);
-			}
-		}
+	return err;
 
-		switch (VM_BIND_OP(op)) {
-		case XE_VM_BIND_OP_MAP:
-			prep_vma_destroy(vm, vmas[i]);
-			xe_vma_destroy_unlocked(vmas[i]);
-			break;
-		case XE_VM_BIND_OP_MAP_USERPTR:
-			prep_vma_destroy(vm, vmas[i]);
-			xe_vma_destroy_unlocked(vmas[i]);
-			break;
-		}
-	}
+unwind_ops:
+	vm_bind_ioctl_ops_unwind(vm, ops, args->num_binds);
 release_vm_lock:
 	up_write(&vm->lock);
 free_syncs:
-	while (num_syncs--) {
-		if (async && j &&
-		    !(syncs[num_syncs].flags & DRM_XE_SYNC_SIGNAL))
-			continue;	/* Still in async worker */
+	while (num_syncs--)
 		xe_sync_entry_cleanup(&syncs[num_syncs]);
-	}
 
 	kfree(syncs);
 put_obj:
-	for (i = j; i < args->num_binds; ++i)
+	for (i = 0; i < args->num_binds; ++i)
 		xe_bo_put(bos[i]);
 put_engine:
 	if (e)
@@ -3278,7 +3159,7 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 	xe_vm_put(vm);
 free_objs:
 	kfree(bos);
-	kfree(vmas);
+	kfree(ops);
 	if (args->num_binds > 1)
 		kfree(bind_ops);
 	return err;
@@ -3322,14 +3203,14 @@ void xe_vm_unlock(struct xe_vm *vm, struct ww_acquire_ctx *ww)
  */
 int xe_vm_invalidate_vma(struct xe_vma *vma)
 {
-	struct xe_device *xe = vma->vm->xe;
+	struct xe_device *xe = xe_vma_vm(vma)->xe;
 	struct xe_gt *gt;
 	u32 gt_needs_invalidate = 0;
 	int seqno[XE_MAX_GT];
 	u8 id;
 	int ret;
 
-	XE_BUG_ON(!xe_vm_in_fault_mode(vma->vm));
+	XE_BUG_ON(!xe_vm_in_fault_mode(xe_vma_vm(vma)));
 	trace_xe_vma_usm_invalidate(vma);
 
 	/* Check that we don't race with page-table updates */
@@ -3338,11 +3219,11 @@ int xe_vm_invalidate_vma(struct xe_vma *vma)
 			WARN_ON_ONCE(!mmu_interval_check_retry
 				     (&vma->userptr.notifier,
 				      vma->userptr.notifier_seq));
-			WARN_ON_ONCE(!dma_resv_test_signaled(&vma->vm->resv,
+			WARN_ON_ONCE(!dma_resv_test_signaled(&xe_vma_vm(vma)->resv,
 							     DMA_RESV_USAGE_BOOKKEEP));
 
 		} else {
-			xe_bo_assert_held(vma->bo);
+			xe_bo_assert_held(xe_vma_bo(vma));
 		}
 	}
 
@@ -3372,7 +3253,7 @@ int xe_vm_invalidate_vma(struct xe_vma *vma)
 #if IS_ENABLED(CONFIG_DRM_XE_SIMPLE_ERROR_CAPTURE)
 int xe_analyze_vm(struct drm_printer *p, struct xe_vm *vm, int gt_id)
 {
-	struct rb_node *node;
+	DRM_GPUVA_ITER(it, &vm->mgr, 0);
 	bool is_vram;
 	uint64_t addr;
 
@@ -3385,8 +3266,8 @@ int xe_analyze_vm(struct drm_printer *p, struct xe_vm *vm, int gt_id)
 		drm_printf(p, " VM root: A:0x%llx %s\n", addr, is_vram ? "VRAM" : "SYS");
 	}
 
-	for (node = rb_first(&vm->vmas); node; node = rb_next(node)) {
-		struct xe_vma *vma = to_xe_vma(node);
+	drm_gpuva_iter_for_each(it) {
+		struct xe_vma* vma = gpuva_to_vma(it.va);
 		bool is_userptr = xe_vma_is_userptr(vma);
 
 		if (is_userptr) {
@@ -3395,10 +3276,10 @@ int xe_analyze_vm(struct drm_printer *p, struct xe_vm *vm, int gt_id)
 			xe_res_first_sg(vma->userptr.sg, 0, GEN8_PAGE_SIZE, &cur);
 			addr = xe_res_dma(&cur);
 		} else {
-			addr = xe_bo_addr(vma->bo, 0, GEN8_PAGE_SIZE, &is_vram);
+			addr = xe_bo_addr(xe_vma_bo(vma), 0, GEN8_PAGE_SIZE, &is_vram);
 		}
 		drm_printf(p, " [%016llx-%016llx] S:0x%016llx A:%016llx %s\n",
-			   vma->start, vma->end, vma->end - vma->start + 1ull,
+			   xe_vma_start(vma), xe_vma_end(vma), xe_vma_size(vma),
 			   addr, is_userptr ? "USR" : is_vram ? "VRAM" : "SYS");
 	}
 	up_read(&vm->lock);
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 748dc16ebed9..21b1054949c4 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -6,6 +6,7 @@
 #ifndef _XE_VM_H_
 #define _XE_VM_H_
 
+#include "xe_bo_types.h"
 #include "xe_macros.h"
 #include "xe_map.h"
 #include "xe_vm_types.h"
@@ -25,7 +26,6 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags);
 void xe_vm_free(struct kref *ref);
 
 struct xe_vm *xe_vm_lookup(struct xe_file *xef, u32 id);
-int xe_vma_cmp_vma_cb(const void *key, const struct rb_node *node);
 
 static inline struct xe_vm *xe_vm_get(struct xe_vm *vm)
 {
@@ -50,7 +50,67 @@ static inline bool xe_vm_is_closed(struct xe_vm *vm)
 }
 
 struct xe_vma *
-xe_vm_find_overlapping_vma(struct xe_vm *vm, const struct xe_vma *vma);
+xe_vm_find_overlapping_vma(struct xe_vm *vm, u64 start, u64 range);
+
+static inline struct xe_vm *gpuva_to_vm(struct drm_gpuva *gpuva)
+{
+	return container_of(gpuva->mgr, struct xe_vm, mgr);
+}
+
+static inline struct xe_vma *gpuva_to_vma(struct drm_gpuva *gpuva)
+{
+	return container_of(gpuva, struct xe_vma, gpuva);
+}
+
+static inline struct xe_vma_op *gpuva_op_to_vma_op(struct drm_gpuva_op *op)
+{
+	return container_of(op, struct xe_vma_op, base);
+}
+
+/*
+ * Let's abstract start, size, end, bo_offset, vm, and bo as the underlying
+ * implementation may change
+ */
+static inline u64 xe_vma_start(struct xe_vma *vma)
+{
+	return vma->gpuva.va.addr;
+}
+
+static inline u64 xe_vma_size(struct xe_vma *vma)
+{
+	return vma->gpuva.va.range;
+}
+
+static inline u64 xe_vma_end(struct xe_vma *vma)
+{
+	return xe_vma_start(vma) + xe_vma_size(vma);
+}
+
+static inline u64 xe_vma_bo_offset(struct xe_vma *vma)
+{
+	return vma->gpuva.gem.offset;
+}
+
+static inline struct xe_bo *xe_vma_bo(struct xe_vma *vma)
+{
+	return !vma->gpuva.gem.obj ? NULL :
+		container_of(vma->gpuva.gem.obj, struct xe_bo, ttm.base);
+}
+
+static inline struct xe_vm *xe_vma_vm(struct xe_vma *vma)
+{
+	return container_of(vma->gpuva.mgr, struct xe_vm, mgr);
+}
+
+static inline bool xe_vma_read_only(struct xe_vma *vma)
+{
+	return vma->gpuva.flags & XE_VMA_READ_ONLY;
+}
+
+static inline u64 xe_vma_userptr(struct xe_vma *vma)
+{
+	return vma->gpuva.gem.offset;
+}
 
 #define xe_vm_assert_held(vm) dma_resv_assert_held(&(vm)->resv)
 
@@ -117,7 +177,7 @@ static inline void xe_vm_reactivate_rebind(struct xe_vm *vm)
 
 static inline bool xe_vma_is_userptr(struct xe_vma *vma)
 {
-	return !vma->bo;
+	return !xe_vma_bo(vma);
 }
 
 int xe_vma_userptr_pin_pages(struct xe_vma *vma);
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index 29815852985a..46d1b8d7b72f 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -30,7 +30,7 @@ static int madvise_preferred_mem_class(struct xe_device *xe, struct xe_vm *vm,
 		struct xe_bo *bo;
 		struct ww_acquire_ctx ww;
 
-		bo = vmas[i]->bo;
+		bo = xe_vma_bo(vmas[i]);
 
 		err = xe_bo_lock(bo, &ww, 0, true);
 		if (err)
@@ -55,7 +55,7 @@ static int madvise_preferred_gt(struct xe_device *xe, struct xe_vm *vm,
 		struct xe_bo *bo;
 		struct ww_acquire_ctx ww;
 
-		bo = vmas[i]->bo;
+		bo = xe_vma_bo(vmas[i]);
 
 		err = xe_bo_lock(bo, &ww, 0, true);
 		if (err)
@@ -91,7 +91,7 @@ static int madvise_preferred_mem_class_gt(struct xe_device *xe,
 		struct xe_bo *bo;
 		struct ww_acquire_ctx ww;
 
-		bo = vmas[i]->bo;
+		bo = xe_vma_bo(vmas[i]);
 
 		err = xe_bo_lock(bo, &ww, 0, true);
 		if (err)
@@ -114,7 +114,7 @@ static int madvise_cpu_atomic(struct xe_device *xe, struct xe_vm *vm,
 		struct xe_bo *bo;
 		struct ww_acquire_ctx ww;
 
-		bo = vmas[i]->bo;
+		bo = xe_vma_bo(vmas[i]);
 		if (XE_IOCTL_ERR(xe, !(bo->flags & XE_BO_CREATE_SYSTEM_BIT)))
 			return -EINVAL;
 
@@ -145,7 +145,7 @@ static int madvise_device_atomic(struct xe_device *xe, struct xe_vm *vm,
 		struct xe_bo *bo;
 		struct ww_acquire_ctx ww;
 
-		bo = vmas[i]->bo;
+		bo = xe_vma_bo(vmas[i]);
 		if (XE_IOCTL_ERR(xe, !(bo->flags & XE_BO_CREATE_VRAM0_BIT) &&
 				 !(bo->flags & XE_BO_CREATE_VRAM1_BIT)))
 			return -EINVAL;
@@ -176,7 +176,7 @@ static int madvise_priority(struct xe_device *xe, struct xe_vm *vm,
 		struct xe_bo *bo;
 		struct ww_acquire_ctx ww;
 
-		bo = vmas[i]->bo;
+		bo = xe_vma_bo(vmas[i]);
 
 		err = xe_bo_lock(bo, &ww, 0, true);
 		if (err)
@@ -210,19 +210,12 @@ static const madvise_func madvise_funcs[] = {
 	[DRM_XE_VM_MADVISE_PIN] = madvise_pin,
 };
 
-static struct xe_vma *node_to_vma(const struct rb_node *node)
-{
-	BUILD_BUG_ON(offsetof(struct xe_vma, vm_node) != 0);
-	return (struct xe_vma *)node;
-}
-
 static struct xe_vma **
 get_vmas(struct xe_vm *vm, int *num_vmas, u64 addr, u64 range)
 {
-	struct xe_vma **vmas;
-	struct xe_vma *vma, *__vma, lookup;
+	struct xe_vma **vmas, **__vmas;
 	int max_vmas = 8;
-	struct rb_node *node;
+	DRM_GPUVA_ITER(it, &vm->mgr, addr);
 
 	lockdep_assert_held(&vm->lock);
 
@@ -230,64 +223,24 @@ get_vmas(struct xe_vm *vm, int *num_vmas, u64 addr, u64 range)
 	if (!vmas)
 		return NULL;
 
-	lookup.start = addr;
-	lookup.end = addr + range - 1;
+	drm_gpuva_iter_for_each_range(it, addr + range) {
+		struct xe_vma *vma = gpuva_to_vma(it.va);
 
-	vma = xe_vm_find_overlapping_vma(vm, &lookup);
-	if (!vma)
-		return vmas;
+		if (xe_vma_is_userptr(vma))
+			continue;
 
-	if (!xe_vma_is_userptr(vma)) {
+		if (*num_vmas == max_vmas) {
+			max_vmas <<= 1;
+			__vmas = krealloc(vmas, max_vmas * sizeof(*vmas),
+					  GFP_KERNEL);
+			if (!__vmas)
+				return NULL;
+			vmas = __vmas;
+		}
 		vmas[*num_vmas] = vma;
 		*num_vmas += 1;
 	}
 
-	node = &vma->vm_node;
-	while ((node = rb_next(node))) {
-		if (!xe_vma_cmp_vma_cb(&lookup, node)) {
-			__vma = node_to_vma(node);
-			if (xe_vma_is_userptr(__vma))
-				continue;
-
-			if (*num_vmas == max_vmas) {
-				struct xe_vma **__vmas =
-					krealloc(vmas, max_vmas * sizeof(*vmas),
-						 GFP_KERNEL);
-
-				if (!__vmas)
-					return NULL;
-				vmas = __vmas;
-			}
-			vmas[*num_vmas] = __vma;
-			*num_vmas += 1;
-		} else {
-			break;
-		}
-	}
-
-	node = &vma->vm_node;
-	while ((node = rb_prev(node))) {
-		if (!xe_vma_cmp_vma_cb(&lookup, node)) {
-			__vma = node_to_vma(node);
-			if (xe_vma_is_userptr(__vma))
-				continue;
-
-			if (*num_vmas == max_vmas) {
-				struct xe_vma **__vmas =
-					krealloc(vmas, max_vmas * sizeof(*vmas),
-						 GFP_KERNEL);
-
-				if (!__vmas)
-					return NULL;
-				vmas = __vmas;
-			}
-			vmas[*num_vmas] = __vma;
-			*num_vmas += 1;
-		} else {
-			break;
-		}
-	}
-
 	return vmas;
 }
 
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index fada7896867f..a81dc9a1a7a6 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -6,6 +6,8 @@
 #ifndef _XE_VM_TYPES_H_
 #define _XE_VM_TYPES_H_
 
+#include <drm/drm_gpuva_mgr.h>
+
 #include <linux/dma-resv.h>
 #include <linux/kref.h>
 #include <linux/mmu_notifier.h>
@@ -14,28 +16,23 @@
 #include "xe_device_types.h"
 #include "xe_pt_types.h"
 
+struct async_op_fence;
 struct xe_bo;
+struct xe_sync_entry;
 struct xe_vm;
 
-struct xe_vma {
-	struct rb_node vm_node;
-	/** @vm: VM which this VMA belongs to */
-	struct xe_vm *vm;
+#define TEST_VM_ASYNC_OPS_ERROR
+#define FORCE_ASYNC_OP_ERROR	BIT(31)
 
-	/**
-	 * @start: start address of this VMA within its address domain, end -
-	 * start + 1 == VMA size
-	 */
-	u64 start;
-	/** @end: end address of this VMA within its address domain */
-	u64 end;
-	/** @pte_flags: pte flags for this VMA */
-	u32 pte_flags;
+#define XE_VMA_READ_ONLY	DRM_GPUVA_USERBITS
+#define XE_VMA_DESTROYED	(DRM_GPUVA_USERBITS << 1)
+#define XE_VMA_ATOMIC_PTE_BIT	(DRM_GPUVA_USERBITS << 2)
+#define XE_VMA_FIRST_REBIND	(DRM_GPUVA_USERBITS << 3)
+#define XE_VMA_LAST_REBIND	(DRM_GPUVA_USERBITS << 4)
 
-	/** @bo: BO if not a userptr, must be NULL is userptr */
-	struct xe_bo *bo;
-	/** @bo_offset: offset into BO if not a userptr, unused for userptr */
-	u64 bo_offset;
+struct xe_vma {
+	/** @gpuva: Base GPUVA object */
+	struct drm_gpuva gpuva;
 
 	/** @gt_mask: GT mask of where to create binding for this VMA */
 	u64 gt_mask;
@@ -49,40 +46,8 @@ struct xe_vma {
 	 */
 	u64 gt_present;
 
-	/**
-	 * @destroyed: VMA is destroyed, in the sense that it shouldn't be
-	 * subject to rebind anymore. This field must be written under
-	 * the vm lock in write mode and the userptr.notifier_lock in
-	 * either mode. Read under the vm lock or the userptr.notifier_lock in
-	 * write mode.
-	 */
-	bool destroyed;
-
-	/**
-	 * @first_munmap_rebind: VMA is first in a sequence of ops that triggers
-	 * a rebind (munmap style VM unbinds). This indicates the operation
-	 * using this VMA must wait on all dma-resv slots (wait for pending jobs
-	 * / trigger preempt fences).
-	 */
-	bool first_munmap_rebind;
-
-	/**
-	 * @last_munmap_rebind: VMA is first in a sequence of ops that triggers
-	 * a rebind (munmap style VM unbinds). This indicates the operation
-	 * using this VMA must install itself into kernel dma-resv slot (blocks
-	 * future jobs) and kick the rebind work in compute mode.
-	 */
-	bool last_munmap_rebind;
-
-	/** @use_atomic_access_pte_bit: Set atomic access bit in PTE */
-	bool use_atomic_access_pte_bit;
-
-	union {
-		/** @bo_link: link into BO if not a userptr */
-		struct list_head bo_link;
-		/** @userptr_link: link into VM repin list if userptr */
-		struct list_head userptr_link;
-	};
+	/** @userptr_link: link into VM repin list if userptr */
+	struct list_head userptr_link;
 
 	/**
 	 * @rebind_link: link into VM if this VMA needs rebinding, and
@@ -105,8 +70,6 @@ struct xe_vma {
 
 	/** @userptr: user pointer state */
 	struct {
-		/** @ptr: user pointer */
-		uintptr_t ptr;
 		/** @invalidate_link: Link for the vm::userptr.invalidated list */
 		struct list_head invalidate_link;
 		/**
@@ -154,6 +117,9 @@ struct xe_device;
 #define xe_vm_assert_held(vm) dma_resv_assert_held(&(vm)->resv)
 
 struct xe_vm {
+	/** @mgr: base GPUVA manager used to track VMAs */
+	struct drm_gpuva_manager mgr;
+
 	struct xe_device *xe;
 
 	struct kref refcount;
@@ -165,7 +131,6 @@ struct xe_vm {
 	struct dma_resv resv;
 
 	u64 size;
-	struct rb_root vmas;
 
 	struct xe_pt *pt_root[XE_MAX_GT];
 	struct xe_bo *scratch_bo[XE_MAX_GT];
@@ -339,4 +304,96 @@ struct xe_vm {
 	} error_capture;
 };
 
+/** struct xe_vma_op_map - VMA map operation */
+struct xe_vma_op_map {
+	/** @vma: VMA to map */
+	struct xe_vma *vma;
+	/** @immediate: Immediate bind */
+	bool immediate;
+	/** @read_only: Read only */
+	bool read_only;
+};
+
+/** struct xe_vma_op_unmap - VMA unmap operation */
+struct xe_vma_op_unmap {
+	/** @start: start of the VMA unmap */
+	u64 start;
+	/** @range: range of the VMA unmap */
+	u64 range;
+};
+
+/** struct xe_vma_op_remap - VMA remap operation */
+struct xe_vma_op_remap {
+	/** @prev: VMA preceding part of a split mapping */
+	struct xe_vma *prev;
+	/** @next: VMA subsequent part of a split mapping */
+	struct xe_vma *next;
+	/** @start: start of the VMA unmap */
+	u64 start;
+	/** @range: range of the VMA unmap */
+	u64 range;
+	/** @unmap_done: unmap operation is done */
+	bool unmap_done;
+};
+
+/** struct xe_vma_op_prefetch - VMA prefetch operation */
+struct xe_vma_op_prefetch {
+	/** @region: memory region to prefetch to */
+	u32 region;
+};
+
+/** enum xe_vma_op_flags - flags for VMA operation */
+enum xe_vma_op_flags {
+	/** @XE_VMA_OP_FIRST: first VMA operation for a set of syncs */
+	XE_VMA_OP_FIRST		= (0x1 << 0),
+	/** @XE_VMA_OP_LAST: last VMA operation for a set of syncs */
+	XE_VMA_OP_LAST		= (0x1 << 1),
+};
+
+/** struct xe_vma_op - VMA operation */
+struct xe_vma_op {
+	/** @base: GPUVA base operation */
+	struct drm_gpuva_op base;
+	/**
+	 * @ops: GPUVA ops, when set call drm_gpuva_ops_free after this
+	 * operation is processed
+	 */
+	struct drm_gpuva_ops *ops;
+	/** @engine: engine for this operation */
+	struct xe_engine *engine;
+	/**
+	 * @syncs: syncs for this operation, only used on first and last
+	 * operation
+	 */
+	struct xe_sync_entry *syncs;
+	/** @num_syncs: number of syncs */
+	u32 num_syncs;
+	/** @link: async operation link */
+	struct list_head link;
+	/**
+	 * @fence: async operation fence, signaled on last operation complete
+	 */
+	struct async_op_fence *fence;
+	/** @gt_mask: gt mask for this operation */
+	u64 gt_mask;
+	/** @flags: operation flags */
+	enum xe_vma_op_flags flags;
+
+#ifdef TEST_VM_ASYNC_OPS_ERROR
+	/** @inject_error: inject error to test async op error handling */
+	bool inject_error;
+#endif
+
+	union {
+		/** @map: VMA map operation specific data */
+		struct xe_vma_op_map map;
+		/** @unmap: VMA unmap operation specific data */
+		struct xe_vma_op_unmap unmap;
+		/** @remap: VMA remap operation specific data */
+		struct xe_vma_op_remap remap;
+		/** @prefetch: VMA prefetch operation specific data */
+		struct xe_vma_op_prefetch prefetch;
+	};
+};
+
 #endif
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [Intel-xe] [PATCH v5 5/8] drm/xe: NULL binding implementation
  2023-04-04  1:42 [Intel-xe] [PATCH v5 0/8] Port Xe to use GPUVA and implement NULL VM binds Matthew Brost
                   ` (3 preceding siblings ...)
  2023-04-04  1:42 ` [Intel-xe] [PATCH v5 4/8] drm/xe: Port Xe to GPUVA Matthew Brost
@ 2023-04-04  1:42 ` Matthew Brost
  2023-04-04  1:42 ` [Intel-xe] [PATCH v5 6/8] drm/xe: Avoid doing rebinds Matthew Brost
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Matthew Brost @ 2023-04-04  1:42 UTC (permalink / raw)
  To: intel-xe

Add uAPI and implementation for NULL bindings. A NULL binding is a
binding for which writes are dropped and reads return zero. A single bit
has been added to the uAPI, which results in a single bit being set in
the PTEs.

NULL bindings are intended to be used to implement VK sparse bindings.
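
To make the semantics concrete, below is a minimal userspace-side sketch
of describing a single NULL bind. The field names follow struct
drm_xe_vm_bind_op as used in this series; the address values are
arbitrary and the exact uAPI layout should be taken from xe_drm.h rather
than from this sketch.

	struct drm_xe_vm_bind_op bind_op = {
		.addr       = 0x100000, /* start of the GPU VA range */
		.range      = 0x8000,   /* writes dropped, reads return zero */
		.op         = XE_VM_BIND_OP_MAP | XE_VM_BIND_FLAG_NULL,
		.obj        = 0,        /* BO handle must be zero (MBZ) */
		.obj_offset = 0,        /* BO offset must be zero (MBZ) */
	};

vm_bind_ioctl_check_args() rejects a NULL bind that supplies a BO handle,
a BO offset, or any operation other than XE_VM_BIND_OP_MAP.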

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_bo.h           |  1 +
 drivers/gpu/drm/xe/xe_exec.c         |  2 +
 drivers/gpu/drm/xe/xe_gt_pagefault.c |  4 +-
 drivers/gpu/drm/xe/xe_pt.c           | 77 ++++++++++++++++-------
 drivers/gpu/drm/xe/xe_vm.c           | 92 ++++++++++++++++++----------
 drivers/gpu/drm/xe/xe_vm.h           | 10 +++
 drivers/gpu/drm/xe/xe_vm_madvise.c   |  2 +-
 drivers/gpu/drm/xe/xe_vm_types.h     |  3 +
 include/uapi/drm/xe_drm.h            |  8 +++
 9 files changed, 144 insertions(+), 55 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
index 8f5a7ad10d09..a8059f94ea2c 100644
--- a/drivers/gpu/drm/xe/xe_bo.h
+++ b/drivers/gpu/drm/xe/xe_bo.h
@@ -56,6 +56,7 @@
 #define GEN8_PDE_IPS_64K		BIT_ULL(11)
 
 #define GEN12_GGTT_PTE_LM		BIT_ULL(1)
+#define GEN12_PTE_NULL			BIT_ULL(9)
 #define GEN12_USM_PPGTT_PTE_AE		BIT_ULL(10)
 #define GEN12_PPGTT_PTE_LM		BIT_ULL(11)
 #define GEN12_PDE_64K			BIT_ULL(6)
diff --git a/drivers/gpu/drm/xe/xe_exec.c b/drivers/gpu/drm/xe/xe_exec.c
index 214d82bc906b..60d643fa478e 100644
--- a/drivers/gpu/drm/xe/xe_exec.c
+++ b/drivers/gpu/drm/xe/xe_exec.c
@@ -115,6 +115,8 @@ static int xe_exec_begin(struct xe_engine *e, struct ww_acquire_ctx *ww,
 	 * to a location where the GPU can access it).
 	 */
 	list_for_each_entry(vma, &vm->rebind_list, rebind_link) {
+		XE_BUG_ON(xe_vma_is_null(vma));
+
 		if (xe_vma_is_userptr(vma))
 			continue;
 
diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c
index f7a066090a13..cfffe3398fe4 100644
--- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
@@ -526,8 +526,8 @@ static int handle_acc(struct xe_gt *gt, struct acc *acc)
 
 	trace_xe_vma_acc(vma);
 
-	/* Userptr can't be migrated, nothing to do */
-	if (xe_vma_is_userptr(vma))
+	/* Userptr or null can't be migrated, nothing to do */
+	if (xe_vma_has_no_bo(vma))
 		goto unlock_vm;
 
 	/* Lock VM and BOs dma-resv */
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index 37a1ce6f62a3..4da248484267 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -82,7 +82,9 @@ u64 gen8_pde_encode(struct xe_bo *bo, u64 bo_offset,
 static dma_addr_t vma_addr(struct xe_vma *vma, u64 offset,
 			   size_t page_size, bool *is_vram)
 {
-	if (xe_vma_is_userptr(vma)) {
+	if (xe_vma_is_null(vma)) {
+		return 0;
+	} else if (xe_vma_is_userptr(vma)) {
 		struct xe_res_cursor cur;
 		u64 page;
 
@@ -563,6 +565,10 @@ static bool xe_pt_hugepte_possible(u64 addr, u64 next, unsigned int level,
 	if (next - xe_walk->va_curs_start > xe_walk->curs->size)
 		return false;
 
+	/* NULL VMAs do not have DMA addresses */
+	if (xe_walk->pte_flags & GEN12_PTE_NULL)
+		return true;
+
 	/* Is the DMA address huge PTE size aligned? */
 	size = next - addr;
 	dma = addr - xe_walk->va_curs_start + xe_res_dma(xe_walk->curs);
@@ -585,6 +591,10 @@ xe_pt_scan_64K(u64 addr, u64 next, struct xe_pt_stage_bind_walk *xe_walk)
 	if (next > xe_walk->l0_end_addr)
 		return false;
 
+	/* NULL VMAs do not have DMA addresses */
+	if (xe_walk->pte_flags & GEN12_PTE_NULL)
+		return true;
+
 	xe_res_next(&curs, addr - xe_walk->va_curs_start);
 	for (; addr < next; addr += SZ_64K) {
 		if (!IS_ALIGNED(xe_res_dma(&curs), SZ_64K) || curs.size < SZ_64K)
@@ -630,17 +640,34 @@ xe_pt_stage_bind_entry(struct drm_pt *parent, pgoff_t offset,
 	struct xe_pt *xe_child;
 	bool covers;
 	int ret = 0;
-	u64 pte;
+	u64 pte = 0;
 
 	/* Is this a leaf entry ?*/
 	if (level == 0 || xe_pt_hugepte_possible(addr, next, level, xe_walk)) {
 		struct xe_res_cursor *curs = xe_walk->curs;
+		bool null = xe_walk->pte_flags & GEN12_PTE_NULL;
 
 		XE_WARN_ON(xe_walk->va_curs_start != addr);
 
-		pte = __gen8_pte_encode(xe_res_dma(curs) + xe_walk->dma_offset,
-					xe_walk->cache, xe_walk->pte_flags,
-					level);
+		if (null) {
+			pte |= GEN8_PAGE_PRESENT | GEN8_PAGE_RW;
+
+			if (unlikely(xe_walk->pte_flags & PTE_READ_ONLY))
+				pte &= ~GEN8_PAGE_RW;
+
+			if (level == 1)
+				pte |= GEN8_PDE_PS_2M;
+			else if (level == 2)
+				pte |= GEN8_PDPE_PS_1G;
+
+			pte |= GEN12_PTE_NULL;
+		} else {
+			pte = __gen8_pte_encode(xe_res_dma(curs) +
+						xe_walk->dma_offset,
+						xe_walk->cache,
+						xe_walk->pte_flags,
+						level);
+		}
 		pte |= xe_walk->default_pte;
 
 		/*
@@ -658,7 +685,8 @@ xe_pt_stage_bind_entry(struct drm_pt *parent, pgoff_t offset,
 		if (unlikely(ret))
 			return ret;
 
-		xe_res_next(curs, next - addr);
+		if (!null)
+			xe_res_next(curs, next - addr);
 		xe_walk->va_curs_start = next;
 		*action = ACTION_CONTINUE;
 
@@ -751,7 +779,8 @@ xe_pt_stage_bind(struct xe_gt *gt, struct xe_vma *vma,
 		.gt = gt,
 		.curs = &curs,
 		.va_curs_start = xe_vma_start(vma),
-		.pte_flags = xe_vma_read_only(vma) ? PTE_READ_ONLY : 0,
+		.pte_flags = (xe_vma_read_only(vma) ? PTE_READ_ONLY : 0) |
+			(xe_vma_is_null(vma) ? GEN12_PTE_NULL : 0),
 		.wupd.entries = entries,
 		.needs_64K = (xe_vma_vm(vma)->flags & XE_VM_FLAGS_64K) &&
 			is_vram,
@@ -769,23 +798,28 @@ xe_pt_stage_bind(struct xe_gt *gt, struct xe_vma *vma,
 			gt_to_xe(gt)->mem.vram.io_start;
 		xe_walk.cache = XE_CACHE_WB;
 	} else {
-		if (!xe_vma_is_userptr(vma) && bo->flags & XE_BO_SCANOUT_BIT)
+		if (!xe_vma_has_no_bo(vma) && bo->flags & XE_BO_SCANOUT_BIT)
 			xe_walk.cache = XE_CACHE_WT;
 		else
 			xe_walk.cache = XE_CACHE_WB;
 	}
-	if (!xe_vma_is_userptr(vma) && xe_bo_is_stolen(bo))
+	if (!xe_vma_has_no_bo(vma) && xe_bo_is_stolen(bo))
 		xe_walk.dma_offset = xe_ttm_stolen_gpu_offset(xe_bo_device(bo));
 
 	xe_bo_assert_held(bo);
-	if (xe_vma_is_userptr(vma))
-		xe_res_first_sg(vma->userptr.sg, 0, xe_vma_size(vma), &curs);
-	else if (xe_bo_is_vram(bo) || xe_bo_is_stolen(bo))
-		xe_res_first(bo->ttm.resource, xe_vma_bo_offset(vma),
-			     xe_vma_size(vma), &curs);
-	else
-		xe_res_first_sg(xe_bo_get_sg(bo), xe_vma_bo_offset(vma),
-				xe_vma_size(vma), &curs);
+	if (!xe_vma_is_null(vma)) {
+		if (xe_vma_is_userptr(vma))
+			xe_res_first_sg(vma->userptr.sg, 0, xe_vma_size(vma),
+					&curs);
+		else if (xe_bo_is_vram(bo) || xe_bo_is_stolen(bo))
+			xe_res_first(bo->ttm.resource, xe_vma_bo_offset(vma),
+				     xe_vma_size(vma), &curs);
+		else
+			xe_res_first_sg(xe_bo_get_sg(bo), xe_vma_bo_offset(vma),
+					xe_vma_size(vma), &curs);
+	} else {
+		curs.size = xe_vma_size(vma);
+	}
 
 	ret = drm_pt_walk_range(&pt->drm, pt->level, xe_vma_start(vma),
 				xe_vma_end(vma), &xe_walk.drm);
@@ -979,7 +1013,7 @@ static void xe_pt_commit_locks_assert(struct xe_vma *vma)
 
 	if (xe_vma_is_userptr(vma))
 		lockdep_assert_held_read(&vm->userptr.notifier_lock);
-	else
+	else if (!xe_vma_is_null(vma))
 		dma_resv_assert_held(xe_vma_bo(vma)->ttm.base.resv);
 
 	dma_resv_assert_held(&vm->resv);
@@ -1283,7 +1317,8 @@ __xe_pt_bind_vma(struct xe_gt *gt, struct xe_vma *vma, struct xe_engine *e,
 	struct xe_vm_pgtable_update entries[XE_VM_MAX_LEVEL * 2 + 1];
 	struct xe_pt_migrate_pt_update bind_pt_update = {
 		.base = {
-			.ops = xe_vma_is_userptr(vma) ? &userptr_bind_ops : &bind_ops,
+			.ops = xe_vma_is_userptr(vma) ? &userptr_bind_ops :
+				&bind_ops,
 			.vma = vma,
 		},
 		.bind = true,
@@ -1348,7 +1383,7 @@ __xe_pt_bind_vma(struct xe_gt *gt, struct xe_vma *vma, struct xe_engine *e,
 				   DMA_RESV_USAGE_KERNEL :
 				   DMA_RESV_USAGE_BOOKKEEP);
 
-		if (!xe_vma_is_userptr(vma) && !xe_vma_bo(vma)->vm)
+		if (!xe_vma_has_no_bo(vma) && !xe_vma_bo(vma)->vm)
 			dma_resv_add_fence(xe_vma_bo(vma)->ttm.base.resv, fence,
 					   DMA_RESV_USAGE_BOOKKEEP);
 		xe_pt_commit_bind(vma, entries, num_entries, rebind,
@@ -1667,7 +1702,7 @@ __xe_pt_unbind_vma(struct xe_gt *gt, struct xe_vma *vma, struct xe_engine *e,
 				   DMA_RESV_USAGE_BOOKKEEP);
 
 		/* This fence will be installed by caller when doing eviction */
-		if (!xe_vma_is_userptr(vma) && !xe_vma_bo(vma)->vm)
+		if (!xe_vma_has_no_bo(vma) && !xe_vma_bo(vma)->vm)
 			dma_resv_add_fence(xe_vma_bo(vma)->ttm.base.resv, fence,
 					   DMA_RESV_USAGE_BOOKKEEP);
 		xe_pt_commit_unbind(vma, entries, num_entries,
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index fddbe8d5f984..5d74a8aa9e8d 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -60,6 +60,7 @@ int xe_vma_userptr_pin_pages(struct xe_vma *vma)
 
 	lockdep_assert_held(&vm->lock);
 	XE_BUG_ON(!xe_vma_is_userptr(vma));
+	XE_BUG_ON(xe_vma_is_null(vma));
 retry:
 	if (vma->gpuva.flags & XE_VMA_DESTROYED)
 		return 0;
@@ -581,7 +582,7 @@ static void preempt_rebind_work_func(struct work_struct *w)
 		goto out_unlock;
 
 	list_for_each_entry(vma, &vm->rebind_list, rebind_link) {
-		if (xe_vma_is_userptr(vma) ||
+		if (xe_vma_has_no_bo(vma) ||
 		    vma->gpuva.flags & XE_VMA_DESTROYED)
 			continue;
 
@@ -813,7 +814,7 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
 				    struct xe_bo *bo,
 				    u64 bo_offset_or_userptr,
 				    u64 start, u64 end,
-				    bool read_only,
+				    bool read_only, bool null,
 				    u64 gt_mask)
 {
 	struct xe_vma *vma;
@@ -843,6 +844,8 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
 	vma->gpuva.va.range = end - start + 1;
 	if (read_only)
 		vma->gpuva.flags |= XE_VMA_READ_ONLY;
+	if (null)
+		vma->gpuva.flags |= XE_VMA_NULL;
 
 	if (gt_mask) {
 		vma->gt_mask = gt_mask;
@@ -862,23 +865,26 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
 		vma->gpuva.gem.obj = &bo->ttm.base;
 		vma->gpuva.gem.offset = bo_offset_or_userptr;
 		drm_gpuva_link(&vma->gpuva);
-	} else /* userptr */ {
-		u64 size = end - start + 1;
-		int err;
-
-		vma->gpuva.gem.offset = bo_offset_or_userptr;
+	} else /* userptr or null */ {
+		if (!null) {
+			u64 size = end - start + 1;
+			int err;
+
+			vma->gpuva.gem.offset = bo_offset_or_userptr;
+			err = mmu_interval_notifier_insert(&vma->userptr.notifier,
+							   current->mm,
+							   xe_vma_userptr(vma),
+							   size,
+							   &vma_userptr_notifier_ops);
+			if (err) {
+				kfree(vma);
+				vma = ERR_PTR(err);
+				return vma;
+			}
 
-		err = mmu_interval_notifier_insert(&vma->userptr.notifier,
-						   current->mm,
-						   xe_vma_userptr(vma), size,
-						   &vma_userptr_notifier_ops);
-		if (err) {
-			kfree(vma);
-			vma = ERR_PTR(err);
-			return vma;
+			vma->userptr.notifier_seq = LONG_MAX;
 		}
 
-		vma->userptr.notifier_seq = LONG_MAX;
 		xe_vm_get(vm);
 	}
 
@@ -916,6 +922,8 @@ static void xe_vma_destroy_late(struct xe_vma *vma)
 		 */
 		mmu_interval_notifier_remove(&vma->userptr.notifier);
 		xe_vm_put(vm);
+	} else if (xe_vma_is_null(vma)) {
+		xe_vm_put(vm);
 	} else {
 		xe_bo_put(xe_vma_bo(vma));
 	}
@@ -954,7 +962,7 @@ static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence)
 		list_del_init(&vma->userptr.invalidate_link);
 		spin_unlock(&vm->userptr.invalidated_lock);
 		list_del(&vma->userptr_link);
-	} else {
+	} else if (!xe_vma_is_null(vma)) {
 		xe_bo_assert_held(xe_vma_bo(vma));
 		drm_gpuva_unlink(&vma->gpuva);
 		if (!xe_vma_bo(vma)->vm)
@@ -1302,7 +1310,7 @@ void xe_vm_close_and_put(struct xe_vm *vm)
 	drm_gpuva_iter_for_each(it) {
 		vma = gpuva_to_vma(it.va);
 
-		if (xe_vma_is_userptr(vma)) {
+		if (xe_vma_has_no_bo(vma)) {
 			down_read(&vm->userptr.notifier_lock);
 			vma->gpuva.flags |= XE_VMA_DESTROYED;
 			up_read(&vm->userptr.notifier_lock);
@@ -1312,7 +1320,7 @@ void xe_vm_close_and_put(struct xe_vm *vm)
 		drm_gpuva_iter_remove(&it);
 
 		/* easy case, remove from VMA? */
-		if (xe_vma_is_userptr(vma) || xe_vma_bo(vma)->vm) {
+		if (xe_vma_has_no_bo(vma) || xe_vma_bo(vma)->vm) {
 			xe_vma_destroy(vma, NULL);
 			continue;
 		}
@@ -1961,7 +1969,7 @@ static int xe_vm_prefetch(struct xe_vm *vm, struct xe_vma *vma,
 
 	XE_BUG_ON(region > ARRAY_SIZE(region_to_mem_type));
 
-	if (!xe_vma_is_userptr(vma)) {
+	if (!xe_vma_has_no_bo(vma)) {
 		err = xe_bo_migrate(xe_vma_bo(vma), region_to_mem_type[region]);
 		if (err)
 			return err;
@@ -2169,6 +2177,7 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_bo *bo,
 				operation & XE_VM_BIND_FLAG_IMMEDIATE;
 			op->map.read_only =
 				operation & XE_VM_BIND_FLAG_READONLY;
+			op->map.null = operation & XE_VM_BIND_FLAG_NULL;
 		}
 		break;
 	case XE_VM_BIND_OP_UNMAP:
@@ -2225,7 +2234,7 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_bo *bo,
 }
 
 static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
-			      u64 gt_mask, bool read_only)
+			      u64 gt_mask, bool read_only, bool null)
 {
 	struct xe_bo *bo = op->gem.obj ? gem_to_xe_bo(op->gem.obj) : NULL;
 	struct xe_vma *vma;
@@ -2241,7 +2250,7 @@ static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
 	}
 	vma = xe_vma_create(vm, bo, op->gem.offset,
 			    op->va.addr, op->va.addr +
-			    op->va.range - 1, read_only,
+			    op->va.range - 1, read_only, null,
 			    gt_mask);
 	if (bo)
 		xe_bo_unlock(bo, &ww);
@@ -2252,7 +2261,7 @@ static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
 			xe_vma_destroy(vma, NULL);
 			return ERR_PTR(err);
 		}
-	} else if(!bo->vm) {
+	} else if(!xe_vma_has_no_bo(vma) && !bo->vm) {
 		vm_insert_extobj(vm, vma);
 		err = add_preempt_fences(vm, bo);
 		if (err) {
@@ -2329,7 +2338,8 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct xe_engine *e,
 				struct xe_vma *vma;
 
 				vma = new_vma(vm, &op->base.map,
-					      op->gt_mask, op->map.read_only);
+					      op->gt_mask, op->map.read_only,
+					      op->map.null);
 				if (IS_ERR(vma)) {
 					err = PTR_ERR(vma);
 					goto free_fence;
@@ -2344,9 +2354,13 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct xe_engine *e,
 					bool read_only =
 						op->base.remap.unmap->va->flags &
 						XE_VMA_READ_ONLY;
+					bool null =
+						op->base.remap.unmap->va->flags &
+						XE_VMA_NULL;
 
 					vma = new_vma(vm, op->base.remap.prev,
-						      op->gt_mask, read_only);
+						      op->gt_mask, read_only,
+						      null);
 					if (IS_ERR(vma)) {
 						err = PTR_ERR(vma);
 						goto free_fence;
@@ -2361,8 +2375,13 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct xe_engine *e,
 						op->base.remap.unmap->va->flags &
 						XE_VMA_READ_ONLY;
 
+					bool null =
+						op->base.remap.unmap->va->flags &
+						XE_VMA_NULL;
+
 					vma = new_vma(vm, op->base.remap.next,
-						      op->gt_mask, read_only);
+						      op->gt_mask, read_only,
+						      null);
 					if (IS_ERR(vma)) {
 						err = PTR_ERR(vma);
 						goto free_fence;
@@ -2815,11 +2834,12 @@ static void vm_bind_ioctl_ops_unwind(struct xe_vm *vm,
 #ifdef TEST_VM_ASYNC_OPS_ERROR
 #define SUPPORTED_FLAGS	\
 	(FORCE_ASYNC_OP_ERROR | XE_VM_BIND_FLAG_ASYNC | \
-	 XE_VM_BIND_FLAG_READONLY | XE_VM_BIND_FLAG_IMMEDIATE | 0xffff)
+	 XE_VM_BIND_FLAG_READONLY | XE_VM_BIND_FLAG_IMMEDIATE | \
+	 XE_VM_BIND_FLAG_NULL | 0xffff)
 #else
 #define SUPPORTED_FLAGS	\
 	(XE_VM_BIND_FLAG_ASYNC | XE_VM_BIND_FLAG_READONLY | \
-	 XE_VM_BIND_FLAG_IMMEDIATE | 0xffff)
+	 XE_VM_BIND_FLAG_IMMEDIATE | XE_VM_BIND_FLAG_NULL | 0xffff)
 #endif
 #define XE_64K_PAGE_MASK 0xffffull
 
@@ -2865,6 +2885,7 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe,
 		u32 obj = (*bind_ops)[i].obj;
 		u64 obj_offset = (*bind_ops)[i].obj_offset;
 		u32 region = (*bind_ops)[i].region;
+		bool null = op & XE_VM_BIND_FLAG_NULL;
 
 		if (i == 0) {
 			*async = !!(op & XE_VM_BIND_FLAG_ASYNC);
@@ -2891,8 +2912,12 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe,
 		if (XE_IOCTL_ERR(xe, VM_BIND_OP(op) >
 				 XE_VM_BIND_OP_PREFETCH) ||
 		    XE_IOCTL_ERR(xe, op & ~SUPPORTED_FLAGS) ||
+		    XE_IOCTL_ERR(xe, obj && null) ||
+		    XE_IOCTL_ERR(xe, obj_offset && null) ||
+		    XE_IOCTL_ERR(xe, VM_BIND_OP(op) != XE_VM_BIND_OP_MAP &&
+				 null) ||
 		    XE_IOCTL_ERR(xe, !obj &&
-				 VM_BIND_OP(op) == XE_VM_BIND_OP_MAP) ||
+				 VM_BIND_OP(op) == XE_VM_BIND_OP_MAP && !null) ||
 		    XE_IOCTL_ERR(xe, !obj &&
 				 VM_BIND_OP(op) == XE_VM_BIND_OP_UNMAP_ALL) ||
 		    XE_IOCTL_ERR(xe, addr &&
@@ -3211,6 +3236,7 @@ int xe_vm_invalidate_vma(struct xe_vma *vma)
 	int ret;
 
 	XE_BUG_ON(!xe_vm_in_fault_mode(xe_vma_vm(vma)));
+	XE_BUG_ON(xe_vma_is_null(vma));
 	trace_xe_vma_usm_invalidate(vma);
 
 	/* Check that we don't race with page-table updates */
@@ -3269,8 +3295,11 @@ int xe_analyze_vm(struct drm_printer *p, struct xe_vm *vm, int gt_id)
 	drm_gpuva_iter_for_each(it) {
 		struct xe_vma* vma = gpuva_to_vma(it.va);
 		bool is_userptr = xe_vma_is_userptr(vma);
+		bool null = xe_vma_is_null(vma);
 
-		if (is_userptr) {
+		if (null) {
+			addr = 0;
+		} else if (is_userptr) {
 			struct xe_res_cursor cur;
 
 			xe_res_first_sg(vma->userptr.sg, 0, GEN8_PAGE_SIZE, &cur);
@@ -3280,7 +3309,8 @@ int xe_analyze_vm(struct drm_printer *p, struct xe_vm *vm, int gt_id)
 		}
 		drm_printf(p, " [%016llx-%016llx] S:0x%016llx A:%016llx %s\n",
 			   xe_vma_start(vma), xe_vma_end(vma), xe_vma_size(vma),
-			   addr, is_userptr ? "USR" : is_vram ? "VRAM" : "SYS");
+			   addr, null ? "NULL" :
+			   is_userptr ? "USR" : is_vram ? "VRAM" : "SYS");
 	}
 	up_read(&vm->lock);
 
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 21b1054949c4..96e2c6b07bf8 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -175,7 +175,17 @@ static inline void xe_vm_reactivate_rebind(struct xe_vm *vm)
 	}
 }
 
+static inline bool xe_vma_is_null(struct xe_vma *vma)
+{
+	return vma->gpuva.flags & XE_VMA_NULL;
+}
+
 static inline bool xe_vma_is_userptr(struct xe_vma *vma)
+{
+	return !xe_vma_bo(vma) && !xe_vma_is_null(vma);
+}
+
+static inline bool xe_vma_has_no_bo(struct xe_vma *vma)
 {
 	return !xe_vma_bo(vma);
 }
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index 46d1b8d7b72f..29c99136a57f 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -226,7 +226,7 @@ get_vmas(struct xe_vm *vm, int *num_vmas, u64 addr, u64 range)
 	drm_gpuva_iter_for_each_range(it, addr + range) {
 		struct xe_vma *vma = gpuva_to_vma(it.va);
 
-		if (xe_vma_is_userptr(vma))
+		if (xe_vma_has_no_bo(vma))
 			continue;
 
 		if (*num_vmas == max_vmas) {
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index a81dc9a1a7a6..357f887b22ea 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -29,6 +29,7 @@ struct xe_vm;
 #define XE_VMA_ATOMIC_PTE_BIT	(DRM_GPUVA_USERBITS << 2)
 #define XE_VMA_FIRST_REBIND	(DRM_GPUVA_USERBITS << 3)
 #define XE_VMA_LAST_REBIND	(DRM_GPUVA_USERBITS << 4)
+#define XE_VMA_NULL		(DRM_GPUVA_USERBITS << 5)
 
 struct xe_vma {
 	/** @gpuva: Base GPUVA object */
@@ -312,6 +313,8 @@ struct xe_vma_op_map {
 	bool immediate;
 	/** @read_only: Read only */
 	bool read_only;
+	/** @null: NULL (writes dropped, read zero) */
+	bool null;
 };
 
 /** struct xe_vma_op_unmap - VMA unmap operation */
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index b0b80aae3ee8..27c51946fadd 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -447,6 +447,14 @@ struct drm_xe_vm_bind_op {
 	 * than differing the MAP to the page fault handler.
 	 */
 #define XE_VM_BIND_FLAG_IMMEDIATE	(0x1 << 18)
+	/*
+	 * When the NULL flag is set, the page tables are set up with a special
+	 * bit which indicates writes are dropped and all reads return zero. The
+	 * NULL flag is only valid for XE_VM_BIND_OP_MAP operations, the BO
+	 * handle MBZ, and the BO offset MBZ. This flag is intended to implement
+	 * VK sparse bindings.
+	 */
+#define XE_VM_BIND_FLAG_NULL		(0x1 << 19)
 
 	/** @reserved: Reserved */
 	__u64 reserved[2];
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [Intel-xe] [PATCH v5 6/8] drm/xe: Avoid doing rebinds
  2023-04-04  1:42 [Intel-xe] [PATCH v5 0/8] Port Xe to use GPUVA and implement NULL VM binds Matthew Brost
                   ` (4 preceding siblings ...)
  2023-04-04  1:42 ` [Intel-xe] [PATCH v5 5/8] drm/xe: NULL binding implementation Matthew Brost
@ 2023-04-04  1:42 ` Matthew Brost
  2023-04-04  1:42 ` [Intel-xe] [PATCH v5 7/8] drm/xe: Reduce the number list links in xe_vma Matthew Brost
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Matthew Brost @ 2023-04-04  1:42 UTC (permalink / raw)
  To: intel-xe

If we don't change page sizes, we can avoid doing rebinds and instead
just do a partial unbind. The algorithm to determine the page size is
greedy: we assume all pages in the removed VMA use the largest page size
present in the VMA.
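
As a concrete example, if a VMA was originally bound with 2M PTEs and a
2M-aligned hole is unmapped from the middle of it, the prev/next remnants
keep their existing page tables and only the hole itself is unbound.
Below is a minimal sketch of the skip decision, condensed from the
vm_bind_ioctl_ops_parse() changes in this patch; the helper name is made
up here and does not exist in the code:

	/*
	 * Sketch only: a remnant of a munmap-style split may skip its
	 * rebind when the split boundary is aligned to the largest PTE
	 * size assumed for the old VMA. Userptr VMAs are always rebound.
	 */
	static bool can_skip_rebind(u64 boundary, u64 old_max_pte_size,
				    bool is_userptr)
	{
		return !is_userptr && IS_ALIGNED(boundary, old_max_pte_size);
	}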

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_pt.c       |  4 ++
 drivers/gpu/drm/xe/xe_vm.c       | 69 +++++++++++++++++++++++++-------
 drivers/gpu/drm/xe/xe_vm_types.h | 17 ++++----
 3 files changed, 65 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index 4da248484267..43e5a1054411 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -412,6 +412,8 @@ struct xe_pt_stage_bind_walk {
 	/* Input parameters for the walk */
 	/** @vm: The vm we're building for. */
 	struct xe_vm *vm;
+	/** @vma: The vma we are binding for. */
+	struct xe_vma *vma;
 	/** @gt: The gt we're building for. */
 	struct xe_gt *gt;
 	/** @cache: Desired cache level for the ptes */
@@ -688,6 +690,7 @@ xe_pt_stage_bind_entry(struct drm_pt *parent, pgoff_t offset,
 		if (!null)
 			xe_res_next(curs, next - addr);
 		xe_walk->va_curs_start = next;
+		xe_walk->vma->gpuva.flags |= (XE_VMA_PTE_4K << level);
 		*action = ACTION_CONTINUE;
 
 		return ret;
@@ -776,6 +779,7 @@ xe_pt_stage_bind(struct xe_gt *gt, struct xe_vma *vma,
 			.max_level = XE_PT_HIGHEST_LEVEL,
 		},
 		.vm = xe_vma_vm(vma),
+		.vma = vma,
 		.gt = gt,
 		.curs = &curs,
 		.va_curs_start = xe_vma_start(vma),
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 5d74a8aa9e8d..c1e1b443cd2d 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2273,6 +2273,16 @@ static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
 	return vma;
 }
 
+static u64 xe_vma_max_pte_size(struct xe_vma *vma)
+{
+	if (vma->gpuva.flags & XE_VMA_PTE_1G)
+		return SZ_1G;
+	else if (vma->gpuva.flags & XE_VMA_PTE_2M)
+		return SZ_2M;
+
+	return SZ_4K;
+}
+
 /*
  * Parse operations list and create any resources needed for the operations
  * prior to fully commiting to the operations. This setp can fail.
@@ -2349,6 +2359,13 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct xe_engine *e,
 				break;
 			}
 			case DRM_GPUVA_OP_REMAP:
+			{
+				struct xe_vma *old =
+					gpuva_to_vma(op->base.remap.unmap->va);
+
+				op->remap.start = xe_vma_start(old);
+				op->remap.range = xe_vma_size(old);
+
 				if (op->base.remap.prev) {
 					struct xe_vma *vma;
 					bool read_only =
@@ -2367,6 +2384,20 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct xe_engine *e,
 					}
 
 					op->remap.prev = vma;
+
+					/*
+					 * XXX: Not sure why userptr doesn't
+					 * work but really shouldn't be a use
+					 * case.
+					 */
+					op->remap.skip_prev = !xe_vma_is_userptr(old) &&
+						IS_ALIGNED(xe_vma_end(vma), xe_vma_max_pte_size(old));
+					if (op->remap.skip_prev) {
+						op->remap.range -=
+							xe_vma_end(vma) -
+							xe_vma_start(old);
+						op->remap.start = xe_vma_end(vma);
+					}
 				}
 
 				if (op->base.remap.next) {
@@ -2388,20 +2419,16 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct xe_engine *e,
 					}
 
 					op->remap.next = vma;
+					op->remap.skip_next = !xe_vma_is_userptr(old) &&
+						IS_ALIGNED(xe_vma_start(vma), xe_vma_max_pte_size(old));
+					if (op->remap.skip_next)
+						op->remap.range -=
+							xe_vma_end(old) -
+							xe_vma_start(vma);
 				}
-
-				/* XXX: Support no doing remaps */
-				op->remap.start =
-					xe_vma_start(gpuva_to_vma(op->base.remap.unmap->va));
-				op->remap.range =
-					xe_vma_size(gpuva_to_vma(op->base.remap.unmap->va));
 				break;
+			}
 			case DRM_GPUVA_OP_UNMAP:
-				op->unmap.start =
-					xe_vma_start(gpuva_to_vma(op->base.unmap.va));
-				op->unmap.range =
-					xe_vma_size(gpuva_to_vma(op->base.unmap.va));
-				break;
 			case DRM_GPUVA_OP_PREFETCH:
 				/* Nothing to do */
 				break;
@@ -2440,10 +2467,21 @@ static void xe_vma_op_commit(struct xe_vm *vm, struct xe_vma_op *op)
 	case DRM_GPUVA_OP_REMAP:
 		prep_vma_destroy(vm, gpuva_to_vma(op->base.remap.unmap->va),
 				 true);
-		if (op->remap.prev)
+
+		/* Adjust for partial unbind after removing VMA from VM */
+		op->base.remap.unmap->va->va.addr = op->remap.start;
+		op->base.remap.unmap->va->va.range = op->remap.range;
+
+		if (op->remap.prev) {
 			xe_vm_insert_vma(vm, op->remap.prev);
-		if (op->remap.next)
+			if (op->remap.skip_prev)
+				op->remap.prev = NULL;
+		}
+		if (op->remap.next) {
 			xe_vm_insert_vma(vm, op->remap.next);
+			if (op->remap.skip_next)
+				op->remap.next = NULL;
+		}
 		break;
 	case DRM_GPUVA_OP_UNMAP:
 		prep_vma_destroy(vm, gpuva_to_vma(op->base.unmap.va), true);
@@ -2508,9 +2546,10 @@ static int __xe_vma_op_execute(struct xe_vm *vm, struct xe_vma *vma,
 		bool next = !!op->remap.next;
 
 		if (!op->remap.unmap_done) {
-			vm->async_ops.munmap_rebind_inflight = true;
-			if (prev || next)
+			if (prev || next) {
+				vm->async_ops.munmap_rebind_inflight = true;
 				vma->gpuva.flags |= XE_VMA_FIRST_REBIND;
+			}
 			err = xe_vm_unbind(vm, vma, op->engine, op->syncs,
 					   op->num_syncs,
 					   !prev && !next ? op->fence : NULL,
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 357f887b22ea..21c46b71b7d8 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -30,6 +30,9 @@ struct xe_vm;
 #define XE_VMA_FIRST_REBIND	(DRM_GPUVA_USERBITS << 3)
 #define XE_VMA_LAST_REBIND	(DRM_GPUVA_USERBITS << 4)
 #define XE_VMA_NULL		(DRM_GPUVA_USERBITS << 5)
+#define XE_VMA_PTE_4K		(DRM_GPUVA_USERBITS << 6)
+#define XE_VMA_PTE_2M		(DRM_GPUVA_USERBITS << 7)
+#define XE_VMA_PTE_1G		(DRM_GPUVA_USERBITS << 8)
 
 struct xe_vma {
 	/** @gpuva: Base GPUVA object */
@@ -317,14 +320,6 @@ struct xe_vma_op_map {
 	bool null;
 };
 
-/** struct xe_vma_op_unmap - VMA unmap operation */
-struct xe_vma_op_unmap {
-	/** @start: start of the VMA unmap */
-	u64 start;
-	/** @range: range of the VMA unmap */
-	u64 range;
-};
-
 /** struct xe_vma_op_remap - VMA remap operation */
 struct xe_vma_op_remap {
 	/** @prev: VMA preceding part of a split mapping */
@@ -335,6 +330,10 @@ struct xe_vma_op_remap {
 	u64 start;
 	/** @range: range of the VMA unmap */
 	u64 range;
+	/** @skip_prev: skip prev rebind */
+	bool skip_prev;
+	/** @skip_next: skip next rebind */
+	bool skip_next;
 	/** @unmap_done: unmap operation is done */
 	bool unmap_done;
 };
@@ -390,8 +389,6 @@ struct xe_vma_op {
 	union {
 		/** @map: VMA map operation specific data */
 		struct xe_vma_op_map map;
-		/** @unmap: VMA unmap operation specific data */
-		struct xe_vma_op_unmap unmap;
 		/** @remap: VMA remap operation specific data */
 		struct xe_vma_op_remap remap;
 		/** @prefetch: VMA prefetch operation specific data */
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [Intel-xe] [PATCH v5 7/8] drm/xe: Reduce the number list links in xe_vma
  2023-04-04  1:42 [Intel-xe] [PATCH v5 0/8] Port Xe to use GPUVA and implement NULL VM binds Matthew Brost
                   ` (5 preceding siblings ...)
  2023-04-04  1:42 ` [Intel-xe] [PATCH v5 6/8] drm/xe: Avoid doing rebinds Matthew Brost
@ 2023-04-04  1:42 ` Matthew Brost
  2023-04-04  1:42 ` [Intel-xe] [PATCH v5 8/8] drm/xe: Optimize size of xe_vma allocation Matthew Brost
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Matthew Brost @ 2023-04-04  1:42 UTC (permalink / raw)
  To: intel-xe

5 list links in xe_vma can be squashed into a union, as being on the
various lists is mutually exclusive.
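
A conceptual sketch of the idea (placeholder names, not the exact xe_vma
layout):

	struct example_object {
		union {
			/* link used only while on list A */
			struct list_head state_a_link;
			/* link used only while on list B */
			struct list_head state_b_link;
		};
	};

Because the links share storage, only the link currently in use can be
trusted by list_empty(), which is why this patch drops several of the
blanket INIT_LIST_HEAD() calls in xe_vma_create() and checks list_empty()
before removing an entry where needed.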

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_gt_pagefault.c |  2 +-
 drivers/gpu/drm/xe/xe_pt.c           |  5 +-
 drivers/gpu/drm/xe/xe_vm.c           | 29 ++++++------
 drivers/gpu/drm/xe/xe_vm_types.h     | 71 +++++++++++++++-------------
 4 files changed, 55 insertions(+), 52 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c
index cfffe3398fe4..d7bf6b0a0697 100644
--- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
@@ -157,7 +157,7 @@ static int handle_pagefault(struct xe_gt *gt, struct pagefault *pf)
 
 	if (xe_vma_is_userptr(vma) && write_locked) {
 		spin_lock(&vm->userptr.invalidated_lock);
-		list_del_init(&vma->userptr.invalidate_link);
+		list_del_init(&vma->invalidate_link);
 		spin_unlock(&vm->userptr.invalidated_lock);
 
 		ret = xe_vma_userptr_pin_pages(vma);
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index 43e5a1054411..195eb78e7a31 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -1116,8 +1116,7 @@ static int xe_pt_userptr_inject_eagain(struct xe_vma *vma)
 
 		vma->userptr.divisor = divisor << 1;
 		spin_lock(&vm->userptr.invalidated_lock);
-		list_move_tail(&vma->userptr.invalidate_link,
-			       &vm->userptr.invalidated);
+		list_move_tail(&vma->invalidate_link, &vm->userptr.invalidated);
 		spin_unlock(&vm->userptr.invalidated_lock);
 		return true;
 	}
@@ -1724,7 +1723,7 @@ __xe_pt_unbind_vma(struct xe_gt *gt, struct xe_vma *vma, struct xe_engine *e,
 
 		if (!vma->gt_present) {
 			spin_lock(&vm->userptr.invalidated_lock);
-			list_del_init(&vma->userptr.invalidate_link);
+			list_del_init(&vma->invalidate_link);
 			spin_unlock(&vm->userptr.invalidated_lock);
 		}
 		up_read(&vm->userptr.notifier_lock);
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index c1e1b443cd2d..bacad720e7ba 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -677,8 +677,7 @@ static bool vma_userptr_invalidate(struct mmu_interval_notifier *mni,
 	if (!xe_vm_in_fault_mode(vm) &&
 	    !(vma->gpuva.flags & XE_VMA_DESTROYED) && vma->gt_present) {
 		spin_lock(&vm->userptr.invalidated_lock);
-		list_move_tail(&vma->userptr.invalidate_link,
-			       &vm->userptr.invalidated);
+		list_move_tail(&vma->invalidate_link, &vm->userptr.invalidated);
 		spin_unlock(&vm->userptr.invalidated_lock);
 	}
 
@@ -726,8 +725,8 @@ int xe_vm_userptr_pin(struct xe_vm *vm)
 	/* Collect invalidated userptrs */
 	spin_lock(&vm->userptr.invalidated_lock);
 	list_for_each_entry_safe(vma, next, &vm->userptr.invalidated,
-				 userptr.invalidate_link) {
-		list_del_init(&vma->userptr.invalidate_link);
+				 invalidate_link) {
+		list_del_init(&vma->invalidate_link);
 		list_move_tail(&vma->userptr_link, &vm->userptr.repin_list);
 	}
 	spin_unlock(&vm->userptr.invalidated_lock);
@@ -830,12 +829,11 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
 		return vma;
 	}
 
-	/* FIXME: Way to many lists, should be able to reduce this */
+	/*
+	 * userptr_link, destroy_link, notifier.rebind_link,
+	 * invalidate_link
+	 */
 	INIT_LIST_HEAD(&vma->rebind_link);
-	INIT_LIST_HEAD(&vma->unbind_link);
-	INIT_LIST_HEAD(&vma->userptr_link);
-	INIT_LIST_HEAD(&vma->userptr.invalidate_link);
-	INIT_LIST_HEAD(&vma->notifier.rebind_link);
 	INIT_LIST_HEAD(&vma->extobj.link);
 
 	INIT_LIST_HEAD(&vma->gpuva.head);
@@ -953,15 +951,14 @@ static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence)
 	struct xe_vm *vm = xe_vma_vm(vma);
 
 	lockdep_assert_held_write(&vm->lock);
-	XE_BUG_ON(!list_empty(&vma->unbind_link));
 
 	if (xe_vma_is_userptr(vma)) {
 		XE_WARN_ON(!(vma->gpuva.flags & XE_VMA_DESTROYED));
 
 		spin_lock(&vm->userptr.invalidated_lock);
-		list_del_init(&vma->userptr.invalidate_link);
+		if (!list_empty(&vma->invalidate_link))
+			list_del_init(&vma->invalidate_link);
 		spin_unlock(&vm->userptr.invalidated_lock);
-		list_del(&vma->userptr_link);
 	} else if (!xe_vma_is_null(vma)) {
 		xe_bo_assert_held(xe_vma_bo(vma));
 		drm_gpuva_unlink(&vma->gpuva);
@@ -1325,7 +1322,9 @@ void xe_vm_close_and_put(struct xe_vm *vm)
 			continue;
 		}
 
-		list_add_tail(&vma->unbind_link, &contested);
+		if (!list_empty(&vma->destroy_link))
+			list_del_init(&vma->destroy_link);
+		list_add_tail(&vma->destroy_link, &contested);
 	}
 
 	/*
@@ -1353,8 +1352,8 @@ void xe_vm_close_and_put(struct xe_vm *vm)
 	 * Since we hold a refcount to the bo, we can remove and free
 	 * the members safely without locking.
 	 */
-	list_for_each_entry_safe(vma, next_vma, &contested, unbind_link) {
-		list_del_init(&vma->unbind_link);
+	list_for_each_entry_safe(vma, next_vma, &contested, destroy_link) {
+		list_del_init(&vma->destroy_link);
 		xe_vma_destroy_unlocked(vma);
 	}
 
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 21c46b71b7d8..a049dfa9450a 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -50,21 +50,32 @@ struct xe_vma {
 	 */
 	u64 gt_present;
 
-	/** @userptr_link: link into VM repin list if userptr */
-	struct list_head userptr_link;
+	union {
+		/** @userptr_link: link into VM repin list if userptr */
+		struct list_head userptr_link;
 
-	/**
-	 * @rebind_link: link into VM if this VMA needs rebinding, and
-	 * if it's a bo (not userptr) needs validation after a possible
-	 * eviction. Protected by the vm's resv lock.
-	 */
-	struct list_head rebind_link;
+		/**
+		 * @rebind_link: link into VM if this VMA needs rebinding, and
+		 * if it's a bo (not userptr) needs validation after a possible
+		 * eviction. Protected by the vm's resv lock.
+		 */
+		struct list_head rebind_link;
 
-	/**
-	 * @unbind_link: link or list head if an unbind of multiple VMAs, in
-	 * single unbind op, is being done.
-	 */
-	struct list_head unbind_link;
+		/** @destroy_link: link for contested VMAs on VM close */
+		struct list_head destroy_link;
+
+		/** @invalidate_link: Link for the vm::userptr.invalidated list */
+		struct list_head invalidate_link;
+
+		struct {
+			 /*
+			  * @notifier.rebind_link: link for
+			  * vm->notifier.rebind_list, protected by
+			  * vm->notifier.list_lock
+			  */
+			struct list_head rebind_link;
+		} notifier;
+	};
 
 	/** @destroy_cb: callback to destroy VMA when unbind job is done */
 	struct dma_fence_cb destroy_cb;
@@ -72,10 +83,22 @@ struct xe_vma {
 	/** @destroy_work: worker to destroy this BO */
 	struct work_struct destroy_work;
 
+	/** @usm: unified shared memory state */
+	struct {
+		/** @gt_invalidated: VMA has been invalidated */
+		u64 gt_invalidated;
+	} usm;
+
+	struct {
+		/**
+		 * @extobj.link: Link into vm's external object list.
+		 * protected by the vm lock.
+		 */
+		struct list_head link;
+	} extobj;
+
 	/** @userptr: user pointer state */
 	struct {
-		/** @invalidate_link: Link for the vm::userptr.invalidated list */
-		struct list_head invalidate_link;
 		/**
 		 * @notifier: MMU notifier for user pointer (invalidation call back)
 		 */
@@ -96,24 +119,6 @@ struct xe_vma {
 		u32 divisor;
 #endif
 	} userptr;
-
-	/** @usm: unified shared memory state */
-	struct {
-		/** @gt_invalidated: VMA has been invalidated */
-		u64 gt_invalidated;
-	} usm;
-
-	struct {
-		struct list_head rebind_link;
-	} notifier;
-
-	struct {
-		/**
-		 * @extobj.link: Link into vm's external object list.
-		 * protected by the vm lock.
-		 */
-		struct list_head link;
-	} extobj;
 };
 
 struct xe_device;
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [Intel-xe] [PATCH v5 8/8] drm/xe: Optimize size of xe_vma allocation
  2023-04-04  1:42 [Intel-xe] [PATCH v5 0/8] Port Xe to use GPUVA and implement NULL VM binds Matthew Brost
                   ` (6 preceding siblings ...)
  2023-04-04  1:42 ` [Intel-xe] [PATCH v5 7/8] drm/xe: Reduce the number list links in xe_vma Matthew Brost
@ 2023-04-04  1:42 ` Matthew Brost
  2023-04-04  1:44 ` [Intel-xe] ✓ CI.Patch_applied: success for Port Xe to use GPUVA and implement NULL VM binds (rev5) Patchwork
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Matthew Brost @ 2023-04-04  1:42 UTC (permalink / raw)
  To: intel-xe

Reduce gt_mask from a u64 to a u8, only allocate the userptr state when the
VMA is a userptr, and put the destroy callback and destroy worker in a
union.
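
A minimal standalone sketch of the trailing-userptr trick (the toy_vma /
toy_userptr types and toy_alloc_vma() below are invented for illustration;
the real change is the kzalloc() in xe_vma_create() in the diff). Keeping
the userptr state as the last member means the shorter allocation still
covers every field a non-userptr VMA touches:

  #include <stdbool.h>
  #include <stdlib.h>

  struct toy_userptr {
          unsigned long notifier_seq;   /* stand-in for the real fields */
  };

  struct toy_vma {
          unsigned long start, end;
          unsigned char gt_mask;        /* a u8 is plenty for a few GTs */
          /* Must be last: only allocated for userptr VMAs. */
          struct toy_userptr userptr;
  };

  static struct toy_vma *toy_alloc_vma(bool is_userptr)
  {
          size_t size = is_userptr ?
                  sizeof(struct toy_vma) :
                  sizeof(struct toy_vma) - sizeof(struct toy_userptr);

          /* Callers must never touch vma->userptr unless is_userptr. */
          return calloc(1, size);
  }

  int main(void)
  {
          struct toy_vma *bo_vma = toy_alloc_vma(false);
          struct toy_vma *up_vma = toy_alloc_vma(true);

          if (up_vma)
                  up_vma->userptr.notifier_seq = 1;
          free(bo_vma);
          free(up_vma);
          return 0;
  }

The destroy callback and destroy worker can likewise share storage in a
union, which only works because the two are never in use at the same time.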

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_vm.c       | 14 +++--
 drivers/gpu/drm/xe/xe_vm_types.h | 88 +++++++++++++++++---------------
 2 files changed, 57 insertions(+), 45 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index bacad720e7ba..a3ab257614ef 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -814,7 +814,7 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
 				    u64 bo_offset_or_userptr,
 				    u64 start, u64 end,
 				    bool read_only, bool null,
-				    u64 gt_mask)
+				    u8 gt_mask)
 {
 	struct xe_vma *vma;
 	struct xe_gt *gt;
@@ -823,7 +823,11 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
 	XE_BUG_ON(start >= end);
 	XE_BUG_ON(end >= vm->size);
 
-	vma = kzalloc(sizeof(*vma), GFP_KERNEL);
+	if (!bo && !null)	/* userptr */
+		vma = kzalloc(sizeof(*vma), GFP_KERNEL);
+	else
+		vma = kzalloc(sizeof(*vma) - sizeof(struct xe_userptr),
+			      GFP_KERNEL);
 	if (!vma) {
 		vma = ERR_PTR(-ENOMEM);
 		return vma;
@@ -2148,7 +2152,7 @@ static void print_op(struct xe_device *xe, struct drm_gpuva_op *op)
 static struct drm_gpuva_ops *
 vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_bo *bo,
 			 u64 bo_offset_or_userptr, u64 addr, u64 range,
-			 u32 operation, u64 gt_mask, u32 region)
+			 u32 operation, u8 gt_mask, u32 region)
 {
 	struct drm_gem_object *obj = bo ? &bo->ttm.base : NULL;
 	struct ww_acquire_ctx ww;
@@ -2233,7 +2237,7 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_bo *bo,
 }
 
 static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
-			      u64 gt_mask, bool read_only, bool null)
+			      u8 gt_mask, bool read_only, bool null)
 {
 	struct xe_bo *bo = op->gem.obj ? gem_to_xe_bo(op->gem.obj) : NULL;
 	struct xe_vma *vma;
@@ -3177,8 +3181,8 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 		u64 addr = bind_ops[i].addr;
 		u32 op = bind_ops[i].op;
 		u64 obj_offset = bind_ops[i].obj_offset;
-		u64 gt_mask = bind_ops[i].gt_mask;
 		u32 region = bind_ops[i].region;
+		u8 gt_mask = bind_ops[i].gt_mask;
 
 		ops[i] = vm_bind_ioctl_ops_create(vm, bos[i], obj_offset,
 						  addr, range, op, gt_mask,
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index a049dfa9450a..d52f02a457c2 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -34,22 +34,34 @@ struct xe_vm;
 #define XE_VMA_PTE_2M		(DRM_GPUVA_USERBITS << 7)
 #define XE_VMA_PTE_1G		(DRM_GPUVA_USERBITS << 8)
 
+/** struct xe_userptr - User pointer */
+struct xe_userptr {
+	/**
+	 * @notifier: MMU notifier for user pointer (invalidation call back)
+	 */
+	struct mmu_interval_notifier notifier;
+	/** @sgt: storage for a scatter gather table */
+	struct sg_table sgt;
+	/** @sg: allocated scatter gather table */
+	struct sg_table *sg;
+	/** @notifier_seq: notifier sequence number */
+	unsigned long notifier_seq;
+	/**
+	 * @initial_bind: user pointer has been bound at least once.
+	 * write: vm->userptr.notifier_lock in read mode and vm->resv held.
+	 * read: vm->userptr.notifier_lock in write mode or vm->resv held.
+	 */
+	bool initial_bind;
+#if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)
+	u32 divisor;
+#endif
+};
+
+/** struct xe_vma - Virtual memory address */
 struct xe_vma {
 	/** @gpuva: Base GPUVA object */
 	struct drm_gpuva gpuva;
 
-	/** @gt_mask: GT mask of where to create binding for this VMA */
-	u64 gt_mask;
-
-	/**
-	 * @gt_present: GT mask of binding are present for this VMA.
-	 * protected by vm->lock, vm->resv and for userptrs,
-	 * vm->userptr.notifier_lock for writing. Needs either for reading,
-	 * but if reading is done under the vm->lock only, it needs to be held
-	 * in write mode.
-	 */
-	u64 gt_present;
-
 	union {
 		/** @userptr_link: link into VM repin list if userptr */
 		struct list_head userptr_link;
@@ -77,16 +89,29 @@ struct xe_vma {
 		} notifier;
 	};
 
-	/** @destroy_cb: callback to destroy VMA when unbind job is done */
-	struct dma_fence_cb destroy_cb;
+	union {
+		/** @destroy_cb: callback to destroy VMA when unbind job is done */
+		struct dma_fence_cb destroy_cb;
+		/** @destroy_work: worker to destroy this BO */
+		struct work_struct destroy_work;
+	};
 
-	/** @destroy_work: worker to destroy this BO */
-	struct work_struct destroy_work;
+	/** @gt_mask: GT mask of where to create binding for this VMA */
+	u8 gt_mask;
+
+	/**
+	 * @gt_present: GT mask of binding are present for this VMA.
+	 * protected by vm->lock, vm->resv and for userptrs,
+	 * vm->userptr.notifier_lock for writing. Needs either for reading,
+	 * but if reading is done under the vm->lock only, it needs to be held
+	 * in write mode.
+	 */
+	u8 gt_present;
 
 	/** @usm: unified shared memory state */
 	struct {
 		/** @gt_invalidated: VMA has been invalidated */
-		u64 gt_invalidated;
+		u8 gt_invalidated;
 	} usm;
 
 	struct {
@@ -97,28 +122,11 @@ struct xe_vma {
 		struct list_head link;
 	} extobj;
 
-	/** @userptr: user pointer state */
-	struct {
-		/**
-		 * @notifier: MMU notifier for user pointer (invalidation call back)
-		 */
-		struct mmu_interval_notifier notifier;
-		/** @sgt: storage for a scatter gather table */
-		struct sg_table sgt;
-		/** @sg: allocated scatter gather table */
-		struct sg_table *sg;
-		/** @notifier_seq: notifier sequence number */
-		unsigned long notifier_seq;
-		/**
-		 * @initial_bind: user pointer has been bound at least once.
-		 * write: vm->userptr.notifier_lock in read mode and vm->resv held.
-		 * read: vm->userptr.notifier_lock in write mode or vm->resv held.
-		 */
-		bool initial_bind;
-#if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)
-		u32 divisor;
-#endif
-	} userptr;
+	/**
+	 * @userptr: user pointer state, only allocated for VMAs that are
+	 * user pointers
+	 */
+	struct xe_userptr userptr;
 };
 
 struct xe_device;
@@ -382,7 +390,7 @@ struct xe_vma_op {
 	 */
 	struct async_op_fence *fence;
 	/** @gt_mask: gt mask for this operation */
-	u64 gt_mask;
+	u8 gt_mask;
 	/** @flags: operation flags */
 	enum xe_vma_op_flags flags;
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [Intel-xe] ✓ CI.Patch_applied: success for Port Xe to use GPUVA and implement NULL VM binds (rev5)
  2023-04-04  1:42 [Intel-xe] [PATCH v5 0/8] Port Xe to use GPUVA and implement NULL VM binds Matthew Brost
                   ` (7 preceding siblings ...)
  2023-04-04  1:42 ` [Intel-xe] [PATCH v5 8/8] drm/xe: Optimize size of xe_vma allocation Matthew Brost
@ 2023-04-04  1:44 ` Patchwork
  2023-04-04  1:45 ` [Intel-xe] ✓ CI.KUnit: " Patchwork
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2023-04-04  1:44 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe

== Series Details ==

Series: Port Xe to use GPUVA and implement NULL VM binds (rev5)
URL   : https://patchwork.freedesktop.org/series/115217/
State : success

== Summary ==

=== Applying kernel patches on branch 'drm-xe-next' with base: ===
commit 63b79d536e96e045be4f6c63947c7d42e8dbf600
Author:     Lucas De Marchi <lucas.demarchi@intel.com>
AuthorDate: Fri Mar 31 16:09:02 2023 -0700
Commit:     Lucas De Marchi <lucas.demarchi@intel.com>
CommitDate: Mon Apr 3 13:41:08 2023 -0700

    drm/xe: Fix platform order
    
    Platform order in enum xe_platform started to be used by some parts of
    the code, like the GuC/HuC firmware loading logic. The order itself is
    not very important, but it's better to follow a convention: as was
    documented in the comment above the enum, reorder the platforms by
    graphics version. While at it, remove the gen terminology.
    
    v2:
      - Use "graphics version" instead of chronological order (Matt Roper)
      - Also change pciidlist to follow the same order
      - Remove "gen" from comments around enum xe_platform
    
    Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
    Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
    Link: https://lore.kernel.org/r/20230331230902.1603294-1-lucas.demarchi@intel.com
=== git am output follows ===
Applying: maple_tree: split up MA_STATE() macro
Applying: maple_tree: Export mas_preallocate
Applying: drm: manager to keep track of GPUs VA mappings
Applying: drm/xe: Port Xe to GPUVA
Applying: drm/xe: NULL binding implementation
Applying: drm/xe: Avoid doing rebinds
Applying: drm/xe: Reduce the number list links in xe_vma
Applying: drm/xe: Optimize size of xe_vma allocation



^ permalink raw reply	[flat|nested] 21+ messages in thread

* [Intel-xe] ✓ CI.KUnit: success for Port Xe to use GPUVA and implement NULL VM binds (rev5)
  2023-04-04  1:42 [Intel-xe] [PATCH v5 0/8] Port Xe to use GPUVA and implement NULL VM binds Matthew Brost
                   ` (8 preceding siblings ...)
  2023-04-04  1:44 ` [Intel-xe] ✓ CI.Patch_applied: success for Port Xe to use GPUVA and implement NULL VM binds (rev5) Patchwork
@ 2023-04-04  1:45 ` Patchwork
  2023-04-04  1:49 ` [Intel-xe] ✓ CI.Build: " Patchwork
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2023-04-04  1:45 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe

== Series Details ==

Series: Port Xe to use GPUVA and implement NULL VM binds (rev5)
URL   : https://patchwork.freedesktop.org/series/115217/
State : success

== Summary ==

+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
stty: 'standard input': Inappropriate ioctl for device
[01:44:29] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[01:44:33] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make ARCH=um O=.kunit --jobs=48
[01:44:55] Starting KUnit Kernel (1/1)...
[01:44:55] ============================================================
[01:44:55] ==================== xe_bo (2 subtests) ====================
[01:44:55] [SKIPPED] xe_ccs_migrate_kunit
[01:44:55] [SKIPPED] xe_bo_evict_kunit
[01:44:55] ===================== [SKIPPED] xe_bo ======================
[01:44:55] ================== xe_dma_buf (1 subtest) ==================
[01:44:55] [SKIPPED] xe_dma_buf_kunit
[01:44:55] =================== [SKIPPED] xe_dma_buf ===================
[01:44:55] ================== xe_migrate (1 subtest) ==================
[01:44:55] [SKIPPED] xe_migrate_sanity_kunit
[01:44:55] =================== [SKIPPED] xe_migrate ===================
[01:44:55] ============================================================
[01:44:55] Testing complete. Ran 4 tests: skipped: 4
[01:44:55] Elapsed time: 25.954s total, 4.200s configuring, 21.635s building, 0.090s running

+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[01:44:55] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[01:44:56] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make ARCH=um O=.kunit --jobs=48
[01:45:15] Starting KUnit Kernel (1/1)...
[01:45:15] ============================================================
[01:45:15] ============ drm_test_pick_cmdline (2 subtests) ============
[01:45:15] [PASSED] drm_test_pick_cmdline_res_1920_1080_60
[01:45:15] =============== drm_test_pick_cmdline_named  ===============
[01:45:15] [PASSED] NTSC
[01:45:15] [PASSED] NTSC-J
[01:45:15] [PASSED] PAL
[01:45:15] [PASSED] PAL-M
[01:45:15] =========== [PASSED] drm_test_pick_cmdline_named ===========
[01:45:15] ============== [PASSED] drm_test_pick_cmdline ==============
[01:45:15] ================== drm_buddy (6 subtests) ==================
[01:45:15] [PASSED] drm_test_buddy_alloc_limit
[01:45:15] [PASSED] drm_test_buddy_alloc_range
[01:45:15] [PASSED] drm_test_buddy_alloc_optimistic
[01:45:15] [PASSED] drm_test_buddy_alloc_pessimistic
[01:45:15] [PASSED] drm_test_buddy_alloc_smoke
[01:45:15] [PASSED] drm_test_buddy_alloc_pathological
[01:45:15] ==================== [PASSED] drm_buddy ====================
[01:45:15] ============= drm_cmdline_parser (40 subtests) =============
[01:45:15] [PASSED] drm_test_cmdline_force_d_only
[01:45:15] [PASSED] drm_test_cmdline_force_D_only_dvi
[01:45:15] [PASSED] drm_test_cmdline_force_D_only_hdmi
[01:45:15] [PASSED] drm_test_cmdline_force_D_only_not_digital
[01:45:15] [PASSED] drm_test_cmdline_force_e_only
[01:45:15] [PASSED] drm_test_cmdline_res
[01:45:15] [PASSED] drm_test_cmdline_res_vesa
[01:45:15] [PASSED] drm_test_cmdline_res_vesa_rblank
[01:45:15] [PASSED] drm_test_cmdline_res_rblank
[01:45:15] [PASSED] drm_test_cmdline_res_bpp
[01:45:15] [PASSED] drm_test_cmdline_res_refresh
[01:45:15] [PASSED] drm_test_cmdline_res_bpp_refresh
[01:45:15] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[01:45:15] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[01:45:15] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[01:45:15] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[01:45:15] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[01:45:15] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[01:45:15] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[01:45:15] [PASSED] drm_test_cmdline_res_margins_force_on
[01:45:15] [PASSED] drm_test_cmdline_res_vesa_margins
[01:45:15] [PASSED] drm_test_cmdline_name
[01:45:15] [PASSED] drm_test_cmdline_name_bpp
[01:45:15] [PASSED] drm_test_cmdline_name_option
[01:45:15] [PASSED] drm_test_cmdline_name_bpp_option
[01:45:15] [PASSED] drm_test_cmdline_rotate_0
[01:45:15] [PASSED] drm_test_cmdline_rotate_90
[01:45:15] [PASSED] drm_test_cmdline_rotate_180
[01:45:15] [PASSED] drm_test_cmdline_rotate_270
[01:45:15] [PASSED] drm_test_cmdline_hmirror
[01:45:15] [PASSED] drm_test_cmdline_vmirror
[01:45:15] [PASSED] drm_test_cmdline_margin_options
[01:45:15] [PASSED] drm_test_cmdline_multiple_options
[01:45:15] [PASSED] drm_test_cmdline_bpp_extra_and_option
[01:45:15] [PASSED] drm_test_cmdline_extra_and_option
[01:45:15] [PASSED] drm_test_cmdline_freestanding_options
[01:45:15] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[01:45:15] [PASSED] drm_test_cmdline_panel_orientation
[01:45:15] ================ drm_test_cmdline_invalid  =================
[01:45:15] [PASSED] margin_only
[01:45:15] [PASSED] interlace_only
[01:45:15] [PASSED] res_missing_x
[01:45:15] [PASSED] res_missing_y
[01:45:15] [PASSED] res_bad_y
[01:45:15] [PASSED] res_missing_y_bpp
[01:45:15] [PASSED] res_bad_bpp
[01:45:15] [PASSED] res_bad_refresh
[01:45:15] [PASSED] res_bpp_refresh_force_on_off
[01:45:15] [PASSED] res_invalid_mode
[01:45:15] [PASSED] res_bpp_wrong_place_mode
[01:45:15] [PASSED] name_bpp_refresh
[01:45:15] [PASSED] name_refresh
[01:45:15] [PASSED] name_refresh_wrong_mode
[01:45:15] [PASSED] name_refresh_invalid_mode
[01:45:15] [PASSED] rotate_multiple
[01:45:15] [PASSED] rotate_invalid_val
[01:45:15] [PASSED] rotate_truncated
[01:45:15] [PASSED] invalid_option
[01:45:15] [PASSED] invalid_tv_option
[01:45:15] [PASSED] truncated_tv_option
[01:45:15] ============ [PASSED] drm_test_cmdline_invalid =============
[01:45:15] =============== drm_test_cmdline_tv_options  ===============
[01:45:15] [PASSED] NTSC
[01:45:15] [PASSED] NTSC_443
[01:45:15] [PASSED] NTSC_J
[01:45:15] [PASSED] PAL
[01:45:15] [PASSED] PAL_M
[01:45:15] [PASSED] PAL_N
[01:45:15] [PASSED] SECAM
[01:45:15] =========== [PASSED] drm_test_cmdline_tv_options ===========
[01:45:15] =============== [PASSED] drm_cmdline_parser ================
[01:45:15] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[01:45:15] ========== drm_test_get_tv_mode_from_name_valid  ===========
[01:45:15] [PASSED] NTSC
[01:45:15] [PASSED] NTSC-443
[01:45:15] [PASSED] NTSC-J
[01:45:15] [PASSED] PAL
[01:45:15] [PASSED] PAL-M
[01:45:15] [PASSED] PAL-N
[01:45:15] [PASSED] SECAM
[01:45:15] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[01:45:15] [PASSED] drm_test_get_tv_mode_from_name_truncated
[01:45:15] ============ [PASSED] drm_get_tv_mode_from_name ============
[01:45:15] ============= drm_damage_helper (21 subtests) ==============
[01:45:15] [PASSED] drm_test_damage_iter_no_damage
[01:45:15] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[01:45:15] [PASSED] drm_test_damage_iter_no_damage_src_moved
[01:45:15] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[01:45:15] [PASSED] drm_test_damage_iter_no_damage_not_visible
[01:45:15] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[01:45:15] [PASSED] drm_test_damage_iter_no_damage_no_fb
[01:45:15] [PASSED] drm_test_damage_iter_simple_damage
[01:45:15] [PASSED] drm_test_damage_iter_single_damage
[01:45:15] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[01:45:15] [PASSED] drm_test_damage_iter_single_damage_outside_src
[01:45:15] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[01:45:15] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[01:45:15] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[01:45:15] [PASSED] drm_test_damage_iter_single_damage_src_moved
[01:45:15] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[01:45:15] [PASSED] drm_test_damage_iter_damage
[01:45:15] [PASSED] drm_test_damage_iter_damage_one_intersect
[01:45:15] [PASSED] drm_test_damage_iter_damage_one_outside
[01:45:15] [PASSED] drm_test_damage_iter_damage_src_moved
[01:45:15] [PASSED] drm_test_damage_iter_damage_not_visible
[01:45:15] ================ [PASSED] drm_damage_helper ================
[01:45:15] ============== drm_dp_mst_helper (2 subtests) ==============
[01:45:15] ============== drm_test_dp_mst_calc_pbn_mode  ==============
[01:45:15] [PASSED] Clock 154000 BPP 30 DSC disabled
[01:45:15] [PASSED] Clock 234000 BPP 30 DSC disabled
[01:45:15] [PASSED] Clock 297000 BPP 24 DSC disabled
[01:45:15] [PASSED] Clock 332880 BPP 24 DSC enabled
[01:45:15] [PASSED] Clock 324540 BPP 24 DSC enabled
[01:45:15] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[01:45:15] ========= drm_test_dp_mst_sideband_msg_req_decode  =========
[01:45:15] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[01:45:15] [PASSED] DP_POWER_UP_PHY with port number
[01:45:15] [PASSED] DP_POWER_DOWN_PHY with port number
[01:45:15] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[01:45:15] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[01:45:15] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[01:45:15] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[01:45:15] [PASSED] DP_QUERY_PAYLOAD with port number
[01:45:15] [PASSED] DP_QUERY_PAYLOAD with VCPI
[01:45:15] [PASSED] DP_REMOTE_DPCD_READ with port number
[01:45:15] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[01:45:15] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[01:45:15] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[01:45:15] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[01:45:15] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[01:45:15] [PASSED] DP_REMOTE_I2C_READ with port number
[01:45:15] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[01:45:15] [PASSED] DP_REMOTE_I2C_READ with transactions array
[01:45:15] [PASSED] DP_REMOTE_I2C_WRITE with port number
[01:45:15] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[01:45:15] [PASSED] DP_REMOTE_I2C_WRITE with data array
[01:45:15] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[01:45:15] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[01:45:15] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[01:45:15] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[01:45:15] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[01:45:15] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[01:45:15] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[01:45:15] ================ [PASSED] drm_dp_mst_helper ================
[01:45:15] =========== drm_format_helper_test (11 subtests) ===========
[01:45:15] ============== drm_test_fb_xrgb8888_to_gray8  ==============
[01:45:15] [PASSED] single_pixel_source_buffer
[01:45:15] [PASSED] single_pixel_clip_rectangle
[01:45:15] [PASSED] well_known_colors
[01:45:15] [PASSED] destination_pitch
[01:45:15] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[01:45:15] ============= drm_test_fb_xrgb8888_to_rgb332  ==============
[01:45:15] [PASSED] single_pixel_source_buffer
[01:45:15] [PASSED] single_pixel_clip_rectangle
[01:45:15] [PASSED] well_known_colors
[01:45:15] [PASSED] destination_pitch
[01:45:15] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[01:45:15] ============= drm_test_fb_xrgb8888_to_rgb565  ==============
[01:45:15] [PASSED] single_pixel_source_buffer
[01:45:15] [PASSED] single_pixel_clip_rectangle
[01:45:15] [PASSED] well_known_colors
[01:45:15] [PASSED] destination_pitch
[01:45:15] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[01:45:15] ============ drm_test_fb_xrgb8888_to_xrgb1555  =============
[01:45:15] [PASSED] single_pixel_source_buffer
[01:45:15] [PASSED] single_pixel_clip_rectangle
[01:45:15] [PASSED] well_known_colors
[01:45:15] [PASSED] destination_pitch
[01:45:15] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[01:45:15] ============ drm_test_fb_xrgb8888_to_argb1555  =============
[01:45:15] [PASSED] single_pixel_source_buffer
[01:45:15] [PASSED] single_pixel_clip_rectangle
[01:45:15] [PASSED] well_known_colors
[01:45:15] [PASSED] destination_pitch
[01:45:15] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[01:45:15] ============ drm_test_fb_xrgb8888_to_rgba5551  =============
[01:45:15] [PASSED] single_pixel_source_buffer
[01:45:15] [PASSED] single_pixel_clip_rectangle
[01:45:15] [PASSED] well_known_colors
[01:45:15] [PASSED] destination_pitch
[01:45:15] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[01:45:15] ============= drm_test_fb_xrgb8888_to_rgb888  ==============
[01:45:15] [PASSED] single_pixel_source_buffer
[01:45:15] [PASSED] single_pixel_clip_rectangle
[01:45:15] [PASSED] well_known_colors
[01:45:15] [PASSED] destination_pitch
[01:45:15] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[01:45:15] ============ drm_test_fb_xrgb8888_to_argb8888  =============
[01:45:15] [PASSED] single_pixel_source_buffer
[01:45:15] [PASSED] single_pixel_clip_rectangle
[01:45:15] [PASSED] well_known_colors
[01:45:15] [PASSED] destination_pitch
[01:45:15] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[01:45:15] =========== drm_test_fb_xrgb8888_to_xrgb2101010  ===========
[01:45:15] [PASSED] single_pixel_source_buffer
[01:45:15] [PASSED] single_pixel_clip_rectangle
[01:45:15] [PASSED] well_known_colors
[01:45:15] [PASSED] destination_pitch
[01:45:15] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[01:45:15] =========== drm_test_fb_xrgb8888_to_argb2101010  ===========
[01:45:15] [PASSED] single_pixel_source_buffer
[01:45:15] [PASSED] single_pixel_clip_rectangle
[01:45:15] [PASSED] well_known_colors
[01:45:15] [PASSED] destination_pitch
[01:45:15] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[01:45:15] ============== drm_test_fb_xrgb8888_to_mono  ===============
[01:45:15] [PASSED] single_pixel_source_buffer
[01:45:15] [PASSED] single_pixel_clip_rectangle
[01:45:15] [PASSED] well_known_colors
[01:45:15] [PASSED] destination_pitch
[01:45:15] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[01:45:15] ============= [PASSED] drm_format_helper_test ==============
[01:45:15] ================= drm_format (18 subtests) =================
[01:45:15] [PASSED] drm_test_format_block_width_invalid
[01:45:15] [PASSED] drm_test_format_block_width_one_plane
[01:45:15] [PASSED] drm_test_format_block_width_two_plane
[01:45:15] [PASSED] drm_test_format_block_width_three_plane
[01:45:15] [PASSED] drm_test_format_block_width_tiled
[01:45:15] [PASSED] drm_test_format_block_height_invalid
[01:45:15] [PASSED] drm_test_format_block_height_one_plane
[01:45:15] [PASSED] drm_test_format_block_height_two_plane
[01:45:15] [PASSED] drm_test_format_block_height_three_plane
[01:45:15] [PASSED] drm_test_format_block_height_tiled
[01:45:15] [PASSED] drm_test_format_min_pitch_invalid
[01:45:15] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[01:45:15] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[01:45:15] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[01:45:15] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[01:45:15] [PASSED] drm_test_format_min_pitch_two_plane
[01:45:15] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[01:45:15] [PASSED] drm_test_format_min_pitch_tiled
[01:45:15] =================== [PASSED] drm_format ====================
[01:45:15] =============== drm_framebuffer (1 subtest) ================
[01:45:15] =============== drm_test_framebuffer_create  ===============
[01:45:15] [PASSED] ABGR8888 normal sizes
[01:45:15] [PASSED] ABGR8888 max sizes
[01:45:15] [PASSED] ABGR8888 pitch greater than min required
[01:45:15] [PASSED] ABGR8888 pitch less than min required
[01:45:15] [PASSED] ABGR8888 Invalid width
[01:45:15] [PASSED] ABGR8888 Invalid buffer handle
[01:45:15] [PASSED] No pixel format
[01:45:15] [PASSED] ABGR8888 Width 0
[01:45:15] [PASSED] ABGR8888 Height 0
[01:45:15] [PASSED] ABGR8888 Out of bound height * pitch combination
[01:45:15] [PASSED] ABGR8888 Large buffer offset
[01:45:15] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[01:45:15] [PASSED] ABGR8888 Valid buffer modifier
[01:45:15] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[01:45:15] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[01:45:15] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[01:45:15] [PASSED] NV12 Normal sizes
[01:45:15] [PASSED] NV12 Max sizes
[01:45:15] [PASSED] NV12 Invalid pitch
[01:45:15] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[01:45:15] [PASSED] NV12 different  modifier per-plane
[01:45:15] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[01:45:15] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[01:45:15] [PASSED] NV12 Modifier for inexistent plane
[01:45:15] [PASSED] NV12 Handle for inexistent plane
[01:45:15] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[01:45:15] [PASSED] YVU420 Normal sizes
[01:45:15] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[01:45:15] [PASSED] YVU420 Max sizes
[01:45:15] [PASSED] YVU420 Invalid pitch
[01:45:15] [PASSED] YVU420 Different pitches
[01:45:15] [PASSED] YVU420 Different buffer offsets/pitches
[01:45:15] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[01:45:15] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[01:45:15] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[01:45:15] [PASSED] YVU420 Valid modifier
[01:45:15] [PASSED] YVU420 Different modifiers per plane
[01:45:15] [PASSED] YVU420 Modifier for inexistent plane
[01:45:15] [PASSED] X0L2 Normal sizes
[01:45:15] [PASSED] X0L2 Max sizes
[01:45:15] [PASSED] X0L2 Invalid pitch
[01:45:15] [PASSED] X0L2 Pitch greater than minimum required
stty: 'standard input': Inappropriate ioctl for device
[01:45:15] [PASSED] X0L2 Handle for inexistent plane
[01:45:15] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[01:45:15] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[01:45:15] [PASSED] X0L2 Valid modifier
[01:45:15] [PASSED] X0L2 Modifier for inexistent plane
[01:45:15] =========== [PASSED] drm_test_framebuffer_create ===========
[01:45:15] ================= [PASSED] drm_framebuffer =================
[01:45:15] =============== drm-test-managed (1 subtest) ===============
[01:45:15] [PASSED] drm_test_managed_run_action
[01:45:15] ================ [PASSED] drm-test-managed =================
[01:45:15] =================== drm_mm (19 subtests) ===================
[01:45:15] [PASSED] drm_test_mm_init
[01:45:15] [PASSED] drm_test_mm_debug
[01:45:25] [PASSED] drm_test_mm_reserve
[01:45:35] [PASSED] drm_test_mm_insert
[01:45:36] [PASSED] drm_test_mm_replace
[01:45:36] [PASSED] drm_test_mm_insert_range
[01:45:36] [PASSED] drm_test_mm_frag
[01:45:36] [PASSED] drm_test_mm_align
[01:45:36] [PASSED] drm_test_mm_align32
[01:45:36] [PASSED] drm_test_mm_align64
[01:45:36] [PASSED] drm_test_mm_evict
[01:45:36] [PASSED] drm_test_mm_evict_range
[01:45:36] [PASSED] drm_test_mm_topdown
[01:45:36] [PASSED] drm_test_mm_bottomup
[01:45:36] [PASSED] drm_test_mm_lowest
[01:45:36] [PASSED] drm_test_mm_highest
[01:45:37] [PASSED] drm_test_mm_color
[01:45:38] [PASSED] drm_test_mm_color_evict
[01:45:38] [PASSED] drm_test_mm_color_evict_range
[01:45:38] ===================== [PASSED] drm_mm ======================
[01:45:38] ============= drm_modes_analog_tv (4 subtests) =============
[01:45:38] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[01:45:38] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[01:45:38] [PASSED] drm_test_modes_analog_tv_pal_576i
[01:45:38] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[01:45:38] =============== [PASSED] drm_modes_analog_tv ===============
[01:45:38] ============== drm_plane_helper (2 subtests) ===============
[01:45:38] =============== drm_test_check_plane_state  ================
[01:45:38] [PASSED] clipping_simple
[01:45:38] [PASSED] clipping_rotate_reflect
[01:45:38] [PASSED] positioning_simple
[01:45:38] [PASSED] upscaling
[01:45:38] [PASSED] downscaling
[01:45:38] [PASSED] rounding1
[01:45:38] [PASSED] rounding2
[01:45:38] [PASSED] rounding3
[01:45:38] [PASSED] rounding4
[01:45:38] =========== [PASSED] drm_test_check_plane_state ============
[01:45:38] =========== drm_test_check_invalid_plane_state  ============
[01:45:38] [PASSED] positioning_invalid
[01:45:38] [PASSED] upscaling_invalid
[01:45:38] [PASSED] downscaling_invalid
[01:45:38] ======= [PASSED] drm_test_check_invalid_plane_state ========
[01:45:38] ================ [PASSED] drm_plane_helper =================
[01:45:38] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[01:45:38] ====== drm_test_connector_helper_tv_get_modes_check  =======
[01:45:38] [PASSED] None
[01:45:38] [PASSED] PAL
[01:45:38] [PASSED] NTSC
[01:45:38] [PASSED] Both, NTSC Default
[01:45:38] [PASSED] Both, PAL Default
[01:45:38] [PASSED] Both, NTSC Default, with PAL on command-line
[01:45:38] [PASSED] Both, PAL Default, with NTSC on command-line
[01:45:38] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[01:45:38] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[01:45:38] ================== drm_rect (4 subtests) ===================
[01:45:38] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[01:45:38] [PASSED] drm_test_rect_clip_scaled_not_clipped
[01:45:38] [PASSED] drm_test_rect_clip_scaled_clipped
[01:45:38] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[01:45:38] ==================== [PASSED] drm_rect =====================
[01:45:38] ============================================================
[01:45:38] Testing complete. Ran 294 tests: passed: 294
[01:45:38] Elapsed time: 42.849s total, 1.696s configuring, 18.524s building, 22.625s running

+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel



^ permalink raw reply	[flat|nested] 21+ messages in thread

* [Intel-xe] ✓ CI.Build: success for Port Xe to use GPUVA and implement NULL VM binds (rev5)
  2023-04-04  1:42 [Intel-xe] [PATCH v5 0/8] Port Xe to use GPUVA and implement NULL VM binds Matthew Brost
                   ` (9 preceding siblings ...)
  2023-04-04  1:45 ` [Intel-xe] ✓ CI.KUnit: " Patchwork
@ 2023-04-04  1:49 ` Patchwork
  2023-04-04  2:09 ` [Intel-xe] ○ CI.BAT: info " Patchwork
  2023-04-04 13:20 ` [Intel-xe] [PATCH v5 0/8] Port Xe to use GPUVA and implement NULL VM binds Thomas Hellström
  12 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2023-04-04  1:49 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe

== Series Details ==

Series: Port Xe to use GPUVA and implement NULL VM binds (rev5)
URL   : https://patchwork.freedesktop.org/series/115217/
State : success

== Summary ==

+ trap cleanup EXIT
+ cd /kernel
+ git clone https://gitlab.freedesktop.org/drm/xe/ci.git .ci
Cloning into '.ci'...
++ date +%s
+ echo -e '\e[0Ksection_start:1680572748:build_x86_64[collapsed=true]\r\e[0KBuild x86-64'
+ mkdir -p build64
+ cat .ci/kernel/kconfig
+ make O=build64 olddefconfig
make[1]: Entering directory '/kernel/build64'
  GEN     Makefile
  HOSTCC  scripts/basic/fixdep
  HOSTCC  scripts/kconfig/conf.o
  HOSTCC  scripts/kconfig/confdata.o
  HOSTCC  scripts/kconfig/expr.o
  LEX     scripts/kconfig/lexer.lex.c
  YACC    scripts/kconfig/parser.tab.[ch]
  HOSTCC  scripts/kconfig/lexer.lex.o
  HOSTCC  scripts/kconfig/menu.o
  HOSTCC  scripts/kconfig/parser.tab.o
  HOSTCC  scripts/kconfig/preprocess.o
  HOSTCC  scripts/kconfig/symbol.o
  HOSTCC  scripts/kconfig/util.o
  HOSTLD  scripts/kconfig/conf
#
# configuration written to .config
#
make[1]: Leaving directory '/kernel/build64'
++ nproc
+ make O=build64 -j48
make[1]: Entering directory '/kernel/build64'
  GEN     Makefile
  WRAP    arch/x86/include/generated/uapi/asm/bpf_perf_event.h
  WRAP    arch/x86/include/generated/uapi/asm/errno.h
  SYSHDR  arch/x86/include/generated/uapi/asm/unistd_32.h
  WRAP    arch/x86/include/generated/uapi/asm/fcntl.h
  WRAP    arch/x86/include/generated/uapi/asm/ioctl.h
  SYSHDR  arch/x86/include/generated/uapi/asm/unistd_64.h
  WRAP    arch/x86/include/generated/uapi/asm/ioctls.h
  SYSHDR  arch/x86/include/generated/uapi/asm/unistd_x32.h
  SYSTBL  arch/x86/include/generated/asm/syscalls_32.h
  WRAP    arch/x86/include/generated/uapi/asm/ipcbuf.h
  SYSHDR  arch/x86/include/generated/asm/unistd_32_ia32.h
  WRAP    arch/x86/include/generated/uapi/asm/param.h
  SYSHDR  arch/x86/include/generated/asm/unistd_64_x32.h
  WRAP    arch/x86/include/generated/uapi/asm/poll.h
  SYSTBL  arch/x86/include/generated/asm/syscalls_64.h
  WRAP    arch/x86/include/generated/uapi/asm/resource.h
  WRAP    arch/x86/include/generated/uapi/asm/socket.h
  WRAP    arch/x86/include/generated/uapi/asm/sockios.h
  WRAP    arch/x86/include/generated/uapi/asm/termbits.h
  WRAP    arch/x86/include/generated/uapi/asm/termios.h
  WRAP    arch/x86/include/generated/uapi/asm/types.h
  UPD     include/generated/uapi/linux/version.h
  UPD     include/config/kernel.release
  HOSTCC  arch/x86/tools/relocs_32.o
  HOSTCC  arch/x86/tools/relocs_64.o
  WRAP    arch/x86/include/generated/asm/early_ioremap.h
  HOSTCC  arch/x86/tools/relocs_common.o
  UPD     include/generated/compile.h
  WRAP    arch/x86/include/generated/asm/export.h
  WRAP    arch/x86/include/generated/asm/mcs_spinlock.h
  WRAP    arch/x86/include/generated/asm/irq_regs.h
  WRAP    arch/x86/include/generated/asm/kmap_size.h
  WRAP    arch/x86/include/generated/asm/local64.h
  WRAP    arch/x86/include/generated/asm/mmiowb.h
  WRAP    arch/x86/include/generated/asm/module.lds.h
  WRAP    arch/x86/include/generated/asm/rwonce.h
  WRAP    arch/x86/include/generated/asm/unaligned.h
  HOSTCC  scripts/unifdef
  UPD     include/generated/utsrelease.h
  HOSTCC  scripts/kallsyms
  HOSTCC  scripts/sorttable
  HOSTCC  scripts/asn1_compiler
  DESCEND objtool
  HOSTCC  /kernel/build64/tools/objtool/fixdep.o
  HOSTLD  /kernel/build64/tools/objtool/fixdep-in.o
  LINK    /kernel/build64/tools/objtool/fixdep
  INSTALL /kernel/build64/tools/objtool/libsubcmd/include/subcmd/exec-cmd.h
  INSTALL /kernel/build64/tools/objtool/libsubcmd/include/subcmd/help.h
  INSTALL /kernel/build64/tools/objtool/libsubcmd/include/subcmd/pager.h
  INSTALL /kernel/build64/tools/objtool/libsubcmd/include/subcmd/parse-options.h
  INSTALL /kernel/build64/tools/objtool/libsubcmd/include/subcmd/run-command.h
  CC      /kernel/build64/tools/objtool/libsubcmd/exec-cmd.o
  CC      /kernel/build64/tools/objtool/libsubcmd/help.o
  CC      /kernel/build64/tools/objtool/libsubcmd/pager.o
  CC      /kernel/build64/tools/objtool/libsubcmd/parse-options.o
  CC      /kernel/build64/tools/objtool/libsubcmd/run-command.o
  CC      /kernel/build64/tools/objtool/libsubcmd/sigchain.o
  INSTALL libsubcmd_headers
  CC      /kernel/build64/tools/objtool/libsubcmd/subcmd-config.o
  HOSTLD  arch/x86/tools/relocs
  CC      scripts/mod/empty.o
  HOSTCC  scripts/mod/mk_elfconfig
  CC      scripts/mod/devicetable-offsets.s
  HDRINST usr/include/video/edid.h
  HDRINST usr/include/video/sisfb.h
  HDRINST usr/include/video/uvesafb.h
  HDRINST usr/include/drm/amdgpu_drm.h
  HDRINST usr/include/drm/i915_drm.h
  HDRINST usr/include/drm/vgem_drm.h
  HDRINST usr/include/drm/xe_drm.h
  HDRINST usr/include/drm/virtgpu_drm.h
  HDRINST usr/include/drm/omap_drm.h
  HDRINST usr/include/drm/radeon_drm.h
  HDRINST usr/include/drm/tegra_drm.h
  HDRINST usr/include/drm/ivpu_accel.h
  HDRINST usr/include/drm/drm_mode.h
  HDRINST usr/include/drm/exynos_drm.h
  HDRINST usr/include/drm/v3d_drm.h
  HDRINST usr/include/drm/drm_sarea.h
  HDRINST usr/include/drm/qxl_drm.h
  HDRINST usr/include/drm/drm_fourcc.h
  HDRINST usr/include/drm/nouveau_drm.h
  HDRINST usr/include/drm/habanalabs_accel.h
  HDRINST usr/include/drm/vmwgfx_drm.h
  HDRINST usr/include/drm/msm_drm.h
  HDRINST usr/include/drm/etnaviv_drm.h
  HDRINST usr/include/drm/vc4_drm.h
  HDRINST usr/include/drm/panfrost_drm.h
  HDRINST usr/include/drm/lima_drm.h
  HDRINST usr/include/drm/drm.h
  HDRINST usr/include/drm/armada_drm.h
  HDRINST usr/include/mtd/inftl-user.h
  HDRINST usr/include/mtd/nftl-user.h
  HDRINST usr/include/mtd/ubi-user.h
  HDRINST usr/include/mtd/mtd-user.h
  HDRINST usr/include/mtd/mtd-abi.h
  HDRINST usr/include/xen/gntdev.h
  HDRINST usr/include/xen/gntalloc.h
  HDRINST usr/include/xen/evtchn.h
  HDRINST usr/include/xen/privcmd.h
  HDRINST usr/include/asm-generic/auxvec.h
  HDRINST usr/include/asm-generic/bitsperlong.h
  HDRINST usr/include/asm-generic/posix_types.h
  HDRINST usr/include/asm-generic/ioctls.h
  HDRINST usr/include/asm-generic/mman.h
  HDRINST usr/include/asm-generic/shmbuf.h
  HDRINST usr/include/asm-generic/bpf_perf_event.h
  HDRINST usr/include/asm-generic/types.h
  HDRINST usr/include/asm-generic/poll.h
  HDRINST usr/include/asm-generic/msgbuf.h
  HDRINST usr/include/asm-generic/swab.h
  HDRINST usr/include/asm-generic/statfs.h
  HDRINST usr/include/asm-generic/unistd.h
  HDRINST usr/include/asm-generic/hugetlb_encode.h
  HDRINST usr/include/asm-generic/resource.h
  HDRINST usr/include/asm-generic/param.h
  HDRINST usr/include/asm-generic/termbits-common.h
  HDRINST usr/include/asm-generic/sockios.h
  HDRINST usr/include/asm-generic/kvm_para.h
  HDRINST usr/include/asm-generic/errno.h
  HDRINST usr/include/asm-generic/termios.h
  HDRINST usr/include/asm-generic/mman-common.h
  HDRINST usr/include/asm-generic/ioctl.h
  HDRINST usr/include/asm-generic/socket.h
  HDRINST usr/include/asm-generic/signal-defs.h
  HDRINST usr/include/asm-generic/termbits.h
  HDRINST usr/include/asm-generic/int-ll64.h
  HDRINST usr/include/asm-generic/signal.h
  HDRINST usr/include/asm-generic/siginfo.h
  HDRINST usr/include/asm-generic/stat.h
  HDRINST usr/include/asm-generic/int-l64.h
  HDRINST usr/include/asm-generic/errno-base.h
  HDRINST usr/include/asm-generic/fcntl.h
  HDRINST usr/include/asm-generic/setup.h
  HDRINST usr/include/asm-generic/ipcbuf.h
  HDRINST usr/include/asm-generic/sembuf.h
  UPD     scripts/mod/devicetable-offsets.h
  HDRINST usr/include/asm-generic/ucontext.h
  HDRINST usr/include/rdma/mlx5_user_ioctl_cmds.h
  HDRINST usr/include/rdma/irdma-abi.h
  HDRINST usr/include/rdma/mana-abi.h
  HDRINST usr/include/rdma/hfi/hfi1_user.h
  HDRINST usr/include/rdma/hfi/hfi1_ioctl.h
  HDRINST usr/include/rdma/rdma_user_rxe.h
  HDRINST usr/include/rdma/rdma_user_ioctl.h
  HDRINST usr/include/rdma/mlx5_user_ioctl_verbs.h
  HDRINST usr/include/rdma/bnxt_re-abi.h
  HDRINST usr/include/rdma/hns-abi.h
  HDRINST usr/include/rdma/qedr-abi.h
  HDRINST usr/include/rdma/ib_user_ioctl_cmds.h
  HDRINST usr/include/rdma/vmw_pvrdma-abi.h
  HDRINST usr/include/rdma/ib_user_sa.h
  HDRINST usr/include/rdma/ib_user_ioctl_verbs.h
  HDRINST usr/include/rdma/rvt-abi.h
  HDRINST usr/include/rdma/mlx5-abi.h
  HDRINST usr/include/rdma/rdma_netlink.h
  HDRINST usr/include/rdma/erdma-abi.h
  HDRINST usr/include/rdma/rdma_user_ioctl_cmds.h
  HDRINST usr/include/rdma/rdma_user_cm.h
  HDRINST usr/include/rdma/ib_user_verbs.h
  HDRINST usr/include/rdma/efa-abi.h
  HDRINST usr/include/rdma/siw-abi.h
  HDRINST usr/include/rdma/mlx4-abi.h
  HDRINST usr/include/rdma/mthca-abi.h
  HDRINST usr/include/rdma/ib_user_mad.h
  HDRINST usr/include/rdma/ocrdma-abi.h
  HDRINST usr/include/misc/xilinx_sdfec.h
  HDRINST usr/include/rdma/cxgb4-abi.h
  HDRINST usr/include/misc/uacce/hisi_qm.h
  HDRINST usr/include/misc/uacce/uacce.h
  HDRINST usr/include/misc/cxl.h
  HDRINST usr/include/misc/ocxl.h
  HDRINST usr/include/misc/fastrpc.h
  HDRINST usr/include/misc/pvpanic.h
  HDRINST usr/include/linux/i8k.h
  HDRINST usr/include/linux/acct.h
  HDRINST usr/include/linux/atmmpc.h
  HDRINST usr/include/linux/fs.h
  HDRINST usr/include/linux/cifs/cifs_mount.h
  HDRINST usr/include/linux/cifs/cifs_netlink.h
  HDRINST usr/include/linux/if_packet.h
  HDRINST usr/include/linux/route.h
  HDRINST usr/include/linux/patchkey.h
  HDRINST usr/include/linux/tc_ematch/tc_em_cmp.h
  HDRINST usr/include/linux/tc_ematch/tc_em_ipt.h
  HDRINST usr/include/linux/tc_ematch/tc_em_meta.h
  HDRINST usr/include/linux/tc_ematch/tc_em_nbyte.h
  HDRINST usr/include/linux/tc_ematch/tc_em_text.h
  HDRINST usr/include/linux/virtio_pmem.h
  HDRINST usr/include/linux/rkisp1-config.h
  HDRINST usr/include/linux/vhost.h
  HDRINST usr/include/linux/cec-funcs.h
  HDRINST usr/include/linux/ppdev.h
  HDRINST usr/include/linux/isdn/capicmd.h
  MKELF   scripts/mod/elfconfig.h
  HDRINST usr/include/linux/virtio_fs.h
  HDRINST usr/include/linux/netfilter_ipv6.h
  HDRINST usr/include/linux/lirc.h
  HDRINST usr/include/linux/mroute6.h
  HDRINST usr/include/linux/ivtvfb.h
  HDRINST usr/include/linux/nl80211-vnd-intel.h
  HOSTCC  scripts/mod/modpost.o
  HDRINST usr/include/linux/dm-log-userspace.h
  HDRINST usr/include/linux/auxvec.h
  HDRINST usr/include/linux/dccp.h
  HOSTCC  scripts/mod/file2alias.o
  HOSTCC  scripts/mod/sumversion.o
  HDRINST usr/include/linux/virtio_scmi.h
  HDRINST usr/include/linux/atmarp.h
  HDRINST usr/include/linux/arcfb.h
  HDRINST usr/include/linux/nbd-netlink.h
  HDRINST usr/include/linux/sched/types.h
  HDRINST usr/include/linux/tcp.h
  HDRINST usr/include/linux/neighbour.h
  HDRINST usr/include/linux/dlm_device.h
  HDRINST usr/include/linux/wmi.h
  HDRINST usr/include/linux/btrfs_tree.h
  HDRINST usr/include/linux/virtio_crypto.h
  HDRINST usr/include/linux/vbox_err.h
  HDRINST usr/include/linux/edd.h
  HDRINST usr/include/linux/loop.h
  HDRINST usr/include/linux/nvme_ioctl.h
  HDRINST usr/include/linux/mmtimer.h
  HDRINST usr/include/linux/if_pppol2tp.h
  HDRINST usr/include/linux/mtio.h
  HDRINST usr/include/linux/if_arcnet.h
  HDRINST usr/include/linux/romfs_fs.h
  HDRINST usr/include/linux/posix_types.h
  HDRINST usr/include/linux/rtc.h
  HDRINST usr/include/linux/landlock.h
  HDRINST usr/include/linux/gpio.h
  HDRINST usr/include/linux/selinux_netlink.h
  HDRINST usr/include/linux/pps.h
  HDRINST usr/include/linux/ndctl.h
  HDRINST usr/include/linux/virtio_gpu.h
  HDRINST usr/include/linux/android/binderfs.h
  HDRINST usr/include/linux/android/binder.h
  HDRINST usr/include/linux/virtio_vsock.h
  HDRINST usr/include/linux/sound.h
  HDRINST usr/include/linux/vtpm_proxy.h
  HDRINST usr/include/linux/nfs_fs.h
  HDRINST usr/include/linux/elf-fdpic.h
  HDRINST usr/include/linux/adfs_fs.h
  HDRINST usr/include/linux/target_core_user.h
  HDRINST usr/include/linux/netlink_diag.h
  HDRINST usr/include/linux/const.h
  HDRINST usr/include/linux/firewire-cdev.h
  HDRINST usr/include/linux/vdpa.h
  HDRINST usr/include/linux/if_infiniband.h
  HDRINST usr/include/linux/serial.h
  HDRINST usr/include/linux/iio/types.h
  HDRINST usr/include/linux/iio/buffer.h
  HDRINST usr/include/linux/iio/events.h
  HDRINST usr/include/linux/baycom.h
  HDRINST usr/include/linux/major.h
  HDRINST usr/include/linux/atmppp.h
  HDRINST usr/include/linux/ipv6_route.h
  HDRINST usr/include/linux/spi/spidev.h
  HDRINST usr/include/linux/spi/spi.h
  HDRINST usr/include/linux/virtio_ring.h
  HDRINST usr/include/linux/hdlc/ioctl.h
  HDRINST usr/include/linux/remoteproc_cdev.h
  HDRINST usr/include/linux/hyperv.h
  HDRINST usr/include/linux/rpl_iptunnel.h
  HDRINST usr/include/linux/sync_file.h
  HDRINST usr/include/linux/igmp.h
  HDRINST usr/include/linux/v4l2-dv-timings.h
  HDRINST usr/include/linux/virtio_i2c.h
  HDRINST usr/include/linux/xfrm.h
  HDRINST usr/include/linux/capability.h
  HDRINST usr/include/linux/gtp.h
  HDRINST usr/include/linux/xdp_diag.h
  HDRINST usr/include/linux/pkt_cls.h
  HDRINST usr/include/linux/suspend_ioctls.h
  HDRINST usr/include/linux/vt.h
  HDRINST usr/include/linux/loadpin.h
  HDRINST usr/include/linux/dlm_plock.h
  HDRINST usr/include/linux/fb.h
  HDRINST usr/include/linux/max2175.h
  HDRINST usr/include/linux/sunrpc/debug.h
  HDRINST usr/include/linux/gsmmux.h
  HDRINST usr/include/linux/watchdog.h
  HDRINST usr/include/linux/vhost_types.h
  HDRINST usr/include/linux/vduse.h
  HDRINST usr/include/linux/ila.h
  HDRINST usr/include/linux/tdx-guest.h
  HDRINST usr/include/linux/close_range.h
  HDRINST usr/include/linux/ivtv.h
  HDRINST usr/include/linux/cryptouser.h
  HDRINST usr/include/linux/netfilter/xt_string.h
  HDRINST usr/include/linux/netfilter/nfnetlink_compat.h
  HDRINST usr/include/linux/netfilter/nf_nat.h
  HDRINST usr/include/linux/netfilter/xt_recent.h
  HDRINST usr/include/linux/netfilter/xt_addrtype.h
  HDRINST usr/include/linux/netfilter/nf_conntrack_tcp.h
  HDRINST usr/include/linux/netfilter/xt_MARK.h
  HDRINST usr/include/linux/netfilter/xt_SYNPROXY.h
  HDRINST usr/include/linux/netfilter/xt_multiport.h
  HDRINST usr/include/linux/netfilter/nfnetlink.h
  HDRINST usr/include/linux/netfilter/xt_cgroup.h
  HDRINST usr/include/linux/netfilter/nf_synproxy.h
  HDRINST usr/include/linux/netfilter/xt_TCPOPTSTRIP.h
  HDRINST usr/include/linux/netfilter/nfnetlink_log.h
  HDRINST usr/include/linux/netfilter/xt_TPROXY.h
  HDRINST usr/include/linux/netfilter/xt_u32.h
  HDRINST usr/include/linux/netfilter/nfnetlink_osf.h
  HDRINST usr/include/linux/netfilter/xt_ecn.h
  HDRINST usr/include/linux/netfilter/xt_esp.h
  HDRINST usr/include/linux/netfilter/nfnetlink_hook.h
  HDRINST usr/include/linux/netfilter/xt_mac.h
  HDRINST usr/include/linux/netfilter/xt_comment.h
  HDRINST usr/include/linux/netfilter/xt_NFQUEUE.h
  HDRINST usr/include/linux/netfilter/xt_osf.h
  HDRINST usr/include/linux/netfilter/xt_hashlimit.h
  HDRINST usr/include/linux/netfilter/nf_conntrack_sctp.h
  HDRINST usr/include/linux/netfilter/xt_socket.h
  HDRINST usr/include/linux/netfilter/xt_connmark.h
  HDRINST usr/include/linux/netfilter/xt_sctp.h
  HDRINST usr/include/linux/netfilter/xt_tcpudp.h
  HDRINST usr/include/linux/netfilter/xt_DSCP.h
  HDRINST usr/include/linux/netfilter/xt_time.h
  HDRINST usr/include/linux/netfilter/xt_IDLETIMER.h
  HDRINST usr/include/linux/netfilter/xt_policy.h
  HDRINST usr/include/linux/netfilter/xt_rpfilter.h
  HDRINST usr/include/linux/netfilter/xt_nfacct.h
  HDRINST usr/include/linux/netfilter/xt_SECMARK.h
  HDRINST usr/include/linux/netfilter/xt_length.h
  HDRINST usr/include/linux/netfilter/nfnetlink_cthelper.h
  HDRINST usr/include/linux/netfilter/xt_quota.h
  HDRINST usr/include/linux/netfilter/xt_CLASSIFY.h
  HDRINST usr/include/linux/netfilter/xt_ipcomp.h
  HDRINST usr/include/linux/netfilter/xt_iprange.h
  HDRINST usr/include/linux/netfilter/xt_bpf.h
  HDRINST usr/include/linux/netfilter/xt_LOG.h
  HDRINST usr/include/linux/netfilter/xt_rateest.h
  HDRINST usr/include/linux/netfilter/xt_HMARK.h
  HDRINST usr/include/linux/netfilter/xt_CONNSECMARK.h
  HDRINST usr/include/linux/netfilter/xt_CONNMARK.h
  HDRINST usr/include/linux/netfilter/xt_pkttype.h
  HDRINST usr/include/linux/netfilter/xt_ipvs.h
  HDRINST usr/include/linux/netfilter/xt_devgroup.h
  HDRINST usr/include/linux/netfilter/xt_AUDIT.h
  HDRINST usr/include/linux/netfilter/xt_realm.h
  HDRINST usr/include/linux/netfilter/nf_conntrack_common.h
  HDRINST usr/include/linux/netfilter/xt_set.h
  HDRINST usr/include/linux/netfilter/xt_LED.h
  HDRINST usr/include/linux/netfilter/xt_connlabel.h
  HDRINST usr/include/linux/netfilter/xt_owner.h
  HDRINST usr/include/linux/netfilter/xt_dccp.h
  HDRINST usr/include/linux/netfilter/xt_limit.h
  HDRINST usr/include/linux/netfilter/xt_conntrack.h
  HDRINST usr/include/linux/netfilter/xt_TEE.h
  HDRINST usr/include/linux/netfilter/xt_RATEEST.h
  HDRINST usr/include/linux/netfilter/xt_connlimit.h
  HDRINST usr/include/linux/netfilter/ipset/ip_set.h
  HDRINST usr/include/linux/netfilter/ipset/ip_set_list.h
  HDRINST usr/include/linux/netfilter/ipset/ip_set_bitmap.h
  HDRINST usr/include/linux/netfilter/x_tables.h
  HDRINST usr/include/linux/netfilter/ipset/ip_set_hash.h
  HDRINST usr/include/linux/netfilter/xt_dscp.h
  HDRINST usr/include/linux/netfilter/nf_conntrack_ftp.h
  HDRINST usr/include/linux/netfilter/xt_cluster.h
  HDRINST usr/include/linux/netfilter/nf_conntrack_tuple_common.h
  HDRINST usr/include/linux/netfilter/nf_log.h
  HDRINST usr/include/linux/netfilter/xt_tcpmss.h
  HDRINST usr/include/linux/netfilter/xt_NFLOG.h
  HDRINST usr/include/linux/netfilter/xt_l2tp.h
  HDRINST usr/include/linux/netfilter/xt_helper.h
  HDRINST usr/include/linux/netfilter/xt_statistic.h
  HDRINST usr/include/linux/netfilter/nfnetlink_queue.h
  HDRINST usr/include/linux/netfilter/nfnetlink_cttimeout.h
  HDRINST usr/include/linux/netfilter/xt_CT.h
  HDRINST usr/include/linux/netfilter/xt_CHECKSUM.h
  HDRINST usr/include/linux/netfilter/xt_connbytes.h
  HDRINST usr/include/linux/netfilter/xt_state.h
  HDRINST usr/include/linux/netfilter/nf_tables.h
  HDRINST usr/include/linux/netfilter/xt_mark.h
  HDRINST usr/include/linux/netfilter/xt_cpu.h
  HDRINST usr/include/linux/netfilter/nf_tables_compat.h
  HDRINST usr/include/linux/netfilter/xt_physdev.h
  HDRINST usr/include/linux/netfilter/nfnetlink_conntrack.h
  HDRINST usr/include/linux/netfilter/nfnetlink_acct.h
  HDRINST usr/include/linux/netfilter/xt_TCPMSS.h
  HDRINST usr/include/linux/tty_flags.h
  HDRINST usr/include/linux/if_phonet.h
  HDRINST usr/include/linux/elf-em.h
  HDRINST usr/include/linux/vm_sockets.h
  HDRINST usr/include/linux/dlmconstants.h
  HDRINST usr/include/linux/bsg.h
  HDRINST usr/include/linux/matroxfb.h
  HDRINST usr/include/linux/sysctl.h
  HDRINST usr/include/linux/unix_diag.h
  HDRINST usr/include/linux/pcitest.h
  HDRINST usr/include/linux/mman.h
  HDRINST usr/include/linux/if_plip.h
  HDRINST usr/include/linux/virtio_balloon.h
  HDRINST usr/include/linux/pidfd.h
  HDRINST usr/include/linux/f2fs.h
  HDRINST usr/include/linux/x25.h
  HDRINST usr/include/linux/if_cablemodem.h
  HDRINST usr/include/linux/utsname.h
  HDRINST usr/include/linux/counter.h
  HDRINST usr/include/linux/atm_tcp.h
  HDRINST usr/include/linux/atalk.h
  HDRINST usr/include/linux/virtio_rng.h
  HDRINST usr/include/linux/vboxguest.h
  HDRINST usr/include/linux/bpf_perf_event.h
  HDRINST usr/include/linux/ipmi_ssif_bmc.h
  HDRINST usr/include/linux/nfs_mount.h
  HDRINST usr/include/linux/sonet.h
  HDRINST usr/include/linux/netfilter.h
  HDRINST usr/include/linux/keyctl.h
  HDRINST usr/include/linux/nl80211.h
  HDRINST usr/include/linux/misc/bcm_vk.h
  HDRINST usr/include/linux/audit.h
  HDRINST usr/include/linux/tipc_config.h
  HDRINST usr/include/linux/tipc_sockets_diag.h
  HDRINST usr/include/linux/futex.h
  HDRINST usr/include/linux/sev-guest.h
  HDRINST usr/include/linux/ublk_cmd.h
  HDRINST usr/include/linux/types.h
  HDRINST usr/include/linux/virtio_input.h
  HDRINST usr/include/linux/if_slip.h
  HDRINST usr/include/linux/personality.h
  HDRINST usr/include/linux/openat2.h
  HDRINST usr/include/linux/poll.h
  HDRINST usr/include/linux/posix_acl.h
  HDRINST usr/include/linux/smc_diag.h
  HDRINST usr/include/linux/snmp.h
  HDRINST usr/include/linux/errqueue.h
  HDRINST usr/include/linux/if_tunnel.h
  HDRINST usr/include/linux/fanotify.h
  HDRINST usr/include/linux/kernel.h
  HDRINST usr/include/linux/rpl.h
  HDRINST usr/include/linux/rtnetlink.h
  HDRINST usr/include/linux/memfd.h
  HDRINST usr/include/linux/serial_core.h
  HDRINST usr/include/linux/dns_resolver.h
  HDRINST usr/include/linux/pr.h
  HDRINST usr/include/linux/atm_eni.h
  HDRINST usr/include/linux/lp.h
  HDRINST usr/include/linux/virtio_mem.h
  HDRINST usr/include/linux/ultrasound.h
  HDRINST usr/include/linux/sctp.h
  HDRINST usr/include/linux/uio.h
  HDRINST usr/include/linux/tcp_metrics.h
  HDRINST usr/include/linux/wwan.h
  HDRINST usr/include/linux/atmbr2684.h
  HDRINST usr/include/linux/in_route.h
  HDRINST usr/include/linux/qemu_fw_cfg.h
  HDRINST usr/include/linux/if_macsec.h
  HDRINST usr/include/linux/usb/charger.h
  HDRINST usr/include/linux/usb/g_uvc.h
  HDRINST usr/include/linux/usb/gadgetfs.h
  HDRINST usr/include/linux/usb/raw_gadget.h
  HDRINST usr/include/linux/usb/cdc-wdm.h
  HDRINST usr/include/linux/usb/g_printer.h
  HDRINST usr/include/linux/usb/midi.h
  HDRINST usr/include/linux/usb/tmc.h
  HDRINST usr/include/linux/usb/video.h
  HDRINST usr/include/linux/usb/functionfs.h
  HDRINST usr/include/linux/usb/audio.h
  HDRINST usr/include/linux/usb/ch11.h
  HDRINST usr/include/linux/usb/ch9.h
  HDRINST usr/include/linux/usb/cdc.h
  HDRINST usr/include/linux/jffs2.h
  HDRINST usr/include/linux/ax25.h
  HDRINST usr/include/linux/auto_fs.h
  HDRINST usr/include/linux/tiocl.h
  HDRINST usr/include/linux/scc.h
  HDRINST usr/include/linux/psci.h
  HDRINST usr/include/linux/swab.h
  HDRINST usr/include/linux/cec.h
  HDRINST usr/include/linux/kfd_ioctl.h
  HDRINST usr/include/linux/smc.h
  HDRINST usr/include/linux/qrtr.h
  HDRINST usr/include/linux/screen_info.h
  HDRINST usr/include/linux/nfsacl.h
  HDRINST usr/include/linux/seg6_hmac.h
  HDRINST usr/include/linux/gameport.h
  HDRINST usr/include/linux/wireless.h
  HDRINST usr/include/linux/fdreg.h
  HDRINST usr/include/linux/cciss_defs.h
  HDRINST usr/include/linux/serial_reg.h
  HDRINST usr/include/linux/perf_event.h
  HDRINST usr/include/linux/in6.h
  HDRINST usr/include/linux/hid.h
  HDRINST usr/include/linux/netlink.h
  HDRINST usr/include/linux/fuse.h
  HDRINST usr/include/linux/magic.h
  HDRINST usr/include/linux/ioam6_iptunnel.h
  HDRINST usr/include/linux/stm.h
  HDRINST usr/include/linux/vsockmon.h
  HDRINST usr/include/linux/seg6.h
  HDRINST usr/include/linux/idxd.h
  HDRINST usr/include/linux/nitro_enclaves.h
  HDRINST usr/include/linux/ptrace.h
  HDRINST usr/include/linux/ioam6_genl.h
  HDRINST usr/include/linux/qnx4_fs.h
  HDRINST usr/include/linux/fsl_mc.h
  HDRINST usr/include/linux/net_tstamp.h
  HDRINST usr/include/linux/msg.h
  HDRINST usr/include/linux/netfilter_ipv4/ipt_TTL.h
  HDRINST usr/include/linux/netfilter_ipv4/ipt_ttl.h
  HDRINST usr/include/linux/netfilter_ipv4/ipt_ah.h
  HDRINST usr/include/linux/netfilter_ipv4/ipt_ECN.h
  HDRINST usr/include/linux/netfilter_ipv4/ip_tables.h
  HDRINST usr/include/linux/netfilter_ipv4/ipt_ecn.h
  HDRINST usr/include/linux/netfilter_ipv4/ipt_CLUSTERIP.h
  HDRINST usr/include/linux/netfilter_ipv4/ipt_REJECT.h
  HDRINST usr/include/linux/netfilter_ipv4/ipt_LOG.h
  HDRINST usr/include/linux/sem.h
  HDRINST usr/include/linux/net_namespace.h
  HDRINST usr/include/linux/radeonfb.h
  HDRINST usr/include/linux/tee.h
  HDRINST usr/include/linux/udp.h
  HDRINST usr/include/linux/virtio_bt.h
  HDRINST usr/include/linux/v4l2-subdev.h
  HDRINST usr/include/linux/posix_acl_xattr.h
  HDRINST usr/include/linux/v4l2-mediabus.h
  HDRINST usr/include/linux/atmapi.h
  HDRINST usr/include/linux/raid/md_p.h
  HDRINST usr/include/linux/raid/md_u.h
  HDRINST usr/include/linux/nbd.h
  HDRINST usr/include/linux/zorro_ids.h
  HDRINST usr/include/linux/isst_if.h
  HDRINST usr/include/linux/rxrpc.h
  HDRINST usr/include/linux/unistd.h
  HDRINST usr/include/linux/if_arp.h
  HDRINST usr/include/linux/atm_zatm.h
  HDRINST usr/include/linux/io_uring.h
  HDRINST usr/include/linux/if_fddi.h
  HDRINST usr/include/linux/bpqether.h
  HDRINST usr/include/linux/sysinfo.h
  HDRINST usr/include/linux/auto_dev-ioctl.h
  HDRINST usr/include/linux/nfs4_mount.h
  HDRINST usr/include/linux/keyboard.h
  HDRINST usr/include/linux/virtio_mmio.h
  HDRINST usr/include/linux/input.h
  HDRINST usr/include/linux/qnxtypes.h
  HDRINST usr/include/linux/mdio.h
  HDRINST usr/include/linux/lwtunnel.h
  HDRINST usr/include/linux/gfs2_ondisk.h
  HDRINST usr/include/linux/nfs4.h
  HDRINST usr/include/linux/ptp_clock.h
  HDRINST usr/include/linux/nubus.h
  HDRINST usr/include/linux/if_bonding.h
  HDRINST usr/include/linux/kcov.h
  HDRINST usr/include/linux/fadvise.h
  HDRINST usr/include/linux/taskstats.h
  HDRINST usr/include/linux/veth.h
  HDRINST usr/include/linux/atm.h
  HDRINST usr/include/linux/ipmi.h
  HDRINST usr/include/linux/kdev_t.h
  HDRINST usr/include/linux/mount.h
  HDRINST usr/include/linux/shm.h
  HDRINST usr/include/linux/resource.h
  HDRINST usr/include/linux/prctl.h
  HDRINST usr/include/linux/watch_queue.h
  HDRINST usr/include/linux/sched.h
  HDRINST usr/include/linux/phonet.h
  HDRINST usr/include/linux/random.h
  HDRINST usr/include/linux/tty.h
  HDRINST usr/include/linux/apm_bios.h
  HDRINST usr/include/linux/fd.h
  HDRINST usr/include/linux/um_timetravel.h
  HDRINST usr/include/linux/tls.h
  HDRINST usr/include/linux/rpmsg_types.h
  HDRINST usr/include/linux/pfrut.h
  HDRINST usr/include/linux/mei.h
  HDRINST usr/include/linux/fsi.h
  HDRINST usr/include/linux/rds.h
  HDRINST usr/include/linux/if_x25.h
  HDRINST usr/include/linux/param.h
  HDRINST usr/include/linux/netdevice.h
  HDRINST usr/include/linux/binfmts.h
  HDRINST usr/include/linux/if_pppox.h
  HDRINST usr/include/linux/sockios.h
  HDRINST usr/include/linux/kcm.h
  HDRINST usr/include/linux/virtio_9p.h
  HDRINST usr/include/linux/genwqe/genwqe_card.h
  HDRINST usr/include/linux/if_tun.h
  HDRINST usr/include/linux/if_ether.h
  HDRINST usr/include/linux/kvm_para.h
  HDRINST usr/include/linux/kernel-page-flags.h
  HDRINST usr/include/linux/cdrom.h
  HDRINST usr/include/linux/un.h
  HDRINST usr/include/linux/module.h
  HDRINST usr/include/linux/mqueue.h
  HDRINST usr/include/linux/a.out.h
  HDRINST usr/include/linux/input-event-codes.h
  HDRINST usr/include/linux/coda.h
  HDRINST usr/include/linux/rio_mport_cdev.h
  HDRINST usr/include/linux/ipsec.h
  HDRINST usr/include/linux/blkpg.h
  HDRINST usr/include/linux/blkzoned.h
  HDRINST usr/include/linux/netfilter_bridge/ebt_arpreply.h
  HDRINST usr/include/linux/netfilter_bridge/ebt_redirect.h
  HDRINST usr/include/linux/netfilter_bridge/ebt_nflog.h
  HDRINST usr/include/linux/netfilter_bridge/ebt_802_3.h
  HDRINST usr/include/linux/netfilter_bridge/ebt_nat.h
  HDRINST usr/include/linux/netfilter_bridge/ebt_mark_m.h
  HDRINST usr/include/linux/netfilter_bridge/ebtables.h
  HDRINST usr/include/linux/netfilter_bridge/ebt_vlan.h
  HDRINST usr/include/linux/netfilter_bridge/ebt_limit.h
  HDRINST usr/include/linux/netfilter_bridge/ebt_log.h
  HDRINST usr/include/linux/netfilter_bridge/ebt_stp.h
  HDRINST usr/include/linux/netfilter_bridge/ebt_pkttype.h
  HDRINST usr/include/linux/netfilter_bridge/ebt_ip.h
  HDRINST usr/include/linux/netfilter_bridge/ebt_ip6.h
  HDRINST usr/include/linux/netfilter_bridge/ebt_arp.h
  HDRINST usr/include/linux/netfilter_bridge/ebt_mark_t.h
  HDRINST usr/include/linux/netfilter_bridge/ebt_among.h
  HDRINST usr/include/linux/reiserfs_fs.h
  HDRINST usr/include/linux/cciss_ioctl.h
  HDRINST usr/include/linux/fsmap.h
  HDRINST usr/include/linux/smiapp.h
  HDRINST usr/include/linux/switchtec_ioctl.h
  HDRINST usr/include/linux/atmdev.h
  HDRINST usr/include/linux/hpet.h
  HDRINST usr/include/linux/virtio_config.h
  HDRINST usr/include/linux/string.h
  HDRINST usr/include/linux/kfd_sysfs.h
  HDRINST usr/include/linux/inet_diag.h
  HDRINST usr/include/linux/netdev.h
  HDRINST usr/include/linux/xattr.h
  HDRINST usr/include/linux/iommufd.h
  HDRINST usr/include/linux/errno.h
  HDRINST usr/include/linux/icmp.h
  HDRINST usr/include/linux/i2o-dev.h
  HDRINST usr/include/linux/pg.h
  HDRINST usr/include/linux/if_bridge.h
  HDRINST usr/include/linux/thermal.h
  HDRINST usr/include/linux/uinput.h
  HDRINST usr/include/linux/dqblk_xfs.h
  HDRINST usr/include/linux/v4l2-common.h
  HDRINST usr/include/linux/nvram.h
  HDRINST usr/include/linux/if_vlan.h
  HDRINST usr/include/linux/uhid.h
  HDRINST usr/include/linux/omap3isp.h
  HDRINST usr/include/linux/rose.h
  HDRINST usr/include/linux/phantom.h
  HDRINST usr/include/linux/ipmi_msgdefs.h
  HDRINST usr/include/linux/bcm933xx_hcs.h
  HDRINST usr/include/linux/bpf.h
  HDRINST usr/include/linux/mempolicy.h
  HDRINST usr/include/linux/efs_fs_sb.h
  HDRINST usr/include/linux/nexthop.h
  HDRINST usr/include/linux/net_dropmon.h
  HDRINST usr/include/linux/surface_aggregator/cdev.h
  HDRINST usr/include/linux/surface_aggregator/dtx.h
  HDRINST usr/include/linux/net.h
  HDRINST usr/include/linux/mii.h
  HDRINST usr/include/linux/cm4000_cs.h
  HDRINST usr/include/linux/virtio_pcidev.h
  HDRINST usr/include/linux/termios.h
  HDRINST usr/include/linux/cgroupstats.h
  HDRINST usr/include/linux/mpls.h
  HDRINST usr/include/linux/iommu.h
  HDRINST usr/include/linux/toshiba.h
  HDRINST usr/include/linux/virtio_scsi.h
  HDRINST usr/include/linux/zorro.h
  HDRINST usr/include/linux/chio.h
  HDRINST usr/include/linux/pkt_sched.h
  HDRINST usr/include/linux/cramfs_fs.h
  HDRINST usr/include/linux/nfs3.h
  HDRINST usr/include/linux/vfio_ccw.h
  HDRINST usr/include/linux/atm_nicstar.h
  HDRINST usr/include/linux/ncsi.h
  HDRINST usr/include/linux/virtio_net.h
  HDRINST usr/include/linux/ioctl.h
  HDRINST usr/include/linux/stddef.h
  HDRINST usr/include/linux/limits.h
  HDRINST usr/include/linux/ipmi_bmc.h
  HDRINST usr/include/linux/netfilter_arp.h
  HDRINST usr/include/linux/if_addr.h
  HDRINST usr/include/linux/rpmsg.h
  LD      /kernel/build64/tools/objtool/libsubcmd/libsubcmd-in.o
  HDRINST usr/include/linux/media-bus-format.h
  HDRINST usr/include/linux/kernelcapi.h
  HDRINST usr/include/linux/ppp_defs.h
  HDRINST usr/include/linux/ethtool.h
  HDRINST usr/include/linux/aspeed-video.h
  HDRINST usr/include/linux/hdlc.h
  HDRINST usr/include/linux/fscrypt.h
  HDRINST usr/include/linux/batadv_packet.h
  HDRINST usr/include/linux/uuid.h
  HDRINST usr/include/linux/capi.h
  HDRINST usr/include/linux/mptcp.h
  HDRINST usr/include/linux/hidraw.h
  HDRINST usr/include/linux/virtio_console.h
  HDRINST usr/include/linux/irqnr.h
  HDRINST usr/include/linux/coresight-stm.h
  HDRINST usr/include/linux/cxl_mem.h
  HDRINST usr/include/linux/iso_fs.h
  HDRINST usr/include/linux/virtio_blk.h
  HDRINST usr/include/linux/udf_fs_i.h
  HDRINST usr/include/linux/coff.h
  HDRINST usr/include/linux/dma-buf.h
  HDRINST usr/include/linux/ife.h
  HDRINST usr/include/linux/agpgart.h
  HDRINST usr/include/linux/socket.h
  HDRINST usr/include/linux/nilfs2_ondisk.h
  HDRINST usr/include/linux/connector.h
  HDRINST usr/include/linux/auto_fs4.h
  HDRINST usr/include/linux/bt-bmc.h
  AR      /kernel/build64/tools/objtool/libsubcmd/libsubcmd.a
  HDRINST usr/include/linux/map_to_7segment.h
  HDRINST usr/include/linux/tc_act/tc_skbedit.h
  HDRINST usr/include/linux/tc_act/tc_ctinfo.h
  HDRINST usr/include/linux/tc_act/tc_defact.h
  HDRINST usr/include/linux/tc_act/tc_gact.h
  HDRINST usr/include/linux/tc_act/tc_vlan.h
  HDRINST usr/include/linux/tc_act/tc_skbmod.h
  HDRINST usr/include/linux/tc_act/tc_sample.h
  HDRINST usr/include/linux/tc_act/tc_tunnel_key.h
  HDRINST usr/include/linux/tc_act/tc_gate.h
  HDRINST usr/include/linux/tc_act/tc_mirred.h
  HDRINST usr/include/linux/tc_act/tc_nat.h
  HDRINST usr/include/linux/tc_act/tc_csum.h
  HDRINST usr/include/linux/tc_act/tc_connmark.h
  HDRINST usr/include/linux/tc_act/tc_ife.h
  HDRINST usr/include/linux/tc_act/tc_mpls.h
  HDRINST usr/include/linux/tc_act/tc_ct.h
  HDRINST usr/include/linux/tc_act/tc_pedit.h
  HDRINST usr/include/linux/tc_act/tc_bpf.h
  HDRINST usr/include/linux/tc_act/tc_ipt.h
  HDRINST usr/include/linux/netrom.h
  HDRINST usr/include/linux/joystick.h
  HDRINST usr/include/linux/falloc.h
  HDRINST usr/include/linux/cycx_cfm.h
  HDRINST usr/include/linux/msdos_fs.h
  HDRINST usr/include/linux/omapfb.h
  HDRINST usr/include/linux/virtio_types.h
  HDRINST usr/include/linux/mroute.h
  HDRINST usr/include/linux/psample.h
  HDRINST usr/include/linux/ipv6.h
  HDRINST usr/include/linux/dw100.h
  HDRINST usr/include/linux/psp-sev.h
  HDRINST usr/include/linux/vfio.h
  HDRINST usr/include/linux/if_ppp.h
  HDRINST usr/include/linux/byteorder/big_endian.h
  HDRINST usr/include/linux/byteorder/little_endian.h
  HDRINST usr/include/linux/comedi.h
  HDRINST usr/include/linux/scif_ioctl.h
  HDRINST usr/include/linux/timerfd.h
  HDRINST usr/include/linux/time_types.h
  HDRINST usr/include/linux/firewire-constants.h
  HDRINST usr/include/linux/virtio_snd.h
  HDRINST usr/include/linux/ppp-ioctl.h
  HDRINST usr/include/linux/fib_rules.h
  HDRINST usr/include/linux/gen_stats.h
  HDRINST usr/include/linux/virtio_iommu.h
  HDRINST usr/include/linux/genetlink.h
  HDRINST usr/include/linux/uvcvideo.h
  HDRINST usr/include/linux/pfkeyv2.h
  HDRINST usr/include/linux/soundcard.h
  HDRINST usr/include/linux/times.h
  HDRINST usr/include/linux/nfc.h
  HDRINST usr/include/linux/affs_hardblocks.h
  HDRINST usr/include/linux/nilfs2_api.h
  HDRINST usr/include/linux/rseq.h
  HDRINST usr/include/linux/caif/caif_socket.h
  HDRINST usr/include/linux/caif/if_caif.h
  HDRINST usr/include/linux/i2c-dev.h
  HDRINST usr/include/linux/cuda.h
  HDRINST usr/include/linux/cn_proc.h
  CC      /kernel/build64/tools/objtool/weak.o
  HDRINST usr/include/linux/parport.h
  CC      /kernel/build64/tools/objtool/check.o
  HDRINST usr/include/linux/v4l2-controls.h
  HDRINST usr/include/linux/hsi/cs-protocol.h
  CC      /kernel/build64/tools/objtool/special.o
  HDRINST usr/include/linux/hsi/hsi_char.h
  CC      /kernel/build64/tools/objtool/builtin-check.o
  HDRINST usr/include/linux/seg6_genl.h
  HDRINST usr/include/linux/am437x-vpfe.h
  CC      /kernel/build64/tools/objtool/elf.o
  HDRINST usr/include/linux/amt.h
  MKDIR   /kernel/build64/tools/objtool/arch/x86/
  CC      /kernel/build64/tools/objtool/objtool.o
  CC      /kernel/build64/tools/objtool/orc_gen.o
  HDRINST usr/include/linux/netconf.h
  MKDIR   /kernel/build64/tools/objtool/arch/x86/lib/
  HDRINST usr/include/linux/erspan.h
  CC      /kernel/build64/tools/objtool/orc_dump.o
  HDRINST usr/include/linux/nsfs.h
  HDRINST usr/include/linux/xilinx-v4l2-controls.h
  CC      /kernel/build64/tools/objtool/libstring.o
  HDRINST usr/include/linux/aspeed-p2a-ctrl.h
  CC      /kernel/build64/tools/objtool/libctype.o
  CC      /kernel/build64/tools/objtool/arch/x86/special.o
  HDRINST usr/include/linux/vfio_zdev.h
  GEN     /kernel/build64/tools/objtool/arch/x86/lib/inat-tables.c
  HDRINST usr/include/linux/serio.h
  HDRINST usr/include/linux/acrn.h
  CC      /kernel/build64/tools/objtool/str_error_r.o
  CC      /kernel/build64/tools/objtool/librbtree.o
  HDRINST usr/include/linux/nfs2.h
  HDRINST usr/include/linux/virtio_pci.h
  HDRINST usr/include/linux/ipc.h
  HDRINST usr/include/linux/ethtool_netlink.h
  HDRINST usr/include/linux/kd.h
  HDRINST usr/include/linux/elf.h
  HDRINST usr/include/linux/videodev2.h
  HDRINST usr/include/linux/if_alg.h
  HDRINST usr/include/linux/sonypi.h
  HDRINST usr/include/linux/fsverity.h
  HDRINST usr/include/linux/if.h
  HDRINST usr/include/linux/btrfs.h
  HDRINST usr/include/linux/vm_sockets_diag.h
  HDRINST usr/include/linux/netfilter_bridge.h
  HDRINST usr/include/linux/packet_diag.h
  HDRINST usr/include/linux/netfilter_ipv4.h
  HDRINST usr/include/linux/kvm.h
  HDRINST usr/include/linux/pci.h
  HDRINST usr/include/linux/if_addrlabel.h
  HDRINST usr/include/linux/hdlcdrv.h
  HDRINST usr/include/linux/cfm_bridge.h
  HDRINST usr/include/linux/fiemap.h
  HDRINST usr/include/linux/dm-ioctl.h
  HDRINST usr/include/linux/aspeed-lpc-ctrl.h
  HDRINST usr/include/linux/atmioc.h
  HDRINST usr/include/linux/dlm.h
  HDRINST usr/include/linux/pci_regs.h
  HDRINST usr/include/linux/cachefiles.h
  HDRINST usr/include/linux/membarrier.h
  HDRINST usr/include/linux/nfs_idmap.h
  HDRINST usr/include/linux/ip.h
  HDRINST usr/include/linux/atm_he.h
  HDRINST usr/include/linux/nfsd/export.h
  HDRINST usr/include/linux/nfsd/stats.h
  HDRINST usr/include/linux/nfsd/debug.h
  HDRINST usr/include/linux/nfsd/cld.h
  HDRINST usr/include/linux/ip_vs.h
  HDRINST usr/include/linux/vmcore.h
  HDRINST usr/include/linux/vbox_vmmdev_types.h
  HDRINST usr/include/linux/dvb/osd.h
  HDRINST usr/include/linux/dvb/dmx.h
  HDRINST usr/include/linux/dvb/net.h
  HDRINST usr/include/linux/dvb/frontend.h
  HDRINST usr/include/linux/dvb/ca.h
  HDRINST usr/include/linux/dvb/version.h
  HDRINST usr/include/linux/dvb/video.h
  HDRINST usr/include/linux/dvb/audio.h
  HDRINST usr/include/linux/nfs.h
  HDRINST usr/include/linux/if_link.h
  HDRINST usr/include/linux/wait.h
  HDRINST usr/include/linux/icmpv6.h
  HDRINST usr/include/linux/media.h
  HDRINST usr/include/linux/seg6_local.h
  HDRINST usr/include/linux/openvswitch.h
  HDRINST usr/include/linux/atmsap.h
  HDRINST usr/include/linux/bpfilter.h
  HDRINST usr/include/linux/fpga-dfl.h
  HDRINST usr/include/linux/userio.h
  HDRINST usr/include/linux/signal.h
  HDRINST usr/include/linux/map_to_14segment.h
  HDRINST usr/include/linux/hdreg.h
  HDRINST usr/include/linux/utime.h
  HDRINST usr/include/linux/usbdevice_fs.h
  HDRINST usr/include/linux/timex.h
  HDRINST usr/include/linux/if_fc.h
  HDRINST usr/include/linux/reiserfs_xattr.h
  HDRINST usr/include/linux/hw_breakpoint.h
  HDRINST usr/include/linux/quota.h
  HDRINST usr/include/linux/ioprio.h
  HDRINST usr/include/linux/eventpoll.h
  HDRINST usr/include/linux/atmclip.h
  HDRINST usr/include/linux/can.h
  HDRINST usr/include/linux/if_team.h
  HDRINST usr/include/linux/usbip.h
  HDRINST usr/include/linux/stat.h
  CC      /kernel/build64/tools/objtool/arch/x86/decode.o
  HDRINST usr/include/linux/fou.h
  HDRINST usr/include/linux/hash_info.h
  HDRINST usr/include/linux/ppp-comp.h
  HDRINST usr/include/linux/ip6_tunnel.h
  HDRINST usr/include/linux/tipc_netlink.h
  HDRINST usr/include/linux/in.h
  HDRINST usr/include/linux/wireguard.h
  HDRINST usr/include/linux/btf.h
  HDRINST usr/include/linux/batman_adv.h
  HDRINST usr/include/linux/fcntl.h
  HDRINST usr/include/linux/if_ltalk.h
  HDRINST usr/include/linux/i2c.h
  HDRINST usr/include/linux/atm_idt77105.h
  HDRINST usr/include/linux/kexec.h
  HDRINST usr/include/linux/arm_sdei.h
  HDRINST usr/include/linux/netfilter_ipv6/ip6_tables.h
  HDRINST usr/include/linux/netfilter_ipv6/ip6t_ah.h
  HDRINST usr/include/linux/netfilter_ipv6/ip6t_NPT.h
  HDRINST usr/include/linux/netfilter_ipv6/ip6t_rt.h
  HDRINST usr/include/linux/netfilter_ipv6/ip6t_REJECT.h
  HDRINST usr/include/linux/netfilter_ipv6/ip6t_opts.h
  HDRINST usr/include/linux/netfilter_ipv6/ip6t_srh.h
  HDRINST usr/include/linux/netfilter_ipv6/ip6t_LOG.h
  HDRINST usr/include/linux/netfilter_ipv6/ip6t_mh.h
  HDRINST usr/include/linux/netfilter_ipv6/ip6t_HL.h
  HDRINST usr/include/linux/netfilter_ipv6/ip6t_hl.h
  HDRINST usr/include/linux/netfilter_ipv6/ip6t_frag.h
  HDRINST usr/include/linux/netfilter_ipv6/ip6t_ipv6header.h
  HDRINST usr/include/linux/minix_fs.h
  HDRINST usr/include/linux/aio_abi.h
  HDRINST usr/include/linux/pktcdvd.h
  HDRINST usr/include/linux/libc-compat.h
  HDRINST usr/include/linux/atmlec.h
  HDRINST usr/include/linux/signalfd.h
  HDRINST usr/include/linux/bpf_common.h
  HDRINST usr/include/linux/seg6_iptunnel.h
  HDRINST usr/include/linux/synclink.h
  HDRINST usr/include/linux/mpls_iptunnel.h
  HDRINST usr/include/linux/mctp.h
  HDRINST usr/include/linux/if_xdp.h
  HDRINST usr/include/linux/llc.h
  HDRINST usr/include/linux/atmsvc.h
  HDRINST usr/include/linux/sed-opal.h
  HDRINST usr/include/linux/sock_diag.h
  HDRINST usr/include/linux/time.h
  HDRINST usr/include/linux/securebits.h
  HDRINST usr/include/linux/fsl_hypervisor.h
  HDRINST usr/include/linux/if_hippi.h
  HDRINST usr/include/linux/dlm_netlink.h
  HDRINST usr/include/linux/seccomp.h
  HDRINST usr/include/linux/oom.h
  HDRINST usr/include/linux/filter.h
  HDRINST usr/include/linux/inotify.h
  HDRINST usr/include/linux/rfkill.h
  HDRINST usr/include/linux/reboot.h
  HDRINST usr/include/linux/can/vxcan.h
  HDRINST usr/include/linux/can/j1939.h
  HDRINST usr/include/linux/can/netlink.h
  HDRINST usr/include/linux/can/bcm.h
  HDRINST usr/include/linux/can/raw.h
  HDRINST usr/include/linux/can/gw.h
  HDRINST usr/include/linux/can/error.h
  HDRINST usr/include/linux/can/isotp.h
  HDRINST usr/include/linux/if_eql.h
  HDRINST usr/include/linux/hiddev.h
  HDRINST usr/include/linux/blktrace_api.h
  HDRINST usr/include/linux/ccs.h
  HDRINST usr/include/linux/ioam6.h
  HDRINST usr/include/linux/hsr_netlink.h
  HDRINST usr/include/linux/mmc/ioctl.h
  HDRINST usr/include/linux/bfs_fs.h
  HDRINST usr/include/linux/rio_cm_cdev.h
  HDRINST usr/include/linux/uleds.h
  HDRINST usr/include/linux/mrp_bridge.h
  HDRINST usr/include/linux/adb.h
  HDRINST usr/include/linux/pmu.h
  HDRINST usr/include/linux/udmabuf.h
  HDRINST usr/include/linux/kcmp.h
  HDRINST usr/include/linux/dma-heap.h
  HDRINST usr/include/linux/userfaultfd.h
  HDRINST usr/include/linux/netfilter_arp/arpt_mangle.h
  HDRINST usr/include/linux/netfilter_arp/arp_tables.h
  HDRINST usr/include/linux/tipc.h
  HDRINST usr/include/linux/virtio_ids.h
  HDRINST usr/include/linux/l2tp.h
  HDRINST usr/include/linux/devlink.h
  HDRINST usr/include/linux/virtio_gpio.h
  HDRINST usr/include/linux/dcbnl.h
  HDRINST usr/include/linux/cyclades.h
  HDRINST usr/include/sound/intel/avs/tokens.h
  HDRINST usr/include/sound/sof/fw.h
  HDRINST usr/include/sound/sof/abi.h
  HDRINST usr/include/sound/sof/tokens.h
  HDRINST usr/include/sound/sof/header.h
  HDRINST usr/include/sound/usb_stream.h
  HDRINST usr/include/sound/sfnt_info.h
  HDRINST usr/include/sound/asequencer.h
  HDRINST usr/include/sound/tlv.h
  HDRINST usr/include/sound/asound.h
  HDRINST usr/include/sound/asoc.h
  HDRINST usr/include/sound/sb16_csp.h
  HDRINST usr/include/sound/compress_offload.h
  HDRINST usr/include/sound/hdsp.h
  HDRINST usr/include/sound/emu10k1.h
  HDRINST usr/include/sound/snd_ar_tokens.h
  HDRINST usr/include/sound/snd_sst_tokens.h
  HDRINST usr/include/sound/asound_fm.h
  HDRINST usr/include/sound/hdspm.h
  HDRINST usr/include/sound/compress_params.h
  HDRINST usr/include/sound/firewire.h
  HDRINST usr/include/sound/skl-tplg-interface.h
  HDRINST usr/include/scsi/scsi_bsg_ufs.h
  HDRINST usr/include/scsi/scsi_netlink_fc.h
  HDRINST usr/include/scsi/scsi_bsg_mpi3mr.h
  HDRINST usr/include/scsi/fc/fc_ns.h
  HDRINST usr/include/scsi/fc/fc_fs.h
  HDRINST usr/include/scsi/fc/fc_els.h
  HDRINST usr/include/scsi/fc/fc_gs.h
  HDRINST usr/include/scsi/scsi_bsg_fc.h
  HDRINST usr/include/scsi/cxlflash_ioctl.h
  HDRINST usr/include/scsi/scsi_netlink.h
  HDRINST usr/include/linux/version.h
  HDRINST usr/include/asm/processor-flags.h
  HDRINST usr/include/asm/auxvec.h
  HDRINST usr/include/asm/svm.h
  HDRINST usr/include/asm/bitsperlong.h
  HDRINST usr/include/asm/kvm_perf.h
  HDRINST usr/include/asm/mce.h
  HDRINST usr/include/asm/posix_types.h
  HDRINST usr/include/asm/msr.h
  HDRINST usr/include/asm/sigcontext32.h
  HDRINST usr/include/asm/mman.h
  HDRINST usr/include/asm/shmbuf.h
  HDRINST usr/include/asm/e820.h
  HDRINST usr/include/asm/posix_types_64.h
  HDRINST usr/include/asm/vsyscall.h
  HDRINST usr/include/asm/msgbuf.h
  HDRINST usr/include/asm/swab.h
  HDRINST usr/include/asm/statfs.h
  HDRINST usr/include/asm/posix_types_x32.h
  HDRINST usr/include/asm/ptrace.h
  HDRINST usr/include/asm/unistd.h
  HDRINST usr/include/asm/ist.h
  HDRINST usr/include/asm/prctl.h
  HDRINST usr/include/asm/boot.h
  HDRINST usr/include/asm/sigcontext.h
  HDRINST usr/include/asm/posix_types_32.h
  HDRINST usr/include/asm/kvm_para.h
  HDRINST usr/include/asm/a.out.h
  HDRINST usr/include/asm/mtrr.h
  HDRINST usr/include/asm/amd_hsmp.h
  HDRINST usr/include/asm/hwcap2.h
  HDRINST usr/include/asm/ptrace-abi.h
  HDRINST usr/include/asm/vm86.h
  HDRINST usr/include/asm/ldt.h
  HDRINST usr/include/asm/vmx.h
  HDRINST usr/include/asm/perf_regs.h
  HDRINST usr/include/asm/kvm.h
  HDRINST usr/include/asm/debugreg.h
  HDRINST usr/include/asm/signal.h
  HDRINST usr/include/asm/bootparam.h
  HDRINST usr/include/asm/siginfo.h
  HDRINST usr/include/asm/hw_breakpoint.h
  HDRINST usr/include/asm/stat.h
  HDRINST usr/include/asm/setup.h
  HDRINST usr/include/asm/sembuf.h
  HDRINST usr/include/asm/sgx.h
  HDRINST usr/include/asm/ucontext.h
  HDRINST usr/include/asm/byteorder.h
  HDRINST usr/include/asm/unistd_64.h
  HDRINST usr/include/asm/ioctls.h
  HDRINST usr/include/asm/bpf_perf_event.h
  HDRINST usr/include/asm/types.h
  HDRINST usr/include/asm/poll.h
  HDRINST usr/include/asm/resource.h
  HDRINST usr/include/asm/param.h
  HDRINST usr/include/asm/sockios.h
  HDRINST usr/include/asm/errno.h
  HDRINST usr/include/asm/unistd_x32.h
  HDRINST usr/include/asm/termios.h
  HDRINST usr/include/asm/ioctl.h
  HDRINST usr/include/asm/socket.h
  HDRINST usr/include/asm/unistd_32.h
  HDRINST usr/include/asm/termbits.h
  HDRINST usr/include/asm/fcntl.h
  HDRINST usr/include/asm/ipcbuf.h
  HOSTLD  scripts/mod/modpost
  CC      kernel/bounds.s
  CHKSHA1 ../include/linux/atomic/atomic-arch-fallback.h
  CHKSHA1 ../include/linux/atomic/atomic-instrumented.h
  CHKSHA1 ../include/linux/atomic/atomic-long.h
  UPD     include/generated/timeconst.h
  UPD     include/generated/bounds.h
  CC      arch/x86/kernel/asm-offsets.s
  LD      /kernel/build64/tools/objtool/arch/x86/objtool-in.o
  UPD     include/generated/asm-offsets.h
  CALL    ../scripts/checksyscalls.sh
  LD      /kernel/build64/tools/objtool/objtool-in.o
  LINK    /kernel/build64/tools/objtool/objtool
  LDS     scripts/module.lds
  CC      ipc/compat.o
  CC      init/main.o
  CC      ipc/util.o
  CC      init/do_mounts.o
  CC      ipc/msgutil.o
  CC      init/do_mounts_initrd.o
  HOSTCC  usr/gen_init_cpio
  CC      ipc/msg.o
  CC      init/initramfs.o
  CC      ipc/sem.o
  AR      certs/built-in.a
  UPD     init/utsversion-tmp.h
  CC      init/calibrate.o
  CC      init/init_task.o
  CC      ipc/shm.o
  CC      ipc/syscall.o
  AS      arch/x86/lib/clear_page_64.o
  CC      io_uring/io_uring.o
  CC      security/commoncap.o
  CC      ipc/ipc_sysctl.o
  CC      init/version.o
  CC      ipc/mqueue.o
  CC      arch/x86/lib/cmdline.o
  CC      block/bdev.o
  CC      io_uring/xattr.o
  CC      security/min_addr.o
  AR      arch/x86/video/built-in.a
  CC      arch/x86/power/cpu.o
  CC      security/keys/gc.o
  AR      arch/x86/net/built-in.a
  CC      arch/x86/pci/i386.o
  CC      arch/x86/realmode/init.o
  AR      arch/x86/ia32/built-in.a
  AR      virt/lib/built-in.a
  CC      block/partitions/core.o
  CC      fs/nfs_common/grace.o
  AS      arch/x86/crypto/aesni-intel_asm.o
  CC      arch/x86/mm/pat/set_memory.o
  CC      fs/notify/dnotify/dnotify.o
  AR      drivers/irqchip/built-in.a
  CC [M]  arch/x86/video/fbdev.o
  CC      arch/x86/events/amd/core.o
  AR      arch/x86/platform/atom/built-in.a
  CC [M]  virt/lib/irqbypass.o
  CC      arch/x86/kernel/fpu/init.o
  CC      net/core/sock.o
  CC      sound/core/sound.o
  CC      sound/core/seq/seq.o
  AR      arch/x86/platform/ce4100/built-in.a
  CC      sound/core/seq/seq_lock.o
  CC      arch/x86/platform/efi/memmap.o
  CC      lib/kunit/test.o
  CC      arch/x86/entry/vdso/vma.o
  AR      drivers/bus/mhi/built-in.a
  CC      lib/kunit/resource.o
  AR      drivers/bus/built-in.a
  CC      mm/kasan/common.o
  CC      kernel/sched/core.o
  CC      crypto/api.o
  AR      drivers/phy/allwinner/built-in.a
  CC      arch/x86/crypto/aesni-intel_glue.o
  AR      drivers/phy/amlogic/built-in.a
  AR      drivers/phy/broadcom/built-in.a
  AR      drivers/phy/cadence/built-in.a
  AR      drivers/phy/freescale/built-in.a
  AS      arch/x86/lib/cmpxchg16b_emu.o
  AR      drivers/phy/hisilicon/built-in.a
  CC      arch/x86/lib/copy_mc.o
  AR      drivers/phy/ingenic/built-in.a
  AR      drivers/phy/intel/built-in.a
  AR      drivers/phy/lantiq/built-in.a
  AR      drivers/phy/marvell/built-in.a
  AR      drivers/phy/mediatek/built-in.a
  AR      drivers/phy/microchip/built-in.a
  AR      drivers/phy/motorola/built-in.a
  CC      arch/x86/platform/efi/quirks.o
  AR      drivers/phy/mscc/built-in.a
  AR      drivers/phy/qualcomm/built-in.a
  AR      drivers/phy/ralink/built-in.a
  GEN     usr/initramfs_data.cpio
  AR      drivers/phy/renesas/built-in.a
  COPY    usr/initramfs_inc_data
  AS      usr/initramfs_data.o
  AR      drivers/phy/rockchip/built-in.a
  AR      drivers/phy/samsung/built-in.a
  AR      usr/built-in.a
  AR      drivers/phy/socionext/built-in.a
  AR      drivers/phy/st/built-in.a
  CC      kernel/locking/mutex.o
  CC      io_uring/nop.o
  AR      drivers/phy/sunplus/built-in.a
  AR      drivers/phy/tegra/built-in.a
  AR      drivers/phy/ti/built-in.a
  AR      drivers/phy/xilinx/built-in.a
  CC      drivers/phy/phy-core.o
  CC      kernel/locking/semaphore.o
  AS      arch/x86/lib/copy_mc_64.o
  AS      arch/x86/lib/copy_page_64.o
  AR      virt/built-in.a
  AS      arch/x86/lib/copy_user_64.o
  CC      lib/kunit/static_stub.o
  CC      sound/core/seq/seq_clientmgr.o
  AR      arch/x86/platform/geode/built-in.a
  CC      arch/x86/lib/cpu.o
  CC      security/keys/key.o
  CC      net/core/request_sock.o
  CC      kernel/power/qos.o
  CC      arch/x86/kernel/fpu/bugs.o
  AS      arch/x86/realmode/rm/header.o
  CC      kernel/printk/printk.o
  AS      arch/x86/realmode/rm/trampoline_64.o
  CC      kernel/printk/printk_safe.o
  AS      arch/x86/realmode/rm/stack.o
  CC      security/keys/keyring.o
  AS      arch/x86/realmode/rm/reboot.o
  CC      security/inode.o
  CC      kernel/irq/irqdesc.o
  CC      mm/kasan/report.o
  AS      arch/x86/realmode/rm/wakeup_asm.o
  AR      fs/notify/dnotify/built-in.a
  CC      fs/notify/inotify/inotify_fsnotify.o
  CC      arch/x86/realmode/rm/wakemain.o
  CC      sound/core/seq/seq_memory.o
  AR      fs/nfs_common/built-in.a
  CC      arch/x86/pci/init.o
  CC      arch/x86/lib/delay.o
  CC      arch/x86/kernel/fpu/core.o
  CC      fs/notify/inotify/inotify_user.o
  CC      arch/x86/kernel/fpu/regset.o
  CC      arch/x86/platform/efi/efi.o
  CC      arch/x86/realmode/rm/video-mode.o
  CC      arch/x86/platform/efi/efi_64.o
  CC      crypto/cipher.o
  CC      arch/x86/entry/vdso/extable.o
  CC      arch/x86/power/hibernate_64.o
  CC      kernel/power/main.o
  CC      block/partitions/ldm.o
  CC      block/fops.o
  AS      arch/x86/realmode/rm/copy.o
  CC      block/bio.o
  AS      arch/x86/realmode/rm/bioscall.o
  CC      arch/x86/realmode/rm/regs.o
  CC      lib/kunit/string-stream.o
  CC      arch/x86/realmode/rm/video-vga.o
  AS      arch/x86/lib/getuser.o
  CC      arch/x86/mm/init.o
  CC      io_uring/fs.o
  AS      arch/x86/crypto/aesni-intel_avx-x86_64.o
  CC      arch/x86/events/amd/lbr.o
  GEN     arch/x86/lib/inat-tables.c
  CC      arch/x86/realmode/rm/video-vesa.o
  CC      arch/x86/lib/insn-eval.o
  CC      block/partitions/msdos.o
  CC      arch/x86/lib/insn.o
  AS      arch/x86/lib/memcpy_64.o
  CC      arch/x86/mm/init_64.o
  CC      ipc/namespace.o
  AS      arch/x86/lib/memmove_64.o
  CC      arch/x86/realmode/rm/video-bios.o
  CC      arch/x86/events/amd/ibs.o
  CC      ipc/mq_sysctl.o
  AS      arch/x86/platform/efi/efi_stub_64.o
  CC      crypto/compress.o
  CC      lib/math/div64.o
  CC      crypto/algapi.o
  CC      lib/math/gcd.o
  PASYMS  arch/x86/realmode/rm/pasyms.h
  LDS     arch/x86/realmode/rm/realmode.lds
  LD      arch/x86/realmode/rm/realmode.elf
  CC      arch/x86/pci/mmconfig_64.o
  RELOCS  arch/x86/realmode/rm/realmode.relocs
  OBJCOPY arch/x86/realmode/rm/realmode.bin
  AS      arch/x86/realmode/rmpiggy.o
  AR      sound/i2c/other/built-in.a
  AR      sound/i2c/built-in.a
  CC      lib/math/lcm.o
  AR      drivers/phy/built-in.a
  AR      arch/x86/realmode/built-in.a
  AS      arch/x86/crypto/aes_ctrby8_avx-x86_64.o
  CC      arch/x86/pci/direct.o
  CC      security/device_cgroup.o
  AR      drivers/pinctrl/actions/built-in.a
  AR      drivers/pinctrl/bcm/built-in.a
  AR      drivers/pinctrl/cirrus/built-in.a
  CC      lib/math/int_pow.o
  AR      drivers/pinctrl/freescale/built-in.a
  CC      arch/x86/pci/mmconfig-shared.o
  CC      drivers/pinctrl/intel/pinctrl-baytrail.o
  CC      mm/filemap.o
  AS [M]  arch/x86/crypto/ghash-clmulni-intel_asm.o
  CC      mm/kasan/init.o
  CC      lib/math/int_sqrt.o
  CC      lib/kunit/assert.o
  CC      kernel/irq/handle.o
  CC      arch/x86/kernel/fpu/signal.o
  CC      drivers/pinctrl/intel/pinctrl-intel.o
  CC [M]  arch/x86/crypto/ghash-clmulni-intel_glue.o
  AR      init/built-in.a
  CC      arch/x86/kernel/fpu/xstate.o
  CC      arch/x86/entry/vdso/vdso32-setup.o
  AR      sound/drivers/opl3/built-in.a
  AS      arch/x86/lib/memset_64.o
  AR      sound/drivers/opl4/built-in.a
  CC      lib/math/reciprocal_div.o
  CC      arch/x86/lib/misc.o
  AR      sound/drivers/mpu401/built-in.a
  AS      arch/x86/power/hibernate_asm_64.o
  AR      sound/drivers/vx/built-in.a
  AR      sound/drivers/pcsp/built-in.a
  LDS     arch/x86/entry/vdso/vdso.lds
  AR      sound/drivers/built-in.a
  CC      arch/x86/power/hibernate.o
  CC      lib/crypto/memneq.o
  AR      drivers/pinctrl/mediatek/built-in.a
  AS      arch/x86/entry/vdso/vdso-note.o
  CC      kernel/printk/printk_ringbuffer.o
  CC [M]  arch/x86/kvm/../../../virt/kvm/kvm_main.o
  CC      lib/math/rational.o
  CC      lib/crypto/utils.o
  CC      kernel/printk/sysctl.o
  CC      arch/x86/mm/pat/memtype.o
  AR      arch/x86/platform/efi/built-in.a
  AR      arch/x86/platform/iris/built-in.a
  CC      arch/x86/platform/intel/iosf_mbi.o
  AR      arch/x86/platform/intel-mid/built-in.a
  AR      fs/notify/inotify/built-in.a
  CC      lib/kunit/try-catch.o
  CC      arch/x86/entry/vdso/vclock_gettime.o
  CC      kernel/power/console.o
  CC      fs/notify/fanotify/fanotify.o
  CC      kernel/power/process.o
  CC      security/keys/keyctl.o
  CC      lib/kunit/executor.o
  CC      io_uring/splice.o
  CC      lib/zlib_inflate/inffast.o
  CC      lib/zlib_deflate/deflate.o
  CC      lib/lzo/lzo1x_compress.o
  CC      sound/core/seq/seq_queue.o
  CC      kernel/power/suspend.o
  CC      lib/lzo/lzo1x_decompress_safe.o
  CC      arch/x86/lib/pc-conf-reg.o
  CC      kernel/power/hibernate.o
  CC [M]  lib/math/prime_numbers.o
  CC      block/partitions/efi.o
  CC      arch/x86/mm/fault.o
  CC      block/elevator.o
  AS [M]  arch/x86/crypto/crc32-pclmul_asm.o
  CC      lib/crypto/chacha.o
  CC      crypto/scatterwalk.o
  CC      kernel/irq/manage.o
  CC [M]  arch/x86/crypto/crc32-pclmul_glue.o
  CC      lib/kunit/hooks.o
  CC      lib/zlib_inflate/inflate.o
  AR      ipc/built-in.a
  CC      arch/x86/mm/ioremap.o
  CC      net/core/skbuff.o
  CC      lib/crypto/aes.o
  AS      arch/x86/lib/putuser.o
  AS      arch/x86/lib/retpoline.o
  CC      arch/x86/events/amd/uncore.o
  CC      arch/x86/entry/vdso/vgetcpu.o
  CC      arch/x86/lib/usercopy.o
  AR      arch/x86/power/built-in.a
  CC      mm/kasan/generic.o
  AR      arch/x86/platform/intel-quark/built-in.a
  CC      io_uring/sync.o
  AR      arch/x86/platform/olpc/built-in.a
  CC      io_uring/advise.o
  HOSTCC  arch/x86/entry/vdso/vdso2c
  CC      drivers/gpio/gpiolib.o
  CC      io_uring/filetable.o
  AR      lib/kunit/built-in.a
  CC      arch/x86/pci/fixup.o
  CC      io_uring/openclose.o
  CC      kernel/locking/rwsem.o
  CC      kernel/locking/percpu-rwsem.o
  CC      lib/lz4/lz4_compress.o
  AR      arch/x86/platform/intel/built-in.a
  AR      arch/x86/platform/scx200/built-in.a
  AR      lib/lzo/built-in.a
  AR      arch/x86/platform/ts5500/built-in.a
  CC      crypto/proc.o
  CC      lib/zlib_deflate/deftree.o
  AR      arch/x86/platform/uv/built-in.a
  AR      arch/x86/platform/built-in.a
  CC      lib/lz4/lz4hc_compress.o
  CC      kernel/rcu/update.o
  AS [M]  arch/x86/crypto/crct10dif-pcl-asm_64.o
  CC      arch/x86/mm/pat/memtype_interval.o
  CC [M]  arch/x86/crypto/crct10dif-pclmul_glue.o
  CC      kernel/rcu/sync.o
  CC      arch/x86/entry/vsyscall/vsyscall_64.o
  AR      lib/math/built-in.a
  CC      kernel/rcu/srcutree.o
  CC      lib/zstd/zstd_compress_module.o
  AR      kernel/printk/built-in.a
  CC      net/llc/llc_core.o
  CC      lib/zstd/compress/fse_compress.o
  AR      kernel/livepatch/built-in.a
  CC      crypto/aead.o
  LDS     arch/x86/entry/vdso/vdso32/vdso32.lds
  CC      crypto/geniv.o
  AS      arch/x86/entry/vdso/vdso32/note.o
  CC      sound/core/seq/seq_fifo.o
  AS      arch/x86/entry/vdso/vdso32/system_call.o
  CC      fs/notify/fanotify/fanotify_user.o
  CC [M]  drivers/pinctrl/intel/pinctrl-cherryview.o
  AS      arch/x86/entry/vdso/vdso32/sigreturn.o
  AR      arch/x86/kernel/fpu/built-in.a
  CC      arch/x86/lib/usercopy_64.o
  CC      arch/x86/kernel/cpu/mce/core.o
  CC      arch/x86/entry/vdso/vdso32/vclock_gettime.o
  CC      lib/zlib_inflate/infutil.o
  CC      kernel/dma/mapping.o
  CC      kernel/locking/irqflag-debug.o
  CC      lib/crypto/gf128mul.o
  AR      block/partitions/built-in.a
  CC [M]  drivers/pinctrl/intel/pinctrl-broxton.o
  CC      arch/x86/kernel/cpu/mtrr/mtrr.o
  AR      sound/isa/ad1816a/built-in.a
  AR      sound/isa/ad1848/built-in.a
  LD [M]  arch/x86/crypto/ghash-clmulni-intel.o
  CC      security/keys/permission.o
  AR      sound/isa/cs423x/built-in.a
  LD [M]  arch/x86/crypto/crc32-pclmul.o
  CC      arch/x86/events/intel/core.o
  AR      sound/isa/es1688/built-in.a
  CC      arch/x86/events/intel/bts.o
  LD [M]  arch/x86/crypto/crct10dif-pclmul.o
  AR      sound/isa/galaxy/built-in.a
  AR      arch/x86/crypto/built-in.a
  AR      sound/isa/gus/built-in.a
  CC      net/core/datagram.o
  AR      sound/isa/msnd/built-in.a
  CC      arch/x86/events/zhaoxin/core.o
  CC      net/core/stream.o
  AR      sound/isa/opti9xx/built-in.a
  CC      lib/zstd/compress/hist.o
  CC      security/keys/process_keys.o
  AR      sound/isa/sb/built-in.a
  AR      sound/isa/wavefront/built-in.a
  CC      arch/x86/kernel/cpu/cacheinfo.o
  AR      sound/isa/wss/built-in.a
  AR      sound/isa/built-in.a
  CC      arch/x86/kernel/cpu/mtrr/if.o
  CC      lib/zlib_deflate/deflate_syms.o
  CC      kernel/power/snapshot.o
  AR      arch/x86/events/amd/built-in.a
  CC      arch/x86/kernel/cpu/mtrr/generic.o
  CC      lib/zlib_inflate/inftrees.o
  CC      block/blk-core.o
  AR      arch/x86/mm/pat/built-in.a
  CC      arch/x86/events/core.o
  CC      arch/x86/pci/acpi.o
  CC      mm/kasan/report_generic.o
  CC      lib/zstd/compress/huf_compress.o
  CC      arch/x86/mm/extable.o
  CC      lib/zlib_inflate/inflate_syms.o
  CC      sound/core/seq/seq_prioq.o
  CC      arch/x86/lib/msr-smp.o
  CC      lib/xz/xz_dec_syms.o
  AS      arch/x86/entry/vsyscall/vsyscall_emu_64.o
  CC      arch/x86/entry/vdso/vdso32/vgetcpu.o
  AR      arch/x86/entry/vsyscall/built-in.a
  CC [M]  drivers/pinctrl/intel/pinctrl-geminilake.o
  CC [M]  drivers/pinctrl/intel/pinctrl-sunrisepoint.o
  CC      kernel/rcu/tree.o
  CC      net/llc/llc_input.o
  CC      kernel/irq/spurious.o
  CC      kernel/rcu/rcu_segcblist.o
  CC      net/core/scm.o
  AR      lib/zlib_deflate/built-in.a
  CC      kernel/irq/resend.o
  CC      crypto/skcipher.o
  VDSO    arch/x86/entry/vdso/vdso64.so.dbg
  CC      lib/crypto/blake2s.o
  CC      lib/crypto/blake2s-generic.o
  VDSO    arch/x86/entry/vdso/vdso32.so.dbg
  OBJCOPY arch/x86/entry/vdso/vdso64.so
  OBJCOPY arch/x86/entry/vdso/vdso32.so
  VDSO2C  arch/x86/entry/vdso/vdso-image-64.c
  VDSO2C  arch/x86/entry/vdso/vdso-image-32.c
  CC      arch/x86/entry/vdso/vdso-image-64.o
  CC      arch/x86/entry/vdso/vdso-image-32.o
  CC      arch/x86/lib/cache-smp.o
  AR      lib/zlib_inflate/built-in.a
  AR      sound/pci/ac97/built-in.a
  CC      kernel/locking/mutex-debug.o
  AR      sound/pci/ali5451/built-in.a
  AR      sound/pci/asihpi/built-in.a
  AR      sound/pci/au88x0/built-in.a
  CC      lib/xz/xz_dec_stream.o
  CC      kernel/locking/lockdep.o
  AR      sound/pci/aw2/built-in.a
  AR      sound/pci/ctxfi/built-in.a
  CC      kernel/locking/lockdep_proc.o
  CC      io_uring/uring_cmd.o
  AR      sound/pci/ca0106/built-in.a
  AR      sound/pci/cs46xx/built-in.a
  AR      sound/pci/cs5535audio/built-in.a
  CC      kernel/dma/direct.o
  CC      kernel/irq/chip.o
  AR      sound/pci/lola/built-in.a
  AR      sound/pci/lx6464es/built-in.a
  CC      arch/x86/lib/msr.o
  AR      sound/pci/echoaudio/built-in.a
  AS      arch/x86/lib/msr-reg.o
  AR      sound/pci/emu10k1/built-in.a
  AR      arch/x86/events/zhaoxin/built-in.a
  AR      sound/pci/hda/built-in.a
  AR      arch/x86/entry/vdso/built-in.a
  AS      arch/x86/entry/entry.o
  CC      arch/x86/lib/msr-reg-export.o
  CC [M]  sound/pci/hda/hda_bind.o
  CC [M]  sound/pci/hda/hda_codec.o
  AS      arch/x86/lib/hweight.o
  AS      arch/x86/entry/entry_64.o
  CC      arch/x86/pci/legacy.o
  CC      security/keys/request_key.o
  CC      crypto/seqiv.o
  CC      mm/kasan/shadow.o
  CC      arch/x86/entry/syscall_64.o
  CC      net/core/gen_stats.o
  CC      arch/x86/kernel/cpu/mtrr/cleanup.o
  AR      drivers/pinctrl/intel/built-in.a
  AR      sound/pci/ice1712/built-in.a
  CC      net/ethernet/eth.o
  AR      drivers/pinctrl/mvebu/built-in.a
  CC      arch/x86/events/intel/ds.o
  CC      arch/x86/lib/iomem.o
  CC      sound/core/seq/seq_timer.o
  AR      drivers/pinctrl/nomadik/built-in.a
  CC      lib/lz4/lz4_decompress.o
  AR      drivers/pinctrl/nuvoton/built-in.a
  CC      mm/mempool.o
  AR      drivers/pinctrl/sprd/built-in.a
  AR      drivers/pinctrl/sunplus/built-in.a
  AR      drivers/pinctrl/ti/built-in.a
  CC      drivers/pinctrl/core.o
  CC      drivers/pinctrl/pinctrl-utils.o
  CC      arch/x86/mm/mmap.o
  CC      sound/core/seq/seq_system.o
  CC      kernel/entry/common.o
  CC      crypto/echainiv.o
  AR      fs/notify/fanotify/built-in.a
  CC      fs/notify/fsnotify.o
  CC      net/llc/llc_output.o
  CC      lib/crypto/blake2s-selftest.o
  CC [M]  arch/x86/kvm/../../../virt/kvm/eventfd.o
  CC      lib/xz/xz_dec_lzma2.o
  AS      arch/x86/lib/iomap_copy_64.o
  CC      net/core/gen_estimator.o
  CC      kernel/entry/syscall_user_dispatch.o
  CC      arch/x86/kernel/cpu/mce/severity.o
  CC      arch/x86/pci/irq.o
  CC      arch/x86/pci/common.o
  CC      lib/crypto/des.o
  CC      arch/x86/entry/common.o
  CC      kernel/power/swap.o
  CC      kernel/entry/kvm.o
  CC      mm/kasan/quarantine.o
  CC      kernel/locking/spinlock.o
  CC      kernel/power/user.o
  AS      arch/x86/entry/thunk_64.o
  CC      lib/zstd/compress/zstd_compress.o
  CC      arch/x86/lib/inat.o
  CC      sound/core/seq/seq_ports.o
  CC      kernel/power/poweroff.o
  CC      arch/x86/pci/early.o
  CC      io_uring/epoll.o
  CC      arch/x86/mm/pgtable.o
  CC      kernel/dma/ops_helpers.o
  CC      lib/zstd/compress/zstd_compress_literals.o
  CC      crypto/ahash.o
  AR      drivers/pwm/built-in.a
  CC      kernel/locking/osq_lock.o
  AR      arch/x86/lib/built-in.a
  AR      arch/x86/lib/lib.a
  CC      arch/x86/mm/physaddr.o
  AS      arch/x86/entry/entry_64_compat.o
  CC      security/keys/request_key_auth.o
  CC      kernel/irq/dummychip.o
  CC      kernel/irq/devres.o
  CC      arch/x86/entry/syscall_32.o
  CC      kernel/locking/qspinlock.o
  CC      crypto/shash.o
  CC      crypto/akcipher.o
  CC      lib/xz/xz_dec_bcj.o
  AR      arch/x86/kernel/cpu/mtrr/built-in.a
  AR      sound/pci/korg1212/built-in.a
  CC      kernel/module/main.o
  CC      fs/notify/notification.o
  AR      net/llc/built-in.a
  CC      kernel/module/strict_rwx.o
  CC      drivers/gpio/gpiolib-devres.o
  CC      kernel/locking/rtmutex_api.o
  CC      kernel/module/tree_lookup.o
  CC      kernel/dma/dummy.o
  CC      fs/notify/group.o
  AR      lib/lz4/built-in.a
  AR      sound/pci/mixart/built-in.a
  AR      sound/pci/nm256/built-in.a
  CC      arch/x86/events/probe.o
  CC      kernel/dma/contiguous.o
  CC      arch/x86/kernel/cpu/mce/genpool.o
  AR      net/ethernet/built-in.a
  CC      kernel/irq/autoprobe.o
  CC      kernel/module/debug_kmemleak.o
  AR      mm/kasan/built-in.a
  CC      mm/oom_kill.o
  CC      net/core/net_namespace.o
  CC      kernel/dma/swiotlb.o
  CC      drivers/gpio/gpiolib-legacy.o
  CC      arch/x86/kernel/cpu/mce/intel.o
  CC      kernel/locking/spinlock_debug.o
  CC      kernel/irq/irqdomain.o
  AR      kernel/entry/built-in.a
  CC      mm/fadvise.o
  CC      kernel/time/time.o
  CC      drivers/gpio/gpiolib-cdev.o
  CC      drivers/pinctrl/pinmux.o
  CC      drivers/gpio/gpiolib-sysfs.o
  CC      io_uring/statx.o
  CC      security/keys/user_defined.o
  CC      sound/core/seq/seq_info.o
  CC      block/blk-sysfs.o
  AR      arch/x86/entry/built-in.a
  CC      security/keys/compat.o
  CC      drivers/gpio/gpiolib-acpi.o
  AR      lib/xz/built-in.a
  CC      net/core/secure_seq.o
  CC      mm/maccess.o
  CC      kernel/time/timer.o
  CC      drivers/gpio/gpiolib-swnode.o
  CC      lib/crypto/sha1.o
  CC      kernel/time/hrtimer.o
  CC      mm/page-writeback.o
  CC      kernel/module/kallsyms.o
  CC      arch/x86/kernel/acpi/boot.o
  CC      arch/x86/mm/tlb.o
  CC      arch/x86/kernel/acpi/sleep.o
  CC      arch/x86/pci/bus_numa.o
  CC      arch/x86/mm/cpu_entry_area.o
  CC [M]  arch/x86/kvm/../../../virt/kvm/binary_stats.o
  CC      crypto/kpp.o
  CC      arch/x86/events/utils.o
  CC      block/blk-flush.o
  CC      arch/x86/events/intel/knc.o
  CC      kernel/futex/core.o
  CC      kernel/futex/syscalls.o
  CC      crypto/acompress.o
  CC      fs/notify/mark.o
  AR      kernel/power/built-in.a
  CC      kernel/futex/pi.o
  CC      arch/x86/kernel/cpu/mce/threshold.o
  AR      sound/core/seq/built-in.a
  CC      lib/crypto/sha256.o
  CC      sound/core/init.o
  CC      arch/x86/kernel/cpu/scattered.o
  CC      block/blk-settings.o
  CC      mm/folio-compat.o
  CC [M]  sound/pci/hda/hda_jack.o
  CC      kernel/time/timekeeping.o
  CC      kernel/sched/fair.o
  CC      arch/x86/kernel/cpu/topology.o
  CC      io_uring/net.o
  CC      security/keys/proc.o
  CC      drivers/pinctrl/pinconf.o
  CC      arch/x86/pci/amd_bus.o
  CC [M]  arch/x86/kvm/../../../virt/kvm/vfio.o
  CC      drivers/pinctrl/pinconf-generic.o
  CC      arch/x86/mm/maccess.o
  CC      arch/x86/events/rapl.o
  AS      arch/x86/kernel/acpi/wakeup_64.o
  CC      kernel/cgroup/cgroup.o
  CC      kernel/trace/trace_clock.o
  CC      kernel/trace/ftrace.o
  CC      kernel/trace/ring_buffer.o
  CC      kernel/dma/remap.o
  CC      kernel/trace/trace.o
  CC      kernel/trace/trace_output.o
  CC      arch/x86/events/intel/lbr.o
  CC      kernel/trace/trace_seq.o
  CC      kernel/irq/proc.o
  CC      kernel/cgroup/rstat.o
  CC      crypto/scompress.o
  CC      arch/x86/events/intel/p4.o
  CC      crypto/algboss.o
  CC [M]  lib/crypto/arc4.o
  CC      net/core/flow_dissector.o
  CC      crypto/testmgr.o
  CC      arch/x86/kernel/apic/apic.o
  CC      arch/x86/mm/pgprot.o
  CC      arch/x86/kernel/apic/apic_common.o
  CC      arch/x86/kernel/cpu/mce/apei.o
  CC      arch/x86/events/intel/p6.o
  CC      kernel/futex/requeue.o
  CC      fs/notify/fdinfo.o
  CC      security/keys/sysctl.o
  CC      arch/x86/kernel/acpi/apei.o
  CC      block/blk-ioc.o
  CC      kernel/trace/trace_stat.o
  CC      mm/readahead.o
  AR      drivers/pinctrl/built-in.a
  CC      drivers/pci/msi/pcidev_msi.o
  CC      drivers/pci/pcie/portdrv.o
  CC      sound/core/memory.o
  AR      lib/crypto/built-in.a
  LD [M]  lib/crypto/libarc4.o
  CC      drivers/pci/pcie/rcec.o
  AR      drivers/gpio/built-in.a
  CC      sound/core/control.o
  AR      arch/x86/pci/built-in.a
  CC      kernel/module/procfs.o
  AR      kernel/dma/built-in.a
  CC      kernel/events/core.o
  CC      kernel/fork.o
  CC [M]  sound/pci/hda/hda_auto_parser.o
  CC      kernel/module/sysfs.o
  CC      arch/x86/events/intel/pt.o
  CC      kernel/bpf/core.o
  CC [M]  arch/x86/kvm/../../../virt/kvm/coalesced_mmio.o
  CC      arch/x86/mm/hugetlbpage.o
  CC      kernel/events/ring_buffer.o
  CC      kernel/trace/trace_printk.o
  CC      sound/core/misc.o
  CC      kernel/irq/migration.o
  AR      security/keys/built-in.a
  AR      security/built-in.a
  CC      kernel/futex/waitwake.o
  CC      kernel/irq/cpuhotplug.o
  CC [M]  arch/x86/kvm/../../../virt/kvm/async_pf.o
  CC      kernel/events/callchain.o
  AR      arch/x86/kernel/cpu/mce/built-in.a
  CC      arch/x86/kernel/cpu/common.o
  AR      fs/notify/built-in.a
  CC      kernel/time/ntp.o
  CC      fs/iomap/trace.o
  CC      arch/x86/kernel/acpi/cppc.o
  CC      fs/iomap/iter.o
  CC      kernel/time/clocksource.o
  CC      drivers/pci/msi/api.o
  CC      kernel/exec_domain.o
  CC      kernel/locking/qrwlock.o
  CC      io_uring/msg_ring.o
  LDS     arch/x86/kernel/vmlinux.lds
  CC      arch/x86/kernel/kprobes/core.o
  AS      arch/x86/kernel/head_64.o
  CC      kernel/trace/pid_list.o
  CC      drivers/pci/pcie/aspm.o
  CC      kernel/panic.o
  CC      block/blk-map.o
  CC      lib/zstd/compress/zstd_compress_sequences.o
  CC      drivers/pci/msi/msi.o
  CC      net/core/sysctl_net_core.o
  CC [M]  arch/x86/kvm/../../../virt/kvm/irqchip.o
  CC      arch/x86/kernel/head64.o
  AR      kernel/module/built-in.a
  CC      kernel/trace/trace_sched_switch.o
  CC      arch/x86/kernel/kprobes/opt.o
  CC      kernel/irq/pm.o
  AR      kernel/futex/built-in.a
  CC      arch/x86/mm/kasan_init_64.o
  CC      mm/swap.o
  CC      kernel/irq/msi.o
  CC      arch/x86/kernel/kprobes/ftrace.o
  CC      kernel/trace/trace_functions.o
  CC      arch/x86/kernel/acpi/cstate.o
  CC      arch/x86/events/intel/uncore.o
  CC      arch/x86/kernel/ebda.o
  AR      kernel/rcu/built-in.a
  CC      kernel/cpu.o
  CC      kernel/events/hw_breakpoint.o
  AR      kernel/locking/built-in.a
  CC      lib/zstd/compress/zstd_compress_superblock.o
  CC      mm/truncate.o
  CC      kernel/trace/trace_preemptirq.o
  CC      io_uring/timeout.o
  CC      io_uring/sqpoll.o
  CC [M]  sound/pci/hda/hda_sysfs.o
  CC      io_uring/fdinfo.o
  CC      kernel/events/uprobes.o
  AR      fs/quota/built-in.a
  CC      fs/iomap/buffered-io.o
  CC      kernel/time/jiffies.o
  CC      lib/zstd/compress/zstd_double_fast.o
  CC      fs/iomap/direct-io.o
  CC      kernel/cgroup/namespace.o
  CC      arch/x86/kernel/apic/apic_noop.o
  CC      fs/proc/task_mmu.o
  CC      net/802/p8022.o
  CC      lib/raid6/algos.o
  CC      lib/fonts/fonts.o
  CC      net/802/psnap.o
  CC      lib/argv_split.o
  CC      arch/x86/kernel/platform-quirks.o
  AR      arch/x86/kernel/acpi/built-in.a
  CC      kernel/time/timer_list.o
  CC      fs/proc/inode.o
  CC      kernel/exit.o
  CC      arch/x86/mm/pkeys.o
  CC      block/blk-merge.o
  CC [M]  arch/x86/kvm/../../../virt/kvm/dirty_ring.o
  AR      arch/x86/kernel/kprobes/built-in.a
  CC      drivers/pci/msi/irqdomain.o
  CC      arch/x86/kernel/apic/ipi.o
  CC      kernel/softirq.o
  CC      net/sched/sch_generic.o
  CC      sound/core/device.o
  CC      kernel/irq/affinity.o
  CC      arch/x86/kernel/cpu/rdrand.o
  CC      drivers/pci/pcie/aer.o
  CC [M]  sound/pci/hda/hda_controller.o
  CC      kernel/irq/matrix.o
  CC      net/802/stp.o
  CC      lib/fonts/font_8x8.o
  CC      crypto/cmac.o
  CC      arch/x86/kernel/cpu/match.o
  CC      fs/kernfs/mount.o
  CC      net/core/dev.o
  CC      lib/fonts/font_8x16.o
  CC      kernel/time/timeconv.o
  CC      kernel/cgroup/cgroup-v1.o
  CC      kernel/trace/trace_nop.o
  CC      arch/x86/kernel/apic/vector.o
  CC      kernel/cgroup/freezer.o
  CC      lib/raid6/recov.o
  CC      arch/x86/kernel/apic/hw_nmi.o
  CC      io_uring/tctx.o
  CC      kernel/resource.o
  CC      kernel/sysctl.o
  CC      kernel/capability.o
  CC      arch/x86/mm/pti.o
  CC      sound/core/info.o
  CC      net/core/dev_addr_lists.o
  AR      lib/fonts/built-in.a
  CC      drivers/pci/pcie/err.o
  CC      fs/iomap/fiemap.o
  CC      fs/iomap/seek.o
  CC      kernel/time/timecounter.o
  CC      arch/x86/kernel/cpu/bugs.o
  CC      arch/x86/kernel/apic/io_apic.o
  CC      kernel/time/alarmtimer.o
  CC      crypto/hmac.o
  CC      sound/core/isadma.o
  CC      fs/proc/root.o
  AR      drivers/pci/msi/built-in.a
  CC      fs/iomap/swapfile.o
  CC [M]  arch/x86/kvm/../../../virt/kvm/pfncache.o
  CC      fs/kernfs/inode.o
  CC      arch/x86/events/intel/uncore_nhmex.o
  AR      net/802/built-in.a
  CC      net/netlink/af_netlink.o
  CC      kernel/ptrace.o
  AR      kernel/bpf/built-in.a
  CC      drivers/pci/hotplug/pci_hotplug_core.o
  CC      arch/x86/kernel/apic/msi.o
  HOSTCC  lib/raid6/mktables
  CC      drivers/pci/hotplug/acpi_pcihp.o
  CC      mm/vmscan.o
  CC      drivers/pci/hotplug/pciehp_core.o
  UNROLL  lib/raid6/int1.c
  UNROLL  lib/raid6/int2.c
  UNROLL  lib/raid6/int4.c
  CC      kernel/user.o
  UNROLL  lib/raid6/int8.c
  UNROLL  lib/raid6/int16.c
  UNROLL  lib/raid6/int32.c
  CC      kernel/trace/trace_functions_graph.o
  CC      lib/raid6/recov_ssse3.o
  CC      lib/raid6/recov_avx2.o
  CC      kernel/cgroup/legacy_freezer.o
  CC      io_uring/poll.o
  CC      io_uring/cancel.o
  AR      arch/x86/mm/built-in.a
  CC      drivers/pci/pcie/aer_inject.o
  AR      drivers/pci/controller/dwc/built-in.a
  AR      kernel/irq/built-in.a
  AR      drivers/pci/controller/mobiveil/built-in.a
  CC      mm/shmem.o
  CC      drivers/pci/controller/vmd.o
  AR      drivers/pci/switch/built-in.a
  CC      kernel/cgroup/pids.o
  CC      sound/core/vmaster.o
  CC      block/blk-timeout.o
  CC      fs/kernfs/dir.o
  CC      crypto/vmac.o
  CC      arch/x86/events/intel/uncore_snb.o
  CC      drivers/pci/pcie/pme.o
  CC      fs/proc/base.o
  CC      kernel/time/posix-timers.o
  CC [M]  sound/pci/hda/hda_proc.o
  CC      kernel/sched/build_policy.o
  AR      fs/iomap/built-in.a
  CC      kernel/sched/build_utility.o
  CC      kernel/signal.o
  CC      net/core/dst.o
  CC      fs/kernfs/file.o
  CC      drivers/pci/pcie/dpc.o
  CC      drivers/video/console/dummycon.o
  CC      net/core/netevent.o
  CC      drivers/pci/hotplug/pciehp_ctrl.o
  CC      fs/proc/generic.o
  CC [M]  arch/x86/kvm/x86.o
  CC      fs/kernfs/symlink.o
  CC      kernel/time/posix-cpu-timers.o
  CC      net/core/neighbour.o
  CC      lib/raid6/mmx.o
  CC      block/blk-lib.o
  CC      lib/raid6/sse1.o
  CC      drivers/idle/intel_idle.o
  CC      lib/raid6/sse2.o
  AR      drivers/char/ipmi/built-in.a
  CC      arch/x86/kernel/apic/x2apic_phys.o
  CC      sound/core/ctljack.o
  CC [M]  sound/pci/hda/hda_hwdep.o
  CC      lib/bug.o
  CC      lib/buildid.o
  CC      lib/cmdline.o
  CC      net/sched/sch_mq.o
  CC      arch/x86/kernel/cpu/aperfmperf.o
  CC      mm/util.o
  CC      kernel/time/posix-clock.o
  CC      kernel/trace/fgraph.o
  CC      arch/x86/events/intel/uncore_snbep.o
  CC      arch/x86/events/intel/uncore_discovery.o
  CC      kernel/sys.o
  CC      drivers/video/console/vgacon.o
  CC      crypto/xcbc.o
  AR      sound/ppc/built-in.a
  AR      sound/arm/built-in.a
  AR      drivers/pci/controller/built-in.a
  CC      lib/cpumask.o
  AR      sound/sh/built-in.a
  CC      net/sched/sch_frag.o
  AR      sound/synth/emux/built-in.a
  AR      sound/synth/built-in.a
  AR      sound/usb/misc/built-in.a
  AR      sound/usb/usx2y/built-in.a
  CC      sound/core/jack.o
  CC      crypto/crypto_null.o
  AR      sound/usb/caiaq/built-in.a
  CC      crypto/md5.o
  AR      sound/usb/6fire/built-in.a
  AR      drivers/pci/pcie/built-in.a
  AR      sound/usb/hiface/built-in.a
  CC      arch/x86/kernel/apic/x2apic_cluster.o
  CC      crypto/sha1_generic.o
  AR      sound/usb/bcd2000/built-in.a
  AR      sound/usb/built-in.a
  CC      drivers/pci/access.o
  CC      drivers/acpi/acpica/dsargs.o
  CC      drivers/pci/hotplug/pciehp_pci.o
  CC      drivers/acpi/acpica/dscontrol.o
  CC      kernel/cgroup/cpuset.o
  CC      drivers/acpi/acpica/dsdebug.o
  CC      io_uring/kbuf.o
  CC      block/blk-mq.o
  AR      fs/kernfs/built-in.a
  CC      kernel/time/itimer.o
  CC      lib/raid6/avx2.o
  CC      drivers/acpi/acpica/dsfield.o
  CC      kernel/time/clockevents.o
  CC [M]  sound/pci/hda/hda_generic.o
  CC      sound/core/timer.o
  CC      arch/x86/kernel/cpu/cpuid-deps.o
  CC      crypto/sha256_generic.o
  CC      crypto/sha512_generic.o
  CC      crypto/blake2b_generic.o
  CC      lib/raid6/avx512.o
  CC      lib/raid6/recov_avx512.o
  CC      crypto/ecb.o
  CC      drivers/pci/bus.o
  CC      arch/x86/kernel/process_64.o
  CC      sound/core/hrtimer.o
  CC      arch/x86/kernel/cpu/umwait.o
  CC      arch/x86/kernel/apic/apic_flat_64.o
  CC      arch/x86/kernel/signal.o
  CC      drivers/acpi/acpica/dsinit.o
  AR      drivers/idle/built-in.a
  CC      drivers/acpi/acpica/dsmethod.o
  AR      net/bpf/built-in.a
  CC      drivers/pnp/pnpacpi/core.o
  CC      arch/x86/kernel/apic/probe_64.o
  CC      kernel/time/tick-common.o
  CC      kernel/trace/blktrace.o
  CC      sound/core/seq_device.o
  CC      kernel/time/tick-broadcast.o
  CC      drivers/pci/probe.o
  CC      drivers/pci/hotplug/pciehp_hpc.o
  CC      drivers/pnp/pnpacpi/rsparser.o
  CC      drivers/pnp/core.o
  AR      drivers/video/console/built-in.a
  CC      drivers/video/logo/logo.o
  CC      drivers/pnp/card.o
  CC      drivers/pnp/driver.o
  CC      kernel/time/tick-broadcast-hrtimer.o
  CC      mm/mmzone.o
  CC      drivers/acpi/acpica/dsmthdat.o
  CC      net/sched/sch_api.o
  HOSTCC  drivers/video/logo/pnmtologo
  CC [M]  sound/core/control_led.o
  CC      lib/zstd/compress/zstd_fast.o
  CC      lib/zstd/compress/zstd_lazy.o
  CC      block/blk-mq-tag.o
  AR      arch/x86/kernel/apic/built-in.a
  CC      arch/x86/kernel/signal_64.o
  TABLE   lib/raid6/tables.c
  CC      block/blk-stat.o
  CC      lib/raid6/int1.o
  CC      io_uring/rsrc.o
  CC [M]  sound/core/hwdep.o
  CC      arch/x86/kernel/cpu/proc.o
  CC      net/core/rtnetlink.o
  CC      net/netlink/genetlink.o
  CC      lib/zstd/compress/zstd_ldm.o
  CC      fs/proc/array.o
  CC      crypto/cbc.o
  LOGO    drivers/video/logo/logo_linux_clut224.c
  CC      drivers/video/logo/logo_linux_clut224.o
  CC      io_uring/rw.o
  AR      drivers/video/logo/built-in.a
  CC      lib/zstd/compress/zstd_opt.o
  CC      drivers/video/backlight/backlight.o
  CC      lib/zstd/zstd_decompress_module.o
  CC      drivers/pci/host-bridge.o
  CC      lib/zstd/decompress/huf_decompress.o
  CC      drivers/pnp/resource.o
  CC      net/netlink/policy.o
  CC      drivers/acpi/acpica/dsobject.o
  CC      kernel/trace/trace_events.o
  CC      io_uring/opdef.o
  CC      kernel/time/tick-oneshot.o
  CC      arch/x86/kernel/traps.o
  CC      arch/x86/kernel/idt.o
  CC      crypto/pcbc.o
  AR      drivers/pnp/pnpacpi/built-in.a
  CC      drivers/pnp/manager.o
  CC      net/netlink/diag.o
  CC      lib/raid6/int2.o
  CC      crypto/cts.o
  CC      drivers/pci/hotplug/acpiphp_core.o
  CC [M]  sound/core/pcm.o
  CC [M]  sound/core/pcm_native.o
  CC      arch/x86/kernel/irq.o
  MKCAP   arch/x86/kernel/cpu/capflags.c
  CC      arch/x86/events/intel/cstate.o
  CC      kernel/time/tick-sched.o
  CC      drivers/pnp/support.o
  CC      drivers/pci/remove.o
  CC      drivers/acpi/acpica/dsopcode.o
  CC      kernel/trace/trace_export.o
  CC      kernel/trace/trace_event_perf.o
  CC      crypto/lrw.o
  CC      crypto/xts.o
  CC [M]  sound/core/pcm_lib.o
  AR      drivers/video/backlight/built-in.a
  CC      kernel/umh.o
  CC      drivers/video/fbdev/core/fb_notify.o
  CC      drivers/acpi/acpica/dspkginit.o
  CC      drivers/acpi/acpica/dsutils.o
  CC      fs/proc/fd.o
  CC      kernel/workqueue.o
  CC      lib/ctype.o
  CC      kernel/pid.o
  CC      lib/raid6/int4.o
  CC      arch/x86/kernel/cpu/powerflags.o
  CC      drivers/pnp/interface.o
  CC      mm/vmstat.o
  CC      drivers/pci/hotplug/acpiphp_glue.o
  CC      drivers/video/aperture.o
  CC      lib/zstd/decompress/zstd_ddict.o
  CC      drivers/pci/pci.o
  CC      arch/x86/kernel/irq_64.o
  CC      fs/proc/proc_tty.o
  CC      lib/raid6/int8.o
  AR      arch/x86/events/intel/built-in.a
  CC      arch/x86/events/msr.o
  CC      lib/dec_and_lock.o
  CC      lib/decompress.o
  CC      arch/x86/kernel/cpu/feat_ctl.o
  AR      kernel/cgroup/built-in.a
  CC      drivers/video/cmdline.o
  AR      drivers/video/fbdev/omap/built-in.a
  AR      drivers/video/fbdev/omap2/omapfb/dss/built-in.a
  AR      drivers/video/fbdev/omap2/omapfb/displays/built-in.a
  CC [M]  drivers/video/fbdev/uvesafb.o
  AR      drivers/video/fbdev/omap2/omapfb/built-in.a
  CC      drivers/acpi/acpica/dswexec.o
  AR      drivers/video/fbdev/omap2/built-in.a
  CC      drivers/video/nomodeset.o
  CC      drivers/acpi/acpica/dswload.o
  CC      crypto/ctr.o
  CC      crypto/gcm.o
  CC      drivers/pci/pci-driver.o
  AR      net/netlink/built-in.a
  CC      drivers/pci/search.o
  CC      kernel/time/vsyscall.o
  CC      fs/proc/cmdline.o
  AR      kernel/events/built-in.a
  CC      kernel/trace/trace_events_filter.o
  CC      net/ethtool/ioctl.o
  CC      fs/proc/consoles.o
  CC      io_uring/notif.o
  CC [M]  drivers/video/fbdev/core/fbmem.o
  CC      net/ethtool/common.o
  CC      drivers/pnp/quirks.o
  CC      net/sched/sch_blackhole.o
  CC      lib/decompress_bunzip2.o
  CC      arch/x86/kernel/cpu/intel.o
  CC      drivers/video/hdmi.o
  CC      fs/proc/cpuinfo.o
  CC      kernel/time/timekeeping_debug.o
  CC      kernel/time/namespace.o
  CC      arch/x86/kernel/cpu/intel_pconfig.o
  CC      net/ethtool/netlink.o
  CC      lib/zstd/decompress/zstd_decompress.o
  AR      arch/x86/events/built-in.a
  CC      lib/raid6/int16.o
  CC      lib/zstd/decompress/zstd_decompress_block.o
  CC      drivers/acpi/acpica/dswload2.o
  CC [M]  sound/core/pcm_misc.o
  CC      net/core/utils.o
  CC [M]  sound/core/pcm_memory.o
  CC      fs/proc/devices.o
  CC      crypto/pcrypt.o
  CC      lib/raid6/int32.o
  CC      kernel/task_work.o
  AR      drivers/pci/hotplug/built-in.a
  CC [M]  net/netfilter/ipvs/ip_vs_conn.o
  CC      kernel/extable.o
  CC      drivers/pci/pci-sysfs.o
  CC      drivers/acpi/acpica/dswscope.o
  CC      block/blk-mq-sysfs.o
  CC      drivers/pnp/system.o
  CC      mm/backing-dev.o
  CC      drivers/pci/rom.o
  CC      drivers/pci/setup-res.o
  CC      mm/mm_init.o
  CC      crypto/cryptd.o
  CC      drivers/acpi/acpica/dswstate.o
  CC      lib/decompress_inflate.o
  CC      net/sched/sch_fifo.o
  AR      kernel/time/built-in.a
  CC      net/netfilter/core.o
  CC [M]  net/netfilter/ipvs/ip_vs_core.o
  CC      drivers/acpi/acpica/evevent.o
  CC      io_uring/io-wq.o
  CC      kernel/params.o
  AR      drivers/amba/built-in.a
  CC      fs/proc/interrupts.o
  CC      drivers/acpi/apei/apei-base.o
  AR      sound/firewire/built-in.a
  CC      drivers/acpi/apei/hest.o
  CC      lib/raid6/tables.o
  CC [M]  sound/pci/hda/patch_realtek.o
  CC      drivers/acpi/apei/erst.o
  CC      drivers/acpi/acpica/evgpe.o
  AR      sound/sparc/built-in.a
  CC      lib/zstd/zstd_common_module.o
  CC      kernel/kthread.o
  CC [M]  sound/core/memalloc.o
  CC      arch/x86/kernel/cpu/tsx.o
  AR      kernel/sched/built-in.a
  CC      arch/x86/kernel/cpu/intel_epb.o
  AR      drivers/pnp/built-in.a
  CC      mm/percpu.o
  CC      kernel/trace/trace_events_trigger.o
  CC      fs/sysfs/file.o
  CC      lib/decompress_unlz4.o
  CC      lib/decompress_unlzma.o
  CC [M]  drivers/video/fbdev/core/fbmon.o
  CC      drivers/acpi/acpica/evgpeblk.o
  CC      kernel/trace/trace_eprobe.o
  CC      fs/sysfs/dir.o
  CC      fs/proc/loadavg.o
  CC      lib/decompress_unlzo.o
  CC [M]  sound/core/pcm_timer.o
  CC      drivers/acpi/apei/bert.o
  CC      fs/sysfs/symlink.o
  CC [M]  net/netfilter/ipvs/ip_vs_ctl.o
  CC      block/blk-mq-cpumap.o
  CC [M]  net/netfilter/ipvs/ip_vs_sched.o
  CC      drivers/pci/irq.o
  CC      fs/sysfs/mount.o
  CC      lib/decompress_unxz.o
  CC      drivers/acpi/apei/ghes.o
  CC      net/ethtool/bitset.o
  CC      kernel/sys_ni.o
  AR      lib/raid6/built-in.a
  CC      net/ethtool/strset.o
  CC      net/ethtool/linkinfo.o
  CC      crypto/des_generic.o
  AR      net/sched/built-in.a
  CC      net/ethtool/linkmodes.o
  CC      drivers/pci/vpd.o
  CC [M]  sound/pci/hda/patch_analog.o
  CC      crypto/aes_generic.o
  CC      fs/sysfs/group.o
  CC      drivers/acpi/acpica/evgpeinit.o
  CC      drivers/acpi/acpica/evgpeutil.o
  CC      fs/proc/meminfo.o
  CC      fs/proc/stat.o
  CC      arch/x86/kernel/cpu/amd.o
  CC      arch/x86/kernel/cpu/hygon.o
  CC      crypto/deflate.o
  CC      fs/configfs/inode.o
  CC      net/core/link_watch.o
  CC      net/core/filter.o
  LD [M]  sound/core/snd-ctl-led.o
  AR      net/ipv4/netfilter/built-in.a
  LD [M]  sound/core/snd-hwdep.o
  CC [M]  net/ipv4/netfilter/nf_defrag_ipv4.o
  CC      net/core/sock_diag.o
  LD [M]  sound/core/snd-pcm.o
  CC      block/blk-mq-sched.o
  CC      kernel/nsproxy.o
  AR      sound/core/built-in.a
  CC      block/ioctl.o
  CC      drivers/pci/setup-bus.o
  CC      crypto/crc32c_generic.o
  CC      fs/proc/uptime.o
  CC [M]  net/ipv4/netfilter/nf_reject_ipv4.o
  CC [M]  net/ipv4/netfilter/ip_tables.o
  AR      sound/spi/built-in.a
  CC      lib/decompress_unzstd.o
  CC      net/ipv4/route.o
  CC      arch/x86/kernel/cpu/centaur.o
  CC      fs/devpts/inode.o
  CC      drivers/acpi/acpica/evglock.o
  CC      net/core/dev_ioctl.o
  CC      net/ethtool/rss.o
  CC      kernel/notifier.o
  AR      io_uring/built-in.a
  CC      arch/x86/kernel/cpu/zhaoxin.o
  CC      net/core/tso.o
  CC      drivers/pci/vc.o
  AR      fs/sysfs/built-in.a
  CC      net/ipv4/inetpeer.o
  CC      arch/x86/kernel/cpu/perfctr-watchdog.o
  CC      arch/x86/kernel/cpu/vmware.o
  CC [M]  net/netfilter/ipvs/ip_vs_xmit.o
  CC      drivers/pci/mmap.o
  CC [M]  net/netfilter/ipvs/ip_vs_app.o
  CC [M]  drivers/video/fbdev/core/fbcmap.o
  CC      drivers/pci/setup-irq.o
  CC      net/ipv4/protocol.o
  CC      net/ipv4/ip_input.o
  CC      fs/proc/util.o
  CC      kernel/trace/trace_kprobe.o
  CC      mm/slab_common.o
  CC      fs/configfs/file.o
  CC [M]  sound/pci/hda/patch_hdmi.o
  AR      drivers/acpi/apei/built-in.a
  CC      net/core/sock_reuseport.o
  AR      drivers/acpi/pmic/built-in.a
  CC      drivers/acpi/acpica/evhandler.o
  CC      fs/proc/version.o
  CC      drivers/acpi/acpica/evmisc.o
  CC      fs/ext4/balloc.o
  CC      fs/jbd2/transaction.o
  CC      fs/ramfs/inode.o
  CC [M]  net/ipv4/netfilter/iptable_filter.o
  CC      crypto/crct10dif_common.o
  CC [M]  net/ipv4/netfilter/iptable_mangle.o
  CC      net/ipv4/ip_fragment.o
  CC      net/core/fib_notifier.o
  AR      fs/devpts/built-in.a
  CC      crypto/crct10dif_generic.o
  CC      mm/compaction.o
  CC      lib/zstd/common/debug.o
  CC      lib/zstd/common/entropy_common.o
  CC      lib/zstd/common/error_private.o
  CC      block/genhd.o
  CC      lib/zstd/common/fse_decompress.o
  CC      arch/x86/kernel/cpu/hypervisor.o
  CC      net/ethtool/linkstate.o
  CC      net/ethtool/debug.o
  CC      net/ethtool/wol.o
  CC      net/ethtool/features.o
  CC [M]  net/ipv4/netfilter/iptable_nat.o
  CC      crypto/authenc.o
  CC      drivers/acpi/dptf/int340x_thermal.o
  CC      fs/proc/softirqs.o
  CC      drivers/acpi/acpica/evregion.o
  CC      net/ethtool/privflags.o
  CC [M]  arch/x86/kvm/emulate.o
  CC      fs/configfs/dir.o
  CC [M]  arch/x86/kvm/i8259.o
  CC [M]  drivers/video/fbdev/core/fbsysfs.o
  CC [M]  arch/x86/kvm/irq.o
  CC [M]  sound/pci/hda/hda_eld.o
  CC      arch/x86/kernel/cpu/mshyperv.o
  CC      fs/proc/namespaces.o
  CC      drivers/pci/proc.o
  CC      fs/configfs/symlink.o
  CC      mm/interval_tree.o
  CC      fs/ext4/bitmap.o
  CC      lib/zstd/common/zstd_common.o
  CC      fs/ramfs/file-mmu.o
  CC [M]  sound/pci/hda/hda_intel.o
  AR      drivers/acpi/dptf/built-in.a
  CC      fs/configfs/mount.o
  CC      mm/list_lru.o
  CC [M]  drivers/video/fbdev/core/modedb.o
  CC      net/ipv4/ip_forward.o
  CC      drivers/acpi/acpica/evrgnini.o
  CC      net/ipv4/ip_options.o
  CC      fs/jbd2/commit.o
  CC      net/core/xdp.o
  CC      drivers/acpi/tables.o
  CC [M]  net/ipv4/netfilter/ipt_REJECT.o
  CC      mm/workingset.o
  CC      net/ethtool/rings.o
  CC [M]  drivers/video/fbdev/core/fbcvt.o
  CC [M]  net/netfilter/ipvs/ip_vs_sync.o
  CC      net/ethtool/channels.o
  AR      fs/ramfs/built-in.a
  CC [M]  net/netfilter/ipvs/ip_vs_est.o
  CC      net/ipv4/ip_output.o
  CC      net/ipv4/ip_sockglue.o
  CC      fs/configfs/item.o
  CC      fs/proc/self.o
  CC      crypto/authencesn.o
  CC      fs/proc/thread_self.o
  CC [M]  net/netfilter/ipvs/ip_vs_proto.o
  CC [M]  net/netfilter/ipvs/ip_vs_pe.o
  CC      fs/ext4/block_validity.o
  CC      arch/x86/kernel/cpu/capflags.o
  CC      drivers/pci/slot.o
  CC      net/ipv4/inet_hashtables.o
  CC      drivers/acpi/acpica/evsci.o
  CC      lib/dump_stack.o
  AR      arch/x86/kernel/cpu/built-in.a
  CC      arch/x86/kernel/dumpstack_64.o
  CC [M]  net/netfilter/ipvs/ip_vs_proto_tcp.o
  CC      lib/earlycpio.o
  CC [M]  drivers/video/fbdev/core/fb_cmdline.o
  CC      fs/proc/proc_sysctl.o
  CC [M]  arch/x86/kvm/lapic.o
  CC      block/ioprio.o
  CC      net/xfrm/xfrm_policy.o
  CC [M]  drivers/video/fbdev/core/fb_defio.o
  CC      net/xfrm/xfrm_state.o
  CC      crypto/lzo.o
  CC      net/unix/af_unix.o
  CC      crypto/lzo-rle.o
  CC      kernel/trace/error_report-traces.o
  AR      fs/configfs/built-in.a
  CC      net/unix/garbage.o
  CC      kernel/ksysfs.o
  CC      fs/proc/proc_net.o
  CC      drivers/acpi/acpica/evxface.o
  CC      drivers/acpi/acpica/evxfevnt.o
  CC      mm/debug.o
  LD [M]  sound/pci/hda/snd-hda-codec.o
  LD [M]  sound/pci/hda/snd-hda-codec-generic.o
  CC      lib/extable.o
  CC      arch/x86/kernel/time.o
  CC      crypto/lz4.o
  AR      sound/parisc/built-in.a
  CC      crypto/lz4hc.o
  CC      kernel/cred.o
  CC      crypto/xxhash_generic.o
  CC      fs/proc/kcore.o
  CC      drivers/pci/pci-acpi.o
  CC      net/unix/sysctl_net_unix.o
  CC      net/ethtool/coalesce.o
  CC      fs/ext4/dir.o
  CC      drivers/pci/quirks.o
  CC      net/unix/diag.o
  CC [M]  drivers/video/fbdev/core/fbcon.o
  LD [M]  sound/pci/hda/snd-hda-codec-realtek.o
  CC      kernel/trace/power-traces.o
  CC      net/ethtool/pause.o
  CC      net/ethtool/eee.o
  CC      net/xfrm/xfrm_hash.o
  AR      sound/pci/oxygen/built-in.a
  CC      crypto/rng.o
  AR      sound/pci/pcxhr/built-in.a
  CC      crypto/drbg.o
  CC      drivers/acpi/acpica/evxfgpe.o
  CC      block/badblocks.o
  CC      block/blk-rq-qos.o
  CC      kernel/trace/rpm-traces.o
  AR      sound/pci/riptide/built-in.a
  LD [M]  sound/pci/hda/snd-hda-codec-analog.o
  CC      crypto/jitterentropy.o
  CC      fs/jbd2/recovery.o
  LD [M]  sound/pci/hda/snd-hda-codec-hdmi.o
  LD [M]  sound/pci/hda/snd-hda-intel.o
  CC      arch/x86/kernel/ioport.o
  CC      arch/x86/kernel/dumpstack.o
  CC      fs/jbd2/checkpoint.o
  CC      fs/jbd2/revoke.o
  CC      net/ipv4/inet_timewait_sock.o
  AR      sound/pci/rme9652/built-in.a
  AR      sound/pci/trident/built-in.a
  CC      mm/gup.o
  AR      sound/pci/ymfpci/built-in.a
  AR      sound/pci/vx222/built-in.a
  AR      sound/pci/built-in.a
  CC [M]  net/netfilter/ipvs/ip_vs_proto_udp.o
  AR      sound/pcmcia/vx/built-in.a
  AR      sound/mips/built-in.a
  CC      mm/mmap_lock.o
  AR      sound/pcmcia/pdaudiocf/built-in.a
  CC      arch/x86/kernel/nmi.o
  AR      sound/pcmcia/built-in.a
  AR      sound/soc/built-in.a
  AR      sound/atmel/built-in.a
  AR      net/ipv6/netfilter/built-in.a
  CC [M]  net/ipv6/netfilter/nf_defrag_ipv6_hooks.o
  AR      sound/hda/built-in.a
  CC [M]  sound/hda/hda_bus_type.o
  CC      drivers/acpi/blacklist.o
  CC [M]  net/ipv6/netfilter/nf_conntrack_reasm.o
  CC      net/packet/af_packet.o
  CC      mm/highmem.o
  CC      drivers/acpi/acpica/evxfregn.o
  CC      lib/flex_proportions.o
  CC      kernel/reboot.o
  CC [M]  net/netfilter/ipvs/ip_vs_nfct.o
  CC      fs/ext4/ext4_jbd2.o
  CC      block/disk-events.o
  CC      fs/proc/kmsg.o
  CC      mm/memory.o
  CC      net/xfrm/xfrm_input.o
  CC [M]  sound/hda/hdac_bus.o
  CC [M]  sound/hda/hdac_device.o
  CC [M]  sound/hda/hdac_sysfs.o
  CC [M]  sound/hda/hdac_regmap.o
  CC      net/ethtool/tsinfo.o
  CC      fs/proc/page.o
  CC      net/xfrm/xfrm_output.o
  CC      fs/jbd2/journal.o
  CC      arch/x86/kernel/ldt.o
  AR      drivers/clk/actions/built-in.a
  CC      drivers/acpi/osi.o
  CC      kernel/async.o
  AR      drivers/clk/analogbits/built-in.a
  AR      drivers/clk/bcm/built-in.a
  AR      drivers/clk/imgtec/built-in.a
  AR      drivers/clk/imx/built-in.a
  CC      block/blk-ia-ranges.o
  AR      drivers/clk/ingenic/built-in.a
  AR      drivers/clk/mediatek/built-in.a
  AR      drivers/clk/microchip/built-in.a
  AR      drivers/clk/mstar/built-in.a
  AR      drivers/clk/mvebu/built-in.a
  CC [M]  sound/hda/hdac_controller.o
  AR      drivers/clk/ralink/built-in.a
  CC      drivers/acpi/acpica/exconcat.o
  AR      drivers/clk/renesas/built-in.a
  AR      drivers/clk/socfpga/built-in.a
  AR      drivers/clk/sprd/built-in.a
  AR      drivers/clk/sunxi-ng/built-in.a
  CC      drivers/dma/dw/core.o
  AR      drivers/clk/ti/built-in.a
  AR      drivers/clk/versatile/built-in.a
  CC      drivers/clk/x86/clk-lpss-atom.o
  AR      drivers/clk/xilinx/built-in.a
  AR      drivers/soc/apple/built-in.a
  CC      drivers/virtio/virtio.o
  AR      drivers/soc/aspeed/built-in.a
  CC      arch/x86/kernel/setup.o
  CC      drivers/clk/clk-devres.o
  AR      drivers/soc/bcm/bcm63xx/built-in.a
  CC      drivers/clk/clk-bulk.o
  AR      drivers/soc/bcm/built-in.a
  AR      drivers/soc/fsl/built-in.a
  AR      drivers/soc/fujitsu/built-in.a
  AR      drivers/soc/imx/built-in.a
  CC      arch/x86/kernel/x86_init.o
  AR      drivers/soc/ixp4xx/built-in.a
  AR      drivers/soc/loongson/built-in.a
  AR      drivers/soc/mediatek/built-in.a
  CC      drivers/dma/hsu/hsu.o
  AR      drivers/soc/microchip/built-in.a
  AR      drivers/soc/nuvoton/built-in.a
  AR      drivers/soc/pxa/built-in.a
  AR      drivers/soc/amlogic/built-in.a
  CC      lib/idr.o
  AR      drivers/soc/qcom/built-in.a
  AR      drivers/soc/renesas/built-in.a
  AR      drivers/soc/rockchip/built-in.a
  AR      drivers/soc/sifive/built-in.a
  AR      drivers/soc/sunxi/built-in.a
  AR      drivers/soc/ti/built-in.a
  AR      drivers/soc/xilinx/built-in.a
  AR      drivers/soc/built-in.a
  CC [M]  net/netfilter/ipvs/ip_vs_rr.o
  CC      crypto/jitterentropy-kcapi.o
  CC      block/bsg.o
  CC      drivers/pci/ats.o
  CC      kernel/trace/trace_dynevent.o
  CC      net/ipv4/inet_connection_sock.o
  CC      drivers/acpi/acpica/exconfig.o
  CC      fs/ext4/extents.o
  CC      drivers/acpi/acpica/exconvrt.o
  CC      drivers/clk/x86/clk-pmc-atom.o
  CC      fs/ext4/extents_status.o
  CC [M]  arch/x86/kvm/i8254.o
  CC      net/ipv4/tcp.o
  CC [M]  sound/hda/hdac_stream.o
  CC      drivers/tty/vt/vt_ioctl.o
  CC      drivers/tty/hvc/hvc_console.o
  AR      drivers/tty/ipwireless/built-in.a
  CC      drivers/tty/serial/8250/8250_core.o
  CC      drivers/tty/serial/8250/8250_pnp.o
  CC      drivers/tty/vt/vc_screen.o
  CC [M]  drivers/video/fbdev/core/bitblit.o
  CC      drivers/tty/tty_io.o
  CC      net/ethtool/cabletest.o
  AR      fs/proc/built-in.a
  CC      fs/ext4/file.o
  CC [M]  sound/hda/array.o
  CC      kernel/range.o
  LD [M]  net/ipv6/netfilter/nf_defrag_ipv6.o
  CC      arch/x86/kernel/i8259.o
  CC      net/ipv6/af_inet6.o
  CC      net/unix/scm.o
  CC      crypto/ghash-generic.o
  CC      fs/hugetlbfs/inode.o
  CC      drivers/virtio/virtio_ring.o
  CC      drivers/char/hw_random/core.o
  CC      drivers/char/agp/backend.o
  CC      drivers/char/tpm/tpm-chip.o
  CC      drivers/char/mem.o
  CC      drivers/char/random.o
  CC      drivers/acpi/acpica/excreate.o
  CC      fs/ext4/fsmap.o
  AR      drivers/clk/x86/built-in.a
  CC      drivers/clk/clkdev.o
  AR      drivers/dma/hsu/built-in.a
  CC      block/bsg-lib.o
  CC      drivers/char/tpm/tpm-dev-common.o
  CC      drivers/pci/iov.o
  CC      drivers/tty/serial/8250/8250_port.o
  CC      kernel/trace/trace_probe.o
  CC      net/ipv4/tcp_input.o
  CC      drivers/dma/dw/dw.o
  CC      net/xfrm/xfrm_sysctl.o
  CC      crypto/af_alg.o
  CC      crypto/algif_hash.o
  CC      arch/x86/kernel/irqinit.o
  CC      net/xfrm/xfrm_replay.o
  LD [M]  net/netfilter/ipvs/ip_vs.o
  CC      drivers/acpi/acpica/exdebug.o
  CC      drivers/dma/dw/idma32.o
  CC [M]  drivers/video/fbdev/core/softcursor.o
  CC      net/netfilter/nf_log.o
  CC [M]  sound/hda/hdmi_chmap.o
  CC [M]  arch/x86/kvm/ioapic.o
  CC [M]  sound/hda/trace.o
  CC      drivers/acpi/acpica/exdump.o
  CC      drivers/char/agp/generic.o
  AR      drivers/tty/hvc/built-in.a
  AR      drivers/iommu/amd/built-in.a
  CC      drivers/clk/clk.o
  CC      drivers/iommu/intel/dmar.o
  CC      drivers/iommu/intel/iommu.o
  CC      drivers/char/hw_random/intel-rng.o
  CC      drivers/tty/vt/selection.o
  AR      drivers/gpu/host1x/built-in.a
  CC      drivers/connector/cn_queue.o
  AR      drivers/gpu/drm/tests/built-in.a
  AR      net/unix/built-in.a
  CC [M]  drivers/gpu/drm/tests/drm_kunit_helpers.o
  CC      drivers/base/power/sysfs.o
  CC      drivers/iommu/intel/pasid.o
  CC      drivers/char/tpm/tpm-dev.o
  CC      drivers/base/power/generic_ops.o
  CC      net/ethtool/tunnels.o
  CC      block/blk-cgroup.o
  CC      drivers/base/power/common.o
  CC [M]  drivers/gpu/drm/tests/drm_buddy_test.o
  CC      drivers/acpi/acpica/exfield.o
  CC      drivers/dma/dw/acpi.o
  CC      arch/x86/kernel/jump_label.o
  CC      drivers/pci/pci-label.o
  AR      fs/hugetlbfs/built-in.a
  CC      fs/fat/cache.o
  CC [M]  drivers/video/fbdev/core/tileblit.o
  CC [M]  drivers/gpu/drm/tests/drm_cmdline_parser_test.o
  CC      drivers/block/loop.o
  CC      fs/fat/dir.o
  CC [M]  drivers/block/nbd.o
  CC      drivers/char/tpm/tpm-interface.o
  CC      drivers/char/agp/isoch.o
  AR      drivers/misc/eeprom/built-in.a
  AR      drivers/char/hw_random/built-in.a
  AR      drivers/misc/cb710/built-in.a
  CC      drivers/acpi/acpica/exfldio.o
  CC      drivers/base/power/qos.o
  CC      drivers/tty/serial/8250/8250_dma.o
  AR      drivers/misc/ti-st/built-in.a
  CC      drivers/tty/vt/keyboard.o
  CC      drivers/char/tpm/tpm1-cmd.o
  CC      drivers/tty/serial/8250/8250_dwlib.o
  AR      drivers/misc/lis3lv02d/built-in.a
  AR      drivers/misc/cardreader/built-in.a
  CC [M]  sound/hda/hdac_component.o
  CC      drivers/virtio/virtio_anchor.o
  CC [M]  sound/hda/hdac_i915.o
  CC      net/ipv6/anycast.o
  CC [M]  drivers/misc/mei/hdcp/mei_hdcp.o
  CC [M]  drivers/misc/mei/pxp/mei_pxp.o
  AR      fs/jbd2/built-in.a
  CC      kernel/trace/trace_uprobe.o
  CC      drivers/connector/connector.o
  CC      net/xfrm/xfrm_device.o
  CC      net/netfilter/nf_queue.o
  AR      drivers/misc/built-in.a
  CC      drivers/base/power/runtime.o
  CC      drivers/mfd/mfd-core.o
  CC [M]  arch/x86/kvm/irq_comm.o
  CC      crypto/algif_skcipher.o
  CC      drivers/dma/dw/pci.o
  CC      net/xfrm/xfrm_algo.o
  CC      arch/x86/kernel/irq_work.o
  CC      drivers/mfd/intel-lpss.o
  CC      net/packet/diag.o
  CC      drivers/pci/pci-stub.o
  CC [M]  drivers/gpu/drm/tests/drm_connector_test.o
  CC      net/ethtool/fec.o
  CC      drivers/acpi/acpica/exmisc.o
  CC      drivers/acpi/acpica/exmutex.o
  CC [M]  drivers/gpu/drm/tests/drm_damage_helper_test.o
  CC      drivers/virtio/virtio_pci_modern_dev.o
  CC [M]  drivers/video/fbdev/core/cfbfillrect.o
  CC      drivers/tty/serial/8250/8250_pcilib.o
  CC      mm/mincore.o
  CC      drivers/char/agp/intel-agp.o
  CC      kernel/smpboot.o
  CC      kernel/ucount.o
  CC      drivers/char/tpm/tpm2-cmd.o
  CC [M]  sound/hda/intel-dsp-config.o
  CC      drivers/iommu/intel/trace.o
  CC      net/core/flow_offload.o
  CC      drivers/iommu/intel/cap_audit.o
  AR      drivers/dma/dw/built-in.a
  CC      drivers/iommu/intel/irq_remapping.o
  AR      drivers/dma/idxd/built-in.a
  AR      drivers/dma/mediatek/built-in.a
  CC [M]  drivers/misc/mei/init.o
  AR      drivers/dma/qcom/built-in.a
  AR      drivers/dma/ti/built-in.a
  AR      drivers/dma/xilinx/built-in.a
  CC      drivers/dma/dmaengine.o
  CC      drivers/dma/virt-dma.o
  CC [M]  drivers/dma/ioat/init.o
  CC      drivers/acpi/acpica/exnames.o
  CC [M]  drivers/dma/ioat/dma.o
  CC      drivers/connector/cn_proc.o
  CC      drivers/pci/vgaarb.o
  CC      drivers/mfd/intel-lpss-pci.o
  CC [M]  drivers/dma/ioat/prep.o
  CC      arch/x86/kernel/probe_roms.o
  CC      drivers/tty/serial/8250/8250_pci.o
  CC      crypto/xor.o
  CC      net/ipv6/ip6_output.o
  CC [M]  arch/x86/kvm/cpuid.o
  CC      fs/fat/fatent.o
  CC      drivers/dma/acpi-dma.o
  CC      block/blk-cgroup-rwstat.o
  CC      drivers/virtio/virtio_pci_legacy_dev.o
  CC      drivers/tty/serial/8250/8250_exar.o
  AR      net/packet/built-in.a
  CC      net/core/gro.o
  CC      drivers/base/power/wakeirq.o
  CC [M]  sound/hda/intel-nhlt.o
  CC      net/xfrm/xfrm_user.o
  CC [M]  drivers/video/fbdev/core/cfbcopyarea.o
  CC      net/ipv6/ip6_input.o
  CC      drivers/acpi/acpica/exoparg1.o
  CC      drivers/acpi/acpica/exoparg2.o
  CC      drivers/char/agp/intel-gtt.o
  CC      net/ethtool/eeprom.o
  CC      net/key/af_key.o
  CC      net/netfilter/nf_sockopt.o
  CC      drivers/tty/vt/consolemap.o
  CC      mm/mlock.o
  CC [M]  drivers/misc/mei/hbm.o
  CC [M]  drivers/gpu/drm/tests/drm_dp_mst_helper_test.o
  CC      kernel/trace/rethook.o
  CC      drivers/mfd/intel-lpss-acpi.o
  CC [M]  drivers/video/fbdev/core/cfbimgblt.o
  CC      drivers/char/tpm/tpmrm-dev.o
  CC      crypto/hash_info.o
  CC      arch/x86/kernel/sys_ia32.o
  CC      net/ethtool/stats.o
  CC      crypto/simd.o
  CC      fs/ext4/fsync.o
  CC      net/core/netdev-genl.o
  CC      net/core/netdev-genl-gen.o
  CC      drivers/base/power/main.o
  CC      drivers/virtio/virtio_pci_modern.o
  CC      drivers/base/power/wakeup.o
  CC [M]  drivers/gpu/drm/tests/drm_format_helper_test.o
  CC [M]  drivers/video/fbdev/core/sysfillrect.o
  CC      block/blk-throttle.o
  CC [M]  sound/hda/intel-sdw-acpi.o
  CC      drivers/acpi/acpica/exoparg3.o
  CC      drivers/char/tpm/tpm2-space.o
  CC      drivers/tty/serial/8250/8250_early.o
  CC      drivers/virtio/virtio_pci_common.o
  CC      arch/x86/kernel/signal_32.o
  CC      drivers/mfd/intel_soc_pmic_crc.o
  AR      drivers/connector/built-in.a
  CC      drivers/clk/clk-divider.o
  CC [M]  drivers/dma/ioat/dca.o
  AR      drivers/pci/built-in.a
  CC      block/mq-deadline.o
  CC      drivers/iommu/intel/perfmon.o
  CC      fs/ext4/hash.o
  CC      mm/mmap.o
  AR      drivers/gpu/drm/arm/built-in.a
  AR      drivers/nfc/built-in.a
  AR      net/bridge/netfilter/built-in.a
  CC      net/bridge/br.o
  AR      drivers/gpu/vga/built-in.a
  AR      kernel/trace/built-in.a
  CC      net/bridge/br_device.o
  AR      drivers/block/built-in.a
  CC      kernel/regset.o
  CC      block/kyber-iosched.o
  CC [M]  crypto/md4.o
  AR      drivers/gpu/drm/display/built-in.a
  CC      fs/fat/file.o
  CC [M]  drivers/gpu/drm/display/drm_display_helper_mod.o
  LD [M]  sound/hda/snd-hda-core.o
  HOSTCC  drivers/tty/vt/conmakehash
  CC      net/netfilter/utils.o
  CC      drivers/acpi/acpica/exoparg6.o
  LD [M]  sound/hda/snd-intel-dspcfg.o
  AR      drivers/char/agp/built-in.a
  CC      drivers/tty/vt/vt.o
  CC      net/ethtool/phc_vclocks.o
  CC      arch/x86/kernel/sys_x86_64.o
  CC [M]  drivers/gpu/drm/tests/drm_format_test.o
  LD [M]  sound/hda/snd-intel-sdw-acpi.o
  AR      sound/x86/built-in.a
  AR      sound/xen/built-in.a
  CC [M]  drivers/gpu/drm/tests/drm_framebuffer_test.o
  AR      sound/virtio/built-in.a
  CC      sound/sound_core.o
  CC      net/ethtool/mm.o
  CC      net/ethtool/module.o
  CC [M]  drivers/misc/mei/interrupt.o
  CC      fs/nfs/client.o
  CC [M]  drivers/misc/mei/client.o
  AR      lib/zstd/built-in.a
  CC      lib/irq_regs.o
  CC      drivers/tty/serial/8250/8250_dw.o
  CC      kernel/kmod.o
  CC [M]  drivers/gpu/drm/display/drm_dp_dual_mode_helper.o
  CC [M]  drivers/misc/mei/main.o
  CC [M]  drivers/video/fbdev/core/syscopyarea.o
  CC [M]  drivers/dma/ioat/sysfs.o
  CC [M]  drivers/mfd/lpc_sch.o
  CC      lib/is_single_threaded.o
  CC      drivers/clk/clk-fixed-factor.o
  COPY    drivers/tty/vt/defkeymap.c
  CC      drivers/clk/clk-fixed-rate.o
  CC      drivers/char/tpm/tpm-sysfs.o
  CC      mm/mmu_gather.o
  CC [M]  crypto/ccm.o
  CC      drivers/acpi/acpica/exprep.o
  CC      drivers/clk/clk-gate.o
  CC      sound/last.o
  CC      drivers/acpi/acpica/exregion.o
  CC      net/core/net-sysfs.o
  CC      drivers/virtio/virtio_pci_legacy.o
  CC      fs/ext4/ialloc.o
  CC      drivers/base/power/wakeup_stats.o
  AR      drivers/dma/built-in.a
  CC      mm/mprotect.o
  CC      drivers/clk/clk-multiplier.o
  CC      net/ipv6/addrconf.o
  CC      arch/x86/kernel/espfix_64.o
  CC      lib/klist.o
  CC [M]  arch/x86/kvm/pmu.o
  CC      fs/fat/inode.o
  CC [M]  arch/x86/kvm/mtrr.o
  CC [M]  drivers/gpu/drm/tests/drm_managed_test.o
  AR      sound/built-in.a
  CC [M]  arch/x86/kvm/hyperv.o
  CC      net/ipv6/addrlabel.o
  CC [M]  drivers/gpu/drm/tests/drm_mm_test.o
  CC      drivers/acpi/acpica/exresnte.o
  AR      drivers/iommu/intel/built-in.a
  CC      drivers/char/tpm/eventlog/common.o
  CC      drivers/tty/n_tty.o
  AR      drivers/iommu/arm/arm-smmu/built-in.a
  CC      net/ipv6/route.o
  AR      drivers/iommu/arm/arm-smmu-v3/built-in.a
  AR      drivers/iommu/arm/built-in.a
  CC      block/bfq-iosched.o
  AR      drivers/iommu/iommufd/built-in.a
  LD [M]  drivers/dma/ioat/ioatdma.o
  CC      drivers/iommu/iommu.o
  AR      drivers/gpu/drm/rcar-du/built-in.a
  CC      kernel/groups.o
  CC      lib/kobject.o
  CC [M]  drivers/mfd/lpc_ich.o
  CC      kernel/kcmp.o
  CC      kernel/freezer.o
  CC      net/ethtool/pse-pd.o
  CC [M]  net/netfilter/nfnetlink.o
  CC      lib/kobject_uevent.o
  CC      net/bridge/br_fdb.o
  CC [M]  drivers/gpu/drm/display/drm_dp_helper.o
  CC      drivers/clk/clk-mux.o
  CC      drivers/clk/clk-composite.o
  CC [M]  drivers/virtio/virtio_mem.o
  CC      net/ipv4/tcp_output.o
  CC      drivers/tty/serial/8250/8250_lpss.o
  CC      net/ipv4/tcp_timer.o
  CC      lib/logic_pio.o
  CC [M]  drivers/video/fbdev/core/sysimgblt.o
  CC      net/ethtool/plca.o
  CC      drivers/base/power/domain.o
  CC [M]  drivers/misc/mei/dma-ring.o
  CC      block/bfq-wf2q.o
  CC      drivers/acpi/acpica/exresolv.o
  CC      drivers/base/power/domain_governor.o
  CC [M]  crypto/arc4.o
  AR      net/xfrm/built-in.a
  CC      fs/fat/misc.o
  CC      arch/x86/kernel/ksysfs.o
  AR      net/key/built-in.a
  CC      drivers/char/tpm/eventlog/tpm1.o
  CC [M]  net/sunrpc/auth_gss/auth_gss.o
  CC      net/sunrpc/clnt.o
  CC      drivers/clk/clk-fractional-divider.o
  CC      kernel/stacktrace.o
  CC      lib/maple_tree.o
  CC      kernel/dma.o
  CC      drivers/char/tpm/eventlog/tpm2.o
  CC      drivers/acpi/acpica/exresop.o
  CC      drivers/acpi/acpica/exserial.o
  CC      net/sunrpc/xprt.o
  AR      drivers/mfd/built-in.a
  CC      drivers/tty/serial/8250/8250_mid.o
  CC      drivers/clk/clk-gpio.o
  CC [M]  crypto/ecc.o
  CC      drivers/acpi/acpica/exstore.o
  CC      fs/fat/nfs.o
  CC      fs/nfs/dir.o
  CC      fs/fat/namei_vfat.o
  CC      drivers/tty/tty_ioctl.o
  CC [M]  drivers/misc/mei/bus.o
  CC [M]  net/sunrpc/auth_gss/gss_generic_token.o
  CC      drivers/tty/serial/8250/8250_pericom.o
  CC [M]  drivers/video/fbdev/core/fb_sys_fops.o
  CC      lib/memcat_p.o
  CC      arch/x86/kernel/bootflag.o
  AR      net/ethtool/built-in.a
  CC      drivers/acpi/osl.o
  CC      net/8021q/vlan_core.o
  CC      net/core/net-procfs.o
  CC      net/ipv4/tcp_ipv4.o
  CC      drivers/char/tpm/tpm_ppi.o
  CC      drivers/acpi/utils.o
  CC      mm/mremap.o
  CC      net/core/netpoll.o
  CC      net/core/fib_rules.o
  CC      drivers/acpi/acpica/exstoren.o
  CC      net/ipv4/tcp_minisocks.o
  CC      block/bfq-cgroup.o
  AR      drivers/clk/built-in.a
  CC      fs/ext4/indirect.o
  CC      kernel/smp.o
  CC [M]  net/netfilter/nf_conntrack_core.o
  CC      net/ipv6/ip6_fib.o
  CC      drivers/char/tpm/eventlog/acpi.o
  CC      net/core/net-traces.o
  CC      drivers/base/power/clock_ops.o
  CONMK   drivers/tty/vt/consolemap_deftbl.c
  CC      drivers/tty/vt/defkeymap.o
  CC      drivers/acpi/acpica/exstorob.o
  CC      block/blk-mq-pci.o
  CC      arch/x86/kernel/e820.o
  CC      drivers/char/misc.o
  AR      drivers/tty/serial/8250/built-in.a
  CC [M]  drivers/misc/mei/bus-fixup.o
  CC      drivers/tty/serial/serial_core.o
  CC [M]  drivers/gpu/drm/tests/drm_modes_test.o
  CC      drivers/tty/vt/consolemap_deftbl.o
  CC      arch/x86/kernel/pci-dma.o
  AR      drivers/tty/vt/built-in.a
  CC [M]  net/sunrpc/auth_gss/gss_mech_switch.o
  LD [M]  drivers/video/fbdev/core/fb.o
  CC [M]  drivers/gpu/drm/tests/drm_plane_helper_test.o
  CC      drivers/tty/serial/earlycon.o
  AR      drivers/video/fbdev/core/built-in.a
  CC      drivers/acpi/acpica/exsystem.o
  CC [M]  drivers/video/fbdev/simplefb.o
  AR      drivers/virtio/built-in.a
  CC [M]  drivers/gpu/drm/display/drm_dp_mst_topology.o
  CC      fs/nfs/file.o
  CC      drivers/iommu/iommu-traces.o
  CC      drivers/iommu/iommu-sysfs.o
  CC      drivers/iommu/dma-iommu.o
  CC      fs/fat/namei_msdos.o
  CC [M]  net/sunrpc/auth_gss/svcauth_gss.o
  CC      drivers/tty/tty_ldisc.o
  CC      drivers/char/tpm/eventlog/efi.o
  CC      fs/nfs/getroot.o
  CC      kernel/uid16.o
  AR      drivers/base/power/built-in.a
  CC      drivers/base/firmware_loader/main.o
  CC      drivers/base/firmware_loader/builtin/main.o
  CC      net/bridge/br_forward.o
  CC      block/blk-mq-virtio.o
  CC      net/ipv4/tcp_cong.o
  CC      drivers/acpi/acpica/extrace.o
  CC      kernel/kallsyms.o
  CC [M]  arch/x86/kvm/debugfs.o
  CC [M]  net/8021q/vlan.o
  CC [M]  drivers/gpu/drm/tests/drm_probe_helper_test.o
  CC [M]  drivers/misc/mei/debugfs.o
  CC      mm/msync.o
  CC      fs/nfs/inode.o
  CC      drivers/tty/serial/serial_mctrl_gpio.o
  AR      drivers/base/firmware_loader/builtin/built-in.a
  CC [M]  drivers/gpu/drm/tests/drm_rect_test.o
  CC      kernel/acct.o
  CC      drivers/acpi/acpica/exutils.o
  CC [M]  crypto/essiv.o
  CC      drivers/char/tpm/tpm_crb.o
  AR      drivers/video/fbdev/built-in.a
  AR      drivers/video/built-in.a
  CC      drivers/acpi/acpica/hwacpi.o
  AR      drivers/dax/hmem/built-in.a
  CC      drivers/dax/super.o
  CC      drivers/acpi/acpica/hwesleep.o
  CC      arch/x86/kernel/quirks.o
  CC      net/core/selftests.o
  CC      drivers/dma-buf/dma-buf.o
  CC      net/core/ptp_classifier.o
  CC      drivers/dma-buf/dma-fence.o
  AR      fs/fat/built-in.a
  CC      kernel/crash_core.o
  CC      net/ipv4/tcp_metrics.o
  CC      drivers/dax/bus.o
  CC      fs/ext4/inline.o
  CC      block/blk-mq-debugfs.o
  CC [M]  net/sunrpc/auth_gss/gss_rpc_upcall.o
  CC      net/core/netprio_cgroup.o
  CC [M]  drivers/misc/mei/mei-trace.o
  AR      drivers/cxl/core/built-in.a
  AR      drivers/macintosh/built-in.a
  AR      drivers/cxl/built-in.a
  CC      drivers/scsi/scsi.o
  CC [M]  drivers/gpu/drm/display/drm_dsc_helper.o
  CC      drivers/acpi/acpica/hwgpe.o
  CC      arch/x86/kernel/topology.o
  CC [M]  arch/x86/kvm/mmu/mmu.o
  CC      net/ipv4/tcp_fastopen.o
  CC      mm/page_vma_mapped.o
  CC      net/ipv4/tcp_rate.o
  AR      drivers/base/firmware_loader/built-in.a
  CC      drivers/base/regmap/regmap.o
  AR      drivers/base/test/built-in.a
  CC      drivers/base/component.o
  CC      drivers/acpi/acpica/hwregs.o
  CC      drivers/iommu/ioasid.o
  CC      net/bridge/br_if.o
  CC [M]  crypto/ecdh.o
  CC [M]  net/8021q/vlan_dev.o
  CC      drivers/acpi/acpica/hwsleep.o
  CC      net/sunrpc/socklib.o
  CC      net/sunrpc/xprtsock.o
  CC      net/core/dst_cache.o
  CC      net/sunrpc/sched.o
  CC      kernel/compat.o
  AR      drivers/char/tpm/built-in.a
  CC      drivers/char/virtio_console.o
  CC      net/ipv6/ipv6_sockglue.o
  CC      arch/x86/kernel/kdebugfs.o
  CC      kernel/utsname.o
  CC      fs/nfs/super.o
  CC [M]  crypto/ecdh_helper.o
  AR      drivers/tty/serial/built-in.a
  CC      drivers/tty/tty_buffer.o
  CC      net/sunrpc/auth.o
  CC [M]  net/8021q/vlan_netlink.o
  AR      drivers/gpu/drm/omapdrm/built-in.a
  CC      drivers/iommu/iova.o
  AR      drivers/gpu/drm/tilcdc/built-in.a
  CC [M]  drivers/misc/mei/pci-me.o
  AR      drivers/gpu/drm/imx/built-in.a
  CC      net/sunrpc/auth_null.o
  CC      fs/ext4/inode.o
  CC      fs/ext4/ioctl.o
  CC [M]  net/netfilter/nf_conntrack_standalone.o
  CC      drivers/acpi/acpica/hwvalid.o
  CC      fs/ext4/mballoc.o
  CC      block/blk-pm.o
  CC      mm/pagewalk.o
  CC [M]  net/sunrpc/auth_gss/gss_rpc_xdr.o
  CC [M]  drivers/gpu/drm/display/drm_hdcp_helper.o
  CC      drivers/dma-buf/dma-fence-array.o
  CC      drivers/acpi/acpica/hwxface.o
  CC      drivers/scsi/hosts.o
  CC      block/holder.o
  LD [M]  crypto/ecdh_generic.o
  AR      crypto/built-in.a
  CC      net/core/gro_cells.o
  CC [M]  drivers/misc/mei/hw-me.o
  AR      drivers/dax/built-in.a
  CC      kernel/user_namespace.o
  CC      drivers/base/regmap/regcache.o
  CC [M]  arch/x86/kvm/mmu/page_track.o
  CC [M]  net/sunrpc/auth_gss/trace.o
  CC      arch/x86/kernel/alternative.o
  CC      drivers/scsi/scsi_ioctl.o
  CC      net/ipv4/tcp_recovery.o
  CC      drivers/acpi/acpica/hwxfsleep.o
  CC      kernel/pid_namespace.o
  CC      net/sunrpc/auth_unix.o
  CC      drivers/tty/tty_port.o
  CC      drivers/tty/tty_mutex.o
  CC      drivers/dma-buf/dma-fence-chain.o
  CC      drivers/scsi/scsicam.o
  CC      drivers/base/core.o
  CC      drivers/base/regmap/regcache-rbtree.o
  CC      arch/x86/kernel/i8253.o
  CC      arch/x86/kernel/hw_breakpoint.o
  AR      drivers/gpu/drm/i2c/built-in.a
  CC [M]  drivers/misc/mei/gsc-me.o
  CC [M]  net/8021q/vlanproc.o
  CC      drivers/iommu/irq_remapping.o
  AR      block/built-in.a
  LD [M]  drivers/misc/mei/mei.o
  CC      drivers/tty/tty_ldsem.o
  UPD     kernel/config_data
  CC      drivers/acpi/acpica/hwpci.o
  CC      net/sunrpc/svc.o
  CC      arch/x86/kernel/tsc.o
  CC      drivers/acpi/acpica/nsaccess.o
  CC      net/bridge/br_input.o
  CC [M]  arch/x86/kvm/mmu/spte.o
  CC      mm/pgtable-generic.o
  CC      mm/rmap.o
  CC      fs/nfs/io.o
  CC      drivers/char/hpet.o
  CC      net/sunrpc/svcsock.o
  CC [M]  drivers/gpu/drm/display/drm_hdmi_helper.o
  AR      drivers/gpu/drm/panel/built-in.a
  CC      arch/x86/kernel/tsc_msr.o
  CC      drivers/dma-buf/dma-fence-unwrap.o
  CC      drivers/scsi/scsi_error.o
  CC [M]  net/sunrpc/auth_gss/gss_krb5_mech.o
  CC      drivers/tty/tty_baudrate.o
  CC      drivers/acpi/acpica/nsalloc.o
  CC      arch/x86/kernel/io_delay.o
  AR      drivers/gpu/drm/bridge/analogix/built-in.a
  AR      drivers/gpu/drm/bridge/cadence/built-in.a
  CC      net/sunrpc/svcauth.o
  CC [M]  net/netfilter/nf_conntrack_expect.o
  AR      drivers/gpu/drm/bridge/imx/built-in.a
  CC      kernel/stop_machine.o
  CC      mm/vmalloc.o
  AR      drivers/gpu/drm/bridge/synopsys/built-in.a
  CC      mm/page_alloc.o
  AR      drivers/gpu/drm/bridge/built-in.a
  CC      net/sunrpc/svcauth_unix.o
  AR      drivers/gpu/drm/hisilicon/built-in.a
  CC      arch/x86/kernel/rtc.o
  CC      arch/x86/kernel/resource.o
  CC      drivers/acpi/acpica/nsarguments.o
  CC      drivers/acpi/acpica/nsconvert.o
  AR      drivers/iommu/built-in.a
  CC [M]  net/sunrpc/auth_gss/gss_krb5_seal.o
  CC      drivers/tty/tty_jobctrl.o
  CC      drivers/nvme/host/core.o
  CC      net/ipv4/tcp_ulp.o
  CC      drivers/nvme/host/ioctl.o
  CC      net/ipv6/ndisc.o
  AR      net/core/built-in.a
  CC      drivers/nvme/host/trace.o
  CC      drivers/base/regmap/regcache-flat.o
  CC      mm/init-mm.o
  CC      net/dcb/dcbnl.o
  AR      net/8021q/built-in.a
  LD [M]  net/8021q/8021q.o
  AS      arch/x86/kernel/irqflags.o
  CC      net/dcb/dcbevent.o
  CC      drivers/dma-buf/dma-resv.o
  CC      net/l3mdev/l3mdev.o
  CC [M]  net/sunrpc/auth_gss/gss_krb5_unseal.o
  CC      drivers/acpi/acpica/nsdump.o
  CC [M]  drivers/gpu/drm/display/drm_scdc_helper.o
  CC      drivers/acpi/acpica/nseval.o
  AR      drivers/gpu/drm/mxsfb/built-in.a
  AR      drivers/gpu/drm/tiny/built-in.a
  CC      drivers/acpi/acpica/nsinit.o
  AR      drivers/gpu/drm/xlnx/built-in.a
  AR      drivers/gpu/drm/gud/built-in.a
  LD [M]  drivers/misc/mei/mei-gsc.o
  AR      drivers/gpu/drm/solomon/built-in.a
  LD [M]  drivers/misc/mei/mei-me.o
  CC      drivers/acpi/acpica/nsload.o
  CC      drivers/acpi/acpica/nsnames.o
  CC [M]  drivers/gpu/drm/ttm/ttm_tt.o
  CC      drivers/acpi/acpica/nsobject.o
  CC [M]  drivers/gpu/drm/ttm/ttm_bo.o
  CC      drivers/acpi/acpica/nsparse.o
  CC      drivers/char/nvram.o
  CC      fs/nfs/direct.o
  CC      arch/x86/kernel/static_call.o
  CC      fs/nfs/pagelist.o
  CC      fs/nfs/read.o
  CC      drivers/base/regmap/regmap-debugfs.o
  CC      drivers/nvme/host/pci.o
  CC [M]  net/bluetooth/af_bluetooth.o
  CC      kernel/kprobes.o
  CC      mm/memblock.o
  CC      drivers/base/regmap/regmap-i2c.o
  CC      net/bridge/br_ioctl.o
  CC      drivers/tty/n_null.o
  CC      drivers/ata/libata-core.o
  CC      drivers/spi/spi.o
  CC      drivers/ata/libata-scsi.o
  CC      drivers/net/phy/mdio-boardinfo.o
  CC [M]  net/sunrpc/auth_gss/gss_krb5_seqnum.o
  AR      drivers/firewire/built-in.a
  CC      lib/nmi_backtrace.o
  CC      drivers/ata/libata-eh.o
  CC      drivers/acpi/acpica/nspredef.o
  CC [M]  drivers/gpu/drm/display/drm_dp_aux_dev.o
  CC      drivers/acpi/acpica/nsprepkg.o
  AR      drivers/net/pse-pd/built-in.a
  CC      drivers/acpi/acpica/nsrepair.o
  CC      drivers/net/mdio/acpi_mdio.o
  CC      arch/x86/kernel/process.o
  AR      net/l3mdev/built-in.a
  CC      drivers/net/mdio/fwnode_mdio.o
  CC [M]  net/dns_resolver/dns_key.o
  CC      net/ipv4/tcp_offload.o
  CC      drivers/dma-buf/sync_file.o
  AR      drivers/char/built-in.a
  CC [M]  net/dns_resolver/dns_query.o
  CC      net/sunrpc/addr.o
  AR      drivers/cdrom/built-in.a
  CC      drivers/scsi/scsi_lib.o
  CC [M]  drivers/gpu/drm/scheduler/sched_main.o
  CC [M]  net/netfilter/nf_conntrack_helper.o
  CC      drivers/tty/pty.o
  CC [M]  drivers/gpu/drm/scheduler/sched_fence.o
  CC [M]  drivers/gpu/drm/scheduler/sched_entity.o
  CC      lib/plist.o
  CC      net/ipv4/tcp_plb.o
  CC      drivers/acpi/acpica/nsrepair2.o
  CC      drivers/ata/libata-transport.o
  CC      drivers/base/regmap/regmap-irq.o
  CC [M]  drivers/gpu/drm/ttm/ttm_bo_util.o
  AR      drivers/nvme/target/built-in.a
  CC      fs/ext4/migrate.o
  CC      lib/radix-tree.o
  CC [M]  net/sunrpc/auth_gss/gss_krb5_wrap.o
  CC      drivers/net/phy/mdio_devres.o
  CC      drivers/net/phy/phy.o
  CC      drivers/net/phy/phy-c45.o
  CC      net/ipv4/datagram.o
  AR      drivers/net/mdio/built-in.a
  CC      drivers/dma-buf/sw_sync.o
  CC      drivers/acpi/acpica/nssearch.o
  CC      drivers/dma-buf/sync_debug.o
  LD [M]  drivers/gpu/drm/display/drm_display_helper.o
  CC [M]  drivers/gpu/drm/ttm/ttm_bo_vm.o
  CC [M]  drivers/gpu/drm/ttm/ttm_module.o
  CC      net/bridge/br_stp.o
  CC      drivers/tty/sysrq.o
  LD [M]  net/dns_resolver/dns_resolver.o
  CC      drivers/scsi/scsi_lib_dma.o
  CC      net/bridge/br_stp_bpdu.o
  CC [M]  net/bluetooth/hci_core.o
  CC      drivers/base/bus.o
  AR      net/dcb/built-in.a
  CC      lib/ratelimit.o
  CC      lib/rbtree.o
  CC      net/sunrpc/rpcb_clnt.o
  CC      net/sunrpc/timer.o
  CC      drivers/acpi/acpica/nsutils.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.o
  CC [M]  net/sunrpc/auth_gss/gss_krb5_crypto.o
  CC      lib/seq_buf.o
  CC      kernel/hung_task.o
  CC      kernel/watchdog.o
  CC      kernel/watchdog_hld.o
  CC      mm/memory_hotplug.o
  CC      fs/nfs/symlink.o
  CC      fs/nfs/unlink.o
  CC [M]  drivers/gpu/drm/i915/i915_driver.o
  CC      net/ipv6/udp.o
  CC      fs/nfs/write.o
  CC      arch/x86/kernel/ptrace.o
  CC [M]  drivers/gpu/drm/i915/i915_drm_client.o
  AR      drivers/auxdisplay/built-in.a
  CC      fs/ext4/mmp.o
  CC [M]  net/netfilter/nf_conntrack_proto.o
  CC [M]  net/netfilter/nf_conntrack_proto_generic.o
  CC      drivers/usb/common/common.o
  CC [M]  drivers/gpu/drm/ttm/ttm_execbuf_util.o
  LD [M]  drivers/gpu/drm/scheduler/gpu-sched.o
  CC      drivers/usb/common/debug.o
  CC [M]  drivers/gpu/drm/ttm/ttm_range_manager.o
  CC      fs/nfs/namespace.o
  CC [M]  drivers/dma-buf/selftest.o
  AR      drivers/base/regmap/built-in.a
  CC      drivers/input/serio/serio.o
  CC [M]  net/sunrpc/auth_gss/gss_krb5_keys.o
  CC      drivers/acpi/acpica/nswalk.o
  CC      drivers/base/dd.o
  CC      lib/show_mem.o
  CC      net/ipv4/raw.o
  AR      drivers/tty/built-in.a
  CC [M]  drivers/gpu/drm/i915/i915_config.o
  CC      drivers/input/keyboard/atkbd.o
  AR      drivers/input/mouse/built-in.a
  CC      drivers/net/phy/phy-core.o
  CC      net/ipv4/udp.o
  CC [M]  drivers/gpu/drm/i915/i915_getparam.o
  CC      kernel/seccomp.o
  CC      net/ipv4/udplite.o
  CC      drivers/input/input.o
  CC      drivers/input/input-compat.o
  CC      drivers/input/input-mt.o
  CC      drivers/net/phy/phy_device.o
  CC      net/bridge/br_stp_if.o
  CC      drivers/scsi/scsi_scan.o
  CC [M]  drivers/dma-buf/st-dma-fence.o
  CC      kernel/relay.o
  CC      drivers/acpi/acpica/nsxfeval.o
  CC      drivers/input/serio/i8042.o
  CC      drivers/input/serio/libps2.o
  CC [M]  drivers/gpu/drm/ttm/ttm_resource.o
  AR      drivers/usb/common/built-in.a
  CC      drivers/usb/core/usb.o
  CC      drivers/usb/core/hub.o
  LD [M]  net/sunrpc/auth_gss/auth_rpcgss.o
  CC      net/devres.o
  CC      net/ipv4/udp_offload.o
  CC      lib/siphash.o
  CC      net/ipv4/arp.o
  CC      net/socket.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_device.o
  CC      drivers/input/input-poller.o
  CC      fs/ext4/move_extent.o
  AR      drivers/nvme/host/built-in.a
  AR      drivers/nvme/built-in.a
  CC      arch/x86/kernel/tls.o
  CC      drivers/rtc/lib.o
  LD [M]  net/sunrpc/auth_gss/rpcsec_gss_krb5.o
  CC      arch/x86/kernel/step.o
  CC      drivers/rtc/class.o
  CC [M]  drivers/gpu/drm/ttm/ttm_pool.o
  CC      net/compat.o
  CC      drivers/acpi/acpica/nsxfname.o
  CC      drivers/base/syscore.o
  CC      fs/nfs/mount_clnt.o
  CC      net/ipv6/udplite.o
  CC      fs/nfs/nfstrace.o
  CC [M]  drivers/dma-buf/st-dma-fence-chain.o
  CC      net/sysctl_net.o
  CC      lib/string.o
  CC [M]  drivers/gpu/drm/i915/i915_ioctl.o
  CC      drivers/input/ff-core.o
  CC [M]  net/netfilter/nf_conntrack_proto_tcp.o
  AR      drivers/spi/built-in.a
  CC      fs/nfs/export.o
  CC      drivers/net/phy/linkmode.o
  CC [M]  drivers/dma-buf/st-dma-fence-unwrap.o
  AR      drivers/input/keyboard/built-in.a
  CC      mm/madvise.o
  CC      drivers/input/touchscreen.o
  CC [M]  net/netfilter/nf_conntrack_proto_udp.o
  CC      net/sunrpc/xdr.o
  CC      drivers/input/ff-memless.o
  CC [M]  drivers/gpu/drm/xe/tests/xe_bo_test.o
  CC      drivers/input/vivaldi-fmap.o
  CC      net/bridge/br_stp_timer.o
  CC [M]  drivers/gpu/drm/vgem/vgem_drv.o
  CC      drivers/acpi/acpica/nsxfobj.o
  CC [M]  drivers/gpu/drm/nouveau/nvif/object.o
  CC      drivers/rtc/interface.o
  CC [M]  drivers/gpu/drm/nouveau/nvif/client.o
  GEN     drivers/scsi/scsi_devinfo_tbl.c
  CC      drivers/scsi/scsi_devinfo.o
  CC      arch/x86/kernel/i8237.o
  CC      lib/timerqueue.o
  CC [M]  arch/x86/kvm/mmu/tdp_iter.o
  AR      drivers/input/serio/built-in.a
  CC      lib/vsprintf.o
  CC [M]  drivers/gpu/drm/vgem/vgem_fence.o
  CC      mm/page_io.o
  CC [M]  drivers/gpu/drm/nouveau/nvif/conn.o
  CC      fs/ext4/namei.o
  CC [M]  drivers/gpu/drm/xe/tests/xe_dma_buf_test.o
  AR      drivers/i2c/algos/built-in.a
  AR      drivers/usb/phy/built-in.a
  CC [M]  drivers/i2c/algos/i2c-algo-bit.o
  CC      drivers/net/phy/mdio_bus.o
  CC [M]  drivers/gpu/drm/ttm/ttm_device.o
  CC      drivers/base/driver.o
  CC      drivers/base/class.o
  CC      drivers/i2c/busses/i2c-designware-common.o
  CC      drivers/acpi/acpica/psargs.o
  CC      drivers/i2c/busses/i2c-designware-master.o
  CC      kernel/utsname_sysctl.o
  CC      arch/x86/kernel/stacktrace.o
  CC [M]  drivers/gpu/drm/i915/i915_irq.o
  CC      drivers/acpi/acpica/psloop.o
  CC      drivers/i2c/busses/i2c-designware-platdrv.o
  CC      arch/x86/kernel/reboot.o
  CC      drivers/base/platform.o
  CC [M]  drivers/dma-buf/st-dma-resv.o
  CC      kernel/delayacct.o
  CC      net/ipv6/raw.o
  CC      lib/win_minmax.o
  CC      drivers/input/input-leds.o
  CC [M]  drivers/gpu/drm/xe/tests/xe_migrate_test.o
  CC      lib/xarray.o
  CC      net/ipv6/icmp.o
  CC      net/ipv6/mcast.o
  CC      drivers/ata/libata-trace.o
  CC      drivers/input/mousedev.o
  CC [M]  net/bluetooth/hci_conn.o
  CC      drivers/scsi/scsi_sysctl.o
  CC [M]  net/bluetooth/hci_event.o
  CC      kernel/taskstats.o
  CC      drivers/acpi/reboot.o
  LD [M]  drivers/gpu/drm/vgem/vgem.o
  CC      arch/x86/kernel/msr.o
  CC      arch/x86/kernel/cpuid.o
  CC      net/bridge/br_netlink.o
  CC      net/bridge/br_netlink_tunnel.o
  CC [M]  drivers/gpu/drm/nouveau/nvif/device.o
  CC      drivers/input/evdev.o
  CC      drivers/acpi/acpica/psobject.o
  CC      drivers/base/cpu.o
  CC [M]  arch/x86/kvm/mmu/tdp_mmu.o
  CC [M]  arch/x86/kvm/smm.o
  AR      drivers/dma-buf/built-in.a
  CC [M]  drivers/gpu/drm/ttm/ttm_sys_manager.o
  LD [M]  drivers/dma-buf/dmabuf_selftests.o
  CC [M]  net/bluetooth/mgmt.o
  CC [M]  drivers/gpu/drm/ast/ast_drv.o
  CC [M]  net/bluetooth/hci_sock.o
  CC [M]  drivers/gpu/drm/xe/xe_bb.o
  AR      drivers/i3c/built-in.a
  CC [M]  drivers/gpu/drm/xe/xe_bo.o
  CC      drivers/acpi/nvs.o
  CC      drivers/net/phy/mdio_device.o
  CC      drivers/acpi/wakeup.o
  CC      kernel/tsacct.o
  CC [M]  net/netfilter/nf_conntrack_proto_icmp.o
  CC      drivers/base/firmware.o
  CC      mm/swap_state.o
  CC      drivers/scsi/scsi_debugfs.o
  CC      arch/x86/kernel/early-quirks.o
  CC [M]  drivers/gpu/drm/ast/ast_i2c.o
  CC      drivers/rtc/nvmem.o
  CC      drivers/i2c/busses/i2c-designware-baytrail.o
  CC [M]  drivers/i2c/busses/i2c-scmi.o
  CC      drivers/acpi/acpica/psopcode.o
  CC [M]  drivers/gpu/drm/ttm/ttm_agp_backend.o
  CC [M]  drivers/gpu/drm/nouveau/nvif/disp.o
  CC      drivers/ata/libata-sata.o
  CC      drivers/base/init.o
  CC      drivers/scsi/scsi_trace.o
  CC [M]  arch/x86/kvm/vmx/vmx.o
  CC      arch/x86/kernel/smp.o
  CC [M]  drivers/gpu/drm/ast/ast_main.o
  CC [M]  drivers/gpu/drm/xe/xe_bo_evict.o
  AR      drivers/i2c/muxes/built-in.a
  CC [M]  drivers/i2c/muxes/i2c-mux-gpio.o
  CC [M]  drivers/gpu/drm/nouveau/nvif/driver.o
  CC      kernel/tracepoint.o
  CC [M]  drivers/gpu/drm/xe/xe_debugfs.o
  CC [M]  drivers/gpu/drm/xe/xe_device.o
  CC      kernel/latencytop.o
  CC      drivers/acpi/sleep.o
  CC      drivers/acpi/acpica/psopinfo.o
  CC      drivers/rtc/dev.o
  CC      net/ipv4/icmp.o
  CC      drivers/net/phy/swphy.o
  CC      drivers/rtc/proc.o
  AR      drivers/media/i2c/built-in.a
  AR      drivers/input/built-in.a
  AR      drivers/media/tuners/built-in.a
  CC      drivers/gpu/drm/drm_mipi_dsi.o
  AR      drivers/media/rc/keymaps/built-in.a
  AR      drivers/media/rc/built-in.a
  AR      drivers/media/common/b2c2/built-in.a
  AR      drivers/media/common/saa7146/built-in.a
  AR      drivers/media/common/siano/built-in.a
  AR      drivers/media/common/v4l2-tpg/built-in.a
  CC      drivers/base/map.o
  AR      drivers/media/platform/allegro-dvt/built-in.a
  AR      drivers/media/common/videobuf2/built-in.a
  AR      drivers/media/common/built-in.a
  CC      net/sunrpc/sunrpc_syms.o
  LD [M]  drivers/gpu/drm/ttm/ttm.o
  AR      drivers/media/platform/amlogic/meson-ge2d/built-in.a
  AR      drivers/media/platform/amphion/built-in.a
  AR      drivers/media/platform/aspeed/built-in.a
  CC      drivers/acpi/device_sysfs.o
  AR      drivers/media/platform/amlogic/built-in.a
  CC [M]  drivers/gpu/drm/ast/ast_mm.o
  AR      drivers/media/platform/atmel/built-in.a
  CC [M]  drivers/gpu/drm/ast/ast_mode.o
  CC      kernel/irq_work.o
  AR      drivers/media/platform/cadence/built-in.a
  CC      drivers/usb/core/hcd.o
  CC      lib/lockref.o
  CC      drivers/usb/core/urb.o
  AR      drivers/media/platform/chips-media/built-in.a
  AR      drivers/media/platform/intel/built-in.a
  CC [M]  drivers/i2c/busses/i2c-ccgx-ucsi.o
  AR      drivers/media/platform/marvell/built-in.a
  CC      drivers/acpi/acpica/psparse.o
  AR      drivers/media/platform/mediatek/jpeg/built-in.a
  AR      drivers/media/platform/mediatek/mdp/built-in.a
  AR      drivers/media/platform/mediatek/vcodec/built-in.a
  CC      drivers/scsi/scsi_logging.o
  AR      drivers/media/platform/mediatek/vpu/built-in.a
  AR      drivers/media/platform/mediatek/mdp3/built-in.a
  AR      drivers/media/platform/microchip/built-in.a
  AR      drivers/media/platform/mediatek/built-in.a
  CC [M]  net/netfilter/nf_conntrack_extend.o
  CC [M]  drivers/i2c/busses/i2c-i801.o
  AR      drivers/media/platform/nvidia/tegra-vde/built-in.a
  AR      drivers/media/platform/nvidia/built-in.a
  CC      arch/x86/kernel/smpboot.o
  AR      drivers/media/platform/nxp/dw100/built-in.a
  AR      drivers/media/platform/nxp/imx-jpeg/built-in.a
  AR      drivers/media/platform/nxp/built-in.a
  AR      drivers/media/platform/qcom/camss/built-in.a
  AR      drivers/media/platform/qcom/venus/built-in.a
  AR      drivers/media/platform/qcom/built-in.a
  CC      lib/bcd.o
  AR      drivers/media/platform/renesas/rcar-vin/built-in.a
  CC      lib/sort.o
  AR      drivers/media/platform/renesas/rzg2l-cru/built-in.a
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_kms.o
  CC [M]  drivers/gpu/drm/nouveau/nvif/event.o
  AR      drivers/media/platform/renesas/vsp1/built-in.a
  CC      mm/swapfile.o
  AR      drivers/media/platform/renesas/built-in.a
  CC [M]  drivers/gpu/drm/nouveau/nvif/fifo.o
  AR      drivers/media/platform/rockchip/rga/built-in.a
  AR      drivers/media/platform/samsung/exynos-gsc/built-in.a
  AR      drivers/media/platform/rockchip/rkisp1/built-in.a
  CC      net/ipv6/reassembly.o
  CC      drivers/scsi/scsi_pm.o
  AR      drivers/media/platform/rockchip/built-in.a
  AR      drivers/media/platform/samsung/exynos4-is/built-in.a
  AR      drivers/media/platform/samsung/s3c-camif/built-in.a
  AR      drivers/net/pcs/built-in.a
  CC      arch/x86/kernel/tsc_sync.o
  AR      drivers/media/platform/samsung/s5p-g2d/built-in.a
  CC      arch/x86/kernel/setup_percpu.o
  AR      drivers/net/usb/built-in.a
  AR      drivers/net/ethernet/adi/built-in.a
  CC [M]  drivers/net/usb/pegasus.o
  CC      lib/parser.o
  CC      drivers/i2c/i2c-boardinfo.o
  AR      drivers/media/platform/samsung/s5p-jpeg/built-in.a
  AR      drivers/media/pci/ttpci/built-in.a
  AR      drivers/net/ethernet/alacritech/built-in.a
  CC      drivers/ata/libata-sff.o
  CC      drivers/rtc/sysfs.o
  AR      drivers/net/ethernet/amazon/built-in.a
  AR      drivers/media/platform/samsung/s5p-mfc/built-in.a
  AR      drivers/media/pci/b2c2/built-in.a
  AR      drivers/media/platform/samsung/built-in.a
  AR      drivers/net/ethernet/aquantia/built-in.a
  AR      drivers/media/pci/pluto2/built-in.a
  AR      drivers/net/ethernet/asix/built-in.a
  AR      drivers/media/pci/dm1105/built-in.a
  AR      drivers/media/platform/st/sti/bdisp/built-in.a
  AR      drivers/net/ethernet/cadence/built-in.a
  AR      drivers/media/pci/pt1/built-in.a
  AR      drivers/media/platform/st/sti/c8sectpfe/built-in.a
  AR      drivers/net/ethernet/broadcom/built-in.a
  CC      drivers/net/phy/fixed_phy.o
  CC [M]  drivers/net/ethernet/broadcom/b44.o
  AR      drivers/media/pci/pt3/built-in.a
  CC      drivers/base/devres.o
  AR      drivers/media/platform/st/sti/delta/built-in.a
  AR      drivers/media/pci/mantis/built-in.a
  AR      drivers/media/platform/st/sti/hva/built-in.a
  AR      drivers/media/pci/ngene/built-in.a
  AR      drivers/media/platform/st/stm32/built-in.a
  AR      drivers/media/pci/ddbridge/built-in.a
  AR      drivers/media/platform/st/built-in.a
  CC [M]  drivers/net/ethernet/broadcom/bnx2.o
  AR      drivers/media/pci/saa7146/built-in.a
  AR      drivers/media/pci/smipcie/built-in.a
  CC      arch/x86/kernel/ftrace.o
  AR      drivers/media/platform/sunxi/sun4i-csi/built-in.a
  CC      net/bridge/br_arp_nd_proxy.o
  AR      drivers/media/pci/netup_unidvb/built-in.a
  AR      drivers/media/platform/sunxi/sun6i-csi/built-in.a
  AR      drivers/media/platform/sunxi/sun6i-mipi-csi2/built-in.a
  AR      drivers/media/pci/intel/ipu3/built-in.a
  CC      drivers/rtc/rtc-mc146818-lib.o
  AR      drivers/media/pci/intel/built-in.a
  AR      drivers/media/platform/sunxi/sun8i-a83t-mipi-csi2/built-in.a
  AR      drivers/media/pci/built-in.a
  AR      drivers/media/platform/sunxi/sun8i-di/built-in.a
  AR      drivers/media/platform/sunxi/sun8i-rotate/built-in.a
  CC [M]  drivers/net/ethernet/broadcom/cnic.o
  CC      drivers/acpi/acpica/psscope.o
  AR      drivers/media/platform/sunxi/built-in.a
  CC      kernel/static_call.o
  AR      drivers/media/platform/ti/am437x/built-in.a
  AR      drivers/media/platform/ti/cal/built-in.a
  AR      drivers/media/platform/ti/vpe/built-in.a
  AR      drivers/media/platform/ti/davinci/built-in.a
  AR      drivers/media/platform/ti/omap/built-in.a
  AR      drivers/media/platform/ti/omap3isp/built-in.a
  CC      drivers/acpi/acpica/pstree.o
  AR      drivers/media/platform/ti/built-in.a
  AR      drivers/media/platform/verisilicon/built-in.a
  AR      drivers/media/platform/via/built-in.a
  AR      drivers/media/platform/xilinx/built-in.a
  CC      drivers/acpi/acpica/psutils.o
  AR      drivers/media/platform/built-in.a
  CC [M]  drivers/i2c/busses/i2c-isch.o
  AS      arch/x86/kernel/ftrace_64.o
  CC      lib/debug_locks.o
  AR      drivers/media/usb/b2c2/built-in.a
  CC      lib/random32.o
  AR      drivers/media/usb/dvb-usb/built-in.a
  CC [M]  drivers/gpu/drm/ast/ast_post.o
  AR      drivers/media/usb/dvb-usb-v2/built-in.a
  CC [M]  drivers/gpu/drm/xe/xe_dma_buf.o
  AR      drivers/media/usb/s2255/built-in.a
  AR      drivers/media/usb/siano/built-in.a
  AR      drivers/media/usb/ttusb-budget/built-in.a
  AR      drivers/media/usb/ttusb-dec/built-in.a
  AR      drivers/media/usb/built-in.a
  AR      drivers/media/mmc/siano/built-in.a
  CC      lib/bust_spinlocks.o
  AR      drivers/media/mmc/built-in.a
  CC [M]  drivers/gpu/drm/ast/ast_dp501.o
  CC      drivers/acpi/acpica/pswalk.o
  CC      net/sunrpc/cache.o
  AR      drivers/media/firewire/built-in.a
  AR      drivers/media/spi/built-in.a
  CC      net/sunrpc/rpc_pipe.o
  AR      drivers/media/test-drivers/built-in.a
  AR      drivers/media/built-in.a
  CC [M]  arch/x86/kvm/kvm-asm-offsets.s
  CC      net/sunrpc/sysfs.o
  CC      lib/kasprintf.o
  CC      kernel/static_call_inline.o
  CC      net/ipv4/devinet.o
  CC [M]  drivers/gpu/drm/ast/ast_dp.o
  CC      drivers/scsi/scsi_bsg.o
  CC      fs/ext4/page-io.o
  CC      lib/bitmap.o
  CC [M]  drivers/gpu/drm/nouveau/nvif/head.o
  CC [M]  net/netfilter/nf_conntrack_acct.o
  CC [M]  net/netfilter/nf_conntrack_seqadj.o
  CC [M]  net/netfilter/nf_conntrack_proto_icmpv6.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.o
  CC [M]  drivers/gpu/drm/i915/i915_mitigations.o
  CC      drivers/i2c/i2c-core-base.o
  CC      drivers/rtc/rtc-cmos.o
  CC      net/bridge/br_sysfs_if.o
  CC      drivers/acpi/acpica/psxface.o
  CC      drivers/base/transport_class.o
  CC      drivers/base/attribute_container.o
  CC      lib/scatterlist.o
  CC [M]  drivers/net/phy/phylink.o
  CC      lib/list_sort.o
  CC [M]  drivers/i2c/busses/i2c-ismt.o
  CC [M]  drivers/i2c/busses/i2c-piix4.o
  CC      arch/x86/kernel/trace_clock.o
  CC      fs/nfs/sysfs.o
  CC      net/ipv4/af_inet.o
  CC [M]  arch/x86/kvm/vmx/pmu_intel.o
  CC      kernel/user-return-notifier.o
  CC      arch/x86/kernel/trace.o
  CC      kernel/padata.o
  CC      drivers/usb/core/message.o
  CC [M]  drivers/net/usb/rtl8150.o
  CC      fs/exportfs/expfs.o
  CC      drivers/scsi/scsi_common.o
  CC [M]  drivers/gpu/drm/xe/xe_engine.o
  CC      net/bridge/br_sysfs_br.o
  CC      drivers/scsi/sd.o
  CC      drivers/acpi/acpica/rsaddr.o
  CC      fs/lockd/clntlock.o
  CC      fs/nls/nls_base.o
  CC      drivers/scsi/sg.o
  CC [M]  drivers/gpu/drm/nouveau/nvif/mem.o
  CC      mm/swap_slots.o
  CC      drivers/base/topology.o
  CC      net/ipv6/tcp_ipv6.o
  CC [M]  drivers/net/usb/r8152.o
  CC [M]  drivers/gpu/drm/i915/i915_module.o
  CC [M]  net/netfilter/nf_conntrack_proto_dccp.o
  CC      arch/x86/kernel/rethook.o
  CC [M]  drivers/gpu/drm/i915/i915_params.o
  CC      fs/ext4/readpage.o
  CC      arch/x86/kernel/crash_core_64.o
  CC      drivers/acpi/acpica/rscalc.o
  AR      drivers/rtc/built-in.a
  LD [M]  drivers/gpu/drm/ast/ast.o
  CC      drivers/acpi/acpica/rscreate.o
  CC      arch/x86/kernel/module.o
  CC      arch/x86/kernel/early_printk.o
  CC      lib/uuid.o
  CC      fs/nls/nls_cp437.o
  CC [M]  drivers/gpu/drm/drm_aperture.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/atombios_crtc.o
  CC      drivers/acpi/acpica/rsdumpinfo.o
  AR      fs/exportfs/built-in.a
  CC [M]  net/bluetooth/hci_sysfs.o
  CC [M]  net/bluetooth/l2cap_core.o
  CC      lib/iov_iter.o
  CC [M]  drivers/i2c/busses/i2c-designware-pcidrv.o
  CC      fs/nfs/fs_context.o
  CC      fs/nls/nls_ascii.o
  CC      kernel/jump_label.o
  CC      drivers/base/container.o
  CC      drivers/ata/libata-pmp.o
  CC      mm/dmapool.o
  CC [M]  drivers/gpu/drm/nouveau/nvif/mmu.o
  CC [M]  drivers/gpu/drm/xe/xe_exec.o
  CC      net/ipv4/igmp.o
  CC      drivers/acpi/acpica/rsinfo.o
  CC      fs/nls/nls_iso8859-1.o
  CC      drivers/ata/libata-acpi.o
  CC      drivers/ata/libata-pata-timings.o
  CC [M]  net/bluetooth/l2cap_sock.o
  CC      drivers/ata/ahci.o
  CC [M]  drivers/net/ethernet/broadcom/tg3.o
  CC      drivers/acpi/acpica/rsio.o
  CC      fs/lockd/clntproc.o
  CC [M]  drivers/gpu/drm/i915/i915_pci.o
  CC      net/ipv4/fib_frontend.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.o
  CC      arch/x86/kernel/hpet.o
  CC      fs/nls/nls_utf8.o
  CC      kernel/context_tracking.o
  CC      drivers/base/property.o
  CC [M]  net/bluetooth/smp.o
  CC      net/bridge/br_nf_core.o
  CC      fs/ext4/resize.o
  CC [M]  drivers/gpu/drm/xe/xe_execlist.o
  CC [M]  drivers/net/phy/aquantia_main.o
  CC      kernel/iomem.o
  CC [M]  net/bluetooth/lib.o
  CC [M]  net/netfilter/nf_conntrack_proto_sctp.o
  CC      drivers/acpi/acpica/rsirq.o
  CC      drivers/usb/core/driver.o
  CC      net/sunrpc/svc_xprt.o
  CC      kernel/rseq.o
  AR      fs/nls/built-in.a
  CC      net/ipv4/fib_semantics.o
  CC      net/ipv4/fib_trie.o
  AR      fs/unicode/built-in.a
  LD [M]  drivers/i2c/busses/i2c-designware-pci.o
  CC [M]  drivers/net/phy/aquantia_hwmon.o
  AR      drivers/i2c/busses/built-in.a
  CC      drivers/i2c/i2c-core-smbus.o
  CC      mm/hugetlb.o
  CC      fs/ext4/super.o
  CC      arch/x86/kernel/amd_nb.o
  CC [M]  drivers/gpu/drm/nouveau/nvif/outp.o
  CC      fs/lockd/clntxdr.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/atom.o
  CC [M]  drivers/gpu/drm/i915/i915_scatterlist.o
  CC      drivers/acpi/acpica/rslist.o
  CC [M]  net/netfilter/nf_conntrack_netlink.o
  CC      drivers/i2c/i2c-core-acpi.o
  CC [M]  drivers/net/usb/asix_devices.o
  CC      drivers/scsi/scsi_sysfs.o
  CC      mm/hugetlb_vmemmap.o
  CC [M]  drivers/gpu/drm/nouveau/nvif/timer.o
  CC      drivers/ata/libahci.o
  CC [M]  drivers/gpu/drm/i915/i915_suspend.o
  CC      drivers/acpi/acpica/rsmemory.o
  CC [M]  drivers/net/phy/ax88796b.o
  CC      net/bridge/br_multicast.o
  CC [M]  net/bluetooth/ecdh_helper.o
  CC [M]  drivers/gpu/drm/nouveau/nvif/vmm.o
  CC [M]  drivers/gpu/drm/xe/xe_force_wake.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_fence.o
  CC      drivers/base/cacheinfo.o
  CC [M]  drivers/gpu/drm/i915/i915_switcheroo.o
  CC      net/bridge/br_mdb.o
  CC      arch/x86/kernel/kvm.o
  GZIP    kernel/config_data.gz
  CC      kernel/configs.o
  CC      drivers/acpi/acpica/rsmisc.o
  CC      fs/nfs/sysctl.o
  CC      drivers/usb/core/config.o
  CC [M]  drivers/gpu/drm/nouveau/nvif/user.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.o
  CC      fs/lockd/host.o
  CC      net/ipv6/ping.o
  CC      drivers/ata/ata_piix.o
  CC [M]  drivers/net/phy/bcm7xxx.o
  CC [M]  drivers/gpu/drm/xe/xe_ggtt.o
  CC      drivers/usb/core/file.o
  CC      net/bridge/br_multicast_eht.o
  CC      arch/x86/kernel/kvmclock.o
  CC      drivers/acpi/acpica/rsserial.o
  CC [M]  net/bluetooth/hci_request.o
  CC      drivers/i2c/i2c-core-slave.o
  CC      drivers/base/swnode.o
  AR      kernel/built-in.a
  CC      arch/x86/kernel/paravirt.o
  CC [M]  net/bluetooth/mgmt_util.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_object.o
  CC      drivers/base/auxiliary.o
  CC      drivers/i2c/i2c-dev.o
  CC      arch/x86/kernel/pvclock.o
  CC [M]  drivers/gpu/drm/xe/xe_gt.o
  CC      drivers/usb/core/buffer.o
  CC      lib/clz_ctz.o
  CC      drivers/base/devtmpfs.o
  CC      drivers/usb/core/sysfs.o
  CC      arch/x86/kernel/pcspeaker.o
  CC      lib/bsearch.o
  CC [M]  drivers/gpu/drm/i915/i915_sysfs.o
  CC      drivers/acpi/acpica/rsutils.o
  AR      drivers/scsi/built-in.a
  AR      drivers/ptp/built-in.a
  CC [M]  drivers/ptp/ptp_clock.o
  CC [M]  drivers/gpu/drm/nouveau/nvif/userc361.o
  CC      fs/nfs/nfs2super.o
  CC      drivers/usb/core/endpoint.o
  CC [M]  drivers/ptp/ptp_chardev.o
  AR      drivers/power/reset/built-in.a
  CC      drivers/power/supply/power_supply_core.o
  CC      arch/x86/kernel/check.o
  CC      arch/x86/kernel/uprobes.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_gart.o
  CC      net/sunrpc/xprtmultipath.o
  CC [M]  drivers/gpu/drm/xe/xe_gt_clock.o
  CC [M]  arch/x86/kvm/vmx/vmcs12.o
  CC [M]  drivers/net/phy/bcm87xx.o
  CC      drivers/power/supply/power_supply_sysfs.o
  CC [M]  net/bluetooth/mgmt_config.o
  CC      net/sunrpc/stats.o
  CC [M]  drivers/gpu/drm/drm_atomic.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/core/client.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/core/engine.o
  CC      net/ipv6/exthdrs.o
  CC      lib/find_bit.o
  CC      drivers/base/memory.o
  CC      drivers/acpi/acpica/rsxface.o
  CC [M]  drivers/ptp/ptp_sysfs.o
  CC [M]  net/bluetooth/hci_codec.o
  CC [M]  net/bluetooth/eir.o
  CC      drivers/acpi/device_pm.o
  AR      drivers/ata/built-in.a
  CC [M]  drivers/i2c/i2c-smbus.o
  CC      fs/lockd/svc.o
  CC      drivers/base/module.o
  CC      drivers/acpi/acpica/tbdata.o
  CC      fs/nfs/proc.o
  CC      net/ipv6/datagram.o
  CC      arch/x86/kernel/perf_regs.o
  CC      lib/llist.o
  CC [M]  drivers/i2c/i2c-mux.o
  CC      net/ipv4/fib_notifier.o
  CC [M]  drivers/gpu/drm/i915/i915_utils.o
  CC      net/ipv4/inet_fragment.o
  CC      drivers/usb/core/devio.o
  CC      net/ipv4/ping.o
  CC      drivers/usb/core/notify.o
  CC      net/ipv4/ip_tunnel_core.o
  CC      lib/memweight.o
  CC      net/ipv4/gre_offload.o
  CC [M]  drivers/gpu/drm/xe/xe_gt_debugfs.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_encoders.o
  CC      lib/kfifo.o
  CC      drivers/power/supply/power_supply_leds.o
  CC [M]  net/netfilter/nf_nat_core.o
  CC [M]  drivers/net/phy/bcm-phy-lib.o
  CC [M]  drivers/gpu/drm/xe/xe_gt_mcr.o
  CC      net/sunrpc/sysctl.o
  CC [M]  arch/x86/kvm/vmx/hyperv.o
  CC [M]  drivers/gpu/drm/i915/intel_device_info.o
  CC      lib/percpu-refcount.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_display.o
  CC      fs/lockd/svclock.o
  CC [M]  drivers/net/ipvlan/ipvlan_core.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/core/enum.o
  CC [M]  drivers/net/ipvlan/ipvlan_main.o
  CC      drivers/base/pinctrl.o
  CC      drivers/acpi/acpica/tbfadt.o
  CC [M]  drivers/ptp/ptp_vclock.o
  CC      drivers/acpi/proc.o
  CC [M]  drivers/gpu/drm/i915/intel_memory_region.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/core/event.o
  CC      arch/x86/kernel/tracepoint.o
  CC      drivers/usb/host/pci-quirks.o
  CC [M]  drivers/net/phy/broadcom.o
  CC      drivers/power/supply/power_supply_hwmon.o
  CC [M]  drivers/net/phy/lxt.o
  CC [M]  net/bluetooth/hci_sync.o
  CC      drivers/base/platform-msi.o
  CC      drivers/acpi/acpica/tbfind.o
  AR      drivers/i2c/built-in.a
  CC      drivers/usb/storage/scsiglue.o
  CC [M]  drivers/net/phy/realtek.o
  CC      drivers/base/physical_location.o
  CC      drivers/acpi/bus.o
  CC [M]  net/netfilter/nf_nat_proto.o
  CC      fs/lockd/svcshare.o
  CC      net/ipv4/metrics.o
  CC      drivers/base/trace.o
  CC      arch/x86/kernel/itmt.o
  CC [M]  drivers/gpu/drm/i915/intel_pcode.o
  CC      lib/rhashtable.o
  CC [M]  drivers/gpu/drm/xe/xe_gt_pagefault.o
  AR      net/sunrpc/built-in.a
  CC      mm/sparse.o
  CC [M]  drivers/net/usb/asix_common.o
  CC      fs/ntfs/aops.o
  CC      fs/nfs/nfs2xdr.o
  AR      drivers/power/supply/built-in.a
  AR      drivers/power/built-in.a
  CC      fs/ntfs/attrib.o
  CC      fs/lockd/svcproc.o
  CC [M]  drivers/ptp/ptp_kvm_x86.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/core/firmware.o
  CC      net/ipv4/netlink.o
  CC      drivers/acpi/acpica/tbinstal.o
  CC [M]  net/netfilter/nf_nat_helper.o
  CC [M]  arch/x86/kvm/vmx/nested.o
  CC [M]  arch/x86/kvm/vmx/posted_intr.o
  CC      fs/lockd/svcsubs.o
  CC      net/ipv6/ip6_flowlabel.o
  CC      fs/nfs/nfs3super.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_i2c.o
  CC      net/bridge/br_vlan.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.o
  CC      drivers/usb/host/ehci-hcd.o
  CC      drivers/usb/host/ehci-pci.o
  CC      arch/x86/kernel/umip.o
  CC      fs/lockd/mon.o
  CC [M]  drivers/net/usb/ax88172a.o
  CC      net/ipv4/nexthop.o
  AR      drivers/base/built-in.a
  CC      net/ipv4/udp_tunnel_stub.o
  CC      drivers/usb/storage/protocol.o
  CC      net/ipv4/sysctl_net_ipv4.o
  CC      drivers/usb/host/ohci-hcd.o
  CC      drivers/acpi/acpica/tbprint.o
  CC      drivers/usb/serial/usb-serial.o
  AR      drivers/usb/misc/built-in.a
  CC [M]  drivers/usb/misc/ftdi-elan.o
  CC [M]  drivers/net/phy/smsc.o
  CC      fs/lockd/xdr.o
  CC [M]  drivers/ptp/ptp_kvm_common.o
  CC [M]  drivers/net/ipvlan/ipvlan_l3s.o
  LD [M]  drivers/ptp/ptp.o
  CC      drivers/acpi/acpica/tbutils.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.o
  CC      fs/lockd/clnt4xdr.o
  CC [M]  drivers/gpu/drm/xe/xe_gt_sysfs.o
  CC      drivers/hwmon/hwmon.o
  CC      mm/sparse-vmemmap.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/core/gpuobj.o
  CC      drivers/usb/core/generic.o
  CC [M]  drivers/hwmon/acpi_power_meter.o
  CC [M]  drivers/gpu/drm/i915/intel_pm.o
  CC      fs/ext4/symlink.o
  CC      fs/ntfs/collate.o
  CC      drivers/usb/host/ohci-pci.o
  CC      arch/x86/kernel/unwind_orc.o
  CC      net/bridge/br_vlan_tunnel.o
  CC      net/bridge/br_vlan_options.o
  CC      fs/nfs/nfs3client.o
  CC      drivers/usb/storage/transport.o
  CC [M]  drivers/hwmon/coretemp.o
  CC      fs/ntfs/compress.o
  CC      lib/base64.o
  CC      drivers/acpi/acpica/tbxface.o
  CC      drivers/usb/core/quirks.o
  CC [M]  net/netfilter/nf_nat_redirect.o
  CC [M]  net/netfilter/nf_nat_masquerade.o
  CC [M]  drivers/gpu/drm/i915/intel_region_ttm.o
  CC      lib/once.o
  CC      drivers/acpi/acpica/tbxfload.o
  CC      drivers/acpi/acpica/tbxfroot.o
  CC [M]  drivers/net/usb/ax88179_178a.o
  CC      fs/ext4/sysfs.o
  LD [M]  drivers/ptp/ptp_kvm.o
  CC [M]  drivers/net/usb/cdc_ether.o
  CC [M]  drivers/net/usb/cdc_eem.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.o
  LD [M]  drivers/net/phy/aquantia.o
  AR      drivers/net/phy/built-in.a
  CC [M]  drivers/net/usb/smsc75xx.o
  CC [M]  net/bluetooth/sco.o
  CC      lib/refcount.o
  CC [M]  drivers/gpu/drm/xe/xe_gt_tlb_invalidation.o
  CC      mm/mmu_notifier.o
  CC      lib/usercopy.o
  CC      fs/nfs/nfs3proc.o
  CC      net/ipv6/inet6_connection_sock.o
  CC [M]  net/netfilter/x_tables.o
  CC [M]  net/bluetooth/iso.o
  CC [M]  drivers/gpu/drm/xe/xe_gt_topology.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/core/intr.o
  CC      net/ipv6/udp_offload.o
  CC [M]  net/netfilter/xt_tcpudp.o
  LD [M]  drivers/net/ipvlan/ipvlan.o
  CC      fs/lockd/xdr4.o
  CC      fs/lockd/svc4proc.o
  CC      drivers/acpi/acpica/utaddress.o
  CC      net/ipv6/seg6.o
  CC [M]  net/netfilter/xt_mark.o
  CC      fs/lockd/procfs.o
  CC      arch/x86/kernel/callthunks.o
  CC      drivers/usb/serial/generic.o
  CC      drivers/usb/core/devices.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_bios.o
  CC      lib/errseq.o
  CC      net/ipv6/fib6_notifier.o
  AR      drivers/hwmon/built-in.a
  CC [M]  drivers/gpu/drm/drm_atomic_uapi.o
  CC      drivers/usb/serial/bus.o
  CC      lib/bucket_locks.o
  CC      drivers/usb/storage/usb.o
  CC [M]  drivers/gpu/drm/i915/intel_runtime_pm.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.o
  CC      drivers/acpi/acpica/utalloc.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/atombios_dp.o
  CC      fs/ntfs/debug.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_afmt.o
  CC      net/ipv6/rpl.o
  CC [M]  drivers/gpu/drm/drm_auth.o
  AR      drivers/thermal/broadcom/built-in.a
  AR      drivers/thermal/samsung/built-in.a
  CC      net/ipv6/ioam6.o
  CC      drivers/thermal/intel/intel_tcc.o
  AR      drivers/thermal/st/built-in.a
  CC      net/bridge/br_mst.o
  AR      drivers/thermal/qcom/built-in.a
  CC      mm/ksm.o
  CC [M]  net/netfilter/xt_nat.o
  CC [M]  net/bridge/br_netfilter_hooks.o
  CC      fs/ntfs/dir.o
  CC      net/ipv4/proc.o
  CC [M]  drivers/gpu/drm/xe/xe_guc.o
  CC      lib/generic-radix-tree.o
  CC      arch/x86/kernel/mmconf-fam10h_64.o
  CC      drivers/thermal/intel/therm_throt.o
  CC      drivers/acpi/acpica/utascii.o
  LD [M]  arch/x86/kvm/kvm.o
  CC      drivers/usb/host/uhci-hcd.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/core/ioctl.o
  CC      drivers/usb/serial/console.o
  CC      drivers/usb/core/phy.o
  CC [M]  drivers/net/vxlan/vxlan_core.o
  CC      drivers/net/loopback.o
  CC      fs/ntfs/file.o
  AR      drivers/net/ethernet/cortina/built-in.a
  AR      drivers/net/ethernet/cavium/common/built-in.a
  AR      drivers/net/ethernet/cavium/thunder/built-in.a
  CC      fs/autofs/init.o
  AR      drivers/net/ethernet/cavium/liquidio/built-in.a
  AR      drivers/net/ethernet/cavium/octeon/built-in.a
  AR      fs/lockd/built-in.a
  AR      drivers/net/ethernet/cavium/built-in.a
  CC [M]  drivers/net/usb/smsc95xx.o
  AR      drivers/net/ethernet/engleder/built-in.a
  CC      fs/autofs/inode.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/core/memory.o
  CC      drivers/usb/serial/ftdi_sio.o
  CC      net/ipv6/sysctl_net_ipv6.o
  CC      fs/ext4/xattr.o
  CC      fs/nfs/nfs3xdr.o
  CC      lib/string_helpers.o
  UPD     arch/x86/kvm/kvm-asm-offsets.h
  CC      drivers/acpi/acpica/utbuffer.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_trace_points.o
  AS [M]  arch/x86/kvm/vmx/vmenter.o
  CC      drivers/usb/storage/initializers.o
  CC      drivers/acpi/acpica/utcksum.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/atombios_encoders.o
  CC      drivers/acpi/acpica/utcopy.o
  CC      net/ipv6/xfrm6_policy.o
  CC      fs/ntfs/index.o
  CC      arch/x86/kernel/vsmp_64.o
  CC [M]  drivers/gpu/drm/i915/intel_sbi.o
  CC [M]  net/bridge/br_netfilter_ipv6.o
  CC      drivers/usb/gadget/udc/core.o
  CC [M]  drivers/usb/class/usbtmc.o
  CC      fs/autofs/root.o
  CC [M]  drivers/net/usb/mcs7830.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_sa.o
  CC [M]  drivers/net/vxlan/vxlan_multicast.o
  CC [M]  net/bluetooth/a2mp.o
  CC      fs/ntfs/inode.o
  CC      drivers/usb/core/port.o
  CC      drivers/usb/core/hcd-pci.o
  AR      drivers/usb/gadget/function/built-in.a
  CC [M]  drivers/gpu/drm/amd/amdgpu/atombios_i2c.o
  AR      drivers/usb/gadget/legacy/built-in.a
  CC      drivers/acpi/acpica/utexcep.o
  CC      fs/ntfs/mft.o
  CC [M]  drivers/gpu/drm/xe/xe_guc_ads.o
  CC [M]  drivers/thermal/intel/x86_pkg_temp_thermal.o
  CC      drivers/usb/host/xhci.o
  CC      drivers/usb/gadget/usbstring.o
  AR      drivers/net/ethernet/ezchip/built-in.a
  CC      fs/ntfs/mst.o
  CC      drivers/usb/gadget/config.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/core/mm.o
  CC      fs/ntfs/namei.o
  CC [M]  net/netfilter/xt_REDIRECT.o
  CC [M]  drivers/thermal/intel/intel_menlow.o
  AR      arch/x86/kernel/built-in.a
  CC      drivers/usb/storage/sierra_ms.o
  CC      fs/autofs/symlink.o
  CC      lib/hexdump.o
  AR      net/bridge/built-in.a
  CC      fs/debugfs/inode.o
  AR      drivers/thermal/tegra/built-in.a
  CC      net/ipv4/syncookies.o
  CC [M]  drivers/gpu/drm/drm_blend.o
  CC      drivers/acpi/acpica/utdebug.o
  CC      fs/ntfs/runlist.o
  CC      lib/kstrtox.o
  CC [M]  drivers/gpu/drm/i915/intel_step.o
  CC      drivers/usb/serial/pl2303.o
  CC      net/ipv6/xfrm6_state.o
  CC      drivers/acpi/acpica/utdecode.o
  CC      drivers/acpi/acpica/utdelete.o
  CC      fs/autofs/waitq.o
  CC      net/ipv4/esp4.o
  CC      fs/debugfs/file.o
  AR      arch/x86/built-in.a
  CC      drivers/acpi/acpica/uterror.o
  CC      net/ipv6/xfrm6_input.o
  CC      drivers/usb/core/usb-acpi.o
  CC      drivers/net/netconsole.o
  CC      fs/tracefs/inode.o
  CC      drivers/acpi/acpica/uteval.o
  CC      fs/ntfs/super.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.o
  CC      drivers/usb/storage/option_ms.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.o
  CC      lib/debug_info.o
  AR      drivers/thermal/intel/built-in.a
  CC [M]  drivers/net/usb/usbnet.o
  AR      drivers/thermal/mediatek/built-in.a
  CC      lib/iomap.o
  CC      drivers/thermal/thermal_core.o
  CC      lib/pci_iomap.o
  CC      fs/btrfs/super.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/core/object.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/core/oproxy.o
  CC      lib/iomap_copy.o
  LD [M]  net/bridge/br_netfilter.o
  CC      drivers/usb/gadget/udc/trace.o
  CC      fs/btrfs/ctree.o
  CC      drivers/acpi/acpica/utglobal.o
  CC [M]  drivers/net/vxlan/vxlan_vnifilter.o
  CC      drivers/acpi/acpica/uthex.o
  CC      drivers/usb/gadget/epautoconf.o
  CC [M]  drivers/net/usb/cdc_ncm.o
  CC      mm/slub.o
  CC [M]  drivers/gpu/drm/xe/xe_guc_ct.o
  CC      fs/pstore/inode.o
  CC [M]  net/netfilter/xt_MASQUERADE.o
  CC      fs/efivarfs/inode.o
  CC [M]  net/bluetooth/amp.o
  AR      fs/nfs/built-in.a
  CC      fs/pstore/platform.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/core/option.o
  CC      fs/autofs/expire.o
  AR      drivers/net/ethernet/fungible/built-in.a
  CC      fs/pstore/pmsg.o
  CC [M]  fs/netfs/buffered_read.o
  CC [M]  fs/fscache/cache.o
  AR      drivers/usb/core/built-in.a
  CC [M]  fs/netfs/io.o
  CC      drivers/usb/storage/usual-tables.o
  CC [M]  fs/netfs/iterator.o
  CC [M]  fs/smbfs_common/cifs_arc4.o
  CC [M]  drivers/gpu/drm/i915/intel_uncore.o
  AR      fs/tracefs/built-in.a
  CC      fs/ntfs/sysctl.o
  CC      lib/devres.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.o
  CC      mm/migrate.o
  CC      fs/autofs/dev-ioctl.o
  CC      drivers/acpi/acpica/utids.o
  CC [M]  drivers/net/usb/r8153_ecm.o
  AR      drivers/usb/serial/built-in.a
  CC      fs/ext4/xattr_hurd.o
  CC      net/ipv6/xfrm6_output.o
  AR      fs/debugfs/built-in.a
  AR      drivers/net/ethernet/huawei/built-in.a
  CC [M]  fs/cifs/trace.o
  CC [M]  fs/fuse/dev.o
  CC [M]  drivers/gpu/drm/drm_bridge.o
  CC      drivers/usb/gadget/composite.o
  CC [M]  fs/overlayfs/super.o
  CC      fs/efivarfs/file.o
  LD [M]  arch/x86/kvm/kvm-intel.o
  CC [M]  fs/overlayfs/namei.o
  CC [M]  fs/fuse/dir.o
  CC [M]  fs/smbfs_common/cifs_md4.o
  CC      fs/efivarfs/super.o
  CC      mm/migrate_device.o
  CC      fs/ext4/xattr_trusted.o
  AR      fs/pstore/built-in.a
  AR      drivers/usb/storage/built-in.a
  CC      drivers/usb/gadget/functions.o
  CC [M]  fs/fuse/file.o
  CC      drivers/usb/gadget/configfs.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_ib.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/core/ramht.o
  CC      lib/check_signature.o
  CC      fs/ntfs/unistr.o
  CC      drivers/acpi/acpica/utinit.o
  AR      drivers/usb/gadget/udc/built-in.a
  CC      drivers/usb/gadget/u_f.o
  CC [M]  net/netfilter/xt_addrtype.o
  CC      fs/ext4/xattr_user.o
  CC      net/ipv4/esp4_offload.o
  CC      lib/interval_tree.o
  AR      fs/autofs/built-in.a
  CC      net/ipv4/netfilter.o
  CC [M]  drivers/net/ethernet/intel/e1000/e1000_main.o
  CC [M]  fs/fscache/cookie.o
  CC [M]  drivers/net/ethernet/intel/e1000e/82571.o
  CC [M]  drivers/net/ethernet/intel/e1000e/ich8lan.o
  CC      net/ipv4/inet_diag.o
  CC      lib/assoc_array.o
  CC      lib/list_debug.o
  CC [M]  drivers/net/ethernet/intel/igb/igb_main.o
  CC      drivers/thermal/thermal_sysfs.o
  CC      drivers/acpi/acpica/utlock.o
  CC [M]  fs/netfs/main.o
  CC [M]  drivers/net/ethernet/intel/igb/igb_ethtool.o
  CC [M]  net/bluetooth/hci_debugfs.o
  CC      fs/efivarfs/vars.o
  CC      lib/debugobjects.o
  CC      fs/btrfs/extent-tree.o
  CC [M]  drivers/net/ethernet/intel/igc/igc_main.o
  CC [M]  drivers/net/ethernet/intel/igbvf/vf.o
  CC [M]  drivers/gpu/drm/xe/xe_guc_debugfs.o
  CC [M]  drivers/net/ethernet/intel/igc/igc_mac.o
  CC      fs/ntfs/upcase.o
  CC      net/ipv6/xfrm6_protocol.o
  CC      lib/bitrev.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/core/subdev.o
  LD [M]  drivers/net/usb/asix.o
  CC      net/ipv6/netfilter.o
  CC [M]  fs/overlayfs/util.o
  CC [M]  drivers/net/ethernet/intel/igbvf/mbx.o
  CC      fs/ext4/fast_commit.o
  CC      drivers/acpi/acpica/utmath.o
  CC      drivers/acpi/acpica/utmisc.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_pll.o
  CC      lib/crc16.o
  CC      drivers/thermal/thermal_trip.o
  CC      drivers/acpi/glue.o
  CC      drivers/usb/host/xhci-mem.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.o
  CC [M]  drivers/net/ethernet/intel/igbvf/ethtool.o
  CC [M]  fs/overlayfs/inode.o
  AR      fs/ntfs/built-in.a
  CC [M]  drivers/gpu/drm/xe/xe_guc_hwconfig.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.o
  CC [M]  fs/fuse/inode.o
  LD [M]  drivers/net/vxlan/vxlan.o
  CC [M]  net/netfilter/xt_conntrack.o
  CC [M]  fs/overlayfs/file.o
  CC      net/ipv6/fib6_rules.o
  AR      fs/efivarfs/built-in.a
  CC [M]  drivers/net/ethernet/intel/igb/e1000_82575.o
  CC      drivers/usb/host/xhci-ext-caps.o
  CC      drivers/acpi/acpica/utmutex.o
  CC      drivers/acpi/scan.o
  CC      drivers/thermal/thermal_helpers.o
  CC [M]  fs/netfs/objects.o
  AR      drivers/net/ethernet/i825xx/built-in.a
  CC      lib/crc-t10dif.o
  AR      drivers/net/ethernet/microsoft/built-in.a
  HOSTCC  lib/gen_crc32table
  CC [M]  fs/fuse/control.o
  CC [M]  drivers/net/dummy.o
  AR      drivers/net/ethernet/litex/built-in.a
  CC [M]  fs/overlayfs/dir.o
  AR      drivers/net/ethernet/microchip/built-in.a
  CC [M]  drivers/net/ethernet/intel/igc/igc_i225.o
  CC [M]  fs/overlayfs/readdir.o
  CC      fs/btrfs/print-tree.o
  CC [M]  drivers/net/ethernet/intel/igbvf/netdev.o
  CC [M]  fs/fscache/io.o
  AR      drivers/usb/gadget/built-in.a
  CC [M]  drivers/gpu/drm/nouveau/nvkm/core/uevent.o
  AR      drivers/net/ethernet/mscc/built-in.a
  CC [M]  drivers/gpu/drm/nouveau/nvkm/nvfw/fw.o
  CC      drivers/acpi/acpica/utnonansi.o
  CC      net/ipv6/proc.o
  CC [M]  fs/overlayfs/copy_up.o
  CC      drivers/thermal/thermal_hwmon.o
  CC [M]  drivers/net/ethernet/intel/igb/e1000_mac.o
  CC      lib/libcrc32c.o
  CC [M]  drivers/gpu/drm/xe/xe_guc_log.o
  CC      net/ipv4/tcp_diag.o
  CC      lib/xxhash.o
  LD [M]  net/bluetooth/bluetooth.o
  CC      mm/huge_memory.o
  CC      net/ipv6/syncookies.o
  CC [M]  drivers/net/ethernet/intel/e1000/e1000_hw.o
  LD [M]  fs/netfs/netfs.o
  CC      fs/open.o
  CC      drivers/acpi/acpica/utobject.o
  CC      drivers/acpi/resource.o
  CC [M]  fs/fuse/xattr.o
  CC [M]  drivers/gpu/drm/i915/intel_wakeref.o
  CC      mm/khugepaged.o
  CC [M]  fs/cifs/cifsfs.o
  CC [M]  drivers/gpu/drm/i915/vlv_sideband.o
  CC [M]  drivers/net/ethernet/intel/e1000/e1000_ethtool.o
  CC [M]  net/netfilter/xt_ipvs.o
  CC      fs/read_write.o
  CC [M]  drivers/gpu/drm/drm_cache.o
  CC      fs/file_table.o
  CC [M]  fs/fuse/acl.o
  CC [M]  fs/overlayfs/export.o
  CC [M]  fs/fscache/main.o
  CC      drivers/acpi/acpica/utosi.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/nvfw/hs.o
  CC      drivers/thermal/gov_fair_share.o
  CC      drivers/thermal/gov_step_wise.o
  CC      net/ipv6/mip6.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_sync.o
  CC      lib/genalloc.o
  CC      lib/percpu_counter.o
  CC [M]  fs/fscache/volume.o
  CC [M]  drivers/gpu/drm/xe/xe_guc_pc.o
  CC [M]  drivers/net/macvlan.o
  CC      drivers/thermal/gov_user_space.o
  CC [M]  drivers/net/ethernet/intel/igb/e1000_nvm.o
  CC      net/ipv6/addrconf_core.o
  CC [M]  fs/fuse/readdir.o
  CC      fs/btrfs/root-tree.o
  CC [M]  drivers/net/ethernet/intel/e1000e/80003es2lan.o
  CC [M]  fs/fscache/proc.o
  CC      drivers/acpi/acpica/utownerid.o
  CC      lib/fault-inject.o
  CC      lib/syscall.o
  CC      fs/btrfs/dir-item.o
  CC [M]  fs/cifs/cifs_debug.o
  CC [M]  fs/cifs/connect.o
  CC      drivers/acpi/acpi_processor.o
  CC      fs/ext4/orphan.o
  CC      drivers/acpi/acpica/utpredef.o
  CC [M]  fs/fuse/ioctl.o
  CC [M]  drivers/net/ethernet/intel/e1000e/mac.o
  CC      net/ipv4/udp_diag.o
  CC [M]  drivers/net/ethernet/intel/e1000/e1000_param.o
  AR      drivers/thermal/built-in.a
  CC [M]  drivers/gpu/drm/nouveau/nvkm/nvfw/ls.o
  CC      drivers/usb/host/xhci-ring.o
  CC      fs/super.o
  LD [M]  fs/overlayfs/overlay.o
  CC      fs/char_dev.o
  CC      drivers/usb/host/xhci-hub.o
  CC      drivers/usb/host/xhci-dbg.o
  CC      net/ipv6/exthdrs_core.o
  CC [M]  drivers/net/ethernet/intel/igb/e1000_phy.o
  CC      drivers/watchdog/watchdog_core.o
  LD [M]  net/netfilter/nf_conntrack.o
  CC [M]  drivers/gpu/drm/i915/vlv_suspend.o
  CC      drivers/watchdog/watchdog_dev.o
  CC      drivers/acpi/acpica/utresdecode.o
  LD [M]  net/netfilter/nf_nat.o
  AR      net/netfilter/built-in.a
  CC      lib/dynamic_debug.o
  CC [M]  drivers/net/ethernet/intel/ixgbevf/vf.o
  CC [M]  drivers/net/ethernet/intel/ixgbe/ixgbe_main.o
  CC [M]  drivers/net/ethernet/intel/ixgbevf/mbx.o
  CC      lib/errname.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.o
  CC [M]  drivers/net/ethernet/intel/ixgbe/ixgbe_common.o
  CC      net/ipv6/ip6_checksum.o
  LD [M]  fs/fscache/fscache.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/nvfw/acr.o
  CC      drivers/watchdog/softdog.o
  CC [M]  drivers/net/ethernet/intel/ixgb/ixgb_main.o
  LD [M]  drivers/net/ethernet/intel/igbvf/igbvf.o
  AR      drivers/net/ethernet/intel/built-in.a
  CC      fs/stat.o
  CC [M]  drivers/net/ethernet/intel/ixgb/ixgb_hw.o
  CC [M]  drivers/gpu/drm/xe/xe_guc_submit.o
  CC [M]  drivers/net/ethernet/intel/igb/e1000_mbx.o
  CC      drivers/acpi/acpica/utresrc.o
  CC      drivers/usb/host/xhci-trace.o
  LD [M]  fs/fuse/fuse.o
  CC      net/ipv4/tcp_cubic.o
  CC      fs/exec.o
  CC      fs/pipe.o
  AR      fs/ext4/built-in.a
  CC      fs/namei.o
  CC [M]  drivers/gpu/drm/xe/xe_hw_engine.o
  CC      fs/fcntl.o
  CC [M]  drivers/net/ethernet/intel/e1000e/manage.o
  CC [M]  drivers/net/ethernet/intel/e1000e/nvm.o
  CC [M]  drivers/net/ethernet/intel/igb/e1000_i210.o
  CC      net/ipv4/xfrm4_policy.o
  CC      drivers/acpi/processor_core.o
  CC      net/ipv4/xfrm4_state.o
  CC      drivers/usb/host/xhci-debugfs.o
  CC      drivers/usb/host/xhci-pci.o
  CC [M]  fs/cifs/dir.o
  CC      drivers/acpi/acpica/utstate.o
  CC [M]  fs/cifs/file.o
  AR      drivers/watchdog/built-in.a
  CC [M]  drivers/md/persistent-data/dm-array.o
  CC      drivers/md/md.o
  CC      drivers/md/md-bitmap.o
  CC      drivers/opp/core.o
  CC      fs/btrfs/file-item.o
  CC [M]  drivers/net/ethernet/intel/ixgb/ixgb_ee.o
  CC [M]  drivers/gpu/drm/i915/soc/intel_dram.o
  CC [M]  drivers/net/ethernet/intel/ixgbevf/ethtool.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_preempt_mgr.o
  CC [M]  drivers/net/ethernet/intel/ixgb/ixgb_ethtool.o
  LD [M]  drivers/net/ethernet/intel/e1000/e1000.o
  CC [M]  drivers/gpu/drm/i915/soc/intel_gmch.o
  CC [M]  drivers/net/ethernet/intel/igc/igc_base.o
  CC [M]  drivers/net/ethernet/intel/igb/igb_ptp.o
  CC [M]  fs/cifs/inode.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/nvfw/flcn.o
  CC      net/ipv6/ip6_icmp.o
  CC      drivers/acpi/acpica/utstring.o
  CC      drivers/acpi/processor_pdc.o
  CC      drivers/opp/cpu.o
  CC [M]  drivers/gpu/drm/i915/soc/intel_pch.o
  CC [M]  drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.o
  CC [M]  drivers/gpu/drm/xe/xe_hw_fence.o
  CC [M]  drivers/net/ethernet/intel/e1000e/phy.o
  CC      lib/nlattr.o
  CC      net/ipv6/output_core.o
  CC      lib/checksum.o
  CC [M]  drivers/net/ethernet/intel/igb/igb_hwmon.o
  CC      drivers/acpi/acpica/utstrsuppt.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/falcon/base.o
  CC      fs/ioctl.o
  CC      net/ipv4/xfrm4_input.o
  CC      drivers/acpi/ec.o
  CC      drivers/acpi/dock.o
  CC [M]  drivers/md/persistent-data/dm-bitset.o
  CC      drivers/acpi/acpica/utstrtoul64.o
  CC      net/ipv4/xfrm4_output.o
  CC      net/ipv4/xfrm4_protocol.o
  CC      mm/page_counter.o
  CC [M]  net/ipv4/ip_tunnel.o
  CC      lib/cpu_rmap.o
  CC [M]  drivers/net/ethernet/intel/e1000e/param.o
  CC [M]  drivers/net/ethernet/intel/igc/igc_nvm.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.o
  CC [M]  drivers/net/ethernet/intel/igc/igc_phy.o
  CC      drivers/acpi/pci_root.o
  CC [M]  drivers/net/ethernet/intel/ixgb/ixgb_param.o
  CC [M]  drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.o
  CC [M]  drivers/gpu/drm/i915/i915_memcpy.o
  CC [M]  drivers/net/ethernet/intel/e1000e/ethtool.o
  CC [M]  drivers/gpu/drm/xe/xe_huc.o
  CC      mm/memcontrol.o
  CC      drivers/acpi/acpica/utxface.o
  CC [M]  drivers/net/ethernet/intel/ixgbevf/ipsec.o
  CC      drivers/md/md-autodetect.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_virt.o
  CC      drivers/opp/debugfs.o
  CC [M]  drivers/gpu/drm/drm_client.o
  CC      mm/vmpressure.o
  CC [M]  drivers/gpu/drm/i915/i915_mm.o
  CC      lib/dynamic_queue_limits.o
  CC [M]  drivers/net/ethernet/intel/ixgbe/ixgbe_82599.o
  CC [M]  drivers/net/ethernet/intel/ixgbe/ixgbe_82598.o
  CC [M]  drivers/net/ethernet/intel/igc/igc_diag.o
  CC [M]  drivers/gpu/drm/i915/i915_sw_fence.o
  CC [M]  drivers/md/persistent-data/dm-block-manager.o
  CC [M]  drivers/gpu/drm/drm_client_modeset.o
  CC [M]  drivers/gpu/drm/drm_color_mgmt.o
  CC [M]  net/ipv4/udp_tunnel_core.o
  CC [M]  net/ipv4/udp_tunnel_nic.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/falcon/cmdq.o
  CC      lib/glob.o
  LD [M]  drivers/net/ethernet/intel/igb/igb.o
  CC      net/ipv6/protocol.o
  CC [M]  drivers/net/ethernet/intel/ixgbe/ixgbe_phy.o
  CC      drivers/acpi/acpica/utxfinit.o
  CC [M]  drivers/net/mii.o
  CC      fs/btrfs/inode-item.o
  CC      fs/btrfs/disk-io.o
  CC      lib/strncpy_from_user.o
  CC      mm/swap_cgroup.o
  CC [M]  drivers/gpu/drm/xe/xe_huc_debugfs.o
  AR      drivers/usb/host/built-in.a
  AR      drivers/usb/built-in.a
  CC      drivers/cpufreq/cpufreq.o
  CC [M]  drivers/gpu/drm/xe/xe_irq.o
  CC      drivers/cpuidle/governors/menu.o
  AR      drivers/opp/built-in.a
  CC      drivers/cpuidle/cpuidle.o
  CC [M]  drivers/gpu/drm/xe/xe_lrc.o
  LD [M]  drivers/net/ethernet/intel/ixgb/ixgb.o
  CC      drivers/cpuidle/driver.o
  CC [M]  drivers/net/ethernet/intel/igc/igc_ethtool.o
  CC      drivers/md/dm-uevent.o
  CC      drivers/acpi/pci_link.o
  CC [M]  drivers/gpu/drm/xe/xe_migrate.o
  CC      drivers/cpufreq/freq_table.o
  CC      drivers/md/dm.o
  CC      drivers/acpi/acpica/utxferror.o
  CC [M]  drivers/net/ethernet/intel/e100.o
  CC [M]  drivers/md/persistent-data/dm-space-map-common.o
  CC      drivers/acpi/pci_irq.o
  CC      fs/readdir.o
  CC [M]  drivers/gpu/drm/i915/i915_sw_fence_work.o
  CC      lib/strnlen_user.o
  CC [M]  drivers/gpu/drm/xe/xe_mmio.o
  CC      drivers/cpufreq/cpufreq_performance.o
  CC      drivers/cpuidle/governors/haltpoll.o
  CC [M]  drivers/net/ethernet/intel/e1000e/netdev.o
  CC      mm/hugetlb_cgroup.o
  CC [M]  drivers/md/persistent-data/dm-space-map-disk.o
  CC [M]  drivers/md/persistent-data/dm-space-map-metadata.o
  CC      mm/kmemleak.o
  CC [M]  drivers/gpu/drm/xe/xe_mocs.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/falcon/fw.o
  CC      net/ipv6/ip6_offload.o
  CC      drivers/acpi/acpica/utxfmutex.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/falcon/msgq.o
  CC [M]  drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.o
  CC [M]  drivers/gpu/drm/i915/i915_syncmap.o
  CC [M]  drivers/net/ethernet/intel/e1000e/ptp.o
  CC [M]  drivers/gpu/drm/i915/i915_user_extensions.o
  CC [M]  drivers/gpu/drm/drm_connector.o
  CC [M]  drivers/gpu/drm/xe/xe_module.o
  CC [M]  drivers/gpu/drm/i915/i915_ioc32.o
  CC [M]  drivers/gpu/drm/xe/xe_pat.o
  CC      lib/net_utils.o
  CC      fs/select.o
  CC      drivers/acpi/acpi_lpss.o
  CC      net/ipv6/tcpv6_offload.o
  CC [M]  drivers/gpu/drm/xe/xe_pci.o
  CC [M]  drivers/gpu/drm/drm_crtc.o
  AR      drivers/acpi/acpica/built-in.a
  CC      drivers/cpuidle/governor.o
  CC [M]  drivers/gpu/drm/i915/i915_debugfs.o
  CC      lib/sg_pool.o
  CC      lib/stackdepot.o
  CC      drivers/acpi/acpi_apd.o
  LD [M]  net/ipv4/udp_tunnel.o
  AR      net/ipv4/built-in.a
  CC      drivers/acpi/acpi_platform.o
  CC      drivers/cpufreq/cpufreq_ondemand.o
  CC      fs/btrfs/transaction.o
  CC [M]  drivers/md/persistent-data/dm-transaction-manager.o
  CC      fs/dcache.o
  AR      drivers/cpuidle/governors/built-in.a
  CC      fs/inode.o
  CC [M]  drivers/gpu/drm/drm_displayid.o
  CC      fs/attr.o
  CC      drivers/acpi/acpi_pnp.o
  CC [M]  drivers/gpu/drm/xe/xe_pcode.o
  CC      fs/btrfs/inode.o
  CC [M]  drivers/gpu/drm/xe/xe_pm.o
  CC      drivers/cpuidle/sysfs.o
  CC [M]  drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.o
  CC      drivers/cpufreq/cpufreq_governor.o
  CC [M]  drivers/gpu/drm/xe/xe_preempt_fence.o
  CC      net/ipv6/exthdrs_offload.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/falcon/qmgr.o
  LD [M]  drivers/net/ethernet/intel/ixgbevf/ixgbevf.o
  CC      drivers/acpi/power.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/falcon/v1.o
  CC      fs/bad_inode.o
  CC [M]  drivers/net/ethernet/intel/igc/igc_ptp.o
  CC      drivers/cpuidle/poll_state.o
  CC      drivers/cpufreq/cpufreq_governor_attr_set.o
  CC      lib/ucs2_string.o
  CC [M]  drivers/gpu/drm/drm_drv.o
  CC      drivers/md/dm-table.o
  CC [M]  drivers/net/ethernet/intel/ixgbe/ixgbe_x540.o
  CC [M]  drivers/md/persistent-data/dm-btree.o
  CC      fs/file.o
  CC      drivers/acpi/event.o
  CC      drivers/acpi/evged.o
  CC [M]  drivers/gpu/drm/xe/xe_pt.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_vf_error.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_sched.o
  CC [M]  drivers/net/ethernet/intel/ixgbe/ixgbe_x550.o
  CC [M]  drivers/md/persistent-data/dm-btree-remove.o
  CC      drivers/mmc/core/core.o
  CC      drivers/cpufreq/acpi-cpufreq.o
  CC      drivers/mmc/core/bus.o
  CC      mm/page_isolation.o
  CC      mm/early_ioremap.o
  CC      drivers/cpuidle/cpuidle-haltpoll.o
  CC      drivers/cpufreq/intel_pstate.o
  CC      lib/sbitmap.o
  CC [M]  drivers/gpu/drm/i915/i915_debugfs_params.o
  CC [M]  drivers/net/ethernet/intel/ixgbe/ixgbe_lib.o
  CC [M]  drivers/gpu/drm/i915/display/intel_display_debugfs.o
  CC [M]  drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.o
  CC      drivers/mmc/host/sdhci.o
  CC [M]  drivers/net/mdio.o
  CC      drivers/mmc/host/sdhci-pci-core.o
  CC      fs/btrfs/file.o
  CC [M]  drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.o
  CC [M]  drivers/gpu/drm/xe/xe_query.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/falcon/gm200.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/falcon/gp102.o
  CC      drivers/mmc/host/sdhci-pci-o2micro.o
  CC      drivers/mmc/host/sdhci-pci-arasan.o
  CC [M]  drivers/gpu/drm/drm_dumb_buffers.o
  CC      net/ipv6/inet6_hashtables.o
  CC      fs/btrfs/defrag.o
  AR      drivers/cpuidle/built-in.a
  AR      drivers/ufs/built-in.a
  AR      drivers/leds/trigger/built-in.a
  CC [M]  drivers/leds/trigger/ledtrig-audio.o
  AR      drivers/leds/blink/built-in.a
  CC      fs/filesystems.o
  CC      drivers/acpi/sysfs.o
  CC [M]  drivers/gpu/drm/drm_edid.o
  CC      drivers/mmc/core/host.o
  CC      mm/cma.o
  AR      drivers/net/ethernet/neterion/built-in.a
  CC [M]  drivers/net/tun.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.o
  CC      fs/namespace.o
  CC      drivers/acpi/property.o
  CC [M]  drivers/net/ethernet/intel/igc/igc_dump.o
  AR      drivers/firmware/arm_ffa/built-in.a
  AR      drivers/firmware/arm_scmi/built-in.a
  AR      drivers/leds/simple/built-in.a
  CC      drivers/leds/led-core.o
  AR      drivers/firmware/broadcom/built-in.a
  AR      drivers/firmware/cirrus/built-in.a
  AR      drivers/firmware/meson/built-in.a
  CC      lib/group_cpus.o
  CC      drivers/leds/led-class.o
  CC      drivers/firmware/efi/efi-bgrt.o
  CC [M]  drivers/md/persistent-data/dm-btree-spine.o
  CC      drivers/firmware/efi/efi.o
  CC [M]  fs/cifs/link.o
  CC      drivers/leds/led-triggers.o
  CC [M]  drivers/gpu/drm/xe/xe_reg_sr.o
  CC      drivers/firmware/efi/libstub/efi-stub-helper.o
  CC [M]  lib/asn1_decoder.o
  CC      mm/secretmem.o
  CC [M]  drivers/net/veth.o
  CC      drivers/md/dm-target.o
  AR      drivers/firmware/imx/built-in.a
  CC [M]  drivers/gpu/drm/i915/display/intel_pipe_crc.o
  CC      drivers/firmware/efi/libstub/gop.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/falcon/ga100.o
  CC      drivers/mmc/core/mmc.o
  CC      drivers/mmc/core/mmc_ops.o
  CC      drivers/firmware/efi/libstub/secureboot.o
  CC      fs/btrfs/extent_map.o
  CC      fs/btrfs/sysfs.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/falcon/ga102.o
  AR      drivers/net/ethernet/netronome/built-in.a
  AR      drivers/net/ethernet/ni/built-in.a
  AR      drivers/net/ethernet/packetengines/built-in.a
  AR      drivers/net/ethernet/realtek/built-in.a
  CC [M]  drivers/net/ethernet/realtek/8139cp.o
  CC [M]  drivers/net/ethernet/realtek/8139too.o
  CC [M]  drivers/gpu/drm/i915/i915_pmu.o
  CC      fs/btrfs/accessors.o
  GEN     lib/oid_registry_data.c
  CC [M]  drivers/net/ethernet/realtek/r8169_main.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_ids.o
  CC [M]  drivers/net/ethernet/realtek/r8169_firmware.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_mmhub.o
  CC [M]  drivers/net/ethernet/intel/ixgbe/ixgbe_dcb.o
  CC [M]  lib/oid_registry.o
  LD [M]  drivers/md/persistent-data/dm-persistent-data.o
  CC      net/ipv6/mcast_snoop.o
  CC      drivers/md/dm-linear.o
  CC [M]  drivers/net/ethernet/intel/igc/igc_tsn.o
  AR      drivers/leds/built-in.a
  CC [M]  drivers/gpu/drm/i915/gt/gen2_engine_cs.o
  CC      drivers/firmware/efi/libstub/tpm.o
  CC [M]  drivers/gpu/drm/xe/xe_reg_whitelist.o
  CC      drivers/firmware/efi/vars.o
  CC [M]  net/ipv6/ip6_udp_tunnel.o
  CC      drivers/acpi/acpi_cmos_rtc.o
  CC      drivers/md/dm-stripe.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_hdp.o
  CC      drivers/mmc/host/sdhci-pci-dwc-mshc.o
  CC      mm/userfaultfd.o
  CC      drivers/firmware/efi/reboot.o
  CC      fs/btrfs/xattr.o
  CC      mm/memremap.o
  CC [M]  drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_82598.o
  CC      drivers/md/dm-ioctl.o
  AR      lib/lib.a
  AR      drivers/cpufreq/built-in.a
  CC      fs/btrfs/ordered-data.o
  GEN     lib/crc32table.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/acr/base.o
  CC      lib/crc32.o
  CC      mm/hmm.o
  CC      mm/memfd.o
  CC      drivers/mmc/core/sd.o
  CC      drivers/md/dm-io.o
  CC [M]  drivers/net/ethernet/intel/igc/igc_xdp.o
  CC [M]  drivers/gpu/drm/xe/xe_rtp.o
  CC      drivers/acpi/x86/apple.o
  CC      drivers/md/dm-kcopyd.o
  CC      drivers/acpi/x86/utils.o
  CC [M]  drivers/gpu/drm/xe/xe_ring_ops.o
  CC [M]  fs/cifs/misc.o
  CC      drivers/firmware/efi/libstub/file.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/acr/lsfw.o
  CC [M]  drivers/net/ethernet/realtek/r8169_phy_config.o
  CC [M]  drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_82599.o
  CC [M]  drivers/gpu/drm/xe/xe_sa.o
  CC      fs/btrfs/extent_io.o
  AR      lib/built-in.a
  CC [M]  fs/cifs/netmisc.o
  AR      drivers/net/ethernet/renesas/built-in.a
  AR      net/ipv6/built-in.a
  AR      drivers/net/ethernet/sfc/built-in.a
  CC      drivers/md/dm-sysfs.o
  CC      drivers/mmc/host/sdhci-pci-gli.o
  CC      drivers/mmc/core/sd_ops.o
  CC [M]  drivers/gpu/drm/xe/xe_sched_job.o
  CC      mm/bootmem_info.o
  CC      fs/btrfs/volumes.o
  CC      drivers/md/dm-stats.o
  CC [M]  drivers/gpu/drm/i915/gt/gen6_engine_cs.o
  AR      net/built-in.a
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.o
  LD [M]  drivers/net/ethernet/intel/e1000e/e1000e.o
  CC [M]  drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_nl.o
  CC      fs/btrfs/async-thread.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/acr/gm200.o
  AR      drivers/crypto/stm32/built-in.a
  AR      drivers/crypto/xilinx/built-in.a
  AR      drivers/crypto/hisilicon/built-in.a
  CC [M]  fs/cifs/smbencrypt.o
  CC      drivers/acpi/x86/s2idle.o
  CC      drivers/acpi/debugfs.o
  AR      drivers/crypto/keembay/built-in.a
  AR      drivers/crypto/built-in.a
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/acr/gm20b.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_csa.o
  CC [M]  drivers/gpu/drm/i915/gt/gen6_ppgtt.o
  CC      fs/btrfs/ioctl.o
  CC      drivers/clocksource/acpi_pm.o
  CC [M]  drivers/net/ethernet/intel/ixgbe/ixgbe_sysfs.o
  CC      drivers/firmware/efi/libstub/mem.o
  CC [M]  drivers/gpu/drm/xe/xe_step.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/acr/gp102.o
  CC      drivers/hid/usbhid/hid-core.o
  AR      drivers/staging/media/built-in.a
  AR      drivers/staging/built-in.a
  CC      drivers/firmware/efi/libstub/random.o
  CC      drivers/firmware/efi/libstub/randomalloc.o
  AR      drivers/platform/x86/amd/built-in.a
  CC      drivers/hid/usbhid/hiddev.o
  CC      drivers/platform/x86/intel/pmc/core.o
  CC      drivers/platform/x86/intel/pmc/spt.o
  CC      drivers/platform/x86/p2sb.o
  LD [M]  drivers/net/ethernet/intel/igc/igc.o
  AR      mm/built-in.a
  AR      drivers/platform/surface/built-in.a
  CC      drivers/platform/x86/intel/pmc/cnp.o
  CC      drivers/firmware/efi/memattr.o
  CC      drivers/mmc/host/sdhci-acpi.o
  CC      drivers/platform/x86/pmc_atom.o
  CC      fs/seq_file.o
  CC      drivers/mmc/core/sdio.o
  CC      fs/btrfs/locking.o
  CC      fs/btrfs/orphan.o
  CC      drivers/platform/x86/intel/turbo_max_3.o
  CC [M]  drivers/platform/x86/intel/pmt/class.o
  CC      fs/btrfs/export.o
  CC      drivers/firmware/efi/tpm.o
  CC      drivers/clocksource/i8253.o
  CC      drivers/platform/x86/intel/pmc/icl.o
  CC [M]  drivers/net/ethernet/intel/ixgbe/ixgbe_debugfs.o
  CC [M]  drivers/gpu/drm/drm_encoder.o
  CC      drivers/firmware/efi/libstub/pci.o
  CC      fs/btrfs/tree-log.o
  CC [M]  drivers/platform/x86/intel/pmt/telemetry.o
  CC      drivers/acpi/acpi_lpat.o
  CC [M]  drivers/gpu/drm/xe/xe_sync.o
  CC [M]  drivers/gpu/drm/drm_file.o
  CC      drivers/platform/x86/intel/pmc/tgl.o
  CC [M]  drivers/gpu/drm/drm_fourcc.o
  CC [M]  drivers/gpu/drm/xe/xe_trace.o
  CC      drivers/firmware/efi/libstub/skip_spaces.o
  CC [M]  drivers/platform/x86/wmi.o
  CC      drivers/platform/x86/intel/pmc/adl.o
  CC      fs/btrfs/free-space-cache.o
  CC      drivers/hid/hid-core.o
  CC      drivers/firmware/efi/libstub/lib-cmdline.o
  CC [M]  drivers/gpu/drm/i915/gt/gen7_renderclear.o
  CC [M]  drivers/gpu/drm/xe/xe_ttm_gtt_mgr.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/acr/gp108.o
  CC [M]  drivers/platform/x86/intel/pmt/crashlog.o
  CC      drivers/platform/x86/intel/pmc/mtl.o
  CC      fs/xattr.o
  CC      fs/btrfs/zlib.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_ras.o
  CC [M]  drivers/platform/x86/intel/vsec.o
  AR      drivers/clocksource/built-in.a
  CC [M]  drivers/gpu/drm/i915/gt/gen8_engine_cs.o
  CC [M]  fs/cifs/transport.o
  CC [M]  drivers/gpu/drm/i915/gt/gen8_ppgtt.o
  CC      drivers/mailbox/mailbox.o
  CC      drivers/md/dm-rq.o
  CC      drivers/mailbox/pcc.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/acr/gv100.o
  CC      fs/btrfs/lzo.o
  CC [M]  fs/cifs/cached_dir.o
  CC      drivers/md/dm-io-rewind.o
  CC      drivers/devfreq/devfreq.o
  CC      drivers/powercap/powercap_sys.o
  LD [M]  drivers/net/ethernet/realtek/r8169.o
  CC      drivers/mmc/host/cqhci-core.o
  CC      drivers/platform/x86/intel/pmc/pltdrv.o
  AR      drivers/perf/built-in.a
  AR      drivers/net/ethernet/smsc/built-in.a
  CC [M]  drivers/net/ethernet/smsc/smsc9420.o
  AR      drivers/net/ethernet/socionext/built-in.a
  CC      drivers/acpi/acpi_lpit.o
  CC      drivers/acpi/prmt.o
  CC      drivers/firmware/efi/libstub/lib-ctype.o
  CC      drivers/acpi/acpi_pcc.o
  CC      drivers/firmware/efi/libstub/alignedmem.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_breadcrumbs.o
  CC [M]  drivers/platform/x86/wmi-bmof.o
  CC      drivers/acpi/ac.o
  AR      drivers/firmware/psci/built-in.a
  CC [M]  drivers/mmc/host/sdhci-pltfm.o
  AR      drivers/firmware/smccc/built-in.a
  CC [M]  drivers/platform/x86/mxm-wmi.o
  CC      drivers/mmc/core/sdio_ops.o
  CC [M]  drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.o
  CC      fs/btrfs/zstd.o
  LD [M]  drivers/platform/x86/intel/pmt/pmt_class.o
  LD [M]  drivers/platform/x86/intel/pmt/pmt_telemetry.o
  CC      drivers/powercap/intel_rapl_common.o
  LD [M]  drivers/platform/x86/intel/pmt/pmt_crashlog.o
  AR      drivers/hid/usbhid/built-in.a
  CC [M]  drivers/gpu/drm/xe/xe_ttm_stolen_mgr.o
  CC      drivers/hid/hid-input.o
  CC      drivers/hid/hid-quirks.o
  CC      fs/btrfs/compression.o
  CC      drivers/powercap/intel_rapl_msr.o
  CC      fs/btrfs/delayed-ref.o
  CC      drivers/acpi/button.o
  CC [M]  drivers/gpu/drm/drm_framebuffer.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/acr/gp10b.o
  AR      drivers/mailbox/built-in.a
  AR      drivers/firmware/tegra/built-in.a
  CC      drivers/md/dm-builtin.o
  AR      drivers/platform/x86/intel/pmc/built-in.a
  CC [M]  drivers/platform/x86/intel/rst.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/acr/tu102.o
  CC      drivers/mmc/core/sdio_bus.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_context.o
  LD [M]  drivers/platform/x86/intel/intel_vsec.o
  CC [M]  drivers/gpu/drm/drm_gem.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_context_sseu.o
  CC [M]  drivers/gpu/drm/xe/xe_ttm_vram_mgr.o
  CC [M]  drivers/md/dm-bufio.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/acr/ga100.o
  CC      drivers/firmware/efi/libstub/relocate.o
  CC      drivers/mmc/core/sdio_cis.o
  CC [M]  drivers/platform/x86/intel_ips.o
  CC [M]  drivers/md/dm-bio-prison-v1.o
  AR      drivers/platform/x86/intel/built-in.a
  CC      drivers/firmware/efi/libstub/printk.o
  CC      drivers/acpi/fan_core.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_engine_cs.o
  CC      fs/libfs.o
  CC      drivers/hid/hid-debug.o
  CC      drivers/hid/hidraw.o
  LD [M]  drivers/platform/x86/intel/intel-rst.o
  CC      drivers/mmc/core/sdio_io.o
  CC      drivers/mmc/core/sdio_irq.o
  CC      drivers/mmc/core/slot-gpio.o
  AR      drivers/firmware/xilinx/built-in.a
  AR      drivers/net/ethernet/vertexcom/built-in.a
  CC      drivers/acpi/fan_attr.o
  AR      drivers/net/ethernet/wangxun/built-in.a
  CC [M]  drivers/gpu/drm/i915/gt/intel_engine_heartbeat.o
  CC [M]  drivers/gpu/drm/xe/xe_tuning.o
  CC      drivers/acpi/processor_driver.o
  AR      drivers/platform/x86/built-in.a
  CC      drivers/firmware/efi/libstub/vsprintf.o
  CC      drivers/firmware/efi/libstub/x86-stub.o
  STUBCPY drivers/firmware/efi/libstub/alignedmem.stub.o
  AR      drivers/mmc/host/built-in.a
  CC [M]  drivers/gpu/drm/i915/gt/intel_engine_pm.o
  CC      drivers/firmware/dmi_scan.o
  CC      drivers/hid/hid-generic.o
  CC      fs/fs-writeback.o
  CC      fs/btrfs/relocation.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_engine_user.o
  CC      drivers/mmc/core/regulator.o
  CC      fs/btrfs/delayed-inode.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/acr/ga102.o
  STUBCPY drivers/firmware/efi/libstub/efi-stub-helper.stub.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bar/base.o
  CC [M]  drivers/devfreq/governor_simpleondemand.o
  AR      drivers/powercap/built-in.a
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bar/nv50.o
  CC      drivers/ras/ras.o
  STUBCPY drivers/firmware/efi/libstub/file.stub.o
  CC      drivers/ras/debugfs.o
  CC      drivers/acpi/processor_thermal.o
  CC [M]  drivers/gpu/drm/drm_ioctl.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_execlists_submission.o
  CC [M]  drivers/md/dm-bio-prison-v2.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_ggtt.o
  CC      drivers/mmc/core/debugfs.o
  CC      drivers/hid/hid-a4tech.o
  CC      drivers/acpi/processor_idle.o
  AR      drivers/platform/built-in.a
  AR      drivers/hwtracing/intel_th/built-in.a
  CC [M]  drivers/gpu/drm/xe/xe_uc.o
  LD [M]  drivers/net/ethernet/intel/ixgbe/ixgbe.o
  CC [M]  drivers/gpu/drm/xe/xe_uc_debugfs.o
  CC      drivers/android/binderfs.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.o
  CC      drivers/android/binder.o
  CC [M]  drivers/gpu/drm/xe/xe_uc_fw.o
  CC      drivers/firmware/efi/memmap.o
  CC      drivers/nvmem/core.o
  CC [M]  drivers/devfreq/governor_performance.o
  AR      drivers/net/ethernet/xilinx/built-in.a
  CC [M]  fs/cifs/cifs_unicode.o
  CC      fs/pnode.o
  AR      drivers/net/ethernet/synopsys/built-in.a
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bar/g84.o
  AR      drivers/net/ethernet/pensando/built-in.a
  AR      drivers/net/ethernet/built-in.a
  CC      drivers/hid/hid-apple.o
  CC [M]  drivers/mtd/chips/chipreg.o
  CC      fs/btrfs/scrub.o
  CC [M]  drivers/md/dm-crypt.o
  AR      drivers/net/built-in.a
  CC      drivers/hid/hid-belkin.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bar/gf100.o
  STUBCPY drivers/firmware/efi/libstub/gop.stub.o
  STUBCPY drivers/firmware/efi/libstub/lib-cmdline.stub.o
  STUBCPY drivers/firmware/efi/libstub/lib-ctype.stub.o
  CC [M]  drivers/uio/uio.o
  STUBCPY drivers/firmware/efi/libstub/mem.stub.o
  STUBCPY drivers/firmware/efi/libstub/pci.stub.o
  CC [M]  drivers/vfio/pci/vfio_pci_core.o
  STUBCPY drivers/firmware/efi/libstub/printk.stub.o
  CC      drivers/hid/hid-cherry.o
  STUBCPY drivers/firmware/efi/libstub/random.stub.o
  STUBCPY drivers/firmware/efi/libstub/randomalloc.stub.o
  STUBCPY drivers/firmware/efi/libstub/relocate.stub.o
  STUBCPY drivers/firmware/efi/libstub/secureboot.stub.o
  STUBCPY drivers/firmware/efi/libstub/skip_spaces.stub.o
  CC [M]  drivers/vfio/pci/vfio_pci_intrs.o
  STUBCPY drivers/firmware/efi/libstub/tpm.stub.o
  STUBCPY drivers/firmware/efi/libstub/vsprintf.stub.o
  STUBCPY drivers/firmware/efi/libstub/x86-stub.stub.o
  AR      drivers/firmware/efi/libstub/lib.a
  CC [M]  drivers/md/dm-thin.o
  CC      fs/splice.o
  CC      fs/btrfs/backref.o
  CC [M]  fs/cifs/nterr.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bar/gk20a.o
  AR      drivers/devfreq/built-in.a
  CC [M]  drivers/gpu/drm/drm_lease.o
  CC      drivers/mmc/core/block.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bar/gm107.o
  CC [M]  drivers/md/dm-thin-metadata.o
  CC      fs/btrfs/ulist.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.o
  CC [M]  drivers/vfio/pci/vfio_pci_rdwr.o
  CC [M]  drivers/gpu/drm/drm_managed.o
  AR      drivers/ras/built-in.a
  CC [M]  drivers/gpu/drm/drm_mm.o
  CC [M]  drivers/gpu/drm/xe/xe_vm.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.o
  CC      fs/btrfs/qgroup.o
  CC [M]  drivers/mtd/mtdcore.o
  CC      drivers/firmware/efi/esrt.o
  CC      drivers/android/binder_alloc.o
  CC      drivers/hid/hid-chicony.o
  CC      drivers/acpi/processor_throttling.o
  CC [M]  drivers/vfio/pci/vfio_pci_config.o
  CC      fs/sync.o
  CC      fs/utimes.o
  CC      fs/d_path.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bar/gm20b.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_ggtt_fencing.o
  CC [M]  drivers/gpu/drm/xe/xe_vm_madvise.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bar/tu102.o
  CC      drivers/acpi/processor_perflib.o
  AR      drivers/nvmem/built-in.a
  CC [M]  drivers/mtd/mtdsuper.o
  CC [M]  drivers/pps/pps.o
  CC [M]  fs/cifs/cifsencrypt.o
  CC [M]  drivers/pps/kapi.o
  CC [M]  drivers/pps/sysfs.o
  CC [M]  drivers/vfio/pci/vfio_pci.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/base.o
  CC [M]  fs/cifs/readdir.o
  CC [M]  fs/cifs/ioctl.o
  CC      drivers/hid/hid-cypress.o
  CC      drivers/acpi/container.o
  CC      drivers/firmware/efi/efi-pstore.o
  CC      fs/btrfs/send.o
  CC      fs/btrfs/dev-replace.o
  CC [M]  drivers/mtd/mtdconcat.o
  CC      fs/btrfs/raid56.o
  CC [M]  drivers/gpu/drm/xe/xe_wait_user_fence.o
  CC [M]  drivers/gpu/drm/drm_mode_config.o
  CC [M]  drivers/gpu/drm/xe/xe_wa.o
  CC      drivers/acpi/thermal.o
  CC      drivers/acpi/acpi_memhotplug.o
  CC      fs/btrfs/uuid-tree.o
  CC      drivers/firmware/efi/cper.o
  CC      drivers/firmware/efi/cper_cxl.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/bit.o
  CC      drivers/acpi/ioapic.o
  CC [M]  fs/cifs/sess.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_nbio.o
  LD [M]  drivers/pps/pps_core.o
  CC      fs/btrfs/props.o
  CC      fs/btrfs/free-space-tree.o
  CC [M]  drivers/bluetooth/btusb.o
  CC [M]  drivers/bluetooth/btintel.o
  CC      fs/stack.o
  CC [M]  drivers/bluetooth/btbcm.o
  CC      drivers/acpi/battery.o
  CC [M]  drivers/gpu/drm/drm_mode_object.o
  CC      drivers/hid/hid-ezkey.o
  CC      drivers/firmware/efi/runtime-wrappers.o
  LD [M]  drivers/vfio/pci/vfio-pci.o
  LD [M]  drivers/vfio/pci/vfio-pci-core.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/boost.o
  CC      drivers/acpi/hed.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/conn.o
  CC [M]  drivers/vfio/vfio_main.o
  CC      drivers/firmware/efi/dev-path-parser.o
  CC [M]  drivers/dca/dca-core.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_umc.o
  CC [M]  drivers/vfio/group.o
  CC      drivers/mmc/core/queue.o
  CC      drivers/acpi/bgrt.o
  CC [M]  drivers/bluetooth/btrtl.o
  CC      drivers/acpi/cppc_acpi.o
  CC      fs/btrfs/tree-checker.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/smu_v11_0_i2c.o
  CC [M]  drivers/gpu/drm/drm_modes.o
  CC [M]  drivers/gpu/drm/drm_modeset_lock.o
  LD [M]  drivers/md/dm-bio-prison.o
  CC [M]  drivers/mtd/mtdpart.o
  CC [M]  drivers/gpu/drm/drm_plane.o
  AR      drivers/md/built-in.a
  CC [M]  drivers/gpu/drm/drm_prime.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_rap.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/cstep.o
  CC      drivers/acpi/spcr.o
  CC      drivers/hid/hid-kensington.o
  CC [M]  drivers/gpu/drm/drm_print.o
  CC [M]  drivers/dca/dca-sysfs.o
  CC [M]  drivers/gpu/drm/drm_property.o
  LD [M]  drivers/md/dm-thin-pool.o
  CC [M]  drivers/gpu/drm/drm_pt_walk.o
  CC      fs/btrfs/space-info.o
  CC      drivers/hid/hid-lg.o
  CC      fs/fs_struct.o
  CC      fs/btrfs/block-rsv.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_fw_attestation.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_securedisplay.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_eeprom.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_mca.o
  CC [M]  drivers/gpu/drm/xe/xe_wopcm.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/dcb.o
  CC      drivers/firmware/efi/apple-properties.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/disp.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/dp.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_psp_ta.o
  AR      drivers/mmc/core/built-in.a
  AR      drivers/mmc/built-in.a
  CC      fs/btrfs/delalloc-space.o
  CC [M]  drivers/ssb/main.o
  CC [M]  drivers/vhost/net.o
  CC      drivers/acpi/acpi_pad.o
  CC [M]  drivers/vhost/vhost.o
  CC [M]  drivers/mtd/mtdchar.o
  CC [M]  drivers/gpu/drm/xe/xe_display.o
  CC [M]  fs/cifs/export.o
  CC [M]  drivers/vfio/iova_bitmap.o
  CC [M]  drivers/ssb/scan.o
  CC [M]  drivers/vfio/container.o
  LD [M]  drivers/dca/dca.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_gt.o
  CC [M]  drivers/gpu/drm/drm_syncobj.o
  CC [M]  drivers/gpu/drm/drm_sysfs.o
  CC      fs/statfs.o
  CC      fs/btrfs/block-group.o
  CC      drivers/firmware/dmi-sysfs.o
  CC [M]  drivers/acpi/acpi_video.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_lsdma.o
  CC [M]  drivers/acpi/video_detect.o
  CC      drivers/firmware/efi/earlycon.o
  CC      drivers/firmware/dmi-id.o
  CC      drivers/firmware/memmap.o
  CC [M]  drivers/gpu/drm/drm_trace_points.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_ring_mux.o
  CC [M]  drivers/gpu/drm/xe/display/icl_dsi.o
  CC      drivers/hid/hid-lg-g15.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_fdinfo.o
  CC      fs/btrfs/discard.o
  CC      drivers/firmware/efi/cper-x86.o
  CC      fs/fs_pin.o
  CC [M]  fs/cifs/unc.o
  CC [M]  drivers/vfio/virqfd.o
  CC [M]  fs/cifs/winucase.o
  CC      fs/btrfs/reflink.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/extdev.o
  CC [M]  drivers/gpu/drm/drm_vblank.o
  CC      fs/nsfs.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_pmu.o
  CC [M]  drivers/gpu/drm/xe/display/intel_atomic.o
  CC      drivers/hid/hid-microsoft.o
  CC [M]  drivers/gpu/drm/drm_vblank_work.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/cik.o
  CC      fs/fs_types.o
  CC      fs/btrfs/subpage.o
  CC [M]  drivers/vhost/iotlb.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/fan.o
  CC [M]  drivers/vfio/vfio_iommu_type1.o
  CC [M]  fs/cifs/smb2ops.o
  CC      drivers/hid/hid-monterey.o
  CC      fs/btrfs/tree-mod-log.o
  CC [M]  fs/cifs/smb2maperror.o
  CC [M]  drivers/ssb/sprom.o
  CC [M]  drivers/gpu/drm/xe/display/intel_atomic_plane.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/cik_ih.o
  CC [M]  drivers/gpu/drm/xe/display/intel_audio.o
  CC [M]  drivers/gpu/drm/drm_vma_manager.o
  CC [M]  drivers/gpu/drm/drm_gpuva_mgr.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/gpio.o
  CC      fs/fs_context.o
  CC [M]  fs/cifs/smb2transport.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/dce_v8_0.o
  LD [M]  drivers/mtd/mtd.o
  CC [M]  drivers/gpu/drm/drm_writeback.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/i2c.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/iccsense.o
  AR      drivers/firmware/efi/built-in.a
  AR      drivers/firmware/built-in.a
  CC [M]  drivers/gpu/drm/xe/display/intel_backlight.o
  CC      fs/btrfs/extent-io-tree.o
  CC      fs/fs_parser.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/image.o
  AR      drivers/android/built-in.a
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/init.o
  LD [M]  drivers/vhost/vhost_iotlb.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/mxm.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.o
  AR      drivers/acpi/built-in.a
  CC [M]  drivers/gpu/drm/amd/amdgpu/cik_sdma.o
  LD [M]  drivers/vfio/vfio.o
  CC [M]  drivers/gpu/drm/lib/drm_random.o
  CC [M]  drivers/ssb/pci.o
  CC [M]  drivers/gpu/drm/xe/display/intel_bios.o
  CC      fs/fsopen.o
  AR      drivers/hid/built-in.a
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/npde.o
  CC [M]  drivers/gpu/drm/xe/display/intel_bw.o
  CC [M]  drivers/ssb/pcihost_wrapper.o
  CC [M]  fs/cifs/smb2misc.o
  LD [M]  drivers/vhost/vhost_net.o
  CC [M]  fs/cifs/smb2pdu.o
  CC [M]  drivers/ssb/driver_chipcommon.o
  LD [M]  drivers/acpi/video.o
  CC [M]  drivers/ssb/driver_chipcommon_pmu.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/uvd_v4_2.o
  CC      fs/btrfs/fs.o
  CC [M]  drivers/gpu/drm/xe/display/intel_cdclk.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/pcir.o
  CC [M]  drivers/gpu/drm/drm_ioc32.o
  CC [M]  drivers/gpu/drm/drm_panel.o
  CC [M]  drivers/gpu/drm/drm_pci.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/vce_v2_0.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/si.o
  CC      fs/init.o
  CC [M]  drivers/gpu/drm/xe/display/intel_color.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/perf.o
  CC [M]  drivers/gpu/drm/xe/display/intel_combo_phy.o
  CC [M]  drivers/ssb/driver_pcicore.o
  CC      fs/btrfs/messages.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/pll.o
  CC [M]  drivers/gpu/drm/drm_debugfs.o
  CC [M]  fs/cifs/smb2inode.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/pmu.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/power_budget.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/gmc_v6_0.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/ramcfg.o
  CC [M]  drivers/gpu/drm/xe/display/intel_connector.o
  CC      fs/kernel_read_file.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_gt_clock_utils.o
  CC [M]  drivers/gpu/drm/xe/display/intel_crtc_state_dump.o
  CC      fs/btrfs/bio.o
  CC      fs/btrfs/lru_cache.o
  CC [M]  drivers/gpu/drm/drm_debugfs_crc.o
  CC [M]  drivers/gpu/drm/drm_edid_load.o
  CC [M]  drivers/gpu/drm/drm_panel_orientation_quirks.o
  CC [M]  fs/cifs/smb2file.o
  CC [M]  fs/cifs/cifsacl.o
  CC [M]  drivers/gpu/drm/drm_buddy.o
  CC      fs/mnt_idmapping.o
  CC [M]  drivers/gpu/drm/drm_gem_shmem_helper.o
  CC [M]  fs/cifs/fs_context.o
  CC [M]  fs/cifs/dns_resolve.o
  LD [M]  drivers/ssb/ssb.o
  CC [M]  drivers/gpu/drm/xe/display/intel_crtc.o
  CC [M]  drivers/gpu/drm/xe/display/intel_cursor.o
  CC [M]  drivers/gpu/drm/xe/display/intel_ddi_buf_trans.o
  CC [M]  drivers/gpu/drm/xe/display/intel_ddi.o
  CC      fs/btrfs/acl.o
  CC [M]  drivers/gpu/drm/xe/display/intel_display.o
  CC      fs/remap_range.o
  CC [M]  drivers/gpu/drm/xe/display/intel_display_debugfs.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/gfx_v6_0.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/si_ih.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/rammap.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadow.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadowacpi.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_gt_debugfs.o
  ASN.1   fs/cifs/cifs_spnego_negtokeninit.asn1.[ch]
  CC [M]  drivers/gpu/drm/amd/amdgpu/si_dma.o
  CC [M]  drivers/gpu/drm/drm_suballoc.o
  CC [M]  drivers/gpu/drm/xe/display/intel_display_power.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadowof.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadowpci.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/dce_v6_0.o
  CC [M]  drivers/gpu/drm/xe/display/intel_display_power_map.o
  CC      fs/buffer.o
  CC [M]  drivers/gpu/drm/drm_gem_ttm_helper.o
  CC [M]  drivers/gpu/drm/drm_atomic_helper.o
  CC [M]  drivers/gpu/drm/drm_atomic_state_helper.o
  CC [M]  drivers/gpu/drm/xe/display/intel_display_power_well.o
  CC [M]  drivers/gpu/drm/xe/display/intel_display_trace.o
  CC [M]  drivers/gpu/drm/drm_bridge_connector.o
  CC [M]  drivers/gpu/drm/drm_crtc_helper.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_gt_engines_debugfs.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_gt_irq.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/uvd_v3_1.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/vi.o
  CC [M]  drivers/gpu/drm/drm_damage_helper.o
  CC [M]  drivers/gpu/drm/drm_encoder_slave.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadowramin.o
  CC [M]  drivers/gpu/drm/drm_flip_work.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadowrom.o
  CC [M]  drivers/gpu/drm/xe/display/intel_dkl_phy.o
  CC [M]  drivers/gpu/drm/drm_format_helper.o
  CC [M]  fs/cifs/smb1ops.o
  CC [M]  drivers/gpu/drm/xe/display/intel_dmc.o
  CC [M]  fs/cifs/cifssmb.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/timing.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/mxgpu_vi.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/nbio_v6_1.o
  CC [M]  drivers/gpu/drm/xe/display/intel_dp_aux_backlight.o
  AR      fs/btrfs/built-in.a
  CC [M]  fs/cifs/cifs_spnego_negtokeninit.asn1.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/soc15.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/therm.o
  CC [M]  drivers/gpu/drm/drm_gem_atomic_helper.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/vmap.o
  CC [M]  drivers/gpu/drm/drm_gem_framebuffer_helper.o
  CC [M]  drivers/gpu/drm/drm_kms_helper_common.o
  CC [M]  drivers/gpu/drm/xe/display/intel_dp_aux.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/volt.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_gt_mcr.o
  CC [M]  drivers/gpu/drm/xe/display/intel_dp.o
  CC [M]  fs/cifs/asn1.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_gt_pm.o
  CC [M]  drivers/gpu/drm/xe/display/intel_dp_hdcp.o
  CC [M]  drivers/gpu/drm/drm_modeset_helper.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/emu_soc.o
  CC [M]  drivers/gpu/drm/drm_plane_helper.o
  CC [M]  drivers/gpu/drm/drm_probe_helper.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/mxgpu_ai.o
  CC [M]  drivers/gpu/drm/drm_rect.o
  CC [M]  drivers/gpu/drm/xe/display/intel_dp_link_training.o
  CC [M]  drivers/gpu/drm/xe/display/intel_dp_mst.o
  CC [M]  drivers/gpu/drm/xe/display/intel_dpll.o
  CC [M]  drivers/gpu/drm/xe/display/intel_dpll_mgr.o
  CC [M]  drivers/gpu/drm/drm_self_refresh_helper.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_gt_pm_debugfs.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/vpstate.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/xpio.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_gt_pm_irq.o
  CC [M]  drivers/gpu/drm/drm_simple_kms_helper.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_gt_requests.o
  CC [M]  drivers/gpu/drm/bridge/panel.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_gt_sysfs.o
  CC [M]  drivers/gpu/drm/xe/display/intel_dpt.o
  CC [M]  drivers/gpu/drm/xe/display/intel_drrs.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/M0203.o
  CC [M]  drivers/gpu/drm/drm_fbdev_generic.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/M0205.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/M0209.o
  CC [M]  drivers/gpu/drm/xe/display/intel_dsb.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/nbio_v7_0.o
  CC [M]  drivers/gpu/drm/xe/display/intel_dsi.o
  CC [M]  drivers/gpu/drm/drm_fb_helper.o
  CC [M]  drivers/gpu/drm/xe/display/intel_dsi_dcs_backlight.o
  LD [M]  drivers/gpu/drm/drm.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/vega10_reg_init.o
  CC [M]  drivers/gpu/drm/xe/display/intel_dsi_vbt.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/vega20_reg_init.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_gt_sysfs_pm.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/nbio_v7_4.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/nbio_v2_3.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/nv.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_gtt.o
  LD [M]  drivers/gpu/drm/drm_shmem_helper.o
  LD [M]  drivers/gpu/drm/drm_suballoc_helper.o
  LD [M]  drivers/gpu/drm/drm_ttm_helper.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bios/P0260.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bus/base.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_llc.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bus/hwsq.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bus/nv04.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/arct_reg_init.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bus/nv31.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_lrc.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/mxgpu_nv.o
  AR      drivers/gpu/drm/built-in.a
  CC [M]  drivers/gpu/drm/amd/amdgpu/nbio_v7_2.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_migrate.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_mocs.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_ppgtt.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bus/nv50.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bus/g94.o
  CC      fs/mpage.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_rc6.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/hdp_v4_0.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/hdp_v5_0.o
  CC [M]  drivers/gpu/drm/xe/display/intel_fb.o
  CC [M]  drivers/gpu/drm/xe/display/intel_fbc.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/bus/gf100.o
  CC [M]  drivers/gpu/drm/xe/display/intel_fdi.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/aldebaran_reg_init.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/aldebaran.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_region_lmem.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/soc21.o
  CC [M]  drivers/gpu/drm/xe/display/intel_fifo_underrun.o
  CC [M]  drivers/gpu/drm/xe/display/intel_frontbuffer.o
  CC [M]  drivers/gpu/drm/xe/display/intel_global_state.o
  CC      fs/proc_namespace.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/sienna_cichlid.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/smu_v13_0_10.o
  CC [M]  drivers/gpu/drm/xe/display/intel_gmbus.o
  CC [M]  drivers/gpu/drm/xe/display/intel_hdcp.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/nbio_v4_3.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/clk/nv04.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/clk/nv40.o
  CC [M]  drivers/gpu/drm/xe/display/intel_hdmi.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/clk/nv50.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/clk/g84.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_renderstate.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/hdp_v6_0.o
  CC [M]  drivers/gpu/drm/xe/display/intel_hotplug.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_reset.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_ring.o
  CC [M]  drivers/gpu/drm/xe/display/intel_hti.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/nbio_v7_7.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/clk/gt215.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/hdp_v5_2.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/clk/mcp77.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_ring_submission.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/lsdma_v6_0.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/df_v1_7.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_rps.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/df_v3_6.o
  CC      fs/direct-io.o
  CC      fs/eventpoll.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_sa_media.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/df_v4_3.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/gmc_v7_0.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/gmc_v8_0.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_sseu.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/clk/gf100.o
  CC [M]  drivers/gpu/drm/xe/display/intel_lspcon.o
  LD [M]  drivers/gpu/drm/drm_kms_helper.o
  CC [M]  drivers/gpu/drm/xe/display/intel_lvds.o
  CC [M]  drivers/gpu/drm/xe/display/intel_modeset_setup.o
  CC [M]  drivers/gpu/drm/xe/display/intel_modeset_verify.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/clk/gk104.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/clk/gk20a.o
  CC [M]  drivers/gpu/drm/xe/display/intel_panel.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/clk/gm20b.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.o
  CC [M]  drivers/gpu/drm/xe/display/intel_pipe_crc.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_sseu_debugfs.o
  CC [M]  drivers/gpu/drm/xe/display/intel_pps.o
  CC [M]  drivers/gpu/drm/xe/display/intel_psr.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/clk/pllnv04.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/clk/pllgt215.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_timeline.o
  CC [M]  drivers/gpu/drm/xe/display/intel_qp_tables.o
  CC [M]  drivers/gpu/drm/xe/display/intel_quirks.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/devinit/base.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/devinit/nv04.o
  CC [M]  drivers/gpu/drm/xe/display/intel_snps_phy.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/gfxhub_v1_1.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_wopcm.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_workarounds.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/mmhub_v9_4.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/devinit/nv05.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/devinit/nv10.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/devinit/nv1a.o
  CC [M]  drivers/gpu/drm/i915/gt/shmem_utils.o
  LD [M]  fs/cifs/cifs.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/gfxhub_v2_0.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/mmhub_v2_0.o
  CC [M]  drivers/gpu/drm/xe/display/intel_sprite.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/devinit/nv20.o
  CC [M]  drivers/gpu/drm/xe/display/intel_tc.o
  CC [M]  drivers/gpu/drm/xe/display/intel_vblank.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/devinit/nv50.o
  CC [M]  drivers/gpu/drm/xe/display/intel_vdsc.o
  CC [M]  drivers/gpu/drm/xe/display/intel_vga.o
  CC      fs/anon_inodes.o
  CC [M]  drivers/gpu/drm/xe/display/intel_vrr.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/gmc_v10_0.o
  CC      fs/signalfd.o
  CC      fs/timerfd.o
  CC      fs/eventfd.o
  CC [M]  drivers/gpu/drm/xe/display/intel_wm.o
  CC [M]  drivers/gpu/drm/xe/display/xe_fb_pin.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/devinit/g84.o
  CC [M]  drivers/gpu/drm/xe/display/xe_hdcp_gsc.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/gfxhub_v2_1.o
  CC [M]  drivers/gpu/drm/xe/display/xe_plane_initial.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/devinit/g98.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/devinit/gt215.o
  CC [M]  drivers/gpu/drm/i915/gt/sysfs_engines.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/devinit/mcp89.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/mmhub_v2_3.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/mmhub_v1_7.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_ggtt_gmch.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/devinit/gf100.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/gfxhub_v3_0.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/devinit/gm107.o
  CC [M]  drivers/gpu/drm/i915/gt/gen6_renderstate.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/mmhub_v3_0.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/devinit/gm200.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/devinit/gv100.o
  CC      fs/userfaultfd.o
  CC [M]  drivers/gpu/drm/i915/gt/gen7_renderstate.o
  CC [M]  drivers/gpu/drm/i915/gt/gen8_renderstate.o
  CC [M]  drivers/gpu/drm/xe/display/xe_display_rps.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/devinit/tu102.o
  CC      fs/aio.o
  CC [M]  drivers/gpu/drm/i915/gt/gen9_renderstate.o
  CC [M]  drivers/gpu/drm/xe/display/skl_scaler.o
  CC [M]  drivers/gpu/drm/xe/display/skl_universal_plane.o
  CC [M]  drivers/gpu/drm/xe/display/skl_watermark.o
  CC [M]  drivers/gpu/drm/xe/display/ext/i915_irq.o
  CC [M]  drivers/gpu/drm/i915/gem/i915_gem_busy.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/devinit/ga100.o
  CC      fs/locks.o
  CC [M]  drivers/gpu/drm/xe/display/ext/i9xx_wm.o
  CC [M]  drivers/gpu/drm/xe/display/ext/intel_device_info.o
  CC [M]  drivers/gpu/drm/i915/gem/i915_gem_clflush.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fault/base.o
  CC [M]  drivers/gpu/drm/xe/display/ext/intel_dram.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fault/user.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/mmhub_v3_0_2.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fault/gp100.o
  CC [M]  drivers/gpu/drm/i915/gem/i915_gem_context.o
  CC      fs/binfmt_script.o
  CC      fs/binfmt_elf.o
  CC      fs/compat_binfmt_elf.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/gmc_v11_0.o
  CC [M]  drivers/gpu/drm/xe/display/ext/intel_pch.o
  CC [M]  drivers/gpu/drm/xe/display/ext/intel_pm.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/mmhub_v3_0_1.o
  CC [M]  drivers/gpu/drm/xe/display/intel_acpi.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/gfxhub_v3_0_3.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fault/gp10b.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/umc_v6_0.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fault/gv100.o
  CC [M]  drivers/gpu/drm/i915/gem/i915_gem_create.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fault/tu102.o
  CC [M]  drivers/gpu/drm/i915/gem/i915_gem_dmabuf.o
  CC [M]  drivers/gpu/drm/xe/display/intel_opregion.o
  CC      fs/mbcache.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/base.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/nv04.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/umc_v6_1.o
  CC      fs/posix_acl.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/nv10.o
  CC [M]  drivers/gpu/drm/xe/display/intel_fbdev.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/umc_v6_7.o
  CC [M]  drivers/gpu/drm/i915/gem/i915_gem_domain.o
  CC [M]  drivers/gpu/drm/i915/gem/i915_gem_execbuffer.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/nv1a.o
  HDRTEST drivers/gpu/drm/xe/abi/guc_klvs_abi.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/nv20.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/umc_v8_7.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/nv25.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/umc_v8_10.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_irq.o
  CC      fs/coredump.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_ih.o
  HDRTEST drivers/gpu/drm/xe/abi/guc_errors_abi.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/iceland_ih.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/nv30.o
  CC [M]  drivers/gpu/drm/i915/gem/i915_gem_internal.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/tonga_ih.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/nv35.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/nv36.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/cz_ih.o
  HDRTEST drivers/gpu/drm/xe/abi/guc_actions_slpc_abi.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/nv40.o
  HDRTEST drivers/gpu/drm/xe/abi/guc_communication_mmio_abi.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/nv41.o
  HDRTEST drivers/gpu/drm/xe/abi/guc_actions_abi.h
  HDRTEST drivers/gpu/drm/xe/abi/guc_communication_ctb_abi.h
  HDRTEST drivers/gpu/drm/xe/abi/guc_messages_abi.h
  HDRTEST drivers/gpu/drm/xe/compat-i915-headers/i915_vma_types.h
  CC [M]  drivers/gpu/drm/i915/gem/i915_gem_object.o
  HDRTEST drivers/gpu/drm/xe/compat-i915-headers/intel_wakeref.h
  HDRTEST drivers/gpu/drm/xe/compat-i915-headers/i915_drv.h
  CC [M]  drivers/gpu/drm/i915/gem/i915_gem_lmem.o
  HDRTEST drivers/gpu/drm/xe/compat-i915-headers/i915_reg_defs.h
  CC [M]  drivers/gpu/drm/i915/gem/i915_gem_mman.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/nv44.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/vega10_ih.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/nv46.o
  CC [M]  drivers/gpu/drm/i915/gem/i915_gem_pages.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/vega20_ih.o
  CC      fs/drop_caches.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/nv47.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/navi10_ih.o
  HDRTEST drivers/gpu/drm/xe/compat-i915-headers/i915_reg.h
  HDRTEST drivers/gpu/drm/xe/compat-i915-headers/i915_active_types.h
  HDRTEST drivers/gpu/drm/xe/compat-i915-headers/i915_utils.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/nv49.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/nv4e.o
  CC      fs/fhandle.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/nv50.o
  HDRTEST drivers/gpu/drm/xe/compat-i915-headers/i915_config.h
  CC [M]  drivers/gpu/drm/i915/gem/i915_gem_phys.o
  HDRTEST drivers/gpu/drm/xe/compat-i915-headers/i915_vma.h
  HDRTEST drivers/gpu/drm/xe/compat-i915-headers/intel_mchbar_regs.h
  HDRTEST drivers/gpu/drm/xe/compat-i915-headers/soc/intel_gmch.h
  HDRTEST drivers/gpu/drm/xe/compat-i915-headers/i915_fixed.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/g84.o
  HDRTEST drivers/gpu/drm/xe/compat-i915-headers/intel_runtime_pm.h
  HDRTEST drivers/gpu/drm/xe/compat-i915-headers/intel_pm_types.h
  HDRTEST drivers/gpu/drm/xe/compat-i915-headers/intel_pci_config.h
  HDRTEST drivers/gpu/drm/xe/display/ext/i915_irq.h
  CC [M]  drivers/gpu/drm/i915/gem/i915_gem_pm.o
  HDRTEST drivers/gpu/drm/xe/display/ext/intel_pch.h
  HDRTEST drivers/gpu/drm/xe/display/ext/intel_pm.h
  HDRTEST drivers/gpu/drm/xe/display/ext/i9xx_wm.h
  HDRTEST drivers/gpu/drm/xe/display/ext/intel_dram.h
  HDRTEST drivers/gpu/drm/xe/display/ext/intel_device_info.h
  HDRTEST drivers/gpu/drm/xe/display/xe_de.h
  HDRTEST drivers/gpu/drm/xe/regs/xe_reg_defs.h
  CC [M]  drivers/gpu/drm/i915/gem/i915_gem_region.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/gt215.o
  CC [M]  drivers/gpu/drm/i915/gem/i915_gem_shmem.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/mcp77.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/mcp89.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/gf100.o
  CC [M]  drivers/gpu/drm/i915/gem/i915_gem_shrinker.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/ih_v6_0.o
  CC [M]  drivers/gpu/drm/i915/gem/i915_gem_stolen.o
  HDRTEST drivers/gpu/drm/xe/regs/xe_gt_regs.h
  CC [M]  drivers/gpu/drm/i915/gem/i915_gem_throttle.o
  HDRTEST drivers/gpu/drm/xe/regs/xe_regs.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_psp.o
  HDRTEST drivers/gpu/drm/xe/regs/xe_gpu_commands.h
  CC [M]  drivers/gpu/drm/i915/gem/i915_gem_tiling.o
  HDRTEST drivers/gpu/drm/xe/regs/xe_lrc_layout.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/psp_v3_1.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/psp_v10_0.o
  HDRTEST drivers/gpu/drm/xe/regs/xe_engine_regs.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/gf108.o
  CC [M]  drivers/gpu/drm/i915/gem/i915_gem_ttm.o
  CC [M]  drivers/gpu/drm/i915/gem/i915_gem_ttm_move.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/gk104.o
  HDRTEST drivers/gpu/drm/xe/tests/xe_test.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/psp_v11_0.o
  HDRTEST drivers/gpu/drm/xe/tests/xe_migrate_test.h
  CC [M]  drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.o
  HDRTEST drivers/gpu/drm/xe/tests/xe_dma_buf_test.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/gk110.o
  CC [M]  drivers/gpu/drm/i915/gem/i915_gem_userptr.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/gk20a.o
  HDRTEST drivers/gpu/drm/xe/tests/xe_bo_test.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/psp_v11_0_8.o
  HDRTEST drivers/gpu/drm/xe/xe_bb.h
  HDRTEST drivers/gpu/drm/xe/xe_bb_types.h
  HDRTEST drivers/gpu/drm/xe/xe_bo.h
  HDRTEST drivers/gpu/drm/xe/xe_bo_doc.h
  HDRTEST drivers/gpu/drm/xe/xe_bo_evict.h
  HDRTEST drivers/gpu/drm/xe/xe_bo_types.h
  HDRTEST drivers/gpu/drm/xe/xe_debugfs.h
  HDRTEST drivers/gpu/drm/xe/xe_device.h
  HDRTEST drivers/gpu/drm/xe/xe_device_types.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/gm107.o
  HDRTEST drivers/gpu/drm/xe/xe_display.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/gm200.o
  AR      fs/built-in.a
  CC [M]  drivers/gpu/drm/amd/amdgpu/psp_v12_0.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/psp_v13_0.o
  HDRTEST drivers/gpu/drm/xe/xe_dma_buf.h
  HDRTEST drivers/gpu/drm/xe/xe_drv.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/gm20b.o
  HDRTEST drivers/gpu/drm/xe/xe_engine.h
  HDRTEST drivers/gpu/drm/xe/xe_engine_types.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/psp_v13_0_4.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/gp100.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/gp102.o
  HDRTEST drivers/gpu/drm/xe/xe_exec.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/dce_v10_0.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/dce_v11_0.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/gp10b.o
  CC [M]  drivers/gpu/drm/i915/gem/i915_gem_wait.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.o
  CC [M]  drivers/gpu/drm/i915/gem/i915_gemfs.o
  CC [M]  drivers/gpu/drm/i915/i915_active.o
  HDRTEST drivers/gpu/drm/xe/xe_execlist.h
  HDRTEST drivers/gpu/drm/xe/xe_execlist_types.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/gv100.o
  HDRTEST drivers/gpu/drm/xe/xe_force_wake.h
  HDRTEST drivers/gpu/drm/xe/xe_force_wake_types.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/tu102.o
  HDRTEST drivers/gpu/drm/xe/xe_ggtt.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_rlc.o
  CC [M]  drivers/gpu/drm/i915/i915_cmd_parser.o
  HDRTEST drivers/gpu/drm/xe/xe_ggtt_types.h
  HDRTEST drivers/gpu/drm/xe/xe_gt.h
  HDRTEST drivers/gpu/drm/xe/xe_gt_clock.h
  HDRTEST drivers/gpu/drm/xe/xe_gt_debugfs.h
  HDRTEST drivers/gpu/drm/xe/xe_gt_mcr.h
  HDRTEST drivers/gpu/drm/xe/xe_gt_pagefault.h
  HDRTEST drivers/gpu/drm/xe/xe_gt_sysfs.h
  HDRTEST drivers/gpu/drm/xe/xe_gt_sysfs_types.h
  HDRTEST drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
  HDRTEST drivers/gpu/drm/xe/xe_gt_tlb_invalidation_types.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/gfx_v8_0.o
  HDRTEST drivers/gpu/drm/xe/xe_gt_topology.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/gfx_v9_0.o
  CC [M]  drivers/gpu/drm/i915/i915_deps.o
  CC [M]  drivers/gpu/drm/i915/i915_gem_evict.o
  CC [M]  drivers/gpu/drm/i915/i915_gem_gtt.o
  HDRTEST drivers/gpu/drm/xe/xe_gt_types.h
  HDRTEST drivers/gpu/drm/xe/xe_guc.h
  CC [M]  drivers/gpu/drm/i915/i915_gem_ww.o
  HDRTEST drivers/gpu/drm/xe/xe_guc_ads.h
  HDRTEST drivers/gpu/drm/xe/xe_guc_ads_types.h
  CC [M]  drivers/gpu/drm/i915/i915_gem.o
  CC [M]  drivers/gpu/drm/i915/i915_query.o
  HDRTEST drivers/gpu/drm/xe/xe_guc_ct.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/ga100.o
  CC [M]  drivers/gpu/drm/i915/i915_request.o
  HDRTEST drivers/gpu/drm/xe/xe_guc_ct_types.h
  HDRTEST drivers/gpu/drm/xe/xe_guc_debugfs.h
  HDRTEST drivers/gpu/drm/xe/xe_guc_engine_types.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/gfx_v9_4.o
  CC [M]  drivers/gpu/drm/i915/i915_scheduler.o
  HDRTEST drivers/gpu/drm/xe/xe_guc_fwif.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2.o
  CC [M]  drivers/gpu/drm/i915/i915_trace_points.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/gfx_v10_0.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/imu_v11_0.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/gfx_v11_0.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/ga102.o
  HDRTEST drivers/gpu/drm/xe/xe_guc_hwconfig.h
  HDRTEST drivers/gpu/drm/xe/xe_guc_log.h
  HDRTEST drivers/gpu/drm/xe/xe_guc_log_types.h
  HDRTEST drivers/gpu/drm/xe/xe_guc_pc.h
  HDRTEST drivers/gpu/drm/xe/xe_guc_pc_types.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/ram.o
  HDRTEST drivers/gpu/drm/xe/xe_guc_reg.h
  HDRTEST drivers/gpu/drm/xe/xe_guc_submit.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramnv04.o
  HDRTEST drivers/gpu/drm/xe/xe_guc_types.h
  HDRTEST drivers/gpu/drm/xe/xe_huc.h
  HDRTEST drivers/gpu/drm/xe/xe_huc_debugfs.h
  CC [M]  drivers/gpu/drm/i915/i915_ttm_buddy_manager.o
  HDRTEST drivers/gpu/drm/xe/xe_huc_types.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramnv10.o
  HDRTEST drivers/gpu/drm/xe/xe_hw_engine.h
  HDRTEST drivers/gpu/drm/xe/xe_hw_engine_types.h
  HDRTEST drivers/gpu/drm/xe/xe_hw_fence.h
  HDRTEST drivers/gpu/drm/xe/xe_hw_fence_types.h
  CC [M]  drivers/gpu/drm/i915/i915_vma.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/gfx_v11_0_3.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramnv1a.o
  HDRTEST drivers/gpu/drm/xe/xe_irq.h
  HDRTEST drivers/gpu/drm/xe/xe_lrc.h
  HDRTEST drivers/gpu/drm/xe/xe_lrc_types.h
  HDRTEST drivers/gpu/drm/xe/xe_macros.h
  HDRTEST drivers/gpu/drm/xe/xe_map.h
  CC [M]  drivers/gpu/drm/i915/i915_vma_resource.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/imu_v11_0_3.o
  CC [M]  drivers/gpu/drm/i915/gt/uc/intel_gsc_fw.o
  CC [M]  drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.o
  HDRTEST drivers/gpu/drm/xe/xe_migrate.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramnv20.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.o
  HDRTEST drivers/gpu/drm/xe/xe_migrate_doc.h
  HDRTEST drivers/gpu/drm/xe/xe_mmio.h
  HDRTEST drivers/gpu/drm/xe/xe_mocs.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/sdma_v2_4.o
  HDRTEST drivers/gpu/drm/xe/xe_module.h
  HDRTEST drivers/gpu/drm/xe/xe_pat.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/sdma_v3_0.o
  HDRTEST drivers/gpu/drm/xe/xe_pci.h
  HDRTEST drivers/gpu/drm/xe/xe_pcode.h
  HDRTEST drivers/gpu/drm/xe/xe_pcode_api.h
  HDRTEST drivers/gpu/drm/xe/xe_platform_types.h
  HDRTEST drivers/gpu/drm/xe/xe_pm.h
  CC [M]  drivers/gpu/drm/i915/gt/uc/intel_gsc_uc_heci_cmd_submit.o
  HDRTEST drivers/gpu/drm/xe/xe_preempt_fence.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramnv40.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/sdma_v4_0.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramnv41.o
  CC [M]  drivers/gpu/drm/i915/gt/uc/intel_guc.o
  CC [M]  drivers/gpu/drm/i915/gt/uc/intel_guc_ads.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramnv44.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/sdma_v4_4.o
  HDRTEST drivers/gpu/drm/xe/xe_preempt_fence_types.h
  CC [M]  drivers/gpu/drm/i915/gt/uc/intel_guc_capture.o
  HDRTEST drivers/gpu/drm/xe/xe_pt.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramnv49.o
  CC [M]  drivers/gpu/drm/i915/gt/uc/intel_guc_ct.o
  HDRTEST drivers/gpu/drm/xe/xe_pt_types.h
  HDRTEST drivers/gpu/drm/xe/xe_query.h
  CC [M]  drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/sdma_v5_0.o
  HDRTEST drivers/gpu/drm/xe/xe_reg_sr.h
  HDRTEST drivers/gpu/drm/xe/xe_reg_sr_types.h
  HDRTEST drivers/gpu/drm/xe/xe_reg_whitelist.h
  CC [M]  drivers/gpu/drm/i915/gt/uc/intel_guc_fw.o
  HDRTEST drivers/gpu/drm/xe/xe_res_cursor.h
  HDRTEST drivers/gpu/drm/xe/xe_ring_ops.h
  CC [M]  drivers/gpu/drm/i915/gt/uc/intel_guc_hwconfig.o
  HDRTEST drivers/gpu/drm/xe/xe_ring_ops_types.h
  CC [M]  drivers/gpu/drm/i915/gt/uc/intel_guc_log.o
  CC [M]  drivers/gpu/drm/i915/gt/uc/intel_guc_log_debugfs.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/sdma_v5_2.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/sdma_v6_0.o
  HDRTEST drivers/gpu/drm/xe/xe_rtp.h
  CC [M]  drivers/gpu/drm/i915/gt/uc/intel_guc_rc.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_mes.o
  HDRTEST drivers/gpu/drm/xe/xe_rtp_types.h
  CC [M]  drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.o
  CC [M]  drivers/gpu/drm/i915/gt/uc/intel_guc_submission.o
  CC [M]  drivers/gpu/drm/i915/gt/uc/intel_huc.o
  CC [M]  drivers/gpu/drm/i915/gt/uc/intel_huc_debugfs.o
  HDRTEST drivers/gpu/drm/xe/xe_sa.h
  CC [M]  drivers/gpu/drm/i915/gt/uc/intel_huc_fw.o
  HDRTEST drivers/gpu/drm/xe/xe_sa_types.h
  HDRTEST drivers/gpu/drm/xe/xe_sched_job.h
  CC [M]  drivers/gpu/drm/i915/gt/uc/intel_uc.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/mes_v10_1.o
  CC [M]  drivers/gpu/drm/i915/gt/uc/intel_uc_debugfs.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/mes_v11_0.o
  CC [M]  drivers/gpu/drm/i915/gt/uc/intel_uc_fw.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramnv4e.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramnv50.o
  HDRTEST drivers/gpu/drm/xe/xe_sched_job_types.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgt215.o
  CC [M]  drivers/gpu/drm/i915/gt/intel_gsc.o
  HDRTEST drivers/gpu/drm/xe/xe_step.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/rammcp77.o
  HDRTEST drivers/gpu/drm/xe/xe_step_types.h
  HDRTEST drivers/gpu/drm/xe/xe_sync.h
  HDRTEST drivers/gpu/drm/xe/xe_sync_types.h
  HDRTEST drivers/gpu/drm/xe/xe_trace.h
  HDRTEST drivers/gpu/drm/xe/xe_ttm_gtt_mgr.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/uvd_v5_0.o
  CC [M]  drivers/gpu/drm/i915/i915_hwmon.o
  HDRTEST drivers/gpu/drm/xe/xe_ttm_gtt_mgr_types.h
  HDRTEST drivers/gpu/drm/xe/xe_ttm_stolen_mgr.h
  HDRTEST drivers/gpu/drm/xe/xe_ttm_vram_mgr.h
  HDRTEST drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h
  CC [M]  drivers/gpu/drm/i915/display/hsw_ips.o
  CC [M]  drivers/gpu/drm/i915/display/intel_atomic.o
  CC [M]  drivers/gpu/drm/i915/display/intel_atomic_plane.o
  CC [M]  drivers/gpu/drm/i915/display/intel_audio.o
  CC [M]  drivers/gpu/drm/i915/display/intel_bios.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/uvd_v6_0.o
  CC [M]  drivers/gpu/drm/i915/display/intel_bw.o
  CC [M]  drivers/gpu/drm/i915/display/intel_cdclk.o
  CC [M]  drivers/gpu/drm/i915/display/intel_color.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/uvd_v7_0.o
  HDRTEST drivers/gpu/drm/xe/xe_tuning.h
  HDRTEST drivers/gpu/drm/xe/xe_uc.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgf100.o
  CC [M]  drivers/gpu/drm/i915/display/intel_combo_phy.o
  CC [M]  drivers/gpu/drm/i915/display/intel_connector.o
  HDRTEST drivers/gpu/drm/xe/xe_uc_debugfs.h
  HDRTEST drivers/gpu/drm/xe/xe_uc_fw.h
  HDRTEST drivers/gpu/drm/xe/xe_uc_fw_abi.h
  CC [M]  drivers/gpu/drm/i915/display/intel_crtc.o
  HDRTEST drivers/gpu/drm/xe/xe_uc_fw_types.h
  HDRTEST drivers/gpu/drm/xe/xe_uc_types.h
  HDRTEST drivers/gpu/drm/xe/xe_vm.h
  CC [M]  drivers/gpu/drm/i915/display/intel_crtc_state_dump.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgf108.o
  CC [M]  drivers/gpu/drm/i915/display/intel_cursor.o
  HDRTEST drivers/gpu/drm/xe/xe_vm_doc.h
  CC [M]  drivers/gpu/drm/i915/display/intel_display.o
  HDRTEST drivers/gpu/drm/xe/xe_vm_madvise.h
  HDRTEST drivers/gpu/drm/xe/xe_vm_types.h
  HDRTEST drivers/gpu/drm/xe/xe_wa.h
  CC [M]  drivers/gpu/drm/i915/display/intel_display_power.o
  HDRTEST drivers/gpu/drm/xe/xe_wait_user_fence.h
  HDRTEST drivers/gpu/drm/xe/xe_wopcm.h
  HDRTEST drivers/gpu/drm/xe/xe_wopcm_types.h
  CC [M]  drivers/gpu/drm/i915/display/intel_display_power_map.o
  CC [M]  drivers/gpu/drm/i915/display/intel_display_power_well.o
  LD [M]  drivers/gpu/drm/xe/xe.o
  CC [M]  drivers/gpu/drm/i915/display/intel_display_rps.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_vce.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/vce_v3_0.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/vce_v4_0.o
  CC [M]  drivers/gpu/drm/i915/display/intel_dmc.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/vcn_sw_ring.o
  CC [M]  drivers/gpu/drm/i915/display/intel_dpio_phy.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/vcn_v1_0.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/vcn_v2_0.o
  CC [M]  drivers/gpu/drm/i915/display/intel_dpll.o
  CC [M]  drivers/gpu/drm/i915/display/intel_dpll_mgr.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/vcn_v2_5.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/vcn_v3_0.o
drivers/gpu/drm/xe/xe.o: warning: objtool: intel_crtc_init+0x241: unreachable instruction
  CC [M]  drivers/gpu/drm/amd/amdgpu/vcn_v4_0.o
  CC [M]  drivers/gpu/drm/i915/display/intel_dpt.o
  CC [M]  drivers/gpu/drm/i915/display/intel_drrs.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgk104.o
  CC [M]  drivers/gpu/drm/i915/display/intel_dsb.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgm107.o
  CC [M]  drivers/gpu/drm/i915/display/intel_fb.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgm200.o
  CC [M]  drivers/gpu/drm/i915/display/intel_fb_pin.o
  CC [M]  drivers/gpu/drm/i915/display/intel_fbc.o
  CC [M]  drivers/gpu/drm/i915/display/intel_fdi.o
  CC [M]  drivers/gpu/drm/i915/display/intel_fifo_underrun.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/jpeg_v1_0.o
  CC [M]  drivers/gpu/drm/i915/display/intel_frontbuffer.o
  CC [M]  drivers/gpu/drm/i915/display/intel_global_state.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.o
  CC [M]  drivers/gpu/drm/i915/display/intel_hdcp.o
  CC [M]  drivers/gpu/drm/i915/display/intel_hdcp_gsc.o
  CC [M]  drivers/gpu/drm/i915/display/intel_hotplug.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/athub_v1_0.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgp100.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/athub_v2_0.o
  CC [M]  drivers/gpu/drm/i915/display/intel_hti.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramga102.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/athub_v2_1.o
  CC [M]  drivers/gpu/drm/i915/display/intel_lpe_audio.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/sddr2.o
  CC [M]  drivers/gpu/drm/i915/display/intel_modeset_verify.o
  CC [M]  drivers/gpu/drm/i915/display/intel_modeset_setup.o
  CC [M]  drivers/gpu/drm/i915/display/intel_overlay.o
  CC [M]  drivers/gpu/drm/i915/display/intel_pch_display.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/athub_v3_0.o
  CC [M]  drivers/gpu/drm/i915/display/intel_pch_refclk.o
  CC [M]  drivers/gpu/drm/i915/display/intel_plane_initial.o
  CC [M]  drivers/gpu/drm/i915/display/intel_psr.o
  CC [M]  drivers/gpu/drm/i915/display/intel_quirks.o
  CC [M]  drivers/gpu/drm/i915/display/intel_sprite.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/smuio_v9_0.o
  CC [M]  drivers/gpu/drm/i915/display/intel_sprite_uapi.o
  CC [M]  drivers/gpu/drm/i915/display/intel_tc.o
  CC [M]  drivers/gpu/drm/i915/display/intel_vblank.o
  CC [M]  drivers/gpu/drm/i915/display/intel_vga.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/sddr3.o
  CC [M]  drivers/gpu/drm/i915/display/intel_wm.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/smuio_v11_0.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/smuio_v11_0_6.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/gddr3.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/smuio_v13_0.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/smuio_v13_0_6.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_reset.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fb/gddr5.o
  CC [M]  drivers/gpu/drm/i915/display/i9xx_plane.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fuse/base.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/mca_v3_0.o
  CC [M]  drivers/gpu/drm/i915/display/i9xx_wm.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_module.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_device.o
  CC [M]  drivers/gpu/drm/i915/display/skl_scaler.o
  CC [M]  drivers/gpu/drm/i915/display/skl_universal_plane.o
  CC [M]  drivers/gpu/drm/i915/display/skl_watermark.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_chardev.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_topology.o
  CC [M]  drivers/gpu/drm/i915/display/intel_acpi.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_pasid.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fuse/nv50.o
  CC [M]  drivers/gpu/drm/i915/display/intel_opregion.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fuse/gf100.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/fuse/gm107.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_doorbell.o
  CC [M]  drivers/gpu/drm/i915/display/intel_fbdev.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/gpio/base.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/gpio/nv10.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_flat_memory.o
  CC [M]  drivers/gpu/drm/i915/display/dvo_ch7017.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_process.o
  CC [M]  drivers/gpu/drm/i915/display/dvo_ch7xxx.o
  CC [M]  drivers/gpu/drm/i915/display/dvo_ivch.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/gpio/nv50.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_queue.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_mqd_manager.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_mqd_manager_cik.o
  CC [M]  drivers/gpu/drm/i915/display/dvo_ns2501.o
  CC [M]  drivers/gpu/drm/i915/display/dvo_sil164.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_mqd_manager_vi.o
  CC [M]  drivers/gpu/drm/i915/display/dvo_tfp410.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/gpio/g94.o
  CC [M]  drivers/gpu/drm/i915/display/g4x_dp.o
  CC [M]  drivers/gpu/drm/i915/display/g4x_hdmi.o
  CC [M]  drivers/gpu/drm/i915/display/icl_dsi.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_mqd_manager_v9.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_mqd_manager_v10.o
  CC [M]  drivers/gpu/drm/i915/display/intel_backlight.o
  CC [M]  drivers/gpu/drm/i915/display/intel_crt.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_mqd_manager_v11.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_kernel_queue.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/gpio/gf119.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_packet_manager.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/gpio/gk104.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/gpio/ga102.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/gsp/base.o
  CC [M]  drivers/gpu/drm/i915/display/intel_ddi.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/gsp/gv100.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_packet_manager_vi.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_packet_manager_v9.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/gsp/ga102.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_process_queue_manager.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/base.o
  CC [M]  drivers/gpu/drm/i915/display/intel_ddi_buf_trans.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_device_queue_manager.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_device_queue_manager_cik.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_device_queue_manager_vi.o
  CC [M]  drivers/gpu/drm/i915/display/intel_display_trace.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_device_queue_manager_v9.o
  CC [M]  drivers/gpu/drm/i915/display/intel_dkl_phy.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_device_queue_manager_v10.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_device_queue_manager_v11.o
  CC [M]  drivers/gpu/drm/i915/display/intel_dp.o
  CC [M]  drivers/gpu/drm/i915/display/intel_dp_aux.o
  CC [M]  drivers/gpu/drm/i915/display/intel_dp_aux_backlight.o
  CC [M]  drivers/gpu/drm/i915/display/intel_dp_hdcp.o
  CC [M]  drivers/gpu/drm/i915/display/intel_dp_link_training.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_interrupt.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_events.o
  CC [M]  drivers/gpu/drm/i915/display/intel_dp_mst.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/cik_event_interrupt.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/nv04.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/nv4e.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/nv50.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/g94.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_int_process_v9.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/gf117.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_int_process_v11.o
  CC [M]  drivers/gpu/drm/i915/display/intel_dsi.o
  CC [M]  drivers/gpu/drm/i915/display/intel_dsi_dcs_backlight.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/gf119.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_smi_events.o
  CC [M]  drivers/gpu/drm/i915/display/intel_dsi_vbt.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_crat.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_debugfs.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_svm.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../amdkfd/kfd_migrate.o
  CC [M]  drivers/gpu/drm/i915/display/intel_dvo.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_fence.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/gk104.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.o
  CC [M]  drivers/gpu/drm/i915/display/intel_gmbus.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v8.o
  CC [M]  drivers/gpu/drm/i915/display/intel_hdmi.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.o
  CC [M]  drivers/gpu/drm/i915/display/intel_lspcon.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_arcturus.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/gk110.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_aldebaran.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/gm200.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/pad.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/padnv04.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/padnv4e.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/padnv50.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v10_3.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v11.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.o
  CC [M]  drivers/gpu/drm/i915/display/intel_lvds.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_job.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_acp.o
  CC [M]  drivers/gpu/drm/i915/display/intel_panel.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../acp/acp_hw.o
  CC [M]  drivers/gpu/drm/i915/display/intel_pps.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_ioc32.o
  CC [M]  drivers/gpu/drm/i915/display/intel_qp_tables.o
  CC [M]  drivers/gpu/drm/i915/display/intel_sdvo.o
  CC [M]  drivers/gpu/drm/i915/display/intel_snps_phy.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/amdgpu_hmm.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/swsmu/smu11/arcturus_ppt.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/swsmu/smu11/navi10_ppt.o
  CC [M]  drivers/gpu/drm/i915/display/intel_tv.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/swsmu/smu11/sienna_cichlid_ppt.o
  CC [M]  drivers/gpu/drm/i915/display/intel_vdsc.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/swsmu/smu11/vangogh_ppt.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/padg94.o
  CC [M]  drivers/gpu/drm/i915/display/intel_vrr.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/padgf119.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/swsmu/smu11/cyan_skillfish_ppt.o
  CC [M]  drivers/gpu/drm/i915/display/vlv_dsi.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/padgm200.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/bus.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/busnv04.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/swsmu/smu11/smu_v11_0.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/busnv4e.o
  CC [M]  drivers/gpu/drm/i915/display/vlv_dsi_pll.o
  CC [M]  drivers/gpu/drm/i915/i915_perf.o
  CC [M]  drivers/gpu/drm/i915/pxp/intel_pxp.o
  CC [M]  drivers/gpu/drm/i915/pxp/intel_pxp_tee.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/swsmu/smu12/renoir_ppt.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/swsmu/smu12/smu_v12_0.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/busnv50.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/swsmu/smu13/smu_v13_0.o
  CC [M]  drivers/gpu/drm/i915/pxp/intel_pxp_huc.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/swsmu/smu13/aldebaran_ppt.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/swsmu/smu13/yellow_carp_ppt.o
  CC [M]  drivers/gpu/drm/i915/pxp/intel_pxp_cmd.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/swsmu/smu13/smu_v13_0_0_ppt.o
  CC [M]  drivers/gpu/drm/i915/pxp/intel_pxp_debugfs.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/busgf119.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/swsmu/smu13/smu_v13_0_4_ppt.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/swsmu/smu13/smu_v13_0_5_ppt.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/swsmu/smu13/smu_v13_0_7_ppt.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/bit.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/swsmu/smu13/smu_v13_0_6_ppt.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/aux.o
  CC [M]  drivers/gpu/drm/i915/pxp/intel_pxp_irq.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxg94.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/swsmu/amdgpu_smu.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/swsmu/smu_cmn.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxgf119.o
  CC [M]  drivers/gpu/drm/i915/pxp/intel_pxp_pm.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/auxgm200.o
  CC [M]  drivers/gpu/drm/i915/pxp/intel_pxp_session.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/i2c/anx9805.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/smumgr/smumgr.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/smumgr/smu8_smumgr.o
  CC [M]  drivers/gpu/drm/i915/i915_gpu_error.o
  CC [M]  drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/iccsense/base.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/smumgr/tonga_smumgr.o
  CC [M]  drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.o
  CC [M]  drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/iccsense/gf100.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/smumgr/fiji_smumgr.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/instmem/base.o
  CC [M]  drivers/gpu/drm/i915/selftests/i915_random.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/smumgr/polaris10_smumgr.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/instmem/nv04.o
  CC [M]  drivers/gpu/drm/i915/selftests/i915_selftest.o
  CC [M]  drivers/gpu/drm/i915/selftests/igt_atomic.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/smumgr/iceland_smumgr.o
  CC [M]  drivers/gpu/drm/i915/selftests/igt_flush_test.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/smumgr/smu7_smumgr.o
  CC [M]  drivers/gpu/drm/i915/selftests/igt_live_test.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/instmem/nv40.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/instmem/nv50.o
  CC [M]  drivers/gpu/drm/i915/selftests/igt_mmap.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/smumgr/vega10_smumgr.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.o
  CC [M]  drivers/gpu/drm/i915/selftests/igt_reset.o
  CC [M]  drivers/gpu/drm/i915/selftests/igt_spinner.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/smumgr/smu10_smumgr.o
  CC [M]  drivers/gpu/drm/i915/selftests/librapl.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/ltc/base.o
  CC [M]  drivers/gpu/drm/i915/i915_vgpu.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/smumgr/ci_smumgr.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/smumgr/vega12_smumgr.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/smumgr/vegam_smumgr.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/smumgr/smu9_smumgr.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/smumgr/vega20_smumgr.o
  HDRTEST drivers/gpu/drm/i915/display/intel_dkl_phy_regs.h
  HDRTEST drivers/gpu/drm/i915/display/intel_crtc_state_dump.h
  HDRTEST drivers/gpu/drm/i915/display/hsw_ips.h
  HDRTEST drivers/gpu/drm/i915/display/g4x_hdmi.h
  HDRTEST drivers/gpu/drm/i915/display/intel_hdcp_regs.h
  HDRTEST drivers/gpu/drm/i915/display/intel_overlay.h
  HDRTEST drivers/gpu/drm/i915/display/intel_display.h
  HDRTEST drivers/gpu/drm/i915/display/intel_dmc.h
  HDRTEST drivers/gpu/drm/i915/display/intel_vga.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/hwmgr.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/processpptables.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/hardwaremanager.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/smu8_hwmgr.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/ltc/gf100.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/ltc/gk104.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/ltc/gm107.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/pppcielanes.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/ltc/gm200.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/ltc/gp100.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/ltc/gp102.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/process_pptables_v1_0.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/ltc/gp10b.o
  HDRTEST drivers/gpu/drm/i915/display/intel_audio.h
  HDRTEST drivers/gpu/drm/i915/display/intel_lvds.h
  HDRTEST drivers/gpu/drm/i915/display/intel_modeset_setup.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/ppatomctrl.o
  HDRTEST drivers/gpu/drm/i915/display/intel_cdclk.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/ppatomfwctrl.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/smu7_hwmgr.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/ltc/ga102.o
  HDRTEST drivers/gpu/drm/i915/display/intel_display_limits.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mc/base.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/smu7_powertune.o
  HDRTEST drivers/gpu/drm/i915/display/intel_hotplug.h
  HDRTEST drivers/gpu/drm/i915/display/intel_dkl_phy.h
  HDRTEST drivers/gpu/drm/i915/display/intel_atomic.h
  HDRTEST drivers/gpu/drm/i915/display/intel_dpll.h
  HDRTEST drivers/gpu/drm/i915/display/vlv_dsi_pll_regs.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/smu7_thermal.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/smu7_clockpowergating.o
  HDRTEST drivers/gpu/drm/i915/display/intel_dp_mst.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mc/nv04.o
  HDRTEST drivers/gpu/drm/i915/display/g4x_dp.h
  HDRTEST drivers/gpu/drm/i915/display/intel_tc.h
  HDRTEST drivers/gpu/drm/i915/display/intel_frontbuffer.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mc/nv11.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mc/nv17.o
  HDRTEST drivers/gpu/drm/i915/display/intel_dsi_vbt.h
  HDRTEST drivers/gpu/drm/i915/display/intel_psr.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mc/nv44.o
  HDRTEST drivers/gpu/drm/i915/display/intel_crt.h
  HDRTEST drivers/gpu/drm/i915/display/intel_opregion.h
  HDRTEST drivers/gpu/drm/i915/display/intel_snps_phy_regs.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mc/nv50.o
  HDRTEST drivers/gpu/drm/i915/display/i9xx_wm.h
  HDRTEST drivers/gpu/drm/i915/display/intel_global_state.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mc/g84.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/vega10_processpptables.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/vega10_hwmgr.o
  HDRTEST drivers/gpu/drm/i915/display/intel_lpe_audio.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mc/g98.o
  HDRTEST drivers/gpu/drm/i915/display/intel_drrs.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mc/gt215.o
  HDRTEST drivers/gpu/drm/i915/display/intel_display_rps.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mc/gf100.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/vega10_powertune.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/vega10_thermal.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/smu10_hwmgr.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mc/gk104.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mc/gk20a.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mc/gp100.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mc/gp10b.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/pp_psm.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/vega12_processpptables.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/vega12_hwmgr.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mc/ga100.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/vega12_thermal.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/base.o
  HDRTEST drivers/gpu/drm/i915/display/intel_fbdev.h
  HDRTEST drivers/gpu/drm/i915/display/intel_hdmi.h
  HDRTEST drivers/gpu/drm/i915/display/intel_fdi.h
  HDRTEST drivers/gpu/drm/i915/display/intel_fb.h
  HDRTEST drivers/gpu/drm/i915/display/intel_qp_tables.h
  HDRTEST drivers/gpu/drm/i915/display/intel_vdsc.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/pp_overdriver.o
  HDRTEST drivers/gpu/drm/i915/display/intel_snps_phy.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/smu_helper.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/nv04.o
  HDRTEST drivers/gpu/drm/i915/display/intel_display_core.h
  HDRTEST drivers/gpu/drm/i915/display/vlv_dsi_pll.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/vega20_processpptables.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/nv41.o
  HDRTEST drivers/gpu/drm/i915/display/intel_dvo_dev.h
  HDRTEST drivers/gpu/drm/i915/display/intel_hdcp.h
  HDRTEST drivers/gpu/drm/i915/display/intel_sdvo_regs.h
  HDRTEST drivers/gpu/drm/i915/display/intel_pch_refclk.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/vega20_hwmgr.o
  HDRTEST drivers/gpu/drm/i915/display/intel_display_trace.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/nv44.o
  HDRTEST drivers/gpu/drm/i915/display/intel_display_power.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/vega20_powertune.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/nv50.o
  HDRTEST drivers/gpu/drm/i915/display/i9xx_plane.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/g84.o
  HDRTEST drivers/gpu/drm/i915/display/intel_dp_aux_backlight.h
  HDRTEST drivers/gpu/drm/i915/display/intel_dpll_mgr.h
  HDRTEST drivers/gpu/drm/i915/display/vlv_dsi.h
  HDRTEST drivers/gpu/drm/i915/display/intel_plane_initial.h
  HDRTEST drivers/gpu/drm/i915/display/intel_fifo_underrun.h
  HDRTEST drivers/gpu/drm/i915/display/intel_cursor.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/vega20_thermal.o
  HDRTEST drivers/gpu/drm/i915/display/vlv_dsi_regs.h
  HDRTEST drivers/gpu/drm/i915/display/skl_scaler.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/common_baco.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/mcp77.o
  HDRTEST drivers/gpu/drm/i915/display/intel_hti.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/gf100.o
  HDRTEST drivers/gpu/drm/i915/display/icl_dsi_regs.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/vega10_baco.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/vega20_baco.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/vega12_baco.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/gk104.o
  HDRTEST drivers/gpu/drm/i915/display/intel_atomic_plane.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/gk20a.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/smu9_baco.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/tonga_baco.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/gm200.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/polaris_baco.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/gm20b.o
  HDRTEST drivers/gpu/drm/i915/display/skl_watermark.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/fiji_baco.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/ci_baco.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/hwmgr/smu7_baco.o
  HDRTEST drivers/gpu/drm/i915/display/intel_fbc.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/gp100.o
  HDRTEST drivers/gpu/drm/i915/display/intel_display_reg_defs.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/gp10b.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/powerplay/amd_powerplay.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/gv100.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/legacy-dpm/legacy_dpm.o
  HDRTEST drivers/gpu/drm/i915/display/intel_acpi.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/tu102.o
  HDRTEST drivers/gpu/drm/i915/display/intel_connector.h
  HDRTEST drivers/gpu/drm/i915/display/intel_dpt.h
  HDRTEST drivers/gpu/drm/i915/display/intel_quirks.h
  HDRTEST drivers/gpu/drm/i915/display/intel_dp_link_training.h
  HDRTEST drivers/gpu/drm/i915/display/intel_color.h
  HDRTEST drivers/gpu/drm/i915/display/intel_crtc.h
  HDRTEST drivers/gpu/drm/i915/display/intel_display_debugfs.h
  HDRTEST drivers/gpu/drm/i915/display/intel_modeset_verify.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/mem.o
  HDRTEST drivers/gpu/drm/i915/display/intel_display_power_well.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/legacy-dpm/kv_dpm.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/legacy-dpm/kv_smc.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/memnv04.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/legacy-dpm/si_dpm.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/memnv50.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/memgf100.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/legacy-dpm/si_smc.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/amdgpu_dpm.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/amdgpu_pm.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../pm/amdgpu_dpm_internal.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/amdgpu_dm.o
  HDRTEST drivers/gpu/drm/i915/display/intel_wm.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmnv04.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmnv41.o
  HDRTEST drivers/gpu/drm/i915/display/intel_pipe_crc.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/amdgpu_dm_plane.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmnv44.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmnv50.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmmcp77.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/amdgpu_dm_crtc.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgf100.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/amdgpu_dm_irq.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgk104.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/amdgpu_dm_mst_types.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgk20a.o
  HDRTEST drivers/gpu/drm/i915/display/intel_audio_regs.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/amdgpu_dm_color.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/dc_fpu.o
  HDRTEST drivers/gpu/drm/i915/display/intel_panel.h
  HDRTEST drivers/gpu/drm/i915/display/intel_sprite.h
  HDRTEST drivers/gpu/drm/i915/display/intel_wm_types.h
  HDRTEST drivers/gpu/drm/i915/display/intel_tv.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgm200.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgm20b.o
  HDRTEST drivers/gpu/drm/i915/display/intel_hti_regs.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/amdgpu_dm_services.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/amdgpu_dm_helpers.o
  HDRTEST drivers/gpu/drm/i915/display/intel_vrr.h
  HDRTEST drivers/gpu/drm/i915/display/skl_universal_plane.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/amdgpu_dm_pp_smu.o
  HDRTEST drivers/gpu/drm/i915/display/intel_mg_phy_regs.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/amdgpu_dm_psr.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/amdgpu_dm_hdcp.o
  HDRTEST drivers/gpu/drm/i915/display/intel_bw.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.o
  HDRTEST drivers/gpu/drm/i915/display/intel_de.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/amdgpu_dm_crc.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/amdgpu_dm_debugfs.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp10b.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgv100.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmtu102.o
  HDRTEST drivers/gpu/drm/i915/display/intel_lvds_regs.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/basics/conversion.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/basics/fixpt31_32.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/basics/vector.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/basics/dc_common.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/bios/bios_parser.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/bios/bios_parser_interface.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/umem.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/bios/bios_parser_helper.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/ummu.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mmu/uvmm.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/bios/command_table.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mxm/base.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mxm/mxms.o
  HDRTEST drivers/gpu/drm/i915/display/intel_gmbus_regs.h
  HDRTEST drivers/gpu/drm/i915/display/intel_dsi_dcs_backlight.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/bios/command_table_helper.o
  HDRTEST drivers/gpu/drm/i915/display/intel_dvo.h
  HDRTEST drivers/gpu/drm/i915/display/intel_sdvo.h
  HDRTEST drivers/gpu/drm/i915/display/intel_dp_aux.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/bios/bios_parser_common.o
  HDRTEST drivers/gpu/drm/i915/display/intel_vdsc_regs.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/bios/command_table2.o
  HDRTEST drivers/gpu/drm/i915/display/intel_combo_phy.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/mxm/nv50.o
  HDRTEST drivers/gpu/drm/i915/display/intel_dvo_regs.h
  HDRTEST drivers/gpu/drm/i915/display/intel_gmbus.h
  HDRTEST drivers/gpu/drm/i915/display/intel_hdcp_gsc.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/bios/command_table_helper2.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pci/agp.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pci/base.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pci/pcie.o
  HDRTEST drivers/gpu/drm/i915/display/intel_dsi.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pci/nv04.o
  HDRTEST drivers/gpu/drm/i915/display/intel_dmc_regs.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pci/nv40.o
  HDRTEST drivers/gpu/drm/i915/display/intel_ddi.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/bios/bios_parser2.o
  HDRTEST drivers/gpu/drm/i915/display/intel_dsb.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/bios/dce60/command_table_helper_dce60.o
  HDRTEST drivers/gpu/drm/i915/display/intel_bios.h
  HDRTEST drivers/gpu/drm/i915/display/intel_pch_display.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/bios/dce80/command_table_helper_dce80.o
  HDRTEST drivers/gpu/drm/i915/display/intel_display_types.h
  HDRTEST drivers/gpu/drm/i915/display/intel_backlight.h
  HDRTEST drivers/gpu/drm/i915/display/intel_vblank.h
  HDRTEST drivers/gpu/drm/i915/display/intel_dp.h
  HDRTEST drivers/gpu/drm/i915/display/intel_backlight_regs.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pci/nv46.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/bios/dce110/command_table_helper_dce110.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pci/nv4c.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/bios/dce112/command_table_helper_dce112.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/bios/dce112/command_table_helper2_dce112.o
  HDRTEST drivers/gpu/drm/i915/display/intel_combo_phy_regs.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/calcs/dce_calcs.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/calcs/custom_float.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pci/g84.o
  HDRTEST drivers/gpu/drm/i915/display/intel_display_power_map.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/calcs/bw_fixed.o
  HDRTEST drivers/gpu/drm/i915/display/intel_ddi_buf_trans.h
  HDRTEST drivers/gpu/drm/i915/display/icl_dsi.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pci/g92.o
  HDRTEST drivers/gpu/drm/i915/display/intel_lspcon.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pci/g94.o
  HDRTEST drivers/gpu/drm/i915/display/intel_dpio_phy.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/display_mode_lib.o
  HDRTEST drivers/gpu/drm/i915/display/intel_dp_hdcp.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/display_rq_dlg_helpers.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pci/gf100.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pci/gf106.o
  HDRTEST drivers/gpu/drm/i915/display/intel_fb_pin.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pci/gk104.o
  HDRTEST drivers/gpu/drm/i915/display/intel_pps.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dml1_display_rq_dlg_calc.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pci/gp100.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pmu/memx.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn10/dcn10_fpu.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gt215.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn20/dcn20_fpu.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/display_mode_vba.o
  HDRTEST drivers/gpu/drm/i915/display/intel_sprite_uapi.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn20/display_rq_dlg_calc_20.o
  HDRTEST drivers/gpu/drm/i915/gem/i915_gem_ttm.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gf100.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gf119.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn20/display_mode_vba_20.o
  HDRTEST drivers/gpu/drm/i915/gem/i915_gem_region.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gk104.o
  HDRTEST drivers/gpu/drm/i915/gem/i915_gem_context_types.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn20/display_rq_dlg_calc_20v2.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gk110.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn20/display_mode_vba_20v2.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn21/display_rq_dlg_calc_21.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn21/display_mode_vba_21.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gk208.o
  HDRTEST drivers/gpu/drm/i915/gem/i915_gem_lmem.h
  HDRTEST drivers/gpu/drm/i915/gem/i915_gem_mman.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn30/dcn30_fpu.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gk20a.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm107.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn30/display_mode_vba_30.o
  HDRTEST drivers/gpu/drm/i915/gem/i915_gem_object_types.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn30/display_rq_dlg_calc_30.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn31/display_mode_vba_31.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm200.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn31/display_rq_dlg_calc_31.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn314/display_mode_vba_314.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn314/display_rq_dlg_calc_314.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp102.o
  HDRTEST drivers/gpu/drm/i915/gem/i915_gem_context.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/privring/gf100.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/privring/gf117.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn32/display_mode_vba_32.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/privring/gk104.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/privring/gk20a.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn32/display_rq_dlg_calc_32.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn32/display_mode_vba_util_32.o
  HDRTEST drivers/gpu/drm/i915/gem/i915_gem_clflush.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn31/dcn31_fpu.o
  HDRTEST drivers/gpu/drm/i915/gem/i915_gem_tiling.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn32/dcn32_fpu.o
  HDRTEST drivers/gpu/drm/i915/gem/i915_gem_stolen.h
  HDRTEST drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.h
  HDRTEST drivers/gpu/drm/i915/gem/i915_gem_create.h
  HDRTEST drivers/gpu/drm/i915/gem/i915_gem_ttm_move.h
  HDRTEST drivers/gpu/drm/i915/gem/i915_gem_ioctls.h
  HDRTEST drivers/gpu/drm/i915/gem/i915_gem_domain.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/privring/gm200.o
  HDRTEST drivers/gpu/drm/i915/gem/i915_gem_internal.h
  HDRTEST drivers/gpu/drm/i915/gem/i915_gem_dmabuf.h
  HDRTEST drivers/gpu/drm/i915/gem/selftests/mock_context.h
  HDRTEST drivers/gpu/drm/i915/gem/selftests/huge_gem_object.h
  HDRTEST drivers/gpu/drm/i915/gem/selftests/mock_gem_object.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/privring/gp10b.o
  HDRTEST drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn321/dcn321_fpu.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn301/dcn301_fpu.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/therm/base.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/therm/fan.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/therm/fannil.o
  HDRTEST drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/therm/fanpwm.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/therm/fantog.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/therm/ic.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/therm/temp.o
  HDRTEST drivers/gpu/drm/i915/gem/i915_gem_userptr.h
  HDRTEST drivers/gpu/drm/i915/gem/i915_gem_pm.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn302/dcn302_fpu.o
  HDRTEST drivers/gpu/drm/i915/gem/i915_gem_shrinker.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/therm/nv40.o
  HDRTEST drivers/gpu/drm/i915/gem/i915_gemfs.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/therm/nv50.o
  HDRTEST drivers/gpu/drm/i915/gem/i915_gem_object.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/therm/g84.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/therm/gt215.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_timeline_types.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/therm/gf100.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn303/dcn303_fpu.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/therm/gf119.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dcn314/dcn314_fpu.o
  HDRTEST drivers/gpu/drm/i915/gt/selftest_engine.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_breadcrumbs.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_engine_heartbeat.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/therm/gk104.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/dsc/rc_calc_fpu.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_context_types.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/therm/gm107.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/therm/gm200.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/calcs/dcn_calcs.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_execlists_submission.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_gt_pm.h
  HDRTEST drivers/gpu/drm/i915/gt/selftest_rc6.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_llc_types.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_gt.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_region_lmem.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/calcs/dcn_calc_math.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/therm/gp100.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_gt_requests.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_ggtt_gmch.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/timer/base.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dml/calcs/dcn_calc_auto.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/timer/nv04.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/clk_mgr/clk_mgr.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/clk_mgr/dce60/dce60_clk_mgr.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/clk_mgr/dce100/dce_clk_mgr.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/timer/nv40.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/clk_mgr/dce110/dce110_clk_mgr.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/timer/nv41.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/timer/gk20a.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_gt_print.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/clk_mgr/dce112/dce112_clk_mgr.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/clk_mgr/dce120/dce120_clk_mgr.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/clk_mgr/dcn10/rv1_clk_mgr.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/clk_mgr/dcn10/rv1_clk_mgr_vbios_smu.o
  HDRTEST drivers/gpu/drm/i915/gt/gen8_ppgtt.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/top/base.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/top/gk104.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/top/ga100.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_gt_mcr.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/clk_mgr/dcn10/rv2_clk_mgr.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_timeline.h
  HDRTEST drivers/gpu/drm/i915/gt/gen6_engine_cs.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_gt_pm_debugfs.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_workarounds_types.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/clk_mgr/dcn20/dcn20_clk_mgr.o
  HDRTEST drivers/gpu/drm/i915/gt/selftest_rps.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/clk_mgr/dcn201/dcn201_clk_mgr.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_sa_media.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/vfn/base.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_gt_debugfs.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_gt_clock_utils.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/vfn/uvfn.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/clk_mgr/dcn21/rn_clk_mgr.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/vfn/gv100.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/vfn/tu102.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_rps_types.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/clk_mgr/dcn21/rn_clk_mgr_vbios_smu.o
  HDRTEST drivers/gpu/drm/i915/gt/selftest_engine_heartbeat.h
  HDRTEST drivers/gpu/drm/i915/gt/sysfs_engines.h
  HDRTEST drivers/gpu/drm/i915/gt/gen7_renderclear.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/vfn/ga100.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_context.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_wopcm.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/volt/base.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/clk_mgr/dcn30/dcn30_clk_mgr.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_mocs.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_engine_pm.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_gt_sysfs.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/clk_mgr/dcn30/dcn30_clk_mgr_smu_msg.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/volt/gpio.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/volt/nv40.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/volt/gf100.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/volt/gf117.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_rc6.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_ring_types.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_workarounds.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_engine_regs.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/clk_mgr/dcn301/vg_clk_mgr.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_gt_pm_irq.h
  HDRTEST drivers/gpu/drm/i915/gt/shmem_utils.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_engine.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_reset_types.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/clk_mgr/dcn301/dcn301_smu.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/clk_mgr/dcn31/dcn31_smu.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_gt_regs.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/clk_mgr/dcn31/dcn31_clk_mgr.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/volt/gk104.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/clk_mgr/dcn314/dcn314_smu.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/clk_mgr/dcn314/dcn314_clk_mgr.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/volt/gk20a.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/clk_mgr/dcn315/dcn315_smu.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/subdev/volt/gm20b.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/clk_mgr/dcn315/dcn315_clk_mgr.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/falcon.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_reset.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/xtensa.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/clk_mgr/dcn316/dcn316_smu.o
  HDRTEST drivers/gpu/drm/i915/gt/uc/intel_gsc_fw.h
  HDRTEST drivers/gpu/drm/i915/gt/uc/guc_capture_fwif.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/clk_mgr/dcn316/dcn316_clk_mgr.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/clk_mgr/dcn32/dcn32_clk_mgr.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/bsp/g84.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dce/dce_audio.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/ce/gt215.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dce/dce_stream_encoder.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dce/dce_link_encoder.o
  HDRTEST drivers/gpu/drm/i915/gt/uc/intel_uc.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/ce/gf100.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dce/dce_hwseq.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/ce/gk104.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dce/dce_mem_input.o
  HDRTEST drivers/gpu/drm/i915/gt/uc/intel_uc_fw_abi.h
  HDRTEST drivers/gpu/drm/i915/gt/uc/intel_guc_print.h
  HDRTEST drivers/gpu/drm/i915/gt/uc/intel_guc_fw.h
  HDRTEST drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/ce/gm107.o
  HDRTEST drivers/gpu/drm/i915/gt/uc/abi/guc_klvs_abi.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dce/dce_clock_source.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dce/dce_scl_filters.o
  HDRTEST drivers/gpu/drm/i915/gt/uc/abi/guc_errors_abi.h
  HDRTEST drivers/gpu/drm/i915/gt/uc/abi/guc_actions_slpc_abi.h
  HDRTEST drivers/gpu/drm/i915/gt/uc/abi/guc_communication_mmio_abi.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dce/dce_transform.o
  HDRTEST drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h
  HDRTEST drivers/gpu/drm/i915/gt/uc/abi/guc_communication_ctb_abi.h
  HDRTEST drivers/gpu/drm/i915/gt/uc/abi/guc_messages_abi.h
  HDRTEST drivers/gpu/drm/i915/gt/uc/intel_gsc_uc_heci_cmd_submit.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dce/dce_opp.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dce/dce_dmcu.o
  HDRTEST drivers/gpu/drm/i915/gt/uc/intel_guc_reg.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/ce/gm200.o
  HDRTEST drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dce/dce_abm.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/ce/gp100.o
  HDRTEST drivers/gpu/drm/i915/gt/uc/intel_huc.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/ce/gp102.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dce/dce_ipp.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/ce/gv100.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dce/dce_aux.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dce/dce_i2c.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dce/dce_i2c_hw.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dce/dce_i2c_sw.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dce/dmub_psr.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/ce/tu102.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/ce/ga100.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/ce/ga102.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/cipher/g84.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dce/dmub_abm.o
  HDRTEST drivers/gpu/drm/i915/gt/uc/intel_guc.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dce/dce_panel_cntl.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/device/acpi.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dce/dmub_hw_lock_mgr.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/device/base.o
  HDRTEST drivers/gpu/drm/i915/gt/uc/intel_huc_fw.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/device/ctrl.o
  HDRTEST drivers/gpu/drm/i915/gt/uc/intel_guc_fwif.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dce/dmub_outbox.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/gpio_base.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/gpio_service.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/hw_factory.o
  HDRTEST drivers/gpu/drm/i915/gt/uc/intel_guc_capture.h
  HDRTEST drivers/gpu/drm/i915/gt/uc/intel_guc_log_debugfs.h
  HDRTEST drivers/gpu/drm/i915/gt/uc/intel_guc_submission.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/device/pci.o
  HDRTEST drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_types.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/hw_gpio.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/device/user.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/hw_hpd.o
  HDRTEST drivers/gpu/drm/i915/gt/uc/intel_guc_log.h
  HDRTEST drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h
  HDRTEST drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/hw_ddc.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/hw_generic.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/base.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/chan.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/hw_translate.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/dce60/hw_translate_dce60.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/dce60/hw_factory_dce60.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/dce80/hw_translate_dce80.o
  HDRTEST drivers/gpu/drm/i915/gt/uc/intel_uc_fw.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/conn.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.o
  HDRTEST drivers/gpu/drm/i915/gt/uc/intel_guc_ads.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/dce80/hw_factory_dce80.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/hdmi.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/head.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/dce110/hw_translate_dce110.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/dce110/hw_factory_dce110.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/ior.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/dce120/hw_translate_dce120.o
  HDRTEST drivers/gpu/drm/i915/gt/uc/intel_uc_debugfs.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/dce120/hw_factory_dce120.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/dcn10/hw_translate_dcn10.o
  HDRTEST drivers/gpu/drm/i915/gt/uc/intel_guc_rc.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/dcn10/hw_factory_dcn10.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/vga.o
  HDRTEST drivers/gpu/drm/i915/gt/uc/intel_huc_debugfs.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/nv04.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/dcn20/hw_translate_dcn20.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/dcn20/hw_factory_dcn20.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_hwconfig.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/nv50.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_llc.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/dcn21/hw_translate_dcn21.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/g84.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/dcn21/hw_factory_dcn21.o
  HDRTEST drivers/gpu/drm/i915/gt/gen8_engine_cs.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_sseu_debugfs.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_rc6_types.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/dcn30/hw_translate_dcn30.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/dcn30/hw_factory_dcn30.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/g94.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_context_param.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/dcn315/hw_translate_dcn315.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/gt200.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/dcn315/hw_factory_dcn315.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/dcn32/hw_translate_dcn32.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_gpu_commands.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/mcp77.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_engine_user.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_gt_irq.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_gsc.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_rps.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/gpio/dcn32/hw_factory_dcn32.o
  HDRTEST drivers/gpu/drm/i915/gt/selftest_llc.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/irq/irq_service.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/gt215.o
  HDRTEST drivers/gpu/drm/i915/gt/gen6_ppgtt.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/mcp89.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/irq/dce60/irq_service_dce60.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/gf119.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/gk104.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/gk110.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/irq/dce80/irq_service_dce80.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_ggtt_fencing.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/irq/dce110/irq_service_dce110.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_migrate_types.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/irq/dce120/irq_service_dce120.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/irq/dcn10/irq_service_dcn10.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/irq/dcn20/irq_service_dcn20.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/gm107.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/irq/dcn21/irq_service_dcn21.o
  HDRTEST drivers/gpu/drm/i915/gt/selftests/mock_timeline.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_lrc.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/gm200.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/irq/dcn201/irq_service_dcn201.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/irq/dcn30/irq_service_dcn30.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/gp100.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/gp102.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/irq/dcn302/irq_service_dcn302.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/irq/dcn303/irq_service_dcn303.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/gv100.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/irq/dcn31/irq_service_dcn31.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/irq/dcn314/irq_service_dcn314.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/irq/dcn315/irq_service_dcn315.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/irq/dcn32/irq_service_dcn32.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_lrc_reg.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_migrate.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/tu102.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/ga102.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_gt_sysfs_pm.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/link/link_detection.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/udisp.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/uconn.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/link/link_dpms.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/uoutp.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/link/link_factory.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/disp/uhead.o
  HDRTEST drivers/gpu/drm/i915/gt/mock_engine.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/link/link_resource.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/link/link_validation.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/dma/base.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_engine_stats.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/dma/nv04.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/link/accessories/link_dp_trace.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/link/accessories/link_dp_cts.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/link/accessories/link_fpga.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/dma/nv50.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/dma/gf100.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/link/hwss/link_hwss_dio.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/link/hwss/link_hwss_dpia.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/link/hwss/link_hwss_hpo_dp.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/link/protocols/link_hpd.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/link/protocols/link_ddc.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/dma/gf119.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/dma/gv100.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_gtt.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_gt_buffer_pool_types.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/link/protocols/link_dpcd.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/dma/user.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/link/protocols/link_dp_dpia.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/link/protocols/link_dp_training.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_ring.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/link/protocols/link_dp_training_8b_10b.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_gt_types.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_renderstate.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_sseu.h
  HDRTEST drivers/gpu/drm/i915/gt/intel_engine_types.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/dma/usernv04.o
  HDRTEST drivers/gpu/drm/i915/gt/intel_gt_engines_debugfs.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/link/protocols/link_dp_training_128b_132b.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/link/protocols/link_dp_training_dpia.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/link/protocols/link_dp_training_auxless.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/link/protocols/link_dp_training_fixed_vs_pe_retimer.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/link/protocols/link_dp_phy.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/dma/usernv50.o
  HDRTEST drivers/gpu/drm/i915/gt/gen2_engine_cs.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/dma/usergf100.o
  HDRTEST drivers/gpu/drm/i915/gvt/gvt.h
  HDRTEST drivers/gpu/drm/i915/gvt/trace.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/dma/usergf119.o
  HDRTEST drivers/gpu/drm/i915/gvt/debug.h
  HDRTEST drivers/gpu/drm/i915/gvt/edid.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/dma/usergv100.o
  HDRTEST drivers/gpu/drm/i915/gvt/page_track.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/fifo/base.o
  HDRTEST drivers/gpu/drm/i915/gvt/mmio.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/link/protocols/link_dp_capability.o
  HDRTEST drivers/gpu/drm/i915/gvt/sched_policy.h
  HDRTEST drivers/gpu/drm/i915/gvt/fb_decoder.h
  HDRTEST drivers/gpu/drm/i915/gvt/cmd_parser.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/fifo/cgrp.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/fifo/chan.o
  HDRTEST drivers/gpu/drm/i915/gvt/dmabuf.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/fifo/chid.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/fifo/runl.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/link/protocols/link_edp_panel_control.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/link/protocols/link_dp_irq_handler.o
  HDRTEST drivers/gpu/drm/i915/gvt/mmio_context.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/link/protocols/link_dp_dpia_bw.o
  HDRTEST drivers/gpu/drm/i915/gvt/display.h
  HDRTEST drivers/gpu/drm/i915/gvt/gtt.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/virtual/virtual_link_encoder.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/virtual/virtual_stream_encoder.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/fifo/runq.o
  HDRTEST drivers/gpu/drm/i915/gvt/scheduler.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/virtual/virtual_link_hwss.o
  HDRTEST drivers/gpu/drm/i915/gvt/reg.h
  HDRTEST drivers/gpu/drm/i915/gvt/execlist.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/fifo/nv04.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dsc/dc_dsc.o
  HDRTEST drivers/gpu/drm/i915/gvt/interrupt.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dsc/rc_calc.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dsc/rc_calc_dpi.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn20/dcn20_resource.o
  HDRTEST drivers/gpu/drm/i915/i915_active.h
  HDRTEST drivers/gpu/drm/i915/i915_active_types.h
  HDRTEST drivers/gpu/drm/i915/i915_cmd_parser.h
  HDRTEST drivers/gpu/drm/i915/i915_config.h
  HDRTEST drivers/gpu/drm/i915/i915_debugfs.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/fifo/nv10.o
  HDRTEST drivers/gpu/drm/i915/i915_debugfs_params.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/fifo/nv17.o
  HDRTEST drivers/gpu/drm/i915/i915_deps.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/fifo/nv40.o
  HDRTEST drivers/gpu/drm/i915/i915_driver.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/fifo/nv50.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn20/dcn20_init.o
  HDRTEST drivers/gpu/drm/i915/i915_drm_client.h
  HDRTEST drivers/gpu/drm/i915/i915_drv.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/fifo/g84.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/fifo/g98.o
  HDRTEST drivers/gpu/drm/i915/i915_file_private.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn20/dcn20_hwseq.o
  HDRTEST drivers/gpu/drm/i915/i915_fixed.h
  HDRTEST drivers/gpu/drm/i915/i915_gem.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/fifo/gf100.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn20/dcn20_dpp.o
  HDRTEST drivers/gpu/drm/i915/i915_gem_evict.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn20/dcn20_dpp_cm.o
  HDRTEST drivers/gpu/drm/i915/i915_gem_gtt.h
  HDRTEST drivers/gpu/drm/i915/i915_gem_ww.h
  HDRTEST drivers/gpu/drm/i915/i915_getparam.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn20/dcn20_hubp.o
  HDRTEST drivers/gpu/drm/i915/i915_gpu_error.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn20/dcn20_mpc.o
  HDRTEST drivers/gpu/drm/i915/i915_hwmon.h
  HDRTEST drivers/gpu/drm/i915/i915_ioc32.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn20/dcn20_opp.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.o
  HDRTEST drivers/gpu/drm/i915/i915_ioctl.h
  HDRTEST drivers/gpu/drm/i915/i915_iosf_mbi.h
  HDRTEST drivers/gpu/drm/i915/i915_irq.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk110.o
  HDRTEST drivers/gpu/drm/i915/i915_memcpy.h
  HDRTEST drivers/gpu/drm/i915/i915_mitigations.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn20/dcn20_hubbub.o
  HDRTEST drivers/gpu/drm/i915/i915_mm.h
  HDRTEST drivers/gpu/drm/i915/i915_params.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn20/dcn20_optc.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk208.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn20/dcn20_mmhubbub.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn20/dcn20_stream_encoder.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn20/dcn20_link_encoder.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn20/dcn20_dccg.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk20a.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn20/dcn20_vmid.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/fifo/gm107.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/fifo/gm200.o
  HDRTEST drivers/gpu/drm/i915/i915_pci.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn20/dcn20_dwb.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn20/dcn20_dwb_scl.o
  HDRTEST drivers/gpu/drm/i915/i915_perf.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn20/dcn20_dsc.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/fifo/gp100.o
  HDRTEST drivers/gpu/drm/i915/i915_perf_oa_regs.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn10/dcn10_init.o
  HDRTEST drivers/gpu/drm/i915/i915_perf_types.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn10/dcn10_resource.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn10/dcn10_ipp.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/fifo/gv100.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/fifo/tu102.o
  HDRTEST drivers/gpu/drm/i915/i915_pmu.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga100.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/fifo/ga102.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/fifo/ucgrp.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn10/dcn10_hw_sequencer.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/fifo/uchan.o
  HDRTEST drivers/gpu/drm/i915/i915_priolist_types.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/base.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn10/dcn10_hw_sequencer_debug.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/nv04.o
  HDRTEST drivers/gpu/drm/i915/i915_pvinfo.h
  HDRTEST drivers/gpu/drm/i915/i915_query.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn10/dcn10_dpp.o
  HDRTEST drivers/gpu/drm/i915/i915_reg.h
  HDRTEST drivers/gpu/drm/i915/i915_reg_defs.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn10/dcn10_opp.o
  HDRTEST drivers/gpu/drm/i915/i915_request.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/nv10.o
  HDRTEST drivers/gpu/drm/i915/i915_scatterlist.h
  HDRTEST drivers/gpu/drm/i915/i915_scheduler.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn10/dcn10_optc.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/nv15.o
  HDRTEST drivers/gpu/drm/i915/i915_scheduler_types.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn10/dcn10_hubp.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/nv17.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/nv20.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn10/dcn10_mpc.o
  HDRTEST drivers/gpu/drm/i915/i915_selftest.h
  HDRTEST drivers/gpu/drm/i915/i915_suspend.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/nv25.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/nv2a.o
  HDRTEST drivers/gpu/drm/i915/i915_sw_fence.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn10/dcn10_dpp_dscl.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/nv30.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn10/dcn10_dpp_cm.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/nv34.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/nv35.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn10/dcn10_cm_common.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn10/dcn10_hubbub.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn10/dcn10_stream_encoder.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/nv40.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn10/dcn10_link_encoder.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn21/dcn21_init.o
  HDRTEST drivers/gpu/drm/i915/i915_sw_fence_work.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/nv44.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/nv50.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn21/dcn21_hubp.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn21/dcn21_hubbub.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/g84.o
  HDRTEST drivers/gpu/drm/i915/i915_switcheroo.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/gt200.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/mcp79.o
  HDRTEST drivers/gpu/drm/i915/i915_syncmap.h
  HDRTEST drivers/gpu/drm/i915/i915_sysfs.h
  HDRTEST drivers/gpu/drm/i915/i915_tasklet.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/gt215.o
  HDRTEST drivers/gpu/drm/i915/i915_trace.h
  HDRTEST drivers/gpu/drm/i915/i915_ttm_buddy_manager.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn21/dcn21_resource.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/mcp89.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.o
  HDRTEST drivers/gpu/drm/i915/i915_user_extensions.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/gf104.o
  HDRTEST drivers/gpu/drm/i915/i915_utils.h
  HDRTEST drivers/gpu/drm/i915/i915_vgpu.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn21/dcn21_hwseq.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn21/dcn21_link_encoder.o
  HDRTEST drivers/gpu/drm/i915/i915_vma.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn21/dcn21_dccg.o
  HDRTEST drivers/gpu/drm/i915/i915_vma_resource.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/gf108.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn201/dcn201_init.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn201/dcn201_resource.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/gf110.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/gf117.o
  HDRTEST drivers/gpu/drm/i915/i915_vma_types.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn201/dcn201_hwseq.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn201/dcn201_hubbub.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn201/dcn201_mpc.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/gf119.o
  HDRTEST drivers/gpu/drm/i915/intel_device_info.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/gk104.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/gk110.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn201/dcn201_hubp.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/gk110b.o
  HDRTEST drivers/gpu/drm/i915/intel_gvt.h
  HDRTEST drivers/gpu/drm/i915/intel_mchbar_regs.h
  HDRTEST drivers/gpu/drm/i915/intel_memory_region.h
  HDRTEST drivers/gpu/drm/i915/intel_pci_config.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/gk208.o
  HDRTEST drivers/gpu/drm/i915/intel_pcode.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/gk20a.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/gm107.o
  HDRTEST drivers/gpu/drm/i915/intel_pm.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn201/dcn201_opp.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn201/dcn201_optc.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/gm200.o
  HDRTEST drivers/gpu/drm/i915/intel_region_ttm.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn201/dcn201_dpp.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn201/dcn201_dccg.o
  HDRTEST drivers/gpu/drm/i915/intel_runtime_pm.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/gm20b.o
  HDRTEST drivers/gpu/drm/i915/intel_sbi.h
  HDRTEST drivers/gpu/drm/i915/intel_step.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/gp100.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn201/dcn201_link_encoder.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn30/dcn30_init.o
  HDRTEST drivers/gpu/drm/i915/intel_uncore.h
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/gp102.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn30/dcn30_hubbub.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/gp104.o
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn30/dcn30_hubp.o
  HDRTEST drivers/gpu/drm/i915/intel_wakeref.h
  HDRTEST drivers/gpu/drm/i915/pxp/intel_pxp_tee.h
  HDRTEST drivers/gpu/drm/i915/pxp/intel_pxp_irq.h
  CC [M]  drivers/gpu/drm/amd/amdgpu/../display/dc/dcn30/dcn30_dpp.o
  CC [M]  drivers/gpu/drm/nouveau/nvkm/engine/gr/gp107.o
  HDRTEST drivers/gpu/drm/i915/px



^ permalink raw reply	[flat|nested] 21+ messages in thread

* [Intel-xe] ○ CI.BAT: info for Port Xe to use GPUVA and implement NULL VM binds (rev5)
  2023-04-04  1:42 [Intel-xe] [PATCH v5 0/8] Port Xe to use GPUVA and implement NULL VM binds Matthew Brost
                   ` (10 preceding siblings ...)
  2023-04-04  1:49 ` [Intel-xe] ✓ CI.Build: " Patchwork
@ 2023-04-04  2:09 ` Patchwork
  2023-04-04 13:20 ` [Intel-xe] [PATCH v5 0/8] Port Xe to use GPUVA and implement NULL VM binds Thomas Hellström
  12 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2023-04-04  2:09 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe

[-- Attachment #1: Type: text/plain, Size: 345 bytes --]

== Series Details ==

Series: Port Xe to use GPUVA and implement NULL VM binds (rev5)
URL   : https://patchwork.freedesktop.org/series/115217/
State : info

== Summary ==

Participating hosts:
bat-atsm-2
bat-dg2-oem2
bat-adlp-7
Missing hosts results[0]:
Results: [xe-pw-115217v5](https://intel-gfx-ci.01.org/tree/xe/xe-pw-115217v5/index.html)



[-- Attachment #2: Type: text/html, Size: 855 bytes --]

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [Intel-xe] [PATCH v5 0/8] Port Xe to use GPUVA and implement NULL VM binds
  2023-04-04  1:42 [Intel-xe] [PATCH v5 0/8] Port Xe to use GPUVA and implement NULL VM binds Matthew Brost
                   ` (11 preceding siblings ...)
  2023-04-04  2:09 ` [Intel-xe] ○ CI.BAT: info " Patchwork
@ 2023-04-04 13:20 ` Thomas Hellström
  2023-04-04 14:56   ` Matthew Brost
  12 siblings, 1 reply; 21+ messages in thread
From: Thomas Hellström @ 2023-04-04 13:20 UTC (permalink / raw)
  To: Matthew Brost, intel-xe

Hi, Matthew,


On 4/4/23 03:42, Matthew Brost wrote:
> GPUVA is common code written primarily by Danilo with the idea being a
> common place to track GPUVAs (VMAs in Xe) within an address space (VMs
> in Xe), track all the GPUVAs attached to GEMs, and a common way
> to implement VM binds / unbinds with MMAP / MUNMAP semantics via creating
> operation lists. All of this adds up to a common way to implement VK
> sparse bindings.
>
> This series pulls in the GPUVA code written by Danilo plus some small
> fixes by myself into 1 large patch. Once the GPUVA makes it upstream, we
> can rebase and drop this patch. I believe what lands upstream should be
> nearly identical to this patch at least from an API perspective.
>
> The last three patches port Xe to GPUVA and add support for NULL VM binds
> (writes dropped, read zero, VK sparse support). An example of the
> semantics of this is below.

Going through the new code in xe_vm.c I'm still concerned about our
error handling. In the cases we attempt to handle, for example an
-ENOMEM, the sync ioctl appears to be doing a fragile unwind whereas the
async worker puts the VM in an error state that can only be exited by a
RESTART ioctl (please correct me if I'm wrong here). We could of course do
the same for sync ioctls, but I'm still wondering what happens if, for
example, a MAP operation on a dual-gt VM fails after binding on the first gt?

I think we need to clearly define the desired behaviour in the uAPI and
then make sure we do the necessary changes with that in mind, and even
if I prefer the !async_worker solution, this can be done orthogonally to
that discussion. Ideally I'd like us to find a way to *not* put the VM in
an error state like this, and I figure there are various ways to aid in
this, for example keeping a progress cookie in the user-space data or
deferring any failed bind operations to the next rebind worker or exec,
but in the end I think this boils down to allocating all needed
resources up-front for operations or groups of operations that are not
allowed to fail once we start to modify the page-tables, roughly along
the lines of the sketch below.
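
A minimal sketch of that prepare / commit split, purely illustrative (none
of these helper names exist in the driver today; the point is only that
everything that can fail happens before anything touches the page-tables):

struct bind_resources;	/* hypothetical: preallocated PT pages, fences, tree nodes */

/* Phase 1: may fail with -ENOMEM etc. Nothing has been modified yet, so
 * the ioctl (sync or async) can simply report the error to user-space. */
int xe_vm_bind_prepare(struct xe_vm *vm, struct xe_vma_op *op,
		       struct bind_resources *res);

/* Phase 2: only consumes what prepare() allocated, so it cannot fail and
 * a MAP on a dual-gt VM is never left half-bound. */
void xe_vm_bind_commit(struct xe_vm *vm, struct xe_vma_op *op,
		       struct bind_resources *res);

/* Caller, for a group of operations that must succeed or fail as a whole: */
for (i = 0; i < num_ops; i++) {
	err = xe_vm_bind_prepare(vm, &ops[i], &res[i]);
	if (err)
		goto unwind_prepared;	/* trivial unwind, VM state untouched */
}
for (i = 0; i < num_ops; i++)
	xe_vm_bind_commit(vm, &ops[i], &res[i]);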

Thoughts?

/Thomas




>
> MAP 0x0000-0x8000 to NULL 	- 0x0000-0x8000 writes dropped + read zero
> MAP 0x4000-0x5000 to a GEM 	- 0x0000-0x4000, 0x5000-0x8000 writes dropped + read zero; 0x4000-0x5000 mapped to a GEM
> UNMAP 0x3000-0x6000		- 0x0000-0x3000, 0x6000-0x8000 writes dropped + read zero
> UNMAP 0x0000-0x8000		- Nothing mapped
>
> No changes to existing behavior, rather just new functionality.
>
> v2: Fix CI build failure
> v3: Export mas_preallocate, add patch to avoid rebinds
> v5: Bug fixes, rebase, xe_vma size optimizations
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>
> Danilo Krummrich (2):
>    maple_tree: split up MA_STATE() macro
>    drm: manager to keep track of GPUs VA mappings
>
> Matthew Brost (6):
>    maple_tree: Export mas_preallocate
>    drm/xe: Port Xe to GPUVA
>    drm/xe: NULL binding implementation
>    drm/xe: Avoid doing rebinds
>    drm/xe: Reduce the number list links in xe_vma
>    drm/xe: Optimize size of xe_vma allocation
>
>   Documentation/gpu/drm-mm.rst                |   31 +
>   drivers/gpu/drm/Makefile                    |    1 +
>   drivers/gpu/drm/drm_debugfs.c               |   56 +
>   drivers/gpu/drm/drm_gem.c                   |    3 +
>   drivers/gpu/drm/drm_gpuva_mgr.c             | 1890 ++++++++++++++++++
>   drivers/gpu/drm/xe/xe_bo.c                  |   10 +-
>   drivers/gpu/drm/xe/xe_bo.h                  |    1 +
>   drivers/gpu/drm/xe/xe_device.c              |    2 +-
>   drivers/gpu/drm/xe/xe_exec.c                |    4 +-
>   drivers/gpu/drm/xe/xe_gt_pagefault.c        |   29 +-
>   drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c |   14 +-
>   drivers/gpu/drm/xe/xe_guc_ct.c              |    6 +-
>   drivers/gpu/drm/xe/xe_migrate.c             |    8 +-
>   drivers/gpu/drm/xe/xe_pt.c                  |  176 +-
>   drivers/gpu/drm/xe/xe_trace.h               |   10 +-
>   drivers/gpu/drm/xe/xe_vm.c                  | 1947 +++++++++----------
>   drivers/gpu/drm/xe/xe_vm.h                  |   76 +-
>   drivers/gpu/drm/xe/xe_vm_madvise.c          |   87 +-
>   drivers/gpu/drm/xe/xe_vm_types.h            |  276 ++-
>   include/drm/drm_debugfs.h                   |   24 +
>   include/drm/drm_drv.h                       |    7 +
>   include/drm/drm_gem.h                       |   75 +
>   include/drm/drm_gpuva_mgr.h                 |  734 +++++++
>   include/linux/maple_tree.h                  |    7 +-
>   include/uapi/drm/xe_drm.h                   |    8 +
>   lib/maple_tree.c                            |    1 +
>   26 files changed, 4203 insertions(+), 1280 deletions(-)
>   create mode 100644 drivers/gpu/drm/drm_gpuva_mgr.c
>   create mode 100644 include/drm/drm_gpuva_mgr.h
>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [Intel-xe] [PATCH v5 0/8] Port Xe to use GPUVA and implement NULL VM binds
  2023-04-04 13:20 ` [Intel-xe] [PATCH v5 0/8] Port Xe to use GPUVA and implement NULL VM binds Thomas Hellström
@ 2023-04-04 14:56   ` Matthew Brost
  2023-04-05 15:31     ` Thomas Hellström
  0 siblings, 1 reply; 21+ messages in thread
From: Matthew Brost @ 2023-04-04 14:56 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-xe

On Tue, Apr 04, 2023 at 03:20:47PM +0200, Thomas Hellström wrote:
> Hi, Matthew,
> 
> 
> On 4/4/23 03:42, Matthew Brost wrote:
> > GPUVA is common code written primarily by Danilo with the idea being a
> > common place to track GPUVAs (VMAs in Xe) within an address space (VMs
> > in Xe), track all the GPUVAs attached to GEMs, and a common way
> > to implement VM binds / unbinds with MMAP / MUNMAP semantics via creating
> > operation lists. All of this adds up to a common way to implement VK
> > sparse bindings.
> > 
> > This series pulls in the GPUVA code written by Danilo plus some small
> > fixes by myself into 1 large patch. Once the GPUVA makes it upstream, we
> > can rebase and drop this patch. I believe what lands upstream should be
> > nearly identical to this patch at least from an API perspective.
> > 
> > The last three patches port Xe to GPUVA and add support for NULL VM binds
> > (writes dropped, read zero, VK sparse support). An example of the
> > semantics of this is below.
> 
> Going through the new code in xe_vm.c I'm still concerned about our error
> handling. In the cases we attempt to handle, for example an -ENOMEM, the
> sync ioctl appears to be doing a fragile unwind whereas the async worker
> puts the VM in an error state that can only be exited by a RESTART ioctl
> (please correct me if I'm wrong here). We could of course do the same for sync
> ioctls, but I'm still wondering what happens if, for example, a MAP operation
> on a dual-gt VM fails after binding on the first gt?
> 

So in sync mode we don't allow advanced binds (more than 1 map operation),
but I failed to take dual-gt VMs into account, so yes, that might be an
issue for sync binds. This should just work for async though. Maybe we
deprecate sync mode except for unbinds while in an error state.

> I think we need to clearly define the desired behaviour in the uAPI and then
> make sure we do the necessary changes with that in mind, and even if I
> prefer the !async_worker solution, this can be done orthogonally to that
> discussion. Ideally I'd like us to find a way to *not* put the VM in an error
> state like this, and I figure there are various ways to aid in this, for
> example keeping a progress cookie in the user-space data or deferring any
> failed bind operations to the next rebind worker or exec, but in the end I
> think this boils down to allocating all needed resources up-front for
> operations or groups of operations that are not allowed to fail once we
> start to modify the page-tables.
>

Agree on clearly defined uAPI behavior; I'm not sure I have an answer
for the error handling yet. Personally I like the stop / restart mechanism,
as it essentially means binds can never fail for a well-behaved user.
With regard to allocating up front, I think that is what Nouveau is doing?
We could study that code, but allocating everything up front would be a
pretty large change.
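
To make the stop / restart contract concrete, the flow from the client's
side is roughly the following sketch (pseudo-code only; the helper names
and how the error state is reported are placeholders, not the actual
xe_drm.h uAPI):

/* Pseudo-code, not the real uAPI. */
if (vm_in_error_state(vm)) {		/* however user-space learns of the failed op */
	release_or_shrink_something();	/* e.g. free BOs to resolve the -ENOMEM */
	vm_bind_restart(vm);		/* placeholder for the RESTART ioctl */
	/* The bind worker then re-runs the failed operation and continues,
	 * so a well-behaved client never observes a bind failing outright. */
}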

Obviously we need to do some work here and this is one of our biggest
opens left but I don't view this as a true blocker until we remove force
probe. GPUVA regardless is a step in the right direction which will make
any changes going forward easier + enable VK to get sparse implemented.
I'd say let's get GPUVA merged and next focus on these issues you have
raised.

Also, another VM bind issue we need to address soon:
https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/41

Matt

> Thoughts?
> 
> /Thomas
> 
> 
> 
> 
> > 
> > MAP 0x0000-0x8000 to NULL 	- 0x0000-0x8000 writes dropped + read zero
> > MAP 0x4000-0x5000 to a GEM 	- 0x0000-0x4000, 0x5000-0x8000 writes dropped + read zero; 0x4000-0x5000 mapped to a GEM
> > UNMAP 0x3000-0x6000		- 0x0000-0x3000, 0x6000-0x8000 writes dropped + read zero
> > UNMAP 0x0000-0x8000		- Nothing mapped
> > 
> > No changes to existing behavior, rather just new functionality.
> > 
> > v2: Fix CI build failure
> > v3: Export mas_preallocate, add patch to avoid rebinds
> > v5: Bug fixes, rebase, xe_vma size optimizations
> > 
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > 
> > Danilo Krummrich (2):
> >    maple_tree: split up MA_STATE() macro
> >    drm: manager to keep track of GPUs VA mappings
> > 
> > Matthew Brost (6):
> >    maple_tree: Export mas_preallocate
> >    drm/xe: Port Xe to GPUVA
> >    drm/xe: NULL binding implementation
> >    drm/xe: Avoid doing rebinds
> >    drm/xe: Reduce the number list links in xe_vma
> >    drm/xe: Optimize size of xe_vma allocation
> > 
> >   Documentation/gpu/drm-mm.rst                |   31 +
> >   drivers/gpu/drm/Makefile                    |    1 +
> >   drivers/gpu/drm/drm_debugfs.c               |   56 +
> >   drivers/gpu/drm/drm_gem.c                   |    3 +
> >   drivers/gpu/drm/drm_gpuva_mgr.c             | 1890 ++++++++++++++++++
> >   drivers/gpu/drm/xe/xe_bo.c                  |   10 +-
> >   drivers/gpu/drm/xe/xe_bo.h                  |    1 +
> >   drivers/gpu/drm/xe/xe_device.c              |    2 +-
> >   drivers/gpu/drm/xe/xe_exec.c                |    4 +-
> >   drivers/gpu/drm/xe/xe_gt_pagefault.c        |   29 +-
> >   drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c |   14 +-
> >   drivers/gpu/drm/xe/xe_guc_ct.c              |    6 +-
> >   drivers/gpu/drm/xe/xe_migrate.c             |    8 +-
> >   drivers/gpu/drm/xe/xe_pt.c                  |  176 +-
> >   drivers/gpu/drm/xe/xe_trace.h               |   10 +-
> >   drivers/gpu/drm/xe/xe_vm.c                  | 1947 +++++++++----------
> >   drivers/gpu/drm/xe/xe_vm.h                  |   76 +-
> >   drivers/gpu/drm/xe/xe_vm_madvise.c          |   87 +-
> >   drivers/gpu/drm/xe/xe_vm_types.h            |  276 ++-
> >   include/drm/drm_debugfs.h                   |   24 +
> >   include/drm/drm_drv.h                       |    7 +
> >   include/drm/drm_gem.h                       |   75 +
> >   include/drm/drm_gpuva_mgr.h                 |  734 +++++++
> >   include/linux/maple_tree.h                  |    7 +-
> >   include/uapi/drm/xe_drm.h                   |    8 +
> >   lib/maple_tree.c                            |    1 +
> >   26 files changed, 4203 insertions(+), 1280 deletions(-)
> >   create mode 100644 drivers/gpu/drm/drm_gpuva_mgr.c
> >   create mode 100644 include/drm/drm_gpuva_mgr.h
> > 

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [Intel-xe] [PATCH v5 0/8] Port Xe to use GPUVA and implement NULL VM binds
  2023-04-04 14:56   ` Matthew Brost
@ 2023-04-05 15:31     ` Thomas Hellström
  2023-04-06  1:20       ` Matthew Brost
  0 siblings, 1 reply; 21+ messages in thread
From: Thomas Hellström @ 2023-04-05 15:31 UTC (permalink / raw)
  To: Matthew Brost; +Cc: intel-xe

Hi,

On 4/4/23 16:56, Matthew Brost wrote:
> On Tue, Apr 04, 2023 at 03:20:47PM +0200, Thomas Hellström wrote:
>> Hi, Matthew,
>>
>>
>> On 4/4/23 03:42, Matthew Brost wrote:
>>> GPUVA is common code written primarily by Danilo with the idea being a
>>> common place to track GPUVAs (VMAs in Xe) within an address space (VMs
>>> in Xe), track all the GPUVAs attached to GEMs, and a common way
>>> to implement VM binds / unbinds with MMAP / MUNMAP semantics via creating
>>> operation lists. All of this adds up to a common way to implement VK
>>> sparse bindings.
>>>
>>> This series pulls in the GPUVA code written by Danilo plus some small
>>> fixes by myself into 1 large patch. Once the GPUVA makes it upstream, we
>>> can rebase and drop this patch. I believe what lands upstream should be
>>> nearly identical to this patch at least from an API perspective.
>>>
>>> The last three patches port Xe to GPUVA and add support for NULL VM binds
>>> (writes dropped, read zero, VK sparse support). An example of the
>>> semantics of this is below.
>> Going through the new code in xe_vm.c I'm still concerned about our error
>> handling. In the cases we attempt to handle, for example an -ENOMEM, the
>> sync ioctl appears to be doing a fragile unwind whereas the async worker
>> puts the VM in an error state that can only be exited by a RESTART ioctl
>> (please correct me if I'm wrong here). We could of course do the same for sync
>> ioctls, but I'm still wondering what happens if, for example, a MAP operation
>> on a dual-gt VM fails after binding on the first gt?
>>
> So in sync mode we don't allow advanced binds (more than 1 map operation),
> but I failed to take dual-gt VMs into account, so yes, that might be an
> issue for sync binds. This should just work for async though. Maybe we
> deprecate sync mode except for unbinds while in an error state.

I think we need to separate this from sync/async and what's in the 
current code. I've started to dig into the Nouveau code a bit to get 
things right in the async VM_BIND document, but the error handling 
should really be agnostic as to whether we're async or not. We should be 
able to run RESTART also after a failing sync VM_BIND if the decision is 
to put the VM in a recoverable error state rather than to clean 
everything up after each failure.

>
>> I think we need to clearly define the desired behaviour in the uAPI and then
>> make sure we do the necessary changes with that in mind, and even if I
>> prefer the !async_worker solution, this can be done orthogonally to that
>> discussion. Ideally I'd like us to find a way to *not* put the VM in an error
>> state like this, and I figure there are various ways to aid in this, for
>> example keeping a progress cookie in the user-space data or deferring any
>> failed bind operations to the next rebind worker or exec, but in the end I
>> think this boils down to allocating all needed resources up-front for
>> operations or groups of operations that are not allowed to fail once we
>> start to modify the page-tables.
>>
> Agree on clearly defined uAPI behavior; I'm not sure I have an answer
> for the error handling yet. Personally I like the stop / restart mechanism,
> as it essentially means binds can never fail for a well-behaved user.
> With regard to allocating up front, I think that is what Nouveau is doing?
> We could study that code, but allocating everything up front would be a
> pretty large change.
>
> Obviously we need to do some work here and this is one of our biggest
> opens left but I don't view this as a true blocker until we remove force
> probe. GPUVA regardless is a step in the right direction which will make
> any changes going forward easier + enable VK to get sparse implemented.
> I'd say let's get GPUVA merged and next focus on these issues you have
> raised.

I think we need a pretty major rewrite here, and I think moving over to
GPUVA would be cleaner with that done and the possibility of clean error
recovery in place. I stumbled on these WARN_ON(err) calls when starting to
review, where it's IMO perfectly possible for that err to be non-zero,
and since we're not in POC mode anymore I'm reluctant to Ack something
that we know might blow up.

What I can do, though, is dig more into Nouveau to see how they
handle the error unwind, start implementing a clean unwind also in
our driver, and rebase the GPUVA stuff on top.

/Thomas


>
> Also, another VM bind issue we need to address soon:
> https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/41
>
> Matt
>
>> Thoughts?
>>
>> /Thomas
>>
>>
>>
>>
>>> MAP 0x0000-0x8000 to NULL 	- 0x0000-0x8000 writes dropped + read zero
>>> MAP 0x4000-0x5000 to a GEM 	- 0x0000-0x4000, 0x5000-0x8000 writes dropped + read zero; 0x4000-0x5000 mapped to a GEM
>>> UNMAP 0x3000-0x6000		- 0x0000-0x3000, 0x6000-0x8000 writes dropped + read zero
>>> UNMAP 0x0000-0x8000		- Nothing mapped
>>>
>>> No changes to existing behavior, rather just new functionality.
>>>
>>> v2: Fix CI build failure
>>> v3: Export mas_preallocate, add patch to avoid rebinds
>>> v5: Bug fixes, rebase, xe_vma size optimizations
>>>
>>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>>>
>>> Danilo Krummrich (2):
>>>     maple_tree: split up MA_STATE() macro
>>>     drm: manager to keep track of GPUs VA mappings
>>>
>>> Matthew Brost (6):
>>>     maple_tree: Export mas_preallocate
>>>     drm/xe: Port Xe to GPUVA
>>>     drm/xe: NULL binding implementation
>>>     drm/xe: Avoid doing rebinds
>>>     drm/xe: Reduce the number list links in xe_vma
>>>     drm/xe: Optimize size of xe_vma allocation
>>>
>>>    Documentation/gpu/drm-mm.rst                |   31 +
>>>    drivers/gpu/drm/Makefile                    |    1 +
>>>    drivers/gpu/drm/drm_debugfs.c               |   56 +
>>>    drivers/gpu/drm/drm_gem.c                   |    3 +
>>>    drivers/gpu/drm/drm_gpuva_mgr.c             | 1890 ++++++++++++++++++
>>>    drivers/gpu/drm/xe/xe_bo.c                  |   10 +-
>>>    drivers/gpu/drm/xe/xe_bo.h                  |    1 +
>>>    drivers/gpu/drm/xe/xe_device.c              |    2 +-
>>>    drivers/gpu/drm/xe/xe_exec.c                |    4 +-
>>>    drivers/gpu/drm/xe/xe_gt_pagefault.c        |   29 +-
>>>    drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c |   14 +-
>>>    drivers/gpu/drm/xe/xe_guc_ct.c              |    6 +-
>>>    drivers/gpu/drm/xe/xe_migrate.c             |    8 +-
>>>    drivers/gpu/drm/xe/xe_pt.c                  |  176 +-
>>>    drivers/gpu/drm/xe/xe_trace.h               |   10 +-
>>>    drivers/gpu/drm/xe/xe_vm.c                  | 1947 +++++++++----------
>>>    drivers/gpu/drm/xe/xe_vm.h                  |   76 +-
>>>    drivers/gpu/drm/xe/xe_vm_madvise.c          |   87 +-
>>>    drivers/gpu/drm/xe/xe_vm_types.h            |  276 ++-
>>>    include/drm/drm_debugfs.h                   |   24 +
>>>    include/drm/drm_drv.h                       |    7 +
>>>    include/drm/drm_gem.h                       |   75 +
>>>    include/drm/drm_gpuva_mgr.h                 |  734 +++++++
>>>    include/linux/maple_tree.h                  |    7 +-
>>>    include/uapi/drm/xe_drm.h                   |    8 +
>>>    lib/maple_tree.c                            |    1 +
>>>    26 files changed, 4203 insertions(+), 1280 deletions(-)
>>>    create mode 100644 drivers/gpu/drm/drm_gpuva_mgr.c
>>>    create mode 100644 include/drm/drm_gpuva_mgr.h
>>>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [Intel-xe] [PATCH v5 0/8] Port Xe to use GPUVA and implement NULL VM binds
  2023-04-05 15:31     ` Thomas Hellström
@ 2023-04-06  1:20       ` Matthew Brost
  0 siblings, 0 replies; 21+ messages in thread
From: Matthew Brost @ 2023-04-06  1:20 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-xe

On Wed, Apr 05, 2023 at 05:31:15PM +0200, Thomas Hellström wrote:
> Hi,
> 
> On 4/4/23 16:56, Matthew Brost wrote:
> > On Tue, Apr 04, 2023 at 03:20:47PM +0200, Thomas Hellström wrote:
> > > Hi, Matthew,
> > > 
> > > 
> > > On 4/4/23 03:42, Matthew Brost wrote:
> > > > GPUVA is common code written primarily by Danilo with the idea being a
> > > > common place to track GPUVAs (VMAs in Xe) within an address space (VMs
> > > > in Xe), track all the GPUVAs attached to GEMs, and a common way
> > > > to implement VM binds / unbinds with MMAP / MUNMAP semantics via creating
> > > > operation lists. All of this adds up to a common way to implement VK
> > > > sparse bindings.
> > > > 
> > > > This series pulls in the GPUVA code written by Danilo plus some small
> > > > fixes by myself into 1 large patch. Once the GPUVA makes it upstream, we
> > > > can rebase and drop this patch. I believe what lands upstream should be
> > > > nearly identical to this patch at least from an API perspective.
> > > > 
> > > > The last three patches port Xe to GPUVA and add support for NULL VM binds
> > > > (writes dropped, read zero, VK sparse support). An example of the
> > > > semantics of this is below.
> > > Going through the new code in xe_vm.c I'm still concerned about our error
> > > handling. In the cases we attempt to handle, for example an -ENOMEM, the
> > > sync ioctl appears to be doing a fragile unwind whereas the async worker
> > > puts the VM in an error state that can only be exited by a RESTART ioctl
> > > (please correct me if I'm wrong here). We could of course do the same for sync
> > > ioctls, but I'm still wondering what happens if, for example, a MAP operation
> > > on a dual-gt VM fails after binding on the first gt?
> > > 
> > So in sync mode we don't allow advanced binds (more than 1 map operation),
> > but I failed to take dual-gt VMs into account, so yes, that might be an
> > issue for sync binds. This should just work for async though. Maybe we
> > deprecate sync mode except for unbinds while in an error state.
> 
> I think we need to separate this from sync/async and what's in the current
> code. I've started to dig into the Nouveau code a bit to get things right in
> the async VM_BIND document, but the error handling should really be agnostic
> as to whether we're async or not. We should be able to run RESTART also
> after a failing sync VM_BIND if the decision is to put the VM in a
> recoverable error state rather than to clean everything up after each
> failure.
> 

Not sure I agree with this but we can hash this out later.

> > 
> > > I think we need to clearly define the desired behaviour in the uAPI and then
> > > make sure we do the necessary changes with that in mind, and even if I
> > > prefer the !async_worker solution, this can be done orthogonally to that
> > > discussion. Ideally I'd like us to find a way to *not* put the VM in an error
> > > state like this, and I figure there are various ways to aid in this, for
> > > example keeping a progress cookie in the user-space data or deferring any
> > > failed bind operations to the next rebind worker or exec, but in the end I
> > > think this boils down to allocating all needed resources up-front for
> > > operations or groups of operations that are not allowed to fail once we
> > > start to modify the page-tables.
> > > 
> > Agree on clearly defined uAPI behavior; I'm not sure I have an answer
> > for the error handling yet. Personally I like the stop / restart mechanism,
> > as it essentially means binds can never fail for a well-behaved user.
> > With regard to allocating up front, I think that is what Nouveau is doing?
> > We could study that code, but allocating everything up front would be a
> > pretty large change.
> > 
> > Obviously we need to do some work here and this is one of our biggest
> > opens left but I don't view this as a true blocker until we remove force
> > probe. GPUVA regardless is a step in the right direction which will make
> > any changes going forward easier + enable VK to get sparse implemented.
> > I'd say let's get GPUVA merged and next focus on these issues you have
> > raised.
> 
> I think we need a pretty major rewrite here, and I think moving over to
> GPUVA would be cleaner with that done and the possibility of clean error
> recovery in place. I stumbled on these WARN_ON(err) calls when starting to
> review, where it's IMO perfectly possible for that err to be non-zero, and since
> we're not in POC mode anymore I'm reluctant to Ack something that we know
> might blow up.
>

Everything will be easier if we move to GPUVA first, and I agree this needs
to be stable; from testing, it is. Also GPUVA gets us everything we need
for sparse too, which is a plus. Also I want to move some userptr stuff to
GPUVA, move extobj to GPUVA, some locking to GPUVA with drm exec, and
perhaps preempt fence handling to DRM sched + GPUVA. I'd strongly
advocate for merging ASAP and staging the aforementioned work + error
handling fixes behind this merge.

Can you point out the WARN? Is it not handling the error from
drm_gpuva_insert? Yes, I'll fix that.

> What I can do, though, is dig more into Nouveau to see how they handle
> the error unwind, start implementing a clean unwind also in our driver,
> and rebase the GPUVA stuff on top.
>

Let's both dig in.

Matt
 
> /Thomas
> 
> 
> > 
> > Also, another VM bind issue we need to address soon:
> > https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/41
> > 
> > Matt
> > 
> > > Thoughts?
> > > 
> > > /Thomas
> > > 
> > > 
> > > 
> > > 
> > > > MAP 0x0000-0x8000 to NULL 	- 0x0000-0x8000 writes dropped + read zero
> > > > MAP 0x4000-0x5000 to a GEM 	- 0x0000-0x4000, 0x5000-0x8000 writes dropped + read zero; 0x4000-0x5000 mapped to a GEM
> > > > UNMAP 0x3000-0x6000		- 0x0000-0x3000, 0x6000-0x8000 writes dropped + read zero
> > > > UNMAP 0x0000-0x8000		- Nothing mapped
> > > > 
> > > > No changes to existing behavior, rather just new functionality.
> > > > 
> > > > v2: Fix CI build failure
> > > > v3: Export mas_preallocate, add patch to avoid rebinds
> > > > v5: Bug fixes, rebase, xe_vma size optimizations
> > > > 
> > > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > > > 
> > > > Danilo Krummrich (2):
> > > >     maple_tree: split up MA_STATE() macro
> > > >     drm: manager to keep track of GPUs VA mappings
> > > > 
> > > > Matthew Brost (6):
> > > >     maple_tree: Export mas_preallocate
> > > >     drm/xe: Port Xe to GPUVA
> > > >     drm/xe: NULL binding implementation
> > > >     drm/xe: Avoid doing rebinds
> > > >     drm/xe: Reduce the number list links in xe_vma
> > > >     drm/xe: Optimize size of xe_vma allocation
> > > > 
> > > >    Documentation/gpu/drm-mm.rst                |   31 +
> > > >    drivers/gpu/drm/Makefile                    |    1 +
> > > >    drivers/gpu/drm/drm_debugfs.c               |   56 +
> > > >    drivers/gpu/drm/drm_gem.c                   |    3 +
> > > >    drivers/gpu/drm/drm_gpuva_mgr.c             | 1890 ++++++++++++++++++
> > > >    drivers/gpu/drm/xe/xe_bo.c                  |   10 +-
> > > >    drivers/gpu/drm/xe/xe_bo.h                  |    1 +
> > > >    drivers/gpu/drm/xe/xe_device.c              |    2 +-
> > > >    drivers/gpu/drm/xe/xe_exec.c                |    4 +-
> > > >    drivers/gpu/drm/xe/xe_gt_pagefault.c        |   29 +-
> > > >    drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c |   14 +-
> > > >    drivers/gpu/drm/xe/xe_guc_ct.c              |    6 +-
> > > >    drivers/gpu/drm/xe/xe_migrate.c             |    8 +-
> > > >    drivers/gpu/drm/xe/xe_pt.c                  |  176 +-
> > > >    drivers/gpu/drm/xe/xe_trace.h               |   10 +-
> > > >    drivers/gpu/drm/xe/xe_vm.c                  | 1947 +++++++++----------
> > > >    drivers/gpu/drm/xe/xe_vm.h                  |   76 +-
> > > >    drivers/gpu/drm/xe/xe_vm_madvise.c          |   87 +-
> > > >    drivers/gpu/drm/xe/xe_vm_types.h            |  276 ++-
> > > >    include/drm/drm_debugfs.h                   |   24 +
> > > >    include/drm/drm_drv.h                       |    7 +
> > > >    include/drm/drm_gem.h                       |   75 +
> > > >    include/drm/drm_gpuva_mgr.h                 |  734 +++++++
> > > >    include/linux/maple_tree.h                  |    7 +-
> > > >    include/uapi/drm/xe_drm.h                   |    8 +
> > > >    lib/maple_tree.c                            |    1 +
> > > >    26 files changed, 4203 insertions(+), 1280 deletions(-)
> > > >    create mode 100644 drivers/gpu/drm/drm_gpuva_mgr.c
> > > >    create mode 100644 include/drm/drm_gpuva_mgr.h
> > > > 

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [Intel-xe] [PATCH v5 4/8] drm/xe: Port Xe to GPUVA
  2023-04-04  1:42 ` [Intel-xe] [PATCH v5 4/8] drm/xe: Port Xe to GPUVA Matthew Brost
@ 2023-04-21 12:52   ` Thomas Hellström
  2023-04-21 12:56     ` Thomas Hellström
  2023-04-21 14:54   ` Thomas Hellström
  2023-04-25 11:41   ` Thomas Hellström
  2 siblings, 1 reply; 21+ messages in thread
From: Thomas Hellström @ 2023-04-21 12:52 UTC (permalink / raw)
  To: Matthew Brost, intel-xe


On 4/4/23 03:42, Matthew Brost wrote:
> Rather than open coding VM binds and VMA tracking, use the GPUVA
> library. GPUVA provides a common infrastructure for VM binds to use mmap
> / munmap semantics and support for VK sparse bindings.
>
> The concepts are:
>
> 1) xe_vm inherits from drm_gpuva_manager
> 2) xe_vma inherits from drm_gpuva
> 3) xe_vma_op inherits from drm_gpuva_op
> 4) VM bind operations (MAP, UNMAP, PREFETCH, UNMAP_ALL) call into the
> GPUVA code to generate a VMA operations list which is parsed, committed,
> and executed.
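
In C terms this "inherits" is struct embedding plus container_of(); a
minimal sketch, with field names partly assumed (only gpuva_to_vma() and
the embedded vma->gpuva are visible in the hunks below):

struct xe_vma {
	struct drm_gpuva gpuva;		/* base "class", embedded */
	/* ... Xe-specific members ... */
};

struct xe_vm {
	struct drm_gpuva_manager mgr;	/* base "class"; field name assumed */
	/* ... Xe-specific members ... */
};

static inline struct xe_vma *gpuva_to_vma(struct drm_gpuva *gpuva)
{
	return container_of(gpuva, struct xe_vma, gpuva);
}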
>
> v2 (CI): Add break after default in case statement.
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
>   drivers/gpu/drm/xe/xe_bo.c                  |   10 +-
>   drivers/gpu/drm/xe/xe_device.c              |    2 +-
>   drivers/gpu/drm/xe/xe_exec.c                |    2 +-
>   drivers/gpu/drm/xe/xe_gt_pagefault.c        |   23 +-
>   drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c |   14 +-
>   drivers/gpu/drm/xe/xe_guc_ct.c              |    6 +-
>   drivers/gpu/drm/xe/xe_migrate.c             |    8 +-
>   drivers/gpu/drm/xe/xe_pt.c                  |  106 +-
>   drivers/gpu/drm/xe/xe_trace.h               |   10 +-
>   drivers/gpu/drm/xe/xe_vm.c                  | 1799 +++++++++----------
>   drivers/gpu/drm/xe/xe_vm.h                  |   66 +-
>   drivers/gpu/drm/xe/xe_vm_madvise.c          |   87 +-
>   drivers/gpu/drm/xe/xe_vm_types.h            |  165 +-
>   13 files changed, 1126 insertions(+), 1172 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> index 5460e6fe3c1f..3a482c61c3ec 100644
> --- a/drivers/gpu/drm/xe/xe_bo.c
> +++ b/drivers/gpu/drm/xe/xe_bo.c
> @@ -391,7 +391,8 @@ static int xe_bo_trigger_rebind(struct xe_device *xe, struct xe_bo *bo,
>   {
>   	struct dma_resv_iter cursor;
>   	struct dma_fence *fence;
> -	struct xe_vma *vma;
> +	struct drm_gpuva *gpuva;
> +	struct drm_gem_object *obj = &bo->ttm.base;
>   	int ret = 0;
>   
>   	dma_resv_assert_held(bo->ttm.base.resv);
> @@ -404,8 +405,9 @@ static int xe_bo_trigger_rebind(struct xe_device *xe, struct xe_bo *bo,
>   		dma_resv_iter_end(&cursor);
>   	}
>   
> -	list_for_each_entry(vma, &bo->vmas, bo_link) {
> -		struct xe_vm *vm = vma->vm;
> +	drm_gem_for_each_gpuva(gpuva, obj) {
> +		struct xe_vma *vma = gpuva_to_vma(gpuva);
> +		struct xe_vm *vm = xe_vma_vm(vma);
>   
>   		trace_xe_vma_evict(vma);
>   
> @@ -430,10 +432,8 @@ static int xe_bo_trigger_rebind(struct xe_device *xe, struct xe_bo *bo,
>   			} else {
>   				ret = timeout;
>   			}
> -

Unrelated


>   		} else {
>   			bool vm_resv_locked = false;
> -			struct xe_vm *vm = vma->vm;
>   
>   			/*
>   			 * We need to put the vma on the vm's rebind_list,
> diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
> index a79f934e3d2d..d0d70adedba6 100644
> --- a/drivers/gpu/drm/xe/xe_device.c
> +++ b/drivers/gpu/drm/xe/xe_device.c
> @@ -130,7 +130,7 @@ static struct drm_driver driver = {
>   	.driver_features =
>   	    DRIVER_GEM |
>   	    DRIVER_RENDER | DRIVER_SYNCOBJ |
> -	    DRIVER_SYNCOBJ_TIMELINE,
> +	    DRIVER_SYNCOBJ_TIMELINE | DRIVER_GEM_GPUVA,
>   	.open = xe_file_open,
>   	.postclose = xe_file_close,
>   
> diff --git a/drivers/gpu/drm/xe/xe_exec.c b/drivers/gpu/drm/xe/xe_exec.c
> index ea869f2452ef..214d82bc906b 100644
> --- a/drivers/gpu/drm/xe/xe_exec.c
> +++ b/drivers/gpu/drm/xe/xe_exec.c
> @@ -118,7 +118,7 @@ static int xe_exec_begin(struct xe_engine *e, struct ww_acquire_ctx *ww,
>   		if (xe_vma_is_userptr(vma))
>   			continue;
>   
> -		err = xe_bo_validate(vma->bo, vm, false);
> +		err = xe_bo_validate(xe_vma_bo(vma), vm, false);
>   		if (err) {
>   			xe_vm_unlock_dma_resv(vm, tv_onstack, *tv, ww, objs);
>   			*tv = NULL;
> diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c
> index 1677640e1075..f7a066090a13 100644
> --- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
> +++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
> @@ -75,9 +75,10 @@ static bool vma_is_valid(struct xe_gt *gt, struct xe_vma *vma)
>   		!(BIT(gt->info.id) & vma->usm.gt_invalidated);
>   }
>   
> -static bool vma_matches(struct xe_vma *vma, struct xe_vma *lookup)
> +static bool vma_matches(struct xe_vma *vma, u64 page_addr)
>   {
> -	if (lookup->start > vma->end || lookup->end < vma->start)
> +	if (page_addr > xe_vma_end(vma) - 1 ||
> +	    page_addr + SZ_4K < xe_vma_start(vma))
>   		return false;
>   
>   	return true;
> @@ -90,16 +91,14 @@ static bool only_needs_bo_lock(struct xe_bo *bo)
>   
>   static struct xe_vma *lookup_vma(struct xe_vm *vm, u64 page_addr)
>   {
> -	struct xe_vma *vma = NULL, lookup;
> +	struct xe_vma *vma = NULL;
>   
> -	lookup.start = page_addr;
> -	lookup.end = lookup.start + SZ_4K - 1;
>   	if (vm->usm.last_fault_vma) {   /* Fast lookup */
> -		if (vma_matches(vm->usm.last_fault_vma, &lookup))
> +		if (vma_matches(vm->usm.last_fault_vma, page_addr))
>   			vma = vm->usm.last_fault_vma;
>   	}
>   	if (!vma)
> -		vma = xe_vm_find_overlapping_vma(vm, &lookup);
> +		vma = xe_vm_find_overlapping_vma(vm, page_addr, SZ_4K);
>   
>   	return vma;
>   }
> @@ -170,7 +169,7 @@ static int handle_pagefault(struct xe_gt *gt, struct pagefault *pf)
>   	}
>   
>   	/* Lock VM and BOs dma-resv */
> -	bo = vma->bo;
> +	bo = xe_vma_bo(vma);
>   	if (only_needs_bo_lock(bo)) {
>   		/* This path ensures the BO's LRU is updated */
>   		ret = xe_bo_lock(bo, &ww, xe->info.tile_count, false);
> @@ -487,12 +486,8 @@ static struct xe_vma *get_acc_vma(struct xe_vm *vm, struct acc *acc)
>   {
>   	u64 page_va = acc->va_range_base + (ffs(acc->sub_granularity) - 1) *
>   		sub_granularity_in_byte(acc->granularity);
> -	struct xe_vma lookup;
> -
> -	lookup.start = page_va;
> -	lookup.end = lookup.start + SZ_4K - 1;
>   
> -	return xe_vm_find_overlapping_vma(vm, &lookup);
> +	return xe_vm_find_overlapping_vma(vm, page_va, SZ_4K);
>   }
>   
>   static int handle_acc(struct xe_gt *gt, struct acc *acc)
> @@ -536,7 +531,7 @@ static int handle_acc(struct xe_gt *gt, struct acc *acc)
>   		goto unlock_vm;
>   
>   	/* Lock VM and BOs dma-resv */
> -	bo = vma->bo;
> +	bo = xe_vma_bo(vma);
>   	if (only_needs_bo_lock(bo)) {
>   		/* This path ensures the BO's LRU is updated */
>   		ret = xe_bo_lock(bo, &ww, xe->info.tile_count, false);
> diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> index f279e21300aa..155f37aaf31c 100644
> --- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> +++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> @@ -201,8 +201,8 @@ int xe_gt_tlb_invalidation_vma(struct xe_gt *gt,
>   	if (!xe->info.has_range_tlb_invalidation) {
>   		action[len++] = MAKE_INVAL_OP(XE_GUC_TLB_INVAL_FULL);
>   	} else {
> -		u64 start = vma->start;
> -		u64 length = vma->end - vma->start + 1;
> +		u64 start = xe_vma_start(vma);
> +		u64 length = xe_vma_size(vma);
>   		u64 align, end;
>   
>   		if (length < SZ_4K)
> @@ -215,12 +215,12 @@ int xe_gt_tlb_invalidation_vma(struct xe_gt *gt,
>   		 * address mask covering the required range.
>   		 */
>   		align = roundup_pow_of_two(length);
> -		start = ALIGN_DOWN(vma->start, align);
> -		end = ALIGN(vma->start + length, align);
> +		start = ALIGN_DOWN(xe_vma_start(vma), align);
> +		end = ALIGN(xe_vma_start(vma) + length, align);
>   		length = align;
>   		while (start + length < end) {
>   			length <<= 1;
> -			start = ALIGN_DOWN(vma->start, length);
> +			start = ALIGN_DOWN(xe_vma_start(vma), length);
>   		}
>   
>   		/*
> @@ -229,7 +229,7 @@ int xe_gt_tlb_invalidation_vma(struct xe_gt *gt,
>   		 */
>   		if (length >= SZ_2M) {
>   			length = max_t(u64, SZ_16M, length);
> -			start = ALIGN_DOWN(vma->start, length);
> +			start = ALIGN_DOWN(xe_vma_start(vma), length);
>   		}
>   
>   		XE_BUG_ON(length < SZ_4K);
> @@ -238,7 +238,7 @@ int xe_gt_tlb_invalidation_vma(struct xe_gt *gt,
>   		XE_BUG_ON(!IS_ALIGNED(start, length));
>   
>   		action[len++] = MAKE_INVAL_OP(XE_GUC_TLB_INVAL_PAGE_SELECTIVE);
> -		action[len++] = vma->vm->usm.asid;
> +		action[len++] = xe_vma_vm(vma)->usm.asid;
>   		action[len++] = lower_32_bits(start);
>   		action[len++] = upper_32_bits(start);
>   		action[len++] = ilog2(length) - ilog2(SZ_4K);
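
A worked example of the (pre-existing) range computation above, using made-up
numbers: for a VMA at 0x7000 with size 0x3000, align = roundup_pow_of_two(0x3000)
= 0x4000, giving start = 0x4000, end = 0xC000, length = 0x4000; the loop then
grows length to 0x8000 (start realigned to 0x0, still short of end) and to
0x10000 (start 0x0, which now covers end), so the selective invalidation ends up
covering the 64K range [0x0, 0x10000) for the 12K VMA.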
> diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c
> index 5e00b75d3ca2..e5ed9022a0a2 100644
> --- a/drivers/gpu/drm/xe/xe_guc_ct.c
> +++ b/drivers/gpu/drm/xe/xe_guc_ct.c
> @@ -783,13 +783,13 @@ static int parse_g2h_response(struct xe_guc_ct *ct, u32 *msg, u32 len)
>   	if (type == GUC_HXG_TYPE_RESPONSE_FAILURE) {
>   		g2h_fence->fail = true;
>   		g2h_fence->error =
> -			FIELD_GET(GUC_HXG_FAILURE_MSG_0_ERROR, msg[0]);
> +			FIELD_GET(GUC_HXG_FAILURE_MSG_0_ERROR, msg[1]);
>   		g2h_fence->hint =
> -			FIELD_GET(GUC_HXG_FAILURE_MSG_0_HINT, msg[0]);
> +			FIELD_GET(GUC_HXG_FAILURE_MSG_0_HINT, msg[1]);
>   	} else if (type == GUC_HXG_TYPE_NO_RESPONSE_RETRY) {
>   		g2h_fence->retry = true;
>   		g2h_fence->reason =
> -			FIELD_GET(GUC_HXG_RETRY_MSG_0_REASON, msg[0]);
> +			FIELD_GET(GUC_HXG_RETRY_MSG_0_REASON, msg[1]);
>   	} else if (g2h_fence->response_buffer) {
>   		g2h_fence->response_len = response_len;
>   		memcpy(g2h_fence->response_buffer, msg + GUC_CTB_MSG_MIN_LEN,
> diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
> index e8978440c725..fee4c0028a2f 100644
> --- a/drivers/gpu/drm/xe/xe_migrate.c
> +++ b/drivers/gpu/drm/xe/xe_migrate.c
> @@ -1049,8 +1049,10 @@ xe_migrate_update_pgtables_cpu(struct xe_migrate *m,
>   		return ERR_PTR(-ETIME);
>   
>   	if (wait_vm && !dma_resv_test_signaled(&vm->resv,
> -					       DMA_RESV_USAGE_BOOKKEEP))
> +					       DMA_RESV_USAGE_BOOKKEEP)) {
> +		vm_dbg(&vm->xe->drm, "wait on VM for munmap");
>   		return ERR_PTR(-ETIME);
> +	}
>   
>   	if (ops->pre_commit) {
>   		err = ops->pre_commit(pt_update);
> @@ -1138,7 +1140,8 @@ xe_migrate_update_pgtables(struct xe_migrate *m,
>   	u64 addr;
>   	int err = 0;
>   	bool usm = !eng && xe->info.supports_usm;
> -	bool first_munmap_rebind = vma && vma->first_munmap_rebind;
> +	bool first_munmap_rebind = vma &&
> +		vma->gpuva.flags & XE_VMA_FIRST_REBIND;
>   
>   	/* Use the CPU if no in syncs and engine is idle */
>   	if (no_in_syncs(syncs, num_syncs) && (!eng || xe_engine_is_idle(eng))) {
> @@ -1259,6 +1262,7 @@ xe_migrate_update_pgtables(struct xe_migrate *m,
>   	 * trigger preempts before moving forward
>   	 */
>   	if (first_munmap_rebind) {
> +		vm_dbg(&vm->xe->drm, "wait on first_munmap_rebind");
>   		err = job_add_deps(job, &vm->resv,
>   				   DMA_RESV_USAGE_BOOKKEEP);
>   		if (err)
> diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
> index 6b2943efcdbc..37a1ce6f62a3 100644
> --- a/drivers/gpu/drm/xe/xe_pt.c
> +++ b/drivers/gpu/drm/xe/xe_pt.c
> @@ -94,7 +94,7 @@ static dma_addr_t vma_addr(struct xe_vma *vma, u64 offset,
>   				&cur);
>   		return xe_res_dma(&cur) + offset;
>   	} else {
> -		return xe_bo_addr(vma->bo, offset, page_size, is_vram);
> +		return xe_bo_addr(xe_vma_bo(vma), offset, page_size, is_vram);
>   	}
>   }
>   
> @@ -159,7 +159,7 @@ u64 gen8_pte_encode(struct xe_vma *vma, struct xe_bo *bo,
>   
>   	if (is_vram) {
>   		pte |= GEN12_PPGTT_PTE_LM;
> -		if (vma && vma->use_atomic_access_pte_bit)
> +		if (vma && vma->gpuva.flags & XE_VMA_ATOMIC_PTE_BIT)
>   			pte |= GEN12_USM_PPGTT_PTE_AE;
>   	}
>   
> @@ -738,7 +738,7 @@ static int
>   xe_pt_stage_bind(struct xe_gt *gt, struct xe_vma *vma,
>   		 struct xe_vm_pgtable_update *entries, u32 *num_entries)
>   {
> -	struct xe_bo *bo = vma->bo;
> +	struct xe_bo *bo = xe_vma_bo(vma);
>   	bool is_vram = !xe_vma_is_userptr(vma) && bo && xe_bo_is_vram(bo);
>   	struct xe_res_cursor curs;
>   	struct xe_pt_stage_bind_walk xe_walk = {
> @@ -747,22 +747,23 @@ xe_pt_stage_bind(struct xe_gt *gt, struct xe_vma *vma,
>   			.shifts = xe_normal_pt_shifts,
>   			.max_level = XE_PT_HIGHEST_LEVEL,
>   		},
> -		.vm = vma->vm,
> +		.vm = xe_vma_vm(vma),
>   		.gt = gt,
>   		.curs = &curs,
> -		.va_curs_start = vma->start,
> -		.pte_flags = vma->pte_flags,
> +		.va_curs_start = xe_vma_start(vma),
> +		.pte_flags = xe_vma_read_only(vma) ? PTE_READ_ONLY : 0,
>   		.wupd.entries = entries,
> -		.needs_64K = (vma->vm->flags & XE_VM_FLAGS_64K) && is_vram,
> +		.needs_64K = (xe_vma_vm(vma)->flags & XE_VM_FLAGS_64K) &&
> +			is_vram,
>   	};
> -	struct xe_pt *pt = vma->vm->pt_root[gt->info.id];
> +	struct xe_pt *pt = xe_vma_vm(vma)->pt_root[gt->info.id];
>   	int ret;
>   
>   	if (is_vram) {
>   		struct xe_gt *bo_gt = xe_bo_to_gt(bo);
>   
>   		xe_walk.default_pte = GEN12_PPGTT_PTE_LM;
> -		if (vma && vma->use_atomic_access_pte_bit)
> +		if (vma && vma->gpuva.flags & XE_VMA_ATOMIC_PTE_BIT)
>   			xe_walk.default_pte |= GEN12_USM_PPGTT_PTE_AE;
>   		xe_walk.dma_offset = bo_gt->mem.vram.io_start -
>   			gt_to_xe(gt)->mem.vram.io_start;
> @@ -778,17 +779,16 @@ xe_pt_stage_bind(struct xe_gt *gt, struct xe_vma *vma,
>   
>   	xe_bo_assert_held(bo);
>   	if (xe_vma_is_userptr(vma))
> -		xe_res_first_sg(vma->userptr.sg, 0, vma->end - vma->start + 1,
> -				&curs);
> +		xe_res_first_sg(vma->userptr.sg, 0, xe_vma_size(vma), &curs);
>   	else if (xe_bo_is_vram(bo) || xe_bo_is_stolen(bo))
> -		xe_res_first(bo->ttm.resource, vma->bo_offset,
> -			     vma->end - vma->start + 1, &curs);
> +		xe_res_first(bo->ttm.resource, xe_vma_bo_offset(vma),
> +			     xe_vma_size(vma), &curs);
>   	else
> -		xe_res_first_sg(xe_bo_get_sg(bo), vma->bo_offset,
> -				vma->end - vma->start + 1, &curs);
> +		xe_res_first_sg(xe_bo_get_sg(bo), xe_vma_bo_offset(vma),
> +				xe_vma_size(vma), &curs);
>   
> -	ret = drm_pt_walk_range(&pt->drm, pt->level, vma->start, vma->end + 1,
> -				&xe_walk.drm);
> +	ret = drm_pt_walk_range(&pt->drm, pt->level, xe_vma_start(vma),
> +				xe_vma_end(vma), &xe_walk.drm);
>   
>   	*num_entries = xe_walk.wupd.num_used_entries;
>   	return ret;
> @@ -923,13 +923,13 @@ bool xe_pt_zap_ptes(struct xe_gt *gt, struct xe_vma *vma)
>   		},
>   		.gt = gt,
>   	};
> -	struct xe_pt *pt = vma->vm->pt_root[gt->info.id];
> +	struct xe_pt *pt = xe_vma_vm(vma)->pt_root[gt->info.id];
>   
>   	if (!(vma->gt_present & BIT(gt->info.id)))
>   		return false;
>   
> -	(void)drm_pt_walk_shared(&pt->drm, pt->level, vma->start, vma->end + 1,
> -				 &xe_walk.drm);
> +	(void)drm_pt_walk_shared(&pt->drm, pt->level, xe_vma_start(vma),
> +				 xe_vma_end(vma), &xe_walk.drm);
>   
>   	return xe_walk.needs_invalidate;
>   }
> @@ -966,21 +966,21 @@ static void xe_pt_abort_bind(struct xe_vma *vma,
>   			continue;
>   
>   		for (j = 0; j < entries[i].qwords; j++)
> -			xe_pt_destroy(entries[i].pt_entries[j].pt, vma->vm->flags, NULL);
> +			xe_pt_destroy(entries[i].pt_entries[j].pt, xe_vma_vm(vma)->flags, NULL);
>   		kfree(entries[i].pt_entries);
>   	}
>   }
>   
>   static void xe_pt_commit_locks_assert(struct xe_vma *vma)
>   {
> -	struct xe_vm *vm = vma->vm;
> +	struct xe_vm *vm = xe_vma_vm(vma);
>   
>   	lockdep_assert_held(&vm->lock);
>   
>   	if (xe_vma_is_userptr(vma))
>   		lockdep_assert_held_read(&vm->userptr.notifier_lock);
>   	else
> -		dma_resv_assert_held(vma->bo->ttm.base.resv);
> +		dma_resv_assert_held(xe_vma_bo(vma)->ttm.base.resv);
>   
>   	dma_resv_assert_held(&vm->resv);
>   }
> @@ -1013,7 +1013,7 @@ static void xe_pt_commit_bind(struct xe_vma *vma,
>   
>   			if (xe_pt_entry(pt_dir, j_))
>   				xe_pt_destroy(xe_pt_entry(pt_dir, j_),
> -					      vma->vm->flags, deferred);
> +					      xe_vma_vm(vma)->flags, deferred);
>   
>   			pt_dir->dir.entries[j_] = &newpte->drm;
>   		}
> @@ -1074,7 +1074,7 @@ static int xe_pt_userptr_inject_eagain(struct xe_vma *vma)
>   	static u32 count;
>   
>   	if (count++ % divisor == divisor - 1) {
> -		struct xe_vm *vm = vma->vm;
> +		struct xe_vm *vm = xe_vma_vm(vma);
>   
>   		vma->userptr.divisor = divisor << 1;
>   		spin_lock(&vm->userptr.invalidated_lock);
> @@ -1117,7 +1117,7 @@ static int xe_pt_userptr_pre_commit(struct xe_migrate_pt_update *pt_update)
>   		container_of(pt_update, typeof(*userptr_update), base);
>   	struct xe_vma *vma = pt_update->vma;
>   	unsigned long notifier_seq = vma->userptr.notifier_seq;
> -	struct xe_vm *vm = vma->vm;
> +	struct xe_vm *vm = xe_vma_vm(vma);
>   
>   	userptr_update->locked = false;
>   
> @@ -1288,20 +1288,20 @@ __xe_pt_bind_vma(struct xe_gt *gt, struct xe_vma *vma, struct xe_engine *e,
>   		},
>   		.bind = true,
>   	};
> -	struct xe_vm *vm = vma->vm;
> +	struct xe_vm *vm = xe_vma_vm(vma);
>   	u32 num_entries;
>   	struct dma_fence *fence;
>   	struct invalidation_fence *ifence = NULL;
>   	int err;
>   
>   	bind_pt_update.locked = false;
> -	xe_bo_assert_held(vma->bo);
> +	xe_bo_assert_held(xe_vma_bo(vma));
>   	xe_vm_assert_held(vm);
>   	XE_BUG_ON(xe_gt_is_media_type(gt));
>   
> -	vm_dbg(&vma->vm->xe->drm,
> +	vm_dbg(&xe_vma_vm(vma)->xe->drm,
>   	       "Preparing bind, with range [%llx...%llx) engine %p.\n",
> -	       vma->start, vma->end, e);
> +	       xe_vma_start(vma), xe_vma_end(vma) - 1, e);
>   
>   	err = xe_pt_prepare_bind(gt, vma, entries, &num_entries, rebind);
>   	if (err)
> @@ -1310,23 +1310,28 @@ __xe_pt_bind_vma(struct xe_gt *gt, struct xe_vma *vma, struct xe_engine *e,
>   
>   	xe_vm_dbg_print_entries(gt_to_xe(gt), entries, num_entries);
>   
> -	if (rebind && !xe_vm_no_dma_fences(vma->vm)) {
> +	if (rebind && !xe_vm_no_dma_fences(xe_vma_vm(vma))) {
>   		ifence = kzalloc(sizeof(*ifence), GFP_KERNEL);
>   		if (!ifence)
>   			return ERR_PTR(-ENOMEM);
>   	}
>   
>   	fence = xe_migrate_update_pgtables(gt->migrate,
> -					   vm, vma->bo,
> +					   vm, xe_vma_bo(vma),
>   					   e ? e : vm->eng[gt->info.id],
>   					   entries, num_entries,
>   					   syncs, num_syncs,
>   					   &bind_pt_update.base);
>   	if (!IS_ERR(fence)) {
> +		bool last_munmap_rebind = vma->gpuva.flags & XE_VMA_LAST_REBIND;
>   		LLIST_HEAD(deferred);
>   
> +
> +		if (last_munmap_rebind)
> +			vm_dbg(&vm->xe->drm, "last_munmap_rebind");
> +
>   		/* TLB invalidation must be done before signaling rebind */
> -		if (rebind && !xe_vm_no_dma_fences(vma->vm)) {
> +		if (rebind && !xe_vm_no_dma_fences(xe_vma_vm(vma))) {
>   			int err = invalidation_fence_init(gt, ifence, fence,
>   							  vma);
>   			if (err) {
> @@ -1339,12 +1344,12 @@ __xe_pt_bind_vma(struct xe_gt *gt, struct xe_vma *vma, struct xe_engine *e,
>   
>   		/* add shared fence now for pagetable delayed destroy */
>   		dma_resv_add_fence(&vm->resv, fence, !rebind &&
> -				   vma->last_munmap_rebind ?
> +				   last_munmap_rebind ?
>   				   DMA_RESV_USAGE_KERNEL :
>   				   DMA_RESV_USAGE_BOOKKEEP);
>   
> -		if (!xe_vma_is_userptr(vma) && !vma->bo->vm)
> -			dma_resv_add_fence(vma->bo->ttm.base.resv, fence,
> +		if (!xe_vma_is_userptr(vma) && !xe_vma_bo(vma)->vm)
> +			dma_resv_add_fence(xe_vma_bo(vma)->ttm.base.resv, fence,
>   					   DMA_RESV_USAGE_BOOKKEEP);
>   		xe_pt_commit_bind(vma, entries, num_entries, rebind,
>   				  bind_pt_update.locked ? &deferred : NULL);
> @@ -1357,8 +1362,7 @@ __xe_pt_bind_vma(struct xe_gt *gt, struct xe_vma *vma, struct xe_engine *e,
>   			up_read(&vm->userptr.notifier_lock);
>   			xe_bo_put_commit(&deferred);
>   		}
> -		if (!rebind && vma->last_munmap_rebind &&
> -		    xe_vm_in_compute_mode(vm))
> +		if (!rebind && last_munmap_rebind && xe_vm_in_compute_mode(vm))
>   			queue_work(vm->xe->ordered_wq,
>   				   &vm->preempt.rebind_work);
>   	} else {
> @@ -1506,14 +1510,14 @@ static unsigned int xe_pt_stage_unbind(struct xe_gt *gt, struct xe_vma *vma,
>   			.max_level = XE_PT_HIGHEST_LEVEL,
>   		},
>   		.gt = gt,
> -		.modified_start = vma->start,
> -		.modified_end = vma->end + 1,
> +		.modified_start = xe_vma_start(vma),
> +		.modified_end = xe_vma_end(vma),
>   		.wupd.entries = entries,
>   	};
> -	struct xe_pt *pt = vma->vm->pt_root[gt->info.id];
> +	struct xe_pt *pt = xe_vma_vm(vma)->pt_root[gt->info.id];
>   
> -	(void)drm_pt_walk_shared(&pt->drm, pt->level, vma->start, vma->end + 1,
> -				 &xe_walk.drm);
> +	(void)drm_pt_walk_shared(&pt->drm, pt->level, xe_vma_start(vma),
> +				 xe_vma_end(vma), &xe_walk.drm);
>   
>   	return xe_walk.wupd.num_used_entries;
>   }
> @@ -1525,7 +1529,7 @@ xe_migrate_clear_pgtable_callback(struct xe_migrate_pt_update *pt_update,
>   				  const struct xe_vm_pgtable_update *update)
>   {
>   	struct xe_vma *vma = pt_update->vma;
> -	u64 empty = __xe_pt_empty_pte(gt, vma->vm, update->pt->level);
> +	u64 empty = __xe_pt_empty_pte(gt, xe_vma_vm(vma), update->pt->level);
>   	int i;
>   
>   	XE_BUG_ON(xe_gt_is_media_type(gt));
> @@ -1563,7 +1567,7 @@ xe_pt_commit_unbind(struct xe_vma *vma,
>   			     i++) {
>   				if (xe_pt_entry(pt_dir, i))
>   					xe_pt_destroy(xe_pt_entry(pt_dir, i),
> -						      vma->vm->flags, deferred);
> +						      xe_vma_vm(vma)->flags, deferred);
>   
>   				pt_dir->dir.entries[i] = NULL;
>   			}
> @@ -1612,19 +1616,19 @@ __xe_pt_unbind_vma(struct xe_gt *gt, struct xe_vma *vma, struct xe_engine *e,
>   			.vma = vma,
>   		},
>   	};
> -	struct xe_vm *vm = vma->vm;
> +	struct xe_vm *vm = xe_vma_vm(vma);
>   	u32 num_entries;
>   	struct dma_fence *fence = NULL;
>   	struct invalidation_fence *ifence;
>   	LLIST_HEAD(deferred);
>   
> -	xe_bo_assert_held(vma->bo);
> +	xe_bo_assert_held(xe_vma_bo(vma));
>   	xe_vm_assert_held(vm);
>   	XE_BUG_ON(xe_gt_is_media_type(gt));
>   
> -	vm_dbg(&vma->vm->xe->drm,
> +	vm_dbg(&xe_vma_vm(vma)->xe->drm,
>   	       "Preparing unbind, with range [%llx...%llx) engine %p.\n",
> -	       vma->start, vma->end, e);
> +	       xe_vma_start(vma), xe_vma_end(vma) - 1, e);
>   
>   	num_entries = xe_pt_stage_unbind(gt, vma, entries);
>   	XE_BUG_ON(num_entries > ARRAY_SIZE(entries));
> @@ -1663,8 +1667,8 @@ __xe_pt_unbind_vma(struct xe_gt *gt, struct xe_vma *vma, struct xe_engine *e,
>   				   DMA_RESV_USAGE_BOOKKEEP);
>   
>   		/* This fence will be installed by caller when doing eviction */
> -		if (!xe_vma_is_userptr(vma) && !vma->bo->vm)
> -			dma_resv_add_fence(vma->bo->ttm.base.resv, fence,
> +		if (!xe_vma_is_userptr(vma) && !xe_vma_bo(vma)->vm)
> +			dma_resv_add_fence(xe_vma_bo(vma)->ttm.base.resv, fence,
>   					   DMA_RESV_USAGE_BOOKKEEP);
>   		xe_pt_commit_unbind(vma, entries, num_entries,
>   				    unbind_pt_update.locked ? &deferred : NULL);
> diff --git a/drivers/gpu/drm/xe/xe_trace.h b/drivers/gpu/drm/xe/xe_trace.h
> index 2f8eb7ebe9a7..12e12673fc91 100644
> --- a/drivers/gpu/drm/xe/xe_trace.h
> +++ b/drivers/gpu/drm/xe/xe_trace.h
> @@ -18,7 +18,7 @@
>   #include "xe_gt_types.h"
>   #include "xe_guc_engine_types.h"
>   #include "xe_sched_job.h"
> -#include "xe_vm_types.h"
> +#include "xe_vm.h"
>   
>   DECLARE_EVENT_CLASS(xe_gt_tlb_invalidation_fence,
>   		    TP_PROTO(struct xe_gt_tlb_invalidation_fence *fence),
> @@ -368,10 +368,10 @@ DECLARE_EVENT_CLASS(xe_vma,
>   
>   		    TP_fast_assign(
>   			   __entry->vma = (unsigned long)vma;
> -			   __entry->asid = vma->vm->usm.asid;
> -			   __entry->start = vma->start;
> -			   __entry->end = vma->end;
> -			   __entry->ptr = (u64)vma->userptr.ptr;
> +			   __entry->asid = xe_vma_vm(vma)->usm.asid;
> +			   __entry->start = xe_vma_start(vma);
> +			   __entry->end = xe_vma_end(vma) - 1;
> +			   __entry->ptr = xe_vma_userptr(vma);
>   			   ),
>   
>   		    TP_printk("vma=0x%016llx, asid=0x%05x, start=0x%012llx, end=0x%012llx, ptr=0x%012llx,",

Is it possible to split this patch (for review and possible regression
debugging purposes) so that the new macros / helpers are introduced first
in one patch and the relevant xe_vm changes follow in a separate patch?
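
For reference, the accessors in question as I read them from the call sites
in this patch; a sketch only, with the drm_gpuva field names assumed, and
note that xe_vma_end() is exclusive where the old vma->end was inclusive,
hence the "- 1" adjustments in the hunks above:

static inline u64 xe_vma_start(struct xe_vma *vma)
{
	return vma->gpuva.va.addr;	/* field name assumed */
}

static inline u64 xe_vma_size(struct xe_vma *vma)
{
	return vma->gpuva.va.range;	/* field name assumed */
}

static inline u64 xe_vma_end(struct xe_vma *vma)	/* exclusive end */
{
	return xe_vma_start(vma) + xe_vma_size(vma);
}

/* xe_vma_vm() / xe_vma_bo() would similarly be thin wrappers around the
 * embedded gpuva (container_of() on the manager, gem.obj lookup). */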

Anyway I'll continue reviewing the patch as is.

/Thomas



^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [Intel-xe] [PATCH v5 4/8] drm/xe: Port Xe to GPUVA
  2023-04-21 12:52   ` Thomas Hellström
@ 2023-04-21 12:56     ` Thomas Hellström
  0 siblings, 0 replies; 21+ messages in thread
From: Thomas Hellström @ 2023-04-21 12:56 UTC (permalink / raw)
  To: Matthew Brost, intel-xe


On 4/21/23 14:52, Thomas Hellström wrote:
>
> On 4/4/23 03:42, Matthew Brost wrote:
>> Rather than open coding VM binds and VMA tracking, use the GPUVA
>> library. GPUVA provides a common infrastructure for VM binds to use mmap
>> / munmap semantics and support for VK sparse bindings.
>>
>> The concepts are:
>>
>> 1) xe_vm inherits from drm_gpuva_manager
>> 2) xe_vma inherits from drm_gpuva
>> 3) xe_vma_op inherits from drm_gpuva_op
>> 4) VM bind operations (MAP, UNMAP, PREFETCH, UNMAP_ALL) call into the
>> GPUVA code to generate a VMA operations list which is parsed, committed,
>> and executed.
>>
>> v2 (CI): Add break after default in case statement.
>>
>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>> ---
>>   drivers/gpu/drm/xe/xe_bo.c                  |   10 +-
>>   drivers/gpu/drm/xe/xe_device.c              |    2 +-
>>   drivers/gpu/drm/xe/xe_exec.c                |    2 +-
>>   drivers/gpu/drm/xe/xe_gt_pagefault.c        |   23 +-
>>   drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c |   14 +-
>>   drivers/gpu/drm/xe/xe_guc_ct.c              |    6 +-
>>   drivers/gpu/drm/xe/xe_migrate.c             |    8 +-
>>   drivers/gpu/drm/xe/xe_pt.c                  |  106 +-
>>   drivers/gpu/drm/xe/xe_trace.h               |   10 +-
>>   drivers/gpu/drm/xe/xe_vm.c                  | 1799 +++++++++----------
>>   drivers/gpu/drm/xe/xe_vm.h                  |   66 +-
>>   drivers/gpu/drm/xe/xe_vm_madvise.c          |   87 +-
>>   drivers/gpu/drm/xe/xe_vm_types.h            |  165 +-
>>   13 files changed, 1126 insertions(+), 1172 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
>> index 5460e6fe3c1f..3a482c61c3ec 100644
>> --- a/drivers/gpu/drm/xe/xe_bo.c
>> +++ b/drivers/gpu/drm/xe/xe_bo.c
>> @@ -391,7 +391,8 @@ static int xe_bo_trigger_rebind(struct xe_device 
>> *xe, struct xe_bo *bo,
>>   {
>>       struct dma_resv_iter cursor;
>>       struct dma_fence *fence;
>> -    struct xe_vma *vma;
>> +    struct drm_gpuva *gpuva;
>> +    struct drm_gem_object *obj = &bo->ttm.base;
>>       int ret = 0;
>>         dma_resv_assert_held(bo->ttm.base.resv);
>> @@ -404,8 +405,9 @@ static int xe_bo_trigger_rebind(struct xe_device 
>> *xe, struct xe_bo *bo,
>>           dma_resv_iter_end(&cursor);
>>       }
>>   -    list_for_each_entry(vma, &bo->vmas, bo_link) {
>> -        struct xe_vm *vm = vma->vm;
>> +    drm_gem_for_each_gpuva(gpuva, obj) {
>> +        struct xe_vma *vma = gpuva_to_vma(gpuva);
>> +        struct xe_vm *vm = xe_vma_vm(vma);
>>             trace_xe_vma_evict(vma);
>>   @@ -430,10 +432,8 @@ static int xe_bo_trigger_rebind(struct 
>> xe_device *xe, struct xe_bo *bo,
>>               } else {
>>                   ret = timeout;
>>               }
>> -
>
> Unrelated
>
>
>>           } else {
>>               bool vm_resv_locked = false;
>> -            struct xe_vm *vm = vma->vm;
>>                 /*
>>                * We need to put the vma on the vm's rebind_list,
>> diff --git a/drivers/gpu/drm/xe/xe_device.c 
>> b/drivers/gpu/drm/xe/xe_device.c
>> index a79f934e3d2d..d0d70adedba6 100644
>> --- a/drivers/gpu/drm/xe/xe_device.c
>> +++ b/drivers/gpu/drm/xe/xe_device.c
>> @@ -130,7 +130,7 @@ static struct drm_driver driver = {
>>       .driver_features =
>>           DRIVER_GEM |
>>           DRIVER_RENDER | DRIVER_SYNCOBJ |
>> -        DRIVER_SYNCOBJ_TIMELINE,
>> +        DRIVER_SYNCOBJ_TIMELINE | DRIVER_GEM_GPUVA,
>>       .open = xe_file_open,
>>       .postclose = xe_file_close,
>>   diff --git a/drivers/gpu/drm/xe/xe_exec.c 
>> b/drivers/gpu/drm/xe/xe_exec.c
>> index ea869f2452ef..214d82bc906b 100644
>> --- a/drivers/gpu/drm/xe/xe_exec.c
>> +++ b/drivers/gpu/drm/xe/xe_exec.c
>> @@ -118,7 +118,7 @@ static int xe_exec_begin(struct xe_engine *e, 
>> struct ww_acquire_ctx *ww,
>>           if (xe_vma_is_userptr(vma))
>>               continue;
>>   -        err = xe_bo_validate(vma->bo, vm, false);
>> +        err = xe_bo_validate(xe_vma_bo(vma), vm, false);
>>           if (err) {
>>               xe_vm_unlock_dma_resv(vm, tv_onstack, *tv, ww, objs);
>>               *tv = NULL;
>> diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c 
>> b/drivers/gpu/drm/xe/xe_gt_pagefault.c
>> index 1677640e1075..f7a066090a13 100644
>> --- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
>> +++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
>> @@ -75,9 +75,10 @@ static bool vma_is_valid(struct xe_gt *gt, struct 
>> xe_vma *vma)
>>           !(BIT(gt->info.id) & vma->usm.gt_invalidated);
>>   }
>>   -static bool vma_matches(struct xe_vma *vma, struct xe_vma *lookup)
>> +static bool vma_matches(struct xe_vma *vma, u64 page_addr)
>>   {
>> -    if (lookup->start > vma->end || lookup->end < vma->start)
>> +    if (page_addr > xe_vma_end(vma) - 1 ||
>> +        page_addr + SZ_4K < xe_vma_start(vma))
>>           return false;
>>         return true;
>> @@ -90,16 +91,14 @@ static bool only_needs_bo_lock(struct xe_bo *bo)
>>     static struct xe_vma *lookup_vma(struct xe_vm *vm, u64 page_addr)
>>   {
>> -    struct xe_vma *vma = NULL, lookup;
>> +    struct xe_vma *vma = NULL;
>>   -    lookup.start = page_addr;
>> -    lookup.end = lookup.start + SZ_4K - 1;
>>       if (vm->usm.last_fault_vma) {   /* Fast lookup */
>> -        if (vma_matches(vm->usm.last_fault_vma, &lookup))
>> +        if (vma_matches(vm->usm.last_fault_vma, page_addr))
>>               vma = vm->usm.last_fault_vma;
>>       }
>>       if (!vma)
>> -        vma = xe_vm_find_overlapping_vma(vm, &lookup);
>> +        vma = xe_vm_find_overlapping_vma(vm, page_addr, SZ_4K);
>>         return vma;
>>   }
>> @@ -170,7 +169,7 @@ static int handle_pagefault(struct xe_gt *gt, 
>> struct pagefault *pf)
>>       }
>>         /* Lock VM and BOs dma-resv */
>> -    bo = vma->bo;
>> +    bo = xe_vma_bo(vma);
>>       if (only_needs_bo_lock(bo)) {
>>           /* This path ensures the BO's LRU is updated */
>>           ret = xe_bo_lock(bo, &ww, xe->info.tile_count, false);
>> @@ -487,12 +486,8 @@ static struct xe_vma *get_acc_vma(struct xe_vm 
>> *vm, struct acc *acc)
>>   {
>>       u64 page_va = acc->va_range_base + (ffs(acc->sub_granularity) - 
>> 1) *
>>           sub_granularity_in_byte(acc->granularity);
>> -    struct xe_vma lookup;
>> -
>> -    lookup.start = page_va;
>> -    lookup.end = lookup.start + SZ_4K - 1;
>>   -    return xe_vm_find_overlapping_vma(vm, &lookup);
>> +    return xe_vm_find_overlapping_vma(vm, page_va, SZ_4K);
>>   }
>>     static int handle_acc(struct xe_gt *gt, struct acc *acc)
>> @@ -536,7 +531,7 @@ static int handle_acc(struct xe_gt *gt, struct 
>> acc *acc)
>>           goto unlock_vm;
>>         /* Lock VM and BOs dma-resv */
>> -    bo = vma->bo;
>> +    bo = xe_vma_bo(vma);
>>       if (only_needs_bo_lock(bo)) {
>>           /* This path ensures the BO's LRU is updated */
>>           ret = xe_bo_lock(bo, &ww, xe->info.tile_count, false);
>> diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c 
>> b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
>> index f279e21300aa..155f37aaf31c 100644
>> --- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
>> +++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
>> @@ -201,8 +201,8 @@ int xe_gt_tlb_invalidation_vma(struct xe_gt *gt,
>>       if (!xe->info.has_range_tlb_invalidation) {
>>           action[len++] = MAKE_INVAL_OP(XE_GUC_TLB_INVAL_FULL);
>>       } else {
>> -        u64 start = vma->start;
>> -        u64 length = vma->end - vma->start + 1;
>> +        u64 start = xe_vma_start(vma);
>> +        u64 length = xe_vma_size(vma);
>>           u64 align, end;
>>             if (length < SZ_4K)
>> @@ -215,12 +215,12 @@ int xe_gt_tlb_invalidation_vma(struct xe_gt *gt,
>>            * address mask covering the required range.
>>            */
>>           align = roundup_pow_of_two(length);
>> -        start = ALIGN_DOWN(vma->start, align);
>> -        end = ALIGN(vma->start + length, align);
>> +        start = ALIGN_DOWN(xe_vma_start(vma), align);
>> +        end = ALIGN(xe_vma_start(vma) + length, align);
>>           length = align;
>>           while (start + length < end) {
>>               length <<= 1;
>> -            start = ALIGN_DOWN(vma->start, length);
>> +            start = ALIGN_DOWN(xe_vma_start(vma), length);
>>           }
>>             /*
>> @@ -229,7 +229,7 @@ int xe_gt_tlb_invalidation_vma(struct xe_gt *gt,
>>            */
>>           if (length >= SZ_2M) {
>>               length = max_t(u64, SZ_16M, length);
>> -            start = ALIGN_DOWN(vma->start, length);
>> +            start = ALIGN_DOWN(xe_vma_start(vma), length);
>>           }
>>             XE_BUG_ON(length < SZ_4K);
>> @@ -238,7 +238,7 @@ int xe_gt_tlb_invalidation_vma(struct xe_gt *gt,
>>           XE_BUG_ON(!IS_ALIGNED(start, length));
>>             action[len++] = 
>> MAKE_INVAL_OP(XE_GUC_TLB_INVAL_PAGE_SELECTIVE);
>> -        action[len++] = vma->vm->usm.asid;
>> +        action[len++] = xe_vma_vm(vma)->usm.asid;
>>           action[len++] = lower_32_bits(start);
>>           action[len++] = upper_32_bits(start);
>>           action[len++] = ilog2(length) - ilog2(SZ_4K);
>> diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c 
>> b/drivers/gpu/drm/xe/xe_guc_ct.c
>> index 5e00b75d3ca2..e5ed9022a0a2 100644
>> --- a/drivers/gpu/drm/xe/xe_guc_ct.c
>> +++ b/drivers/gpu/drm/xe/xe_guc_ct.c
>> @@ -783,13 +783,13 @@ static int parse_g2h_response(struct xe_guc_ct 
>> *ct, u32 *msg, u32 len)
>>       if (type == GUC_HXG_TYPE_RESPONSE_FAILURE) {
>>           g2h_fence->fail = true;
>>           g2h_fence->error =
>> -            FIELD_GET(GUC_HXG_FAILURE_MSG_0_ERROR, msg[0]);
>> +            FIELD_GET(GUC_HXG_FAILURE_MSG_0_ERROR, msg[1]);
>>           g2h_fence->hint =
>> -            FIELD_GET(GUC_HXG_FAILURE_MSG_0_HINT, msg[0]);
>> +            FIELD_GET(GUC_HXG_FAILURE_MSG_0_HINT, msg[1]);
>>       } else if (type == GUC_HXG_TYPE_NO_RESPONSE_RETRY) {
>>           g2h_fence->retry = true;
>>           g2h_fence->reason =
>> -            FIELD_GET(GUC_HXG_RETRY_MSG_0_REASON, msg[0]);
>> +            FIELD_GET(GUC_HXG_RETRY_MSG_0_REASON, msg[1]);
>>       } else if (g2h_fence->response_buffer) {
>>           g2h_fence->response_len = response_len;
>>           memcpy(g2h_fence->response_buffer, msg + GUC_CTB_MSG_MIN_LEN,
>> diff --git a/drivers/gpu/drm/xe/xe_migrate.c 
>> b/drivers/gpu/drm/xe/xe_migrate.c
>> index e8978440c725..fee4c0028a2f 100644
>> --- a/drivers/gpu/drm/xe/xe_migrate.c
>> +++ b/drivers/gpu/drm/xe/xe_migrate.c
>> @@ -1049,8 +1049,10 @@ xe_migrate_update_pgtables_cpu(struct 
>> xe_migrate *m,
>>           return ERR_PTR(-ETIME);
>>         if (wait_vm && !dma_resv_test_signaled(&vm->resv,
>> -                           DMA_RESV_USAGE_BOOKKEEP))
>> +                           DMA_RESV_USAGE_BOOKKEEP)) {
>> +        vm_dbg(&vm->xe->drm, "wait on VM for munmap");
>>           return ERR_PTR(-ETIME);
>> +    }
>>         if (ops->pre_commit) {
>>           err = ops->pre_commit(pt_update);
>> @@ -1138,7 +1140,8 @@ xe_migrate_update_pgtables(struct xe_migrate *m,
>>       u64 addr;
>>       int err = 0;
>>       bool usm = !eng && xe->info.supports_usm;
>> -    bool first_munmap_rebind = vma && vma->first_munmap_rebind;
>> +    bool first_munmap_rebind = vma &&
>> +        vma->gpuva.flags & XE_VMA_FIRST_REBIND;
>>         /* Use the CPU if no in syncs and engine is idle */
>>       if (no_in_syncs(syncs, num_syncs) && (!eng || 
>> xe_engine_is_idle(eng))) {
>> @@ -1259,6 +1262,7 @@ xe_migrate_update_pgtables(struct xe_migrate *m,
>>        * trigger preempts before moving forward
>>        */
>>       if (first_munmap_rebind) {
>> +        vm_dbg(&vm->xe->drm, "wait on first_munmap_rebind");
>>           err = job_add_deps(job, &vm->resv,
>>                      DMA_RESV_USAGE_BOOKKEEP);
>>           if (err)
>> diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
>> index 6b2943efcdbc..37a1ce6f62a3 100644
>> --- a/drivers/gpu/drm/xe/xe_pt.c
>> +++ b/drivers/gpu/drm/xe/xe_pt.c
>> @@ -94,7 +94,7 @@ static dma_addr_t vma_addr(struct xe_vma *vma, u64 
>> offset,
>>                   &cur);
>>           return xe_res_dma(&cur) + offset;
>>       } else {
>> -        return xe_bo_addr(vma->bo, offset, page_size, is_vram);
>> +        return xe_bo_addr(xe_vma_bo(vma), offset, page_size, is_vram);
>>       }
>>   }
>>   @@ -159,7 +159,7 @@ u64 gen8_pte_encode(struct xe_vma *vma, struct 
>> xe_bo *bo,
>>         if (is_vram) {
>>           pte |= GEN12_PPGTT_PTE_LM;
>> -        if (vma && vma->use_atomic_access_pte_bit)
>> +        if (vma && vma->gpuva.flags & XE_VMA_ATOMIC_PTE_BIT)
>>               pte |= GEN12_USM_PPGTT_PTE_AE;
>>       }
>>   @@ -738,7 +738,7 @@ static int
>>   xe_pt_stage_bind(struct xe_gt *gt, struct xe_vma *vma,
>>            struct xe_vm_pgtable_update *entries, u32 *num_entries)
>>   {
>> -    struct xe_bo *bo = vma->bo;
>> +    struct xe_bo *bo = xe_vma_bo(vma);
>>       bool is_vram = !xe_vma_is_userptr(vma) && bo && xe_bo_is_vram(bo);
>>       struct xe_res_cursor curs;
>>       struct xe_pt_stage_bind_walk xe_walk = {
>> @@ -747,22 +747,23 @@ xe_pt_stage_bind(struct xe_gt *gt, struct 
>> xe_vma *vma,
>>               .shifts = xe_normal_pt_shifts,
>>               .max_level = XE_PT_HIGHEST_LEVEL,
>>           },
>> -        .vm = vma->vm,
>> +        .vm = xe_vma_vm(vma),
>>           .gt = gt,
>>           .curs = &curs,
>> -        .va_curs_start = vma->start,
>> -        .pte_flags = vma->pte_flags,
>> +        .va_curs_start = xe_vma_start(vma),
>> +        .pte_flags = xe_vma_read_only(vma) ? PTE_READ_ONLY : 0,
>>           .wupd.entries = entries,
>> -        .needs_64K = (vma->vm->flags & XE_VM_FLAGS_64K) && is_vram,
>> +        .needs_64K = (xe_vma_vm(vma)->flags & XE_VM_FLAGS_64K) &&
>> +            is_vram,
>>       };
>> -    struct xe_pt *pt = vma->vm->pt_root[gt->info.id];
>> +    struct xe_pt *pt = xe_vma_vm(vma)->pt_root[gt->info.id];
>>       int ret;
>>         if (is_vram) {
>>           struct xe_gt *bo_gt = xe_bo_to_gt(bo);
>>             xe_walk.default_pte = GEN12_PPGTT_PTE_LM;
>> -        if (vma && vma->use_atomic_access_pte_bit)
>> +        if (vma && vma->gpuva.flags & XE_VMA_ATOMIC_PTE_BIT)
>>               xe_walk.default_pte |= GEN12_USM_PPGTT_PTE_AE;
>>           xe_walk.dma_offset = bo_gt->mem.vram.io_start -
>>               gt_to_xe(gt)->mem.vram.io_start;
>> @@ -778,17 +779,16 @@ xe_pt_stage_bind(struct xe_gt *gt, struct 
>> xe_vma *vma,
>>         xe_bo_assert_held(bo);
>>       if (xe_vma_is_userptr(vma))
>> -        xe_res_first_sg(vma->userptr.sg, 0, vma->end - vma->start + 1,
>> -                &curs);
>> +        xe_res_first_sg(vma->userptr.sg, 0, xe_vma_size(vma), &curs);
>>       else if (xe_bo_is_vram(bo) || xe_bo_is_stolen(bo))
>> -        xe_res_first(bo->ttm.resource, vma->bo_offset,
>> -                 vma->end - vma->start + 1, &curs);
>> +        xe_res_first(bo->ttm.resource, xe_vma_bo_offset(vma),
>> +                 xe_vma_size(vma), &curs);
>>       else
>> -        xe_res_first_sg(xe_bo_get_sg(bo), vma->bo_offset,
>> -                vma->end - vma->start + 1, &curs);
>> +        xe_res_first_sg(xe_bo_get_sg(bo), xe_vma_bo_offset(vma),
>> +                xe_vma_size(vma), &curs);
>>   -    ret = drm_pt_walk_range(&pt->drm, pt->level, vma->start, 
>> vma->end + 1,
>> -                &xe_walk.drm);
>> +    ret = drm_pt_walk_range(&pt->drm, pt->level, xe_vma_start(vma),
>> +                xe_vma_end(vma), &xe_walk.drm);
>>         *num_entries = xe_walk.wupd.num_used_entries;
>>       return ret;
>> @@ -923,13 +923,13 @@ bool xe_pt_zap_ptes(struct xe_gt *gt, struct 
>> xe_vma *vma)
>>           },
>>           .gt = gt,
>>       };
>> -    struct xe_pt *pt = vma->vm->pt_root[gt->info.id];
>> +    struct xe_pt *pt = xe_vma_vm(vma)->pt_root[gt->info.id];
>>         if (!(vma->gt_present & BIT(gt->info.id)))
>>           return false;
>>   -    (void)drm_pt_walk_shared(&pt->drm, pt->level, vma->start, 
>> vma->end + 1,
>> -                 &xe_walk.drm);
>> +    (void)drm_pt_walk_shared(&pt->drm, pt->level, xe_vma_start(vma),
>> +                 xe_vma_end(vma), &xe_walk.drm);
>>         return xe_walk.needs_invalidate;
>>   }
>> @@ -966,21 +966,21 @@ static void xe_pt_abort_bind(struct xe_vma *vma,
>>               continue;
>>             for (j = 0; j < entries[i].qwords; j++)
>> -            xe_pt_destroy(entries[i].pt_entries[j].pt, 
>> vma->vm->flags, NULL);
>> +            xe_pt_destroy(entries[i].pt_entries[j].pt, 
>> xe_vma_vm(vma)->flags, NULL);
>>           kfree(entries[i].pt_entries);
>>       }
>>   }
>>     static void xe_pt_commit_locks_assert(struct xe_vma *vma)
>>   {
>> -    struct xe_vm *vm = vma->vm;
>> +    struct xe_vm *vm = xe_vma_vm(vma);
>>         lockdep_assert_held(&vm->lock);
>>         if (xe_vma_is_userptr(vma))
>> lockdep_assert_held_read(&vm->userptr.notifier_lock);
>>       else
>> -        dma_resv_assert_held(vma->bo->ttm.base.resv);
>> +        dma_resv_assert_held(xe_vma_bo(vma)->ttm.base.resv);
>>         dma_resv_assert_held(&vm->resv);
>>   }
>> @@ -1013,7 +1013,7 @@ static void xe_pt_commit_bind(struct xe_vma *vma,
>>                 if (xe_pt_entry(pt_dir, j_))
>>                   xe_pt_destroy(xe_pt_entry(pt_dir, j_),
>> -                          vma->vm->flags, deferred);
>> +                          xe_vma_vm(vma)->flags, deferred);
>>                 pt_dir->dir.entries[j_] = &newpte->drm;
>>           }
>> @@ -1074,7 +1074,7 @@ static int xe_pt_userptr_inject_eagain(struct 
>> xe_vma *vma)
>>       static u32 count;
>>         if (count++ % divisor == divisor - 1) {
>> -        struct xe_vm *vm = vma->vm;
>> +        struct xe_vm *vm = xe_vma_vm(vma);
>>             vma->userptr.divisor = divisor << 1;
>>           spin_lock(&vm->userptr.invalidated_lock);
>> @@ -1117,7 +1117,7 @@ static int xe_pt_userptr_pre_commit(struct 
>> xe_migrate_pt_update *pt_update)
>>           container_of(pt_update, typeof(*userptr_update), base);
>>       struct xe_vma *vma = pt_update->vma;
>>       unsigned long notifier_seq = vma->userptr.notifier_seq;
>> -    struct xe_vm *vm = vma->vm;
>> +    struct xe_vm *vm = xe_vma_vm(vma);
>>         userptr_update->locked = false;
>>   @@ -1288,20 +1288,20 @@ __xe_pt_bind_vma(struct xe_gt *gt, struct 
>> xe_vma *vma, struct xe_engine *e,
>>           },
>>           .bind = true,
>>       };
>> -    struct xe_vm *vm = vma->vm;
>> +    struct xe_vm *vm = xe_vma_vm(vma);
>>       u32 num_entries;
>>       struct dma_fence *fence;
>>       struct invalidation_fence *ifence = NULL;
>>       int err;
>>         bind_pt_update.locked = false;
>> -    xe_bo_assert_held(vma->bo);
>> +    xe_bo_assert_held(xe_vma_bo(vma));
>>       xe_vm_assert_held(vm);
>>       XE_BUG_ON(xe_gt_is_media_type(gt));
>>   -    vm_dbg(&vma->vm->xe->drm,
>> +    vm_dbg(&xe_vma_vm(vma)->xe->drm,
>>              "Preparing bind, with range [%llx...%llx) engine %p.\n",
>> -           vma->start, vma->end, e);
>> +           xe_vma_start(vma), xe_vma_end(vma) - 1, e);
>>         err = xe_pt_prepare_bind(gt, vma, entries, &num_entries, 
>> rebind);
>>       if (err)
>> @@ -1310,23 +1310,28 @@ __xe_pt_bind_vma(struct xe_gt *gt, struct 
>> xe_vma *vma, struct xe_engine *e,
>>         xe_vm_dbg_print_entries(gt_to_xe(gt), entries, num_entries);
>>   -    if (rebind && !xe_vm_no_dma_fences(vma->vm)) {
>> +    if (rebind && !xe_vm_no_dma_fences(xe_vma_vm(vma))) {
>>           ifence = kzalloc(sizeof(*ifence), GFP_KERNEL);
>>           if (!ifence)
>>               return ERR_PTR(-ENOMEM);
>>       }
>>         fence = xe_migrate_update_pgtables(gt->migrate,
>> -                       vm, vma->bo,
>> +                       vm, xe_vma_bo(vma),
>>                          e ? e : vm->eng[gt->info.id],
>>                          entries, num_entries,
>>                          syncs, num_syncs,
>>                          &bind_pt_update.base);
>>       if (!IS_ERR(fence)) {
>> +        bool last_munmap_rebind = vma->gpuva.flags & 
>> XE_VMA_LAST_REBIND;
>>           LLIST_HEAD(deferred);
>>   +
>> +        if (last_munmap_rebind)
>> +            vm_dbg(&vm->xe->drm, "last_munmap_rebind");
>> +
>>           /* TLB invalidation must be done before signaling rebind */
>> -        if (rebind && !xe_vm_no_dma_fences(vma->vm)) {
>> +        if (rebind && !xe_vm_no_dma_fences(xe_vma_vm(vma))) {
>>               int err = invalidation_fence_init(gt, ifence, fence,
>>                                 vma);
>>               if (err) {
>> @@ -1339,12 +1344,12 @@ __xe_pt_bind_vma(struct xe_gt *gt, struct 
>> xe_vma *vma, struct xe_engine *e,
>>             /* add shared fence now for pagetable delayed destroy */
>>           dma_resv_add_fence(&vm->resv, fence, !rebind &&
>> -                   vma->last_munmap_rebind ?
>> +                   last_munmap_rebind ?
>>                      DMA_RESV_USAGE_KERNEL :
>>                      DMA_RESV_USAGE_BOOKKEEP);
>>   -        if (!xe_vma_is_userptr(vma) && !vma->bo->vm)
>> -            dma_resv_add_fence(vma->bo->ttm.base.resv, fence,
>> +        if (!xe_vma_is_userptr(vma) && !xe_vma_bo(vma)->vm)
>> + dma_resv_add_fence(xe_vma_bo(vma)->ttm.base.resv, fence,
>>                          DMA_RESV_USAGE_BOOKKEEP);
>>           xe_pt_commit_bind(vma, entries, num_entries, rebind,
>>                     bind_pt_update.locked ? &deferred : NULL);
>> @@ -1357,8 +1362,7 @@ __xe_pt_bind_vma(struct xe_gt *gt, struct 
>> xe_vma *vma, struct xe_engine *e,
>>               up_read(&vm->userptr.notifier_lock);
>>               xe_bo_put_commit(&deferred);
>>           }
>> -        if (!rebind && vma->last_munmap_rebind &&
>> -            xe_vm_in_compute_mode(vm))
>> +        if (!rebind && last_munmap_rebind && xe_vm_in_compute_mode(vm))
>>               queue_work(vm->xe->ordered_wq,
>>                      &vm->preempt.rebind_work);
>>       } else {
>> @@ -1506,14 +1510,14 @@ static unsigned int xe_pt_stage_unbind(struct 
>> xe_gt *gt, struct xe_vma *vma,
>>               .max_level = XE_PT_HIGHEST_LEVEL,
>>           },
>>           .gt = gt,
>> -        .modified_start = vma->start,
>> -        .modified_end = vma->end + 1,
>> +        .modified_start = xe_vma_start(vma),
>> +        .modified_end = xe_vma_end(vma),
>>           .wupd.entries = entries,
>>       };
>> -    struct xe_pt *pt = vma->vm->pt_root[gt->info.id];
>> +    struct xe_pt *pt = xe_vma_vm(vma)->pt_root[gt->info.id];
>>   -    (void)drm_pt_walk_shared(&pt->drm, pt->level, vma->start, 
>> vma->end + 1,
>> -                 &xe_walk.drm);
>> +    (void)drm_pt_walk_shared(&pt->drm, pt->level, xe_vma_start(vma),
>> +                 xe_vma_end(vma), &xe_walk.drm);
>>         return xe_walk.wupd.num_used_entries;
>>   }
>> @@ -1525,7 +1529,7 @@ xe_migrate_clear_pgtable_callback(struct 
>> xe_migrate_pt_update *pt_update,
>>                     const struct xe_vm_pgtable_update *update)
>>   {
>>       struct xe_vma *vma = pt_update->vma;
>> -    u64 empty = __xe_pt_empty_pte(gt, vma->vm, update->pt->level);
>> +    u64 empty = __xe_pt_empty_pte(gt, xe_vma_vm(vma), 
>> update->pt->level);
>>       int i;
>>         XE_BUG_ON(xe_gt_is_media_type(gt));
>> @@ -1563,7 +1567,7 @@ xe_pt_commit_unbind(struct xe_vma *vma,
>>                    i++) {
>>                   if (xe_pt_entry(pt_dir, i))
>>                       xe_pt_destroy(xe_pt_entry(pt_dir, i),
>> -                              vma->vm->flags, deferred);
>> +                              xe_vma_vm(vma)->flags, deferred);
>>                     pt_dir->dir.entries[i] = NULL;
>>               }
>> @@ -1612,19 +1616,19 @@ __xe_pt_unbind_vma(struct xe_gt *gt, struct 
>> xe_vma *vma, struct xe_engine *e,
>>               .vma = vma,
>>           },
>>       };
>> -    struct xe_vm *vm = vma->vm;
>> +    struct xe_vm *vm = xe_vma_vm(vma);
>>       u32 num_entries;
>>       struct dma_fence *fence = NULL;
>>       struct invalidation_fence *ifence;
>>       LLIST_HEAD(deferred);
>>   -    xe_bo_assert_held(vma->bo);
>> +    xe_bo_assert_held(xe_vma_bo(vma));
>>       xe_vm_assert_held(vm);
>>       XE_BUG_ON(xe_gt_is_media_type(gt));
>>   -    vm_dbg(&vma->vm->xe->drm,
>> +    vm_dbg(&xe_vma_vm(vma)->xe->drm,
>>              "Preparing unbind, with range [%llx...%llx) engine %p.\n",
>> -           vma->start, vma->end, e);
>> +           xe_vma_start(vma), xe_vma_end(vma) - 1, e);
>>         num_entries = xe_pt_stage_unbind(gt, vma, entries);
>>       XE_BUG_ON(num_entries > ARRAY_SIZE(entries));
>> @@ -1663,8 +1667,8 @@ __xe_pt_unbind_vma(struct xe_gt *gt, struct 
>> xe_vma *vma, struct xe_engine *e,
>>                      DMA_RESV_USAGE_BOOKKEEP);
>>             /* This fence will be installed by caller when doing 
>> eviction */
>> -        if (!xe_vma_is_userptr(vma) && !vma->bo->vm)
>> -            dma_resv_add_fence(vma->bo->ttm.base.resv, fence,
>> +        if (!xe_vma_is_userptr(vma) && !xe_vma_bo(vma)->vm)
>> + dma_resv_add_fence(xe_vma_bo(vma)->ttm.base.resv, fence,
>>                          DMA_RESV_USAGE_BOOKKEEP);
>>           xe_pt_commit_unbind(vma, entries, num_entries,
>>                       unbind_pt_update.locked ? &deferred : NULL);
>> diff --git a/drivers/gpu/drm/xe/xe_trace.h 
>> b/drivers/gpu/drm/xe/xe_trace.h
>> index 2f8eb7ebe9a7..12e12673fc91 100644
>> --- a/drivers/gpu/drm/xe/xe_trace.h
>> +++ b/drivers/gpu/drm/xe/xe_trace.h
>> @@ -18,7 +18,7 @@
>>   #include "xe_gt_types.h"
>>   #include "xe_guc_engine_types.h"
>>   #include "xe_sched_job.h"
>> -#include "xe_vm_types.h"
>> +#include "xe_vm.h"
>>     DECLARE_EVENT_CLASS(xe_gt_tlb_invalidation_fence,
>>               TP_PROTO(struct xe_gt_tlb_invalidation_fence *fence),
>> @@ -368,10 +368,10 @@ DECLARE_EVENT_CLASS(xe_vma,
>>                 TP_fast_assign(
>>                  __entry->vma = (unsigned long)vma;
>> -               __entry->asid = vma->vm->usm.asid;
>> -               __entry->start = vma->start;
>> -               __entry->end = vma->end;
>> -               __entry->ptr = (u64)vma->userptr.ptr;
>> +               __entry->asid = xe_vma_vm(vma)->usm.asid;
>> +               __entry->start = xe_vma_start(vma);
>> +               __entry->end = xe_vma_end(vma) - 1;
>> +               __entry->ptr = xe_vma_userptr(vma);
>>                  ),
>>                 TP_printk("vma=0x%016llx, asid=0x%05x, 
>> start=0x%012llx, end=0x%012llx, ptr=0x%012llx,",
>
> Is it possible to split this patch (for review and possible regression-
> debugging purposes) so that the new macros are introduced first in one patch
> and the relevant xe_vm changes follow in a separate patch?

By macros I also mean inlines / accessors.
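
Just to illustrate (a rough, not compile-tested sketch): a first patch could add
the accessors as thin wrappers around the existing fields, e.g.

static inline u64 xe_vma_start(struct xe_vma *vma)
{
	return vma->start;
}

/* ->end is inclusive in the current struct xe_vma, so size is end - start + 1 */
static inline u64 xe_vma_size(struct xe_vma *vma)
{
	return vma->end - vma->start + 1;
}

/* exclusive end, matching how the new helpers are used in this patch */
static inline u64 xe_vma_end(struct xe_vma *vma)
{
	return xe_vma_start(vma) + xe_vma_size(vma);
}

and mechanically convert the callers, so that this patch would only need to swap
the accessor bodies over to the gpuva fields.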

/Thomas

>
> Anyway, I'll continue reviewing the patch as is.
>
> /Thomas
>
>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [Intel-xe] [PATCH v5 4/8] drm/xe: Port Xe to GPUVA
  2023-04-04  1:42 ` [Intel-xe] [PATCH v5 4/8] drm/xe: Port Xe to GPUVA Matthew Brost
  2023-04-21 12:52   ` Thomas Hellström
@ 2023-04-21 14:54   ` Thomas Hellström
  2023-04-25 11:41   ` Thomas Hellström
  2 siblings, 0 replies; 21+ messages in thread
From: Thomas Hellström @ 2023-04-21 14:54 UTC (permalink / raw)
  To: Matthew Brost, intel-xe


On 4/4/23 03:42, Matthew Brost wrote:
> Rather than open coding VM binds and VMA tracking, use the GPUVA
> library. GPUVA provides a common infrastructure for VM binds to use mmap
> / munmap semantics and support for VK sparse bindings.
>
> The concepts are:
>
> 1) xe_vm inherits from drm_gpuva_manager
> 2) xe_vma inherits from drm_gpuva
> 3) xe_vma_op inherits from drm_gpuva_op
> 4) VM bind operations (MAP, UNMAP, PREFETCH, UNMAP_ALL) call into the
> GPUVA code to generate a VMA operations list which is parsed, committed,
> and executed.
>
> v2 (CI): Add break after default in case statement.
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
>   drivers/gpu/drm/xe/xe_bo.c                  |   10 +-
>   drivers/gpu/drm/xe/xe_device.c              |    2 +-
>   drivers/gpu/drm/xe/xe_exec.c                |    2 +-
>   drivers/gpu/drm/xe/xe_gt_pagefault.c        |   23 +-
>   drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c |   14 +-
>   drivers/gpu/drm/xe/xe_guc_ct.c              |    6 +-
>   drivers/gpu/drm/xe/xe_migrate.c             |    8 +-
>   drivers/gpu/drm/xe/xe_pt.c                  |  106 +-
>   drivers/gpu/drm/xe/xe_trace.h               |   10 +-
>   drivers/gpu/drm/xe/xe_vm.c                  | 1799 +++++++++----------
>   drivers/gpu/drm/xe/xe_vm.h                  |   66 +-
>   drivers/gpu/drm/xe/xe_vm_madvise.c          |   87 +-
>   drivers/gpu/drm/xe/xe_vm_types.h            |  165 +-
>   13 files changed, 1126 insertions(+), 1172 deletions(-)
>
...
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index bdf82d34eb66..fddbe8d5f984 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -25,10 +25,8 @@
>   #include "xe_preempt_fence.h"
>   #include "xe_pt.h"
>   #include "xe_res_cursor.h"
> -#include "xe_sync.h"
>   #include "xe_trace.h"
> -
> -#define TEST_VM_ASYNC_OPS_ERROR
> +#include "xe_sync.h"
>   
>   /**
>    * xe_vma_userptr_check_repin() - Advisory check for repin needed
> @@ -51,20 +49,19 @@ int xe_vma_userptr_check_repin(struct xe_vma *vma)
>   
>   int xe_vma_userptr_pin_pages(struct xe_vma *vma)
>   {
> -	struct xe_vm *vm = vma->vm;
> +	struct xe_vm *vm = xe_vma_vm(vma);
>   	struct xe_device *xe = vm->xe;
> -	const unsigned long num_pages =
> -		(vma->end - vma->start + 1) >> PAGE_SHIFT;
> +	const unsigned long num_pages = xe_vma_size(vma) >> PAGE_SHIFT;
>   	struct page **pages;
>   	bool in_kthread = !current->mm;
>   	unsigned long notifier_seq;
>   	int pinned, ret, i;
> -	bool read_only = vma->pte_flags & PTE_READ_ONLY;
> +	bool read_only = xe_vma_read_only(vma);
>   
>   	lockdep_assert_held(&vm->lock);
>   	XE_BUG_ON(!xe_vma_is_userptr(vma));
>   retry:
> -	if (vma->destroyed)
> +	if (vma->gpuva.flags & XE_VMA_DESTROYED)
>   		return 0;
>   
>   	notifier_seq = mmu_interval_read_begin(&vma->userptr.notifier);
> @@ -94,7 +91,8 @@ int xe_vma_userptr_pin_pages(struct xe_vma *vma)
>   	}
>   
>   	while (pinned < num_pages) {
> -		ret = get_user_pages_fast(vma->userptr.ptr + pinned * PAGE_SIZE,
> +		ret = get_user_pages_fast(xe_vma_userptr(vma) +
> +					  pinned * PAGE_SIZE,
>   					  num_pages - pinned,
>   					  read_only ? 0 : FOLL_WRITE,
>   					  &pages[pinned]);
> @@ -295,7 +293,7 @@ void xe_vm_fence_all_extobjs(struct xe_vm *vm, struct dma_fence *fence,
>   	struct xe_vma *vma;
>   
>   	list_for_each_entry(vma, &vm->extobj.list, extobj.link)
> -		dma_resv_add_fence(vma->bo->ttm.base.resv, fence, usage);
> +		dma_resv_add_fence(xe_vma_bo(vma)->ttm.base.resv, fence, usage);
>   }
>   
>   static void resume_and_reinstall_preempt_fences(struct xe_vm *vm)
> @@ -444,7 +442,7 @@ int xe_vm_lock_dma_resv(struct xe_vm *vm, struct ww_acquire_ctx *ww,
>   	INIT_LIST_HEAD(objs);
>   	list_for_each_entry(vma, &vm->extobj.list, extobj.link) {
>   		tv_bo->num_shared = num_shared;
> -		tv_bo->bo = &vma->bo->ttm;
> +		tv_bo->bo = &xe_vma_bo(vma)->ttm;
>   
>   		list_add_tail(&tv_bo->head, objs);
>   		tv_bo++;
> @@ -459,10 +457,10 @@ int xe_vm_lock_dma_resv(struct xe_vm *vm, struct ww_acquire_ctx *ww,
>   	spin_lock(&vm->notifier.list_lock);
>   	list_for_each_entry_safe(vma, next, &vm->notifier.rebind_list,
>   				 notifier.rebind_link) {
> -		xe_bo_assert_held(vma->bo);
> +		xe_bo_assert_held(xe_vma_bo(vma));
>   
>   		list_del_init(&vma->notifier.rebind_link);
> -		if (vma->gt_present && !vma->destroyed)
> +		if (vma->gt_present && !(vma->gpuva.flags & XE_VMA_DESTROYED))
>   			list_move_tail(&vma->rebind_link, &vm->rebind_list);
>   	}
>   	spin_unlock(&vm->notifier.list_lock);
> @@ -583,10 +581,11 @@ static void preempt_rebind_work_func(struct work_struct *w)
>   		goto out_unlock;
>   
>   	list_for_each_entry(vma, &vm->rebind_list, rebind_link) {
> -		if (xe_vma_is_userptr(vma) || vma->destroyed)
> +		if (xe_vma_is_userptr(vma) ||
> +		    vma->gpuva.flags & XE_VMA_DESTROYED)
>   			continue;
>   
> -		err = xe_bo_validate(vma->bo, vm, false);
> +		err = xe_bo_validate(xe_vma_bo(vma), vm, false);
>   		if (err)
>   			goto out_unlock;
>   	}
> @@ -645,17 +644,12 @@ static void preempt_rebind_work_func(struct work_struct *w)
>   	trace_xe_vm_rebind_worker_exit(vm);
>   }
>   
> -struct async_op_fence;
> -static int __xe_vm_bind(struct xe_vm *vm, struct xe_vma *vma,
> -			struct xe_engine *e, struct xe_sync_entry *syncs,
> -			u32 num_syncs, struct async_op_fence *afence);
> -
>   static bool vma_userptr_invalidate(struct mmu_interval_notifier *mni,
>   				   const struct mmu_notifier_range *range,
>   				   unsigned long cur_seq)
>   {
>   	struct xe_vma *vma = container_of(mni, struct xe_vma, userptr.notifier);
> -	struct xe_vm *vm = vma->vm;
> +	struct xe_vm *vm = xe_vma_vm(vma);
>   	struct dma_resv_iter cursor;
>   	struct dma_fence *fence;
>   	long err;
> @@ -679,7 +673,8 @@ static bool vma_userptr_invalidate(struct mmu_interval_notifier *mni,
>   	 * Tell exec and rebind worker they need to repin and rebind this
>   	 * userptr.
>   	 */
> -	if (!xe_vm_in_fault_mode(vm) && !vma->destroyed && vma->gt_present) {
> +	if (!xe_vm_in_fault_mode(vm) &&
> +	    !(vma->gpuva.flags & XE_VMA_DESTROYED) && vma->gt_present) {
Could we introduce an xe_vma_destroyed() helper for this check?
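
Something along these lines (sketch only):

static inline bool xe_vma_destroyed(struct xe_vma *vma)
{
	/* set under the userptr notifier lock when the vma is torn down */
	return vma->gpuva.flags & XE_VMA_DESTROYED;
}

so the condition above would read

	if (!xe_vm_in_fault_mode(vm) && !xe_vma_destroyed(vma) && vma->gt_present) {
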
>   		spin_lock(&vm->userptr.invalidated_lock);
>   		list_move_tail(&vma->userptr.invalidate_link,
>   			       &vm->userptr.invalidated);
> @@ -784,7 +779,8 @@ int xe_vm_userptr_check_repin(struct xe_vm *vm)
>   
>   static struct dma_fence *
>   xe_vm_bind_vma(struct xe_vma *vma, struct xe_engine *e,
> -	       struct xe_sync_entry *syncs, u32 num_syncs);
> +	       struct xe_sync_entry *syncs, u32 num_syncs,
> +	       bool first_op, bool last_op);
>   
>   struct dma_fence *xe_vm_rebind(struct xe_vm *vm, bool rebind_worker)
>   {
> @@ -805,7 +801,7 @@ struct dma_fence *xe_vm_rebind(struct xe_vm *vm, bool rebind_worker)
>   			trace_xe_vma_rebind_worker(vma);
>   		else
>   			trace_xe_vma_rebind_exec(vma);
> -		fence = xe_vm_bind_vma(vma, NULL, NULL, 0);
> +		fence = xe_vm_bind_vma(vma, NULL, NULL, 0, false, false);
>   		if (IS_ERR(fence))
>   			return fence;
>   	}
> @@ -833,6 +829,7 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
>   		return vma;
>   	}
>   
> +	/* FIXME: Way too many lists, should be able to reduce this */
Unrelated.
>   	INIT_LIST_HEAD(&vma->rebind_link);
>   	INIT_LIST_HEAD(&vma->unbind_link);
>   	INIT_LIST_HEAD(&vma->userptr_link);
> @@ -840,11 +837,12 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
>   	INIT_LIST_HEAD(&vma->notifier.rebind_link);
>   	INIT_LIST_HEAD(&vma->extobj.link);
>   
> -	vma->vm = vm;
> -	vma->start = start;
> -	vma->end = end;
> +	INIT_LIST_HEAD(&vma->gpuva.head);
> +	vma->gpuva.mgr = &vm->mgr;
> +	vma->gpuva.va.addr = start;
> +	vma->gpuva.va.range = end - start + 1;

Hm. This comment really belongs to one of the gpuva patches, but while 
"range" in statistics indeed means the difference between the highest and 
lowest values, in computer science I associate a range with a pair of 
first and last values, or a first value and a size. Why isn't "size" used 
here, and similarly for some of the arguments below?
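
To illustrate the point, the assignment above effectively means (restating it
with a comment, nothing new here):

	/* the span is [va.addr, va.addr + va.range), i.e. "range" is a size */
	vma->gpuva.va.addr = start;
	vma->gpuva.va.range = end - start + 1;

which would read less ambiguously as va.addr / va.size.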

>   	if (read_only)
> -		vma->pte_flags = PTE_READ_ONLY;
> +		vma->gpuva.flags |= XE_VMA_READ_ONLY;
>   
>   	if (gt_mask) {
>   		vma->gt_mask = gt_mask;
> @@ -855,22 +853,24 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
>   	}
>   
>   	if (vm->xe->info.platform == XE_PVC)
> -		vma->use_atomic_access_pte_bit = true;
> +		vma->gpuva.flags |= XE_VMA_ATOMIC_PTE_BIT;
>   
>   	if (bo) {
>   		xe_bo_assert_held(bo);
> -		vma->bo_offset = bo_offset_or_userptr;
> -		vma->bo = xe_bo_get(bo);
> -		list_add_tail(&vma->bo_link, &bo->vmas);
> +
> +		drm_gem_object_get(&bo->ttm.base);
> +		vma->gpuva.gem.obj = &bo->ttm.base;
> +		vma->gpuva.gem.offset = bo_offset_or_userptr;
> +		drm_gpuva_link(&vma->gpuva);
>   	} else /* userptr */ {
>   		u64 size = end - start + 1;
>   		int err;
>   
> -		vma->userptr.ptr = bo_offset_or_userptr;
> +		vma->gpuva.gem.offset = bo_offset_or_userptr;
>   
>   		err = mmu_interval_notifier_insert(&vma->userptr.notifier,
>   						   current->mm,
> -						   vma->userptr.ptr, size,
> +						   xe_vma_userptr(vma), size,
>   						   &vma_userptr_notifier_ops);
>   		if (err) {
>   			kfree(vma);
> @@ -888,16 +888,16 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
>   static void vm_remove_extobj(struct xe_vma *vma)
>   {
>   	if (!list_empty(&vma->extobj.link)) {
> -		vma->vm->extobj.entries--;
> +		xe_vma_vm(vma)->extobj.entries--;
>   		list_del_init(&vma->extobj.link);
>   	}
>   }
>   
>   static void xe_vma_destroy_late(struct xe_vma *vma)
>   {
> -	struct xe_vm *vm = vma->vm;
> +	struct xe_vm *vm = xe_vma_vm(vma);
>   	struct xe_device *xe = vm->xe;
> -	bool read_only = vma->pte_flags & PTE_READ_ONLY;
> +	bool read_only = xe_vma_read_only(vma);
>   
>   	if (xe_vma_is_userptr(vma)) {
>   		if (vma->userptr.sg) {
> @@ -917,7 +917,7 @@ static void xe_vma_destroy_late(struct xe_vma *vma)
>   		mmu_interval_notifier_remove(&vma->userptr.notifier);
>   		xe_vm_put(vm);
>   	} else {
> -		xe_bo_put(vma->bo);
> +		xe_bo_put(xe_vma_bo(vma));
>   	}
>   
>   	kfree(vma);
> @@ -942,21 +942,22 @@ static void vma_destroy_cb(struct dma_fence *fence,
>   
>   static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence)
>   {
> -	struct xe_vm *vm = vma->vm;
> +	struct xe_vm *vm = xe_vma_vm(vma);
>   
>   	lockdep_assert_held_write(&vm->lock);
>   	XE_BUG_ON(!list_empty(&vma->unbind_link));
>   
>   	if (xe_vma_is_userptr(vma)) {
> -		XE_WARN_ON(!vma->destroyed);
> +		XE_WARN_ON(!(vma->gpuva.flags & XE_VMA_DESTROYED));
> +
>   		spin_lock(&vm->userptr.invalidated_lock);
>   		list_del_init(&vma->userptr.invalidate_link);
>   		spin_unlock(&vm->userptr.invalidated_lock);
>   		list_del(&vma->userptr_link);
>   	} else {
> -		xe_bo_assert_held(vma->bo);
> -		list_del(&vma->bo_link);
> -		if (!vma->bo->vm)
> +		xe_bo_assert_held(xe_vma_bo(vma));
> +		drm_gpuva_unlink(&vma->gpuva);
> +		if (!xe_vma_bo(vma)->vm)
>   			vm_remove_extobj(vma);
>   	}
>   
> @@ -981,13 +982,13 @@ static void xe_vma_destroy_unlocked(struct xe_vma *vma)
>   {
>   	struct ttm_validate_buffer tv[2];
>   	struct ww_acquire_ctx ww;
> -	struct xe_bo *bo = vma->bo;
> +	struct xe_bo *bo = xe_vma_bo(vma);
>   	LIST_HEAD(objs);
>   	LIST_HEAD(dups);
>   	int err;
>   
>   	memset(tv, 0, sizeof(tv));
> -	tv[0].bo = xe_vm_ttm_bo(vma->vm);
> +	tv[0].bo = xe_vm_ttm_bo(xe_vma_vm(vma));
>   	list_add(&tv[0].head, &objs);
>   
>   	if (bo) {
> @@ -1004,77 +1005,61 @@ static void xe_vma_destroy_unlocked(struct xe_vma *vma)
>   		xe_bo_put(bo);
>   }
>   
> -static struct xe_vma *to_xe_vma(const struct rb_node *node)
> -{
> -	BUILD_BUG_ON(offsetof(struct xe_vma, vm_node) != 0);
> -	return (struct xe_vma *)node;
> -}
> -
> -static int xe_vma_cmp(const struct xe_vma *a, const struct xe_vma *b)
> -{
> -	if (a->end < b->start) {
> -		return -1;
> -	} else if (b->end < a->start) {
> -		return 1;
> -	} else {
> -		return 0;
> -	}
> -}
> -
> -static bool xe_vma_less_cb(struct rb_node *a, const struct rb_node *b)
> -{
> -	return xe_vma_cmp(to_xe_vma(a), to_xe_vma(b)) < 0;
> -}
> -
> -int xe_vma_cmp_vma_cb(const void *key, const struct rb_node *node)
> -{
> -	struct xe_vma *cmp = to_xe_vma(node);
> -	const struct xe_vma *own = key;
> -
> -	if (own->start > cmp->end)
> -		return 1;
> -
> -	if (own->end < cmp->start)
> -		return -1;
> -
> -	return 0;
> -}
> -
>   struct xe_vma *
> -xe_vm_find_overlapping_vma(struct xe_vm *vm, const struct xe_vma *vma)
> +xe_vm_find_overlapping_vma(struct xe_vm *vm, u64 start, u64 range)
>   {
> -	struct rb_node *node;
> +	struct drm_gpuva *gpuva;
>   
>   	if (xe_vm_is_closed(vm))
>   		return NULL;
>   
> -	XE_BUG_ON(vma->end >= vm->size);
> +	XE_BUG_ON(start + range > vm->size);
>   	lockdep_assert_held(&vm->lock);
>   
> -	node = rb_find(vma, &vm->vmas, xe_vma_cmp_vma_cb);
> +	gpuva = drm_gpuva_find_first(&vm->mgr, start, range);
>   
> -	return node ? to_xe_vma(node) : NULL;
> +	return gpuva ? gpuva_to_vma(gpuva) : NULL;
>   }
>   
>   static void xe_vm_insert_vma(struct xe_vm *vm, struct xe_vma *vma)
>   {
> -	XE_BUG_ON(vma->vm != vm);
> +	int err;
> +
> +	XE_BUG_ON(xe_vma_vm(vma) != vm);
>   	lockdep_assert_held(&vm->lock);
>   
> -	rb_add(&vma->vm_node, &vm->vmas, xe_vma_less_cb);
> +	err = drm_gpuva_insert(&vm->mgr, &vma->gpuva);
> +	XE_WARN_ON(err);
>   }
>   
> -static void xe_vm_remove_vma(struct xe_vm *vm, struct xe_vma *vma)
> +static void xe_vm_remove_vma(struct xe_vm *vm, struct xe_vma *vma, bool remove)
>   {
> -	XE_BUG_ON(vma->vm != vm);
> +	XE_BUG_ON(xe_vma_vm(vma) != vm);
>   	lockdep_assert_held(&vm->lock);
>   
> -	rb_erase(&vma->vm_node, &vm->vmas);
> +	if (remove)
> +		drm_gpuva_remove(&vma->gpuva);
>   	if (vm->usm.last_fault_vma == vma)
>   		vm->usm.last_fault_vma = NULL;
>   }
>   
> -static void async_op_work_func(struct work_struct *w);
> +static struct drm_gpuva_op *xe_vm_op_alloc(void)
> +{
> +	struct xe_vma_op *op;
> +
> +	op = kzalloc(sizeof(*op), GFP_KERNEL);
> +
> +	if (unlikely(!op))
> +		return NULL;
> +
> +	return &op->base;
> +}
> +
> +static struct drm_gpuva_fn_ops gpuva_ops = {
> +	.op_alloc = xe_vm_op_alloc,
> +};
> +
> +static void xe_vma_op_work_func(struct work_struct *w);
>   static void vm_destroy_work_func(struct work_struct *w);
>   
>   struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
> @@ -1094,7 +1079,6 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
>   
>   	vm->size = 1ull << xe_pt_shift(xe->info.vm_max_level + 1);
>   
> -	vm->vmas = RB_ROOT;
>   	vm->flags = flags;
>   
>   	init_rwsem(&vm->lock);
> @@ -1110,7 +1094,7 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
>   	spin_lock_init(&vm->notifier.list_lock);
>   
>   	INIT_LIST_HEAD(&vm->async_ops.pending);
> -	INIT_WORK(&vm->async_ops.work, async_op_work_func);
> +	INIT_WORK(&vm->async_ops.work, xe_vma_op_work_func);
>   	spin_lock_init(&vm->async_ops.lock);
>   
>   	INIT_WORK(&vm->destroy_work, vm_destroy_work_func);
> @@ -1130,6 +1114,8 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
>   	if (err)
>   		goto err_put;
>   
> +	drm_gpuva_manager_init(&vm->mgr, "Xe VM", 0, vm->size, 0, 0,
> +			       &gpuva_ops, 0);
>   	if (IS_DGFX(xe) && xe->info.vram_flags & XE_VRAM_FLAGS_NEED64K)
>   		vm->flags |= XE_VM_FLAGS_64K;
>   
> @@ -1235,6 +1221,7 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
>   			xe_pt_destroy(vm->pt_root[id], vm->flags, NULL);
>   	}
>   	dma_resv_unlock(&vm->resv);
> +	drm_gpuva_manager_destroy(&vm->mgr);
>   err_put:
>   	dma_resv_fini(&vm->resv);
>   	kfree(vm);
> @@ -1284,14 +1271,18 @@ static void vm_error_capture(struct xe_vm *vm, int err,
>   
>   void xe_vm_close_and_put(struct xe_vm *vm)
>   {
> -	struct rb_root contested = RB_ROOT;
> +	struct list_head contested;
>   	struct ww_acquire_ctx ww;
>   	struct xe_device *xe = vm->xe;
>   	struct xe_gt *gt;
> +	struct xe_vma *vma, *next_vma;
> +	DRM_GPUVA_ITER(it, &vm->mgr, 0);
>   	u8 id;
>   
>   	XE_BUG_ON(vm->preempt.num_engines);
>   
> +	INIT_LIST_HEAD(&contested);
> +
>   	vm->size = 0;
>   	smp_mb();
>   	flush_async_ops(vm);
> @@ -1308,24 +1299,25 @@ void xe_vm_close_and_put(struct xe_vm *vm)
>   
>   	down_write(&vm->lock);
>   	xe_vm_lock(vm, &ww, 0, false);
> -	while (vm->vmas.rb_node) {
> -		struct xe_vma *vma = to_xe_vma(vm->vmas.rb_node);
> +	drm_gpuva_iter_for_each(it) {
> +		vma = gpuva_to_vma(it.va);
>   
>   		if (xe_vma_is_userptr(vma)) {
>   			down_read(&vm->userptr.notifier_lock);
> -			vma->destroyed = true;
> +			vma->gpuva.flags |= XE_VMA_DESTROYED;
>   			up_read(&vm->userptr.notifier_lock);
>   		}
>   
> -		rb_erase(&vma->vm_node, &vm->vmas);
> +		xe_vm_remove_vma(vm, vma, false);
> +		drm_gpuva_iter_remove(&it);
>   
>   		/* easy case, remove from VMA? */
> -		if (xe_vma_is_userptr(vma) || vma->bo->vm) {
> +		if (xe_vma_is_userptr(vma) || xe_vma_bo(vma)->vm) {
>   			xe_vma_destroy(vma, NULL);
>   			continue;
>   		}
>   
> -		rb_add(&vma->vm_node, &contested, xe_vma_less_cb);
> +		list_add_tail(&vma->unbind_link, &contested);
>   	}
>   
>   	/*
> @@ -1348,19 +1340,14 @@ void xe_vm_close_and_put(struct xe_vm *vm)
>   	}
>   	xe_vm_unlock(vm, &ww);
>   
> -	if (contested.rb_node) {
> -
> -		/*
> -		 * VM is now dead, cannot re-add nodes to vm->vmas if it's NULL
> -		 * Since we hold a refcount to the bo, we can remove and free
> -		 * the members safely without locking.
> -		 */
> -		while (contested.rb_node) {
> -			struct xe_vma *vma = to_xe_vma(contested.rb_node);
> -
> -			rb_erase(&vma->vm_node, &contested);
> -			xe_vma_destroy_unlocked(vma);
> -		}
> +	/*
> +	 * VM is now dead, cannot re-add nodes to vm->vmas if it's NULL
> +	 * Since we hold a refcount to the bo, we can remove and free
> +	 * the members safely without locking.
> +	 */
> +	list_for_each_entry_safe(vma, next_vma, &contested, unbind_link) {
> +		list_del_init(&vma->unbind_link);
> +		xe_vma_destroy_unlocked(vma);
>   	}
>   
>   	if (vm->async_ops.error_capture.addr)
> @@ -1369,6 +1356,8 @@ void xe_vm_close_and_put(struct xe_vm *vm)
>   	XE_WARN_ON(!list_empty(&vm->extobj.list));
>   	up_write(&vm->lock);
>   
> +	drm_gpuva_manager_destroy(&vm->mgr);
> +
>   	mutex_lock(&xe->usm.lock);
>   	if (vm->flags & XE_VM_FLAG_FAULT_MODE)
>   		xe->usm.num_vm_in_fault_mode--;
> @@ -1456,13 +1445,14 @@ u64 xe_vm_pdp4_descriptor(struct xe_vm *vm, struct xe_gt *full_gt)
>   
>   static struct dma_fence *
>   xe_vm_unbind_vma(struct xe_vma *vma, struct xe_engine *e,
> -		 struct xe_sync_entry *syncs, u32 num_syncs)
> +		 struct xe_sync_entry *syncs, u32 num_syncs,
> +		 bool first_op, bool last_op)
>   {
>   	struct xe_gt *gt;
>   	struct dma_fence *fence = NULL;
>   	struct dma_fence **fences = NULL;
>   	struct dma_fence_array *cf = NULL;
> -	struct xe_vm *vm = vma->vm;
> +	struct xe_vm *vm = xe_vma_vm(vma);
>   	int cur_fence = 0, i;
>   	int number_gts = hweight_long(vma->gt_present);
>   	int err;
> @@ -1483,7 +1473,8 @@ xe_vm_unbind_vma(struct xe_vma *vma, struct xe_engine *e,
>   
>   		XE_BUG_ON(xe_gt_is_media_type(gt));
>   
> -		fence = __xe_pt_unbind_vma(gt, vma, e, syncs, num_syncs);
> +		fence = __xe_pt_unbind_vma(gt, vma, e, first_op ? syncs : NULL,
> +					   first_op ? num_syncs : 0);
>   		if (IS_ERR(fence)) {
>   			err = PTR_ERR(fence);
>   			goto err_fences;
> @@ -1509,7 +1500,7 @@ xe_vm_unbind_vma(struct xe_vma *vma, struct xe_engine *e,
>   		}
>   	}
>   
> -	for (i = 0; i < num_syncs; i++)
> +	for (i = 0; last_op && i < num_syncs; i++)
>   		xe_sync_entry_signal(&syncs[i], NULL, cf ? &cf->base : fence);
>   
>   	return cf ? &cf->base : !fence ? dma_fence_get_stub() : fence;
> @@ -1528,13 +1519,14 @@ xe_vm_unbind_vma(struct xe_vma *vma, struct xe_engine *e,
>   
>   static struct dma_fence *
>   xe_vm_bind_vma(struct xe_vma *vma, struct xe_engine *e,
> -	       struct xe_sync_entry *syncs, u32 num_syncs)
> +	       struct xe_sync_entry *syncs, u32 num_syncs,
> +	       bool first_op, bool last_op)
>   {
>   	struct xe_gt *gt;
>   	struct dma_fence *fence;
>   	struct dma_fence **fences = NULL;
>   	struct dma_fence_array *cf = NULL;
> -	struct xe_vm *vm = vma->vm;
> +	struct xe_vm *vm = xe_vma_vm(vma);
>   	int cur_fence = 0, i;
>   	int number_gts = hweight_long(vma->gt_mask);
>   	int err;
> @@ -1554,7 +1546,8 @@ xe_vm_bind_vma(struct xe_vma *vma, struct xe_engine *e,
>   			goto next;
>   
>   		XE_BUG_ON(xe_gt_is_media_type(gt));
> -		fence = __xe_pt_bind_vma(gt, vma, e, syncs, num_syncs,
> +		fence = __xe_pt_bind_vma(gt, vma, e, first_op ? syncs : NULL,
Do we need that conditional operator expression here? Could we just always pass "syncs"?
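
I.e. something like this (untested):

		fence = __xe_pt_bind_vma(gt, vma, e, syncs,
					 first_op ? num_syncs : 0,
					 vma->gt_present & BIT(id));

on the assumption that a zero num_syncs makes the syncs pointer a don't-care.
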
> +					 first_op ? num_syncs : 0,
>   					 vma->gt_present & BIT(id));
>   		if (IS_ERR(fence)) {
>   			err = PTR_ERR(fence);
> @@ -1581,7 +1574,7 @@ xe_vm_bind_vma(struct xe_vma *vma, struct xe_engine *e,
>   		}
>   	}
>   
> -	for (i = 0; i < num_syncs; i++)
> +	for (i = 0; last_op && i < num_syncs; i++)
>   		xe_sync_entry_signal(&syncs[i], NULL, cf ? &cf->base : fence);
>   
>   	return cf ? &cf->base : fence;
> @@ -1680,15 +1673,27 @@ int xe_vm_async_fence_wait_start(struct dma_fence *fence)
>   
>   static int __xe_vm_bind(struct xe_vm *vm, struct xe_vma *vma,
>   			struct xe_engine *e, struct xe_sync_entry *syncs,
> -			u32 num_syncs, struct async_op_fence *afence)
> +			u32 num_syncs, struct async_op_fence *afence,
> +			bool immediate, bool first_op, bool last_op)
>   {
>   	struct dma_fence *fence;
>   
>   	xe_vm_assert_held(vm);
>   
> -	fence = xe_vm_bind_vma(vma, e, syncs, num_syncs);
> -	if (IS_ERR(fence))
> -		return PTR_ERR(fence);
> +	if (immediate) {
> +		fence = xe_vm_bind_vma(vma, e, syncs, num_syncs, first_op,
> +				       last_op);
> +		if (IS_ERR(fence))
> +			return PTR_ERR(fence);
> +	} else {
> +		int i;
> +
> +		XE_BUG_ON(!xe_vm_in_fault_mode(vm));
> +
> +		fence = dma_fence_get_stub();
> +		for (i = 0; last_op && i < num_syncs; i++)
This construct is often misread, with people missing the last_op condition. 
Can we do if (last_op) { } instead?
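
E.g. (just a sketch, same behavior as the loop above):

		if (last_op) {
			for (i = 0; i < num_syncs; i++)
				xe_sync_entry_signal(&syncs[i], NULL, fence);
		}

which is harder to misread.
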
> +			xe_sync_entry_signal(&syncs[i], NULL, fence);
> +	}
>   	if (afence)
>   		add_async_op_fence_cb(vm, fence, afence);
>   
> @@ -1698,32 +1703,35 @@ static int __xe_vm_bind(struct xe_vm *vm, struct xe_vma *vma,
>   
>   static int xe_vm_bind(struct xe_vm *vm, struct xe_vma *vma, struct xe_engine *e,
>   		      struct xe_bo *bo, struct xe_sync_entry *syncs,
> -		      u32 num_syncs, struct async_op_fence *afence)
> +		      u32 num_syncs, struct async_op_fence *afence,
> +		      bool immediate, bool first_op, bool last_op)

Hm. I wonder whether we could do the in-syncs / out-syncs split further 
out, so that we don't have to pass first_op / last_op down here at all? Or is 
something prohibiting this?
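
Roughly what I have in mind, as caller-level pseudo code (xe_vm_wait_in_syncs(),
xe_vma_op_execute(), xe_vm_signal_out_syncs() and ops_list are made-up names,
just to show the shape, no error handling on the fence):

	/* wait the in-syncs once, before the first operation */
	err = xe_vm_wait_in_syncs(vm, syncs, num_syncs);
	if (err)
		return err;

	/* run the whole operation list without any first_op/last_op plumbing */
	list_for_each_entry(op, &ops_list, link)
		fence = xe_vma_op_execute(vm, op);

	/* signal the out-syncs once, on the fence of the last operation */
	xe_vm_signal_out_syncs(syncs, num_syncs, fence);

so the per-vma bind/unbind helpers would only ever deal with a plain fence.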

>   {
>   	int err;
>   
>   	xe_vm_assert_held(vm);
>   	xe_bo_assert_held(bo);
>   
> -	if (bo) {
> +	if (bo && immediate) {
>   		err = xe_bo_validate(bo, vm, true);
>   		if (err)
>   			return err;
>   	}
>   
> -	return __xe_vm_bind(vm, vma, e, syncs, num_syncs, afence);
> +	return __xe_vm_bind(vm, vma, e, syncs, num_syncs, afence, immediate,
> +			    first_op, last_op);
>   }
>   
>   static int xe_vm_unbind(struct xe_vm *vm, struct xe_vma *vma,
>   			struct xe_engine *e, struct xe_sync_entry *syncs,
> -			u32 num_syncs, struct async_op_fence *afence)
> +			u32 num_syncs, struct async_op_fence *afence,
> +			bool first_op, bool last_op)
>   {
>   	struct dma_fence *fence;
>   
>   	xe_vm_assert_held(vm);
> -	xe_bo_assert_held(vma->bo);
> +	xe_bo_assert_held(xe_vma_bo(vma));
>   
> -	fence = xe_vm_unbind_vma(vma, e, syncs, num_syncs);
> +	fence = xe_vm_unbind_vma(vma, e, syncs, num_syncs, first_op, last_op);
>   	if (IS_ERR(fence))
>   		return PTR_ERR(fence);
>   	if (afence)
> @@ -1946,26 +1954,27 @@ static const u32 region_to_mem_type[] = {
>   static int xe_vm_prefetch(struct xe_vm *vm, struct xe_vma *vma,
>   			  struct xe_engine *e, u32 region,
>   			  struct xe_sync_entry *syncs, u32 num_syncs,
> -			  struct async_op_fence *afence)
> +			  struct async_op_fence *afence, bool first_op,
> +			  bool last_op)
>   {
>   	int err;
>   
>   	XE_BUG_ON(region > ARRAY_SIZE(region_to_mem_type));
>   
>   	if (!xe_vma_is_userptr(vma)) {
> -		err = xe_bo_migrate(vma->bo, region_to_mem_type[region]);
> +		err = xe_bo_migrate(xe_vma_bo(vma), region_to_mem_type[region]);
>   		if (err)
>   			return err;
>   	}
>   
>   	if (vma->gt_mask != (vma->gt_present & ~vma->usm.gt_invalidated)) {
> -		return xe_vm_bind(vm, vma, e, vma->bo, syncs, num_syncs,
> -				  afence);
> +		return xe_vm_bind(vm, vma, e, xe_vma_bo(vma), syncs, num_syncs,
> +				  afence, true, first_op, last_op);
>   	} else {
>   		int i;
>   
>   		/* Nothing to do, signal fences now */
> -		for (i = 0; i < num_syncs; i++)
> +		for (i = 0; last_op && i < num_syncs; i++)

Here is another instance of that construct.

>   			xe_sync_entry_signal(&syncs[i], NULL,
>   					     dma_fence_get_stub());
>   		if (afence)
> @@ -1976,29 +1985,6 @@ static int xe_vm_prefetch(struct xe_vm *vm, struct xe_vma *vma,
>   
>   #define VM_BIND_OP(op)	(op & 0xffff)
>   
> -static int __vm_bind_ioctl(struct xe_vm *vm, struct xe_vma *vma,
> -			   struct xe_engine *e, struct xe_bo *bo, u32 op,
> -			   u32 region, struct xe_sync_entry *syncs,
> -			   u32 num_syncs, struct async_op_fence *afence)
> -{
> -	switch (VM_BIND_OP(op)) {
> -	case XE_VM_BIND_OP_MAP:
> -		return xe_vm_bind(vm, vma, e, bo, syncs, num_syncs, afence);
> -	case XE_VM_BIND_OP_UNMAP:
> -	case XE_VM_BIND_OP_UNMAP_ALL:
> -		return xe_vm_unbind(vm, vma, e, syncs, num_syncs, afence);
> -	case XE_VM_BIND_OP_MAP_USERPTR:
> -		return xe_vm_bind(vm, vma, e, NULL, syncs, num_syncs, afence);
> -	case XE_VM_BIND_OP_PREFETCH:
> -		return xe_vm_prefetch(vm, vma, e, region, syncs, num_syncs,
> -				      afence);
> -		break;
> -	default:
> -		XE_BUG_ON("NOT POSSIBLE");
> -		return -EINVAL;
> -	}
> -}
> -

Will continue on Monday.



^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [Intel-xe] [PATCH v5 4/8] drm/xe: Port Xe to GPUVA
  2023-04-04  1:42 ` [Intel-xe] [PATCH v5 4/8] drm/xe: Port Xe to GPUVA Matthew Brost
  2023-04-21 12:52   ` Thomas Hellström
  2023-04-21 14:54   ` Thomas Hellström
@ 2023-04-25 11:41   ` Thomas Hellström
  2 siblings, 0 replies; 21+ messages in thread
From: Thomas Hellström @ 2023-04-25 11:41 UTC (permalink / raw)
  To: Matthew Brost, intel-xe


On 4/4/23 03:42, Matthew Brost wrote:
> Rather than open coding VM binds and VMA tracking, use the GPUVA
> library. GPUVA provides a common infrastructure for VM binds to use mmap
> / munmap semantics and support for VK sparse bindings.
>
> The concepts are:
>
> 1) xe_vm inherits from drm_gpuva_manager
> 2) xe_vma inherits from drm_gpuva
> 3) xe_vma_op inherits from drm_gpuva_op
> 4) VM bind operations (MAP, UNMAP, PREFETCH, UNMAP_ALL) call into the
> GPUVA code to generate a VMA operations list which is parsed, committed,
> and executed.
>
> v2 (CI): Add break after default in case statement.
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
>   drivers/gpu/drm/xe/xe_bo.c                  |   10 +-
>   drivers/gpu/drm/xe/xe_device.c              |    2 +-
>   drivers/gpu/drm/xe/xe_exec.c                |    2 +-
>   drivers/gpu/drm/xe/xe_gt_pagefault.c        |   23 +-
>   drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c |   14 +-
>   drivers/gpu/drm/xe/xe_guc_ct.c              |    6 +-
>   drivers/gpu/drm/xe/xe_migrate.c             |    8 +-
>   drivers/gpu/drm/xe/xe_pt.c                  |  106 +-
>   drivers/gpu/drm/xe/xe_trace.h               |   10 +-
>   drivers/gpu/drm/xe/xe_vm.c                  | 1799 +++++++++----------
>   drivers/gpu/drm/xe/xe_vm.h                  |   66 +-
>   drivers/gpu/drm/xe/xe_vm_madvise.c          |   87 +-
>   drivers/gpu/drm/xe/xe_vm_types.h            |  165 +-
>   13 files changed, 1126 insertions(+), 1172 deletions(-)
...
>   struct ttm_buffer_object *xe_vm_ttm_bo(struct xe_vm *vm)
>   {
>   	int idx = vm->flags & XE_VM_FLAG_MIGRATION ?
> @@ -2014,834 +2000,816 @@ static void xe_vm_tv_populate(struct xe_vm *vm, struct ttm_validate_buffer *tv)
>   	tv->bo = xe_vm_ttm_bo(vm);
>   }
>   
> -static bool is_map_op(u32 op)
> +static void vm_set_async_error(struct xe_vm *vm, int err)
>   {
> -	return VM_BIND_OP(op) == XE_VM_BIND_OP_MAP ||
> -		VM_BIND_OP(op) == XE_VM_BIND_OP_MAP_USERPTR;
> +	lockdep_assert_held(&vm->lock);
> +	vm->async_ops.error = err;
>   }
>   
> -static bool is_unmap_op(u32 op)
> +static bool bo_has_vm_references(struct xe_bo *bo, struct xe_vm *vm,
> +				 struct xe_vma *ignore)
>   {
> -	return VM_BIND_OP(op) == XE_VM_BIND_OP_UNMAP ||
> -		VM_BIND_OP(op) == XE_VM_BIND_OP_UNMAP_ALL;
> +	struct ww_acquire_ctx ww;
> +	struct drm_gpuva *gpuva;
> +	struct drm_gem_object *obj = &bo->ttm.base;
> +	bool ret = false;
> +
> +	xe_bo_lock(bo, &ww, 0, false);
> +	drm_gem_for_each_gpuva(gpuva, obj) {
> +		struct xe_vma *vma = gpuva_to_vma(gpuva);
> +
> +		if (vma != ignore && xe_vma_vm(vma) == vm &&
> +		    !(vma->gpuva.flags & XE_VMA_DESTROYED)) {
> +			ret = true;
> +			break;
> +		}
> +	}
> +	xe_bo_unlock(bo, &ww);
> +
> +	return ret;
>   }
>   
> -static int vm_bind_ioctl(struct xe_vm *vm, struct xe_vma *vma,
> -			 struct xe_engine *e, struct xe_bo *bo,
> -			 struct drm_xe_vm_bind_op *bind_op,
> -			 struct xe_sync_entry *syncs, u32 num_syncs,
> -			 struct async_op_fence *afence)
> +static int vm_insert_extobj(struct xe_vm *vm, struct xe_vma *vma)
>   {
> -	LIST_HEAD(objs);
> -	LIST_HEAD(dups);
> -	struct ttm_validate_buffer tv_bo, tv_vm;
> -	struct ww_acquire_ctx ww;
> -	struct xe_bo *vbo;
> -	int err, i;
> +	struct xe_bo *bo = xe_vma_bo(vma);
>   
> -	lockdep_assert_held(&vm->lock);
> -	XE_BUG_ON(!list_empty(&vma->unbind_link));
> +	lockdep_assert_held_write(&vm->lock);
>   
> -	/* Binds deferred to faults, signal fences now */
> -	if (xe_vm_in_fault_mode(vm) && is_map_op(bind_op->op) &&
> -	    !(bind_op->op & XE_VM_BIND_FLAG_IMMEDIATE)) {
> -		for (i = 0; i < num_syncs; i++)
> -			xe_sync_entry_signal(&syncs[i], NULL,
> -					     dma_fence_get_stub());
> -		if (afence)
> -			dma_fence_signal(&afence->fence);
> +	if (bo_has_vm_references(bo, vm, vma))
>   		return 0;
> -	}
>   
> -	xe_vm_tv_populate(vm, &tv_vm);
> -	list_add_tail(&tv_vm.head, &objs);
> -	vbo = vma->bo;
> -	if (vbo) {
> -		/*
> -		 * An unbind can drop the last reference to the BO and
> -		 * the BO is needed for ttm_eu_backoff_reservation so
> -		 * take a reference here.
> -		 */
> -		xe_bo_get(vbo);
> +	list_add(&vma->extobj.link, &vm->extobj.list);
> +	vm->extobj.entries++;
>   
> -		tv_bo.bo = &vbo->ttm;
> -		tv_bo.num_shared = 1;
> -		list_add(&tv_bo.head, &objs);
> -	}
> +	return 0;
> +}
>   
> -again:
> -	err = ttm_eu_reserve_buffers(&ww, &objs, true, &dups);
> -	if (!err) {
> -		err = __vm_bind_ioctl(vm, vma, e, bo,
> -				      bind_op->op, bind_op->region, syncs,
> -				      num_syncs, afence);
> -		ttm_eu_backoff_reservation(&ww, &objs);
> -		if (err == -EAGAIN && xe_vma_is_userptr(vma)) {
> -			lockdep_assert_held_write(&vm->lock);
> -			err = xe_vma_userptr_pin_pages(vma);
> -			if (!err)
> -				goto again;
> -		}
> +static int __vm_bind_ioctl_lookup_vma(struct xe_vm *vm, struct xe_bo *bo,
> +				      u64 addr, u64 range, u32 op)
> +{
> +	struct xe_device *xe = vm->xe;
> +	struct xe_vma *vma;
> +	bool async = !!(op & XE_VM_BIND_FLAG_ASYNC);
> +
> +	lockdep_assert_held(&vm->lock);
> +
> +	return 0;
> +
> +	switch (VM_BIND_OP(op)) {
> +	case XE_VM_BIND_OP_MAP:
> +	case XE_VM_BIND_OP_MAP_USERPTR:
> +		vma = xe_vm_find_overlapping_vma(vm, addr, range);
> +		if (XE_IOCTL_ERR(xe, vma))
> +			return -EBUSY;
> +		break;
> +	case XE_VM_BIND_OP_UNMAP:
> +	case XE_VM_BIND_OP_PREFETCH:
> +		vma = xe_vm_find_overlapping_vma(vm, addr, range);
> +		if (XE_IOCTL_ERR(xe, !vma) ||
> +		    XE_IOCTL_ERR(xe, (xe_vma_start(vma) != addr ||
> +				 xe_vma_end(vma) != addr + range) && !async))
> +			return -EINVAL;
Perhaps unrelated, but erroring out if the range doesn't contain any vmas is 
inconsistent with CPU munmap().
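
For reference, munmap() on a range that contains no mappings simply succeeds, so
the munmap-consistent behaviour here would be something like (sketch, only
covering the empty-range part of the check):

	case XE_VM_BIND_OP_UNMAP:
		vma = xe_vm_find_overlapping_vma(vm, addr, range);
		if (!vma)
			return 0;	/* nothing mapped in the range, nothing to do */
		break;

rather than returning -EINVAL.
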
> +		break;
> +	case XE_VM_BIND_OP_UNMAP_ALL:
> +		if (XE_IOCTL_ERR(xe, list_empty(&bo->ttm.base.gpuva.list)))
> +			return -EINVAL;
Same here.
> +		break;
> +	default:
> +		XE_BUG_ON("NOT POSSIBLE");
> +		return -EINVAL;
>   	}
> -	xe_bo_put(vbo);
>   
> -	return err;
> +	return 0;
>   }
>   
> -struct async_op {
> -	struct xe_vma *vma;
> -	struct xe_engine *engine;
> -	struct xe_bo *bo;
> -	struct drm_xe_vm_bind_op bind_op;
> -	struct xe_sync_entry *syncs;
> -	u32 num_syncs;
> -	struct list_head link;
> -	struct async_op_fence *fence;
> -};
> -
> -static void async_op_cleanup(struct xe_vm *vm, struct async_op *op)
> +static void prep_vma_destroy(struct xe_vm *vm, struct xe_vma *vma,
> +			     bool post_commit)
>   {
> -	while (op->num_syncs--)
> -		xe_sync_entry_cleanup(&op->syncs[op->num_syncs]);
> -	kfree(op->syncs);
> -	xe_bo_put(op->bo);
> -	if (op->engine)
> -		xe_engine_put(op->engine);
> -	xe_vm_put(vm);
> -	if (op->fence)
> -		dma_fence_put(&op->fence->fence);
> -	kfree(op);
> +	down_read(&vm->userptr.notifier_lock);
> +	vma->gpuva.flags |= XE_VMA_DESTROYED;
> +	up_read(&vm->userptr.notifier_lock);
> +	if (post_commit)
> +		xe_vm_remove_vma(vm, vma, true);
>   }
>   
> -static struct async_op *next_async_op(struct xe_vm *vm)
> +#if IS_ENABLED(CONFIG_DRM_XE_DEBUG_VM)
> +static void print_op(struct xe_device *xe, struct drm_gpuva_op *op)
>   {
> -	return list_first_entry_or_null(&vm->async_ops.pending,
> -					struct async_op, link);
> -}
> +	struct xe_vma *vma;
>   
> -static void vm_set_async_error(struct xe_vm *vm, int err)
> +	switch (op->op) {
> +	case DRM_GPUVA_OP_MAP:
> +		vm_dbg(&xe->drm, "MAP: addr=0x%016llx, range=0x%016llx",
> +		       op->map.va.addr, op->map.va.range);
> +		break;
> +	case DRM_GPUVA_OP_REMAP:
> +		vma = gpuva_to_vma(op->remap.unmap->va);
> +		vm_dbg(&xe->drm, "REMAP:UNMAP: addr=0x%016llx, range=0x%016llx, keep=%d",
The format specifiers expect unsigned long long but the arguments are u64,
which isn't guaranteed to be the same type on every architecture. Same
below. I'd cast to unsigned long long.
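Something like this (untested), here and for the other vm_dbg() calls
below:

	vm_dbg(&xe->drm,
	       "REMAP:UNMAP: addr=0x%016llx, range=0x%016llx, keep=%d",
	       (unsigned long long)xe_vma_start(vma),
	       (unsigned long long)xe_vma_size(vma),
	       op->unmap.keep ? 1 : 0);
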
> +		       xe_vma_start(vma), xe_vma_size(vma),
> +		       op->unmap.keep ? 1 : 0);
> +		if (op->remap.prev)
> +			vm_dbg(&xe->drm,
> +			       "REMAP:PREV: addr=0x%016llx, range=0x%016llx",
> +			       op->remap.prev->va.addr,
> +			       op->remap.prev->va.range);
> +		if (op->remap.next)
> +			vm_dbg(&xe->drm,
> +			       "REMAP:NEXT: addr=0x%016llx, range=0x%016llx",
> +			       op->remap.next->va.addr,
> +			       op->remap.next->va.range);
> +		break;
> +	case DRM_GPUVA_OP_UNMAP:
> +		vma = gpuva_to_vma(op->unmap.va);
> +		vm_dbg(&xe->drm, "UNMAP: addr=0x%016llx, range=0x%016llx, keep=%d",
> +		       xe_vma_start(vma), xe_vma_size(vma),
> +		       op->unmap.keep ? 1 : 0);
> +		break;
> +	default:
> +		XE_BUG_ON("NOT_POSSIBLE");
s/NOT_POSSIBLE/NOT POSSIBLE/ for consistency?
> +	}
> +}
> +#else
> +static void print_op(struct xe_device *xe, struct drm_gpuva_op *op)
>   {
> -	lockdep_assert_held(&vm->lock);
> -	vm->async_ops.error = err;
>   }
> +#endif
>   
> -static void async_op_work_func(struct work_struct *w)
> +/*
> + * Create operations list from IOCTL arguments, setup operations fields so parse
> + * and commit steps are decoupled from IOCTL arguments. This step can fail.
> + */
> +static struct drm_gpuva_ops *
> +vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_bo *bo,
> +			 u64 bo_offset_or_userptr, u64 addr, u64 range,
> +			 u32 operation, u64 gt_mask, u32 region)
>   {
> -	struct xe_vm *vm = container_of(w, struct xe_vm, async_ops.work);
> -
> -	for (;;) {
> -		struct async_op *op;
> -		int err;
> -
> -		if (vm->async_ops.error && !xe_vm_is_closed(vm))
> -			break;
> +	struct drm_gem_object *obj = bo ? &bo->ttm.base : NULL;
> +	struct ww_acquire_ctx ww;
> +	struct drm_gpuva_ops *ops;
> +	struct drm_gpuva_op *__op;
> +	struct xe_vma_op *op;
> +	int err;
>   
> -		spin_lock_irq(&vm->async_ops.lock);
> -		op = next_async_op(vm);
> -		if (op)
> -			list_del_init(&op->link);
> -		spin_unlock_irq(&vm->async_ops.lock);
> +	lockdep_assert_held_write(&vm->lock);
>   
> -		if (!op)
> -			break;
> +	vm_dbg(&vm->xe->drm,
> +	       "op=%d, addr=0x%016llx, range=0x%016llx, bo_offset_or_userptr=0x%016llx",
unsigned long long again.
> +	       VM_BIND_OP(operation), addr, range, bo_offset_or_userptr);
>   
> -		if (!xe_vm_is_closed(vm)) {
> -			bool first, last;
> +	switch (VM_BIND_OP(operation)) {
> +	case XE_VM_BIND_OP_MAP:
> +	case XE_VM_BIND_OP_MAP_USERPTR:
> +		ops = drm_gpuva_sm_map_ops_create(&vm->mgr, addr, range,
> +						  obj, bo_offset_or_userptr);
> +		drm_gpuva_for_each_op(__op, ops) {
> +			struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
>   
> -			down_write(&vm->lock);
> -again:
> -			first = op->vma->first_munmap_rebind;
> -			last = op->vma->last_munmap_rebind;
> -#ifdef TEST_VM_ASYNC_OPS_ERROR
> -#define FORCE_ASYNC_OP_ERROR	BIT(31)
> -			if (!(op->bind_op.op & FORCE_ASYNC_OP_ERROR)) {
> -				err = vm_bind_ioctl(vm, op->vma, op->engine,
> -						    op->bo, &op->bind_op,
> -						    op->syncs, op->num_syncs,
> -						    op->fence);
> -			} else {
> -				err = -ENOMEM;
> -				op->bind_op.op &= ~FORCE_ASYNC_OP_ERROR;
> -			}
> -#else
> -			err = vm_bind_ioctl(vm, op->vma, op->engine, op->bo,
> -					    &op->bind_op, op->syncs,
> -					    op->num_syncs, op->fence);
> -#endif
> -			/*
> -			 * In order for the fencing to work (stall behind
> -			 * existing jobs / prevent new jobs from running) all
> -			 * the dma-resv slots need to be programmed in a batch
> -			 * relative to execs / the rebind worker. The vm->lock
> -			 * ensure this.
> -			 */
> -			if (!err && ((first && VM_BIND_OP(op->bind_op.op) ==
> -				      XE_VM_BIND_OP_UNMAP) ||
> -				     vm->async_ops.munmap_rebind_inflight)) {
> -				if (last) {
> -					op->vma->last_munmap_rebind = false;
> -					vm->async_ops.munmap_rebind_inflight =
> -						false;
> -				} else {
> -					vm->async_ops.munmap_rebind_inflight =
> -						true;
> -
> -					async_op_cleanup(vm, op);
> -
> -					spin_lock_irq(&vm->async_ops.lock);
> -					op = next_async_op(vm);
> -					XE_BUG_ON(!op);
> -					list_del_init(&op->link);
> -					spin_unlock_irq(&vm->async_ops.lock);
> -
> -					goto again;
> -				}
> -			}
> -			if (err) {
> -				trace_xe_vma_fail(op->vma);
> -				drm_warn(&vm->xe->drm, "Async VM op(%d) failed with %d",
> -					 VM_BIND_OP(op->bind_op.op),
> -					 err);
> +			op->gt_mask = gt_mask;
> +			op->map.immediate =
> +				operation & XE_VM_BIND_FLAG_IMMEDIATE;
> +			op->map.read_only =
> +				operation & XE_VM_BIND_FLAG_READONLY;
> +		}
> +		break;
> +	case XE_VM_BIND_OP_UNMAP:
> +		ops = drm_gpuva_sm_unmap_ops_create(&vm->mgr, addr, range);
> +		drm_gpuva_for_each_op(__op, ops) {
Looks like drm_gpuva_..._ops_create() may return an ERR_PTR, in which case
iterating over it here blows up. Same below.
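I.e. check the return value before walking the list, something like
(untested):

	ops = drm_gpuva_sm_unmap_ops_create(&vm->mgr, addr, range);
	if (IS_ERR(ops))
		return ops;
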
> +			struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
>   
> -				spin_lock_irq(&vm->async_ops.lock);
> -				list_add(&op->link, &vm->async_ops.pending);
> -				spin_unlock_irq(&vm->async_ops.lock);
> +			op->gt_mask = gt_mask;
> +		}
> +		break;
> +	case XE_VM_BIND_OP_PREFETCH:
> +		ops = drm_gpuva_prefetch_ops_create(&vm->mgr, addr, range);
> +		drm_gpuva_for_each_op(__op, ops) {
> +			struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
>   
> -				vm_set_async_error(vm, err);
> -				up_write(&vm->lock);
> +			op->gt_mask = gt_mask;
> +			op->prefetch.region = region;
> +		}
> +		break;
> +	case XE_VM_BIND_OP_UNMAP_ALL:
> +		XE_BUG_ON(!bo);
>   
> -				if (vm->async_ops.error_capture.addr)
> -					vm_error_capture(vm, err,
> -							 op->bind_op.op,
> -							 op->bind_op.addr,
> -							 op->bind_op.range);
> -				break;
> -			}
> -			up_write(&vm->lock);
> -		} else {
> -			trace_xe_vma_flush(op->vma);
> +		err = xe_bo_lock(bo, &ww, 0, true);
> +		if (err)
> +			return ERR_PTR(err);
> +		ops = drm_gpuva_gem_unmap_ops_create(&vm->mgr, obj);
> +		xe_bo_unlock(bo, &ww);
>   
> -			if (is_unmap_op(op->bind_op.op)) {
> -				down_write(&vm->lock);
> -				xe_vma_destroy_unlocked(op->vma);
> -				up_write(&vm->lock);
> -			}
> +		drm_gpuva_for_each_op(__op, ops) {
> +			struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
>   
> -			if (op->fence && !test_bit(DMA_FENCE_FLAG_SIGNALED_BIT,
> -						   &op->fence->fence.flags)) {
> -				if (!xe_vm_no_dma_fences(vm)) {
> -					op->fence->started = true;
> -					smp_wmb();
> -					wake_up_all(&op->fence->wq);
> -				}
> -				dma_fence_signal(&op->fence->fence);
> -			}
> +			op->gt_mask = gt_mask;
>   		}
> +		break;
> +	default:
> +		XE_BUG_ON("NOT POSSIBLE");
> +		ops = ERR_PTR(-EINVAL);
> +	}
>   
> -		async_op_cleanup(vm, op);
> +#ifdef TEST_VM_ASYNC_OPS_ERROR
> +	if (operation & FORCE_ASYNC_OP_ERROR) {
> +		op = list_first_entry_or_null(&ops->list, struct xe_vma_op,
> +					      base.entry);
> +		if (op)
> +			op->inject_error = true;
>   	}
> +#endif
> +
> +	if (!IS_ERR(ops))
> +		drm_gpuva_for_each_op(__op, ops)
> +			print_op(vm->xe, __op);
> +
> +	return ops;
>   }
>   
> -static int __vm_bind_ioctl_async(struct xe_vm *vm, struct xe_vma *vma,
> -				 struct xe_engine *e, struct xe_bo *bo,
> -				 struct drm_xe_vm_bind_op *bind_op,
> -				 struct xe_sync_entry *syncs, u32 num_syncs)
> +static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
> +			      u64 gt_mask, bool read_only)
>   {
> -	struct async_op *op;
> -	bool installed = false;
> -	u64 seqno;
> -	int i;
> +	struct xe_bo *bo = op->gem.obj ? gem_to_xe_bo(op->gem.obj) : NULL;
> +	struct xe_vma *vma;
> +	struct ww_acquire_ctx ww;
> +	int err;
>   
> -	lockdep_assert_held(&vm->lock);
> +	lockdep_assert_held_write(&vm->lock);
>   
> -	op = kmalloc(sizeof(*op), GFP_KERNEL);
> -	if (!op) {
> -		return -ENOMEM;
> -	}
> +	if (bo) {
> +		err = xe_bo_lock(bo, &ww, 0, true);
> +		if (err)
> +			return ERR_PTR(err);
> +	}
> +	vma = xe_vma_create(vm, bo, op->gem.offset,
> +			    op->va.addr, op->va.addr +
> +			    op->va.range - 1, read_only,
> +			    gt_mask);
> +	if (bo)
> +		xe_bo_unlock(bo, &ww);
>   
> -	if (num_syncs) {
> -		op->fence = kmalloc(sizeof(*op->fence), GFP_KERNEL);
> -		if (!op->fence) {
> -			kfree(op);
> -			return -ENOMEM;
> +	if (xe_vma_is_userptr(vma)) {
> +		err = xe_vma_userptr_pin_pages(vma);
> +		if (err) {
> +			xe_vma_destroy(vma, NULL);
> +			return ERR_PTR(err);
>   		}
> +	} else if(!bo->vm) {
> +		vm_insert_extobj(vm, vma);
> +		err = add_preempt_fences(vm, bo);
> +		if (err) {
> +			xe_vma_destroy(vma, NULL);
> +			return ERR_PTR(err);
> +		}
> +	}
> +
> +	return vma;
> +}
> +
> +/*
> + * Parse operations list and create any resources needed for the operations
> + * prior to fully commiting to the operations. This setp can fail.
s/setp/step/ (matching "This step can fail" in the comment above
vm_bind_ioctl_ops_create()).
> + */
> +static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct xe_engine *e,
> +				   struct drm_gpuva_ops **ops, int num_ops_list,
> +				   struct xe_sync_entry *syncs, u32 num_syncs,
> +				   struct list_head *ops_list, bool async)
> +{
> +	struct xe_vma_op *last_op = NULL;
> +	struct list_head *async_list = NULL;
> +	struct async_op_fence *fence = NULL;
> +	int err, i;
> +
> +	lockdep_assert_held_write(&vm->lock);
> +	XE_BUG_ON(num_ops_list > 1 && !async);
> +
> +	if (num_syncs && async) {
> +		u64 seqno;
> +
> +		fence = kmalloc(sizeof(*fence), GFP_KERNEL);
> +		if (!fence)
> +			return -ENOMEM;
>   
>   		seqno = e ? ++e->bind.fence_seqno : ++vm->async_ops.fence.seqno;
> -		dma_fence_init(&op->fence->fence, &async_op_fence_ops,
> +		dma_fence_init(&fence->fence, &async_op_fence_ops,
>   			       &vm->async_ops.lock, e ? e->bind.fence_ctx :
>   			       vm->async_ops.fence.context, seqno);
>   
>   		if (!xe_vm_no_dma_fences(vm)) {
> -			op->fence->vm = vm;
> -			op->fence->started = false;
> -			init_waitqueue_head(&op->fence->wq);
> +			fence->vm = vm;
> +			fence->started = false;
> +			init_waitqueue_head(&fence->wq);
>   		}
> -	} else {
> -		op->fence = NULL;
>   	}
> -	op->vma = vma;
> -	op->engine = e;
> -	op->bo = bo;
> -	op->bind_op = *bind_op;
> -	op->syncs = syncs;
> -	op->num_syncs = num_syncs;
> -	INIT_LIST_HEAD(&op->link);
> -
> -	for (i = 0; i < num_syncs; i++)
> -		installed |= xe_sync_entry_signal(&syncs[i], NULL,
> -						  &op->fence->fence);
>   
> -	if (!installed && op->fence)
> -		dma_fence_signal(&op->fence->fence);
> +	for (i = 0; i < num_ops_list; ++i) {
> +		struct drm_gpuva_ops *__ops = ops[i];
> +		struct drm_gpuva_op *__op;
>   
> -	spin_lock_irq(&vm->async_ops.lock);
> -	list_add_tail(&op->link, &vm->async_ops.pending);
> -	spin_unlock_irq(&vm->async_ops.lock);
> +		drm_gpuva_for_each_op(__op, __ops) {
> +			struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
> +			bool first = !async_list;
>   
> -	if (!vm->async_ops.error)
> -		queue_work(system_unbound_wq, &vm->async_ops.work);
> +			XE_BUG_ON(!first && !async);
>   
> -	return 0;
> -}
> +			INIT_LIST_HEAD(&op->link);
> +			if (first)
> +				async_list = ops_list;
> +			list_add_tail(&op->link, async_list);
>   
> -static int vm_bind_ioctl_async(struct xe_vm *vm, struct xe_vma *vma,
> -			       struct xe_engine *e, struct xe_bo *bo,
> -			       struct drm_xe_vm_bind_op *bind_op,
> -			       struct xe_sync_entry *syncs, u32 num_syncs)
> -{
> -	struct xe_vma *__vma, *next;
> -	struct list_head rebind_list;
> -	struct xe_sync_entry *in_syncs = NULL, *out_syncs = NULL;
> -	u32 num_in_syncs = 0, num_out_syncs = 0;
> -	bool first = true, last;
> -	int err;
> -	int i;
> +			if (first) {
> +				op->flags |= XE_VMA_OP_FIRST;
> +				op->num_syncs = num_syncs;
> +				op->syncs = syncs;
> +			}
>   
> -	lockdep_assert_held(&vm->lock);
> +			op->engine = e;
>   
> -	/* Not a linked list of unbinds + rebinds, easy */
> -	if (list_empty(&vma->unbind_link))
> -		return __vm_bind_ioctl_async(vm, vma, e, bo, bind_op,
> -					     syncs, num_syncs);
> +			switch (op->base.op) {
> +			case DRM_GPUVA_OP_MAP:
> +			{
> +				struct xe_vma *vma;
>   
> -	/*
> -	 * Linked list of unbinds + rebinds, decompose syncs into 'in / out'
> -	 * passing the 'in' to the first operation and 'out' to the last. Also
> -	 * the reference counting is a little tricky, increment the VM / bind
> -	 * engine ref count on all but the last operation and increment the BOs
> -	 * ref count on each rebind.
> -	 */
> +				vma = new_vma(vm, &op->base.map,
> +					      op->gt_mask, op->map.read_only);
> +				if (IS_ERR(vma)) {
> +					err = PTR_ERR(vma);
> +					goto free_fence;
> +				}
>   
> -	XE_BUG_ON(VM_BIND_OP(bind_op->op) != XE_VM_BIND_OP_UNMAP &&
> -		  VM_BIND_OP(bind_op->op) != XE_VM_BIND_OP_UNMAP_ALL &&
> -		  VM_BIND_OP(bind_op->op) != XE_VM_BIND_OP_PREFETCH);
> +				op->map.vma = vma;
> +				break;
> +			}
> +			case DRM_GPUVA_OP_REMAP:
> +				if (op->base.remap.prev) {
> +					struct xe_vma *vma;
> +					bool read_only =
> +						op->base.remap.unmap->va->flags &
> +						XE_VMA_READ_ONLY;
> +
> +					vma = new_vma(vm, op->base.remap.prev,
> +						      op->gt_mask, read_only);
> +					if (IS_ERR(vma)) {
> +						err = PTR_ERR(vma);
> +						goto free_fence;
> +					}
> +
> +					op->remap.prev = vma;
> +				}
>   
> -	/* Decompose syncs */
> -	if (num_syncs) {
> -		in_syncs = kmalloc(sizeof(*in_syncs) * num_syncs, GFP_KERNEL);
> -		out_syncs = kmalloc(sizeof(*out_syncs) * num_syncs, GFP_KERNEL);
> -		if (!in_syncs || !out_syncs) {
> -			err = -ENOMEM;
> -			goto out_error;
> -		}
> +				if (op->base.remap.next) {
> +					struct xe_vma *vma;
> +					bool read_only =
> +						op->base.remap.unmap->va->flags &
> +						XE_VMA_READ_ONLY;
>   
> -		for (i = 0; i < num_syncs; ++i) {
> -			bool signal = syncs[i].flags & DRM_XE_SYNC_SIGNAL;
> +					vma = new_vma(vm, op->base.remap.next,
> +						      op->gt_mask, read_only);
> +					if (IS_ERR(vma)) {
> +						err = PTR_ERR(vma);
> +						goto free_fence;
> +					}
>   
> -			if (signal)
> -				out_syncs[num_out_syncs++] = syncs[i];
> -			else
> -				in_syncs[num_in_syncs++] = syncs[i];
> -		}
> -	}
> +					op->remap.next = vma;
> +				}
>   
> -	/* Do unbinds + move rebinds to new list */
> -	INIT_LIST_HEAD(&rebind_list);
> -	list_for_each_entry_safe(__vma, next, &vma->unbind_link, unbind_link) {
> -		if (__vma->destroyed ||
> -		    VM_BIND_OP(bind_op->op) == XE_VM_BIND_OP_PREFETCH) {
> -			list_del_init(&__vma->unbind_link);
> -			xe_bo_get(bo);
> -			err = __vm_bind_ioctl_async(xe_vm_get(vm), __vma,
> -						    e ? xe_engine_get(e) : NULL,
> -						    bo, bind_op, first ?
> -						    in_syncs : NULL,
> -						    first ? num_in_syncs : 0);
> -			if (err) {
> -				xe_bo_put(bo);
> -				xe_vm_put(vm);
> -				if (e)
> -					xe_engine_put(e);
> -				goto out_error;
> +				/* XXX: Support no doing remaps */
What does this comment mean?
> +				op->remap.start =
> +					xe_vma_start(gpuva_to_vma(op->base.remap.unmap->va));
> +				op->remap.range =
> +					xe_vma_size(gpuva_to_vma(op->base.remap.unmap->va));
Perhaps have a local remap_vma to avoid the duplicated gpuva_to_vma()
calls?
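Something like (untested, name is just a suggestion):

	struct xe_vma *remap_vma =
		gpuva_to_vma(op->base.remap.unmap->va);

	op->remap.start = xe_vma_start(remap_vma);
	op->remap.range = xe_vma_size(remap_vma);
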
> +				break;
> +			case DRM_GPUVA_OP_UNMAP:
> +				op->unmap.start =
> +					xe_vma_start(gpuva_to_vma(op->base.unmap.va));
> +				op->unmap.range =
> +					xe_vma_size(gpuva_to_vma(op->base.unmap.va));
> +				break;
> +			case DRM_GPUVA_OP_PREFETCH:
> +				/* Nothing to do */
> +				break;
> +			default:
> +				XE_BUG_ON("NOT POSSIBLE");
>   			}
> -			in_syncs = NULL;
> -			first = false;
> -		} else {
> -			list_move_tail(&__vma->unbind_link, &rebind_list);
> -		}
> -	}
> -	last = list_empty(&rebind_list);
> -	if (!last) {
> -		xe_vm_get(vm);
> -		if (e)
> -			xe_engine_get(e);
> -	}
> -	err = __vm_bind_ioctl_async(vm, vma, e,
> -				    bo, bind_op,
> -				    first ? in_syncs :
> -				    last ? out_syncs : NULL,
> -				    first ? num_in_syncs :
> -				    last ? num_out_syncs : 0);
> -	if (err) {
> -		if (!last) {
> -			xe_vm_put(vm);
> -			if (e)
> -				xe_engine_put(e);
> -		}
> -		goto out_error;
> -	}
> -	in_syncs = NULL;
>   
> -	/* Do rebinds */
> -	list_for_each_entry_safe(__vma, next, &rebind_list, unbind_link) {
> -		list_del_init(&__vma->unbind_link);
> -		last = list_empty(&rebind_list);
> -
> -		if (xe_vma_is_userptr(__vma)) {
> -			bind_op->op = XE_VM_BIND_FLAG_ASYNC |
> -				XE_VM_BIND_OP_MAP_USERPTR;
> -		} else {
> -			bind_op->op = XE_VM_BIND_FLAG_ASYNC |
> -				XE_VM_BIND_OP_MAP;
> -			xe_bo_get(__vma->bo);
> -		}
> -
> -		if (!last) {
> -			xe_vm_get(vm);
> -			if (e)
> -				xe_engine_get(e);
> +			last_op = op;
>   		}
>   
> -		err = __vm_bind_ioctl_async(vm, __vma, e,
> -					    __vma->bo, bind_op, last ?
> -					    out_syncs : NULL,
> -					    last ? num_out_syncs : 0);
> -		if (err) {
> -			if (!last) {
> -				xe_vm_put(vm);
> -				if (e)
> -					xe_engine_put(e);
> -			}
> -			goto out_error;
> -		}
> +		last_op->ops = __ops;
>   	}
>   
> -	kfree(syncs);
> -	return 0;
> +	XE_BUG_ON(!last_op);	/* FIXME: This is not an error, handle */

Please handle this properly if this can actually happen.
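If the list really can end up empty, something like this instead of the
BUG_ON (untested; the errno is just a placeholder, and whether that case
should be treated as success or as an error is a separate question):

	if (!last_op) {
		err = -EINVAL;
		goto free_fence;
	}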


>   
> -out_error:
> -	kfree(in_syncs);
> -	kfree(out_syncs);
> -	kfree(syncs);
> +	last_op->flags |= XE_VMA_OP_LAST;
> +	last_op->num_syncs = num_syncs;
> +	last_op->syncs = syncs;
> +	last_op->fence = fence;
> +
> +	return 0;
>   
> +free_fence:
> +	kfree(fence);
>   	return err;
>   }
>   
> -static bool bo_has_vm_references(struct xe_bo *bo, struct xe_vm *vm,
> -				 struct xe_vma *ignore)
> +static void xe_vma_op_commit(struct xe_vm *vm, struct xe_vma_op *op)
>   {
> -	struct ww_acquire_ctx ww;
> -	struct xe_vma *vma;
> -	bool ret = false;
> +	lockdep_assert_held_write(&vm->lock);
>   
> -	xe_bo_lock(bo, &ww, 0, false);
> -	list_for_each_entry(vma, &bo->vmas, bo_link) {
> -		if (vma != ignore && vma->vm == vm && !vma->destroyed) {
> -			ret = true;
> -			break;
> -		}
> +	switch (op->base.op) {
> +	case DRM_GPUVA_OP_MAP:
> +		xe_vm_insert_vma(vm, op->map.vma);
Hmm, xe_vm_insert_vma() calls drm_gpuva_insert() which may error 
(-ENOMEM). We just warn on that error without any comments? Please add a 
detailed discussion why that can't fail, or propagate the error.
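If it actually can fail, I'd rather see the error propagated. Rough
sketch (untested; assumes xe_vm_insert_vma() is changed to return the
drm_gpuva_insert() error instead of warning, and that the caller unwinds
on failure):

	static int xe_vma_op_commit(struct xe_vm *vm, struct xe_vma_op *op)
	{
		int err = 0;

		lockdep_assert_held_write(&vm->lock);

		switch (op->base.op) {
		case DRM_GPUVA_OP_MAP:
			err = xe_vm_insert_vma(vm, op->map.vma);
			break;
		/*
		 * REMAP / UNMAP / PREFETCH as in the patch, with the REMAP
		 * prev/next inserts propagating their errors the same way.
		 */
		default:
			break;
		}

		return err;
	}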

> +		break;
> +	case DRM_GPUVA_OP_REMAP:
> +		prep_vma_destroy(vm, gpuva_to_vma(op->base.remap.unmap->va),
> +				 true);
> +		if (op->remap.prev)
> +			xe_vm_insert_vma(vm, op->remap.prev);
> +		if (op->remap.next)
> +			xe_vm_insert_vma(vm, op->remap.next);
> +		break;
> +	case DRM_GPUVA_OP_UNMAP:
> +		prep_vma_destroy(vm, gpuva_to_vma(op->base.unmap.va), true);
> +		break;
> +	case DRM_GPUVA_OP_PREFETCH:
> +		/* Nothing to do */
> +		break;
> +	default:
> +		XE_BUG_ON("NOT POSSIBLE");
>   	}
> -	xe_bo_unlock(bo, &ww);
> -
> -	return ret;
>   }
>   
> -static int vm_insert_extobj(struct xe_vm *vm, struct xe_vma *vma)
> +static int __xe_vma_op_execute(struct xe_vm *vm, struct xe_vma *vma,
> +			       struct xe_vma_op *op)
>   {
> -	struct xe_bo *bo = vma->bo;
> +	LIST_HEAD(objs);
> +	LIST_HEAD(dups);
> +	struct ttm_validate_buffer tv_bo, tv_vm;
> +	struct ww_acquire_ctx ww;
> +	struct xe_bo *vbo;
> +	int err;
>   
>   	lockdep_assert_held_write(&vm->lock);
>   
> -	if (bo_has_vm_references(bo, vm, vma))
> -		return 0;
> +	xe_vm_tv_populate(vm, &tv_vm);
> +	list_add_tail(&tv_vm.head, &objs);
> +	vbo = xe_vma_bo(vma);
> +	if (vbo) {
> +		/*
> +		 * An unbind can drop the last reference to the BO and
> +		 * the BO is needed for ttm_eu_backoff_reservation so
> +		 * take a reference here.
> +		 */
> +		xe_bo_get(vbo);
>   
> -	list_add(&vma->extobj.link, &vm->extobj.list);
> -	vm->extobj.entries++;
> +		tv_bo.bo = &vbo->ttm;
> +		tv_bo.num_shared = 1;
> +		list_add(&tv_bo.head, &objs);
> +	}
>   
> -	return 0;
> -}
> +again:
> +	err = ttm_eu_reserve_buffers(&ww, &objs, true, &dups);
> +	if (err) {
> +		xe_bo_put(vbo);
> +		return err;
> +	}
>   
> -static int __vm_bind_ioctl_lookup_vma(struct xe_vm *vm, struct xe_bo *bo,
> -				      u64 addr, u64 range, u32 op)
> -{
> -	struct xe_device *xe = vm->xe;
> -	struct xe_vma *vma, lookup;
> -	bool async = !!(op & XE_VM_BIND_FLAG_ASYNC);
> +	xe_vm_assert_held(vm);
> +	xe_bo_assert_held(xe_vma_bo(vma));
> +
> +	switch (op->base.op) {
> +	case DRM_GPUVA_OP_MAP:
> +		err = xe_vm_bind(vm, vma, op->engine, xe_vma_bo(vma),
> +				 op->syncs, op->num_syncs, op->fence,
> +				 op->map.immediate || !xe_vm_in_fault_mode(vm),
> +				 op->flags & XE_VMA_OP_FIRST,
> +				 op->flags & XE_VMA_OP_LAST);
> +		break;
> +	case DRM_GPUVA_OP_REMAP:
> +	{
> +		bool prev = !!op->remap.prev;
> +		bool next = !!op->remap.next;
> +
> +		if (!op->remap.unmap_done) {
> +			vm->async_ops.munmap_rebind_inflight = true;
> +			if (prev || next)
> +				vma->gpuva.flags |= XE_VMA_FIRST_REBIND;
> +			err = xe_vm_unbind(vm, vma, op->engine, op->syncs,
> +					   op->num_syncs,
> +					   !prev && !next ? op->fence : NULL,
> +					   op->flags & XE_VMA_OP_FIRST,
> +					   op->flags & XE_VMA_OP_LAST && !prev &&
> +					   !next);
> +			if (err)
> +				break;
> +			op->remap.unmap_done = true;
> +		}
>   
> -	lockdep_assert_held(&vm->lock);
> +		if (prev) {
> +			op->remap.prev->gpuva.flags |= XE_VMA_LAST_REBIND;
> +			err = xe_vm_bind(vm, op->remap.prev, op->engine,
> +					 xe_vma_bo(op->remap.prev), op->syncs,
> +					 op->num_syncs,
> +					 !next ? op->fence : NULL, true, false,
> +					 op->flags & XE_VMA_OP_LAST && !next);
> +			op->remap.prev->gpuva.flags &= ~XE_VMA_LAST_REBIND;
> +			if (err)
> +				break;
> +			op->remap.prev = NULL;
> +		}
>   
> -	lookup.start = addr;
> -	lookup.end = addr + range - 1;
> +		if (next) {
> +			op->remap.next->gpuva.flags |= XE_VMA_LAST_REBIND;
> +			err = xe_vm_bind(vm, op->remap.next, op->engine,
> +					 xe_vma_bo(op->remap.next),
> +					 op->syncs, op->num_syncs,
> +					 op->fence, true, false,
> +					 op->flags & XE_VMA_OP_LAST);
> +			op->remap.next->gpuva.flags &= ~XE_VMA_LAST_REBIND;
> +			if (err)
> +				break;
> +			op->remap.next = NULL;
> +		}
> +		vm->async_ops.munmap_rebind_inflight = false;
>   
> -	switch (VM_BIND_OP(op)) {
> -	case XE_VM_BIND_OP_MAP:
> -	case XE_VM_BIND_OP_MAP_USERPTR:
> -		vma = xe_vm_find_overlapping_vma(vm, &lookup);
> -		if (XE_IOCTL_ERR(xe, vma))
> -			return -EBUSY;
>   		break;
> -	case XE_VM_BIND_OP_UNMAP:
> -	case XE_VM_BIND_OP_PREFETCH:
> -		vma = xe_vm_find_overlapping_vma(vm, &lookup);
> -		if (XE_IOCTL_ERR(xe, !vma) ||
> -		    XE_IOCTL_ERR(xe, (vma->start != addr ||
> -				 vma->end != addr + range - 1) && !async))
> -			return -EINVAL;
> +	}
> +	case DRM_GPUVA_OP_UNMAP:
> +		err = xe_vm_unbind(vm, vma, op->engine, op->syncs,
> +				   op->num_syncs, op->fence,
> +				   op->flags & XE_VMA_OP_FIRST,
> +				   op->flags & XE_VMA_OP_LAST);
>   		break;
> -	case XE_VM_BIND_OP_UNMAP_ALL:
> +	case DRM_GPUVA_OP_PREFETCH:
> +		err = xe_vm_prefetch(vm, vma, op->engine, op->prefetch.region,
> +				     op->syncs, op->num_syncs, op->fence,
> +				     op->flags & XE_VMA_OP_FIRST,
> +				     op->flags & XE_VMA_OP_LAST);
>   		break;
>   	default:
>   		XE_BUG_ON("NOT POSSIBLE");
> -		return -EINVAL;
>   	}
>   
> -	return 0;
> -}
> -
> -static void prep_vma_destroy(struct xe_vm *vm, struct xe_vma *vma)
> -{
> -	down_read(&vm->userptr.notifier_lock);
> -	vma->destroyed = true;
> -	up_read(&vm->userptr.notifier_lock);
> -	xe_vm_remove_vma(vm, vma);
> -}
> -
> -static int prep_replacement_vma(struct xe_vm *vm, struct xe_vma *vma)
> -{
> -	int err;
> -
> -	if (vma->bo && !vma->bo->vm) {
> -		vm_insert_extobj(vm, vma);
> -		err = add_preempt_fences(vm, vma->bo);
> -		if (err)
> -			return err;
> +	ttm_eu_backoff_reservation(&ww, &objs);
> +	if (err == -EAGAIN && xe_vma_is_userptr(vma)) {
> +		lockdep_assert_held_write(&vm->lock);
> +		err = xe_vma_userptr_pin_pages(vma);
> +		if (!err)
> +			goto again;
>   	}
> +	xe_bo_put(vbo);
>   
> -	return 0;
> +	if (err)
> +		trace_xe_vma_fail(vma);
> +
> +	return err;
>   }
>   
> -/*
> - * Find all overlapping VMAs in lookup range and add to a list in the returned
> - * VMA, all of VMAs found will be unbound. Also possibly add 2 new VMAs that
> - * need to be bound if first / last VMAs are not fully unbound. This is akin to
> - * how munmap works.
> - */
> -static struct xe_vma *vm_unbind_lookup_vmas(struct xe_vm *vm,
> -					    struct xe_vma *lookup)
> +static int xe_vma_op_execute(struct xe_vm *vm, struct xe_vma_op *op)
>   {
> -	struct xe_vma *vma = xe_vm_find_overlapping_vma(vm, lookup);
> -	struct rb_node *node;
> -	struct xe_vma *first = vma, *last = vma, *new_first = NULL,
> -		      *new_last = NULL, *__vma, *next;
> -	int err = 0;
> -	bool first_munmap_rebind = false;
> +	int ret = 0;
>   
> -	lockdep_assert_held(&vm->lock);
> -	XE_BUG_ON(!vma);
> -
> -	node = &vma->vm_node;
> -	while ((node = rb_next(node))) {
> -		if (!xe_vma_cmp_vma_cb(lookup, node)) {
> -			__vma = to_xe_vma(node);
> -			list_add_tail(&__vma->unbind_link, &vma->unbind_link);
> -			last = __vma;
> -		} else {
> -			break;
> -		}
> -	}
> +	lockdep_assert_held_write(&vm->lock);
>   
> -	node = &vma->vm_node;
> -	while ((node = rb_prev(node))) {
> -		if (!xe_vma_cmp_vma_cb(lookup, node)) {
> -			__vma = to_xe_vma(node);
> -			list_add(&__vma->unbind_link, &vma->unbind_link);
> -			first = __vma;
> -		} else {
> -			break;
> -		}
> +#ifdef TEST_VM_ASYNC_OPS_ERROR
> +	if (op->inject_error) {
> +		op->inject_error = false;
> +		return -ENOMEM;
>   	}
> +#endif
>   
> -	if (first->start != lookup->start) {
> -		struct ww_acquire_ctx ww;
> +	switch (op->base.op) {
> +	case DRM_GPUVA_OP_MAP:
> +		ret = __xe_vma_op_execute(vm, op->map.vma, op);
> +		break;
> +	case DRM_GPUVA_OP_REMAP:
> +	{
> +		struct xe_vma *vma;
> +
> +		if (!op->remap.unmap_done)
> +			vma = gpuva_to_vma(op->base.remap.unmap->va);
> +		else if(op->remap.prev)
> +			vma = op->remap.prev;
> +		else
> +			vma = op->remap.next;
>   
> -		if (first->bo)
> -			err = xe_bo_lock(first->bo, &ww, 0, true);
> -		if (err)
> -			goto unwind;
> -		new_first = xe_vma_create(first->vm, first->bo,
> -					  first->bo ? first->bo_offset :
> -					  first->userptr.ptr,
> -					  first->start,
> -					  lookup->start - 1,
> -					  (first->pte_flags & PTE_READ_ONLY),
> -					  first->gt_mask);
> -		if (first->bo)
> -			xe_bo_unlock(first->bo, &ww);
> -		if (!new_first) {
> -			err = -ENOMEM;
> -			goto unwind;
> -		}
> -		if (!first->bo) {
> -			err = xe_vma_userptr_pin_pages(new_first);
> -			if (err)
> -				goto unwind;
> -		}
> -		err = prep_replacement_vma(vm, new_first);
> -		if (err)
> -			goto unwind;
> +		ret = __xe_vma_op_execute(vm, vma, op);
> +		break;
> +	}
> +	case DRM_GPUVA_OP_UNMAP:
> +		ret = __xe_vma_op_execute(vm, gpuva_to_vma(op->base.unmap.va),
> +					  op);
> +		break;
> +	case DRM_GPUVA_OP_PREFETCH:
> +		ret = __xe_vma_op_execute(vm,
> +					  gpuva_to_vma(op->base.prefetch.va),
> +					  op);
> +		break;
> +	default:
> +		XE_BUG_ON("NOT POSSIBLE");
>   	}
>   
> -	if (last->end != lookup->end) {
> -		struct ww_acquire_ctx ww;
> -		u64 chunk = lookup->end + 1 - last->start;
> +	return ret;
> +}
>   
> -		if (last->bo)
> -			err = xe_bo_lock(last->bo, &ww, 0, true);
> -		if (err)
> -			goto unwind;
> -		new_last = xe_vma_create(last->vm, last->bo,
> -					 last->bo ? last->bo_offset + chunk :
> -					 last->userptr.ptr + chunk,
> -					 last->start + chunk,
> -					 last->end,
> -					 (last->pte_flags & PTE_READ_ONLY),
> -					 last->gt_mask);
> -		if (last->bo)
> -			xe_bo_unlock(last->bo, &ww);
> -		if (!new_last) {
> -			err = -ENOMEM;
> -			goto unwind;
> -		}
> -		if (!last->bo) {
> -			err = xe_vma_userptr_pin_pages(new_last);
> -			if (err)
> -				goto unwind;
> -		}
> -		err = prep_replacement_vma(vm, new_last);
> -		if (err)
> -			goto unwind;
> -	}
> +static void xe_vma_op_cleanup(struct xe_vm *vm, struct xe_vma_op *op)
> +{
> +	bool last = op->flags & XE_VMA_OP_LAST;
>   
> -	prep_vma_destroy(vm, vma);
> -	if (list_empty(&vma->unbind_link) && (new_first || new_last))
> -		vma->first_munmap_rebind = true;
> -	list_for_each_entry(__vma, &vma->unbind_link, unbind_link) {
> -		if ((new_first || new_last) && !first_munmap_rebind) {
> -			__vma->first_munmap_rebind = true;
> -			first_munmap_rebind = true;
> -		}
> -		prep_vma_destroy(vm, __vma);
> -	}
> -	if (new_first) {
> -		xe_vm_insert_vma(vm, new_first);
> -		list_add_tail(&new_first->unbind_link, &vma->unbind_link);
> -		if (!new_last)
> -			new_first->last_munmap_rebind = true;
> +	if (last) {
> +		while (op->num_syncs--)
> +			xe_sync_entry_cleanup(&op->syncs[op->num_syncs]);
> +		kfree(op->syncs);
> +		if (op->engine)
> +			xe_engine_put(op->engine);
> +		if (op->fence)
> +			dma_fence_put(&op->fence->fence);
>   	}
> -	if (new_last) {
> -		xe_vm_insert_vma(vm, new_last);
> -		list_add_tail(&new_last->unbind_link, &vma->unbind_link);
> -		new_last->last_munmap_rebind = true;
> +	if (!list_empty(&op->link)) {
> +		spin_lock_irq(&vm->async_ops.lock);
> +		list_del(&op->link);
> +		spin_unlock_irq(&vm->async_ops.lock);
>   	}
> +	if (op->ops)
> +		drm_gpuva_ops_free(&vm->mgr, op->ops);
> +	if (last)
> +		xe_vm_put(vm);
> +}
>   
> -	return vma;
> +static void xe_vma_op_unwind(struct xe_vm *vm, struct xe_vma_op *op,
> +			     bool post_commit)
> +{
> +	lockdep_assert_held_write(&vm->lock);
> +
> +	switch (op->base.op) {
> +	case DRM_GPUVA_OP_MAP:
> +		prep_vma_destroy(vm, op->map.vma, post_commit);
> +		xe_vma_destroy(op->map.vma, NULL);
> +		break;
> +	case DRM_GPUVA_OP_UNMAP:
> +	{
> +		struct xe_vma *vma = gpuva_to_vma(op->base.unmap.va);
>   
> -unwind:
> -	list_for_each_entry_safe(__vma, next, &vma->unbind_link, unbind_link)
> -		list_del_init(&__vma->unbind_link);
> -	if (new_last) {
> -		prep_vma_destroy(vm, new_last);
> -		xe_vma_destroy_unlocked(new_last);
> +		down_read(&vm->userptr.notifier_lock);
> +		vma->gpuva.flags &= ~XE_VMA_DESTROYED;
> +		up_read(&vm->userptr.notifier_lock);
> +		if (post_commit)
> +			xe_vm_insert_vma(vm, vma);

Can error, so not suitable for unwind?

> +		break;
>   	}
> -	if (new_first) {
> -		prep_vma_destroy(vm, new_first);
> -		xe_vma_destroy_unlocked(new_first);
> +	case DRM_GPUVA_OP_PREFETCH:
> +	case DRM_GPUVA_OP_REMAP:
> +		/* Nothing to do */
> +		break;
> +	default:
> +		XE_BUG_ON("NOT POSSIBLE");
>   	}
> +}
>   
> -	return ERR_PTR(err);
> +static struct xe_vma_op *next_vma_op(struct xe_vm *vm)
> +{
> +	return list_first_entry_or_null(&vm->async_ops.pending,
> +					struct xe_vma_op, link);
>   }
>   
> -/*
> - * Similar to vm_unbind_lookup_vmas, find all VMAs in lookup range to prefetch
> - */
> -static struct xe_vma *vm_prefetch_lookup_vmas(struct xe_vm *vm,
> -					      struct xe_vma *lookup,
> -					      u32 region)
> +static void xe_vma_op_work_func(struct work_struct *w)

Won't review this function since it's going away.

>   {
> -	struct xe_vma *vma = xe_vm_find_overlapping_vma(vm, lookup), *__vma,
> -		      *next;
> -	struct rb_node *node;
> +	struct xe_vm *vm = container_of(w, struct xe_vm, async_ops.work);
>   
> -	if (!xe_vma_is_userptr(vma)) {
> -		if (!xe_bo_can_migrate(vma->bo, region_to_mem_type[region]))
> -			return ERR_PTR(-EINVAL);
> -	}
> +	for (;;) {
> +		struct xe_vma_op *op;
> +		int err;
>   
> -	node = &vma->vm_node;
> -	while ((node = rb_next(node))) {
> -		if (!xe_vma_cmp_vma_cb(lookup, node)) {
> -			__vma = to_xe_vma(node);
> -			if (!xe_vma_is_userptr(__vma)) {
> -				if (!xe_bo_can_migrate(__vma->bo, region_to_mem_type[region]))
> -					goto flush_list;
> -			}
> -			list_add_tail(&__vma->unbind_link, &vma->unbind_link);
> -		} else {
> +		if (vm->async_ops.error && !xe_vm_is_closed(vm))
>   			break;
> -		}
> -	}
>   
> -	node = &vma->vm_node;
> -	while ((node = rb_prev(node))) {
> -		if (!xe_vma_cmp_vma_cb(lookup, node)) {
> -			__vma = to_xe_vma(node);
> -			if (!xe_vma_is_userptr(__vma)) {
> -				if (!xe_bo_can_migrate(__vma->bo, region_to_mem_type[region]))
> -					goto flush_list;
> -			}
> -			list_add(&__vma->unbind_link, &vma->unbind_link);
> -		} else {
> +		spin_lock_irq(&vm->async_ops.lock);
> +		op = next_vma_op(vm);
> +		spin_unlock_irq(&vm->async_ops.lock);
> +
> +		if (!op)
>   			break;
> -		}
> -	}
>   
> -	return vma;
> +		if (!xe_vm_is_closed(vm)) {
> +			down_write(&vm->lock);
> +			err = xe_vma_op_execute(vm, op);
> +			if (err) {
> +				drm_warn(&vm->xe->drm, "Async VM op(%d) failed with %d",
> +					 0, err);
>   
> -flush_list:
> -	list_for_each_entry_safe(__vma, next, &vma->unbind_link,
> -				 unbind_link)
> -		list_del_init(&__vma->unbind_link);
> +				vm_set_async_error(vm, err);
> +				up_write(&vm->lock);
>   
> -	return ERR_PTR(-EINVAL);
> -}
> +				if (vm->async_ops.error_capture.addr)
> +					vm_error_capture(vm, err, 0, 0, 0);
> +				break;
> +			}
> +			up_write(&vm->lock);
> +		} else {
> +			struct xe_vma *vma;
>   
> -static struct xe_vma *vm_unbind_all_lookup_vmas(struct xe_vm *vm,
> -						struct xe_bo *bo)
> -{
> -	struct xe_vma *first = NULL, *vma;
> +			switch (op->base.op) {
> +			case DRM_GPUVA_OP_REMAP:
> +				vma = gpuva_to_vma(op->base.remap.unmap->va);
> +				trace_xe_vma_flush(vma);
>   
> -	lockdep_assert_held(&vm->lock);
> -	xe_bo_assert_held(bo);
> +				down_write(&vm->lock);
> +				xe_vma_destroy_unlocked(vma);
> +				up_write(&vm->lock);
> +				break;
> +			case DRM_GPUVA_OP_UNMAP:
> +				vma = gpuva_to_vma(op->base.unmap.va);
> +				trace_xe_vma_flush(vma);
>   
> -	list_for_each_entry(vma, &bo->vmas, bo_link) {
> -		if (vma->vm != vm)
> -			continue;
> +				down_write(&vm->lock);
> +				xe_vma_destroy_unlocked(vma);
> +				up_write(&vm->lock);
> +				break;
> +			default:
> +				/* Nothing to do */
> +				break;
> +			}
>   
> -		prep_vma_destroy(vm, vma);
> -		if (!first)
> -			first = vma;
> -		else
> -			list_add_tail(&vma->unbind_link, &first->unbind_link);
> -	}
> +			if (op->fence && !test_bit(DMA_FENCE_FLAG_SIGNALED_BIT,
> +						   &op->fence->fence.flags)) {
> +				if (!xe_vm_no_dma_fences(vm)) {
> +					op->fence->started = true;
> +					smp_wmb();
> +					wake_up_all(&op->fence->wq);
> +				}
> +				dma_fence_signal(&op->fence->fence);
> +			}
> +		}
>   
> -	return first;
> +		xe_vma_op_cleanup(vm, op);
> +	}
>   }
>   
> -static struct xe_vma *vm_bind_ioctl_lookup_vma(struct xe_vm *vm,
> -					       struct xe_bo *bo,
> -					       u64 bo_offset_or_userptr,
> -					       u64 addr, u64 range, u32 op,
> -					       u64 gt_mask, u32 region)
> +/*
> + * Commit operations list, this step cannot fail in async mode, can fail if the
> + * bind operation fails in sync mode.
> + */

But it can fail in async mode as well, as mentioned above?


> +static int vm_bind_ioctl_ops_commit(struct xe_vm *vm,
> +				    struct list_head *ops_list, bool async)
>   {
> -	struct ww_acquire_ctx ww;
> -	struct xe_vma *vma, lookup;
> -	int err;
> -
> -	lockdep_assert_held(&vm->lock);
> +	struct xe_vma_op *op, *last_op;
> +	int err = 0;
>   
> -	lookup.start = addr;
> -	lookup.end = addr + range - 1;
> +	lockdep_assert_held_write(&vm->lock);
>   
> -	switch (VM_BIND_OP(op)) {
> -	case XE_VM_BIND_OP_MAP:
> -		XE_BUG_ON(!bo);
> +	list_for_each_entry(op, ops_list, link) {
> +		last_op = op;
> +		xe_vma_op_commit(vm, op);
> +	}
>   
> -		err = xe_bo_lock(bo, &ww, 0, true);
> +	if (!async) {
> +		err = xe_vma_op_execute(vm, last_op);
>   		if (err)
> -			return ERR_PTR(err);
> -		vma = xe_vma_create(vm, bo, bo_offset_or_userptr, addr,
> -				    addr + range - 1,
> -				    op & XE_VM_BIND_FLAG_READONLY,
> -				    gt_mask);
> -		xe_bo_unlock(bo, &ww);
> -		if (!vma)
> -			return ERR_PTR(-ENOMEM);
> +			xe_vma_op_unwind(vm, last_op, true);
> +		xe_vma_op_cleanup(vm, last_op);
> +	} else {
> +		int i;
> +		bool installed = false;
>   
> -		xe_vm_insert_vma(vm, vma);
> -		if (!bo->vm) {
> -			vm_insert_extobj(vm, vma);
> -			err = add_preempt_fences(vm, bo);
> -			if (err) {
> -				prep_vma_destroy(vm, vma);
> -				xe_vma_destroy_unlocked(vma);
> +		for (i = 0; i < last_op->num_syncs; i++)
> +			installed |= xe_sync_entry_signal(&last_op->syncs[i],
> +							  NULL,
> +							  &last_op->fence->fence);
> +		if (!installed && last_op->fence)
> +			dma_fence_signal(&last_op->fence->fence);
>   
> -				return ERR_PTR(err);
> -			}
> -		}
> -		break;
> -	case XE_VM_BIND_OP_UNMAP:
> -		vma = vm_unbind_lookup_vmas(vm, &lookup);
> -		break;
> -	case XE_VM_BIND_OP_PREFETCH:
> -		vma = vm_prefetch_lookup_vmas(vm, &lookup, region);
> -		break;
> -	case XE_VM_BIND_OP_UNMAP_ALL:
> -		XE_BUG_ON(!bo);
> +		spin_lock_irq(&vm->async_ops.lock);
> +		list_splice_tail(ops_list, &vm->async_ops.pending);
> +		spin_unlock_irq(&vm->async_ops.lock);
>   
> -		err = xe_bo_lock(bo, &ww, 0, true);
> -		if (err)
> -			return ERR_PTR(err);
> -		vma = vm_unbind_all_lookup_vmas(vm, bo);
> -		if (!vma)
> -			vma = ERR_PTR(-EINVAL);
> -		xe_bo_unlock(bo, &ww);
> -		break;
> -	case XE_VM_BIND_OP_MAP_USERPTR:
> -		XE_BUG_ON(bo);
> +		if (!vm->async_ops.error)
> +			queue_work(system_unbound_wq, &vm->async_ops.work);
> +	}
>   
> -		vma = xe_vma_create(vm, NULL, bo_offset_or_userptr, addr,
> -				    addr + range - 1,
> -				    op & XE_VM_BIND_FLAG_READONLY,
> -				    gt_mask);
> -		if (!vma)
> -			return ERR_PTR(-ENOMEM);
> +	return err;
> +}
>   
> -		err = xe_vma_userptr_pin_pages(vma);
> -		if (err) {
> -			prep_vma_destroy(vm, vma);
> -			xe_vma_destroy_unlocked(vma);
> +/*
> + * Unwind operations list, called after a failure of vm_bind_ioctl_ops_create or
> + * vm_bind_ioctl_ops_parse.
> + */
> +static void vm_bind_ioctl_ops_unwind(struct xe_vm *vm,
> +				     struct drm_gpuva_ops **ops,
> +				     int num_ops_list)
> +{
> +	int i;
>   
> -			return ERR_PTR(err);
> -		} else {
> -			xe_vm_insert_vma(vm, vma);
> +	for (i = 0; i < num_ops_list; ++i) {
> +		struct drm_gpuva_ops *__ops = ops[i];
> +		struct drm_gpuva_op *__op;
> +
> +		if (!__ops)
> +			continue;
> +
> +		drm_gpuva_for_each_op(__op, __ops) {
> +			struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
> +
> +			xe_vma_op_unwind(vm, op, false);
>   		}
> -		break;
> -	default:
> -		XE_BUG_ON("NOT POSSIBLE");
> -		vma = ERR_PTR(-EINVAL);
>   	}
> -
> -	return vma;
>   }
>   
>   #ifdef TEST_VM_ASYNC_OPS_ERROR
> @@ -2971,15 +2939,16 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
>   	struct drm_xe_vm_bind *args = data;
>   	struct drm_xe_sync __user *syncs_user;
>   	struct xe_bo **bos = NULL;
> -	struct xe_vma **vmas = NULL;
> +	struct drm_gpuva_ops **ops = NULL;
>   	struct xe_vm *vm;
>   	struct xe_engine *e = NULL;
>   	u32 num_syncs;
>   	struct xe_sync_entry *syncs = NULL;
>   	struct drm_xe_vm_bind_op *bind_ops;
> +	LIST_HEAD(ops_list);
>   	bool async;
>   	int err;
> -	int i, j = 0;
> +	int i;
>   
>   	err = vm_bind_ioctl_check_args(xe, args, &bind_ops, &async);
>   	if (err)
> @@ -3067,8 +3036,8 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
>   		goto put_engine;
>   	}
>   
> -	vmas = kzalloc(sizeof(*vmas) * args->num_binds, GFP_KERNEL);
> -	if (!vmas) {
> +	ops = kzalloc(sizeof(*ops) * args->num_binds, GFP_KERNEL);
> +	if (!ops) {
>   		err = -ENOMEM;
>   		goto put_engine;
>   	}
> @@ -3148,128 +3117,40 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
>   		u64 gt_mask = bind_ops[i].gt_mask;
>   		u32 region = bind_ops[i].region;
>   
> -		vmas[i] = vm_bind_ioctl_lookup_vma(vm, bos[i], obj_offset,
> -						   addr, range, op, gt_mask,
> -						   region);
> -		if (IS_ERR(vmas[i])) {
> -			err = PTR_ERR(vmas[i]);
> -			vmas[i] = NULL;
> -			goto destroy_vmas;
> -		}
> -	}
> -
> -	for (j = 0; j < args->num_binds; ++j) {
> -		struct xe_sync_entry *__syncs;
> -		u32 __num_syncs = 0;
> -		bool first_or_last = j == 0 || j == args->num_binds - 1;
> -
> -		if (args->num_binds == 1) {
> -			__num_syncs = num_syncs;
> -			__syncs = syncs;
> -		} else if (first_or_last && num_syncs) {
> -			bool first = j == 0;
> -
> -			__syncs = kmalloc(sizeof(*__syncs) * num_syncs,
> -					  GFP_KERNEL);
> -			if (!__syncs) {
> -				err = ENOMEM;
> -				break;
> -			}
> -
> -			/* in-syncs on first bind, out-syncs on last bind */
> -			for (i = 0; i < num_syncs; ++i) {
> -				bool signal = syncs[i].flags &
> -					DRM_XE_SYNC_SIGNAL;
> -
> -				if ((first && !signal) || (!first && signal))
> -					__syncs[__num_syncs++] = syncs[i];
> -			}
> -		} else {
> -			__num_syncs = 0;
> -			__syncs = NULL;
> -		}
> -
> -		if (async) {
> -			bool last = j == args->num_binds - 1;
> -
> -			/*
> -			 * Each pass of async worker drops the ref, take a ref
> -			 * here, 1 set of refs taken above
> -			 */
> -			if (!last) {
> -				if (e)
> -					xe_engine_get(e);
> -				xe_vm_get(vm);
> -			}
> -
> -			err = vm_bind_ioctl_async(vm, vmas[j], e, bos[j],
> -						  bind_ops + j, __syncs,
> -						  __num_syncs);
> -			if (err && !last) {
> -				if (e)
> -					xe_engine_put(e);
> -				xe_vm_put(vm);
> -			}
> -			if (err)
> -				break;
> -		} else {
> -			XE_BUG_ON(j != 0);	/* Not supported */
> -			err = vm_bind_ioctl(vm, vmas[j], e, bos[j],
> -					    bind_ops + j, __syncs,
> -					    __num_syncs, NULL);
> -			break;	/* Needed so cleanup loops work */
> +		ops[i] = vm_bind_ioctl_ops_create(vm, bos[i], obj_offset,
> +						  addr, range, op, gt_mask,
> +						  region);
> +		if (IS_ERR(ops[i])) {
> +			err = PTR_ERR(ops[i]);
> +			ops[i] = NULL;
> +			goto unwind_ops;
>   		}
>   	}
>   
> -	/* Most of cleanup owned by the async bind worker */
> -	if (async && !err) {
> -		up_write(&vm->lock);
> -		if (args->num_binds > 1)
> -			kfree(syncs);
> -		goto free_objs;
> -	}
> +	err = vm_bind_ioctl_ops_parse(vm, e, ops, args->num_binds,
> +				      syncs, num_syncs, &ops_list, async);
> +	if (err)
> +		goto unwind_ops;
>   
> -destroy_vmas:
> -	for (i = j; err && i < args->num_binds; ++i) {
> -		u32 op = bind_ops[i].op;
> -		struct xe_vma *vma, *next;
> +	err = vm_bind_ioctl_ops_commit(vm, &ops_list, async);
> +	up_write(&vm->lock);
>   
> -		if (!vmas[i])
> -			break;
> +	for (i = 0; i < args->num_binds; ++i)
> +		xe_bo_put(bos[i]);
>   
> -		list_for_each_entry_safe(vma, next, &vma->unbind_link,
> -					 unbind_link) {
> -			list_del_init(&vma->unbind_link);
> -			if (!vma->destroyed) {
> -				prep_vma_destroy(vm, vma);
> -				xe_vma_destroy_unlocked(vma);
> -			}
> -		}
> +	return err;
>   
> -		switch (VM_BIND_OP(op)) {
> -		case XE_VM_BIND_OP_MAP:
> -			prep_vma_destroy(vm, vmas[i]);
> -			xe_vma_destroy_unlocked(vmas[i]);
> -			break;
> -		case XE_VM_BIND_OP_MAP_USERPTR:
> -			prep_vma_destroy(vm, vmas[i]);
> -			xe_vma_destroy_unlocked(vmas[i]);
> -			break;
> -		}
> -	}
> +unwind_ops:
> +	vm_bind_ioctl_ops_unwind(vm, ops, args->num_binds);
>   release_vm_lock:
>   	up_write(&vm->lock);
>   free_syncs:
> -	while (num_syncs--) {
> -		if (async && j &&
> -		    !(syncs[num_syncs].flags & DRM_XE_SYNC_SIGNAL))
> -			continue;	/* Still in async worker */
> +	while (num_syncs--)
>   		xe_sync_entry_cleanup(&syncs[num_syncs]);
> -	}
>   
>   	kfree(syncs);
>   put_obj:
> -	for (i = j; i < args->num_binds; ++i)
> +	for (i = 0; i < args->num_binds; ++i)
>   		xe_bo_put(bos[i]);
>   put_engine:
>   	if (e)
> @@ -3278,7 +3159,7 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
>   	xe_vm_put(vm);
>   free_objs:
>   	kfree(bos);
> -	kfree(vmas);
> +	kfree(ops);
>   	if (args->num_binds > 1)
>   		kfree(bind_ops);
>   	return err;
> @@ -3322,14 +3203,14 @@ void xe_vm_unlock(struct xe_vm *vm, struct ww_acquire_ctx *ww)
>    */
>   int xe_vm_invalidate_vma(struct xe_vma *vma)
>   {
> -	struct xe_device *xe = vma->vm->xe;
> +	struct xe_device *xe = xe_vma_vm(vma)->xe;
>   	struct xe_gt *gt;
>   	u32 gt_needs_invalidate = 0;
>   	int seqno[XE_MAX_GT];
>   	u8 id;
>   	int ret;
>   
> -	XE_BUG_ON(!xe_vm_in_fault_mode(vma->vm));
> +	XE_BUG_ON(!xe_vm_in_fault_mode(xe_vma_vm(vma)));
>   	trace_xe_vma_usm_invalidate(vma);
>   
>   	/* Check that we don't race with page-table updates */
> @@ -3338,11 +3219,11 @@ int xe_vm_invalidate_vma(struct xe_vma *vma)
>   			WARN_ON_ONCE(!mmu_interval_check_retry
>   				     (&vma->userptr.notifier,
>   				      vma->userptr.notifier_seq));
> -			WARN_ON_ONCE(!dma_resv_test_signaled(&vma->vm->resv,
> +			WARN_ON_ONCE(!dma_resv_test_signaled(&xe_vma_vm(vma)->resv,
>   							     DMA_RESV_USAGE_BOOKKEEP));
>   
>   		} else {
> -			xe_bo_assert_held(vma->bo);
> +			xe_bo_assert_held(xe_vma_bo(vma));
>   		}
>   	}
>   
> @@ -3372,7 +3253,7 @@ int xe_vm_invalidate_vma(struct xe_vma *vma)
>   #if IS_ENABLED(CONFIG_DRM_XE_SIMPLE_ERROR_CAPTURE)
>   int xe_analyze_vm(struct drm_printer *p, struct xe_vm *vm, int gt_id)
>   {
> -	struct rb_node *node;
> +	DRM_GPUVA_ITER(it, &vm->mgr, 0);
>   	bool is_vram;
>   	uint64_t addr;
>   
> @@ -3385,8 +3266,8 @@ int xe_analyze_vm(struct drm_printer *p, struct xe_vm *vm, int gt_id)
>   		drm_printf(p, " VM root: A:0x%llx %s\n", addr, is_vram ? "VRAM" : "SYS");
>   	}
>   
> -	for (node = rb_first(&vm->vmas); node; node = rb_next(node)) {
> -		struct xe_vma *vma = to_xe_vma(node);
> +	drm_gpuva_iter_for_each(it) {
> +		struct xe_vma* vma = gpuva_to_vma(it.va);
>   		bool is_userptr = xe_vma_is_userptr(vma);
>   
>   		if (is_userptr) {
> @@ -3395,10 +3276,10 @@ int xe_analyze_vm(struct drm_printer *p, struct xe_vm *vm, int gt_id)
>   			xe_res_first_sg(vma->userptr.sg, 0, GEN8_PAGE_SIZE, &cur);
>   			addr = xe_res_dma(&cur);
>   		} else {
> -			addr = xe_bo_addr(vma->bo, 0, GEN8_PAGE_SIZE, &is_vram);
> +			addr = xe_bo_addr(xe_vma_bo(vma), 0, GEN8_PAGE_SIZE, &is_vram);
>   		}
>   		drm_printf(p, " [%016llx-%016llx] S:0x%016llx A:%016llx %s\n",
> -			   vma->start, vma->end, vma->end - vma->start + 1ull,
> +			   xe_vma_start(vma), xe_vma_end(vma), xe_vma_size(vma),
>   			   addr, is_userptr ? "USR" : is_vram ? "VRAM" : "SYS");
>   	}
>   	up_read(&vm->lock);
> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> index 748dc16ebed9..21b1054949c4 100644
> --- a/drivers/gpu/drm/xe/xe_vm.h
> +++ b/drivers/gpu/drm/xe/xe_vm.h
> @@ -6,6 +6,7 @@
>   #ifndef _XE_VM_H_
>   #define _XE_VM_H_
>   
> +#include "xe_bo_types.h"
>   #include "xe_macros.h"
>   #include "xe_map.h"
>   #include "xe_vm_types.h"
> @@ -25,7 +26,6 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags);
>   void xe_vm_free(struct kref *ref);
>   
>   struct xe_vm *xe_vm_lookup(struct xe_file *xef, u32 id);
> -int xe_vma_cmp_vma_cb(const void *key, const struct rb_node *node);
>   
>   static inline struct xe_vm *xe_vm_get(struct xe_vm *vm)
>   {
> @@ -50,7 +50,67 @@ static inline bool xe_vm_is_closed(struct xe_vm *vm)
>   }
>   
>   struct xe_vma *
> -xe_vm_find_overlapping_vma(struct xe_vm *vm, const struct xe_vma *vma);
> +xe_vm_find_overlapping_vma(struct xe_vm *vm, u64 start, u64 range);
> +
> +static inline struct xe_vm *gpuva_to_vm(struct drm_gpuva *gpuva)
> +{
> +	return container_of(gpuva->mgr, struct xe_vm, mgr);
> +}
> +
> +static inline struct xe_vma *gpuva_to_vma(struct drm_gpuva *gpuva)
> +{
> +	return container_of(gpuva, struct xe_vma, gpuva);
> +}
> +
> +static inline struct xe_vma_op *gpuva_op_to_vma_op(struct drm_gpuva_op *op)
> +{
> +	return container_of(op, struct xe_vma_op, base);
> +}
> +
> +/*
> + * Let's abstract start, size, end, bo_offset, vm, and bo as the underlying
> + * implementation may change
> + */
> +static inline u64 xe_vma_start(struct xe_vma *vma)
> +{
> +	return vma->gpuva.va.addr;
> +}
> +
> +static inline u64 xe_vma_size(struct xe_vma *vma)
> +{
> +	return vma->gpuva.va.range;
> +}
> +
> +static inline u64 xe_vma_end(struct xe_vma *vma)
> +{
> +	return xe_vma_start(vma) + xe_vma_size(vma);
> +}
> +
> +static inline u64 xe_vma_bo_offset(struct xe_vma *vma)
> +{
> +	return vma->gpuva.gem.offset;
> +}
> +
> +static inline struct xe_bo *xe_vma_bo(struct xe_vma *vma)
> +{
> +	return !vma->gpuva.gem.obj ? NULL :
> +		container_of(vma->gpuva.gem.obj, struct xe_bo, ttm.base);
> +}
> +
> +static inline struct xe_vm *xe_vma_vm(struct xe_vma *vma)
> +{
> +	return container_of(vma->gpuva.mgr, struct xe_vm, mgr);
> +}
> +
> +static inline bool xe_vma_read_only(struct xe_vma *vma)
> +{
> +	return vma->gpuva.flags & XE_VMA_READ_ONLY;
> +}
> +
> +static inline u64 xe_vma_userptr(struct xe_vma *vma)
> +{
> +	return vma->gpuva.gem.offset;
> +}
>   
>   #define xe_vm_assert_held(vm) dma_resv_assert_held(&(vm)->resv)
>   
> @@ -117,7 +177,7 @@ static inline void xe_vm_reactivate_rebind(struct xe_vm *vm)
>   
>   static inline bool xe_vma_is_userptr(struct xe_vma *vma)
>   {
> -	return !vma->bo;
> +	return !xe_vma_bo(vma);
>   }
>   
>   int xe_vma_userptr_pin_pages(struct xe_vma *vma);
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
> index 29815852985a..46d1b8d7b72f 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> @@ -30,7 +30,7 @@ static int madvise_preferred_mem_class(struct xe_device *xe, struct xe_vm *vm,
>   		struct xe_bo *bo;
>   		struct ww_acquire_ctx ww;
>   
> -		bo = vmas[i]->bo;
> +		bo = xe_vma_bo(vmas[i]);
>   
>   		err = xe_bo_lock(bo, &ww, 0, true);
>   		if (err)
> @@ -55,7 +55,7 @@ static int madvise_preferred_gt(struct xe_device *xe, struct xe_vm *vm,
>   		struct xe_bo *bo;
>   		struct ww_acquire_ctx ww;
>   
> -		bo = vmas[i]->bo;
> +		bo = xe_vma_bo(vmas[i]);
>   
>   		err = xe_bo_lock(bo, &ww, 0, true);
>   		if (err)
> @@ -91,7 +91,7 @@ static int madvise_preferred_mem_class_gt(struct xe_device *xe,
>   		struct xe_bo *bo;
>   		struct ww_acquire_ctx ww;
>   
> -		bo = vmas[i]->bo;
> +		bo = xe_vma_bo(vmas[i]);
>   
>   		err = xe_bo_lock(bo, &ww, 0, true);
>   		if (err)
> @@ -114,7 +114,7 @@ static int madvise_cpu_atomic(struct xe_device *xe, struct xe_vm *vm,
>   		struct xe_bo *bo;
>   		struct ww_acquire_ctx ww;
>   
> -		bo = vmas[i]->bo;
> +		bo = xe_vma_bo(vmas[i]);
>   		if (XE_IOCTL_ERR(xe, !(bo->flags & XE_BO_CREATE_SYSTEM_BIT)))
>   			return -EINVAL;
>   
> @@ -145,7 +145,7 @@ static int madvise_device_atomic(struct xe_device *xe, struct xe_vm *vm,
>   		struct xe_bo *bo;
>   		struct ww_acquire_ctx ww;
>   
> -		bo = vmas[i]->bo;
> +		bo = xe_vma_bo(vmas[i]);
>   		if (XE_IOCTL_ERR(xe, !(bo->flags & XE_BO_CREATE_VRAM0_BIT) &&
>   				 !(bo->flags & XE_BO_CREATE_VRAM1_BIT)))
>   			return -EINVAL;
> @@ -176,7 +176,7 @@ static int madvise_priority(struct xe_device *xe, struct xe_vm *vm,
>   		struct xe_bo *bo;
>   		struct ww_acquire_ctx ww;
>   
> -		bo = vmas[i]->bo;
> +		bo = xe_vma_bo(vmas[i]);
>   
>   		err = xe_bo_lock(bo, &ww, 0, true);
>   		if (err)
> @@ -210,19 +210,12 @@ static const madvise_func madvise_funcs[] = {
>   	[DRM_XE_VM_MADVISE_PIN] = madvise_pin,
>   };
>   
> -static struct xe_vma *node_to_vma(const struct rb_node *node)
> -{
> -	BUILD_BUG_ON(offsetof(struct xe_vma, vm_node) != 0);
> -	return (struct xe_vma *)node;
> -}
> -
>   static struct xe_vma **
>   get_vmas(struct xe_vm *vm, int *num_vmas, u64 addr, u64 range)
>   {
> -	struct xe_vma **vmas;
> -	struct xe_vma *vma, *__vma, lookup;
> +	struct xe_vma **vmas, **__vmas;
>   	int max_vmas = 8;
> -	struct rb_node *node;
> +	DRM_GPUVA_ITER(it, &vm->mgr, addr);
>   
>   	lockdep_assert_held(&vm->lock);
>   
> @@ -230,64 +223,24 @@ get_vmas(struct xe_vm *vm, int *num_vmas, u64 addr, u64 range)
>   	if (!vmas)
>   		return NULL;
>   
> -	lookup.start = addr;
> -	lookup.end = addr + range - 1;
> +	drm_gpuva_iter_for_each_range(it, addr + range) {
> +		struct xe_vma *vma = gpuva_to_vma(it.va);
>   
> -	vma = xe_vm_find_overlapping_vma(vm, &lookup);
> -	if (!vma)
> -		return vmas;
> +		if (xe_vma_is_userptr(vma))
> +			continue;
>   
> -	if (!xe_vma_is_userptr(vma)) {
> +		if (*num_vmas == max_vmas) {
> +			max_vmas <<= 1;
> +			__vmas = krealloc(vmas, max_vmas * sizeof(*vmas),
> +					  GFP_KERNEL);
> +			if (!__vmas)
> +				return NULL;
> +			vmas = __vmas;
> +		}
>   		vmas[*num_vmas] = vma;
>   		*num_vmas += 1;
>   	}
>   
> -	node = &vma->vm_node;
> -	while ((node = rb_next(node))) {
> -		if (!xe_vma_cmp_vma_cb(&lookup, node)) {
> -			__vma = node_to_vma(node);
> -			if (xe_vma_is_userptr(__vma))
> -				continue;
> -
> -			if (*num_vmas == max_vmas) {
> -				struct xe_vma **__vmas =
> -					krealloc(vmas, max_vmas * sizeof(*vmas),
> -						 GFP_KERNEL);
> -
> -				if (!__vmas)
> -					return NULL;
> -				vmas = __vmas;
> -			}
> -			vmas[*num_vmas] = __vma;
> -			*num_vmas += 1;
> -		} else {
> -			break;
> -		}
> -	}
> -
> -	node = &vma->vm_node;
> -	while ((node = rb_prev(node))) {
> -		if (!xe_vma_cmp_vma_cb(&lookup, node)) {
> -			__vma = node_to_vma(node);
> -			if (xe_vma_is_userptr(__vma))
> -				continue;
> -
> -			if (*num_vmas == max_vmas) {
> -				struct xe_vma **__vmas =
> -					krealloc(vmas, max_vmas * sizeof(*vmas),
> -						 GFP_KERNEL);
> -
> -				if (!__vmas)
> -					return NULL;
> -				vmas = __vmas;
> -			}
> -			vmas[*num_vmas] = __vma;
> -			*num_vmas += 1;
> -		} else {
> -			break;
> -		}
> -	}
> -
>   	return vmas;
>   }
>   
> diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
> index fada7896867f..a81dc9a1a7a6 100644
> --- a/drivers/gpu/drm/xe/xe_vm_types.h
> +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> @@ -6,6 +6,8 @@
>   #ifndef _XE_VM_TYPES_H_
>   #define _XE_VM_TYPES_H_
>   
> +#include <drm/drm_gpuva_mgr.h>
> +
>   #include <linux/dma-resv.h>
>   #include <linux/kref.h>
>   #include <linux/mmu_notifier.h>
> @@ -14,28 +16,23 @@
>   #include "xe_device_types.h"
>   #include "xe_pt_types.h"
>   
> +struct async_op_fence;
>   struct xe_bo;
> +struct xe_sync_entry;
>   struct xe_vm;
>   
> -struct xe_vma {
> -	struct rb_node vm_node;
> -	/** @vm: VM which this VMA belongs to */
> -	struct xe_vm *vm;
> +#define TEST_VM_ASYNC_OPS_ERROR
> +#define FORCE_ASYNC_OP_ERROR	BIT(31)
>   
> -	/**
> -	 * @start: start address of this VMA within its address domain, end -
> -	 * start + 1 == VMA size
> -	 */
> -	u64 start;
> -	/** @end: end address of this VMA within its address domain */
> -	u64 end;
> -	/** @pte_flags: pte flags for this VMA */
> -	u32 pte_flags;
> +#define XE_VMA_READ_ONLY	DRM_GPUVA_USERBITS
> +#define XE_VMA_DESTROYED	(DRM_GPUVA_USERBITS << 1)
> +#define XE_VMA_ATOMIC_PTE_BIT	(DRM_GPUVA_USERBITS << 2)
> +#define XE_VMA_FIRST_REBIND	(DRM_GPUVA_USERBITS << 3)
> +#define XE_VMA_LAST_REBIND	(DRM_GPUVA_USERBITS << 4)
BUILD_BUG_ON() somewhere to make sure we don't overflow the number of
available userbits?
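E.g. in xe_vm_create() (untested, and assuming the gpuva flags field
stays at least 32 bits wide):

	BUILD_BUG_ON(XE_VMA_LAST_REBIND >= BIT(31));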
>   
> -	/** @bo: BO if not a userptr, must be NULL is userptr */
> -	struct xe_bo *bo;
> -	/** @bo_offset: offset into BO if not a userptr, unused for userptr */
> -	u64 bo_offset;
> +struct xe_vma {
> +	/** @gpuva: Base GPUVA object */
> +	struct drm_gpuva gpuva;
>   
>   	/** @gt_mask: GT mask of where to create binding for this VMA */
>   	u64 gt_mask;
> @@ -49,40 +46,8 @@ struct xe_vma {
>   	 */
>   	u64 gt_present;
>   
> -	/**
> -	 * @destroyed: VMA is destroyed, in the sense that it shouldn't be
> -	 * subject to rebind anymore. This field must be written under
> -	 * the vm lock in write mode and the userptr.notifier_lock in
> -	 * either mode. Read under the vm lock or the userptr.notifier_lock in
> -	 * write mode.
> -	 */
> -	bool destroyed;
> -
> -	/**
> -	 * @first_munmap_rebind: VMA is first in a sequence of ops that triggers
> -	 * a rebind (munmap style VM unbinds). This indicates the operation
> -	 * using this VMA must wait on all dma-resv slots (wait for pending jobs
> -	 * / trigger preempt fences).
> -	 */
> -	bool first_munmap_rebind;
> -
> -	/**
> -	 * @last_munmap_rebind: VMA is first in a sequence of ops that triggers
> -	 * a rebind (munmap style VM unbinds). This indicates the operation
> -	 * using this VMA must install itself into kernel dma-resv slot (blocks
> -	 * future jobs) and kick the rebind work in compute mode.
> -	 */
> -	bool last_munmap_rebind;
> -
> -	/** @use_atomic_access_pte_bit: Set atomic access bit in PTE */
> -	bool use_atomic_access_pte_bit;
> -
> -	union {
> -		/** @bo_link: link into BO if not a userptr */
> -		struct list_head bo_link;
> -		/** @userptr_link: link into VM repin list if userptr */
> -		struct list_head userptr_link;
> -	};
> +	/** @userptr_link: link into VM repin list if userptr */
> +	struct list_head userptr_link;
>   
>   	/**
>   	 * @rebind_link: link into VM if this VMA needs rebinding, and
> @@ -105,8 +70,6 @@ struct xe_vma {
>   
>   	/** @userptr: user pointer state */
>   	struct {
> -		/** @ptr: user pointer */
> -		uintptr_t ptr;
>   		/** @invalidate_link: Link for the vm::userptr.invalidated list */
>   		struct list_head invalidate_link;
>   		/**
> @@ -154,6 +117,9 @@ struct xe_device;
>   #define xe_vm_assert_held(vm) dma_resv_assert_held(&(vm)->resv)
>   
>   struct xe_vm {
> +	/** @mgr: base GPUVA used to track VMAs */
> +	struct drm_gpuva_manager mgr;
> +
>   	struct xe_device *xe;
>   
>   	struct kref refcount;
> @@ -165,7 +131,6 @@ struct xe_vm {
>   	struct dma_resv resv;
>   
>   	u64 size;
> -	struct rb_root vmas;
>   
>   	struct xe_pt *pt_root[XE_MAX_GT];
>   	struct xe_bo *scratch_bo[XE_MAX_GT];
> @@ -339,4 +304,96 @@ struct xe_vm {
>   	} error_capture;
>   };
>   
> +/** struct xe_vma_op_map - VMA map operation */
> +struct xe_vma_op_map {
> +	/** @vma: VMA to map */
> +	struct xe_vma *vma;
> +	/** @immediate: Immediate bind */
> +	bool immediate;
> +	/** @read_only: Read only */
> +	bool read_only;
> +};
> +
> +/** struct xe_vma_op_unmap - VMA unmap operation */
> +struct xe_vma_op_unmap {
> +	/** @start: start of the VMA unmap */
> +	u64 start;
> +	/** @range: range of the VMA unmap */
> +	u64 range;
> +};
> +
> +/** struct xe_vma_op_remap - VMA remap operation */
> +struct xe_vma_op_remap {
> +	/** @prev: VMA preceding part of a split mapping */
> +	struct xe_vma *prev;
> +	/** @next: VMA subsequent part of a split mapping */
> +	struct xe_vma *next;
> +	/** @start: start of the VMA unmap */
> +	u64 start;
> +	/** @range: range of the VMA unmap */
> +	u64 range;
> +	/** @unmap_done: unmap operation is done */
> +	bool unmap_done;
> +};
> +
> +/** struct xe_vma_op_prefetch - VMA prefetch operation */
> +struct xe_vma_op_prefetch {
> +	/** @region: memory region to prefetch to */
> +	u32 region;
> +};
> +
> +/** enum xe_vma_op_flags - flags for VMA operation */
> +enum xe_vma_op_flags {
> +	/** @XE_VMA_OP_FIRST: first VMA operation for a set of syncs */
> +	XE_VMA_OP_FIRST		= (0x1 << 0),
> +	/** @XE_VMA_OP_LAST: last VMA operation for a set of syncs */
> +	XE_VMA_OP_LAST		= (0x1 << 1),
> +};
> +
> +/** struct xe_vma_op - VMA operation */
> +struct xe_vma_op {
> +	/** @base: GPUVA base operation */
> +	struct drm_gpuva_op base;
> +	/**
> +	 * @ops: GPUVA ops; when set, call drm_gpuva_ops_free() after this
> +	 * operation is processed
> +	 */
> +	struct drm_gpuva_ops *ops;
> +	/** @engine: engine for this operation */
> +	struct xe_engine *engine;
> +	/**
> +	 * @syncs: syncs for this operation, only used on first and last
> +	 * operation
> +	 */
> +	struct xe_sync_entry *syncs;
> +	/** @num_syncs: number of syncs */
> +	u32 num_syncs;
> +	/** @link: async operation link */
> +	struct list_head link;
> +	/**
> +	 * @fence: async operation fence, signaled on last operation complete
> +	 * @fence: async operation fence, signaled when the last operation completes
> +	struct async_op_fence *fence;
> +	/** @gt_mask: gt mask for this operation */
> +	u64 gt_mask;
> +	/** @flags: operation flags */
> +	enum xe_vma_op_flags flags;
> +
> +#ifdef TEST_VM_ASYNC_OPS_ERROR
> +	/** @inject_error: inject error to test async op error handling */
> +	bool inject_error;
> +#endif
> +
> +	union {
> +		/** @map: VMA map operation specific data */
> +		struct xe_vma_op_map map;
> +		/** @unmap: VMA unmap operation specific data */
> +		struct xe_vma_op_unmap unmap;
> +		/** @remap: VMA remap operation specific data */
> +		struct xe_vma_op_remap remap;
> +		/** @prefetch: VMA prefetch operation specific data */
> +		struct xe_vma_op_prefetch prefetch;
> +	};
> +};
> +
>   #endif
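(Not part of the patch: a minimal sketch of how the per-op union at the end
of struct xe_vma_op is meant to be consumed.  The DRM_GPUVA_OP_* type names
and the op->base.op field are assumptions based on the in-flight
drm_gpuva_mgr.h, and xe_vma_op_dispatch() is a hypothetical helper name.)

	#include <linux/errno.h>

	/* Sketch only: op->base.op selects which union member is valid. */
	static int xe_vma_op_dispatch(struct xe_vma_op *op)
	{
		switch (op->base.op) {
		case DRM_GPUVA_OP_MAP:
			/* uses op->map.vma, op->map.immediate, op->map.read_only */
			return 0;
		case DRM_GPUVA_OP_UNMAP:
			/* uses op->unmap.start, op->unmap.range */
			return 0;
		case DRM_GPUVA_OP_REMAP:
			/* op->remap.prev / op->remap.next describe the split mapping */
			return 0;
		case DRM_GPUVA_OP_PREFETCH:
			/* uses op->prefetch.region */
			return 0;
		default:
			return -EINVAL;
		}
	}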

end of thread, other threads:[~2023-04-25 11:41 UTC | newest]

Thread overview: 21+ messages
2023-04-04  1:42 [Intel-xe] [PATCH v5 0/8] Port Xe to use GPUVA and implement NULL VM binds Matthew Brost
2023-04-04  1:42 ` [Intel-xe] [PATCH v5 1/8] maple_tree: split up MA_STATE() macro Matthew Brost
2023-04-04  1:42 ` [Intel-xe] [PATCH v5 2/8] maple_tree: Export mas_preallocate Matthew Brost
2023-04-04  1:42 ` [Intel-xe] [PATCH v5 3/8] drm: manager to keep track of GPUs VA mappings Matthew Brost
2023-04-04  1:42 ` [Intel-xe] [PATCH v5 4/8] drm/xe: Port Xe to GPUVA Matthew Brost
2023-04-21 12:52   ` Thomas Hellström
2023-04-21 12:56     ` Thomas Hellström
2023-04-21 14:54   ` Thomas Hellström
2023-04-25 11:41   ` Thomas Hellström
2023-04-04  1:42 ` [Intel-xe] [PATCH v5 5/8] drm/xe: NULL binding implementation Matthew Brost
2023-04-04  1:42 ` [Intel-xe] [PATCH v5 6/8] drm/xe: Avoid doing rebinds Matthew Brost
2023-04-04  1:42 ` [Intel-xe] [PATCH v5 7/8] drm/xe: Reduce the number list links in xe_vma Matthew Brost
2023-04-04  1:42 ` [Intel-xe] [PATCH v5 8/8] drm/xe: Optimize size of xe_vma allocation Matthew Brost
2023-04-04  1:44 ` [Intel-xe] ✓ CI.Patch_applied: success for Port Xe to use GPUVA and implement NULL VM binds (rev5) Patchwork
2023-04-04  1:45 ` [Intel-xe] ✓ CI.KUnit: " Patchwork
2023-04-04  1:49 ` [Intel-xe] ✓ CI.Build: " Patchwork
2023-04-04  2:09 ` [Intel-xe] ○ CI.BAT: info " Patchwork
2023-04-04 13:20 ` [Intel-xe] [PATCH v5 0/8] Port Xe to use GPUVA and implement NULL VM binds Thomas Hellström
2023-04-04 14:56   ` Matthew Brost
2023-04-05 15:31     ` Thomas Hellström
2023-04-06  1:20       ` Matthew Brost
