From: Matthew Brost <matthew.brost@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: Matthew Brost <matthew.brost@intel.com>
Subject: [PATCH v4 15/30] drm/xe: Add xe_vm_pgtable_update_op to xe_vma_ops
Date: Thu, 7 Mar 2024 21:07:51 -0800
Message-Id: <20240308050806.577176-16-matthew.brost@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240308050806.577176-1-matthew.brost@intel.com>
References: <20240308050806.577176-1-matthew.brost@intel.com>

This will help with the conversion to one job per VM bind IOCTL. Only the
allocation of the page table update operations is implemented in this
patch.
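For context, the count-then-allocate split works like the userspace sketch
below (plain C, with calloc/free standing in for kmalloc_array/kfree and a
hard-coded tile count standing in for XE_MAX_TILES_PER_DEVICE; the names
are illustrative stand-ins, not the driver's types): parsing bumps a
per-tile counter for every tile in an operation's tile_mask, and the
xe_vm_pgtable_update_op arrays are then allocated in one pass once the
totals are known.

#include <stdio.h>
#include <stdlib.h>

#define MAX_TILES 4 /* stand-in for XE_MAX_TILES_PER_DEVICE */

struct pt_update_op { int dummy; }; /* stand-in for xe_vm_pgtable_update_op */

struct vma_ops {
	struct {
		struct pt_update_op *ops;
		unsigned int num_ops;
	} pt_update_ops[MAX_TILES];
};

/* Mirrors xe_vma_ops_incr_pt_update_ops(): bump the counter for every
 * tile selected by tile_mask. */
static void incr_pt_update_ops(struct vma_ops *vops, unsigned char tile_mask)
{
	for (int i = 0; i < MAX_TILES; ++i)
		if (tile_mask & (1u << i))
			++vops->pt_update_ops[i].num_ops;
}

/* Mirrors xe_vma_ops_alloc(): allocate each tile's array once the final
 * counts are known; tiles with zero ops get no allocation. */
static int alloc_pt_update_ops(struct vma_ops *vops)
{
	for (int i = 0; i < MAX_TILES; ++i) {
		if (!vops->pt_update_ops[i].num_ops)
			continue;

		vops->pt_update_ops[i].ops =
			calloc(vops->pt_update_ops[i].num_ops,
			       sizeof(*vops->pt_update_ops[i].ops));
		if (!vops->pt_update_ops[i].ops)
			return -1;
	}
	return 0;
}

/* Mirrors xe_vma_ops_fini(): freeing every slot unconditionally is safe
 * because free(NULL), like kfree(NULL), is a no-op. */
static void fini_pt_update_ops(struct vma_ops *vops)
{
	for (int i = 0; i < MAX_TILES; ++i)
		free(vops->pt_update_ops[i].ops);
}

int main(void)
{
	struct vma_ops vops = {0};

	incr_pt_update_ops(&vops, 0x3); /* an op touching tiles 0 and 1 */
	incr_pt_update_ops(&vops, 0x1); /* an op touching tile 0 only */

	if (alloc_pt_update_ops(&vops))
		return 1;
	printf("tile0: %u ops, tile1: %u ops\n",
	       vops.pt_update_ops[0].num_ops, vops.pt_update_ops[1].num_ops);
	fini_pt_update_ops(&vops);
	return 0;
}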
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_pt_types.h | 12 ++++++
 drivers/gpu/drm/xe/xe_vm.c       | 66 +++++++++++++++++++++++++++++---
 drivers/gpu/drm/xe/xe_vm_types.h |  8 ++++
 3 files changed, 81 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_pt_types.h b/drivers/gpu/drm/xe/xe_pt_types.h
index cee70cb0f014..2093150f461e 100644
--- a/drivers/gpu/drm/xe/xe_pt_types.h
+++ b/drivers/gpu/drm/xe/xe_pt_types.h
@@ -74,4 +74,16 @@ struct xe_vm_pgtable_update {
 	u32 flags;
 };
 
+/** struct xe_vm_pgtable_update_op - Page table update operation */
+struct xe_vm_pgtable_update_op {
+	/** @entries: entries to update for this operation */
+	struct xe_vm_pgtable_update entries[XE_VM_MAX_LEVEL * 2 + 1];
+	/** @num_entries: number of entries for this update operation */
+	u32 num_entries;
+	/** @bind: is a bind */
+	bool bind;
+	/** @rebind: is a rebind */
+	bool rebind;
+};
+
 #endif
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 5b93c71fc5e9..72e9bdab79d5 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -1319,6 +1319,42 @@ static void xe_vma_ops_init(struct xe_vma_ops *vops, struct xe_vm *vm,
 	vops->num_syncs = num_syncs;
 }
 
+static int xe_vma_ops_alloc(struct xe_vma_ops *vops)
+{
+	int i;
+
+	for (i = 0; i < XE_MAX_TILES_PER_DEVICE; ++i) {
+		if (!vops->pt_update_ops[i].num_ops)
+			continue;
+
+		vops->pt_update_ops[i].ops =
+			kmalloc_array(vops->pt_update_ops[i].num_ops,
+				      sizeof(*vops->pt_update_ops[i].ops),
+				      GFP_KERNEL);
+		if (!vops->pt_update_ops[i].ops)
+			return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void xe_vma_ops_fini(struct xe_vma_ops *vops)
+{
+	int i;
+
+	for (i = 0; i < XE_MAX_TILES_PER_DEVICE; ++i)
+		kfree(vops->pt_update_ops[i].ops);
+}
+
+static void xe_vma_ops_incr_pt_update_ops(struct xe_vma_ops *vops, u8 tile_mask)
+{
+	int i;
+
+	for (i = 0; i < XE_MAX_TILES_PER_DEVICE; ++i)
+		if (BIT(i) & tile_mask)
+			++vops->pt_update_ops[i].num_ops;
+}
+
 struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
 {
 	struct drm_gem_object *vm_resv_obj;
@@ -1343,6 +1379,11 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
 	xe_vma_ops_init(&vm->dummy_ops.vops, vm, NULL, NULL, 0);
 	INIT_LIST_HEAD(&vm->dummy_ops.op.link);
 	list_add(&vm->dummy_ops.op.link, &vm->dummy_ops.vops.list);
+	for (id = 0; id < XE_MAX_TILES_PER_DEVICE; ++id)
+		vm->dummy_ops.vops.pt_update_ops[id].num_ops = 1;
+	err = xe_vma_ops_alloc(&vm->dummy_ops.vops);
+	if (err)
+		goto err_free;
 
 	INIT_LIST_HEAD(&vm->rebind_list);
 
@@ -1468,12 +1509,14 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
 	return ERR_PTR(err);
 
 err_no_resv:
-	mutex_destroy(&vm->snap_mutex);
+	if (!(flags & XE_VM_FLAG_MIGRATION))
+		xe_device_mem_access_put(xe);
 	for_each_tile(tile, xe, id)
 		xe_range_fence_tree_fini(&vm->rftree[id]);
+err_free:
+	mutex_destroy(&vm->snap_mutex);
+	xe_vma_ops_fini(&vm->dummy_ops.vops);
 	kfree(vm);
-	if (!(flags & XE_VM_FLAG_MIGRATION))
-		xe_device_mem_access_put(xe);
 	return ERR_PTR(err);
 }
 
@@ -1611,6 +1654,7 @@ static void vm_destroy_work_func(struct work_struct *w)
 
 	trace_xe_vm_free(vm);
 	dma_fence_put(vm->rebind_fence);
+	xe_vma_ops_fini(&vm->dummy_ops.vops);
 	kfree(vm);
 }
 
@@ -2284,7 +2328,6 @@ static int xe_vma_op_commit(struct xe_vm *vm, struct xe_vma_op *op)
 	return err;
 }
 
-
 static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct xe_exec_queue *q,
 				   struct drm_gpuva_ops *ops,
 				   struct xe_sync_entry *syncs, u32 num_syncs,
@@ -2334,6 +2377,9 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct xe_exec_queue *q,
 				return PTR_ERR(vma);
 
 			op->map.vma = vma;
+			if (op->map.immediate || !xe_vm_in_fault_mode(vm))
+				xe_vma_ops_incr_pt_update_ops(vops,
+							      op->tile_mask);
 			break;
 		}
 		case DRM_GPUVA_OP_REMAP:
@@ -2378,6 +2424,8 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct xe_exec_queue *q,
 				vm_dbg(&xe->drm,
 				       "REMAP:SKIP_PREV: addr=0x%016llx, range=0x%016llx",
 				       (ULL)op->remap.start, (ULL)op->remap.range);
+			} else {
+				xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
 			}
 		}
 
@@ -2414,13 +2462,16 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct xe_exec_queue *q,
 				vm_dbg(&xe->drm,
 				       "REMAP:SKIP_NEXT: addr=0x%016llx, range=0x%016llx",
 				       (ULL)op->remap.start, (ULL)op->remap.range);
+			} else {
+				xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
 			}
 		}
+		xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
 		break;
 	}
 	case DRM_GPUVA_OP_UNMAP:
 	case DRM_GPUVA_OP_PREFETCH:
-		/* Nothing to do */
+		xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
 		break;
 	default:
 		drm_warn(&vm->xe->drm, "NOT POSSIBLE");
@@ -3243,11 +3294,16 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 		goto unwind_ops;
 	}
 
+	err = xe_vma_ops_alloc(&vops);
+	if (err)
+		goto unwind_ops;
+
 	err = vm_bind_ioctl_ops_execute(vm, &vops);
 
 unwind_ops:
 	if (err && err != -ENODATA)
 		vm_bind_ioctl_ops_unwind(vm, ops, args->num_binds);
+	xe_vma_ops_fini(&vops);
 	for (i = args->num_binds - 1; i >= 0; --i)
 		if (ops[i])
 			drm_gpuva_ops_free(&vm->gpuvm, ops[i]);
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index f6ea4df64e79..f5d740dcbba3 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -22,6 +22,7 @@ struct xe_device;
 struct xe_sync_entry;
 struct xe_user_fence;
 struct xe_vm;
+struct xe_vm_pgtable_update_op;
 
 #define XE_VMA_READ_ONLY	DRM_GPUVA_USERBITS
 #define XE_VMA_DESTROYED	(DRM_GPUVA_USERBITS << 1)
@@ -219,6 +220,13 @@ struct xe_vma_ops {
 	struct xe_sync_entry *syncs;
 	/** @num_syncs: number of syncs */
 	u32 num_syncs;
+	/** @pt_update_ops: page table update operations */
+	struct {
+		/** @ops: operations */
+		struct xe_vm_pgtable_update_op *ops;
+		/** @num_ops: number of operations */
+		u32 num_ops;
+	} pt_update_ops[XE_MAX_TILES_PER_DEVICE];
 };
 
 struct xe_vm {
-- 
2.34.1
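
For reference, the resulting flow in xe_vm_bind_ioctl() after this patch,
condensed from the diff above (a sketch, not compilable code; unrelated
arguments and error handling are elided with "..."):

	/* parse counts the needed ops per tile via
	 * xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask) */
	err = vm_bind_ioctl_ops_parse(vm, q, ops[i], ..., &vops, ...);
	...
	/* size each tile's pt_update_ops[i].ops array from those counts */
	err = xe_vma_ops_alloc(&vops);
	if (err)
		goto unwind_ops;

	/* unchanged here; later patches in the series fill in and submit
	 * the allocated operations */
	err = vm_bind_ioctl_ops_execute(vm, &vops);

unwind_ops:
	...
	xe_vma_ops_fini(&vops);	/* frees the arrays on success and error */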